In my column last year, titled “Countries and companies”, I wrote about how tech giants like Facebook were becoming like countries, taking on the mantle of improving the lives of billions of people. In the surest sign yet of this development, Facebook announced the creation of an independent board that will hear cases brought by users against Facebook itself and hand down binding decisions. While the board has an innocuous-sounding official name, the Independent Oversight Board, Mark Zuckerberg, in an interview two years ago, had imagined a body such as this as the “Supreme Court” for Facebook.
A Supreme Court for a private organization might seem like a category mistake. After all, courts are institutions of the State, handing down decisions backed by law. But creating an independent judicial forum and giving it an evocative daak naam, a pet name, is not oversight. It is a daring play by Facebook to recast itself as a responsible global power with the trappings of a nation-state.
The Board itself is designed to perform two roles. First, in relation to Facebook’s content moderation policies, it hears cases brought by users and by Facebook itself. So if you, as a user, are dissatisfied that your post glorifying Masterda’s audacious armoury raid in Chittagong was taken down because Facebook thought it amounted to the glorification and spread of terror, you can apply to the Board for a remedy. Like a court, a panel of the Board will hand down a decision that Facebook will be bound to follow. Second, on policy questions, such as how Facebook’s content moderation policies are formulated in the first place, it can provide non-binding policy guidance, which Facebook has voluntarily agreed to consider.
Bold self-regulation of this kind is always welcome. However, it is what this Board appears to do, but will actually not do, that makes it seem like a giant public relations exercise. First, in relation to the Board’s binding decisions, its Charter contains an innocuous clause: these decisions will be subject to a compliance check by Facebook against the laws of the land before being implemented. This is frankly befuddling. One would have imagined that the least a Supreme Court-like body, co-chaired by a former federal judge from the United States of America, would be capable of is ensuring that its own decisions do not violate the law. It is likely that, despite its proclamations, the ultimate call on whether to actually implement hard decisions will lie with Facebook itself.
In a similar vein, although the Board has the flavour of an independent adjudicatory institution, cut through the accoutrements and it appears less so. Decisions will be taken by smaller panels of the Board, whose members will remain anonymous, before being submitted to the full Board. There is little as repulsive as a judicial forum where you do not know exactly who the judges are.
Even the composition of the Board, as Kara Swisher of The New York Times describes it, is “impressively impressive... which is why it is also nonoffensively nonoffensive”. While it has a galaxy of respected individuals who are thoughtful, humane and credible, the jury is still out on its ability to function independently. This is not because of the Charter of the Board itself or the document creating an independent trust to control it, both of which scream independence in every sentence. It is rather because Facebook must do much more to convince the world that it no longer thinks like a social network for students in a Harvard dormitory, with nothing more at stake than who pays for the beers at the frat party. Especially since the ultimate implementation of the Board’s decisions still rests with Facebook, its success will depend critically on whether Facebook has, indeed, grown up as an organization. The proof of the pudding will be in the eating.
This doubt exists for many reasons, prominent amongst them Facebook’s record on content moderation. The big question in this regard, the one that makes Facebook State-like in the first place, is this: how exactly does Facebook make and apply its content moderation policies? The rules of posting on Facebook, its community standards, used to be its best-kept secret till the Cambridge Analytica scandal. After that, the company went on a charm offensive, consulting widely in developing and updating these standards. But how these standards are applied remains opaque. According to a persistent stream of news reports, the actual task of determining whether content spreads terror or hate in violation of the community standards, and thus requires remedy, is performed not by Facebook itself but by contractors to whom it is outsourced. This task is roughly analogous to what the police force does in every country. An outsourced police force seems like an abhorrent idea.
This is where real reform is needed: in ensuring transparency and fairness in how Facebook applies its rules. Paying content moderation contractors $28,800 annually, approximately one-tenth of what an average Facebook employee makes (data from The Verge, 2019), while setting up a ‘Supreme Court’ may become a case of pulling the wool over the eyes of the world. At least for now, in the absence of such real reform, the company will have to do more to convince everyone that it has grown up sufficiently.
It would, however, be unfair to single out Facebook as the only tech giant robing itself in the trappings of the State. Its more grown-up compatriots, Apple and Google, are going about the same task, albeit much more subtly and smartly. As a virtual duopoly in smartphone operating systems (iOS and Android), Apple and Google recently announced a joint privacy-protecting framework for all contact-tracing applications, which detect the spread of the coronavirus and alert those at risk. Applications like Aarogya Setu and other government-owned public health applications in different countries would have to comply with Apple’s and Google’s privacy, security and data-control requirements, not the other way around. Unlike Facebook’s bold public relations play to appear like a responsible State, Apple and Google are actually reversing the sovereign equation, dictating rules to countries without much fuss.
The shrinking of the State from the public sphere, the rapid growth of the internet, the inability of countries to keep up with the innovation of tech giants, and the tech giants’ own desire to improve people’s lives through technology are four powerful currents whose confluence has led these companies to take steps towards becoming sovereign-like. If my arguments above appear far-fetched, as they might, since a world with 195 tech giants instead of 195 countries seems far away, consider this: in 2010, Nick Clegg was appointed deputy prime minister of the United Kingdom, the second most powerful man in a country that, not very long ago, had been the most powerful in the world for two centuries. In 2020, merely ten years later, Clegg has jumped ship, although again as a deputy, this time to Zuckerberg at Facebook. The times they are a-changing.
The author is Research Director, Vidhi Centre for Legal Policy. Views are personal