Before the advent of Facebook, the world knew that unhindered human free speech could be deeply problematic when amplified; it did not know that it could be hugely lucrative. The latter finding is what the latest whistleblower is seeking to document and underscore: that the artificial intelligence at the core of Facebook’s decision-making is programmed to stir up baser emotions, which then draw more views and more advertising.
Frances Haugen’s assertion seems believable when the company’s revenue figures are scrutinized. The data scientist, who worked in the ‘civic misinformation’ section of the social media platform, has been saying that Facebook adopts the strategies it does because they help the company make more money, even while actioning “no more than 3-5 per cent of the hate encountered and about 6-tenths of 1% of V & I [violence and incitement]”.
Facebook was founded in 2004. Between 2010 and 2020, its revenue rose from $1,974 million to $61.9 billion, growing consistently year on year. Over 90 per cent of this revenue comes from its advertising business; Instagram alone was responsible for an estimated 28 per cent of all Facebook revenue in 2020. Other social media companies in the billion-dollar league are way behind, with LinkedIn at $8 billion and Twitter at $3.7 billion in 2020.
Through the period when its revenues have grown unhindered, there has been constant controversy. There was the Cambridge Analytica scandal, when it became known that Facebook had allowed its data to be mined for political targeting by a data analytics firm that was behind Donald Trump’s 2016 election campaign and was also involved in the United Kingdom’s European Union referendum. In 2018, the minority of the US House Intelligence Committee made publicly available evidence of how the Russian government’s placing of advertisements on Facebook amounted to targeted interference in the 2016 elections. Despite this, Facebook did little to make its platform safer for the 2020 presidential election.
As John Tye, the attorney of the Facebook whistleblower, Frances Haugen, told The Washington Post, Facebook staff made recommendations during the elections last November, warning that a flow of conspiracy theories needed to be curtailed. Zuckerberg did not agree, he says. Within days of the election, Facebook disbanded its civic integrity team. Then came the insurrection on Capitol Hill on January 6, in the run-up to which “Facebook was letting things happen.”
Worrying about the ills of Facebook is for the few, not the many. For the two billion-plus who use it, it is a benign and indispensable platform. But to law-makers, journalists and internet activists, its consequences can be malign and need to be constrained.
Earlier this month, as journalists from crusading news media in Russia and the Philippines were awarded the Nobel Peace Prize, one of them, Maria Ressa, accused Facebook of spreading lies “laced with anger and hate”. She also said that Facebook had become the world’s largest distributor of news without facts and “yet it is biased against facts, it is biased against journalism”. As this paper has reported, Ressa has been the target of intense social media hate campaigns by supporters of President Rodrigo Duterte which, she said, were aimed at destroying her and the credibility of her publication, Rappler. Rappler’s reporting included a series of investigative reports on how the government ‘weaponized’ the internet, using bloggers on its payroll.
India has been described as Facebook’s biggest market by number of users. Partnering with Jio’s platforms, as Mark Zuckerberg has done, makes it even more lucrative. So if staying in the market means complying with the Indian government’s intermediary guidelines announced in February this year, the company is happy to play ball. Since July, Facebook, along with Google and Twitter, has been rolling out compliance reports: how many users reported problems, how many pieces of content were proactively actioned, and in how many categories.
But a country does not gain from such reporting unless the data are mined. How many journalists here have begun doing the kind of data journalism that dives deep into those metrics to see what picture emerges? Are users more protected now because 2.6 million pieces of Facebook content have been ‘actioned’ in the category of adult nudity and sexual activity? Or because 324,000 pieces related to hate speech have been? Who will zero in on the latter? Doing so would be crucial if we want to force the platform to dial down the hate.
Back in 2018, as Zuckerberg testified before Congress, a group of US academics offered some disarming advice in The Conversation on how Facebook could reinvent itself. It could act like a media company and take responsibility for the content it publishes and republishes. It could deploy both human and artificial intelligence to categorize and label the information on its platform. It could start competing to provide the most accurate news instead of the most click-worthy. And, finally, if it wanted to keep making money from user data, it could consider paying users for their data: $1,000 a year for the average social media user, perhaps?
After the whistleblower’s testimony earlier this month, Mark Zuckerberg was at pains to describe, in his response on Facebook, how both human and artificial intelligence were indeed being harnessed to scrutinize content and keep users safe. “If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place?” Frances Haugen has a simple answer to that: you create it to demonstrate concern, and then disregard the data that emerge if they go against your business interest. As for getting Facebook to pay users for the data it monetizes, that is truly fanciful thinking.
It is also highly unlikely that Facebook will start competing to provide the most accurate news instead of the most click-worthy; that would destroy its business model. The best the world can hope for at this point is pressure on it to skew its algorithms differently.
Sevanti Ninan is a media commentator and was the founder-editor of TheHoot.org