
Viral entity

Regulating social media content

Luv Puri | Published 03.02.21, 01:33 AM

On January 3, 2020, a YouTube video went viral showing an aggrieved, middle-aged, stocky Imran Chishti screaming into his smartphone, asking his friends to use social media platforms to mobilize a crowd outside Nankana Sahib, the site near Lahore, Pakistan, where Guru Nanak was born. The message spread like wildfire and a mob gathered in response to Chishti’s incitement in the name of religion. Chishti had tried to provoke the majority community against the Sikhs, a minority, over a family dispute. The police dispersed the mob after a few hours.

From the micro to the macro level, social media platforms have often been instrumentalized to mobilize violent mobs against ethnic, religious and racial minorities. However, after the attack on the Capitol in Washington DC on January 6, which the president of the United States of America, Joseph R. Biden, called an “insurrection”, the issue of regulating social media content has acquired urgency. Global social media giants such as Twitter and Facebook suspended the accounts of the former president, Donald Trump, who had infamously spouted online canards against ethnic and racial minorities for almost a decade, apart from being a source and amplifier of hate speech and fake news. Apple, Google and Amazon acted against Parler, a micro-blogging website widely popular among right-wing extremists and conservatives in the US, by withdrawing their digital operational support, thereby practically paralyzing the site.


Some viewed the actions of the social media companies as opportunistic since they coincided with the new, post-electoral reality in the US: the Democratic Party had gained control of the executive and both Houses of the legislature. President Biden is seen to be more sensitive to the argument that unfiltered social media messaging exacerbates social and racial tensions, and he had reportedly called for revoking Section 230 of the Communications Decency Act, 1996, which grants social media companies immunity from liability for third-party content hosted on their platforms while allowing them the flexibility to moderate it. (In the past, Trump and some Republicans had also expressed concerns about Section 230 since they viewed the moderating practices of social media companies as discriminatory.)

The use of social media platforms such as Facebook to inflame passions against vulnerable minorities, culminating in large-scale killings, is well documented. The Myanmar military’s disproportionate response to the attacks by militants of the Arakan Rohingya Salvation Army on more than 30 police posts in 2017 is a classic example of Facebook being weaponized by the Tatmadaw to wage an anti-Rohingya propaganda campaign. At least 6,700 Rohingyas, including an estimated 730 children under the age of five, were killed in the violence, according to Médecins Sans Frontières. More than 700,000 Rohingyas had to flee Rakhine in Myanmar for neighbouring Bangladesh as the military, amid a nationwide frenzy against the Rohingyas, went on a rampage and attacked Rohingya-inhabited villages.

In 2018, a fact-finding mission set up by the Geneva-based UN Human Rights Council found that in Myanmar “Facebook has been a useful instrument for those seeking to spread hate, in a context where for most users Facebook is the Internet.” The 440-page report, compiled after 15 months of examination, gave detailed examples of how the Rohingya community was demonized and recorded many cases “where individuals, usually human rights defenders or journalists, become the target of an online hate campaign that incites or threatens violence”. In its response, Facebook acknowledged the problem and noted that it was focused on overcoming the technical challenges involved in targeting hate speech.

Forewarnings about social media’s potential to inflict greater social harm in Myanmar were made repeatedly, including by leaders of the tech industry. As early as 2013, in their book, The New Digital Age: Reshaping the Future of People, Nations and Business, Eric Schmidt, the former CEO of Google, and Jared Cohen, a policy wonk, had drawn attention from within the industry to the role of social media in exacerbating tensions during the communal violence in Rakhine in May 2012.

Social media and technology companies were never short of the resources needed to explore ways of meeting these multi-dimensional challenges; yet they failed to act. The only plausible explanation is the profit motive: these companies have consistently ignored the catastrophic potential of online malfeasance in the interest of scaling up their online client base to garner ever more digital advertising revenue.

The regulation debate is likely to acquire greater urgency in the mainstream, including within Biden’s team and the legislative branch in the US, because the incident of January 6 has shaken the political and security establishment. Those in favour of a greater government role in regulation point out that giving censorship and gatekeeping powers to a young tech workforce is inherently anti-democratic. It can lead to arbitrary decisions that curb legitimate democratic dissent, particularly in authoritarian political contexts where other sources of information are unavailable. There is also the argument that social and political problems cannot be fixed by technological and legal means alone.

Drawing on the history of similar debates around the regulation of content on television, in movies and in video games, a recent Harvard Business Review article, “Social Media Companies Should Self-Regulate. Now.”, argued that companies risk creating a “tragedy of the commons” when they put their short-term, individual self-interests ahead of the good of the consuming public or the industry, destroying, in the long term, the very environment that made them successful in the first place. The article, co-authored by three academics, concluded that “going forward, governments and digital platforms will also need to work together more closely. Since more government oversight over Twitter, Facebook, Google, Amazon, and other platforms seems inevitable, new institutional mechanisms for more participative forms of regulation may be critical to their long-term survival and success.”

One cannot adopt a one-size-fits-all approach because the challenges in developing countries are more multi-layered than those in advanced countries. Mass educational and economic deprivation, combined with deep-seated ethnic, religious, economic and social fault lines, creates a toxic ecosystem that can be exploited easily. This is coupled with civil services that lack the institutional autonomy of their counterparts in countries like the US. Law-enforcement authorities, poorly trained in digital forensics and, sometimes, compromised, have been complicit in the inaction on a number of reported social media threats, particularly those against women. These deficiencies are compounded by the ideological biases and proclivities of the decision-making staff of social media companies operating in the developing world.

The practical realities of the Global South, particularly in multi-religious, multi-ethnic set-ups, underscore the need for rigorous artificial intelligence filters and protocols as well as the scaling up of a corps of more knowledgeable, efficient and context-sensitive content moderators. One possible solution is for the senior staff members responsible for supervising social media content moderation to come from neutral countries so as to avoid perceived or real conflicts of interest. The lessons of the last decade, which has coincided with a stupendous global growth in social media users, indicate that the status quo is unsustainable; creative, participatory, bottom-up, customized and grounded solutions must be found to evolve a democratic, fair and efficient social media regulatory framework.

Meanwhile, a year after the incitement and the international outcry over the attempted attack on the Sikh shrine, an anti-terrorism court in Pakistan reportedly sentenced the three accused, including Imran Chishti, to up to two years’ imprisonment after trying them on terrorism and blasphemy charges.
