
Detoxify content

Content moderation is important because it can help platforms independently assess whether content takedown requests issued under Indian law meet the required legal threshold

Jade Lyngdoh Published 20.09.22, 03:19 AM

Social media platforms like Facebook, Instagram or Twitter establish rules and standards that all users must follow, both to prevent misuse of the platform and to keep it a safe place. To enforce these rules, platforms decide what content can and cannot remain on the platform. The process through which platforms make these decisions is referred to as ‘content moderation’.

Content moderation is important because it can help platforms independently assess whether content takedown requests issued under Indian law meet the required legal threshold. Platforms cannot judge whether reported content is unlawful unless they are able, in the first instance, to analyse that content independently. This assessment upholds the freedom of expression of individuals who use social media. In the recent past, we have seen platforms demonstrate a willingness to uphold this right: Twitter has decided to challenge a number of takedown requests in an Indian court.

Additionally, content moderation ensures that social media is not misused to harm society. We have seen this happen before, most notably in Myanmar, where social media played a significant role in the Rohingya genocide.

Platforms use two main methods to moderate content: staff who act as content moderators and automated tools built on artificial intelligence. Two principal concerns arise from this arrangement, one linked to the use of automated tools and the other to the practice of delegating content moderation to third-party businesses. Platforms must address both in a manner that is transparent and inclusive.

While automated tools can screen content published in major languages, they struggle to screen content published in lesser-spoken languages. Currently, we do not understand the full extent of this concern since platforms are not completely transparent about the languages that their processes support. According to available data, Twitter supports ‘multiple languages’ in content moderation whereas Meta, which owns Facebook and Instagram, supports 70 languages. In India, Meta supports 20 Indian languages, though which languages are included remains unknown. When the number of major languages spoken in the country is compared with the number of languages in which content is moderated, it becomes clear that platforms must expand support to as many languages as possible.

Automated tools lack the capacity to effectively grasp the context behind content shared on social media. Platforms themselves have previously acknowledged the necessity of understanding such context while moderating content. To understand this better, consider the following hypothetical situation involving the word ‘momo’. Commonly used to refer to dumplings, the word can also be used as a racist slur to target communities from Northeast India. Automated tools would fail to discern whether someone is alluding to food or using a racial slur. This underlines the significance of regional and linguistic context in content moderation, a role that can only be performed by human content moderators.
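To see why this is so difficult for machines, consider a minimal, purely hypothetical sketch in Python of a keyword-based filter. The blocklist entry and the example posts are invented for illustration; this is not how any platform's actual system works, but the limitation it shows is the same one described above.

```python
# Hypothetical sketch of a naive keyword-based filter, for illustration only.
# Real moderation pipelines are far more sophisticated, but a system that
# only matches words still cannot see the context around them.

FLAGGED_TERMS = {"momo"}  # invented blocklist entry for this example


def naive_filter(post: str) -> bool:
    """Flag a post if it contains any blocklisted term, ignoring all context."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & FLAGGED_TERMS)


# Both posts trigger the same flag, even though only one is abusive:
print(naive_filter("Had the best momo at the street stall today!"))  # True
print(naive_filter("Go back to where you came from, momo."))         # True
```

A human moderator reading the two posts would treat them very differently; a filter that only matches words cannot tell them apart.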

Platforms continue to rely on human content moderators to fill the gaps exposed by such flaws. It would be a mistake, however, to believe that this alternative approach is without problems. Research has indicated that the delegation of content moderation by platforms to third-party businesses has affected the quality of content moderation. In a separate instance before the Oversight Board, we saw how Meta’s failure to translate internal guidelines into lesser-spoken languages for content moderators resulted in enforcement errors. If platforms are serious about addressing the shortcomings of content moderation, they should focus on the problems confronting content moderators.

(Jade Lyngdoh is a Meta India Tech Scholar at National Law University, Jodhpur. The views expressed are personal and not necessarily those of Meta)
