
Rogue algorithms

It won’t be long before India sees its own Gonzalez- and Taamneh-style legal actions

Ashish Khetan Published 09.03.23, 04:30 AM

What legal responsibilities do social media intermediaries (SMIs) have when they direct users to extremist, violent, and unlawful content through their algorithms? Can SMIs be held criminally liable if it is shown that these algorithms helped foment riots or terrorist attacks? These are some of the questions that the US Supreme Court is currently considering in Gonzalez vs Google and Twitter vs Taamneh, two cases tackling tech platforms’ liability for recommendation algorithms.

Nohemi Gonzalez perished in an Islamic State-backed terrorist strike in Paris in 2015. Her family claimed that YouTube and its parent company, Google, encouraged terrorism by pushing ISIS recruitment videos to users. The Twitter vs Taamneh lawsuit arises from another ISIS-linked terrorist attack, in Istanbul in 2017, that killed 39 people. Invoking the Anti-Terrorism Act, the family of one of the victims has sued Twitter, Google and Facebook, alleging that by hosting and directing people to ISIS content, these online platforms helped instigate the attack. The two cases challenge the protections granted to online platforms by Section 230 of the Communications Decency Act.


In India, under the Information Technology Act, 2000, an intermediary is not liable for the third-party information that it holds or transmits. However, to claim such exemption, it must comply with the due diligence standards under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Under the 2021 rules, SMIs are expected to make “reasonable efforts” to prevent users from hosting illegal content on their platforms. After receiving a complaint, they must take down any data or communication links falling within the prohibited categories of content. Additional due diligence responsibilities include appointing personnel for compliance, enabling identification of the first originator of information on their platforms, and deploying technology-based measures to identify certain types of content. But the use of algorithms to prioritise or de-prioritise particular posts is not covered by Indian IT regulations.

According to some news reports, under the forthcoming Digital India Act meant to replace the IT Act, 2000, the Indian government is considering making online platforms accountable for the algorithms they deploy to tailor content based on the browsing history and profiles of individual users. But the idea seems restricted to preventing the misuse of Indian citizens’ data. What requires serious policy and legal debate is the criminal liability of platforms whose algorithms amplify harmful content.

Indian authorities have a challenge on their hands. At present, India has 692 million active internet users: 351 million in rural India and 341 million in urban India. This figure is estimated to grow to 900 million by 2025. India is also immensely diverse, with 22 officially recognised languages and nine major faiths. Foreign SMIs have been found wanting in both their intent and their capability to navigate these linguistic, cultural, and religious complexities. The platforms have, knowingly or unknowingly, aided the dissemination of fake news, hate speech and advocacy of violence. It won’t be long before India sees its own Gonzalez- and Taamneh-style legal actions.

Unlawful activity under the Unlawful Activities (Prevention) Act means “any action taken” to support a claim of cession of a part of the territory of India or to disclaim, question, or disrupt the sovereignty of India. Don’t algorithms recommending ‘seditious’ posts or videos constitute unlawful activity? If algorithms direct users to videos propagating terrorism or showing how to assemble bombs, an argument can be made that the platform has aided and abetted an act of terror.

Internet platforms shouldn’t be forced to comply with onerous filtering and screening requirements; that would cripple the internet. But big tech cannot receive a free pass either. Finding a standard that holds a platform accountable for its conduct is crucial.

Ashish Khetan is Associate Professor of Law, O.P. Jindal Global University
