
Bots and bias

The advent of chatbots like ChatGPT is making the task much harder because they can produce 'natural-sounding text at the click of a button, essentially automating the production of misinformation'

Sevanti Ninan Published 19.06.23, 04:51 AM


With chatbots proliferating, the moral panic over the disinformation they could generate is growing. Wired reported earlier this year on how regional elections in Spain, held in May, saw a media house in Madrid make elaborate arrangements to fact-check politicians’ statements while also deploying a team to debunk the disinformation being put out (https://www.wired.co.uk/article/fact-checkers-ai-chatgpt-misinformation). The advent of chatbots like ChatGPT is making the task much harder because they can produce “natural-sounding text at the click of a button, essentially automating the production of misinformation.” Fact-checkers are now developing their own language models to counter this.

Are elections — whichever country they might be held in — increasingly going to herald the prospect of the use of technology which seeks to queer the pitch? The acquisition of WhatsApp by Facebook in February 2014 and its accelerated growth in India thereafter (enabled by the spread of smartphones and internet connections via Jio with its inaugural offer of free internet) meant that the messaging app had over 400 million users in India by July 2019. The elections that year saw WhatsApp groups being assiduously created by the Bharatiya Janata Party’s foot soldiers and hired hands in the lead-up to the polls.


It enabled single-minded messaging over mobile phones and social media, helping reshape the relationship between the media and the practice of politics. Amit Malviya, the head of the BJP’s IT cell, predicted in an interview with the Economic Times in August 2018 that the upcoming elections would be fought on the mobile phone. He coined the term, “WhatsApp elections”. In January 2019, a Time magazine report described the operation (https://time.com/5512032/whatsapp-india-election-2019/).

The IT cell treated the Karnataka elections in 2018 as a trial run. The Economic Times article said that BJP workers and social media volunteers created anywhere between 23,000 and 25,000 WhatsApp groups for their outreach. These groups circulated carefully crafted propaganda videos discrediting the Opposition as well as messaging meant to mobilise voters. All of this was masterminded by an IT cell set up in 2012 with the twin goals of image building and image destruction.

What has changed today with the rapid proliferation of Generative Artificial Intelligence is that this technology could bring scale to disinformation. When chatbots amplify human bias they become powerful weapons.

Earlier this year, The New York Times looked at how disinformation researchers were using ChatGPT “to produce clean, convincing text that repeated conspiracy theories and misleading narratives.” Companies such as NewsGuard are thus emerging to track online misinformation. And they have a challenge on their hands because false narratives can now be crafted at scale.

How AI can be regulated will also become a minefield.

Earlier this month, the Delhi-based Media Foundation had a group of journalists dissect how Generative AI could upscale the use of AI in journalism and the challenges this would bring. There was some anxiety about the extent of misinformation that could result, since such content would be more believable. The scale of fact-checking required would rise exponentially because the news media is in the business of trust. Fact-checking must cover not only the text and illustrations possibly generated by AI but also the believability of the claims made by political players as we head into a general election. An editor said that fact-checking teams were already overwhelmed. And a tech journalist spoke of AI bias and the need to play around with the technology to see how to unbias it.

In January this year, a report titled “Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations” was published (https://arxiv.org/pdf/2301.04246.pdf). The euphemism for disinformation is ‘influence operations’, and when a chatbot gets into the act, it is an automated influence operation. Researchers then evolve theories of mitigation to nullify the harm the bot can do. If the propagandist is using a large language model, mitigation involves the following: AI developers need to build models that are more fact-sensitive; developers have to spread radioactive data to make generative models detectable; and governments would have to impose restrictions on data collection as well as put in place access controls on AI hardware.

If propagandists require reliable access to such language models, AI developers have to develop new norms around model release and have those accepted. If disinformation acquires scale, fact-checkers will have to do the same.

Is this then what future elections are going to be about? Groups of people deploying computing power at scale to discredit opponents, and other groups acquiring greater computing power to nullify their efforts?

Once the disinformation game shifts from a no-brainer platform like WhatsApp to AI and its language models, deploying disinformation in Indian languages could hit a cost barrier. Language translation application programming interfaces will come into play; most of them have a pay-as-you-go plan and, for many Indian languages, they are expensive to use.

In theory, chatbots could add more power to Amit Malviya’s elbow. But deploying them to reach the wider electorate will require more elaborate effort than before.

And for the rest of us, social media and trolls will become passé in terms of something to fulminate about. Wait till Generative AI begins to unleash its disinformation potential.

Sevanti Ninan is a media commentator and was the founder-editor of TheHoot.org
