Vote to counter fake news

Artificial intelligence companies have been at the vanguard of developing the transformative technology

Tiffany Hsu, Cade Metz Published 11.03.24, 07:20 AM
istock.com/sorbetto

Artificial intelligence companies have been at the vanguard of developing the transformative technology. Now they are also racing to set limits on how AI is used in a year stacked with major elections around the world.

Recently, OpenAI said it was working to prevent abuse of its tools in elections, partly by forbidding their use to create chatbots that pretend to be real people or institutions. Google also said it would restrict its AI chatbot, Bard, from responding to certain election-related prompts “out of an abundance of caution”. And Meta promised to better label AI-generated content on its platforms so voters could more easily discern what material was real and what was fake.

A few days ago, 20 tech firms — including Adobe, Amazon, Anthropic, Google, Meta, Microsoft, OpenAI, TikTok and X — signed a voluntary pledge to help prevent deceptive AI content from disrupting voting in 2024. The accord, announced at the Munich Security Conference, included the companies’ commitments to collaborate on AI detection tools and other actions, but it did not call for a ban on election-related AI content.

At least 83 elections, the largest concentration for at least the next 24 years, are anticipated this year, according to Anchor Change, a consulting firm.

How effective the restrictions on AI tools will be is unclear, especially as tech companies press ahead with increasingly sophisticated technology. Recently, OpenAI unveiled Sora, which can instantly generate realistic videos. Such tools could be used to produce text, sounds and images in political campaigns, blurring fact and fiction and raising questions about whether voters can tell what content is real.

AI-generated content has already popped up in US political campaigning, prompting regulatory and legal pushback. Last month, residents of New Hampshire, US, received robocalls, in a voice most likely generated artificially to sound like President Joe Biden, dissuading them from voting in the state primary.

“We are behind the eight ball here,” said Oren Etzioni, a professor at the University of Washington, US, who specialises in AI and a founder of True Media, a nonprofit working to identify disinformation online in political campaigns. “We need tools to respond to this in real time.”

Anthropic said it was planning tests to identify how its Claude chatbot could produce biased or misleading content. These “red team” tests, which are often used to break through a technology’s safeguards to better identify its vulnerabilities, will also explore how the AI responds to harmful queries, such as prompts asking for voter-suppression tactics.

OpenAI said that it planned to point people to voting information through ChatGPT as well as label AI-generated images.

Synthesia, a startup with an AI video generator that has been linked to disinformation campaigns, also prohibits the use of its technology for “newslike content”, including false, polarising, divisive or misleading material. Stability AI, a startup with an image-generator tool, said it prohibited the use of its technology for illegal or unethical purposes, worked to block the generation of unsafe images and applied an imperceptible watermark to all images.

The biggest tech firms have also weighed in beyond the joint pledge in Munich.

Meta said it was collaborating with other firms on technological standards to help recognise when content was generated with AI. Before the EU’s parliamentary elections in June, TikTok said in a blog post that it would ban potentially misleading manipulated content and require users to label realistic AI creations.

Google said in December that it, too, would require video creators on YouTube and all election advertisers to disclose digitally altered or generated content. It was preparing for 2024 elections by restricting its AI tools, like Bard, from returning responses for certain election-related queries.

“Like any emerging technology, AI presents new opportunities as well as challenges,” Google said. AI can help fight abuse, it added, “but we are also preparing for how it can change the misinformation landscape”.

NYTNS
