Ahead of general elections, US-based artificial intelligence research organization OpenAI has said it will not allow its AI to be used for political campaigning and will continue working to prevent misleading 'deepfakes' and chatbots impersonating candidates.
In a blog post, the Sam Altman-led firm said it has made a number of policy changes to prevent its generative AI technologies, such as ChatGPT and Dall-e, from undermining the 'democratic process' during upcoming elections.
"As we prepare for elections in 2024 across the world's largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency," it said.
Besides India, the US and UK will also go to the polls this year.
"Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process," OpenAI said. "We want to make sure that our AI systems are built, deployed, and used safely." Listing out measures it is taking to prevent abuse, OpenAI said it is working to "anticipate and prevent relevant abuse - such as misleading 'deepfakes', scaled influence operations, or chatbots impersonating candidates".
"Prior to releasing new systems, we red team them, engage users and external partners for feedback, and build safety mitigations to reduce the potential for harm," it said, adding it has been iterating on tools to improve factual accuracy, reduce bias, and decline certain requests.
For instance, Dall-e has guardrails that decline requests to generate images of real people, including candidates.
Stating that it was still working to understand how effective its tools might be for personalized persuasion, OpenAI said, "Until we know more, we don't allow people to build applications for political campaigning and lobbying."

"People want to know and trust that they are interacting with a real person, business, or government. For that reason, we don't allow builders to create chatbots that pretend to be real people (e.g., candidates) or institutions (e.g., local government)," it said.
It also does not allow applications that deter people from participating in democratic processes.
"Better transparency around image provenance - including the ability to detect which tools were used to produce an image - can empower voters to assess an image with trust and confidence in how it was made," it said.
OpenAI said it is experimenting with a provenance classifier, a new tool for detecting images generated by Dall-e. "Our internal testing has shown promising early results, even where images have been subject to common types of modifications. We plan to soon make it available to our first group of testers, including journalists, platforms, and researchers, for feedback."

Also, ChatGPT is integrating with existing sources of information, and users will start to get access to real-time news reporting globally, including attribution and links.
"Transparency around the origin of information and balance in news sources can help voters better assess information and decide for themselves what they can trust," it said. "We look forward to continuing to work with and learn from partners to anticipate and prevent potential abuse of our tools in the lead up to this year's global elections."
Except for the headline, this story has not been edited by The Telegraph Online staff and has been published from a syndicated feed.