
AI will take over the world, but first it has to stop hallucinating

A hallucination is when generative AI spews out false or misleading information as fact. These AI hallucinations highlight the pitfalls of a technology that is set to change the world

Aishani Misra Published 05.08.24, 02:06 PM

Plausible-sounding hallucinations from generative AI pose a problem across the industry Shutterstock

We don’t know if androids dream of electric sheep, but we do know that artificial intelligence hallucinates. Meta has blamed “hallucinations” for the recent instance where the Meta AI assistant denied that the assassination attempt on former US President Donald Trump took place.

Hallucination is when generative AI spews out false or misleading information as fact.


A "fact check" label on Meta’s social media platforms also suggested that the now iconic image of Trump with a raised fist (taken during the shooting) was a fake. In light of the upcoming US election the errors were seen as politically charged.

The artificial intelligence revolution is poised to distort the lines between truth and fiction in our everyday lives. Generative chatbots are a prime example, often providing responses which sound reasonable, but are contradictory, made-up or plain wrong.

A recent study by data analytics startup GroundTruthAI examined five large language models from Google and OpenAI, and found that they provided incorrect information 27 per cent of the time when asked about voting and the 2024 US election.

Not long after ChatGPT’s launch, OpenAI CEO Sam Altman posted on X (formerly Twitter): “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now.”

AI has arrived in India as well, and not just through customer service chatbots. At least two Indian publications are experimenting with AI to rewrite agency reports and headlines. The Telegraph is not among them.

Perhaps the most crucial problem calling the legitimacy of this technology into question is this kind of plausible-sounding error: the AI hallucination.

The term “hallucination” has an obvious anthropomorphic ring, and it has proved contentious, since large language model (LLM) tools are not sentient and cannot understand what words mean.

“Human society as a whole has a tendency to anthropomorphise all experiences,” says Lipika Dey, professor of Computer Science at Ashoka University and former chief scientist at Tata Consultancy Services (TCS).

“If I look at the history of computers, there are many such terms, some of which can be regarded as very problematic now, such as ‘blacklist’ and ‘whitelist,’ ‘aborting’ and so on. They were used since they uncannily matched a human experience.”

There is indeed a flurry of such computing phrases, from “handshake,” “footprint” and “nibble” to the Linux commands “touch,” “finger” and “sleep”.

Hallucination is basically an error, Dey explains.

“AI finds the next element to predict or generate based on the immediate past context, so it uses a probabilistic association to predict the next word. Since it has learnt from very large data sets, in many situations it is correct, but there is no guarantee. Whenever we ask something already present in its data, probabilistically it will generate something that aligns with facts. There is, of course, a fact checker, but that is also an automated tool. Machine learning-based technologies like these learn to predict the feasibility of something based on how often it appears in the data they are trained on,” Dey says.
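
In rough terms, the mechanism Dey describes can be sketched in a few lines of Python. The snippet below is purely illustrative, with invented toy data rather than any company’s actual model: it picks the next word by sampling from counts of how often words followed a given context.

    import random

    # Toy "training data": how often each word followed a two-word context.
    # The counts here are invented purely for illustration.
    next_word_counts = {
        ("the", "president"): {"said": 40, "was": 30, "survived": 5, "resigned": 2},
    }

    def predict_next_word(context):
        counts = next_word_counts.get(context)
        if not counts:
            return None  # no data for this context; a real model would still produce a guess
        words = list(counts)
        weights = [counts[w] for w in words]
        # Sample in proportion to frequency: the result is usually plausible,
        # but nothing here checks whether it is actually true.
        return random.choices(words, weights=weights, k=1)[0]

    print(predict_next_word(("the", "president")))

Because the choice is probabilistic, the output is usually plausible but never guaranteed to be true, which is exactly the gap into which hallucinations fall.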

When we ask AI to generate responses based on real-time news events, there can be some obvious problems, she says.

“It is just hitting a lot of these pages and generating a sort of creative reinterpretation of the collated results.”

Amid the vast quantities of contradictory details, confusion and conspiracy theories that accompany large-scale breaking news events, such as the attempt on Trump’s life, chatbots are prone to bouts of hallucination.

“If you use AI-based techniques to check whether AI-based generation is correct or not, both of which are working on probabilistic association, then errors like hallucinations are bound to happen,” Dey says. “You ask the chatbot to generate a note based on some input, and it goes through a fact-checker which is completely automated. There is really no extra annotation. There is no internal reasoning mechanism to check the truth of the claims.”

Besides real-time events, hallucination errors are also common when there are gaps in the data. In a well-known generative AI scandal around the 2023 US case Mata v. Avianca, lawyers researched their brief using ChatGPT, which hallucinated fake judicial decisions, quotes and citations that were then cited against Avianca Airlines.

Ideally, there should be a final check from a human being, computer scientists like Dey say.

“Given the volume of requests that these tech companies like Meta handle, obviously this is not possible,” she adds. “However, there can be mechanisms to check the sensitiveness of the content. Terms like ‘American President’ and the names of various world leaders will fall on the list. Of course, there is no end to what can or cannot be sensitive, and that is another debate.”

Chatbots are typically programmed to withhold information or provide generic answers when questioned about contentious issues, such as politics. In its statement on the Trump hallucination incident, Meta admitted the unreliability of artificial intelligence platforms in tackling breaking news or “returning information in real time”.

So, can these hallucinations be regulated? Dey suggests that if the generated content contains any of these “sensitive” names or concepts, companies can bring in added verification and prevent such content from undergoing a completely automated check.
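
As a rough illustration of what such a safeguard might look like, a simple keyword screen could route anything that mentions a sensitive name to human review before release. The terms and function below are hypothetical, a sketch of the idea rather than Meta’s or any company’s actual system.

    # Invented list of "sensitive" terms, for illustration only.
    SENSITIVE_TERMS = {"american president", "donald trump", "election"}

    def route_generated_text(text: str) -> str:
        lowered = text.lower()
        if any(term in lowered for term in SENSITIVE_TERMS):
            # Flag for added, possibly human, verification.
            return "hold for extra verification"
        return "release after automated checks"

    print(route_generated_text("The American President was unharmed in the incident."))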

Given the superhuman speed and ability of our endlessly compelling generative chatbots, it is easy to forget that artificial intelligence has no way of telling fact from fiction.

Hallucinations are a clear testament to their fallibility.

“AI has become a revolutionary technology, some are comparing it to nuclear technology,” Dey says. “People are simply delegating their tasks to these AI tools. But it is the responsibility of users to do their bit and take up the final fact checks. Don't make it an end-point to your query.”
