
Meta blames ‘hallucinations’ after AI error over assassination attempt on Donald Trump

'These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems, and is an ongoing challenge for how AI handles real-time events going forward,' said Joel Kaplan, Meta VP for global policy

Mathures Paul, Calcutta | Published 01.08.24, 11:05 AM
Representational image (file picture)

Meta has blamed hallucinations after its AI assistant said that the recent assassination attempt on former US President Donald Trump didn’t happen. Artificial intelligence chatbots are said to "hallucinate" when they generate misleading responses to questions that need factual replies.

“These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems, and is an ongoing challenge for how AI handles real-time events going forward,” said Joel Kaplan, Meta VP for global policy.


The statement highlights the difficulty tech companies face in keeping AI assistants current with breaking news. Meta also explained why its social media platforms had incorrectly applied the "fact check" label to the photograph of Trump with his fist in the air, taken moments after the assassination attempt.

A doctored version of the photograph made it seem as if the Secret Service agents were smiling. Meta explained: “When a fact check label is applied, our technology detects content that is the same or almost exactly the same as those rated by fact checkers, and adds a label to that content as well. Given the similarities between the doctored photo and the original image — which are only subtly (although importantly) different — our systems incorrectly applied that fact check to the real photo, too.”
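The near-duplicate matching Meta describes is commonly built on perceptual hashing, where two images that look almost identical produce hashes that differ in only a few bits. The sketch below illustrates the idea with a minimal difference hash (dHash) over a toy pixel grid; it is an assumption-laden stand-in for the concept, not Meta's actual system, and the grids and threshold are invented for illustration.

```python
# Illustrative difference hash (dHash): each bit records whether a pixel
# is brighter than its right-hand neighbour. Visually near-identical
# images yield hashes with a tiny Hamming distance, which is how a label
# applied to one image can propagate to an almost-identical copy.

def dhash(pixels):
    """Hash a 2D grid of grayscale values (stands in for a downscaled image)."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# A toy "original" 8x9 downscaled image and a subtly doctored copy
# (one pixel brightened slightly, like a small local retouch).
original = [[(r * 9 + c) % 256 for c in range(9)] for r in range(8)]
doctored = [row[:] for row in original]
doctored[3][4] += 2  # subtle (although important) local edit

h1, h2 = dhash(original), dhash(doctored)
distance = hamming(h1, h2)
# A distance far below the 64-bit hash length means "same or almost
# exactly the same", so the two images would be treated as matches.
print(distance, len(h1))
```

In a real pipeline the grid would come from downscaling an actual image, and a small Hamming-distance threshold would decide whether two images count as the same content, which is also why a subtly doctored photo and the genuine one can fall on the same side of that threshold.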

Google, on Tuesday, had to debunk claims that its Search autocomplete feature was censoring results about the assassination attempt. It said in a post on X (formerly Twitter): “These types of prediction and labelling systems are algorithmic. While our systems work very well most of the time, you can find predictions that may be unexpected or imperfect, and bugs will occur.” Meanwhile, Trump posted on Truth Social: “GO AFTER META AND GOOGLE.”

In February, Google was forced to pause the image generation tool included in its AI platform Gemini after users noticed historically inaccurate images were being generated.

Microsoft, in a recent report, emphasised the need for the tech industry and regulators to protect people from misleading AI content. “One of the most important things the US can do is pass a comprehensive deep fake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans,” said Microsoft vice-chair and president Brad Smith. “We don't have all the solutions or perfect ones, but we want to contribute to and accelerate action.”

Last week, Elon Musk, the world’s richest man and owner of X, reposted a campaign video for US vice-president Kamala Harris that appeared to have been digitally manipulated to change the voice-over in a deceptive manner.
