OpenAI says its latest GPT-4o model comes with ‘medium’ risk

Mathures Paul Published 10.08.24, 07:41 AM
OpenAI has revealed details of AI model safety testing, including concerns about its new anthropomorphic interface. Illustration: The Telegraph

OpenAI, the company behind ChatGPT, has released its GPT-4o System Card, a research document that outlines the safety measures and risk evaluations the startup conducted before releasing its latest model. The document is particularly relevant in light of the humanlike voice interface the company began rolling out for ChatGPT last month.

In the safety analysis, the company acknowledges that this anthropomorphic voice may lure some users into becoming emotionally attached to their chatbot.

GPT-4o was launched in May. Before its debut, OpenAI used an external group of red teamers to probe the system for weaknesses and identify key risks in the model. The group looked at risks such as the possibility that GPT-4o would create unauthorised clones of someone’s voice, generate erotic and violent content, or reproduce chunks of copyrighted audio. The researchers rated the model “medium” risk overall. The overall rating was taken from the highest risk rating across four categories: cybersecurity, biological threats, persuasion and model autonomy. All the categories were deemed low risk except persuasion.
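In other words, under this framework the model’s overall rating is simply the highest rating among the four categories. A minimal sketch of that max-of-categories rule in Python, with scores mirroring those reported for GPT-4o (the helper and variable names are illustrative, not OpenAI’s actual tooling):

    # Risk levels in ascending order of severity.
    RISK_ORDER = ["low", "medium", "high", "critical"]

    def overall_risk(category_scores):
        # Overall rating = the highest-rated individual category.
        return max(category_scores.values(), key=RISK_ORDER.index)

    scores = {
        "cybersecurity": "low",
        "biological threats": "low",
        "persuasion": "medium",  # the only category above low
        "model autonomy": "low",
    }

    print(overall_risk(scores))  # -> medium

Because persuasion alone scored medium, that single category sets the model’s overall rating.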

“Building on the safety evaluations and mitigations we developed for GPT-4 and GPT-4V, we’ve focused additional efforts on GPT-4o’s audio capabilities which present novel risks, while also evaluating its text and vision capabilities,” the company said.

Some of the risks that were evaluated include speaker identification, unauthorised voice generation, the potential generation of copyrighted content, ungrounded inference, and disallowed content.

“Our findings indicate that GPT-4o’s voice modality doesn’t meaningfully increase Preparedness risks. Three of the four Preparedness Framework categories scored low, with persuasion scoring borderline medium,” the company found.

It’s an important moment for OpenAI as the US prepares for its presidential election. There is a risk of AI models inadvertently spreading misinformation or being hijacked by malicious actors, and the company says it is testing real-world scenarios to prevent misuse.

OpenAI is not the only one assessing the risk of AI assistants mimicking human interaction. In April, Google DeepMind released a paper discussing the potential ethical challenges raised by capable AI assistants.
