Google has announced that it is pausing Gemini’s ability to generate images of people with artificial intelligence, saying it will “re-release an improved version soon”. The move follows a backlash over the tool’s depictions of different ethnicities and genders, and its images of historical figures.
The company said in a post on X, formerly Twitter, that the AI feature can “generate a wide range of people. And that’s generally a good thing because people around the world use it”.
But it also said that the feature is “missing the mark here” and the company is “working to improve these kinds of depictions immediately”.
The image generation tool was launched earlier this month through Gemini, which was formerly called Bard. The setback comes at a time when Google is trying to catch up with Microsoft-backed OpenAI. Last week, OpenAI launched Sora, its new generative AI model that can produce video from users’ text prompts.
The image generation feature of Gemini has been producing incongruous images of historical figures, such as the US founding fathers depicted as American Indians.
Now that Google has disabled Gemini’s ability to generate pictures of people, users trying the feature are told: “We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does.”
Research from the University of Washington, Carnegie Mellon University and Xi’an Jiaotong University in August found that AI models may have different political biases depending on how they have been developed.
It is not the first time AI has got it wrong on real-world questions of diversity. In 2015, Google had to apologise after its photos app labelled a photo of a black couple as “gorillas”.