
Where have the gorillas gone?

The Google and Apple photo apps do not spot these animals. Nico Grant and Kashmir Hill have the story

Nico Grant, Kashmir Hill Published 07.08.23, 09:43 AM


When Google released its stand-alone Photos app in May 2015, people were wowed by what it could do: analyse images to label the people, places and things in them, an astounding consumer offering at the time. But a couple of months later, a software developer, Jacky Alciné, discovered that Google had labelled photos of him and a friend, who are both Black, as “gorillas”.

In the ensuing controversy, Google prevented its software from categorising anything in Photos as gorillas, and it vowed to fix the problem. To test whether Google had resolved the issue, we examined its Photos app alongside comparable tools from its competitors: Apple, Amazon and Microsoft.


There was one member of the primate family that Google and Apple were able to recognise — lemurs.

Google has decided to turn off the ability to visually search for primates for fear of making an offensive mistake and labelling a person as an animal. Apple appeared to have disabled the ability to look for monkeys and apes as well.

The issue raises larger questions about other unfixed, or unfixable, flaws lurking in services that rely on computer vision — a technology that interprets visual images — as well as other products powered by AI.

Errors can reflect racist attitudes among those encoding the data. In the gorilla incident, two former Google employees who worked on the technology said the problem was that the company had not put enough photos of Black people in the image collection used to train its AI system. As a result, the technology was not familiar enough with darker-skinned people and mistook them for gorillas.

As AI becomes more embedded in our lives, it is eliciting fears of unintended consequences. Although computer vision products and AI chatbots like ChatGPT are different, both depend on reams of data that train the software, and both can misfire because of flaws in the data or biases incorporated into their code.

Microsoft recently limited users’ ability to interact with a chatbot built into its search engine, Bing, after it instigated inappropriate conversations.

Microsoft’s decision, like Google’s choice to prevent its algorithm from identifying gorillas altogether, illustrates a common industry approach — to wall off tech features that malfunction rather than fix them.

“Solving these issues is important,” said Vicente Ordóñez, a professor at Rice University, US, who studies computer vision. “How can we trust this software for other scenarios?”

Michael Marconi, a Google spokesperson, said Google had prevented its photo app from labelling anything as a monkey or ape because it decided the benefit “does not outweigh the risk of harm”.

When Google was developing its photo app, it collected a large number of images to train the AI system to identify people, animals and objects. Its significant oversight — that there were not enough photos of Black people — caused the app to later malfunction.

Years after the Google Photos error, the company encountered a similar problem with its Nest home security camera during internal testing. The Nest camera, which used AI to determine whether someone on a property was familiar or unfamiliar, mistook some Black people for animals. Google rushed to fix the problem before users had access to the product, according to a person familiar with the incident.

However, Nest customers continue to complain on the company’s forums about other flaws. In 2021, a customer received alerts that his mother was ringing the doorbell but found his mother-in-law instead on the other side of the door. When users complained that the system was mixing up faces they had marked as “familiar”, a customer support representative in the forum advised them to delete all of their labels and start over.

Margaret Mitchell, a researcher and co-founder of Google’s Ethical AI group, joined the company after the gorilla incident and collaborated with the Photos team. She said she was a proponent of Google’s decision to remove “the gorillas label, at least for a while”.

These systems are never foolproof, said Mitchell, who is no longer working at Google. “It only takes one mistake to have massive social ramifications,” she said.

NYTNS
