
Artificial Intelligence poses risk of extinction, warns group of industry leaders

Kevin Roose, New York | Published 31.05.23, 04:45 AM
The statement comes at a time of growing concern about the potential harms of artificial intelligence. File picture

A group of industry leaders is planning to warn that the artificial intelligence technology they are building may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement expected to be released by the Center for AI Safety, a non-profit organisation. The open letter has been signed by more than 350 executives, researchers and engineers working in AI.

The signatories included top executives from three of the leading AI companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.

Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered “godfathers” of the modern AI movement, signed the statement, as did other prominent researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta’s AI research efforts, had not signed as of Tuesday.)

The statement comes at a time of growing concern about the potential harms of artificial intelligence. Recent advancements in so-called large language models — the type of AI system used by ChatGPT and other chatbots — have raised fears that AI could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.

Eventually, some believe, AI could become powerful enough to create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen.

These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, are furiously racing to build faster than their competitors — poses grave risks and should be regulated more tightly.

Dan Hendrycks, the executive director of the Center for AI Safety, said in an interview that the open letter represented a “coming-out” for some industry leaders who had expressed concerns — but only in private — about the risks of the technology they were developing.

“There’s a very common misconception, even in the AI community, that there only are a handful of doomers,” Hendrycks said. “But, in fact, many people privately would express concerns about these things.”

Some sceptics argue that AI technology is still too immature to pose an existential threat. When it comes to today’s AI systems, they worry more about short-term problems, such as biased and incorrect responses, than longer-term dangers.

But others have argued that AI is improving so rapidly that it has already surpassed human-level performance in some areas and will soon surpass it in others.

New York Times News Service
