On Tuesday, the Center for AI Safety (CAIS) released a single-sentence statement signed by executives from OpenAI and DeepMind, Turing Award winners, and other AI researchers warning that their life’s work could potentially extinguish all of humanity.
The brief statement, which CAIS says is meant to open up discussion on the topic of “a broad spectrum of important and urgent risks from AI,” reads as follows: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
High-profile signatories of the statement include Turing Award winners Geoffrey Hinton and Yoshua Bengio, OpenAI CEO Sam Altman, OpenAI Chief Scientist Ilya Sutskever, OpenAI CTO Mira Murati, DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, and professors from UC Berkeley, Stanford, and MIT.
This statement comes as Altman travels the globe, taking meetings with heads of state regarding AI and its potential dangers. Earlier in May, Altman argued for regulation of his industry in front of the US Senate.
Considering its short length, the CAIS open letter is notable for what it doesn’t include. For example, it does not specify exactly what it means by “AI,” considering that the term can apply to anything from ghost movements in Pac-Man to language models that can write sonnets in the style of a 1940s wise-guy gangster. Nor does the letter suggest how the risk of extinction might be mitigated, only that it should be a “global priority.”
However, in a related press release, CAIS says it wants to “put guardrails in place and set up institutions so that AI risks don’t catch us off guard,” and it likens its warning about AI to J. Robert Oppenheimer’s warnings about the potential effects of the atomic bomb.
AI ethics experts are not amused
An AI-generated image of a globe that has stopped spinning. Credit: Stable Diffusion
This isn’t the first open letter about hypothetical, world-ending AI dangers that we’ve seen this year. In March, the Future of Life Institute released a more detailed statement signed by Elon Musk that advocated for a six-month pause in the development of AI models “more powerful than GPT-4,” which received wide press coverage but was also met with a skeptical response from some in the machine-learning community.
Experts who often focus on AI ethics aren’t amused by this emerging open-letter trend.
Dr. Sasha Luccioni, a machine-learning research scientist at Hugging Face, likens the new CAIS letter to sleight of hand: “First of all, mentioning the hypothetical existential risk of AI in the same breath as very tangible risks like pandemics and climate change, which are very fresh and visceral for the public, gives it more credibility,” she says. “It’s also misdirection, attracting public attention to one thing (future risks) so they don’t think of another (tangible current risks like bias, legal issues and consent).”
Writer and futurist Daniel Jeffries tweeted, “AI risks and harms are now officially a status game where everyone piles onto the bandwagon to make themselves look good … So why do people keep harping on this? Looks good. Costs nothing. Sounds good. That’s about it.”
The organization behind the recent open letter, the Center for AI Safety, is a San Francisco-based nonprofit whose goal is to “reduce societal-scale risks from artificial intelligence” through technical research and advocacy. One of its co-founders, Dan Hendrycks, has a PhD in computer science from UC Berkeley and formerly worked as an intern at DeepMind. Another co-founder, Oliver Zhang, sometimes posts about AI safety on the LessWrong forums, an online community well-known for its focus on the hypothetical dangers of AI.
In the machine-learning community, some AI safety researchers are particularly afraid that a superintelligent AI that is exponentially smarter than humans will soon emerge, escape captivity, and either take control of human civilization or wipe it out completely. This belief in the coming of “AGI” informs fundamental safety work at OpenAI, arguably the leading generative AI vendor at the moment. That company is backed by Microsoft, which is baking its AI tech into many of its products, including Windows. That means these apocalyptic visions of AI doom run deep in some quarters of the tech industry.
While that alleged danger looms large in some minds, others argue that signing a vague open letter about the topic is an easy way for the people who might be responsible for other AI harms to relieve their consciences. “It makes the people signing the letter come off as the heroes of the story, given that they are the ones who are creating this technology,” says Luccioni.
To be clear, critics like Luccioni and her colleagues do not think that AI technology is harmless, but instead argue that prioritizing hypothetical future threats serves as a diversion from AI harms that exist right now—those that serve up thorny ethical problems that large corporations selling AI tools would rather forget.
“Certain subpopulations are actively being harmed now,” says Margaret Mitchell, chief ethics scientist at Hugging Face. “From the women in Iran forced to wear clothes they don’t consent to based on surveillance, to people unfairly incarcerated based on shoddy face recognition, to the treatment of Uyghurs in China based on surveillance and computer vision techniques.”
So while it’s possible that someday an advanced form of artificial intelligence may threaten humanity, these critics say that it’s not constructive or helpful to focus on an ill-defined doomsday scenario in 2023. You can’t research something that isn’t real, they note.
“Existential AI risk is a fantasy that does not exist currently and you can’t fix something that does not exist,” tweeted Jeffries in a similar vein. “It’s a total and complete waste of time to try to solve imaginary problems of tomorrow. Solve today’s problems and tomorrow’s problems will be solved when we get there.”
Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.
