No, AI isn't inevitable. We should stop it while we can. | Opinion


Jan. 24, 2026, 5:03 a.m. ET

"AI is here to stay,” according to conventional wisdom. From news headlines to academic papers, Americans are led to believe that the rise of artificial intelligence is inevitable, and that we all just have to bear the consequences.

Decades ago, the “inevitability” of globalization was an excuse to hollow out U.S. manufacturing in pursuit of short-term profits, with disastrous effects on former industrial hubs. Do we really need to let AI sweep through society like a hurricane? 

AI companies are working to replace all human workers and concentrate power in the hands of a select few tech elites. Big Tech CEOs themselves are predicting that AI will replace human beings within years, not decades. Tens of thousands of jobs have been cut already (if not more), and recent graduates in AI-exposed fields are struggling to find work.

In their race for dominance, AI acolytes are building ever more powerful systems without knowing how to control them. These systems could subvert the political system, reduce human influence and increase the risk of war.

You might be thinking: “This is silly, all ChatGPT does is output words on a screen.” But we’re not just discussing chatbots.


AI companies are spending trillions of dollars trying to build artificial intelligence and robots that can do everything humans can faster, cheaper and without any human oversight. OpenAI’s ultimate goal is “superintelligent” AI via “recursive self-improvement.”

In other words, use AI to make smarter and smarter AI, until it’s way smarter than humans, and see what happens. What could possibly go wrong?

AI could literally cause human extinction

The risks posed by the pursuit of “superintelligence” are unacceptable. In 2023, I initiated the Center for AI Safety’s Statement on AI Risk, and hundreds of other researchers joined me in alerting the world that AI could literally cause human extinction. It may sound like sci-fi, but researchers take seriously the possibility of human extinction or disempowerment.

Fortunately, we can stop the reckless race to replace humanity – if we have the political will. AI development is not a law of nature, but rather an immense project that only proceeds through deliberate effort.

Nations around the world have banned human cloning and cooperated to prevent the proliferation of nuclear weapons. Superintelligence could be more dangerous than nukes, but importantly, nobody has built it yet.

We can still choose not to proceed.

The simplest, most robust approach would be to halt the production of advanced AI chips. Scaling up artificial intelligence relies on an extremely concentrated supply chain.

Taiwan’s TSMC and the Netherlands’ ASML are both critical to producing state-of-the-art AI computer chips. These chips are the “weapons-grade plutonium” of superintelligence.

As with nuclear technology, countries could agree to strict rules prohibiting their development and production. The concentrated supply chain and technological difficulty of manufacturing AI hardware would allow countries to verify that their adversaries were not building superintelligence in secret.

Progress in AI algorithms makes such an agreement urgent. We don’t know what will be possible in the future using current hardware. We need a margin of error.

Communities are pushing back against data centers


Banning data center construction in the United States, as Sen. Bernie Sanders, I-Vermont, has proposed, wouldn’t stop China. But as the most powerful nation on earth and the world’s AI leader, America can negotiate from a position of strength.

There are real indications that the Chinese Communist Party does not share Silicon Valley’s obsession with superintelligence, providing U.S. diplomats with a starting point for negotiation.

Echoing Sanders’ concern, Florida Gov. Ron DeSantis has moved to protect the rights of local communities to block data center construction.

Communities from Arizona to Wisconsin are already rejecting them, and towns and counties in more than a dozen states have implemented moratoriums. From the U.S. to Mexico and Ireland, pressure is mounting from local residents, activists and nonprofits to stop data center construction.

This is promising, but we need federal action and international diplomacy as the next step. Instead of fueling the AI race, our top priority in 2026 should be an international agreement to defuse it.


Some will call me a Luddite or worse, but this is not uninformed speculation. I am an AI professor who has been in the field for more than a decade.

Companies are acknowledging the massive risks involved, even as they tell us we have no choice.

We cannot stand by while the house of humanity burns to the ground. Together, we still have the power to put out the fire.

David Krueger is an assistant professor in Robust, Reasoning and Responsible AI at the University of Montreal. He is also the founder of Evitable, a nonprofit that educates the public about the risks of artificial intelligence.