I’d argue there are roughly four broad categories of voices in the “what should society do about AI?” conversation: the Accelerationists, the Doomers, the Cheerleaders, and the Skeptics like me.
We, the Skeptics, aren’t what I want to discuss today, but just to set the stage: our argument, we think, is pretty straightforward. AI, in its current form, is actively harming people RIGHT NOW, via AI Psychosis, disinformation, Non-Consensual Synthetic Intimate Imagery, job impacts, hallucinated court cases, faulty facial recognition, faulty license plate recognition, resource exhaustion from AI slop, and any number of other things I could name. Skeptics believe that society needs to prioritize preventing and mitigating that damage as soon as possible, instead of worrying about a hypothetical future Artificial Intelligence that may or may not ever become a problem.
The Skeptics’ concerns are belittled by the rest of the folks in the debate as ignorant, short-sighted, and superfluous.
I’ll be coming back to the skeptic position and discussing it in a future installment of this series, but for the moment, I want to talk about the other camps.
Or maybe I should say camp, singular. But put a pin in that - we’ll come back to it in a bit.
The accelerationists are people who want Artificial Super Intelligence (ASI) to happen as soon as possible. They’re a combination of true believers in the promise of ASI and people who stand to gain financially from AI adoption. I don’t share their faith that ASI will necessarily lead to a utopia, much less a singularity where humans and AI will merge, but I can understand, were I to actually have that faith, why I might hold this position. Discussion of the likelihood of ASI Utopia will be coming in a future installment.
The Doomers are trying to convince everyone that AI is very likely to kill us all. I’m sure some of them are true believers in the dangers of ASI, and the rest of them are grifters. That’s not a charitable or widely accepted characterization of them, but stick with me, and I’ll lay out my case for it a little further down the page.
The Cheerleaders are by far the worst, though. These are people with large platforms and trusted reputations who are pushing the Doomer (or rarely, the accelerationist) narrative, and lending it reach and credibility that it wouldn’t otherwise warrant. Because of the Cheerleaders’ reach and their apparent objectivity, they often infect other public personalities, turning them into cheerleaders as well.
I will have a lot to say about the Cheerleaders in future installments, although if you don’t want to wait, I’ve made a video about it here.
For the rest of this piece, I’m going to lay out the Doomer argument, and then discuss the implications of it. Note that this argument has huge, unwarranted assumptions in it - and I’ll get back to some of the problems with it in a while, but for now, I’m going to give you my best version of the Doomer argument, and do my best to suppress my inner skeptic:
AIs aren’t built, they’re grown.
Because they’re grown, we can’t understand them or predict them.
Because we can’t understand or predict them, they’ll necessarily be misaligned (have priorities different than ours).
Signs of misalignment already exist, such as:
When put in contrived situations where conflicts exist between the different goals the AIs have been given, and they have an opportunity to resolve the conflict by taking an action that might result in harm to a human, current AIs often give the harmful response. (This is often reported as “the AI tried to blackmail or murder someone,” but those aren’t accurate descriptions.)
A misaligned AI will necessarily come into conflict with humanity (that’s pretty much the definition of misaligned)
Human-level AI (AGI) is inevitable. We’re on track towards that now.
There is no reason to believe that human intelligence is the limit on intelligence.
Because computers can keep getting faster and bigger, and are not limited by the requirement to fit inside a human skull or run only on glucose, they can reach better-than-human intelligence.
Human intelligence has improved AI, therefore better-than-human intelligence could improve AI more, which can, in turn, improve AI even more, etc. (This is called Recursive Self-Improvement.)
Therefore, Artificial SuperIntelligence, a.k.a. “ASI” (here defined as “better than any human at every intellectual task”), is inevitable.
We’ve seen with Chess, Go, and Jeopardy that AIs can become better than any human very quickly. Therefore SuperIntelligence will develop at a speed beyond humans’ ability to keep up.
Humans will be so outclassed by Artificial SuperIntelligence that we will be unable to stop it when it attempts to destroy us (which it will, because it’s misaligned).
In fact, we will be so outclassed that it will kill us all so fast we won’t even get a chance to mount a defense.
This is accompanied by an “argument from ignorance” [my phrase] - for example, from If Anyone Builds It, Everyone Dies: the Aztecs being unable to comprehend the threat the Spanish boats represented until it was far too late for them.
The risk of extinction from Artificial SuperIntelligence is so great, stopping it should be our top priority.
Therefore, we must prevent Artificial SuperIntelligence from being created.
Therefore it must be made illegal.
The Doomers wisely stop here, avoiding the implications of this argument. But let’s continue the line of reasoning as if the above were ironclad:
It is effectively impossible to make Artificial SuperIntelligence illegal in all jurisdictions on Earth that are technologically capable of creating it.
Even if you don’t believe an Artificial SuperIntelligence will necessarily be misaligned, you do have to admit there are humans who do not share your values who, were they to create ASI, would create one that was misaligned with your values.
And even if Artificial SuperIntelligence were made illegal everywhere, compliance would be impossible to verify. Nuclear weapon tests produce a radiation signature that can be used to detect nuclear programs, but there’s no equivalent when a bad actor is spinning up ASI, so even if we get laws passed everywhere, we have no way of knowing whether they are being followed.
If any misaligned Artificial SuperIntelligence is created, or if someone who does not share our values obtains their own ASI, then we mere humans will be helpless to defend ourselves.
The only thing that can compete with an Artificial SuperIntelligence is another Artificial SuperIntelligence.
Therefore, if Artificial SuperIntelligence research cannot be stopped, then our only chance is for people “on our side” to get to ASI faster than people “not on our side,” and hope the people “on our side” can figure out how to control theirs.
Therefore all other considerations should be suspended while we try to get to Artificial SuperIntelligence faster than anyone else.
And poof, the villains of the current AI nightmare get recast as the heroes, and any near-term harms become necessary and unavoidable in the pursuit of getting to “defensive” ASI first. (Not that Silicon Valley is generally “on our side,” or on any side except their own, but that’s another essay.)
It’s a sweet setup. The Doomers get as many people as possible scared of ASI, and that fear becomes leverage to attack any attempt to slow down Silicon Valley with “if you slow us (Silicon Valley) down, then the Chinese (or adversary of the moment) will get ASI first and all that stuff you are scared of will happen to you and your family.”
The Silicon Valley AI folks get to be the heroes, they get to appear just as compassionate and concerned about ASI as everyone else, and they get effective immunity from every crime they commit and every peasant (e.g. you and me) they harm along the way.
This is a very effective propaganda strategy, incorporating what’s referred to as the “fallacy of relative privation,” where you preempt or counter a concern by equating it with a worse version of the problem that has no ready solution.
Once you get familiar with this rhetoric, you’ll see it everywhere. The most common version of it I see is when person A is complaining about billionaires or wealth inequality, and person B shuts them down with “Welcome to Late-Stage Capitalism” or the like. The implication being that the only way to counter wealth inequality or rein in the ultra-rich is by changing out the entire economic system. This has of course been disproven by multiple events in history, including the recovery from the Great Depression and the enforcement of the Sherman Antitrust Act of 1890.
This Doomer narrative gets pushed by a number of groups, including the authors of the “If Anyone Builds It, Everyone Dies” book, and nonprofits like ControlAI and 80,000 Hours that sponsor a bunch of AI-related YouTube content.
And the people pushing this narrative take every opportunity that presents itself to change the subject away from any current AI issue into an existential conversation about an unsolvable, hypothetical problem.
Recently - by which I mean the first couple of months of 2026 - there have been a lot of online conversations about xAI’s Grok producing Non-Consensual Synthetic Intimate Imagery (reportedly, even sexual images of children), and about security issues that have been found with AI agents generally, and with OpenClaw and Moltbook specifically. And I’ve seen several stories and podcast episodes prompted by these problems that have featured interviews with the CEO of ControlAI, in which he does his best to change the subject from “Today’s AIs are hurting people” to “No one knows how AI works, and we have to stop it before it kills us all.”
Are ControlAI and the other groups pushing this narrative, like the Center for AI Policy and the Center for AI Safety, intentionally trying to prevent conversations about AI’s immediate problems, like insecure agents, theft of Intellectual Property, AI apps that generate nude versions of people in pictures, or AI convincing troubled teenagers not to seek help with their mental health struggles? I don’t know, and I couldn’t care less. The effect they are having is to pull attention away from very important problems we could be trying to solve, and they should be held accountable for that, whether they are doing it deliberately or not.
And let me be clear - the Doomer narrative that they are pushing is based on conjecture and thought-experiments, mischaracterizations, and quoting people who work at the AI companies who have a financial interest in you believing that they need to be given anything they want so they can get to SuperIntelligence faster than the Chinese do.
Refuting the Doomer nonsense narrative deserves your full attention, so I’m going to be doing a dedicated installment dissecting a bunch of their arguments. But let me give you a few quick things here to ease any stress you have about it until then.
First off, the statements that “they’re grown and not built” or “we don’t know how they work” are disingenuous at best. There are SO many things in the modern world that you deal with every day that we didn’t build directly and can’t completely predict, but that we deal with just fine and that haven’t made us extinct and won’t. Like traffic. We build the roads and the cars, but we have to run experiments to understand how different changes to vehicles, signage, and infrastructure affect safety, congestion, maintenance, etc. And traffic is not an existential threat (despite how it might feel at rush hour). Internet search is another one: there are millions of gigabytes of indexes distributed across dozens of data centers that are used to tell you which web pages are relevant to your search terms (I’m talking about pre-AI Google here, not the new AI-based stuff), and those indexes are constantly being updated by hundreds or thousands of crawler processes pulling in data from millions of websites. There’s no way to know exactly which parts of which index on which machine in which data center were responsible for a given web page in your search results. And again, that’s not an existential threat. I’ve made a video talking about this particular assertion here.
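If you want a feel for why “nobody can say exactly which machine answered you” is a mundane property of big distributed systems rather than a warning sign, here’s a minimal toy sketch in Python. To be clear, this is not how Google’s index actually works; the shard names, functions, and placement logic are all made up for illustration. The point is just that answers get merged from many constantly-changing pieces, and nothing keeps track of which piece said what.

```python
# Toy sketch (hypothetical, NOT a real search architecture): many crawler
# processes keep rewriting many index shards, and the frontend merges
# answers from all shards without recording which shard produced what.
import random

# Stand-in for machines full of index data spread across data centers.
shards = {f"shard-{i}": {} for i in range(8)}

def crawl_update(url, terms):
    """A crawler drops a page into some shard's posting lists.
    The placement isn't recorded anywhere the frontend can see."""
    shard = random.choice(list(shards))
    for term in terms:
        shards[shard].setdefault(term, set()).add(url)

def search(term):
    """The frontend merges hits from every shard; provenance is discarded."""
    results = set()
    for index in shards.values():
        results |= index.get(term, set())
    return results  # you get pages back, not "which machine said so"

crawl_update("example.com/cats", ["cats", "pets"])
crawl_update("example.org/dogs", ["dogs", "pets"])
print(search("pets"))  # e.g. {'example.com/cats', 'example.org/dogs'}
```

Nobody would call this system mysterious or dangerous just because the answer to “which shard served that result?” isn’t recoverable after the fact.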
The Doomers’ assertion (or implication) that “we are on track towards Human-level general intelligence (AGI) right now” is not certain, and in fact there is strong evidence that it is false, like the disappointing failure of GPT-5 to live up to the vast majority of the promises made about what it was going to be able to do, and the fact that some leading AI researchers have finally started admitting things like “the belief that if you just 100x the scale, everything would be transformed? I don’t think that’s true. So it’s back to the age of research again.”
The next thing to understand is that there is literally zero evidence that AIs are going to be able to improve themselves without human direction. Recently, a cluster of the best coding agents we know of tried to produce a C compiler from scratch, and did a horrible job. And a C compiler is literally one of the longest-studied and best-understood programs in existence. If they can’t do that, there’s no reason to believe they can improve the code that manages their own base intelligence.
And yet the idea is that AIs are going to go from “can’t build a functional C compiler” to “so smart they can make and execute a plan to kill all of us that we will have no defense against,” and do it so fast that we won’t even notice it’s happening and will have no chance to stop it. Again, this is pure fantasy - there’s zero evidence for any of it.
Might an AI that can improve itself be possible decades from now? I have no idea, and neither does anyone else. But there are three things we know for sure: 1) we’re not anywhere close to that yet, 2) if we do start to move closer to that, there will be clear evidence of it, and 3) AI is really harming real people right now, and there are steps we could be taking to help the people being harmed, but we’re arguing about this instead.
If you want to know how AI should be addressed, ignore the supposed concerns of the AI CEOs and of people like ControlAI who are in effect pushing a pro-Silicon-Valley “we have to beat China“ agenda, and look to the people doing serious work on AI safety, like the signatories of the AI Red Lines Statement, which talks about “risks such as engineered pandemics, widespread disinformation, large-scale manipulation of individuals including children, national and international security concerns, mass unemployment, and systematic human rights violations…ensuring that all advanced AI providers are accountable,” and responsible AI safety organizations like the French Center for AI Safety (CeSIA) and The Future Society.
