I'm Offering Scott Alexander a Wager About AI's Effects Over the Next Three Years


I have said it before, and I will say it again: I will take extreme claims about the consequences of “artificial intelligence” seriously when you can show them to me now. I will not take claims about the consequences of AI seriously as long as they take the form of you telling me what you believe will happen in the future. I will seriously entertain evidence-backed observations, not speculative predictions. That’s it. That’s the rule; that’s the law. That’s the ethic, the discipline, the mantra, the creed, the holy book, the catechism. Show me what AI is currently doing. Show me! I’m putting down my marker here because I’d like to get out of the AI discourse business for at least a year - it’s thankless and pointless - so let me please leave you with that as a suggestion for how to approach AI stories moving forward. Show, don’t tell; prove, don’t predict.

There are several different kinds of AI psychosis going on right now. The big one is, well, everyone has lost their fucking minds about AI, in a way I find truly disturbing. Another one that I have not seen anyone really comment on is a kind of second-order meta-psychosis: people keep talking about a media world that’s full of AI skepticism (often “leftist AI skeptics”) when, in fact, a vast majority of people in media have accepted wild predictions about AI forever altering human existence, imminently, for which they can provide no material evidence whatsoever. I read things by people in the AI development world itself, I read tech and gadget media people, I read business journalists, I read polemicists, I read wonks, I read liberals, I read conservatives, I read AI-generated summaries that Google flashes in front of my face against my will, I trawl through the comments sections, I watch YouTube videos, I listen to podcasts - the notion that the media, or the discourse, or the public consciousness is generally skeptical is totally foreign to me. I don’t know what planet you guys are living on. Yes, there are a handful of well-known skeptics like Gary Marcus. They are absolutely dwarfed by the number of people who think AI is going to forever change the fundamentals of human existence and quite soon. I honestly think people are developing this idea of an army of skeptics from random attitudes they encounter on social media, distributed opinion. But opinions from those with mass audiences are overwhelmingly credulous and hostile to skepticism. In my experience.

Everybody keeps talking about this phantom skeptical media default when the default is in fact the exact opposite, and I find it very very weird. Despite relentless reference to strawman “leftist skeptics” who are never quoted or named (and are, in fact, almost impossible to find), the number of people in the media who are predicting an imminent and irrevocable fissure in human history vastly outnumber anyone expressing even moderate skepticism. Many people are proffering what they frame as skeptical takes which, when you open the hood, amount to “Sure, jobs are not going to exist in five years, but perhaps we won’t all be hooked up to perfectly lifelike VR fantasy generators just yet.” But that’s not a skeptical take. A skeptical take is “As with so many predictions of the future in the past, such as the wild predictions made by esteemed scientists concerning the Human Genome Project, predictions about artificial intelligence today are irresponsible, sensationalistic, and very unlikely to come true.” That’s skepticism. And I am telling you honestly that I just don’t see much of it.

Here’s Hamilton Nolan, a true blue leftist, offering what he sees as a cautious and sober take and which I see as alarmist and far-fetched. He’s giving a scolding to those of us who are deeply skeptical about any world-changing potential in (what we are now choosing to call) AI, and I find it a useful piece in that it demonstrates how ideologically widespread the craze has become. Nolan is smart and clearly sincere and yet he’s defining the minimum potential effects of AI in a way that still implies humanity-altering change. That’s part of the psychosis; the goalposts have been moved to the point where many see anyone who says “Hey maybe humanity is not on the brink of changing forever in the most wildly exaggerated of ways” as some sort of Luddite denialist. But “tomorrow will be mostly like today” is always the safest assumption you can make. It’s all very crazy and people are really losing it; someone’s going to commit suicide over a phantom.

Sam Adler-Bell, left-leaning guy, very smart, usually very chill, here defines the minimal responsible position as seeing AI as “very impressive, plausibly revolutionary.” But, Sam… what if it ain’t?

There’s this whole sighing chorus about this stuff, people who seem endlessly, performatively tired of having to address skeptics, and it’s made up of guys I generally see as sober and cautious. Here’s Matt Yglesias, for example, lamenting those narrowminded progressive skeptics. Derek Thompson is a guy I usually think of as almost pathologically even-keeled, and yet he’s been caught in this pained response to what he sees as rampant skepticism for like a calendar year now. Ezra Klein seems like he’s been sighing since the day ChatGPT was launched, exhausted by having to live in a world where a small handful of people are saying, “Perhaps absolutely everything will not change forever in the next handful of years.” I don’t understand why the burden of proof has shifted so dramatically with these guys; people making extraordinary claims are always the ones who face an extraordinary burden of proof, and the ideas that are being batted around - the demise of human reasoning, a post-work economy, exponential economic growth, Skynet launching the nukes to rid the world of human presence - these are the definition of extraordinary claims.

Ross Douthat, stentorian and small-c as well as capital-C conservative, has been tripping the light fantastic for months, dreaming of C-beams glittering in the dark near the Tannhäuser Gate. He just had Anthropic CEO Dario Amodei on his podcast and gave him free rein to talk about distributed intelligence drone swarms and other sci-fi nonsense, all predicated on wildly extrapolating the abilities of systems that have no model of the world and thus don’t know it’s crazy to suggest that a high-security psychiatric hospital has a poolhall for patients in the basement of a nonexistent building. (Substitute your own favorite example if you’d like.) Amodei is the guy who predicted that 90% of code would be written by AI sometime between June 2025 and September 2025; no serious person believes that a majority of code is produced by AI right now, six months past his timeframe. And Amodei has responded to criticism of his exuberant predictions with embarrassing handwaving. Why does he so often get taken seriously as an AI Nostradamus, then, especially given that he has an immense personal, financial, and social stake in the stock market’s belief that AGI will arrive soon? I don’t know man. You’d have to ask our collective newsmedia why they’ve decided to take every charlatan at their word.

John Herrman of New York is one of my favorite technology writers. And this new piece of his is shielded from the most absurd AI delusions by its careful, meta nature, by Herrman’s habit of standing just outside of the frame of whatever he writes about, craning his neck through a window to take astute ethnographic notes on tech. In the piece, he’s talking about other people’s AI delusions, the capacity for whole industries to get carried away; he’s keeping a great deal of authorial distance from the questions at hand. And yet as someone who sincerely likes Herrman and has no interest in insulting him, I have to say, Herrman’s take on what’s more or less likely, when it comes to AI, is nuts to me. If you removed otherwise-rational people from this schizoid petri dish of LLM hysteria we’re living in right now, if we could trek into the Amazon and find an uncontacted tribe of people who have somehow avoided ever hearing the term “ChatGPT” - impossible, but let’s dream - those regular, non-brainrotted people would helpfully inform us that we’ve all lost our minds, and we’re living in a mass hallucination based on the Star Trek: The Next Generation episode “The Game.”

The New York Times will factcheck a writer and ask for three peer-reviewed sources if they say “receiving expert oral sex is pleasurable,” and yet here’s a piece that claims that “We’re All Polyamorous Now. It’s You, Me and the A.I.” All of us! Really! You know, I had always thought that “all” is a very strong word. But fuck me, right? Restraint is very passé. I don’t know, man. This stuff is so crazy that forcing people to reckon with the possibility that the world five years from now will look very much like the world today feels like a very heavy lift. It just doesn’t feel like anything is going to break this fever. That’s part of why I’m going to stop writing about it for the foreseeable future; I just don’t think a guy like Klein is reachable, at this point, I don’t think any true skepticism can possibly penetrate. And while I enjoy my role as a widely-ignored prophet, most of the time, lately I’ve grown tired of trying to ask anyone to consider any alternative perspective about AI. There’s just no percentage in it.

So rather than continuing to make the same points about AI, I’m going to try and get a little juice by offering Scott Alexander of Astral Codex Ten a bet. Scott is an AI enthusiast, a signatory of the AGI 2027 thing, and someone who has grown increasingly irritated by my existence. (Which appears to be partially, but far from entirely, due to my views on LLMs.) Alexander is also someone who has been deeply interested in the prediction markets scene, and that whole enterprise is based on the notion that people make better predictions when money is on the line. So I’m here to bet Scott $5,000 that AI will not meaningfully disrupt the economy in the next three years, winner to be certified and paid out on February 14th, 2029, under the specific victory conditions I list below.

I’m offering a wager to Scott that the economy will remain basically “normal” through February 2029. Why focus on the economy? Because economic terms are more-or-less objective and measurable. This bet uses concrete, widely-accepted economic indicators (unemployment rates, GDP, wage levels, inequality metrics) rather than debating fuzzy terms like AGI or “the Singularity,” which aren't scientifically defined and let people move the goalposts endlessly. (Which of course is why AI companies and evangelists love them.) If AI is truly about to revolutionize everything the way proponents claim, we should see massive economic disruption: widespread job losses, productivity explosions, collapsing wages in knowledge work, extreme wealth concentration, extreme changes in fundamental economic indicators in either direction, something like that, some truly significant changes in large-scale economic data, if Scott and others are right. By setting generous tolerances on these metrics - that is, allowing for significant turbulence that would still count as “normal” - the bet puts the transformative AI thesis to a real test, as well as protecting me from non-AI disruption. If all these indicators stay within historical bounds, it suggests that AI is just another technological evolution, “normal technology,” not the world-changing revolution people claim. And if I’m wrong and even one metric gets violated, Scott wins. It’s a fair, falsifiable test of whether the hype matches reality.

For me to win the wager, all of the following must be true on Feb 14, 2029:

Labor Market:

  1. The U.S. unemployment rate is equal to or lower than 18%

  2. Labor force participation rate, ages 25-54, is equal to or greater than 68%

  3. No single BLS occupational category will have lost 50% or more of jobs between now and February 14th 2029

Economic Growth & Productivity:

  1. U.S. GDP is within -30% to +35% of February 2026 levels (inflation-adjusted)

  2. Nonfarm labor productivity growth has not exceeded 8% in any individual year or 20% for the three-year period

Prices & Markets:

  1. The S&P 500 is within -60% to +225% of the February 2026 level

  2. CPI inflation averaged over 3 years is between -2% and +18% annually

Corporate & Structural:

  1. The Fortune 500 median profit margin is between 2% and 35%

  2. The largest 5 companies don’t account for more than 65% of the total S&P 500 market cap

White Collar & Knowledge Workers:

  1. “Professional and Business Services” employment, as defined by the Bureau of Labor Statistics, has not declined by more than 35% from February 2026

  2. Combined employment in software developers, accountants, lawyers, consultants, and writers, as defined by the Bureau of Labor Statistics, has not declined by more than 45%

  3. Median wage for “computer and mathematical occupations,” as defined by the Bureau of Labor Statistics, is not more than 60% lower in real terms than in February 2026

  4. The college wage premium (median earnings of bachelor's degree holders vs high school only) has not fallen below 30%

Inequality:

  1. The Gini coefficient is less than 0.60

  2. The top 1%’s income share is less than 35%

  3. The top 0.1% wealth share is less than 30%

  4. Median household income has not fallen by more than 40% relative to mean household income

Those are the bet conditions. If any one of those conditions is not met, if any of those statements are untrue on February 14th 2029, I lose the bet. If all of those statements remain true on February 14th 2029, I win the bet. That’s the wager.
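To make the resolution logic concrete, the conditions above can be sketched as a simple checklist: the bet resolves for me only if every threshold holds, and a single violation resolves it for Scott. This is a hypothetical illustration; the metric names and the sample 2029 values below are placeholders I've invented for the sketch, not data from BLS, FRED, or any other real source.

```python
def freddie_wins(m):
    """Return True iff every bet condition holds; any single failure loses the bet."""
    checks = [
        m["unemployment_rate"] <= 18.0,                       # Labor 1
        m["prime_age_lfpr"] >= 68.0,                          # Labor 2
        m["worst_occupation_job_loss_pct"] < 50.0,            # Labor 3
        -30.0 <= m["real_gdp_change_pct"] <= 35.0,            # Growth 1
        m["max_annual_productivity_growth_pct"] <= 8.0
            and m["three_year_productivity_growth_pct"] <= 20.0,  # Growth 2
        -60.0 <= m["sp500_change_pct"] <= 225.0,              # Markets 1
        -2.0 <= m["avg_cpi_inflation_pct"] <= 18.0,           # Markets 2
        2.0 <= m["fortune500_median_margin_pct"] <= 35.0,     # Corporate 1
        m["top5_sp500_cap_share_pct"] <= 65.0,                # Corporate 2
        m["prof_services_employment_decline_pct"] <= 35.0,    # White collar 1
        m["knowledge_jobs_decline_pct"] <= 45.0,              # White collar 2
        m["comp_math_real_wage_decline_pct"] <= 60.0,         # White collar 3
        m["college_wage_premium_pct"] >= 30.0,                # White collar 4
        m["gini"] < 0.60,                                     # Inequality 1
        m["top1_income_share_pct"] < 35.0,                    # Inequality 2
        m["top01_wealth_share_pct"] < 30.0,                   # Inequality 3
        m["median_vs_mean_income_decline_pct"] <= 40.0,       # Inequality 4
    ]
    return all(checks)

# A "boring 2029" roughly resembling today's economy: every check passes.
normal_2029 = {
    "unemployment_rate": 4.2, "prime_age_lfpr": 83.5,
    "worst_occupation_job_loss_pct": 12.0, "real_gdp_change_pct": 6.0,
    "max_annual_productivity_growth_pct": 3.0,
    "three_year_productivity_growth_pct": 7.0,
    "sp500_change_pct": 25.0, "avg_cpi_inflation_pct": 2.8,
    "fortune500_median_margin_pct": 9.0, "top5_sp500_cap_share_pct": 28.0,
    "prof_services_employment_decline_pct": 1.0,
    "knowledge_jobs_decline_pct": 0.0,
    "comp_math_real_wage_decline_pct": 0.0,
    "college_wage_premium_pct": 65.0, "gini": 0.49,
    "top1_income_share_pct": 21.0, "top01_wealth_share_pct": 14.0,
    "median_vs_mean_income_decline_pct": 2.0,
}
print(freddie_wins(normal_2029))  # prints True: a normal economy, I win

# A single violated threshold is enough for Scott to win.
disrupted = dict(normal_2029, knowledge_jobs_decline_pct=50.0)
print(freddie_wins(disrupted))  # prints False: one condition failed, Scott wins
```

The asymmetry is the point: I have to run the table on seventeen conditions, while Scott needs only one to break his way.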

The conditions are certainly open to criticism. They’re my admittedly-clumsy way to define the boundaries of a more-or-less normal 21st-century American economy. (Econ and finance folks, you know, have at it.) The tolerances were chosen as best I could, with at least some degree of logic; for example, I chose an 18% unemployment rate because the highest we’ve seen in modern history was a momentary 15% rate in the depths of Covid. I’m trying to filter out non-AI disruptions as best I can. I will point out again that Scott has a significant advantage in that only one of these conditions needs to be violated for him to win the bet, and you know he’s all about conditional probability and such.

I expect Scott will just turn down the bet out of hand, but he’s certainly free to counter with altered conditions. (He could turn it over to his commenters, who are very savvy about this sort of thing.) I’m open to negotiating terms. Please note that, while it’s customary for these bets to be paid out in donations to charity, if I win I’m keeping the cash. Scott of course is free to donate it to wherever he’d like. One wrinkle here is that I don’t have the $5k on hand to put into escrow or whatever people do, but if that changes before payout day I would do so. Or perhaps some wealthy soul out there would be willing to act as a guarantor or something, IDK. But this is a real bet and I’ll sign a contract or whatever. Just please be generous if I’ve made some sort of obviously dopey mistake in the conditions, like saying “greater than” when the thing that would make sense for a normal economy is “less than.”

I will note that it’s not unthinkable that these conditions could be violated through some other means than an AI revolution; it would be a pretty hilarious irony if the AI bubble pops and puts us into a new Great Depression, causing these conditions to fail. I will honor the terms of the bet even if a non-AI source changes the economy dramatically, but in that instance I maintain the right to not concede anything about the lived reality of AI.


Now I’ll do the tedious part that everybody hates. I feel compelled to do it because I genuinely think that this is what this is all about. I don’t think this is a particularly materially hard time in which to live. (I invite you to ponder what life was like for someone who was born in 1900 and lived to 1975, all the horrors they might have experienced, if you disagree.) But I do think this is a very emotionally difficult time. Human beings need other human beings, and we’ve created immense digital barriers between each other in a way that has left millions feeling lonely and unheard; human beings need depth and meaning and purpose, and we’ve created a digital world that can provide only momentary distraction and novelty but which is nonetheless killing the parts of art and culture and community that provide slow, durable, meaningful rewards. No more potluck dinners but endless hours on TikTok, no more romance but endless amounts of extreme online pornography, no more deep, hard-won knowledge but plenty of podcasts that will enable you to pretend that you’ve gained that knowledge, no more challenging and electrifying novels but as many shitty webcomics as you can consume, no more human beings, only the black mirror staring back at you. That’s where we are: we have sacrificed everything deep and penetrating and good about human life, for the right to absolute convenience and total distraction. It’s a horrible bargain and everybody is sad all the time. And the only cure is to blow up all the server farms.

It’s understandable that so many people are escaping into an LLM-enabled fantasy about a better world, or at least a different world, given that many people would trade their current lonely drudgery for the apocalypse. But understandable is different than wise, and pleasant is different than true.

Here is my prediction for you, specifically you, the person reading this right now, and your life, in the shadow of this schizophrenogenic moment: nothing cool is going to happen. Your life is never going to change in any truly revolutionary way. No matter how much time passes or where you go, you will always be you, and that you-ness will dictate your life more than your geography, your income, your job, your social station, your relationship status, or any other exogenous factor. You can’t escape what you don’t like about your life because you can’t escape yourself. And no magical computer is coming to save you from that condition. The goal in life is thus to forgive yourself for being yourself and to try and scratch out an existence where the contentment in your life just barely outweighs the disappointment and boredom that are something like the default state of adult life. People take that as an extremely bleak, doomer kind of view, but honestly I think it’s just life. We are an accident of evolution and life itself is an accident of history and none of us were ever promised anything. We were put here on Earth for no reason and against our will and we are born in absolute terror and only the luckiest among us die in any state other than terror. I do think that we can reach fuller and richer and more peaceful lives, but it won’t come from AI. Instead it will come from a return to the human, from tearing down the digital walls we’ve built between us. The only thing that can save humanity is humans.

You know just before writing this I went to Walmart to pick a few things up for the house and for my wife. And I had the usual big box store experience where the box was indeed big, which is to say that the store was very large and filled with departments and many thousands of possible things to buy. As is very common, I had a hard time finding things, had to wander around looking, and ended up backtracking several times; it was all very inefficient. If there’s any problem that should be a solved problem in the Information Age, it’s this problem - finding out where items are placed in a limited geography (a geofence, you might say), all of which have specific numerical and digitized identifiers that allow them to be sorted, placed on shelves, and run up at checkout. I mean, what else is the era of bits>atoms good for, if not sorting through large amounts of digital information to find what you want? Yet I still wandered around, looking like a dope. At one point I asked an employee where white melting chocolate might be; he had no idea. I could have spent more time looking for someone to look up where the stuff I needed was, but that seemed even more inefficient, so I did what humans have done for 350,000 years: I hunted and gathered.

When I had found all the items I wanted, I went to the automated checkout line. Sadly at least a half-dozen of the checkout stations were out of order, which I’m sure you’re familiar with; technology is finicky! And I noticed that there was a little crowd of employees that had been assigned to that station, which makes sense because every time you go through the automated checkout line, there are multiple people in need of help, who can’t work the technology, who are confused, who stumble through the transaction. Their number included me, today, because a bar code was wrong, and despite the sheer mass of people Walmart had assigned to the task of shepherding us lost fallible souls through the checkout process, I had to wait. And what I’m telling you today and every day is that this is life, real life; real life is a trip to Walmart. Falling in love and watching your child being born and taking in a Chinese bamboo forest for the first time, these are all real life too, don’t get me wrong. But you will never escape the trips to Walmart, you will never stop having to scrape ice off your car in winter, you’ll never stop needing to hunt for that one missing form to finish doing your taxes. And if by some miracle these specific individual chores and indignities are automated away by technology, then you will find the emotional energy you once spent on them being sopped up by some other undignified chore. They don’t end. We are tragic creatures; that’s life. Life can be transcendent! But you can’t use technology or any other means to escape one of the most durable forces in human existence: the power of the ordinary, the power of the mundane.

I know a lot of you have imagined something better, or at least something different, than my excursion to Walmart today. But this grubby human world really doesn’t care what you’ve imagined, does it?

Update: Scott Alexander responds:

As I said in the Dwarkesh podcast and elsewhere, my median for AGI is more like early 2030s, and it might take a few years to have effects this profound on the economy. If you want to make the same bet for 2036, I’m game. I assume you’ll want to change the numbers to account for the longer interval (eg there’s some chance the stock market goes up that much in ten years even without AGI), so tell me the numbers you’re comfortable with for 2036 and I’ll confirm that they make sense to me.

If you prefer it be about 2029 so we can resolve earlier, let me know and we can talk about what intermediate endpoints we disagree about for 2029.

OK that’s fine, but here’s what he wrote in the middle of last year:

In 2027, coding agents will finally be good enough to substantially boost AI R&D itself, causing an intelligence explosion that plows through the human level sometime in mid-2027 and reaches superintelligence by early 2028. The US government wakes up in early 2027, potentially after seeing the potential for AI to be a decisive strategic advantage in cyberwarfare, and starts pulling AI companies into its orbit - not fully nationalizing them, but pushing them into more of a defense-contractor-like relationship. China wakes up around the same time, steals the weights of the leading American AI, and maintains near-parity. There is an arms race which motivates both countries to cut corners on safety and pursue full automation over public objections; this goes blindingly fast and most of the economy is automated by ~2029.

That’s a pretty big change! And doesn’t a forecast revision of that magnitude suggest that we should all be more cautious and humble about these predictions?