AI optimism is waning


So far, we have utterly failed to tell the story of how the future goes well. People have two key points of reference: sci-fi movies where things go terribly wrong, and real life, where they’ve seen an early preview of a probably volatile transition period between today and the future. They have naturally assumed that one or both of these is what the future will look like. People straightforwardly do not believe in positive futures right now. Our minds evolved to avoid death, and doom is much easier to imagine.

Institutions are weakened, and people feel it. The cultural supply chain is corrupted, automated and adversarially optimized; people are tormented by it. Most importantly, people are palpably feeling AI (and the expected new world in general) as a threat to their way of life, to what they rely on economically, to their political importance and leverage, to personal relationships, to certainty about how almost anything will work in the near future, or about whether humanity will even survive. This sort of fear has even permeated tech culture (“permanent underclass”, etc.), which coincides with the labs becoming the primary objects of focus, growth, capital, and status. This all affects mass culture from upstream. Where we previously had a core of very positive energy in tech, we now have people scrambling like maniacs for the exits before Claude automates them. As part of this realignment around “the labs are the only reliable thing happening,” lots of structural stuff in tech culture has changed in ways that make it worse on many levels. Tech is, in a repressed way, at least as obsessed with status and wealth as any other subculture, if not more so. The central social contract of tech was always that more wealth was good, positive sum, grow the pie, meritocracy is the way. But this meme of catastrophic human capital depreciation is very powerful, and it’s resulted in people thinking that the tech social contract will soon be dead. You get your bag in the next n months or you are permanently locked out, because the returns on technical human capital are going to zero. So even tech people are freaking out, and if they don’t viscerally believe in positive futures, how can we expect anyone else to? And you know, it’s actually hard to tell how wrong they even are.

To the broader public, the labs and their CEOs have become the primary points of attention for AI, and they’re relatively unusual characters (again, to the public) who’ve said confusing, scary, often unhelpful things on the record in the past, which feeds content farms and bolsters the doom pipeline. The polarization also makes it almost impossible not to get hit with criticism. There’s much more to say about lab CEOs than I’ll say here; I think they all basically mean well, but again they’re just illegible enough to the public that their rhetoric has caused a lot of unnecessary cultural volatility. The labs being private companies also means the average person with a 401k or Robinhood account has no stake in their growth, and pre-farmed IPOs at $1T (lol) might be too little too late, though they will probably help at least a little on the margin. I think we’ll look back on letting VC/PE eat the 100-1000x to-date ROI on the big labs as a generationally bad mistake that negatively affected culture way more than it was worth. No matter the outcome, it was quite unskillful as a strategy.

The datacenter buildout, too, has been such a self-own. It would have been easy to say, “We’re going to make your electricity cheaper by adding 10% over our demand to your local grid, and we will send all the dirty water into space” (or whatever). Instead we have a national movement organized against the building of data centers! Oops.

People feel marginalized and vulnerable in expectation, and basically everyone who is pro-AI has failed to articulate the ways in which they’re misguided, insofar as they are. Part of the difficulty is that the system does actually appear to be changing in ways that will reduce the relative power of individual humans. Maybe one minimum condition for people to feel okay about this is that the world and government should feel stable, improving, growing. Belief in personal economic prosperity and the ability to live freely goes a long way.

People who feel certain they will benefit from AI love AI. But the upside appears from the outside to be extremely concentrated at the moment (people are bad at counting deflationary effects and pricing in expected consumer surpluses, and in fact the “absolute” economic gains for now are incredibly concentrated, etc.). The OpenAI plan doesn’t seem to solve this; it is too lukewarm. If we want the public behind AI, we will need to be substantially more bold about making unconditional functional abundance happen, i.e. design, build and deploy the full machinery for maximally robust post-scarcity, where most things are epsilon marginal cost at the consumer level. Elon simplifying this by saying “you won’t need money in the future” was smart and has had an interesting reception. It hints at a partial direction here that is feasible if we can produce a tremendous amount of automation. We should try to make sure non-panopticon democracy survives while we’re at it.

One more thing. For years I (not uniquely) said there would be manageable risks from AI, and that we would need to proactively address them. I tried to explain how this would work. Thankfully this kind of view became common wisdom among many of the most thoughtful people in AI, and among the leaders of the labs as well. This includes things like iterative deployment, etc. But instead of acknowledging risks and focusing on security acceleration, many pro-AI people found it much more convenient to sweep this stuff under the rug, deny it altogether, and so on. This was a massive mistake. They are still doing it. The debt may end up repaid in backlash.

If the singularity makes it hard to see the future, people’s instinctive reactions will take over. For most people that default is fear. The unknown, the fog, does this. There could easily be a mass bipartisan movement against AI, and there’s essentially no plan for how to quell such a possibility except more lobbying. Dems may get automated and revolt against displacement. Republicans may feel a threat to their norms and way of life. Note that this is already happening on a (presently) minor scale.

It’s not easy to prove to people that the system, and AI as an increasingly integral part of that system, is working decently well on behalf of humanity. At the moment, I believe it largely is, because humans control the capital and most humans are good. Of course it’s not doing so perfectly, or even mostly, when compared to its absolute capacity; fortunately it happens that our surpluses can, however imperfectly, compensate for the gaps. As a side point, it is worth noting that capital leverage (and thus the variance of its effects) is increasing for a number of reasons, so each unit of capital can contribute to large perturbations of all of these variables. Lots of implications of this to unpack, another time.

So, AI optimism is waning, for the moment. Many people in tech are in a shameless state of defection. Trampling each other on the way to the lifeboats is not a belief system. If we want a future of human flourishing for us and our descendants, we will have to make it so by fighting against the many powerful forces at odds with this goal. This means, in part, letting good character win out over defection, avoiding shitty asymmetric locked-in equilibria of power, governance and economics, avoiding the very real potential for dangerous things to destroy lots of what we love on the way to the future, and building durable institutions out of all of this that benefit individuals at scale.

- b
