We Are (Still) Living in the Long Boring

by MJP

In this piece arguing that the future is here - that the Singularity is here, that it has already been accomplished - Katherine Dee says, “I typed this from an iPhone while looking at 3D-renderings of my unborn children.”

And, you know, that’s cool. We tried to get 3D scans of our little guy while he was still in the womb, but he was already Troll Baby at that point and was turned in the wrong direction on three separate visits to the studio, which required three separate 45-minute drives north and three separate 45-minute drives back home empty-handed. But 3D scans of a baby in utero are cool enough that we were willing to drop a couple hundred bucks on them, had he consented. Granted, the first fetal ultrasounds were administered in the 1950s, and for routine obstetric care 3D scans offer no advantage over conventional ultrasound; they may even be slightly worse. But it’s definitely cool to see the baby that way! Obviously, the most important goal of medical technology is increasing human survival rates, and when it comes to babies, the metric of interest is infant mortality - the number of babies per 1,000 who don’t make it to their first birthday. And current levels of infant mortality are one of the great achievements of the modern world. We’re living in an era in which the death of an infant is a remote possibility, for those of us with access to modern medicine.

But of course, “the modern world” means different things. Modernity, it turns out, began a long time ago.

This is a miracle, certainly, but it’s a miracle that protected me in 1981 almost as much as it protected my son in 2025. Here’s the rate of progress in this metric in the last 30 years.

As the father of a baby who came into the world healthy but with considerable difficulty - we spent the last five weeks of the pregnancy living in the hospital - I am grateful for the roughly two fewer babies per 1,000 who die in their first 12 months compared to 30 years ago. I truly am. But it’s still clearly the case that dramatically reduced infant mortality, in the developed world, is not an achievement of the digital age; all of the heavy lifting was accomplished before most of us were born. In 1900, 100 out of 1,000 American infants died before their first birthday, 10% of all lives snuffed out in their first year. By 1950 it was around 30 out of 1,000. By 1970 it was about 20. When I was born it was less than 10. Now it sits at a little less than 6. The entire 1995–2024 window we’re looking at is the nearly flat tail-end of a transformation that was essentially complete before the “digital revolution” began. The heavy lifting, the core development and progress in sanitation, antibiotics, pasteurization, hospital births, happened far earlier, specifically in that magic 1870ish to 1970ish window I always talk about. You can say, hey, we haven’t seen major advances here because we’re near the limits of progress, there isn’t much further to go! But if that’s true, it kind of proves the point, right?
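The arithmetic behind that “nearly flat tail-end” claim is easy to check. A minimal sketch, using the approximate rates quoted above (and assuming a 1995 rate of about 8 per 1,000, i.e. two more than today’s):

```python
# Infant mortality per 1,000 live births (approximate US figures, as quoted above)
rates = {1900: 100, 1950: 30, 1970: 20, 1995: 8, 2024: 6}

total_decline = rates[1900] - rates[2024]        # 94 fewer deaths per 1,000 since 1900
decline_before_1995 = rates[1900] - rates[1995]  # 92 of those happened before 1995

share = decline_before_1995 / total_decline
print(f"{share:.1%} of the decline happened before 1995")  # → 97.9%
```

By these rough numbers, about 98% of the total decline since 1900 predates the digital era, which is the argument in quantitative form.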

A lot of things are like infant mortality, in that we live with the benefits of immense progress, progress that was accomplished by our great-grandparents.

Note that the share of the workforce in agriculture isn’t just a story about a shifting labor force; it also tracks progress in one of the most essential metrics of human existence: how much of our lives is spent securing food. American households spent about 50% of their budgets on food in 1870, about 15% in 1970. We could add the maternal death rate during childbirth, which fell 99% from 1900 to 1970, and we could add the share of homes with indoor plumbing or electricity, and we could add workplace safety and the decline of workplace mortality by more than 80% in that period, etc and etc and etc. That all constitutes genuinely revolutionary progress, and once you see its scale you can’t unsee it.

And so I have to say, as I so often and so tediously do, that I don’t think 3D photos of babies in utero constitute evidence of having reached a new period of human existence - nor, indeed, does the iPhone. Smartphones are the devices that are most often invoked when people defend the notion that we’re living in a technologically fertile period. But how many of their affordances are things that did not exist prior to their invention? Telephones were more than 130 years old when the iPhone was first released, portable telephones 35 years old. Portable televisions were first available in the 1950s. The first portable GPS (or GPS-like) product was released in the 1980s; the first portable camera a hundred years before that. Text messaging and email are the grandchildren of the telegraph and telegram; their truly revolutionary change of near-instantaneous communication (crossing the Atlantic in a fraction of a second) came in the Victorian era. Of course all of this is cheaper, more flexible, more powerful, better integrated, and more convenient than it was in its discrete forms, to say nothing of more portable. But in many ways, smartphones generally and the digital revolution specifically are the epitome of refinement culture, not of invention.

You know my saw on all of this, so much so that I’m going to put some of the historical context in this footnote, to spare longtime readers.1 The upshot of all of this is GDP growth and productivity growth that are half of what they were in the postwar period, a condition that has bedeviled economists for decades - and which the internet was supposed to fix.

Honestly, a ton of what we’ve developed in my lifetime amounts to scaling up the delivery of information and entertainment and the frictionlessness of certain financial transactions. These are real improvements! They’re not nothing. (The impact of so much information and entertainment at this point seems to be clearly malign, but that is a topic for another day.) But compare them seriously to what came before and the disproportion becomes almost embarrassing. The fundamental architecture of daily material life - how we heat our homes, how we move from place to place, how we grow and store and cook food, how we build structures - has changed remarkably little since 1970. Yes, medicine has progressed a great deal, but look at those charts above; the vast majority of the work of reducing deaths from disease and increasing longevity was accomplished long ago. A person transported from 1926 to 1976 would find the world nearly unrecognizable. A person transported from 1976 to 2026 would find it, after some orientation, quite familiar. The cars go to the same places. The planes aren’t even marginally faster. The houses are built the same way. People still die of cancer.

I’m a big fan of maintenance and I think it gets a really bum rap compared to innovation, which is our culture’s obsession. But it’s funny that that’s the case, given how much less innovative the 21st century has proven to be compared to the 19th and 20th. Things could be a lot worse! I’d rather be living in 2026, enjoying the benefits of that long-passed fertile period, than living in the teeth of all that incredible innovation in the 1910s, watching thousands die of the Spanish flu. I just think people should be clear-eyed about the era they’re living in. What modern invention would you really take over indoor plumbing, or painkilling medication, or the airplane? I think any honest person would have to say, none of it. No, you would not trade food refrigeration for TikTok. No, you would not trade routine handwashing as a mass phenomenon for the OLED TV. And no, you would not trade the EKG for ChatGPT.

On some level, I think people understand this. So there’s this kind of weird… tongue-in-cheek, I guess? quality to a lot of this, which makes it hard for me to parse how serious anyone is about it. Your Sams Altman and Darios Amodei are circus barkers whose net worth is directly dependent on getting you to believe their spiel, so I’ll leave them aside. So many people talk about this stuff through the argot of the 21st century, defensive irony and jokiness, that I’m not sure if people really believe they’re going to be running their own private asteroid mine in five years or not. Even someone as earnest as Scott Alexander… I don’t know how serious he is about the idea that humanity is on the brink of a new epoch. Or maybe we’re past the brink? Dee - who is far from alone in her eagerness to declare the next era of human progress, obviously - says

we’re already the world of tomorrow.

In 2020, I said the real culture war was about technology. This has been true, arguably, since agriculture, since the alphabet, since reading, since the printing press, since the Industrial Revolution. Mary Harrington has since argued the singularity has already happened. Mary is right.

The revolution you are waiting for is over, and you are, for better and for worse, one of its children.

Alright, but… is the Singularity all that it’s cracked up to be, if most people didn’t even notice it happened? Most people haven’t really countenanced the Holy Singularity because they’re too busy trying to get the kids to school and to lose those stubborn ten pounds and to make dinner without the chicken drying out too much, just like their parents once did. Maybe I’m not meant to take this too seriously; the title of the post is “It's Too Late, Techno-pessimists. We Are As Gods.” The good news (or maybe bad, depending on your point of view) is that I can’t think of an experience more likely to disabuse you of the idea that you are as gods than having a baby. It’ll humble you in the best, deepest ways. And Claude can’t change the baby’s diaper or get him to eat his mashed peas.

I have really been trying to avoid talking about LLMs, or if you must, AI. But things have gotten kind of weird lately. There’s an unsettled quality to the discourse right now; we were briefly in “It’s cringe to believe in AI,” now we’ve swung back to “It’s cringe not to believe in AI,” but no one seems to share the same conception of what believing in AI entails. The influence of programming looms large here, as it has over the broader culture for some time. We were in another lull of disappointment in what LLMs can do, and then Claude Code came out, and suddenly everyone’s promising us asteroid mines and radical life extension and abundant clean energy again. But this is a category error: none of those things can be achieved with code.

The most telling thing about the LLM moment is what this technology is actually good at. LLMs write code, generate images, produce music, summarize documents, draft prose… which is to say, they have achieved mastery over the exact domains that were already, by any sane measure, overprovisioned. Was anyone saying that we didn’t have enough digital writing, images, videos, music, video games, or applications, a few years ago? The core triumph of technological growth is taking scarcity and creating abundance. Well, LLMs create an abundance, that’s for sure. But there was already an abundance of text, online, and an abundance of images, and there’s that insane stat about something like 500 hours of video getting uploaded to YouTube every minute, and yes, there has been an abundance of code, of programs, of apps. And before we got these fancy new tools to produce more code, there weren’t a lot of people saying “Gee, what we need is more apps, the app store is too empty.”

The internet in 2022, before the ChatGPT wave broke, already contained more text than any human being could read in ten thousand lifetimes, more images than any eye could see, more music than any ear could hear. When I was a younger man, the get-rich-quick scheme du jour was to create the next great iPhone app, which led to a world of smartphone apps so wildly overserved that we all got tired of apps and no one has sincerely gotten excited about a new one in like ten years. And now… we get more. The scarcity that these tools have abolished, in other words, was not a scarcity anyone was actually suffering from. We did not need more “content”; we did not need to produce digital entertainments at a faster pace. We needed (and still need) cheaper energy, more housing, better cancer treatments, functional mass transit, and a replacement for the internal combustion engine people actually want to use. What we received instead was a machine that can write a cover letter in four seconds and generate a photorealistic image of SpongeBob jackin it. The question of whether this constitutes civilizational transformation should answer itself. Right?

This is the “bits are easy, atoms are hard” problem in its starkest form. Every task LLMs perform (some of which they do pretty well, like help write code) happens on screens, in files, in the virtual world that computation has always occupied. And the lesson of the last fifty years of digital technology is that software’s limits are the limits of the screen itself. Code cannot insulate your house; no algorithm has ever laid a water pipe; the internet has not built a single mile of high-speed rail. What our current stagnation shows, collectively, is that the improvements in material human life that matter the most - abundance in warmth, in calories, in clean water, in physical safety, in hours of freedom from labor - were all achieved by technologies that operated on atoms: steel, concrete, copper wire, chlorine, penicillin. The digital revolution produced real and genuine gains within its own domain, but it never breached that membrane between the virtual and the physical, and LLMs show no signs of doing so either.

Claude Code has genuinely transformed how programmers write software, which is great, but also largely beside the point: the biggest technological lessons of the 21st century are about the limits of code.

You have not heard any of the many, many excitable AI maximalists in the media address this reality, the bits vs atoms barrier, because they have no response that can preserve their intense attachment to the idea that the world is about to change forever. So they resolutely ignore this basic reality: most of the world is not computers. Most of your life is dependent on technologies other than computers. And, inconveniently, few arenas of human endeavor outside computing are seeing rapid development.

And so the grander promises (curing cancer, cracking fusion, colonizing Mars, achieving material abundance through AI-directed science) function less as predictions than as a kind of promissory theology, perpetually redeemable in a future that recedes as you approach it. The actual connection between a model that autocompletes code and a cure for pancreatic cancer is speculative in the most precise sense: the sense of having no demonstrated mechanism. AI has produced real if modest contributions to protein folding and drug candidate screening. These are genuinely good things. But the leap from “AlphaFold is sometimes useful to structural biologists” to “we are on the threshold of defeating disease” is not an inference supported by evidence but rather a narrative that a certain kind of mind finds emotionally necessary. And when you look at the pattern of these promises historically - fusion has been twenty years away for seventy years, the paperless office was supposed to arrive with the PC, every home was supposed to get a large 3D printer providing the plastic goods its owners once bought at Walmart - the most responsible explanation is not that the breakthrough is imminent but that each generation of technologists, confronting the gap between what their tools can do and what they wish they could do, fills that gap with imagination and calls it the future.

Dee mentions Ray Kurzweil and calls him prescient.

Ray Kurzweil was prescient about many things, and one of them is this: the merger has started. He predicted the outer layers of our neocortex would be wired to the cloud by the 2030s, extending human thought the way the last round of neocortical expansion produced us. But think carefully about what consumer technology alone already does. (And that’s just CONSUMER technology.) We have built ourselves a second nervous system.

“We have built ourselves a second nervous system”! This is the kind of sentence that sounds like revelation and means, on inspection, that you can look things up very quickly on your phone. We have indeed built ourselves a very fast library. That library has caused a lot of unhappiness, but certainly it’s a remarkable technological achievement. That achievement did not, however, eliminate tuberculosis.

And while we’re talking about Kurzweil and nervous systems, we should take time to point out his fundamental misapprehension of that system. Kurzweil has always had one goal, above all others: to avoid death. As a means to achieve this ambitious project, he has repeatedly invoked the desire to “upload” his consciousness to a computer. But this is folly: there is no consciousness that is distinct from the brain that houses it. Consciousness is brain, is tissue, is cells, is wetware. There is no discrete program that is the self that can be extracted from the brain and deposited into a conveniently durable chassis. To imagine a consciousness that can be housed on a floppy disc is to participate in a dualist fantasy of the kind that should have died out hundreds of years ago. Kurzweil has had this pointed out to him many times, but his desire to live forever apparently overwhelms his more rational faculties. The fantasy wins.

Dee dismisses “techno-pessimists” as people trying to stop something that has already happened. (Jasmine Sun goes with “AI populists,” a term I find a little inscrutable.) Perhaps I am a techno-pessimist, but if so, it’s only because I’ve been alive for most of the dispiriting past 50 years. “We were promised flying cars,” goes the cliche. But flying cars are at least possible; it’s just that they’re hideously inefficient and offer no advantage over our current boring-but-effective combination of cars and airplanes. We also were told to dream of time travel and faster-than-light travel, both of which are forever forbidden by elementary physics, and of colonizing distant worlds, which is forever forbidden by more factors than I can list. As Kim Stanley Robinson and others have pointed out, that last bit is essential, because if we recognize that we only have one world to live in, we might become better stewards of it. And that’s why I’m a techno-pessimist in general. Though I’m frequently accused of hoeing this particular row because I like disillusioning other people, I am instead trying to make this reality clear: we cannot sit back and wait for technological progress to save us. The only solutions to our problems - the problems of hunger, of poverty, of injustice, of disillusionment, of alienation - are political solutions. I understand feeling totally defeated by that idea, given what politics is like on this planet. But it’s all we have. We start to build the political structures that can enable humanity to take care of all of us or we drown. There is no fate but what we make.

Whatever you think of my motives, I will not stop pointing out that we are still here, still in this boring muck, still circling the parking lot at Target looking for a space. And until and unless the usual suspects can produce actual evidence of something happening right now, the skeptic’s work is not over. They promise AI will cure all disease; AI has not cured a single disease. Ezra Klein routinely throws around 20% economic growth as a baseline for the AI age; these few years with LLMs have produced the same anemic ~2% growth as we’ve been used to in this, the digital century. And I still say, wake me up when that changes. My techno-pessimism is a pessimism grounded in a fact derived from the historical record: that civilizational-scale technological transformation is extraordinarily rare, that it happened once in a rapidly receding extraordinary century, and that we have been living in its long shadow ever since. And now some mistake that shadow for the sun.