The AI Dilemma


Hello. It’s 2026.

Currently, we are three years into fearmongering that AI is replacing programmers. Every single product in existence is shoving AI down your throat whether you like it or not. Tech celebrities are making clickbait YouTube videos about how the latest AI models mentally and physically broke them. We are witnessing mass psychosis induced by AI relationships and chatbots who will happily tell you everything you want to hear and confirm all your biases. We are living in a hellscape where people can prompt Grok to undress women and children on Twitter, which I’m fairly certain is illegal, but I’m no lawyer. Also, RAM costs a fortune because chip makers are abandoning consumer markets to fund AI projects, but at least we have funny anthropomorphic cat videos to keep us entertained.

None of this is normal.

To be clear, I don’t completely hate AI. I occasionally use AI for coding, and it’s been especially useful in helping me learn concepts or fix things around the house. At the same time, I strongly feel that it was irresponsible to release this technology to the public before dealing with the ethical ramifications. It all feels like a big for-profit experiment that we didn’t consent to.

We benefit from AI in some ways, sure, but it’s also been incredibly harmful and has eroded our trust in the veracity of nearly all online content. I miss the old days of the internet when I could read something online or watch a video and take comfort in knowing it was at least produced by a real human being, and that faking things required a certain level of finesse that the average child had not yet developed. I miss arguing with real people on the internet, rather than ragebait bots. I miss forums where you could chat with strangers and feel like you were connecting. Now I can’t tell the bots from the humans anymore. I have to choose between breaking my grandmother’s heart and revealing that the funny Facebook video she sent me is actually AI, or protecting her innocence and allowing her to find joy in slop. I hate that every em dash is now scrutinized. I hate this gnawing suspicion in the back of my mind that nothing I read or watch is real anymore.

Anyway, ethics shmethics. That’s not the real topic of this essay. For the past few years, people who sell AI products have warned that AI is going to disrupt the entire workforce and replace our jobs. But there’s one flaw in this line of thinking that I simply haven’t been able to wrap my head around.

Something big is happening

A few days ago, Matt Shumer wrote a now-viral article titled Something Big Is Happening, in which he warns of AI's looming impact on the labor market. He argues that AI is accelerating at a rapid pace, iterating on ever-better versions of itself, and will eventually replace your job. He writes:

Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he’s being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It’ll take some time to ripple through the economy, but the underlying ability is arriving now.

Let’s follow this to its logical conclusion. Suppose the AI revolution does materialize and 50% of entry-level white-collar jobs disappear. Then what? Eventually, 50% of mid- and senior-level jobs will disappear too. You can’t have experienced workers if you’re not willing to hire juniors and train them to become seniors.

Matt also warns:

If your job isn’t mentioned here, that does not mean it’s safe. Almost all knowledge work is being affected… I think the honest answer is that nothing that can be done on a computer is safe in the medium term. If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it. The timeline isn’t “someday.” It’s already started.

That’s concerning if true, especially since most of the American economy is propped up by a handful of tech companies that are circulating money in… well, a circle.

Sorry, I just want to get one thing straight before we continue: What does Matt do for a living? His LinkedIn bio says he’s the co-founder and CEO of “an applied AI company building the most advanced autocomplete tools in the world, powered by large-scale AI systems like GPT-3.”

Hmm. So the owner of an AI company—who stands to profit from the success of AI—says that you must embrace AI now to improve your future job prospects? But also, that AI is simultaneously a threat since it can do our jobs better than we can? Therefore we should… adopt it faster to accelerate our own eventual replacement? Okay got it, just making sure.

I’m curious: Is Matt’s job as the CEO of a GPT-wrapper company safe from being replaced? After all, he wrote at length about how current-generation AI models are capable of doing hours or even months’ worth of software development work on their own:

I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.

If AI can create all this impressive software with a mere prompt, why would anyone buy his services instead of spending $20 on Claude Code to replicate the same services locally?

For now, let’s just assume this job title is safe, but AI is here to take everybody else’s jobs. What does Matt suggest we do to get ahead of the curve?

If you’ve always wanted to write a book but couldn’t find the time or struggled with the writing, you can work with AI to get it done. Want to learn a new skill? The best tutor in the world is now available to anyone for $20 a month… one that’s infinitely patient, available 24/7, and can explain anything at whatever level you need. Knowledge is essentially free now. The tools to build things are extremely cheap now. Whatever you’ve been putting off because it felt too hard or too expensive or too far outside your expertise: try it. Pursue the things you’re passionate about. You never know where they’ll lead. And in a world where the old career paths are getting disrupted, the person who spent a year building something they love might end up better positioned than the person who spent that year clinging to a job description.

So let me get this straight: If you’re a skilled laborer—which almost everyone is—then you’re replaceable. To make the most of a bad situation, you should use AI to… learn skills? Sure, learning new things is objectively good, and something that many people already do. But if skilled labor is being replaced, what will learning those skills get you at the end of the day other than personal growth? Surely not a job.

Think practically for a second. An AI just replaced your job, and your solution is to pay that same AI company to learn the very things you won’t get paid to do? You should be looking for a new job, not continuing to fund your own unemployment.

Maybe software professionals are doomed after all, but according to Matt, so is every other domain that requires a deep understanding of systems:

The experience that tech workers have had over the past year, of watching AI go from “helpful tool” to “does my job better than I do”, is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I’ve seen in just the last couple of months, I think “less” is more likely.

So that leaves us with what, menial labor? The kind of work that barely pays a living wage, leaving you with no disposable income, unable to afford modern luxuries like running water and a Claude Pro subscription?

During his 2025 interview with Lex Fridman, Google’s CEO Sundar Pichai suggested that AI could free up our time, allowing us to focus on more creative pursuits:

Yeah, I think, ultimately, it gives more time for us humans to do the things we humans find meaningful. I think it scares a lot of people because we’re going to have to ask ourselves the hard question of what do we find meaningful? I’m sure there’s answers, and it’s the old question of the meaning of existence. As you have to try to figure that out, that might be ultimately parenting, or being creative in some domains of art or writing, and it challenges to… It’s a good question of to ask yourself like, “In my life, what is the thing that brings me most joy and fulfillment?” If I’m able to actually focus more time on that, that’s really powerful.

Except, not only do the arts notoriously pay poorly, but they are also among the first fields AI has already disrupted. Writers, artists, filmmakers, and musicians have arguably been harmed more by AI than any other profession so far. If the end goal is for us to transition to more creative work, why did the AI revolution begin by replacing those creative and fulfilling jobs first? It’s easy to wax poetic about a utopian future where we’re all frolicking about and enjoying life. It’s also delusional; nobody’s going to pay you to do that.

Maybe I’m being too pessimistic here. Maybe Matt’s got the right idea: You should use AI to upskill yourself while you can and launch a business or build the app you’ve always dreamed of. Maybe then you can make some real money. No more slaving away at your nine-to-five job. The clankers can do that for you, and they won’t complain.

But in this future where AI takes all the skilled labor, who is going to buy what you’re selling? With what money? Oops, AI took my job, and now I’m burning Claude Code credits trying to launch a multi-billion-dollar AI startup that sells AI products to consumers who can’t afford to buy my product because they’re all trying to do the same thing. Oops, AI took all the jobs and we have no money for rent and the landlords are landbroke. Oops, 50% of the population is on unemployment benefits and can’t pay taxes, so the government also can’t do its job. Time to cull the Unproductives. Trust me, bro, this will all sort itself out in the end.

It’s just not sustainable

In case it’s not already painfully obvious: You can’t have an upper class without a working class, and you can’t have a working class without jobs. I thought we figured this out a long time ago. And I suspect the people who are pushing for this fully autonomous future understand that. They just want to hype it up, capitalize on the funding in the short term, and then bail when the AI bubble bursts.

There must be a solution to all of this that doesn’t involve mass unemployment. Is it universal basic income? A rejection of AI and aggressive protection of workers’ rights? A dystopian future like the one from WALL-E where we lounge about and regress as a species while robots do everything for us?

I don’t know. But I do know when I’m being lied to.