We don't practice fucked


A story. From Ron Jaworski’s excellent The Games That Changed The Game. Jaworski’s in Indianapolis, watching the inimitable Peyton F. Manning taking snaps in practice. A lot of snaps. So many snaps that, after a short while, it becomes clear that the Colts do not plan on anybody whose name is not Peyton Manning having a football hiked to them in the venerable Lucas Oil Stadium, on that night or any other.

The author asks the Colts’ OC, Tom Moore, why not let the backup take a few? In case Manning goes down? And Moore replies:

“Fellas, if 18 goes down, we’re fucked. And we don’t practice fucked.”

» are you worried about your job after AI?

There are a frightening number of people who think that this generation of LLMs is a toy, or merely a hype cycle. There are a frightening number of people who think that they're a butterfly's whisper away from achieving total sentience and blasting off. I think that both of these kinds of people are, to put it kindly, fundamentally mistaken. But there is a third kind, perhaps most confused of all. These folks' stance on AI can generally be summed up like this:

I'm nervous about AI taking my job. We're heading away from five engineers at desks and towards one engineer tabbing between four Claude Code sessions, and it's going to squeeze me out of a job.

This baffled me at first; baked into this statement are two mutually exclusive ideas:

  1. LLMs and their harnesses will become good enough at producing software and systems of software to outright replace human engineers
  2. The world as we understand it today still exists

I’ll explain why these ideas are incompatible, and why you shouldn’t plan for the worst case scenario, by means of the worlds of William Jevons and Philip K. Dick.

» jevons and philip k. dick

» william jevons’ paradox

Jevons’ Paradox says:

When technological breakthroughs make a resource more efficient to use, you'd expect consumption of that resource to decrease. But when demand for that resource is highly elastic, more of it ends up being used.

Jevons was an economist during the clacking, thundering heart of the Industrial Revolution. He saw coal-powered engines become more efficient, but instead of seeing coal usage drop, it grew. Which, of course. Running trains makes you money. Running more trains makes you more money. If it gets cheaper to run trains, you’ll run more of them.
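To make the mechanism concrete, here's a toy calculation with invented numbers (the efficiency gain and elasticity below are illustrative assumptions, not historical figures): engines get twice as efficient, and demand for train-miles is highly elastic.

```python
# Toy illustration of Jevons' paradox. All numbers are made up.
# Engines become 2x more efficient, so the effective price of a
# train-mile halves. With a price elasticity of demand of -1.5,
# demand scales by (new_price / old_price) ** elasticity.

efficiency_gain = 2.0   # half the coal per train-mile
elasticity = -1.5       # demand is highly elastic

price_ratio = 1 / efficiency_gain                      # price of a train-mile: 0.5x
demand_multiplier = price_ratio ** elasticity          # train-miles: ~2.83x
coal_multiplier = demand_multiplier / efficiency_gain  # coal burned: ~1.41x

print(f"train-miles grow {demand_multiplier:.2f}x")
print(f"coal use grows {coal_multiplier:.2f}x despite the efficiency gain")
```

The punchline is the last number: each train-mile burns half the coal, but there are so many more train-miles that total coal use still rises.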

In Jevons’ world, there’s no conception of autonomy. LLMs are coal; or, in modern terms, LLMs are ls. What does it mean for ls to exist outside of a system driven or otherwise orchestrated by a human? Nothing at all. ls is a blob of binary data. It is a tool.

» philip k. dick

Philip K. Dick's future has the machines slurping noodles from a crowded cart, speaking Chinese, having quietly tense moments in urbane-yet-lifeless stamped-out apartments. It has buzzing advertisement drones which follow you around like imprinted ducklings and sell you products with names like Can-D or Life-O-Spray.

In Dick’s world, machines are always entities unto themselves. They may exist within a larger system, but they have a creaky autonomy and clearly can be said to have a will and existence outside of that system, too. Your new dishwashing robot might be phoning home to Bosch HQ, but its aimless wandering around your Martian hovel or spree of violence after popping a vacuum tube is in a very meaningful sense an expression of an internal will.

These are not the same world. My thesis is that in Jevons' world, the developer will flourish. In Dick's world, life would be so fundamentally changed that the premise itself would be meaningless.

» in jevons’ world, you’ll have a job

Jevons’ world, at its core, is about swaths of market inefficiencies being created when some commodity becomes cheap. If you live in this world, this is all that LLMs are: Tools that create market inefficiencies.

Think about Salesforce. Most folks I know who interact with Salesforce on a regular basis have a deep, seething hatred for it. “Deep, seething hatred” is often “market inefficiency” wearing a mask. If people hate something, there should be an opportunity to replace it, and to make money doing so.

But with Salesforce, there was no market inefficiency. Replacing it is too much effort and risk. Writing a custom tool for one business is too little reward. There is no incentive, and so there is no inefficiency. Well – there was no inefficiency. Now? Things like “replace the pieces of Salesforce that Aunt Margaret actually uses for her towing business” could be a couple weeks of work for a strong engineer1.

This is how Jevons' paradox works. This work literally did not exist before LLMs; tally it up however you want, the sum total of work available before these inefficiencies opened up is less than that same sum after. If you try to apply the old before-sum to the new world, it will not add up.

These kinds of market inefficiencies are a perfect fit for the superpower of current LLMs; they are incredible at letting you operate in unfamiliar spaces if you’re past some threshold of competence. If you understand software, but not a domain, they’ll get you up to speed on the domain part like that.

They are not a panacea; they are not a silver bullet. This mostly applies to folks who are already strong engineers. But if you feel comfortable with designing and deploying fairly simple, mid-to-low traffic web applications, your opportunities just exploded.

» in dick’s world, there are no jobs

In Dick’s world, LLMs are autonomous or semi-autonomous. You drop them into the world, or into a digital sandbox, and let them go. This is not to say that they are undirected.

As Simon Willison points out in his excellent blog, coding agents are secretly general purpose agents. True, they can only really operate on text, but lucky for us the bedrock of computing has been streams of text since before the Grateful Dead started playing country tunes and stomping around Europe.

The problem with this is that the world where an LLM can automate (truly automate), say, triaging bugs from your support portal and pushing fixes back to main is also the world where an LLM can automate half or more of the economy. Because streams of text are really, really powerful.

In software terms, you could imagine a persistent process which receives bug tickets from your support portal, makes a research plan, and then fixes the bug and commits it back to the repository. This could probably exist now, but it would be bad, it would require human intervention, and it almost certainly would be more trouble than it was worth2.
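The shape of such a process is easy to sketch; making the stubs below actually work is the hard part. Every name here is hypothetical, and a real version would replace each stub with calls to an actual model and your actual ticketing system:

```python
# Hypothetical sketch of the bug-triage loop described above.
# All functions are invented stubs standing in for real integrations.
from dataclasses import dataclass


@dataclass
class Ticket:
    id: int
    description: str


def fetch_open_tickets() -> list[Ticket]:
    # Stub: a real version would poll the support portal's API.
    return [Ticket(1, "crash when uploading a 0-byte file")]


def make_research_plan(ticket: Ticket) -> str:
    # Stub: a real version would ask the model to localize the bug.
    return f"reproduce #{ticket.id}, bisect, patch the upload handler"


def attempt_fix(plan: str) -> bool:
    # Stub: a real version would edit code, run tests, and open a PR.
    return True


def triage_loop() -> list[int]:
    """One pass over open tickets; returns the ids it claims to have fixed."""
    fixed = []
    for ticket in fetch_open_tickets():
        plan = make_research_plan(ticket)
        if attempt_fix(plan):
            fixed.append(ticket.id)
    return fixed


print(triage_loop())  # → [1]
```

The skeleton is trivial; the entire difficulty, and the entire point of the argument, lives inside `make_research_plan` and `attempt_fix` working without a human in the loop.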

Imagine that this did work, though. You spin up the process and pretty much forget about it; its PRs get reviewed. Your software becomes better; the system works flawlessly and doesn't require you to fix a bunch of stuff the LLM fucked up. This is insane. This is a world where white collar workers start getting fired, because it makes no economic sense to keep a hundred people who have to eat, sleep, shit, go to the doctor, have health insurance, slack off, etc. when you can replace half of them with an API. I don't need to belabor this point. It should be self-evident why, in this world, white collar workers will get nuked.

And thus: What do you do?

If you think we live in Jevons' world, you should learn the tools. The field moves pretty fast, but it's not too bad. Ignore the noise, ignore the vibes, ignore the slop, and focus on finding simple ways to automate the tedious parts of whatever it is you currently do. That's it. You'll figure out what's useful pretty quickly, and you'll make it. I promise.

But Dick’s world? Don’t plan for Dick’s world. In such a world, the economy, the world order, the notion of what it means to even be someone who exists inside the current middle and upper-middle class, all of these things are gone. Moot. Start over, figure it out from scratch. You can think about this world, think about what it might be like or what might be valuable, but the real answer is that you have no fucking clue. And neither does anyone else.