What’s up with the people who say LLM-driven development doesn’t work for them?
This has really been nagging at me. I’ve been puzzling over it, and I’ve got a theory. But let’s start with the explanations I rejected.
It doesn’t actually work, they’re right.
No. It’s just not true. I’ve used these tools, and I’ve seen what they can do. Friends and coworkers — people I respect and whose skills are known good to me — use them productively. Prominent public developers use them enthusiastically. I can’t disbelieve the evidence of my own eyes plus the testimony of my respected peer group plus the testimony of prominent public figures all at once. LLM-driven development absolutely works.
(If you’re in the skeptic camp and don’t find this argument convincing, well, it’s because I’m not trying to convince you right now. I’m trying to understand you. But accept that I’m not lying, at least, even if you still think I might be wrong.)
They just hate AI for ideological or other reasons, and are BSing.
In the fever swamps of public AI discourse, this definitely describes some people. Just as LinkedIn is full of a bunch of psychotic weirdos posting elaborate fantasies about what AI can do, Bluesky is full of a bunch of psychotic weirdos posting unhinged screeds about its evils. Some of those people are software developers, and they have their identity tied up in insisting it can’t work. If your HN username is something like “llmsaresatan,” well, you might not be coming to this with an open mind.
But I don’t think this is most people. There are a lot of people who are genuinely open-minded, have tried these tools out, and still concluded that they don’t work for them.
They’re using the wrong tools.
This is real. Sometimes I talk to people who don’t get all the fuss, and it turns out they’re using Copilot, or they’ve just been pasting code back and forth from a ChatGPT window. If you haven’t used Claude Code, Codex, or Cursor, you’re not talking about the same experience everyone else is. (And there are people who’d disagree with me about including Cursor there.)
But again, this isn’t everyone; there are people who’ve used the good tools and still found them lacking.
They’re doing it wrong.
For a long time, I believed this was the most common explanation. Back in early 2025, at the dawn of agentic dev tools, I was finding AI-driven development to be nearly miraculous, but when I’d talk to other people, I’d hear more mixed takes. Pretty much inevitably, as we’d talk through how we used it, there’d be things they were doing where I could redirect them to change their approach and get it to work better for them.
(The most common culprits: Giving it too-large tasks, rather than breaking things down into digestible chunks; getting discouraged if it didn’t one-shot a solution, rather than working with it to refine its original approach into something better.)
But as of GPT 5.2/Opus 4.5, I don’t believe this anymore. It’s too easy to use these tools now; you don’t need to be particularly sophisticated in how you use them at this point. You can give your tool seemingly-too-big tasks, and it’ll just do them. You can expect it to one-shot things and get good results some reasonable fraction of the time.
(As an aside, I think that this explains a lot of the recent “I thought it was fake before, but now I think it’s real” essays — those are the people who were doing it wrong, but the underlying tech changed enough that now they’re doing it right, all without changing anything. This whole field is changing so fast.)
But if none of those reasons explains it, what’s left? Here’s my theory.
They’re bottom-up programmers.
A top-down programmer generally has an idea of what they need the solution to look like: They know how their code is going to be structured, they have a rough idea of what their APIs look like, they know what shape the data has to be, they just need to fill in the blanks and color in the outlines.
A bottom-up programmer doesn’t. They start writing the code, and the structure emerges from it. As they write, they find that the code they’re writing suggests the existence of a function, and so they refactor their code into that function. As they get tempted to copy/paste, they realize larger structures exist, and they will those into existence. They hit a wall where it’s not clear how to proceed, realize that they made a bad assumption, and evolve their design. It all happens emergently, with structure rising up from the code.
These are both valid approaches to writing code, for different people at different times.
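To make the contrast concrete, here’s a hypothetical sketch (an invented log-counting task with made-up names, purely illustrative) of where each approach starts from:

```python
# Hypothetical illustration only -- invented task and names.

# Top-down: the shape is decided up front; the bodies are "obvious blanks"
# to be filled in, by you or by a tool you can direct precisely.
from dataclasses import dataclass

@dataclass
class LogEntry:
    level: str
    message: str

def parse_line(line: str) -> LogEntry:
    """Split 'LEVEL: message' into a LogEntry."""
    level, _, message = line.partition(": ")
    return LogEntry(level=level, message=message)

def count_errors(lines: list[str]) -> int:
    """Count how many parsed entries are at ERROR level."""
    return sum(1 for line in lines if parse_line(line).level == "ERROR")


# Bottom-up: you start with concrete code, and only once you notice the
# repeated 'line.partition(": ")' dance do you realize a parse_line()
# function (and maybe a LogEntry type) wants to exist -- structure you
# couldn't have specified before writing this.
def count_errors_bottom_up(lines: list[str]) -> int:
    errors = 0
    for line in lines:
        level, _, message = line.partition(": ")
        if level == "ERROR":
            errors += 1
    return errors
```

The top-down version starts from a shape you could hand to someone (or something) else to fill in; the bottom-up version only discovers that shape after the concrete code exists.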
Early in my career, I was a bottom-up guy. If I tried to do top-down, I’d end up just creating a mess, with lots of unnecessary abstraction — maybe some “design patterns” thrown in there for good measure — that didn’t actually solve the problem well. And I saw other people’s alleged architectural designs hit the same pain points. In response, I decided that top-down architecture was basically a scam, and that the important thing was to write code that worked and refactor it aggressively as I wrote it, so that what emerged was a) good, and b) no more complex than it needed to be.
This worked for me, and I was successful with it.
And then I kept doing it for long years and yea verily even unto decades. When you do a thing for long enough, you eventually get to a point where you just naturally start to see patterns. I knew where solutions were going, and didn’t need to write the code to see how it should be written. The structures started becoming obvious to me, and I just had to fill in the blanks. Turns out that top-down design wasn’t a scam after all.
(And to be clear: I’m not always right about how the code should be structured. Sometimes I’ll be surprised! And like anyone, I benefit from bouncing my ideas off other people and refining or revising them — but in general… yeah, a lot of code ends up structured more or less like I’d expect.)
And so as a top-down guy, I’ve been bored by writing a lot of the actual code: It’s tedious, it’s just filling in the obvious blanks with the obvious thing. And now along come LLMs that can fill in those blanks at my command, and it’s incredible. I can come up with the top-down design, and then wave my hand and all the boring, obvious code appears.
But if I were still a bottom-up guy, that wouldn’t work. I wouldn’t know what the code was supposed to look like until after I wrote it. I’d have to write the code to get there. If I tried telling the robot to write it for me, I wouldn’t be able to give it useful directions; and if it wrote something that wasn’t where I would have ended up, I’d know it was wrong, but I wouldn’t even know how it was wrong, and couldn’t prompt it to make it better, because I wouldn’t know yet what “better” looks like.
And so this is my theory: That the people who don’t see the utility in LLM-driven development are the bottom-up people, the ones who learn about the code by writing it.
But it’s possible I’m full of shit. I hope this pops on Hacker News, so that I can get some feedback about whether this framing resonates with anyone. And be told I’m an idiot in two dozen ways as a special bonus prize.
These words were written by a human.
Since it comes up on every post about AI, I’ll note that none of these words were written by AI. The em-dashes are here because I’m also a typography nerd, and made sure my (largely LLM-written) blog software did em-dashes and curly quotes.
I’m a top-down coder now, but I remain a bottom-up writer; I’ve never once written an essay where my thoughts weren’t shaped by the process of writing it, and can’t imagine writing any other way. If you’re the same way about code, I get it; and maybe this bottom-up/top-down dichotomy will help you understand the people who are making inexplicable-to-you claims about these tools.