Stop calling it vibe coding

Remember when Andrej Karpathy coined “vibe coding” a lifetime ago, back in February 2025? He nailed what that brand-new moment felt like: you fully give in to the vibes, embrace exponentials, and forget that the code even exists. You don’t look at it; you just let it rip.

For a lot of us, that’s exactly how we started. The term captured something real about those early days of handing trust to something that might hallucinate your entire authentication system (nbd).

But a year of daily use changes things. The way most engineers I know actually work with these tools now—myself included—has evolved into something different. We’ve all collectively figured out what sustainable AI-assisted development looks like, and it turns out to be more structured than those early vibes.

I’m looking at almost every implementation. I’m catching the 1 out of 10 outputs that’s subtly wrong, the approach that will fall apart at scale, the pattern that doesn’t match how we think about the codebase. I couldn’t count the number of times I’ve had to redirect Claude Code away from an embarrassing PR and toward an approach that will hold up for years. That’s not vibes; it’s skilled supervision.

James Ide put it well in a recent thread: LLMs accept imprecision and create imprecision. You need to already have a vision for the end state and use the LLM to automate getting there faster. Experts who understand a problem will be amplified in positive directions. Novices who trust LLMs will be amplified in negative directions, becoming confident in wrong solutions.

This framing resonates with me. Everyone will use AI to write code; that’s not the question anymore. The question is whether you have the expertise to know when the output is wrong.

Here’s what that looks like in practice: while writing this post, I asked Claude to build an interactive demo for this section. It created a “spot the bug” quiz—three code snippets, one with a subtle syntax error, click to reveal. Technically impressive. Completely missed the point.

I had to redirect: “That’s not the skill. Agents fix syntax errors in seconds. The expertise is catching when it reaches for a sledgehammer when all you need is a rubber mallet. When it builds an abstraction that’ll be a nightmare to unwind. When it maintains backwards compatibility for code that should just be deleted.”

The irony wasn’t lost on me. I was doing the exact supervision the article describes, on the article itself. The quiz worked fine. The direction was wrong. And no amount of clever implementation would have fixed that.

The shame isn’t gone

I wish I could say we’ve moved past it. That devs have embraced AI tools without the emotional baggage, without the existential weight of watching something produce code on par with, or better than, what took us years to learn.

But that’s not true, and I think we’re doing ourselves a disservice by pretending otherwise. The shame and the grief are still there.

Think about what it means to have spent a decade, maybe two, cultivating a skill set. Leveling up through study and struggle, through late nights debugging, through countless Stack Overflow threads, through the slow accumulation of intuition about what makes code good. That effort meant something; it separated serious builders from everyone else, and there was real craft and pride in having earned it.

And now something can just do it, often better, in seconds.

I wrote about this feeling a few weeks ago: the longing for when software required deep understanding, when you had to earn your ability to ship. That longing hasn’t gone away. If anything, it’s intensified as the tools have gotten better.

Here’s the question a lot of developers are sitting with: do we lose some of our cognitive abilities if we’re reduced to reviewers? If we’re not writing the code, are we still thinking deeply about the problems? Or are we just pattern-matching against LLM output, slowly atrophying the muscles we spent years building?

I don’t have a clean answer. But I know the question is real, and I know a lot of people are feeling it.

The music problem

There’s a reason why not everybody made music. Learning an instrument, learning to compose, learning the theory and the feel and the craft of it, that’s what made music amazing. The barrier was the point. Somebody took the time to develop a skill that most people couldn’t or wouldn’t develop, and the result was art that meant something.

If you can simply prompt your way to a song, what happens to that?

I think it diminishes both the art and the skill, and I think in software we’re so close to the technology that we dull how that feels. We tell ourselves it’s different because it’s code, because it’s pragmatic, because shipping faster is obviously better.

But is it? Or are we rationalizing because we don’t have a choice?

The efficiency gains from agentic coding create an insurmountable advantage: you can’t compete with someone using these tools if you’re not using them yourself. That much is clear. But accepting the efficiency doesn’t mean we’ve earned it. It doesn’t mean we’ve processed what it means for the craft we spent our careers developing.

What we’re actually doing

So if “vibe coding” doesn’t describe it, what does?

I’ve been calling it agentic coding. The distinction matters: it’s using AI agents while maintaining the expertise and judgment that keep the output good, rather than letting it rip with zero validation.

Watch how experienced engineers actually work with these tools. They rarely type a prompt and accept whatever comes back; they plan first. They tell the LLM to ask clarifying questions before writing any code. They request architectural diagrams, implementation plans, a clear breakdown of the approach. They use something like beads to track the work in epics and tasks, blocking things based on what needs to happen sequentially. They define the shape of the solution before the agent writes a single line.

This is the opposite of vibes. This is structure, intention, a very clear definition of the approach the LLM should take, the constraints it should work within, the patterns it should follow. The planning itself is the work now, and the execution is what the agent handles.
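
To make that concrete, here’s roughly the shape of a planning prompt from my own sessions. The feature and paths here are invented for illustration, but the structure is the point:

```text
Before you write any code: read src/billing/ and ask me clarifying
questions about anything ambiguous. Then give me an implementation
plan for adding proration to subscription upgrades: which files
you'll touch, in what order, what changes to the data model, and
what could break. Don't implement anything until I sign off on
the plan.
```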

For production work, codebases that need to scale, and teams that need to maintain software over years, the job now is to supervise the AI output, catch the mistakes that look plausible, redirect toward approaches that will hold up, and bring the context and vision that the model doesn’t have.

This requires more expertise, not less. You need to know the subject deeply enough to catch errors at 10x the volume. A novice who trusts the LLM’s capabilities more than their own judgment will believe all 10 outputs are correct. An expert will spot the 1 that isn’t.
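
Here’s the flavor of that 1 in 10: a made-up sketch of agent output that type-checks, passes a happy-path test, and is still wrong (the API and types are invented):

```typescript
type User = { id: string; name: string };

async function fetchUser(id: string): Promise<User> {
  const res = await fetch(`https://api.example.com/users/${id}`);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

// Deduplicates concurrent requests for the same user. Reads clean.
const userCache = new Map<string, Promise<User>>();

function getUser(id: string): Promise<User> {
  if (!userCache.has(id)) {
    userCache.set(id, fetchUser(id));
  }
  return userCache.get(id)!;
}

// The subtle flaw: a rejected promise stays in the cache forever, so
// one transient 500 makes that user permanently unfetchable until the
// process restarts. Nothing here trips a type check or a quick skim;
// you catch it because you've been burned by it before.
```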

That said, vibe coding still has its place: side projects, experiments, POCs, hack projects where you genuinely don’t care about the code quality because you’re just trying to see if something works. Trying to control a Philips Hue light using a Line 6 Helix Stadium XL pedalboard? Let it rip.

But for everything else, we need a term that reflects the reality: this is a legitimate engineering discipline that requires skill and judgment, and the people who do it well will be the ones who can use the tools without losing the expertise that makes the output worth anything.

The optimistic framing is that we’re set free to build whatever we can think of rather than only what we could execute by hand. We can now focus on the problem instead of the implementation and ship at a pace that was impossible before.

I do believe that framing. I’ve lived it. The projects I shipped over the holidays wouldn’t have existed without these tools. Maybe in some alternate timeline, they shouldn’t have existed at all.

But freedom and loss aren’t mutually exclusive. You can be grateful for the efficiency and still mourn what it cost. You can use the tools every day and still feel the weight of what they’ve changed about your craft, your career, your sense of what it means to be good at this.

The code was never the point, maybe. But for a lot of us, it felt like it was. And that feeling doesn’t just disappear because the tools got better.