I’m tired, y’all. I’m tired of doing both of our jobs.
git commit -m "vibe coded ¯\_(ツ)_/¯"
AI-assisted development tools are here, and they are genuinely useful. They can speed up boilerplate, help explore unfamiliar APIs, suggest patterns, and reduce friction. Used well, they make good engineers faster.
Used poorly, they create risk —* technical, operational, and cultural.
This post is a reminder of a simple but non-negotiable rule:
AI is a tool. You are still responsible for the code you ship.
Signing Your Name Still Means Something
AI raises the bar for diligence
If you commit code under your name, you are accountable for it, regardless of whether it was typed by you, generated by an AI model, copied from Stack Overflow, or adapted from an example.
That means if you send me your code for a review, I am going in with the expectation that:
- You understand what the code does
- You have reviewed it fully
- You are confident it behaves correctly, securely, and maintainably
- You are prepared to explain and defend it
Using AI does not change that contract. It never has. If anything, AI raises the bar for diligence, because it can produce output that looks confident, complete, and reasonable while being terribly — or sometimes worse, subtly — wrong.
When you push a commit, you are saying “I am [state your name], and I approve of this code.”
The Real “AI Bubble”
AI isn’t the bubble. We are.
The world is worried that “the AI bubble is about to pop”, and that it will be bad when it does. Honestly, they’re probably right. But AI isn’t the bubble; WE are. IMO the AI bubble isn’t really about model capabilities or market hype, though those are a contributor and a symptom, respectively.
The real bubble is over-trust. For us programmers, WE become the bubble by:
- Treating generated code as “probably fine”
- Skipping deep review because it compiles or the tests pass (AI isn’t afraid to run yarn test -u to update snapshots and force its broken tests to pass)
- Assuming correctness because the output sounds authoritative
- Offloading judgment instead of accelerating it
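That snapshot-update trap is worth spelling out. Here’s a toy sketch in Python (not Jest or any real snapshot library; render_price and snapshot_test are invented names for illustration) of why “update the snapshot until the test goes green” proves nothing: the snapshot simply records whatever the code currently does, bugs included.

```python
def render_price(cents):
    # Buggy implementation: integer division drops the cents entirely.
    return f"${cents // 100}"

snapshot = {}

def snapshot_test(name, value, update=False):
    """Compare value against the stored snapshot. With update=True
    (the moral equivalent of `yarn test -u`), overwrite it instead."""
    if update or name not in snapshot:
        snapshot[name] = value  # blindly accept current behavior as "correct"
        return True
    return snapshot[name] == value

# The first run records the buggy output; every later run "passes".
assert snapshot_test("price", render_price(1999))
assert snapshot_test("price", render_price(1999))
# The test is green, yet the rendered price is wrong ("$19", not "$19.99").
assert render_price(1999) == "$19"
```

A green snapshot test only says “the output hasn’t changed since someone last blessed it” — the diligence is in who does the blessing, and why.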
That’s how tools stop being productivity multipliers and start becoming liability multipliers.
Speed Mismatches Are a Smell
If individual contributors appear 10× faster while reviewers and leaders must slow down 100× to ensure nothing breaks, something is wrong.
If individual contributors appear 10× faster while reviewers and leaders must slow down 100× to ensure nothing breaks, something is wrong. Even worse, it seems like most senior leadership across the industry doesn’t even REALIZE much of this. They see you going faster (which is all they wanted all along), but they see me at 1/100×. They don’t know that I’m the reason the app didn’t go down today, and, with seemingly increasing frequency, I’m the one getting called in for an incident when it does. I’m the one who looks bad when I’m pushing back on your “vibe coded” 15,000-line merge request (yes, that was an actual PR). And when things go down, I take the blame, because protecting the team is part of my job. But I know, even if you don’t seem to: Claude didn’t cause that incident by writing bad code; “you” [the ambiguous you, not actually you-the-reader] caused that incident by contributing bad code.
As principal / lead, I’m willing to be ultimately accountable for our team’s work. I signed up for this, and I’m proud of it, even when things go wrong because I know I’m surrounded by a team trying their best. But if that team is just pushing slop, it just piles more on me — more work, more pain, more stress — and I’m tired, y’all. I’m tired of doing both of our jobs and being the one accountable for it when you don’t.
That imbalance means:
- Risk is being pushed upward instead of owned locally
- Review is compensating for missing diligence
- Trust is eroding, even if unintentionally: my trust in you, the org’s trust in me, the world’s trust in the industry
AI should make everyone more effective; it shouldn’t create cleanup work for others.
This Is About Stewardship, Not Policing
It’s a request for shared responsibility.
This isn’t an accusation, and it isn’t a mandate. I’m not even asking you to not use AI.
It’s a request for shared responsibility.
AI tooling is powerful, and many of us genuinely enjoy using it (I still remember when Copilot first amazed me by writing a Facebook API integration when the docs were wrong and the SDK was convoluted). But tools stick around only when teams and organizations can trust the outcomes they produce.
That trust is built when engineers:
- Treat AI like an editor with autocorrect, not an author
- Review everything they submit, every time
- Stand behind their work without caveats
- Protect the long-term viability of the tools by using them responsibly
Bottom Line
Use AI. Use it aggressively. Use it creatively.
Just don’t outsource judgment.
Please make sure that anything shipped under your name is something you fully understand and stand behind. That’s how we keep these tools — and the trust that comes with them — available to everyone.
The time will likely come soon that this post doesn’t age well, but until then, be “one of the good ones”.
* Note: I’m not AI; I just use em dashes. I always type --, but it autocorrects, and these days that makes me sound machine-like. I feel like I also use more parentheses than most people, but I generally use parentheses for “whispered asides”, while I use em dashes for emphasis/drama.