Engineering, fast and slow | undecidability


I can feel myself getting more impatient by the day.

I'm somewhat of a level 2 agentic engineer myself. And for those of us at the bottom of the agentic food chain, the productivity gains came slowly. But now, all at once, they are here.

The main reason they were initially hard to notice is that, before the Opus-4.5-era models, getting your problem solved by AI was still kind of a toss-up. Getting some weird error message when you try to start your local Dynamo container? Maybe GPT 4.1 knows why. Flips coin. Huh, guess not. I'll Google it myself. Jest test failing for seemingly no reason? Opus 4 is on it. Still broken? Fine, I'll look into it.

But something changed last fall. These models one-shot everything now. And that's awesome! I've been able to move really fast.

But sometimes you should move slow. Certainly when rigor is critical. But even if you just want to learn something new. When you're learning, you should be incredibly deliberate about what you lean on AI for. I'm learning Rust right now, and AI has absolutely accelerated my learning. Having this dynamic information retrieval system that can meet me where I am is revolutionary. And I tell myself, this is fine. I just need to stay in the driver's seat. But I've read the articles, and I know how tempting it is to flip on the autopilot and doze off.

I really felt this the other day when I was debating what project to build to further my Rust journey. I had written like 50% of a toy shell already, felt I had gotten everything out of that project that I was going to, and wanted to move on to a good systems-level project, something concurrency-heavy, so I could sharpen my skills in that kind of programming. I was bouncing some ideas off ChatGPT, and it said: "Write an implementation of the Raft consensus algorithm." So I looked it up and found the original paper where Raft was proposed.
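(For the curious: the heart of Raft is a small state machine. Every node is a follower, a candidate, or a leader, and an election timeout is what kicks a follower into candidacy. Here's a minimal sketch of that core in Rust, assuming the standard roles from the paper; the names and structure are mine, not from any particular library.)

```rust
// Minimal sketch of the per-node state in Raft. A real implementation
// adds a replicated log, RPCs, and timers — this only shows the role
// transition that an election timeout triggers.

#[derive(Debug, PartialEq)]
enum Role {
    Follower,  // default role; defers to leaders and candidates
    Candidate, // election timeout fired; asking peers for votes
    Leader,    // won a majority of votes for the current term
}

struct Node {
    role: Role,
    current_term: u64,      // latest term this node has seen
    voted_for: Option<u64>, // candidate id voted for in the current term
}

impl Node {
    fn new() -> Self {
        Node { role: Role::Follower, current_term: 0, voted_for: None }
    }

    // When a follower's election timeout elapses, it increments its
    // term, votes for itself, and becomes a candidate.
    fn start_election(&mut self, own_id: u64) {
        self.current_term += 1;
        self.voted_for = Some(own_id);
        self.role = Role::Candidate;
    }
}

fn main() {
    let mut node = Node::new();
    assert_eq!(node.role, Role::Follower);
    node.start_election(1);
    assert_eq!(node.role, Role::Candidate);
    assert_eq!(node.current_term, 1);
    assert_eq!(node.voted_for, Some(1));
}
```

The concurrency-heavy part, and the reason it's a good learning project, is everything this sketch leaves out: randomized timeouts, vote-request RPCs racing against heartbeats, log replication across threads or tasks.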

And I'm going to be vulnerable for a minute, because I'm incredibly embarrassed to admit it, but I found myself with a line of thought that on reflection has been deeply troubling: I don't have time for this. The goal is to learn, and the only way to learn is to wrestle with the material myself, but I don't have time for that. An 18 page paper? That will take... hours. Days, maybe, at least in the "more than one" sense.

I have no idea where that came from. I've been reading OSTEP, which is like 700 pages. I have time for that. But with a book, the reading is the goal. I didn't want to get in the driver's seat and implement an algorithm "from scratch". I wanted to be spoon-fed.

And it's because of this pressure, I think. Some teenager in Estonia is shipping 10,000 lines of code an hour. Some lunatic on my LinkedIn is telling me I'M GETTING LEFT BEHIND because I don't use something called a harness? to finish half of a side project and drop it. I can't spend a couple hours a day for a month building a Raft consensus library in Rust. We'll be in nuclear winter by then. Do I really want to spend my last days RTFM?

Fuck man. I was already behind. Now I need to pause to build something hard from scratch? How will I ever get better if I stop and take time to get better?

Maybe I'm just a moron. It's under consideration. But I want to put forth an idea: AI is a powerful drug. It's a shortcut to a dopamine spike, a sugar cube, just a little bump, a little pick-me-up. It's very hard to use responsibly.

We are the Odyssey's Lotus-eaters. We have been presented with this "tool", this Faustian device that eases the burden of menial toil. And we have decided as an industry to taste its flesh, sweet as honey, and drift peacefully to sleep.

Wouldn't that be ironic? There is a significant population of software engineers out there who believe AI will replace us. Soon. And you're probably in denial if you think it definitely won't. And we're fucking training it. Not doing anything to ensure we get a piece of the pie that we helped make. The Altmans get richer and we have to go be electricians and plumbers. You can't WFH and be a plumber.

I'm holding out hope that this approximation of intelligence, no matter how precise, how refined, is just not enough to get the job done. That we hit a ceiling soon. That you'll always need someone who deeply understands systems to steer it. At least until I retire. I don't want to live in a world where I don't write software for a living.

Whoa. Not sure where that came from. Was maybe a touch heavy-handed. Sorry. It's just kind of scary.

The point I wanted to make when I started this post is that I think it's a rite of passage to move fast. Like to get from mid-level to senior, you get assigned a project or two and you plan it and you absolutely fucking crush it. Top of the Jira leaderboard for a whole quarter. Usain-level velocity. AI makes this even easier and more satisfying.

It's a well known technique of agentic development to put your agents "on rails". Give them everything they need and point them in the right direction. Make it as hard as possible to get off course.

But the hardest problems are not on rails. They can't be shoved into a context window. There is no "off course". There is no course.

They're squishy. They require you to gather buy-in from other parts of the organization. They require you to convince your product manager to make an unsavory tradeoff or make space in the roadmap. They require a unique authorization pattern, or a different database technology.

So in this age of agentic development, I think that the good engineers are those who can harness the tooling to move fast. But the best engineers are those who can move slow.