Throw More Darts

Adam Derewecki

AI-assisted coding has a weird superpower that’s easy to miss if you only use it like a vending machine.

Many people treat Cursor/Claude like: “Here’s the problem. Solve it.” And sometimes it does! It comes back with a clean diff, the tests pass, you ship, you feel like you’ve discovered fire.

But a lot of the time it comes back with… garbage. Confident garbage. Half-right garbage. “This compiles but why is it doing that?” garbage.

The trick (for me, at least) is accepting that the best workflow isn’t always directional. It’s probabilistic.

Throw more darts

When I’m using an agent effectively, I’m not walking a straight line from problem to solution. I’m doing a stagger-walk:

  • try multiple approaches in parallel (e.g. many models on one prompt × many variations of the prompt; see the sketch after this list)
  • keep the 5 good ones
  • ignore the other 20 with zero guilt
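
To make the fan-out concrete, here’s a minimal Python sketch of the idea. Everything here is a placeholder, not a real API: `run_agent`, the model names, and the prompt variants are all assumptions standing in for whatever agent you actually use. The point is the shape: cross models with prompt variants, run them in parallel, keep only the attempts that land.

```python
import concurrent.futures
import itertools

# Hypothetical stand-in for whatever actually runs one agent attempt
# (a CLI call to your coding agent, an API request, etc.).
def run_agent(model: str, prompt: str) -> dict:
    # Expected to return something like:
    # {"model": ..., "prompt": ..., "diff": ..., "tests_pass": ...}
    raise NotImplementedError("wire this up to your agent of choice")

MODELS = ["model-a", "model-b", "model-c"]  # assumed names
PROMPT_VARIANTS = [  # hypothetical task phrasings
    "Fix the flaky test in test_checkout.py",
    "Fix the flaky test; prefer the smallest possible diff",
    "Fix the flaky test; add a regression test too",
]

def throw_darts() -> list[dict]:
    # Fan out: every model crossed with every prompt variant.
    jobs = list(itertools.product(MODELS, PROMPT_VARIANTS))
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(jobs)) as pool:
        results = list(pool.map(lambda args: run_agent(*args), jobs))
    # Keep the attempts that actually pass; ignore the rest, guilt-free.
    return [r for r in results if r["tests_pass"]]
```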

It’s less “engineer with a plan” and more “finding gold with a pan”, which, yes, feels very on-brand for living in San Francisco: you’re not building a cathedral, you’re prospecting. Go with your gut and spin up an agent on a hunch.

Queue up a dozen improvements

This reminds me of the Sierra episode of ACQ2 where Bret Taylor talked about their engineers queueing up a dozen potential improvements on Friday afternoon and letting them run through the weekend.

That’s the vibe.

It’s not “I will now implement The Correct Fix™.” It’s “I’m going to tee up a pile of plausible wins, let time and automation do the brute force, and come back Monday to whatever shook out.”

AI agents fit that exact shape: they’re great at generating attempts. Lots of them. Cheap.
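
A minimal sketch of that “tee up and walk away” shape, with the same hypothetical `run_agent` placeholder and an invented backlog: enqueue a pile of plausible wins on Friday, let them grind through unattended, and write each result somewhere you can triage on Monday.

```python
import json
from pathlib import Path

# Same hypothetical runner as before: swap in your actual agent call.
def run_agent(prompt: str) -> dict:
    raise NotImplementedError("wire this up to your agent of choice")

# A Friday-afternoon backlog of plausible wins (all hypothetical).
QUEUE = [
    "Migrate the last three endpoints off the legacy ORM",
    "Add type hints to the billing module",
    "Profile and speed up the slowest CI job",
]

def run_queue(outdir: str = "agent_runs") -> None:
    Path(outdir).mkdir(exist_ok=True)
    for i, prompt in enumerate(QUEUE):
        try:
            result = run_agent(prompt)
        except Exception as exc:
            result = {"error": str(exc)}  # a failed attempt is fine; move on
        # One file per attempt, ready for Monday-morning triage.
        Path(outdir, f"attempt_{i}.json").write_text(json.dumps(result))
```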

Why it can look like a 40% productivity boost

If your workflow is “generate many shots on goal,” then yeah, AI can feel like a 25–40% productivity boost (or more) because:

  • you can explore more options without committing early
  • you get unblocked faster when one of the attempts lands
  • you spend more time selecting/merging good ideas vs. inventing from scratch

In other words: you’re optimizing for throughput of tries, not perfection of the try.

The caveat: targeted firefights don’t work like prospecting

But if the task is targeted, like:

> “This endpoint is crashing in prod and we need a diagnosis and a fix right now.”

…that’s not a gold-prospecting problem. That’s surgery.

In that mode, the “40% boost” story gets squishier.

Maybe there’s a 20% chance the agent one-shots it (finds the bug, proposes the right fix, even writes the patch). When that happens, it feels like magic.

But most of the time, the value is more like:

  • helping you read unfamiliar code faster
  • summarizing traces/logs and proposing hypotheses
  • generating debugging checklists
  • suggesting where to add instrumentation
  • drafting the “try this next” sequence

That’s still useful. It’s just not a rocket booster. It’s more like a 10–15% steady tailwind while you do the real work.

Pick the workflow that matches the tool

The best results I’ve gotten from agents come when I stop asking them to be a senior engineer with judgment, and start using them as a high-throughput idea generator.

Let the agent take 25 swings.

Keep the 5 that hit.

And when it’s a “prod is on fire” moment, sure, use the agent, but don’t expect prospecting economics. Different job. Different math.