Still choose boring technology


Dan McKinley's classic essay Choose Boring Technology (2015) argued that every team has a limited number of "innovation tokens" to spend[1]. McKinley urged us to spend them on our actual product, not on replacing PostgreSQL with the hot new database[2].

It was good advice then. Now with AI, boring technology has gained another concrete, measurable advantage.

Boring tech is in the training set

Most LLMs are trained on the internet. This means established, well-documented, stable technologies are extremely well represented. SQL, PostgreSQL, Redis, REST, Regex, React. The "boring stuff" has millions of examples, repos, tutorials, Stack Overflow answers, and battle-tested patterns baked into the model weights. This is one of the core theses in my previous short on ceding skills to AI.

Newer products, and those with large breaking changes between versions, are kryptonite to AI. Last year I was building an editor using PlateJS. A recent set of breaking changes sent the AI down rabbit hole after rabbit hole. I burned hours on it[3].

The innovation tax is now double

McKinley's original framing was about cognitive overhead for your team. Every exotic technology choice means more documentation, more debugging, more onboarding friction. Often this skill or knowledge becomes "islanded" — only a select group know how to get the best results.

AI doesn't fix this. In fact, it makes it worse. If you pick an exotic technology, both you and the AI need to grapple with it. You have to prompt in a way that gets the best results, without the deep knowledge needed to judge them. If the technology is truly unfamiliar, you might not be expert enough to guide the AI to a sufficiently robust result.

Every innovation token you spend now costs you twice: once for your team, once for your AI.

The catch: boring still needs guidance

Boring technologies are used in a vast range of contexts. For PostgreSQL there is everything from Northwind tutorials through to advanced sharding for "webscale". The AI has been trained on all of it, but that doesn't mean it has discernment on what to apply in any given context.

It's rare that I jump in and do a heavy rewrite of AI code. But when I do, one of the most common causes is performance. Heavy, repeated SQL queries and nested loops are two frequent culprits. The AI picked a valid pattern, just not the right one for the circumstances.
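As a minimal sketch of what I mean (using a hypothetical authors/posts schema, not code from any real project), here is the classic "repeated query" shape an AI will happily generate next to the single-query version it should have written. Both are valid SQL; only one is right once the table grows.

```python
import sqlite3

# Toy schema for illustration: authors and their posts.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO posts VALUES (1, 1, 'First'), (2, 1, 'Second'), (3, 2, 'Third');
""")

def post_counts_n_plus_1(conn):
    # The N+1 pattern: one query for the authors,
    # then one extra query per author inside the loop.
    counts = {}
    for author_id, name in conn.execute("SELECT id, name FROM authors"):
        row = conn.execute(
            "SELECT COUNT(*) FROM posts WHERE author_id = ?", (author_id,)
        ).fetchone()
        counts[name] = row[0]
    return counts

def post_counts_joined(conn):
    # The same result from a single aggregate JOIN.
    rows = conn.execute("""
        SELECT a.name, COUNT(p.id)
        FROM authors a LEFT JOIN posts p ON p.author_id = a.id
        GROUP BY a.id
    """)
    return dict(rows)

print(post_counts_n_plus_1(conn))  # same answer, N+1 round trips
print(post_counts_joined(conn))    # same answer, one round trip
```

Both functions return identical results, which is exactly why the slow version survives review: nothing is wrong until the authors table has a million rows.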

This actually strengthens the argument for boring tech. When the AI generates a bad PostgreSQL query, I can spot it. I have the expertise to guide it. When the AI generates bad PlateJS code, I'm much less sure whether the problem is the AI or my understanding of the library.

Boring technologies are where you still have the expertise to catch the AI's mistakes. For now, anyway.

So what should you actually do?

McKinley's use of the term "token" turned out to be oddly prescient. Now we're spending innovation tokens and LLM tokens on the same choices.

If you're choosing a stack today, run it through this filter: how much training data exists for this technology? How stable is its API? How established are its patterns? These used to be proxy signals for maturity and community support. Now they're strong proxies for AI competency.

McKinley's axiom is the same. Spend more innovation time on your product and less on the technology. But now that is amplified. The boring stack isn't just the safe choice anymore. It's also the fast one.