The Squaring Effect: Why AI Doesn’t Replace Skill — It Squares It

Krzyś

Or: How I Learned to Stop Worrying and Love the Exponential Collapse of My Own Incompetence

…aut cum scuto (or AI) aut in scuto (or AI)…

Listen — AI discourse has become a recursive loop of hot takes about hot takes, and I’d rather spend an afternoon debugging legacy PHP than add to the pile. Everyone’s either convinced ChatGPT (or Claude or whatever is hyped this week) will steal their job by Tuesday, or equally convinced it’s just spicy autocomplete that can’t even count the ‘r’s in “strawberry.”

Both camps are missing the point so thoroughly that it’s almost impressive.

I made my case in The Eternal Return of Abstraction: programming was never about code, always about working through layers of representation. One essay is enough to establish that thesis. I’m not here to relitigate it.

But there’s a consequence of that argument I couldn’t see clearly until now. Not because the technology changed — these cycles always rhyme — but because the dynamics became impossible to ignore. AI didn’t add another abstraction layer to our teetering Jenga tower of frameworks and DSLs.

It changed the velocity at which abstraction compounds your errors. Or amplifies your clarity.

And velocity, as it turns out, changes everything.

The difference between falling and flying is just a question of speed and direction. We’re about to find out which one we’re doing.

I. Code Was Always Just Notation (A Reminder We Keep Forgetting)

…we are the cloud-castle builders’ guild, aren’t we?…

“The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination.”
— Fred Brooks,
The Mythical Man-Month (1975)

(He wrote this in 1975. We’re still building castles in the air. We’ve just automated the part where they collapse.)

Programming has always been this: you have a mental model of a problem — hopefully accurate, probably not — and you encode that model through successive layers of abstraction. Domain concepts into algorithms. Algorithms into syntax. Syntax into machine operations. Each layer: a translation, each translation: a bet that you understood the previous one.

The code itself? Just the notation. The medium, not the message. The map, not the territory. (Korzybski, the founder of general semantics, said that about language in general. He never had to debug a microservices architecture, but the principle holds.)

Every era of programming invented new abstractions to reduce friction. Assembly freed you from toggling switches — though not, tragically, from toggling bits. High-level languages freed you from register management. Frameworks freed you from writing the same database-wiring boilerplate seventeen times. Low-code tools freed you from syntax entirely, replacing it with the nightmare of drag-and-drop UIs designed by people who clearly hate their users.

Each step lowered the barrier to entry. Each step also introduced new failure modes, because abstraction is never free — it’s just debt you’ll pay later, with interest.

Every abstraction is a bet that what you’re hiding doesn’t matter. Until it does. Usually at 3 AM. Usually on a Friday.

But here’s what’s different now. And I promise I’m going somewhere with this that isn’t just “AI scary” or “AI magical” — both positions being equally tedious. So let’s continue…

II. AI Isn’t a New Layer. It’s an Amplifier.

…i want it all, i want it now…

“We shape our tools and thereafter our tools shape us.”
— Attributed to Marshall McLuhan

(Actually, John Culkin said this in 1967, summarizing McLuhan’s ideas. But history prefers the famous guy. Just like junior devs copy-pasting Stack Overflow answers without checking the author. The principle stands either way: every technology eventually becomes a medium through which we express our incompetence.)

Previous abstractions — languages, frameworks, DSLs, those GUI builders that promised you’d never need to code again (how’s that working out?) — they all delayed the discovery of your mistakes.

You’d write broken code. It would compile. If you were lucky, it would fail your tests — assuming you wrote tests, which, let’s be honest, you didn’t. If you were unlucky, it would pass your tests (because your tests were as broken as your understanding). Then three weeks later, in production, at 3 AM (it’s always 3 AM), you’d discover that your mental model was wrong from the start.

The friction was protective. Like speed bumps, or hangovers, or that friend who tells you the truth about your haircut. Slow feedback gave you time to course-correct before compounding errors too deep to excavate without a complete rewrite and a crisis of faith.

AI collapses that delay to near-zero.

You prompt. It generates. You iterate on that output — “make it more elegant,” “add error handling,” “convert this to async/await” — each cycle taking seconds. And each cycle inherits the assumptions and errors from the last one, like a game of telephone played at the speed of thought.

This is not another layer of abstraction sitting politely on top of your existing stack. This is a feedback accelerator that amplifies whatever signal you’re feeding it.

Clear signal in? Compounding clarity. Exponential progress.

Fuzzy signal in? Compounding confusion. Exponential disaster.

The machine doesn’t judge. It just multiplies.

And multiplication, my friends, is a neutral operation that feels very non-neutral when what you’re multiplying is mistakes.

III. Why “Skill²” Is Just an Image (But a Useful One)

…to square or not to square, that is the question…

“All models are wrong, but some are useful.”
— George Box

(He was a statistician. This quote has been beaten to death in tech circles, but it’s beaten to death because it’s true. Also because we love having academic cover for our hand-wavy metaphors.)

Right. I’m about to use a mathematical metaphor, and I can already hear the pedants warming up their LaTeX compilers to explain why this isn’t technically a square function.

Yes, I know squaring a negative number makes it positive. Stop typing that comment. In software, negative skill squared is just a bigger crater — the math doesn’t care about your sign conventions when the production database is on fire.

(Footnote for the mathematically inclined: Yes, I know this isn’t actually a square function. It’s probably closer to exponential with a variable base dependent on initial skill level. But “The Exponential Effect with Variable Base Dependent on Initial Skill Level” doesn’t fit in a title, and I’m trying to communicate dynamics, not publish in SICP. Work with me here.)

The point isn’t mathematical precision. The point is dynamics.

The traditional view of tools is additive:

output = skill + tool_effectiveness

Better tools help everyone equally. A rising tide lifts all boats. Democracy of competence. The American Dream, but for programming.

Comforting. Also wrong.

What I’m observing — and yes, this is anecdotal, but anecdote informed by two decades of watching developers interact with abstractions — is multiplicative:

output ∝ skill × feedback_velocity

And when feedback velocity approaches instant — when you can iterate from idea to implementation to variation in seconds rather than hours — the dynamics shift from linear to exponential. The gap between “strong signal” and “weak signal” doesn’t just widen. It explodes.
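If you want to feel the shape of that claim, here’s a toy sketch in Python. The fidelity numbers and the constant-fidelity assumption are invented purely for illustration, not measured from anything:

```python
# Toy model of compounding under fast iteration: each AI-assisted cycle
# multiplies output quality by how faithfully your mental model matched
# reality on that cycle. Numbers are illustrative, not empirical.

def compounded_quality(fidelity: float, iterations: int) -> float:
    """Quality remaining after `iterations` cycles at constant fidelity."""
    return fidelity ** iterations

strong = compounded_quality(0.95, 10)  # ~0.60: clarity mostly survives
weak = compounded_quality(0.80, 10)    # ~0.11: confident nonsense

print(f"{strong:.2f} vs {weak:.2f} ({strong / weak:.1f}x)")
# → 0.60 vs 0.11 (5.6x)
```

The exact curve doesn’t matter. What matters is the dynamic: a modest per-cycle gap becomes a ~5.6x gap after only ten cycles, and it keeps widening as cycles get cheaper.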

(Imagine this graph, but prettier. X-axis: your actual understanding. Y-axis: what you ship. In the pre-AI world, the line was gentle — mediocre developers made mediocre things, good developers made good things, the delta was maybe 2–3x. In the AI world, it’s exponential. Mediocre developers compound disasters at scale. Good developers compound miracles at speed. The gap isn’t 2–3x anymore. It’s orders of magnitude. Small differences, massive consequences. Well… my best attempt below…)

…hope it is readable…

Strong signal → rapid compounding:

  • You start with clarity about what you’re building and why
  • First iteration builds on solid conceptual ground
  • Second iteration compounds coherence
  • Third iteration introduces sophistication
  • You’re building systems faster than you could manually, but more importantly, you’re building systems that make sense

Weak signal → rapid degradation:

  • You start with a fuzzy notion of what might work
  • First iteration introduces subtle drift — close enough to plausible
  • Second iteration compounds the error — still looks reasonable
  • Third iteration is built on sand built on sand
  • Each prompt dilutes meaning further; you’re iterating toward confident nonsense

The danger zone: almost-right:

  • Close enough to produce plausible outputs that compile and run
  • Far enough from correct that subtle errors accumulate invisibly
  • You don’t notice the drift until you’re five iterations deep
  • Can’t trace back where it went wrong because each step seemed fine
  • Like a perfectly reasonable chain of bad decisions that somehow led to PHP

The model isn’t meant to be predictive or precise. It’s meant to show direction and character: AI doesn’t flatten the skill curve like previous democratizing tools claimed to. It steepens it. Makes it more brutal. Separates the “mostly gets it” from the “really gets it” faster than any previous abstraction could.

Is it exactly squaring? No. Of course not. Reality is messier than algebra, and pedants are already composing their “well, actually” comments.

But the shape of the dynamic — the non-linearity, the acceleration of consequences, the way small differences in starting position lead to massive differences in outcomes — that’s real.

And if “Skill²” is a useful lie that helps us think clearly about what’s happening, I’ll take a useful lie over a useless truth any day.

IV. The Most Dangerous Zone: Almost Competent

…been there, done that, but — frankly — who is without guilt (…)…

“The fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt.”
— Bertrand Russell

(He died in 1970, before Stack Overflow, before Dunning-Kruger became a meme, before AI could generate confident nonsense at the speed of thought. But he saw it coming.)

Here’s the uncomfortable truth that nobody wants to hear: the people most at risk aren’t beginners.

Beginners know they’re beginners. They’re appropriately terrified. They check everything twice, Google constantly, ask stupid questions (there are no stupid questions, only stupid assumptions masquerading as questions). Fear is protective.

And experts? Experts have developed bullshit detectors calibrated by years of production disasters. They’ve seen enough plausible-but-wrong solutions to recognise the pattern. They know what to check. What could fail. Where the dragons live.

No, the people most at risk are the ones hovering just below real competence. The almost-skilled. The confident-enough-to-be-dangerous. Skill level: Almost. Confidence level: Absolute. Outcome: Predictable.

…engines on, full ahead, matey…

Why are they most at risk?

Because AI outputs are coherent and confident. They sound right. They compile. They pass surface-level checks. They use the right terminology. They follow conventions. They look, to the untrained eye (or the almost-trained eye), exactly like code written by someone who knows what they’re doing.

If you don’t have:

  • Tools to falsify claims (tests, type systems, production monitoring)
  • Awareness of your own model’s limits (epistemic humility, basically)
  • Contact with production reality (actual users, actual consequences, actual 3 AM alerts)

Then you have no way to distinguish “sounds right” from “is right.” And AI will happily generate twenty variations of “sounds right” while you burn through iterations, each one drifting further from correctness but never obviously breaking.

It’s like watching someone confidently navigate using a slightly wrong map. Not obviously wrong — the streets are mostly in the right places, the landmarks exist — but wrong enough that they’ll never quite arrive at their destination. And because each turn seems reasonable given the previous one, they just keep going, increasingly confident that the next turn will surely be the one that gets them there.

I’ve watched this happen. Hell, I’ve been this person, iterating confidently toward disaster, each AI-generated suggestion sounding perfectly reasonable until I actually tried to run the code and discovered I’d created a beautiful, elegant, completely non-functional monument to my own misunderstanding.

The almost-competent have just enough knowledge to evaluate surface correctness — syntax, conventions, plausibility. But not enough to evaluate deep correctness — edge cases, performance implications, security vulnerabilities, architectural soundness.

And AI is perfect at surface correctness. It’s been trained on millions of examples of what code looks like. It knows the patterns. The idioms. The conventions.

What it doesn’t know — what it can’t know — is whether those patterns actually solve your specific problem in your specific context with your specific constraints.

That evaluation is on you.

Beginners know they don’t know. Experts know what to check. The almost-competent don’t know what they’re missing.

And AI isn’t just fuel for that blind spot. It’s a dedicated pipeline supplying pure oxygen to the Dunning-Kruger fire.

“AI does not amplify intelligence. It amplifies confidence in your own model of reality.”

(Not Descartes. Not Kant. Me. After watching the fifth developer in a month ship AI-generated code that looked perfect and failed catastrophically.)

If your model is good, confidence is warranted. You’ll compound clarity.
If your model is broken, confidence is catastrophic. You’ll compound error.

And here’s the thing that keeps me up at night (well, one of many things — I’m middle-aged and anxious): you can’t tell which one you are from the inside. The experience of being confidently right and confidently wrong feels identical until reality weighs in.

Which it will. Eventually. Usually at 3 AM.

This is the exponential risk zone. Not beginners who fail slowly and visibly. Not experts who succeed quickly and reliably. But the almost-competent who fail quickly, confidently, and at scale — compounding errors faster than they can recognize them, building technical debt at velocities that would have been impossible five years ago.

AI didn’t create this problem. The almost-competent have always been the most dangerous developers — the ones who know enough to be trusted with production access but not enough to use it wisely.

AI just gave them a force multiplier.

And force multipliers don’t care about the direction of the force.

V. Critical Evaluation Is No Longer Optional

…question everything, even questioning itself…

“It is the mark of an educated mind to be able to entertain a thought without accepting it.”
— Attributed to Aristotle

(Likely apocryphal — scholars say he never wrote this exact sentence. But if AI can hallucinate facts, I can hallucinate a better attribution for a dead Greek philosopher. The principle stands: entertain the suggestion, don’t accept it blindly. This is not complicated.)

We’ve spent years — decades, really — calling “critical thinking” a nice-to-have. A soft skill. A meta-competency. Something you develop through maturity and experience and liberal arts education and probably reading more books.

That was always bullshit, but it was slow bullshit. You could get away with weak evaluation skills because mistakes took time to compound. By the time your fuzzy thinking manifested as broken systems, you’d had months of ambient feedback — code reviews, integration issues, that vague sense that something wasn’t quite right — to course-correct.

Not anymore.

Real competence now requires — and I mean requires, as in “table stakes,” as in “without this you’re toast” — three capabilities that used to be optional extras:

1. Falsifiability — Can you articulate what would prove you wrong?

Not “I’m open to feedback” (everyone says that). Not “I’m willing to learn” (meaningless platitude). Can you state, specifically, what evidence would convince you that your current approach is broken?

If you can’t formulate a falsifiable hypothesis about your code, you’re not engineering. You’re just hoping. And hope is not a strategy — it’s barely even a coping mechanism.

2. Boundary awareness — Do you know where your mental model ends and guesswork begins?

This is harder than it sounds. We’re all walking around with patchwork mental models — some parts crystal clear from deep experience, some parts kinda-sorta understood from that article we skimmed, some parts complete fabrication we’ve convinced ourselves is knowledge.

Experts know the boundaries of their knowledge. They can say “I’m confident about X, uncertain about Y, completely guessing about Z.” The almost-competent can’t tell the difference. It all feels like “things I know.”

AI doesn’t help you find those boundaries. It just confidently generates code that assumes your entire mental model is correct. Which is great when it is. Disastrous when it isn’t.


3. Production contact — Are you exposed to real consequences?

Not theoretical correctness. Not “it works on my machine.” Not “the tests pass” — tests you wrote, testing assumptions you made, validating a model that might be fundamentally broken.

Real consequences. Actual users. Production incidents. The 3 AM Slack notification (it’s always 3 AM) that tells you your beautiful abstraction just fell over under load you should have anticipated but didn’t.

If you’re not in contact with production reality — and I mean regular, painful contact — you have no feedback loop that can compete with AI’s confident assertions. You’re iterating in a vacuum, compounding errors that feel like progress.

“Critical thinking” isn’t some vague meta-skill anymore. It’s technical hygiene in an environment where errors propagate instantly.

…I looked at AI :: AI looked back…

It’s the difference between using AI as a force multiplier for your competence versus using AI as a high-speed delivery mechanism for your incompetence.

“If you cannot evaluate AI output, AI output evaluates you.”

(Not Nietzsche. Though it sounds like something he’d say if he’d lived long enough to debug JavaScript. Me, after a particularly rough sprint.)

Every iteration you accept without inspection is a vote of confidence in your current mental model. AI doesn’t judge you. It doesn’t care. It just reflects whether that model was worth compounding.

And compounds it will. Exponentially. Enthusiastically. Exactly as instructed.

Whether you’re compounding signal or noise — well, that’s on you.

VI. The Loss of Pedagogical Pain

…learning seems so “passeeee” these days…

“We don’t see things as they are, we see them as we are.”
— Anaïs Nin

(She was writing about perception and relationship. But it applies equally to code review: we see the bugs we’re predisposed to see, miss the ones that reflect our own blind spots.)

Here’s what nobody wants to admit: the tedious parts of programming were teaching us something.

Writing boilerplate for the hundredth time? That repetition was building muscle memory, intuition about edge cases, pattern recognition for what could go wrong.

Debugging line-by-line through a stack trace? That was forcing you to confront your assumptions about how the system actually worked, not how you thought it worked.

Waiting for the compiler to catch your stupid mistakes? That delay was pedagogical — it gave you time to notice you weren’t thinking clearly, time to step back before compounding errors too deep.

We optimized all of that away. And we were right to optimize it away — it was tedious, repetitive, soul-crushing work that could be automated.

But here’s what we lost in the bargain: friction as a forcing function for thought.

The bad old days: waiting for builds gave you time to realize your entire approach was wrong.

…waiting and error-meditating…

The good new days: you’ve shipped three iterations of wrong before you notice.

Before AI, you could hide behind manual work. The friction gave you time to think — or at least time to notice you weren’t thinking clearly. Each iteration took long enough that you’d stumble into corrections before disaster.

Errors were expensive and slow. This wasn’t a bug. It was a feature.

When being wrong costs hours — code rewritten, deployments rolled back, post-mortems written — you develop defense mechanisms. Code review. Pair programming. Test-driven development. Defensive programming. All that process we love to complain about.

That process was teaching us epistemic hygiene, whether we realised it or not.

Now errors are cheap and fast. You can regenerate a hundred variations in an afternoon. This isn’t progress or regress — those are moral categories that don’t apply to tools. It’s a change in the cost structure of being wrong.

And when being wrong becomes cheap, the only thing protecting you is knowing how to be right.

Most of us never learned that. We learned how to muddle through. How to iterate toward correctness through trial and error. How to lean on the compiler, the type system, the framework, the senior developer, Stack Overflow, and sheer bloody-minded persistence.

Take away the friction, and what’s left?

Your raw understanding. Your mental models. Your ability to distinguish plausible from correct.

For some people, that’s enough. More than enough — they’re compounding clarity at speeds that would have seemed magical five years ago.

For others — and I include past versions of myself in this category — it’s terrifying how much we relied on friction to paper over fuzzy thinking.

…let’s amplify…

“The era of AI is not about writing less code. It is about facing your own abstractions without delay.”

(Not Heidegger. Though he’d probably have something to say about “being-toward-code” and “the-ready-to-hand-ness of GitHub Copilot.” Me, having an existential crisis over a perfectly innocent pull request.)

If your mental model was always a bit fuzzy, you could get by. Manual work introduced enough delay that you’d stumble into corrections before disaster. Compile errors caught stupid mistakes. Integration tests caught logic errors. Code review caught design errors. Production caught everything else.

Now you’re iterating at thought-speed. And if your thoughts were always slightly off — not wrong enough to fail obviously, just wrong enough to drift — AI will take you confidently in the wrong direction faster than you can realise you’re lost.

It’s not the AI’s fault. The AI is just doing what you told it to do, based on the model you provided, amplifying the signal you fed it.

If the signal was noisy to begin with?

Shit in, shit out. But now the shit comes at you exponentially.

AI is a test of whether you understand your own abstractions. And for many of us, the answer is becoming uncomfortably clear.

VII. What Survives This: Responsibility Without Friction

…no pain no gain…

“The map is not the territory.”
— Alfred Korzybski

(He was establishing the foundation of general semantics. But every debugging session is an exercise in discovering that your map — your mental model — was not, in fact, the territory of how the system actually behaves.)

This isn’t the death of programming. Let’s dispense with that tedious narrative before it takes root.

It’s the death of pretending you can outsource responsibility for understanding.

Programming has always been about encoding mental models into executable form. The notation changed — punch cards, assembly, high-level languages, frameworks, prompts — but the fundamental challenge remained constant: can you think clearly enough about a problem to represent it accurately through layers of abstraction?

For decades, we had guardrails. The friction of manual work gave us time to think. To repeat: the compiler caught type errors. The framework prevented common mistakes. The senior developer caught design flaws in code review. And, once again, production caught everything else.

These weren’t just conveniences. They were scaffolding for fuzzy thinking.

AI removes the scaffolding.

What remains is the clarity of your thinking. The accuracy of your mental models. Your ability to distinguish between “sounds plausible” and “is correct.”

And — this is the part that should terrify you — your responsibility for what ships.

When the 3 AM incident comes (and it will come), you can’t point at the AI and claim plausible deniability.

  • “But it seemed reasonable.”
  • “But it compiled.”
  • “But it passed the tests.”
  • “But ChatGPT generated it.”

Reality doesn’t care. Your users don’t care. The database that just corrupted because of a race condition you didn’t understand doesn’t care.

The code runs under your name. The consequences are yours.

Who survives isn’t a question of who can prompt AI most effectively. Everyone can type instructions. The filter is subtler and more brutal:

1. Can you articulate intent?

Not “implement user authentication.” Everyone can generate auth code. Can you specify why JWT over sessions, what security properties matter for your threat model, what you’re optimising for and what you’re willing to sacrifice?

If your requirements are fuzzy, AI will generate code that matches that fuzziness. Coherently. Confidently. Incorrectly.

2. Can you evaluate outputs against reality?

AI generates code that compiles, passes tests, and follows conventions. But does it solve the actual problem? Will it hold up under production load? Are the abstractions sound, or are you building technical debt at exponential velocity?

The ability to falsify claims — to articulate what would prove you wrong — is no longer optional. It’s the difference between compounding clarity and compounding error.

3. Do you take responsibility for meaning?

Outsourcing notation is fine. Every abstraction in history has been about outsourcing notation — from assembly mnemonics to frameworks to prompts.

Outsourcing thinking is catastrophic.

Because when your beautiful AI-generated abstraction falls over in production, you’re the one holding the incident response. You’re the one explaining to management why the outage happened. You’re the one debugging at 3 AM, trying to understand code you didn’t write, generated from requirements you didn’t fully specify, based on assumptions you didn’t examine.

And if you can’t understand your own abstractions — if you’ve been iterating at AI-speed without maintaining contact with what you’re actually building — you’re in for a very long night.

AI makes bad engineers more efficient. This is undeniably true.

But efficiency is a multiplier, not a remedy.

…one way or the other…

“AI does not make incompetence obsolete. It makes incompetence efficient enough to fail at scale.”

(Not Drucker. Though management consultants will probably attribute it to him eventually. Me, watching the post-mortem presentations pile up.)

The gap between strong mental models and weak ones was always there. AI just makes it exponential. Small differences in fundamental understanding now produce massive differences in outcomes — not over years, but over iteration cycles measured in seconds.

When AI gives you something that sounds right but isn’t, do you notice?

  • If yes: you’re compounding skill.
  • If no: you’re compounding error.

And both compound exponentially now.

The question isn’t whether you’re using AI. Everyone will be using AI. The question is whether you’re using it as a force multiplier for competence or as a high-speed delivery mechanism for disaster.

The difference is uncomfortable to articulate but impossible to ignore once you see it:

Can you take responsibility for code you didn’t write?

Not legal responsibility — that you have anyway, whether you like it or not.

Intellectual responsibility. Understanding what it does. Why it does it. Where it might fail. What assumptions it makes. What properties it guarantees.

  • If yes: congratulations. You’re in the compounding-clarity zone. The future is probably fine for you.
  • If no: you’re in the compounding-error zone. And the errors are coming faster than you can debug them, at a scale you can’t comprehend, with consequences you didn’t anticipate.

The machine doesn’t care about your intentions. It never has. It only executes your instructions.

And now those instructions iterate faster than you can course-correct.

  • Choose your mental models carefully.
  • Build your falsifiability tests.
  • Know your boundaries.
  • Stay in contact with production reality.
  • Take responsibility for what you ship.

Or don’t. The world needs incident responders. Someone has to debug all this confidently-generated nonsense.

Either way, the on-call rotation is yours.

Epilogue: Same Movie, Different Frame Rate

…vanitas vanitatum et -> everything repeats itself…

The eternal return of abstraction continues. Punch cards to assembly to high-level languages to frameworks to prompts — each transition promised liberation, each delivered new constraints we didn’t anticipate.

AI is not special. It’s not different in kind. Just different in velocity.

We’ve moved from environments where errors were expensive and slow — forcing us to think carefully before acting — to environments where errors are cheap and fast. The cost structure of being wrong has fundamentally changed.

When mistakes took hours to produce, we developed elaborate defense mechanisms. When mistakes take seconds, the only defense is clarity of thought.

The question was never whether abstraction is good or bad. Abstraction is inevitable — it’s how we manage complexity beyond working memory. The question is whether you can see through the layers clearly enough to know what you’re building.

  • Whether your mental models are solid enough to compound.
  • Whether you understand your own abstractions.
  • Whether you can take responsibility for code that iterates faster than you can think.

Because the machine doesn’t care about your intentions. It only executes your instructions. And now those instructions iterate faster than you can course-correct.

Same cycle. Same pattern. Same mistakes.

Just faster.

The author has closed his laptop in contemplation, brewed a proper cup of Lapsang Souchong, and settled in to watch overly confident developers ship AI-generated code with the peace of mind of men who’ve never seen a 3 AM incident. Two decades of this have taught him when to intervene and when to simply take notes for the post-mortem. Currently fiddling with Rust, because what is painful is at least real, and it makes you immune to the lesser inconveniences…

…cup of brown joy and chill…

This story is published on Generative AI. Connect with us on LinkedIn and follow Zeniteq to stay in the loop with the latest AI stories.
