The Junior Developer Extinction: We’re All Building the Next Programming Dark Age


Krzyś

In which we discover that the call is coming from inside the house, and we’ve been holding the phone all along


…hmm, but why…

“I have not failed. I’ve just found 10,000 ways that won’t work.”
— Thomas Edison

Though to be fair, Edison never had to explain to his manager why the AI-generated light bulb stopped working, and nobody on the team understood the filament design.

Picture this scene, familiar to anyone who’s conducted code reviews in the past year: A junior developer presents their pull request with the quiet confidence of someone who’s just solved climate change using nothing but positive thinking and a particularly clever regular expression. The code is pristine — elegant error handling, proper separation of concerns, even decent variable names that suggest someone actually thought about maintainability. It works flawlessly in all test environments. There’s just one tiny problem.

“Walk me through this authentication middleware,” you ask, in the tone of voice usually reserved for defusing unexploded ordnance.

[Pause. Shuffle. The distinctive sound of someone’s brain desperately searching for purchase on perfectly polished glass.]

“Well, I asked ChatGPT to implement JWT authentication with refresh token rotation, and it… did this?” They gesture vaguely at the screen like a magician whose trick has worked too well and who’s now slightly terrified of their own powers. “I mean, it handles token expiration and everything. The tests pass.”

“Right, but why did you choose JWT over session-based authentication for this particular use case?”

[Longer pause. The sound of crickets, if crickets were particularly uncomfortable with distributed systems architecture and had strong opinions about stateless protocols.]

This is the new “I copied it from Stack Overflow” — except Stack Overflow at least forced you to understand the problem well enough to search for it, to read through multiple answers, to synthesize different approaches. It was like learning to cook by reading recipes and occasionally burning dinner. Now we just hand-wave at AI and hope for the best, like having a personal chef who’s brilliant but speaks a language we don’t understand and occasionally substitutes salt for sugar without mentioning it.
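For what it’s worth, the kernel of that middleware is small enough to walk through by hand. Here is a minimal, purely illustrative sketch of stateless token verification using only the standard library — hypothetical names and keys, not the code from the review above, and emphatically not a substitute for a vetted JWT library:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret-key"  # illustrative; real keys live in a key store

def b64url(data):
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def issue(payload):
    """Build a minimal HS256-style token: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(SECRET, header + b"." + body, hashlib.sha256).digest())
    return (header + b"." + body + b"." + sig).decode()

def verify(token):
    """The answer to 'walk me through it': recompute the signature and
    check expiry. No session store is consulted; that statelessness is
    the whole JWT-vs-sessions trade-off."""
    try:
        header, body, sig = token.encode().split(b".")
    except ValueError:
        return None  # malformed token
    expected = b64url(hmac.new(SECRET, header + b"." + body, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # tampered payload, or signed with a different key
    payload = json.loads(base64.urlsafe_b64decode(body + b"=" * (-len(body) % 4)))
    if payload.get("exp", 0) < time.time():
        return None  # expired; the client must fall back to its refresh token
    return payload

token = issue({"sub": "user-42", "exp": time.time() + 3600})
assert verify(token)["sub"] == "user-42"
assert verify(token[:-1] + ("A" if token[-1] != "A" else "B")) is None  # tampering caught
```

The point is not that anyone should hand-roll token handling in production — please don’t — but that the junior in the scene above should be able to narrate roughly this much: what gets signed, what gets checked, and why no database lookup happens.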

But here’s the uncomfortable truth I’ve been avoiding: How many times this week have I done exactly the same thing? How many architectural decisions have I made by essentially asking AI “what would a proper engineer do here?” and then nodding sagely at the response as if I’d thought of it myself? How many times have I shipped code that I understood conceptually but couldn’t debug line-by-line if my life depended on it?

The numbers paint a compelling picture of AI’s impact on developer productivity, though the story gets more complex as we examine the latest research.

The Productivity Paradox of 2025:

Recent studies tell contradictory stories. PwC’s 2025 Global AI Jobs Barometer found that productivity growth nearly quadrupled in AI-exposed industries since 2022, rising from 7% to 27%, while a major Danish study of 25,000 workers across 7,000 workplaces found that AI chatbots “had no significant impact on earnings or recorded hours in any occupation” — with users saving just 3% of their time on average.

Meanwhile, Microsoft research involving over 4,000 developers found that those using GitHub Copilot achieved a 26% increase in productivity, and Nielsen Norman Group studies suggest productivity gains as high as 126% for coding tasks. But here’s the kicker: according to the 2024 DORA report, speed and stability have actually decreased due to AI, and Uplevel’s quantitative study found that using Copilot didn’t result in productivity improvements but did increase the rate of bugs produced.

Perhaps most tellingly, Harvard Business Review’s 2025 research reveals that while AI makes people more productive, it also makes them “less motivated and more bored when they had to work on other tasks, without the use of AI”.

The Adoption Reality Check:

Even when AI tools are available, around 30–40% of engineers in Microsoft’s studies didn’t even try them. As one researcher noted, “factors other than access, such as individual preferences and perceived utility of the tool, play important roles in engineers’ decisions to use this tool.”

The Junior Developer Contradiction:

According to Y Combinator’s latest batch analysis, 25% of their startups are shipping codebases that are 95% AI-generated. But here’s where the data gets interesting: junior developers consistently see the largest productivity gains from AI tools, with increases of 21% to 40%, while senior developers see more modest gains of 7% to 16%.

These contradictory findings suggest we’re not dealing with a simple productivity success story, but something more nuanced — and potentially troubling for the long-term health of our profession.

And perhaps most troubling: where do the rest of us fit when we can’t explain the AI code we’re shipping either? When does “AI-assisted development” become “AI-dependent development,” and how would we even know we’d crossed that line?

The 2025 Wake-Up Call:

Early predictions for 2025 are already proving prescient. As Winston Hearn of Honeycomb.io put it at the start of the year, “companies will learn what happens when their codebases are infiltrated with AI generated code at scale.” We’re starting to see exactly this: bigger incidents with slower resolution times, as the people trying to fix the problems don’t understand the code that created them.

Meanwhile, The New Stack’s 2025 developer survey found that “developers have become even more cynical and frankly concerned about the return on investment of AI in software development.” The honeymoon period is ending, and reality is setting in.


…recursive learning leads to recursive learning leads to recur…

The Tutorial Hell Upgrade

“The real problem is not whether machines think but whether men do.”
— B.F. Skinner

We used to worry about “tutorial hell” — that liminal space where aspiring developers endlessly consumed learning content without ever building anything meaningful. It was like being trapped in a particularly elaborate episode of How It’s Made, where you’d watch fascinated as someone explained how to build a compiler, but you’d never actually get your hands dirty with lexical analysis.

At least in tutorial hell, you were learning something, even if it was just the comfortable illusion of progress. You were building mental models, however flawed. You were developing problem-solving instincts, however misguided. As Eddie Izzard might say, you were learning to be wrong, but in an interesting way.

Welcome to Tutorial Hell 2.0, where the learning loop has been optimized into nonexistence.

The Traditional Learning Cycle:

  1. Encounter a problem you don’t understand
  2. Struggle with it (the crucial bit — like doing press-ups for your brain)
  3. Find a solution through research, experimentation, or asking for help
  4. Understand why the solution works
  5. Apply that understanding to the next problem

The Modern AI-Assisted Cycle:

  1. Encounter a problem you don’t understand
  2. Paste it into ChatGPT with increasing levels of desperation and creative profanity
  3. Receive a solution that works (like magic, but with worse documentation)
  4. Ship it before anyone asks awkward questions like “how does this actually work?”
  5. Encounter the same class of problem tomorrow and repeat step 2 with marginally more sophisticated prompting techniques

The difference is profound. In the old model, ignorance was temporary and uncomfortable — like wearing someone else’s poorly-fitted clothes to a job interview. In the new model, ignorance is permanent and comfortable — like wearing a perfectly tailored jacket that you don’t realize is made entirely of tissue paper and optimistic thinking.

But here’s where we seniors need to look in the mirror and admit our complicity: How many of us have stopped explaining our decisions because “AI explains it better”? How many code reviews have we rushed through with “looks good, ship it” instead of using them as teaching moments? We’ve become like those parents who give their toddler an iPad instead of reading to them — technically more efficient, but missing the entire point of the exercise.


…three wishes, mate: and make ’em count…

The Genie Paradox: All-Powerful Servants We Can’t Command

“I wish for infinite wishes!”
— Every child who’s ever heard a genie story

“I wish for infinite understanding of what I’m actually asking for.”
— Every developer who’s ever used AI

The relationship between developers and AI has become curiously similar to those classic genie-in-a-lamp scenarios, except with worse documentation and no convenient three-wish limit. We have access to something approaching omnipotent programming capability, but we’re discovering that omnipotence is only useful if you know what to ask for. And knowing what to ask for requires understanding the problem deeply enough that you might not need the genie in the first place.

It’s like having a personal chef who can cook absolutely anything in the world, but only if you can describe the dish in perfect molecular detail, including the precise temperature coefficients for each cooking stage and the optimal pH levels for flavor development. “I’d like something… food-like? Maybe with nutrients?” gets you a perfectly prepared bowl of protein powder and vitamins. Technically correct, nutritionally complete, utterly soulless.

This is the fundamental paradox of AI-assisted development: effective prompting requires the kind of specific, detailed knowledge that comes from experience — but if you had that experience, you probably wouldn’t need the AI to solve the problem. It’s recursive incompetence all the way down, like an Escherian nightmare where every level of abstraction depends on understanding the level below it, but nobody remembers how to get to the bottom floor.

Consider our authentication middleware scenario from earlier. A junior developer asking for “a login system” might receive a perfectly implemented OAuth2 flow with JWT tokens, refresh token rotation, PKCE for security, proper CORS handling, and even thoughtful error messages. It’s like asking for “a way to get to work” and receiving a detailed blueprint for a space elevator. Technically superior to walking, practically mystifying for anyone who just needs to catch the bus.

When it doesn’t work in their specific environment — and it will fail, because software always fails in ways you don’t expect — they’re completely stuck. They don’t understand what “doesn’t work” means in the context of authentication systems. They can’t debug it because they don’t know what success looks like, much less what the failure modes are. They’re like someone trying to repair a quantum computer with a hammer and a vague sense of optimism, while the quantum computer keeps giving them error messages in Sanskrit.

The AI has become a priestly class — an intermediary between developers and the actual systems they’re building. But priests are only effective if they understand both the divine mysteries and the needs of their congregation. When both the priest and the congregation are equally mystified by the liturgy, you get cargo cult programming: rituals that look right and sometimes work, but with no understanding of why they might succeed or fail.

And here’s my uncomfortable admission: half the time, I don’t have the deep knowledge required for effective prompting either. I’ve just become very good at iterating through prompts until something works, then shipping it before anyone asks me to explain it. I’m like a medieval alchemist who’s accidentally discovered antibiotics but insists it’s because of the proper alignment of Mars and Jupiter.


…selfie with codequeduct, anyone?…

The Aqueduct Syndrome: How Civilizations Forget Their Own Infrastructure

“Rome wasn’t built in a day, but I suspect it could be AI-generated in about an hour.”
— Modern Development Wisdom

The Roman Empire built the most sophisticated water distribution system in human history. Aqueducts carried fresh water across hundreds of miles, using nothing but gravity, mathematics, and an almost obsessive attention to detail. The engineering was so advanced that some of these systems operated continuously for over a thousand years — which, in technology terms, is roughly equivalent to maintaining a COBOL system since the Paleolithic era.

But here’s where the historical parallel gets uncomfortably precise: the Empire didn’t collapse overnight. It was a gradual process of shifting priorities, where long-term maintenance became less important than immediate concerns. By the 3rd-5th centuries AD, when the real decline set in, all the clever engineers were busy dealing with barbarian invasions and economic crises rather than maintaining the boring old infrastructure that actually kept civilization running.

The irony is exquisite: Caesar conquered Gaul in the 1st century BC with brilliant military engineering, but by the time the Empire was crumbling centuries later, nobody remembered how to maintain the water systems that made those conquests sustainable. It’s like building a startup with amazing user acquisition but forgetting to document how the database actually works.

Generations of engineers maintained these systems through accumulated wisdom and careful apprenticeship. But as priorities shifted, that knowledge transfer stopped. The older engineers retired to villas in Tuscany (the ancient equivalent of becoming DevOps consultants), and the younger ones were too busy with “more strategic” projects to learn the mundane details of water flow calculations and settling tank maintenance.

The result? Across Europe today, you can find the magnificent ruins of aqueducts that we still can’t fully replicate. We know they worked, we can see they worked, but the why died with the civilization that created them. It’s like finding a perfectly functioning quantum computer in your attic and realizing the instruction manual is written in a dead language.

Now consider our industry’s trajectory, keeping in mind that we’ve been practicing selective amnesia about our past:

1990s-2000s:

We built systems with deep understanding, but also shipped plenty of unmaintainable spaghetti code, had terrible documentation practices, and routinely worked 80-hour weeks because “that’s just how software development works.”

2010s:

We started saying “frameworks handle that” instead of teaching fundamentals, but we also built more reliable systems, improved our deployment practices, and began to value work-life balance. The abstraction wasn’t all bad.

2020s:

We embraced AI and stopped explaining our decisions entirely, but we also became more productive, reduced time-to-market, and democratized certain aspects of programming.

2030s:

Systems fail, and nobody remembers why they were built the way they were — but this was already happening with poorly documented legacy systems written entirely by humans.

The pattern isn’t new; the acceleration is. We’re not experiencing the first knowledge gap in programming history — we’re experiencing the fastest one.


…COBOL is the new black…

The COBOL Prophecy: What Happens When Cool Kids Grow Up

“Fashion is a form of ugliness so intolerable that we have to alter it every six months.”
— Oscar Wilde

“Programming frameworks are a form of tedium so intolerable that we have to replace them every six months.”
— The Eternal JavaScript Cycle

In the 1960s, COBOL was revolutionary — a programming language that would finally let business people write their own software. It was the AI of its day, promising to democratize programming and eliminate the need for those mysterious “computer experts” who spoke in FORTRAN and assembly.

By the 1980s, COBOL was boring and old-fashioned. Everyone wanted to learn C, or Pascal, or whatever the cool kids were using. COBOL developers were seen as dinosaurs, clinging to obsolete technology while the future raced past them on microcomputers and fancy new operating systems.

Fast forward to 1999. Y2K hits, and suddenly every company in the world desperately needs COBOL programmers. Not because COBOL is technically superior — it’s not — but because all the critical business systems were written in COBOL by people who had since retired to Florida or ascended to management. The average age of COBOL programmers today? 58 years old. They’re like pandas — precious, rare, and mostly concerned with reproduction in carefully controlled environments.

This is the programming equivalent of demographic collapse. We’ve created a world where the people who understand how things work are literally dying off, and nobody is replacing them because the technology is “old and boring.”

But here’s the uncomfortable truth: we’ve been doing this for decades, and it’s accelerating like a particularly aggressive case of Moore’s Law applied to forgetting things.

1990s seniors:

“Who wants to learn boring C when we have Java?”

2000s seniors:

“Who wants to learn boring SQL when we have NoSQL?”

2010s seniors:

“Who wants to learn boring JavaScript when we have TypeScript?”

2020s seniors:

“Who wants to learn boring programming when we have AI?”


Each generation of seniors has normalized the idea that fundamental knowledge is optional — that you can skip the “boring” parts and jump straight to the “interesting” stuff. We’ve created a culture where depth is seen as inefficiency, where understanding principles is less valued than knowing the latest framework. It’s like deciding that understanding arithmetic is unnecessary because calculators exist, then being surprised when the calculator breaks and nobody knows why 2+2 equals fish.

We’ve become a profession that’s simultaneously more productive and more fragile than ever before. We can build incredibly sophisticated systems at unprecedented speed, but we’re increasingly unable to maintain, debug, or modify them when they inevitably break. We’re like a civilization that’s perfected space travel but forgotten how to make fire.


…ask AI as I cannot be asked…

The Mentorship Crisis We Created

“In the beginner’s mind there are many possibilities, but in the expert’s mind there are few.”
— Shunryu Suzuki

“In the AI-assisted mind, there are infinite possibilities, none of which we understand.”
— Modern Development Koan

Let’s talk about mentorship, shall we? Not the idealized version where wise seniors impart knowledge to eager juniors around a metaphorical campfire, but the messy, time-consuming, often frustrating reality of knowledge transfer in a profession that changes faster than British weather.

The traditional mentorship model was already crumbling before AI arrived. Code reviews had devolved into nitpicking about semicolons and catching obvious bugs — like being a writing teacher who only corrects spelling mistakes while completely ignoring whether the story makes any sense. “Senior” developers spent most of their time in meetings discussing velocity metrics and sprint retrospectives, leaving junior developers to learn from Stack Overflow and trial-and-error.

When someone did ask a question, the answer was usually a link to documentation or a framework that would “handle it for you.” We became very good at outsourcing explanation to external resources, like parents who respond to “why is the sky blue?” with “Google it.”

AI has simply accelerated this trend to its logical conclusion. Why explain the reasoning behind a design decision when you can just ask ChatGPT to implement it? Why teach debugging techniques when AI can usually fix the problem faster than you can explain what’s wrong? Why invest time in knowledge transfer when everyone can just prompt their way to a solution?

The result is that we’ve created a generation of developers who are incredibly productive but fundamentally dependent. They can build complex applications, but they can’t troubleshoot when those applications fail in unexpected ways. They can implement sophisticated algorithms, but they can’t explain why those algorithms work or when they might fail. They’re like drivers who can navigate anywhere with GPS but would be completely lost with just a paper map and a vague sense of direction.

More troubling, we’ve created a profession where the seniors are equally dependent on AI assistance. How many of us can honestly say we understand every line of the AI-generated code we’ve shipped this month? How many of us have stopped learning new things because AI can just handle the details?

We’ve become a profession of translators — converting business requirements into prompts, and AI responses into shipping code. But translation requires understanding both languages, and we’re losing fluency in the language of actual programming faster than we’re gaining fluency in the language of effective prompting.


…it seems it is a WTF written in sanskrit, sir…

Archaeological Programming: The Future We’re Building

“The past is a foreign country; they do things differently there.”
— L.P. Hartley

“Last week’s codebase is a foreign country; they did things we can’t remember there.”
— Modern Developer Archaeology

Imagine a developer in 2040 trying to debug a system built in 2025. They’re staring at code that’s technically proficient, elegantly structured, and completely inexplicable — like finding a beautifully written letter in a language that went extinct yesterday.

The git history shows a series of commits like “AI improvements” and “ChatGPT optimization” with no explanation of the underlying logic. It’s like archaeological evidence that consists entirely of notes saying “the gods handled this part.” The original developers are long gone, promoted to management or burnt out on tech entirely, and the AI model that generated the code has been deprecated in favor of something that’s completely incompatible but 15% more efficient.

The documentation, if it exists, describes what the system does but not why it was built that way. Every attempt to modify the system breaks something else in unpredictable ways, like trying to renovate a house where the previous owner used load-bearing furniture and decorative electrical wiring.

This is the future we’re creating: archaeological programming, where maintenance becomes an exercise in digital archaeology. Future developers will study our AI-generated systems the way we study Roman concrete — with admiration for the results and frustration at our inability to replicate the process.

The parallels are uncanny. Roman concrete was superior to modern concrete in many ways — it actually got stronger over time, which is the kind of engineering magic we can barely comprehend today. But we lost the knowledge of how to make it. We know the ingredients, but not the proportions or the technique. We can analyze the chemical composition, but we can’t recreate the process.

Similarly, we’re creating software systems that work beautifully but come with no transferable knowledge about how or why they work. Future developers will be able to see what our systems do, but they won’t understand the reasoning behind the implementation choices. They’ll be reduced to reverse-engineering our intentions from the artifacts we leave behind, like anthropologists trying to understand a culture based solely on its pottery shards and garbage disposal patterns.

This isn’t a distant dystopian fantasy. It’s the logical endpoint of our current trajectory. We’re creating systems that work but can’t be explained, solving problems we don’t fully understand, using tools that generate solutions we can’t replicate manually. We’re building our own digital dark age, one perfectly functioning but incomprehensible system at a time.


…vicious AI learns from me I learn from AI circle…

The Meta-Problem: Training AI on Human Code While Teaching Humans to Learn from AI

“The definition of insanity is doing the same thing over and over again and expecting different results.”
— Often attributed to Einstein (who probably never said it)

“The definition of modern development is training AI on human expertise while teaching humans to abandon expertise.”
— Definitely said by me, just now

There’s a recursive nightmare scenario that keeps me awake at night, usually around 3 AM when all my best existential dread tends to surface. We’re training AI systems on code written by humans who understood what they were doing — people who could explain their design decisions, debug their own mistakes, and adapt their solutions to changing requirements.

But we’re teaching humans to learn from AI systems that don’t understand what they’re doing — they just pattern-match very well, like a savant who can play beautiful music but can’t explain why certain chord progressions work.

The next generation of AI will be trained on code written by humans who learned to program by copying AI. The generation after that will be trained on code written by humans who learned from the previous generation of AI, and so on. It’s like that party game where you whisper a message around a circle, except instead of ending up with amusing nonsense, we end up with critical infrastructure that nobody understands.

This is quality degradation at a systemic level. We’re creating a feedback loop where each iteration is slightly less grounded in fundamental understanding than the last. It’s like teaching art by showing students only photocopies of photocopies — eventually, the copies become so degraded that you can’t even tell what the original painting was supposed to be.

The terrifying thing is that this degradation might be invisible for years. AI-generated code can be very good — often better than what a junior developer would write manually. The problems only become apparent when something breaks, when requirements change, or when you need to integrate with systems that weren’t anticipated by the original AI model.

We’re essentially conducting a massive experiment in knowledge transmission, and we have no idea what the results will be. We’re betting the future of our entire profession on the assumption that understanding how things work is less important than knowing how to make things work. It’s like replacing all the engineers with very sophisticated photocopiers and hoping nobody notices the difference.


…let’s have a look, child…

Solutions? (Real, Not Idealistic)

“It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change.”
— Charles Darwin

“It is not the smartest of the developers that survives, but the one who can still debug without asking AI for help.”
— Survival of the Practically Competent

Right, let’s be honest about solutions. Most articles like this end with a list of aspirational recommendations that sound great in theory but are about as practical as suggesting we solve climate change by asking everyone to be nicer to the planet.

So here are some actually achievable approaches, organized by who needs to do what:

For Juniors (The Archaeological Training Program):

  1. “AI-Free Fridays” — One day per week, work without AI assistance. Yes, you’ll be slower. Yes, you’ll get frustrated. Yes, that’s the entire point. Frustration is where learning happens.
  2. “Explain It Back” — For every line of AI-generated code you use, write a comment explaining what it does and why. If you can’t explain it, you don’t understand it well enough to ship it.
  3. “Build It Twice” — Once with AI, once without. Compare the approaches. Understand the tradeoffs. Realize that AI shortcuts often come with hidden complexity costs.
  4. “Debug First” — Before building new features, spend time fixing bugs in existing code. Debugging teaches you how systems fail, which is more valuable than knowing how they succeed.
  5. “Aqueduct Apprenticeship” — Spend time maintaining boring legacy systems. They’re boring for a reason — they work, they’re stable, and understanding them teaches you principles that transcend specific technologies.
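To make the “Explain It Back” exercise concrete, here’s what such a pass might look like over a typical AI-suggested retry helper. This is a hypothetical example, invented for illustration; the comments stand in for the understanding the exercise is meant to force:

```python
import random
import time

def retry(fn, attempts=5, base_delay=0.01):
    """AI-suggested retry-with-backoff; every comment is the 'explain it back' pass."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the real error instead of swallowing it
            # Exponential backoff: doubling the delay gives a struggling
            # service room to recover instead of hammering it on a fixed beat.
            delay = base_delay * (2 ** attempt)
            # Jitter: randomizing the wait keeps many clients from retrying
            # in lockstep (the "thundering herd" that doubling alone doesn't fix).
            time.sleep(delay * random.uniform(0.5, 1.5))

calls = {"count": 0}
def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

assert retry(flaky) == "ok"
assert calls["count"] == 3  # two failures, then success
```

If you can’t write those comments yourself — if you don’t know what the jitter is for, or why the last attempt re-raises — you don’t understand the snippet well enough to ship it.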

For Seniors (Our Overdue Accountability):

  1. “Actually Mentor” — Stop delegating teaching to AI and documentation. Schedule regular sessions where you explain not just what to do, but why you made specific decisions.
  2. “Document Context” — Write decision records explaining why you chose specific approaches. Future you (and future colleagues) will thank you when trying to understand six-month-old code.
  3. “Code Review for Learning” — Treat reviews as teaching moments, not just quality gates. Ask “do you understand why this works?” not just “does this work?”
  4. “Resist the Rush” — Prioritize understanding over shipping speed. Sometimes the fastest way to move fast is to move thoughtfully first.
  5. “Model Good Behavior” — Show how to use AI as a tool, not a crutch. Demonstrate the difference between AI-assisted development and AI-dependent development.
  6. “Admit Ignorance” — Say “I don’t know” when you don’t know. Stop pretending AI makes you omniscient. Show juniors that not knowing something is the first step to learning it.
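The “Document Context” habit needn’t be heavyweight. A lightweight architecture decision record captures exactly the “why” this article keeps mourning. The format below is a common ADR convention, and every detail in it is invented for illustration:

```
# ADR-014: Use JWT with refresh-token rotation for the public API

Status: Accepted (2025-03-10)

Context: The API is served from three stateless regions; a shared
session store would add a cross-region dependency on every request.

Decision: Short-lived JWTs (15 min) with rotating refresh tokens.

Consequences: Tokens cannot be revoked before expiry; we accept a
15-minute revocation window, mitigated by a refresh-token denylist.

Alternatives considered: Server-side sessions (rejected: cross-region
latency), long-lived API keys (rejected: no rotation story).
```

Ten lines like these, written while the decision is fresh, are the difference between maintenance and archaeology.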

For Industry (The Systemic Changes):

  1. “Infrastructure Incentives” — Make understanding systems financially rewarding. Promote people who can maintain and debug, not just those who can ship fast.
  2. “Knowledge Preservation Programs” — Document decisions for post-AI archaeology. Assume future developers will need to understand your reasoning without access to your AI models.
  3. “Gradual AI Adoption” — Don’t go from 0 to 100% AI assistance in one quarter. Leave room for human learning and understanding.
  4. “Cross-Generational Teams” — Mix pre-AI and AI-native developers. Create opportunities for knowledge transfer before the pre-AI generation retires.


…or maybe I am just a caveman at the rave party?…

The Flint-Knapping Fallacy: Or, How I Learned to Stop Worrying and Embrace Being Possibly Wrong

“The only true wisdom is in knowing you know nothing.”
— Socrates

“The only true development wisdom is in knowing your carefully crafted analogies might be complete bollocks.”
— Honest Self-Reflection

Before we reach any grand conclusions, there’s something rather important I should mention: I might be completely wrong about all of this.

Not wrong in the “oops, I mixed up my JavaScript frameworks again” way, but wrong in the fundamental “I’m trying to shoe-horn new realities into old mental models” way. It’s entirely possible that I’m like a master flint-knapper in the Bronze Age, perfecting techniques for creating the sharpest possible stone tools while completely missing the point that someone just invented metallurgy.

There’s a delicious irony in writing a lengthy treatise about how AI is making us lose fundamental knowledge while simultaneously using AI tools to help research historical parallels and catch my own grammatical errors. I’m essentially a Roman engineer warning about the decline of engineering knowledge while using Google Translate to read Latin inscriptions.

The uncomfortable truth is that every generation of technologists thinks the next generation is doing it wrong. Coders who learned on punch cards thought terminal interfaces were making programmers soft. Assembly programmers thought high-level languages were creating lazy thinkers. C programmers thought garbage collection was for people who couldn’t manage their own memory. And now we’re all clutching our manually-written algorithms and muttering about kids these days with their AI assistants.

Perhaps the real problem isn’t that junior developers are learning differently — perhaps it’s that I’m struggling to understand what “learning” even means in an AI-augmented world. Maybe the ability to effectively collaborate with AI is the fundamental skill of the future, and my insistence on “understanding how things work” is like insisting that pilots need to know how to navigate by the stars when GPS exists.

The research bears this out to some extent. Studies show that AI tools provide the most benefit to junior developers — they’re the ones getting 126% productivity improvements while seniors see more modest gains. Perhaps they’re not becoming dependent; perhaps they’re becoming fluent in a new kind of programming literacy that we don’t yet recognize as legitimate.

But then again, when GPS fails over the ocean, you really do want a pilot who can use a sextant. And when AI-generated systems fail in production — which they inevitably will — someone needs to understand what the code actually does, not just what it was supposed to do.

The patterns I’ve described — the knowledge loss, the dependency cycles, the archaeological programming future — these could all be symptoms of my own cognitive limitations rather than actual problems. I might be like those Victorian engineers who insisted that trains traveling faster than 30 mph would cause passengers’ bodies to disintegrate, or the film industry executives who thought television was just a fad that would never replace cinema.

After all, every technological transition has its curmudgeons, as that list shows. And here I am, continuing the tradition by worrying about AI making developers… what exactly? More productive but less knowledgeable? Is that even a problem if the productivity gains are real?

But until that question is settled, I’ll keep teaching people how to debug without AI assistance, just in case. Because even in the Bronze Age, it was probably useful to have a few people around who still knew how to knap flint.


…”I am you, you are me, we are One, don’t you see?”…

The Mirror Moment: What We’re Really Building

“We are all in the gutter, but some of us are looking at the stars.”
— Oscar Wilde

“We are all using AI, but some of us are still trying to understand the code.”
— Modern Development Realism

Let’s end with some uncomfortable honesty, shall we?

We’re not victims of AI taking over programming. We’re not heroes saving the profession from artificial intelligence. We’re just humans repeating historical patterns with slightly better tools and considerably less self-awareness.

The Romans didn’t wake up one day and decide to forget how aqueducts worked. It happened gradually, through a series of reasonable decisions that made sense at the time. “Why spend time teaching aqueduct maintenance when we could be building new infrastructure?” “Why document the engineering principles when the systems work fine?” “Why invest in knowledge transfer when there are more pressing priorities?”

We’re making the same decisions, just faster. We’re optimizing for immediate productivity while ignoring long-term sustainability. We’re building systems we can’t maintain, teaching skills we don’t possess, and solving problems we don’t fully understand.

But here’s the thing about historical patterns — knowing about them gives us a chance to break them. The Romans didn’t have the benefit of hindsight. We do. We can see where this trajectory leads, and we can choose to do something different.

The future doesn’t belong to AI. It doesn’t belong to humans either. It belongs to humans who understand how to work with AI while maintaining their own competence. It belongs to developers who can debug AI-generated code, who can explain why certain solutions work, who can adapt when the AI models change or fail.

It belongs to the people who can maintain what AI builds.

The Real Call to Action…


…and some obligatory preaching…

Don’t let AI make you a passive consumer of code. Learn the fundamentals precisely because AI exists. For seniors: teach, don’t just delegate to AI. Be the Roman engineer who writes down how the aqueducts work.

Because the alternative isn’t that AI replaces programmers. The alternative is that we become a profession of people who can’t program, managing systems we can’t understand, built by tools we can’t control.

And that’s not progress. That’s just a more efficient way to build ruins.

The author (aka “me”) is a (sic!) senior or at least a bit tetric (or both) developer who has spent some time trying to understand AI-generated code and is slowly coming to terms with the fact that this might be what getting old feels like. He can be found debugging legacy systems that nobody else understands, occasionally remembering to ship actual code, and muttering about “kids these days” while secretly using LLMs for his own pull requests. He writes here and there some para-existential rants about programming, society and the inevitable heat death of the universe, in roughly equal measure.

This story is published on Generative AI.