Over the past year I keep running into the same question—in code reviews, in team chats, and yes, on X: can software engineers still grow in the age of AI, or is the ladder of progression quietly disappearing?
The arguments on both sides are sharp, and the comment threads have been lively.
On one side, people worry that as AI takes over a large portion of repetitive coding tasks, newcomers are losing their “trial-and-error leveling-up” opportunities. Without that early grind, they fear the skill tree simply cannot branch out.
On the other side, many argue that better tools have never weakened programmers; if anything, they accelerate an engineer’s exposure to complexity and help them operate at a higher level of abstraction.
Both perspectives are meaningful. But to answer the question properly, we need to zoom out and look at the longer arc of computing history.
Stretched over several decades, the software industry shows a remarkably consistent trend: engineers move further and further away from the metal.
Early programmers wrestled with registers, stack frames, and memory offsets. C later let developers “write programs that read like human language.” C++, Java, and eventually Node.js pushed abstraction layers even higher. Today, most engineers working on internet applications have never manually managed memory, let alone written assembly by hand.
If abstraction truly made programmers weaker, how did we manage to build systems hundreds of times more complex than those of the past? How do we support global online collaboration and millions of concurrent users?
The answer is simple: abstraction doesn’t erode capability — it removes low-level labor, freeing a developer’s cognitive bandwidth for larger systems and harder problems.
That logic held true in the past, and it still holds today.
If you treat AI as “a smarter search engine,” you’ll only notice that it writes a few snippets for you or looks up APIs faster than you can. In that framing, yes, it looks like it’s just speeding up the easy parts. But that’s not the real revolution.
AI can understand natural-language requirements, summarize business rules, propose architectural approaches, generate tests and documentation, monitor code quality, and even surface missing edge cases after you describe your system. This isn’t “faster coding.”
This is a paradigm shift.
In the past, we learned programming by starting small — implementing individual functions. Today, a newcomer might write only a handful of functions before they’re already discussing data structures, interfaces, and module boundaries with an AI partner.
That means the path to growth is shifting too: from gradually accumulating experience through repetition, to rapidly acquiring higher-level skills through expression, modeling, and decision-making.
If someone insists on working the old way — treating AI as a faster StackOverflow or a better Google search — their attention remains fixed on the code itself, not on the systems behind it. They don’t build context, don’t write prompts that unlock deeper reasoning, don’t involve AI in design; they simply copy whatever the AI spits out.
These developers do risk becoming increasingly dependent on AI without improving their own judgment or abstraction skills.
In contrast, another group of engineers uses AI as part of their thinking process. They bring AI into requirements discussions, boundary exploration, trade-off analysis. They let it act as a second brain — raising alternatives, spotting blind spots, stress-testing architectural ideas. Once code is written, they also involve AI in testing, refactoring, documentation, and quality assurance.
In that relationship, engineers actually grow faster, because their time is freed almost entirely for high-value thinking: judgment, modeling, abstraction, and reasoning about complex systems.
History has shown repeatedly: rising layers of abstraction never weaken programmers; they raise the ceiling. AI is no exception.
Revisiting the two opposing views from the beginning, you’ll notice they aren’t contradictory. You can continue working the old way — or you can upgrade your workflow.
In the end, the gap is never created by the tools themselves.
It’s created by the choices people make when the landscape changes — including how we choose to work with AI.
I’ll try to write about once a week, though that may vary depending on what I’m building or what I’m stuck on.
If any of this sounds interesting, feel free to subscribe and read along.
Thanks for being here,
Tiger