Something on my mind recently is Stack Overflow, and the almost-meme that went around about new and intermediate developers a few years ago. The critique was that new devs were just copy-pasting from Stack Overflow without understanding what they pasted, and weren’t really learning to code. Most of the great developers I have seen come up in the last fifteen years started, at some point, with an idea and no clue how to build it. They went to places like Stack Overflow to find code other people had written that might work. They copy-pasted first. They learned what the code did later.
There was a lag effect. They learned which code worked before they learned what the code did. Over time, that gap closed: as their experience and mental models improved, the time it took them to understand what they had copied kept shrinking, and at some point the order flipped. They were reading first, pasting second, then writing from scratch.
A lot of people are writing articles about how AI is making people stop thinking. Honestly, I think this is the same meme as the one about copy-pasting from Stack Overflow. The shape is identical: someone takes a shortcut to fix a problem, and someone else criticizes them for not understanding what they pasted. But there was a lag effect there too, and it produced great outcomes for the people who learned to ask the right questions and to figure out what a snippet was missing before they reached for it.
Today, with coding harnesses, with LLMs and agents that are getting smarter and more nuanced, what's interesting is that the function of this phenomenon is the same, even though the snippet has become the whole thing. Someone asks a good question. Someone writes a good prompt. Someone defines good outcomes. The model produces code. The code works, and they don't yet know why. But the time it takes to reach the point of knowing why is much shorter than it used to be.
The function is still happening. People are still learning. Things are still advancing.
What I have seen with the people around me is that this is playing out faster than it used to, not slower. I have watched someone go from prompting their first script to reading harness diffs and pushing back on them inside eighteen months; a year and a half earlier they could not write a line of code. I felt the same thing happen to me recently. I tried building a video codec a few weeks ago. I had never built one. A day and a half in, I had something working, nowhere near the quality of H.264 or H.265, but working. What I had also done by then was learn what it takes to build a codec. The agent told me, more than once, that the codec was good, but it could not actually show me practical results when I asked. The function had fired. People are progressing faster than I have seen knowledge progress in years, and it looks very similar to how it played out before. Just faster.
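To give a flavor of what "something working" means at the toy end of that experiment: a minimal codec sketch, assuming a simple keyframe-plus-frame-delta scheme with run-length encoding of the deltas. This is my own illustration, not the codec from the anecdote, and nothing like H.264 (no transforms, no motion search):

```python
# Toy frame-delta codec: store the first frame whole, then only the
# per-pixel differences for later frames, run-length-encoded so that
# long runs of "nothing changed" collapse to a single pair.

def rle_encode(values):
    """Run-length encode a flat list of ints into [value, count] pairs."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

def rle_decode(pairs):
    """Expand [value, count] pairs back into a flat list."""
    out = []
    for v, n in pairs:
        out.extend([v] * n)
    return out

def encode(frames):
    """frames: list of flat pixel lists, all the same length.
    Returns (keyframe, list of RLE-compressed deltas)."""
    key = frames[0]
    deltas = []
    prev = key
    for frame in frames[1:]:
        deltas.append(rle_encode([c - p for c, p in zip(frame, prev)]))
        prev = frame
    return key, deltas

def decode(key, deltas):
    """Rebuild every frame by applying deltas to the previous frame."""
    frames = [key]
    prev = key
    for d in deltas:
        frame = [p + x for p, x in zip(prev, rle_decode(d))]
        frames.append(frame)
        prev = frame
    return frames
```

On mostly static footage the deltas are almost all zeros, so the run-length pass collapses them to almost nothing; real codecs are, very loosely, elaborations on that one observation.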
What is different is where the friction sits. The old loop forced you through the friction: the snippet did not import cleanly, the variable names collided, the function signature was off by one, and you had to read the code to make it run. The harness absorbs most of that now. The lag-closing has become opt-in instead of automatic. It now passes through review or extension, not through compile or run. The friction has moved, not vanished. It lives in boundary errors, in the context cliff you hit at module thirty, in code that is confidently wrong inside your domain. Stack Overflow forced the loop on you. The harness offers it. You have to take it.
I think the reason people are not catching this is that more people are picking up code than ever before, and from the outside it looks like nobody knows what is going on, like nobody understands what the AI is doing. The lag effect is just different. It is no longer a copy-paste you can trace back to a specific engineer on a specific thread; today it is an LLM doing it. And because the loop is opt-in now, some people won't take it. The harness doesn't care, and the deploy goes out either way. You can spot them, because they're shipping more but can't walk you through what they shipped. Humans haven't outsourced all their thinking. They are thinking in a different pattern from the one we are used to, and at a scale we have never seen before.
One thing worth saying as we move through this phase, as we build things the way we are building them: slowing down matters more than ever. We cannot absorb things at the speed an LLM builds them. We have to make time to go for long strolls. To stare at a wall. To read the diff before accepting it. To extend the module without the agent in week three. The understanding does not arrive on its own anymore; you have to go and get it. What I learned about getting agents to refine work recursively is something I now use every day.
So the one note I will end on is this: don't get so caught up in the speed of development that you skip learning what you developed. But don't rush to do that in a day either. Do it over four weeks. Or eight. That's still faster than what you did a decade ago.