Two predictions about AI in software development have hardened into conventional wisdom. The first says developers are about to be made obsolete. The second says software itself is about to be made obsolete, because anyone with a laptop can ask an AI to build whatever they need, for free.
Both predictions confuse a demo with a product. And both miss the only resource that has ever mattered in this industry: time.
What is actually happening is simpler than either prediction. AI will not make software free. It will make iteration cheap. The rest is consequence.
I started with DHTML and C++, moved through PHP, ActionScript on Flash, Java, Python, and Go, and today work mostly in PHP, Python, modern C++, ECMAScript, and TypeScript. I’ve built and led engineering teams, set up the workflows that let them ship quality at pace, and these days I spend most of my time on product development. I’ve taken products from a blank repository, through market validation, to years of iteration. That full cycle is the lens I bring to AI.
I’ve also spent serious time inside the current generation of AI development tools. Garry Tan’s gstack (https://github.com/garrytan/gstack/) is the cleanest demonstration I’ve seen that AI-native shipping is not one prompt. It is a stack, a workflow, and a loop, and getting from idea to shippable product still takes many tools, many cycles, and many revisions.
Yes, an AI will build you a Flappy Bird clone in a single prompt. No, that is not a finished product.
A demo proves something is possible. A product proves it works when it shouldn’t. A finished product has onboarding, error states, accessibility, observability, support flows, an upgrade path, and a thousand small decisions that someone had to make and refine. Going from a prompt to a product that real people pay for and rely on takes days of prompting, testing, refining the result, refining the prompts that generated the result, and then doing it all over again.
That is the new software development cycle. It is not “ask and receive.” It is “ask, receive, evaluate, adjust, repeat, ship.” Faster than the old cycle. Not free.
Claim one: AI removes developers.
Not in the way people mean. Some roles will narrow or disappear, especially the ones that exist only to translate a clear specification into competent boilerplate. What survives and grows is the work that was always the harder half: judgment about what to build, taste about how it should feel, architecture that survives contact with users, debugging the strange edge cases no model has seen, integration with messy real systems. AI raises the floor on what an inexperienced builder can produce. It also raises the ceiling on what an experienced one can ship in a week. The output gap between the two has narrowed. The judgment gap has not.
There is a complication buried inside this, and it deserves its own name. The typing AI removes is the same typing that used to turn juniors into seniors. The ten thousand small frustrations of fighting a compiler, owning a bug from report to root cause, refactoring code you wrote six months ago and now hate: that grind is what produced the judgment we now say AI cannot replace. If we automate the apprenticeship and keep none of it, we end up with a generation that prompts fluently and reasons about systems poorly. I don’t have a clean answer for this. I do know that any team treating AI purely as a way to skip junior work, rather than as a teaching surface around it, is going to be short on senior judgment in five years.
Claim two: AI removes the need to buy software.
There is a version of this claim that is correct, and it is worth conceding before dismantling the rest. Internal dashboards, one-off workflows, the long tail of lightweight CRUD apps that existed only because writing them used to cost too much time: those genuinely do get compressed into prompts. That category has been living on borrowed time for a while. Strip it out, and the real question becomes whether the products that remain, the ones people open every day and depend on, also get eaten.
I don’t think they do.
Software has always been a machine for turning someone else’s time into your time. AI doesn’t change that. It just makes the machine faster to build. The moment you start building your own tools instead of buying them, you are spending the very resource the software was supposed to give you back. You are paying with hours instead of dollars, and hours are the only currency none of us can earn more of.
When you buy software, you are not really buying code. You are buying the compressed output of someone else’s hundreds of refinement cycles. Their taste. Their bug fixes. Their support. Their willingness to keep maintaining the thing. You are also buying someone to call when it breaks, someone to blame when it goes wrong, and someone who will still be there next year. That value does not disappear because the building got faster. If anything, the building getting faster makes the gap between “thing I prompted into existence yesterday” and “thing a team has refined for two years” more visible, not less.
So in the short term, people will keep paying for software. They will pay the people who have already done the multi-day, multi-cycle, multi-evolution work of getting a product to a state worth using.
What does change in the short term is the cadence.
A two-week sprint, in many of the workflows I run, now compresses to two to four days of focused work. Feature requests that used to sit in a backlog for two or three months should now ship within a week. Not every feature. Compliance, deep migrations, real architectural shifts inside legacy systems still take what they take. But far more qualifies than most teams currently admit. Not because the work disappeared, but because the loops inside the work got shorter: faster prototyping, faster iteration, faster review, faster test scaffolding.
What gets faster also gets harder in places people are not yet looking. AI doesn’t remove complexity. It relocates it. You spend less time writing code and more time validating code you didn’t fully author, debugging behavior you can’t fully trace, and constraining a system that will happily generate ten plausible solutions when you needed the one correct one. The sprint compresses, but the surface area of “things that can quietly go wrong” expands. Teams that ship at the new cadence without upgrading their review, observability, and testing discipline are not actually shipping faster. They are shipping debt faster.
There is also a human ceiling that nobody is pricing in. The code got faster. The brain reasoning about the code did not. Five compressed sprints in a week is not a three-day weekend. It is a high-speed burnout machine if leadership doesn’t change what gets demanded of people in proportion to what got automated.
The bottleneck is no longer writing code. It is deciding what deserves to exist, and recognizing when something is actually done.
This puts a hard demand on product teams. If your process still assumes a quarterly release rhythm, you are leaving most of the value of these tools on the table, and a competitor with an AI-native cadence will learn faster than you can. The question every product leader should be asking right now is simple. What in our process exists only because building used to be slow, and what can we delete now that it isn’t?
If your honest answer is “feature requests still take us a quarter to ship,” you do not have an AI problem. You have a workflow problem. Fix that first.
The next shift is more interesting. Today, software ships in sizes. Small, medium, large. Free, Pro, Enterprise. The user picks the closest fit and adapts to it.
In the medium term, software starts to tailor itself. Instead of choosing a size off the rack, the user gets a suit that fits. They start from a sensible medium default and refine it, in natural language, in real time, until the product matches the way they actually work.
A note on what tailoring actually means here, because the obvious version of it is a trap. It is not every user getting a different-looking UI. Down that road lies a support team that cannot reproduce any user’s screen, a QA matrix nobody can run, and analytics that mean nothing because no two sessions share a layout. The tailoring that matters is functional, not visual. In a CRM, it is not a different sidebar color. It is the system learning which leads actually convert for this particular rep, which follow-ups should auto-trigger, which fields should drop out of the daily view because nobody on this team has used them in a year. The surface stays recognizable enough that a customer success agent can still help. The behavior underneath shapes itself to the person.
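To make the distinction concrete, here is a minimal sketch of what functional tailoring could look like, using entirely hypothetical names (`FieldUsage`, `dailyViewFields`, the field list). The point is architectural: behavior adapts per user, but the schema and layout stay fixed, so support and QA still see a recognizable product.

```typescript
// Hypothetical sketch of functional tailoring in a CRM.
// Field *behavior* adapts per user; the schema and surface stay fixed.

type FieldUsage = { field: string; daysSinceLastEdit: number };

// Fields a given user has not touched in `staleAfterDays` drop out of
// the daily view, but stay in the schema and can be restored anytime.
function dailyViewFields(
  allFields: string[],
  usage: FieldUsage[],
  staleAfterDays = 365,
): string[] {
  const stale = new Set(
    usage
      .filter((u) => u.daysSinceLastEdit > staleAfterDays)
      .map((u) => u.field),
  );
  return allFields.filter((f) => !stale.has(f));
}

const fields = ["name", "email", "fax", "lead_score"];
const usage: FieldUsage[] = [
  { field: "fax", daysSinceLastEdit: 900 },
  { field: "email", daysSinceLastEdit: 2 },
];

// "fax" quietly leaves this user's daily view; nothing else changes.
console.log(dailyViewFields(fields, usage));
```

The same shape extends to auto-triggered follow-ups or per-rep lead scoring: the personalization lives in data and thresholds, not in a forked UI.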
There is a harder problem sitting underneath all of this: trust. Software that quietly makes decisions on the user’s behalf only works if the user can see the logic the moment they want to. A CRM that silently archives a lead because it scored “low quality” and was wrong has not saved time. It has lost a deal and broken faith. The medium term challenge for tailored software is not generation. It is explanation. The user has to be able to ask the system, in plain language, why it behaved that way, and change the answer without filing a ticket.
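One way to sketch that requirement, again with hypothetical names (`Decision`, the lead id, the reason strings): every automated action carries its own plain-language rationale and its own undo path, so “why did you do that?” never becomes a support ticket.

```typescript
// Hypothetical sketch: an automated decision that explains itself
// and can be reversed by the user in one step.

type Decision = {
  action: string;    // what the system did
  subject: string;   // what it did it to
  because: string[]; // plain-language reasons, shown on demand
  undo: () => void;  // reversal travels with the decision
};

const archived: string[] = ["lead-4812"];

const decision: Decision = {
  action: "archived",
  subject: "lead-4812",
  because: [
    "no reply to 3 follow-ups over 60 days",
    "scored below this rep's historical close threshold",
  ],
  undo: () => {
    const i = archived.indexOf("lead-4812");
    if (i >= 0) archived.splice(i, 1);
  },
};

// The user asks why, disagrees, and reverses it without filing anything.
console.log(
  `${decision.action} ${decision.subject}: ${decision.because.join("; ")}`,
);
decision.undo();
```

The design choice is that explanation and reversal are part of the decision record itself, not features bolted on afterward.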
The discipline here is to keep asking one question on every feature, on every screen: what can I change in the process so the user saves the maximum amount of time? If the answer is “let them shape this themselves without filing a feature request,” you have found the medium term opportunity.
Anything I say about five to seven years out is speculation, and I want to be honest that I do not feel qualified to make confident predictions on that horizon.
What I do think holds across whatever specific form factor wins: the interface will shrink, and the distance between user intent and software execution will collapse. Today you click through ten screens to do what you wanted. Eventually you say what you want and the system does it. Whether that happens through a laptop, through glasses, through ambient devices, through something we have not seen yet, I genuinely do not know. The principle is the bet. The hardware is anyone’s guess.
What I am more confident about is the direction. The shape of the work shifts from typing to judgment, from sizes to tailoring, from quarterly cadence to weekly. The role of the experienced builder does not vanish. It moves up the stack, toward taste, toward systems thinking, toward the parts of product development that AI is least equipped to do alone.
I started in the Flash era. I watched a single tool hand a generation of designers the power to become developers, and I watched most of them wash out. The people who won that wave were not the ones with the slickest demos. They were the ones who used the new capability to actually solve someone’s problem, and kept solving it after the novelty wore off.
I think this wave rhymes. AI didn’t make software free. It made iteration cheap. The people who learn to iterate with taste, with judgment, and with the discipline to know when a thing is finished, will define the next generation of products.
That is my read. I would genuinely like to hear yours. Where do you see software changing today, and what do you think the next two or three years actually look like inside your own work? What is one part of your workflow that has been ‘compressed’ by AI, and one part that has become unexpectedly harder?