AI-DLC Solves the Wrong Bottleneck
My org adopted AI tools across Engineering, Product, and Design. One of our teams had each engineer ship a full service, end-to-end, in a single week. Everyone's faster. So why aren't we shipping more of the right things?
One idea that has been gaining mindshare in our org as an answer is the AI-Driven Development Lifecycle (AI-DLC). It was introduced by a solutions architect at AWS and promises to rebuild the software development process from the ground up for the age of AI agents. But when I read about AI-DLC, something didn't sit right: it doesn't address the real bottlenecks our team is facing today, even though everybody across Engineering, Product, and Design already uses AI tools daily.
The Premise
The AI-DLC narrative typically follows a pattern: teams adopt AI tools, developers get dramatically faster at writing code, they find themselves idle waiting for direction, and so the process expands to let developers drive more of the lifecycle end-to-end, starting from a minimal product "intent." It's pitched as a necessary reimagining of the software development process from first principles, one that unlocks the full potential of AI agents. "We need automobiles and not the faster horse chariots." Ditch the old methods, which have "product owners, developers, and architects spending most of their time on...SDLC rituals."
Now, I am all for first-principles thinking here. But every meaningful change in how we build software has been an evolution, even the shift from waterfall to agile. "You can't retrofit current processes because AI changed the world" is a thought-terminating cliché. It prevents meaningful inspection of the real bottlenecks in how we develop software today, and it's a recipe for unlearning past lessons. In the end, all software goes through the basic steps of Scope -> Build -> Ship at some level. That's a real foundation we can build on.
And I definitely need to call this out: if you, as a software engineer, are spending most of your time in SDLC rituals, something is deeply wrong! The majority of your day should be focus time for deep work. If that's the problem AWS is solving with AI-DLC, then perhaps any process reset would help, simply as an excuse to wipe the meeting load and start fresh. (Hey, maybe try Kanban!)
Where Is the Bottleneck?
So let's go ahead and use first principles, but also look at our processes today and find the real bottlenecks. Can we adapt our current Scope -> Build -> Ship cycles to fit the age of AI?
We can definitely generate code faster today. Let's assume for now that this code is the same high-quality, well-understood, maintainable code we'd get with traditional methods. (The jury is still out on that; a topic for another post.) To move faster, we want to pull more implementation-ready ideas into developers' hands. For my team, that supply is what's currently running dry. AI-DLC responds by loosening the definition of implementation-ready: give developers intents rather than fully scoped specs.
But did we ask why that source is drying up? Product and Design are also using AI tools every day. Why is that not producing more scoped ideas for development? On our team, it's not because Product and Design are failing to move quickly. Just like developers, they have access to a sliding scale of speed versus quality when using AI assistance, and so far their outputs have been both faster and higher quality. I believe the bottleneck is further upstream, and now that code is fast, that upstream constraint is suddenly visible.
The Bottlenecks Are Still Human
If Engineering, Product, and Design are all using AI tools to compress our cycle time, the bottlenecks shift to the inherently asynchronous, human-heavy parts of the process: user understanding, user validation, and stakeholder alignment. Together, these help ensure the team is building the right thing (for the user and for the company). And yet they are the hardest to speed up, because they depend on external feedback loops and human judgement. AI-DLC attempts to address this through Mob Elaboration, but that runs into challenges. Let's look at each one and how we might solve it.
A Pipeline for Proactive User Understanding
AI-DLC relies on Mob Elaboration to make quick decisions in real time. But what happens if you need additional user discovery? Traditionally, Product would take that question, go off and do some discovery, and come back with an answer later. If decisions are going to happen in real time, there needs to be a ready repository of user insights. A continuous background process can keep it stocked through regular user interviews, support ticket analysis, and tools like Productboard. When it's time to build, you pull from this pool first rather than starting research from scratch.
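To make that concrete, here's a minimal sketch of what an "insight reservoir" might look like. The class names, fields, and sources are my own assumptions for illustration; nothing here is prescribed by AI-DLC or Productboard.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch only: names, fields, and sources are assumptions.
# The point is that insights are captured continuously in the background
# and queried at scoping time, rather than researched on demand.

@dataclass
class Insight:
    source: str        # e.g. "user interview", "support ticket", "survey"
    topic: str         # product area the insight relates to
    summary: str
    captured_on: date

@dataclass
class InsightReservoir:
    insights: list[Insight] = field(default_factory=list)

    def add(self, insight: Insight) -> None:
        """Called by the continuous discovery process as findings come in."""
        self.insights.append(insight)

    def lookup(self, topic: str) -> list[Insight]:
        """Called during scoping: check the pool before commissioning new research."""
        return [i for i in self.insights if i.topic == topic]

# Discovery runs in the background, on its own cadence...
reservoir = InsightReservoir()
reservoir.add(Insight("support ticket", "reporting",
                      "Several customers expect CSV export of reports", date(2025, 5, 2)))

# ...and when an intent about reporting comes up, the team checks the pool first.
if not reservoir.lookup("reporting"):
    # No insight available: either make a timeboxed call now or defer the work.
    pass
```

The specific tooling matters far less than the contract: discovery writes into the pool continuously, and elaboration reads from it in the moment.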
However, there will still be times when the answer is not readily available. At that point, you're left with a choice between making an uninformed decision in the moment and delaying the work. Since cycle times are short, a wrong call is cheap to revisit, so the cost of the uninformed decision may not be that high. (That's one reason Scrum uses timeboxes, after all.) The specifics of that tradeoff will vary from company to company and product to product.
If user understanding and discovery are difficult, the risk is that teams will bias towards work that requires little discovery, like bug fixes or performance optimizations. Your team will feel very productive while delivering less overall value.
Decouple Validation Choices
After shipping something, how do you validate whether it's delivering on its goals? AI-DLC doesn't address this. Validated learning takes time to collect, as real users interact with the product. This doesn't need to run at the same speed as the development cycle. A feature can ship on Monday, another on Tuesday, and validation results from last week's feature come back on Thursday (and feed into the insight reservoir). The build cycle is daily but the learn cycle might be weekly. That's OK, as long as it's actually happening and feeding back in.
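As a toy illustration of that decoupling, here is a sketch of a daily build loop and a weekly learn loop running independently of each other. The function names and the Thursday review are assumptions for the sake of the example, not part of AI-DLC or any framework mentioned here.

```python
from datetime import date, timedelta

# Toy model only: the daily/weekly split and function names are assumptions.

def build_loop(day: date) -> None:
    """Runs daily: pull the next scoped item from the backlog, build it, ship it."""
    print(f"{day}: shipped the next scoped item")

def learn_loop(day: date) -> None:
    """Runs weekly: collect validation results for recently shipped work and
    feed them back into the insight reservoir and the priorities."""
    print(f"{day}: reviewed usage data, updated the reservoir and backlog")

start = date(2025, 6, 2)  # a Monday
for offset in range(14):
    today = start + timedelta(days=offset)
    if today.weekday() < 5:   # shipping continues every weekday...
        build_loop(today)
    if today.weekday() == 3:  # ...while learning lands on its own cadence (Thursdays here)
        learn_loop(today)
```

Neither loop blocks the other; they only share the reservoir and the backlog.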
If we're not routinely validating what we build with real users, we risk building the wrong things: shipping a lot of features that nobody uses.
Lightweight Stakeholder Alignment
Smaller batch sizes should make stakeholder alignment faster and easier; there's less to disagree about when each unit of work is small. However, without a regular inspection point, ten individually reasonable daily decisions can add up to strategic drift that nobody intended. AI-DLC doesn't address where the "intents" are coming from either, so presumably there is still some cadence for gathering those from stakeholders. It's almost certainly not daily (there's too much cross-functional collaboration needed), but maybe weekly or biweekly.
Evolution, Not Revolution
If we take this all into account, what are we left with?
- Daily cycle: Scope -> Build -> Ship, agent-heavy execution on well-understood work drawn from a prioritized backlog.
- Continuous discovery: User research pipeline feeding an insight reservoir. Human-driven, async, running on its own natural cadence.
- Weekly cadence: Direction check and validation review, feeding learnings back into the reservoir and adjusting priorities.
The overall process has recognizable components of Scrum (a weekly inspect-and-adapt loop), Kanban (a focus on flow and limiting work in progress), and Lean Startup (validated learning). This makes sense: those methodologies arrived at their structures by solving fundamental coordination problems under uncertainty, and AI didn't eliminate those problems.
Instead of throwing out the old, if we want to squeeze the most productivity out of our new AI colleagues, we need to drill down to the true bottlenecks. Today, that means building a support system for the human-oriented, naturally asynchronous parts of the process.
Cheers!