The Myth of the 'Waterfall' SDLC (2019)
bawiki.com

The problem with software development methodologies is that they are sold as a solution for problems that arise from organizational/cultural dysfunctions. Up-front analysis is not the problem; the problem is that the people charged with implementation are not given the big-picture view and the flexibility to adapt the solution.
Instead of using the term waterfall as the counterpoint of agile I prefer to call it the human centipede model. In this model all of the vision, creativity, and flexibility stays with the head and the rest of the centipede just eats their shit. Developers can't see further than the next person's ass and have no idea why they are actually building the functional specifications that are fed to them. Implementation becomes completely disconnected from design which leads to compromised quality, missed deadlines, and products that miss the mark.
No task management framework is going to solve these problems.
The sad fact is just how many organizations are so utterly dysfunctional and how many people follow the centipede model without even realizing it.
> Up front analysis is not the problem
It depends. The idea of working in short sprints with deliverables at the end of each is that often you do not know what you need, and the best way of finding out is by trying things and talking to the customer.
The place I'm currently working at is agile on paper but 90% of the work is just implementing external requirements we have no say in. Needless to say, not analysing the problem up front and not designing the general architecture up front leads to building roadblocks later on that prevent getting work done.
To stay with your metaphor: Sometimes you have to eat shit (because of legal requirements, etc.). Planning ahead is very useful in those situations for not choking on the shit.
I call this the problem of noise and loss of information. If someone has to write a specification and move on to another one, there will be noise. Imagine customer -> product -> analyst/UX -> dev, etc. At each level, a little bit of information is lost. One possible solution is that the devs are partly involved in all phases of the specification and design. This could reduce the problem of noise and loss.
One way to fight information loss is through error correction. That means a feedback loop between the stages in this process where downstream results are reviewed by upstream actors to evaluate their suitability. Agile tries to build that in by iterative work, demos, and retrospectives.
Is it just a problem of inadequate specification? Because the methodology seems to work with physical systems.
If I design a machine and I need a gear or a cam or even a more complex component, I can send the specifications for that part to a machinist who knows nothing about the "big picture" of what I am building. Yet he can make that part to perfection. Does this lead to the same sort of problems with the development of physical systems?
> Does this lead to the same sort of problems with the development of physical systems?
I would argue that physical systems aren’t developed in a “waterfall” method.
In mechanical design classes in engineering school, we learned to start with low fidelity sketches to capture customer intent. We come up with several options, build those into increasingly higher fidelity 3D models, use simulation to refine, take a few candidates and get physical prototypes, do physical testing (strength, endurance, integration, etc.), determine the best one, then go into limited production to prove out and establish the production process, then go into full production.
We are taught (and in the automotive industry it’s a requirement) to have cross functional teams involved in the design stages of both the item and the manufacturing process.
To your point about inadequate spec: My opinion is there are so many different backgrounds coming into software that there is no common language or background.
I think everyone thinks their way is better and it’s hard to communicate technical ideas when you need to constantly recreate and translate between terminology and documentation methods. This lack of convergence and common knowledge is what I think results in poor specs.
The problem with comparing software development to manufacturing physical artifacts is that the phases get mixed up. The manufacturing step, creating the artifact, is in the case of software a wholly automated process usually done by a compiler (or similar). The compiler doesn't need to know shit about the big picture and usually produces the artifact exactly as specified, just like your machinist.
So software development is all about writing down the specifications precisely enough for the compiler to create the artifact. Conflating this process with manufacturing is a source of many of the problems in software development.
Royce's waterfall model is what was actually being used until the mid-2000s in most software houses. And it was one of those processes that looks good on paper, but doesn't actually save you time or effort.
I don't know where this "rigid" model comes from, but I never encountered it in the early days. Everyone knew you couldn't just complete a phase in totality and then move on to the next; there had to be overlap and stepping back up to an earlier phase as you discovered new things.
I suspect the "rigid" variant is merely the hyperbole that waterfall has been reduced to since there are no more proponents of it left. Doesn't make the Royce variant useful, though.
It's not strictly an artifact of BigCorp From The Past. Frankly this model is what has been followed in most teams I've been on at Google, well into this part of the 21st century.
PRD -> design doc -> code -> launch -> promotion.
Of course, it's broken down and full of a mess (like a real waterfall) at most stages.
I feel like in a way it was a disservice to my career having a decade and a bit of work in smaller companies (where we often shipped quicker and with a less dysfunctional process) before coming to Google, because the people around me seem to find the way things are... all quite fine and dandy while I just stare in cynical distaste.
The new team I'm on seems better, though.
You might be attaching too much importance to shipping quickly. Being promoted quickly is the metric of importance.
It's a model that allows contractors/consultants to declare a phase of the project "complete" and bill for it. And then to bill for change orders/rework after that. It's not about building good software, it's about making money building software.
Let's not forget upfront and arbitrary budget and time constraints from the customer side.
"We want the project to be delivered by X, and we have a budget of Y. Our specs are Z, but not really, so factor in A refactorings (A for arbitrary). Oh, and we don't have Q for Quality defined."
Especially prone in government and military contracts. Throw in some arbitrary public procurement rules so that Y is allegedly not disclosed, except to the select contractors that somehow know what the desired Y is and what the competition is bidding before bid close.
You get the F-35, which pragmatists say was never meant to fly, just to make the players rich in the process of failure.
Sauces: https://www.forbes.com/sites/davidaxe/2021/02/23/the-us-air-... https://pando.com/2015/09/24/war-nerd-why-f-35-albanian-mush...
Until 2010, I only worked in startups, small outfits. So I can't speak for "all software houses".
Speaking with refugees from megacorps, I do have the impression that bigger projects had different people (subteams) doing the different phases, with little overlap. I have no trouble imagining how that'd quickly become a Kafka nightmare.
You'd generally have PMs who had little contact with the engineering team proper. They'd take requirements from the execs, turn them into design documents, and then pass them to the engineering team to implement. So there would be a fair amount of friction, yeah. And lots of DFDs and ERDs that didn't match what was really happening ;-)
But then again nobody on the engineering team took them too seriously anyway, so it kind of worked out.
The main problem in those days was centralization of decision making at the top, where the decision makers had the least exposure to the reality on the ground. The military legacy of computer systems, I suppose...
Ya. I think that's just a very hard problem. Mostly due to scale running up against organizational psychology.
I helped a friend with some property tax software for governments. Technically trivial, organizationally almost intractable. Not for me. I deeply respect the people who can swim those waters.
Nice work.
Agile Methodology was a self-defense coping measure for dealing with psychotic customers who cannot or will not do proper project management.
It was never meant as a replacement for PMI.
IMHO, "waterfall" is not having feedback loops, iteration. All of the methodologists (I read) in the 90s cautioned against "throwing it over the wall", as detailed in this OC's description of rigid sequences.
Plenty of today's "Agile" lacks proper feedback loops. Where primary assumptions are not revisited, when new information does not lead to course corrections.
Maybe "waterfall" is just dysfunctional communication, where reasonable people aren't talking to each other.
Anyway. This is a great write-up.
A point often missed when talking about the SDLC in client work is that the client is a major factor in the success of the project and is not always up to the task. I can see how agile could be used as a way to cope with a difficult customer.
The thing with client/customer work is that the clock is ticking at a constant rate and the budget is decreasing at a variable rate. I'd like to see research done on methods that try to ensure the two run out at the same time. If you finish on time with budget remaining, you've left money on the table. If you run out of budget before the deadline, you're in the red. The goal is to run out of budget the moment you reach the deadline.
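To make the idea concrete: landing budget and deadline together just means your average spend per remaining week has to equal remaining budget divided by remaining time. A minimal sketch of that check, with all numbers made up for illustration:

```python
def required_weekly_burn(budget_remaining: float, weeks_remaining: int) -> float:
    """Average spend per week that exhausts the budget exactly at the deadline."""
    return budget_remaining / weeks_remaining

# Hypothetical project numbers (not from the comment above).
budget_remaining = 120_000.0   # dollars left
weeks_remaining = 16           # weeks until the deadline
current_burn = 9_000.0         # dollars actually being spent per week

target = required_weekly_burn(budget_remaining, weeks_remaining)
if current_burn > target:
    status = "in the red before the deadline"
elif current_burn < target:
    status = "leaving money on the table"
else:
    status = "on track to run out exactly at the deadline"

print(f"target burn: ${target:,.0f}/week -> {status}")
```

Here the target burn is $7,500/week, so spending $9,000/week means going into the red early; the hard part in practice, of course, is that the real burn rate is neither constant nor fully under your control.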
I recently took a mandatory SAFe course. The introduction consisted mainly of justifying the practice by comparing it to the waterfall method, the latter being defined absolutely ridiculously: The client is not allowed to see incremental versions until the entire thing is completely finalized, maybe 2 years later. The initial design is not allowed to be altered. Testing can't start until the entire application is done, etc.
If you have to misrepresent the alternative (a false dichotomy anyway), then maybe you don't have anything of value to bring to the table. That's what it felt like to me.
SAFe is the opposite of agile. It's what companies practicing waterfall (or any other heavyweight, non-agile process) turn to when they don't understand the problem, lack the will to solve it, but are compelled by external forces to make a change. When the board issues an edict to "adopt agile" because someone told them it was good, SAFe is what they turn to. It's anti-agile. Not only does it not solve the problem, it adds even more overhead. There's a whole industry built around the lie.
> The client is not allowed to see incremental versions until the entire thing is completely finalized, maybe 2 years later. The initial design is not allowed to be altered. Testing can't start until the entire application is done, etc.
TBF, in the defense and defense contracting world that is not an unusual situation. It's a terrible situation, but it's not unusual. I've seen multimillion and multibillion dollar mistakes caused by this repeatedly.
It's improving, but SAFe is not the answer. If it were, the F-35 may not have been such a clusterfuck. SAFe is overly prescriptive and inflexible, it's a way to make managers happy but does little to actually improve the situation.
You just have to look at the SAFe diagram to see that it’s complete insanity. Basically the pipe dream of managers who want a fully automated, predictable process.
I like how, outside of the central diagram, there's random buzzwords not connected to anything.
“client is not allowed to see” lol yeah that is so removed from reality as to be laughable. If I told a client you can’t see what we have the client would reply “ok, you can’t see a bank deposit until I get a demo of progress”.
In the defense contracting (and much of government contracting) world the way things are paid out is on hitting milestones. Milestones are not, necessarily (in the best case they are), incremental releases that could possibly stand alone without much further development.
https://acqnotes.com/acqnote/acquisitions/milestone-overview
This is literally the way the US DoD has done acquisition for decades. In the early phases there are often (but not always) multiple R&D efforts which are evaluated later (in the aircraft world, think of the various X aircraft that compete to become the next fighter). But once one is selected, it's very waterfall-esque within the context of the program office, which encourages (but does not mandate) a waterfall approach from whoever is doing the work as well. They do periodic check-ins, but they're basically waiting for the entire thing to be produced before it can be evaluated properly. The check-ins are just smaller milestones, again not necessarily increments which could stand alone if the project were to stop there.
And yes, US DoD does big bang testing at the end. They have test organizations which receive software/hardware systems (cyber-physical systems is a common phrasing for them) and evaluate them at the end of the development process. Ideally, of course, the development team is also doing testing. But even then, in software, they tend to do scattershot testing during development with one big bang test run at the end.
Presumably this is in a context where the money is paid out in a very small number of batches upon hitting milestones, and the majority upon finalizing. There it wouldn't surprise me to see "progress only gets shown at the milestone dates/review meetings/..." (likely still a bad idea, unless some pathological politics are at work on the client side).
It's a compelling simple story for the decision makers that only requires them to spend other people's money and avoid changing themselves or their organization. Irresistible.
The difference between what mainstream agile is and what the original authors of the agile manifesto were trying to achieve is vast.
This talk from one of the authors is an amazing rebuttal of what agile has become and clarification of what it is supposed to be https://m.youtube.com/watch?v=a-BOSpxYJ9M
I was on a project where we used Agile. The team genuinely wanted to improve itself over time and we didn't have hard due dates. The thing was done when it was done.
Despite that, something felt really off. Requirements and scope kept popping out of nowhere. We tried moving onto something new yet requirements kept popping up from things we didn't consider.
The project was somewhat of a legacy refactor, so it was easy to say "just redo X in system Y," but for some reason it didn't work out that way. I think this project could have used some of the old waterfall paradigm, where you did a lot of the requirements analysis up front.
Sure, you're not going to capture everything, but in this case I think it would have helped everyone involved if there was more foresight put into it up front instead of continually bumping things into the night and making stories for it.
Once devs and testers get traction in their work, what happens is the business side slacking off. Then, when approaching hard dates, stakeholders suddenly wake up and demand changes to everything, often just to leave their mark or to hide that they dozed off. This happens even though the scope was agreed upon. Keeping it in line demands PM skills, not agility.
No process will save people from themselves.
well said
Very true. The decision makers kick something, fall asleep during the work and then start to panic close to the end. Agile would require constant engagement by the business people which I have seen very rarely.
What you needed was somebody empowered to judge the cost-benefit of those requirements and disallow low gain ones. (And if they were all high benefit, then things were just working as they should.)
If you tried to collect the requirements up front, the only thing you would achieve is either stalling the project before it starts (which could be a good thing), or doing the exact same process, but only after a months-long delay that created a bunch of stuff you would simply throw away.
I agree with you, and I definitely don't think the up-front approach is great. I just wish there was a way to have some kind of forward thinking instead of low-gain requirements that randomly appear.
Yeah. I don’t understand why people like to jump from one extreme ideology to another extreme. First it was “plan everything ahead in detail” which then got converted to “plan nothing ahead”. Both are insane and don’t work. I always ask where the decision makers see a project in maybe 3-5 years. Having that long term vision gives a lot of insight into possible software architecture decisions. You still can make changes but at least the general direction is right.
Every successful project I've been on has narrowed scope approaching releases. Usually because new QA/QC work crowds out new features.
A bit like the KickStarter campaigns: bucket features into MVP, stretch goals, and bonus.
This was mitigated by having a tempo for releases, so everyone knew those extra features were coming soon.
I agree with TFA that 'Waterfall' is substantially a mythical creation in terms of formal definition.
Nobody defined 'Waterfall' as contrasted with Agile any more than 'they' argued constant climate as the Climate Change folks would have it.
Nevertheless, I have seen, especially in a government context, the very bureaucratic rigidity that the Agile practitioners decry. SAFe wasn't begotten in a vacuum or merely as a sales driver.
This talk describes how the waterfall development model was a caricature that got accidentally turned into reality: https://www.youtube.com/watch?v=NP9AIUT9nos "Lone Star Ruby Conference 2010 Real Software Engineering by Glenn Vanderburg"
This is absolute gold but we need a simple and compelling story that can be told before it will inform the masses. Agile tried but seemed to miss the point of Conway's Law that organizational structure or communication paths influence the design. Agile was trying to change or work around that fact.
Companies like Oracle are cargo-culting by building cloud platforms that resemble AWS but they have no appreciation that Amazon changed their internal communication practices resulting in AWS! Oracle isn't going to do that, perhaps they can't. Has Microsoft adapted their communication structure and is it reflected in their cloud platform?
What I find interesting is that cloud service adoption is allowing silos inside companies to avoid the structures that impede them--to some extent.
> However, the situation was once the exact opposite. As Barry Boehm relates, "On my first day on the job [in the 1950s], my supervisor showed me a GD ERA 1103 computer, which filled a large room. He said, 'Now listen. We are paying $600 an hour for this computer and $2 an hour for you, and I want you to act accordingly.'"
Adjusted for inflation, this guy was earning on the order of $40k per year.
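The $40k figure checks out as rough arithmetic: $2/hour at the usual ballpark of 2,000 work hours per year is $4,000/year, and consumer prices rose roughly tenfold between the mid-1950s and 2021. A quick sanity check, using approximate CPI values (my assumption, not from the comment):

```python
# Rough sanity check of the "$2/hour then ≈ $40k/year now" claim.
# CPI values are approximations: ~26.8 in 1955, ~271 in 2021.
hourly_1955 = 2.00
hours_per_year = 2000           # standard full-time ballpark
cpi_1955, cpi_2021 = 26.8, 271.0

annual_1955 = hourly_1955 * hours_per_year          # $4,000/year in 1955 dollars
annual_2021 = annual_1955 * (cpi_2021 / cpi_1955)   # inflation-adjusted

print(f"~${annual_2021:,.0f} per year in 2021 dollars")
```

That lands around $40,000/year, i.e. the computer cost roughly 300x what the programmer did per hour.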
I can remember way back around 1990 being told that waterfall was mostly a bad idea, and few shops really followed it. Rapid prototyping was agile back then. We were shown how to produce flow charts and static call diagrams in Software Engineering 201 or whatever it was called, and I felt they were doing it to show us how but also to demonstrate how much work it was, and how easy it was to get the analysis wrong.
As new fashions arose, instead of attacking waterfall as practiced (which, don't get me wrong, is still labor intensive), they criticized a straw man version, because it made the new methods look even better.
In consulting they talk of a “hybrid model” which is basically iterated waterfall. Which, ftfa, is regular waterfall.
It works in consulting because you know the end product and schedule before agreeing to do the work (or at least you're supposed to).
I used to be anti-waterfall just because everyone else was, but at the end of the day, it works pretty well in its niche.
Waterfall in practice being so horribly bad for software engineering projects is kind of sad because of how low the bar is set. It's 2021 and agile-ists still compare themselves to that dead horse.
I'd rather see more rigor being put in making a methodology that's measurably better than the clusterf of short-term task tracking spreadsheets and cargo cult dogma of agile in practice today.