Software Development: Dismantling the Waterfall
Although few teams would admit to using anything as flawed as the waterfall software development lifecycle, it continues to impose linear, mechanical thinking on our projects. We need to change the conversation to prevent the waterfall lifecycle from damaging any more projects.
Some Flaws
The waterfall approach assumes that it's possible to capture all requirements and complete analysis before design starts. For most projects this is ludicrous; by the time you've written down all of the requirements, they will have changed. Unfortunately, it's also very expensive to write down all of the requirements, because people are not used to specifying things in minute detail.
The waterfall approach is expensive and takes too long because it was designed to minimize the use of very expensive computing resources. Guess what? Things have changed. Computer time might have been very expensive back in 1970, but now developer time is the most expensive item in the project budget.
The worst part, however, is the assumption that it's possible to separate software development activities into nice, neat, non-overlapping phases. Many problems are wicked problems, in which the design overlaps with and influences the requirements: new requirements emerge out of the design decisions, making it impossible to create a clean separation between requirements and design.
NOTE
See Wicked Problems, Righteous Solutions: A Catalogue of Modern Software Engineering Paradigms by Peter DeGrace and Leslie Hulet Stahl (Prentice Hall PTR, 1990, ISBN 0-13-590126-X).
But Nobody Really Uses the Waterfall Method, Right?
You might think that, with all of the material written over the years, some form of incremental development would be the default strategy by now. Guess again. I've lost count of the number of project plans I've seen that show systems analysts finishing their analysis work, then handing it off to designers, who hand specifications off to programmers, who pass their work off to testers. It sure looks like the waterfall is alive and well!
Even the labels we assign to the roles in projects match those of the waterfall. It should also come as no surprise that the jobs at the top of the waterfall have higher status and rewards than the jobs at the bottom. No wonder systems analysts look down on lowly maintenance programmers.
In an era where companies are starting to talk about knowledge management, we still organize many software projects in a way that's guaranteed to minimize knowledge transfer. A systems analyst takes the time to learn the problem domain and then hands the work to a designer. The designer has to learn secondhand, and then subdivide and parcel up that knowledge into detailed specifications that get handed off to the programmers. Is this inefficient, or what?
But Are the Handoffs a Real Problem?
No, not from a linear, mechanical way of thinking; from that perspective, handoffs are the most effective way to organize the work. Systems analysts can specialize and become good at requirements elicitation. Designers can specialize in creating detailed designs, and programmers can specialize in their favorite programming language. Sounds great, doesn't it?
Well, it would be, if it were not for those wicked little problems that creep into the picture. If we've learned anything about software development, it's that each new system enables our users to think up completely new ideas for must-have features. But guess what? The user doesn't know whom to tell these ideas to. Should she tell the maintenance programmer about the enhancement request? Or maybe the tester who is doing the usability testing with her? Or maybe the programmer who was asking her to decide between two different screen layouts? Or maybe the designer who was asking about the subtleties of one of the business rules? Or maybe the analyst who interviewed everyone 18 months ago?
Whoever she tells, it will be the wrong person, because nobody on the development side really understands the big picture.
The Waterfall Creates Schedule Slips
The most fundamental problem with the waterfall approach is that it's impossible to measure progress. Sure, it looks great that we can say we're 90% done on an analysis task, but what does that 90% mean? Not very much.
The problem is that you can always do more analysis to try to uncover more subtleties about the problem domain. At some point you have to say that you've done enough, but you can never be sure. The same goes for the design phase. How do you know whether your elegant design ideas are really going to work when the programmers implement them? How do you know what level of detail to put into the design documentation so that the programmer can understand the design? Can you assume that the programmer is familiar with the problem domain or not?
Small wonder that most PERT charts show downstream tasks slipping and overrunning their due dates. The up-front tasks came in on schedule, so it must be the designers, programmers, and testers who are at fault. Yeah, I really believe that.
These slips are a natural consequence of the waterfall approach. And guess what? They put severe schedule pressure on the programming and testing activities. No wonder the quality of the resulting application is low. No wonder there's the temptation to ship "good enough" software.
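To see how the squeeze works, here is a minimal sketch with hypothetical numbers (the phases, week counts, and overruns are invented for illustration, not taken from any real plan): when the delivery date is fixed and the up-front phases overrun, whatever time is left gets divided among programming and testing.

```python
# Illustrative sketch only: the phase estimates below are hypothetical,
# chosen to show how upstream overruns against a fixed delivery date
# squeeze the downstream programming and testing phases.

TOTAL_WEEKS = 40  # fixed, promised delivery date: 40 weeks from kickoff

# (phase, planned weeks, actual weeks) -- the up-front phases overrun
upstream = [
    ("analysis", 8, 11),  # "done" gets declared, eventually
    ("design", 8, 12),    # design work uncovers new requirements
]

planned_downstream = {"programming": 16, "testing": 8}

elapsed = sum(actual for _, _, actual in upstream)
remaining = TOTAL_WEEKS - elapsed

print(f"planned for programming + testing: {sum(planned_downstream.values())} weeks")
print(f"actually remaining before the deadline: {remaining} weeks")

# squeeze each downstream phase proportionally to fit the fixed date
scale = remaining / sum(planned_downstream.values())
for phase, weeks in planned_downstream.items():
    print(f"{phase}: planned {weeks} weeks, squeezed to {weeks * scale:.1f} weeks")
```

With these made-up numbers, the 24 weeks planned for programming and testing shrink to 17, and testing drops from eight weeks to under six. In practice the squeeze usually lands even harder on testing, because it sits last in the line.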