Tools and Techniques
Fallacy 4 Tools and techniques: one size fits all.
Discussion
There are sure a lot of people in the software world who would like to believe that one size fits all. Those selling methodologies. Those defining process approaches. Those pushing tools and techniques. Those hoping to build component-based software. Those setting standards. Those doing research toward the next software engineering holy grail. Those academics who put the prefix meta- in front of whatever they're working on. All of them are seeking that "universal software elixir." Many of them even believe they have found it. All too many of them want to sell it to you!
But there's a problem here. Because software tackles such a diverse assortment of problems, it is becoming more and more obvious that there are few, if any, universal solution approaches. What works for business application programs will never be enough for critical, real-time software projects. What works for systems programming is often irrelevant to the needs of scientific applications. What works for small projects, and that includes today's Agile Development approaches, won't work well for those huge projects that consume hundreds of programmers. What works for straightforward projects will fail miserably if applied to critical projects.
We are just beginning, in our field, to appreciate how diverse the problems we need to solve really are. I've taken a stab at the dimensions of the problem (Glass and Oskarsson 1996).
Size matters. Small is vastly easier than huge.
Application domain matters. For example, what is needed for scientific applications (a strong mathematical foundation, among other things) is typically unnecessary for business and systems programs.
Criticality matters. If lives or vast sums of money are involved in a project, you will treat it far differently (especially its reliability needs) than if they are not.
Innovativeness matters. If the problem you are addressing is unlike any you have ever solved before, you will take a much more exploratory and much less methodologized approach to its solution.
Certainly others have their own favorite dimensions. For example, Jones (1994) divides the field of applications into Management Information Systems, Systems Software, Commercially Marketed Products, Military Software, Contract/Outsourced Software, and End-User Software (he then does a wonderful job of characterizing each of those domains, including discussions of their most common risk factors).
Probably you have your own favorite dimensions. Most practitioners are well aware that "my project is different." All too many theorists, however, disdain such a comment and see that practitioner as simply unwilling to try new (and often "universal") things (but see Glass 2002a).
Controversy
This particular fallacy is coming to a head. The many people who continue to believe in the one-size-fits-all approach are finding an ever-increasing number of opponents. Plauger (1994) says that "anyone who believes that one size fits all belongs in a pantyhose commercial." Yourdon (1995) says "the most interesting paradigm shift now taking place" in the field is "the shift away from the notion that all software is essentially the same." Sanden (1989) writes about "eclectic design approaches." Vessey and Glass (1998) point out that, in the discipline of problem solving, problem-focused solution approaches (like a specific size of wrench) are seen as "strong" and generalized ones (like a monkey wrench) as "weak." Project differences are celebrated (Glass 2002a). Even the commonly disdained notion of "ad hoc" is coming in for some reappraisal (Glass 2002b) (the dictionary says it means "fitted to the problem at hand"). If you think one-size-fits-all is a bad thing, then you will see this groundswell of opposition to the notion as one that is extremely healthy for the field.
Sources
The opposition to the notion of one-size-fits-all is taking on an accelerated drumbeat. The newly popular Agile Development approaches, for example, say things like "different methodologies are needed for different projects" and go on to speak of the "sweet spot" projects in which Agile is at its best (Cockburn 2002): two to eight people in one room, on-site usage experts, one-month project increments, experienced developers. Following through on this theme, they also note the (very different) projects where traditional/rigorous approaches are best (Highsmith 2002): larger teams, critical projects, projects involving regulatory considerations.
McBreen, Pete. 2002. Software Craftsmanship. Boston: Addison-Wesley. Contains a section explicitly titled "One Size Does Not Fit All."
References
Cockburn, Alistair. 2002. Agile Software Development. Boston: Addison-Wesley.
Glass, Robert L. 2002a. "Listen to Programmers Who Say 'But Our Project is Different.'" The Practical Programmer. Communications of the ACM.
Glass, Robert L. 2002b. "In Search of Meaning (A Tale of Two Words)." The Loyal Opposition. IEEE Software.
Glass, Robert L., and Östen Oskarsson. 1996. An ISO Approach to Building Quality Software. Upper Saddle River, NJ: Prentice-Hall.
Highsmith, Jim. 2002. Agile Software Development Ecosystems. Boston: Addison-Wesley.
Jones, Capers. 1994. Assessment and Control of Software Risks. Englewood Cliffs, NJ: Yourdon Press.
Plauger, P.J. 1994. Programming on Purpose. Englewood Cliffs, NJ: Prentice-Hall.
Sanden, Bo. 1989. "The Case for Eclectic Design of Real-Time Software." IEEE Transactions on Software Engineering SE-15 (3). (The journal, not understanding the word eclectic, replaced it with electric when they published the article!)
Vessey, Iris, and Robert L. Glass. 1998. "Strong vs. Weak Approaches to Systems Development." Communications of the ACM, Apr.
Yourdon, Ed. 1995. "Pastists and Futurists: Taking Stock at Mid-Decade." Guerrilla Programmer, Jan.
Fallacy 5 Software needs more methodologies.
Discussion
The strange thing about this fallacy is, I can't think of anyone who is actually saying it. But before you move rapidly on to the next fallacy, muttering to yourself "what is this guy up to here?" let me point out the problem. No one is talking about inventing more methodologies, but everyone seems to be doing it.
Gurus do it. Graduate students do it. Opponents of the rigorous/rigid methodologies do it. Even professors and researchers do it, on occasion. "People who want to make their mark in the [software] world invent yet another method or model," (Wiegers 1998). The methodology machine seems to be cranking quite rapidly and seemingly continuously.
There's another funny thing about this fallacy. Study after study in the methods engineering research community (Hardy, Thompson, and Edwards 1995; Vlasbom, Rijsenbrij, and Glastra 1995) has shown that almost no software practitioners are using these methodologies straight out of the box. On the contrary, most people who use a methodology adapt it to fit the situation at hand.
At first, the methods engineering folk were troubled by this fact. "How dare practitioners tinker with the wisdom of the methodologist?" they seemed to be saying. But as time passed, the methods engineering people came to grips with what was happening: methodologies were bent to fit because they needed to be bent to fit. Practitioners were exhibiting a necessary form of wisdom, not a perverse form of obstinacy.
Now, this fallacy is about whether we need all those methodologies that those methodologists are cranking out. And the fact of the matter is, many would say, we don't. One particularly vocal opponent of the creation of more methodologies, Karl Wiegers (you've heard of him before in this book), goes so far as to base keynote addresses on it. "Read my lips," the title of his presentation goes. "No new models" (by which he means "techniques, methods, and methodologies").
Why does Wiegers say that? Because, he says, no one is using the methodologies we have now. If shelfware is the current status of many software tools, then "dull thud" is the status of most methodologies. Methodologies appear on the scene, he says, and are roundly ignored by the people who are intended to be using them.
Should they be ignoring these methodologies? Ah, that's a fair question, but it's probably one for another fallacy, or fact. My personal suspicion, however, is that all too many of our methodologies have been the product of (a) ignorance (what does a graduate student or even a faculty member know about the gritty real world of software practice?) and (b) a kind of police mentality (all too many gurus seem to want people to use their methodologies because it's the right thing to do, not because they can demonstrate that they really help in the building of software). DeMarco and Lister (1999) point out that there are Big-M Methodologies and little-m methodologies, where the rigid big-M's come from what they might call the "methodology police." DeGrace and Stahl (1993) make the same distinction but with different terminology: "Roman" is their word for the methodology police form, and "Greek" is the name for a more flexible form. Given all of that, my personal opinion is that little-m's are good things, and big-M's are largely bad and should be used only with considerable care.
Controversy
I don't believe anyone is drawing a line in the sand about this fallacy, except Karl Wiegers. But perhaps someone should. There is plenty for us to learn about methodologies.
Are any of them really supported by empirical evidence? (For the most part, the answer is "no.")
Is the lack of use caused by ineffective methodologies or ignorant potential users?
Should practitioners be bending them to fit? If so, why?
Should methodologies be applied enterprise-wide, or is this too much one-size-fits-all?
Should methodologies be swallowed whole, or should we use a "best of breed" selection process on their constituent elements?
Is there a rationale behind the constituent elements of a methodology (any methodology), more than just "they seemed to fit together nicely"?
When do we use which methodologies? Or which of their constituent elements?
My personal belief is that until we have the answer to most of these questions, we should tread carefully on this subject. That's a bit of an irony, of course, given that we have been zestfully using methodologies for several decades now (the structured methods, information engineering, object orientation, Extreme Programming, Agile approaches, and so on).
So, is there controversy? Absolutely. We have more methodologies than we know what to do with already. We have tons of unanswered questions about them. And we have people continuing to invent methodologies as if their careers depended on it. Let's call a halt to all of this. Let's get some of those questions answered. Let's listen to Karl Wiegers's cry, "No new models."
Sources
In addition to the sources listed in the References section that follows, the following book provides a summary and interpretation of the various views on methodologies.
Glass, Robert L. 1995. Software Creativity. Englewood Cliffs, NJ: Prentice-Hall.
References
DeGrace, Peter, and Leslie Stahl. 1993. The Olduvai Imperative: CASE and the State of Software Engineering Practice. Englewood Cliffs, NJ: Prentice-Hall.
DeMarco, Tom, and Timothy Lister. 1999. Peopleware. 2d ed. New York: Dorset House.
Hardy, Colin J., J. Barrie Thompson, and Helen M. Edwards. 1995. "The Use, Limitations, and Customization of Structured Systems Development Methods in the United Kingdom." Information and Software Technology, Sept.
Vlasbom, Gerjan, Daan Rijsenbrij, and Matthijs Glastra. 1995. "Flexibilization of the Methodology of System Development." Information and Software Technology, Nov.
Wiegers, Karl. 1998. "Read My Lips: No New Models!" IEEE Software, Sept.