Estimation
Fact 8
One of the two most common causes of runaway projects is poor estimation. (For the other, see Fact 23, page 67.)
Discussion
Runaway projects are those that spiral out of control. All too often, they fail to produce any product at all. If they do, whatever they produce will be well behind schedule and well over budget. Along the way, there is lots of wreckage, both in corporate and human terms. Some projects are known as "death marches." Others are said to operate in "crunch mode." Whatever they are called, whatever the result, runaway projects are not a pretty sight.
The question of what causes such runaways arises frequently in the software engineering field. The answer to the question, all too often, is based on the personal biases of the person who is answering the question. Some say a lack of proper methodology causes runaways; often, those people are selling some kind of methodology. Some say it is a lack of good tools (guess what those people do for a living?). Some say it is a lack of discipline and rigor among programmers (typically, the methodologies being advocated and often sold by those people are based on heavy doses of imposed discipline). Name an advocated concept, and someone is saying that the lack of it is what causes runaway projects.
In the midst of this clamor and chaos, fortunately, there are some genuinely objective answers to the question, answers from which typically no one stands to gain by whatever comes of the answer. And those answers are fascinatingly consistent: The two causes of runaways that stand head and shoulders above all others are poor (usually optimistic) estimation and unstable requirements. One of them leads in some research studies, and the other in other studies.
In this section of the book, I want to focus on estimation. (I will cover unstable requirements later.) Estimation, as you might imagine, is the process by which we determine how long a project will take and how much it will cost. We do estimation very badly in the software field. Most of our estimates are more like wishes than realistic targets. To make matters worse, we seem to have no idea how to improve on those very bad practices. And the result is, as everyone tries to meet an impossible estimation target, shortcuts are taken, good practices are skipped, and the inevitable schedule runaway becomes a technology runaway as well.
We have tried all kinds of apparently reasonable approaches to improve on our ability to estimate. To begin with, we relied on "expert" people, software developers who had "been there and done that." The problem with that approach is it's very subjective. Different people with different "been there and done that" experiences produce different estimates. In fact, whatever it was that those people had been and done before was unlikely to be sufficiently similar to the present problem to extrapolate well. (One of the important factors that characterizes software projects is the vast differences among the problems they solve. We will elaborate on that thought later.)
Then we tried algorithmic approaches. Computer scientists tend to be mathematicians at heart, and it was an obvious approach to try, developing carefully conceived parameterized equations (usually evolved from past projects) that could provide estimation answers. Feed in a bunch of project-specific data, the algorithmists would say, turn the algorithmic crank, and out pop reliable estimates. It didn't work. Study after study (for example, dating back to Mohanty [1981]) showed that, if you took a hypothetical project and fed its data into a collection of proposed algorithmic approaches, those algorithms would produce radically different (by a factor of two to eight) results. Algorithms were no more consistent in the estimates they produced than were those human experts. Subsequent studies have reinforced that depressing finding.
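To make the mechanics concrete, here is a minimal sketch (in Python, with illustrative coefficients loosely patterned on classic published effort equations, not any tool's actual calibration) of what such a parameterized approach looks like, and of how two equally plausible equations can disagree about the same project:

```python
# Minimal sketch of parameterized effort estimation. The coefficients are
# illustrative only; real models calibrate them from past project data.

def effort_model_a(kloc: float, cost_drivers: float = 1.0) -> float:
    """Hypothetical equation A: person-months from estimated size (KLOC)."""
    return 2.4 * (kloc ** 1.05) * cost_drivers

def effort_model_b(kloc: float, cost_drivers: float = 1.0) -> float:
    """Hypothetical equation B: same idea, different calibration."""
    return 5.2 * (kloc ** 0.91) * cost_drivers

if __name__ == "__main__":
    size_kloc = 50  # the same hypothetical project fed to both equations
    a = effort_model_a(size_kloc)
    b = effort_model_b(size_kloc)
    print(f"Equation A says {a:.0f} person-months; equation B says {b:.0f}.")
    # Across the many published models, studies found spreads of 2x to 8x.
```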
If complex algorithms haven't done the job, some people have reasoned, perhaps simpler algorithmic approaches will. Many people in the field advocate basing an estimate on one or a few key pieces of data. One such is "lines of code." People say that, if we can predict the number of lines of code (LOC) we expect the system to contain, then we can convert LOC into schedule and cost. (This idea would be laughable, in the sense that it is probably harder to know how many LOC a system will contain than what its schedule and cost will be, if it were not for the fact that so many otherwise bright computer scientists advocate it.) Another is the "function point" (FP). People say that we should look at key parameters such as the number of inputs to and outputs from a system, and base the estimate on that. There is a problem with the FP approach as well; in fact, there are a couple of problems. The first is that experts disagree on what should be counted and how the counting should happen. The second is that for some applications FPs may make sense, but for others, where, for example, the number of inputs and outputs is far less significant than the complexity of the logic inside the program, FPs make no sense at all. (Some experts supplement FPs with "feature points" for those applications where "functions" are obviously insufficient. But that begs the question, which no one seems to have answered: how many kinds of applications requiring how many kinds of "points" counting schemes are there?)
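And here, purely for illustration, is a sketch of the FP counting idea. The weights are the commonly cited average-complexity values; the counts and the hours-per-FP conversion are hypothetical, and, as just noted, experts disagree about exactly what should be counted in the first place:

```python
# Minimal sketch of unadjusted function point counting. The weights below are
# the commonly cited "average complexity" values; the example counts and the
# hours-per-FP productivity figure are hypothetical, not calibrated data.

AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts: dict) -> int:
    """Sum each counted element type times its average-complexity weight."""
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

if __name__ == "__main__":
    hypothetical_system = {
        "external_inputs": 20,
        "external_outputs": 15,
        "external_inquiries": 10,
        "internal_logical_files": 8,
        "external_interface_files": 3,
    }
    fp = unadjusted_function_points(hypothetical_system)
    assumed_hours_per_fp = 8  # assumption; real productivity rates vary widely
    print(f"{fp} unadjusted FPs, roughly {fp * assumed_hours_per_fp} staff-hours")
```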
The bottom line is that, here in the first decade of the twenty-first century, we don't know what constitutes a good estimation approach, one that can yield decent estimates with good confidence that they will really predict when a project will be completed and how much it will cost. That is a discouraging bottom line. Amidst all the clamor to avoid crunch mode and end death marches, it suggests that so long as faulty schedule and cost estimates are the chief management control factors on software projects, we will not see much improvement.
It is important to note that runaway projects, at least those that stem from poor estimation, do not usually occur because the programmers did a poor job of programming. Those projects became runaways because the estimation targets to which they were being managed were largely unreal to begin with. We will explore that factor in several of the facts that follow.
Controversy
There is little controversy about the fact that software estimates are poor. There is lots of controversy as to how better estimation might be done, however. Advocates of algorithmic approaches, for example, tend to support their own algorithms and disparage those of others. Advocates of FP approaches often say terrible things about those who advocate LOC approaches. Jones (1994) lists LOC estimation as responsible for two of the worst "diseases" of the software profession, going so far as to call its use "management malpractice."
There is, happily, some resolution to this controversy, if not to the problem of estimation accuracy. Most students of estimation approaches are beginning to conclude that a "belt and suspenders" approach is the best compromise in the face of this huge problem. They say that an estimate should consist of (a) the opinion of an expert who knows the problem area, plus (b) the output of an algorithm that has been shown, in the past and in this setting, to produce reasonably accurate answers. Those two estimates can then be used to bound the estimation space for the project in question. Those estimates are very unlikely to agree with each other, but some understanding of the envelope of an estimate is better than none at all.
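A minimal sketch of that compromise, with hypothetical numbers: take the expert's figure and the calibrated model's figure, and report the envelope they bound rather than a single point.

```python
# Minimal sketch of the "belt and suspenders" compromise: an expert estimate
# plus a locally calibrated model estimate bound an envelope, rather than
# either one being treated as a precise commitment. Numbers are hypothetical.

def estimation_envelope(expert_months: float, model_months: float) -> tuple[float, float]:
    """Return the (low, high) range bounded by two independent estimates."""
    return min(expert_months, model_months), max(expert_months, model_months)

if __name__ == "__main__":
    low, high = estimation_envelope(expert_months=11.0, model_months=18.0)
    print(f"Plan for somewhere between {low:.0f} and {high:.0f} months,")
    print("and treat any single-point schedule inside that range as a wish.")
```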
Some recent research findings suggest that "human-mediated estimation process can result in quite accurate estimates," far better than "simple algorithmic models" (Kitchenham et al. 2002). That's a strong vote for expert approaches. It will be worth watching to see if those findings can be replicated.
Sources
There are several studies that have concluded that estimation is one of the top two causes of runaway projects. The following two are examples, as are the three sources listed in the References section that follows.
Cole, Andy. 1995. "Runaway Projects - Causes and Effects." Software World (UK) 26, no. 3. This is the best objective study of runaway projects, their causes, their effects, and what people did about them. It concludes that "bad planning and estimating" were a prime causative factor in 48 percent of runaway projects.
Van Genuchten, Michiel. 1991. "Why Is Software Late?" IEEE Transactions on Software Engineering, June. This study concludes that "optimistic estimation" is the primary cause of late projects, at 51 percent.
There are several books that point out what happens when a project gets in trouble, often from faulty schedule targets.
Boddie, John. 1987. Crunch Mode. Englewood Cliffs, NJ: Yourdon Press.
Yourdon, Ed. 1997. Death March. Englewood Cliffs, NJ: Prentice Hall.
References
Jones, Capers. 1994. Assessment and Control of Software Risks. Englewood Cliffs, NJ: Yourdon Press. This strongly opinionated book cites "inaccurate metrics" using LOC as "the most serious software risk" and includes four other estimation-related risks in its top five, including "excessive schedule pressure" and "inaccurate cost estimating."
Kitchenham, Barbara, Shari Lawrence Pfleeger, Beth McCall, and Suzanne Eagan. 2002. "An Empirical Study of Maintenance and Development Estimation Accuracy." Journal of Systems and Software, Sept.
Mohanty, S. N. 1981. "Software Cost Estimation: Present and Future." Software Practice and Experience, Vol. 11, pp. 103-21.
Fact 9
Most software estimates are performed at the beginning of the life cycle. This makes sense until we realize that estimates are obtained before the requirements are defined and thus before the problem is understood. Estimation, therefore, usually occurs at the wrong time.
Discussion
Why is estimation in the software field as bad as it is? We are about to launch a succession of several facts that can more than account for that badness.
This fact is about estimation timing. We usually make our software estimates at the beginning of a project, right at the very beginning. Sounds OK, right? When else would you expect an estimate to be made? Except for one thing. To make a meaningful estimate, you need to know quite a bit about the project in question. At the very least, you need to know what problem you are to solve. But the first phase of the life cycle, the very beginning of the project, is about requirements determination. That is, in the first phase we establish the requirements the solution is to address. Put more succinctly, requirements determination is about figuring out what problem is to be solved. How can you possibly estimate solution time and cost if you don't yet know what problem you are going to be solving?
This situation is so absurd that, as I present this particular fact at various forums around the software world, I ask my audiences if anyone can contradict what I have just said. After all, this must be one of those "surely I'm wrong" things. But to date, no one has. Instead, there is this general head-nodding that indicates understanding and agreement.
Controversy
Oddly, there seems to be no controversy about this particular fact. That is, as I mentioned earlier, there seems to be general agreement that this fact is correct. The practice it describes is absurd. Someone should be crying out to change things. But no one is.
Sources
I suspect that, like urban legends and old wives' tales, the expression of this particular fact cannot be traced to its origins. Here are a couple of places, however, where the "wrong time" problem has been clearly identified:
In a Q&A segment of an article, Roger Pressman quotes a questioner as saying, "My problem is that delivery dates and budgets are established before we begin a project. The only question my management asks is 'Can we get this project out the door by June 1?' What's the point in doing detailed project estimates when deadlines and budgets are predefined?" (1992).
In a flyer for an algorithmic estimation tool (SPC 1998), the text presents this all-too-common conversation: "Marketing Manager (MM): 'So how long d'you think this project will take?' You (the project leader): 'About nine months.' MM: 'We plan to ship this product in six months, tops.' You: 'Six months? No way.' MM: 'You don't seem to understand . . . we've already announced its release date.'"
Note, in these quotes, that not only is estimation being done at the wrong time, but we can make a case for it being done by the wrong people. In fact, we will explore that fact next.
References
Pressman, Roger S. 1992. "Software Project Management: Q and A." American Programmer (now Cutter IT Journal), Dec.
SPC. 1998. Flyer for the tool Estimate Professional. Software Productivity Centre (Canada).
Fact 10
Most software estimates are made either by upper management or by marketing, not by the people who will build the software or their managers. Estimation is, therefore, done by the wrong people.
Discussion
This is the second fact about why software estimation is as bad as it is. This fact is about who does the estimating.
Common sense would suggest that the people who estimate software projects ought to be folks who know something about building software. Software engineers. Their project leaders. Their managers. Common sense gets trumped by politics, in this case. Most often, software estimation is done by the people who want the software product. Upper management. Marketing. Customers and users.
Software estimation, in other words, is currently more about wishes than reality. Management or marketing wants the software product to be available in the first quarter of next year. Ergo, that's the schedule to be met. Note that little or no "estimation" is actually taking place under these circumstances. Schedule and cost targets, derived from some invisible process, are simply being imposed.
Let me tell you a story about software estimation. I once worked for a manager in the aerospace industry who was probably as brilliant as any I have ever known. This manager was contracting to have some software built by another aerospace firm. In the negotiations that determined the contract for building the software, he told them when he needed the software, and they told him that they couldn't achieve that date. Guess which date went into the contract? Time passed, and the wished-for date slid by. The software was eventually delivered when the subcontractor said it would be. But the contract date (I suspect that you guessed correctly which date that was) was not achieved, and there were contractual penalties to be paid by the subcontractor.
There are a couple of points to be made in this story. Even very bright upper managers are capable of making very dumb decisions when political pressure is involved. And there is almost always a price to be paid for working toward an unrealistic deadline. That price is most often paid in human terms (reputation, morale, and health, among others), but, as you can see in this story, there is likely a financial price to be paid as well.
Software estimation, this fact tells us, is being done by the wrong people. And there is serious harm being done to the field because of that.
Controversy
This is another fact where controversy is warranted, and all too absent. Nearly everyone seems to agree that this is a fairly normal state of affairs. Whether it should be a desirable state of affairs is an issue that is rarely raised. But this fact does raise a major disconnect between those who know something about software and those who do not. Or perhaps this fact results from such a disconnect that already exists.
Let me explain. The disconnect that results is that software people may not know what is possible in estimating a project, but they almost always know what is not possible. And when upper management (or marketing) fails to listen to such knowledgeable words of warning, software people tend to lose faith and trust in those who are giving them direction. And they also lose a whole lot of motivation.
On the other hand, there is already a long-standing disconnect between software folks and their upper management. A litany of failed expectations has caused upper management to lose its own faith and trust in software folks. When software people say they can't meet a wish list estimate, upper management simply ignores them. After all, what would be the basis on which they might believe them, given that software people so seldom do what they say they will do?
All of this says that the software field has a major disconnect problem (we return to this thought in Fact 13). The controversy, in the case of this particular fact, is less about whether the fact is true and more about why the fact is true. And until that particular controversy gets solved, the software field will continue to be in a great deal of trouble.
Sources
There is actually a research study that explores precisely this issue and demonstrates this fact. In it (Lederer 1990), the author explores the issue of what he calls "political" versus "rational" estimation practices. (The choice of words is fascinating: political estimation is performed, as you might guess, by those upper managers and the marketing folks; rational estimation [I particularly love this choice of words], on the other hand, is performed by the software folk. And in this study, political estimation was the norm.)
In another reported study (CASE 1991), the authors found that most estimates (70 percent) are done by someone associated with the "user department," and the smallest number (4 percent) are done by the "project team." The user department may or may not be the same as upper management or marketing, but it certainly represents estimation by the wrong people.
There have been other similar studies in other application domains (note that these studies were about Information Systems). The two quotes in the source material for the previous fact, for example, not only are about doing estimation at the wrong time, but are also about the wrong people doing that estimation.
References
CASE. 1991. "CASE/CASM Industry Survey Report." HCS, Inc., P.O. Box 40770, Portland, OR 97240.
Lederer, Albert. 1990. "Information System Cost Estimating." MIS Quarterly, June.
Fact 11
Software estimates are rarely adjusted as the project proceeds. Thus those estimates done at the wrong time by the wrong people are usually not corrected.
Discussion
Let's look at common sense again. Given how bad software estimates apparently are, wouldn't you think that, as a project proceeds and everyone learns more about what its likely outcome will be, those early and usually erroneous estimates would be adjusted to meet reality? Common sense strikes out again. Software people tend to have to live or die by those original faulty estimates. Upper management is simply not interested in revising them. Given that they so often represent wishes instead of realistic estimates, why should upper management allow those wishes to be tampered with?
Oh, projects may track how things are progressing and which milestones have to slide and perhaps even where the final milestones should be extended to. But when the time comes to total a project's results, its failure or success is usually measured by those first-out-of-the-box numbers. You remember, the numbers concocted at the wrong time by the wrong people?
Controversy
This failure to revisit early estimates is obviously bad practice. But few fight against this in any concerted way. (There was, however, one study [NASA 1990] that advocated reestimation and even defined the points in the life cycle at which it ought to occur. But I am not aware of anyone following this advice.) Thus, although there should be enormous controversy about this particular fact, I know of none. Software folks simply accept as a given that they will not be allowed to revise the estimates under which they are working.
Of course, once a project has careened past its original estimation date, there is public hue and cry about when the product is likely to become available. The baggage handling system at the Denver International Airport comes to mind. So does each new delivery of a Microsoft product. So there is controversy about the relationship between political estimates and real outcomes. But almost always, that controversy focuses on blaming the software folks. Once again, "blame the victim" wins, and common sense loses.
Sources
I know of no research study on this issue. There are plenty of anecdotes about failure to reestimate, however, in both the popular computing press and software management books. I rely on these things to substantiate this fact:
Examples such as those mentioned earlier (the Denver Airport and Microsoft products)
My own 40-something years of experience in the software field
The half-dozen books I've written on studies of failed software projects
The fact that when I present this fact in public forums and invite the audience to disagree with me ("please tell me that I'm wrong"), no one does
Reference
NASA. 1990. Manager's Handbook for Software Development. NASA-Goddard.
Fact 12
Since estimates are so faulty, there is little reason to be concerned when software projects do not meet estimated targets. But everyone is concerned anyway.
Discussion
Last chance, common sense. Given how bad software estimates are (that wrong time, wrong people, no change phenomenon we've just discussed), you'd think that estimates would be treated as relatively unimportant. Right? Wrong! In fact, software projects are almost always managed by schedule. Because of that, schedule is considered, by upper management at least, as the most important factor in software.
Let me be specific. Management by schedule means establishing a bunch of short-term and long-term milestones (the tiny ones are sometimes called inch-pebbles) and deciding whether the project is succeeding or failing by what happens at those schedule points. You're behind schedule at milestone 26? Your project is in trouble.
How else could we manage software projects? Let me give just a few examples to show that management by schedule isn't the only way of doing business.
We could manage by product. We could proclaim success or failure by how much of the final product is available and working.
We could manage by issue. We could proclaim success or failure by how well and how rapidly we are resolving those issues that always arise during the course of a project.
We could manage by risk. We could proclaim success or failure by a succession of demonstrations that the risks identified at the beginning of the project have been overcome.
We could manage by business objectives. We could proclaim success or failure by how well the software improves business performance.
We could even manage (gasp!) by quality. We could proclaim success or failure by how many quality attributes the product has successfully achieved.
"How naive this guy is," I can hear you muttering under your breath, thinking about what I have just finished saying here. "In this fast-paced age, schedule really does matter more than anything else." Well, perhaps. But isn't there something wrong with managing to estimates, the least controllable, least correct, most questionable factor that managers handle?
Controversy
Everyone, software people included, simply accepts that management by schedule is the way we do things. There is plenty of resentment of the aftereffects of management by schedule, but no one seems to be stepping up to the plate to do something about it.
Some new ideas on this subject may shake this status quo, however. Extreme Programming (Beck 2000) suggests that after the customer or user chooses three of the four factors (cost, schedule, features, and quality), the software developers get to choose the fourth. This nicely identifies the things at stake on a software project, and we can see clearly that only two of them are about estimates. It also proposes to change the power structure that is currently such a major contributor to poor estimation.
Source
I know of no research on this matter. But in software workshops I have conducted little experiments that tend to illustrate this problem. Let me describe one of those experiments.
I ask the attendees to work on a small task. I deliberately give them too much to do and not enough time to do it. My expectation is that the attendees will try to do the whole job, do it correctly, and therefore will produce an unfinished product because they run out of time. Not so. To a person, these attendees scramble mightily to achieve my impossible schedule. They produce sketchy and shoddy products that appear to be complete but cannot possibly work.
What does that tell me? That, in our culture today, people are trying so hard to achieve (impossible) schedules that they are willing to sacrifice completeness and quality in getting there. That there has been a successful conditioning process, one that has resulted in people doing the wrong things for the wrong reasons. And finally, and most disturbingly, that it will be very difficult to turn all of this around.
Extreme Programming is best described by the source listed in the Reference section to follow.
Reference
Beck, Kent. 2000. eXtreme Programming Explained. Boston: Addison-Wesley.
Fact 13
There is a disconnect between management and their programmers. In one research study of a project that failed to meet its estimates and was seen by its management as a failure, the technical participants saw it as the most successful project they had ever worked on.
Discussion
You could see this one coming. With all the estimation problems discussed earlier, it is hardly surprising that many technologists are trying very hard to pay no attention to estimates and deadlines. They don't always succeed, as I pointed out in my workshop experiment described in Fact 12. But it is tempting to those who understand how unreal most of our estimates are to use some factor other than estimation to judge the success of their project.
One research study doth not a groundswell make. But this particular study is so impressive and so important, I couldn't help but include it here as a possible future fact, an indicator of what may begin to happen more often.
This discussion focuses on a research project described in Linberg (1999). Linberg studied a particular real-world software project, one that management had called a failure, and one that had careened madly past its estimation targets. He asked the technologists on the project to talk about the most successful project they had ever worked on. The plot thickens! Those technologists, or at least five out of eight of them, identified this most recent project as their most successful. Management's failed project was a great success, according to its participants. What a bizarre illustration of the disconnect we have been discussing!
What on earth was this project about? Here are some of the project facts. It was 419 percent over budget. It was over schedule by 193 percent (27 months versus the estimate of 14). It was over its original size estimates by 130 percent for its software and 800 percent for its firmware. But the project was successfully completed. It did what it was supposed to do (control a medical instrument). It fulfilled its requirement of "no postrelease software defects."
So there are the seeds of a disconnect. Estimated targets didn't even come close to being achieved. But the software product, once it was available, did what it was supposed to do and did it well.
Still, doesn't it seem unlikely that putting a working, useful product on the air would make this a "most successful" project? Wouldn't that be an experience you would have hoped these technologists had had many times before? The answer to those questions, for the study in question, was the expected "yes." There was something else that caused the technologists to see this project as a success; several something elses, in fact.
The product worked the way it was supposed to work (no surprise there).
Developing the product had been a technical challenge. (Lots of data show that, most of all, technologists really like overcoming a tough problem.)
The team was small and high performing.
Management was "the best I've ever worked with." Why? "Because the team was given the freedom to develop a good design," because there was no "scope creep," and because "I never felt pressure from the schedule."
And the participants added something else. Linberg asked them for their perceptions about why the project was late. They said this:
The schedule estimates were unrealistic (ta-da!).
There was a lack of resources, particularly expert advice.
Scope was poorly understood at the outset.
The project started late.
There is something particularly fascinating about these perceptions. All of them are about things that were true at the beginning of the project. Not along the way, but at the beginning. In other words, the die was cast on this project from day one. No matter how well and how hard those technologists had worked, they were unlikely to have satisfied management's expectations. Given that, they did what, from their point of view, was the next best thing. They had a good time producing a useful product.
There have been other studies of management and technologist perceptions that also reflect a major disconnect. For example, in one (Colter and Couger 1983), managers and technologists were asked about some characteristics of software maintenance. Managers believed that changes were typically large, involving more than 200 LOC. Technologists reported that changes actually involved only 50 to 100 LOC. Managers believed that the number of changed LOC correlated with the time to do the task; technologists said that there was no such correlation.
And on the subject of software runaways, there is evidence that technologists see the problem coming far before their management does (72 percent of the time) (Cole 1995). The implication is that what the technologists see coming is not passed on to management, the ultimate disconnect.
Perhaps the most fascinating comments on this subject came from a couple of articles on project management. Jeffery and Lawrence (1985) found that "projects where no estimates were prepared at all fared best on productivity" (versus projects where estimates were performed by technologists [next best] or their managers [worst]). Landsbaum and Glass (1992) found "a very strong correlation between level of productivity and a feeling of control" (that is, when the programmers felt in control of their fate, they were much more productive). In other words, control-focused management does not necessarily lead to the best project or even to the most productive one.
Controversy
There are essentially two aspects to this fact: the problem of what constitutes project success and the problem of a disconnect between management and technologists.
With respect to the "success" issue, the Linberg research finding has not been replicated as of this writing, so there has been no time for controversy to have emerged about this fact. My suspicion is that management, upon reading this story and reflecting on this fact, would in general be horrified that a project so "obviously" a failure could be seen as a success by these technologists. My further suspicion is that most technologists, upon reading this story and reflecting on this fact, would find it all quite reasonable. If my suspicions are correct, there is essentially an unspoken controversy surrounding the issue this fact addresses. And that controversy is about what constitutes project success. If we can't agree on a definition of a successful project, then the field has some larger problems that need sorting out. My suspicion is that you haven't by any means heard the last of this particular fact and issue.
With regard to the disconnect issue, I have seen almost nothing commenting on it in the literature. Quotes like the ones from Jeffery and Landsbaum seem to be treated like the traditional griping of those at the bottom of a management hierarchy toward those above them, rather than information that may have real significance.
Sources
Another relevant source, in addition to those in the References section, is
Procaccino, J. Drew, and J. M. Verner. 2001. "Practitioner's Perceptions of Project Success: A Pilot Study." IEEE International Journal of Computer and Engineering Management.
References
Cole, Andy. 1995. "Runaway Projects - Causes and Effects." Software World (UK) 26, no. 3.
Colter, Mel, and Dan Couger. 1983. From a study reported in Software Maintenance Workshop Record. Dec. 6.
Jeffery, D. R., and M. J. Lawrence. 1985. "Managing Programmer Productivity." Journal of Systems and Software, Jan.
Landsbaum, Jerome B., and Robert L. Glass. 1992. Measuring and Motivating Maintenance Programmers. Englewood Cliffs, NJ: Prentice-Hall.
Linberg, K. R. 1999. "Software Developer Perceptions about Software Project Failure: A Case Study." Journal of Systems and Software 49, nos. 2/3, Dec. 30.
Fact 14
The answer to a feasibility study is almost always "yes."
Discussion
The "new kid on the block" phenomenon affects the software field in a lot of different ways. One of the ways is that we "don't get no respect." There is the feeling among traditionalists in the disciplines whose problems we solve that they have gotten along without software for many decades, thank you very much, and they can get along without us just fine now.
That played out in a "theater of the absurd" event some years ago when an engineering manager on a pilotless aircraft project took that point of view. It didn't matter that a pilotless aircraft simply couldn't function at all without computers and software. This guy wanted to dump all of that troublesome technology overboard and get on with his project.
Another way the "new kid on the block" phenomenon hits us is that we seem to possess all-too-often incurable optimism. It's as if, since no one has ever been able to solve the problems we are able to solve, we believe that no new problem is too tough for us to solve. And, astonishingly often, that is true. But there are times when it's not, times when that optimism gets us in a world of trouble. When we believe that we can finish this project by tomorrow, or at least by a couple of tomorrows from now. When we believe we will instantly produce software without errors and then find that the error-removal phase often takes more time than systems analysis, design, and coding put together.
And then there is the feasibility study. This optimism really gets us in trouble when technical feasibility is an issue. The result is, for those (all-too-few) projects when a feasibility study precedes the actual system construction project, the answer to the feasibility study is almost invariably "yes, we can do that." And, for a certain percentage of the time, that turns out to be the wrong answer. But we don't find that out until many months later.
Controversy
There is such a time gap between getting the wrong answer to a feasibility study and the discovery that it really was the wrong answer that we rarely connect those two events. Because of that, there is less controversy about this fact than you would tend to expect. Probably the existence of a feasibility study (they are all too seldom performed) is more controversial than the fact that they all too often give the wrong answer.
Source
The source for this fact is particularly interesting to me. I was attending the International Conference on Software Engineering (ICSE) in Tokyo in 1987, and the famed Jerry Weinberg was the keynote speaker. As part of his presentation, he asked the audience how many of them had ever participated in a feasibility study where the answer came back "No." There was uneasy silence in the audience, then laughter. Not a single hand rose. All 1,500 or so of us realized at the same time, I think, that Jerry's question touched on an important phenomenon in the field, one we had simply never thought about before.