- The Change Problem—How Bad Is It?
- Evidence on Change Failure Rates
- Does All Change Fail the Same?
- Does Failure Always Mean the Same Thing?
- Change Masters and Change-Agility
- Failed Metaphors—The Fantasy of the Static Organization
- The Change Problem as a People Problem
- Change Myths
- Everybody Is an Expert on People Issues—Or Are They?
- Putting the Change Manager Out of Work
- From Change Management to Change Leadership
- Change Leadership and the Human Sciences
- Conclusion
Does Failure Always Mean the Same Thing?
Research that distinguishes types of change takes us a bit further, but not far enough. What matters even more is the type of change failure. A complete, business-busting write-off is different from a 25 percent overrun. The lack of definitional rigor in most change surveys produces an average that lumps together tolerable delays (by the standards of organizational change) and those complete write-offs.
If some executives interviewed for the surveys use the word failure to mean “failed to deliver 100 percent of expected benefits” or “overran budget and timetable,” while others use it to mean “abandoned the project halfway and wrote off the entire expenditure, with no positive and many negative results,” then even the average estimates from Smith’s research conceal some important facts. To get to a more useful statistic, we need a better definition of failure and an analysis of outcomes by kind of failure, perhaps using a rough framework such as SOCKS (Shortfalls, Overruns, Consequences, Killed, Sustainable), shown in Table 1.2.
Table 1.2 SOCKS Taxonomy of Project Failures
| SOCKS Category | Example | Research |
| --- | --- | --- |
| Benefit SHORTFALLS: The project completes, but there are important shortfalls in benefits delivery, disrupting business processes. | Hershey’s ERP supply chain system causes a $100 million revenue dip. | Little data is available on average benefit shortfalls by type of change or type of business. |
| Cost OVERRUNS: The project completes, but there are significant overruns (cost or time). | Boston’s “Big Dig” overruns by $12 billion. | Average cost overruns are 27 percent, with one in six exceeding 200 percent. |
| Unintended CONSEQUENCES: The project completes, but there are costly, unintended consequences. | Fox Meyer Drug’s $65 million ERP system bankrupts the company. Scott Paper successfully cuts costs and earnings spike, but long-term competitiveness collapses. | Little aggregate data is available on adverse consequences, either the number of projects affected or the type and extent of consequences. |
| KILLED programs: The project is killed after significant investment. | Denver Airport’s baggage system first delays the airport opening by 18 months, then is scrapped at a rough cost of $3 billion. | Little data is available on the number of projects killed and written off completely. |
| Lack of SUSTAINABLE results: Results are delivered but are not sustained over time. | Following BP’s Texas City refinery disaster, a new focus on health and safety behaviors brings short-term gains that erode as memory of the event fades. | Little data is available on how many projects have benefits that erode over time. |
The SOCKS categorization is not scientific, because terms such as consequences and sustainability can mean many things, but it is considerably better than talking only about “failure.” It is a place to start, and every project should get a SOCKS review once completed (or abandoned), using the original budgeted costs and benefits as a baseline. This way, businesses can develop internal analytics on how projects fare and how they fail to meet expectations in ways that are useful for capital budgeting. They may be able to draw conclusions such as “When we attempt reorganizations, we exceed budget by an average of 30 percent, and there are often many negative, unintended consequences,” or “When we acquire a new company, the financial returns are, on average, 25 percent less than we predicted.”
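To make the idea of internal project analytics concrete, here is a minimal sketch of what a SOCKS review log and the resulting per-type statistics could look like. It assumes a simple in-house record format; the field names, categories, and figures are hypothetical illustrations, not data from Smith’s research or any survey cited here.

```python
# Minimal sketch of a post-project "SOCKS review" log, assuming a simple
# in-house record format; all field names and figures are hypothetical.
from dataclasses import dataclass
from statistics import mean
from collections import defaultdict

@dataclass
class ProjectReview:
    name: str
    change_type: str        # e.g. "reorganization", "acquisition", "ERP rollout"
    socks: str              # "Shortfall", "Overrun", "Consequences", "Killed", "Sustainable"
    budgeted_cost: float
    actual_cost: float
    expected_benefit: float
    realized_benefit: float

reviews = [
    ProjectReview("Project A", "reorganization", "Overrun", 10.0, 13.5, 8.0, 7.0),
    ProjectReview("Project B", "acquisition", "Shortfall", 50.0, 52.0, 40.0, 30.0),
    ProjectReview("Project C", "reorganization", "Killed", 5.0, 4.0, 6.0, 0.0),
]

# Group overrun and benefit-shortfall percentages by type of change, so the
# business can say, e.g., "reorganizations exceed budget by X percent on average."
overruns = defaultdict(list)
shortfalls = defaultdict(list)
for r in reviews:
    overruns[r.change_type].append((r.actual_cost - r.budgeted_cost) / r.budgeted_cost)
    shortfalls[r.change_type].append((r.expected_benefit - r.realized_benefit) / r.expected_benefit)

for change_type in sorted(overruns):
    print(f"{change_type}: avg cost overrun {mean(overruns[change_type]):+.0%}, "
          f"avg benefit shortfall {mean(shortfalls[change_type]):.0%}")
```

Even a crude log like this, kept consistently, gives a business its own baseline to test against headline statistics such as the 70 percent figure.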
As you will see throughout this book, when it comes to measuring change implementation performance, science is lacking, and practitioners are slow to challenge orthodoxy or urban legends such as the 70 percent statistic. We need much better answers to questions such as: What types of change are riskiest? How much riskier is big-budget change than small change? What factors increase or decrease risk? Does performance vary across business regions or functions?