"Agile" Studies
In Geoffrey Moore's rendition of the technology adoption life cycle, the buying cycle progresses from technology enthusiasts to visionaries to pragmatists to conservatives to skeptics. As he says, "The visionary strategy is to adopt the new technology as a means of capturing a dramatic advantage over competitors who do not adopt it" (Moore 2000). In the main, Agile approaches are still in the technology enthusiast and visionary domains; they haven't penetrated into the mainstream pragmatists' marketplace. Visionaries buy ideas; pragmatists buy proof. Will Agile approaches be the methodology equivalent of Clayton Christensen's disruptive technology (Christensen 1997), or will they become another July 4th rocket, bursting upon the scene, only to fizzle out and return to earth? Realistically, we don't know yet.
Although much of the literature on ASDEs remains anecdotal, there have been several academic studies that have pointed to the efficacy of ASDEs. Laurie Williams, an assistant professor at North Carolina State University, wrote her doctoral dissertation on her study of pair programming. Two other studies, both done at Harvard Business School, provide keen insight into the issues surrounding Agile development, although they are not about Agile development per se.
Product Development in Internet Time
"Now there is proof that the evolutionary approach to software development results in a speedier process and higher-quality [emphasis added] products." This is the tag line from an article in the Winter 2001 issue of the MIT Sloan Management Review. The article, "Product-Development Practices that Work: How Internet Companies Build Software," was written by Alan MacCormack, a professor of technology and operations management at Harvard Business School. Much of the material written on Agile methods, iterative development, and other such practices is based on practical experience about what has worked, but MacCormack provides us with explicit research results.
MacCormack and his Harvard Business School colleague Marco Iansiti have been investigating processes that work best in complex, uncertain environments. The question that the research project addressed was, "Does a more evolutionary development process result in better performance?" The study included 20 projects from 17 companies and a panel of experts to evaluate relative "performance" factors between products.
MacCormack writes, "The most striking result to emerge from the research concerned the importance of getting a low-functionality version of the product into customers' hands at the earliest opportunity. The differences in performance are dramatic. That one parameter explains more than one-third of the variation in product quality across the sample, a remarkable result." These are pretty bold statements from an academic researcher. As he points out in the article, so many variables influence performance that finding one with such a striking impact is rare in research circles.
MacCormack points to four development practices that spell success:
An early release of the evolving product design to customers
Daily incorporation of new software code and rapid feedback on design changes
A team with broad-based experience of shipping multiple projects
Major investments in the design of the product architecture
Now, to those who have been practicing Agile techniques for years, these statements may sound ho-hum. However, to those just entering this arena of exploratory approaches, or to those trying to "sell" these approaches to their management, these research findings are significant.
The study found that releasing early versions of products not only increased performance as MacCormack states above, but that the uncertainty associated with Internet software development "dictates short micro-projects, down to the level of individual features." This finding supports both short, iterative cycles and driving projects by features rather than tasks, as Agilists recommend.
MacCormack's study shows that daily incorporation of new software code and rapid feedback on design changes have a positive impact on quality (while acknowledging that many other factors also influence quality) and that the quicker the feedback (hours, not days), the higher the quality. One reason for this finding may be that very short feedback cycles push a team toward continuous, automated testing. "None of the projects with extremely long feedback times (more than 40 hours) had a quality level above the mean," writes MacCormack. On the issue of quality versus feedback time, the study found that the highest-quality projects cluster around 2- to 12-hour feedback cycles, with only a couple of data points in the 20- to 40-hour range above the mean in terms of quality.
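To make the continuous, automated testing practice concrete, here is a minimal sketch in Python; the discount function and its tests are hypothetical stand-ins for whatever regression suite a real team would run on every integration to keep feedback measured in hours rather than days.

```python
# Minimal sketch of an automated regression check run on every integration.
# The function under test and its expected behavior are hypothetical; a real
# project would run its full suite here so feedback arrives in hours, not days.
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount (hypothetical code under test)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTests(unittest.TestCase):
    """Fast, automated checks intended to run on every code integration."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)


if __name__ == "__main__":
    # Running the suite on every check-in keeps the feedback loop in minutes,
    # well inside the 2- to 12-hour band the study associates with the
    # highest-quality projects.
    unittest.main()
```

The particulars of the code do not matter; the point the study makes is that quality correlated with how quickly this kind of feedback arrived.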
As with any research effort, and when thinking in general about different forms of evolutionary development, understanding the relevant problem domain is critical. MacCormack studied companies operating in complex, uncertain environments: Netscape, Microsoft, Yahoo, and others. To the extent that an organization (or a development project) operates in a slower-paced, more predictable market, some form of evolutionary development may be less critical.
"Heavy" Agile Projects
Isn't "heavy Agile" an oxymoron? I was sitting through Alistair Cockburn's tutorial on designing methodologies at the Software Development 2001 conference when something clicked. I realized that the transition from the label "light" to "Agile" had a secondary benefit; namely, we could now more easily differentiate between small, single-team projects that needed to be Agile due to requirements uncertainty (or other factors) and larger, distributed-team projects that needed to be Agile but also needed additional ceremony (documentation, formality, tools) because of their size.
MacCormack and Iansiti's work at Harvard Business School focused on Internet and software companies whose high-risk profiles are obvious, but what about larger IT projects? Two other Harvard professors, Rob Austin and Richard Nolan, have been investigating just this issue with respect to major enterprise projects, particularly very large enterprise resource planning (ERP) systems. The title of an in-depth report by Austin indicates their emerging conclusions: "Surviving Enterprise Systems: Adaptive Strategies for Managing Your Largest IT Investments" (Austin 2001).
"As the twenty-first century dawns, we are finally learning to obtain value from these very large IT projects," Austin writes. "The old project approaches do not work in this new space New and better analogies are based on activities like adaptive software development or new venture investment." Austin's report first documents several high-profile horror stories. One company, which obviously wanted to remain unnamed, was a major manufacturing company whose ERP installation plan fell apart after spending $30 million, getting board approval for spending $175 million, then quickly finding out (from the software vendor and the systems integrator) that the price tag would be more like $300 million. After getting board approval, the ERP vendor surprised the company's IT executives by informing them that the "off-the-shelf" package would only meet about 35 percent of their stated requirements.
Dell Computer backed out of an ERP project after spending over two years and $200 million. The folks at Dell couldn't make the software work (for them); they considered the system too monolithic, and they then opted for a best-of-breed approach.
The survey data in Austin's report came from participants in the Harvard Business School's summer executive education program during the three years 1998 through 2000. This particular program attracts senior IT and general managers from over 80 companies worldwide. The survey found that up to 88 percent of the respondents (depending upon the year) considered their ERP projects to be large or very large, and several responded that they were the largest projects their companies had ever undertaken.
Besides size, an overwhelming percentage of respondents considered the projects to have considerable technical, organizational, and business risk. The categorization of risk was interesting. Technical risk was defined as the risk of the software's failure to meet business requirements. Organizational risk was defined as the risk that the organization would be unable to make the required changes in order to effectively "use" the software, and business risk was the risk that implementing the system would actually hurt the company rather than help it. While technical risk concerns declined from 1998-99 to 2000, concerns over organizational and business risks remained very high. However, in 2000, the survey indicated that companies were starting to achieve the long-anticipated benefits.
This led to the second set of study questions: "What are the characteristics of successful projects, and how do they differ from failures?" Nolan and Austin concluded that there were three dysfunctional elements, in terms of flawed assumptions, in large front-end-loaded projects:
The first flawed assumption is that it is actually possible to plan such a large project well enough that success is primarily determined by degree of conformance to a plan.
The second flawed assumption embedded in planning-intensive approaches is that it is possible to protect against late changes to a large system project.
The third flawed assumption is that it even makes sense to lock in big project decisions early.
"Building a huge new enterprise system is, in many ways, more like building an entirely new venture than it is like managing a traditional IT project," says Austin. Therefore, he and Nolan recommend a staging model, much like venture capitalists use to fund new ventures: spend some money, demand tangible results, spend additional money. Although this "staging" may cost a little extra, it helps companies manage the risk exposure with cost expendituresmuch like exploration drilling. Staging can also produce incremental return on investment (ROI). Of the participants surveyed in the 2000 executive education group, 75 percent indicated they used some type of staging strategy. Even more interesting, companies in the planning stage estimated they would need 4 stages; companies in the midst of implementation estimated they would need 7 stages; while companies who had completed implementation said they had taken an average of 12 stages. Tektronix, one of the two in-depth Harvard Business School case studies (the other was Cisco), reported 25 stages, or waves, as it called them.
In the report summary, Austin points to four characteristics of these more successful projects (which I have paraphrased):
They are all iterative.
They all rely on fast cycles and insist on frequent delivery.
They get functionality in some form into business user hands very early in the project.
They are preceded by little or no traditional ROI-style analysis of the project as a monolithic whole.
So, from a strategic perspective, Agile approaches are not about levels of documentation, whether or not to use UML, or the discipline of pair programming. In the end, Agile approaches are about delivering working products (packaged software, embedded software in a wide range of products, and internal IT products) in environments characterized by high levels of uncertainty and risk. Whether your firm is a dotcom rushing to market or a traditional company slogging through a $50 million ERP system implementation, agility is the key to success.