Improving Software Economics, Part 2 of 7: The Move to Agility
IBM learned many best practices as it evolved toward modern Agile delivery methods. Most of them were discovered years earlier in work with forward-looking organizations. IBM has advanced these techniques largely from the perspective of industrial-strength software engineering, where the scale and criticality of applications dominate governance and management methods. IBM was one of the pioneers of Agile techniques such as pair programming [1] and extreme programming [2], and it now has a vibrant technical community with thousands of practitioners engaged in Agile practices in IBM's own development efforts and professional services. Many pioneering teams inside and outside of IBM have advanced these best practices from small-scale techniques, commonly referred to as "Agile methods"; these contributions were developed independently in numerous instances across a diverse spectrum of software domains, scales, and applications.
For years, IBM has worked to unite the Agile consultants (that is, the small-scale development camps) with the process-maturity consultants (the industrial-strength development camps). While these camps have been somewhat adversarial and wary of endorsing one another, both sides have valid techniques and a common spirit; they approach common problems with different jargon and biases. There is no single right or wrong prescription for the range of solutions needed. Context and scale are important; to be successful, every nontrivial project or organization needs a mix of techniques, a family of process variants, common sense, and domain experience.
My book Software Project Management [3] introduces my top 10 principles of managing a modern software process. I'll use that framework here and in the following articles in this series to summarize the history of best-practice evolution. I describe three discrete eras of software lifecycle models by capturing the evolution of their top 10 principles. I will denote these three stages as 1) conventional waterfall development, 2) transitional iterative development, and 3) modern Agile delivery. I'll describe the first two eras only briefly, since they've been covered elsewhere in greater detail, and their description here is only to provide benchmarks for comparison to the top 10 principles of a modern Agile delivery approach.
Figure 1 provides a project manager's view of the process transition toward which the industry has been marching for decades. Project profiles representing each of the three eras plot development progress versus time, where progress is defined as percentage executable; that is, demonstrable in its target form. Progress in this sense correlates to tangible intermediate outcomes, and is best measured through executable demonstrations. The term executable doesn't imply complete, compliant, or up to specifications, but it does imply that the software is testable. The figure also describes the primary measures that were used to govern projects in these eras and introduces the measures that we find to be most important in moving forward to achieve Agile software-delivery success.
Figure 1 Improved project profiles and measures in transitioning to Agile delivery processes.
| Agile Econometrics | Iterative Trends | Waterfall Measures |
| --- | --- | --- |
| Accurate net present value | Honest earned value | Dishonest earned values |
| Reuse/custom asset trends | Release content over time | Activity/milestone completion |
| Release quality over time | Release quality over time | Code/test production |
| Variance in estimate to complete | Prioritized risk management | Requirements-design-code traceability |
| Release content/quality over time | Scrap/rework/defect trends | Inspection coverage |
| Actuals vs. dynamic plans | Actuals vs. dynamic plans | Actuals vs. static plan |
Conventional waterfall projects are represented by the dotted-line profile in Figure 1. The typical sequence for the conventional waterfall management style when measured this way is as follows:
- Early success via paper designs and overly precise artifacts.
- Commitment to executable code late in the lifecycle.
- Integration nightmares due to unforeseen implementation issues and interface ambiguities.
- Heavy budget and schedule pressure to get the system working.
- Late shoehorning of suboptimal fixes, with no time for redesign.
- A very fragile, expensive-to-maintain product, delivered late.
Most waterfall projects are mired in inefficient integration and the late discovery of substantial design issues. They expend roughly 40% or more of their total resources in integration and test activities, with much of this effort consumed by excessive scrap and rework during the late stages of the planned schedule, when project management had imagined shipping or deploying the software. Project management typically reports a linear progression of earned value up to 90% complete, then a major increase in the estimated cost of completion as the team suffers through the late scrap and rework. In retrospect, software earned-value systems based on conventional activity, document, and milestone completion are not credible, since they ignore the uncertainty inherent in the quality of the completed work. Here is a situation for which I have never seen a counterexample: a software project with a consistently increasing progress profile is certain to have a pending cataclysmic regression.
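The earned-value pathology described above can be made concrete with a toy model (the numbers below are illustrative assumptions, not data from this article): spending tracks the smoothly climbing reported progress, but the remaining work reflects true, demonstrable progress, so the estimate to complete stays stubbornly large and the projected total cost balloons late.

```python
# Toy model of the waterfall earned-value pathology (illustrative numbers only).

def estimate_to_complete(budget, reported_pct, true_pct):
    """Cost still needed when spending has tracked reported progress
    but the remaining work reflects true (demonstrable) progress."""
    spent = budget * reported_pct            # spend tracks the reported curve
    burn_rate = spent / max(true_pct, 1e-9)  # actual cost per unit of real progress
    return burn_rate * (1.0 - true_pct)      # cost to finish the real remainder

budget = 1000.0  # planned total cost, arbitrary units

# Month by month: reported earned value rises smoothly toward 90%,
# while integration reveals that true progress lags far behind.
for month, (reported, true) in enumerate(
        [(0.30, 0.25), (0.60, 0.45), (0.90, 0.55)], start=1):
    spent = budget * reported
    etc = estimate_to_complete(budget, reported, true)
    print(f"month {month}: reported {reported:.0%}, true {true:.0%}, "
          f"projected total = {spent + etc:7.1f}")
```

The projected total cost climbs steadily past the original budget, even as the reported earned-value curve looks healthy, which is exactly why progress measured by executable demonstrations is more credible than activity- or milestone-based earned value.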
The iterative management approach represented by the middle profile in Figure 1 forces integration into the design phase through a progression of demonstrable releases, thereby exposing the architecturally significant uncertainties earlier, where they can be resolved efficiently in the context of lifecycle goals. Equally critical to the process improvements are a greater reliance on more standardized architectures and reuse of operating systems, data management systems, graphical user interfaces, networking protocols, and other middleware. This reuse and architectural conformity contributes significantly to reducing uncertainty through less custom development and precedented patterns of construction. The downstream scrap-and-rework tar pit is avoidable, along with late patches and malignant software fixes. The result is a more robust and maintainable product, delivered more predictably, with a higher probability of economic success. Iterative projects can deliver a product with about half the scrap and rework of waterfall projects, by refactoring architecturally significant changes far earlier in the lifecycle.
Agile software delivery approaches start projects with an ever-increasing amount of the product coming from existing assets, architectures, and services, as represented in the left-hand profile in Figure 1. Integrating modern best practices and a supporting platform that enables advanced collaboration allows the team to iterate more effectively and efficiently. Measurable progress and quality are accelerated, and projects can converge on deliverable products that can be released to users and testers earlier. Agile delivery projects that have fully transitioned to a steering leadership style based on effective measurement can optimize scope, design, and plans to further reduce unnecessary scrap and rework, eliminate uncertainties earlier, and significantly improve the probability of win-win outcomes for all stakeholders. Note that we don't expect scrap and rework rates to be driven to zero, but rather to a level that corresponds to healthy discovery, experimentation, and production, commensurate with resolving the uncertainty of the product being developed.
Table 1 provides one indicative benchmark of this transition. The resource expenditure trends become more balanced across the primary workflows of a software project as a result of fewer human-generated artifacts, more efficient processes (less scrap and rework), more efficient people (more creative work, less overhead), and more automation.
Table 1
Resource Expenditure Profiles in Transitioning to Agile Delivery Processes
| Lifecycle Activity | Conventional | Iterative | Agile |
| --- | --- | --- | --- |
| Management | 5% | 10% | 10–15% |
| Scoping | 5% | 10% | 10–15% |
| Design/demonstration | 10% | 15% | 10–20% |
| Implementation/coding | 30% | 25% | 15–20% |
| Test and assessment | 40% | 25% | 15–25% |
| Release and deployment | 5% | 5% | 10% |
| Environment/tooling | 5% | 10% | 10% |
References
[1] Laurie Williams and Robert Kessler, Pair Programming Illuminated, Addison-Wesley, 2002.
[2] Laurie Williams et al., "Toward a Framework for Evaluating Extreme Programming," Proc. of the 8th International Conference on Empirical Assessment in Software Engineering (EASE 2004), Edinburgh, Scotland.
[3] Walker E. Royce, Software Project Management, Addison-Wesley, 1998.