Improving Software Economics, Part 4 of 7: Top 10 Principles of Iterative Software Management
In the 1990s, Rational Software Corporation began evolving a modern process framework to capture the best practices of iterative development more formally. The primary goal was to help the industry transition from a "plan and track" management style (the waterfall model) to a "steering" leadership style that admitted uncertainties in the requirements, design, and plans.
The software management approach we evolved produced the architecture first, then usable increments of partial capability, and only then addressed completeness. Requirements and design flaws are detected and resolved earlier in the lifecycle; integrating in stages throughout the lifecycle avoids the big-bang integration at the end of a project. Modern iterative development enables better insight into quality because system characteristics that are largely inherent in the architecture (e.g., performance, fault tolerance, adaptability, interoperability, maintainability) are tangible earlier in the process, where issues are still correctable without jeopardizing target costs and schedules. These techniques attacked major uncertainties far earlier and more effectively. Here are my top 10 principles of iterative development [1] from the 1990s and early 2000s era.
Top 10 Management Principles of Iterative Development
- Base the process on an architecture-first approach.
- Establish an iterative lifecycle process that confronts risk early.
- Transition design methods to emphasize component-based development.
- Establish a change-management environment.
- Enhance change freedom through tools that support round-trip engineering.
- Capture design artifacts in rigorous, model-based notation.
- Instrument the process for objective quality control and progress assessment.
- Use a demonstration-based approach to assess intermediate artifacts.
- Plan intermediate releases in groups of usage scenarios with evolving levels of detail.
- Establish a configurable process that is economically scalable.
Whereas conventional principles drove projects to overexpend on integration, these modern principles resulted in less total scrap and rework through relatively more emphasis on early lifecycle engineering and a more balanced expenditure of resources across the core workflows of a modern process.
The architecture-first approach forces integration into the design phase, where the most significant uncertainties can be exposed and resolved. The early demonstrations don't eliminate the design breakage; they just make it happen when it can be addressed effectively. The downstream scrap and rework is significantly reduced, along with late patches and sub-optimal software fixes, resulting in a more robust and maintainable design.
Interim milestones provide tangible results. Designs are now "guilty until proven innocent." The project doesn't move forward until the objectives of the demonstration have been achieved. (This doesn't preclude the renegotiation of objectives once the milestone results permit further refactoring and understanding of the tradeoffs inherent in the requirements, design, and plans.)
Figure 1 illustrates the change in measurement mindset when moving from waterfall-model measures of activities to iterative measures of scrap and rework trends in executable releases. Trends in the cost of change can be observed by measuring the complexity of change, which requires a project to quantify the rework (the effort required for resolution) and the number of instances of rework. In simple terms, adaptability quantifies the ease of changing a software baseline (the average rework effort per change), with a lower value being better. When changes are easy to implement, a project can afford to make more of them, and more changes incorporated means higher quality.
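To make the adaptability metric concrete, here is a minimal sketch in Python (the article itself contains no code) of how a project might derive it from change records. The ChangeRecord fields, release names, and effort figures are hypothetical stand-ins for whatever a real change-management system records.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class ChangeRecord:
    """One resolved change against a release baseline (hypothetical schema)."""
    release: str         # executable release the change was applied to
    rework_hours: float  # effort required to analyze, fix, and re-verify

def adaptability_by_release(changes: list[ChangeRecord]) -> dict[str, float]:
    """Average rework effort per change for each release; lower is better."""
    by_release: dict[str, list[float]] = {}
    for c in changes:
        by_release.setdefault(c.release, []).append(c.rework_hours)
    return {rel: mean(hours) for rel, hours in by_release.items()}

# Illustrative history: adaptability improving across successive releases.
history = [
    ChangeRecord("R1", 40.0), ChangeRecord("R1", 24.0),
    ChangeRecord("R2", 16.0), ChangeRecord("R2", 12.0),
    ChangeRecord("R3", 6.0),  ChangeRecord("R3", 4.0),
]
print(adaptability_by_release(history))  # {'R1': 32.0, 'R2': 14.0, 'R3': 5.0}
```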
With the conventional process and custom architectures, change became more expensive to incorporate the later it occurred in the lifecycle. When waterfall projects measured such trends, they tended to see the cost of change increase as they transitioned from testing individual units of software to testing the larger, integrated system. This is intuitively easy to understand: unit changes (typically implementation issues or coding errors) were relatively easy to debug and resolve, while integration changes (design issues, interface errors, or performance issues) were relatively complicated to resolve.
Figure 1 The discriminating improvement measure: change cost trends.
A discriminating result of a successful transition to a modern iterative process with an architecture-first approach is that the more expensive changes are discovered earlier, when they can be resolved efficiently, and changes become simpler and more predictable as the project progresses through the lifecycle. This is the result of attacking the uncertainties in architecturally significant requirements tradeoffs and design decisions earlier. The big change in an iterative approach is that integration activities mostly precede unit test activities, thereby resolving the riskier architectural and design challenges before investing in unit test coverage and complete implementations.
This is the single most important measure of software project health. If you have a good architecture and an efficient process, the long-accepted adage, "The later you are in the lifecycle, the more expensive things are to fix" does not apply. [2]
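As a rough illustration of checking this health measure, the sketch below (continuing the hypothetical example above, not anything prescribed by the article) fits a least-squares slope to the average change cost across successive releases: a falling trend matches the healthy iterative pattern in Figure 1, and a rising trend matches the conventional pattern.

```python
def change_cost_slope(costs: list[float]) -> float:
    """Least-squares slope of average change cost across successive releases."""
    n = len(costs)
    xs = range(n)
    x_bar = sum(xs) / n
    y_bar = sum(costs) / n
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, costs))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

# Healthy iterative trend: cost of change falls across releases.
print(change_cost_slope([32.0, 14.0, 5.0]) < 0)  # True (slope = -13.5)
# Conventional trend: cost of change rises toward delivery.
print(change_cost_slope([5.0, 14.0, 32.0]) < 0)  # False (slope = +13.5)
```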
Successful steering in iterative development is based on improved measurement and metrics extracted directly from the evolving sequence of executable releases. These measures, and the focus on building the architecture first, allow the team to assess trends in progress and quality explicitly, systematically addressing the primary sources of uncertainty. The absolute measures are useful, but the relative measures (or trends) of how progress and quality change over time are the real discriminators in improved steering, governance, and predictability. Balancing innovation with standardization is critical to governing the cost of iterating, as well as governing the extent to which you can reuse assets, versus developing more custom components. Standardization through reuse can take many forms, including the following:
- Product assets: Architectures, patterns, services, applications, models, commercial components, legacy systems, legacy components, etc.
- Process assets: Methods, processes, practices, measures, plans, estimation models, artifact templates, etc.
- People assets: Existing staff skills, partners, roles, ramp-up plans, training, etc.
- Platform assets: Schemas, commercial tools, custom tools, data sets, tool integrations, scripts, portals, test suites, metrics experience databases, etc.
While this series is primarily concerned with the practice of reducing uncertainty, there is an equally important practice of reusing assets based on standardization. The value of standardizing and reusing existing architectural patterns, components, data, and services lies in the reduction in uncertainty that comes from using elements whose function, behavior, constraints, performance, and quality are all known. The cost of standardization and reuse is that it can constrain innovation. It is therefore important to balance innovation and standardization, which requires an emphasis on economic governance to reduce uncertainty; that practice, however, is outside the scope of this discussion.
References
[1] Walker Royce, Software Project Management: A Unified Framework (Addison-Wesley, 1998).
[2] Appendix D in my book Software Project Management: A Unified Framework provides a large-scale case study of a Department of Defense project that achieved the cost-of-change pattern on the right side of Figure 1.