Improving Software Economics, Part 7 of 7: A Framework for Reasoning About Improving Software Economics
Today's empirical software cost estimation models (COCOMO II, SEER, QSM SLIM, and others) allow users to estimate costs to within 25–30% on three out of four projects. [1] This level of unpredictability in software project outcomes strongly indicates that software delivery and governance require an economics discipline, one that can accommodate high levels of uncertainty. These cost models include dozens of parameters and techniques for estimating a wide variety of software development projects. For the purposes of this discussion, I will simplify these estimation models into a function of four basic parameters:
- Complexity. The complexity of the software is typically quantified in units of human-generated stuff and its quality. Quantities may be assessed in lines of source code, function points, use-case points, or other measures. Qualities such as performance, reuse, reliability, and feature richness are also captured in the complexity value. Simpler and more straightforward applications will result in a lower complexity value.
- Process. This parameter is an exponent, typically in the range 1.0 to 1.25, that characterizes the governance methods, techniques, maturity, appropriateness, and effectiveness in converging on wins for all stakeholders. Better processes will result in a lower exponent.
- Teamwork. This parameter captures the skills, experience, motivations, and know-how of the team, along with its ability to collaborate toward well-understood and shared goals. More effective teams will result in a lower multiplier.
- Tools. The tools parameter captures the extent of process automation, process enactment, instrumentation, and team synchronization. Better tools will result in a lower multiplier.
The relationships among these parameters in modeling the estimated effort can be expressed as shown in Figure 1.
Figure 1 Relationships among parameters in the estimation models.
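Reading the parameter descriptions together (complexity as the quantity being exponentiated, process as the exponent, teamwork and tools as multipliers), the relationship in Figure 1 takes roughly the following COCOMO-style form. This is a structural sketch only; the production models add calibration constants and units:

```latex
\text{Effort} = \text{Teamwork} \times \text{Tools} \times \text{Complexity}^{\text{Process}}
```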
By examining the mathematical form of this equation and the empirical data in the various models and their practical application across thousands of industry projects, we can easily demonstrate that these four parameters are in priority order when it comes to their potential economic leverage. In other words, a 10% reduction in complexity is worth more than a 10% improvement in the process, which is worth more than a team being 10% more capable, which is worth more than a 10% increase in automation. In practice, this is exactly what IBM service teams have learned over the last 25 years of helping software organizations to improve their software development and delivery capability.
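A minimal numeric sketch (in Python, with illustrative values of my own choosing rather than calibrated model data) makes the leverage ordering concrete: because complexity sits under the exponent, a 10% reduction in complexity saves more than 10% of effort whenever the process exponent exceeds 1.0, whereas a 10% improvement in a plain multiplier such as teamwork or tools saves exactly 10%.

```python
# Sensitivity sketch for the simplified four-parameter cost model:
#   effort = teamwork * tools * complexity ** process
# All numbers are illustrative assumptions, not data from COCOMO II or SEER.

def effort(complexity: float, process: float, teamwork: float, tools: float) -> float:
    """Estimated effort under the simplified model."""
    return teamwork * tools * complexity ** process

# Baseline: a hypothetical 100,000-unit system with a process exponent of 1.2
# and teamwork/tools multipliers normalized to 1.0.
base = effort(100_000, 1.2, 1.0, 1.0)

# 10% less complexity: savings exceed 10% because the parameter is exponentiated
# (0.9 ** 1.2 is roughly 0.881, i.e., about 11.9% less effort).
less_complexity = effort(90_000, 1.2, 1.0, 1.0)

# 10% better teamwork (or tools): a plain multiplier, so exactly 10% savings.
better_teamwork = effort(100_000, 1.2, 0.9, 1.0)

for label, e in [("baseline", base),
                 ("10% complexity reduction", less_complexity),
                 ("10% teamwork improvement", better_teamwork)]:
    print(f"{label:26s} effort = {e:12.0f}  savings = {1 - e / base:6.1%}")
```

The process exponent's effect compounds with scale, so its leverage depends on project size; the priority ordering above rests on both the equation's form and the empirical data behind the models.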
We have been compiling best practices and economic improvement experiences for years. We are continually synthesizing this experience into more consumable advice and valuable intellectual property in the form of value traceability trees, metrics patterns, performance benchmarks, and instrumentation tools that provide a closed-loop feedback control system for improved insight into, and management of, the econometrics introduced earlier. Figure 2 summarizes the rough ranges of productivity impact and the timeframes associated with many of the more common initiatives that IBM is investing in and delivering every day across the software industry. The productivity impact typically affects only a subset of project and organization populations; these initiatives require savvy tailoring to put them into a specific context. As the scale of an organization grows, the impacts dampen, predominantly because of standard inertia, that is, resistance to change.
We have been careful to present ranges and probability distributions to make it clear that "your mileage may vary." The key message of Figure 2 is that a range of incremental improvements can be achieved, with a general hierarchy of impact. The more significant improvements, such as systematic reduction in complexity and major process transformations, also require the more significant investments and time to implement; these tend to be broader organizational initiatives. The more incremental process, skill, and automation improvements targeted at individual teams, projects, or smaller organizations are more predictable and straightforward to deploy.
Figure 2 A rough overview of expected improvements for some best practices.
The main conclusion that we can draw from the experience captured in Figure 2 is that improvements in each dimension have significant returns on investment. The key to substantial improvement in business performance is a balanced attack across the four basic parameters of the simplified software cost model:
- Reduce complexity.
- Streamline processes.
- Optimize team contributions.
- Automate with tools.
There are significant dependencies among these four dimensions of improvement. For example, new tools enable complexity reduction and process improvements, size reduction leads to process changes, collaborative platforms enable more effective teamwork, and process improvements drive tool advances.
At IBM, and in our broad customer base of software development organizations, we have found that the key to achieving higher levels of improvement in teamwork, process, and complexity reduction lies in supporting and reinforcing tooling and automation. Deploying best practices and changing cultures is more straightforward when you can systematically transform ways of working through tangible tools that automate and streamline those practices. Practitioners embrace such tools because they increase the time spent on creative work (planning, analysis, prototyping, design, refactoring, coding, testing, and deploying) while decreasing the time spent on unproductive activities such as unnecessary rework, change propagation, traceability, progress reporting, metrics collection, documentation, and training.
I realize that listing training among the unproductive activities will raise the eyebrows of some people. Training is an organizational responsibility, not a project responsibility. Any project manager who bears the burden of training people in processes, technologies, or tools is worse off than a project manager with a fully trained workforce. Having a fully trained workforce on every project is almost never possible, but employing trained people is always better than employing untrained people, other things being equal. In this sense, training is considered a non-value-added activity.
This is one of the fundamental dilemmas that organizations face as they try to improve in any one of the four dimensions. The overhead cost of training teams on new things is a significant inhibitor to project success; this cost explains many managers' resistance to any new change initiative, whether it involves new tools, practices, or people.
In making the transition to new techniques and technologies, there is always apprehension and concern about failing, particularly by project managers who are asked to make significant changes in the face of tremendous uncertainty. Maintaining the status quo and relying on existing methods is usually considered the safest path. In the software industry, where most organizations succeed on less than half of their software projects, maintaining the status quo is not a safe bet. When an organization decides to make a transition, two pieces of conventional wisdom are usually offered by both internal champions and external change agents:
- Pioneer any new techniques on a small pilot program.
- Be prepared to spend more resources (money and time) on the first project that makes the transition.
In my experience, both of these recommendations are counterproductive.
Small pilot programs have their place, but they rarely achieve any paradigm shift within an organization. Trying out a new technique, tool, or method on a very rapid, small-scale effort (less than three months, say, and with just a few people) can frequently show good results, initial momentum, or proof of concept. The problem with pilot programs is that they're almost never considered to be on the critical path of the organization. Consequently, they don't merit "A" players, adequate resources, or management attention. If a new method, tool, or technology is expected to have an adverse impact on the results of the trailblazing project, that expectation is almost certain to come true. Why? Because software projects almost never do better than planned. Without a very significant incentive to deliver early (which is very uncommon), projects will at best steer their way toward a target date. Therefore, the trailblazing project will be a non-critical project, staffed with non-critical personnel of whom less is expected. The adverse impact ends up being a self-fulfilling prophecy.
The most successful organizational paradigm shifts I have seen resulted from similar circumstances: The organization took its most critical project and highest-caliber personnel, gave them adequate resources, and demanded better results on that first critical project.
Conclusion
Day-to-day decisions in software projects have always been, and continue to be, dominated by considerations rooted in the discipline of economics: value judgments, cost tradeoffs, human factors, macroeconomic trends, technology trends, market circumstances, and timing. Software project activities are rarely governed by engineering concerns such as mathematics, material properties, laws of physics, or established and mature engineering models. The primary difference between economic and engineering governance is the amount of uncertainty inherent in the product under development. The honest treatment of uncertainty is the foundation of today's best practices; we have learned over and over that what makes a software practice better or best is that it reduces uncertainty in the target outcome.
Four concluding thoughts summarize the main themes of this series:
- Agile software delivery is better served by economic governance principles. With software delivery becoming a more dominant business process in most product, systems, and services companies, applying conventional engineering principles to managing software will not deliver a competitive level of predictability or a competitive track record.
- Our top 10 principles of Agile software delivery have a common theme: They describe "economic governance" approaches that attack uncertainties and reduce the variance in the estimate to complete.
- The primary metric for demonstrating that an organization or project has transitioned to effective Agile delivery is the trend in the cost of change (see the sketch after this list). This measure of the adaptability inherent in software releases is a key indicator of the flexibility required to navigate uncertainties continually and steer projects toward success.
- The next wave of technological advances to improve the predictability and outcomes of software economics needs to be in measurement and instrumentation that supports better economic governance.
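As a minimal illustration of the cost-of-change metric, the sketch below (with assumed release names and hours; not an IBM measurement tool) tracks average effort per accepted change, release over release. A downward trend signals the growing adaptability that the third point above treats as the primary indicator of effective Agile delivery.

```python
# Sketch: tracking the cost-of-change trend across releases.
# Release names and hours are hypothetical illustrations.

from statistics import mean

# Hours spent on each accepted change, grouped by release.
hours_per_change = {
    "R1": [40, 55, 32, 60],
    "R2": [35, 30, 44, 28, 38],
    "R3": [22, 25, 30, 18],
}

previous = None
for release, hours in hours_per_change.items():
    avg = mean(hours)
    delta = "" if previous is None else f"  ({avg - previous:+.1f} h vs. prior release)"
    print(f"{release}: average cost of change = {avg:5.1f} hours{delta}")
    previous = avg
```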
IBM, and the Rational organization in particular, will continue to invest in research, practices, measures, instrumentation, and tools to advance our knowledge and practice of software economic governance, so that our customers can exploit a mature business process for Agile software delivery.