Attention to Flow
Central to the value-up paradigm is an emphasis on flow. Flow has two distinct meanings, and both are significant in planning software projects.
First, flow is the human experience of performing expertly, as Mihaly Csikszentmihalyi explains in Flow: The Psychology of Optimal Experience:
We have seen how people describe the common characteristics of optimal experience: a sense that one’s skills are adequate to cope with the challenges at hand, in a goal-directed, rule-bound action system that provides clear clues as to how well one is performing. Concentration is so intense that there is no attention left over to think about anything irrelevant, or to worry about problems. Self-consciousness disappears, and the sense of time becomes distorted. An activity that produces such experiences is so gratifying that people are willing to do it for its own sake, with little concern for what they will get out of it, even when it is difficult, or dangerous.10
This meaning of flow is cited heavily by advocates of eXtreme Programming (XP) and other practices that focus on individual performance.
The second meaning of flow is the flow of customer value as the primary measure of the system of delivery. David J. Anderson summarizes this view in Agile Management for Software Engineering:
Flow means that there is a steady movement of value through the system. Client-valued functionality is moving regularly through the stages of transformation—and the steady arrival of throughput—with working code being delivered.11
In this paradigm, you do not measure planned tasks completed as the primary indicator of progress; you count units of value delivered. The throughput of delivered value and the stage of completion of each unit of value are the indicators you use for planning and measurement.
Correspondingly, the flow-of-value approach forces you to understand the constraints that restrict the flow. You tune the end-to-end flow by identifying the most severe bottleneck or inefficiency in your process, fixing it, and then tackling the next most severe one. As Anderson explains:
The development manager must ensure the flow of value through the transformation processes in the system. He is responsible for the rate of production output from the system and the time it takes to process a single idea through the system. To understand how to improve the rate of production and reduce the lead time, the development manager needs to understand how the system works, be able to identify the constraints, and make appropriate decisions to protect, exploit, subordinate, and elevate the system processes.12
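As a minimal sketch of that reasoning (the stages and weekly throughput figures below are hypothetical, not drawn from Anderson), the system can deliver no faster than its slowest stage, so locating the constraint amounts to measuring throughput at each stage:

```python
# Hypothetical weekly throughput (client-valued work items per week)
# observed at each stage of the delivery system. All numbers are
# illustrative assumptions.
stage_throughput = {
    "analysis": 12,
    "development": 9,
    "testing": 4,
    "deployment": 10,
}

# The system can deliver no faster than its slowest stage, so the
# constraint is the stage with the minimum throughput.
constraint = min(stage_throughput, key=stage_throughput.get)

print(f"Constraint: {constraint} at {stage_throughput[constraint]} items/week")
# Speeding up any other stage changes nothing until this constraint is
# protected, exploited, subordinated to, or elevated.
```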
A flow-based approach to planning and project management requires keeping intermediate work-in-process to a minimum, as shown in Figure 1.3. This mitigates the risk of late discovery of problems and unexpected bubbles of required rework.
Figure 1.3 Measuring flow of scenario completion on a daily basis shows the rhythm of progress and quickly identifies bottlenecks that can be addressed as they arise.
Figure 1.3 shows how the continuous measurement of flow can illuminate bottlenecks as they form. Planned work for the iteration is progressing well through development (Active turning to Resolved) but is increasingly getting stuck in testing (Resolved to Closed). This accumulates as the bulge of work-in-process in the middle band. If you tracked development only (the reduction in Active work items), you would expect the work to be complete by the planned end date; because of the bottleneck, however, you can see that the slope of the Closed triangle is not steep enough to finish the work on time. This lets you drill into the bottleneck and determine whether the problem is inadequate testing resources or poor quality of work from development.
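As a hedged illustration (the work item states and daily counts below are invented to mirror the shape described for Figure 1.3, not taken from it), the same analysis can be done directly on daily snapshots: the Resolved band keeps growing while the Closed trend is too shallow to finish by the planned end date.

```python
# Hypothetical daily snapshots of work item counts for a 10-day iteration
# of 40 planned items: (day, active, resolved_but_not_closed, closed).
TOTAL_ITEMS = 40
DAYS_IN_ITERATION = 10
daily_counts = [
    (1, 36, 3, 1),
    (2, 31, 6, 3),
    (3, 26, 10, 4),
    (4, 21, 14, 5),
    (5, 16, 18, 6),
]

first_day, first_active, _, first_closed = daily_counts[0]
day, active, resolved, closed = daily_counts[-1]
elapsed = day - first_day

# Development looks healthy if you watch only the shrinking Active count...
dev_rate = (first_active - active) / elapsed

# ...but the slope that matters is Closed, the value actually delivered.
close_rate = (closed - first_closed) / elapsed
projected_closed = closed + close_rate * (DAYS_IN_ITERATION - day)

print(f"Work-in-process stuck in testing: {resolved} items and growing")
print(f"Development rate {dev_rate:.1f}/day, close rate {close_rate:.1f}/day")
if projected_closed < TOTAL_ITEMS:
    print("Bottleneck: at this close rate the iteration will not finish on time")
```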
Contrast to Work-Down
An icon of the work-down paradigm is the widely taught "iron triangle" view of project management. This is the notion that there are only three variables that a project manager can work with: time, resources (of which people are by far the most important), and functionality. If you acknowledge quality as a fourth dimension (which most people do now), then you have a tetrahedron, as shown in Figure 1.4.
In Rapid Development, Steve McConnell summarizes the iron triangle as follows:
To keep the triangle balanced, you have to balance schedule, cost, and product. If you want to load up the product corner of the triangle, you also have to load up cost or schedule or both. The same goes for the other combinations. If you want to change one of the corners of the triangle, you have to change at least one of the others to keep it in balance.13
Figure 1.4 The "iron triangle" (or tetrahedron) treats a project as a fixed stock of work, in classic work-down terms. To stretch one face of the tetrahedron, you need to stretch the others.
According to this view, a project manager starts with a fixed stock of resources and time. Any increase in functionality or quality requires a corresponding increase in time or resources. You cannot stretch one face of the tetrahedron without stretching the others, because they are all connected.
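As a deliberately oversimplified sketch of that arithmetic (the person-month figures below are invented for illustration and are not from the book), the work-down view treats the project as a fixed stock of work to be burned down by people over time:

```python
# Toy "iron triangle" model: the project is a fixed stock of work, and the
# only levers are functionality (scope), resources, and time. All numbers
# are illustrative assumptions.

def is_balanced(functionality_pm: float, team_size: int, months: int) -> bool:
    """The corners balance only if the stock of work (in person-months)
    fits within what the team can burn down in the schedule."""
    return functionality_pm <= team_size * months

print(is_balanced(90, 8, 12))    # True:  96 person-months available for 90 of work
print(is_balanced(120, 8, 12))   # False: added functionality unbalances the triangle
print(is_balanced(120, 10, 12))  # True:  balance restored by adding resources
```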
Although widely practiced, this paradigm does not work well. Just as Newtonian physics is now known to be a special case, the iron triangle is a special case that assumes the process is flowing smoothly to begin with. In other words, it assumes that resource productivity is quite uniformly distributed, that there is little variance in the effectiveness of task completion, and that no spare capacity exists throughout the system. These conditions exist sometimes, notably on low-risk projects. Unfortunately, for the types of software projects usually undertaken, they are often untrue.
Many users of agile methods have reported experiences that pleasantly contradict this viewpoint. For example, in many cases, improving qualities of service such as reliability actually shortens delivery time. Significant improvements in flow are possible within the existing resources and time.14
Transparency
It’s no secret that most software projects are late, both in the execution and in the discovery that they are late.15 This phenomenon has many consequences, which are discussed in almost every chapter of this book. One of them is a vicious cycle of groupthink and denial that undermines effective flow. Late delivery leads to requests for replanning, which lead to pressure for ever more optimistic estimates, which lead to more late delivery, and so on. Most participants in these projects plan optimistically, replan, and replan again, with little visibility into the cumulative effect. All too often, the result is a death march.
This is not because people can’t plan or manage their time. The problem is more commonly the disparity among the priorities and expectations of different team members. Most approaches to software engineering provide lots of places to track the work: spreadsheets, Microsoft Project plans, requirements databases, bug databases, test management systems, triage meeting notes, and so on. When the information is scattered this way, it is hard to get a complete picture of the project. You have to look in too many sources, and it is hard to reconcile all the information into one schedule. And with so many sources, the information you find is often obsolete by the time you find it.
Things don’t need to be that way. Some community projects post their development schedules on the Web, which effectively has individual contributors set expectations with their community peers about their tasks. Making all the work in a project visible can create a virtuous cycle. Of course, this assumes that the project is structured iteratively, that scheduling and estimation are done at the right granularity, and that triage is effective at keeping work item priorities in line with the available resources in the iteration.
Scrum, one of the agile processes, championed the idea of a transparently visible product backlog, as shown in Figure 1.5. Here’s how Ken Schwaber and Mike Beedle define the product backlog in Agile Software Development with Scrum:
Product Backlog is an evolving, prioritized queue of business and technical functionality that needs to be developed into a system. The Product Backlog represents everything that anyone interested in the product or process has thought is needed or would be a good idea in the product. It is a list of all features, functions, technologies, enhancements and bug fixes that constitute the changes that will be made to the product for future releases. Anything that represents work to be done on the product is included in Product Backlog.16
Figure 1.5 The central graphic of the Scrum methodology is a great illustration of flow in the management sense. Not surprisingly, Scrum pioneered the concept of a single product backlog as a management technique.
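To make that definition concrete, here is a minimal sketch (the item fields, scoring, and values are assumptions for illustration, not from Schwaber and Beedle) of a product backlog as a single, evolving, prioritized queue:

```python
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    """One unit of client-valued work: a feature, enhancement, or bug fix."""
    title: str
    business_value: int      # assumed relative value score
    estimate_days: float     # assumed rough size estimate

@dataclass
class ProductBacklog:
    """A single, evolving queue: anything that represents work on the
    product goes here, and it is kept sorted by priority."""
    items: list = field(default_factory=list)

    def add(self, item: BacklogItem) -> None:
        self.items.append(item)
        # Highest business value first; a real team would prioritize by
        # richer criteria (risk, dependencies, cost of delay).
        self.items.sort(key=lambda i: i.business_value, reverse=True)

    def top(self, n: int) -> list:
        """The candidate items for the next iteration."""
        return self.items[:n]

backlog = ProductBacklog()
backlog.add(BacklogItem("Export report to PDF", business_value=8, estimate_days=3))
backlog.add(BacklogItem("Fix login timeout bug", business_value=9, estimate_days=1))
backlog.add(BacklogItem("Redesign settings page", business_value=5, estimate_days=5))
print([item.title for item in backlog.top(2)])
```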
This transparency is enormously effective for multiple reasons. It creates a "single set of books," or in other words, a unique, maintained source of information on the work completed and remaining. Combined with flow measurement, as shown in Figure 1.3, it creates trust among the team because everyone sees the same data and plan. And finally, it creates a virtuous cycle between team responsibility and individual accountability. After all, an individual is most likely to complete a task when he or she knows exactly who is expecting it to be done.17