How Do You Know You're Failing?
The trick, of course, is to know when the project has gone off the rails. There's no definitive set of metrics, but one of the benefits of Agile is that, if you're paying attention, you can construct a decent set of signals that warn you when things are heading that way. Let's consider some of these metrics.
You Never Go to Production
Stories abound of projects that seemed to be going well, with Gantt charts lining up every week to show good progress. In many cases, everything was fine until the product actually got installed and simply didn't work. Perhaps the user experience turned out to be unacceptable, or the performance wasn't good. Perhaps the product was very buggy, or it had security holes large enough to admit an armored truck. Regardless of the exact reason, many or most of the cases of $100M of investment leading to no software probably had a story in one of those buckets.
The golden rule is this: Make sure that you go to production well before you become too big to fail. If you can't get software representing a minimum viable product (MVP) into users' hands before the investment grows past the level that the organization could absorb as an operational loss, you should probably cancel the project. This metric, getting an MVP out prior to escape velocity, is something of a "north star" that I use to drive all the other metrics about when to kill a project.
Unrealistic Burn-Up
Two kinds of burn-up charts make me very, very nervous (see Figure 1):
- Highly volatile over a long period of time. High volatility is to be expected early in the project, as people ramp up and teams work through the "risky areas" first. However, if you're well past that point (for example, think sixth or seventh iteration), and your standard deviation of story points delivered per iteration is growing, it's hard to make the case that you have any kind of remotely predictable velocity. This is especially troubling if your average is below what you need (think numbers like 0, 10, 6, 20, 2, 15, when you really need to achieve 30 per iteration). Such situations tend to point not just to less-than-ideal estimates, but to other problems in the team that merit investigation.
- No variation at all in velocity delivered. If the burn-up calls for 30 points per iteration, and the team is delivering exactly that, you might have a case of "Enron Story Point Accounting" on your hands. Few teams are so perfect in estimation, and so regular in delivery, that you will see no variation at all. Lack of variation usually points to something more insidious: project management that might be very eager to tell you what you want to hear.
Figure 1 Points per iteration, two worrisome scenarios.
Generally, in a project that has properly ramped up and is delivering, you want to see a standard deviation in the "Goldilocks" zone: high enough to know that you aren't just being told what you want to hear, but not so high as to make it impossible to credibly estimate velocity going forward.
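To make the Goldilocks zone concrete, here's a minimal Python sketch of the check. The iteration history, the five-iteration ramp-up window, and the volatility threshold are all hypothetical assumptions for illustration; nothing here prescribes exact cutoffs.

```python
from statistics import mean, stdev

def assess_velocity(points, target, ramp_up=5):
    """Flag the two worrisome burn-up patterns once ramp-up is over."""
    steady = points[ramp_up:]            # ignore the early, noisy iterations
    avg, sd = mean(steady), stdev(steady)
    if sd == 0:
        return "No variation at all: possible 'Enron Story Point Accounting'"
    if avg < target and sd > avg / 2:    # illustrative threshold, not canon
        return f"Volatile and below target: mean {avg:.1f}, stdev {sd:.1f}"
    return f"Plausible: mean {avg:.1f}, stdev {sd:.1f} points per iteration"

# Hypothetical history; the last six numbers are the sequence from the text,
# on a team that needs 30 points per iteration.
print(assess_velocity([25, 28, 12, 18, 30, 0, 10, 6, 20, 2, 15], target=30))
```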
Metric Deterioration: Internal Quality Loss
While some signs that things are going horribly awry are plainly visible, hidden quality deterioration is a more insidious problem. Often it's the result of missing a velocity number a couple of times in a row; pressure then comes from on high to do anything possible to go faster. That pressure ultimately results in cutting corners for the sake of expediency, and the quality lost buys short-term speed at very high long-term cost.
Part of any good program-management regimen is establishing a balanced scorecard of internal quality metrics that sits side by side with the burn-up chart. This can't be just one metric, such as code coverage, as that's achievable by sacrificing other metrics or simply writing low-quality tests. Ideally, you need a collection of metrics, such as instability index and cyclomatic complexity, collected over a span of time so that you can compare them against velocity. If you see an inverse relationship between your quality metrics and velocity, that's a giant red flag.
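As a rough illustration of that comparison, a plain correlation over per-iteration data is enough to surface the pattern. The series, the choice of cyclomatic complexity as the quality proxy, and the 0.7 threshold are hypothetical assumptions, not anything the scorecard mandates.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient, no external libraries needed."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-iteration series. Complexity is an inverse proxy for
# quality, so complexity rising in lockstep with velocity is exactly the
# inverse quality-versus-velocity relationship to watch for.
velocity   = [18, 22, 26, 29, 33, 36]
complexity = [6.0, 6.4, 7.1, 8.0, 9.2, 10.5]

r = pearson(velocity, complexity)
if r > 0.7:  # illustrative threshold
    print(f"Giant red flag: speed appears to be bought with quality (r = {r:.2f})")
```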
One more useful metric, and this one is harder to fake, is the span of time between when a bug is found and when it's resolved. One way that information about internal quality loss tends to leak is that bugs start taking longer to fix. While this isn't itself proof positive that you have a failing project, it's usually a good sign that something else is wrong. It could be that developers just aren't prioritizing bug-fixing work, or it could be a "canary in the coal mine" of what the maintenance experience will be like down the road. If you consistently see bugs that persist from one iteration to the next, it behooves you to find out why.
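Tracking that span is simple date arithmetic. The sketch below assumes hypothetical bug records and two-week iterations; real data would come from your issue tracker.

```python
from datetime import date

# Hypothetical bug records: (id, date found, date resolved or None if open)
bugs = [
    ("BUG-101", date(2024, 3, 4),  date(2024, 3, 6)),
    ("BUG-102", date(2024, 3, 5),  date(2024, 3, 21)),
    ("BUG-103", date(2024, 3, 11), None),
]

ITERATION_DAYS = 14              # assuming two-week iterations
today = date(2024, 4, 1)

for bug_id, found, resolved in bugs:
    open_days = ((resolved or today) - found).days
    if open_days > ITERATION_DAYS:   # persisted from one iteration to the next
        print(f"{bug_id}: open {open_days} days -- find out why")
```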
In general, internal quality loss may not affect your ability to get to the first release, but the costs of low internal quality certainly will impact later releases. Maintainability isn't just an issue for people down the road; most large programs that turn into $100M+ disasters have initial charters that go beyond an initial release. Maintainability isn't abstract for such projects; it can be a time bomb whose damage stays hidden until after you've reached escape velocity.