False Positives Versus False Negatives
An actively managed portfolio demands judgment calls. Those judgments may well rest on quantitative values and careful measurements, but unless you have nearly inexhaustible resources and can see every risky project through to its conclusion, judgments will have to be made on imperfect information, and some of them will be wrong. Two simple criteria for effective portfolio management are to make judgments as early as possible and to make as few errors as possible. When speaking of errors in this context, you need to distinguish two types, often referred to as alpha and beta errors, in other contexts as false positives and false negatives, or simply as type I and type II errors.
In a portfolio, a false positive is a project deemed likely to succeed, which receives resources and advances but ultimately fails. A false negative is a project terminated on the assumption that it will fail but that ultimately proves successful. Although each type of error is easy enough to commit, false negatives are harder to track because a terminated project is only occasionally reincarnated to prove its worth. A typical false-negative scenario is a project that is terminated internally, so that no further resources are spent on it, but is licensed elsewhere, and the licensee ultimately succeeds. Even good judgments made under conditions of incomplete and imperfect knowledge will produce both types of error, and any attempt to eliminate one error type logically results in more instances of the other. If you never want to commit a false positive, you must ruthlessly terminate every project that shows any hint of failing so as not to commit resources to it unnecessarily, and in the process you create many more false negatives.
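To make the tradeoff concrete, the following sketch, which is illustrative only and not drawn from the text, simulates a pool of projects whose true quality is seen only through a noisy early assessment. The quality and noise distributions, the success bar, and the go/kill thresholds are all assumed; the point is simply that tightening the threshold drives false positives down while driving false negatives up.

```python
import random

random.seed(7)

# Illustrative only: true project quality, a noisy early assessment of it,
# and a go/kill threshold applied to that assessment.
N = 10_000
projects = []
for _ in range(N):
    quality = random.gauss(0.0, 1.0)               # unobservable true merit
    assessment = quality + random.gauss(0.0, 1.0)  # imperfect early measurement
    succeeds = quality > 0.5                       # assumed bar for ultimate success
    projects.append((assessment, succeeds))

for threshold in (-0.5, 0.0, 0.5, 1.0, 1.5):
    fp = sum(1 for a, s in projects if a >= threshold and not s)  # funded, then failed
    fn = sum(1 for a, s in projects if a < threshold and s)       # killed, would have succeeded
    print(f"threshold {threshold:+.1f}: false positives {fp:4d}, "
          f"false negatives {fn:4d}, total {fp + fn:4d}")
```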
Well-managed portfolios produce both types of error. But what cultural pressures might tilt the balance toward one error type and so drive up the number of errors overall? Naturally, no innovator, whether scientist, technologist, or artist, wants to see their project terminated. Not surprisingly, then, there is pressure to commit the false-positive error of backing a project that will ultimately fail. Consistent with this pressure, and serving the interests of individual project leaders and team members, most organizations have generated cautionary tales about "the one that got away."
This does not mean that false negatives are somehow good. All errors are costly: a false positive consumes resources and capital that, deployed elsewhere, could have benefited the organization and its customers, while a false negative is precisely the project in which continued investment of resources and capital would have served them. Remember that a bias toward either error type increases the total number of errors, and because both types represent cost without return, the goal has to be to keep the sum of all errors as low as possible. Errors are a natural part of decision making under uncertainty, but they can be managed well or poorly, and good decision processes are often the difference between your ultimate success and a competitor's.
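A companion sketch, again with assumed numbers, attaches hypothetical unit costs to the two error types (here a missed winner is taken to cost three times a wasted bet) and sweeps the go/kill threshold to find the one that minimizes total expected cost. The minimum sits at an interior threshold that tolerates some of each error rather than eliminating either.

```python
import random

random.seed(7)

# Illustrative only: assumed unit costs and an assumed project model; the
# point is that the cost-minimizing threshold accepts some of both errors.
COST_FP, COST_FN = 1.0, 3.0   # hypothetical cost of a wasted bet vs. a missed winner
N = 10_000
data = [(q + random.gauss(0.0, 1.0), q > 0.5)
        for q in (random.gauss(0.0, 1.0) for _ in range(N))]

def total_cost(threshold: float) -> float:
    fp = sum(1 for a, s in data if a >= threshold and not s)
    fn = sum(1 for a, s in data if a < threshold and s)
    return COST_FP * fp + COST_FN * fn

best = min((t / 10 for t in range(-20, 21)), key=total_cost)
print(f"cost-minimizing go/kill threshold is about {best:+.1f}, "
      f"total cost there {total_cost(best):.0f}")
```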