Software [In]security: Top 11 Reasons Why Top 10 (or Top 25) Lists Don’t Work
The 2009 Software Security Bug Parade
The 2009 CWE/SANS Top 25 Most Dangerous Programming Errors was recently released with much fanfare. Lists of "the most significant" software security bugs are certainly not a new phenomenon, with the OWASP top ten (first published in 2004) garnering the lion's share of the attention. Certainly the idea of knowing your enemy (in this case, software vulnerabilities) is important in software security. However, as I have pointed out in previous iterations of this column, there's more to software security than watching the bug parade march by.
Today, I present the top eleven reasons why generic top N bug parade lists may be less helpful than you think. But first, a history lesson…
Bug Parades of Christmas Past [1]
The idea of collecting and organizing information about computer security vulnerabilities has a long history. One of the first (unclassified) studies of computer security and privacy was the RISOS (Research into Secure Operating Systems) project from 1976 [2]. The problem of creating lists and taxonomies has been of great interest since then.
Well-known research projects include the "Protection Analysis" work of Bisbey and Hollingworth (1978) and Carl Landwehr's Naval Research Lab taxonomy from 1993. Plenty of taxonomy work has been done on the attack side of the equation too, including Cheswick and Bellovin's attack classes (2003) and my work with Hoglund on attack patterns (2004).
A more recent strain of work on vulnerabilities includes Aslam's classification scheme (1995), PLOVER (Preliminary List of Vulnerability Examples for Researchers) from 2005, and the Common Weakness Enumeration (CWE) project (ongoing).
A number of practitioners have developed top ten lists and other related collections based on experience in the field. Two of the most popular and useful lists are the 19 Sins [3] and the OWASP top ten. To this assembly of lists, piles, and collections we can add the list of the day — the CWE/SANS top 25.
Top Eleven Reasons Why Top Ten Lists Don't Work
Before I start, there are some good things about top ten lists that are worth mentioning. The notion of knowing your enemy is essential in security (as it is in warfare), and top ten lists can help get software people started thinking about attacks, attackers, and the vulnerabilities they go after. These days almost any attention paid to the problem is good attention, and the fact that the technical media is paying attention to software security at all is a good thing. Top ten lists help in that respect.
Without further ado, however, here are eleven reasons why top ten lists don't work:
- Executives don't care about technical bugs. Security is about risk management. Risk management involves getting your head out of the technical weeds and understanding which applications really matter to your organization from a business perspective. Geeky pontification about top ten lists does very little (if anything) to manage business risk. Putting "controls" around "top generic technical problems" may not be the best course of action.
- Too much focus on bugs. Software security practitioners have known for years that software defects lead to serious security problems. What we all seem to forget sometimes is that defects come in two basic flavors (divided roughly 50/50 in terms of prevalence): bugs in the code and flaws in the design. Top ten lists tend to focus on bugs, to the detriment of attention to design-level problems. (The first sketch after this list contrasts the two flavors.)
- Vulnerability lists help auditors more than developers. Teaching someone how to do the right thing is much more cost-effective and efficient than attempting to teach someone how not to do an infinite set of wrong things. Software people react more positively to being shown how to do things right than they do to a bug parade. On the other hand, big lists of bugs certainly make auditing code easier. But how efficient is that?
- One person's top bug is another person's yawner. (This point is closely related to number 1.) Software defects do have serious and important business impact, but impact is not an objective variable that carries across all organizations. In my experience, a list of "top N defects" is very powerful in any given organization, but actual lists differ according to the dev group, coding habits, tech stacks, standards, and a host of other variables. You'll be very lucky if your real list of top bugs aligns with any generic list.
- Using bug parade lists for training leads to awareness but does not educate. (This point is closely related to number 3.) Developers and architects are much better off understanding and learning how to do things right (defensive programming) than they are when presented with a laundry list of defects, even when those defects are shown in living color. In the early days, biology was about taxonomies and zoos. In modern times, biology is about cell mechanisms, DNA, and evolution. Likewise, modern software security needs to be more about the mechanics of building systems that work than it is about collections of no-nos.
- Bug lists change with the prevailing technology winds. Though top ten lists can certainly be updated (witness the OWASP top ten), rapid changes in technology make lists of particular problems obsolete very quickly. In this sense, education about building things properly (and about how things like stacks really work) again trumps lists of specifics.
- Top ten lists mix levels. Taxonomies are always superior to lists, especially when they are simple. Thinking about seven top-level concerns (as presented in the Seven Pernicious Kingdoms) is much less confusing from an intellectual perspective than equating "Buffer Overflows" and "Failing to Protect Network Traffic" as the 19 Sins work does. Lists of bugs have lower fidelity when it comes to activities required to build secure software.
- Automated tools can find bugs — let them. Teaching all developers the 700+ bad things in the Common Weakness Enumeration (or the even larger set of permutations allowed by languages like C++) is a futile exercise. We can start by using static analysis tools to remember all of the potential bugs and alert us to their presence (the second sketch after this list shows the kind of purely syntactic pattern such tools excel at catching). Better even than that would be adopting programming languages that don't suck. (For a real-world story about language issues, see Microsoft's Missed Opportunity.)
- Metrics built on top ten lists are misleading. The notion of coming up with metrics for software security is of critical importance. But as all business school graduates know (and not enough geeks, sadly), bad metrics can do more harm than no metrics. Using the OWASP top 10 or the CWE/SANS top 25 to drive your software security initiative would be a major mistake. See points 1 and 4 for more.
- When it comes to testing, security requirements are more important than vulnerability lists. Security testing should be driven from requirements and abuse cases rather than from hoping to discover particular technical bugs in code. Proper use of threat modeling and architectural risk analysis drives effective, traceable tests (the third sketch after this list traces a test back to an abuse case).
- Ten is not enough. A myopic focus on ten bugs makes little sense. Adding 15 more to the pile may not do much to help. As the CWE/SANS top 25 website says, "The CWE site also contains data on more than 700 additional programming errors, design errors, and architecture errors that can lead to exploitable vulnerabilities." Enough said.
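To make the bug/flaw split in point 2 concrete, here is a minimal C sketch; the function names are mine, invented purely for illustration. The bug is a classic unbounded copy (CWE-120), fixable with a one-line, code-level change. The flaw, by contrast, never shows up in any single function, which is exactly why bug parades miss it.

```c
#include <stdio.h>
#include <string.h>

/* A code-level BUG (CWE-120): strcpy() copies without checking the
 * destination size, so a long attacker-supplied name overflows the
 * 16-byte stack buffer. This is the kind of localized defect that
 * top N lists (and code scanners) catch. */
void greet_buggy(const char *name) {
    char buf[16];
    strcpy(buf, name);                      /* no bounds check: the bug */
    printf("Hello, %s\n", buf);
}

/* The fix is equally local: bound the copy. */
void greet_fixed(const char *name) {
    char buf[16];
    snprintf(buf, sizeof buf, "%s", name);  /* truncates safely */
    printf("Hello, %s\n", buf);
}

/* A design-level FLAW, by contrast, is something like "the client is
 * trusted to enforce access control" or "session tokens never expire."
 * No single function exhibits it, so no list of coding errors will
 * flag it -- and flaws make up roughly half of the defect space. */
int main(void) {
    greet_fixed("world");                   /* safe demonstration */
    (void)greet_buggy;                      /* bug left uncalled on purpose */
    return 0;
}
```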
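Point 8 argues that tools should do the remembering. As a hedged illustration (again with invented function names), here is the kind of purely syntactic defect pattern that static analysis tools are good at flagging, so that developers don't have to memorize the catalog: an uncontrolled format string (CWE-134).

```c
#include <stdio.h>

/* CWE-134: user data used directly as a format string. The pattern is
 * purely syntactic, which is why most static analysis tools (and even
 * compiler warnings such as GCC's -Wformat-security) flag it on sight. */
void log_msg_buggy(const char *user_input) {
    printf(user_input);                     /* tool finding: format string from user */
}

/* The idiomatic fix: constant format string, user data as an argument. */
void log_msg_fixed(const char *user_input) {
    printf("%s", user_input);
}

int main(void) {
    log_msg_fixed("hello %x %x");           /* printed literally, not interpreted */
    (void)log_msg_buggy;                    /* buggy version left uncalled */
    return 0;
}
```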
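Finally, point 10 prefers tests traced to requirements and abuse cases over hunting for listed bugs. This sketch assumes a hypothetical requirement ("reject usernames longer than 64 characters") and an equally hypothetical validator; the point is that the test traces to an abuse case, not to a CWE entry.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical requirement R-7: reject usernames longer than 64 characters.
 * Abuse case: "attacker submits an oversized username to probe for memory
 * corruption or downstream truncation." */
#define MAX_USERNAME_LEN 64

/* Hypothetical validator under test. */
bool accept_username(const char *name) {
    return name != NULL && strlen(name) <= MAX_USERNAME_LEN;
}

/* The test is traceable to the requirement and its abuse case. */
int main(void) {
    char oversized[MAX_USERNAME_LEN + 2];
    memset(oversized, 'A', sizeof oversized - 1);
    oversized[sizeof oversized - 1] = '\0'; /* 65 'A' characters */

    assert(accept_username("alice"));       /* legitimate use accepted */
    assert(!accept_username(oversized));    /* abuse case rejected */
    assert(!accept_username(NULL));         /* degenerate input rejected */
    return 0;
}
```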
Footnotes
1. Instead of providing a complete list of taxonomy references here, I direct interested readers to Chapter 12 of Software Security: Building Security In. Also note that the annotated bibliography (Chapter 13) is available in full on the Web.
2. Robert Abbott, Janet Chin, James Donnelley, William Konigsford, Shigeru Tokubo, and Douglas Webb. "Security Analysis and Enhancements of Computer Operating Systems," NBSIR 76-1041, National Bureau of Standards, ICST, Washington, DC, 1976.
3. Published in 19 Deadly Sins of Software Security by Howard, LeBlanc, and Viega (2005).