Software [In]security: Software Security Top 10 Surprises
Using the software security framework introduced in October (A Software Security Framework: Working Towards a Realistic Maturity Model), we interviewed nine executives running top software security programs in order to gather real data from real programs. Our goal is to create a maturity model based on these data, and we're busy working on that (stay tuned here for more). However, in the course of analyzing the data we gathered, we unearthed some surprises that we share in this article.
Nine Top Software Security Programs
Of the twenty-three large-scale software security initiatives we are aware of, we chose nine that we considered the most advanced. Our nine organizations are drawn from three verticals: financial services, independent software vendors, and technology firms.
On average, the target organizations have practiced software security for five years and four months (the newest initiative being two and a half years old and the oldest a decade old). All nine have an internal group devoted to software security that we choose to call the Software Security Group, or SSG. SSG size averages 41 people (smallest 12, largest 100, median 35), surrounded by a "satellite" averaging 79 others (smallest 0, largest 300, median 20): developers, architects, and other people in the organization directly engaged in and promoting software security. The average number of developers among our targets was 7,550 people (smallest 450, largest 30,000, median 5,000), yielding an average SSG-to-development percentage, computed per firm, of just over 1%.
We conducted the nine interviews in person and spent two hours going over each software security initiative in a conversation guided by the software security framework.
We're currently in the midst of the next step of the process: analyzing the data and presenting preliminary results to the nine participants. We will develop and publish a maturity model based on the data we gathered. Our objective is to impose some science on the often messy and subjective field of software security. We figure we'll get about as close to science as anthropology ever does.
Ten Surprising Things
During our analysis, some interesting patterns emerged from the soup. Without further ado, the top ten most surprising things we learned about real software security programs:
9. Not only are there no magic software security metrics, bad metrics actually hurt.
Don't get us wrong — gathering metrics is important, especially when a large-scale initiative is involved. We noted a number of point metrics that multiple programs rely on, but even the most advanced programs don't use any sort of balanced scorecard approach. In several cases, we heard stories about metrics that were misused, abused, or ignored in such a way that they actually created a roadblock to progress. The best course is to build metrics that work for your organization, collect them continuously, and keep a weather eye out for feedback loops that point people in the wrong direction.
8. Secure-by-default frameworks can be very helpful, especially if they are presented as middleware classes (but watch out for an over-focus on security "stuff").
Software security has always made much hue and cry about being proactive. Part of a proactive stance is creating things like software frameworks that everyone in development can use (and learn from). Modern software frameworks allow plenty of flexibility. By making sure that a framework is used in a secure manner (and providing real examples in code), an SSG can create something tangible that helps development get its job done. If you choose to do this, make sure you get beyond standard security features like crypto. For example, a Struts application can be set up to include input validation by default. If you define things properly, you can even enforce the use of secure-by-default frameworks with a static analysis tool.
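To make the idea concrete, here is a minimal sketch of what a secure-by-default middleware class might look like. The SafeForm class and its whitelist are illustrative assumptions on our part, not part of Struts or any real framework; the point is simply that validation happens unless a developer deliberately opts out.

    import java.util.regex.Pattern;

    /**
     * Sketch of a secure-by-default middleware class: every field is checked
     * against a whitelist unless a developer deliberately supplies another one.
     * SafeForm is hypothetical; substitute your organization's own framework.
     */
    public class SafeForm {
        // Conservative default: letters, digits, and a few punctuation marks.
        private static final Pattern DEFAULT_WHITELIST =
                Pattern.compile("[A-Za-z0-9 .,_-]{0,256}");

        private final Pattern whitelist;

        public SafeForm() {
            this(DEFAULT_WHITELIST); // validation is on by default
        }

        public SafeForm(Pattern whitelist) {
            this.whitelist = whitelist; // opting out is an explicit, reviewable act
        }

        /** Returns the raw value only if it matches the whitelist. */
        public String field(String name, String rawValue) {
            if (rawValue == null || !whitelist.matcher(rawValue).matches()) {
                throw new IllegalArgumentException("Rejected input for field: " + name);
            }
            return rawValue;
        }
    }

A static analysis rule can then flag any code that reads request parameters without going through the wrapper, which is one way to enforce adoption.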
7. Web application firewalls are not in wide use, especially not as Web application firewalls.
Web applications are a kind of software with major high-profile security concerns. Web application firewalls have been touted as a complete solution to Web application security problems by vendors while simultaneously being pilloried as utterly useless by the cognoscenti. The truth must lie somewhere between these extremes. We were surprised to find that only two of the nine organizations we interviewed used Web application firewalls at all. But even these two didn't use them to block application attacks; they used them to monitor Web applications and gather data about attacks. In our view, the best use of Web application firewalls is to buy some time to fix your security problems properly in the software itself by thwarting an active attack that you are already experiencing. So go ahead and stop attacks at a network choke point, but don't forget to fix the root cause in the code.
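Running a WAF as a monitor rather than a blocker is typically a one-line configuration choice. As a sketch, ModSecurity (a common open source WAF, named here only as an example; the organizations we interviewed did not disclose their products) can be put in detection-only mode like this:

    # Log rule matches but never block; the WAF becomes a monitoring sensor.
    SecRuleEngine DetectionOnly
    # Keep audit records only for requests that tripped a rule.
    SecAuditEngine RelevantOnly
    SecAuditLog /var/log/modsec_audit.log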
6. Involving QA in software security is non-trivial... Even the "simple" black box Web testing tools are too hard to use.
To scale to the sheer magnitude of the software security problem we've created for ourselves, the QA department has to be part of the solution. The challenge is to get QA to understand security and the all-important attackers' perspective. One sneaky trick for solving this problem is to encapsulate the attackers' perspective in automated tools that QA can use. What we learned is that even today's Web application testing tools (badness-ometers of the first order) remain too difficult to use for testers who spend most of their time verifying functional requirements. QA is involved in software security in many real software security programs, but in all successful cases, QA is staffed by software engineers.
5. Though software security often seems to fit an audit role rather naturally, many successful programs evangelize (and provide software security resources) rather than audit, even in regulated industries.
A cursory glance at software security Touchpoints shows an emphasis on checking things once certain software artifacts exist (think code review, architectural risk analysis, and penetration testing). These kinds of activities can be imposed in two ways: as audit gates or as teaching opportunities. We were surprised to find a strong emphasis on setting up an SSG as a resource helping to evangelize software security and to teach people how to build better code rather than as an audit bureau. This pattern held even in many of the financial services firms we spoke with, where we expected a much stronger "we are the security overlords" approach. The old adage "you catch more flies with honey than with vinegar" apparently applies even to software security bugs.
4. Architecture analysis is just as hard as we thought, and maybe harder.
By now, almost everyone understands that software security problems come in two basic flavors: bugs at the implementation level (in code) and flaws at the design level. The organizations we talked to were almost all engaged in searching for both kinds of problems. If we're to have any hope of taming software security or even securing a particular system, it's critical that we pay just as much attention to design problems as we do to coding errors. Working on one without the other will simply "squeeze the balloon." In the last few years we've made great progress automating the discovery of software security bugs with static analysis tools and centralized code scanning, but we've made little progress at all on architectural risk analysis. Even well-known approaches to the architecture analysis problem, such as Microsoft's STRIDE model, prove hard to convert into widespread practices that don't rely on specialists. The thing is, architectural risk analysis often uncovers staggeringly important problems. We discovered that even though important real problems were found using architectural analysis, software groups still found the process painful enough that it didn't become a regular part of their security efforts. Much work remains to be done on this important aspect of software security.
3. Security researchers, consultants and the press care way more about the who/what/how of attacks than practitioners do.
As Exploiting Software teaches, the attackers' perspective plays a central role in software security (hence the black hat in the yin/yang). Unless you understand how potential attacks really work and who really does them, it's impossible to build a secure system. However, though attack information is critical for SSG members, the notion of teaching workaday developers how to "think like a bad guy" is not widespread. The "bug parade" approach to software security gets some airtime, but it is not as important as the "learn how to build code properly" approach.
2. All nine programs we talked to have in-house training curricula, and training is considered the most important software security practice in the two most mature software security initiatives we interviewed.
Even though some of us have been agitating, evangelizing, and teaching about software security for over a decade, developers are still learning the basics of software security. Academia is making very slow progress on folding security into the basic computer science curriculum, so developers coming straight out of school know almost nothing about software security. In-house training is the answer for all of the large software organizations we studied. Among those initiatives that have been at it the longest, training is held up as the most important practice. Everyone involved with making software gets a dose of security training when they join the company, and everyone gets another dose as an annual refresher.
1. Though all of the organizations we talked to do some kind of penetration testing, the role of penetration testing in all nine practices is diminishing over time.
We were not surprised that every one of the programs we talked to practiced penetration testing (most using external firms) at one time or another. After all, there's nothing like a smoking-hot pen testing report to get an organization to admit it has a problem. However, as activities earlier in the SDLC are adopted, penetration testing begins to lose some of its luster. We found evidence that the role of penetration testing diminishes (but does not go to zero) as an organization gets a handle on the software security problem.
0. Fuzz testing is widespread.
What kind of "last bullet" is that on a top ten list?! Let us explain. Way back in 1997 in the book Software Fault Injection, Jeff Voas and McGraw wrote about many kinds of testing that can be imposed on software. We wondered whether security was a special case for software testing. One classic way to probe software "reliability" is to send noise to a program and see what happens, i.e., fuzzing. Somehow the security community has morphed this technique into a widely applied way to look for software security problems. Wow. Who would have guessed that a reliability technique would carry the day in security?
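As a reminder of just how simple the core idea is, here is a minimal fuzzing sketch. The parseConfig() target is a hypothetical stand-in for whatever input-handling code is under test; real fuzzers layer input mutation, instrumentation, and crash triage on top of this basic loop.

    import java.util.Random;

    /** Minimal fuzzing sketch: throw random bytes at a parser, record surprises. */
    public class NoiseFuzzer {
        public static void main(String[] args) {
            Random random = new Random(42); // fixed seed makes failures reproducible
            for (int i = 0; i < 10_000; i++) {
                byte[] noise = new byte[random.nextInt(1024)];
                random.nextBytes(noise); // the "noise" of the original technique
                try {
                    parseConfig(noise); // the code under test
                } catch (RuntimeException e) {
                    // Each unexpected exception is a robustness bug and a
                    // candidate security problem worth triaging.
                    System.out.printf("input %d triggered %s%n", i, e);
                }
            }
        }

        // Hypothetical stand-in for the real input-handling code under test.
        static void parseConfig(byte[] input) {
            if (input.length > 1 && input[0] == '{' && input[1] == '{') {
                throw new IllegalStateException("parser confused by repeated braces");
            }
        }
    }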