1.4 Why Is Testing Critical?
A National Institute of Standards and Technology (NIST) report [NIST 2002] estimates that inadequate testing methods and tools cost the US economy between $22.2 billion and $59.5 billion annually, with roughly half of these costs borne by software developers, in the form of extra testing, and half by software users, in the form of failure avoidance and mitigation efforts. The same study notes that testing often consumes between 25% and 90% of software development budgets.
Testing is currently the most important of the standard verification and validation methods used during system development and maintenance. This is not because testing is necessarily the most effective and efficient way to verify that the system behaves as it should; it is not. (See Table 1.1, below.) Rather, it is because far more effort, funding, and time are expended on testing than on all other types of verification put together.
Table 1.1 Average Percentage of Defects Found as a Function of Static Verification Method and Defect Type
| Verification Method | Requirements Defects | Architecture Defects | Design Defects | Code Defects | Documentation Defects | Total Effectiveness |
|---|---|---|---|---|---|---|
| Requirements Inspection | 87% | 5% | 10% | 5% | 8.5% | 25.6% |
| Architecture Inspection | 10% | 85% | 10% | 2.5% | 12% | 14.9% |
| Design Inspection | 14% | 10% | 87% | 7% | 16% | 37.3% |
| Code Inspection | 15% | 12.5% | 20% | 85% | 10% | 70.1% |
| Static Analysis | 2% | 2% | 7% | 87% | 3% | 33.2% |
| IV&V | 12% | 10% | 23% | 7% | 18% | 16.5% |
| SQA Review | 17% | 10% | 17% | 12% | 12.4% | 28.1% |
| Total | 95.2% | 92.7% | 96.1% | 99.1% | 58.8% | 95.0% |

Source: Jones 2013a
According to Capers Jones, most forms of testing find only about 35% of the code defects [Jones 2013b]. Similarly, on average, individual programmers find less than half the defects in their own software.
For example, Capers Jones analyzed defect-identification effectiveness data from projects completed in early 2013 and produced the results summarized in Table 1.1 [Jones 2013a]. As the table shows, requirements inspections identified 87% of requirements defects and 25.6% of all defects in the software and its documentation. Similarly, static analysis of the code identified 87% of the code defects and 33.2% of all defects. Finally, projects that used all of these static verification methods identified, on average, 95% of all defects.
As Table 1.2 shows, the static verification methods are cumulatively more effective than testing at identifying every type of defect except, surprisingly, documentation defects.
Table 1.2 Cumulative Effectiveness at Finding Defects by Static Verification Methods, Testing, and Both
| Verification Method | Requirements Defects | Architecture Defects | Design Defects | Code Defects | Documentation Defects | Total Effectiveness |
|---|---|---|---|---|---|---|
| Static | 95.2% | 92.7% | 96.1% | 99.1% | 58.8% | 95.0% |
| Testing | 72.3% | 74.0% | 87.6% | 93.4% | 95.5% | 85.7% |
| Total | 98.11% | 98.68% | 99.52% | 99.94% | 98.13% | 99.27% |

Source: Jones 2013a
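
The Total row of Table 1.2 is roughly what one would expect if static verification and testing caught defects independently of one another, so that a defect escapes only when it slips past both. The minimal sketch below (in Python) illustrates that combination rule; the independence assumption is ours for illustration, not something stated in [Jones 2013a].

```python
# Combine the static-verification and testing effectiveness figures from
# Table 1.2, assuming the two catch defects independently (an illustrative
# assumption; Jones 2013a does not say how its Total row was derived).

static_eff = {
    "Requirements": 0.952,
    "Architecture": 0.927,
    "Design": 0.961,
    "Code": 0.991,
    "Documentation": 0.588,
    "Total": 0.950,
}

testing_eff = {
    "Requirements": 0.723,
    "Architecture": 0.740,
    "Design": 0.876,
    "Code": 0.934,
    "Documentation": 0.955,
    "Total": 0.857,
}

for defect_type, p_static in static_eff.items():
    p_testing = testing_eff[defect_type]
    # A defect is missed only if both static verification and testing miss it.
    combined = 1.0 - (1.0 - p_static) * (1.0 - p_testing)
    print(f"{defect_type:<13} {combined:.2%}")
```

Under this assumption the design, code, and overall figures come out within rounding of the table (for example, 99.52% for design and about 99.3% overall), while the requirements and architecture columns differ somewhat, so Jones's totals were presumably computed from the underlying project data rather than from this simple rule.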