- Introduction
- Defining Software Quality
- Defining Economic Value and Defining the Value of Software Quality
- Summary and Conclusions
Defining Software Quality
Quality has always been a difficult topic to define, and software quality has been exceptionally difficult. The reason is that perceptions of quality vary from person to person and from object to object.
For a specific application, perceptions of quality differ among clients, developers, users, managers, software quality personnel, testers, senior executives, and other stakeholders. They also differ among quality consultants, academics, and litigation attorneys. Many definitions have been suggested over the years, but none have been fully satisfactory or universally adopted by the software industry, including those embodied in international standards.
The reason that quality in general and software quality in particular have been elusive and hard to pin down is that the word "quality" has many nuances and overtones. For example, the attributes of quality include these ten:
- Elegance or beauty in the eye of the beholder
- Fitness of use for various purposes
- Satisfaction of user requirements, both explicit and implicit
- Freedom from defects, perhaps to Six Sigma levels
- High efficiency of defect removal activities
- High reliability when operating
- Ease of learning and ease of use
- Clarity of user guides and HELP materials
- Ease of access to customer support
- Rapid repairs of reported defects
To further complicate the definition, quality often depends on the context in which a software component or feature operates. The quality of a software component is not an intrinsic property—the exact same component can be of excellent quality or highly dangerous depending on the environment in which it operates or the intent of the user.
This contextual nature of software quality is a fundamental challenge and applies to each of the ten attributes just listed. What is elegant in one situation might be downright unworkable in another; what is highly reliable under certain conditions can quickly break down in others.
A closely related complication is what Brooks calls "changeability" of software. "In short, the software product is embedded in a cultural matrix of applications, users, laws, and machine vehicles. These all change continually, and their changes force change upon the software product." (Brooks 1995, p.185)
This brings us to the distinction between testing and software quality. Software quality is often loosely equated with the activities of testing or quality assurance. However, contextuality and Brooks' notion of changeability of software are the reasons why software quality cannot be equated with testing or quality assurance.
Testing can only tackle known unknowns. If you don't know what you're testing for, you are not, by definition, conducting tests. But software, by its very nature, is subject to unknown unknowns. No amount of functional or nonfunctional testing can be designed to detect and correct these problems. For example, the behavior of the application can change when
- One or more application components are switched out for new components
- Components change for technology reasons (such as version upgrades)
- Components change for business reasons (such as for new features or a change in workflow)
- The application's environment changes (the technology stack, for example)
It is impossible to devise tests for these conditions in advance. However, from experience we know that some applications are more robust, reliable, and dependable than others when the environment around them changes. Some applications are much easier to modify or extend in response to pressing business needs. These attributes of an application—robustness, dependability, modifiability, and so on—are reliable indicators of application quality that go beyond the defects identified during testing or the process inefficiencies and compliance lapses identified in quality assurance. Therefore, the quality of an application can and must be defined in a way that accommodates these indicators, which outrun those identified in testing and quality assurance. How the concepts of contextuality and changeability can be accounted for in defining and measuring software quality is addressed at length in Chapter 2.
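To make contextuality concrete, here is a deliberately simple, hypothetical sketch (the function and the data are inventions of this example, not material from the book): a component that is entirely correct in the context its tests assumed becomes silently wrong when the surrounding context changes.

```python
# Hypothetical illustration, not an example from the book: the same component
# is "high quality" in the context its tests assumed and quietly wrong in a
# context nobody anticipated.

def parse_amount(text: str) -> float:
    """Parse a monetary amount, assuming '.' is the decimal separator."""
    return float(text.replace(",", ""))   # "1,234.50" -> 1234.5

# In the context the tests were written for (US-formatted input), the
# component behaves perfectly and every test passes:
assert parse_amount("1,234.50") == 1234.5

# Later the application is used where amounts arrive in European format,
# in which "1.234,50" means 1234.50. The component raises no error; it
# simply returns 1.2345, an unknown unknown that the original test suite
# had no reason to probe for.
print(parse_amount("1.234,50"))   # prints 1.2345, not 1234.5
```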
Seven criteria should be applied to a definition of software quality before it can be used in a business environment for economic analysis:
- The quality definition should be predictable before projects start.
- The quality definition should be measurable during projects and after they are finished.
- The quality definition should be provable if litigation occurs.
- The quality definition should be improvable over time.
- The quality definition should be flexible and encompass all deliverables.
- The quality definition should be extensible and cover all phases and activities.
- The quality definition should be expandable to meet new technologies such as cloud computing.
In addition, the various nuances of quality can be categorized into seven major focus areas or quality types:
- Technical or Structural quality, which includes reliability, defects, and defect repairs
- Process quality, which includes development methods that elevate quality
- Usage quality, which includes ease of use and ease of learning
- Service quality, which includes access to support personnel
- Aesthetic quality, which includes user satisfaction and subjective topics
- Standards quality, which includes factors from various international standards
- Legal quality, which includes claims made in lawsuits for poor quality
The reason that a taxonomy of quality types is needed is that the full set of possible quality attributes encompasses more than 100 different topics. Table 1.5 lists a total of 121 software quality attributes and ranks them in order of importance.
Table 1.5. Seven Types of Software Quality Factors
| # | Quality Factor | Value |
| --- | --- | --- |
| | Technical Quality Factors | |
| 1 | Few requirements defects | 10.00 |
| 2 | No toxic requirements | 10.00 |
| 3 | Zero error-prone modules | 10.00 |
| 4 | Low defect potentials | 10.00 |
| 5 | Use of certified reusable code | 10.00 |
| 6 | Low rates of severity 1 and 2 defects | 10.00 |
| 7 | High reliability | 9.90 |
| 8 | Strong security features | 9.90 |
| 9 | Few design defects | 9.50 |
| 10 | Few coding defects | 9.50 |
| 11 | Low bad-fix injection rate | 9.50 |
| 12 | Low rates of invalid defect reports | 9.50 |
| 13 | Low rates of legacy defects | 9.50 |
| 14 | Easy conversion to SaaS format | 9.00 |
| 15 | Easy conversion to Cloud format | 9.00 |
| 16 | Fault tolerance | 8.00 |
| 17 | Few defects in test cases | 8.00 |
| 18 | Low cyclomatic complexity | 7.50 |
| 19 | Low entropy | 7.00 |
| | Process Quality Factors | |
| 20 | Customer support of high quality | 10.00 |
| 21 | High defect detection efficiency (DDE) | 10.00 |
| 22 | High defect removal efficiency (DRE) | 10.00 |
| 23 | Accurate defect measurements | 10.00 |
| 24 | Use of formal defect tracking | 10.00 |
| 25 | Accurate defect estimates | 10.00 |
| 26 | Low total cost of ownership (TCO) | 10.00 |
| 27 | Executive support of quality | 10.00 |
| 28 | Team support of quality | 10.00 |
| 29 | Management support of quality | 10.00 |
| 30 | Accurate quality benchmarks | 10.00 |
| 31 | Effective quality metrics | 10.00 |
| 32 | Minimizing hazards of poor quality | 10.00 |
| 33 | Use of formal quality improvement plan | 10.00 |
| 34 | COQ: appraisal | 10.00 |
| 35 | COQ: prevention | 10.00 |
| 36 | COQ: internal failure | 10.00 |
| 37 | COQ: external failure | 10.00 |
| 38 | Cost of learning (COL) | 10.00 |
| 39 | Quality improvement baselines | 9.90 |
| 40 | Function point quality measures | 9.80 |
| 41 | Quality and schedules | 9.00 |
| 42 | Quality and costs | 9.00 |
| 43 | Use of formal inspections | 9.00 |
| 44 | Use of automated static analysis | 9.00 |
| 45 | Use of formal test case design | 9.00 |
| 46 | Use of reusable test data | 9.00 |
| 47 | Use of formal SQA team | 9.00 |
| 48 | Use of trained test personnel | 9.00 |
| 49 | Use of formal test library controls | 9.00 |
| 50 | Use of formal change management | 9.00 |
| 51 | Use of Six Sigma for software | 9.00 |
| 52 | Use of Team Software Process (TSP) | 9.00 |
| 53 | Use of Agile methods | 9.00 |
| 54 | Use of Rational methods (RUP) | 9.00 |
| 55 | Use of hybrid methods | 9.00 |
| 56 | Use of Quality Function Deployment (QFD) | 9.00 |
| 57 | Use of trained inspection teams | 9.00 |
| 58 | Use of CMMI levels >= 3 | 9.00 |
| 59 | Use of legacy renovation tools | 9.00 |
| 60 | Low rates of false-positive defects | 9.00 |
| 61 | Low rates of duplicate defect reports | 8.75 |
| 62 | Use of refactoring and restructuring | 8.50 |
| 63 | Six-Sigma quality measures | 8.50 |
| 64 | High test coverage | 8.00 |
| 65 | Low Cost of Quality (COQ) | 8.00 |
| 66 | Use of automated test tools | 8.00 |
| 67 | Use of story point quality metrics | 2.00 |
| 68 | Use of Use Case point quality metrics | 2.00 |
| 69 | Use of waterfall methods | 1.00 |
| 70 | Lines of code quality measures | -5.00 |
| 71 | Use of CMMI levels <= 2 | -5.00 |
| 72 | Cost-per-defect quality measures | -7.00 |
| 73 | Executive indifference to high quality | -10.00 |
| 74 | Management indifference to high quality | -10.00 |
| 75 | Team indifference to high quality | -10.00 |
| 76 | Customer indifference to high quality | -10.00 |
| | Usage Quality Factors | |
| 77 | Ease of use | 10.00 |
| 78 | Useful features | 10.00 |
| 79 | Ease of learning | 10.00 |
| 80 | Good tutorial manuals | 10.00 |
| 81 | Good training courses | 10.00 |
| 82 | Good on-line HELP | 10.00 |
| 83 | Useful HELP information | 9.75 |
| 84 | Defect repair costs | 9.25 |
| 85 | Low cost of learning (COL) | 9.25 |
| 86 | User error handling | 9.00 |
| 87 | Speed of loading | 9.00 |
| 88 | Speed of usage | 9.00 |
| 89 | Good nationalization for global products | 9.00 |
| 90 | Documentation defects | 9.00 |
| 91 | Easy export of data to other software | 9.00 |
| 92 | Easy import of data from other software | 9.00 |
| 93 | Useful manuals and training | 8.50 |
| 94 | Good assistance from live experts | |
| | Service Quality Factors | |
| 95 | Good customer service | 9.50 |
| 96 | Rapid defect repair speed | 9.25 |
| 97 | Good technical support | 9.00 |
| 98 | Good HELP desk support | 9.00 |
| 99 | Use of formal incident management | 8.00 |
| 100 | Use of ITIL policies | 8.00 |
| | Aesthetic Quality Factors | |
| 101 | High user satisfaction | 10.00 |
| 102 | Superior to competitive applications | 10.00 |
| 103 | Superior to legacy applications | 10.00 |
| 104 | Quick start-up and shut-down times | 9.00 |
| 105 | No feature bloat | 7.00 |
| | Standards Quality Factors | |
| 106 | ISO/IEEE standards compliance | 10.00 |
| 107 | Certification of reusable materials | 10.00 |
| 108 | Corporation standards compliance | 10.00 |
| 109 | Certification of test personnel | 8.00 |
| 110 | Certification of SQA personnel | 8.00 |
| 111 | Portability | 7.00 |
| 112 | Maintainability | 6.00 |
| 113 | Scalability | 5.00 |
| | Legal Quality Factors | |
| 114 | Good warranty | 10.00 |
| 115 | Partial warranty: replacement only | 4.00 |
| 116 | Litigation for poor quality—consequential | -10.00 |
| 117 | Litigation for poor quality—contractual | -10.00 |
| 118 | Litigation for poor quality—financial loss | -10.00 |
| 119 | Litigation for poor quality—safety | -10.00 |
| 120 | Litigation for poor quality—medical | -10.00 |
| 121 | No warranty expressed or implied | -10.00 |
The ranking scheme ranges from +10 for topics that have proven to be extremely valuable to a low of -10 for topics that have demonstrated extreme harm to software projects.
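The chapter does not prescribe a formula for combining these scores, but a small sketch can show how the scale behaves in practice. The factor values below are taken from Table 1.5; the selection of factors for the hypothetical project and the use of a plain average are assumptions of this illustration only.

```python
# Illustrative only: the choice of factors and the plain average are
# assumptions of this sketch, not a method defined in the chapter.
# Factor values come from Table 1.5.

project_factors = {
    "Low defect potentials": 10.00,
    "High defect removal efficiency (DRE)": 10.00,
    "Use of automated static analysis": 9.00,
    "Low cyclomatic complexity": 7.50,
    "Lines of code quality measures": -5.00,  # a harmful practice drags the average down
}

average_score = sum(project_factors.values()) / len(project_factors)
print(f"Average factor score: {average_score:.2f}")   # 6.30
```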
A total of 121 quality factors is far too cumbersome for day-to-day quality analysis. Table 1.6 lists the 12 factors that have proven most significant in achieving quality, based on measurements of several thousand applications.
Table 1.6. The 12 Most Effective Software Quality Factors
1. Low defect potentials
2. Effective defect prevention methods
3. High defect detection efficiency (DDE)
4. High defect removal efficiency (DRE)
5. Use of pretest inspections
6. Use of pretest static analysis
7. Use of formal test case design
8. Good ease of learning
9. Good ease of use
10. Good technical support
11. High user satisfaction
12. Good warranty
As of 2011, 11 of these 12 quality factors are technically achievable. The exception, good warranty (factor 114 in Table 1.5), is not yet practiced by the software industry. This situation needs to change: the software industry needs to stand behind its applications with effective warranty coverage.
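Two of the factors in Table 1.6, defect detection efficiency (DDE) and defect removal efficiency (DRE), are simple ratios and can be computed directly. In the sketch below the defect counts are invented, and the 90-day post-release measurement window is a commonly used convention rather than a rule stated in this chapter; DDE is computed the same way but counts defects detected (whether or not yet repaired) rather than removed.

```python
# Hypothetical counts; the 90-day post-release window is a common
# measurement convention, not a rule fixed by this chapter.

def defect_removal_efficiency(removed_before_release: int,
                              found_after_release: int) -> float:
    """DRE as a percentage: defects removed before release / all known defects."""
    total = removed_before_release + found_after_release
    return 100.0 * removed_before_release / total if total else 100.0

# Example: 950 defects removed during development and testing, and 50 more
# reported by users during the first 90 days of production use.
print(defect_removal_efficiency(950, 50))   # 95.0 percent
```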
Though all 121 quality factors are important, dealing with the economic value of quality obviously requires factors that can be expressed quantitatively. It is equally obvious that the factors must influence these seven topics:
- The costs of development, maintenance, enhancement, and support.
- The schedules for development, maintenance, enhancement, and support.
- The direct revenue that the application will accrue if it is marketed.
- The indirect revenue that might accrue from services or related products.
- The learning curve for users of the application.
- The operational cost savings that the application will provide to users.
- The new kinds of business opportunities that the application will provide to users.
This book concentrates on software quality factors that have a tangible impact on costs and revenue. And to deal with the economic value of these quality factors, the book addresses three critical topics:
- What are the results of "average quality" in terms of costs, schedules, revenue, and other financial topics? Once defined, average quality will provide the baseline against which economic value can be measured.
- What are the results of "high quality" in terms of cost reduction, schedule reduction, higher revenues, new market opportunities, and other financial topics?
- What are the consequences of "low quality" in terms of cost increases, schedule increases, reduced revenue, loss of customers, and other financial topics?
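As a minimal sketch of the comparison these three questions call for (all figures below are invented for illustration and are not the book's data), each scenario can be valued as a delta against the average-quality baseline:

```python
# Invented figures for illustration only; the structure, not the numbers,
# is the point: value is measured as a delta against the average baseline.

baseline = {"dev_cost": 1_000_000, "support_cost": 400_000, "revenue": 2_000_000}
high_q   = {"dev_cost":   950_000, "support_cost": 150_000, "revenue": 2_300_000}
low_q    = {"dev_cost": 1_200_000, "support_cost": 900_000, "revenue": 1_500_000}

def net(scenario: dict) -> int:
    """Net economic result of one scenario."""
    return scenario["revenue"] - scenario["dev_cost"] - scenario["support_cost"]

for name, scenario in (("High quality", high_q), ("Low quality", low_q)):
    delta = net(scenario) - net(baseline)
    print(f"{name}: value relative to average baseline = {delta:+,}")
# High quality: value relative to average baseline = +600,000
# Low quality: value relative to average baseline = -1,200,000
```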
Usually, more insight comes from polar opposites than from average values. Therefore, the book concentrates on the economic value of high quality and the economic losses from low quality.
Average quality is an important baseline, but it has never actually been very good for software; it is therefore important to recognize that only better-than-average software quality carries tangible economic value.
Conversely, low software quality brings with it some serious economic consequences, including the threat of class-action litigation, the threat of breach of contract litigation, and, for embedded software in medical devices, even the potential threat of criminal charges.