- Testing Axioms
- Software Testing Terms and Definitions
- Summary
- Quiz
Software Testing Terms and Definitions
This chapter wraps up the first section of this book with a list of software testing terms and their definitions. These terms describe fundamental concepts regarding the software development process and software testing. Because they’re often confused or used interchangeably, they’re defined here as pairs to help you understand their true meanings and the differences between them. Be aware that there is little agreement in the software industry over the definition of many seemingly common terms. As a tester, you should frequently clarify the meaning of the terms your team is using. It’s often best to agree on a definition rather than fight for a "correct" one.
Precision and Accuracy
As a software tester, it’s important to know the difference between precision and accuracy. Suppose that you’re testing a calculator. Should you test that the answers it returns are precise or accurate? Both? If the project schedule forced you to make a risk-based decision to focus on only one of these, which one would you choose?
What if the software you’re testing is a simulation game such as baseball or a flight simulator? Should you primarily test its precision or its accuracy?
Figure 3.4 helps to describe these two terms graphically. The goal of this dart game is to hit the bull’s-eye in the center of the board. The darts on the board in the upper left are neither precise nor accurate. They aren’t closely grouped, nor are they anywhere near the center of the target.
Figure 3.4 Darts on a dartboard demonstrate the difference between precision and accuracy.
The board on the upper right shows darts that are precise but not accurate. They are closely grouped, so the thrower has precision, but he’s not very accurate because the darts landed far from the bull’s-eye.
The board on the lower left is an example of accuracy but poor precision. The darts are centered on the bull’s-eye, so on average the thrower is hitting what he’s aiming at, but because they aren’t closely grouped, the precision is off.
The board in the lower right is a perfect match of precision and accuracy. The darts are closely grouped and on target.
Whether the software you test needs to be precise or accurate depends largely on what the product is and ultimately what the development team is aiming at (excuse the pun). A software calculator likely demands that both are achieved because a right answer is a right answer. But the team may decide that calculations need only be accurate and precise to the fifth decimal place. After that, the precision can vary. As long as the testers are aware of that specification, they can tailor their testing to confirm it.
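A test for a spec like that can encode the tolerance directly. The sketch below is a hypothetical example (the calculator function, the spec, and the tolerance are all assumptions, not from any real product): it accepts any answer that matches the expected value to five decimal places and ignores digits beyond that.

```python
import math

def calc_sqrt(x):
    # Stand-in for the calculator under test; a real test
    # would call the product's own interface instead.
    return math.sqrt(x)

def test_sqrt_accurate_to_five_places():
    # Per the (hypothetical) spec, the answer must be correct
    # to the fifth decimal place; beyond that, precision may
    # vary, so the test must not fail on later digits.
    expected = 1.41421          # sqrt(2) rounded to 5 decimals
    tolerance = 5e-6            # half a unit in the 5th place
    actual = calc_sqrt(2)
    assert abs(actual - expected) < tolerance

test_sqrt_accurate_to_five_places()
```

Note that the test deliberately uses a tolerance rather than exact equality; demanding more precision than the spec promises would produce false failures.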
Verification and Validation
Verification and validation are often used interchangeably but have different definitions. These differences are important to software testing.
Verification is the process of confirming that something (in this case, software) meets its specification. Validation is the process of confirming that it meets the user’s requirements. These may sound very similar, but an explanation of the Hubble Space Telescope’s problems will help show the difference.
In April 1990, the Hubble Space Telescope was launched into orbit around the Earth. As a reflective telescope, Hubble uses a large mirror as its primary means of magnifying the objects it’s aimed at. The construction of the mirror was a huge undertaking requiring extreme precision and accuracy. Testing the mirror was difficult because the telescope was designed for use in space and couldn’t be positioned, or even viewed through, while it was still on Earth. For this reason, the only way to test it was to carefully measure all its attributes and compare the measurements with what was specified. This testing was performed, and Hubble was declared fit for launch.
Unfortunately, soon after it was put into operation, the images it returned were found to be out of focus. An investigation discovered that the mirror was improperly manufactured. The mirror was ground according to the specification, but the specification was wrong. The mirror was extremely precise, but it wasn’t accurate. Testing had confirmed that the mirror met the spec—verification—but it didn’t confirm that it met the original requirement—validation.
In 1993, a space shuttle mission repaired the Hubble telescope by installing a "corrective lens" to refocus the image generated by the improperly manufactured mirror.
Although this is not a software example, verification and validation apply equally well to software testing. Never assume that the specification is correct. If you verify the spec and validate the final product, you help avoid problems such as the one that hit the Hubble telescope.
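The Hubble situation can be restated as two different kinds of test. The following sketch is entirely hypothetical (the converter, the spec, and its error are invented for illustration): the specification for a temperature converter mistakenly says to add 34 instead of 32, and the implementation follows the spec exactly. Verification passes; validation fails.

```python
# Hypothetical spec: F = C * 1.8 + 34 (the correct formula adds 32).
SPEC_OFFSET = 34        # what the (wrong) specification says
REQUIRED_F_AT_BOILING = 212  # what users actually need at 100 C

def c_to_f(celsius):
    # Implemented exactly as specified, like Hubble's mirror.
    return celsius * 1.8 + SPEC_OFFSET

# Verification: does the software meet its specification? Yes.
assert c_to_f(100) == 100 * 1.8 + SPEC_OFFSET

# Validation: does it meet the user's requirement that water
# boils at 212 degrees F? No. The product faithfully matches a
# spec that was itself wrong.
assert c_to_f(100) != REQUIRED_F_AT_BOILING
```

The point of the sketch is that a full suite of verification tests can pass while the product still fails its users, which is why both activities are needed.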
Quality and Reliability
Merriam-Webster’s Collegiate Dictionary defines quality as "a degree of excellence" or "superiority in kind." If a software product is of high quality, it will meet the customer’s needs. The customer will feel that the product is excellent and superior to his other choices.
Software testers often fall into the trap of believing that quality and reliability are the same thing. They feel that if they can test a program until it’s stable, dependable, and reliable, they are assuring a high-quality product. Unfortunately, that isn’t necessarily true. Reliability is just one aspect of quality.
A software user’s idea of quality may include the breadth of features, the ability of the product to run on his old PC, the software company’s phone support availability, and, often, the price of the product. Reliability, or how often the product crashes or trashes his data, may be important, but not always.
To ensure that a program is of high quality and is reliable, a software tester must both verify and validate throughout the product development process.
Testing and Quality Assurance (QA)
The last pair of definitions is testing and quality assurance (sometimes shortened to QA). These two terms are the ones most often used to describe either the group or the process that’s verifying and validating the software. In Chapter 21, "Software Quality Assurance," you’ll learn more about software quality assurance, but for now, consider these definitions:
- The goal of a software tester is to find bugs, find them as early as possible, and make sure they get fixed.
- A software quality assurance person’s main responsibility is to create and enforce standards and methods to improve the development process and to prevent bugs from ever occurring.
Of course, there is overlap. Some testers will do a few QA tasks, and some QA staff will perform a bit of testing. The two jobs and their tasks are intertwined. What’s important is that you know what your primary job responsibilities are and communicate that information to the rest of the development team. Confusion among team members about who’s testing and who’s not has caused lots of process pain in many projects.