Introduction to Rapid Software Testing
- Basic Definitions for Software Testing
- What is Rapid Testing?
- Developing a Rapid Testing Strategy
- The Software Development Process
- A Waterfall Test Process
- Tying Testing and Development Together
- What's Next
- References
Over the past two decades, computer systems and the software that runs them have made their way into all aspects of life. Software is present in our cars, ovens, cell phones, games, and workplaces. It drives billing systems, communications systems, and Internet connections. The proliferation of software systems has reached the point that corporate and national economies are increasingly dependent on the successful development and delivery of software.
As the stakes grow higher in the software marketplace, pressure grows to develop more products at a faster pace. This places increasing demands on software developers and testers not only to produce faster, but also to deliver products of good enough quality that the customer will be satisfied with them.
There are therefore two major demands placed on today's software test engineer:
- We need to test quickly to meet aggressive product delivery schedules.
- We need to test well enough that damaging defects don't escape to our customers.
The challenge is to satisfy each of these needs without sacrificing the other. The purpose of this book is to define an efficient test process and to present practical techniques that satisfy both demands. We begin by examining the fundamentals of software development and software testing.
Basic Definitions for Software Testing
Before launching into a discussion of the software development process, let's define some basic terms and concepts. The logical place to start is with software testing.
Software testing is a process of analyzing or operating software for the purpose of finding bugs.
Simple as this definition is, it contains a few points that are worth elaboration. The word process is used to emphasize that testing involves planned, orderly activities. This point is important if we're concerned with rapid development, as a well thought-out, systematic approach is likely to find bugs faster than poorly planned testing done in a rush.
According to the definition, testing can involve either "analyzing" or "operating" software. Test activities that are associated with analyzing the products of software development are called static testing. Static testing includes code inspections, walkthroughs, and desk checks. In contrast, test activities that involve operating the software are called dynamic testing. Static and dynamic testing complement one another, and each type has a unique approach to detecting bugs.
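To make the contrast concrete, the following sketch shows a dynamic test of a hypothetical discount() function; the function, its specification, and the test are illustrative assumptions, not code from a real product. The same boundary fault could equally be caught statically, for example during a code inspection of the function.

```python
def discount(order_total):
    """Return the discount rate: 10% for orders of $100 or more."""
    if order_total > 100:  # fault: the spec says "$100 or more" (>=)
        return 0.10
    return 0.0

def test_discount_boundary():
    # Dynamic test: operate the code and compare actual to expected results.
    assert discount(99.99) == 0.0
    assert discount(100.00) == 0.10  # fails here, exposing the fault

if __name__ == "__main__":
    test_discount_boundary()  # raises AssertionError: the test found a bug
```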
The final point to consider in the definition of software testing is the meaning of "bugs." In simple terms, a bug is a flaw in the development of the software that causes a discrepancy between the expected result of an operation and the actual result. The bug could be a coding problem, a problem in the requirements or the design, or it could be a configuration or data problem. It could also be something that is at variance with the customer's expectation, which may or may not be in the product specifications. More details about the terminology of bugs are given in Sidebar 1.1.
SIDEBAR 1.1
THE LIFE OF A BUG
The life of a software bug may be described as follows. A bug is born when a person makes an error in some activity that relates to software development, such as defining a requirement, designing a program, or writing code. This error gets embedded in that person's work product (requirement document, design document, or code) as a fault.
As long as this fault (also known as a bug or defect) remains in the work product, it can give rise to other bugs. For example, if a fault in a requirements document goes undetected, it is likely to lead to related bugs in the system design, program design, code, and even in the user documentation.
A bug can go undetected until a failure occurs, which is when a user or tester perceives that the system is not delivering the expected service. In the system test phase, the goal of the test engineer is to induce failures through testing and thereby uncover and document the associated bugs so they can be removed from the system. Ideally the life of a bug ends when it is uncovered in static or dynamic testing and fixed.
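The fault-versus-failure distinction can be illustrated with a small sketch (the average() helper below is hypothetical): the fault exists from the moment the code is written, but no failure occurs until an input exercises it.

```python
def average(values):
    # Fault: no guard for an empty list, so len(values) can be zero.
    return sum(values) / len(values)

average([2, 4, 6])  # returns 4.0; the fault stays latent
average([])         # failure: raises ZeroDivisionError, visible to the user
```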
One practical consequence of the definition of testing is that test engineers and development engineers need to take fundamentally different approaches to their jobs. The goal of the developer is to create bug-free code that satisfies the software design and meets the customer's requirements. The developer is trying to "make" code. The goal of the tester is to analyze or operate the code to expose the bugs that are latent in the code as it is integrated, configured, and run in different environments. The tester is trying to "break" the code. In this context, a good result for the developer is a test that passes, while a successful outcome of that same test for the test engineer is a fail. Ultimately, of course, both the developer and tester want the same thing: a product that works well enough to satisfy their customers.
Software testing serves two basic functions: verification and validation. Schulmeyer and Mackenzie (2000) define verification and validation (V&V) as follows:
Verification is the assurance that the products of a particular phase in the development process are consistent with the requirements of that phase and the preceding phase.
Validation is the assurance that the final product satisfies the system requirements.
The purpose of validation is to ensure that the system has implemented all requirements, so that each function can be traced back to a particular customer requirement. In other words, validation makes sure that the right product is being built.
Verification is focused more on the activities of a particular phase of the development process. For example, one of the purposes of system testing is to give assurance that the system design is consistent with the requirements that were used as an input to the system design phase. Unit and integration testing can be used to verify that the program design is consistent with the system design. In simple terms, verification makes sure that the product is being built right. We'll see examples of both verification and validation activities as we examine each phase of the development process in later chapters.
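As a concrete illustration of the validation side, here is a minimal sketch of a traceability check that asks whether every customer requirement is covered by at least one test. The requirement IDs and test names are hypothetical; a real project would draw them from its requirements management tool.

```python
requirements = {"REQ-1", "REQ-2", "REQ-3"}   # all customer requirements

# Which requirement(s) each system test exercises.
test_coverage = {
    "test_login":    {"REQ-1"},
    "test_checkout": {"REQ-2"},
}

covered = set().union(*test_coverage.values())
untraced = requirements - covered
if untraced:
    print("Requirements with no test coverage:", sorted(untraced))
    # -> Requirements with no test coverage: ['REQ-3']
```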
One additional concept that needs to be defined is quality. Like beauty, quality is subjective and can be difficult to define. We will define software quality in terms of three factors: failures in the field, reliability, and customer satisfaction. A software product is said to have good quality if:
- It has few failures when used by the customer, indicating that few bugs have escaped to the field.
- It is reliable, meaning that it seldom crashes or demonstrates unexpected behavior when used in the customer environment.
- It satisfies a majority of users.
One implication of this definition of quality is that the test group must not only take measures to prevent and detect defects during product development, but must also be concerned with the reliability and usability of the product.
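One simple way to quantify the failure and reliability factors is mean time between failures (MTBF): total operating time divided by the number of failures observed in that time. The sketch below uses hypothetical field data.

```python
operating_hours = 4_380   # e.g., six months of round-the-clock field use
field_failures = 3        # failures reported by customers in that period

mtbf = operating_hours / field_failures
print(f"MTBF: {mtbf:.0f} hours between failures")  # MTBF: 1460 hours between failures
```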