A Waterfall Test Process
In the traditional waterfall model shown in Figure 1.2, the role of the test organization is not made explicit until the system testing and acceptance testing phases. Most of the activities of the earlier phases, such as design, coding, and unit testing, are associated primarily with the software development team. For this reason it is useful to derive a corresponding life cycle model for the test process, which is shown in Figure 1.3 and summarized in Table 1.1.
Table 1.1 Inputs and Outputs for the Waterfall Test Process
| Activity | Inputs | Outputs |
| --- | --- | --- |
| Requirements analysis | Requirements definition, requirements specification | Requirements traceability matrix |
| Test planning | Requirements specification, requirements trace matrix | Test plan: test strategy, test system, effort estimate and schedule |
| Test design | Requirements specification, requirements trace matrix, test plan | Test designs: test objectives, test input specification, test configurations |
| Test implementation | Software functional specification, requirements trace matrix, test plan, test designs | Test cases: test procedures and automated tests |
| Test debugging | "Early look" build of code, test cases, working test system | Final test cases |
| System testing | System test plan, requirements trace matrix, "test-ready" code build, final test cases, working test system | Test results: bug reports, test status reports, test results summary report |
| Acceptance testing | Acceptance test plan, requirements trace matrix, beta code build, acceptance test cases, working test system | Test results |
| Operations and maintenance | Repaired code, test cases to verify bugs, regression test cases, working test system | Verified bug fixes |
Requirements Analysis
When analyzing software requirements, the goals of the test team and the development team are somewhat different. Both teams need a clear, unambiguous requirements specification as input to their jobs. The development team wants a complete set of requirements that can be used to generate a system functional specification, and that will allow them to design and code the software. The test team, on the other hand, needs a set of requirements that will allow them to write a test plan, develop test cases, and run their system and acceptance tests.
A very useful output of the requirements analysis phase for both development and test teams is a requirements traceability matrix. A requirements traceability matrix is a document that maps each requirement to other work products in the development process such as design components, software modules, test cases, and test results. It can be implemented in a spreadsheet, word processor table, database, or Web page. The requirements trace matrix and its role in "gluing together" the various activities of the development and test processes will be discussed in more detail in Chapter 2.
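A traceability matrix need not be elaborate. The following is a minimal sketch of one possible representation; the requirement IDs, work product names, and field names are invented for illustration and are not taken from any particular project.

```python
# Minimal sketch of a requirements traceability matrix as a dictionary.
# Requirement IDs, design components, modules, and test case names are
# hypothetical placeholders.
trace_matrix = {
    "REQ-001": {
        "design": ["login-screen-design"],
        "modules": ["auth.py"],
        "test_cases": ["TC-LOGIN-01", "TC-LOGIN-02"],
        "test_results": [],          # filled in as tests are run
    },
    "REQ-002": {
        "design": ["report-generator-design"],
        "modules": ["report.py"],
        "test_cases": [],            # not yet covered by any test
        "test_results": [],
    },
}

def work_products_for(requirement_id):
    """Return every work product traced to a given requirement."""
    return trace_matrix.get(requirement_id, {})

if __name__ == "__main__":
    print(work_products_for("REQ-001"))
```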
Test Planning
By test planning we mean determining the scope, approach, resources, and schedule of the intended testing activities. Efficient testing requires a substantial investment in planning, and a willingness to revise the plan dynamically to account for changes in requirements, designs, or code as bugs are uncovered. It is important that all requirements be tested or, if the requirements have been prioritized, that the highest priority requirements are tested. The requirements traceability matrix is a useful tool in the test planning phase because it can be used to estimate the scope of testing needed to cover the essential requirements.
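Building on the hypothetical matrix sketched earlier, one way to gauge test scope during planning is simply to report which requirements still have no test cases mapped to them. This is only a sketch of the idea, not a prescribed planning method.

```python
# Sketch: use the traceability matrix to estimate test-planning scope by
# listing requirements with no test cases traced to them yet.
def uncovered_requirements(matrix):
    return [req for req, links in matrix.items() if not links["test_cases"]]

def coverage_summary(matrix):
    total = len(matrix)
    covered = total - len(uncovered_requirements(matrix))
    return f"{covered}/{total} requirements have at least one planned test"

# With the example matrix above, REQ-002 would be reported as uncovered.
```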
Ideally, test planning should take into account static as well as dynamic testing, but since the waterfall test process described in Figure 1.3 and Table 1.1 is focused on dynamic testing, we'll exclude static testing for now. The activities of the test planning phase should prepare for the system test and acceptance test phases that come near the end of the waterfall, and should include:
Definition of what will be tested and the approach that will be used.
Mapping of tests to the requirements.
Definition of the entry and exit criteria for each phase of testing.
Assessment, by skill set and availability, of the people needed for the test effort.
Estimation of the time needed for the test effort.
Schedule of major milestones.
Definition of the test system (hardware and software) needed for testing.
Definition of the work products for each phase of testing.
Assessment of test-related risks and a plan for their mitigation.
Figure 1.3 Waterfall test process.
The work products or outputs that result from these activities can be combined in a test plan, which might consist of one or more documents. Test planning will be discussed in more detail in Chapter 3 and an example test plan will be provided in Part III.
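As a rough illustration only, the planning outputs listed above could be collected in a lightweight structure like the one below. Real test plans are usually full documents; the section names and values here are assumptions made for the sketch.

```python
# Hypothetical skeleton of a test plan captured as a dictionary; each key
# corresponds to one of the planning activities listed earlier.
test_plan = {
    "scope_and_approach": "Functional and performance testing of release 1.0",
    "requirements_to_tests": {"REQ-001": ["TC-LOGIN-01", "TC-LOGIN-02"]},
    "entry_criteria": {"system_test": "All unit tests pass on the build"},
    "exit_criteria": {"system_test": "No open severity-1 bugs"},
    "staffing": {"test_engineers": 3, "automation_engineers": 1},
    "effort_estimate_weeks": 6,
    "milestones": ["test design complete", "system test start", "beta start"],
    "test_system": ["lab server", "two client workstations"],
    "work_products": ["test designs", "test cases", "test reports"],
    "risks": [{"risk": "late code delivery", "mitigation": "prioritize tests"}],
}
```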
Test Design, Implementation, and Debugging
Dynamic testing relies on running a defined set of operations on a software build and comparing the actual results to the expected results. If the expected results are obtained, the test counts as a pass; if anomalous behavior is observed, the test counts as a fail, but it may have succeeded in finding a bug. The defined set of operations that are run constitutes a test case, and test cases need to be designed, written, and debugged before they can be used.
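The pass/fail decision described here is simply a comparison of actual results against expected results. Below is a minimal sketch in Python; the function under test and its expected values are invented for illustration.

```python
# Sketch of dynamic testing: run a defined operation and compare the
# actual result to the expected result. parse_price() is a hypothetical
# function under test.
def parse_price(text):
    return round(float(text.strip("$")), 2)

def run_test(test_id, operation, argument, expected):
    actual = operation(argument)
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"{test_id}: expected={expected!r} actual={actual!r} -> {verdict}")
    return verdict == "PASS"

if __name__ == "__main__":
    run_test("TC-PRICE-01", parse_price, "$19.99", 19.99)   # expect PASS
    run_test("TC-PRICE-02", parse_price, "$0.10", 0.10)     # expect PASS
```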
A test design consists of two components: test architecture and detailed test designs. The test architecture organizes the tests into groups such as functional tests, performance tests, security tests, and so on. It also describes the structure and naming conventions for a test repository. The detailed test designs describe the objective of each test, the equipment and data needed to conduct the test, and the expected result for each test, and they trace each test back to the requirement being validated. There should be at least a one-to-one relationship between requirements and test designs.
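One way to picture a detailed test design record, under the assumption that each design captures its objective, test configuration, expected result, and the requirement it traces to (the field names and values below are illustrative, not a standard format):

```python
from dataclasses import dataclass

# Sketch of a detailed test design record; the "group" field reflects the
# test architecture grouping (functional, performance, security, ...).
@dataclass
class TestDesign:
    test_id: str
    group: str
    objective: str
    equipment_and_data: list
    expected_result: str
    requirement_id: str     # trace back to the requirement being validated

designs = [
    TestDesign(
        test_id="TC-LOGIN-01",
        group="functional",
        objective="Verify login with a valid user name and password",
        equipment_and_data=["test server", "valid account fixture"],
        expected_result="User reaches the home page",
        requirement_id="REQ-001",
    ),
]
```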
Detailed test procedures can be developed from the test designs. The level of detail needed for a written test procedure depends on the skill and knowledge of the people who run the tests. There is a tradeoff between the time that it takes to write a detailed, step-by-step procedure, and the time that it takes for a person to learn to properly run the test. Even if the test is to be automated, it usually pays to spend time up front writing a detailed test procedure so that the automation engineer has an unambiguous statement of the automation task.
Once a test procedure is written, it needs to be tested against a build of the product software. Since this test is likely to be run against "buggy" code, some care will be needed when analyzing test failures to determine if the problem lies with the code or with the test.
System Test
A set of finished, debugged tests can be used in the next phase of the waterfall test process, system test. The purpose of system testing is to ensure that the software does what the customer expects it to do. There are two main types of system tests: function tests and performance tests.
Functional testing requires no knowledge of the internal workings of the software, but it does require knowledge of the system's functional requirements. It consists of a set of tests that determines if the system does what it is supposed to do from the user's perspective.
Once the basic functionality of a system is ensured, testing can turn to how well the system performs its functions. Performance testing consists of such things as stress tests, volume tests, timing tests, and recovery tests. Reliability, availability, and maintenance testing may also be included in performance testing.
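As an illustration of one kind of performance test, the timing check below measures a hypothetical operation against an assumed response-time budget; both the operation and the 0.5-second threshold are invented for the sketch.

```python
import time

# Sketch of a simple timing test: the operation must complete within an
# assumed budget. search_catalog() stands in for the real system call.
def search_catalog(query):
    time.sleep(0.1)          # placeholder for real work
    return ["item-1", "item-2"]

def timing_test(budget_seconds=0.5):
    start = time.perf_counter()
    results = search_catalog("widgets")
    elapsed = time.perf_counter() - start
    passed = elapsed <= budget_seconds and len(results) > 0
    print(f"elapsed={elapsed:.3f}s budget={budget_seconds}s -> "
          f"{'PASS' if passed else 'FAIL'}")
    return passed

if __name__ == "__main__":
    timing_test()
```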
In addition to function and performance tests, there are a variety of additional tests that may need to be performed during the system test phase; these include security tests, installability tests, compatibility tests, usability tests, and upgrade tests. More details on system testing will be given in Chapter 5 and in Part II.
Acceptance Test
When system testing is completed, the product can be sent to users for acceptance testing. If the users are internal to the company, the testing is usually called alpha testing. If the users are customers who are willing to work with the product before it is finished, the testing is beta testing. Both alpha and beta tests are forms of pilot testing, in which the system is installed on an experimental basis for the purpose of finding bugs.
Another form of acceptance test is a benchmark test in which the customer runs a predefined set of test cases that represent typical conditions under which the system is expected to perform when placed into service. The benchmark test may consist of test cases that are written and debugged by your test organization, but which the customer has reviewed and approved. When pilot and benchmark testing is complete, the customer should tell you which requirements are not satisfied or need to be changed in order to proceed to final testing.
The final type of acceptance test is the installation test, which involves installing a completed version of the product at user sites for the purpose of obtaining customer agreement that the product meets all requirements and is ready for delivery.
Maintenance
Maintenance of a product is often a challenging task for both the development team and the test team. Maintenance for the developer consists of fixing bugs that are found during customer operation and adding enhancements to product functionality to meet evolving customer requirements. For the test organization, maintenance means verifying bug fixes, testing enhanced functionality, and running regression tests on new releases of the product to ensure that previously working functionality has not been broken by the new changes.
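A sketch of what bug verification plus regression might look like in practice, assuming a test written specifically to confirm the fix and a small suite of previously passing tests; the bug number, function, and test names are all hypothetical.

```python
# Sketch of maintenance testing: first verify the specific bug fix, then
# rerun the regression suite to confirm nothing previously working broke.
def fixed_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

def test_bug_1234_fix():
    # Hypothetical bug 1234: a 0% discount used to return 0 instead of price.
    return fixed_discount(100.0, 0) == 100.0

regression_suite = {
    "TC-DISC-01": lambda: fixed_discount(100.0, 10) == 90.0,
    "TC-DISC-02": lambda: fixed_discount(20.0, 25) == 15.0,
}

if __name__ == "__main__":
    print("bug 1234 verified:", test_bug_1234_fix())
    for name, test in regression_suite.items():
        print(name, "PASS" if test() else "FAIL")
```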
Even though the acceptance test and maintenance activities are important, they will not be discussed in detail in this book. The basic principles of regression testing and bug verification apply well to these phases of the life cycle. For a detailed treatment of software maintenance from a testing perspective, Lewis (2000) offers a great deal of information.