Test engineer self-evaluation
Test engineers should assume responsibility for evaluating their own effectiveness. Assuming that roles, responsibilities, and task assignments are understood, the following list of issues can be used as a starting point for developing a test-engineer self-evaluation process:
- Consider the types of defects being discovered. Are they important, or are they mostly cosmetic, low-priority defects? If the tester consistently uncovers only low-priority defects (such as, during functional testing, non-working hot keys or typographical errors in the GUI), the effectiveness of the test procedures should be reassessed. Keep in mind that during other testing phases (for the examples mentioned here, during usability testing), the priority of certain defects will change.
- Are the test procedures detailed enough, covering the necessary depth and the combinations and variations of data and functional paths needed to catch the higher-priority defects? Do tests include invalid data as well as valid data?
- Was feedback regarding the test procedures received and incorporated from the requirements and development staff, and from other testers? If not, the test engineer should ask for test-procedure reviews, inspections, and walk-throughs involving those teams.
- Does the test engineer understand the range of testing techniques available, such as boundary-value analysis, equivalence partitioning, and orthogonal arrays, well enough to select the most effective test procedures? (A small boundary-value and equivalence-partitioning sketch follows this list.)
- Does the engineer understand the intricacies of the application's functionality and domain well? If not, the tester should ask for an overview or additional training. A technical tester may ask for help from a subject-matter expert (SME).
- Does the initial testing focus on low-priority requirements? Initial testing should focus on the high-priority, highest-risk requirements.
- Does the initial testing focus on regression testing of existing functionality that was working previously and has rarely broken in the past? Initial testing should focus on code changes, defect fixes, and new functionality; regression testing should come later. Ideally, the regression-testing effort is automated, so that test engineers can focus on the newer areas (see the regression-suite sketch following this list).
- Are any areas under test exhibiting suspiciously low defect counts? If so, these areas should be re-evaluated to determine:
  - Whether the test coverage is sufficiently robust.
  - Whether the types of tests being performed are the most effective. Are important steps missing?
  - Whether the application area under test has such low complexity that it may indeed be error-free.
  - Whether the functionality was implemented in such a manner that it is likely no major defects remain (for example, if it was coded by the most senior developers and has already been well unit and integration tested).
- Consider the defect workflow:
  - Each defect should be documented in a timely manner (i.e., as soon as it is discovered and verified).
  - Defect-documentation standards must be followed. If there aren't any defect-documentation standards, they should be requested from the engineer's manager. The standards should list all of the information that must be included in a defect report to enable the developer to reproduce the defect (an illustrative set of fields appears after this list).
  - When a new build is received, initial testing should focus on retesting the defects reported as fixed. It is important that supposedly fixed defects be retested as soon as possible, so the developers know whether their repair efforts were successful.
  - Comments received from the development team regarding the quality of defect reports should be evaluated continually. If the reports are often said to lack required information, such as a full description of the steps required to reproduce an error, the testers should work on providing better defect documentation.
  - Testers should be eager to track defects to closure.
- Examine the comments added to defect reports to determine how developers or other testers receive them. If defects are often marked "works as expected" or "cannot be reproduced," this could signal one of several problems:
  - The tester's understanding of the application may be inadequate. In this case, more training is required; help may be requested from domain SMEs.
  - The requirements may be ambiguous. If so, they must be clarified. (Most commonly, this is discovered during requirements or test-procedure walk-throughs and inspections.)
  - The engineer's documentation skills may not be as effective as necessary. Inadequate documentation can lead to misunderstanding of the identified defect; the description may need additional steps to enable developers to reproduce the error.
  - The developer may be misinterpreting the requirement.
  - The developer may lack the patience to follow the detailed documented steps needed to reproduce the defect.
- The tester should monitor whether defects are being discovered in that person's test area after the application has gone to production. Any such defects should be evaluated to determine why they were missed:
  - Did the tester fail to execute a specific test procedure that would have caught the defect? If so, why was the procedure overlooked? Are the regression tests automated?
  - Was there no test procedure that would have caught the defect? If so, why not? Was this area considered low risk? The test-procedure creation strategy should be re-evaluated, and a test procedure should be added to the regression test suite to catch errors like the one in question. The tester should discuss with peers or a manager how to create more effective test procedures, including test design, strategy, and technique.
  - Was there not enough time to execute an existing test procedure? If so, management should be informed before the application goes live or is shipped, not after the fact. This sort of situation should also be discussed in a post-test/pre-installation meeting and documented in the test report.
  - Do other testers, during the course of their work, discover defects that were this tester's responsibility? If so, the tester should evaluate the reasons and make adjustments accordingly.
- Are major defects being discovered too late in the testing cycle? If this occurs regularly, the risk-based prioritization of the test procedures and the order in which they are executed should be re-examined.
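Several of the items above mention boundary-value analysis and equivalence partitioning and call for exercising invalid as well as valid data. The sketch below is a minimal, hypothetical illustration of both techniques using pytest; the function under test, validate_age, and its accepted range of 18 to 65 are assumptions made purely for this example.

```python
# A minimal sketch, assuming a hypothetical validate_age() function that
# accepts integers from 18 to 65 inclusive and rejects everything else.
import pytest


def validate_age(value):
    """Hypothetical function under test: True only for integer ages 18-65."""
    return isinstance(value, int) and 18 <= value <= 65


# Equivalence partitions: below range, in range, above range, wrong type.
# Boundary values: 17/18 and 65/66 sit on either side of each boundary.
@pytest.mark.parametrize(
    "value, expected",
    [
        (17, False),    # just below the lower boundary
        (18, True),     # lower boundary
        (42, True),     # representative of the valid partition
        (65, True),     # upper boundary
        (66, False),    # just above the upper boundary
        (-1, False),    # invalid partition: negative input
        ("18", False),  # invalid partition: wrong type
    ],
)
def test_validate_age_partitions_and_boundaries(value, expected):
    assert validate_age(value) is expected
```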
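The list also recommends automating regression tests and retesting supposedly fixed defects as soon as a new build arrives. One possible way to support both, sketched below under the assumption that pytest is in use, is to tag tests with custom markers; the marker names (regression, defect), the defect ID DEF-1234, and the apply_discount function are illustrative assumptions, not conventions prescribed by the text.

```python
# A minimal sketch, assuming pytest. Registering the custom markers
# (for example in pytest.ini) avoids "unknown marker" warnings:
#
#   [pytest]
#   markers =
#       regression: previously working functionality, retested after the new areas
#       defect(id): test that reproduces a specific reported defect
import pytest


def apply_discount(price, percent):
    """Hypothetical function under test: price after a percentage discount."""
    return round(price * (1 - percent / 100), 2)


@pytest.mark.defect("DEF-1234")
def test_discount_rounds_to_two_decimals():
    # Reproduces the scenario from the (hypothetical) defect report DEF-1234,
    # so it can be rerun first whenever a new build claims to contain the fix.
    assert apply_discount(19.99, 15) == 16.99


@pytest.mark.regression
def test_zero_percent_discount_returns_original_price():
    # Previously working behavior, kept in the automated regression suite.
    assert apply_discount(50.00, 0) == 50.00
```

With the markers registered, a command such as `pytest -m defect` can rerun the defect-reproduction tests first against a new build, while `pytest -m regression` runs the broader regression suite afterward.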
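The defect-workflow items call for documentation standards that list everything a developer needs in order to reproduce a defect. The dataclass below is one hedged illustration of such a field list; the field names and sample values are assumptions for the example, not a standard prescribed by the text.

```python
# A minimal sketch of the fields a defect-documentation standard might require.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DefectReport:
    defect_id: str                 # identifier from the defect-tracking tool
    summary: str                   # one-line description of the failure
    build_or_version: str          # exact build in which the defect was observed
    environment: str               # OS, browser, database, test data set, etc.
    steps_to_reproduce: List[str]  # complete, ordered steps a developer can follow
    expected_result: str           # behavior required by the requirement
    actual_result: str             # behavior actually observed
    severity: str                  # impact on the user or the business
    priority: str                  # urgency of the fix
    attachments: List[str] = field(default_factory=list)  # logs, screenshots


# Illustrative usage with hypothetical values.
report = DefectReport(
    defect_id="DEF-1234",
    summary="Discount total rounds incorrectly on the payment summary page",
    build_or_version="2.3.1-rc2",
    environment="Windows 11, Chrome 126, staging database",
    steps_to_reproduce=[
        "Log in as a standard customer",
        "Add an item priced 19.99 to the cart",
        "Apply the 15% promotional code",
        "Open the payment summary page",
    ],
    expected_result="Total shown as 16.99",
    actual_result="Total shown as 16.98",
    severity="Medium",
    priority="High",
)
```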
There are many more questions a tester can ask related to testing effectiveness, depending on the testing phase and the task at hand, the type of expertise involved (technical vs. domain), and the tester's experience level.
A test automator, for example, should become familiar with automation standards and best automation practices. A performance tester might request additional training in the performance-testing tool in use and in the performance-testing techniques available.
Self-assessment of the tester's capabilities and the improvement steps that follow are important parts of an effective testing program.