- Problem 1: Almost None of the Test Scripts Were Correct
- Problem 2: Inattentional Blindness
- Problem 3: Inaccurate Perceptions
- Problem 4: Scripted Test Cases Gave the Illusion of Progress
- Additional Resources
Problem 4: Scripted Test Cases Gave the Illusion of Progress
A final noteworthy aspect of scripted testing: I think many project managers equate scripted test cases with progress milestones. On some of my past projects, managers have had the impression that the more scripted test cases we have, the more testing we're doing. They believe that the more pages we fill with steps, expected results, and test data, the more effective our testing will be. It also adds an element of predictability to a project plan: "If we have 200 test cases, then we know we're halfway through our testing when we reach 100, right?"
I'm not saying that there's no correlation between the number of test cases you have and the time it takes to execute your testing. Nor am I saying that measuring your test case execution progress isn't important—it is. But it's not as important as making sure that you're testing the right things (coverage), or that you're testing for the right types of errors and the right information (risk). Each test case executed should have the potential to reveal new information that could shorten or lengthen the test project. Measuring test progress by counting pieces of paper doesn't reflect that aspect of test management. This is probably why some managers prefer scripted test cases: they can give you the illusion that you're testing—even if you aren't.