What To Do If You Find Problems
When you encounter a problem with your automated tests, you need to consider several factors. First, is this an isolated incident (one faulty test), or a larger issue that affects a whole series of automated tests? If it's isolated, you're probably safe just fixing the test and going on your merry way. If it's a bigger issue, however, you need a bigger solution. Start by asking whether you really need these tests any longer. If you do, ask whether they should be automated or manual. If you decide that you need them and that they should remain automated, then ask whether this is the best way for these tests to be implemented (going back to the implementation models I mentioned earlier).
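To make the order of those questions concrete, here's a minimal sketch of that triage flow in Python. The function, its parameters, and the Action values are hypothetical illustrations of the decision sequence described above, not part of any tool or framework:

```python
from enum import Enum, auto

class Action(Enum):
    FIX_AND_MOVE_ON = auto()    # isolated incident: repair the single test
    RETIRE = auto()             # the tests are no longer worth keeping
    CONVERT_TO_MANUAL = auto()  # keep the tests, but run them manually
    REIMPLEMENT = auto()        # keep them automated, but implement them differently

def triage(isolated, still_needed, should_be_automated, well_implemented):
    """Walk the questions in the order discussed above; every argument is a boolean."""
    if isolated:
        return Action.FIX_AND_MOVE_ON
    if not still_needed:
        return Action.RETIRE
    if not should_be_automated:
        return Action.CONVERT_TO_MANUAL
    if not well_implemented:
        return Action.REIMPLEMENT
    return Action.FIX_AND_MOVE_ON  # needed, automated, and well built: just repair it

# Example: a systemic failure in tests that are still needed and should stay
# automated, but were implemented the wrong way.
print(triage(isolated=False, still_needed=True,
             should_be_automated=True, well_implemented=False))
# Action.REIMPLEMENT
```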
Do We Still Need These Tests?
James Bach's minefield analogy illustrates a basic principle of testing: It's probably more meaningful and more valuable to execute a new test than to repeat an old one. "The minefield heuristic is saying that it's better to try to do something you haven't yet done, than to do something you already have done." When you look at the tests you need to update, ask yourself whether you would still create them today. Are they still interesting, or has the risk diminished to the point where you just don't need to invest in those tests any longer? Don't work under the assumption that a test is valuable just because someone once found it worth writing.
If you decide that the test is worth repeating, look for overlap between that test and other tests that currently work. If the feature is covered somewhere else (either manually or through automation), you may not want to invest the effort to update this instance of the test. Earlier, when I mentioned overlap in your testing, I noted that it isn't optimally efficient. Here's a chance to make up for some of that inefficiency. If you have other tests that exercise this functionality, and you know that those tests work, lose the tests that aren't working and don't look back.
Should These Tests Even Be Automated?
Before doing any sort of in-depth analysis, consider whether the test is even appropriate for automation. If it's a usability test, for example, I would seriously question anyone's ability to automate it. Similarly, if it's a test involving special hardware (fingerprint readers, for example) or a manual process (such as checking a printed page for correctness), you might question the original wisdom of automating the test. It's not that these things are impossible to automate; often it's just not practical.
I don't want to reinvent the wheel here. If you haven't read Brian Marick's "When Should a Test Be Automated?", stop reading and do it now. It's a must-read for anyone doing automated testing, and probably one of the most complete and well-stated papers on a very difficult topic. In the paper, Marick addresses the costs of automated versus manual tests, the lifespan of any given test, and the ability of any given test to find additional bugs.
He draws two conclusions worth restating here:
- "The cost of automating a test is best measured by the number of manual tests it prevents you from running and the bugs it will therefore cause you to miss."
- "A test is designed for a particular purpose: to see if some aspects of one or more features work. When an automated test that's rerun finds bugs, you should expect it to find ones that seem to have nothing to do with the test's original purpose. Much of the value of an automated test lies in how well it can do that."
In the context of this article, the first point is the more important of the two, because we're concerned with the cost of fixing an automated script: your cost is the manual tests you couldn't run (and the bugs you therefore missed) the first time around, plus the manual tests and bugs you'll miss the second time around while you fix the test. The second point addresses the test's ability to find new, unexpected bugs, which it may or may not still be doing even though it isn't testing what you originally designed, or believed, it to be testing.
Is There a Better Way To Automate These Tests?
Chances are that if you're doing automated testing, you could be doing it better. We all could; as with any testing activity, there's always room for improvement. In my experience, automation is low-hanging fruit, an area where many organizations can make immediate gains. If you're going to invest in automated testing, the least you can do is ensure that you're getting the most bang for your buck.
I'll close this article with my short list of resources for ways to make your automation more effective. Remember, you got to this point because you found an automated test that isn't testing what you thought it should be testing. You then asked whether you still needed the test, and decided that you did. After that, you asked whether the test should remain an automated test, and again you thought it should. So by this point you know that you did something wrong the first time around. These resources will help to ensure that you get it right the second time.
If your organization is unfamiliar with some of the "classic" frameworks for test automation, take a look at the following resources:
- "Deconstructing GUI Test Automation," by Bret Pettichord
- "Improving the Maintainability of Automated Test Suites," by Cem Kaner
- "Choosing a test automation framework," by Mike Kelly
If you've implemented some of these frameworks but are interested in more sophistication, new ways to leverage your existing tests, or further reductions in maintenance costs, the following resources might get you going down a new path:
- "Bypassing the GUI," by Brian Marick
- "Model-Based Testing: Not for Dummies," by Jeff Feldstein (pages 18–23)
- "High Volume Test Automation," by Cem Kaner, Walter P. Bond, and Pat McGee
Finally, perhaps your organization just needs a new way of thinking about automation. If you think that might be the case, these articles might be helpful:
- "Agile Test Automation," by James Bach
- "The Ongoing Revolution in Software Testing," by Cem Kaner
- "Determining the Root Cause of Script Failures," by Scott Barber
Michael Kelly is a senior consultant for Fusion Alliance, with experience in software development and testing. He is currently serving as the Program Director for the Indianapolis Quality Assurance Association and the Membership Chair for the Association for Software Testing. You can reach Mike by email at Mike@MichaelDKelly.com.