Challenges
Jerry Weinberg has a heuristic he calls "The Rule of Three": For any idea you have, if you can’t think of three ways it can fail, you don’t understand the problem space. This isn’t meant to discourage ideas, but to help you prepare for those times when an activity doesn’t go as planned. It also helps you become aware of the tradeoffs you’ll make. Tradeoffs are especially important to understand in software testing: What do I gain by using this technique, and what do I potentially lose?
By practicing TDD, and by talking with colleagues, I’ve learned about some of the problems we can run into. TDD has an important role in software development today, but it isn’t a cure-all. Here are some of the challenges that are important to know about, whether you work as a conventional tester or as a software developer:
- Don’t rely exclusively on automated unit tests. One TDD developer told me a story about a project in which they had a contest. One team did traditional programming with ad hoc manual unit tests and the odd functional test. The other team did pure TDD with a continuous integration system. At the end of development, the first team had slightly more stable software, and both had about the same number of bugs. He realized at that point that the traditional team did more functional testing, and did it more often, so their software was better suited to the environment in which it would run. His own team relied almost completely on their automated unit tests, so when they deployed into a test environment that was like production, the software didn’t work as well.
I learned that lesson myself the hard way, even though I should have known better as a software tester, given my experience with just this issue. When I wore my developer hat, though, that green bar was seductive.
- Don’t test only with test doubles such as mock objects. Mock objects are useful for development and can be a valuable testing tool. They can simulate events that are difficult to replicate with a real system, and they reduce dependencies in your test code so that your tests run quickly. However, mock objects have a fatal flaw: they carry the same assumptions that the program code has, and as such they are not the same as the real thing. It’s common for tests to pass against mock objects but fail when run against the real system.
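To make that flaw concrete, here is a minimal sketch using hypothetical names (a hand-rolled `MockPaymentGateway` and an `OrderProcessor` that depends on it); it is not taken from any particular project. The test goes green because the mock shares the program code's optimistic assumptions, while the real gateway could reject the same call for reasons the mock never models.

```java
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Hypothetical collaborator: the real implementation talks to a remote payment service.
interface PaymentGateway {
    boolean charge(String account, long amountInCents);
}

// The mock carries the same assumptions the program code does: it accepts any input.
class MockPaymentGateway implements PaymentGateway {
    public boolean charge(String account, long amountInCents) {
        // Always succeeds, unlike the real gateway, which can reject unknown
        // accounts, time out, or enforce validation rules the mock never models.
        return true;
    }
}

// Code under test: depends only on the PaymentGateway interface.
class OrderProcessor {
    private final PaymentGateway gateway;

    OrderProcessor(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    boolean checkout(String account, long amountInCents) {
        return gateway.charge(account, amountInCents);
    }
}

public class OrderProcessorTest {
    @Test
    public void checkoutSucceedsAgainstTheMock() {
        OrderProcessor processor = new OrderProcessor(new MockPaymentGateway());
        // Green against the mock; against the real gateway this same call could
        // still fail on authentication, connectivity, or account validation.
        assertTrue(processor.checkout("ACCT-42", 1999));
    }
}
```

Running the same kind of test against the real gateway implementation is the only way to learn whether those shared assumptions hold, which is why mock-based tests need to be backed by tests against the real thing.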
- You can get carried away with tests at the expense of the design. Another experienced developer told me how he had written a lot of tests only to find out a couple of days later that his design had suffered. He would let the test ideas drive the design too much, and would realize later that he had written a lot of code that was contributing to a poor design.
I found the same problem when I was developing on my own. I could get carried away with having a lot of tests, which gave me confidence, but when the time came to use the software I would discover that I had gone down the wrong path by not balancing test ideas against the overall design.
- Maintaining tests can be difficult. One experienced TDD developer posed a problem for me that he had faced. What do you do when your automated tests all pass, and your mock objects give you good isolation and few dependencies in your tests, but the client asks for a change that requires a quick but major change in your application code? That change causes a lot of your tests to fail, and now you have to rewrite your mock objects, fix a ton of unit tests, and rework all that automation code. First you have to determine whether each failure is a new bug, and then you either change your code or change your test so that it passes. Whenever you change a test just so it passes, it’s unnerving.
In this case, the client needed the software changed quickly due to market pressures. It didn’t make sense to spend the two weeks needed to rework the automated unit tests and mock objects when the code could be changed in the morning and, by the next afternoon, the testers would be confident in the software after regression testing it from the GUI. The client knew the risks, but decided to send the code out to production without fixing the automated test code and mock objects. Almost overnight, the automated unit tests fell into disuse. I’ve seen this situation a few times: Automated unit tests served well for one release, but over time they fell into disrepair and were forgotten.
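As a hedged illustration of why one "quick but major" change can invalidate so much test code, consider this sketch with hypothetical names; the interface change below is invented for the example, not taken from the project described above.

```java
// Original collaborator that dozens of tests and hand-rolled mocks were written against.
interface InventoryService {
    int stockLevel(String sku);
}

// The "quick but major" change: the client now needs stock broken out by warehouse,
// so the contract gains a parameter and every implementation must change.
interface RevisedInventoryService {
    int stockLevel(String sku, String warehouseId);
}

// Every mock of the old interface stops compiling, and every test that stubbed
// stockLevel(sku) has to be revisited: is a failure a genuine bug, or a test
// that simply hasn't caught up with the new contract?
class MockRevisedInventoryService implements RevisedInventoryService {
    public int stockLevel(String sku, String warehouseId) {
        // Stubbed value; ignoring warehouseId quietly re-creates the old assumption.
        return 10;
    }
}
```

Multiply that by a few hundred tests and the two-week estimate in the story above becomes easy to believe.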
- Test suites can get unwieldy over time. Even when you have a top-notch team of developers dedicated to maintenance, another problem crops up that is particularly noticeable in a continuous integration environment. One TDD development team told me about a software project that had several releases over a period of years. They ran into a big problem: they couldn’t buy hardware powerful enough to run the tests in the time needed on the continuous integration server. This held up development progress because the unit tests could take a couple of hours to run. Even though they were strict about writing pure unit tests that would execute quickly, over the years the test suite grew so large that it ran slowly.
- Writing GUI code with TDD is difficult. One area where TDD is particularly challenging is programming GUIs. User interfaces are complex and difficult to test, and they often rely heavily on libraries, which limits how cleanly you can write tests and work through a design. TDD pundits sometimes float the "thin GUI" theory: make the GUI so simple that you don’t have to test it. That advice provides little comfort for those who can’t use a thin GUI design because of the tools they’re using, and the GUI remains a difficult interface to test through with any sort of automated tool.
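For teams whose tools do allow it, here is a minimal sketch of the "thin GUI" idea, assuming a presenter-style separation and hypothetical names; the real view would be Swing, SWT, or web code, kept so trivial that leaving it out of the unit tests is a tolerable risk.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// The view interface: the real implementation is GUI-toolkit code that is hard to
// drive from a unit test, so it is kept as thin as possible.
interface LoginView {
    String enteredUserName();
    void showError(String message);
    void showWelcome(String message);
}

// All of the decision-making lives here, where TDD can reach it without a GUI toolkit.
class LoginPresenter {
    private final LoginView view;

    LoginPresenter(LoginView view) {
        this.view = view;
    }

    void loginClicked() {
        String name = view.enteredUserName();
        if (name == null || name.trim().isEmpty()) {
            view.showError("User name is required");
        } else {
            view.showWelcome("Welcome, " + name);
        }
    }
}

public class LoginPresenterTest {
    // Simple fake view that records what the presenter asked it to display.
    static class FakeView implements LoginView {
        String name;
        String lastMessage;
        public String enteredUserName() { return name; }
        public void showError(String message) { lastMessage = "ERROR: " + message; }
        public void showWelcome(String message) { lastMessage = message; }
    }

    @Test
    public void emptyUserNameShowsAnError() {
        FakeView view = new FakeView();
        view.name = "";
        new LoginPresenter(view).loginClicked();
        assertEquals("ERROR: User name is required", view.lastMessage);
    }
}
```

The tradeoff is the same one described above: the presenter is easy to drive with TDD, but the thin view layer itself is still only exercised by testing through the GUI.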