- TDD and Conventional Testing Work
- A Lesson Relearned
- From TDD to Legacy Code
- Lessons Learned
From TDD to Legacy Code
I then moved on to a new project that involved enhancing a homebrewed test automation framework. There was one catch: it didn’t have any unit tests. While I could start developing sooner, I was quickly gripped by the fear my developer friends had told me about. I would make one change, and then I’d have to run a lot of unit and functional tests to make sure I hadn’t introduced bugs in other areas of the automation code.
My progress slowed to a crawl: I would make one small change and find that it caused a problem somewhere else in the automation stack. Another programming tester and I kept frustrating each other by making changes that caused bugs in each other’s projects. Since we didn’t have a suite of automated tests covering the entire test application, we were limited to the ad hoc unit and functional testing we did prior to code check-in. Invariably, each of us would miss a set of tests in an unfamiliar area and cause problems for the other. I realized that this automated test development looked exactly like any other legacy software project that isn’t easily testable.
On the other hand, the unit-tested custom library I had developed earlier was working quite well. When testers found a bug during use, I worked with another tester to fix it. We first added a unit test that reproduced the error described in the bug report, and then added the code to make the test pass. We then ran the entire suite of automated unit tests, and with that green bar in place we were confident in the code. We ran the released functional tests again for good measure (I’d learned my lesson) and checked in the change. The simple, testable library was much easier to adapt to change than the large, complex test automation stack.
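Purely as an illustration (the article doesn't show the library's actual code), a minimal pytest-style sketch of that workflow might look like the following, using a hypothetical retry_delays helper: first add a test that reproduces the reported failure, then change the code until the whole suite is green again.

```python
# Hypothetical example: a small helper from a custom test library.
# The (invented) bug report: retry_delays(3) returned only two delays.

def retry_delays(attempts):
    """Return a list of back-off delays, in seconds, one per attempt."""
    # Fixed version: range(attempts) yields one delay per attempt.
    # The buggy version used range(attempts - 1) and dropped the last delay.
    return [2 ** n for n in range(attempts)]


# Step 1: add a unit test that reproduces the error from the bug report.
# Against the buggy code this test fails (it got [1, 2] instead of [1, 2, 4]).
def test_retry_delays_returns_one_delay_per_attempt():
    assert retry_delays(3) == [1, 2, 4]


# Step 2: the existing tests still describe the behavior that must not change;
# running the whole suite after the fix confirms nothing else broke.
def test_retry_delays_handles_single_attempt():
    assert retry_delays(1) == [1]
```

Running pytest before the fix shows only the new test failing; after the fix, the green bar across the whole suite is what gives the confidence described above.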
This experience reinforced a testing paradox I had discovered when I first started doing test automation work: "Who tests the tests?" or "Who watches the watchmen?" Test automation is software development, and as such is subject to the same problems as any other kind of programming activity. It benefits from testing as well, but there’s a huge dependency that causes problems—the application we’re testing. Now we have two applications to worry about: the production software and the test software. Production software can stand on its own, but the test software requires the production software in order to function at all.
The test software gives us confidence in the production software, but what happens when the test software itself becomes complex? My first inclination was to write tests for our test automation stack, but I quickly found that those tests could become complex as well, which led me into a situation that felt recursive. It was like those Russian stacking dolls (matryoshka): open one and a slightly smaller doll is inside, with another doll inside that one, and so on. I could imagine tests to test the tests that test the tests. Where do we draw the line on testing tests?
As another developer pointed out to me, there’s a symbiosis between the tests and the code. Jerry Weinberg has described a symmetry between test code and production code: each tests the other. One developer friend told me that if our test automation code is so complex that it needs its own tests, we should take a serious look at our automation stack. This is a difficult problem, and I have often jumped into test automation without thinking about the design and the potential consequences of those early design decisions.