Using Bugs To Bring Developers and Testers Closer Together
While I'm not sure that the common saying "Testers and developers think differently" is accurate, I think it's safe to say that testers and developers certainly have different motivators for their work and, more importantly, different pressures and perspectives that guide the directions their work takes. Testers are motivated by management to find and report problems within the system as quickly as possible, while developers are motivated to complete code as quickly and accurately as possible in order to move on to the next problem.
Somewhere in this jumble of speed versus quality, someone should be motivating testers to find better, more meaningful bugs, but I rarely see that motivation happening. And someone should be motivating developers to improve the quality of the code they create or modify, but that prompting often takes a backseat to "More code, faster."
Because of these differing motivations, and the time pressures that everyone on the team is under, communication suffers. It's easy for a tester to think of a simple test the developer should have run to find a particular problem before releasing the code, but the tester doesn't appreciate the pressures imposed on the developer. And it's easy for the developer to be dismayed by an overwhelming number of low-value or meaningless defect reports, but the developer doesn't appreciate the expectations and metrics against which management measures the tester.
As more defects are submitted, developers have less time to work on them, and communication may break down. Project team members commonly fall back on short, assumption-laden descriptions and comments in the defect-tracking system as the fastest and most effective means of communication. Little time is spent face to face, where the most effective communication takes place and dialogue can offer insights to both the tester and the developer.
In this article, I'll discuss how recent projects used the following simple techniques to fix problems faster and improve communication between testers and developers:
Sharing test scripts between teams
Distributing the ability to execute smoke tests
Performing runtime analysis together
Using log files to isolate problems
Using defect-tracking systems effectively
Speaking face to face
Not all of these techniques added "face time" to our project communications, but all of them helped everyone involved to gain a better understanding of the pressures and constraints under which both developers and testers work.
Share Test Scripts Between Teams
Problems uncovered by test scripts can be time-consuming or difficult to reproduce. In past projects, I've submitted defects that took hours to reproduce because of the sequence of events in the scripts being executed. More often than not, this setup requirement made it impossible for the developer to reproduce the issue from the information in the log. If you make all of your test scripts available to the entire team, however, you give developers the ability to look at the script code, look at the script logs, re-run the scripts and watch them execute, re-run them in local environments with debug information written to logs, or re-run them in conjunction with other tools.
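To make a shared script genuinely re-runnable, it helps to build in a debug switch so a developer can get verbose logs from a local run without editing anything. The sketch below shows one way to do that in Python; the orders service, its URL, and the test data are hypothetical stand-ins for whatever your scripts actually drive.

    # Minimal sketch of a shared regression script. The APP_URL and
    # TEST_LOG_LEVEL environment variables let a developer point the
    # script at a local build and turn on debug logging when re-running it.
    import logging
    import os

    import requests  # assumed HTTP client; substitute your own app driver

    BASE_URL = os.environ.get("APP_URL", "http://localhost:8080")   # hypothetical service
    LOG_LEVEL = os.environ.get("TEST_LOG_LEVEL", "INFO")            # set to DEBUG locally

    logging.basicConfig(
        level=LOG_LEVEL,
        format="%(asctime)s %(levelname)s %(message)s",
        filename="regression.log",
    )
    log = logging.getLogger("order_regression")

    def test_duplicate_order_is_rejected():
        """Replays the multi-step sequence that originally exposed the bug."""
        order = {"sku": "A-100", "qty": 2}  # hypothetical test data
        first = requests.post(f"{BASE_URL}/orders", json=order)
        log.debug("first response: %s %s", first.status_code, first.text)
        assert first.status_code == 201

        second = requests.post(f"{BASE_URL}/orders", json=order)
        log.debug("second response: %s %s", second.status_code, second.text)
        assert second.status_code == 409, "duplicate order should have been rejected"

A developer who wants more detail simply sets TEST_LOG_LEVEL to DEBUG and re-runs the same file the tester ran, instead of waiting for the tester to gather more information.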
In addition to sharing the test scripts, provide the development team with a remote machine for automated test-script execution. Most enterprise tools allow for distributed test execution. If you dedicate one of your test lab machines to running your scripts, developers can execute the tests they need while simultaneously using their own computers to keep developing. This technique allows developers to work on your problem; without the testing box, they might not be able to run a full test due to time and equipment constraints. To make this strategy most effective, reference your scripts in each defect you submit so the development team can run them without checking with you first, removing a manual step.
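How a developer reaches that shared machine depends on your tooling; most enterprise suites ship their own remote-execution agents. Where nothing like that exists, even a thin wrapper that runs a named script over SSH is enough. The host name and script directory in the sketch below are assumptions to adapt to your own lab.

    # Hedged sketch: run a shared test script on the lab box from a developer
    # workstation. "testlab01" and /opt/testscripts are placeholders; an
    # enterprise tool's own remote-execution command would replace this.
    import subprocess
    import sys

    script = sys.argv[1] if len(sys.argv) > 1 else "order_regression.py"

    result = subprocess.run(
        ["ssh", "testlab01", "python", f"/opt/testscripts/{script}"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
        sys.exit(result.returncode)

A defect entry can then cite the exact script name and command, so the developer can reproduce the failure on the lab machine without asking the tester anything.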
Sharing test scripts with developers also lets everyone on the team use the same tools that were used to develop the scripts. When team members use the same tools, a likely side effect is developers taking the time to offer improvements to the scripts. After running one of my scripts, a developer once told me that he already had a unit-test script that did something similar "behind the GUI." Together we reviewed both scripts, and ultimately we transferred the data from my regression script to work with his unit-test script. His unit-test script executed in a fraction of the time of my regression script, and the results were easier to read. The more feedback you get from developers on your scripts, the more powerful those scripts become.
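One concrete way to make that kind of transfer is to pull the regression script's inputs out into a data file that both scripts read, so the developer's faster "behind the GUI" test covers the same cases. The sketch below assumes a hypothetical pricing module and CSV file; the names are illustrative, not from the original project.

    # Sketch of a data-driven unit test that consumes the same CSV the GUI
    # regression script used. calculate_discount() and discount_cases.csv
    # are hypothetical placeholders.
    import csv
    import unittest

    from pricing import calculate_discount  # hypothetical module under test

    def load_cases(path="discount_cases.csv"):
        with open(path, newline="") as f:
            return [
                (row["customer_type"], float(row["subtotal"]), float(row["expected"]))
                for row in csv.DictReader(f)
            ]

    class DiscountDataDrivenTest(unittest.TestCase):
        def test_all_shared_cases(self):
            for customer_type, subtotal, expected in load_cases():
                with self.subTest(customer_type=customer_type, subtotal=subtotal):
                    self.assertAlmostEqual(
                        calculate_discount(customer_type, subtotal), expected
                    )

    if __name__ == "__main__":
        unittest.main()

Because both scripts draw from one file, a case added by either the tester or the developer is exercised through the GUI and behind it.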
In addition, the more you collaborate with developers, the more likely they'll be to fix your problem; after all, it may have been their improvement to your script that found the bug. By distributing the ability to execute any of your scripts, you increase communication between developers and testers.