Getting Started
Several years ago, I was working in a quality assurance department on a software development team. We were looking at ways of improving our processes and were encouraged to research new ways of developing and testing software. Several of us were big fans of iterative lifecycles. When a developer introduced us to Scrum at a lunch-and-learn session, we started using the daily Scrum meeting near the end of releases. We found these Scrum meetings useful because we all knew what everyone else was doing at any given time, and we were able to share information more freely. Instead of spinning our wheels for a couple of days tackling a problem in isolation, we could raise it at a Scrum meeting, and someone else would often pipe up and offer to talk it through offline.
Because we found we were more productive and effective using Scrum meetings, we sought management support to adopt the entire Scrum process on a pilot project. We started the project with a meeting to determine the Product Backlog. This meeting involved all the project stakeholders, including the development team and testers. On a whiteboard, we wrote the features that needed to be delivered; then we brainstormed and set general priorities. Because the pilot project was small and we testers were also working on other projects at the same time, we weren't as active as other team members, but we got a good idea of what to expect to test. The Scrum Master gathered the feature wish list and recorded it as a rough Product Backlog in a spreadsheet that was sent to the team members for review. After we had reviewed the Product Backlog and estimated how much time it might take to deliver each item, we met again. At that second meeting, the Product Backlog was trimmed and refined, and the final version was sent out once more to each stakeholder.

As testers, we decided to test the way we always had: at the end of the development cycle, once a complete product was ready. We would develop our test plan during the Sprints leading up to release, and we added a testing/bug-fix Sprint at the end of the project.
Next, the team held a Sprint planning meeting to determine which features to develop in the first four-week Sprint. After selecting the features, each team member added high-level descriptions of the tasks we would perform during the Sprint. For the testers, there wasn't much to do yet, so we created planning tasks, attended design meetings, and provided feedback. The entire team also selected a goal to reach by the end of the Sprint: working software with a limited number of features that could be demonstrated and tested. As with the Product Backlog, the Sprint Backlog was a spreadsheet of line items describing the tasks we would complete over the next four weeks to help meet the Sprint goal. This spreadsheet was sent out to the entire team for review.
The first Sprint was quiet for the testers. We attended some design meetings and began planning our tests. It took a bit to adjust to the daily Scrum meetings. At first we thought, "Oh no, not another meeting," but since the meetings were so short and useful, they hardly felt like meetings at all. It was tempting to get into longer discussions during the meetings, but the Scrum Master kept us on track and limited the Scrum meetings to 15 minutes.
At the end of the first Sprint, we had working software that was ready to demo. As testers, we were part of the demo; we recorded both positive and negative feedback from stakeholders and continued test planning with this information. Test planning was faster and easier with working software in front of us than it had been when we tried to visualize the software from requirements documents.
We repeated this process for subsequent Sprints, with testers spending more time in design meetings, talking to developers about testing, and working on test planning. The more features we added, the more time we were able to spend thinking about test strategies. Coming into the final Sprint, we had a very solid handle on what we were going to test. Since we could plan from working software, test planning was a dream. We were able to stretch out and look at new areas of testing that we hadn't previously had time for.
For the final Sprint, we were extremely busy testing. Because the software was already familiar to us, there wasn't much of a learning curve when we started testing. The focus shifted from the development team to testing, and we had a lot to say in meetings, but many of the questions we would normally have had at this point in a project had already been answered, so we could concentrate on testing and on providing bug reports and feedback to the developers. We shipped the software with a lot more confidence, and with new ideas on testing for the next release.
At the end of the project, the pilot was deemed a success, and everyone from management down was pleased with the results. As testers, we were pleased that we had access to the product sooner and could gather our own information for testing during development instead of reading it from a requirements document. We knew that what we were looking at was up to date and more detailed than requirements documents usually are. Because this approach had us focusing on different areas of interest and risk, we also brought more diversity to our testing techniques.