Common Scenarios
This section covers some common scenarios and how you can handle them from a planning and tracking perspective.
Scheduling and Tracking Test Case Creation and Execution
Before everyone on the team rushes off to write features and Test Cases, you need a plan for how to manage and track this work. Out of the box, the Test Case work item type (regardless of whether you use the MSF for Agile or MSF for CMMI template) lacks the Remaining Work and Completed Work fields. There is a reason for this: what would that time track? The creation of the Test Case, its execution, or both? It would be hard to say.
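If you want to verify this for yourself, you can list the fields defined on the Test Case type. The original MTM/TFS tooling predates REST, so the sketch below targets the modern Azure DevOps REST API (TFS's successor) from Python; the organization URL, project name, and personal access token are placeholders you must supply.

```python
import requests

# Placeholders: substitute your own organization, project, and PAT.
ORG = "https://dev.azure.com/your-org"
PROJECT = "YourProject"
PAT = "your-personal-access-token"

# List the fields defined on the Test Case work item type.
url = f"{ORG}/{PROJECT}/_apis/wit/workitemtypes/Test%20Case/fields?api-version=7.0"
resp = requests.get(url, auth=("", PAT))
resp.raise_for_status()

present = {f["referenceName"] for f in resp.json()["value"]}

# The time-tracking fields live in the Scheduling namespace.
for name in ("Microsoft.VSTS.Scheduling.RemainingWork",
             "Microsoft.VSTS.Scheduling.CompletedWork"):
    print(name, "is defined" if name in present else "is not defined")
```

Run against a default Agile or CMMI project, neither field should be reported as defined, which matches the behavior described above.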
Another item to consider is projects in which the project manager uses Microsoft Project to track work. Microsoft Project builds a Work Breakdown Structure (WBS) from the parent/child relationships between work items. The Test Case work item is related to requirements with a Tests/Tested By relationship, so Test Cases will not show up in the WBS, and the project manager cannot schedule them the way they would schedule a Task.
The best way to handle this is with the structure shown in Figure 3-18.
Figure 3-18 Work item relationships
This structure solves a number of problems. First, a project manager can assign the task of creating a Test Case to the test team, which means the activity can be captured in a Microsoft Project WBS. Second, the project manager has the option to schedule the creation and the execution of the Test Case separately; in that case, the Assigned To field holds the person creating the Test Case for the first task and the person executing it for the second. (You do not need the Assign To Tester functionality unless you are testing on multiple configurations.) This enables the project manager to track the time for each activity discretely. However, you may not want to assign a task for executing a Test Case: it is quite difficult for a tester to realistically track time at that grain, and the task would be associated with the Test Case rather than the test run, which makes reporting even more difficult.
The Parent/Child relationship between the Task and the Test Case is not strictly necessary. It provides some additional structure and enables the Test Cases to show up in a tree query (as opposed to a directed links query), but it does not feed any reports.
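If you do create the Parent/Child links, they are easy to query. The WIQL below is a sketch of a tree (work item links) query that pairs each Task with the Test Cases beneath it, posted through the modern Azure DevOps REST API; the connection details are placeholders as before.

```python
import requests

ORG = "https://dev.azure.com/your-org"    # placeholder
PROJECT = "YourProject"                   # placeholder
PAT = "your-personal-access-token"        # placeholder

# Tree (work item links) query: Tasks with Test Cases as children.
wiql = {
    "query": """
        SELECT [System.Id], [System.Title]
        FROM WorkItemLinks
        WHERE ([Source].[System.WorkItemType] = 'Task')
          AND ([System.Links.LinkType] = 'System.LinkTypes.Hierarchy-Forward')
          AND ([Target].[System.WorkItemType] = 'Test Case')
        MODE (Recursive)
    """
}

url = f"{ORG}/{PROJECT}/_apis/wit/wiql?api-version=7.0"
resp = requests.post(url, json=wiql, auth=("", PAT))
resp.raise_for_status()

# Each relation pairs a source Task with a target Test Case;
# roots come back with a null source.
for rel in resp.json()["workItemRelations"]:
    src = rel.get("source")
    tgt = rel["target"]
    if src:
        print(f"Task {src['id']} -> Test Case {tgt['id']}")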
Feature Driven Development
In FDD, software development is done on multiple branches. That is, you may have a branching structure like the one shown in Figure 3-19.
Figure 3-19 A typical FDD source code structure
In this type of branching structure, it is generally considered a best practice to perform comprehensive testing on all code in a feature branch before merging it to the main development branch. As part of this process, Test Cases need to be "migrated." For example, if you create a series of Test Cases (Test A, Test B, Test C) for code on feature branch F1, and that code is merged to Dev and then back down to feature branch F2, those Test Cases may need to be executed against the code in branch F2. How do you keep track of that?
The recommended solution is to create one Test Plan per feature branch. Because you can copy suites between Test Plans, this becomes relatively simple. Figure 3-20 shows the Copy Suites screen.
Figure 3-20 Copy Test Suites from Another Test Plan dialog
To get to this dialog, right-click a Test Suite on the Plan tab, Contents page, and select Copy Suite from Another Test Plan. You can either copy the entire suite (which includes the root node) or copy individual suites. It is critical to note that this does not create a copy of the Test Cases; it simply references the existing Test Cases, which in this situation is exactly what you want: change a Test Case in one place, and the change appears everywhere it is referenced. In this way multiple Test Plans can be associated with code from different branches (because each Test Plan can be associated with its own build), but the results can all be reported on together.
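Because a suite copy creates references, a single Test Case can appear in many suites across many plans, and it can be handy to enumerate them. The sketch below uses a suites-by-test-case lookup from the Azure DevOps REST API; the exact endpoint and API version may vary with your server version, and the IDs are placeholders.

```python
import requests

ORG = "https://dev.azure.com/your-org"   # placeholder organization URL
PAT = "your-personal-access-token"       # placeholder
TEST_CASE_ID = 1234                      # placeholder Test Case work item ID

# Ask the server for every suite that references this Test Case.
url = f"{ORG}/_apis/test/suites?testCaseId={TEST_CASE_ID}&api-version=5.0"
resp = requests.get(url, auth=("", PAT))
resp.raise_for_status()

# Each suite entry carries the plan that owns it, so you can see
# every branch-specific plan in which the Test Case is referenced.
for suite in resp.json()["value"]:
    plan = suite.get("plan", {})
    print(f"Plan '{plan.get('name')}' / Suite '{suite['name']}'")
```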
Moving from One Iteration to Another
When you move from iteration to iteration, you need to deal with a number of issues: some Test Cases were never completed, and others were completed but never executed. Do you simply "copy" them from one suite to another, which creates a reference, or do you duplicate the Test Cases? The answer depends on how you want to report on them.
If you have a Test Case with its iteration set to Iteration 1 and you then copy the suite it is part of to another Test Plan that is testing Iteration 2, you have a problem. Because a suite copy is actually a "reference," the Test Case continues to show up in Iteration 1, not Iteration 2. This can significantly skew your reporting, depending on how you report on it. On the other hand, creating actual copies of the Test Cases inflates the total count of Test Cases, even though the number of distinct tests hasn't changed.
What are your options? In the first case, the suite copy is an expedient way to handle the problem, but the recommendation is to go one step further: after you perform a suite copy, update all the copied Test Cases to the same iteration as the new plan. To make this clearer, consider the following: you have a plan (Analysis) that is set for Iteration 1, and all Test Cases in the plan are also set for Iteration 1. The analysis phase is complete, and you move to the next phase, in which these Test Cases will be updated. Use a suite copy to add them to a new Test Plan called Construction, and after they are copied over, update all the Test Cases so that the iteration is set to Iteration 2 (to match the iteration in which they will be worked on). Then continue to work on them as you normally would.
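Re-pointing every copied Test Case at the new iteration by hand is tedious, so you may prefer to script it. A minimal sketch, assuming you already know the IDs of the copied Test Cases; the work item update uses the standard JSON Patch format, and the project and iteration names are placeholders.

```python
import requests

ORG = "https://dev.azure.com/your-org"    # placeholder
PROJECT = "YourProject"                   # placeholder
PAT = "your-personal-access-token"        # placeholder

copied_test_case_ids = [101, 102, 103]    # placeholder IDs of copied Test Cases
new_iteration = f"{PROJECT}\\Iteration 2" # target iteration path

# JSON Patch document that moves a work item to the new iteration.
patch = [{"op": "add",
          "path": "/fields/System.IterationPath",
          "value": new_iteration}]

for wi_id in copied_test_case_ids:
    url = f"{ORG}/{PROJECT}/_apis/wit/workitems/{wi_id}?api-version=7.0"
    resp = requests.patch(
        url, json=patch, auth=("", PAT),
        headers={"Content-Type": "application/json-patch+json"})
    resp.raise_for_status()
    print(f"Test Case {wi_id} moved to {new_iteration}")
```

If you do not have the IDs handy, a WIQL query scoped to the new plan's suites or area can retrieve them first.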
The second option is in many ways more appealing. Creating copies of the Test Cases allows you to preserve each Test Case as it was executed against the code in a given iteration. For example, suppose Iteration 3 ended in a release to the customer, and the team begins work on Iteration 4, which will modify some of the features from Iteration 3. (This is an everyday occurrence in agile development but less so in waterfall.) Between the current release and the next, those Test Cases may need to be re-executed against production code. If you are actively changing them, you have to dig back through the Test Case work item history to recover the version executed against the current release. Making copies acts almost as a branching mechanism for your Test Cases and enables you to preserve the Test Cases executed against a release, which may be handy for auditing purposes.
The advice for this issue is "It depends on what you're trying to do." There are no "best practices" because everything is dependent on your situation. Just be aware of what can happen in the various scenarios, and think it through before developing your plan.
Handling Different Test Configurations
As previously mentioned, you can use configurations as metadata for reporting purposes and to cut down on the number of Test Cases you need to maintain. But does it always make sense to do this? The answer is no. No tool can easily solve this problem, so it takes some planning. First, determine whether the different configurations require different tests. If they do, the answer is simple: do not use MTM test configurations to differentiate them. In this scenario, create separate Test Cases, differentiate them by Area, and, better still, create separate Test Plans. Why? As noted earlier, a Test Plan has one manual test setting and one automated test setting, and for different configurations you may be testing on different systems or with different settings, so separate Test Plans are easier to manage. If you use separate Test Plans, you do not need Areas to break up your configurations; the plan itself provides the distinction, and MTM can work for you.
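If you settle on one Test Plan per configuration, creating the plans is also scriptable. Below is a sketch against the modern REST API with hypothetical configuration names; MTM-era servers exposed the same concept through the client object model instead.

```python
import requests

ORG = "https://dev.azure.com/your-org"    # placeholder
PROJECT = "YourProject"                   # placeholder
PAT = "your-personal-access-token"        # placeholder

# Hypothetical configurations, one Test Plan each.
configurations = ["Windows 10 - Chrome", "Windows 11 - Edge", "macOS - Safari"]

for config in configurations:
    body = {"name": f"Regression - {config}"}
    url = f"{ORG}/{PROJECT}/_apis/test/plans?api-version=5.0"
    resp = requests.post(url, json=body, auth=("", PAT))
    resp.raise_for_status()
    plan = resp.json()
    print(f"Created plan {plan['id']}: {plan['name']}")
```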
Test settings are one item that should absolutely not be overlooked; they are cumbersome to manage if you have to change them on a per-run basis. It is easy enough to group Test Plans in different areas and arrange the Test Cases under them, but if you have to group Test Cases together that require different test settings, you are adding more work for the testers. Plan for this before you get to the point where it becomes a problem.