Test Plans
Before using the testing tools, you need to understand how the various artifacts fit together, because this matters when you start to manage an actual project. Figure 3-6 shows a container view of the artifacts.
Figure 3-6 Relationships between Team Projects, Test Plans, Test Suites, and Test Cases
Figure 3-6 shows that a Test Plan in MTM is associated with a specific Team Project. A Test Plan is composed of one or more Test Suites, and each Test Suite is composed of one or more Test Cases. This is a straightforward structure that enables flexible reporting and easy management of the Test Plans.
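If you ever need to automate plan management, the same containment hierarchy is visible through the Test Case Management (TCM) client API. The following is a minimal C# sketch, not MTM's own implementation; the collection URL and project name are placeholders, and it assumes references to the TFS 2010 client assemblies (Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.TestManagement.Client).

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.TestManagement.Client;

class PlanHierarchy
{
    static void Main()
    {
        // Connect to the team project collection (URL is a placeholder).
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));

        // Get the test management service and the team project.
        var service = collection.GetService<ITestManagementService>();
        ITestManagementTeamProject project = service.GetTeamProject("BlogEngine.NET");

        // A Test Plan contains Test Suites; Test Suites contain Test Cases.
        foreach (ITestPlan plan in project.TestPlans.Query("SELECT * FROM TestPlan"))
        {
            Console.WriteLine("Plan: {0}", plan.Name);
            PrintSuite(plan.RootSuite, 1);
        }
    }

    static void PrintSuite(IStaticTestSuite suite, int indent)
    {
        foreach (ITestSuiteEntry entry in suite.Entries)
        {
            Console.WriteLine("{0}{1}", new string(' ', indent * 2), entry.Title);
            // Recurse into nested static suites; other entries are Test Cases
            // or non-static suites.
            var child = entry.TestSuite as IStaticTestSuite;
            if (child != null)
                PrintSuite(child, indent + 1);
        }
    }
}
```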
Exercise 3-1. Create a New Test Plan
This exercise assumes that you have not used MTM before. If you have, but you want to work through this exercise, select the Home button in the upper-left corner of the screen and select Change Project:
1. Open MTM.
2. Select Add Server, or select an existing server if the correct server is listed.
3. Select the BlogEngine.NET project, and click Connect Now.
4. On the Testing Center screen, click Add to create a new Test Plan.
5. Enter the name as Iteration 1 and click Add.
6. Highlight the plan, and click Select Plan.
Figure 3-7 shows the Iteration 1 Test Plan.
Figure 3-7 Test Plan
Properties
Test Plans have a name and a description; if you use multiple Test Plans concurrently, give each one a descriptive name and a more detailed description. The owner is usually the test manager, but it can also be a test lead if a lead is responsible for the testing that occurs within a plan. The state can be either Active or Inactive, depending on whether the plan is currently in use; this list of states is not customizable. Inactive Test Plans can be either previously completed Test Plans or Test Plans that have yet to be started and are still being created. The default state for new Test Plans is Active, but you might want to set a plan to Inactive while it is still being designed.
The area and iteration follow the standard work item classification scheme. In general, Test Plans should be related to iterations in some way (or to whatever scheme the development team uses to produce software), because testing follows the requirements or the coding; these are distinct phases in any methodology, whether or not they are explicitly called out.
Test Plans are not work items in the way that a requirement, user story, or task is; they are independent of the work item system. This is both a benefit and a disadvantage. The benefit is flexibility: a Test Plan contains more information and is more dynamic than a work item. On the other hand, properties such as the Start and End dates cannot be reported on through the work item mechanism. You need to use the data warehouse (refer to Chapter 9, "Reporting and Metrics") to report on Test Plans.
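That said, if you just need a quick list of plan dates and states outside the warehouse, the TCM API exposes them directly on the plan object. A hedged fragment, reusing the hypothetical project object from the earlier sketch:

```csharp
// Reuses the 'project' object from the earlier sketch. StartDate, EndDate,
// and State live on the plan itself, not in the work item store.
foreach (ITestPlan plan in project.TestPlans.Query("SELECT * FROM TestPlan"))
{
    Console.WriteLine("{0}: {1:d} to {2:d} ({3})",
        plan.Name, plan.StartDate, plan.EndDate, plan.State);
}
```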
Run Settings
Run settings define where tests execute and what diagnostic data adapters are implemented. Figure 3-7 shows the two categories of Run settings: Manual and Automated. Manual Run settings relate to any tests executed with the Test Runner (refer to Chapter 4, "Executing Manual Tests"). Automated Run settings relate to the execution of any automated tests (refer to Chapter 6, "Automating Test Cases") through MTM.
To create a new Run setting, go to the Lab Center, Test Settings tab, Test Settings Manager page, and copy an existing setting or add a new setting. These can then be assigned in the Test Plan Properties page. Figure 3-8 shows the Test Settings creation screen.
Figure 3-8 Test settings
Depending on whether you create an automated or a manual setting, the options are slightly different. Figure 3-8 shows the Data and Diagnostics tab of a manual test setting, which contains the diagnostic data adapters. Table 3-2 lists the default diagnostic data adapters you can choose.
Table 3-2. Default Diagnostic Data Adapters
| Collector | Description |
| --- | --- |
| Action Recording and Action Log | Records each step that the tester takes in the application during a manual test run. |
| ASP.NET Client Proxy for IntelliTrace and Test Impact | Enables you to capture IntelliTrace and Test Impact information during a test execution of an ASP.NET application. Note: This setting does not actually perform the capture; you must check the IntelliTrace and/or Test Impact collectors in addition to this collector. |
| Event Log | Captures selected events written to the Event Log during a test run. |
| IntelliTrace | Enables capturing of the debug log. |
| Network Emulation | Throttles the network performance based on the specified settings. |
| System Information | Captures system configuration information for the system on which the test is performed. |
| Test Impact | Records Test Impact information for calculating Test Cases affected by modified code. |
| Video Recorder | Records a video of all actions taken on the screen during a test run. |
Diagnostic data adapters enable the test infrastructure to gather virtually any piece of data you want. They are fully extensible and easy to create and modify (literally about 20 lines of infrastructure code, plus whatever code is needed to collect the data).
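To give a sense of that scale, here is a skeletal, hypothetical adapter in C#: a class deriving from DataCollector in the Microsoft.VisualStudio.TestTools.Execution namespace, decorated with a type URI and friendly name, that subscribes to run events in Initialize. The collector name, URI, and the event chosen are illustrative assumptions, not a shipped collector.

```csharp
using System.Xml;
using Microsoft.VisualStudio.TestTools.Execution;

// A skeletal diagnostic data adapter; the URI and friendly name are placeholders.
[DataCollectorTypeUri("datacollector://MyCompany/EnvironmentCollector/1.0")]
[DataCollectorFriendlyName("Environment Variable Collector")]
public class EnvironmentCollector : DataCollector
{
    private DataCollectionLogger logger;

    public override void Initialize(
        XmlElement configurationElement,
        DataCollectionEvents events,
        DataCollectionSink dataSink,
        DataCollectionLogger logger,
        DataCollectionEnvironmentContext environmentContext)
    {
        this.logger = logger;
        // Subscribe to run-level events; collect data when the test run ends.
        events.SessionEnd += OnSessionEnd;
    }

    private void OnSessionEnd(object sender, SessionEndEventArgs e)
    {
        // Whatever data collection you need goes here; this sketch just logs.
        logger.LogWarning(e.Context, "Session ended; collecting environment data.");
    }
}
```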
Builds
If you aren't using automated builds right now, you should be. Automated builds are one of the most effective ways to reduce the time it takes to find and fix bugs. These can be Continuous Integration builds (a build runs immediately upon check-in to determine whether the check-in broke anything) or nightly builds; either way, they discover build breaks sooner, while there are fewer changed lines of code to review to find the problem. They are also critical to manual testing, and although they are not required for automated testing, they certainly make it easier.
The Builds section enables you to specify which build to execute the tests against. After you select a build, MTM provides you with information related to that build. Automated builds also light up the Test Impact Analysis results and provide the testing team with a list of all changes made to the code since the build it was previously using.
The build filter enables you to filter by build definition and build quality. Chapter 5, "Resolving Bugs," discusses build quality.
Configurations
Configurations play an important part in planning test execution, even though at run time they provide only metadata. Configurations enable you to specify various pieces of information about the tests you execute in the Test Plan, and they have a material effect on the number of tests that you need to execute and on how you plan your Test Suites. For example, the default configuration in MTM is Windows 7 and IE 8. If you have a Test Suite with 20 Test Cases, you need to execute 20 Test Cases. For every configuration that you add to a suite, all the tests need to be executed against that additional configuration as well (by default, although you can change this). So, if you have three configurations that you need to test against, you need to run 60 tests. The effects of configurations on testing and reporting are discussed in the "Assigning Test Configurations" section later in this chapter.
The "Test Configurations" section covers Test Configuration details.
Test Plan Status
This section provides status on the current Test Plan. The first pie chart lists the total number of tests broken down by successful tests, failed tests, and tests that have not yet been executed. The Failures by Type pie chart breaks down the categories of each failure. Table 3-3 shows the available categories.
Table 3-3. Failure Categories
| Category | Description |
| --- | --- |
| None | Use if the test failure is a nonissue. |
| Regression | The previous test results indicated a pass, but the test now fails. |
| New issue | A failure that has not been seen before. |
| Known issue | A failure that has been seen before; for example, a previous run found this bug, or the development team notified the testing team that the build is ready to test but that it knows about this particular failure. |
| Unknown | An error occurred, but the tester is not sure how to classify the issue. A test lead or manager should look further at Unknown issues. |
You can also categorize how each failure was resolved, but leave this empty until the defect has been fixed. Table 3-4 lists the analysis categories.
Table 3-4. Analysis Categories (Also Called Resolution Type)
| Category | Description |
| --- | --- |
| None | No resolution at this time. |
| Needs investigation | The test team has decided to investigate further because it isn't sure of the cause. |
| Test issue | Usually set if the Test Case was at fault or the setup for the test was incorrect. This might be cause for concern: if a Test Case is wrong, the requirement it is based on might also contain inaccuracies that need to be investigated. |
| Product issue | A valid failure occurred in the code. |
| Configuration issue | Usually a failure in the configuration files or on the machine on which the test was deployed. |
These graphs are updated as changes are made to the Test Plan and as test runs are completed and analyzed. (For performance reasons you might need to click the Refresh button to see the latest data.) This is a great view that quickly enables a testing team to see the progress of its testing within a given plan (as shown at the bottom of Figure 3-7).
Contents
The Contents portion of a Test Plan contains information on what will be tested; that is, it contains a list of all the Test Cases broken down into Test Suites. Figure 3-9 shows the Contents page of the Plan tab.
Figure 3-9 Test Plan contents
Refer to Figure 3-6 for the relationships between items. Test Suites can be composed in three ways: requirements-based, query-based, or custom static suites, and there are good uses for each of the three. The type of each Test Suite is indicated by the icon next to the suite name (see Figure 3-10).
Figure 3-10 Test Suites
Requirements-Based Suites
For most teams developing line-of-business applications, the entire application is built around completing requirements; therefore, it makes sense for testers to test in relation to the requirements that the developers finish. In other words, testers can rarely perform testing on partially completed requirements. They also can't test random pieces of the application because, in general, functional and integration testing relies on complete features. Even boundary tests must be performed in the context of a requirement.
And, for the most part, customers want to know the status of their requirements. Are they close to completion? Did they pass their tests? How many bugs does a given requirement have? This is true regardless of what type of methodology you use. Grouping suites by requirement makes it extremely easy to report this information back to the customer.
To create requirements-based suites, simply select a static suite (the root node or another static suite) and click Add Requirements; then choose one or more requirements. Each requirement becomes its own suite. Any Test Cases already associated with the requirement are automatically added to the suite.
Query-Based Suites
These are suites created from the results of a work item query. One example of why you might want a suite of this type is the need to test a specific area of your application whose code is involved in several different pieces of functionality; with a requirements-based suite, you could not do this. Another reason is the need to test all bug fixes regardless of which requirements they relate to. The query-based suite simply gives you more flexibility in selecting what you test and also enables you to run Test Cases from multiple Team Projects or requirements at the same time.
When creating this type of suite, you are limited to the results of the query, and the query editor lets you query only work items in the Test Case category, so a query-based suite is specific to Test Cases. Because this type of suite is based on the results of a query, if the results of the query change, so does your Test Suite. Use this type for short-term suites or for suites whose contents you don't mind changing. One place this is effective is automated regression testing: you can create a query where Automation Status = Yes, and when you execute the suite, all the automated tests execute.
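In API terms, a query-based suite is a dynamic suite whose contents are defined by a WIQL query. The following sketch, reusing the hypothetical project and plan objects from the earlier sketches, creates such a regression suite; the WIQL clauses approximate "Test Case category, Automation Status = Yes" using the standard field reference names, and the 'Automated' value is an assumption about how that Yes is stored in the default Test Case work item type.

```csharp
// Reuses the 'project' and 'plan' objects from the earlier sketches.
IDynamicTestSuite regression = project.TestSuites.CreateDynamic();
regression.Title = "Automated Regression Tests";

// Field names are the standard System and Microsoft.VSTS.TCM reference names;
// the 'Automated' value is an assumption for the UI's "Yes".
regression.Query = project.CreateTestQuery(
    "SELECT [System.Id], [System.Title] FROM WorkItems " +
    "WHERE [System.WorkItemType] = 'Test Case' " +
    "AND [Microsoft.VSTS.TCM.AutomationStatus] = 'Automated'");

plan.RootSuite.Entries.Add(regression);
plan.Save();
```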
Static Suites
A static suite is a fully custom suite; you provide the title of the suite and then add Test Cases as needed. One benefit of a static suite is that you can nest suites, which is not possible with the other two suite types. The reasons to use this type of suite vary; one example is final application testing, where you might have time to test only requirements from various areas and iterations and want to break those up into subsuites so that you can roll the results up. In MTM, when you select the New drop-down to add a new suite, the only two options you see are Suite and Query-Based Suite; the Suite option is the static suite.
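Here is a sketch of that nesting built through the API, again reusing the hypothetical project and plan objects; the suite titles mirror the Custom and Future Work example above, and only static suites accept nested suites.

```csharp
// Reuses the 'project' and 'plan' objects from the earlier sketches.
IStaticTestSuite custom = project.TestSuites.CreateStatic();
custom.Title = "Custom";

IStaticTestSuite futureWork = project.TestSuites.CreateStatic();
futureWork.Title = "Future Work";

// Only static suites can contain other suites.
custom.Entries.Add(futureWork);
plan.RootSuite.Entries.Add(custom);
plan.Save();
```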
Adding Suites and Test Cases to Your Plan
The mechanics of using the Contents window are fairly straightforward but offer many options to help you control what happens when testers begin testing. The list of Test Suites is on the left side. Figure 3-10 shows a series of Test Suites starting with the root Test Suite, which always has the name of the Test Plan (Iteration 1 here). The root Test Suite is a static suite, so you can add Test Cases directly to the root. Icons with a red check on them indicate requirements-based suites. Another way to tell is to look above the list of Test Cases in the right pane; you can click the Requirement 1 link to open the requirement that these Test Cases relate to.
The Automated Regression Tests suite in Figure 3-10 is a query-based suite, which you can tell by looking at its icon. The last suite listed, Custom, is a static suite with a Future Work subsuite; static suites enable you to easily compose and manage your Test Suites.
You can change the default configuration for all the Test Cases here, or you can change the configuration for individual tests only (this is not recommended because it can be difficult to keep track of which test is supposed to run on which configuration). You can also change who the Test Cases are assigned to, either individually, by selecting a Test Case and clicking the Assign button, or by right-clicking the Test Suite on the left and selecting Assign Testers for All Tests (or any combination of testers to Test Cases).
In addition, notice where it says State: In Progress in the upper-right corner. You can set a suite to one of three states: In Planning, In Progress, or Completed. In Progress is the default, and tests in a Test Suite that is In Progress may be executed. A Test Suite that is In Planning does not show up on the Test tab, so its tests cannot be executed; the same is true for Completed suites.
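If you manage suites through the API, the same three states appear on the suite object. A speculative fragment, assuming the suite State property is settable as it appears in the TCM object model, and reusing the custom suite from the earlier sketch:

```csharp
// Reuses the 'custom' suite and 'plan' from the earlier sketch. Suites that
// are In Planning (or Completed) do not appear on the Test tab.
custom.State = TestSuiteState.InPlanning;
plan.Save();
```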
You can also change the columns displayed for the Test Cases by right-clicking the column headers. You can filter certain columns (any column with a discrete list) to limit what displays. (For example, you can filter the Priority column in the default list of columns.)
Finally, you have the option to open a Test Case that has been added to a suite, add existing Test Cases to the suite, or create new Test Cases from within MTM. If the suite is a requirements-based suite, any Test Cases you create or add are automatically linked to the requirement (or user story) with a Tested By link type. The opposite is also true: if you remove a Test Case from a requirements-based suite, the Test Case is no longer related to the requirement. (The Tests/Tested By link is deleted, but the Test Case itself is not.)
Exercise 3-2. Create a Test Suite
This exercise assumes that you have completed Exercise 3-1.
1. Open MTM, if it's not already open.
2. Select Testing Center, Test Plan, Contents tab.
3. Select the Iteration 1 suite, which is the root suite and the only one that exists at this point.
4. Click Add Requirements on the toolbar.
5. On the Add Existing Requirements to This Plan page, click Run (see Figure 3-11).
Figure 3-11 Add Existing Requirements to This Test Plan page
6. Select the requirement As the Blog Author I Want to be Able to Log onto the Blog Engine, and click Add Requirements to Plan in the lower-right corner.