Writing the Test Plan
After you and your client have agreed upon the scope of the prototype and the test suites to be carried out, it is time to write a plan that describes exactly how you will test them. A test plan should address the following topics, which will be described in detail in the next few sections of this chapter:
- Overall project scope and objectives
- Test objectives and success criteria
- Test resources required (people, hardware, software, test tools)
- Test schedule
- Developing detailed test cases
Overall Project Scope and Objectives
A brief description of the overall project scope serves as a primer for stakeholders who are unfamiliar with the triggers and motivations for the testing project, in addition to guiding the testers' efforts as they create meaningful test cases. The following are some examples of specific project objectives that were written for particular customers:
- First Integrity Financial plans to build two new data centers in 2010 and is in the process of selecting a networking vendor with the best possible solution. Based on the customer's requirements for the new data centers, the account team has proposed a design that will be proof of concept tested in the Independent Network Services testing facility. Results of the tests will be presented to First Integrity as input into the vendor selection process.
- Spacely Sprockets is in the process of building a next-generation WAN to meet an increased, intergalactic demand for its superior bicycle components. A low-level design developed by Spacely Sprockets' network architects will be verified in the Independent Network Services testing facility to ensure that any weaknesses or limitations are found prior to deployment. Findings during the test effort will be documented and sent to the architects for LLD refinement.
Test Objectives and Success Criteria
Test objectives and success criteria should be developed based on a client's business and technical goals for the network design, and they should include any known SLAs associated with applications or services. The test objectives should be simply to measure the outcome of each test case, and they should be based, as much as possible, on industry standards for all relevant technologies and services. For example, VoIP quality can be measured quantitatively using a Mean Opinion Score (MOS).
A MOS is a subjective measure of call quality, originally designed by the Bell companies to quantify the quality of a voice call on a scale of 1 to 5, with 1 being unacceptable and 5 being superlative.
This information will help the test plan developer define relevant test cases with clearly identifiable success or failure metrics that can be agreed upon by the tester and the client. The following are examples of test objectives and success criteria written for a particular customer (a simple pass/fail sketch follows the list):
- Measure the response time for Trading application Delta when the network path is 45 percent loaded, which is the average estimated load during trading hours. The acceptance criterion, per the SLA for Trading application Delta, is that the response time must be 300 ms or less.
- Measure the throughput for Trading application Delta when the network is 90 percent loaded, which is the peak estimated load during a failure scenario in the primary path. The acceptance criterion, per the SLA for Trading application Delta, is that the throughput must be at least 1 Mbps.
- Measure the impact to test traffic when various components in the WAN path are failed over. The availability SLA for Trading application Delta specifies that less than 0.1 percent loss be encountered on a flow running at 1000 pps during a failover event.
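Success criteria written this way can also be made machine-checkable during test execution. The following is a minimal sketch, in Python, of how the three SLA criteria above might be evaluated; the measured values are hypothetical placeholders that would normally come from the traffic generator or test tool reports.

```python
# Minimal sketch: evaluate measured results against the SLA criteria above.
# The measured values are hypothetical placeholders; in practice they would
# come from the traffic generator or test tool reports.

sla_criteria = {
    "response_time_ms_max": 300,   # 45 percent load scenario
    "throughput_mbps_min": 1.0,    # 90 percent load scenario
    "loss_pct_max": 0.1,           # failover at 1000 pps
}

measured = {
    "response_time_ms": 212,
    "throughput_mbps": 1.4,
    "loss_pct": 0.04,
}

results = {
    "response_time": measured["response_time_ms"] <= sla_criteria["response_time_ms_max"],
    "throughput": measured["throughput_mbps"] >= sla_criteria["throughput_mbps_min"],
    "loss": measured["loss_pct"] <= sla_criteria["loss_pct_max"],
}

for check, passed in results.items():
    print(f"{check}: {'PASS' if passed else 'FAIL'}")
```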
Test Resources Required
The people, hardware, software, and test tools necessary to complete the test should be included in the test plan for resource estimation, test build guidance, and historical recording purposes. It is very important to accurately document the exact hardware and software versions of the components that will be tested, as even small variations in hardware or software versions can produce different results with certain test scenarios. This information will provide a valuable baseline should operational issues occur further down the road.
Table 4-2 is an example of how equipment details can be captured in the test plan.
Table 4-2. Example Hardware Equipment to Be Tested
PE Router—Generic Configuration

| Product | Description | Qty |
|---------|-------------|-----|
| XR-12000/10 | Cisco XR 12000 Series Original Router | 4 |
| 12410 | Cisco XR 12000 Series 10-Slot Router | 1 |
| 12416 | Cisco XR 12000 Series 16-Slot Router | 1 |
| 12816 | Cisco XR 12000 Series 16-Slot Router | 1 |
| 12406 | Cisco XR 12000 Series 6-Slot Router | 1 |
| XR-PRP-2 | Cisco XR 12000 Series Performance Router Processor 2 | 5 |
| 12000-SIP-601 | Cisco XR 12000 and 12000 Series SPA Interface Processor-601 | 11 |
| SPA-1X10GE-L-V2 | Cisco 1-Port 10GE LAN-PHY Shared Port Adapter | 2 |
| XFP-10GLR-OC192SR | Multirate XFP module for 10GBASE-LR and OC192 SR-1 | 2 |
| SPA-2X1GE-V2 | Cisco 2-Port Gigabit Ethernet Shared Port Adapter | 5 |
| SPA-8X1GE-V2 | Cisco 8-Port Gigabit Ethernet Shared Port Adapter | 1 |
| SPA-8X1FE-TX-V2 | Cisco 8-Port Fast Ethernet (TX) Shared Port Adapter | 3 |
| SPA-4XOC3-POS-V2 | Cisco 4-Port OC-3 POS Shared Port Adapter | 4 |
| SFP-GE-S | 1000BASE-SX SFP (DOM) | 4 |
| GLC-T | 1000BASE-T SFP | 16 |
| SFP-OC3-IR1 | OC-3/STM-1 pluggable intermediate-reach 15 km trans | 4 |
| SPA-10X1GE-V2 | Cisco 10-Port Gigabit Ethernet Shared Port Adapter | 3 |
If applicable, it is also a good idea to provide per-node details of how the line cards are to be installed in modular node chassis. This will assist with the test build and remove any ambiguity regarding the exact hardware that was tested if questions arise during test results analysis. Figure 4-2 shows an example of an equipment slot configuration diagram that can be added to the test plan.
Figure 4-2. Equipment Slot Configuration Diagram
The exact software feature set and version should be recorded for each device type and role in the network, as shown in Table 4-3.
Table 4-3. Example Software Versions to Be Tested
Platform |
Role |
Cisco IOS Software Version |
Image/Feature Set |
2811 |
CE Router |
12.3(14)T7 |
c2800nm-adventerprisek9-mz.123-14.T7.bin |
2821 |
CE Router |
12.3(14)T7 |
c2800nm-adventerprisek9-mz.123-14.T7.bin |
4500/Sup III |
L3 Switch |
12.2(25) |
cat4000-i5k91s-mz.122-25.EWA14.bin |
4500/Sup 6E |
L3 Switch |
12.2(46) |
cat4500e-entservicesk9-mz.122-46.SG.bin |
C3750 |
L2 Switch |
122-25.SEB4 |
c3750-ipbase-mz.122-25.SEB4.bin |
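Where the lab devices are reachable over a management network, version details such as these can be captured directly from the equipment rather than transcribed by hand. Below is a minimal sketch using the open source Netmiko library; the addresses and credentials are placeholder assumptions.

```python
# Minimal sketch: record platform/software details for the test plan.
# Addresses and credentials below are hypothetical placeholders.
from netmiko import ConnectHandler

devices = [
    {"device_type": "cisco_ios", "host": "10.0.0.11", "username": "lab", "password": "lab"},
    {"device_type": "cisco_ios", "host": "10.0.0.12", "username": "lab", "password": "lab"},
]

for device in devices:
    conn = ConnectHandler(**device)
    version_output = conn.send_command("show version")
    # Keep only the lines that identify the IOS version and system image file.
    for line in version_output.splitlines():
        if "Software" in line or "System image file" in line:
            print(f'{device["host"]}: {line.strip()}')
    conn.disconnect()
```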
Large test organizations often tackle several projects simultaneously, some of which are long term, requiring a team approach. An estimate of the resources allocated to a particular test should be included in the test plan, as shown in Table 4-4.
Table 4-4. People, Roles, and Time Allocation
Role |
Name |
Resource Allocation |
Program Manager |
Cosmo Spacely |
As required |
Test Manager |
George Jetson |
25% |
Test Lead |
Joseph Barbara |
100% |
Test and Documentation |
Henri Orbit |
100% |
George O'Hanlon |
50% |
Test Schedule
A test schedule designates work to be done and specifies deadlines for completing milestones and deliverables. Test entrance and exit criteria should be clearly defined so that everyone understands what tasks must be completed prior to the start of testing, and when testing is considered to be complete. An example of test entrance criteria may be that a client must approve the test plan, at which point no more changes will be allowed without a redefinition of the test scope. Test exit criteria may include running all of the planned tests, identifying or filing bugs for any defects found, and/or reviewing test results with the customer.
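Entrance and exit criteria are, in practice, checklists, and it can be useful to track them as such so that "ready to start" and "done" are unambiguous. The following minimal sketch uses example criteria invented for illustration.

```python
# Minimal sketch: track test entrance/exit criteria as explicit checklists.
# The criteria and their states below are illustrative only.

entrance_criteria = {
    "test plan approved by client": True,
    "lab topology built and cabled": True,
    "test tools licensed and calibrated": False,
}

exit_criteria = {
    "all planned test cases executed": False,
    "defects found are documented or filed as bugs": False,
    "test results reviewed with customer": False,
}

def ready(criteria: dict) -> bool:
    """Return True only when every criterion in the checklist is met."""
    return all(criteria.values())

print("OK to start testing:", ready(entrance_criteria))
print("Testing complete:", ready(exit_criteria))
```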
Table 4-5 shows a sample test schedule.
Table 4-5. Sample Test Schedule
Date |
Milestones |
Deliverables/Comments |
10/1/2009 |
Test Plan Start |
High-level test case review with customer and account team |
10/5/2009 |
Test Plan—Review & Approval |
Test Plan document review with customer and account team |
10/6/2009 |
Entrance Criteria (EC) Approval |
Project Execution Commit with sponsors |
10/6/2009 |
Test Start |
Dependent on test entrance criteria documented in EC |
10/13/2009 |
Test Complete |
Completion of all test cases |
10/20/2009 |
Test Result Report Complete |
Final test results report complete |
10/23/2009 |
Internal Test Document Review |
Review test document with internal team prior to customer review |
10/26/2009 |
Test Document Review with Customer |
Customer review of test document |
11/2/2009 |
Lab Topology Teardown |
Test Project complete |
Developing the Detailed Test Cases
As explained earlier, test cases are the essence of the test plan, as they ultimately will be followed to produce results that determine whether the device, feature, or system under test has passed or failed. As the test plan writer, you must be very precise when specifying the set of preconditions, steps, expected output, and method of data collection to be followed. This is particularly important when the people executing the tests have not been involved in developing the test plan, or are working on several different tests concurrently. When the time comes for test execution, engineers need to understand the following:
- What they are testing
- Why they are testing it
- How they are going to test it
- What information they need to capture
- The format in which they need to record results
Test cases are often classified as being either formal or informal.
Formal test cases can be directly mapped to test requirements with success criteria that are measurable through quantifiable metrics. Formal test cases have a known input and an expected output, which are worked out before the test is executed. For example, a formal test case could be developed to verify a vendor's claim that a particular firewall product can support 64,000 concurrent connections. The expected output might be that the platform should be able to forward traffic at a particular pps rate with the specified preconditions that it must be performing stateful inspection and filtering on 1 to 64,000 sessions. This type of formal case would be considered a "positive" test. A "negative" test could similarly be defined where the number of concurrent sessions was gradually increased above 64,000 at a rate of 1000 per second so that the effect on packet forwarding rate, CPU, memory, and general device health could be observed and measured. Formal test cases such as this should be linked to test requirements using a traceability matrix.
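A traceability matrix is simply a mapping between requirement identifiers and the test cases that verify them, making it easy to confirm that every requirement is covered and every formal test case is justified. The sketch below uses hypothetical requirement and test case IDs.

```python
# Minimal sketch: a requirements-to-test-case traceability matrix.
# Requirement and test case identifiers are hypothetical.

traceability = {
    "REQ-FW-001 (support 64,000 concurrent sessions)": ["TC-SEC-010", "TC-SEC-011"],
    "REQ-FW-002 (graceful behavior above 64,000 sessions)": ["TC-SEC-012"],
    "REQ-QOS-001 (voice jitter under load)": [],
}

# Flag any requirement that has no test case mapped to it.
for requirement, test_cases in traceability.items():
    status = ", ".join(test_cases) if test_cases else "NOT COVERED"
    print(f"{requirement}: {status}")
```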
For features or network services without formal requirements or quantifiable success criteria, test cases can be written based on the accepted normal operation of features or services of a similar class. For example, an informal test case could be written to demonstrate the capability of a WAN acceleration appliance (such as Cisco WAAS WAE) to improve performance on a particular TCP application. As there are no industry standards that quantify "WAN acceleration," the informal test case could simply measure the time it takes to transfer a file via FTP from a remote server with, and then without, WAN acceleration enabled. The expected output could simply be that the time to retrieve the file should be "less" when WAN acceleration is enabled, which would then be recorded as a benchmark.
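As an illustration of how such an informal benchmark might be captured, the sketch below times an FTP retrieval using only the Python standard library. The server address, credentials, and file name are placeholders; the transfer would be run once with WAN acceleration enabled and once with it disabled, and the two times compared.

```python
# Minimal sketch: time an FTP file retrieval as an informal WAN acceleration benchmark.
# Server address, credentials, and file name are hypothetical placeholders.
import time
from ftplib import FTP

def timed_ftp_get(host: str, user: str, password: str, filename: str) -> float:
    """Return the elapsed time, in seconds, to retrieve one file via FTP."""
    start = time.perf_counter()
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        with open(filename, "wb") as local_file:
            ftp.retrbinary(f"RETR {filename}", local_file.write)
    return time.perf_counter() - start

# Run once with WAN acceleration enabled and once without, then compare.
elapsed = timed_ftp_get("remote-server.example.com", "tester", "secret", "testfile.bin")
print(f"Transfer time: {elapsed:.1f} s")
```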
Understanding System Test Execution Methodologies
Chapter 1 introduced a four-phased approach to systems testing that has proven to be effective in replicating a customer's network design and in modeling application traffic characteristics. This approach includes developing a comprehensive set of test cases categorized as baseline, feature, negative, or scalability.
This section introduces a few common test methodologies that can be used to help develop test cases for each phase. These include conformance tests, functional and interoperability tests, and performance and scalability tests.
Conformance Testing
Conformance testing is used to verify compliance with standards and is often a key component of network hardware and software certification test plans. These types of tests are often challenging to develop because many network protocols are difficult to implement consistently between different vendors. Despite the existence of RFCs and IETF standards, implementations often have subtle differences because the specifications are typically informal and inevitably contain ambiguities. Sometimes there are even changes in implementation between different code levels within the same vendor's products.
Conformance tests are usually made up of both positive and negative test cases to verify how network devices comply with specific protocol standards. Conformance testing tools perform their tests as a dialog by sending protocol-specific packets to the device under test, receiving the packets sent in response, and then analyzing the response to determine the next action to take. This methodology allows conformance test tools to test complicated scenarios much more intelligently and flexibly than what is achievable by simple packet generation and capture devices.
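The following is a very small illustration of that send/receive/decide dialog, using the open source Scapy library for packet crafting. The target address is a placeholder, and a real conformance tool would drive full protocol state machines (for example, BGP OPEN and KEEPALIVE exchanges) rather than a single probe.

```python
# Minimal sketch of the send/receive/decide loop used by conformance test tools.
# The target address is a placeholder; real tools exercise full protocol state
# machines, not a single ICMP probe.
from scapy.all import IP, ICMP, sr1

target = "192.0.2.1"  # placeholder device under test

probe = IP(dst=target) / ICMP()
reply = sr1(probe, timeout=2, verbose=False)

if reply is None:
    print("No response: record the condition and choose the next stimulus.")
elif reply.haslayer(ICMP) and reply[ICMP].type == 0:
    print("Echo reply received: proceed to the next test step.")
else:
    print(f"Unexpected response ({reply.summary()}): flag for analysis.")
```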
When conducting conformance tests, keep in mind that even the test tool makers must interpret an RFC, and, as mentioned earlier, there may be differences in implementation between the test tool and the network equipment under test. If you see discrepancies, record them and work with the vendors to find a feasible workaround. Oftentimes, these differences have been seen before.
A BGP conformance test plan is provided in Chapter 6, "Proof of Concept Testing Case Study of a Cisco Data Center 3.0 Architecture," as an example.
Functional and Interoperability Testing
Functional and interoperability tests are geared toward evaluating specific device features as they would be implemented in a "realistic" setup, and as such these tests are commonly seen in POC and design verification testing. Interoperability testing is a critical aspect of testing IP services; it determines whether elements within the architecture interact with each other as expected to deliver the desired service capability. In contrast with conformance testing, which provides proof of RFC-defined protocols working between a few devices (generally two), functional and interoperability tests allow engineers to expand the test coverage from a simple, small lab setup to a more realistic, real-world configuration.
Functional and interoperability testing determines, through a larger systems test, whether the behavior of a network architecture in specific scenarios conforms to the test requirements. For example, when you enable that QoS feature on your WAN edge network, will it reduce jitter for your voice traffic, or will it cause CPU spikes, fill your interface queues, and cause your routing protocol adjacencies to drop? In this type of test, you will have multiple features enabled and competing for resources.
Functional and interoperability testing is often conducted as part of baseline testing, where all of the network features are enabled together. Only when all of the features that will operate in conjunction in your network are combined with the actual hardware and software you will be using can you get a real view of how they will all interact. Using the preceding QoS example, the routing protocol may work perfectly by itself, and the QoS policy may be doing exactly what you expect; but when you combine them with the specific Cisco IOS Software and hardware, as well as some SNMP polling, you may see an issue. This combination of complex features, hardware, and software is what functional and interoperability tests are all about.
While functional and interoperability tests do not specifically test for conformance, they sometimes help you identify conformance issues. For example, if you connect a new router to an existing lab network, you may find that the OSPF neighbors end up stuck in the Exstart/Exchange state. This problem frequently occurs when attempting to run OSPF between a Cisco router and another vendor's router, and it arises when the maximum transmission unit (MTU) settings of the neighboring router interfaces do not match.
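A quick way to spot this particular issue in the lab is to compare the interface MTU on both sides of the adjacency. The sketch below reuses the Netmiko library; the hosts, credentials, and interface names are placeholders, and the command syntax shown is for Cisco IOS (another vendor's platform would use its own device type and command).

```python
# Minimal sketch: compare interface MTUs on two OSPF neighbors to spot a mismatch.
# Hosts, credentials, and interface names are hypothetical placeholders.
import re
from netmiko import ConnectHandler

neighbors = [
    {"host": "10.0.0.21", "interface": "GigabitEthernet0/0"},
    {"host": "10.0.0.22", "interface": "GigabitEthernet0/1"},
]

mtus = {}
for neighbor in neighbors:
    conn = ConnectHandler(device_type="cisco_ios", host=neighbor["host"],
                          username="lab", password="lab")
    output = conn.send_command(f'show interface {neighbor["interface"]} | include MTU')
    match = re.search(r"MTU (\d+)", output)
    mtus[neighbor["host"]] = int(match.group(1)) if match else None
    conn.disconnect()

print("MTU mismatch!" if len(set(mtus.values())) > 1 else "MTUs match.", mtus)
```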
Performance and Scalability Testing
Performance and stress tests take the architecture to the next level. Assuming that everything is working as expected in your test environment under various test scenarios, including negative or failure tests, the next question is how well the network will perform under an increased traffic load. There are many performance metrics you should collect, as well as stress scenarios you should try out, before the network is deployed into production and required to support revenue-generating traffic.
Performance and stress tests are actually two different things. In performance testing, you are trying to create a baseline for how the network will behave during typical and increased loads, as well as during failover scenarios. The goal of performance testing is to find and eliminate bottlenecks and establish a roadmap for future regression testing. To conduct performance testing is to engage in a carefully controlled process of measurement and analysis, until you hit a predetermined threshold, be it CPU, memory, interface utilization, or something else.
Stress testing, on the other hand, tries to break the system under test by overwhelming its resources or by taking resources away from it, in which case it is sometimes called negative testing. The main purpose behind this is to make sure that the system fails and recovers gracefully, as well as to find the point at which the system will become inoperable.
When conducting a performance test, you would want to see, for example, how long it takes a router to bring up 15 OSPF neighbors each advertising 1000 routes. In a stress test, you would check how many OSPF neighbors advertising 1000 routes would cause the router to start behaving incorrectly. Both of these types of testing tend to require very expensive and extensive test gear.
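The distinction can be captured in driver form: a performance test measures at a fixed, agreed-upon load, while a stress test keeps ramping the load until a health check fails. The sketch below assumes hypothetical helper functions (start_ospf_neighbors, time_to_converge, device_healthy) that would wrap the actual test tool and device APIs.

```python
# Minimal sketch contrasting a performance test (fixed load, measure) with a
# stress test (ramp load until a health check fails). The helper functions
# below are hypothetical wrappers around the real test tool and device APIs.

def start_ospf_neighbors(count: int, routes_per_neighbor: int) -> None:
    """Hypothetical: instruct the test tool to emulate OSPF neighbors."""
    ...

def time_to_converge() -> float:
    """Hypothetical: seconds until the device under test installs all routes."""
    ...

def device_healthy() -> bool:
    """Hypothetical: check CPU, memory, and adjacency state on the device."""
    ...

# Performance test: fixed, predetermined load; record the measurement.
start_ospf_neighbors(count=15, routes_per_neighbor=1000)
print("Convergence time at 15 neighbors:", time_to_converge(), "seconds")

# Stress test: keep adding neighbors until the device misbehaves.
neighbors = 15
while device_healthy():
    neighbors += 5
    start_ospf_neighbors(count=neighbors, routes_per_neighbor=1000)
print("Device degraded at approximately", neighbors, "neighbors")
```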
Format for Written Test Case
Several articles have been written, and commercial software products are even available, to help you develop written test cases. While there is no absolute right way to write a test case, experience and best practices suggest that it should be written clearly and simply, with good grammar. It is recommended that the following information be included at a minimum (a structured-record sketch follows the list):
- Test ID: The test case ID must be unique and can also be associated with the test logs and other collected data.
- Node List: The list of the actual hardware being tested in this test case.
- Test Description: The test case description should be very brief.
- Test Phase: Baseline, Feature, Negative, or Scalability.
- Test Suite: If applicable, include the feature or service that this test case will be used to verify. Examples may include OSPF, QoS, High Availability, or VoIP.
- Test Setup: The test setup clearly describes the topology, hardware, logical configurations, test tools, applications, or other prerequisites that must be in place before the test can be executed. For complex tests, it is often helpful to include a diagram to help illustrate exactly how the test should be set up.
- Test Steps: The test steps are the step-by-step instructions on how to carry out the test. These should be very detailed so that testers with minimal experience can execute the tests.
- Expected Results: A description of the output the system must produce, or how it must react, in response to the test steps.
- Observed Results: The actual output or system reaction recorded when the test steps are executed for the given inputs.
- Pass/Fail: If the expected and observed results are the same, then the test result is Pass; otherwise, it is Fail.
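When test cases are tracked in a structured form rather than free text, these same fields map naturally onto a simple record, and the Pass/Fail field can be derived rather than typed by hand. The following minimal sketch shows one way to represent such a record; the field values are illustrative.

```python
# Minimal sketch: a written test case as a structured record, with Pass/Fail
# derived from the expected and observed results. Field values are illustrative.
from dataclasses import dataclass

@dataclass
class TestCase:
    test_id: str
    node_list: list
    description: str
    phase: str          # Baseline, Feature, Negative, or Scalability
    suite: str
    setup: str
    steps: list
    expected_results: str
    observed_results: str = ""

    @property
    def verdict(self) -> str:
        if not self.observed_results:
            return "Not Run"
        return "Pass" if self.observed_results == self.expected_results else "Fail"

case = TestCase(
    test_id="OSPF-BASE-001",
    node_list=["PE1", "PE2"],
    description="Verify OSPF adjacency forms between PE1 and PE2",
    phase="Baseline",
    suite="OSPF",
    setup="PE1 and PE2 connected back to back; baseline configurations loaded",
    steps=["Enable OSPF on both routers", "Verify neighbor state"],
    expected_results="Neighbor state FULL",
    observed_results="Neighbor state FULL",
)
print(case.test_id, case.verdict)
```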