Improvements to Your Software Test Program
Using automated test tools can increase the depth and breadth of testing. Additional benefits are outlined in the table below, Potential Benefits to Software Test Program with AST.[12]
Table Potential Benefits to Software Test Program with AST

| Improved Quality of the Test Effort |
| --- |
| Improved build verification testing (smoke test) |
| Improved regression testing |
| Multiplatform and configuration compatibility testing |
| Improved execution of mundane tests |
| Improved focus on advanced test issues |
| Testing what manual testing can't accomplish |
| Ability to reproduce software defects |
| Enhancement of system expertise |
| After-hours "lights-out" testing |
| Improved requirements definition |
| Improved performance testing |
| Improved stress and endurance testing |
| Quality measurements and test optimization |
| Improved system development lifecycle |
| Improved documentation and traceability |
| Distributed workload and concurrency testing |
Improved Build Verification Testing (Smoke Test)
The smoke test (build verification test) focuses on automating tests of the system components that make up the most important functionality. An automated test tool lets the test engineer record the manual steps usually taken to verify a software build or version; instead of manually retesting everything whenever a new build is received, the engineer plays back the smoke test to verify that the major functionality of the system is still present before any unnecessary manual testing begins. As an example, when delivering systems for one of our largest customers between 2003 and 2008, it was critical that every delivery to every platform be smoke-tested. The smoke test involved semiautomated overnight runs that were checked after a period of time, with reports generated based on the outcome. The delivery goals, in terms of the number of platforms to receive verified software, were ambitious, and meeting them was necessary to bring needed capability and functionality to the customer. The point is that streamlining smoke testing through automation is of huge benefit and value to many of our customers. It also serves as a time and cost control, ensuring you do not test a system in depth before it is basically stable; this reduces rework and is another great cost-containment strategy.
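As a minimal illustration (not from any particular tool), a build verification script in pytest might look like the following sketch; the base URL and endpoints are hypothetical stand-ins for an AUT's major functions.

```python
# smoke_test.py -- a minimal build verification sketch; BASE_URL and the
# endpoints below are hypothetical stand-ins for the AUT's major functions.
# Run against each new build with:  pytest smoke_test.py
import requests  # third-party: pip install requests

BASE_URL = "http://localhost:8080"  # assumed address of the freshly built AUT

def test_application_responds():
    # The build must at least serve its landing page.
    assert requests.get(f"{BASE_URL}/", timeout=10).status_code == 200

def test_login_page_present():
    # Major functionality check: the login entry point still exists.
    assert requests.get(f"{BASE_URL}/login", timeout=10).status_code == 200

def test_search_returns_results():
    # Spot-check one core transaction before deeper manual testing begins.
    response = requests.get(f"{BASE_URL}/search", params={"q": "test"}, timeout=10)
    assert response.status_code == 200
```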
Improved Regression Testing
A regression test is a test or set of tests executed on a baselined system or product (baselined in a configuration management system) when part of the total system product environment has been modified. The test objective is to verify that the functions provided by the modified system or product are as specified and that there has been no unintended change in operational functions.
An automated test tool provides for simplified regression testing. Automated regression testing can verify that no new bugs were introduced into a new build. Experience shows that modifying an existing program is a more error-prone process (in terms of errors per statement written) than writing a new program.[13]
Regression testing should occur after each release of a previously tested application. The smoke test described previously is a mini, rapid regression test of major functionality; regression testing expands on it to cover all existing functionality that has already been proven viable. The regression test suite is the subset of all test procedures that exercises the basic functionality of the application, and it may also include the test procedures with the highest probability of detecting the most errors. Regression testing should be done with an automated tool, since it is usually lengthy and tedious and thus prone to human error.
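One common way to carve the rapid smoke subset out of the fuller regression suite is with test markers; the pytest markers and test names below are illustrative, not prescribed by any particular tool.

```python
# Illustrative use of pytest markers to layer smoke vs. full regression runs.
# Register the markers in pytest.ini to silence warnings:
#   [pytest]
#   markers =
#       smoke: rapid build verification subset
#       regression: full set of proven-viable functionality
import pytest

@pytest.mark.smoke
@pytest.mark.regression
def test_core_login_flow():
    ...  # basic functionality: belongs to both the smoke and regression sets

@pytest.mark.regression
def test_discount_rounding_regression():
    ...  # a procedure with a history of catching defects: regression only

# Run the rapid smoke subset:      pytest -m smoke
# Run the full regression suite:   pytest -m regression
```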
Multiplatform and Configuration Compatibility Testing
Another example of the savings attributable to automated testing is the reuse of test scripts to support testing from one platform (hardware configuration) to another. Prior to the use of automated testing, a test engineer would have had to repeat each manual test required for a specific environment step by step when testing in a new environment. Now when test engineers create the test scripts for an AUT on platform x or configuration x, they can just play back the same scripts on platform y or configuration y, when using multiplatform-compatible tools. As a result, the test has been performed for the AUT on all platforms or configurations.
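A sketch of how one script can replay across configurations, assuming a multiplatform-compatible framework; the platform/browser pairs and the stubbed test body are invented for illustration.

```python
# One test script, many configurations: parametrization replays the same
# steps for each platform/browser pair (the pairs below are examples).
import pytest

CONFIGURATIONS = [
    ("windows", "chrome"),
    ("linux", "firefox"),
    ("macos", "safari"),
]

@pytest.mark.parametrize("platform,browser", CONFIGURATIONS)
def test_login_flow(platform, browser):
    # A real framework would dispatch to a driver or grid node for the
    # matching environment; this stub only records the configuration.
    session = {"platform": platform, "browser": browser}
    assert session["platform"] in {"windows", "linux", "macos"}
```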
Improved Execution of Mundane Tests
An automated test tool will eliminate the monotony of repetitious testing. Mundane repetitive tests are the source of many errors. A test engineer may get tired of testing the same monotonous steps over and over again. We call that tester fatigue or immunity to defects. Habituation is when you become used to the way the system works and don’t see the problems—it has become a habit to see the working solution without considering the negative, possibly nonworking, paths. A test script will run those monotonous steps over and over again and can automatically validate the results.
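To make the point concrete, here is a toy data-driven loop; the records and the validated rule are invented. The script applies the same validation to every record with identical rigor, which is exactly what a fatigued or habituated human tends not to do.

```python
# A script never tires: the same steps and the same validation replay for
# every record (records and the "accepted" rule are invented for illustration).
test_records = [
    {"name": "alice", "amount": 100},
    {"name": "bob", "amount": 250},
    {"name": "carol", "amount": 0},
]

def process(record):
    # Stand-in for the repetitive manual steps being automated.
    return {"status": "accepted" if record["amount"] >= 0 else "rejected"}

for record in test_records:
    result = process(record)
    # The check a habituated tester might start skimming past:
    assert result["status"] == "accepted", f"record {record['name']} failed"

print("all repetitive checks executed and validated")
```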
Improved Focus on Advanced Test Issues
Automated testing allows for simple repeatability of tests. A significant amount of testing effort is spent on the basic user interface operations of an application and on comparing expected to actual outputs.
Automated testing presents the opportunity to move on more quickly and to perform a more comprehensive overall test within the schedule allowed. Automatic creation of user interface operability tests or automated test result output comparison gets these tests out of the way, allowing test teams to turn their creativity and effort to more advanced test problems and concerns.
Testing What Manual Testing Can’t Accomplish
Software systems and products are becoming more complex, and sometimes manual testing is not capable of supporting all desired tests. Some types of testing analysis simply can't be performed manually anymore, such as code coverage analysis, memory leak detection, and cyclomatic complexity measurement. It would require many man-hours to compute the cyclomatic complexity of the code for any large application, and detecting memory leaks with manual test methods would be nearly impossible.
Security testing of an application is almost impossible using manual techniques alone, and there are now tools on the market that automate it. Consider also, for example, a test that determines in a matter of seconds whether all of an application's Web links are up and running; performing the same check manually would take hours or days, if it were feasible at all.
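As a sketch of that link-checking example, the standard library plus the requests package can sweep a page's links in seconds; the starting page URL is a placeholder.

```python
# Link checker sketch: collect every <a href> on a page and report each
# link's HTTP status (the starting page is a placeholder).
from html.parser import HTMLParser
from urllib.parse import urljoin
import requests  # third-party: pip install requests

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = "https://example.com/"
collector = LinkCollector()
collector.feed(requests.get(page, timeout=10).text)

for href in collector.links:
    url = urljoin(page, href)
    if not url.startswith("http"):
        continue  # skip mailto:, javascript:, in-page anchors, etc.
    status = requests.head(url, timeout=10, allow_redirects=True).status_code
    print(f"{status}  {url}")
```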
Ability to Reproduce Software Defects
Test engineers often encounter the problem of having detected a defect, only to find later that the defect is not reproducible. With an automated test tool, the application developer can simply play back the automated test script, without having to worry about whether the exact steps taken to detect the defect were properly documented or can be re-created.
Enhancement of System Expertise
Many test managers have probably experienced a situation where the one resident functional expert on the test team is gone from the project for a week during a critical time of testing. The use of existing automated test scripts allows the test team to verify that the original functionality still behaves in the correct manner even without the expert. At the same time, the tester can learn more about the functionality of the AUT by watching the script execute the exact sequence of steps required to exercise the functionality.
After-Hours “Lights-Out” Testing
Automated testing allows for simple repeatability of tests. Since most automated test tools allow for scripts to be set up to kick off at any specified time, automated testing allows for after-hours testing without any user interaction. The test engineer can set up a test script program in the morning, for example, to be kicked off automatically by the automated test tool at, say, 11 that night, while the test team is at home sound asleep. The next day, when the test team returns to work, the team can review the test script output and conduct an analysis. Another convenient time for kicking off a script is when the test engineer goes to lunch, attends a meeting, or is about to depart for home at the end of the workday. Initiating tests at these times makes maximum use of the test lab and time.
During these times automated testing can also take advantage of distributed testing, described later in this section. After the engineers have gone home, multiple machines in the lab can be used for concurrency and distributed testing.
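Most tools schedule unattended runs natively, as can cron or Task Scheduler, but a bare-bones kickoff can be sketched with the Python standard library alone; the suite path and report file below are examples.

```python
# Lights-out kickoff sketch: sleep until 11 p.m., launch the suite
# unattended, and leave results behind for the morning review.
import datetime
import subprocess
import time

def seconds_until(hour: int, minute: int = 0) -> float:
    now = datetime.datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += datetime.timedelta(days=1)  # already past today: use tomorrow
    return (target - now).total_seconds()

time.sleep(seconds_until(23))  # wait until 23:00
subprocess.run(
    ["pytest", "regression/", "--junitxml=results/nightly.xml"],
    check=False,  # a failing suite should still leave its report behind
)
```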
Improved Requirements Definition
If requirements management is automated as part of the software testing lifecycle, various benefits can be gained, such as:
- The ability to keep historical records of any changes or updates, i.e., an audit trail of changes.
- An automated Requirements Traceability Matrix (RTM), i.e., the linking of requirements to all artifacts of the software development effort, including test procedures, pass/fail results, and defects. Automated maintenance of the RTM is another major benefit (a minimal sketch follows this list).
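Here is a minimal sketch of what an automated RTM boils down to, with invented requirement, test, and defect IDs: each requirement links to its test procedures, pass/fail results, and defects, and coverage gaps fall out automatically.

```python
# Toy Requirements Traceability Matrix: all IDs below are invented.
rtm = {
    "REQ-101": {"tests": ["TP-001", "TP-002"], "defects": []},
    "REQ-102": {"tests": ["TP-003"], "defects": ["DEF-017"]},
    "REQ-103": {"tests": [], "defects": []},  # requirement not yet covered
}

latest_results = {"TP-001": "pass", "TP-002": "pass", "TP-003": "fail"}

for req, links in rtm.items():
    if not links["tests"]:
        print(f"{req}: NO TEST COVERAGE")
        continue
    verdicts = ", ".join(latest_results.get(tp, "not run") for tp in links["tests"])
    print(f"{req}: {verdicts}; open defects: {links['defects'] or 'none'}")
```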
Improved Performance Testing
Performance information and transaction timing data are no longer gathered with stopwatches. Yet even very recently, in one Fortune 100 company, performance testing was conducted with one test engineer holding a stopwatch while another executed the functionality manually. This method of capturing performance measures is labor-intensive and highly error-prone, and it does not allow for automatic repeatability. Today many performance- and load-testing tools are available, both open-source and vendor-provided, that allow the test engineer to measure system and application response times automatically, producing timing numbers and graphs and pinpointing the bottlenecks and thresholds of the system. This genre of tool has the added benefit of traversing application functionality while gathering transaction timings; in other words, this type of test automation represents an end-to-end (ETE/E2E) test. A test engineer no longer needs to sit there with a stopwatch. Instead, the test engineer initiates a test script that captures the performance statistics automatically, freeing the engineer for more creative and intellectually challenging testing work.
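In its simplest form the stopwatch is replaced by a few lines of code; the URL and sample count below are placeholders, and real performance tools add ramp-up, think time, and richer reporting on top of this idea.

```python
# Automated transaction timing sketch: no stopwatch required.
import statistics
import time
import requests  # third-party: pip install requests

URL = "https://example.com/api/search"  # placeholder transaction under test
samples = []

for _ in range(50):  # 50 timed transactions
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    samples.append(time.perf_counter() - start)

samples.sort()
print(f"mean   {statistics.mean(samples):.3f} s")
print(f"median {statistics.median(samples):.3f} s")
print(f"p95    {samples[int(len(samples) * 0.95)]:.3f} s")
```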
Improved Stress and Endurance Testing
It is expensive, difficult, inaccurate, and time-consuming to stress-test an application adequately using purely manual methods. This is because of the inability to reproduce a test when a large number of users and workstations are required for it. It is costly to dedicate sufficient resources to these tests, and it is difficult to orchestrate the necessary number of users and machines. A growing number of test tools provide an alternative to manual stress testing. These tools can simulate a large number of users interacting with the system from a limited number of client workstations. Generally, the process begins by capturing user interactions with the application and the database server within a number of test scripts. Then the testing software runs multiple instances of test scripts to simulate large numbers of users.
A test tool that supports performance testing also supports stress testing. Stress testing is the process of running client machines and/or batch processes under high-volume scenarios, subjecting the system to extreme loads in order to find its thresholds: whether and where the system breaks, and what breaks first. It is important to identify the weak points of the system. System requirements should define thresholds and describe how a system should respond when subjected to an overload. Stress testing is useful both for operating a system at its maximum design load to make sure it works and for verifying that the system behaves as specified when subjected to an overload.
Many automated test tools come with a load simulator, which is a facility that lets the test engineer simulate hundreds or thousands of virtual users simultaneously working on the AUT. Nobody has to be present to kick off the tests or monitor them; a time can be set when the script will kick off and the test scripts can run unattended. Most tools produce a test log output listing the results of the stress test. The automated test tool can record any unexpected active window, such as an error dialog box, and test personnel can review the message contained in the unexpected window, such as an error message.
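A bare-bones illustration of the load simulator idea: many virtual users driven from one workstation with a thread pool. The user count and endpoint are invented, and purpose-built tools such as JMeter or Locust add pacing, ramp-up, and reporting beyond this sketch.

```python
# Virtual-user sketch: 200 simultaneous requests from a single machine.
from concurrent.futures import ThreadPoolExecutor
import requests  # third-party: pip install requests

URL = "https://example.com/login"  # placeholder endpoint under load
VIRTUAL_USERS = 200

def virtual_user(user_id: int) -> int:
    response = requests.post(URL, data={"user": f"vu{user_id}"}, timeout=30)
    return response.status_code

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    codes = list(pool.map(virtual_user, range(VIRTUAL_USERS)))

server_errors = sum(1 for c in codes if c >= 500)
print(f"{server_errors} of {VIRTUAL_USERS} virtual users hit server errors")
```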
Quality Measurements and Test Optimization
Automated testing produces quality metrics and allows for test optimization. Automated testing produces results that can be measured and analyzed, and the process itself can be measured and repeated. Without automation it is difficult to repeat a test, and without repetition it is difficult to get any kind of measurement. With a manual testing process, the chances are good that the steps taken during the first iteration of a test will not be exactly the steps taken during the second, making it difficult to produce comparable quality measurements. With automated testing the testing steps are repeatable and measurable.
A test engineer's analysis of quality measurements can support efforts to optimize tests, but only when tests are repeatable, and automation provides that repeatability. A test engineer can optimize a regression test suite by performing the following steps (a sketch follows the list):
- Run the regression test set.
- If defects surface later that the regression test set did not catch, add the test procedures that uncovered those defects to the regression test set.
- Keep repeating these steps as the regression test set is optimized, using quality measurements (in this case the metric would be the number of defects that escaped the regression test set).
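The loop above reduces to a simple measurement cycle; here is a toy version with invented IDs, where the metric is the number of defects that escaped the current regression set.

```python
# Regression set optimization sketch: defects that escaped the current set
# promote the tests that later caught them (all IDs are invented).
regression_set = {"TP-001", "TP-002", "TP-003"}

# Defects found after a clean regression run, mapped to the test that
# eventually uncovered each one.
escaped_defects = {"DEF-021": "TP-104", "DEF-022": "TP-221"}

escape_count = len(escaped_defects)  # the optimization metric for this cycle
regression_set |= set(escaped_defects.values())

print(f"escapes this cycle: {escape_count}")
print(f"optimized regression set: {sorted(regression_set)}")
```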
Improved System Development Lifecycle
AST can support each phase of the system development lifecycle, and various vendor-provided automated test tools are available to do just that. For example, there are tools for the requirements definition phase that help produce test-ready requirements in order to minimize the effort and cost of testing. Likewise, there are tools supporting the design phase, such as modeling tools that can record the requirements within use cases. Use cases represent user scenarios that exercise various combinations of system-level (operational-oriented) requirements. Use cases have a defined starting point, a defined user (a person or an external system), a set of discrete steps, and defined exit criteria.
There are also tools for the programming phase, such as code checkers, static and dynamic analyzers, metrics reporters, code instrumentors, product-based test procedure generators, and many more. If requirements definition, software design, and test procedures have been prepared properly, application development may be the easiest activity of the bunch, and test execution will surely run more smoothly given these conditions.
Improved Documentation and Traceability
Test programs using AST will also benefit from improved documentation and traceability. The automated test scripts along with the inputs and expected results provide an excellent documentation baseline for each test. In addition, AST can provide exact records of when tests were run, the actual results, the configuration used, and the baseline that was tested. AST is a dramatic improvement over the scenario where the product from the test program is half-completed notebooks of handwritten test results and a few online logs of when tests were conducted.
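As an illustration of the kind of record an automated run can capture for each test; all field names and values below are invented.

```python
# Sketch of an automatically captured test execution record (values invented).
import datetime
import json

record = {
    "test_id": "TP-001",
    "executed_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "result": "pass",
    "configuration": {"os": "linux", "browser": "firefox"},
    "baseline": "build-4.2.17",  # the configuration-managed baseline tested
}
print(json.dumps(record, indent=2))
```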
Some of the benefits AST can provide to a test program include expanded test coverage, enabling tests to be run that cannot practically be run manually, repeatability, improved documentation and traceability, and freeing the test team to focus on advanced issues.
Distributed Workload and Concurrency Testing
It is almost impossible to conduct a distributed workload or concurrency test that provides useful results without some form of AST; this is one of the types of testing that benefits most from automation. Since hardware can be expensive and replicating a production environment is often costly, using virtual machines (e.g., VMware) along with an AST framework allows for the most effective implementation of this type of test.
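A minimal concurrency sketch, assuming a hypothetical endpoint: many simultaneous writers hit the same record so that lost updates surface. A real distributed test would spread these workers across lab machines or VMs rather than threads on one host.

```python
# Concurrency test sketch: 25 simultaneous debits against one account
# (the endpoint and payload are hypothetical).
from concurrent.futures import ThreadPoolExecutor
import requests  # third-party: pip install requests

URL = "https://example.com/api/account/42/debit"

def debit(worker_id: int) -> int:
    return requests.post(URL, json={"amount": 1}, timeout=30).status_code

with ThreadPoolExecutor(max_workers=25) as pool:
    statuses = list(pool.map(debit, range(25)))

print(statuses)
# After the run, the account balance should reflect exactly 25 debits;
# anything else indicates a lost update under concurrent load.
```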