Phase 1: Decision to Automate Testing
The decision to automate testing represents the first phase of the ATLM. This phase covers the entire process that goes into the automated testing decision. During this phase, it's important for the test team to manage automated testing expectations and to outline the potential benefits of automated testing when implemented correctly. A test tool proposal needs to be outlined, which will be helpful in acquiring management support.
Overcoming False Expectations for Automated Testing
While automated testing has proven valuable and can produce a successful return on investment, the payback is rarely immediate. It's important to address some of the misconceptions that persist in the software industry and to dispel the notion of an automated testing utopia. People often see test automation as a silver bullet; when they discover that it requires a significant short-term investment of time and energy to achieve a long-term return on investment (ROI), such as faster and cheaper regression testing, the testing tool often becomes "shelfware." This is why it's important to manage expectations in order to introduce automated testing correctly into a project. The sections that follow address a few of the most common misconceptions.
Automatic Test Plan Generation
Currently, there is no commercially available tool that can automatically create a comprehensive test plan while also supporting test design and execution.
Throughout a software test career, the test engineer can expect to witness test tool demonstrations and review an abundance of test tool literature. Often the test engineer will be asked to stand before one or more senior managers to give an overview of a test tool's functionality. As always, the presenter must bear in mind the audience. In this case, the audience may consist of individuals with just enough technical knowledge to make them enthusiastic about automated testing, yet unaware of the complexity involved in an automated test effort. Specifically, the managers may have obtained secondhand information about automated test tools and may have reached the wrong interpretation of their actual capabilities.
What the audience at the management presentation may be waiting to hear is that the tool you're proposing automatically develops the test plan, designs and creates the test procedures, executes all the test procedures, and analyzes the results automatically. Meanwhile, you start out the presentation by informing the group that automated test tools should be viewed as enhancements to manual testing, and that automated test tools will not automatically develop the test plan, design and create the test procedures, or execute the test procedures.
Shortly into the presentation and after several management questions, it becomes very apparent just how much of a divide exists between the reality of the test tool capabilities and the perceptions of the individuals in the audience. The term automated test tool seems to bring with it a great deal of wishful thinking that's not closely aligned with reality. An automated test tool will not replace the human factor necessary for testing a product. The proficiencies of test engineers and other quality assurance experts will still be needed to keep the test machinery running. A test tool can be viewed as an additional part of the machinery that supports the release of a good product.
One Test Tool Fits All
Currently, no single test tool exists that can be used to support all operating system environments.
Generally, a single test tool will not fulfill all the testing requirements for an organization. Consider the experience of one test engineer encountering such a situation. The test engineer was asked by a manager to find a test tool that could be used to automate the testing of all the department's applications. The department was using various technologies including mainframe computers and Sun workstations; operating systems such as Windows 3.1, Windows 95, Windows NT, and Windows 2000; programming languages such as Visual C++ and Visual Basic; other client/server technologies; and Web technologies such as DHTML, XML, ASP, and so on.
After conducting a tool evaluation, the test engineer determined that the tool of choice was not compatible with the Visual C++ third-party add-ons (in this case, Stingray grids). Another tool had to be brought in that was compatible with this specific application.
Immediate Reduction in Schedule
An automated test tool will not immediately minimize the testing schedule.
Another misconception is the expectation that using an automated testing tool on a new project will immediately shorten the test schedule. The schedule will not show the anticipated decrease at first; in fact, an allowance for a schedule increase is required when an automated test tool is initially introduced, because rolling out the tool means that the current testing process must be augmented or an entirely new testing process must be developed and implemented. The entire test team, and possibly the development team, needs to become familiar with this new automated testing process (such as the ATLM) and needs to follow it. Once the automated testing process has been established and effectively implemented, the project can expect gains in productivity and turnaround time that have a positive effect on schedule and cost.
Benefits of Automated Testing
The previous discussion points out and clarifies some of the false expectations that surround automated testing. The test engineer also needs to be able to elaborate on the true benefits of automated testing when it is implemented correctly and a process is followed. The test engineer must evaluate whether the potential benefits fit the required improvement criteria and whether pursuing automated testing on the project remains a logical fit, given the organization's needs. There are three significant benefits of automated testing (in combination with manual testing):
Producing a reliable system.
Improving the quality of the test effort.
Reducing test effort and minimizing schedule.
Many return-on-investment case studies have examined the implementation of automated testing. One example is a research effort conducted by imbus GmbH, which ran a test automation value study to collect measurements comparing test automation with manual test methods. Their research determined that the breakeven point of automated testing lies, on average, at 2.03 test runs. (T. Linz and M. Daigl, "GUI Testing Made Painless: Implementation and Results of the ESSI Project Number 24306," 1998.)
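The breakeven idea can be sketched with a simple amortization model: automation pays off once its one-time setup cost is recovered through per-run savings over repeated test runs. The sketch below is illustrative only; the hour figures are hypothetical and are not taken from the imbus study.

```python
# Hypothetical cost model for the test automation breakeven point.
# All figures are illustrative assumptions, not measured data.

def breakeven_runs(setup_cost: float,
                   manual_cost_per_run: float,
                   automated_cost_per_run: float) -> float:
    """Return the number of test runs at which total automated cost
    (setup plus per-run cost) equals total manual cost."""
    savings_per_run = manual_cost_per_run - automated_cost_per_run
    if savings_per_run <= 0:
        # If an automated run costs as much as a manual one,
        # the setup cost is never recovered.
        raise ValueError("automation never breaks even: no per-run savings")
    return setup_cost / savings_per_run

# Example: 90 hours to build the automated suite, 50 hours per manual
# regression run, 5 hours per automated run (maintenance and analysis).
runs = breakeven_runs(setup_cost=90,
                      manual_cost_per_run=50,
                      automated_cost_per_run=5)
print(f"Breakeven after {runs:.2f} test runs")  # Breakeven after 2.00 test runs
```

A model like this, even with rough estimates, gives management a concrete basis for the cost/benefit discussion described later in this section.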
Acquiring Management Support
Whenever an organization adopts a new technology, it faces a significant effort in determining how to apply that technology to its needs. Even with training completed, organizations wrestle with time-consuming false starts before they become proficient with the new technology. For a test team interested in implementing automated test tools, the challenge is how best to present the case for a new test automation technology and its implementation to the management team.
Test engineers need to influence management's expectations for the use of automated testing on projects. They can help manage the expectations of others in the organization by forwarding helpful information to the management staff. Raising test tool issues during strategy and planning meetings can also build a better understanding of test tool capabilities among everyone involved on a project or within the organization. A test engineer can develop training material on automated testing and recommend to management that a seminar be scheduled to conduct the training.
The first step in moving toward a decision to automate testing on a project requires that the test team adjust management's understanding of the appropriate application of automated testing to the specific need at hand. For example, the test team needs to check early on whether management is cost-averse and would be unwilling to accept the estimated cost of automated test tools for a particular effort. If so, test personnel need to convince management of the potential return on investment by conducting a cost/benefit analysis.
If management is willing to invest in an automated test tool, but is unable or unwilling to staff a test team with individuals having the proper software skill level or to provide for adequate test tool training, the test team needs to point out the risks involved and/or may need to reconsider a recommendation to automate test.
Management also needs to be made aware of the additional costs involved in introducing a new tool: not only the purchase price, but also the initial schedule and cost increase, additional training costs, and the cost of enhancing an existing testing process or implementing a new one.
Test automation represents highly flexible technology, which provides several ways to accomplish an objective. Use of this technology requires new ways of thinking, which only amplifies the problem of test tool implementation. Many organizations can readily come up with examples of their own experience of technology that failed to deliver on its potential because of the difficulty of overcoming the "Now what?" syndrome. The issues that organizations face when adopting automated test systems include those outlined below:
Finding/hiring test tool experts.
Using the correct tool for the task at hand.
Developing and implementing an automated testing process, which includes developing automated test design and development standards.
Analyzing various applications to determine those that are best suited for automation.
Analyzing the test requirements to determine the ones suitable for automation.
Training the test team on the automated testing process, automated test design, development, and execution.
Absorbing the initial increase in schedule and cost.