- 29.1 Three Grains of Rice
- 29.2 Understanding Has to Grow
- 29.3 First Day Automated Testing
- 29.4 Attempting to Get Automation Started
- 29.5 Struggling with (against) Management
- 29.6 Exploratory Test Automation: Database Record Locking
- 29.7 Lessons Learned from Test Automation in an Embedded Hardware-Software Computer Environment
- 29.8 The Contagious Clock
- 29.9 Flexibility of the Automation System
- 29.10 A Tale of Too Many Tools (and Not Enough Cross-Department Support)
- 29.11 A Success with a Surprising End
- 29.12 Cooperation Can Overcome Resource Limitations
- 29.13 An Automation Process for Large-Scale Success
- 29.14 Test Automation Isn't Always What It Seems
29.13 An Automation Process for Large-Scale Success
Michael Snyman, South Africa
Test manager
I work for a large bank in South Africa that employs 25,000 staff. Between 2006 and 2008 we adopted test automation with the clear aim of reducing costs and increasing productivity.
It was W. Edwards Deming who said, “If you can’t describe what you are doing as a process, you don’t know what you are doing.” In our case, this was true; any success in the area of automation was due to individual skill and a large amount of luck. The challenge was in taking what the successful individuals did and describing this practice in the form of a process.
29.13.1 Where We Started
Our shelves were littered with numerous tool acquisitions and implementations with varying degrees of success. Each of these individual attempts had been focused on very limited and sometimes selfish objectives. The habit of looking only at accomplishing immediate project goals had significantly affected the ability of the organization to optimally use its selected tools. Such a one-sided view of automation had a considerable negative effect on operational activities such as regression testing and on justifying the investment made. Compounding the problem was the loss of valuable information in the form of test cases, test scenarios, and test data.
Automation was involved too late in the process. How often is automation viewed as the savior of the behind-schedule project? When automation does not deliver on these unrealistic expectations, it becomes yet another automation failure. The reality is very different: in my experience, automation requires multiple cycles and project releases before it becomes fully effective and provides an acceptable ROI.
We weren’t capitalizing on what we could have learned. For example, a failure experienced in production is an example of a test missed and one that should be included in the test cases for the next release. Test automation should provide an interface for both manual testers and incident management systems with the aim of capturing lessons learned during any phase in the project lifecycle.
The seeming lack of success in test automation initiatives and the large upfront investment required deter projects from planning and implementing test automation. The reluctance to learn from unsuccessful implementations and the habit of blaming the tool for failure in automation projects have resulted in a stigma attached to specific tools and to automation in general.
Past attempts to justify automation focused on quality as the key attribute to be considered and measured. The difficulty in dealing with quality is that it is extremely complex. We clearly needed a way of providing a cost–benefit calculation for test automation using an attribute other than quality.
In the absence of a detailed automation framework and process, a large dependency was placed on the skill and ability of individual team members.
29.13.2 The Key to Our Eventual Success: An Automation Process
In 2006, a formal project was launched with dedicated resources, a champion for automation, a good technical framework, clear goals, and a detailed plan. In this anecdote, I describe one aspect that was critical to our success in achieving automation on a large scale.
It was clear, based on past experience, that a standard approach for automation should be defined and documented in the form of a test automation process. However, this process could not exist in isolation; it had to be integrated into the newly defined manual test process and be compatible with the organizational software development lifecycle (SDLC). For example, the automation framework described high-level activities such as specification analysis, script creation, scenario documentation, validation, and data sourcing, each of which needed to be satisfied by a detailed process. The full process is shown in Figure 29.4.
Figure 29.4 Automation process
From the documented automation framework, we were able to extract the key process activities required to perform and support most automated testing activities. Here follows a brief description of the objective of each step (a short illustrative sketch of the spreadsheet-driven flow appears after the list).
- Analysis and design: Understand the client’s requirements, and establish if it is possible to satisfy each requirement with current technology at our disposal.
- Scripting and configuration: Implement the client’s requirements via an automated solution. This might include recording, coding, and building special utilities.
- Parameter definition: Assess scripts against scenarios defined by system users, with the aim of identifying elements to be parameterized.
- Parameter management: Manage large amounts of data in customized spreadsheets.
- Scenario collection: Populate spreadsheets with scenarios provided by stakeholders of the system.
- Validation: Check the spreadsheets and parameters, incorporating pass and fail criteria in the spreadsheets and allowing the automated script to validate results of executed tests.
- Testing of scripts: Ensure that the scripts run as expected, and remove any bugs in the scripts.
- Script execution: Run the scripts with the scenarios and parameters defined.
- Review of results: Internally review the results of script execution, what tests passed and failed, any common problems such as an unavailable environment, and so on.
- Result communication: Summarize the results sent to managers, developers, stakeholders, and others.
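To make the spreadsheet-driven steps above more concrete, here is a minimal sketch in Python of how scenario collection, validation, and script execution can fit together. It is illustrative only, not the bank's actual implementation: the file names, column names, and the run_transfer placeholder are assumptions.

```python
import csv

def run_transfer(from_account, to_account, amount):
    # Placeholder for the automated script driving the system under test;
    # it simply echoes a status so the example runs on its own.
    return "OK" if float(amount) > 0 else "REJECTED"

def execute_scenarios(scenario_file, results_file):
    # Read parameterized scenarios, run each one, and validate the outcome
    # against the expected result recorded in the same spreadsheet.
    with open(scenario_file, newline="") as src, open(results_file, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames + ["actual", "verdict"])
        writer.writeheader()
        for row in reader:
            actual = run_transfer(row["from_account"], row["to_account"], row["amount"])
            row["actual"] = actual
            row["verdict"] = "PASS" if actual == row["expected"] else "FAIL"
            writer.writerow(row)

if __name__ == "__main__":
    # scenarios.csv is assumed to have columns: from_account, to_account, amount, expected
    execute_scenarios("scenarios.csv", "results.csv")
```

The results file produced here corresponds to the review and communication steps: it lists each scenario with its verdict, ready to be summarized for stakeholders.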
29.13.3 What We Learned
These are the main lessons we learned on our journey through test automation:
- Having a tool is not an automation strategy.
  - The tool is nothing more than an enabler of a well-thought-out set of automation activities.
  - We believe that if you approach automation correctly, you should be able to switch between tools with little or no impact.
- Automation does not test in the same way as manual testers do.
  - Automation will never replace manual testers. We view automation as an extension of the manual tester, taking care of mundane activities such as regression testing, leaving the tester to get on with the more intellectual work.
- Record and playback is only the start.
  - A set of recorded, unparameterized scripts has very limited reuse and ages quickly. The focus on data-driven automation provides us with the flexibility and reuse required.
- Automation test scripts are software programs and must be treated as such.
  - Follow a standard software development lifecycle in the creation of automated scripts.
  - Document requirements; design, implement, and test your automated scripts.
- The value lies in the maintenance.
  - The secret of getting a good return on your investment is reuse; for this to be possible, ensure maintenance is simple.
  - A keyword- or data-driven approach facilitates both reuse and easy maintenance (see the sketch after this list).
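As a rough illustration of the keyword-driven idea mentioned above, the following sketch shows test steps expressed as data (a keyword plus its arguments) and dispatched to small functions. The keywords and functions are hypothetical, chosen only to show the shape of the approach; maintenance then means editing rows of data rather than scripts.

```python
# Hypothetical keyword implementations: in a real framework these would
# drive the system under test rather than print.
def open_account(holder):
    print(f"open account for {holder}")

def deposit(account, amount):
    print(f"deposit {amount} into account of {account}")

def check_balance(account, expected):
    print(f"check that balance for {account} is {expected}")

# The keyword table maps names used in the test data to implementations.
KEYWORDS = {
    "open_account": open_account,
    "deposit": deposit,
    "check_balance": check_balance,
}

# Test steps as data; in practice these rows would live in a spreadsheet.
test_steps = [
    ("open_account", ["Alice"]),
    ("deposit", ["Alice", "100"]),
    ("check_balance", ["Alice", "100"]),
]

for keyword, args in test_steps:
    KEYWORDS[keyword](*args)
```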
29.13.4 Return on Investment
Our automation process enabled us to achieve consistency of automation practices across the bank. We showed a benefit of $8,600,000 after 3 years. This benefit calculation method was reviewed by our finance team at the highest level, and the benefits were confirmed by the individual system owner for whom the testing was done.
The total amount invested in the testing project, of which automation was a subproject, was in the area of $4,200,000. The amount spent on automation was less than 20 percent of this total budget, including the acquisition of functional testing tools, consulting, and the creation and execution of automated test scripts.
The benefit calculation was primarily based on the saving achieved in human resource costs. For example, one of our main systems used in the sales process took, on average, 4 weeks with 20 human resources to regression test. With automation, we reduced that process to 5 days and two resources: a reduction from 2,800 man-hours to 70 man-hours. This translated to a financial savings of about $120,500 per regression cycle. If you take into account that, on average, we run two full regression cycles per release and have multiple system releases per year, and that we are involved in various other systems, the savings soon start adding up.
We have a spreadsheet that uses parameters as the basis for all calculations. It allows us to compare the manual execution time per parameter to the automated time. We refer to parameters as the inputs required by the system under test (e.g., if we are testing a transfer from one account to another, parameters might be “from account,” “to account,” and “amount”). So, if we say that conservatively we invested $850,000 in automation and had a benefit of $8,600,000, then the ROI for automation (ROI = (Gain − Cost)/Cost) was over 900 percent.
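The arithmetic behind these figures can be reproduced directly from the numbers quoted above; the short sketch below assumes a 7-hour working day, which is what the 2,800 and 70 man-hour totals imply.

```python
# Regression effort before and after automation (figures from the text).
manual_hours    = 4 * 5 * 7 * 20   # 4 weeks x 5 days x 7 hours x 20 testers = 2,800
automated_hours = 5 * 7 * 2        # 5 days  x 7 hours x 2 testers           = 70
print(f"man-hours saved per regression cycle: {manual_hours - automated_hours}")

# ROI on the automation subproject: ROI = (Gain - Cost) / Cost.
automation_cost    = 850_000
automation_benefit = 8_600_000
roi = (automation_benefit - automation_cost) / automation_cost
print(f"automation ROI: {roi:.0%}")   # roughly 912 percent, i.e. "over 900 percent"
```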
From a project testing perspective, the organization viewed the return on the total investment in testing, which was still over 100 percent. (Usually, if ROI is 10 percent or more, it is considered an excellent investment!)
It is also interesting to note that the automation part was the only initiative within the project testing that could be measured accurately, and as such, it provided justification for the entire project.