What Is Just Enough Test Automation?
This is not going to be a discourse on how to select and implement an automated testing tool suite. A number of articles and books available today offer practical advice on tool selection. Nor is it an introductory book on software testing automation. If you are reading this book, we'll assume you have some previous experience with test automation. We will also assume that you have some serious questions about the practical aspects of test automation. You may, or may not, have a successful implementation under your belt. In any case, you have most probably experienced the operational, political, and cultural pitfalls of test automation. What you need is a how-to book with practical tips, tricks, and suggestions, along with a proven approach. If this is what you want, read on. Our perspective on test automation is what you will get in the remaining chapters of this book.
No New Models, Please!
"Read My Lips: No New Models!" echoes a sentiment with which we whole-heartily agree (14). As mentioned in the Preface, there has been a plethora of models of the software testing process (6,10,11) and models of the automated software testing process (4,7,8,9,12,15), including a software test automation life cycle model (2). While these ideas are all right and in some aspects useful when discussing software testing and test automation, they are of little use to real-world practitioners.
The Software Engineering Institute at Carnegie Mellon University has established a Software Testing Management Key Process Area (KPA) that is necessary to achieve Level 2: Repeatable in the Software Process Capability Maturity Model (CMM) (11). Such a model is useful for general guidance, but it does not define a process that is useful to the working test engineer. It gives test managers a warm and fuzzy feeling when they pay lip service to it, but in reality the testing process activities do not reflect the model at all. The same holds true for the software test automation life cycle model. We do not believe in life cycles. Instead, we believe in processes that direct workflows. Every testing group has a process. In some instances it is a chaotic process; in other instances it is more organized.
Krause developed a four-level maturity model for automated software testing (3) that he ties to the software testing maturity model (1) and the SEI Software Process Maturity Model (4) that evolved into the CMM. The levels he specified are Accidental Automation, Beginning Automation, Intentional Automation, and Advanced Automation. While this model may describe what happens from a conceptual standpoint, it offers no practical advice that will facilitate test automation implementation. It merely describes what the author has noted happening in typical organizations.
Dustin, Rashka, and Paul published an Automated Test Lifecycle Methodology (ATLM), a "structured methodology which is geared toward ensuring successful implementation of automated testing" (2). It identifies a four-phased methodology: Decision to Automate Test; Introduction of Automated Testing; Test Planning, Design, and Development; and Execution and Management of Automated Test.
While this model is useful from a management and control perspective, it is not practical from the test automation engineer's point of view. In contrast, Powers offers practical advice that can be very helpful for software testing engineers who are responsible for building and implementing a test automation framework. His work includes common-sense discussions of programming style, naming standards, and other conventions that should be applied when writing automated test scripts (9).
Powers also offers a comprehensive discussion of the principle of data abstraction, which is the basis of the data-driven approach to automated software testing. He discusses alternatives for coding how data are defined and used by the test script. According to Powers, "The principle is one of depending less on the literal value of a variable or constant, and more on its meaning, or role, or usage in the test." He speaks of "constants for product data," defining the concept as "...the simplest form of this data abstraction is to use named program constants instead of literal values." He also speaks of "variables for product data" and says, "...instead of the literal name 'John Q. Private,' the principle of data abstraction calls for the programmer to use a program variable such as sFullName here, with the value set once in the program. This one occurrence of the literal means there's only one place to edit in order to change it." (9)
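To make the principle concrete, here is a minimal sketch in Python (standing in for whatever scripting language your test tool provides); the form API, names, and values are hypothetical:

    # Literal values are defined once, as named constants and variables...
    FULL_NAME = "John Q. Private"   # a "variable for product data"
    TIMEOUT_SECONDS = 30            # a "constant for product data" (illustrative)

    def fill_customer_form(form):
        # ...so the test logic refers to each value's role, not its literal
        # text. fill_customer_form and form.set_field are hypothetical
        # stand-ins for whatever API your test tool exposes.
        form.set_field("name", FULL_NAME)
        form.submit(timeout=TIMEOUT_SECONDS)

Changing the test data now means editing one line, not hunting down every occurrence of the literal.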
The immediate impact of Powers's statement is that you begin to see the benefits data abstraction offers for the maintenance of automated test scripts. He further suggests that these values be stored in a repository that is accessible from the test script code: "All that's required is a repository from which to fetch the values, and a program mechanism to do the retrieval." (9)
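A minimal sketch of such a retrieval mechanism, again in Python and assuming a simple JSON file serves as the repository (the file name and keys are illustrative):

    import json

    def load_test_data(path="test_data.json"):
        # Hypothetical repository: a JSON file of named test values, e.g.
        # {"full_name": "John Q. Private", "account_id": "A-1001"}
        with open(path) as f:
            return json.load(f)

    data = load_test_data()
    full_name = data["full_name"]   # fetched by meaning, not hard-coded

Any mechanism will do: a flat file, a spreadsheet, or a database table. The point is that the script fetches values rather than embedding them.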
This is the underlying principle of Strang's Data Driven Automated Testing approach. His approach uses a scripting framework to read the values from the test data repository: a data file that contains both the input and its expected behavior. His method extends data abstraction from storing just the literal input values to also storing the expected result values. This approach can accommodate both manual and automated data generation. The test script must be coded in such a way that it can distinguish right results from wrong ones (12).
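The sketch below, in Python with a hypothetical CSV data file, shows the shape of such a script: it reads each input together with its expected result and decides for itself whether the actual behavior is correct:

    import csv

    def run_data_driven_tests(data_file, function_under_test):
        # Each row of the (hypothetical) data file pairs an input with its
        # expected result, e.g.:
        #   input_value,expected_result
        #   2+2,4
        #   10/0,ERROR
        results = []
        with open(data_file, newline="") as f:
            for row in csv.DictReader(f):
                try:
                    actual = str(function_under_test(row["input_value"]))
                except Exception:
                    actual = "ERROR"
                # The script judges right from wrong by comparing actual
                # behavior against the stored expected result.
                results.append((row["input_value"],
                                actual == row["expected_result"]))
        return results

Because expected results live in the data file, new test cases can be added, manually or by a generator, without touching the script.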
Powers's and Strang's work is reflected in the data-driven approaches discussed in Chapters 7 and 8 of this book. Archer Group's Control Synchronized Data Driven Testing (CSDDT) is an example of one such approach that employs the concepts discussed here.
Rational Software Corporation has authored the Rational Unified Process (RUP), which contains specific test phases designed to support its automated testing tool suite (10). Even if you are not a Rational user, the testing process information provides a solid basis even for manual testing. RUP itself comprises process documentation that addresses all of software development, not just testing. It is relatively inexpensive; the RUP CD-ROM sells for under $1,000. The most important aspect of RUP's testing approach is that it can be used to support a data-driven automated testing framework. That is why we have used it in the past and why it is mentioned in this book.
A Life Cycle Is Not a Process
The problem with the approaches taken by the authors cited thus far and other industry gurus is the same problem we have with all life-cycle models: they are management oriented, not practitioner oriented. Again, this approach offers very little in the way of an operational process that we can term an automated testing process. Other approaches, e.g., data-driven automated testing, which these authors have criticized, offer much more in the way of methods and techniques that can actually be applied in day-to-day test automation activities. What this line of thinking really offers is a model that gives testing managers the same warm and fuzzy feeling mentioned above with respect to the testing maturity model.
Although purported to be an experiential model, this representation of automated testing has not been developed on a deductive basis. It is a theory based on inductive reasoning, much of it founded on anecdotal evidence, as are many of the models advocated in the information systems (IS) literature. On the other hand, nonmanagement techniques, which are operational rather than managerial and which are applied to specific tasks in the automation process, are based on deductive reasoning. Data-driven testing is an example of a nonmanagement technique. These techniques have evolved through practitioner trial and error, which is how many of the traditional engineering methods in use today came to be.
A Tool Is Not a Process
The most recent results for the minisurvey on the CSST Technologies, Inc., Web site indicate that 40% (102 out of 258 respondents) see software testing methods/process implementation as doing the most to facilitate their testing work. Twenty-four percent (63 respondents) see improved software requirements documentation as the most important facilitation factor. Nineteen percent (50 respondents) believe that software test standards implementation is the most important aspect. Ten percent (25 respondents) cite improved test planning as the most important consideration. Only 7% (18 respondents) think that more time to test would facilitate their work.
Purchasing a software testing tool suite does not constitute implementing a software testing process. A process is a series of steps that, when followed, achieve a goal or produce a product. The process steps implement testing activities that result in test execution and the creation of test artifacts. Automated software tools support existing processes and, when the process is chaotic, impose some much-needed structure on the activities. One of the primary reasons software testing tool implementations fail is that there is little or no testing process in place before the tools are purchased.
When we are designing and building automated tests, we do not even see a process. What we see are tasks, a schedule, and personal assignments to complete. For us, just enough software test automation is the amount we need to do our jobs effectively. If we do not have commercial software testing tools we can use, we build our own or use desktop tools that our clients routinely load on workstations.
Figure 1.1 illustrates a testing process that was defined for one of our clients and that is loosely based on the RUP testing approach (10). Our process approach differs from Rational's in that we view test script design as part of test implementation, whereas RUP considers it a test design activity. We differ because we believe that the skill set required for test design does not include previous programming experience, but the skill set for test implementation does.
FIGURE 1.1 Quality Assurance and Testing (QA&T) Process