Automated Software Testing: An Example of a Working Solution
Most automated testing tool vendors—even open-source automated testing tool efforts—claim to have the "silver bullet" automated testing solution; yet the current crop of automated software testing (AST) tools and solutions is still riddled with numerous challenges.
Because vendor-provided tools and open source solutions alone didn't meet the needs of our project for the U.S. Department of Defense (DoD), the Innovative Defense Technologies (IDT) team developed a custom automation framework for the DoD's real-time, mission-critical systems. Our eventual solution used a mixture of available open source tools and other tools developed in-house.
This article summarizes my experience as part of the team that developed the Automated Test and Re-Test (ATRT) tool, now in use throughout Navy programs. Within this article you'll find automated testing hints that can be useful nuggets as part of any automated testing effort.
System Requirements for the AST Solution
IDT started by developing a clear statement of requirements and objectives for the automation, to ensure that our solution would take the right direction. After gaining a thorough understanding of the unique DoD automated testing challenge, we came up with seven major requirements for an automated software testing solution used on a typical DoD system under test (SUT):
- Cannot be intrusive to the SUT
- Must be OS-independent (compatible with Windows, Linux, Solaris, etc.)
- Must be GUI-independent (should be able to handle GUI technologies written in Motif, C#, etc., and handle any type of third-party non-custom GUI control)
- Must be able to handle GUI-centric and "backend" automation (handle test operations on SUTs with the GUI as well as operations on the SUT's backend, such as various messages and other data being sent)
- Must be able to handle a networked multicomputer environment (multiple servers and multiple monitors/displays interconnected to form one SUT)
- Non-developers should be able to use the tool (testers in the DoD labs were subject matter experts, but not necessarily software developers who could use an automated testing tool efficiently)
- Must support an automated requirements traceability matrix (RTM)
Our diligent research determined that while various automated testing tools on the market met one or more of these requirements, no existing tool met all seven requirements in one solution.
Challenge Solutions
This section walks through the seven automation requirements one by one and describes how we solved each challenge.
Requirement 1: Not Intrusive to SUT
Originally, we set out to reuse any applicable vendor-provided automated testing tool as part of our automated testing framework. Before knowing the detailed requirements, I thought we'd be using tools such as IBM's Rational Functional Tester, HP's QuickTest Pro and WinRunner, or SmartBear's TestComplete.
However, given that our first requirement was that the AST could not be intrusive to the SUT, we had to cross these tools off our list of potentials. While some partially met the requirement, none met it 100%, meaning that portions of the tool actually had to be installed on the SUT. Our customer wouldn't be happy with the tool being even partially intrusive.
Many of the SUTs we tested used virtual network computing (VNC) as part of their system configuration. We decided that an automated testing solution that can plug into the VNC server already installed on the SUT would allow our framework to meet the requirement of not being intrusive to the SUT. As part of our ATRT framework, we developed a VNC client that would allow connecting to the SUT's VNC server.
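To give a flavor of what "plugging into" the SUT's VNC server involves, here is a minimal Java sketch of the very start of the RFB (VNC) protocol conversation. The host name, port, and class name are placeholders, and ATRT's actual client of course goes far beyond this initial handshake.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Minimal sketch: open a socket to the SUT's VNC (RFB) server and perform the
// initial protocol-version handshake. A real client would continue with the
// security handshake, ClientInit/ServerInit, framebuffer requests, and
// pointer/keyboard events. Host and port are illustrative placeholders.
public class VncHandshakeSketch {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("sut-host.example", 5900)) {
            InputStream in = socket.getInputStream();
            OutputStream out = socket.getOutputStream();

            // The server announces its RFB version first, e.g. "RFB 003.008\n" (12 bytes).
            byte[] serverVersion = in.readNBytes(12);
            System.out.println("Server: " + new String(serverVersion, StandardCharsets.US_ASCII).trim());

            // The client replies with the protocol version it will speak.
            out.write("RFB 003.008\n".getBytes(StandardCharsets.US_ASCII));
            out.flush();
            // ...security negotiation and input/framebuffer messages would follow here...
        }
    }
}
```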
Requirement 2: Operating System Independence
We also had to be sure that the tool we selected as part of our ATRT framework wasn't dependent on a specific operating system (OS). Our client uses every type of OS imaginable, but we initially wanted to focus our proof of concept on Windows and Linux. Since VNC implementations exist for most operating systems, we were able to meet this requirement through our VNC solution.
Requirement 3: GUI Independence
Our solution needed to be able to handle any type of graphical user interface (GUI) technology written in any language—Motif, C#, and so on—and handle any type of third-party non-custom GUI control.
Many of the current vendor-provided AST tools depend on specific GUI technology. That means that if proprietary programming languages or third-party controls are used in the SUT GUI, the automated testing tool often isn't compatible, which presents automated testing problems. Our ATRT solution works by using VNC to interact with all the GUI elements of the SUT as images, independent of the GUI technology used.
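As a rough illustration of image-based GUI interaction, the following sketch searches a captured screen image for a reference image of a control using exact pixel comparison. A production tool would use tolerant (fuzzy) matching; the class and method names here are hypothetical, not ATRT's API.

```java
import java.awt.image.BufferedImage;

// Illustrative sketch of image-based GUI matching: brute-force search for a
// reference control image inside a full-screen capture.
public final class ImageLocator {
    /** Returns {x, y} of the top-left corner of the first match, or null if not found. */
    public static int[] locate(BufferedImage screen, BufferedImage target) {
        for (int y = 0; y <= screen.getHeight() - target.getHeight(); y++) {
            for (int x = 0; x <= screen.getWidth() - target.getWidth(); x++) {
                if (matchesAt(screen, target, x, y)) {
                    return new int[] {x, y};
                }
            }
        }
        return null;
    }

    // Exact pixel-by-pixel comparison of the target against the screen at (ox, oy).
    private static boolean matchesAt(BufferedImage screen, BufferedImage target, int ox, int oy) {
        for (int y = 0; y < target.getHeight(); y++) {
            for (int x = 0; x < target.getWidth(); x++) {
                if (screen.getRGB(ox + x, oy + y) != target.getRGB(x, y)) {
                    return false;
                }
            }
        }
        return true;
    }
}
```

Once a control is located this way, the framework can send the corresponding pointer event to that screen coordinate over VNC, regardless of the GUI technology that drew the control.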
Requirement 4: Tests Multiple Interface and Protocol Types
Our solution needed to handle test operations on systems through their GUIs, but it also had to handle operations on the system's backend (not through a GUI), such as various messages and other data being sent.
While our ATRT/VNC-based solution met the previously described GUI automation requirements well, our customer also needed to be able to test various backend (non-GUI) interfaces using various protocols. We couldn't find a vendor-provided solution that would support this requirement out of the box, so we had to develop an in-house solution for this backend "interface" testing.
This task was a more involved undertaking because we needed to understand the protocol requirements, which sometimes were proprietary, plus the makeup of the backend system. Our biggest challenge was the message data. Each SUT used a different protocol—including Transmission Control Protocol/Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Common Object Request Broker Architecture (CORBA), proprietary, and more—and all used different message data formats. We developed an approach whereby all protocols and data could be tested via our ATRT framework, and thus our ATRT development effort was born.
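As a simple illustration of exercising a backend interface without any GUI, the following sketch sends a test message to a SUT endpoint over UDP and waits for a reply. The host, port, and payload format are placeholders rather than any real SUT's message definition.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Minimal sketch of driving a backend (non-GUI) interface over UDP: send a
// test message to the SUT and wait for a response. Real SUT messages follow
// their own (often proprietary) binary layouts.
public class UdpMessageSketch {
    public static void main(String[] args) throws Exception {
        byte[] payload = "TEST-MSG-001".getBytes(StandardCharsets.US_ASCII);
        InetAddress sut = InetAddress.getByName("sut-backend.example");

        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(5000); // fail the step if the SUT stays silent
            socket.send(new DatagramPacket(payload, payload.length, sut, 7500));

            byte[] buffer = new byte[1024];
            DatagramPacket reply = new DatagramPacket(buffer, buffer.length);
            socket.receive(reply);
            System.out.println("Reply: "
                    + new String(reply.getData(), 0, reply.getLength(), StandardCharsets.US_ASCII));
        }
    }
}
```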
Eclipse
We decided to use the open source Eclipse development environment. We chose the Eclipse rich client platform because ATRT could expand on the base environment through its powerful plug-in framework, which gives us configurability and extensibility: various new feature sets and tools can be integrated, and plug-ins can be added, removed, or modified easily. (See the next section for more on plug-ins.)
Eclipse provides design elements that must be considered for the various features required when developing and extending an automated testing framework. Eclipse is an open source community whose projects focus on building an open development platform comprising extensible frameworks and tools for building, deploying, and managing software across the lifecycle. A large and vibrant ecosystem of major technology vendors, innovative startups, universities, research institutions, and individuals extends, complements, and supports the Eclipse platform. Eclipse is much more than just a Java integrated development environment (IDE).
The exciting thing about Eclipse is that many people are using it in many different ways. The common thread is that they are building innovative, industrial-strength software and want great tools and frameworks to make their job easier.
Plug-ins
Eclipse comes bundled with a plug-in development environment (PDE), which itself is a plug-in that streamlines plug-in development. Because the Eclipse framework is written in Java, the plug-in code ideally is written in Java as well. Other languages can be used in the plug-in development, but that approach would limit the multiplatform support you get with Java.
We created an Eclipse extension point that defines an interface for our framework, giving the framework user a method to interact with an external tool in a standard way. Support for various protocols was developed, and ATRT can now test message data sent over TCP/IP, UDP, CORBA, proprietary, and many other protocols.
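The following sketch shows, in rough terms, how contributions to such an extension point can be discovered at runtime through the Eclipse extension registry. The extension point ID and the notion of a protocol adapter are hypothetical stand-ins for illustration, not ATRT's real identifiers, and the code assumes it is running inside an Eclipse/OSGi runtime.

```java
import org.eclipse.core.runtime.IConfigurationElement;
import org.eclipse.core.runtime.Platform;

// Sketch of discovering protocol adapters contributed through a hypothetical
// extension point. Each contributing plug-in registers a class (under the
// "class" attribute) that implements an assumed protocol-adapter interface.
public class ProtocolAdapterLoader {
    public static void loadAdapters() throws Exception {
        IConfigurationElement[] elements = Platform.getExtensionRegistry()
                .getConfigurationElementsFor("com.example.atrt.protocolAdapter"); // illustrative ID
        for (IConfigurationElement element : elements) {
            // Instantiate the contributed adapter class declared in the plug-in's plugin.xml.
            Object adapter = element.createExecutableExtension("class");
            System.out.println("Loaded protocol adapter: " + adapter.getClass().getName());
        }
    }
}
```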
After zeroing in on the most efficient development environment, we needed a way to develop test-message data. Although we could reuse samples of "live" message data (that is, message data that was actually used in the SUTs), we wouldn't get the data coverage we wanted. We needed a tool that would allow us to measure test-data coverage so we could cover a broad range of test data. We initially evaluated FireEye (now called Advanced Combinatorial Testing System [ACTS]), developed by the U.S. National Institute of Standards and Technology (NIST) for an orthogonal-array approach to data generation, but at the time it didn't allow for the complexity of data interactions and relations required. We determined that MATLAB was the solution for our complex test data generation.
Requirement 5: Handles Networked Multicomputer Environments
While the commercial world seems to focus mainly on web-based applications and web testing, the DoD's systems consist of networked computers: multiple servers, systems of systems, and multiple monitors and displays interconnected to form one SUT. With its use of VNC, ATRT handles such an interconnected mesh of environments very well. For example, it supports distributed and concurrent testing: automated tests can be executed concurrently over the network for test cases where various GUI- or message-based outputs depend on each other or have to run in parallel. As long as a VNC server is installed on the networked computers, ATRT (which now also serves as a VNC client) can connect to them.
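Here is a minimal sketch of that concurrency idea, assuming a hypothetical runTestStep call that drives one machine through its VNC server; the host names and the API are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch of concurrent test execution across several networked
// machines, each reachable through its own VNC server.
public class ConcurrentTestSketch {
    public static void main(String[] args) throws Exception {
        List<String> hosts = List.of("display-1.example", "display-2.example", "server-1.example");
        ExecutorService pool = Executors.newFixedThreadPool(hosts.size());
        try {
            // Launch one test step per machine so interdependent outputs can run in parallel.
            List<Future<Boolean>> results = new ArrayList<>();
            for (String host : hosts) {
                Callable<Boolean> step = () -> runTestStep(host);
                results.add(pool.submit(step));
            }
            for (Future<Boolean> result : results) {
                System.out.println("Step passed: " + result.get()); // blocks until that machine finishes
            }
        } finally {
            pool.shutdown();
        }
    }

    // Placeholder: in a real framework this would connect to the host's VNC
    // server and drive one GUI- or message-based test step.
    private static boolean runTestStep(String host) {
        System.out.println("Running step against " + host);
        return true;
    }
}
```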
Requirement 6: Scriptless Automated Test Development for Non-Developers
One of our main requirements was that non-developers must be able to use the ATRT framework. Testers in the DoD labs were subject matter experts, but not necessarily software developers who could use an automated testing tool efficiently. Non-developers generally don't want to be bothered with developing automated testing scripts; they want to be able to use the tool with the simple click of a button. We therefore developed a model-based approach to automated testing. Our ATRT solution allows testers to drag the desired action onto a "canvas" to develop their automated tests. No scripting is involved. The tester can click an icon that matches any mouse feature (left-click, right-click, center-click, mouse down, etc.); via that icon click, ATRT generates the respective code behind the scenes.
This model-driven automated testing capability allows test flows to be generated via a simple select-and-drag model-driven interface. We wanted to provide a solution whereby testers could model a test before the SUT becomes available, and they would be able to see a visual representation of their tests.
I've described GUI capture and test modeling here, but the tool also allows non-developers to model non-GUI, message-based test cases.
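To make the "scriptless" idea concrete, here is a hypothetical sketch of the kind of model such a canvas can build up behind the scenes: each icon the tester drops becomes a step object, and playback simply walks the list. None of these names (TestFlow, TestStep, MouseAction) come from ATRT's actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a scriptless test model: the drag-and-drop designer
// appends step objects, and the framework later "plays back" the list.
public class TestFlow {
    enum MouseAction { LEFT_CLICK, RIGHT_CLICK, CENTER_CLICK, MOUSE_DOWN, MOUSE_UP, DOUBLE_CLICK }

    record TestStep(String targetImage, MouseAction action) { }

    private final List<TestStep> steps = new ArrayList<>();

    public void addStep(String targetImage, MouseAction action) {
        steps.add(new TestStep(targetImage, action));
    }

    public void play() {
        for (TestStep step : steps) {
            // In a real framework this would locate targetImage on the remote
            // screen (via VNC) and send the corresponding pointer event.
            System.out.println(step.action() + " on " + step.targetImage());
        }
    }

    public static void main(String[] args) {
        TestFlow flow = new TestFlow();
        flow.addStep("launch-button.png", MouseAction.LEFT_CLICK);
        flow.addStep("context-menu-target.png", MouseAction.RIGHT_CLICK);
        flow.play();
    }
}
```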
In summary, we met the requirement that non-developers be able to use this framework. ATRT provides the tester with every GUI action a mouse can perform, in addition to various other keywords. Our experience with customers using ATRT for the first time is that it provides a very intuitive user interface. Interestingly, we find that some of the best test automators are younger testers who have grown up playing video games, developing hand-eye coordination along the way. Some of the testers even find working with ATRT to be fun, which alleviates the monotony of manually testing the same features repeatedly with each new release.
Requirement 7: Must Support an Automated Requirements Traceability Matrix
We needed a framework that could allow for test management and automated requirements traceability. We developed ATRT to handle all related test-management activities, including documenting a test case in the ATRT manager, along with a hierarchical breakdown of test cases into test steps and substeps. Additionally, to meet this final requirement, ATRT allows users to import their requirements and map test cases to them, from which we generate RTM reports. We integrated the open source Business Intelligence and Reporting Tools (BIRT), an Eclipse-based reporting system.
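As an illustration of the underlying idea, the following sketch maps requirement IDs to test-case IDs and prints a simple traceability report. The IDs and method names are invented for the example; a real RTM report would be produced through a reporting engine such as BIRT.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the data behind an automated requirements
// traceability matrix (RTM): imported requirements mapped to test cases.
public class TraceabilityMatrix {
    private final Map<String, List<String>> requirementToTests = new HashMap<>();

    public void map(String requirementId, String testCaseId) {
        requirementToTests.computeIfAbsent(requirementId, k -> new ArrayList<>()).add(testCaseId);
    }

    public List<String> testsFor(String requirementId) {
        return requirementToTests.getOrDefault(requirementId, List.of());
    }

    public void report() {
        requirementToTests.forEach((req, tests) -> System.out.println(req + " -> " + tests));
    }

    public static void main(String[] args) {
        TraceabilityMatrix rtm = new TraceabilityMatrix();
        rtm.map("REQ-001", "TC-101");
        rtm.map("REQ-001", "TC-102");
        rtm.map("REQ-002", "TC-205");
        rtm.report();
    }
}
```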
Automated Testing Solution Applied: Return on Investment
We successfully applied our ATRT solution to various DoD programs and have shown a return on investment (ROI). We developed a complex ROI matrix that allows us to measure factors such as the following as accurately as possible:
- Nonrecurring costs such as original setup and connection in the labs and required workarounds
- Cost and time to create and maintain the automated vs. manual test cases
- Learning curve
ROI includes time savings; for example, on one program's 48-hour endurance test, we reduced the number of required testers from 5 to 1, freeing the other testers to focus on new features. It also includes increased testing coverage: our backend message-based testing allows a 100-fold increase in the number of test scenarios that can be executed and analyzed, testing that would be time-prohibitive if performed manually.
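As a back-of-the-envelope illustration using the figures above (a 48-hour endurance test staffed by 5 testers manually versus 1 with automation), the following sketch computes the tester-hours saved per run. The setup-cost figure is a made-up placeholder; a real ROI matrix also folds in maintenance, workarounds, and the learning curve.

```java
// Back-of-the-envelope ROI illustration using the figures quoted in the text.
public class RoiSketch {
    public static void main(String[] args) {
        double testDurationHours = 48;
        int manualTesters = 5;
        int automatedTesters = 1;

        double hoursSavedPerRun = testDurationHours * (manualTesters - automatedTesters);
        System.out.println("Tester-hours saved per endurance run: " + hoursSavedPerRun); // 192.0

        double hypotheticalSetupHours = 400; // placeholder nonrecurring setup cost
        double runsToRecoverSetup = hypotheticalSetupHours / hoursSavedPerRun;
        System.out.println("Runs needed to recover setup effort: " + runsToRecoverSetup); // ~2.1
    }
}
```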
Although we showed ROI on all programs, our biggest challenge for bringing automated testing, ATRT, or any such type of framework into the DoD labs was often the testers themselves, who have done manual testing for years and often feel that their jobs are in jeopardy. After demonstrating ATRT on their test systems in labs, more often than not we hear the comment, "Why don't you just put a sign on these computers that says 'You have been replaced!'?"
Well, that's not quite true. Our main goal is not to replace testers but to make them more efficient. Our goal with bringing automated testing to the DoD systems (old and new) is to increase the speed of the testing certification process, which is currently mainly a manual process, with hundreds of thousands of dollars spent on inefficient testing processes. Often the mantra we hear is "This is how we have been doing it; don't rock the boat." ATRT is a new and more efficient way to implement testing on systems—it's not a silver bullet. We've shown initial savings, but we'll continue to enhance ATRT to be even more effective. Each day, our entire development and testing department focuses on developing a better automated testing solution. With each application to different systems, we learn more and add features accordingly.
One of our additional challenges is testing ATRT itself. We need to make sure that ATRT behaves flawlessly; if the ATRT solution has problems, how can we claim to have an automated testing solution that will save you time and make you more efficient? We have a rigorous ATRT development process in place that includes software development using continuous integration, with code checked in daily to a source control tool (such as Subversion), nightly builds (using tools such as Hudson/Jenkins), and automated unit tests (using tools such as JUnit).
Every addition to ATRT is accompanied by automated unit and integration tests, which continuously grow the ATRT unit and integration testing framework. The more interesting part is that we use ATRT to functionally test ATRT, a technique known as "dog-fooding" or "eating your own dog food." Along with the JUnit tests, we have automated functional ATRT tests in place that exercise all of the ATRT requirements in the nightly build. We've also created a subset of automated tests for results reporting, and a build is not considered complete until it passes all automated unit and functional tests satisfactorily.
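As an illustration of the kind of unit test that runs in such a nightly build, here is a small JUnit 5 sketch that exercises the ImageLocator example from earlier in this article. It is purely illustrative and not part of ATRT's actual suite.

```java
import static org.junit.jupiter.api.Assertions.assertArrayEquals;

import java.awt.image.BufferedImage;
import org.junit.jupiter.api.Test;

// Sketch of a JUnit 5 unit test: build a tiny synthetic "screen", embed a
// single red target pixel, and assert that the locator finds its coordinates.
class ImageLocatorTest {
    @Test
    void findsTargetInsideScreenCapture() {
        BufferedImage screen = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        screen.setRGB(2, 3, 0xFF0000); // one red pixel at (x=2, y=3)

        BufferedImage target = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
        target.setRGB(0, 0, 0xFF0000);

        assertArrayEquals(new int[] {2, 3}, ImageLocator.locate(screen, target));
    }
}
```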
Conclusion
I'm excited to work at a company that's funded by the government to develop an automated testing solution. It allows me to see the testing challenges that testers face and to help create automated solutions to those unique challenges, solutions that continue to evolve in effectiveness. It has also made me realize that there's no need to reinvent the wheel: A successful automated testing framework can be built by reusing open source and in-house developed code. Just make sure that the manual testers don't feel threatened by a solution that will make them 10 times as effective, and be sure to understand the limitations of the currently available vendor-provided tools. While we have made great strides toward solving the automated testing challenge, there is still much more to be done, and better solutions are yet to come.