- 29.1 Three Grains of Rice
- 29.2 Understanding Has to Grow
- 29.3 First Day Automated Testing
- 29.4 Attempting to Get Automation Started
- 29.5 Struggling with (against) Management
- 29.6 Exploratory Test Automation: Database Record Locking
- 29.7 Lessons Learned from Test Automation in an Embedded Hardware-Software Computer Environment
- 29.8 The Contagious Clock
- 29.9 Flexibility of the Automation System
- 29.10 A Tale of Too Many Tools (and Not Enough Cross-Department Support)
- 29.11 A Success with a Surprising End
- 29.12 Cooperation Can Overcome Resource Limitations
- 29.13 An Automation Process for Large-Scale Success
- 29.14 Test Automation Isn't Always What It Seems
29.7 Lessons Learned from Test Automation in an Embedded Hardware–Software Computer Environment
Jon Hagar, United States
Engineer, trainer, and consultant
Embedded systems comprise specialized hardware, software, and operations. They come with all of the problems of normal software, but they also include some unique aspects:
- Specialized hardware that the software “controls,” with long and concurrent development cycles.
- Hardware problems that are “fixed” in the software late in the project.
- Limited user interfaces and minimal human intervention.
- Small amounts of dense, complex functionality, often in the control theory domain (e.g., calculating trajectories, flight dynamics, vehicle body characteristics, and orbital targets).
- (A big one) Very tight real-time performance issues (often in millisecond or microsecond ranges).
Products that make up embedded software systems now span the automotive, control, avionics, medical, telecom, electronics, and almost every other product domain one can think of. I have been involved in space avionics (guidance, navigation, and control software), but many of the approaches and lessons learned are applicable to other embedded software systems. In this section, we use examples drawn from a hypothetical but historically based space flight software embedded system.
The goal of verification, validation, and testing (VV&T) is to show that embedded software is ready for use and the risk of failure due to software can be considered acceptable by the stakeholders.
Development programs can be small (for example, 30,000 source lines of code, with staffs of 10 to 60 people), yet these programs are computationally complex, have demanding timing constraints, and are critical to the successful control of the hardware system.
29.7.1 VV&T Process and Tools
We typically have four levels of testing, with tools that support each level. The lowest level is probably the most distinctive for embedded systems because it is nearest to the hardware. It uses a host/target configuration and cross-compiled code (including the automation code). In cross-compiling, source code is compiled on one computer (the host), not into an executable for that host but into executable code that will run on a different computer (the “target”), which is typically too limited to run a compiler itself. Testing at this level checks conformance to coding standards and code coverage as well as requirements and design, and it is automated by the developer.
We call this “implementation structural verification testing” (some places call it unit testing). It is conducted with a digital simulation of the computer and/or a single-board computer based on the target hardware.
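To make the host/target idea concrete, here is a minimal sketch in C of what a test at this level might look like. The gimbal-angle-limiting routine, its limit, and the tolerance are hypothetical examples for illustration, not taken from the actual project. The same test source would be compiled natively to run against the digital simulation of the computer and cross-compiled with the target toolchain to run on the single-board target computer.

```c
/*
 * Minimal sketch of an implementation-level ("unit") test. The unit under
 * test, its limit, and the tolerance are hypothetical examples. The same
 * source is compiled natively for the host digital simulation and
 * cross-compiled with the target toolchain for the single-board computer.
 */
#include <math.h>
#include <stdio.h>

/* Hypothetical unit under test: limit a commanded gimbal angle (radians). */
static double clamp_gimbal_angle(double commanded, double limit)
{
    if (commanded > limit)
        return limit;
    if (commanded < -limit)
        return -limit;
    return commanded;
}

int main(void)
{
    const double limit = 0.10;   /* example limit, radians */
    const double tol   = 1.0e-9; /* comparison tolerance   */
    int failures = 0;

    /* Nominal, boundary, and out-of-range cases. */
    failures += fabs(clamp_gimbal_angle( 0.05, limit) - 0.05) > tol;
    failures += fabs(clamp_gimbal_angle( 0.10, limit) - 0.10) > tol;
    failures += fabs(clamp_gimbal_angle( 0.25, limit) - 0.10) > tol;
    failures += fabs(clamp_gimbal_angle(-0.25, limit) + 0.10) > tol;

    printf("%s: %d failure(s)\n", failures ? "FAIL" : "PASS", failures);
    return failures ? 1 : 0;     /* nonzero exit flags a failed test */
}
```

The test is self-checking and returns a nonzero exit status on failure, so a host-side script or the target test harness can collect pass/fail results automatically.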
The implementation test tools were custom built at first, but off-the-shelf tools were added later; examples include LDRA TBrun, Cantata, and AdaTEST. The project used both test-driven development and code-then-test implementation approaches. The comparison and review of results, which include very complex calculations, are done using test oracle information generated from commercial tools such as MATLAB, BridgePoint, and Mathematica.
The middle level, which we call design-based simulation, uses tools based on the software architecture structures and design information, integrated across module boundaries. These tools allow particular aspects of the software to be assessed individually. Some projects used model-based development tools (BridgePoint and MATLAB), and integration went better than on past systems because the models enforced rules and checks that prevented many integration defects from arising in the first place.
The next level is requirements-based simulation (scientific simulation tools). These simulations (driven by models) are run both holistically and for individual functions. For example, one simulation may model the entire boost profile of a system with full vehicle dynamics, while another may model the specifics of how the attitude thrust vector control works.
This allows the system to be evaluated from the microscopic level to the macroscopic level. The results from one level can be used as automated oracles for other levels of VV&T testing, supporting automated “compare” activities.
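As an illustration of such a compare activity, the following C sketch checks time-tagged values produced by one level of testing against oracle values exported from another (for example, from a MATLAB model), sample by sample, within a tolerance. The two-column text file format and the program interface are assumptions for the example, not the project’s actual data format.

```c
/*
 * Minimal sketch of an automated "compare" step, assuming hypothetical
 * two-column text files (<time> <value> per line): one produced by the
 * level under test, one exported as the oracle (e.g., from a MATLAB
 * model). Samples are assumed to be time-aligned and are compared
 * within a tolerance.
 */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s <test.dat> <oracle.dat> <tolerance>\n", argv[0]);
        return 2;
    }

    FILE *test   = fopen(argv[1], "r");
    FILE *oracle = fopen(argv[2], "r");
    double tol   = atof(argv[3]);
    if (!test || !oracle) {
        perror("fopen");
        return 2;
    }

    double t_test, v_test, t_oracle, v_oracle;
    long samples = 0, mismatches = 0;

    /* Walk both files in step; stop at the end of the shorter one. */
    while (fscanf(test,   "%lf %lf", &t_test,   &v_test)   == 2 &&
           fscanf(oracle, "%lf %lf", &t_oracle, &v_oracle) == 2) {
        samples++;
        if (fabs(v_test - v_oracle) > tol) {
            mismatches++;
            printf("t=%g: test=%g oracle=%g (delta=%g)\n",
                   t_test, v_test, v_oracle, v_test - v_oracle);
        }
    }

    fclose(test);
    fclose(oracle);
    printf("%ld samples compared, %ld outside tolerance %g\n",
           samples, mismatches, tol);
    return mismatches ? 1 : 0;   /* nonzero exit flags a failed compare */
}
```

In practice, the tolerance would depend on the quantity being compared and the accuracy required of it.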
This approach of using simulations and models to drive and analyze test results comes with a risk: an error in the model or tool may replicate and “offset” an error in the actual product (a self-fulfilling model result). This is a classic problem with model-based test oracles. To mitigate this risk, the project used multiple levels of testing (multiple compares), a variety of tools, different VV&T techniques, and expert human reviewers who were aware of it. Used in combination with testing, these methods were found to detect errors when they existed (one major objective) and resulted in software that worked.
Finally, at the system level, VV&T of the software uses actual hardware in the loop and operations. An extensive, real-time, continuous digital simulation modeling and feedback system of computers is used to test the software in a realistic environment with the same interfaces, inputs, and outputs as in the actual system. The system under test runs in actual real time; there is no speed-up or slow-down of time due to the test harness. Additionally, with hardware in the loop and realistic simulations, complete use scenarios involving the hardware and software could be played out, both for typical usage scenarios (daily use) and for unusual situations such as high load, boundary cases, and invalid inputs.
29.7.2 Lessons Learned
This section summarizes some general observations the projects made during the initial setup and use of automated VV&T tools:
- Training: It is important to allow both time and money for training on tools and testing.
- Planning: Tools must be planned for and developed like any software effort. Automated VV&T tools are not “plug and play.” To be successful, plan for development, establish a schedule and budget, integrate with existing processes, plan the test environment, and also test the test tools. Test tools must be “engineered” like any development effort.
- Have an advocate: Test tools need a champion in order for them to become incorporated into common use. The champion is someone who knows the tools and advocates their use. Success comes from getting staff to think “outside the automated tool box.” The new tools must “integrate” with the existing staff, which means education, mentoring, and some customization. Advocates work these issues.
- Usability of a tool must be reasonable for its users: People will need training on tools, and tools by nature have complexities, but a tool that is too hard to use or is constantly being revised by the vendor frustrates users and, in the extreme, ends up as shelfware. Ensure that the user interface is part of the selection evaluation before purchasing any tool.
- Expect some failures and learn from them: Our project explored several tools that were abandoned after an initial period of time. While failure is not good, it is really only total failure when one does not learn from the mistake. Also, management must avoid blaming engineers for the failure of an idea because doing so stifles future ideas.
- Know your process: Automated test tools must fit within your process. If you lack process, just having tools will probably result in failure. Expect some changes in your process when you get a new tool, but a tool that is outside of your process will likely become shelfware.
- Embedded systems have special problems in test automation: Despite progress, automated test tools do not totally solve all embedded VV&T problems. For example, our projects found issues in dealing with cross-compiling, timing, initialization, data value representation, and requirements engineering. These can be overcome, but that means vendors have more functions to add and projects will take more money and time. Plan for the unexpected.
- Tools evolve: Plan on incremental test tool integration cycles.
- Configuration management (CM): Even with VV&T tools, projects need to manage and control all aspects of the configuration, including the test tools as well as the test data.
29.7.3 Summary of Results
Although I am not permitted to reveal specific data, compared to custom-developed tools and manual testing, establishing an automated, commercial-tool-based VV&T environment took about 50 percent fewer people. The projects tended to use these savings to create more and better automated tests. While adding to test automation, the projects maintained and improved functionality and quality. Further, maintenance-regression costs decreased because vendors provided upgrades for a low annual fee (relative to the staff costs of purely customized tools). Commercial tools have the disadvantage of not being fully customizable to a project’s process, but this proved to be a minor issue as long as the major aspects of the process were supported by the tools.
Additionally, the projects reduced test staff work hours by between 40 and 75 percent (based on past VV&T cycles). We found that our test designers were more productive. We created the same number of tests and executed them in less time and found more defects earlier and faster. We had fewer “break-it, fix-it” cycles of regression testing, which meant that less effort was needed to achieve the same level of quality in the testing and the same defect detection rates.
In an embedded software VV&T environment, automated test tools can be good if you consider them as tools and not “magic bullets.” People make tools work, and people do the hard parts of VV&T engineering that tools cannot do. Tools can automate the parts humans do not like or are not good at. Embedded projects continue to evolve VV&T automation. VV&T automation tools take effort, increments, and iterations. Tools aid people—but are not a replacement for them.