3.4 Concurrent Engineering of Hardware, Software, and Human Factors Aspects
Not every system has all three of the hardware, software, and human factors aspects. When a system does have more than one of these aspects, however, it is important to address them concurrently rather than sequentially. A hardware-first approach will often choose best-of-breed hardware components with incompatible software or user interfaces; provide inadequate computational support for software growth; create a late software start and a high risk of schedule overrun; or commit to a functional-hierarchy architecture that is incompatible with layered, service-oriented software and human-factors architectures.[10]
Software-first approaches can similarly lead to architectural commitments or best-of-breed component selections that are incompatible with preferred hardware architectures, or that make it hard to migrate to new hardware platforms (e.g., multiprocessor hardware components). They may also prompt developers to choose software-knows-best COTS products that create undesirable human–system interfaces. Human-factors-first approaches often lead to the use of hardware–software packages that initially work well but are difficult to interoperate or to scale to extensive use.
Other problems may arise when performers in each of the three disciplines assume that the disciplines' characteristics are alike, when in fact they are often very different. For hardware-intensive systems with limited need or ability to modify the product once fielded (e.g., sealed batteries, satellites), the major life-cycle costs are realized during development and manufacturing. For most fielded hardware, however, as we noted earlier, maintenance costs dominate (60–84% of life-cycle costs cited for ships, aircraft, and ground vehicles). For software-intensive systems, manufacturing costs are essentially zero. For information services, 60% to 90% of the software life-cycle cost generally goes into post-development maintenance and upgrades; for software embedded in hardware systems, the percentages are closer to those for ships and similar platforms. For human-intensive systems, the major costs are staffing and training, particularly for safety-critical systems requiring continuous 24/7 operations.

A primary reason for these differences is indicated in rows 2 and 3 of Table 3-2. Particularly for widely dispersed hardware such as ships, submarines, satellites, and ground vehicles, making hardware changes across a fleet can be extremely difficult and expensive. As a result, many hardware deficiencies are handled via software or human workarounds that save money overall but shift life-cycle costs toward the software and human parts of the system.
TABLE 3-2 Differences in Hardware, Software, and Human System Components

| Difference Area | Hardware/Physical | Software/Cyber/Informational | Human Factors |
|---|---|---|---|
| Major life-cycle cost sources | Development; manufacturing; multilocation upgrades | Life-cycle evolution; low-cost multilocation upgrades | Training and operations labor |
| Nature of changes | Generally manual, labor-intensive, expensive | Generally straightforward, except for software code rot and architecture-breakers | Very good, but dependent on performer knowledge and skills |
| Incremental development constraints | More inflexible lower limits | More flexible lower limits | Smaller increments easier, if infrequent |
| Underlying science | Physics, chemistry, continuous mathematics | Discrete mathematics, logic, linguistics | Physiology, behavioral sciences, economics |
| Testing | By test engineers; much analytic continuity | By test engineers; little analytic continuity | By representative users |
| Strengths | Creation of physical effects; durability; repeatability; speed of execution; 24/7 operation in wide range of environments; performance monitoring | Low-cost electronic distributed upgrades; flexibility and some adaptability; big-data handling; pattern recognition; multitasking and relocatability | Perceiving new patterns; generalization; guiding hypothesis formulation and test; ambiguity resolution; prioritizing during overloads; skills diversity |
| Weaknesses | Limited flexibility and adaptability; corrosion, wear, stress, fatigue; expensive distributed upgrades; product mismatches; human-developer shortfalls | Complexity, conformity, changeability, invisibility; common-sense reasoning; stress and fatigue effects; product mismatches; human-developer shortfalls | Relatively slow decision making; limited attention, concentration, multitasking, memory recall, and environmental conditions; teaming mismatches |
As can be seen when buying hardware such as cars or TVs, some choice of options exists, but the choices are generally limited. It is much easier to tailor software or human procedures to different classes of people or purposes. It is also much easier to deliver useful subsets of most software and human systems, whereas delivering a car without braking or steering capabilities is infeasible.
The science underlying most of hardware engineering involves physics, chemistry, and continuous mathematics. This often leads to implicit assumptions about continuity, repeatability, and conservation of properties (mass, energy, momentum) that may hold for hardware but not for its software or human counterparts. Testing is an example. A hardware test engineer can generally cover a parameter space by sampling, on the assumption that the responses will be a continuous function of the input parameters. A software test engineer faces many discrete inputs, for which a successful test run provides no assurance that a neighboring test run will succeed. And for humans, testing needs to be done by representative operators, not by test engineers.
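The continuity argument above can be made concrete with a small sketch (illustrative only; the functions, values, and the off-by-one bug are hypothetical, not from the source). A continuous, physics-like response can be bounded by sampling at intervals, while discrete branch-laden logic can pass at two sampled inputs yet fail at the input between or beside them:

```python
def hardware_response(load_kg: float) -> float:
    """Continuous, physics-like response: deflection grows smoothly
    with load, so nearby inputs yield nearby outputs (hypothetical model)."""
    return 0.002 * load_kg  # deflection in meters

def software_response(packet_size: int) -> bool:
    """Discrete, branch-laden logic: a hypothetical packet handler
    with an off-by-one defect at one boundary value."""
    BUFFER_LIMIT = 1024
    if packet_size == BUFFER_LIMIT:   # defect: '==' should be '>'
        return False                  # rejects a legal packet
    return 0 < packet_size <= BUFFER_LIMIT

# Sampling the continuous response brackets the behavior in between:
# the 150 kg deflection lies between the 100 kg and 200 kg samples.
assert hardware_response(100) < hardware_response(150) < hardware_response(200)

# Sampling the discrete logic at 1000 and 1023 both pass...
assert software_response(1000) and software_response(1023)
# ...yet the immediately neighboring input fails.
assert not software_response(1024)
```

The point is not the specific bug but that no finite sample of a discrete input space licenses interpolation between the sampled points, which is why software test coverage arguments differ in kind from hardware ones.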
A good example of integrated cyber–physical–human systems design is the detailed description of the Hospira medical infusion pump success story in Chapter 1. It included risk-driven, increasingly detailed field studies and hardware–software–user interface prototyping; task analysis; hardware and software component analysis, including usability testing; and hardware–software–human safety analyses. Example prototypes and simulations included the following:
- Hardware industrial design mockups
- Early usability tests of hardware mockups
- Paper prototypes for GUIs with wireframes consisting of basic shapes for boxes, buttons, and other components
- GUI simulations using Flash animations
- Early usability tests with hardware mockups and embedded software that delivered the Flash animations to a touchscreen interface that was integrated into the hardware case