- 3.1 Failure Story: Sequential RPV Systems Engineering and Development
- 3.2 Success Story: Concurrent Competitive-Prototyping RPV Systems Development
- 3.3 Concurrent Development and Evolution Engineering
- 3.4 Concurrent Engineering of Hardware, Software, and Human Factors Aspects
- 3.5 Concurrent Requirements and Solutions Engineering
3.2 Success Story: Concurrent Competitive-Prototyping RPV Systems Development
A concurrent incremental-commitment approach to the agent-based RPV control opportunity, using the ICSM process and competitive prototyping, would recognize that going from a single-scenario proof-of-principle demo to a fieldable system operating in more complex scenarios involved a number of risks and uncertainties. It would adopt prototyping as a way of buying information to reduce those risks, and would determine that a reasonable first step would be to invest $25 million in an Exploration phase. This phase would initially involve the customer and a set of independent experts developing operational scenarios and evaluation criteria from the requirements in Section 3.1 (synthesizing status information from multiple on-board and external sensors; dynamically reallocating RPVs to targets; performing self-defense functions; and so on). These would cover not only the sunny-day use cases but also selected rainy-day use cases involving communications outages, disabled RPVs, and garbled data.
The customer would identify an RPV simulator that would be used in the competition, and would send out a request for information to prospective competitors to identify their qualifications to compete. Based on the responses, the customer would then select four bidders to develop virtual prototypes addressing the requirements, operational scenarios, and evaluation criteria, and providing evidence of their proposed agent-based RPV controllers’ level of performance. The customer would then have the set of independent experts evaluate the bidders’ results. Based on the results, it would perform an evidence- and risk-based Valuation Commitment Review to determine whether the technology was too immature to merit further current investment as an acquisition program, or whether the system performance, cost, and risk were acceptable for investing the next level of resources in addressing the problems identified and developing initial prototype physical capabilities.
As was discovered much more expensively in the failure case described earlier, the prospects for developing a 4:1 capability were clearly unrealistic. The competitors’ desire to succeed led to several innovative approaches, but also to indications that having a single controller handle multiple-version RPV controls would lead to too many critical errors. Overall, however, the prospects for a 1:1 capability were sufficiently attractive to merit another level of investment, corresponding to a Valuation phase. This phase was funded at $75 million; some of the more ambitious key performance parameters were scaled back; the competitors were down-selected to three; and physical RPVs, basic in capability but spanning multiple versions, were provided for the competitors to control in several physical environments.
The evaluation of the resulting prototypes confirmed that the need to control multiple versions of the RPVs made anything higher than a 1:1 capability infeasible. However, the top two competitors provided sufficient evidence of a 1:1 system feasibility that a Foundations Commitment Review was passed, and $225 million was provided for a Foundations phase: $100 million for each of the top competitors, and $25 million for customer preparation activities and the independent experts’ evaluations.
In this phase, the two competitors not only developed operational RPV versions, but also provided evidence of their ability to satisfy the key performance parameters and scenarios. In addition, they developed an ICSM Development Commitment Review package, including the proposed system’s concept of operation, requirements, architecture, and plans, along with a Feasibility Evidence Description providing evidence that a system built to the architecture would satisfy the requirements and concept of operation, and be buildable within the budget and schedule in the plan.
The feasibility evidence included a few shortfalls, such as remaining uncertainties in the interface protocols with some interoperating systems, but each of these was covered by a risk mitigation plan in the winning competitor’s submission. The resulting Development Commitment Review was passed, and the winner’s proposed $675 million, 18-month, three-increment Stage II plan to develop an initial operational capability (IOC) was adopted. The resulting 1:1 IOC was delivered on budget and 2 months later than the original 40-month target, with a few lower-priority features deferred to later system increments. Figure 3-3 shows the comparative timelines for the Sequential and Concurrent approaches.
FIGURE 3-3 Comparative Timelines
Of the $1 billion spent, $15 million went to the three Exploration-phase competitors who were eventually discontinued, $40 million to the two Valuation-phase competitors who were eventually discontinued, and $100 million to the discontinued Foundations-phase competitor. Overall, the competitive energy stimulated and the early risks avoided made this a good investment. Moreover, the $125 million of experience built up by the losing finalist across the three phases could also be put to good use by awarding the finalist a contract to build and operate a testbed for evaluating the RPV system’s performance.
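The funding figures above can be cross-checked with a little arithmetic. In the sketch below, the phase totals and the $675 million Stage II budget come from the narrative; the per-competitor Exploration and Valuation shares ($5 million and $20 million) are inferred from the discontinued-competitor totals, not stated in the text.

```python
# Sanity check of the funding figures in the narrative (all values in $M).
# Phase totals are from the text; per-competitor shares for Exploration
# and Valuation ($5M, $20M) are inferred from the discontinued totals.

phases = {
    "Exploration": {"total": 25,  "competitors": 4, "per_competitor": 5},
    "Valuation":   {"total": 75,  "competitors": 3, "per_competitor": 20},
    "Foundations": {"total": 225, "competitors": 2, "per_competitor": 100},
}
stage2 = 675  # winner's three-increment Stage II plan

# The four budgets sum to the $1 billion total expenditure.
total = sum(p["total"] for p in phases.values()) + stage2
assert total == 1000

# Spending on eventually-discontinued competitors: three losing
# Exploration bidders, two losing Valuation competitors, and the
# losing Foundations finalist.
unused = 3 * 5 + 2 * 20 + 1 * 100
assert unused == 155

# The losing finalist's cumulative funding across all three phases.
finalist = 5 + 20 + 100
assert finalist == 125
```

Under these inferred shares, the $155 million of "unused prototype" spending and the $125 million invested in the losing finalist both fall out consistently.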
Actually, it would be best to announce such an outcome in advance, and to do extensive team building and award fee structuring to make the testbed activity constructive rather than adversarial.
While the sequential and concurrent cases were constructed in an RPV context from representative projects elsewhere, they show how a premature total commitment, made without adequate resources for and commitment to early concurrent engineering of the modeling, analysis, and feasibility assessment of the overall system, will often lead to large cost and schedule overruns and to performance considerably below what was initially desired. By “buying information” early, in contrast, the concurrent incremental-commitment and competitive-prototyping approach was able to develop a system with much less late rework than the sequential total-commitment approach, and with much more visibility into and control over the process.
The competitive prototyping approach spent about $155 million on unused prototypes, but the overall expenditure was only $1 billion as compared to $3 billion for the total-commitment approach, and the capability was delivered in 42 versus 80 months, which indicates a strong return on investment. Further, the funding organizations had realistic expectations of the outcome, so that a 1:1 capability was a successful realization of an expected outcome, rather than a disappointing shortfall from a promised 4:1 capability. In addition, the investment in the losing finalist could be put to good use by capitalizing on its experience to perform an IV&V role.
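One hedged way to make the "strong return on investment" claim concrete is the simple ratio calculation below. The cost and schedule figures are from the narrative; the ROI formula, and the assumption that the entire $2 billion cost difference counts as savings attributable to the $155 million spent buying information, are illustrative simplifications rather than computations given in the text.

```python
# Cost/schedule comparison of the two approaches (values from the text).
concurrent_cost, sequential_cost = 1000, 3000      # $M
concurrent_months, sequential_months = 42, 80

cost_ratio = sequential_cost / concurrent_cost          # 3.0x cost
schedule_ratio = sequential_months / concurrent_months  # ~1.9x schedule

# Illustrative ROI on the $155M "buying information" investment in
# unused prototypes, assuming the full cost difference counts as savings:
#   ROI = (savings - cost) / cost
info_cost = 155
savings = sequential_cost - concurrent_cost  # 2000
roi = (savings - info_cost) / info_cost      # ~11.9
```

Even under much more conservative assumptions about how much of the savings the prototyping deserves credit for, the investment in early information pays for itself many times over.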
Competitive prototyping can lead to strong successes, but it is also important to identify its potential failure modes. These include under-investment in prototype evaluation, leading to insufficient data for good decision making; extra expense in keeping the prototype teams together and productive during often-overlong evaluation and decision periods; and choosing system developers based too much on prototyping brilliance and too little on their ability to systems-engineer and production-engineer the needed products.[4] These problem areas are easier to control in competitions among in-house design groups, an approach successfully used by a number of large corporations.