- 1.1 Introduction
- 1.2 What Do We Mean by Lifecycle Assurance?
- 1.3 Introducing Principles for Software Assurance
- 1.4 Addressing Lifecycle Assurance
- 1.5 Case Studies Used in This Book
1.2 What Do We Mean by Lifecycle Assurance?
The accelerating pace of attacks and the apparent growth in the number of vulnerabilities suggest that the gap between attacks and data protection is widening even as our ability to deal with attacks diminishes. Much of the information protection in place today is based on principles established by Saltzer and Schroeder in “The Protection of Information in Computer Systems,” which appeared in Communications of the ACM in 1974. They defined security as “techniques that control who may use or modify the computer or the information contained in it” and described three main categories of concern: confidentiality, integrity, and availability (CIA) [Saltzer 1974].
As security problems expanded to include malware, viruses, Structured Query Language (SQL) injection, cross-site scripting, and other attack mechanisms, those problems changed the structure of software and how it performs. Focusing on information protection alone proved vastly insufficient. The role of software in systems also expanded: software now controls the majority of system functionality, making the impact of a security failure more critical. Those working with deployed systems refer to this enhanced security need as cyber security assurance, while those in acquisition and development typically speak of software assurance. Many definitions of each have appeared, including these:
“The level of confidence we have that a system behaves as expected and the security risks associated with the business use of the software are acceptable” [Woody 2014]
“The level of confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its lifecycle, and that the software functions in the intended manner”
“Software Assurance: Implementing software with a level of confidence that the software functions as intended and is free of vulnerabilities, either intentionally or unintentionally designed or inserted as part of the software, throughout the lifecycle” [Woody 2014]
However, the most recent set of definitions of software assurance from the Committee on National Security Systems [CNSS 2015] takes a different tack, using DoD and NASA definitions:
“The level of confidence that software functions as intended and is free of vulnerabilities, either intentionally or unintentionally designed or inserted as part of the software throughout the lifecycle” [DoD 2012]
“The planned and systematic set of activities that ensure that software lifecycle processes and products conform to requirements, standards, and procedures” [NASA 2004]
Finally, the ISO standards provide comprehensive coverage of the various topics, although the topics appear in various places in the standards, and not necessarily in a concise definition [ISO/IEC 2008a, 2008b, 2009, 2011, 2015].
As shown in Table 1.1, the various definitions of software assurance generally include the requirement that software functions as expected or intended. Across these definitions, it is usually more feasible to achieve an acceptable risk level (although what that level should be remains somewhat obscure) than to gain confidence that software is free from vulnerabilities. After all, how do you know how many vulnerabilities actually remain? In practice, you might continue looking for errors, weaknesses, and vulnerabilities until diminishing returns make it apparent that further testing does not pay, but it is not always obvious when you have reached that point. This is especially true when testing for cyber security vulnerabilities, since software is delivered into many different contexts and the variety of cyberattacks is virtually limitless.
Table 1.1 Comparison of Software Assurance Definitions from Various Sources

| Definition of Software Assurance | Woody | MITRE | CNSS | CNSS April 2015 (DoDI) | CNSS April 2015 (NASA-STD) | ISO/IEC (Parts 1, …) | ISO/IEC (Parts 1 …) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Level of confidence | X | X | X | X | | X | X |
| Functions as intended | X | X | X | X | X | X | X |
| Free from vulnerabilities | | X | X | X | | X | X |
| Intentionally or accidentally inserted | | X | X | X | | X | |
| Software lifecycle process | | X | X | X | | X | X |
| Acceptable business risks | X | | | | | X | X |
| Business use of software | X | | | | | X | X |
| Set of activities that conform to product | | | | | X | X | X |
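The diminishing-returns judgment described above can be made concrete with a simple stopping heuristic. The sketch below is illustrative, not from the original text: it assumes equal-effort test intervals and stops when newly found defects become a small fraction of the running total, a crude proxy for the point at which further testing does not pay.

```python
# Illustrative sketch of a diminishing-returns stopping rule for testing.
# Assumes equal-effort test intervals; real vulnerability discovery is far
# less regular, especially for security flaws across deployment contexts.

def diminishing_returns(found_per_interval, threshold=0.1):
    """Return the index of the first interval at which newly found defects
    fall below `threshold` as a fraction of the running total, or None if
    discovery never tapers off within the observed history."""
    total = 0
    for i, found in enumerate(found_per_interval):
        total += found
        if total and found / total < threshold:
            return i
    return None

# Example: defects found in successive, equal-effort test intervals.
history = [14, 9, 6, 3, 2, 1, 1]
stop_at = diminishing_returns(history)  # tapers off at interval index 3
```

The threshold is a policy choice, not a fact about the software; as the text notes, for security vulnerabilities a tapering discovery rate does not mean few vulnerabilities remain.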
Since we are increasingly seeing the integration and interoperation of security-critical and safety-critical systems, it makes sense to come up with an overarching definition of software assurance that covers both security and safety. In some ways, the differing approaches reflected in the existing definitions are responses to the risks posed by modern systems of systems.
Further challenges to effective operational security come from the increased use of commercial off-the-shelf (COTS) and open source software as components within a system. The resulting operational systems integrate software from many sources, each piece developed and maintained as a discrete product.
Shepherding a software-intensive system through project development to deployment is just the beginning of the saga. Sustainment (maintaining a deployed system over time as technology and operational needs change) is a confusing and multifaceted challenge: Each discrete piece of a software-intensive system is enhanced and repaired independently and reintegrated for operational use. As today’s systems increasingly rely on COTS software, the issues surrounding sustainment grow more complex. Ignoring these issues can undermine the stability, security, and longevity of systems in production.
The myth surrounding systems built from COTS products is that commercial products are mature and stable and adhere to well-recognized industry standards. The reality is closer to a Rube Goldberg mix of “glue code” that links the pieces and parts into a working structure. Changing any one of the components, a constant occurrence since vendors provide security updates on their own schedules, can trigger a complete restructuring to return the pieces to a working whole. The same sustainment challenge of accommodating system updates appears for system components built to function as common services in an enterprise environment.
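One standard way to limit the ripple effect of a vendor update is to confine the glue code to a thin adapter, so that when a COTS interface changes, only the adapter is rewritten rather than every caller. A minimal sketch, in which `LegacyPdfLib` is a hypothetical stand-in for a vendor component (not a real library):

```python
# Sketch of glue code isolated behind an adapter. `LegacyPdfLib` is a
# hypothetical stand-in for a COTS component with its own calling
# conventions; only the adapter must change when the vendor ships a
# breaking update.

class LegacyPdfLib:
    """Stand-in for a vendor component the system does not control."""
    def render_document(self, path, dpi):
        return f"rendered {path} at {dpi} dpi"

class PdfRenderer:
    """Adapter: the single place where the vendor API is called."""
    def __init__(self, vendor=None):
        self._vendor = vendor or LegacyPdfLib()

    def render(self, path):
        # Vendor-specific details (argument names, defaults) live here only.
        return self._vendor.render_document(path, dpi=300)

# Application code depends on PdfRenderer, never on LegacyPdfLib directly.
output = PdfRenderer().render("report.pdf")
```

This does not eliminate the restructuring risk the text describes, but it narrows the blast radius of each vendor update to a component the acquiring organization actually owns.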
Systems cannot be constructed to eliminate security risk; instead, they must incorporate capabilities to recognize, resist, and recover from attacks. Initial acquisition and design must prepare the system for implementation and sustainment. As a result, assurance must be planned across the lifecycle to ensure effective operational security over time.
Within this book we use the following definition of software assurance developed to incorporate lifecycle assurance [Mead 2010a]:
Application of technologies and processes to achieve a required level of confidence that software systems and services function in the intended manner, are free from accidental or intentional vulnerabilities, provide security capabilities appropriate to the threat environment, and recover from intrusions and failures.