- 1.1 Introduction
- 1.2 What Do We Mean by Lifecycle Assurance?
- 1.3 Introducing Principles for Software Assurance
- 1.4 Addressing Lifecycle Assurance
- 1.5 Case Studies Used in This Book
1.4 Addressing Lifecycle Assurance
In general, we build and acquire operational systems through coordinated actions involving a set of predefined steps referred to as a lifecycle. Most organizations use a lifecycle model of some type, although these models vary from one organization to another. In this book, the approaches we describe relate to particular lifecycle activities, but we try to be independent of specific lifecycle models. Standards such as ISO 15288 and NIST SP 800-160 can provide guidance to those looking for additional background on suitable lifecycles in support of software assurance.
Organizations make or buy technology to meet specified performance parameters, but they rarely consider how a new development or acquisition will function within its intended deployment environment or what unintended consequences are possible. For example, security defects (also referred to as vulnerabilities) provide opportunities for attackers to gain access to confidential data, disrupt access to system capabilities, and make unauthorized changes to data and software. Organizations tend to view higher quality and greater security as increases in operational cost, but they fail to consider the total cost of ownership over the long term, which includes the cost of dealing with future compromises. The lack of a comprehensive strategy for how a system or software product is constructed, operated, and maintained creates fertile ground for compromise.
Every component of the software system and its interfaces must be operated and sustained with organizational risk in mind. Planning and executing the response to that risk is a strategic requirement, one that makes comprehensive lifecycle protection processes an absolute necessity.
There is always uncertainty about a software system’s behavior. At the start of development, we have very general knowledge of the operational and security challenges that might arise as well as the security behavior that we want when the system is deployed. A quality measure of the design and implementation is the confidence we have that the delivered system will behave as specified.
At the start of a development cycle, we have a limited basis for determining our confidence in the behavior of the delivered system; that is, we have a large gap between our initial level of confidence and the desired level of confidence. Over the development lifecycle, we need to reduce that confidence gap, as shown in Figure 1.1, to reach the desired level of confidence for the delivered system.
Figure 1.1 Confidence Gap
With existing software security practices, we can apply source-code static analysis and testing toward the end of the lifecycle. For the earlier lifecycle phases, we need to evaluate how the engineering decisions made during design affect the injection or removal of defects. Just as reliability depends on identifying and mitigating potential faults, software security depends on identifying and mitigating exploitable conditions, such as unverified input data. A design review must confirm that the business risks linked to fault, vulnerability, and defect consequences are identified and mitigated by specific design features. Software-intensive systems are complex; it is not surprising that the analysis, even when an expert designer performs it, can be incomplete, can overlook a security problem, or can rest on simplifying but invalid development and operating assumptions.
Our confidence in the engineering of software must be based on more than opinion. If we claim the resulting system will be secure, our confidence in the claim depends on the quality of evidence provided to support the claim, on confirmation that the structure of the argument about the evidence is appropriate to meet the claim, and on the sufficiency of the evidence provided. If we claim that we have reduced vulnerabilities by verifying all inputs, then the results of extensive testing using invalid and valid data provide evidence to support the claim.
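To make that kind of claim and evidence concrete, consider a minimal sketch in Python, assuming a hypothetical validate_username routine and test cases of our own invention; they stand in for whatever input-handling code and test plan a real system would have. Passing tests over both valid and invalid data are the sort of artifact that can later serve as evidence in an argument.

```python
import re
import unittest

# Hypothetical allowed form for one kind of external input.
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(value):
    """Return the username unchanged if it matches the allowed pattern; otherwise raise."""
    if not isinstance(value, str) or not USERNAME_PATTERN.fullmatch(value):
        raise ValueError("invalid username")
    return value

class ValidateUsernameTests(unittest.TestCase):
    """Results of tests like these are evidence supporting the claim
    'all inputs are verified before use'."""

    def test_accepts_valid_input(self):
        self.assertEqual(validate_username("alice_01"), "alice_01")

    def test_rejects_invalid_input(self):
        # Invalid cases: empty, too short, too long, injection attempt, wrong type.
        for bad in ["", "ab", "x" * 40, "robert'); DROP TABLE users;--", None]:
            with self.assertRaises(ValueError):
                validate_username(bad)

if __name__ == "__main__":
    unittest.main()
```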
We refer to the combination of evidence and argument as an assurance case, which can be defined as follows:
An assurance case is a documented body of evidence that provides a convincing and valid argument that a specified set of critical claims about a system’s properties are adequately justified for a given application in a given environment.
[Kelly 1998]
ISO/IEC 15026 provides the following alternative definition of an assurance case [ISO/IEC 2007]:
An assurance case includes a top-level claim for a property of a system or product (or set of claims), systematic argumentation regarding this claim, and the evidence and explicit assumptions that underlie this argumentation. Arguing through multiple levels of subordinate claims, this structured argumentation connects the top-level claim to the evidence and assumptions.
An analysis of an assurance case does not evaluate the process by which an engineering decision was made. Rather, it is a justification of a predicted result based on available information (evidence). An assurance case does not imply any kind of guarantee or certification. It is simply a way to document the rationale behind system design decisions.
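The structure is easier to see in a small sketch. The Python classes below are our own illustrative representation of a claim, its supporting argument, subordinate claims, and evidence; they are not a standardized assurance-case notation such as GSN, and the example claim text is hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    description: str   # e.g., a test report or static-analysis result
    source: str        # where the evidence came from

@dataclass
class Claim:
    statement: str                                         # the property being claimed
    argument: str = ""                                      # why the evidence justifies the claim
    evidence: List[Evidence] = field(default_factory=list)
    subclaims: List["Claim"] = field(default_factory=list)

# A top-level claim connected, through a subordinate claim, to concrete evidence.
case = Claim(
    statement="The delivered system adequately resists injection attacks",
    argument="All externally supplied inputs are verified before use",
    subclaims=[
        Claim(
            statement="Every input-handling module rejects malformed data",
            argument="Unit and fuzz testing exercised valid and invalid inputs",
            evidence=[Evidence("Input-validation test report", "QA regression suite")],
        )
    ],
)
```

The nesting mirrors the multi-level argumentation in the ISO/IEC 15026 definition: the top-level claim is justified only through its subordinate claims, each of which is tied to explicit evidence and assumptions.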
Doubts play a significant role in justifying claims. During a review, an assurance case developer must justify through evidence that a set of claims has been met. A typical reviewer looks for reasons to doubt the claim. For example, a reviewer might do any of the following:
- Doubt the claim: There is information that contradicts the claim.
- Doubt the argument: For example, the static analysis does not apply to the claim that a specific vulnerability has been eliminated, or the analysis does not consider the case in which the internal network has been compromised.
- Doubt the evidence: For example, the security testing or static analysis was done by inexperienced staff, or the testing plan does not sufficiently consider recovery following a compromise.
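As a rough sketch of how a review team might track such doubts (the enumeration and helper below are assumptions made for illustration, not a published checklist format), each doubt can be recorded against the part of the case it targets so that it is either resolved with new evidence or explicitly accepted:

```python
from enum import Enum

class DoubtTarget(Enum):
    CLAIM = "claim"        # information contradicts the claim itself
    ARGUMENT = "argument"  # the reasoning does not connect the evidence to the claim
    EVIDENCE = "evidence"  # the evidence is weak, incomplete, or untrustworthy

def record_doubt(target: DoubtTarget, rationale: str) -> dict:
    """Capture a reviewer's doubt so it can be resolved or explicitly accepted."""
    return {"target": target.value, "rationale": rationale}

review_doubts = [
    record_doubt(DoubtTarget.ARGUMENT,
                 "Static analysis results do not cover a compromised internal network."),
    record_doubt(DoubtTarget.EVIDENCE,
                 "Security testing was performed by staff with no prior tool experience."),
]
```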
Quality and reliability can be considered evidence to be incorporated into an argument about predicted software security. Standards and policy frameworks become an important part of this discussion because they are the software industry’s accepted means of structuring and documenting best practice. Frameworks and policies encapsulate and communicate a complete, coherent approach, along with methods for tailoring that approach to a particular area of real-world work. Frameworks for a defined area of work are created and endorsed by recognized entities such as the Software Engineering Institute (SEI), International Organization for Standardization (ISO), National Institute of Standards and Technology (NIST), Institute of Electrical and Electronics Engineers (IEEE), and Association for Computing Machinery (ACM).
Each framework typically focuses on a specific aspect of the lifecycle. The SEI has published several process models that center on communicating a particular approach to an issue or concern. Within the process domain, some SEI models focus on applying best practices to create a more effective software organization. Many widely accepted frameworks predate the emergence of critical operational security concerns and do not effectively address security.