- 1.1 Introduction
- 1.2 What Do We Mean by Lifecycle Assurance?
- 1.3 Introducing Principles for Software Assurance
- 1.4 Addressing Lifecycle Assurance
- 1.5 Case Studies Used in This Book
1.3 Introducing Principles for Software Assurance
In 1974, Saltzer and Schroeder proposed software design principles that focus on protection mechanisms to “guide the design and contribute to an implementation without security flaws” [Saltzer 1974]. Students still learn these principles in today’s classrooms [Saltzer 1974]; a brief sketch following the list shows how two of them look in code:
Economy of mechanism—Keep the design as simple and small as possible.
Fail-safe defaults—Base access decisions on permission rather than exclusion.
Complete mediation—Every access to every object must be checked for authority.
Open design—The design should not be secret. The mechanisms should not depend on the ignorance of potential attackers but rather on the possession of specific, and more easily protected, keys or passwords.
Separation of privilege—Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key.
Least privilege—Every program and every user of the system should operate using the least set of privileges necessary to complete the job.
Least common mechanism—Minimize the amount of mechanism common to more than one user and depended on by all users.
Psychological acceptability—It is essential that the human interface be designed for ease of use so that users routinely and automatically apply the protection mechanisms correctly.
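A minimal sketch of two of these principles, assuming a hypothetical permissions table and object names (Python is used here only for illustration): access is denied unless explicitly granted (fail-safe defaults), and every read passes through a single check (complete mediation).

# Hypothetical permissions table: (user, object) -> set of allowed actions.
PERMISSIONS = {
    ("alice", "payroll.db"): {"read"},
    ("bob", "payroll.db"): {"read", "write"},
}

def check_access(user: str, obj: str, action: str) -> bool:
    # Fail-safe default: an unlisted (user, object) pair yields the empty
    # set, so the decision is based on permission rather than exclusion.
    return action in PERMISSIONS.get((user, obj), set())

def read_object(user: str, obj: str) -> str:
    # Complete mediation: the only path to the object runs through the check.
    if not check_access(user, obj, "read"):
        raise PermissionError(f"{user} may not read {obj}")
    return f"<contents of {obj}>"

print(read_object("alice", "payroll.db"))   # permitted
# read_object("mallory", "payroll.db")      # denied by default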
Time has shown the value and utility of these principles, but new challenges surfaced soon after Saltzer and Schroeder proposed them. Multiple Virtual Storage (MVS), an advanced operating system in which memory sharing was available to all programs under the control of the OS, was released in March 1974, the same year the principles were published [Wikipedia 2011b]; as a result, the security of the operating system itself became of utmost importance. On November 2, 1988, the Morris worm generated a massive denial of service by infecting more than 6,000 UNIX machines [Wikipedia 2011a]. Although Saltzer and Schroeder’s principles still apply to security within an individual piece of technology, they are no longer sufficient to address the complexity and sophistication of the environment within which that component must operate.
We propose a set of seven principles focused on addressing the challenges of acquiring, building, deploying, and sustaining systems to achieve a desired level of confidence for software assurance:
Risk shall be properly understood in order to drive appropriate assurance decisions—A perception of risk drives assurance decisions. Organizations without effective software assurance perceive risk based on successful attacks on their software and systems, and they usually respond reactively. They may implement assurance choices such as policies, practices, tools, and restrictions based on their perception of the threat of a similar attack and the expected impact if that threat is realized. Organizations can misjudge risk when they do not understand their threats and impacts. Effective software assurance requires organizations to share risk knowledge among all stakeholders and technology participants. Too frequently, organizations consider risk information highly sensitive and do not share it; protecting the information in this way results in uninformed organizations making poor risk choices.
Risk concerns shall be aligned across all stakeholders and all interconnected technology elements—Highly connected systems like the Internet require aligning risk across all stakeholders and all interconnected technology elements; otherwise, critical threats are missed or ignored at different points in the interactions. It is not sufficient to consider only highly critical components when everything is highly interconnected. Interactions occur at many technology levels (e.g., network, security appliances, architecture, applications, data storage) and are supported by a wide range of roles. Protections can be applied at each of these points and may conflict if not well orchestrated. Because of interactions, effective assurance requires that all levels and roles consistently recognize and respond to risk.
Dependencies shall not be trusted until proven trustworthy—Because of the wide use of supply chains for software, assurance of an integrated product depends on other people’s assurance decisions and on the level of trust placed in those dependencies. The integrated software inherits all the assurance limitations of each interacting component. In addition, unless specific restrictions and controls are in place, every operational component, including infrastructure, security software, and other applications, depends on the assurance of every other component. There is a risk each time an organization must depend on others’ assurance decisions. Organizations must decide how much trust to place in dependencies based on realistic assessments of the threats, impacts, and opportunities represented by various interactions. Dependencies are not static, and organizations must regularly review trust relationships to identify changes that warrant reconsideration. The following examples describe assurance losses resulting from dependencies; a short verification sketch follows them:
Defects in standardized pieces of infrastructure (e.g., operating systems, development platforms, firewalls, and routers) can serve as widely available threat entry points for applications.
Using many standardized software tools to build technology makes the assurance of the resulting software product dependent on those tools. Tool builders can introduce vulnerabilities into the software products built with them.
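One common mitigation is to pin and verify cryptographic digests of third-party components before accepting them. The sketch below assumes a hypothetical manifest of vetted dependencies; the file name and digest are placeholders, not real values:

import hashlib

# Hypothetical manifest: file name -> SHA-256 digest recorded when the
# dependency was originally vetted. The digest shown is a placeholder.
PINNED_SHA256 = {
    "vendor-lib-1.4.2.tar.gz": "0" * 64,
}

def verify_dependency(path: str) -> None:
    expected = PINNED_SHA256.get(path)
    if expected is None:
        # Fail-safe default: dependencies absent from the manifest are rejected.
        raise ValueError(f"{path} is not in the vetted manifest")
    with open(path, "rb") as f:
        actual = hashlib.sha256(f.read()).hexdigest()
    if actual != expected:
        raise ValueError(f"{path} does not match its pinned digest")

# verify_dependency("vendor-lib-1.4.2.tar.gz")  # raises until a real digest is pinned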
Attacks shall be expected—A broad community of attackers with growing technology capabilities can compromise the confidentiality, integrity, and availability of an organization’s technology assets. There are no perfect protections against attacks, and the attacker profile is constantly changing. Attackers use technology, processes, standards, and practices to craft compromises (known as socio-technical responses). Some attacks take advantage of the ways we normally use technology, and others create exceptional situations to circumvent defenses.
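In code, expecting attacks often reduces to treating every input as hostile. A minimal sketch, assuming a hypothetical account-ID format of exactly eight alphanumeric characters: input is validated against an explicit allowlist pattern rather than by trying to enumerate known-bad patterns.

import re

# Hypothetical format: account IDs are exactly 8 alphanumeric characters.
ACCOUNT_ID = re.compile(r"[A-Za-z0-9]{8}")

def parse_account_id(raw: str) -> str:
    # Assume hostile input: accept only what matches the expected form.
    if not ACCOUNT_ID.fullmatch(raw):
        raise ValueError("malformed account ID rejected")
    return raw

print(parse_account_id("AB12CD34"))               # accepted
# parse_account_id("1 OR 1=1; DROP TABLE users")  # rejected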
Assurance requires effective coordination among all technology participants—The organization must apply protection broadly across its people, processes, and technology because attackers take advantage of all possible entry points. Authority and responsibility for assurance must be clearly established at an appropriate level so that the organization as a whole participates effectively in software assurance. This assumes that all participants know about assurance, but that is not usually the case; organizations must educate their people on software assurance.
Assurance shall be well planned and dynamic—Assurance must represent a balance among the governance, construction, and operation of software and systems, and it is highly sensitive to changes in each of these areas. Assurance requires an adaptive response to constant changes in applications, interconnections, operational usage, and threats. Assurance is not a once-and-done activity; it must continue beyond the initial operational implementation through operational sustainment. Assurance cannot be added later: systems must be built to the level of assurance that organizations find acceptable. No one has the resources to redesign systems every time the threats change, and adjusting assurance only after a threat has become reality is too late.
A means to measure and audit overall assurance shall be built in—Organizations cannot manage what they do not measure, and stakeholders and technology users do not address assurance unless they are held accountable for it. Assurance cannot compete successfully with other organizational needs unless its results are monitored and measured. All elements of the socio-technical environment, including practices, processes, and procedures, must be tied together to evaluate operational assurance. Organizations with more successful assurance measures react and recover faster, learn from their own reactive responses and those of others, and are more vigilant in anticipating and detecting attacks. Defects per line of code is a common development measure that may be useful for gauging code quality, but it is not sufficient evidence of overall assurance because it provides no perspective on how that code behaves in an operational context. Organizations must take focused and systemic measurements to confirm that components are engineered with sound security and that the interactions among components establish effective assurance.
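As a minimal sketch of this distinction, the fragment below (all figures are hypothetical) computes the familiar development measure and pairs it with simple operational indicators that development measures alone do not capture:

# All figures below are hypothetical, for illustration only.
defects_found = 42
lines_of_code = 56_000

# Familiar development measure: defects per thousand lines of code (KLOC).
defect_density = defects_found / (lines_of_code / 1000)
print(f"Defect density: {defect_density:.2f} defects/KLOC")

# Operational indicators that a development measure alone cannot capture:
# how quickly the organization detects and recovers when the code is attacked.
mean_hours_to_detect = 18.0
mean_hours_to_recover = 6.5
print(f"Mean time to detect an incident: {mean_hours_to_detect} h")
print(f"Mean time to recover: {mean_hours_to_recover} h")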