Security Architecture
A robust system design is a good start, but real security requires a security architecture that controls processes and applications. The concepts related to security architecture include the following:
- Protection rings
- Trusted computer base (TCB)
- Open and closed systems
- Security modes
- Recovery procedures
Protection Rings
The operating system knows who and what to trust by relying on rings of protection. Rings of protection work much like your network of family, friends, coworkers, and acquaintances. The people who are closest to you, such as your spouse and family, have the highest level of trust. Those who are distant acquaintances or are unknown to you probably have a lower level of trust. It’s much like the guy you see in New York City on Canal Street trying to sell new Rolex watches for $100; you should have little trust in him and his relationship with the Rolex company!
In reality, the protection rings are conceptual. Figure 5.2 shows an illustration of the protection ring schema. The first implementation of such a system was in MIT’s Multics time-shared operating system.
Figure 5.2. Rings of protection.
The protection ring model provides the operating system with various levels at which to execute code or to restrict that code’s access. The rings provide much greater granularity than a system that just operates in user and privileged mode. As code moves toward the outer bounds of the model, the layer number increases and the level of trust decreases.
- Layer 0—The most trusted level. The operating system kernel resides at this level. Any process running at layer 0 is said to be operating in privileged mode.
- Layer 1—Contains nonprivileged portions of the operating system.
- Layer 2—Where I/O drivers, low-level operations, and utilities reside.
- Layer 3—Where applications and processes operate. This is the level at which individuals usually interact with the operating system. Applications operating here are said to be working in user mode.
Not all systems use all rings. Most systems in use today operate in two modes: user mode and supervisor (privileged) mode. Items that need high security, such as the operating system security kernel, are located at the center ring. This ring is unique because it has access rights to all domains in that system. Protection rings are part of the trusted computing base concept.
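The ring relationship described above can be reduced to a simple rule: the lower the ring number, the greater the trust. The following is a minimal Python sketch of that rule, not a real operating system mechanism; the function name and ring labels are illustrative only.

```python
# Illustrative sketch of ring-based access: lower ring number = higher trust.
RING_NAMES = {0: "kernel", 1: "OS (nonprivileged)", 2: "drivers/utilities", 3: "applications"}

def may_access(process_ring: int, resource_ring: int) -> bool:
    """A process may directly access resources in its own ring or any
    less-trusted (higher-numbered) ring, but never a more-trusted one."""
    return process_ring <= resource_ring

# The kernel (ring 0) can touch application memory, but a user-mode
# application (ring 3) cannot touch kernel memory.
assert may_access(0, 3) is True
assert may_access(3, 0) is False
```

In real processors this rule is enforced in hardware (for example, via privilege-level checks on memory and instruction access) rather than by a software comparison, but the ordering of trust is the same.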
Trusted Computer Base
The trusted computer base (TCB) is the sum of all the protection mechanisms within a computer, including hardware, software, controls, and processes, and it is responsible for enforcing the security policy and thereby protecting confidentiality and integrity. The TCB is the only portion of a system that operates at a high level of trust. It monitors four basic functions:
- Input/output operations—I/O operations are a security concern because operations from the outermost rings might need to interface with rings of greater protection. These cross-domain communications must be monitored.
- Execution domain switching—Applications running in one domain or level of protection often invoke applications or services in other domains. If these requests are to obtain more sensitive data or service, their activity must be controlled.
- Memory protection—To truly be secure, the TCB must monitor memory references to verify confidentiality and integrity in storage.
- Process activation—Registers, process status information, and file access lists are vulnerable to loss of confidentiality in a multiprogramming environment. This type of potentially sensitive information must be protected.
The TCB monitors the functions in the preceding list to ensure that the system operates correctly and adheres to security policy. The TCB follows the reference monitor concept. The reference monitor is an abstract machine that is used to implement security. The reference monitor’s job is to validate access to objects by authorized subjects. The reference monitor operates at the boundary between the trusted and untrusted realm. The reference monitor has three properties:
- Cannot be bypassed and controls all access
- Cannot be altered and is protected from modification or change
- Can be verified and tested to be correct
The reference monitor is much like the bouncer at a club because it stands between each subject and object. Its role is to verify that the subject meets the minimum requirements for access to an object, as illustrated in Figure 5.3.
Figure 5.3. Reference monitor.
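The bouncer analogy can be sketched in a few lines of Python. This is a hedged illustration of the reference monitor concept, not an implementation: the rule base and the function name (`request_access`) are hypothetical, and in a real system the check could not be bypassed or altered.

```python
# Sketch of the reference monitor concept: every (subject, object, operation)
# request is validated against a single, protected rule base.
ACCESS_RULES = {
    ("alice", "payroll.db"): {"read"},
    ("bob", "payroll.db"): {"read", "write"},
}

def request_access(subject: str, obj: str, operation: str) -> bool:
    """All access is mediated here; there is no path around this check."""
    return operation in ACCESS_RULES.get((subject, obj), set())

assert request_access("bob", "payroll.db", "write") is True
assert request_access("alice", "payroll.db", "write") is False
assert request_access("eve", "payroll.db", "read") is False
```

Note how the three properties map onto the sketch: the function is the only access path (cannot be bypassed), its rule base would be protected from modification, and its small size makes it verifiable.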
The reference monitor can be designed to use tokens, capability lists, or labels.
- Tokens—Communicate security attributes before requesting access.
- Capability lists—Offer faster lookup than security tokens but are not as flexible.
- Security labels—Used by high-security systems because labels offer permanence, a property that only security labels provide.
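The contrast between capability lists and security labels can be sketched as follows. This is an illustrative comparison only; all names and the level ordering are hypothetical, and real systems implement both mechanisms in the kernel rather than in application code.

```python
# Capability list: indexed by subject, so lookup of what a subject holds
# is fast, but the rights travel with the subject rather than the object.
capabilities = {"alice": {("report.txt", "read"), ("report.txt", "write")}}

def cap_check(subject, obj, right):
    return (obj, right) in capabilities.get(subject, set())

# Security label: a sensitivity level bound permanently to the object
# itself, compared against the subject's clearance at access time.
LEVELS = {"unclassified": 0, "secret": 1, "top_secret": 2}

def label_check(subject_clearance, object_label):
    # Subject may read objects at or below its own clearance level.
    return LEVELS[subject_clearance] >= LEVELS[object_label]

assert cap_check("alice", "report.txt", "read") is True
assert cap_check("alice", "report.txt", "delete") is False
assert label_check("top_secret", "secret") is True
assert label_check("secret", "top_secret") is False
```

The sketch shows why labels offer permanence: the label lives with the object, so it survives copying and renaming, whereas a capability list must be updated whenever a subject's holdings change.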
At the heart of the system is the security kernel, which handles all user and application requests for access to system resources. A small security kernel is easy to verify, test, and validate as secure. In practice, however, the security kernel is often bloated with code that does not strictly belong there, because processes located inside the kernel run faster and have privileged access. Linux and Windows have opted for fairly large security kernels, sacrificing the verifiability of a small kernel in return for performance gains. Figure 5.4 illustrates an example of the design of the Windows OS.
Figure 5.4. Security kernel.
Although the reference monitor is conceptual, the security kernel can be found at the heart of every system. The security kernel is responsible for running the required controls used to enforce functionality and resist known attacks. As mentioned previously, the reference monitor operates at the security perimeter—the boundary between the trusted and untrusted realm. Components outside the security perimeter are not trusted. All control and enforcement mechanisms are inside the security perimeter.
Open and Closed Systems
Open systems accept input from other vendors and are based on standards and practices that allow connection to different devices and interfaces. The goal is full interoperability, so that the system can work with components from many sources.
Closed systems are proprietary. They use devices not based on open standards and are generally locked. They lack standard interfaces to allow connection to other devices and interfaces.
An example of this can be seen in the United States cell phone industry. AT&T and T-Mobile cell phones are based on the worldwide Global System for Mobile Communications (GSM) standard and can be used overseas easily on other networks by simply changing the subscriber identity module (SIM). These are open-system phones. Phones that are used on the Sprint network use Code Division Multiple Access (CDMA), which does not have worldwide support.
Security Modes of Operation
Several security modes of operation are based on Department of Defense (DoD 5220.22-M) classification levels, as defined at http://www.dtic.mil/whs/directives/corres/html/522022m.htm. Based on the sensitivity of the information being processed on a system and the clearance level of its authorized users, the DoD defines four modes of operation (see Table 5.2):
- Dedicated—Every user has a need-to-know for all information stored or processed on the system, holds formal access approval, and has executed all appropriate nondisclosure agreements for that information. This mode must also support enforced system access procedures. All hardcopy output and removed media are handled at the level for which the system is accredited until reviewed by a knowledgeable individual. All users can access all data.
- System High—Every user has a need-to-know for some of the information contained within the system, holds access approval, and has signed nondisclosure agreements for all the information stored and/or processed. Access permission to an object by users not already possessing access permission must be assigned only by authorized users of the object. This mode must provide an audit trail capability that records time, date, user ID, terminal ID (if applicable), and file name. All users can access some data, based on their need to know.
- Compartmented—Every user has a valid need-to-know for some of the information on the system, holds formal access approval for all information he or she will access, and has the proper clearance for the highest level of data classification on the system. All users have signed NDAs for all information they will access. All users can access some data, based on their need to know and formal access approval.
- Multilevel—Every user has a valid need-to-know, formal access approval, and a signed nondisclosure agreement for the information to which he or she is to have access. Mandatory access controls restrict access to files based on their sensitivity labels. All users can access some data, based on their need to know, clearance, and formal access approval.
Table 5.2. Security Modes of Operation
| Mode | Dedicated | System High | Compartmented | Multilevel |
| --- | --- | --- | --- | --- |
| Signed NDA | All | All | All | All |
| Clearance | All | All | All | Some |
| Approval | All | All | Some | Some |
| Need to Know | All | Some | Some | Some |
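The pattern in Table 5.2 can be expressed as a short decision function. This is a hedged study aid rather than anything from the DoD directive: it assumes all users have signed NDAs (true in all four modes) and takes three booleans indicating whether *all* users hold clearance, formal approval, and need-to-know for *all* data on the system.

```python
# Sketch of Table 5.2 as code; function name and parameters are illustrative.
def security_mode(clearance_all: bool, approval_all: bool, ntk_all: bool) -> str:
    """Return the security mode of operation, assuming every user has
    signed an NDA (required by all four modes)."""
    if clearance_all and approval_all and ntk_all:
        return "Dedicated"
    if clearance_all and approval_all:
        return "System High"
    if clearance_all:
        return "Compartmented"
    return "Multilevel"

assert security_mode(True, True, True) == "Dedicated"
assert security_mode(True, True, False) == "System High"
assert security_mode(True, False, False) == "Compartmented"
assert security_mode(False, False, False) == "Multilevel"
```

Reading the function top to bottom mirrors reading the table left to right: each mode relaxes one more "All" to "Some".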
Operating States
When systems are used to process and store sensitive information, there must be some agreed-on method for how this will work. Generally, these concepts were developed to meet the requirements of handling sensitive government information, with categories such as sensitive, secret, and top secret. The burden of handling this task can be placed either on the administrator or on the system itself.
Single-state systems are designed and implemented to handle one category of information. The burden of management falls on the administrator who must develop the policy and procedures to manage this system. The administrator must also determine who has access and what type of access the users have. These systems are dedicated to one mode of operation, so they are sometimes referred to as dedicated systems.
Multistate systems depend not on the administrator, but on the system itself. They are capable of having more than one person log in to the system and access various types of data depending upon the level of clearance. As you would probably expect, these systems are not inexpensive. The XTS-400 that runs the Secure Trusted Operating Program (STOP) OS from BAE Systems is an example of a multilevel state computer system. Multistate systems can operate as a compartmentalized system. This means that Mike can log in to the system with a secret clearance and access secret-level data, whereas Carl can log in with top-secret level access and access a different level of data. These systems are compartmentalized and can segment data on a need-to-know basis.
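The Mike-and-Carl example can be sketched as the kind of check a multistate system performs itself. This is an illustrative model only, with hypothetical names and a made-up two-level ordering; real systems such as the XTS-400 enforce this in the trusted operating system, not in application code.

```python
# Sketch of a multistate, compartmented access check: access requires both
# sufficient clearance level and the right need-to-know compartments.
LEVELS = {"secret": 1, "top_secret": 2}

def can_read(user: dict, data: dict) -> bool:
    """user and data each carry a sensitivity level and a set of compartments."""
    dominates = LEVELS[user["level"]] >= LEVELS[data["level"]]
    need_to_know = data["compartments"] <= user["compartments"]
    return dominates and need_to_know

mike = {"level": "secret", "compartments": {"ops"}}
carl = {"level": "top_secret", "compartments": {"ops", "intel"}}
intel_report = {"level": "top_secret", "compartments": {"intel"}}

# Carl's clearance and compartments admit him; Mike's do not.
assert can_read(carl, intel_report) is True
assert can_read(mike, intel_report) is False
```

The point of the sketch is that the *system* makes this decision at access time for each logged-in user, which is what distinguishes a multistate system from a single-state system managed by administrative procedure.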
Recovery Procedures
Unfortunately, systems don't always operate normally; things sometimes go wrong, and a system failure can occur. A system failure could potentially compromise the system. Efficient designs have built-in recovery procedures for recovering from potential problems:
- Fail safe—If a failure is detected, the system is protected from compromise by termination of services.
- Fail soft—A detected failure terminates the noncritical process and the system continues to function.
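The difference between the two behaviors can be sketched as follows. This is a toy illustration, not an OS mechanism; the process names and the `recover` function are made up for the example.

```python
# Sketch of fail-safe vs. fail-soft responses to a detected failure.
def recover(processes: dict, mode: str) -> dict:
    """processes maps name -> is_critical; returns what is left running."""
    if mode == "fail_safe":
        return {}  # terminate all services to protect the system
    if mode == "fail_soft":
        # shed only noncritical processes; the system keeps functioning
        return {name: crit for name, crit in processes.items() if crit}
    raise ValueError(f"unknown mode: {mode}")

running = {"security_kernel": True, "audit": True, "screensaver": False}
assert recover(running, "fail_safe") == {}
assert recover(running, "fail_soft") == {"security_kernel": True, "audit": True}
```

The design trade-off is availability versus assurance: fail safe guarantees no compromised service keeps running, while fail soft preserves critical service at the cost of continuing to operate in a degraded state.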
It is important to be able to recover when an issue arises. This requires taking a proactive approach and backing up all critical files on a regular schedule. The goal of recovery is to recover to a known state. Common issues that require recovery include
- System Reboot—An unexpected/unscheduled event.
- System Restart—Automatically occurs when the system goes down and forces an immediate reboot.
- System Cold Start—Results from a major failure or component replacement.
- System Compromise—Caused by an attack or breach of security.
Process Isolation
Process isolation is required to maintain a high level of system trust. To be certified as a multilevel security system, process isolation must be supported. Without process isolation, there would be no way to prevent one process from spilling over into another process’s memory space, corrupting data, or possibly making the whole system unstable. Process isolation is performed by the operating system; its job is to enforce memory boundaries.
For a system to be secure, the operating system must prevent unauthorized users from accessing areas of the system to which they should not have access. Sometimes this is done by means of a virtual machine. A virtual machine allows users to believe that they have the use of the entire system, but in reality, processes are completely isolated. To take this concept a step further, some systems that require truly robust security also implement hardware isolation. This means that the processes are segmented not only logically but also physically.
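Process isolation and its enforced memory boundaries can be observed directly from user space. The following hedged demonstration uses Python's `multiprocessing` module: because the operating system gives the child process its own address space, a mutation made in the child never appears in the parent. The variable and function names are illustrative.

```python
# Demonstration that OS-enforced process isolation keeps one process's
# memory invisible to another: the child mutates only its own copy.
import multiprocessing

data = {"secret": "parent-only"}

def overwrite():
    # Runs in the child process; changes the child's copy of `data` only.
    data["secret"] = "changed-in-child"

if __name__ == "__main__":
    p = multiprocessing.Process(target=overwrite)
    p.start()
    p.join()
    # The parent's memory is untouched: the memory boundary held.
    assert data["secret"] == "parent-only"
```

Whether the platform starts children by forking (copy-on-write pages) or by spawning a fresh interpreter, the result is the same: no write in one process crosses into another's memory space, which is exactly the boundary the operating system is enforcing.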