- Business Environments
- High-Level Policies
- Low-Level Policies
- The Policy Management Tool
Low-Level Policies
The different business needs outlined in the preceding section are satisfied by a variety of techniques. Many of these techniques can be used interchangeably. The choice of scheme depends on the specifics of the particular environment.
In order to support business SLAs, you can follow the approaches of capacity planning, Differentiated Services, or Integrated Services.
The basic idea behind supporting SLAs using capacity planning is fairly simple: Provide enough link bandwidth and processing capacity that the SLA requirements are satisfied. If an SLA is feasible at all (that is, if it can be met on an unloaded network with unloaded servers), it should be possible to determine the link bandwidth and processing capacity that satisfy the performance objectives under normal operating conditions. When the appropriate network has been designed, it can be operated as a best-effort network. Of course, the network needs to be monitored to ensure that the SLAs are being met.
There are many cases in which capacity planning might not be adequate. An example is the case in which loads on the network are not readily predictable or show a sudden spurt of growth. Situations in which capacity planning has failed are quite common on the Internet, where many companies have launched an advertising campaign or hosted an event that caused their servers to become overwhelmed. In these cases, QoS techniques can help ensure that the performance of a subset of the users in the network can be maintained.
As described in Chapter 2, there are two main approaches to support QoS in IP networks: Integrated Services/RSVP and Differentiated Services. The RSVP approach can be loosely described as a signaled approach, and DiffServ is a provisioned multiclass approach. In a signaled approach, applications communicate (or signal) their QoS requirements to the network routers and remote workstations. Each router that is signaled reserves enough local resources (link bandwidth or buffer space) to support the application's QoS requirements. The other approach is to support multiple preprovisioned and differentiated classes of service in the network. These multiple classes are provisioned so as to deliver different levels of average performance. Different service classes have different expectations of average network delays and loss rates. With the provisioned multiclass approach, the network decides to map an application's packet flow into one of these preprovisioned differentiated classes of service and schedules them appropriately. A subset of Differentiated Services capabilities can also be used to provide rate control within the networks. Rate control devices can also be used to effectively control SLAs within the network.
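The provisioned multiclass approach can be illustrated with a small sketch of the marking decision an access router makes. The standard DSCP codepoints (EF and AF41) are real, but the selection rules and port numbers below are purely illustrative assumptions:

```python
# Hypothetical sketch of DiffServ classification at a network edge.
# DSCP codepoints follow the standard definitions; the rules mapping
# traffic to classes are invented for illustration.
DSCP_EF = 46        # Expedited Forwarding: low delay, low loss
DSCP_AF41 = 34      # Assured Forwarding class 4, low drop precedence
DSCP_BE = 0         # Best effort

def classify(protocol, dst_port):
    """Return the DSCP value an access router might mark on a packet."""
    if protocol == "udp" and dst_port == 5060:   # e.g., VoIP traffic (illustrative)
        return DSCP_EF
    if protocol == "tcp" and dst_port == 1433:   # e.g., database traffic (illustrative)
        return DSCP_AF41
    return DSCP_BE                               # everything else is best effort
```

Core routers then schedule packets based solely on the marked value, without re-examining the full headers.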
In order to support the security needs in the different environments, you can use the protocols associated with IPsec or use the analogous transport layer scheme of SSL. Both of these technologies were described in Chapter 2. Either of these approaches can be used to support the different security needs within the different environments.
The next few sections take a closer look at the policy requirements of the different devices within each of the technologies.
Policy Issues with IntServ
The main issue with policy in an integrated services network is answering these questions:
- Who is entitled to signal a reservation request using RSVP?
- Which requests should be honored by a router, and which ones should be rejected?
Because QoS mechanisms are intended to provide an assured performance level for a set of specific applications, their goal is to provide preferential treatment to those applications. These applications can obtain the desired performance by means of reservation. However, nothing prevents other applications in the network from invoking RSVP to reserve bandwidth to improve their own performance. Any application can signal that resources be reserved for it. If no internal charge-back is associated with a reservation, there is no incentive not to request the largest reservation the network will grant. Obviously, a free-for-all reservation architecture is not likely to perform any better than a best-effort service. It can even perform worse: A user who is sloppy about ending reservations might hog a large amount of bandwidth and never give any of it up.
The policy control module in RSVP decides who should be allowed to make reservations and also limits the number and duration of reservations that can be made. When reservation requests are received by the routers, they check the policy control module to ensure that the reservation should be honored. Thus, an enterprise can allow only reservations invoked by some key applications to succeed. Furthermore, it can also determine the amount of bandwidth that should be reserved by each flow belonging to the particular application.
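The admission decision made by such a policy control module can be sketched as a simple rule lookup. The application names, per-user limits, and bandwidth caps below are invented for illustration:

```python
# Illustrative policy control check for an RSVP router: the rule table
# and its limits are hypothetical, not taken from any real product.
POLICY_RULES = {
    # application: (max reservations per user, max bandwidth per flow, kbps)
    "video-conference": (2, 1500),
    "voip":             (4, 64),
}

def admit(application, user_reservations, requested_kbps):
    """Return True if a reservation request should be honored."""
    rule = POLICY_RULES.get(application)
    if rule is None:
        return False          # applications not listed may not reserve
    max_reservations, max_kbps = rule
    return user_reservations < max_reservations and requested_kbps <= max_kbps
```

A router (or an external policy server queried via COPS) would consult a check like this before committing local resources to a flow.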
Some routers in the network might not be capable of making policy decisions on their own. In that case, the routers can obtain policy decisions from an external policy server using a protocol called COPS.
When signaling for the reservation using the PATH message or RESV message, the end-points involved in an RSVP flow can include a policy object as part of the message. This policy object can (among other things) identify the user, organization, or application requesting the reservation. The policy server can thereby enforce policy decisions at various levels of granularity.
Policy decisions can prevent someone from hogging resources or allow reservations to be made only by specific applications that are considered business-critical.
Policy Issues with DiffServ
A DiffServ network consists of two types of boxes: access routers and core routers. The access routers classify the various packets depending on the contents of the packet headers. This classification is marked into the DiffServ field of the IP header. An access router must know the rules that determine how different packets should be marked.
In addition to marking, DiffServ access routers can also implement various types of rate control, capping the amount of network bandwidth that a particular type of traffic is allowed to use. The policy definition for an access router needs to specify any such limits if they exist.
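A common way to enforce such a limit is a token bucket, which admits traffic up to a configured rate while permitting short bursts. This is a minimal sketch; the rate and burst parameters in the usage below are illustrative:

```python
# A minimal token-bucket rate limiter of the kind an access router might
# apply to a traffic class. Tokens accumulate at the configured rate up
# to the burst size; each conforming packet spends tokens equal to its size.
class TokenBucket:
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes     # start with a full bucket
        self.last = 0.0               # timestamp of the previous check

    def conforms(self, packet_bytes, now):
        """Return True if the packet is within the configured rate limit."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False                  # packet exceeds the limit: drop or remark it
```

For example, a bucket configured as `TokenBucket(1000, 1500)` admits a 1500-byte packet immediately, rejects a second one sent at the same instant, and admits a 1000-byte packet one second later once tokens have accumulated.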
The core routers interpret the DiffServ field according to the set of PHBs defined for them. Thus, the policy definition for core routers must specify the type of queuing behavior that corresponds to different packet markings. Such a behavior can indicate the queuing priorities of the different network devices, as well as the rate limits or bandwidth shares that can be allocated to the different classes of traffic.
With the availability of any level of differentiation, you have to decide who or what gets which class of service. The answer to this question constitutes policy in DiffServ networks. In order to manage the performance of a DiffServ network, you must obtain the configuration information for all the DiffServ access routers so that the classifiers and rate controllers at DiffServ boundaries can be managed to meet expectations. Similarly, the core routers that make up a DiffServ network must be configured so that their per-hop behaviors are consistent with the markings assigned to the different applications.
Communication in any network is bidirectional, and improving the quality of communication requires improving performance in both directions. Thus, trying to improve the performance of a specific application session would require configuring at least two access routers (plus the core routers that lie along the path). Coordinating a consistent configuration of multiple access routers is not a trivial task. The goal of the policy management tools described in Chapter 5, "Resource Discovery," is to ensure such a consistent configuration in the various network configurations.
Policies and Device Configuration
There is a subtle but important distinction which needs to be made between the notion of a device configuration and the low-level policies associated with a technology. The low-level policy definition for DiffServ consists of the rules that determine the behavior of the network and devices in a manner that is independent of the details of a specific device. These rules are represented in a format that can be understood and interpreted by any of the devices within the network. As an example, such policies may be represented in an LDAP directory using a commonly accepted schema. Also, these policies may be specified for a group of devices (or for the entire network), rather than for each device individually.
Corresponding to the policy specification, each device can generate its own configuration, which implements the set of policies that are relevant to it. The semantics of the configuration must match the semantics of the low-level policies.
Thus, there are two important differences between low-level policies and device configuration: representation (policies are expressed in a device-independent manner) and scope (policies can apply to more than one device).
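The distinction can be made concrete with a small sketch: one device-independent policy record covering several devices, and a function that renders it into a per-device configuration. Both the policy schema and the CLI-style output syntax are invented for illustration:

```python
# Hypothetical low-level policy: device-independent representation,
# scoped to more than one device.
policy = {
    "applies_to": ["edge-router-1", "edge-router-2"],   # scope: many devices
    "traffic": {"protocol": "tcp", "dst_port": 80},
    "action": {"dscp": 34},                             # device-independent action
}

def generate_config(device_name, policy):
    """Render the policy into an invented device-specific config snippet."""
    if device_name not in policy["applies_to"]:
        return ""                                        # policy not relevant here
    t, a = policy["traffic"], policy["action"]
    return (
        "class-map match-web\n"
        f"  match {t['protocol']} dport {t['dst_port']}\n"
        f"policy-map set dscp {a['dscp']}\n"
    )
```

The generated configuration carries the same semantics as the policy, but in a form only one device understands.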
This book discusses an application of the policy technology, namely how to get all the devices in a network configured to meet some high-level goals. The primary use of low-level policies for this application is generating the device configuration. As a result, the line between device configuration and low-level policies may appear blurry at times, but the reader should keep in mind that the two are different.
Policy Issues with Servers
Providing adequate performance within any environment depends on ensuring that performance is assured on all parts of a system, including the clients, the network, and the servers. Thus, when QoS features are being used within the network, they need to be augmented by similar functions within the servers. In some specific environments, such as the ASP environment, server controls might be more important than network controls.
If the server operating system supports any notion of different levels of service offered to different applications, that service-level information must be encoded into the appropriate configuration for the server platforms. The set of priorities that are needed to manage the performance of the various applications must be specified in some manner in the server configuration.
In cases where server differentiation mechanisms, such as support for different priority levels, are available, the appropriate configurations for the various platforms need to be generated. These configurations must include the appropriate performance priorities (or other suitable information) for the different classes of applications supported on a given server or cluster of servers.
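Such a server configuration might look like the following sketch. The application class names, priority scale, and connection limits are all invented assumptions, not tied to any real operating system:

```python
# Hypothetical server configuration mapping application classes to
# scheduling priorities and connection limits; all values are illustrative.
SERVER_CONFIG = {
    # application class -> (performance priority, max concurrent connections)
    "business-critical": (10, 500),
    "standard":          (5, 200),
    "background":        (1, 50),
}

def priority_for(app_class):
    """Look up the performance priority for an application class,
    treating anything unrecognized as the lowest (background) class."""
    return SERVER_CONFIG.get(app_class, SERVER_CONFIG["background"])[0]
```

A policy tool would generate a table like this per server (or server cluster) from the same high-level service definitions used for the network devices.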
Policy Issues with IKE/IPsec
The policy issues associated with IKE involve defining the set of parameters that specify how secure communication using IKE is to be implemented. A typical IKE configuration is specified in terms of the characteristics of the Phase 1 and Phase 2 tunnels that need to be established in order to exchange the keys required for IPsec communication.
The typical IKE configuration consists of specifying three types of records:
Phase 1 characteristics. This defines the characteristics associated with a Phase 1 security association of IKE. These characteristics define how long a key used in Phase 1 would be valid and the type of authentication mechanisms used by the communicating parties to validate each other. Two common techniques that can be used for authentication are the use of a shared common secret and the use of public certificates. With a shared common secret, both parties in the IKE establish a secret key that they use to identify each other. With public certificates, they both trust a certificate-issuing authority that can be used to obtain the public keys of the other party.
Phase 1 transform lists. During the Phase 1 negotiations, the communicating parties discuss a list of encryption and authentication algorithms that they would be willing to accept in communication over a Phase 1 security association. This transform list would be used to secure the exchange of keys for establishing Phase 2 security associations. A transform list would indicate whether encryption or authentication or both should be used for the communication, and which algorithms should be used for this purpose.
Phase 1 tunnel descriptions. This specifies which Phase 1 characteristics and Phase 1 transform lists should be used for communication between a pair of source and destination machines. The granularity of the source and destination can be further refined by the use of port numbers at the source and destination.
Note - Here's a quick note on terminology: What I call tunnel descriptions are usually referred to as policies in the IKE/IPsec implementation and RFCs. Because this usage might cause some confusion with the definition of policy I have been using all along, I have opted to call these tunnel descriptions.
The other three types of records are the corresponding Phase 2 characteristics, Phase 2 transform lists, and Phase 2 tunnel descriptions. There are differences between the two phases in the exact set of characteristics that is specified, as well as in the transform lists that make sense for each phase.
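The Phase 1 record types described above can be sketched as simple data structures. The field names, lifetimes, addresses, and algorithm choices below are illustrative assumptions; real implementations use their own schemas:

```python
# Hypothetical representation of the three Phase 1 IKE record types.

# Phase 1 characteristics: key lifetime and authentication mechanism.
phase1_characteristics = {
    "key-lifetime-seconds": 28800,
    "authentication": "preshared-secret",   # or "public-certificate"
}

# Phase 1 transform list: proposals offered during negotiation,
# in order of preference.
phase1_transforms = [
    {"encryption": "3des", "authentication": "sha1"},
    {"encryption": "des",  "authentication": "md5"},
]

# Phase 1 tunnel description: ties characteristics and transforms
# to a source/destination pair (addresses are placeholders).
phase1_tunnel = {
    "source": "192.0.2.1",
    "destination": "198.51.100.7",
    "characteristics": phase1_characteristics,
    "transforms": phase1_transforms,
}
```

A matching trio of Phase 2 records would reference the Phase 1 tunnel under which the Phase 2 keys are exchanged.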
It is probably apparent to you by now that configuring IKE correctly is a daunting task. It doesn't help that the configuration must be done not for one firewall, but consistently across multiple firewalls in order to enable some business needs.
Policy Issues with SSL
The typical SSL configuration for the bidding client or bidding server application consists of parameters such as the type of authentication to be used for the different communication types: for example, authentication based on shared secrets or on public certificates. Furthermore, the configuration should indicate whether only the server is authenticated or whether both the client and the server are authenticated.
Other SSL parameters, such as when the security keys should be renegotiated, also need to be specified as part of the SSL configuration.
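As a concrete illustration, a server-side configuration requiring mutual authentication can be expressed with Python's standard `ssl` module. The certificate file names are placeholders, so the loading calls are left commented out:

```python
import ssl

# Sketch of a bidding-server SSL configuration that authenticates the
# client as well as the server (mutual authentication).
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.verify_mode = ssl.CERT_REQUIRED   # require a certificate from the client

# Placeholder paths: the server's own identity and the CAs trusted to
# have issued the clients' certificates.
# context.load_cert_chain("server.crt", "server.key")
# context.load_verify_locations("trusted-ca.crt")
```

The corresponding client-side configuration must name a certificate the server trusts, which is exactly the cross-device consistency requirement discussed next.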
Although SSL configuration is simpler than the corresponding IKE configuration, it is essential that the configuration be consistent across the bidding client and bidding server applications so that the establishment of the secure SSL connection succeeds.