- "Do I Know This Already?" Quiz
- Foundation Topics
- Exam Preparation Tasks
Foundation Topics
Intrusion Prevention Overview
All the CCNP Security exams consider CCNA Security materials as prerequisites, so the Cisco Press CCSP Exam Certification Guide series of books also assumes that you are already familiar with CCNA Security topics. However, the CCNP Security exams do test on features that overlap with CCNA Security. Additionally, most people forget some details along the way.
This book uses two methods to help you review CCNA-level Security topics. The first is an examination of concepts included in the CCNA Security certification. The second is a brief review of other CCNA-level Security features along with a deeper discussion of each topic.
To that end, this chapter begins with a review of intrusion prevention terminology. The next section details the key features and limitations of both intrusion detection and intrusion prevention systems. Finally, the last part of the chapter discusses security controls, approaches, and technologies.
Intrusion Detection Versus Intrusion Prevention
An intrusion detection system (IDS) is a security control or countermeasure that has the capability to detect misuse and abuse of, and unauthorized access to, network resources. An IDS, in most cases, is a dedicated device that monitors network traffic and detects malicious traffic or anomalies based on multiple criteria.
Figure 1-1 shows how an IDS is typically deployed. Notice the placement of the device.
Some of the most commonly detected attacks by a network IDS are as follows:
- Application layer attacks, such as directory traversal attacks, buffer overflows, or various forms of command injection.
- Network sweeps and scans (indicative of network reconnaissance).
- Flooding denial of service (DoS) attacks in the form of TCP SYN packets or large volumes of Internet Control Message Protocol (ICMP) packets. In the distributed variant (DDoS), an attacker uses a large number of compromised systems to disrupt the operation of another system or device on a network. Attacks of this nature can exhaust the resources of a system and severely degrade performance.
- Common network anomalies on most Open Systems Interconnection (OSI) layers. Some of these common network anomalies detected by a network IDS include the following:
- Invalid IP datagrams
- Invalid TCP packets
- Malformed application layer protocol units
- Malformed Address Resolution Protocol (ARP) requests or replies
Figure 1-1 Intrusion Detection System
After an IDS detects an anomaly or offending traffic, it generates alerts, which are stored locally on the IDS and can be retrieved by a management system. The network security administrators monitor these alerts generated by the IDS and decide how to react. An IDS cannot stop an attack or malicious traffic alone.
A security control or countermeasure that has the capability to detect and prevent misuse and abuse of, and unauthorized access to, networked resources is an intrusion prevention system (IPS).
Figure 1-2 shows how an IPS is typically deployed. Notice the placement of the device or sensor.
Figure 1-2 Intrusion Prevention System
Intrusion Prevention Terminology
Before digging too deeply into intrusion prevention technology, we examine terminology that is important to understand. This section only focuses on terminology as it relates to intrusion prevention; there is a more inclusive list of information security terms in the glossary.
As discussed, an IPS or IDS detects activity and produces alerts as a result of a number of factors, including legitimate malicious activity, misconfiguration, environmental changes, and so on. Each detection outcome is classified using one of the following terms:
- True positive: A situation in which a signature fires correctly when intrusive traffic for that signature is detected on the network. The signature correctly identifies an attack against the network. This represents normal and optimal operation.
- False positive: A situation in which normal user activity triggers an alarm or response. This is a consequence of nonmalicious activity. This represents an error and generally is caused by excessively tight proactive controls or excessively relaxed reactive controls.
- True negative: A situation in which a signature does not fire during normal user traffic on the network. The security control has not acted, and there was no malicious activity. This represents normal and optimal operation.
- False negative: A situation in which a detection system fails to detect intrusive traffic although there is a signature designed to catch the activity. In this situation, there was malicious activity, but the security control did not act. This represents an error and generally is caused by excessively relaxed proactive controls or excessively tight reactive controls.
Most security administrators will agree that addressing false negatives and false positives is a balancing act: tuning a system to be less restrictive to reduce false positives increases the likelihood of false negatives, and vice versa. Security controls should be tuned only by staff expertly trained to balance these trade-offs.
Preventive controls, such as IPS sensors, are often tuned to be less sensitive to avoid blocking legitimate traffic, while detective controls, such as IDS sensors, are tuned to be more sensitive, which often results in false positives. A common best practice is to combine a sensitive detective control with a relaxed preventive control to gain insight into the preventive control and enable incident response. This is especially advantageous if the preventive control is bypassed.
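The four outcome categories described above can be sketched as a simple decision function. This is a conceptual illustration only; the traffic model is hypothetical, not any sensor's implementation:

```python
# Conceptual sketch: classifying IDS/IPS outcomes into the four standard
# categories, given the ground truth and the sensor's behavior.

def classify_outcome(is_malicious: bool, signature_fired: bool) -> str:
    """Map ground truth vs. sensor behavior to the standard terminology."""
    if is_malicious and signature_fired:
        return "true positive"       # attack detected: optimal operation
    if not is_malicious and signature_fired:
        return "false positive"      # benign traffic raised an alarm
    if not is_malicious and not signature_fired:
        return "true negative"       # benign traffic, no alarm: optimal
    return "false negative"          # attack missed: the costliest error

print(classify_outcome(True, True))    # true positive
print(classify_outcome(True, False))   # false negative
```

Note how loosening the sensor shifts outcomes from the second column (fired) to the first (silent): fewer false positives, but more false negatives.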
Some other critical terminology that is important to understand when dealing with intrusion prevention are vulnerability, exploit, risk, and threat.
A vulnerability is a weakness that compromises either the security or the functionality of a system. You’ll often hear the following examples listed as vulnerabilities:
- Insecure communications: Any form of data or voice traffic susceptible to interception, such as system passwords, personnel records, and confidential documents.
- Poor passwords: Passwords are often referred to as the first line of defense; weak or easily guessed passwords are considered vulnerabilities.
- Improper input handling: Software that hasn't been through a thorough security and quality review (which usually involves evaluating all possible inputs and results) can lead to a form of DoS, or to access to system resources being denied or restricted.
An exploit is the mechanism used to leverage a vulnerability to compromise the security functionality of a system. You’ll often hear the following examples listed as exploits:
- Executable code: Often referred to as a more advanced form of exploit, these are exploits written as executable code, requiring programming knowledge and access to software tools such as a compiler.
- Password-guessing tools: Tools built specifically for this function, easily found on the Internet, that "guess" or "crack" passwords either by using knowledge of the algorithm used to generate the actual password or by attempting to access a system using combinations and permutations of different character sets.
- Shell or batch scripts: Scripts created to automate attacks or perform simple procedures known to expose the vulnerability.
A threat is defined as any circumstance or event with the potential to harm an information system in the form of destruction, disclosure, adverse modification of data, or DoS. Examples of Internet threats that have been prevalent over the past few years include malware that utilizes HTML code or scripts that cybercriminals place on legitimate websites. These programs generally redirect a user to a malicious user's exploit-infected website without the user noticing. Other examples of threats include network attacks against exposed application servers, malware targeting workstations, or even physical destruction (natural or unnatural).
A risk is the likelihood that a particular threat using a specific attack will exploit a particular vulnerability of an asset or system that results in an undesirable consequence. Security engineers, administrators, and management will often try to determine risk in their business continuity and disaster recovery planning. A simple equation often used to estimate risk is to multiply threat by vulnerability and multiply the result by the asset value. This equation might sound simple, but the vulnerability and threat ratings of an asset depend on a number of factors, including the presence and quality of the security controls deployed to guard the asset, the capability of the attacker, and the frequency of attacks.
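The simple risk equation above can be expressed directly. The 0.0-1.0 scales and the sample values here are hypothetical conventions for illustration; real risk models weigh the additional factors the text mentions:

```python
# Sketch of the simple risk equation: risk = threat x vulnerability x asset value.
# Scales are illustrative assumptions, not a standard.

def risk_rating(threat: float, vulnerability: float, asset_value: float) -> float:
    """threat and vulnerability on a 0.0-1.0 scale, asset_value in arbitrary units."""
    return threat * vulnerability * asset_value

# Example: likely threat (0.8), moderately vulnerable asset (0.5), asset value 10
print(risk_rating(0.8, 0.5, 10))   # 4.0

# A strong security control that reduces vulnerability to 0.1 cuts the risk:
print(risk_rating(0.8, 0.1, 10))   # 0.8
```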
Some other critical terms we’ll reference throughout the study guide are as follows:
- Risk rating (RR): A rating based on numerous factors besides just the attack severity.
- Deep-packet inspection: Decoding protocols and examining entire packets to allow policy enforcement based on actual protocol traffic (not just a specific port number).
- Event correlation: Associating multiple alarms or events with a single attack.
- Inline mode: Examining network traffic while having the ability to stop intrusive traffic from reaching the target system.
- Promiscuous mode: Also known as passive mode; a way to passively examine network traffic for intrusive behavior.
- Signature: A rule configured in a network IPS or IDS device that describes a pattern of network traffic that matches a specific type of intrusion activity.
- Signature engine: An engine that supports signatures that share common characteristics (such as the same protocol, service, operating system, and so on). The Cisco IPS Sensor has multiple signature engines called microengines.
- Atomic signature: A signature that triggers based on the contents of a single packet.
- Flow-based signature: A signature that triggers based on the information contained in a sequence of packets between two systems (such as the packets in a TCP connection).
- Anomaly-based signature: A signature that triggers when traffic exceeds a baseline.
- Behavior-based signature: A signature that triggers when traffic deviates from regular user behavior.
- Meta-event generator: The capability to define metasignatures based on multiple existing signatures that trigger at or near the same window of time within a sliding time interval.
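The atomic versus flow-based distinction above can be illustrated with a short sketch. The packet payloads, patterns, and threshold are hypothetical; this is not Cisco signature syntax:

```python
# Illustrative sketch: an atomic signature fires on a single packet, while a
# flow-based signature needs state accumulated across a sequence of packets.

ATOMIC_PATTERN = b"/etc/passwd"          # hypothetical single-packet pattern

def atomic_match(payload: bytes) -> bool:
    """Atomic signature: decision made from one packet's contents alone."""
    return ATOMIC_PATTERN in payload

class FlowSignature:
    """Flow-based signature: fires only after observing several packets
    of one flow, e.g. repeated failed FTP logins."""
    def __init__(self, threshold: int = 3):
        self.failed_logins = {}          # per-flow state
        self.threshold = threshold

    def observe(self, flow_id, payload: bytes) -> bool:
        if b"530 Login incorrect" in payload:
            count = self.failed_logins.get(flow_id, 0) + 1
            self.failed_logins[flow_id] = count
            return count >= self.threshold   # trigger on the flow, not one packet
        return False

print(atomic_match(b"GET /etc/passwd"))   # True: one packet suffices

sig = FlowSignature()
for _ in range(3):
    fired = sig.observe(("10.0.0.5", 21), b"530 Login incorrect")
print(fired)   # True only after the third failed attempt on that flow
```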
Intrusion Prevention Systems
As defined earlier, an IPS (also referred to as a network IPS or NIPS) is a security control that detects malicious network traffic by analyzing it and prevents intrusions by attempting to block that traffic. A network IPS analyzes traffic in several ways, such as the following:
- Reassembles Layer 4 sessions and analyzes their contents
- Monitors packet and session rates to detect and/or prevent deviations from the baseline (or normal) network profiles
- Analyzes groups of packets to determine whether they represent reconnaissance attempts
- Decodes application layer protocols and analyzes their contents
- Analyzes packets to address malicious activity contained in a single packet
Network intrusion prevention systems provide proactive components that effectively integrate into the overall network security framework. A network IPS includes the deployment of sensors (also known as monitoring devices) throughout the network to analyze traffic as it traverses the network. An IPS sensor detects malicious and/or unauthorized activity in real time and takes action if and when required. Sensors are usually deployed at designated points that enable security managers to monitor network activity while an attack is occurring. The security policy often drives where in the network the sensors are deployed.
Network growth will often require additional sensors, which can easily be deployed to protect the new networks. A network IPS enables security managers to have real-time insight into their networks regardless of the growth caused by more hosts or new networks. Following are some common factors that often influence the addition of sensors:
- Network implementation: Additional sensors might be required to enforce security boundaries based on the security policy or network design.
- Exceeded traffic capacity: Additional bandwidth requirements might require an addition or upgrade of network links, thus requiring a higher-capacity sensor.
- Performance capabilities of the sensor: The current sensor might not be able to perform given the new traffic capacity or requirements.
Typically, network IPS sensors are tuned for intrusion prevention analysis. In most cases, the operating system of an IPS sensor is “stripped” of any unnecessary network services while essential services are secured. To maximize the intrusion prevention analysis for networks of all types, there are three essential elements to the IPS hardware:
- Memory: Intrusion prevention analysis is memory intensive. The available memory directly affects the ability of a network IPS to detect and prevent an attack accurately.
- Network interface card (NIC): The network IPS must have the capability to connect to any network infrastructure. Network IPS NICs today include Fast Ethernet, Gigabit Ethernet, and 10 Gigabit Ethernet.
- Processor: CPU power to perform intrusion prevention protocol analysis and pattern matching is required for an effective intrusion prevention system.
Features of Network Intrusion Prevention Systems
A network IPS has four main features:
- A network IPS can detect attacks on several different types of operating systems and applications, depending on the extent of its database.
- A single device can analyze traffic for a large number of hosts on the network, which makes a network IPS a cost-effective solution that decreases the cost of deployment and maintenance.
- As sensors observe events from and to various hosts and different parts of the network, they can correlate the events, hosts, and networks to higher-level information. In conjunction with the correlation, they can obtain deeper knowledge of malicious activity and act accordingly.
- A network IPS can remain invisible to the attacker through a dedicated interface that monitors only network traffic and is unresponsive to various triggers or stimuli.
Limitations of Network Intrusion Prevention Systems
The most commonly known limitations of network IPS are as follows:
- The network IPS can require expert tuning to adapt the sensor to its network, host, and application environments.
- The network IPS sensor is unable to analyze traffic on the application layer when the traffic is encrypted with either IPsec or Secure Sockets Layer (SSL).
- The network IPS can be overloaded by network traffic if not properly sized, causing it to fail to respond to real-time events in a timely manner.
- The network IPS might interpret traffic improperly, which can lead to false negatives. This is often a result of the sensor’s seeing traffic differently from how the end system or target sees the traffic.
Network Intrusion Prevention Approaches
There are three approaches to network intrusion prevention commonly used by security managers today. The security policy often helps security managers determine which approach they'll deploy in their networks. In some cases, you'll see more than one approach on a particular network. The three commonly used approaches are as follows:
- Signature-based: A network IPS that analyzes network traffic and compares the data in the flow against a database of known attack signatures. A signature-based IPS looks at the packet headers and/or data payloads when analyzing network traffic. All signature-based IPSs require regular updates for their signature databases. Table 1-2 outlines signature-based features and limitations.
Table 1-2 Signature-Based Features and Limitations

| Category | Feature | Limitation |
| --- | --- | --- |
| Complexity | Simple for administrators to add new signatures, customize signatures, extend, and so on. Often the simplest of IPS approaches to deploy (depends on the environment). | Sensors require constant and quick updates of the signature database to ensure that the IPS can detect the most recent attacks. Can require expert tuning to be effective in complex and unsteady environments. |
| Susceptibility and Accuracy | Relatively low false positive rate (if the IPS is properly tuned and using well-designed signatures). | More susceptible to evasion through complex attacks that are designed to evade a signature-based IPS. Cannot detect unknown attacks for which there is no signature in the database. |
| Reporting | Ability to name attacks and provide the administrator with additional information about a specific attack. | — |
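The signature-based approach reduces to pattern matching against a known-attack database, which also makes its key limitation visible: traffic that matches nothing is passed silently. The signature IDs and patterns below are purely illustrative:

```python
# Minimal sketch of signature-based inspection: compare payload bytes against
# a database of known attack patterns. Signatures shown are hypothetical.

SIGNATURE_DB = {
    1001: b"../../",          # directory traversal attempt
    1002: b"' OR '1'='1",     # SQL injection probe
}

def inspect(payload: bytes):
    """Return (signature_id, pattern) for the first match, else None."""
    for sig_id, pattern in SIGNATURE_DB.items():
        if pattern in payload:
            return sig_id, pattern
    return None    # unknown attacks produce no match: the key limitation

print(inspect(b"GET /../../etc/shadow HTTP/1.0"))   # (1001, b'../../')
print(inspect(b"GET /index.html HTTP/1.0"))         # None
```

A match can be reported by name (signature ID), which is the reporting strength noted in Table 1-2; a miss says nothing about whether the traffic was safe.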
- Anomaly-based: A network IPS that analyzes or observes network traffic and acts if a network event outside normal network behavior is detected. The two types of anomaly-based network IPSs are statistical anomaly detection and protocol verification. Table 1-3 outlines anomaly-based features and limitations.
Table 1-3 Anomaly-Based Features and Limitations

| Features | Limitations |
| --- | --- |
| Ability to act on both known and yet-unknown threats. | More susceptible to evasion through complex attacks that are designed to evade an anomaly-based IPS. |
| — | Unable to name individual attacks. |
| — | Statistical approach requires a learning period to establish a normal network profile. |
| — | Statistical approach can cause false positives in unstable environments where it can be difficult or impossible to establish a model of normal network traffic behavior. |
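The statistical variant can be sketched as follows: learn a baseline during a quiet period, then flag traffic that deviates too far from it. The sample rates and the three-sigma threshold are illustrative assumptions:

```python
# Sketch of statistical anomaly detection: establish a baseline packet rate
# during a learning period, then flag large deviations from it.

from statistics import mean, stdev

def build_baseline(samples):
    """Learning period: establish the 'normal' network profile."""
    return mean(samples), stdev(samples)

def is_anomalous(rate, baseline, sigma=3.0):
    mu, sd = baseline
    return abs(rate - mu) > sigma * sd     # z-score style deviation test

history = [100, 110, 95, 105, 102, 98, 101, 99]   # packets/sec while learning
baseline = build_baseline(history)

print(is_anomalous(104, baseline))    # False: within the normal profile
print(is_anomalous(950, baseline))    # True: possible flood, though unnamed
```

Note the limitations from Table 1-3 in miniature: the alert carries no attack name, and if the learning period itself contained attack traffic, the baseline is poisoned.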
- Policy-based: A network IPS that analyzes traffic and acts if it detects a network event outside a traffic policy. A traffic policy usually involves permitted or denied communications over a network segment, similar to an enterprise-class firewall. Table 1-4 outlines policy-based features and limitations.
Table 1-4 Policy-Based Features and Limitations

| Features | Limitations |
| --- | --- |
| Very focused on the target environment and triggers very few false positives; thus, very accurate and effective in most cases. | Requires the design of the policy from scratch; best practice is a policy that is as minimal as possible, specified in as much detail as possible, to provide the best protection. |
| Ability to act on both known and yet-unknown threats. | Unable to name individual attacks. |
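A policy-based IPS can be thought of as an allow-list of permitted communications; anything outside the policy triggers action. The networks, hosts, and ports below are hypothetical examples of such a policy:

```python
# Sketch of the policy-based approach: define permitted communications for a
# segment and act on anything outside the policy. Addresses are illustrative.

import ipaddress

ALLOWED_FLOWS = [
    ("10.1.1.0/24", "10.2.2.10", 443),   # clients -> web server, HTTPS
    ("10.1.1.0/24", "10.2.2.20", 25),    # clients -> mail server, SMTP
]

def permitted(src_ip: str, dst_ip: str, dst_port: int) -> bool:
    for src_net, dst, port in ALLOWED_FLOWS:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(src_net)
                and dst_ip == dst and dst_port == port):
            return True
    return False    # outside policy: drop and/or alert, but no attack name

print(permitted("10.1.1.7", "10.2.2.10", 443))   # True: within policy
print(permitted("10.1.1.7", "10.2.2.10", 22))    # False: not in policy
```

Because unknown attacks necessarily fall outside a minimal policy, this approach acts on them too, but as Table 1-4 notes, it cannot say which attack occurred, only that the policy was violated.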
Endpoint Security Controls
Another form of intrusion prevention is the host IPS (HIPS). Often referred to as endpoint security controls, a HIPS consists of operating system security controls or security agent software installed on hosts, which can include desktop PCs, laptops, or servers. Host IPSs in most cases extend the native security controls protecting an operating system or its applications. Endpoint security controls can monitor local operating system processes and protect critical system resources. A HIPS fundamentally has two essential elements: a software package (agent) installed on the endpoint to protect it, and a management system to manage the endpoints or agents.
In most cases, operating systems today split their runtime functions into two concurrently running modes known as Kernel mode and User mode. Software running in Kernel mode has complete access to the system hardware and can therefore act without restrictions. Generally, Kernel mode software includes the hardware drivers, the operating system scheduler, and the application programming interfaces (APIs). Software running in User mode requires kernel services to execute applications in the form of processes and does not have direct access to the hardware. Protection built into the system hardware separates the two modes so that User mode applications cannot tamper with Kernel mode software.
Access control enforcement for an operating system can be done using local system resources (native operating system access control) or remote system resources (RADIUS, TACACS, and so on). A local system that assigns user or process privileges and permissions at the discretion of the logical owner/administrator implements Discretionary Access Control (DAC). A local access control system that extends this functionality by using the user's role in the organization provides Role-Based Access Control (RBAC). Access control lists (ACL) are often used to define which systems or networks have access and in which direction.

Audit trails (system logs) can aid in the detection of system misuse and attacks on protected objects. The same access control mechanism that decides whether to permit or deny access usually provides this audit trail, showing successful and unsuccessful access attempts.

Buffer and heap overflow protection is critical for local applications that contain input-validation vulnerabilities. Protection against buffer and heap overflow attacks is often embedded into hardware and operating systems, which provide specialized protection against this specific class of threats. Table 1-5 summarizes the features and limitations of endpoint security.
Table 1-5 Features and Limitations of Endpoint Security

| Features | Limitations |
| --- | --- |
| Identity association, meaning that the endpoint security control can provide information about the attacker. | Platform flexibility (some operating systems might not support endpoint security controls). |
| System-specific: customized to protect the system it resides on. | Inability to correlate events when only a single endpoint or agent is deployed. |
| Ability to see malicious network data and the consequences of network attacks, even if the traffic is encrypted. | Every host requires an agent. Thus, the cost of endpoint security controls can become quite large in some environments, and the agents can be challenging to manage with only one or a few administrators. |
| Detection of the success of an attack, with the ability to take action after the system is stable. | If an attack succeeds in accessing the host before the endpoint security reacts, the host is compromised. |
Host-Based Firewalls
Endpoint security isn’t complete without a form of host-based firewall. There are two basic implementations: packet filtering and socket filtering (also known as API call filtering):
- Packet filtering: Host firewalls use stateful and stateless packet filtering and typically support dynamic applications such as HTTPS, FTP, and so on. Filtering is based on Open Systems Interconnection (OSI) Layer 3 and 4 information, so it can control connections based on host addresses, protocols, and port numbers. This behavior is similar to that of a network firewall.
- Socket filtering (API call filtering): Controlling application requests to either create an outgoing or accept an incoming connection by filtering network-related API calls. API call filtering is application aware, so no extra intelligence is needed to support dynamic sessions.
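The stateful side of packet filtering can be sketched with a simplified connection table: outbound connections create state, and only inbound packets matching that state are accepted. The connection-table model and addresses are hypothetical:

```python
# Sketch of stateful packet filtering on a host firewall: outbound connections
# create state; only replies matching that state are accepted inbound.

class StatefulFilter:
    def __init__(self):
        self.connections = set()    # (local_port, remote_ip, remote_port)

    def outbound(self, local_port, remote_ip, remote_port):
        """Host initiates a connection; record it in the state table."""
        self.connections.add((local_port, remote_ip, remote_port))

    def inbound_allowed(self, local_port, remote_ip, remote_port) -> bool:
        """Permit only packets belonging to an established connection."""
        return (local_port, remote_ip, remote_port) in self.connections

fw = StatefulFilter()
fw.outbound(51514, "93.184.216.34", 443)                 # host opens HTTPS
print(fw.inbound_allowed(51514, "93.184.216.34", 443))   # True: reply traffic
print(fw.inbound_allowed(22, "203.0.113.9", 55000))      # False: unsolicited
```

A stateless filter, by contrast, would evaluate each inbound packet against static Layer 3/4 rules with no memory of prior traffic.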
API and System Call Interception
Secondary Security Reference Monitor (SSRM) is an operating system security extension that provides a “second opinion,” or layered approach to security, by extending and duplicating the functionality of the native operating system security model. SSRMs are often third-party extensions to the operating system kernel. They use API interception to insert themselves into the access control path. API interception has a low performance impact, typically consuming less than 5 percent of additional CPU resources; therefore, most of today’s HIPS products implement SSRM functionality. In API interception (also called API hooking), an API call is intercepted and the SSRM registers itself as the replacement handler code for the API calls it considers important enough to intercept. This allows the SSRM to enforce its own security policy. The SSRM can act as the host firewall, controlling all applications’ access to the network.
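The hooking idea can be illustrated conceptually: a replacement handler registers itself in front of the original API, enforces policy, and then passes the call through. This Python sketch only models the idea; real SSRMs hook kernel or system-library entry points, and the `connect` API and policy here are hypothetical:

```python
# Conceptual sketch of API interception (hooking): the SSRM-style handler
# replaces an API entry point, enforces policy, then calls the original.

blocked_hosts = {"198.51.100.66"}
audit_log = []

def original_connect(host, port):          # stand-in for a real API call
    return f"connected to {host}:{port}"

def hooked_connect(host, port):
    audit_log.append((host, port))         # "second opinion": record the attempt
    if host in blocked_hosts:
        raise PermissionError("connection denied by security policy")
    return original_connect(host, port)    # pass through to the real handler

connect = hooked_connect                   # registration: the hook replaces the API

print(connect("192.0.2.10", 80))           # connected to 192.0.2.10:80
try:
    connect("198.51.100.66", 80)
except PermissionError as e:
    print(e)                               # connection denied by security policy
```

Because every application call now flows through the hook, the SSRM sees and can veto network access without any cooperation from the applications themselves.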
Cisco Security Agent
The Cisco HIPS is Cisco Security Agent (CSA), which complements the Cisco NIPS, protecting the integrity of applications and operating systems. Malicious activity is blocked before damage is done by using behavior-based technology that monitors application behaviors. CSA protects against known and new/unknown attacks. Residing between the kernel and applications, CSA enables maximum application visibility with little impact to the performance and stability of the underlying operating system. A few of the numerous network security benefits CSA offers are as follows:
- Zero-update protection reduces emergency patching in response to vulnerability announcements, minimizing patch-related downtime and IT expenses.
- Visibility and control of sensitive data protect against loss from both user actions and targeted malware.
- Predefined compliance and acceptable use policies allow efficient management, reporting, and auditing of activities.
- System is protected at all times, even when users are not connected to the corporate network or lack the latest patches. This is often referred to as “always vigilant” security.
As stated in the previous paragraph, host IPSs and network IPSs are complementary. Table 1-6 illustrates this point.
Table 1-6 Host IPS (HIPS) and Network IPS (NIPS)

| Host IPS | Network IPS |
| --- | --- |
| CSA can inspect the behavior of applications (encrypted or nonencrypted). | Requires constant updates for new vulnerabilities. |
| CSA is a behavior-based HIPS. | Can prevent known attacks. |
| CSA does not need constant updates. | Can protect the complete network. |
| CSA can protect the host (server, desktop, and so on) efficiently, communicate with IPSs, and stop known and unknown (Day Zero) attacks. | — |
| CSA cannot “name” the attack or protect unsupported platforms. | — |
Antimalware Agents
Antivirus and antispyware are primarily designed to find file-based malware threats and scan the content to identify known patterns of malware. This tends to be a permissive security approach. File and memory content can both contain traces of known malware, and fortunately antimalware scanners can examine both. Some antimalware scanners can perform scanning using the following methods or approaches:
- Using on-demand scanning when the user initiates a thorough system scan.
- Using real-time scanning, which in some cases isn’t as thorough as offline/on-demand, especially if executable code is populated in memory and the files being scanned are busy writing or reading from the file system.
- Using scanning in a scheduled manner in which all files are scanned thoroughly on the endpoint.
Viruses, spyware, adware, Trojan horses, worms that use file-based infections, rootkit software, and general attack tools can all be detected using file-based antimalware software, as long as that type of malware is known (through the malware database) and can be located using the file and memory scanning.
Typically, antimalware software scans files and memory for known patterns of virus code, comparing the content to a database of known malware signatures. For accuracy, many antivirus scanners today require content matches through multiple, independent detectors for the same virus. Heuristic antimalware scanners analyze content for suspicious coding tricks, runtime attributes, structure, and behavior associated with malicious code. Heuristics are not that reliable for new viruses and often use various techniques that weight malicious features to determine whether the code should be classified as malicious. A common antimalware scanning technique is known as code emulation. In code emulation, the antimalware software executes suspicious code in a simple virtual machine that is isolated or sandboxed from the rest of the system. The antimalware scanner can then determine (or attempt to determine) the behavior and actions that the suspicious code performs. The learned behavior is stored in a database of executable signatures so that known patterns of execution can be used to detect the virus in the future.
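The feature-weighting idea behind heuristic scanning can be sketched simply: each suspicious attribute carries a weight, and the sample is classified when the total crosses a threshold. The features, weights, and threshold here are entirely hypothetical:

```python
# Sketch of heuristic scanning: weight suspicious attributes observed in a
# sample and classify it when the total crosses a threshold.

HEURISTIC_WEIGHTS = {
    "self_modifying_code": 4,
    "writes_to_system_dir": 3,
    "packed_executable": 2,
    "registers_autostart": 2,
}

def heuristic_score(observed_features) -> int:
    return sum(HEURISTIC_WEIGHTS.get(f, 0) for f in observed_features)

def classify(observed_features, threshold=5) -> str:
    return "suspicious" if heuristic_score(observed_features) >= threshold else "clean"

print(classify({"packed_executable"}))                             # clean
print(classify({"self_modifying_code", "writes_to_system_dir"}))   # suspicious
```

The threshold embodies the same trade-off discussed earlier for IDS tuning: lower it and false positives rise; raise it and new malware slips through.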
Data Loss Prevention Agents
Another form of endpoint security is known as Data Loss Prevention (DLP) extensions. DLP controls mobile data distributed on users’ systems to prevent users from accidentally or deliberately transferring sensitive data to uncontrolled systems. Examples of uncontrolled systems would be paper (using printers), open network systems (file sharing), and mobile storage (USB keys, portable hard disks, and so on). There are different forms of implementation when it comes to DLP, but two common examples would be using content scanning to identify sensitive content (assuming that the content is labeled appropriately with a standardized labeling system identifying sensitive material) and controlling transfer of data off the system by intercepting users’ and applications’ actions.
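The content-scanning form of DLP can be sketched as pattern matching applied at the moment of transfer. The patterns and the "CONFIDENTIAL" label are hypothetical stand-ins for a real labeling scheme:

```python
# Sketch of DLP content scanning: intercept a transfer (print, USB copy,
# upload) and block it if the content matches sensitive patterns or labels.

import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like number
    re.compile(r"CONFIDENTIAL", re.IGNORECASE),  # standardized document label
]

def transfer_allowed(content: str) -> bool:
    """Scan outbound content; permit only if nothing sensitive is found."""
    return not any(p.search(content) for p in SENSITIVE_PATTERNS)

print(transfer_allowed("Quarterly newsletter draft"))        # True
print(transfer_allowed("CONFIDENTIAL: merger term sheet"))   # False: blocked
```

The label-based check only works if documents are actually labeled, which is exactly the assumption the text above calls out.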
Cryptographic Data Protection
One of the most discussed and well-known approaches to endpoint security today is file integrity checking to detect unauthorized changes to sensitive files or the system itself. Integrity-checking software calculates a secure fingerprint (a Hash-based Message Authentication Code [HMAC]) for every important file on the system using a secret key. These fingerprints are created when the files are known to be trusted and unmodified from their original states. The files are periodically rescanned, and the new fingerprints are compared to a database of known-good fingerprints to identify whether the files have been tampered with.
Integrity checkers rescan files at a specified interval, so they can provide only after-the-fact detection of attacks rather than real-time detection. It’s also important to note that integrity checkers can be compromised along with the system, given that they are usually user-mode applications.
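The fingerprint-and-rescan cycle maps directly onto Python's standard `hmac` module. The file path, contents, and key below are hypothetical; a real integrity checker would read files from disk and protect its key and baseline database:

```python
# Sketch of HMAC-based integrity checking: fingerprint trusted file contents
# with a secret key, then rescan later and compare against the baseline.

import hmac, hashlib

SECRET_KEY = b"example-integrity-key"   # would be carefully protected in practice

def fingerprint(data: bytes) -> str:
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

# Baseline pass, taken while the file is known to be trusted
baseline = {"/etc/hosts": fingerprint(b"127.0.0.1 localhost\n")}

def rescan(path: str, current_data: bytes) -> bool:
    """Return True if the file still matches its known-good fingerprint."""
    return hmac.compare_digest(baseline[path], fingerprint(current_data))

print(rescan("/etc/hosts", b"127.0.0.1 localhost\n"))           # True: unchanged
print(rescan("/etc/hosts", b"127.0.0.1 attacker.example\n"))    # False: tampered
```

Using a keyed HMAC rather than a plain hash means an attacker who modifies a file cannot recompute a matching fingerprint without also stealing the secret key.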
Encryption is also an important method of preventing data from being stolen or physically compromised from a system, disk drive, third-party add-on, or file system. With Windows EFS (Encrypting File System), the user holds the decryption keys, which are transparently linked to user credentials and provide access to encrypted information. Lost cryptographic keys can lead to sensitive data loss, which is why many security policies require the creation of a backup decryption key. Key generation might be left to the user, which substantially weakens cryptographic protection of data if handled poorly. If data is stolen, an attacker must attempt to decrypt the protected information; this is very difficult to do if the cryptographic implementation and key management are done properly.
A Systems Approach to Security
Multiple layers of protection increase the probability of detection and prevention of malicious activity. As we’ve discussed, there are multiple approaches to detection and prevention, but it’s important to understand that what one security control detects, another type can overlook. Proper correlation results in more accurate or trustworthy data about system behavior or incidents when network and endpoint security controls are used together.
A defense-in-depth security solution attempts to protect assets by providing layers of security. Applying security controls at the network and host levels provides this defense-in-depth concept. Table 1-7 summarizes and compares the defense-in-depth technology approaches. It’s important to understand that one isn’t preferred over the other, but they both complement each other.
Table 1-7 Defense-in-Depth: Host-Focused and Network-Focused Technology

| Host-Focused Technology | Network-Focused Technology |
| --- | --- |
| Protects the operating system | Detects and prevents DoS attacks |
| Controls access to local host resources | Detects and prevents network reconnaissance attacks |
| Protects applications and the data they handle | Detects and prevents attacks against many network-facing applications and operating systems |