Security Principles: Detection
Detection lies at the heart of the NSM operation, but it is not the ultimate goal of the NSM process. Ideally, the NSM operation will detect an intrusion and guide incident response activities prior to incident discovery by outside means. Although it is embarrassing for an organization to learn of compromise by getting a call from a downstream victim or customer whose credit card number was stolen, these are still legitimate means of detecting intrusions.
As mentioned in Chapter 1, many intruders are smart and unpredictable. This means that people, processes, and products designed to detect intrusions are bound to fail, just as prevention inevitably fails. If both prevention and detection will surely fail, what hope is there for the security-minded enterprise?
NSM's key insight is the need to collect data that describes the network environment to the greatest extent possible. By keeping a record of the maximum amount of network activity allowed by policy and collection hardware, analysts buy themselves the greatest likelihood of understanding the extent of intrusions. Consider a connectionless back door that uses packets with PSH and ACK flags and certain other header elements to transmit information. Detecting this sort of covert channel can be extremely difficult until you know what to monitor. When an organization implements NSM principles, it has a higher chance of not only detecting that back door but also keeping a record of its activities should detection happen later in the incident scenario. The following principles augment this key NSM insight.
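The covert channel described above can be hunted for in collected traffic even after the fact. As a minimal sketch (not from the book), the heuristic below flags PSH/ACK segments on flows that never completed a TCP handshake, which is characteristic of a "connectionless" back door. The packet records are simplified dictionaries with illustrative field names, standing in for whatever capture format an analyst actually stores.

```python
def find_orphan_push_acks(packets):
    """Return packets carrying PSH+ACK on flows with no prior SYN.

    Each packet is a dict with illustrative keys: src, dst, sport,
    dport, and flags (a string such as "S", "A", or "PA").
    """
    handshakes = set()   # flows on which we observed a SYN
    suspicious = []
    for pkt in packets:
        flow = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"])
        flags = pkt["flags"]
        if "S" in flags:
            # A normal connection attempt; remember this flow.
            handshakes.add(flow)
        elif "P" in flags and "A" in flags and flow not in handshakes:
            # Data pushed on a flow that never tried to connect.
            suspicious.append(pkt)
    return suspicious
```

The point is not this particular heuristic but the principle: because full-content or session data was collected, the question "what else did this back door do?" can be answered retroactively.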
Intruders Who Can Communicate with Victims Can Be Detected
Intrusions are not magic, although it is wise to remember Arthur C. Clarke's Third Law: "Any sufficiently advanced technology is indistinguishable from magic." [15] Despite media portrayals of hackers as wizards, their ways can be analyzed and understood. While reading the five phases of compromise in Chapter 1, you surely considered the difficulty and utility of detecting various intruder activities. As Table 1.2 showed, certain phases may be more observable than others. The sophistication of the intruder and the vulnerability of the target set the parameters for the detection process. Because intruders introduce traffic that would not ordinarily exist on a network, their presence can ultimately be detected. This leads to the idea that the closer to normal intruders appear, the more difficult detection will be.
This tenet relates to one of Marcus Ranum's "laws of intrusion detection." Ranum states, "The number of times an uninteresting thing happens is an interesting thing." [16] Consider the number of times per day that an organization resolves the host name "www.google.com." This is an utterly unimpressive activity, given that it relates to the frequency of searches using the Google search engine. For fun, you might log the frequency of these requests. If suddenly the number of requests for www.google.com doubled, the seemingly uninteresting act of resolving a host name takes on a new significance. Perhaps an intruder has installed a back door that communicates using Domain Name System (DNS) traffic. Alternatively, someone may have discovered a new trick to play with Google, such as a Googlewhack or a Googlefight. [17]
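Ranum's law lends itself to a simple baseline-and-threshold check. The sketch below (an illustration, not a production anomaly detector) flags any day whose DNS-query count exceeds a multiple of the trailing average; the seven-day window and 2x factor are assumptions chosen for the example.

```python
from statistics import mean

def flag_anomalous_days(daily_counts, factor=2.0, baseline_days=7):
    """Flag indexes of days whose query count exceeds `factor` times
    the mean of the preceding `baseline_days` days.

    Thresholds are illustrative; a real deployment would tune them
    to the site's observed traffic.
    """
    flagged = []
    for i in range(baseline_days, len(daily_counts)):
        baseline = mean(daily_counts[i - baseline_days:i])
        if daily_counts[i] > factor * baseline:
            flagged.append(i)
    return flagged
```

A week of roughly 100 lookups per day followed by a 250-lookup day would trip this check, prompting exactly the "why did an uninteresting thing happen more often?" question Ranum describes.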
Detection through Sampling Is Better Than No Detection
Security professionals tend to have an all-or-nothing attitude toward security. It may be the result of their ties to computer science, where answers are expressed in binary terms of on or off, 1 or 0. This attitude takes operational form when these people make monitoring decisions. If they can't figure out a way to see everything, they choose to see nothing. They might make some of the following statements.
- "I run a fractional OC-3 passing data at 75 Mbps. Forget watching it; I'll drop too many packets."
- "I've got a switched local area network whose aggregated bandwidth far exceeds the capacity of any SPAN port. Since I can't mirror all of the switch's traffic on the SPAN port, I'm not going to monitor any of it."
- "My e-commerce Web server handles thousands of transactions per second. I can't possibly record them all, so I'll ignore everything."
This attitude is self-defeating. Sampling can and should be used in environments where seeing everything is not possible. In each of the scenarios above, analyzing a sample of the traffic gives a higher probability of proactive intrusion detection than ignoring the problem does. Some products explicitly support this idea. A Symantec engineer told me that his company's ManHunt IDS can work with switches to dynamically reconfigure the ports mirrored on a Cisco switch's SPAN port. This allows the ManHunt IDS to perform intrusion detection through sampling.
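Even naive random sampling beats seeing nothing. As a minimal sketch under assumed conditions (a stream of records too large to keep in full), the function below retains roughly a configurable fraction of records; the 10 percent rate is illustrative, and a fixed seed makes the sample reproducible.

```python
import random

def sample_stream(records, rate=0.1, seed=0):
    """Keep roughly `rate` of the input records.

    A deterministic seed makes the sample reproducible for testing;
    operationally you might sample by flow or by time slice instead.
    """
    rng = random.Random(seed)
    return [r for r in records if rng.random() < rate]
```

If an intruder's activity makes up even a small share of traffic, a 10 percent sample still gives a nonzero and often substantial chance of catching it, whereas monitoring nothing guarantees failure.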
Detection through Traffic Analysis Is Better Than No Detection
Related to the idea of sampling is the concept of traffic analysis. Traffic analysis is the examination of communications to identify parties, timing characteristics, and other meta-data, without access to the content of those communications. At its most basic, traffic analysis is concerned with who's talking, for how long, and when. [18] Traffic analysis has been a mainstay of the SIGINT community throughout the last century and continues to be used today. (SIGINT is intelligence based on the collection and analysis of adversary communications to discover patterns, content, and parties of interest.)
Traffic analysis is the answer to those who claim encryption has rendered intrusion detection obsolete. Critics claim, "Encryption of my SSL-enabled Web server prevents me from seeing session contents. Forget monitoring it; I can't read the application data." While encryption will obfuscate the content of packets in several phases of compromise, analysts can observe the parties to those phases. If an analyst sees his or her Web server initiate a TFTP session outbound to a system in Russia, is it necessary to know anything more to identify a compromise? This book addresses traffic analysis in the context of collecting session data in Chapters 7 and 15.
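The TFTP scenario shows how little data traffic analysis needs. A hypothetical sketch over session records (field names are illustrative, not from any specific session-collection tool) might flag any known Web server initiating an outbound UDP session to port 69, the TFTP service port, without ever touching packet contents:

```python
def flag_outbound_tftp(sessions, web_servers):
    """Flag sessions in which a known Web server initiates an
    outbound UDP session to port 69 (TFTP).

    Each session is a dict with illustrative keys: src (initiator),
    dst, proto, and dport. No payload inspection is required.
    """
    return [
        s for s in sessions
        if s["src"] in web_servers
        and s["proto"] == "udp"
        and s["dport"] == 69
    ]
```

Because the check relies only on who talked to whom and over what port, it works equally well whether the server's application traffic is encrypted or not.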