We Continue to Use Old Thinking
Present systems use vulnerability management models to understand what will happen when the network is attacked. You take your vulnerability information and pop it into the model, and out comes a result that tells you how much "risk" you have of suffering an attack.
Consider a simple model where all you want to do is control the temperature in your house. Using vulnerability management as the basis for your design philosophy, you would start by getting an idea of how much heat your house leaks. The simple way to do this is to have somebody point an infrared sensor at your house and take a picture of the hot spots. This is analogous to having your network scanned for vulnerabilities. Now that you have an idea of where the heat is leaking out, you can plug the holes using better insulation, or, if you're cheap like me, clear plastic and duct tape.
According to the vulnerability management dogma, all you have to do to keep your temperature constant is to take periodic infrared snapshots of your house and fix the discovered leaks that might have popped up. The thinking is that there could have been a storm that tore the plastic over the windows, or worse, somebody could have opened a window and left it open. Therefore, this recurring analysis of your house is needed.
Before we move on, a caveat: this chapter is in no way intended to be a complete dissertation on the many ways one can model a network, but I believe that a brief description of the most popular methods will help lay the foundation for what we're going to talk about later.
Threat modeling is a way to understand how an attacker would attempt to breach your security. You start by assessing your network and applications the way an attacker would. The first thing you do is scan your network using something like nmap to find out what endpoints are on your network and what applications are running.2 You then drill down into those applications using other tools to look for weaknesses. For example, your scan might have discovered a Web server that hosts a custom application supporting the HR benefits service. These types of applications are typically Web-based user interfaces with a database back end. The next step is to use a Web scanning tool such as nikto3 to find out whether the Web server and database are vulnerable to things such as cross-site scripting or Structured Query Language (SQL) injection attacks.
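The discovery step above can be sketched in a few lines of Python. This is a hypothetical, toy TCP "connect" probe of a handful of well-known ports on a single host; real assessments use purpose-built tools such as nmap (for service and version detection) and nikto (for Web server checks). The port list and host are invented for illustration only.

```python
import socket

# Illustrative only: a small set of well-known ports to probe.
COMMON_PORTS = {21: "ftp", 22: "ssh", 80: "http", 443: "https", 3306: "mysql"}

def probe(host, ports=COMMON_PORTS, timeout=0.5):
    """Return the subset of ports on `host` that accept a TCP connection."""
    open_services = {}
    for port, name in ports.items():
        try:
            # A completed three-way handshake means something is listening.
            with socket.create_connection((host, port), timeout=timeout):
                open_services[port] = name
        except OSError:
            continue  # closed, filtered, or unreachable
    return open_services

if __name__ == "__main__":
    print(probe("127.0.0.1"))
```

Each open port found this way becomes a candidate for the deeper, application-level drill-down described above.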
After you have a list of potential attack methods, you prioritize them based on the value of the target endpoint and the probability that an attack will succeed. Web servers buried deep in your enterprise behind firewalls and layers of networks are obviously less susceptible to external SQL injection attempts than the systems in your DMZ.4 However, as you can see in Figure 3-1, anything in your DMZ is only one hop away from both sides of the security perimeter.
Figure 3-1 A simple pictogram that depicts how close the DMZ is to the Internet and how it can act as a bridge to the internal network.
Conversely, application servers on your DMZ would be the first systems that you fix because they are the most exposed.
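That prioritization step, value of the target multiplied by the likelihood of a successful attack, can be sketched as follows. The host names, asset values, and probabilities here are entirely invented for the example; in practice both numbers come from your own asset inventory and assessment data.

```python
# Hypothetical findings: asset value on a 1-10 scale, and an estimated
# probability that an external attack against it succeeds.
findings = [
    {"host": "dmz-web-01", "value": 8, "p_success": 0.7},      # exposed in the DMZ
    {"host": "hr-app-internal", "value": 9, "p_success": 0.2}, # buried behind firewalls
    {"host": "test-box", "value": 2, "p_success": 0.5},
]

def risk_score(finding):
    """Simple risk = asset value x probability of successful attack."""
    return finding["value"] * finding["p_success"]

# Fix the highest-scoring systems first.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f['host']:16s} risk={risk_score(f):.2f}")
```

Note how the exposed DMZ server outranks the more valuable internal HR system: the higher probability of a successful attack dominates, which is exactly why the DMZ systems get fixed first.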
Now that you have this list, you can better understand how a hacker might penetrate your network.
If you've been in the security business for more than a week, you've heard the term risk analysis mentioned more than once. Risk analysis is another way of looking at your vulnerabilities and determining how they can be leveraged against your enterprise.
The difference is that the result is expressed as a probability, or, as we say, risk. Now, you're probably saying that risk is a pretty subjective thing, and you are right. There are those who say if you have a vulnerability, it's only a matter of time before it's exploited, and they are right, too.
There are other, more esoteric modeling techniques, but they all pretty much use the same vulnerability assessment methodology as their baseline foundation.5 The problem with this approach is that it is a reactive way of addressing the problem. Now, before everyone starts filling up my inbox, the reason I say that it's reactive is that from the time the endpoint is deployed to the time you do the scan, you have a vulnerability on your network.
If you start with a vulnerability-based approach, you need to ensure that every single endpoint hasn't been compromised before you're sure you're more secure than when you started. Who's to say that some evil person hasn't already used one of your vulnerabilities to make a nice nest in your network somewhere? Not all hacks are apparent or obvious. As you will recall from Chapter 1, "Defining Endpoints," some hacks are placed on a system for later usage.
Now please don't run off and say, "Kadrich says that threat modeling and risk analysis are useless." Far from it. What I am saying is that although they are indeed useful tools for helping you understand the security posture of your network, they are not the models that are going to solve our endpoint security problem.
What I am saying is that there might be another, more effective way to model the network, albeit a somewhat unconventional one.