1.3 Harm
The negative consequence of an actualized threat is harm; we protect ourselves against threats to reduce or eliminate harm. We have already described many examples of computer harm: a stolen computer, a modified or lost file, a revealed private letter, or denied access to data. These events cause harm that we want to avoid.
In our earlier discussion of assets, we noted that value depends on owner or outsider perception and need. Some aspects of value are immeasurable, such as the value of the paper you need to submit to your professor tomorrow; if you lose the paper (that is, if its availability is lost), no amount of money will compensate you for it. Items on which you place little or no value might be more valuable to someone else; for example, the group photograph taken at last night’s party can reveal that your friend was not where he told his partner he would be. Even though it may be difficult to assign a specific number as the value of an asset, you can usually assign a value on a generic scale, such as moderate or minuscule or incredibly high, depending on the degree of harm that loss of or damage to the object would cause. Or you can assign a value relative to other assets, based on comparable loss: This version of the file is more valuable to you than that version.
Credit card details are astonishingly cheap, considering how much time and effort it takes victims to recover from a stolen card number. VPN provider NordVPN looked at credit cards for sale on the so-called dark web, the unregistered space of websites available only to those who know where to look (that is, people willing to engage in shady transactions). Of 4.5 million cards for sale, 1.6 million were stolen from U.S. citizens. The going price for U.S. card numbers (in U.S. dollars) was between $1 and $12, with an average of $4. The most expensive cards, at $20, were for sale from Hong Kong and the Philippines [FLI21]. Privacy Affairs, a web publication focusing on privacy and cybersecurity research, did a similar analysis of the price of stolen credentials being offered for sale on the dark web [RUF22]. It found, for example, a price of $120 for a stolen U.S. credit card with a $5,000 spendable balance remaining; when the remaining balance was only $1,000, the price dropped to $80. A stolen online banking account login for an account with at least $2,000 was $65. A cloned Mastercard or Visa card with PIN was $20. A hacked Facebook account cost $45; a Twitter account, $25; and a Gmail account, $65.
The value of many assets can change over time, so the degree of harm (and therefore the severity of a threat) can change too. With unlimited time, money, and capability, we might try to protect against all kinds of harm. But because our resources are limited, we must prioritize our protection, safeguarding only against serious threats and the ones we can control. Choosing the threats we try to mitigate involves a process called risk management, and it includes weighing the seriousness of a threat against our ability to protect. (We study risk management in Chapter 10.)
Risk management involves choosing which threats to control and what resources to devote to protection.
Risk and Common Sense
The number and kinds of threats are practically unlimited because devising an attack requires only an active imagination, determination, persistence, and time (as well as access and resources). The nature and number of threats in the computer world reflect life in general: The causes of harm are limitless and largely unpredictable. Natural disasters like volcanoes and earthquakes happen with little or no warning, as do auto accidents, heart attacks, influenza, and random acts of violence. To protect against accidents or the flu, you might decide to stay indoors, never venturing outside. But by doing so, you trade one set of risks for another; while you are inside, you are vulnerable to building collapse or carbon monoxide poisoning. In the same way, there are too many possible causes of harm for us to protect ourselves—or our computers—completely against all of them.
In real life, we make decisions every day about the best way to provide for our security. For example, although we may choose to live in an area that is not prone to earthquakes, no area is entirely without earthquake risk. Some risk avoidance choices are conscious, such as deciding to follow speed limit signs or cross the street when we see an unleashed dog lying on a front porch; other times, our subconscious guides us, from experience or expertise, to take some precaution. We evaluate the likelihood and severity of harm and then consider ways (called countermeasures or controls) to address threats and determine the controls’ effectiveness.
Computer security is similar. Because we cannot protect against everything, we prioritize: Only so much time, energy, or money is available for protection, so we address some risks and let others slide. Or we consider alternative courses of action, such as transferring risk by purchasing insurance or even doing nothing if the side effects of the countermeasure could be worse than the possible harm. The risk that remains uncovered by controls is called residual risk.
A simplistic model of risk management involves a user’s calculating the value of all assets, determining the amount of harm from all possible threats, computing the costs of protection, selecting safeguards (that is, controls or countermeasures) based on the degree of risk and on limited resources, and applying the safeguards to optimize harm averted. This risk management strategy is a logical and sensible approach to protection, but it has significant drawbacks. In reality, it is difficult to assess the value of each asset; as we have seen, value can change depending on context, timing, and a host of other characteristics. Even harder is determining the impact of all possible threats. The range of possible threats is effectively limitless, and it is difficult (if not impossible in some situations) to know the short- and long-term impacts of an action. For instance, Sidebar 1-4 describes a study of the impact of security breaches on corporate finances, showing that a threat must be evaluated over time, not just at a single instant.
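To make the steps of this simplistic model concrete, here is a minimal sketch in Python. All of the numbers (asset values, likelihoods, safeguard costs, effectiveness figures) are invented for illustration, and the greedy selection is just one simple way to “optimize harm averted” within a budget; it is not a prescribed method.

```python
# A minimal sketch of the simplistic risk-management model described above.
# All figures are invented for illustration; as the text notes, real values
# are hard to estimate and change with context and time.

# Each threat: (name, annual likelihood, harm in dollars if it occurs)
threats = [
    ("laptop theft",        0.05, 2_000),
    ("ransomware",          0.02, 50_000),
    ("accidental deletion", 0.30, 1_000),
]

# Each safeguard: (name, annual cost, threat it addresses,
#                  fraction of that threat's expected harm it averts)
safeguards = [
    ("cable lock + tracking", 100, "laptop theft",        0.6),
    ("offline backups",       500, "ransomware",          0.9),
    ("versioned file store",  200, "accidental deletion", 0.8),
]

# Expected annual harm per threat: likelihood times impact
exposure = {name: p * harm for name, p, harm in threats}
print("Expected harm with no controls: $%.0f" % sum(exposure.values()))

# Greedily apply safeguards with the largest net benefit, as long as each
# averts more harm than it costs and fits within the protection budget.
budget = 700
for name, cost, threat, effectiveness in sorted(
        safeguards,
        key=lambda s: exposure[s[2]] * s[3] - s[1],  # net benefit
        reverse=True):
    averted = exposure[threat] * effectiveness
    if cost <= budget and averted > cost:
        budget -= cost
        exposure[threat] -= averted
        print(f"apply {name}: averts ${averted:.0f} for ${cost}")

# Whatever expected harm the chosen controls do not cover is residual risk.
print("Residual risk: $%.0f" % sum(exposure.values()))
```

Even this toy version exposes the model’s weakness: every number it consumes is an estimate, and each estimate can shift with context and time.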
Although we should not apply protection haphazardly, we will necessarily protect against threats we consider most likely or most damaging. For this reason, it is essential to understand how we perceive threats and evaluate their likely occurrence and impact. Sidebar 1-5 summarizes some of the relevant research in risk perception and decision making. Such research suggests that for relatively rare events, such as high-impact security problems, we must take into account the ways in which people focus more on the impact than on the actual likelihood of occurrence.
Let us look more carefully at the nature of a security threat. We have seen that one aspect—its potential harm—is the amount of damage it can cause; this aspect is the impact component of the risk. We also consider the magnitude of the threat’s likelihood. A likely threat is not just one that someone might want to pull off but rather one that could actually occur. Some people might daydream about getting rich by robbing a bank; most, however, would reject that idea because of its difficulty (if not its immorality or risk). One aspect of likelihood is feasibility: Is it even possible to accomplish the attack? If the answer is no, then the likelihood is zero, and therefore so is the risk. So a good place to start in assessing risk is to look at whether the proposed action is feasible. Three factors determine feasibility, as we describe next.
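These two components are commonly combined multiplicatively; the following formulation is a standard one, though it is not stated explicitly above:

```latex
\text{risk} = \text{likelihood of occurrence} \times \text{impact}
```

Written this way, the feasibility argument is immediate: an infeasible attack has likelihood zero, so its risk is zero no matter how great its potential impact.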
Spending for security is based on the impact and likelihood of potential harm—both of which are nearly impossible to measure precisely.
Method–Opportunity–Motive
A malicious attacker must have three things to achieve success: method, opportunity, and motive, depicted in Figure 1-11. These three elements are sometimes identified by their acronym, MOM, or M–O–M. Roughly speaking, method is the how; opportunity, the when; and motive, the why of an attack. Deny the attacker any of those three and the attack will not succeed. Let us examine these properties individually.
FIGURE 1-11 Method–Opportunity–Motive
Method
By method we mean the skills, knowledge, tools, and other things with which to perpetrate the attack. Think of comic characters who want to do something, for example, steal valuable jewelry, but who are so inept that their every move is doomed to fail. These characters lack the capability or method to succeed, in part because there are no classes in jewel theft or books on burglary for dummies.
Anyone can find plenty of courses and books about computing, however. Knowledge of specific models of computer systems is widely available in bookstores and on the internet. Mass-market systems (such as Microsoft, Apple, Android, or Unix operating systems) are readily available for purchase, as are common software products, such as word processors or calendar management systems. Potential attackers can even get hardware and software on which to experiment and perfect an attack. Some manufacturers release detailed specifications of how their systems are designed or operate, as guides for users and integrators who want to implement complementary products.
Various attack tools—scripts, model programs, and tools to test for weaknesses—are available from hackers’ sites on the internet, to the degree that many attacks require only the attacker’s ability to download and run a program. The term script kiddie describes someone who downloads a complete attack code package and needs only to enter a few details to identify the target and let the script perform the attack. Often, only time and inclination limit an attacker.
Opportunity
Opportunity is the time and access needed to execute an attack. You hear that a fabulous apartment has just become available, so you rush to the rental agent, only to find someone else rented it five minutes earlier. You missed your opportunity.
Many computer systems present ample opportunity for attack. Systems available to the public are, by definition, accessible; often their owners take special care to make them fully available so that if one hardware component fails, the owner has spares instantly ready to be pressed into service. Other people are oblivious to the need to protect their computers, so unattended laptops and unsecured network connections give ample opportunity for attack. Some systems have private or undocumented entry points for administration or maintenance, but attackers can also find and use those entry points to attack the systems.
Motive
Finally, an attacker must have a motive or reason to want to attack. You probably have ample opportunity and ability to throw a rock through your neighbor’s window, but you do not. Why not? Because you have no reason to want to harm your neighbor: You lack motive.
We have already described some of the motives for computer crime: money, fame, self-esteem, politics, terror. But it is sometimes difficult to determine the motive for an attack. Some places are “attractive targets,” meaning they are very appealing to attackers, based on the attackers’ goals. Popular targets include law enforcement and defense department computers, perhaps because they are presumed to be well protected against attack (so they present a challenge, and the attacker shows prowess by mounting a successful attack). Other systems are attacked because they are easy to attack. And some systems are attacked at random, simply because they are there or as practice for a more important subsequent attack.
By demonstrating feasibility, the factors of method, opportunity, and motive determine whether an attack can succeed. These factors give the advantage to the attacker because they are qualities or strengths the attacker must possess. Another factor, this time giving an advantage to the defender, determines whether an attack will succeed: The attacker needs a vulnerability, an undefended place to attack. If the defender removes vulnerabilities, the attacker cannot attack.
Method, opportunity, and motive are necessary for an attack to succeed; without all three, the attack will fail.
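To summarize the discussion, here is a small illustrative sketch (our construction, not from the text) expressing this conjunction in code: an attack can succeed only if the attacker possesses method, opportunity, and motive and the defender has left a vulnerability in place.

```python
from dataclasses import dataclass

@dataclass
class Attack:
    method: bool        # skills, knowledge, and tools to carry out the attack
    opportunity: bool   # time and access to the target
    motive: bool        # a reason to want to attack
    vulnerability: bool # an undefended place to attack (defender's side)

    def can_succeed(self) -> bool:
        # Deny any one factor and the attack fails.
        return (self.method and self.opportunity
                and self.motive and self.vulnerability)

# The would-be bank robber: motive and opportunity, but no feasible method.
daydream = Attack(method=False, opportunity=True,
                  motive=True, vulnerability=True)
print(daydream.can_succeed())  # False: infeasible, so the risk is zero
```

The conjunction also shows why the defender’s position is not hopeless: removing a vulnerability, restricting access, or making an attack infeasible falsifies any one term and thereby blocks the whole attack.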