1.2 Attack and Defense
You cannot do computer security research, or computer security practice, without carefully examining your attack model: your assumptions about the adversary's abilities and the strategies he'll use to attack your system. In cryptography research, for example, you might assume that "the adversary cannot find the secret key" or "the adversary isn't better at factoring than I am" or "the adversary won't successfully tamper with the tamperproof smartcard." Once you have an adversarial model, you can go ahead and design a system that is secure against these attack scenarios. In the real world, adversaries will then immediately try to find scenarios you didn't think about in order to get past the defenses you've put up! The cheapest way to break a cryptosystem isn't to spend $100,000 on specialized hardware to factor a key; it's to spend $50,000 to bribe someone to give you the key. The easiest way to get secrets out of a smartcard isn't to pry the card open (bypassing the security features that the designers put in place to defend against exactly this attack), but to induce faults in the card by subjecting it to radiation, modifying its power supply voltage, and so on: attacks the designers didn't have in mind.
In surreptitious software research, the situation is no different. Researchers have often made assumptions about the behavior of the adversary that have no basis in reality. In our own research, we (the authors of this book) have often assumed that "the adversary will employ static analysis techniques to attack the system," because, coming from a compiler background, that's exactly what we would do! Others have speculated that "the adversary will build up a complete graph of the system and then look for subgraphs that are weakly connected, under the assumption that these represent surreptitious code that does not belong to the original program." One might guess that those researchers came from a graph-theoretic background. Some work has endowed the adversary with too little power ("the adversary will not run the program"; of course he will!) and some with too much: "The adversary has access to, or can construct, a comprehensive test input set for the program he's attacking, giving him the confidence to make wholesale alterations to a program he did not write and for which he has neither documentation nor source code."
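To make the graph-based attack concrete, here is a minimal sketch in Python of the kind of analysis such an adversary might run. The toy call graph, the function names, and the use of weakly connected components as the connectivity test are all our own illustrative assumptions, not a reconstruction of any published attack: the sketch simply flags groups of functions that have no call edges linking them to the rest of the program.

    from collections import defaultdict

    # Toy directed call graph: edges point from caller to callee.
    # All function names are hypothetical examples.
    CALL_GRAPH = {
        "main":       ["parse_args", "run"],
        "parse_args": [],
        "run":        ["compute", "report"],
        "compute":    ["helper"],
        "helper":     [],
        "report":     [],
        # A cluster with no edges to or from the rest of the program --
        # the kind of weakly connected subgraph the hypothesized
        # adversary would flag as possible surreptitious code.
        "wm_init":    ["wm_emit"],
        "wm_emit":    [],
    }

    def weakly_connected_components(graph):
        """Group nodes that are connected when edge direction is ignored."""
        undirected = defaultdict(set)
        for caller, callees in graph.items():
            undirected[caller]  # ensure isolated callers appear as nodes
            for callee in callees:
                undirected[caller].add(callee)
                undirected[callee].add(caller)

        seen, components = set(), []
        for start in undirected:
            if start in seen:
                continue
            component, stack = set(), [start]
            while stack:  # iterative DFS over the undirected view
                node = stack.pop()
                if node in component:
                    continue
                component.add(node)
                stack.extend(undirected[node] - component)
            seen |= component
            components.append(component)
        return components

    # Flag every component that does not contain the program entry point.
    for component in weakly_connected_components(CALL_GRAPH):
        if "main" not in component:
            print("suspicious subgraph:", sorted(component))

On this toy graph, the sketch prints the wm_init/wm_emit cluster, which is exactly the kind of isolated region the hypothesized adversary would single out for closer inspection. A real attack would need a subtler connectivity measure, since in practice surreptitious code is rarely this cleanly separated from the program that hosts it.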
Unfortunately, much of the research published on surreptitious software has not made explicit the attack model under which it was developed. One of our goals with this book is to change that. Thus, for each algorithm, we present the attacks that are possible now and those that may become possible in the future.
In Chapter 2 we will also look at a defense model: ideas for how we good guys can protect ourselves against attacks from the bad guys. We will propose a model that takes ideas from the ways plants, animals, and human societies have used surreptition to protect themselves against attackers, and applies them to protecting software from attack. We will use this model in the rest of the book to classify the software protection schemes that have been proposed in the literature.