In the Shadow of Utility
The litany of documented software failures is extensive and tragic.[10] It does not take much effort to find examples of software failures resulting in loss of life, limb, money, time, or property. The trend only promises to worsen as software becomes more critical to almost every aspect of modern life; yet software manufacturers enjoy an astonishing amount of insulation from government oversight, legal liability, consumer retaliation, and indeed, as some critics have observed, engineering skill. A proven record of costly and deadly failures, with no significant decline in use by its victims, is baffling. On top of—in fact, despite—these shortcomings, victims (consumers, corporations, and governments alike) spend lavishly on acquiring and defending a clearly defective product. Why?
- Why do software manufacturers continue to produce and consumers continue to purchase unreliable and insecure software?
- Why do software users willingly and repeatedly accept licensing agreements that absolve software manufacturers of most forms of liability for any design or application defects that might result in injury, harm, or damages?
- Why do governments make so few demands on software manufacturers while placing onerous compliance requirements on software buyers, who are least qualified to address the problems associated with software manufacturing?
- Why should software not be subject to the same public policy concerns applied to other critical elements of national infrastructure?
- Why do chickens cross the road?
Each of these questions is answered in part by this simple response: to maximize utility. We all do things that appear perfectly acceptable in our own eyes but might appear perfectly crazy to someone else. A chicken crossing the road in the presence of drivers who may be willing to flatten the poor thing simply to interrupt the monotony of driving might appear rather crazy to an outside observer. In fact, from an economist’s perspective, this is perfectly rational behavior on the part of the chicken so long as the chicken believes it will be better off for the crossing. Jumping out of an airplane with a parachute might seem perfectly crazy to observers, unless the skydiver believes they are better off for the jumping. Likewise, software buyers continuing to accept licensing terms that put them at a distinct disadvantage legally, financially, or personally should the software fail might appear perfectly baffling, unless buyers believe they will be better off for the accepting.
Economists use the notion of utility to help explain why people behave the way they do. The concept of utility is a little like the concept of “happiness,” only more general. I explain the concept of utility in more detail in Chapter 2, “Six Billion Crash Test Dummies,” but suffice it to say, utility centers on the notion that most of us want to make our lives better, and that many of our life decisions are probably based on this desire. Software inarguably makes our lives better, but like crossing the road, jumping out of an airplane, or owning a swimming pool, everything has a cost.
It is not always the utility we get out of something or some activity that matters most, but how much it potentially costs us. Costs are not always obvious to the individual at the time of “purchase,” so to speak, and can be hidden or otherwise obscured. In general, cost can be measured in private terms (what it directly costs an individual to behave in a certain way) or in social terms (what it costs society for an individual to undertake a certain activity). The balance of private and social costs is the focus of many public policy efforts.
The private cost of smoking, for instance, is relatively low monetarily from the individual’s viewpoint, but smoking can impose substantial social costs due to the prolonged medical services associated with caring for long-term chronic smokers. Imposing a cigarette tax is but one way to raise the private cost of an activity in order to deter the behavior, thereby potentially reducing the social cost by shrinking both the total number of smokers in the population and how much they smoke.
People’s evaluation of utility versus cost can lead to some fairly interesting situations. As a case in point, in the United States swimming pools kill or injure more children aged 14 and under than firearms do. At 16 percent, accidental drowning was the second leading cause of injury-related death among children aged 14 and under in 2004 (car accidents ranked first); compare this with only 1 percent of children who died due to accidental discharge of firearms.[11] In fact, injury-related death due to accidental discharge of firearms ranks below all other causes of death and injury among children, including choking (17 percent), fire and burns (10 percent), and bicycle accidents, poisoning, and falls (each at 2 percent).
There are plenty of people, parents in particular, who might forbid their children from playing at the home of a neighbor who possesses one or more firearms, yet the likelihood of a child drowning at a neighborhood pool party is far higher than that of a child being injured or killed by a neighbor’s firearm. Still, few parents espouse anti-swimming-pool sentiment or join anti-swimming-pool action groups as they would for firearms, even though the statistics would certainly warrant such behavior. The rather simplistic answer to this incongruity is that a larger portion of the population sees the intrinsic utility of a swimming pool over and above the utility of possessing a handgun. Yet a swimming pool imposes a much higher cost on both families and society than do firearms. Even things with obvious utility, like a swimming pool, can have a dark shadow.
Played out against this backdrop of people’s desire for utility (without always recognizing the real cost) is the story of software. The questions at the start of this section really touch on the issues of self-interest and, more importantly, the incentives we have as individuals to undertake certain activities and the utility we derive from them. Understanding incentives also gives us a possible foundation for addressing why software manufacturing is in the state it is in. If it is up to humans to get the production processes for Portland cement and software correct, then it is just as important, if not more so, to understand why humans behave as they do. Incentives are a good place to start.
As such, Geekonomics is not so much the story of software told through the lens of technology as through the lens of humanity: specifically, the incentives for manufacturing, buying, and exploiting insecure software. Economics is simply one way of understanding why humans behave as they do. But if economics is generally described as “the dismal science,” then software engineering is economics’ freakish, serotonin-deprived cousin; economics is positively cheery and approachable in comparison. To date, the discussion regarding software has been dominated largely by technology experts whose explanations serve mainly to alienate the very people who are touched most by software. Us.
Yet the congress of these two disciplines tells an important and consequential story affecting both the reader’s everyday life and the welfare of the global community. The issue of insecure software is at least as much about economics as it is about technology, and so I discuss both in this book. This book is not intended to be a comprehensive economics text, a litany of software failures (although some of this is inevitable), a diatribe about how the world is coming apart at the seams, or a prophecy that civilization’s ultimate demise will occur because of “bad” software. Prophesying disaster is cliché. Bad things happen all the time, and forecasting tragic events does not require an exceptional amount of talent, intelligence, or foresight. If anything, the world tolerates disaster and somehow still makes progress. This does not mean valid threats to economic and national stability due to “bad” software are illusory or should be minimized. On the contrary, the story of insecure software has not been readily approachable and is therefore not well understood. We cannot manage what we do not understand, including ourselves. Software is a ghost in the machine and, at times, frustratingly so. But because software is a human creation, it does not need to remain a frustrating ghost.
My intent in this book is to give this story—the story of insecure software—a suitable voice so that readers from any walk of life can understand the implications. I promise the reader that there is not a single graph in this book; nor is there a single snippet of code. This story should be accessible to more than the experts because it is we who create this story and are touched by it daily. The consequences are too great and far-reaching for the average person to remain unaware.
The first task of Geekonomics, then, is to address the questions presented at the beginning of this section as completely as possible within the confines of a single book. This means some aspects may be abbreviated or not as thorough as some readers might prefer. If anything, though, the story of software can be entertaining, and this book is intended to entertain as well as inform and enlighten.
The second and more difficult task of Geekonomics is to analyze what the real cost of insecure software might be. Swimming pools can have a high cost, but how costly is insecure software, really? This is a challenging task because, unlike statistics regarding accidental drowning, good data on which to base cost estimates for insecure software is notoriously lacking and inaccurate, for two reasons. First, there is presumed to be a significant amount of underreporting, given that many organizations might not realize they have been hacked or do not want to share such information publicly for fear of consumer retaliation or bad publicity. Second, actual costs tend to be distorted by the incentives of those reporting their losses. Some victims may inflate losses in an effort to increase their chances of recovering damages in court; other victims might deflate costs in an effort to quell any uprising on the part of customers or shareholders. Law enforcement agencies and cyber security companies may inflate numbers in an effort to gain more funding or more clients, respectively. Whatever the incentives for reporting high or low might be, somewhere within these numbers is a hint of what is actually going on.
The third and final task of Geekonomics is to identify the current incentives of market participants and what new incentives might be necessary to change the status quo. One alternative is always to do nothing: simply let things work themselves out on their own or, more accurately, let the market determine what should be done. This book argues against such inaction. Any intervention in a market carries with it the risk of shock, and doing nothing is certainly one way of avoiding that risk. But intervention is necessary when a condition is likely to degenerate if nothing is done. The magnitude of the risk is great enough, and the signs of degeneration clear enough, that new and different incentives are needed to motivate software manufacturers to produce, and software buyers to demand, safer, higher-quality, and more secure software.