- Documenting the Need for New Technology
- Distributed Information Systems Everywhere
- The Global Communication Spaghetti Pot
- Electronic Archeology: Layers upon Layers
- The Gathering Storm of New Activities on the Web
- Global Electronic Trade
- Agile Systems
- Cyber Warfare and the Open Electronic Society
- Summary: Staying Ahead of Chaos
1.7 Cyber Warfare and the Open Electronic Society
What would happen to us if our computer systems suddenly stopped working for a few days? Nobody knows! Of course, given the redundancy and tolerance to failures in our computer systems, that is very unlikely to happen. Or is it? It is quite surprising what havoc a rather unimaginative virus attack can cause. And hackers seem to be able to invade corporate Web sites and government computers almost at will. The reality is that we do not know what is happening in our information systems minute by minute or day by day.
Cyber warfare involves the development of new ways to defend our IT layers against an increasing set of criminal and destructive activities. The battleground is the Internet and every corporate IT layer. These are some of the current problems:
Intrusions into computers. An intruder is someone who is not an authorized user and who anonymously gains privileged (or "root") access to a computer. The purposes for an intrusion vary widely and include the following examples:
Using a computer anonymously to set up a chat room for drug trafficking
Accessing databases (supposedly secure) to steal information such as credit-card numbers
Using a network server to launch another kind of attack at some other site in the network
Denial-of-service attacks on Web sites and networks
Computer viruses, propagated from machine to machine by file transfers and e-mail messages
Spam, unsolicited e-mail, and other nuisance activities
Present security technologies such as encryption, firewalls, network-level intrusion detectors, and virus scanners defend against well-known, "textbook," criminal activities on the Web. When a new kind of attack is discovered, the IT security managers scramble to include it in their log file search scripts, and the security tools manufacturers try to add a defense against it to their products. But new tricks and techniques to exploit vulnerabilities in the defenses are being devised all the time. This kind of crude rearguard response we have at the moment (fix the defenses after we know what happened) is totally inadequate for dealing with the coming wave of electronic crime.
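The rearguard cycle can be seen in miniature in a typical log file search script. The sketch below is hypothetical (the log format and the signature list are assumptions, not any real product's rules), but it shows why the approach is inherently reactive: an attack is invisible until an analyst has seen it once and added its pattern to the list.

```python
import re

# Hypothetical signatures for *known* attacks. A brand-new attack
# matches nothing here until someone adds a pattern for it.
KNOWN_ATTACK_SIGNATURES = [
    re.compile(r"GET /cgi-bin/phf"),          # probe for an old CGI exploit
    re.compile(r"login failed.*root", re.I),  # failed root login attempts
]

def scan_log(lines):
    """Return the log lines that match any known attack signature."""
    return [line for line in lines
            if any(sig.search(line) for sig in KNOWN_ATTACK_SIGNATURES)]

sample_log = [
    "10.0.0.5 - GET /index.html 200",
    "10.0.0.9 - GET /cgi-bin/phf?Qalias=x 404",
    "login failed for user ROOT from 10.0.0.9",
]
print(scan_log(sample_log))  # flags the second and third lines only
```

Note that the harmless-looking first line and any genuinely novel attack line would pass through silently; the script defends only against yesterday's attacks.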
It is a total myth that it takes an expert or genius to break into a computer. Breaking and entering an operating system is something that's taught nowadays; it is a very good way to learn how operating systems and network protocols work. There are chat rooms on how to break in and scripted attacks that can be downloaded. The majority of those breaking into systems do not understand how their exploits work any better than a clerk understands how the operating system running his inventory program works.
When a new kind of intrusion is devised, it does take an understanding of the detailed workings of an operating system, or a network protocol, or an application program. From then on, that method of intrusion can be applied using scripts that carry out the steps of the method automatically.
Experts often publish new attacks on the Web to encourage manufacturers to plug the holes in their operating systems and application programs. But of course, the "crackers" and "kiddies" can read the expert's techniques too! And even when holes are plugged by the manufacturers, the patches are often not downloaded to many of the computers on the network. A vulnerable computer is usually the weak link into a subnet or a whole IT layer.
At present, anyone in the know, including those who build or manage IT networks or advise on Internet policy, admits that our ability to defend our IT infrastructure against attacks, both old and new, is dismal. Let's list a few of the problems.
We don't have powerful enough technology, even at the low level of network events, to detect nefarious activities in real time and cut them off. And new ones are springing up all the time.
At the network level, the current crop of detectors has absurdly high false alarm rates. We can't correlate the outputs from various network-level detection products to get a more accurate picture of whether or not an intrusion attempt is taking place. Correlation of the various detectors produced by the security industry would need a lot of planning and agreements. First, we need standards and conventions for the formats of detector outputs, so-called alerts. This first-step problem has been recognized by the Internet Engineering Task Force (IETF) and other organizations. We may have some standards for alerts and alarms soon. Then we need event processing technology beyond what is presently available from the security industry to do real-time correlation of alerts.
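To make the correlation idea concrete, here is a minimal, hypothetical correlator. It assumes each detector has already been coerced into a common alert format (source, target, kind of activity), which is exactly the standardization step discussed above, and it raises an alarm only when independent detectors corroborate one another: one simple way to cut the false alarm rate.

```python
from collections import defaultdict

def correlate(alerts, min_detectors=2):
    """Group alerts by (source, target, kind) and report only events
    that at least min_detectors independent detectors agree on."""
    votes = defaultdict(set)
    for alert in alerts:
        key = (alert["source"], alert["target"], alert["kind"])
        votes[key].add(alert["detector"])  # a set: one vote per detector
    return [key for key, detectors in votes.items()
            if len(detectors) >= min_detectors]

alerts = [
    {"detector": "net-scanner-A", "source": "10.0.0.9",
     "target": "db-server", "kind": "port-sweep"},
    {"detector": "net-scanner-B", "source": "10.0.0.9",
     "target": "db-server", "kind": "port-sweep"},
    {"detector": "net-scanner-A", "source": "10.0.0.7",
     "target": "web-server", "kind": "port-sweep"},  # uncorroborated
]
print(correlate(alerts))  # only the corroborated event survives
```

Real correlation is much harder than this sketch (timing windows, partial matches, differing detector vocabularies), which is why it calls for event processing technology beyond simple grouping.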
Some kinds of attacks are difficult, if not impossible, to detect at the network level. They would be easier to detect if we had application-level monitoring, but we don't. We don't have adequate monitoring to provide defensive tools with the kind of information they need. To do this, we need detectors that work at all levels of an IT system.
New Internet applications and activities, such as global eCommerce, present new opportunities for electronic theft and destruction. We need technologies that enable us to defend at the highest levels of enterprise operations. Just imagine being able to mess with your competitor's electronic trading processes: insert a few false events supposedly coming from a partner at the "right" time, or intercept and divert process events between trading partners, or relay process events to competitors.
We, as a society, have become far too dependent upon the Internet. Most of our essential services, from telecommunications and financial transactions to food distribution, now operate over the Internet and IT layers that are accessible from the Internet. In many cases there is no backup mode of operation, should the Internet be taken down. A denial-of-service attack on the Federal Reserve's Fedwire funds-transfer system could, if successful, bring the banking system to a standstill.
Traditional security technology (firewalls, for example) and open eCommerce are in direct conflict.
The first four problems all indicate the need for event processing technology that applies across all levels of enterprise operations and applies uniformly to a wide range of the security industry's products. The fifth problem is a political and national policy problem, beyond the scope of this book.
But the sixth problem is one of the most perplexing problems with the Internet explosion: the tension between traditional security based upon locking things up and denying access (firewalls, virtual private networks, and so on) and the need for open commerce. Industry analysts have written report after report on this problem, but the technology to allow openness, whereby anyone can communicate with anyone else and still have secure operations, just doesn't exist.
The sixth problem could become the great stalemate in developing visions like the global eCommerce vision. Unfortunately, the "real" problem is not the sixth problem itself, but the lack of attention to it by the developers of grand visions. They are focused on making the vision happen. I have no doubt that the sixth problem can be solved. The question is whether a solution must be part of the infrastructure of the grand vision from the earliest designs, or whether it can be added piecemeal afterward when the cyber warfare really begins: rearguard actions all over again, but with stakes higher than ever.
There may be a lesson to be learned from the DARPAnet, the precursor of the Internet, which was built on open principles with little thought given to misuse. The focus was on getting distributed communication networks to work. Could the DARPAnet have been designed differently, in hindsight, to make our present security problems easier to track, compartmentalize, and handle? Some experts feel that authentication should have been included in name servers from the very beginning, and that the messaging protocols should have been designed to always indicate the actual source of any message, no matter how many "hops" it makes in getting to its destination.
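The hindsight design hinted at here, a message that always carries its actual origin through any number of hops, can be sketched as follows. The message format and relay function are purely illustrative assumptions, not an actual DARPAnet or Internet protocol.

```python
def make_message(origin, payload):
    """Create a message that records its true origin and starts a hop path."""
    return {"origin": origin, "path": [origin], "payload": payload}

def relay(message, hop):
    """Forward a message through a hop, appending the hop to the path.
    The origin field is never rewritten, so the actual source stays
    recoverable no matter how many hops the message makes."""
    forwarded = dict(message)
    forwarded["path"] = message["path"] + [hop]
    return forwarded

msg = make_message("host-A", "hello")
msg = relay(msg, "relay-1")
msg = relay(msg, "relay-2")
print(msg["origin"], msg["path"])
# host-A ['host-A', 'relay-1', 'relay-2']
```

The hard part in reality is not recording the path but making it trustworthy: without authentication at each hop, a malicious relay can simply forge the origin and path fields, which is exactly why the experts wanted authentication designed in from the beginning.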
Following up on this early DARPAnet lesson, should the IT systems of the future be built with monitoring facilities and event feeds at every level from the network to the strategic level? And should they have backup duplication of essential monitoring and tracking facilities? This will be expensive, just like having security checkpoints at airports today. But it may become part of the cost of running the information society.