For half a century, developers have protected their systems by coding rules that identify and block specific events. Edit rules look for corrupted data, firewalls enforce hard-coded permissions, virus definitions guard against known infections, and intrusion-detection systems look for activities deemed in advance to be suspicious by systems administrators.
But that approach will increasingly be supplemented by one in which systems become their own security experts, adapting to threats as they unfold and staying one step ahead of the action. A number of research projects are headed in that direction.
At the University of New Mexico in Albuquerque, computer science professor Stephanie Forrest is developing intrusion-detection methods that mimic biological immune systems. Our bodies can detect and defend themselves against foreign invaders such as bacteria and parasites, even if the invaders haven't been seen before. Forrest's prototypes do the same thing.
Her host-based intrusion-detection system builds a model of what is normal by observing short sequences of system calls that programs make to the operating system kernel over time. The system learns to spot deviations from the norm, such as those that might be caused by a Trojan horse program or a buffer-overflow attack. When suspicious behavior is spotted, the system can take evasive action or issue alerts.
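The idea can be sketched in a few lines: record the short windows of system calls seen during normal operation, then score new activity by how many of its windows were never seen before. This is a minimal illustration of the approach, not Forrest's actual system; the call names, window size and scoring are assumptions for the example.

```python
def ngrams(seq, k):
    """All length-k sliding windows over a sequence of system calls."""
    return {tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)}

def build_normal_profile(traces, k=3):
    """Collect every length-k call window observed during normal runs."""
    profile = set()
    for trace in traces:
        profile |= ngrams(trace, k)
    return profile

def anomaly_score(trace, profile, k=3):
    """Fraction of a new trace's windows never seen in the normal profile."""
    windows = ngrams(trace, k)
    if not windows:
        return 0.0
    unseen = sum(1 for w in windows if w not in profile)
    return unseen / len(windows)

# Traces recorded while the program behaved normally (illustrative names).
normal = [["open", "read", "read", "write", "close"],
          ["open", "read", "write", "close"]]
profile = build_normal_profile(normal)

# A familiar pattern scores 0.0; an unusual call pattern scores high.
print(anomaly_score(["open", "read", "write", "close"], profile))   # 0.0
print(anomaly_score(["open", "exec", "socket", "write"], profile))  # 1.0
```

In practice a single unseen window may be noise, so real systems raise an alert only when the anomaly score stays high over many consecutive windows.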
"The central challenge with computer security is determining the difference between normal activity and potentially harmful activity," says Norman Johnson, an information security expert at Los Alamos National Laboratory in New Mexico. "The common solution is to identify the threat and protect against it, but in many ways, this is the same as constantly fighting the last war, and it can be quite inefficient in environments that are rapidly changing."
In another project—one that considers whole networks of computers rather than a single machine—Forrest and her students are developing intrusion-detection systems even more directly modeled on how the immune system works. The body continuously produces immune cells with random variations. As the cells mature, the ones that match the body's own proteins are eliminated, leaving only those that represent deviations as guides to what the body should protect against. Likewise, Forrest's software randomly generates "detectors," throws away those that match normal behavior and retains those that represent abnormal behavior.
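The negative-selection process described above can be sketched as follows. Behavior patterns are encoded here as short binary strings and compared with a simple "r-contiguous" matching rule, both common simplifications in the research literature; the pattern encoding, string length and parameter values are illustrative assumptions, not details of Forrest's implementation.

```python
import random

def matches(a, b, r):
    """r-contiguous rule: two strings match if they agree on r
    consecutive positions."""
    return any(a[i:i + r] == b[i:i + r] for i in range(len(a) - r + 1))

def negative_selection(self_set, n_detectors, length, r, alphabet="01"):
    """Generate random detectors, discarding any that match 'self'
    (normal behavior), so survivors cover only abnormal patterns."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = "".join(random.choice(alphabet) for _ in range(length))
        if not any(matches(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomalous(pattern, detectors, r):
    """A pattern matched by any surviving detector is flagged as non-self."""
    return any(matches(pattern, d, r) for d in detectors)

random.seed(0)
self_set = ["0000", "0001", "0011"]          # encodings of normal behavior
detectors = negative_selection(self_set, 5, length=4, r=3)

# By construction, nothing in the self set is ever flagged.
print(any(is_anomalous(s, detectors, 3) for s in self_set))  # False
```

Because each machine trains detectors against its own notion of "self," two machines running the same software can end up with different detector sets, which is part of what makes the scheme hard to attack uniformly.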
Each machine in the network generates its own detectors based on that machine's unique behavior and experiences, and the detectors work with no central coordination or control. In fact, just how the detectors work isn't precisely known, Forrest says. "We are actively trying to understand how the system works and how well it behaves," she says.
Human Response
Indeed, these experimental approaches don't work perfectly, Forrest acknowledges, but she points out that no security measure, including encryption or authentication, works perfectly either. She says the most secure systems will employ multiple layers of protection, just as the human body does.
"The advantage of this type of system is that it is largely self-maintaining and doesn't require continual updating by experts," Johnson says. "And yes, sometimes things don't quite work right and the system has 'allergies,' but overall it outperforms the traditional approach. There is much we can learn from this project in what must be done for homeland security."
Meanwhile, work at Hewlett-Packard Co.'s research laboratory in Bristol, England, may lead to techniques that protect networks from rapidly spreading infections caused by viruses and worms such as Code Red and Nimda. It's part of HP's work in "resilient infrastructures," in which computers keep working—possibly in a degraded mode—in the face of attacks.
HP's "virus-throttling" software is based on the fact that viruses attempt to spread as rapidly as possible to as many machines as possible—not the way legitimate users work. The software permits connections to familiar machines at a slow rate—one or fewer per second, say—but delays or blocks connections to unfamiliar machines when the requests come at a rate of 200 to 500 per second, as they did with Code Red and Nimda.
The result is that the spread of a virus can be greatly retarded before its signature can be added to virus-detection software. False positives—legitimate requests to connect to unfamiliar computers or to connect at a slightly higher rate than normal—result in delays for the user, but the requests still get processed.
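The throttling mechanism can be sketched as a small state machine: connections to recently contacted hosts pass immediately, while requests to unfamiliar hosts queue up and are released at a fixed slow rate. This is an illustration of the idea rather than HP's implementation; the working-set size, release rate and class names are assumptions for the example.

```python
from collections import deque

class VirusThrottle:
    """Sketch of connection throttling in the spirit of HP's approach."""

    def __init__(self, working_set_size=5, release_rate=1.0):
        self.working_set = deque(maxlen=working_set_size)  # recent hosts
        self.delay_queue = deque()        # pending new-host connections
        self.release_rate = release_rate  # new hosts admitted per second

    def request(self, host):
        """Return True if the connection may proceed immediately."""
        if host in self.working_set:
            return True               # familiar host: no delay
        self.delay_queue.append(host)
        return False                  # unfamiliar host: delayed

    def tick(self):
        """Called once per 1/release_rate seconds: release one queued
        connection and admit that host to the working set."""
        if self.delay_queue:
            host = self.delay_queue.popleft()
            self.working_set.append(host)
            return host
        return None

t = VirusThrottle()
# A worm hitting hundreds of new hosts at once backs up in the delay
# queue, while repeat connections to familiar hosts are unaffected.
for h in ["10.0.0.%d" % i for i in range(200)]:
    t.request(h)
print(len(t.delay_queue))  # 200 queued, released at roughly one per second
```

A legitimate user who contacts an unfamiliar host pays only a short, mostly unnoticeable delay; a worm generating hundreds of requests per second builds a long queue, which both slows its spread and serves as a clear signal that the machine is infected.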
Matthew Williamson, a researcher at HP Labs Bristol, says it's too early to say whether virus throttling will appear in HP products. But, he says, such software is potentially a good way to protect a company from the spread of viruses across an internal network, if it's installed on all the company's computers.
"The nature of current and future threats to IT systems urgently requires that we develop automated and adaptive defensive tools," says Robert Ghanea-Hercock, a principal research scientist at BTexact Technologies, a unit of British Telecommunications PLC in London. "The shift to more open and interorganizational networked systems is undermining the traditional firewall paradigm. Combined with pervasive and wireless networks, almost all users and services will require automated security capabilities in some form."