Good Morning, Dave . . .

The Defense Department is working on a self-aware computer.

Any sci-fi buff knows that when computers become self-aware, they ultimately destroy their creators. From 2001: A Space Odyssey to Terminator, the message is clear: The only good self-aware machine is an unplugged one.

We may soon find out whether that's true. The Defense Advanced Research Projects Agency (DARPA) is accepting research proposals to create the first system that actually knows what it's doing.

The "cognitive system" DARPA envisions would reason in a variety of ways, learn from experience and adapt to surprises. It would be aware of its behavior and explain itself. It would be able to anticipate different scenarios and predict and plan for novel futures.

"It's all moving toward this grand vision of not putting people in harm's way," says Raymond Kurzweil, an artificial intelligence guru and CEO of Kurzweil Technologies Inc. in Wellesley Hills, Mass. "If you want autonomous weapons, it's helpful for them to be intelligent."

Cognitive systems will require a revolutionary break from the current course of computer evolution, which has added complexity and brittleness along with power.

"We want to think fundamental, not incremental improvements: How can we make a quantum leap ahead?" says Ronald J. Brachman, director of DARPA's Information Processing Technology Office in Arlington, Va. Brachman will manage the agency's cognitive system initiative.

The goal is to create systems that take better care of themselves, and some manufacturers have already made small advances, Brachman points out. Software that tests itself automatically is a step in the right direction. So is software that walls itself off to avoid taking down the larger system in case it crashes.
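The "walling off" Brachman describes is essentially fault isolation. A minimal sketch in Python (an illustration of the general idea, not any specific DARPA or vendor product) runs a risky component in its own process, so that if it crashes, the larger system keeps running:

```python
import multiprocessing

def risky_task(q):
    # Stand-in for a component that might crash; this one divides by zero.
    q.put(1 / 0)

def ok_task(q):
    # Stand-in for a component that behaves.
    q.put(42)

def run_isolated(task, timeout=5):
    """Run `task` in a separate process; a crash there can't take down the caller."""
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=task, args=(q,))
    p.start()
    p.join(timeout)
    if p.exitcode != 0 or q.empty():
        return None  # the walled-off component failed; the larger system continues
    return q.get()

if __name__ == "__main__":
    print(run_isolated(risky_task))  # crash is contained in the child process
    print(run_isolated(ok_task))     # a healthy component returns its result
```

The parent checks the child's exit code rather than trusting it to report its own failure, which is the point of the isolation boundary.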

Add advances in speech recognition and machine learning, and there may be enough "bits and pieces" to achieve the critical mass necessary for a real breakthrough, Brachman says.

"You get enough really smart people working on a really hard problem, and you get outcomes you didn't really expect," he adds. "We're hoping for a little serendipity."

They'll need it. The problems to be addressed are nearly as imposing as the dream. For example:

• How can a cognitive system learn from experience and use what it has learned to cope with new situations?

• How can it prioritize "standing orders," given complex and conflicting goals?

• How can it recognize important low-frequency events among the huge amounts of data in its "experience"?

• How can it use context to decipher complex actions, events and language?

Despite the challenges, Brachman is undaunted. "DARPA is about looking out of the box, the big reach," he says. "If we succeed, we can change the world in very dramatic ways."

Kurzweil agrees. "DARPA research tends to be visionary, and [although it] provides building blocks for future weapons systems, there's also applicability throughout society," he says. For example, DARPA's research and development on advanced communications led to the Internet. Its pattern-recognition advances led to technology that helps guide cruise missiles, reads electrocardiograms and detects computer fraud. The machine vision advances DARPA has funded have obvious value for satellites and aircraft as well as factory robots.

Brachman says cognitive systems could assist or replace soldiers on hazardous duty or civilians responding to toxic spills or disasters. It's not possible to preprogram a response to an emergency, but a cognitive system could size up many complex variables and chart its own course. A system that could imagine multiple scenarios could outsmart terrorists - or your business competitors - by envisioning actions they might take and assessing each for plausibility and impact. People can be blinded by prior experience and biases, Brachman notes, but a computer with no preconceptions could show humans how to think differently.

Moreover, self-explaining, self-debugging systems would require virtually no training and little maintenance. They would learn, not crash, when faced with a new situation.

But what about HAL 9000 and the other fictional computers that have run amok? "In any kind of technology there are risks," Brachman acknowledges. That's why DARPA is reaching out to neurologists, psychologists - even philosophers - as well as computer scientists. "We're not stumbling down some blind alley," he says. "We're very cognizant of these issues."

The solicitation is open to anyone, and DARPA won't speculate about who might step forward, for fear of limiting responses.

The project will have a three- to five-year life - long enough, Brachman hopes, to prove the value and plausibility of the concept. "We don't expect a full-fledged artificial assistant in four years," he says, "but that should be enough time to start getting some concrete indications that some of these dreams are possible."

Melymuka is a Computerworld contributing writer.
