Getting Real: Analyzing Dynamics That Can Choke Supercomputers

Researchers find ways to tame the complexity in real-world reasoning

By Gary Anthes
December 5, 2005 12:00 PM ET

Computerworld - It is surely one of the more mind-blowing PowerPoint slides ever created. It's a graph, and one of the smallest numbers, near the bottom of the vertical axis, is 10^17, the number of seconds from now until the sun burns up. Then comes 10^47, the number of atoms on Earth. After that, the numbers get really big, topping the scale at 10^301,020.


This graph, from the Defense Advanced Research Projects Agency, shows the exponential growth in possible outcomes for a range of activities, from a simple car engine diagnosis with 100 variables to war gaming with 1 million variables (that's what the 10^301,020 represents).
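Where do numbers like these come from? If a model has n independent yes/no variables, it has 2^n possible states. A minimal sketch of the arithmetic, assuming binary variables (the DARPA slide doesn't spell out how its variables are encoded):

```python
from math import log10

# 2**n written as a power of ten: 2**n == 10**(n * log10(2)).
def exponent_of_ten(n_variables: int) -> float:
    return n_variables * log10(2)

# The slide's car-engine diagnosis example, 100 variables,
# already yields roughly 10^30 possible states.
print(f"100 variables:       10^{exponent_of_ten(100):.0f} states")

# The war-gaming example, 1 million variables, yields about
# 10^301,030 states -- the same order as the slide's 10^301,020.
print(f"1,000,000 variables: 10^{exponent_of_ten(1_000_000):,.0f} states")
```

Run as written, this prints 10^30 for 100 variables and 10^301,030 for a million, the same order of magnitude as the slide's figure.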


The point DARPA is trying to make in explaining its Real-World Reasoning Project is that computers will never be able to exhaustively examine the possible outcomes of complex activities, any more than a roomful of monkeys with typewriters would ever be able to re-create the works of Shakespeare.


But in the recently completed Phase I of the Real Project, as it's called, the agency did discover shortcuts that can tame the punishing combinatorial complexity that for decades has stymied efforts to model the real world.


Beyond Brute Force
Bart Selman, a computer science professor at Cornell University and one of three DARPA contractors on the project, points out that for a decade there have been automated reasoning tools that can discover defects in chip or software designs. These tools can "prove" the correctness of a specification without exhaustively testing every situation the chip or software might encounter.


But those tools can do only what's called single-agent reasoning. Selman is extending the concepts to a much harder class of problem -- multiagent scenarios in which there are one or more opposing forces -- and he's developed chess-playing software to test his ideas. The best chess programs today, such as IBM's Deep Blue, excel by brute-force trials of moves, analyzing millions of board positions per second. "Deep Blue explores hundreds of millions of strategies, but most of them are very dumb," Selman says. "Grandmasters only explore three or four possible lines of play."
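The standard way game-tree search trims lines it need not examine is alpha-beta pruning, which discards any branch that provably cannot change the final choice. A toy sketch of the idea (illustrative only; this is neither the Cornell program nor Deep Blue's search):

```python
# Alpha-beta minimax on an abstract game tree: branches that cannot
# affect the decision are cut, so far fewer positions are examined
# than a full brute-force enumeration would visit.
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:
                break      # opponent will never allow this line: prune
        return best
    best = float("inf")
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, children, evaluate))
        beta = min(beta, best)
        if alpha >= beta:
            break          # we would never choose this line: prune
    return best

# Toy tree: inner nodes branch to two children; leaves carry payoffs.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaves = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
value = alphabeta("root", 2, float("-inf"), float("inf"), True,
                  lambda s: tree.get(s, []), lambda s: leaves.get(s, 0))
print(value)  # 3: leaf b2 is never examined once b1 makes branch b worse than a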


The Cornell chess program works more like a grandmaster, he says. "It might exploit certain strategies, then find they are not successful. It learns from that and adds that to its knowledge base. It gets better the more games it plays, even during a single game," Selman explains. It develops a conceptual view of the board and seeks out overall positions that will give it strength.
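Selman doesn't detail the mechanism, but one simple way to picture a search that learns within a single game is a knowledge base of lines already proved unsuccessful, which the program skips rather than re-explores. A hypothetical sketch; the function names, the succeeds test and the sample moves are invented for illustration:

```python
# Sketch of "learning from failures" during search (an illustrative
# mechanism, not Cornell's actual algorithm): lines of play found
# unsuccessful are recorded and never explored a second time.
failed_lines = set()

def explore(line, succeeds):
    if line in failed_lines:
        return False                  # known failure: skip instantly
    if succeeds(line):
        return True
    failed_lines.add(line)            # learn: remember this dead end
    return False

# First probe of a bad line does the search work; later probes are free.
bad = ("e4", "e5", "Qh5")             # arbitrary sample moves
print(explore(bad, lambda l: False))  # False (searched, then recorded)
print(explore(bad, lambda l: False))  # False (knowledge-base hit)
print(len(failed_lines))              # 1
```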


By applying these learning techniques and other improvements, Selman's team has so far achieved a 10^9 (billionfold) speed improvement over traditional reasoning tools, he says.


