Games Computers Play

By Gary Anthes
February 25, 2002 12:00 PM ET

Computerworld - Artificial intelligence (AI) is a discipline that soared on the wings of optimism in the 1960s and 1970s, only to fall into disillusionment and even disrepute in the ensuing years. But in that time, AI has triumphed in a realm few people think about or take seriously: computer game-playing.


The biggest victory for game-playing computers was in 1997, when IBM's Deep Blue defeated world chess champion Garry Kasparov in a six-game match. The supercomputer, consisting of 512 specially designed chips, could consider 200 million moves per second, vs. about two moves per second for Kasparov's wetware.


Deep Blue's tour de force was the culmination of an eight-year, multimillion-dollar research project at IBM that led directly to advances in chip design, parallel-processing techniques and algorithms. That research continues as part of IBM's $100 million Blue Gene project, which during the next decade will build a machine operating at 1 quadrillion floating-point operations per second (1 petaFLOPS) to attack problems such as protein folding, molecular dynamics and drug design.


Writing software and building computers to play board games has taught computer scientists a great deal, and it has taught the artificial intelligentsia much about AI. Now research is heading in new directions, where experts say new techniques are likely to find applications elsewhere.


Jonathan Schaeffer, a computing science professor at the University of Alberta in Edmonton, uses games to aid his AI research. He developed parallel-processing algorithms to search a database of 1 trillion checkers positions, and those same algorithms found their way into commercial products for gene sequencing at a company he co-founded, BioTools Inc. in Edmonton.


Schaeffer says researchers once believed that the way to make computers play chess was to build into them the same expert rules and insights that the best players use.


"They tried to simulate the human brain, but they quickly discovered that, boy, that's really tough," he says. "The innovation was, 'If we are not smart enough to tell the computer what chess positions to look at, let's just look at them all.' " This "brute-force search," previously disdained by AI workers, proved to be the silver bullet. Today, the technique populates commercial optimization programs, Schaeffer says.
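The brute-force idea Schaeffer describes can be sketched in a few lines. This is a hedged illustration, not any program mentioned in the article: a minimax search over a toy "take 1 or 2 objects from a pile; whoever takes the last object wins" game, which simply examines every reachable position rather than encoding expert rules.

```python
def minimax(pile, maximizing):
    """Return +1 if the player to move can force a win, -1 otherwise.

    Brute-force search: recursively try every legal move (take 1 or 2)
    from every position, with no game-specific knowledge at all.
    """
    if pile == 0:
        # The previous player took the last object and won.
        return -1 if maximizing else 1
    moves = [take for take in (1, 2) if take <= pile]
    if maximizing:
        return max(minimax(pile - take, False) for take in moves)
    return min(minimax(pile - take, True) for take in moves)

# Exhaustive search reveals the game's structure: piles that are
# multiples of 3 are losses for the player to move.
print([minimax(n, True) for n in range(1, 7)])  # → [1, 1, -1, 1, 1, -1]
```

Chess engines like Deep Blue apply the same principle at vastly greater scale, pruning and evaluating millions of positions per second instead of searching the game to its end.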


Research is now moving from games where raw searching is the answer, as it proved to be in chess and checkers, to those where that approach doesn't work well. In card games, for instance, there are too many combinations to consider, and players don't know what cards their opponents hold. A poker-playing program at the University of Alberta, for example, uses Monte Carlo simulation to assess the probability of various outcomes and neural networks to analyze opponents' betting and bluffing history.
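The Monte Carlo idea behind such programs can be shown with a deliberately simplified game. This sketch is an assumption-laden stand-in for the Alberta program, not its actual method: in a one-card "high card wins" game, we estimate our chance of winning by repeatedly dealing the opponent a random card from the unseen portion of the deck and counting how often we come out ahead.

```python
import random

def win_probability(my_card, deck, trials=10000):
    """Estimate P(win) by sampling the opponent's hidden card.

    Monte Carlo simulation: since the opponent's card is unknown,
    deal it at random many times and average the results.
    """
    unseen = [c for c in deck if c != my_card]
    wins = sum(1 for _ in range(trials) if my_card > random.choice(unseen))
    return wins / trials

deck = list(range(2, 15))  # card ranks 2..14 (ace high)
print(win_probability(14, deck))  # holding the ace beats every deal → 1.0
```

Real poker programs sample full hidden hands and future community cards the same way; the estimate sharpens as more trials are run, trading computation for accuracy in exactly the way brute-force search does.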