Google's AlphaGo scores 4-1 against South Korean Go player Lee Se-dol

AlphaGo won the final seesaw game even after it made a bad mistake


Lee Se-dol resigns the fifth Go game against AlphaGo on Tuesday, March 15, 2016.

Credit: Google/IDGNS

Google DeepMind’s AlphaGo artificial-intelligence program won the last round in a five-game contest against top Go player Lee Se-dol, despite making a bad mistake during play.

The 4-1 margin for AlphaGo in the games played in Seoul, South Korea, was not as large as the program's 5-0 win against a European Go player in October, but it carries more impact because of Lee's standing in the game.

For most of the game, commentators were not sure AlphaGo would win. Google DeepMind CEO Demis Hassabis said in a tweet, for example, that AlphaGo made a bad mistake early in the game but was trying "hard to claw it back."

The AlphaGo program has been described as the next frontier in AI because of its ability to learn from experience, which, some experts noted, includes making far-from-human moves that nevertheless prove successful.

The wins by AlphaGo are the most momentous milestone in the field of AI since IBM's Deep Blue defeated Garry Kasparov at chess in 1997, said Howard Yu, professor of strategic management and innovation at IMD business school, commenting on the program's first three consecutive wins.

Go has been described as a strategy game more complex than even chess. Players take turns placing black or white pieces, called "stones," on a 19-by-19 line grid; the aim is to capture the opponent's stones by surrounding them and to encircle more empty space as territory.
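The capture rule described above can be sketched in a few lines of Python. This is purely an illustrative sketch, not AlphaGo's code: a connected group of stones is captured when it has no "liberties," that is, no empty points adjacent to the group. The board representation and function name here are the author's own illustration.

```python
# Illustrative sketch (not AlphaGo code) of Go's capture rule: a group of
# connected same-colored stones is captured when it has zero liberties,
# i.e. no empty points adjacent to any stone in the group.

def group_and_liberties(board, row, col):
    """Flood-fill from (row, col) to collect its group and the group's liberties.

    `board` maps (row, col) -> 'B' or 'W'; absent keys are empty points.
    Returns (group, liberties) as sets of coordinates on the 19x19 grid.
    """
    color = board[(row, col)]
    group, liberties, frontier = set(), set(), [(row, col)]
    while frontier:
        point = frontier.pop()
        if point in group:
            continue
        group.add(point)
        r, c = point
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= nr < 19 and 0 <= nc < 19):
                continue  # off the edge of the 19-by-19 board
            if (nr, nc) not in board:
                liberties.add((nr, nc))    # adjacent empty point: a liberty
            elif board[(nr, nc)] == color:
                frontier.append((nr, nc))  # same-colored neighbor joins the group
    return group, liberties

# A lone white stone surrounded on all four sides by black has no liberties,
# so it is captured and removed from the board.
board = {(3, 3): 'W', (2, 3): 'B', (4, 3): 'B', (3, 2): 'B', (3, 4): 'B'}
group, libs = group_and_liberties(board, 3, 3)
print(len(group), len(libs))  # 1 0
```

The game's depth comes not from these simple rules but from the astronomical number of positions they permit, which is why Go resisted brute-force search far longer than chess.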

AlphaGo's loss on Sunday to Lee, however, highlighted that artificial neural networks -- the hardware and software equivalent of the human central nervous system -- can act strangely because of hard-to-find “blind spots.” It is possible that a strong player can force AlphaGo into a situation that exposes its hidden blind spots, said David Silver, a key researcher on the AlphaGo project.

Much of the discussion ahead of the final game on Tuesday centered on a move made by Lee in the fourth game on Sunday, which appeared to degrade the AI program's subsequent performance. After taking a quick look at the logs, Hassabis said AlphaGo had assigned a probability of less than 1 in 10,000 to Lee's move, so it found the move very surprising.

“This meant that all the prior searching #AlphaGo had done was rendered useless, and for a while it misevaluated the highly complex position,” Hassabis said in a tweet on Tuesday. He added that the neural networks were trained through self-play, “so there will be gaps in their knowledge, which is why we are here: to test AlphaGo to the limit.”
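Why would one surprising move render prior searching useless? A rough, hedged intuition: in a search guided by a policy's prior probabilities, look-ahead effort concentrates on moves the policy rates as likely, so a move rated below 1 in 10,000 receives almost no simulations before it actually appears on the board. The move names, prior values, and simulation budget below are hypothetical illustrations, not figures from DeepMind.

```python
# Hedged illustration (not DeepMind's code): a search that allocates
# simulations roughly in proportion to policy priors barely explores a
# move whose prior is under 1 in 10,000, leaving the program with almost
# no analysis of the resulting position when that move is played.

priors = {
    "move_A": 0.55,    # hypothetical likely replies
    "move_B": 0.30,
    "move_C": 0.1499,
    "lee_78": 0.0001,  # a move rated under 1 in 10,000, like Lee's
}
simulations = 200_000  # hypothetical total search budget

allocation = {move: int(simulations * p) for move, p in priors.items()}
print(allocation["move_A"])  # 110000 simulations spent on the expected reply
print(allocation["lee_78"])  # 20 simulations spent on the surprise move
```

Under this toy allocation, nearly all of the search tree built before the move concerns branches that the surprise move prunes away, which matches Hassabis's description of the prior searching being "rendered useless."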

The highly publicized contest has established Google DeepMind's credentials at the frontier of AI. Besides using the technology internally, Google is expected to offer it for a variety of uses, starting with healthcare and scientific applications.

The AI system is still a prototype, Hassabis said, so Google DeepMind will do a lot more testing and training of the platform, including, presumably, having a go at removing the hidden blind spots, before releasing the technology for mission-critical applications.
