
Self-Taught: Software That Learns By Doing

Machine-learning techniques to create self-improving software are hitting the mainstream.

By Gary Anthes
February 6, 2006 12:00 PM ET

Computerworld - Attempts to create self-improving software date to the 1960s. But "machine learning," as it's often called, has remained mostly the province of academic researchers, with only a few niche applications in the commercial world, such as speech recognition and credit card fraud detection. Now, researchers say, better algorithms, more powerful computers and a few clever tricks will move it further into the mainstream.

Stanford professor Sebastian Thrun with "Stanley," the car that used machine-learning techniques to drive itself 132 miles across the desert.
And as the technology grows, so does the need for it. "In the past, someone would look at a problem, write some code, test it, improve it by hand, test it again and so on," says Sebastian Thrun, a computer science professor at Stanford University and the director of the Stanford Artificial Intelligence Laboratory. "The problem is, software is becoming larger and larger and less and less manageable. So there's a trend to make software that can adapt itself. This is a really big item for the future."
Thrun used several new machine-learning techniques in the software that drove an autonomous car 132 miles across the desert to win a $2 million prize for Stanford in a recent contest put on by the Defense Advanced Research Projects Agency. The car learned road-surface characteristics as it went. Machine-learning techniques gave his team a productivity boost as well, Thrun says: "I could develop code in a day that would have taken me half a month to develop by hand."
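One way to picture that kind of on-the-fly learning: laser sensors verify that the ground immediately ahead of the car is drivable, and those verified image patches become training labels for a vision model that classifies terrain far beyond laser range. The Python sketch below only illustrates the idea; the Gaussian color model, update rate and distance threshold are simplifying assumptions, not Stanley's actual code.

    # Illustrative self-supervised terrain learning: laser scans confirm
    # the ground just ahead is drivable, and those patches continuously
    # retrain a color model that classifies the road farther out.
    import numpy as np

    class RoadColorModel:
        """Running Gaussian model of pixel colors known to be road."""

        def __init__(self):
            self.mean = None
            self.var = None

        def update(self, road_pixels, alpha=0.05):
            # road_pixels: (N, 3) RGB values the laser verified as drivable.
            m = road_pixels.mean(axis=0)
            v = road_pixels.var(axis=0) + 1e-6
            if self.mean is None:
                self.mean, self.var = m, v
            else:
                # Exponential moving average: the model adapts as the
                # surface changes from pavement to gravel to sand.
                self.mean = (1 - alpha) * self.mean + alpha * m
                self.var = (1 - alpha) * self.var + alpha * v

        def looks_like_road(self, pixels, threshold=3.0):
            # Per-channel normalized distance from the learned road color.
            d2 = ((pixels - self.mean) ** 2 / self.var).sum(axis=1)
            return d2 < threshold ** 2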
Computer scientist Tom Mitchell, director of the Center for Automated Learning and Discovery at Carnegie Mellon University, says machine learning is useful for the kinds of tasks that humans do easily -- speech and image recognition, for example -- but have trouble spelling out explicitly as software rules. In machine-learning applications, software is "trained" on test cases devised and labeled by humans, scored so that it knows what it got right and wrong, and then sent out to solve real-world cases.
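In outline, that pipeline is just train, score, deploy. Here is a minimal sketch using scikit-learn (a modern library, used purely for illustration) on a toy spam-filtering task; the example texts, labels and model choice are illustrative assumptions, not anything from Mitchell's work.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.metrics import accuracy_score

    # Test cases devised and labeled by humans.
    texts = ["win cash now", "meeting at noon", "free prize inside", "lunch tomorrow?"]
    labels = ["spam", "ham", "spam", "ham"]

    vectorizer = CountVectorizer()
    model = MultinomialNB()
    model.fit(vectorizer.fit_transform(texts), labels)       # training

    # Scoring: the software learns what it got right and wrong.
    held_out = ["claim your free cash", "see you at lunch"]
    held_out_labels = ["spam", "ham"]
    predictions = model.predict(vectorizer.transform(held_out))
    print(accuracy_score(held_out_labels, predictions))      # e.g. 1.0

    # Deployment: solving an unlabeled real-world case.
    print(model.predict(vectorizer.transform(["free cash prize"])))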
Mitchell is testing the concept of having two classes of learning algorithms in essence train each other, so that together they can do better than either would alone. For example, one algorithm classifies a Web page by considering the words on the page itself; a second considers the words in the hyperlinks that point to the page. The two share clues about each page and express their confidence in their assessments.
Mitchell's experiments have shown that such "co-training" can reduce errors by more than a factor of two.
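A minimal sketch of that loop, in the spirit of Mitchell's co-training: each view takes a turn training a classifier on the pooled labeled examples and promotes only its confident guesses to labels the other view can learn from. The naive Bayes classifiers, round count and confidence threshold here are illustrative assumptions.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    def co_train(page_texts, link_texts, labels, n_labeled, rounds=5, conf=0.9):
        """labels[:n_labeled] are human-provided; pass None for the rest."""
        Xa = CountVectorizer().fit_transform(page_texts)   # view A: page words
        Xb = CountVectorizer().fit_transform(link_texts)   # view B: link words
        y = np.array(labels, dtype=object)
        known = np.zeros(len(y), dtype=bool)
        known[:n_labeled] = True

        for _ in range(rounds):
            for X in (Xa, Xb):             # each view takes a turn teaching
                if known.all():
                    return y
                clf = MultinomialNB().fit(X[known], y[known])
                idx = np.flatnonzero(~known)
                proba = clf.predict_proba(X[idx])
                sure = proba.max(axis=1) >= conf   # share only confident guesses
                y[idx[sure]] = clf.classes_[proba.argmax(axis=1)][sure]
                known[idx[sure]] = True
        return y

Co-training works best when the two views offer largely independent evidence about the label, so each classifier's confident guesses carry information the other does not already have.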

