Future Watch: A.I. comes of age

After decades of limited application, artificial intelligence is everywhere. And it really works this time.


When the "toy car" query is submitted, in a fraction of a second Google looks up which advertisers are interested in those keywords, then looks at their bids and decides whose ads to display and where to put them on the page. "The problem I'm especially interested in," Wellman says, "is how should an advertiser decide which keywords to bid on, how much to bid and how to learn over time -- based on how effective their ads are -- how much competition there is for each keyword."

The best of these models also incorporate mechanisms for predicting prices in the face of uncertainty, he says. Clearly, none of the parties can hope to optimize the financial result from each transaction, but they can improve their returns over time by applying machine learning to real-time pricing and bidding.
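One generic way to "learn over time" is to treat each candidate bid level as an option in a bandit problem and gradually favor whichever level has paid off best. The epsilon-greedy sketch below illustrates only that idea; the bid levels, click probabilities and profit figures are made up, and this is not Wellman's model.

```python
import random

# Epsilon-greedy learning bidder: a generic sketch of learning how much to
# bid from observed results (not Wellman's actual model). Each candidate bid
# level is occasionally tried at random; otherwise the bidder reuses the
# level with the best average profit so far.

BID_LEVELS = [0.50, 0.75, 1.00, 1.25]    # candidate bids, dollars per click
EPSILON = 0.1                            # fraction of the time to explore

totals = {b: 0.0 for b in BID_LEVELS}    # cumulative profit per bid level
counts = {b: 0 for b in BID_LEVELS}      # times each level has been tried

def observed_profit(bid):
    """Stand-in for the marketplace: higher bids win more clicks but cost more."""
    win_probability = min(1.0, bid / 1.25)
    value_per_click = 1.50               # assumed revenue when the ad is clicked
    return (value_per_click - bid) if random.random() < win_probability else 0.0

def average_profit(bid):
    return totals[bid] / counts[bid] if counts[bid] else 0.0

def choose_bid():
    if random.random() < EPSILON:
        return random.choice(BID_LEVELS)           # explore a random bid
    return max(BID_LEVELS, key=average_profit)     # exploit the best so far

for _ in range(10_000):
    bid = choose_bid()
    counts[bid] += 1
    totals[bid] += observed_profit(bid)

best = max(BID_LEVELS, key=average_profit)
print(f"learned bid: ${best:.2f} per click")
```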

Brainy Studies

One might expect AI research to start with studies of how the human brain works. But most AI advances have come from computer science, not biology or cognitive science.

These fields have sometimes shared ideas, but their collaboration has been at best a "loose coupling," says Tom Mitchell, a computer scientist and head of the Machine Learning Department at Carnegie Mellon University. "Most of the progress in AI has come from good engineering ideas, not because we see how the brain does it and then mimic that."

Tom Mitchell, Carnegie Mellon

But now that's changing, he says. "Suddenly, we have ways of observing what the brain is really doing, via brain imaging methods like functional MRI. It's a way to look into the brain while you are thinking and see, once a second, a movie of your brain's activity with a resolution of 1mm."

So, cognitive science and computer science are now poised to share ideas as they never could before, he says. For example, certain AI algorithms send a robot a little reward signal when it does the right thing and a penalty signal when it makes a mistake. Over time, these have a cumulative effect, and the robot learns and improves.

Mitchell says researchers have found with functional MRIs that regions of the brain behave exactly as predicted by these "reinforcement learning" algorithms. "AI is actually helping us develop models for understanding what might be happening in our brains," he says.
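The core of these reinforcement-learning algorithms is a single "prediction error" signal: the difference between the reward the learner expected and the reward it actually received. The textbook-style sketch below shows that update; the reward values and learning rate are arbitrary, and this is not the researchers' code.

```python
# Textbook-style reinforcement-learning update (a reward prediction error),
# the kind of signal the brain-imaging studies look for. Generic illustration,
# not the researchers' code.

ALPHA = 0.1            # learning rate: how far each surprise moves the estimate
value_estimate = 0.0   # the learner's current prediction of reward

def update(actual_reward):
    """Nudge the prediction toward what actually happened; return the surprise."""
    global value_estimate
    prediction_error = actual_reward - value_estimate
    value_estimate += ALPHA * prediction_error
    return prediction_error

# The robot gets +1 when it does the right thing, -1 when it makes a mistake.
outcomes = [1, 1, -1, 1, 1, 1, -1, 1]
for reward in outcomes:
    error = update(reward)
    print(f"reward={reward:+d}  prediction error={error:+.2f}  estimate={value_estimate:.2f}")
```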

Mitchell and his colleagues have been examining the neural activity revealed by brain imaging to decipher how the brain represents knowledge. To train their computer model, they presented human subjects with a list of 60 nouns -- such as telephone, house, tomato and arm -- and observed the brain images that each produced. Then, using a trillion-word text database from Google, they determined the verbs that tend to appear with the 60 base words -- ring with telephone, for example -- and weighted each verb by how often it occurred together with the noun.

The resulting model was able to accurately predict the brain image that would result from a word for which no image had ever before been observed. Oversimplifying, the model would, for example, predict that the noun airplane would produce a brain image more like that for train than for tomato.
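In outline, the model turns each noun into a profile of how often it co-occurs with a fixed set of verbs, learns how each verb feature relates to brain activity, and predicts the image for a new noun from its verb profile alone. The sketch below reproduces only that outline; the tiny word counts and two-"voxel" images are invented stand-ins for the trillion-word corpus and real fMRI data.

```python
import numpy as np

# Sketch of the prediction scheme described above: each noun is represented
# by how often it co-occurs with a small set of verbs, a linear model maps
# those features to brain activity, and an unseen noun's image is predicted
# from its verb profile. All counts and "voxel" values here are invented.

VERBS = ["ring", "eat", "ride"]

# Hypothetical co-occurrence counts (noun vs. verb) drawn from a text corpus.
cooccurrence = {
    "telephone": [90, 1, 2],
    "tomato":    [1, 80, 1],
    "train":     [2, 3, 70],
}

def features(noun):
    """Normalize counts so each noun becomes a weighted verb profile."""
    counts = np.array(cooccurrence[noun], dtype=float)
    return counts / counts.sum()

# Hypothetical observed brain images (activity at two voxels) for training nouns.
images = {
    "telephone": np.array([0.9, 0.1]),
    "tomato":    np.array([0.1, 0.8]),
    "train":     np.array([0.7, 0.3]),
}

# Fit a linear map from verb features to voxel activity (least squares).
X = np.stack([features(noun) for noun in images])   # nouns x verbs
Y = np.stack([images[noun] for noun in images])     # nouns x voxels
W, *_ = np.linalg.lstsq(X, Y, rcond=None)           # verbs x voxels

# Predict the image for a noun never seen during training.
cooccurrence["airplane"] = [3, 2, 60]                # airplane mostly "rides"
predicted = features("airplane") @ W
print("predicted image for 'airplane':", np.round(predicted, 2))
```

Because the invented profile for airplane resembles the one for train, the predicted image comes out closer to train's than to tomato's, mirroring the behavior described above.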

"We were interested in how the brain represents ideas," Mitchell says, "and this experiment could shed light on a question AI has had a lot of trouble with: What is a good, general-purpose representation of knowledge?" There may be other lessons as well. Noting that the brain is also capable of forgetting, he asks, "Is that a feature or a bug?"

Andrew Ng, an assistant professor of computer science at Stanford University, led the development of the multitalented Stair. He says the robot is evidence that many previously separate fields within AI are now mature enough to be integrated "to fulfill the grand AI dream."

And just what is that dream? "Early on, there were famous predictions that within a relatively short time computers would be as intelligent as people," he says. "We still hope that some time in the future computers will be as intelligent as we are, but it's not a problem we'll solve in 10 years. It may take over 100 years."

This version of the story originally appeared in Computerworld's print edition.


Copyright © 2009 IDG Communications, Inc.
