For many people, artificial intelligence evokes HAL, the menacing computer from 2001: A Space Odyssey, a machine so intelligent that it could function independently of humans.
Such inflated notions of converging humans and machines, spawned by science fiction writers, tarnished AI's image in the 1980s, when the field was widely perceived as failing to live up to its potential.
Still, the field has quietly produced advanced applications such as Google Inc.'s search engine, systems that trade stocks and commodities without human intervention, and software that detects credit card fraud.
There's no precise definition of AI, but broadly, it's a field that attempts to provide machines with humanlike reasoning and language-processing capabilities.
Researchers now are emerging from what has been called an "AI winter" with renewed interest in the biology of the brain and research honed to practical applications in medicine, customer service, manufacturing, education and other areas.
Jeff Hawkins, founder of Palm Computing and chief technology officer at PalmOne Inc. in Milpitas, Calif., created a buzz in the AI world with his book On Intelligence: How a New Understanding of the Brain Will Lead to Truly Intelligent Machines (Times Books, 2004), which asserts that AI research should focus on the parts of the brain associated with intelligence.
"In the past, people thought of the brain as a computer, where I have some input, I write a program to process that, and then I spit it out, and the success is getting the correct output," Hawkins says. "In all these cases, AI kept failing, because brains are not computers; they are memory systems that make a model of the world."
Jeff Hawkins says AI applications should work like the brain, not like a computer.
Hawkins, who founded the Redwood Neuroscience Institute in Redwood, Calif., predicts that once researchers can infuse systems with language, memory and other skills housed in the neocortex, applications will emerge for areas such as drug discovery, robotics, computer vision and remote sensing—tasks that today are hard to automate by conventional techniques.
Fair Isaac Corp. in Minneapolis is automating business decision-making tasks such as approving bank loans and detecting credit card fraud. Robert Hecht-Nielsen, vice president of research and development at Fair Isaac, is building a cognitive system that can understand language and adapt through trial and error—similar to how a child learns to hit a baseball.
The system uses a cognition algorithm modeled on the brain's cerebral cortex. Hecht-Nielsen's work is based on "confabulation," a mathematical theory holding that every act of cognition draws a conclusion from a set of assumed facts by applying available knowledge. For example, if a small animal waddles like a duck, quacks like a duck and flies like a duck, one can conclude that it's a duck.
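The duck example can be sketched in miniature: pick the conclusion that the assumed facts jointly support best. This is only an illustrative toy, not Hecht-Nielsen's actual algorithm; the knowledge table and link strengths below are invented for the example.

```python
# Toy confabulation-style inference: choose the conclusion with the
# strongest combined support from the assumed facts.
# knowledge[fact][conclusion] = invented strength of "fact supports conclusion"
knowledge = {
    "waddles": {"duck": 0.8, "goose": 0.6, "cat": 0.01},
    "quacks":  {"duck": 0.9, "goose": 0.1, "cat": 0.01},
    "flies":   {"duck": 0.7, "goose": 0.8, "cat": 0.01},
}

def confabulate(facts):
    """Return the conclusion with the highest combined support."""
    candidates = {c for f in facts for c in knowledge[f]}
    def support(conclusion):
        score = 1.0
        for fact in facts:
            # Multiply link strengths; unknown links get a tiny weight.
            score *= knowledge[fact].get(conclusion, 1e-6)
        return score
    return max(candidates, key=support)

print(confabulate(["waddles", "quacks", "flies"]))  # -> duck
```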
Hecht-Nielsen's confabulation architecture is based on a new way of looking at human cognition: Everything in the mind is represented by lists of symbols that can be used to describe the attributes of an object.
After "reading" 8,000 encyclopedias and novels and accumulating billions of links between words for context, the confabulation architecture—which has no software rules—can, given the first part of a sentence, generate an ending that makes some sense. Within 10 years, Hecht-Nielsen envisions that the system will work with decision-making software to boost customer service by conversing with customers to understand their needs.
The Intelligent Distribution Agent (IDA), developed for the U.S. Navy by the Institute for Intelligent Systems at the University of Memphis, helps assign sailors new jobs at the end of their tours of duty by negotiating with them via e-mail. According to Stan Franklin, co-director of the institute, IDA has a cognitive cycle that perceives the language in an e-mail as a set of symbols, makes sense of those symbols and then chooses a response.
When an e-mail arrives, IDA pulls out relevant information like name, rank and statements of job preferences. It chooses the most relevant information based on episodic memory, or associations made from past interactions. The most relevant information is broadcast out to behavior "codelets"—executable software—that can perform tasks such as looking up something in a database or composing a message back to a sailor. Then a selection mechanism chooses a response to the e-mail.
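The cycle just described—perceive, recall, broadcast to codelets, respond—can be sketched as follows. Everything here (the regular expressions, the memory table, the codelet names) is invented for illustration and is far simpler than IDA's actual architecture.

```python
import re

# Stand-in for episodic memory: associations from past interactions.
EPISODIC_MEMORY = {"yeoman": "prior sailors of this rank chose shore duty"}

def perceive(email):
    """Pull out name, rank and stated job preference from free text."""
    name = re.search(r"I am (\w+)", email)
    rank = re.search(r"rank (\w+)", email)
    pref = re.search(r"prefer (\w[\w ]*)", email)
    return {
        "name": name.group(1) if name else None,
        "rank": rank.group(1) if rank else None,
        "preference": pref.group(1) if pref else None,
    }

def lookup_codelet(percept):
    """Codelet: consult episodic memory for associations with the rank."""
    return EPISODIC_MEMORY.get(percept["rank"], "no prior record")

def compose_codelet(percept, recalled):
    """Codelet: draft a reply from the percept plus recalled context."""
    return (f"Dear {percept['name']}: noted your preference for "
            f"{percept['preference']} ({recalled}).")

def cognitive_cycle(email):
    percept = perceive(email)                  # perceive symbols
    recalled = lookup_codelet(percept)         # recall from episodic memory
    return compose_codelet(percept, recalled)  # compose and select a response

print(cognitive_cycle("I am Jones, rank yeoman, and I prefer San Diego"))
```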
In the next five to 10 years, Franklin says, IDA could perform jobs such as negotiating with humans in unstructured English and making decisions by looking up data like company policies or client preferences in a database.
Franklin claims that he has been successful with each of his efforts to copy a human trait in a machine. "I don't have any feeling that there is some human capability that we won't be able to emulate," he says.
Hecht-Nielsen says AI won't end up producing the superhuman cyborg of Hollywood scripts but will spawn practical applications made from pieces of human intelligence, such as cognition and rehearsal learning, or learning by repetitive practice.
In any case, we probably wouldn't want to make machines that are too much like humans, he says, or we might end up with systems that are influenced by personal biases, just like many people are.
Instead, AI systems will handle tasks that humans aren't particularly good at today, like dependably answering tedious customer questions with an endless supply of patience.
"AI will mean ennoblement for the customer," says Hecht-Nielsen. "Someone will answer calls in a call center and spend as much time as the customer needs, and they will be polite and fun. It just won't be a person."