Miller’s WordNet gives A.I. knowledge

Meaning with a universal encyclopedia


I often hear that we can’t create natural language understanding (NLU) because we don’t know how to represent meaning. Yet 30 years ago, Professor George A. Miller, the famous psychologist and cognitive scientist, began the WordNet project.

WordNet is a networked dictionary and thesaurus: you enter a word to find its definitions and related meanings. Unexpectedly, though, it is also a simple universal encyclopedia (UE): a system that, given a set of candidate meanings, can identify the right one.
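
To make this concrete, here is a minimal lookup sketch using NLTK’s Python interface to WordNet (my choice of tooling; any WordNet API would do). It lists the candidate senses of “pen” that a disambiguation system must choose between.

# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

# Each synset is one language-independent unit of meaning;
# the word "pen" maps to several of them.
for synset in wn.synsets('pen', pos=wn.NOUN):
    print(synset.name(), '-', synset.definition())
# pen.n.01 - a writing implement with a point from which ink flows
# pen.n.02 - an enclosure for confining livestock
# ... plus the playpen, penitentiary and female-swan senses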

Miller’s WordNet was a useful experiment in the tradition of the scientific method. For example, it allows us to test the hypothesis that a language can be separated from its meaning. It also hints at UE design, and as a result it points the way to speaking A.I.

Universal encyclopedia

A normal encyclopedia explains topics in detail; a universal one goes further, also providing common-sense knowledge.

In 1958, Dr. Yehoshua Bar-Hillel, the first academic to work full time in the field of machine translation at MIT, said a UE is needed to make "general-purpose fully automatic high-quality machine translation" feasible.

We learn "common sense" from experience, so our brains must build a UE without programmers. In the brain, the specific defines the general, which means the UE scales rapidly from examples. Replicating the brain's method, if we can do it, would give that knowledge to A.I.

Against this approach, Professor Noam Chomsky argued in the 1980s that the poverty of the stimulus means we cannot learn language without an innate language faculty. Since then, however, the Role and Reference Grammar (RRG) linguistic theory has been developed. RRG provides an algorithm that links language to meaning, and meaning back to language, in context.

In light of these advances, as we experience phrases in sentences, RRG's linking algorithm lets our brains connect them to the richness of the UE. A child's brain is bombarded with huge quantities of specific language examples and, at the same time, builds the corresponding connections in the UE.

Because language converts sound to meaning, language learning needs to connect words and phrases to their meanings, and the specific cases can then be generalized. The mass of connections formed between acquired language and the UE is language learning itself, a by-product of storing patterns.

Ambiguity

A typical word has more than one meaning, or word sense. Dictionaries list them and define each meaning in its different contexts. Choosing the correct meaning is known as word sense disambiguation. Human brains do this with apparent ease, but without a UE, how can a computer do it without intervention from a programmer? Statistics is one approach, but despite 30 years of trying, it has yet to provide the required accuracy.

Here is Bar-Hillel’s scenario to demonstrate the problem:

“Little John was looking for his toy box. Finally, he found it. The box was in the pen. John was very happy.”

The word “pen” has meanings including a “writing utensil” (like a ballpoint pen) and an “enclosure where small children can play” (a playpen). Bar-Hillel claimed: “No existing or imaginable program will enable an electronic computer to determine that the word pen in the given sentence within the given context has the second of the above meanings.”
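
Knowledge-lean methods still struggle with this example. As a sketch (assuming NLTK’s implementation of the classic Lesk gloss-overlap algorithm), we can ask a dictionary-based disambiguator to choose a sense of “pen” in Bar-Hillel’s sentences; because gloss overlap knows nothing about what can contain what, nothing pushes it toward the enclosure sense.

# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.wsd import lesk

context = ("Little John was looking for his toy box. Finally, he found it. "
           "The box was in the pen. John was very happy.")

# Lesk picks the sense whose dictionary gloss overlaps most with the context;
# it has no model of containment, so the playpen sense is not guaranteed.
sense = lesk(context.lower().split(), 'pen', 'n')
print(sense, '-', sense.definition() if sense else 'no sense found')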

I’d argue that today, nearly 60 years later, we can create that program: I have built a working prototype using an extension to WordNet.

There is sufficient context for people to disambiguate the meaning of “pen” in the sample scenario. If the system learns that writing utensils cannot contain a toy box but playpens can (only one of the two is a container of things), it can exclude the invalid combinations during phrase matching. Because language is about communication, people will always try to convey the right meaning.
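
Below is a hypothetical sketch of that exclusion step; the sense names and the container_of_things property are my own illustrative inventions, not the author’s prototype. Each candidate sense carries world knowledge from the UE, and senses that violate the phrase’s containment constraint are filtered out.

# Hypothetical UE entries: each sense of "pen" carries world knowledge.
SENSES = {
    'pen.writing_utensil': {'container_of_things': False},
    'pen.play_enclosure':  {'container_of_things': True},
}

def disambiguate(senses, must_contain_things):
    """Keep only the senses consistent with the phrase's constraints."""
    if must_contain_things:
        return [name for name, facts in senses.items()
                if facts['container_of_things']]
    return list(senses)

# "The box was in the pen" implies this pen must be able to contain a box.
print(disambiguate(SENSES, must_contain_things=True))
# ['pen.play_enclosure']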

I wrote previously that in A.I. a better way to store meaning is to keep only the specific cases and derive the general cases from them. This is illustrated if you tell a child that "the elephant flew out the window." They may laugh and say playfully, "No! Elephants don't fly!" even though they have never learned that as a specific fact.

Designing a UE

A trivial UE would link each meaning to every association, while a sophisticated one, like WordNet, would inherit meanings. By learning that "animals don't fly" and that "elephants are animals," the child's brain automatically knows that "elephants don't fly."
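
WordNet already encodes the “is-a” half of that inference. A minimal sketch with NLTK: walking the hypernym chain confirms that an elephant is an animal, so a property attached to the animal meaning could, in principle, be inherited by the elephant meaning.

from nltk.corpus import wordnet as wn

elephant = wn.synset('elephant.n.01')

# closure() follows the hypernym ("is-a") relation transitively.
ancestors = set(elephant.closure(lambda s: s.hypernyms()))
print(wn.synset('animal.n.01') in ancestors)   # True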

Obviously, as a brain learns more, the sophistication of the patterns increases. Bats and birds are also animals, so our brains must handle conflicts. Perhaps separating flying animals from other animals maintains consistency, as WordNet's structure suggests.
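
One way to model that conflict handling is default inheritance with exceptions, sketched below with a hypothetical taxonomy of my own devising: a property lookup walks up the “is-a” chain, and the most specific assertion wins.

# Hypothetical taxonomy: category -> (parent, locally asserted properties).
TAXONOMY = {
    'animal':   (None,     {'can_fly': False}),  # default for all animals
    'bird':     ('animal', {'can_fly': True}),   # exception to the default
    'bat':      ('animal', {'can_fly': True}),   # another exception
    'elephant': ('animal', {}),                  # inherits the default
}

def lookup(category, prop):
    """Walk up the is-a chain; the most specific assertion wins."""
    while category is not None:
        parent, props = TAXONOMY[category]
        if prop in props:
            return props[prop]
        category = parent
    return None

print(lookup('elephant', 'can_fly'))  # False (inherited from 'animal')
print(lookup('bat', 'can_fly'))       # True  (local exception)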

Finding the right model may be rocket science. But given the right model, it's not rocket science to deal with the requirements.

George Miller’s timing

Miller, a psychologist, was at the center of cognitive science's development. Cognitive science looks at how brains work, while A.I. reproduces their abilities on a machine.

Miller started the WordNet project in 1986, just as the industry embraced statistical analysis. As the IBM manager Frederick Jelinek said: "Every time I fire a linguist, the performance of the speech recognizer goes up." Linguistics lost that battle to computation, but the war left inaccuracy in its wake: statistical systems started to hit the target, but the bull's-eye remained elusive.

Search companies like Google used WordNet data in their early days to limit possible word meanings. It helped distinguish meanings, such as "eat" used literally as “bite and swallow” in the search “what’s eating the pizza?” versus as "preoccupy" in “what’s eating Obama?”
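
Both senses sit side by side in WordNet. A minimal NLTK sketch that lists the verb senses of “eat”:

from nltk.corpus import wordnet as wn

# WordNet holds the literal and the "preoccupy/worry" senses of "eat".
for synset in wn.synsets('eat', pos=wn.VERB):
    print(synset.name(), '-', synset.definition())
# The output includes a "take in solid food" sense and a
# "worry or cause anxiety" sense, among others.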

The UE doesn’t need to be like Wikipedia, because it doesn’t need human-readable explanations; it is just a network of associations, like WordNet. While imperfect for the full UE application, WordNet's separation of language-independent meaning from language (words and phrases) shows the way.

UE to rise from WordNet

WordNet links English words and phrases to their meanings and synonyms. It also encodes a number of associations between the meanings that are essential for language understanding. Since the initial version, projects around the world have created local-language WordNets in many other languages, including French, Arabic, German, Korean and Chinese.
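
Many of these projects share WordNet’s language-independent synsets through the Open Multilingual Wordnet. A minimal NLTK sketch (assuming the omw-1.4 data package is installed and that the French wordnet covers this synset):

# Requires: pip install nltk, then nltk.download(['wordnet', 'omw-1.4'])
from nltk.corpus import wordnet as wn

dog = wn.synset('dog.n.01')    # one language-independent meaning

# The same synset surfaces as different words in different languages.
print(dog.lemma_names('eng'))  # e.g. ['dog', 'domestic_dog', ...]
print(dog.lemma_names('fra'))  # e.g. ['chien', ...]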

At the time of writing, support and development of WordNet have stopped. The project's website carries the message: “Due to limited staffing, there are currently no plans for future WordNet releases.”

While the glamor of computational methods and their “machine learning” techniques is probably behind WordNet's demise, customer dissatisfaction with statistical inaccuracy compels the search for better approaches, at least in theory. From WordNet's ashes may finally arise the UE for speaking A.I.
