Professor Noam Chomsky revolutionized linguistics in 1957 with the publication of Syntactic Structures, and the Chomsky hierarchy he introduced the previous year remains a cornerstone of computer science, underpinning the design of programming languages. But programming languages are a far cry from speaking A.I., and Chomsky's unprecedented success in that part of linguistics bears some of the blame for holding back advancement in another part of linguistics: the use of human language for A.I.
Obviously, how we use language to communicate is key, but there are a few flavors of the science of language, or linguistics. Chomsky studied formal linguistics, "the formal relations between linguistic elements," while another branch, functional linguistics, studies "the way language is actually used in communicative context." In other words, amazingly, Chomsky's approach, unlike functional linguistics, is not concerned with actual communication!
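The split is easy to see in miniature with Chomsky's own famous sentence, "Colorless green ideas sleep furiously": a formal grammar can judge it perfectly well-formed while saying nothing about whether it means anything. Here is a toy sketch (the grammar rules and lexicon are made up for illustration) of a context-free grammar that accepts the sentence purely on structure:

```python
# Toy context-free grammar in the spirit of formal linguistics:
# it judges word sequences grammatical or not, with no notion of meaning.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Adj", "NP"], ["N"]],
    "VP": [["V", "Adv"], ["V"]],
}
LEXICON = {
    "colorless": "Adj", "green": "Adj",
    "ideas": "N", "sleep": "V", "furiously": "Adv",
}

def parses(symbol, words):
    """Return True if `words` can be derived from `symbol`."""
    if symbol in LEXICON.values():  # preterminal: must match a single word
        return len(words) == 1 and LEXICON.get(words[0]) == symbol
    for rule in GRAMMAR.get(symbol, []):
        if len(rule) == 1:
            if parses(rule[0], words):
                return True
        else:  # binary rule: try every split point
            for i in range(1, len(words)):
                if parses(rule[0], words[:i]) and parses(rule[1], words[i:]):
                    return True
    return False

print(parses("S", "colorless green ideas sleep furiously".split()))  # True
print(parses("S", "furiously sleep ideas green colorless".split()))  # False
```

The grammar happily accepts the meaningless sentence and rejects its reversal, which is exactly the point: well-formedness and communication are different questions.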
Chomsky’s linguistics, by leaving communication out, has given us A.I. that doesn’t speak. While it’s not his fault that others applied his approach to the wrong problems, we now have the opportunity to make progress with different science.
Formal linguistics emerges from early computer days
How did we get here? The birth of A.I. was tumultuous. A number of new sciences were coming together, computer science and linguistics in particular, and they were still being developed.
This early work in A.I. was dominated by mathematicians, partly because digital computers were still primitive. But while human brains can be good at mathematics, it is just one of many skills they can learn. The problem arises when trying to fit a mathematical model to a non-mathematical brain.
Cognitive science, my discipline, focuses on how our brains work. It combines computer science with philosophy, linguistics, neuroscience, psychology and anthropology. It emerged with the goal of replicating cognition on machines roughly 20 years after A.I. was named at the 1956 Dartmouth Summer Research Project on Artificial Intelligence.
In the first sixty years since computers exploded into our world, formal and computational linguistics have dominated, despite the conflicts between their scientific assumptions. Early success is good, but hitting the target once isn't the same as hitting a bull's-eye. And hitting the bull's-eye once isn't the same as doing it repeatedly. Science is about ongoing accuracy: hitting the bull's-eye every time.
Clearly, we need a new goal.
Setting the new goal
HAL in 2001: A Space Odyssey and Sonny in I, Robot both use conversational language beyond the capability of today’s artificial, computational languages. Emulating them is a good revised target, because speaking A.I. will be most useful to us if it mimics human communication accurately.
As I wrote recently on this blog, in 1969 John Pierce of Bell Labs advised us to work out the science before pushing ahead with engineering. But probably due to frustration at the lack of progress for over a decade, engineering based on statistics was embraced anyway, before the science was ready.
To meet the increasing demand for speaking A.I., the key is functional linguistics combined with a brain-based platform. Our goal should be to talk like Sonny because, as with the evolution of personal computing, progress once unleashed will be unstoppable.
The right linguistics
Patom theory is my computing approach, in which stored patterns do the work of programmers. But in 2006, as I was adding patterns to the system, the limitations of Chomsky's linguistics hit me.
What's the best way to extract meaning from a matched sentence?
I spent a lot of time researching the answer and decided to create my own model. It was a big decision because it was like starting a whole new scientific investigation. The implementation was difficult, too, because Chomsky's model was a bit like working in an office tower with a broken elevator where each floor possibly held something important. Moving between floors to check was annoying!
And then while browsing in a New Jersey bookshop, I stumbled across the answer. How could I have a degree in cognitive science, but still have missed out on the answers, based on more than 30 years of development, from Role and Reference Grammar (RRG) theory?
RRG deals with functional linguistics and considers language to consist of three pieces – grammar linking to meaning in context. You know, word sequences and meaning in conversation. Communication!
RRG was developed with the inspiration that all human languages are based on common principles and that clauses (parts of sentences) contain meaning. Its success in modeling the range of human languages is impressive. Speaking A.I. can use RRG’s linking algorithm to map word sequences in context to meaning, and vice versa.
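To give a flavor of what "linking word sequences to meaning" looks like, here is a drastically simplified sketch inspired by RRG's linking idea: a verb's lexical entry supplies a semantic template (a logical structure), the clause's arguments fill its slots, and the macroroles Actor and Undergoer are assigned. The templates, lexicon, and assignment rule below are hypothetical simplifications for illustration, not RRG's actual algorithm:

```python
# Simplified sketch of RRG-style linking: map a transitive clause to a
# semantic representation (logical structure), then assign the macroroles
# Actor and Undergoer. Lexicon and templates are illustrative only.
LEXICON = {
    # verb: logical-structure template with argument slots x and y
    "break": "[do'({x}, Ø)] CAUSE [BECOME broken'({y})]",
    "see":   "see'({x}, {y})",
}

def link(subject, verb, obj):
    """Link a simple transitive clause to its logical structure."""
    ls = LEXICON[verb].format(x=subject, y=obj)
    # Simplified macrorole assignment: leftmost argument -> Actor,
    # rightmost -> Undergoer (real RRG uses an argument hierarchy).
    return {"logical_structure": ls, "Actor": subject, "Undergoer": obj}

result = link("Kim", "break", "the glass")
print(result["logical_structure"])
# [do'(Kim, Ø)] CAUSE [BECOME broken'(the glass)]
```

Crucially, the same linking runs in both directions — from word sequence to meaning for understanding, and from meaning to word sequence for generation — which is what makes it attractive for speaking A.I.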
It was an eye-opener.
The science speaks for itself in whatever language you read it.
I subsequently met with the primary developer of RRG, Professor Robert D. Van Valin, Jr., who convinced me that I no longer needed to develop a scientific model to link phrases and meaning, because RRG already explains how to do it in depth, like a cookbook.
It just got better and better. He also pointed out that the same algorithm works for any human language. I was sold, as it not only filled the Chomsky gap, but it meant Patom theory could be used with any language as well. [Disclosure: As our work is synergistic, Van Valin has become one of the advisers to my lab at Thinking Solutions.]
Why isn’t RRG used to speak to machines?
Here we have unfortunate timing. In the 1980s, as RRG was being developed, programmers continued to struggle with Chomsky’s linguistics.
Without waiting for another underlying scientific solution, the industry finally decided to proceed with a method of incremental improvement for computational linguistics, based only on the statistics of sequences of sounds and words.
Despite not meeting expectations, computational linguistics, with its fixation on word sequences independent of meaning, remains at the core of today's A.I. troubles.
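That fixation is easy to demonstrate: an n-gram language model scores text purely by counting which words follow which, with no representation of meaning anywhere. A toy bigram sketch (the tiny corpus is made up for illustration):

```python
from collections import Counter

# Toy bigram model: scores word sequences purely by co-occurrence counts.
# Nothing in it represents meaning -- only which words follow which.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def probability(sentence):
    """Chain of bigram relative frequencies P(w_i | w_{i-1})."""
    words = sentence.split()
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= bigrams[(prev, word)] / unigrams[prev]
    return p

print(probability("the cat sat"))  # 0.25 -- a familiar sequence scores well
print(probability("cat the sat"))  # 0.0  -- an unseen sequence scores zero
```

Note that the model would score a fluent-sounding falsehood just as highly as a truth: frequency of adjacency is all it knows.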
Our next step will build on the new scientific approach using RRG for linguistics and Patom theory for programmer-free computing. It promises progress while the dominant paradigms deliver disappointment. With a plan for the future, speaking A.I. is finally coming of age.
This article is published as part of the IDG Contributor Network.