A.I. is too hard for programmers

Copying brains leads to A.I.

Artificial intelligence is the Holy Grail for Silicon Valley, because human-like robots that speak will change our world for the better. As Steve Jobs would have said, “This changes everything.”

Imagine standing in your kitchen and saying, “Can you turn the lights over the hot plates on?” instead of walking to the wall near the kitchen door to flick the correct switch. Simpler communications. Easier. Faster.

Think of the possibilities — when isn’t your normal speech better than today’s interactions with keyboards, mice and touchscreens? Speaking AI could really make a difference in situations like driving, where your attention should be on the road, not on a screen.

The roadblocks to A.I. have always included the problems of designing and writing large programs, representing complex ideas and dealing with different types of data. When computers first appeared, hardware and software limitations were also a factor, but no longer. Some hoped that machines would write their own software "like a brain," but that has never materialized either.

Patom theory solves the problem of programming by using just one algorithm. 

What is Patom theory?

Patom theory says that brains just store, match and use patterns. Nothing more. A matched pattern identifies a stored pattern, and that stored pattern acts as a program.

The name "Patom" combines "pattern" and "atom." Patterns are indivisible elements that, like atoms, can combine to form more complex patterns. Patom theory is inspired by observations of pattern combinations in brains and languages.

It promises to be the first step toward machine intelligence because it solves the main problem: A.I. is too hard for programmers.

Learning: The specific defines the general

The biggest difference between computers and brains? Computer programmers define the general to store specifics, but brains store the specific to identify the general. Brains learn this way, but computers don’t.

You know this already, of course, because when you learn things, you experience them and can then “magically” apply what you have learned.

Computer programmers define data structures to represent general requirements. This follows from Alan Turing's 1936 design, which emulated human computers. By keeping track of calculations on paper, human intelligence can make arbitrarily complex calculations using carefully designed data structures.

In brains, the opposite is true. We learn from experience (specific) and generalize from there. In a future post, I will have a lot more to say about how brains store patterns and learn them, but for now, let’s focus on why this difference is the significant roadblock inhibiting our 1956 A.I. objectives.

The problem is that programmers often cannot define the general correctly in advance. Our intuition tells us what the general case is, but that intuition is often wrong.
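The contrast can be made concrete with a short sketch. The class and field names below are hypothetical illustrations of mine, not the author's design: a general-first program commits to a schema up front, while a specific-first store just records observations and generalizes on demand.

```python
# General-first: the programmer commits to a schema up front.
class Bird:  # hypothetical illustration, not the author's design
    def __init__(self, name, wingspan_cm, can_fly=True):
        self.name = name
        self.wingspan_cm = wingspan_cm
        self.can_fly = can_fly

# A penguin already strains the schema: the default must be overridden,
# and any behavior the schema never anticipated (swimming?) has no home.
penguin = Bird("penguin", wingspan_cm=35, can_fly=False)

# Specific-first: store raw observations; no schema is fixed in advance.
observations = [
    ("robin", "does:fly"),
    ("dove", "does:fly"),
    ("penguin", "does:swim"),  # a new behavior needs no schema change
]
behaviors = {b for _, b in observations}
print(sorted(behaviors))  # -> ['does:fly', 'does:swim']
```

The point of the sketch is only that the second store grows by adding rows of experience, never by rewriting a type definition.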

Defining the general from specifics

My favorite example comes from the real world. What is a bird? Birds are fist-sized animals that fly. So what is a penguin? A bird, yes, but hardly fist-sized, and completely incapable of flight.

You see the problem: Definitions starting from a general case aren’t flexible. Generalizing from specific cases is better.

Let’s store a few birds: a sparrow, a hawk, a robin, a dove, an emu, an ostrich and a penguin. The only association we start with is that each is a bird. This tells us little, but we have more to learn.

Through experience, we learn that the robin and dove are about the size of my fist. The sparrow is a bit smaller and the hawk a bit larger. The emu and ostrich are much, much bigger. The penguin is bigger, too, but smaller than the emu and ostrich.

Each of these birds has scaly legs and feet, and feathers. Wait, does the penguin have feathers? It does (short, dense ones), but if it did not, we simply would not store that association. Also, the penguin swims, while each of the other birds flies. Do birds fly? Well, yes and no. Do penguins swim? Yes.

The experience of these relationships illustrates the brain’s paradigm where the specific defines the general, operating at the semantic level. The meaning of each bird is a set of associations from experience, including what it is, what it has and what it does.

I call these associations “a part of the pattern,” and each type of bird forms its own pattern "atom." The atoms connect in a network of associations, created through experience and generalization. Patom theory allows atoms to split and combine dynamically, but let’s not get ahead of ourselves.

The first association we stored for each bird type was that they are birds. The second is their size, relative to the other birds. The third association for each bird is that they have feathers.

The question is whether all birds need feathers. Can you imagine a penguin with blubber to keep warm instead of feathers? I can. The general-defines-the-specific approach fails at such changes, because rewriting your general design is impractical. The specific-defines-the-general alternative is far more flexible: if you add a blubber-based penguin, it remains a bird and leaves the generic bird details unchanged.
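The storage scheme described above can be sketched in a few lines. This is a minimal illustration of mine, not the author's implementation; the `is:`/`has:`/`does:` association labels are assumed names. Each bird atom stores only the associations actually experienced, and adding an unusual bird changes nothing else.

```python
# Specific-defines-the-general storage (illustrative sketch only).
# Each bird atom holds a set of learned associations.
birds = {
    "sparrow": {"is:bird", "size:smaller-than-fist", "has:feathers", "does:fly"},
    "hawk":    {"is:bird", "size:larger-than-fist",  "has:feathers", "does:fly"},
    "robin":   {"is:bird", "size:fist",              "has:feathers", "does:fly"},
    "dove":    {"is:bird", "size:fist",              "has:feathers", "does:fly"},
    "emu":     {"is:bird", "size:huge",              "has:feathers", "does:run"},
    "ostrich": {"is:bird", "size:huge",              "has:feathers", "does:run"},
    "penguin": {"is:bird", "size:medium",            "has:feathers", "does:swim"},
}

# Adding a hypothetical blubber-based penguin touches no other entry:
birds["blubber-penguin"] = {"is:bird", "has:blubber", "does:swim"}

# "Do birds fly?" -- generalize from the stored specifics on demand.
fliers = {name for name, assoc in birds.items() if "does:fly" in assoc}
print(f"{len(fliers)} of {len(birds)} stored birds fly")  # -> 4 of 8 stored birds fly
```

The answer to "do birds fly?" is computed from experience at query time, so it stays correct as new birds are learned.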

There is a simple mechanism for separating these patterns that I call "linkset intersection." It takes some elements and finds their common attributes, much like a database query does today, but working in a brain-like network.

How large is a bird? Just intersect “birds” with “large.” Large is a size (learned with the same association principles that created the bird network), so we get the following: (a) two are the size of my fist, (b) one is smaller, and (c) the others are larger.

Given an ambiguous question (ambiguity means there is more than one answer), we can choose the most frequent single answer: "fist-sized." (The remaining birds are larger, but each by a different amount, so no single larger size occurs as often.) This answer comes from meaning stored through experience. Linkset intersection is an efficient way to find answers in massive data stores, like a brain, without indexing. Once we have the answer, "fist-sized," its associations, such as the visual images of robins and doves, are also available.
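The query just described can be sketched as set intersections. This is my own illustration, not the author's code; the linkset contents (including a non-bird "tennis ball" in the fist-sized linkset, to show why intersecting with "bird" matters) are assumptions.

```python
from collections import Counter

# Each size atom keeps a linkset: the set of atoms it is associated with.
bird = {"sparrow", "hawk", "robin", "dove", "emu", "ostrich", "penguin"}
size_linksets = {
    "fist-sized":    {"robin", "dove", "tennis ball"},  # shared size atom
    "sparrow-sized": {"sparrow"},
    "hawk-sized":    {"hawk"},
    "penguin-sized": {"penguin"},
    "emu-sized":     {"emu"},
    "ostrich-sized": {"ostrich"},
}

# "How large is a bird?": intersect the bird linkset with each size linkset,
# then pick the most frequent answer to resolve the ambiguity.
counts = Counter({size: len(bird & links) for size, links in size_linksets.items()})
answer, frequency = counts.most_common(1)[0]
print(answer, frequency)  # -> fist-sized 2
```

Note that the "tennis ball" atom is filtered out by the intersection: it shares a size with robins and doves but is not in the bird linkset.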

This isn’t a statistical approach, but a pattern-matching one. It only provides valid answers from experience, not guesses. Other intersections, as needed, are available to deal with other real-world questions.

Combining knowledge

I recently wrote about Alan Turing's computer, a machine limited by compression and duplication of data. That programming system, based on human computers, further limits applications by requiring a programmer to define general attributes in advance. Time has shown this approach to be spectacularly successful for many applications while being poor for A.I.

To unlock the power of A.I., the first step is to expand and centralize data. We then use the brain's approach of storing specifics. Combined with an appropriate system for storing (learning) them, specifics are effective at determining general conditions.

Copyright © 2015 IDG Communications, Inc.
