I, Coach: What's in Store in Robotics

Someday, robots will do more than vacuum your floors. They'll train you and advise you -- and maybe even help out with the cooking.

By Gary Anthes
May 21, 2007 12:00 PM ET

Computerworld - Takeo Kanade is a roboticist, but his work extends far beyond the C3PO-like humanoids that often come to mind when one thinks of robots. He has been a pioneer in computer vision, smart sensors, autonomous land and air vehicles, and medical robotics. Kanade, a professor of computer science and robotics at Carnegie Mellon University in Pittsburgh, recently told Computerworld that people's notions of what robots can and should do will change. Robots will serve as coaches and advisers, not so much replacing human labor as enhancing it.

What's coming in human-computer interfaces? The trend toward computer vision is clear, and it will accelerate. In 10 years, I wouldn't be surprised to see computers recognizing certain levels of emotions, expressions, gestures and behaviors, all through vision.

What is the quality-of-life technology that you are working on?

Intelligent systems [that incorporate robotics] have been developed over the past couple of decades, mostly for military, space exploration, hazardous environments and entertainment. I think they can do better for our daily lives, especially for older people and people with disabilities. The systems range from small devices that you carry to small mobile robots to the whole environment: home, streets, the community. The most important thing is for these systems to understand what the human wants to do, or is about to do, and then help them accomplish those tasks.

How could a computer know someone's intent? What I'm advocating right now is what I call inside-out vision. We think of putting cameras in the environment to observe you, which I call outside-in vision. But people don't like being observed. And, technically speaking, it's difficult, because it's important to know what you are looking at in order to know what you are trying to do, and the things you are looking at up close tend to be occluded by your body.

So the idea is to do it the other way and put sensors on you, looking out. They could be small cameras and maybe sound recorders. The computer then should be able to recognize objects that you are looking at, like a door you are approaching. [Because] the computer is looking from your viewpoint, it can understand what you are trying to do. We can put 1TB of memory on a small device with all the images of your home and neighborhood, all the places you tend to go and the routes you drive.

Where else might computer vision be applied? Imagine you train a person how to assemble a product. A trainer's job is to observe the trainee's actions, point out errors and show the right way. The job requires a lot of attention and patience, and that kind of one-on-one training is expensive. I can imagine the computer looking at what the trainee is doing and then giving some advice so the trainer can actually train more than one or two people. So the computer becomes a job coach.
