
User interfaces: The next generation

Keyboards and mice will face competition from motion-sensing, gesture recognition and haptic technologies.

August 9, 2004 12:00 PM ET

Computerworld - PDAs and smart phones are great for keeping the road warrior connected to the extended enterprise, but the technologies have always offered only limited data input capabilities, especially for typing-intensive applications.

San Jose-based Canesta Inc. thinks it may have just the thing to address the problem.

The company has developed a prototype technology that lets users of PDAs and similar mobile devices enter data into their handheld systems simply by typing on an image of a standard-size keyboard projected onto a desktop or other surface. The "electronic perception" technology captures the user's finger motions with emitted light, forming real-time 3-D images that are then processed and translated into keystrokes.
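The last step of that pipeline, turning a detected fingertip position into a keystroke, can be sketched as a simple lookup against the projected keyboard layout. This is a minimal illustration, not Canesta's actual implementation; the key rectangles and coordinates are invented for the example.

```python
from typing import Optional

# Hypothetical layout: each key is (label, x, y, width, height) in
# millimetres on the projection surface. Real layouts would cover a
# full standard-size keyboard.
KEY_LAYOUT = [
    ("Q", 0, 0, 18, 18), ("W", 19, 0, 18, 18), ("E", 38, 0, 18, 18),
    ("A", 5, 19, 18, 18), ("S", 24, 19, 18, 18), ("D", 43, 19, 18, 18),
]

def keystroke_for_touch(x: float, y: float) -> Optional[str]:
    """Return the label of the key whose rectangle contains the touch
    point reported by the 3-D sensor, or None if the touch landed
    between keys."""
    for label, kx, ky, w, h in KEY_LAYOUT:
        if kx <= x < kx + w and ky <= y < ky + h:
            return label
    return None
```

In a real system the sensor would also have to decide whether a fingertip actually contacted the surface, typically by checking its height in the 3-D image before running a lookup like this one.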

The technology can be integrated into any handheld device and includes a 3-D sensor module, a keyboard-pattern projector and an infrared light source.

Canesta has much more in mind. "Any situation in which a machine or a digital device needs to understand its surroundings is a great application for electronic perception technology," says Jim Spare, a vice president at the company. For example, he says, a future application could be an intelligent car-airbag system that can sense the size and position of an occupant to prevent injury upon deployment.

And Spare says his company's projection keyboard points the way to much more powerful user interfaces based on hand gestures. "We'll be able to navigate through databases, especially when you have different sets of data with complex relationships," he says. "You could open up a filing cabinet and pick up a file and sift through it with your fingers, using gestures from your hands as if you were actually picking it out of the file cabinet."

Canesta's technology is part of a growing list of emerging user-interface technologies that are being designed to enable a wider range of human-computer interaction than is possible with traditional mouse- and keyboard-based systems.

Broadly speaking, such technologies are designed to allow computers to accept gestures, motions, speech and facial expressions as data input methods alongside mouse clicks and keystrokes.

Many of these technologies are coming from small companies and are first developed for highly specialized applications. But as the technology matures and costs come down, expect to see it break into broader markets, vendors say.

One example is a gesture recognition system developed for the U.S. Department of Defense by Cybernet Systems Corp. in Ann Arbor, Mich. The technology was developed to facilitate silent troop communication during combat. It allows users to stand in front of a camera-mounted monitor and manipulate images, data and application windows by using specific hand movements from a lexicon of roughly 80 gestures recognized by the system. A San Antonio-based TV station is using a commercial version of the product, called GestureStorm, to control computerized visual effects in its weather reports.
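Once the camera has recognized one of the roughly 80 gestures in the lexicon, the remaining work is dispatching it to an application command. The sketch below shows that dispatch step under invented names; the gesture labels and commands are hypothetical, not Cybernet's actual lexicon.

```python
# Hypothetical mapping from recognized gesture labels to UI commands.
# A production lexicon like Cybernet's would cover ~80 gestures.
GESTURE_COMMANDS = {
    "swipe_left": "previous_window",
    "swipe_right": "next_window",
    "palm_push": "zoom_in",
    "fist": "select",
}

def dispatch(gesture: str) -> str:
    """Map a recognized gesture label to a command string; gestures
    outside the lexicon are ignored rather than guessed at."""
    return GESTURE_COMMANDS.get(gesture, "no_op")
```

Keeping unknown gestures as a no-op matters in practice: a recognizer watching continuous hand motion will emit many spurious detections, and only lexicon matches should drive the interface.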
