'Minority Report' interface shown at CES

In a small meeting room at the edge of the show floor at the Consumer Electronics Show, a startup company is demonstrating a motion-sensing interface technology that could offer a radically new way of interacting with games, PCs and televisions.

The technology, from Israeli startup PrimeSense, can be embedded in TVs, Blu-ray players and set-top boxes, allowing people to use hand gestures to scroll through cable TV menus from their living room couch, or stand in front of the TV and shuffle documents on the screen by moving their hands around in mid-air, much as Tom Cruise does in the sci-fi film "Minority Report."

The technology can also be used as an interface for PC games and game consoles. In that way it resembles Microsoft's Project Natal, which allows users to stand in front of a large screen and use full-body gestures, such as a kick, punch or jump, to control an avatar on the screen. Microsoft said this week that it will launch Project Natal for Xbox 360 users later this year.

PrimeSense's system uses a sensor-camera that sits above the screen and projects a beam of light, at a wavelength close to infrared, to build a 3D map of the people and objects in a room. When a person activates the device by thrusting their palm out towards the screen, the system locks onto that person and puts them in control.

PrimeSense is a fabless chip company, meaning it designs the 3D sensor chip that powers the technology, along with the software that gets embedded into devices, but outsources the manufacturing. It says it has an agreement with a large manufacturer to produce its chips for the mass market, although it won't yet say who that is.

In fact, a big question mark over PrimeSense is that it won't disclose any of its customers publicly yet, although companies in the PC and set-top box markets are likely to announce products this quarter that include its technology, according to Adi Berenson, PrimeSense's vice president for business development and marketing. The company is also in talks with TV makers, he said.

A prototype system is being shown behind closed doors to reporters and industry partners at CES this week. The technology sounds futuristic, but in fact variations on it have been in the works for years, and are also being developed by competitors including Canesta of Sunnyvale, California, Optrima of Belgium, PMDTechnologies of Germany, and Mesa Imaging of Switzerland.

Canesta said in October that it had secured an additional US$16 million in funding, from companies including laptop giant Quanta Computer, to further develop its own 3D sensor technology. Last year Canesta demonstrated a prototype gesture-controlled TV from Hitachi, and it has worked with Honda in the past on vehicle safety systems.

Most companies in the market are using a "time of flight" technology, which works by emitting an infrared pulse from a camera above the screen and measuring the time it takes to bounce back from objects in the room. This allows the systems to calculate the distance of each surface and create a virtual 3D model of the room. Any changes, like hand movements, are then translated onto the screen.
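The arithmetic behind time-of-flight sensing is simple: because light's speed is known, the round-trip time of the pulse directly yields the distance to each surface. A minimal sketch (illustrative only, not any vendor's actual code; the 20-nanosecond figure is an invented example):

```python
# Time-of-flight depth: distance from the round-trip time of a light pulse.
C = 299_792_458.0  # speed of light in a vacuum, meters per second

def tof_distance(round_trip_seconds):
    """Distance to a surface given the pulse's round-trip time."""
    # The pulse travels out to the surface and back, so halve the path.
    return C * round_trip_seconds / 2.0

# A pulse returning after ~20 nanoseconds indicates a surface about 3 m away.
print(tof_distance(20e-9))  # roughly 3.0 meters
```

Repeating this measurement for every pixel of the sensor gives the per-surface distances from which the virtual 3D model of the room is built.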

PrimeSense uses a variation of this. Instead of calculating the time for light to bounce off of objects, it encodes patterns in the light and builds a 3D image by examining the distortion created in those patterns by objects in the room, Berenson said.
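Recovering depth from pattern distortion is a triangulation problem: a point's apparent shift between where the pattern was projected and where the camera sees it is inversely proportional to its distance. A minimal numeric sketch under standard structured-light assumptions (the focal length, baseline and shift values here are invented for illustration, not PrimeSense's specifications):

```python
def depth_from_shift(focal_px, baseline_m, shift_px):
    """Depth of a point from the observed shift of a projected pattern.

    focal_px   -- camera focal length, in pixels
    baseline_m -- distance between projector and camera, in meters
    shift_px   -- how far the pattern appears displaced, in pixels
    """
    # Standard triangulation: nearer surfaces distort (shift) the
    # projected pattern more, so depth falls as the shift grows.
    return focal_px * baseline_m / shift_px

# e.g. a 600-pixel focal length, a 7.5 cm projector-camera baseline,
# and an observed 15-pixel pattern shift put the surface at 3 m.
print(depth_from_shift(600, 0.075, 15))  # 3.0
```

Because the pattern is encoded into the projected light itself, a single camera frame carries enough information to solve this for the whole scene at once, which is one plausible reason such systems can be faster than pulse-timing approaches.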

He claimed this system is faster and more accurate than time-of-flight systems, and can operate in near darkness. The technology can map out objects that are up to 18 feet (six meters) away, though six to seven feet is best for applications where the user is standing up, and 10 to 12 feet is the "sweet spot" for using hand gestures on the couch, he said.

The entire system, including the sensor chip and middleware, will cost manufacturers $20 to $30 to add to PCs or TVs when shipped in volume, Berenson said. Most high-end TVs will have enough computational power to run the software, and have USB 2.0 ports where the sensor device can be plugged in, he said.

PrimeSense showed a few applications for the technology here. At one point during a "Minority Report" style demonstration the system froze for a moment, but it recovered fairly quickly and appeared to work smoothly after that.

When using the "touch-screen" effect to manipulate documents, the outlines of two grey hands appear on the screen, corresponding to the user's hands in mid-air. Touching a document turns the palms red, and the document can then be moved about the screen or rotated using two hands. Possible uses include sorting through digital photos on a PC, or playing a card game on a TV screen.

The sensor on top of the TV also includes a camera and a microphone, and PrimeSense showed how a person's image can be superimposed over a background on the screen, much like a weatherman on TV.

It wasn't clear how the capability might be used, and partner companies will have to come up with some of their own applications for the technology. One possibility is a type of videoconferencing between two Internet-connected TVs, so that two people could discuss a Web page by appearing to stand in front of it on the screen and point to images and links on the page.

"We don't know yet how everything will be implemented," Berenson said, "but it's something that could be fun to use."

Copyright © 2010 IDG Communications, Inc.
