
CES: IBM, Emotiv show advances in virtual reality worlds

Technology allows users to control an avatar using brain signals transmitted wirelessly to a PC

January 9, 2008 12:00 PM ET

Computerworld - LAS VEGAS -- Hundreds of products at the International Consumer Electronics Show (CES) here are devoted to new ways of inputting data to a PC or gaming console, including voice commands and gestures registered via video detection.

But another input method demonstrated at CES is the wireless transmission of the brain's electrical signals, including emotions and cognitions, from sensors on a person's head to a PC.

Emotiv Systems Inc., an IBM partner, demonstrated an alpha version of a neural input device that it plans to unveil as a consumer product at the Game Developers Conference in San Francisco next month.

IBM believes that such neural input can be an important part of a broad range of virtual reality uses for industry, not just for games, said Dave Kamalsky, project manager for virtual worlds research at IBM. Next to the Emotiv demonstration, IBM was showing a variety of virtual reality (VR) systems, including Second Life and Activeworlds, that businesses can use for training employees, holding meetings and demonstrating products to consumers.

The product's working name is the Emotiv Headset, and it could sell for $200 to $300, similar to the cost of a high-end handheld game controller, said Patrick McGill, a spokesman for the San Francisco-based start-up.

The alpha version includes about a dozen sensors that pick up the brain's signals, which are transmitted to the PC via a 2.4-GHz wireless link, said Emotiv product engineer Marco Della Torre. He demonstrated the alpha version while wearing the sensors, which picked up his eye movements, eye blinks, smiles and frowns and showed them on the PC and a large display at the Emotiv booth. Each facial gesture was quickly and accurately rendered on a large graphical representation of a face on the display.

In addition to the simpler facial expressions, Della Torre was able to transmit the brain's affective states, such as calm or excitement (which involves a group of facial movements), and even cognitions. The cognitions (acts of conscious control) that Della Torre demonstrated included making an animated cube on the display move up or down or spin in space. He was able to train the software to interpret the cognition in less than 20 seconds.
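
Emotiv has not detailed how that training works, but a minimal sketch can illustrate the general idea: collect a few seconds of labeled feature windows for a neutral state and for the intended cognition, then match live windows to the nearest class mean. The feature layout, window counts and nearest-centroid classifier below are assumptions for illustration, not Emotiv's method.

    import numpy as np

    # Hypothetical sketch: learning one "cognition" from a few seconds of
    # labeled data. Each window is a feature vector (e.g., one band-power
    # value per sensor); classes are separated by nearest-centroid matching.

    def train_cognition(neutral_windows, action_windows):
        """Compute class centroids from labeled training windows."""
        return {
            "neutral": neutral_windows.mean(axis=0),
            "lift": action_windows.mean(axis=0),  # e.g., "make the cube rise"
        }

    def classify(window, centroids):
        """Return the label of the centroid nearest to a live window."""
        return min(centroids,
                   key=lambda label: np.linalg.norm(window - centroids[label]))

    # Example: 20 one-second windows per class, 14 features (one per sensor).
    rng = np.random.default_rng(0)
    neutral = rng.normal(0.0, 1.0, size=(20, 14))
    lifting = rng.normal(1.5, 1.0, size=(20, 14))
    model = train_cognition(neutral, lifting)
    print(classify(rng.normal(1.4, 1.0, size=14), model))  # likely "lift"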

While such capabilities might seem rudimentary, the control of the animated cube could eventually be extended to "thinking" an avatar in a virtual world into gesturing with its face or hands, shaking someone's hand, or even throwing a ball, Della Torre said. By comparison, Second Life avatars already support many controls, including facial expressions, walking and even flying, but all must be input via a keyboard.
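
As a sketch of how such detections could drive an avatar without a keyboard, the hypothetical snippet below routes named detection events to avatar actions once they clear a confidence threshold. The event names, threshold and action table are invented for illustration; a real SDK would supply its own callbacks and action set.

    # Hypothetical sketch: mapping headset detections to avatar actions.
    AVATAR_ACTIONS = {
        "smile": "avatar.smile",
        "blink": "avatar.blink",
        "lift": "avatar.raise_hand",   # trained cognition from the sketch above
        "push": "avatar.shake_hand",
    }

    def on_detection(event, confidence):
        """Trigger an avatar action when a detection clears the threshold."""
        action = AVATAR_ACTIONS.get(event)
        if action and confidence >= 0.7:
            print(f"trigger {action} (confidence {confidence:.2f})")

    # Simulated stream of detections from the headset.
    for event, conf in [("smile", 0.91), ("blink", 0.55), ("lift", 0.82)]:
        on_detection(event, conf)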


