
Intel demos perceptual computing software toolkit

One developer prototype at MWC relies on hand gestures to swipe through photos

February 25, 2013 01:26 PM ET

Computerworld - BARCELONA -- Software engineers at Intel are exploring new ways people can use the human voice, gestures and head-and-eye movements to operate computers.

Intel's Barry Solomon uses hand gestures in a demonstration of a perceptual computing toolkit being used by independent developers. (Photo by Matt Hamblen/Computerworld)

In coming years, their research is expected to help independent developers build computer games, help doctors control computers used in surgery, and assist firefighters when they enter burning buildings.

"We don't really know what this work will become, but it's going to be fascinating to watch it play out," said Craig Hurst, Intel's director of visual computing product management, in an interview at Mobile World Congress. "So far, what we've seen has gone beyond what we thought of originally."

Intel's visual computing unit, created two years ago, has grown to become a top priority for the chip maker, Hurst said. Last fall, the unit released several software toolkits that are used by independent developers to create a raft of new and sometimes unusual applications.

One of the toolkits, called the Perceptual Computing SDK (software development kit), was distributed to outside developers building applications that will be judged by Intel engineers. Intel is planning to award $1 million in prizes to developers in 2013 for the most original application prototype designs, not only in gaming, but also in work productivity and other areas.

Barry Solomon, a member of the visual computing product group, demonstrated how the Intel software is being used by developers on Windows 7 and Windows 8 desktops and laptops. With a special depth-perception camera clipped to the top of his laptop lid and connected over USB to the computer, Solomon was able to show how the SDK software rendered his facial expressions and hand gestures on the computer screen, accompanied by an overlay of lines and dots to show the precise position of his eyes and fingers. A full mesh model can then be rendered.

With that tracking information easily available, a developer can quickly insert a person's face and hands into an augmented reality scenario. Or, the person can be quickly overlaid onto a green screen commonly seen in video applications to make a weather or news report. The person's gestures could be used by a developer to interact with functions in a game or productivity application.
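The green-screen idea described above, placing a person into a new scene using the camera's per-pixel segmentation rather than a physical colored backdrop, comes down to a simple compositing step. The sketch below is illustrative only and does not show the Intel SDK's actual API; the boolean mask stands in for the person segmentation a depth camera can supply.

```python
import numpy as np

def composite(person_frame, background, mask):
    """Overlay a person onto a new background using a per-pixel mask.

    mask is a 2-D boolean array (True where the person is), the kind of
    segmentation a depth camera can provide without a physical green screen.
    """
    out = background.copy()
    out[mask] = person_frame[mask]  # copy person pixels over the backdrop
    return out

# Toy 4x4 single-channel frames: the "person" occupies the center 2x2 block.
person = np.full((4, 4), 200, dtype=np.uint8)
bg = np.zeros((4, 4), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

result = composite(person, bg, mask)
```

In a real application the mask would be updated every frame from the depth camera's output, and the frames would be three-channel color images, but the per-pixel copy is the same.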

A company called Touchcast is building a green-screen application that will be available later in 2013. The prototype camera, called the Creative Interactive Gesture Camera, which Intel uses in its perceptual computing demonstrations with the SDK, will also go on sale later this year.


