Four years ago, in-the-air gestures were the future of gaming and the desktop PC user interface.
In 2010, Microsoft launched its Kinect product for Xbox 360 and Leap Motion was founded as a company.
For a while, Kinect for Xbox 360 was the fastest-selling consumer electronics gadget of all time. And when Leap Motion introduced the Leap Motion controller, minds were blown by the demos.
The excitement around Kinect has fizzled among both gamers and game developers. Microsoft recently boosted sales of the Xbox One by dropping the requirement to buy Kinect along with the console. People aren't using Kinect the way Microsoft expected them to, and hardly anyone is using the much-hyped Kinect for Windows product.
And Leap Motion has completely failed in the market.
What happened?
Why in-the-air gestures failed
It's easy to be so dazzled by new technology that you forget the other half of the equation: the human user. This is especially true of user interfaces, which are by definition the point at which the human and the machine connect.
[Photo: Yuriy Kozachuk, an Intel Perceptual Computing Group marketing engineer, uses a 3D camera atop an all-in-one computer screen to demo hand-gesture control.]
The trouble with in-the-air gesture technology is that it has thus far been applied to the wrong problem. Both Kinect and Leap Motion have been used to control on-screen action of some kind.
Waving your hands and arms around to control something "over there" is not an activity that corresponds to anything that was ever a part of the human experience -- unlike, say, the direct manipulation of on-screen objects with multitouch technology. It's a completely new and abstract behavior that Microsoft and Leap Motion are demanding of people. And we're not having any of it.
Why in-the-air gestures will succeed
Even great in-the-air gesture technology failed when applied to a problem that was counter to human nature. The technology will succeed when it is used for human-compatible applications. And there are two big ones in our immediate future.
1. Virtual and augmented reality
Leap Motion this week rolled out a $20 plastic clip that mounts the Leap Motion controller to an Oculus Rift headset. The Oculus Rift is a highly regarded prototype virtual reality system developed by Oculus VR, a company that was acquired by Facebook in March for $2 billion.
Leap Motion also released a demo video that I think you should see. It shows what's displayed inside the Oculus Rift -- two screens that, when you're wearing the goggles, provide the illusion of 3D -- and it demonstrates how Leap Motion's extremely accurate, real-time tracking of arms, hands and fingers translates into total control in augmented reality and virtual reality programs.
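To give a sense of what that tracking data looks like to a developer, here is a minimal sketch that polls the Leap Motion controller for palm positions, assuming the Leap SDK's Python bindings from that era are installed; exact field names may vary by SDK version.

```python
# Minimal sketch: poll the Leap Motion controller for hand positions.
# Assumes the Leap SDK's Python bindings (v1/v2 era) and its native
# library are installed. Positions are in millimeters, device-relative.
import time
import Leap

controller = Leap.Controller()

while True:
    frame = controller.frame()  # the most recent tracking frame
    for hand in frame.hands:
        pos = hand.palm_position  # a Leap.Vector, in millimeters
        print("hand %d: palm at (%.1f, %.1f, %.1f) mm" %
              (hand.id, pos.x, pos.y, pos.z))
    time.sleep(0.05)  # poll roughly 20 times per second
```

Each frame carries precise positions for hands and fingers, which is exactly the raw material that direct hand interaction in virtual reality requires.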
Note that the Oculus Rift is still for developers only and won't be sold to consumers until there are applications available and the system itself is further refined.
Oculus Rift-style augmented reality (where virtual objects are superimposed on a view of real-world environments) and virtual reality (where the view is of an artificial but lifelike world in which the user can interact with hands and feet) will be used for gaming, socializing and professional applications. One example of the latter is in the field of medicine, where physicians could use the technology to remotely control robots performing surgery on patients in other locations.
Extremely accurate motion control of the kind Leap Motion offers is not only a winning application for in-the-air gestures; it's a necessary and inevitable one.
I expect Facebook to acquire Leap Motion and permanently build it into the Oculus Rift goggles.
Even if that doesn't happen, in-the-air gestures will go mainstream as soon as augmented and virtual reality go mainstream.
But that's not the only natural fit for in-the-air gesture technology.
2. Communication
The biggest technological change of the next five years will be the rise of ubiquitous virtual assistants: Services like Siri, Google Now and Cortana will someday be the primary user interface for interacting with our apps, the Internet and one another.
Providers of those systems will engage in an arms race to improve their services. Two key areas of differentiation will be the software agent's ability to detect your mood as you use it and its ability to automate communication. Let's look at those one at a time.
All three major virtual assistants are self-learning to some degree. They pick up facts about you and your life here and there, scanning your calendar, email and contacts.
This getting-to-know-you capability will improve: The assistants will try to learn more about you by offering you things and seeing whether or not you like them. And integrated in-the-air gesture technology will help them do that.
The assistants will assess your reactions to things through a variety of inputs, including your tone of voice, the actual words you say and your facial expressions. And with the help of in-the-air gesture technology, they'll even read your hand gestures and body language.
In other words, they'll perceive the nature of your reactions to things just like people do.
If they detect that something they do delights or frustrates you, they'll adjust what they do in the future. Today's in-the-air gesture technology will play an integral role in how virtual assistants "read" you.
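As a thought experiment, here's a hypothetical sketch of that feedback loop; every name in it is invented for illustration, since no shipping assistant exposes anything like this. It fuses several reaction signals into one score and nudges a stored preference toward it.

```python
# Hypothetical sketch of a mood-reading feedback loop. All names are
# invented for illustration; no real assistant exposes such an API.

SIGNAL_WEIGHTS = {"voice_tone": 0.3, "words": 0.3, "face": 0.2, "gesture": 0.2}

def reaction_score(signals):
    """Fuse per-channel reaction scores in [-1, 1] into one weighted score."""
    return sum(SIGNAL_WEIGHTS[name] * value for name, value in signals.items())

preferences = {}  # suggestion type -> learned preference score

def record_reaction(suggestion_type, signals, learning_rate=0.1):
    """Nudge the stored preference toward the observed reaction."""
    old = preferences.get(suggestion_type, 0.0)
    preferences[suggestion_type] = old + learning_rate * (reaction_score(signals) - old)

# Example: the user smiled and sounded pleased after a restaurant suggestion.
record_reaction("restaurant_tips",
                {"voice_tone": 0.5, "words": 0.8, "face": 0.9, "gesture": 0.0})
print(preferences)  # -> {'restaurant_tips': 0.057} (approximately)
```

The face and gesture channels are exactly where Kinect- and Leap Motion-style sensing would plug in.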
In-the-air gesture technology will also be deployed to help people communicate with one another.
You'll notice a trend in messaging apps where the input required of the user keeps getting simpler. Some messaging apps accept only emoji (simple cartoon characters that replace or abbreviate language), or even just the word yo.
Of course, those apps are gimmicks. But the impulse to make communication faster and easier will ultimately require in-the-air gesture technology to turn hand gestures and body language into conveyable messages. For example, imagine if a smiley-face icon were placed into your messages when you smiled, or if LOL were added when you really did laugh out loud. Or imagine if a shrug, a wave of the hand, a thumbs-down, a forehead slap, chin scratching and other gestures triggered auto-typing of the words you were conveying with body language.
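A toy sketch of that idea, assuming a gesture recognizer (not shown) that emits labels such as "smile" or "shrug"; the labels and their text equivalents here are hypothetical:

```python
# Toy sketch: map recognized gestures to message text. The recognizer
# itself is assumed; these labels and mappings are hypothetical.

GESTURE_TO_TEXT = {
    "smile": ":)",
    "laugh": "LOL",
    "shrug": "no idea",
    "wave": "bye!",
    "thumbs_down": "nope",
    "forehead_slap": "d'oh!",
}

def append_gesture(message, gesture_label):
    """Append the text equivalent of a recognized gesture, if one is known."""
    text = GESTURE_TO_TEXT.get(gesture_label)
    return message + " " + text if text else message

print(append_gesture("See you at 8", "wave"))  # -> "See you at 8 bye!"
```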
Also: People are shy. That's why video chat never really took off. More people might be interested in video chat if, instead of actual video of themselves, the systems used stand-in avatars -- especially ones that instantly and automatically conveyed their facial expressions, body language and hand gestures in real time as they chatted to others online.
[Photo: Intel's new 3D mobile chat app tracks users' faces and moods and reflects those emotions on an avatar.]
Looking at the state of research in this area (which has been under way for two decades), it's likely that head-and-face-only avatars will be the first in broad consumer use. Eventually, though, whole-body avatars will take over, and those will take full advantage of in-the-air gesture technology -- the future versions of products like Kinect and Leap Motion.
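The core of such an avatar system is simple in outline: each frame, copy tracked expression values onto the avatar's matching animation parameters. Here's a simplified, hypothetical sketch (the parameter names are invented):

```python
# Simplified, hypothetical sketch of avatar mirroring: per-frame expression
# values from a face/body tracker (assumed) drive the avatar's parameters.

def mirror_frame(tracked, avatar_params, smoothing=0.5):
    """Blend tracked expression values (0..1) into the avatar's parameters.

    Both dicts are keyed by expression name, e.g. "smile", "brow_raise",
    "head_yaw". Smoothing damps frame-to-frame tracker jitter.
    """
    for name, value in tracked.items():
        old = avatar_params.get(name, 0.0)
        avatar_params[name] = old + smoothing * (value - old)
    return avatar_params

avatar = {}
mirror_frame({"smile": 0.8, "head_yaw": 0.1}, avatar)
print(avatar)  # -> {'smile': 0.4, 'head_yaw': 0.05}
```

Face-only avatars need only a handful of such parameters; whole-body avatars extend the same loop to arms, hands and posture, which is where Kinect- and Leap Motion-class tracking comes in.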
So don't count out in-the-air gesture technology yet. It has failed as a user interface because it's been applied to the unnatural act of controlling on-screen action. Once it's applied to virtual reality and communication, it will become a totally mainstream technology that just about everyone will use.