Why it's time to augment our thinking about AR

The most revolutionary outcome of AR is that the physical world will function like the digital world. And that's a big deal for enterprises.


Everybody expects augmented reality (AR) to become the Next Big Thing. But few are clear about why, exactly.

The breathless coverage around AR says it will revolutionize healthcare, warfare, education, gaming and, of course, the enterprise generally.

In the enterprise, we're told, AR will enhance training, in-store marketing, experiential marketing, market research, customer-facing interactions, distribution, warehousing, manufacturing, engineering, design and more.

But how? By taking what's on our laptop, tablet and smartphone screens and projecting it in mid-air through goggles? That sounds more cumbersome than revolutionary.

I'll tell you why people can't see the revolutionary value of AR in a minute. But first, let's clarify what AR is.

What is AR, anyway?

Google accidentally created a misleading impression about AR. The company's Google Glass "Explorer Program" (between April of 2013 and May of 2014, followed by a commercial release until January of 2015) convinced everyone that AR would be ugly, dorky, limited and not particularly useful. "So that's what AR is," the public seemed to say. "Do not want!"

In fact, the Google Glass Enterprise Edition today is a powerful and successful platform. But Google Glass is not and never has been augmented reality. Not really.

In fact, your smartphone is more of an AR device than Google Glass is.

Google Glass is technically a heads-up display, which means it shows you a semi-transparent screen superimposed on your natural field of view. When you turn your head to the right, the screen goes to the right with your head. (Glass is semi-AR, because information can be associated with general locations, but not specific ones.)

With full-fledged AR glasses, digital objects or words are attached to something in the environment. When a digital label is hovering in space over an object, and you turn your head to the right, the label stays with the object, and doesn't travel with the movement of your head. It appears to be attached to the object, not the AR device.
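The distinction can be sketched in a few lines of Python (a toy model with invented angles, purely for illustration): a world-anchored label's position in your view shifts as your head turns, while a head-locked HUD label never moves.

```python
def world_to_view(object_bearing_deg, head_yaw_deg):
    """View angle of a world-anchored label: its fixed bearing in the
    world minus wherever the head is currently pointing."""
    return object_bearing_deg - head_yaw_deg

# A world-anchored label sits at a fixed bearing of 30 degrees.
LABEL_BEARING = 30.0

# A head-locked (heads-up display) label is always drawn at the same
# offset in the view, regardless of head movement.
HUD_OFFSET = 10.0

for head_yaw in (0.0, 15.0, 30.0):
    anchored = world_to_view(LABEL_BEARING, head_yaw)
    # The anchored label drifts across the view as the head turns;
    # the HUD label is glued to the display.
    print(f"head at {head_yaw:>4}°: anchored label at {anchored:>5}°, "
          f"HUD label at {HUD_OFFSET}°")
```

Turn your head all the way to 30 degrees and the anchored label lands at 0 degrees, dead center over its object; the HUD label hasn't budged from its fixed offset.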

This may seem like a small difference. In fact, the ability of AR glasses to scan the environment and place digital objects within that environment makes them in one important sense the opposite of Google Glass-like heads-up displays.

A heads-up display is just a hands-free display.

An augmented reality capability tags or maps or digitally registers objects in the real world in virtual space. And that's everything.

An AR display gives you digital data directly associated with objects or places that can exist in and interact with 3D physical space.

When the Magic Leap and HoloLens demos came along, the public began to understand that augmented reality (or "mixed reality," as it's sometimes called) sounds cool. But useful? For what?

Distracting applications

AR confusion is caused in part by everyone's personal experience with, or personal reading about, actual existing AR applications.

AR, we have learned, is for playing Harry Potter: Wizards Unite or getting graphical walking directions on Google Maps with your phone.

If you're in Japan, maybe AR is good for creating ghosts that haunt your grandparents' graves.

Or if you're an iPhone user, maybe you're hoping AR will help you find your lost keys.

In the enterprise, it's good for conjuring up instructions or details during manufacturing, we've learned.

These applications can lead us astray about what AR might be good for and how it can work.

How we should be thinking about AR instead

Back in the early days of the personal computer, everyone had a vague sense that PCs would change everything. But how? Well, uh, now you can use a spreadsheet to track your expenses! And you can get rid of that typewriter, because with a word processor you can both copy AND paste!

We couldn't form a clear picture of what would become truly revolutionary about personal computers, because three future developments were missing from the equation:

  1. Moore's Law (radical miniaturization and cost reduction)
  2. Networking, wireless networking and the internet
  3. Software and services that would take advantage of 1 and 2

Yes, we knew all these things were coming. We just couldn't picture where they would take us as they developed in combination over decades.

And that's now where we're at with AR. We have rudimentary applications. And we all understand that it's supposed to be revolutionary. We just can't see how, exactly. Much of the thinking around AR struggles to see past novelty applications, like AR business cards where your head pops out of the card when you look at it through a smartphone app.

Just like the PC, the same missing three elements cloud our thinking about how AR will change everything.

The first is miniaturization and cost reduction. Augmented reality will launch into the stratosphere, probably displacing smartphones, when reasonably high-resolution optics and other components can be built into glasses that pass as ordinary prescription glasses.

All the usual suspects are working hard on, investing billions in and winning patents for, this project: Apple, Google, Microsoft, Tencent, Sony, Facebook, Amazon, Samsung and others.

When such glasses exist affordably, even people with perfect eyesight will wear glasses just to augment reality.

The second missing element is the networking part. No, I'm not talking about 5G, although that will certainly help, especially in the enterprise.

I'm talking about mapping, tagging or registering physical buildings, vehicles, objects and people so that -- like the digital world -- the physical world will be searchable, hyperlinked and function as communications media. Physical spaces will be app platforms.
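Here's a toy Python sketch of that idea (all names, tags and coordinates are hypothetical): once physical objects are registered with positions and tags in a shared world frame, the space around you becomes queryable, like a database.

```python
from dataclasses import dataclass
from math import dist

@dataclass
class Anchor:
    """A physical object registered in a shared spatial map."""
    label: str
    tags: set
    position: tuple  # (x, y, z) in meters, in a shared world frame

# A hypothetical registry of tagged objects in one building.
REGISTRY = [
    Anchor("forklift-07", {"vehicle", "maintenance-due"}, (12.0, 3.5, 0.0)),
    Anchor("conference-room-B", {"room", "bookable"}, (40.0, 8.0, 0.0)),
    Anchor("fire-extinguisher-3", {"safety"}, (13.0, 2.0, 0.0)),
]

def search_nearby(position, tag, radius_m):
    """'Search the physical world': tagged anchors within radius_m meters."""
    return [a for a in REGISTRY
            if tag in a.tags and dist(a.position, position) <= radius_m]

# Standing near the loading dock, ask for safety equipment within 5 m.
hits = search_nearby((12.0, 3.0, 0.0), "safety", radius_m=5.0)
print([a.label for a in hits])
```

The query returns the nearby fire extinguisher; swap in glasses for the keyboard and the same lookup becomes a label floating over the real object.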

As with the web, we'll be able to see all the general-public augmentation that everyone sees. We'll see a personalized version. And we'll also be able to apply the AR equivalent of "browser extensions" to process and interface with the physical world in unusual ways.

And the third is the first two developing in combination over time and being expressed in increasingly innovative apps. It's impossible to predict what these apps will be like, just as it was impossible in 1980 to predict the society-changing effect of Facebook or TikTok.

There's one more element that will launch AR into some unknown future stratosphere: Artificially intelligent virtual assistants, which will whisper knowledge and advice in our ears and show it in our AR visuals as we move around in physical space, will give us something that will feel like omniscience. Instead of our having to search for it, knowledge will be automatically and constantly poured into our eyes and ears.

In addition to anchoring augmentation data in physical space, we'll also anchor it to objects and people through image recognition.

Have you ever had the desire to search a physical printed book, based on the mental habit formed by reading digital words? AR will make that trivially easy.

We tend to assume that various objects and resources will be augmented with data or digital experiences by companies like Google. And that's true. Google is already doing it. But much of this augmentation will be done by users simply using the AR technologies of the future. Volumetric capture, for example, where indoor spaces are mapped in 3D, will probably happen constantly and automatically. Each person's capture will improve the resolution of and update the data for all people using that same physical space.

In the enterprise, the internal mapping of every resource will equate to an internal network or intranet. Purpose-built in-house apps will be built on top of this data, enabling capabilities beyond the ones we can imagine today, as well as detailed virtual instructions and knowledge bases for every piece of equipment, and wizard-like training systems on the factory floor.

It's likely that both AR and 5G will move far faster in the enterprise than in the consumer space. In fact, it's already happening.

But many of the technologies that will enable next-generation AR are already being built into smartphones. The latest iPhones running iOS 13, for example, can perform body tracking (identifying and mirroring a human body's movements), occlusion tracking (tracking where objects are in 3D space, even when they're behind other objects), world tracking (accounting for the device's own movement while tracking external objects), distance estimation of objects and other capabilities that will fit easily inside glasses.

And, of course, Apple, Google and other companies offer augmented reality development tools that are being used for smartphones and other device types.

In fact, the industry is bursting at the seams to offer AR experiences through glasses. The makers of all those apps that deliver AR through the phone -- an awkward experience -- know those same applications will be amazing through glasses.

Even more compelling is that the technology already exists for companies to make AR glasses that look close enough to ordinary glasses. Unfortunately, such glasses might cost $50,000 or more. So we're all waiting for the price to come down on the technologies that enable wireless, all-day-battery, high-resolution smart glasses that pass as ordinary glasses.

As the prices come down, the dam will burst and AR will become a runaway platform, just as smartphones did 10 years ago.

AR will change everything, much like smartphones did.

But the biggest thing AR will do is make the physical world function a lot more like the digital world -- searchable, hyperlinked and app-enabled. And for enterprises, that's the revolution.

Copyright © 2019 IDG Communications, Inc.
