As Internet technology matures and evolves, we are seeing the emergence of the "sensory" internet, a multi-sensory virtual space where devices can transmit smell, taste, and touch over the web. Examples of these sensory technologies can be seen in products available today, including:
- 3D glasses that surround wearers with holographic imagery (example: Microsoft’s HoloLens) such that users can roam through virtual environments;
- The use of voice commands and gestures (HoloLens can be commanded by voice, hand gestures – and can track eye movements and adjust holograms to where the viewer is looking);
- 3D sound (spatial surround – audio positioned so it appears to reach the listener from specific points in the environment); and,
- Haptic technologies (for instance, HiWave Technologies offers a haptic controller integrated circuit that simulates touch, while Senseg has developed a touch technology that generates the feeling of touching virtual buttons on smooth surfaces). [Disclosure: Microsoft is a client of Clabby Analytics.]
Of these new and evolving sensory technologies, I’m most excited about Microsoft’s HoloLens – a wearable, glasses-driven 3D/hologram environment. This environment uses a powerful depth camera to sense its surroundings and help create images; a powerful, self-contained computer captures data from the surrounding environment using up to 18 sensors and is capable of processing terabytes of data every second; and that computer controls lenses that help create depth perception.
I remember wondering back in 2000 how a glasses-based environment would be able to process all of the data needed to display holographic images – I suspected that back-end computers would have to process that data and then find a way to wirelessly present it to glasses displays. Further, I suspected that the commercial processor technologies of that time (x86, POWER, UltraSPARC, and Itanium) would reach their physical limits somewhere in the 5-10 GHz range – so I was still unsure how even the most modern back-end processors would be able to process terabytes of data and then transmit holographic representations wirelessly to 3D display glasses.
The answer to my computer headroom/processing quandary comes in the form of specialized processors. In my day job I’m a technology research analyst – and much of my research agenda for the past year has been focused on changes taking place in server technologies that allow specific workloads to be processed exponentially faster than ever before.
More specifically, I’ve been writing about how traditional central processing units (CPUs) are being complemented by graphics processing units (GPUs) and field programmable gate arrays (FPGAs) to streamline parallel processing and data communication speed. As examples of this shift in traditional server design, consider this report on VelociData (a streamlined x86/FPGA environment), The Now Factory (another x86/FPGA environment) and this CRN news report on the Nvidia/IBM GPU/traditional processor interrelationship. This same idea – using specialized, optimized processors to drive specific computing functions – is now being applied by Microsoft to build 3D holographic glasses.
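The division-of-labor idea behind these hybrid server designs can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the pattern – the unit names and workload categories are my own, not any vendor's actual API:

```python
# Hypothetical sketch of heterogeneous dispatch: route each class of
# workload to the processing unit best suited to it. This is the pattern
# behind CPU+GPU and CPU+FPGA server designs; the names below are
# illustrative only.

def run_on_cpu(task):
    return f"CPU handled {task} (branchy, general-purpose logic)"

def run_on_gpu(task):
    return f"GPU handled {task} (wide, data-parallel math)"

def run_on_fpga(task):
    return f"FPGA handled {task} (fixed-function streaming pipeline)"

# Dispatch table: match each workload class to its specialized unit.
DISPATCH = {
    "control_flow": run_on_cpu,
    "matrix_math": run_on_gpu,
    "stream_filtering": run_on_fpga,
}

def schedule(task, kind):
    # Fall back to the general-purpose CPU for anything
    # that has no specialized unit assigned.
    return DISPATCH.get(kind, run_on_cpu)(task)

if __name__ == "__main__":
    print(schedule("render frame", "matrix_math"))
    print(schedule("parse config", "control_flow"))
```

The point of the sketch is simply that the scheduler, not the workload, decides where the computation runs – which is what lets each unit stay narrowly optimized.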
A Closer Look at Microsoft’s HoloLens
Systems designers have known for years that it is important to choose the right processor to handle the right job. For instance, this abstract published in 2009 describes how an Intel x86 processor was used to process a cylindrical hologram, taking 4,406 hours to complete the job. By switching to a GPU, which was designed for processing highly parallel, streamed workloads, the same job was processed in 95 hours – a roughly 46X improvement. And this was before the invention of holographic processing units (HPUs).
A closer look at Microsoft’s HoloLens shows that it uses a CPU, a GPU and a specialized holographic processing unit to process terabytes of data being delivered to these processors by sensors in the glasses. Each of these processing units has been assigned specific computing tasks for which they have been optimized.
For instance, one processor can be used to process and project data, while another is used to manipulate lenses to create appropriate holographic depth. HPUs are capable of processing terabytes of information from sensors in real time.
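Since Microsoft has not published the HPU's internals, the pipeline described above can only be sketched speculatively. Here is one hypothetical way the three units might hand work to each other – the stage names and responsibilities are my assumptions for illustration, not Microsoft's actual architecture:

```python
# Hypothetical three-stage pipeline mirroring the CPU/GPU/HPU split
# described above. Stage responsibilities are assumptions for
# illustration; Microsoft has not published the HPU's design.

def hpu_stage(raw_sensor_frames):
    # Assumed HPU role: fuse high-volume sensor data
    # into a spatial map of the room in real time.
    return {"spatial_map": len(raw_sensor_frames)}

def cpu_stage(spatial_map):
    # Assumed CPU role: application logic, e.g. deciding
    # which holograms to place and where to anchor them.
    return [{"hologram": i, "anchor": spatial_map["spatial_map"]}
            for i in range(2)]

def gpu_stage(holograms):
    # Assumed GPU role: render each hologram with the
    # depth cues the lenses need.
    return [f"rendered hologram {h['hologram']}" for h in holograms]

frames = ["depth", "camera", "imu", "gaze"]
rendered = gpu_stage(cpu_stage(hpu_stage(frames)))
print(rendered)
```

Whatever the real internals turn out to be, the design principle is the same as in the 2009 hologram experiment: keep each unit doing only the work it was built for.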
As part of my research for this blog I requested further information from Microsoft (via their press relations agency) regarding the actual structure of the HPU. What I want to know is how these processors are structured, how many cores they use, what their power consumption characteristics are, and more. Unfortunately this information is not readily available at this time (but I will keep trying to find out more about the technical characteristics of Microsoft’s HPU).
In 2000 I wrote a book entitled Visualize This: Collaboration, Communication, and Commerce in the 21st Century in which I argued that one of the future directions for the Internet would be sensory driven. I called this evolution the “sensory virtual Internet” – and I described some of the technologies that were under development at that time that could enable people to experience the world around them using sensory technologies.
In that book I described how:
- The present two-dimensional (2D) world would evolve toward 3D imagery and holographics;
- Haptic (touch) technologies would simulate “feel” (a process known as tactiotation);
- Sound would become more realistic (multidirectional);
- Taste would be experienced (by printing sweet, sour, bitter, salty and umami [savory] flavors on bland, consumable cards); and,
- Smell would be electronically produced (laughingly referred to at the time as “smell-o-vision”).
There were several shortcomings in that book, such as a failure to identify the importance of sensor technologies (Internet-of-Things devices such as seismic devices, temperature sensors, medical sensors and more) that feed data to today’s Big Data analytics systems. And I failed to predict the rise and importance of mobile devices (I was not alone here – several major vendors and research analysts also missed this evolving trend). And I admit that I had an overzealous view that sensory technologies would be heavily funded – a view that burst when the tech bubble popped in 2002. (Note, however, that virtual reality technologies are now being heavily funded according to the National Venture Capital Association and various news reports. Venture firms have recently invested more than $1 billion in virtual reality systems, believing that next-generation big computing platforms will emerge from virtual- and augmented-reality projects.)
In a way, I’m glad I wrote my book about the “sensory virtual Internet” fifteen years ago. It was fun to take a look at a wide range of technologies and try to envision what the combination of those technologies might bring in the future.
I must admit that I did not know how some of the technical challenges, such as holographic display, would ultimately be handled. I did not foresee the evolution of specialized microprocessors, or the miniaturization of storage, that would lead to the ability to create a wearable holographic environment (so my book suggested that sensory computing would have to happen on back-end servers).
Still, I remember that I had every confidence that these types of technologies would be built. That belief stemmed from a conviction that there would be a personal need for these types of technologies for entertainment, and by businesses for design, collaboration and commerce. I also believed that engineers would be able to overcome several of the technology challenges that I identified in the book (I’ve met thousands of engineers in the course of my career – and I knew engineers would be unable to resist conquering the challenges of building advanced sensory environments).
I’ve got to admit that I’m pretty excited about someday entering the sensory virtual Internet. I foresee new learning activities, new ways to collaborate, new ways to visualize my work, new ways to create things and new games to be played.
Perhaps someday our paths will cross as we traverse virtual worlds in the sensory virtual Internet of tomorrow.
This article is published as part of the IDG Contributor Network.