Mobile remains a very competitive business, with major smartphone vendors continually trying to outdo one another and consumers often left doubtful that the latest gadget is really something they need to buy.
Previous rounds of mobile competition were all about having the best display or the fastest processor. Early on-board assistants, such as Siri and Google Assistant, raised the bar and gave us a taste of what was coming. These days, however, those features are much less differentiated by brand. It’s time for the next wave.
The next generation will be about “smart,” and it is now making its way into our everyday mobile devices. It’s not just about voice interfaces to a search engine or calendar entry, as we’ve had in the past, nor is it about rudimentary augmented reality/virtual reality. We’re now seeing truly smart assistants that learn about us and alter the functioning of our devices as they go. They have the potential to dramatically alter both how we interact with our devices and how they interact with us. And we’ve only scratched the surface of new visual interactions.
Artificial intelligence linked to mobile devices
This new strategy is emerging as more vendors deploy an AI cloud behind, and closely linked to, their devices. Services such as Samsung Bixby, which started as a way to help users navigate functions on their devices, are now being extended to include interactions beyond the device and into the real world.
Clearly, Amazon Alexa and Google Assistant are not popular just because they let you interact with your devices using your voice. Rather, they are increasingly popular because they tie our devices to other things around us (e.g., home automation) and make it much easier to perform complex tasks.
Assisted reality to become part of our devices
But “smart” doesn’t stop there. With the advent of more capable mobile chipsets, including enhanced graphics and AI capability from the likes of Qualcomm, Samsung, and Huawei, we’re now in a position to see “assisted reality” become part of our mobile devices.
This will be even more compelling once we move to 5G networks, which offer faster speeds and, importantly, much lower latency. But even on current 4G/LTE Advanced networks, devices can guide us in the real world by providing visual cues and superimposed images, often based on internal 3D visual sensors, and a smart ecosystem is emerging around this capability to offer many more intelligent ways to interact with our world. This will be a major battleground in the next two to three years as vendors try to outdo one another in providing assisted reality capability, generally without the head-worn displays that many consumers find unappealing.
Apps that understand our moods and emotions
Voice interfaces and assisted reality are not the only smart features coming. Apple’s visual login capability, demonstrated on the iPhone X, may indeed be revolutionary, even though visual logins have been done before. More importantly, it’s a first step toward something potentially much more profound: it will ultimately allow apps to understand our moods and emotions, much as people do when they speak to one another and read facial expressions.
This not only provides emotional feedback, but it can potentially be used in many other important ways: reading the facial expressions of people who may not be able to communicate in normal ways, monitoring a patient’s health, or creating a new way to secure data, passwords, and logins through unique facial expressions.
In the next two to three years, I expect to see a plethora of new and innovative uses for advanced facial recognition technologies, and I expect most vendors to make the capability an inherent part of their offerings.
While I expect the typical players (e.g., Amazon, Google, and Apple) to be dominant in this emerging market, it’s unclear yet how well Microsoft will do. Cortana is a good assistant, and Microsoft clearly has high levels of expertise in all aspects of assisted reality and AI. But without its own mobile ecosystem to play on, it is reliant on enticing vendors to support its offerings. That may be a hard sell to phone vendors bound to the Android ecosystem, but I do expect Microsoft to be successful with its smart technology in most enterprise uses of smart mobile.
The "smart" enterprise
All of this “smart” that's coming in the next few years will start out in consumer devices, but it is destined to become an important part of enterprise use, as well. Things such as assisted reality, emotional monitoring and visual cues, and smart virtual assistants will become an important part of logging in, safety monitoring of users, just-in-time training, and on-the-job assistance, among many other purposes. They will ultimately make enterprise users more productive and allow enterprise apps to be much more intuitive and easier to use, while also making the work environment safer.
Enterprise deployments generally lag consumer tech by two to three years, but I expect this time around they will be implemented fairly quickly, since many of the services associated with this new tech will be tied to the cloud, which enterprises are already adopting in a big way.
The next two to three years will see a large impact from “smart” mobile devices as service providers, including Apple, Amazon, and Google, make their systems universally available, and as vendors with the needed resources, such as Apple, Samsung, Huawei, and LG, add increasingly sophisticated tech to their devices, sometimes as a hardware enhancement and sometimes as their own implementations of cloud services (e.g., Samsung Bixby).
Although companies like Baidu will be content to play in their home market for the short term, they clearly have visions of becoming major international rivals to the big players. With their massive scale and considerable resources, I expect Baidu and other Chinese players to eventually achieve broad market penetration, although that will likely take three to five years. Nevertheless, you will see “smart” coming to your device very soon.