Of late, the future of artificial intelligence (AI) and truly ‘smart’ computing has been a major topic of fascination. The interest perhaps stems from Stephen Hawking’s comments last month that AI has the potential to ‘supersede’ humanity – but it is also almost certainly tied to the swelling number of recent innovations in AI technologies.
Those at the forefront of building such technologies have little doubt about their potential. People like Google’s Ray Kurzweil speak of a not-so-distant future where man and machine are increasingly intertwined. But whatever you believe about the merits or dangers of machines that can ‘think’ like humans, there is no denying the recent leaps in achievement that are beginning to surface. While it may be impossible to predict accurately how this will all play out over the next twenty years, we can connect the dots and understand at least what might come next.
We are rapidly leaving the traditional era of ‘programmatic computing’ and entering an age of ‘cognitive computing’ and ‘deep learning’, made possible by innovations in natural language processing and neural networks. In a nutshell, this means that computers no longer need to be explicitly coded to accomplish a given task. Rather, through pattern recognition and by iterating on past outcomes, computers can ‘learn’ by adapting and responding to our natural language, whether written or spoken.
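To make the contrast concrete, here is a deliberately tiny sketch of learning from examples rather than hand-coded rules. The phrases, labels, and scoring scheme are invented for illustration – real cognitive systems use far richer models – but the idea is the same: behavior comes from patterns in data, not from explicitly programmed rules.

```python
from collections import Counter, defaultdict

# Invented toy examples: instead of hand-coding rules, the program
# "learns" word/label associations from labeled training phrases.
training_data = [
    ("great service and helpful staff", "positive"),
    ("wonderful, friendly experience", "positive"),
    ("terrible wait and rude staff", "negative"),
    ("awful, slow and unhelpful", "negative"),
]

# Count how often each word appears under each label.
word_counts = defaultdict(Counter)
for text, label in training_data:
    for word in text.lower().split():
        word_counts[label][word.strip(",.")] += 1

def classify(text):
    """Pick the label whose training vocabulary best matches the text."""
    words = [w.strip(",.") for w in text.lower().split()]
    scores = {label: sum(counts[w] for w in words)
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(classify("the staff were friendly and helpful"))  # → positive
```

Nothing in `classify` mentions “friendly” or “helpful” by name; the association was picked up from the examples, which is the essence of the shift the paragraph describes.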
As you might imagine, applications taking advantage of these leaps have the potential for serious impact. A few months back, I had the pleasure of interviewing Stephen Gold, the VP of IBM’s Watson division. IBM has been the most vocal company in publicizing its cognitive computing prowess. Many of you might remember Watson’s historic Jeopardy! run a few years ago, and the technology has made significant leaps since then. For this reason, talking with Stephen provided an interesting perspective on what we can soon expect to see.
The most important thing to understand is that cognitive computing in its current state is not any sort of sentient or self-aware computer like you might see in the movies. Instead, we are talking about technology with a new ability to augment human decision-making. The application of these technologies to specific problem sets is where we will see the first major mainstream impacts. For instance, IBM has publicized ‘Chef Watson,’ an application that uses an extensive knowledge of human tastes and the chemical composition of foods to suggest wild and counterintuitive combinations of ingredients that produce appealing dishes.
More important, in terms of business impact, are the vast opportunities to enhance professional decision-making. The medical and legal professions represent key industries in which cognitive computing is poised to make an early impact. These professions rely heavily on referring to a vast body of information that is constantly being updated through publications, discoveries, precedents, and advances in technology. Indeed, some estimate that the half-life of medical knowledge – the time it takes for current information to become obsolete – is merely seven years.
It is impossible for a single human to digest the constant deluge of new information, let alone understand it in the context of existing knowledge. But cognitive systems such as Watson can do just that, because they have the capacity to digest millions of pages of literature and journal articles and to build continually upon their knowledge base. This, combined with the supreme computational abilities of computers, creates an opportunity for professionals to augment their expertise and decision-making by using these systems as a tool. For example, a cognitive system may attach confidence levels to the likelihood of certain diagnoses.
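A minimal sketch of that last idea: ranking candidate diagnoses with normalized confidence scores. The condition names and symptom weights below are entirely invented for illustration – they are not medical knowledge, and this is not how Watson actually works internally – but they show the shape of “augmented decision-making”: the system surfaces ranked options with confidence, and the professional decides.

```python
# Hypothetical knowledge base: invented conditions and symptom
# weights, purely for illustration.
knowledge_base = {
    "influenza": {"fever": 0.9, "cough": 0.8, "fatigue": 0.7},
    "common_cold": {"cough": 0.6, "sneezing": 0.9, "fatigue": 0.3},
    "allergies": {"sneezing": 0.8, "itchy_eyes": 0.9},
}

def rank_diagnoses(symptoms):
    """Score each condition by its matching symptom weights,
    then normalize so the scores sum to 1."""
    raw = {cond: sum(weights.get(s, 0.0) for s in symptoms)
           for cond, weights in knowledge_base.items()}
    total = sum(raw.values()) or 1.0
    return sorted(((cond, score / total) for cond, score in raw.items()),
                  key=lambda pair: pair[1], reverse=True)

for condition, confidence in rank_diagnoses(["fever", "cough", "fatigue"]):
    print(f"{condition}: {confidence:.0%}")
```

The output is a ranked list, not a verdict – which is precisely the “tool for professionals” framing rather than a replacement for their judgment.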
But of course, IBM is far from the only player in the space. Google has invested vast sums in this area, albeit less publicly. In 2014 Google went on a massive shopping spree, acquiring 30 companies, at least four of them focused on artificial intelligence. There is also Cognitive Scale, a company founded by an ex-IBM Watson pioneer, which is making waves by using cognitive computing to provide ‘insights as a service’.
Another interesting front-runner is Microsoft, which has made huge strides in its AI efforts with Project Adam – and it is continually tying that work together with its Cortana technology as a front end. We can already do simple things like talk to an Xbox, but the dominance of Microsoft’s office tools like Word, SharePoint, and Outlook points to a much larger impact: Microsoft has the opportunity to provide every user with a smart personal assistant. We will start to see the rise of truly useful virtual assistants. Virtual assistants originally burst into the mainstream with Siri; now many of us are familiar with Google Now and Microsoft’s Cortana as well.
At the moment, beyond tasks like setting reminders or answering basic structured questions, these assistants are fairly limited in their helpfulness to our daily lives. With the rise of better cognitive systems, however, employees could have a virtual assistant that helps them orchestrate projects, schedule meetings, and manage their email. There is an opportunity to fundamentally change how we work – a promise made more compelling by the fact that these cognitive systems will be able to learn and adapt to each person’s particular situation.
Innovations in this arena are progressing rapidly and will open a number of unexplored frontiers. In five to ten years, as wearables become more ubiquitous, as devices become increasingly connected, and as cloud technologies make access to cognitive systems ever easier, much of what we interact with will have some form of cognitive component. On the consumer side, imagine fitness devices and software that suggest the optimal workout for your body type. These recommendations would adapt as your routines and health profile change, and might even shift as new exercise and health research is published.
Whatever the future manifestations of artificial intelligence may be, they will undeniably be shaped by the coming megatrend of the Internet of Things (IoT). As everything becomes increasingly connected, we will need a way to intelligently orchestrate the actions triggered by our interdependent network of devices. The speed at which decisions must be made for rerouting shipments, trading stocks, controlling facilities, and other applications will demand smart, automated decision-making. Businesses will require cognitive systems that can make sense of and optimize the deluge of data coming from every asset, location, and possibly even every employee.
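The orchestration described above can be sketched, at its very simplest, as automated decisions triggered by a stream of device events. The event names, thresholds, and actions below are invented for illustration – real systems would learn and adapt these policies rather than hard-code them – but the sketch shows why machine-speed decisions are needed once every asset reports continuously.

```python
# Hypothetical event-to-action rules: names and thresholds are
# invented for illustration only.
def decide(event):
    """Map a single device event to an automated action."""
    kind, value = event
    if kind == "port_congestion" and value > 0.8:
        return "reroute_shipment"
    if kind == "warehouse_temp" and value > 30:
        return "increase_cooling"
    if kind == "stock_level" and value < 10:
        return "reorder_inventory"
    return "no_action"

# A small invented event stream from connected assets.
events = [("port_congestion", 0.92), ("warehouse_temp", 24),
          ("stock_level", 4)]
for event in events:
    print(event[0], "->", decide(event))
```

A cognitive system would go one step further than this fixed rule set, tuning thresholds and discovering new rules from the data itself – but even the fixed version makes clear that no human could review such a stream event by event.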
Overall, the public facing applications of artificial intelligence and cognitive systems are still in their infancy. While it remains to be seen exactly how they will impact our daily lives, it is impossible to deny their potential to shape our future.
This article is published as part of the IDG Contributor Network.