iPhone, Siri and mind control: the future evolution of the smartphone

It would be foolish to underestimate the importance of Apple’s [AAPL] Siri and other voice-activated intelligent assistants. These solutions are big steps toward new user interfaces: one day your iPhone may be controlled by your mind.

Mental machinery

Sound far-fetched? Perhaps, but rest assured work’s already taking place to perfect the mind-controlled computer, a topic explored by the late Macintosh user interface expert Jef Raskin.

Mind-reading has been wishful thinking for science fiction fans for decades, but their wish may soon come true, IBM said. “You would just need to think about calling someone, and it happens.” IBM believes it will be making mind-controlled PCs and phones by 2016.

In a December press release, IBM explains: “Scientists in the field of bioinformatics have designed headsets with advanced sensors to read electrical brain activity that can recognize facial expressions, excitement and concentration levels, and thoughts of a person without them physically taking any actions.”

For decades the standard GUI was the only way to use any mass-market device. This changed with the iPhone, which opened the floodgates for other forms of touch-based interface, allegedly also in simultaneous development at other firms. Microsoft had been working for years to perfect such a system, but its solutions failed to gain traction.

Evolution of the interface was inevitable, answering criticisms propagated by Raskin in his Down With GUIs! report: “Graphical User Interfaces (GUIs) are not human-compatible. As long as we hang on to interfaces as we now know them, computers will remain inherently frustrating, upsetting, and stressful.”

IBM isn’t the only major technology firm exploring thought as the next evolution of the interface. Intel’s also exploring this -- that’s why company CTO Justin Rattner chose to wear brain-controlled bunny ears at IDF 2012. “At Intel Labs we’re part of that perceptual computing effort,” he said. “We decided, of course, since we’re research people, we would go all the way to mindreading. And here’s the prototype.”

Complex solutions

These systems are complex -- but we’ve seen work done on complex user interfaces before. Voice control, for example, has been a long-term project that is only now becoming suitable for the mass market.

Apple’s Siri is powered by technology from Nuance, the world’s leading voice recognition firm. The beauty of the system is that each time you use it, it learns a little more about how humans say things and what they say, and it grows more forgiving of dialects and regional accents. As the data stored at Nuance grows, the company becomes capable of delivering even more complex systems.
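Nuance doesn’t publish the details of that adaptation pipeline, but the principle is easy to sketch. Here’s a minimal toy in Python, assuming a recognizer that simply remembers which word each sound pattern most often turned out to mean; every name in it is hypothetical:

```python
from collections import defaultdict

class AdaptiveRecognizer:
    """Toy model of accent adaptation: every confirmed correction
    nudges the recognizer toward the pronunciations it actually hears."""

    def __init__(self):
        # sound pattern -> {intended word: observation count}
        self.observations = defaultdict(lambda: defaultdict(int))

    def record_correction(self, heard: str, intended: str) -> None:
        # The user (or context) confirms what a sound pattern really meant.
        self.observations[heard][intended] += 1

    def best_guess(self, heard: str) -> str:
        # Return the word most often intended for this sound pattern.
        candidates = self.observations.get(heard)
        if not candidates:
            return heard  # no history yet: take the input at face value
        return max(candidates, key=candidates.get)

recognizer = AdaptiveRecognizer()
recognizer.record_correction("tomahto", "tomato")
recognizer.record_correction("tomahto", "tomato")
print(recognizer.best_guess("tomahto"))  # -> tomato
```

A real system works over phonetic models and millions of users rather than literal strings, but the loop -- hear, correct, re-weight -- is the same.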

Today, Nuance is working with chipmakers to develop solutions that will let users control their phone without touching it.

You can already see the impact of this work within iOS 6 and Siri (still in beta), which now lets you use your voice to send messages, place phone calls and more.

“...ask Siri to update your status on Facebook, post to Twitter or launch an app. Additionally, Siri takes hands-free functionality even further with a new Eyes Free mode, enabling you to interact with your iPhone using nothing more than your voice,” Apple explains.

I’m not pretending Siri is perfect. It’s not. But it is a hint of what’s to come.

Interviewed by MIT’s Technology Review, Nuance CTO Vlad Sejnoha opines that within a year or two you won’t just speak to Siri to tell it to do things; you’ll be able to simply ask it questions aloud, such as “When’s my next appointment?”

The phone will detect that you are speaking, wake itself up, and tell you the answer or perform another task. Sejnoha observes: “Just turning on the device is part of the problem, right? So we’re going to be smoothing that out, eliminating those problems as well.”

In other words, you won’t need to press a button to wake your phone in order to activate Siri any more.
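Neither Nuance nor Apple has described how such always-on activation would be built, but the usual approach is two-stage: a tiny, low-power detector listens only for a wake phrase, and the expensive full recognizer runs only after it fires. A minimal sketch in Python, with simulated audio standing in for the microphone:

```python
# A minimal sketch of two-stage, always-listening activation. Everything
# here is hypothetical: a real phone runs the first stage on a low-power
# chip so the main processor can stay asleep.

WAKE_PHRASE = "hey assistant"

def wake_word_detected(transcript: str) -> bool:
    """Stage 1: a tiny detector that knows only the wake phrase."""
    return WAKE_PHRASE in transcript.lower()

def handle_request(transcript: str) -> str:
    """Stage 2: the full recognizer, invoked only after the wake phrase.
    In a real system this is where heavyweight speech understanding runs."""
    return f"Handling: {transcript!r}"

# Simulated microphone feed standing in for continuous audio capture.
incoming = [
    "so anyway, I said to him...",                # ignored: no wake phrase
    "hey assistant, when's my next appointment",  # wakes the device
]

for transcript in incoming:
    if wake_word_detected(transcript):
        print(handle_request(transcript))
```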

Life beyond voice

However, as the user interface evolves to become more sensitive to voice control, the knowledge gathered also feeds into ongoing research on other forms of control. Once you figure out how to make a device responsive to a voice, some of the technical challenges solved in that research can be applied to similar challenges in other forms of input.

This means it’s not at all surprising that mobile chipmakers are exploring mind-controlled devices as a future for the interface. Qualcomm runs an interesting blog called Qualcomm Spark, publishing articles it commissions for use there. These don’t necessarily reflect company opinion, but it is interesting that one report looks at how mind-controlled devices will change our future.

Written by Emotiv Lifesciences founder and CEO Tan Le, this report looks at that company’s work in mind-controlled devices in the form of its EPOC neuroheadset, which reads and interprets brainwave patterns: “The headset’s multisensor ‘arms,’ which extend to the front and back of your head, pick up electrical signals from different functional parts of the brain. Both subconscious and conscious mental states can be detected using advanced algorithms, allowing the computer to react more naturally to the user’s mental state and even to accept direct mental commands.”
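Emotiv doesn’t disclose its algorithms, but the general recipe for this kind of classifier is well known: reduce each moment of EEG to a handful of frequency-band power features per sensor, then match those features against signatures recorded while the wearer practiced each mental command. A toy nearest-match version in Python, with invented numbers:

```python
import math

# Toy "mental command" classifier. Real EEG pipelines filter raw voltages
# into frequency-band power features per sensor and feed them to a trained
# model; the signatures and sample below are invented for illustration.

# Per-user calibration: average feature vectors recorded while the wearer
# practiced each command.
SIGNATURES = {
    "push":    [0.8, 0.2, 0.5],
    "lift":    [0.1, 0.9, 0.4],
    "neutral": [0.3, 0.3, 0.3],
}

def classify(features):
    """Nearest-centroid match of live features against the calibration set."""
    return min(SIGNATURES, key=lambda cmd: math.dist(SIGNATURES[cmd], features))

live_sample = [0.75, 0.25, 0.45]  # features from the headset's sensors
print(classify(live_sample))      # -> push
```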

The potential impact of the technology is promising. For example, in theory it will be able to determine which music makes you happy or sad, and to figure out whether you’re happy or sad right now. This means that if you put your headset on when you’re down in the dumps, the technology will be able to assess your mood and do something to improve it, perhaps playing a selection of tracks that generally cheer you up.

Around the home you may use your mind to turn lights on or off, adjust the thermostat, change channels on the television, make a phone call or set the security alarm -- or to control a wheelchair, or hold a conversation if you’re speech impaired.
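Wiring such detections to the home is then just dispatch. A sketch in Python, with hypothetical device actions, covering both the direct commands and the mood-support idea above:

```python
# A sketch of dispatching detected mental states to actions around the
# home. The detector output format and every device action are hypothetical.

def cheer_up_playlist():
    print("Queuing tracks that have lifted this listener's mood before")

def toggle_lights():
    print("Toggling the living-room lights")

def set_thermostat():
    print("Nudging the thermostat up two degrees")

# Conscious commands map to direct actions; detected moods trigger
# supportive ones, like the cheer-up playlist described above.
COMMANDS = {"lights": toggle_lights, "warmer": set_thermostat}
MOODS = {"sad": cheer_up_playlist}

def dispatch(reading: dict) -> None:
    if reading.get("command") in COMMANDS:
        COMMANDS[reading["command"]]()
    elif reading.get("mood") in MOODS:
        MOODS[reading["mood"]]()

dispatch({"command": "lights"})  # conscious thought -> device action
dispatch({"mood": "sad"})        # background state -> mood support
```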

Le admits that we’re only scratching the surface of where these technologies will go. The ability to control devices with your mind is still in its infancy. However, solutions such as the Emotiv headset or Mattel’s Mindflex 2, or voice-controlled devices such as a Siri-controlled iPhone, show that it is possible to develop responsive user interfaces.

Siri as a service layer

Apple today filed a patent describing the development of Siri into an intelligence that could potentially manage your experience for you, finding the right app for your need, whether you own it already or not. More on this on AppleInsider.

Clearly, in this evolution Siri is becoming something more than a voice-controlled assistant. It’s becoming an intelligent entity that serves up answers to your needs. Which interface you use to interrogate that entity matters less than the entity’s purpose: accurately resolving those queries for you. Siri then becomes a building block which can be accessed transparently.
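In software terms that’s an intent-resolution layer: every front end -- voice, touch, one day thought -- produces an intent, and a single resolver decides how to satisfy it. A minimal sketch in Python; all the app and intent names are invented:

```python
# A sketch of the service-layer idea: every front end produces an intent,
# and one resolver decides how to satisfy it. All app and intent names
# here are invented.

INSTALLED_APPS = {"send_message": "Messages"}
STORE_CATALOG = {"book_table": "SomeReservationApp"}

def resolve(intent: str) -> str:
    """Route an intent regardless of which interface produced it:
    an installed app, a suggested download, or a direct answer."""
    if intent in INSTALLED_APPS:
        return f"Opening {INSTALLED_APPS[intent]}"
    if intent in STORE_CATALOG:
        return f"Suggesting {STORE_CATALOG[intent]} from the store"
    return "Answering directly"

# Voice, touch or -- one day -- thought all funnel into the same resolver.
print(resolve("send_message"))  # -> Opening Messages
print(resolve("book_table"))    # -> Suggesting SomeReservationApp from the store
```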

Raskin would be pleased. “Designers forget that humans can only do what we are wired to do. Human adaptability has limits and today's GUIs have many features that lie outside those limits, so we never fully adapt but just muddle along at one or another level of expertise. It can't be helped: Some of the deepest GUI features conflict with our wiring. So they can't be fixed. Like bad governments, they are evil, well entrenched, and must be overthrown,” he wrote.

The evolution of platform-agnostic, cloud-based application-as-a-service solutions, and the development of universal solutions for cross-platform computing, mean that the PC becomes transparent. Mobile devices become keys to a much wider computing experience that’s inextricably connected to your daily life.

Controlled by voice or -- if IBM is correct -- within five years by your mind, your mobile device, your iPhone if you will, will also be a direct link to your entire computing experience. And if you need a desktop to work with, you’ll just pop your video glasses on to see it. Siri is one part of this journey; the iPhone is another.

Me? I don’t believe IBM has it right when it predicts brain-powered user interfaces within half a decade -- I suspect that as research continues we’ll encounter problems as hard to resolve as those which held back the development of voice control. However, the history of technology across the last fifty years suggests such challenges are eventually resolved, meaning that at some point in the future, making a call or editing a movie will be as simple as imagining what you want to happen.

I just hope we don’t have to put up with any preference-based advertising when that future happens. I don’t want stupid advertising jingles literally forced into my mind. 

Got a story? Drop me a line via Twitter or in comments below and let me know. I’d like it if you chose to follow me on Twitter so I can let you know when these items are published here first on Computerworld.
