You arrive at your desk in the morning and sit down in front of your computer. Instead of issuing a voice command to your PC, or reciting an email, or dictating a memo to your boss, you start typing and clicking. In the environs of the office, where speech technology could save us time and make us more productive, most of us are still stuck with keyboards and mice.
Yet once we're away from the office, many of us don't think twice about issuing voice commands to our smartphones -- whether that means voice-dialing the phone, speaking a search term to Google or asking Siri what today's weather will be like.
Companies that provide speech technology have invested heavily in "personal digital assistants" such as Apple's Siri and Google Voice Actions (available on many Android phones), which can understand natural-language commands, says Dan Miller, senior analyst and founder of Opus Research. In fact, most recent breakthroughs in speech recognition have come in cloud-based natural-language searches made from mobile devices, he says.
The main advance is that speech tools now travel with the user -- on our phones and tablets as we go about our day -- and many run in the cloud, which provides fast processing and a constantly expanding language database. Unlike older desktop-based software, these newer tools require no speech training, thanks to improved recognition algorithms. "We can be pretty imprecise in what we say," Miller says.