A.I.-powered assistants step into the enterprise

Artificial intelligence covers a broad spectrum of disruptive technologies that will one day change the way we do business. Virtual assistants are a good place to start.


Amy Ingram sends a short email to her boss's colleague, suggesting dates and times when the two could get together for coffee. She offers up the available times that the boss usually sets aside for such meetings.

Little does the colleague know that she isn't interacting with a human being. "Amy Ingram" is a name used by a virtual assistant that relies on artificial intelligence to schedule meetings.

Available from a New York-based company called X.ai, Amy has a LinkedIn page -- one that's conspicuously missing a photo. Jason Madhosingh relies on Amy to maintain his calendar. Amy has been taught to interpret his emails, which she is copied on, and if any messages mention breakfast, lunch, coffee or a phone call, she takes steps to schedule meetings in time slots he has set aside for each type of event.
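X.ai hasn't published the internals of how Amy works, but the behavior Madhosingh describes, scanning CC'd email for meeting-type keywords and matching each type to preset time slots, can be sketched roughly as follows. The keyword list and slot data here are hypothetical, purely for illustration.

```python
# Rough sketch of keyword-based meeting triage; not X.ai's actual implementation.
# The meeting types and time slots below are invented for illustration.
PREFERRED_SLOTS = {
    "breakfast": ["Tue 08:00", "Thu 08:00"],
    "lunch":     ["Wed 12:30"],
    "coffee":    ["Mon 15:00", "Fri 10:00"],
    "call":      ["Tue 16:00", "Thu 16:00"],
}

def suggest_slots(email_body: str) -> dict:
    """Return the preset slots for each meeting type mentioned in an email."""
    text = email_body.lower()
    return {kind: slots for kind, slots in PREFERRED_SLOTS.items() if kind in text}

print(suggest_slots("Would love to grab coffee or hop on a call next week."))
# -> {'coffee': ['Mon 15:00', 'Fri 10:00'], 'call': ['Tue 16:00', 'Thu 16:00']}
```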

"It has proven to be a really useful tool," says Madhosingh, head of product marketing at 1stDibs, an online marketplace for fine art and furnishings. "I started using it personally, but now I've discovered that it's very easy for me to use Amy professionally to schedule external meetings with people outside my company."

Virtual assistants like Amy (who sometimes goes by Andrew) have become wildly popular with consumers and are now crossing over from personal to professional use.

By the end of 2016, two-thirds of consumers in mature markets will regularly use virtual personal assistants in their daily lives, according to Gartner.

Virtual assistants are the most basic form of artificial intelligence -- the ability of a machine or software to mimic human intelligence through experience and learning, and perhaps answer intricate questions and solve complex problems.

A.I. systems are also referred to as cognitive technologies or (at a higher level) smart machines -- a term Gartner uses for technologies that it describes as "adaptive, curious and insightful."

For example, popular voice-based digital assistants, such as Siri, Cortana and Google Now, can understand our words, analyze our questions and, more often than not, point us in the general direction of the right answer.

There are even new writing analysis tools that use A.I. to help people polish their prose, going well beyond grammar and spelling checkers.

And organizations are realizing that artificial intelligence can streamline operations, save time, improve accuracy and lower costs. Indeed, thanks to cloud computing, advances in processing power, improvements in storage accessibility, and our expanding ability to harvest vast amounts of diverse types of data, some A.I. technologies are almost ready for widespread adoption in the enterprise.

Mike Walker, analyst at Gartner

Virtual assistants are laying the foundation for a mind-blowing A.I. invasion that will one day dramatically impact life, business and the global economy. "While we're still at a foundational level today, technologies are rapidly evolving," says Mike Walker, an analyst at Gartner.

But questions remain. Are A.I. technologies sufficiently safe and reliable for enterprise use? Are current systems lifelike enough for tech-savvy, time-constrained employees and customers? And how is IT managing this influx of new technologies?

Industry analysts encourage organizations to experiment with virtual assistants and other A.I. technologies today, or risk being unprepared when the tipping point comes.

Virtual assistants

Virtual assistants are just scratching the surface of A.I. capabilities, says Michele Goetz, an analyst at Forrester. "Using assistants like Siri, Cortana, or even the personal assistant from Amazon, where you're asking for content, is a way to create a better search experience, but it's not necessarily delivering a more intelligent search experience," she says.

"It's still very much a retrieval process," Goetz adds. "Getting it right or wrong depends on how well the natural language software processes what you're speaking. But I think that's good to train us as humans to feel more comfortable with our devices."

Michele Goetz, analyst at Forrester

Duncan Regional Hospital in Oklahoma is always looking for ways to improve efficiency in its patient care. About 15 physicians at the hospital are already speeding up their clinical documentation on laptops with Dragon speech recognition software from Nuance Communications.

"We have several applications scattered throughout the organization that also have an A.I. interface," says Roger Neal, vice president and CIO. For example, Duncan Regional uses a secure, HIPAA-compliant messaging platform from Imprivata. It has features that are similar to those of the iOS messaging app, but instead of sending a text message, physicians open up the app from a laptop, tap the microphone button and speak their message rather than having to stop interacting with a patient. The hospital is also starting to use Dragon for some of its business functions, such as scheduling meetings.

"It's just so much more efficient to do those things [with A.I.]," Neal says. "From an efficiency standpoint, it's just technology that fits an extremely needed area of what we do on a daily basis."

Today the hospital is piloting a project that lets nurses use voice recognition software to enter clinical documentation, a critical need as the healthcare industry requires more and more documentation to meet HIPAA requirements and mandates for meaningful use of electronic health records.

A.I. in the healthcare sector comes with special challenges, such as the need to accurately transcribe highly technical language. Nursing documentation is especially detailed, Neal says.

"We're working with Nuance using Dragon because they have one of the largest medical word libraries, so we don't have to start from zero," Neal says. The technology is also being integrated with the hospital's Meditech health information system in "some very specific nursing documentation areas," he says.

Once the pilot is approved, more than 500 nurses will have access to voice recognition documentation, Neal adds.

But not all A.I. ventures at Duncan Regional have been successful. In 2013, the hospital tried using voice recognition technology in the operating room so surgeons could dictate what they were doing during procedures and save time on post-op documentation.

"We had this bright idea that we could set up a microphone outside the sterile field that could pick up his voice," Neal recalls. "But it just doesn't work very well -- too many quiet spots, background noises and ‘uhs.' We may get there at some point, but at the time it just wasn't as clean as it needed to be."

Roger Neal, CIO, Duncan Regional Hospital

Neal advises first-time A.I. users to take it one technology and use case at a time -- and to not get discouraged. "It's still really a newer technology and it's evolving very quickly. Some things are going to work great, and in other areas it's really not ready for prime time yet," he says. "It will get there over time, but it's not a silver bullet right out of the gate for all of your inefficiencies."

Virtual financial advisers

Some of today's A.I. technologies use natural language software to understand the user and then analyze vast amounts of data to come up with intelligent answers. Several companies in the insurance and financial services sectors are finding success with customer-facing smart advisers that use Watson, IBM's cognitive computing system.

USAA, a San Antonio-based provider of financial services to military families, harnesses the power of Watson in an application that lets USAA customers who are leaving the service ask questions about, for example, college tuition reimbursement or changes to health benefits.

"USAA chose the topic of military separation for its first foray into the cognitive computing space because it provides a singular focus with a finite audience -- about 150,000 leave the military each year," says Eric Engquist, assistant vice president of military transitions at USAA, in a statement.

A team of USAA employees spent more than six months training Watson to answer questions about military separation. They started with 2,000 questions to educate the system, and the range of questions it can answer will grow over time, developers say. Watson knows only what it's taught, so when it gets a question it can't answer, developers will teach it.
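Neither USAA nor IBM has published that workflow in code, but the answer-or-escalate loop the developers describe, responding when the system recognizes a question and otherwise flagging it for human trainers, might look conceptually like the sketch below. The questions, scoring method and threshold are all stand-ins, not Watson's actual mechanics.

```python
# Conceptual answer-or-escalate loop; the matching logic is a stand-in, not Watson's.
TRAINED_ANSWERS = {
    "how do i use my tuition benefits after separation": "Answer written by trainers ...",
    "what happens to my health coverage when i leave": "Answer written by trainers ...",
}

needs_training = []  # questions the system couldn't answer, routed back to trainers

def respond(question: str, threshold: float = 0.6) -> str:
    words = set(question.lower().strip("?!. ").split())
    best_answer, best_score = None, 0.0
    for trained_q, answer in TRAINED_ANSWERS.items():
        trained_words = set(trained_q.split())
        score = len(words & trained_words) / len(trained_words)  # crude word overlap
        if score > best_score:
            best_answer, best_score = answer, score
    if best_score >= threshold:
        return best_answer
    needs_training.append(question)  # trainers will add an answer later
    return "I don't have an answer for that yet."

print(respond("What happens to my health coverage when I leave the service?"))
```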

USAA executives say the goal is to augment employees' expertise, not replace them. The smart system offers better insight and information than the USAA digital portal used to offer, they say, and it helps shorten service calls, offers more context to the calls coming in and reduces the amount of paperwork around customer interactions.

In another A.I. initiative, the Australian bank ANZ last fall deployed IBM's Watson Engagement Advisor in its Sydney Grow Centre, and it plans to make the cognitive computing platform's functionality available to more than 400 financial planners.

When a customer asks a financial planner about a company or an investment, the adviser can relay questions to Watson using natural language, by either speaking or typing. The system then culls through vast amounts of information -- such as annual reports, SEC filings, relevant news stories and other analysts' views -- and produces its own insight on the investment.

When making investment decisions, "it really helps to look at that [variety] of information and get a bigger picture rather than if you were just to calculate the financial numbers," says Forrester's Goetz.
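IBM doesn't disclose how the Engagement Advisor weighs those sources, but the first step it describes, pulling the passages that mention a company out of annual reports, filings, news and analyst notes, can be illustrated with a simple and entirely hypothetical sketch.

```python
# Illustrative only: gather sentences mentioning a company from several source types.
# The documents and company below are placeholders; this is not how Watson works.
SOURCES = {
    "annual_report": ["Acme Corp grew revenue last year. Costs also rose."],
    "sec_filing":    ["Acme Corp disclosed a new risk factor tied to currency exposure."],
    "news":          ["Analysts were divided on Acme Corp after the earnings call."],
}

def collect_mentions(company: str, sources: dict) -> dict:
    """Group every sentence mentioning the company by the type of source it came from."""
    mentions = {}
    for source_type, documents in sources.items():
        hits = [sentence.strip() for doc in documents
                for sentence in doc.split(".") if company.lower() in sentence.lower()]
        if hits:
            mentions[source_type] = hits
    return mentions

print(collect_mentions("Acme Corp", SOURCES))
```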

ANZ said it hopes to observe the types of questions coming from both customers and financial advisers in order to continue enhancing Watson's capabilities and insights.

Machine-based helpers

Another category of A.I. includes what Gartner calls the doers -- robots and networked machines that are automated to handle tasks and may or may not utilize natural language capabilities.

For example, Blue Prism, a maker of robotic process automation software, has created "software robots" designed to handle back-office administrative tasks and combine relevant data from siloed systems to complete a variety of work assignments. Building a robot involves capturing what administrative personnel already do on their computers, emulating the keystrokes they make to navigate various enterprise applications, and then codifying that process. "These software robots are taught how to onboard an employee by logging on to all the different systems and then mapping the customer information they get from one system to another by emulating what a person would do," says Blue Prism chief marketing officer Pat Geary.
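Blue Prism's tooling is configured in a visual designer rather than hand-coded, but the core of what Geary describes, reading customer fields out of one system and re-keying them into another, boils down to a mapping step like the one below. The system names and field names are invented for illustration.

```python
# Toy sketch of the field-mapping step in an RPA onboarding flow; in Blue Prism this
# is configured visually, not written as code.

# A record as a hypothetical HR system exposes it.
hr_record = {"emp_name": "J. Doe", "start": "2015-09-01", "dept": "Finance"}

# The same fields as a hypothetical payroll system expects them.
FIELD_MAP = {"emp_name": "employee_full_name", "start": "hire_date", "dept": "cost_centre"}

def translate(record: dict, field_map: dict) -> dict:
    """Re-key a source record so it can be entered into the target system."""
    return {target: record[source] for source, target in field_map.items()}

print(translate(hr_record, FIELD_MAP))
# -> {'employee_full_name': 'J. Doe', 'hire_date': '2015-09-01', 'cost_centre': 'Finance'}
```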

University Hospitals Birmingham in England uses 35 to 40 software robots for a variety of tasks that each day collectively involve tens of thousands of transactions on its systems. One of its first software robots helped manage the 2,000 outpatients that one UHB hospital treats on a typical day. Those patients previously had 34 reception desks to choose from when they checked in, but those desks were consolidated into one central reception area in an ill-conceived redesign of the facility.

UHB came up with a plan to use kiosks to expedite the check-in process, but its patient booking system was a closed platform and couldn't communicate with the kiosks. Blue Prism tools were able to connect the two systems and "interrogate the database we use in our kiosk solution to pick up the data there, and use robotic process automation to key that data into our system used for patient appointments," says Steve Chilton, director of ICT at University Hospitals Birmingham NHS Foundation Trust.

The software robot now updates the patient appointment system 40,000 times per week.
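UHB's integration runs on Blue Prism rather than custom code, but the loop Chilton describes, interrogating the kiosk database and then keying each check-in into the appointment system, follows a pattern like this sketch. The schema, sample data and function names are invented.

```python
# Illustrative sync loop between a kiosk database and an appointment system.
# Everything here (schema, data, functions) is a stand-in for UHB's Blue Prism process.
import sqlite3

# Throwaway in-memory database standing in for the kiosk system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkins (patient_id TEXT, arrived_at TEXT, synced INTEGER)")
conn.execute("INSERT INTO checkins VALUES ('P001', '2015-06-01 09:14', 0)")

def mark_arrived(patient_id: str, arrived_at: str) -> None:
    """Stand-in for the RPA step that keys the arrival into the booking system."""
    print(f"Marking patient {patient_id} as arrived at {arrived_at}")

# Pull unsynced check-ins and push each one into the appointment system.
pending = conn.execute(
    "SELECT patient_id, arrived_at FROM checkins WHERE synced = 0").fetchall()
for patient_id, arrived_at in pending:
    mark_arrived(patient_id, arrived_at)
    conn.execute("UPDATE checkins SET synced = 1 WHERE patient_id = ?", (patient_id,))
```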

"I recognized the potential for [software robots] to be used in many corners of this organization to help remove repetition, improve integration and automate all sorts of transactionally heavy activities," Chilton says.

A.I. forecasts vary

While artificial intelligence is evolving rapidly, Gartner believes that true business transformation from A.I. won't take place for at least two years, and possibly not for as many as 10, and that truly futuristic, general-purpose machine intelligence, where machines think for themselves, will take even longer.

Industry watchers agree that companies should start using systems with basic A.I. functionality now, even though the full promise of the technology may not be realized for years. That way, employees will get comfortable with the technology on a small scale, and IT leaders will learn how to educate users about A.I.'s capabilities.

"There is different training, governance and interaction that has to be learned," Goetz says. "If you don't do that now, when [advanced] A.I. does become ready for prime time and is truly commercialized, it's going to take you longer to adopt and capitalize on those new capabilities." That approach, she adds, will "allow these technologies to help you think beyond where you're thinking today."

Copyright © 2015 IDG Communications, Inc.
