After a long winter frozen in the technological permafrost, it's springtime again in the field of artificial intelligence.
A.I. is poised to take off in 2016 as enterprises begin factoring some element of it into their application portfolios. By 2018, artificial intelligence will be incorporated into about half of all apps developed, according to research firm IDC, and by 2020, savings fueled by A.I. -- in reduced people costs and increased workflow efficiencies, for example -- are expected to total an estimated $60 billion for U.S. enterprises.
What's more, A.I. platforms such as IBM Watson, Intel Saffron, Google TensorFlow and Microsoft Cortana will generate about $1.4 billion in revenue in 2016, predicts David Schubmehl, IDC's director of cognitive systems and content analytics.
Within the typical enterprise, A.I. most often takes the form of machine learning -- that is, developing and deploying algorithms that can learn from and make predictions on large sets of data.
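To make the definition concrete, here is a minimal sketch of that learn-then-predict loop: fit a model to historical data, then apply it to new inputs. The data and the linear model are invented for illustration, not drawn from any company mentioned in this article.

```python
# A minimal sketch of machine learning: "learn" from historical data,
# then make a prediction on an unseen input. Numbers are invented.
import numpy as np

# Historical observations: usage hours -> maintenance cost ($)
X = np.array([[1.0], [2.0], [3.0], [4.0]])   # feature: usage hours
y = np.array([10.0, 20.0, 30.0, 40.0])       # target: cost

# Fit a linear relationship via least squares
X1 = np.hstack([X, np.ones((len(X), 1))])    # add an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

def predict(hours: float) -> float:
    """Predict cost for usage hours the model has never seen."""
    return coef[0] * hours + coef[1]

print(round(predict(5.0), 1))  # extrapolates the learned trend -> 50.0
```

Real enterprise systems swap the toy linear fit for far richer models, but the shape -- historical data in, learned parameters out, predictions on new data -- is the same.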
Machine learning should be at the top of every watch list kept by C-level executives, line-of-business heads and marketers, says Schubmehl. "CIOs may not have to do anything in 2016, but they sure should be thinking about it," he says.
Why A.I. now, when it was considered dead and buried a decade ago? The answer is cheap computer processing power and the allure of secrets buried amid a torrent of data that didn't exist in such voluminous quantities then.
"During the 'A.I. winter,' we didn't have enough data and computer processing power" to make the field technologically or economically practical, says Schubmehl. Starting around 2013, things began to change. "Now we have plenty of computing power and more data than we know what to do with," he says. "We're still in the early days, but we have better machine-learning algorithms. The Siri of today is much smarter than five years ago. Siri five years from now will be tied into digital assistants and do even more stuff."
Pitney Bowes ships smarter with A.I.
While some C-level executives are still pondering A.I.'s potential, Pitney Bowes isn't waiting for the competitive hammer to come down.
"We [function like] an Uber for cross-border business parcels in 100 countries around the world," explains Pitney Bowes chief innovation officer and EVP Roger Pilc. "We don't drive trucks and fly airplanes; rather, we [facilitate] a very complicated set of activities. We could not do that without machine learning and A.I."
Pitney Bowes uses machine learning algorithms to accurately calculate the lowest possible shipping cost along with taxes and duties for customers such as Target, Harrods and Macy's. That includes classifying commodities by international Harmonized System (HS) codes that determine tariffs in 200 countries for more than 5,000 items.
Shipping is one of the most critical parts of any online transaction because two-thirds of consumers back out when shipping costs are too high, says Pilc. Given such price elasticity, it's critical for Pitney Bowes to come up with a shipping price that's low enough to retain the customer without causing the shipper to lose money on the transaction.
That's where machine learning comes in. Before A.I., a pair of sunglasses was assigned one weight, tax and duty whereas today, those charges can vary widely within the sunglasses category, depending on the dimensions, weight and packaging of the product, among other factors.
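One simple way to picture that kind of feature-based classification is nearest-neighbor lookup: match a new parcel against previously classified items and reuse the closest match's rate. This is a hedged sketch, not Pitney Bowes' actual system; all items, dimensions and duty rates below are hypothetical.

```python
# Hypothetical sketch of feature-based duty estimation: find the most
# similar previously classified item and reuse its duty rate.
import math

# (description, (length_cm, weight_g)) -> duty rate, from past classifications
labeled_items = {
    ("sunglasses, plastic", (14.0, 30.0)): 0.05,
    ("sunglasses, metal",   (15.0, 80.0)): 0.08,
    ("sunglasses, luxury",  (16.0, 120.0)): 0.12,
}

def estimate_duty(length_cm: float, weight_g: float) -> float:
    """Return the duty rate of the nearest labeled item."""
    best_rate, best_dist = 0.0, math.inf
    for (_, (length, weight)), rate in labeled_items.items():
        # Scale weight so both features contribute comparably
        dist = math.hypot(length - length_cm, (weight - weight_g) / 10.0)
        if dist < best_dist:
            best_rate, best_dist = rate, dist
    return best_rate

print(estimate_duty(15.2, 85.0))  # closest to the metal pair -> 0.08
```

A production system would use a trained classifier over many more features (packaging, materials, destination country), but the principle -- similar items get similar charges -- is what lets those charges vary within a single category like sunglasses.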
"We are facilitating commerce and volume of shipments, and that creates data that we feed back into our algorithms," says Pilc. "The more data we get, the more accurate we get."
Pitney Bowes has been using A.I. and machine learning for close to four years, according to Art Parkos, Pitney Bowes' VP for strategic technology and innovation. During that time, he estimates, the accuracy of those algorithms has improved by 25%.
GE taps its deep domain knowledge
One key to teaching machines how to learn is having access to deep domain knowledge, which is why some older and more experienced companies are farther down the A.I. path than their more youthful competitors -- they simply have a larger store of data to mine.
That's the case at General Electric (GE), one of the world's foremost manufacturers of industrial equipment. GE is pioneering the concept of the "digital twin," where a digital model of a jet engine, locomotive or large turbine, for example, is built to accurately predict via artificial intelligence when maintenance or a replacement is needed -- a process that can save billions, according to Colin Parris, GE vice president of software research.
"Every time I inspect an asset, I [have to] take it offline. Availability is lost, and I have to pay people to inspect it, and [then they find] there's nothing there," explains Parris. "Instead, I can have a digital twin of the asset that will tell me when to inspect it so I don't waste time inspecting it unnecessarily."
What's more, "You can have many digital twins associated with one asset, each one focused on a different financial outcome," explains Parris.
GE has built digital twins for its GE90 jet engine and says it has saved millions by avoiding unnecessary overhauls. Digital twins for its Evolution line of locomotives save an average of 32,000 gallons of fuel a year, per unit, and lower carbon emissions by 174,000 tons.
To build digital twins, GE employees first collect relevant data and then build a model. Once running, the model adapts to ever-changing conditions affecting GE's gas turbines, jet engines, MRI machines, locomotives or oil drilling equipment. This is where artificial intelligence comes in.
"When I collect the data, a lot of it is missing. A lot has been corrupted because a server or network was down or sensors were missing or off. I use A.I. and machine learning to clean and impute that data to [fill in] the spots that are missing," explains Parris. The payoff for digital twins can literally be in the many billions if not trillions of dollars, according to Parris.
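The imputation step Parris describes can be sketched simply: fill gaps in a sensor series using the surrounding valid readings. A real digital twin would use a learned model of the asset's physics; plain linear interpolation stands in for it here, and the readings are invented.

```python
# Sketch of sensor-data imputation: fill missing samples (NaN) using
# the valid readings around them. Linear interpolation stands in for
# the learned models a real digital twin would use.
import numpy as np

# Turbine temperature readings; NaN marks dropped sensor samples
readings = np.array([500.0, np.nan, 520.0, 530.0, np.nan, 550.0])

def impute(series: np.ndarray) -> np.ndarray:
    """Replace NaNs by interpolating over the valid samples."""
    out = series.copy()
    valid = ~np.isnan(out)
    out[~valid] = np.interp(np.flatnonzero(~valid),   # gap positions
                            np.flatnonzero(valid),    # known positions
                            out[valid])               # known values
    return out

print(impute(readings))  # gaps filled with 510.0 and 540.0
```

With the gaps filled, the cleaned series can feed the twin's predictive models instead of forcing an inspection every time a sensor drops out.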
"The power of 1% is the scale we look at," says Parris. Case in point: GE says a digital twin for its 6FA Turbine Combined Cycle [Power] Plant has yielded a 1% increase in efficiency. "At this scale, a 1% increase represents billions of dollars in savings."
Machines are one of the three areas where GE views A.I. as critical. The other two are humans and robots. "We have [a] lot of A.I. on the machines, a little bit on the humans and [a] lot less on robotics. This is a journey for us and we are just at the beginning," says Parris.
Robots, for example, use A.I.-aided computer vision to precisely position cameras to determine whether a turbine blade is cracked or merely dirty or oil-stained. Taking a power plant turbine out of service to fix a crack that doesn't exist is very costly. GE is also using A.I. to create robots with pressure-sensing fingers and ones that can work alongside humans.
On the human side, A.I.-infused apps help technicians find the resources they need faster by monitoring everything from their facial expressions to their searching, and then recommending content based on that data.
"If you look at A.I. right now, it is evolving at a fantastic rate," Parris says.
Twitter tailors its feeds with A.I.
The social network Twitter is using A.I. to deepen the user experience. More specifically, Twitter has developed machine learning algorithms that surface and push out Tweets based on what individual users have historically viewed or clicked through.
That's a challenge when a single user's timeline or newsfeed can consist of thousands of tweets about every conceivable topic, and when, according to Twitter, some 320 million users are active every month on the network.
"What is most important for Twitter users? [We are] trying to get what's happening in their world that is most relevant and important. That's the most important thing for Twitter to solve," says Siva Gurumurthy, senior engineering manager on Twitter's Recommendations Team.
The Recommendations Team uses algorithms to figure out what topics are trending and which individual tweets -- which can contain links, GIFs and videos -- most closely align with users' preferences, based on their network of followers, location and interests. Machine learning is critical to this effort.
While the company says such A.I.-fueled customization is only a part of product features such as While you were away, Highlights, Trends and MagicRecs, speculation bubbled up at press time that the technology could be applied to a user's core Twitter feed.
Either way, Twitter seems committed to the idea of personalization via algorithms. "Deep personalization is the key aspect of machine learning. We train our algorithms to personalize content," says Gurumurthy. "If a user likes sports, then [the algorithms] prioritize live sports content. If they like content from friends and family, that content gets prioritized," he says.
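The prioritization Gurumurthy describes can be pictured as preference-weighted ranking: score each tweet by how well its topics match a user's interest weights, then surface the highest scorers. This is a hypothetical sketch, not Twitter's algorithm; the topics, weights and tweets are invented.

```python
# Hypothetical sketch of preference-based ranking: score tweets by the
# user's interest in their topics, then surface the top scorers.
user_interests = {"sports": 0.9, "news": 0.4, "music": 0.1}

tweets = [
    ("Final score: home team wins in overtime", {"sports"}),
    ("New album drops Friday",                  {"music"}),
    ("Election results are in",                 {"news"}),
    ("Star striker injured before the match",   {"sports", "news"}),
]

def score(topics: set) -> float:
    """Sum the user's interest weight for each topic the tweet covers."""
    return sum(user_interests.get(t, 0.0) for t in topics)

ranked = sorted(tweets, key=lambda t: score(t[1]), reverse=True)
print([text for text, _ in ranked[:2]])  # the two most relevant tweets
```

Twitter's real system folds in many more signals -- recency, the follower network, location -- but the core idea is the same: train on what a user has engaged with, then prioritize content that matches.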
One interesting challenge is to personalize content without losing out on timeliness, Gurumurthy says. Twitter's algorithms have to figure out what's relevant to hundreds of thousands of users during an event of world importance. For instance, when the Bataclan concert hall terrorist attack was raging in Paris, users relied on Twitter for myriad different reasons. Some users were trying to locate family members, friends or business associates, while others wanted images, videos or breaking news alerts.
"The [time] window of applying machine learning algorithms is becoming shorter and shorter," says Gurumurthy. "[It can] bring the content you care about where there's an event like this. That is the problem we are solving with machine learning," he says.
It's not always content that dictates what users see. In many cases, the most important consideration is what's happening within a user's network of followers and their location. "The fact that my friend tweeted about an irrelevant topic is more interesting than the topic itself," Gurumurthy explains. "If there's a fire in Massachusetts, I may not care about the fire, but if three people in my network speak about that fire, that's much more interesting to me," he explains.
"There's a revolution going on in this space," Gurumurthy sums up. "We expect progress for years to come [in] machine learning in the streaming world."
CIT Group: Getting ready for A.I.
Apart from the large organizations with deep pockets that are jumping into A.I., many other companies are in the exploratory phase, looking carefully at ways to implement A.I. -- and to get funding for it. CIT Group, a leasing and lending concern, counts itself among the latter group.
A.I. "is very much exploratory. We have no funded projects, but are experimenting with a few use cases to see if there is a business case with sufficient ROI to sponsor an investment," says CIT chief data officer B.J. Fesq. With the idea of marrying analytics with A.I. at some point in the future, CIT has taken the first step by looking at a variety of analytics apps such as Redpoint's graphical CRM tools, the Tamr data-unification platform for cataloging, connecting and consuming data, and Cloudera's Hadoop platform.
"We've got this customer history that goes back 109 years and we haven't really mined it for cross-selling opportunities. We're starting to do that now. We say this customer kind of smells and looks like this customer over here," says Fesq. "It has a huge impact on what we do day-to-day."
CIT has been focused on its acquisition of OneWest Bank, so A.I. has not been a top priority, but Fesq expects that to change in the next couple of years. A.I. "is very small right now, but it's starting to grow and will explode," says Fesq. "We already have our data geeks looking at it."