How artificial intelligence explains analytics

No one would ever work with a person who just spits out answers and then walks away. Why would we expect people to work with intelligent machines that do exactly that?


Steve Lohr recently wrote an interesting article titled “If Algorithms Know All, How Much Should Humans Help?” for The New York Times that examines our ability to collaborate with and trust the increasingly intelligent systems that are entering both the workplace and our homes. He noted that without some level of transparency into the reasoning done by these systems, we are going to be forced to simply have faith in the output of "black boxes."

This got me thinking about the emerging intelligent and analytic systems that are on the horizon. How are they going to interact with us? What are the ways we can partner with them? Are we going to be relegated to the role of simply looking at their output and doing what we are told?

One of the approaches to this issue of transparency that Lohr highlighted was the work on "Watson Paths," an initiative that uses visualizations to display the reasoning that IBM Watson has gone through to arrive at its answers. But just as black boxes giving us answers doesn’t inspire trust, pictures of support trees aren’t all that reassuring either. As A.I. expert Danny Hillis said in that same article, "The key thing that will make it [artificial intelligence] work and make it acceptable to society is storytelling."

Storytelling.

This is a word that keeps coming up in conversations about intelligent systems, visualization platforms, the role of the data scientist, and the integration of machine learning and predictive systems into our data centers. But what does storytelling really mean when it comes to interacting with software?

For some systems, such as Watson, the results they provide take the form of answers. They often provide great answers -- but answers without explanations, without the story, are not enough. Even a Jeopardy-style answer and confidence score is not enough. Having Watson tell you that it believes you have Graves' Disease (a thyroid problem) along with a numerical percentage that indicates the strength of this belief is certainly not enough. Augmenting this with a tree that maps out every rule and reason associated with that answer might be more useful, but it still requires interpretation, or someone with the skill set to navigate the decision model.

A more powerful approach would be to take all of that data related to what Watson was looking for, where it looked, and the nature of the things it found, and turn that into an explanation of how it came to its conclusion. Rather than receiving just an answer or a graphic, a physician could get access to a description he or she could simply read, like this:

In looking for a diagnosis based on the symptoms, I looked at a combination of medical textbooks and journals, case studies and treatment result studies, as well as medical blogs and information sources linked to research hospitals. In the textbooks and journals I found a large number of matches in which the symptoms (irritation of the eyes, low TSH levels and higher than normal thyroid hormone levels) were overwhelmingly linked to Graves' Disease. Higher than normal thyroid hormone levels are also mentioned in combination with pituitary problems, but the other symptoms are not. In the case histories and treatment result studies, I found a similar set of matches in which the symptoms were linked to the diagnosis of Graves' Disease by working physicians. And finally, the procedural information from the research hospitals lists these symptoms as indications of Graves' Disease. In the latter, there are recommendations for further testing, including iodine uptake tests and ultrasound.

This plain-English narrative is driven by the data associated with the processes that led to the result. But rather than showing the reasoning alone, it explains the findings in a way that makes sense to anyone trying to evaluate them. It answers the questions anyone would ask when weighing an evidence-based answer:

  • Where did you look?

  • What were you looking for?

  • Did you find anything else?

  • Are there information gaps that I could fill to confirm this answer?

The power of this approach is that exposing the reasoning of a system gives its users the ability to evaluate that reasoning and potentially learn from it or provide feedback to the system itself.
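
To make this concrete, here is a minimal sketch of what such an explanation generator could look like. It is purely illustrative: the Evidence class, the explain function and the sample findings are hypothetical, not Watson's actual interfaces, but they show how the trace data a system collects while reasoning can be rendered as the kind of narrative above.

```python
# A minimal sketch of template-driven explanation generation.
# All names here (Evidence, explain, the sample data) are hypothetical;
# this is not Watson's API, only an illustration of the idea that the
# trace data a system collects while reasoning can be rendered as prose.

from dataclasses import dataclass


@dataclass
class Evidence:
    source: str    # where the system looked
    finding: str   # what it found there
    supports: str  # the conclusion this evidence points to


def explain(conclusion: str, evidence: list[Evidence], gaps: list[str]) -> str:
    """Turn a conclusion, its supporting evidence and open questions into a
    short narrative that answers: where did you look, what did you find,
    and what could confirm the answer?"""
    lines = ["In looking for " + conclusion + ", I consulted "
             + ", ".join(sorted({e.source for e in evidence})) + "."]
    for e in evidence:
        lines.append(f"In the {e.source}, {e.finding}, which supports {e.supports}.")
    if gaps:
        lines.append("To confirm this, the following would help: " + "; ".join(gaps) + ".")
    return " ".join(lines)


if __name__ == "__main__":
    evidence = [
        Evidence("medical textbooks and journals",
                 "the symptoms were overwhelmingly linked to Graves' disease",
                 "a diagnosis of Graves' disease"),
        Evidence("case histories and treatment studies",
                 "working physicians linked the same symptoms to the same diagnosis",
                 "a diagnosis of Graves' disease"),
    ]
    gaps = ["an iodine uptake test", "a thyroid ultrasound"]
    print(explain("a diagnosis based on the symptoms", evidence, gaps))
```

A real system would select and order its evidence far more carefully, but the principle is the same: the explanation is generated from the very data that produced the answer.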

This approach is useful as we look at other types of systems as well. For example, the data associated with machine learning and data mining systems that build out rules for doing predictive analytics can be used to drive narratives in the same way. The evidence used to build those rules (“There is a strong correlation between a customer’s number of dropped calls over a monthly period and the possibility that he will look for a new service”) and their application (“The combination of dropped calls and the end of this customer’s contract period makes it likely that he will be looking for alternative services”) can both be used to generate an explanation that converts a black box into a cooperative coworker.
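
A sketch of that conversion might look like the following. Again, every name and number here (ChurnRule, Customer, the sample data) is hypothetical; the point is simply that the rule's supporting evidence and the customer's own numbers can be folded into one readable explanation instead of a bare churn score.

```python
# A minimal sketch, assuming a hypothetical churn rule produced by a
# machine-learning system. The rule's evidence ("dropped calls correlate
# with churn") and its application to one customer's data are combined
# into a plain-English explanation rather than a bare score.

from dataclasses import dataclass


@dataclass
class ChurnRule:
    evidence: str   # why the system believes the rule
    condition: str  # what the rule tests for


@dataclass
class Customer:
    name: str
    dropped_calls_last_month: int
    months_to_contract_end: int


def explain_prediction(rule: ChurnRule, customer: Customer, score: float) -> str:
    """Combine the rule's evidence with this customer's data into one sentence."""
    return (
        f"{customer.name} is likely to look for another carrier "
        f"(churn score {score:.0%}). {rule.evidence}. This customer had "
        f"{customer.dropped_calls_last_month} dropped calls last month and is "
        f"{customer.months_to_contract_end} month(s) from the end of their contract, "
        f"which matches the rule: {rule.condition}."
    )


if __name__ == "__main__":
    rule = ChurnRule(
        evidence="There is a strong correlation between a customer's number of "
                 "dropped calls in a month and the likelihood of switching services",
        condition="many dropped calls near the end of a contract period",
    )
    print(explain_prediction(rule, Customer("Customer 4711", 14, 1), 0.82))
```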

The same can be said of recommendation systems (which are really predictive systems in a slightly less scary form) and entire suites of business intelligence (BI) tools. Even the data associated with dashboards can be pivoted to drive narratives.

The point here is fairly simple. All of the analytics these systems run have meaning. I know this is obvious, but we still need to be reminded of it, particularly after endless hours of data crunching. We can use that meaning, and the rationale for the analysis behind it, to generate narratives explaining why these systems have come to their conclusions. And if the reasoning is too complex, that is all the more reason to use systems that can bring that complexity down to a level humans can understand.

If we are going to have intelligent machines in our workplace that are providing us with answers, predictions and recommendations, we have to ask of them exactly what we would ask of any other co-worker. We have to ask them to explain themselves so we can productively work with them.

No one would ever work with a person who just spits out answers and then walks away. Why would we expect people to work with intelligent machines that do exactly that?
