12 predictive analytics screw-ups

Make these mistakes and you won't need an algorithm to predict the outcome

Whether you're new to predictive analytics or have a few projects under your belt, it's all too easy to make gaffes. "The vast majority of analytic projects are riddled with mistakes," says John Elder, CEO at data mining firm Elder Research.

Most of those aren't fatal -- almost every model can be improved -- but many projects fail miserably nonetheless, leaving the business with a costly investment in software and time, and nothing to show for it.

And even if you develop a useful model, there are other roadblocks on the business side. Elder says that 90% of his firm's projects are "technical successes," but only 65% of that 90% are ever deployed at the client organization.

We asked experts at three consulting firms -- Elder Research, Abbott Analytics and Prediction Impact -- to describe the most egregious business and technical mistakes they've run across in the field. Here is their list of 12 sure-fire ways to fail.

1. Begin without the end in mind.

You're excited about predictive analytics. You see the potential value of it. There's just one problem: You don't have a specific goal in mind.


That was the situation at one large company that engaged Elder Research to start working with its data to predict something -- anything -- that one executive could go out and sell to his business units. While the research consultancy did agree to work with him and developed a model for his use, "No one in those business units was asking for what he was trying to sell," and the project went nowhere, says Jeff Deal, vice president of operations at Elder Research.

The executive "uses the data internally for his own purposes, but to this day he keeps hoping that someone will realize the value of the data," Deal adds.

The lesson: Don't build a hammer and then look for the nail. Have a specific objective in mind before you start.

2. Define the project around a foundation that your data can't support.

A debt-collection business wanted to identify the most successful sequence of actions to take when trying to collect from delinquent debtors. The challenge: The company had a rigid set of rules in place and had followed the same course of action in every single case.


"Data mining is the art of making comparisons," says Dean Abbott, president of Abbott Analytics, which was retained for the project. Because the company had rules in place that always applied the exact same actions, Abbott had no idea which sequence would work better for collecting debts. "You need historical examples," he says.

And if you don't have those examples, you need to create them through a series of intentionally planned experiments so that you can gather that data. For example, for a given group of 1,000 debtors, 500 might get a threatening letter while the other 500 receive a phone call as the first step. "The predictive models can then be built to predict which characteristics of debtors respond better to the hard letter/call and which characteristics of debtors respond better to getting the call first," he says.

In this case the characteristics might include historical patterns of incurring debt, days to pay past debts, income, ZIP code of residence and so on. "Based on the predictive models, the collections agency would be able to use the best, most cost-effective strategy for collecting debts rather than using the same strategy for everyone," he says. But you need to do experiments to get started. "Predictive analytics can't create information from nothing," he says.
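The experiment Abbott describes can be sketched in a few lines of Python. This is a toy illustration only: the debtor records, the "days_late" characteristic and the simulated response rates are all invented for the example, not data from the article. The point is the shape of the workflow -- randomize the first action across the group, record outcomes, then compare response rates by debtor segment, which is exactly the historical data a predictive model would later train on.

```python
import random

random.seed(42)

# Hypothetical pool of 1,000 delinquent debtors, each with one toy
# characteristic: how late they were on past debts (invented numbers).
debtors = [{"id": i, "days_late": random.randint(1, 120)} for i in range(1000)]

# Randomized assignment: 500 get the hard letter first, 500 get the call,
# mirroring the 500/500 split described in the article.
random.shuffle(debtors)
letter_arm, call_arm = debtors[:500], debtors[500:]

# Simulated outcomes standing in for the real-world results the experiment
# would generate. Assumption baked in for illustration: recently delinquent
# debtors respond better to a call, chronic ones to the letter.
def collected(debtor, action):
    base = 0.5 if (action == "call") == (debtor["days_late"] < 60) else 0.3
    return random.random() < base

results = [(d["days_late"], "letter", collected(d, "letter")) for d in letter_arm]
results += [(d["days_late"], "call", collected(d, "call")) for d in call_arm]

# With historical examples in hand, we can finally make comparisons:
# response rate for a given action within a given debtor segment.
def rate(action, lo, hi):
    hits = [ok for days, a, ok in results if a == action and lo <= days < hi]
    return sum(hits) / len(hits)

print(f"call first,  <60 days late: {rate('call', 0, 60):.2f}")
print(f"letter first, 60+ days late: {rate('letter', 60, 121):.2f}")
```

A real project would replace the simulated `collected` function with actual collection outcomes and fit a proper model (for example, a classifier per action) over many characteristics, but without the randomized experiment there is nothing for any model to compare.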

Editor's note: This story was updated on Thursday, July 25 at around 9:45 AM (eastern time) to correct errors in items #6 and #9. We mistakenly attributed some quotes to the incorrect sources. Computerworld regrets the errors.
