There are many ways to make a bad technology decision for a customer relationship management (CRM) system, but only a few ways to make a good one.
More often than anyone would like to admit, the decision comes from a manager who's just returned from a CRM trade show with a bag full of marketing materials and a head full of slick demos. But falling prey to the vendor hype isn't the only way to screw up the choice of CRM technology. Here's a list of some of the responses I've heard to the question, "How did you go about choosing your CRM product?"
- "The salesman gave it away for free for the first year."
- "The VP of product planning plays golf with the software company's CFO."
- "Because the competition is doing it."
- "Our end users liked the user interface ... and they're footing the bill."
- "The vendor asked us to be on their advisory committee -- we're helping them plan how to integrate campaign response modeling into their product."
- "They pretty much convinced us they were 'best of breed.'"
- "They told us the whole thing could be done in three months."
- "We already had their database product, so we thought, 'What the heck?'"
These reasons range from possibly acceptable to dangerous. The key to CRM success is basing technology decisions on a careful definition of the business requirements -- the business need, pain or problem -- and the functionality required to solve that problem.
For example, the business may need to track the success of targeted marketing campaigns. These requirements in turn demand specific functional capabilities, such as campaign response modeling. This process will greatly clarify your technology choices, because a list of products can be mapped to each specific function.
Remember: Technology is just one part of CRM. Most companies undertake CRM technology selection without really being ready to do so. They haven't figured out how CRM aligns with their corporate objectives, how it will affect their business processes or how it will require organizational changes that will irk many a CRM stakeholder.
Change is part of CRM's territory, and technology is probably the easiest area of CRM change for users to accept -- which is why many CRM business sponsors begin with the technology choice.
Allowing technology to drive CRM is known as the "bottom-up" approach. It's usually pushed by a go-it-alone executive or department that wants a particular capability and isn't willing to wait for more rigorous, requirements-driven planning.
But the risks of bottom-up development far outweigh the rewards. There's a danger of spending a lot of money on low-priority capabilities. The lack of integration with other technologies or CRM projects can lead to either throwaway work or cumbersome after-the-fact integration. And the CRM product's feature set may not meet the needs of future business growth and broader CRM adoption.
Although it might be tempting to espouse the well-worn aphorism "If you build it, they will come," the truth is that if you build it, they probably won't even notice. After all, how many of your company's software products have ended up as shelfware? Bottom-up CRM development is CRM in a vacuum -- a project not requested by or socialized to the business.
The fact is, when choosing your CRM technology, there's simply no substitute for allowing a structured list of requirements to dictate your technology decisions. Yes, it takes longer than knee-jerk development with the tool du jour. But the alternative is much riskier, and examples abound of CRM systems that never delivered the goods.
The key question to ask when defining necessary functionality is, What aspect of our customer-focused processes do we need to support with technology? To illustrate how this works, consider the following example. A major bank found that many of its customers did business with other financial institutions and were already using the product being marketed. So customer service representatives needed to offer alternative products when speaking with customers. The bank decided to use CRM to generate a list of five different product recommendations for each customer, based on that customer's likelihood to buy them.
For this to work, the process had to involve these steps:
- Analyze purchase history across all customers to identify the products most frequently purchased by similar customers.
- Rate the likelihood that a customer will buy a particular product.
- Communicate the resulting customer list and product scores to the call center application system.
- Collect response rates.
- Refine scores based on campaign results.
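The steps above amount to a simple propensity-scoring loop. As a rough sketch -- the customer data and product names below are hypothetical, and a real implementation would use a proper response model rather than raw co-purchase counts -- it might look like this:

```python
from collections import Counter

# Hypothetical purchase histories: customer -> set of products owned.
# None of this data comes from the bank example; it's for illustration only.
purchases = {
    "cust1": {"checking", "savings", "credit_card"},
    "cust2": {"checking", "savings", "mortgage"},
    "cust3": {"checking", "credit_card", "mortgage"},
    "cust4": {"savings", "mortgage"},
}

def recommend(customer, histories, top_n=5):
    """Rank products the customer doesn't own by how often they co-occur
    with products the customer does own (steps 1 and 2 above)."""
    owned = histories[customer]
    scores = Counter()
    for other, basket in histories.items():
        if other == customer:
            continue
        overlap = len(owned & basket)   # similarity to this customer
        for product in basket - owned:  # products we could still offer
            scores[product] += overlap
    return scores.most_common(top_n)

# Step 3 would push this list and its scores to the call center application;
# steps 4 and 5 feed response rates back to refine the scoring.
print(recommend("cust1", purchases))
```

Crude as it is, the sketch shows why each step is a distinct functional requirement: the analysis, the scoring, the hand-off to the call center and the feedback loop can each be satisfied (or not) by a given product.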
Each of these is a functional requirement. After you've defined functional requirements, give each one a numerical score that rates its importance to the business goal. Then you're ready to map the functions to the candidate technologies by answering the question, Is there a CRM tool that can perform each of these core functions?
It may be that many products provide the functionality -- in theory. Your job is to probe deeper: Is the function available out of the box or only through customization? Can you do without a particular function? Are you prepared to change your process to match the tool's workflow? And what about the product's overall usability? By scoring specific functions, you can compare a product's strengths and weaknesses against your most critical business requirements.
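One way to make this comparison concrete is a weighted scoring matrix. The requirement weights, vendor names and ratings below are hypothetical; the 0-3 capability scale (out of the box, configuration, custom code, absent) mirrors the "out of the box or only through customization" question above:

```python
# Business importance of each functional requirement (higher = more critical).
# All names and numbers here are illustrative assumptions.
requirements = {
    "campaign_response_modeling": 10,
    "call_center_integration": 8,
    "response_rate_collection": 5,
}

# How well each candidate product delivers each function:
# 3 = out of the box, 2 = configuration, 1 = custom code, 0 = absent.
products = {
    "VendorA": {"campaign_response_modeling": 3,
                "call_center_integration": 1,
                "response_rate_collection": 2},
    "VendorB": {"campaign_response_modeling": 2,
                "call_center_integration": 3,
                "response_rate_collection": 2},
}

def score(product_ratings, weights):
    # Weighted sum: business importance times how well the product delivers.
    return sum(weights[req] * product_ratings.get(req, 0) for req in weights)

ranked = sorted(products, key=lambda p: score(products[p], requirements),
                reverse=True)
for name in ranked:
    print(name, score(products[name], requirements))
```

Note how the weights change the outcome: a product that nails the single flashiest feature can still lose to one that covers the high-weight requirements adequately across the board.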
After narrowing down the product choices to a short list of vendors, it's time for IT managers to address the technical issues, from database compatibility to response times. If the choice comes down to two vendors, the one that most closely aligns with your existing technology infrastructure should be the winner.
Now you're ready to talk to the vendors on the short list -- not just about how great their tools are, but also about whether the vendors can support their tool sets and advise you on the best way to implement and deploy them. Have a structured set of questions to keep the vendor interview focused on what really matters.
Kicking the Tires
As with a new car, few companies purchase a CRM product without taking it for a test drive. Your vendor should offer an evaluation copy of its software so your CRM team can install and use the product. These trials usually last between three and six months to give users time to perform the following tasks:
- Verify that the promised functionality actually exists.
- Ensure that the product works in their specific technical environment.
- Gauge the product's usability.
- Verify that the product works with the user's data.
A critical point: verifying that the functionality exists is one thing; discovering how the product actually delivers it is another. Two products might each claim to evaluate marketing campaigns, for example, but one might involve significant end-user input while the other is more automated.
Likewise, ensure that the tool can work with your data. Many companies ignore this point, but it can be a make-or-break proposition for a CRM program. There might be data problems such as inconsistent formatting that prevent the CRM product from working correctly. Or the product might require certain data, such as cleansed address fields or access to customer support history, which your current systems simply can't furnish. Depending on the severity of these data problems, you might want to delay the purchase of any CRM tool until they're resolved.
Even if you aren't comparing different CRM tools, actually using the evaluation software is a good way to determine whether the per-user cost of the tool is worth the value it provides. Say you're evaluating a call center CRM product that costs $1,000 per end-user seat. One of your evaluation goals should be to verify that the product can truly deliver efficiencies that equal or exceed its cost. If you have 300 customer service representatives across the country, will the resulting productivity gains be worth $300,000 to your company? Only by installing the product and testing it can you truly know what to expect.
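The back-of-the-envelope math behind that question can be made explicit. Only the seat count and per-seat price come from the example above; the minutes saved, loaded hourly rate and working days are assumptions invented purely for illustration, and the evaluation period is where you'd replace them with measured numbers:

```python
# Figures from the example: 300 seats at $1,000 per seat.
seats = 300
cost_per_seat = 1_000
total_cost = seats * cost_per_seat  # $300,000

# Hypothetical assumptions to be replaced with measurements from the trial:
minutes_saved_per_day = 10  # time each rep saves per day with the tool
hourly_rate = 25            # loaded cost of a rep's hour, in dollars
working_days = 250          # working days per year

annual_gain = seats * (minutes_saved_per_day / 60) * hourly_rate * working_days
print(total_cost, round(annual_gain))
```

Under these made-up assumptions the tool barely pays for itself in a year -- which is exactly the kind of marginal result that only a hands-on evaluation, not a vendor demo, can confirm or refute.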
Finally, if you have the time, combine the evaluation with a proof-of-concept test and demonstration, using a subset of your data and metrics similar to what you expect to use after the tool is in production. Although you won't be working with all of your data or submitting high-volume transactions, your development team might be able to simulate workloads and extrapolate performance numbers based on more limited testing.
At worst, this exercise will save time during the actual development project. At best, it could save you many hours and untold expense on a product that doesn't cut the mustard.
Dyche is a vice president at Baseline Consulting Group in Los Angeles. She is the author of The CRM Handbook (Addison-Wesley, 2001), from which this article was adapted. Copyright 2002, Addison-Wesley, Boston.