Retail technology trials are supposed to take risks. They are a chance not just to find out whether a technology will work in the field, but to see the results and gauge customer reactions. That said, a new facial-recognition trial from KFC China and Baidu strikes me as having more potential to alienate customers than to help them.
A good question to ask about any tech trial is, "If this behavior were being done by a human associate rather than software, would it be a good thing?" It's a critical question to ask about facial recognition.
The trial is not using facial recognition to identify customers and thereby reduce in-store payment fraud when a thief tries to use a stolen payment card. If it were, I would be applauding it. That is a fine way to use facial recognition, because it sidesteps the technology's less-than-precise accuracy: if a bank kept an image of the actual cardholder on file, the system would have a benchmark to compare against the face of the person using the card.
So what is KFC China using facial recognition for? To be blunt, it is using it to stereotype customers and to guess what they'll want to order. Gosh, what could possibly go wrong with that?
TechCrunch did a nice piece on this KFC effort, and please allow me to quote the salient details. The two companies are opening a new restaurant in Beijing "which employs facial recognition to make recommendations about what customers might order, based on factors like their age, gender and facial expression. Image recognition [technology] installed at the KFC will scan customer faces, seeking to infer moods, and guess other information including gender and age in order to inform their recommendation."
The story references some examples that Baidu itself volunteered: "the system would tell a male customer in his early 20s to order a set meal of crispy chicken hamburger, roasted chicken wings and coke for lunch, while a female customer in her 50s would get a recommendation of porridge and soybean milk for breakfast."
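Baidu's examples amount to a simple demographic lookup. As a minimal, purely hypothetical sketch (the function, categories, and menu items below are my own illustrations, not anything Baidu has published), the logic they describe boils down to something like this:

```python
# Hypothetical sketch of the demographic lookup Baidu's examples imply.
# Every name and rule here is illustrative, not Baidu's actual system.

def recommend(inferred_age: int, inferred_gender: str, meal: str) -> str:
    """Guess an order from attributes the camera *thinks* it sees."""
    if inferred_gender == "male" and inferred_age < 30 and meal == "lunch":
        return "crispy chicken burger, roasted chicken wings, and a coke"
    if inferred_gender == "female" and inferred_age >= 50 and meal == "breakfast":
        return "porridge and soybean milk"
    return "no recommendation"  # fall through when no stereotype rule matches

# The trouble: the input is not who the customer is, but who the
# camera guesses she is. Misjudge a tired 22-year-old as 56 and she
# gets the "woman in her 50s" recommendation, out loud, at the kiosk.
print(recommend(56, "female", "breakfast"))  # porridge and soybean milk
```

Note that the code's only inputs are guessed attributes; nothing about the customer's actual preferences ever enters the function, which is precisely the problem the rest of this column is about.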
Come now, Baidu. Couldn't you make those examples a bit more condescending? Although much will depend on the sensitivity of the programming, prejudging what someone will order based on appearance is begging for trouble, because it is bound to offend customers.
This is very different from, for example, having the system suggest an order because it's on a special one-day half-off sale. That has great potential. This gets back to the associate comparison mentioned earlier. If a store associate told every customer about that price special, it's unlikely anyone would be offended.
But what if that associate shared that information with customers selectively, based, say, on what they looked like or what they were wearing? Maybe the kind of car they drove in? That way, only certain kinds of customers would get offered the discount. See any trouble there?
Now remove the judgment that many — but certainly not all — humans have and leave it all to an algorithm. Remember that customers can hear what is being offered to other customers. Let's take Baidu's own example of what to offer a woman in her 50s. What if a 22-year-old woman has just worked a double shift and she has bags under her eyes? If the robot reveals that it thinks her facial profile matches a 56-year-old, might that perhaps alienate that customer just a tad?
Let me put this another way. If you wouldn't want to encourage store associates to make judgments based on what a person looks like and use that to decide what to recommend that they order, why permit software to do the same thing?
There's an old Burger King commercial where a store associate sang, "Hold the pickle, hold the lettuce, special orders don't upset us." That's true when the customer seeks that special order. When the associate/software looks at a customer and chooses a special order, yeah, that could "upset us" big time.
This article is published as part of the IDG Contributor Network.