The myth of hyper-personalization - algorithms are still undermining the customer experience

Jon Reed - January 8, 2021
Summary:
I have a few bones of contention with the hyper-personalization crowd. But is there a better model for B2B? Grab your beverage of choice, and let's follow the argument and see if it has merit.


Two years ago, I went into rant mode, declaring that Algorithms are undermining the customer experience, and (most) companies don't care. Nothing has changed.

But I should have also pushed back against the hype around so-called hyper-personalization, and the obsession with context. Are we really prepared to argue that algorithms are savvy enough to anticipate our needs in a moment-by-moment context?

Two years later, what we mostly have are brute force algorithms attempting hyper-personalization without regard for those who find it intrusive - not far removed from the mass email blasts most companies still indulge in.

Why would companies do this? Because the downside - alienating a minority of hyper-sensitive users, only a fraction of whom vocalize their dissatisfaction - is considered acceptable collateral CX damage.

Typically, those alienated include a once-important user base for the company: folks we might call early adopters or super-users. These folks don't mind rough patches in early releases - especially if they can configure exciting new services. However, in the great UX dumbing down, simplicity and mass user adoption are the highest values. The only problem? Sometimes when you oversimplify an app or service, you harm it - or make it unrecognizable to those who built your audience.

Creating a better UI isn't something I would argue against. Whether it's B2C or B2B, intuitive user interfaces are not only a worthy goal - they are no longer negotiable. Not if you want to truly drive adoption.

But there is a huge UX difference between elegantly hiding advanced (necessary) functionality and eliminating it. Example: recently, Facebook's UI "upgrade" (dumbing down for mobile simplicity) took away, hid, or screwed up tools that Facebook Group admins (of which I am one) rely on.

Is alienating super-users an acceptable byproduct in B2B?

Facebook can get away with alienating super-user group admins; Facebook's problematic agenda is clearly not harmed by my alienation. But here is where B2C and B2B may part ways:

Is alienating a passionate group of super-users an acceptable byproduct in B2B? That depends on who is disgruntled. If it's a member of the press, an expert who impacts a purchasing decision, or an enterprise software budget holder, there is risk in play. Sometimes you get a data breadcrumb, a clue you have lost someone, such as an unsubscribe. But if a budget holder evaluating your software is alienated by overly-aggressive email pop-ups and intrusive chatbots, how are you to know? I riffed on this last February in Is real-time customer feedback a dream, or a nightmare? My in-person challenge to HappyOrNot co-founder Ville Levaniemi.

The myth of algorithmic hyper-personalization

Almost four years ago, I invested in Amazon Echo devices across home and office. The most striking thing? How little the "intelligence" behind these devices has evolved, and how clumsy the personalization is. The closest these Echos get to personalization? Nagging me with lines such as:

  • "If you want, I can set that alarm to go off every day at the same time." (No, I don't want).
  • "You usually order ____ around this time of the month. Do you want to order it again?" (No, I don't want).

But if I ask Alexa: "Cancel my 8am alarm," four years later, I still get:

"You don't have an alarm set for 8am." (This happens because two alarms are set, neither for 8am). Not once have I heard Alexa say to me: "You don't have an alarm set for 8am, but you do have two other alarms set. Would you like to cancel one of those?" Sorry, but with "AI" this dumb, hyper-personalization is a techno-fantasy for carnival barkers.

The highest-value feature on my Alexa? My Flash Briefing. I'm always in fear Amazon will get rid of it. I'll bet only a minority of users spend time configuring it. Like most super-user capabilities, the configurations are buried in the settings. But that learning curve doesn't bother me - not compared to the payoff. As I wrote in The enterprise professional's guide to Alexa Flash Briefings - why personalizing your news content pays off:

My experience is that platforms that allow "super-user" configuration options are still the ones delivering superior personalization - but your chances to configure your own content or preferences on big tech sites are getting more remote by the day. No one can personalize our content like we ourselves can. Big tech wants its algorithms to do that for us, so we get dumbed-down options each day.

Any day now, I expect Alexa to stick it to me and change or sunset configurable Flash Briefings in favor of "intelligent" news delivery, based on the algorithm's wild guesses, err, I mean, hyper-personalization of my preferences. Amazon has done it to me before - it had no problem deleting 100 gigs of music files I had painstakingly uploaded, because only a minority of passionate music lovers were using the feature. With big tech, it's majority rules only.

Google, of course, is the classic perpetrator, constantly sunsetting services that fail to meet popularity thresholds - no matter who is using them. There would have been no issue maintaining Google Reader indefinitely. But only passionate RSS fans used the Reader. Welcome, algorithmic news delivery; goodbye, fully customizable news portal. Google wasn't remotely concerned about alienating early Google Reader adopters. The algorithm dictated the product decision - no matter how many influential journalists were passionate users of the product.

In the case of newsreaders, the good news is that it's a financially viable niche; small (hopefully) sustainable companies like Feedly and, my fave, Newsblur cropped up. I know detractors would point out that "you get what you pay for," but that's wrong-headed. I would have happily paid for Google Reader on subscription. Serving a bunch of paid subscription niches? No profit there? Think again - Amazon makes money by the truckload via its long tail marketplace sellers. The Patreon business model isn't far from that either.

Brute force CX algorithms - false positives are acceptable

Other examples of brute force CX algorithms I cited in the last piece:

  • My bank flags me for potential "money laundering" based on a pathetically weak series of machine-generated data points about occasional overseas payments.
  • Machine learning darling Google generates a "someone has your password" email warning and locks me out of an email account, solely because I logged in from a different location on a supposedly different device (though it was the same device, different browser).
  • Southwest wifi asks for my Paypal password as it routes me to Paypal for wifi payment. Paypal immediately flags this activity as suspicious, even though I frequently use Paypal to buy Southwest wifi in-flight. Paypal then insists a text message code is the only way to verify my identity - even though I am on an airplane, in airplane mode.
  • Five minutes after I check out of a hotel - any hotel - I'm automatically subscribed to that hotel's "personalized" brute force promotions list, even though a simple account review shows my travel is tied to business, not to pleasure visits or discounted room promotions.

Now, I realize that on the scale of human suffering so many have been through in the last year - not to mention the massively disconcerting political events in the U.S. this week - these CX grievances are inconsequential. But algorithmic overreach can have serious consequences too. In my whiff-of-the-year for hits and misses, I picked two: the UK's A-level algorithm fiasco, and Stanford's wacky vaccine distribution algorithm that de-prioritized some front-line workers.

As for the trivialities of hyper-personalization, well, with the amount of money enterprise vendors are throwing at this, it must not be too trivial. I question whether trying to anticipate an individual's changing context moment-to-moment is a worthy pursuit. At any rate, as per my Alexa example, the technology of hyper-personalization is just not there; super-user configuration remains vastly superior.

I do make a distinction between personalization and hyper-personalization. We could debate this distinction, but roll with me here for a sec. Personalization, as I see it, is more of an attempt to match a particular segment of individuals with their likely preferences. In some cases, AI-powered personalization does have traction. Example: I've written about Salesforce's data on the impact of personalization in e-commerce; the stats are persuasive.

Amazon.com's "customers also bought" is another winner. In this case, Amazon isn't trying to overreach by interrupting me in my context - it's simply showing me similar items. YouTube does this fabulously with its related videos feature, but horribly with its personalized ads. Perhaps that's because YouTube's inventory of related content is much deeper than the pool of ads its algorithm must choose from? On the other hand, Spotify's attempts to hyper-personalize music algorithmically matched to my specific interests are a massive fail - literally a compilation of songs I can't stand.
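
For what it's worth, "customers also bought" doesn't require any moment-by-moment context - at its simplest it's just item co-occurrence counting over past orders. A toy sketch with invented data (and certainly not Amazon's actual algorithm):

```python
from collections import defaultdict
from itertools import combinations

# Toy order history - invented data, not Amazon's catalog or algorithm.
orders = [
    {"coffee", "filters", "mug"},
    {"coffee", "filters"},
    {"coffee", "mug", "kettle"},
    {"kettle", "tea"},
]

# Count how often each ordered pair of items appears in the same order.
co_counts = defaultdict(int)
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def also_bought(item: str, top_n: int = 3) -> list:
    """Return the items most often bought alongside `item`, by co-occurrence count."""
    related = {b: n for (a, b), n in co_counts.items() if a == item}
    return sorted(related, key=related.get, reverse=True)[:top_n]

print(also_bought("coffee"))  # e.g. ['filters', 'mug', 'kettle']
```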

Debating the hyper-personalization PR pitch

Even if the technology of hyper-personalization gets there, I'm skeptical companies will discontinue the broader blasts. In B2C markets, there is too much economic incentive for: "when in doubt, blast it out." This fall, I heard from a PR firm pitching AI-powered hyper-personalization for hotel chains. They claimed we could avoid searching for our loyalty numbers at hotel check-ins. Instead, I was told, "AI and ML can help brands solve these challenges by automating hyper-personalized offers." I responded:

I'm a member of many loyalty programs. Some are better than others, but none of them have succeeded in the type of hyper-personalization you are describing here.  I like the idea of it, and I see some ways it can work, but it's amazing how far companies are from it.

One reason is a very cynical approach to email marketing. Take hotels as an example. Even if they have the right data, they think:

1. "I don't care if Jon has never stayed here recreationally and only stayed here for business... send him the recreational offer anyhow.... because 10 percent of people like him bite on it. I don't care if it bothers Jon or not."

Until businesses are willing to change that, hyper-personalization will take a backseat. The technology isn't ready, and it's too easy to spray and hope for the best. I think data silos are a big part of the problem.

The company huddled and got back to me:

You're 100% right. Thanks for the feedback! We spent some time discussing your reaction, and we totally agree.

They believe breaking down data silos will go a long way towards solving this. I'm not so sure. Even when the data is spot-on, the temptation to send the long-shot offer that 10 percent of folks accept will be perceived as well worth the blowback. Fine - maybe it is. But please don't call this the era of hyper-personalization. It's still the era of blast-and-apologize-later. As they wrote:

Sending those mass offerings only for 10% of customers to engage may seem easier in the short term, but it's a long-term failure.

But is it a failure? How do you prove it? If I'm a hotel chain under pressure, a ten percent bump in bookings is probably worth some disillusionment I can't easily measure. That's why I believe we are genuinely porked in this area when it comes to B2C. Even if the tech gets precise enough to justify the hyper-personalization moniker, there is too much digital upside in (affordably) blasting a wider range of people.
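
To make that incentive concrete, here's some back-of-the-envelope math with numbers I've invented. The figures don't matter; what matters is that the upside of the blast lands on this quarter's dashboard, while the alienation cost never shows up anywhere:

```python
# Back-of-the-envelope math with invented numbers - none of these figures
# come from a real hotel chain; they only illustrate the incentive structure.
recipients = 100_000
conversion_rate = 0.10          # the "10 percent who bite"
revenue_per_booking = 150.0

annoyed_rate = 0.02             # silently alienated recipients (never measured)
lifetime_value_lost = 400.0     # eventual cost per alienated customer

measured_upside = recipients * conversion_rate * revenue_per_booking
hidden_downside = recipients * annoyed_rate * lifetime_value_lost

print(f"Measured upside this quarter: ${measured_upside:,.0f}")  # $1,500,000
print(f"Hidden long-term downside:    ${hidden_downside:,.0f}")  # $800,000
# The upside lands on this quarter's dashboard; the downside never appears in
# any report the campaign owner sees - so the blast goes out.
```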

My take - hyper-personalization isn't the only model for B2B

I can't possibly succeed in getting Facebook or Google to care about my alienation. But in B2B, we can absolutely make the case that you don't want to alienate the wrong decision-makers - especially with intrusive interruption marketing, however "intelligent" we claim it is. B2B is a small world indeed. Then it's a matter of:

1. Figuring out how to measure the disillusioned user or prospect for each "hyper-personalized" outreach - and a process to bring those folks back into the fold where possible (see the sketch after this list).
2. Accommodating the needs of the super-user (or manager) in software design by elegantly hiding, not eliminating, crucial functionality. Oh, and make role-based UIs readily available and customizable. Now we can push for broad user adoption via UI simplicity, without losing an influential subset that wants more configurable control.
3. Applying "VIP treatment" to the preferred, most influential or profitable customers and community members, based on data that allows such segmentation.

One more thing is needed, however: a model to juxtapose against the considerable technical investments and questionable promises of "algorithmic hyper-personalization in a moment-by-moment context." I will argue that in B2B, there is an alternative model: building vibrant opt-in communities around your brand, content and industry. An audience of passionately-engaged subscribers solves a lot of the algorithmic contextual guesswork - err, I mean hyper-personalization - challenges.

No longer do you have to guess at what I need moment-to-moment, as my attention shifts from emailing a prospect to ordering toilet paper to checking on vaccination schedules. Nope, you don't have to worry about predicting any of that, because I find your community relevant, and I've opted in. Unlike the noisy hotel chain that craves my attention, I've invited you to contact me. Even if I forget about your brand for a day or two, I've opted in. Therefore, I'll be back - or you'll ping me with something I signed up to receive. We have a long-term connection of trust. You don't have to overthink the so-called "AI."

Granted, these two models aren't mutually exclusive. Progress on one can inform the other. One (hyper-personalization) is about throwing sexy technology at the problem. The other (opt-in communities) is about the burning purpose at the center of your brand, and how you can perhaps change people's lives, and make their projects better, if you can earn their trust through relevance. Then you obtain their data, gradually, as they willingly offer it in exchange for the value derived along the way. Now your rich data is truly tied to what they want you to know, and we can really have a talk about personalization.

I'm also good with eradicating data silos; depending on the situation, I might even be good with a CDP (Customer Data Platform). In retail, it obviously pays huge dividends to know your customer, and provide them with as fluid an "experience" as you can across channels. What I'm trying to do here is to throw a wrench in this discussion - and call out the persistent spray-and-pray cynicism that undermines any serious discussions of true personalization at scale.

In other news, I'll be appearing as a guest on the CRMKonvo show next week. Something tells me this topic might come up...

Speaking of which, would you care to fill out a survey about this? Because I can't pseudo-personalize for you without gathering even more data, and I don't have time to earn your opt-in. Therefore, a generic survey is really my most sophisticated way of feeding the algorithm so it can blast out some offers that I think you'll really like...
