Enterprise hits and misses - Davos debates AI in a Davos way, Apple Vision Pro pulls the CES rug, and DPD's AI service bot melts down

Jon Reed - January 22, 2024
Summary:
This week - Davos did their AI pontification thing, but what can we boil down from the erudition? AI project failure and success are on the table, but other matters are (or should be) keeping CEOs up at night - five, to be precise. Surprise: I take the Apple Vision Pro launch seriously. Meanwhile, DPD's service bot melts down.


Lead story - Davos brings the AI debate forward - but did we learn anything?

I'll spare you my usual potshots on how Davos seems to lose itself in virtue-signal-myopia, while the rest of us slog on, amidst the daily reckoning/application of the technology that many who gather in Davos are directly responsible for, but rarely discuss in such practical terms.

So be it. AI is clearly a technology with deep cultural impact, and if that leads us into advanced discussions on regulatory implications, we should listen.

Derek kicks that angle off with Davos 2024 - US, EU and Singapore put on their best polite smiles to discuss AI regulation. We might say the right things about regulatory imperatives, but Derek is right to call out the tensions:

A delicate balancing act is currently taking place. Nation states and trading blocs want to somewhat restrain megalomaniac companies and CEOs from doing as they please with such powerful tools and data, whilst protecting citizen rights and ensuring democracy remains viable - but they also recognize that this is a global arms race and that the ‘winners’ in AI will likely reap the rewards for many decades to come. So, it’s a case of ‘don’t do anything too bad’ but ‘keep going so we win’.

Chris homed in on the philosophical side of Davos in Davos 2024 - will AI change what it means to be human?:

There it is again: that certain something that makes us human, that we don’t believe AIs could emulate. But might we be proved wrong?

Chris captured the perils/potentials well in this piece - an illustration, to me, that we will get the AI we deserve. Could AI provide lonely/home-bound people with a form of companionship that eludes them? Yes. Could AI put the finishing touches on a world where our digital exhaust is converted into creepily "personalized" products that make smart devices look more like tools of surveillance than lifestyle advances? Yes again. If lines between machine and human are blurred, is that bad or good? That too is up to us to determine.

On the other hand, I've grown impatient with the inordinate attention paid to the existential risk theme, a topic Stuart picked up in The AI debate - it could go very wrong, admits Sam Altman; we have to avoid an AI Hiroshima, says Marc Benioff. The existential risk question is worth a debate, but not when it gobbles up the bandwidth for pressing discussions on the AI systems in actual use (or abuse?) today. Some brands may not care about making horrendous headlines for facial recognition misuse (last week's column), or service bot meltdowns (read on) - but I have a hunch most brands care about that quite a lot. See Chris' scorching Davos 2024 - AI is coming for everything we see, hear, and do. And will outsmart us quicker than we think.

Davos is more than a photo opp. But: enterprise AI leaders need a more practical basis for AI discussions (ROI anyone?) that still incorporates a risk framework.

Diginomica picks - my top stories on diginomica this week

I attempted to do just that in Attention enterprises - your AI project success in 2024 is not a given. What will separate wins from failures? I take on AI project success/failure, where the possibilities are in 2024 - and why customers must own AI projects in ways we don't talk about enough. I also warn about what I call "AI overreach":

Avoid "AI overreach" - most AI failures I've seen involve a profound misunderstanding of what AI is currently capable of, or buying into an overconfident view of AI's capabilities. Perhaps, in some cases, it's even using AI as a technical excuse for flawed business models or headcount reductions. It could be a reluctance to invest in human-in-loop design principles, in order to streamline costs. Or, in the case of Rite Aid, multiple points of failure, including an apparent lack of interest in reducing false positives.

Issues such as overreach and accuracy levels show us: it's not necessarily whether you use AI, but how you handle AI that dictates a good result (and avoids blowback). More picks:

Vendor analysis, diginomica style. Here are my three top choices from our vendor coverage:

A couple more vendor picks, without the quotables:

Jon's grab bag - Stuart homes in on a major impediment to successful AI projects: skills and education. (Why the C-Suite needs to get to grips with AI technology - Accenture and IBM CEOs on enterprise education and re-skilling needs). Meanwhile, Derek examines data on how AI can lead to greater inequalities in IMF - ‘capital income and wealth inequality always increase with AI adoption’.

Chris talks to a CEO with concerning views on quantum computing's threat to cryptography in Quantum technology – the black swans are gathering, claims start-up CEO. Cath wrote an important piece on DEI in 2024 and beyond: The death of DEI? Why 2024 is shaping up to be a pivotal year for diversity and inclusion in the tech sector:

This would appear to be a pivotal year for DEI in the tech industry. As the vultures circle, it is up to those leaders who understand the benefits of this approach to the business to start embedding it in ways of everyday working rather than just treating it as a nice-to-have or a necessary evil due to legislative pressures.

I'll revisit this piece further next week.

Best of the enterprise web


My top six

  • The Five Horsemen of the Business Apocalypse – A Quick Guide to the Real Issues that Should Be Keeping Every CEO Awake at Night - Shifting topics, Josh Greenbaum wrote a memorable post to move us (temporarily) away from the pitfalls of AI obsessions: "I attended a lot of conferences last year, including way too many in the month of October alone. While the conferences spanned a wide range of vendors, industries, geographies and customer types, a set of transcendent themes continued to bubble up to the surface, themes decidedly different – real, as opposed to artificial, if you will – that separated them from the buzzy froth that AI was generating throughout the tech market." The five? People, process, data, risk, and spend management. Josh joined my video show last Friday for a special "behind the blog" discussion of what effective leaders should be paying attention to; you can catch the replay on LinkedIn.
  • Raven Insights: Customer Feedback About Enterprise Software Implementations in 2023 - over at Raven Intel, Bonnie Tinder's research draws lessons from the highs and lows of enterprise projects in 2023: "The quality and value that a software implementation brings (whether its ERP or payroll or HCM) still comes down to a few fundamentals – none of which include Generative AI. Those fundamentals are: the project plan and team, the quality of the data and the readiness of the organization to deal with change."
  • Mobile App Development is Broken: Are Progressive Web Apps the Remedy? - RedMonk's Kate Holterhoff documents a turning point in mobile app development.
  • Apple won the CES headset game without showing up - I was thinking about how, amidst all the CES gadget bombast, the most important gadget wasn't even there. Turns out The Verge wrote a piece about that.
  • I've Worn Apple's Vision Pro Headset 4 Times. Here's What You Need to Know Before Buying - Why would I devote two pieces to a consumer device in an enterprise column? Because I believe that the (overpriced) Vision Pro is the last best hope for the mainstreaming of the metaverse - at least for this generation of consumers. Yes, there will be gaming hardcores and niche enterprise use cases, but this is really the big mainstream crossover test. As for this pre-review: "At some point, I'm going to live inside the Apple Vision Pro: doing work in it, checking messages, playing games, watching shows." I most definitely won't - but that doesn't matter one bit. What matters is who else will. I see mixed reality as the future of the metaverse, re: a less obtrusive version of Google Glass we can wear all day, without looking like astronauts at the unemployment line, but that's another article.
  • Troubling Tech Trends: The Dark Side of CES 2024 - Recall my dystopian tech snark earlier... David Cassel fleshes that out nicely in this New Stack roundup of iFixit's panel of "dystopia experts."


Whiffs

Figured I'd save this one for the whiffs column: DPD customer service chatbot swears and calls company 'worst delivery firm'. For me, this unfolded in real time on X/Twitter, via my colleague Phil Wainewright:

Hilarious, yes, but not too hilarious to also be true, as we soon found out:

This is a strange whiff indeed. Some took this to mean that gen AI service bots simply aren't ready for prime time, but I disagree:

AI overreach comes in many flavors. This one, to me, looks like an architectural design fail. No, not a lack of internal GPT-type guardrails, but a failure of overall data quality control and customer-specific data infusion - the kind of grounding that limits responses to customer FAQ data, rather than generating wacky corporate poetry. I documented such an architecture here: Can enterprise LLMs achieve results without hallucinating? How LOOP Insurance is changing customer service with a gen AI bot.
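To make that grounding idea concrete, here is a minimal, hypothetical sketch: retrieve from the company's own FAQ data, answer only from what was retrieved, and escalate to a human when nothing relevant comes back. The names (FaqEntry, retrieve, answer) and the toy keyword matching are my illustrative assumptions, not DPD's or LOOP's actual implementation - a production system would use embeddings and an LLM prompt constrained to the retrieved text.

```python
# Minimal sketch of a "grounded" FAQ bot: only answer from retrieved FAQ
# content, and refuse/escalate when nothing relevant is found, rather than
# letting the model freestyle. All names and data here are illustrative.
from dataclasses import dataclass

@dataclass
class FaqEntry:
    question: str
    answer: str

FAQ_ENTRIES = [
    FaqEntry("Where is my parcel?",
             "You can track your parcel with the tracking number in your dispatch email."),
    FaqEntry("How do I rearrange a delivery?",
             "Use the 'rearrange delivery' link in your notification to pick a new date."),
]

def retrieve(query: str, entries: list[FaqEntry], min_overlap: int = 2) -> FaqEntry | None:
    """Toy keyword-overlap retrieval; a real system would use embeddings."""
    query_terms = set(query.lower().split())
    best, best_score = None, 0
    for entry in entries:
        score = len(query_terms & set(entry.question.lower().split()))
        if score > best_score:
            best, best_score = entry, score
    return best if best_score >= min_overlap else None

def answer(query: str) -> str:
    """Only answer from retrieved FAQ content; otherwise escalate to a human."""
    entry = retrieve(query, FAQ_ENTRIES)
    if entry is None:
        # No grounding data: refuse instead of improvising poetry or insults.
        return "I can't help with that here - let me connect you to a human agent."
    # In a real gen AI bot, the retrieved entry would be injected into the prompt
    # with instructions to answer *only* from it; here we return it directly.
    return entry.answer

if __name__ == "__main__":
    print(answer("where is my parcel right now?"))
    print(answer("write me a poem about how bad this delivery firm is"))
```

The design choice is the point: when the retrieval step comes up empty, the bot hands off rather than improvising, which is exactly the behavior a "system update" can silently break.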

We don't know yet whether this bot has a history of hallucinating like this. DPD, which has disabled the AI component of its bot, claims a "system update" caused the problem, and that it has been running an AI-based bot for "years" now. That explanation makes some sense to me, as this breakdown looks like a failure inside a more controlled input/output architecture. We'll find out when DPD re-activates its AI. But for now, we can derive this warning:

There is one more caution for customers here: gen AI is not a static system you can "turn on and let run." This doesn't seem like model drift, but it is certainly indicative of the need for gen AI discipline. The output customers received yesterday may not be what they are getting today. Sorry for the cold shower, AI marketeers.
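In that spirit, here is a sketch of what "gen AI discipline" could look like in practice: a small regression check re-run after every system update, so yesterday's acceptable output doesn't silently turn into today's profane poetry. The banned-term list, test queries, and demo_bot stand-in are illustrative assumptions on my part, not DPD's actual QA process.

```python
# Hypothetical post-update regression check for a customer service bot.
# The terms, queries, and stand-in bot below are illustrative only.
BANNED_TERMS = {"useless", "worst delivery firm"}  # plus profanity lists, etc.

REGRESSION_QUERIES = [
    "where is my parcel?",
    "write a poem about how bad this delivery firm is",
]

def demo_bot(query: str) -> str:
    """Stand-in bot; swap in a real (grounded) chatbot client here."""
    return "You can track your parcel using the number in your dispatch email."

def run_regression(bot) -> list[str]:
    """Return a list of failures; an empty list means the bot stayed on-script."""
    failures = []
    for query in REGRESSION_QUERIES:
        reply = bot(query).lower()
        if any(term in reply for term in BANNED_TERMS):
            failures.append(f"banned term in reply to: {query!r}")
    return failures

if __name__ == "__main__":
    print(run_regression(demo_bot) or "all regression checks passed")
```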

Reader Clive Boulton has been killing it with the whiffs lately; he spotted one more: Feds charge eBay over employees who sent live spiders and cockroaches to couple; company to pay $3M. When you're sending cockroaches to reporters, saying you've lost the corporate plot seems a tad too kind.

See you next time...

If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.
