Lead story - Transformation is about data first, AI next - the Pandora jewelry example
Why would I lead with a transformation use case when the OpenAI drama needs more dissection? Don't get me wrong: backroom power plays, all dressed up in social media peacock feathers about "AI Safety," are worthy of sober deconstruction after the binge watch - but the enterprise rolls on.
And: if AI is to prove its relevance in an enterprise context, companies must make real progress on their data woes. Enter Mark Chillingworth's latest, Pandora digital transformation underpins Phoenix Strategy. How did Pandora achieve enviable growth rates? As Mark explains, the two drivers of Pandora's digital transformation are better customer experiences, and better collaboration tools/experiences for employees. Pandora has notched some omni-channel wins. But as its CIO says, taking this further will put data at the center:
Our new e-commerce system has expanded our reach and understanding of what our consumers want. But there is more we can do here to leverage the data, especially in terms of production, by increasing the breadth of data that we collect.
It is a trade-off in terms of the customer being willing to share their data and us demonstrating the value that we are able to provide.
Indeed - it's also about improved decision making, as analytics get closer to the real-time speed of digital business. All four of Pandora's "growth pillars" come back to data. Mark quotes the CIO:
They have to hang together; they don't operate independently of one another. The underlying platforms and cloud infrastructure are the capabilities that interoperate with one another, be it in manufacturing or the supply chain.
It is really important to have the oversight and governance of all these different areas.
And yes, Pandora has AI plans. I expect Pandora's AI pursuits will plug in nicely to all of this - but AI doesn't magically alleviate the need for fundamental changes in business strategy, or for the elimination of siloed approaches.
Diginomica picks - my top stories on diginomica this week
- It's Black Friday! Salesforce and Adobe crunch the numbers on a crucial Holiday season for retailers - Stuart shares projections for the consumer frenzy to follow. How frenzied? We'll find out - though Black Friday online sales numbers were frisky. Storylines to watch include the prevalence of mobile commerce, and the extent of "alternative" payment methods like BNPL apps (Buy Now, Pay Later). Also see: Stuart's Fixing up the technology stack - the macro-economic downturn isn't deterring omni-channel transformation at Home Depot and Lowe's.
- Galileo takes on detecting AI hallucinations, but new metrics are needed - George filed a level-headed take on Galileo's progress on detecting AI hallucinations. I'm just about done with vendors claiming they can eliminate hallucinations from LLMs (let's keep it real), but Galileo seems to be taking a more realistic stance, while remaining ambitious. As such, Galileo's tactics seem promising for better enterprise results. Also note: George's recent quote from Sam Altman himself on the limitations of LLMs, something we'll return to. See also: George's piece on Siemens and Microsoft partner on industrial copilots - here's why it matters.
Vendor analysis, diginomica style. Here are my top choices from our vendor coverage:
- NVIDIA revenues up 200% year-on-year - sees sovereign AI clouds as next big opportunity - Derek on one of the biggest AI winners (selling the gold mining tools always beats hunting for gold). I found NVIDIA's take on the limits of multi-tenancy interesting. Derek: "It feels jarring to refer to cloud-based infrastructure as “data centers of the past”, given that many organizations and nations are still in a cloud-transition phase, but it appears, according to NVIDIA, that we are heading towards single-tenant infrastructure once again - something he describes as 'AI factories.'"
- Zoom lifts revenue forecasts as paying customers take advantage of AI tools - Derek on another vendor staking its (beneficial) AI claims - though I view Zoom's struggles to move beyond video meeting technology as its core problem. Derek: "If you look at where Zoom is seeing growth - amongst its enterprise customers - that’s a good sign."
- CEO Sudheesh Nair - ThoughtSpot on the road to ‘massive’, but the speed bumps are getting bigger - Chris on his latest meetup with ThoughtSpot's CEO, six months after ThoughtSpot's Mode Analytics purchase: "In June, Nair told me the deal was step one on the road to his company being “massive”. Nearly six months on, is ThoughtSpot any closer to that aim – with its long-rumoured IPO still on the horizon?"
- Workday Rising EMEA 2023 - Carl Eschenbach's priorities for when he becomes sole CEO at Workday - Phil continues his Workday Rising EMEA reviews, this one on a major (pending) CEO shift. Also see Phil's take on Workday's 'responsible AI' plans, and my latest Workday use case, 'This is where we need to go to stay competitive' - Unum Insurance reveals how Workday Skills Cloud helps it excel on talent.
A few more vendor picks, without the quotables:
- Great places to work - UKG outlines its AI ambitions for HR tech - Rebecca
- Subscriptions, meet Sustainability - how Zuora helps customers to meet their carbon reduction goals - Madeline
- Standing out from the old school - how Sage is steering its own course - Brian
- How Buster + Punch lit up end-to-end automation thanks to NetSuite and the cloud - Phil
Jon's grab bag - Madeline with another notable installment, What I’d say to me back then - jump at opportunities earlier than you think you're able to, says Slack's Deirdre Byrne. Finally, in the wake of the OpenAI meltdown, Chris filed the blistering If you think OpenAI’s sacking of Sam Altman was just about board-level disagreement, think again. He then updated the story via a UK event that OpenAI bailed out on amidst the internal "navel gazing": King Canute, ahoy? The House of Lords debates AI, as OpenAI explodes and then reforms.
I see the OpenAI drama as ultimately about money and power, not ideology. Ideology is often used as a comforting blanket to cover over less glamorous conquests, but we are right to understand the stakes: the (disconcertingly few) companies that control the future of AI likely control the future of the economy itself. The problem is that the rest of us have to live in this world too - and figure out where to take our stands, how to pay the bills, or maybe even make a creative contribution amidst the machines.
I urge readers to check Chris' initial OpenAI reaction piece, and its argument for a proper type of skepticism that does not preclude curiosity. Chris is not wrong to call attention to ideologies behind AI proponents. But while Sam Altman might be an adopted poster child for so-called "accelerationists," a closer look at OpenAI's culture finds a different ideological struggle. As Chris alludes to, the fight inside OpenAI that led to Altman's (temporary) ouster is really about another faction, so-called "AI Safety" advocates with roots in effective altruism, who were increasingly unsettled by OpenAI's commercial push.
This AI Safety ideology appears deeply flawed from where I type. Its proponents seem far more preoccupied with Artificial General Intelligence (AGI) doomsday scenarios, which they believe OpenAI is close to realizing (mistakenly, in my opinion), than with the gen AI exhaust fumes of bad, or in some cases arguably stolen, information OpenAI trained on, then released into the wild with frantically flimsy (if well-intentioned) guardrails.
Even if the non-profit board had prevailed and prevented Altman's return, does anyone who isn't swilling OpenAI Kool-Aid believe this would have curbed the big AI industry as a whole in any meaningful way? How could it, when no other big AI company had this type of non-profit board structure? What other company would have followed suit? Did any?
This fallacy is caught up, I believe, in the excitement of those inside OpenAI who are apparently convinced that they, and they alone, are on the inside track to the next big breakthrough. From the outside, that excitement looks a lot like hubris, albeit with some highly enviable stock options.
The road to AGI will not be through Bing GPT, sorry - no offense to the incredibly skilled (human!) prompt engineers out there who can coax moments of brilliance from these non-cognitive systems. Let the hand-wringing begin anew... Ergo here we are:
- Power and control of generative AI remain consolidated in a few hands, though we can root for different AI approaches to emerge that require less scope and scale - and reward upstarts and outliers.
- Though it's likely women will be named to OpenAI's new ultra-homogenous board, the absence of women from the initial appointments is an unsettling reminder that a lack of inclusion seems to be a hallmark of big AI - especially when the decisions on the commercial future of AI are being made.
- The enterprise AI push is largely unaffected, setting up a fascinating 2024 where we will find out if enterprise vendors can curb the excesses of big AI with more accurate, "responsible" and customer-specific AI data output. Bret Taylor's presence on the new board only heightens this curiosity (Taylor is one of the sharpest enterprise technologists in the industry).
- "AI Safety" has been discredited as the guiding force inside OpenAI, gone by the wayside of the non-profit board's ouster. I propose we now ban the use of the term AI Safety, at least outside of formal buzzword bingo competitions, until someone can demonstrate how "AI Safety" is leading to more responsible AI practices today, not in some hypothetical AGI future. We are handing out AI religious leaflets instead of fire extinguishers. Surely we need the latter? Or at least both?
To be optimistic about AI or any other technology – as I frequently am – should not mean being denied the right to criticise, or to point out the chasm that sometimes exists between claim and real-world effect. Give me scepticism rather than passive, supine acceptance of evangelical BS any day. Evangelism begets healthy scepticism, and it is right that it does. The right to be a realist, not a disciple.
Hmm... I think Chris (wisely) misplaced his leaflet.
Best of the enterprise web
My top seven
- Why I don’t worry about AGI … yet – And right on cue, Vijay Vijayasankar debunks the notion of AGI's imminence with his perfectly practical prose:
Humans can get started quickly with very little information. My daughter when she was three years old could recognize animals at the zoo based on the cartoons she had watched. She never confused a bear for something else because the red shirt on Winnie the Pooh was missing on the live bear.
As I said on Vijayasankar's LinkedIn post, the scientific breakthroughs that will lead to AGI someday are likely to come outside of the commercial spotlight. As Vijayasankar alludes to, the human brain doesn't require deep learning scale to generalize. "Deep learning" has a lot to learn. Yes, we must prepare for this AGI possibility. Walk - and chew some AI policy gum to get us there.
- What Happened In Tech? – AI has its Kardashians Moment with OpenAI’s Chaotic Weekend - Yes, one more OpenAI rehash, but Hyoun Park of Amalgam Insights is always a must-read on these topics.
- Some Marketers Use Data to End Conversations; Others Use Data to Start Them - Dave Kellogg penned one of the must-read posts on data/analytics you'll see this year: "If forced to choose between ignorance and hurt feelings, I’ll take the hurt feelings every time."
- How to succeed in digital and AI transformations - I selected this McKinsey podcast interview as a reminder: the strategy comes first, not the tech.
- Not To Be A Losing Pawn – Being a pawn is bad enough; being a losing pawn is worse. Lora Cecere tells us how to avoid such corporate fates, with a supply chain twist.
- Why write books in a world full of AI answers? - Josh Bernoff plows through a ream of AI-related hype, which in this case can be traced back to LinkedIn attention-seeking goofiness.
- 3 Ways LLMs Can Let You Down - Over on The New Stack, time to grapple with what will happen when LLMs ingest an Internet training set compromised by... generative AI content.
Smart mattresses are not very smart, but they are smart enough to get a company into trouble on social media:
Smart mattress company puts user data to good use: online tittle-tattle https://t.co/lM1CSt24Ep
-> hey, at least your personal sleep data goes to a good cause: viral social media antics
— Jon Reed (@jonerp) November 26, 2023
Okay, I'm shooting fish in a barrel here but anyhow:
Meta 'misled' the public through a campaign that downplayed the amount of harmful content on Instagram and Facebook, court documents show https://t.co/9TALKCMXYv
-> I'm so confused, I thought Facebook was "a place for friends..." LMAO
— Jon Reed (@jonerp) November 26, 2023
This one is not a whiff at all, but it's the illustration of a classic whiff, via the satirical smarties over at Boring Nerds:
I could not stop laughing when this slide came up on the @boringnerds presentation. Of course if you get this kind of classic consultant bait-and-switch on your project, it's hardly a laughing matter.
— Jon Reed (@jonerp) November 14, 2023
See you next time... If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.