Enterprise hits and misses - OpenAI gets a board makeover (and a critical review), while Black Friday shoppers press the "buy" button

Jon Reed, November 27, 2023
Summary:
This week - Sam Altman knows how to play the AI Game of Thrones, but what are the implications for the enterprise and beyond? Has AI Safety gone from buzzword bingo to oxymoron? The retail holiday season is underway, and transformations start with busting data silos, not AI. As always, your whiffs.


Lead story - Transformation is about data first, AI next - the Pandora jewelry example

Why would I lead with a transformation use case when the OpenAI drama needs more dissection? Don't get me wrong, backroom power plays, all dressed up in social media peacock feathers about "AI Safety," are worthy of sober deconstruction after the binge watch, but the enterprise rolls on.

And: if AI is to prove its relevance in an enterprise context, companies must make strides on their data woes. Enter Mark Chillingworth's latest, Pandora digital transformation underpins Phoenix Strategy. How did Pandora achieve enviable growth rates? As Mark explains, the two drivers of Pandora's digital transformation are better customer experiences and better collaboration tools/experiences for employees. Pandora has notched some omni-channel wins. But as their CIO says, taking this further will put data at the center:

Our new e-commerce system has expanded our reach and understanding of what our consumers want. But there is more we can do here to leverage the data, especially in terms of production, by increasing the breadth of data that we collect.

It is a trade-off in terms of the customer being willing to share their data and us demonstrating the value that we are able to provide.

Indeed - it's also about improved decision making, as analytics get closer to the real-time speed of digital business. All four of Pandora's "growth pillars" come back to data. Mark quotes the CIO:

They have to hang together; they don't operate independently of one another. The underlying platforms and cloud infrastructure are the capabilities that interoperate with one another, be it in manufacturing or the supply chain.

It is really important to have the oversight and governance of all these different areas.

And yes, Pandora has AI plans. I expect Pandora's AI pursuits will plug in nicely to all of this - but AI doesn't magically alleviate the need for a fundamental change in business strategy, or for the elimination of siloed approaches.

Diginomica picks - my top stories on diginomica this week

Vendor analysis, diginomica style. Here are my three top choices from our vendor coverage:

A few more vendor picks, without the quotables:

Jon's grab bag - Madeline with another notable installment, What I’d say to me back then - jump at opportunities earlier than you think you're able to, says Slack's Deirdre Byrne. Finally, in the wake of the OpenAI meltdown, Chris filed the blistering If you think OpenAI’s sacking of Sam Altman was just about board-level disagreement, think again. He then updated the story via a UK event that OpenAI bailed out on amidst the internal "navel gazing": King Canute, ahoy? The House of Lords debates AI, as OpenAI explodes and then reforms.

I see the OpenAI drama as ultimately about money and power, not ideology. Ideology is often used as a comforting blanket to cover less glamorous conquests, but we are right to understand the stakes: the (disconcertingly few) companies that control the future of AI likely control the future of the economy itself. The problem is that the rest of us also have to live in this world - and must figure out where to take our stands, how to pay the bills, or maybe even how to make a creative contribution amidst the machines.

I urge readers to check Chris' initial OpenAI reaction piece, and its argument for a proper type of skepticism that does not preclude curiosity. Chris is not wrong to call attention to ideologies behind AI proponents. But while Sam Altman might be an adopted poster child for so-called "accelerationists," a closer look at OpenAI's culture finds a different ideological struggle. As Chris alludes to, the fight inside OpenAI that led to Altman's (temporary) ouster is really about another faction, so-called "AI Safety" advocates with roots in effective altruism, who were increasingly unsettled by OpenAI's commercial push.

This AI Safety ideology appears deeply flawed from where I type. Its proponents appear far more preoccupied with Artificial General Intelligence (AGI) doomsday scenarios that they believe OpenAI is close to achieving (mistakenly, in my opinion). They seem considerably less concerned about the gen AI exhaust fumes of bad - or, in some cases, arguably stolen - information OpenAI trained on, then released into the wild with frantically flimsy (if well-intentioned) guardrails.

Even if the non-profit board had prevailed and prevented Altman's return, does anyone who isn't swilling OpenAI Kool-Aid believe this would have curbed the big AI industry as a whole, in any meaningful way? How could it, when no other big AI company had this type of non-profit board structure? What other company would have followed suit? Did they?

This fallacy is caught up, I believe, in the excitement of those inside OpenAI who are apparently convinced that they, and they alone, are on the inside track to the next big breakthrough. From the outside, that excitement looks a lot like hubris, albeit with some highly enviable stock options.

The road to AGI will not run through Bing GPT, sorry - no offense to the incredibly skilled (human!) prompt engineers out there who can coax moments of brilliance from these non-cognitive systems. Let the hand-wringing begin anew... Ergo, here we are:

  • Power and control of generative AI remains consolidated in a few hands, though we can hope for different AI approaches to take root - approaches that require less scope and scale, and reward upstarts and outliers.
  • Though it's likely women will be named to OpenAI's new, ultra-homogeneous board, the absence of women from the initial appointments is an unsettling reminder that a lack of inclusion seems to be a hallmark of big AI - especially when the decisions on the commercial future of AI are being made.
  • The enterprise AI push is largely unaffected, setting up a fascinating 2024 where we will find out if enterprise vendors can curb the excesses of big AI with more accurate, "responsible" and customer-specific AI data output. Bret Taylor's presence on the new board only heightens this curiosity (Taylor is one of the sharpest enterprise technologists in the industry).
  • "AI Safety" has been discredited as the guiding force inside OpenAI, gone by the wayside of the non-profit board's ouster. I propose we now ban the use of the term AI Safety, at least outside of formal buzzword bingo competitions, until someone can demonstrate how "AI Safety" is leading to more responsible AI practices today, not in some hypothetical AGI future. We are handing out AI religious leaflets instead of fire extinguishers. Surely we need the latter? Or at least both?

Final word from Chris:

To be optimistic about AI or any other technology – as I frequently am – should not mean being denied the right to criticise, or to point out the chasm that sometimes exists between claim and real-world effect. Give me scepticism rather than passive, supine acceptance of evangelical BS any day. Evangelism begets healthy scepticism, and it is right that it does. The right to be a realist, not a disciple.

Hmm... I think Chris (wisely) misplaced his leaflet.

Best of the enterprise web


My top seven

Why I don’t worry about AGI … yet – And right on cue, Vijay Vijayasankar debunks the notion of AGI's imminence with his perfectly practical prose:

Humans can get started quickly with very little information. My daughter when she was three years old could recognize animals at the zoo based on the cartoons she had watched. She never confused a bear for something else because the red shirt on Winnie the Pooh was missing on the live bear.

As I said on Vijayasankar's LinkedIn post, the scientific breakthroughs that will someday lead to AGI are likely to come outside of the commercial spotlight. As Vijayasankar alludes to, the human brain doesn't require deep learning scale to generalize. "Deep learning" has a lot to learn. Yes, we must prepare for this AGI possibility. Walk - and chew some AI policy gum - to get us there.


Whiffs

Smart mattresses are not very smart, but they are smart enough to get a company into trouble on social media:

Okay, I'm shooting fish in a barrel here but anyhow:

This one is not a whiff at all, but it's the illustration of a classic whiff, via the satirical smarties over at Boring Nerds:

See you next time... If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.
