Enterprise hits and misses - Sam Altman punted from OpenAI, but finds happy landings at Microsoft. Enterprises react, while retailers ramp up holiday hopes

Jon Reed - November 20, 2023
This week - Sam Altman is out at OpenAI, but in at Microsoft. What does this mean for the enterprise? Meanwhile, retailers are hoping (and gritting teeth) for a holiday amidst macro-doubts - but Walmart is upbeat. Your whiffs include ChatGPT in my inbox, looking for PR love.


Lead story - Upbeat on downbeat? Walmart gives us a retail holiday outlook

How are retailers assessing the 2023 holiday picture? In "We're antsy about Christmas every year!" Will it be Happy Holidays for Walmart as digital revenues grow?, Stuart relays Walmart CEO Doug McMillon's valiant attempt to pull off the half-full view. What do we know so far? Stuart:

While Target’s digital revenues were in the doldrums earlier this week, Walmart is still seeing improvement. E-commerce sales were up 24% in Walmart US, 16% in Sam's Club US, and 15% globally. That’s having an impact on Walmart’s thinking around profit and loss (P&L).

How is Walmart excelling where others stumble? Stuart quotes McMillon:

Mentally I break it down as a combination of a traditional retail P&L and a newer version that starts with our digital businesses. It flows from first and third-party e-commerce pick-up and delivery to businesses like membership, advertising and fulfillment as a service.

Supply chain efficiency - an area where Walmart is able to dictate terms more than most - is another asset. McMillon:

When you put it together with the supply-chain automation work we're doing, you get a more sustainable business that can grow more effectively over time and create a better mix along the way.

Walmart is also flexing its resources with the expansion of e-commerce fulfillment centers. It doesn't take a genius to see this holiday season shaping up as a mixed bag. Santa will make it down the chimney somehow, but he'll probably leave a few reindeer at the North Pole for operational efficiency's sake. Artificial Intelligence, with its provocative combination of market impact and existential cautions, is the volatile ingredient keeping us from the macro-economic Grinch. Meanwhile, shop local/buy local is sounding pretty good to me right about now.

diginomica picks - my top stories on diginomica this week

Vendor analysis, diginomica style. Here's my three top choices from our vendor coverage:

Workday Rising EMEA coverage - Phil was on the ground at Workday Rising in Barcelona; he returned with use cases, and a memorable session on potential AI model vulnerabilities.

Celonis Celosphere 2023 coverage - Derek took the jaunt to Munich to make sense of Celonis' recent BPM acquisition, and dig into Celonis' process intelligence ambitions.

Jon's grab bag - Brian kicks the tires on a vendor in a software category that has quickly moved from very-stale to very-interesting: Checking out Sphera’s full ESG offering. Chris took his AI inquiries into cybersecurity - another area where AI has more open questions than easy answers: What a hacker can tell you about AI security – or the lack of it: "You might be glad when a red-team ethical hacker infiltrates your systems – before a bad actor does." Finally, George delves into an area we are going to see a lot of action in: AI model selection (Martian model router jumpstarts AI cost optimization).

Best of the enterprise web


Sam Altman pushed from OpenAI, lands at Microsoft - an enterprise view

Enjoy your slow news weekend? I sure did. Wake up Monday, and Microsoft hires former OpenAI CEO Sam Altman. As per The Verge:

Microsoft is hiring former OpenAI CEO Sam Altman and co-founder Greg Brockman. Altman was fired from OpenAI on Friday, after the board said it “no longer has confidence in his ability to continue leading OpenAI.” After a weekend of negotiations to potentially bring Altman back to OpenAI, Microsoft CEO Satya Nadella announced that both Sam Altman and Greg Brockman will be joining to lead Microsoft’s new advanced AI research team.

Some of this is soap opera; some of this is old school, corporate Game of Thrones. The "what happened?" is for another day, but the "why" and the "what's next" deserve a moment. The closest thing we have on the "why" is about the supposedly dangerous pursuit of AI. As per Alberto Romero on Substack:

Here’s what Kara Swisher said, in a chain of tweets:

'Sources tell me that the profit direction of the company under Altman and the speed of development, which could be seen as too risky, and the nonprofit side dedicated to more safety and caution were at odds. One person on the Sam side called it a “coup,” while another said it was the right move.

sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side.'

Ilya Sutskever, the brilliant mind behind many of OpenAI’s scientific successes, seems to be the main decision-maker behind the firing. And, as people suspected, no one knew, not even Altman and Brockman, and neither did Microsoft execs nor OpenAI employees. Definitely not the ideal way to do something like this, which is already ugly in itself.

A couple quickies:

  • For those who wondered about Altman's frenzied weekend of possible returns to OpenAI, Microsoft is clearly one major investor that was blindsided by this news and reacted swiftly. But with Altman and Brockman's hiring, Satya Nadella has a definitive response, if not the last laugh.
  • This awkward mess of a departure highlights the inherently untenable relationship between OpenAI's "non-profit" board and its blatantly commercial operations.

If true, I find the motivations to push Altman out in pursuit of a more responsible approach to AI odd, problematic and fascinating. I'm not someone who believes the path to Artificial General Intelligence (AGI) lies through generative AI and large language models alone. Powerful use cases (and cultural impact) are not the same as truly cognitive systems. Altman himself was all over the place: sometimes overhyping AI's job loss impact and existential threat, sometimes advocating sensible guidelines, and sometimes pursuing commercial agendas that undermined those same stances.

If "being responsible" in this case is about AGI, which will require technical breakthroughs beyond today's roadmaps, I struggle to see how you could fire Altman for that. But: you can make the case that OpenAI's aggressive release of ChatGPT did cause harm - particularly in conjunction with consumer search and Bing. What Altman pulled off was commercially brilliant, but not necessarily "responsible AI." But that's in the context of many other companies (and investors) also doing a questionable job with the "let's do responsible AI" but "let's move faster!" juggling act (see: the whiffs section).

Then again I believe the algorithmic social media economy did far more harm to democracy and civil discourse via viral nonsense business models long before gen AI hit primetime. If OpenAI plans to be more "responsible," what does that actually mean - especially if other big tech players, rogue states and black hat operators fail to do the same? That's not even counting the importance of open source LLMs, which don't have OpenAI's brute force ethical "guardrails" (and some would argue that's a good thing). That's ethically complicated enough to make you wonder: just what did Altman do wrong?

For enterprise AI, I'm not sure this changes much - aside from an entertaining pre-Thanksgiving round of Zoom meeting derailments and watercooler banter. The future of gen AI in the enterprise will involve very different computing needs, data platforms, and industry-specific AI models. OpenAI may play a role, but so will many other vendors - from Microsoft to other hyperscalers to third-party upstarts customizing open source LLMs. Meanwhile, I worry less about AGI right now, and more about the fragility of our last-remaining generally-accepted facts amidst easily-accessible deep fake technology outside of OpenAI's scope.

I wonder what the OpenAI board would say about that; then again they are probably a tad busy today. That's what happens, I guess, when you happily take Microsoft's billions but don't have Nadella on speed dial.

Gary Marcus aired out the mess:

All signs are that those financially-interested stakeholders will quickly emerge victorious. (Arguably, they already have).

The tail thus appears to have wagged the dog—potentially imperiling the original mission, if there was any substance at all to the Board’s concerns.

He raises concerns on what this means for AI startups:

Let me close with a terrifying thought that Kahn posted on X just as I was wrapping this up:

Three orgs filled with brilliant minds tried to create AI independently of the tech giants, and all three have been subverted.

The chances of an "Amazon of AI" surging up and putting big tech incumbents on their heels don't look good at the moment. Has anyone asked ChatGPT for a hot take?

Two non-AI picks:



Speaking of Microsoft, I wonder if this Google Meet was the last straw?

Oh, about that "responsible AI" thing:

No concerns about navel-gazing non-profit oversight over at Meta... Finally, full circle with ChatGPT:

See you next time... If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.
