Lead story - Upbeat on downbeat? Walmart gives us a retail holiday outlook
How are retailers assessing the 2023 holiday picture? In "We're antsy about Christmas every year!" Will it be Happy Holidays for Walmart as digital revenues grow?, Stuart relays Walmart CEO Doug McMillon's valiant attempt to pull off the half-full view. What do we know so far? Stuart:
While Target’s digital revenues were in the doldrums earlier this week, Walmart is still seeing improvement. E-commerce sales were up 24% in Walmart US, 16% in Sam's Club US, and 15% globally. That’s having an impact on Walmart’s thinking around profit and loss (P&L).
How is Walmart excelling where others stumble? Stuart quotes McMillon:
Mentally I break it down as a combination of a traditional retail P&L and a newer version that starts with our digital businesses. It flows from first and third-party e-commerce pick-up and delivery to businesses like membership, advertising and fulfillment as a service.
Supply chain efficiencies - an area where Walmart is able to dictate terms more than most - are another asset. McMillon:
When you put it together with the supply-chain automation work we're doing, you get a more sustainable business that can grow more effectively over time and create a better mix along the way.
Walmart is also flexing its resources with the expansion of e-commerce fulfillment centers. It doesn't take a genius to see this holiday season shaping up as a mixed bag. Santa will make it down the chimney somehow, but he'll probably leave a few reindeer at the North Pole for operational efficiency's sake. Artificial Intelligence, with its provocative combination of market impact and existential cautions, is the volatile ingredient keeping us from the macro-economic Grinch. Meanwhile, shop local/buy local is sounding pretty good to me right about now.
diginomica picks - my top stories on diginomica this week
- Why pay anything for gen AI job description tools? Just say no! - Speaking of Grinches, Brian gives HR tech marketers an early holiday hangover by deconstructing their favorite (early) gen AI use case: "Software buyers may need to educate vendors as to what AI capabilities are really solving strategic problems and will be long-lived. That means customers must tell vendors “No!” to some of these proposed up-charges and remind vendors that they have to re-earn the customer’s business every month."
- Ingram Micro's platform-first ambition puts transformation to the test - Last year, Ingram Micro's large-scale transformation brought provocative field lessons - but it all comes down to follow-through. At CCE 2023, I got a progress report. Platform-first, platform-also, or platform-only? Here's why it matters.
- Why local government needs to seize its AI destiny - Carrie Bishop makes a bold diginomica debut with strong words for public sector change agents.
- Northumbrian Water Group architects customer-centric vision - Mark Chillingworth issues another meaty use case: "The big bang allowed a dramatic shift that led to agile and iterative methods to flow from then on."
Vendor analysis, diginomica style. Here's my three top choices from our vendor coverage:
- Bazaarvoice goes all-in on creator marketing with the acquisition of Affable.ai - Barb explores how content creators are changing marketing: "Creators, everyday shoppers, and brand ambassadors help brands scale the right types of content that shoppers are looking for. Bazaarvoice focuses on the micro-nano creator set, saying that these creators are better for conversion because they are subject matter experts or highly authentic."
- Unilever’s move to “being digital” - how Aera’s Decision Intelligence offering is transforming market agility - Stuart has a fresh Aera use case; as Unilever says, you can't stop at a data lake anymore: "It’s not like you capture a data lake and then there it is. It's just a constant continuous thing that you need to build on."
- SAP tech leaders face the AI and S/4HANA convergence - the ASUG Tech Connect review - I came back from New Orleans with a slew of jugular conversations to parse: "During my session with Greenbaum, it didn't matter whether we talked about AI, S/4HANA, or broader transformation - it all came back to data."
Workday Rising EMEA coverage - Phil was on the ground at Workday Rising in Barcelona; he returned with use cases, and a memorable session on potential AI model vulnerabilities:
- Workday Rising EMEA - model collapse could bring a 'winter of AI', says Workday Co-President Sayan Chakraborty
- Workday Rising EMEA - how Coventry Building Society moved 'overnight' from Oracle to Workday
Celonis Celosphere 2023 coverage - Derek took the jaunt to Munich to make sense of Celonis' recent BPM acquisition, and dig into Celonis' process intelligence ambitions:
- Celonis wants to become the ‘Wikipedia of process intelligence’ with new platform
- Celonis acquires BPM vendor Symbio to accelerate process improvement for customers
- Why Celonis thinks its Process Intelligence Graph is key to enterprise generative AI adoption
Jon's grab bag - Brian kicks the tires on a vendor in a software category that has quickly moved from very-stale to very-interesting: Checking out Sphera’s full ESG offering. Chris took his AI inquiries into cybersecurity - another area where AI has more open questions than easy answers: What a hacker can tell you about AI security – or the lack of it: "You might be glad when a red-team ethical hacker infiltrates your systems – before a bad actor does." Finally, George delves into an area we are going to see a lot of action in: AI model selection (Martian model router jumpstarts AI cost optimization).
Best of the enterprise web
Sam Altman pushed from OpenAI, lands at Microsoft - an enterprise view
Enjoy your slow news weekend? I sure did. Wake up Monday, and Microsoft hires former OpenAI CEO Sam Altman. As per The Verge:
Microsoft is hiring former OpenAI CEO Sam Altman and co-founder Greg Brockman. Altman was fired from OpenAI on Friday, after the board said it “no longer has confidence in his ability to continue leading OpenAI.” After a weekend of negotiations to potentially bring Altman back to OpenAI, Microsoft CEO Satya Nadella announced that both Sam Altman and Greg Brockman will be joining to lead Microsoft’s new advanced AI research team.
Some of this is soap opera; some of this is old-school corporate Game of Thrones. The "what happened?" is for another day, but the "why" and the "what's next" deserve a moment. The closest thing we have on the "why" is about the supposedly dangerous pursuit of AI. As per Alberto Romero on Substack:
Here’s what Kara Swisher said, in a chain of tweets:
'Sources tell me that the profit direction of the company under Altman and the speed of development, which could be seen as too risky, and the nonprofit side dedicated to more safety and caution were at odds. One person on the Sam side called it a “coup,” while another said it was the right move.
sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side.'
Ilya Sutskever, the brilliant mind behind many of OpenAI’s scientific successes, seems to be the main decision-maker behind the firing. And, as people suspected, no one knew - not even Altman and Brockman, and neither did Microsoft execs nor OpenAI employees. Definitely not the ideal way to do something like this, which is already ugly in itself.
A couple quickies:
- For those who wondered about Altman's frenzied weekend of possible returns to OpenAI, Microsoft is clearly one major investor that was blindsided by this news and reacted swiftly. But with Altman and Brockman's hiring, Satya Nadella has a definitive response, if not the last laugh.
- This awkward mess of a departure highlights the inherently untenable relationship between OpenAI's "non-profit" board and its blatantly commercial operations.
If true, I find the motivations to push Altman out to pursue a more responsible approach to AI odd, problematic and fascinating. I'm not someone who believes the path to Artificial General Intelligence (AGI) lies through generative AI and large language models alone. Powerful use cases (and cultural impact) are not the same as truly cognitive systems. Altman himself was all over the place: sometimes overhyping AI's job loss impact and existential threat, sometimes advocating sensible guidelines, and sometimes pursuing commercial agendas that undermined those same stances.
If "being responsible" in this case is about AGI, which will require technical breakthroughs beyond today's roadmaps, I struggle to see how you could fire Altman for that. But: you can make the case that OpenAI's aggressive release of ChatGPT did cause harm - particularly in conjunction with consumer search and Bing. What Altman pulled off was commercially brilliant, but not necessarily "responsible AI." But that's in the context of many other companies (and investors) also doing a questionable job with the "let's do responsible AI" but "let's move faster!" juggling act (see: the whiffs section).
Then again I believe the algorithmic social media economy did far more harm to democracy and civil discourse via viral nonsense business models long before gen AI hit primetime. If OpenAI plans to be more "responsible," what does that actually mean - especially if other big tech players, rogue states and black hat operators fail to do the same? That's not even counting the importance of open source LLMs, which don't have OpenAI's brute force ethical "guardrails" (and some would argue that's a good thing). That's ethically complicated enough to make you wonder: just what did Altman do wrong?
For enterprise AI, I'm not sure this changes much - aside from an entertaining pre-Thanksgiving round of Zoom meeting derailments and watercooler banter. The future of gen AI in the enterprise will involve very different computing needs, data platforms, and industry-specific AI models. OpenAI may play a role, but so will many other vendors - from Microsoft to other hyperscalers to third-party upstarts customizing open source LLMs. Meanwhile, I worry less about AGI right now, and more about the fragility of our last remaining generally-accepted facts amidst easily-accessible deep fake technology outside of OpenAI's scope.
I wonder what the OpenAI board would say about that; then again they are probably a tad busy today. That's what happens, I guess, when you happily take Microsoft's billions but don't have Nadella on speed dial.
Gary Marcus aired out the mess:
All signs are that those financially-interested stakeholders will quickly emerge victorious. (Arguably, they already have).
The tail thus appears to have wagged the dog—potentially imperiling the original mission, if there was any substance at all to the Board’s concerns.
He raises concerns about what this means for AI startups:
Let me close with a terrifying thought that Jeremy Kahn posted on X just as I was wrapping this up:
The question I'm trying to raise is bigger: what OpenAI, Anthropic, DeepMind have all tried to do is raise billions & tap vast GPU resources of tech giants without having the resulting tech de facto controlled by them. I'm arguing the OpenAI fracas show that might be impossible.
— Jeremy Kahn (@jeremyakahn) November 19, 2023
Three orgs filled with brilliant minds tried to create AI independently of the tech giants, and all three have been subverted.
The chances of an "Amazon of AI" surging up and putting big tech incumbents on their heels don't look good at the moment. Has anyone asked ChatGPT for a hot take?
Two non-AI picks:
- Developers can’t seem to stop exposing credentials in publicly accessible code - let's not forget to fret about cybersecurity also...
- Services firms are out of runway. They must forget Labor Arbitrage and conform to Technology Arbitrage - Phil Fersht of HfS back in his services reinvention wheelhouse...
Speaking of Microsoft, I wonder if this Google Meet was the last straw?
The most surprising detail of Altman's ouster might be the fact that OpenAI fired him over a Google Meet call https://t.co/84Gymf9Cid
"Couldn't they have at least made it a Microsoft Teams meeting?"
-> even Microsoft's closest partners draw the line at Teams calls... )
— Jon Reed (@jonerp) November 19, 2023
Oh, about that "responsible AI" thing:
Meta disbanded its Responsible AI team https://t.co/vTgeU6O09Y
-> oh well, at least they tried. Time to move fast again and break more things.
— Jon Reed (@jonerp) November 19, 2023
No concerns about navel-gazing non-profit oversight over at Meta... Finally, full circle with ChatGPT:
"Coverage plans for anniversary of ChatGPT?"
-> why would I do that when ChatGPT can just write its own tribute to itself of the same caliber as anything I could write? LMAO
— Jon Reed (@jonerp) November 17, 2023
See you next time... If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerp newsfeed.