Enterprise hits and misses - generative AI bears down on industries, disruptive employees get analyzed, and fall events roll on
- Summary:
- This week - generative AI's impact leads to industry assessments. Next up: insurance. Fall event season is (still) in high gear - roundups of our Workday and Confluent user event coverage are still piping hot. Disengaged and disruptive employees hurt companies, but how did we get here? Plus: AI whiffs.
Lead story - Ensuring fair use of AI - the insurance example
One of the next steps in the enterprise AI madness evolution is making sense of industry scenarios. This time around, it's the insurance industry, via Neil's Ensuring fair use of AI techniques and outputs by actuaries - questions to be answered:
The Institute and Faculty of Actuaries (IFoA), the professional body dedicated to educating, developing and regulating actuaries based both in the UK and internationally, recently published “Risk Alert: The development and use of Artificial Intelligence (AI) techniques and outputs by actuaries.”
Neil does a section-by-section critique of the publication. This quote from the report stands out:
Explanations and validation of complex data and modelling techniques are likely to be more challenging than for more traditional actuarial models, including where results from AI models are not necessarily reproducible.
Yes, that's bloody obvious, to use UK slang. But as Neil explains, this indicates just how different AI is from past approaches:
AI systems are increasingly capable of solving complex problems but tend to be opaque, the so-called “Black box” problem. It isn't easy to look inside to see what they do and how they work. They are not deterministic like classic models such as rules engines, procedural code or decision trees. Unreproducible results cannot be solely relied upon if we have not yet reached "the end of science" (the truth is in the data).
Can the push towards AI "explainability" help here? Neil quotes the report:
Communication to users is an essential element of the process, both in relation to explaining assumptions, methods and results, and the risks and limitations associated with data and models.
A well-meaning, but vague quote. Neil's response:
The nascent field of Explainable AI (XAI) is just beginning to grapple with this problem.
Neil's conclusion? A useful step, but like most things in so-called "responsible AI," a work in progress:
The above Risk Alert is an adequate foundational set of guidelines that needs to be elaborated thoroughly. I am aware of numerous efforts in progress to do that.
Also see: the latest from Chris' (appropriately) scorching view of how artists are legally challenging the practices of "big AI" - Generative AI - authors and artists declare war on AI vendors worldwide.
Vendor analysis, diginomica style. Here are my top choices from our vendor coverage:
- Accenture's full year generative AI revenues come in at $300 million out of a $64.1 billion total. There's a long way to go, says CEO Julie Sweet - Stuart pulls out some notable AI project lessons via Accenture: "Gen AI revenue has doubled in the past quarter, but projects are typically experimental and there's a lot of caution alongside the curiosity."
- Giving Dell the edge in edge - new platform capabilities offer easier data migration capabilities - Martin on Dell's edge platform push.
- SAP adds generative AI assistant to enterprise software portfolio - Madeline reports on SAP's "Joule" digital assistant news. It's a smart-but-expected move on SAP's part - will this generation of digital assistants, powered by generative AI, be more accurate/effective as well as easy to engage? Time will tell.
- Cloud World 23 - Oracle addresses the hot questions on generative AI pricing, customer data, and where SIs fit in - Wherein I pull together Oracle's views/plans on generative AI, across press Q/As and interviews.
Workday Rising 2023 - diginomica team coverage. I was on the ground in San Francisco; Stuart and Phil contributed use cases and news analysis via Workday Rising's live stream:
- Workday Rising '23 - bringing generative AI to heel in the service of enterprise productivity - Phil breaks down the keynote news, including generative AI announcements.
- Workday Rising '23 - Stuart has a Workday Finance use case from Whole Foods. On the first day of the show, I posted a Workday Extend use case via Cushman & Wakefield.
After a few days of on-the-ground pursuits, I published a deeper dive into Workday's generative AI approach, including some exclusive quotes from Workday's tech leadership. How much would Workday reveal about its generative AI approach? What about pricing, customer data privacy, opt-ins, and LLM accuracy? I got some meaty (and spicy) answers in San Francisco: Workday reveals its approach to generative AI and customer data privacy.
Confluent Current coverage - use case and analysis - Derek was on site for Confluent's user event. Obviously, generative AI in the Confluent tech/data management solution set was an important story, but this quote from Derek's coverage gives a broader view:
The key takeaway for me is that Confluent is focusing on usability of data streams. I’ve been following the company for a few years and whilst the benefits of Apache Kafka are clear, often what I’ve observed is that it requires a heavy operational and skills investment to get it right. Confluent introduced a greater level of simplicity for customers with its Confluent Cloud offering, extracting some of the management requirements away from enterprises. (Confluent’s data streaming platform aims to make development easier for real-time AI applications).
Also see Derek's Confluent Current use cases: McAfee adopts micro-services using Confluent Cloud, and Denny’s cooks up new intelligence stack with Confluent.
A couple more vendor picks, without the quotables:
- Guru expands the reach of its AI-assisted, human-verified enterprise knowledge search app - Phil
- Sitecore shifts to a composable architecture and focuses on AI innovation - Barb
Jon's grab bag - Katy revisited a longstanding customer pain point in Paying for under-utilized infrastructure? Wasn’t cloud supposed to solve that problem? Chris assessed the UK's AI outlook in What the UK must do to win on AI, according to techUK and the trades unions, and Madeline added another notable story with What I’d say to me back then - tech entrepreneur Heather Shoemaker on overcoming sexism in the VC market.
Best of the enterprise web
My top six
- Some employees are destroying value. Others are building it. Do you know the difference? - McKinsey has a provocative framing on productivity to consider: "The central challenge for organizations is to move as many workers as possible away from the highly dissatisfied group (which is probably larger and more destructive than most C-suites realize) and toward greater engagement and commitment. Such a strategy would give workers the opportunity to develop their skills, reducing dissatisfaction and attrition rates and bringing clear financial and organizational benefits over the long term." The six main employee personas McKinsey details are interesting - the "shining stars" are a very small section (less than five percent from what I can tell). In McKinsey's context here, "disruptors" are not positive - that framing needs more discussion.
- AI needs human insight to reach its full potential against cyberattacks - Louis Columbus takes a closer look at how AI/humans need to be fused for proper cyber defense. After citing a wake-up call quote that attackers seem to know systems as well as system admins do, Columbus writes about a successful defense: "A human in the loop recently stopped a breach of one of the fastest-growing municipalities in the southwestern U.S. after attackers obtained administrative-level privileged access credentials and attempted to breach the city’s infrastructure."
- Walmart Chief Sustainability Officer: ESG is integrated into operations, business - Larry Dignan profiles Walmart's sustainability efforts: "Walmart's McLaughlin said the retailing giant can tackle Scope 1 and Scope 2 emissions without offsets but will have to experiment."
- AI, Hardware, and Virtual Reality – Stratechery's Ben Thompson probes the intersection of AI and whether this can redeem the metaverse: "AI removes the human constraint: media and interactive experiences can be created continuously; the costs may be substantial, particularly compared to general compute, but are practically zero relative to the cost of humans. The most compelling use case to date, though, is communication: there is always someone to talk to."
- Top 5 reasons OpenAI is probably not worth 90 billion dollars - Gary Marcus questions OpenAI's valuation: "There’s not a huge moat here to protect OpenAI from competitors. The central technology at OpenAI (as far as anyone knows) is large language models, and many other companies know how to build them. Some are even open-sourcing competing models, and the competing models are developing quickly."
Whiffs
I'm pretty patient these days when it comes to exaggerated headlines, but this was ridiculous:
Nearly half of CEOs believe AI could replace their own jobs, says new poll—and 47% say that's a good thing https://t.co/5T5bBBUqBc
-> perhaps the most egregious example of a misleading headline, versus what the article/data actually says, that I have seen in 2023.
— Jon Reed (@jonerp) October 2, 2023
The article says nothing of the sort. Example:
"It’ll be harder for AI to replicate many of the “soft skills” that define a good CEO, like “critical thinking, vision, creativity, teamwork, collaboration, inspiring people, being able to listen and see,” says Agarwal. That means human bosses will almost certainly keep existing, but their jobs may soon look radically different. Delegating those mundane tasks could help CEOs focus on “the things that make them CEOs.”"
If we're going to understand the impact of AI, we have to raise the bar above this kind of viral junk. CEOs aren't going anywhere; whether that's a good thing or not is outside my scope - let's ask "AI"!
This may be more of a "yikes" than a whiff:
-> life without AI guardrails isn't looking so promising
— Jon Reed (@jonerp) October 2, 2023
And, on a lighter note: don't take dental advice from the avatar resembling Tom Hanks:
Tom Hanks Warns Fans About ‘AI Version of Me’ Promoting Dental Plan: ‘I Have Nothing to Do With It’ https://t.co/QJJ4UmEtwT
-> hope it's not too late to back out of my root canal :)
— Jon Reed (@jonerp) October 2, 2023
I'm about to get booted from my hotel for overstaying my checkout, so that's a wrap. See you next time... If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.