Enterprise hits and misses - gen AI do's and don'ts are revealed, Salesforce - Informatica speculation tweaks pundits, and AI bots whiff

Jon Reed - April 22, 2024
Summary:
This week - generative AI do's and don'ts put project lessons into focus. Salesforce acquiring Informatica seems unlikely, but the speculation brings AI data issues to a head. Your whiffs include the Humane AI Pin, even though it looks spiff.

Lead story - the do's and don'ts of generative AI

In 2024, we're still wading valiantly through the riptide pull of the gen AI hype machine, but I'll say this much: the project-based lessons are getting better.

The latest comes via Derek's jaunt to Google Cloud Next: Google Cloud Industries VP shares the dos and don’ts of generative AI projects. One standout from the "do" list: test, test, and test again. Or, as Derek puts it: "test and learn." This has implications for gen AI project choices:

One common approach across most companies and industries, however, is that organizations are starting to test out generative AI applications on internal users first - before promoting anything externally to customers. In the case of Discover Financial Services, a buyer we spoke to at Next, this was certainly the case and something it said was a priority.

I know you want the "don'ts." Via Google's Carrie Tharp, this is number one with a bullet: "firstly, Tharp said that it’s wrong to assume that generative AI solves all unstructured data problems." Amen! This has big project implications. Derek quotes Tharp:

It's going to help you unlock that unstructured data unlike you could before, but it doesn't mean skipping traditional steps and preparing your data for the use cases. We see that if you prepare the proper data foundation, those are the experiences that we're seeing go live to production. Don’t think about a narrow single use case, think about it more holistically, the process end to end.

Meanwhile, Barb delved into an interesting survey by Contentful about those internal gen AI project attitudes: What employees think their managers think about gen AI and vice versa - and can it come with an off switch please? As more gen AI projects gear up, the issue of disclosure heats up also:

What is clear from this study is that people think the use of generative AI tools should be disclosed (76%). Sometimes, that only means internally disclosed, but other times, it means letting the customer or audience know it's being used.

These disclosure expectations/rules will surely change as these systems mature, but for now, the ability to audit gen AI output is important. Hard to do that when you don't know what's AI or not... Granted, some human agents are hard to differentiate from bots, but employee morale is a different problem.

The "off switch" is something I didn't think of in my own piece, Attention enterprises - your AI project success in 2024 is not a given. What will separate wins from failures? - but I wish I had. Though it may be less of a permanent "off switch," and more of an ability for employees to flag a pattern of output gaffes. Perhaps my rants on "AI overreach" and mitigating output accuracy are holding up...

Diginomica picks - my top stories on diginomica this week

Vendor analysis, diginomica style. Here are my three top choices from our vendor coverage:

A few more vendor picks, without the quotables:

Jon's grab bag - Gary filed a notable tech-for-good story in Could technology help reconnect indigenous communities with a rich - but vanishing - heritage? Sarah looked at gen AI's crime fighting potential in Can generative AI help tackle anti-money laundering? Yes, says WorkFusion - here's how.

Chris tackles the issue of humans-versus-AI in process design in OpenText CEO - don’t send a human to do a machine’s job! Finally, Stuart raises a critical issue for 2024 in Voting for electoral integrity in a gen AI age - tech platforms need to step up to the mark.

Best of the enterprise web

My top seven

  • Wall Street doesn’t seem too keen on a potential Salesforce-Informatica pairing - Ron Miller reports on one of the more head-scratching merger speculations we've had in a while: "You can imagine why some investors are therefore slightly confused that Salesforce is considering spending more than $10 billion on Informatica, a purchase that would add some revenue scale to Salesforce but little in the form of future revenue growth." The potential relevance of Informatica to Salesforce's gen AI moves is a subject of debate; this SnapLogic blog post captures one side of that, albeit with a dog in the fight: "Informatica at its core is a relational data integration engine. This means that managing semi-structured and unstructured data which makes up the majority of customer data and which is critical for generative AI is not a strength of the Informatica platform."
  • The end-to-end AI chain emerges - it's like talking to your company's top engineer - As Joe McKendrick notes, one of the most interesting gen AI use cases is to function as the front end interface for other numerical AI systems (gen AI isn't so wonderful at math, but in these scenarios, it doesn't have to be).
  • Gen AI training costs soar yet risks are poorly measured, says Stanford AI report - Stanford just issued a 500-page(!) gen AI report. Tiernan Ray provides a summary, though come to think of it, a comprehensive summary of 500 pages is probably outside the scope of both humans and AI.
  • From People to Tech arbitrage: Can we really survive this Great Services Transition? - Phil Fersht of HfS raises the crucial question: "The big question now is whether enterprises and their services partners have the appetite to fix their skills, processes, data, and technical debt?" HfS is right about services disruption, though their blog's analysis would be even better if it grappled with gen AI's shortcomings as well; this is not Artificial General Intelligence - or anywhere close. Then again, we've needed an enterprise services re-invention since long before AI became trademarkable.
  • Meta's AI chief: LLMs will never reach human-level intelligence - Don't take my word for it, just ask Turing Award winner Yann LeCun: "They're useful, there's no question about that. But on the path towards human-level intelligence, an LLM is basically an off-ramp, a distraction, a dead end."
  • Network Investment. Defining the ROI - Lora Cecere asks the jugular question: why hasn't supply chain management fundamentally changed? "Why have we not invested in networks despite the criticality of the data to supply chain decision making and the importance to ESG initiatives?"
  • How Roman Regelman and BNY Mellon are driving long-term business transformation - "We would define transformation as something bold and truly aspirational—not something that generates incremental gains."

Whiffs

Hmmm... I thought the point of identifying security gaffes was to fix them?

Speaking of imperfect gen AI (bots):

Humane AI Pins aren't cheap, but they look spiff!

See you next time... If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.
