Enterprise hits and misses - gen AI stats spark adoption debate, the Xz backdoor raises open source questions, and customers push back on cloud costs

Jon Reed - April 1, 2024
Summary:
This week - conflicting reports create gen AI adoption questions; we attempt to bridge the gaps. The Xz backdoor brings open source utilization into the spotlight, and customers don't like cloud egress costs. Microsoft Teams makes its triumphant return to the whiffs section.


Lead story - Gen AI adoption reports - where does the truth lie?

There are plenty of contradictory signals about gen AI adoption - but what are companies actually doing?

Example: two weeks ago, Stuart posted about Accenture passing $1 billion in gen AI bookings in six months, while still acknowledging adoption barriers inside many companies - with data readiness being a blocker, and companies running on modern digital platforms pressing ahead faster.

My recent piece on Avasant Research's gen AI data looked at 200 companies with gen AI projects in progress: What are the drivers (and blockers) of enterprise gen AI investments? Behind Avasant's generative AI adoption report.

When I asked about obstacles to getting to gen AI scale, Avasant responded:

Not surprisingly, it typically comes down to basic blocking and tackling. Is your data clean? How easy is it to access, and are your major sources of data integrated? If your data is clean and ready, the next step is determining the right use cases. Gen AI typically works best, at least for now, augmenting workers rather than replacing them.

Meanwhile, Chris parsed two reports that indicated companies hitting pause on gen AI projects across the board: AI - two reports reveal a massive enterprise pause over security and ethics. Chris shares this whopper stat:

First up is a white paper from $2 billion cloud incident-response provider, PagerDuty. According to its survey of 100 Fortune 1,000 IT leaders, 100% are concerned about the security risks of the technology, and 98% have paused Gen-AI projects as a result.

Gen AI concerns are across the board, and the issues aren't small:

Those are extraordinary figures. However, the perceived threats are not solely about cybersecurity (with phishing, deep fakes, complex fraud, and automated attacks on the rise), but are rooted in what PagerDuty calls the “moral implications”. These include worries over copyright theft in training data and any legal exposure that may arise from that.

Avasant Research found similar concerns, along with bias, output quality and accuracy, which I happen to believe are crucially important. Then there is this, from Chris:

For example, if 98% of IT leaders say they have paused enterprise AI programmes until organizational guidelines are put in place, how are 64% of the same survey base able to report that Gen-AI is still being used in “some or all” of their departments?

How are we to make sense of this? Well, first off, the sample sizes of these surveys are small, and therefore should be taken with the usual salt grains. Chris is right that "Shadow AI" could be tilting those numbers against each other, as rogue individuals and teams use gen AI tools on the side (an unwise pursuit from an IP risk standpoint). But here are the factors we should keep in mind:

  • "AI" in this context refers specifically to gen AI, not to more mature machine learning projects already in play, e.g. predictive maintenance or fraud detection.
  • The gen AI market is immature and hard to predict - and, aside from some productivity use cases e.g. Microsoft Copilot ("write me an email," "summarize this meeting"), most of these projects are highly dependent on the customer's data quality.

I believe most customers without sophisticated data science teams are in the slow lane with their own gen AI projects. However, based on all I've seen at enterprise events this spring, many companies are eager to consume gen AI in the context of responsibly-delivered functionality from trusted software vendors they are already using, from CRM to cloud ERP (assuming those features solve an actual problem or help them work smarter). Just as one example amongst many, last week Oracle announced 200+ new gen AI features that will ship within NetSuite. Future gen AI adoption surveys need to carefully account for vendor-shipped functionality.

It's important to note that IP risk from uploading corporate data is dramatically reduced in these scenarios, as these vendors are not mingling customer data with the training data of external LLMs. That said, customers would be wise to press vendors on liability protections on all data issues.
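For illustration only, here's a minimal sketch of what that grounding pattern typically looks like (my own example - the search_customer_records and call_llm functions are hypothetical placeholders, not any particular vendor's API): customer data is retrieved and passed as ephemeral prompt context at request time, rather than being folded into the model's training set.

```python
# Illustrative sketch only - the helper functions are hypothetical placeholders,
# not any specific vendor's API.

def search_customer_records(query: str, tenant_id: str) -> list[str]:
    # Hypothetical retrieval step, scoped to a single customer's own data store
    return [f"[record for tenant {tenant_id} matching '{query}']"]


def call_llm(prompt: str) -> str:
    # Hypothetical hosted-LLM call; the prompt is processed per request
    # and is not folded back into the model's training data
    return f"(model answer grounded in: {prompt[:60]}...)"


def answer_with_grounding(question: str, tenant_id: str) -> str:
    # Customer data enters only as ephemeral context in the prompt at request time
    context = "\n".join(search_customer_records(question, tenant_id))
    prompt = f"Answer using only the context below.\nContext:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)


print(answer_with_grounding("What were Q1 travel expenses?", tenant_id="acme-co"))
```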

So, take Accenture's view on fast and slow movers. Add in that many companies are investing in data projects and platforms in anticipation of more AI to come. Combine that with companies cautious on their own gen AI projects, but starting to use gen AI functionality from trusted vendors, and we have perhaps a better explanation for contradictory adoption results.

Not all companies are concerned with ethics, alas. Some companies are eager to employ these tools for head count reductions, whether the tools are appropriate or just the "AI hammer" Chris refers to in his article. One consulting director I trust told me that one third of his gen AI customers care about ethics and are taking it slow and pressing those issues. The other two thirds want to press ahead and reduce head count wherever they can. His sales pipeline is very active.

As I discuss in the Avasant piece, accuracy is a huge risk to address. When a vendor claims "no hallucinations," that doesn't mean they have solved all output accuracy problems. Yes, enterprise vendors are refining more accurate gen AI output with a variety of techniques documented on diginomica, but use case adoption depends on lining up acceptable output error tolerance with human-in-the-loop design, and so forth.
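To make that a bit more concrete, here's a rough sketch of one common human-in-the-loop routing pattern (my own illustration, not drawn from any vendor - the use cases, confidence scores, and thresholds are all hypothetical): drafts that fall below an acceptable confidence threshold for a given use case go to a human review queue instead of shipping directly.

```python
# Illustrative human-in-the-loop routing sketch; use cases, scores, and thresholds are hypothetical.

REVIEW_THRESHOLD = {          # acceptable error tolerance varies by use case
    "meeting_summary": 0.60,  # low stakes: auto-release more freely
    "invoice_coding": 0.90,   # financial impact: review unless very confident
}


def route_output(use_case: str, draft: str, confidence: float) -> str:
    # Decide whether a gen AI draft ships directly or goes to a human reviewer
    threshold = REVIEW_THRESHOLD.get(use_case, 0.95)  # default to strictest handling
    if confidence >= threshold:
        return f"AUTO-RELEASE: {draft}"
    return f"HUMAN REVIEW QUEUE: {draft}"


# A low-confidence invoice classification is held for human review
print(route_output("invoice_coding", "GL code 6420 - travel expense", confidence=0.72))
```

We can't fix AI in one column...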

Diginomica picks - my top stories on diginomica this week

Vendor analysis, diginomica style. Here are my three top choices from our vendor coverage:

  • AI - the future is open and on CPUs, claim Red Hat and Intel - Chris examines the Red Hat and Intel view of what's next, including some beefs with NVIDIA: "The discussion was chaired by Red Hat EMEA evangelist Jan Wildeboer, whose opening gambit sought to cool the fevered brows of any IT leaders caught up in the tactical rush to buy an AI ‘hammer’ then look for some business nails."
  • TrailblazerDX 2024 - in review - Rebecca files her post-event redux: "Although Salesforce isn’t stepping away from its own LLM research, there’s a recognition that it may not be the best use of Salesforce’s resources. Instead, a key part of its strategy is emphasizing openness, and that LLMs, no matter how good they are, will need CRM data for grounding and the privacy and security measures found in the Einstein Trust Layer."
  • "AI software development is completely different" - how Sage plans to deliver enterprise AI with thousands of customer-specific finance models - My deep dive into Sage's AI strategy brought surprises I wasn't expecting, and a big enterprise wake-up call: "Unlike SaaS, you can have custom AI models and still retrain/update them easily - and Harris isn't the first AI expert who has been adamant on this point with me. "Custom" is typically a dirty word in a SaaS context, but AI is a different story."

Adobe Summit 2024 - diginomica coverage - Phil was on the ground in Las Vegas this week, parsing gen AI announcements, with the automation of end-to-end customer experience processes for marketing teams at the forefront.

Domopalooza 2024, diginomica coverage - Alyx was live in Salt Lake City, digging into Domo's data-for-business-users positioning in the midst of the AI surge; Stuart set the tone with an earnings review and a look at how the new consumption model is faring.

More vendor picks, without the quotables:

Jon's grab bag - Chris addresses a persistent issue in Diversity in AI - women need strong role models to inspire them, hears techUK. Derek examines the UK's tech predicaments in Legacy technology, procurement and workforce identified as ‘Big Nasties’ of future spending for British Government (bonus points for the catchy "Big Nasties" phrase). Stuart delves into the BBC's pursuit of audience trust amidst the bots in Nation shall speak peace unto nation...with help from ethical AI - the BBC's ambitious digital plans in an age of Fake News.

Finally, Martin puts AI through the hype cycle review in Why I can see an AI South Sea Bubble about to pop...

As a wake-up call to those businesses already playing in the chatbot field, the BSI also found that 35% of respondents said they found no benefit from AI-based services. In addition, while 42% said that AI chatbots are OK for handling simple complaints and issues, 68% said they believed them to be unsuitable for handling complex queries.

To avoid a plunge down the far side of the hype cycle, software vendors will need to demonstrate they can deliver something with AI impact, because Martin is right: (most) customers will hold off on their own gen AI development. Except: the customer data needed for a strong result isn't always ready for prime time. We're rapidly moving from sexy keynotes and Wall Street AI dividends to prove-it-to-me time...

Best of the enterprise web


My top seven


Whiffs

Josh Bernoff went off:

Competing for most eyebrow-raising headline of the week, we have:

Shareholders Sue AI Weapon-Detecting Company, Allege It 'Does Not Reliably Detect Knives or Guns'

and

Scientists toss 350,757 coins to prove theory that coin tosses aren't 50/50 (link warning - site is kind of spammy)

I haven't taken a gratuitous shot at Microsoft Teams in a little while, but I corrected that this week:

See you next time...

If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.
