
What are the drivers (and blockers) of enterprise gen AI investments? Behind Avasant's generative AI adoption report

Jon Reed - March 25, 2024
Summary:
There is no shortage of generative AI marketing and hot takes. But there is a shortage of gen AI project data. A recent report from Avasant Research sheds light on how enterprises are spending on generative AI - as well as their top AI concerns.


During the last year, otherwise known as the generative AI hype festival, the diginomica team has dissected the pros and cons of gen AI from just about every direction: policy, roadmap, architecture, vendor strategy news/analysis, use case documentation, ethics, and even creative/cultural impact.

The one area where we can all use more info? Analysis of generative AI lessons across projects.

Avasant Research has issued a notable report on this topic, via its Computer Economics report series: Generative AI Strategy, Spending, and Adoption Metrics (the first chapter is available free with sign up). One strength of this report? Analysis of generative AI adoption by industry, along with gen AI spending metrics, such as:

  • Gen AI spending as a percentage of IT spending
  • Gen AI spending per employee
  • Gen AI spending as a percentage of revenue
  • The impact of gen AI on jobs
  • Level of gen AI program outsourcing
  • Gen AI support staffing

Top generative AI concerns - cybersecurity comes in number one

There were some surprises; I was not expecting healthcare and manufacturing to be among the top three industries, based on Avasant's median gen AI spending per employee by sector.

I also found a surprise - or at least a disagreement - in this rundown of top gen AI concerns:

 

Avasant - top generative AI concerns for enterprises
(Image used with express permission of Avasant)

When it comes to internal usage of gen AI, cybersecurity isn't as high on my list as some of these other issues - though new attack vectors like prompt hacking must be dealt with. On the other hand, if those surveyed were referring to the general exploitation of gen AI by nation states and bad actors, then the top position might be warranted. I'll say more about privacy and accuracy in my concluding remarks.

But what did the report authors make of this data? In an email exchange, Avasant Research's Frank Scavo (Partner) and Dave Wagner (Senior Director) responded to my top questions, starting with: "What do you think is the biggest takeaway from this study for enterprises considering gen AI projects?" As Scavo and Wagner see it, there is a big contrast between tech industry investments and actual adoption:

Our overall finding is that, despite the tech industry making enormous investments in generative AI, most end-user organizations are still in the early stages of adoption. There is a lot of excitement for sure. But as you note in your follow up question, many are still in the proof of concept or piloting stage.

Gen AI investments by industry - healthcare and manufacturing in the top three?

Next, I asked about that industry investments surprise, with healthcare and manufacturing in the top three industries on AI spending: "I can see the need for gen AI in health care to free up doctors from admin, but I would have expected regulatory/compliance issues around data to slow that investment down a bit. Manufacturing might be even more of a surprise, unless the explanation is that shop floor employees need more hands-free ways to interact with systems. Your thoughts?" The authors responded:

Healthcare has always been a little challenged when it comes to technology. But now providers are seeing the possibilities with gen AI, which explains our survey results. For example, one of the biggest issues in healthcare right now is management of chronic diseases. Patients often do well under hospital care, but when they go back to their own health management, they revert to old habits and end up back in the hospital. It has been shown in studies that patients with better follow up care stay out of the hospital. But follow up care is expensive, time consuming, and there are simply not enough medical resources to go around.

Gen AI offers a chance to re-imagine personalized medicine with chatbots, digital assistants, and other virtual health partners to help patients manage their diseases. Additionally, gen AI promises to help healthcare workers clear more of their time for patients by handing off billing, transcription, charting, and other administrative duties to gen AI. All of these gen AI applications should positively impact patient care. As healthcare shifts to outcomes-based or value-based billing models, the way healthcare organizations deploy gen AI could be a huge differentiator.

As for manufacturing, our biggest takeaway is that manufacturers historically have had a lot of experience with other types of automation. For example, many sectors might use intelligent document processing or other types of AI-based automation, but manufacturers are flush with robots and other automation. In the long term, manufacturing may have fewer use cases, but the use cases are clear. Manufacturers in our study plan on using gen AI for faster prototyping and testing, generating new product ideas, and improving supply chain performance. In other words, this is just a new tool in a toolbox that was already full of other automation tools.

Obstacles to generative AI at enterprise scale

Not all of the respondents were doing gen AI projects at scale. I asked the authors about that: "You noted this challenge: 'One difficulty of a study like this is that Gen AI is still maturing as a technology. While all the organizations included in the report have adopted generative AI, some are merely piloting programs and others have full-scale projects. Some companies in the study have gen AI budgets of less than $10,000, while some are in the millions. Nonetheless, some sense of scale for gen AI spending is needed.' Did you pick up on any lessons in scaling gen AI while gathering the reporting data?" Scavo and Wagner replied:

Not surprisingly, it typically comes down to basic blocking and tackling. Is your data clean? How easy is it to access, and are your major sources of data integrated? If your data is clean and ready, the next step is determining the right use cases. Gen AI typically works best, at least for now, augmenting workers rather than replacing them.

Identifying a business process where gen AI can be integrated to speed up the process, and then applying typical change management practices, is crucial. As experience builds, you'll find users more comfortable applying it to areas that the enterprise may not have even considered. Build a team to capture these ideas and evangelize them to the larger enterprise.

My take

This is one of the most realistic gen AI reports I have seen. The sample size of 200 companies is not large enough to be definitive. However, when you consider that the criterion for inclusion is having an active gen AI project underway, 200 enterprises is a solid number - enough to take these results seriously.

Earlier, I noted my surprise at the gen AI concerns data. On my own concerns list, I would have put "inaccurate results" at the top. Why?

1. Accuracy of gen AI must be accounted for in project planning. Testing ways to improve output accuracy is crucial. I've documented numerous tactics in past diginomica AI articles, including prompt tuning, RAG (Retrieval Augmented Generation), industry LLMs, small LLMs, cross-checking one LLM against another, reinforcement learning, embedded guardrails, foundation models, knowledge graphs, and infusing industry and/or customer-specific data into a variety of these approaches. (A minimal sketch of the cross-checking tactic follows this list.)

2. Customers should be on their guard against any vendor that claims "no hallucinations." Getting rid of outright hallucinations could be possible in narrow use cases, but accuracy issues cannot be eliminated from this generation of gen AI tools. Some level of inaccuracy is, if you will, a feature of probabilistic systems that are not cognitive, have no real world model, and no underlying structure for grasping causality.
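To make the cross-checking tactic from point one concrete, here is a minimal Python sketch. The ask_primary and ask_verifier callables are hypothetical stand-ins for whatever model APIs a team actually uses - this illustrates the pattern, not any specific vendor's SDK, and it is not drawn from the Avasant report itself.

```python
# Minimal sketch: cross-check one LLM's answer with a second, independent model.
# The two ask_* callables are hypothetical placeholders for real model calls.
from typing import Callable


def cross_checked_answer(
    question: str,
    ask_primary: Callable[[str], str],   # wraps the main model's completion call
    ask_verifier: Callable[[str], str],  # a second model acting as a checker
) -> dict:
    """Ask the primary model, then ask a verifier model to judge the answer.

    Returns the draft answer plus a flag telling the application whether
    a human should review it before it reaches an end user.
    """
    draft = ask_primary(question)

    verdict = ask_verifier(
        "You are reviewing another assistant's answer.\n"
        f"Question: {question}\n"
        f"Answer: {draft}\n"
        "Reply with exactly SUPPORTED or UNSUPPORTED."
    )

    needs_review = verdict.strip().upper() != "SUPPORTED"
    return {
        "answer": draft,
        "verifier_verdict": verdict,
        "needs_human_review": needs_review,
    }


# Toy usage with stub models, just to show the control flow.
if __name__ == "__main__":
    result = cross_checked_answer(
        "What is our refund window?",
        ask_primary=lambda q: "Refunds are accepted within 30 days.",
        ask_verifier=lambda prompt: "UNSUPPORTED",  # a cautious checker flags the claim
    )
    print(result["needs_human_review"])  # True -> route the answer to a person
```

The disagreement flag is the useful output here: it can feed the human escalation and error reporting discussed below, rather than letting an unverified answer reach the user.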

But in the right use case, some level of inaccuracy is not a deal breaker (and yes, I count unsatisfactory answers under this umbrella). You can go a long way, for example, by limiting a bot's output to a collection of previously validated documentation. Embedding human escalations and error reporting into the AI design also solves many problems.
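As a rough illustration of that pattern - answer only from previously validated documentation, and escalate to a human when nothing matches - here is a deliberately naive sketch. The document set, the keyword-overlap scoring, and the escalation message are all invented for this example; a production system would use real retrieval and routing, but the escalation shape is the point.

```python
# Minimal sketch: constrain a bot to a small set of previously validated
# passages, and escalate to a human when no passage matches well enough.
# All documents and thresholds below are invented for illustration.

VALIDATED_DOCS = {
    "returns-policy": "Items can be returned within 30 days with a receipt.",
    "shipping-times": "Standard shipping takes 3 to 5 business days.",
}

ESCALATION_MESSAGE = "I can't answer that confidently - routing you to a support agent."


def answer_from_validated_docs(question: str, min_overlap: int = 2) -> dict:
    """Return the best-matching validated passage, or escalate to a human."""
    q_words = set(question.lower().split())

    best_id, best_score = None, 0
    for doc_id, text in VALIDATED_DOCS.items():
        score = len(q_words & set(text.lower().split()))
        if score > best_score:
            best_id, best_score = doc_id, score

    if best_id is None or best_score < min_overlap:
        # No validated passage covers the question: escalate instead of guessing.
        return {"answer": ESCALATION_MESSAGE, "escalated": True, "source": None}

    return {"answer": VALIDATED_DOCS[best_id], "escalated": False, "source": best_id}


print(answer_from_validated_docs("Can items be returned after 30 days?"))   # answers from the returns policy
print(answer_from_validated_docs("Do you accept cryptocurrency payments?"))  # escalates to a human
```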

Choosing use cases where outlier results won't cause lawsuits and business disruptions also factors in. Generating legally-binding documentation is even possible, albeit with savvy humans in the loop for review. Providing AI assistants to service employees is another viable scenario, as is more sophisticated fraud detection. Improving fraud detection rates from 95 to 98 percent is an example of a win - with an appropriate tolerance for inaccuracy. In other cases, the humans previously doing the tasks now handed to gen AI were perhaps less accurate than the gen AI.

This is why the report authors urge thorough testing prior to moving gen AI to production. Wagner and Scavo also do a good job of explaining why gen AI is currently about human staff augmentation, rather than headcount reduction, for the reasons I just noted and more.

There are more insights from this report than I can cover in one article. In the next installment, I'll get into Avasant's findings on gen AI costs, data quality, bias, and the top customer misconception about gen AI.
