AI - two reports reveal a massive enterprise pause over security and ethics

Chris Middleton, March 27, 2024
Summary:
Two surveys by tech vendors reveal a significant slowdown in AI adoption as business and IT leaders consider the legal and practical ramifications.

(Image by Gerd Altmann from Pixabay)

No one doubts that artificial intelligence is a strategic boardroom issue, though diginomica revealed last year that much of the initial buzz was individuals using free cloud tools as shadow IT, while many business leaders talked up AI in their earnings calls just to keep investors happy. 

In 2024, those caveats remain amidst the hype. As one of my stories from KubeCon + CloudNativeCon last week showed, the reality for many software engineering teams is the C-suite demanding an AI ‘hammer’ with little idea of what business nail they want to hit with it. 

Or, as Intel Vice President and General Manager for Open Ecosystem Arun Gupta put it: 

When we go into a CIO discussion, it’s ‘How can I use Gen-AI?’ And I’m like, ‘I don’t know. What do you want to do with it?’ And the answer is, ‘I don’t know, you figure it out!’

So, now that AI Spring is in full bloom, what is the reality of enterprise adoption? Two reports this week unveil some surprising new findings, many of which show that the hype cycle is ending more quickly than the industry would like.

First up is a white paper from PagerDuty, the $2 billion cloud incident-response provider. According to its survey of 100 IT leaders at Fortune 1000 companies, 100% are concerned about the security risks of the technology, and 98% have paused Gen-AI projects as a result. 

Those are extraordinary figures. However, the perceived threats are not solely about cybersecurity (with phishing, deepfakes, complex fraud, and automated attacks on the rise), but are rooted in what PagerDuty calls the “moral implications”. These include worries over copyright theft in training data and any legal exposure that may arise from that. 

As previously reported (see diginomica, passim), multiple IP infringement lawsuits are ongoing in the US, while in the UK, the House of Lords’ Communications and Digital Committee was clear, in its inquiry into Large Language Models, that copyright theft had taken place. The peers arrived at that conclusion after interviewing expert witnesses from all sides of the debate, including vendors and lawyers.

According to PagerDuty, unease over these issues keeps more than half of respondents (51%) awake at night, with nearly as many concerned about the disclosure of sensitive information (48%), data privacy violations (47%), and social engineering attacks (46%). They are right to be cautious: last year, diginomica reported that source code is the most common form of privileged data disclosed to cloud-based AI tools.

The white paper adds:

Any of these security risks could damage the company’s public image, which explains why Gen-AI’s risk to the organization’s reputation tops the list of concerns for 50% of respondents. More than two in five also worry about the ethics of the technology (42%). Among the executives with these moral concerns, inherent societal biases of training data (26%) and lack of regulation (26%) top the list.

Despite this, only 25% of IT leaders actively mistrust the technology, adds the white paper – cold comfort for vendors, perhaps. Even so, it is hard to avoid the implication that, while some providers might have first- or big-mover advantage in generative AI, any that trained their systems unethically may have stored up a world of problems for themselves.

However, with nearly all Fortune 1,000 companies pausing their AI programmes until clear guidelines can be put in place – though the figure of 98% seems implausibly high – the white paper adds:

Executives value these policies, so much so that a majority (51%) believe they should adopt Gen-AI only after they have the right guidelines in place. [But] others believe they risk falling behind if they don’t adopt Gen-AI as quickly as possible, regardless of parameters (46%).

Those figures suggest a familiar pattern in enterprise tech adoption: early movers stepping back from their decisions, while the pack of followers is just getting started. 

Yet the report continues:

Despite the emphasis and clear need, only 29% of companies have established formal guidelines. Instead, 66% are currently setting up these policies, which means leaders may need to keep pausing Gen-AI until they roll out a course of action.

That said, the white paper’s findings are inconsistent in some respects, and thus present a confusing picture – conceivably, one of customers confirming a security researcher’s line of questioning. Imagine that: confirmation bias in a Gen-AI report!

For example, if 98% of IT leaders say they have paused enterprise AI programmes until organizational guidelines are put in place, how are 64% of the same survey base able to report that Gen-AI is still being used in “some or all” of their departments? 

One answer may be that, as diginomica found last year, ‘departmental’ use may in fact be individuals experimenting with cloud-based tools as shadow IT. That aside, the white paper confirms that early enterprise adopters may be reconsidering their incautious rush. 

Headwinds

A second Gen-AI adoption report this week, this time from “search-powered AI” company Elastic, takes a more upbeat and evangelical view – no surprise, given the vendor’s stock-in-trade. Despite that, it acknowledges the legal, regulatory, and organizational headwinds. 

According to Matt Riley, GVP & General Manager of Search at Elastic: 

In a little more than 12 months, the disruptive potential of Gen-AI has shifted from reverie to reality.

In his view, that reality is greater optimism about what the technology might achieve. Indeed, Elastic sees 88% of IT leaders considering increasing their investments.

Worldwide, Gen-AI is “poised to deliver tangible benefits”, says the company, with more than half of respondents (57%) anticipating that it will improve resource usage, operational efficiency, and productivity, 50% believing it will upgrade the customer experience, and 48% saying it will lead to more accurate decision-making.

Even so, the report identifies exactly the same challenges as PagerDuty’s, including chaotic data estates, and fears about privacy, security, regulation, and internal skills gaps.

Not only that, but the massive double-take on enterprise AI programmes is real, it finds. The report says:

Nearly all respondents [report that] adoption is being slowed, primarily by fears around the security and privacy of the technologies (40%), regulation issues (37%), and the skills gap to implement the technologies in house (36%).

My take

So, most enterprises are slowing AI adoption – even the majority that remain optimistic about the technology. A sign of more mature, strategic thinking in the boardroom, rather than the ‘me too’ tactical moves that characterized 2023. While not quite a new AI winter, this brace of reports reveals that Springtime can be chilly, wet, and changeable, even as the days get longer.
