Netskope CEO Sanjay Beri - AI bubble is partly hot air, and that is bad news for enterprise security
Summary: Pricking the bubble of a market – AI – that is noisy for a reason.
That the AI era has begun is obvious. However, a story in the Financial Times this week pricked AI’s bubble – or at least suggested it is as much hot air as substance.
The piece, ‘AI buzz on company earnings calls goes silent in regulatory filings’, revealed an uncomfortable truth to AI’s noisiest evangelists. While 40% of S&P 500 companies have talked up the technology in their earnings calls this year, less than one in six mentioned it in their corresponding regulatory filings.
The conclusion is inescapable: talking about AI is good for business in 2023. It drives up a company's stock price by making the business appear cutting edge. But fewer than 16% of major enterprises are implementing it top-down and strategically. If more are, they haven't told the regulators.
One interviewee said the quiet part out loud as everyone rushes to plug into the buzz. Bryant VanCronkhite, a senior portfolio manager at Allspring Global Investments, told the FT:
The joke out there was that all you had to do last quarter was say ‘AI’ and your stock would pop immediately. Some companies are saying they’re doing AI when they’re really just trying to figure out the basics of automation.
I can confirm that. diginomica receives daily communications from companies with ‘AI’ in the subject line, but scroll past the buzzy me-too intros and they are frequently just about automation – or nothing at all. The noise is deafening, but there’s precious little signal.
The FT added:
The motivation for executives to join the AI conversation is clear. The seven largest AI-linked tech groups have been responsible for the majority of the US stock market’s rise this year.
Indeed. Take NVIDIA, which has just announced a stellar quarter in revenues and profits. In the calendar year to date, its shares have soared from just over $143 to more than $471, giving it a market cap of $1.16 trillion. That’s a sustained, eight-month stock increase of 230% as its chips power generative AI’s infrastructure. Investors want a piece of that, so everybody say “AI!”
Clearly, NVIDIA’s hardware is powering something in the real world. But what? An AI transformation of the enterprise? Not quite. Or at least, not yet.
Another report tells the real story. ‘AI Apps in the Enterprise’, from cybersecurity unicorn Netskope, reveals that AI app use in the enterprise is certainly booming: up 22.5% over the preceding two months. As the data comes directly from 1.7 million cloud-based users across 70 global organizations, we should take it seriously.
Enterprise use
So, what’s going on? Why the conflicting messages? The answer lies in the massive difference between ‘in the enterprise’ and ‘enterprise use’.
The reality is that the boom in AI usage is mainly individuals playing with cloud-based generative tools like ChatGPT, Bard, Midjourney, and Stable Diffusion, without the oversight, knowledge, or permission of CXOs (who are talking about AI to impress investors).
In short, it’s shadow IT, and while some useful, excellent, innovative work is being done with GPT et al, the rest of it serves no purpose beyond play. It’s not solving the planet’s biggest problems.
That mass, consumer-style adoption has been encouraged by the availability of free tools, the vendors of which saw the arts as low-hanging fruit that was there for the scraping. (Make a picture and bankrupt an artist! Write a Beatles song and make composers cry!)
By playing to individuals’ basest instincts, therefore – offering them something for nothing at the touch of a button, negating the need to pay for skill, expertise, or talent – unscrupulous vendors have established generative AI as crack for the time-poor or just plain lazy. The risks of doing so were explored in my earlier reports, here and here.
Given some AIs’ serious potential to help tackle climate change, killer diseases, sustainability challenges, and more, it’s enough to make one weep at the stupidity of regarding creatives and experts as the world’s most urgent problem. (If only we could stop skilled, talented people from earning a living, everything would be fine!)
But that shadow use of AI by individuals or departments – some of which may be driven by real business need – is a cybersecurity nightmare for the organization, says the company behind the report. That’s because those same users are divulging private or privileged data in the cloud. Not only that, but they may be handing it to vendors that we can’t assume will be ethical.
Take the large language model (LLM) chatbot ChatGPT, for example. Netskope’s research finds that users post source code to it more than any other type of sensitive data: 158 incidents for every 10,000 users a month. Other forms of secret, privileged, or personally identifiable information are also haemorrhaging out of organizations into cloud-based AIs.
If Netskope’s figures (and my calculations) are correct, this implies a staggering 1,580,000 leaks of source code to OpenAI’s tool a month, given that ChatGPT is believed to have 100 million users. Such leak figures must be an overstatement – not everyone has access to source code – but the core problem is significant.
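For transparency, here is the back-of-the-envelope arithmetic behind that figure. It is a rough extrapolation that assumes the widely reported estimate of 100 million ChatGPT users and applies Netskope’s enterprise-measured rate to that whole user base – which is precisely why it overstates the real number:

```python
# Back-of-the-envelope check of the extrapolation above.
# Assumption: Netskope's rate (measured on enterprise users) is applied
# to the widely reported estimate of ChatGPT's total user base.
incidents_per_10k_users_per_month = 158   # Netskope's measured rate
assumed_chatgpt_users = 100_000_000       # widely reported estimate

implied_leaks_per_month = incidents_per_10k_users_per_month * (assumed_chatgpt_users / 10_000)
print(f"{implied_leaks_per_month:,.0f} implied source-code leaks a month")  # -> 1,580,000
# Almost certainly an overstatement: most of ChatGPT's users never handle
# source code, let alone their employer's.
```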
So, what’s the solution? More AI, according to Netskope CEO Sanjay Beri. He tells me:
What's happening with generative AI, and AI overall, is not driven by companies. It's driven by people, by end-users. The same thing happened with cloud and mobile. And so, my thesis was we needed to build a solution that enables the way that people actually want to work, and secures their most valuable asset: their data.
A solution based on AI, of course. And credit to the company for building a system that tells CIOs and CISOs which apps people are using, including covertly, then flags when they divulge sensitive information.
But is he right that end-users are demanding AI tools? Isn’t it more the case that vendors like OpenAI have actively encouraged them to play with them online for free? In this way, they’ve created a short-term dependency, which can deepen into a longer, paid relationship, dealer style. (Didn’t OpenAI once claim to be non-profit? Next thing you know, it’s your paid creative team!) Beri says:
It's nuanced. For comparison, the rise of cloud was not because an enterprise like Coca Cola said, ‘We’re going to leverage cloud,’ right? It was because their employees were consumers, but also workers.
Yes, but ‘cloud’ was a marketing confection, concocted in Silicon Valley in the Noughties – as one cloud CEO told me at the time “to give the consultants something to sell”. Users had to be sold that idea first – and hard – before they handed their data, en masse, to a server warehouse in San Mateo. Beri continues:
But it wasn't the IT group, right? It wasn't the board saying, ‘Let's adopt cloud’. It was their employees who were prosumers. They're the ones who brought in Dropbox, they're the ones who started using cloud apps for storage.
The average number of cloud applications across a company is over 1,000. And 90+% of those are not owned by IT. They're owned by individual users, or business units who said, ‘You know what, I'm not waiting for IT to sanction anything’. But absolutely, vendors pushed it.
Fair point. And that process is now continuing – at an accelerated pace – with AI. And the IT department needs to become a security-minded enabler. Beri says:
In the past five or six months, I've talked to over 1,000 CXOs. And among their top-of-mind things are, ‘How do we control generative AI? How do we make sure sensitive data isn't being used in queries? And how do we ensure they [the tools] are not learning off it?’
We can detect when sensitive data, like source code, is being used in a query. Then it prompts the user, saying, ‘You're allowed to use generative AI, but you can't query it with our source code’. It stops that, but it doesn't stop them using those tools in the right way.
He adds:
Don't say no, that's a losing game. Say yes, but put in the guardrails. That's what we see people doing now.
A far cry from the days when “just say no” was seen as critical advice to a generation.
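For the curious, here is a minimal sketch of what that kind of ‘yes, but with guardrails’ check might look like in practice. It is a hypothetical illustration only – crude regex heuristics standing in for a real data-loss-prevention engine – and not Netskope’s actual implementation:

```python
# Hypothetical illustration only - not Netskope's implementation.
# A minimal outbound guardrail: scan a prompt for patterns that look like
# source code or credentials before it is sent to a generative AI API,
# and coach the user rather than banning the tool outright.
import re

# Crude heuristics; a real DLP engine would use trained classifiers and policy.
SENSITIVE_PATTERNS = {
    "source code": re.compile(r"(^|\n)\s*(def |class |#include|import |public static void)"),
    "API key": re.compile(r"\b(sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16})\b"),
    "private key": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the kinds of sensitive content detected in the prompt."""
    return [kind for kind, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def guarded_send(prompt: str, send_fn):
    """Send the prompt only if it passes the policy check; otherwise coach the user."""
    findings = check_prompt(prompt)
    if findings:
        # 'Say yes, but put in the guardrails': the user keeps the tool,
        # just not with this particular data.
        print(f"Blocked: prompt appears to contain {', '.join(findings)}. "
              "You may use generative AI, but not with company source code or secrets.")
        return None
    return send_fn(prompt)

if __name__ == "__main__":
    demo = "Please refactor this:\ndef connect(key='sk-abcdefghijklmnopqrstuvwxyz123456'):"
    guarded_send(demo, send_fn=lambda p: "(model response)")
```

A real product would layer classifiers, policy engines, and user coaching on top of something like this, but the principle is the same: let people use the tools, just not with the crown jewels.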
Complexity
But when it comes to securing AI, is adding yet more complexity, and buying more technology to fix technology problems, a better solution than common sense, regulation, and clear policies? Especially when CXOs may end up being the senior responsible owners of data security and protection problems caused by users playing with gateway apps?
Beri says:
There are always layers of defence. And the end user is absolutely one of those layers.
Then, unprompted, he notes:
Obviously, generative AI has been used to craft phishing and business-compromising emails. And eventually some of those will make their way to your end user, and that user needs training to never click on a link.
There’s no getting around that AI will be used for bad. It’s inevitable, just as cloud is used for bad – most malware today is distributed via the cloud. Yet, no one would argue that cloud itself is bad. It's the same with AI. It will be used for bad, and it'll never stop being used for bad. So, our job is to make sure it's used for good.
OK. So, what does he make of that FT story that top-down, strategic, organizational use of AI is, in many cases, a fiction designed to impress investors? People whose finger is poised over a pleasure-giving button of their own: the one marked ‘Buy! Buy! Buy!’ Beri responds:
The world is dominated by mid-market companies, by small and medium businesses, and I would argue that most of those are not using AI. But when you get to large enterprises, they want it as a competitive advantage. And yes, that increases their multiple in the stock market.
But of course, there are some companies who are using AI for real, strategically, in some powerful use cases. But I would say it's the minority, to be frank, who have anything in production. And that’s because this is early innings, right? And we will all still be talking about it in 10 years.
But in reality, we're just very early [in the AI era]. There are a lot of companies that want to use AI strategically, but they are cognizant of, ‘Wait a minute, I have to use it in the right way. I can't be using other people's data without their consent’. If I'm using generative AI, say in my marketing department, I ask, ‘What and where is this image generated from? And who originally owned it?’
Let’s hope more people have that attitude.
My take
A refreshingly candid CEO with, of course, some AI to sell of his own. But that’s not to say he’s wrong, or that products like his won’t be a critical line of defence against the misguided use of free tools.