Why 80% of CISOs see AI as the biggest threat to their business

Chris Middleton, October 11, 2023
Summary:
RiverSafe CEO Suid Adeyanju talks through his firm's latest research on AI and security.

Privacy threat at work - laptop with eyes in screen © Juergen Faelchle - Shutterstock

A large majority of security leaders (80%) believe Artificial Intelligence (AI) is the biggest cyber threat to their business. Even more (81%) believe the risks of AI outweigh the many advantages.

That’s according to London-based cybersecurity consulting provider RiverSafe, whose clients include Vodafone, BP, Aviva, Sophos, Nomura, and Sky. The company interviewed 250 Chief Information Security Officers (CISOs) and equivalents in UK organizations for a new report, AI Unleashed: Navigating Cyber Risks.

With UK Government data suggesting that the risks are much higher than average for large enterprises (69% have experienced a breach or attack over the past 12 months), medium-sized enterprises (59%), and high-income charities (56%), AI might appear to be a technology too far for CISOs.

But not according to RiverSafe. Its report reveals that only 11% lack confidence in their organization’s ability to fend off an attack – despite 18% admitting to a serious breach in 2023, and more to an incursion of some kind. The report says:

The rising volume and sophistication of attacks are also key issues facing businesses. Sixty-one percent have seen an increase in cyber-attack complexity due to AI, 33% have seen little change, and just 4% have seen a decrease.

The threat of a breach continues to worry cyber leaders. Sixty-three percent expect a rise in data loss within their organization this year, compared with just 32% expecting no change and 5% expecting a reduction in data loss.

Might regulation help? RiverSafe finds strong support for tougher measures, with 95% agreeing that AI regulations are necessary. The report adds:

In the last year alone, government research suggests that across all UK businesses there were approximately 2.39 million instances of cybercrime and approximately 49,000 instances of fraud as a result of cybercrime.

With this in mind, the case for further investigation of the risks associated with AI adoption, data management, and cybersecurity is clear.

Yet despite this, RiverSafe’s number-one recommendation is:

Embrace AI and don’t let it hold back your business.

Good or bad?

With the complexity of AI risk increasing all the time via deep fakes, AI-enhanced phishing, voice cloning, and more, this mix of over-confidence and admissions of failure – alongside exhortations to adopt the technology – might be read as baffling. Yet this is characteristic of most security reports. AI is good for business, perhaps – if you’re a security company.

I asked RiverSafe co-founder and CEO Suid Adeyanju for his perspective. 

The report notes that some organizations are banning the use of tools like ChatGPT, with some users pasting privileged data into cloud-based tools. Is one challenge that strategic enterprise adoption of AI is minimal at present, with most deployments being shadow-IT use by individuals? Adeyanju says:

It’s interesting, that observation, and I think it's fairly accurate. There's a lot of talk about the adoption of AI within organizations, but the actual doing of it is not coordinated. And therefore, it's exactly that: introducing all sorts of risks within the business. Now, what we have seen is that, even at board level within the larger organizations, they've got initiatives to adopt AI because they see the benefits that it brings to the organization.

That said, the RiverSafe report notes that most CISOs believe the risks outweigh those benefits. Adeyanju argues: 

Yes. In terms of having a program in place to actually drive that objective [to adopt AI strategically], we haven't seen a well-coordinated one. We've seen business units using the broad objective as a reason to investigate the use of AI within some areas of the business.

But because there's no overarching strategy, this could potentially become a big issue. Because once it is adopted, users start uploading all sorts of information into ChatGPT, for example, and it can be proprietary. So, it requires careful consideration, and the right kind of policies in place to ensure that protections are there.
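
As an illustration of the kind of protection he means – a sketch, not a RiverSafe recommendation – a first-pass guardrail might simply screen outbound prompts for proprietary markers before they reach a cloud-based tool. The patterns and the check_prompt helper below are hypothetical placeholders; a real data loss prevention (DLP) policy would be far broader:

```python
import re

# Hypothetical markers of proprietary content; a real DLP policy
# would be far broader and tuned to the organization.
BLOCKED_PATTERNS = [
    (re.compile(r"\bconfidential\b", re.IGNORECASE), "confidentiality marking"),
    (re.compile(r"\b(?:sk|api)[-_][A-Za-z0-9]{16,}\b"), "possible API key"),
    (re.compile(r"\b\d{16}\b"), "possible card number"),
]

def check_prompt(text: str) -> list[str]:
    """Return the reasons, if any, to block a prompt before it leaves the network."""
    return [label for pattern, label in BLOCKED_PATTERNS if pattern.search(text)]

violations = check_prompt("Summarize this confidential product roadmap: ...")
print("Blocked: " + ", ".join(violations) if violations else "Allowed")
```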

The report suggests that over three-quarters of organizations (76% of respondents) have gone as far as halting their AI programs over security fears. According to Adeyanju: 

With a lot of the AI tools out there, some of the threats that were easily picked up before have now become much harder to spot. Take phishing. Traditionally, it has been easy to put in efficient tools to look at grammar and spelling mistakes – things that jump out at you when you look at attempted email compromises.

But with the use of generative AI to improve the effectiveness of those, phishing is becoming a lot harder to pick up. Attacks are being targeted against organizations and they're a lot more sophisticated than they used to be. But in terms of organizations trying to halt the implementation of AI, I think that’s primarily to do with the fact that it's the unknown. It’s the fear factor more than anything else.
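
To make the phishing point concrete, here is a minimal sketch, assuming a toy wordlist rather than a production dictionary, of the kind of spelling-based heuristic Adeyanju describes – and of why a lure polished by generative AI slips past it. The phishing_score function and its weights are illustrative, not any vendor’s method:

```python
import re

# Toy wordlist standing in for a real dictionary; a production filter
# would use a full lexicon or a proper spell-checking library.
KNOWN_WORDS = {
    "dear", "customer", "your", "account", "has", "been", "suspended",
    "please", "verify", "now", "click", "the", "link", "below", "to",
}

URGENCY_CUES = {"urgent", "immediately", "suspended", "verify", "now"}

def phishing_score(body: str) -> float:
    """Crude score: misspelling rate plus a small bonus per urgency cue."""
    words = re.findall(r"[a-z']+", body.lower())
    if not words:
        return 0.0
    misspelled = sum(1 for w in words if w not in KNOWN_WORDS)
    urgency = sum(1 for w in words if w in URGENCY_CUES)
    return misspelled / len(words) + 0.1 * urgency

# A generative-AI-polished lure has almost no misspellings, so the
# heuristic scores it lower; exactly the gap Adeyanju describes.
legacy_lure = "Dear custmer, your acount has been suspnded, verify now"
polished_lure = "Dear customer, your account has been suspended. Please verify now."
print(f"legacy lure:   {phishing_score(legacy_lure):.2f}")
print(f"polished lure: {phishing_score(polished_lure):.2f}")
```

On these inputs the clumsy lure scores roughly 0.53 against 0.30 for the polished one; a filter thresholded on misspellings alone would wave the polished message straight through.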

So despite the growing complexity and plausibility of attacks, RiverSafe’s advice is to seize the day on AI, he says: 

Clearly, there are benefits to adopting it, such as automating laborious manual tasks.

On the other hand, it might be argued that the rise of tools like ChatGPT, Midjourney, Stable Diffusion, and the rest reveals that AI isn’t delivering on that promise. It’s doing the opposite: automating creative processes and turning employees into button-pushers. Adeyanju counters:

A lot of organizations have left it to individual business units to experiment. And when people are allowed to experiment, they will do that. They'll go as far as their imagination allows them to in their domain.

But that’s why guardrails should be put in place to say, ‘This is what we actually need AI for’. And, ‘These are the areas where we need to focus adoption’. So there's a purpose, and everyone stands behind that purpose. 

But a lot of the businesses we speak to now just talk about it. They haven't got an overarching program in place. Nothing that says, ‘This is the reason why we want to adopt the technology’, or, ‘These are the activities we would like AI to do for us’.

Learn your lessons

That being the case, there are some key lessons to consider, he argues: 

Understand the real reasons behind your organization's adoption of AI. Then bake security into that, to ensure that the risks associated with it are properly assessed.

Have a conversation from the outset with all the key stakeholders to understand what their intentions are – including from a defensive point of view. And look at what technologies are out there, including what vendors are doing in terms of integrating AI into their own tools.

So, what does it mean [when vendors do that]? What sort of protection does that company provide its customers with? Get all those evaluations done properly before adopting these tools.

My take

Simple advice from RiverSafe, but good.

In related news this week, San Francisco-based data resilience provider Splunk has published its own AI-centric 2023 CISO Report.

It documents a worse scenario than the RiverSafe research, which was based on June 2023 data. According to Splunk, 61% of security leaders now say their business has been disrupted by hostile actors, with 90% of respondents experiencing some form of hostile or opportunistic attack.

Ransomware is one of the most common forms of attack, with European companies the most likely to pay their attackers; 61% paid sums of up to $99,999.

According to Splunk, 57% of security leaders expressed interest in using generative AI for their cybersecurity response – fighting fire with fire, and suggesting a near future of ever-greater systems complexity and opacity.

Buyer – and user – beware. We are entering an era where we can no longer trust the evidence of our own eyes, ears, and contact lists.
