Great expectations from AI, but a bleak house for cybersecurity
So, what the Dickens is going on? Reports, governments and vendors mull how to tackle the cybersecurity challenges posed by AI
It is a truth universally acknowledged that a single technology in possession of good fortune must be in need of an urgent cybersecurity solution (that’s enough literary references). And so, eight months into the hype cycle around AI and large language models (LLMs), the industry is abuzz with fears about the security implications.
Recent diginomica reports have looked at the need for companies like Private AI to anonymize and obfuscate personal data before users hand it to providers without a second thought. And at the digital identity implications of AI: an arms race of fake IDs, and open-source algorithms subverted to defraud unwary customers.
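For a flavour of what that scrubbing involves, here is a minimal sketch in Python using hand-written regex patterns; the patterns and sample text are invented for illustration, and commercial anonymizers such as Private AI’s rely on trained entity-recognition models rather than regexes.

```python
import re

# Minimal regex-based scrubbing sketch: production anonymizers use
# trained entity-recognition models, not hand-written patterns like these.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"(?:\+?\d{1,3}[ .-]?)?(?:\(\d{3}\)\s?|\d{3}[ .-]?)\d{3}[ .-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers with typed placeholders
    before the text leaves the organization for a cloud-hosted chatbot."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Call Jane on (555) 123-4567 or email jane.doe@example.com"))
# -> Call Jane on [PHONE] or email [EMAIL]
```

Note what the toy version misses: the name “Jane” passes straight through, which is precisely why real tools reach for machine learning rather than pattern-matching.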
These are important issues. Last week, seven providers – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI (with the latter pair partnering on products like the epochal ChatGPT) – stood with US President Joe Biden and committed to managing the risks associated with the technology.
Last year, the White House published its voluntary Blueprint for an AI Bill of Rights, in the wake of vendors allegedly scraping the internet for data to train LLMs – content that may be publicly accessible on the Web, but is not always public domain in law.
A flurry of lawsuits is ongoing or incoming as a result, while in Hollywood actors are pondering the implications of AIs using their likenesses. Might people begin to assert digital rights in themselves as unique entities in law? Inevitably, I think.
That aside, one of the goals of this latest White House and vendor initiative is to ensure that AI-generated content is clearly identified, flagged, or watermarked as such – a move that, on the face of it, would only seem unreasonable to companies seeking to pass off such content as made by humans.
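On the face of it, labelling is simple to implement; the simplest version is just a metadata tag. Below is a minimal sketch in Python using the Pillow imaging library (file and model names are invented for illustration):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Label a PNG as machine-generated via a text chunk in its metadata.
# This is a tag, not a robust watermark: re-encoding the image drops it.
image = Image.open("generated.png")  # hypothetical AI-generated file
meta = PngInfo()
meta.add_text("ai-generated", "true")
meta.add_text("generator", "example-model-v1")  # hypothetical model name
image.save("generated-labelled.png", pnginfo=meta)

# Reading the label back:
labelled = Image.open("generated-labelled.png")
print(labelled.text.get("ai-generated"))  # -> "true"
```

The catch is equally simple: anyone who re-encodes the image silently drops the tag, which is why more robust schemes embed signals in the pixels themselves.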
However, Rusty Cumpston, CEO of trustworthy AI provider RKVST Inc, was not convinced by the move (might it eat into specialist providers’ profit margins?). He said:
Watermarks on AI images offer some help to people who want to know whether or not data is trustworthy, but it’s like half a Band Aid. It doesn’t really do much to fix the problem. Data consumers need an easy way to instantly verify the provenance and integrity of images and other digital content so they can decide whether or not the data is safe to use.
One wonders which company might offer such a service. But he added:
Standards are emerging that embrace an open and interoperable trust model that works for any data, anywhere, including the C2PA provenance and authenticity metadata standard and the IETF SCITT integrity, transparency and trust architecture.
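Cumpston’s distinction matters: a watermark travels with the pixels, while provenance standards such as C2PA bind a cryptographically signed manifest to the content itself. As a heavily simplified sketch of the verification idea only – a bare SHA-256 digest standing in for a signed manifest, with invented file and creator names – consider:

```python
import hashlib
import json

def fingerprint(path: str) -> str:
    """SHA-256 digest of a file's bytes: any edit changes the value."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Producer side: publish a provenance record alongside the content.
record = {
    "file": "generated-labelled.png",   # hypothetical
    "sha256": fingerprint("generated-labelled.png"),
    "creator": "example-model-v1",      # hypothetical
}
print(json.dumps(record))

# Consumer side: recompute the digest and compare before trusting the data.
assert fingerprint(record["file"]) == record["sha256"], "content altered"
```

In the real standards the record itself is signed and logged for transparency, so a forger cannot simply publish a fresh digest alongside altered content.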
RKVST uses an open API and open-source SDKs, but some in the industry have warned that open-source AI itself presents a security problem: a gift to techies, but also to fraudsters, as we noted in our earlier report.
AI fighting AI
Meanwhile, the EU has drawn up the Artificial Intelligence Act, proposed legislation which, like GDPR, seeks to protect Europe’s citizens from corporate overreach. Systems must be safe, transparent, non-discriminatory, and sustainable, say the bloc’s lawmakers and regulators.
But how concerned are organizations about the cybersecurity risks of AI, as many rush to adopt it – in some cases tactically rather than strategically (a risk in itself)? Very, according to a new report from security provider RiverSafe, ‘AI Unleashed: Navigating Cyber Risks’.
According to research conducted among 250 cybersecurity leaders, 80% of respondents believe that AI is now the biggest cybersecurity threat to their businesses, while 81% see it as a bigger threat than a benefit.
Ouch. These concerns are slowing down the pace of AI adoption, adds the report, with 76% halting implementations entirely while they assess the risk. Meanwhile, 22% of security leaders say that staff are banned from using ChatGPT and similar chatbots – in some cases because users are pasting sensitive data into cloud-based tools.
The report says:
There are concerns for some companies about the systems they have in place to manage emerging threats. For example, 21% told us their cyber strategy is outdated and urgently needs to be refreshed to respond to new threats such as AI. Security leaders are also hiring fresh cyber talent into the workforce.
Our survey paints a worrying picture of the role AI is set to play in cybercrime in the future. Eighty-five percent of respondents told us that AI advancements will outpace cyber defences. In preparation for this threat, 64% have seen an increase in their cyber budget this year.
The rising volume and sophistication of attacks is also a key issue facing businesses. Sixty-one percent have seen an increase in cyberattack complexity due to AI, while 33% have seen little change, and just 4% have seen a decrease.
According to the survey, the overwhelming majority of security professionals – 95% – believe that AI regulation is essential as a result. Yet despite these worrying trends, RiverSafe advises organizations to “embrace AI” and not to let it “hold back your business”.
Unsurprisingly, it is not the only report in town this month. Another vendor, Censornet, believes that AI is the inevitable solution to the security threat posed by… AI.
Its own report, ‘State of AI in Cybersecurity: Is AI the Answer to Cybersecurity Threat Overload?’, says:
As AI becomes more sophisticated, traditional cybersecurity systems may struggle to fight AI-generated threats. And with over half (53%) of global IT decision makers concerned about ChatGPT’s ability to help hackers craft more believable and legitimate sounding phishing emails, it’s no wonder AI is at the top of the cybersecurity agenda.
Indeed. But according to Censornet, organizations – especially SMEs – will be fighting AI with AI, with a claimed 84% of decision-makers planning to invest in AI-automated security solutions in the next two years, and 48% this year.
Seventy-six percent are investing in AI-automated security alerts, while 63% have reduced the number of security vendors they use, most opting to consolidate around a single provider or a core group of providers.
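What counts as an “AI-automated security alert” varies widely by vendor, but at the simplest end it is statistical anomaly detection over event streams. A toy sketch in Python, with invented failed-login counts, shows the basic shape:

```python
from statistics import mean, stdev

# Hourly failed-login counts for the past day (invented numbers).
baseline = [12, 9, 14, 11, 10, 13, 12, 8, 11, 15, 10, 12,
            9, 13, 11, 10, 14, 12, 11, 9, 13, 10, 12, 11]

def is_anomalous(observed: int, history: list[int], threshold: float = 3.0) -> bool:
    """Flag counts more than `threshold` standard deviations above the mean."""
    mu, sigma = mean(history), stdev(history)
    return (observed - mu) / sigma > threshold

print(is_anomalous(11, baseline))   # False: within normal traffic
print(is_anomalous(240, baseline))  # True: likely credential-stuffing burst
```

Commercial products layer learned behavioural models on top of this kind of baseline, but the principle is the same: alert on deviation, then let a human decide.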
Censornet CEO Ed Macnair said:
The democratization of AI is a game-changer in the world of cyberattacks. Now more than ever, bad actors can easily manipulate the power of AI to automate and advance attacks.
Generative AI is helping hackers create highly persuasive content for phishing, or business email compromise (BEC) attacks. It’s also much easier to create convincing deep fakes in manipulated videos and images.
So, the pressing question that all cybersecurity teams need to ask is how to adapt to these changes.
But he added:
While AI-powered tools are a core part of the solution, the human element cannot be ignored. Training, education, and embedding a culture of security awareness throughout the organisation are equally as important as any AI-powered tool in protecting against the new threat landscape.
My take
Wise words.
As with all such reports, the independence of respondents to vendor surveys is sometimes unclear: are they customers approached by pollsters on providers’ behalf? That aside, the findings remain a useful snapshot of the state of play.
Fighting technology with technology means an escalation of complexity – a risk in itself. So, as Macnair advised, never overlook the critical human element. It may be where both the danger and the solution reside.