AI will mean an arms race of fake IDs, warns new vendor alliance

Chris Middleton, July 26, 2023
Artificial Intelligence (AI) will give fraudsters, hostile states and bad actors chilling new ways to generate fake IDs. What’s the solution?

(Image by Gerd Altmann from Pixabay)

The rise of artificial intelligence will create an arms race of fake identities and new forms of fraud. That’s according to Chris Lewis, Head of Solutions at anti-fraud and anti-financial crime specialists, Synectics Solutions. 

Speaking at a techUK event, Lewis said that open-sourcing some AI platforms – where he mentioned Meta’s partnership with Microsoft on Llama 2, the latest iteration of its large language model (LLM) – would be “good for techies”. However, he warned:

It’s also going to be very good for fraudsters. In three to five years’ time, that will result in more sophisticated synthetic and false identities, generated by AI.

We'll end up with an arms race, where ‘my dad's bigger than your dad’ artificial intelligence algorithms are pitted against each other. And it will never stop.

Hardly an inspiring vision in a world that’s already battling deep fakes of every kind. 

This is a downside of our AI-infused age. Even at this early stage of popular adoption, it is becoming harder to distinguish AI-generated content from human-made documents across a multitude of media. Now factor in apparently authentic fake IDs, generated by AI, and the result would be a gift not only to opportunistic fraudsters, but also to hostile states and bad actors. It is potentially a nightmare scenario, one that risks dissuading some people from engaging with the digital world at all.

Lewis continued:

In the Financial Services space, the biggest problem without question facing the anti-fraud sector is AI-enabled social engineering.

That isn’t news. But new forms of AI-enabled crime are, and they are proliferating. On my US trip back in May, for example, news programmes were reporting a new type of scam: people receiving voice messages from family members, partners, or friends asking for urgent financial assistance because of an accident. In reality, the voices were AI clones taken from whatever rich media content was available online. 

(Professionals’ advice: don’t respond. Instead, phone the number you usually contact those people on and ask if they sent the message.)

Is the answer more digital ID? 

While many of us have grown accustomed to spotting phishing or spear-phishing attempts in emails, texts, and chats, the idea of being taught by AI to distrust the evidence of our own ears and eyes, and even our closest relationships, is chilling. 

Might digital channels become so polluted with this type of AI-generated effluent that people begin pulling their data and personally identifiable content from social channels to protect themselves against scraping? And, importantly, what can be done to prevent it?

One solution might be more digital ID, not less, suggested techUK. 

The spur for hosting the panel was to explore the fruits of a recent partnership between Synectics Solutions, digital identity tech specialist Yoti, and ID verification provider Mitek Systems. Project Shield was announced in June as an evolution of the rival companies’ shared-signals initiative for real-time threat intelligence. 

Shared-signals programmes are recognized as a viable way of gathering threat intelligence, but can be risky from privacy and data protection standpoints. Project Shield aims to address some of those concerns, and thereby encourage more people to adopt digital IDs in this increasingly dangerous environment.
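The article does not describe Project Shield's internals, but the privacy tension it aims to address can be sketched in code. A common mitigation in shared-signals schemes (a sketch under assumptions, not Project Shield's actual design) is for providers to exchange keyed hashes of identifiers rather than raw personal data, so records can be matched across the consortium without PII ever leaving either party:

```python
import hashlib
import hmac

def pseudonymise(identifier: str, shared_salt: bytes) -> str:
    """Return a keyed hash of an identifier, so providers can match
    records against each other without exchanging the raw personal data."""
    normalised = identifier.strip().lower()  # avoid case/whitespace mismatches
    return hmac.new(shared_salt, normalised.encode(), hashlib.sha256).hexdigest()

SALT = b"consortium-wide-secret"  # hypothetical key shared by the ecosystem

# Two providers hashing the same email independently derive the same token...
token_a = pseudonymise("Alice@example.com", SALT)
token_b = pseudonymise("alice@example.com ", SALT)
assert token_a == token_b

# ...while the token itself reveals nothing directly about the email.
```

The design choice here is the keyed (HMAC) hash: a plain unsalted hash of an email address is trivially reversible by brute force, whereas matching on a consortium-keyed hash keeps the pseudonyms useless to anyone outside the scheme.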

But might cynics interpret this as more a marketing alliance designed to shift product than a selfless programme to protect the vulnerable? Gareth Narinesingh is Head of Digital Identity at Mitek Systems. He explained:

We believe that digital identity is at an inflection point and about to take off in a big way – over the next three to five years – whether that’s one-time or reusable IDs. And, like the government, we believe in creating a trust framework that will allow trust to be garnered in digital identity in ways we haven't seen before: between businesses and customers, and between government departments and citizens. 

But it would be remiss of us – verging on the wilfully neglectful – if we did not consider that digital identity will also become the new fraud attack vector. And I say that with a heavy heart because it's got such potential to help people, to make things more convenient in the digital world. But sadly, we have to come to terms with that. 

So, we've established that it would be a good thing for us to collaborate and coordinate to create a new first line of defence. And the way we do that is by creating an initial proof of concept [POC] to effectively connect certified identity providers, so we can share intelligence and information on identity fraud threats. 

The idea is that an individual with malevolent intent cannot infiltrate the system through any one party.

But don’t different ID providers use a disparate mix of technologies? How interoperable might such a system be? He said:

Different identity providers do things in different ways. We use different technologies within our stack, and different ID proofing techniques. So, we would want to safeguard against a sophisticated fraudster finding any weak spot and exploiting it. 

So, Project Shield is designed to guard against that. It is designed so that identity providers can form an ecosystem and use a very sophisticated network infrastructure to send alerts. We can be listening for those alerts, then see if there's anything for us to be concerned about across the whole network.
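The article doesn't detail that network infrastructure, but the publish-and-listen pattern Narinesingh describes can be sketched in miniature. Everything below is illustrative, not Project Shield's actual API: `AlertBus` and `Provider` are hypothetical names, and the provider labels simply echo the companies in the story.

```python
from dataclasses import dataclass

@dataclass
class FraudAlert:
    issuer: str         # which provider raised the alert
    subject_token: str  # pseudonymised identifier, never raw PII
    reason: str

class AlertBus:
    """Toy stand-in for the shared alert network: any provider can
    publish an alert, and every subscribed provider hears it."""
    def __init__(self):
        self._listeners = []

    def subscribe(self, callback):
        self._listeners.append(callback)

    def publish(self, alert: FraudAlert):
        for callback in self._listeners:
            callback(alert)

class Provider:
    """An identity provider that both raises and listens for alerts."""
    def __init__(self, name: str, bus: AlertBus):
        self.name = name
        self.bus = bus
        self.watchlist = set()
        bus.subscribe(self.on_alert)

    def on_alert(self, alert: FraudAlert):
        if alert.issuer != self.name:      # ignore our own alerts
            self.watchlist.add(alert.subject_token)

    def raise_alert(self, token: str, reason: str):
        self.bus.publish(FraudAlert(self.name, token, reason))

bus = AlertBus()
yoti = Provider("yoti", bus)
mitek = Provider("mitek", bus)

# One provider spots a synthetic identity; the rest of the ecosystem
# immediately has the pseudonymised token on its watchlist.
yoti.raise_alert("tok-ab12", "synthetic identity detected")
```

The point of the pattern is the one the quote makes: a fraudster blocked at one provider cannot simply retry at another, because the alert has already propagated across the whole network.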

Good news. So, how did the three organizations end up collaborating? Lewis said:

This has been three years in the making. And not just this project, but other collaborative ones too that we've been working on with Mitek, Yoti, and other identity providers since the pandemic.

But the real thing that got this off the ground was to do with the wider requirements and guidance provided by the new Department for Science, Innovation, and Technology (DSIT), which set up the alpha and beta trust framework [the UK Identity and Attributes Trust Framework, which has been published in alpha and beta versions].

There are clear indicators within that to move towards collaboration between identity providers and other members of the trust ecosystem, and to share intelligence. We know from experience in the Financial Services, Insurance, Telecommunications, and public sectors that collaboration beats competition, particularly when it comes to fraud prevention.

John Abbott, Chief Commercial Officer, Global Partnerships, for Yoti added:

There’s an aligned vision of just getting something done, because we could sit in a lot of other forums and discussions [and not do that]. Between the three parties, we were clear that we had capability and skill sets to do it, to demonstrate that this is absolutely doable.

I think we'd rather be involved in driving and championing it now than after the event. With the right consideration of both consumers and businesses, it can be a very powerful addition without turning it into a nanny-state equivalent. This is something that should be preventing illegitimate forces from proliferating.

Or as Lewis put it:

We’re not going to wait until the house is on fire and wish that we'd installed a sprinkler system.

My take

An interesting initiative that points to a future of vendors collaborating to fix the problems that AI is already causing. Witness the joint announcement over the weekend of seven AI companies working with the US government to put more guardrails in place.
