
AI in criminal justice – the risks of dehumanizing law enforcement

Chris Middleton, April 26, 2024
Summary:
In the second of our reports, we hear from a range of experts on how adopting AI without awareness of risks – or having the requisite skills – could dehumanize systems that ought to focus on individuals.

(Image of people crossing the road, their profiles blurred, by Brian Merrill from Pixabay)

In her book Freedom to Think (Atlantic Books, 2023), author Susie Alegre describes AI in the legal and criminal justice systems as “vomiting out” prejudiced predictions based on “infected” information.

Strong words. But Alegre is no self-appointed political agitator; she is a respected barrister at the internationally renowned legal practice Doughty Street Chambers - an expert voice with insights into the justice system, and someone whose job, at this stage of her career, is unlikely to be under threat from the technology.

My previous report on AI in justice explored how, whatever their merits, AI and predictive analytics can only model human systems and automate existing, sometimes flawed, assumptions. 

An AI might work with a mass of accurate data, but if an algorithm is built on a biased view of human behaviour - rooted in someone’s age, ethnicity, socio-economic group, belief, gender, sexuality, or even postcode - then it can only confirm the bias coded into its design. It will look for evidence that an assumption is correct, and probably find it.

Speaking at a Westminster Legal Policy Forum on AI in criminal justice, Silkie Carlo, Director of citizen rights organization Big Brother Watch, picked up Alegre’s theme, saying:

We know there are already historic biases and issues in mass data sets that relate to criminal justice. An automated profile increases the chances that ‘someone like you’ could be seen as dangerous to society. 

The individual judgments that we come to rely on in the criminal justice system are being stripped out. Wherever there is a reliance on AI, people aren't being looked at as individuals. They're being looked at on the basis of someone who is like them, and who is typically seen as being involved in these kinds of things [criminal behaviours]. And that's dangerous.

As we explored in my previous report, however, human judgements are often flawed and biased too - deeply so, in some cases. And there are many scenarios where AI could act as a critical counterbalance. But Carlo continued:

Facial recognition has come up, of course. But what’s different now is that the [British] government, for the first time, has pitched live facial recognition as a pre-election policy, and suggested that not only will it be used in specific locations, but also on fixed cameras. On CCTV, for example, and at train stations.

That policy plays both to the public’s fear of crime, and to the mistaken belief that such technologies can already be trusted to be accurate and safe, and not lead to innocent people being arrested because of misidentification or false assumptions.

Trusting technology over people can be a minefield. And there is no more tragic example of this than the Horizon IT/Post Office scandal in the UK, in which over 900 sub-postmasters were convicted of fraud, theft, and false accounting over a 15-year period - convictions based on errors that were in fact produced by Fujitsu’s accounting software.

The cover-ups, lies, and human cost since 1999 - including four suicides - have been perhaps the greatest miscarriage of justice in British history. Anything but blame a faulty technology or a strategic government supplier, it seems.

But back to the UK’s enthusiastic support for live facial recognition, which we can only hope is not another scandal in waiting. Carlo said:

It is important to say this is controversial. Nowhere else in Europe is live facial recognition used in that way. But we do see it being used like that in China and Russia. Parliament has never had a debate about live facial recognition [it has been debated by a House of Lords committee]. There's no explicit legislation, and there's not one law in the country that has the words ‘facial recognition’ in it.

From Carlo’s description, therefore, this is a technology being positioned, for political ends, as a totem of safety, even though its use has been outlawed in some other places - such as California, the very state where some of these systems were developed and trained.

Protecting individuals

So, what can be done to minimize the risk from such technologies in government or public services? After all, imagine a hypothetical future in which someone is criminalized or denied basic services based on false or partial data, badly designed algorithms, or analysis rooted in flawed assumptions. A dehumanized world.

An overstatement? At this point I should declare a personal interest. Some years ago, I moved from one apartment to another in the same city in which I had lived for a decade. Within a week, I received a genuine legal letter from a council 70 miles away, in a town I had never visited, let alone lived in. It demanded instant payment of a large, outstanding council tax debt: a four-figure sum. But my own council tax was fully up to date.

Identity theft? No. I phoned that council and was informed that someone with a similar (and relatively common) name had absconded from an address in that town weeks earlier. Because I had recently moved home, an algorithm pinned the debt on me at my new address. A completely automated decision that wrongly identified me as a tax dodger. 

And that’s not all: it was added to my credit reference. As a direct result of that, every high street bank in the UK refused to open an account for my new business, refusals that were each added to my credit reference too. That single automated, catastrophic error took years for me to undo. 

To whom could I appeal a cascade of wrong decisions and refusals - a virus, in effect, that infected everything it touched, every financial data point? Indeed, that automated error forced me to close a business that could not operate, legally, without a bank account. And it prevented me from accessing any form of credit for several years. All thanks to an algorithm.

So, I am living proof of the damage that automated decisions can do, when combined with badly designed algorithms, flawed data, and false assumptions. All it took was a name and ‘has recently moved house’. The output? Tax dodger.

But my experience is trivial compared with the hundreds of sub-postmasters whose lives, against all common sense, were destroyed by trust in technology trumping trust in people. And how long has it taken to undo the damage of the Horizon IT case? 

Twenty-five years and counting. Indeed, it took a recent TV drama to draw public attention to a scandal that Computer Weekly and Private Eye had been covering for years.

Now, in the above contexts, consider the use of AI, analytics, and algorithms in policing and criminal justice. Then factor in the enthusiastic adoption of flawed technologies - such as live facial recognition, which has been shown to have higher error rates when identifying people from ethnic minorities. You can see the emergent risks. 

Remember Carlo’s words:

Wherever there is a reliance on AI, people aren't being looked at as individuals. They're being looked at on the basis of someone who is like them.

I can vouch for that being true.

Minimizing dangers

Rick Muir is Director of UK policing thinktank the Police Foundation. Speaking of AI’s use in criminal justice, he told the Westminster Legal Policy Forum:

Obviously, the danger of this kind of black-box issue is we don't really know what algorithms are doing. And, therefore, it's very hard to scrutinize decisions that are being made. And that's clearly a challenge with all uses of AI. 

And there's another danger, which is ‘abstract policing’. We know that in a ‘policing by consent’ model [public consent to being policed], trust and confidence are absolutely core. We also know from academic research that one of the most important ways of building trust and confidence is that the way the police make decisions is seen to be fair and transparent. 

So, there is a danger that if you automate some of that decision-making, then it creates a distance between the decision-making and the public. Thus, the public are less likely to understand it, and less likely to perceive it as fair.

Zooming out to see the bigger picture, how can government ensure that the benefits and opportunities of AI in public services are maximized, and the dangers minimized? 

Ruth Kelly is Chief Analyst at the National Audit Office (NAO), the body whose role is to hold government to account by measuring success - or otherwise. The NAO recently published a report on AI in government.

She said:

Let's look at what we found. First, encouraging AI in the public sector has been a government aim for many years. As far back as 2018, the AI Sector Deal [within the then Industrial Strategy] aimed to stimulate the use of AI in the public sector [and elsewhere]. Then there was the 2021 National AI strategy, which had specific ambitions around the public sector being an exemplar for the safe and ethical use of AI. 

But things really took a step change in 2022-23, when the headlines started coming through about ChatGPT and generative AI.

So, why is government interested in AI? She said:

First, there’s an ambition for the UK to use AI to improve public services and outcomes. But there's also a very strong productivity lens [the Office for National Statistics estimates UK productivity growth at 0.3%]. For example, work done by the Cabinet Office, looking at tasks in the civil service, estimated that about one-third were routine and could be automated, which is potentially worth billions. 

Now, that doesn't look at the cost or the feasibility of that automation, but it shows you the potential size of the prize. [...] And that includes, in terms of the criminal justice system, the Ministry of Justice, the Crown Prosecution Service, courts and tribunal services, and so on. But it's important to note that it didn't look at local authorities or the police.

What were the findings, in the NAO’s analytical terms?

What we found is that, despite all the enthusiasm, AI is not that widely used across government as yet. Just over one-third of responding bodies had active use cases, and typically, that was only one or two per body. 

But there is, generally, a lot more interest in exploring AI. We found that 70% of respondents are planning or piloting AI use cases […] with the most common around supporting operational decision-making and improving internal processes. However, very few of those use cases directly provide a public service or engage with the public. 

But in the future, we saw a potential shift towards focusing on service redesign and major transformation.

So, what is holding government back from delivering the claimed benefits? Kelly said:

The first issue, relating to strategy and governance, is the absence of a clear public-sector AI adoption strategy and plan. There's a draft strategy. It's ambitious, but it's still at a very early stage. And it doesn't set out an implementation plan with performance metrics or funding, with overall accountability for delivery. 

And we found that the proposed governance of the strategy for public sector adoption of AI is largely separate from the cross-government governance structure that's been set up to oversee wider AI policy delivery, led by the Department for Science, Innovation, and Technology [DSIT]. And that risks missing opportunities for collaboration and having a coordinated approach. 

We also found that these governance arrangements don't include the wider public sector. And don't, for example, include the police. And that will really limit the extent to which the full potential of AI can be exploited [in the justice system].

So, another emergent risk in the delivery of AI in public services is an all-too-familiar mismatch in government. On the one hand, its overarching ambition to exploit the latest hot technology - in many cases, driven by promises of cost-savings and productivity gains, not smarter decision-making. And, on the other, the absence of strategy, governance, and good management. The knock-on effect of this may be to push some public services into the arms of vendors on a piecemeal basis.

So, what about minimizing the risks of any ill-considered AI adoption or poorly designed systems? She said:

It’s really important that standards and assurance processes are in place. But it's still very early days, and these are still under development. And some of the bodies we interviewed said it was often hard to navigate the wide range of guidance that's out there, or to get a definitive view of what they needed to consider.

The single biggest concern is skills, she added.

Seventy percent [of responding government departments and organizations] said they had difficulties in attracting or retaining skilled staff. And 70% wanted support in addressing the legal risks, with almost two-thirds wanting support around risks to privacy, data protection, and cybersecurity.

My take

So: feeling confident?
