Police forces and the broader justice system are making use of new technologies, including artificial intelligence, without proper protections in place for already vulnerable people. A new report by the House of Lords Justice and Home Affairs Committee (JHAC) states that without proper oversight, these new tools could have serious implications for a person’s human rights and civil liberties.
The JHAC went as far as to describe the current technological landscape within the justice system as the “Wild West” and recommended that a new national body be established to set strict scientific, validity and quality standards against which new technological solutions could be ‘kitemarked’.
Baroness Hamwee, Chair of the Justice and Home Affairs Committee, said:
What would it be like to be convicted and imprisoned on the basis of AI which you don’t understand and which you can’t challenge?
Without proper safeguards, advanced technologies may affect human rights, undermine the fairness of trials, worsen inequalities and weaken the rule of law. The tools available must be fit for purpose, and not be used unchecked.
We had a strong impression that these new tools are being used without questioning whether they always produce a justified outcome. Is “the computer” always right? It was different technology, but look at what happened to hundreds of Post Office managers.
Government must take control. Legislation to establish clear principles would provide a basis for more detailed regulation. A “kitemark” to certify quality and a register of algorithms used in relevant tools would give confidence to everyone – users and citizens.
We welcome the advantages AI can bring to our justice system, but not if there is no adequate oversight. Humans must be the ultimate decision makers, knowing how to question the tools they are using and how to challenge their outcome.
Baroness Hamwee’s comparison with the Post Office scandal is an entirely fair one, given that more than 700 branch managers were given criminal convictions when faulty accounting software made it look as though money was missing from their branches. It has taken over a decade for the truth to come to light and many lives have been ruined.
We at diginomica have previously highlighted how the use of facial recognition technology, which is widely deployed across the UK and elsewhere, has been shown to discriminate against women, and those from minority ethnic groups.
Members of the Scottish Parliament said in 2020 that there was “no justifiable basis” for Police Scotland to use live facial recognition technology, because of concerns for people’s civil liberties.
A proliferation of AI
The JHAC report states that, without many of us realizing it, artificial intelligence is already being used to improve crime detection, aid the categorization of prisoners, streamline entry clearance processes at our borders and generate new insights that feed into the entire criminal justice pipeline.
The Committee said that it was “taken aback” by the proliferation of AI tools being used without proper oversight, particularly by police forces across the country.
The report asks:
At what point could someone be imprisoned on the basis of technology that cannot be explained?
And it’s a fair question to ask, given that many of the technology vendors that build these algorithms and AI tools do so without explaining their mechanics - protecting the ‘black box’. These technology companies often cite intellectual property when asked to explain the outcome of an AI decision.
The JHAC argues that informed scrutiny is essential to ensure that any new tools deployed in this sphere are safe, necessary, proportionate and effective - but that this scrutiny is “not happening”. The report states:
Instead, we uncovered a landscape, a new Wild West, in which new technologies are developing at a pace that public awareness, government and legislation have not kept up with.
Public bodies and all 43 police forces are free to individually commission whatever tools they like or buy them from companies eager to get in on the burgeoning AI market.
And the market itself is worryingly opaque. We were told that public bodies often do not know much about the systems they are buying and will be implementing, due to the seller’s insistence on commercial confidentiality, despite the fact that many of these systems will be harvesting, and relying on, data from the general public.
This is particularly concerning in light of evidence we heard of dubious selling practices and claims made by vendors as to their products’ effectiveness which are often untested and unproven.
It goes on to add that there is no central register of AI technologies, making it virtually impossible to find out where and how they are being used, or for Parliament and the media to scrutinize and challenge them. The report says:
Without transparency, there can be not only no scrutiny, but also no accountability when things go wrong. We therefore call for the establishment of a mandatory register of algorithms used in relevant tools.
And we echo calls for the introduction of a duty of candour on the police to ensure full transparency over their use of AI given its potential impact on people’s lives, particularly those in marginalised communities.
The report also outlines how AI is being increasingly used to forecast crime before it happens. Whilst there is an opportunity to better prevent crime as a result, it could also exacerbate discrimination. The Committee heard repeated concerns about the dangers of human bias in original datasets, which in turn could lead to bias in decisions made by algorithms.
One witness told the Committee:
We are not building criminal risk assessment tools to identify insider trading or who is going to commit the next kind of corporate fraud ... We are looking at high-volume data that is mostly about poor people.
And whilst the evidence provided showed enthusiasm for the potential of new technology in the application of the law, the Committee did not hear much corresponding commitment to evaluating its efficacy. It notes:
Most public bodies lack the expertise and resources to carry out evaluations, and procurement guidelines do not address their needs. As a result, we risk deploying technologies which could be unreliable, disproportionate, or simply unsuitable for the task in hand.
A national body should be established to set strict scientific, validity, and quality standards and to certify new technological solutions against those standards.
In relation to the police, individual forces must have the freedom to engage the solutions that will address the problems particular to their area, but no tool should be introduced without receiving “kitemark” certification first.
The JHAC found that there are more than 30 public bodies, initiatives and programmes which play a role in the governance of new technologies in the application of the law - but their roles are unclear, functions overlap and joint-working is patchy.
The system needs urgent streamlining and reforms to governance should be supported by a strong legal framework, it added. At present, users are “making it up as they go along”.
Local specialist ethics committees should also be established and empowered, the report says, and a human should always be the ultimate decision maker, as a safeguard for when algorithms get things wrong.
Individuals should also be appropriately trained in the limitations of the tools that they’re using, so that they know how to question and challenge an outcome, with the correct institutional support.
Widely using immature technology to make decisions in the justice system without proper oversight or checks in place? That will not end up with a positive outcome in the long run - I guarantee it.