The use of Live Facial Recognition technology across police forces has been a highly contentious issue and has been criticised by civil liberties groups and MPs alike. Police Scotland recently went as far as to say that there is “no justifiable basis” for its use.
However, despite privacy concerns, negative media attention and claims that facial recognition systems are biased against minority groups, the Metropolitan Police Commissioner this week delivered a strong rebuttal of the criticism, saying that those against its use should consider the impact on victims of serious crime.
In addition to this, Dame Cressida Dick said that London’s police force is not using facial recognition technology that contains any bias - a bold and highly contentious claim.
The Met Police Chief wanted to bust what she calls ‘myths’ about the force’s use of Live Facial Recognition technology. In a speech this week, Dame Cressida said that:
The tech does not store citizens’ biometric data
Human officers will always make the final decisions on whether or not to intervene - not the machine
The tech is “proven” not to have an ethnic bias
The Met Police has been open and transparent about its use
The Police Chief went on to say that policing will remain an “essentially human service”, but will be supported by better information and tools. She argued that the use of technology will be necessary in an increasingly complex and sophisticated criminal landscape, where the online and real worlds blend and there are huge amounts of data available to forces.
Dick said that machines which are as sophisticated as humans at making decisions are a long way off, and so the Met Police likes to think in terms of ‘Augmented Intelligence’. She said:
I wouldn’t put all policing’s hopes and fears on what is described as Artificial Intelligence. Augmented Intelligence, by one definition, is a “human-centred partnership model of people and artificial intelligence working together to enhance cognitive performance, including learning, decision making and new experiences.”
The term describes better how technology can work to improve human intelligence rather than to replace it. That feels much closer to how we in policing are using technology. I also believe a licence to operate technology in those human terms feels much closer to what the public would expect and accept.
The Chief added that the Data Protection Act 2018 also requires that automated decisions that affect individuals must have a human “in the loop” to oversee the decisions and processes behind them.
She said that technology gives the Met Police “incredible opportunities in 2020” to identify more offenders, locate fugitives, prove associations and motivations - but that the force will need to work to ensure the public remain supportive of its approach.
Dick said that the Met’s trials of Live Facial Recognition resulted in the arrest of eight wanted individuals - arrests that would otherwise not have been possible. But the Chief said that at the moment the loudest voices in the debate around the technology seem to be the critics. She went on to issue some firm words to those critics:
Sometimes [they are] highly inaccurate or highly ill-informed. I would say it is for critics to justify to the victims of those crimes why police should not be allowed to use tech lawfully and proportionally to catch criminals.
It is not for me and the police to decide where the boundary lies between security and privacy, though it is right for the police to contribute to the debate. But speaking as a member of the public, I will be frank. In an age of Twitter and Instagram and Facebook, concern about my image and that of my fellow law-abiding citizens passing through LFR and not being stored, feels much, much smaller than my and the public’s vital expectation to be kept safe from a knife through the chest.
The Commissioner added that the only people that will benefit from the Met not using Live Facial Recognition “lawfully and proportionally” are the criminals, rapists and terrorists that want to “harm you, your family and friends”.
And on the question of system bias, Dick said that London’s systems do not contain any - except, she conceded, when identifying women. She said:
We know there are some cheap technologies that do have bias, but as I have said, ours doesn’t. Currently, the only bias in it is that it shows it is slightly harder to identify a wanted woman than a wanted man.
It’s important for people to know that we only have people on the LFR watch list who are wanted for serious crime. The surveys show – I know there is more work to come – but that’s very likely to be supported by the public.
Give us a legal framework
The Police Chief said that the Met has two possible choices. It can either fail to adapt and forgo tech that increases the likelihood of solving and preventing crime, or it can use tech proportionately to speed up how human officers solve and prevent crime.
She added that whilst the Met is open to criticism on this, what she would like to see is the government bring in an enabling legislative framework that is debated through Parliament, consulted on in public, and which will outline the boundaries for how police should and should not use tech.
For today, I will simply say that it will be crucial to ensure that any future governance is able to enable the proportionate and appropriate use of technology to augment human policing. Any future guidelines – and I know this is difficult – should be clear, should be simple, and should be fit for the 21st Century, and need not go out of date as soon as they have been published.
I strongly believe that if we in the UK can get this right, we stand in good stead to be world leaders in appropriate, proportionate tech-enabled human policing. The Prime Minister announced in his speech to the UN last year that in 2020, he would host a summit in London to establish global tech guidelines with the UK as a “global leader in ethical and responsible technology.” I hope that that might provide us with an opportunity to look at the use of tech in the round, including what it means for policing.