Campaigners call for ban on use of facial recognition tech in UK

By Derek du Preez, August 19, 2021
The British Government has updated its guidance on use of facial recognition technologies, as police forces persist with their use.

(Image of someone's face being scanned by facial recognition software, by Tumisu from Pixabay)

Human rights group Liberty has called on the UK to ban the use of facial recognition software by police forces and private companies, as the British Government updates its guidance on the use of the surveillance technologies. 

A year ago Liberty won the world's first legal challenge to police use of facial recognition technology, after a court found that South Wales Police's use of the tech in public broke human rights, data protection and equality laws. 

Liberty says that surveillance tech has historically been used disproportionately on communities of colour, deployed by police forces with a culture of racism. 

At the time the National Policing Lead for Facial Recognition said that he was "determined that the future, certainly for South Wales and I know a number of other forces, includes facial recognition". 

Twelve months have passed and the surveillance software is indeed still being pursued by multiple police forces, as well as a number of private companies across the UK. Liberty's petition calling for the tools to be banned has already reached 50,000 signatures.

Campaigners are now turning their attention to the Metropolitan Police in London, which has persisted with its deployment of the technology. Liberty says: 

Despite it being well known facial recognition can't tell black people apart, leading to wrongful stops, searches and - in some cases in the US - arrests, the Met said: "we do not believe there are inherent biases that are extreme".

But how could they know when WebRoots Democracy discovered the Met didn't carry out an equality impact assessment ahead of the trials that formed the basis of its decision to approve the tech for wider use?

It was also revealed last week that more forces are experimenting with facial recognition on pictures and old footage (rather than live images like South Wales and the Met), while tech companies have plans to put the software on officers' body cameras, effectively putting everyone in a police line-up.

And it's not just the police making use of the tech. Private companies at busy transport hubs and shopping centres are scanning millions of people's faces without consent, while Southern Co-op is using the surveillance tech on staff and customers.

It must be banned.

The WebRoots Democracy report referenced by Liberty found that facial recognition technology will ‘exacerbate racist outcomes' in policing and called for its use to be banned for a generation. It found that even if the technology works accurately, it is likely to impact people of colour more. 

Interestingly, as we reported last year, IBM announced that it would no longer research, develop or sell facial recognition or analysis software, going as far as to say it "opposes and will not condone" its use. The move was praised by campaign activists. 

The government's response

Last year it was revealed that the Metropolitan Police Service would begin full operational use of Live Facial Recognition (LFR) technology. At the time the Met said it had the right safeguards and transparency measures in place to protect people's privacy and human rights. 

However, Paul Wiles, the Biometrics Commissioner, at the time suggested that concerns remained, after the Met finally carried out an equality impact assessment. He said: 

I am aware that the Metropolitan Police Service have produced an equality impact assessment in relation to their deployment of live facial recognition (LFR). In that document they claim that I ‘supported the concept of LFR'. In fact I have continually said that we need proper governance of new biometric technologies such as LFR through legislation. In my view it is for Parliament to decide whether LFR ought to be used by the police and if so for what purposes.

The Home Office has since updated its Surveillance Camera Code of Practice - for the first time in eight years - aimed at ‘empowering police and maintaining public trust'. 

The code of practice, which covers use by local authorities and police forces in England and Wales, says LFR deployments should:

  • Take into account any potential adverse impact on protected groups

  • Be justified and proportionate 

  • Quickly delete any unused biometric data collected 

  • Follow an authorisation process

  • Publish the categories of people sought on the watch-list and the criteria on which the decision to deploy is based

However, Liberty remains unconvinced and states:

New laws regulating the use of facial recognition can't possibly solve the major human rights concerns.

And making the tech more accurate won't defeat discrimination. History tells us surveillance tech will always be disproportionately used on communities of colour, deployed by police forces with a culture of racism.

The only option is to ban police and private company use of facial recognition in areas open to the public.

More than 50,000 people have already signed our petition. Add your name and share it with your friends and family today.

Together we can build a fairer, freer society.

My take

From the outside, as a citizen, it certainly feels as if police forces, companies and local authorities are storming ahead with their use of LFR technologies, without providing insight into how the data is being collected, used and erased. Whilst people are arguing about the use of COVID passports, which pose a much less significant risk, it seems more attention should be paid to this issue - which will likely impact groups already disproportionately affected by policing activities. It also says something when a company like IBM refuses to condone its use. Precedents are being set and we need to be very, very careful. 
