The innocent have nothing to fear from facial recognition tech, right? Well...

Chris Middleton, May 7, 2020
Summary:
The debate around the privacy implications of facial recognition tech isn't getting any less complex...

Facial recognition (image via Pixabay)

“The innocent have nothing to fear” – words famously uttered by Michael Howard, Home Secretary in John Major’s UK government and leader of the Opposition in the Tony Blair years. But with a reported 98% failure rate in the Metropolitan Police’s 2018 live facial recognition tests – findings that were discussed in the UK Parliament at the time – it’s hard to make such a claim with any confidence.

Despite this, facial recognition and other biometric technologies are sweeping into the UK, in some cases as a potential response to the COVID-19 pandemic. In the US a year ago, San Francisco banned the use of facial recognition in local surveillance and law enforcement, citing the risk of error and bias.

But are these criticisms valid? First, some good news – of a sort – from the UK. Two years on, the success rate of live facial recognition technology in public tests appears to have slightly improved. In police schemes that compared live camera footage with watch lists of persons of interest, 93% of people stopped and questioned were wrongly flagged by the system (a five-percentage-point improvement).

That 93% figure came from Silkie Carlo, Director of the privacy advocacy group Big Brother Watch. Speaking at an online Westminster eForum conference on digital identity this week, she said:

This is one of the most inexplicable aspects of the police's determination to use live facial recognition in the UK: the stats just aren't very good. From 2016 to 2019, over those four years of trials the overall misidentification rate was 93%. So, in all of the match alerts that have been generated, 93% have been of innocent people who have wrongly been identified.

The figures are troubling, but it’s important to understand what Carlo is saying. She isn’t claiming that all facial recognition systems have a 90-plus percent failure rate. She is referring to live technology wrongly matching a small number of passing faces against a watch list – a small, targeted data set, rather than a hypothetical database of every citizen’s visage. It’s a critical distinction.

Proponents of the technology might ask: if almost no one from the watch list actually walked past the camera on test day, can the technology really be said to have failed? The 93% figure certainly doesn’t mean the system missed over 90% of the people on the watch list in live tests.
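To see how both readings can be true at once, here is a rough, purely illustrative sketch (the numbers below are hypothetical, not drawn from the police trials): a system can wrongly flag only a tiny fraction of the faces it scans and still produce alerts that are mostly of innocent people, simply because so few watch-listed individuals ever walk past the camera.

```python
# Purely illustrative numbers - not figures from any actual police trial.
faces_scanned = 10_000        # passers-by scanned during one deployment
watchlisted_present = 5       # watch-listed people who actually walk past
false_positive_rate = 0.001   # system wrongly flags 0.1% of everyone else
true_positive_rate = 0.80     # system spots 80% of watch-listed people present

true_alerts = watchlisted_present * true_positive_rate                      # ~4
false_alerts = (faces_scanned - watchlisted_present) * false_positive_rate  # ~10

wrong_share = false_alerts / (true_alerts + false_alerts)
print(f"Share of alerts pointing at innocent people: {wrong_share:.0%}")
# ~71% of alerts are of innocent people, even though only 0.1% of all faces
# scanned were wrongly flagged. Both statistics describe the same system.
```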

But if people not on that watch list were detained by police in their stead, then its real-world application is certainly putting innocent citizens under surveillance or forcing them to identify themselves just for walking down a street. That shifts the sensitive balance between the public and law enforcement.

It also raises important questions, such as: what happens to the innocent citizens’ biometric data? Are more and more people being added to police databases, or potentially flagged as suspects?

More collections?

Either way, the problem is that an implicit remedy for these errors would simply be to collect more data about every citizen to ensure that nobody is misidentified – on the principle that more training data equals greater accuracy in any facial recognition or AI system. That would mean total surveillance, Chinese style.

So talking about facial recognition’s success or failure risks falling down a political and logistical rabbit hole, chased by arguments based on whether you believe surveillance is inherently good or bad.

But the privacy campaigners’ core point is sound, however you spin the figures. Even if a live facial recognition system were able to achieve, say, 99% accuracy from some notional database of 66 million citizens, that would still leave 660,000 people at risk of misidentification on any given day. (For comparison, that’s roughly eight times the current prison population of the UK).
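As a quick sanity check on that arithmetic, here is a sketch using the article’s notional figures of 66 million citizens and 99% accuracy, plus an assumed prison population of roughly 83,000 (approximately the England and Wales figure at the time):

```python
# Back-of-envelope check using the article's notional figures.
population = 66_000_000     # hypothetical database of every UK citizen
accuracy = 0.99             # assumed 99% accuracy
prison_population = 83_000  # approximate prison population (assumption)

misidentified = population * (1 - accuracy)
print(f"{misidentified:,.0f} people at risk of misidentification")       # 660,000
print(f"{misidentified / prison_population:.0f}x the prison population")  # ~8x
```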

Why thousands of people are on watch lists today is a separate question, and Carlo offered an answer – the police aren’t using facial recognition to catch known criminals at all:

Facial recognition is being used as a kind of generalised policing and intelligence net in public places. We know this through the enormous number of Freedom of Information Act requests that we've done over the past couple of years. There are people on there who the police are interested to know about for intelligence reasons. We know that environmental protesters who aren't wanted for any crimes have been put on watch lists – and even people with mental health problems.

So while this technology might appear to be an authoritarian’s dream – scan everyone, then arrest anyone identified at the scene of a crime – it risks introducing real societal problems.

If you are a white male whose face has been scanned in ideal lighting conditions, then facial recognition and computer imaging systems can be extremely accurate – as Professor Josef Kittler, an AI and facial recognition expert at Surrey University, explained to delegates. But numerous studies – such as this one on driverless cars – have shown that if you are a woman, black, or Asian, or your appearance is occluded, motion-blurred, captured in low light, or on a low-resolution camera, then the technology is often far less reliable.

This is the core problem: real life is complex and messy, with far more training data about some groups than others; we don’t live in a laboratory in a world of total information. Yet some people apparently see the technology as operating that way and offering simple solutions.

Regulation

Hugh Milward, Director of Corporate, External and Legal Affairs at Microsoft UK, acknowledged the problems of errors and racial or gender discrimination, but explained:

Researchers across the tech sector are working hard to address these challenges and significant progress is being made, but research demonstrates that deficiencies remain. The relative immaturity of the technology is making broader public questions even more pressing.

Microsoft is one of a handful of technology companies calling for facial recognition to be regulated – in the US, at least – because of the risk of error and the technology’s potential abuse in racial profiling. Indeed, it is clear that the indiscriminate (and discriminating) use of these technologies reverses core principles in a democratic society.

For example, if someone is detained via facial recognition technology – either now or in a more advanced near future – it would introduce a presumption of guilt and put the onus on that person to prove their innocence.

It would also undermine the social contract between police and citizens, replacing law enforcers’ function of protecting us from criminals with a need for government to protect us from invasive technology. As Carlo put it, this “obliterates the concept” of policing by public consent. She said:

A digital arbitrary identity check rebalances the relationship between the citizen and the state and is inherently dangerous, which is why it's a staple of authoritarian societies. Let alone all the questions about the impact on freedom of expression and freedom of assembly.

Would you go to certain events if you knew that you could be identified, or that your face was going to be scanned? It raises so many questions: there's no clear legal basis for any of this, and there's been very little parliamentary debate.

The legal foundation is a key point. According to Professor Paul Wiles, the UK’s Commissioner for the Retention and Use of Biometric Material, the only legislation governing the police use of biometrics is the Protection of Freedoms Act 2012, which covers only their use of DNA and fingerprints. He said:

The situation has been transformed by the growth of artificial intelligence and the availability of very large data sets, on which they have developed face and voice matching. Fresh parliamentary legislation is needed to govern the police use of biometrics beyond DNA and fingerprints.

And what if citizens, law-abiding or otherwise, try to evade facial recognition technologies by wearing masks or other items designed to confuse cameras? Wiles said:

We've seen examples of people declining [to be scanned] or covering their faces, and we've seen the police thinking that they have the lawful power to arrest people in that situation. I have doubts about that [the police response], about whether that is appropriate or lawful.

Microsoft’s Milward added:

There's no specific regular legislation to govern the use of CCTV, while the commercial [other than police] use of facial recognition is governed by GDPR, with biometric data considered a form of personal data. So, there is a need for government action in this space.

In short, the reason for mission creep in the police use of facial recognition is that there is little in law to prevent it. That said, existing legislation is catching some abuses of biometric technologies.

My take

An important debate, which has long set privacy campaigners against authoritarians. But as with so much at present, the immediate context is the impact of the coronavirus, which (as my previous report explained) may help to usher in biometric surveillance by the front door. Carlo said:

Everything in the world has changed in the past few weeks, and so everything is currently on the table. Now is an important time to be talking about facial recognition, and it's an important time for Parliament to be scrutinising it as well. We mustn't sleepwalk into using this kind of technology without taking those decisions very, very carefully.
