I got an email last week with an unexpected title: “A.I.'s social consciousness and the evil facial recognition myth.”
Wow. Let’s face it (unintentional pun), facial recognition is an area of AI that is pretty creepy. How could there be a myth about it?
After all, look at the headlines:
- Forbes: Our Facial Recognition Nightmare Is Upon Us
- BuzzFeed News: Opinion: Don’t Regulate Facial Recognition. Ban It
- BuzzFeed News: How Artists And Fans Stopped Facial Recognition From Invading Music Festivals
- The Next Web: Tacoma convenience store’s facial recognition AI is a racist nightmare
- Washington Post: The dangers of facial recognition software
- The Guardian: The Guardian view on facial recognition: a danger to democracy
There just isn’t any good news there. The pressure group Liberty has denounced automatic facial recognition as “arsenic in the water supply of democracy”. The city of Atlanta recently opened the world’s first biometric airline terminal, using facial recognition technology to scan passengers for check-in, baggage and security clearance.
A recent challenge to Illinois’s biometric privacy law suggested that plaintiffs should have to demonstrate actual harm before taking legal action, but in an age of surveillance capitalism, traditional concepts of harm are inadequate to describe what may happen behind the analytical curtain.
I’ve had a queasy feeling about facial recognition since 2002. Do you remember the movie Minority Report? It’s a creepy dystopian story set in Washington DC in 2054, where murders could be predicted before they happened, and the perpetrator could be easily identified thanks to a comprehensive database of people.
Since 2011, the FBI has been developing what it calls NGI, or Next Generation Identification, which integrates palm prints, iris scans, and facial recognition so computers can search criminal histories. The facial recognition database is currently believed to hold around 411.9 million images, the bulk of which are connected to people with no history of criminal activity at all.
This brings us to a CNN report on a study last year by the organization Upturn, which found that twenty of the largest US police forces have already engaged in predictive policing, not unlike what Minority Report envisioned. Communities with a history of being heavily policed will be disproportionately affected by it.
So that’s all the downside. Is there an upside?
In “NRF 2019 - Facial recognition brings personalization to a head, and I put my face to the test,” Jon Reed has a conversation with C2RO. CEO Riccardo Badalone makes the interesting distinction between facial recognition and facial analysis: using faces to deduce propensity, emotion and the like is, in his view, a fair use of the technology. I’m not so sure. Once your face is captured somewhere, the chance of it leaking out is pretty good.
I’ve been writing and researching a lot about the ethics of AI, but most of the time, the topic is what NOT to do. But then I received this email with the provocative subject line:
Re: A.I.'s social consciousness and the evil facial recognition myth
What I found is that surveillance and facial recognition have a bad rap, but this is a company that is using AI and is committed to saving lives. The connection between technology and saving lives is usually a little thin, or the technology plays only a supporting role, but it’s front and center in this email:
Tue, Oct 22, 12:18 PM
Bias is removed if an A.I. security camera only looks for guns, knives and aggressive actions.
For your Diginomica column, how about an educational A.I. story that tells a positive tale, one where innovators align their social goals with their business model instead of making a buck by any means necessary.
For example, Athena Security's main incentive is to save lives by eliminating the time it takes police and medical to arrive to a crime or shooting. In an age where gun violence and mass shootings are an every day occurrence, Athena doesn't profile or resell identities or user data, they simply protect the public.
Athena Security CEO Lisa Falzone would be happy to discuss:
* How she came out of retirement to build a life saving tech business.
* How A.I. and Facial Recognition have become interchangeable and how the media/Hollywood have wrongly perpetuated the myth and evil depiction.
* How Athena advises on-premise computing to clients to avoid the cloud and big brother's grasp.
* How schools like Archbishop Wood High School and places of worship like Al-Noor Mosque in New Zealand take comfort in having Athena's extra layer of security always on, always watching, always learning and always protecting.
* How this unique form of computer vision and object detection was achieved through hiring professional actors to train the A.I. brain to achieve 99%+ accuracy.
“Athena Security CEO Lisa Falzone would be happy to discuss”
And I did. I had a great conversation with Lisa. She was a founder of a start-up that had a successful exit. Right after one of the mass school shootings, she came up with the idea of using AI to save people. The idea of AI saving lives isn’t original, but in this case, it’s actually in use. Athena can identify a threat in three seconds or less with 99% accuracy. They DO NOT monetize the data. I repeat, they do not monetize the data. They flush everything within 48 hours (the inference data, not the video itself, which stays with the client). If there are faces, it grays them out. Next, they are working on algorithms to detect knives and fighting.
I was a little disappointed, though, not to hear the counter-argument to evil facial recognition, but that’s the angle: it doesn’t do facial recognition. And all of the data stays local, no cloud, no internet. Athena Security is already installed in many corporate environments. I hope its sensitivity in detecting fights can be managed.
Bottom line: I still don’t like facial recognition, except in very controlled situations like missing children. But I’m convinced Athena Security is on the right track.