If you are a little bit optimistic, even truly awful initiatives tend to be moderated over time (except maybe workout clothes).
On the facial recognition front, I had the pleasure of speaking with two organizations recently that offer some hope. One, D-ID, claims to be the first company able to de-identify and anonymize images and videos of people in a way that cannot be reverse-engineered to recover the source image or video.
While a person could still identify the face as belonging to the same person, slight tweaks ensure it can't be recognized by computer vision (facial recognition) software. The other, Athena Security, which I have written about before, is dedicated to using AI, surveillance, and some smart ideas to prevent, among other things, gun violence. Kudos to both of them.
However, the protection and convenience that facial recognition with AI offers are usually overshadowed by horror stories such as The Atlantic’s “China’s Surveillance State Should Scare Everyone”:
…China’s …sprawling network of technologies, especially surveillance cameras, to monitor people’s physical movements. …an “omnipresent, completely connected, always on and fully controllable” national video surveillance network. …The MPS and other agencies stated that law enforcement should use facial recognition technology in combination with the video cameras to catch lawbreakers. …An estimate puts the number of cameras in China at 176 million today, with a plan to have 450 million installed by 2020. One hundred percent of Beijing is now blanketed by surveillance cameras, according to the Beijing Public Safety Bureau.
Facial recognition is the AI technology with a bad rap, and with applications like China’s, it realistically deserves it. But are there useful, and even ethical, uses for it? I spoke with two vendors who insist there are, each with a very different application of the technology and its own approach to using surveillance for the public good.
Saving lives with facial recognition - a deeper look at Athena Security
I had the opportunity to speak with Lisa Falzone, CEO and co-founder of Athena Security. In a previous venture, Revel Systems, Lisa and her co-founder had a successful exit, acquired by a private equity firm, which left them financially able to retire. But they were horrified by the incidence and mayhem of school shootings and put their heads together to see if there was something they could do about it with their technical acumen.
Though it sounds like an obvious idea, using artificial intelligence to actually SAVE LIVES instead of selling people more junk, they settled on surveillance, a technology with a fairly toxic reputation, to see if they could find a better use for it than suppressing Chinese Uighurs.
They chose to model the detection of bad actors in the moments before they commit a crime with a gun. Surveillance cameras are the first step. By employing dozens of professional actors to act out the numerous gestures of pulling a firearm, brandishing a weapon and, currently in beta, drawing a knife, they were able to train their algorithms to pick up these threats in near-real-time. They claim 99% accuracy in less than three seconds.
Besides fast, accurate threat detection, including guns, knives, and aggressive action, the system can also alert you to falls, accidents, and unwelcome visitors.
The ethical case is that facial recognition is not involved, only gesture recognition, and if a face does appear, it is blurred. Athena strictly advises clients to compute on-premise, away from the cloud and Big Brother's grasp.
All data is wiped within forty-eight hours, and best of all, there is no connection to the Internet; the company states categorically that it does not monetize data.
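The retention policy described above (on-premise storage, a forty-eight-hour wipe) can be illustrated with a minimal sketch. The class and field names below are hypothetical, not Athena's actual implementation, which the company has not published:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(hours=48)  # assumed 48-hour window, per the article

class EventStore:
    """Toy on-premise event log that discards entries after the retention window."""

    def __init__(self):
        self.events = []  # list of (timestamp, label) tuples

    def record(self, timestamp, label):
        self.events.append((timestamp, label))

    def wipe_expired(self, now):
        """Drop anything older than the retention window."""
        cutoff = now - RETENTION
        self.events = [(t, label) for (t, label) in self.events if t >= cutoff]

now = datetime(2019, 6, 1, 12, 0)
store = EventStore()
store.record(now - timedelta(hours=72), "gun drawn")   # stale: past retention
store.record(now - timedelta(hours=2), "knife shown")  # recent: within retention
store.wipe_expired(now)
print([label for _, label in store.events])  # only the recent event survives
```

The point of the design is that nothing needs to leave the building: with no Internet connection, the wipe is the only exit path for the data.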
This is not beta software, and they have corporate clients and schools around the world.
Fooling facial recognition with D-ID
Another company I spoke with is D-ID, whose co-founders came up with the idea for the company while serving in the Israeli Defense Forces, recognizing the risks that come with the growing global use of facial recognition technologies. I'm not sure whether their concern was defensive in origin, but given the application, it doesn't matter.
The company was founded in 2017. The team consists of experts in deep learning, image processing, and machine vision, all committed to reconciling facial recognition with identity protection.
I spoke with Gil Perry, the co-founder and CEO of D-ID, who was able to go into the technology in greater depth than the website does. He explained how the technology works, how it protects individuals' identities from facial recognition software, and how the solution can be applied across a range of industries. It is entirely GDPR compliant.
In a nutshell, here is how it works. Any face, whether in a picture or a video, can be deformed by the software, but it's more than a Groucho Marx disguise. While the biometric facial characteristics are algorithmically altered, all other characteristics are preserved. In other words, if the person is shouting, 5’7”, turned 45°, of a certain age or gender, all of these attributes are retained. The reason is apparent: gestures, pose, position in a group, and so on are useful attributes to capture.
The resulting faces cannot be matched to the original by facial recognition software. The algorithmically generated steps are stored and encrypted. The keys remain with the customer, and D-ID strongly advises the customer not to use them, except in particular cases such as legal matters.
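The core idea, altering the biometric signature while leaving other attributes untouched, can be sketched in a few lines. This is a toy illustration on feature vectors, entirely my own construction; D-ID's real pipeline operates on images and is proprietary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy face descriptor: the first 8 dimensions stand in for identity (biometric)
# features, the last 4 for preserved attributes (pose, age band, gender,
# expression). The split is hypothetical, purely for illustration.
def split(face):
    return face[:8], face[8:]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def deidentify(face, rng):
    """Replace only the identity dimensions; leave attribute dimensions untouched."""
    identity, attrs = split(face)
    noise = rng.normal(size=identity.shape)
    new_identity = noise / np.linalg.norm(noise)  # fresh biometric signature
    return np.concatenate([new_identity, attrs])

original = rng.normal(size=12)
protected = deidentify(original, rng)

id_sim = cosine(split(original)[0], split(protected)[0])
attrs_equal = np.allclose(split(original)[1], split(protected)[1])
print(f"identity similarity: {id_sim:.2f}, attributes preserved: {attrs_equal}")
```

A recognizer that matches faces by comparing identity embeddings would no longer link the protected vector to the original, while anything that consumes the attribute dimensions (gesture analysis, demographics, position in a group) keeps working unchanged.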
In summary, D-ID purports to secure privacy in photos and videos, protecting customers' and users' sensitive biometric PII, including on social media. Even though the faces may look similar to human viewers, facial recognition engines cannot recognize them (for now, but the question remains: how long until they can?).
Capturing facial recognition data (and even full-body images, gestures, and pace) ventures into the danger zone when those images or videos are kept beyond their immediate purpose. An even more insidious and dangerous potential is the monetization of recognition.
Athena Security has, in my opinion, technology with legs. About D-ID I'm not entirely sure. Who would pay to algorithmically alter the trillion pictures on social media? If it becomes widely used, it seems bad actors would find ways to defeat it. And the idea that the customer holds the encryption key seems a little shaky too. Nevertheless, it looks like a smart idea, and we'll keep an eye on it.