By now, everyone is familiar with multi-factor authentication (MFA), a process that requires two or more factors to verify your identity. Typically, it starts with a username and password, then adds another factor.
The premise of MFA is that a password alone is inadequate for protecting your account or the service you are logging into. Passwords are easily broken by phishing and bulk attacks that exploit weak credentials. MFA typically operates through:
- Text Message or Phone Call
- Push Notification
- Code Generator
These are the most common and least intrusive methods. Still, in terms of privacy, you are associating your email address with a common device, typically a smartphone, which opens all sorts of possibilities to intrude on your privacy.
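The code-generator option above is usually a time-based one-time password (TOTP), the scheme standardized in RFC 6238 and used by most authenticator apps. A minimal sketch using only the Python standard library (the secret shown is an illustrative example, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """RFC 6238 time-based one-time password: HMAC-SHA1 over a
    counter derived from the current 30-second time step."""
    key = base64.b32decode(secret_b32)
    counter = int((now if now is not None else time.time()) // interval)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: take 4 bytes at an offset
    # chosen by the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % (10 ** digits)).zfill(digits)

# Server and authenticator app share the secret and compute the
# same code independently; the code changes every 30 seconds.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because both sides derive the code from a shared secret and the clock, nothing secret crosses the network at login time, which is what makes this factor resistant to simple credential replay.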
Beyond MFA, things get a little dicey because they involve biometric identification techniques. Biometrics include fingerprints, iris scans, and distinguishing facial features that uniquely identify a person. At the Department of Homeland Security, biometrics are used to detect and prevent illegal entry into the U.S., grant and administer immigration benefits, handle vetting and credentialing, facilitate legitimate travel and trade, enforce federal laws, and verify identity for U.S. visa applications.
Fingerprints are a biometric factor that has been in use for over a hundred years because everyone's fingerprints are unique and remain constant throughout their lives. Interestingly, in ancient Babylonia and China, thumbprints and fingerprints were used on clay tablets and seals as signatures. But DHS also employs more controversial identity techniques – including facial recognition.
The rise of facial recognition systems
Facial recognition systems (FRS) use computer-generated filters to transform face images into numerical representations, which are then compared to determine their similarity. These filters are usually generated using "deep learning," which uses artificial neural networks to process data. According to a Georgetown University study, half of all American adults have their images stored in one or more facial-recognition databases that law enforcement agencies can search. At least 117 million Americans have images of their faces in one or more police databases, and the FBI has had access to 412 million facial images for searches.
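The "numerical representations" produced by those deep-learning filters are typically embedding vectors, and similarity between two faces is often scored with a cosine measure. A minimal sketch of that comparison step, using toy 4-dimensional vectors in place of the hundreds of dimensions a real network would produce (the 0.8 threshold is an illustrative assumption, not a standard value):

```python
import math

def cosine_similarity(a, b):
    """Score how closely two face embeddings point in the same
    direction: 1.0 means identical direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_same_person(emb1, emb2, threshold=0.8):
    # The threshold is tuned in real systems to trade off false
    # acceptances against false rejections.
    return cosine_similarity(emb1, emb2) >= threshold

# Toy embeddings: a probe image compared against a stored gallery image.
probe = [0.12, 0.93, 0.05, 0.31]
gallery = [0.10, 0.95, 0.07, 0.28]
print(is_same_person(probe, gallery))
```

The choice of threshold is where policy meets engineering: a stricter threshold rejects more legitimate matches, a looser one accepts more impostors.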
Facial recognition is fraught with problems, particularly issues of privacy and bias. Ethical issues include racial bias and misinformation, racial discrimination in law enforcement, privacy, lack of informed consent and transparency, mass surveillance, data breaches, and inefficient legal support.
- Many people and organizations use facial recognition — and in many different places. Some airlines have begun to scan faces at departure gates.
- Social media companies on their websites. Facebook used an algorithm to spot faces when you uploaded a photo to its platform, then asked whether you wanted to tag people in your photos, creating a link to their profiles. Facebook's FRS was able to recognize faces with 98% accuracy. Facebook (now Meta Platforms Inc.) announced that it would end use of its FRS and delete more than a billion users' facial recognition templates, citing privacy and regulatory challenges as factors in its decision, on the back of ongoing government investigations and a class-action lawsuit.
- Law enforcement at airports - DHS uses FRS to catch people whose visas have expired or who are being tracked as part of criminal investigations.
- Businesses at entrances and restricted areas. Here, FRS functions in place of badges.
- Retailers in stores - identifying "suspicious" characters and potential shoplifters.
- Marketers and advertisers in campaigns. When targeting groups for a product or idea, marketers often consider gender, age, and ethnicity. Facial recognition can define those audiences, even at something like a concert.
Organizations typically do not develop their facial recognition software internally. They rely on third-party providers like ID.me and Clearview. Both have you submit a "selfie" and other documents to verify your identity. The IRS uses ID.me for some processes, such as electronic filing of taxes, but recently backed off its controversial decision to require ID.me selfies to validate login identities (you can now do a live virtual interview as an alternative).
Other areas of the U.S. government currently employ ID.me to verify people's identity for Social Security, state benefits, and Veterans Affairs. In 2021, some people who used ID.me to verify their state benefits reported having their benefits denied or applications put on hold due to possible issues with the software (the future of ID.me for U.S. government applications is currently under scrutiny by lawmakers).
Governmental agencies in the U.K., Australia, and Canada have ordered and/or fined facial recognition company Clearview AI to cease collecting images for their database and destroy more than three billion collected images. France's Commission Nationale Informatique et Libertés (CNIL) said Clearview AI breached Europe's GDPR data protection law. Clearview was given two months to delete the collected photos and personal information and stop the "unlawful processing" of the data. The authorities asserted the company breached privacy by collecting and sharing face-identification information without consent and by unfair means.
FRS' use, admittedly, will continue to increase, despite the recent developments. With data breaches, cybercrime, and new surreptitious uses of biometric information being revealed every day, we can hope for some form of comprehensive federal legislation regulating the use of FRS. Its success, however, may largely depend on the mid-term elections.
A federal regulatory framework governing FRS must, at a minimum, offer privacy safeguards, consistency with constitutional protections, and transparency surrounding the specific uses of FRS. The overarching significance of consent and enhanced transparency for governing usage of sensitive biometric data will continue to be supported by other legislative proposals on data privacy.
In practice, user data must be encrypted and purged at regular intervals. Software providers must have a robust plan in place in case of a data breach.
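The "purged at regular intervals" requirement amounts to a retention policy enforced in code. A minimal sketch, assuming a hypothetical 90-day retention window and an in-memory store mapping user IDs to timestamped records (real deployments would run this against an encrypted database on a schedule):

```python
import time

RETENTION_SECONDS = 90 * 24 * 3600  # assumed 90-day retention policy

def purge_expired(records, now=None):
    """Drop biometric records older than the retention window.
    `records` maps user_id -> (created_at_epoch, encrypted_blob)."""
    now = now if now is not None else time.time()
    return {uid: rec for uid, rec in records.items()
            if now - rec[0] < RETENTION_SECONDS}

# Example: one fresh record and one that is past the 90-day window.
current = time.time()
db = {
    "alice": (current - 10, b"...ciphertext..."),
    "bob": (current - 120 * 24 * 3600, b"...ciphertext..."),
}
db = purge_expired(db, current)
print(sorted(db))  # only the record inside the retention window survives
```

Keeping the blobs encrypted and the retention check automatic means a breach exposes less data, and stale data cannot linger by accident.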
The accuracy rate of any FRS depends on the database that its artificial intelligence was trained on. The data needs to be continually growing, with diversity in terms of gender and ethnicity. The training data also needs to have a variance in lighting, angles, and facial expressions. A good database also carries different resolutions of images for the system to work with. Machine learning programs are only as good as the database they use to learn, and the FRS is no exception.
The key metrics to consider when evaluating an FRS are the false acceptance rate (FAR) and the false rejection rate (FRR). A false acceptance occurs when images of different people are wrongly matched as the same person; in a security setting, the wrong person may be allowed access. A false rejection occurs when images of the same person are wrongly judged to be different; the right person may be denied entry. In a practical security scenario, the FAR must be kept low, even at the cost of a higher FRR.
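Both rates fall out of the same match-score data, split by whether each comparison was genuinely the same person. A minimal sketch with made-up scores (the 0.85 threshold and the sample data are illustrative assumptions):

```python
def far_frr(scores, same_person, threshold):
    """Compute false acceptance and false rejection rates.
    scores: similarity score per comparison.
    same_person: True where the pair really is the same person."""
    impostor = [s for s, same in zip(scores, same_person) if not same]
    genuine = [s for s, same in zip(scores, same_person) if same]
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

scores = [0.95, 0.91, 0.62, 0.88, 0.40, 0.75]
same_person = [True, True, True, False, False, False]
far, frr = far_frr(scores, same_person, threshold=0.85)
# At 0.85, one impostor (0.88) is accepted and one genuine
# match (0.62) is rejected: FAR and FRR are both 1/3 here.
print(far, frr)
```

Raising the threshold pushes FAR down and FRR up; tuning it is exactly the low-FAR trade-off the text describes.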
Will regulation control FRS?
Enforcement will be a crucial issue in any regulation of FRS. Some federal proposals suggest enforcement by removing federal funding or the eligibility for funding for governmental and commercial operators. Other proposals have offered private rights of actions for affected citizens, individually or collectively, to bring civil actions for injunctive relief, declaratory relief, and/or monetary damages. Any legislation should have clear enforcement plans to push companies to comply.
Until a comprehensive federal regime is established, the US will continue living in this patchwork of state and local laws, which may help develop a common consensus for requirements under a federal regulatory regime. Meanwhile, companies using FRS should maintain privacy and data security measures that comply with state and local laws and develop policies and procedures to protect sensitive information.
Another controversial biometric identification technique is voice analysis. Like facial recognition, it is rife with ethical issues. Call centers may route women callers to different processes than men. Racial identification is an obvious source of voice recognition bias as well.
FRS, and other techniques like voice identification, are fertile ground for bad actors to steal people's privacy and assets. It's unimaginable to me that FRS providers, who keep massive databases to train and identify, can resist the gains of illegally sharing that information. In addition, the burning desire to validate identity seems to be a solution to a problem we only recently created – the Internet, the Web, an open system for people all over the world to use, and crooks to misuse. So, FRS was inevitable, but let's hope our regulators can get a handle on it.