Meta's Facebook is facing yet another state-backed lawsuit over its past facial recognition practices: the Texas Attorney General's Office is suing the company for collecting biometric data on millions of Texans without their informed consent. Facebook faced a similar lawsuit in Illinois in 2020, which it settled for $650 million and which later contributed to the company shutting down its facial recognition technology.
The Attorney General of Texas, Ken Paxton, said that the lawsuit represents another example of "Big Tech's deceitful business practices".
The plaintiff's petition alleges that Facebook unlawfully captured the biometric identifiers of Texans for commercial purposes without their informed consent, disclosed those identifiers to others, and failed to destroy the collected identifiers within a reasonable time.
The claim focuses mostly on a feature through which Facebook offered suggested tags for people in photos based on an analysis of their faces, including the faces of non-users of the app. The lawsuit also alleges that similar tools were used on Instagram.
Meta has said that the claims are "without merit" and that the company will defend itself "vigorously".
Reports suggest that Paxton and Texas are seeking billions of dollars in civil penalties.
Facial recognition technology is under intense scrutiny internationally, with various organizations facing lawsuits or public criticism for making use of the technology. The UK and Australia, for example, recently clamped down on Clearview AI, which describes itself as the "World's Largest Facial Network".
Police forces have also been criticized for using facial recognition technology, given well-documented research showing that it is not entirely accurate and can lead to biased decisions.
Simply put, regulation has not caught up with the development of biometric technology itself. However, Texas' lawsuit argues that Facebook, for over a decade, built an "Artificial Intelligence empire on the back of Texans by deceiving them while capturing their most intimate data".
The lawsuit states that Facebook's empire was "built on deception, lies and brazen abuses of Texans' privacy rights - all for Facebook's own commercial gain".
Facebook knowingly captured biometric information for its own commercial benefit, to train and improve its facial-recognition technology, and thereby create a powerful artificial intelligence apparatus that reaches all corners of the world and ensnares even those who have intentionally avoided using Facebook services.
The scope of Facebook's misconduct is staggering. Facebook repeatedly captured Texans' biometric identifiers without consent not hundreds, or thousands, or millions of times - but billions of times…
There can be no free pass for Facebook unlawfully invading the privacy rights of tens of millions of Texas residents by misappropriating their data and putting one of their most personal and valuable possessions - records of their facial geometry - at risk from hackers and bad actors, all to build an AI-powered virtual-reality empire.
Shutting it down
As mentioned above, following the Illinois suit and settlement, Meta announced that it would be shutting down its Face Recognition System on Facebook. In November last year it said that people who had opted in would no longer be automatically recognized in photos and videos and that it would delete more than a billion people's individual facial recognition templates.
Facebook also pointed to how the technology powered its automatic alt text system, which uses AI to generate descriptions of images for people who are blind or visually impaired, telling them when they or one of their friends appears in an image.
However, quite likely weighing up future challenges to its use, Facebook ultimately decided to shut the system down. In a blog post, Jerome Pesenti, VP of Artificial Intelligence, said:
The many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole. There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use. Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.
This includes services that help people gain access to a locked account, verify their identity in financial products or unlock a personal device. These are places where facial recognition is both broadly valuable to people and socially acceptable, when deployed with care. While we will continue working on use cases like these, we will ensure people have transparency and control over whether they are automatically recognized.
Like most challenges involving complex social issues, we know the approach we've chosen involves some difficult tradeoffs. For example, the ability to tell a blind or visually impaired user that the person in a photo on their News Feed is their high school friend, or former colleague, is a valuable feature that makes our platforms more accessible. But it also depends on an underlying technology that attempts to evaluate the faces in a photo to match them with those kept in a database of people who opted-in. The changes we're announcing today involve a company-wide move away from this kind of broad identification, and toward narrower forms of personal authentication.
Meta once again finds itself facing the long arm of the law, embroiled in a battle over its data protection practices. Given the outcome of the case in Illinois, the Texas lawsuit will no doubt be a concern for the company. We will be watching closely to see how this turns out - not just for the impact on Meta, but also because these suits will frame how companies and people think about collecting biometric data going forward.