Face off? Microsoft and regulation
- Summary:
- Advanced technologies are being developed faster than our institutions can fully absorb their consequences. Microsoft wants the government to step up.
On the Official Microsoft Blog, Smith said such artificial intelligence is too potentially dangerous for tech giants to police themselves, and urged lawmakers to form a bipartisan commission of experts that could set standards and protect against abuses of facial recognition, in which software can be used to identify a person from a distance without their consent. Smith writes:
Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses.
Smith's call comes amid an avalanche of public criticism of tech giants over their development and sale of AI-powered identification and surveillance software. Microsoft has faced pressure from its own employees to denounce the company's cooperation with Immigration and Customs Enforcement (ICE), and the company's response hasn't stopped internal dissent. An open letter signed by more than 100 employees was published internally and by The New York Times. "As the people who build the technologies that Microsoft profits from, we refuse to be complicit," reads the letter, which asks that Microsoft cancel any ICE-related contracts.
A similar movement is underway at Amazon, where a group of employees is pressuring the company's leadership to stop selling its facial recognition software to law enforcement and to stop providing services to companies that work with ICE. In a letter to CEO Jeff Bezos, posted on the company's internal wiki, they wrote:
We don’t have to wait to find out how these technologies will be used. We already know that in the midst of historic militarization of police, renewed targeting of Black activists, and the growth of a federal deportation force currently engaged in human rights abuses--this will be another powerful tool for the surveillance state, and ultimately serve to harm the most marginalized.
Smith wrote that the "sobering" potential uses of facial recognition, now employed extensively in China for government surveillance (we wrote about China's vast surveillance efforts here), require that the technology be subjected to greater public scrutiny and oversight. He added:
Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first.
Smith rightly points out that there are many benign and positive roles for facial recognition technology. For example, it can be used to catalog and search your photos; to improve security by recognizing your face instead of requiring a password to unlock laptops or iPhones; and, soon, to power automated teller machines that know who you are on sight. Only a few days ago, Microsoft introduced a new live event hosting application with automatic facial recognition and speech-to-text transcription that lets employees skim videos by participant, rather than just by timestamp and keyword.
Some future uses could be even more profound, Smith wrote:
Imagine finding a young missing child by recognizing her as she is being walked down the street. Imagine helping the police to identify a terrorist bent on destruction as he walks into the arena where you’re attending a sporting event. Imagine a smartphone camera and app that tells a person who is blind the name of the individual who has just walked into a room to join a meeting.
Among the many moral and ethical issues to be resolved, Smith said, is whether police or government use of facial recognition should require independent oversight; what legal measures could prevent the AI from being used for racial profiling; and whether companies should be forced to post notices that facial-recognition technology is being used in public spaces. Allowing tech companies to set their own rules, Smith wrote, would be "an inadequate substitute for decision making by the public and its representatives."
The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself. And if there are concerns about how a technology will be deployed more broadly across society, the only way to regulate this broad use is for the government to do so. This in fact is what we believe is needed today.
My take
We are living through an age in which advanced cognitive technologies are being developed and deployed faster than our political, legal, moral, ethical and cultural institutions can keep up with them. Facial recognition is the bogeyman du jour because it raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. It's clear that other new technologies will raise similar issues for years to come. How these issues are resolved (and how the technologies are used) may well define the decade ahead. Technology on its own is neither good nor evil, but when the potential for evil use is so great, putting the public interest first seems a wise approach. Microsoft should be commended for trying to get out in front of the political fallout that's already coming. The industry needs to work with governments to avoid becoming the fall guy for the unintended consequences that are certain to follow.