Following up on yesterday’s AI and bias report, my attention was drawn to some interesting comments made by Microsoft President and Chief Legal Officer Brad Smith that illustrate again that ethical questions in tech are seldom binary.
Speaking at a Stanford University conference on human-centric AI, Smith cited examples of when Microsoft would decline to sell its tech to various parties, but defended a criticised research program undertaken with China. He told delegates at the event:
When it gets to something like facial recognition, there have been deals that we've turned down, just as there are countries to whom we won't sell our Artificial Intelligence for weapons. One was a proposed sale of facial recognition technology to the government of a country that [not for profit] Freedom House says is not free…We looked at that and we said, ‘We think that's gonna put human rights at risk, we think it will undermine the ability of people to assemble and express their views’ and so we said no.
Another instance was a lot closer to home:
We had a law enforcement agency in California want to deploy facial recognition in two very different scenarios. This gets to the scenario-based issues that arise. One was to deploy it in a prison so that they would know where the prisoners were when they were outside their cell. We looked at that and we said, ‘It's a defined sample. We can be confident that there are low error rates. I think it was all men, so there was no issue for women. Most importantly, it could promote safety in the prison because unfortunately safety is a problem in many American prisons.’ That scenario we said we were comfortable with.
The other scenario was that they wanted to deploy it for the body cams and police cars, in every car that was then out on patrol. The reason they wanted to do it was any time they pulled anyone over for any purpose whatsoever, they wanted to run a facial recognition scan through a database to identify whether it matched a list of suspects and if so they would take the person downtown for further questioning. We said, ‘This technology is not your answer. You're gonna end up taking African Americans and other people of color and women downtown to be questioned when they did nothing wrong because you're gonna end up with this misidentification’.
Smith added that the potential buyer in this case was appreciative of Microsoft’s stance:
What was most interesting to me was we said two things to them. We said, number one, ‘We won't sell you our technology for this purpose’, but number two we said, ‘We don't think you should buy anyone's technology for this purpose!’. We actually think that our error rates are lower than what you're going to get from the competition, but it's just not the right way to put this technology to work. The thing that I found most encouraging from a customer perspective, they said, ‘Thank you for raising this’.
On the other hand, sales teams inside Microsoft weren’t quite so understanding, highlighting the tension between principle and profit. Smith admitted:
It does go in my view to one of the cultural challenges that exists almost inherently in any company in any industry. You exist to sell your product to anyone who wants to buy it…Believe me, the sales force was not excited, whether it was the sales force selling to the government for the capital city or the sales force that wanted to sell to the police force. I got a very impassioned email from the head of the sales team in the first instance about how upset she was that I had said no, that we wouldn't sell it to this government and in this capital city. That as much as anything to me points to what companies need to work through. Put in place the controls and the processes and then you have the days when you're unpopular, because that's the only way to put these controls in place and make them stay.
But there are other controversial customers to whom Smith insisted it was appropriate to sell and to work with, citing AI and the military as a case in point:
We certainly have felt that it’s important to be principled now when it came to AI in the military. You know we said that we would provide our best technology and all of our technology to the United States military. We will do the same thing for others, say NATO allies that have democratic processes and fundamental human rights controls…The number one issue I think right now when it comes to AI in the military, it really does go to lethal autonomous weapons and whether they are going to be subject to appropriate and meaningful human control. I think this is an important issue of domestic policy in the United States and I think it is an important issue of global international law as well.
He also defended Microsoft’s reported work with a Chinese military-funded university on AI-enabled facial analysis tech, which activists have warned could be turned against the nation’s Uighur Muslims:
Smith insisted that Microsoft wouldn’t co-operate in the use of facial recognition in ways that could lead to mass surveillance, but added that in this instance there is no evidence that this is happening. He claimed that the work being done is “basic research” and that it could benefit a much wider audience:
If you really look at what people were working on, it really is sort of basic research advances, typically in fields like machine learning. One of the things that we apply as a company that is not universally shared across the industry - I think the industry would be better if it were - is a commitment to publish papers in the basic research field…I don’t think any of us would say, ‘Well then, let's stop all research in machine learning’. I think that's the wrong answer.
Equally I think it's the wrong answer to start to put pressure on researchers in the United States, whether they be at Microsoft or Google or Stanford or MIT, that says, ‘Don't work any more with people doing basic research in China’. I think that is a recipe to begin to hold back technology leadership and growth in the United States itself and it is a recipe to hold back basic human understanding in fields of science and technology that are foundational for the world.
Smith also made a pitch for the moral high ground for Microsoft, suggesting that other firms - and while Amazon wasn’t mentioned, the elephant was trumpeting at the back of the room - might not take a similar stance:
Any time you're talking about an AI based market you face the risk that the market will tip early on to those who accumulate the most data because they'll be able to use that data to further improve their product. Therefore it is tailor-made unfortunately for the classic race to the bottom. People will just go out to do every deal they can so that they can get all the data they can and try to become the market leader. I just don't see a way to prevent that kind of race to the bottom without creating this kind of legal or regulatory [change].
Microsoft has been campaigning for Federal Government regulation in the areas of AI and machine learning, although Smith called for a change of approach:
Don’t spend a decade debating the ideal for the world of technology regulation; do what we do every day in the world of technology itself. You create a first version of a product. You make sure it's good. You make sure it's useful. But you get it into use and then you learn from it. That’s how we move faster in the world of technology creation and I believe it is a recipe that governments should start to consider for the world of technology regulation.
This was an interesting contribution to the ethical debate around AI and bleeding-edge tech usage and one that reminds us that nothing is, pardon the expression, black or white here. Microsoft in general and Smith in particular are setting out their stalls as thought-leaders in this area, a savvy move if they want to be seen on the right side of history. That said, there’s a way to go in the current climate in the US to convince everyone that, for example, working with the Chinese authorities - even at one stage removed - can be spun as a good thing.
In the meantime, this isn’t an issue that’s going to go away and will in reality only become more complex. As Smith observed, it's a moving goalpost:
Frankly there have been debates about technology more broadly. You know people have said, ‘Gee we shouldn't be selling technology to the immigration authorities or to oil companies’. Whatever the list is today, my guess is it'll be longer a year or two or five years from now. Again you really have to step back and start to work through each of them. It’s not a single answer that we have found for everything.