That’s one of the dramatic findings of a new report just released by Oxford University’s Center for the Governance of AI. Titled Artificial Intelligence: American Attitudes and Trends, the report is based on a survey conducted with YouGov that polled 2,000 Americans on their attitudes toward artificial intelligence between June 6 and June 14, 2018.
The report offers many interesting insights, but the key takeaway is that a large portion of the American public is frightened by, or at best pessimistic about, the benefits of machine intelligence. After reading a short explanation of each of the AI “challenges” provided by the interviewers, a minority (41%) somewhat or strongly supported the development of AI, while a smaller minority (22%) somewhat or strongly opposed it. The authors write:
There are more Americans who think that high-level machine intelligence will be harmful than those who think it will be beneficial to humanity. While 22% think that the technology will be “on balance bad,” 12% think that it would be “extremely bad,” leading to possible human extinction. Still, 21% think it will be “on balance good,” and 5% think it will be “extremely good.”
The governance challenges perceived to be the most likely to impact people around the world within the next decade and rated the highest in issue importance were:
- Preventing AI-assisted surveillance from violating privacy and civil liberties
- Preventing AI from being used to spread fake and harmful content online
- Preventing AI cyber attacks against governments, companies, organizations, and individuals
- Protecting data privacy
Demographics play a role in the level of AI support. There is substantially more support among those with larger reported household incomes, such as those earning over $100,000 a year (47%), than among those earning less than $30,000 (24%); among those with computer science or programming experience (45%) than among those without (23%); and among men (39%) than among women (25%). The authors note that these differences are “robust to our multiple regression” and are not easily explained away by other characteristics. They write:
The overwhelming majority of Americans (82%) believe that AI and/or robots should be carefully managed. This figure is comparable with survey results from EU respondents.
Who do you trust?
The takeaway for large technology companies is that when it comes to the development of machine intelligence they have a serious “trust” problem with the American public at large. Said Allan Dafoe, director of the Center for the Governance of AI, and one of the authors of the report:
There is no organization that is highly trusted to develop AI in the public interest, although some are trusted much more than others.
Broadly, the public puts the most trust in university researchers (50% reporting “a fair amount of confidence” or “a great deal of confidence”) and the U.S. military (49%); followed by scientific organizations, the Partnership on AI, technology companies and intelligence organizations; followed by U.S. federal or state governments, the UN; and, coming in dead last, Facebook.
More than two-thirds of those surveyed said they had either "no confidence" or "not too much confidence" in Facebook developing AI. The result is another indicator that Facebook has lost public trust following a string of scandals, most notably its failure to protect users’ privacy and Russia’s use of the social network in an attempt to influence the 2016 Presidential election.
Microsoft Corp. was the most trusted among technology companies, with 44% of people saying they had either "a great deal of confidence" or "a fair amount of confidence" in its ability to create AI that wouldn’t pose risks. Overall, people had the most faith in the U.S. military, with 17% giving it the highest confidence score and 32% the second highest.
As the authors, Yale's Baobao Zhang and Oxford's Allan Dafoe, admit, one of the problems with technology-oriented surveys of this kind is that much of the American public, like the public elsewhere, simply doesn’t know much about AI or machine learning. As a result, respondents may be unaware that many tech products and services they use regularly rely on AI or machine learning. Who stops to think that the photo editor they just used to tart up a photo on Facebook might be using artificial intelligence? That is likely why the authors framed their questions in terms of public policy “challenges” rather than more technical factors.
One of the key problems with machine intelligence is accountability. Who is responsible when machines make inferences and decisions that adversely and unfairly affect real people? For example, in October, Amazon scrapped an internal AI recruiting tool after finding that the system favored men over women.
As Stuart Russell, professor of computer science, director of the Center for Intelligent Systems, and Smith-Zadeh Chair in Engineering at UC Berkeley, so elegantly explains:
Being better at making decisions is not the same as making better decisions. No matter how excellently an algorithm maximizes, and no matter how accurate its model of the world, a machine's decisions may be ineffably stupid, in the eyes of an ordinary human, if its utility function is not well aligned with human values.
The bottom line is that most Americans are not crazy about AI and robots and don’t trust our biggest technology providers to develop machine intelligence in a way that does no harm. This is a serious problem for the industry.
As an endnote, I am left wondering whether these findings transfer to other geographies. My guess is that, at least politically, the answer in Europe and India, for example, is a firm ‘yes.’
What do you think?