A teenage girl intrigued by the use of artificial intelligence in courtrooms, a man in the North of England concerned about the implications of smart speakers being given a gender, and a mum worried about the privacy of an app used in her son’s classroom – just some of one academic’s personal encounters that reveal a public with a more sophisticated view of AI than newspaper headlines screaming about killer robots might suggest.
That academic is Hannah Fry, Associate Professor in the Mathematics of Cities at University College London, introducing a new Samsung report on AI. That said, she acknowledges in her foreword [she’s not listed as an author of the document] that the words ‘data’ and ‘algorithm’ make many people want to “gouge out their own eyes”.
The report, FAIR Future: Involving Everyone in AI, is presented as part of an ongoing project by the electronics company to demystify AI via public consultation – an excellent aim. It has been hosting live events across the UK, while strategic insights firm delineate and market research company opinion.life carried out an online survey. For the survey, researchers engaged with a representative sample of the population of the UK and Ireland – 5,250 people aged 13+ – in an exercise designed to take stock of the public’s attitudes to AI and reveal any concerns. And, as Fry writes:
To do so in a way that invites, rather than excludes, and to allow everyone to be a part of shaping the way ahead. Because the future doesn’t just happen, we create it.
Despite the size, scope, and noble intentions of the programme, it rings an alarm bell that this remark from a respected academic echoes some of Samsung’s recent marketing messages. But perhaps that’s just a coincidence. We do all create the future, even though it increasingly feels like we’re just horrified observers.
From the breakdown of responses – presented as a graphic – the public remains split on the technology’s potential impact. Roughly half of all respondents (51%) believe that AI’s impact on society will be positive, 16% that it will be negative, with the remaining one-third of interviewees ambivalent or unable to answer the question. Within that headline figure, support is significantly higher among men than among women.
Fifty-one percent positive support for AI against a paltry 16% expressing negative views is a notable finding in a report that strives to be upbeat in tone, content, and presentation. The survey also reveals that support is greatest among those who are familiar with the technology and lowest among those who are not.
Fair enough. But the problem is that these findings are merely presented in a bar chart labelled ‘Sentiment towards AI by gender and familiarity’, with no indication of what people were asked or how they responded. Is this an aggregate response to multiple questions? Samsung doesn’t reveal the workings behind its sentiment map.
The further one gets into the 20-page document in search of detail and depth, the clearer it becomes that the authors tend to pull out and comment on the good news, with statistics, while either ignoring the negative or ambivalent responses or simply failing to put figures against them.
The resulting impression is of a report that feels weighted towards positive outcomes, and against any naysayers. It’s likely that the detailed research supports the authors’ rather broad conclusions, but presenting them in a way that feels partial damages the document’s credibility. For such a huge survey (possibly the biggest of its kind in the UK), the output feels lightweight. Hopefully more detail will be revealed in the future.
In terms of applications, the public are most positive about AI in the home: 59% have a “positive feeling” about enhanced household appliances, presumably because of their exposure to smart speakers, Internet of Things devices, and smartphone digital assistants. Smart manufacturing receives the same level of public support, with half of respondents positive about the use of AI in agriculture.
According to the handful of graphs and statistical breakdowns in the report, smart healthcare is backed by less than half of the public: roughly 46%, with the rest being either negative, ambivalent, or unable to answer the question. That’s a surprising result, given AI’s potential to improve disease diagnosis and personal healthcare management.
Despite this, the commentary suggests much greater support was found by the project’s interviewers:
Almost two-thirds (63%) told us they are looking forward a great deal to having enhanced healthcare. Over half (56%) also felt this way about how technologies can give the elderly or those with disabilities more autonomy in their lives. And half of the people we spoke to said they are really looking forward to technology assisting with the prediction and response to disasters.
When we explained that these technologies would include artificial intelligence, people found them more desirable. Healthcare rose to 70% of people looking forward to adopting it, and disaster protection to 66%. It seems that once people start to engage with the idea of AI, what it can do for them and the world around them, they begin to see the positive possibilities more clearly.
This may be true – and a natural result of the conversation that Samsung has started – but it may also suggest that the company is an evangelist spreading the ‘good news’ as much as listening to what people are actually saying. Certainly, interviewers appear to have asked vague or leading questions, judging from the above paragraph from the report (“When we explained that these technologies would include Artificial Intelligence, people found them more desirable...”).
Without setting out all of the data about what questions the public were asked and what they actually said in response, the report appears (rightly or wrongly) to be matching data to an upbeat narrative, rather than providing a balanced commentary on its own statistical findings. That does the research few favours.
Among the applications with low public support or greater ambivalence and doubt are the military (27% of respondents expressed positive views), finance, and call centres (both with one-third positive feedback). However, by far the lowest support is for the application of AI in social media and/or dating services (less than one-quarter of respondents, judging from data that’s largely presented in graphical form).
The report then explores the question of bias, saying that “alongside the optimism and keen anticipation of what AI can do for us” (which isn’t reflected in some of its own findings) there is a degree of caution and concern about how the technology might be used:
This caution focuses on areas like bias, ethics, and even conflict. As with most things in life, a little information can be a dangerous thing. When people aren’t invited into the conversation about how something might change their life, they begin to form negative opinions.
Ironically, the presentation of the report – its evangelical tone, with little breakdown of negative findings – becomes counter-productive at this point. Indeed, it suggests that the authors may be exhibiting a bias of their own: a belief that the only thing preventing people from welcoming AI into their lives is the absence of evangelists. Of course, that may be true in some instances, given the negative spin in the tabloid press, but it’s hardly rigorous academic analysis.
That said, the report acknowledges some of the core issues and counter-arguments very well:
We have a view of computers as logical, objective tools that support us to create, solve problems and make decisions. Some people feel that AI won’t have this same logical objectivity in the future. Almost four in ten (39%) people feel that AI will hold some form of bias, and this concern was higher (43%) in those who held a closer interest in AI.
In short, people who have invited themselves into the conversation about AI appear more aware of the risks – of bias and ethical problems – than others. The report continues:
Almost half (49%) believe that bias in AI would be unintentional, with a much lower number of people (20%) saying they felt this programming would be done on purpose. Others felt that the AI itself would form biases, with 28% feeling that these could develop on their own.
A number of these questions of bias focus on how AI might treat or profile certain groups in society differently based on their race, gender, sexuality or beliefs, etc. People are worried that any bias in AI could have a knock-on effect and increase prejudices. It might encourage people to act on their existing prejudices, as over a third (36%) of people believe.
It may also, as over a third (35%) of people believe, encourage more people to hold these prejudices. Some people (31%) also believe that groups that already face discrimination will see this increase as a result of bias in AI.
Good points well made. But as many studies on this issue have demonstrated, the core problem isn’t necessarily bias in algorithms or among coders – though this may exist, due to incorrect assumptions, lack of diversity in development teams, or groups unconsciously weighting a system to back their beliefs (confirmation bias) – it is often institutional and couched in the data itself.
If any system has been biased against a group or minority for a lengthy period – arrests, convictions, and sentencing in the US legal system, for example – then that bias will be reflected in decades of historical data that an AI needs to function. Unless counter-balances are introduced into the algorithm or training data, an AI system will simply automate that bias and come to view a particular group as exhibiting greater criminal tendencies.
Exactly this problem was identified a couple of years ago with COMPAS, an AI system used to advise US judges about whether a criminal was likely to reoffend. It was found to give more positive advice if the defendant was white, and more negative if they were black, due to the decades-long bias that the civil rights movement stood against – and still stands against.
Related problems have been found in some driverless car systems: if you collect less data about any minority (because, by definition, there are fewer of them in society), then it stands to reason that such a system will be better able to identify members of the majority group, because it has access to far more data about them. These are issues that need to be addressed in both data sets and algorithm design.
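To make the point concrete, here is a minimal, hypothetical sketch in plain Python (all numbers invented, not taken from the report or from COMPAS) of how a naive frequency-based ‘risk score’ simply reproduces a skew in its historical records, with no biased code anywhere:

```python
# Hypothetical historical records: (group, arrested) pairs. Group B was
# policed more heavily, so its members appear in arrest records more
# often, even if real behaviour across the two groups were identical.
history = [("A", False)] * 90 + [("A", True)] * 10 \
        + [("B", False)] * 70 + [("B", True)] * 30

def learned_risk(records, group):
    """Fraction of past records for `group` marked as arrests --
    the naive 'risk score' a frequency-based model would learn."""
    outcomes = [arrested for g, arrested in records if g == group]
    return sum(outcomes) / len(outcomes)

# The model faithfully reflects the skewed data, not real behaviour.
print(learned_risk(history, "A"))  # 0.1
print(learned_risk(history, "B"))  # 0.3
```

The code does exactly what it was told; the bias arrives with the records. That is why any counter-balance has to be introduced in the training data or the algorithm itself, as the passage above argues.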
Part of the problem with negative attitudes to AI is rooted in some people’s sources of information, says the report:
As many as 42% of people get their information on AI from TV and radio and online is a close second at 39%. But almost a third (32%) are basing their assumptions on what AI is and how it will impact them using fiction as their primary source of information. This is even higher for people under 25 (36%).
It seems that people want to understand the technology better. There is an opportunity for AI and technology businesses to educate. If we continue to let old ideas flourish, based on Hollywood and sci-fi stories, people will continue to misunderstand the benefits of AI-enhanced technologies.
That last statement may be completely true, and Samsung should be commended for trying to counterbalance the negative tabloid narratives and decades of dystopian sci-fi with an outreach programme of techno-evangelism. But the exercise would gain a great deal more credibility if Samsung let the figures speak for themselves, both for and against each question, rather than appearing to impose its own utopian narrative.
After all, most of the great science fiction tales are really satires on the societies and times in which they were written, and some concern the workings of technology corporations. Hopefully, Samsung will publish all of the detailed findings of this significant survey, and allow them to be analysed without commentary.