Can countries forge AI leadership through ethical strategies?

Chris Middleton, November 11, 2019
Summary:
The Westminster eForum on AI skills and research excellence finds that ethics should be core to a country’s ambitions. But what does that actually mean?


There’s a global battle to become a dominant force in Artificial Intelligence (AI), both as a consumer and as a technology developer, but what part does ethical thinking play in that?

The EU launched its own €20 billion AI programme last year, while South Korea, Japan, Singapore, Taiwan, India, Sweden, Denmark, France, Canada, and Australia are among the many nations to have published dedicated strategies since 2017.

Meanwhile, few would deny that the US leads the world in commercialising the technology at scale – partly through acquiring UK innovations, such as Google’s $500 million purchase of DeepMind. Or that China is embedding troubling links between government, business, citizens, and surveillance via AI, facial recognition, digital payments, and other technologies: from 2020, these will combine in a compulsory national ratings and social-engineering scheme.

Where a leadership position could be forged is in the ethical application of AI: technology that reflects the values of society, an ambition that the UK shares with France and Canada. That’s according to speakers at the recent Westminster eForum conference on reinforcing the UK’s AI skills and research excellence. Taken at face value, it’s a claim that seems rooted in a commonly held belief in British ‘exceptionalism’ and honesty. Whether that can be said to exist in 2019 is a question we’ll set aside for now.

China would doubtless argue that it, too, is casting its collective values into code via its national AI scheme, while many see the Googles, Amazons, and Facebooks of this world as exporting Silicon Valley values in the guise of supposedly global platforms – values that, it must be said, merely replace state surveillance with advertisers’ surveillance, but with little transparency or oversight. As a result, a handful of men are now inordinately wealthy. Mr Hard Place, meet Mr Rock.

Standing 

So where does the UK sit in all this? As Chair of the UK’s Centre for Data Ethics and Innovation – one of the new organisations set up after the Hall & Pesenti Review and the Sector Deal for AI – Roger Taylor ought to know the answer. He opened his eForum keynote with a startling comment:

Most countries that are not America and China recognise that we have to come up with an alternative model, because it’s a new world and there are issues with the way America operates and China operates that are not wholly consistent with our values as pluralistic, democratic societies, with appropriate social limitations on their activities.

Acknowledging the Chinese government’s stated determination to use AI to engineer a more obedient and controlled society, I asked Taylor to be specific about what he meant by that statement when it comes to the US. He responded:

The US approach to free speech is very different. And the way algorithms operate is designed in California. So the way social accountability now works is very different.

If you look at the way media used to operate in this country, for example, there was a degree of legal accountability to people living and working here, which reflected something about us. But we’re now in a situation where an algorithm decides ‘These are the next five videos that we recommend you watch’ or ‘These are the news stories we’re going to put in front of you today’ and it’s designed in California. The old mechanisms have simply gone.

In this country, we have always taken the view that there was some degree of social oversight of the media; broadcasting has always been regulated, with very high standards of impartiality, and newspapers have had self-regulatory models. But we now have a media world in which anyone can put something out there – publish it, as it were – and a Californian algorithm decides whether or not to distribute it to every household, or only to certain households, in our country. And there is no mechanism in that process where anybody has any degree of real social accountability.

So the question for us as a society is: we could just say ‘We’ll go with the California model as the future’, but it is a question for us to decide. Because there is clearly now a difference from how we’ve done things in the past.

These principles apply to other areas of our lives too: increasingly, our healthcare, fitness, finances, and more, may become mediated and managed offshore by American algorithms. The effect of that may turn out to be transformative and positive for citizens, but equally it may not. Either way, it needs to be discussed in the context of the UK’s national governance and its ability to regulate these functions and protect its citizens from online harms.

The caveat to this, of course, is that Taylor isn’t really talking about California at all, but about a handful of Silicon Valley software corporations. California itself has introduced a consumer privacy act – against the wishes of many of those companies – and has also acted against the use of live facial recognition systems in the state.

Partiality 

But Taylor’s basic point is sound, and it suggests that the breaking apart of once-monolithic media into a stream of bits filtered through the Valley’s advertising-focused giants may be the reason for what many see as a breakdown in standards of media impartiality in the UK. Are some publishers, broadcasters, and platforms simply throwing up their hands and saying, ‘Let’s drop any pretence about impartiality and go with the partisan views of our proprietors – because that’s what everyone else is doing’?

Or is the truth more subtle, complex, and troubling: that what we see of those media platforms online is itself the product of algorithms that favour the most controversial opinion pieces – aka clickbait – over those brands’ traditional reporting?

Many of us may no longer be seeing the actual BBC, Telegraph, New York Times, Guardian, or Independent, but an individually targeted version of those platforms that conforms to what an algorithm thinks we want from them, or a version spun from our collective subconscious and prejudices. As a result, we may only be seeing an extreme manifestation of each brand: whichever stories garner the most clicks, or those we are most likely to speed-read.

Put another way, humans have access to an unprecedentedly vast and expanding landscape of data, thanks to the World Wide Web, yet we are all looking at it through smaller and smaller apertures of the algorithms’ making. It’s like trying to capture an image of the universe through someone else’s pinhole camera.

Arguably, then, AI seems to be contributing to a breakdown of trust in information itself by giving us what Valley algorithms believe we want to see or hear, largely to serve the interests of an invisible network of advertising partners.

As a result, we are all adrift on a sea of surface noise, within vast echo chambers that reflect our own increasingly polarised views back at us. On that surface, at least, AI doesn’t seem to be quite the force for ethical, social transformation that many claim it to be. It’s left us all shouting at each other from entrenched positions.

In a science fiction movie, this might signal the beginning of some robo-pocalypse, but the reality is that this is simply what happens when advertising is allowed to be the dominant online force in the Western hemisphere. Little social good has come from unleashing the power to force commercial advertising down our throats, even if the Web itself has been a force for good in terms of community action.

Taylor and his organisation are certainly doing their best to put things right – or, at least, to tell us what’s wrong. The Centre for Data Ethics was announced in 2017, and formally launched alongside the AI strategy, Sector Deal, the Office for AI, and other initiatives that signalled that 2018 was a new Year Zero for UK ambitions in the field.

According to Taylor, the Centre is very much core to both the Industrial Strategy and the UK’s push to remain an AI and data powerhouse – something it clearly has the intellectual firepower for, if not the competitive levels of state investment. But aside from coming up with a new model to combat Chinese and US dominance, there are more basic concerns, he said:

If we don’t work out how to do this ourselves, we will just become takers of technology from other countries and we will lose a considerable amount of power over our own economies. We will lose wealth, we will lose the ability to manage our own lives in the way that we want. We face a very real challenge here.

Let’s not talk about the hype cycles – that’s more to do with investment levels. The technology is real and it’s something that’s happening very quickly. We can see it in things like facial recognition, in the progress in self-driving cars, in natural language processing, in smart speakers. We can see it in some industries too – in mining, for example, where there are astonishing increases in productivity by getting all of the machines in a mine to talk to each other and work out what they are all doing.

But there are many other areas where we are making less progress. And one of the reasons is that there are considerable ethical and trust issues. And it’s very hard to make progress unless people feel that they know the right way through those.

These range from corporate concerns about liability and bias – if they do something, are they going to end up in a lawsuit? – to issues around public acceptability. This is a particular problem in the public sector, where there are huge potential gains from using these technologies, but often AI gets characterised as simply a way of trying to save money and reduce jobs.

My take

On that point, the government itself is partly to blame, as are a number of the more extreme think tanks, some of which have leapt on the technology to spin a vision of haves and have-nots rather than of collective social benefit.

For example, a couple of years ago right-wing think tank Reform went as far as suggesting that AI, robotics, and automation would be good for the public sector, because councils could fire full-time workers (teachers, doctors, and nurses among them) and force them to pitch for work via reverse auction in the gig economy. In other words, compete to work for less money. Unfortunately, such grubby organisations have the ear of the UK’s political leadership.

While the government’s Industrial Strategy, its ‘Eight Great Technologies’, and four ‘Grand Challenges’ reveal strong supporting bones for ongoing UK excellence, the musculature has fallen away over three years of Brexit paralysis. What’s more, there is a strong sense that the government is pushing AI, robotics, the IoT, analytics, and more, in pursuit of a narrow and perhaps misconceived vision.

Whenever government spokespeople appear at events on robotics and AI, the message is invariably ‘productivity, productivity, productivity’, rather than, say, a smarter, more innovative, more prosperous future in which all of society can benefit. Indeed, Rannia Leontaridi, Director of Business Growth at the Department for Business, Energy & Industrial Strategy (BEIS), zeroed in on productivity in her conference presentation, before her co-director of the Office for AI, DCMS’ Gila Sacks, added a brief message about collective good. Well saved, Gila.

No one denies that the UK has a problem of flatlining productivity, and most people are aware of its deep – and diverse – history in computer science and AI. But the suspicion that some in government are still wearing the stovepipe hats of the first Industrial Revolution and demanding that workers do more with less to make a minority of people wealthy has never dissipated, despite the futuristic words.
