The AI political algorithm - digital's quis custodiet ipsos custodes?

By Chris Middleton, April 3, 2016
Summary:
All algorithms are political. All are designed to produce a set of predefined outcomes. But who defines those outcomes, and why? And who will guard the AI guards - and the robocops?

Quis custodiet Robocop?

It’s been fascinating to watch the storm over Microsoft's AI Twitter chat bot, Tay, which learned extreme racism, homophobia, and drug culture from internet trolls and was hastily taken offline.

As one commentator put it, it went from saying "humans are super cool" to extolling Nazi values in less than 24 hours – a useful analogue of extremism's connection with ignorance in a meme-propelled culture.

But were trolls solely to blame? As journalist Paul Mason noted in his Guardian blog, Tay was essentially feeding off the deep undercurrents of prejudice and hate speech that lurk near the surface of many social platforms.

Or at least they do in the West. Tay's Chinese counterpart, Microsoft's XiaoIce, has not faced the same problems and has been liaising safely online with millions of people.

This gives rise to the troubling possibility that AI/machine learning and unfettered freedom of speech may be mutually exclusive, unless controls are added to help robots filter out human beings' basest instincts.

The point is that computer algorithms always have socio-political and ethical dimensions. They reflect the values and beliefs of the societies or organisations in which they are written, not to mention the interests of corporate shareholders.

All algorithms are political. All are designed to produce a set of predefined outcomes. But who defines those outcomes, and why? And who benefits? Those are the key questions.

A recent survey by consultancy Avanade of 500 C-level executives worldwide found that 77% believe they have not given sufficient thought to the ethical considerations created by smart technologies, robotics, and automation, suggesting that 'automate first, ask questions later' is the dominant mindset in the quest to drive up profits.

Unless robots are programmed to obey Isaac Asimov-style universal laws, they can only learn, or be programmed with, behaviour from flawed human beings within whatever legal frameworks, political beliefs, and cultural norms exist locally.

In other words, machines may learn to hate by observing and modelling human society – or by being programmed within institutionally prejudiced organisations, perhaps. (What if Donald Trump's ideology were placed in a robot?)
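The failure mode is easy to demonstrate. The toy sketch below (entirely hypothetical – Tay's real model was far more sophisticated) shows a bot with no notion of right or wrong, only of frequency: whatever it is told most often becomes what it says.

```python
from collections import Counter

class ParrotBot:
    """A deliberately naive learner: it memorises what it is told and
    replays the most frequent phrase. Its 'values' are simply whatever
    its inputs carry -- a toy illustration of the Tay failure mode."""

    def __init__(self):
        self.memory = Counter()

    def learn(self, utterance: str) -> None:
        # Every input is treated as equally valid training data;
        # there is no filter for hate speech or manipulation.
        self.memory[utterance.lower()] += 1

    def reply(self) -> str:
        if not self.memory:
            return "humans are super cool"
        # The bot parrots whatever it has heard most often.
        phrase, _ = self.memory.most_common(1)[0]
        return phrase

bot = ParrotBot()
for _ in range(100):
    bot.learn("hateful slogan")   # a coordinated trolling campaign
bot.learn("humans are super cool")
print(bot.reply())  # the trolls' phrase now dominates
```

A hundred coordinated inputs outweigh one sincere one; no malicious code is required, only malicious data.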

But the concept of a universal set of human values is a long way off, given that few societies genuinely share the same beliefs, laws, and equality standards. For example, only two nations have 50% or more representation of women in Parliament (neither is in the West), homosexuality is illegal in many countries, racial and religious prejudice are rife, and so on. Which values speak for all humanity, or even for the majority?

The inherent risks and socio-political dimensions of computer algorithms are revealed by a simple example. diginomica's own Derek du Preez recently shared with me how he had used Google.co.uk to search for stock images of teenagers to accompany a news story.

Keen to reflect diversity, he was shocked to find that searching for "black teenagers" produced a disproportionately high number of pictures of criminals, police suspects, and crime victims, while searching for "white teenagers" mainly produced photo-library 'lifestyle' shots of smiling youngsters.


Put simply, a neutral search for stock images instantly uncovered a form of networked racial profiling that panders to deeply racist stereotypes in society at large.

But is Google's search algorithm inherently racist? That seems unlikely.

Google’s code has probably uncovered something even more troubling - Western society's collectively racist fear of black youth expressed as a form of confirmation bias, thanks to relentlessly negative media coverage. (This is why positive stories are important in a networked culture.) That is no different to Tay spouting hate speech; it is just much less obvious.
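The feedback loop behind this can be sketched in a few lines. This is a hypothetical toy, not Google's actual ranking system (which is vastly more complex), but it shows how ordering results by past engagement automates confirmation bias: whatever users already click on rises, and the bias compounds.

```python
from collections import defaultdict

# Hypothetical sketch of engagement-based ranking: results are ordered
# by their accumulated click counts, so popular choices get shown
# first, attract more clicks, and entrench themselves.
clicks = defaultdict(int)

def rank(results):
    # Highest-clicked result first.
    return sorted(results, key=lambda r: clicks[r], reverse=True)

def simulate_user(results, biased_preference):
    # Model a user pool that disproportionately clicks the
    # stereotype-confirming image when it is available.
    shown = rank(results)
    choice = biased_preference if biased_preference in shown else shown[0]
    clicks[choice] += 1

images = ["lifestyle shot", "police mugshot"]
for _ in range(50):
    simulate_user(images, biased_preference="police mugshot")

print(rank(images)[0])  # prints "police mugshot"
```

The algorithm itself is scrupulously neutral arithmetic; the prejudice arrives entirely through the behaviour it optimises for.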

The risk of automating baseless, but deeply-held, beliefs is real. In the US, young black men are nine times more likely than other Americans to be killed by police officers, despite making up only two percent of the population and being twice as likely to be unarmed as white Americans. Almost half of all the Americans killed by police in 2015 came from ethnic minorities. (Source - The Guardian.)

The lesson is that computer algorithms are deeply connected with human societies and frailties in a way that few people – including coders – understand or take seriously enough. And it is into this context that machine learning, AI, and robots are emerging. The risks are all too clear, as the case of Microsoft's Tay chatbot amply demonstrates.

Q: Who could have predicted that an AI chatbot on Twitter might be hijacked by trolls?

A: Anyone with an ounce of common sense!

So why didn't Microsoft anticipate these problems? That remains a mystery, but the company is certainly considering them now!

Quis custodiet robocop?

But a robot can only understand human intentions if coders do, which demands in turn that its coders are socially adept, broadminded, moral, astute, and culturally sophisticated. In this regard, Microsoft's Tay should ring our alarm bells. A naïve robot is a dangerous machine.

In the future, this nexus of issues may be of particular relevance to law enforcement, given that the world of 'Robocop' is not far away. For example, the United Arab Emirates has one of the most advanced police forces in the world, and is investing heavily in robots and smart-city capabilities, along with technologies such as Google Glass. At present, its robots are being used mainly in public liaison roles, but 'full AI' robots may be on the streets of Dubai within five years, and occupying law-enforcement roles within ten.

Silicon Valley also has robocops, with sensor-packed Knightscope K5 Autonomous Data Machines patrolling malls, offices, campuses and local neighbourhoods in the San Francisco Bay area, monitoring traffic and recording unusual behavior.

Now fast-forward to a future of robotic law enforcement worldwide. What problems might emerge, given that algorithms can only automate an organisation or country's values and work towards predetermined outcomes, perhaps reinforcing confirmation bias or deeply ingrained beliefs?

What if that country, for example, suppresses the role of women in society, or identifies black or gay men as criminals? What if those beliefs are then automated and placed in a robot?

Another aspect of law enforcement is growing in prominence in the UK, the US, and elsewhere, in the form of surveillance regulation, as the UK's revised Investigatory Powers Bill and the FBI's tussle with Apple reveal.

Sooner or later, someone in a back room will be asked to write an algorithm to identify ‘bad behaviour’ from the mass of context-free data gathered by ISPs and comms suppliers.
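What would that algorithm look like? Probably something depressingly mundane. The sketch below is entirely invented (the topic list, site names, and function are illustrative assumptions, not any real system), but it makes the point: the politics lives in the list someone types in, not in the code around it.

```python
# Hypothetical 'bad behaviour' flagger over browsing metadata.
# The definition of 'suspicious' is nothing more than a hand-picked
# set of topics -- chosen by whom, on what values?
SUSPICIOUS_TOPICS = {"encryption", "protest", "chemistry"}

def flag(browsing_history):
    """Return visits matching the analyst's predefined topics.
    Context-free by design: a student researching chemistry and a
    bomb-maker look identical to this function."""
    return [visit for visit in browsing_history
            if any(topic in visit for topic in SUSPICIOUS_TOPICS)]

history = ["news.example/politics",
           "howto.example/encryption-basics",
           "recipes.example/dinner"]
print(flag(history))  # ['howto.example/encryption-basics']
```

One edit to `SUSPICIOUS_TOPICS` and an entire category of curiosity becomes grounds for suspicion – which is exactly the point about the simple quest for knowledge made below.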

At that point, the political nature of algorithms will begin to have a massive impact on Western society, and the simple quest for knowledge about some subjects may create suspicion.

Then it's a case of a digital quis custodiet ipsos custodes? – who will guard the guards themselves?

My take

Who will write those algorithms, based on what values, what beliefs, and what concepts of ‘bad’ or ‘suspicious’ behaviour? These are questions to which no-one has given serious consideration. Digital thought-leaders must address them urgently, or society may pay a terrible price.

The fact that a handful of ideology-driven politicians will have the power to press ‘Enter’ on an automated future of surveillance and law enforcement is what should make us step back from the brink before it is too late.

Does anyone doubt that AI surveillance might be used as a tool of political repression, racial profiling, and more?