AI and the need for a rational ethical debate

Cath Everett, November 19, 2015
Summary:
It may still only be early days for the adoption of artificial intelligence software, but it is already raising a number of ethical questions that will have to be tackled at the technical, regulatory and societal level.

Technology is neither good nor bad; nor is it neutral.

This is the first of six “laws” expounded by Dr Melvin Kranzberg, a professor of the history of technology at the Georgia Institute of Technology, in 1985.

And what it essentially means is that the ultimate uses to which technology is put are often quite different from the original intention, sometimes leading to far-reaching and unforeseen human, societal and environmental consequences.

Nowhere is this truer, though, than in relation to today’s hottest of hot topics, artificial intelligence (AI). Although AI is a portmanteau term covering many different types of underlying technology, ranging from natural language processing to neural networks, it essentially refers to software that learns what to do in prescribed scenarios based on patterns found in vast quantities of data.

As a result, it can be trained to act in a certain way and provide particular responses based on the information it analyses and interprets. A program developed by the Faculty of Applied Sciences at the University of Sunderland, for instance, identifies potential lesions in the retinas of diabetic patients to help clinicians diagnose macular degeneration.
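
By way of illustration of that pattern-learning approach, the hypothetical Python sketch below (not the Sunderland system itself, and using entirely synthetic data and made-up features) trains a classifier on labelled historical examples and then uses it only to score new cases for a human to review:

# Hypothetical sketch of the pattern-learning approach described above:
# a classifier is trained on labelled historical examples and then used
# to flag new cases for human review. All data and features are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: rows of numeric image-derived features, plus a label saying
# whether a clinician marked a potential lesion in that example.
X = rng.normal(size=(1000, 8))                  # 1,000 examples, 8 features each
y = (X[:, 0] + 0.5 * X[:, 3] > 1).astype(int)   # synthetic labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)   # "learning" here means finding patterns in the data

# The trained model only assists: it scores new cases, the clinician decides.
print("held-out accuracy:", model.score(X_test, y_test))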

Another application, which takes the form of an avatar and is called Amelia, was developed by AI software provider IPsoft and is currently being trialled by a handful of big corporations such as NTT Group, Shell Oil and Accenture to answer call centre queries.

All of this is, of course, a million miles away from the dystopian futures laid out by Hollywood movie-makers, in which people struggle against the rise of machines that, in one way or another, attempt to either dominate or replace them.

But that is not to say that the use of such technology, even in as nascent a market as today’s, comes without ethical risks. One of the most discussed, at both the governmental and public level, relates to job losses. This debate took off following an Oxford University study in 2013, which forecast that nearly half of US positions were likely to be automated away over the next two decades - a claim that sent frissons of fear down many people’s spines.

Employment issues

In the same way that computerisation has done away with many a blue-collar role over the last 30 years, the argument goes, AI and robotic process automation will have a similar impact on the labour-intensive, admin side of white-collar employment in areas ranging from law and finance to IT.

Some believe that this situation will simply lead to the creation of new, more highly skilled roles – for instance, who would have believed in the existence of digital marketers even 20 years ago? But others forecast that AI could spark off a second Industrial Revolution, creating massive upheaval in the process not only in employment but also in wider societal terms.

A related and important point here is that, in most instances, the distribution of wealth today is based on employment and the income derived from it, although some redistribution goes on in the form of taxes, benefits and the like. But if lots of today’s jobs no longer exist tomorrow, the question becomes how to maintain a stable society in which everyone benefits. As Frank Lansink, European chief executive for IPsoft, points out:

Jobs being displaced by cognitive intelligence has its positive side in that it’ll free people up from mundane chores to do more high value tasks. But if we don’t regulate for it, we could end up with a polarisation of society, where 30% of people are employed doing that and the rest work for them as personal trainers, day care assistants and the like - and so the middle class completely disappears.

This scenario would necessitate the creation of new systems to help people work in more flexible and mobile ways, but it would also require that wealth be distributed in a different fashion too. Lansink explains:

To distribute wealth on the basis of simply being a citizen and not necessarily because people are employed or not – we may need to think about that or risk generating a lot of turbulence in society as we could create a massive group of people who don’t see a future or way of having a meaningful life.

Other fears relate to potential privacy and confidentiality abuses and a lack of informed consent around how these systems are used. For instance, Professor John Rust, director of the Psychometrics Centre at Cambridge University, points out that AI systems can already predict things like personality, feelings, IQ and personal interests more accurately than humans.

Control issues

But such information, while it can be employed for good, for instance by retailers to improve customer service, also has the potential to be misused by both commercial and political organisations as a means of controlling individuals in ways possibly not even thought of yet. Rust explains:

It’s about prediction and control and the advance of science to find out more about people, which isn’t necessarily a good thing what with all the pressure coming from military intelligence. Rather than developing robots to help mankind, they’re developing military drones that kill.

There are similar ethical concerns around the potential abuse of AI-based systems for state-level surveillance and citizen profiling in order to control the population à la Big Brother. Ozel Christo, chief executive and co-founder of AI software application specialist Neokami, explains:

It’s about connecting data sources, which has already been done to a certain extent with the move to the “know your customer” 360-degree view of things. But it’s not just about AI – it’s also about big data, freely available computing power that’s working to Moore’s Law, more storage capacity and the Internet of Things all combining together. It could create a scary future, which means we have to consider the ethical issues now.

According to ex-Autonomy chief Mike Lynch, though, there are concerns not just about Big Brother but also about “Little Brother”. Lynch, who set up Invoke Capital, which funds AI-based cybersecurity start-up Darktrace, describes Little Brother as ad hoc groupings of people that use technologies such as AI, cloud and the like en masse. Lynch says:

Picture a group of animal rights extremists using automatic number plate recognition apps in combination with social media – they could potentially identify certain cars and track them around a city. The potential power of Little Brother is greatly underestimated.

Regulatory issues

Then there is the ability of intelligent machines to undertake risk assessments, the results of which, although based on data, may come to unacceptable conclusions. Lynch explains:

For example, in the past, when particular foreign organised crime groups were undertaking mass mortgage fraud, if you asked an AI system to spot which applications were fraudulent from a pile of thousands, the machine may – in a perfectly unbiased way – tell us that individuals from certain ethnic groups are more likely to commit mortgage fraud. Of course, that is completely unacceptable as a way of screening mortgage applications, so we have to make sure that we are fitting AI into our ethical guidelines.
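
One partial technical safeguard against the kind of screening Lynch describes is to withhold protected attributes from the model altogether. The Python sketch below is purely illustrative, with invented column names and toy data, and it is far from sufficient on its own, since proxy variables that correlate with the excluded attribute can reintroduce the same bias:

# Illustrative sketch only: excluding a protected attribute before training,
# one partial safeguard against the kind of screening Lynch describes.
# Column names and values are invented for the example.
import pandas as pd
from sklearn.linear_model import LogisticRegression

applications = pd.DataFrame({
    "income":         [35000, 82000, 41000, 120000],
    "loan_amount":    [150000, 300000, 90000, 450000],
    "prior_defaults": [0, 1, 0, 2],
    "ethnic_group":   ["A", "B", "A", "C"],   # protected attribute
    "fraudulent":     [0, 1, 0, 1],           # historical outcome label
})

PROTECTED_ATTRIBUTES = ["ethnic_group"]

# The model never sees the protected attribute, only the remaining features.
features = applications.drop(columns=PROTECTED_ATTRIBUTES + ["fraudulent"])
labels = applications["fraudulent"]

model = LogisticRegression().fit(features, labels)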

But there are also concerns at a more technical level. For instance, Pete Trainor, director of human-centred design at consultancy Nexus CX, which is implementing an AI customer service system for a UK high street bank, points out that such systems are essentially “immature” if fed fewer than 10 million customer records. This means they have not been given enough data to learn how to perform their assigned task effectively, so they can do little more than basic analytics. Trainor says:

The risk onus is on the person who coded the system not to release it before it’s ready. But there aren’t any real checks and balances to ensure that it’s an adult rather than a teenager, apart from major companies not wanting to risk being sued and suffer reputational damage.
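
The “checks and balances” Trainor alludes to could, at their simplest, take the form of an explicit release gate. The sketch below is hypothetical, and its thresholds are arbitrary placeholders rather than anyone’s actual acceptance criteria:

# Hypothetical release gate of the kind Trainor alludes to; the thresholds
# are arbitrary placeholders, not any vendor's real acceptance criteria.
MIN_TRAINING_RECORDS = 10_000_000   # echoes Trainor's "immature below 10 million" point
MIN_HOLDOUT_ACCURACY = 0.95         # invented quality bar for the example

def ready_for_release(num_training_records: int, holdout_accuracy: float) -> bool:
    """Release only if the model has seen enough data and performs well on
    records it was never trained on."""
    return (num_training_records >= MIN_TRAINING_RECORDS
            and holdout_accuracy >= MIN_HOLDOUT_ACCURACY)

if not ready_for_release(num_training_records=2_500_000, holdout_accuracy=0.91):
    print("Model is still a teenager - keep it in analytics-only mode.")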

It is also currently unclear in law who would be responsible if things went wrong, and whether it would be the vendor, the programmer or even the machine itself that would be held liable. As Andrew Joint, commercial technology partner at technology and digital media law firm Kemp Little, points out:

Stretching current legal principles to cover things like the internet and cloud computing was successful in the past, but we’re starting to apply them to what feels more like revolutionary than evolutionary technology. My biggest worry is that we’re drifting. There’s been very little in relation to government-led regulatory discussion to date and much of it so far has been led by lawyers. But it’s more of a societal and philosophical question that should be dealt with at the political and society level.

Moving the debate out of the sphere of academia and into the real world, he believes, will most likely require some kind of “newsworthy event”, such as a driverless car knocking someone over and killing them.

One organisation that is trying to tackle the issue of ethics, however, is the British Standards Institution. It has just put its “BS 8611 Robots and robotic devices – Guide to ethical design and application of robots and robotic systems” report out for public comment, with the aim of developing formal, if non-legally binding, standards in the area.

My take

Although it is still early days for the widespread adoption of AI technology, it would seem sensible to start a debate now on creating ethical frameworks for how to use, apply and administer this kind of software, which could well act as a catalyst for deep societal change.

As the University of Cambridge’s Rust points out, it was eugenics and the excesses of the Second World War that shaped today’s ethical frameworks around medicine. So better that those around AI are forged through enlightened debate and discussion rather than in a similar devastating furnace.
