AI's rising challenge to the core legal system

Chris Middleton, March 5, 2018
Summary:
Thou shalt kill? Ethics is an increasingly prominent topic of debate around AI, but what about the challenges that new tech poses to the established legal system as we understand it?

The rise of AI poses serious challenges to the legal system. Those challenges are different in kind from the ones the law has absorbed through its longstanding evolutionary approach to new technologies, such as e-commerce and IP protection.

That’s the view of Andrew Joint, managing partner and commercial technology partner at law firm Kemp Little, speaking at a Westminster eForum seminar last week in central London. The event, Artificial Intelligence and Robotics: Innovation, Funding, and Policy Priorities, brought together a range of speakers from academia, business, and government to discuss the challenges facing the UK across these booming tech sectors.

A core area of discussion was the ways in which AI poses not only overarching ethical challenges to human society – which have been explored in a previous diginomica report – but also practical, day-to-day legal ones.

Despite their routine nature, these challenges may sometimes strike at fundamental legal concepts and principles, said Joint, because many laws are a “codification of our moral and ethical standpoints”, notably in areas such as duty of care, criminal liability, and criminal convictions.

Joint suggested that the rise of some applications of AI, such as robot doctors and AI medical diagnoses, may call these longstanding legal principles into question. It may be hard to establish liability when errors are made by autonomous or AI-driven systems, and unclear whether a robot or computer can itself be said to owe a duty of care in the way a doctor does.

Whodunnit?

If a robot makes the wrong medical diagnosis, or a driverless car kills its passengers, then who is responsible? For example, might the error lie in the programming, or in the training data? And does responsibility lie with the software vendor or its customer? Or might the error be completely unrelated to the AI, and come down to mechanical failure, a malfunctioning sensor, or a lost network connection?

To these questions, Joint – sensibly – offered no answer. His point was to explain the magnitude of the problem facing the legal system over the next few years.

Another challenge lies in AI’s predictive abilities: the potential for organisations to use AI to predict things about a person that the data subject may be unaware of, or may not have consented to.

For example, experiments have already been carried out to determine if an AI system can predict someone’s sexuality based on a photograph, and separately to predict the likelihood of someone having a variety of different medical conditions.

Questions then arise. Does any enterprise have a right to this information without the subject’s knowledge or informed consent? What law prevents an organization from simply Googling a person and then applying AI to whatever data or images it finds? And, perhaps most seriously, what if the algorithm makes the wrong prediction, or is statistically right only a percentage of the time?

The potential use of such algorithms by insurance companies or mortgage lenders could lead to people being denied services without having any idea of why this may have happened.

This is a separate – if related – concern to the risk of AI systems automating bias and discrimination (because of flaws in training data, or in precedent data that has been gathered in a biased system).
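Neither speaker went into the mechanics, but the way such bias surfaces can be sketched in a few lines. The Python fragment below is a hypothetical illustration, not a description of any real system: it computes approval rates per group from historical decision data and applies the widely cited “four-fifths rule” as a rough warning threshold. The data, group labels, and function names are all invented for the example.

```python
from collections import defaultdict

# Hypothetical historical decisions: (group, outcome) pairs drawn from
# precedent data. In a biased system, the precedent itself carries the bias.
decisions = [
    ("group_a", "approved"), ("group_a", "approved"), ("group_a", "denied"),
    ("group_b", "denied"),   ("group_b", "denied"),   ("group_b", "approved"),
]

def approval_rates(records):
    """Approval rate per group in the historical decision data."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        if outcome == "approved":
            approved[group] += 1
    return {group: approved[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group approval rate.
    Values below 0.8 are commonly treated as a warning sign (the 'four-fifths rule')."""
    return min(rates.values()) / max(rates.values())

rates = approval_rates(decisions)
print(rates)  # approval rate per group in the historical data
print(f"disparate impact ratio: {disparate_impact_ratio(rates):.2f}")
```

A model trained on that precedent data would simply learn the same skew. A check like this can surface the pattern, but deciding what counts as fair treatment remains the legal and ethical question the speakers were raising.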

You didn’t see that coming

Joint accepted that these are growing problems that the law will need to solve, but suggested that organizations won’t be able to hide behind their AI systems for long:

A wave is building momentum. There is going to be rising demand for people to be able to show their workings in terms of how decisions were made, how predictions were reached, and what was done with data at a certain point.

I’m not a technologist and I’ve never written a line of code in my life, but my understanding from clients I work with who have diagnostic tools is that this will be the incredibly difficult thing: how they can build something that can show its workings, such as why it made a particular diagnosis at a certain point.

They’re going to need, from a data privacy point of view, to be able to explain what happened with somebody’s data, and demonstrate why decisions were made in relation to that data, and why predictions were made. People are aware of this challenge, but the interesting thing is what level of detail, auditability, and explanation is going to satisfy us [in the legal sector].
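Joint did not specify how such systems should be built, but the general shape of “showing your workings” can be sketched. The Python fragment below is a minimal, hypothetical illustration assuming a simple linear scoring model: for each prediction it records which data was used, which model version produced the result, and how each input contributed to the score, so the decision can be reconstructed and explained later. The model, weights, and field names are all invented for the example.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical weights for a simple linear risk-scoring model. A real
# diagnostic or credit model would be far more complex; the point is only
# that each input's contribution to the output is recorded.
MODEL_VERSION = "risk-model-0.1"
WEIGHTS = {"age": 0.02, "blood_pressure": 0.015, "smoker": 0.5}

@dataclass
class AuditRecord:
    """Everything needed to explain one automated decision after the fact."""
    subject_ref: str      # pseudonymised reference, not raw personal data
    model_version: str
    inputs: dict
    contributions: dict   # per-feature share of the final score
    score: float
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def predict_with_audit(subject_id: str, features: dict) -> AuditRecord:
    # Pseudonymise the subject identifier before it goes into the log.
    subject_ref = hashlib.sha256(subject_id.encode()).hexdigest()[:16]
    # For a linear model, "showing its workings" is literal: the score is
    # simply the sum of the per-feature contributions recorded here.
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    return AuditRecord(subject_ref, MODEL_VERSION, features, contributions, score)

if __name__ == "__main__":
    record = predict_with_audit(
        "patient-1234", {"age": 54, "blood_pressure": 140, "smoker": 1}
    )
    # Persist the record as an append-only log entry for later review.
    print(json.dumps(asdict(record), indent=2))
```

The point of the sketch is the audit record rather than the model: whether the underlying system is a linear score or a deep network, the open question Joint raises is what level of detail in a record like this would satisfy a regulator or a court.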

Thou shalt kill?

But what if an autonomous system is designed to kill? This was the specialist legal area of Richard Moyes, Managing Director of Article 36, a not-for-profit organization working to prevent the unintended, unnecessary, or unacceptable harm caused by weapons systems. (The name refers to article 36 of the 1977 Additional Protocol I of the Geneva Conventions, which requires states to review new weapons, means, and methods of warfare.)

The Terminator – the clichéd media image of autonomous weapons – is “very far from what we’re talking about”, said Moyes. The core issue for Article 36 is the use of sensors in weapons, and the autonomous systems that may arise from these applications – systems that are capable of “identifying, selecting, and applying force to targets”.

One of the challenges in this area is that each of the steps towards a technology outcome might seem reasonable in isolation, but the end result may be morally questionable. There are numerous drivers behind the move towards autonomous systems in the military, Moyes suggested: for example, the need for smart weapons to react quickly in an attack, and the need to protect our own troops.

However, within these are significant moral hazards, he said. One is the language used: how targets are labelled within lines of computer code:

What is the way we code an object or person as a reasonable target of attack? There are all sorts of moral issues bound up with that.

But the more pressing issue, said Moyes, is the “dilution of human control, and therefore of human moral agency”:

The more we see these discussions taking place, the more we see a stretching of the legal framework, as the existing legal framework gets reinterpreted in ways that enable greater use of machine decision-making, where previously human decision-making would have been assumed.

In other words, the core ethical dilemmas in military applications of AI are no different to those in other walks of life. The more systems become autonomous – or are applied to people’s data in order to make decisions about them – the less human beings are involved, and the more difficult any application or interpretation of existing law then becomes.

So what’s the solution? Moyes’ organisation believes that the answer is to create an obligation for “meaningful human control”. He said:

That doesn’t mean absolute control over every aspect of an attack, but there needs to be a sufficient form of human control for us to feel that a commander has a predictable understanding of what’s going to happen. And also that they can be reasonably held accountable for the actions that are undertaken.

My take

At the heart of this debate, then, is the simple search for someone to blame, which suggests that the real moral hazard in AI is organizations deploying technology in order to abdicate their own responsibility.

The law is a set of rules based on deep historical data sets, and so – on the surface at least – would appear to be the ideal candidate for AI and automation. But the law is also subtle, constantly evolving, nuanced, and human. And that means it is also flawed.

For example, few would look at the US or UK legal systems and conclude that they have historically treated ethnic minorities fairly. However, decades of precedent data are now populating the AI systems used in sentencing advice, such as the COMPAS algorithm in the US, which has been found to discriminate against black Americans.

But AI and automation pose yet another challenge to our legal system: these technologies are removing the first rungs on the career ladder.

Kemp Little’s Joint explained that, when he was a junior lawyer, his job was to read masses of case histories – a task that can now be completely automated. The problem is that this makes learning the ropes of a profession at junior level much harder for human beings.

This widening of the gulf between trainee and experienced expert is being caused by AI, and yet we are now asking AI to fill that gap and help us make critical decisions.

The long-term risk is clear: a system that is running further and further ahead of human control, without many people having a clear idea who is in charge, or whose responsibility it may be when something goes wrong.
