Robot ethics need to get specific, says Durham University

Chris Middleton, May 10, 2022
Summary:
Does the latest academic treatise on ethical robotics and AI really tell us anything new?

(Image of a robot hand touching a human hand, by Tumisu from Pixabay)

Ethical development in robotics and artificial intelligence is much debated, and the debate tends to centre on the vital need to avoid automating existing societal biases around ethnicity, gender, socioeconomic background, sexuality, age, beliefs, or even postcode/zipcode.

For example, AI systems may deny people jobs or insurance based on factors that have nothing to do with applicants’ skills or experience. Decades of human decision-making may have denied jobseekers from certain areas or groups employment opportunities. An AI system trained on that data might then spot those patterns of behaviour and automate such decisions, rejecting CVs in order to focus on the profiles that have historically been more successful.

To a coder, the logic might seem clear. But the danger is that bias or prejudice – whether intended or not – is given a veneer of computerized neutrality and thus becomes systemic. That bias may come from flawed data, assumptions, or design. But it may also come from a lack of diversity in development teams, creating a tendency to design systems that reflect the make-up of the team rather than of society.
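To make that mechanism concrete, here is a minimal illustrative sketch – not drawn from any system mentioned in this article – of how historic prejudice can end up encoded in a model. It uses scikit-learn and entirely synthetic data; the variables (a skill score, a “postcode B” flag) and the numbers are invented for the example.

```python
# Illustrative sketch only: how a model fitted to historic hiring decisions
# can reproduce a postcode bias that has nothing to do with skill.
# Assumes numpy and scikit-learn are installed; all data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(0, 1, n)          # genuine ability (standardized)
postcode_b = rng.integers(0, 2, n)   # 1 = applicant from "postcode B" (hypothetical)

# Historic human decisions: driven by skill, but postcode B applicants were
# penalized regardless of ability -- the bias we do not want to automate.
hired = (skill - 1.5 * postcode_b + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, postcode_b])
model = LogisticRegression().fit(X, hired)

# Two equally skilled applicants, differing only by postcode:
applicants = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(applicants)[:, 1])
# The postcode-B applicant scores markedly lower despite identical skill:
# the historic prejudice has become a model coefficient.
```

Nothing in the code “intends” to discriminate; the postcode penalty is simply inherited from the historic decisions the model was fitted to – which is exactly the veneer of computerized neutrality described above.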

As previously reported, Louise Sheridan, Deputy Director of the UK’s Centre for Data Ethics and Innovation (CDEI), believes that such fears can inhibit innovation as much as inform it. Speaking at a Westminster eForum on UK AI strategy last month, she observed:

Concerns about ethical risks still hold organisations back. So, we cannot innovate using data and AI without addressing ethical risks and developing data governance that is worthy of citizens’ trust.

Data gives us a powerful tool to see where bias is occurring and to measure whether our efforts to combat it are effective.

But of course, data can also make things worse. New forms of decision-making have surfaced numerous examples where algorithms have either entrenched or amplified historic biases, or indeed created new forms of bias or unfairness.

Also in the frame are other ethical questions, such as whether autonomous or automated machines should ever have agency over the lives of human beings, whether in business, finance, transport, and healthcare, or on the battlefield.

The latter inevitably gives rise to fear of the Terminator, so beloved of mainstream media commentators who want to whip up public anxiety about the future. Not the fictional metal-boned android as such, but any machine that has been programmed to kill, or has the autonomous capacity to do so.

Robots aren’t just coming to steal our jobs, the tabloids claim – despite evidence from the World Economic Forum and others that the net effect of Industry 4.0 technologies will be to create jobs and growth – but also to wipe out humanity. But is it true?

Alongside the use of drones and land-based autonomous vehicles, it is certainly the case that the American military, for example, is developing a broad range of robotic and AI-based systems through the DEVCOM Army Research Laboratory (ARL). Among its many areas of research are robots that can reason, learn, communicate, and question their human minders – whether they are soldiers on the battlefield or remote operators back at the command centre.

The ethical challenges of this are legion. In isolation, any one technological advance might be justified by logic, safety, or expedience. For example, it is much easier for a soldier to order a robot to move forward, turn left, or fire by voice than via a laptop and joystick – and, importantly, voice control would also give soldiers less heavy kit to carry.

But taken together, a lot of small technical advances add up to a big step towards creating a thinking, reasoning killing machine. And at that point, who or what would be responsible, and therefore accountable, for the decisions that future robot takes? Where does the buck stop ethically? Or are we creating systems in which the buck no longer exists, and everyone can blame someone else?

However, the core question is, what can we do about these issues, other than be paralyzed by indecision – not to mention by fear that others will develop advanced systems and not care about the ethics? After all, there is no universally accepted code for what constitutes good behaviour.

What do you mean, exactly? 

According to researchers from Durham University Business School, we need to avoid vague philosophical discussions and navel-gazing about ethics and create specific accountability guidelines for AI and robotics. A new paper from the university proposes a framework for ensuring that organizations which employ what it calls “AI robots” have these guidelines in place. It says:

AI robots are increasingly used to facilitate human activity in many industries, for instance, healthcare, educational settings, mobility, and the military, but must have accountability for their actions.

The problem is, hanging their research on the term “AI robot” is unhelpful and may even be counterproductive. It is far from clear whether they are talking about software or intelligent, autonomous hardware – of the kind that, frankly, doesn’t exist yet, at least not to a sophisticated degree.

According to an announcement from the university, Zsofia Toth, Associate Professor in Marketing and Management at Durham University Business School, alongside her colleagues Professors Robert Caruana, Thorsten Gruber, and Claudia Loebbecke, has developed four clusters of accountability to help identify the specific actors who are accountable for AI robots’ (sic) actions.

The paper says:

In the first cluster, ‘professional norms’, where AI robots are used for small, remedial, everyday tasks like heating or cleaning, robot design experts and customers take most responsibility for the appropriate use of the AI robots. 

In the second cluster, ‘business responsibility’ – where AI robots are used for difficult but basic tasks, such as mining or agriculture – a wider group of organizations bear the brunt of responsibility for AI robots.

In cluster three, ‘inter-institutional normativity’, where AI may make decisions with potential major consequences, such as healthcare management and crime-fighting – governmental and regulatory bodies should be increasingly involved in agreeing on specific guidelines.

While in the fourth cluster, ‘supra-territorial regulations’ – where AI is used on a global level, such as in the military, or driverless cars – a wide range of governmental bodies, regulators, firms, and experts hold accountability.

However, this comes with “high dispersal of accountability”, warns Durham.

“This does not imply that the AI robots usurp the role of ethical human decision-making, but it becomes increasingly complex to attribute the outcomes of AI robots’ use to specific individuals or organizations and thus these cases deserve special attention.”

My take

Well, quite. But didn’t we know this? The researchers hope that their framework offers insights and an approach for policy makers and governments to “place accountability on the actions of AI robots”. In reality, I’m not sure it moves us very far forward, beyond creating a useful breakdown of the various actors involved.
