Rethinking AI Ethics - Asimov has a lot to answer for

Neil Raden, September 8, 2020
Summary:
Is the current obsession with AI Ethics doing any good? Maybe Asimov's Three Laws of Robotics weren't such a great starting point after all

(Image: AI ethics robot teaches Asimov's Three Laws © Andrey Suslov - Shutterstock)

Where did this concept of AI ‘Ethics' come from? Digital systems that caused great harm to people through injustice, discrimination, exclusion, privacy violations or just plain cheating, not to mention damage to the environment, have been with us for decades. Ethical issues in analytics and models did not arise with Big Data, Data Science or AI; they have been with us for a long time. Was there ever a COBOL Ethics, a DB2 Ethics, an ERP Ethics (well, maybe)?

This whole fascination with AI Ethics derives from, in my opinion, Isaac Asimov's Three Laws of Robotics. The laws were introduced in his 1942 short story Runaround (included in the 1950 collection I, Robot), although they had been foreshadowed in a few earlier stories. The Three Laws, quoted as being from the "Handbook of Robotics, 56th Edition, 2058 AD", are:

  • Law One — "A robot may not injure a human being or, through inaction, allow a human being to come to harm."
  • Law Two — "A robot must obey orders given to it by human beings except where such orders would conflict with the First Law."
  • Law Three — "A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law."
  • Asimov later added the "Zeroth Law," above all the others — "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

But Asimov didn't mean these as ‘laws'; they were merely plot devices for his novels. For example, what was meant by ‘human'? In I, Robot, which foreshadowed a problem we have now, the definition was subjective, allowing a robot to commit genocide depending on how ‘human' was defined. A continuing problem today with AI models is the incomplete representation of the population in the training data, which effectively defines ‘humans' as whatever happens to be in your sample, leading to ridiculous outcomes like Amazon's human resources system that was systematically biased against women, or consumer pulse oximeters that give incorrect readings for people with dark skin.
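To make that concrete: one lightweight guard against defining ‘humans' as whatever is in your sample is simply to compare the group shares in your training data against the population the model will affect. Here is a minimal sketch in Python; the column name, group labels and reference shares are hypothetical, and real reference figures would come from census or customer data rather than the made-up numbers below.

```python
# Minimal sketch: does the training sample resemble the affected population?
# The column name, group labels and reference shares are hypothetical.
import pandas as pd

def representation_gaps(df: pd.DataFrame, group_col: str,
                        reference_shares: dict) -> pd.DataFrame:
    """Compare each group's share of the training data with its share
    of the population the model will actually be used on."""
    sample_shares = df[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        observed = float(sample_shares.get(group, 0.0))
        rows.append({
            "group": group,
            "share_in_training_data": round(observed, 3),
            "share_in_population": expected,
            "ratio": round(observed / expected, 2) if expected else None,
        })
    return pd.DataFrame(rows)

if __name__ == "__main__":
    # Made-up example: 90% light skin tones in the sample vs. 70% in the population
    train = pd.DataFrame({"skin_tone": ["light"] * 90 + ["dark"] * 10})
    print(representation_gaps(train, "skin_tone", {"light": 0.7, "dark": 0.3}))
    # A ratio well below 1.0 flags an under-represented group before training starts.
```

None of this requires a philosophy course; it is a few lines of due diligence before the model is ever trained.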

So, Asimov wasn't serious. It was entertainment, with a good dollop of prescience. But there is another, more fundamental reason to dismiss it: the laws are logically inconsistent, and Asimov's stories built on them are full of contradictions. Whether that was deliberate from the start or simply literary license, we don't know. But what we do know is this:

  1. AI Ethics owes its intellectual origin to Asimov.
  2. The ‘laws' and the burst of energy around ‘ethics' are a continuing distraction from solving the problems caused by inferencing algorithms.

An AI ethics conversation with Esteban Kolsky

A few days ago, I mentioned, tongue-in-cheek, "I'm tired of AI Ethics. I've decided to become an Ancient Astronaut Theorist. I researched the requirements. There are none." This prompted a response from my colleague, Esteban Kolsky, CX Customer Service, Product & Strategy at SAP:

There are no requirements to do AI ethics either. Apparently... just saying — better the devil you know.

I wasn't entirely sure what he was getting at, so I replied:

If I can be serious for a moment, Esteban, the problem is the topic of ‘ethics' itself. It arose from Asimov's laws for robots, but as a subject it doesn't resonate with practitioners, who want to know how to avoid ethical traps in their work. ‘Principles' lack practical value, though there are some insidious ethical traps that aren't immediately obvious, so we focus on those. If an AI developer doesn't have a foundational concept of what's right and wrong, they shouldn't be in the business.

Or, alternatively: since AI applications should never be a singular effort, but rather the work of a team with a mix of skills, there should be a moral compass in there somewhere to keep things from running amok. Maybe it isn't reasonable to expect everyone involved in the effort to have a heightened sensitivity to unfairness in what they are building, but someone should.

Esteban's response:

I simply believe the attempt of positioning ethics as part of AI (and thus, anthropomorphizing a stupid machine, which they all are) is just a waste of time. Computers don't have ethics, and programmers don't have to build them — show your work is the only ethical standard we need, briefly...

As you say, there are things you know or not, but no one goes willingly into breaking ethical rules (we can have a discussion about certain characters in the industry later) in any facet of life. And if they do, they try to hide it, and if they do — and they show their work... well, you can figure out the rest. Too much adjudicating human traits to machines does not make them stop being machines, nor their output ‘intelligent.'

Point taken: computers don't have ethics, but I'm not sure who is anthropomorphizing machines unless he means Alexa et al. ‘Show your work' is the only ethical standard? That's a good point, but not universally effective. "No one goes willingly into breaking ethical rules." Really? Like Palantir? Like COMPAS? Plus, I think he may be overlooking the manifest problem of breaking ethical rules INADVERTENTLY.

There is a lot to unpack in that paragraph, but here is what I said:

A few years ago, there was value in learning not to train your model on data that didn't represent the affected population, in understanding the derangement of an ML model that isn't converging, or in not letting amateur data scientists loose on an ANN, but that information is everywhere now. There is still the issue of responsibility: when it screws up, who is responsible? And there are a whole bunch of things that may not rise to the level of ‘unethical' but are still important, like how resumé bots tend to overlook unique talent in favor of more generic characteristics, or how the use of FICO scores in car insurance acts as a regressive tax on the working poor. There is still much work to do.

To which Esteban replied:

Agree, but let's not call it ethics in AI, let's call it ethics in data scientists.

I'll follow up with Esteban. I wonder if we have some semantic dissonance: I see AI Ethics as a set of practices, while I think he may be reading it as ethics residing in the machine itself.

Here's another comment on one of my articles, from Victoria Whiteheart:

You know what is my concern about AI Ethics? There is a risk that it ends up like ethics in business: after more than 50 years of research and implementation, we haven't made much progress, except for covering the walls of companies [and] their annual reports with Codes of Ethics that nobody reads.

She's right. Enron, perhaps the most corrupt corporation in history, had a 65-page ethics manual.

My take

Esteban makes this point: AI per se is neither ethical nor unethical; what matters is how it is applied by practitioners.

But if an AI model is introduced into the market and routinely violates ethical norms, what is it? A neutrally ethical artifact produced by unethical people? Think about COMPAS, the judicial risk-assessment system that routinely made bail and sentencing recommendations two to three times more severe for African-Americans. That is clearly an unethical AI application.
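A disparity of that kind is exactly what a routine audit of model outputs should surface before deployment. As a minimal, hedged sketch (the column names and numbers here are invented for illustration, not COMPAS data), compare the rate of adverse decisions across groups and look at the ratio:

```python
# Minimal sketch of an outcome-disparity audit. Column names and numbers
# are invented for illustration; this is not COMPAS data.
import pandas as pd

def adverse_rate_by_group(results: pd.DataFrame, group_col: str,
                          adverse_col: str) -> pd.Series:
    """Rate of adverse decisions (e.g., 'high risk' score, bail denied) per group."""
    return results.groupby(group_col)[adverse_col].mean()

if __name__ == "__main__":
    results = pd.DataFrame({
        "group":   ["A"] * 100 + ["B"] * 100,
        "adverse": [1] * 20 + [0] * 80 + [1] * 55 + [0] * 45,  # 1 = adverse decision
    })
    rates = adverse_rate_by_group(results, "group", "adverse")
    print(rates)                       # A: 0.20, B: 0.55
    print(rates.max() / rates.min())   # 2.75x disparity -> investigate before release
```

A two- to three-times gap like this doesn't prove unethical intent, but it is the ‘show your work' evidence that should stop a release until someone can explain it.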

Instead of teaching people about the ethics of Plato, Aristotle and Kant and trying to draw a line from that to building AI applications, a better approach is to start by identifying developers who are simply good people. People who show forbearance, who have the backbone to resist their organization's directive to deliver the wrong thing. People who have the prudence to look beyond the results of a model and project how it will affect the world.

Only good people should develop AI.

A quick search turns up over a hundred AI Ethics proclamations from governments, non-profit special interest groups, government committees and the like, and they are consuming too much energy and bandwidth. In fact, the cynical view is that all of this is just a way to keep authorities from issuing rules for AI. Educational and consulting offerings are largely repetitious, and the promoters generally have no training in AI itself. Reading Aristotle and Asimov is not sufficient.

In my next article, I'll describe how my partner and I have reinvented a way to inject acceptable practices into AI development — and, as importantly, data management, a primary source of ethical gaffes. Surprisingly, we see the solution to AI Ethics in people, not principles.
