When Isaac Asimov wrote his Three Laws of Robotics in 1942 (in the short story Runaround), he wasn't to know how inadequate they would be in the 21st Century.
While Asimov's first law, ‘A robot may not injure a human being or, through inaction, allow a human being to come to harm' might seem watertight, its apparent simplicity hides ethical challenges in a messy, complex world. What if a crashing driverless car opts to hit a child rather than a group of adults, or a woman rather than a man?
And what if the harm is psychological? There you are, lying on the sofa, innocently spooning a tub of Rocky Road into your mouth, and the robot vacuuming your apartment tells you you're fat and should run around the park. Outrageous. (Seriously, though: get some exercise.)
Some have argued that Asimov's second law, ‘A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law', would be unethical in a future world of advanced, self-aware machines - an area of debate guaranteed to spark outrage. So how did we get here?
The drama that coined the word ‘robot' for Western audiences, Karel Čapek's play R.U.R. (Rossumovi Univerzální Roboti, or Rossum's Universal Robots), took it from the Czech word for forced labour. That play concerned intelligent androids that serve humanity to keep costs down and allow people to pursue a life of leisure. A century on, we are still debating the issues it raised, including: at what point does a device become sentient, with rights, feelings, and ambitions of its own? And what would that mean for us?
Echoes of this idea can be heard across 100 years of sci-fi. And of course, a play written at the dawn of Modernism and the machine age was also a coded examination of slavery and the exploitation of low-paid factory workers. Fritz Lang's Metropolis followed soon after.
Asimov's third law, ‘A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws', reinforces the idea of a permanently ordered universe with humans at the apex, say anxious roboticists. Most other people would say, "What's wrong with that?", and the Terminator trope so beloved of tabloids comes into play yet again as an expression of our fear of intelligent machines.
Yet fast forward through a century of robots to 2021, and the reality is that humans, not robogeddon, remain the biggest threat to human rights and the planet. While industrial robots make cars and circuit boards, autonomous machines patrol rigs, and humanoid machines lack meaningful application, most robots today are software - lines of code, not squads of shape-shifting metalheads with guns.
‘Digital employees' with AI and machine learning elements automate repetitive tasks so humans have more time to empathise with customers, have Socratic thoughts about the universe, and definitely not get fired to slash costs, OK? In that world of automated applications Asimov's laws just won't do. We need something less vague and more comprehensive.
The real point is this: humans are far more likely to deploy bots to harm other humans than the bots are to go on the rampage themselves. In 2021, an unscrupulous organisation might use robotic process automation (RPA) to automate fraud or reject job applicants from certain postcodes or zipcodes.
Hypothetical scenarios? Sadly not, says Oded Karev, Head of RPA at automation software company NICE (a reassuring name). Karev has drawn up a Robo-Ethical Framework - five new ‘laws' for the software robot age (see below). The aim is not so much to prevent bots from harming humans as to stop humans from harming others with robots, or from automating corporate crime. He explains:
It came from different directions. One of them was questions we got from our customers. A bank employee wants to programme the robots to use a username and password. Now, this employee has access to retail banking accounts. So, there's the classical fraud of going to every account and transferring one cent out of it, running 24 x 7. It might take weeks before you find out that millions of dollars have been transferred to the Cayman Islands.
So how do you make sure of the governance, that robots are not used for fraud but for something positive, because these robots have no judgement themselves?
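The cent-skimming scam Karev describes leaves a distinctive statistical fingerprint, and even simple monitoring can surface it. Below is a minimal, illustrative sketch - the thresholds, field layout, and function name are invented for this article, not taken from NICE's product - that flags destination accounts receiving an unusually high number of tiny transfers:

```python
from collections import defaultdict

def flag_micro_skimming(transfers, min_count=1000, max_amount=0.05):
    """Flag destination accounts that receive many tiny transfers.

    `transfers` is a list of (source, destination, amount) tuples.
    The thresholds are illustrative, not from any real system.
    Returns {destination: (count, total)} for suspicious destinations.
    """
    totals = defaultdict(lambda: [0, 0.0])  # destination -> [count, sum]
    for _src, dest, amount in transfers:
        if amount <= max_amount:
            stats = totals[dest]
            stats[0] += 1
            stats[1] += amount
    return {dest: tuple(stats) for dest, stats in totals.items()
            if stats[0] >= min_count}
```

A robot running such a process 24x7 would raise the alarm in hours rather than weeks - but only if a human thought to write the rule.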
It goes without saying that any customer question about how to stop robo-crime might be a coded enquiry about how to commit it. A software CEO once told me that an insurance client of his was worried a new system might prevent fraud. Why? Because the insurer's top salesman was indeed syphoning off cents into fake accounts but brought in far more dollars to the business than he was stealing. The insurer didn't want to lose the ambitious crook, so could the software provider make the system easier to game? "Anyone talented enough to defraud the company is the kind of guy we want!" the insurer told the CEO.
Injustice ex machina
Stories like this are an industry cliché. Some may be true, but Karev shares another example from a recent US customer - one that actually happened, and that demonstrates the risk of automating bias that ethicists have been warning us about for years. He says:
A company used robots to screen CVs and started disqualifying applicants from specific neighbourhoods, because statistically a lot of people from those neighbourhoods had not qualified previously. So why review their CVs? From the robot's standpoint, it's way more effective not to look at the CVs of these people if there is a statistically higher chance of finding better quality candidates from another area.
The subtext here is that unsuccessful applicants identified in historic data might not have been screened out by humans in the past because they weren't qualified or experienced, but because of ethnic, socio-economic, gender, or other biases. That data then becomes available to bots and AIs today, which interpret it as evidenced statistical fact. That's when prejudice becomes invisibly automated and entire sections of the populace are denied opportunity.
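To see how a proxy like postcode can quietly automate past prejudice, consider this deliberately naive screening rule (the data and field names are invented for illustration). It never sees a protected attribute, yet it reproduces whatever bias shaped the historic hire rates:

```python
# Invented historic screening outcomes, in which postcode acted as a
# proxy for a protected attribute (all values are illustrative).
history = [
    {"postcode": "A1", "hired": True},
    {"postcode": "A1", "hired": True},
    {"postcode": "B2", "hired": False},
    {"postcode": "B2", "hired": False},
]

def naive_screen(history, candidate, threshold=0.5):
    """Screen a candidate purely on the historic hire rate of their
    postcode - the pattern the article warns against. Candidates from
    postcodes with no history default to the threshold (pass)."""
    matches = [h["hired"] for h in history
               if h["postcode"] == candidate["postcode"]]
    rate = sum(matches) / len(matches) if matches else threshold
    return rate >= threshold
```

A strong candidate from postcode B2 is rejected sight unseen, purely because earlier applicants from B2 were turned away - the "evidenced statistical fact" is nothing more than yesterday's bias replayed at machine speed.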
It's a problem that has afflicted automation in the US legal system, for example. The COMPAS sentencing-advice algorithm used by judges to inform decisions has, in the past, been found to penalise black Americans with more severe sentences or favour white ones with greater leniency.
Again, that's because data from decades of unequal treatment in the courts were interpreted by bots as evidence that a certain black male would be more likely to reoffend than a given white one, even if in some cases the opposite proved to be true. A system rigged across years of flawed judgements, right back to the age of racial segregation.
Naturally, as a society, it's not something we want to do; we want everyone to get a fair chance, and we want to screen based on more criteria. So, we understood that there has to be an overarching framework around what we call ‘robotic anxiety'.
He believes the solution is NICE's five-point ethical watchlist.
The Five Laws of Software Robotics
More accurately, the NICE RPA Robo-Ethical Framework (which we have edited to remove brand-specific statements).
- 1. Robots must be designed for positive impact: With consideration to societal, economic, and environmental impacts, every project that involves robots should have at least one positive rationale clearly defined.
But surely, any RPA project could be said to have an economic rationale. Doesn't that risk negating the societal or environmental ones? Karev says:
Economic impact is not a negative thing. Let's say we put in a robot that is saving an organization time, so instead of executing a process with 10 people, you can execute with eight. What do you do with that spare capacity?
Some customers will always say, ‘I'm going to fire a few agents from the call centre, and I'm going to take these savings directly on the bottom line.' But others will say, ‘I'm going to shorten the queue in my IVR, so people spend less time waiting before someone picks up the phone.'
I had a customer who said, ‘I told all my agents, ‘Don't care about how much time you spend on the phone [now that many processes are automated], because it is pure engagement and customer experience. So be empathetic.' They believe that if they provide a great customer experience over the phone, they will have better retention, which means more customers recommending them. So, the economic value is positive.
- 2. Robots must be designed to disregard group identities
To reduce the possibility of biased decision-making, robots should not consider personal attributes, such as colour, religion, sex, gender, age, or any other protected status. Training algorithms should be evaluated and tested periodically to ensure they are free from bias. Karev says:
I gave you the example of the CVs. I can also give you one of a large insurance company, which uses a robot to review claims. Say a claim comes in and you decide, ‘Below this amount, just the processing time is going to be too expensive, so I'm going to pay it.' But wait a minute, have the robots check. Say this customer is claiming $100 every month, so look at his history. Is he on a fraud list?
Up to now, this has been straightforward because a human wrote the rules and they can be governed. But the moment you say, ‘OK, let's have machine learning do that' is where the risk of racial profiling, or bias by name, social status, gender, and so on, comes into play.
Look at the claim, don't look at the gender or race.
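Karev's claims example can be expressed as explicit, human-readable rules. The sketch below is an assumption-laden illustration (the field names, thresholds, and `redact` step are all invented): protected attributes are stripped before any decision logic runs, which is exactly what makes a rule-based approach governable in a way a learned model is not by default:

```python
# Attribute names are illustrative, loosely following the framework's
# list of protected statuses.
PROTECTED = {"race", "colour", "religion", "sex", "gender", "age"}

def redact(record):
    """Drop protected attributes before a record reaches decision logic."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

def auto_pay(claim, limit=100.0, fraud_list=frozenset()):
    """Rule-based check in the spirit of Karev's example: small claims
    are paid automatically unless the claimant is on a fraud list."""
    claim = redact(claim)
    if claim["claimant_id"] in fraud_list:
        return "review"
    return "pay" if claim["amount"] <= limit else "review"
```

The point is not these particular rules but that a human wrote them and can read them back, line by line.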
- 3. Robots must be designed to minimize the risk of individual harm
To avoid harm to individuals, humans should choose whether, and how, to delegate decisions to robots. The algorithms, processes, and decisions embedded within robots should be transparent, with the ability to explain conclusions with unambiguous rationale.
Accordingly, humans must be able to audit a robot's processes and decisions. If a robot causes harm to an individual, a human must be able to intervene to redress the system and prevent future offence.
This is about the ability to audit, to look back at how decisions are delegated to robots - because robots can take decisions that they don't understand.
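What the "ability to audit" might mean in practice: every automated decision is logged with its inputs, its rationale, and the human who authored the rule. The schema below is a sketch under those assumptions, not NICE's actual audit format:

```python
import datetime

AUDIT_LOG = []  # in a real system this would be durable, append-only storage

def decide_and_log(rule_name, rule_author, inputs, decision, rationale):
    """Record an automated decision with enough context for a human
    audit: which rule fired, who wrote it, what it saw, and why it
    concluded what it did."""
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "rule": rule_name,
        "author": rule_author,
        "inputs": inputs,
        "decision": decision,
        "rationale": rationale,
    })
    return decision
```

A trail like this is what lets a human intervene, "redress the system and prevent future offence", as the law above requires.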
- 4. Robots must be trained and function on verified data sources
Robots should only act based upon verified data from known and trusted sources. Data sources used for training algorithms should be maintained with the ability to reference the original source.
But as we've seen, known and trusted sources might have been gleaned from biased processes or by unfair, untrustworthy means. Karev acknowledges this, saying:
Data can never be 100% accurate and it can never be 100% up to date. The idea is that the data sources you use are not tampered with and not biased. You need to make sure that you feed the data impartially, to the best of your knowledge and the best of your ability.
- 5. Robots must be designed with governance and control
Humans should be informed of a system's capabilities and limitations. A robotics platform should be designed to protect against abuse of power and illegal access by limiting, proactively monitoring, and authenticating any access to the platform and every type of edit action in the system.
In short, don't assume using robots means abdicating human control, oversight, and responsibility. Digital employees still represent your organization's values.
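The clause about limiting and authenticating access maps naturally onto deny-by-default, role-based permissions. A minimal sketch, with the roles and action names invented for illustration:

```python
# Deny-by-default role-based access for an RPA platform's edit actions.
# Roles and actions are illustrative, not from any real product.
PERMISSIONS = {
    "viewer": {"view"},
    "developer": {"view", "edit_rule"},
    "admin": {"view", "edit_rule", "deploy", "manage_users"},
}

def authorize(role, action):
    """Grant an action only if the role explicitly lists it; unknown
    roles and unlisted actions are denied by default."""
    return action in PERMISSIONS.get(role, set())
```

Combined with the audit trail from the third law, this is what turns "governance and control" from a slogan into something checkable.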
Good rules and a worthy endeavour. But Karev admits that, while NICE adheres to these principles itself and pushes them to its customers, they are merely a voluntary code. As such, there's a limit to how far ethics can be coded into the products themselves and, at the end of the day, customers can simply disregard the rules.
I have no control over how my customers use the robots. But the first and foremost thing is to make sure that my software supports the ability to execute or follow these rules. Our robots are designed with governance and control in mind, which is key to having an audit trail to see who did what, to have roll-back and version control. Every decision can be drawn back to the human who wrote the rules.
As ever with robots, having a responsible human in the loop is essential.