Once again, calls to tame the threat from robots have led to headlines evoking the Terminator film franchise. The notion that humans are creating a new species that will turn on us and wipe us out is one that seizes the popular imagination. But it's science fiction.
The fact is, we don't need to create a new race of super-intelligent robots to wipe out the human race. We're perfectly capable of doing it ourselves, whether by instigating all-out nuclear war, or by fueling global warming. AI is simply another tool within our power that has the potential to destroy us all if we're negligent and careless enough to let it happen.
To be fair, that's actually the point the eminent scientists and industrialists behind this latest call for intergovernmental agreements on lethal AI were trying to make. But by allowing the media to run away with the notion that the robots will consciously seek to wipe us out, they encourage a misleading sense that we are helpless victims rather than in charge of our own destiny.
What I find sinister about this collusion in perpetuating the Terminator myth is that it allows those who create new AI technologies to evade responsibility for the consequences of their actions. Let me explain what I mean with a simple example.
From time to time, I hear people speculating about the possibility of robots being held legally responsible for their actions. For example, if an industrial robot causes a worker's death, is the robot culpable? Well, no. Quite clearly, the legal liability lies with the creator of the robot and its owner, just as with any other industrial machinery. There are only two questions to consider:
- Did the technology vendor take sufficient care to ensure the safe operation of the robot?
- Did its owner take sufficient care to ensure the safety of those coming into contact with it?
If the answer to both questions is yes, and the accident still occurred, then the robot's design or the working practices surrounding it must be changed to ensure the same kind of accident never happens again. If the answer to either question is no, then the corporation or individual responsible must face the legal consequences.
We should not let the Terminator story confuse us about matters of legal liability, and let people or corporations evade their responsibility for the consequences of their own decisions and actions.
Reinvesting the AI dividend
Similarly, we must rigorously apportion responsibility for the consequences in society of increasing adoption of AI. Let's look at jobs, for example.
There are two parts to this argument. The first is that the threat to jobs is overblown, and that for every job taken away by a robot, others will be created. To some extent I think this can already be seen to be true, because robots have already taken many jobs, and yet there has not been a significant reduction in employment. Has the advent of self-service checkouts in supermarkets reduced retail employment? Are fewer people employed in travel as a result of online check-in? Different jobs become available in place of those that are eliminated.
The second part is that the elimination of some jobs and the creation of others is uneven, leaving many without the right skills to find new work. Meanwhile, increasing automation means that capital takes a greater share of the wealth produced, leaving less available to workers. As I've argued previously, businesses should bear more taxation to reflect their greater share of the common wealth, and this AI dividend should be reinvested in retraining and support for disadvantaged communities — perhaps ultimately in the form of a universal basic income. In this way, corporations can take responsibility for the impact on society of their increasing use of AI-powered automation.
AI-powered killing machines
What then of the threat of AI-powered killing machines? Once again, it's a question of allocating responsibility to the people who create and use these machines, rather than to the robots themselves. The call for a treaty to outlaw the creation, production and use of uncontrollable autonomous killing machines should be backed up by a call to define such acts — whether deliberate or inadvertent — as a war crime under the Geneva Convention.
But as my colleague Stuart Lauchlan argued in his story earlier today, such calls are unlikely to be heeded until the world sees the impact of such weapons in action. Chemical weapons were not banned until they had been used in warfare. Landmines, which are an early, very dumb form of "autonomous weapons system," are still not forsworn by leading countries, despite their indiscriminate impact on civilian populations. The tech industry must take responsibility for remaining vigilant and ensuring that the world recognizes the horrific impact of such weapons as early as possible.
Meanwhile, everyone should be clear that the source of the danger is not the robots themselves, but the humans who design and deploy them. Robots, like any tool we create, can and will be a tremendous force for good and for the advance of human achievement. It is up to us to harness that power and take responsibility for ensuring it is not misused.