Elon Musk v the killer robots – less rhetoric, more reason please


Here come the killer robots – again. Elon Musk is back in Prophet of Doom mode, this time joined by 115 others. Is it time for a more considered debate than Terminator headlines?

Another day, another apocalyptic warning about the march of the killer robots and the threat to the human race – and yes, Elon Musk is right at the forefront in his chosen guise of Armageddon Pedlar-in-Chief.

Musk is the headline-grabbing name among 100 or so other signatories to an open letter to the United Nations on the day that the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems was due to meet to debate how to prevent the abuse and misuse of AI-enabled weaponry.

The letter declaims:

Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once this Pandora’s Box is opened, it will be hard to close. Therefore we implore the High Contracting Parties to find a way to protect us all from these dangers.

In December 2016, 123 member nations of the UN’s Review Conference of the Convention on Conventional Weapons unanimously agreed to begin formal discussions on autonomous weapons. Of these, 19 have already called for an outright ban.

While Musk’s signature on the letter was the most prominent, he was joined in his concerns by over a hundred other AI and robotics experts. For example, Ryan Gariepy, the founder of Clearpath Robotics, said:

Unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability.

Meanwhile Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, said:

Nearly every technology can be used for good and bad, and artificial intelligence is no different. It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialise war. We need to make decisions today choosing which of these futures we want.

And Stuart Russell, founder and Vice-President of Bayesian Logic, warned:

Unless people want to see new weapons of mass destruction – in the form of vast swarms of lethal microdrones – spreading around the world, it’s imperative to step up and support the United Nations’ efforts to create a treaty banning lethal autonomous weapons. This is vital for national and international security.

So, away from the ‘end is nigh’ rhetoric, how realistic is the threat? Countries such as the US and the UK are at the forefront of developing autonomous weaponry, while Russia recently released a video of a robo-warrior that can fire a gun. And inevitably rogue nations, such as North Korea, are feared to be investing in this field.

Earlier this year, Mary Wareham, Advocacy Director of the Arms Division at Human Rights Watch, warned:

Autonomous weapons systems are in development by more than a dozen countries, particularly the United States, China, Israel, South Korea, Russia, and the United Kingdom. The concern is that the human role in selecting and firing on targets will become less and less prominent until humans are no longer involved and the machine takes over these critical functions.

Even though there was widespread support for formalizing the process to discuss concerns over killer robots, countries such as France, the United Kingdom, and the United States have set the bar far too low. They have not called for new international law, but instead have proposed focusing on sharing best practices and greater transparency in the development and acquisition of new weapons systems. That is not enough to stop the development of these weapons before it’s too late.

My take

The problem with technological advances is that no-one’s ever worked out a way to uninvent something. The chances of the major powers agreeing to halt research and development into AI-enabled weaponry are essentially non-existent.

Being able to send a machine into a combat or terrorist zone rather than human beings clearly has a political and military logic that is inescapable. It also of course raises enormous moral questions which need to be addressed head-on – and preferably without the ‘killer robots are coming’ hysteria. This requires calm, reasoned debate and considered regulation.

The inevitable result of this open letter yesterday was a rash of TV and mainstream media coverage, every single piece of which began or concluded with Terminator references. It’s silly season when it comes to the news agenda, but even so…

Musk has put the topic onto the public radar. I’d suggest the most beneficial contribution he can now make is to sit down quietly for a while and let others take the lead on this thorny topic.

Image credit - Pinterest/Twitter