There were two robot videos doing the rounds on social media last week. The first was Boston Dynamics’ latest ‘look what it can do now’ video showing the Atlas humanoid robot doing backflips.
The second, which got rather less attention, was a video from the Campaign To Stop Killer Robots pressure group, entitled Slaughterbots. It’s a video that opens with a cheesy CEO unveiling a new product, which turns out to be a miniature drone that uses facial recognition to select a specific target before firing off a missile. The CEO’s sales pitch:
A $25 million order now buys this, enough to kill half a city — ‘the bad half’. Nuclear is obsolete, take out your entire enemy virtually risk-free. Just characterise him, release the swarm and rest easy.
Of course - and you can probably see the ‘twist’ coming here - the drones fall into the hands of unnamed terrorists and sundry bad guys, who turn them on a classroom of upstanding students. It’s not subtle, but then it doesn’t set out to be. It’s got a simple message - stop this stuff in its tracks now.
The problem is, as noted before, once something’s been invented, it’s impossible to un-invent it. You just have to learn to manage the situation, which was one of the ambitions discussed last week at a United Nations (UN) meeting of countries participating in the Convention on Conventional Weapons.
The Slaughterbots video was released to coincide with a meeting of a Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), gathering for the first time to deal with its mandate of examining emerging tech around LAWS. It’s chaired by Ambassador Amandeep Singh Gill of India, who told reporters afterwards that there’s a need to dial back the hype and hysteria here:
Ladies and gentlemen, I have news for you: the robots are not taking over the world. Humans are still in charge. I think we have to be careful in not emotionalising or dramatising this issue.
To which the immediate response has to be - tell that to Elon Musk. Or Professor Stephen Hawking. Or any of the myriad scientists and technologists who’ve been popping up to warn about the coming of the cyber-army to a neighborhood near you.
As ever with anything to do with the UN, getting agreement between nations is going to be akin to herding cats. The meeting falls under the UN Convention on Certain Conventional Weapons — AKA the Inhumane Weapons Convention — and operates by consensus. That means that lowest-common-denominator objectives and the lowest-hanging fruit are the most likely successes.
Some 22 countries have to date called for an outright ban on LAWS-type tech development. The problem is that these tend to be countries with lower military budgets and no major investment in R&D in this field.
More problematic is the stance taken by countries such as the US and Israel, both of which are significantly non-committal on the issue, while pumping cash into R&D. Russia has been the most open in its ambitions, with President Vladimir Putin declaring of AI in general that:
The one who becomes the leader in this sphere will be the ruler of the world.
Need for speed
Speaking at a side event to the main UN meeting, Toby Walsh, AI expert from the University of New South Wales in Australia, warned:
The arms race has happened [and] is happening today. These will be weapons of mass destruction. I am actually quite confident that we will ban these weapons…My only concern is whether [countries] have the courage of conviction to do it now, or whether we will have to wait for people to die first.
But time doesn’t seem to have been a major issue during the GGE discussions. After the initial meeting ended at the UN on Friday, representatives from more than 80 countries approved a report on the outcome of the talks, which essentially amounts to - push this into next year and worry about it then. Ambassador Gill appeared happy with this outcome, saying:
I am very happy with the start we made.
For its part, the Campaign to Stop Killer Robots said that this is basically “a roll-over of the unambitious mandate to continue deliberations in 2018” and that more concrete action points and objectives need to come out of future meetings. These should include agreement on a formal negotiating mandate at the end of 2018 and the conclusion of a new protocol by the end of 2019, one that bans the development, production, and use of fully autonomous weapons:
The 2018 GGE meetings should be action-oriented and focus on discussions between states…In 2018, states at the GGE should focus on considering characteristics or elements of a working definition on lethal autonomous weapons systems. It is time for experts from governments to make explicit where they draw the line in increasing autonomy in weapon systems and determine how to retain meaningful human control over weapons systems.
Look at the amazing Atlas robot and marvel at how fast robotics tech is evolving! And while you’re all ooh-ing and aah-ing at this, you won’t be paying too much attention to the similarly fast - and getting faster - evolution of the Slaughterbots. While I’m not about to throw in my lot with Elon Musk and start to panic about Cybermen in my living room, the outcome from last week’s meeting smacks, at best, of a lack of urgency to come to firm conclusions. Clearly this isn’t a debate that should be rushed, or one from which we demand hasty policy-making. But some greater recognition of urgency would be welcome in 2018.