AI is a fundamental risk to the existence of human civilization, in a way that car accidents, airplane crashes, faulty drugs, or bad food were not.
Another day, another set of dire predictions about the rise of the bots. Over the weekend it was the turn of Tesla CEO Elon Musk. It’s not the first time he’s aired colorful concerns of this nature. As far back as 2014, he was issuing lurid warnings of:
With artificial intelligence, we are summoning the demon. You know, in all those stories where there’s the guy with the pentagram and the holy water, and he’s like yeah, he’s sure he can control the demon. Didn’t work out.
What makes this latest turn as a tech Cassandra most interesting is that he was addressing American politicians at the National Governors Association conference in Rhode Island – and actively calling for political intervention. Musk said:
AI’s a rare case where we need to be proactive in regulation, instead of reactive. Because by the time we are reactive with AI regulation, it’s too late…I’m against overregulation for sure, but man, I think we’ve got to get on that with AI, pronto.
Thus we need the regulators to come in and say, ‘Hey guys, you all really need to pause and make sure this is safe. When it’s cool and the regulators are convinced it’s safe to proceed, then you can go, but otherwise, slow down.’ You kind of need the regulators to do that for all the teams in the game. Otherwise the [corporate] shareholders will be saying, ‘Why aren’t you developing AI faster? Because your competitor is’.
The hugely competitive nature of the tech industry will fan the flames here if that regulation is not put in place, he added, admitting that most firms are not going to like what he was saying:
For sure, the companies doing AI — most of them, not mine — will squawk and say this is really going to stop innovation.
Musk painted a picture of out-of-control actions from robotic entities that could end up harming human beings, even if they’re programmed to be supportive:
If [the programming] is not well thought out – even if its intent is benign – it could have quite a bad outcome. If you were a hedge fund or private equity fund and you said: ‘Well, all I want my AI to do is maximize the value of my portfolio,’ then the AI could decide, well, the best way to do that is to short consumer stocks, go long defense stocks, and start a war…[They] could start a war by doing fake news and spoofing email accounts and fake press releases, and just by manipulating information. The pen is mightier than the sword.
Musk argued that there’s an urgent need for the political class to get its collective mind around the topic of AI and machine intelligence – and be ready to be frightened when it does so:
Right now the government doesn’t even have insight. Once there is awareness, people will be extremely afraid, as they should be…I have access to the very most cutting edge AI, and I think people should be really concerned about it. I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.
Such is the threat posed by AI in Musk’s mind that the US government needs to rethink its normal way of introducing regulatory regimes:
Normally the way regulations are set up is a whole bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry. There’s a bunch of opposition from companies who don’t like being told what to do by regulators and it takes forever. That, in the past, has been bad, but not something which represented a fundamental risk to the existence of civilisation…AI is a fundamental, existential risk to human civilisation and I don’t think people fully appreciate that.
Not everyone seemed convinced by Musk’s warnings of disaster, with Republican Governor Doug Ducey of Arizona calling him out on the subject:
I was surprised by your suggestion to bring regulations before we know what we are dealing with.
Musk’s response to such questioning was that it’s necessary to get started in order to find out:
The first order of business would be to try to learn as much as possible, to understand the nature of the issues. Look closely at the progress that is being made and the remarkable achievements of artificial intelligence.
Musk has of course lately passed up on whispering his armageddon predictions into the most powerful US political ear after walking out on Donald Trump’s economic advisory council, following the President’s controversial decision to pull the US out of the global Paris climate accord.
He told the gathered Governors in Rhode Island that he had tried to influence Trump on a number of fronts, but his position became untenable after the Paris decision:
I did my best, and I think in a few cases I did make some progress…If I stayed on the councils it would be saying that wasn’t important, but I think it’s super important. The country needs to keep its word. There’s just no way I could stay on after that.
I’m inherently wary of inviting politicians to get their heads around anything much to do with technology, as it usually ends up coming down to lowest common denominators. Just look at the way that politicians on both sides of the Atlantic have tied themselves in ill-informed knots around encryption, for example.
And while I don’t doubt the sincerity of Musk’s views, I’m not sure painting pictures of killer robots stomping down the street is going to take the governing elite much further on from the normal labored allusions to The Terminator or Doctor Who’s Cybermen that seem a necessary element of most political debates around the subject of AI.
Anyway, basic premise – the robots are coming to kill us. Discuss!
Image credit - BBC