Autonomous AI is the end of the world as we know it. Do you feel good about that?

Jerry Bowles, July 22, 2018
Summary:
Thousands of AI engineers, scientists, and entrepreneurs are just saying no to building autonomous weapons. They are probably coming anyway.

Visit the Future of Life Institute (FLI) website and you may be struck by its dramatic subhead: “Technology is giving life the potential to flourish as never before…or to self-destruct. Let’s make a difference!”

Lest there be any ambiguity as to the message, the site’s Machine Intelligence Research Institute page spells it out with this helpful thought:

Most benefits of civilization stem from intelligence, so how can we enhance these benefits with artificial intelligence without being replaced on the job market and perhaps altogether?

Replaced altogether? That sounds a tad polemical, but until the past decade the notion that cognitive machines could acquire new data, crunch information around the clock, constantly learn, and make autonomous decisions, for good or for evil, independent of human control was the stuff of science fiction.

Nowadays, you don’t have to wear a tinfoil hat to know that civilization is rapidly approaching the point where AI-powered machines are poised to take over the operation of many of the everyday tools of modern life: automobiles, airplanes, medical devices, financial trading systems, power grids and, in the nightmare scenario, military systems, including autonomous weapons that can select and engage targets without human input.

The latter possibility is a bridge too far for many AI companies, researchers, engineers and experts. Last week at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, FLI, a Boston-based research organization whose goal is mitigating existential risks facing humanity, released a pledge from more than 160 organizations and 2,460 individuals from 90 countries to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.”

The pledge ends with the words:

We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.

Among the signatories are Elon Musk, the founders of Google DeepMind--Demis Hassabis, Shane Legg, and Mustafa Suleyman--University College London, the XPRIZE Foundation, ClearPath Robotics/OTTO Motors, the European Association for AI (EurAI), the Swedish AI Society (SAIS), British MP Alex Sobel, and prominent AI researchers Stuart Russell, Yoshua Bengio, Anca Dragan, and Toby Walsh.

Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, pointed to the thorny ethical issues involved in autonomous weapons. He said:

We cannot hand over the decision as to who lives and who dies to machines. They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way.

Last year, Musk and Mustafa Suleyman led a group of 116 specialists from across 26 countries in drafting An Open Letter to the United Nations Convention on Certain Conventional Weapons. The letter warned:

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.

More than a dozen countries, including the United States, China, Israel, South Korea, Russia, and the United Kingdom are believed to currently be developing autonomous weapons.

Human Rights Watch and many other global organizations are calling for a complete preemptive ban on “killer robots” before the world goes down that dangerous path:

While international humanitarian law already sets limits on problematic weapons and their use, responsible governments have in the past found it necessary to supplement existing legal frameworks for weapons that by their nature pose significant humanitarian threats. Treaties dedicated to specific weapons types exist for cluster munitions, antipersonnel mines, blinding lasers, chemical weapons, and biological weapons. Fully autonomous weapons have the potential to raise a comparable or even higher level of humanitarian concern and thus should be the subject of similar supplementary international law.

My take

Autonomous weapons may be the deadliest threat posed by artificial intelligence, but they are not the only area in which AI technology is raising vital moral, ethical, legal and public policy questions.

We are living in an age in which advances in cognitive technology are outpacing our ability to fully understand their implications. AI, machine learning, facial recognition software, autonomous vehicles, data mining, robots: all have the ability to improve the quality of human life or, sadly, to make many of us redundant.

Right now, most AI in use falls into the category of narrow, or weak, AI because it is designed to perform a single narrow task: drive a car, recognize faces, perform internet searches, remind you to take your medications. AI can beat you at chess but probably can’t do your homework. The “holy grail” for many researchers is to create general AI (AGI, or strong AI), which would outperform humans at nearly every cognitive task. At that point, it may not be clear exactly who is working for whom.

One last thought. This may be the scariest film trailer you’ll ever watch, but it’s important that you see it. It’s called Slaughterbots and, for now, it’s fiction.
