L’Eurobot arrive – Brussels seeks to govern the robot revolution

SUMMARY:

The robots are coming and Members of the European Parliament have decided it’s their job to set down some rules. Chris Middleton challenges a proposed European solution to the rise of the Eurobot.

MEPs have called for new laws to govern how robots and artificial intelligence (AI) interact with human beings. The move is designed to minimise the risks to human society of the rise of intelligent, interconnected, autonomous machines and software – an echo of Asimov’s Three Laws of Robotics, proposed in 1942: 

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

A draft report from the European Parliament’s Committee on Legal Affairs – drawn up in 2016, but only now made public – suggests that while there are many advantages to the incoming “industrial revolution”, there are at least as many dangers.

It warns:

The development of robotics and AI may result in a large part of the work now done by humans being taken over by robots, so raising concerns about the future of employment and the viability of social security systems if the current basis of taxation is maintained, creating the potential for increased inequality in the distribution of wealth and influence.

The report adds:

The causes for concern also include physical safety, for example when a robot’s code proves fallible, and the potential consequences of system failure or hacking of connected robots and robotic systems.

Security has certainly been marginalised in the rush to bring smart IoT devices to market. A couple of years ago, IBM researchers disabled a smart car’s brakes using an MP3 file, and accessed a building’s IT systems by hacking a smart lightbulb. This is just the tip of a massive security iceberg.

The report also raises concerns about data protection and privacy in a world of interconnected intelligence and machine learning, and about the “soft impacts” on human dignity in a world of robotic carers, telemedicine, and robot-assisted surgery – all big growth areas. Care robots, says the report, “could dehumanise the caring process” for the recipient.

Then the report throws in a familiar sci-fi scenario, saying artificial intelligence might:

Pose a challenge to humanity’s capacity to control its own creation and, consequently, perhaps, also to its capacity to be in charge of its own destiny and to ensure the survival of the species.

Really?

So how urgent are these laws in the real world?

First, the long-predicted future of hyper-intelligent machines is almost upon us. Unsupervised machine learning and machine-human communication are core areas of robotics research worldwide, while supercomputing and natural language conversation are already available to robots via cloud services such as IBM’s Watson, connected to industry-specific datasets.

But fears about robots’ designers being somehow disconnected from human society may be misplaced. Research is increasingly taking place in multi-disciplinary teams: not only computer scientists and engineers, but also psychologists, cultural theorists, ethics experts, and cognitive researchers. Robotics is no longer about scaling a great technology Everest just because it’s there.

That said, the market for humanoid and industrial robots, AI, automated systems, and robotic software is growing much faster than many people realise – certainly faster than the law’s ability to keep up. IDC predicts that, by 2019, the global market for hard and soft robotics will already be worth $135 billion. Japan alone is investing ¥26 trillion (£161 billion) in the sector by 2020, with the aim of creating a “super-smart society”.

Drones and autonomous vehicles are already among us, and AI is being built into the fabric of Google itself, along with countless business applications. In the meantime, consumers have been swift to accept AI into their homes via Amazon’s Alexa, Google’s Assistant, and Apple’s Siri, and have happily ceded control of their personal fitness, health, and domestic security to wearables and smart home devices.

Meanwhile, robots’ potential impact on jobs has been presented in near-apocalyptic terms. Last year, Oxford academic Dr Anders Sandberg predicted that nearly half of all jobs (47 per cent) would eventually be taken by robots, saying:

If you can describe your job, then it can and will be automated.

It’s certainly true that more and more human jobs can be broken down into replicable processes – which is one reason for the explosion of automation in highly rules-based and regulated industries, such as financial services. Retail and investment banks worldwide have been in the vanguard of mass automation, with insurance companies not far behind.

Arguably, therefore, one risk to human society is less about the rise of intelligent machines and more about the rise of target-driven, machine-like humans: drones who are instructed never to use their own initiative.

The report calls on the European Commission to start monitoring job trends more closely, to see where robots are taking – and creating – jobs. Robotics will generate many new jobs, says IDC: the analyst firm predicts that, by 2020, 35 per cent of robotics roles will be vacant, with salaries in the sector rising by 60 per cent.

But what about MEPs’ fears about human safety and security? The report makes some intriguing observations about what might happen if a robot harms a human being:

Once the ultimately responsible parties have been identified, their liability would be proportionate to the actual level of instructions given to the robot and of its autonomy, so that the greater a robot’s learning capability or autonomy is, the lower other parties’ responsibility should be, and the longer a robot’s ‘education’ has lasted, the greater the responsibility of its ‘teacher’ should be.

In short, sometimes no one may be responsible. At least, no one with lungs and a heart: perhaps the first statement of some future bill of robot rights and responsibilities?

The key thing to remember here is that robots can only understand human behaviour and society if coders understand them first, or can anticipate potential problems – both ethical and practical.

In this regard, the dreadful example provided by Microsoft’s Tay chatbot last year – which went from saying “humans are super cool” to extolling Nazi values in 24 hours – should serve as a warning: a naïve robot released into the wild by naïve programmers.

This suggests that the real answers lie at the earliest stages of a robot’s development, and not just in trying to accommodate it within a human legal system retrospectively. That’s another way of saying that all coders should be socially adept, ethical, responsible humanitarians.

Good luck with that.

Asimov

Arguably, then, unless robots are developed within the context of Asimov-style laws – as Europe suggests – they can only learn, or be programmed with, behaviour from flawed human beings within whatever legal frameworks, political beliefs, or cultural norms exist locally. Or outside of them completely.

This is the real issue: there is no universal agreement about how human rights should be interpreted locally, let alone machine laws in a human context. Just ask the British government, which wants to opt out of European human rights laws altogether; or Saudi Arabia, which defines atheists as terrorists; or the US, which favours citizens’ right to bear arms; or the many countries in which women still have lower social status than men.

It is into this global context that law enforcement robots are fast emerging: the United Arab Emirates, China, and even Silicon Valley itself have already put such machines on their streets – societies with very different laws and values. Meanwhile, the US’s Loyal Wingman programme is converting an F-16 warplane into a semi-autonomous, unmanned fighter: a robot that may decide to take human life.

The report says:

A robot’s autonomy can be defined as the ability to take decisions and implement them in the outside world, independently of external control or influence; whereas [in other circumstances] this autonomy is of a purely technological nature and its degree depends on how sophisticated a robot’s interaction with its environment has been designed to be.

This is an important point. As I observed in a previous article, all algorithms are political: they reflect the values and beliefs of the societies or organisations in which they are written – not to mention the interests of shareholders. And automation always favours the algorithm writer.

So what is the long-term solution to ring-fence and protect human society from the machines?

The report suggests:

The European Union could play an essential role in establishing basic ethical principles to be respected in the development, programming and use of robots and AI, and in the incorporation of such principles into European regulations and codes of conduct, with the aim of shaping the technological revolution so that it serves humanity and so that the benefits of advanced robotics and AI are broadly shared, while as far as possible avoiding potential pitfalls.

It also proposes a charter on robotics and a code of ethical conduct for researchers, engineers, and manufacturers. Fair enough. Why not?

However, other steps proposed by the report include an official “European definition” of a smart autonomous robot, the registration of all such machines, and the foundation of a European agency to oversee robotics and AI across the European community.

My take

This, then, is a very European solution: vaunting ambition and a much-needed focus on ethical development, human rights and social justice, coupled with a poor understanding of the problem and a desire to create layer upon layer of new bureaucracy. An officially registered European robot, no less, obeying European laws. Make way for the Eurobot!

The definition problem alone is insurmountable: any smartphone, toy, or hub could be defined as a robot, along with self-service machines and more and more back- and front-office business applications – and one day, perhaps, even Google itself. Soon, AI and automation will be embedded in nearly every aspect of our lives.

The time to argue about what is and isn’t a robot has long passed: most robots don’t need faces or limbs to replace a human being.

A less bureaucratic, more succinct approach already exists. In 2010, the UK’s Engineering and Physical Sciences Research Council proposed five laws to be obeyed in advance by manufacturers and researchers, not imposed after the fact by an overarching bureaucracy that is at war with itself.

These are:

  • Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
  • Humans, not robots, are responsible agents. Robots should be designed and operated, as far as is practicable, to comply with existing laws and fundamental rights and freedoms, including privacy.
  • Robots are products. They should be designed using processes which assure their safety and security.
  • Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
  • The person with legal responsibility for a robot should be attributed.

That last point is a good one – “Greetings, puny human! You are being terminated by Colin Smith from Dorking!”

Intriguingly, the draft European report also suggests that women may be the antidote to the fast-emerging machine world – perhaps implying that the march of the robots is something largely dreamed up by science fiction-obsessed men:

Getting more young women interested in a digital career and placing more women in digital jobs would benefit the digital industry, women themselves and Europe’s economy. [The report] calls on the Commission and the Member States to launch initiatives in order to support women in ICT and to boost their e-skills.

Hopefully the Commission will comply.

Image credit - Freeimages.com/MathTheRivo

Comments

A reader says:

Excellent article, especially this quote: “If you can describe your job, then it can and will be automated.” Do you have a source for that?

Stuart Lauchlan says:

Chris Middleton says that the comment was made by Dr Anders Sandberg, James Martin Research Fellow at Oxford University’s Future of Humanity Institute, speaking at the Japan-UK Robotics and AI Seminar at the Japanese Embassy in London in February 2016.