EU to debate artificial intelligence regulation and legal issues
Summary
The EU will look to build a framework, based on expert input, on the thorny ethical and legal issues surrounding artificial intelligence (AI).
A new statement put forward by the European Group on Ethics in Science and New Technologies (EGE) notes that US companies are now developing AI systems at a rapid rate, but raises concerns about the gap between regulation and the capability of these new systems. It also highlights the opaque way in which these systems are being developed.
US companies wishing to operate in the EU and sell AI-enabled software will need to be aware of any future regulations being put forward.
The European Commission is to establish a group that will discuss the challenges associated with the fast-evolving world of AI, in the hope that the EU can start to provide answers to some of the difficult ethical, legal and societal questions the technology poses.
The EGE rightly notes in a statement that AI can usher in a number of benefits for workers, government and citizens, but also adds that the opaque nature of the technology, and the speed at which it is developing, raise some urgent moral questions.
It is calling for the launch of a process that would pave the way towards a “common, internationally recognised ethical and legal framework for the design, production, use and governance of artificial intelligence, robotics and ‘autonomous’ systems”.
It also proposes a set of fundamental ethical principles, based on the values laid down in the EU Treaties and the EU Charter of Fundamental Rights.
The EGE’s statement argues that while there is a growing awareness of the need to address moral, legal and ethical questions around AI, the technology itself often develops more rapidly than the process of finding answers. It adds:
Current efforts represent a patchwork of disparate initiatives. There is a clear need for a collective, wide-ranging and inclusive process that would pave the way towards a common, internationally recognised ethical framework for the design, production, use and governance of AI, robots and ‘autonomous’ systems.
The European Commission has opened applications to join an expert group on artificial intelligence, which will be tasked to:
- advise the Commission on how to build a broad and diverse community of stakeholders in a "European AI Alliance";
- support the implementation of the upcoming European initiative on artificial intelligence (April 2018);
- come forward by the end of the year with draft guidelines for the ethical development and use of artificial intelligence based on the EU's fundamental rights. In doing so, it will consider issues such as fairness, safety, transparency, the future of work, democracy and more broadly the impact on the application of the Charter of Fundamental Rights.
Priorities
The EGE says in its statement that it is “unfortunate” that some of the most powerful AI tools are also among the most opaque. It adds that advanced capabilities are accumulating largely with private parties and are, for the most part, proprietary.
It points to the fact that AI systems are no longer programmed by humans in a linear manner. Google Brain, for example, develops AI that allegedly builds AI better and faster than humans can, while AlphaZero reportedly bootstrapped itself in four hours from knowing only the rules of chess to world-champion level. This means, according to the EGE:
In this sense, their actions are often no longer intelligible, and no longer open to scrutiny by humans. This is the case because, first, it is impossible to establish how they accomplish their results beyond the initial algorithms. Second, their performance is based on the data that have been used during the learning process and that may no longer be available or accessible. Thus, biases and errors that they have been presented with in the past become engrained into the system.
The EGE has laid out the EU’s ethical principles and democratic prerequisites for examining the role of AI in the future. These lie firmly within the context of the EU’s underlying principles. They include:
- Human dignity - understood as the recognition of the inherent human state of being worthy of respect - must not be violated by ‘autonomous’ technologies.
- Autonomy - the principle of autonomy implies the freedom of the human being. This translates into human responsibility, and thus control over and knowledge about, ‘autonomous’ systems: they must not impair the freedom of human beings to set their own standards and norms and to live according to them.
- Responsibility - ‘autonomous’ systems should only be developed and used in ways that serve the global social and environmental good, as determined by outcomes of deliberative democratic processes.
- Justice, equality and solidarity - AI should contribute to global justice and equal access to the benefits and advantages that AI, robotics and ‘autonomous’ systems can bring. Discriminatory biases in data sets used to train and run AI systems should be prevented or detected, reported and neutralised at the earliest stage possible.
- Democracy - key decisions on the regulation of AI development and application should be the result of democratic debate and public engagement. A spirit of global cooperation and public dialogue on the issue will ensure that they are taken in an inclusive, informed, and farsighted manner.
- Rule of law and accountability - rule of law, access to justice and the right to redress and a fair trial provide the necessary framework for ensuring the observance of human rights standards and potential AI specific regulations. This includes protections against risks stemming from ‘autonomous’ systems that could infringe human rights, such as safety and privacy.
- Security, safety, bodily and mental integrity - Safety and security of ‘autonomous’ systems materialises in three forms: (1) external safety for their environment and users, (2) reliability and internal robustness, e.g. against hacking, and (3) emotional safety with respect to human-machine interaction. All dimensions of safety must be taken into account by developers.
- Data protection and privacy - In an age of ubiquitous and massive collection of data through digital communication technologies, the right to protection of personal information and the right to respect for privacy are crucially challenged. Both physical AI robots as part of the Internet of Things, as well as AI softbots that operate via the World Wide Web must comply with data protection regulations and not collect and spread data or be run on sets of data for whose use and dissemination no informed consent has been given.
- Sustainability - AI technology must be in line with the human responsibility to ensure the basic preconditions for life on our planet, continued prospering for mankind and preservation of a good environment for future generations.
Commenting on the announcement, Commission Vice-President for the Digital Single Market Andrus Ansip said:
Step by step, we are setting up the right environment for Europe to make the most of what artificial intelligence can offer. Data, supercomputers and bold investment are essential for developing artificial intelligence, along with a broad public discussion combined with the respect of ethical principles for its take-up. As always with the use of technologies, trust is a must.
My take
There’s so much at stake here. Autonomous technologies offer the promise of ease and convenience, but that should not be accepted at the cost of fundamental rights, unchecked bias, or operation outside the frameworks of what society deems acceptable. The EU needs to work quickly and intelligently to get this framework in place, as these tools are developing at a rapid pace. Once established, these frameworks need to be applied rigorously and held up to constant scrutiny.