Europe's magnificent 7 AI ethics principles - a lot of carrot and no stick

By Stuart Lauchlan, April 14, 2019
Summary:
Can Brussels do what Google can't - deliver an ethical framework for AI with which we can all agree?


There were red faces all round at Google earlier this month when the firm disbanded a much-promoted Artificial Intelligence ethics advisory panel only a few days after it was set up.

The Advanced Technology External Advisory Council (ATEAC) came under fire from Google staffers over the make-up of its membership, which included the president of the conservative think-tank the Heritage Foundation, who had expressed “anti-trans, anti-LGBTQ and anti-immigrant” comments.

Google hit the panic button and canned the panel, which had been set up to meet four times a year:

It's become clear that in the current environment, ATEAC can't function as we wanted. So we're ending the council and going back to the drawing board. We'll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics.

At the very least, it’s an embarrassing glitch in the cause of Big Tech being left to self-regulate and adopt a ‘grown-up’ approach to ethical policies. As one of the intended ATEAC board members, Professor Joanna Bryson of the UK’s Bath University, frostily observed in a tweet:

I thought there were enough smart people at Google that there must be some process for either communicating or improving decisions. But I was wrong, and the people who called me naive were right.

Ouch!

So is it back to the external regulators and legislators to take a lead, particularly now that Facebook CEO Mark Zuckerberg has asked to be taken in hand? If so, it’s timely that the European Commission has produced its own set of ethical guidelines for an AI age with the - inevitable - ambition of beating the US to the mark and getting Europe’s vision adopted on a global scale:

This is the path that we believe Europe should follow to become the home and leader of cutting-edge and ethical technology. It is through Trustworthy AI that we, as European citizens, will seek to reap its benefits in a way that is aligned with our foundational values of respect for human rights, democracy and the rule of law.

Or as Vice President for the Digital Single Market Andrus Ansip puts it:

The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe, being a leader of human-centric AI that people can trust.

See also data protection and digital services taxation for how well that sort of talk goes down in the White House. But leaving the political grandstanding to one side, there are some useful contributions to a wider debate to be found in the report Ethics Guidelines for Trustworthy AI, drawn up after consultation with a group of 52 experts from tech companies, NGOs and academia.

It starts from a very simple basic premise:

Trustworthiness is a prerequisite for people and societies to develop, deploy and use AI systems. Without AI systems – and the human beings behind them – being demonstrably worthy of trust, unwanted consequences may ensue and their uptake might be hindered, preventing the realisation of the potentially vast social and economic benefits that they can bring.

The report argues that Trustworthy AI has three essential components. It must be:

  • Lawful, complying with all applicable laws and regulations.
  • Ethical, ensuring adherence to ethical principles and values.
  • Robust, both from a technical and social perspective.

The long-term impact of AI tech makes it essential to get these foundational building blocks in place, says the report:

AI is a technology that is both transformative and disruptive, and its evolution over the last several years has been facilitated by the availability of enormous amounts of digital data, major technological advances in computational power and storage capacity, as well as significant scientific and engineering innovation in AI methods and tools. AI systems will continue to impact society and citizens in ways that we cannot yet imagine.

Pitching to developers of AI tech, the report outlines seven key factors that need to be taken into account:

  1. Human agency and oversight, meaning that all AI systems should enable equitable societies by supporting human agency and fundamental rights. They must not be designed or deployed in a way that would decrease, limit or misguide human autonomy.
  2. Robustness and safety, meaning that underlying algorithms must be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  3. Privacy and data governance, such that citizens have full control over their own data and that such data will not be used to harm or discriminate against them.
  4. Transparency, meaning that the traceability of AI systems should be ensured.
  5. Diversity, non-discrimination and fairness, meaning that AI tech needs to be for everyone, regardless of color, creed, skills, abilities etc.
  6. Societal and environmental wellbeing, such that AI is used to drive positive social change and enhance sustainability and ecological responsibility.
  7. Accountability, with mechanisms and processes in place to ensure human responsibility and accountability for AI systems and their outcomes.

My take

You need to read the entire report to appreciate the level of detail produced here. Beneath the top-line principles, the body text is dense - almost too dense - betraying the academic and legislative roots of its authors. There will now be a pilot phase running until early 2020, intended to give businesses the chance to provide more grounded feedback.

So is this a realistic platform for a universal ethical framework? The answer lies in the wording of its ambitions, which, despite the political ‘global leadership’ bravado, is carefully couched. These principles are non-enforceable guidelines that Mariya Gabriel, EU Commissioner for the Digital Economy, describes as “baselines” for organizations to check against when developing AI tech.

In other words, this is - at this point - very much carrot, not stick. But given Brussels’ track record on tech matters, the temptation to regulate further down the track won’t be far from the front of many minds.

For now though, read this as another contribution to the ongoing debate on AI ethics, remembering that the power bloc behind its authorship inevitably adds weight to its ideas. Remember too that Europe lags well behind the US in the commercial exploitation of AI tech to date - and sadly that does matter when it comes to setting the rules.