Back in April, the European Commission published a set of seven AI ethics principles. These were, we said at the time, worthy enough, but long on carrot and short on stick when it came to turning them into deliverable action.
As a reminder, the ‘magnificent seven’ were:
- Human agency and oversight, meaning that all AI systems should enable equitable societies by supporting human agency and fundamental rights. They must not be designed or deployed in such a way that would decrease, limit or misguide human autonomy.
- Robustness and safety, meaning that underlying algorithms must be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
- Privacy and data governance, such that citizens have full control over their own data and that such data will not be used to harm or discriminate against them.
- Transparency, meaning that the traceability and explainability of AI systems is paramount.
- Diversity, non-discrimination and fairness, meaning that AI tech needs to be for everyone, regardless of color, creed, skills, abilities etc.
- Societal and environmental wellbeing, such that AI is used to drive positive social change and enhance sustainability and ecological responsibility.
- Accountability, with mechanisms and processes in place to ensure human responsibility and accountability for AI systems and their outcomes.
Flash forward to today and the process has moved to its next stage, with a new report from the Commission’s independent AI High Level Expert Group (HLEG). This one comes with 33 recommendations for specific policies aimed at European Union member states, public sector bodies and private sector organizations.
A lot of the report - Policy and Investment Recommendations for Trustworthy AI - is built around differentiating Europe’s approach to AI policy from those of the US and China, a so-called third way that would require non-EU parties to toe the line. The HLEG argues:
Europe can distinguish itself from others by developing, deploying, using and scaling Trustworthy AI, which we believe should become the only kind of AI in Europe, in a manner that can enhance both individual and societal well-being.
As a case in point, the HLEG picks out the use of AI-enabled surveillance tech in both private and public sector contexts. While both the US and China have embraced the potential here, Europe should take a different tack:
Governments should commit not to engage in mass surveillance of individuals and to deploy and procure only Trustworthy AI systems, designed to be respectful of the law and fundamental rights, aligned with ethical principles and socio-technically robust.
Use - and abuse - of personal data also comes into focus:
Tools should be developed to provide a technological implementation of the GDPR and develop privacy preserving/privacy by design technical methods to explain criteria, causality in personal data processing of AI systems (such as federated machine learning).
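The quote above names federated machine learning as one privacy-preserving technique. As a rough illustration of the core idea - raw personal data stays on each client device and only model parameters are shared and averaged centrally - here is a toy sketch (the model, learning rate and data are invented for the example; this is not any official EU or GDPR tooling):

```python
# Toy federated averaging sketch: each client trains on its own data locally;
# only model weights (never the raw personal data) are sent to the server.

def local_update(w, data, lr=0.1):
    """One on-device pass of gradient descent for a 1-D linear model y = w*x."""
    for x, y in data:
        grad = 2 * (w * x - y) * x  # gradient of squared error
        w -= lr * grad
    return w

def federated_average(global_w, client_datasets):
    """Server step: average the client-trained weights."""
    client_weights = [local_update(global_w, d) for d in client_datasets]
    return sum(client_weights) / len(client_weights)

# Three hypothetical clients, each holding private samples of y = 2x
clients = [[(x, 2 * x) for x in (1.0, 2.0, 3.0)] for _ in range(3)]
w = 0.0
for _ in range(50):  # communication rounds
    w = federated_average(w, clients)
print(round(w, 2))  # converges towards the true slope, 2.0
```

In a real deployment the aggregation would typically be combined with secure aggregation or differential privacy, since model updates alone can still leak information.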
Underlying the detailed recommendations - which you can access here (be warned, it’s a lot of reading, so grab a coffee!) - are some broader takeaways that set the tone. For example, the HLEG advocates what it calls a “tailored approach” to the AI landscape:
[Policy makers] should consider the ‘big picture’, by looking at AI’s overall impact on - and potential for - society, while simultaneously understanding the sensitivities of AI solutions in B2C, B2B and P2C [public-to-citizen] contexts, both as digital products and services only, and as digital solutions embedded in physical systems.
There needs to be a Single European Market for AI, it adds:
Establishing a level playing field for Trustworthy AI across Europe can benefit individuals and organisations by removing barriers to procure lawful, ethical and robust AI-enabled goods and services, while also ensuring a competitive position on the global market through economies of scale enabled by large integrated markets.
To achieve this, the HLEG calls for a “10 year vision with a rolling action plan”, suggesting that our ‘carrot and stick’ point was well-made back in April:
This report…together with the earlier published Ethical Guidelines on Trustworthy AI, is a first step to establish the foundation for this durable strategy. However a long-term follow-up scheme for their implementation is crucial if we truly wish to move the needle in Europe…Building on this report’s cross-sectoral recommendations, we believe it is now necessary to learn which impactful actions should be undertaken for various strategic sectors…Europe’s readiness to respond to this [AI] opportunity must be ensured, which requires action now.
But is this report enough to make that practical ‘call to arms’ a realistic prospect? Aside from the inevitable concerns from civil rights and privacy groups about more AI, there’s criticism from techUK, the trade association for the UK technology industry. Katherine Mayes, Programme Manager for Cloud, Data Analytics and AI, warns:
At points the text can be quite negative and arguably not justified about existing technologies, such as cloud computing, that will help enable the benefits of AI technology. For example, the paper mentions “the lack of well performing cloud infrastructure respecting European norms and values may bear risks regarding macroeconomic, economic and security policy considerations, putting datasets and IP at risk, stifling innovation and commercial development of hardware and compute infrastructure of connected IoT devices in Europe”.
In some cases, the recommendations do not consider the existing legislation that is already in place. techUK believes this is important before placing unnecessary additional processes on the private sector, especially SMEs. For example, the AI HLEG suggest a mandatory obligation to conduct a trustworthy AI assessment when the private sector use AI systems that may have potential to significantly impact on human lives. However there is no acknowledgement that under current legislation, private companies cannot legally deploy AI systems that would adversely impact an individual’s human rights.
Mayes also cites some inherent contradictions in the HLEG recommendations:
The report states that ‘unnecessarily prescriptive regulation should be avoided’. ‘In contexts characterised by rapid technological change, it is often preferable to adopt a principled-based approach.’ However later the report calls for meaningful oversight mechanisms and new regulation to address the critical concerns listed in the Ethics Guidelines for Trustworthy AI. This section of the report seems to counter the original intention of the AI HLEG Trustworthy AI guidelines as a flexible, evolving and voluntary toolkit. The European Commission should await the outcome of the pilot sessions before making premature decisions about oversight mechanisms.
There’s more meat on the bones here to chew on, but that isn’t necessarily helpful. In pursuit of a wide holistic strategy, there’s a danger of trying to boil the ocean. That’s always a risk with this kind of pan-industry/academia/public sector policy body - so many stakeholders, so many agendas, so many boxes to tick.
The next phase of the HLEG’s work is to run a pilot of its ethical guidelines to establish their sectoral suitability and to instigate what it calls “a limited number of sectoral AI ecosystem analyses”. These should deliver more real-world conclusions than the largely theoretical work to date. By the time that next phase is completed, there will be a new European Commission in place - and the UK may well finally have Brexit-ed, adding a whole new set of complexities for a country that has already decided to drop out of the Digital Single Market anyway!
While we lean towards the HLEG’s call for more action, there’s a lot in flux, and it will be worth not rushing into policies that will have very long-term impact.