Uh oh, AI, EU - what could possibly go wrong? The risky business of regulation

Stuart Lauchlan, April 23, 2021
Europe makes its first moves to put the world to rights on acceptable uses of AI. There may be trouble ahead...


One US tech CEO summed up his puzzlement with the European Commission (EC) to me once by asking if there was any area of life left that didn’t yet have its own Commissioner appointed to regulate it? How could Europe, he went on, expect to develop its own Silicon Valley avatar if innovation was stifled by Eurocrats intent on micro-managing the free market?

That’ll be read as a fairly extreme, and possibly self-serving, worldview, delivered from the comfort of a billionaire’s board room in the real Silicon Valley, but in some form or another it’s a question that’s been aired at various times over the years, most usually shortly after politicians get up on their trotters and start pitching a glorious day in which the EU (European Union) has somehow overtaken the US and China in the tech stakes.

Given the relentless rise of Artificial Intelligence (AI), it was inevitable that the EC would need to put its own regulatory stake in the ground and set the world right on a few matters. The fact that there are already multiple national and international efforts underway on this front is irrelevant; if there isn’t a Euro-version, there ought to be - and besides, the great thing about standards is there are so many to choose from…

The first fruit of the Commission’s work on an AI regulation framework was published earlier this week, complete with the usual ‘global leader’ rhetoric to puff up the gills of the good and the great in Brussels. These are, claims the proposal document:

…new rules and actions aiming to turn Europe into the global hub for trustworthy AI. The combination of the first-ever legal framework on AI and a new Co-ordinated Plan with Member States will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU.

Risky business

Leaving the grandstanding aside, what’s the meat on the bones here? Several aspects of the 80-page document jump out. There’s a proposal to set up a European Artificial Intelligence Board that would develop new standards and enforce the rules across the bloc. Those rules would carry heavy penalties for infringement - up to 6% of a company’s global annual revenue, higher than the 4% of revenue that can be imposed on GDPR (General Data Protection Regulation) violators. It’s not explained how this new Board would be aligned with the existing European Data Protection Board - or indeed, with national data protection authorities across the EU.

The most important aspect of this first cut - and it is only a first stab, with a long, long way to go before any of it makes it into practice - is that the Commission has openly followed a risk-based approach in its deliberations, with a number of key categories named, if not entirely defined. So there is:

  • Unacceptable Risk - AI tech deemed to be a clear threat to the safety, livelihoods and rights of individuals. This takes in any system that enables ‘social scoring’ by government, as demonstrated in China.
  • High Risk - this includes critical infrastructures that could put the life and health of citizens at risk, including self-driving vehicles; safety systems in general; tech that impacts on employment rights and prospects; private services, such as credit scoring; law enforcement where it interferes with “fundamental rights”; migration and border control systems; and most uses of biometric tech.
  • Limited Risk - AI systems that come with specific transparency obligations, such as chatbots where users would be made aware that they’re interacting with a machine.
  • Minimal Risk - this category is said to include the vast majority of AI systems.

Margrethe Vestager, Executive Vice-President for A Europe Fit for the Digital Age at the Commission, explains the risk rationale here in the following terms:

Our proposed legal framework doesn't look at AI technology itself. Instead, it looks at how AI is used and what for. It takes a proportionate and risk-based approach, grounded on one simple logic: the higher the risk that a specific AI may cause to our lives, the stricter the rules.

And some things will have to go on the naughty list because they’re just not, well, not the way things are done in Europe. She explained:

At the top of the pyramid, we find those limited uses of AI that we prohibit altogether because we simply consider them unacceptable. It is AI systems that use subliminal techniques to cause physical or psychological harm to someone. For example, in the case of a toy that uses voice assistance to manipulate a child into doing something dangerous. Such uses have no place in Europe. We therefore propose to ban them. 

The same prohibition applies to AI applications that go against our fundamental values. For instance, a social scoring system that would rank people based on their social behavior. A citizen that would violate traffic rules or pay rents too late would have a poor social score. That would then influence how authorities interact with him or how banks treat his credit request.

Facing up to problems

Inevitably much attention has been paid to the suggestions around biometric tech, with the proposal focusing mostly on what the Commission positions as “remote biometric identification”, such as facial recognition in crowds. Some applications of biometric tech, such as fingerprint scanning at immigration control, are OK, argued Vestager, but others are absolutely de trop:

We treat any use of [remote biometric identification] as highly risky from a fundamental rights point of view. That's why we subject remote biometric identification to even stricter rules than other High Risk use cases. But there is one situation where that may not be enough. That's when remote biometric identification is used in real-time by law enforcement authorities in public places. There is no room for mass surveillance in our society.

So, in the proposal, the use of biometric identification in public places is prohibited in principle. Those last two words will come up a lot in the debate around the plans in the coming months and years, as they provide ‘wiggle room’ for abuse. For Vestager, there could be exceptions - “extreme cases”, she calls them - where she could tolerate such biometric tech being used, such as searching for a missing child. Clearly AI could then make a positive contribution. But the problem with wiggle room is that the wiggles just get bigger and bigger, and more and more self-interested.

What’s next?

For businesses, the proposal as it stands - and any likely iterations will be the same - would inevitably impose additional bureaucratic and compliance requirements. While GDPR is usually deemed to be a success, the most common complaint from companies is the cost of compliance. The costs that the proposed AI regulations would generate have not been detailed as yet - which is expedient for the Commission for now, but isn’t a sustainable stance as the debate heats up. Expect a lot of creative arithmetic from all sides in the arguments ahead.

For tech firms that operate globally, an EU regulatory regime that isn’t in step with other major markets is going to be another complication. That noise you can hear is the US tech sector’s lobbying machine gearing up to stalk the corridors of power across Europe in the coming months to influence what happens next.

Having said that, there are those who don’t think that Brussels has gone far enough with this proposal. Some 40 MEPs (Members of the European Parliament) have already written to the Commission demanding an outright ban on facial recognition tech - no exceptions! - as well as stronger language around anti-discrimination protection, forbidding the use of AI tech in border controls and banning automated recognition of gender, sexuality and race.

AccessNow, a non-profit founded in 2009 that defends and extends the digital rights of people around the world, calls the EC proposal “too limited” in its prohibitions:

The current language is too vague, contains too many loopholes, and omits several important red lines outlined by civil society. Many of civil society’s red lines have only been classified as High Risk, and the current obligations on High Risk systems are insufficient to protect fundamental rights.

The treatment of ‘biometric categorization systems’ is deeply concerning. The current definition applies equal treatment to banal applications of AI that group people according to hair colour based on biometric data, and to dangerous, pseudo-scientific AI systems that determine our “ethnic origin or sexual or political orientation” from biometric data. There is no option but to ban this latter group of systems.

And given the global leadership positioning that inevitably accompanies all such EC regulatory regime initiatives, attention will also be paid to how other parts of the world with AI interests react.

China will probably just ignore it.

The US, meanwhile, has urged Brussels not to over-regulate and create “heavy handed innovation-killing models” that would ultimately benefit totalitarian regimes with no such European qualms around the use of AI. I can’t see this proposal as it stands going down well in Washington, where successive administrations have taken much the same view as the CEO cited at the top of this article.

As for Brexit Britain, who knows what stance might be taken? The country has committed to aligning with GDPR standards, but has already begun talking up the idea of a “less European” data protection regime to attract inward investment, particularly from US tech firms. A similar ‘cake and eat it’ posture might well be expected around AI regulation.

My take

Early days and a long way to go. But my first gut reaction is that the category definitions are too vague to be genuinely helpful. Without crystal-clear definitions, tech firms developing AI futures will be at the mercy of regulatory box-tickers whose understanding of the underlying tech is little better than those opportunistic politicians who stand up and declare that ‘something must be done’. Innovation demands risk. Attempting to stifle all risk also stifles innovation. It was ever thus and that’s going to be the response we hear from the tech sector in the coming months.

On the other hand, digital rights campaigners, such as AccessNow, will complain that the Commission has given commercial interests around innovation too high a priority. They’ll argue that the risk-based approach is the wrong way to tackle this situation and that coming at the problem from a human rights-based perspective is the only way to succeed longer term.

Fasten your seat-belts, it’s going to be the proverbial bumpy ride ahead. (Although not if you’re in a self-driving vehicle…they’re banned!)
