European Union edges closer to passing AI Act - but time is of the essence

Derek du Preez, May 2, 2023
It’s hard to distinguish hype from reality in the latest wave of AI news - but it’s becoming clearer that protections and controls are needed sooner rather than later. Can the EU’s AI Act get us there soon enough?


European lawmakers have reached a ‘political deal’ on the EU’s AI Act, paving the way for a scheduled vote later this month. It is hoped that the finer details can be ironed out over the coming months so that the law can be officially passed later this year, making the region one of the first globally (outside of China) to deliver some cohesive rules around the use and development of artificial intelligence. 

It is thought that the AI Act may influence lawmakers globally, potentially creating a similar ripple effect to what we have seen with the EU’s GDPR data protection legislation. However, companies such as Microsoft and Google have already pushed for a watered-down approach, arguing that the Act doesn’t need specific provisions for general purpose AI (which would likely cover LLMs, such as ChatGPT). 

But as developments in AI continue at pace - with companies like IBM claiming that they plan to replace 7,800 jobs with AI, scientists developing AI technology that they claim can read people’s minds, and the ‘godfather of AI’ warning of dangers ahead - it certainly feels like time is of the essence when it comes to regulation. 

And although MEPs were squabbling over some last minute details, an official told reporters late last week:

We have a deal now in which all groups will have to support the compromise without the possibility of tabling alternative amendments.

As the National Law Review outlines, the EU’s AI Act will apply to both organizations providing or using AI systems in the EU, and providers or users of AI systems located in a third country (including the UK and the US), if the output produced by those AI systems is used in the EU. 

The Act follows a risk-based approach, assigning applications of AI to three risk categories. The ‘unacceptable risk’ category includes applications such as government-run social scoring, cognitive behavioral manipulation of persons or specific vulnerable groups, and real-time and remote biometric identification systems. 

High-risk AI systems aren’t banned, but will be carefully assessed before being put on the market and throughout their lifecycle. These include: 

  • Critical infrastructure that could put the life and health of citizens at risk 

  • Educational or vocational training, which may determine access to education and the professional course of someone’s life (e.g. the scoring of exams)

  • CV sorting software for recruitment procedures

  • Credit scoring that could deny citizens the opportunity to obtain a loan

  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence)

  • Verification of authenticity of travel documents

  • Surveillance systems

Rapid pace of change

As diginomica has noted previously, the introduction of regulation for AI is necessary, but it is fraught with challenges: it needs to remain flexible, yet also risks stifling innovation. 

However, the speed at which some of these systems are developing is outpacing the speed at which the general population understands how they work. We at diginomica have also argued that an extensive education process needs to be undertaken by governments and the private sector, in addition to regulation, in order to ensure people, democracy and economies are protected. 

A quote in the news this morning from Geoffrey Hinton - the ‘godfather of AI’ cited above - stands out in particular. He told reporters: 

We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.

And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person.

This rapid acceleration of shared ‘knowledge’ across chatbots, or AI systems more generally, is interesting because everyone thought that AI tooling would impact skilled blue-collar jobs first - those working in factories, on highways, on production lines. And whilst there is an impact being felt there, it’s becoming clearer that it’s knowledge workers who are going to have to adapt rapidly. 

Not only that, but we are entering a period of online information where it will become incredibly hard to discern what is real from what is false. Text to video generation is becoming more sophisticated, there are AI tools that can now replicate voices, and systems can potentially understand someone’s entire online presence in a very short amount of time and feed them information to manipulate them. This power will be used for nefarious purposes and we don’t yet really have an understanding, let alone a plan, for how to deal with that. 

The objectives of the EU AI Act currently include: 

  • ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;

  • ensure legal certainty to facilitate investment and innovation in AI;

  • enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;

  • facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

However, although the EU AI Act may get passed this year, it will likely still be a few years until it is enforced - much in the same way GDPR was finalized, but businesses then had a period to get their houses in order before the regulations became enforceable. And given the pace of AI development, who knows what will be a priority in a few years’ time? 

That being said, the EU’s tech regulation chief Margrethe Vestager is reported to have been encouraging businesses to comply now with what is being proposed. She said: 

There was no reason to hesitate and to wait for the legislation to be passed to accelerate the necessary discussions to provide the changes in all the systems where AI will have an enormous influence.

That’s perhaps some very wishful thinking, as companies race to develop AI systems that will likely deliver huge financial returns…

My take

The pace of technology innovation has always outpaced regulation. But this time it feels like the gap is widening and we don’t have a solution. I maintain that education will be a big part of that, but unfortunately - and I hate to be all ‘we just don’t know’ about it - we just don’t know what this will look like in five years’ time. 
