Main content

EU AI Act passes the European Parliament - a milestone, but only the beginning

Stuart Lauchlan, March 14, 2024
Summary:
A big day for AI regulation…


The EU Parliament has approved the world's first comprehensive AI regulations, aiming for safe and ethical AI development in the European Union.

Five years in the making, the legislation was endorsed by a vote of 523 in favor, 46 against and 49 abstentions. The EU AI Act now has to be officially endorsed by the 27 EU Member States. Companies will then have two years to comply with its requirements, with the Act likely to be fully applicable by mid-2026. Rules on general-purpose AI will start applying after one year, by mid-2025.

The Act establishes a risk level for each product, with the highest-risk products receiving the most scrutiny. It also prohibits certain uses of AI entirely, such as social scoring systems and techniques that manipulate people in a way that “impairs their autonomy, decision-making and free choices”.

Reaction

Industry reaction to the new rules has been largely positive, despite OpenAI CEO Sam Altman threatening to pull out of Europe at one point, a position from which he later withdrew.

Chandler Morse, Vice President, Public Policy at Workday, called the vote “a significant and welcome milestone in the path towards responsible AI”, saying:

AI presents huge opportunities for growth and efficiency across industries, but its usage needs to be properly regulated. Implemented efficiently, the AI Act will play a foundational role in building an emerging global consensus on principles related to AI’s development and use. As with GDPR, leaders around the world will be watching, as this landmark legislation in Europe paves the way for global changes.

In a blog post, Eric Loeb, EVP of Government Affairs at Salesforce, commented:

Salesforce believes that harnessing the power of AI in a trusted way will require governments, businesses, and civil society to work together to advance responsible, safe, risk-based, and globally interoperable AI policy frameworks.

The progress of the EU AI Act has meaningfully advanced AI policy discussions across the globe. Salesforce commends the policymakers behind the EU AI Act for working with care toward nuanced approaches, including a risk-based approach and ethical guardrails.

The development of risk-based frameworks should address the entire value chain of AI activity. AI is not a one-size-fits-all approach, and effective frameworks should protect citizens while encouraging inclusive innovation and competition.

He added:

In an era where AI-powered services are revolutionizing productivity, and governments are grappling with the idea of regulating this important technology, the EU AI Act provides an important tool to guide public and private entities as they continue to develop and innovate with AI. 

Research confirms that the European public is keen on such guidance. Citizens are curious about AI, but are concerned about data privacy, misinformation, and other risks and are counting on lawmakers to address these risks through guardrails and standards.  

We believe that by creating risk-based frameworks such as the EU AI Act, pushing for commitments to ethical and trustworthy AI, and convening multi-stakeholder groups, regulators can make a substantial positive impact. Salesforce applauds EU institutions for taking leadership in this domain.  We look forward to further contributing to this collective challenge and collaborating with EU policy makers during the implementation phase, anchoring trust as the cornerstone of AI development. 

Dr Carlsson, Head of AI Strategy at Domino Data Lab, took a rather different view:

With the passing of the EU AI act, the scariest thing about AI is now, unequivocally, AI regulation itself. Between the astronomical fines, sweeping scope, and unclear definitions, every organisation operating in the EU now runs a potentially lethal risk in their AI, ML, and analytics-driven activities.

However, using these technologies is not optional and every organization must increase their use of AI in order to survive and thrive. Consequently, it is more important than ever for companies to build their Responsible AI capabilities by implementing the processes and platforms to efficiently govern, validate, monitor and audit the entire AI lifecycle at scale. These capabilities are the best protection not only against EU regulation and future regulation in the US and elsewhere, but are also critical for minimising business risk and ensuring the fair and ethical use of AI –  something these new regulations will not be able to accomplish on their own.

And Flavia Colombo, EMEA VP of Sales at HubSpot, welcomed the idea that “we’ll now have clear guardrails in place to ensure it’s being used ethically”, but cautioned that this is not the end of the road:

We should be wary of regulation that doesn’t keep pace with constantly evolving technology. Things are changing every day - at some point, the curve will flatten out, but until then legislation should be flexible enough to complement these advances.

My take

That last point is well made. This is indeed an enormous milestone and the European Union deserves credit for getting the Act over the line. But as we’ve seen with the meteoric rise of generative AI, this is a technology and a sector that is moving incredibly quickly. This is only the beginning of a regulatory regime that will itself need to be adaptive and agile.
