In another example of Europe setting the pace on tech regulation, the European Commission last week floated consumer protection proposals addressing harm caused to people by Artificial Intelligence (AI) products.
Existing rules around product liability date back more than four decades. The AI Liability Directive aims to provide greater legal clarity for manufacturers and consumers in an AI age. The Commission states:
As technological advances continue to roll-out, so must the guarantees put in place to ensure that EU consumers benefit from the highest standards of protection, even in the digital age. The Commission is committed to ensuring that pioneering technological innovation is never at the expense of safeguards for citizens. A harmonised legal framework is required at EU level to avoid the risk of legal fragmentation when filling the voids brought by these unprecedented technological advances.
Current national liability rules are not equipped to handle claims for damage caused by AI-enabled products and services. In fault-based liability claims, the victim has to identify whom to sue, and explain in detail the fault, the damage, and the causal link between the two. This is not always easy to do, particularly when AI is involved. Systems can oftentimes be complex, opaque and autonomous, making it excessively difficult, if not impossible, for the victim to meet this burden of proof.
Commissioner for Justice Didier Reynders added:
While considering the huge potential of new technologies, we must always ensure the safety of consumers. Proper standards of protection for EU citizens are the basis for consumer trust and therefore successful innovation. New technologies like drones or delivery services operated by AI can only work when consumers feel safe and protected.
The new liability package complements the EU's proposed AI Act - which sets out to guarantee the safety and fundamental rights of individuals and organizations - by enabling fault-based civil liability claims for damages. The Commission explains:
The AI Act and the AI Liability Directive are two sides of the same coin: they apply at different moments and reinforce each other. Safety-oriented rules aim primarily to reduce risks and prevent damages, but those risks will never be eliminated entirely. Liability provisions are needed to ensure that, in the event that a risk materialises in damage, compensation is effective and realistic. While the AI Act aims at preventing damage, the AI Liability Directive lays down a safety-net for compensation in the event of damage.
The AI Liability Directive uses the same definitions as the AI Act, keeps the distinction between high-risk/non-high risk AI, recognises the documentation and transparency requirements of the AI Act by making them operational for liability through the right to disclosure of information, and incentivises providers/users of AI-systems to comply with their obligations under the AI Act. The Directive will apply to damage caused by AI systems, irrespective of whether they are high-risk or not according to the AI Act.
There are two proposals in play around liability in the AI space. The first upgrades the rules on manufacturers' liability for defective products. The second would harmonize national liability rules for AI across EU member states, with the underlying principle of making it easier for people to get compensation if/when things go wrong.
The Commission hopes to create a level playing field between EU and non-EU manufacturers, so that if a consumer is harmed by a product from outside the EU, he or she will be able to seek compensation from the importer or the manufacturer's EU representative.
It also wants to adjust the balance of power between manufacturers and consumers by requiring the former to disclose evidence, introducing more flexibility into the time period for consumers to pursue claims, and alleviating the burden of proof for victims. A ‘presumption of causality’ will relieve victims from having to explain in detail how the damage was caused by a certain fault or omission.
Vice-President for Values and Transparency Věra Jourová commented:
We want AI technologies to thrive in the EU. For this to happen, people need to trust digital innovations. With today's proposal on AI civil liability we give customers tools for remedies in case of damage caused by AI so that they have the same level of protection as with traditional technologies and we ensure legal certainty for our internal market. We make our legal framework fit for the realities of the digital transformation.

The Commission's proposal will now need to be adopted by the European Parliament and the Council.
It’s hard to argue with any of this. The existing regulations are no longer fit for purpose and need to be upgraded for a digital age. Once again, Europe is setting a regulatory standard for the US tech sector to follow - or lobby against. Sadly, I suspect we know all too well which of those is more likely.