UK unveils new ‘AI rulebook’ that takes different regulatory approach to the EU

Derek du Preez, July 18, 2022
Summary:
The UK is not creating a central regulatory body for AI, unlike the EU, and is instead allowing existing regulators to tailor their approach.

(Image by Gerd Altmann from Pixabay)

The British Government has consistently talked up its ambitions to become a global ‘AI superpower’ and has today unveiled a new ‘AI rulebook’ that it hopes will guide regulation, boost innovation and build public trust in the technology. The rulebook outlines an approach that differs from the EU’s: AI regulation will be less centralized, with existing regulatory bodies making decisions based on their specific contexts.

The UK’s approach is based on six core principles that regulators must apply, with the aim of giving them the flexibility to implement these in ways that suit how AI is used in their sectors.

Commenting on the proposals, Digital Minister Damian Collins said:

We want to make sure the UK has the right rules to empower businesses and protect people as AI and the use of data keeps changing the ways we live and work.

It is vital that our rules offer clarity to businesses, confidence to investors and boost public trust. Our flexible approach will help us shape the future of AI and cement our global position as a science and tech superpower.

At present, the extent to which existing laws apply to AI can be hard for organizations to navigate and understand. The government is also concerned that if AI regulations do not keep up with the pace of technological development, innovation could be stifled and it will become difficult for regulators to protect the public. 

The rulebook describes AI as a ‘general purpose technology’, much like electricity or the internet, that will touch many areas of our lives and have huge implications - with its impact varying greatly depending on context and application, much of which we probably can’t predict at this stage.

The UK’s approach to regulation, according to the proposals released today, appears designed to give regulators, and their sectors, as much flexibility as possible. Whether this provides organizations with the clarity they need remains to be seen, but the hope is that it will give them more freedom to invest.

This differs from the EU’s approach which, as part of its AI Act, aims to harmonize regulation across all member states under a central regulatory body. The rulebook notes that the EU has adopted a “relatively fixed definition” of AI in its legislative proposals. It adds:

Whilst such an approach can support efforts to harmonize rules across multiple countries, we do not believe this approach is right for the UK. We do not think that it captures the full application of AI and its regulatory implications. Our concern is that this lack of granularity could hinder innovation.

In what the UK is describing as a ‘Brexit seizing moment’ (we’ve heard that before!), the rulebook instead sets out the core characteristics of AI to inform the scope of the AI regulatory framework, but allows regulators to set out and evolve more detailed definitions of AI according to their specific domains or sectors. The rulebook adds:

This is in line with the government’s view that we should regulate the use of AI rather than the technology itself - and a detailed universally applicable definition is therefore not needed. 

Rather, by setting out these core characteristics, developers and users can have greater certainty about scope and the nature of UK regulatory concerns while still enabling flexibility - recognising that AI may take forms we cannot easily define today - while still supporting coordination and coherence.

With this in mind, the rulebook proposes establishing a ‘pro-innovation framework’ for regulating AI, which is underpinned by a set of cross-sectoral principles tailored to the specific characteristics of AI, and is:

  • Context-specific - acknowledge that AI is a dynamic, general purpose technology and that the risks arising from it depend principally on the context of its application.

  • Pro-innovation and risk-based - regulators should focus on applications of AI that result in real, identifiable, unacceptable levels of risk, rather than seeking to impose controls on uses of AI that pose low or hypothetical risk, so as to avoid stifling innovation.

  • Coherent - ensure the system is simple, clear, predictable and stable.

  • Proportionate and adaptable - regulators should consider lighter touch options, such as guidance or voluntary measures, in the first instance.

The government said: 

We think this is preferable to a single framework with a fixed, central list of risks and mitigations. Such a framework applied across all sectors would limit the ability to respond in a proportionate manner by failing to allow for different levels of risk presented by seemingly similar applications of AI in different contexts.

This could lead to unnecessary regulation and stifle innovation. A fixed list of risks also could quickly become outdated and does not offer flexibility. 

A centralized approach would also not benefit from the expertise of our experienced regulators who are best placed to identify and respond to the emerging risks through the increased use of AI technologies within their domains.

Less uniformity

The UK’s approach does come with challenges and risks though, as the rulebook acknowledges. The context-driven approach offers less uniformity than a centralized one, which could lead to confusion and less certainty for organizations. The UK hopes to counter this by complementing the context-driven approach with a set of overarching principles, ensuring that it tackles “common cross-cutting challenges in a coherent and streamlined way”.

The rulebook’s cross-sectoral principles build on the OECD Principles on AI and describe what the UK thinks well governed AI use should look like. The principles will be interpreted and implemented in practice by existing regulators, whilst the government is examining how it can offer a strong steer to adopt a “proportionate and risk-based approach” (such as government-issued guidance to regulators). 

The cross-sectoral principles for AI regulation are: 

  • Ensure that AI is used safely

  • Ensure that AI is technically secure and functions as designed

  • Make sure that AI is appropriately transparent and explainable

  • Consider fairness

  • Identify a legal person to be responsible for AI

  • Clarify routes to redress or contestability

Regulators - such as Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office, the Financial Conduct Authority, and the Medicines and Healthcare products Regulatory Agency - will be asked to interpret and implement the principles. They will be encouraged to consider ‘lighter touch’ options, which could include guidance, voluntary measures and the creation of sandboxes.

My take

My gut instinct on this is that it’s the EU’s approach - much like GDPR - that will influence the global approach to AI regulation. Whilst context is important, the risks associated with AI are wide-ranging and can have a huge impact on people’s lives. Flexibility is good, but consumers and the public need clear mechanisms to report or challenge AI use, with insight into how algorithms make decisions. It’s clear that the UK wants to take a hands-off approach that drives investment, but one has to hope that this doesn’t come at the expense of fairness, transparency and equity.
