UK sets out new approach to regulating AI that will replace ‘patchwork of legal regimes’

Derek du Preez, March 30, 2023
Summary:
The UK is often cited as third in the world for AI research and development. But given the speed at which AI tools are developing, regulation is a delicate balancing act between fostering innovation and protecting society.

(Image of London at sunset by Pierre Blaché from Pixabay)

The UK has laid out its plans for regulating artificial intelligence (AI), where it will take a principles-based approach in order to guide the use of the technology. From the White Paper released this week by the British Government, it’s clear that the authorities are walking a fine line between promoting innovation and protecting citizens, society and business. 

The UK is placed third in the world for AI research and development; the sector already employs 50,000 people and contributes more than £3.7 billion to the economy, and Britain is home to twice as many companies providing AI products and services as any other European country. 

The government has said that there is a short window for intervention to provide a clear, pro-innovation regulatory environment, in order to make the UK one of the top places in the world to build foundational AI companies. However, it also notes that the principles will not be put on a statutory footing initially, as it recognizes that onerous legislative requirements could hold back AI development. 

In addition, whilst the government is keen to highlight the benefits of AI technologies (which, let’s face it, could be particularly helpful in jumpstarting the UK’s flatlining productivity), the White Paper also delves into the many risks that arise from their use. For instance, AI could damage our physical and mental health, infringe on the privacy of individuals, lead to widespread disinformation, and undermine human rights. The report states: 

Public trust in AI will be undermined unless these risks, and wider concerns about the potential for bias and discrimination, are addressed. By building trust, we can accelerate the adoption of AI across the UK to maximize the economic and social benefits that the technology can deliver, while attracting investment and stimulating the creation of high-skilled AI jobs. 

In order to maintain the UK’s position as a global AI leader, we need to ensure that the public continues to see how the benefits of AI can outweigh the risks.

The framework

The government’s principles for governing AI in the UK will not sit within a new regulator. Instead, the government argues that AI is a ‘cross-cutting’ technology, and the principles will be issued and implemented by existing regulators. The hope is that this approach makes use of regulators’ domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used. 

As noted above, the principles won’t be implemented on a statutory footing, at least initially. Rather, the government will have a ‘period of implementation’ and continue to work with regulators to assess what’s working and what isn’t. Following this period, the government expects to introduce a statutory duty on regulators, requiring them to have due regard to the principles. 

The five principles in question that will guide regulators in the UK are: 

  • safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed

  • transparency and explainability: organizations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI

  • fairness: AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes

  • accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes

  • contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI

Over the next 12 months, regulators will issue practical guidance to organizations, as well as other resources such as risk assessment templates, to set out how to implement these principles in their relevant sectors. 

Organizations and individuals working with AI can share their views on the White Paper as part of a new consultation, too. This will inform how the framework is developed in the months ahead. 

The government has also announced a £2 million fund for a sandbox, a trial environment where businesses can test how regulation could be applied to AI products and services, to support innovators bringing new ideas to market without being blocked by unnecessary legislation. 

Commenting on the details of the new regulatory framework, Secretary of State for Science, Innovation and Technology, Michelle Donelan, said: 

I believe that a common-sense, outcomes-oriented approach is the best way to get right to the heart of delivering on the priorities of people across the UK. Better public services, high quality jobs and opportunities to learn the skills that will power our future – these are the priorities that will drive our goal to become a science and technology superpower by 2030.

Our approach relies on collaboration between government, regulators and business. Initially, we do not intend to introduce new legislation. By rushing to legislate too early, we would risk placing undue burdens on businesses. 

The pace of technological development also means that we need to understand new and emerging risks, engaging with experts to ensure we take action where necessary. A critical component of this activity will be engaging with the public to understand their expectations, raising awareness of the potential of AI and demonstrating that we are responding to concerns.

The framework set out in this White Paper is deliberately designed to be flexible. As the technology evolves, our regulatory approach may also need to adjust. Our principles-based approach, with central functions to monitor and drive collaboration, will enable us to adapt as needed while providing industry with the clarity needed to innovate. 

We will continue to develop our approach, building on our commitment to making the UK the best place in the world to be a business developing and using AI. 

My take

This all feels entirely sensible. A principles-based approach that’s focused on outcomes, with a trial period to see what works, is better than over- or under-regulating when we don’t yet know how these technologies will impact our daily lives. That being said, as per my report earlier this week, the UK has a limited time to make investments and lead the way in this new AI world - and it often feels like it's too busy being distracted by other headline-grabbing areas. Time will tell, but we need a whole lot more than a few principles to see us through the next few years. 
