UK’s data protection regulator calls on government to widen scope of ‘AI fairness’ principle to include development of AI, not just use
Summary:
- The Information Commissioner’s Office (ICO) is responsible for regulating data protection and information rights in the UK. It has responded to the British Government’s recent AI regulation White Paper.
Data protection regulators are going to play a central role in how AI technologies are used and governed over the coming years, given the risks associated with the misuse of personal data. As such, it’s with interest that the UK’s Information Commissioner’s Office (ICO) has responded to the British Government’s recent ‘pro-innovation’ AI White Paper, arguing that more clarity is needed in certain areas and that the government’s ‘fairness’ principle should be expanded to include the development of AI, not just its use.
The White Paper was published at the end of March and, as diginomica noted at the time, walks a fine line between promoting innovation and protecting citizens, society and business. The government has said that there is a short timeframe for intervention to provide a clear, pro-innovation regulatory environment in order to make the UK ‘one of the top places in the world to build foundational AI companies’.
The UK is taking a principles-based approach to governing AI, where no new AI regulator will be created; instead, the principles will be issued and implemented by existing regulators. Given that the ICO is a key regulator in the UK, and that effective data use and data protection will be key to helping AI thrive, its thoughts on the government’s approach should hold some weight moving forward.
The government’s five principles for governing AI are:
- Safety, security and robustness
- Transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
Over the next 12 months, regulators will have to issue practical guidance to organizations, as well as publish other resources, such as risk assessment templates, to set out how to implement these principles in the relevant sectors.
Commenting this week on its role in the future governance of AI and the government’s plans, the ICO said:
From improving healthcare to tailoring online entertainment, the uses of AI with greatest salience for public policy are often powered by personal data. Personal data may be processed to design, train, test or deploy an AI system. All these stages of AI development and deployment where processing of personal data takes place will fall under the ICO’s purview, as the UK’s data protection regulator.
Empowering responsible innovation is one of our ICO25 priorities and we believe data protection can help organisations build or use AI with confidence while avoiding risks to people’s rights and freedoms. This includes risks that can lead to physical, material and non-material damage (see Recitals 83 and 85 of the UK GDPR). As such, the ICO as the data protection authority in the UK, plays a central role in the governance of AI.
The ICO’s feedback
The ICO notes that the government’s AI principles map closely to those found in the UK data protection framework, but that it would welcome close collaboration with the government to ensure that the AI White Paper principles are interpreted in a way that is compatible with data protection principles. The aim is to avoid creating additional burden or complexity for businesses.
The central piece of feedback that the ICO has provided is that the government’s scope needs to be widened to incorporate AI development, not just the use of AI. For example, looking at the ‘fairness’ principle, the ICO said:
We believe that the AI White Paper’s suggested ‘fairness’ principle, much like data protection’s fairness principle, should cover the stages of developing an AI system, as well as its use. We therefore suggest that the definition of the principle is amended to read “AI systems should be designed, deployed and used considering definitions of fairness which are appropriate to a system’s development, use(s), etc.
Another interesting point raised is on the principle of ‘contestability and redress’, which states that regulators will have to clarify the routes people have to dispute harmful outcomes and decisions generated by AI. The ICO notes that it is typically organizations using AI that have oversight of their own systems that are expected to clarify routes to, and implement, contestability. The ICO asks:
We would welcome clarity around this sentence, and would like to understand whether the scope for regulators such as the ICO may be better described as making people more aware of their rights in the context of AI.
Equally, the ICO highlights that there are already regulations in place - particularly under the GDPR - that govern AI decision-making. It states:
Separately, the paper notes that regulators are expected, where a decision involving the use of an AI system has a legal or similarly significant effect on an individual, to consider the suitability of requiring AI system operators to provide an appropriate justification for that decision to affected parties.
We would like to highlight that where an AI system uses personal data, if UK GDPR Article 22 is engaged, it will be a requirement for AI system operators to be able to provide a justification, not a consideration. We suggest clarifying this to ensure this does not create confusion for industry.
Finally, the ICO also raises the question of costs and funding. It says that while it supports the intention to provide greater clarity to businesses on how AI regulation applies in their sector, this will impose additional costs on cross-economy regulators (such as the ICO). It highlights that the regulator will have to produce guidance products tailored to different sector contexts, in coordination with other relevant AI regulators.
As such, it says that it welcomes further discussions with the government on the funding required to enable these proposals to succeed.
My take
It’s early days and the government’s White Paper is just a first step towards broader regulation of AI technologies. Governments around the world will face the challenge of not scaring off investment, whilst protecting citizens and businesses. But as I asked recently, is regulation the only aspect of this? Perhaps the government should be investing in AI itself to take a lead on the direction of this technology, one which is likely to change the way we work and live for decades to come.