The UK government is hoping to boost transparency around its use of algorithms by launching a new standard and providing guidance for public sector organizations using AI.
The UK hopes to establish itself as an ‘AI superpower' over the next decade, but questions have been raised about its capability and its apparent desire to diverge from EU systems and regulatory frameworks, such as GDPR.
However, the bid for transparency around the use of algorithms should be encouraged, particularly given the type of data the government holds on citizens and businesses. It's also true that trustworthiness will likely be used as a measure on the international stage to gauge leadership in the field.
In other words, there is an opportunity for the UK to lead in this area and work collaboratively with international governments and organizations to set standards. It's too early to say how effective these measures will be, and there isn't a great deal of insight yet into how they will be governed, but it bodes well that the Cabinet Office is being transparent about the guidance.
The Central Digital and Data Office (CDDO) said that the standard will be piloted by several public sector organizations and developed further based on their feedback.
It follows a review into bias in algorithmic decision-making by the Centre for Data Ethics and Innovation (CDEI), which recommended that the government place a mandatory transparency obligation on public sector organizations using algorithms to support significant decisions affecting individuals.
Lord Agnew, Minister of State at the Cabinet Office, said:
Algorithms can be harnessed by public sector organisations to help them make fairer decisions, improve the efficiency of public services and lower the cost associated with delivery. However, they must be used in decision-making processes in a way that manages risks, upholds the highest standards of transparency and accountability, and builds clear evidence of impact.
I'm proud that we have today become one of the first countries in the world to publish a cross-government standard for algorithmic transparency, delivering on commitments made in the National Data Strategy and National AI Strategy, whilst setting an example for organisations across the UK.
The standard itself can be viewed here. It essentially outlines what public sector organizations should record for certain data fields, such as ‘website', along with a description of what each field means, e.g. ‘the attribute website is the URL reference to a page with further information about the algorithmic tool and its use'.
An algorithmic transparency template has also been published, giving bodies additional guidance on how to approach AI projects that support decision-making.
The transparency template urges public sector bodies to explain:
How the tool works
How the tool is incorporated into the decision-making process
What problem the tool aims to solve, and how it solves it
The justification or rationale for using the tool
Who is accountable for deploying the tool
The scope of the tool: what it has been designed for and what it is not intended for
How much information the tool provides to the decision maker, and what that information is
The potential harm from the tool being used in a way it was not meant or built for
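As an illustration only, the fields the template asks bodies to explain could be captured in a simple structured record. The field names below are hypothetical, drawn from the list above rather than from the published standard's own schema:

```python
# Hypothetical sketch of a transparency record covering the fields the
# template asks public sector bodies to explain. Field names are
# illustrative; the published standard defines its own schema.

REQUIRED_FIELDS = [
    "how_the_tool_works",
    "role_in_decision_making",
    "problem_and_approach",
    "justification",
    "accountable_owner",
    "scope_and_limitations",
    "information_provided_to_decision_maker",
    "potential_harms_from_misuse",
]

def missing_fields(record: dict) -> list:
    """Return the template fields that are absent or empty in a draft record."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

# Example: a draft record with only two fields completed
draft = {
    "how_the_tool_works": "Model that scores benefit applications (example)",
    "accountable_owner": "Head of Digital Services (example)",
}
print(missing_fields(draft))  # lists the six fields still to be filled in
```

A check like this would let a body validate a draft disclosure before publication, though how records will actually be collected and stored is not yet clear from the guidance.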
Commenting on today's publications, Tabitha Goldstaub, Chair of the UK Government's AI Council, said:
In the AI Council's AI Roadmap, we highlighted the need for new transparency mechanisms to ensure accountability and public scrutiny of algorithmic decision-making; and encouraged the UK government to consider analysis and recommendations from the Centre for Data Ethics and Innovation, and the Committee on Standards in Public Life.
I'm thrilled to see the UK government acting swiftly on this; delivering on a commitment made in the National AI Strategy, and strengthening our position as a world leader in trustworthy AI.
Sir Patrick Vallance, UK Government Chief Scientific Adviser and National Technology Adviser, also offered his support and said:
We need democratic standards and good governance for new technologies, such as AI, that will enhance the way we work and benefit society. The launch of this new standard demonstrates this government's commitment to building public trust and understanding of the application of these technologies, including exploring increased transparency in public sector use of algorithms.
Transparency when deploying algorithms is essential, and I welcome the government's efforts to lead in this area. However, I would like more detail on how this is going to be governed and where information on deployed projects will be held. Transparency is essential, but so is accountability. The government needs to make this information accessible, so that failures can be discovered and highlighted. And it needs to explain how it will monitor progress against this standard and guidance.