The UK does not need a new AI regulator, but the government is failing on openness

By Derek du Preez, February 10, 2020
Summary:
A new report by the independent Committee on Standards in Public Life found that public sector organisations are not transparent about their use of AI.


The Committee on Standards in Public Life, which advises the Prime Minister on ethical standards, has released its anticipated report assessing the role of artificial intelligence (AI) in government and made a number of recommendations for government. 

After receiving evidence, the Committee has said that it does not believe there is a need for a new AI regulator, but that all current regulators must adapt to the challenges that AI poses to their specific sectors. 

However, it adds that the government is failing on openness: public sector organisations are not sufficiently transparent about their use of AI, and it is too difficult to find out where machine learning is currently being used in government. 

The report underlines a number of risks associated with AI use in public office: AI may obscure the chain of organisational accountability, undermine the attribution of responsibility for key decisions made by public officials, and inhibit public officials from providing meaningful explanations for decisions reached by AI. In addition, the prevalence of data bias risks embedding and amplifying discrimination in everyday public sector practice. 

The Committee said that the UK’s regulatory and governance framework for AI in the public sector - despite being hailed by Ministers as world class - remains a work in progress, with notable deficiencies. 

Jonathan Evans, Chair of the Committee on Standards in Public Life, said: 

Honesty, integrity, objectivity, openness, leadership, selflessness and accountability were first outlined by Lord Nolan as the standards expected of all those who act on the public’s behalf.

Artificial intelligence – and in particular, machine learning – will transform the way public sector organisations make decisions and deliver public services. Demonstrating high standards will help realise the huge potential benefits of AI in public service delivery. However, it is clear that the public need greater reassurance about the use of AI in the public sector.

Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.

Explanations for decisions made by machine learning are important for public accountability. Explainable AI is a realistic and attainable goal for the public sector - so long as public sector organisations and private companies prioritise public standards when they are designing and building AI systems.

Recommendations for government

The Committee has outlined eight recommendations to government, national bodies and regulators to “help create a strong and coherent governance and regulatory framework for AI in the public sector”. 

These include: 

  1. Ethical principles and guidance - There are currently three different sets of ethical principles intended to guide the use of AI in the public sector. It is unclear how these work together, and public bodies may be uncertain over which principles to follow. The government should identify, endorse and promote the scope of application and respective standing of each of the three sets currently in use. Guidance by the Office for AI, GDS and the Alan Turing Institute should be made easier to use and understand, and promoted extensively. 

  2. Articulating a clear legal basis for AI - All public sector organisations should publish a statement on how their use of AI complies with relevant laws and regulations before such systems are deployed in public service delivery. 

  3. Data bias and anti-discrimination law - Guidance should be developed by the Equality and Human Rights Commission on how public bodies should best comply with the Equality Act 2010. 

  4. Regulatory assurance body - A regulatory assurance body should identify gaps in the regulatory landscape and provide advice to individual regulators and government on the issues associated with AI. 

  5. Procurement rules and processes - The government should use its purchasing power to set procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards. Provisions for ethical standards should be considered early in the procurement process and explicitly written into tenders and contractual arrangements. 

  6. The Crown Commercial Service’s Digital Marketplace - CCS should introduce practical tools as part of its new AI framework that help public bodies find AI products and services that meet their ethical requirements. 

  7. Impact assessment - Government should consider how an AI impact assessment could be integrated into existing processes to evaluate the potential effects of AI on public standards. These assessments should be mandatory and should be published. 

  8. Transparency and disclosure - Government should establish guidelines for public bodies about the declaration and disclosure of their AI systems. 

Evans added: 

Our message to government is that the UK’s regulatory and governance framework for AI in the public sector remains a work in progress and deficiencies are notable. The work of the Office for AI, the Alan Turing Institute, the Centre for Data Ethics and Innovation (CDEI), and the Information Commissioner’s Office (ICO) are all commendable. But on the issues of transparency and data bias in particular, there is an urgent need for practical guidance and enforceable regulation.

Regulators must also prepare for the changes AI will bring to public sector practice. We conclude that the UK does not need a specific AI regulator, but all regulators must adapt to the challenges that AI poses to their specific sectors. Government should establish the CDEI as a centre for regulatory assurance to assist regulators in this area.

Upholding public standards will also require action from public bodies using AI to deliver frontline services. All public bodies must comply with the law surrounding data-driven technology and implement clear, risk-based governance for their use of AI.