GCHQ outlines how it’s going to ensure its (secretive) use of AI is ethical

By Derek du Preez, February 26, 2021
Summary:
GCHQ has released a report that outlines the risks and opportunities of using AI in its UK security and intelligence operations.

(Image sourced via GCHQ website)

This week the UK's security and intelligence agency GCHQ shared an in-depth document outlining how it is thinking about AI ethics - specifically, how it can be held accountable for its use of artificial intelligence when so much of the organisation's work is carried out in secret.

The risks and challenges associated with using AI are well documented, particularly with regard to eliminating bias as far as possible within the datasets used and holding AI systems accountable for the recommendations or decisions they make. The benefits of rolling out AI are tempting as the technology becomes more sophisticated, but we've seen time and time again how organisations have fallen foul of 'trusting the black box'.

The British government is working with a wide variety of institutions to create frameworks for AI ethics, and there is work being done at an international level too. And for an institution like GCHQ, which has to react to international threats and increasingly sophisticated harms online, there is a real need to make effective use of AI tools. But the risks are even greater when you consider the role GCHQ plays in protecting personal freedoms and liberty.

With this in mind, and in an attempt at transparency, GCHQ has outlined how it is approaching AI ethics and detailed the risks it faces in getting this right. Outlining the opportunities and the challenges, GCHQ director Jeremy Fleming said:

At GCHQ, we believe that AI capabilities will be at the heart of our future ability to protect the UK. They will enable analysts to manage the ever-increasing volume and complexity of data, improving the quality and speed of their decision-making. Keeping the UK's citizens safe and prosperous in a digital age will increasingly depend on the success of these systems.

Philosophers and data scientists have been grappling with the implications of AI for ethics: how do you ensure that fairness and accountability is embedded in these new systems, for example? How do you prevent AI systems replicating existing power imbalances and discrimination in society? Debate on these and many other questions is still in the early phases. GCHQ is sponsoring work through the Alan Turing Institute and other civil society institutions to help provide practical answers.

We won't pretend that there are not challenges ahead of us. In using AI we will strive to minimise and where possible eliminate biases, whether around gender, race, class or religion. We know that individuals pioneering this technology are shaped by their own personal experiences and backgrounds. Acknowledging this is only the first step - we must go further and draw on a diverse mix of minds to develop, apply and govern our use of AI. Left unmanaged, our use of AI incorporates and reflects the beliefs and assumptions of its creators - AI systems are no better or no worse than the human beings that create them.

Coming up with a plan

GCHQ notes that the UK Centre for Data Ethics and Innovation is informing its approach to responsible AI use, and it is also partnering with the influential Alan Turing Institute. The challenge, however, is for GCHQ to manage the risks of AI whilst operating in a unique environment - one in which it mostly works in secret and must allow its teams to do so. That being said, GCHQ still wants to be as "open as possible about our approach to AI ethics, maintaining trust, and learning from others' experiences and insights".

This week's report notes that whilst AI is likely to be used comprehensively, overarching decisions at GCHQ will still be made by people. It states:

GCHQ's specialists share the same concerns voiced by many external experts around using AI to make predictions about individuals, their behaviour and motivations. AI software can help triage and prioritise across our data sources. It can suggest previously unseen patterns and learn to identify valuable behavioural indicators. But it is not yet sophisticated enough to be trusted to make independent decisions based on those outputs.

In these cases, we expect our approach to AI to resemble the augmented intelligence model being advocated by a range of research partners, including RUSI. This involves tasking AI software to collate information from relevant sources and flag significant conclusions for review by a human analyst, but does not automate any action as a result - it supports and empowers the human decision-making process rather than determining it.
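
To make this augmented-intelligence pattern concrete, here is a minimal illustrative sketch in Python. It is not based on any actual GCHQ system; all names, fields and thresholds are hypothetical, and it exists only to show the shape of the workflow the report describes: software collates and flags, a human decides.

```python
from dataclasses import dataclass

@dataclass
class Item:
    source: str     # where the data came from
    summary: str    # model-generated description of the pattern found
    score: float    # model-assigned significance, 0.0 to 1.0 (hypothetical)

def triage(items: list[Item], threshold: float = 0.8) -> list[Item]:
    """Collate items across sources and flag the significant ones.

    The software only ranks and flags. Nothing here triggers an action
    automatically: flagged items are queued for a human analyst, who
    makes (and owns) the actual decision.
    """
    flagged = [item for item in items if item.score >= threshold]
    # Most significant first, so the analyst reviews the strongest
    # conclusions at the top of the queue.
    return sorted(flagged, key=lambda item: item.score, reverse=True)

if __name__ == "__main__":
    queue = triage([
        Item("source-a", "previously unseen pattern", 0.91),
        Item("source-b", "weak behavioural indicator", 0.42),
    ])
    for item in queue:
        # The human decision-making step: review, then decide.
        print(f"FLAGGED for analyst review: [{item.source}] "
              f"{item.summary} (score {item.score:.2f})")
```

The key design choice, as RUSI's augmented intelligence model suggests, is that the system's output is a review queue rather than an action: the model informs the decision but never makes it.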

In order for GCHQ to account for its decisions, the report notes that its testing procedures must demonstrate the validity and reliability of the AI methods employed. The agency plans to use techniques from the emerging field of "explainable AI" to improve its assurance, with the aim that systems are designed so that even non-technical users can interpret key technical information, such as margins of error and levels of uncertainty - a rough illustration of which follows the quote below. The report adds:

Where the systems are drawing conclusions about individuals, this analysis should form part of a wider evidence base from which our analysts can make an overall judgement.
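
As a sketch of what presenting such information to a non-technical user might look like, the snippet below formats a model output together with its margin of error in plain language. The function, figures and wording are hypothetical, not drawn from the report.

```python
def explain_output(label: str, probability: float, margin: float) -> str:
    """Render a model conclusion with its uncertainty spelled out, so a
    non-technical reader sees the margin of error, not a bare score."""
    low = max(0.0, probability - margin)
    high = min(1.0, probability + margin)
    return (f"The system rates '{label}' as {probability:.0%} likely, "
            f"with a margin of error of ±{margin:.0%} "
            f"(plausible range {low:.0%}-{high:.0%}). "
            f"Treat this as one strand of evidence, not a judgement.")

print(explain_output("behavioural pattern match", 0.72, 0.10))
```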

A framework

With all this in mind, GCHQ is developing a governance system to manage AI and data ethics, which it claims will ensure the necessary safeguards are in place to implement AI at GCHQ. The governance system will consist of the following:

  • An AI Ethical Code of Practice - this will draw on best practice around data ethics, and will build systematically on the practical experience GCHQ is acquiring. It comprises an internal policy, setting out the standards GCHQ software developers are expected to meet, and supporting guidance explaining the techniques and processes which can be employed to achieve this. GCHQ states that it has been managing complex analytic systems for over a century and intends to "use the full weight of this accumulated knowledge and experience" to manage its approach to AI ethics.

  • World-class AI training and education - GCHQ states that all its people have a personal responsibility to understand and comply with the legal and ethical obligations placed on the organisation. To support them, it will deliver training and education in the issues and challenges that AI raises for all levels of the organisation. It states that it is strengthening the way it cultivates and grows its data science professionals, and is investing in the specialist training of those engaged in the development, use and security of AI systems.

  • The right "mix of minds" - GCHQ will monitor the way it grows its teams and says that it is "committed to reflecting the nation" it serves. This means drawing on the "full spectrum of talent" from across the UK. GCHQ states that it fosters a culture of challenge, proactively seeking alternative perspectives and ideas. It recognises that it still has much to do to achieve the right mix of minds, but adds that the UK's diverse talent pool is one of its key assets. 

  • Reinforced AI governance - GCHQ is also reviewing its internal governance processes to ensure they apply for the full lifecycle of an AI system. This includes an escalation mechanism for the review of novel or more challenging AI applications.