
Is the UK Government planning a voluntary national AI code of conduct in 2024?

Chris Middleton, January 3, 2024
Summary:
Glimpses of some extraordinary ideas appeared in the Communications and Digital Committee’s final session on LLMs late last year. What can we expect in 2024?


Pitting a booming 21st Century tech sector, Artificial Intelligence, against the UK Parliament's House of Lords, an organization whose roots lie in the 11th Century, seems like an unlikely contest. But late last year, in the weeks after the Bletchley Park AI Safety Summit, the Lords’ Communications and Digital (Select) Committee did sterling work examining Large Language Models (LLMs).

Throughout November, Lords, Baronesses, and Bishops threw incisive, informed questions at vendors, regulators, academics, lawyers, and the representatives of sectors angry that some LLMs have been trained on copyrighted data. The LLM Inquiry’s aim was to form an advisory position and/or policy response to recent developments in the sector, so we await the Committee’s final deliberations and report this year. 

But what view is forming in the UK Government itself? What glimpses did we get of its plans?

Some clues could be found in the responses of the final two witnesses to the Inquiry. Dame Angela McLean is the UK Government’s Chief Scientific Adviser, and Viscount Camrose is, as Parliamentary Under Secretary of State in the Department for Science, Innovation, and Technology (DSIT), effectively the Minister for AI.

Dame Angela was asked a key question upfront: have the government’s policy objectives shifted away from enabling innovation – the theme of many of UK Prime Minister Rishi Sunak’s statements on AI – and towards merely identifying and managing its risks? She replied that perhaps this impression has come from the Safety Summit itself, plus the foundation of the AI Safety Institute. Then she said:

I do not think that shows that we have stopped paying attention to other things. Of course, we need to keep three things in balance: short-term safety worries – by which I mean safety issues like bias and misinformation and disinformation, all of which we live with already; longer-term safety issues, including the sorts of things that were discussed at the Safety Summit and have been discussed rather widely over the past 12 months; and the massive opportunities. There is plenty of focus, and plenty of us in government are very keen to keep those three things in balance.

She added:

This country is renowned for having fantastic regulators. So, let us not fall into the trap of thinking that we can have either regulation or innovation. Good regulation drives innovation. Clear, rapid, well-informed, proportionate regulation, particularly if it is stable, creates a terrific environment in which to innovate.

Well said. In his own evidence later in the session, Viscount Camrose said:

There may be a false dichotomy between regulation, safety, and innovation. And if I speculate, that goes back to the earlier days of the internet. Our view is first and foremost that LLMs – and, indeed, AI in general – are an opportunity for innovation that can generate prosperity, health, and wealth across society; that can solve a huge range of societal problems and bring enormous benefits. 

But, in order for AI to be adopted, it must be safe and not only trusted but worthy of people’s trust. So, I do not think there is an ‘either/or’ between safety and innovation. Safety is a necessary precondition to innovation.

He went on:

I regret that the language often veers either too much to the side of innovation or too much to the side of safety, and I wish we could all collectively find a form of language that allowed us to speak of the importance of both sides, because both are extremely important.

Big picture 

In addition to this, there are some big-picture risks that are rarely mentioned in the arguments about bias, data privacy, data protection, breach of copyright, and so on. Dame Angela explained:

My least favourite scenario is AI disappointing us – that is, it being just another damp squib, and it turns out that we never managed to sort out the hallucination problem, so we can never use it and we have to go back to Excel spreadsheets and Google. That is my least favourite scenario.

[But the most challenging one would be] an AI Wild West, where things that are fairly safe are all driven off open models, with the idea being that somehow the open LLMs win the competition and become the best models. 

So, instead of there being four or five companies that we can squash when they are bad, there is an anthill of LLM providers. And then we have to think about how we will have the invisible hand of the market regulate this mess.

An extraordinary statement from every perspective.

However, the Inquiry itself has not been about fixing the world’s AI problems, so much as addressing the UK’s particular challenges. So, in terms of Britain’s ability to be a significant player on the world stage – a respectable bronze medallist behind the US and China – what are the most urgent priorities? Dame Angela argued: 

First, I see more compute [computing power and capacity for LLMs]; I think there is more compute on the way.

Viscount Camrose confirmed her belief, and hinted at national plans that may be exciting or controversial to anyone concerned about surveillance states, or citizens being turned into commercial data points:

Given the extensive investment that we are making into compute – on top of the £900 million already committed, there was a further half a billion in the Autumn Statement that will be invested into exascale [supercomputing] and AI capability in our Bristol, Cambridge and Edinburgh centres – sovereign compute puts us in a good position to elect, at some future point, to acquire the capability of an LLM for sovereign purposes. That is one of many examples of retaining optionality and being agile in our response to the opportunities of AI.

A national LLM, in effect, populated by citizen data? Another extraordinary statement. He continued:

The real questions are: what are the capabilities of emerging models for AI, which sectors can they help, and what use cases might we be able to support with them that are not delivered on favourable commercial terms elsewhere? 

AI models supported by the newer generation of chips will start to emerge over the next year. At that point, I expect us to be presented with an interesting range of possibilities for a sovereign LLM model. That is not to say that we would necessarily go that route, but there will be compelling opportunities.

What might those be? According to Camrose: 

There are a range of possibilities in different departments. The ones that are frequently discussed include, obviously, the Department for Education in the reduction of the administrative burden on schoolteachers. Then there is Health, of course; because of the extensive data available to it, there are opportunities there. We are now seeing them, department by department, identifying approaches that they might choose to pursue for their own AI models.

The implication is clear: UK Government departments have been asked to look for applications for a sovereign AI capability – nails for the national hammer, perhaps. Camrose added:

I do not think we will see cross-cutting AI models that might affect multiple departments until sometime next year as the more frontier-capable models start to emerge.

International 

Meanwhile, Dame Angela continued her assessment of the things that are needed to reinforce the UK’s international position on AI:

We are already in a very good position in having well-curated data pertinent to the questions in hand. That is an incredibly important asset and one that we need to make sure we make good use of. And what else? Good and wise regulation… That reminds me: we have a whole thing called the Science and Technology Framework. It was written by my predecessor, Patrick Vallance, so I can laud it. It is all about exactly this set of questions. If you have a big science and technology advance, what do you do so that you can use it to drive prosperity, security, terrific jobs – all the things you want for your country? [It is] about what we used to call the ‘Ten Big Things’. It is a list of all the things you need to do in order to drive commercial and public sector advances when you have an exciting new technology at your fingertips.

That 19-page document certainly contains ten (very) broad, vague, and aspirational headings, to which Dame Angela appeared to be referring. These are:

• Identifying critical technologies
• Signalling UK strengths and ambitions
• Investment in research and development
• Talent and skills
• Financing innovative science and technology companies
• Procurement
• International opportunities
• Access to physical and digital infrastructure
• Regulation and standards
• Innovative public sector

So, which of these is most important? She said:

It is always skills – at least, skills is the first one. The whole point of the framework is that it says, ‘It is never just one thing’. We have sometimes struggled to generate wealth from the brilliant inventions we make because of systems failure. There are lots of things that we do not get quite right. So, it is about making sure that our kids are properly skilled, that we re-skill, and that we are a tremendously attractive place for the most able people in the world to come and work.

Dame Angela then explained that LLMs and AI throw down a gauntlet to the Civil Service to embrace change more readily, and to nurture some of those skills within its own ranks:

A real strength of our Civil Service is this idea of the generalist. We have people who run the Civil Service and could write you a single page on almost anything. But, at the moment, very few members of our senior Civil Service, most of whom work as generalists, come from what I would call a deep science and technology background.

We need to create a pathway so that some of those people can come in, learn how to be a civil servant, which is a skill of its own, and join the senior Civil Service as generalists, but generalists with deep experience and expertise as scientists. I want people who can just look at an experiment and say, ‘That will not do’.

But would a national AI be such an experiment? And if so, who would be qualified to say no to it, given its potential for abuse and misuse? 

On the subject of how the machinery of government will deal with the challenge of ensuring safe, responsible AI, Camrose clarified the roles of DSIT and the new AI Safety Institute:

The central AI risk function sits within DSIT. Its role is to scan the horizon for emerging risks, focused on where we are with today’s technology, and to liaise with the existing regulators that form the backbone of our current regulatory model for AI, advising them on how to enhance their capability.

Supplementary to that, at the central level is a range of further organizations that we have now set up. These include the Centre for Data Ethics and Innovation, the monitoring and evaluation function, and the AI Standards Hub, which develops and owns the expertise in AI standards for the evaluation of existing models. The AISI [the AI Safety Institute] provides more of a leadership role, in addition to that.

Meanwhile, the Institute will carry out functions such as evaluating frontier models and understanding the potential societal risks and harms they pose.

Of course, some sectors believe that their industries are being harmed by what they see as AI vendors’ cavalier attitude to laws and conventions on copyright. Dame Angela seemed reluctant to comment on this, but Camrose was more forthcoming. He said:

It is highly contested on both sides, between the rights holders, who feel that they are being infringed, and the innovators – the AI labs – which feel that any attempt to stop them creates far too significant a drag on their ability to innovate.

We need to solve two particular problems here. The first is finding the appropriate balance – the landing zone – between two hotly competing sides in this debate. The second is the need to make sure whatever solution we come up with is internationally operable. We cannot have a set of strict rules over here that then allow people to go and train their models elsewhere. I do not think that would help anyone.

There are a number of these cases in the courts globally – I point particularly to Getty vs Stability AI – and we need to see how that will end up. Overall, the best outcome – and we are pushing hard for this – is a voluntary agreement between both sides that recognizes the needs of both. To that end, we have established a working group, which we continue to operate with. It is led by the Intellectual Property Office [IPO], which continues to engage with the innovators and the rights holders to try to find a successful outcome.

He added:

Our current focus is on developing the set of principles around which we may or may not be able to operate, and then turning that into a code of conduct. Ideally, this would operate on a voluntary basis, because legislation on this basis runs the risk of sending people to operate overseas in jurisdictions over which we have no control.

At this point, the Committee asked: is there a timescale for when this code might be announced? Camrose answered:

We had hoped that we would have it by the end of [2023]. [But] the participants on both sides have made strong representations to us, saying, ‘Please do not go ahead until we’ve properly argued this out. Speed is a secondary consideration to getting this right.’ 

That said, we will not get into an endless talking shop about this, but we need to talk it out. Should it, sadly, emerge that there is no landing zone that all parties will agree to, we will have to look at other means, which may include legislation. But I very much hope we will not have to go there.

My take

So, there we have it. A UK national AI is in the offing, perhaps, and the British Government hopes a voluntary code of conduct will somehow bring AI vendors to heel on copyright – an ambition of staggering naivety, and one that, fortunately, the Communications and Digital Committee does not share. We look forward to its report this year.
