
LLMs and generative AI – give us more money, regulators tell UK legislators

Chris Middleton, November 27, 2023
Summary:
The House of Lords inquiry into LLMs has heard from the regulators. We have the right powers, they told peers; now we just need the cash.



In the weeks since the much-hyped AI Safety Summit, the UK Parliament's House of Lords Communications and Digital Committee has been on a fact-finding mission about Large Language Models (LLMs), generative AI, and the impacts of these epochal technologies.

The aim has been to inform Britain’s medium-term response to AI, to decide where on the policy dial the needle should point: towards enabling innovation, or towards strict regulation. And, indeed, whether those two concepts are, in fact, mutually exclusive. Doesn’t good regulation enable entrepreneurship?

Along the way, the Committee has interviewed Meta, Microsoft, and Google DeepMind, among others, and heard expert witnesses from copyright-centric businesses such as publishing. But perhaps the most important session this month was with the regulators themselves. Do they have the expertise and tools they need? And if not, what is missing?

The Committee sought written responses from 10 regulators. Appearing in person on 21 November were three members of the Digital Regulation Cooperation Forum (DRCF): Dr Yih-Choung Teh, Group Director of Strategy and Research at Ofcom; Stephen Almond, Executive Director for Regulatory Risk at the Information Commissioner’s Office (ICO); and Hayley Fletcher, Director at the Competition and Markets Authority (CMA), within which the new Digital Markets Unit (DMU) sits.

Also appearing was Anna Boaden, Director of Policy and Human Rights at the Equality and Human Rights Commission (EHRC). In her case, the Lords sought testimony from an organization that is not normally in the frame of tech regulation, but which would need to step in if AI is found to be automating historic societal biases.

Boaden was first to address the Lords, saying:

This is a new area of work for the EHRC. And some of the regulatory risks around equalities are probably more self-evident around discrimination, bias, and a need for transparency than perhaps more nuanced human rights risks. 

So, for us, it's critical that we are able to work across our regulatory ‘family’ to consider the issues. And that we are effectively resourced with the right technical capabilities to be able to respond in an agile way.

She added:

We have a robust regulatory model that has been around for some time, but it was very much built to help people. Yet here we are dealing with rapidly evolving technologies instead. So, it's really important for us to consider what that means for a small regulator, but one with a significant, important remit around equalities, in particular, and human rights.

While the importance of getting this right might be obvious, observed the Lords, can the EHRC respond adequately to these challenges as things stand? Boaden answered:

Yes, if we are structured and resourced in the right way. Making sure that when the technology is developed, equalities and human rights principles are embedded so that we can focus on outcomes. This is a massive area, with huge human rights and inequalities risks, but we are small in size. So, there is a lot of will, but not a huge amount of resource.

Message received – and echoed by other speakers, who made plain their view that government needs to back regulators with hard cash and, where needed, additional skills, in order to meet the challenge of AI.

Ethics

Beyond the question of finance, however, the EHRC’s was an ambitious statement. That’s because it is far from clear how ethical principles can be embodied in software applications that, to a large degree, have been trained on data scraped from the pre-2021 Web – and by US corporations that seem hell-bent on tearing up local copyright conventions.

Take OpenAI. The recent Sam Altman saga demonstrated how quickly the board’s ethical and governance concerns were side-lined, while extravagant claims are now being made about its next-generation technologies, despite the absence of an evidence base. How can regulators function meaningfully in such a world of smoke, mirrors, dollar lust, and CEO deification?

While these challenges might be new or unprecedented for an organization such as the EHRC, they are not new for the ICO, explained Almond:

While generative AI and deep LLMs have had their moment this year, considerations around how to regulate them have been around in the context of data protection law for some time. But what is perhaps different in this context is more a question of the scale and complexity of the technology, and the rate of adoption, rather than there being a core challenge in terms of how it plays out in data protection law.

That speed of adoption was revealed in a recent Avanade report, in which a staggering 92% of enterprises claimed they are moving to an AI-first business environment by next year. At the same time, the survey found that the skills needed to manage the technology responsibly are in short supply: fewer than half of businesses believe they have them (whether they actually do remains to be seen).

Almond set out the facts as he sees them:

Data protection law applies where LLMs are trained, fine-tuned, or continue to process personal data at the deployment stage. Data protection law is also designed to be technology-neutral and principles-based. And we apply those principles to whatever new and emerging contexts we find ourselves in.

Even so, he explained that industry has work to do – perhaps more than the regulators themselves:

I would say there are basic expectations that people have around how LLMs process their personal data – whether that's about being transparent about the personal data being processed, or considerations around how people's information rights are respected. I think these are still important points for industry to get right in this space.

He added that the UK Information Commissioner's Office has powers to tackle the bias and discrimination issues that so concern the EHRC:

By paying attention to model developers, we may be able to tackle risks upstream around bias, that would flow downstream into the deployment of those models. Equally, lots of the challenges that surround how LLMs are used come from the contexts in which they are deployed. So really, for us as a regulator, we have to make careful prioritization choices about where we can have the greatest impact and where we will be tackling the greatest harm.

Even so, the EHRC’s Boaden noted:

I think there's a challenge in terms of the end-user, and the burden on them to prove discrimination, which is particularly challenging given all the links in the value chain and the complexities. That is a concern for us, because it puts a high onus on an individual to understand complex algorithms, and information that is not immediately apparent even to the people developing the technology. So, I think there are risks – in terms of being able to exercise rights, or challenge discrimination – that are concerning to us from a regulatory perspective.

Public services 

Ofcom’s focus, meanwhile, is solely on the services being used by the public, rather than on the technologies themselves or the nuances of their training data. Dr Teh explained:

The technology and consumer behaviours are developing at an increasing pace and, of course, that brings huge benefits. But with it, what we see is an acceleration and exacerbation of some of the harms or risks associated with it. 

AI is used across all sectors. But to pick one example, in the next three years we will be implementing the Online Safety Act. […] AI is an important part of the answer to reducing the prevalence of illegal content. But equally, of course, generative AI has the potential to create large amounts of harmful content as well.

Thus far in the proceedings, the Lords’ expert witnesses appeared to be saying that, although the AI-infused world is becoming more complex and unpredictable – and, therefore, riskier – the regulators are well placed to deal with it. Is that a realistic assessment, given the unprecedented wave of tactical, me-too adoption, and the extraordinary level of hype? Dr Teh said:

It'll always be a challenge because of the pace of change. But we [Ofcom] invest significantly in horizon-scanning to try and understand what the implications of these technology and consumer behaviour developments are, in terms of the outcomes. 

But […] Parliament has given us regimes which put the responsibility on the platforms, on the networks themselves, to undertake risk assessments, and then to put in place systems and processes which mitigate those risks. So, you get to a place where you want to see, by design, a more safe and secure environment.

I would argue that ‘want to see’ is doing some heavy lifting in that statement. Throughout the session, Dr Teh was an erudite and reassuring speaker. However, his apparent belief that safety by design will emerge, as if by magic, from the current febrile, hyper-competitive vendor environment might strike many as naïve.

Market power

With events at OpenAI fresh in mind, the CMA’s Fletcher was asked about the concentration of market power in AI. While there are countless start-ups, the impetus and initiative are clearly with a handful of Big Tech companies (and their billion-dollar investments). She said:

The facts of [the OpenAI case] are not yet clear. But I think the way that we do that [deal with the concentration of power] concretely is in the principles that we've set out. And in explaining to both businesses and consumers in the market what we expect, and what right conditions should be in place for competition to flourish.

Here, Fletcher appeared to be referring to the new Digital Markets, Competition, and Consumers (DMCC) Bill. However, as set out previously on diginomica, that legislation is primarily designed to rein in a handful of US Big Techs. So, it has no meaningful powers to tackle market-defining start-ups, such as OpenAI – Microsoft’s $10 billion backing notwithstanding. What the Bill defines as Strategic Market Status (SMS) is nothing of the sort; it is incumbent market status, measured by company size.

Meanwhile, that legislation is also being watered down. On 15 November, the UK Government announced amendments to the Bill – under pressure from those same Big Techs. The changes will ensure that the CMA et al “cannot impose an intervention on a firm unless it is proportionate to do so”.

In a written statement, the Government said:

This will mean that eligible tech firms can challenge regulatory decisions on proportionality grounds throughout this process. This approach will enable the CMA to encourage the most powerful firms in dynamic digital markets to work with regulators to ensure competition is maintained on an ongoing basis, rather than allow legal challenges to cause the regime to get bogged down in the courts. This will also act as a further incentive on the CMA to ensure that it is always acting proportionately.

Amendments will also allow SMS firms to challenge any fines imposed for anti-competitive behaviour, both in terms of substance and process.

Despite all this, Fletcher continued:

We've been absolutely clear that no firm should be subjected to any kind of anti-competitive conduct. We have to have markets where there is fair dealing, and that's crucial for businesses to have the confidence to invest. We've also set out that it's important that the widest range of possible firms have access to critical inputs – such as data, compute, or the relevant expertise. And that's the kind of mechanism that's going to ensure that a range of firms can bring the best models to market and guard against the risks.

Industry response

So, what do AI firms themselves make of all this? At a separate session on the same day, Professor Zoubin Ghahramani, Vice President of Research at Google DeepMind, told the Committee:

We feel that AI is too important a technology not to regulate. But it's also too important a technology not to regulate well. A sector-based and contextual approach often makes sense, and many of the misuses of AI, including LLMs, are already regulated. We already have laws that protect people from many of the harms. 

It's just that AI allows bad actors to perhaps do this at scale in a way they couldn't have done before. And so, we need balance. […] We need to balance regulation and governance with fostering innovation.

But what if AI companies themselves are sometimes bad actors? For example, by seeing the arts – and creators’ copyrighted content – as low-hanging fruit to grab and throw over the fence to users for free, via cloud apps? In every case, transparency will be key. But Ghahramani cautioned:

Transparency is important, but it is not a panacea.

My take

A useful session, and part of an ongoing series in the House of Lords. While Parliament’s second chamber is sometimes criticized for elevating unelected members, its Committees often do an excellent job of quizzing expert witnesses on all sides of key public debates.

But the question now must be, how significant an impact will this inquiry have on the British Government’s policymaking, given that No 10 seems keen to jump into bed with vendors? And just as important, is this administration likely to give regulators the extra resources they need?
