UK Parliament calls for evidence as it launches Large Language Models investigation

Derek du Preez, July 10, 2023
The second chamber of UK Parliament - the House of Lords - seeks to gain a better understanding of how Large Language Models will develop over the next one to three years.

(© SaskiaAcht - Shutterstock)

The House of Lords, which plays a critical role in examining legislation and guiding public policy, has announced that it will be investigating Large Language Models (LLMs) and has put out a call for evidence. The second chamber of UK Parliament is concerned that the speed of development of this type of generative AI is outpacing our understanding of its potential harms and is keen to learn how government and legislators can mitigate its risks. 

The British Government recently released its AI White Paper, which set out how it plans to regulate the development of artificial intelligence. The plans, as they stand, are principles-based and won’t initially be put on a statutory footing, with the hope that guidance rather than enforced regulation will promote innovation. 

The principles focus on safety, security, transparency, fairness, accountability, governance and paths to contestability. 

However, as businesses and citizens experiment with the rapidly advancing field of generative AI - which experts have warned poses a threat to jobs and the way we think about work - the House of Lords is seeking evidence to help guide its thinking around LLMs over the next three years. 

Meanwhile, the European Union is grappling with its own approach to regulation, having recently reached a political deal to pave the way for its forthcoming EU AI Act. The Act is being watched closely, as it may influence lawmakers globally in much the same way that GDPR had an international ripple effect on data protection rules. However, the likes of Microsoft and Google have already pushed back, arguing for a watered-down approach on the grounds that the EU’s definition of ‘high risk AI’ is too broad. 

Simply put, organizations and legislators around the world are walking a fine line: they want to foster this new age of technological innovation - which could well help deliver some much needed economic buoyancy - whilst also trying to mitigate risks that they don’t yet fully understand. 

A call for evidence

This backdrop forms the basis for the House of Lords’ inquiry, which cites OpenAI’s GPT-3 and GPT-4 LLMs as central to the recent drive for LLM adoption. However, it also notes that smaller and cheaper open source models are set to proliferate. The House of Lords Communications and Digital Committee’s call for evidence says: 

Governments, businesses and individuals are all experimenting with this technology’s potential. The opportunities could be extensive. Goldman Sachs has estimated generative AI could add $7 trillion (roughly £5.5 trillion) to the global economy over 10 years. Some degree of economic disruption seems likely: the same report estimated 300 million jobs could be exposed to automation, though many roles could also be created in the process.

The speed of development and lack of understanding about these models’ capabilities has led some experts to warn of a credible and growing risk of harm. Several industry figures have been calling for urgent reviews or pausing new release plans. 

The Committee notes that LLMs can generate contradictory or fictitious answers (‘hallucinations’), which could be particularly dangerous in some industries without sufficient safeguards - not to mention, as highlighted on diginomica numerous times, their ability to rapidly accelerate the distribution of misinformation. 

The inquiry also notes that training datasets can contain biased or harmful content, and that the ‘black box’ nature of machine learning algorithms can make it difficult to understand why a model takes a particular course of action - or to predict what it might do next. 

As such, the Committee is concerned that this all presents challenges for the safe, ethical and trusted development of LLMs and ultimately undermines opportunities to capitalize on the benefits they could provide. 

Baroness Stowell of Beeston, Chair of the Committee, said:

The latest large language models present enormous and unprecedented opportunities. Early indications suggest seismic and exciting changes are ahead.

But we need to be clear-eyed about the challenges. We have to investigate the risks in detail and work out how best to address them – without stifling innovation in the process. We also need to be clear about who wields power as these models develop and become embedded in daily business and personal lives.

This thinking needs to happen fast, given the breakneck speed of progress. We mustn’t let the most scary of predictions about the potential future power of AI distract us from understanding and tackling the most pressing concerns early on. Equally we must not jump to conclusions amid the hype.

Our inquiry will therefore take a sober look at the evidence across the UK and around the world, and set out proposals to the Government and regulators to help ensure the UK can be a leading player in AI development and governance.


The Committee’s line of inquiry focuses on establishing what needs to happen over the next one to three years to ensure that the UK can respond to the risks and opportunities posed by LLMs. This will include looking at the work of government and regulators, how they are dealing with current and future technological capabilities, and reviewing the implications of approaches taken elsewhere in the world. 

Some of the questions the Committee is seeking evidence on include: 

  • How will large language models develop over the next three years?

  • What are the greatest opportunities and risks over the next three years?

  • How adequately does the AI White Paper (alongside other Government policy) deal with large language models? Is a tailored regulatory approach needed?

  • Do the UK’s regulators have sufficient expertise and resources to respond to large language models? If not, what should be done to address this?

  • What are the non-regulatory and regulatory options to address risks and capitalize on opportunities?

  • How does the UK’s approach compare with that of other jurisdictions, notably the EU, US and China?

The Committee is seeking written contributions from stakeholders and experts, with a deadline of 5th September 2023 for providing evidence. 

My take

I do not envy legislators and regulators grappling with these issues. AI is developing rapidly, and knowing the consequences of its use is challenging. The UK’s principles-based approach lends itself well to providing guidance on ‘good practice’, but without a statutory footing it is also somewhat toothless. That being said, AI use is already covered by a lot of other legislation, including data protection and human rights law - so it’s not entirely a free-for-all. But the reality is that the outcomes of these inquiries will likely be delivered when the landscape of AI has already changed. The only certainty is that forming effective regulation over the coming years is going to be very challenging. 