
AI and LLMs - the UK has a Goldilocks problem, says House of Lords report

Chris Middleton – February 6, 2024
Summary:
The UK’s focus on risk means it may create an environment too cold for AI, says Parliament’s Communications and Digital Committee. The problem is that its report serves up more cold porridge of its own.

(Image by Prawny from Pixabay)

The UK suffers from a ‘Goldilocks problem’ when it comes to its approach to AI innovation and risk: the need to create an environment that is neither too hot nor too cold, so that it can support safe, responsible growth and capitalize on the technology’s transformative potential.

That’s according to a new report from the House of Lords’ Communications and Digital (Select) Committee. Publication comes on the back of its lengthy Inquiry into Large Language Models last year (see diginomica, passim), which heard from 41 expert witnesses and reviewed over 900 pages of written evidence.

At present, the government’s “narrow focus” on AI safety – in the context of the Bletchley Park Summit last autumn – risks cooling the market, warns the report, which inadvertently reveals the same Goldilocks problem in its 93 pages. 

It says:

We are optimistic about this new technology, which could bring huge economic rewards and drive ground‐breaking scientific advances.

Later, it adds:

Large Language Models have significant potential to benefit the economy and society if they are developed and deployed responsibly.

But that is largely it for optimism. Then the report explains:

Capturing the benefits will require addressing risks. Many are formidable, including credible threats to public safety, societal values, open market competition, and UK economic competitiveness. 

Far‐sighted, nuanced and speedy action is therefore needed to catalyse innovation responsibly and mitigate risks proportionately. We found room for improvement in the government’s priorities, policy coherence, and pace of delivery here.

Optimism, then, but with little detail on the reasons for it: a significant flaw in this otherwise comprehensive report. The Lords do, however, show detailed awareness of the dangers, which are set out over numerous pages. Some fall under ‘catastrophic’: societal distortions; realistic deepfakes – including of children; terrorism; disinformation; cyber warfare; election interference, and more.

So, what does the Committee mean by “room for improvement” in the government’s strategy to date? The report explains:

The government has recently pivoted too far towards a narrow focus on high‐stakes AI safety. On its own this will not deliver the broader capabilities and commercial heft needed to shape international norms.

The UK cannot hope to keep pace with international competitors without a greater focus on supporting commercial opportunities and academic excellence. A rebalance is therefore needed, involving a more positive vision for the opportunities and a more deliberate focus on near‐term risks.

In other words, the government should be more upbeat – as it was prior to the Safety Summit dominating the agenda – and do more to foster innovation. At the same time, it should also address the technology’s existing, real-world challenges, rather than its theoretical futures. 

In many ways this is good advice. We should be mindful of the future, but that future will surely proceed from whatever good decisions we take today. Unfortunately, the Summit swept aside many critical issues, such as bias, copyright, exclusion, and lack of development team diversity, in sessions that were closed and centered on frontier-model debate.

The Committee adds:

Concentrated market power and regulatory capture by vested interests also require urgent attention. The risk is real and growing. It is imperative for the government and regulators to guard against these outcomes by prioritizing open competition and transparency.

This may be a problem in itself. Over the past 14 years, our experience of this administration suggests a lack of transparency and openness, plus a willingness to glad-hand big money – remember Prime Minister Rishi Sunak’s desperation to associate himself with Elon Musk at the Safety Summit. So, we should perhaps abandon any notion of a change of heart.

Indeed, the report notes:

Current trends suggest growing private sector influence. Witnesses emphasized the limited extent of public sector expertise and the necessity of closer industry links, including staff exchanges.

Andreessen Horowitz, a venture capital firm, cautioned that large AI businesses must ‘not [be] allowed to establish a government‐protected cartel that is insulated from market competition due to speculative claims of AI risk’.

The perception of conflicts of interest risks undermining confidence in the integrity of government work on AI. Addressing this will become increasingly important as the government brings more private-sector expertise into policymaking.

Some conflicts of interest are inevitable, and we commend private sector leaders engaging in public service, which often involves incurring financial loss. But their appointment to powerful government positions must be done in ways that uphold public confidence.

Urgent remedies

The Committee also accuses the government of sitting on its hands on questions of copyright and AI companies scraping unlicensed data, at one point suggesting that No 10 is hoping these questions will be resolved in court, rather than in office. (We will examine these issues in a separate analysis of the Committee’s recommendations this week.)

The report adds:

There has been further concern that the AI safety debate is being dominated by views narrowly focused on catastrophic risk, often coming from those who developed such models in the first place.

Overall, therefore, the report presents a confusing picture: a Committee that goes to great lengths to set out the spectrum of risks from “minor” to “catastrophic”, yet urges optimism, hope, and seizing the day. In this way, it surely mirrors what it sees as the government’s approach to the Goldilocks problem. It talks ‘hot’, but stresses ‘cold’.

However, the key issues here are the Committee’s calls to action, given members’ focus on openness, transparency, and manageable near-term risk. Among its many recommendations and observations, the Committee presses for a number of urgent remedies.

First, on the subject of the joined-up approach that is often missing in government:

  •  “The government should set out a more positive vision for LLMs and rebalance towards the ambitions set out in the National AI Strategy and AI White Paper. It otherwise risks falling behind international competitors and becoming strategically dependent on a small number of overseas tech firms.”

  • “The government must recalibrate its political rhetoric and attention, provide more prominent progress updates on the ten‐year National AI Strategy, and prioritise funding decisions to support responsible innovation and socially beneficial deployment.” 

In other words, No 10 should remember the government’s own deep work to date, and not keep starting from scratch with faddy messaging and PR – as if AI policy is something that passes like a bus or (in recent years) an outgoing Prime Minister. The UK has done all the groundwork, so why not acknowledge that and build on those foundations?

The report continues:

  • “The government should make market competition an explicit policy objective. This does not mean backing open models at the expense of closed, or vice versa. But it does mean ensuring regulatory interventions do not stifle low‐risk open access model providers.” 

  • “The government should work with the Competition and Markets Authority to keep the state of competition in foundation models under close review.” [Notably, the report does not mention the Digital Markets Unit in this context. (Is AI a digital market? Discuss.)]

  • “We recommend the government should implement greater transparency measures for high‐profile roles in AI. This should include further high‐level information about the types of mitigations being arranged, and a public statement within six months of appointment to confirm these mitigations have been completed.”

Accountability, no less! 

On questions of skill and greater preparedness for LLMs (and for AI in general), the report says:

  • “We reiterate the findings from our reports on the creative industries and digital exclusion: those most exposed to disruption from AI must be better supported to transition. The Department for Education and DSIT should work with industry to expand programmes to upskill and re‐skill workers, and improve public awareness of the opportunities and implications of AI for employment.”

  • “A diverse set of skills and people is key to striking the right balance on AI. We advocate expanded systems of secondments from industry, academia, and civil society to support the work of officials—with appropriate guardrails. We also urge the government to appoint a balanced cadre of advisers to the AI Safety Institute with expertise beyond security, including ethicists and social scientists.” [Hear, hear.]

  • “The government should take better advantage of the UK’s start‐up potential. It should work with industry to expand spin‐out accelerator schemes. This could focus on areas of public benefit in the first instance.” 

The latter is excellent advice in an LLM sector that is often focused on solving non-existent problems – with solutions likely to cause knock-on effects in society, such as devaluing the work of human creatives, experts, and professionals (issues left unexplored, alas, in this and other reports).

If expert content can simply be produced at the touch of a button, for zero cost (or a paid subscription to the likes of OpenAI), then it stands to reason that this both commoditizes high-end skills and depresses the earnings potential of many humans, including artists, writers, designers, filmmakers, academics, musicians, and more. 

In the process, this creates an economic black hole in which users pay OpenAI et al for every type of content and information access – content that has been sourced and remixed (often scraped without permission and used as training data) from the work of millions of uncredited humans.

The result is that users end up paying OpenAI for all of the world’s knowledge, skill, and talent. And nobody seems to regard this as a problem – including the Lords.

That aside, the Committee continues:

  • “It should also remove barriers, for example by working with universities on providing attractive licensing and ownership terms, and unlocking funding across the business lifecycle to help start‐ups grow and scale in the UK.”

  • “The government should also review UKRI’s allocations for AI PhD funding, in light of concerns that the prospects for commercial spinouts are being negatively affected and foreign influence in funding strategic sectors may grow as a result.”

Can we trust them? 

The report then moves on to the vexed and under-reported (except in diginomica) topic of a sovereign LLM capability – aka a national AI for Britain, using data sourced from British citizens. It says:

  •  “A sovereign UK LLM capability could deliver substantial value if challenges around reliability, ethics, security, and interpretability can be resolved. LLMs could in future benefit central departments and public services for example, though it remains too early to consider using LLMs in high‐stakes applications such as critical national infrastructure or the legal system.”

  • “We do not recommend using an ‘off the shelf’ LLM or developing one from scratch: the former is too risky, and the latter requires high‐tech R&D efforts ill‐suited to government. But commissioning an LLM to high specifications and running it on internal secure facilities might strike the right balance.”

The idea of a national AI system raises difficult questions, however. First, is this government, of all governments, likely to fall into the arms of an American Big Tech firm offering a lightly remixed off-the-shelf product, without asking enough questions? I’d rate that as probable – almost a certainty, in fact.

Second, divorced from the EU’s half a billion citizens, the UK would have only a small training data set (partial data on 67 million people, six percent of whom are digitally excluded). This may push it to seek more and more data from citizens, forcing most to comply with digital-only options for tax, health, finances, and more. Would that data be protected, or shared with the likes of Microsoft/OpenAI or Google?

Meanwhile, it seems likely that, tempted by tabloid pressure and Home Office or Exchequer rhetoric, the government would apply such an AI in ever riskier scenarios, where error and intrusion would be likely to proliferate: trawling citizen data to find petty criminals or alleged tax evaders.

A grim scenario, when added to the UK’s lamentable record in running large national technology projects, most of which have been completely mismanaged, out of date by go-live, and way over budget.

Be honest: Would you trust this government to design, build, and use a national AI system – safely, responsibly, and competently? I suspect even its most ardent supporters would wince at the prospect.

However, the longest section in this (supposedly optimistic) report is on risk. Among its many observations, the Committee notes:

The most immediate security concerns from LLMs come from making existing malicious activities easier, rather than qualitatively new risks. 

Catastrophic risks resulting in thousands of UK fatalities and tens of billions in financial damages are not likely within three years, though this cannot be ruled out as next generation capabilities become clearer and open access models more widespread.

There are, however, no warning indicators for a rapid and uncontrollable escalation of capabilities resulting in catastrophic risk. There is no cause for panic, but the implications of this intelligence blind spot deserve sober consideration.

Remember: Don’t panic! So, what are the Committee’s recommendations?

  • “The government should work with industry at pace to scale existing mitigations in the areas of cyber security (including systems vulnerable to voice cloning), child sexual abuse material, counter‐terror, and counter‐disinformation. It should set out progress and future plans in response to this report, with a particular focus on disinformation in the context of upcoming elections.”

  • “The government should publish an AI risk taxonomy and risk register. It would be helpful for this to be aligned with the National Security Risk Assessment. The AI Safety Institute should publish an assessment of engineering pathways to catastrophic risk and warning indicators as an immediate priority. It should then set out plans for developing scalable mitigations.”

  • “There is a credible security risk from the rapid and uncontrollable proliferation of highly capable openly available models which may be misused or malfunction. Banning them entirely would be disproportionate and likely ineffective. But a concerted effort is needed to monitor and mitigate the cumulative impacts.”

  • “The AI Safety Institute should develop new ways to identify and track models once released, standardize expectations of documentation, and review the extent to which it is safe for some types of model to publish the underlying software code, weights and training data.”

Then the report adds:

  • “It is almost certain existential risks will not manifest within three years and highly likely not within the next decade. As our understanding of this technology grows and responsible development increases, we hope concerns about existential risk will decline.”

My take

Perhaps that’s what the Committee means by optimism: a non-specific hope for the future, despite dozens of pages devoted to risk and danger. The Lords state, in writing, that they merely hope catastrophic risk, fatalities, and lost billions won’t arise, but they can’t rule it out!

So, if this report is neither too hot nor too cold, it seems that its warmth comes more from hope, and from trust in government, than anything else. 

Meanwhile, a more concerted effort to talk up the advantages of LLMs would have been advisable if the Lords want their message of optimism, refocusing, and good management to be taken seriously.
