UK AI Safety Summit – the diginomica preview

Chris Middleton, October 30, 2023
Summary:
On 1-2 November, a who’s who of world leaders and AI luminaries will descend on Bletchley Park. At least, that was the original plan. What do we know today?

British Prime Minister Rishi Sunak

The UK’s AI Safety Summit takes place on November 1-2 at Bletchley Park, the historic home of trusted computation. There, Alan Turing’s Bombe machine once helped codebreakers uncover Germany’s secret communications, nudging the Allies towards victory in the Second World War.

This week, however, UK Prime Minister Rishi Sunak hopes that its eighty years of tech heritage will help secure a legacy of AI safety. But will looking to Britain’s past help him achieve that? 

Yet Sunak should be commended for his attempt to lead the global debate, a move that represents real personal risk for him, should the event turn out to be a damp squib. But a gulf between promise and delivery often troubles this government. Only last month Sunak talked of “long-term decisions for a brighter future”, while rolling back Britain’s Net Zero commitments. Compare and contrast.

AI promises much too, of course: smarter decisions, predictive healthcare, sustainability, unlocking the hidden value of data, and giant leaps forward in science. But can it deliver? Humanity is about to find out.

Back in June, Sunak suggested that the Summit would be a large-scale, global affair, in which a broad range of AI risks would be discussed by world leaders, technologists, and academics. But the reality seems disappointing: a small, opaque affair with a delegate list thought to number just 100. Another gulf between medium and message.

So, who is actually coming? At the time of writing, it's all rather up in the air. That may be for security reasons at a time of heightened international tensions - Bletchley Park has been in lockdown for days - but it is just as likely that the organizers simply don’t know. The world’s political leaders have their own images and agendas to consider. France's Macron isn't coming; the US Veep Kamala Harris is; the EU's Ursula von der Leyen was still unconfirmed at the time of writing.

From the tech sector, OpenAI's Sam Altman is coming, Palantir's Alex Karp will be around, as is Demis Hassabis, co-founder and CEO of Google DeepMind. Meanwhile Meta's Vice President of Global Affairs and one-time UK political hopeful Nick Clegg is said to be coming, while Salesforce's Marc Benioff will be attending on day one of the two-day gathering.

Not only that, but the government has announced that discussions will be limited to frontier AI models, and will not tackle the technology’s wider problems, which even many tech CEOs now accept will need regulatory oversight. These include the risks of AI bias and privacy breaches; the lack of transparency in black-box solutions; copyright theft in training data; loss of human consent and control over decisions; sentiment distortion in text; job market upheavals; malicious AI adoption; convincing misinformation campaigns; deepfakes; and more.

And as previously reported,  there is a subtler but more insidious risk from AI too: enterprises rushing to adopt a technology that few understand, and even fewer have the skills to deploy responsibly - AKA buying a sledgehammer because your neighbor has got one, then looking for the ‘nail’ of business need. But as novelist Milan Kundera once observed, metaphors are dangerous. Indeed, they often betray our own cognitive biases.

Meanwhile, the former 49-day Prime Minister Liz Truss has demanded that China’s invitation to Bletchley Park be rescinded on national security grounds. Her outburst last week revealed another big-picture risk: the impossibility of achieving a global consensus. Logically, AI at scale is only as safe as the weakest or most malicious use case, which is hardly a cause for optimism.

Intervention

A year ago, the IT sector dismissed the idea that governments should get involved in AI safety – back when the White House published its voluntary AI Bill of Rights. “Let the industry police itself!” came the cry from many quarters. But now that ChatGPT maker OpenAI is looking at a possible 9,900% revenue increase from 2022 to 2024 (according to Reuters’ estimates), more and more CEOs are demanding intervention.

So, will world leaders actually turn up to meet AI chiefs at the Summit? Or will it be minor officials mingling with the likes of OpenAI and Anthropic? Probably the latter. But in the real world, the likelihood of fierce AI competitors disclosing their frontier plans to anyone, let alone to junior ministers, is low. 

But as we know, being seen to do something is often what politics is all about. And with Christmas approaching, the Summit is beginning to look like festive window-dressing for UK Plc. The hope is that passing investors will spend money with the world’s number-three start-up backer, rather than with a competitor.

Digital Secretary Michelle Donelan – host of Summit Day One – hinted as much on 24 October, when she said:

 I want to set out how the UK’s approach to safety and security in AI will make it the best place in the world for new AI companies to not only grow, but locate.

And with Matt Clifford, founder and CEO of Entrepreneur First, organizing the event (and representing the Prime Minister for much of it), inward investment does seem to be a strong subtext. If so, that scramble to attract Big Tech dollars surely hands the advantage at the Summit to vendors.

Either way, the UK has been making a series of announcements in recent weeks. On 26 October, for example, Sunak gave a speech unveiling a new AI healthcare and life-sciences mission, apparently backed by £100 million in investment. The next day, he confirmed the foundation of the UK’s new AI Safety Institute, which aims to test and evaluate frontier models. (This is almost like joined-up government! A refreshing change after the chaotic Boris Johnson years.)

Also last week, six frontier developers – Google DeepMind, Anthropic, OpenAI, Microsoft, Amazon, and Meta Platforms – published AI safety plans via the Summit website. 

Perhaps to counter the Summit’s limitations, the Government has published a 45-page report on AI safety, setting a new context for the discussions. So, what does it say? Alas, the report confirms the narrow, frontier-model remit, and proposes nine processes and practices – some drawn up with input from the security services.

These are: 

• Responsible capability scaling: a framework for managing risk as organizations scale the capability of frontier AI systems.

• Model evaluations and red teaming to help assess the risks that AI models pose, and to inform decisions about training, securing, and deploying them.

• Model reporting and information sharing to increase government visibility into frontier AI developments (how can that possibly work in a cut-throat commercial world?).

• Security controls. If models and their weights are not developed and deployed securely, they risk being stolen, or leaking secret/sensitive data before guardrails have been applied, says the report.

• A new reporting structure for vulnerabilities designed to enable outsiders to highlight safety and security issues. 

• Identifiers of AI-generated material to reveal whether content has been AI generated or modified (a sensible concept in our view).

• Prioritising research on the risks posed by AI to help identify and address emerging problems at the frontier. (But what about the ones that already exist?)

• Preventing and monitoring model misuse to prevent harmful outcomes.

• Data input controls and audits to help identify and remove training data that may increase frontier AI’s dangerous capabilities. (But again, what about current AI models? And the problems that existing training data causes, such as scraped copyrighted content? This may not be a risk to the public, but it is a harm to copyright holders and to their industries – or so many believe.)

All of this activity can be viewed in two ways: pragmatism, in the form of acceptance that AI is here to stay, so let’s ensure that its makers behave responsibly; or naïve optimism about frontier model development, leaving a host of societal problems unaddressed. Either way, this is not what most people assumed this ‘global conference’ would achieve.

Satellites

Inevitably, a number of satellite events have taken place this month: opportunities to talk about all the things the Summit fails to address, in fact. Official ones were held on 11 October at the Alan Turing Institute, 12 October at the British Academy, 17 October at industry body techUK, and 25 October at the Royal Society. 

But unofficial ones have been taking place too, as thought leaders clamour to talk about the issues. One was at think tank Chatham House on 24 October, and it attracted an impressive, diverse set of speakers. 

Panellists included: Jean Innes, CEO of The Alan Turing Institute; Francine Bennett, Interim Director of the Ada Lovelace Institute; Katie O’Donovan, Director of Public Policy at Google UK; and Zoe Kleinman, Technology Editor at the BBC. Plus, AI luminary Professor Yoshua Bengio of the Department of Computer Science and Operations Research at the Université de Montréal (and an advisor to the British government).

So, can the Summit succeed? Speaking on the record (not under the Chatham House Rule), Bennett said:

We would say that the focus of the Summit is too narrow. […] I think you'll only get good outcomes by thinking about the broadest range of benefits and risks, and not just focusing on the outer edge.

Well said. But Innes took a different tack, saying:

I worry about us worrying so much about risk that we don't use these technologies. And the Alan Turing Institute is fundamentally optimistic about what these technologies can bring to society.

Google’s O’Donovan stressed the importance of a strong vendor presence:

I don't know how you would have a successful, meaningful Summit without those companies there. I think that provides a really important anchor.

However, she acknowledged an uncomfortable fact: the UK’s AI Safety Summit is hardly unique. For example, the Hiroshima AI Process (ongoing) saw the G7 nations issue a statement on trustworthy AI last month. Meanwhile the EU, the world’s biggest regulator, advanced its AI Act through the European Parliament in June. Then there is the US AI Bill of Rights, and more. Even today, on the eve of the Summit, US President Joe Biden will be signing off on an Executive Order that will require developers of the most powerful AI systems to share critical testing information with the US authorities.

But it fell to Bengio to address perhaps the biggest threat of all, saying:

We don’t know how to build an AI system that will behave as intended.

A truly sobering thought. 

My take 

Perhaps the key question is whether the UK knows how to build a Summit that will behave as intended. Has its founding ambition been compromised, scaled back, and fumbled – making it a kind of high-tech equivalent to HS2, hitting the buffers of realpolitik?

But the BBC’s Kleinman, at least, was upbeat at Chatham House:

It's very much being driven by the Prime Minister, who is obsessed with AI! […] We are certainly not in the same league [as the US], but we are at the table.

This is true, and the Summit is a worthwhile venture in a great many ways. However, the government must demonstrate real wins and outcomes from it, or the suspicion will remain that it is little more than a private preview of six frontier companies’ 2024 wares. 

My prediction? Expect a joint statement of intent on AI safety, but little more. Bold, useful, and well intentioned, but ultimately underwhelming – and lacking real global authenticity.
