AI Safety Summit - great on the theory, but the practicalities of realistic action remain unclear

Stuart Lauchlan, November 2, 2023
Summary:
The Bletchley Declaration is great, but as later roundtable sessions revealed, the big question remains: how can any of this be made enforceable against bad actors?


It was a case of ‘back to the future’ for Michelle Donelan, the UK’s Secretary of State for Science, Innovation and Technology, as she kicked off Day One of the AI Safety Summit at Bletchley Park:

Seventy-three years ago, Alan Turing dared to ask if computers could one day think. From his vantage point at the dawn of the field, he observed that, ‘We can only see a short distance ahead, but we can see plenty there that needs to be done.’ Today we can indeed see a little further, and there is a great deal that needs to be done.

There is indeed, and 48 hours isn’t going to deal with very much, although there’s room to make a start on work that will be picked up at a mini-summit in South Korea in six months’ time and a full-blown gig in France in a year’s time. (Presumably President Macron will be able to clear his diary and turn up to that one?).

So what did Day One achieve? Most notable was The Bletchley Declaration, launched into the world before Donelan had even got to her feet to deliver the welcome address. Signed by 28 countries, including the US and China, as well as the European Union, it’s basically a statement of intent that ticks all the right boxes, such as:

For the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.

That’s kind of why everyone who was allowed in turned up at Bletchley in the first place, of course, so while it’s a worthwhile display of declarative unanimity, it’s a powerful gesture rather than anything practical.

Roundtables

Of more significant note, perhaps, are the outcomes of the various roundtables that took place during the day on a number of key AI topics. You can check out summaries of these here. The final plenary session of Day One tapped into some of the conclusions.

François-Philippe Champagne, Minister for Innovation, Science and Industry, Government of Canada, chaired a session on risks to global safety from Frontier AI misuse. He pointed out that 2024 will see around 50 major elections around the world, not least the US Presidential Election, affecting some four billion people. These face genuine risk from the likes of AI-enabled deepfakes, for example. There are three 'A's that need to be thought about, he argued - acknowledgement of risk, action to counter it, and adapting to change:

I think we need to adapt. Clearly, everyone understands that it's very difficult to forecast or foreshadow what AI could do over the next five to 10 years exponentially and therefore [we need to be] making sure that our regulation would be adapting and I would say this is very much the start of a discussion.

Josephine Teo, Minister for Communications and Information, Government of Singapore, chaired a session on the risk of loss of human control of Frontier AI:

If we look at current AI systems today, they do not yet pose a real risk of loss of control. They require human prompting, and generally fail when asked to plan over time towards a goal. And current models also have limited ability to take actions in the real world. However, it appears quite clear that future models are likely to improve on all of these dimensions, even if the case has not been fully made, that we will one day experience a severe loss of control over AI models.

What, she posited, if society could continue to enjoy the benefits of current AI models, but hit the pause button on Frontier AI so as to, as she put it, “buy ourselves time”? Nice theory, but as she admitted:

The difficulty lies with the fact that you can have responsible developers that are willing to comply with such a rule, but what levers do we actually have to ensure that the bad actors don't go ahead to develop frontier models anyway, without our knowledge, and what could we in fact do to stop this from happening?

This is obviously a question that runs through the whole of this Summit and beyond - it’s great to have everyone saying they’re behind responsible development of AI tech, but what in reality could governments or regulators actually do, in practical terms, to prevent or disincentivise bad actors whose eyes are more on profit or political advantage than anything else?

The point was picked up by Tino Cuéllar, President of the Carnegie Endowment for International Peace, when he stated:

As we look to the future, we need not only a set of values and principles that the world converges around, but implementable, realistic action, much of it perhaps at the local level, the national level, but with some mechanism to trust and verify and understand what's happening at the national level.

The view from the White House Lawn

Meanwhile, back in London, US Veep Kamala Harris was cheerfully stealing the limelight at the US Embassy, where she gave a speech on AI, once again reiterating America’s leadership in setting standards, regulating risk and so on. Prior to her taking to the stage, the White House had announced a US AI Safety Institute, just days after host Prime Minister Rishi Sunak had done the same for the UK. Harris said:

I am proud to announce that President Biden and I have established the United States AI Safety Institute which will create rigorous standards to test the safety of AI models for public use. Today, we are also taking steps to establish requirements that when the United States government uses AI, it advances the public interest and we intend that these domestic AI policies will serve as a model for global policy.

She noted that 30 countries to date have signed up to US principles for the responsible development and deployment of military AI and called upon more to follow suit.

The Biden Administration is also taking a lead in talking to AI technology leaders to ensure they act responsibly, she said:

Today, commercial interests are leading the way in the development and application of large language models and making decisions about how these models are built, trained, tested and secured. These decisions have the potential to impact all of society. As such, President Biden and I have had extensive engagement with the leading AI companies to establish a minimum baseline of responsible AI practices.

The result is a set of voluntary company commitments, which range from commitments to report vulnerabilities discovered in AI models to keeping those models secure from bad actors. Let me be clear, these voluntary commitments are an initial step toward a safer AI future with more to come. Because, as history has shown, in the absence of regulation and strong government oversight, some technology companies choose to prioritise profit over the well-being of their customers, the safety of our communities, and the stability of our democracies.

My take

That last comment should make for some interesting conversations on Day Two as Harris travels to Bletchley Park itself, where Prime Minister Sunak is convening his own meeting with some of those same tech firms, before engaging in a ‘conversation’ on X with Elon Musk, a move that has been met with near-universal bafflement and threatens to overshadow the outcomes of the Safety Summit itself. Brace for landing!
