
AI Safety - Seoul-searching for answers as Summit 2 kicks off

Chris Middleton, May 21, 2024
Summary:
Round two of the global AI safety debate kicks off in Seoul, Korea, today. What’s on the agenda?

(Seoul skyline at night. Image by huong nguyen from Pixabay)

AI safety is back on the world agenda, as the second AI Safety Summit takes place in Seoul, Republic of Korea, on 21-22 May. However, for a conference about AI safety, transparency, and ethical behaviour, delegate details are thin on the ground once again – at the time of writing, at least. It seems remarkably like a closed shop.

Six months on from the inaugural Bletchley Park UK event, details of who attended that event are still limited to a handful of names out of 100+ guests: British Prime Minister Rishi Sunak; Kamala Harris, Vice President of the US; Giorgia Meloni, Prime Minister of Italy; Ursula von der Leyen, President of the European Commission; Sam Altman, CEO of OpenAI; Elon Musk – feted by Sunak and pushing Grok AI on the platform formerly known as Twitter; Mustafa Suleyman, co-founder of DeepMind; Nick Clegg – former UK Deputy Prime Minister, now President of Global Affairs at Meta; and, reportedly, Wu Zhaohui, Chinese vice-minister of science and technology. Plus, King Charles III, who attended virtually.

That delegate list is significant, as this week’s Seoul Summit is billed as a catch-up. According to the British government, Day One – co-hosted virtually by Yoon Suk Yeol, President of the Republic of Korea, and Sunak – is seeing an unnamed “limited number of participants” update delegates on how they have been fulfilling commitments made at Bletchley Park, and how they are supporting “innovation and inclusivity”.

But not transparency, it seems. That said, representatives from OpenAI, Google, Microsoft, and Anthropic are thought to be attending.

According to Downing Street, Day Two of the Summit will take place physically and host “representatives from 30 countries, the European Union, and the UN”, alongside key figures from industry, academia, and civil society. Lee Jong Ho, the Republic of Korea’s Minister of Science and ICT, and Michelle Donelan, UK Secretary of State for Science, Innovation and Technology, will co-chair a session.

So, what have some attendees been doing for the past six months? In OpenAI’s case, it has unveiled GPT-4o and, with the launch of its text-to-video app Sora, added filmmakers, studios, directors, producers, actors, set designers, special effects artists, and more to the list of urgent problems that, apparently, need solving more than climate change, cancer, heart disease, and nuclear proliferation.

Plus – in what must be one of the most remarkable coincidences in history – the voice of its Sky chatbot was found to be “eerily similar” to that of actress Scarlett Johansson, who voiced the AI ‘Samantha’ in the 2013 movie, Her. (It couldn’t be that the CEO – temporarily fired for being untrustworthy only months ago – just thought it was funny to exploit one of the world’s most successful screen stars? Could it?) 

Meanwhile, lawsuits against the company are mounting – for IP theft and, in the case of Musk’s action against OpenAI, for departing from its non-profit roots. These issues are no trivial distraction: at heart, they concern trust and ethics, which is precisely what the Summit is supposed to be about.

Despite all this – and actions like it against other vendors – copyright infringement remains off the main agenda for the AI Safety Summit, with governments, such as the UK’s, sitting on their hands and expressing little more than concern at AI companies’ apparent threat to IP-based industries.

Also not on the agenda is the vast power consumption of the GPU-centric data centres that underpin Large Language Models and generative systems. 

According to the International Energy Agency, by 2026 data centres could be using as much energy as the world’s fourth-largest economy, Japan, which has a population of 125 million people. An estimated 1,000 terawatt hours of data centre electricity consumption would be more than double 2022’s figure of 460 terawatt hours, largely due to the massive expansion of cloud-based AI.

With a rapidly warming planet, how is that not an AI safety issue? So, what is the conference discussing?

According to Downing Street, it will focus on “emerging consensus on frontier AI risks”, plus work on the new International AI Safety Report, an interim version of which was published on 17 May.

Plus, Day Two will look at increasing resilience against the negative impacts of AI – not only technical issues such as AI model development, says the British government, but also the deployment of responsible and trustworthy AI to increase societal benefits and acceptance.

The Report, then, is key to future discussions, with a final text scheduled to appear at the next AI Safety Summit, which will take place in France towards the end of this year.

The interim version says:

If properly governed, general-purpose AI can be applied to advance the public interest, potentially leading to enhanced wellbeing, more prosperity, and new scientific discoveries. However, malfunctioning or maliciously used general-purpose AI can also cause harm, for instance through biased decisions in high-stakes settings or through scams, fake media, or privacy violations. 

As general-purpose AI capabilities continue to advance, risks such as large-scale labour market impacts, AI-enabled hacking or biological attacks, and society losing control over general-purpose AI could emerge, although the likelihood of these scenarios is debated among researchers. Different views on these risks often stem from differing expectations about the steps society will take to limit them, the effectiveness of those steps, and how rapidly general-purpose AI capabilities will be advanced. 

There is considerable uncertainty about the rate of future progress in general-purpose AI capabilities. Some experts think a slowdown of progress is by far most likely, while other experts think that extremely rapid progress is possible or likely.

In a hyper-competitive market, with Big Tech companies fighting for dominance, it is hard to conceive of development slowing down. The Report continues:

There are various technical methods to assess and reduce risks from general-purpose AI that developers can employ, and regulators can require, but they all have limitations. For example, current techniques for explaining why general-purpose AI models produce any given output are severely limited.

Indeed, the lack of transparency of AI systems is a recurring theme in the 132-page report. Much of the world’s future safety from – or perhaps underpinned by – AI systems relies on AI vendors being open about technology that is, in many cases, proprietary.

The report adds:

The future of general-purpose AI technology is uncertain, with a wide range of trajectories appearing possible even in the near future, including both very positive and very negative outcomes. 

But nothing about the future of AI is inevitable. It will be the decisions of societies and governments that will determine the future of AI. This interim report aims to facilitate constructive discussion about these decisions.

A duty to act

So, what about copyright – not a minor issue, given the hundreds of billions of dollars that copyright-centric businesses contribute to local economies? (As I noted last month, the UK’s creative industries alone contributed £109 billion to the economy in 2021, rising to £126 billion the following year. In total, IP of every kind contributes over 14% of gross value added, according to the government’s own figures.)

The report – produced in the UK, with international academic and IT contributors – says:

An unclear copyright regime disincentivizes general-purpose AI developers from improving data transparency. Transparency about general-purpose AI model training data is useful for understanding various potential risks and harms of a general-purpose AI system. However, this type of transparency is often lacking for major general-purpose AI developers. Fears of legal risk, especially over copyright infringements, may disincentivize these developers from disclosing their training data.

The infrastructure to source and filter for legally permissible data is under-developed, making it hard for developers to comply with copyright law. The permissibility of using copyrighted works as part of training data without a licence is an active area of litigation.

The phrase about the "unclear copyright regime" disincentivizing developers appears twice in the Report. But is the regime really unclear? Not in many countries, and not in Britain, according to the UK Parliament’s second chamber, the House of Lords. Earlier this year, the Lords’ Communications and Digital Committee said:

The government has a duty to act. It cannot sit on its hands for the next decade until sufficient case law has emerged [this referred to ongoing class actions and other copyright-holder lawsuits in the US and elsewhere].

LLMs may offer immense value to society. But that does not warrant the violation of copyright law or its underpinning principles. We do not believe it is fair for tech firms to use rightsholder data for commercial purposes without permission or compensation, and to gain vast financial rewards in the process.

My take

The UK, once again, should be commended for kickstarting an ongoing global Summit series on AI Safety, which by the end of this year will have produced three international conferences and an agreed report. 

But one thing is clear: the answers to most questions about AI safety lie largely in vendors’ hands. So, can we trust them to provide those answers? I would argue that copyright – while perhaps not a core topic for discussion in a world of rising disinformation, deep fakes, employment uncertainty, and more – should be seen as the perfect test case.

If trillion- and multibillion-dollar companies can’t be trusted with others’ proprietary data – and the businesses that have been built on them – then why should we trust them to behave responsibly in other areas?

diginomica will report back on the Seoul Summit if there are significant developments.
