
AI safety - Seoul food absent for ethics campaigners as Summit focus is technical

Chris Middleton, May 22, 2024
Summary:
The second AI Safety Summit has produced its first measurable outcome. Does it amount to anything useful?

(Humanoid robot thinking, data ethics governance © PhonlamaiPhoto - Canva.com)

The most notable output so far from the second AI Safety Summit in Seoul this week has been an agreement that 10 countries plus the EU will launch a network of AI Safety Institutes to help advance “the science of AI safety”.

The Seoul Declaration – more accurately, The Seoul Statement of Intent toward International Cooperation on AI Safety Science – was signed yesterday by Australia, Canada, Japan, the Republic of Korea, the Republic of Singapore, the US, the UK, and the European Union, plus EU members France, Germany, and Italy – a total of 34 countries, if every EU member state is counted.

The UK launched the world’s first AI Safety Institute in the wake of the Bletchley Park Summit last year, backed by £100 million ($127 million) of public money. Earlier this week, it was announced that a branch of the UK Institute would open in San Francisco. It is not clear whether the Seoul Declaration will affect that plan.

In creating a network of similar national organizations, the signatories aim to “forge a common understanding” about AI safety and to align work on research, standards, and testing, according to an announcement from the British government.

It adds:

This will include sharing information about models, their limitations, capabilities and risks, as well as monitoring specific AI harms and safety incidents where they occur, and sharing resources to advance global understanding of the science around AI safety.

Overall, the aim is to ensure the development of human-centric, trustworthy, and responsible AI, so that it can be used to “solve the world’s biggest challenges, protect human rights, and bridge global digital divides”.

However, as I reported yesterday, the current global context is one in which companies such as OpenAI – maker of ChatGPT, DALL-E, and Sora – face escalating numbers of lawsuits and controversies over irresponsible behaviour and copyright infringement, while popularizing cloud-based tools that it would be hard to argue solve the world’s most urgent problems. (Are writers, artists, and filmmakers a bigger threat than climate change?)

Secretary of State for Science, Innovation and Technology Michelle Donelan said:

Ever since we convened the world at Bletchley Park last year, the UK has spearheaded the global movement on AI safety. And when I announced the world’s first AI Safety Institute, other nations followed this call to arms by establishing their own. 

Capitalizing on this leadership, collaboration with our overseas counterparts through a global network will be fundamental to making sure innovation in AI can continue with safety, security, and trust at its core.

Ethics on the sideline 

Earlier this year, the UK also signed a Memorandum of Understanding with the US on, again, the “science of AI safety” – as opposed to ethical principles, which, it seems, are largely being left to US courts to decide.

However, 16 AI vendors, including OpenAI, have signed up to what are billed as “Frontier AI Safety Commitments”. These state that the signatory companies will take input from governments and AI Safety Institutes when they judge that an AI model’s risks are becoming unmanageable.

It seems, therefore, that the frontier model focus of the Bletchley Park event has steered “the science of AI safety” onto a primarily technical track, rather than one that looks at broader ethical principles. This underscores criticisms made at the time that debating issues such as copyright, bias, and employment was largely being left to fringe events.

In the meantime, many vendors have been popularizing their tools by enabling push-button creativity at the expense of copyright-centric businesses – at least, in the view of the House of Lords’ cross-party Communications and Digital (select) Committee and the various instigators of private and class actions in the US courts.

So, what does the Seoul Declaration actually say, given that it is being proposed as a foundation model, of sorts, for how democracies deal with tech-bro-driven vendors – whose market caps are larger than the GDPs of nearly every nation on Earth?

Looking forward to the “Action Summit” planned for later this year in France, the Declaration contains nine broad statements of intent, which can be summed up by the fifth:

We call for enhanced international cooperation to advance AI safety, innovation and inclusivity to harness human-centric AI to address the world’s greatest challenges, to protect and promote democratic values, the rule of law and human rights, fundamental freedoms and privacy, to bridge AI and digital divides between and within countries, thereby contributing to the advancement of human well-being, and to support practical applications of AI including to advance the UN Sustainable Development Goals.

It adds that the signatories aim to:

…strengthen international cooperation on AI governance through engagement with other international initiatives at the UN and its bodies, G7, G20, the Organization for Economic Co-operation and Development (OECD), the Council of Europe, and the Global Partnership on AI (GPAI).

It continues:

In this light, we acknowledge the Hiroshima AI Process Friends Group, welcome the recently updated OECD AI principles, and the recent adoption by consensus of the United Nations General Assembly resolution ‘Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development’ that solidified the global understanding on the need for safeguards for AI systems and the imperative to develop, deploy, and use AI for good, and welcome discussions on the Global Digital Compact in advance of the Summit of the Future in September 2024 and look forward to the final report of the UN Secretary-General’s High-level Advisory Body on AI (HLAB).

Got all that? Punctuation is the latest victim of the AI Age, it seems.

Meanwhile, the Seoul Statement of Intent toward International Cooperation on AI Safety Science is presented as a brief annex to the Declaration. It reaffirms and emphasizes the scientific and technical focus of this new international movement, rather than the ethical principles that many would like to see front and centre of debate:

We express our shared intent to take steps toward fostering common international scientific understanding on aspects of AI safety, including by endeavouring to promote complementarity and interoperability in our technical methodologies and overall approaches.

These steps may include taking advantage of existing initiatives; the mutual strengthening of research, testing, and guidance capacities; the sharing of information about models, including their capabilities, limitations, and risks as appropriate; the monitoring of AI harms and safety incidents; the exchange or joint creation of evaluations, data sets and associated criteria, where appropriate; the establishment of shared technical resources for purposes of advancing the science of AI safety; and the promotion of appropriate research security practices in the field.

My take

So, what is an ‘AI harm’ in this new world of untrammelled, grubby advances towards automated creativity? Advances that invert the decades-long promise that Industry 4.0 technologies would take away all the boring tasks so that humans can be creative? That, it seems, is being left to the courts to decide.

Meanwhile, governments are sitting on their hands on the bus of survivors, making vast resources available that AI vendors will, almost certainly, never use.
