
AI Safety - 49 countries sign up to protecting vendors’ interests

Chris Middleton, May 23, 2024
Summary:
In a shock development, and a meta-irony, the AI Safety Summit programme is beginning to exhibit the biases of its own design. Why?

(Blue robot holding a coin © lerbank-bbk22 - Canva)

Ministers from 27 countries, plus the EU, have issued a joint statement on advancing the safety of artificial intelligence systems, bringing the AI Safety Summit in Seoul, Republic of Korea, to a close. 

Ministers from Australia, Canada, Chile, India, Indonesia, Israel, Japan, Kenya, Mexico, Nigeria, New Zealand, the Philippines, the Republic of Korea, Rwanda, Saudi Arabia, Singapore, Switzerland, Türkiye, Ukraine, the United Arab Emirates, the UK, the US, EU members France, Germany, Italy, the Netherlands, and Spain, plus the EU itself, issued the statement yesterday. With the EU’s signature covering all 27 member states, that makes a total of 49 countries.

Signatories affirm the need for collaborative international approaches to the rapid advances in AI and their potential impact on societies and economies – not just in terms of safety, but also of “innovation and inclusivity”.

China – represented at the inaugural Bletchley Park summit in November – does not seem to have played any role in the follow-up, or its various Declarations and Statements.

On safety itself, the statement says:

It is imperative to guard against the full spectrum of AI risks, including risks posed by the deployment and use of current and frontier AI models or systems, and those that may be designed, developed, deployed, and used in future. 

Principles for AI safety and security include transparency, interpretability and explainability; privacy and accountability; meaningful human oversight, and effective data management and protection. 

We encourage all relevant actors, including organizations developing and deploying current and frontier AI, to promote accountability and transparency throughout the AI lifecycle by seeking to assess, prevent, mitigate, and remedy adverse impacts which may emerge. 

We further encourage all relevant actors to foster an enabling environment in which AI is designed, developed, deployed, and used in a safe, secure and trustworthy manner, for the good of all and in line with applicable domestic and international frameworks.

Bold words. 

The statement acknowledges some useful facts, including that it is national governments’ responsibility to establish the frameworks for managing risks posed by “the design, development, deployment, and use of commercially or publicly available frontier AI models”.

Note that those hypothetical frameworks single out frontier models – and do not appear to apply to AI systems that are currently in use. Indeed, beyond the cursory mention of existing models earlier, the Statement largely applies to frontier innovations. 

Implicitly, the Statement also appears to absolve vendors of responsibility in these areas, merely encouraging them to behave responsibly and ethically. (Whether they can be trusted to do so, given mounting lawsuits and controversies, remains to be seen.)

The statement continues:

We recognize our increasing role in promoting credible external evaluations for frontier AI models or systems developed in our jurisdictions, where those models or systems could pose severe risks. 

We further acknowledge our role in partnership with the private sector, civil society, academia, and the international community in identifying thresholds at which the risks posed by the design, development, deployment and use of frontier AI models or systems would be severe without appropriate mitigations.

Criteria for assessing those risks include models’ robustness against malicious attacks, foreseeable uses and misuses, and their different deployment contexts, including the broader systems into which an AI model may be introduced.

It adds:

We recognize that such severe risks could be posed by the potential model or system’s capability to meaningfully assist non-state actors in advancing the development, production, acquisition, or use of chemical or biological weapons, as well as their means of delivery. […]

We further recognize that such severe risks could be posed by the potential model or system’s capability or propensity to evade human oversight, including through safeguard circumvention, manipulation and deception, or autonomous replication and adaptation conducted without explicit human approval or permission.

The new network of AI Safety Institutes will be critical in AI risk assessment and management, it says, and in increasing global awareness of the issues:

Through our AI Safety Institutes or other relevant institutions, we plan to share best practices and evaluation datasets, as appropriate, and collaborate in establishing safety testing guidelines. We aim towards interoperability across AI safety activity, including by building partnerships between AI Safety Institutes and other relevant institutions, recognizing at the same time the need for testing methodologies [that consider] cultural and linguistic diversity across the globe.

Under the second heading, Innovation, is the sole mention of copyright, listed as one of several governance frameworks that may have an impact on frontier-model innovation. Later on, there is a further mention of intellectual property, in the most general terms.

We recognize the importance of governance approaches that foster innovation and the development of AI industry ecosystems with the goal of maximizing the potential benefits of AI for our economies and societies. 

We further recognize the role of governments is not only to prioritize financial investment, R&D, and workforce development for AI innovation, but also to consider governance frameworks, which include legal and institutional frameworks, including personal data, copyright, and other intellectual property protections for the safe, secure, and trustworthy development and deployment of AI.

This is an extraordinary statement in the wake of the House of Lords Communications and Digital Committee’s 2023 inquiry into Large Language Models. This year, the Committee’s Chair slammed UK government inaction over AI companies’ copyright theft, accusing No 10 of implicitly siding with vendors despite the Committee’s findings.

According to this Statement, issued by 49 countries, copyright is only relevant where it applies to protections for “the safe, secure and trustworthy development” of frontier-model AI. There is no mention of the wholesale scraping of proprietary data by vendors or, significantly, of the need to protect other industries from the rapacious self-interest of those companies.

Indeed, the Statement sees AI sweeping across every sector, including Government itself:

We recognize the transformative benefits of AI for the public sector, including in areas such as administration, welfare, education, and healthcare. These benefits include using AI efficiently and effectively through accessible digital services and automated procedures that enhance citizen experience in accessing public services. 

Furthermore, we intend to support the adoption of AI in key industrial sectors like manufacturing, logistics, and finance to revolutionize productivity, and reduce the burden on employees, while protecting rights and safety and unlock new avenues for value creation.

Shared equity

On productivity, it is worth noting that, despite 40 years of transformative digital innovations – which include desktop computing, the internet, the Web, ERP, CRM, ecommerce, email, broadband, Wi-Fi, cloud computing, mobility, smartphones, app stores, fast processors, cheap storage, big data analytics, automation, social platforms, robotics, early quantum systems, and now AI – UK productivity growth languishes at a fraction above zero, and has done for much of this century.

The Statement continues:

We are committed, in particular, to supporting an environment conducive to AI-driven innovation by facilitating access to AI-related resources, in particular for SMEs, start-ups, academia, universities, and even individuals, while respecting and safeguarding intellectual property rights.

On that point, the new focus on SMEs is interesting. 

At the end of April, a Westminster eForum on AI in employment (see diginomica, passim) heard event Chair Stuart Morrison, Chief Researcher in the Insights unit at the British Chambers of Commerce, tell delegates that roughly half of the 70,000 businesses his organization represents have no plans to use AI whatsoever. 

Meanwhile, 80% of his members that are deploying AI are just using chatbots, such as (inevitably) ChatGPT – the chatbot that shouted so loudly that 49 nations entirely missed the point.

So, what about the Statement’s other aim: inclusivity? It says:

In our efforts to foster an inclusive digital transformation, we recognize that the benefits of AI should be shared equitably. We seek to promote our shared vision to leverage the benefits of AI for all, including vulnerable groups. 

We intend to work together to promote the inclusive development of AI systems and the utilization of safe, secure, and trustworthy AI technologies in order to foster our shared values and mutual trust. 

We recognize the potential of AI for the benefit of all, especially in protecting human rights and fundamental freedoms, strengthening social safety nets, as well as ensuring safety from various risks, including disasters and accidents.

It adds:

We seek to ensure that socio-cultural and linguistic diversity is reflected and promoted in the AI lifecycle of design, development, deployment, and use.

We are committed to supporting and promoting advancements in AI technologies, recognizing the potential to provide significant advances to resolve the world’s greatest challenges such as climate change, global health, food and energy security, and education. 

We further seek to foster inclusive governance approaches by encouraging the participation of developing countries in joint efforts and discussions aimed at accelerating progress toward achieving the Sustainable Development Goals and promoting global common interests and developments.

No mention of the risks to inclusivity from long-term human biases being automated, or from the lack of diversity in a sector that is overwhelmingly dominated by men.

My take

While we should commend the UK and South Korean governments for hosting these events – with an Action Summit due in France later this year – the AI Safety Summits are now exhibiting the biases of their own design. 

Against the advice of many in academia, civil society, and business, the Summits’ focus was limited to frontier models, and the technical challenges that arise from them. Broader ethical concerns have thus been sidelined, as have the interests of copyright-centric businesses and individuals.

Meanwhile, governments have, collectively, taken full responsibility for setting the parameters for vendors’ safe innovations in future, leaving those companies to do whatever they please – until the courts intervene, at the expense of wealthy litigants.

An object lesson in the risks of poor system design.
