The rapidly advancing AI market is just as much a global arms race to win market share as it is a fight to build the most sophisticated technology. Governments around the world are having to strike a delicate balance between encouraging investment and protecting against the harms posed by Artificial Intelligence, as they begin to recognize that fostering an environment for the responsible and safe use of AI could be key to success in the long run.
Earlier this month, British Prime Minister Rishi Sunak pitched the UK as a center for AI safety, acknowledging that regulation is just as critical as intellectual property to garnering influence in this area. At the same time, the European Union (EU) is edging towards passing its own AI legislation, which focuses on classifying Artificial Intelligence risks and building in regulatory safeguards.
Meanwhile the US, home to many of the companies leading the development of AI, recently published a Blueprint for an AI Bill of Rights, which seeks to minimize bias and the potential risks to citizens from technology overreach, data grabs and intrusion. Critically, however, the US’ approach thus far has been one of guidance and steering rather than binding law.
This approach to guiding the development of AI continued this weekend, as seven leading technology companies in the US signed up to a number of voluntary commitments to help ensure the “safe, secure and transparent” development of Artificial Intelligence.
Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI all met with President Biden at the White House to sign up to the commitments, with each of the companies saying that it will implement the pledges immediately.
President Biden said:
These commitments are real, and they’re concrete. They’re going to help the industry fulfill its fundamental obligation to Americans to develop safe, secure, and trustworthy technologies that benefit society and uphold our values and our shared values.
We’ll see more technology change in the next 10 years, or even in the next few years, than we’ve seen in the last 50 years. That has been an astounding revelation to me, quite frankly. Artificial intelligence is going to transform the lives of people around the world.
The group here will be critical in shepherding that innovation with responsibility and safety by design to earn the trust of Americans.
However, Biden went further this time and said that in the weeks ahead he is going to take executive action and work with both Democrats and Republicans to develop new bipartisan legislation and regulation to further mandate responsible AI innovation. President Biden added:
These commitments are a promising step, but we have a lot more work to do together.
Realizing the promise of AI by managing the risk is going to require some new laws, regulations, and oversight.
In the weeks ahead, I’m going to continue to take executive action to help America lead the way toward responsible innovation. And we’re going to work with both parties to develop appropriate legislation and regulation. I’m pleased that Leader Schumer and Leader Jeffries and others in the Congress are making this a top bipartisan priority.
As we advance the agenda here at home, we’ll lead the work with our allies and partners on a common international framework to govern the development of AI.
The White House said that whilst it pursues this international framework, it has already consulted on the voluntary commitments that have been published with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.
Whilst the legislative process is underway, the voluntary commitments that the seven companies have signed up to include the following:
Committing to internal and external security testing of AI systems before their release. This testing, carried out in part by independent experts, aims to guard against some of the most significant sources of AI risk, such as biosecurity and cybersecurity threats, as well as broader societal harms.
Committing to sharing information across the industry and with governments, civil society, and academia on managing AI risks. This should include best practices for safety, information on attempts to circumvent safeguards, and technical collaboration.
The companies have committed to investing in cybersecurity and insider threat safeguards with the aim of protecting proprietary and unreleased model weights. The White House said that these model weights are the most essential part of an AI system, and the companies have agreed that it is vital that the model weights be released only when intended and when security risks are considered.
A commitment to facilitating third-party discovery and reporting of vulnerabilities in AI systems. Some issues may persist even after an AI system is released, and a reporting mechanism aims to enable them to be found and fixed quickly.
The companies commit to developing technical mechanisms, such as watermarking systems, to ensure that users know when content is AI generated. The aim is to let creativity with AI flourish while reducing the dangers of fraud and deception.
The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use. This report will cover both security risks and societal risks, such as the effects on fairness and bias.
A commitment to prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy. The White House said that the track record of AI shows the “insidiousness and prevalence of these dangers”, and the companies have committed to rolling out AI that mitigates them.
The companies commit to developing and deploying advanced AI systems to help address society’s greatest challenges, from cancer prevention to mitigating climate change, with the hope that AI can contribute to the prosperity, equality, and security of all.
If recent history tells us anything, it’s that technology companies can’t necessarily be trusted to mark their own homework when it comes to responsible development. Compounding this, fines and penalties from governments are sometimes treated simply as a ‘cost of doing business’ in the pursuit of greater market share. There are going to be serious mistakes made along the way, and the real test will be how governments respond to them and correct industry along the path towards safe and fair AI use. Responsible development and use of AI is critical to all of us, but the pace of development will inevitably outpace the creation of effective legislation. Lawyers and the courts will be busy over the next decade, no doubt, but it’s also worth remembering that AI use cases are often already covered by established regulations, including human rights and labor laws. My hope is that these companies recognize that if they are to be leaders in AI, building in trust for users from the start is essential.