AI regulation - Stability AI and Ada Lovelace Institute speak out

Chris Middleton, September 15, 2023
Summary:
In the third and final report of our mini-series on AI regulation this week, we look at the gaps between what vendors and government are saying, and what they are doing.

(Image of a robot sitting at a desk reading paperwork, by ThankYouFantasyPictures from Pixabay)

In my previous two reports on UK AI regulation – in the context of the government’s plan to make the UK a prime destination for safe, responsible usage – we heard from a think tank representing 94 powerful vendors. And from one of the sectors most impacted by artificial intelligence, publishing, about the legal and copyright challenges.

So, what does one of today’s epochal AI companies make of all this – one of the vendors whose generative systems have been a focus of policy discussions? And what does a leading UK research organization think of the government’s strategy to date?

Ben Brooks is Head of Public Policy at Stability AI, maker of the generative image system Stable Diffusion, plus Stable Audio for music and sound. Has that technology, and generative systems like it, created a world of push-button creativity that devalues the work of human artists? And if so, should we care?

Speaking at the recent Westminster eForum on UK AI regulation and opportunity – and ignoring diginomica’s question about whether Stable Diffusion had been trained on copyrighted data – Brooks said:

From our perspective, AI is just a tool. It's not a substitute for creators. It's not a substitute for professionals. It's a tool that can help to boost productivity in certain tasks, and help people experiment with new concepts or perform increasingly complex tasks. 

We are particularly excited about how Large Language Models, for instance, can help software programmers identify vulnerabilities in their code and increase their productivity in terms of generating that code.

However, my report last month on AI and cybersecurity revealed that many shadow-IT users of AI within enterprises – which is the majority of ‘enterprise use’ of AI – are divulging source code and other privileged data to cloud-based LLMs. 

According to security company Netskope, source code is by far the most common form of private data pasted into public tools, by workers who don’t understand the risk of handing their crown jewels to vendors.

Brooks continued:

We're excited about what AI means for visual artists and creators, with many examples of how these tools can be used to help edit photographs. They're also being used to help Broadway designers mock up new concepts for stages and sets. And helping research teams think about new research techniques, and new diagnostic techniques for complex disorders. 

I could go on. But the TLDR is that open models are a really important part of the AI ecosystem. They can help promote transparency in these black-box technologies, and can also help promote competition in a technology that, very quickly, is going to become critical infrastructure across the digital economy.

So, does he see any downsides and risks? And shouldn’t we look inside the black box? (Might it contain millions of uncredited people’s photographs, paintings, and drawings?) He said:

From our perspective, there are both risks and opportunities. There are risks in terms of product safety, when these models are deployed into increasingly sensitive environments. There are also risks around intentional misuse.

Then he added:

We're not interested in developing what people have referred to as artificial general intelligence or artificial super intelligence. We're solely interested in practical, user-centric models that can help simplify and support everyday tasks. 

By developing models in this way, then releasing those models openly, we can help to unlock the potential of AI, while minimizing the risk of intentional misuse, weaponization, or Terminator-style runaway systems. 

We hope that policymakers, including in the UK, think through the future policy landscape. And that they also take the lens that diversity is important. These aren't just closed-source powerful models, they’re also open-source technologies. These aren't just big firms, they're also everyday developers, everyday creators, and everyday researchers. And they’re not just global players, but also local players as well.

We want folks to look under the hood of these models, understand their risks, verify performance, and help develop new mitigations. And we want everyone to have access to this technology: people are going to be building businesses based on it!

So, if people are going to be integrating this technology into powerful, popular tools, we want to make sure that everyday developers, researchers, and businesses have access to the underlying foundational texts.

So, in Stability AI’s mind it’s all about the vendors and the end-users. And what of the security implications? He said:

If this is critical infrastructure, then public-sector agencies and large private institutions will be using these models, so we want to make sure that they can do so without exposing their data to third parties, and without giving up control of their underlying AI capabilities. 

A lot of our most important strategic capabilities are built on open-source software today, on Linux or Android. The same will be true of AI. So, it is really important that national policy accounts for open innovation. And it is very important that it promotes, and nurtures, open innovation as well.

Poacher or gamekeeper? 

Admirable sentiments that few would argue with. But there’s a problem: they side-step questions about copyright and training-data provenance. In other words, they completely ignore another vital community: the people (and companies) who created the data that AI vendors simply scraped off the internet. Reports, books, articles, speeches, songs, paintings, illustrations, photographs, designs, and more.

And now the industry seems to be saying, ‘Let’s work towards a fair, open, and secure future for all’, while drawing a veil over how their systems were trained in the first place. That’s not acceptable; you can’t just flip between poacher and gamekeeper whenever it suits you.

AI companies can’t, on the one hand, be as diligent, transparent, and socially responsible as some claim to be while, on the other, being so cynical and opaque. The reality, I suspect, is that some vendors – we won’t name them – are fully aware that they scraped private and/or copyrighted data to train their systems. But they can’t admit it, as doing so would trigger a flood of lawsuits.

Meanwhile, organizations like the Publishers Association – see my previous report – are demanding that AI companies license their content retrospectively, citing multiple training data analyses that have, indeed, revealed the ingestion of copyrighted material. (Ever used an image-generating tool and found what was once, clearly, an image-gallery watermark? I have.)

In short: pay up, AI providers, for the transgressions of the past. With some vendors now backed by trillion-dollar corporations, it is only fair and right that they do.

Gaps

But where does independent research organization the Ada Lovelace Institute stand on the UK’s proposed regulations – which, at this stage, ignore copyright – given its mission that data and AI should work for all of society? Michael Birtwistle is its Associate Director, AI & Data Law and Policy. He told eForum delegates:

The UK’s legal and regulatory patchwork for AI currently has significant gaps. 

For example, it's not clear that there are regulators to enforce the principles in many areas where we might see high-risk AI use cases. Employment and recruitment being very noticeable examples. 

The second test is about whether regulators are currently properly equipped and capable to govern AI. Regulating AI is a very resource-intensive and highly technical process. Proposals give some new central support to regulators to understand AI, but they get no new powers or legislation to deliver on that. And there's no clarity around what resources they'll be given to deliver on new areas and new sets of responsibilities.

That hardly sounds positive. Birtwistle continued:

And last, it's important to think about urgency. 

The government proposes setting this new framework up over a year or more, but we're seeing the pace of AI development in the news every day. The widespread availability of foundation models like GPT, which are being integrated into software that we use every day, into our search engines, our office productivity software… we only see this accelerating AI adoption and the risks of scaling up existing harms.

So, there are challenges that are going to need a much faster response from the government. In practice, addressing these concerns means two things at the political level. One is a commitment to resourcing AI as part of our digital infrastructure, when its role in integration seems highly likely to grow.

What did he mean by that? He explained:

When we think about other domains where safety is important, where technologies are part of our infrastructure, the relevant regulators are funded to the tune of tens of millions, or hundreds of millions, of pounds a year. And we think that's the right frame of reference for the additional resources that regulators will need to deal with the impact of this powerful, general-purpose technology. 

And second, it means a commitment to legislation. But at present, there is no serious proposal to incentivize anyone in the AI value chain, be they regulators, developers, hosts, or users of AI, to make sure that the proposed principles are both lived and enforced.

So, there are clear gaps.

Indeed. Then he added:

I’m going to skim over the Data Protection Bill and just say it's an important part of the picture at a time where AI is being increasingly integrated into our lives. 

That said, there are a number of important protections around personal data use that have been weakened in the Bill. Data protection impact assessments and Data Protection Officers are being downgraded, and legal protections around automated decision-making are also being loosened.

Yet more signs of a dogmatic approach from the government, in fact. Whitehall has repeatedly said at eForums over the past three years that it expects regulators to shift their focus to unlocking innovation and growth rather than the EU approach of protecting citizens from vendor overreach.

Birtwistle continued:

The AI Summit [on 1-2 November] is being framed around this idea of AI safety, but it’s not an established term in the sector. If we're going to succeed at creating justified trust in AI usage, then it's crucial that we take a broad view of safety. 

Safety isn't just a buzzword; it has governance and regulatory contexts. For example, it means something specific in areas like food and product safety, in road safety, in civil nuclear cybersecurity, in online safety, and in aviation. And it is fundamentally about the anticipation, identification, prevention, and management of risks and harms. 

Beyond that, it's also about creating trust in technologies, in technical systems, and in industries. For example, making sure that people both feel, and are, safe to fly. And it can also be about achieving wider social goals and preventing diffuse social and economic harms.

So, we need new approaches for AI, almost certainly rooted in new legislation to give regulators the right tools to manage these risks.

Then he concluded:

There's intense debate over whether the main focus of policymakers should be on current, near-term risks, which are better defined and understood in evidence, or on longer-term, more extreme, but to some extent speculative risks that have dominated public debate, following recent jumps in AI’s capabilities. 

The renaming of the taskforce to ‘Frontier AI’, and the objectives for the government’s Summit seem to imply a focus on the longer term. But we think it is not just important but urgent that the conversation looks at the full range of AI risks and harms.

My take

Taken together, my three reports on UK AI regulation this week reveal a troubling situation: a government that is long on ambition, but short on detail. A government that is moving too slowly and with too little investment in regulating the sector. And one that may be too much in the pocket of US vendors, while paying too little attention to industries that are already critical, like the UK’s creative sectors.

The UK must do much better than this. So, let’s hope the 1-2 November summit addresses these concerns head on, and isn’t just PR and window-dressing.

 
