IP in crisis – the march of generative AI and its impact on the arts

Chris Middleton, April 20, 2023
Summary:
What are the legal dimensions of generative AI for creative people?

(Image: an AI-generated landscape, by Alan Frijns from Pixabay)

The sudden, popular uptake of AIs such as GPT/ChatGPT, Stable Diffusion, and Midjourney has thrown a spanner in the works of centuries-old conventions on copyright, intellectual property, and fair usage. But it is also a consequence of the law’s slow, precedent-based evolution: an inevitability.

Whenever loopholes or grey areas exist in law, someone will exploit them. In this case, by scraping data from the pre-2021 Web and using it to train generative systems. As we noted in a previous report, the arts were simply the lowest-hanging fruit: there for the taking, at scale.

Tech companies bet that ownership issues would be so diverse and diffuse that the legal system would take years to catch up. They were right: the law may never catch up, and may be forced to create a regulatory patch or levy – a sticking plaster on a body that has been blown to pieces.

A key question for lawmakers is: when is a work derivative? On the face of it, any piece produced by a generative AI trained on existing human content can only be derivative – especially if you instruct it to be. Yet software companies claim otherwise. Unconvincingly so far; but that will be for litigation to decide.

An increasingly common view on social platforms is that all artists’ work is derivative of others on some level: a bad-faith argument which implies that our human capacity for original thought and experience is on a par with a photocopier. 

Others have suggested that if a creative person can be replaced by GPT, then they can’t have been that skilled or talented to begin with. That argument is disproved by the artists who, in January this year, launched a legal action against Stability AI (makers of Stable Diffusion), Midjourney, and artists’ platform DeviantArt.

The suit alleges that work, including theirs, was scraped without consent to train AI systems, allowing prompters to reproduce the style of any artist, including those working today. In short: uniqueness and skill make work more likely to be devalued and copied, not less. (More metadata = more visible online = a bigger target for scraping.)

But if courts decide that the tech companies are correct, then who owns the rights in a generated work? The prompter? Their employer? The software provider? The (sometimes unknown) creators of the source material? Or no one? 

Does ownership matter anymore? Not to trillion-dollar corporations, it seems – until you copy their patents, of course. But it does to anyone whose copyright represents their career and income as a creative individual, and their intellectual property. To dismiss people’s complaints is therefore to dismiss their lived experience – and their need to eat and pay bills and taxes.

Boundaries are certainly being tested. Only this month, a professional entered an AI-generated image into the Sony World Photography Awards, won in his category, then revealed the deception. The purpose, said conceptual artist Boris Eldagsen, was to provoke debate and find out whether competitions are ready for the technology. “They are not,” he said.

It is hard to argue with that. But for creative people, the troubling dimension was that the AI was able to generate a more arresting ‘photo’ than the human contestants. So, perhaps Eldagsen’s actions were the real work of art in this case – as when Marcel Duchamp signed a urinal in 1917 and created one of the definitive statements of the 20th century.

Either way, the Guardian expressed the view that the technology “is on the brink of irreversibly damaging the human experience”. It certainly devalues that experience in financial terms, by implicitly rendering human talent of little monetary value or import beyond training the machine: abilities we have previously prized more highly than others, and seen as defining the human condition. In this way, it implicitly devalues human emotions, experiences, and skill too, along with devotion to art and craft.

And what happens when AIs start recycling AI-generated content, and we get an infinite regress of material based on other AIs’ output? Art that has an ever more distant link with humanity? If nothing else, that will be interesting. Perhaps even exciting.

But as AI companies know only too well, the problem is that the law progresses case by case, offering checks, balances, and remedies – with occasional giant leaps forward (and backward). And differing legal systems offer little in the way of global recourse when companies opt to scrape the Web to train large-language and generative AIs – exposing the myth that the industry can police itself.

Because that content was online in the first place, it may have been in a public space, the internet, but that does not mean it was public-domain data in legal terms. Proving that, however, demands action in the courts – with some confidence of winning.

In the meantime, lawyers rub their hands at the prospect of expensive litigation to establish whether an offence against existing laws has been committed, either in isolated cases or in class actions brought by creator communities. At least until those lawyers are replaced by AIs trained on centuries of precedent. At that point, will legal judgements suddenly become as affordable as a knock-off painting? 

Can the law catch up? 

So, just what is the legal dimension? As suggested above, one of the key challenges is that Big Tech platforms are increasingly global in reach, while local legal systems differ in both detail and nuance. 

And for Brexit UK – where politicians promise a bonfire of EU rules – this once-leading light in regulation and jurisprudence seems isolated and adrift. Do its allegiances remain with Europe, the US, or its new trade allies in Asia? The tragedy is no one knows, especially in government. Meanwhile, AI companies are in the ascendant.

However, one thing is clearer than you might think from the above. For generative AI to work, it must have access to data for crunching, mining, analysis, and pattern generation. And there are laws governing that. 

Matt Hervey is Partner and Head of Artificial Intelligence Law at law firm Gowling WLG. Speaking at a Westminster Legal Policy Forum, he explains:

On text and data mining. In 2014, the UK introduced a [fair use] exception for text and data analysis. There's no magic in the word ‘analysis’ as opposed to ‘mining’: the two terms are used interchangeably throughout the period and across the discussions. But the exception is limited to non-commercial purposes.

So, it's very much intended for academia [the Web’s roots]. And it requires that the works to be mined are fully accessible. So, for example, as an academic, you would already have a subscription to a journal. In essence, the exception means that, because you already had access to [the journal], mining couldn’t be prevented, and you couldn’t be charged more when it came to automating your research techniques.

Arguably, the problem with our law is, first, that all of the examples at that time – around 2014 – were about automatically extracting facts, including trends and patterns in facts: things that are not protected by copyright.

And it is not clear – to me, at least – whether it was ever intended to cover using copyrighted works to train AI. Especially generative AI, which is focused on distilling expression, not facts, and producing expressive works, such as artistic, literary, and musical works.

Second, the UK exception is much narrower than in other significant jurisdictions. That includes the EU, where mining is allowed for commercial purposes – subject to an opt-out by rights-holders. And the US, where data mining is broadly believed – though this has been challenged – to fall under an open-ended fair-use exception.

Uncertainty

The question therefore becomes: what is fair use? The problem here is we are already so far into what might loosely be called the GPT era that fairness has been left to courts to define at some unknown point in the future. 

So, what is vital now is clarity from political leaders. And in the UK, that is barely an option for a country that (in the words of an anonymous member of the House of Lords) has been “impaled” on Brexit for the best part of a decade. 

Hervey continues:

The UK Government consulted on the text and data mining exception over a period of about a year and a half, but has yet to actually reach a landing on what its policy should be. So, I would say we're in a drift relative to other jurisdictions.

What about computer-generated works? He says:

A handful of countries do provide protection for computer-generated works. They include the UK, Ireland, India, Hong Kong, South Africa, and Ukraine. 

The way the UK has implemented its international treaties, computer-generated works are properly protected here, regardless of the jurisdiction in which they were made or the nationality of the persons responsible.

Now, the legal effect, at least in the UK and Ireland, is uncertain, because the test for originality was changed centrally by the EU to require an author's own intellectual creation. And it is not clear if, and how, this is satisfied in respect of computer-generated works. 

More generally, I expect to see an increasing debate on whether the contribution of the user of generative AI suffices for that user to have a direct claim to be the author – and for that, in effect, not to be a computer-generated work. Typically, the user enters a prompt to guide the output, and then may take further steps to refine and expand the image. 

UK case law has precedents for verbal instructions giving rise to joint authorship of an artistic work, which lends some credence to the idea that someone entering a prompt is actually an author of an artistic work. But so far, the only detailed treatment of this has been by the US Copyright Office. And it has issued guidance that prompts alone do not suffice, at least for the purposes of US law.

Zoom out and the prospects for the law to provide more clarity seem grim.

It took the UK Government about 13 years to make the use of video recorders legal, yet there was widespread use throughout that time. Meanwhile, patent law substantially crystallized before most IP practitioners were born.

So, the likelihood of IP law moving fast, or as fast as technology, is low. I would also highlight that there's been no economic study on the impact of creative AI since a Nesta analysis in 2014. But really, the assumptions made in that paper no longer seem appropriate in light of generative AI.

He adds:

If we can't tailor-make an exception in this country, and we don't know what the impacts are going to be, should we follow the US and have an open-ended fair-use exception, which will be considered by the courts on an ad hoc basis? Or, should the UK follow what other countries in the EU, North America, and elsewhere have done, and have some sort of levy, as on recordable media? 

Do we need to have a generative AI levy in order to balance the economic interests of all parties?

My take

An excellent question. But for individual creative people who risk – despite their talents – being swept aside by the economic and developer might of trillion-dollar corporations, these questions are more immediate and acute. 

Because in a world of streaming, micropayments, and ‘exposure’, in which more and more consumers have been taught that music, photography, design, illustration, and writing are things you get free with your phone, how many creative people can earn a viable living in the first place?

Be too creative, and too successful at it, and your work is all the more likely to be cloned and replicated. Logically, over time, this will depress even the most creative humans’ ability to earn a living, at least online.

In this sense, generative AI is like a global game of whack-a-mole. And mega-wealthy developers are the hammer.
