AI and creativity, part two – why vendors must work with, not against, the creative sector

Chris Middleton, March 1, 2024
Summary:
In the second report of our three-part series, we hear a clarion call for collaboration rather than adversarial debate. But why has an argument arisen between AI vendors and creatives in the first place?

An image of a brain, with one side of the brain overlaid with equations, the other side overlaid with lots of colours
(Image by Elisa from Pixabay)

In the US, the number of lawsuits against generative AI companies is growing. Among ongoing actions, the New York Times alleges that ChatGPT and DALL-E maker OpenAI scraped millions of its articles, opinion pieces, and stories to train its large language models, and thus – effectively – commercialize decades of the newspaper’s intellectual property and its historic investment in journalism.

But this week the story took an unexpected turn: OpenAI – a company that sacked its CEO for breach of trust in November, then rehired him and sacked the board (ker-ching!) – alleges that the New York Times “hacked” ChatGPT to uncover evidence for its copyright suit. 

The NYT has responded that the process could more accurately be called red-teaming or prompt engineering – active engagement with ChatGPT to infer its training data, rather than the passive consumption that generative AI seems to encourage. Either way, at the time of writing, the suit has become more about arcane aspects of the technology than about the underlying claim: that OpenAI infringed the paper’s IP on an epic scale.
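For readers unfamiliar with the technique, this kind of probing typically means feeding a model the verbatim opening of a protected text and checking whether it completes the passage word for word. Below is a minimal sketch of that idea in Python, using OpenAI’s published API; the article snippet, prompt wording, and model name are illustrative placeholders, and this is not a reconstruction of the NYT’s actual methodology, which has not been disclosed in full.

```python
# pip install openai
# A minimal sketch of memorization probing; not the NYT's actual method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-in for the opening lines of a copyrighted article
article_opening = (
    "In a quiet corner of the newsroom, the presses that once shook the "
    "building now sit silent, and"
)

response = client.chat.completions.create(
    model="gpt-4",    # illustrative model choice
    temperature=0.0,  # deterministic output makes verbatim recall easier to spot
    messages=[
        {
            "role": "user",
            "content": "Continue this passage exactly as originally published:\n\n"
            + article_opening,
        }
    ],
)

continuation = response.choices[0].message.content
print(continuation)
# If the continuation matches the published text verbatim, or nearly so,
# that is taken as evidence the article was present in the training data.
```

Whether running thousands of such prompts amounts to “hacking”, as OpenAI alleges, or routine red-teaming, as the NYT counters, is now a question before the court.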

Similar suits are ongoing in the US, brought by writers, image banks, illustrators, and more, against a number of providers. A day of reckoning for the likes of $80 billion OpenAI and its competitors, perhaps, with the former backed by the world’s most valuable company, $3 trillion Microsoft. But vendors must have known that day was coming; it is almost as if they have been trolling the planet as much as trawling the Web.

Or perhaps a more accurate metaphor is a siphon: once the uphill flow has started – the economic one, towards Big Tech – you can only stop it by removing the hose. That’s what these lawsuits seek to achieve – or, failing that, fair recompense.

In the meantime, as explored in my previous report on Gen-AI and the creative process, OpenAI has launched text-to-video app Sora, thus almost guaranteeing a legal battle with the world’s movie studios, and the powerful unions who represent that industry’s creative professionals.

From the outside, vendors’ gamble seems to be that the economic centre of gravity will shift so far and so fast towards Big Tech that, even if manifold lawsuits are successful, a handful of licensing deals will result, leaving most self-employed creatives in a ‘David and Goliath’ relationship with their own tech providers. 

Soon, those creatives’ clients will begin asking what seems like a reasonable question in 2024, whatever field of creative endeavour those professionals work in – writing, illustration, composition, design, filmmaking, acting, and more:

Why should I pay you hundreds, or thousands, of dollars for your days or weeks of work, when a simple text prompt gives me something similar instantly, and for nothing – or for my monthly subscription to my AI provider?

From the buyer’s perspective, that’s a good question.

Of course, these broad issues have long been familiar to musicians: a single stream via $50 billion market gorilla Spotify nets them perhaps $0.003 – and only if they own all the rights to that song. This is why most musicians are unable even to recover their costs, and why many earn more from selling a single CD than from tens of thousands of streams.
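To put numbers on that claim, here is a back-of-envelope sketch. The $0.003 per-stream figure is the one quoted above; the artist’s share and the CD margin are illustrative assumptions, not reported data:

```python
# Back-of-envelope streaming-vs-CD arithmetic for one musician.
# Only the per-stream payout comes from the article; the artist's
# share and CD margin are illustrative assumptions.

PER_STREAM_PAYOUT = 0.003  # USD per stream, paid to the rights holder
ARTIST_SHARE = 0.2         # assumed artist cut when a label owns the rights
CD_NET_PROFIT = 8.00       # assumed profit on one CD sold directly to a fan

streams = 10_000

rights_holder_revenue = streams * PER_STREAM_PAYOUT           # $30.00
signed_artist_revenue = rights_holder_revenue * ARTIST_SHARE  # $6.00
break_even_streams = CD_NET_PROFIT / (PER_STREAM_PAYOUT * ARTIST_SHARE)

print(f"{streams:,} streams, rights holder: ${rights_holder_revenue:.2f}")
print(f"{streams:,} streams, signed artist: ${signed_artist_revenue:.2f}")
print(f"One CD sold directly:           ${CD_NET_PROFIT:.2f}")
print(f"Streams needed to match one CD: {break_even_streams:,.0f}")
```

On those assumptions, a signed artist needs roughly 13,000 streams to match the profit on a single CD – the economics that the creative sector fears generative AI will now replicate for writing, illustration, and design.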

In the UK, the Publishers Association has alleged a wholesale breach of IP by LLM companies to train their generative models. Addressing a Westminster eForum on AI regulation last autumn, Caroline Cummins, the Association’s Head of Policy and Public Affairs, said:

The problem is copyright infringement on a massive scale, when text and data that is subject to copyright is used in the training of AI without consent or compensation.

In its January report on LLMs, the Communications and Digital (select) Committee of the UK’s House of Lords concurred, saying:

LLMs may offer immense value to society. But that does not warrant the violation of copyright law or its underpinning principles. We do not believe it is fair for tech firms to use rightsholder data for commercial purposes without permission or compensation, and to gain vast financial rewards in the process.

And the worst aspect of all this? Much of it seems like industrialized laziness – vendors playing to humans’ worst collective impulses: the desire to get something for nothing, and with zero effort. And this principle is now being applied to the very concept of human endeavour in every creative field, as though making something yourself is somehow a barrier to human progress.

Distributing the pie

So, are we witnessing a vast transfer of value and economic power from vulnerable creatives to AI companies, such as Microsoft, which are already richer than Croesus? 

Indeed, is this the real output of the network effect: an Einsteinian view of the tech-enabled universe, in which a small number of massive objects bend the fabric of every market around them? (In this picture, individual creatives are like tiny rocks in a far-flung asteroid belt, billions of miles from the sun.)

Helen Keefe doesn’t see things that way. She is Head of Policy and Regulation at Oliver & Ohlbaum Associates, boutique consultants to the media and entertainment sectors – with a recent client list that includes governments, Google, the BBC, Ofcom, Spotify, and News UK.

Speaking at a Westminster eForum policy conference on the future of creativity in an AI-enabled world, she responded to a question from diginomica by saying:

What I am trying to show is that you can have a lowering of the barriers to entry for creation, which you may see as a democratization of creation, and an increase in the diversity and plurality of creators. 

In the past, there have been many more barriers – in terms of gateways or distribution routes – for you to share that work with the public. 

This we see as being on the same spectrum, a continuation of what we see in social media or streaming for music, which is licensed, but you have much bigger catalogues. And so, to the extent that we have discussions around remuneration in those contexts, there's a big increase in the number of creators. 

So, it may not be that, necessarily, there's a ‘transfer’ [of economic power or value]. It can be that the pie grows, or that the pie is distributed differently.

What Keefe seems to be saying is that, as AI brings more ‘creators’ into the market, each creator is paid proportionately less – an argument that appears to assume a finite amount of cash to spend on creative work.

Yet that is not what is happening with AI. What some generative tools allow users to do is receive a text or image that may be based entirely on the work of a target creator, who is paid nothing at all. Or that artist may be asked to compete with the cost of a monthly AI subscription for what was once a premium, bespoke service. Surely, in that model, the AI company is simply monetizing that artist’s portfolio – without their consent?

Caroline Norbury is CEO of Creative UK, a non-profit that supports the interests of the creative industries – and actively invests in them. According to her, the organization primarily backs the small companies and sole traders that comprise roughly 90% of the sector. 

She was at pains to point out that AI is not a new phenomenon, yet the pace of change over the past 18 months has been unprecedented. As a result, she said:

Industries need quick solutions that match the pace of that technological development. We need support from policymakers and AI developers to really ensure that this technology works with us, not against us.

In its LLM report in January, the Lords’ Communications and Digital Committee urged the government to act swiftly and decisively on copyright, yet there is little sign of that happening.

Norbury continued:

I'm very confident that AI can be a positive development for the creative industries. But the main message I want to land is that, too often, this conversation is very binary. So, let's hope that we can get to the principle of a more inclusive approach!

But first, a reminder that the government’s own definition of the creative industries is any business activity which is primarily concerned with the production of intellectual property. 

By definition, therefore, the creative industries are a product of IP. And novel ideas are the bedrock of what we do. Without them, we wouldn't have world-leading adverts, literature, film and television, fashion design, or the cutting-edge gaming sector, or our rich and thriving media landscape. 

IP protects the integrity of our innovation, underpinning the framework with the ongoing revenue stream that creatives need in order to grow. At the moment, we have a gold-standard IP framework that incentivizes the production of novel ideas, products, and services, and plays a critical role in supporting the development of the future sector.

Failing to act in both the spirit and the letter of the UK’s existing IP regime might result in diminished foreign direct investment (FDI), she added, plus a loss of confidence in the creative sector – including among the overseas professionals it attracts.

Recognition and compensation

So, how to arrive at that more inclusive, mutually respectful approach? Norbury pointed to the collapse last month of the British government’s preferred option: what used to be called a gentleman’s agreement.

It’s an unfortunate reality that the attempt to establish a voluntary code of practice, among a group that comprised different parts of the creative industry – news organizations, the BBC, the British Library, the FT, as well as representatives from government, and tech companies such as Microsoft, DeepMind, Stability AI, and so on – failed. We got to a point where a voluntary agreement was not reached. 

So, that means that the emphasis is now back on the government to reaffirm its commitment to the existing legal framework on copyright and IP. And that really has to happen. Technological advancement in AI can't come at the expense of creators. And it can't come at the expense of devaluing the ideas and innovations that fuel this hybrid sector. 

Novel ideas can't be exploited without proper recognition and financial compensation. And we need to see that coming loud and clear from government and policymakers.

Hear, hear. Words that echo the House of Lords’ LLM report: the government must intervene and not leave it up to the courts to decide if IP theft has taken place. The Committee believes that it has, and it reached that view after quizzing expert witnesses from all sides – from copyright holders and industry bodies, but also from AI companies, academics, and lawyers.

But Norbury also warned against seeing this febrile debate as a simple case of ‘artists versus Big Tech’:

It isn't useful or accurate. By their very nature, the cultural and creative industries are highly innovative. But the strength of our IP and the value of our ideas can also benefit the development of AI. 

There are diverse and plentiful perspectives available among the different subsectors. We are a group of thinkers, after all, and we're well placed to support the imaginative use of this technology. The sweet spot is going to be that human interface between mind and machine, where creators have so much to offer.

She added:

Generative art is a good example. Machine-assisted artists have been working this way since the 1950s. It's not here yet, but what we need to do is develop tools that can further creative practices and, crucially, create new IP rather than mine existing concepts, innovations, and ideas. 

We have to be careful that the tech sector and the creative industries don't work at cross purposes. So, perhaps one way to strengthen that relationship is to consider the cultural and creative industries as collaborators from the outset. 

We're more in the tent than many people realize. […] We have to be part of the mainstream, not constantly sitting outside the development tent, if you like.

Wise words, which diginomica supports. Yet it must be said that the main reason talks to establish a voluntary, non-binding code of conduct failed is that AI companies refused to put their names to any agreement that imposed a financial burden on them, citing the “barrier to innovation” this would create. A cynic might read that as “so sue us”.

Norbury added:

Without a more concerted effort to develop technologies that assist the creative industries [as opposed to exploit them], [AI] is going to remain underutilized. [There’s little sign of that!] And that means its capacity to transform will remain unrealized too.

Finally, we need to ensure that there's real diversity in how large language models are built. And that we all understand the importance of context, when it comes to who's holding the pen, who's behind the camera, and who's holding the brush.

The same is true for those who are developing new technologies. We need to lean in and ensure that, wherever possible, we're building culturally sensitive models that are representative of a host of different viewpoints and cultural backgrounds. And that they're not just the perspectives and biases of a small group of people baking in and perpetuating existing prejudices and inequalities. 

Ensuring that diversity across gender, ethnicity, colour, geography, and different intellectual perspectives is paramount for both democracy and business relevance.

My take

It is hard to argue with anything Norbury said, though the flipside of her call for diversity in LLMs’ training data was revealed by Google’s problems with Gemini this month, as we explored in my previous report. So, what do creators themselves think? In my next, and final, report in this mini-series, we hear from the worlds of music, acting, and more.
