AI and creativity, part three - musicians, the BPI, and Equity speak out

Chris Middleton, March 1, 2024
Creative industries and new technology are natural bedfellows. So why does one sector feel like it is being used as a one-night stand? Or might an LTR develop?

An image of a humanoid robot with its face pulled off
(Image by 0fjd125gk87 from Pixabay)

In my first two reports on generative AI and the future of creativity – my personal perspective and industry’s call for a more mutually supportive environment between tech innovators and the other creative sectors – we saw that an increasingly adversarial relationship is playing out in courts and parliamentary chambers worldwide; houses that are designed for that purpose, of course. But why does that confrontation exist? 

That confrontation exists because the creative sectors can be defined as those built on longstanding conventions of IP and copyright protection – a definition that includes technology itself. Given that many Gen-AI developers have scraped copyrighted data to train their systems, many creatives feel that their only recourse is litigation: wresting payback from companies that are, in some cases, the richest that have ever existed. At the same time, policymakers are sitting on their hands on the bus of survivors (to paraphrase David Bowie) – a situation that is increasingly untenable.

But not every creative person sees the situation as a fight for survival or a transfer of economic power and value to Big Tech; far from it. Visual artists and musicians have long been at the cutting edge of new technology. Indeed, the music and tech industries have always been fellow travellers, since the days when microphones changed the very nature of singing from operatic performance to something more intimate and natural, and Les Paul’s experiments with multitrack recording in the 1940s and 50s turned the studio itself into an instrument.

In many ways, to be a musician is to be a technologist, with many of us – I play several instruments and own a studio – seeking out the latest hardware, sound library, or app to help us push our own creative boundaries, and sometimes just to experiment and trigger new ways of thinking. A new tool can open up entirely new genres and avenues of self-expression: the history of popular music proves it. At the same time, many musicians compile and refine a group of trusted systems to support their human work.

So, it’s no surprise that AI is the latest in a decades-long line of digital tools, with apps that might suggest melodies, chord structures, rhythms, and more, in a human-centric production process. And these processes might be applied in any genre, not just obvious ones such as electronica, synthwave, or trap. 

The same applies to the visual arts, where creators have long used film, video, photography, projection, lighting, and computer-generated elements in their work.

Charlie Hooper-Williams is a composer, inventor, and multimedia performer who is better known as Larkhall. (As a technologist, he was one of the developers of music app Shazam.) His live work combines classical piano, audio processing, custom-built machines, and live-generated visuals from a system he calls Otto.

AI can help creative people work much faster, he explains:

It can speed up a lot of existing work processes. That's happening already with coding. Generative models are really good with code because it's structured, and they're great at spotting problems. And most developers that I know are already using these in their day-to-day workflows. 

And then there is something that I haven't heard discussed: this idea of improving quality in places where you wouldn't have hired a human in the first place!

So, if I'm making art for a Spotify playlist, that's not something I would have hired a human for. I would have done a bad job myself because I'm not a visual artist. So, I can just go to [generative image app] Midjourney and get something that's much better very quickly. That, to me, is a win.

And none of this impedes his own self-expression. Indeed, it enhances it, he explains. But he acknowledges that a halcyon world of enhanced, speedy workflows is not without its challenges – many of which may not be apparent for years. 

He explains:

There's definitely going to be an impact on jobs. But the thing I see under-discussed is the talent development pipeline. 

Higher-end work will be more resistant to AI replacement, in part because people want a human connection in what they're doing. They often want a specific person – not just for their existing style [which might be copied by AI], but their unique take on specifics, on whatever the project is. 

Where I see it being more problematic is that, in order to be a higher-level talent, you need to start with entry-level work, and move on up. And it's that entry level that I see being eaten by AI systems already. 

I think a lot of people are not going to hire a junior illustrator anymore. They're just going to go on Midjourney. And if you can't be a junior illustrator, then where is the next generation of senior illustrators going to come from?

An excellent point, and one that diginomica has touched on in previous articles about automation, robotics, and AI: the risk that, by consuming bread-and-butter work – in any industry, not just the creative sectors – AI pulls up the ladder to professional status behind those who are already at the top. This needs urgent consideration.

At the same time, creatives’ ability to make money, especially if they are self-employed, is undermined by clients’ discovery that they can get professional-grade work with a simple text prompt. In such an environment, why pay a human creative more than a token fee?

Hooper-Williams also acknowledges the issues of copyright and remuneration, but says:

What I'd like to talk about is models for how we might deal with this, because what I see a lot of is, ‘Either let AI companies do whatever they want, or let's put the cat back in the bag and not have AI.’ Those are both not good!

The problem is the law takes ages to catch up. We have had streaming audio for 20 years, but we are still not paying artists fairly. There's just not been any progress on that! And this really has changed how musicians can make a living – or not make a living.

So, we need a sustained effort into finding a good way through, that doesn't go to either of those extremes.

What’s the answer? 

Performing rights societies – such as PRS in the UK – track how any music registered with them is used, in order to remunerate rights holders. So perhaps that model could be applied to work ingested in training data, he suggests. However, this assumes that legitimate, rather than pirated, work has been scraped, and also relies on AI companies being transparent – a leap of faith in many cases. 

Often, the odds are stacked against performers by design on some tech platforms. For example, if your music is used by someone in a TikTok video, that counts as a single stream – even if that video is viewed a billion times. A world in which creatives are plankton sucked up by whales.

Hooper-Williams says:

We could also have an attribution model. This doesn't address the financial side of it, but in terms of attribution, future AI models could have a way of tagging generated content in a human-readable way – metadata that exposes what weights from the training data went into it.

In the meantime, artists can ask search engines not to index a page, he suggests, or even use apps like Nightshade [he didn’t mention it by name] to ‘poison’ their work, so that any AI that scrapes it produces gobbledegook. Currently, Nightshade applies mainly to the visual arts – making pixel-level changes to original works – but it seems inevitable that similar tools will appear for other media.

Consent and transparency

So, what does the music industry itself think? Sophie Jones is Chief Strategy Officer for the BPI, which represents 500 music companies, from local labels to the biggest multinationals.

She says:

As part of the £100 billion ($126 billion) a year creative industries, music sits high on those shoulders. We're number two as an exporter of recorded music and the UK is the third biggest music market in the world. 

That incredible success is right at the centre of how we think about AI, because we are all about preserving the ability of that creativity to happen, and for artists to be able to make their music and share that with fans globally. And for our music businesses to continue to be able to invest and enable that creativity to flourish.

But the other frame that’s really important for us is that creativity and technology are natural partners – though policy discussions often separate them. AI should be developed with creators’ involvement and consent – led by their creativity, and by the desire to create fantastic music that fans can engage with and enjoy. It’s about preserving human creativity.

The problem is the complete lack of transparency from many tech companies, combined with a general awareness that industrial-scale copyright infringement has taken place. 

She explains:

Without that transparency, we can't move to where we want to be, which is proper licensing partnerships, which will provide fair recompense and consent for the creators and music businesses that are creating that music in the first place! Consent and transparency are absolutely vital components of how we take this forward. 

And we're very mindful of the ways in which people's image and voice likenesses are being used as well. Music has been at the sharp end of seeing people's identity scraped without their consent, via fake tracks and deep fakes of artists.

Plus, there's a risk of generated music flooding the market.

Indeed, generative imagery is already submerging the visual markets in a sea of me-too works, muscling out original perspectives. And, as we saw in my earlier report, it is in some cases priced higher than work by human artists.

And this is despite the fact that US law, at least, prevents the copyrighting of entirely AI-generated pieces. Indeed, this is part of the problem facing the creative sectors: the fact that the very notion of copyright is being dismantled by sheer economic force, backed by the most valuable companies in the world. It’s hard not to see that as the deliberate creation of an economic black hole.

But doesn’t this lower the barriers to entry for everyone, and thus democratize creativity? Jones says:

Lowering the barriers has been a fantastic opportunity for many creators, but we're already in a market in which upwards of 100,000 tracks a day are being put onto the biggest streaming platforms. 

So that level of competition is already incredibly intense. And if we're in a world in which generated music is doing that at an exponential level, then it raises some very big questions about how genuine human creativity is going to be able to find its corner and pay its way.

So, what’s the answer?

I hope it’s constructive and collaborative. And that it is about exploring ways to develop partnership between AI companies and the creative sectors, based on the principles of consent and transparency. So, I'm disappointed that the conversations to date fell down [on creating a UK voluntary code of conduct].

We need a robust mechanism that allows us to licence and get remuneration whenever music is used to train AI models. And we are ready and willing to participate in those conversations.

A fightback against adversarial tech companies is certainly beginning in some quarters, judging by the combative tone of at least one senior artists’ representative.

Lynda Rooke is President of actors’ union Equity, which represents 50,000 people who work in the entertainment sector – an industry that, as we explored in my previous report, now faces extreme upheaval via text-to-video app Sora and other AIs that generate photorealistic scenes, actors, special effects, and more.

She says:

We're not opposed to AI, but it must be harnessed to support human-centred performing arts and entertainment!

Generative AI is already presenting significant challenges to our long-established frameworks for protecting performance rights and intellectual property. And those frameworks allow performers to exercise control over their work, negotiate their remuneration, and – vitally – provide transparency.

There’s that word again. The collapse of the recent voluntary code of conduct in the UK – see diginomica, passim – and the lack of remedial action by the government is not helping, she says. Especially when AI companies cling to the “misconception” that scraping copyrighted data for training AIs is acceptable behaviour.

There needs to be urgent clarification. It cannot be used to absolve AI companies of their responsibilities under copyright and intellectual property law. Guidance by the IPO [Intellectual Property Office] is urgently required to prevent continuing, widespread illegality.

She adds:

The current government's instinct may be the pursuit of free-market values, and the enabling of growth and innovation in AI, but the creative industries are an economic powerhouse.

Explicit protections

If musicians are plankton being consumed by Big Tech whales, then many actors are no different, she explains:

I've heard of colleagues arriving for dubbing and ADR [automated dialogue replacement, where an actor re-records dialogue in a studio] sessions to be presented with a roomful of cameras to capture their facial movements, which can then be used digitally to create new performances. No prior consent was asked for, and the artist is under pressure to accept those terms on the spot.

Similarly, supporting artists are being offered minimal payments for being used as training data for digital doubles that can be used in perpetuity.

That’s right, and this problem is commonplace among AI video companies such as Synthesia and others, and AI fashion model providers, such as Lalaland and its many competitors.

So, what needs to happen? She says:

The Copyright, Designs and Patents Act of 1988 has to be updated and amended to introduce explicit protections for performers against performance synthesis, as well as a stronger image rights framework. 

The Act was created back when the compact disc was the dominant format for fixing [sic] performances. Since then, only piecemeal updates have been made to the Act.

And finally, protection of historic performances is another issue, where previous contracts used to engage performers – before the development of generative AI – are now being interpreted as authorizing performance synthesis and cloning. That is not a reasonable interpretation of these clauses!

We call on government to pass statutory regulations, which clarify that previous contracts used to engage artists should not be interpreted as authorising this, and that they require express consent.

My take

Transparency, consent, remuneration. Are these too much to expect – or even hope for – from some AI companies? 

The impression from all these discussions is strong and distinct: the handful of AI innovators and their Big Tech enablers urgently need to grow up, and stop behaving like petulant teenagers who are running around, breaking things, and daring the rest of the world to sue them. 

Alas, that may be more hope than expectation with the likes of Sam Altman ruling the roost. Even his own board wanted to sack him – and briefly did. But money talks. And big money shouts, screams, and slams doors, it seems. We need grown-ups in the room.

• Disclosure: the speakers all appeared at a Westminster eForum policy conference on AI and the creative sectors, on 22 February.
