IP in crisis – can the music industry survive the AI onslaught?
Summary:
The music industry has often been the canary in the coal mine of new technology. Can it keep breathing in the latest cloud of innovation?
Few arts have been so consistently intertwined with new technology as music-making and sound. All musical instruments are artefacts of the age in which they were made. And with each tech evolution, music itself changes.
And now artificial intelligence is here to change it again. Perhaps to help creative people; or to make something new, unprecedented, and exciting; or to change existing music just enough for no one to sue; or simply to fill the world with deep fakes. Probably all of the above.
To be a musician is – at least to some degree – to be a tech innovator, because music, sound, and new technology have always been linked, right back to the drum, piano, saxophone, organ, or harpsichord. Tools often come to define an era as much as the music they inspire and enable: the symphony orchestra, the microphone, the electric guitar, the synthesiser, the drum machine, the sampler. Meanwhile, formats like the vinyl record have long encouraged artists to push creative boundaries.
However, each innovation, and (since the 19th Century) each recording medium and format, has not only changed music itself, but also how it is consumed and sold. So, early in its history, the music industry became focused on the other stuff: intellectual property, publishing, and rights – and on retaining as many of them as possible.
But now generative AI is here to raise the stakes: in addition to the decades-old challenges of plagiarism, copyright infringement, and unlicensed sampling, technology can begin simulating the essence of an artist, producing works in their unique style and even voice. For free.
It stands to reason, therefore, that generative systems must have been trained on copyrighted work, because these so-called AIs are neither artificial nor intelligent: they can only recombine what they have ingested. In essence, they are derivative-work generators.
So, far from being merely the latest tool for innovative artists to play with (and many are doing just that), AI may begin posing an existential threat to the very concepts of intellectual property, licensing, and rights – not to mention the value of human creativity. Issues that affect artists directly.
Only this week, an AI-generated track purporting to feature vocals by Drake and the Weeknd was pulled from streaming services after being posted by user Ghostwriter on TikTok. But not before it had gained 15 million TikTok views, 600,000 Spotify streams, and over a quarter of a million YouTube hits: far more than most original bands.
(Remember: in the age of streaming and micro-cent payments, most artists struggle to make any money at all, because the network effect has taught consumers not to pay for their work.)
A spokesman for Universal Music Group (UMG) said:
The training of generative AI using our artists’ music (which represents both a breach of our agreements and a violation of copyright law), as well as the availability of infringing content created with generative AI on DSPs [digital service providers], begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans, and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation.
So, generative systems must be new, right? Wrong. Generative music tech has been around for decades; it just wasn’t called ‘AI’ (and in general, today’s AIs shouldn’t be called that either).
For example, Koan, developed by British innovators Sseyo (Tim and Peter Cole) in the 1990s, was original-composition software that enabled artists to specify the rules and sounds for a piece (like planting seeds in a garden), while the computer generated the actual music (grew the flowers) locally.
The processor as artist; music that never bloomed the same way twice. However, Sseyo’s problem was being too innovative too early: arriving in the pre-smartphone age of desktops and client/server computing, when chips were slow and ecommerce was in its infancy. The likes of Brian Eno and composer Tim Didymus aside, artists and the public alike were baffled by the concept of generative systems and music created by prompted machines.
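To make the concept concrete, here is a minimal, hypothetical sketch of rules-based generative music in the spirit of systems like Koan – not Sseyo’s actual software or engine, just an illustration of the artist setting the rules while the machine composes within them. All names and values are invented for the example.

```python
# A hypothetical sketch of rules-based generative music: the human plants the
# "seeds" (scale, note weights, rhythmic values) and the machine grows a phrase.
# This is an illustration only, not Sseyo's code or any real product's API.
import random

SCALE = ["C", "D", "E", "G", "A"]   # artist-chosen scale (C major pentatonic)
WEIGHTS = [4, 2, 3, 3, 1]           # bias the generator towards certain notes
DURATIONS = [0.5, 1.0, 2.0]         # allowed note lengths, in beats

def generate_phrase(bars=4, beats_per_bar=4, seed=None):
    """Grow a phrase from the rules; a different 'flower' every run unless seeded."""
    rng = random.Random(seed)
    phrase, remaining = [], float(bars * beats_per_bar)
    while remaining > 0:
        note = rng.choices(SCALE, weights=WEIGHTS, k=1)[0]
        duration = rng.choice([d for d in DURATIONS if d <= remaining])
        phrase.append((note, duration))
        remaining -= duration
    return phrase

if __name__ == "__main__":
    for note, duration in generate_phrase(bars=2):
        print(f"{note:>2} for {duration} beats")
```

Run it twice and the melody differs; fix the seed and it repeats – the same trade-off between authorial control and machine surprise that generative composers were exploring decades before anyone called it AI.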
Since then, there have been countless tools for musicians that, among other things, suggest original chord progressions and top-line melodies, generate entire sequences, or offer libraries of copyright-free loops, samples, and sounds on which to base new work. Innovations that many artists regard as inspirational creative assistants, or as prompts for original pieces.
But today’s generative AIs – they’re not AIs, but let’s move on from that – have a colossal advantage over previous innovations. Some have been trained on a whole Web full of data, including on copyrighted material that was scraped without artists’ consent, and our legal systems are nowhere near catching up. As we’ve seen in previous reports in this series, the law has barely made it into the 1990s, let alone into the realm of GPT and its ilk.
The future viability of careers in music
So, what does the music industry think? Who better to ask than the entity that exists to represent the rights of artists in their own works? The Performing Right Society (PRS) is the rights management organization for more than 160,000 songwriters, composers, and music publishers.
Alexandra Condon is Head of Policy & Public Affairs, PRS for Music. Speaking at a Westminster Legal Policy Forum this week, she said:
We look after the rights of songwriters, composers, and publishers, and we license those so our members can get paid. Many of them are keen to engage with AI technology, embrace it as a new tool, and push the boundaries.
But others view it with more suspicion. Composers, in particular, feel really vulnerable about their role in the industry being effectively supplanted. It's a bit of a shortcut to a career, and a lifetime working on that.
There are serious misgivings about how creators are being treated in the AI system and what that means for the future viability of careers in music.
It’s worth adding a point here that is rarely discussed. Free tools aside, music is expensive to make and record to a professional level. If composers, songwriters, bands, and performers lose yet more income streams from the few they still have, then the whole edifice of a viable industry – instrument makers, audio equipment manufacturers, specialist software houses, studios, live venues, publishers, and more – may crumble. Who benefits from that, beyond a handful of trillion-dollar Big Techs and remote billionaires?
For her part, Condon is concerned that across-the-board fair-use copyright exceptions may be granted to AI providers and users for every output, including commercial applications. A move that would effectively tear up the concept of owning rights in a piece of music, if anyone can simply feed a song into a machine and produce something similar – like a low-grade photocopy. She added:
An exception, to our mind as a collective management organization, is entirely unnecessary and completely unjustifiable. The copyright-protected content in question is also a fundamental part of the machine-learning process. AI tools don't have anything to learn from without that content.
I’m assuming you want high-quality inputs, otherwise your outputs are not going to be high quality. But that's an act of reproduction. And it's licensing or the right to authorize the use of one's work that is the foundation stone of copyright.
Creators in all creative industries, not just music, rely on copyright as the only way to realize value from their work: authorization is generally granted in return for remuneration. An exception obviously eradicates any option for remuneration – at that stage anyway.
So, we have to start drawing a distinction between whether it is a public good to allow unfettered access to copyrighted materials, or whether it might simply be an undesirable inconvenience for would-be users [to acquire the rights legitimately].
This is an excellent point. The thrust of 21st Century technology has been to regard any impediment – any friction in a technology-assisted process, whether it be shopping, banking, transport, or writing a song – as automatically bad. But is it? Taken too far, that attitude risks turning us all into supine consumers with low attention spans.
Wealthy tech evangelists love mantras like, “Move fast and break things” and “Fail fast and move on”, but doesn’t that just fill the world with broken things, like spoiled brats who expect others to clean up their mess?
An assistive tool to human creation
In the creative realm at least, friction can be a positive. Sitting at a piano, guitar, or laptop for hours to write five minutes of good music is a process of discovery, play, experimentation, learning, self-improvement, pleasure, self-expression, and collaboration – even useful tension.
Working methodically at something to achieve a creative breakthrough should never be seen as boring or bad. That said, needless barriers to creativity can be a frustration. It works both ways. Condon seems to share this perspective:
All creative content, whether it be music, film, voice acting, art, or photography, it's not data, it's human expression and human emotion. It's a record of our lived experience.
One of the things that we really need to emphasise is that licensing isn't a barrier to innovation or growth. A viable business model is one that can cover the cost of doing business, and invest in its raw materials. Something like $4.5 billion of venture capital was poured into the AI industry in 2022 alone. So, the money is there. And the music industry has a history of developing licensing solutions.
That's the beauty of licensing. It's flexible, it's adaptive, it’s adaptable to meet the needs of the customer, while respecting the wishes of creators and ensuring their remuneration. While also ensuring that both sides are aware of their rights and obligations.
She added:
But I would just say that licensing regimes should be voluntary. There shouldn't be any insinuation of compulsory licensing.
There's a huge emphasis on the UK being a world leader in AI, and we're currently in the top three globally as it stands. But to continue that doesn't require a complete deletion of ISO standards of protection. The UK is generally deemed a very good place to look after your IP. And it's a good place to be a creator. And that comes from having a balanced framework.
The balance is innovation with responsibility and accountability.
Well put. So, what about artists who use AI in a process of legitimate, considered creation – as an assistant, or as a tool for creative play and experimentation, rather than as a lazy means of ripping someone off? She said:
The use of AI as an assistive tool to human creation – that is where it aids the human author to improve their instrumentation, or helps write the top line or melody – that doesn't alter the characteristics that determine copyright eligibility.
But the position on a purely AI-generated work where there's minimal human intervention is much clearer. And I think there are real questions as to the desirability of those outcomes.
So, where does licensing come into play? And what are the long-term solutions that would allow everyone to co-exist happily? She said:
As a starting point, you have to ensure compliance with existing regulation and strengthen those enforcement mechanisms. But there's increasing evidence that the industry is already experiencing infringement on a massive scale. And Universal have thrown down the gauntlet by asking DSPs to prevent AI systems from scraping their services.
So, let's create a framework in which we encourage technology to grow in a way that's beneficial for all stakeholders, rather than just at the expense of creators. We need a holistic framework that sets out some really high standards of good behaviour – characterised by transparency, auditability, and accountability, but underpinned by the fundamental principle of authorization.
Nobody’s work should be ingested or used without their permission. This is the core principle of copyright. And that enables reinvestment in the content upon which your services ultimately depend.
She concluded:
AI work should be clearly labelled. It should be identifiable as the product of an AI system, and not a human creation. We need access to information about what the system ingested, and how the algorithm has used the work to produce the output. We need clear lines about how the creator's source material will be credited and remunerated. And we need to understand who's accountable.
The mechanisms which deliver this information should be baked into AI systems before they hit the market. It shouldn't be an afterthought.
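What might such baked-in disclosure look like in practice? One hypothetical shape, sketched below, is a provenance record attached to every generated track: what system made it, how much of it is human, which works were ingested and under what licence, and who is accountable. The field names are illustrative assumptions, not any existing industry or legal standard.

```python
# A hypothetical provenance record for an AI-generated track, sketching the kind
# of disclosure described above: labelling, ingested sources, credits, and
# accountability. Field names are invented; no existing standard is implied.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class IngestedWork:
    title: str
    rights_holder: str
    licence: str              # e.g. a licence reference, or "unlicensed" (a red flag)

@dataclass
class ProvenanceRecord:
    output_id: str
    generated_by: str          # model or system name and version
    human_contribution: str    # e.g. "prompt only", "melody co-written"
    ingested_works: list = field(default_factory=list)
    accountable_party: str = ""

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

record = ProvenanceRecord(
    output_id="track-0001",
    generated_by="example-model v1.0",
    human_contribution="prompt only",
    ingested_works=[IngestedWork("Example Song", "Example Publisher", "licence #1234")],
    accountable_party="Example AI Ltd",
)
print(record.to_json())
```

The point of the sketch is less the format than the timing: this kind of record is cheap to emit if the system is designed to produce it from day one, and near-impossible to reconstruct afterwards.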
My take
This recurring suggestion of labelling AI content is a good one: the only objectors would be people or organizations that don’t want others to know that machines are making their work. (Many creators proudly associate themselves with tech, of course; and that’s fine. It’s honest and transparent.)
But Condon’s last point is the critical one – sadly for many creators. While some AI makers in every field have stressed the need for consent and licensed content (and have put mechanisms in place to ensure both), the ones driving the market have been unethical and cynical on a breathtaking scale.
And trillion-dollar corporations are backing them, in a legal regime – the US – which assumes fair use has taken place. Until someone is wealthy enough to sue them, of course. The rest of us now live in that world. Whether we like it or not.