Generative AI - will it be summer for humanity? Or a legal winter for vendors?
Summary: An epochal discussion at the World Economic Forum on generative AI prompts the question: will AI vendors be sued by every content creator on the planet?
Artificial Intelligence finds itself in a Springtime of public acceptance. The cause of this blossoming of support – after years of media fear-mongering – is Generative AI. Popular examples, of course, include ChatGPT, Midjourney, Stable Diffusion, DALL-E, and others, which have become public playthings almost overnight.
Their heralds were Alexa, Siri, and other digital assistants: systems that reassured us that we could safely talk to machines, and, in response, they would play us a cheery song, dim the lights, and order some slippers for our looming obsolescence.
More innovations are to come, with OpenAI’s GPT-4, DeepMind’s Sparrow, Anthropic’s rival Claude, and more. Google’s earlier chatbot trials anticipated the AI Spring, but failed to bask in it. Its tools were too early and too threatening, perhaps, when all the planet’s pleasure-seeking apes needed were some toys to make them docile.
Of course, the public’s enthusiasm may wane just as rapidly, once the early bloom has fallen. Remember: we are not always marching in a straight line towards the future, abandoning the past as we go. Sometimes we circle back; sometimes controversial technologies are rejected.
So, should decision-makers embrace Generative AI – even if public support is one day tempered by job losses? What are the implications of the (apparent) revelation that creativity is no longer a human preserve? Can every job be automated if there is sufficient data about it? Or are AI innovators about to be hit by a wave of lawsuits? More on the latter in a moment.
One thing is certain: the words of Dr Anders Sandberg of Oxford University’s Future of Humanity Institute sound increasingly prophetic. Speaking at a 2016 AI and robotics event, he warned:
If you can describe what you do for a living, then your job can – and will – be automated.
The World Economic Forum discussed this issue in Davos, with its characteristic mix of evangelical innovators, sceptical politicians, and hosts who knew to give a fait accompli a veneer of gravitas.
On the panel were Azeem Azhar of podcast Exponential View; Priya Lakhani, founder and CEO of education engine CENTURY; Hiroaki Kitano, Senior Executive VP and CTO of Sony; Refik Anadol, an artist whose adoption of AI suggests we should not feel threatened by it; and Jean-Noël Barrot, French Minister-Delegate for Digital Transition and Telecommunications.
Azhar expressed scepticism from the get-go. He said:
These tools now seem capable of human-like, sometimes superhuman, output. But in reality, they're statistical beasts lacking any real understanding of the world of facts and logic. They are adept liars in some cases. Like many technologies, they convey both potential and instability.
As I explain later, those observations may prove to be significant. But for now, the point is that natural-language tools only seem intelligent because they use the same words as we do. In fact, they are neither intelligent nor ‘artificial’. These human-made systems merely crunch human-made data. And as we will see, that fact may have legal ramifications.
Yet at the same time, they are reportedly capable of passing exams or of passing themselves off as journalists. Indeed, Harvard University’s Nieman Foundation for Journalism has warned:
We will see ChatGPT and tools like it used in adversarial ways that are intended to undermine trust in information environments, pushing people away from public discourse to increasingly homogenous communities.
(The first part of that statement is probably correct, but I doubt the second is. The rise of extreme, adversarial communities seems more likely.)
For CENTURY’s Lakhani, generative AI will certainly have an impact on education. She said:
When ChatGPT came out, within hours every educator was on social media talking about AI.
Some educators are seeing it as an enabler, and that's fascinating. They’ve gotten over their digital fatigue after the pandemic and they're interested in technology. […] They’re thinking, how can we use this as an enabler in different contexts?
But then you get the sceptics who are terrified because kids are using it to do their homework, which has real-world implications.
Think about the lost learning over the pandemic. The children who weren't online or able to learn in some form. If they’re cheating their way through homework, how does a teacher know that? Especially if they’re teaching one to 35, or one to 50 in some countries. They're already exhausted.
Then she added:
[The real problem is that] the assessment system in most countries is flawed; it has needed reform for many years. Memorizing is the key strategy for passing exams. And if ChatGPT can do that better than humans [then we need to think about how] human intelligence differs from artificial intelligence.
[For example], why do we spend weeks learning long division? Why not spend that time learning about financial literacy, mental health, or fake versus fact, so that when our students are logging in and using this technology, they can see that a source might be made up?
This technology could end up being the biggest opportunity for the educational landscape.
A plausible and – in some ways – exciting perspective. Yet also a troubling one, because it suggests that layers of human knowledge may be stripped away and relegated to the past as we increasingly trust machines to give us facts in the present. Richer social engagement for our children, perhaps, but built on deep factual disassociation.
A crisis for creators?
So, who is using generative AI? A show of hands among delegates revealed that nearly everyone in the room had experimented with an application, but only five percent were deploying one professionally. The findings suggest two things: a handful of organizations have rushed into early adoption, while the majority have yet to be convinced that ‘plaything’ equals ‘useful tool’.
However, the onset of AI Spring demands a policy response. One that anticipates how organizations might adopt generative systems, and what the implications might be for society. Noting that AI might become “one of the key interfaces that we have with machines”, French Minister Barrot said:
I'm optimistic that if they take on an important role in the labour market, then, as with any technology, this will allow us to shift our attention, our value added as humans, to other purposes.
But IT companies have been saying this for years, claiming that AI, robotics, automation, and other ‘Industry 4.0’ technologies will help us focus on the unique skills that make us human. Yet generative systems seem to be doing the exact opposite.
It is almost as if AI’s designers have looked at the world in 2023 and decided that the biggest problem we face isn’t climate change, war, nuclear Armageddon, poverty, rogue asteroids, heart disease, or cancer. Instead, it’s musicians, poets, authors, editors, artists, designers, photographers, cooks, philosophers, academics, inventors, scientists, journalists, publishers, marketers, filmmakers, architects, and historians… all those creative people and knowledge workers.
If only we could stop paying them a living, the coders implicitly say, then we could finally get on with all the important stuff that can’t be automated. Like… coding (automatable); banking (ditto); accounting (ditto); investment advice (ditto); logistics (ditto); legal services (ditto); medicine (ditto); and surgery (ditto). You see the problem: we’re back to Sandberg’s 2016 prediction.
With sufficient data and metadata, pretty much anything can be automated. But why start with art, with creative human endeavour? Was it simply that the datasets were available? Or a desire to make everyone feel talented?
Or is this the wrong perspective? Should we instead see the AI Spring as an opportunity, rather than (as songwriter Nick Cave noted recently) an exercise in “replication as travesty”, something that cheapens lived human experience, emotion, and suffering?
Should we, in fact, greet AI with open arms? Some at Davos thought so. Sony’s Hiroaki Kitano said:
Sony is a creative entertainment company backed by technology. So, we're here for the creators. We consider [generative AI] to offer a great opportunity for creators of all kinds. We deal with artists, musicians, movie directors, and game designers.
And by ‘creators’ we also mean engineers, scientists, and entrepreneurs, because they create the future. We are the company to provide the technology and work with them to create new artforms, new entertainment, and future opportunities.
The point was echoed by Anadol, who uses AI and ML tools to make art:
The input is data, and data means our memories as humanity. So, the fundamental [point], when we talk about ‘artificial realities’, is that they are not really artificial.
As an artist, I am fascinated by the idea of using reality to imagine. So, if a machine can learn, can it dream? And, if it can dream, then who will define what is real? I think these questions were always, as anyone creating in the field of imagination and machines knows, fundamental.
I'm very aware of what may go wrong: misunderstandings, data bias, and ethical problems. But I think we have to ask, ‘What else can we do with them?’ And when we ask these questions, information turns into knowledge and then into wisdom – the fundamentals of human understanding – and life becomes very interesting.
For me, Generative AI has major potential to extend human minds.
Fair points. Art has always gone hand in hand with technology, from daubing pigments on cave walls to the invention of the film camera and recorded sound, to using samplers, projection mapping, drones, avatars, virtual realms, augmented reality, deep fakes, and pixel editing. Tech has always led to innovations in art, including the use of generative systems in music (eg Koan in the mid-90s). However, up to this point, it has largely been humans doing the imagining.
Inspired
Despite all this, another issue will bite the hand that feeds AI: copyright law.
The issue is simple: if a Generative AI system is neither intelligent nor artificial, but instead is simply a pattern-recognition algorithm stuffed with data, then it stands to reason that this data has come from somewhere. Almost certainly without permission.
If ChatGPT can, for example, write a lyric in the style of Nick Cave (or produce a poor simulation of it), then it must have been populated with, or trained on, Cave’s work. Or at least have access to it online.
If an artist is alive and/or their work is in copyright (and therefore not in the public domain), then it follows that a ChatGPT simulation must be a derivative work. Indeed, it can ONLY be derivative. That much would be implicit in the instruction ‘Write a Nick Cave song…’
In other words, generative tools are not AI at all, but are really little more than derivative work engines. And derivative works are covered by copyright law.
ChatGPT may not plagiarize a specific Cave lyric, but its output would clearly be derived from his. Not inspired by it in a human sense, but generated from it automatically. And this legal principle would apply to every human endeavour stored in OpenAI’s database.
As musicians find that others can use free tools to emulate their unique voices, lyrics, music, instrumentation, and more, without paying them a cent, they will act to protect themselves.
In the visual arts – where AI can deprive living artists of commissions – lawsuits are already beginning. These actions may force vendors to depopulate their databases of everything except public-domain, out-of-copyright work.
And if class actions seek financial compensation for all the world’s living human creators, then only the richest vendors could survive.
My take
For all the philosophising about AI’s promise, questions remain about how we got to this point, what coders’ priorities are, what the legal response may be, and whether AIs may end up full of ancient, public-domain data.
Remember: Spring isn’t always about sunshine and blossom. “April is the cruellest month”, observed TS Eliot. Indeed, “sometimes it snows in April”. That was Prince, not ChatGPT.