AI and the creative process, part one – a personal introduction

Chris Middleton – February 29, 2024
Summary:
In the first of three reports, diginomica looks at how AI is impacting human creativity – for good or ill. In the next two, creative professionals and their organizations weigh in. But first, let’s set the scene…

(Image of oil paint swelling up in multiple colours, by Antonio López from Pixabay)

In little over a year, generative AI has become not just commonplace, but something force-fed to us by countless platforms. 

For example, if you are on LinkedIn as a professional author, journalist, copywriter, or editor, then you – along with everyone else – now see a button saying, ‘REWRITE WITH AI’ every time you post an opinion. The lurking but unwritten subtext is clear: AI can do your job better, suggests Microsoft, despite your lifetime of accumulated knowledge, experience, and skill. 

How? Because the data of millions of expert humans was scraped without their knowledge or consent. And instead of paying them, we now pay the world’s most valuable company. 

Can you opt out of seeing the cursed button? No: Microsoft said in a statement to diginomica that making it optional would “interfere with its tracking”. So, the message from this $3 trillion company – a market cap equivalent to UK GDP – is equally clear: we are all just data points, and so our feelings and professional pride are irrelevant. 

We are just product, it seems. Not even that, but lab rats in a live experiment with our careers and livelihoods. And this from a networking site for human professionals! I would describe that as a meta irony, but that’s a different AI company…

Of course, some argue that if you are really skilled or talented, then AI can’t replace you. Perhaps, but in many cases the opposite is true: the more high-profile you are, the more likely it is that your data has been scraped to train a large-language or generative model. The result? Work that seems a lot like yours can be generated at will.

Meanwhile, hardware giant NVIDIA has ridden the AI wave to a valuation of $2 trillion this quarter. Indeed, anyone using its GPUs to mine cryptocurrency in recent years would have done better by using that money to buy NVIDIA shares. If you had spent $39 to buy just one share in 2019, it would be worth $787 today: an increase of more than 1,900%. (Bitcoin? A paltry 1,500% uptick over the same timescale – with a crypto winter thrown in.)

But most significantly this month, text-to-video tool Sora became the latest generative release from ChatGPT and DALL-E vendor OpenAI – though the makers of the online student reading app Sora might have something to say about the use of that name.

In one fell swoop, OpenAI’s photorealistic video app added the work of actors, directors, cinematographers, film editors, character designers, set designers, scenic artists, camera operators, prop builders, makeup artists, stunt performers, sound designers, and more, to the ever-expanding roster of talent whose skills have been scraped, recycled, and commoditized by a company that sees them as a more urgent problem for AI to solve than cancer or climate change. 

Don’t take my word for it: one film studio has already abandoned its expansion plans. This month, US entrepreneur, actor, and film and TV mogul Tyler Perry scrapped longstanding plans to scale up his Atlanta complex, citing the “shocking” ability that Sora gives him to make a movie without ever leaving home again. 

He told the Hollywood Reporter:

I can sit in an office and do this with a computer, which is shocking to me.

He added:

I am very, very concerned that in the near future, a lot of jobs are going to be lost. I really, really feel that very strongly.

It can’t be one union fighting every contract every two or three years. I think that it has to be everybody, all involved in how we protect the future of our industry, because it is changing rapidly, right before our eyes.

Of course, his comments echo fears expressed across countless industries since the 1980s – not least by the expert typesetting, print production, and layout teams whose work was usurped by desktop publishing. (I know. As an early adopter of Apple Macs in that industry, I was one of the usurpers.)

Elsewhere, Gen-AI is already being offered by some as a premium service. For example, online prints and posters outlet Desenio prices its ‘Create AI Art’ service higher than the work of its human artists (who must now be feeling like writers do when they log on to LinkedIn: objects of implied contempt).

Thanks to Desenio, you can now decorate your apartment – assuming you are old and rich enough to afford a deposit – with something that is almost, but not quite, exactly like something else made by the same AI. 

“What is it?” your friends will ask, standing back from your latest AI canvas, admiringly. “It’s me standing on a gold tower laughing at anyone born this century!” you reply. And you will all laugh hysterically – and vow to never have talented children, because you’ll be paying for them forever.

But not everything is going the vendors’ way. Google had a torrid February when its Gen-AI platform Gemini started outputting black Vikings and Chinese Nazis in a kind of visual diversity implosion – perhaps triggering the first (and hopefully last) belly laugh shared by white supremacists, anti-woke campaigners, and self-employed illustrators. Poor Google (market cap $1.7 trillion): now it’s only worth as much as Australia!

Either way, thanks to Gen-AI, nearly every form of creative and artistic endeavour is now available near-instantly via a simple text prompt – songs, essays, paintings, drawings, poems, speeches, movies, videos, photos, and more. But that is no small, evolutionary step; it’s a giant leap forwards from using computers as a creative tool or aid. A leap into the dark, backed by the desire to get something for nothing.

So, it is hard not to see everything that has happened since Autumn 2022 as a wholesale attack on the very notion and value of human creativity. But that isn’t quite right. Generative digital systems have existed in the art and music worlds since at least the mid-1990s: the generative music engine Koan, for example, made by UK company SSEYO. 

It was an idea ahead of its time. Thirty years ago, a composer – Brian Eno was one – could plant the ‘seeds’ of a piece of music, and it would grow, shift, and evolve in their computer, producing beautiful, ever-changing work. Using it, the artist became the prompter – the planter of the garden, perhaps; but it was the computer that generated the blooms.

Automating creativity

So, is Gen-AI really an attack on human creativity?

Vendors – and some creative people – believe the opposite: that it represents a flattening and democratization of creativity, by making professional-grade work into something that can be achieved by anyone with an imagination. 

That automation process, arguably, has been ongoing since the Industrial Revolution of the Eighteenth and Nineteenth Centuries. And it is hard to imagine a more creative period in human history than the century that has passed since the dawn of Modernism and the Machine Age in the early Twentieth – the Renaissance notwithstanding. Didn’t the digital age – desktops, laptops, tablets, apps, and smartphones – just accelerate that creativity and turn us all into artists?

In many ways the view that AI is democratizing creativity is correct. Take me as an example (bear with me for a moment). 

I’m a journalist, author, and editor, and also fortunate enough to be a musician, and to have visual skills with pencil, camera, pixels, and paintbrush – though I spent thousands of pleasurable, deep-learning hours acquiring those skills; none of them arrived fully formed. Yet I can’t afford to pay video companies, filmmakers, designers, and actors, for example – any more than some of them can afford to pay me. But I might have a good idea for a movie, an animation, or a video (or a terrible one).

So, why shouldn’t I make it? The answer is that there is no reason why I shouldn’t – and every reason why I should. And the same applies to every form of creative human endeavour, including the ones that I charge money for myself.

But let’s not kid ourselves that this is a world with zero consequences.

First, as AI vendors turn every form of human creativity into something that can be generated at will with a prompt, we are witnessing a wholesale transfer of economic power from creative professionals to companies that are already as rich as some of the world’s top 10 economies. 

In other words, an economic black hole is forming, and fast. One in which we pay the likes of Google, Microsoft, Meta, Apple, Amazon, and their investments (like OpenAI), not for making us tools and platforms, but for doing our creative work for us – via illegally scraped data (more on that below).

Second, we are redefining creativity itself, by removing the creating – the making – from that process. This was implicit in comments at an AI session at the World Economic Forum in January, where speakers such as YouTube CEO Neal Mohan hailed the “previously unimaginable” levels of creativity that AI can unleash.

What he meant was previously unimaginable savings. He said:

Imagine a tool where you can go as an artist, give it a text prompt saying, ‘Give me a chorus that's in the style of this artist with this type of melody and beat’. And it’s created right there! Something that they might have had to go back and forth on, maybe for weeks, with an actual chorus, you know, or an instrument set or what have you.

So, it's like a supercharger for their creativity. What artists tell me every single time is like, ‘Wow, I can create music that, a week ago, I wouldn't have even thought was humanly possible.’

As I joked in my report at the time, up until then it just wasn’t humanly possible for an artist to make music that was exactly like someone else’s – unless they could play an instrument and had, perhaps, listened to a record. Unless, in other words, they had invested time in developing a skill. Now, thanks to AI, none of us needs to learn anything ever again!

Then Mohan added:

Another [app], which we just released in beta form, is called Dream Screen. So, if you're a creator, you might just say to YouTube, ‘Hey YouTube! Give me a video with a dragon flying through Broadway in Manhattan!’

From a creator’s perspective that would probably have taken days’ or weeks’ worth of work to actually create it, and now it's happening instantaneously! So, they can take that video to the next level of creativity!

The implication of those statements could not be clearer. In the mind of that tech CEO, actually making something yourself is pointless drudgery – mere wasted effort.

But that is the polar opposite of the creative urge for most people, for whom the making – the process – is at least as important as the end result, even if they use technology to get there. That’s where the love is, the humanity.

Ask any painter or musician: they love the process – the hours of work, play, experimentation, and self-expression. They are the whole point! Often the finished piece is just an artefact of the journey, a souvenir; it’s not about the arrival.

The copyright issue

Which brings us to the third repercussion: we are already crediting AIs for the brilliance of their generated works, when we should be crediting the skills and talents of the millions of human minds – and hands – whose work was scraped as training data. 

Again, don’t take my word for it. The House of Lords LLM Inquiry in November – in which the Communications and Digital Committee heard from expert witnesses on all sides of this debate – did not pull its punches on the question of artists’ copyright. 

Its January report said:

LLMs may offer immense value to society. But that does not warrant the violation of copyright law or its underpinning principles. We do not believe it is fair for tech firms to use rightsholder data for commercial purposes without permission or compensation, and to gain vast financial rewards in the process.

It added:

There is compelling evidence that the UK benefits economically, politically, and societally from upholding a globally respected copyright regime. [The creative sectors alone account for nearly six percent of total economic activity, says my research.]

The application of the law to LLM processes is complex, but the principles remain clear. The point of copyright is to reward creators for their efforts, prevent others from using works without permission, and incentivise innovation. The current legal framework is failing to ensure these outcomes occur and the government has a duty to act.

Indeed. Yet there is a fourth, more subtle and more subjective effect from all this: increasingly, AI is generating work that looks remarkably similar, familiar, obvious, and dull. 

Members of online art forums see this problem daily: thousands upon thousands of images that all share a similar look – lightly airbrushed photos of creepily perfect people, or fantasy landscapes that resemble a dozen or more middling films. More and more, these asinine creations barge more original and interesting work out of the way by sheer force of numbers. Where is the work that’s as startling as the technology?!

Just as a trawl of shoppers’ data risks creating identikit coffee shops and town centres with the same few, familiar brands, so AI may gradually be supplanting human frailty, imperfection, and originality with a kind of bland anti-art: works that are superficially impressive – often deeply so – but lacking any personality at all. 

It’s as if a machine looked at the total output of humanity available online and generated something that looks like a 1980s Athena poster, or the album cover of a second-rate AOR band.

My take

So, is AI undermining human creativity, or taking it into new, unpredictable, uncharted territories? Those questions and more are explored in my next two reports, in which we hear from a range of creators and industry organizations. Stay tuned!
