
Workday Rising EMEA - model collapse could bring a 'winter of AI', says Workday Co-President Sayan Chakraborty

Phil Wainewright, November 16, 2023
At this week's Workday Rising EMEA conference, Workday's Sayan Chakraborty spoke about the remarkable speed at which AI is evolving - and some of the risks, including the potential for 'model collapse'

Sayan Chakraborty at Rising EMEA (Workday)

As is customary at every tech vendor conference this year, AI came top of the agenda at Workday Rising EMEA in Barcelona this week, reprising themes and announcements from the recent US version of Workday's annual user conference. But the company's Co-President and technology leader Sayan Chakraborty nevertheless sounded a warning that there are risks to navigate as the latest iteration of AI continues to evolve at breakneck speed. In comments to media and analysts, he emphasized the astonishing pace of innovation:

If you were to think back three years ago, the models that we're operating today are 10,000 times larger, [they are] computing and consuming 100,000 times more. We always talk about hockey-stick exponential curves. This one actually is. We're actually living through an exponential curve capability. It is almost vertical in terms of how rapidly this is changing — it puts things like Moore's law to shame. What I think that means for people is, we are just at the beginning of what these things are capable of.

But alongside that almost unprecedented speed of development, there are risks. Responding to a question from diginomica about whether any issues are being overlooked in all the excitement about generative AI and the Large Language Models (LLMs) that underpin the technology, he spoke about a phenomenon known as model collapse:

As you start to spread this technology widely, there will be challenges. One of the things that I think is very interesting, and a very nuanced point, is model collapse. An AI model trained on the output of other AI models collapses in three or four rounds, depending on which scientific paper you like more, but it's three or four, top five or six. If I train the AI model on AI generated data, and then I do it again, and then I do it again, that third time, that model is no longer able to provide useful output.

There is a current prediction, and I don't know that it's very far wrong, that by 2026, 95% of the content on the Internet will be AI-generated. That's interesting by itself as a fact. But then when you think about model collapse, you could see a winter of AI coming, where the AI-generated data churns out all of the ... human-generated data, to a point where the models are not really fit for purpose in solving problems that humans care about.

This is compounded by the fact that there's currently no way that machines can distinguish accurately between machine-generated output and human-generated output. So the models have no means of automatically detecting and excluding AI-generated source material.
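The dynamic Chakraborty describes can be sketched with a toy example. The following is a minimal illustration (not anything Workday described): a trivial unigram "language model" is fitted to a corpus, a new corpus is sampled from it, and the fit-sample loop is repeated. Because each generation can only reproduce words the previous model has seen, rare words that happen to miss a sampling round vanish permanently, and the distribution narrows toward the most common outputs — the essence of model collapse. All names and data here are invented for illustration.

```python
import random
from collections import Counter

def fit(corpus):
    """'Train' a toy unigram model: relative frequency of each word."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def generate(model, n, rng):
    """Sample an n-word corpus from the fitted model."""
    words = list(model)
    weights = [model[w] for w in words]
    return rng.choices(words, weights=weights, k=n)

rng = random.Random(0)
# Generation 0: a 'human' corpus mixing common and rare words
corpus = ["the"] * 60 + ["cat"] * 25 + ["sat"] * 10 + ["quark"] * 3 + ["zephyr"] * 2

vocab_sizes = []
for generation in range(5):
    model = fit(corpus)
    vocab_sizes.append(len(model))
    # Next round trains only on the previous model's output
    corpus = generate(model, 100, rng)

# Vocabulary can only shrink: once a rare word drops out of a
# sampled corpus, no later generation can ever produce it again.
print(vocab_sizes)
```

The vocabulary size is guaranteed never to grow across generations, which is why the degradation is a one-way street: synthetic data cannot restore information it has already lost.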

Model size isn't everything

It may not be all bad news: the worst-affected LLMs will be generic ones like ChatGPT that rely on the open Internet as their source material, while models produced by vendors like Workday from a closed pool of business data will be more insulated from the effect (as will sites like diginomica that have taken a stance against publishing AI-generated content). Chakraborty goes on:

We always think of these things as superhuman and super-capable, but now, let's talk about some of the challenges with this technology. I think that will again mean that the companies and the organizations that are producing relevant, human-created data are going to become increasingly more important over time, that those storehouses of human data are going to be what we need, to continue to be able to leverage this technology effectively.

Another shortcoming of the technology compared to people's expectations is that it can only work with the data that it's been given. So if something has been missed out of its training data, or if a black swan event occurs for which no data exists, it's not going to be helpful. He explains:

One of the challenging things about this technology is, if there's no pattern to find, it won't find it. I describe it as probably the greatest general problem solver human beings have invented so far. But it does have a limitation, which is if it's never seen something in the data, it's not going to find that thing. So if you give it a bunch of geological data that doesn't share that there's earthquakes, it doesn't hear about earthquakes ... So that is something people have to be willing to talk about.

Maybe my main takeaway is, everyone has been obsessed with the size of these models. 'Oh, it's 350 billion parameters,' [or] 'Oh it's 1.7 trillion parameters.' And that's interesting, but what matters here is the data. If there's no pattern, you're not going to find a pattern. And it's telling that things like OpenAI have been completely unwilling to share what data they actually trained on.

This doesn't mean that Workday customers shouldn't be adopting the technology — many of the generative AI capabilities the vendor has announced will be delivered to customers over the next few months, he said. But it has been engaging with regulators to ensure that it is adopting and delivering the technology in a responsible way. He commented:

I think that for a lot of technology companies, they have developed at least a reputation or maybe a reality of moving fast and breaking things. That's a phrase that's been used to describe US technology companies. Workday has never been that company. We have a responsibility to our customers, we have a responsibility to their employees, that we take very seriously ... We have maintained the engagement that we've always had around data privacy and other regulatory [matters] as we've advanced to the AI era.

Those regulators have learned lessons from earlier technologies and are determined to get AI right, it seems. He mentions a comment from one senior US legislator he met with recently, who told him:

'We were asleep at the wheel for social networks. And we're not going to do that again here. Because there are societal level impacts when we don't pay attention to it.'

It's up to vendors like Workday therefore to take a calm and measured approach. He added:

I don't think it does anyone a service to be wide-eyed and running around with my hair on fire, talking about how great this technology is. In fact, there's plenty of people out there doing that. I can't add anything to that conversation. But my customers want to understand practically how it's going to impact their business in meaningful, thoughtful, actual ways, versus 'Oh, look at this cool thing I did on ChatGPT.'

Enterprise adoption

The speed at which the tech is evolving means that most enterprises are going to have to rely on vendors like Workday, which is able to tune and combine models for specific use cases based on a broader data set than individual customers have access to. On the pace of change, he observed:

The way that your hiring models are being built today in November of '23 is not how they were being built in March of '23. The approaches are substantially different. They're substantially different both in using many more smaller models in combination versus giant models — if you go under the covers of GPT-4, what you'll find is more than a dozen smaller models. And if you look at GPT-3 and 3.5, that was one big monolithic model. So we've already made that transition.

Another aspect that's getting increased attention now is reducing the cost of training models and of using them to infer answers. Workday's more targeted models are far cheaper to operate than the massive generic LLMs of OpenAI and others, but there is still more to be done to make the technology cost-effective for many use cases. Whatever the case, Workday wants to make the technology available across its customer base rather than gating access with pricing. Ending with a tongue-in-cheek reference to his earlier comments on model collapse, which is just one potential outcome rather than a prediction, he commented:

There's really two possibilities here. One possibility is this is all hype and it doesn't matter, all the stuff around large language models, which means that we should forget everything I said this morning. Or this is a transformative technology that will create competitive advantages for people who are able to adopt it effectively. In which case, it would be a disservice to our customers to create haves and have-nots, by creating tiered pricing models, and not driving this technology through our entire platform and application so that all these companies, big and small, are able to gain the advantage of the technology.

My take

A helpful and balanced note of caution from Workday's top tech guru. This remarkable technology is still in its very early days and there is still much to learn about what it can do — and what it can't.
