With all the current hype about ChatGPT and other Large Language Models (LLMs), you'd be forgiven for thinking that AI had only just been invented. But of course LLMs are just the latest in a long line of significant advances, notable mainly for the extent to which they've seized the public imagination far more than earlier innovations. That leaves vendors in a quandary, though: how to climb aboard the LLM bandwagon without throwing out everything that's gone before? Zendesk's answer to that quandary is being unveiled today at the opening of its annual Relate conference.
Zendesk AI is a package of capabilities that introduces new features alongside the company's longstanding AI offerings, while also building on its recently announced partnership with ChatGPT creator OpenAI. The big leap forward is making these technologies accessible to business users, as Cristina Fonseca, Head of AI at Zendesk, explains:
If we make AI very easy to understand and use, people will start dreaming about 'Okay, what can we do with it?' It's here, it's so approachable. To me, that's the biggest lesson of all of this buzz around large language models and ChatGPT ...
This is a solution [where] everything works from day one. Of course you can extend it, you can customize it, but it works off the shelf. And in the past few years, we couldn't have AI that would work off the shelf.
New capabilities in today's announcement include:
- A new set of advanced, pre-trained bots for messaging and email with the ability to interpret customer intent and automatically solve issues.
- AI-powered insights and suggestions that help agents respond faster, with proper context, to customer issues.
- Intelligent triage based on intent detection, language detection, and sentiment analysis to help classify incoming customer requests and allow teams to power workflows based on these insights.
- New features using generative AI to rephrase or shift the tone of responses, helping agents craft clearer and more thoughtful responses with less time and effort.
Limitations of LLMs
While some of today's announcements build on OpenAI and ChatGPT, these technologies have their limitations. Zendesk's existing models, by contrast, have been built up over time on the company's own datasets to serve specific CX use cases. Fonseca explains:
Understanding very well what the customer wants, without me as a customer having to train a complex model, was very hard, it didn't exist. This was the AI that we've been building. ChatGPT comes out and offers some of these capabilities already off the shelf. But OpenAI doesn't have the proprietary data that we have, the CX data that we have ... If you want to build intent detection and sentiment detection on top of OpenAI, that gives you a good advantage. But we go even one level further.
Not every AI use case is a good fit for an LLM like GPT, because of the resources these large general-purpose models consume, both in terms of compute time and energy. For example, detecting customer sentiment and reporting the result on a five-point scale is much more efficiently and quickly done with a specially tailored model rather than a general-purpose LLM, which might take several seconds to come back with an answer instead of the milliseconds the special-purpose model would need.
These special-purpose models also have protections built in to make sure that sensitive customer data isn't processed. Fonseca explains:
The model doesn't learn from the data. The model learns from a representation of the data, which is basically a vector with numbers. So no sensitive information or anything goes into the model. And then the model only predicts a label. So there's no risk there.
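Fonseca's description of the pipeline — raw text reduced to a numeric vector, with the model emitting only a label — can be sketched roughly as follows. This is a simplified, hypothetical illustration, not Zendesk's actual system; the vocabulary, centroid values, and function names are invented for the example.

```python
# Hypothetical sketch of a privacy-preserving classifier: raw text is
# reduced to a numeric vector, and the only thing the model emits is a label.
from collections import Counter
import math
import re

# Toy fixed vocabulary; a real model would use a much larger feature space.
VOCAB = ["great", "love", "thanks", "terrible", "broken", "refund"]

def vectorize(text: str) -> list[int]:
    """Reduce raw text to a bag-of-words count vector over a fixed vocabulary.
    The original wording is discarded; only numbers remain."""
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    return [counts[word] for word in VOCAB]

# Toy labelled centroids standing in for a trained model's parameters.
CENTROIDS = {
    "positive": [1, 1, 1, 0, 0, 0],
    "negative": [0, 0, 0, 1, 1, 1],
}

def predict_sentiment(text: str) -> str:
    """Return only a label; the intermediate vector never leaves this function."""
    vec = vectorize(text)
    return min(CENTROIDS, key=lambda label: math.dist(vec, CENTROIDS[label]))

print(predict_sentiment("Love it, thanks, great support"))  # positive
print(predict_sentiment("Terrible, it arrived broken"))     # negative
```

The key point the sketch illustrates is that no raw customer text is retained: the classifier sees only a numeric representation, and its output is restricted to a predicted label.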
Customers can also choose to opt out entirely of having their data used, but there's a quid pro quo, she adds:
It's important to educate customers on what your data is being trained for, and how things work. At the end of the day, if you want to use generic models that were pre-trained, it's also fair to ask that you contribute with a little bit of data. But we have all the necessary measures in place, lots of measures internally that we do to remove PII, to not have sensitive data be sent to the models.
What OpenAI's GPT models add is most useful in areas to do with understanding and manipulating text, so the obvious first use cases are in agent productivity, such as crafting a well-written reply to a customer from a few bullet points, or summarizing a conversation thread or a set of articles in a knowledge base. But even here, guardrails are needed. She goes on:
There's still a lot of research that's needed so the models behave within the boundaries they should — they don't invent stuff and so on. At Zendesk, we are going to do that for our customers. We are going to make sure we fine-tune the models so they stay within the context of your business. We are putting these boundaries in place.
What we want to do is deliver value for our customers, so they can click a button, turn on AI, and immediately get use cases that make a difference for them. We love the technology, but we see it as a tool to build the features that make sense ... We are going to use the best available technology to build what makes sense for our customers. The problems we are solving have not changed in the past six months.
There's a fair bit of market education still to do about what these new AI models can and can't do. In the meantime, let's not forget all the existing work that can still be built on to help surface knowledge and automate processes using AI in all its forms.