ServiceNow this week revealed how it plans to capitalize on the recent rapid advances in generative AI and large language models (LLMs), showcasing some of the potential use cases live on stage during its second Knowledge 2023 keynote in Las Vegas.
diginomica outlined previously how the workflow vendor has announced partnerships with Microsoft’s Azure OpenAI Service and OpenAI’s API LLMs, something a number of other B2B software vendors have prioritized too. However, perhaps more interestingly, ServiceNow doesn’t see these (what it calls ‘general purpose AI tools’) as fundamental to the future success of the platform.
They’re useful, to some extent, it says. But what it is more interested in is domain-specific LLMs, ones that it builds itself using data from the Now platform.
To that end, ServiceNow today announced a partnership with NVIDIA to develop enterprise-grade generative AI capabilities, using NVIDIA software, services and infrastructure, to create its own custom LLMs trained specifically for the Now platform.
The two companies said that the partnership, and the planned LLMs, will deliver new generative AI use cases for the enterprise, including for IT departments, customer service teams, employees and developers.
Speaking with media and analysts this week, CJ Desai, ServiceNow’s Chief Product Officer, said:
OpenAI is a great piece of technology, a great group of people, but that is a very general purpose AI.
So in parallel we have been building very specific LLMs for ServiceNow use cases. Why does that matter? You get higher accuracy for ServiceNow use cases, you have data privacy for our customers’ data, and these models will provide a lot more insight - because we are going to train them for ServiceNow specific work.
The use cases
As observers of the B2B technology world, diginomica has sat through thousands of product demos over our years in the industry. And if we are being honest, it’s rare that one of these impresses or surprises. Usually it’s an iteration of the status quo, meeting users where their expectations are.
However, some of the demos during the keynote at Knowledge 2023 felt innovative. And whilst they were demos, staged at a conference aimed at impressing thousands of people, and so quite far removed from the reality on the ground, they certainly showcased the art of the possible when considering the interplay between workflow/process automation (ServiceNow’s bread and butter) and generative AI (particularly conversational AI).
Desai showed both how general purpose LLMs could be used on the Now platform and what the domain-specific AI opportunity looks like.
One example saw a Starbucks customer use an LLM-based chatbot to report that their coffee order had been placed at the wrong location for pick-up, asking it ‘what can be done about that?’ The response was helpful, giving the user two options: a refund, or diverting the order to a new location. Once the user selected one, the conversational AI kicked the Now platform into gear and carried out the process on the back end.
However, a more impressive example showcased how a ServiceNow administrator at a customer organization could ask for summaries of the cases that had occurred that day. The AI then provided recommended actions that took company procedures into account, summarized all this activity, and put it into the notes field. What would have taken a significant amount of human intervention, understanding, and content creation was done by generative AI in a matter of seconds.
Finally, Desai showcased a domain-specific (for ServiceNow) text-to-code example, where someone looking to generate code for the Now platform could get what they needed simply by typing a comment along the lines of “// query incidents older than two years”, with the code populating the screen in seconds. This final example drew a huge, unprompted cheer from the audience.
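The keynote didn’t show the generated output in detail, but for readers unfamiliar with the platform, a plausible sketch of what such a prompt might produce - using ServiceNow’s standard server-side GlideRecord API - would look something like this (the specific fields and logging are our illustration, not ServiceNow’s demo):

```javascript
// Illustrative sketch only - not the actual code generated in the demo.
// query incidents older than two years
var gr = new GlideRecord('incident');
gr.addQuery('sys_created_on', '<', gs.monthsAgo(24)); // created more than 24 months ago
gr.query();
while (gr.next()) {
    gs.info('Incident ' + gr.number + ' opened ' + gr.sys_created_on);
}
```

The point of the demo was that a domain-specific model, trained on how ServiceNow code is actually written, can produce this kind of platform-idiomatic snippet directly from a one-line comment.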
Commenting on these domain specific examples, Desai said:
Domain specific LLMs - if you think about our biggest product line, ITSM, we already know why employees of our customers use ITSM. If we can have LLMs that we take from our quality partners, we train for an ITSM specific use case, and we run it for that specific customer, that’s when we get the highest value for generative AI. That’s what we are solving for, first. Can we have HR specific LLMs? Customer service LLMs? IT specific LLMs?
And text to code…we know how to code on ServiceNow really, really well. Think about a high school graduate who wants to write ServiceNow specific code to modify a business rule. They can do that with text to code that we have trained for that domain specific LLM.
Right now OpenAI scans the internet and whatever code exists on the internet, they will try to help you code with it. That’s interesting…but not really. For higher quality code, higher throughput, higher business rules, you want our top engineers to train that model.
It’s obviously very early days and there’s lots of detail that we still need clarity on - particularly around how the models are developed between ServiceNow, NVIDIA and customers. Where the data sits, what data is used, and whether those models are owned by the customer or the vendors is not entirely clear just yet. The timelines for all of this are also still uncertain.

But seeing those live demonstrations, it was hard to deny the value that could be quickly extracted across an enterprise if these LLMs were used at scale. The reduction in time needed to carry out the tasks showcased was really quite astounding - not to mention how this could enable customers to do things differently. It’s going to be interesting to see how this develops - and lots of our concerns about AI, particularly how it impacts the workforce, remain - but it’s clear that ServiceNow recognizes that enterprise expertise is valuable.