ServiceNow’s Chief Strategy Officer on the impact of AI and the future of work
Summary:
Dave Wright, ServiceNow’s Chief Strategy Officer, talks about the company’s Intelligent Automation Engine and the impact on a service-focused organisation.
The key announcement coming out of ServiceNow’s annual user conference this week in Orlando is the upcoming release of its Intelligent Automation Engine, which is intended to help companies prevent outages before they happen, automatically categorize and route incidents, benchmark performance against IT peers and predict future performance.
Core to the Engine is the company’s recent acquisition of DxContinuum, a Silicon Valley-based machine learning company, acquired both for its technology and for the staff to bolster ServiceNow’s AI capabilities.
DxContinuum’s technology is currently being re-platformed onto the ServiceNow platform and will be closely integrated, should customers choose, to predict the effectiveness of enterprise services and to help companies compare performance with their peers.
I got the chance to sit down with ServiceNow’s Chief Strategy Officer, Dave Wright, to discuss how the technology will be used across the vendor’s platform and to get an understanding of how he thinks it could impact the future of work.
The release coincides with ServiceNow’s annual State of Work survey, which found that nearly half of executives believe that by 2018 advanced automation will be needed to cope with rising work volumes. The survey found that 87% of companies plan to investigate or use advanced automation moving forward and that 94% agree that it could increase productivity.
The core
ServiceNow’s Intelligent Automation Engine is made up of four key components, according to Wright.
Firstly, ServiceNow has already done work around the event management side of IT operations, allowing it to store longer time series of data. This means that you can do more analysis and map the relationships between events - so predicting the precursors to events becomes possible. Wright said:
Now we can forecast with a degree of accuracy whether we think something is going to happen, based on a precursor. That was something that was all developed in-house.
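The idea of forecasting from a precursor can be sketched in a few lines. This is a toy illustration, not ServiceNow’s algorithm: it assumes a simplified ordered event log and simply measures how often one event type is followed by another within a short window.

```python
def precursor_confidence(events, precursor, outcome, window=5):
    """Estimate how often `outcome` follows `precursor` within `window`
    steps of an ordered event log. A toy stand-in for the time-series
    relationship mapping described above."""
    precursor_count = 0
    followed_count = 0
    for i, event in enumerate(events):
        if event == precursor:
            precursor_count += 1
            if outcome in events[i + 1:i + 1 + window]:
                followed_count += 1
    if precursor_count == 0:
        return 0.0
    return followed_count / precursor_count

# Hypothetical event log: does a disk warning tend to precede an outage?
log = ["disk_warn", "cpu_spike", "disk_warn", "outage",
       "login", "disk_warn", "outage", "cpu_spike"]
print(precursor_confidence(log, "disk_warn", "outage", window=2))  # 2 of 3 -> ~0.67
```

In a real system the confidence would come from far richer models over long time series, but the principle - quantifying how reliably one event foreshadows another - is the same.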
Secondly, the DxContinuum acquisition comes into play, where the Engine uses machine learning to enable actions such as categorisation, predicting severity, and predicting priority. This will not be available in the latest release of the ServiceNow platform, but will be available this year. Wright explained:
What we will do is we will just run a data extract to a separate machine to give it the power to do the number crunching, do the training session, evaluate the training session, then we put the model back into the customer’s instance.
The one thing that’s very unique to what we do in this though, is that it’s machine learning that’s tailored for you. We will be training everyone individually on an instance. The reason we have to do that is because nobody implements in the same way. What I can’t do is, I can’t say, ’based on this description, this looks like a networking issue’.
If you haven’t got a category of networking, if you’ve got network hardware, network infrastructure, then the categorisation is going to fail. So I have to build a model for your system. And you may well end up with multiple models, where you want to predict different things. So people just choose what they want to do.
However, the user interface allows the customer to choose which fields that they wish to take and what datasets that they want to look at - it’s all configurable for each customer.
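Wright’s point about per-instance training can be made concrete with a deliberately tiny sketch. The class below is illustrative only - it is not DxContinuum’s algorithm - but it shows why the category vocabulary has to come from each customer’s own records: the model can only ever predict labels it was trained on.

```python
from collections import Counter, defaultdict


class PerCustomerCategoriser:
    """Toy per-instance categoriser: learns word/category co-occurrence
    from one customer's own incident records, so the categories
    (e.g. 'network hardware' vs plain 'networking') are whatever that
    customer actually uses."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)

    def train(self, records):
        for description, category in records:
            for word in description.lower().split():
                self.word_counts[category][word] += 1

    def predict(self, description):
        scores = {
            cat: sum(counts[w] for w in description.lower().split())
            for cat, counts in self.word_counts.items()
        }
        return max(scores, key=scores.get)


# Hypothetical records from one customer's instance:
model = PerCustomerCategoriser()
model.train([
    ("switch port down in rack 4", "network hardware"),
    ("vpn tunnel dropping packets", "network infrastructure"),
    ("laptop screen cracked", "end user hardware"),
])
print(model.predict("packets dropping on the vpn"))  # network infrastructure
```

A second customer with different categories would get a differently trained model from the same code - which is the per-instance tailoring Wright describes.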
The third component is ServiceNow aggregating and anonymising the data that it holds on its platform. This allows it to create realistic benchmarks for companies of a certain size, or in a certain industry, to compare how they are faring against their peers. Wright said:
So obviously we can’t look at customers’ data. But we can look at record size and file size, so we can isolate how many CIs they have. We can isolate how many high priority incidents they get. What we can do is form benchmarks that allow people to see what people in their industry are performing like, or what people who are of a similar size perform like. You can see how you rate against those, but you can also see a trend analysis of whereabouts you are going - are you getting better or are you getting worse?
The final component of the Engine is simply the base mathematical models that ServiceNow can apply to all of the data it has sitting on the platform, from a performance analytics perspective. It can take this data and use extrapolation to predict performance levels and be prescriptive, allowing people to understand what changes they can make to get to a level quicker than they otherwise would have.
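The simplest form of the extrapolation described here is a linear trend fit. The function below is a minimal sketch - not ServiceNow’s Performance Analytics implementation - using ordinary least squares on a hypothetical weekly KPI series.

```python
def linear_forecast(series, steps_ahead):
    """Fit y = a + b*t by ordinary least squares over an evenly spaced
    series, then extrapolate `steps_ahead` periods past the last point."""
    n = len(series)
    ts = range(n)
    mean_t = sum(ts) / n
    mean_y = sum(series) / n
    b = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, series)) / \
        sum((t - mean_t) ** 2 for t in ts)
    a = mean_y - b * mean_t
    return a + b * (n - 1 + steps_ahead)

# Hypothetical weekly high-priority incident counts, trending down:
print(linear_forecast([40, 36, 33, 29, 25], 4))
```

Real performance analytics would use more robust models, but even this sketch shows the prescriptive angle: if the forecast four weeks out is still above target, you know how much the trend has to change.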
Concerns
Wright said that ServiceNow’s AI focus is likely to start with its IT roots, as this is the area where it holds the most data. However, he added, as other business units grow more popular - such as HR, security and customer service - these too will soon get similar capabilities. In order to get an accurate model, Wright estimates that a customer needs between 50,000 and 100,000 records to train on.
However, AI doesn’t come without customer concerns, Wright admits. For example, using AI to automate workflows inherently relies on a certain degree of confidence in the predictions. If that confidence is lost, the capability quickly becomes useless. Wright said:
The accuracy is as critical as not getting a better result. What people hate about AI is that if it predicts something and it’s wrong, people lose all confidence in it. So what we are very sure about when we implement this is that we make sure that people set the right confidence level - say, I only want you to classify this if you are 95% confident that this is correct.
And the rest of the time don’t guess, just say ‘I don’t know’. And then it’s an iterative process after that. Then the ones you couldn’t identify on it, let’s start retraining on that.
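The threshold-and-fallback approach Wright describes can be sketched in a few lines. This is an illustrative wrapper with a hypothetical stand-in model, not ServiceNow’s implementation: any prediction below the configured confidence level is answered with “I don’t know” rather than a guess.

```python
def classify_with_threshold(predict, item, threshold=0.95):
    """Wrap a model that returns (label, confidence); only commit to a
    label when confidence clears the customer-set threshold, otherwise
    decline -- the low-confidence cases become retraining candidates."""
    label, confidence = predict(item)
    if confidence >= threshold:
        return label
    return "I don't know"


def fake_model(item):
    # Hypothetical stand-in model with canned (label, confidence) outputs
    return {"vpn down": ("network", 0.97),
            "weird noise": ("hardware", 0.61)}[item]


print(classify_with_threshold(fake_model, "vpn down"))     # network
print(classify_with_threshold(fake_model, "weird noise"))  # I don't know
```

The design choice is the one Wright spells out: a wrong confident answer destroys trust faster than an honest “I don’t know”, so the threshold is set high and the declined cases feed the next round of training.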
ServiceNow has been running the Engine on its own systems and Wright said that it was achieving 82% accuracy for categorisation straight away. As training has continued over time, the results have leapt to 90%.
Another popular concern with AI in the workplace is the displacement of jobs - the rise of the machines. Any vendor that tells you AI won’t lead to job displacement is lying, there’s inevitably going to be a degree of job loss as a result of automation. You only have to look at retail and the hospitality industry to see evidence of this.
The ServiceNow State of Work survey found that at the C-Suite level, 79% said that they thought AI would generate more jobs, whereas 87% of employees felt that it was going to have a negative impact on jobs.
Wright argues that AI in the workplace should lead to augmented job roles, where employees are using the technology to enhance their careers and do more valuable work. He said:
I think people look at it in slightly the wrong way. I think people look at it and say ‘this is going to automate and take my job away’. There’s another element to it, where if you use that AI to actually augment skills, you can enable someone to potentially do a job that they weren’t necessarily skilled to do.
One of the interesting use cases I’ve seen is people using AI to analyse sales opportunities. Normally a really good sales person will look at their opportunities and analyse which ones they think are going to come in. If the machine comes back and says six out of the 10 opportunities aren’t going to close, you have increased that sales person’s skill by enabling them to focus on things that are going to get benefit.
A lot of people said they were struggling to find the right people for the job. They were also saying that a lot of people were spending two days a week doing admin work. Now if you can automate some of that admin work, you don’t need to struggle to find the people, you’ve already got them. You’ve just got them doing the wrong things.
For the sake of it
Finally, Wright warned that companies shouldn’t look to AI just because it’s the latest buzzword - a warning echoed by new CEO John Donahoe during his keynote address this week. Whilst the appeal is obvious, companies really need to think about how AI can solve complex problems, internally and externally, that impact end-users. He said:
I think the first thing to evaluate is, what problem are you trying to solve? Have you got an issue, is it an issue that you can’t solve with people? The other thing you need to think about is not using AI for the sake of using AI. It needs to give a benefit to the company. You need to make sure that you’ve got the data, and the quality of data, to train on when you generate the model side of the solution.
Wright also had some interesting comments on the future of AI technology, both at ServiceNow and in terms of the broader industry. He said that at the moment machine learning is used to solve a specific problem, where algorithms are trained on data for a specific use case. However, what will be interesting is when people no longer have to provide the checks and balances, and the machines have the insight to effectively train themselves. Wright said:
There are always new mathematical models coming out. The interesting thing will be, and this is probably years off from a tech perspective - at the moment, all of the machine learning people do is supervised, you give it a training set, you train it. When we get to the point where we can do unsupervised learning and we get the system to inspect itself and understand unusual behaviours and different patterns, that’s when it starts to get really interesting.
My take
Early days for ServiceNow, but a smart move. It’s good to hear that they’re anonymising and aggregating the data on the platform to provide cross-customer insights - a move that others are struggling to make, as it requires customer buy-in.
I’m still intrigued to see, as time goes on, whether machine learning native vendors spring up to challenge the vendors that are selling this as an add-on. It’s a bit like the chicken and the egg. These cloud companies hold the data, which makes machine learning an easy choice. But they haven’t been built from the ground up with AI in mind. Others may emerge with compelling AI capabilities built in from the outset - but then they need the data…
Either way, twelve months from now it will be interesting to hear the use cases. As they say, the proof will be in the pudding.