Whilst Splunk is driving customers to the cloud with its ‘Data-to-Everything' platform, the company recognises that it has to adapt and change to cater to the new requirements of customers operating multi-cloud environments. Some of this is around internal client management, such as implementing customer success programmes, but it is also about eyeing opportunities across its portfolio.
Splunk has found success within the IT and security fields of the enterprise market, where its analytics platform can interrogate and make sense of constantly streaming, time series-oriented data across a wide variety of assets. However, the vendor is now betting big on the ‘observability' market, with the launch of its Observability Suite, which has largely been built out via a number of acquisitions of fast-growing companies in the area.
Observability in this context means real-time understanding of your technology portfolio in the public cloud - or modern-day application performance management, infrastructure monitoring and incident response.
The launch of Splunk's Observability Suite comes after it acquired SignalFx last year for over $1 billion, and the more recent acquisitions of Plumbr and Rigor, which all operate in this space.
We got the chance to sit down with Splunk CEO Doug Merritt, who explained the rationale behind the company's venture into the observability arena. His ambitions are clear:
Success for us in two years would be that we have risen to a number one standing with observability, we've maintained our number one standing with IT and security, and we're using the beacons from those three core buying centres, to make sure that the platform is robust.
Merritt explained that the buying habits and architectures of digital enterprises have driven Splunk's ambitions in the observability space. He said that public cloud architectures and the new development paradigms require new tools.
In the world that I grew up in, you had dedicated equipment for the apps that you were creating and your code directly addressed the capabilities of the infrastructure. For things like application performance management, you could take samples, you could survey the environment and if you got close to where something was you could go directly and start to interrogate.
In this new world, everything is containerised, there is no linkage between the software and different hardware components. Everything was built on purpose to be a fail-often framework, to scale out horizontally, and if it fails you've got a layer of indirection and it doesn't matter. So now the new way of gathering data and understanding where problems may occur is to grab the constant flow of API stream data or log data coming from each one of these independent services.
And that changes the core thinking and core architecture of what you would need software-wise to address the problem, to measure and monitor your environments. The simplest way to think about it is that it has shifted from more of a transactional interrogation world to a data analytics problem, because now you've got hundreds of terabytes, to petabytes, of data flowing across these different public clouds. You need to gather all data elements, have fully continuous interrogation and extraction of data from that entire stream.
Merritt said that ‘observability' - and the aim of Splunk's Observability Suite - is to allow customers to be able to observe everything that happens across their technical landscape. This includes everything from basic monitoring of a cloud-based infrastructure, to much more advanced levels of extracting traces from data so you can see the entire logical structure of the applications, code and services.
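To make the idea of a trace concrete, here is a minimal, self-contained sketch - plain Python, not Splunk's actual API - of how containerised services might emit span records that share a trace ID, so a downstream collector can reconstruct the logical structure of a request as it flows between services. The service names and field layout are illustrative assumptions:

```python
import json
import time
import uuid


def emit_span(trace_id, service, operation, parent_span=None):
    """Emit one span: a timed unit of work tagged with the trace it belongs to."""
    span = {
        "trace_id": trace_id,            # shared by every service handling one request
        "span_id": uuid.uuid4().hex[:16],
        "parent_span": parent_span,      # links child work back to its caller
        "service": service,
        "operation": operation,
        "timestamp": time.time(),
    }
    print(json.dumps(span))              # in practice, shipped to a trace collector
    return span["span_id"]


# One hypothetical user request flowing through three containerised services:
trace_id = uuid.uuid4().hex
root = emit_span(trace_id, "api-gateway", "GET /checkout")
emit_span(trace_id, "auth-service", "verify_token", parent_span=root)
emit_span(trace_id, "payment-service", "charge_card", parent_span=root)
```

Because every span carries the same `trace_id` and a `parent_span` pointer, a collector can stitch the records back into a tree and show the entire call structure of the request, even though no service knows about the others.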
Getting buy-in from the C-suite
Interestingly, Merritt also provided some context around the challenges facing buyers at the moment and how they are changing, with regards to data use. He said that buyers typically think about data in four different spheres:
Classic transactional data management - or relational databases
Data warehousing - how do you do a better job of understanding transactional data?
Big data interrogation - making use of Hadoop or data lakes
And finally, the field Splunk focuses on - constantly streaming, time series-oriented data
Merritt said that Splunk's sphere is different from the other three because it's brand new ground that buyers haven't had a great deal of time to think about or understand, but it is becoming increasingly important given that every device in our physical world can now send and receive communications. We are now able to get near real-time visibility and instrumentation of everything around us.
However, the novel nature of this data can make it difficult to communicate outcomes to the C-suite, which is often still looking for a simple solution to a complex problem. Merritt explained:
It's a harder concept for people to wrap their heads around. And most CIOs, CTOs, Chief Data Officers, they want to know: how can I get one thing? This is way too confusing. And my dialogue continuously with them is, why do you want one thing? Can we just go back to: what is your need? And they kind of stumble there.
Why do you have object DBs, transactional DBs, data warehouses, graph DBs - why do they all exist? Because they are architected for very different use cases on the totality of the data landscape and I've never seen one generalist thing do an effective job for all those. Your job as a CIO, CTO or CEO is to make wise choices. You're going to have to manage a portfolio of seven, eight, nine underlying data artefacts. A Swiss Army Knife doesn't generally work for a professional construction crew. People are evolving with it.
It was an interesting conversation with Merritt, and it's astute of Splunk to recognise that its previous successes won't necessarily dictate its future ones. The world of IT and technology is changing for buyers, and Splunk needs to be at the forefront of solving those complex data problems for them. It's still early days for its observability venture, but it seems to be making sensible moves. In terms of future challenges, Merritt himself noted that driving customers to the cloud is still a top priority. He said:
Our opportunity and risk is continuing to drive cloud as the primary thing of everything we're doing. It was great to cross the 50% mark in Q2, but there's a long way to go to make sure that we're 90%/95% cloud-based in the way that we serve our customers. There's a lot of work to do there still. From the portfolio, from people change management, from messaging, all the way through. Of the solution areas, observability is our big bet. We've done super well in IT and security - we're early in market for observability, so that's a must-win market for us as well.