Confluent EMEA chief on delivering value and focusing on trust during uncertain times

Derek du Preez, May 15, 2023
Summary:
Confluent’s SVP of EMEA speaks with diginomica ahead of Kafka Summit in London this week about how the data-in-motion vendor is thinking about customer needs in 2023.

(Image by Gerd Altmann from Pixabay)

Ahead of the Kafka Summit in London this week, where Confluent will be making some new product announcements, we got a chance to speak with the ‘data-in-motion’ vendor’s SVP of EMEA, Richard Timperlake, to get a sense of how the company is engaging with customers during difficult macroeconomic conditions - and how it is thinking about the development of AI, given the recent wave of market hype and interest. 

Confluent is a commercial provider built on the open source Apache Kafka project, offering a platform that aims to shift organizations away from data-at-rest and batch processing tools, towards real-time data streams (or ‘data-in-motion’). As we noted recently off the back of Confluent’s Q1 2023 earnings, it is seeing significant growth in its Confluent Cloud offering, which is supporting enterprises faced with a significant shortage of Kafka skills in the market. 
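To make the data-in-motion idea a little more concrete, here is a minimal sketch of producing and consuming a Kafka event stream using Confluent’s open source Python client, confluent-kafka. The broker address, topic name and event fields are illustrative assumptions, not details from the article:

```python
# Minimal sketch: publishing and consuming a real-time event stream with
# Apache Kafka via Confluent's open source Python client (confluent-kafka).
# The broker address ("localhost:9092") and topic name ("orders") are
# illustrative assumptions, not details from the article.
import json

from confluent_kafka import Consumer, Producer

# Produce: each business event is published the moment it happens,
# rather than being batched into a nightly load.
producer = Producer({"bootstrap.servers": "localhost:9092"})
event = {"order_id": 1042, "status": "shipped"}
producer.produce("orders", key=str(event["order_id"]), value=json.dumps(event))
producer.flush()  # block until the broker has acknowledged the event

# Consume: downstream services subscribe and react as events arrive.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "fulfilment-service",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])
msg = consumer.poll(timeout=5.0)  # returns None if nothing arrives in time
if msg is not None and msg.error() is None:
    print(json.loads(msg.value()))
consumer.close()
```

The point of the pattern is that consumers react to events as they happen, rather than querying a store of historical data after the fact - which is the contrast Confluent draws between data-in-motion and data-at-rest.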

Timperlake highlights how the popularity of Confluent Cloud, which is fuelling the vendor’s growth, is directly tied to making Apache Kafka easier to manage for these organizations, which are recognizing that operating based on a historical view of data has its limitations. Timperlake says: 

On the cloud side we reported $74 million revenue in Q1, which was up 89%, so we are seeing a lot of traction around cloud. That might be existing Confluent Platform customers moving from on prem into cloud, but it also may be new customers coming in as well. But we are seeing a huge amount of traction. 

And the reason for that is simply that it is an easier environment to manage, because we take on the overheads. If I'm a customer and I'm using Kafka, I might be managing manufacturing hubs around the world where I've got to run 24/7 support, have multilingual capabilities, and have capabilities in different regions. There's a big cost to that and there's a lot of complexity to it. And we remove it. 

It gives us the ability to manage that environment and it allows you, as a customer, to be more innovative and focus on some of the other issues that you're facing. We are seeing across EMEA, pretty much all countries, focus more and more on cloud. And I expect that trend to continue. 

Navigating a complex environment

Timperlake recognizes that vendors, and customers, are having to navigate complex macroeconomic conditions at present. We have seen how some suppliers of B2B technology have had to cut costs significantly after a very enthusiastic period of spend during the height of the COVID-19 pandemic, whilst others continue to thrive as they become increasingly strategic to customers. In a sense, we are getting a better idea of those that are able to meet customers where their needs are, and those that are struggling to make a case for the value they provide whilst budgets are under increased scrutiny. 

Buyers must continue to invest to support changing customer and employee needs, whilst grappling with higher interest rates, soaring inflation, supply chain difficulties, and political uncertainty due to the war in Ukraine. Timperlake says: 

Organizations are reflecting on where they really want to spend. And while the sales or the buying cycle might be slightly different to what it was maybe two years ago, as long as they see a business case that stacks up and they see the value that you can create, that will continue to build a business very successfully.

For Confluent, Timperlake believes that the company is well positioned because of its ability to address real use cases for customers that recognize their data needs are changing. Confluent’s core pitch is that customers can no longer rely on historical data to make decisions about how they need to operate, and instead need to move to a model built on assessing data in real-time. Timperlake says: 

Firstly, our customers, their competitive environment is changing. You'll have heard us talk about data-at-rest and data-in-motion. Organizations have built up very successful businesses based on the sort of historical technologies that have been available, which tend to be more data-at-rest. It's isolated and it's very difficult to connect to other environments, to then allow somebody to have a more informed conversation with the customer. By deploying Confluent and by creating what we call data-in-motion, it changes their ability to interact with their customers. 

And I think what drives customers in many places to do that is that they are in a changing world, competitively, and they have to service their customers in a different way. Those customers could be external to them as an organization, but also internally.

We're working with one customer now, they're a large retailer, and because they can't get data on where their produce is in the supply chain, they're writing off $70-$80 million worth of food every year. That’s a lot of money and that's a problem. Because all of the information is just locked in different systems. If we can then bring that together, through Confluent Cloud, and eliminate the $70-$80 million cost, then that's a pretty significant benefit. 
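The article doesn’t describe how the retailer’s systems were actually wired together, but as a purely hypothetical sketch, unifying siloed feeds can start with a single consumer subscribing to several topics - the topic names and fields below are invented for illustration:

```python
# Hypothetical sketch of the "bring it together" step: the article does not
# describe the retailer's implementation, so the topic names and fields here
# are invented for illustration. Separate systems (warehouse, logistics,
# store) each publish to their own topic; one consumer then sees a unified,
# real-time view of where produce actually is.
import json

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # a Confluent Cloud cluster would also need auth config
    "group.id": "supply-chain-view",
    "auto.offset.reset": "earliest",
})
# One subscription spanning the previously siloed feeds.
consumer.subscribe(["warehouse-stock", "logistics-scans", "store-deliveries"])

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())
    # e.g. flag perishable stock that has been in transit too long,
    # before it has to be written off.
    print(msg.topic(), event)
```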

Timperlake explains that Confluent measures its customers’ use of the platform against a maturity curve, which runs from Level One (essentially playing around with the tools) through to Level Five (what Confluent describes as having a ‘central nervous system’ built on real-time streaming data). 

The company, however, argues for an approach that starts small and scales up. He says: 

We tend to engage with customers and say don't go with Big Bang on day one. Bring the technology in and start to work with it on some use cases, focus on some low hanging fruit where you can get some quick wins. Because that creates a bit of excitement. We normally engage with the architects or the developers first, but then once they start to see the value, then it really starts to take off and accelerate. 

You go from one use case, to multiple use cases, and then potentially up to this Level Five central nervous system. And that's a journey over a period of time that we tend to work with organizations on. 

Timperlake adds that whilst buyers are under pressure to increase profitability, to reduce costs, and to provide better services to their customers, they are also dealing with a rapidly changing competitive landscape. And this is fuelling investment towards creating this central nervous system. He says: 

Think about how a bank has built itself. You have a mortgage system, an insurance system, a credit card system, a current account - they're all product based. And that's potentially in different areas of the bank.

You then get a digital native come along, and it has a single view of the customer on day one, and that allows them to cross-sell and up-sell. That's a very different experience. 

And if one organization can offer that experience in a better way than another, then it's going to create a dynamic in the market where there will be change. But that for us is a huge opportunity.

The AI opportunity

We at diginomica have been cataloging and analyzing the development of AI for some time, but it is hard to deny that there has been renewed interest in the technology since the public release of ChatGPT. The levels of hype and zeal for the use of generative AI and LLMs are intense - and whilst there are opportunities, there are plenty of risks too, particularly around how much users can trust the results of models that essentially scrape information and present it in a convincing way. In essence, not every data set is the same, and Timperlake believes that this will work to Confluent’s advantage. He explains: 

We chose the data pipelines approach and within that you have to have very high levels of security, very high levels of governance. We built that into the solution. Pretty much any application is only as good as the data that it gets to be able to run models on. So having that core infrastructure in a trusted way, in a secure and well governed way, is key to any organization. 

And again, think about how much data people are generating these days, it's vast amounts. And that's not going to change. So we see governance and security being absolutely critical.

My take

Keep an eye on diginomica this week for news, analysis and customer stories coming out of Kafka Summit - there will be plenty of content coming, as we dig deeper into how organizations are using data-in-motion tooling in practice. 
