Confluent continues to deliver as CEO Jay Kreps pitches an era of data streaming platforms

Derek du Preez Profile picture for user ddpreez August 3, 2023
Confluent’s Q2 2023 earnings show significant cloud growth and large customers spending more with the platform.

(© Yurchanka Siarhei - Shutterstock)

Confluent - the company that provides a commercial offering of Apache Kafka and is spearheading the concept of ‘data in motion’ - continues to deliver solid earnings, with its Q2 2023 numbers showing substantial growth in cloud customers and in customers spending more than $100,000 in ARR. 

CEO Jay Kreps said during this week’s earnings call that whilst the company had built a solid foundation in commercializing Apache Kafka for enterprise customers, which he said could sustain the vendor’s growth for years, the future for Confluent lies in buyers realizing the value of a data streaming platform. 

The key numbers for the quarter include:  

  • Second quarter revenue of $189 million, up 36% year over year

  • Second quarter Confluent Cloud revenue of $84 million, up 78% year over year

  • Remaining performance obligations of $791 million, up 34% year over year

  • 1,144 customers with $100,000 or greater in ARR, up 33% year over year

Speaking with analysts this week, Kreps said: 

Achieving this sustained level of high growth despite ongoing market challenges underscores the mission critical nature of data streaming and reinforces our product leadership.

Our Kafka business has phenomenal growth ahead of it. Modern data architecture is increasingly centered around streaming and this has driven Kafka to be adopted by hundreds of thousands of organizations, including over 75% of the Fortune 500. This open source user base is growing rapidly and we are still in the early days of monetizing it. 

The inherent TCO and performance advantages of our cloud offering mean that in addition to the natural growth of this user base, we believe we can dramatically improve the proportion that is monetized as usage shifts to the cloud and can be captured by a managed service.

However, he said that streaming is just the beginning, with the future being centered on a platform approach. Kreps added: 

If that were the extent of Confluent’s opportunity, that would be a very exciting prospect and enough to sustain our growth for many years but Kafka is just the start.

This evolution is the rise of the data streaming platform. 

Data streaming platforms

What is a data streaming platform? Kreps explained that the key capabilities are the ability to stream, connect, govern, process and share. He said: 

These capabilities capture the full life cycle of streaming data - how to get it, process it, use it, manage it and share it between systems. 

Essentially, Kafka is the stream of data. It allows buyers to produce and consume real-time streams of data, becoming the foundational hub of data exchange in an enterprise. Connectors enable organizations to connect all their systems together, in order to capture the real-time streams of data. Confluent has created over 120 connectors for some of the most common enterprise systems. 
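Conceptually, that hub model comes down to producers appending records to named topics and consumers reading them independently from any offset. A minimal in-memory sketch of the pattern (this is not the Kafka API itself - the class and topic names here are illustrative):

```python
from collections import defaultdict

class Hub:
    """Toy stand-in for a streaming hub: append-only topics, offset-based reads."""
    def __init__(self):
        self.topics = defaultdict(list)       # topic name -> append-only log

    def produce(self, topic, record):
        self.topics[topic].append(record)     # producers only ever append

    def consume(self, topic, offset=0):
        return self.topics[topic][offset:]    # each consumer reads from its own offset

hub = Hub()
hub.produce("orders", {"id": 1, "total": 42.0})
hub.produce("orders", {"id": 2, "total": 17.5})

first = hub.consume("orders")      # one consumer reads the whole stream
second = hub.consume("orders", 1)  # another resumes from offset 1, independently
```

The key property the sketch captures is decoupling: producers never know who consumes, and any number of systems can replay the same stream from their own position.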

Kreps added: 

However, the ecosystem of connectors is far larger than just these. There are many hundreds of open source connectors to less common systems that are available. We are still early in monetizing this area in Confluent Cloud, as fully unlocking it requires ease of use across cloud networking layers and disparate data and SaaS systems. 

We took a major step towards this in Q2 with the release of our custom connectors offering, which allows running any open source connector inside Confluent Cloud, expanding our reach beyond the set of connectors we ship with out-of-the-box. We believe this is still in the early phases of full unlock.

The third component is governance, which is critical as use cases grow and real-time data flows around the enterprise - enabling users to discover, monitor and ensure security and data integrity. This then leads into processing, which Kreps described as follows: 

This is an easy one to understand. Data processing is a key component of any major data platform and SQL and other processing layers are a key component of modern databases. Stream Processing extends these processing capabilities to real time data streams. 

We believe that Apache Flink is emerging as the de facto standard for stream processing. Flink has the most powerful implementation of stream processing of any technology, open source or proprietary, fully realizing streaming as a generalization of batch processing and making it available across a rich ecosystem of programming languages and interfaces.
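The "streaming as a generalization of batch" point is easiest to see with a windowed aggregation: the count a batch query computes once over a finished log, a stream processor emits continuously, one result per time window. A toy Python sketch of a tumbling-window count (this is not Flink's API - the event data and window size are illustrative):

```python
from collections import Counter

def windowed_counts(events, window_size):
    """Group (timestamp, key) events into tumbling windows and count keys per window."""
    windows = {}
    for ts, key in events:
        start = ts - ts % window_size             # tumbling window this event falls into
        windows.setdefault(start, Counter())[key] += 1
    return windows

# hypothetical clickstream: (event time, event type)
events = [(1, "click"), (3, "view"), (6, "click"), (7, "click"), (11, "view")]
result = windowed_counts(events, window_size=5)
```

Run over a complete log this behaves like a batch GROUP BY; run incrementally as events arrive, the same logic yields a live, per-window stream of results - which is the sense in which streaming subsumes batch.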

And finally, for a fully effective data streaming platform, organizations need to be able to share data streams effectively. Kreps said: 

Sharing within a company has been a mainstay of our platform for some time. However, now we have extended that between companies with a feature we just launched at Kafka Summit, Stream Sharing. 

This intercompany sharing is a pattern we noticed was gaining startling traction in our customer base in recent years. Customers in financial services and insurance needed to integrate and provide key financial data streams with a complex set of providers.

Stream Sharing allows these companies to enable this inter-organizational sharing for any of their existing streams and to do so in a way that enables the same governance and security capabilities that they'd use internally with added capabilities to address the additional concerns of allowing access from external parties. 

This means extending our central nervous system vision from something that spans a company to something that spans large portions of the digital economy. By doing this, the natural network effect of streaming - where streams attract apps, which in turn attract more streams - is extended beyond a single company, helping to drive the acquisition of new customers as well as growth within existing customers.

The growth effect

Kreps said that whilst it might take some time, it's important for markets to understand that these components of a data streaming platform reinforce each other, creating a compound growth effect that should deliver value to Confluent beyond Kafka itself. Kreps said: 

It's essential to understand that these five capabilities - stream, connect, govern, process, share - are not only additional things to sell; they are all part of a unified platform, and the success of each drives additional success in the others. 

Each of these capabilities strengthens the other four. The full value of this will not be realized overnight. Cloud infrastructure takes time to mature and reach completion. Each of these areas is earlier in the S-curve of maturity and adoption than Kafka, but over time, we think these will directly contribute revenue larger than Kafka itself in addition to driving further consumption of Kafka. Most important is what these capabilities let our customers do.

As these parts come together, they comprise a data platform that is as complete as data warehouses, data lakes or databases have grown to be over the years. We think this data streaming platform will be of equal size and importance to these other platforms serving as the fundamental nervous system for a modern company. 

My take

Confluent has big ambitions and all indicators suggest its strategy is paying off. Executing on this strategy and scaling the business will be key to sustained success. However, given how organizations are increasingly relying on data to underpin their own strategies, the opportunity is there for the taking. 
