A quick point of etiquette for conversations with Snowflake executives: Don’t call it a “cloud data warehousing” company. You’ll likely be met with a polite wince and then a swift (but equally courteous) correction. Snowflake is a “data cloud” company, they’ll tell you. The more evangelical among them may even insist that it’s the data cloud company.
Some hubris at present is perhaps to be expected. Snowflake’s September IPO was an absolute showstopper, becoming the biggest software IPO to date. And with that triumph under their belts, CEO Frank Slootman and his team are ready to make a much bigger noise about how they see the company’s future. There’s this week’s virtual Snowflake Data Cloud Summit, for example, as well as Slootman’s forthcoming book, ‘The Rise of the Data Cloud’.
The corporate vision is intriguing. While it’s perfectly true that Snowflake’s business was built on providing a cloud-based alternative to traditional, on-premises data warehouses, the data cloud concept is just as much about the data itself, and its shareability, as it is about the platform on which it resides.
The traditional model of storing data for analysis is flawed, argues Slootman, because it’s based on building multiple silos and bunkers that make it difficult to share data across different internal departments, let alone with other companies - for example, those working together in an extended supply chain. The data cloud idea, in contrast, envisages:
a global network, where thousands of organizations mobilize data with near-unlimited scale, concurrency and performance.
No shrinking violet
It’s ambitious, certainly, but Slootman is not a man to shrink from a challenge. His impressive tech career - in which he previously steered both Data Domain and ServiceNow to their respective 2007 and 2012 IPOs and beyond - has been built on setting big goals for his teams and expecting them to be met, no excuses. Now, the challenge he’s setting them is explaining the data cloud to customers and prospects and getting them to buy into the idea. As he pitches it:
You know, when you look at Snowflake, ever since its founding, we were and are a workload execution platform. And people look at us in terms of the architectural distinctions - how we do things in the cloud, versus how things have historically been done. And we love to benchmark, we love to POC [proof of concept] existing workloads and see how that pans out.
But here’s the point about the data cloud: it’s not just about workload execution. It’s about the combination of workload execution for data operations, and creating a global data universe, where users can gain and provide unfettered access to data. So there’s two halves of a whole here.
And Slootman clearly feels it’s time to start talking much more about that second half of the whole - the way that the Snowflake architecture and the scalability of the public cloud can support an approach to data management and analytics that “explodes” those silos and bunkers. There’s no time to lose, he says:
A lot of companies right now are building the silos of the future, because they approach data operations one workload at a time. We see it every day, which is why we need to have this conversation every day as well. We really advocate to customers that they should envision their own data cloud for their enterprise, because future needs are very different from legacy needs around data. Companies are going to face demands for data and data access that they have a hard time envisioning at the moment and we just want to prepare them for it.
A big ask
It’s a big ask for customers, because it represents a dramatic step away from established ways of working, not to mention a huge amount of legacy investment. Slootman acknowledges as much:
It’s not a half-hearted strategy [for customers]. You’ve got to go all-in to create a data cloud and really unlock the potential that is there. And that requires fortitude. You're not just going to end up there by luck or by happenstance.
But participation will also open up powerful network effects, he argues. In other words, the more companies that adopt the Snowflake Data Cloud, the more data there will be to exchange with other Snowflake customers and data providers. In this way, the value of the Snowflake Data Cloud is enhanced for every company that participates. It also opens up powerful potential for data monetization, by effectively creating a marketplace where some data can be bought and sold between participants.
Among a flurry of news announcements from Snowflake this week - the majority with a strong Data Cloud spin - is the expansion of the Snowflake Data Marketplace, the company’s platform for hosting third-party data sets from specialist providers, for access by Snowflake customers. The marketplace now hosts over 100 providers, some 50% of which were signed in the last four months, according to Christian Kleinerman, Snowflake’s Senior Vice President of Product.
On top of that, Snowflake Data Marketplace can now be used to establish live, secure, bidirectional links with data service providers, who can work on an organization’s data on its behalf: for example, running risk assessments on customer data, augmenting datasets with behavioural scoring, or simply acting as an outsourcing provider for advanced analysis.
It would be extremely foolhardy to bet against Slootman, or his vision for Snowflake. Dutch by birth and education, he’s certainly pragmatic. His brash, no-excuses ambition, meanwhile, is all-American, owing much to the country in which he’s spent pretty much the entirety of his tech career. He’s an extremely effective operator.
And while other tech firms may also be using the term ‘Data Cloud’ - Oracle and SAP spring to mind - Snowflake has a head of steam now that will be hard to beat, thanks to its IPO. It’s still pretty small, but it’s growing fast. According to its S-1 filing, revenues for the first half of 2020 more than doubled to $242 million from $104 million a year earlier, and headcount jumped by more than 50% to just over 2,000. At the time of writing, its market capitalization stands at around $68.3 billion.
Finally, at a time when every organization aspires to be more data-driven, and to create competitive advantage from the insights it can derive from data, the old ways of managing that data increasingly look too onerous and clumsy to satisfy the demands of digital transformation. And, at the same time, the data available to organizations from their own internal sources may be far too limited. As Slootman puts it:
There are plenty of organizations that are already bringing new and different data sources into their analysis environments, through FTPs and APIs and what have you. It’s not unusual for a hedge fund or a bank to have 600, 700 different data sources feeding their environments on an ongoing basis. This is already happening. Now, are there other industries and enterprises where this is only happening on a very small basis? Sure. But the world is headed towards massively accessing data, driven by data scientists needing to build highly descriptive and predictive models for business decision-making. So providing data access and getting data access, this will become the standard in normal data operations - not just processing internal data and getting the same old indicators for your organization.