When cloud platforms get a going over, Nutanix-style

Martin Banks, June 29, 2023
You can always tell when vendors feel they are heading in the right direction, for they start to hone, refine and extend the product or service that is hitting the user sweet-spot. Nutanix is one of the latest to make its move, adding centralized management that makes moving applications around a hybrid multi-cloud environment a great deal easier and more advantageous.


As the cloud provides an increasingly unified model of operation across the world of IT, the vendor community is inevitably starting to reformat itself along increasingly parallel lines, and the word 'platform' is now established as the order of the day. Now that vendors are moving into positions where they can address their mainstream markets, rather than just the early adopters, it is time to make additions and modifications that fill gaps in their platform offerings.

For example, Nutanix is now extending its Cloud Platform to improve its capabilities in an area where the company’s recently published Cloud Index Survey showed a growing need amongst users: managing the movement of an expanding roster of applications around a mixed environment that now includes on-prem resources and multi-cloud services (both dedicated bare metal and tenanted services hosted by third-party service providers).

According to Lee Caswell, the company’s SVP Product & Solutions Marketing, the additions are the result of the growth in the number of applications available and in use. As the volume of data has continued to grow exponentially, a lesson already learned at the level of edge computing has become more widely relevant – namely, that it can be both easier and cheaper to move applications to the data, rather than take the classic alternative approach of moving data to the applications.

Now both approaches – data-to-compute and compute-to-data – are in widespread and increasing use. The cloud is making this an attractive and valuable mode of operation, except for one problem, which itself looks likely to grow exponentially as the volume of applications and of data movement grows. That problem is, of course, management of the process. Longer term, that will also lead to the wider issue of standards and their use in making the management task ever easier, while creating the next potential lock-in. Caswell argues:

What we found is that, over the last 10 years, the reality for many customers is an accidental hybrid multi-cloud environment. Given that pace of applications growth, as well as cost concerns – particularly as money is no longer free, with geopolitical issues having driven up inflation and interest rates – you see that people are starting to take a closer look at how they would optimally locate applications and their data over time.

This requirement is being accelerated by the continued increase in compute power available to users through the cloud, and by the growth of environments available to businesses through extension out into the edge. Couple that with the growing capabilities of AI and the universality of server-based cloud architectures and it becomes possible to have a software platform that runs wherever servers run – from the data center, or cluster of data centers if running on hyperconverged environments, right through to the smallest monitoring device in the remotest corner of the corporate network – all on a common platform.

Or as Caswell asks: 

The question now for many customers is, 'How do I gain control over this and put in place all of the resiliency, compliance, security, ransomware protection that I would expect in enterprise applications?'.


This has prompted three new developments from the company, the first being the introduction of Nutanix Central, aimed at users who have many dissimilar endpoints to manage across a hybrid multi-cloud environment. The aim is to provide simplified management as a service across federated endpoints running on Nutanix or public hyperscaler environments. Several additional developments spin off from, and exploit, the arrival of Nutanix Central. For example, the company is now supporting the ability to 'mix’n’match' servers that offer compute and storage, compute only, or storage only. Caswell explains:

This is important because it gives you more flexibility in mixing and matching different server nodes to meet unpredictable demand. We're finding it's very challenging for people to predict what they will need in the future, so having compute-only nodes and storage-only nodes gives you the ability to flex the use of infrastructure to meet these workload needs.

The company is also using Nutanix Central as the hub for managing an ongoing problem for many cloud services users – the difficulty of managing operational costs. This has been a particular issue for those who have transitioned from 'one-app-per-rack' on-premises resources – where an application is always live and available even if only used infrequently – to multi-tenanted public cloud service provision.

On premises, the time and cost of shutting down and restarting an application rack would quite likely cost more than leaving it running unused. Running the same app, in the same way, in an instance on a public cloud service will cost money regardless of use, a situation that requires serious changes in operational management, all based around maintaining much stricter, tighter control of all activities so that large, unexpected costs are not incurred. 

This is particularly important because an increasing number of users are now moving applications and data around their available environments depending on the work needed. Different workloads will require different resources – from large data capacity plus average compute performance through to the opposite end of the scale. These resources are likely to be available on different services, from different service providers, and those assigned to each workload will have their own cost structure. Managing where workloads are placed will therefore require detailed choices to be made, coupled with real-time management of the activity.
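The placement decision described above can be sketched as a simple cost model. This is a minimal, hypothetical illustration – the provider names, unit prices, and pricing structure are all invented for the example, not taken from Nutanix or any real service:

```python
# Hypothetical sketch: choosing the cheapest placement for a workload
# given per-provider unit prices. All names and prices are invented.

def placement_cost(workload, provider):
    """Illustrative monthly cost of running a workload on a provider."""
    return (workload["vcpus"] * provider["per_vcpu"]
            + workload["storage_gb"] * provider["per_gb"])

def cheapest_placement(workload, providers):
    """Pick the provider with the lowest modelled cost for this workload."""
    return min(providers, key=lambda p: placement_cost(workload, p))

providers = [
    {"name": "on-prem", "per_vcpu": 20.0, "per_gb": 0.02},
    {"name": "cloud-a", "per_vcpu": 35.0, "per_gb": 0.01},
    {"name": "cloud-b", "per_vcpu": 30.0, "per_gb": 0.015},
]

# A storage-heavy workload: modest compute, large data capacity.
workload = {"vcpus": 8, "storage_gb": 50_000}

best = cheapest_placement(workload, providers)
print(best["name"], placement_cost(workload, best))  # → cloud-a 780.0
```

Even this toy model shows why the choice is workload-specific: a compute-heavy workload with the same providers would land somewhere else, which is why real placement needs the real-time cost management the article describes.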

Kubernetes with knobs on

Kubernetes is now widely used to make switching applications and workloads much easier, and the company has added specific data services for Kubernetes to provide orchestration capabilities for new container-based applications. Kubernetes alone, however, doesn't provide a mechanism for delivering enterprise resiliency features – data services, snapshots, recovery replication and disaster recovery – all of which are already available for the company's traditional VM-based applications. These have now been made available for Kubernetes applications as well, which should bring both resilience and compliance capabilities to new container-based applications, according to Caswell:
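For readers unfamiliar with how snapshots surface in Kubernetes at all, the standard mechanism is the VolumeSnapshot custom resource from the `snapshot.storage.k8s.io` API group. The sketch below builds such a manifest as a plain Python dict – it is a generic Kubernetes illustration, not Nutanix's actual API, and the snapshot class and resource names are hypothetical:

```python
# Generic Kubernetes illustration (not Nutanix's specific API): container
# workload snapshots use the VolumeSnapshot CRD from snapshot.storage.k8s.io.
# The snapshot class and object names below are hypothetical.

def volume_snapshot(name, pvc_name, snapshot_class="example-csi-snapclass"):
    """Build a VolumeSnapshot manifest for an existing PersistentVolumeClaim."""
    return {
        "apiVersion": "snapshot.storage.k8s.io/v1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": name},
        "spec": {
            "volumeSnapshotClassName": snapshot_class,
            "source": {"persistentVolumeClaimName": pvc_name},
        },
    }

manifest = volume_snapshot("orders-db-snap", "orders-db-pvc")
# In a real cluster this manifest would be applied with `kubectl apply`
# or the kubernetes Python client; here we only construct it.
print(manifest["kind"], manifest["spec"]["source"]["persistentVolumeClaimName"])
```

The point of vendor data services on top of this primitive is everything the CRD alone does not give you: scheduling, replication targets, retention, and recovery orchestration.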

One element is the rent versus buy argument, which has got a little bit lost in the last three years. Now customers are coming back and saying, 'Hey, you know, I'm thinking about which things I should own, which things I should rent, and I'll go and move those applications. But I'll think carefully about which data services I use that would lock me into a particular location’.

The second development is multi-cloud snapshot technology, which Caswell sees as offering a different perspective on operational recovery in a hybrid multi-cloud world:

It used to be, you would go and replicate data to a dedicated DR site and recover back to a local site. The hybrid multi-cloud environment allows more flexible models: take snapshots, offload them from your local data center, edge, or even cloud property, and move them to another endpoint. It's a very interesting way to start thinking about recovery in a new way, basically using the fact that you have multiple endpoints and S3-compatible storage.

The technology currently works with AWS S3-compliant targets, Microsoft Azure blobs, or the company’s own S3-compatible object store target. The idea is that users take snapshots of primary storage, put them directly into a lower-cost object store, and then recover them wherever Nutanix runs.
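The recovery half of that pattern rests on a simple property: any endpoint that can list the bucket can find and pull the snapshots. The sketch below models that with an invented key layout – bucket organisation, names and helper functions are all assumptions for illustration, not Nutanix's actual on-store format:

```python
# Hypothetical sketch of the offload pattern described above: snapshots go
# into an S3-compatible object store under a predictable key layout, so any
# endpoint can list them (e.g. via boto3) and recover the newest one.
# The key layout and all names here are invented for illustration.

from datetime import datetime, timezone

def snapshot_key(cluster, volume, taken_at):
    """Object key for a snapshot: <cluster>/<volume>/<UTC timestamp>.snap"""
    return f"{cluster}/{volume}/{taken_at.strftime('%Y-%m-%dT%H%M%SZ')}.snap"

def latest_snapshot(keys, cluster, volume):
    """Most recent snapshot for a volume; ISO-style timestamps sort lexically."""
    prefix = f"{cluster}/{volume}/"
    matches = [k for k in keys if k.startswith(prefix)]
    return max(matches) if matches else None

# Keys as a recovering endpoint might see them after listing the bucket.
keys = [
    snapshot_key("edge-site-1", "orders", datetime(2023, 6, 1, tzinfo=timezone.utc)),
    snapshot_key("edge-site-1", "orders", datetime(2023, 6, 15, tzinfo=timezone.utc)),
    snapshot_key("dc-west", "orders", datetime(2023, 6, 10, tzinfo=timezone.utc)),
]

print(latest_snapshot(keys, "edge-site-1", "orders"))
# → edge-site-1/orders/2023-06-15T000000Z.snap
```

Because the object store is the shared meeting point, recovery no longer needs a dedicated DR site paired to each source – which is exactly the shift Caswell describes.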

The third development is more of a partnership: integration with Snowflake, extending the latter’s ability to deliver fast queries at very large scale without having to aggregate storage and data into one location. Caswell says: 

We've gone and worked with Snowflake to make sure that queries initiated by Snowflake for new analytics tasks would be able to be applied to object storage, running on Nutanix, across their hybrid multi-cloud environment.

My take

This does suggest a future where this becomes the next lock-in, for it has the ability to become a standard with a similar potential to the way Windows affected how computers developed – that is, Nutanix is pitching at creating the next standard environment that application developers will have to work to. Caswell counters:

I'd say, actually our pitch is almost the opposite. Rather we're giving you the flexibility and choice, so you can go and choose what you need over time. And that model is something we've always valued from a company standpoint. So I think the idea that we would exert a standard is probably better left to other companies.

Yes, but there are a million brands of chocolate, and while they all offer very different treatments, they're all fundamentally chocolate. To actually gain access to that flexibility there has to be one common access point. Caswell's view:

That's correct, yes. That's why I prefer talking about switching costs. In Kubernetes environments you've got low switching costs – we've got a standard there – and you've now got the ability to move data across the hybrid cloud, which gives you different flexibility. You're seeing our value move up the stack over time. Our value has been at the infrastructure layer, and now we're saying the data services that we provide could actually float more generally beyond the infrastructure that we provide directly.

