The false equivalence between private and public cloud

Kurt Marko, July 5, 2017
Summary:
The purveyors of private cloud are fixated on candle optimization while the Amazons, Azures and Googles are mass producing better and cheaper light bulbs. Which do you think wins?

Cloud abstraction layers, via MSDN

The preeminence of hybrid cloud as the ideal operating model for modern IT is a truism that's become so ingrained that no discussion of enterprise technology strategy is complete without paying homage to the wisdom and inevitability of organizations having a seamless mix of public and private cloud resources.

As I mentioned last week, numerous surveys document an overwhelming preference for hybrid architectures in enterprise IT blueprints, even though there remain significant differences in how people define “hybrid.” Vendors like Nutanix, Cisco, Dell EMC/VMware and HPE are seeking to capitalize with slick marketing and updated products designed to close the public-private cloud gap.

Unfortunately, this gap is too substantial to be bridged with mere software. It is a conceptual chasm, far deeper than anything solved by the ability to manage infrastructure on multiple cloud platforms.

The problem centers on the difference in abstraction layers, and the resulting application designs, between public cloud services and so-called private cloud infrastructure, and virtually none of the legacy enterprise IT equipment vendors has a viable solution. Indeed, according to one estimate, only 10% of internal IT workloads run on a true private cloud, namely one with attributes mirroring the best public clouds like AWS, Azure and Google Cloud. Even then, none of these private clouds offers a set of services, APIs and SDKs identical to those of the mega clouds.

For a perfect example of the private-public cloud dichotomy, look no further than last week's Nutanix announcements, which centered on extensions to the company's VM and storage management software that support more vendors and online service providers.

While Nutanix and VMware have outstanding product suites for operating VMs and associated network and storage resources, these encapsulate a traditional server OS like Linux or Windows as the underlying application platform. The cloud equivalents are baseline infrastructure services like AWS EC2, EBS and S3 or Google Cloud Compute Engine with Persistent Disk and Cloud Storage. However, with these, once you have the VM instance and attach some storage, you're back in the 90s running applications on a remote server.
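
To see what that level of abstraction looks like in practice, here is a minimal sketch, assuming Python with the boto3 SDK; the AMI ID and key pair name are hypothetical placeholders. Provisioning at this layer still hands you a bare server OS to run:

```python
import boto3

ec2 = boto3.resource("ec2")

# Launch a VM and wait for it to boot -- the "infrastructure" level of
# abstraction. ImageId and KeyName below are hypothetical placeholders.
instance = ec2.create_instances(
    ImageId="ami-0abcd1234",     # a stock Linux image (placeholder)
    InstanceType="t2.micro",
    KeyName="my-ssh-key",        # hypothetical SSH key pair
    MinCount=1,
    MaxCount=1,
)[0]
instance.wait_until_running()
instance.reload()

# What comes back is, in effect, a remote Linux box: patching the OS,
# installing runtimes and deploying the application are still your job.
print("SSH into:", instance.public_dns_name)
```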

While EC2 as a rent-by-the-hour VM was revolutionary when it was introduced almost 11 years ago, AWS has spent the intervening decade relentlessly developing higher-value database, data analytics, AI, application, serverless, DevOps and automation services that progressively decouple users from the details of the underlying server OS and hardware implementation.
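
For contrast, a minimal sketch of the serverless end of that spectrum: an AWS Lambda-style handler in Python, where the unit of deployment is a function and the OS underneath never surfaces (the event field is illustrative):

```python
# AWS Lambda-style handler: no instance to provision, no OS to patch,
# no capacity to manage. The platform supplies the event, scales the
# function and bills per invocation.
def handler(event, context):
    name = event.get("name", "world")   # "name" is an illustrative field
    return {"statusCode": 200, "body": "Hello, " + name}
```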

Microsoft Azure and Google Cloud have done the same in what has become an escalating battle of technologies seeking developer, architect and executive mindshare. Each cloud service portfolio fits the textbook definition of a good abstraction by generalizing things that can be made abstract while enabling customization and integration through configurable parameters and APIs. The same can’t be said for VMs running traditional OSs.

The private cloud conundrum: Legacy versus greenfield application platforms

Organizations that have made public cloud services a core element of their business and IT strategies understand the value of operating at higher levels of abstraction. These savvy and experienced cloud users have learned that the most effective way to extract value from cloud services is to move up the stack, where the concern is business services and application features rather than infrastructure implementation and operational details. Meanwhile, as legacy vendors focus on optimizing traditional server-based application designs by layering on management features, a growing cadre of cloud-native organizations has turned to alternatives such as Cloud Foundry, OpenShift and microservices frameworks built on application containers and distributed cluster managers like Kubernetes.

Granted, infrastructure vendors like Nutanix and VMware have made enormous progress in automating the deployment and administration of their systems. However, such software-defined infrastructure is just part of the plumbing that public cloud users exploit but never see or worry about. As I noted when writing about the Cloud Foundry Summit, cloud developers and business users are more concerned with platform-layer abstractions that decouple infrastructure deployment details from application code and implementation. Automated infrastructure is table stakes. By contrast, one of the most impressive architectural visions came from SAP, which demonstrated the ability to integrate application services from various products and broker deployment to multiple cloud platforms without the developer knowing or caring about the infrastructure endpoint.

Those already living in the cloud understand and rely upon the abstraction layer advantages of the public mega clouds. As Yaron Haviv, co-founder and CTO of data analytics startup Iguazio, put it in a recent post:

Last week I sat in on an AWS event in Tel Aviv. I didn’t hear a single word about infrastructure or IT and nothing about VMs or storage, either. It was all about developers and how they can build production grade solutions faster, at a fraction of the cost, while incorporating Amazon’s latest security, AI and analytics capabilities.

Why mess with VMs, deploy a bunch of software packages, write tons of glue logic and abstraction layers, figure out end-to-end security, debug and tune that all? The new model presented was based on taking an API gateway, plugging into a user directory-as-a-service, then databases-as-a-service (DBaaS), some 'serverless' functions with business logic and voila! You're done.

Haviv goes on to decry the preoccupation of hyperconverged infrastructure and storage vendors with systems optimization while developers and business units would rather focus on building revenue-producing products and services. His critique illustrates the public-private cloud dichotomy and why true hybrid cloud remains a chimera.
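
To make Haviv's model concrete, here is a minimal sketch of that composition, assuming Python on AWS Lambda behind API Gateway with DynamoDB as the managed database; the table name and payload are hypothetical:

```python
import json
import boto3

# A managed database service -- no database server to deploy or tune.
# "Orders" is a hypothetical DynamoDB table.
table = boto3.resource("dynamodb").Table("Orders")

def handler(event, context):
    # Invoked via an API gateway; authentication would be delegated to a
    # managed user directory (e.g. Cognito) configured on the gateway.
    order = json.loads(event["body"])   # API Gateway proxy payload
    table.put_item(Item=order)          # the only code left is business logic
    return {"statusCode": 201, "body": json.dumps({"ok": True})}
```

Everything around those few lines, including routing, authentication, scaling and storage durability, is someone else's operational problem.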

My take

I'm reminded of the adage, "You can't invent the light bulb by continuously improving the candle." It seems that the purveyors of private cloud are fixated on candle optimization while the Amazons, Azures and Googles are mass producing better and cheaper light bulbs. Delivering a true hybrid cloud requires replicating public cloud service abstractions, all of them, internally, not ginning up some software that automates the process of migrating VMs and storage volumes from one location to another.

There are a few options, though for those heavily invested in AWS, no good ones: unless you're a three-letter agency with a 10-figure IT budget, Amazon isn't interested in operating a private AWS for you. The Cloud Foundry camp and similar proponents of generic PaaS argue for building to a vendor-agnostic abstraction layer that's open and can be deployed on many types of infrastructure, including public cloud, albeit using only lowest-common-denominator infrastructure services like EC2. While this approach works, it's severely crippled by the need to avoid higher-level cloud services for things like image recognition, serverless functions, machine learning and IoT data streaming. Unfortunately, these are precisely the areas where the mega clouds are most innovative, playing to their strengths in scale and R&D resources and, in the process, democratizing access to advanced technologies.
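
As a rough illustration of what that lowest-common-denominator approach gives up, consider image recognition reduced to a single managed-service call, here AWS Rekognition via boto3; the bucket and object names are placeholders:

```python
import boto3

rekognition = boto3.client("rekognition")

# One API call replaces a trained model, a serving cluster and the ops
# work around them. There is no portable, vendor-neutral equivalent for
# a generic PaaS to target.
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-photos", "Name": "storefront.jpg"}},
    MaxLabels=5,
)
for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```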

Another option is running the same cloud services internally as are available publicly. The most viable choices here are Azure Stack paired with public Azure; OpenStack, perhaps with a managed offering like Rackspace's OpenStack services; or Red Hat OpenShift, which also has a managed cloud option. Of these, Azure is the only one with a full-featured service portfolio that can compete with AWS, while the others are niche offerings full of compromises. Although Azure Stack seems to be hopelessly delayed, Microsoft still has the most coherent hybrid cloud strategy, if and when it can deliver the private component.

Because public cloud services and private cloud infrastructure operate at different planes of abstraction, a coherent, seamless hybrid cloud design is virtually impossible. As organizations mature in their understanding and use of public services like AWS and move beyond treating them as rentable virtual server farms, they will internalize the public-private dichotomy and see the flaws in their hybrid cloud strategies. Until the industry better addresses the abstraction-layer mismatch, I expect to see more and more organizations rethinking their hybrid cloud plans.

Image credit - via MSDN
