You only got the first half of the story from CEO Diane Greene’s kickoff of Google’s Cloud Next event, where she said “AI is our biggest opportunity.” The platform plumbing, namely containers, serverless functions, private network interconnects and strong security, is what will unlock that AI potential and much more.
While Google Cloud Platform has been a hit with cloud-savvy developers and zero-infrastructure startups, enterprises look at the cloud through a lens clouded (ahem) by existing massive investments in on-premise hardware, systems software and staff trained in the care, feeding and use of said systems.
Amid the AI hype, nestled in the barrage of announcements at Cloud Next were several that illustrate a strategy for breaking Amazon's cloud hegemony. The idea is to commingle cloud services with legacy infrastructure without locking an organization into a particular cloud service provider.
AWS penetrated the walls of Enterprise Fortress IT by going through the back door, as a growing cadre of individual developers and project teams procured AWS services with credit cards whose reimbursement was hidden in project budgets. As usage grew and AWS became entangled in critical business applications and services, its use gradually came into the open. CIOs and business executives begrudgingly accepted something they didn't initially approve, but couldn't deny was working well.
Google rightly understands that the days of surreptitious cloud proliferation in the enterprise are over and that it needs a frontal assault to win enterprise business.
To capture enterprise customers for its sophisticated cloud-native data and AI services, Google has encased Google Cloud Platform in the Trojan Horse of container and Kubernetes goodness with the promise of frictionless, multi-cloud movement and legacy infrastructure preservation. All the while, Google is making containers a more robust and compelling application environment through new development frameworks and event-driven, serverless runtime options.
Here are the details of how Google's containers über alles strategy unfolded at Cloud Next.
Kubernetes everywhere: now on-premises
An opening-day highlight at Next, at least in terms of online buzz if not technical innovation, came when Google waded into the hybrid cloud market by announcing an on-premises version of its Kubernetes Engine service, GKE.
Google had already stitched GKE into a hybrid design last year when it worked with Pivotal Software to develop the Pivotal Container Service (PKS) (see my coverage here), a product that allows Cloud Foundry PaaS users to supplement (or replace) on-premise container clusters by using GKE as a deployment target.
Google has generalized the offering by developing a GKE-compatible Kubernetes software package that can be installed on local systems but managed via the Google console as if it were just another cloud cluster.
Much like Azure Stack's integration with Azure Resource Manager, the Google Cloud Services Platform, which includes GKE On-Prem, makes private container clusters look, for administrative purposes, like just another cloud cluster and allows workloads to move smoothly between environments.
The hybrid cloud product also includes hooks into other Google Cloud Platform services including Stackdriver (monitoring and logging), IAM (user and security policy management), Container Builder (automation tool for containerizing applications) and networking (to connect on-premise clusters to Google Cloud Platform “without the need for complicated VPNs”).
The networking component is noteworthy: according to Urs Hölzle, Google's SVP of technical infrastructure, it borrows concepts Google introduced in its BeyondCorp network strategy, using application-layer (L7) authentication and encryption to allow secure, certificate-backed communication between services over public networks without a VPN.
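The general mechanism is easy to sketch. The fragment below is not Google's implementation, just a minimal illustration of the mutual-TLS idea behind that approach, using Python's standard `ssl` module: each service presents its own certificate, and connections without a valid peer certificate are refused regardless of which network they arrive from. The function name and parameters are hypothetical.

```python
import ssl

def mtls_server_context(cert_file=None, key_file=None, ca_file=None):
    """Build a server-side TLS context that requires a client certificate,
    so every connection is authenticated at the application layer (L7)
    rather than trusted by network location. PEM file paths are optional
    here only so the sketch runs standalone; a real service would always
    supply them."""
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    if cert_file and key_file:
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)  # this service's identity
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)  # CA that signs peer certificates
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject peers without a valid certificate
    return ctx
```

Because authentication and encryption travel with each connection rather than with the network perimeter, two services can talk securely across the public internet, which is what makes the "no complicated VPNs" claim plausible.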
Like many Google announcements, GKE On-Prem is still a limited-access alpha product, but if it delivers as promised, it will provide a compelling hybrid container environment based on the most mature cloud container service (CaaS) available.
GKE On-Prem was predictable, but Google’s other set of container-related announcements was less so, albeit logical in retrospect: turning containers into serverless functions. As I tweeted at the time, "wrapping containers in a serverless, event-driven shell to allow any code to execute on-demand is a no-brainer."
Existing serverless implementations such as AWS Lambda, Azure Functions and Google Cloud Functions support only a limited set of language runtimes; in Lambda's case, Node.js, Python, Java 8, C#/.NET and Go. Extending serverless support to containers allows arbitrary code to be deployed, run and scaled as an event-driven service.
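To make that concrete, here is a minimal sketch of the kind of service the container model enables: any language and any dependencies are fine, as long as the container answers HTTP on the port the platform injects (Knative passes it in a `PORT` environment variable). The `handle_event` function is a hypothetical stand-in for whatever arbitrary code a team wants to run.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_event(body: bytes) -> bytes:
    # Stand-in for arbitrary business logic -- anything a container can run.
    return b"processed " + body

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the event payload delivered by the platform and respond.
        length = int(self.headers.get("Content-Length", 0))
        result = handle_event(self.rfile.read(length))
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(result)

if __name__ == "__main__":
    # Serverless container platforms inject the listen port at deploy time.
    port = int(os.environ.get("PORT", 8080))
    HTTPServer(("", port), Handler).serve_forever()
```

Packaged in a container image, a service like this can be scaled from zero to many instances by the platform in response to incoming events, with none of the runtime restrictions of a function-as-a-service language whitelist.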
The core technology behind this magic is a Google-sponsored open source project called Knative. Knative is more than a serverless wrapper for containers, however; it is a development framework for containerized applications. As Google describes it,
Knative provides a set of middleware components that are essential to build modern, source-centric, and container-based applications that can run anywhere: on premises, in the cloud, or even in a third-party data center. Knative components are built on Kubernetes and codify the best practices shared by successful real-world Kubernetes-based frameworks. It enables developers to focus just on writing interesting code, without worrying about the ‘boring but difficult’ parts of building, deploying, and managing an application
Collectively, Google Cloud Platform's container and serverless announcements move cloud infrastructure to a higher abstraction level by further insulating developers and application owners not just from the details of virtual instance sizing and management, but from infrastructure management entirely.
In Google's world of serverless containers, the customer supplies the code and Google Cloud Platform takes care of the rest. This quote from T-Mobile, a customer with early access to the technology, sums up the vision and its benefits,
The technology behind the GKE serverless add-on enabled us to focus on just the business logic, as opposed to worrying about overhead tasks such as build/deploy, autoscaling, monitoring and observability.
While the serverless container or App Engine model works well for custom code, Google is pushing serverless automation of IT infrastructure, which it defines as "no upfront provisioning, no management of servers, and pay-what-you-use economics for building applications," across all services. These include databases (Firestore), data analytics (Dataflow), data warehouses (BigQuery), machine learning (ML Engine), storage and application messaging (Pub/Sub). I call this the SaaS-ification of cloud infrastructure.
It's a bold and compelling strategy that pushes cloud services, and the IT organizations that adopt them, in the right direction: out from toiling away on infrastructure operations — namely server, storage and network management — into a focus on business problems that can be addressed by custom services, new applications and creative data analysis.
Google has done a stellar job pioneering a service-centric, aka serverless, cloud strategy and developing the requisite technology. A tougher challenge will be getting enterprise customers to come along, since it requires an organization with a mature understanding of and commitment to a new version of IT (see my previous column on the end and rebirth of IT for details) that defines itself by tangible business benefits, not squishy internal metrics. Google is providing the components that can align IT and the business around problem-solving; making that alignment work will be up to both.
I expect AWS and Azure to rapidly imitate Google with serverless container extensions, most likely to their container instance services, Fargate and ACI respectively. However, I doubt they will push as aggressively to replace traditional infrastructure services like user-managed VMs (aka compute instances) that nicely fit with existing IT operational models with serverless, SaaS-like infrastructure.
As with container services, Google may once again be several years ahead of both its competitors and customers. Whether it can use containers and hybrid container infrastructure as the bridge to get enterprises to a new phase of cloud services will be interesting to watch over the coming years.
Helping customers navigate that transition from virtual servers and storage to higher levels of cloud service abstraction is critical to Google Cloud's success, since there are no awards for technical innovation that fails in the market. History is littered with superior technologies usurped by better-marketed or better-received products. Google doesn't want to replicate its Glass humiliation with a cloud service that becomes a business school case study alongside the Sony Betamax.