A truism about most IT systems is that added features come with added complexity. The phenomenon is familiar to most people through the feature creep and associated menu trees and toolbars endemic to much PC software. In IT systems, complexity is often an unfortunate consequence of unpolished software that requires IT administrators to spend time combing through documentation to understand and correctly configure the various options. Sadly, persistent complexity has bled over to the cloud, where one can spend hours contemplating the various options for compute instances, configuration parameters and deployment choices.
Unleashing a profusion of choice and flexibility is generally done for good reasons: new cloud features result from customer demand. However, that profusion makes life difficult for those who want to use a feature without becoming experts on its internals. It is something to celebrate when a company addresses the paradox of choice by using sophisticated software to hide complexity, not expand it. Thus, what makes several of Google Cloud's recent announcements at its Cloud Next OnAir event so notable isn't so much the features themselves, but how Google implemented them (see our news coverage from Derek du Preez here).
Confidential computing gets single-click setup
Confidential Computing is the industry's marketing term for using hardware-enforced secure enclaves to create trusted execution environments (TEEs) that protect data while an application is running. I detailed the characteristics and technical elements of secure enclaves and TEEs in a recent column that noted the implementation hurdles to adapting enterprise applications to such environments. As I mentioned, Azure and IBM Cloud currently offer instances with TEEs, while AWS has a preview product, Nitro Enclaves, that appears to have comparable capabilities. Last week, Google Cloud joined the party by announcing its Confidential VMs feature.
Google hasn't provided many technical details, but from the announcement and associated sparse documentation, we know that Confidential VMs:
- By default use N2D series VMs running on 2nd-generation (Rome) AMD EPYC CPUs that Google claims deliver 13 percent better price-performance than equivalent Intel-based (Cascade Lake) N2 instances. Users can also customize the N2D instance using custom machine types within the following limits:
  - 2 to 96 vCPUs
  - 0.5 to 8 GB of memory per vCPU
- Use Google Shielded VMs that incorporate added security features, including boot images secured by a virtual trusted platform module (vTPM) and image-integrity checks that protect against remote privilege escalation and malicious insider attacks.
- Have software and drivers optimized for EPYC hardware that virtually eliminate the performance degradation of using memory encryption.
- Use the Google-developed, now open-source Asylo software framework for confidential computing. Asylo, which Google supplies as a container image from its cloud registry, includes the requisite code dependencies and interfaces to allow applications to run within a TEE without modification, and provides portability across hardware environments.
The most impressive aspect of Google's implementation is how it hides all the technical wizardry behind a single checkbox on the instance setup screen, making a confidential computing environment just another configuration option like instance size or deployment region. Instance-integrity monitoring has also been integrated into the existing Stackdriver logging facility via a few new events for image boot validation and instance launch attestation.
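The console checkbox has a CLI equivalent that is nearly as terse: a single flag on the standard instance-creation command. A minimal sketch, in which the instance name, zone and image are illustrative choices (Confidential VMs don't support live migration, hence the TERMINATE maintenance policy):

```shell
# Create a Confidential VM: one flag (--confidential-compute) enables
# AMD SEV memory encryption on an N2D instance.
gcloud compute instances create demo-confidential-vm \
    --zone=us-central1-a \
    --machine-type=n2d-standard-4 \
    --confidential-compute \
    --maintenance-policy=TERMINATE \
    --image-family=ubuntu-2004-lts \
    --image-project=ubuntu-os-cloud
```

Everything else about the instance — networking, disks, IAM — is configured exactly as it would be for a conventional VM.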
Blurring the lines between standard and government cloud regions
Google pulled off a similarly impressive feat of technological simplification with Assured Workloads for Government, an alternative to isolated cloud regions like AWS GovCloud or Azure Government. Unlike those regional approaches, Google has turned compliance with various DoD, FBI and FedRAMP standards into a deployment option when creating a new environment. As such, Assured Workloads acts as a security overlay to existing cloud regions that provides:
- Enforcement of data and workload locality based on organizational policies.
- Tighter access and management restrictions, such as citizenship, geographic location and background checks, on Google administrators working on Assured Workload infrastructure.
- Preconfigured security settings that comply with controls including DoD IL4 and FBI CJIS.
These controls can be applied to Compute Engine, Persistent Disk and Cloud Storage, and Key Management resources in any U.S. region. As a Deloitte spokesperson says in Google's announcement, the implementation "reduces the friction of compliance," since the controls act as a security overlay on any U.S. region rather than requiring a one-off environment.
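Because compliance is a property of the workload environment rather than a special region, creating one looks much like any other provisioning step. A sketch along the lines of the beta CLI surface — the organization ID, billing account, name and regime value here are all placeholders, and the exact flag names may differ as the product matures:

```shell
# Create an Assured Workloads environment with the DoD IL4 compliance
# regime applied as a deployment option (all identifiers are placeholders).
gcloud beta assured workloads create \
    --organization=123456789 \
    --location=us-central1 \
    --display-name="il4-workloads" \
    --compliance-regime=IL4 \
    --billing-account=billingAccounts/000000-000000-000000
```

Resources subsequently created inside that environment inherit the locality and personnel-access restrictions automatically.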
Taking analytics to your data
BigQuery Omni is another example of Google Cloud simplifying life for its customers, in this case for data analysts who know BigQuery and want to use its semantics on data stored in other clouds. Derek du Preez's article cited Google Cloud's GM of data analytics, Debanjan Saha, summarizing how BigQuery Omni breaks down the environmental silos that balkanize data within an organization (emphasis added):
To solve that problem we are announcing BigQuery Omni, which is a multi-cloud analytics platform. It lets our customers analyze their data, no matter where their data is. It could be Google Cloud, it could be in AWS (which is now available as a private alpha), and very soon it is going to be available on Azure. Our customers always wanted to run data analytics wherever their data sits and with BigQuery Omni they have it today.
Google achieves cross-cloud capability by deploying Anthos container clusters running the BigQuery Dremel analysis engine on its competitors' infrastructure. The design allows data to be analyzed in situ and results stored back to the local cloud's storage, S3 for example, thus avoiding data transfer costs and performance-sapping network latency. Indeed, Omni's cross-platform design is similar to how Google allows GSuite customers to use Dropbox or Box as storage for Docs, Sheets and Slides content, thus providing a consistent UI for content creation and analysis regardless of where the data is stored.
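From the analyst's seat, the in-situ design is visible only in the job's location, which names an AWS region instead of a Google Cloud one. A sketch using the standard `bq` command-line tool, with hypothetical project, dataset and table names standing in for an S3-backed external table:

```shell
# Run a standard-SQL query against an S3-backed table via BigQuery Omni.
# The only difference from an ordinary BigQuery job is the location flag,
# which points at the AWS region where the Anthos-hosted Dremel engine runs.
bq query --location=aws-us-east-1 --use_legacy_sql=false \
  'SELECT user_id, COUNT(*) AS events
   FROM `myproject.aws_dataset.clickstream`
   GROUP BY user_id
   ORDER BY events DESC'
```

The familiar SQL dialect, tooling and billing project carry over unchanged; only the data's home region differs.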
Individually, none of the new Google Cloud features is earthshaking, but collectively they make the platform much more compelling by:
- Removing much complexity and friction associated with setting up, configuring and managing sophisticated features like trusted execution environments, regulatory compliance and big data analysis.
- Providing flexibility for those who wish to customize features like TEEs and government clouds, which typically offer few options for instance sizing (TEEs) or geographic location (government/regulatory clouds).
- Establishing Google's BigQuery engine as a cross-platform interface for in situ data analysis, which will lead many analysts to try its other analytic services.
- Demonstrating the feasibility of Anthos as a cross-platform application engine and illustrating the benefits of Google's centrally-managed container infrastructure.
Google Cloud might be a distant third in IaaS market share, but the gap narrows when comparing its features and technology to those of its two larger competitors. The days of writing Google Cloud off as another expensive dalliance to be abandoned once executives tire of funding it are over. The significant investments and innovations in both Cloud and GSuite on display during the Next OnAir event show a company that isn't backing away from staunch, well-positioned competitors like AWS, Microsoft, Slack and Zoom. Any post-pandemic enterprise building a cloud-based IT environment that overlooks Google is making a mistake.