In just a few short years, containers have gone from a curiosity for a handful of developers to a preferred platform for next-generation applications.
Considering that the first DockerCon was a mere three years ago, with a few hundred attendees, it is astounding that one survey recently found a third of respondents already spending more than $500,000 per year on container licenses and usage.
However, building and running a container cluster isn't easy, particularly the network configuration and management of the container ecosystem necessary for production applications. That difficulty is why another survey found that a third of container users opt for cloud container services for new workloads.
While products like AWS ECS, Azure Container Service and Google Container Engine (GKE) insulate users from much of the overhead of deploying a container cluster, they're still infrastructure services at heart: under the covers, users are still running virtual machine (VM) instances, albeit preloaded with a container runtime engine, creating a cluster, managing storage connections and configuring networking.
It's a lot of overhead, particularly when starting out. But not anymore for Azure users: Microsoft's new Azure Container Instances (ACI) turns containers into another platform service, eliminating the need to fuss with low-level VM infrastructure.
ACI turns application containers into a high-level service that puts a simplifying abstraction layer between the user and the underlying container host (see my previous column on the benefits of higher-level cloud abstraction layers for background on why this is a good thing).
Conceptually, ACI is a cross between a traditional PaaS, like the Azure App Service, and serverless Azure Functions. Like a PaaS, ACI allows users to quickly deploy a versatile runtime environment suitable for almost any application, but as with serverless functions, ACI is available almost instantly and billed with extreme granularity.
Indeed, ACI turns containers into such an easily deployed and disposable resource that, as described by the Azure product manager for containers, it combines the invisible infrastructure and micro-billing of serverless without forcing the event-driven programming model.
Existing container services
The most striking things about ACI are how easy it is to use and how fast containers start up. Microsoft isn't exaggerating when it says ACI containers start within seconds; the speed was apparent during an introductory demo by Azure's director of compute, Corey Sanders.
Another area Microsoft has wisely focused on is security, since enterprise users in particular remain skeptical that containers on shared infrastructure are immune from attack by malicious users. Microsoft claims that an application is "as isolated in a container as it would be in a VM."
While it hasn't released details, this suggests that ACI uses Hyper-V Containers, announced two years ago, to fully isolate containers from the host. Reinforcing the hypothesis is the fact that Hyper-V Containers recently gained support for Linux containers to supplement their native support for Windows images, and that ACI will run either flavor of container image.
If true, it's quite ironic that running Linux images on Windows (Hyper-V Containers are a feature of Windows Server) is more secure than running them on Linux itself.
As a PaaS-like service, ACI eliminates the need to manage VM instances; configuration consists of choosing an image, specifying whether it runs on a public or private network, and setting the number of virtual CPUs and the amount of memory per container.
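To make that small configuration surface concrete, here's a sketch of such a deployment spec as plain data. The field names loosely follow the shape of ACI's REST/ARM payload, but they are illustrative, not a verbatim copy of the schema:

```python
# Illustrative sketch of the handful of settings an ACI deployment needs.
# Field names are modeled loosely on the ACI REST/ARM payload; treat this
# as a picture of the configuration surface, not the exact schema.

def container_group_spec(name, image, cpu=1.0, memory_gb=1.5, public=True):
    """Build a dict capturing an ACI-style container group configuration."""
    spec = {
        "name": name,
        "properties": {
            "osType": "Linux",
            "containers": [{
                "name": name,
                "properties": {
                    # Image from Docker Hub or a private registry
                    "image": image,
                    # Per-container CPU and memory requests
                    "resources": {"requests": {"cpu": cpu, "memoryInGB": memory_gb}},
                },
            }],
        },
    }
    if public:
        # Expose the container on a public IP
        spec["properties"]["ipAddress"] = {
            "type": "Public",
            "ports": [{"port": 80, "protocol": "TCP"}],
        }
    return spec

spec = container_group_spec("hello-aci", "nginx:latest", cpu=1, memory_gb=2)
print(spec["properties"]["containers"][0]["properties"]["resources"])
```

That really is the whole surface: an image, a network choice, and resource sizes; there is no VM size, cluster, or node pool anywhere in the spec.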
Images can be pulled from the public Docker Hub or a private repository in the Azure Container Registry. ACI usage is billed by the second based on the number of instances in use, their memory and core configuration, and the total time deployed.
The Azure documentation has pricing details, but an example of a one-vCPU, 2 GB configuration used 50 times daily over a month, for two-and-a-half minutes each time, comes to $12.19 per month.
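Working that example through with the per-second rates announced for the preview (a small fee per create request plus per-GB-second and per-vCPU-second charges; check Azure's pricing page for current values) reproduces the figure:

```python
# Reproducing the documentation's example with ACI's preview pricing.
# Rates below are the ones announced at launch; they may have changed.

CREATE_FEE = 0.0025       # per container-group create request
MEMORY_RATE = 0.0000125   # per GB-second
CPU_RATE = 0.0000125      # per vCPU-second

runs = 50 * 30            # 50 runs a day for a 30-day month
seconds_per_run = 150     # two-and-a-half minutes
total_seconds = runs * seconds_per_run

memory_cost = 2 * total_seconds * MEMORY_RATE   # 2 GB of memory
cpu_cost = 1 * total_seconds * CPU_RATE         # 1 vCPU
create_cost = runs * CREATE_FEE

total = memory_cost + cpu_cost + create_cost
print(f"${total:.2f} per month")                # $12.19 per month
```

Note that at this scale the per-create fee ($3.75 of the total) is a meaningful share of the bill, which matters when weighing many short-lived instances against fewer long-lived ones.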
ACI doesn't provide any orchestration features, such as automatically deploying an image across a cluster of machines or auto-scaling new instances in response to usage; you still need something like Kubernetes or Docker Swarm for that. However, Microsoft doesn't want to relegate ACI to simple, single-app or test/dev deployments.
To that end, as part of the ACI introduction, Microsoft announced that it has developed and released as open source an ACI connector for Kubernetes that enables Kubernetes clusters to deploy to Azure Container Instances. The combination provides the power of today's most popular container cluster manager and orchestration engine with the convenience and pricing granularity of ACI.
Integrating the two introduces an interesting hybrid deployment model, as Sanders notes in a blog post announcing the product:
Azure Container Instances can be used for fast bursting and scaling whereas VMs [ note: as part of a Kubernetes cluster ] can be used for the more predictable scaling. Workloads can even migrate back-and-forth between these underlying infrastructure models.
Insulating users from VM deployment and management makes ACI significantly different from existing container services like AWS ECS, Azure Container Service and Google GKE.
Even so, as a general-purpose container platform, it's also notably different from most PaaS products, which are either strongly opinionated about development languages and application methodology or designed for specific scenarios such as websites, mobile backends or particular business processes.
However, there are PaaS similarities, which makes extending a tightly defined cloud application platform to support general-purpose Docker containers an obvious decision.
Indeed, Google App Engine has largely done this by supporting custom runtimes written in any language and packaged as a Docker image built from a Dockerfile, with a custom software stack or third-party libraries.
The GAE billing model, priced per hour, isn't as granular as ACI's; however, GAE provides more flexibility and has features such as access to databases (Cloud SQL and the NoSQL Cloud Datastore), persistent disk (Cloud Storage) and a search service. Although the GAE flexible environment, as it's called, still seems harder to use than ACI, one can expect Google to improve it over time.
ACI also isn't the first service of its kind: the startup Hyper.sh offers container hosting without VM management. Like ACI, Hyper can instantiate containers in less than a second, has a small memory footprint, uses a small-kernel OS to deliver hardware-enforced container isolation and can run any Docker image.
In that respect, Hyper resembles the VMware Photon Platform, without the dependence on a single, proprietary hypervisor and container OS.
Labeled a preview service, ACI is a work in progress, initially available in just three regions (US East, US West, West Europe); expect it to improve and become more widely available as Microsoft gathers feedback from early users.
One area that needs attention is support for the broader container ecosystem, including better packaging with Kubernetes, support for other orchestration systems like Docker Swarm and Apache Mesos, and access to other hosted container registries.
Microsoft also needs to provide better guidance, documentation and code samples for integrating ACI with other Azure services including its PaaS stack, Azure Functions, Scheduler and ARM (Azure Resource Manager) templates.
By providing significant and beneficial technological differentiation from other cloud container services, ACI sets an example I expect competitors to follow. Don't be surprised if AWS shows that it's already been working on something similar by introducing a competitive service at re:Invent this fall.
Likewise with Google Cloud, which has made a point of differentiating itself based on the performance and billing granularity of its services.
I expect a two-pronged response from Google: first, improving the usability and marketing of GAE flexible instances; second, introducing a direct ACI competitor that provides VM-less containers-as-a-service.
For organizations still debating how to use containers in their application portfolios, the simplicity, flexibility and granular billing model of ACI make it a no-brainer.
As users scale their ACI implementations, they should pay close attention to costs, adjusting usage to exploit the granular billing model and instant container instantiation by aggressively disposing of instances when idle.
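The payoff of that discipline is easy to quantify. Using the same preview-launch rates as in the earlier pricing example (again, check Azure's pricing page for current values), compare keeping a 1 vCPU / 2 GB container running all month with running it only for 50 two-and-a-half-minute bursts a day:

```python
# Rough illustration of why disposing of idle instances matters, using
# ACI's preview-launch rates (these may have changed since).

RATE = 0.0000125          # per GB-second and per vCPU-second
CREATE_FEE = 0.0025       # per container-group create request

def monthly_cost(seconds, creates, cpu=1, memory_gb=2):
    """Total monthly charge for a given amount of runtime and creates."""
    return (cpu + memory_gb) * seconds * RATE + creates * CREATE_FEE

always_on = monthly_cost(30 * 86400, creates=1)        # one long-lived instance
bursty = monthly_cost(50 * 30 * 150, creates=50 * 30)  # dispose after each run

print(f"always-on: ${always_on:.2f}, bursty: ${bursty:.2f}")
# always-on: $97.20, bursty: $12.19
```

Even after paying 1,500 separate create fees, the burst pattern costs roughly an eighth of the always-on pattern, which is exactly the behavior the per-second billing model is designed to reward.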