Virtualization is dead, long live containerization

SUMMARY:

It’s a bit geeky but the impact of this technology shift on cloud computing costs and enterprise app development could be huge, so bear with me

OK, I’ll admit this is a bit of a geeky topic but it’s going to have a huge impact on the cost of cloud computing and on how enterprises develop their cloud applications. In other words, it directly affects some of the most important technology decisions you’ll be involved in over the next year or two. So bear with me on this one.

The purists will probably object to my headline because containerization is actually just another approach to virtualization. Where it differs, though, is in dispensing with the conventional virtual machine (VM) layer, which is quite a radical departure from the way we’ve thought of cloud computing up until now.

In a classic infrastructure-as-a-service (IaaS) architecture — think Amazon Web Services, Microsoft Azure, or a VMware-based private cloud — you distribute your computing on virtual machines that run on, but are not tied to, physical servers. Over time, cloud datacenter operators have become very good at automating the provisioning of those VMs to meet demand in a highly elastic fashion — one of the core attributes of cloud computing.

The trouble with VMs is that they’re an evolution from a time when every workload had its own physical server. All the baggage that heritage brings with it makes them very wasteful. Each VM runs a full copy of the operating system along with the various libraries required to host an application. That duplication eats up a lot of memory, bandwidth and storage unnecessarily.

Orders of magnitude better

Containerization eliminates all of the baggage of VM-based virtualization by getting rid of the hypervisor and its VMs, as illustrated in the diagram. Each application is deployed in its own container that runs on the ‘bare metal’ of the server plus a single, shared instance of the operating system. One way to think of it is as a form of multi-tenancy at the OS level.
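To see what ‘a single, shared instance of the operating system’ means in practice, here’s a quick sketch using the Python SDK for Docker (assumed to be available as the ‘docker’ package, with the Docker daemon running locally; the image names are only illustrative). Every container reports the same kernel release as the host, because there is no guest OS underneath it:

    import platform
    import docker

    client = docker.from_env()
    host_kernel = platform.release()  # the host's kernel release

    for image in ("ubuntu:14.04", "debian:wheezy"):
        # Run `uname -r` inside a throwaway container; the output is the
        # container's view of the kernel, which is the host kernel itself.
        output = client.containers.run(image, "uname -r", remove=True)
        print(image, "->", output.decode().strip(), "| host:", host_kernel)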

Containerization as it’s practised today takes IT automation to a whole new level, with containers provisioned (and deprovisioned) in seconds from predefined libraries of resource images. The containers consist of only the resources they need to run the application they’re hosting, resulting in much more efficient use of the underlying resources.
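As a rough illustration of how quickly a container can be provisioned and deprovisioned from a predefined image, here’s another hedged sketch with the same Python SDK; ‘nginx:latest’ is just a stand-in for whatever image library you maintain, and the timing will vary by host:

    import time
    import docker

    client = docker.from_env()
    client.images.pull("nginx:latest")       # fetch the predefined image up front

    start = time.time()
    container = client.containers.run("nginx:latest", detach=True)   # provision
    print("container started in %.2f seconds" % (time.time() - start))

    container.stop(timeout=1)                # deprovision almost as quickly
    container.remove()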

We are talking improvements by orders of magnitude rather than a few dozen percentage points. People commonly report improvements in application density of 10x to 100x or more per physical server, brought about by a combination of the compactness of the containers and the speed with which they are deployed and removed.

In one example I encountered recently, UK-based IaaS provider ElasticHosts plans to exploit this differential with a metered offering based on Linux containers that eliminates much of the approximation seen in a traditional VM-based hosting environment. Says CEO Richard Davies:

You can offer very fine grained on-demand scaling … When resources become available they can be completely scaled down [because] the system is one that is more transparent to the operator.

The plan is to provision resources that are much more closely matched to demand on a minute-by-minute basis, avoiding slowdowns in performance when demand spikes or overpaying when resources remain idle after demand subsides.
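To be clear, this isn’t ElasticHosts’ actual implementation, but the underlying mechanism looks something like the following sketch: containers can be started with fine-grained CPU and memory allocations, so the resources assigned can track demand far more closely than a coarse VM size. The image name and the limits here are hypothetical:

    import docker

    client = docker.from_env()

    # Start a worker with tightly specified resources rather than a whole VM's worth.
    worker = client.containers.run(
        "example/worker:latest",   # hypothetical application image
        detach=True,
        mem_limit="256m",          # hard cap of 256 MB of memory
        cpu_period=100000,
        cpu_quota=25000,           # roughly a quarter of one CPU core
    )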

Into the enterprise mainstream

The concept of Linux Containers (LXC) is not new. It’s been an important component under the covers of many of the largest web application providers for years. Google is said to have more than 2 billion containers launching in its datacenters every week. Platform-as-a-service vendors such as Heroku, OpenShift, dotCloud and Cloud Foundry have been using Linux containerization since their inception.

What’s changed is that it used to require a lot of expertise and handcrafted code to do it right. It’s only in the past year or two that the mainstream Linux kernels and associated toolsets have built in more robust support.

Last month, the release of Docker 1.0 productized containerization for enterprise use. Suddenly the concept burst into the tech media headlines, with Docker support announced by Google App Engine, Microsoft Azure, IBM SoftLayer, Red Hat and others.

Encouraging cloud-native development

Containerization is not coming to the classic enterprise software stack anytime soon. That’s still going to be the preserve of the classic VM — quite rightly, as that’s the use case the likes of VMware were designed for.

Where containerization excels is in deploying the kind of microservices-based architecture that is becoming increasingly characteristic of cloud-native web applications. Thus it’s no surprise to see it being combined with a technology like Modulus.io, a PaaS platform for applications built on Node.js and MongoDB, which last month announced its acquisition by Progress Software. Says VP technology Matt Robinson:

Node really encourages an API-first design. In line with a container technology, you can have those modular units and have much more fine-grained scalability control over that.
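To illustrate that fine-grained control, here’s a hedged sketch of scaling just one microservice by launching extra container replicas of its image, again with the Python SDK. The image name and ports are hypothetical, and in a real deployment an orchestrator or load balancer would sit in front of the replicas:

    import docker

    client = docker.from_env()

    def scale_service(image, replicas, internal_port):
        """Run several replicas of one service, mapped to sequential host ports."""
        containers = []
        for i in range(replicas):
            c = client.containers.run(
                image,
                detach=True,
                ports={"%d/tcp" % internal_port: 8000 + i},  # host ports 8000, 8001, ...
            )
            containers.append(c)
        return containers

    # Scale only the Node.js API service to three instances, leaving everything else alone.
    api_replicas = scale_service("example/node-api:latest", replicas=3, internal_port=3000)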

A technology like Docker is instrumental in automating the principles of devops. Its predefined library images are operationally pretested, allowing developers to just go ahead and deploy. Chris Swan, CTO of CohesiveFT, says that this encourages the practice of rapid testing and ‘fast failing’ while iterating.

Verdict

  1. For a long time, containerization was understood by a small minority of insiders and unavailable to the vast majority. Docker has changed that irrevocably.
  2. The ready availability of containerization technology massively increases the economic advantage of microservices-based application architectures.
  3. Virtualization is turning into a technology ghetto where legacy applications go to live out their remaining days.

 

Image credit - Containers © Oleksiy Mark - Fotolia.com; diagram courtesy of Docker.


    1. Vijay Vijayasankar says:

      That is like saying cars are dead, long live hybrids. Containers are just another way of virtualizing with hopefully better resource utilization. The fundamental issue of relying on pre-created images doesn’t go away.

    2. dor laor says:

      Hmm, while it makes sense to use bare-metal containers within some private clouds that do not require multi-tenancy, containers won’t ever replace many other scenarios where the hypervisor shines.
      You’re welcome to read my blog posts about it [1] – my background: I managed the KVM development for 7 years and was in charge of the Xen release at Red Hat. Today I’m part of the http://osv.io team that brings the goodies of containers to the hypervisor world.

      Cheers,
      Dor
      [1] http://osv.io/blog/blog/2014/06/19/containers-hypervisors-part-1/
      [2] http://osv.io/blog/blog/2014/06/19/containers-hypervisors-part-2/
      [3] http://osv.io/blog/blog/2014/06/23/containers-hypervisors-part-3/

    3. neuroserve says:

      Just to set things straight a little bit: if you are talking about recent developments in the area of containers, please try to distinguish “full containers” and “application containers”. Docker tends to be in the camp of “application containers” and the new school of microservice-based application architectures (as if the burst of the SOA bubble of the early years had never happened). “Full containers” are a complete virtual environment with the respective pros and cons.

      http://en.wikipedia.org/wiki/SWsoft tells us that in 2001 SWsoft released http://en.wikipedia.org/wiki/Virtuozzo (for Linux and Windows, I guess). Obviously it is the fault of the “vast majority” that they didn’t buy a copy.

      Cheers

    4. skies2006 says:

      Virtualization will still be around; there are use cases where you need custom kernels, third-party drivers and so on, which cannot be done with containers.

      Containers are nice as there is lower overhead compared to full virtualization. You can overbook memory using thin-provisioning.

      The same design issues with regard to IP networking/virtual switching/SDN on migration between hosts apply to containers as well.