
Bringing serverless convenience to containers

Kurt Marko, May 15, 2019
Microsoft, Red Hat partner on new features, serverless technology.

(Serverless vs. other cloud-native technologies diagram © CNCF)

The efficiency and automation of containerized applications and infrastructure have IT organizations gravitating from traditional VM servers to container clusters managed by orchestration software, notably Kubernetes. Indeed, several surveys show that containers have penetrated more than three-quarters of all enterprises, with most of them using containers for production applications.

However, as developers continue to seek higher levels of software abstraction to further insulate them from the details of infrastructure implementation, serverless functions (aka FaaS, functions-as-a-service) have become an attractive alternative for many needs. Pioneered by AWS Lambda, FaaS can significantly simplify the design and implementation of modular, cloud-native applications. 

Unfortunately, early proprietary FaaS products like Lambda and Azure Functions clashed with another critical requirement of enterprise cloud buyers: multi-platform portability for lock-in protection. Of course, support for popular programming languages like Java, Node.js, Python and Ruby lets cloud vendors rightfully claim that function code is portable, but the configuration, implementation and cloud service integrations most certainly are not, meaning that DevOps teams faced significant work moving from one serverless platform to another.

A cleaner architectural approach would combine the multi-platform portability of Kubernetes with the usage simplicity and deployment immediacy of serverless functions. Several recent announcements from Microsoft Azure and Red Hat, coupled with earlier work by Google and the broader container open source community, are bringing serverless convenience to containers and enabling portable, multi-cloud application environments built on an ecosystem of containers and associated orchestration and service-mesh software.

Azure and Red Hat making containers first-class serverless hosts

At coinciding, though not directly competing, events last week, Microsoft (Build) and Red Hat (Summit) announced several products and projects designed to improve container usability, automation and flexibility. Microsoft went first with a series of announcements it positioned as making Kubernetes easier to use and “enterprise-grade”, namely:

  • The general availability of AKS virtual nodes, which can replace worker nodes on user-managed Kubernetes clusters (made up of Azure VM instances) with Azure-managed container instances.
  • Kubernetes-based Event-driven Autoscaling (KEDA), jointly developed with Red Hat, which provides FaaS-like deployment and scaling of Kubernetes pods (the atomic unit of workload deployment in Kubernetes) based on application events, not system metrics, such as the length of an Azure Storage (message) Queue or the backlog on a Kafka topic. As an open source project, KEDA is designed to work in any Kubernetes environment, including on-premises systems, not just Azure AKS clusters.
  • The preview release of Azure Policy support for Azure Kubernetes Service (AKS) which assists in the consistent application of security and policies to Kubernetes clusters and integrates with existing identity-management systems like Azure AD.
  • The general availability of Azure Dev Spaces, a managed development environment for Kubernetes workloads that simplifies the implementation of automated CI/CD (continuous integration and delivery) processes using Azure Pipelines.

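The event-driven scaling KEDA describes can be illustrated with a short sketch: size a deployment to the measured event backlog and scale to zero when the queue is empty. This is not KEDA's actual controller code; the target of five messages per replica and the replica bounds are illustrative assumptions:

```python
import math

def desired_replicas(queue_length: int,
                     target_per_replica: int = 5,
                     max_replicas: int = 100) -> int:
    """Illustrative event-driven scale decision, loosely in the spirit of KEDA.

    Scales the workload to zero when there is no backlog (the FaaS-like
    behavior plain Kubernetes autoscaling lacks), otherwise sizes it so each
    replica handles roughly `target_per_replica` queued messages.
    """
    if queue_length <= 0:
        return 0  # no events pending: scale to zero
    needed = math.ceil(queue_length / target_per_replica)
    return max(1, min(needed, max_replicas))
```

For example, a backlog of 23 messages with a target of 5 messages per replica yields 5 replicas, while an empty queue yields none.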

Gopinath Chigakkagari, Group Program Manager for Azure Pipelines, summarizes its goals this way (emphasis added):

We believe developers should be able to go from a Git repo to an app running inside Kubernetes in as few steps as possible. With Azure Pipelines, we aim at making this experience straightforward, automating the creation of the CI/CD definitions, as well as the Kubernetes manifest. When you create a new pipeline, Azure DevOps automatically scans the Git repository and suggests recommended templates for container-based applications. Using the templates, you have the option to automatically configure CI/CD and deployments to Kubernetes.

Of these, the combination of AKS virtual nodes and KEDA is the most significant since it promises to bridge the worlds of containers and serverless functions. A key feature is KEDA’s support for Azure Functions languages, programming model and development tooling. As explained in a blog by Jeff Hollan, the Senior Program Manager for Azure Functions (emphasis added):

Because Azure Functions can be containerized, you can now deploy functions to any Kubernetes cluster while maintaining the same scaling behavior you would have in the Azure Functions service. For workloads that may span the cloud and on-premises, you can now easily choose to publish across the Azure Functions service, in a cloud-hosted Kubernetes environment, or on-premises.

While Azure Functions provides a fully managed serverless service, many customers want the freedom to run serverless in an open environment they can manage. With the release of KEDA, any team can now deploy function apps created using those same tools directly to Kubernetes. This allows you to run Azure Functions on-premises or alongside your other Kubernetes investments without compromising on the productive serverless development experience.
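Concretely, deploying a containerized function app this way pairs a normal Kubernetes Deployment with a KEDA ScaledObject resource that tells the autoscaler which event source to watch. The following is a sketch based on KEDA's documented ScaledObject format; all names, the queue, and the thresholds are placeholders, not values from the announcement:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-processor          # placeholder name
spec:
  scaleTargetRef:
    name: orders-processor        # an existing Deployment running the function container
  minReplicaCount: 0              # scale to zero when the queue is empty
  maxReplicaCount: 20
  triggers:
    - type: azure-queue           # KEDA scaler for Azure Storage Queues
      metadata:
        queueName: orders
        queueLength: "5"          # target messages per replica
        connectionFromEnv: AzureWebJobsStorage
```

With a manifest like this, the same function container image can run in the Azure Functions service or on any Kubernetes cluster, which is the portability point Hollan makes above.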

(Kubernetes cluster diagram © Azure blog)

Making containers ‘safe’ for the enterprise: Red Hat’s contributions

Containers were a highlight of Red Hat’s annual Summit, where it used the dog-and-pony show to announce several enhancements to its OpenShift Kubernetes environment. Playing to the multi-cloud goals and lock-in aversion of most enterprises, Red Hat said that OpenShift would soon be available on all of the major cloud services, namely Alibaba, AWS, Google Cloud, IBM Cloud and Microsoft Azure (notably absent: Oracle Cloud), along with being supported on private cloud infrastructure such as OpenStack, VM platforms and bare-metal servers.

Red Hat’s latest OpenShift 4 release has several significant additions that improve both operational efficiency and developer flexibility, including:

  • Developer self-service and on-demand provisioning features along with improved automation of container builds and deployments.
  • Support for the aforementioned KEDA (Kubernetes-based event-driven autoscaling) technology for serverless containers, which enables the use of Azure Functions in OpenShift as a preview feature.
  • Comparable preview support for the Knative serverless application framework for building, deploying and managing FaaS workloads. Like KEDA, Knative, initially developed and evangelized by Google, provides function deployment and autoscaling based on application events.
  • An integrated service mesh that combines the Istio control plane, the Envoy sidecar proxy handling the data plane, the Jaeger distributed tracing tool and the Kiali container monitoring software.
  • CodeReady Workspaces, a complete container development environment that includes a Web-based IDE plus the tooling and dependencies needed to create applications or functions for deployment on Kubernetes and Knative.

As a CNCF-compliant Kubernetes implementation, OpenShift is both portable across various on-premises and public cloud environments and compatible with workloads running on other Kubernetes implementations. Indeed, besides on-premises deployments, OpenShift is available as a managed cloud service from Azure, Atos and DXC Technology.

My take

Container technology and the corresponding market for products and services are rapidly evolving from their roots in the cloud developer community, where efforts went towards optimizing for large-scale, distributed, cloud-based online services. As enterprises fuel the next stage of container adoption, recent work, both that summarized here from Microsoft and Red Hat and products such as Google Cloud Anthos (see my coverage here), has focused on hybrid infrastructure and multi-cloud portability. It has also built out the required software framework, such as service mesh/workload routing, security, DevOps CI/CD process integration and application conversion/bundling, to make containers a useful production environment for a broader set of enterprise use cases and levels of IT expertise.

The next stage of container evolution entails addressing the potentially disruptive technology of event-driven serverless functions. After more than four years of experience with AWS Lambda and similar services from Azure and Google Cloud, developers have embraced the ability to glue composite applications together using lightweight, instantly available code execution environments. However, the system design and usage assumptions underlying serverless services, which require no prior configuration or instantiation, limit their utility: you’re not going to build an entire application out of event-driven functions.

(A short history of serverless technology diagram © CNCF)

As a white paper from the CNCF’s Serverless working group details, serverless is best used for asynchronous, independent and infrequent workloads that are stateless and highly dynamic. By further insulating developers from the details of infrastructure implementation, FaaS appeals to developers who want to focus on their business logic and application architecture and trust the cloud provider to deliver a highly reliable, dynamically scalable service. Thus, FaaS is a perfect complement to containers, not a replacement for them. Recent announcements show that Microsoft and Red Hat with KEDA, along with Google, IBM, Red Hat and the rest of the Knative developer community, are building a multi-cloud foundation for next-generation applications that bridges the world of container infrastructure and serverless FaaS; a platform that enterprise architects, developers and IT executives are wise to investigate thoroughly.
