Puppet Labs extends reach of DevOps automation

Phil Wainewright, March 30, 2015
Summary:
Puppet Labs extends its DevOps automation tools to Docker containers, AWS and bare metal as it seeks to embrace enterprise IT hybrid infrastructures.

Stories still surface of organizations where it takes as long as 300 days for IT to provision a new system — and that's just putting the hardware and software in place, before any functional implementation work can start.

Software from Puppet Labs, and from its main rival Chef Software, is designed to cut that lead time to hours or even minutes. Last week, Puppet extended its reach to operate across Docker containers, Amazon Web Services instances and even bare-metal servers.

By automating the entire process of bringing a system live, such tools are helping to enable the mainstream adoption of a concept known as DevOps. By marrying the twin disciplines of developing systems and then operating them, rather than leaving the two as discrete processes, DevOps massively speeds up the pace of IT delivery and evolution.
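To give a sense of what that automation looks like, here is a minimal sketch of a Puppet manifest that declares the desired end state of a web server — the package name, file path and module source are illustrative, not drawn from any particular customer setup:

```puppet
# Declare the desired state; Puppet converges the node to match it
# on every run, rather than scripting one-off install steps.
package { 'nginx':
  ensure => installed,
}

file { '/etc/nginx/nginx.conf':
  ensure  => file,
  source  => 'puppet:///modules/webserver/nginx.conf', # hypothetical module file
  require => Package['nginx'],
  notify  => Service['nginx'], # restart the service when the config changes
}

service { 'nginx':
  ensure => running,
  enable => true,
}
```

Because the manifest describes an end state rather than a sequence of steps, the same code can provision a brand-new machine or repair configuration drift on an existing one.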

The market for such tools is growing strongly, according to research data released earlier this month by Gartner:

The total [market] for DevOps tools [will reach] $2.3 billion in 2015, up 21.1 percent from $1.9 billion in 2014. By 2016, DevOps will evolve from a niche strategy employed by large cloud providers to a mainstream strategy employed by 25 percent of Global 2000 organizations.

That adoption is being driven by the need for these enterprises to implement more agile, digital business processes, said Gartner research director Laurie Wurster:

Digital business is essentially software, which means that organizations that expect to thrive in a digital environment must have an improved competence in software delivery.

Massively automated at scale

DevOps was first pioneered by large-scale cloud providers, such as Google and Amazon, to allow them to operate massively automated datacenters at scale. Now large enterprises are adopting it to bring similar economies of operation to their own IT estates, as they move more and more of their assets to both private and public cloud infrastructures.

Of course, making the transition from a 300-day provisioning cycle to one that's more of a next-day proposition isn't just a matter of bringing in a trendy new software tool. Implementing IT automation becomes a trigger for eliminating all of the form-passing, departmental fiefdoms and manual workarounds that have grown up over the years. As Gartner's Wurster points out:

Culture is not easily or quickly changed. And key to the culture within DevOps is the notion of becoming more agile and changing behavior to support it — a perspective that has not been widely pursued within classical IT operations.

Tools such as Puppet and Chef — others in the market include Ansible, CFEngine, and Salt — have mostly been used to provision software within a virtualized environment, whether that's on a public cloud such as AWS or on private cloud infrastructure. The automation makes it possible to build new instances or take them down on demand — as the WSJ Europe's CIO blog noted earlier this month, that allows an online business such as e-commerce marketplace Etsy to revise its software up to 70 times a day:

Etsy noted in its IPO prospectus Wednesday [March 4] that it updates code as often as every 20 minutes. Changes are made up to 70 times per day. In 2014, the company executed 10,000 code 'deploys'.

Enterprise flexibility

Last week's Puppet announcement adapts that flexibility to the more hybrid environment of the enterprise. It's part of a deliberate move to add new tools that extend its core capabilities beyond the scope of the virtual server, as Tim Zonca, director of product marketing at Puppet Labs, explained to me in a briefing last week:

The value that Puppet provides our customers is one system rather than pockets of tools. Our customers want to have a consistent and repeatable way to manage their infrastructure. It's less that there's not something else out there, it's a question of consistency across these sophisticated and heterogeneous environments.

Last month, Puppet announced a partnership with Arista Networks to extend its management reach to datacenter networking. Last week's announcement adds new code management and infrastructure-as-code capabilities.

Perhaps most significant is the extension of Puppet Node Manager to launch and manage Docker containers — an increasingly popular form of predefined application environments that utilize server resources more efficiently than conventional virtual machines. Zonca told me:

What we see is, as businesses work with containers, especially to simplify application deployment, it's critical to avoid complications when deploying the Docker daemons.

They want the same kind of help managing the complexity as with some of the other stuff they're relying on Puppet to help manage.

They spend less time firefighting configuration issues and more time streamlining their application deployment.
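Zonca's comments refer to managing Docker through Puppet's module system. As a rough, hedged sketch of the idea — the docker class and the docker::image and docker::run defined types come from the community Docker module for Puppet, and exact parameter names can vary by module version:

```puppet
# Install and manage the Docker daemon itself through Puppet,
# so the engine is configured the same way on every host.
class { 'docker': }

# Pull the image Puppet should keep available on this node.
docker::image { 'nginx': }

# Keep a container running from that image; the container name,
# image and port mapping here are placeholders.
docker::run { 'web':
  image => 'nginx',
  ports => ['80:80'],
}
```

The point is less the syntax than the consistency: the daemon, the images and the running containers are described in the same declarative language as the rest of the infrastructure.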

There are new change control processes for a variety of AWS services, including EC2, Virtual Private Cloud, Elastic Load Balancing, auto-scaling, security groups and Route 53.
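In Puppet's AWS module, those cloud resources are modeled as ordinary Puppet types, so the same change control applies to them as to servers. A minimal sketch, assuming the ec2_instance and ec2_securitygroup types from the puppetlabs-aws module — the region, AMI ID and rules are placeholders, and parameters may differ between module versions:

```puppet
# A security group permitting inbound HTTP from anywhere.
ec2_securitygroup { 'web-sg':
  ensure      => present,
  region      => 'us-east-1',
  description => 'Allow inbound HTTP',
  ingress     => [{
    protocol => 'tcp',
    port     => 80,
    cidr     => '0.0.0.0/0',
  }],
}

# An EC2 instance attached to that group; the AMI ID is a placeholder.
ec2_instance { 'web-01':
  ensure          => running,
  region          => 'us-east-1',
  image_id        => 'ami-123456',
  instance_type   => 't2.micro',
  security_groups => ['web-sg'],
}
```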

Finally, the new release includes an updated version of a tool called Razor, which automatically sets up virtualization on a bare-metal server, ready for Puppet to manage. The purpose is to automate a further step in the provisioning process.

Managing hybrid environments

The primary target for all these capabilities is enterprise teams collaborating within an IT organization to manage hybrid infrastructures. This covers pretty much every enterprise Puppet speaks to, said Zonca:

Most of the sales calls I've been on, the smallest one had about seven different IT teams. Hybrid cloud is what they're trying to figure out.

They're trying to move some stuff to the cloud, they want to use Puppet as an organization to help them move across all their environments.

Puppet helps manage access so that individual teams are limited to specific resources within the hybrid environment, he added.

Role-based access control allows the management teams within this hybrid cloud environment to say that certain teams can only do certain subsets, for example AWS or a certain application stack on Linux or Windows.

Even though the whole picture is hybrid cloud, the specific user team is constrained to a certain set of resources.

Effective use of these tools has a palpable impact on outcomes, according to Puppet Labs' annual State of DevOps survey; last year's report (PDF) found that:

High-performing organizations are deploying code 30 times more frequently, with 50 percent fewer failures than their lower-performing counterparts ...

High IT performance correlates with strong business performance, helping to boost productivity, profitability and market share.

Puppet is currently collecting responses for its 2015 State of DevOps report.

My take

IT has operated like a craft industry for far too long. Time to automate and industrialize.

Image credit: Data center management with iPad © everythingpossible - Fotolia.com.
