At the start of his keynote presentation to last week’s DevOps World conference, Sacha Labourey, CEO and co-founder of DevOps specialist Cloudbees and host of the event, addressed what he called ‘the elephant in the room’: just how much the world has changed since the company’s 2019 conference. He said that back in January this year the conference planning team decided the theme of this year’s event should be ‘Transforming the Future of Software Delivery’.
As this year has progressed, that choice has come to seem more apposite than many would have thought possible. The result is that, within the purview of CIOs, DevOps looks set to change from being a ‘good to have’ capability amongst any resident applications development community to a ‘do without at your peril’ one.
The nature of that peril was outlined to the conference by one of the guest speakers at the keynote, Aparna Sinha, Director of Product at Google Cloud:
Nearly every company I've spoken to over the past few months is looking to modernise their applications to better deal with three major challenges. First is COVID-19, which has resulted in extreme demand patterns. Second is working from home: that's become the new norm, with companies facing new security and compliance issues as a result. Third is cost containment, which has emerged as a clear priority for 2020 and the foreseeable future. All costs from CapEx to OpEx are under scrutiny.
Normally, I wouldn’t consider such dramatic change likely, but one thing we can be sure about is that ‘normal’ is no more, even if no one is yet sure what it will morph into. The old normal would have seen a continued transition to the cloud of existing legacy applications that still add value to the business. New functionality would appear as cloud-native apps, with it all joined up by APIs.
But the changes forced upon businesses by COVID-19 – the impact of key personnel being struck down, for example – coupled with the move to (and broad acceptance of) working from home, may end up being so significant that they force major re-thinks of what applications will be required, even in the back office. Modifying and upgrading legacy applications may no longer fit the bill.
Personally, I would add one more candidate to this list of challenges, that of coping with the changes that come with the increasing prevalence of edge computing. Certainly, it does not have the urgency of the pandemic about it and for many users edge is still just something to read about, but its potential to create serious, permanent and possibly terminal change amongst many classes of established legacy applications is increasingly strong.
While by no means a certainty, back office and mainstream ‘big apps’ may be in for significant change as these events affect how businesses operate and restructure themselves. In particular, the place of traditional, large, centralised monolithic application suites is likely to come under pressure and be found wanting as application needs become more distributed, to match the distributed nature of employees and workloads. By the same token, traditional back office applications are likely to shrink as they effectively become the resting places of each business's recent history, with little need to process data. Such changes will be beyond the scope of a bit of code hacking to update the current legacy application.
Death of the three-year upgrade
Major architectural rethinks will be the order of the day, and that could open up the floodgates for new applications and for DevOps, and in particular the Continuous Integration/Continuous Delivery (CI/CD) of code. After all, if you need to get rid of the old applications, why not get rid of the three-year wait for an upgrade that goes with them? As we have seen, a hell of a lot can change in three years and businesses will need to react much, much faster than that over the coming years.
Sinha’s view is that organizations can overcome these hurdles by biting the bullet of application modernisation, and can expect to see improved business outcomes as a result:
In fact, our research shows that elite performers – that is, the ones who ship code multiple times a day – are two times more likely to achieve or exceed their commercial goals, including increased profitability, productivity, and customer satisfaction.
She acknowledges that actually achieving this can be difficult, especially for large enterprises. Problem areas include maintaining visibility, control and a consistent developer experience across fragmented environments; how to measure progress with the right metrics; and deciding which operations practices to adopt at the team level and organisational level. There is also the thorny universal problem of driving cultural change across organizations.
There are, however, clear best practices that address these questions and can be replicated, which Google Cloud has made available through its recently announced Google Cloud Application Modernization Program, aka Google CAMP. This claims to provide a data-driven baseline assessment, a blueprint with a specific set of proven development/security/operations practices, and, of course, the Google Cloud platform on which to build it.
The CAMP platform can secure and manage both legacy and new applications, and is built to support modern principles such as shifting left on security, fast feedback on changes, gradual rollouts, and rapid elasticity. The tools available under CAMP include Cloud Code, Cloud Build, Artifact Registry, and Cloud Operations.
As for conference host Cloudbees, Labourey outlined the mission he is setting for the company: to find ways to support all users in the pursuit of accelerated application resiliency which, given the central role of applications software in business, is still an issue:
Our society is used to things just working. Even a slight interruption, a cancelled flight, a power outage, is a major disruption to us, and actually quite a bit of frustration as well. And few of us had any idea what disruption truly meant before we went through it.
With respect to this problem, the Cloudbees position is that resilience can be built on three pillars: everything is automated, everything is connected, and everything is resilient in itself. Breaking problems down along those three lines helps segment them into achievable challenges and provides a roadmap that brings users closer to that state of accelerated resiliency.
If in doubt, automate
According to Shawn Ahmed, Senior VP and General Manager of Software Delivery Automation at Cloudbees, speed has often been seen as a ‘nice to have’, but one he also associated with an increased risk to security and safety. The Cloudbees reality is that the solutions that bring speed can also bring security and safety, if done right. That makes this transformation a must-have, because it brings resiliency.
One speed restrictor is the single point of failure problem, where disconnections between siloed teams, tools and processes end up getting in the way of delivering software fast and safely. The solution here is increased use of automation. Codifying the knowledge base across the software delivery lifecycle extracts the tribal knowledge that sits distributed across organizations and in people's heads. This can then be captured and converted into formal rules:
Once that's done, it means all team members can work on the same playing field and no one individual, no one team or toolset is a critical linchpin, even by accident.
This also helps overcome the fact that very few organizations today have truly standardised and fully integrated CI/CD into a single, unified software value stream. Pipelines don't speak to each other, which means very few companies can pinpoint and quantify where value is stuck in the value stream. Thus, companies that still rely on manual processes or a set of disconnected tools to build and deliver software are, in his view, unlikely to survive at scale without a fully automated software delivery backbone.
‘Everything automated’ is now table stakes, according to Buffy Gresh, Cloudbees' VP of Product Business Teams, and as part of that the mantra of ‘everything is code’ has resurfaced in the company’s lexicon, not least because many companies are finding the concept too difficult to work with. In her view, however, it is worth persevering, for if companies can achieve ‘everything is code’, developers can get sanctioned, secure, pre-production environments on demand.
This will play an important role with any company looking to create auditable applications that scale, a capability likely to play a major part if the new normal demands new applications. Gresh felt strongly enough about the subject to lay down a challenge to the online delegates:
I challenge you to architect your codified elements, because they will become the key components of your scalability. You will begin to be able to use advanced capabilities like matrix pipelines, which build on the foundation of pipelines as code but allow you to externalise and parameterise – achieving ‘everything automated’ at scale, auditable, and reusable.
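For readers unfamiliar with the term, matrix pipelines are a feature of declarative Pipeline in Jenkins, the open source automation engine at the heart of the Cloudbees platform. A minimal Jenkinsfile sketch of the idea might look like the following, where the axis names and values are illustrative assumptions rather than anything shown at the conference:

```groovy
// Jenkinsfile (declarative Pipeline) - illustrative sketch only
pipeline {
    agent any
    stages {
        stage('Build and Test') {
            matrix {
                // The matrix directive expands these axes into every
                // combination, so one codified definition covers them all
                axes {
                    axis {
                        name 'PLATFORM'
                        values 'linux', 'windows'
                    }
                    axis {
                        name 'JDK'
                        values '11', '17'
                    }
                }
                stages {
                    stage('Build') {
                        steps {
                            echo "Building on ${PLATFORM} with JDK ${JDK}"
                        }
                    }
                }
            }
        }
    }
}
```

Because the whole matrix lives in a single definition held in source control, the combinations are parameterised and reusable, and the history of changes to them is auditable, which is Gresh's point about codified elements becoming the key components of scalability.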
The Cloudbees goal is to provide users with a software delivery platform that allows them to connect best of breed solutions across an entire, complex and diverse set of development tools. This approach works for small agile teams, but it also enables them to scale to enterprise levels with the goal of having everything connected.
One can only speculate what organisational and structural changes will contribute to the new normal for businesses, and for each company there will be different priorities. But changes there certainly will be. And one of the key differences may well be that change itself, and the requirement to change rapidly and often to keep in step with changing circumstances, is going to end up as one of the primary constants in that new normal. The days of the three-year upgrade cycle must wither and die, and the continuous integration and continuous delivery of DevOps will become another of those primary constants.