Telling the cloud story – make it a 'no-brainer'!

By Martin Banks, March 23, 2017
Summary:
Some notes on how cloud vendors are addressing the need to tell the cloud story in ways that sidestep the wonders of the technology involved, and pitch at making it a 'no-brainer' in business terms.

Many pundits and analysts are saying variations on a theme of 2017 being 'the year of hybrid cloud', which seems a bit silly as I have always felt that it has all just been 'cloud' manifesting itself in different ways.

That early insistence by the pundits, and even more so by many of the vendors, that there were distinctly different types of cloud has tended to obfuscate the subject for potential users and, in my view, has long hindered cloud's development.

But now it is really getting under way, and a new range of tools and services is starting to address the fundamental change that the cloud brings with it. This is the need to decerebralise the over-burdening of users' minds with the kind of techno-hyper-babble that created the unnecessary 'divisions' between different areas where cloud concepts can be applied.

Cloud is not about the technology; it is about the results, and the way the resources are consumed to achieve those results. So what is needed are companies that find ways to achieve that de-cerebralisation – making cloud a 'no-brainer' by packaging up resources and/or services so that the technology is put out of sight to do its job, rather than being the focal point of discussion.

One interesting observation on this process, set out in more detail below, is that new members of the CIO community are starting to come from the early digital generation. Facebook has been around some 13 years, while the Google domain was registered nearly 20 years ago. These people understand the 'tech' and accept the cloud.

The recent Cloud Expo at London's ExCeL emporium showed signs of that shift coming to fruition, and the following are a couple of random selections from interactions I had at the show, where vendors are starting to decerebralise their offerings in order to help users shorten the 'time to cash' when using cloud.

Pulsant

Managed Services Provider Pulsant has responded to the growth of customers, both new and existing, looking to move to a more packaged-up implementation of the cloud – based on Microsoft's Azure infrastructure and the Office 365 applications environment – by forming a specific brand identity to meet that need: Amp.

And the reason the company has seen it as the right option is that the needs of such customers are different from those looking for managed service provision. This is much more equivalent to the old days of providing a turnkey, packaged solution, so that customers get the fastest practical time to cash from the moment the decision is made to go for a cloud option.

According to Matt Lovell, Pulsant’s CTO, there is now an up-surge of movement towards cloud services that is coming from both new customers and from existing users of Pulsant’s managed services:

In both cases, they are looking for a complete solution. They have often done the development work themselves on pilot projects but now they are looking for an organisation that can take on putting it into production and scaling it to meet their needs. They are looking for the 'how to' of business transformation. That is why we decided to create a separate brand and invest in talent specifically skilled in the legal and security tools needed.

This is now becoming a requirement even for customers that have a long track record of using IT and working with managed services providers. The cloud represents a step further than their current experience allows for and getting help for it, rather than building the skills internally, is being seen as a better investment.

One factor here is the arrival on the mid- and senior-management scene of the next cohort of younger people coming through, says Lovell:

Yes, they are starting to appear in senior jobs such as CIO positions, and these are people who, for the first time, have no experience of ever having 'owned' an infrastructure. So working with cloud services is quite acceptable and reasonable to them.

Indeed, they are amongst the first people whose background and experience has been dominated by access to the internet and the early cloud-based social media services, multiple-user games and the like that were evolving as they came through.

HyperGrid

'Hyper' has, of course, already become one of those prefixes that makes the sceptical juices start to flow. There is now the suspicion that vendors are being tempted into the game of 'if in doubt, hyper-ise the brand name'.

That is, however, a tad unfair on both the Canadian company HyperGrid and its HyperCloud offering, which approaches the decerebralisation requirement from the viewpoint of users who are not sure what cloud resources they may need. One of the common 'holes' that established businesses fall into with cloud is managing the resource planning of their applications.

They either contract for too little resource, on the basis that application X usually runs in a few VMs on a single server, and then hit trouble when periodic peaks cannot be matched and service delivery – either internally or, more importantly, to customers – nose-dives. That is the cheap but damaging approach.

The alternative expensive, but assured, model specs resources permanently for the highest possible workload, leaving the user to pay for plenty of unused resources.
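The trade-off between the two approaches comes down to simple arithmetic. A minimal sketch, with entirely made-up figures (the VM rate, peak hours and lost-revenue numbers are mine for illustration, not from any vendor), shows how the cost of always paying for peak capacity compares against the hidden cost of lost business when baseline capacity is overwhelmed:

```python
# Illustrative only: compare two naive provisioning strategies for a
# workload that fluctuates around a baseline with periodic peaks.

def overprovision_cost(peak_vms: int, hours: int, rate: float) -> float:
    """Pay for peak capacity all the time, whether it is used or not."""
    return peak_vms * hours * rate

def underprovision_cost(base_vms: int, hours: int, rate: float,
                        peak_hours: int, lost_revenue_per_hour: float) -> float:
    """Pay only for baseline capacity, but lose revenue whenever
    demand exceeds it and service delivery degrades."""
    return base_vms * hours * rate + peak_hours * lost_revenue_per_hour

# A 720-hour month, VMs at $0.10/hour, demand peaks for 40 of those hours.
over = overprovision_cost(peak_vms=20, hours=720, rate=0.10)
under = underprovision_cost(base_vms=5, hours=720, rate=0.10,
                            peak_hours=40, lost_revenue_per_hour=50.0)
print(f"over-provisioned:  ${over:.2f}")   # 20 * 720 * 0.10
print(f"under-provisioned: ${under:.2f}")  # 5 * 720 * 0.10 + 40 * 50
```

With these particular numbers the 'cheap' approach is actually the dearer one, which is exactly the kind of calculation a managed provisioning service does continuously on the customer's behalf.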

The HyperGrid approach has been to create a cloud service that can run customers’ cloud services for them. The company has close partnerships with most of the mainstream Cloud Service Providers, and can provide the management front-end to run a customer cloud service on the most appropriate cloud service provider, as the workload or other issues such as data sovereignty compliance demand.

To smooth the movement of applications and data the company has come up with a container technology that, like others such as Odin, is geared to managing the movement between cloud services of complete business processes rather than just individual applications.

This is therefore effectively a container for containers: the latter hold individual applications (for example, invoice production) while the HyperGrid container effectively moves the sales ledger as a whole. This capability includes being able to move and maintain all the governance attributes that go with the business process and, in addition, it is possible to overlay additional policies and controls onto the business process, if that is appropriate.
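The 'container for containers' idea can be sketched as a simple data structure. To be clear, the class and field names below are my own invention for illustration, not HyperGrid's actual API: the point is that the business-process wrapper carries its constituent application containers and its governance attributes as one migratable unit.

```python
# Purely illustrative: a business-process container groups the application
# containers that make up one process, plus the governance attributes that
# must travel with it when it moves between cloud providers.

from dataclasses import dataclass, field

@dataclass
class AppContainer:
    name: str              # e.g. "invoice-production"
    image: str             # container image reference

@dataclass
class ProcessContainer:
    process: str                                  # e.g. "sales-ledger"
    apps: list[AppContainer] = field(default_factory=list)
    governance: dict[str, str] = field(default_factory=dict)

    def migrate(self, target_provider: str) -> str:
        """Moving the process moves every app container and its
        governance attributes together, as one unit."""
        moved = ", ".join(a.name for a in self.apps)
        return f"moved [{moved}] to {target_provider} with {self.governance}"

ledger = ProcessContainer(
    "sales-ledger",
    apps=[AppContainer("invoice-production", "acme/invoices:1.2")],
    governance={"data-sovereignty": "EU"},
)
print(ledger.migrate("provider-b"))
```

Overlaying extra policies, as described above, would then amount to merging additional entries into the `governance` mapping before the move.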

One potentially valuable new DevOps-related capability with HyperCloud is now coming on-stream with IoT in mind, though it will no doubt apply to a wider range of application areas as well. This is the ability to provide customers with a 'sandbox' area where IoT applications and systems can be modelled and simulated in isolation and safety. If the model simulations work, they can be pushed out to the production environment, with the ability to snapshot the system both before and after the exercise.
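The snapshot-around-a-push pattern described above is a general one, and can be sketched in a few lines. This is my own minimal illustration of the pattern, not HyperCloud's implementation; production state is stood in for by a plain dictionary:

```python
# Illustrative sketch: snapshot production state before and after
# pushing a validated model out, so either state can be restored.

import copy

def push_with_snapshots(production: dict, model: dict):
    """Deploy `model` into `production`, returning (before, after)
    deep-copied snapshots taken around the change."""
    before = copy.deepcopy(production)   # snapshot the prior state
    production.update(model)             # deploy the validated model
    after = copy.deepcopy(production)    # snapshot the resulting state
    return before, after

prod = {"sensors": 100, "firmware": "1.0"}
before, after = push_with_snapshots(prod, {"firmware": "2.0"})
print(before)  # {'sensors': 100, 'firmware': '1.0'}
print(after)   # {'sensors': 100, 'firmware': '2.0'}
```

If the push turns out to be a mistake, rolling back is just a matter of restoring the `before` snapshot.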

My take

The examples of cloud de-cerebralisation are starting to come through now, and the key commonality is the way the technology underpinning all the developments is subsumed by raising the level of abstraction used to describe what can be achieved. That level of abstraction will, I suspect, continue to rise as more young people born and bred into a cloud-using world come through.