CLOE - a time and money saver for users and service providers?

By Martin Banks, March 19, 2018
Summary:
A rather mundane application at first glance, but CLOE could save users and CSPs a lot of time and money.

One of those rather mundane, but very tangible, trends that usually emerges after the first over-hyped appearance of any new technology has shown its hand again with the introduction of CLOE. This is the Cloud Learning Optimisation Engine from Densify, a company some may know better by its earlier name, Cirba.

The trend that CLOE demonstrates is that, while a new technology may be hyped-up as the answer to everything, it often finds its best roles in tightly-focused tasks, basically doing one job.

CLOE is designed to solve a growing problem for many businesses as they move more workloads to the cloud. Many of them are finding it close to impossible to accurately manage the costs of those cloud services. Not least, this is because accurately gauging the ongoing costs of using cloud services can be far more complex, and usually more costly, than the headline service charges suggest. In practice, every service is a mass of different options, both in terms of the services provided and the prices associated with them.

And as Andrew Hillier, co-founder and CTO of Densify, observes, most users end up with fairly crude 'rules of thumb' about the applications and services they run, as they always have:

They used to measure resources needed in MIPS, then servers. With cloud services, it is now racks in the service provider's facility. And now they often use a bill reader or a pre-order resources system. But these can now be counter-productive.

The upshot of this is inevitable: most users end up making poor decisions about which services to use, both within a single service provider and between different service providers. Those poor decisions are generally also expensive. In essence, most users have already reached the point where they are selecting the wrong services for workloads, using the wrong resources, and paying over the odds for the privilege.

According to Hillier, the situation has already surpassed the point where any human – or huge group of humans – can really make any sense of the problem. As he points out, Cloud Service Providers, such as Amazon, can now offer users a choice of some 1.7 million different service options, each with different resource allocations, service levels, performance capabilities and, of course, prices.

How long?!?

This is worth thinking about: suppose a person takes five minutes to consider the match between one workload and each of those service options, and suppose as well that they work at it 24 hours a day, 7 days a week, 52 weeks a year.

If the perfect match was the last one examined, it would have taken over 16 years to arrive at the decision. Hillier also notes that most users now have a choice of around 90 different major service providers from which to choose. Suddenly, a company could be looking at nearly 1,500 years to get to an answer.
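A quick back-of-the-envelope check (a sketch using only the figures quoted above: 1.7 million options, five minutes each, 90 providers) bears those numbers out:

```python
# Rough check of the evaluation-time arithmetic quoted above.
options = 1_700_000          # service options per provider (Hillier's figure)
minutes_per_option = 5       # time to assess one workload/option match
providers = 90               # major cloud service providers

minutes_per_year = 60 * 24 * 365   # working 24/7, all year round

one_provider_years = options * minutes_per_option / minutes_per_year
all_providers_years = one_provider_years * providers

print(round(one_provider_years, 1))   # ~16.2 years for a single provider
print(round(all_providers_years))     # ~1455 years across 90 providers
```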

This is, of course, exactly the type of job that Big Data analytics is tailor-made to handle, and CLOE sets out to do just that over a limited set of questions: which service is best suited to a workload, what is the best price for the workload, and which service provider is best for that job?

As it is a cloud-delivered service, Hillier claims that users can be signed up and saving money in a matter of hours. What is more, he has started to see a jump in interest from the service providers themselves in re-selling CLOE as a service to their own customers:

Yes, we do have partnerships with service providers, and it is proving to be to their advantage even though it looks like they are telling customers how to reduce costs. But they are reducing costs by helping customers reduce the resources they consume. And that means they can take on more clients. And they end up looking good.

CLOE analyses cloud usage patterns, enabling Densify to proactively make individual applications self-aware of their resource needs, matching those needs to available cloud resources. Unlike traditional cloud tools that focus on the bill but don't fix the underlying issue, CLOE goes to the heart of the problem at the application and infrastructure layer.

It also couples analytics and machine learning techniques to build a detailed knowledge of available services, and then compares them with the requirements of the applications and workloads. The system can then work with both the self-aware data of the applications and whatever policy requirements are set by the user. This is particularly useful where there are specific compliance or regulatory issues that need to be considered.
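As a purely illustrative sketch (the instance names, prices, and policy fields below are invented for the example; this is not Densify's actual data model or algorithm), the core matching step amounts to filtering a catalogue of service options against an application's measured needs and the user's policy constraints, then picking the cheapest survivor:

```python
# Hypothetical illustration of matching a workload to a service option.
# All catalogue entries and policy fields here are invented.
from dataclasses import dataclass

@dataclass
class ServiceOption:
    name: str
    vcpus: int
    memory_gb: int
    region: str
    hourly_usd: float

CATALOGUE = [
    ServiceOption("small-eu", 2, 4, "eu-west", 0.10),
    ServiceOption("medium-eu", 4, 16, "eu-west", 0.38),
    ServiceOption("medium-us", 4, 16, "us-east", 0.34),
    ServiceOption("large-eu", 8, 32, "eu-west", 0.77),
]

def best_option(need_vcpus, need_mem_gb, allowed_regions):
    """Cheapest option meeting the resource needs and region policy."""
    candidates = [
        o for o in CATALOGUE
        if o.vcpus >= need_vcpus
        and o.memory_gb >= need_mem_gb
        and o.region in allowed_regions
    ]
    return min(candidates, key=lambda o: o.hourly_usd, default=None)

# A workload needing 4 vCPUs and 12 GB, restricted to EU regions
# (say, for data-residency compliance), lands on medium-eu even
# though the US equivalent is cheaper.
choice = best_option(4, 12, {"eu-west"})
print(choice.name)  # medium-eu
```

The real difficulty, of course, is not this selection step but building accurate catalogue and workload data at the scale of millions of options, which is where the analytics and machine learning come in.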

This capability could make it particularly useful when users are working to port legacy applications to the cloud. Traditionally, developers have not had to consider what resources such workloads actually required, so long as they didn’t exceed the upper limits of the hardware available. As they move to operational environments where they can be shipped in and shipped out alongside other workloads, being profligate with resource specifications or claims can prove to be expensive and very wasteful.

So knowing exactly what resources the application requires, in terms of amount, type, and variations from the norm, makes the service providers' jobs much easier, and the customers' operational costs significantly lower. According to Hillier, CLOE's beta customers are now averaging cost savings around the 40% mark on public cloud services.

To help manage the implementation of the applications on a daily basis, Densify has a partnership with ServiceNow:

CLOE usually sends changes through a round trip to a service management environment such as ServiceNow, where they go upstream to the manifest when changes are needed. The Densify app definition becomes the resource definition in the manifest.

There are still a number of options being considered for where manifests will be held in the future. Hillier was not in a position to confirm anything, but one obvious contender would be Kubernetes as the ultimate location. In this way, every application could become virtualised and self-aware. He also didn't demur from the suggestion that using Kubernetes could be the basis of a way to containerise any application in the future.

My take

Here is, at first sight, a rather mundane application, but it is in practice using analytics and machine learning to create a useful win-win for both users and service providers. The former can hope for faster, more reliable and, above all, cheaper services, while the latter can boost their resource utilisation, getting more and longer returns on their investments by reducing the amount of unwitting over-resourcing by those users.