Virtualization efficiency – could do better
Summary:
Virtualization was supposed to solve the slothful inefficiencies of the old one-server-per-app datacentre model, but evidence from Xangati suggests its success has so far, in practice, been marginal. Is now the time to get people out of that infrastructure management loop and let autonomics take over?
What is stranger is that the reason is not always down to the technology itself. Sometimes, it has to be said, it is down to the people using it.
Take virtualization as a good example. Here was a technology aimed at resolving one of the great productivity failings of the datacentre. Typical server systems were, on its arrival in the marketplace, running at single-figure utilization rates, largely because each was running a single application.
True, there would be times when utilization was high, when that application was running at peak demand, but most of the time servers chuntered on at an average utilization rate of under 10%. This meant that most of the time they were actually doing damn all.
And post-virtualization, when most servers are running virtual machine loads numbering in the high tens of VMs, that average utilization rate has risen to..?
Less than dramatic
Well, according to Atchison Frazer, marketing vice president of Xangati, that has risen a good bit less than dramatically - in fact to an average of around 25%. In other words, while the physical resources of server farms and the rest have scaled both up and out significantly over the intervening period, the efficiency of utilization of that capability has only scaled a bit.
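The gap between those two figures is easy to put in perspective with some back-of-envelope consolidation arithmetic. A minimal sketch, using illustrative numbers rather than anything from Xangati:

```python
# Back-of-envelope consolidation arithmetic (hypothetical figures).
def hosts_needed(servers: int, avg_util: float, target_util: float) -> int:
    """Hosts required to absorb `servers` machines running at `avg_util`,
    if each consolidated host is driven to `target_util`."""
    total_demand = servers * avg_util        # aggregate work, in "whole server" units
    return int(-(-total_demand // target_util))  # ceiling division

# 100 one-app servers idling at 10% utilization...
print(hosts_needed(100, 0.10, 0.25))  # ...fit on 40 hosts run at today's ~25%
print(hosts_needed(100, 0.10, 0.60))  # ...or just 17 hosts if driven to 60%
```

The point the numbers make: stopping at 25% average utilization still leaves well over half the remaining fleet's capacity on the table.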
This raises two obvious questions: why has that happened, and what might resource capabilities and availability look like if the issue could be made to go away?
The answer to the first question is surprisingly simple, for it is, as Frazer observes, a people issue:
The problem now is the politics of people and what they think they want. This is creating an impossible task for IT managers: they need to know everything about all the applications being used and VMs being run. It is reaching the point where it is difficult having any humans involved in the process, as there is so much scripting and customisation in the chain now.
This is leading to what Quocirca analyst Clive Longbottom has called 'Virtual Machine Sprawl', where the ease with which VMs can be spun up leads to problems ranging from managing capacity to the more legally tantalising issue of managing licenses.
Xangati is addressing this issue with a targeted data analytics solution that aims to provide a holistic management view across an entire infrastructure. This sets out to create a management environment capable of identifying service delivery shortfalls and SLA misses even where every individual application or utility making up that service reports itself as working well within tolerances. Frazer says:
The issue is the way management is done in modern virtualised environments. Each application and utility is a disparate entity with its own management and monitoring environment. Yet while each one can be performing well in its own right, the interactions between them aren't working, and that can cause major performance problems. The need now is to be able to look across all the application silos and those interactions, and do it in real time.
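The scenario Frazer describes can be boiled down to a few lines: every silo reports healthy against its own threshold, yet the end-to-end service still misses its SLA. A minimal sketch, with hypothetical component names and numbers:

```python
# Each silo's latency and its own tolerance (all figures hypothetical).
component_latency_ms = {"web": 80, "app": 150, "db": 90, "storage": 70}
component_sla_ms     = {"web": 100, "app": 200, "db": 120, "storage": 100}
service_sla_ms = 300  # the end-to-end SLA the user actually experiences

# Per-silo view: every component is within its own tolerance.
healthy = all(component_latency_ms[c] <= component_sla_ms[c]
              for c in component_latency_ms)

# Cross-silo view: the latencies stack up along the request path.
end_to_end = sum(component_latency_ms.values())

print(healthy)                      # True - every silo reports itself fine
print(end_to_end > service_sla_ms)  # True - yet the 300ms service SLA is missed
```

Only a view that spans the silos, as Xangati proposes, can see that second result.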
This, then, is a mix of monitoring and reporting tools coupled with specifically developed analytics that not only provides real-time, cross-infrastructure status information to its own dashboard, but also employs advanced heuristics to predict probable performance issues and failure modes in advance.
Such capabilities give Xangati the potential to move to the next step - the delivery of autonomic, self-healing environments that can automatically detect and repair performance problems without manual intervention. This could speed up performance management to the point where it happens in real time.
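The autonomic loop alluded to here is the classic monitor-analyse-plan-execute cycle, closed without a human in it. A minimal sketch, in which the metrics, thresholds, and remediation actions are all hypothetical:

```python
# A toy autonomic control step: monitor -> analyse -> plan -> execute.
def autonomic_step(readings, threshold, remediations, log):
    for host, util in readings.items():                      # monitor
        if util > threshold:                                 # analyse
            action = remediations.get(host, "rebalance cluster")  # plan
            log.append(f"{host}: {action}")                  # execute (stubbed)

log = []
autonomic_step({"host-a": 0.95, "host-b": 0.40}, 0.85,
               {"host-a": "migrate heaviest VM"}, log)
print(log)  # only the overloaded host is remediated
```

The hard part in practice is not the loop but trusting the "execute" step, which is exactly the people problem Frazer keeps returning to.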
According to Frazer, the company can already provide this capability, and has APIs available that could supply the necessary interfaces to automated management tools. He was reluctant to be drawn on the reasons why this might not yet be a hot favourite with IT Directors and operations management teams, but it seems clear one of the primary reasons is the politics of which he spoke.
He indicates that IT departments can take several days writing scripts to fix performance issues that Xangati can resolve in minutes. The unspoken implication of this is that many of them may even like it that way, even though it is a contributor to the overall lack of efficiency that still exists in virtualized infrastructures.
Xangati has also made that efficiency part of the information it delivers to users as a matter of course. Frazer explains:
We have recently added a fourth column to the information provided to users on our dashboard. This informs them of the efficiency with which their infrastructure is operating, in real time. The goal is to help users make better IT investments.
This should certainly help them stop making irrelevant investments by helping them identify where extra workloads can be placed to get better efficiency in the use of resources, as well as stopping the VM sprawl that Longbottom has identified.
It offers, Frazer claims, the complete management tool for multiple, complex SaaS environments. He also says it is ideal for managing the coming hyper-converged, software-defined-everything environments. The company is already in early conversations with hyper-converger Nutanix about such possibilities.
Though Xangati’s primary market so far has been the larger enterprises, where the more complex and less easily managed infrastructures are to be found, it has started a search for a partner community among both Cloud Service Providers (CSPs) and, in particular, Value-Added Resellers (VARs). This is not least because Citrix is both a partner in and an investor in Xangati, and is now also a partner of Nutanix. The company also, of course, partners closely with VMware. Frazer concludes:
The VAR community is a real target for us, as those offering VM, VDI, and Citrix are all warm targets for us. They all already have the knowledge needed to write the scripts their specific applications and USPs require to exploit what we offer.
My take
Virtualization is, of course, brilliant stuff, but in practice it still has a high propensity to create huge management and resource consumption problems. Anything that can help manage that situation has to have some benefits, and if it can do so in real time, predict problems before they happen, and offer the potential for automated, autonomic management, then it removes a major operational headache for IT departments. Er, doesn’t it?