HPE's last hurrah or a new beginning?

Martin Banks, November 22, 2016
Summary:
New technologies and new architectures from HPE? A system designed for in-memory processing, using newly architected non-volatile memory chips, is certainly worth more than a second glance. And if it achieves more than being a better home for SAP HANA, the old stager may yet gain a whole new lease of life.

It would be easy to park the revamped HPE in that corner marked 'old, legacy, no longer exciting, probably around for a while yet'. Not surprisingly, someone like David Chalmers, HPE's Chief Technologist in EMEA, would argue otherwise.

And he did when we met. The time we had together left some unanswered questions, of course, but it also pointed to an upcoming development that might be a spectacular Last Hurrah for the company, or might equally be the starting point of a whole raft of hardware and applications innovation from more than just HPE itself.

Chalmers readily admits that the old HP had become an IT conglomerate that had lost its way – a collection of pieces that had relatively little in common. There has been what he referred to, perhaps with a good degree of understatement, as a 'tidying up process'.

Then came the decision to separate the company into two pieces, PCs and printers on one side and the enterprise business on the other, followed by the move to hive off the services business.

The services business - HP bought EDS back in 2008 and turned it into HP Enterprise Services a year later - proved to be restricting the systems side when it came to dealing with all the other services companies. So it was spun off. Chalmers says:

If we are wholly subsidiary to that one services company then we are only going as fast as that company can go. And they are a relatively niche company focused on about 350 large customers worldwide. We decided we would be in the infrastructure innovation piece and partnering – in the real sense of the word – with a range of SIs that don't have the technology infrastructure or ability to drive the technology that we do.

The three thirds of infrastructure

The enterprise part is very much Chalmers' bailiwick. His focus now is on enterprise infrastructure as it evolves into three threads.

The first is hybrid IT, which he sees as now being a much wider, richer concept than just offering hybrid cloud capabilities. That was supposed to be the deployment model but became an unpopular theme as most cloud service providers just wanted to offer pure-play cloud. He now sees hybrid becoming much more expansive, changing to include a growing amount of both compute resource and analytics processing capabilities out at the edge.

Secondly, he sees all infrastructure being software-defined, arguing this is a differentiation point and an area where HPE can drive innovation. The only way to go fast enough is to build infrastructure that is software-controlled. It is a dramatically more flexible and interoperable model.

The third is that edge technology will become fundamentally important, and will be where most of the explosive growth in infrastructure comes from.

While he references IDC figures suggesting that data center growth will continue, albeit only in single digits of around 4-6%, Chalmers sees edge devices, applications and services moving to provide growth of around 40-50% in the not-too-distant future.
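To get a feel for what that gap means, here is a purely illustrative compound-growth calculation. The starting sizes are arbitrary assumptions and the rates are simply the midpoints of the ranges quoted above; the point is how quickly the two curves diverge.

```python
# Illustrative arithmetic only, using the growth rates quoted above:
# roughly 5% a year for the data center and 45% a year at the edge.
core, edge = 100.0, 10.0            # hypothetical relative spend today
core_rate, edge_rate = 0.05, 0.45   # midpoints of the quoted ranges

for year in range(1, 6):
    core *= 1 + core_rate
    edge *= 1 + edge_rate
    print(f"year {year}: core {core:6.1f}   edge {edge:6.1f}")

# Even starting at a tenth of the size, edge spend reaches roughly half
# the core figure within five years at these rates.
```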

At this point it is clear that HPE has a different viewpoint from what might be called 'collective wisdom' when it comes to the world of software-defined infrastructure. It eschews, for example, the hyper-convergence approach being taken by the likes of Nutanix and others, with simple, standardised appliances as the basic building block. Chalmers explains:

Convergence has been about bringing together compute, storage, networking, management. They were typically designed separately and brought together. It was flexible, but the downside was it was 'expert friendly', it needed a level of sophistication to make the best of it. Hyper-converged gives you simplicity, and wraps up the complexity into a well-behaved box. There is goodness in simplicity, but it is rigid. If you want to grow it, you buy another one, and they are all the same. So, if you only want to buy compute you have to buy the memory and storage that goes with it.

What we see is the composable infrastructure beyond that. It is different because it combines the best of history, from mainframe, through mini, client server, converged and hyper-converged, but now designed to work together as a coherent whole, with the lessons of simplicity built in but with dramatically more flexibility. If you need more compute, you just buy compute. If you need an I/O heavy environment then you can have it. It is far more flexible - and with the hyper-converged user experience to go with it. That is where we very much see on-premise technology being.
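The core of that claim is easier to see in miniature. The sketch below is not HPE's actual composable tooling; every name and function in it is invented purely for illustration. It simply shows the idea of claiming compute, storage and fabric from independent pools so that each can scale on its own, rather than arriving in fixed appliance-sized ratios.

```python
# A toy sketch (not HPE's actual API) of the composable idea: compute,
# storage and fabric capacity sit in independent pools, and a workload is
# "composed" from only the resources it needs. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Pool:
    name: str
    capacity: int          # abstract units (cores, TB, fabric ports ...)
    allocated: int = 0

    def claim(self, amount: int) -> int:
        if self.allocated + amount > self.capacity:
            raise RuntimeError(f"{self.name} pool exhausted")
        self.allocated += amount
        return amount

@dataclass
class ComposedSystem:
    profile: str
    resources: dict = field(default_factory=dict)

def compose(profile: str, pools: dict, **requested) -> ComposedSystem:
    """Claim only what the workload profile asks for from each pool."""
    system = ComposedSystem(profile)
    for resource, amount in requested.items():
        system.resources[resource] = pools[resource].claim(amount)
    return system

pools = {
    "compute": Pool("compute", capacity=512),   # cores
    "storage": Pool("storage", capacity=200),   # TB
    "fabric":  Pool("fabric",  capacity=64),    # ports
}

# An I/O-heavy profile takes lots of storage and fabric but little compute;
# a compute-heavy one does the opposite. Neither drags the other along.
analytics = compose("io-heavy", pools, compute=16, storage=80, fabric=24)
batch = compose("compute-heavy", pools, compute=128, storage=10, fabric=4)
print(analytics, batch, sep="\n")
```

In the hyper-converged model, by contrast, each extra box adds a fixed ratio of all three resources, whether the workload needs them or not.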

This approach does raise some interesting questions, not least that it is itself very much 'expert friendly' and potentially playing to the natural job-preservation instincts of the IT department. It will surely require a degree of skill and understanding of the workloads being run to determine whether more compute, memory or I/O (or a combination thereof) is required. An appliance, though technically overkill for any particular task, is likely to be cheap and easy enough for a Line of Business manager to specify.

The HPE approach, though more specifically matching the requirements of a task, will demand an investment in either specialist staff or reasonably smart AI management software.

This is the model HPE sees for the next three to five years at least. Chalmers sees hyper-converged systems doing well in remote offices where there is less requirement for localised change. But he also sees the rigidity of hyper-converged systems getting in the way in data centers requiring several thousand servers.

There is an argument here, of course, that hyper-converged is quite applicable in such environments, if only because the typical enterprise data center will have a majority of applications that are themselves pretty rigid. There are those applications that have been in place for years and will be in place for years to come – because they do the job and the job does not change. One estimate I have seen quoted puts the split between established, 'rigid' applications and those demanding flexibility and agility at 75% to 25%.

There are many stories of old DEC VAXes and even PDP-11 minis still working in banking and financial services because the applications they run still do the job – but now no one knows how they work. HPE should know this well: it bought Compaq, which had previously bought Digital Equipment, maker of those DEC systems, and it still refurbishes them, for a good income, to keep them running.

Virtualizing those applications so they run on hyper-converged systems, such as SAP running on Nutanix boxes, is arguably all that is needed. The real composable infrastructures are only likely to be needed by those enterprises that require high levels of agility on a near constant basis.

The ultimate question here is how much agility and flexibility the typical enterprise will require, and where it will need them. For example, Chalmers' comments about the edge being a place for growth and innovation are, I suspect, more accurate than even he might imagine.

Here comes 'The Machine'

Having stripped the old conglomerate down to the core, HPE is now starting to build itself up again, for example with the recent acquisition of SGI, which completed at the beginning of this month.

This acquisition is a core part of that Last Hurrah/New Beginning for HPE. Its starting point is in providing a comprehensive, high performance platform for SAP’s HANA environment. Longer term however, it could be the platform on which a whole new range of applications innovations appear.

According to Chalmers some 50% of the overall HANA business runs on HPE hardware, and the aim is to make that share bigger: 

HANA is an interesting market. SAP sells a cloud implementation, but most of its sales are on-premise, and there most of the implementations run on HP. Now apply that same model to other areas, such as Microsoft Azure.

The switch to HANA has been a significant change in technology and concept for SAP, which is why, Chalmers suggests, the take-up and delivery have been progressive rather than a mad rush. SAP was at least a year ahead of most of the big systems centres when it came to the provision of the necessary resources.

He said that HPE's Superdome at the high end and SGI in the mid-range were the only options available to begin with. But now he sees more users starting to demand much more of their IT, and finding that traditional architectures are no longer able to provide the solutions they seek.

HPE's answer is 'The Machine', which is due to appear next year. This is a system built around an in-memory processing architecture and expected to be equipped with petabytes of non-volatile memory. The memory technology is HPE's own development, with the chips designed by the company. Long term, according to Chalmers, they will be licensed out to other chip vendors.

At that point, if the architecture of The Machine proves to be successful, variations on the same in-memory processing theme from other vendors can be expected to appear.

This is the architecture that HANA is crying out for: how do I get 300 Terabytes of memory rather than just thirty? How do I not have any storage because everything is in memory? This is a non-CPU dependent architecture that will be able to offer binary compatibility. So there will be different CPUs in there – expect to see ARM in there – to cover different workload requirements. And it will also run types of application that simply cannot be run today, such as modelling a smart city in real time, or managing the reality of millions of driverless vehicles at the same time. This is why we announced Distributed Analytics earlier this year, it is a key part of it.
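It is worth pausing on what "no storage because everything is in memory" implies for software. The Machine's actual programming model is not detailed here, so the following is only a rough analogy using ordinary memory-mapped files: data is manipulated in place through a byte-addressable region rather than being shuttled between storage and RAM, which is the behaviour that byte-addressable non-volatile memory generalises. The file name and record layout are invented for the example.

```python
# A minimal sketch (not HPE's programming model) of working with data in
# place through a memory-mapped, byte-addressable region, instead of
# reading it from storage into separate buffers. Names are illustrative.
import mmap
import os
import struct

PATH = "records.dat"           # hypothetical file standing in for NVM
RECORD = struct.Struct("<qd")  # 64-bit id + 64-bit float per record
NUM_RECORDS = 1_000

# Create and size the backing region. On real persistent memory this data
# would survive power loss with no separate load/save step.
with open(PATH, "wb") as f:
    f.truncate(RECORD.size * NUM_RECORDS)

with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mem:
        # Write records directly into the mapped region, in place.
        for i in range(NUM_RECORDS):
            RECORD.pack_into(mem, i * RECORD.size, i, i * 0.5)

        # Read one record back without any explicit storage I/O call.
        rec_id, value = RECORD.unpack_from(mem, 42 * RECORD.size)
        print(rec_id, value)

os.remove(PATH)
```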

My take 

'The Machine' could mark a significant turning point for HPE, or the ultimate in lost causes (and it could be argued the company has a bit of a track record there, viz. its commitment to Intel's Itanium processor). But the in-memory architecture does hold out the potential of being the next big step that IT takes.

And if HPE's developments in non-volatile memory devices prove to deliver the lead Chalmers feels sure they will, and if they are licensed out in a timely fashion together with a reference system architecture, a wide range of machines could soon be available at price points that make fast, high-throughput systems the next obvious choice. The company's interest in edge services also suggests the coming of distributed, virtualised, logical in-memory systems comprising multiple edge devices and core system 'blocks'.

And if he is right that they can then run application types that are difficult to run now, and even more applications that cannot even be created yet, this could be the foundation for a whole range of new software innovations.
