Fancy dismembering your data center and throwing it to the four winds?

By Martin Banks, September 2, 2018
Summary:
Ever-deeper 'layer-cake' memory devices and data growth are combining to make the notion of virtualised, distributed, 'dismembered' data centers a probable goal for large enterprises.

One of the key technologies behind the developing potential for increasingly distributed, virtual data centers, and the related move towards enterprises outsourcing at least part of their data center requirements, is in the process of taking a major leap forward. Unusually, this is at least in part the result of a major step change in the level of over-supply of that technology's primary product: 3D NAND static memory chips.

Like an answer to every CIO's primary prayer - 'please send me an answer to how I am going to store all this damn data' - these memory chips look like they could provide a significant part of the answer. In doing so, they could facilitate, and quite possibly open up in a major way, two new areas of development: virtualised data centers spreading right out to the edge of enterprise networks, and the empowerment of the edge in its own right.

Both of these will not only be prodigious consumers of data, but will also be major creators of new data, adding significantly to its exponential rate of growth. The real problems here will not just be the capturing and storage of the data generated, but the ability to access, analyse and process it at speeds that generate real value for businesses. This is where the developments occurring in the world of 3D NAND devices are really starting to come into play.

The developments in 3D NAND static memory devices come in two areas: one, the technology itself, and two, the business around it. Taking the latter first, a growing number of semiconductor manufacturers are entering a marketplace that is expanding rapidly as solid-state static memory devices take over from traditional disk storage. This effect is particularly important in any application where data availability and read performance are vital.

Their arrival is set to push down chip unit prices, and therefore the cost per gigabyte. This is, however, unlikely to end up pushing storage system prices down, but rather to give CIOs more storage bang for the same amount of bucks. In addition, demand for more storage in the marketplace is such that any oversupply of storage chips will quickly be absorbed.

Indeed, any oversupply would not be happening at all if recent developments in technology had not come along. The typical semiconductor device, of any type, is planar - a single substrate of silicon with impurities diffused into it to make the required circuit elements. But with static (Flash) memory a new production technology has emerged: stacking multiple layers of diffused substrate on top of each other to make, in effect, a layer-cake of storage.

Size isn’t everything, but it helps

The 3D moniker means exactly that: these are devices constructed in three dimensions. And this is not just four or five layers of storage, either. The devices now appearing have 64 layers and, when put together in a 2.5-inch format solid-state drive, produce a unit capable of storing over 15 Terabytes of data.

But 64 layers is already set to be surpassed almost before it has become established. Next year we can expect to see 96-layer devices which, in that 2.5-inch drive format, are expected to provide some 64 terabytes of storage. It won't stop there, of course. The manufacturers are already well advanced on plans for 128-layer devices. These could be around in a couple of years and, to give an idea of what to expect, the current prediction is that they will make possible the 100-terabyte small form factor PC.
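As a rough sanity check, here is a back-of-the-envelope sketch - purely illustrative, and based on my own assumption that capacity scales linearly with layer count from the 64-layer, 15 Terabyte drive quoted above. The far larger quoted figures clearly also rely on other density gains.

```python
# Back-of-the-envelope sketch only: assumes capacity scales linearly with
# layer count from the 64-layer, 15 TB drive quoted in the article. Real
# products also gain density from more bits per cell and more dies per package.
BASE_LAYERS = 64
BASE_CAPACITY_TB = 15

def capacity_from_layers(layers: int) -> float:
    """Naive linear scaling of drive capacity with layer count."""
    return BASE_CAPACITY_TB * layers / BASE_LAYERS

for layers in (96, 128):
    print(f"{layers} layers -> roughly {capacity_from_layers(layers):.1f} TB from layer count alone")
# 96 -> 22.5 TB, 128 -> 30.0 TB; the much larger quoted figures (64 TB and
# 100 TB class devices) therefore depend on denser cells and bigger packages
# as well, not extra layers alone.
```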

For enterprise users, however, there may be more interest in a different storage packaging technique than the solid-state drive (SSD). This is increasingly referred to as 'the ruler' - a block some six-to-eight inches long, maybe an inch or so wide and half an inch thick. These already hold at least twice as much data as SSDs, so with current NAND technologies 70-terabyte units are possible, with much more coming in the future.

They also work well with standard racks, and up to 32 can be fitted into a single 1U server unit. That makes for a storage server of over 2 Petabytes, and a fully loaded rack could hold as many as 40 such servers - getting on for 90 Petabytes per rack. So a two-rack environment - rather modest by traditional standards - could boast close to 180 Petabytes of storage. And this is just the beginning.
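The arithmetic behind those numbers is simple enough; this quick sketch just takes the quoted figures at face value.

```python
# Illustrative arithmetic only, taking the quoted figures at face value:
# 70 TB per ruler, 32 rulers per 1U server, 40 such servers per rack.
RULER_TB = 70
RULERS_PER_1U = 32
SERVERS_PER_RACK = 40

server_tb = RULER_TB * RULERS_PER_1U           # 2,240 TB, i.e. ~2.2 PB per 1U server
rack_pb = server_tb * SERVERS_PER_RACK / 1000  # ~89.6 PB per fully loaded rack
two_rack_pb = 2 * rack_pb                      # ~179 PB for the two-rack example

print(f"server: {server_tb / 1000:.2f} PB, rack: {rack_pb:.1f} PB, two racks: {two_rack_pb:.0f} PB")
```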

In addition to this development, work is also progressing on the design of the individual storage cells in such devices. In the early days of Flash memory each cell stored one bit, but this has since been raised to three bits per cell. Coming down the line, however, is a four-bit-per-cell technology, which will add roughly a third to the storage capacity for the same physical size.
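That gain is just the ratio of bits per cell - a simple sum that ignores error-correction and over-provisioning overheads.

```python
# Quick check of the bits-per-cell gain; a simple ratio that ignores
# error-correction and over-provisioning overheads.
tlc_bits, qlc_bits = 3, 4
gain = (qlc_bits - tlc_bits) / tlc_bits
print(f"Moving from {tlc_bits} to {qlc_bits} bits per cell adds ~{gain:.0%} capacity")
# -> adds ~33% capacity for roughly the same die area
```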

Learning to swim in the sea of data

The common trope these days is the exponential growth of data, and these developments will certainly play a part in helping businesses to manage that growth and, more importantly, exploit the data to generate some real value. After all, it is not beyond reason to suggest that, by the end of next year, there will be announcements claiming that a single rack of storage – a physical unit well-suited to a network-edge set of applications – could contain 38 1U storage servers plus two servers for compute and networking, and offer 240 petabytes of storage, just for a local operations and management role.
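Working backwards from that speculative configuration gives a sense of the ruler capacities it assumes; the per-ruler figure below is inferred, not something any vendor has quoted.

```python
# Working backwards from the speculative rack described above: 38 of the 40
# slots used for 1U storage servers, 2 for compute and networking, 240 PB in
# total. These are the article's illustrative figures, not an announced
# product; the per-ruler capacity is inferred, not quoted.
TOTAL_PB = 240
STORAGE_SERVERS = 38
RULERS_PER_1U = 32

per_server_pb = TOTAL_PB / STORAGE_SERVERS           # ~6.3 PB per storage server
per_ruler_tb = per_server_pb * 1000 / RULERS_PER_1U  # ~197 TB per ruler

print(f"implies ~{per_server_pb:.1f} PB per server and ~{per_ruler_tb:.0f} TB per ruler")
# i.e. getting on for a tripling of today's ~70 TB rulers
```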

That would, of course, also then demonstrate the potential weakness in much of this – namely that the ability to store, access and exploit data has the inevitable by-product of creating even more data, even faster.

Such storage developments bring fast, local access to data right out to the edge, for the types of data-intensive applications that are currently the preserve of the traditional data center. Indeed, as Intel's Alex Quach pointed out in his recent discussion with me on the relationship between the coming of 5G mobile comms and the development of distributed, virtualised data centers, the biggest problem enterprises now have is their inability to shift vast volumes of data back to a single, physical data center at speeds which make managing operational and business processes remotely practical.

Far better to dismember, virtualise and re-distribute the data center to where its various components can do their work quickest – at the scene of the crime. This is where the speed of 5G comms, together with its potential flexibility in reaching remote locations at slower, but still useful, data rates, could come in handy. One example of that second scenario – often held up as one of 5G's weak points – is its ability to re-use old analogue TV frequencies that are still extant but no longer used. Many places where BT would not venture to lay a landline might well be reached by an old TV signal.

The move to dismember and virtualise data centers also plays to the cloud-based outsourcing model set out by Nutanix CEO Pandey and its VP of engineering, Potti. In particular, it plays to their suggestion that the major third-party cloud services providers – Amazon AWS, Microsoft Azure and Google – will become the essential service providers that fulfil much of that dismembered, virtualised environment for enterprises.

For many enterprises this might even go so far as those third parties providing the on-site bare metal end of the edge. Others may want to have their own systems that far out, especially if the workload at the edge requires a degree of 'secret sauce'. But remembering that a small form factor PC will soon be offering 100 terabytes of storage, the 'little shack hosted at the edge of the network' that Quach spoke about is readily foreseeable.

And while the likes of market research firm Gartner see the competition between the leading players as the traditional fight for top-slot bragging rights, I suspect those players will soon realise that they need each other more than they need to beat each other. This will be driven by two factors: one, many enterprises operate in global marketplaces; two, that in turn drives the dismembered data center model, where distributed management of operations and associated data in a conjoined manner – both logically and physically – will be essential.

Users may need AwAzGoo-baba

This will mean having the right elements in the right places at the right times, and no single cloud service provider will be able to do that across the globe. So enterprises will want – will need – to use whichever vendors provide the best service at every individual logical/physical location in their network. And those vendors will need to collaborate well to make it work, for they will all have a vested interest in the others surviving so the 'house of cards' does not collapse.

It is perfectly possible to foresee enterprises having 'data center' resources spread across – for example – AWS and Google in the USA, Azure in Europe and the Near East, and Alibaba (seen as a potential major player over the next few years) leading in Asia-PAC, with specialist niche players filling holes all over the place.
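To make that picture a little more concrete, here is a purely hypothetical sketch of what such a multi-provider placement map might look like; every region, role and provider choice in it is illustrative only, not a recommendation or anything any enterprise has announced.

```python
# A purely hypothetical placement map for the scenario sketched above;
# every region, provider choice and role here is illustrative only.
placement = {
    "usa":          {"providers": ["AWS", "Google Cloud"], "role": "core compute and analytics"},
    "europe":       {"providers": ["Azure"],               "role": "regional data center services"},
    "near-east":    {"providers": ["Azure"],               "role": "regional data center services"},
    "asia-pacific": {"providers": ["Alibaba Cloud"],       "role": "regional data center services"},
    "remote-edge":  {"providers": ["specialist niche"],    "role": "local storage and edge processing"},
}

def providers_for(location: str) -> list:
    """Return the providers assumed to serve a given logical/physical location."""
    return placement.get(location, {}).get("providers", [])

print(providers_for("asia-pacific"))  # ['Alibaba Cloud']
```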

My take

Technologies such as 3D NAND storage devices and, of course, the cloud are drastically changing the functional capabilities available to enterprises around the world. They will have the inevitable effect of forcing change on the way enterprises manage their businesses, and on the tools and infrastructures they use to conduct that management. The above has been one possibility – a likely one for me, otherwise I would not have written it. There will be other, equally valid options along in a minute, and enterprise CIOs need to be ready to make their choices.