Intel Developer Forum - what comes next for the data-center is scaling racks and fiber everywhere
- If Intel has its way, the future of the data center will be measured in racks as the minimum unit of computing, which will be good for some, and boosted by silicon photonics bringing fiber bandwidth right into the data center, which should be good for all.
Diane Bryant, Executive VP and General Manager of Intel's Data Center Group, launched into her second-day keynote presentation at the Intel Developer Forum with what, to some, would seem a rather self-evident observation: the future is hybrid cloud.
She noted that, as part of this movement, the last 18 months have seen the share of enterprise infrastructure running on private clouds rise from 12% to 20%. This, she argued, means the accepted standard unit of computing now needs to move up from the server to the rack.
So the company is introducing a couple of developments that are aimed at making data centers run better.
Intel has been talking about the notion of 'Rack Scale' for some time, but it is only now starting to appear in any formal sense. In one sense it is possible to see this as Intel discovering the Software Defined Data Center (SDDC), and doing so a good while after many others, including partners such as Nutanix and Supermicro. In another, it is an attempt to be in on the ground floor of a new set of standards for the next stage of data center design and implementation.
One of the key components of the Rack Scale approach is the first of those developments: SNAP, an open telemetry framework intended to optimise the placement of applications in the data center by matching each application's requirements against the resources available at the time.
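To make the idea concrete, here is a minimal sketch of telemetry-driven placement. The function name, data shapes, and figures are illustrative assumptions, not taken from Intel's SNAP framework itself; the point is simply matching stated requirements against current free capacity.

```python
# Hypothetical sketch of telemetry-driven placement. Names and structures
# are invented for illustration; they do not reflect SNAP's actual API.

def pick_host(app_requirements, telemetry):
    """Return the host whose free resources best fit the application.

    app_requirements: dict of resource -> amount needed, e.g. {"cpu": 4}
    telemetry: dict of host -> dict of resource -> amount currently free
    """
    candidates = []
    for host, free in telemetry.items():
        # Only hosts that can satisfy every stated requirement are eligible.
        if all(free.get(res, 0) >= need for res, need in app_requirements.items()):
            # Prefer the tightest fit, leaving roomier hosts for bigger workloads.
            slack = sum(free[res] - need for res, need in app_requirements.items())
            candidates.append((slack, host))
    return min(candidates)[1] if candidates else None

telemetry = {
    "rack1-node3": {"cpu": 8, "mem_gb": 64},
    "rack1-node7": {"cpu": 4, "mem_gb": 16},
}
print(pick_host({"cpu": 4, "mem_gb": 16}, telemetry))  # rack1-node7 (tightest fit)
```

The tightest-fit heuristic here is just one possible policy; a real scheduler would weigh many more signals than free CPU and memory.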
The idea is for it to become a standard, a reference design with which all hardware suppliers to the data center market can comply. As the name implies, it hinges on the rack becoming the accepted standard unit of compute resources, which is something of a curate's egg.
For example, it is likely to prove popular with the traditional data center marketplace, where most customers, most of the time, run pretty standard applications in pretty standard configurations with pretty standard workloads that change little outside a range of known variables.
Here the watchword is 'uniformity', and here Rack Scale will likely do well, providing users with lower cost points and simpler management because SNAP will be monitoring and managing the pretty standard variables the applications require.
There is a downside, however, which is that uniformity brings with it a couple of compromises that can be important to enterprises as they face up to the need for digital transformation.
First of all, uniformity in the applications leads to uniformity in the hardware required in the rack. As some commented at IDF, a 'rack' in essence becomes an analogue of a 'server': a pretty standardized mix of compute, memory, communications and storage resources. That uniformity inevitably carries the penalty of less operational flexibility, especially in the types of workload that can be handled, which in turn risks restricting business agility.
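The compromise can be illustrated with a back-of-envelope calculation. If every rack ships the same fixed resource mix, a workload whose resource ratios differ from that mix exhausts one resource while stranding the others. The rack profile and workload figures below are invented purely for the example.

```python
# Illustrative only: a fixed rack "profile" versus the resource ratios a
# workload actually wants. All figures are invented for the example.

RACK_PROFILE = {"cpu_cores": 1024, "mem_gb": 8192, "storage_tb": 200}

def fit_ratio(workload):
    """Fraction of the rack each resource demand would consume; the
    largest value is the binding constraint that strands the rest."""
    return {res: workload[res] / RACK_PROFILE[res] for res in RACK_PROFILE}

# A memory-hungry analytics job exhausts RAM while CPUs and disks sit idle.
analytics = {"cpu_cores": 128, "mem_gb": 8192, "storage_tb": 20}
print(fit_ratio(analytics))  # mem_gb hits 1.0; cpu at 0.125, storage at 0.1
```

A more flexible architecture can mix node types to rebalance these ratios; a uniform rack cannot.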
To be fair, the majority of data center customers are, at the moment at least, likely to find this a small inconvenience at worst; most will probably care not a jot. But it should also be observed that this type of architecture, running these types of workload, is generally heading into the twilight years of its life-cycle. So the question will be whether Intel and its partners can move Rack Scale forward far enough - both technically and as an accepted set of industry standards - to keep pace with where their user community will need to be as it digitally transforms.
It may find itself superseded before it reaches real market maturity by the more flexible architectures of the hyper-converged data center vendors, such as Nutanix. The latter offers highly scalable software that runs on appliances made both by the company itself and by partners such as Dell and Lenovo. Scaling comes from adding new appliances on the fly, which are automatically integrated into the whole fabric by the management system, which then assigns available resources to applications in a similar fashion to Rack Scale.
This model does have some specific advantages, in that it can accept appliances with different compute resources, and it is more granular in scalability. With Rack Scale, the minimum unit in effect becomes the rack, whereas Nutanix can start with a single appliance (though starting that small is considered unlikely in practice).
What is also interesting here is that Intel is not rushing to build out Rack Scale, despite it being a marketplace with huge sales potential. Version 1.0, according to Bryant, in essence wraps up some of the basics of the server rack business, such as standardising the shared cooling for systems in a rack. She told the IDF audience that it will be next year before Version 2.0 appears, which will then add standardisation of capabilities such as pooled storage.
Bryant also finally introduced some important technology the company has been talking about at previous IDFs stretching back over some 16 years. Now, however, Silicon Photonics is finally here as a product. The goal is easy to talk about, but technically difficult to achieve - fiber-optics has long been used as the backbone of major telecoms networks, but its one weakness has been the integration of a light-based communications medium with electron-based compute resources.
The company has found a way of integrating laser light sources onto the silicon itself, allowing it to construct a small, direct interface between the electrons of compute resources and fiber-optic networking. One of the biggest issues it had to overcome, according to Bryant, was the precise alignment of the lasers.
This has taken some time to perfect, but Intel has now devised a way of automatically aligning them, which means it can run fiber right into the data center. It started shipping the devices in June, and using them will allow data centers to continue meeting the bandwidth demands of users, which are fast heading towards 100Gbit/sec levels.
There is certainly a need for new ways to look at data centers and the standards that might be relevant as they grow, and Rack Scale looks a good vehicle for helping existing users in that process. But whether it really maps onto their needs as they move through digital transformation, and realise what they might need to be on the other side, remains an open question.
The new technologies - and Silicon Photonics in particular - are likely to have widespread appeal across all types of user, even if only as a benefit they get because their network service providers are now exploiting them.