When it comes to data center users, Intel recognises `what do you want?’ rather than `look what I’ve got’
Summary:
Against a background of Intel falling behind its competitors in a significant area of technology, and with its share price taking a tumble after a mauling on that subject by the financial analysts (despite its latest numbers looking pretty good), Intel held a Summit to talk through its new approach to the important data center marketplace – one where the industry-standard technology `willy waving’ was for the first time set aside in favour of focussing on what users might want to achieve with their businesses.
The following cameo is important, not least because Intel had some announcements that individually were not overly earth-shattering, but together point at something of a sea-change in the way enterprise IT solutions will get architected and implemented in the not too distant future.
It is the sort of `announcement’ that no CFO will need to sign off on in the next week or two, but it does need to be added to the list-of-important-things-to-think-about-very-soon that lives in the head of every enterprise CIO.
So, that cameo… When Q&A time at the Summit came round, a US analyst asked a blindingly obvious question, one prompted by a great deal of activity both within the semiconductor brand leaders and the Wall Street financial analyst community. To paraphrase, it went: `how is Intel going to compete when others are already starting to ship 10 nm geometry devices and it is still stuck at 14 nm, and will be till maybe 2020?’
This is a reference to some mind-bogglingly complex design and manufacturing issues that lie at the heart of the famed Moore’s Law, so it is not without some genuine importance. In essence, this is delving into the realm of manufacturing at the molecular, near-atomic scale, and the geometries mentioned are an indication of the size of individual components in an integrated circuit.
The specific reference is to the width of the connecting tracks between those components, and the unit of measurement – `nm’ – is a nanometre. That is 0.000001 of a millimetre, or a billionth of a metre, and as a pointless comparison to something widely known, visible light has a wavelength between 400 and 700 nm.
Meanwhile, in the real world
The real-world upshot of the ability to reduce those geometries, across the whole chip, is that it can then contain more transistors with which to store or process data, and do it faster, while using less power. So shrinking chips by some 30% can end up meaning an existing device – a Xeon processor, for example – uses less power, works quicker and costs less. Or it has room available for extra functionality to be added.
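The arithmetic behind that density gain can be sketched in a few lines. This is a deliberately simplified back-of-the-envelope model – real process-node names no longer map neatly onto physical feature sizes – but it shows why a roughly 30% linear shrink matters so much:

```python
# Simplified illustration of why a geometry shrink matters.
# (Assumption: the node names are treated as literal linear feature
# sizes, which modern marketing nodes only loosely reflect.)
old_nm = 14
new_nm = 10

# Component area scales with the square of the linear dimension,
# so a 14 nm -> 10 nm shrink roughly doubles transistor density.
area_ratio = (old_nm / new_nm) ** 2
print(f"~{area_ratio:.2f}x more transistors in the same die area")

# The linear shrink itself is about 30%, matching the figure above.
linear_shrink = 1 - new_nm / old_nm
print(f"linear shrink: ~{linear_shrink:.0%}")
```

Roughly double the transistor budget in the same area, for the same wafer cost, is the economic engine behind the whole node race.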
So yes, it is important stuff. There is, however, a `but’ involved in all this. The real world is not about pure technological capability for its own sake. And the analyst’s question was geared to just that. Intel’s competitors have been saying that this lag means, QED, that Intel is doomed. Many of the assembled analysts duly nodded in agreement.
It was here that I would venture to suggest Navin Shenoy, Executive VP and General Manager of Intel’s Data Center Group, missed a trick. His answer was classic diplomacy:
We are not seeing any demand out in the market for 10 nanometre devices.
Fair comment, but there was a part of me that wanted to hear a response like: `who gives a damn anyway?’ For when push comes to shove, whether a new 10 nm device is faster and uses less power is neither here nor there when it comes to the majority of the enterprise data center community.
For a start, early examples of such devices are going to be expensive; that is inevitable, because initial production yields will be measured in many (silicon) wafers per working chip. It is only when the yield consistently gets to hundreds of good chips per wafer that economic serenity is achieved. Making these things is extremely expensive, and most of that is up-front investment.
The nature of the `but…’
So there will only be a small real market for them right now: one that is running the type of applications where the performance advantages, and perhaps the lower power consumption, give sufficient economic advantages to warrant the expense right here, right now.
That is the nature of the `but’: the real world of enterprise IT requirements will want to exploit 10 nm processors and memory devices at some time. But for the vast majority that time is not now. What they would find of more value to their business are developments that shorten their time-to-cash, and their ability to help their customers achieve whatever business goals they have, as quickly, efficiently and repeatedly as possible.
This is the objective that Intel has latched onto and directed itself at during the Summit. As Shenoy observed during his keynote:
It is data that is defining the future. Some even called it the `new God’. Yet little meaningful business value has been derived from it yet.
He observed that cloud service provision is now a big growth area for the company, already 41% of its revenue. But the new trend is customising devices such as processors for these markets. This, he said, has already grown from 18% to 50% of products shipped into the cloud sector. There is already a pretty broad range of options available when it comes to `customised/optimised’ products. These range from bringing high-end technology developments that started life in data centers down into lower-end devices and systems, through to the development of new, physical appliance systems, such as intelligent mobile base stations for the burgeoning edge distributed/dispersed data center applications area outlined by Intel’s Alex Quach recently.
One area where a specific new product saw the light of day as a saleable production item is a development from the company’s existing Optane Solid State Disk technology. This takes the underlying non-volatile memory technology and turns it into a new class of memory that complements the DRAM devices used in server main system memory, and could well replace them in the future.
Called Optane Persistent Memory, it exploits a key feature of that technology to overcome an issue with DRAM that is becoming a problem as workloads and their datasets get ever larger: main system memory has to be able to hold very large files so that all the data is available while the workload runs. Holding that data in DRAM means the memory has to be continually refreshed. That takes time – never noticed with most legacy apps and smaller datasets, but now a problem – and it also takes energy.
Optane Persistent Memory’s non-volatile nature saves that time, and energy. It is also a real boon if there is a system crash, for then there is no need to re-load the data, which is reckoned to save many hours. One of the first customers for this is Google, which will be using it as part of a partnership with Intel and SAP to provide resources for running large HANA applications where scaling is currently an issue.
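To see why persistence at memory speed matters, it helps to look at how applications typically reach this kind of memory: through memory-mapped files, so that loads and stores go straight to the media rather than through read/write system calls. The sketch below is a hedged stand-in – it uses an ordinary file (the path `demo.pmem` is hypothetical; real Linux deployments map a file on a DAX-mounted persistent-memory filesystem) – but the access pattern, and the fact that the data survives a restart without a bulk re-load into DRAM, are the point:

```python
import mmap
import os

# Hypothetical path standing in for a file on a DAX-mounted
# persistent-memory filesystem (e.g. under /mnt/pmem0 in real use).
PATH = "demo.pmem"
SIZE = 4096

# Create a fixed-size backing file, as a pmem-aware app would.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

# Map it into the address space and update it with ordinary
# memory stores -- no read()/write() syscalls on the hot path.
with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), SIZE) as m:
        m[0:5] = b"HANA!"
        m.flush()  # on real pmem: ensure CPU caches reach the media

# After a crash or restart, the data is simply still there --
# no hours-long re-load of the working set before work resumes.
with open(PATH, "rb") as f:
    recovered = f.read(5)
print(recovered)

os.remove(PATH)
```

The saving Intel and SAP are pointing at for HANA is exactly this last step: a restarted in-memory database finds its data already in place instead of streaming terabytes back from storage.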
The company has a processor road map that includes the appearance of a 10 nm device in 2020, but for now its focus is on versions of its Scalable Xeon family. And it is becoming a family of processors targeted at different application areas. For example, the end of this year should see a version equipped with integral Optane Persistent Memory and Deep Learning Boost (DL Boost) software.
This is designed to accelerate AI workloads by combining common AI-related functions into single instructions and is part of a program of applications-focused processors collectively labelled Intel Select Solutions.
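A concrete way to picture what `combining common AI-related functions into single instructions’ means: DL Boost’s VNNI instructions perform a multiply of 8-bit values with accumulation into a 32-bit sum as one operation, where older Xeons needed separate multiply, widen and add steps. The following is a pure-Python model of what one lane of such an instruction computes – an illustration of the operation, not Intel’s implementation:

```python
def vnni_dot4(acc: int, a: list, b: list) -> int:
    """Model of one lane of a VNNI-style fused op:
    acc += sum(a[i] * b[i]) over four 8-bit pairs,
    accumulated into a single 32-bit-style total."""
    assert len(a) == len(b) == 4
    return acc + sum(x * y for x, y in zip(a, b))

# Typical quantised-inference inner loop: 8-bit activations
# multiplied by 8-bit weights, accumulated in 32 bits.
activations = [1, 2, 3, 4]    # uint8-range values
weights = [10, -1, 2, 0]      # int8-range values
acc = vnni_dot4(0, activations, weights)
print(acc)  # 1*10 + 2*(-1) + 3*2 + 4*0 = 14
```

Doing that as one instruction per lane, across a wide vector register, is where the claimed AI-inference acceleration comes from.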
“The aim is to keep making `The Easy Button’ for users to press.”
Customisation is also becoming a key capability in the Cloud Service Provider (CSP) sector. It is also an area where the possibilities are still nowhere near being fully tapped. The company sees this trend spreading beyond the obvious big three in the public CSP market, and second-tier providers are starting to use customised devices.
This makes a certain sense, of course, as they will need differentiation in the marketplace, and specialised services based on customised processors, networks and storage are likely to become the rule of survival rather than a nice-to-have.
The cloud services operation within Intel is pushing the customisation idea further by having a team of over 200 software engineers directly engaged with CSP customers on new development work. This includes tasks such as re-optimising entire CSP infrastructures so that legacy management services work with the latest technology.
My take
This is an interesting development, as one of the `techiest’ of tech vendors re-jigs its future direction towards the important goal – what it is that customers want to achieve, and why they want to achieve it – rather than the `mine is smaller/faster than yours’ marketing model that Intel’s competitors, and the analysts associated with the industry, still seem happily addicted to.