Intel fights to maintain data center business as competitors encroach

SUMMARY:

Intel’s days of acquiescent customers and dormant competitors are over, but with AWS, Google and Microsoft sniffing around hardware alternatives, Intel is not standing still.

 

(Chart: Xeon performance)

Intel’s widely telegraphed updates to its dominant line of x86 processors are kind of like the annual iPhone announcements. Anticlimactic. Between the efficient rumor mills surrounding each company and the predictable evolution of their respective products, there aren’t a lot of surprises.

The 2017 edition of Intel’s rebadged Xeon Scalable Processors (aka the Purley platform) delivered more of the same: better performance, more cores, faster I/O. It also managed to throw in enough wrinkles to be interesting. The bigger stories, however, are how Intel is facing intense competition on several fronts and has countered with a more expansive strategy for its server-class products.

Controlling nearly all of the market for data center CPUs, Intel has nowhere to go but down. Until recently it wasn’t clear what could precipitate such a decline.

However, the explosive growth of mega cloud providers, which are both extremely sophisticated buyers and hunting for every iota of competitive edge, has significantly changed server market dynamics. So has their adoption of new, workload-optimized system architectures to power their infrastructure.

In tandem, Intel has recognized that the traditional lines between servers and other infrastructure like storage systems, network switches and application-specific accelerators have blurred to insignificance. This means that the same x86 processor foundation can be applied to many different data center workloads. Thus, 2017 sees Intel’s Data Center Group (DCG) playing both defense and offense. The revamped Skylake-generation Xeons are its primary weapon of choice.

Intel’s axis of adversaries

Ever since mobile devices sent the PC market into a tailspin, DCG has been Intel’s growth engine. Unfortunately for Intel, as cloud services have begun displacing on-premise infrastructure, even the data center market has slowed.

What executives once predicted would be steady, double-digit growth has been cut in half in recent quarters as enterprise customers rent capacity from cloud services like AWS, Azure and Google that operate with far greater efficiency.

Despite the past decade’s growth, DCG still only constitutes 29% of company revenue, just over half the size of the client computing (PC) segment. Worse still, data center profit margins dropped 9 points last quarter, although management assured analysts that this was a temporary phenomenon due to seasonality, technology development expenses and increased overhead allocations in anticipation of DCG’s growing share of the company.

We’ll see, but increased competitive pressure won’t make it easy to reinflate margins.

(Chart: Intel segments, 2017)

A greater long-term threat to Intel’s hegemony is how the mega clouds are reshaping all markets for data center equipment, not just servers but network equipment and storage as well.

Intel execs often point to the cloud as an important source of growth that offsets the declines from traditional enterprise IT buyers, but it’s a double-edged sword. As I wrote last winter,

While Intel is benefitting, for now, it will face increased competition as cloud builders with their deep bench of engineering talent are much more open to alternative processor architectures like GPUs, ARM and POWER. Indeed, one of the beneficiaries has been NVIDIA, which has seen its stock price triple over the past year largely on the back of the increased use of GPUs for machine learning applications and the introduction of new GPU-based cloud services (see my coverage here and here).

Not only are the cloud builders aggressively pursuing alternative processor architectures to accelerate new AI, machine learning and data analytics workloads, but they’re actively seeking cheaper alternatives to those high-margin Xeon CPUs for traditional x86 applications.

NVIDIA GPUs have become the de facto standard for cloud deep learning applications with annual performance improvements far outpacing those of Intel’s x86.

Not satisfied, Google has developed a proprietary processor designed to accelerate AI workloads that it claims delivers

…15–30X higher performance and 30–80X higher performance-per-watt than contemporary CPUs and GPUs. These advantages help many of Google’s services run state-of-the-art neural networks at scale and at an affordable cost.

AI algorithms may be the future, but they’re still a tiny fraction of the total number of cloud and enterprise workloads. Thus, more concerning for Intel is the reemergence of long-dormant AMD as a credible threat in both the PC and server markets with the release of its long-awaited next-generation processors with a significantly improved core microarchitecture.

Last month’s unveiling of the EPYC server CPUs made clear that AMD is suddenly a viable threat. By using an innovative multi-chip module and internal communication fabric, AMD manages to pack more cores and I/O capacity into a single package than even the largest of the new Intel Xeons.

My colleague Charlie Demerjian who has studied the details believes AMD’s chips will have a “significant advantage” over Xeon on many workloads. Furthermore, with AMD’s history of undercutting Intel on price and its support for one-socket servers with ample I/O capacity for multiple GPU accelerators, the EPYC could have a substantial price-performance advantage, particularly for AI workloads.

While not an immediate threat to Intel, non-x86 processors are lurking in the background as server-class ARM SoCs have become a reality, and IBM’s POWER gains a following in Asia via the OpenPOWER initiative.

Microsoft’s support for the former via the Open Compute Project (OCP) and Google’s involvement in the latter as an OpenPOWER sponsor and developer of several POWER-based motherboards show that mega clouds are putting engineering talent behind developing Intel alternatives.

A technology juggernaut attacking new markets

Intel responded this week with the 2017 edition of Xeon upgrades.

Along with a new microarchitecture based on the Skylake cores, the Scalable Processor Family includes a significantly different internal I/O and cache design, better memory performance and a bump in maximum core count to 28.

Those interested in the internal details can read the analyses of my friends Charlie Demerjian and Tim Prickett Morgan, who joined me at an Intel-sponsored workshop last month where DCG execs and engineering leads walked analysts and reporters through the Xeon product roadmap, processor design and benchmarks. My purpose here is to focus on the implications.

The new products, which Intel has repackaged into four metallic-themed (bronze through platinum) product families, provide a notable acceleration in inter-generational performance improvements, up 50% from last year’s Broadwell parts for typical data center workloads like virtualized applications and databases. This is a slight uptick from the 40% CAGR of general application performance over the past 11 years. Since the Purley family uses essentially the same process geometry as Broadwell, the performance bump is a testament to many internal design innovations.
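
To put those two numbers in perspective, here is a quick back-of-the-envelope check (the 40% CAGR figure is Intel’s; the arithmetic below is mine):

```python
# Back-of-the-envelope: what a sustained 40% annual performance CAGR
# compounds to over the 11-year span Intel cites.
cagr = 0.40
years = 11
cumulative = (1 + cagr) ** years
print(f"Roughly {cumulative:.0f}x cumulative improvement over {years} years")
# A one-time 50% generational jump nudges that long-run curve slightly upward.
```

In other words, sustained 40% annual gains compound to roughly a 40x improvement over the period the company cites, which is why even a modest change in the generational number matters.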

More important than the evolutionary improvements for conventional server applications are features that Intel has added to accelerate other data center workloads like network switching and packet forwarding, streaming (IPsec, SSL) and bulk (AES, SHA) data encryption and hashing, storage data redundancy (erasure coding) and HPC vector floating point operations.
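
Software only benefits from these additions if it detects and uses the corresponding instruction-set extensions. A minimal sketch of what that detection can look like, assuming a Linux x86 host where /proc/cpuinfo exposes the kernel’s feature-flag names (avx512f, aes, sha_ni are the kernel’s labels, not Intel’s marketing names):

```python
# Minimal sketch: report which workload-acceleration features a Linux host exposes.
# Assumes /proc/cpuinfo is readable; an optimized code path would only be
# selected when the corresponding flag is present.

FEATURES = {
    "avx512f": "AVX-512 foundation (HPC vector floating point)",
    "aes": "AES-NI (bulk encryption)",
    "sha_ni": "SHA extensions (hashing)",
}

def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags reported by the kernel."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    flags = cpu_flags()
    for flag, description in FEATURES.items():
        status = "available" if flag in flags else "not available"
        print(f"{description}: {status}")
```

On a Skylake-generation Xeon the AVX-512 flags should appear; on the prior Broadwell parts they will not, which is one reason the per-workload gains Intel quotes depend heavily on recompiled or hand-tuned software.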

When used as a network appliance, storage system or computational modeling platform, the new Xeons offer double to triple the performance of prior generation Broadwell parts.

By significantly enhancing performance across a broader spectrum of workloads, Intel seeks to expand the market for traditional server CPUs into areas historically reserved for special-purpose products and custom ASICs.

In introducing the product, Lisa Spelman, Intel’s VP and GM of Xeon products and data center marketing, cited its applicability to a “more diverse variety of workloads on each system.” Intel’s commitment to having a product for every application in every industry is seen in the four-tiered product structure, comprising several dozen SKUs with different features enabled and various core counts, frequencies and memory performance.

My take

Intel has always been on a technology treadmill, but like Apple with the anticipation around the 10th-anniversary iPhone, 2017 looks like a pivotal year. Whether the Purley announcement proves to be “one of those ‘once-in-a-career’ kind of days,” as Spelman described it, or a speed bump on the path to waning influence and profitability won’t be clear for some time.

Nevertheless, just as nature abhors a vacuum, business abhors a monopoly, making it near impossible for a single company and its products to hold sway over an entire industry for long.

To its credit, Intel recognizes this and exudes a needed sense of urgency around reinvigorating its data center business. While it was unusual, it wasn’t surprising that Intel execs devoted an entire two-hour session of the Purley analyst workshop to a detailed analysis of AMD’s EPYC processor, preemptively countering its plausible performance advantages.

However, cloud builders like AWS, Microsoft and Google are full of engineers with sharp pencils who can find and exploit any technical or financial flaw in Intel’s products or pricing. Intel’s days of acquiescent customers and dormant competitors are over.

Image credit - via Intel

Disclosure - Intel paid for the author's travel and lodging to the Purley workshop in June.