The last few years haven’t been good to Intel, buffeting it with a litany of problems:
- Its once-vaunted process technology has bumbled through missed refresh cycles, failing to keep pace with silicon foundries like TSMC.
- It has struggled to address the booming market for AI accelerators created by NVIDIA.
- It has watched longtime competitor AMD transform from roadkill into a technology innovator, winning enterprise market share and cloud endorsements from AWS, Azure and Google Cloud.
- It lost a major and influential PC manufacturer when Apple introduced updated products featuring custom-designed, Arm-based Apple Silicon chips.
- It has seen cloud operators, notably AWS, release internally designed processors and AI acceleration chips using licensed Arm technology, standard cell libraries and custom modules.
Intel’s mistakes are reflected in its stock price, which is essentially flat over the past two years, even as AMD has almost quadrupled, NVIDIA almost tripled and the NASDAQ Index nearly doubled. The underperformance attracted the wrath of famed hedge-fund gadfly Daniel Loeb, who has acquired a billion-dollar stake in the company and issued a public letter to Chairman Omar Ishrak with a damning bill of particulars and a call for corrective action.
Some of Intel’s problems are self-inflicted, but others, like an emerging preference by technologically advanced companies like Apple, AWS and even longtime ally Microsoft for custom silicon designed for particular applications and scenarios, result from broader trends going back decades. Indeed, the move from mass-produced commodity components to custom silicon was first propounded by the father of structured semiconductor design using standardized rules and cell libraries at the dawn of the microprocessor era.
From commodification to customization
Standardized microprocessors enabled the PC and were the impetus for one of IT’s major architectural transitions, from terminals and minicomputers to PC clients and servers. Indeed, early in the client-server era, the microprocessors in high-end PCs and servers were identical. Over time, Intel bifurcated its product line to allow design specialization, and the two diverged: PC processors incorporated fewer cores, added built-in graphics co-processors and optimized for low power consumption, while server processors added cores, cache memory and hardware virtualization features. However, both client and server products remained general-purpose processors capable of handling any x86 application.
The cost and expertise required to design and build custom chips made them infeasible for most applications until recently. Alternatives to microprocessors and other standard devices in semiconductor product catalogs emerged in the 1980s, strongly influenced by the work of Professor Carver Mead at Caltech, the father of VLSI systems design, who created a bridge between the worlds of semiconductor circuit design and computer system architecture. Mead’s seminal text, Introduction to VLSI Systems (which I used in college soon after it was first published), attempted to abstract the details of semiconductor devices, circuits, fabrication technology, process scaling, logic design and systems architecture into a universal design method that could be used to create custom silicon for any application using any manufacturing process.
The intervening decades have seen the industry evolve ever closer to the ideals Mead espoused, via several key market and technological developments:
- Silicon foundries offering manufacturing and testing of custom chips using standard process nodes and design rules.
- Standard cell libraries providing building blocks for commonly used subsystems like compute cores, I/O interfaces, embedded memory and particular functions (video codecs, packet buffers, SerDes). These are tailored for each process node and optimized to meet various design requirements such as high performance, low power, maximum density or maximum yield.
- Electronic design automation (EDA) software to translate system-level designs composed of graphical and/or logical (code) elements into chip-level representations ready for fabrication. Such software simulates and verifies the design’s logical and electrical properties using device models customized for each process node.
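The simulate-and-verify step that EDA tools perform can be illustrated with a toy sketch (plain Python standing in for a real EDA flow, which operates on hardware description languages and device models): describe a one-bit full adder as a netlist of primitive gates, then exhaustively check its behavior against the arithmetic it is supposed to implement.

```python
# Toy illustration of EDA-style logic verification (not a real EDA tool):
# model a 1-bit full adder as a netlist of primitive gates, simulate it,
# and exhaustively check it against its arithmetic specification.
from itertools import product

def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """Gate-level full adder built from XOR/AND/OR primitives."""
    s1 = a ^ b           # first half-adder sum
    sum_out = s1 ^ cin   # final sum bit
    c1 = a & b           # carry from first half-adder
    c2 = s1 & cin        # carry from second half-adder
    cout = c1 | c2       # combined carry out
    return sum_out, cout

# Exhaustive check over all 8 input combinations — feasible here,
# whereas real designs rely on simulation and formal methods.
for a, b, cin in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, cin)
    assert 2 * cout + s == a + b + cin  # matches binary addition
print("full adder verified")
```

Real tools do this at vastly larger scale, simulating both the logical and electrical behavior of millions of gates, but the principle is the same: the design is data that software can check against a specification before anything is fabricated.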
Each area spawned specialized companies — TSMC in foundries, Arm in compute cores, Synopsys in standard cells and EDA — that pushed the technology to higher densities, greater performance and more features. Other firms like Broadcom and Marvell offer custom ASICs to supplement their standard product portfolios.
Although custom chip design remains the realm of experts, Mead’s ideas pushed chip design to higher and higher levels of abstraction, fostering system-level thinking. Abstracting a chip’s functional and physical details also allows much of the design process to be captured using code that can be compiled and optimized into chip-level layouts. Chip design has been transformed from an intricate process of circuit design and layout into a system-level description of logical elements and their relationships.
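The shift from circuit layout to composable, code-like descriptions can be sketched in miniature (again in plain Python rather than a real HDL, purely as an assumption-laden illustration): a small reusable "cell," a full adder, is instantiated repeatedly to build a wider ripple-carry adder, much as standard cells are chained into larger modules.

```python
# Hedged sketch: composing a reusable "cell" (a full adder) into a wider
# module (a ripple-carry adder), mirroring how standard cells compose
# into larger blocks in HDL-based design. Not real hardware description.

def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """One-bit full adder cell: returns (sum, carry-out)."""
    s = a ^ b ^ cin
    cout = (a & b) | ((a ^ b) & cin)
    return s, cout

def ripple_adder(x: int, y: int, width: int = 8) -> int:
    """Chain `width` full-adder cells, carry rippling bit to bit."""
    carry, result = 0, 0
    for i in range(width):
        a, b = (x >> i) & 1, (y >> i) & 1
        s, carry = full_adder(a, b, carry)
        result |= s << i
    return result  # wraps modulo 2**width, like fixed-width hardware

assert ripple_adder(100, 55) == 155
```

The designer works with the module-level description; the tedious translation into gates, transistors and layout is delegated to tooling, which is precisely the abstraction Mead's method made possible.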
As smaller fabrication geometries allowed packing more components on each chip, it became feasible to combine multiple components or subsystems onto a single device, what we now call an SoC (system on a chip). The combination of high-level design methods, a long menu of standard cells and high-density foundry processes lets system designers combine modules into custom SoCs tailored to any workload or situation. Where computer makers were once forced to add glue logic, discrete coprocessors, accelerators and memory onto a motherboard, now they can put everything onto a single chip. For example, Apple’s M1 SoC includes four performance CPU cores, four low-power CPU cores, up to eight GPU cores, a 16-core AI accelerator, various other accelerators and cache memory. The M1 is packaged with two DRAM chips in a single device that contains almost all the hardware needed to power a MacBook.
To paraphrase the famous spiritual saying, the arc of technology history bends towards democratization. Like the struggle for liberty and justice, the path isn’t linear; however, the proliferation of SoCs in phones, tablets, PCs, cloud servers and automobiles shows a technology world asymptotically approaching the ideal of a custom device for every product.
Note that democratizing SoC design doesn’t mean that the entire process is automated, with some sophisticated silicon compiler turning readable, high-level code into fab-ready layouts. Many of the modules, like various types of Arm cores, GPU cores, video encoders, or I/O interfaces, will remain highly optimized for each foundry process node by design experts. However, the availability of cell libraries that can be combined with custom logic and assembled into an SoC by EDA tools significantly lowers the barriers for organizations wanting hardware optimized for a particular product or workload.
Furthermore, deep-pocketed organizations have the resources to build highly customized hardware that provides significant competitive advantages and isn’t easily copied. While tech companies like Apple, AWS and Tesla are at the forefront of the customization revolution, they will soon be joined by financial traders (who are already using FPGAs to accelerate trading models), pharmaceutical companies and data scientists (to accelerate anomaly detection).
Once the cost difference between bespoke and off-the-rack clothing narrows, fewer people want the mass-produced items. Similarly, as the design and production of custom chips become easier, cheaper and more convenient, much as custom software already has, more organizations will use bespoke hardware to build a competitive moat.