
Superconducting chips that could pack a data center into a shoebox? Here's how

George Lawton, June 14, 2024
Imec, which coordinates chip fabrication equipment integration, has developed superconducting chips that will put 20 exaflops into a shoebox using existing fabrication technology. They will also reduce power requirements over a hundredfold, which could help lower the barriers to training larger AI models.


Researchers have been toying with superconductors for decades, materials that have always promised to move electricity or power chips with little to no loss. Much like fusion or quantum computing, practical superconducting chips have always been about 20 years out, ‘once researchers work out a few research challenges.’

Now, the Interuniversity Microelectronics Centre (imec) in Belgium has cleared at least three of the biggest hurdles for superconducting chips, bringing their era significantly closer. The big breakthroughs include superconducting materials viable for mass production, new circuit designs, and a novel architecture. The work also takes advantage of recent advances in cryogenic cooling, driven by the enthusiasm around quantum computing.

Quentin Herr, Scientific Director at imec, estimates commercial superconducting chips could start rolling off the production line within five years once a large foundry like TSMC, Intel, or Samsung licenses and fine-tunes the technology.

Building on a solid foundation

That’s actually a doable proposition, since imec is already the gold standard in semiconductor fabrication research and development. The research organization leads integration and interoperability R&D efforts across semiconductor equipment makers and fabricators to ensure the latest kit all works smoothly once installed. Last year, I visited their chip fabrication lab in Leuven, where they have brought together the latest kit for laying down angstrom-scale features with photomasks, depositing metallic layers, and cutting wafers into chips.

Herr and his wife, Anna Herr, also a research director at imec, have been working on superconducting technology for decades. Herr says:

When Anna and I came to imec only three years ago, we had a vision. The reason we came to IMEC was because of their reputation for being able to develop base technology in integrated circuits. They've been leading the CMOS roadmap for decades and branching out into other areas. So we thought it would be the perfect home to develop the technology. But imec does not do commercial fab. They do module development. So, really, we're at this stage of looking for commercial partners and commercializing the technology.

It's a full stack development, meaning we're developing the basic fab processes, but we're also developing the packaging, the gate library design tools, and you could, you know, carry that up into system architecture and software. That being said, we would need to leverage CMOS technology and CMOS companies to keep it moving forward. If all that came into place, you know, there's like a five-year plan to demonstrate something that proves the value of the technology in an actually useful system.

Scaling superconductors

Superconductors have been around as a scientific curiosity for decades. The problem was that existing superconducting materials worked great in the lab but were incompatible with how chips are mass-produced. For example, niobium is a popular superconducting material, but its superconducting properties break down once it is heated to the temperatures used to bake freshly coated silicon wafers into finished chips.

Scientists only recently discovered that niobium titanium nitride can stay superconductive and be crafted into new semiconductor logic and memory circuits using Josephson junctions rather than the transistors in traditional chips. Josephson junctions are made by sandwiching a non-superconducting material between two superconducting layers. In this case, they found that amorphous silicon could do the job, scale down, and be mass-produced using existing chip fabrication processes.

Packing processors

Many factors are involved in packing 20 exaflops into the size of a shoebox. To be fair, the shoebox-sized superconducting computer must also be packed into a rack-sized insulated cylinder, cooled by a refrigerator the size of three computer racks.

Frontier, the top supercomputer in 2024, peaks at 1.7 exaflops (an exaflop is a billion billion floating-point operations per second). Frontier takes up 7,300 square feet (680 m2) across 77 rack cabinets and consumes 22.7 megawatts of power. For comparison's sake, a typical hyperscaler data center consumes 20-50 megawatts.
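Taking these figures at face value, a little back-of-envelope arithmetic shows why the claimed hundredfold power reduction matters. This is only a rough sketch using the numbers quoted in this article, not imec's own accounting:

```python
# Back-of-envelope efficiency comparison, using the article's figures.
frontier_flops = 1.7e18      # Frontier peak, flops/s
frontier_power_w = 22.7e6    # 22.7 MW

eff = frontier_flops / frontier_power_w
print(f"Frontier: {eff:.1e} flops/W")  # ~7.5e+10

# Power a 20-exaflop machine would need at Frontier's efficiency:
target_flops = 20e18
print(f"At Frontier efficiency: {target_flops / eff / 1e6:.0f} MW")  # ~267 MW

# A hundredfold power reduction, as imec claims, would bring that down
# to roughly the draw of a single large building rather than a campus.
print(f"At 100x better: {target_flops / eff / 100 / 1e6:.1f} MW")  # ~2.7 MW
```

In other words, scaling Frontier's architecture to 20 exaflops would need several hyperscaler data centers' worth of power, which is the gap the superconducting approach aims to close.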

Herr said they have managed to shrink these Josephson junctions down to about 210 nanometers. That’s quite a bit larger than today’s smallest transistor feature sizes of around 2 nanometers, so I asked him to walk me through how the overall system could still be so much smaller.

First, the wires are only about 50 nanometers wide, which is about the same width required to connect up all those 2-nanometer transistors. As a result, wires take up a much larger percentage of the space in a classic chip than the transistors do, so the larger junctions are less of a handicap than they sound. Also, superconducting wires could get skinnier over time, which is not practical with traditional chips.

Superconducting chips can also run at about 30 GHz today without worrying about overheating or power loss. Although a desktop CPU can hit a few GHz, it wastes a lot of energy as heat, which room air can carry away from a single machine but not from many packed together. Herr said data center CPUs need to run at around 1 GHz to hit the power-efficiency sweet spot.

Also, traditional CPUs have thermal management issues, which means that even though they may have billions of transistors, those transistors are never all switched on simultaneously, to prevent overheating. Superconducting chips don’t have the same problem. Herr explains:

When they put 50 billion transistors on a chip, they don't turn them all on simultaneously. There's this concept of dark silicon. So they're all there and can all be turned on sequentially, but they're not all expected to be fully active all the time. And that's something that is not an issue in our technology. We can have full activity factor without a thermal problem.

In addition, layers of superconducting circuits can be stacked on top of each other. This is significant since only about 1% of the thickness of a traditional chip is active. The other 99% is for mechanical stability and thermal management. This makes it easier to stack multiple memory layers on top of the logic in superconducting chips. Today, memory layers are stacked on their own but not on top of logic due to thermal management issues. Multiple superconducting boards can also be stacked without worrying about overheating. Herr explains: 

The real reason it gets smaller than bigger is because of three-dimensional packaging. We are using these CMOS industry packaging technologies. However, there is a limiting factor in CMOS, which is that as they start packaging things denser, that's always a question. What about power? How are you going to get the heat out? It's maybe the hardest problem in the whole concept. So that's where the superconductor system really blows everything else away in terms of thermal management. So we can, if we can stack it and package it, we can turn it on and power it because we have enough headroom in the thermal set.

Faster connectivity

Another interesting property of superconducting chips is that they can be connected much faster and with less loss than traditional chips. One big issue with conventional chips is that communication speed slows down significantly when going between chips.

For example, Cerebras’ wafer-scale engine chip can pass information along at 27.5 petabytes per second on the wafer, compared to 150 gigabytes per second between chips in a cluster. However, wafer-scale chips also need to detect and lock out the defective circuits that would normally be discarded when wafers are cut into individual chips.
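To put those two bandwidth figures on the same scale (a quick sketch using only the numbers quoted above):

```python
# The on-wafer vs. chip-to-chip bandwidth gap described above,
# using the Cerebras figures from the article.
on_wafer_bps = 27.5e15   # 27.5 PB/s across the wafer-scale engine
inter_chip_bps = 150e9   # 150 GB/s between chips in a cluster

gap = on_wafer_bps / inter_chip_bps
print(f"On-wafer links are ~{gap:,.0f}x faster")  # ~183,333x
```

That five-orders-of-magnitude cliff at the chip boundary is exactly what superconducting interconnects promise to flatten.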

Herr said that superconducting chips could hit the same speeds between chips. This makes it much easier to scale them up without worrying about circuit defects.

About that fridge...

Superconducting chips don’t heat up much while processing data, but a lot of energy is required to maintain superconducting temperatures. The cryogenic fridge needed to keep the whole thing at 4 kelvin (-269 Celsius) draws a lot of power: about 320 watts of electricity go in for every 1 watt of heat removed at these chilly temperatures.

Herr estimates a 20-exaflop computer would require about 320 kilowatts of cooling power. For comparison, about 30-50% of the energy in a traditional data center is devoted to removing heat from the chips. The upshot is that economics currently only works for the kind of high-performance machines used for training AI models, scientific computing, or packing a lot of virtual machines onto a single super server.
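Taken at face value, those numbers imply a surprisingly small heat load inside the cryostat. A rough sketch, assuming the 320:1 ratio applies uniformly:

```python
# Rough arithmetic on the cryogenic overhead, using the article's figures.
cooling_ratio = 320        # watts of wall power per watt of heat lifted at 4 K
cooling_power_w = 320e3    # Herr's estimate for the 20-exaflop machine

heat_load_w = cooling_power_w / cooling_ratio
print(f"Implied heat load at 4 K: ~{heat_load_w:.0f} W")  # ~1000 W

# Even with the cooling penalty, the total is far below Frontier's draw:
frontier_power_w = 22.7e6
print(f"Frontier draws ~{frontier_power_w / cooling_power_w:.0f}x more")  # ~71x
```

So the whole 20-exaflop machine would dissipate only about a kilowatt at 4 kelvin, roughly the heat of a single space heater, with the cryocooler paying the 320x tax on top.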

Another issue is that they have not figured out how to cost-effectively scale the dynamic random-access memory (DRAM) that holds the bulk of the data required for a big calculation. Herr said they could make superconducting on-chip memory, but DRAM is more cost-effective for larger data sets.

So, they have developed a novel glass/copper thermal-break interface that lets them connect to DRAM cooled to 77 kelvin in a separate section. This speeds up DRAM performance and costs far less to cool than the 4-kelvin superconducting section.

My take

It’s not too often that multiple breakthroughs come together to usher in a new era of technology. I remember visiting a Princeton fusion reactor about 35 years ago, where they said commercialization was about 20 years out. Despite incremental progress in new magnets and designs, it probably still is.

The same thing could probably be said about quantum computing. People are always pitching me on the breakthrough that will usher in the quantum era, until I do the math on the error correction and realize they have maybe five working qubits when thousands will be required for any real problems.

The recent breakthroughs in superconducting chips seem a bit closer and more realistic. At this point, progress requires solving chip fabrication engineering challenges rather than waiting for a fundamental scientific discovery.

These days, much scientific research seems to be translated into clickbait by the mainstream press. Although new scientific discoveries can be interesting, it feels equally important to contextualize what innovators are learning about turning novel ideas into real-world progress.
