High Performance Computing matters - supercomputing and HPC as a service in the real world

Neil Raden - March 12, 2020
Summary:
Advances in supercomputing aren't just for the geeky - High Performance Computing has real world implications. Here's my take on some recent advancements, including HPC as a service and cloud deployments.

Is there a future for commercial supercomputing?

The short answer is not only is there a future, but there is also a present.

First of all, we don't know how many "supercomputers" there are in the world because yesterday's supercomputer may look a little less "super" compared to what's available today.

For example, when the Titan computer was installed at Oak Ridge National Laboratory (ORNL) in Tennessee in 2012, it was rated the most powerful computer in the world according to the Top500 list, at 17 petaflops (a petaflop is 1,000 teraflops, so Titan had a peak throughput of 17,000,000,000,000,000 double-precision floating-point operations per second).

When it was dismantled and sold for scrap seven years later, it was still rated #12 in the world. What could possibly be the motivation for doing this? What is it about supercomputers that makes them disposable? ORNL had to make room for its new computer, Frontier, which would run 75-80x faster, at a staggering 1.5 exaflops (an exaflop is a million trillion, or 1,000,000,000,000,000,000, double-precision floating-point operations per second). For the 12th fastest supercomputer in the world, surely there would have been plenty of takers. After all, 17 petaflops is still pretty useful.
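To keep those units straight, here is a minimal Python sketch of the conversions used above. The variable names are mine, and the Titan and Frontier figures are simply the ones quoted in this piece; this is an illustration, not anything specific to how ORNL measures performance.

    # FLOPS units referenced above: a teraflop is 10**12 operations per second,
    # a petaflop is 10**15 (1,000 teraflops), and an exaflop is 10**18
    # (a million trillion).
    TERAFLOP = 10**12
    PETAFLOP = 10**15  # 1,000 teraflops
    EXAFLOP = 10**18   # 1,000 petaflops

    # Figures quoted in the article (double-precision throughput):
    titan_flops = 17 * PETAFLOP
    frontier_flops = int(1.5 * EXAFLOP)

    print(f"Titan:    {titan_flops:,} FLOPS")     # 17,000,000,000,000,000
    print(f"Frontier: {frontier_flops:,} FLOPS")  # 1,500,000,000,000,000,000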

Titan required a boatload of infrastructure to work. The next generation of supercomputers, besides being light-years faster, does not require the combination of refrigerant, chilled water, and air conditioning that Titan needed for cooling, all of it very expensive to maintain. Titan's energy consumption, as much as 6 megawatts, made it unsuitable for most locations. This is how the economics of supercomputers work. Given that the majority of the workload is for military purposes, keeping up is paramount.

These monstrous computers, which take up to 10,000 square feet of space and cost hundreds of millions of dollars, might give the impression that supercomputing, or its more fashionable name, High-Performance Computing (HPC), is only for the rare few who have access to one. It may come as a surprise that there are five hundred of them on the Top500 list, and perhaps hundreds more that don't make the cut.

China has 46% of the HPCs on the Top500 list, and the USA 23%, though the aggregate computing power is about even, at roughly a third each. But to make the cut, the slowest of the 500 comes in at about 1 petaflop. The IBM Roadrunner, which was the world's fastest supercomputer as recently as June 2009 and the first supercomputer to hit 1 petaflop, is now obsolete and has been shut down. That means that every supercomputer put into service before 2009 would not even appear on the Top500 list. The implication: there are many more out there.

Government agencies and a few universities own the largest, most powerful ones. Because of their design, they are suited for massive problems in physics, biology, weather, and, of course, simulating the effectiveness and reliability of the U.S. nuclear arsenal. But how can commercial organizations, even small ones with specialized needs, get access to HPC?

One place to look is Cray, an HPE company, which offers HPC-as-a-service through a partner, Markley. There are other options, too, where companies purchase a supercomputer to share, and the results for commercial companies can be dramatic. Another offering comes from Univa, which has deployed over 1 million cores in a Univa Grid Engine cluster on AWS to showcase the advantages of running large-scale electronic design automation (EDA) workloads in the cloud.

From Automotive supplier doubles in size after adopting HPC:

Many small to mid-size manufacturing companies do not use high-performance computing (HPC) to create and test potential parts and products virtually because of cost concerns. But one firm that did make the investment in HPC developed a new product line -- and subsequently doubled in size.

The company, L&L Products, in Romeo, Mich., is an automotive supplier that makes proprietary chemicals that can be used in any number of applications. Its products include high-strength adhesives that will bond to any material and structural composites used to strengthen vehicles.

L&L began using high-performance computing about six years ago to build a new structural composite line for automotive makers. To accomplish this, it needed to design and test its products in vehicle crash simulations to see how they could be best applied in automobiles. The composite line became a new product line for the company, one that would have been impossible to start without HPC resources, said Steven Reagan, the computational modeling manager at L&L.

Some other examples, from Summit Discusses Why HPC Is Becoming More Important For Commercial Use:

  • A senior database analyst at PayPal now uses HPC to detect fraud. The company's system has to process 3 million events per second and look for any anomalies in real time, a workload that could only be handled by HPC.
  • A vice president and general manager of BioTeam, a technology consulting company, said that the technology has brought important advancements to pharmaceuticals in particular.
  • John Deere's manager of advanced materials and mechanics focused on the challenges of HPC adoption in the commercial sector. He said companies tend to add compute power without properly investigating their needs, but if the added cores are matched to the right applications, the practice is productive.

There are some reasons commercial organizations have not applied the capabilities of HPC to their operations:

  • Primarily, there is a lack of understanding of how HPC can solve engineering problems, because many of the engineers working in the field never had computational science in their curriculum in either mechanical or electrical engineering school.
  • The time it takes to train engineers in HPC modeling and simulation tools pulls them away from their regular work, which many SMEs cannot afford.
  • HPC modeling packages built for very complex simulations may be too complicated and over-engineered for the requirements of smaller manufacturers.

More from Automotive supplier doubles in size after adopting HPC:

Several initiatives have been launched to help remedy the lack of availability, accessibility, and approachability of HPC tools for SME manufacturers. For example, the National Center for Manufacturing Sciences (NCMS) has created a dozen centers throughout the United States (located near universities and national labs to tap into local expertise) to connect manufacturing firms with HPC resources. NCMS's network of "Predictive Innovation Centers" represents public-private collaborations (the public component thanks mostly to state-level matching investments) providing U.S. manufacturers with high-performance computing tools aimed at accelerating product design cycles, improving manufacturing processes, and reducing the need for and cost of laboratory testing of new products.

My take

What is a supercomputer? That's like asking "What is tall?" In the U.S., the national labs own the two fastest ones in the world. Some of them are actually available to outside researchers, and others are "air-gapped," with no connection to the outside world. HPC machines are designed for specific tasks that are unlike those of more general-purpose computing. Organizations whose problems call for HPC, but who are forced to frame their models to fit the computing environment they already have, end up with over-simplified models and sub-optimal solutions.

The two fastest supercomputers (until the exascale machines El Capitan, Frontier, and Aurora come online in 2022 and 2023) are Summit and Sierra. Sierra is housed at Lawrence Livermore National Laboratory and is used strictly for nuclear weapons simulations. Summit, located at Oak Ridge National Laboratory in Tennessee, is tasked with civilian scientific research: applied physics, biology, climate and weather, and other scientific pursuits. But there are many supercomputers still operating that can be used for commercial purposes.
