How Luminary is rethinking physics simulation for the cloud

George Lawton | March 20, 2024
Summary:
Luminary Cloud is rethinking from the ground up how physics simulation workflows scale across GPU clusters. The startup hopes its new architecture and business model will transform the simulation industry the way Snowflake transformed the database industry, with important implications for the future of digital twins.

(Image credit: Pixabay)

Luminary Cloud has officially launched out of stealth with a new simulation architecture and SaaS business model that promises to democratize simulation processes as part of the product development lifecycle.

On the technical front, the company has developed an automated preparation process, a more efficient approach to scaling simulations across GPU clusters, and simplified analysis. On the business front, it is adopting a consumption-based pricing model with no upfront or per-seat licensing costs, much as Snowflake pioneered for databases.

The young startup hopes to make the same splash for physics simulation problem solvers as Snowflake did in cloud data. Indeed, Mike Speiser, founding CEO of Snowflake and Pure Storage and now a managing director at Sutter Hill Ventures, serves on Luminary’s board. Luminary co-founders include CEO Jason Lango, who previously led Bracket, and CTO Juan Alonso, who founded Stanford’s Aerospace Design Laboratory and formerly directed NASA Aeronautics research programs.

The company bills its new service as “Realtime Engineering.” Although it does not achieve the sub-millisecond processing one might associate with the term, it can certainly run simulations tens to hundreds of times faster than competitive approaches. For example, it can run an aerodynamic simulation on a 150 million-element mesh in 7 minutes, compared to 4-6 hours with traditional approaches. It also requires far less specialty expertise than traditional simulation workflows.

Its first offering supports computational fluid dynamics for understanding air and heat flows in aircraft, cars, pumps, windmills, and sporting equipment. Early customers include Joby Aviation, Piper Aviation, Trek Bikes, and Cobra Golf, a subsidiary of Puma. Competitors such as Ansys, MathWorks (maker of MATLAB), Siemens, Cadence, and Synopsys support a broader range of tools for solving problems in mechanical, electrical, and chip engineering designs. It’s a fast-growing business as enterprises design new products, such as electric cars and aircraft, to meet net-zero goals. Ansys recently agreed to be acquired by Synopsys for $35 billion.

Starting from scratch

Alonso argues that these legacy vendors have grown by cobbling together point solutions, which will be hard to refactor to take advantage of new GPU architectures and cloud infrastructure:

Historically, the legacy vendors have grown mostly based on acquiring disparate simulation applications, which is ‘inorganic growth’ as they say in the investing community. Due to inorganic growth, the large legacy vendors have 10s-100s of poorly integrated applications in their product portfolios. It’s a natural startup advantage to be able to start from scratch on a modern SaaS product. The incumbent legacy vendors will need to rewrite all of their software in order to build a modern cloud- and GPU-based SaaS product. The transition from an enterprise seat license model to a consumption-based pricing model will not be easy for public company competitors. Vice versa, the R&D-intensive nature of this product category is a moat against all but the highest-funded startup competitors.

The other big difference is a business model that allows enterprises to adopt true pay-as-you-go pricing. It's important to note that all major simulation vendors are starting to roll out SaaS offerings in various ways. Still, current licensing approaches often come with upfront costs and don’t always align with usage. Alonso explains:

Engineering workflows are often project-based and sometimes hard to forecast usage in advance, which aligns well with a consumption-based pricing model where you can pay $0 up front and pay as you go based on actual usage, which is finance/accounting speak for $0 up front pay-as-you go billing. The legacy alternative to the enterprise seat licensing model would be to pay in advance for peak demand on an annual basis, which often results in overspending & under-utilization of capacity. In the consumption-based model, an engineering team or company can pay for just what they require with no commitment, but then look for a volume discount in exchange for committed spending once they are able to forecast their future usage.

Customers pay on a dollar-per-minute basis for GPU usage. So, if a simulation can run across ten GPUs, it runs nearly ten times faster but costs the same as running it on a single GPU for ten times as long. As a result, customers are not penalized for parallelism and faster runtime. Lango says:

A quick conceptual design simulation of an aircraft could run for a couple of minutes and cost less than $90. Customers doing larger, high-fidelity simulations or large numbers of simulations exploring different design alternatives would achieve volume for prepaid capacity discounts. The on-demand and prepaid capacity consumption model is a great way to have a win/win with forecast volume and discounting. Customers of any size can start with a $0 up-front on-demand relationship.
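
To illustrate the arithmetic behind that cost invariance, here is a minimal sketch using a hypothetical per-GPU-minute rate (the article does not disclose Luminary's actual pricing):

```python
# Illustrative GPU-minute cost model. The rate below is hypothetical;
# Luminary's actual pricing is not published in this article.
RATE_PER_GPU_MINUTE = 0.50  # dollars per GPU-minute (assumed for illustration)

def job_cost(num_gpus: int, wall_clock_minutes: float) -> float:
    """Cost scales with total GPU-minutes consumed, not wall-clock time alone."""
    return num_gpus * wall_clock_minutes * RATE_PER_GPU_MINUTE

# A job that takes 70 minutes on one GPU...
serial = job_cost(num_gpus=1, wall_clock_minutes=70)
# ...costs the same spread across ten GPUs for about 7 minutes
# (assuming near-linear scaling), but finishes ten times sooner.
parallel = job_cost(num_gpus=10, wall_clock_minutes=7)
print(serial, parallel)  # 35.0 35.0
```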

Rethinking the simulation lifecycle

Luminary has also rethought the simulation lifecycle to make it easier for people with different levels of expertise to test designs without waiting around for the right expert. Preparing CAD models for simulation has historically been a major challenge, and it is the area that benefits most from new approaches to user experience design and automation.

The digital simulation pipeline can typically be broken into three major steps: 

  1. The engineer prepares the simulation by importing CAD geometry, creating meshes, setting up parameters and operating conditions, and deciding what outputs will be saved from the simulation. 
  2. Compute resources are allocated and the simulation is run. 
  3. The engineer makes sense of the simulation results via visualization, plotting, and data analysis.

In the first phase, an AI-powered mesh generator automates preparation; traditionally, a specialist was required to translate a raw CAD file into an appropriate mesh for the simulation. In the second phase, the tool automatically configures the appropriate GPUs and deploys the simulation code across the infrastructure. In the post-processing phase, the tool supports workflows for 3D scientific visualization, analysis, and design exploration suitable for stakeholders with varying expertise. Data can be shared broadly, insights can be developed collaboratively, and decisions about the future evolution of a design can be made with input from all major participants.
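
As a rough sketch of how such a three-phase pipeline might be scripted, the outline below uses hypothetical function names purely for illustration; it is not Luminary's actual SDK:

```python
# Conceptual sketch of the three-phase simulation pipeline described above.
# All function and parameter names are hypothetical, not Luminary's SDK.

def prepare(cad_file: str, mesh_resolution: str, outputs: list[str]) -> dict:
    """Phase 1: import geometry, generate a mesh, set parameters and outputs."""
    return {"cad": cad_file, "mesh": mesh_resolution, "outputs": outputs}

def solve(case: dict, gpus: int) -> dict:
    """Phase 2: allocate compute resources and run the solver on the cluster."""
    # A real service would dispatch this to a multi-GPU backend.
    return {"case": case, "gpus": gpus, "fields": "..."}

def postprocess(result: dict) -> None:
    """Phase 3: visualization, plotting, and data analysis of the results."""
    print(f"Solved on {result['gpus']} GPUs; saved outputs: {result['case']['outputs']}")

case = prepare("wing.step", mesh_resolution="fine", outputs=["lift", "drag"])
postprocess(solve(case, gpus=8))
```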

Alonso explains:

As a SaaS platform, we can automate ensembles of simulations and data management for designs of experiments and provide access to tabular data for surrogate modeling, all with clicks of the mouse or via Python scripts. We see the long-term opportunity in our realtime engineering vision as making CAE accessible to both advanced and citizen analysts and enabling collaboration between the two.
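
The pattern Alonso describes is essentially a parameter sweep that collects tabular results. A minimal sketch, with a stand-in solver function rather than any real API, might look like this:

```python
# Hypothetical design-of-experiments sweep that collects tabular results.
# run_simulation() is a toy stand-in for a cloud solver call, not a real API.
import csv
import itertools

def run_simulation(angle_of_attack: float, speed: float) -> dict:
    # Placeholder physics: a made-up drag estimate, not a real solver.
    drag = 0.02 + 0.001 * angle_of_attack**2 + 1e-5 * speed**2
    return {"angle_of_attack": angle_of_attack, "speed": speed, "drag": drag}

angles = [0.0, 2.0, 4.0, 6.0]
speeds = [50.0, 75.0, 100.0]

rows = [run_simulation(a, s) for a, s in itertools.product(angles, speeds)]

# Tabular output like this is what downstream surrogate modeling consumes.
with open("doe_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```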

Under the hood

All major simulation vendors have deep partnerships with NVIDIA to run some aspects of their tools across the company's leading GPU and AI infrastructure, with the latest NVIDIA H100 GPU having over 18,000 cores. Simulations typically see an improvement of 1.5 to 10 times by porting existing elements of a physics solver to a GPU. Alonso argues competitors are missing the even bigger opportunity of redeveloping physics solvers from the ground up and architecting every element of the solution pipeline so it runs efficiently on multi-GPU clusters.

This is challenging because different regions of a simulation depend on one another, which requires a novel approach to partitioning the problem. The algorithms must also account for the hierarchical memory system used in modern GPUs, and they must decompose the components of the underlying partial differential equation (PDE) solution process, including linear and nonlinear solvers, flux calculations, and turbulence and heat-transfer models, so they can run across parallel GPUs.

In addition, many irregular graph-like operations, such as those involved in search and interpolation, meshing, and moving-grid calculations, are significantly more complicated to implement on the GPU. Luminary started from scratch to create a GPU-based solver that supports these challenging parts of the simulation workflow. Cloud GPU instances also come with multi-core CPUs that can execute operations asynchronously, and Luminary’s tools can offload work to these CPUs without stalling the GPUs.
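
The core difficulty here, neighboring regions that must exchange boundary data at every step, can be illustrated with a toy example. The sketch below splits a one-dimensional heat equation across two partitions and exchanges ghost ("halo") cells each iteration; it is a conceptual CPU/NumPy illustration of the communication pattern, not Luminary's solver:

```python
# Toy illustration of domain decomposition with halo (ghost-cell) exchange,
# the communication pattern that makes multi-GPU PDE solvers hard to scale.
# Conceptual CPU/NumPy sketch only; not Luminary's implementation.
import numpy as np

n, steps, alpha = 64, 200, 0.4      # grid points, time steps, diffusion number
field = np.zeros(n)
field[n // 2] = 1.0                 # initial heat spike in the middle

# Split the domain into two partitions, each padded with one ghost cell per side.
left = np.zeros(n // 2 + 2)
right = np.zeros(n // 2 + 2)
left[1:-1], right[1:-1] = field[: n // 2], field[n // 2 :]

for _ in range(steps):
    # Halo exchange: each partition receives its neighbor's edge value.
    left[-1], right[0] = right[1], left[-2]
    # Local update (explicit diffusion stencil) on interior cells only.
    for part in (left, right):
        part[1:-1] += alpha * (part[:-2] - 2 * part[1:-1] + part[2:])

result = np.concatenate([left[1:-1], right[1:-1]])
print(result.sum())  # total heat, minus what diffuses out at the fixed boundaries
```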

Rethinking the traditional simulation architecture also required cross-disciplinary research and development. Alonso says:

Luminary’s team includes many of the top minds in computational algorithms for physics, GPU implementations, distributed systems, data engineering, and HPC to create proprietary physics solvers that utilize the compute and bandwidth characteristics of multiple GPUs in the cloud. That is the secret sauce of the speed of Luminary.

Different kinds of models

A given simulation problem solver can take advantage of various techniques depending on the accuracy requirements and the time available. For example, when simulating the turbulent flow around an object (an aircraft, automobile, or golf ball, say), one can attempt to solve the Navier-Stokes equations, the partial differential equations describing the motion of fluids, directly. This is called Direct Numerical Simulation (DNS) and has the advantage of being an exact computer representation of the real problem. But it's not practical today. Alonso says:

Unfortunately, the computational cost of resolving every single, tiny, turbulent eddy is prohibitive for realistic engineering problems and, therefore, DNS is beyond our current computational reach and will be for the foreseeable future. For this reason, we resort to various models of turbulence.

At one end of the spectrum, the Reynolds-averaged Navier-Stokes (RANS) equations are typically used for steady-state flows and are computationally cheaper but less accurate. At the other end, techniques like Delayed Detached Eddy Simulation (DDES) and Wall-Modeled Large Eddy Simulation (WMLES) compute turbulence more accurately but are also more expensive. In practice, teams often use a mixture of RANS, DDES, and WMLES models depending on the accuracy requirements and the point in a product's development timeline. Alonso elaborates on what these differences mean from a practical perspective:

Many engineering problems of interest can attain sufficient accuracy with a RANS steady-state model and, therefore, RANS is typically the model of choice in more than 90% of engineering problems. For example, Luminary can solve for the RANS solution around a full aircraft with 150 million mesh elements in around 7 minutes using GPU computing when that computation normally would have taken 4-6 hours on CPUs. When ultimate accuracy is needed, DDES or WMLES models are typically chosen. Luminary can complete DDES and WMLES calculations using GPUs in about 30 minutes when those computations would have normally taken 24-36 hours or more using CPUs.
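
For context, the speedups implied by those figures work out to roughly 34x-51x for the RANS case and 48x-72x for the scale-resolving models, as the quick calculation below shows (it simply divides the quoted CPU runtimes by the quoted GPU runtimes):

```python
# Rough speedups implied by the runtimes quoted above.
cases = [("RANS", 7, (4, 6)),           # 7 GPU-minutes vs. 4-6 CPU-hours
         ("DDES/WMLES", 30, (24, 36))]  # 30 GPU-minutes vs. 24-36 CPU-hours

for name, gpu_minutes, (cpu_low, cpu_high) in cases:
    low = cpu_low * 60 / gpu_minutes
    high = cpu_high * 60 / gpu_minutes
    print(f"{name}: roughly {low:.0f}x to {high:.0f}x faster")
```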

Connecting the dots

Digital twin pioneers Dr. Michael Grieves and John Vickers previously weighed in for diginomica about the siloed nature of simulation within the product development lifecycle. They argued that despite progress in Model-Based Systems Engineering approaches, large gaps remained for technical and cultural reasons. In an ideal world, teams could explore how design choices affect tradeoffs among aspects such as structural integrity, aerodynamic drag, electrical interference, and cost when building a better car or wind turbine.

One big factor is that various engineers must solve different types of problems, depending on where they are in the process. A given problem solver can also present results in multiple ways. Alonso explains:

Normal Computer Aided Engineering (CAE) tools compute some outputs of interest (say the range of an EV, the drag of an aircraft, etc.) that depend on inputs (such as the shape of the car, the span of the wing, etc.)  These outputs tell the engineer whether the design is good or bad. But they do not tell the engineer how to improve the design. Sensitivity analysis is an advanced computational workflow that, unlike other vendors, Luminary provides and that tells the engineer how the outputs change when the inputs are altered. For example, these sensitivities can be used as guidance for the next design iteration. Luminary has developed GPU-native sensitivity analysis capabilities.
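
To make the idea concrete, the toy sketch below approximates sensitivities by finite differences on a made-up drag function. Production CFD codes compute such derivatives far more efficiently, and nothing here reflects Luminary's actual GPU-native implementation:

```python
# Toy sensitivity analysis: how does an output of interest (drag) change as
# each design input changes? Finite differences are shown only for clarity;
# this is not Luminary's method or code.

def drag(wing_span: float, chord: float) -> float:
    # Made-up stand-in for a CFD-computed output of interest.
    return 0.5 / wing_span + 0.02 * chord**2

def sensitivities(f, inputs: dict, eps: float = 1e-6) -> dict:
    base = f(**inputs)
    grads = {}
    for name, value in inputs.items():
        bumped = dict(inputs, **{name: value + eps})
        grads[name] = (f(**bumped) - base) / eps
    return grads

design = {"wing_span": 10.0, "chord": 1.5}
print(sensitivities(drag, design))
# A negative sensitivity to wing_span suggests increasing span lowers drag;
# signs and magnitudes like these guide the next design iteration.
```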

Loops of loops

When engineers try to connect different simulations across a more agile workflow, they need to think about different feedback loops between individual and collective results. The inner loop considers a single analysis that starts with a CAD geometry, resulting in a physics solution and some of the outputs of interest. 

The outer loop involves repeatedly executing the inner loop, changing the inputs and learning from the outputs. Examples of such outer-loop workflows include the following (one is sketched in code after the list): 

  • Design optimization: Repeatedly evaluating a changing design until the best design is obtained. 
  • Uncertainty quantification: Repeatedly evaluating a design under varying operating conditions to understand the variability in the outputs. 
  • Physics-based AI/ML model generation: Fitting or regressing mathematical models to many inner-loop analyses. 
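
Here is a minimal sketch of the second pattern, uncertainty quantification, with a stand-in inner-loop solver used purely for illustration:

```python
# Minimal outer-loop sketch: uncertainty quantification by repeatedly running
# an inner-loop analysis under varying operating conditions.
# inner_loop() is a toy stand-in, not a real solver call.
import random
import statistics

def inner_loop(wind_speed: float) -> float:
    # Placeholder for the full CAD -> mesh -> solve -> output pipeline.
    return 100.0 + 0.8 * wind_speed + random.gauss(0, 0.5)

random.seed(0)
samples = [inner_loop(random.uniform(8.0, 12.0)) for _ in range(200)]
print(f"mean output: {statistics.mean(samples):.2f}, "
      f"std dev: {statistics.stdev(samples):.2f}")
```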

Luminary is building a multi-physics platform in which more than one discipline can be coupled into a single simulation. For example, it has created a coupled fluid-thermal solver that lets engineers explore tradeoffs between the fluid and thermal disciplines. Because such coupled simulations run very fast, parameters in the fluid and thermal models can be varied to understand those tradeoffs. Down the road, the company intends to enhance its product portfolio with additional physics (and their combinations) to empower engineers to understand how performance in any one discipline affects performance at the system level.
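
One common way to couple two single-discipline solvers is a partitioned iteration that passes boundary data back and forth until the coupled solution stops changing. The sketch below shows that pattern with toy stand-ins for the fluid and thermal solvers; the article does not describe Luminary's actual coupling scheme:

```python
# Toy partitioned coupling loop between a "fluid" and a "thermal" solver.
# Both solvers are stand-in functions; this illustrates the iteration pattern
# only and does not describe Luminary's coupled solver.

def fluid_solver(wall_temperature: float) -> float:
    """Return the heat flux into the wall for a given wall temperature."""
    return 1000.0 - 2.0 * wall_temperature

def thermal_solver(heat_flux: float) -> float:
    """Return the wall temperature produced by a given heat flux."""
    return 300.0 + 0.05 * heat_flux

wall_temp = 300.0
for iteration in range(50):
    flux = fluid_solver(wall_temp)
    new_temp = thermal_solver(flux)
    if abs(new_temp - wall_temp) < 1e-6:   # coupled solution has converged
        break
    wall_temp = new_temp

print(f"converged after {iteration} iterations: T={wall_temp:.2f}, q={flux:.2f}")
```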

Alonso says: 

While digital twins would be a future product use case for us, at the moment, we are enabling the user to leverage our SDK to implement such capabilities via integration with external modeling tools and their own models.

One of the hottest areas in the simulation community is finding ways to use simulation results and experimental data to train physics-based AI/ML models. These can sometimes run thousands or even millions of times faster than traditional solvers, enabling quick analysis across design variations and different physical domains. However, once a team has narrowed down a design, it must be double-checked against conventional models.
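
As a rough sketch of the idea: fit a cheap regression model to a handful of high-fidelity results, then evaluate it almost instantly across many design candidates. The data and model below are toy placeholders; real surrogates are far more sophisticated:

```python
# Toy surrogate model: fit a cheap polynomial to a few "simulation" results,
# then evaluate it almost instantly across thousands of design candidates.
# The data and the model are illustrative placeholders only.
import numpy as np

# Pretend these came from a handful of expensive high-fidelity runs.
angles = np.array([0.0, 2.0, 4.0, 6.0, 8.0])          # design input (degrees)
drag = np.array([0.030, 0.026, 0.030, 0.042, 0.062])  # output of interest

# Quadratic surrogate fit to the tabular results.
surrogate = np.poly1d(np.polyfit(angles, drag, deg=2))

# Sweep thousands of candidates in microseconds instead of GPU-hours,
# then confirm the chosen design with a conventional high-fidelity run.
candidates = np.linspace(0.0, 8.0, 5000)
best = candidates[np.argmin(surrogate(candidates))]
print(f"surrogate predicts minimum drag near {best:.2f} degrees")
```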

Looking further ahead, Alonso says the company is also exploring ways to automate the process by improving data management for ensembles of simulations that combine individual and connected components.

My take

Until recently, the roster of leading database giants was fixed: Oracle, IBM, and the various native cloud databases from Microsoft, AWS, and Google. Many of these supported SaaS offerings with various usage-based pricing schemes. Then Snowflake exploded on the scene with a data-as-a-service offering that combined a new take on database architecture with a genuinely consumption-based pricing model. Snowflake now has a $51 billion market capitalization and $2 billion in annual revenue, double that of 2021.

Today, the leading Product Lifecycle Management (PLM) and simulation vendors are migrating some of their extensive portfolios to the cloud with new SaaS business models. Luminary executives argue these legacy vendors are not taking advantage of recent innovations in GPUs or of fairer and more transparent pricing models. If that is true, Luminary's approach could have as transformative an impact on simulation and digital twins as Snowflake has had on databases.

It's also important to note that Luminary is still early in its journey. It is just starting with simulating fluids and heat. Broader success will require extending this approach to other physics domains to help build out digital twins, which are applicable across all aspects of product development. In addition, evolving beyond multi-physics modeling to digital twins will require incorporating IoT data from operating products, as well as business considerations around cost and supply chain tradeoffs, into interconnected loops of simulation workflows.
