How memory management is key to scaling digital twins in the cloud

George Lawton, March 17, 2023
Memory management matters - here are a few reasons why.


Scientists have been building supercomputers for simulating climate change for decades on special-purpose machines using specially crafted algorithms. Today, powerful cloud computers are growing in compute and raw memory capacity for running industrial simulations. However, some consideration must be given to how this memory is managed to get all these different models to work together.

Memory issues may not be the first thing that comes to mind as enterprises and researchers build ever-larger digital twins, but they could become more significant as teams push the limits of larger models for adaptive planning scenarios like climate resiliency or building better products. The big challenge comes with predictive and prescriptive analytics designed to tease apart the knock-on effects of climate change on businesses and regions.

Building more accurate models means increasing the resolution and types of data. But this can also create hiccups that stall the models required to test various scenarios, which matters when running dozens or even hundreds of models to tease out the impact of multiple strategies or assumptions. Many of the largest models today, built in programming languages like C, require a lot of hand-tuning to free up memory. Meanwhile, programming languages like Java, with ambitious memory management capabilities, could be vital in building more extensive and flexible digital twins.
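To illustrate the contrast, here is a minimal sketch of a simulation loop in Java. The grid sizes and class names are illustrative assumptions, not code from any product mentioned here: in C, each discarded grid would need an explicit free() call, while in Java the garbage collector reclaims unreachable grids on its own.

```java
// Illustrative sketch: automatic memory management in a simulation loop.
public class AutoMemoryDemo {
    public static void main(String[] args) {
        double[][] grid = null;
        for (int step = 0; step < 5; step++) {
            // Allocate a fresh ~32 MB grid for this simulation step.
            grid = new double[2000][2000];
            grid[0][0] = step; // stand-in for real simulation work
            // No explicit free: the previous step's grid is now
            // unreachable and will be collected automatically.
        }
        Runtime rt = Runtime.getRuntime();
        System.out.printf("Heap in use: %d MB%n",
                (rt.totalMemory() - rt.freeMemory()) >> 20);
    }
}
```

The convenience has a cost: at very large heap sizes, the collector itself can become the bottleneck, which is the problem discussed below.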

Planning for climate resiliency

Maarten Kuiper, Director of Water International at Darieus, a civil engineering firm based in the Netherlands, has been developing ever larger digital twins for planners, farmers, citizens, and businesses planning for climate change. In some respects, the Netherlands has been on the front lines of climate change for decades, with ambitious efforts to protect low-lying lands from rising seas.

These days, Kuiper is helping plan against new combinations of floods and droughts. During a flood, it may be tempting to try to run all the water out to sea as quickly as possible, but then groundwater loses a valuable buffer against salt water running in. He was an early adopter of digital twin simulation tools from Tygron that allowed him to combine and overlay data sets about land elevation, hydrological conditions, land values, and demographic conditions.

The software also makes mixing and matching models from different sources easier. For example, he finds the latest tree models do a better job at modeling a tree’s ability to suck up water, its impact on nearby structures, and how it is affected by wind and elevation. Kuiper says:

Many people look at trees from different angles. You need to bring all those people together to make better decisions. With water security and climate change, we must bring citizens, governments, and businesses together.

Digital twin frameworks make it easier to bring in new data sets, models, and visualizations for different use cases. A business might want to see how flooding or, conversely, lands subsiding might impact shipping routes, compromise the integrity of facilities, or affect supply chains. For example, the Port of Rotterdam used the same software to help plan a massive port expansion. This allowed them to align investment in the expansion with expected returns to guide profitable growth.

A big challenge is bringing more data to bear on better predictions and recommendations for planners. Kuiper explains:

We were early adopters. It started with a great visualization. But then we also need calculations for all kinds of simulations in their own domain. For example, we might need to calculate groundwater levels when the rain falls or what happens with a heat event. We needed software that could combine all those simulations in real time since the results are interconnected. This has helped us integrate analysis with all kinds of stakeholders who might be looking at something from different angles. It was also important to have information quickly in case of a disaster.

For example, in the wake of a flood, adding a relatively small earth bank in the right place can help adapt much better than a larger change elsewhere. A fast digital twin allows them to calculate all sorts of scenarios before acting in the real world. It also allows them to evaluate dynamic actions.

The memory bottleneck

These larger digital twins would not have been possible without better memory management. Maxim Knepfle, CTO of Tygron, started working on the platform shortly after high school. He adopted the Java programming language to strike the right balance between development speed and performance. But he started running into long pauses as these digital worlds grew. Past a certain point, the simulations would pause for an extended period, which kept the simulations small or coarse. He had to keep each grid cell about twenty to thirty meters on a side, which also limited the accuracy and precision of the models. Knepfle says:

In those large data sets, the normal Java virtual machine would freeze for about two or three minutes, and your entire application would freeze.

While at the JavaOne conference, he stumbled across Azul, which was doing cutting-edge work on building more performant garbage collection into the Java runtime. He tried the new runtime, which cut the pauses from several minutes to several milliseconds. This enabled his team to scale the latest models past twenty terabytes to support grids as small as twenty-five cm on a side with over ten billion cells.
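The pauses Knepfle describes can be observed from inside the JVM itself. The sketch below is a hypothetical illustration, not Tygron's code: it churns through short-lived allocations to force collections, then reads each collector's cumulative pause-related time through the standard GarbageCollectorMXBean API. On very large heaps, pause-sensitive deployments typically switch to a low-pause collector (such as Azul's C4 or OpenJDK's ZGC, enabled with -XX:+UseZGC) rather than tuning around these numbers by hand.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.ArrayList;
import java.util.List;

public class GcPressureDemo {
    public static void main(String[] args) {
        // Churn through short-lived "tiles" to trigger collections,
        // loosely mimicking a simulation rebuilding its grid data.
        List<byte[]> tiles = new ArrayList<>();
        for (int i = 0; i < 2000; i++) {
            tiles.add(new byte[1 << 18]); // 256 KB tile
            if (tiles.size() > 256) {
                tiles.subList(0, 128).clear(); // drop the oldest tiles
            }
        }
        // Report how much time each collector has consumed so far.
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(),
                    gc.getCollectionTime());
        }
    }
}
```

Watching these counters grow as the data set grows is a simple way to see why a multi-terabyte heap pushes a stop-the-world collector into multi-minute freezes.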

Even with the explosion in new languages, Knepfle is still a big fan of Java, since in his experience it is faster than Rust or Python and automates the underlying memory resources better than languages like C++. This matters in building better digital twins, since his team wants to be able to bring in the latest algorithms and have them run quickly, even as the data sets become big.

Scott Sellers, CEO and co-founder of Azul, says that memory sizes available to work with have been growing thanks to cheaper memory and improvements in x86 architectures that give programmers access to more memory:

We would not have been able to do it without Moore’s Law allowing more memory to be put into boxes and without help from Intel and AMD adding hooks in the microprocessor to tap into terabytes of memory. Five years from now, we will talk about maybe half a petabyte of memory in a physical box.  

This is taking what used to be done on a supercomputer and enabling it in the cloud, which makes a lot of sense. Instead of building these $300 million data centers and populating them with expensive servers, we can replace them with lower-cost servers in the cloud.

My take

The rapid advances in GPUs are paving the way for building ever-larger digital twins for industrial design, planning, predictive analytics, and prescriptive analytics. Increasingly, these models will require running calculations across different types of data in parallel. For example, engineering and design teams are turning to multi-physics simulations that help identify the impact of design changes on mechanical, electrical, and thermal properties.

Other realms might similarly combine different kinds of economic, weather, demographic, and geologic models to adapt supply chains, plan expansions, or mitigate climate risks. Exploring multiple scenarios could require running lots of variations. Developers will need to consider the impact of memory allocation when creating these larger models at scale.
