Grieves and Vickers - digital twin challenges and opportunities
- Summary:
- After learning about the history of digital twins, what does the future hold?
I recently had the good fortune to sit down with Dr. Michael Grieves and John Vickers, who shaped the current landscape of digital twins.
Grieves is executive director and chief scientist for the Digital Twin Institute. Vickers is a NASA principal technologist who has been rolling out and maintaining cutting-edge spacecraft and scientific systems for the last thirty years.
As diginomica recently covered in the History of Digital Twins, Grieves' insight lay in formalizing the process of connecting digital representations of physical things to the engineering, manufacture, maintenance, and disposal of those things across their lifecycle. This approach came to be known as agile product lifecycle management (PLM).
Building a digital representation connected via digital threads to various backend systems is getting easier thanks to improvements in compute, cloud integration, and 3D graphics. Innovators like Tesla have used agile PLM processes and digital twins to build cars faster and more profitably than competitors. Yet one of the biggest challenges to widespread adoption of digital twins is cultural.
Organizational aspects
Although digital twins are a deeply technical undertaking, they also involve sharing more information across different parts of a company. This can raise political issues related to funding, staffing, and prestige within large organizations. However, bridging these silos can bring tremendous value, so it is important to address the human factors when rolling out digital twins.
Vickers says:
It's like the debate between theoretical physics and experimental physics. It's just been around for hundreds of years. And it's a tough cultural paradigm to break. I'm a believer in the Thomas Kuhn Structure of Scientific Revolutions, where these transformative ideas don't come from day-to-day and gradual experimentation and data accumulation. They sort of happen all at once. And that's what's happened to us with the digital twin.
I don’t mean it happens all at once in a matter of days. Just look at how the Internet and computing progressed. They built up gradually, and then, over a short amount of time, there was a tremendous revolution. I think that's what's happening to us with the digital twin. We're right at the precipice of this revolution and change.
Grieves adds:
A lot of the science gets done at the margins, and it's puzzle-solving. And when you come up with a real paradigm shift, I mean, that's where you really get a major impact in what happens, and that’s what we're doing. The science fiction writer Arthur C. Clarke famously said, ‘Any sufficiently advanced technology is indistinguishable from magic.’ And senior management doesn't like magic. And to them, that's what it is. And so, you've got a lot of what I call retirementitis. ‘I'm five years from retirement, please don't bring me a project like this, because it could be a disaster because I don't understand it.’ And this sort of weeds out over time.
It is also important to note that this paradigm shift needs to bring in digital natives, who might be overly focused on the social media aspects of VR and the metaverse. Grieves does not believe that is where the most value from these new tools will come. The paradigm shift will need to address both edges of the divide.
Low-hanging fruit
For now, it is more important for enterprises to focus on the low-hanging fruit, where they can demonstrate meaningful value. This will not start with perfect digital twins that cost $500 million. Current efforts need to focus on use cases where there is sufficient compute to deliver real value. Down the road, those wins will help fund initiatives that can take advantage of the incredible improvements in compute infrastructure.
Grieves says:
The problem you don’t want is to overhype this and it fails. Somebody recently told me they heard that 90% of all digital twin projects fail. I am like, ‘Where would you come up with that statistic?’ I think that is the issue that you need to overcome. Yes, some projects could be a failure, but look at what companies are getting from having digital twin capabilities.
Vickers observed that various industries struggle to understand the data they already have. For example, when you get a medical test, healthcare providers struggle to make sense of the firehose of empirical information that could and should be at their fingertips. He was recently at a doctor's appointment with his father, and the doctor struggled to tease apart the complications that could arise across multiple prescriptions. Vickers asked the general practitioner:
‘Would it be of any value to you to have a computer program where you just put these medicines in and it quickly tells you about any interactions?’ And that probably would be a good thing. But I don't think they have it.
While the medical industry still struggles to catch up with this trend, NASA is making some interesting progress. Vickers is working with a team to build a big rocket to take people back to the moon. They have had one test flight and are planning another next year with the biggest launch vehicle ever built, promising a resurgence in spaceflight back to the moon. After careful analysis across multiple systems using digital twins, the team built a prototype of a 200-foot-tall liquid hydrogen tank and tested it to failure. NASA had spent hundreds of millions of dollars on the prototype but wanted to understand its ultimate capability and failure modes.
As it turned out, the digital twin predicted the point of failure to within three percent, at a cost of a few tens of thousands of dollars rather than the hundreds of millions spent on the physical article. Even so, the existing culture still mandated the physical test. Vickers observes:
We are really good at this. That’s why I keep pushing us to do more and more digitally and virtually and then decide how much to experiment with. We do it in reverse today where we do the experiment and then decide how good we are analytically.
The future
So, how will digital twins evolve in the near future? Grieves believes the biggest opportunities lie in improving interoperability across the different silos where digital twins live:
We are going to see platforms so that you don’t have to do all the connections to have your digital twins. For example, I have an airplane manufacturer with a fuel tank, fuselage and wings, and they all have their own digital twins. You have got to bring all those together into a composite digital twin in order to get real value. And I think we have got some work to do there.
I think the next step is you are going to see these integrated digital twins. And then you are going to see companies provide the environment because it is not enough to have your own digital twin of how your thing performs. How does it perform in the environment you want? With airplanes, you need weather, clouds, and things like that. So, you will see companies that do digital twins of environments, so we can do that. It's like in the book Snow Crash, where there was one metaverse. But we will see a bunch of metaverses that allow you to pick the environments you want and test your products on that.
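To make the composite twin idea a little more concrete, here is a minimal sketch in Python. It is not based on any vendor's platform; the classes, fields, and figures are illustrative assumptions. But it shows the basic pattern Grieves describes: per-component twins composed into a product-level twin and evaluated against a separate environment model.

```python
from dataclasses import dataclass

@dataclass
class ComponentTwin:
    """Hypothetical digital twin of one component (wing, fuselage, fuel tank)."""
    name: str
    mass_kg: float
    drag_coefficient: float
    reference_area_m2: float

    def drag_force(self, airspeed_ms: float, air_density: float) -> float:
        # Classic drag equation: F = 0.5 * rho * v^2 * Cd * A
        return 0.5 * air_density * airspeed_ms ** 2 * self.drag_coefficient * self.reference_area_m2


@dataclass
class EnvironmentTwin:
    """Hypothetical twin of the operating environment (weather, altitude, wind)."""
    air_density_kg_m3: float
    headwind_ms: float


class CompositeTwin:
    """Aggregates component twins so the whole product can be evaluated together."""

    def __init__(self, components: list[ComponentTwin]):
        self.components = components

    def total_mass(self) -> float:
        return sum(c.mass_kg for c in self.components)

    def total_drag(self, airspeed_ms: float, env: EnvironmentTwin) -> float:
        effective_speed = airspeed_ms + env.headwind_ms
        return sum(c.drag_force(effective_speed, env.air_density_kg_m3) for c in self.components)


# Compose per-component twins, then evaluate the whole product in a chosen environment twin.
aircraft = CompositeTwin([
    ComponentTwin("fuselage", 12_000, 0.30, 12.0),
    ComponentTwin("wings", 8_000, 0.12, 9.0),
    ComponentTwin("fuel_tank", 3_000, 0.05, 2.5),
])
stormy_day = EnvironmentTwin(air_density_kg_m3=1.225, headwind_ms=15.0)
print(aircraft.total_mass(), round(aircraft.total_drag(airspeed_ms=230.0, env=stormy_day)))
```

The point of the pattern is that the composite twin only needs a common interface to each component twin, and the environment is swappable, which is exactly the kind of plug-and-play Grieves expects platforms to provide.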
But when it comes to the rising popularity of ChatGPT and other large language models, Grieves is a bit more cautious. He says:
I always scratch my head when I hear large language models talked about like they are supposed to be an end-all-be-all kind of thing. Quite frankly, it’s a giant correlation engine as far as I am concerned, but they dress it up in these magical language models and things like that. Our native world is the physical world, and we are very comfortable in it, and we have to be because we live in it.
If you look at the difference between us and nature, nature tries all possible combinations and lets the environment sort it out. We can't afford that. But if we move over to the virtual world, AI can sort of act like nature and try out all possible combinations without our computational limits. We can let that environment sort it out, so it finds the best alternatives. I view it as an opportunity to help and not as a replacement for humans. We have plenty of humans, but we need humans to make better decisions on a constant basis.
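Grieves' picture of AI acting like nature in the virtual world is essentially a search over a design space, with the digital twin standing in for the physical environment. The sketch below is a deliberately simplified illustration of that idea: the simulate_tank formulas are placeholders rather than real structural analysis, and a serious effort would use a validated simulation and a smarter optimizer than random sampling.

```python
import random

def simulate_tank(wall_thickness_mm: float, rib_count: int) -> dict:
    """Stand-in for a digital twin simulation.
    Returns illustrative mass and burst pressure; the formulas are placeholders."""
    mass = 500 + 40 * wall_thickness_mm + 15 * rib_count
    burst_pressure = 2.0 * wall_thickness_mm + 0.8 * rib_count
    return {"mass_kg": mass, "burst_pressure_bar": burst_pressure}

def explore_designs(n_candidates: int, required_pressure: float) -> dict | None:
    """Randomly sample the design space and keep the lightest design that meets the requirement."""
    best = None
    for _ in range(n_candidates):
        candidate = {
            "wall_thickness_mm": random.uniform(2.0, 12.0),
            "rib_count": random.randint(0, 20),
        }
        result = simulate_tank(**candidate)
        if result["burst_pressure_bar"] >= required_pressure:
            if best is None or result["mass_kg"] < best["mass_kg"]:
                best = {**candidate, **result}
    return best

# Let the virtual environment "sort it out" across thousands of candidate designs.
print(explore_designs(n_candidates=10_000, required_pressure=20.0))
```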
My take
I agree with Grieves and Vickers that the future of digital twins lies in figuring out better ways to interconnect existing islands of innovation and information. This is not just about technical systems; it also means breaking down organizational and process silos. It also seems like LLMs and generative AI are being treated as a magic sauce that can be applied everywhere.
But it is also important to remember the original purpose of Google’s research on transformers: to build a better translator between languages such as English and French. It just so happens that this architecture is also well suited to translating between words and other domains like code, robot instructions, and various digital representations of things.
At least on the technical side of things, data integration is a very hard problem to solve on a case-by-case basis. I think there is an opportunity for more fine-grained applications of LLMs to play a valuable role in making it easier to share data across the different structures used in engineering, manufacturing, supply chain, and finance in a meaningful way. Guard rails will certainly be required to mitigate hallucinations in the results, but this is also where digital twins can help vet these translations for accuracy and safety.
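As a rough illustration of what such guard rails might look like, the sketch below wraps a hypothetical LLM translation step (llm_translate is a stub, not a real API) with a validation pass that checks the output against the target schema plus a basic plausibility rule of the kind a digital twin could also enforce.

```python
import json

# Required fields and types in the (assumed) target supply-chain record format.
REQUIRED_FIELDS = {"part_number": str, "mass_kg": (int, float), "supplier": str}

def llm_translate(engineering_record: str) -> str:
    """Placeholder for an LLM call that rewrites an engineering record into the
    supply-chain system's JSON format. Wire up a model API of your choice here."""
    raise NotImplementedError

def validate(candidate_json: str) -> dict:
    """Guard rail: reject translations that drop fields, change types, or produce
    physically implausible values."""
    record = json.loads(candidate_json)
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            raise ValueError(f"missing field: {field}")
        if not isinstance(record[field], expected_type):
            raise ValueError(f"wrong type for field: {field}")
    if record["mass_kg"] <= 0:
        raise ValueError("mass_kg must be positive")
    return record

# Example: a translated record that passes the structural and sanity checks.
print(validate('{"part_number": "LH2-TANK-01", "mass_kg": 3000, "supplier": "Acme Aerospace"}'))
```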
Grieves kindly posted the full interview on YouTube. Please note that some of the comments in this story have been edited for clarity and brevity.