Engineers love the concept of digital twins. We build them for “unit operations” all the time: a computer model of the inputs and outputs of a processing step (called a unit) in the production of something. Almost everything you use or eat goes through these steps.
For example, something as simple as flour starts out as seed that is planted and given adequate moisture, sunlight, and nutrients to form mature grain. The grain is then harvested and processed to separate it from the chaff, at which point it is most often stored until it is transported and ground into flour, which is then packaged and shipped to store shelves.
Nowadays, with the emphasis on sustainability, the farmer will also design in waste reduction and reuse, and may in fact channel the byproducts or waste from one area into others. For example, we all eat cheese, but the byproduct whey is now incorporated into cheese foods as well as pharmaceuticals; it is no longer discharged into the environment with its associated challenges.
Efficiency and cost control demand that each step in this process be optimized, and a digital twin makes that optimization precise and measurable. Complex operations like papermaking then use these digital twins to control the paper machines, and petroleum refineries use them to continuously adjust their distillation operations.
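As a concrete illustration, here is a minimal sketch of what such a unit-operation twin can look like. Everything in it is hypothetical: MillTwin, its yield and moisture parameters, and the little proportional controller are illustrative stand-ins, not any real plant’s model or control API.

```python
from dataclasses import dataclass


@dataclass
class MillTwin:
    """Hypothetical steady-state twin of a grinding unit: feed in, flour out."""
    yield_fraction: float = 0.75   # illustrative fraction of feed recovered as flour
    moisture_loss: float = 0.02    # illustrative fraction lost to moisture in grinding

    def predict_output(self, feed_rate_kg_h: float) -> float:
        """Predicted flour output (kg/h) for a given feed rate."""
        return feed_rate_kg_h * self.yield_fraction * (1.0 - self.moisture_loss)


def steer_to_target(twin: MillTwin, target_kg_h: float,
                    feed_rate_kg_h: float, gain: float = 0.5,
                    steps: int = 20) -> float:
    """Proportional control against the twin: nudge the feed rate
    until the predicted output matches the setpoint."""
    for _ in range(steps):
        error = target_kg_h - twin.predict_output(feed_rate_kg_h)
        feed_rate_kg_h += gain * error  # move the input in proportion to the error
    return feed_rate_kg_h


twin = MillTwin()
feed = steer_to_target(twin, target_kg_h=1000.0, feed_rate_kg_h=1200.0)
print(f"feed = {feed:.1f} kg/h -> flour = {twin.predict_output(feed):.1f} kg/h")
```

The pattern, not the particulars, is the point: a model that maps inputs to outputs precisely enough that a controller can steer the real unit against it.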
These are extremely satisfying technological advances, but success here must be humbly reconciled against a truth condition: we must have the research and proof that we understand the underlying mechanisms, and repeated confirmation that when we change an input variable, we get the corresponding output, every time.
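In code, that truth condition is the kind of check you would run before trusting a twin. A minimal sketch, assuming a hypothetical unit_model like the one above: perturb one input, re-run, and confirm the same output shift every time. (Against a deterministic model this is trivially true; the real test is running the same check against the physical unit.)

```python
def unit_model(feed_rate_kg_h: float) -> float:
    """Hypothetical steady-state unit model: flour out vs. feed in."""
    return feed_rate_kg_h * 0.75 * 0.98


def repeatable(model, feed_rate_kg_h: float, delta: float,
               trials: int = 5, tol: float = 1e-9) -> bool:
    """True if the same input change always produces the same output change."""
    shifts = [model(feed_rate_kg_h + delta) - model(feed_rate_kg_h)
              for _ in range(trials)]
    return max(shifts) - min(shifts) < tol


print(repeatable(unit_model, feed_rate_kg_h=1200.0, delta=50.0))  # True
```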
I hope you can see the problem with attempting to build a digital twin for our planet’s ecosystem. You would think all our modern math would permit some level of aggregate modeling accuracy. Perhaps the following illustration will help. If you compare the predicted number and intensity of hurricanes each year with the actual storm experience over the past two decades, you get a very high level of correlation. Unfortunately, it is a negative correlation, which means that you could “bet against” those offering forecasts and be right quite often.
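For the skeptical reader, the arithmetic behind that claim is just a Pearson correlation. The sketch below shows the mechanics with made-up placeholder numbers, not real forecast or storm counts; the only point is what a negative coefficient means.

```python
from statistics import correlation  # Pearson's r, Python 3.10+

# Placeholder numbers for illustration only -- NOT real forecast or storm data.
predicted = [14, 18, 12, 20, 16, 10, 19, 13]
observed  = [17, 11, 19,  9, 13, 21, 10, 18]

r = correlation(predicted, observed)
print(f"Pearson r = {r:.2f}")  # strongly negative for these placeholders
if r < 0:
    print("Forecasts and outcomes move in opposite directions.")
```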
This should be troubling … because it indicates we do not yet have a digital representation of even aggregate storm intensity. And, by the way, if you look at the long-term trend in storm number and intensity, it is heading lower over time … but you won’t hear about that because it does not align with the political agendas dominating the digital world these days.
So, one might rightfully ask whether today’s climate modelers are looking for science to validate their models, or are instead looking for anything that confirms their desired conclusions.
Follow the money … the correlation is convincingly real.