The CORDEX initiative, as the abbreviation ‘COordinated Regional climate Downscaling Experiment’ suggests, aims to bring together the community of regional climate modellers. The initiative has the blessing of the World Climate Research Programme (WCRP).
I think the most important take-home message from the workshop is that the stakeholders and end users of climate information should not look at just one simulation from global climate models, or at just one downscaling method. This is very much in agreement with the recommendations from the IPCC Good Practice Guidance Paper. The main reason is the degree of uncertainty involved in regional climate modelling, as discussed in a previous post.
I sense that the issue of uncertainty is sometimes seen as problematic and difficult to deal with. Uncertainty does not mean that we are completely clueless – it means that we do not have accurate knowledge about absolutely every detail. Uncertainty is nothing new – we live with it every day. All scientific disciplines have to live with uncertainty too.
Moreover, we can describe and model uncertainty. Uncertainty is really a question of information about processes: whether they are understood, subject to random variations (known as ‘noise’, or stochastic processes), or affected by systematic model shortcomings (biases).
Theoretical physics has a long tradition of dealing with uncertainty – take quantum physics, for example. This has led to one anecdote in which – according to Wikiquote – Albert Einstein is paraphrased as saying “God does not play dice with the universe”.
Uncertainty is often modelled in statistics through stochastic models. One common phrase in statistical texts is “take a random variable X…”. I believe that statisticians can contribute more to the climate sciences through a better description of the uncertainties, in addition to better calibration of statistical models. And I’m not alone: a European COST Action named ‘VALUE’, spearheaded by Douglas Maraun, intends to involve statisticians for improved climate modelling.
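The idea that uncertainty can be described rather than merely lamented can be made concrete with a toy sketch. Here is a minimal, purely illustrative stochastic model (not from any of the groups mentioned above, and with made-up numbers) in which an ‘observed’ quantity is treated as a random variable X = signal + bias + noise, and evaluation against a known truth separates the two kinds of uncertainty:

```python
import random
import statistics

random.seed(42)

# Hypothetical values for illustration only
true_signal = 15.0   # 'true' regional mean temperature (deg C)
model_bias = -1.2    # systematic model shortcoming (bias)
noise_sd = 0.8       # standard deviation of random variations ('noise')

# Draw many realisations of the random variable X
samples = [true_signal + model_bias + random.gauss(0.0, noise_sd)
           for _ in range(10_000)]

# Evaluation against the known truth recovers the two components:
estimated_bias = statistics.mean(samples) - true_signal
estimated_noise = statistics.stdev(samples)

print(f"estimated bias:  {estimated_bias:+.2f}")
print(f"estimated noise: {estimated_noise:.2f}")
```

The point of the sketch is simply that ‘uncertain’ does not mean ‘unknowable’: given enough data and a model of how the randomness enters, both the systematic error and the spread of the noise can be quantified.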
We can get much information about all these aspects from our models and from real observations (empirical data). The key words are evaluation and testing. Hence, it is so important to combine models based on our understanding of the world with empirical data.
Another point is that general circulation models have our understanding of the relevant processes encoded into lines of computer code, whereas empirical-statistical models capture all relevant processes simply because these are embedded in the data itself. The problem with statistical models is that one does not know whether they actually capture the real connections, or whether a good match is just a coincidence.
The beauty of computer modelling is when real features are predicted, and the combination of empirical-statistical models with physics-based models enhances our confidence in actual predictive skill.
One thing is fairly clear from the CORDEX workshop – it has very strong relevance to the global climate services called for at the World Climate Conference-3 (WCC-3), as well as to the IAV community – ‘IAV’ being ‘Impacts, Adaptation and Vulnerability’.
Global warming is bound to affect a great deal of the Earth’s population. For many people, scientific facts do not really speak loud and clear. What does model bias mean, and what implications does it have for my risk assessment? Scientific facts must be complemented with narratives, or story lines that visualise possible outcomes. For global climate services, science and infrastructure are not enough – we also need to interpret the information into knowledge, based on science. I wonder whether it is assumed that this communication work will happen effortlessly, or whether people see that it will be a herculean task. Maybe we will get the answer at the open science conference in Denver, 24-28 October this year.