There is a growing need for local climate information in order to update our understanding of risks connected to the changing weather and prepare for new challenges. This need has been an important motivation behind the World Meteorological Organisation’s (WMO) Global Framework for Climate Services (GFCS).
Much work has been carried out over the years to meet these needs, but I'm not convinced that people always get the whole story.
A background on downscaling
The starting point is that global climate models (GCMs) are not designed to provide detailed information on local climate characteristics, but they are nevertheless able to reproduce the large-scale phenomena reasonably well. Each model is associated with a minimum skilful scale, below which its results should not be taken at face value.
The local climate is also connected with the large-scale conditions in the region that the models are able to reproduce well, as well as being influenced by local geographical factors.
The dependency of the local climate on the large-scale situation implies that it is possible to downscale temperature and precipitation to a local scale, based on a description of the large-scale conditions and the local geographical effects.
There has also been a fair amount of activity connected to downscaling climate information, notably through the international COordinated Downscaling EXperiment (CORDEX) under the World Climate Research Programme (WCRP).
The main emphasis in CORDEX has been on running regional climate models (RCMs) over a limited region, using results from GCMs on their boundaries, but with a finer grid mesh than those of the GCMs. The grid size of the GCMs is typically of the order of 100 km, whereas for the RCMs it tends to be 10-50 km (some go down to a few km).
There are also some activities following a different approach to running RCMs: empirical-statistical downscaling (ESD), where statistical downscaling models are calibrated on observational data. This approach has much in common with artificial intelligence (AI).
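As a minimal sketch of what "calibrating on observational data" can mean, consider a linear model relating a local station series to a large-scale predictor. The data below are synthetic stand-ins (the coefficient 0.8 and the noise level are assumptions for illustration), not output from any real reanalysis or GCM:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a large-scale predictor (e.g. a regional mean
# temperature from a reanalysis) and a co-varying local station series.
large_scale = rng.normal(0.0, 1.0, 500)
local = 0.8 * large_scale + rng.normal(0.0, 0.3, 500)

# Calibrate a simple linear ESD model: local = a * large_scale + b
a, b = np.polyfit(large_scale, local, 1)

# Apply the calibrated model to a GCM-simulated predictor (here synthetic,
# with a shifted mean standing in for a warmer simulated climate)
gcm_predictor = rng.normal(0.5, 1.0, 500)
downscaled = a * gcm_predictor + b

print(round(a, 2))  # close to the true coefficient 0.8
```

Real ESD models are of course more elaborate (multivariate predictors, cross-validation, and so on), but the two-step structure — calibrate on observations, then apply to model output — is the same.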
It is important to use both RCMs and ESD in downscaling since a combination of the two can say something about the confidence we should expect in the results. The reason is that the two types of downscaling make use of independent information sources, where the former derives an answer based on coded equations representing dynamics and thermodynamics, whereas the latter utilises information hidden in empirical data.
ESD is also important because it offers a computationally cheap tool for downscaling, which makes it suitable to downscale large multi-model ensembles such as the Coupled Model Intercomparison Project (CMIP) experiments presented in the IPCC reports.
Three different ESD approaches
There are three approaches in ESD. One is known as 'Perfect Prognosis' (PP), which uses pure observations for calibrating the models; a second is referred to as 'Model Output Statistics' (MOS), which uses model output to represent the large-scale predictors and observations to represent the local conditions during the calibration stage. I'll return to the third approach later on.
Most of the work on ESD so far has tried to replicate results on a similar basis as the RCMs, downscaling the local temperature or precipitation on a day-by-day basis, similar to the output provided by the RCMs. I refer to this approach as 'downscaling weather', as in the Oxford Research Encyclopedia. This practice has also framed networks and projects such as the European COST-VALUE project and the experiment protocol of CORDEX-ESD.
Differences over the best downscaling approach
But, is downscaling weather the optimal way?
I am not convinced.
For starters, this approach often requires that the predictors comprise a set of different variables describing the large-scale conditions: a mix of mean sea-level pressure, temperature near the surface and at various levels in the atmosphere (e.g. 500 hPa, 700 hPa and 850 hPa), specific humidity at various heights, and geopotential height at some levels.
It is important to keep in mind that once the statistical models have been calibrated with reanalyses as predictors, the reanalyses are subsequently replaced by corresponding data simulated by the GCMs in order to make projections.
I doubt that the GCMs are able to reproduce the covariance structure between all the variables in a typical mix of predictors with sufficient accuracy to give reliable results.
Information useful for climate adaptation
On the other hand, what type of information do people really need?
Very few decision-makers I have met need a time series, and those who ask for time series tend to be impact researchers who use them as input to an impact model. Daily time series are, in other words, intermediate results.
In the end, the impact researchers too usually produce some information about the risk or probability of some event taking place.
In many cases, probability density functions (pdfs) will do, especially for mapping risks; what we really need is to predict how the pdfs will change in the future, as shown in Figure 1.
I have spent enough time with statisticians and mathematicians to realise that it may be possible to predict the pdfs directly, either for temperature or precipitation (e.g. Figure 1), or for the output of the impact models.
Figure 1. The most common variables for the local climate are daily temperature and 24-hour rainfall. The right panel shows a normal distribution that can represent temperature anomalies, and the left panel an exponential distribution that can represent wet-day 24-hour rainfall (e.g. days with more than 1 mm). If the objective is to predict the change in these pdfs (or cumulative probability functions), then it does not have to involve a long chain of calculations. Here μ is the mean and σ is the standard deviation.
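Estimating the two pdfs in Figure 1 from data is straightforward, which is part of the appeal. Here is a sketch with synthetic daily series (the parameter values are assumptions for illustration), assuming a normal fit for temperature anomalies and an exponential fit for wet-day rainfall:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily data: temperature anomalies (deg C) and rainfall (mm/day)
temp_anom = rng.normal(0.0, 4.0, 10000)
rain = rng.exponential(6.0, 10000)

# Normal pdf for temperature anomalies: the parameters are mu and sigma
mu, sigma = temp_anom.mean(), temp_anom.std()

# Exponential pdf for wet-day rainfall: keep days with more than 1 mm;
# the single parameter is the wet-day mean
wet = rain[rain > 1.0]
wet_day_mean = wet.mean()

# A risk estimate follows directly from the fitted pdf: the probability
# that wet-day rainfall exceeds 20 mm under the fitted exponential
p_exceed_20 = np.exp(-20.0 / wet_day_mean)

print(round(mu, 2), round(sigma, 2), round(wet_day_mean, 2))
```

Predicting a change in the local climate then amounts to predicting how μ, σ and the wet-day mean shift, rather than reproducing every individual day.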
So rather than downscaling weather, like the majority of scholars engaged in ESD, it makes sense to downscale pdfs, an approach I refer to as 'downscaling climate'.
On local scales, climate can for all intents and purposes be defined as the pdfs describing variables such as daily temperature and precipitation as shown in Figure 1.
The downscaling climate approach has many advantages:
It allows using mean seasonal values as predictors to represent the large-scale conditions, and these are more readily available from GCM simulations.
It requires less computational resources and is faster.
Statistical properties are often more predictable than single outcomes.
The seasonal mean values are closer to being normally distributed according to the central limit theorem.
Experience indicates that only one variable is typically needed as a predictor, as opposed to a set of many.
When the local predictands describe the parameters of the seasonal pdfs, they also tend to be approximately normally distributed because they are aggregated over samples, which makes principal component analysis (PCA) an efficient and suitable representation.
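The points above can be illustrated with a toy example. Assume (purely for illustration; all data and coefficients below are synthetic) that the predictand is a seasonal pdf parameter at several sites, that a single seasonal-mean large-scale temperature serves as the predictor, and that PCA represents the sites jointly:

```python
import numpy as np

rng = np.random.default_rng(2)
n_seasons, n_sites = 60, 5

# Single predictor: seasonal-mean large-scale temperature (one variable)
T_large = rng.normal(0.0, 1.0, n_seasons)

# Predictands: a seasonal pdf parameter at each site (e.g. the seasonal
# mean), co-varying with the large-scale predictor plus local noise
weights = np.linspace(0.5, 1.0, n_sites)
Y = T_large[:, None] * weights[None, :] + rng.normal(0.0, 0.2, (n_seasons, n_sites))

# PCA (via SVD) of the predictand matrix: the shared signal ends up in
# the leading mode
Ya = Y - Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Ya, full_matrices=False)
var_frac = s**2 / np.sum(s**2)

# Downscale the leading principal component with one linear model, then
# map back to all sites at once through the PCA spatial pattern
pc1 = U[:, 0] * s[0]
a, b = np.polyfit(T_large, pc1, 1)
Y_hat = np.outer(a * T_large + b, Vt[0]) + Y.mean(axis=0)

print(round(var_frac[0], 2))  # leading mode carries most of the variance
```

One model calibrated on one predictor serves all the sites at once, and because every site is reconstructed from the same spatial pattern, spatial consistency comes for free.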
The third approach
The downscaling climate approach is also ideal for using common EOFs (see previous post) to represent the large-scale predictors. Of course, using common EOFs means that you no longer use the PP approach, but a PP-MOS hybrid. This is the third approach, in addition to PP and MOS described above.
There is some contention in the ESD community about how to classify these approaches, but the calibration of statistical models using common EOFs as a framework involves a mix of observations (reanalysis) and GCM results to represent the large-scale conditions.
Hence it is consistent with the definition of neither PP nor MOS.
To me, it seems to be a no-brainer to downscale parameters for the pdfs and use common EOFs. It is a bit curious that so few others in the ESD community use these methods.
The use of common EOFs greatly reduces the problem of matching predictors from reanalyses and GCMs, and it enables an evaluation of the GCM results that is often lacking.
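A sketch of the common-EOF idea, with synthetic reanalysis and GCM anomaly fields on a shared grid (the fields and their amplitudes are assumptions for illustration): the two datasets are concatenated along the time axis before computing the EOFs, so both are expressed in the same set of spatial patterns and become directly comparable:

```python
import numpy as np

rng = np.random.default_rng(3)
nt, nx = 100, 40

# A shared spatial pattern with dataset-specific time coefficients; the
# synthetic "GCM" is given a slightly larger amplitude than "reanalysis"
pattern = np.sin(np.linspace(0, np.pi, nx))
reanalysis = np.outer(rng.normal(0, 1.0, nt), pattern) + rng.normal(0, 0.1, (nt, nx))
gcm = np.outer(rng.normal(0, 1.2, nt), pattern) + rng.normal(0, 0.1, (nt, nx))

# Common EOFs: concatenate along time, remove the joint mean, take the SVD
combined = np.concatenate([reanalysis, gcm], axis=0)
anom = combined - combined.mean(axis=0)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)

# The principal components split back into a reanalysis part and a GCM
# part, both expressed in the SAME spatial basis
pc = U * s
pc_reanalysis, pc_gcm = pc[:nt], pc[nt:]

# One simple evaluation: compare the variance of the leading PC in the
# GCM against the reanalysis
ratio = pc_gcm[:, 0].var() / pc_reanalysis[:, 0].var()
print(round(ratio, 2))
```

Because the patterns are shared, a mismatch between the two PC segments (in variance, trend, or distribution) points directly at a GCM deficiency — which is exactly the kind of evaluation that is often missing.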
Furthermore, using PCA to represent a set of predictands within a given region also appears to be superior to downscaling the sites one by one, and it ensures spatial consistency, which is often a problem.
A complete picture is important for climate services
It seems that the ESD community is split, and the strategy for downscaling climate has been ignored or neglected. For instance, it was not appreciated in the European COST-VALUE project. The COST-VALUE project is sometimes presented as an all-encompassing project for ESD, but I don’t agree with that view.
I participated in COST-VALUE, but felt that many decisions were enforced by leaders with strong opinions and an uncompromising attitude. A number of suggestions were brushed aside, and the project never accommodated the downscaling climate strategy, nor did it include evaluation aspects based on common EOFs.
Despite this limitation, the COST-VALUE project can in many ways be regarded as a successful effort that produced a great deal of good results. However, it doesn’t provide the whole story when it comes to ESD.
I have repeatedly come across incomplete accounts of the development of ESD, even in recent papers, where the strategy of downscaling climate has been ignored. This common omission may lead new generations of scholars in the downscaling community to miss part of the story.
If the work on downscaling is to carry on, then it is also important to account for and acknowledge all the related work done to make the best use of our knowledge for climate services and climate change adaptation. This is particularly relevant these days, as a new IPCC report is being drafted on climate change on global and regional scales.