The IPCC model simulation archive

The work involved in these experiments was significant and unfunded. At GISS, the simulations took about a year to do. That includes a few partial do-overs to fix small problems (like an inadvertent mis-specification of the ozone depletion trend), the processing of the data, the transfer to PCMDI and the ongoing checking to make sure that the data was what it was supposed to be. The amount of data was so large – about a dozen different experiments, a few ensemble members for most experiments, large amounts of high-frequency data – that transferring it to PCMDI over the internet would have taken years. Thus, all the data was shipped on terabyte hard drives.

Once the data was available from all the modelling groups (all in consistent netCDF files with standardised names and formatting), a few groups were given some seed money from NSF/NOAA/NASA to get cracking on various important comparisons. However, the number of people who registered to use the data (more than 1000) far exceeded the number who were actually being paid to look at it. Although some of those looking at the data were from the modelling groups, the vast majority were from the wider academic community, and for many it was the first time they’d had direct access to raw GCM output.

With that influx of new talent, many innovative diagnostics were examined – many, indeed, that hadn’t been looked at by the modelling groups themselves, even internally. It is possibly under-appreciated that the number of possible model-data comparisons far exceeds the capacity of any one modelling center to examine them.

The advantage of the database is the ability to address a number of different kinds of uncertainty: not everything, of course, but certainly more than was possible before. Specifically, the uncertainty in distinguishing forced from unforced variability, and the uncertainty due to model imperfections.

When comparing climate models to reality the first problem to confront is the ‘weather’, defined loosely as the unforced variability (that exists on multiple timescales). Any particular realisation of a climate model simulation, say of the 20th Century, will have a different sequence of weather – that is, the weather pattern on Jan 31, 1967 in one realisation will be uncorrelated to the weather pattern on Jan 31, 1967 in another realisation, even though each run has the same climate forcing (increases in greenhouse gases, volcanoes etc.). There is no expectation that the weather in any one model will be correlated to that in the real world either. So any comparison of climate models and data needs to estimate the amount of change that is due to the weather and the amount related to the forcing. In the real world, that is difficult because there is certainly a degree of unforced variability even at decadal scales (and possibly longer). However, in the model archive it is relatively easy to distinguish.
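To make the idea of ‘uncorrelated weather’ concrete, here is a toy sketch using synthetic numbers as a stand-in for model output (the trend, noise level and time axis are all hypothetical, chosen only for illustration): two realisations share the same forced trend, but once that shared part is removed, what remains in one run tells you nothing about the other.

```python
import numpy as np

# Toy stand-in for two realisations of the same forced experiment: both share
# the same forced trend, but each gets its own, independent 'weather'.
rng = np.random.default_rng(0)
n_months = 12 * 100                       # a century of monthly anomalies
forced = np.linspace(0.0, 1.0, n_months)  # shared forced warming (hypothetical)
run1 = forced + rng.normal(0.0, 0.5, n_months)
run2 = forced + rng.normal(0.0, 0.5, n_months)

# Subtract an estimate of the forced part (here the known trend; with a real
# ensemble one would subtract the ensemble mean) to isolate the weather.
weather1, weather2 = run1 - forced, run2 - forced

print(np.corrcoef(weather1, weather2)[0, 1])  # ~0: the weather is uncorrelated
print(np.corrcoef(run1, run2)[0, 1])          # noticeably positive: shared forcing
```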

The standard trick is to look at the ensemble of model runs. If each run has different, uncorrelated weather, then averaging over the different simulations (the ensemble mean) gives an estimate of the underlying forced change. Normally this is done for a single model, and for metrics like the global mean temperature only a few ensemble members are needed to reduce the noise. For other metrics – like regional diagnostics – more ensemble members are required. There is another standard way to reduce weather noise, and that is to average over time, or over specific events. If you are interested in the impact of volcanic eruptions, it is basically equivalent to running the same eruption 20 times with different starting points, or to collecting together the responses of 20 different eruptions. The same can be done with the response to El Niño, for instance.
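In code, both versions of the trick amount to an average over one extra dimension. A minimal sketch, assuming the runs have already been read into a numpy array (the array shapes, the load_ensemble helper and the event indices are hypothetical, not part of any actual archive tooling):

```python
import numpy as np

# Hypothetical: an ensemble of surface temperature runs from one model, already
# read into a numpy array of shape (n_ensemble, n_time, n_lat, n_lon).
# runs = load_ensemble("some_model_20c3m")   # assumed helper, not a real API

def forced_and_weather(runs):
    """Split an ensemble into an estimate of the forced response (ensemble mean)
    and the residual 'weather' in each member."""
    ensemble_mean = runs.mean(axis=0)   # noise shrinks roughly as 1/sqrt(N)
    weather = runs - ensemble_mean      # broadcasts over the ensemble dimension
    return ensemble_mean, weather

def composite(series, event_indices, window):
    """Average a time series over a window following each event (e.g. volcanic
    eruptions), which beats down weather noise in the same way as averaging
    over ensemble members. Indices and window length are hypothetical."""
    segments = [series[i:i + window] for i in event_indices]
    return np.mean(segments, axis=0)
```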

With the new archive though, people have tried something new – averaging the results of all the different models. This is termed a meta-ensemble, and at first glance it doesn’t seem very sensible. Unlike the weather noise, the differences between models are not drawn from a nicely behaved distribution; the models are not independent in any solid statistical sense; and no-one really thinks they are all equally valid. Thus many of the prerequisites for making this mathematically sound are missing, or at best unquantified. Expectations for a meta-ensemble are therefore low. But, and this is a curious thing, it turns out that the meta-ensemble of all the IPCC simulations actually outperforms any single model when compared to the real world. That implies that at least some part of the inter-model differences is in fact random and can be cancelled out. Of course, many systematic problems remain even in a meta-ensemble.
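As a hedged illustration of what ‘outperforms’ means here, one common way to score models is to compare each model’s climatology, and the meta-ensemble mean, against an observed field using an area-weighted RMSE. In the sketch below the model dictionary, the observed field and the latitude grid are hypothetical inputs, and the metric is just one of many used in practice:

```python
import numpy as np

def rmse(field, obs, lat):
    """Area-weighted root-mean-square error of a (lat, lon) field vs observations."""
    weights = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(obs)
    return np.sqrt(np.average((field - obs) ** 2, weights=weights))

# Hypothetical inputs: a dict of climatological fields, one per model, all
# regridded to a common (lat, lon) grid, plus an observed field and latitudes.
# models = {"model_A": fld_a, "model_B": fld_b, ...}; obs, lat from a reanalysis.

def compare_to_meta_ensemble(models, obs, lat):
    """Score each model and the meta-ensemble (multi-model) mean against obs."""
    scores = {name: rmse(field, obs, lat) for name, field in models.items()}
    meta_mean = np.mean(list(models.values()), axis=0)
    scores["meta-ensemble mean"] = rmse(meta_mean, obs, lat)
    return scores
```

If part of the inter-model spread really is random, the meta-ensemble mean should come out with a lower error than most (or all) of the individual models, which is the behaviour described above for the IPCC archive.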
