

CMIP5 simulations

Filed under: — gavin @ 11 August 2011

Climate modeling groups all across the world are racing to add their contributions to the CMIP5 archive of coupled model simulations. This coordinated project, proposed, conceived and specified by the climate modeling community itself, will be an important resource for analysts and for the IPCC AR5 report (due in 2013), and beyond.

There have been previous incarnations of the CMIP projects going back to the 1990s, but I think it’s safe to say that it was only with CMIP3 (in 2004/2005) that the project gained a real maturity. The CMIP3 archive was heavily used in the IPCC AR4 report – so much so that people often describe those models and simulations as the ‘IPCC models’. That is a reasonable shorthand, but is not really an accurate description (the models were not chosen by IPCC, designed by IPCC, or run by IPCC) even though I’ve used it on occasion. Part of the success of CMIP3 was the relatively open data access policy which allowed many scientists and hobbyists alike to access the data – many of whom were dealing with GCM output for the first time. Some 600 papers have been written using data from this archive. We discussed some of this success (and some of the problems) back in 2008.

Now that CMIP5 is gearing up for a similar exercise, it is worth looking into what has changed – in terms of the model specifications, the requested simulations and the data serving to the wider community. Many of these issues are discussed in the current CLIVAR newsletter (Exchanges no. 56). (The references below are all to articles in this pdf).

There are three main novelties this time around that I think are noteworthy: the use of more interactive Earth System models, a focus on initialised decadal predictions, and the inclusion of key paleo-climate simulations as part of the suite of runs.

The term Earth System Model is a little ambiguous, with some people reserving it for models that include a carbon cycle, and others (including me) using it more generally to denote models with more interactive components than are used in more standard (AR4-style) GCMs (i.e. atmospheric chemistry, aerosols, ice sheets, dynamic vegetation etc.). Regardless of terminology, the 20th Century historical simulations in CMIP5 will use a much more diverse set of model types than did the similar simulations in CMIP3 (where all models were standard coupled GCMs). This both expands the range of possible evaluations of the models and increases the complexity of that evaluation.

The ‘decadal prediction’ simulations are mostly being run with standard GCMs (see the article by Doblas-Reyes et al, p8). The different groups are trying multiple methods to initialise their ocean circulations and heat content at specific points in the past and are then seeing if they are able to better predict the actual course of events. This is very different from standard climate modelling where no attempt is made to synchronise modes of internal variability with the real world. The hope is that one can reduce the initial condition uncertainty for predictions in some useful way, though this has yet to be demonstrated. Early attempts to do this have had mixed results, and from what I’ve seen of the preliminary results in the CMIP5 runs, significant problems remain. This is one area to watch carefully though.

Personally, I am far more interested in the inclusion of the paleo component in CMIP5 (see Braconnot et al, p15). Paleo-climate simulations with the same models that are being used for the future projections allow for the possibility that we can have true ‘out-of-sample’ testing of the models over periods with significant climate changes. Much of the previous work in evaluating the IPCC models has been based on modern period skill metrics (the climatology, seasonality, interannual variability, the response to Pinatubo etc.), but while useful, this doesn’t encompass changes of the same magnitude as the changes predicted for the 21st Century. Including tests with simulations of the last glacial maximum, the Mid-Holocene or the Last Millennium greatly expands the range of model evaluation (see Schmidt (2010) for more discussion).

The CLIVAR newsletter has a number of other interesting articles, on CFMIP (p20), the scenarios being used (RCPs) (p12), the ESG data delivery system (p40), satellite comparisons (p46 and p47) and the carbon-cycle simulations (p27). Indeed, I think the range of issues covered presages the depth of interest that the CMIP5 archive will eventually generate.

There will be a WCRP meeting in October in Denver that will be very focused on the CMIP5 results, and it is likely that much of the context for the AR5 report will be reflected there.


113 Responses to “CMIP5 simulations”

  1. 101
    Hank Roberts says:

    oops — mispasted inquiry 20 Aug 2011 at 11:03 AM
    re Salby, sorry; entirely out of place

  2. 102

I need a recommendation for a SHORT book (or article), readable by a HS graduate, which sets forth the case for climate change as clearly as possible. My grandson is not scientifically oriented, and at age 28

  3. 103
    Hank Roberts says:

    > recommendation for a SHORT and readable book
    I’ll post one or two, replying over in the open thread:
    http://www.realclimate.org/index.php/archives/2011/08/unforced-variations-aug-2011/

  4. 104

    Thanks, Hank. Appreciated.

  5. 105
    Meow says:

@102: Try this, which approaches the subject from a lively historical perspective.

  6. 106
    Denis Royer says:

@ CMIP5 simulations: Glad that you mention the role of hobbyists in climate science. I would rather term them “amateur climatologists”, by analogy with amateur astronomers, whose contributions to astronomy need no introduction. The analogy is obvious: data mining and micro-computers take the place of sky observation and telescopes.
From a retired nuclear physicist (and an amateur astronomer)

  7. 107
    William Freimuth says:

    Earlier I posted a rant (#4); please forgive me and accept my heartfelt gratitude for your scientific research.

  9. 109
    Hank Roberts says:

    http://www.agu.org/pubs/eos/eo1131.shtml

Paywalled ($20/yr for AGU membership) — worth a look:

    VOLUME 92 NUMBER 31 2 August 2011

    View entire full-size issue. [PDF 5.15 MB]

    FEATURE
    Guidelines for Constructing Climate Scenarios
    How best can scientists understand and characterize uncertainty in climate predictions, and what are some key considerations when selecting and combining climate model outputs to generate scenarios? Addressing these questions in the context of recent research leads to some possible guidelines for creating and applying climate scenarios.
    [PDF]
    By P. Mote et al.

  10. 110
    Geoff Sherrington says:

    The journal “Analytical Chemistry” in 1986 reported a summary of analytical chemistry of lunar material collected during Apollo missions. Author, G.H. Morrison.
Many laboratories submitted replicate analyses of the same material, giving a “within laboratory variance” (here WLV). More than a score of the world’s top laboratories analysed most of the elements, so it was also possible to compute a “between laboratory variance”, BLV. In many cases the WLV was less than the BLV, meaning that the laboratories were optimistic about their capabilities. Unfortunately, the BLV was larger than many had hoped for, but it was the variance that had to be used, because there was no defensible way to discard some laboratories and keep others.
To the theme: after CMIP3, I blogged repeatedly that modellers should submit the results of all test runs, excluding those with known, excusable errors, not just those that had a good feel to them. In this way, a within-model variance can be found. When all modellers report, a between-model variance can be calculated and compared. If it is calculated with the inclusion of the within-model variance, rather than from 1 or 2 preferred model outcomes per modeller (analogous to the lunar rocks example), a better estimate of confidence in models can be derived.
    Do you know if this is going to be done?

  11. 111
    Geoff Sherrington says:

    Re 110 – date error. The paper is ANALYTICAL CHEMISTRY, VOL. 43, NO. 7, JUNE 1971. “Evaluation of Lunar Elemental Analyses”. George Morrison was at Cornell University.

    In the past I have been troubled by the significance of some modelling in GCMs because the uncertainty was ill defined. In 110 I suggest application of a method learned at high expense from older study. Although older, the fundamentals have not been overtaken by new developments.

  12. 112
    Ray Ladbury says:

    Geoff Sherrington, Thank you for that utterly irrelevant suggestion, based on an utter and complete misunderstanding of how climate models, climate science and science in general work.

  13. 113
    Geoff Sherrington says:

112. Ray Ladbury. And the suggestion is irrelevant because ….?

