RealClimate

Comments


  1. Geert Jan van Oldenborgh’s video abstract of the “Reliability of regional climate model trends” paper is very well done:

    http://iopscience.iop.org/1748-9326/8/1/014055

    Thanks, Geert Jan.

    Comment by Bob Tisdale — 15 Apr 2013 @ 10:12 AM

  2. A layperson here, anxiously awaiting a translation. JP

    Comment by John Parsons — 15 Apr 2013 @ 12:52 PM

  3. This is a very interesting post. Please elaborate:

    “…the climate change signal is now emerging from the noise in many regions of the world…”
    Exactly how is the “climate change signal” separated from the noise, and is this “climate change signal” considered to be anthropogenic?

    “the verification of regional past trends in climate models has become possible.”
    Exactly how are “regional past trends in climate models” verified? Is this through “hindcasting”?

    Comment by Louis Hooffstetter — 15 Apr 2013 @ 5:05 PM

  4. @2 John Parsons. Follow Bob Tisdale’s link and watch the video.

    Comment by MikeH — 15 Apr 2013 @ 9:01 PM

Interesting. I looked at some regional climate projections for Australia a while back and noticed the depressingly large error bars on the precipitation and temperature projections. Are the problems due solely to missing physics in the models, or could spatial resolution also be an issue?

    Comment by SCM — 15 Apr 2013 @ 9:28 PM

  6. The full article:
    http://www.knmi.nl/publications/fulltexts/reliability.pdf

    In general, I wish posts would give links to complete articles when possible, since some readers would have to pay to download from publishers’ sites.

    Comment by T Marvell — 16 Apr 2013 @ 2:31 AM

John Parsons: we checked whether past trends on the regional scale (which, in the jargon, still means a few hundred kilometres) are correctly simulated by the set of climate models that the IPCC is using for the Fifth Assessment Report, due in September. You cannot demand that the models reproduce the trend exactly, because the observed trend is also influenced by unpredictable weather. We also know that climate models are not perfect, so instead we ask whether the observed trends fall within the range spanned by the models. Taking these two uncertainties into account, we still find that trends fall outside that range more often than expected. We do not yet know why, but it means that we cannot simply take the climate model output for the future as a numerical climate forecast; we have to extract the useful information from the model output in more sophisticated ways.

    Comment by Geert Jan van Oldenborgh — 16 Apr 2013 @ 4:38 AM

Louis Hooffstetter: the climate change signal is separated from the noise using a linear regression on the global mean temperature; this gives the best signal-to-noise ratio. We compare that to the modelled trends, computed the same way but using the modelled global mean temperature. The models include all forcings, natural (solar variability, volcanic and natural aerosols) and anthropogenic (greenhouse gases, anthropogenic aerosols). However, it is clear from the maps that the anthropogenically forced signal dominates (as is also shown by detection and attribution studies).

Indeed, past trends are verified by treating the historical runs of these climate models as hindcasts. We have also done this for decadal prediction hindcasts, with very similar results (see http://pielkeclimatesci.wordpress.com/2012/03/07/guest-post-titled-decadal-prediction-skill-in-a-multi-model-ensemble-by-geert-jan-van-oldenborgh-francisco-j-doblas-reyes-bert-wouters-wilco-hazeleger/).

    Comment by Geert Jan van Oldenborgh — 16 Apr 2013 @ 4:44 AM
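
By way of illustration, here is a minimal sketch of the regression-based trend definition described in comment 8, using synthetic data (the warming rate, the amplification factor, and the noise levels are illustrative assumptions, not values from the paper):

```python
# Minimal sketch of the regression-based trend definition: regress a
# local temperature series on the global mean temperature, so that the
# "trend" is expressed in K of local change per K of global-mean change.
# All data below is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_years = 60

# Synthetic global mean temperature: a slow warming signal plus small noise.
t_global = 0.015 * np.arange(n_years) + 0.05 * rng.standard_normal(n_years)

# Synthetic local series: responds to the global signal with an assumed
# amplification factor, plus much larger local weather noise.
true_amplification = 1.5
t_local = true_amplification * t_global + 0.3 * rng.standard_normal(n_years)

# Least-squares regression of the local series on the global mean;
# the slope estimates the local trend per degree of global warming.
slope, intercept = np.polyfit(t_global, t_local, 1)
print(f"estimated local trend: {slope:.2f} K per K global warming")
```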

  9. John Parsons,

    I think the essence is:

– the CMIP5 multi-model ensemble seems to capture surface temperature change at the global scale fairly well.

– However, at smaller spatial scales, different model runs are too similar in what they predict. There are fairly robust spatial change patterns in the models which aren’t well reflected in the observations. This means any probabilistic prediction based on the robustness of trends across the model ensemble is unlikely to provide an accurate forecast of what will be observed. In the words of our guest hosts: ‘the ensemble is somewhat overconfident’.

– That being the case, they suggest that the multi-model ensemble would not be useful for making weather-forecast-style probabilistic predictions for climate.

I wonder how this finding relates to the Deser (2012) paper featured here a few months ago. That paper described an experimental setup in which a large number of realisations were produced from a single model, with special attention to varying the initial conditions, and found large differences in regional trends. Is there some reason that CMIP5 models would be initialised in similar ways that produce false robustness?

    Comment by Paul S — 16 Apr 2013 @ 5:02 AM

@T Marvell: this is an open-access publisher, so there should be no problem downloading the PDF from their site.

    Comment by Geert Jan van Oldenborgh — 16 Apr 2013 @ 6:34 AM

Paul S: a single-model ensemble like Clara Deser’s (or our ESSENCE ensemble) only captures the natural variability, not (any part of) the model uncertainty, so these ensembles are in general hugely overconfident. Yokohata et al. (2012) in their rank histograms also include perturbed-physics ensembles, in which a single model is run with different parameter settings; these should include all natural variability and some of the model uncertainty. Although they have a larger spread than a single-model ensemble, these ensembles are still hugely overconfident. Of course, these statements depend strongly on the realism of the simulated natural variability; Tom Knutson’s paper has some good graphs on this.

    Comment by Geert Jan van Oldenborgh — 16 Apr 2013 @ 6:39 AM
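
For readers unfamiliar with the rank-histogram diagnostic mentioned in comment 11, here is a minimal sketch using a synthetic ensemble whose spread is deliberately too narrow; a flat histogram would indicate a reliable ensemble, while counts piling up at the extreme ranks indicate overconfidence (all numbers are illustrative assumptions):

```python
# Minimal sketch of a rank histogram: at each grid point, find the rank
# of the observed trend within the sorted ensemble of modelled trends.
# All data below is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_points, n_members = 1000, 25

# Synthetic "truth" and a synthetic ensemble whose spread is
# deliberately half as large, mimicking an overconfident ensemble.
obs_trends = rng.standard_normal(n_points)
model_trends = 0.5 * rng.standard_normal((n_points, n_members))

# Rank of the observation among the ensemble members at each point
# (0 = below all members, n_members = above all members).
ranks = (model_trends < obs_trends[:, None]).sum(axis=1)

hist = np.bincount(ranks, minlength=n_members + 1)
print(hist)  # counts pile up at ranks 0 and 25: overconfidence
```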

  12. My thanks to Geert Jan van Oldenborgh for this post.

    Comment by Matthew R Marler — 16 Apr 2013 @ 12:47 PM

Geert Jan, thank you so much for that synopsis. I went to the video and found it tremendously helpful. This looks to be important work, which will doubtless be very helpful for model development.

I hope you and your colleagues know how much we amateur scientists appreciate R/C.

    Best Regards, JP

    Comment by John Parsons — 16 Apr 2013 @ 1:18 PM

  14. Many Thanks to Bob T., Mike H. and Paul S. for your very helpful comments. JP

    Comment by John Parsons — 16 Apr 2013 @ 1:23 PM

  15. SCM@#5 re Australia:

The Fig. 1c pattern for Australia troubled me on sight. It’s correct; you can reproduce something similar here (linear time basis): http://www.bom.gov.au/cgi-bin/climate/change/trendmaps.cgi?map=rain&area=aus&season=0112&period=1950.

    Problem is, most of that is probably not an AGW signal, at least in the E and SE (it surely is in the far SW). Compare, for example, the pattern for 100 years, here: http://www.bom.gov.au/cgi-bin/climate/change/trendmaps.cgi?map=rain&area=aus&season=0112&period=1900. Or have a look at the last 40 years.

Australian rainfall is just really, really variable. Expecting any sort of GCM ensemble to reproduce something like the 60-year pattern of change is a step too far, IMO (except in the SW). That is for annual means; extremes may be another matter.

    Comment by GlenF — 17 Apr 2013 @ 6:28 AM

The acronym TCR usually refers to ‘transient climate response’, which is defined in terms of an idealized CO2 concentration scenario (a 1% per year increase until CO2 doubles):

    http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6-2.html#8-6-2-1

    Comment by Tim M. — 17 Apr 2013 @ 9:47 AM

GlenF: The method we use takes the natural variability into account (to the extent that it is correctly simulated by the models). We do not look at the multi-model mean but at the full spread of the ensemble, and ask whether trends are in the top or bottom 5% more frequently than expected by chance. Some of the coloured regions in Fig. 1c are due to chance, and our method cannot tell which ones are and which are not; we can only state that there are too many coloured areas compared with chance alone. A detailed study of SE Australia would start with a check of whether the models correctly reproduce the large natural variability there. We are planning work in this direction, but first I have to finish some other obligations.

    Comment by Geert Jan van Oldenborgh — 17 Apr 2013 @ 9:55 AM
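
Here is a minimal sketch of the exceedance check described in comment 17: flag the points where the observed trend falls in the top or bottom 5% of the ensemble, and compare the flagged fraction with the 10% expected by chance. The data is synthetic, and a real analysis would also have to account for spatial correlation between grid points, which this toy version ignores:

```python
# Minimal sketch of the exceedance check: count how often the observed
# trend falls outside the central 90% of the model ensemble, and
# compare with the 10% expected by chance for a reliable ensemble.
# All data below is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_points, n_members = 1000, 25

obs_trends = rng.standard_normal(n_points)
model_trends = 0.5 * rng.standard_normal((n_points, n_members))  # too narrow

# 5th and 95th percentiles of the ensemble at each grid point.
lo = np.percentile(model_trends, 5, axis=1)
hi = np.percentile(model_trends, 95, axis=1)
flagged = (obs_trends < lo) | (obs_trends > hi)

print(f"flagged fraction: {flagged.mean():.2f} (expected ~0.10 by chance)")
```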

Tim M.: yes, you are correct; I guess this shows that I come from seasonal forecasting and not climate change research. It should be Total Global Response (TGR).

    Comment by Geert Jan van Oldenborgh — 18 Apr 2013 @ 5:43 AM

Mr. Geert Jan van Oldenborgh, I watched your video. Your explanations are clear and convincing. The conclusion that the large-scale patterns are very similar but the regional scales are overconfident is very meaningful. It is not possible to reduce uncertainties to zero. Local variability has a significant impact on the precision of forecasts derived from large-scale patterns.

    Comment by KIM-NDOR DJIMADOUMNGAR — 19 Apr 2013 @ 8:52 PM
