This is a very interesting post. Please elaborate:
“…the climate change signal is now emerging from the noise in many regions of the world…”
Exactly how is the “climate change signal” separated from the noise, and is this “climate change signal” considered to be anthropogenic?
“the verification of regional past trends in climate models has become possible.”
Exactly how are “regional past trends in climate models” verified? Is this through “hindcasting”?
Comment by Louis Hooffstetter — 15 Apr 2013 @ 5:05 PM
@2 John Parsons. Follow Bob Tisdale’s link and watch the video.
Interesting. I looked at some regional climate projections for Australia a while back and had noticed the depressingly large error bars on precipitation and temperature projections. Are the issues solely dependent on missing physics in the models or could spatial resolution be an issue?
John Parsons: we checked whether past trends on the regional scale (which, in the jargon, still means a few hundred kilometres) are correctly simulated by the set of climate models that the IPCC is using for the Fifth Assessment Report, due in September. You cannot demand that the models reproduce the trend exactly, because it is also influenced by unpredictable weather. We also know that climate models are not perfect, so rather than that we ask whether the observed trend falls within the range of the models. Taking these two uncertainties into account, we still find that the trends are not simulated well by the climate models more often than expected. We do not yet know why, but it means that we cannot simply take the climate model output for the future as a numerical climate forecast; we have to use the useful information in the climate model output in more sophisticated ways.
Louis Hooffstetter: the climate change signal is separated from the noise using a linear regression on the global mean temperature. This gives the best signal-to-noise ratio. We compare that to the modelled trends, computed the same way but using the modelled global mean temperature. These models include all forcings: natural (solar variability, volcanic & natural aerosols) and anthropogenic (greenhouse gases, anthropogenic aerosols). However, it is clear from the maps that the anthropogenically forced signal indeed dominates (as is also shown by the detection and attribution studies).
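[ed.: the regression approach described above can be sketched as follows. This is a minimal illustration, not the authors' actual code; the synthetic data, the 1.5 amplification factor, and the function name are made up for the example.]

```python
import numpy as np

def signal_from_regression(local_temp, global_mean_temp):
    """Estimate the climate change signal in a local temperature series
    by regressing it on the global mean temperature; the residual is
    treated as weather 'noise'. Illustrative sketch only."""
    # Least-squares fit: local = a * global + b
    a, b = np.polyfit(global_mean_temp, local_temp, 1)
    signal = a * global_mean_temp + b
    noise = local_temp - signal
    return a, signal, noise

# Synthetic illustration: a local series = 1.5 x global warming + weather noise
years = np.arange(1950, 2013)
global_T = 0.01 * (years - 1950)   # idealised 0.01 K/yr global-mean trend
rng = np.random.default_rng(0)
local_T = 1.5 * global_T + rng.normal(0.0, 0.2, years.size)

amp, sig, res = signal_from_regression(local_T, global_T)
# 'amp' recovers the local amplification of the global signal,
# despite the interannual noise
```

The same regression applied to each model run (against the modelled global mean) gives the modelled trends to compare with.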
– the CMIP5 multi-model ensemble seems to encapsulate surface temperature change at the global scale fairly well.
– However, when looking at smaller spatial scales, different model runs seem to be too similar in what they predict. There are fairly robust spatial change patterns in the models which aren’t well reflected in observations. This means any probabilistic prediction based on the robustness of trends across the model ensemble is unlikely to provide an accurate forecast for what will be observed. In the words of our guest hosts: ‘the ensemble is somewhat overconfident’.
– That being the case, they suggest that the multi-model ensemble would not be useful for making weather-forecast-style probabilistic predictions for climate.
I wonder how this finding relates to the Deser 2012 paper featured on here a few months ago? That paper described a model experimental setup where they produced a large number of realisations from a single model, paying special attention to varying initial conditions and found large differences in regional trends. Is there some reason that CMIP5 models would be initialised in similar ways that produce false robustness?
Paul S: a single-model ensemble like Clara Deser’s (or our ESSENCE ensemble) only captures the natural variability, not (part of) the model uncertainty, so these ensembles are in general hugely overconfident. Yokohata et al (2012) in their rank histograms also include perturbed-physics ensembles, where a single model is run with different parameter settings; these should include all natural variability and some of the model uncertainty. Even with a larger spread than a single-model ensemble, these ensembles are still hugely overconfident. Of course these statements depend strongly on the realism of the natural variability; Tom Knutson’s paper has some good graphs on this.
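[ed.: a rank histogram of the kind mentioned above can be computed as in this sketch. The synthetic numbers (member count, spread ratio) are invented for illustration and are not from Yokohata et al (2012).]

```python
import numpy as np

def rank_histogram(obs, ensemble):
    """Rank of each observation within the corresponding ensemble
    (rows: cases, columns: members). A roughly flat histogram suggests
    a reliable ensemble; ranks piling up at the two ends (observations
    often falling outside the ensemble) indicate overconfidence."""
    n_members = ensemble.shape[1]
    ranks = (ensemble < obs[:, None]).sum(axis=1)   # 0 .. n_members
    counts = np.bincount(ranks, minlength=n_members + 1)
    return counts

# Overconfident ensemble: spread only half as large as the real variability
rng = np.random.default_rng(1)
truth = rng.normal(0.0, 1.0, size=5000)            # 'observed' values
ens = rng.normal(0.0, 0.5, size=(5000, 20))        # 20-member, too-narrow ensemble

counts = rank_histogram(truth, ens)
# The end bins (rank 0 and rank 20) collect far more cases than the middle,
# the U-shape characteristic of an overconfident ensemble
```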
Aus rainfall is just really really variable. Expecting any sort of GCM ensemble to reproduce something like the 60-year pattern of change is a step too far IMO (except in the SW). That is in annual means; extremes may be another matter.
GlenF: The method we use takes the natural variability into account (to the extent that it is correctly simulated by the models). We do not look at the multi-model mean, but at the full spread of the ensemble, and ask whether observed trends are in the top or bottom 5% more frequently than expected by chance. Some of the coloured regions in Fig. 1c are due to chance, and our method cannot tell which ones are and which are not. We can only state that there are too many coloured areas compared with chance alone. A detailed study on SE Australia would start with a check whether the models correctly reproduce the large natural variability there. We are planning to do work in this direction, but first I have to finish some other obligations.
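[ed.: the top/bottom-5% check described above can be illustrated like this. The function name, the synthetic data, and the 0.6 spread ratio are hypothetical; the sketch only shows the counting logic.]

```python
import numpy as np

def tail_fraction(obs_trends, ensemble_trends, tail=0.05):
    """Fraction of grid points where the observed trend falls below the
    5th or above the 95th percentile of the model ensemble. For a
    perfectly reliable ensemble this should be about 2*tail (10%) by
    chance alone; a clearly larger fraction points to overconfidence."""
    lo = np.quantile(ensemble_trends, tail, axis=1)
    hi = np.quantile(ensemble_trends, 1.0 - tail, axis=1)
    outside = (obs_trends < lo) | (obs_trends > hi)
    return outside.mean()

# Synthetic check: an ensemble with too little spread flags too many points
rng = np.random.default_rng(2)
obs = rng.normal(0.0, 1.0, 2000)                 # 'observed' trends
ens_ok = rng.normal(0.0, 1.0, (2000, 40))        # well-calibrated ensemble
ens_narrow = rng.normal(0.0, 0.6, (2000, 40))    # overconfident ensemble

f_ok = tail_fraction(obs, ens_ok)           # close to the chance level
f_narrow = tail_fraction(obs, ens_narrow)   # well above the chance level
```

Note that with, say, 2000 grid points one expects roughly 200 flagged by chance, so only an excess over that count is evidence of a problem, and the method cannot say which individual points are the spurious ones.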
Mr. Geert Jan van Oldenborgh, I saw your video. Your explanations are clear and convincing. The conclusion, that the large-scale patterns are very similar but the regional scales show overconfidence, is very meaningful. It is not possible to reduce the uncertainties to zero. Local variability has a significant impact on the precision of forecasts made from larger scales.