Addendum to “A Mistake with Repercussions”

Figure: The upper panel shows how the MBH methodology matches the trends and variability in the calibration period (the unshaded area). The estimate over the verification period (shaded region) can then be compared to see what predictive power it has. In the lower panel, the von Storch et al emulations are shown as a function of the non-climatic ‘noise’ that is added. The large difference between the black solid line and its reconstruction, the red dashed line, shows that the von Storch method already fails in the calibration and verification periods. This should have alerted the authors that something was wrong.

The basic problem is illustrated in the above figure. The MBH method is designed so that the mean and variance over the calibration period are the same in the proxies and the observations, and so that the overall trends are similar. The ‘verification’ period is then used to check whether the reconstruction from the proxies matches the mean offset seen in the observations during that period. The results from the von Storch et al emulation demonstrate that this is not so for their methodology: their emulations would clearly fail verification tests. Detrending during the calibration interval is thus equivalent to removing a substantial part of the low frequency signal (though exactly how much will depend on the simulation). As an aside, BĂŒrger and Cubasch (GRL, 2005; Tellus, 2006) suggest that detrending is simply an arbitrary choice in these kinds of methodologies, but these results show that it is clearly deleterious to the skill of the reconstruction, and thus there is an a priori reason not to use it.
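The attenuation effect of detrending can be illustrated with a toy calculation. The sketch below is not the actual MBH algorithm; it uses an idealised "temperature" series (trend plus slow oscillation), a synthetic proxy (signal plus non-climatic noise), and a simple linear calibration with mean-matching, all with made-up numbers. Detrending the calibration window before fitting shrinks the regression slope, so the reconstruction captures less of the low-frequency offset in the verification window:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200.0)                                   # years (hypothetical)
temp = 0.02 * t + 0.3 * np.sin(2 * np.pi * t / 60)     # idealised climate signal
proxy = temp + 0.2 * rng.standard_normal(t.size)       # proxy = signal + noise

cal = slice(150, 200)   # calibration window (most recent 50 yr)
ver = slice(100, 150)   # verification window (preceding 50 yr)

def fit_slope(detrend):
    """Regress temperature on the proxy over the calibration window,
    optionally removing the linear trend from both series first."""
    xs, ys = proxy[cal], temp[cal]
    if detrend:
        i = np.arange(xs.size)
        xs = xs - np.polyval(np.polyfit(i, xs, 1), i)
        ys = ys - np.polyval(np.polyfit(i, ys, 1), i)
    return np.polyfit(xs, ys, 1)[0]

def reconstruct(slope):
    # Rescale the proxy, anchoring the mean to the calibration window
    return temp[cal].mean() + slope * (proxy - proxy[cal].mean())

a_full = fit_slope(detrend=False)
a_det = fit_slope(detrend=True)

# How much of the calibration-to-verification mean offset is recovered?
err_full = abs(reconstruct(a_full)[ver].mean() - temp[ver].mean())
err_det = abs(reconstruct(a_det)[ver].mean() - temp[ver].mean())
print(a_full, a_det)     # detrended slope is attenuated
print(err_full, err_det) # detrended reconstruction misses more of the offset
```

With the trend removed, the regression sees only the high-frequency part of the signal relative to the same noise, so its slope (and hence the amplitude of the reconstructed low-frequency variability) is systematically smaller.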

3. The problem of climate drift

Von Storch et al. (2004) started their model from an initial climate state that was in equilibrium with 372 ppm CO2, which already includes the anthropogenic rise of carbon dioxide. However, their model experiment was meant to start at 1,000 AD, when carbon dioxide levels were only 280 ppm. Therefore, CO2 concentration was dropped in the model from 372 down to 280 ppm over 30 years, followed by a 50-year adjustment period with constant 280 ppm CO2, before the start of the 1000-year run proper. Not surprisingly, the global temperature dropped by about 1.5°C during this transition phase. None of this is reported in their paper or online supplement.

This initialisation procedure is rather unsatisfactory, as it would be expected to cause a large climate drift during the experiment. It is as if someone wanting to measure temperature variations outside were using a thermometer just brought out from a heated room. If you looked at this thermometer before it had fully adjusted to the outside temperature, you would see a cooling trend that has nothing to do with actual temperature changes outside.

Those experienced with coupled climate models know that the time scale to fully adjust after such a major drop in CO2 concentration is many hundreds of years, due to the slow response time of the oceans. Even if the von Storch team had not expected this problem, they must clearly have seen it unfold: the trend in their model during the transition phase reveals that it was far from equilibrium, and they should at the very least have mentioned this problem as a caveat in their papers.
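The centuries-long adjustment can be seen even in a minimal two-box energy-balance model (a shallow surface layer coupled to a deep ocean). This is only an illustrative sketch with assumed parameter values, not the ECHO-G model used by von Storch et al.; it mimics their protocol of starting in equilibrium with 372 ppm forcing, ramping down to the 280 ppm level over 30 years, and holding for 50 years before the run proper:

```python
import numpy as np

# Assumed, illustrative parameters (not tuned to any particular GCM):
Cs, Cd = 8.0, 100.0          # surface / deep-ocean heat capacities (W yr m^-2 K^-1)
lam, gam = 1.2, 0.7          # feedback and ocean-exchange coefficients (W m^-2 K^-1)
F0 = 5.35 * np.log(372 / 280)  # CO2 forcing of 372 ppm relative to 280 ppm

years = 1080                 # 30 yr ramp + 50 yr hold + 1000 yr "simulation"
Ts = Td = F0 / lam           # start in equilibrium with 372 ppm
history = []
for yr in range(years):
    F = F0 * max(0.0, 1 - yr / 30)          # linear ramp-down, then constant 280 ppm
    Ts += (F - lam * Ts - gam * (Ts - Td)) / Cs   # fast surface box
    Td += gam * (Ts - Td) / Cd                    # slow deep-ocean box
    history.append(Ts)

# Surface temperature at the nominal start of the run (after ramp + hold),
# and the residual cooling over the first 200 years of the run proper:
start_temp = history[80]
drift = history[80] - history[280]
print(round(start_temp, 2), round(drift, 2))
```

Even in this crude sketch, the deep ocean relaxes on a multi-century e-folding time, so after the 80-year spin-down the surface is still well above its 280 ppm equilibrium and continues to cool for centuries into the run, exactly the kind of spurious drift described above.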

In the absence of this information, many colleagues were puzzled by the strong cooling trend throughout the pre-industrial era in this simulation, which made it an outlier compared to all other available simulations of the past millennium. Was this due to a particularly high climate sensitivity of this model? Or was it due to the forcing used? Correspondence with the von Storch group brought no clarification. Finally, Osborn et al. (2006) identified it as a climate drift problem.
