by Gavin Schmidt and Stefan Rahmstorf
Two stories this week, a paper in Nature (Stainforth et al, 2005) describing preliminary results of the climateprediction.net experiments, and the Meeting the Climate Challenge report from a high-level political group, have led to dramatic headlines. On the Nature paper, BBC online reported that “temperatures around the world could rise by as much as 11°C”; on the latter report it headlined: “Climate crisis near ‘in 10 years’”. Does this mean there is new evidence that climate change is more serious than previously thought? We think not.
Both stories touch on the issue of uncertainty, in particular the uncertainty in the global climate sensitivity (the equilibrium warming expected for a doubling of atmospheric CO2).
It is important to know roughly what the climate sensitivity of the planet is. There are a number of ways to estimate it, using climate models, data, or a combination of both. From the earliest experiments, model estimates have ranged from around 2 to 5°C (for 2xCO2). The most quoted range comes from the 1979 Charney report. There, two models (from Suki Manabe and Jim Hansen) were examined, with sensitivities of 2 and 4°C respectively. Jule Charney added half a degree of uncertainty at the low and high ends, and thus the range became 1.5 to 4.5°C. This early range therefore stood on rather shaky ground. It has lasted for a surprisingly long time, with subsequent results neither challenging it nor being able to narrow it down further. Subsequent model estimates have pretty much fallen within those limits, though the actual range for the state-of-the-art models being analysed for the next IPCC report is 2.6 to 4.1°C. (Note that the range of climate sensitivity is not the same as the temperature range projected for 2100 (1.4 to 5.8°C), which also includes uncertainty in projected emissions. The uncertainty due purely to the climate sensitivity for any one scenario is around half that range.)
Attempts have also been made to constrain climate sensitivity from observations. Ideally, we would need a time when the climate was in an equilibrium state, with good estimates of the forcings that maintained that state and good data for the global mean temperature change. The 20th Century has the best estimates of the global mean temperature changes, but the climate has not been in equilibrium (as shown by the increasing heat content of the oceans). Also, due to the multiplicity of anthropogenic and natural effects on the climate over this time (e.g. aerosols, land-use change, greenhouse gases, ozone changes, solar, volcanic etc.) it is difficult to accurately define the forcings. Thus estimates based purely on the modern period do not have enough precision to be useful. For instance, total forcings since 1850 are around 1.6+/-1 W/m2, the temperature change is around 0.7+/-0.1°C, and the current rate of warming of the ocean (to correct for the non-equilibrium conditions) is around 0.75 W/m2. Together, that implies a sensitivity of 0.8+/-1°C/(W/m2) (or 3.2+/-4°C for 2xCO2). More sophisticated methods of looking at the modern data do not provide more of a constraint either (e.g. Forest et al., 2002; Knutti et al., 2002). (This large uncertainty is essentially due to the uncertainty in the aerosol forcing; it is also the main reason why the magnitude of global dimming has little or no implication for climate sensitivity.)
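To see why this constraint is so weak, here is a minimal back-of-the-envelope sketch in Python using the numbers quoted above. Treating the quoted +/- values as 1-sigma Gaussian spreads, and taking the 2xCO2 forcing as ~4 W/m2 (consistent with the 0.8°C/(W/m2) to 3.2°C conversion in the text), are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Rough numbers from the text, treating the quoted +/- as 1-sigma Gaussian spreads
forcing = rng.normal(1.6, 1.0, N)   # total forcing since 1850 (W/m2)
warming = rng.normal(0.7, 0.1, N)   # observed temperature change (degC)
ocean_uptake = 0.75                 # current ocean heat uptake (W/m2), the non-equilibrium correction
f2x = 4.0                           # assumed forcing for doubled CO2 (W/m2)

net = forcing - ocean_uptake        # forcing that has been realized as surface warming
sensitivity = warming / net * f2x   # implied equilibrium warming for 2xCO2

# Discard the (unphysical) cases where the net forcing is near zero or negative;
# their frequency is itself a measure of how poorly the aerosol forcing is constrained.
ok = net > 0.1
print("median: %.1f degC" % np.median(sensitivity[ok]))
print("5-95%% range: %.1f to %.1f degC" % tuple(np.percentile(sensitivity[ok], [5, 95])))
```

The median comes out near 3°C, but the spread covers everything from roughly 1.5°C to values far above 10°C, which is just another way of saying that the modern period alone cannot pin the sensitivity down.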
What about paleo-climate? An early attempt to use the Vostok ice core data in a regression analysis (Lorius et al., 1990) resulted in a climate sensitivity of 3-4°C. The best period for these purposes is the last glacial maximum. This was a relatively stable climate (for several thousand years, around 20,000 years ago), and a period for which we have reasonable estimates of the temperature changes and of the radiative forcing: albedo changes from ice sheets and vegetation changes, greenhouse gas concentrations (derived from ice cores), and an increase in the atmospheric dust load. A reasonable estimate of the total forcing is 6.6+/-1.5 W/m2 (roughly half from albedo changes, slightly less than half from greenhouse gases: CO2, CH4, N2O). The global temperature change was a cooling of around 5.5+/-0.5°C (compared to the pre-industrial climate). This then gives 0.8+/-0.2°C/(W/m2), or ~3+/-1°C for 2xCO2. This is actually quite a strong constraint, as we will see.
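The same arithmetic for the glacial case, again only as a rough sketch (the ~4 W/m2 forcing for doubled CO2 and the crude extreme-case error bounds are assumptions added for illustration):

```python
# Last glacial maximum constraint, using the numbers quoted above
dT, dT_err = 5.5, 0.5    # global cooling relative to pre-industrial (degC)
F, F_err   = 6.6, 1.5    # total LGM forcing (W/m2): albedo + greenhouse gases + dust
f2x        = 4.0         # assumed forcing for doubled CO2 (W/m2)

best = dT / F * f2x                        # ~3.3 degC for 2xCO2
low  = (dT - dT_err) / (F + F_err) * f2x   # ~2.5 degC, combining the extremes
high = (dT + dT_err) / (F - F_err) * f2x   # ~4.7 degC

print("LGM-based sensitivity: %.1f degC (roughly %.1f to %.1f)" % (best, low, high))
# Much tighter than the modern-period estimate, and consistent with the ~3 +/- 1 degC quoted above.
```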
With this background, what should one make of the climateprediction.net results? They show that the sensitivity to 2xCO2 of a large multi-model ensemble with different parameters ranges from 2 to 11°C. This shows that it is possible to construct models with rather extreme behavior; whether these are realistic is another matter. To test for this, the models must be compared with data. Stainforth et al. subject their resulting models to only a very weak data constraint, namely the annual-mean present-day climate. Since this does not include any climatic variations (not even the seasonal cycle), let alone a test period with a different CO2 level, it is unable to constrain the upper limit of the climate sensitivity range. The fact that even model versions with very high climate sensitivities pass their test does not show that the real world could have such high climate sensitivity; it merely shows that the test they use is not very selective. Our feeling is that once the validation becomes more comprehensive, most of the extremely high sensitivity examples will fail (particularly on the seasonal cycle, which tests for variations rather than just a mean).
A yet more stringent test of a realistic climate sensitivity is the application of a model to a climate with different CO2 levels. Consider the implications for the glacial climate of a sensitivity of twice the most likely value of 3°C, i.e. 6°C. This would imply either that the glacial forcings were only half what we thought, or that the temperature changes were twice what we infer. This would be extremely difficult to square with the paleo-data, and the situation becomes even more untenable for larger values (>6°C). Hence, we feel that the most important result of the Stainforth et al. study is that the large majority of the models had climate sensitivities between 2°C and 4°C, giving additional support to the widely accepted range. (Update: As mentioned in the follow-up post, this clustering is mainly a function of the sensitivity of the original model and the random nature of the perturbations.) The fact that some of the models had much higher sensitivities should not be over-interpreted.
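To make the glacial consistency check concrete, here is a minimal sketch with the LGM numbers from above (the ~4 W/m2 2xCO2 forcing is again an assumption used for the conversion):

```python
# What would a 6 degC sensitivity imply for last glacial maximum cooling?
S_high = 6.0   # hypothetical sensitivity (degC per doubling of CO2)
f2x    = 4.0   # assumed forcing for doubled CO2 (W/m2)
F_lgm  = 6.6   # estimated LGM forcing (W/m2)

predicted = S_high / f2x * F_lgm
print("predicted LGM cooling: %.1f degC" % predicted)   # ~9.9 degC
print("inferred LGM cooling:  ~5.5 degC")
# A 6 degC sensitivity overshoots the reconstructed glacial cooling by nearly a factor
# of two, unless the forcing estimate itself is roughly halved.
```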
The ‘Meeting the Climate Challenge’ report tried to quantify what is meant by ‘dangerous’ interference with the climate. All countries, including the US and Australia, have signed the Framework Convention on Climate Change, which obligates them to prevent ‘dangerous’ interference with the climate system. Actually quantifying what this means is rather tricky. For various reasons (some of them subjective), the report suggests that any global warming above 2°C (relative to pre-industrial temperatures) is likely to be increasingly dangerous. The issue is how one prevents such an outcome given the uncertainty in the climate sensitivity.
The analysis used in this report is based on a study by Baer and Athanasiou. They perform a probability calculation assuming that any climate sensitivity within the IPCC range is equally likely. This is a relatively conservative assumption (since it does not include the really high sensitivities that we argued above are ruled out by paleo-data). The results suggest that in order to avoid ‘dangerous’ climate change with a reasonable probability (>90%), the maximum forcing that could be allowed is around 2 W/m2 over pre-industrial levels. This corresponds to a CO2 level of around 400 ppm, assuming all other forcings were at pre-industrial levels. This limit is to some extent subjective, but it is similar to (though a little lower than) the level proposed by Jim Hansen.
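A minimal sketch of that kind of calculation (the uniform 1.5-4.5°C assumption follows the description above; the ~4 W/m2 2xCO2 forcing and the 5.35 ln(C/280) approximation for the CO2 forcing are standard values added here for illustration):

```python
import numpy as np

# Assume, as in the report, that any sensitivity in the IPCC 1.5-4.5 degC range is equally likely
S_90  = 1.5 + 0.9 * (4.5 - 1.5)   # 90th percentile of a uniform 1.5-4.5 degC distribution = 4.2
f2x   = 4.0                        # assumed forcing for doubled CO2 (W/m2)
limit = 2.0                        # 'dangerous' warming threshold (degC above pre-industrial)

# Equilibrium warming scales as (S / f2x) * F, so keeping it below the limit for 90% of
# the sensitivity distribution requires F < limit * f2x / S_90
F_max = limit * f2x / S_90
print("maximum total forcing: ~%.1f W/m2" % F_max)   # ~1.9 W/m2

# Equivalent CO2 level if all other forcings were at pre-industrial values,
# using the standard approximation F = 5.35 * ln(C / 280)
print("equivalent CO2 level:  ~%.0f ppm" % (280 * np.exp(F_max / 5.35)))   # ~400 ppm
```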
Note that this is not the same as simply reaching the 400 ppmv CO2 level (which is highly likely to happen over the next 10 to 15 years). The reason is that the other forcings (mostly aerosols) have collectively diminished the total forcing up to now, which currently stands at about 1.6 W/m2. Whether and when we reach 2 W/m2 total forcing is therefore a function of the changes in many different forcings: CFCs are projected to decline in the future and CH4 is currently steady (and could possibly be reduced); however, aerosol growth rates are quite uncertain.
Is there a “point of no return” or “critical threshold” that will be crossed when the forcings exceed this level, as reported in some media? We don’t believe there is scientific evidence for this. However, as Carlo Jaeger pointed out at an international symposium on this topic in Beijing last year, setting a limit is a sensible way to collectively deal with a risk. A speed limit is a prime example. When we set a speed limit at 60 mph, there is no “critical threshold” there: nothing terrible happens if you go to 65 or 70 mph, say, but at 90 mph the fatalities would clearly exceed acceptable levels. Setting a limit on global warming at 2°C above pre-industrial temperature is the official policy target of the European Union, and is probably a sensible limit in this sense. But, just like speed limits, it may be difficult to adhere to.
Uncertainty in climate sensitivity is not going to disappear any time soon, and should therefore be built into assessments of future climate. However, it is not a completely free variable, and the extremely high end values that have been discussed in media reports over the last couple of weeks are not scientifically credible.
david jones says
Thanks all for an excellent thread.
I have a couple of questions relating to the exact meaning of “radiative forcing”.
I have always understood this to be a forcing acting on the surface, so the current +1.6 W/m^2 means that there is effectively an additional 1.6 W/m^2 of downward radiation at the Earth’s surface. Is this correct?
If so, then the Stefan-Boltzmann equation applied to the observed temperature change (and the projected temperature change under doubled CO2) should approximately tell you the total forcing incorporating climate system feedbacks? Then one could presumably decompose this “total forcing” into its components, which might include water vapour feedbacks, cloud feedbacks, surface albedo feedbacks, etc. Is there a fault in this logic, and can anyone point to a comprehensive analysis in the literature along these lines?
Cheers,
DJ
[Response: The radiative forcings are strictly defined at the tropopause, not at the surface. The impact of various forcings and the decomposition of the impacts due to various feedbacks can be found in any number of papers, of which Hansen et al (1997) or Hansen et al (2002) (and references therein) are good starts. -gavin]
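For readers following the arithmetic in the question, here is a minimal sketch of the zero-feedback Stefan-Boltzmann estimate. The 255 K effective emission temperature and the ~4 W/m2 2xCO2 forcing are standard textbook values assumed for illustration; this back-of-the-envelope approach is far cruder than the feedback analyses in the papers cited in the response:

```python
# Zero-feedback Stefan-Boltzmann estimate (a very rough illustration only)
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W m-2 K-4)
T_e   = 255.0     # effective emission temperature of the Earth (K), standard textbook value

# Linearizing F = sigma * T^4 about T_e gives the no-feedback response dT/dF = 1 / (4 sigma T_e^3)
lambda_0 = 1.0 / (4 * SIGMA * T_e**3)
print("no-feedback response: %.2f degC per W/m2" % lambda_0)                   # ~0.27
print("no-feedback warming for ~4 W/m2 (2xCO2): %.1f degC" % (4 * lambda_0))   # ~1.1

# The gap between this ~1 degC and the ~3 degC equilibrium sensitivity discussed in the
# post is the net effect of the feedbacks (water vapour, clouds, surface albedo, lapse rate).
```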