Almost 30 years ago, Jule Charney made the first modern estimate of the range of climate sensitivity to a doubling of CO2. He took the average from two climate models (2ºC from Suki Manabe at GFDL, 4ºC from Jim Hansen at GISS) to get a mean of 3ºC, added half a degree on either side for the error and produced the canonical 1.5-4.5ºC range which survived unscathed even up to the IPCC TAR (2001) report. Admittedly, this was not the most sophisticated calculation ever, but individual analyses based on various approaches have not generally been able to improve substantially on this rough estimate, and indeed, have often suggested that quite high numbers (>6ºC) were difficult to completely rule out. However, a new paper in GRL this week by Annan and Hargreaves combines a number of these independent estimates to come up with the strong statement that the most likely value is about 2.9ºC with a 95% probability that the value is less than 4.5ºC.
Before I get into what the new paper actually shows, a brief digression…
We have discussed climate sensitivity frequently in previous posts and we have often referred to the constraints on its range that can be derived from paleo-climates, particularly the last glacial maximum (LGM). I was recently asked to explain why we can use the paleo-climate record this way when it is clear that the greenhouse gas changes (and ice sheets and vegetation) in the past were feedbacks to the orbital forcing rather than imposed forcings. This could seem a bit confusing.
First, it probably needs to be made clearer that, generally speaking, radiative forcing and climate sensitivity are useful constructs that apply to a subsystem of the climate and are valid only for restricted timescales – the atmosphere and upper ocean on multi-decadal periods. This corresponds in scope (not un-coincidentally) to the atmospheric component of General Circulation Models (GCMs) coupled to (at least) a mixed-layer ocean. For this subsystem, many of the longer term feedbacks in the full climate system (such as ice sheets, vegetation response, the carbon cycle) and some of the shorter term bio-geophysical feedbacks (methane, dust and other aerosols) are explicitly excluded. Changes in these excluded features are therefore regarded as external forcings.
Why this subsystem? Well, historically it was the first configuration in which projections of future climate change could be usefully made. More importantly, this system has the very nice property that the global mean of instantaneous forcing calculations (the difference in the radiation fluxes at the tropopause when you change greenhouse gases or aerosols or whatever) is a very good predictor of the eventual global mean response. It is this empirical property that makes radiative forcing and climate sensitivity such useful concepts. For instance, it allows us to compare the global effects of very different forcings in a consistent manner, without having to run the model to equilibrium every time.
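This empirical shortcut is easy to write down: the eventual global mean warming is, to a good approximation, proportional to the global mean forcing. Here is a minimal sketch; the numbers (the roughly 3.7 W/m² forcing for doubled CO2 and a 3ºC sensitivity) are standard illustrative values, not results from the paper discussed here:

```python
# Minimal sketch of the forcing-to-response shortcut described above.
# F_2X and SENSITIVITY are illustrative assumed values, not model output.
F_2X = 3.7          # W/m^2, approximate forcing for a doubling of CO2
SENSITIVITY = 3.0   # degC of equilibrium warming per doubling (canonical value)

def equilibrium_response(forcing_wm2, sensitivity=SENSITIVITY):
    """Predict the eventual global mean warming from a global mean forcing."""
    return sensitivity * forcing_wm2 / F_2X

print(equilibrium_response(3.7))    # doubled CO2: recovers 3.0 degC by construction
print(equilibrium_response(-1.0))   # a hypothetical aerosol forcing, about -0.81 degC
```

This is why forcings as different as greenhouse gases, volcanic aerosols and solar changes can be placed on a single W/m² scale and compared directly.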
To see why a more expansive system may not be as useful, we can think about the forcings for the ice ages themselves. These are thought to be driven by large regional changes in insolation caused by orbital variations. However, in the global mean, these changes sum to zero (or very close to it), and so the implied global mean sensitivity – a large response divided by a near-zero forcing – is huge (or even undefined) and not very useful for understanding the eventual ice sheet growth or carbon cycle feedbacks. The concept could be extended to include some of the shorter time scale bio-geophysical feedbacks, but that is only starting to be done in practice. Most discussions of climate sensitivity in the literature implicitly assume that these are fixed.
So in order to constrain the climate sensitivity from the paleo-data, we need to find a period in which our restricted subsystem is stable – i.e. all the boundary conditions are relatively constant, and the climate itself is stable over a long enough period that we can assume the radiation is pretty much balanced. The last glacial maximum (LGM) fits this restriction very well, and so is frequently used as a constraint. This approach goes back at least to Lorius et al (1991), when we first had reasonable estimates of the greenhouse gases from the ice cores, and continues through an upcoming paper by Schneider von Deimling et al, who test a multi-model ensemble (1000 members) against LGM data and conclude that models with sensitivities greater than about 4.3ºC can’t match the data. In posts here, I too have used the LGM constraint to demonstrate why extremely low (< 1ºC) or extremely high (> 6ºC) sensitivities can probably be ruled out.
In essence, I was using my informed prior beliefs to assess the likelihood of a new claim that climate sensitivity could be really high or low. My understanding of the paleo-climate record implied (to me) that the wide spread of results (from, for instance, the first reports of the climateprediction.net experiment) was a function of their methodology and not a possible feature of the real world. Specifically, if one test provides a stronger constraint than another, it’s natural to prefer the stronger constraint; in other words, an experiment that produces looser constraints doesn’t invalidate previous experiments that produced stronger ones. This is an example of Bayesian inference. A nice description of how Bayesian thinking is generally applied is available at James Annan’s blog (here and here).
Of course, my application of Bayesian thinking was rather informal, and anything that can be done in such an arm-waving way is probably better done formally, since you get much better control over the uncertainties. This is exactly what Annan and Hargreaves have done. Bayes theorem provides a simple formula for calculating how much each new bit of information improves (or not) your prior estimates, and this can be applied to the uncertain distribution of climate sensitivity.
A+H combine three independently determined constraints using Bayes Theorem and come up with a new distribution that is the most likely given the different pieces of information. Specifically, they take the constraints from the 20th Century (1 to 10ºC), the constraints from responses to volcanic eruptions (1.5 to 6ºC) and the LGM data (-0.6 to 6.1ºC – a widened range to account for extra paleo-climatic uncertainties) to come to a formal Bayesian conclusion that is much tighter than any of the individual estimates. They find that the mean value is close to 3ºC, with 95% limits at 1.7ºC and 4.9ºC, and a high probability that sensitivity is less than 4.5ºC. Unsurprisingly, it is the LGM data that makes very large sensitivities extremely unlikely. The paper is very clearly written and well worth reading for more of the details.
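The mechanics of such a combination are easy to sketch. In this illustrative Python version, each published range is (arbitrarily) treated as the 2-sigma bounds of a Gaussian likelihood, the three likelihoods are multiplied on a grid of sensitivity values, and the product is normalized. The shapes, and therefore the numbers that come out, are assumptions for illustration only; the actual paper derives each likelihood from the underlying data.

```python
import numpy as np

# Illustrative sketch only: each quoted range is treated as ~2-sigma Gaussian
# bounds, which is an assumed shape, not the likelihoods used in the paper.
S = np.linspace(0.1, 10.0, 2000)   # grid of sensitivity values (degC per doubling)
dS = S[1] - S[0]

def range_likelihood(lo, hi):
    """Turn a quoted range into a Gaussian likelihood (assumed shape)."""
    mu, sigma = (lo + hi) / 2.0, (hi - lo) / 4.0
    return np.exp(-0.5 * ((S - mu) / sigma) ** 2)

# Bayes theorem with a flat prior: multiply the independent likelihoods.
posterior = (range_likelihood(1.0, 10.0)     # 20th-century constraint
             * range_likelihood(1.5, 6.0)    # volcanic constraint
             * range_likelihood(-0.6, 6.1))  # LGM constraint (widened)
posterior /= posterior.sum() * dS            # normalize to a probability density

mean = (S * posterior).sum() * dS
cdf = np.cumsum(posterior) * dS
lo95, hi95 = S[np.searchsorted(cdf, 0.025)], S[np.searchsorted(cdf, 0.975)]
# The combined 95% interval is narrower than any of the individual constraints.
```

The key qualitative point survives even this crude version: multiplying independent likelihoods always tightens the result relative to the loosest of them, which is why the combined range is narrower than any single study's.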
The mathematics therefore demonstrates what the scientists basically thought all along. Plus ça change indeed…