One of the central tasks of climate science is to predict the sensitivity of climate to changes in carbon dioxide concentration. The answer determines in large measure how serious the consequences of global warming will be. One common measure of climate sensitivity is the amount by which global mean surface temperature would change once the system has settled into a new equilibrium following a doubling of the pre-industrial CO2 concentration. A vast array of thought has been brought to bear on this problem, beginning with Arrhenius’ simple energy balance calculation, continuing through Manabe’s one-dimensional radiative-convective models in the 1960s, and culminating in today’s comprehensive atmosphere-ocean general circulation models. The current crop of models studied by the IPCC ranges from an equilibrium sensitivity of about 1.5°C at the low end to about 5°C at the high end. Differences in cloud feedbacks remain the principal source of uncertainty. There is no guarantee that the high end represents the worst case, or that the low end represents the most optimistic case. While there is at present no compelling reason to doubt the models’ handling of water vapor feedback, it is not out of the question that some unanticipated behavior of the hydrological cycle could make the warming somewhat milder or, on the other hand, much, much worse. Thus, the question naturally arises as to whether one can use information from past climates to check which models have the most correct climate sensitivity.
In this commentary, I will discuss the question "If somebody were to discover that climate variations in the past were stronger than previously thought, what would be the implications for estimates of climate sensitivity?" Pick your favorite time period (Little Ice Age, Medieval Warm Period, Last Glacial Maximum, or Cretaceous): the issues are the same. In considering this question, it is important to keep in mind that the predictions summarized in the IPCC reports are not the result of some kind of statistical fit to past data. Thus, a revision in our picture of past climate variability does not translate in any direct way into a change in the IPCC forecasts. These forecasts are based on comprehensive simulations incorporating the best available representations of basic physical processes. Of course, data on past climates can be very useful in improving these representations. In addition, past data can be used to provide independent estimates of climate sensitivity, which provide a reality check on the models. Nonetheless, the path from data to change in forecast is a subtle one.
Climate doesn’t change all by itself. There’s always a reason, though it may be hard to ferret out. Often, the proximate cause of the climate change is some parameter of the climate system that can be set off from the general collective behavior of the system and considered as a "given," even if it is not external to the system strictly speaking. Such is the case for CO2 concentration. This is an example of a climate forcing. Other climate forcings, such as solar variability and volcanic activity, are more clearly external to the Earth’s climate system. In order to estimate sensitivity from past climate variations, one must identify and quantify the climate forcings. A large class of climate forcings can be translated into a common currency, known as radiative forcing. This is the amount by which the forcing mechanism would change the top-of-atmosphere energy budget, if the temperature were not allowed to change so as to restore equilibrium. Doubling CO2 produces a radiative forcing of about 4 Watts per square meter. The effects of other well-mixed greenhouse gases can be accurately translated into radiative forcings. Forcing caused by changes in the Sun’s brightness, by dust in the atmosphere, or by volcanic aerosols can also be translated into radiative forcing. The equivalence is not so precise in this case, since the geographic and temporal pattern of the forcing is not the same as that for greenhouse gases, but numerous simulations indicate that there is enough equivalence for the translation to be useful.
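The translation from CO2 concentration to radiative forcing is usually done with a simple logarithmic fit (the formula is not given in the commentary above; the 5.35 coefficient is the widely used Myhre et al. 1998 approximation, which gives roughly 3.7 W/m² for a doubling, consistent with the round figure of "about 4" quoted here):

```python
import math

def co2_radiative_forcing(c_ppm, c0_ppm=280.0):
    """Simplified logarithmic fit for CO2 radiative forcing, in W/m^2.

    c_ppm is the CO2 concentration, c0_ppm the reference (pre-industrial)
    concentration. The 5.35 W/m^2 coefficient is the Myhre et al. (1998)
    fit, not a value stated in the commentary itself.
    """
    return 5.35 * math.log(c_ppm / c0_ppm)

# Doubling pre-industrial CO2 (280 -> 560 ppm) gives ~3.7 W/m^2.
print(co2_radiative_forcing(560.0))
```

Because the forcing is logarithmic in concentration, each successive doubling contributes the same increment of forcing.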
Thus, an estimate of climate sensitivity from past data requires an estimate of the magnitude of the past climate changes and of the radiative forcings causing the changes. Both are subject to uncertainties, and to revisions as scientific techniques improve. The mechanical analogy in the following little parable may prove helpful. Down in the dark musty store-rooms of the British Museum, you discover a mysterious box with a hole in the top through which a rod sticks out. The rod supports a platform which has a 1 kilogram brick on it, but the curator won’t let you fuss with the brick, otherwise something might break. For various reasons, though, people in the Museum are thinking of adding a second 1 kg brick to the platform, and you’ve been hired by the Queen to figure out what will happen. Though you can’t mess with the device yourself, you notice that every once in a while a mouse jumps down onto the brick and the platform goes down a little bit, after which it returns to its original level without oscillating. From this you infer that there’s some kind of spring in the box, sitting in molasses or something like that with enough friction to damp out oscillations. Your job amounts to estimating how stiff the spring in the box is, without being allowed to take apart the box or perform any experiments on it. If the spring is very stiff, then putting another brick on the platform won’t cause the platform to sink much further. If the spring is very soft, however, the second brick will cause the platform to go down a great deal, perhaps causing something to break. The displacement of the platform is analogous to global mean temperature, and the stiffness of the spring is analogous to climate sensitivity.
Now, the unfortunate thing is that the mice are too light and come along too infrequently for you to get a good estimate of the stiffness of the spring by just watching the response of the platform to mice jumping on it. However, from looking through other dusty records elsewhere in the basement of the British Museum, you discover some notes from an earlier curator, who had also observed the box. He notes that there used to be big, heavy rats in the Museum basement, and has written down some things about what happens when the rats jump on the platform. From indirect evidence, like footprints in the dust, size of rat droppings, shed fur, plus some incomplete notes left behind by the rat catcher, you infer that the typical rat weighed a quarter kilogram. Now, the curator has left behind some notes about how much the platform drops when a rat jumps onto it from the shelf just above the platform. Unfortunately, the curator was a scholar of Old Uighur, and left behind his notations in the Old Uighur numeration system so his rivals couldn’t read it. Also unfortunately, the curator died before publishing his explanation of the Old Uighur numeration system, and that has been lost to time. Using the same Uighur wheat production records available to the curator, you estimate that his notes mean that the typical displacement is 10 centimeters per rat. From this you estimate that the stiffness of the spring is such that a 1 kilogram brick would cause a 40 centimeter displacement of the platform. Things are looking good. You get paid a handsome sum. Then, one day, to your horror, you open a journal of Uighur studies and find a lead article proving that everybody has been interpreting Uighur wheat production records wrong, and that all previous estimates of what the Uighur numbers mean were off by a factor of two. 
That means that while you thought the typical displacement of the platform was 10 centimeters per rat, the "natural variability" caused by rats jumping on the platform is much greater than you thought. It was actually 20 centimeters, using the new interpretation of the Uighur numbering system. Does that mean you ring up the Museum and say, "I was all wrong — the natural variability was twice what we thought, so it is unlikely that adding a new brick to the platform will cause as much effect as I told you last year!" No, of course you don’t. Since you have no new information about the weight of the rats, the correct inference is that the spring in the box is softer than you thought, so that the predicted effect of adding a brick will be precisely twice what you used to think, and more likely to break something. However, being a cautious chap, you also entertain the notion that maybe the displacement of the platform was more than you thought because the rats were actually fatter than you thought; that would imply less revision in your estimate of the stiffness of the spring, but until you get more data on rat fatness, you can’t really say. If you think all this is obvious, please hold the thought in mind, and bring it back when, towards the end of this commentary, I tell you what Esper et al. wrote in an opinion piece regarding the implications of natural variability observed over the past millennium.
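The arithmetic of the parable, assuming an ideal linear (Hookean) spring, can be made explicit. Halving the stiffness estimate exactly doubles the predicted displacement from the new brick:

```python
G = 9.81  # gravitational acceleration, m/s^2

def spring_stiffness(mass_kg, displacement_m):
    """Infer the spring constant k (N/m) from an observed static displacement."""
    return mass_kg * G / displacement_m

def displacement(mass_kg, k):
    """Predicted static displacement (m) of a mass on a linear spring of stiffness k."""
    return mass_kg * G / k

# Original reading of the records: a 0.25 kg rat depresses the platform 10 cm.
k_old = spring_stiffness(0.25, 0.10)
print(displacement(1.0, k_old))  # 1 kg brick -> 0.40 m (40 cm)

# Revised Uighur reading: the same rat actually caused a 20 cm displacement.
# With the rat's weight unchanged, the spring must be half as stiff,
# so the predicted effect of the brick doubles rather than shrinks.
k_new = spring_stiffness(0.25, 0.20)
print(displacement(1.0, k_new))  # 1 kg brick -> 0.80 m (80 cm)
```

The key point is that the revised record changes only the response, not the forcing (the rat's weight), so the inferred sensitivity must go up, not down.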
The Last Glacial Maximum (i.e. the most recent "ice age", abbreviated LGM) probably provides the best opportunity for using the past to constrain climate sensitivity. The climate changes are large and reasonably well constrained by observations. Moreover, the forcing mechanisms are quite well known, and one of them is precisely the same as will cause future climate changes. During the LGM, CO2 dropped to 180 parts per million, as compared to pre-industrial interglacial values of about 280 parts per million. Depending on just what you assume about cloud and water vapor distributions, this yields a radiative forcing of about -2.5 Watts per square meter. Global mean temperatures dropped by about 7°C at the LGM. Does this mean that the true climate sensitivity is (7/2.5) = 2.8°C per (Watt per square meter)? That would indicate a terrifying 11.2°C warming in response to a doubling of CO2. Fortunately, this alarming estimate is based on faulty reasoning, because there is a lot more going on at LGM time than just the change in CO2. Some of these things are feedbacks like water vapor, clouds and sea-ice, which could be reasonably presumed to be relevant to the future as well as the past. Other forcings, including the growth and decay of massive Northern Hemisphere continental ice sheets, changes in atmospheric dust, and changes in the ocean circulation, are not likely to have the same kind of effect in a future warming scenario as they did at glacial times. In estimating climate sensitivity such effects must be controlled for, and subtracted out to yield the portion of climate change attributable to CO2. Broadly speaking, we know that it is unlikely that current climate models are systematically overestimating sensitivity to CO2 by very much, since most of the major models can get into the ballpark of the correct tropical and Southern Hemisphere cooling when CO2 is dropped to 180 parts per million.
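The naive (and, as just noted, faulty) attribution of the entire LGM cooling to CO2 works out as follows, using the round numbers given above:

```python
delta_t_lgm = -7.0       # global mean LGM cooling, deg C (per the text)
forcing_co2_lgm = -2.5   # radiative forcing from the CO2 drop to 180 ppm, W/m^2
forcing_2xco2 = 4.0      # forcing from doubled CO2, W/m^2 (per the text)

# Naive sensitivity: credit the whole 7 C cooling to the CO2 forcing alone.
sensitivity = delta_t_lgm / forcing_co2_lgm  # 2.8 C per (W/m^2)
print(sensitivity * forcing_2xco2)           # ~11.2 C per doubling

# This overestimates the true CO2 sensitivity because the ice sheets,
# dust, and ocean-circulation forcings have not been subtracted out.
```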
No model gets very much cooling south of the Equator without the effect of CO2. Hence, any change in model physics that reduced climate sensitivity would make it much harder to account for the observed LGM cooling. Can we go beyond this rather vague statement and use the LGM to say which of the many models is most likely to have the right climate sensitivity? Many groups are working on this very question right now. Progress has become possible only recently, with the availability of a few long-term coupled atmosphere-ocean simulations of the LGM climate. Time will tell how successful the program will be, but you can be sure that RealClimate is monitoring the pulse of these efforts very closely.
However that shakes out, if somebody were to wake me up in the middle of the night tomorrow and tell me that the LGM tropical temperatures were actually 6°C colder than the present, rather than 3°C as I currently think, my immediate reaction would be "Gosh, the climate sensitivity must be much greater than anybody imagined!" That would be the correct reaction, too, because the rude awakener didn’t suggest anything about revisions in the strength of the forcing mechanisms. Indeed, this is the very reasoning used, in reverse, by Dick Lindzen in the late 1980s, for that decade’s flavor of his argument for why CO2 increase is nothing to worry about. At the time, the prevailing climate reconstruction (CLIMAP) indicated that there was little reduction in tropical surface temperature during the LGM. Making use of mountain snow line data indicating larger temperature changes at altitude, Lindzen proposed a new kind of model of the tropical response, which fit the CLIMAP data and indicated very low sensitivity to CO2 increases in the future. When the CLIMAP data proved to be wrong, and was replaced by more reliable estimates showing a substantial tropical surface temperature drop, Lindzen had to abandon his then-current model and move on to other forms of mischief (first the "cumulus drying" negative water vapor feedback mechanism, since abandoned, and now the "Iris" effect cloud feedback mechanism).
Now, how about the Holocene, including the Little Ice Age and Medieval Warm Period that seem to figure so prominently in many skeptics’ tracts? This is a far harder row to hoe, because the changes in both forcing and response are small and subject to large uncertainties (as we have discussed in connection with the "Hockey Stick" here). What we do know is that the proposed forcing mechanisms (solar variability and mean volcanic activity) are small. Indeed, the main quandary faced by climate scientists is how to estimate climate sensitivity from the Little Ice Age or Medieval Warm Period at all, given the relatively small forcings over the past 1000 years, and the substantial uncertainties in both the forcings and the temperature changes. The current picture of Holocene climate variations is based not just on tree ring data, but on glacial mass balance and a wide variety of other proxy data. If this state of knowledge were to be revised in such a way as to indicate that the amplitude of the climate variations was larger than previously thought, that could very well call for an upward revision of climate sensitivity.
Indeed, quantitative studies of the Holocene climate variations invariably support this notion (e.g. Hegerl et al., Geophys. Res. Lett., 2003, or Andronova et al., Geophys. Res. Lett., 2004). Such studies can reasonably account for the observed variations as a response to solar and volcanic forcing (and a few secondary things) with energy balance climate models tuned to have a climate sensitivity equivalent to 2.5°C per doubling of CO2. If the estimates of observed variations were made larger, a greater sensitivity would then be required to fit the data. Ironically, even arch-skeptics Soon and Baliunas, who would like to lay most of the blame for recent warming at the doorstep of solar effects, came to a compatible conclusion in their own energy balance model study. Namely, any model that was sensitive enough to yield a large response to recent solar variability would yield an even larger response to radiative forcing from recent (and therefore also future) CO2 changes. As a result, their "best fit" climate sensitivity for the twentieth century is comfortably within the IPCC range. This aspect of their work is rarely if ever mentioned by the authors themselves, and still less in citations of the work in skeptics’ tracts such as that distributed with the "Global Warming Petition Project."
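A minimal sketch of the kind of tuned energy balance model such studies use is given below. The sensitivity and doubling-forcing values follow the figures in the text; the heat capacity is an illustrative effective mixed-layer value assumed here, not a number from the cited papers:

```python
def ebm_response(forcing, dt_years=1.0, sensitivity_per_doubling=2.5,
                 f_2xco2=4.0, heat_capacity=8.0):
    """Zero-dimensional energy balance model: C dT/dt = F(t) - lam * T.

    forcing is a sequence of radiative forcings (W/m^2), one per time step.
    lam is the feedback parameter implied by tuning the model to a given
    equilibrium sensitivity per CO2 doubling. heat_capacity (W yr m^-2 K^-1)
    is an assumed effective ocean mixed-layer value setting the lag.
    Returns the temperature anomaly (deg C) at each step, via forward Euler.
    """
    lam = f_2xco2 / sensitivity_per_doubling  # W m^-2 K^-1
    t = 0.0
    temps = []
    for f in forcing:
        t += dt_years * (f - lam * t) / heat_capacity
        temps.append(t)
    return temps

# A sustained 1 W/m^2 forcing relaxes toward the equilibrium 1/lam = 0.625 C.
temps = ebm_response([1.0] * 200)
print(temps[-1])
```

Notice that the response scales linearly with 1/lam: double the reconstructed temperature swings while holding the solar and volcanic forcings fixed, and the only way to fit them is to double the model's sensitivity, which is exactly the point about the Esper et al. inference below.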
This brings us to the claims made recently by Esper et al. In an opinion piece in Quaternary Science Reviews (J. Esper, R. J. S. Wilson, D. C. Frank, A. Moberg, H. Wanner and J. Luterbacher, "Climate: past ranges and future changes," Quat. Sci. Rev. 24, 2005), they outlined the uncertainties in knowledge of the amplitude of Holocene climate variations, and also a strategy for reducing the uncertainties. We at RealClimate could hardly object to that. Better estimates of the Holocene variability would be of unquestioned value. However, Esper et al. concluded their piece with the statement:
- "So, what would it mean, if the reconstructions indicate a larger (Esper et al., 2002; Pollack and Smerdon, 2004; Moberg et al., 2005) or smaller (Jones et al., 1998; Mann et al., 1999) temperature amplitude? We suggest that the former situation, i.e. enhanced variability during pre-industrial times, would result in a redistribution of weight towards the role of natural factors in forcing temperature changes, thereby relatively devaluing the impact of anthropogenic emissions and affecting future predicted scenarios. If that turns out to be the case, agreements such as the Kyoto protocol that intend to reduce emissions of anthropogenic greenhouse gases, would be less effective than thought."
They go on to qualify their conditional criticism of Kyoto by stating "This scenario, however, does not question the general mechanism established within the protocol, which we believe is a breakthrough," but the political opinions of the authors are not our concern. Neither are we weighing in here on the relative merits of the various Holocene climate reconstructions. What concerns us is that the inference regarding climate sensitivity is precisely opposite to what elementary mathematical and physical analysis dictates it should be. Our correspondents at the Montreal climate negotiations, which concluded last week, report that Esper et al. was given a lot of play by the inaction lobby.

The only major news outlet to pick up on the story, though, was Fox News, whose report by "Junk Science" columnist Steve Milloy here arguably represents a new low in propaganda masquerading as science journalism. Milloy does not mention that Esper et al. is an opinion piece, not a research article. He also fails to mention that Esper et al. do not actually conclude that a downward revision in the importance of CO2 is necessary; they only attempt to say (albeit based on faulty logic) what would happen if higher estimates of climate variation proved right. Milloy also fails to note the final quote supporting Kyoto, for what that’s worth. Of course, it is too much to expect that Milloy would look into other papers on the subject to see if there might be something wrong with the reasoning in Esper et al. The lack of "balance" in this instance is jarring for a network that claims a trademark on the description "Fair and Balanced". What Milloy was engaging in goes beyond mere lack of balance. It is an example of "quote mining", which has become a favored tactic of those seeking to counter sound science with unsound confusion (see the interesting discussion of quote mining on the Corante site).
The fact that no other media outlets have picked up on the unfortunate Esper quote leaves us with some feeling of encouragement that journalists are beginning to be able to filter out bad science, no matter how interesting an article it might make.