
First published response to Lindzen and Choi

Filed under: — gavin @ 8 January 2010

The first published response to Lindzen and Choi (2009) (LC09) has just appeared “in press” (subscription) at GRL. LC09 purported to determine climate sensitivity by examining the response of radiative fluxes at the Top-of-the-Atmosphere (TOA) to ocean temperature changes in the tropics. Their conclusion was that sensitivity was very small, in obvious contradiction to the models.

In their guest commentary, Trenberth, Fasullo, O'Dell and Wong examine some of the assumptions used in LC09's analysis. They go over the technical details and conclude, somewhat forcefully, that the LC09 results were not robust and do not provide any insight into the magnitudes of climate feedbacks.

Coincidentally, there is a related paper (Chung, Yeomans and Soden) also in press (sub. req.) at GRL which also compares the feedbacks in the models to the satellite radiative flux measurements and also comes to the conclusion that the models aren’t doing that badly. They conclude that

In spite of well-known biases of tropospheric temperature and humidity in climate models, comparisons indicate that the intermodel range in the rate of clear-sky radiative damping are small despite large intermodel variability in the mean clear-sky OLR. Moreover, the model-simulated rates of radiative damping are consistent with those obtained from satellite observations and are indicative of a strong positive correlation between temperature and water vapor variations over a broad range of spatiotemporal scales.

It will take a little time to assess the issues that have been raised (and these papers are unlikely to be the last word), but it is worth making a couple of points about the process. First off, LC09 was not a nonsense paper – that is, it didn't have completely obvious flaws that should have been caught by peer review (unlike, say, McLean et al, 2009 or Douglass et al, 2008). Even if it now turns out that the analysis was not robust, it was still worth attempting, and the work being done to re-examine these questions is a useful contribution to the literature – even if the conclusion is that this approach to the analysis is flawed.

More generally, this episode underlines the danger in reading too much into single papers. For papers that appear to go against the mainstream (in either direction), the likelihood is that the conclusions will not stand up for long, but sometimes it takes a while for this to be clear. Research at the cutting edge – where you are pushing the limits of the data or the theory – is like that. If the answers were obvious, we wouldn’t need to do research.

Update: More commentary at DotEarth including a response from Lindzen.

Lindzen and Choi Unraveled

Filed under: — group @ 8 January 2010

Guest Commentary by John Fasullo, Kevin Trenberth and Chris O’Dell

A recent paper by Lindzen and Choi in GRL (2009) (LC09) purported to demonstrate that climate had a strong negative feedback and that climate models are quite wrong in their relationships between changes in surface temperature and corresponding changes in outgoing radiation escaping to space. This publication has been subject to a considerable amount of hype, for instance apparently “[LC09] has absolutely, convincingly, and irrefutably proven the theory of Anthropogenic Global Warming to be completely false.” and “we now know that the effect of CO2 on temperature is small, we know why it is small, and we know that it is having very little effect on the climate”. Not surprisingly, LC09 has also been highly publicized in various contrarian circles.

The carbon dioxide theory of Gilbert Plass

Filed under: — gavin @ 4 January 2010

Gilbert Plass was one of the pioneers of the calculation of how solar and infrared radiation affects climate and climate change. In 1956 he published a series of papers on radiative transfer and the role of CO2, including a relatively ‘pop’ piece in American Scientist. This has just been reprinted (as an abridged version) along with commentaries from James Fleming, a historian of science, and me. Some of the intriguing things about this article are that Plass (writing in 1956 remember) estimates that a doubling of CO2 would cause the planet to warm 3.6°C, that CO2 levels would rise 30% over the 20th Century and it would warm by about 1°C over the same period. The relevant numbers from the IPCC AR4 are a climate sensitivity of 2 to 4.5°C, a CO2 rise of 37% since the pre-industrial and a 1900-2000 trend of around 0.7°C. He makes a lot of other predictions (about the decrease in CO2 during ice ages, the limits of nuclear power and the like), but it’s worth examining his apparent prescience on these three quantitative issues. Was he prophetic, or lucky, or both?

Unforced variations 2

Filed under: — gavin @ 1 January 2010

Continuation of the open thread. Please use these threads to bring up things that are creating ‘buzz’ rather than having news items get buried in comment threads on more specific topics. We’ll promote the best responses to the head post.

Knorr (2009): Case in point, Knorr (GRL, 2009) is a study about how much of the human emissions are staying in the atmosphere (around 40%) and whether that is detectably changing over time. It does not undermine the fact that CO2 is rising. The confusion in the denialosphere is based on a misunderstanding between the ‘airborne fraction of CO2 emissions’ (not changing very much) and the ‘CO2 fraction in the air’ (changing very rapidly), led in no small part by a misleading headline (subsequently fixed) on the ScienceDaily news item. Update: MT/AH point out the headline came from an AGU press release (sigh…). SkepticalScience has a good discussion of the details including some other recent work by Le Quéré and colleagues.
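To see why the two quantities are different, a back-of-the-envelope calculation helps. This sketch is not from Knorr (2009); it uses round illustrative numbers (the ~2.13 GtC-per-ppm conversion is standard, the emission and growth rates are rough ballpark figures only):

```python
# Illustrative sketch of 'airborne fraction of emissions' vs 'CO2 fraction
# in the air'. Numbers are rough round figures for illustration, not data
# from Knorr (2009).

GTC_PER_PPM = 2.13  # ~2.13 GtC of atmospheric carbon per 1 ppm of CO2

def airborne_fraction(co2_rise_ppm_per_yr, emissions_gtc_per_yr):
    """Fraction of emitted carbon that remains in the atmosphere."""
    increase_gtc = co2_rise_ppm_per_yr * GTC_PER_PPM
    return increase_gtc / emissions_gtc_per_yr

# Roughly 2 ppm/yr observed rise against ~9-10 GtC/yr total emissions
# gives an airborne fraction in the neighborhood of 40-45%: approximately
# constant over time, even while the CO2 concentration itself (the
# 'fraction in the air') climbs steadily year after year.
frac = airborne_fraction(2.0, 9.5)
```

The point of the sketch: the airborne fraction is a *ratio* of two growing quantities, so it can stay near 40% indefinitely while atmospheric CO2 rises without pause.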

Update: Some comments on the John Coleman/KUSI/Joe D’Aleo/E. M. Smith accusations about the temperature records. Their claim is apparently that coastal station absolute temperatures are being used to estimate the current absolute temperatures in mountain regions, and that the anomalies there are warm because the coast is warmer than the mountain. This is simply wrong. What is actually done is that temperature anomalies are calculated locally from local baselines, and these anomalies can be interpolated over quite large distances. This is perfectly fine, and checkable by looking at the pairwise correlations of monthly anomalies between different stations (London-Paris or New York-Cleveland or LA-San Francisco). The second thread in their ‘accusation’ is that the agencies are deleting records, but this just underscores their lack of understanding of where the GHCN data set actually comes from. This is thoroughly discussed in Peterson and Vose (1997), which indicates where the data came from and which data streams give real time updates. The principal one is the CLIMAT updates of monthly mean temperature via the WMO network of reports. These are distributed by the National Met. Services, who have decided which stations they choose to produce monthly mean data for (and how it is calculated), and this has absolutely nothing to do with NCDC or NASA.
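The anomaly method described above can be sketched in a few lines. This is a toy illustration with invented numbers, not the actual GISTEMP/GHCN code: a warm coastal station and a cold mountain station share the same regional weather signal, so once each is referenced to its own baseline, their anomalies correlate strongly despite a large offset in absolute temperature:

```python
# Toy sketch of why anomalies (not absolute temperatures) are interpolated.
# Station values are invented for illustration; this is not GHCN data.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def anomaly(series):
    """Deviations from the station's OWN mean (its local baseline)."""
    m = sum(series) / len(series)
    return [t - m for t in series]

# A shared regional weather signal, plus small station-specific noise.
weather = [0.5, -0.3, 1.1, -0.8, 0.2, 0.9]
noise   = [0.1, -0.1, 0.05, -0.05, 0.1, -0.1]

coast    = [15.0 + w for w in weather]                 # warm station
mountain = [2.0 + w + n for w, n in zip(weather, noise)]  # cold station

# Absolute temperatures differ by ~13 degrees, yet the anomalies are
# almost perfectly correlated -- which is what justifies interpolating
# anomalies (never absolute temperatures) between nearby stations.
r = pearson(anomaly(coast), anomaly(mountain))
```

Because each station is compared only against its own baseline, the coast-versus-mountain offset cancels out entirely; the correlation `r` picks up only the shared weather variability.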

Further Update: NCDC has a good description of their procedures now available, and Zeke Hausfather has a very good explanation of the real issues on the Yale Forum.
