The wisdom of Solomon

Filed under: — gavin @ 29 January 2010

A quick post for commentary on the new Solomon et al paper in Science express. We’ll try and get around to discussing this over the weekend, but in the meantime I’ve moved some comments over. There is some commentary on this at DotEarth, and some media reports on the story – some good, some not so good. It seems like a topic that is ripe for confusion, and so here are a few quick clarifications that are worth making.

First of all, this is a paper about internal variability of the climate system in the last decade, not about additional factors that drive climate. Second, this is a discussion about stratospheric water vapour (10 to 15 km above the surface), not water vapour in general. Stratospheric water vapour comes from two sources – the uplift of tropospheric water through the very cold tropical tropopause (both as vapour and as condensate), and the oxidation of methane in the upper stratosphere (CH4 + 2O2 → CO2 + 2H2O; NB: this is just a schematic, the actual chemical pathways are more complicated). There isn’t very much of it (between 3 and 6 ppmv), and so small changes (~0.5 ppmv) are noticeable.

The decreases seen in this study are in the lower stratosphere and are likely dominated by a change in the flux of water through the tropopause. A change in stratospheric water vapour because of the increase in methane over the industrial period would be a forcing of the climate (and is one of the indirect effects of methane we discussed last year), but a change in the tropopause flux is a response to other factors in the climate system. These might include El Nino/La Nina events, increases in Asian aerosols, or solar impacts on near-tropopause ozone – but this is not addressed in the paper and will take a little more work to figure out.

Update: This last paragraph was probably not as clear as it should have been. If the lower stratospheric water vapour (LSWV) is relaxing back to some norm after the 1997/1998 El Nino, then what we are seeing is internal variability in the system, which might have some implications for feedbacks to increasing GHGs; my estimate is that this would be an amplifying feedback (warmer SSTs leading to more LSWV). If we are seeing changes to the tropopause temperatures as an indirect impact of increased Asian aerosol emissions or solar-driven ozone changes, then this might be better thought of as affecting the efficacy of those forcings rather than implying some change in sensitivity.

The study includes an estimate of the effect of the observed stratospheric water decadal decrease by calculating the radiation flux with and without the change, and comparing this to the increase in CO2 forcing over the same period. This implicitly assumes that the change can be regarded as a forcing. However, whether that is an appropriate calculation or not needs some careful consideration. Finally, no-one has yet looked at whether climate models (which have plenty of decadal variability too) have phenomena that resemble these observations that might provide some insight into the causes.
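For scale, the CO2 side of that comparison can be sketched with the standard simplified logarithmic forcing expression. This is a back-of-envelope sketch, not the paper's calculation, and the concentration values are only approximate:

```python
import math

def co2_forcing(c_ppm, c0_ppm):
    # Simplified logarithmic expression for CO2 radiative forcing (W/m^2)
    return 5.35 * math.log(c_ppm / c0_ppm)

# Approximate global mean CO2 concentrations: ~369 ppm in 2000, ~387 ppm in 2009
decadal_forcing = co2_forcing(387.0, 369.0)
print(f"CO2 forcing increase over the decade: ~{decadal_forcing:.2f} W/m^2")
```

This gives a few tenths of a W/m² for the decade, which is the kind of number the stratospheric water vapour change is being compared against.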

The IPCC is not infallible (shock!)

Filed under: — group @ 19 January 2010 - (Italian)

Like all human endeavours, the IPCC is not perfect. Despite the enormous efforts devoted to producing its reports with multiple levels of peer review, some errors will sneak through. Most of these will be minor and inconsequential, but sometimes they might be more substantive. As many people are aware (and as John Nielsen-Gammon outlined in a post last month and Rick Piltz goes over today), there is a statement in the second volume of the IPCC report (WG2), concerning the rate at which Himalayan glaciers are receding, that is not correct and not properly referenced.


2009 temperatures by Jim Hansen

Filed under: — group @ 17 January 2010 - (Français)

This is Hansen et al’s end of year summary for 2009 (with a couple of minor edits). Update: A final version of this text is available here.

If It’s That Warm, How Come It’s So Damned Cold? 

 
by James Hansen, Reto Ruedy, Makiko Sato, and Ken Lo
 
The past year, 2009, tied as the second warmest year in the 130 years of global instrumental temperature records, in the surface temperature analysis of the NASA Goddard Institute for Space Studies (GISS). The Southern Hemisphere set a record as the warmest year for that half of the world. Global mean temperature, as shown in Figure 1a, was 0.57°C (1.0°F) warmer than climatology (the 1951-1980 base period). Southern Hemisphere mean temperature, as shown in Figure 1b, was 0.49°C (0.88°F) warmer than the same climatology.


Figure 1. (a) GISS analysis of global surface temperature change. Green vertical bar is estimated 95 percent confidence range (two standard deviations) for annual temperature change. (b) Hemispheric temperature change in GISS analysis. (Base period is 1951-1980. This base period is fixed consistently in GISS temperature analysis papers – see References. Base period 1961-1990 is used for comparison with published HadCRUT analyses in Figures 3 and 4.)

The global record warm year, in the period of near-global instrumental measurements (since the late 1800s), was 2005. Sometimes it is asserted that 1998 was the warmest year. The origin of this confusion is discussed below. There is a high degree of interannual (year-to-year) and decadal variability in both global and hemispheric temperatures. Underlying this variability, however, is a long-term warming trend that has become strong and persistent over the past three decades. The long-term trends are more apparent when temperature is averaged over several years. The 60-month (5-year) and 132-month (11-year) running mean temperatures are shown in Figure 2 for the globe and the hemispheres. The 5-year mean is sufficient to reduce the effect of the El Niño – La Niña cycles of tropical climate. The 11-year mean minimizes the effect of solar variability – the brightness of the sun varies by a measurable amount over the sunspot cycle, which is typically of 10-12 year duration.
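The running-mean smoothing described above can be sketched in a few lines. This is a minimal illustration with made-up annual anomaly values; the GISS analysis itself works with monthly data over much longer windows:

```python
def running_mean(series, window):
    # Centered running mean; entries where the window does not fit are None
    half = window // 2
    return [
        sum(series[i - half:i + half + 1]) / window
        if half <= i < len(series) - half else None
        for i in range(len(series))
    ]

# Illustrative annual-mean anomalies (degrees C); a 5-point window
# damps ENSO-scale year-to-year wiggles while keeping the trend
anoms = [0.10, 0.35, 0.20, 0.45, 0.30, 0.55, 0.40]
print(running_mean(anoms, 5))
```

The smoothed series varies much less than the raw one, which is the point of the 5-year and 11-year means in Figure 2.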


Plass and the Surface Budget Fallacy

Filed under: — raypierre @ 13 January 2010

RealClimate is run by a rather loosely organized volunteer consortium of people with day jobs that in and of themselves can be quite consuming of attention. And so it came to pass that the first I learned about Gavin’s interest in the work of Plass was — by reading RealClimate! In fact, David Archer and I have a book due to appear this year from Wiley/Blackwell (The Warming Papers), a collection of historic papers on global warming together with interpretive essays by David and myself. Needless to say, we pay a lot of attention to the seminal work of Plass in this book. His 1956 QJRMS technical paper on radiative transfer, which is largely the basis of his more popular writings on global warming, was one of the papers we chose to reprint in our collection. In reading historic papers, it is easy to fall into the trap of assuming that investigators of the past were working from the same underlying set of assumptions in common use today. Through a very close reading of the paper, David and I noticed something about the way Plass estimated the surface temperature increase that Gavin and all previous commentators on Plass — including Kaplan himself — seem to have overlooked.


L&C, GRL, comments on peer review and peer-reviewed comments

Filed under: — gavin @ 10 January 2010

I said on Friday that I didn’t think that Lindzen and Choi (2009) was obviously nonsense. Well, a number of people have disagreed with me, and in doing so, have presented some of the back story on how the response was handled. I think this deserves to be more widely known in the hope that it will generate some discussion in the community about how such situations might be dealt with in the future.

First published response to Lindzen and Choi

Filed under: — gavin @ 8 January 2010

The first published response to Lindzen and Choi (2009) (LC09) has just appeared “in press” (subscription) at GRL. LC09 purported to determine climate sensitivity by examining the response of radiative fluxes at the Top-of-the-Atmosphere (TOA) to ocean temperature changes in the tropics. Their conclusion was that sensitivity was very small, in obvious contradiction to the models.

In their guest commentary, Trenberth, Fasullo, O’Dell and Wong examine some of the assumptions that were used in LC09’s analysis, go over the technical details, and conclude, somewhat forcefully, that the LC09 results were not robust and do not provide any insight into the magnitudes of climate feedbacks.

Coincidentally, there is a related paper (Chung, Yeomans and Soden), also in press (sub. req.) at GRL, which compares the feedbacks in the models to the satellite radiative flux measurements and likewise finds that the models aren’t doing that badly. They conclude that

In spite of well-known biases of tropospheric temperature and humidity in climate models, comparisons indicate that the intermodel range in the rate of clear-sky radiative damping are small despite large intermodel variability in the mean clear-sky OLR. Moreover, the model-simulated rates of radiative damping are consistent with those obtained from satellite observations and are indicative of a strong positive correlation between temperature and water vapor variations over a broad range of spatiotemporal scales.

It will take a little time to assess the issues that have been raised (and these papers are unlikely to be the last word), but it is worth making a couple of points about the process. First off, LC09 was not a nonsense paper – that is, it didn’t have completely obvious flaws that should have been caught by peer review (unlike, say, McLean et al, 2009 or Douglass et al, 2008). Even if it now turns out that the analysis was not robust, it was still worth trying, and the work being done to re-examine these questions is a useful contribution to the literature – even if the conclusion is that this approach to the analysis is flawed.

More generally, this episode underlines the danger in reading too much into single papers. For papers that appear to go against the mainstream (in either direction), the likelihood is that the conclusions will not stand up for long, but sometimes it takes a while for this to be clear. Research at the cutting edge – where you are pushing the limits of the data or the theory – is like that. If the answers were obvious, we wouldn’t need to do research.

Update: More commentary at DotEarth including a response from Lindzen.

Lindzen and Choi Unraveled

Filed under: — group @ 8 January 2010

Guest Commentary by John Fasullo, Kevin Trenberth and Chris O’Dell

A recent paper by Lindzen and Choi in GRL (2009) (LC09) purported to demonstrate that climate had a strong negative feedback and that climate models are quite wrong in their relationships between changes in surface temperature and corresponding changes in outgoing radiation escaping to space. This publication has been subject to a considerable amount of hype, for instance apparently “[LC09] has absolutely, convincingly, and irrefutably proven the theory of Anthropogenic Global Warming to be completely false.” and “we now know that the effect of CO2 on temperature is small, we know why it is small, and we know that it is having very little effect on the climate”. Not surprisingly, LC09 has also been highly publicized in various contrarian circles.

The carbon dioxide theory of Gilbert Plass

Filed under: — gavin @ 4 January 2010

Gilbert Plass was one of the pioneers of the calculation of how solar and infrared radiation affects climate and climate change. In 1956 he published a series of papers on radiative transfer and the role of CO2, including a relatively ‘pop’ piece in American Scientist. This has just been reprinted (as an abridged version) along with commentaries from James Fleming, a historian of science, and me. Some of the intriguing things about this article are that Plass (writing in 1956 remember) estimates that a doubling of CO2 would cause the planet to warm 3.6ºC, that CO2 levels would rise 30% over the 20th Century and it would warm by about 1ºC over the same period. The relevant numbers from the IPCC AR4 are a climate sensitivity of 2 to 4.5ºC, a CO2 rise of 37% since the pre-industrial and a 1900-2000 trend of around 0.7ºC. He makes a lot of other predictions (about the decrease in CO2 during ice ages, the limits of nuclear power and the like), but it’s worth examining his apparent prescience on these three quantitative issues. Was he prophetic, or lucky, or both?
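As a rough consistency check on those numbers, one can ask what Plass's sensitivity implies for his projected 30% CO2 rise, assuming a simple logarithmic CO2-temperature relation. That logarithmic shorthand is my assumption here, not Plass's actual radiative calculation:

```python
import math

def equilibrium_warming(sensitivity, co2_ratio):
    # Equilibrium warming for a given CO2 ratio, with sensitivity in C per doubling
    return sensitivity * math.log(co2_ratio) / math.log(2.0)

# Plass's numbers: 3.6 C per doubling and a 30% CO2 rise over the 20th century
print(round(equilibrium_warming(3.6, 1.30), 2))
```

This comes out near 1.4°C at equilibrium, a bit above his ~1°C century estimate and well above the observed ~0.7°C trend – which is the direction you would expect, since the transient response lags equilibrium because of ocean thermal inertia.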

Unforced variations 2

Filed under: — gavin @ 1 January 2010

Continuation of the open thread. Please use these threads to bring up things that are creating ‘buzz’ rather than having news items get buried in comment threads on more specific topics. We’ll promote the best responses to the head post.

Knorr (2009): Case in point, Knorr (GRL, 2009) is a study about how much of the human emissions are staying in the atmosphere (around 40%) and whether that is detectably changing over time. It does not undermine the fact that CO2 is rising. The confusion in the denialosphere is based on a misunderstanding between the ‘airborne fraction of CO2 emissions’ (not changing very much) and the ‘CO2 fraction in the air’ (changing very rapidly), led in no small part by a misleading headline (subsequently fixed) on the ScienceDaily news item. Update: MT/AH point out the headline came from an AGU press release (Sigh…). SkepticalScience has a good discussion of the details, including some other recent work by Le Quéré and colleagues.
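The distinction between the two quantities can be made concrete with some illustrative (entirely made-up) numbers:

```python
# Illustrative yearly values (not real data): human emissions and the yearly
# rise in atmospheric carbon, both in GtC
emissions = [8.0, 8.3, 8.6, 8.9]
atmos_increase = [3.2, 3.4, 3.4, 3.6]

# 'Airborne fraction of CO2 emissions': roughly constant near 0.4...
airborne_fraction = [inc / emi for inc, emi in zip(atmos_increase, emissions)]

# ...while the 'CO2 fraction in the air' (the concentration itself) keeps
# climbing year after year (~2.1 GtC per ppm is an approximate conversion)
concentration = [380.0]
for inc in atmos_increase:
    concentration.append(concentration[-1] + inc / 2.1)
print(airborne_fraction, concentration)
```

A constant airborne fraction and a rapidly rising concentration are entirely compatible, which is the point Knorr's critics missed.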

Update: Some comments on the John Coleman/KUSI/Joe D’Aleo/E. M. Smith accusations about the temperature records. Their claim is apparently that coastal station absolute temperatures are being used to estimate the current absolute temperatures in mountain regions, and that the anomalies there are warm because the coast is warmer than the mountains. This is simply wrong. What is actually done is that temperature anomalies are calculated locally from local baselines, and these anomalies can be interpolated over quite large distances. This is perfectly fine and checkable by looking at the pairwise correlations of monthly anomalies between different stations (London-Paris or New York-Cleveland or LA-San Francisco). The second thread in their ‘accusation’ is that the agencies are deleting records, but this just underscores their lack of understanding of where the GHCN data set actually comes from. This is thoroughly discussed in Peterson and Vose (1997), which indicates where the data came from and which data streams give real-time updates. The principal one is the CLIMAT updates of monthly mean temperature via the WMO network of reports. These are distributed by the National Met. Services, who decide which stations they produce monthly mean data for (and how it is calculated); this has absolutely nothing to do with NCDC or NASA.
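A toy illustration of why anomalies, not absolute temperatures, are interpolated. The station values here are hypothetical, chosen only to show that two stations with very different absolute temperatures can share nearly identical departures from their own normals:

```python
def anomalies(temps, baseline):
    # Departures of each month from the station's own baseline climatology
    return [round(t - b, 2) for t, b in zip(temps, baseline)]

# A warm coastal station and a cold mountain station: absolute values differ
# by ~10 C, but their anomalies track each other closely, which is what makes
# interpolating anomalies over large distances defensible
coast = anomalies([15.2, 15.8, 16.1], [15.0, 15.3, 15.6])
mountain = anomalies([5.3, 5.7, 6.0], [5.0, 5.2, 5.5])
print(coast, mountain)
```

Interpolating the absolute temperatures would indeed be nonsense; interpolating the anomalies is not, and the high pairwise correlations between real station anomalies are what justify it.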

Further Update: NCDC has a good description of their procedures now available, and Zeke Hausfather has a very good explanation of the real issues on the Yale Forum.