  1. Re the GISTEMP Land-Ocean Index graph: I should think that an 8-year RUNNING MEAN would give an astonishingly good fit to the data – one that would be statistically sound as a regression. But would we welcome the eventual outcome of its prediction?

    Comment by Charles Raguse — 11 Jan 2008 @ 10:27 AM

  2. Gavin and Stefan – nice discussion. I saw Tierney’s post yesterday and thought – “why is anybody trying to do a comparison of such a short time period? It’s meaningless!”

    That said, I do have a question that’s come up in my mind after some discussions with perhaps more knowledgeable “skeptics” than Tierney and Pielke Jr. As you put it, “the climate system has enormous amounts of variability on day-to-day, month-to-month, year-to-year and decade-to-decade periods,” and this seems to derive from the chaotic forces that determine weather itself.

    So the question is – do we have a good handle on the natural decay times for random perturbations on the climate system? Pinatubo provided one example of a delta-function perturbation, and it seemed like the response decayed over roughly a year. But there are certainly longer-term responses as well: the deep ocean we know has 1000-year response times, the large icesheets presumably also on about that timescale. What about in between? Wouldn’t a natural climate system response time of several decades make it almost impossible to get meaningful averages out of climate system variables, if they could be significantly influenced by random fluctuations accumulated over such long time periods? Any good references on this issue?

    [Response: It's clear from the data that there is no one time scale for the climate system response to forcings. There are short term responses to the seasonal cycle for instance, longer responses to Pinatubo, and even longer decadal responses to GHG forcing - which could in fact be longer once you factor in vegetation or ice sheet impacts. Thus for any averaging period, one needs to be cognizant of the slower components for which that period won't average over the noise. The deep ocean instrumental records are particularly problematic in this respect. Glaciers are a little different because they are integrated metrics and have some averaging already built in. - gavin]

    Comment by Arthur Smith — 11 Jan 2008 @ 10:34 AM

  3. In the category of comparing “long-term climate change to short-term weather variability,” Boston Globe columnist Jeff Jacoby has offered something that may be new. In a Jan. 6 column headlined “Br-r-r! Where did global warming go?” he used a list of recent instances of unexpected coldness to suggest that the planet may in fact be cooling, not warming. It’s interesting that he admitted that his data are anecdotal, but that he persisted at the level of global climate generalization anyway. (Have your cake and eat it too?) He wrote that “all of these may be short-lived weather anomalies, mere blips in the path of the global climatic warming that Al Gore and a host of alarmists proclaim the deadliest threat we face. But what if the frigid conditions that have caused so much distress in recent months signal an impending era of global cooling?”

    Comment by Steven T. Corneliussen — 11 Jan 2008 @ 10:37 AM

  4. Oh do come on.
    I have seen that the Hadley Centre has said that the global average temperatures between 2001 and 2007 are statistically indistinguishable: each year has a temperature that falls within the error bars of all the years 2001-2007. That being the case, you can say NOTHING about any trend in this data set except that the data indicate no change in temperature over that period. Whether you think that is evidence that global warming has halted is debatable, but you can’t bring any statistical analysis to bear to show that the temperature change in the period was other than zero.
    You are working hard with a lot of statistical analysis to see structure in the 2001-2007 period that you are just not justified in seeing. We know nothing about any trend in this period because we haven’t measured a trend. The measurements show the temperature has been constant, and no amount of ‘analysis’ on your part will change that.

    Comment by Christian Desjardins — 11 Jan 2008 @ 10:40 AM
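
The statistical point at issue in the comment above can be checked directly: with only seven or eight annual values, the standard error of a fitted trend is comparable to the trend itself, so very little can be concluded either way. A minimal sketch in Python, using invented anomaly values of roughly GISTEMP magnitude (illustrative only, not the actual record):

```python
import numpy as np

# Invented annual anomalies for 2000-2007 (illustrative only, not real data)
years = np.arange(2000, 2008)
anoms = np.array([0.33, 0.48, 0.56, 0.55, 0.49, 0.62, 0.54, 0.57])

# Ordinary least-squares trend and its standard error
n = len(years)
x = years - years.mean()
slope = np.sum(x * (anoms - anoms.mean())) / np.sum(x ** 2)
resid = anoms - (anoms.mean() + slope * x)
se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum(x ** 2))

# With 8 points the ~95% (2-sigma) range is about as wide as the trend
# itself, and OLS understates the uncertainty further if the years are
# autocorrelated - so neither "warming stopped" nor a precise warming
# rate can be read off so few points.
lo, hi = slope - 2 * se, slope + 2 * se
print(f"trend = {slope:.3f} C/yr, approx. 95% range {lo:.3f} to {hi:.3f}")
```

The fitted uncertainty range here spans everything from near-zero warming to more than double the long-term rate, which is the post's point: such a short record neither confirms nor refutes anything.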

  5. Congratulations on an excellent post. I thought I would draw attention to a stark example of a national newspaper in the UK repeatedly claiming that global warming has stopped, using exactly the flawed reasoning that you have highlighted – a columnist for ‘The Sunday Telegraph’ has made this claim nine times in the last six months, most recently last Sunday (see halfway down). I have written a number of times to the newspaper to correct the misleading impression he is creating – sometimes they publish the letter, mostly they don’t. But Booker hasn’t stopped repeating his misleading claim. I’m now planning to take the case to the UK Press Complaints Commission on the grounds that it represents a persistent breach of its code on misleading and inaccurate reporting.

    Comment by Bob Ward — 11 Jan 2008 @ 10:48 AM

  6. Gavin-

    Thanks for this post. There are a few clarifications that you probably ought to point out, and I’d ask that you present my comments in full, rather than selectively edit them.

    1. IPCC 2007 issuing a “prediction” starting in 2000 is not really a prediction, since it starts at a time before the prediction is made. As you know, rigorous forecast evaluation begins at the date a prediction is made. Thus, what I prepared and John Tierney showed are simply examples of how a forecast verification is done. In addition, the figure that you point to in IPCC AR4 Chapter 1 really wouldn’t qualify as a rigorous forecast verification (except perhaps for 1990).

    2. One way to look at the comparison of models and short-term data is, as you have, to suggest that it is “misguided” because models are based on the longer term. My view is the opposite — modelers are doing us a disservice by neglecting the short term. If multi-year and decadal variability is so great as to obscure a long-term trend, then it would be nice to see that reflected in the uncertainty estimates of the models for what to expect over the next 10 years. You run very close to suggesting that climate predictions simply cannot be verified except on a multi-decadal time scale, which I think overstates the case at best, and moves modeling outside the realm of falsifiability, and thus away from the scientific method. Hindcast checks are great, but science always works better with falsifiable hypotheses on timescales that allow for feedback into the research process.

    3. You are incorrect when you assert that “Each of these analyses show that the longer term temperature trends are indeed what is expected.” In fact, IPCC 1990 dramatically over-forecast trends, as shown by the IPCC figure that you reference and that I provide here:

    There are perhaps good reasons for this, like the IPCC treatment of aerosols and a subsequent volcanic eruption (but forecast failures always tell us why they were wrong, which is part of their value). IPCC 1995 dramatically lowered its predictions (as I’ll post on our blog soon) and had a much more accurate prediction through 2007. But IPCC 1990 and IPCC 1995 cannot both be consistent with observations, since 1995 cut its decadal average trend by 50%.

    4. Finally, the IPCC has made many predictions in 1990, 1995, 2001, and 2007. To suggest that comparing the evolution of predicted variables with observations is misguided is itself a strange dodge. There are good reasons to compare models with data, and in fact, your deconstruction of the Douglass et al. paper does exactly that. Over the long run, if you believe your predictions to be correct, then a comparison with actual data will eventually prove you to be correct. That there is large variability in the shorter term simply means somewhat larger uncertainty bounds on such predictions. But to avoid forecast verification altogether is a strange position to take.

    Thanks again.

    [Response: Roger, How can you read the post above and claim that we're avoiding forecast verification altogether? This is just perverse. I spend almost all of my time comparing observations with predicted variables, so I don't see what you are getting at in point 4 at all. The point being made is that each comparison has varying degrees of usefulness. 8 year trends in the global mean temperatures are not very useful. 20 year trends are more so. Better still are superposed averages of the response to volcanoes or ENSO variability etc. In each case, models predict a mean response and an estimate of the noise.

    Dealing with your individual points though: IPCC may have been published in 2007, but the model runs reported were made in 2004 in most cases; similarly, Hansen's paper was published in 1988, but the model runs were started in 1985. Thus forecast validation can usefully be made from when the runs started, rather than when they were published. Secondly, the pictures you and John showed were not correct if you wanted to do a short term forecast verification. The line you took from IPCC was not the envelope, or even a fair distribution of the simulations over that short period. You would find that the actual simulations would have a substantially greater error bar (as in the distribution of 8 year trends we mentioned above). Instead you used the IPCC estimate of the long term trend (which has a much smaller uncertainty). This data is easily available, and if you want to do forecast validation properly you should use it. The response you've got from other scientists is precisely because they are aware of this.

    I'm not really sure what was being forecast in 1992, and I'll have to look it up before responding.

    You talk about the absence of short-term validation as being unscientific. This is simply ridiculous. It is easy to see that unpredictable weather noise dominates short term variability. It is well known that this is unpredictable more than a short time ahead. Claiming that the forced climate response must be larger than the weather noise for climate prediction on all time scales is just silly. There are examples where it is - for instance in the response to Pinatubo (for which validated climate model predictions were made ahead of time - Hansen et al 1992) - but this is not in general going to be true. A bigger point is that 'predictions' from climate models do not just mean predicting what is going to happen next year or the next decade. They also predict variables and relationships between variables that haven't yet been measured or analysed - that is just as valid a falsifiability criterion. They can test hypotheses for climate changes in the past and so on. The statistics of the weather make short term climate prediction very difficult - particularly for climate models that are not run with any kind of initialization from observations - this has been said over and over. Why is this hard to understand? - gavin]

    Comment by Roger Pielke. Jr. — 11 Jan 2008 @ 10:53 AM
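
The "distribution of 8 year trends" referred to in the response can be illustrated with a toy Monte Carlo: superpose an assumed forced trend on AR(1) "weather" noise and fit a trend to each simulated 8-year segment. All parameters below (trend, noise amplitude, autocorrelation) are illustrative guesses, not values taken from any model output or observational dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

TREND = 0.02   # assumed forced trend, C/yr (illustrative)
SIGMA = 0.1    # assumed interannual noise std dev, C (illustrative)
PHI = 0.3      # assumed year-to-year autocorrelation (illustrative)
N_SIM = 5000   # number of simulated 8-year segments

def ar1_noise(n):
    """AR(1) 'weather' noise with lag-1 autocorrelation PHI."""
    eps = rng.normal(0, SIGMA * np.sqrt(1 - PHI ** 2), n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = PHI * x[t - 1] + eps[t]
    return x

years = np.arange(8)
trends = np.array([
    np.polyfit(years, TREND * years + ar1_noise(8), 1)[0]
    for _ in range(N_SIM)
])

print(f"mean 8-yr trend:  {trends.mean():.3f} C/yr (true value {TREND})")
print(f"spread (std dev): {trends.std():.3f} C/yr")
print(f"fraction of 8-yr windows with a negative trend: {(trends < 0).mean():.1%}")
```

A non-trivial fraction of 8-year windows shows cooling even though the underlying forced trend is warming throughout, which is the statistical point being made about short-term comparisons.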

  7. Gavin- A further comment. Based on your analysis, is it fair to conclude that linking the 2007 NH sea ice melt to long-term climate change is as misguided as comparing an 8-year record of global temperatures to long-term climate change?

    [Response: The long term decline in Arctic sea ice is by far the more powerful test. But 'equally misguided' is incorrect. The issue is how unusual an event is (and by all analyses the 2007 melt was huge). That makes it extremely unlikely to be on its own just another fluctuation of the noise - the same was true of the 2003 European heat wave. In such cases it is sometimes possible to estimate how much more likely a similar event has become because of climate change, and if that is a substantial increase, it might make sense to apportion some causality. But it depends very much on the case at hand, how unusual it was and how it ties into expectations. - gavin]

    Comment by Roger Pielke. Jr. — 11 Jan 2008 @ 11:01 AM

  8. Bob Ward should take the UK’s Met Office Hadley Center to the courts as they have made exactly the same claim as the telegraph Newspaper. What’s source for the goose is source for the gander as I think they say.

    Comment by Christian Desjardins — 11 Jan 2008 @ 11:18 AM

  9. I offer this brief off-topic introduction. I have been a regular at the accuweather blog for a while now, and I’ve come here because I welcome the more disciplined moderation policy here. I’m weary of seeing the same denier lies repeated relentlessly with no apparent intervention. The mere fact that several of the more egregious deniers there complain of being “censored” here raises my appreciation for this site.

    This particular piece is an excellent example of the kind of reasoned, analytic, and scientific information about AGW that I seek. Regarding Corneliussen’s comment (number 3) above, I do hope that nobody takes Jacoby too seriously. While there is, of course, some statistical likelihood that we are entering a period of cooling, I would remind us that a) even a stuck (analog) clock is right twice a day and b) even if we do enter a period of cooling, my take on our current understanding (which may eventually prove incorrect) is that the anthropogenic warming signal will make such cooling less pronounced than it would otherwise be.

    In short, even a period of cooling would not, of itself, invalidate the AGW hypothesis. Even in that scenario, it seems to me that the question is simply whether the measured temperature is warmer than it would be without the measured dramatic increase in anthropogenic atmospheric CO2.

    Comment by BrooklineTom — 11 Jan 2008 @ 11:23 AM

  10. I am curious why you chose an 8-year running average. Those who suggest that global warming has “stopped” or that the data does not suggest recent global warming usually say “since 2000” or “since 2001”. These would indicate 7-year or 6-year averages, yet you chose an 8-year average for illustration.


    [Response: Because Tierney & Pielke's plot runs from 2000 to 2007 (8 data points), and since we responded to Tierney, we chose the same interval as them. For shorter intervals the problem obviously gets worse and worse. The extreme case is two-year trends. That is the red lines in the plot. They go downward very often. Yet nobody in their right mind would describe this as "global warming stopped in 1981, then again in 1983, then again in 1988 and again in 1990". Or claim that these short coolings cast any doubt on global warming. -stefan]

    Comment by John Lederer — 11 Jan 2008 @ 11:27 AM

  11. Fig. 1.1 of the 4AR reference given clearly shows that since 2001 we have had stable global temperatures (2001 to 2005; now we know this to be true also for 2006 and 2007); this is an observational situation not found at any earlier point since 1990. So, ignoring trends and focusing on year-to-year variability, what would be the correct conclusion to draw?

    [Response: Making statements about statistically significant changes in variability is even harder than statements about trends. Thus, I would simply note it and not draw any particular conclusion. - gavin]

    Comment by Francis Massen — 11 Jan 2008 @ 11:28 AM

  12. For what it’s worth, the blue 8-year trend lines all seem to converge into a positive linear trend between 1995 and 2005, suggesting a consistent increase in temps. Obviously it isn’t worth much since a decade isn’t really long enough to look for climate trends, but I don’t understand what Tierney and Pielke Jr’s logic is. Even a rudimentary understanding of trends and stats renders their point moot.

    Comment by Figen Mekik — 11 Jan 2008 @ 11:30 AM

  13. I’ve been staring at the Hadley data lately, and they are definitely reporting cooling for the last two to three years. It seems this is far more southern hemisphere than northern, and far more ocean than land.

    First, what meaning can be derived from the difference between land and ocean anomalies? Is it accurate to produce a global average which gives the ocean temperature more weight than land? In other words, is an “area average” a good way to determine the actual global trend?

    Second, have the instruments being used to determine sea surface temperature been satisfactorily validated? If I recall correctly, there have been some problems correctly interpreting readings from the instruments which were launched several years ago.

    Or is this more related to the difference in methods Gavin mentioned?

    Comment by Walt Bennett — 11 Jan 2008 @ 11:39 AM

  14. Gavin – thanks for an informative post. Does the 2007 value you plot include December? I noticed from GISTEMP that Dec. ’07 was cooler than other months – can this be attributed to the strong ongoing La Nina in the Pacific? Thanks, David

    [Response: Yes it does include Dec 07. And indeed it does seem to be related to the La Nina. - gavin]

    Comment by David Lea — 11 Jan 2008 @ 11:52 AM

  15. Gavin (7)-

    Thanks, a few replies:

    You ask “How can you read the post above and claim that we’re avoiding forecast verification altogether?”

    Well, my first clue is when you misrepresented what I did, and what John Tierney reported. You characterized the effort as “attempts to validate (or falsify) IPCC projections of global temperature change over the period 2000-2007.” No such claims were made by me, and I don’t think by Tierney.

    In my post I was careful to note the following: “I assume that many climate scientists will say that there is no significance to what has happened since 2000, and perhaps emphasize that predictions of global temperature are more certain in the longer term than shorter term.”

    And John Tierney wrote: “you can’t draw any firm conclusions about the IPCC’s projections — a few years does not a trend make”

    So why misrepresent what we said? Models of open systems cannot in principle be “validated” (see Oreskes et al 1994).

    I simply compared IPCC predictions with observations as an example of how to do a verification, which is standard practice in the atmospheric sciences, but much less so in the climate modeling community (and yes, I think this is indeed the case). Instead of telling your readers all of the reasons that a verification exercise is “misguided” you might have instead constructively pointed to the relevant forecasts with proper uncertainty bars (please do post up the link), or better yet, simply shown how an analysis comparing 2000-2007 with relevant predictions would have been done to your satisfaction.

    Given that you point to the IPCC AR4 Figure 1.1 in positive fashion, I remain confused about your complaint about what I did — I don’t recall you complaining about IPCC efforts in verification previously.

    How about this: we agree that rigorous forecast verification is important. There is also no clear agreement among researchers as to (a) what variables are most important to verify, (b) over what time scales, (c) what actually constitutes the relevant forecasts, and (d) what actually constitutes the relevant observational verification databases. This is then a subject to work through collegially, rather than to try to discredit, dismiss, or suppress.


    PS. As I stated on my blog: if discussing forecast verification in the context of climate model predictions is to be a sign of “skepticism,” then climate science is in bad shape. For the record I accept the consensus of IPCC WG I.

    [Response: Roger, I'm flummoxed. You keep bringing in things that have not been said, and rebuttals to arguments that have not been made. All we have done is point out statistical issues in two things you compared. Oreskes paper is a case in point - I have no desire to argue about the semantics of verification vs evaluation vs validation - none of that is relevant to the overriding principle that you have to compare like with like. In a collegial spirit, I suggest you download the model data directly from PCMDI and really look at what you can learn from it. You might get a better appreciation for the problems here. Verification is not misguided. Your attempt at verification was. - gavin]

    Comment by Roger Pielke. Jr. — 11 Jan 2008 @ 12:08 PM

  16. John Lederer,

    Comment by Hank Roberts — 11 Jan 2008 @ 12:16 PM

  17. Gavin,

    You stated “The red line is the annual global-mean GISTEMP temperature record (though any other data set would do just as well),…

    Can you provide graphs of the other data sets? I want to see them do just as well.


    Jon P

    Comment by Jon Pemberton — 11 Jan 2008 @ 12:21 PM

  18. If you replace 8-year trend lines with n-year, at what value of n do they start to faithfully reflect the underlying trend?

    Comment by Earl Killian — 11 Jan 2008 @ 12:29 PM
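
Earl Killian's question can be explored with the same kind of toy model: assume a fixed underlying trend plus white interannual noise (the numbers below are illustrative guesses, not fitted to real data) and count how often an n-year window shows a trend of the wrong sign:

```python
import numpy as np

rng = np.random.default_rng(1)
TREND, SIGMA, N_SIM = 0.02, 0.1, 4000  # C/yr and C; illustrative guesses

results = {}
for n in (5, 8, 10, 15, 20, 25):
    years = np.arange(n)
    # Count windows where the fitted trend comes out negative
    neg = sum(
        np.polyfit(years, TREND * years + rng.normal(0, SIGMA, n), 1)[0] < 0
        for _ in range(N_SIM)
    )
    results[n] = neg / N_SIM
    print(f"{n:2d}-year windows: {results[n]:6.1%} show a negative trend")
```

With these particular numbers the wrong-sign fraction falls to essentially zero somewhere around 15-20 years, which is one way of making precise the usual rule of thumb that climate trends need a couple of decades. Real data, with autocorrelation and varying forcings, would shift the threshold somewhat.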

  19. I see now that in reporting in item 3 above on the Boston Globe’s Jeff Jacoby’s contribution in the category of misconstruing weather as climate, I probably should have been explicit: my own opinion, for what it’s worth, is that Jacoby’s contribution is preposterous.

    Comment by Steven T. Corneliussen — 11 Jan 2008 @ 12:30 PM

  20. Gavin,

    You indicated that the GISS product extrapolates over the Arctic region. Extrapolate means to infer or estimate by extending or projecting known information. How is this done, and can you explain why it is valid? Is this simply using widely spaced and limited data from within the Arctic itself and extrapolating to cover the entire region, or extrapolating somehow from the perimeter of the Arctic?

    [Response: This is explained in the GISTEMP documentation, but in areas with no SST information (like most of the Arctic), the information from met stations is extrapolated over a radius of 1200 km. That fills in some of the Arctic. A validation for that kind of approach would be a match to the Arctic buoy program - but I haven't looked into that specifically. - gavin]

    Comment by B Buckner — 11 Jan 2008 @ 12:30 PM
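
Mechanically, the extrapolation gavin describes amounts to distance-weighted averaging: a grid cell with no data takes a weighted mean of station anomalies within 1200 km, with the weight falling off with distance. The sketch below uses a linear fall-off to zero at the radius as a simplified reading of the GISTEMP scheme; the station coordinates and anomalies are invented for illustration:

```python
import math

R_EARTH = 6371.0   # km, mean Earth radius
RADIUS = 1200.0    # km, GISTEMP-style influence radius

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance via the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R_EARTH * math.asin(math.sqrt(a))

def extrapolated_anomaly(cell_lat, cell_lon, stations):
    """Linear distance weighting: w = 1 - d/1200, zero beyond 1200 km."""
    wsum = asum = 0.0
    for lat, lon, anom in stations:
        d = distance_km(cell_lat, cell_lon, lat, lon)
        if d < RADIUS:
            w = 1.0 - d / RADIUS
            wsum += w
            asum += w * anom
    return asum / wsum if wsum > 0 else None   # None = cell left blank

# Invented Arctic stations: (lat, lon, anomaly in C)
stations = [(78.25, 15.5, 2.1), (80.6, 58.0, 1.4), (71.3, -156.8, 0.9)]
print(extrapolated_anomaly(82.0, 30.0, stations))
```

A cell with no station inside 1200 km simply stays blank, which is why the GISTEMP Arctic coverage is partial rather than complete.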

  21. “In fact, IPCC 1990 dramatically over-forecast trends, as show by the IPCC figure that you reference and that I provide here:”


    Your blog post neglects to even mention the “good reasons” for “forecasts” to be off – namely the Mt. Pinatubo eruption. Don’t you think you owe it to those who read your blog to point out that:

    A. The IPCC projections do not include volcanic eruptions and

    B. Your graph starts immediately before a volcanic eruption that had a large effect on global temperatures.

    I’m afraid burying it (and IMO downplaying it) in your comments does not draw an accurate picture.

    Comment by Boris — 11 Jan 2008 @ 12:40 PM

  22. Jon P writes “Can you provide graphs of the other data sets? I want to see them do just as well”. I agree. And this, of course, brings up the perennial question: which of the various data sets of average annual global temperature anomaly is closest to the truth? When we get some good scientific analysis which brings an understanding of that question, we will have advanced the yardsticks a very long way.

    Comment by Jim Cripwell — 11 Jan 2008 @ 12:42 PM

  23. Is there an element of “cherry picking” in this article? Would an illustration using HadCRU or RSS with a moving average of either 5, 6, 7, or 9 years have been as helpful to you in making your point?

    [Response: The distribution of trends will be very similar - most of this variability is due to real weather, not instrumental noise. But I'll check and report back. - gavin]

    Comment by Patrick Hadley — 11 Jan 2008 @ 12:43 PM

  24. As a layman who is trying to understand this issue, I’ve been struggling with how to assess the forecast skill of the GCMs. I think this is an extremely important issue and I wish I could find more information regarding tests that would confirm or falsify the forecasting skill of the GCMs. I agree with your point that short-term deviations from a trend are not necessarily significant and do not necessarily indicate that the GCMs are unreliable. However, I think that Roger Pielke, Jr. has a point when he suggests that accurate short-term forecasts are used to show how reliable the GCMs are, but inaccurate short-term forecasts are attributed to random noise in the actual data. This seems like a “heads we win, tails you lose” verification method.

    As another example, I have read here how the GCMs accurately forecast the effect of the Pinatubo eruption. If by accurate, you mean that the GCMs predicted that dumping massive amounts of aerosols into the atmosphere would cause cooling, I am not impressed. The mental model that exists in my head would be just as “accurate”. If, however, by “accurate” you mean that the GCMs made reasonably close predictions of the extent of the cooling, then it seems you are playing the “heads I win, tails you lose” game suggested by Roger Pielke’s comment.

    [Response: Huh? When did we say that short term predictions were good if they agreed with the models? Any test of a model has to be accompanied by an analysis of the uncertainties - and if a test happens to be a good match, but the uncertainties indicate that was just a piece of luck, then it doesn't count. Pinatubo is different because the forcing in that case was very strong, and so it dominated the short term noise (at least in some metrics like the global mean temperature). Look at Hansen et al 2007 for more discussion of this. Pinatubo is more interesting validation than just for temperatures too though. Models get the LW and SW TOA radiation changes, they get the changes in water vapour, they get the impact on the winter NAO. None of those things were programmed in ahead of time. - gavin]

    Comment by Paul — 11 Jan 2008 @ 12:47 PM

  25. Walt, you said you’d been “staring at the data” and that Hadley was “definitely reporting ….” How much statistics do you know? Did you do any math? People are very good at detecting trends simply by looking at images — and very often see what is not actually there.

    This worked to detect large predators — imagining a leopard has low cost compared to not seeing a leopard — but the same talent leads to gross error — imagining a recession is expensive compared to not noticing a recession.

    Looking doesn’t suffice for definitely reporting. Math may.
    How did you arrive at your conclusion?

    Comment by Hank Roberts — 11 Jan 2008 @ 12:47 PM

  26. Hank Roberts:

    I understand the problem with short term trends.

    However, if someone says “the last 6 or 7 years suggest that global warming has stopped”, it is not good form to refute this by showing a spread of 8-year trends. It just inserts another issue.

    Why 8 for the refutation rather than the 6 or 7 year record advanced in the original assertion?

    [Response: Because that was what was used by Tierney. The spreads for 6 or 7 year trends are even higher. - gavin]

    Comment by John Lederer — 11 Jan 2008 @ 1:05 PM

  27. The problem here is that you don’t have weather variability, you have VOLCANO variability. If you factor out the volcano effects (I recall Tamino doing something similar on his site) and then look at the 8-year trend you will get a different picture. That might be an interesting exercise. Tamino??

    Comment by steven mosher — 11 Jan 2008 @ 1:20 PM

  28. Some interesting comments here; I would just like to add my own observations – well, not my “own”, but reports here in Scandinavia. Spitzbergen is having extremely mild conditions, and when I say mild, well: last week it was the warmest place in Norway, at about +8C, with heavy precipitation at their main weather station – 43mm in one day, and no, it was not snow, it was rain. There is still no winter sea ice south of the islands, nor was there any for the past 2 winters. Further south on the mainland of Norway, the high plateaus are having huge snow deposits and it’s only January – something like 10-12 meters already on the Jolsterdals glacier. Because of the milder winter “weather”, precipitation records are being broken all the time; most likely the record of over 5000mm is in danger.

    Comment by George Robinson — 11 Jan 2008 @ 1:24 PM

  29. Gavin-

    Thanks, but this is a pretty lame response: “In a collegial spirit, I suggest you download the model data directly from PCMDI and really look at what you can learn from it.” You are the climate scientist, no? If you are unwilling to explain what is substantively wrong with my efforts to provide an example of forecast verification, then so be it.

    I am quite confident in my conclusions from this exercise as summarized on my blog Prometheus, and nothing that you say here contradicts those conclusions whatsoever:

    1) Nothing really can now be said about the skill of the 2007 IPCC predictions.

    2) By contrast IPCC dramatically over-predicted temperature increases in its 1990 report.

    For 1995, 2001 (and some interesting surprises) please tune in next week.

    Gavin, if you do decide to provide substantive critiques of the two conclusions above please do share them, as I still have absolutely no idea what your complaint about this exercise actually is, other than the fact that it took place.

    [Response: You are again changing the subject. Who ever claimed that the 2007 IPCC projections had been shown to be skillful? I need to look at the 1990/1992 reports in more detail to comment on point two (as I said above). If you don't 'get' what my complaint was after all this, I am surprised, but I will repeat it concisely: 1) You need to compare like with like. 2) Long-term trends have different statistical properties than short term variability. 3) Any verification attempt needs to take that into account. - gavin]

    Comment by Roger Pielke. Jr. — 11 Jan 2008 @ 1:42 PM

  30. A quick question about the 1997/1998 El Nino and the annual mean growth in atmospheric CO2 concentrations that occurred in 1998. See:

    Is there a connection that’s obvious from the models, or other studies?

    (Also, what’s the html tag for the tilde? That’d be useful to post in a handy location, given how often El Nino/La Nina events come up.)

    Comment by Jamie Cate — 11 Jan 2008 @ 1:52 PM

  31. Roger Pielke Jr is capable of doing good work, but too often he just likes to be provocative and will be disingenuous to do it. His latest critique of the IPCC is another unfortunate example of the latter.

    Roger took a quote from a comment of mine on RealClimate about Environmental Defense’s website. I wrote that I liked the Q&A section where readers could submit questions about climate change. Roger on his blog dishonestly claimed that Dr Judith Curry made my comment and claimed Dr Curry was endorsing Environmental Defense’s politics. Roger then blocked my comments when I tried to correct his mistaken claims.

    It’s a bit of a stretch for him to complain that RC is selectively editing his comments.

    Comment by Joseph O'Sullivan — 11 Jan 2008 @ 1:54 PM

  32. Gavin-

    Please explain how you accounted for short-term variability in your own effort at verification here (other than to say it doesn’t matter when looking at trends):

    My approach to verification is identical to yours in the Hansen post that you link to. And indeed in my first blog post on this I was careful to make the same qualification about short term trends as you do in this current post, which I will repeat since you haven’t acknowledged it:

    “I assume that many climate scientists will say that there is no significance to what has happened since 2000, and perhaps emphasize that predictions of global temperature are more certain in the longer term than shorter term.”

    So what is it that you are complaining about again?

    [Response: If you try and step back from simply trying to be contrary, I suggest focusing on the main principle that you have to compare like with like. In the post I did on the 1988 projections, I compared long term trends with long term trends. It works there because both the model output and the observational data have uncertainties in the long term trends that were small compared to the signal. This is not true for 8 year trends. The figures you produced show the long term trend (and its uncertainty) and the short term variability. That is not an appropriate comparison. Either put in the full envelope of model output over the same period, or just plot the trends and their uncertainty for the 8 year period. - gavin]

    [Response: Let me try to explain with a simple example. Imagine you want to check the prediction that Colorado gets warmer during spring. The prediction is for a roughly sinusoidal seasonal temperature cycle, based on solar zenith angle. Would you test this prediction against a piece of observational data from 10-17 April? Of course not! Random weather variability means that a cooling from 10-17 April does not falsify the seasonal cycle, nor would a warming verify it in any way, because the time period is just too short. The variance of weather is larger than the seasonal warming from 10-17 April. Do exactly the same exercise for a two-month period rather than 8 days, and it will make sense. The variance of weather is then still the same, but the seasonal warming over this longer period is much larger, so now you get a sensible signal/noise ratio. -stefan]
    p.s. If you’re still not getting it, try reading our popular post “Doubts about the advent of spring“.
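    Stefan’s analogy can be put in numbers. The sketch below uses assumed, illustrative values (a 12 degC seasonal amplitude and 3 degC of day-to-day weather noise; neither is a measured figure) to show why window length matters: over 8 days the seasonal signal is smaller than the weather noise, while over two months it stands well clear of it.

```python
import numpy as np

# Idealized seasonal cycle for a mid-latitude site: a sinusoid with an
# assumed 12 degC amplitude, zero-crossing in early April, plus an
# assumed day-to-day weather noise of sigma = 3 degC.
days = np.arange(365)
seasonal = 12 * np.sin(2 * np.pi * (days - 80) / 365)
weather_sigma = 3.0

def seasonal_warming(start, length):
    """Seasonal (signal) warming from day `start` across `length` days."""
    return seasonal[start + length - 1] - seasonal[start]

w8 = seasonal_warming(100, 8)    # ~8-day mid-April window: signal < noise
w60 = seasonal_warming(100, 60)  # ~two-month window: signal >> noise
print(f"8-day signal: {w8:.1f} degC; 60-day signal: {w60:.1f} degC; "
      f"weather sigma: {weather_sigma:.1f} degC")
```

    The same variance of weather is present in both cases; only the accumulated signal grows with the window, which is the whole point of the spring analogy.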

    Comment by Roger Pielke. Jr. — 11 Jan 2008 @ 2:01 PM

  33. Re: #25


    Not sure I follow your train of thought, but I can handle that last question: the Hadley graphs clearly come down for the past two years, and I have seen the results from number crunchers which validate that result.

    The color coded maps clearly show two large cooling regions, both over water.

    The graphs for northern and southern hemisphere are broken out; the northern hemisphere goes up continuously while the southern hemisphere goes down in the last two years.

    The graphs for land and sea are broken out. Land goes up sharply through the most recent year; sea temps trend down noticeably over the last two years.

    Comment by Walt Bennett — 11 Jan 2008 @ 2:06 PM

  34. Mr. O’Sullivan (31): I’m not sure why RC lets off-topic hearsay comments with personal attacks like this through, but for the record you are welcome to post a comment on our site at any time. I have no idea what you are talking about regarding Judy, but she is a top scholar for whom I have a great deal of respect, even though we don’t always agree on everything.

    Comment by Roger Pielke. Jr. — 11 Jan 2008 @ 2:08 PM

  35. As promised, the distribution of 8 year trends in the different data sets:

    Data ____ Mean (degC/dec) ___ standard deviation (degC/dec)

    UAH MSU-LT: __ 0.13 ____ 0.25
    RSS MSU-LT: __ 0.18 ____ 0.24
    HADCRUT3v: ___ 0.18 ____ 0.16

    In no case is the uncertainty low enough for the 8 year trend to be useful for testing models or projections.
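    The calculation behind those numbers can be sketched in a few lines. Synthetic red-noise data stands in here for the actual series (the figures in the table come from the observational datasets themselves, so the printed values will differ); the method is the point: fit an OLS slope to every overlapping 96-month window and look at the spread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly anomalies: an assumed 0.18 degC/decade trend plus
# AR(1) "weather" noise, standing in for a real series such as HADCRUT3v.
n_months = 29 * 12                  # 1979-2007, the satellite era
t = np.arange(n_months) / 120.0     # time in decades
noise = np.zeros(n_months)
for i in range(1, n_months):
    noise[i] = 0.7 * noise[i - 1] + rng.normal(0.0, 0.1)
series = 0.18 * t + noise

# OLS slope (degC/decade) over every overlapping 8-year (96-month) window
window = 96
slopes = [np.polyfit(t[i:i + window], series[i:i + window], 1)[0]
          for i in range(n_months - window + 1)]

print(f"mean 8-yr trend: {np.mean(slopes):.2f} degC/dec")
print(f"std  8-yr trend: {np.std(slopes):.2f} degC/dec")
```

    The standard deviation of the window slopes is the “uncertainty” quoted in the table: comparable in size to the trend itself, which is why an 8-year trend cannot discriminate between models.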

    Comment by gavin — 11 Jan 2008 @ 2:14 PM

  36. Great post Gavin and Stefan. This whole line of attack by skeptics/doubters/deniers is silly, since they won’t admit they’re wrong once we get some record hot years in the near future. I have blogged on this:

    Comment by Joseph Romm (ClimateProgress) — 11 Jan 2008 @ 2:30 PM

  37. A pdf / frequency distribution of the eight year trends from your graph might be interesting too. The centre of the distribution should be ~0.2/decade.

    Comment by Simon D — 11 Jan 2008 @ 2:31 PM

  38. Gavin, I think you’re copying me! My own version of the signal-to-noise issue is here, my display of year-end results from NASA GISS is here, and while I’m waiting for HadCRU and NCDC to post their year-end data I looked at northern-hemisphere land data from NASA here.

    I guess it’s often true, great minds think alike.

    Re: #17 (Jon Pemberton) and #22 (Jim Cripwell)

    Jon, it looks like my blog isn’t the only place you’re contributing a drive-by pot-shot. As I responded there, I haven’t posted the year-end results from HadCRU or NCDC because they haven’t been put in the online data files yet.

    Comment by tamino — 11 Jan 2008 @ 2:35 PM

  39. Gavin-

    My last comment on this thread. As the focus of your criticism, you have selected, from a series of posts leading up to one focused on a much longer time period starting in 1990, an example that I provided to show what verification looks like using IPCC AR4 predictions from 2000 (hence 8 years). You ignore the longer term view that I have provided.

    Not only that, you pretend as if I did not explicitly acknowledge in my first post that nothing can be said about verification over 8 years, for exactly the reasons that you describe here. Instead, you suggest that I have implied otherwise.

    If you can do a better verification of historical IPCC predictions, then by all means show it. There are many forecasts that have been made since 1990 and the more people engaged in this the better.

    More generally, you guys at RC may indeed be the smartest guys in the room, but sometimes constructive efforts are more appropriate than simply criticizing everyone else for not meeting your standards. I’ll continue the exercise next week, with verifications for IPCC 1995 and 2001, and then I’ll place all four assessments into comparative context. I have no doubt that you won’t like any of it, but even so, your comments are welcomed, especially constructive ones.

    Thanks for the exchange today, and providing a forum for discussion here at RC.

    [Response: I still don't know why you are reacting so negatively to this post. We spent considerable effort to outline the issues involved in forecast verification very much in the spirit of constructive engagement, and yes, pointing out issues with your figure as used on Tierney's blog. Your responses here and the additional commentary on your blog this morning have certainly not added to the spirit of collegiality. That is a shame. - gavin]

    Comment by Roger Pielke. Jr. — 11 Jan 2008 @ 2:47 PM

  40. I see over at Roger Pielke. Jr.’s blog he has now posted the IPCC forecasts from 1990. It appears from his graphic that the projection based on a 1.5 degree climate sensitivity for 2x CO2 has been the most accurate. I would further note that Steve McIntyre discusses today (1-11-2008) Wigley 1987, published in Climate Monitor. McIntyre notes in passing a discussion by Wigley of climate sensitivity. Wigley states, “If one accounts for the ocean damping effect using either a PD or UD model, and, if one assumes that greenhouse gas forcing is dominant on the century time scale, then the climate sensitivity required to match model predictions is only about 0.4 deg C/Wm-2. This corresponds to a temperature change of less than 2 deg C for a CO2 doubling. Is it possible that GCMs are this far out? The answer to this question must be yes.”
    So it appears from Pielke’s graph from 1990 on that the actual data is consistent with a CO2 doubling sensitivity of about 1.5C and from Wigley’s comments based on observations prior to 1987 a climate sensitivity of less than 2C for 2x CO2. So looking at these “long-term” trends isn’t it reasonable to question the output of GCM that place CO2 climate sensitivity well above 2C?
    I realize this may be a complex question to answer so if you could direct me to link with an explanation that would be helpful to me.

    [Response: I am looking at the 1992 projections as we speak. First off, these are not GCM estimates, but from a simple box model - and so 'weather' variability is outside their scope. Nonetheless, it's worth looking into. The first question is the forcing that was applied. From figure Ax.1, it appears that the increase in forcings for all the IS92 scenarios is in the range of 0.5 W/m2 per decade to 2010 (slightly higher maybe, but it's difficult to read off the graph). In the real world, forcings (as estimated by GISS and not including volcanic effects) increased at 0.36 W/m2 per decade (1990 to 2003). Thus the projections are likely biased high not due to the climate sensitivity but due to the overestimated forcings growth. The temperature trend over that period in the GISS record is 0.24 +/- 0.04 degC/dec. The best estimate '2.5 deg C' sensitivity model has a trend of 0.25 degC/dec in figure Ax.2 (2.8 degrees in 110 years), which is somewhat less than shown in RP's graph for some unknown reason (actually, none of his model output lines seem to match up with the figure - puzzling). Once you include an adjustment for the too-large forcings (by ~40%) the mid-range model outputs line up pretty well. Pinatubo complicates things, but I don't see any big problem here. - gavin]

    Comment by Paul — 11 Jan 2008 @ 3:24 PM

  41. Jon Pemberton has commented on my blog that he’s not taking pot-shots, just asking a question. I have no reason not to believe him. So, my apologies.

    Comment by tamino — 11 Jan 2008 @ 3:31 PM

  42. Christian Desjardins asserts:

    [[That being the case you can say NOTHING about any trend in this data set [i.e., 2001-2007] except that the data indicates no change in temperature over that period. Whether you think that is evidence that global warming has halted is debatable but you can’t bring any statistical analysis to bear to show that the temperature change in the period was other than zero.]]

    Well, no, because a sample size of N = 7 isn’t enough to be statistically meaningful. But if you do a linear regression, the trend is up. Try it yourself!
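    For the curious, here is what that regression looks like in code. The anomaly values are made up to be roughly the right magnitude (not the actual GISS or HadCRU numbers): the fitted slope comes out positive, but its standard error is of comparable size, which is exactly why N = 7 proves nothing either way.

```python
import numpy as np

# Hypothetical annual anomalies for 2001-2007 (illustrative numbers only)
years = np.arange(2001, 2008)
anom = np.array([0.54, 0.63, 0.62, 0.54, 0.68, 0.61, 0.66])

# OLS slope and its standard error; with N = 7 the error bar is huge
n = len(years)
x = years - years.mean()                      # centered predictor
slope = (x * (anom - anom.mean())).sum() / (x ** 2).sum()
resid = anom - anom.mean() - slope * x        # residuals about the fit
se = np.sqrt((resid ** 2).sum() / (n - 2) / (x ** 2).sum())

print(f"trend: {slope*10:+.2f} +/- {se*10:.2f} degC/decade (1-sigma)")
```

    With these toy numbers the trend is up, as Barton says, but the 1-sigma uncertainty is nearly as large as the trend itself.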

    Comment by Barton Paul Levenson — 11 Jan 2008 @ 3:32 PM

  43. I think that roger is trying to wind you all up here RC. It looks to me as if you have wasted enough time answering his queries and deliberate attempts at obfuscation.

    You have a great deal of patience.

    Comment by pete best — 11 Jan 2008 @ 3:35 PM

  44. I second that pete best (#43).

    Comment by Figen Mekik — 11 Jan 2008 @ 4:17 PM

  45. re 29 (Gavin):

    Gavin: “Who ever claimed that the 2007 IPCC projections had been shown to be skillful?”

    What is your opinion on the “skill” of the 2007 IPCC projections?

    [Response: I expect that they will be skillful. But this can't yet be determined. - gavin]

    Comment by Patrick M. — 11 Jan 2008 @ 4:30 PM

  46. I was caught off guard by the 2007 NASA hemispheric and global temperature anomalies which just came out. It seems that December of 2007 was much warmer than NOAA and others had figured.

    Comment by pat n — 11 Jan 2008 @ 4:37 PM

  47. Re #40: Gavin, now why did you exclude the volcanic forcing when figuring the GISS “actual” forcing increases? Is there no “average natural aerosol forcing” that is assumed in the projections? It seems plausible anyway that when volcanic forcing is added then the forcing increase due just to the emissions scenario is going to be higher than you have shown. It may in fact be close to what is shown by the 1992 report? Roger states that we are running on the high end of the emissions scenario. Please advise.

    [Response: Because I was comparing the scenarios which didn't have volcanoes either. But as I said, Pinatubo complicates things. Given that those projections were done with a simple energy balance model though, it would be trivial to add Pinatubo in as an extra forcing and see what difference it made. As to whether we are on the high end of the scenarios, I don't think that is correct (but I haven't looked carefully). CO2 is increasing faster, but CH4 has stabilised, and CFCs are falling faster, aerosol changes are potentially important but not very well characterised. This will become clearer in a few years time. - gavin]

    Comment by Bryan S — 11 Jan 2008 @ 4:56 PM

  48. > I have seen the results from number crunchers

    Cite please? For a two or three year trend to be significant it has to be huge.

    Gavin, Tamino, could y’all do what William Connolley did in his exercise (stoat, link above) and indicate on your similar images which of the short-term trend lines are significant and which aren’t? It helps make the point that we can’t _see_ which is which on a picture.

    Comment by Hank Roberts — 11 Jan 2008 @ 4:56 PM

  49. What I would like to see is a 29 year graph (1979 – 2007) with all the data sets GISS, UAH, RSS, and HADCRUT3v as a comparison.

    Love to have an explanation of the differences between the data sets (what average warming trend they are showing and why they ‘might’ be different) and if there are discrepancies between the actual data and any modeling for the same period.

    I am assuming that this data exists for the requested time period.

    As you can all tell I am a layman, but find this all very interesting and believe all temperature measurement data in one “post” would be the most beneficial to many people.

    btw, Tamino, thank you. I can understand your reaction from what I have read on your blog at times :-)

    No deniers, no alarmists, just science. Cue the Coke commercial music…..

    Jon P

    Comment by Jon Pemberton — 11 Jan 2008 @ 4:57 PM

  50. Ref 42: Barton Paul Levenson writes “Well, no, because a sample size of N = 7 isn’t enough to be statistically meaningful. But if you do a linear regression, the trend is up. Try it yourself!” I have, and you are quite right when you start any time in the 1970s. Which means current temperatures are significantly higher than they were in the 1970s. However, if you try any other sort of least squares regression fit, e.g. polynomial, then the NASA/GISS data still shows increasing temperatures, but the other data sets show that temperatures have stabilized, if not actually peaked. I used CurveExpert 1.3 (shareware) for the analysis.

    Comment by Jim Cripwell — 11 Jan 2008 @ 5:09 PM

  51. I’m relatively new here, but I watched PBS NOVA’s “Dimming the Sun” documentary last night, for about the third time. I find it very powerful, and scientifically accurate. Regarding modeling, are the effects of pollution, aerosols, and aircraft contrails accounted for in climate models? In the comment sections of USA Today’s articles on global warming, there are many skeptics that say NASA’s models just don’t reflect reality. If that’s right, it might have to do with the “global dimming” effect.

    Dr. Hansen says if we didn’t have that pollution in the atmosphere, global temperatures would be about 2°F higher than they are now.

    How many here have seen that documentary, and what do you think of it?

    [Response: We discussed it when it was first broadcast in the UK and later on in the US. - gavin]

    Comment by Jack Roesler — 11 Jan 2008 @ 5:16 PM

  52. #50. Jim Cripwell, I don’t understand your argument. Can you plot your analysis on a graph and post it on the net somewhere so we can see it?

    Comment by Figen Mekik — 11 Jan 2008 @ 5:27 PM

  53. Jon, have you used the “Start here” link at the top of the page?

    Or read anything at yet?
    Have you read the discussion on the page there along with this, which is at least half of what you’re wishing for I think:

    If you will give us some idea what you’re starting from, where your questions arise, where you’ve looked, what you believe or know so far, it will help us (most like me are just fellow readers here) point to answers.

    Comment by Hank Roberts — 11 Jan 2008 @ 5:33 PM

  54. Gavin- On #40 above. You want to use scenario IS92e or IS92f, rather than IS92a that you found in Figure Ax.2 (which says something about temperature change under different assumptions of climate sensitivity for IS92a). As I explained in my blog post on this, the proper figure to use is Figure Ax.3 to determine these values.

    You write “Once you include an adjustment for the too-large forcings” — sorry, but in conducting a verification you are not allowed to go back and change inputs/assumptions that were made at the time based on what you learn afterwards; that is cheating. Predicting what happened after 1990 from the perspective of 2007 is easy;-)

    [Response: The IS92 scenarios do not diverge significantly until after 2010 - so assessing model response to 2007 is independent of exactly which scenario is used. I don't see that your figure uses the data from fig Ax.3 - that figure has all models on the same trajectory until 2000 and only a small amount of divergence in the years following. That is nothing like what you have used. And finally, when it comes to projection verification, ideally you would only do it (as I did for Hansen et al 1988) if the scenarios matched up with reality - that gives a test of the model. If the scenarios are off, then the model response is also off and the model-data comparison not particularly useful. If your claim is that the IS92 scenarios were high, I'm fine with that. But don't confuse that with model verification. - gavin]

    Comment by Roger Pielke. Jr. — 11 Jan 2008 @ 5:49 PM

  55. Gavin- Good. With this exercise I am not interested in model verification and never have claimed to be; as I’ve stated all along, my interest is in forecast verification. You can find out more about the various scenarios in that same report beginning at p. 69; Figure A3.1, for instance, shows how dramatically the scenarios diverge quite early. If you spend a bit more time with it you’ll also see that my 1990 IPCC prediction matches just about exactly with that used by IPCC AR4 in their figure TS.26, so if I’m wrong, so too is IPCC. More next Monday.

    [Response: The only thing that matters in those simple box models is the net forcing, which is in fig Ax.1 - which clearly shows that the different scenarios have not significantly diverged by 2010. It's not clear to me what the FAR range in fig 1.1 of AR4 represents and it isn't clearly stated. Plus they reference it to IPCC 1990, not the 1992 supplement. I invite anyone who knows what's plotted there to let me know. - gavin]

    Comment by Roger Pielke. Jr. — 11 Jan 2008 @ 6:06 PM

  56. It will be interesting to see if this temperature slowdown/fall since 1999 (depending on the dataset used) continues in the future and becomes statistically significant.

    It is my observation that skeptics are only doing what the other side has been doing for years. That is, every record high temperature recorded in the United States is promoted in the media via climate scientists as “proof” that humans are causing global warming. In Montana, recent bad fire seasons are promoted in the media via climate scientists as “proof” that humans are causing global warming. It is refreshing to read that RealClimate has taken a stand against temperature chasing, drought chasing, and fire season chasing.

    [Response: As we always have. - gavin]

    Comment by VirgilM — 11 Jan 2008 @ 6:17 PM

  57. Hank,

    Thank you, I looked at the links but they are not current.

    What I am exactly looking for is a comparison of all satellite and GISS temp data compared on the same graph through 2007.

    Beliefs… Warming? yes Global? not convinced, CO2 as main forcing? not convinced.

    I read RC, Open Mind, Climate Audit, Accuweather, Lubos Motl, Eli Rabett, Anthony Watts, and Pielke sites/blogs. And probably some random others.

    I also like looking at sea ice extents from Arctic and Antarctic.

    I see the “forts” with “high walls” being built between various camps, and usually one piece of data is posted on and then the “fur flies”. Recent example: GISS data reveals 1998 and 2007 are tied as 2nd warmest year, but RSS data differs (as noted on Lubos’s site).

    Looking for a post that discusses and shows all data on one graph. I selected 1979 as the starting point as that is when satellite data became available, correct me if my assumption on this is wrong.

    Another example: Eli has a running post on Arctic sea ice extent, but not Antarctic. Antarctic melting appears to be behind last year’s rate. Maybe a long term comparison of both of these.

    I can read the data and understand it, but I am in no position to validate it. For that I have to rely on others and it is pretty tough doing so when you are between the “forts”..

    Jon P

    Comment by Jon Pemberton — 11 Jan 2008 @ 6:20 PM

  58. Jon Pemberton (57) — Have you read The Discovery of Global Warming, linked in the Science section of the sidebar?

    Comment by David B. Benson — 11 Jan 2008 @ 6:40 PM

  59. >forecast
    These are used to plan picnics and political decisions.

    Scenarios can be run for the deep past, recent past, and near future. The better a model’s range of outcomes matches the known real climate, the more interesting its outcomes when it’s run through into the near future.

    Because when I read Dr. Pielke write
    > that is cheating
    > my interest is in forecast verification

    That’s saying the work done with an original Cray supercomputer wasn’t so good, so redoing that work today is cheating.

    Cheating? It’s showing politicians how much better work can be done now — and that runs can better match what did happen, so they may better match what _will_ happen.

    That will scare those who want to say nobody knows enough to decide.

    Remember, the original Cray supercomputer could not have run Windows 95; it didn’t have enough memory. Your doorstop Win95 machine is more powerful than the Cray was then.

    Gavin describes how one can improve a model, and run it again starting at some past point, and if the model is better, the range of outcomes when it’s run on into the future may also be better.

    That’s not cheating, for science, it’s how it’s done.

    20 years ago Dr. Hansen was saying it’d take about til now to have an idea whether the models then were useful, because the climate signal would take that long to emerge from the noisy background.

    I’m sure he was referring to statistical and measurement noise, not to political noise. Over 20 years, the statistical noise decreases.

    Political scientists might study whether statistical noise is inverse to political noise, on issues like this.

    Comment by Hank Roberts — 11 Jan 2008 @ 6:49 PM

  60. Has anyone tried to calculate the heat content of the entire atmospheric and oceanic system? Wouldn’t that be a better metric to verify instead of surface based averages of temperatures? Wouldn’t this take ENSO and lags because of ocean storage out of the equation?

    [Response: You find this analysis in the IPCC report. The heat content change is completely dominated by the change in ocean heat content, because of the large heat capacity of water this is the only component of the climate system that stores a significant amount - see Fig. 5.4 in chapter 5. So this tells you how much ocean heat storage has delayed surface warming, i.e. what portion of the anthropogenic forcing is soaked up by the ocean rather than being balanced by radiation back into space (the latter implies a surface warming, so the portion going into the ocean does not lead to immediate surface warming). - stefan]

    Comment by VirgilM — 11 Jan 2008 @ 6:51 PM

  61. RE#8 Try not to misrepresent the UK Met Office. Here is a link to its most recent media release on trends in global average temperature:

    I think climate scientists in the UK have given up correcting the misrepresentations and misuses of global temperature data in the media. I can understand why – it is a seemingly endless task. Unfortunately, some newspaper editorial teams are apparently not up to the task of spotting arguments based on dodgy statistics and are publishing them, misleading millions of people in the process. I am glad that RC has not given up the task of challenging attempts to mislead the public about climate change science.

    Comment by Bob Ward — 11 Jan 2008 @ 7:12 PM

  62. VirgilM — are you the VirgilM from CA? You know about Triana, right?

    Comment by Hank Roberts — 11 Jan 2008 @ 7:25 PM

  63. One only has to read the comments for the “Global warming has stopped” article to see how pervasive the denialist take on this issue is with the public – or at least with those who chose to comment. Even though the essential flaws in Dr Whitehouse’s opinion piece were glaring even to someone without deep knowledge of the subject such as myself, pointing them out merely ended up lost within a plethora of unsubstantiated claims of biased science, misrepresentations and repetitions of repeatedly debunked denialist myths. The attempts to inform by some commenters probably changed no-one’s mind.
    Has anyone with in-depth knowledge and solid scientific arguments taken Dr Whitehouse to task? I hope some of you do; he ought to have the education and intelligence to be engaged by what real science tells us, but I wouldn’t count on it. I think too much of the media is primarily about entertainment, and his career is a media career. Controversy, even when it has little sound basis, attracts readers/viewers, and Dr Whitehouse’s career in media is probably strengthened by the kind of writing in his “Has Global Warming Stopped?” article.
    I for one am pleased when this blog does take people like Dr Whitehouse to task. Letting them and their organisations get away with it uncriticised leaves the public free to believe what they say is true, i.e. ill-informed on an issue of critical importance.

    Comment by Ken Fabos — 11 Jan 2008 @ 8:51 PM

  64. Skip the trailing period to get Bob Ward’s link to work:

    See also (would Dr. Pielke say this is ‘cheating’ by improving a model and then showing that it better matches what happened recently?)

    “… Dr Doug Smith said: “Occurrences of El Nino, for example, have a significant effect on shorter-term predictions. By including such internal variability, we have shown a substantial improvement in predictions of surface temperature.” Dr Smith continues: “Observed relative cooling in the Southern Ocean and tropical Pacific over the last couple of years was correctly predicted by the new system, giving us greater confidence in the model’s performance”

    Comment by Hank Roberts — 11 Jan 2008 @ 8:51 PM

  65. The December, 2007 RSS anomaly for the lower atmosphere is -0.046C.

    The December, 1979 RSS anomaly is +0.022C

    Cherrypicking for sure, but a change in temperatures of -0.068C over 28 years should be taken into account I imagine.

    Comment by John Wegner — 11 Jan 2008 @ 8:59 PM

  66. A decade, or less, does not a climate era make. What would we call this period - the tiny little ice age? Climate eras have lasted centuries and millennia while a dominant forcing (GHGs) governs, which is the case at present.

    It should be a given that the more data available, the more accurately projections regarding future scenarios can be made. To reduce this to an absurdity: whoever rolled a single die and came up with a 3.5? (That’s the expected average over a large number of throws.)
    At the Tierney Lab site, on the right just under the heading About Tierney Lab, it states “John Tierney always wanted to be a scientist, but …..”.

    That says a lot. Beware of wannabes.

    Comment by Lawrence Brown — 11 Jan 2008 @ 9:21 PM

  67. Commentary to this point (post #62) seems to indicate that my suggestion (in post #1) of using a running mean (average) was misinterpreted, especially since other readers employed the same terminology for quite different measures. A true “running mean” of eight values begins with the first data point and is an arithmetic average of the first 8 values. This becomes the first graph point. The next graph point simply drops the first data point and increments by one. It forms a nicely smoothed, continuous line. It’s a much better way to present data such as those plotted in the GISTEMP index, and, I believe, a better portrayal of reality than the “pick-up-sticks” jumble of “8-year trends”.
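    In code, the running mean Charles describes is a short convolution. The anomaly values below are toy numbers standing in for the GISTEMP index:

```python
import numpy as np

def running_mean(x, window=8):
    """Average of each consecutive block of `window` values:
    start with the mean of the first `window` points, then slide
    the window forward one point at a time."""
    x = np.asarray(x, dtype=float)
    return np.convolve(x, np.ones(window) / window, mode="valid")

# Toy annual anomalies (illustrative, not the actual GISTEMP data)
data = [0.1, 0.2, 0.15, 0.3, 0.25, 0.4, 0.35, 0.5, 0.45, 0.6]
smoothed = running_mean(data, window=8)
print(smoothed)  # yields len(data) - window + 1 = 3 smoothed points
```

    Note the smoothed series is shorter than the original by window - 1 points, which is one reason a running mean cannot say anything about the most recent years on its own.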

    Comment by Charles Raguse — 11 Jan 2008 @ 9:36 PM

  68. Jon Pemberton –

    Seems to me you’re asking others to do an impossible amount of work for you — to get and chart for you exactly what you want, all of what you want, and nothing but what you want. It’s certainly doable. But the person behind you in line will have slightly different demands.

    You can take the chart I pointed you to, look up the next year’s numbers and chart them.

    You can charm the person who maintains the site with intelligent questions and suggestions and perhaps get the charts updated a bit sooner than they’d otherwise find time to do it. I’ve _often_ found that to be true. Sometimes I find I’m the only person all year to thank a scientist for making the time to put such charts up online; they may get lots of pointers and lots of mentions, but they still perk up when someone emails a simple thank-you along with a question about how to find out a bit more, say, to extend such a chart.

    Bystanders like you and I could keep a whole lot of experts very busy, if they tried to respond to all such requests (and many people do start off by insisting that they know exactly what they need to find, to get their certain and final understanding of what puzzles them, or irrefutable proof of some claim they heard, or the like).

    That’s why people refer you to the data sources and programs that let you do your own charting. If you understand the statistics you can do your own error ranges. If you don’t, charts won’t help much.

    My suggestion is to stay away from anything that looks like a “fort” to you, and hang around in the trenches with the people who are doing the actual digging, or watch them as I and so many others do.

    The better you inform yourself, first, the better questions you can ask, and I tell you, I feel really happy if I manage to get a “Good question!” response from one of the climate scientists here once a year myself. They’re making a gift of their time to the rest of us.

    Last thought, I’ve heard this as ‘Rule One of Database Management’ — one data set, many pointers. One reason very few people make the kind of effort you find at globalwarmingart is that there is a vast collection of data sets, some easier to find than others, some very significantly revised when errors are found in work later on.

    People who make copies and then write based on their copies may be doing so based on outdated information. Look for people who give you references rather than put answers together for you — the references will lead you forward in time to better info.

    Comment by Hank Roberts — 11 Jan 2008 @ 10:25 PM

  69. Re: #48


    You misunderstood my post. Also, did you see my followup, with links to the references I made?

    My point was not: “Hey, isn’t it statistically significant that Hadley shows two consecutive years of cooling?” My point was: “Is Hadley right?”

    NASA-GISS seems to think it has kept on warming.

    I’m just trying to understand the Hadley data at face value.

    Comment by Walt Bennett — 11 Jan 2008 @ 10:27 PM

  70. Oh, Jon, you mentioned Eli’s notes on sea ice and said you wished he had something. Did you click his link? The comparison of the poles that you wished for is at the source Eli gives. The charts there that are pulled automagically from the databases are working now; the hand-edited one will be updated in a week or two to fill out the 2007 year, I just asked (nicely) about that yesterday myself (grin).

    Comment by Hank Roberts — 11 Jan 2008 @ 10:29 PM

  71. Charles, a moving average gives you a different look than a trend line.

    There’s an excellent treatment of this here, in a rather famous website:

    “… Like most attempts to characterise a complicated system by a single number, a scale throws away a great deal of the subtlety. … The scale responds with a number that means something or other. If only we knew what…. Over time, certainly, the scale will measure the cumulative effect of too much or too little food. But from day to day, the scale gives results that seem contradictory and confusing. We must seek the meaning hidden among the numbers and learn to sift the wisdom from the weight.”

    “… The right way to think about a trend chart is to keep it simple. The trend line can do one of three things:
    * Go up
    * Go down
    * Stay about the same
    That’s it. The moving average guarantees the trend line you plot will obviously behave in one of these ways; the short term fluctuations are averaged out and have little impact on the trend.”

    He addresses the reason that the moving average you ask for may not give a clear picture, on the same page:

    “… The familiar moving average trend line is drawn as before. The dashed line is the best fit to all 90 days …. But, obviously, it misses the point. … Short term straight line trend lines … provide accurate estimates ….”

    Excel workbooks are provided at the site to try out the different methods of charting. See also everything else, wonderful site.
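    The distinction between a running mean and a fitted trend line can also be sketched numerically (a minimal illustration using synthetic anomalies at roughly the 0.1 deg C noise level discussed in this thread, not the site's Excel workbooks):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1975, 2008)
# Synthetic anomalies: a steady 0.017 C/yr trend plus weather-scale noise.
anoms = 0.017 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# 8-year running mean: smooths short-term wiggles but still wiggles itself.
window = 8
running_mean = np.convolve(anoms, np.ones(window) / window, mode="valid")

# Least-squares trend line: one slope summarizing the whole record.
slope, intercept = np.polyfit(years, anoms, 1)

print(f"fitted trend: {slope:.3f} C/yr over {years.size} years")
print(f"running mean covers {running_mean.size} windows")
```

    The running mean follows decadal fluctuations; the regression reduces the record to a single number, which is why the two can give different impressions of the same data.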

    Comment by Hank Roberts — 11 Jan 2008 @ 10:51 PM

    It is now raining in mid-winter where before there would be snow. I need no other proof, nor to be told that weather is different from climate. Again, in mid-winter it is now rain, and if it does snow, it is a wet snow that melts a few days later. Before, there would be snow on the ground from Thanksgiving until Easter; now it is raining in January. I am old enough to know the pattern has changed, from what I have experienced before and what I am experiencing now.

    Comment by PaulM — 11 Jan 2008 @ 11:11 PM

  73. Re: #31 my comment and #34 Roger Pielke Jr’s reply:

    My tone was unduly harsh. Roger Pielke Jr does like to provoke discussion. To do this he will make controversial statements on his blog. It’s common in some academic circles, but it’s likely to be misunderstood in a public forum like a blog. I did not like seeing my comment used in a way I thought was inappropriate.

    The misquote did occur, and I submitted comments, several of which were not admitted. After the posts moved on, one of my milder comments made it through. After that I had all but one of my comments admitted on other posts.

    I do not think that Roger Pielke Jr disrespects Dr Curry, but I do think he was trying to stir things up.

    This is the post:

    Comment by Joseph O'Sullivan — 11 Jan 2008 @ 11:35 PM

  74. I am not convinced by much of this.

    You take a data set of 30 years and say that the initial warming observed over less than 20 years is a long-term trend and can be relied upon, but you say the past 7 years of statistically indistinguishable temperature data is short-term climatic fluctuation and can be ignored (note this short-term ‘fluctuation’ isn’t fluctuating).

    There are no error bars in the graph; put them in and you can draw your ‘trend lines’ with much more latitude. And because the recent observed stasis is 7 years, you, ignoring errors, chose an 8-year grouping, which is bound to drag the stasis back towards the rising section of the graph because you are giving it less weight than the data in the centre of the graph.

    YOU are the denialists – denying data. Finding ways to prove it isn’t what it is and making it conform to your worldview.

    Comment by Andre Narichi — 12 Jan 2008 @ 12:08 AM

  75. I made a chart from the updated NASA-GISS anomaly data and I think it came out fairly well:

    I plot gross annual anomalies (a score of 1200 would be a mean anomaly of +1°C per month) along with 5-, 10- and 30-year running means.

    I’d appreciate any feedback as to method and conclusions.

    Comment by Walt Bennett — 12 Jan 2008 @ 12:21 AM

    First off, let me qualify my post by saying that I come here as a believer, a questioner, and a skeptic. By that I mean: I am a believer in that I find the evidence for long-term global warming, since at least the 1880s, overwhelming and apparently irrefutable. Secondly, basic physics/thermodynamics dictates that increasing concentrations of greenhouse gasses will have an effect on the overall climate. However, I am a questioner in that I am not as fully convinced of the magnitude of the impact of anthropogenic versus non-anthropogenic causes of the warming trend, but readily concede that the current warming trend is at least in part, if not substantially, due to anthropogenic releases of CO2 and other GHGs. Finally, I am a skeptic with some experience in much simpler modeling (mostly ground water contaminant fate & transport), in that I have much less confidence in our current ability to accurately model future climate than is often portrayed in the popular press or even on websites such as this. In other words, I believe we can say with near certitude that if global temperatures continue to rise, there will be a rise in sea level due to the melting of glaciers and thermal expansion of the oceans, but that projections of drought, hyper-intensive storms, mass extinctions, and other calamities are somewhat less certain.
    As an educated layman, my take on this is Much Ado About Nothing. I have read and re-read Tierney’s and Pielke’s posts, and what I get out of both of them is that they are saying you can read whatever you want into the recent (2001-2006) global temperature estimates. They are very clear in stating that the recent global temperatures neither prove nor disprove the overall AGW model. What Pielke did say is that the most recent numbers will provide “cherry-pickers” with ammunition to quibble about this, that, or the other thing. Tierney correctly noted that there is a wide range of variance in the estimated global temperature anomalies and that where you fall on the sociological spectrum (denier, questioner/skeptic, advocate, disciple) will help dictate which estimate you rely on most. What both Tierney and Pielke seem to be asking for is continued and further refinement of the models as we gather additional climate data. In other words, don’t become defensive and just say short-term perturbations don’t affect the validity of the “MODEL”; continue to try to make the model account for the short-term perturbations. The models are nothing more than our attempts to account for all the variables that do drive climate change.

    Bob North

    Comment by Bob North — 12 Jan 2008 @ 12:42 AM

    Gavin, a man with the patience of a saint. My question has to do with how well behaved the pseudo-climate system, as defined by GCM runs, is. I work in FEA engineering, and it is quite common for such systems to contain bifurcations, whereby a large collection of runs will demonstrate two (or more) general solutions, superimposed of course with short-term noise. Have any of the climate models shown such behavior? I.e., do you see situations where some fraction of the runs (with the same parameters and forcings, but perturbed initial conditions) show more than a single solution trend?

    Comment by Thomas — 12 Jan 2008 @ 12:50 AM

    Another way to see that data is to take the GISTEMP data, compute 8-year regressions (via SLOPE), and put that series into a scatter plot, which gives one line that graphs the slopes of the blue lines. The only times the slopes go below zero are those around the volcanoes.
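    A sliding-window version of that calculation can be sketched in Python rather than Excel’s SLOPE(), using synthetic data in place of the actual GISTEMP series:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1975, 2008)
anoms = 0.017 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

def rolling_slopes(x, y, window=8):
    """Least-squares slope over each consecutive window of points."""
    return np.array([
        np.polyfit(x[i:i + window], y[i:i + window], 1)[0]
        for i in range(len(x) - window + 1)
    ])

slopes = rolling_slopes(years, anoms)
# Plotting `slopes` against the window start year gives the one-line
# summary described above: wherever it dips below zero, an 8-year
# "cooling" appears despite the underlying warming trend.
print(f"{(slopes < 0).sum()} of {slopes.size} windows slope downward")
```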

    Comment by John Mashey — 12 Jan 2008 @ 1:17 AM

  79. Re: 57

    Here is the 12 month moving average for NASA GISS, Hadley/CRU, UAH, and RSS temperature analyses from January 1979 to October 2007. NASA GISS and Hadley/CRU are land-ocean instrument data, while UAH and RSS are lower troposphere satellite data.

    Here is the raw data for each analysis, including the linear fit. With both of the instrumental analyses, the slope is 0.17 degrees per decade. In the UAH satellite analysis, it is 0.14 degrees per decade. In the RSS satellite analysis, it is 0.18 degrees per decade.

    In all cases, the anomalies are adjusted up or down so as to give the linear regression for each analysis a y intercept of zero.

    In short, the anomalies are where you’d expect them to be, given the warming signal, plus natural variability and the fact that each analysis uses different methods.
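    The baseline adjustment described above can be sketched as follows (hypothetical series standing in for the different analyses; `zero_intercept` is an illustrative helper, not part of any of the datasets’ tooling):

```python
import numpy as np

def zero_intercept(t, anoms):
    """Shift a series so its linear fit has a y intercept of zero."""
    slope, intercept = np.polyfit(t, anoms, 1)
    return anoms - intercept

# Two series with the same trend but different anomaly baselines.
t = np.arange(29)            # years since 1979, roughly
a = 0.017 * t + 0.10         # baseline offset of +0.10
b = 0.017 * t - 0.25         # baseline offset of -0.25

# After adjustment both collapse onto the same line, so analyses with
# different reference periods can be compared directly.
print(np.allclose(zero_intercept(t, a), zero_intercept(t, b)))
```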

    Comment by cce — 12 Jan 2008 @ 1:46 AM

  80. Looks like we’re about to have a big volcanic eruption in Ecuador. So temperature will go down by 0.2 K for a couple of years and we’ll have to put up with two more years of deniers saying, “See, global warming stopped!”

    Comment by Barton Paul Levenson — 12 Jan 2008 @ 6:53 AM

    Ref 52. If you want to see the sort of thing I am talking about, I am afraid you need to go to Yahoo Climate Skeptics and download the graphs I uploaded under the title “Rctner”. I fully realize that there are a pseudo-infinite number of such graphs, and these three are merely examples.

    Comment by Jim Cripwell — 12 Jan 2008 @ 7:12 AM

  82. Thanks much for the crystal clear diagram and discussion. Contrary to some readers who are growing weary of it, I’m delighted to see RC giving the other side “enough rope” like this.

    Your main point (which you have now repeated 60 zillion times) is irrefutable, as is your observation that the exercise in question violates the simple principle that LIKE SHOULD BE COMPARED WITH LIKE. (You haven’t tried shouting yet, I suppose, but I doubt that would prove any more effective.) In the face of an argument which could hardly be framed more crisply, your dissenter has no more interesting stratagem than feigning deafness. Like a World Wrestling Federation spectacle, this classic thread has been a grossly unfair fight, and a lot of fun!

    Comment by Daniel C. Goodwin — 12 Jan 2008 @ 7:27 AM

    I understand that between the 1940s and 1970s global mean temperature did not change much, and this has been ascribed to increased particulates arising from industrialisation. So if you take a 30-year average between, say, 1945 and 1975, and then again between 1975 and 2005, the latter average should be significantly higher. What is the proper explanation for this? What changed in the 70s? Surely particulate emissions on a world scale did not decrease in the 1970s, though they might have in some industrialised countries? What about the increasing use of coal by China and India, and the consequent increase in particulates in the recent past? Would this not be expected to have a dimming effect and reduce global temperatures?

    [Response: Not all aerosols are the same. Some, such as black carbon released by coal burning, actually have a surface warming impact. Sulphate aerosols, and secondarily nitrate aerosols, which do have a surface cooling impact, increased substantially in burden from the 1940s through the 1970s, decreasing markedly with the passage of the Clean Air Acts of the 1970s and 1980s. The various issues you raise have been discussed many times before here and in links provided. Start here, here, and here. -mike]

    Comment by Gautam Kalghatgi — 12 Jan 2008 @ 9:01 AM

  84. “These comparisons are flawed since they basically compare long term climate change to short term weather variability”

    What is short term weather variability?

    Isn’t it obvious that the warming had to stop after a 5 W/m2 drop in the energy input to the climate system in 2002?

    It should be equally obvious that the 90s had to get much warmer than the 80s because of a much higher energy input at TOA.

    But the explanation is that there was one type of quite stable weather between 1994 and 2000, and a totally different type of weather between 2002 and 2005 (and probably longer), capable of reducing the radiation by 5 W/m2?
    What is this assumption based on?

    [Response: The ISCCP data is great, but can't be relied on for trends due to difficulties in tying different satellite records together. The implication that albedo has suddenly increased to Pinatubo levels without anyone else noticing or a rapid decrease in temperatures is .... surprising, to say the least. - gavin]

    Comment by lgl — 12 Jan 2008 @ 9:28 AM

  85. cce says in #79: “In short, the anomalies are where you’d expect them to be, given the warming signal, plus natural variability and the fact that each analysis uses different methods.”

    Of course, the denialists will say “it’s clear the warming stopped in 2000 … and in 1998, and in 1995, and in 1991, and in 1987 …” At least Pielke and Douglass et al. are sufficiently sophisticated to realize that the only way to attack the anthropogenic hypothesis is to simply deny that warming is occurring. Unfortunately, since all the science and the evidence support a warming trend, they can’t get beyond misusing statistics and saying “No it isn’t.”

    Comment by Ray Ladbury — 12 Jan 2008 @ 9:35 AM

  86. A couple of small comments:

    1. Pielke is right that, so long as the four major sources of “global” temperature disagree, what one sees is largely a matter of which record one looks at.

    This is quite troubling, since the members of each pair (the two surface records, and the two satellite records) essentially have the same raw data. The differences between members of the pairs are thus differences in after-the-fact “adjustments”. Surely those can be ironed out and agreement reached on the “better” methods?

    The increased divergence between GISTEMP and HadleyCRU, and between UAH and RSS, suggests that rather than being reconciled, the differences are growing.

    Very troublesome.

    [Response: Not really. The trends for the most part are similar given the underlying uncertainty, and there are defendable reasons for the various choices made in the different analyses. Reasonable people can disagree on what is 'best' in these cases, and so rather than being troubled, one should simply acknowledge that there are some systematic uncertainties in any large scale climate metric. That uncertainty feeds into assessments of attribution and the like, but is small enough not to be decisive. Of course, assessing the reason for any divergence is important and can lead to the discovery of errors (such as with the UAH MSU record a couple of years ago), but mostly it is due to the different choices made - gavin]

    2. For assessing “global climate change” the absolute trend lines are apt. However, for assessing man-caused global warming the “neutral” trend line would not be zero. Since the Little Ice Age we have warmed naturally. Should this not be taken into account? In other words, a trend of some figure, say 0.6 C per century, should be regarded as “neutral” for the purpose of assessing a man-caused trend.

    [Response: There is no 'neutral trend' that can simply be regarded as 'natural' - volcanic and solar forcing changes are not any different to GHG forcing when it comes to attribution and modeling. If instead you propose that the 20th Century trends are simply some long term intrinsic change then you need to show what is going on. No such theory exists. The trends can however be explained quite adequately from the time-series of forcings (including natural and human caused effects). - gavin]

    Comment by John Lederer — 12 Jan 2008 @ 9:53 AM

  87. In 68 Hank writes “Jon Pemberton Seems to me you’re asking others to do an impossible amount of work for you — to get and chart for you exactly what you want, all of what you want, and nothing but what you want. It’s certainly doable. But the person behind you in line will have slightly different demands.”
    I think you are being a little unkind to Jon. In my computer I have complete sets of temperature data from NASA/GISS, HAD/CRU and RSS/MSU. If anyone can direct me to a site which has similar data for NCDC/NOAA I would be grateful. I have done some simple trend analysis on all four sets of data, and I am convinced that the HAD/CRU, NCDC/NOAA and RSS/MSU sets are highly correlated and give very similar results. The NASA/GISS data set is different. Notice I say “different”; I have no idea which set is closest to the truth. I have searched, written emails, etc., but I cannot find any study which has compared and contrasted the different data sets, so I have no idea which is “best”. Gavin has, however, used the NASA/GISS set, and seems to claim that any analysis using the other data sets would give the same result. My simple-minded analyses indicate this may not be true. So I think it is perfectly legitimate to ask that Gavin either show that the NASA/GISS data set is the best one available, or show that using the other three data sets produces the same answer.
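    One quick way to quantify “highly correlated” for two anomaly series is a Pearson correlation (a sketch with synthetic monthly series sharing a common signal; not the actual HAD/CRU or NCDC/NOAA data):

```python
import numpy as np

rng = np.random.default_rng(3)
months = 300

# A shared underlying signal plus independent measurement noise stands
# in for two analyses built from similar raw data.
signal = np.cumsum(rng.normal(0.0, 0.03, months))
series_a = signal + rng.normal(0.0, 0.05, months)
series_b = signal + rng.normal(0.0, 0.05, months)

# Pearson correlation between the two monthly anomaly series.
r = np.corrcoef(series_a, series_b)[0, 1]
print(f"r = {r:.2f}")
```

    A high r says the two series wiggle together; it says nothing about which one is closer to the truth, which is the harder question raised here.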

    Comment by Jim Cripwell — 12 Jan 2008 @ 9:56 AM

    I do not agree. Dr. Pielke and others have a point when comparing data and model output; at least they have a strong point in the public discussion. One can clearly argue that tackling climate change would be easier if recent years had shown significant warming. Correct, this does not invalidate the models, and the time series is too short and the error margins too big to make a scientific argument against them; however, it is disturbing. The sceptics, on the other hand, have to come up with their own models and theories and support them with data to have a level scientific battlefield. Criticism is easy. As long as there is nothing better out there, we had better stick with what we have.

    Comment by PeterK — 12 Jan 2008 @ 10:12 AM

  89. In 1980 I became the national training director for a carburetor company. I would travel around America training mechanics on how to fine tune using a chassis dynamometer and an exhaust gas analyzer. As part of the demonstration I would disable the then mistrusted emissions systems so mechanics could see that the systems were in fact eliminating large quantities of CO, NOX, and hydrocarbons from the exhaust gas – especially when simulating climbing hills at high speeds (needles pegged). When I would enable the emissions systems the exhaust gas, even under very heavy loads, would be mostly CO2 and H2O vapor – plant food is what I told them.

    So the change the emissions systems made in dramatically reducing aerosols makes perfect sense to me.

    In the mid 1980s there was a widespread resurgence in the use of wood stoves for home heating. Lots of towns, even small ones, had a store that specialized in selling them. Did those make their presence known?

    Comment by JCH — 12 Jan 2008 @ 10:23 AM

  90. #84

    The ISCCP and ERBS data are in very good agreement; are they also wrong? (page 21)

    “without anyone else noticing”?
    So it’s not true that the ocean heat content peaked in 2003 either, after a huge increase since 1994? (page 29)


    [Response: I don't see any support in Loeb's data for any significant shift in TOA SW absorption in recent years. And on slide 18, he estimates that it would take 15-20 years to be able to detect such a shift, and slide 19 shows no big changes in either ISCCP or CERES. As you should know, ocean heat content data post-2003 are the subject of a lot of investigation right now because of the shift to the ARGO network and various problems that there have been. Whether OHC peaked in 2003 is therefore very uncertain, but since SST records are rising less rapidly than SAT records, there is likely an increasing offset in air-sea temperatures, which implies that the oceans are still warming (and that OHC is increasing). As in the rest of this post, there is short term variability in all these metrics, and only significant long term trends count. - gavin]

    Comment by lgl — 12 Jan 2008 @ 11:10 AM

    re 83: I started thinking: what if aerosol forcing is now growing fast enough to counterbalance the GHG forcing from increasing CO2? Then GW would be temporarily stopped. We know CO2 emissions have increased rapidly in the past few years, principally due to rapid growth in developing economies. These economies tend to burn coal dirtily. A second factor is that depletion of higher-grade coal is forcing more and more consumption of lower-grade product. Could these twin trends mean that the aerosol load is rapidly increasing? Perhaps fast enough to counteract GHG forcing? A global temperature metric wouldn’t be the best way to detect such a change. Perhaps some other globally measured metrics could shed some light on this question?

    If this is indeed happening, it could make the job of obtaining consensus for mitigation more difficult.

    Comment by Thomas — 12 Jan 2008 @ 11:10 AM

  92. Jim Cripwell, glad you’re volunteering, but I suggest you post pointers to the source data (and note the date when you downloaded the copies you’re using), and describe what you’re doing to compare them. It’s too easy to get the wrong file or outdated copies, elsewise.

    Comment by Hank Roberts — 12 Jan 2008 @ 11:20 AM

  93. Jim Cripwell, was it NASA or NOAA data that you’re looking for?
    NOAA is here:

    “digital holdings … contain almost 300 terabytes of information, and grow almost 80 terabytes each year”

    Comment by Hank Roberts — 12 Jan 2008 @ 11:43 AM

  94. I am a newbie here, and would like to pose a question that may be seem a bit stupid to some.

    I read the statistical work at this site, and it claims that a standard deviation for the annual average temperature series is 0.1 deg C.

    The data indicate the standard deviation of the yearly data since 1975 is 0.1 deg C. Any data within +/- 0.2 deg C (two standard deviations) would be in the 95% confidence interval. Instead, Dr. Pielke has shown dashed lines spanning only one fourth of that interval on his chart.

    Isn’t this standard deviation calculated from the temperature data alone, not from the IPCC models? So wouldn’t the temperature noise variation be used for the confidence intervals in a forecast verification (not model verification, as Dr. Pielke points out)? Wouldn’t any forecast have the same confidence intervals applied to it, regardless of how the forecast was arrived at?

    Is it possible the confidence levels shown on the Pielke chart are based on a plot of some kind of longer-term average, and have been overlaid on annual temperature data? The large scatter in the data, compared to the confidence intervals, seems inconsistent.
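    The arithmetic behind this question can be sketched (synthetic anomalies at the 0.1 deg C noise level quoted above; not the actual data on Dr. Pielke’s chart):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1975, 2008)
anoms = 0.017 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# Estimate the year-to-year noise from residuals about the trend line.
slope, intercept = np.polyfit(years, anoms, 1)
trend = slope * years + intercept
sigma = (anoms - trend).std(ddof=2)

# A 95% interval for *individual annual anomalies* is about +/- 2 sigma
# around the trend, i.e. roughly +/- 0.2 deg C here -- much wider than
# the uncertainty on the long-term trend itself.
inside = np.mean(np.abs(anoms - trend) <= 2 * sigma)
print(f"sigma = {sigma:.2f} deg C; {inside:.0%} of years fall in the band")
```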

    (Disclosure… I am just a chemical engineer, who worked for an oil company once upon a time, and have no experience in climate studies.)

    [Response: The error bars on the forecast for the IPCC models are the uncertainty on the long term trend, not the envelope or distribution of annual anomalies. I think that this is misleading when comparing to real annual data, and is at the heart of my statements above. - gavin]

    Comment by Paul Klemencic — 12 Jan 2008 @ 12:17 PM

    Now Gavin and Stefan, changes in upper ocean heat content are a really interesting question, and relate directly to the subject at hand. As you are aware, Roger Pielke Sr. (and others) have been pointing out the need to assess upper ocean heat content changes. Although there have been some well publicized problems with changing from XBTs to the Argo floats, the error bars in assessing global changes in ocean heat content are decreasing dramatically. The conclusion that the upper ocean has warmed over the last 40 years is certainly robust, but the shorter term changes are also interesting, since they are a direct proxy for the current radiative imbalance at the top of the atmosphere. After the Willis and Lyman papers showing short-term cooling, then their correction, we are still left with the more robust conclusion that the upper ocean heat content has been essentially flat the last few years. This is really more informative than a two-dimensional surface temperature analysis, since the ocean heat content goes directly to the question of unrealized “heating in the pipeline”, and also to what the current sum of all the forcings and feedbacks adds up to, and how this sum changes on annual and multidecadal scales. What is also interesting is how well the SST changes map onto upper ocean heat content, and how quickly these SST changes seem to be realized in the atmospheric volume (i.e. El Nino). It has been pointed out that the so-called “time constant” to equilibrate to a change in forcing via ocean mixing processes has a direct bearing on climate sensitivity to changes in a forcing.

    Comment by Bryan S — 12 Jan 2008 @ 12:28 PM

  96. Ref 92 and 93. Each month I download the following sites.

    The first, the NCDC/NOAA site, only gives data for that month. I have searched the site you quoted, but cannot find anything similar to the other three sites; if you can find it, I would be grateful. As I have noted, I downloaded the shareware CurveExpert 1.3, and simply plug in different time scales for all four data sets, call for a specific type of analysis, and look at the results. I do this for my own education; I am sure you don’t want to read any non-peer-reviewed results.

    Comment by Jim Cripwell — 12 Jan 2008 @ 12:37 PM

    In 86 Gavin writes “That uncertainty feeds into assessments of attribution and the like, but is small enough not to be decisive.” I wish I shared your optimism. I think the uncertainty is large enough to be decisive, but have nothing to offer in the way of a reference; just the results of my playing around with simple analyses. On what do you base your opinion?

    [Response: The fact that the model match to obs doesn't depend on whether you use GISTEMP, NOAA or HADCRU. - gavin]

    Comment by Jim Cripwell — 12 Jan 2008 @ 12:46 PM

  98. Thanks for the quick response, Gavin. Couldn’t Dr. Pielke fix his chart, by simply drawing 95% confidence interval lines, 0.2 deg above and below the IPCC forecast, and then compare with the annual anomaly data? That interval would reflect where the annual data should fall.

    The forecast looks pretty good, if that is the expected range of variation in the actual temperature anomaly data.

    I realize this may not be entirely correct, because the standard deviation probably wasn’t calculated from just one of the temperature measurement systems, but then the chart wouldn’t be as misleading as it currently is.

    [Response: I would suggest asking him. - gavin]

    Comment by Paul Klemencic — 12 Jan 2008 @ 12:56 PM

  99. #90

    The inconsistency between Loeb’s slide 19 and ISCCP’s own web page is strange, I agree. Which one should be the more reliable?

    I find it hard to believe that we were able to send a man to the moon 40 years ago but are now unable to measure the temperature of the oceans. There must be some system still in operation that was also in operation in the 90s; replacing all the floats without an overlap would be too stupid.
    So why would that system be correct pre-2003 and wrong post-2003?

    Comment by lgl — 12 Jan 2008 @ 1:01 PM

  100. #42 Barton,

    “[2001-2007] except that the data indicates no change in temperature over that period”

    And Superman might sell Mr Desjardins his fortress of solitude at the North Pole for a good price!

    I don’t think it’s possible, just yet, to have a single trend based on a couple of years without some deviation from the overall long-term path, because the measuring techniques’ resolution is not at the quantum level. In addition, here is what I don’t see: worldwide maxima and minima anomalies, say from sea level to Bogota’s height above sea level.

    In the Arctic, just the other day, the temperature at the surface was about -27 C (…no, that’s warm; it should be -35 C), yet aloft at about 900 meters it was -15.4 C. If this station were 900 meters high, the surface record would be different. These thermal heat sources aloft may be found anywhere in the world, but are particularly strong in the Arctic. If we pick one single upper-air level, we will miss a cooling or a warming above or below it. The idea that adiabatic lapse rates are constant, and that we can therefore pick a single representative height, will equally mislead.
    But if we search for a GW trend, we should find that every year the maxima (1 to 3000 m ASL) are pushing upwards relentlessly. Then again, I don’t think satellites can find profile maxima, and the worldwide radiosonde network is too sparsely located, although its data is state of the art.

    Lacking resolution, we can rely on other measurements: very long term temperature trends, polar ice melt, worldwide glacier retreat, deep sea temperatures for the most part, and other benchmark measurements, until temperature resolution deficiencies have been eliminated, by increasing present network densities or by finding a different way to measure the weighted temperature of the entire atmosphere, as we do for other planets. As a complement to present techniques, I suggest using the sun as a fixed disk of reference… It works!

    Comment by wayne davidson — 12 Jan 2008 @ 1:43 PM

    It’s a little off topic, but in the Sunday comics there was a funny strip about interpreting statistics in research. It reminded me of the current discussion. The strip even uses the +/- symbol. It’s funny.

    Comment by Joseph O'Sullivan — 12 Jan 2008 @ 1:47 PM

  102. I would say that every modern weather event bears the fingerprints of global warming. The problem is that it is hard to find “fingerprints” on a thunderstorm or hurricane. However, it is easy to find the fingerprints of global warming on the weather prediction models. Those models use sea surface temperatures and atmospheric temperatures that reflect the full impact of global warming. The success of these weather models proves that global warming affects our daily weather.

    Climate we can plan for, engineer for, and survive. The problem is always the weather (e.g., a rain event, a snow event, a drought, ice melt, a heat wave, a cyclone, a typhoon). Global warming brings us increasingly intense weather. That means the weather problems become more intense.

    Last week there were tornadoes across the US in January. In our recent climate that would have been a very rare event. Global warming supplied heat to make it a less rare event. Sudden Arctic sea ice melt was a rare event. Global warming has supplied the extra heat necessary to make it less rare. At some point you have to say, “Global warming has given us a new climate. We are having weather that for all practical purposes did not occur in the old climate.” We may not be sure what else global warming may bring us, but we can be very sure that most of us are not going to like it, because our infrastructure is not designed to withstand it.

    Odd weather also confuses the plants that provide us with food. Over the last few decades, the climate around my house has warmed from temperate to subtropical. My fruit trees are no longer adapted to our climate. I live near major commercial fruit growing areas, so this is a real economic cost in the near future. Oh, but you say that, there are colder regions that can still grow fruit. Yes, but they currently grow varieties that are adapted to high chill conditions. As their climate warms, their trees will stop producing, and they will have to replant with trees that require less chill. This will happen around the world. Expect the price of fruit and nuts to go way up – soon.

    We cannot wait for certainty. We need to act. We are dealing with very nonlinear processes, some of which can proceed much faster than any currently published peer-reviewed modeling predicts. One case in point is Arctic sea ice. The melt ponds and moulins on Greenland suggest that we will soon have another example. There are reports of melt water on the WAIS. Mother Nature can outrun our fastest models. We need to “cut her off at the pass.” Any effort to do that is cheap compared to the costs if she (global warming) gets away from us.

    Comment by Aaron Lewis — 12 Jan 2008 @ 2:09 PM

    Re #95: Sorry for the last statement; you are the expert, so let me rephrase in the form of some questions. 1) Why are yearly or multi-year changes in ocean heat content unimportant or uninteresting? 2) Assuming the finding of very little if any upper ocean warming over the last three to four years holds up (I am persuaded that, following the corrections to the Argo data, this is likely), does it challenge *anything* that modelers think they know about the sum of *current* radiative forcings and feedbacks? 3) If the earth’s heat balance can be neutral or even negative for multiple years, even with the increasing dominance of the radiative forcing of CO2, doesn’t it make you even a little curious that the models’ handling of feedbacks, or deep ocean mixing, or other processes may be incomplete?

    [Response: If... if.... if.... all those things were demonstrably so, of course one would be interested in why. But they aren't. Right now there is substantial systematic uncertainty in OHC numbers for a single year (much greater in terms of signal to noise than in the surface temperature record). There is additionally substantial natural variability in the system - i.e. OHC changes due to La Nina/El Nino events are likely to be very significant. And although the balance of evidence does suggest significant warming in the long-term - there are certainly issues related to spatial coverage in earlier periods. Given that uncertainty, it will be a long time before this dataset becomes primary in issues of attribution. That's unfortunate, but there's not much to be done about it. - gavin]

    Comment by Bryan S — 12 Jan 2008 @ 2:31 PM

  104. Bob North (76) — (1) That all GW is AGW follows from the theory of orbital forcing. The climate should be (on average) slowly cooling now and for the next 20,000 years, if humans had not added about 500 billion tonnes of carbon (so far) to the active carbon cycle.

    (2) Predictions of drought are easily understood as predictions that water vapor patterns will change. Indeed, Australia is already experiencing this, as is the southern portion of the Amazon Basin and in the Sahel. Hadley Centre has regional predictions for 2050. While your concern for accuracy is reasonable, you still may find that report of interest.

    Comment by David B. Benson — 12 Jan 2008 @ 2:59 PM

  105. Re #90

    “and slide 19 shows no big changes in either ISCCP or CERES”
    Solved it, it’s two different things. Loeb’s is albedo, my link is net TOA.

    I doubt the ISCCP net TOA is wrong. There’s a close to perfect correlation between that and volcanic activity.
    After every VEI4 or larger eruption the net TOA drops. There were no large eruptions between 1994 and 2000, therefore high radiation input. After 2000 there have been 6.
    An OHC level-off should be expected.

    [Response: There have been no climatically significant eruptions since 2000. What you need for that is not just a highly explosive one, but one that deposits significant sulphate in the stratosphere - this has not occurred (you don't see the telltale signs of lower strat warming in MSU4 for instance, and there is no increase in aerosol optical depth either). I suggest emailing someone at ISCCP and asking about the post-2000 offset. I would not be at all surprised if it coincided with the inclusion of data from CERES. - gavin]

    Comment by lgl — 12 Jan 2008 @ 4:10 PM

  106. In 97 Gavin writes “The fact that the model match to obs doesn’t depend on whether you use GISTEMP, NOAA or HADCRU”. Do you have a reference where I can read up all the details? Preferably one that has been peer reviewed.

    [Response: Just plot one on top of the other from 1975 to 2007. it's not rocket science. - gavin]

    [Response: For the Hadley and GISS data the graph and corresponding peer-reviewed paper is linked in the "update" at the bottom of the post. -stefan]

    Comment by Jim Cripwell — 12 Jan 2008 @ 4:27 PM

  107. Re: #105, Gavin, I am quite surprised by your reluctance to embrace the new Argo network data, even given some of the recent issues. As far as I can understand, these are being worked through, and it seems that appropriate corrections are now being made. New drop-rate calculations for the XBTs are being implemented, and the cold bias on the affected Argo floats has been removed. Some recent work on integration techniques has also been performed, and it seems to show that the difference between techniques is quite small and produces very small error bars. Are you aware of other potential problems? If so, please advise.

    So, are you going on the record saying the Argo float coverage, collection techniques, or instrumentation are not up to the task of providing reliable yearly or multi-year ocean heat content numbers? If so, I am sure the entities who have invested lots of money in this program will be mighty disappointed, given that just a couple of years ago the climate science and ocean science communities heralded the Argo network as the greatest thing since sliced bread.

    Also, I am skeptical of your suggestion that ENSO makes a significant difference in the annual global heat content integral. Is the amount of heat transferred to the atmosphere during an El Nino of a similar magnitude to the total change in heat content of the entire upper ocean on an annual basis? The heat content changes manifested by SST anomalies in the equatorial Pacific are regional changes, and the Joules are already in the ocean, and should be accounted for in the global volume integral. If what you say is correct, then one would expect a clear ENSO signal in the OHCA graph over the past years (i.e., a drop in the global heat integral following an El Nino event). I see none. Maybe I misunderstand something fundamental here, and if so, please clarify.

    [Response: You appear to be jumping to conclusions. How did you get the idea I didn't think the ARGO program was useful? None of the resolutions of issues that you mention have made it into the literature yet and I'm happy to refrain from commenting on papers that don't yet exist. When they do, ask me again what they might mean. As to whether a big El Nino should impact the OHC numbers, surely the answer must be yes. Whether the magnitude is sufficient to take it into negative territory is unclear to me (but I haven't looked into it), but I'd be very surprised if there was no effect -at minimum you'd expect a slowing of the rate of growth. During a La Nina, you expect the opposite - an increase in the rate of growth. Once ARGO has been in place for a few more years and the teething problems dealt with, it will indeed be a big boost. But don't prejudge what will be found in the future (unless you have good reasons). - gavin]

    Comment by Bryan S — 12 Jan 2008 @ 4:32 PM

  108. gavin> OHC changes due to La Nina/El Nino events are likely to be very significant.

    I understand there may be measurement problems, but aside from that, how can OHC change significantly other than by radiation balance?

    [Response: OHC changes if the net heat flux at the ocean surface changes. That can be driven by sensible/latent/SW and LW changes - and all of those change with ENSO. Looking roughly at some AMIP runs, the GISS model has around 0.3 to 0.4 W/m2 DJF heat loss from the oceans for the 1997/1998 El Nino which is significantly smaller than the heat loss anomaly to space (implying a significant heating of the atmosphere). One could look into that more closely of course.... - gavin]
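    As a rough check on the magnitudes in the response above, here is a back-of-envelope conversion from a seasonal W/m2 flux anomaly to Joules and to an equivalent upper-ocean temperature change. All constants are round illustrative values (ocean area, season length, mixed-layer properties), not model output:

```python
# Back-of-envelope: what a sustained surface flux anomaly means for ocean
# heat content. All constants are round illustrative values, not model output.
OCEAN_AREA = 3.6e14          # m^2, approximate global ocean surface area
SEASON = 90 * 86400          # s, roughly one DJF season

flux = 0.35                  # W/m^2, illustrative net heat-loss anomaly
joules = flux * OCEAN_AREA * SEASON
print(f"heat lost over one season: {joules:.2e} J")

# Equivalent temperature change if spread through the upper 700 m of ocean:
rho, cp, depth = 1025.0, 3990.0, 700.0   # kg/m^3, J/(kg K), m
dT = joules / (rho * cp * depth * OCEAN_AREA)
print(f"equivalent upper-ocean cooling: {dT * 1000:.2f} mK")
```

    Even a few tenths of a W/m2 sustained for one season moves the global integral by on the order of 10^21 J, which is why ENSO-scale flux anomalies could plausibly be visible in the OHC record.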

    Comment by Steve Reynolds — 12 Jan 2008 @ 4:59 PM

  109. FWIW I predict 2008 will be a scorcher based on the dramatically different start to the year with temperatures in places up to 15C higher than the January average. For some reason the oceans must have ‘sucked it in’ last year and are now ready to ‘let it out’. If not the next year or two then down the track.

    Comment by Johnno — 12 Jan 2008 @ 5:12 PM

  110. > FWIW I predict

    You can decide WIW and see if anyone will take your bet:
    with more at

    Comment by Hank Roberts — 12 Jan 2008 @ 5:36 PM

  111. I’ve left a question for Mr. Tierney in his blog comments; perhaps we should do likewise for Christopher Booker of the Telegraph? (re Bob Ward at #5)

    For some entertaining reading, search for “Christopher Booker” in the tobacco papers
    “The lead from four-star petrol falls harmlessly on the roadside… this is simply another pointless victory for a self-righteous lobby that is not even aware that its dogma rests on science that has long since been discredited.”

    Comment by Anna Haynes — 12 Jan 2008 @ 5:45 PM

  112. Ok Gavin, in post 68 you suggested that I ask Dr. Pielke to revise the confidence interval shown to reflect how the annual average anomaly has varied in the past. I suggested at least twice the natural system standard deviation of 0.1 deg. I posted this on his blog.

    I also put the following posts up on Tierney’s blog at the NY Times in response to a poster who interpreted the confidence level shown by Pielke:

    Post #34 Frank’s comments include this:
    “1. If the dotted lines are 95% confidence levels, the probability that the IPCC forcast is too high is quite large over the past four years. Two of the series are below the 95% confidence level in 3 of the last 4 years.”

    If I understand the statistical analysis on this site, the dashed lines are not confidence levels based on historical annual temperature anomaly data.

    The data indicate that the standard deviation of the yearly data since 1975 is 0.1 deg C. Any data within +/- 0.2 deg C (two standard deviations) would be in the 95% confidence interval. Instead, Pielke has shown dashed lines spanning only one fourth of that interval on his chart.

    Incidentally this standard deviation is calculated from the temperature data alone, not from the IPCC models.

    I suspect the confidence levels shown on the Pielke chart must be based on some kind of longer-term average, and seem to be overlaid on the annual data.

    Do we have an apples-to-apples comparison here?

    We could draw the expected 95% confidence interval by placing the dashed lines on his chart 0.2 deg above and below the forecast. From natural weather variation alone, about 95% of the data collected should fall in that band. And the real data shown do.

    Of course, to do this correctly, error bands should be set by the standard deviations calculated for each measurement system: NASA, UKMET, UAH, and RSS. Each measurement system has its own individual measurement error that should be added to the natural system variation.
    But at the very least, the error bands must be +/- 0.2 degrees… anything less would be clearly [edit] misleading.
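    The two-standard-deviation band argued for above is easy to compute. A minimal sketch in Python, using made-up residual values with roughly the 0.1 deg C scatter discussed (not the actual GISTEMP or Pielke data):

```python
import statistics

# Hypothetical detrended annual anomaly residuals in deg C (placeholder
# values chosen to have roughly the 0.1 deg C scatter discussed above).
residuals = [0.05, -0.12, 0.08, -0.03, 0.11, -0.09, 0.02, -0.15, 0.13, 0.00]

sigma = statistics.pstdev(residuals)   # natural year-to-year standard deviation
band = 2 * sigma                       # ~95% interval, assuming roughly normal noise

forecast = 0.20                        # hypothetical forecast anomaly, deg C
print(f"sigma = {sigma:.3f} deg C")
print(f"95% band: {forecast - band:.2f} to {forecast + band:.2f} deg C")
```

    With sigma near 0.1 deg C, the band comes out close to +/- 0.2 deg C around the forecast, which is the interval argued for above (before adding each system's measurement error).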

    Comment by Paul Klemencic — 12 Jan 2008 @ 5:59 PM

  113. #105


    “There have been no climatically significant eruptions since 2000″

    How can you be so sure sulphate to the troposphere is insignificant?
    VEI4 eruptions in 73,74,75 and 76 and a significant drop in temperature in 76 for instance.

    And we are talking a level off since 2003, not a significant drop (yet).

    [Response: The only way sulfate to the troposphere is climatically relevant is if large amounts are output persistently for months at least. Otherwise it just washes out. Laki (1783) is a good example perhaps (Oman et al). But, as far as I am aware, none of the volcanoes in recent years have been that persistent or voluminous. Overall levels of aerosol have been dropping (ever so slightly) over that period in any case (Mishchenko et al 2007). - gavin]

    Comment by lgl — 12 Jan 2008 @ 6:02 PM

  114. It is midnight in Sweden, 100 km NW inland from Stockholm, the night between Jan 12 and 13. Outside it is +6°C, there is no snow, no ice on Lake Mälaren, our third biggest lake in Sweden, no ice in the Bothnian Gulf. Last year the average temp in Sweden was 1.5-2°C above average, and in January so far average temperatures are 5-6°C above normal. You may call this “weather” but it is really starting to look like climate.

    Comment by Bo Norrman — 12 Jan 2008 @ 6:48 PM

  115. Given the impact of volcanoes, I think that Krakatoa and Tambora should be referenced when temperatures from the 19th Century are compared to today.

    Krakatoa was 3 times bigger than Pinatubo and Tambora was 10 times bigger, yet they are never referenced when temperature records are compared.

    [Response: They are referenced in model simulations (at least Krakatoa which is after the 1850/1880 start date for most simulations)- see - gavin]

    Comment by John Wegner — 12 Jan 2008 @ 6:54 PM

  116. Gavin– In answer to #2 you discussed the multiple response times of various components of the planet. Could you point me to a reference that describes the response time to Pinatubo, the deep oceans, or any other components? Thanks in advance.

    [Response: Where do you want to start? Google 14C distributions in the ocean, the seasonal cycle, glacier retreat etc. The literature on responses to Pinatubo is also pretty extensive: Hansen et al 1992, Wigley et al 2005 (in response to Douglass), .... If you want to be more specific, I could probably be more helpful. - gavin]

    [Response: If you don't have access to the Wigley et al '05 paper, which indeed addresses this most directly, you can find some discussion in the IPCC AR4 report, which is publicly available. See page 61 of chapter 9 of the Working Group I report (warning, its 5 MB in size and may take a minute or two to download, depending on the speed of your internet connection). - mike]

    Comment by lucia — 12 Jan 2008 @ 8:31 PM

  117. I’m sad. I provided references for Hank, and he has yet to follow up with me, even though he is still active in the thread. And nobody has a comment on the graph I labored most of the night to create?

    I’m feeling neglected :-)


    1. We are due for a spike year. We haven’t really spiked since 2002, and it is about time for another one, based on what I see on the historical graph.

    2. On any scale: 5 year, 10 year or 30 year, it is unmistakable that the warming trend continues. The rate may be slowing or accelerating; we should know the answer to that in 5 years. However, “cooling” would be the wrong word no matter how you look at the last 9 years, when all are well above the 1951 – 1980 mean and 2 of them exceeded 1998. As I have been saying, the “spike” has become the “norm.” Is that not news?

    Comment by Walt Bennett — 12 Jan 2008 @ 8:42 PM

  118. Walt, did you see Gavin’s inline response at #86? It seemed to me to answer your question as well as you could hope for, though it was in response to someone else asking something similar.

    I’m just another reader here, not a climatologist; I can hardly answer questions like the one you asked – “Is Hadley right?”

    I doubt anyone could. If you mean do their published measurements reflect the instruments they used — likely so.

    You could ask what their error bars are for short term measurement (those will be relatively large) compared to their error bars for long term trends (those will be relatively smaller), perhaps.

    Comment by Hank Roberts — 12 Jan 2008 @ 9:17 PM

  119. Re: #118,


    I am specifically asking (anybody):

    1. Can Hadley’s SST measurements be considered reliable to the same extent land measurements can?

    2. Is it correct to allow the extrapolated SST signal to dominate the reconstructed climate signal? I suppose that rolls back to question 1, but also the method by which overall SSTs are determined. I assume there are very few measuring points on the open ocean; what method is used to extrapolate the overall temp, and has it been in any way verified with, for example, spot sampling from passing ships?

    My intuition is telling me that the warming is clear, detectable and accelerating, at least here in the northeast U.S., and from what I read, the same is true in many other places. We have visual evidence of rapid change in the Arctic region, and I suspect we will be seeing much the same soon in certain parts of the Antarctic continent, even moreso than we have already seen. There may be some confusing year-to-year feedbacks, but the overall trend is clear to me.

    If we have SST measurements that seem to offset the warming, there is bound to be less concern about the overall increase in global temp. If it turns out that the methods used to determine overall SSTs were severely in error, we would be looking back and asking “Why did we let ourselves believe overall global temps were stable, when all of the visual evidence told us that the warming was continuing?”

    Comment by Walt Bennett — 12 Jan 2008 @ 9:32 PM

  120. Idea – someone should run a “Debunking for Dollars” site where anyone with a denialist talking point could submit it with a PayPal donation of appropriate size & get said point addressed, instead of sitting around at Dot Earth (and elsewhere) complaining that RealClimate won’t address the point [for the 152nd time].

    If there are conflict-of-interest issues, the money could go to a charity. And multiple people could do the debunking, to spread the load.
    (I’ve just now registered the url as a precaution, and would be happy to help defray the costs of setting up the site…)

    Comment by Anna Haynes — 12 Jan 2008 @ 9:41 PM

  121. Walt, when you write this:

    > If we have SST measurements that seem to offset the
    > warming, there is bound to be less concern about the
    > overall increase in global temp.

    you’re now talking PR, political spin … not science.

    It doesn’t matter what your political beliefs say would be good PR or good spin or good press, if you want to talk about the science. What matters is getting good information.

    The models have all indicated areas that will be warmer, and areas that will be cooler, over time. Hadley in particular has that 10-year look into the model’s future that I already pointed to. But look at any of the pictures of what may occur and you’ll see areas warmer and other areas cooler.

    Getting it right is what matters. Not getting it one way for PR.

    Comment by Hank Roberts — 12 Jan 2008 @ 11:27 PM

  122. Mike– Thanks for the citation. (I have a fast connection.) I’ve glanced at that, and it appears to be a summary/literature-review type document. So, while it doesn’t contain what I’m looking for, I suspect I can order some of the references to find the basis for some numbers. It should help.

    Thanks. I’d probably be more specific if I had a clue what types of things existed. :)

    I’m interested in papers that specifically describe how that set or particular authors estimate time scales, rather than simply citing other authors and mentioning what someone else found. I have, for example, Schwartz 2007, “Heat Capacity, Time Constant, and Sensitivity of Earth’s Climate System”, which suggested the climate system can be modeled as a simple lumped parameter and then found a time constant based on the autocorrelation of surface temperatures. Schwartz cites a number of papers in section 4. I’m in the process of trying to get them and familiarize myself with the sorts of approaches used, but Schwartz doesn’t, for example, cite Hansen 1992. (He does cite a Hansen 1996, Geophys Rev Letters. He also cites Wigley 2005. I assume Hansen wrote more than one paper in 1996?)

    Anyway, as I know you criticized Schwartz’s paper, it occurred to me you might cite papers Schwartz did not. I guess as long as you are writing about this topic, and you answered a question mentioning these time constants, I’m asking hoping to find which papers describing time constants you think are most worth reading rather than simply googling.

    It sounds like the Wigley paper that Mike, you, and Steven Schwartz all cite must be a good start. If it’s not, I’ll probably be able to trace back through the references.

    Comment by lucia — 12 Jan 2008 @ 11:35 PM

  123. The sun is quiet, and has been on a downturn for a few years. Why are cooler ocean temps any surprise? It *is* possible that the sun will soon enter a relatively extended quiet period. What ramifications that would have for our climate remain to be seen.

    In reply to comment 120: Why are all of us with open minds labeled debunkers? Are open minds now a bad thing? Thanks, but I will keep my open mind and skepticism readily handy for both sides. Everyone these days seems to have an agenda. If the oceans are cooling, that’s great. If not, that is great too. This whole thing makes me miss the old days when everyone thought nuclear war was inevitable. :)

    Comment by Dusty — 12 Jan 2008 @ 11:50 PM

  124. Re:#107 Gavin, you say “none of the resolutions of issues that you mention have made it into the literature yet and I’m happy to refrain from commenting on papers that don’t yet exist”

    Ah, but they will be in the literature soon. A steak dinner at your favorite steakhouse if the OHC gain for 2004, 2005, 2006, and 2007 turns out to be more than “statistically insignificant” in any refereed paper on the issue. If less, then you buy at mine. If the result is overturned later, I will refund your dinner in kind. Are you having any of it? Maybe afterward we can get down to business talking about some interesting climate science.

    P.S. The beef is better down here in Texas.

    Comment by Bryan S — 13 Jan 2008 @ 12:33 AM

  125. How well do the GCMs used in the various generations of IPCC studies capture the effects of the two volcanoes highlighted in the main article? I gather that volcanic aerosols are specified in model runs up to the present. One would expect that they should show the two cooling pulses. Has anyone plotted model outputs for the few years around these eruptions?

    Comment by John — 13 Jan 2008 @ 12:35 AM

  126. Dusty, is there some correlation between sunspot cycles and El Nino/La Nina/ENSO published somewhere? Cite please? Hadley’s prediction for the next decade doesn’t involve the sun missing a cycle. And the sun’s on the upswing, first sunspot of the new cycle happened.

    Comment by Hank Roberts — 13 Jan 2008 @ 12:35 AM

  127. As per #78, an even better way to get over all this arguing about 7 years, 8 years, etc goes like this:

    1) Download the GISTEMP data (year, anomaly), I used 1977-2006.

    2) Compute N-year SLOPEs centered on each year:
    N years
    3 1978-2005
    7 1980-2003
    11 1982-2001
    15 1984-1999
    19 1986-1997

    3) Do a scatter plot of the 5 series, which shows how the slopes vary over time for a different number of years.

    4) One finds:
    –3– –7– -11– -15– -19–
    .018 .017 .017 .017 .016 MEAN Not much difference
    .074 .023 .008 .007 .003 STDEVP Standard deviation decreases strongly
    .135 .069 .036 .030 .020 MAX
    -.130 -.022 .006 .006 .011 MIN
    .265 .091 .030 .024 .010 range Range shrinks strongly, unsurprisingly

    5) People sometimes argue with any specific series-length as cherry-picking, but by showing multiple lengths on one chart, that clearly isn’t happening.
    The chart is another way to show what anybody *should* know, but some people seem to not understand, or don’t want to:

    a) If you pick a short span in a noisy series, you can prove anything.

    b) As series get longer, the variability shrinks, and that is obvious on the chart.

    6) And in this particular case, once you get to 11 years, there are *no* negative-slope series [min = .006], i.e., even Pinatubo doesn’t do it.
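    The centered-slope computation in steps 1-4 can be sketched as follows. The anomaly series here is synthetic (a built-in 0.018 deg C/yr trend plus deterministic wiggles) standing in for the real GISTEMP data, so the numbers illustrate the method rather than reproduce the table above:

```python
import math

def centered_slopes(years, anoms, n):
    """Least-squares slope over each n-year window (n odd), keyed by center year."""
    half = (n - 1) // 2
    slopes = {}
    for i in range(half, len(years) - half):
        xs = years[i - half:i + half + 1]
        ys = anoms[i - half:i + half + 1]
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        slopes[years[i]] = num / den
    return slopes

# Synthetic stand-in for GISTEMP 1977-2006: linear trend plus deterministic "noise"
years = list(range(1977, 2007))
anoms = [0.018 * (y - 1977) + 0.1 * math.sin(1.7 * i) for i, y in enumerate(years)]

s3 = centered_slopes(years, anoms, 3)     # centers 1978-2005
s19 = centered_slopes(years, anoms, 19)   # centers 1986-1997
print(min(s3), max(s3), min(s19), max(s19))
```

    With this synthetic series the 19-year windows recover the underlying 0.018 trend far more tightly than the 3-year windows do, mirroring the sharp shrinkage of the standard deviation in point 4.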

    Comment by John Mashey — 13 Jan 2008 @ 2:04 AM

  128. Re: #121


    I fear that you have once again missed my point; I do not intend to be so obscure.

    I am singularly focused on getting it right. I have asked a battery of questions on that very subject and have gotten zero response.

    The point, if there was one, to my question was, is it possible that we are getting it wrong with regard to SST cooling? Hadley’s own graph shows that the ocean tends to warm when land warms. Why in the last two years has that not been the case? What would the explanation be for the oceans, overall, to be cooling for what to my eyes seem to be at least the last two years?

    Comment by Walt Bennett — 13 Jan 2008 @ 3:12 AM

  129. Re # 60 etc on Ocean Heat Content (OHC):

    please see the new study

    Johnson, Gregory C., Sabine Mecking, Bernadette M. Sloyan, and Susan E. Wijffels, 2007. Recent Bottom Water Warming in the Pacific Ocean. Journal of Climate Vol. 20, No 21, pp. 5365-5375, November 2007, online


    Decadal changes of abyssal temperature in the Pacific Ocean are analyzed using high-quality, full-depth hydrographic sections, each occupied at least twice between 1984 and 2006. The deep warming found over this time period agrees with previous analyses. The analysis presented here suggests it may have occurred after 1991, at least in the North Pacific. Mean temperature changes for the three zonal and three meridional hydrographic sections analyzed here exhibit abyssal warming often significantly different from zero at 95% confidence limits for this time period. Warming rates are generally larger to the south, and smaller to the north. This pattern is consistent with changes being attenuated with distance from the source of bottom water for the Pacific Ocean, which enters the main deep basins of this ocean southeast of New Zealand. Rough estimates of the change in ocean heat content suggest that the abyssal warming may amount to a significant fraction of upper World Ocean heat gain over the past few decades.

    Comment by Timo Hämeranta — 13 Jan 2008 @ 6:30 AM

  130. Hmmm, was it a low sunspot number, or was it bad observing conditions due to volcanic activity? Interesting idea:

    Volcanism, Cold Temperature, and Paucity of Sunspot Observing Days (1818-1858): A Connection
    Author(s): Wilson, Robert M.
    Abstract: During the interval of 1818-1858, several curious decreases in the number of sunspot observing days per year are noted in the observing record of Samuel Heinrich Schwabe, the discoverer of the sunspot cycle, and in the reconstructed record of Rudolf Wolf, the founder of the now familiar relative sunspot number. These decreases appear to be nonrandom in nature and often extended for 13 yr (or more)…. The drop in equivalent annual mean temperature associated with each decrease, as determined from the moving averages, measured about 0.1-0.7 C. The decreases in number of observing days are found to be closely related to the occurrences of large, cataclysmic volcanic eruptions in the tropics or northern hemisphere….”

    Interesting to think that the counts of sunspot numbers could actually have been counts of dirty air days, and the variations in temperature due not to sunspots but volcanic activity far around the Earth.

    I don’t know where this idea went, if anywhere. I just happened on it.

    Comment by Hank Roberts — 13 Jan 2008 @ 6:44 AM

  131. RE this post, it’s a good thing there are some scientists around, like you folks, to do the science.

    I’m thinking sci-fi on this — a world in which journalists do the science. Title? Maybe, THE NEW DARK AGES.

    Comment by Lynn Vincentnathan — 13 Jan 2008 @ 9:36 AM

  132. I saw a program on Nat Geo, RING OF FIRE (around the Pacific Ocean). It said volcanism is higher now than thousands of years ago, and that it could get a lot worse.

    If it does and it gets extremely worse (like 251 million years ago), then I’m guessing the initial impact might be an aerosol cooling effect, but that once that clears from the atmosphere, it would leave GHGs behind for a very long time to cause increased GW.

    But before the denialists start gloating about this, we should consider that we might need those very fossil fuels we are wastefully burning now to help us keep warm during the cold snap.

    So either way, whether volcanism is due to increase drastically or not, we’ve got to cease and desist from our fossil fuel burning right now.

    Comment by Lynn Vincentnathan — 13 Jan 2008 @ 9:50 AM

  133. From the referenced Jan 10, 2008 article: “Dr. Pielke suggests that more scientists do reality checks”…

    However, Pielke fails to mention that U.S. scientists have run into a road block in attempting to do reality checks on climate change with National Weather Service climate and streamflow data. Pielke ought to know that to be true in his having received funding from NOAA NWS in years past.

    Comment by pat n — 13 Jan 2008 @ 10:31 AM

  134. I think what the original article was tapping into is the general public’s observation that there has been no meaningful global temperature rise in the last decade. This is at odds with the alarmism that is continually being preached. I agree with the point that the IPCC predictions are not meant for the short term, but they create a credibility problem for themselves when the alarmist predictions don’t materialise in the short term. Looking at the global HADCRUT3 data set and overlaying the IPCC (2001 report) range of predictions clearly shows the global data points (or rolling averages if you prefer) for the last 6 or 7 years plotting underneath the entire range given in the 2001 report. This is a fact and making excuses for it is pointless. It would be much better to just state: “the predictions are meant for the long term (e.g. 2100) and are still valid, but there has clearly been an overestimate of warming in the short term”. Being honest about it will allow some credibility to remain. Otherwise by this time next year (should 2008 be another flat or cooler year) these small ripples of doubt will have become an unstoppable wave of disbelief and apathy.

    [Response: Pointing out that short term values are not 'meaningful' is being 'honest'. And your suggestion to the contrary is not appreciated. - gavin]

    Comment by Imran — 13 Jan 2008 @ 11:15 AM

  135. Walt, don’t confuse sea surface temperature with the temperature of the oceans overall. The ocean can be absorbing heat and still show bigger areas of surface cold water happening. You might want to look up the research behind this recent science news story, for example, to understand how apparently paradoxical reports can make sense:

    Comment by Hank Roberts — 13 Jan 2008 @ 11:26 AM

  136. Walt Bennett> Hadley’s own graph shows that the ocean tends to warm when land warms. Why in the last two years has that not been the case?

    Their graph seems to show 0.5C more land warming than ocean warming since 1975. I do not think all of that can be attributed to thermal lag. Some speculation:

    Recent land pollution (more soot, black carbon relative to sulfates) causes a net addition to warming.

    Ocean measurements are more accurate (no UHI or micro-climate effect warm biases).

    Comment by Steve Reynolds — 13 Jan 2008 @ 11:44 AM

  137. Walt, there’s a somewhat better article, here:

    Alphagalileo is a good science news service, and does us the courtesy of citing the publication — any science news source should. Sciencedaily didn’t.

    Comment by Hank Roberts — 13 Jan 2008 @ 12:13 PM

  138. Why, I wonder, has no-one quoted the results of F-testing a linear regression of temperature changes against time?

    The F-test compares the residual variation about the regression line with the variation about the overall mean. In other words, it compares the variation “explained” by the regression with the residual variation about the line. The calculated significance takes into account the number of observations, so we can see if any calculated trend differs significantly from zero.

    To be convincing, we need both a trend greater than zero, and a probability less than, say, one chance in 20 that the trend arises by chance.

    For example, using the UAH monthly data, from December 1978 to November 2007, the regression line increase is 1.43 degrees centigrade per century, and relative to the monthly variations there is no doubt that the increase is significant.

    But, a big but, the starting point is 1978 (happily about the same as Hansen A, B, and C for model comparisons), after a fall in temperature from the previous peak in the forties.

    From the same 1978 start point, how far forward must we go before we find a significant increase? The answer is that from 1978 to 1995 there was no significant increase in global temperatures, and the (not significant) trend was only 0.38 degrees centigrade per century. It was the sharp increase over the next few years that substantiated Hansen’s original forecast.

    To see if “global warming has stopped”, we can work backwards from today, and see how far back we must go to detect a significant increase. The answer, not surprisingly is before the peak in 1998. From July 1997 to 2007 there has been no significant increase in global temperature.

    Take those two results together and you can see that it would be difficult to make a convincing case for global warming without the 0.5 degree increase in temperatures from 1996 to 2001.

    [Response: What 0.5 deg rise between 1996 and 2001? - gavin]

    Comment by Fred Staples — 13 Jan 2008 @ 12:34 PM

  139. Re (120), last week we had the first sunspot of opposing polarity, which means the next 11-year cycle has officially begun. By definition we are at a minimum of the cycle.

    (132) I’d take anything heard on Nat. Geo. or the Science Channel with a substantial grain of salt. So many of their statements are obviously based on things misunderstood from scientists that their credibility has to be considered low.

    Comment by Thomas — 13 Jan 2008 @ 1:15 PM

  140. This is one of those times when I feel like the kid who just doesn’t get it, and nobody (except you, Hank) considers it worth the time to straighten the kid out.

    Gavin? Stefan? Anybody? Am I insane or has GISS shown continued warming where Hadley shows cooling, and is this not directly related to SST measurements? How can those measurements be so radically different, simply based on differences in approaches which are “defensible”?

    We aren’t talking about forecasts; these are the actual conditions we are talking about.

    You can’t both be right, and it sounds weak to me to hear it chalked up to “different approaches”.

    You gonna keep ignoring the (47 year old) kid? Or at least tell me why I’m wasting time pursuing this inquiry?

    [Response: If you want to investigate further, compare the spatial patterns. I haven't done it yet for 2007, but I'll guess that the biggest anomalies will be over the Arctic Ocean. The sea ice retreat in the summer certainly lends support to the GISS analysis, but the Arctic Buoy Program would be the best check. - gavin]

    Comment by Walt Bennett — 13 Jan 2008 @ 2:13 PM

  141. #105

    ISCCP confirms your assumption, the post 2000 offset is ‘not real’. Whether this means that ‘probably exaggerated’ on their page should be changed to ‘totally wrong’ or something else I don’t know, but then why put the curves on the web at all.

    Are there any reliable measurements of the net TOA somewhere else? And I don’t mean albedo.

    [Response: Thanks. Actually, the CERES data is the best bet. - gavin]

    Comment by lgl — 13 Jan 2008 @ 2:13 PM

  142. Fred,
    > using the UAH monthly data

    Point to the data set so people know what you’re talking about please.
    That could mean a lot of different altitudes and instruments.

    > we can work backwards from today, and see how far back we must
    > go to detect a significant increase

    Cite please for this idea of how to apply statistics? I don’t believe it’s correct, though it was thirty years ago I had my statistics classes; at that point in time, the rule was you had to decide on your test _before_ looking at the data, not look at the data until you found a set that when tested gave you the desired answer.
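Hank’s rule (decide on the test before looking at the data) can be illustrated with a small Monte Carlo sketch on pure trendless noise; every number here is an illustrative assumption. A single pre-registered trend test fires at roughly its nominal 5% rate, while keeping the “best” of many post-hoc start dates fires far more often. The same selection effect works in reverse when hunting for windows that are not significant.

```python
import math
import random

def trend_p(y):
    """Two-sided p-value for an OLS trend in y (normal approximation)."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    slope = sum((i - xbar) * (v - ybar) for i, v in enumerate(y)) / sxx
    rss = sum((v - ybar - slope * (i - xbar)) ** 2 for i, v in enumerate(y))
    se = math.sqrt(rss / (n - 2) / sxx)
    return math.erfc(abs(slope / se) / math.sqrt(2))

random.seed(1)
trials = 200
fixed_hits = 0    # one pre-registered test on the full 120-month series
cherry_hits = 0   # "significant" after shopping among 16 start dates
for _ in range(trials):
    noise = [random.gauss(0, 1) for _ in range(120)]   # no trend at all
    if trend_p(noise) < 0.05:
        fixed_hits += 1
    if any(trend_p(noise[s:]) < 0.05 for s in range(0, 96, 6)):
        cherry_hits += 1
```

Even though no series has any trend, shopping over start dates finds “significant” warming (or cooling) in far more than 5% of the trials.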

    Comment by Hank Roberts — 13 Jan 2008 @ 2:18 PM

  143. Whoah, Fred, are you basically saying that if you take the 1998 peak out of this graph, what’s left is no warming? Recipe for fudge.

    Was it you who mentioned at Tamino’s site the idea of using software that digitizes from images to derive data points? Don’t do that with charts based on data that you can download from original sources.

    Comment by Hank Roberts — 13 Jan 2008 @ 2:29 PM

  144. Gavin, et al — What observations would falsify your understanding of global climate change?

    Comment by Donald E. Flood — 13 Jan 2008 @ 3:34 PM

  145. I’m sorry, Gavin, my last comment in 138 was careless.

    The 1978/1995 and the 1997/2007 regression lines are not significantly different from horizontal. Relative to the zero line the two averages are -0.035 and +0.247, so the best estimate of the step change between the two periods is 0.28 degrees C. Without that change, it would be impossible to make a convincing case for AGW.

    In mitigation, I had left the statistics and I was looking at the UAH chart.

    It is striking that the temperatures climb to the 1998 peak immediately after my first period, and promptly fall all the way back to the 1996 level by the end of 1999. Over the next two years the temperature rises again to current levels, and the increase (1996 to 2001) is about 0.5 degrees. Without that step change etc.

    Comment by Fred Staples — 13 Jan 2008 @ 3:34 PM

  146. It seems as if there are some problems reconciling different data sets and measurement methods. My statistical knowledge is not enough to be able to slam SD and p values in the head of other debaters. However, can these problems be taken as proof of the “end of AGW” if other parameters point to a continued warming? Moreover, are there regional data that show clear trends? I would assume that the Swedish temperature data, dating back to 1720, would be of value. Graphs and data sets 1720-2005 are found at
    In my view, these data seem to point to a clear trend for the last decade. And this is not including 2006 and 2007, where 2007 had mean temperatures 1.5-2 °C over the mean. Furthermore, ice and snow data for Sweden point in the same direction. Thus, if AGW had stopped, why has the temperature for the last month and a half been 5-6 °C over the mean?

    Comment by Bo Norrman — 13 Jan 2008 @ 7:09 PM

  147. #134
    Gavin – I’m not trying to imply anyone is being dishonest. All I’m stating is that, from the general public’s point of view, it is becoming increasingly difficult to believe the alarmism when the projections (IPCC 2001 report) are now clearly shown to have been overestimated in the short term. It doesn’t matter what the reason is, e.g. short-term weather variability – as you yourself say.

    [Response: Your statement is wrong on two counts: a) the data is in line with projections, and b) the projections are not alarmist. They may be alarming but there is nothing projected for the near future that relies on anything other than very dull conservative assumptions. -gavin ]

    [Response: To replay the simple analogy I have used above: imagine a spell of cold weather arrives in April, and for a week or even two temperatures are dropping or at least not rising. Nothing unusual of course, just normal weather. Would you then say: "From the general public's point of view, it becomes increasingly difficult to believe that it is spring-time when the expectation of warming temperatures is clearly shown to have been wrong in the short term"? Of course not, because the public can tell the difference between short-term weather and the seasonal cycle. The difference between interannual variability and long-term greenhouse warming is exactly the same. Just less familiar to the general public. -stefan]

    Comment by Imran — 13 Jan 2008 @ 7:23 PM

  148. Somewhat related to this subject, I am looking for input into my presentation covering the claims of skeptics (factual and logical critiques preferable to “your narration sucks.”) One section is dedicated to the temperature record. It includes such topics as:
    1) UHI corrections
    2) The significance (or not) of the GISTEMP 2000-2007 flaw for the US lower 48 states.
    3) A comparison between instrument and satellite measurements.
    4) The claim that “Global Warming stopped in 1998.”
    5) The difference between NASA’s analysis and the others.
    6) And the observation that “if the world really is warming, why is it cold outside.”

    This section can be viewed here:

    Quicktime 7+ is required. It is about 17 minutes and has been greatly expanded since the last time I solicited input.

    Comment by cce — 13 Jan 2008 @ 8:18 PM

  149. Re #134

    “I think what the original article was tapping into is the general public’s observation that there has been no meaningful global temperature rise in the last decade.”

    I would suggest that the Northern hemisphere trends refute that argument.

    Comment by Phil. Felton — 13 Jan 2008 @ 8:29 PM

  150. Hank #68

    I was requesting, not demanding, as I would not have been upset or complained if I did not receive a response.

    Fortunately I did, from cce #79. Thank you! That is exactly what I was looking for. My interpretation? The layman Joe Q. Public says: Looks like we are warmer than average according to all sources, but only GISS shows a continued warming trend for this century. (The other sources are flat for the 21st century.)

    Please, I know 7-8 years does not tell us anything yet. But the next 3-5 years will be interesting to watch.

    Thank you all, I will now return to my reading only part until I have another question.

    Play nice, the public is watching ;-)

    Jon P

    Comment by Jon Pemberton — 13 Jan 2008 @ 8:51 PM

  151. Along the lines of #144 above: what testable hypotheses are at the foundations of AGW with respect to the theoretical basis, mathematical models, numerical solution methods, and validation of applications to the analyses of interest? With respect to the latter, those applications whose results might impact the health and safety of the public, via changes in public policy on energy generation and consumption, are of special interest.


    Comment by Dan Hughes — 13 Jan 2008 @ 9:00 PM

  152. Dan, looking at your ‘auditblogs’ page, I’d suggest the place to begin is the assumption that “foundations” exist for scientific work. Literally, that means some original work that, if shaken, will cause later work to collapse.

    Are you looking for something like the assumption that the Earth was the center of the universe, which when removed left the whole complicated structure of epicycles up in the air?

    On your website you are questioning the Navier-Stokes equation — that’s a description, not a foundation. Like Newton’s gravity, another description that worked well enough for a long time and is still sufficient for construction work if not for rocket science.

    This sort of extension and improvement is common, the early work is usually superseded in science, e.g.:

    [PDF] An Exact Mapping from Navier-Stokes Equation to Schrodinger Equation via Riccati Equation
    V Christianto, F Smarandache

    Are you really looking for a “foundation” that can’t be shaken without changing everything we know now? It seems improbable to me.

    Comment by Hank Roberts — 13 Jan 2008 @ 10:50 PM

  153. #124 Bryan S says: A steak dinner at your favorite steakhouse if the OHC gain for 2004, 2005, 2006, and 2007 turns out to be more than “statistically insignificant” in any refereed paper on the issue.

    Given the expected year-to-year natural variability, inaccuracies in measurements, and sampling errors, it seems plausible that published OHC gains that exactly matched the projections would nevertheless be “statistically insignificant”. If so, the bet is meaningless and we’re back to the original topic of this post.

    It seems a common misconception in general that observed trends must either be insignificant or in agreement with projections. In the short term, they can (and usually will) be both.
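The arithmetic behind John N-G’s point is easy to check. A sketch, assuming a decade of monthly anomalies with independent month-to-month scatter of about 0.2 deg C (an assumption; real series are autocorrelated, which widens the interval further): the 95% confidence interval on the decadal trend is wide enough to contain both zero and a projected ~0.2 deg C/decade simultaneously.

```python
import math

n = 120                        # months in a decade
sigma = 0.2                    # deg C month-to-month scatter (assumed independent)
sxx = n * (n**2 - 1) / 12      # sum of squared deviations of 0..n-1 about their mean
se_per_month = sigma / math.sqrt(sxx)   # standard error of the monthly slope
se_per_decade = se_per_month * 120      # convert to deg C per decade
half_width = 1.96 * se_per_decade       # 95% confidence half-width
```

The half-width comes out near 0.12 deg C/decade, so an observed trend of, say, 0.1 deg C/decade is statistically indistinguishable from both zero and 0.2 deg C/decade: insignificant and in agreement with projections at the same time.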

    Comment by John N-G — 13 Jan 2008 @ 11:18 PM

  154. “I think what the original article was tapping into is the general public’s observation that there has been no meaningful global temperature rise in the last decade.”

    I would suggest that the Northern hemisphere trends refute that argument

    A fact about the NH cannot refute a statement pertaining to a global average.

    Comment by Fair weather cyclist — 14 Jan 2008 @ 4:36 AM

  155. Re #152 (Hank Roberts) With regard to scientific foundations, I think there are beliefs which could be undermined by empirical evidence, and which are essential supports for all scientific endeavour, but whether any of these beliefs are themselves part of science, I’m not sure. One such belief is that there is no superhuman agency manipulating the evidence we gather. Suppose we were to discover that an alien intelligence far beyond our own had been monitoring us for the last million years, and had in some cases intervened to change the results of experiments? Timothy, as unofficial philosopher-in-residence, what’s your view on this question of foundations?

    Comment by Nick Gotts — 14 Jan 2008 @ 5:41 AM

  156. Walt Bennett writes:

    [[The point, if there was one, to my question was, is it possible that we are getting it wrong with regard to SST cooling? Hadley’s own graph shows that the ocean tends to warm when land warms. Why in the last two years has that not been the case? ]]

    What makes you think two years is a meaningful sample size?

    Comment by Barton Paul Levenson — 14 Jan 2008 @ 6:27 AM

  157. #147 : Gavin, Stefan – thanks for your comments – and I take the good point about alarmist vs. alarming. One observation: your two responses are slightly contradictory. If the data is in line with projections (as per Gavin’s comment), why do we need to invoke the seasonal weather variability analogy (as per Stefan’s comment)? Good analogy, by the way.

    I have made some plots of the HADCRUT global data vs. the IPCC 2001 predictions and I would like to share them with you for your opinion if possible – personally I really struggle to see how the 2001 predictions can be considered anything other than an overestimation in the short term. Have you got an e-mail address I could send them to? Thanks.

    Comment by Imran — 14 Jan 2008 @ 6:41 AM

  158. After losing one post, I painstakingly retyped it, only to have it rejected as spam — with, of course, no way to tell WHAT IN IT constituted the “spam.” This is getting very annoying. There’s no way to tell how many people have quit trying to post here because of this sort of thing.

    Comment by Barton Paul Levenson — 14 Jan 2008 @ 6:52 AM

  159. Besides the prediction that “average global temperature will increase by 0.2 C / decade”, isn’t there any other way to test how good the climate models are at modelling the CO2 effects? Is there any way to measure, for example, how atmospheric radiation changes with yearly changes in the CO2 concentration?

    Comment by Antti — 14 Jan 2008 @ 7:35 AM

  160. All, about the Ocean Heat Content (OHC), please see the following new study:

    van der Swaluw, E., S. S. Drijfhout, and W. Hazeleger, 2007. Bjerknes Compensation at High Northern Latitudes: The Ocean Forcing the Atmosphere. Journal of Climate Vol. 20, No 24, pp. 6023–6032, December 2007, preprint online

    For those of you who don’t know how OHC is estimated, I copy a bit:

    “1. Introduction

    The heat transport from the equator to the poles, through the atmosphere and the ocean, contributes to the maintenance of the quasi-equilibrium heat budget on earth. The total meridional heat flux can be calculated by integrating the observed net radiative fluxes
    (=the difference between the absorbed short-wave minus the emitted long-wave radiative flux) at the top of the atmosphere (TOA). In order to split up the total heat transport into its two components, one generally estimates the atmospheric heat transport from atmospheric observations and attributes the residual to the oceanic heat transport. Most studies use this indirect way of estimating the oceanic heat transport (however, see Wunsch (2005) for a reversed approach), since direct estimates from oceanic observations are sparse….”

    Comment by Timo Hämeranta — 14 Jan 2008 @ 8:02 AM

  161. What a pleasure, Hank (142), to write about a topic that I know something about. Nuclear reactors are (or were in my day) controlled by statistical inference (you can’t measure the temperatures everywhere).

    The F-test is the simplest form of analysis of variance (ANOVA), which is the basis for all statistical testing. Any set of data will have a mean and a variance, the average of the squared differences from the mean.

    If you believe that your data, temperature in this case, is increasing with time, you can substitute a regression line for the average, re-calculate the variance about the line, and see how much of the original variance the line explains. Taking into account the number of observations (relatively few observations need a very tight fit to be significant; relatively many can be more scattered), the F value is the ratio of the “explained” variance to the remaining variance.

    Assuming that the data is normally distributed, the tables of F values give the probability that the line has arisen by chance. To accept the trend line, and consequently to look for a physical explanation, you usually ask for a chance probability of less than 1 in 20 (p less than 0.05). If you want a reference from my bookshelf, Hank, try Chapter 3 of “Using Multivariate Statistics” by Tabachnick and Fidell, or the more descriptive “Introductory Statistics for Business and Economics” by Wonnacott, T. and R., page 484.
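Fred’s F ratio can be written out explicitly. A sketch on a synthetic series (the trend, noise level, and seed are illustrative assumptions, not UAH data) showing the ANOVA decomposition SST = SSR + SSE and the F statistic with 1 and n-2 degrees of freedom; for simple regression, F is just the square of the slope’s t statistic.

```python
import math
import random

random.seed(2)
n = 120
y = [0.002 * i + random.gauss(0, 0.2) for i in range(n)]   # synthetic anomalies

xbar = (n - 1) / 2
ybar = sum(y) / n
sxx = sum((i - xbar) ** 2 for i in range(n))
slope = sum((i - xbar) * (v - ybar) for i, v in enumerate(y)) / sxx

sst = sum((v - ybar) ** 2 for v in y)                      # total sum of squares
sse = sum((v - ybar - slope * (i - xbar)) ** 2 for i, v in enumerate(y))
ssr = sst - sse                                            # "explained" by the line
F = (ssr / 1) / (sse / (n - 2))                            # df = 1 and n - 2
```

A large F (here well above 1) means the fitted line explains far more variance than would be expected by chance, which is the quantity the F tables convert into a probability.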

    The UAH data is from

    Am I taking out the 1998 peak, Hank (143)? Of course not. For a first approach you must use all the data, and you must be aware of two possible errors: trends which appear significant, which are not, and trends which do not appear significant, which are.

    So, to summarise my results:

    Overall, from 1978 until today, the trend line of 1.43 degrees centigrade per century is significant. The probability of that trend arising by chance is infinitesimal.

    But, there was no significant increase from 1978 until December 1995, and there has been no significant increase from July 1997 to date.

    Between the two trend line means, there was a step increase of 0.28 degrees centigrade in just 19 months.

    The questions I put from this analysis are simple. Would the general public, politicians, and journalists accept the AGW argument if that step had not appeared?

    And if that step was crucial, what caused it?

    [Response: Yet the trend from 1978 to today gives a difference of 0.43 deg C. Your claim that a flat line, a short 0.28 jump and then another flat line is a better fit is not true. In fact it simply demonstrates that fitting trends to short, post hoc picked periods is misleading. And that doesn't even deal with the impossibility of coming up with a mechanism for such a strange series of events. This is just nonsense. - gavin]

    Comment by Fred Staples — 14 Jan 2008 @ 8:57 AM

  162. #128
    Walt – a good point indeed. Keep asking the questions about the ocean because this is the key. There are serious problems with the heat transfer (from atmosphere to water) models and there are a few points, which if you analytically think about, will bring you to very different conclusions about whats warming what.
    1) The ocean contains ~1000 times the energy of the atmosphere, and water has a specific heat capacity 4 times greater than air (i.e. it takes 4 times the energy to raise the T of a kg of water by 1K compared to 1 kg of air). How much impact would we expect to see in the oceans from a 1 deg C rise in air temperature? I like to think of the analogy of putting a fan heater in the kitchen and looking for a temperature rise in the centre of the Aga.
    2) Anecdotally we all know that oceans are a major driver of air temperature (Gulf Stream, El Nino, La Nina etc.). Not the other way round.
    3) The sea level rise since 1850 has been steady and relentless (20 cm in the 20th century), mostly put down to ‘thermal expansion’ …. but an analysis of the distribution of water temperature (vertically and latitudinally), and an understanding of the variable thermal coefficient of expansion of water (which is not constant with T, or even linear), will tell you that this thermal expansion is far more than can ever be explained by the observed atmospheric T rise – even if it was assumed to transfer immediately into the water without a time lag.

    Think through this and you will get a different picture of the interface between ocean and air temperature – one which much more elegantly fits the observations of sea and land temperature differences.
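Imran’s first point can be checked with back-of-envelope numbers; the values below are standard rounded textbook figures, not anything from this thread. The ocean’s total heat capacity does come out at roughly a thousand times the atmosphere’s.

```python
# Standard rounded textbook values; all are approximations.
m_atm = 5.1e18        # kg, total mass of the atmosphere
m_ocean = 1.4e21      # kg, total mass of the ocean
cp_air = 1.0e3        # J/(kg K), specific heat of air at constant pressure
cp_water = 4.2e3      # J/(kg K), specific heat of seawater

ratio = (m_ocean * cp_water) / (m_atm * cp_air)   # ocean/atmosphere heat capacity
```

The ratio lands near 1100, consistent with the "~1000 times" figure: warming the whole ocean by the energy that warms the whole atmosphere by 1 K changes the ocean mean by only about a thousandth of a degree.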

    Comment by Imran — 14 Jan 2008 @ 9:26 AM

  163. Very interesting graphs; the blue lines are getting straighter as well, meaning less annual variability and a more and more steady climb to the top, wherever that may be. I’ve been studying an old edition of the Encyclopaedia Britannica (1969) on Greenland and would like to share a few observations I made.

    1: The mean temp of central Greenland in winter is usually -50C to -55C, with the temp occasionally falling to -67C. The temp in the central regions in summer, however, reaches -3C. That was then, in 1969, so now it should be around -1C to 0C.

    2: The majority of snow and rain falls in the southern half. Which means that any loss of ice in the summer through moraines in the northern half would not get replenished at the same rate, due to decreased snow. They mentioned that even in 1969 there were clear signs of glacial retreat in the south-west quadrant.

    But my concerns about sea level rise were slightly allayed by the extreme cold and depth of the vast majority of the ice sheet. Even with worst-case temp predictions it would still take a heck of a long time to melt enough of Greenland’s ice to make a significant change to global sea level. This is extremely hard and relatively homogeneous compacted ice that for most of the year is kept at a frigid -55C; that ain’t gonna melt in a hurry! Even given the speed of the glaciers (in 1969 it was about 1 inch/year), bringing the pack ice down from the higher altitudes, up to 7000 ft, is still going to take a long, long time. I could be wrong, but I’m not sure that in my lifetime I will be witness to much of a rising ocean at all. In regards to the other climatic effects of ACC, I very much believe that they will become more and more obvious in the years coming… just not sea level rise. Yet.

    Comment by Lawrence Coleman — 14 Jan 2008 @ 9:29 AM

  164. Re: #156,


    You too miss my point, if there was one: These are *measurements* we are talking about, not *projections*.

    One day is enough of a sample size if you are two different organizations measuring the same thing.

    Comment by Walt Bennett — 14 Jan 2008 @ 9:48 AM

  165. Dr. Nielsen-Gammon, Howdy from a fellow Texican. In my comment, I was not referencing the OHC to any projection, only saying that according to what I am hearing, there has been no statistically significant gain of heat into the climate system over the last 4 years. I think the change in Joules will likely be very small. Obviously, this finding is not in a refereed journal yet, so we will have to wait for confirmation. Thus I put forth a friendly wager. I take it you have learned not to bet on Aggie football games!

    It seems to me that the system heat content changes (not weather) over even annual time scales are very interesting, since they are a direct metric for the TOA radiative imbalance over that same period. If the equilibrated sum of radiative forcings + feedbacks can cause a negative TOA radiative imbalance for even short annual periods of time, this would seem of fundamental importance for understanding whether or not this observed variability is being properly handled in the models. No?

    I will also make a suggestion, then ask a question to you and Gavin concerning ENSO and its effect on global annual ocean heat content changes. I *think* (very dangerous) that Gavin’s statement that ENSO significantly affects the heat content of the oceans is inadequate at best. Certainly, in a direct way, the heat needed to raise the temperature of the entire atmospheric volume a significant amount is not even a drop in the bucket in terms of the magnitude of most annual variability in ocean heat content that is represented in the time series graphs of OHCA (Joules). This is because of the insignificantly small heat storage capacity of the atmosphere compared to the upper ocean. For the entire averaged atmospheric volume, I understand there is no significant long-term trend in heat content increase (i.e. warming troposphere vs. cooling stratosphere). I do however understand why the changing temperature of the troposphere due to ENSO will have an effect on the way radiation is processed (i.e. latent and sensible heat fluxes from the ocean surface + short-term feedbacks), but if this had a significant effect on the total heat content of the ocean, wouldn’t we expect to be able to correlate the OHCA time series with ENSO? Just eyeballing, I see not even a clue of ENSO. It would seem to me that the TOA radiative imbalance is approximately equal to the summation of all the equilibrated radiative forcings + feedbacks taking place below (really THE SUM OF ALL THE WEATHER PROCESSES), and all these are approximately equal to the changes in ocean heat content. This is why such a metric is so important: it cuts through a bunch of complex processes and sums them all up. When I first read Roger Pielke Sr.’s paper on this subject, I thought it was bull-ony, but the more I have read, the more it makes sense to me.
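The link Bryan draws between OHC changes and the TOA imbalance is essentially a unit conversion. A sketch with an illustrative imbalance of 0.85 W/m^2 (the order of published estimates; the exact value is an assumption here), using rounded textbook figures for the Earth’s area and the ocean’s mass:

```python
# Illustrative global-mean TOA imbalance; the exact value is an assumption
# (published estimates are of this order).
imbalance = 0.85              # W/m^2
earth_area = 5.1e14           # m^2, surface area of the Earth
seconds_per_year = 3.156e7

joules_per_year = imbalance * earth_area * seconds_per_year

# If essentially all of that heat ends up in the ocean:
ocean_mass = 1.4e21           # kg
cp_seawater = 4.0e3           # J/(kg K), approximate
dT_per_year = joules_per_year / (ocean_mass * cp_seawater)   # K per year
```

The imbalance corresponds to roughly 1.4e22 J/yr, or only a couple of thousandths of a degree per year when spread through the whole ocean, which is why detecting it against sampling noise over a handful of years is so hard.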

    Comment by Bryan S — 14 Jan 2008 @ 9:49 AM

  166. Walt Bennett posts:

    [[You too miss my point, if there was one: These are *measurements* we are talking about, not *projections*.

    One day is enough of a sample size if you are two different organizations measuring the same thing.]]

    I got your point. I just think your point is wrong. A sample size of two isn’t enough no matter how careful the measurements, especially in a case like climate where so many different factors are involved. The oceans can still be warming on a long-term trend even if they seem to be cooling for a couple of years.

    Comment by Barton Paul Levenson — 14 Jan 2008 @ 9:59 AM

  167. Re: #144,

    Donald wrote “What observations would falsify your understanding of global climate change?”


    Having been coming here for going on two years now as a layman seeking information, I can assure you that the makers of this board are fully invested in AGW theory. There is no serious doubt in their minds that rising CO2 leads, inevitably, to higher temperatures, which in turn will lead to various changes in the climate system. These changes include some places being wetter, some being drier, and most of the planet being much warmer, which will affect native plant and animal species.

    However, your question is craftily worded. The “understanding” of AGW is a slippery toad. I for one have come to strongly suspect that models, and thus projections, underestimate the effects of acceleration, both in terms of rising temperature and the melting of previously permanent ice.

    I believe it is fair to say that our understanding of AGW, both for scientist and layman, are still developing.

    IPY (International Polar Year) is underway and will yield not only new information about changes in the status of the coldest parts of the planet, but will install permanent capability to continue the monitoring.

    A brand new study shows that west Antarctica is losing ice, causing the continent as a whole to lose ice, and that the rate is accelerating. These sorts of findings are above and beyond what models have so far been capable of predicting, and in fact IPCC AR4 simply punted when it came to projecting how much ice will be lost from Antarctica and Greenland in the 21st century.

    That’s an amazing omission, especially considering that it will take far less than a century to feel the effects of the melt.

    I have read that IPCC now wants to turn its attention to these changes and produce a new report.

    So, what you are seeing is science trying to keep up with what the planet is telling us. Our “understanding” is clearly evolving.

    Comment by Walt Bennett — 14 Jan 2008 @ 10:22 AM

  168. Re: #167,

    If you are comfortable with that “analysis”, so be it.

    I am looking at NASA and Hadley over two years, one of which says the oceans are warming and the other of which says the oceans are cooling.

    All I have been seeking is any sort of explanation whereby both results could be considered “valid”.

    And to think I am an AGWer, and cannot get a straight answer to a simple question. It makes me understand why skeptics get so frustrated. Are we so defensive slash parochial that we cannot take a step back and try to make sense of confounding, conflicting studies?

    I appreciate Gavin’s suggestion to check out the spatial patterns. I’d be even more grateful if climate scientists would do it. Shouldn’t such disparities at least pique their interest?

    Comment by Walt Bennett — 14 Jan 2008 @ 10:30 AM

  169. re#152

    I did not ask about the foundations of science. I asked about the ‘foundations of AGW’. Plus, if you are referring to this post, there is nothing in it that questions the Navier-Stokes equations. It is a question about very specialized applications of those equations to situations for which they were not derived. Have you, BTW, counted the number of unknowns and the number of equations at an interface, and determined under what conditions a well-posed problem is set?

    Your response has become so typical here at RC and several other so-called science blogs. Attempts at diversion from the questions asked, along with presumptions of motive, are just about all that many people asking for information can expect. And most importantly, not a single word devoted to providing the information asked for.

    Testable hypotheses are very significant aspects of all of science, engineering, and all technical issues in general. How about listing a couple and opening them up for open discussion? Otherwise all I can do is assume that there aren’t any. That leads to the single conclusion that AGW is not science based.

    [Response: Oh please. Possibly you might want to think about the perceptions people have when the fact that we don't respond to every ill-posed question is immediately interpreted as proof that the AGW is not science. Playing 'gotcha' with this kind of trick is tiresome. If you want answers to questions, then ask something specific. 'AGW' is not a fundamental theory in and of itself, it is the conclusion of many different lines of evidence and basic physics. There aren't going to be any major revisions of HITRAN or radiative transfer theory that will make any difference to forcing related to GHG increases, but there's plenty of uncertainty in cloud feedbacks or aerosol-cloud interactions or the impact of unresolved ocean dynamics when it comes to the climate response. Come up with a specific hypothesis that you think we should be testing and we can discuss. - gavin]

    Comment by Dan Hughes — 14 Jan 2008 @ 10:57 AM

  170. No, Gavin (161), I am not claiming that a three-part line is a better fit. That would be data-mining.

    I have analysed only two sets of data which I have assumed to be independent.

    The first, from 1978 to 1995 is typical of the long-term temperature record; it is variable but flat. It has a mean value and a variance, and will serve as a base-line.

    It is perfectly legitimate to take any two sets of data and to test whether their means are significantly different, taking into account their variances and their numbers of observations. I wish to test data from July 1997 to date, to see if it is significantly different from the base-line. The key word is “significantly”.

    The easiest way to do this is to T-test the difference in the means. We are asking if the variance about the separate means is significantly different from the variance about their common mean. It is a legitimate question which we could ask about any two data sets in the record – which, in the UK goes back to 1684.

    The answer is that the variances of the two samples of data are similar, the difference between their means is 0.28 degrees centigrade, and the probability of that difference arising by chance is infinitesimal.

    The interval, in time, between the two sets of data could have been anything we chose, but it was, in fact, just 19 months.

    So it is a fair question, not nonsense, to ask what caused that change in temperature. If the two data sets had been twenty years apart, you might reply that it was the CO2 concentration. But 19 months?

    Comment by Fred Staples — 14 Jan 2008 @ 11:01 AM
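Fred's two-sample procedure (#170) can be sketched in a few lines. The series below are synthetic stand-ins for his two periods (a flat baseline and a later sample offset by ~0.28 °C), not the actual temperature record, and Welch's version of the t-test is used since it does not assume equal variances:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two periods: a flat baseline and a
# later sample offset by ~0.28 C.  Illustrative monthly anomalies,
# not real data.
baseline = rng.normal(loc=0.00, scale=0.15, size=18 * 12)
later    = rng.normal(loc=0.28, scale=0.15, size=10 * 12)

# Welch's t-test: are the two means significantly different, given
# their variances and numbers of observations?
t_stat, p_value = stats.ttest_ind(baseline, later, equal_var=False)
print(f"difference in means: {later.mean() - baseline.mean():+.2f} C")
print(f"p-value: {p_value:.2e}")
```

One caveat that several responses in this thread turn on: the t-test assumes independent observations, while monthly temperature anomalies are strongly autocorrelated, so the effective sample size is much smaller and the significance correspondingly weaker than the raw p-value suggests.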

  171. Walt, first, please cite your sources. I recognize the Antarctic melt as just mentioned in EurekaAlert; others may not though.

    Second, you’re making a huge overbroad statement. Yes, you’re looking at two numbers and saying, lo, they differ. And you’re leaping to the conclusion that each number represents the entire world ocean, and thus thinking the agencies behind the numbers must mean that, lo, the entire world ocean is described by the one number from that agency, and OMG, the single number from one doesn’t match the single number from the other, so one has to be wrong.

    Look at the sources for each agency’s work. They pull data from different devices in different locations in different ways and use different models with different assumptions to handle them.

    There are dozens if not hundreds of different agencies and climate models, and a vast number of sources of information each with a huge background of knowledge about variability and reliability over time.

    This is, after all, the world we’re trying to describe. You recall the blind men and the elephant fable? Suppose the blind men were living ON the elephant and trying to describe it ….

    Both results are valid because each number you’re looking at is merely a summary representation, for the press, of a huge amount of information that has very fine-grained detail behind it.

    Look at the details — which are published, and easily available. Look at the maps from each agency of temperatures.

    Really, this is becoming silly.

    Comment by Hank Roberts — 14 Jan 2008 @ 11:10 AM

  172. re: Gavin at #169

    Gavin, I was addressing Hank’s response. RC has yet to respond, other than to prove once again exactly what I said: ‘ … along with presumptions of motive …’. It was not intended to be a ‘trick question’. May I ask exactly which aspect of the question made it into a trick question having some hidden motive?

    Your recent contributions here are also very enlightening.

    Comment by Dan Hughes — 14 Jan 2008 @ 11:28 AM

  173. Re: #171,


    if it’s becoming so silly, then disengage.

    So you are comparing Hadley and NASA-GISS to two blind men trying to describe an elephant.

    That’ll give the skeptics comfort.


    Why are the results so starkly different, and why is it considered unimportant to know that answer?

    Comment by Walt Bennett — 14 Jan 2008 @ 11:59 AM

  174. Ref 173. Could I add my endorsement of what Walt has written, namely “Why are the results so starkly different, and why is it considered unimportant to know that answer?” This particular discussion has skirted around this issue from the beginning, carefully never discussing this real issue, and never providing any sort of an answer. And if the analogy of the two blind men is valid, where can I find a description of what the elephant actually looks like?

    [Response: The results aren't 'starkly different' no matter how many times someone says they are. The differences there are, as has been stated many times, are mainly related to treatment of the Arctic. Look at the spatial results and see for yourself. - gavin]

    Comment by Jim Cripwell — 14 Jan 2008 @ 1:59 PM

  175. Fred, Pielke quotes McKittrick as writing about
    “… 2 flat intervals interrupted by step-like changes associated with big volcanoes….”

    Any relation?

    Comment by Hank Roberts — 14 Jan 2008 @ 2:06 PM

  176. Walt, I’m pointing out that the fable is about everyone:

    Comment by Hank Roberts — 14 Jan 2008 @ 2:39 PM

  177. There has been quite a bit of discussion concerning ocean temperatures. Are people referring to the 2006 paper by Lyman et al. which shows that oceans are cooling? If so, they may want to check out the correction published in 2007 by Lyman et al. (Correction to “Recent Cooling of the Upper Ocean”).

    Here is a quote from the correction:

    “Although Lyman et al. [2006] carefully estimated sampling errors, they did not investigate potential biases among different instrument types. One such bias has been identified in a subset of Argo float profiles.
    This error will ultimately be corrected. However, until corrections have been made these data can be easily excluded from OHCA estimates (see htttp:// for more details). Another bias was caused by eXpendable BathyThermograph (XBT) data that are systematically warm compared to other instruments [Gouretski and Koltermann, 2007]. Both biases appear to have contributed equally to the spurious cooling”.

    Is this the reason for conflicting data?

    The correction can be found at:

    Comment by Ian Forrester — 14 Jan 2008 @ 3:07 PM

  178. Walt Bennett posts:

    [[Why are the results so starkly different, and why is it considered unimportant to know that answer?]]

    Because they are only “starkly different” in your mind. If they go on being starkly different for 30 years, you’d have a case. As is, there’s nothing much to investigate.

    Comment by Barton Paul Levenson — 14 Jan 2008 @ 3:29 PM

  179. #177

    But isn’t there still a cooling, after ‘excluding profiling floats (gray line)’, page 10?

    Comment by lgl — 14 Jan 2008 @ 3:52 PM

  180. [The red line is the annual global-mean GISTEMP temperature record (though any other data set would do just as well), while the blue lines are 8-year trend lines - one for each 8-year period of data in the graph. What it shows is exactly what anyone should expect: the trends over such short periods are variable; sometimes small, sometimes large, sometimes negative - depending on which year you start with.]

    Since a 30-year period is considered the “standard” reporting period, show 30-year trend data for each 30-year period in the same chart (GISS).

    Comment by henry — 14 Jan 2008 @ 4:04 PM
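The contrast henry asks about, between short and 30-year windows, can be sketched on a synthetic series (an assumed 0.02 °C/yr trend plus white noise, not the GISTEMP data itself):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic annual anomalies: a 0.02 C/yr trend plus weather noise
# (illustrative numbers, not the GISTEMP record).
years = np.arange(1950, 2008)
temps = 0.02 * (years - years[0]) + rng.normal(0, 0.1, years.size)

def window_trends(y, t, length):
    """Least-squares slope (C/yr) for every `length`-year window."""
    return np.array([np.polyfit(t[i:i + length], y[i:i + length], 1)[0]
                     for i in range(t.size - length + 1)])

short = window_trends(temps, years, 8)
long_ = window_trends(temps, years, 30)

# Short windows scatter widely; 30-year windows cluster tightly
# around the true 0.02 C/yr slope.
print(f"8-yr trends:  min {short.min():+.3f}, max {short.max():+.3f}")
print(f"30-yr trends: min {long_.min():+.3f}, max {long_.max():+.3f}")
```

The 30-year windows would make a much less exciting chart, which is rather the point of the original post.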

  181. #170

    You should choose 1983 as your starting point instead. All warming at lower latitudes between 1950 and 1983 is probably a result of ENSO, mostly the shift in 1977.
    There is almost no warming between 1983 and 1997.

    Comment by lgl — 14 Jan 2008 @ 4:13 PM

  182. ‘New Statesman’ magazine has published a rebuttal to the article by David Whitehouse that is mentioned at the start of this RC post:

    It cites the RC post – well done on helping combat this outbreak of dodgy statistical analysis!

    Comment by Bob Ward — 14 Jan 2008 @ 4:23 PM

  183. #170

    Comment by lgl — 14 Jan 2008 @ 4:35 PM

  184. In 174 Gavin writes “The results aren’t ’starkly different’ no matter how many times someone says they are.” Fair enough. Then why doesn’t your presentation show the same sort of graphs calculated for all the different time/temperature data sets (HAD/CRU, RSS/MSU and NCDC/NOAA)? It is surely a trivial matter to repeat the calculations and produce 4 graphs instead of only one. Why select NASA/GISS in the first place, when so many people believe it is different from the other three? Why not use RSS/MSU? Incidentally, I have done my own calculations, and I challenge the idea that the “results are not starkly different”. They are, indeed, starkly different. And it is easy to prove me wrong by simply doing the calculations with all data sets, as I have suggested, and showing the results.

    Comment by Jim Cripwell — 14 Jan 2008 @ 4:40 PM

  185. I don’t mean to be short, Jim Cripwell, but if the results are so starkly different and you know this from your own calculations, why don’t you prove the climate scientists wrong by sharing your calculations? It’s an open forum, but why ask them to do all the work? If you have evidence they are overlooking, show it; prove it.

    Comment by Figen Mekik — 14 Jan 2008 @ 5:00 PM

  186. Jim – why would you use RSS/MSU? Can you state your reasons why you would prefer that over the surface temperature record?

    Fred Staples- I am not a statistician, but I can almost make sense of your words, they seem quite clear. However, I cannot quite get my head round what you are claiming. Are you saying that there was a 0.28C jump in temperature between some time in 1995 and 1997?

    Comment by guthrie — 14 Jan 2008 @ 5:12 PM

  187. Walt Bennett- Have you thought of asking Hadley themselves? I’m sure they would be helpful, just don’t expect an answer in 48hrs.

    Comment by guthrie — 14 Jan 2008 @ 5:19 PM

  188. #165 Bryan S, you miss my point. If I believe the IPCC projections (and I do), and if I believe that the changes in OHC consistent with those projections are too small to rise above the statistical uncertainty caused by measurement constraints and the like over the next few years (and I do), why would I bet otherwise? It’s much more lucrative to bet against the prevailing mood of Aggie supporters.

    I agree with RP Sr. that total heat content is the ideal metric, and I agree with Gavin that scientists haven’t yet demonstrated the ability to measure it sufficiently accurately to distinguish among projections.

    Comment by John Nielsen-Gammon — 14 Jan 2008 @ 5:31 PM

  189. Gavin, Johnson et al. (2007) observed the change in the global ocean heat integral between 2005 and 2006 and concluded this quantity was equivalent to a net surface flux of 80 W/m2. By looking at some time series graphs (Lyman, 2006), it seems likely that this is dwarfed by changes observed in some previous years. Such a magnitude of variability must be driven by the sum total of all the atmospheric and oceanic processes plus any changes in incoming shortwave reaching the TOA. It would seem important to check model output against these observations.

    Q: What is the magnitude (an average ballpark number) of annual variability (in W/m2) of system heat content changes from AOGCM output? Or another way: is the magnitude of interannual variability in models close to that observed from past changes in OHC? Thanks.

    [Response: You might want to look at that paper again. The idea that the net annual mean surface flux into the ocean could be anything like 80 W/m2 is so far out that I have to assume you've misinterpreted something. The variations from year to year in net heat flux into the ocean will be on the order of a few tenths of a W/m2 - with maybe some multi-W/m2 peaks related to volcanoes or extreme tropical variability. - gavin]

    Comment by Bryan S — 14 Jan 2008 @ 5:52 PM

  190. Frank,
    How many data points did you need to characterize a reactor system? Try scaling that up and calculate how many points you will need to characterize a climate system’s behavior. Nineteen months is weather. For statistical purposes, it is random, chaotic, and a whopping big population to sample. Your sampling approach does not recognize the diverse time scales of the forcings acting on the subject population. Poor sampling can result in poor results from otherwise correct statistical methods.

    It is easy to prove that we have a different climate than we had when I was a senior scientist at “Hanford.” However, it is in the nature of the system that we can’t demonstrate statistically that recent weather is ongoing climate change. We can show it in other ways.

    For recent climate change, I rely on the flowering plants in my yard. They integrate climate change better than the NWS. I might question NWS data, but I never argue with my apple tree. My apple tree says things are getting warmer year by year, and the native daffodils nod in agreement. (According to John Muir’s journal, those daffodils should wait until March to bloom.) That tells me that today my soil is warmer than it was 8 or 5 years ago when I planted the beds. Soil temperature is an integrating measurement. It integrates highs, lows, means, hours of sun, cloud cover, radiation – everything. It is confirmed by my hyacinths, freesia, and tulips sprouting 2 months earlier over the last 4 years. These bulbs integrate soil temperature. Not just a probe reading now and then, but a true integration across nights and days for weeks on end.

    I can look out my window and see 20 examples of unusual plant behavior caused by the unusually warm weather that we have had for the last few years. The only thing that gives me hope for our nearby commercial growers is the honeybees dancing from blossom to blossom. Honeybees in January? There were no honeybees around here out collecting nectar 5 years ago or 10 or 50 years ago. I have talked to the old beekeepers in the area.

    Now, the plants and bees are responding to small changes in the weather. When I run statistics on the local weather data, the changes come up as statistically insignificant. For example, this year is a bit colder than some recent years, and yet more and more of my plants are blooming earlier. It is easier to see the problem by looking at the plants than by looking at the National Weather Service or chill hour data. My point is that the weather and climate data that we collect may not reflect the stress that global warming causes on ecosystems and agriculture. See for example:

    There is some background at :

    Comment by Aaron Lewis — 14 Jan 2008 @ 6:44 PM

  191. Meanwhile, in Antarctica:

    Escalating Ice Loss Found in Antarctica
    Sheets Melting in an Area Once Thought to Be Unaffected by Global Warming
    By Marc Kaufman
    Washington Post Staff Writer
    Monday, January 14, 2008; Page A01

    Climatic changes appear to be destabilizing vast ice sheets of western Antarctica that had previously seemed relatively protected from global warming, researchers reported yesterday, raising the prospect of faster sea-level rise than current estimates.

    While the overall loss is a tiny fraction of the miles-deep ice that covers much of Antarctica, scientists said the new finding is important because the continent holds about 90 percent of Earth’s ice, and until now, large-scale ice loss there had been limited to the peninsula that juts out toward the tip of South America. In addition, researchers found that the rate of ice loss in the affected areas has accelerated over the past 10 years — as it has on most glaciers and ice sheets around the world.

    “Without doubt, Antarctica as a whole is now losing ice yearly, and each year it’s losing more,” said Eric Rignot, lead author of a paper published online in the journal Nature Geoscience.

    The Antarctic ice sheet is shrinking despite land temperatures for the continent remaining essentially unchanged, except for the fast-warming peninsula.

    The cause, Rignot said, may be changes in the flow of the warmer water of the Antarctic Circumpolar Current that circles much of the continent. Because of changed wind patterns and less-well-understood dynamics of the submerged current, its water is coming closer to land in some sectors and melting the edges of glaciers deep underwater.

    “Something must be changing the ocean to trigger such changes,” said Rignot, a senior scientist with NASA’s Jet Propulsion Laboratory. “We believe it is related to global climate forcing.”

    Comment by Jim Galasyn — 14 Jan 2008 @ 6:52 PM

  192. Bryan S, which article? This one?

    Comment by Hank Roberts — 14 Jan 2008 @ 7:38 PM

  193. An obvious example is looking at the temperature anomaly in a single temperature station. The standard deviation in New York City for a monthly mean anomaly is around 2.5ºC, for the annual mean it is around 0.6ºC, while for the global mean anomaly it is around 0.2ºC. So the longer the averaging time-period and the wider the spatial average, the smaller the weather noise and the greater the chance to detect any particular signal.

    Shouldn’t there be an increasing trend in the standard deviations due to global warming causing more extreme weather?

    [Response: I'm not aware of any such demonstrated trend in monthly or annual mean anomalies. When discussing extremes in the context of climate change, one has to be very specific about which extremes you are talking about. General statements about 'more extreme weather' are supported by neither theory nor observation. - gavin]

    Comment by Count Iblis — 14 Jan 2008 @ 8:11 PM
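The averaging effect in the passage Count Iblis quotes can be illustrated with independent synthetic noise. (Real anomalies are spatially correlated, so the actual global-mean sd of ~0.2ºC shrinks far less than the sqrt(N) rate this toy model shows.)

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: 40 years of monthly anomalies at many stations, each
# with station-level monthly noise of sd ~2.5 C, as in the quoted
# New York City example.  Stations here are (unrealistically)
# independent of one another.
n_years, n_stations = 40, 500
monthly = rng.normal(0, 2.5, size=(n_years * 12, n_stations))

one_station_monthly = monthly[:, 0].std()
one_station_annual  = monthly[:, 0].reshape(n_years, 12).mean(axis=1).std()
global_annual       = monthly.reshape(n_years, 12, n_stations).mean(axis=(1, 2)).std()

print(f"monthly, one station: {one_station_monthly:.2f} C")
print(f"annual, one station:  {one_station_annual:.2f} C")   # ~2.5/sqrt(12)
print(f"annual, global mean:  {global_annual:.3f} C")
```

Each widening of the average shrinks the noise, which is exactly why a short-period global trend is still noisy while a multi-decade one is not.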

  194. Gavin, thank you for the updated and new charts. Anyone who’s missed them, look again at the main article and you’ll see the update with new links.

    Comment by Hank Roberts — 14 Jan 2008 @ 8:35 PM

  195. Hi Fred,

    I’m not sure RC will post this, but…

    I’ll summarize what I think you’re saying:

    Your analysis concludes that temperature was approximately stable from 1978 to 1995, then there was a step change of 0.28 deg C from 1995 to 1997, and that from 1997 to 2007, there was no significant increase in temperature. You wonder how this “step change” is compatible with AGW theory.

    I think the reason your analysis seems to show a “step change” is that you include the temperature data following the major volcanoes of El Chichón in 1982 and Mt. Pinatubo in 1991. Mt. Pinatubo was a particularly big eruption.


    Comment by Mark Bahner — 14 Jan 2008 @ 10:01 PM

  196. Gavin, can you tell me what the temperature anomaly for the USA will be for this summer to within .10C please? Also, are you skeptical at all about surface temperature reconstructions that represent the entire globe? What kind of margin of error is there in such reconstructions?

    [Response: 1) No. Seasonal prediction is a) not what I do, and b) not a very mature field. 2) For the instrumental record the sampling errors are on the order of 0.1 deg C in the annual mean. Issues with the network were discussed exhaustively here. - gavin]

    Comment by tom s — 14 Jan 2008 @ 10:37 PM

  197. re: 191

    Yup, really melting down there…any day now she’ll be slippin’ into the sea…(sigh)

    Comment by tom s — 14 Jan 2008 @ 10:38 PM

  198. It appears that the inhabitants of the denialosphere are falling into the same trap as the creationists–trying to find one single devastating observation or experiment that will falsify anthropogenic causation of climate change. Their search is futile for the same reasons as well–support for anthropogenic causation, like that for evolution, does not rest on any single line of evidence, but rather is extremely broadly based. The science behind the theory is well established and understood–and there is no reason why (or evidence to suggest) it should depend on whether CO2 concentrations are 280 ppmv or 560 ppmv. This physics is so interwoven into our understanding of both modern and paleoclimate that if we were to see behavior very different from that expected, it would have to mean:
    1) that there is some sort of effect (e.g. a negative feedback) not in the models (in which case, given the persistence of CO2 in the environment, we’d still have a problem when this petered out); or
    2) that our entire understanding of the climate would have to be scrapped and rebuilt from scratch–and if this were true, it’s unlikely the models would do as good a job as they do.

    There is a lot we don’t understand about climate–but the effect of adding CO2 to the atmosphere doesn’t fall into the class of things we don’t understand.

    Comment by Ray Ladbury — 15 Jan 2008 @ 8:24 AM

  199. Gavin, I think you have misinterpreted what was written. Below, I cut and paste the statement from the Johnson et al. 2007 paper. I am interested if models show this magnitude of annual variability in heat storage.

    The difference in combined OHCA maps between 2006 and 2005(Fig. 3.4) illustrates the large year-to-year variability in ocean heat storage, with changes reaching or exceeding the equivalent of an 80 W m–2 magnitude surface flux.


    Johnson, G. C., J. M. Lyman, and J. K. Willis, 2007, Global Oceans: Heat Content. In State of the Climate in 2006, A. Arguez, Ed., Bulletin of the American Meteorological Society, 88, 6, S31-S33.

    [Response: You're right, I misunderstood. The -80 to 80 W/m2 range is for the local values, and from the figures, the biggest changes look to be related to the switch to El Nino in 2006. This figure is interesting though, because the net heat uptake is the integral of all those points. The net value, which isn't given in that article, will be less than 1 W/m2, and so there is a huge amount of variability that needs to be averaged out. That makes any one year's value rather uncertain, and thus longer term trends are more reliable. - gavin]

    Comment by Bryan S — 15 Jan 2008 @ 8:41 AM

  200. As mentioned in the post, one of the factors adding to the noise in the global temperature record is El Nino. I just had a look at the order of magnitude of its influence on the trends discussed here.
    Since best-fit estimates suggest an alteration of global temperature through ENSO by about 0.1 times the MEI El Nino Index, with a 5-6 month time lag, one can estimate the influence on global temperature trends by looking at the MEI index trends. This shows that, e.g., there is a negative MEI trend corresponding to about -0.02 to -0.04 K per decade over the last 10 years.
    There seems to have been a slowdown of global warming of that amount due to the negative ENSO trend over that period. Over the last five years the ENSO trend corresponds to a cooling of about -0.06 K per decade.
    Thus ENSO is not only important for interannual variations but considerably influences trends up to at least 10-year periods. Of course, the longer the period investigated, the smaller the ENSO-induced trend will be.

    Comment by Urs Neu — 15 Jan 2008 @ 9:07 AM
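Urs's back-of-envelope scaling can be written out explicitly; the MEI drift below is an assumed, illustrative figure rather than a fitted value:

```python
# Urs's rule of thumb, as arithmetic: global temperature responds as
# roughly 0.1 x MEI, lagged 5-6 months.  The MEI drift here is an
# assumed, illustrative number.
scaling = 0.1          # K of global temperature per MEI unit
mei_trend = -0.3       # assumed MEI drift, in index units per decade
lag_months = 5         # for a linear drift, a constant lag only shifts
                       # the window; it does not change the slope

temp_trend = scaling * mei_trend
print(f"ENSO-induced trend: {temp_trend:+.2f} K/decade")
```

A -0.3 index-unit-per-decade drift gives -0.03 K/decade, squarely inside the -0.02 to -0.04 range Urs quotes.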

  201. Regarding the reports of increased Antarctic and Greenland glacier melting and the record low area of summer sea ice in the Arctic, my question is: To what extent could deposition of particulate emissions (such as from diesel exhaust) on the ice surface contribute to the increased melting by decreasing the ice’s albedo and increasing the retention of heat? This could in part explain the increased ice melt, when an initial review suggests that 2007 temperatures were not dramatically higher than in the recent past. Not CO2, but still another anthropogenic contribution to warming. Have the impacts from this been investigated or calculated?

    Bob North


    Comment by Bob North — 15 Jan 2008 @ 9:25 AM

  202. Gavin, Now that we are on the same page, I will ask my original question again.

    Q: What is the magnitude (an average ballpark number) of annual variability (in W/m2) of system heat content changes (the whole enchilada) from AOGCM output? Or another way: is the magnitude of *interannual* variability in models close to that observed from past changes in OHC (the entire volume integral)? Or another way: do models underestimate annual variability of heat content changes? Thanks in advance for the answer.

    Just another point, however, on variability. The problem of global measurement (obtaining signal in a noisy system) is not substantially different from the challenge of obtaining average values of the weather, but the results from the ocean are potentially more profound. The thing I think most people don’t realize, though, is that nearly all of the actual “global warming or cooling” of the earth system is taking place in the ocean. Surface temperature or SST is really just a 2-D proxy for these 3-D heat changes. There are a bunch of interesting questions which emerge from this, including how quickly and deeply heat is mixed into the ocean, and how quickly this ocean heating is realized in the atmosphere. Urs Neu (#200) is most certainly right that ENSO has a profound effect on global surface temperature, but I am interested in how much it really affects the global ocean heat content integral (which will determine the amount of “global warming in the pipeline”).

    [Response: It's unclear whether the models underestimate the interannual variability (AchutaRao et al, 2007). It's possible that they do, but without better data on what that variability really is, it will remain ambiguous. Unforced decadal variability is even more uncertain. Sampling issues have definitely increased the variability in the data, though maybe that is now getting better. You liken the annual average in OHC to that of SAT, but you need to consider the size of the variability relative to the signal. From a range of -80 to 80 W/m2 locally, we are looking for a < 1 W/m2 signal (almost two orders of magnitude smaller). For SAT, the interannual sd in one location is a couple of degrees, say 5 deg C for a range (I haven't checked). The signal we are looking for there is a couple of tenths, which gives a signal-to-noise ratio ~8 times as large. Plus the SAT variability has large spatial scales and is much better sampled, etc. I thought I answered your question above, though: the interannual heat content change of the whole system is on the order of a few tenths of a W/m2. - gavin]

    Comment by Bryan S — 15 Jan 2008 @ 10:08 AM

  203. Bob North – “To what extent could deposition of particulate emissions (such as from diesel exhaust) on the ice surface contribute to the increased melting by decreasing the ice’s albedo and increasing the retention of heat.”

    As I understand the recent finding of ice melt in the western Antarctic, they speculate that it is being caused by deep ocean currents, not surface causes.

    Comment by weather tis better... — 15 Jan 2008 @ 10:33 AM

  204. Good to remember how computers and information have improved. Imagine what the IPCC could have done 20 years ago if current technology had been available to the researchers, eh?

    This news article

    mentions that military as well as civilian satellites are now being used to track these issues. That’s something I’d hoped but hadn’t seen mentioned before.


    “… University of Colorado at Boulder climate scientist … Konrad Steffen, director of the Cooperative Institute for Research in Environmental Sciences…. presentation … American Geophysical Union … Dec. 10 to Dec. 14 [2007]…. used data from the Defense Meteorology Satellite Program’s Special Sensor Microwave Imager aboard several military and weather satellites ….

    Steffen maintains an extensive climate-monitoring network of 22 stations on the Greenland ice sheet known as the Greenland Climate Network, transmitting hourly data via satellites to CU-Boulder to study ice-sheet processes.

    —–end excerpt——-

    I’d guess the ‘hourly data’ is for managing military assets. Could ice sheet changes be happening that fast? Icequakes, maybe?

    Comment by Hank Roberts — 15 Jan 2008 @ 11:02 AM

  205. Gavin, thanks for the really informative discussion.

    Comment by Bryan S — 15 Jan 2008 @ 11:17 AM

  206. Where can I find a good annual time series for global sulfate and/or black carbon aerosols? The only thing I’ve managed to find on-line so far is Lamb’s Dust Veil Index, and that only goes up to 1983.

    Comment by Barton Paul Levenson — 15 Jan 2008 @ 11:25 AM

  207. Re: #174, #184


    I am driving myself crazy trying to figure out where Hadley’s SSTs come from. It seems that they are using NOAA satellite data; do you concur?

    I have been to the NOAA site and I cannot find any data for global SSTs.

    What I really want to know is, what are the sources and methods for determining SST for both Hadley and GISS? Is one (Hadley) using satellite and the other (GISS) using buoys and ships? Perhaps I am speaking nonsense, but part of the problem is that I don’t do this for a living, don’t have unlimited time to track things down, don’t really have the expertise to make sense of the data (though I do try), and of course it is easy to get overwhelmed by the sheer volume of sites, articles, links, datasets and so forth.

    The reason I used the word “stark” (which you endorsed) is because, to my eyes, we are talking about a difference in sign. GISS sees a plus sign, Hadley sees a negative sign. To the layman, the next question is “Huh?”

    I also repeat my question: Is it valid to extrapolate a global temperature which is dominated by SSTs, which ends up burying the significant increases in land temperature?

    If we look at a graph which contains only land temperatures, we see a result which more closely resembles observed changes. I know of some people who categorically refute that humans can detect changes of “tenths of a degree”, and I agree. However, the change on land is much greater than that, especially here in the U.S. northeast, and I am confident that I can detect a general warming trend in the last ten years, not to mention almost completely different seasons than 30 to 40 years ago.

    I know that water is 70% of the surface. This means that when we take “averages”, land gets 3/10 of the total weight. Is that the correct way to look at the rise in temps? Isn’t the increase of land temps more accurate in terms of what’s actually happening to the planet?

    It seems to me that land and oceans have substantially different properties. A rise of 0.1ºC in overall ocean temps would be a much bigger deal than a similar rise on land.

    I would appreciate if Jim or anybody would take over my feeble attempt at making a point, and put it in clearer terms. I know that I am botching this, but I swear there is a valid question in here somewhere.

    Comment by Walt Bennett — 15 Jan 2008 @ 11:25 AM

  208. Walt, check these:

    You can improve on the search after reading the first few pages of hits to this approximate search; lots of clues therein already.

    Comment by Hank Roberts — 15 Jan 2008 @ 1:23 PM

  209. Re: Walt Bennett 207

    GISS and Hadley temperatures are different in a number of respects. Data sources? Sure — up to a point, but they share much of the same data as well. The base periods against which they calculate anomalies are another place where they differ. One could be positive, the other negative, and yet both could be showing the same exact thing.

    But then there is a difference in terms of methodology — that is how they interpolate to fill in the gaps.

    Hadley interpolates to a lesser extent than GISS, considering only neighboring cells (if I remember correctly, from a couple of days ago). In contrast, GISS relies upon the fact that while temperatures aren’t that well correlated over great distances, temperature anomalies are, out to roughly 1000 km if I remember correctly, with more weight given to a measurement the closer it is to the gap you are trying to fill in.

    As such, Hadley has less coverage, and little above 70 N, but GISS “covers” a fair amount above 70 N. However, Hadley has been extending its coverage over the past several decades.

    They are actually both quite good and generally well-correlated, at least within the range of uncertainty. But both are going beyond the data using somewhat different methodologies and therefore do not give quite the same view of the world.

    Anyway, here are links to both where you may find out a little more…



    Comment by Timothy Chase — 15 Jan 2008 @ 1:55 PM
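The distance-weighted gap filling Timothy describes can be sketched roughly as follows. The linear taper and the 1000 km radius follow his description; this is a simplification for illustration, not the actual GISTEMP algorithm (which differs in detail), and the station values are hypothetical:

```python
import numpy as np

def fill_gap(anomalies, distances_km, radius_km=1000.0):
    """Estimate the anomaly at a gap from nearby stations, with each
    station's weight decreasing linearly to zero at `radius_km`.
    A simplified sketch of the GISS-style scheme described above."""
    w = np.clip(1.0 - np.asarray(distances_km) / radius_km, 0.0, None)
    if w.sum() == 0:
        raise ValueError("no stations within the correlation radius")
    return float(np.average(anomalies, weights=w))

# Three hypothetical stations near a gap: the station at 1200 km
# falls outside the radius and gets zero weight.
est = fill_gap(anomalies=[0.5, 0.9, 0.2], distances_km=[100, 400, 1200])
print(f"interpolated anomaly: {est:+.2f} C")
```

This is why GISS can “cover” more of the high Arctic than Hadley: a gap is filled whenever any station sits within the correlation radius.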

  210. Re: #208,


    Thank you, and I will make still more time to track the answer down.

    However, I have had time to figure out the question I am trying to ask.

    Let’s turn the planet into a 2D grid, and report a temperature anomaly for each box in the grid, based either on direct measurement or extrapolation. We know that 70% of those boxes will be over water, and 30% will be over land. Now, my question is this: what do we do with these numbers in order to determine a global anomaly? Do we simply add them up and divide by the total, giving equal weight to each anomaly whether it occurs over land or over water? Or is there some sort of weighting done to account for the greater heat capacity of water?

    Comment by Walt Bennett — 15 Jan 2008 @ 1:55 PM

  211. Walt, that sounds like the kind of problem that was being faced in the 1980s. A search for examples turned this up, just to pick one place where people stated similar problems. I think you’d find the pictures revealing; they look at the globe and the oceans and how to lay a useful grid over the sphere, for example:

    “The state of the art ocean models are based on logically rectangular grids which makes it difficult to fit the complex ocean boundaries. In this paper we demonstrate the use of unstructured triangular grids for solving a barotropic ocean model in spherical geometry with realistic continental boundaries …….resembles 2D grid … tested on a cluster
    of workstations and on an IBM SP-2….”

    Of course the computer in your Walkman is probably more sophisticated than the ones they were using. But it still isn’t as simple as you’re trying to make it, I think.

    Comment by Hank Roberts — 15 Jan 2008 @ 4:10 PM

  212. Re: #211,


    Simple is all I can offer, my friend :-)

    And I do believe that I was, once again, less than clear. My question has nothing to do with models. On the 2D grid we write down actual measured anomalies in each box, averaged over a year or a month or whatever time period we are evaluating. My question is, how do we derive the average global anomaly from those numbers? Do we add them together and divide by the number of boxes, or are the boxes over water given a different weight than the boxes over land?

    [Response: If the boxes are complete, each box is weighted proportional to its area, ocean and land boxes alike. This is an estimate of temperatures, not heat content, and so no weighting by heat capacity is required. There is a slight twist due to the incomplete and unequal coverage of the hemispheres. Some groups calculate the average NH and SH numbers separately, and then take the average of the two to get the global mean. Since the SH has less coverage, that weights SH boxes slightly more strongly. - gavin]
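    The recipe in the response can be sketched in a few lines of Python (a toy illustration, not the actual GISTEMP code; representing the grid as (latitude, anomaly) pairs is an assumption made for the example):

```python
import math

def global_mean_anomaly(boxes):
    """Area-weighted mean of gridded anomalies.

    boxes: (latitude_degrees, anomaly) pairs, one per grid box on a regular
    lat-lon grid.  A box's area is proportional to cos(latitude), so each
    box is weighted by cos(lat) -- land and ocean boxes alike, with no
    weighting by heat capacity, since this estimates temperature, not
    heat content.
    """
    wsum = asum = 0.0
    for lat, anom in boxes:
        w = math.cos(math.radians(lat))
        wsum += w
        asum += w * anom
    return asum / wsum

def global_mean_hemispheric(boxes):
    """Average each hemisphere separately, then average the two means.

    With sparser SH coverage this weights SH boxes slightly more strongly,
    the 'slight twist' mentioned in the response.  (Boxes centered exactly
    on the equator are omitted in this toy version.)
    """
    nh = [b for b in boxes if b[0] > 0]
    sh = [b for b in boxes if b[0] < 0]
    return 0.5 * (global_mean_anomaly(nh) + global_mean_anomaly(sh))
```

    With uniform anomalies the two functions agree; when one hemisphere is more sparsely sampled, the hemispheric version shifts the global mean toward that hemisphere's boxes.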

    Comment by Walt Bennett — 15 Jan 2008 @ 4:33 PM

  213. Ref 207. Walt. My understanding of the measurement of average global temperature anomalies is largely second hand. I simply cannot answer your questions. I have picked up bits and pieces of information from several different places. I have written to Environment Canada, asking which data set is “best”, but received a “political” answer, not a scientific one. I am going to try again in a few weeks, when I have all of the 2007 data. I believe that the three, NASA/GISS, HAD/CRU and NCDC/NOAA, use essentially the same data, collected by various means from all over the world. The RSS/MSU is different as it only uses the data from its own sensors. The three “massage” the data in different ways. The broad outline as to how each one does it has been explained. However, the “devil is in the details”, i.e. the computer codes as to precisely what is done to the data. [edit]

    [Response: The computer code for NASA/GISS is available at and the different methodologies were compared in Vose et al (2005) - gavin]

    Comment by Jim Cripwell — 15 Jan 2008 @ 4:33 PM

  214. Walt Bennett (210) wrote:

    Let’s turn the planet into a 2D grid, and report a temperature anomaly for each box in the grid, based either on direct measurement or extrapolation. We know that 70% of those boxes will be over water, and 30% will be over land. Now, my question is this: what do we do to these numbers in order to determine a global anomaly? Do we simply add them up and divide by the total, giving equal weight to each anomaly whether it occurs over land or over water? Or is there some sort of weighting done to account for the greater heat capacity of water?

    The central question is: what are you measuring?

    If you were trying to measure heat content, then it would make sense to take into account heat capacity, but you aren’t, so you don’t. When you are measuring the global average temperature (or, to be more precise, the global average temperature anomaly), to a first approximation it doesn’t matter whether the temperatures being measured are over land or water.

    If you could arrive at a continuous function such that each point on the globe had a specific numerical value, you would integrate over the surface, then divide by the total surface for an instantaneous value, or you would integrate over surface and time, then divide by the total surface and divide by the length of the period for which you are constructing the average. But that is only in principle since we have only a finite number of measurements.

    To the extent that you use interpolation, preferably on the basis of simple rules which involve the fewest assumptions, you may take into account the difference between land and ocean, but only for the sake of estimating the missing data. NASA makes use of this a bit more than Hadley, as we know that temperature anomalies are strongly correlated over relatively large distances. Hadley takes a more conservative approach.

    Incidentally, if you are trying to calculate normalized measures, it is generally best to stay away from weighted averaging. Oftentimes you simply won’t know what to weight by.

    I will use an example from my own experience in the study of highway performance. You should calculate everything in terms of aggregate measures such as vehicle*miles and vehicle*hours, then divide at the end to get your normalized measure, e.g., average speed = vehicle miles traveled divided by vehicle hours of travel.

    However, the state department of transportation I worked for was doing weighted averages by distance traveled. As such, for a single car that traveled 50 mph for 1 hour and then 10 mph for 5 hours, they came up with 30 mph as the average speed, since the length of each leg of the journey was the same. But the total was 100 miles over 6 hours of travel, which means that the actual average speed was 16.67 mph over the entire length of the journey.

    Now if you had weighted by hours of travel instead of miles of travel, you would arrive at the right answer. But the problem is you still wouldn’t know that weighting by time of travel was the right thing to do — unless you performed the calculation without the use of weighted averaging. So avoid the weighted averages. Chances are you will even improve the performance of whatever program you are writing to do the calculations.
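    The highway example works out the same way in code; a minimal sketch, using the numbers from the example above:

```python
def average_speed(legs):
    """Aggregate-first average: total vehicle-miles / total vehicle-hours."""
    miles = sum(speed * hours for speed, hours in legs)
    hours = sum(hours for speed, hours in legs)
    return miles / hours

legs = [(50.0, 1.0), (10.0, 5.0)]    # 50 mph for 1 h, then 10 mph for 5 h

correct = average_speed(legs)        # 100 miles / 6 hours, about 16.67 mph

# Distance-weighted average of the leg speeds -- the mistaken calculation:
# each leg covers 50 miles, so the two speeds get equal weight.
dist_weighted = (sum(s * (s * h) for s, h in legs) /
                 sum(s * h for s, h in legs))   # 30 mph
```

    Dividing the aggregates at the end gives 16.67 mph; weighting the leg speeds by distance gives 30 mph, reproducing the error described above.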

    Comment by Timothy Chase — 15 Jan 2008 @ 5:24 PM

  215. Thanks for the intellectual debate. I’m positive that almost everyone who has commented is much smarter than I am. I have no scientific background and haven’t had time to read each comment, but I do have a question… Maybe you covered it and I missed it. It has a few parts to it.

    How much oxygen producing vegetation (Global %) has been eliminated in the last 150 years? In the same time-frame, how much CO2 has been added to the atmosphere? How much heat trapping concrete/asphalt has replaced the vegetation? How many rooftops world-wide with heat trapping materials have been added since the dawn of the 20th century? Finally, does any of this matter to your models?

    Or is this just a dumb question?
    You don’t have to be polite. If it’s dumb tell me.

    Comment by Jeff Phillips — 15 Jan 2008 @ 6:31 PM

  216. Jeff, one place to start — where, and how, sunlight becomes life on the planet is the basis for it all. There is much else there if you back up the directory tree and search on the terms you’ll find here; it helps put us and our part into proportion. We’ve really just slightly overbalanced a very large system, so far, and are wondering how far it can be pushed ….

    Comment by Hank Roberts — 15 Jan 2008 @ 6:55 PM

  217. Pushed until what?

    Comment by Jeff Phillips — 15 Jan 2008 @ 7:23 PM

  218. Walt, I think you have fallen victim to the old dictum: A man with one watch always knows what time it is; a man with two is never sure. Of course, to a scientist, 2 watches working independently are better than one, as you can take an average and probably get closer to the actual time. 3 are better still, as you can start to estimate your errors. Given polar amplification, it is not surprising that Hadley’s estimates might undershoot actual warming. This is of course why Hadley and UAH are the models of choice in the denialosphere. However, the fact is that any reasonable analysis shows increasing temperature, regardless of what dataset you use. The watches are still running forward. Physics still works.

    Comment by Ray Ladbury — 15 Jan 2008 @ 7:48 PM

  219. > Pushed until what?
    Jeff, for that, the Start Here link at the top of the page will get you started. Look at the scenarios at the IPCC too. I’m just another reader here (well, more talkative than many). Earlier topics here at RC will give you a lot of possible answers.

    From the link I gave you:
    “… What fraction of the terrestrial NPP do humans use, or, “appropriate”? It turns out to be a surprisingly large fraction. Let’s use our knowledge of ecological energetics to examine this very important issue. (Why NPP? Because only the energy “left over” from plant metabolic needs is available to nourish the consumers and decomposers on Earth.)….”

    Comment by Hank Roberts — 15 Jan 2008 @ 8:39 PM

  220. Just for the record, Ray (198). I see nothing wrong, scientifically, with us skeptics (or anyone) pointing out and pursuing what might look like chinks in the AGW armor. Some might even lead to an AHA! moment. Or at the least it could aid our understanding. And the incessant litany of the science being irrefutable is no deterrence. I would, however, agree with your criticism of my pals (??) who automatically proclaim that every perceived chink is, in fact, complete refutation of the theory. That’s silly and the criticism is rightly deserved.


    Comment by Rod B — 15 Jan 2008 @ 10:23 PM

  221. #218

    “Given polar amplification, it is not surprising that Hadley’s estimates might undershoot actual warming.”

    Have you mentioned this to them? How can we have a model “out there” that consistently and transparently gets it wrong? Is this an oil-company-funded model? I thought models were inviolate?

    In fact, I will ask them this myself; apologies in advance if they respond with requests for further reasons they are wrong as these requests will be passed directly on to this forum.

    Comment by Alan K — 16 Jan 2008 @ 6:09 AM

  222. Rod, by all means if you see a shortcoming, point it out. What I condemn is the inability of many so-called skeptics to see results in the full context of the science. Basically, if you believe in the basic science of the greenhouse effect (and it is well established) there is no reason to assume it changes magically when CO2 reaches 280 ppmv or 560 ppmv and perhaps even >1000 ppmv. If that is so, then for the anthropogenic theory to be wrong, we would have to be missing a very significant piece of physics from the models. Even then, we would not necessarily be out of the woods, as the effects of CO2 can persist for thousands of years, and might well outlast any negative feedback, etc. we were to discover. It would be a reprieve, but not a pardon. We also have zero evidence of such missing physics. Indeed, the models perform quite well, suggesting we’re pretty close on the most important forcings.

    The alternative is that we have greenhouse forcing all wrong. It can’t be just a little wrong, because it is independently constrained by many separate lines of evidence. If that is true, then our entire theory of the climate would have to be scrapped. This idea that a little twiddling around the edges will make the whole problem go away is simply wrong. Basically, anthropogenic causation of the current warming epoch is a simple consequence of the known physics of climate science. Saying it is wrong has consequences as serious as saying evolution is wrong.

    Comment by Ray Ladbury — 16 Jan 2008 @ 8:40 AM

  223. Alan K, To quote George Box: “All models are wrong. Some models are useful.” Climate skeptics love the first part of the quote, but would prefer to forget the second part. Hadley and GISS take different approaches to the same problem–the paucity of measurements in polar regions. GISS uses extrapolation–itself prone to errors. Hadley just says, “We don’t know.” (Yes, Gavin et al., I know this is a gross simplification and welcome corrections, chastisement, etc.) Both are wrong. Both are useful. Independent of Hadley or GISS, we KNOW the polar regions are warming dramatically. What matters is that regardless of which dataset you are using, you get the same answer: We’re still warming. You may get slightly different answers, but the slope is still positive. The fact that you get the same answer with such independent and different analyses is itself a confirmation of the robustness of the result. Unlike the skeptics, we don’t have to cherrypick the dataset, period or analysis method to sculpt a particular foreordained conclusion. Real science is nice that way.

    Comment by Ray Ladbury — 16 Jan 2008 @ 8:55 AM

  224. Where would I find a list (lat/lon) of the stations GISS uses to extrapolate the Arctic temps?

    I’m wanting to see the amount of area covered by this “extrapolation”. Someone should be able to place a 1200 km circle around these stations and show the area covered by GISS.

    When I go to the GISS homesite, it says the data and programs can be downloaded, but notes:

    “The subdirectories contain software which you will need to compile, and in some cases install at particular locations on your computer. These include FORTRAN programs and C extensions to Python programs. Some Python programs make use of Berkeley DB files.”

    [Response: I don't think you need to do it all yourself to get the answers you want. Try going to the global maps page: and look at the various options. If you choose the GISS analysis + HadI/Reyn SST you'll see the Arctic area filled in from extrapolation of the met stations. If you select 'none' for the land analysis, you'll see how much of the ocean area was filled in. - gavin]

    Comment by henry — 16 Jan 2008 @ 9:08 AM

  225. Re: #212 inline,


    Perhaps I should be reading something on this subject so I am not burdening anybody here to teach me something they learned 20 years ago in college, so feel free to point me in a direction and shoo me away :-)

    You have zeroed in on my question, sir. It does not seem intuitively correct to me to take a surface temperature over open ocean and compare it to a surface temperature over land. I’m not sure exactly what properties would be different, but heat absorption seems to be a property which would be different and which would affect surface temp.

    In other words, if I have the sun beating down on 1000 square miles of land, and the same amount of sun beating down on 1000 square miles of ocean, and of course atmosphere is identical in both cases, I would expect, over time, that the land would warm more than the ocean would, because the ocean will absorb and redistribute the heat more efficiently than land will.

    I’m sure that there are other factors which are involved, which is why I suggest that you might want to recommend some reading I could do on a subject which I am sure has been well explored by others.

    What this comes down to is, your answer was what I expected, based on the effect SSTs have on global average. However, it just doesn’t make sense to me that the properties are the same for each box.

    Thanks for helping me understand this better.

    [Response: Actually, there have been several decades of research on precisely this question. The predominant mechanism determining the appropriate averaging length scale is the horizontal mixing by large-scale atmospheric motion, and this is largely similar over land and over ocean. This leads to spatial correlation scales of between 1500 and 2000 km in annual mean surface temperature. You can find some discussion of this in Mann and Park (1994) [Mann, M.E., Park, J., Global scale modes of surface temperature variability on interannual to century time scales, Journal of Geophysical Research, 99, 25819-25833, 1994] (available as pdf here). Note the discussion on page 3, and in particular the references to decades of earlier related work by Livezey, Madden, North, and others. It should be noted that there is some regional variation. See e.g. Briffa and Jones (1993) [Briffa, K. R. and P. D. Jones (1993) "Global surface air temperature variations during the twentieth century: Part 2, implications for large-scale high-frequency palaeoclimatic studies." The Holocene 3(1): 77-88], the abstract of which is as follows:
    This paper is the second of a two-part series analysing details of regional, hemispheric and global temperature change during the twentieth century. Based on the grid box data described in Part I we present global maps of the strength of regional temperature coherence measured in terms of the correlation decay length for both annual and seasonal mean data. Correlation decay lengths are generally higher for annual rather than seasonal data; higher in the Southern compared to the Northern Hemisphere; and consistently higher over the oceans, particularly the Indian and central north Pacific oceans. Spatial coherence is relatively low in all seasons over the mid to high latitudes of the Northern Hemisphere and especially low in summer over the northern North Atlantic region. We also describe selected regional temperature series and examine the similarities between these and hemispheric mean data, placing emphasis on the nature of the relationships in different seasons…. See also Weber et al (1995) [Weber, R. O. and R. A. Madden (1995). "Optimal Averaging for the Determination of Global Mean Temperature: Experiments with Model Data." Journal of Climate 8: 418-430], the abstract of which follows:
    Optimal averaging is a method to estimate some area mean of datasets with imperfect spatial sampling. The accuracy of the method is tested by application to time series of January temperature fields simulated by the NCAR Community Climate Model. Some restrictions to the application of optimal averaging are given. It is demonstrated that the proper choice of a spatial correlation model is crucial. It is shown that the optimal averaging procedures provide a better approximation to the true mean of a region than simple area-weight averaging does. The inclusion of measurement errors of realistic size at each observation location hardly changes the value of the optimal average nor does it substantially alter the sampling error of the optimal average.
    There is much more to read on all of this in the references contained within the papers cited above. -mike]

    Comment by Walt Bennett — 16 Jan 2008 @ 9:45 AM

  226. General note:

    We seem to be confusing models and measurements. My line of inquiry deals only with measurements, the differences between them, and how they are applied.

    Comment by Walt Bennett — 16 Jan 2008 @ 9:53 AM

  227. #185
    Figen – I’m not sure there is any need to be so defensive about this. I agree with Jim and Walt about this. When looking at different data sets I also get different results. I have done the calculations and made other graphs and can clearly come to different conclusions. If you use the Hadley data you can see basically flat global temperatures for 7 years, and if you look at the Hadley sea data the global average SS temperatures are definitively in decline. Additionally, if you plot against the IPCC 2001 predictions it’s transparent that their predictions were overestimated in the short term. I would love to post this graph here – I have asked for an e-mail address to send it to (see #157) but no reply. Just tell me how to post it or where to send it. It’s not about proving anything – it’s about having the intellectual curiosity to try and understand these differences.

    Comment by Imran — 16 Jan 2008 @ 10:06 AM

  228. Notice how hard it is to stay on topic? Wonder why?

    Comment by Hank Roberts — 16 Jan 2008 @ 10:59 AM

  229. Off topic, congratulations to Gavin on a terrific performance on today’s Diane Rehm show. As a member of the lay audience I felt I got a clear sense of what we know, what we don’t know, the magnitude of the task ahead and the importance of starting now.

    Comment by Walter Pearce — 16 Jan 2008 @ 11:13 AM

  230. #222

    “Notice how hard it is to stay on topic? Wonder why?”

    good point Hank – the original post concerns model projections. Can everyone please try to stay focused.

    Comment by Alan K — 16 Jan 2008 @ 11:26 AM

  231. Walt,

    In following the links provided by Tim Chase above, I was surprised to learn that both GISS and HadCRU use the sea surface temperatures themselves to obtain the anomalies; that is, the change in the temperature of the water itself is used to obtain the anomaly, and not the temperature of the air immediately above the surface of the water.

    Comment by B Buckner — 16 Jan 2008 @ 12:05 PM

  232. Walt, measurements don’t exist independent of models. If I take a measurement, I want to know what kinds of random and systematic errors it may be subject to. Right there, you are already talking about modeling.

    Comment by Ray Ladbury — 16 Jan 2008 @ 12:17 PM

  233. More off topic (point of order):

    It seems to me the topic concerns the interface between models and measurements.

    Comment by Arch Stanton — 16 Jan 2008 @ 12:29 PM

  234. If self-styled “skeptics” [edit] were really interested in getting accurate data about the Earth’s climate system, they’d be advocating for increased data collection. For example, they’d be pushing hard for the Deep Space Climate Observatory.

    See Why did NASA kill a climate change project? IHT 2006

    “The better experiment when it comes to global warming was to be the climate observatory, situated in space at the neutral-gravity point between the Sun and Earth. Called Lagrange 1, or L1, this point is about 1 million miles from Earth. At L1, with a view of the full disk of the Sun in one direction, and a full sunlit Earth in the opposite, the observatory could continuously monitor Earth’s energy balance. It was given a poetic name, Triana, after Rodrigo de Triana, the sailor aboard Christopher Columbus’ ship who first sighted the New World.

    Development began in November 1998 and it was ready for launching three years later. The cost was only about $100 million. For comparison, that is only one-thousandth the cost of the International Space Station, which serves no useful purpose.”

    They’d also be advocating for a much larger ocean temperature and current monitoring program based on direct measurements. For example, see Call For Network To Monitor Southern Ocean Current, 2007, ScienceDaily

    There is now a moored North Atlantic monitoring system, which has already revealed that there are large variations in ocean circulation:

    “Prior to the current observations, it was unclear how large and rapid the variations in this overturning circulation were,” Dr Church said. “The latest results show that the variation in the overturning over a year was as large as the changes previously observed by Bryden et al.”

    Instead, “skeptics” troll through existing datasets, looking for time periods that they can use to promote their pre-determined conclusion – “global warming is minimal, is not caused by human use of fossil fuels, and will be a good thing anyway.”

    [edit - please no personal remarks]

    Comment by Ike Solem — 16 Jan 2008 @ 12:32 PM

  235. Re: Jim Galasyn (191) on the Ice Sheets on the West Antarctic Peninsula

    There is a discussion here with Eric Rignot…

    Science: Climate Change Impact on Antarctica
    Marc Kaufman and Eric Rignot
    Washington Post Staff Writer and NASA Scientist
    Monday, January 14, 2008; 12:00 PM

    Incidentally, most of the stories I have seen on this are talking about the ice loss as if it’s just glaciers, and don’t seem to understand that it’s ice sheets that are becoming unstable.

    Comment by Timothy Chase — 16 Jan 2008 @ 12:45 PM

  236. In trying to understand the models that the IPCC uses in their assessments and predictions–am I understanding this properly? The models assume a baseline on all values and, holding those values constant, force increased CO2 into the models, and out comes the result: temperature increases….
    But the models are predicting up to 100 years into the future–do these models hold all of these values constant over this 100-year span while only CO2 remains the “un”-constant??? When in the history of this planet has the atmosphere remained constant for up to 100 years?

    [Response: Over the 20th Century, the models assume up to 14 or so different things changing - not just CO2 (but CH4, aerosols, ozone, volcanoes, land use, solar, etc etc.). CO2 is big part, but it is not the only thing going on. Similarly, future scenarios adjust all the GHGs and aerosols and ozone precursors etc. They don't make assumptions about solar since there is no predictability for what that will be, and although some experiments throw in a few volcanoes now and again, that too is unpredictable. What would you have the modellers do instead? - gavin]

    Comment by Gaelan Clark — 16 Jan 2008 @ 12:55 PM

  237. Gavin, Thank you for your reply.
    I have no idea what I would have the modellers do, and I do not presume to have any answers…, BUT, I will tell you what I would have the modellers, and the IPCC, NOT do: scare the public with the top end of the range of possible outcomes of increased CO2 scenarios, and keep saying that the science is settled.
    Even without your PhD I can see that there are many questions still to be resolved–for instance, as you posit, solar irradiance–a HUGE “what if”–and possibly others that are not being discussed here–again, I don’t know–but I do know that you do.

    —Quick question, not to nit-pick, what do you mean by “…14 or so different things changing – not just CO2 (but CH4, aerosols, ozone, volcanoes, land use, solar, etc etc.).”…furthering…”They don’t make assumptions about solar…”??

    [Response: Where have you read that I have stated that the 'science is settled'? If I thought that, I wouldn't still be a scientist. That kind of statement is instead a strawman characterisation of what scientists are saying, and at maximum reflects only the basic consensus and certainly not 90% of what it is that scientists are actually doing. For many purposes the outstanding points of contention are not relevant to most people - which is why 90% of papers on climate don't get a press release, but there is a lot that is known, and to make that clear is completely appropriate. To answer your last question about solar in future simulations, the assumption is that there will be no change in the long term irradiance. That isn't likely to be correct but there is no good reason to think it will be higher or lower in the future. If we get better predictions, then we'll use them - But given current understanding, no reasonable changes in solar are likely to change the underlying prognosis. - gavin]

    Comment by Gaelan Clark — 16 Jan 2008 @ 1:29 PM

  238. Re: #231,

    That is interesting. Mike was kind enough to lob copious references at me, so now I have some homework to do.

    Why I seem intent on delving into something that will, in all likelihood, turn out to be well investigated…I can’t answer that.

    Except to say that so far, I don’t know enough to say that my questions have been answered. So, I will see what I learn from Mike’s references. Thanks Mike, for taking the time.

    With regard to the folks who consider it their duty to keep a thread “on-topic”, I will ask: who are we hurting by exploring tangentially related questions, and is not the quest for knowledge slightly more important than sticking to a specific agenda? This is a blog, after all, not a 90 minute symposium.

    Comment by Walt Bennett — 16 Jan 2008 @ 1:56 PM

  239. A hiccup in the “it’s not happening” Denialosphere?

    Comment by JCH — 16 Jan 2008 @ 2:25 PM

  240. Gaelan: If you look at the variability of atmospheric CO2 concentrations over the past several thousand years, it is tiny compared to the changes in the past hundred.

    If you look at the variation in solar forcing (as well as we understand it) over the past several thousand years, it is small compared to the changes we expect due to anthropogenic forcings over the next hundred years.

    If you look at volcanoes over the past couple hundred years, while they make a difference not too much smaller than anthropogenic changes, those differences last only a year or two.

    So, effectively, the assumption that the natural forcing remains constant is a fairly good one.

    (note that some models do try to account for the interactions between changing atmosphere, climate, and precipitation and ecosystems and oceans. This is hard, though: for example, in ecosystem modeling, most work shows the biosphere taking up a lot more carbon due to carbon fertilization; however, more recently, modelers have begun to realize the importance of nitrogen limitation in carbon-rich futures… and precipitation is very important for ecosystems, and poorly predicted… and the effects of temperature change on ocean mixing are still not well understood…)

    Comment by Marcus — 16 Jan 2008 @ 2:26 PM

  241. Re: #237


    I think it’s great that you have come to RC and that you want to participate in this incredibly important discussion. You seem to understand one thing very well: Gavin and his peers know a lot more about this stuff than we do.

    I want to ask you, though: why should they spend their time defending their motives? Don’t we want them busy doing the work? Aren’t we grateful enough that they spend the time answering our science questions?

    I predict that when you ask somebody about their motives, that person will defend those motives. So, why not skip that and make up your mind based on the content of the information Gavin, Mike, Stefan and others post here.

    Isn’t it great that we are on first name bases with the who’s who of climate science?

    Let’s not take too liberal advantage of that, and ask them to spend time defending their practices. As I said, when we have science questions, they answer them.

    That is a wonderful thing, and I hope you agree.

    Just my two cents…

    Comment by Walt Bennett — 16 Jan 2008 @ 2:48 PM

  242. Re: 234, “trolling through datasets”

    Of course I agree that every kind of data collection investment is urgently needed, but on the subject of trolling through existing datasets, and also somewhat in the original spirit of “comparing like with like” – here is a data exercise any child could perform:

    In the global surface temperature dataset referenced by the map-maker at there are four previous Decembers which scored a global monthly mean anomaly around +0.39 (that of Dec 2007): 1939, 1979, 1990 and 2002. Visual data is extremely powerful – I like the polar projections, as the epicenters of our situation would appear to be the poles. The dynamics of the zonal mean line are also very interesting. It’s a striking progression, and you can draw your own conclusions from a dataset which has weathered extreme scrutiny.

    Comment by Daniel C. Goodwin — 16 Jan 2008 @ 2:52 PM

    #240–Thank you very much for your reply. I now understand the modelling concept much better.
    #241–I have never presumed to question anyone’s “motives”, yet their “practices” are indeed a fundamental part of the “science.” And I am trying feverishly to understand the content, but my lack of physics and other science classes relegates me to the sidelines until I get the courage to ask a question—obviously I am asking because I want to expand my limited knowledge.
    Dr.Schmidt has been kind enough to deliver to me and others answers to our questions, albeit in a way that I sometimes do not understand, so I take lesson from the references and learn more through that.
    Again–I do not care about motive, but the application of the practice is of great concern–for instance, how does one extrapolate temperature from bristlecone pines?–Further, by this temperature proxy–which may or may not be correct–how does one say that current warming is unprecedented?
    I simply see no motive from any scientists, though, that would make such inferences—I would like very much to know their practice of science that leads them to this answer—of that, I have found no real proofs, I am still searching, and hopefully can be pointed in the right direction, possibly by you.
    Thank you in advance.

    Comment by Gaelan Clark — 16 Jan 2008 @ 3:03 PM

  244. Thank you Dr. Schmidt, I really do appreciate your time.
    And, I am just parroting the news on the “science is settled” thing; I never meant to imply that you said such a thing.
    There are so many like myself, that want to know more about the science but are limited by our own choice of class load during our school years.
    I am looking into taking some basic science classes at my alma mater, USF–Sun and Fun in Tampa, FL—do you have any suggestions for me that would help me understand the basic science behind what you are doing?

    [Response: Not specifically, but many schools do a 201 level intro to climate or atmospheric science - those are usually a good start (though you'll need to go further to really get a handle on the physics or modelling aspects). If you want to self study working though 'A climate modelling primer' by McGuffie and Henderson-Sellers or Houghton's Physics of Atmospheres would be helpful. - gavin]

    Comment by Gaelan Clark — 16 Jan 2008 @ 3:08 PM

  245. Re: 243


    Excellent question and indeed the basis for my own interest.

    I will point you in a direction:

    The Discovery of Global Warming

    This is a very comprehensive history of the research that went into current understanding of the greenhouse effect in general and AGW in particular. It is written at a level for a layman to understand, and it is well-referenced.

    I hope you take my comment in the spirit intended, that we do not want to abuse the great opportunity we have to sit this close to scientists doing science.

    Comment by Walt Bennett — 16 Jan 2008 @ 3:30 PM

  246. Re Gaelan Clark @ 237: “for instance as you posit–Solar Radiance-a HUGE “what if”

    Is it? A huge variable, I mean. Solar radiation falling on Earth’s surface, or insolation, is of course the source of 99.9% of Earth’s energy budget, so solar insolation itself is huge. And the amount of insolation does vary by both predictable and unpredictable means. The predictable means include a very slowly increasing solar constant, and very slow Milankovitch orbital and rotational cycles. Neither is appreciably in play on a decadal or even century scale, so they can be discounted for short-term modeling. There may or may not be other long-term periodic variations in solar radiance, but until their existence and effect are proven and quantified there is no way to incorporate their impact in any model.

    Unpredictable means include sunspot cycle, which is at least semi-regular, so we have a known range of variation, aerosol injections from periodic volcanic eruptions, which are included in some models, and cloud distribution, which is not yet fully understood in terms of its net forcing. So how large are these unpredictable variations and how do they compare to known greenhouse gas and other forcings? Can we legitimately call any of them “huge?”

    Comment by Jim Eager — 16 Jan 2008 @ 4:09 PM

  247. Gaelan,

    In addition to what Gavin has written, you also need to be reasonable about how much natural variability will change. The secular increase in solar irradiance from 1900-1950 or so was actually rather high compared to the Holocene, yet it is considerably smaller than the radiative forcing from CO2 (as shown in this graph- ). Most likely, solar will decrease a bit, but that will be trivial next to rising GHGs…there is no simulation of a coupled ocean-atmosphere-ice phenomenon that is going to suddenly spurt out a change like that of 2x CO2 in Holocene-like conditions. Now if you want to argue that solar will decrease by a few percent, then we can discuss that, but I don’t really think anyone is going to work on wishful thinking. Volcanoes, El Nino and such things generally work on short timescales, and so the signal as we approach 2x to 3x CO2 is still going to be there.

    Projection modelling is not making a ‘prediction,’ it is making a ‘projection.’ Obviously Gavin cannot predict that a New York-sized asteroid might hit Earth in 2015 and substantially influence climate on a long timescale. In that instance, it probably wouldn’t be too relevant what the IPCC has to say right now. But the models say that if we reach 2x CO2, and all other things are equal (except feedbacks), you’ll get ~3 C of warming. If all other things aren’t equal (but, as he noted, there are scientific and socio-economic projections for those things as well, along with the different “emission scenarios” which we can control), then you need to factor in those effects; but unless you get a huge solar dimming, or the asteroid, CO2 is very likely to be the predominant forcing agent over this century.

    Comment by Chris Colose — 16 Jan 2008 @ 4:12 PM

  248. Roger Pielke, Jr. says:
    11 January 2008 at 10:53 AM
    Gavin Schmidt-vs. Roger Pielke

    For readers who think “two equal scientists are bickering here” …please look at the peer-reviewed published data which is analyzed by the world-wide scientific community…and is in your public library for your pleasure…ie. Nature Journal, Science Journal etc.

    Roger Pielke’s evidence does not stand up under world-wide peer review scrutiny…please look it up for yourselves…with the help of the librarian if you wish.

    Note: Industry has recently bought a few lesser known journals which try to legitimize their “global warming is false” idea…your librarian should be able to tell you which is which.

    Comment by Richard Ordway — 16 Jan 2008 @ 5:21 PM

  249. General Question about temperature Measurements:

    First: If global measured temps fall in a given year, do we believe that the atmosphere actually lost heat over that time period? Or is some variation caused by measurement error? (i.e., the heat is still around, just not where we happen to measure it).

    Thanks, to any one who has the knowledge and the time to answer.


    [Response: For the most part the atmosphere will have lost energy over that period. - gavin]

    Comment by erikG — 16 Jan 2008 @ 6:22 PM

  250. Gaelan, Are you familiar with the open courseware project at MIT. Their goal is to put course materials on-line for every course they teach. Here’s the website:

    and a more specific link that looked good:–Atmospheric–and-Planetary-Sciences/12-301Fall-2006/CourseHome/index.htm

    Also, the course that Hank linked to above:

    I actually worked with George Kling (the prof) writing up his research on Lake Nyos, the volcanic lake in Cameroon that belched out a bunch of CO2 and suffocated several villages. He’s a good guy.

    If you can tell me your background, maybe I can come up with other resources. After getting my PhD in physics, I never wanted to take a class again. I prefer to learn on my own.

    Comment by Ray Ladbury — 16 Jan 2008 @ 6:54 PM

  251. 1. Any time series showing less than a decade of data is not going to show trends that can be considered climatological, be it for temperature or any other variable. Anyone using such a graph to make claims about climate trends, including to verify GCM predictions, is making a misleading argument. The graph shown in the Tierney/Pielke blog is a good example of misleading data representation. If Pielke’s intent in showing the chart was to demonstrate, by way of an example of bad analysis, that one cannot make statements about temperature trends this way, more power to him. He would have made his point much clearer by including a chart like Gavin’s above.

    Gavin’s 30-year graph is a better summary of significant trends in global temperatures than the Pielke graph. Most readers will be able to discern from Gavin’s graph that the year-to-year variability is too high to draw any real conclusions about temperature from a time series of less than 10 years. Anyone who has ever analyzed a graph can clearly see this.

    2. GCMs are not designed to simulate what will happen 7 years from now. They are designed to tell us what the average conditions will be like on timescales of 10-100 years. One needs to display at least 20-30 years of data to draw statistically significant conclusions about a climate trend and whether a model has been useful in predicting it.

    A “spot check” can still be done for the 2001-2007 climate forecast, though: a) you could compare the predicted average temperature over the 7-year forecast with the actual mean temperature over that 7-year period; if one wants to look for trends, one could compare that 7-year mean to the seven-year means preceding 2001; b) one could test for robustness by plotting a time series of the mean temperature for 2001-2002, 2001-2003, 2001-2004, etc. and comparing it to the predicted temperatures; c) you could see whether the highly variable real temperature calculated above converges to the smooth predicted temperature curve in time.
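    Option (b) above can be sketched in a few lines; the anomalies and the flat “prediction” below are invented purely to show the mechanics of how cumulative means smooth out year-to-year noise:

```python
# Sketch of spot-check (b): cumulative means of a noisy annual series
# should converge toward a smooth prediction as the window grows.
# All temperature values here are invented for illustration only.
def cumulative_means(values):
    """Return the running mean over the first 2, 3, ..., n values."""
    means = []
    total = values[0]
    for i, v in enumerate(values[1:], start=2):
        total += v
        means.append(total / i)
    return means

# Hypothetical annual anomalies (deg C) for 2001-2007, and a flat
# hypothetical "predicted" mean over the same period.
observed = [0.48, 0.56, 0.55, 0.49, 0.62, 0.54, 0.57]
predicted_mean = 0.54

for n, m in zip(range(2, 8), cumulative_means(observed)):
    print(f"2001-{2000 + n}: mean = {m:.3f}, gap to prediction = {abs(m - predicted_mean):.3f}")
```

    The single-year values bounce around, but the gap between the cumulative mean and the prediction shrinks as more years are averaged in, which is the convergence test described in (c).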

    3. An honest appraisal of any time series of global surface temperatures ending in 2007 or 2008 will also note that we are currently experiencing a La Nina in the equatorial Pacific and have been since about October. It is predicted to last until May or June. This means that 2/3 of the equatorial Pacific is 1 to 3 degrees cooler than normal, or 2 to 6 degrees C cooler than during an El Nino. Because the equatorial Pacific is such a huge chunk of the Earth (check any globe to see how huge), it will almost certainly make global temperatures for 2007 and 2008 cooler than most previous years (even if all Arctic sea ice disappears). It will be interesting to see if this Cold phase of the ENSO cycle will be associated with cooler or warmer global temperature than past La Nina events.

    P.S. Please see following websites for La Nina data and predictions:

    Comment by Werner Wintels — 16 Jan 2008 @ 10:22 PM

  252. RE#249,

    “General Question about temperature Measurements:

    First: If global measured temps fall in a given year, do we believe that the atmosphere actually lost heat over that time period? Or is some variation caused by measurement error? (i.e., the heat is still around, just not where we happen to measure it).”

    One of the main variables there is the latent heat – the energy stored in the atmosphere due to the evaporation of water. A cubic meter of dry air at a given temperature will hold less energy than a cubic meter of air that is saturated with water vapor. You can sometimes feel this energy being released right before a rainstorm, as water vapor condenses to water droplets and releases heat (or as ice crystals melt to water droplets).

    That’s a route to warm the polar regions – warmer sea surface temperatures at the equator lead to more water vapor in the atmosphere, and when that vapor condenses back to ice and water in polar regions, heat is released.
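    As a rough sense of scale for the latent-heat point above (the constants are textbook approximations, and the saturation value is for roughly 20 °C):

```python
# Compare the energy a cubic meter of saturated air carries as latent
# heat with the energy needed to warm that same air by 1 K.
L_V = 2.5e6        # latent heat of vaporization of water, J/kg (approx.)
CP_AIR = 1005.0    # specific heat of dry air, J/(kg K) (approx.)
RHO_AIR = 1.2      # density of near-surface air, kg/m^3 (approx.)

def latent_heat_per_m3(vapor_density_kg_m3):
    """Latent heat (J) released if all vapor in 1 m^3 of air condenses."""
    return L_V * vapor_density_kg_m3

# Saturation vapor density at ~20 C is roughly 0.017 kg/m^3.
latent = latent_heat_per_m3(0.017)
sensible_1k = CP_AIR * RHO_AIR          # J to warm 1 m^3 of air by 1 K
print(f"latent: {latent:.0f} J/m^3, equivalent warming: {latent / sensible_1k:.0f} K")
```

    Condensing all the vapor in warm saturated air releases tens of kelvin worth of sensible heating, which is why latent heat transport matters so much for moving equatorial energy poleward.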

    Water vapor also plays a direct role in global warming, since it absorbs infrared radiation and acts as an atmospheric blanket. For more, see WATER VAPOR FEEDBACK AND GLOBAL WARMING, Held & Soden 2000

    ▪ Abstract: Water vapor is the dominant greenhouse gas, the most important gaseous source of infrared opacity in the atmosphere. As the concentrations of other greenhouse gases, particularly carbon dioxide, increase because of human activity, it is centrally important to predict how the water vapor distribution will be affected. To the extent that water vapor concentrations increase in a warmer world, the climatic effects of the other greenhouse gases will be amplified. Models of the Earth’s climate indicate that this is an important positive feedback that increases the sensitivity of surface temperatures to carbon dioxide by nearly a factor of two when considered in isolation from other feedbacks, and possibly by as much as a factor of three or more when interactions with other feedbacks are considered. Critics of this consensus have attempted to provide reasons why modeling results are overestimating the strength of this feedback.

    Regarding the temperature measurements, try Top 11 Warmest Years On Record Have All Been In Last 13 Years, Dec 2007

    “The decade of 1998-2007 is the warmest on record, according to data sources obtained by the World Meteorological Organization (WMO). The global mean surface temperature for 2007 is currently estimated at 0.41°C/0.74°F above the 1961-1990 annual average of 14.00°C/57.20°F.”

    Comment by Ike Solem — 17 Jan 2008 @ 12:38 AM

  253. Natural Variability

    La Nina Still Going in January 2008
    Taken January 14, 2008
    Includes a kml file…

    The Dawn of a New Solar Cycle
    Cycle 24
    Includes two QuickTime movies from Jan 1 to Jan 14 2008

    Putting things into context…

    Discussion of 2007 GISS global temperature analysis is posted at Solar and Southern Oscillations: what it means for us to tie second with a year (1998) that had a strong El Nino when this year we have a La Nina and a cool solar year.

    GISS 2007 Temperature Analysis

    Comment by Timothy Chase — 17 Jan 2008 @ 2:57 AM

  254. I am not a denialist, Ray, (198), but I do think the level of certainty expressed on this site is unjustifiable.

    There is always room for scepticism over scientific hypotheses. Do you remember Wigner energy? That minor oversight in the Physics of nuclear reactors (probably the best researched theory ever) might have caused the evacuation of the Lake District and the end of nuclear power station development. For that matter, do you remember Arrhenius’s views on CO2 concentrations (without feedbacks) as an explanation of the Ice ages?

    I am sure you are familiar with the spectroscopy at, for example:

    The dominance of H20, per molecule, in the infra-red absorption region is obvious, and the concentrations of CO2 and H2O are about 380 ppm and 2000 ppm respectively. I am not saying that the CO2 perturbation models are wrong – just that they need a good deal of unqualified experimental verification before they are unequivocally accepted.

    Finally, what is the most politically influential metric predicted by the models? Right or wrong it is surely the global average temperatures, and the most influential prediction is Hansen’s in 1988, because it has had time to be tested. His A, B, and C predictions diverged after year 2000, and as far as CO2 emissions are concerned we are firmly on the B line.

    From 2000, his B line predicted temperature increase (fig 3, 5 year running mean) is 0.4 degrees C in 10 years and 0.6 degrees in 20 years.

    My flat line regression ( 161, 170 ) from 1997 gives a (not significant) increase of 0.065 degrees in 10 years. The standard error on that slope is 0.055 degrees per decade, very high precisely because (251) the time interval is short and the data variable. However, we can use the standard error to ask for the odds against the real slope being as high as the Hansen prediction.

    At plus 3 standard errors (minus is equally likely), the slope would be 0.23 degrees per decade. In other words the odds are far more than 1000 to 1 against the Hansen increase being the real world figure.
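    The trend-plus-standard-error arithmetic under discussion here can be reproduced with a short ordinary-least-squares sketch; the anomaly values below are invented stand-ins, not the actual series used in the comment:

```python
# Minimal OLS trend with the standard error of the slope, the quantity
# being debated above. Data are hypothetical annual anomalies (deg C).
import math

def ols_slope_se(xs, ys):
    """Return (slope, standard error of the slope) from simple OLS."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    intercept = my - slope * mx
    resid_ss = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    se = math.sqrt(resid_ss / (n - 2) / sxx)
    return slope, se

years = list(range(1997, 2008))          # 11 annual points
anoms = [0.40, 0.57, 0.33, 0.33, 0.48,   # invented anomalies, deg C
         0.56, 0.55, 0.49, 0.62, 0.54, 0.57]
slope, se = ols_slope_se(years, anoms)
print(f"trend = {slope * 10:.3f} +/- {se * 10:.3f} deg C per decade")
```

    With only ~10 noisy points the standard error of the slope is large relative to the slope itself, which is exactly why (per comment 251) short windows cannot cleanly separate competing trend estimates.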

    Time will resolve this – over the years to 2030 the CO2 signal (over and above aerosols, solar influences and random “noise”, as Hansen says) will either appear or it won’t.

    Can we really say that we are certain of the CO2 influence today?

    Comment by Fred Staples — 17 Jan 2008 @ 7:37 AM

  255. RE: 248 Gavin Schmidt-vs. Roger Pielke

    I don’t think it’s fair to denigrate Dr. Pielke’s research on this basis. I’m not an expert on these matters by any stretch. But I happen to be reading “Storm World” at the recommendation of this site, and although it centers on the controversy surrounding the global warming-hurricane connection, it gives plenty of insight into the different scientific camps on the various sides of the global warming issue. Correct me if I’m wrong, but Dr. Pielke seems clearly identified with the empiricist school of thought whereas Dr. Schmidt falls within the climatologist/climate modelling camp. Although denialists have found it politically expedient to identify with the empiricists, in fairness their research is nonetheless valid and useful and should not immediately be discounted as ideologically driven. Is that a fair assessment?

    Comment by AZ — 17 Jan 2008 @ 9:22 AM

  256. Thank you to Walt, Jim, and Chris for your replies; they are most useful.
    Also, #251, your point and explanation are very enlightening to a mind that has not quite wrapped itself around these perplexing concepts–thank you very much.
    Ray, I would be more than happy to share my background with you-History, Political Science at USF–entered into a number of small business ventures–now into sustainable oil palm plantations in Mexico.
    I am grateful to you, indeed everyone else who has as well, for your willing help in providing a mass of research info for me to start my knowledge quest.
    —The last few comments have been especially helpful.

    Comment by Gaelan Clark — 17 Jan 2008 @ 10:08 AM

  257. #247–I just looked at your link, and one thing strikes me as peculiar–Land Use seems to have a net cooling effect. How could this possibly be true?
    You take a field, bulldoze it–taking up all of the existing vegetation–and then put a parking lot on top of it—you have stopped all of the natural evapotranspiration and are now retaining heat from the surface that you have just laid—–I know this is oversimplification, but aren’t the basic precepts correct in what I have just laid out?—If so, how does one justify the graph from the IPCC??
    —Ray–fantastic link to MIT, I have downloaded a few classes already!!!

    [Response: You've forgotten about albedo. - mike]

    Comment by Gaelan Clark — 17 Jan 2008 @ 10:43 AM

  258. RE #255 & “empiricists camp” v. “modelling camp”

    As an anthropologist teaching Expressive Culture this semester, I have to point out (as I did to my students yesterday) that no one sees or experiences the real reality, except through cultural-tinted glasses. It’s all models. Every word is a symbol. When we take the “temperature” with a thermometer, it’s a model. But my prof some 30 yrs ago assured us students that there is a real reality, after we got a bit worried about the issue. And I passed that assurance on to my students yesterday.

    I guess a good test of whether our models are serviceable models of that real reality (as it pertains to important aspects of our lives) is whether or not they can predict the future to some extent. And we’ve come a long way since crystal balls and soothsayers.

    Reality — that thing we can’t really know except through our cultural/model lenses — seems to have a way of biting back, letting us know it really is there.

    I’ll stick with the analyses of those using science-based models over the charlatan soothsayers and snake-oil salespersons.

    Comment by Lynn Vincentnathan — 17 Jan 2008 @ 11:28 AM

  259. Fred Staples asks: “Can we really say that we are certain of the CO2 influence today?”

    I’m glad you asked that question Fred, as the answer is an unequivocal “YES”. The reason is in part because the data support CO2 as the cause of the warming–both the qualitative aspects and the quantitative results. With a 10 year trend, you are likely to fall victim to noise, and your analysis is ignoring even the known sources of noise–ENSO and volcanic eruptions. Fred, I will readily concede that there are a lot of things we don’t know about climate. The effects of adding CO2 don’t fall into this class. If you believe CO2 contributes to the greenhouse effect at 280 ppmv, then there is no reason to assume its effects will cease at 560 ppmv. And if it is the greenhouse effect you are questioning, then throw out all of climate science. And given the remarkable success climate scientists have had, there’s no reason to assume the models are dramatically wrong.
    Look, I realize you’re having fun doing fits to data, but I really think your time would be better spent teaching yourself a little bit about the physics of climate–e.g. that it is not just a matter of the amount of ghg in the atmosphere, but also WHERE it is.

    Yes, there is always room for skepticism in science, but not every proposition is equally deserving of skepticism. I’m a lot more likely to question whether quarks are truly fundamental than I am to question conservation of energy. I’m a lot more likely to question whether we understand the effects of aerosols and clouds than I am to think we’re out to sea on insolation and greenhouse gases. Indiscriminate skepticism is not productive. It merely distracts you from areas where your skepticism would be more profitable.

    Comment by Ray Ladbury — 17 Jan 2008 @ 11:29 AM

  260. #248

    “Note: Industry has recently bought a few lesser known journals which try to legitimize their “global warming is false” idea…your librarian should be able to tell you which is which.”

    Are you using “bought” strictly here? Or metaphorically?

    Comment by bigcitylib — 17 Jan 2008 @ 11:34 AM

  261. Fred, Pielke quotes McKittrick as writing about
    “… 2 flat intervals interrupted by step-like changes associated with big volcanoes….”

    Any relation to your calculations, or unrelated?

    Comment by Hank Roberts — 17 Jan 2008 @ 11:47 AM

  262. Fred Staples writes:

    [[do you remember Arrhenius’s views on CO2 concentrations (without feedbacks) as an explanation of the Ice ages?]]

    Yes, and to a large extent he was right, since the solar energy distribution changes we now believe were the immediate cause of the ice ages were amplified greatly by the CO2 feedback. BTW, his model did take water vapor feedback into account.

    [[The dominance of H20, per molecule, in the infra-red absorption region is obvious, and the concentrations of CO2 and H2O are about 380 ppm and 2000 ppm respectively.]]

    Right. Nonetheless, CO2 is important in global warming and H2O is less so. The reasons for this are:

    1. Water vapor has a very shallow scale height (about 1.8 km compared to 7 km for the troposphere in general), so it peters out quickly with altitude. CO2, on the other hand, is well-mixed.

    2. An average molecule of water vapor stays in the air about nine days. An average molecule of carbon dioxide stays in the air 200 years. We can’t affect water vapor very much whatever we do; it rains out or evaporates up too quickly. That’s why CO2 is treated as a forcing in climate models and H2O is treated as a feedback.
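    Point 1 can be made quantitative with a simple exponential-profile sketch, using the same 1.8 km and 7 km scale heights quoted above (the exponential falloff is an idealization):

```python
# Fraction of an exponentially distributed atmospheric column that sits
# below a given altitude, for two scale heights: water vapor (~1.8 km)
# vs a well-mixed gas following the bulk troposphere (~7 km).
import math

def fraction_below(z_km, scale_height_km):
    """Fraction of an exponentially distributed column found below z_km."""
    return 1.0 - math.exp(-z_km / scale_height_km)

for z in (2, 5, 10):
    h2o = fraction_below(z, 1.8)   # water vapor scale height
    co2 = fraction_below(z, 7.0)   # well-mixed, tropospheric scale height
    print(f"below {z:2d} km: H2O {h2o:.0%}, well-mixed gas {co2:.0%}")
```

    Most of the water vapor column is confined to the lowest few kilometers, while a well-mixed gas like CO2 still has a substantial fraction at the cold, high altitudes where radiation escapes to space.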

    [[ I am not saying that the CO2 perturbation models are wrong – just that they need a good deal of unqualified experimental verification before they are unequivocally accepted.]]

    Lab work by John Tyndall proved that CO2 was a greenhouse gas in 1859. We had a good idea of the line structure by the 1950s and have now mapped thousands of individual CO2 lines, so we have a good idea how it affects radiative transfer.

    [[over the years to 2030 the CO2 signal (over and above aerosols, solar influences, random “noise” and aerosols, as Hansen says) will either appear or it won’t.]]

    It already had by 2001 or so.

    [[Can we really say that we are certain of the CO2 influence today?]]

    Yeah, pretty much.

    Comment by Barton Paul Levenson — 17 Jan 2008 @ 11:49 AM

  263. Empiricists, modelers, and croupiers:

    Comment by Hank Roberts — 17 Jan 2008 @ 11:50 AM

  264. Mike–have not forgotten–never considered–still learning, and thank you for that.
    But, I still don’t get it, because what I am inferring is that the urban areas are heating up because of land-use change from arable land to concrete jungle. Another peculiarity from the surface temperature reconstructions is that a number of the weather stations were once in open vegetated lands, and are now surrounded by parking lots, air-conditioning vents, or airport runways, etc.–using these temperature readings would surely show an increase that is neither natural nor “model” AGW—while it certainly is anthropogenic, it should not be used to show that CO2 is causing the temp spikes.
    Please, I know this is sophomoric to you, but in my circle of friends we discuss this quite a bit—not to any scientific degree, but discuss nonetheless, and it helps to know what the answers are, in “sophomoric” terms.

    Comment by Gaelan Clark — 17 Jan 2008 @ 11:52 AM

  265. AZ (255),

    Why do you think that modelers are not empiricists? You realize, don’t you, that all science that uses mathematics is modeling? F=ma is a model. F=Gm1m2/r^2 is a model, useful but not “true.”

    I’m a programmer who has done some modeling/simulation. I have no direct knowledge of what is in climate models, but after hanging out here for a while, my guess is that they involve hundreds of differential equations, a lot of which are involved in circular relationships. If that is the case, how does a non-modeling “empiricist” contribute? Such systems of equations are impossible to solve analytically.

    How does anyone not working with a model have anything to say?

    Comment by Tim McDermott — 17 Jan 2008 @ 11:59 AM

  266. Gaelan, you can read about this on this site–among other places, here

    Basically, though, yes development around a station will affect its temperature. However, there are lots of stations nearby. As a result, if you see one station warming when the others are not, you can not only see that it may be in error, but tell roughly how much. The algorithms all look at spatial and temporal averages and filtering.
    You don’t want to throw out urban stations, as they still provide data, and give you info about urban heat island effects as well.
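    Ray’s neighbor-comparison point can be sketched as a toy calculation; the station series below are invented, and real homogenization algorithms are far more careful than this:

```python
# Toy version of the neighbor-comparison idea: estimate an urban
# station's bias as the mean offset of its readings from the average
# of nearby rural stations, then subtract it. All data are invented.
def estimate_bias(station, neighbors):
    """Mean difference between a station's readings and the neighbor average."""
    diffs = []
    for i, v in enumerate(station):
        neighbor_mean = sum(n[i] for n in neighbors) / len(neighbors)
        diffs.append(v - neighbor_mean)
    return sum(diffs) / len(diffs)

urban = [0.52, 0.58, 0.61, 0.66]           # runs consistently warm
rural = [[0.30, 0.35, 0.38, 0.42],
         [0.28, 0.33, 0.40, 0.44]]
bias = estimate_bias(urban, rural)
corrected = [v - bias for v in urban]
print(f"estimated bias: {bias:.3f} deg C; corrected series: {corrected}")
```

    Note that after the constant offset is removed, the urban station still contributes the same warming trend as its rural neighbors, which is why such stations are kept rather than discarded.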

    Comment by Ray Ladbury — 17 Jan 2008 @ 12:28 PM

  267. Gaelan,
    There’s another factor at play: agriculture and irrigation. See Irrigation may not cool the globe in the future, LLNL:

    The team, which included Bonfils and David Lobell at Livermore Lab, first studied the net impact of widespread irrigation on local and regional climate in California, the top irrigating state in the United States (3.3 million hectares). In highly irrigated regions of the San Joaquin Valley, daytime temperatures relative to low irrigated areas have cooled by 1.8 degrees – 3.2 degrees C since the introduction of irrigation practice in 1887.

    “In comparison, there was no clear effect of irrigation on temperatures over the 1980-2000 period when there was no net growth of irrigation,” Lobell said.

    Irrigation cools the surface of the earth by increasing the amount of energy used to evaporate water rather than heat the land. The more irrigated the land, the more intense the effect. “It was quite surprising how well we could distinguish a cooling trend that incrementally increases with the amount of irrigation,” Bonfils said.

    This study also shows that the rapid summer nighttime warming, well observed in Central California since 1915, cannot be explained by irrigation expansion, as outside research has implied. “Our results show that the expansion of irrigation has almost no effect on minimum temperatures and that irrigation cannot be blamed for this rapid warming,” Bonfils said.

    “An increase in greenhouse gases and urbanization would best explain this trend, which exceeds what is possible from natural climate variability alone,” Lobell said.

    The most recent IPCC report estimated that about 1/5 of the observed warming was due to deforestation and other land-use changes, although such estimates are very difficult to do. The problem is that the biosphere has multiple effects on climate – for example, tropical forests may have a greater role in moderating the global climate than northern temperate forests do. See Planting temperate forests no solution to global warming.

    Comment by Ike Solem — 17 Jan 2008 @ 12:51 PM

  268. re 266. Really Ray,
    ” there are lots of stations nearby”

    How many stations are there in Brazil. How many urban and how many rural. And compare that to the number of stations in the US. Just for grins.

    Comment by steven mosher — 17 Jan 2008 @ 12:58 PM

  269. Steven, It’s called “GLOBAL CLIMATE CHANGE” for a reason. Noise in datasets really isn’t a new or impossible problem.

    Comment by Ray Ladbury — 17 Jan 2008 @ 1:30 PM

  270. Re: 258, 265 modelers vs. empiricists

    I’m not trying to make some big philosophical point here. It’s a distinction made by Mooney in “Storm World” and one that I’ve personally encountered in my research (in hydrological modeling circles). There are just some scientists (some “empiricists” but not all) who simply bristle at any sort of computer modeling, or at least are innately “skeptical” of computer modeling results.

    Comment by AZ — 17 Jan 2008 @ 1:30 PM

  271. #257: Gaelan, the majority of land-use change is in agriculture, so I imagine the cooling albedo change comes from cutting down dark forests and replacing them with amber waves of grain. Also, desertification, where it happens.

    But yes, where you pave parking lots and such, you do expect land use to cause warming.

    (trees and other plants are a little odd, in that they have opposing local effects: dark leaves absorb heat, but transpiration transforms heat into latent heat which effectively cools the immediate surroundings but doesn’t change the net heat in the atmosphere. I think.)

    Comment by Marcus — 17 Jan 2008 @ 1:31 PM

  272. Re Gaelan @ 264: Gaelan, as Ray pointed out, the urban heat island effect has been discussed here repeatedly, and as has been stated here numerous times, known noisy or dirty data is still useful data, IF it’s KNOWN to be noisy or dirty. It is then easy to detect and to filter out the induced bias during processing of the data. Once the bias is identified and removed, the same anomalous trends shown by neighboring rural stations can be detected in the urban data. The biased urban stations are kept precisely because they have long-term continuous data records. To simply throw them out removes a valid long-term data set. To replace them with new, unbiased sites means that you have no calibrated data from the new sites for many, many years. There is no fatal problem to begin with, and the proposed solutions are definitely worse.

    Comment by Jim Eager — 17 Jan 2008 @ 1:47 PM

  273. I replied to your comment before, Hank, but it was – what is the word I want – moderated.

    [edit. once we'll allow, twice gets you banned. last warning]

    As for the Physics, Ray and Barton, we have debated ghg’s before, and I look forward to doing so again on a suitable post. I certainly do not claim to have a coherent view of climate science, but I would be interested in your comments on the following assertions:

    I do not believe that we are dealing with anything other than the resonant absorption of electro-magnetic energy in the infra-red regions of the spectrum. Electron shell excitation states leading to quantum radiation will not be involved. Tri-atomic molecules will absorb this radiation as kinetic energy much more efficiently than di-atomic molecules, (and promptly dissipate their temperature increase within the atmosphere) but that does not mean that N2 and O2 will not warm directly at all.

    I find the CO2 radiation saturation argument originally demonstrated by Angstrom (and the resulting logarithmic relationship to concentration) convincing. Whatever CO2 and H2O do to warm the surface, they will do it at low altitude and low concentrations.
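    The logarithmic concentration dependence mentioned here is commonly written via the simplified fit ΔF ≈ 5.35 ln(C/C0) W/m² (Myhre et al. 1998); a minimal sketch of what it implies:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified logarithmic fit for CO2 radiative forcing (W/m^2),
    using the widely cited 5.35 coefficient (Myhre et al. 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Any doubling adds the same ~3.7 W/m^2: diminishing returns per added
# molecule, but no hard "saturation" ceiling.
print(co2_forcing(560.0))                       # doubling from 280 ppm
print(co2_forcing(760.0) - co2_forcing(380.0))  # doubling from 380 ppm
```

    Note the distinction that comes up later in the thread: under this fit the forcing per added molecule shrinks, but it never reaches zero.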

    I agree with everyone else that the lapse rate is crucial, and that it can be derived from ideal gas equations. Without it, the only plausible explanation for AGW, “higher is colder”, would not be tenable.

    Most important, I agree with Wood’s 1909 comments on the low importance of the back-radiative effects in comparison with straightforward thermal insulation. There must be a back-radiative effect on surface warming, but if it is not significant in a glass (as opposed to a rock-salt) greenhouse, why is it so dominant in the atmosphere?

    Have you read S D Silverstein’s 1976 paper based on Wood’s experiments, which I found excellent value for 10 dollars?

    Comment by Fred Staples — 17 Jan 2008 @ 1:52 PM

  274. AZ, the power of science is that it combines empiricism with modeling–models are constrained by data and in turn tell you what data are important. The reality of anthropogenic causation of climate change does not in any way depend on complex computer modeling. It is known physics, and whether you do it with a computer or pen and paper should not matter. Like it or not, models are how we understand the world. They are not reality, but they distill the important elements of reality and make them understandable to us. If they are objecting to models, they are objecting to science.

    Comment by Ray Ladbury — 17 Jan 2008 @ 1:59 PM

  275. Re AZ @ 270: “There are just some scientists (some “empiricists” but not all), who simply bristle at any sort of computer modeling or at least are innately “skeptical” of computer modeling results.”

    And there are some modelers who dismiss empiricists as “stamp collectors” and “curve fitters,” also from Chris’ book. Such dismissals from either camp are not helpful. Developing models without comparing the model’s functions and results to empirical data on observed processes is pointless, just as is graphing empirical data and looking for fits and correlations without developing an understanding of the underlying physical system, aka a model of what is going on and how it works.

    Comment by Jim Eager — 17 Jan 2008 @ 2:00 PM

  276. There is certainly some angst out there when comparing one global temperature trend with another. All these measurement methods must be put in perspective with actual world-class benchmarks. When was the last time a ship from north-central Russia crossed the sea in a straight line to Alaska?
    Go beyond the temporal limitations of present satellite data, and it’s clear that at no other time in history was this ocean so open. The correct temperature trend chart must reflect a direct decrease in sea ice volume (not only surface area) as worldwide temperature steadily increases. If a chart shows no recent Northern Hemisphere temperature increase, it is likely not representing the entire area.

    Comment by wayne davidson — 17 Jan 2008 @ 2:15 PM

  277. Gaelan: If the urban heat island effect was seriously affecting urban temperature stations, wouldn’t they show a significant difference from rural stations?

    [Response: Actually they do, at least in the US. That's why the GISTEMP analysis corrects the urban trends to match the rural ones. The issue is not the existence of UHI (this is acknowledged by everyone), but its remaining influence on the large scale averages. - gavin]

    Comment by Barton Paul Levenson — 17 Jan 2008 @ 2:22 PM

  278. > empiricists

    Discussed here previously; this will help in understanding what the author was talking about, I think:

    RealClimate » Storm World: A Review
    Note: Chris Mooney has provided us with an early copy of Storm World and we’re reviewing …

    “… Mooney also traces their respective work back to two different historical schools of thought in the atmospheric science community. On one side are the data-driven empiricists, such as Redfield, Loomis, and Riehl, and on the other side the theorists such as Espy, Ferrel and Charney. Gray naturally follows in the tradition of the first group (his Ph.D. adviser was Riehl, who is sometimes credited as the father of the field of tropical meteorology). Emanuel, a student of Charney, follows in the tradition of the great theorists in atmospheric science. Of course it’s not quite that simple (and Mooney acknowledges as much)….”

    I’d also recommend revisiting Spencer Weart’s AIP History (first link under Science, right side of web page) for a reminder of how meteorology was done up until the very recent development of large computers, and how much that has changed what it’s possible to learn. And how incredibly _fast_ it’s changed.

    I did my first statistics coursework using a Friden mechanical “Automatic Calculator” — it’s the very first piece of equipment pictured here:

    My college had one computer — an IBM 1620.

    How should we be evaluating the climate work done using tools like those, around 1970?

    How should we be evaluating the climate work done around 1980? 1990? 2000?

    Ask a climatologist, not a political scientist.

    Pielke writes above in #54 replying to Gavin:

    “You write ‘Once you include an adjustment for the too-large forcings’ — sorry but in conducting a verification you are not allowed to go back and change inputs/assumptions that were made at the time based on what you learn afterwards, that is cheating. Predicting what happened after 1990 from the perspective of 2007 is easy;-)”

    This is nonsense. It assumes no progress in the models or technology, and the biggest question for political decisions is exactly whether the models are improving.

    The straightforward test is to do exactly what Pielke calls “cheating” — take the data as it was then, apply today’s model.

    It’s not “cheating” to take the data used in the 1980s, and run it with today’s models.

    Claiming the 1980s work wasn’t reliable by improperly comparing short-term to long-term work, then calling that “validation” and suggesting that contemporary work can’t be better, is serving fudge with fudge frosting, it seems to me.

    But of course I’m not a climatologist, nor is Pielke, nor is Tierney. So I’ll listen to the climatologists. I know the political need to discount their warnings. I don’t trust that at all.

    Empiricists use statistics. Statisticians use computation and models, nowadays.

    Comment by Hank Roberts — 17 Jan 2008 @ 2:27 PM

  279. #270, maybe that’s because even though the weather models have become pretty good in describing current weather conditions and making short-term future predictions, they still are not highly accurate more than a week into the future. So I imagine those concerned with warning people about hurricanes would be reluctant to say that hurricane brewing in the Atlantic is definitely not going to hit us over here in S. Texas.

    OTOH, climate (which is weather at the macro-statistics-level) is more stable, so that they can even print atlases describing the regional climates that in the past (before GW) held up for decades or more.

    So if the climate changes, even a little, that’s a really big thing compared to daily weather changes. And it’s so complex, with so many variables that a model that includes these major variables (some causing the climate to warm, some causing it to cool, etc) is much better than simply looking at 2 variables (say, GHGs & T). It helps to explain those ups and downs better (see above the volcano effects and el nino effects in the graph).

    What the modellers do (I think) is use all the relevant empirical data they can, checking that the models not only match fairly closely the climate stats that have actually happened in the past but are also based on well-established principles of science (like the laws of thermodynamics), then run these models forward into the future to see what might happen. I believe the models are frequently tested against actual empirical data as it becomes available each year. That’s my impression.

    So future modelling is an extrapolation from past empirical data and general scientific principles. I can understand that some scientists, due to the scientific cautionary mode of avoiding false positives, might feel funny about saying anything at all about the future. But people like me want to know. Hindsight is always 20/20 & safe.

    Aside from using some type of model, I can’t think of how we’d talk about the future.

    An analogy for this seeming halt or slowing of the warming over a few years might be earthquake dynamics. Just because Calif has not had an earthquake for a few years does not mean earthquakes have ended. We know (through hard-working scientists who’ve told us) that there are tectonic activities going on underfoot and pressures moving plates in different directions, but that they sort of snag up for a long time (? due to earth friction ?), then burst forth in spurts (which are the quakes).

    So we have a pretty good idea that GHGs have kept the earth a lot warmer than it would have been without them. And we’ve gotten preliminary data that our human additions of GHGs have warmed the earth a bit, as expected, as predicted by the models, once important variables, like the aerosol and albedo effects, were included.

    How else would a person make climate projections into the future, without the use of mathematical equations based on past empirical observations and general scientific principles, and without taking all the variables into account (i.e., without our current climate models)? Educated insight? Crystal ball? Simply saying such-and-such?

    Saying GW has stopped is making a forecast generated from some model or other, just as saying GW is continuing is based on models. We need to make those skeptics’ models more transparent and compare them to the climate science models. I wonder if they include as many variables and detailed empirical observations as the models used by climate scientists do.

    Comment by Lynn Vincentnathan — 17 Jan 2008 @ 3:21 PM

  280. So, Fred, have you read anything from the current millennium? How about from the last half of the last century? And wherever did you get the idea that logarithmic dependence leads to saturation?
    You claim that greenhouse gasses act only near the surface. Pray, what magic stops them from acting high in the atmosphere? How does a CO2 molecule know where it is in the atmosphere, and how does it know where the photon came from? Is it your contention that excited molecules in the mid-troposphere never decay radiatively? That there are no photons in the CO2 band once you reach 10 km or so in altitude? As I have said, you really owe it to yourself to learn the physics.

    Comment by Ray Ladbury — 17 Jan 2008 @ 5:39 PM

  281. The ‘thirty meters from the surface’ idea keeps popping up and I’d always wondered where it came from. Eli notes this:

    Comment by Hank Roberts — 17 Jan 2008 @ 6:01 PM

  282. #273. Fred, I found a useful starting point for finding out what was wrong. Go into the literature from there.

    Comment by Phil Scadden — 17 Jan 2008 @ 6:03 PM

  283. gavin> That’s why the GISTEMP analysis corrects the urban trends to match the rural ones. The issue is not the existence of UHI (this is acknowledged by everyone), but it’s remaining influence on the large scale averages.

    I appreciate how that works in the U.S. and a few other places with a high density of measurement sites, but how does the UHI correction work where only urban sites exist?

    Comment by Steve Reynolds — 17 Jan 2008 @ 6:13 PM

  284. OT, but many here will be very interested to see this abstract (posted just now by solar physicist Leif Svalgaard over at CA):

    “This is an abstract for an upcoming meeting:

    “‘SORCE’s Past, Present, and Future Role in Earth Science Research, Science Meeting 2008
    La Posada de Santa Fe Resort & Spa, Santa Fe, New Mexico, February 5-7, 2008 :

    “‘Fire vs Fire: Do Volcanoes or Solar Variability Contribute More to Past Climate Change?
    Thomas Crowley [] and Gabriele Hegerl, School of Geosciences, The University of Edinburgh, Scotland.

    “‘Geologists in particular are quick to ascribe past centennial scale climate changes to solar variability. But successively refined records of volcanism from ice core studies suggest that pulses of volcanism explain more decadal temperature variance than can be linearly linked to cosmogenic isotope variations. Formal statistical detection and attribution studies arrive at the same conclusion. However, there still seems to be some (literally) wiggle room for perhaps a small contribution from solar. An example will be given from a 2000 year northern hemisphere temperature reconstruction that suggests (at least at the time of writing this abstract) that there may be a moderately significant solar linkage at ~200 year period.

    “‘Given time, a somewhat disconcerting apparent correlation between pulses of volcanism with the Dalton, Maunder, and Sporer Minima will be discussed. Given the unlikely physically significant correlations between the two, the possibility will be explored that cosmogenic records may have an uncorrected overprint from volcanically driven climate change. Provisional summary judgement: solar may be at best marginally significant on the multidecadal to centennial time scale.’

    “‘My [Leif's] comment: 10Be is deposited by adhering to stratospheric aerosols which then drift down and rain out. The amount of aerosols in the stratosphere is controlled mainly by volcanic eruptions. There were such strong eruptions in 1693 (Hekla on Iceland, having large effect on nearby Greenland), 1766 (Hekla), 1809 (see Dai JGR 96, 1991), 1814 (Mayon), 1815 (Tambora), 1883 (Krakatoa).”

    My comment is to wonder why this is coming up *now*. Dozens if not hundreds of researchers, including an RC co-author or two, must have looked at this exact issue and not found anything. Presumably Crowley and Hegerl have some new angle, but what? Of course these results haven’t even been published yet and will require confirmation, but C+H are very highly respected researchers; I wouldn’t be surprised if the abstract alone (has anything else been circulated?) is enough to trigger a bit of a scramble to re-examine this issue.

    But to the extent this creates consternation among the paleoclimatologists, just imagine how the solarphiles will react. :)

    Finally, even if Mike is kicking himself for not spotting this, it appears he may have cause to be pleased since volcanoes with little or no solar would seem to point toward a flattish global reconstruction.

    [Response: The discussion in the abstract is interesting, but I don't see where it has any relevance to resolving any of the key outstanding issues, for several reasons. Solar reconstructions back to AD 1610 are based on sunspot data, not the cosmogenic isotopes. Solar reconstructions such as those developed by Crowley and others simply splice the longer-term Be10 or C14 records onto the sunspot-based estimates to extend the estimates of solar forcing back in time prior to the early 17th century. So the amplitude scale of the solar forcing is set by the calibration of sunspot data against modern satellite irradiance measurements, not by the isotope data. Note that the primary discrepancies between various proxy-based Northern Hemisphere temperature reconstructions (see e.g. the wikipedia comparison) are actually during the 17th century, when solar reconstructions are independent of the isotope data anyway. Finally, in modeling studies, regardless of which of the various alternative longer-term solar reconstructions are used (see e.g. the comparison in Jones and Mann (2004)), solar forcing is always secondary relative to volcanic forcing in terms of its contribution to the long-term temperature trends. In all simulations, the main reason for the moderate observed hemispheric "Medieval Warm Period" is low explosive volcanic activity, and the main reason for the moderate observed hemispheric "Little Ice Age" is high explosive volcanic activity. Solar forcing is simply much smaller than volcanic forcing even when averaged on the centennial timescales of interest. This is even more true in the most recent work: in much of the current modeling work, solar irradiance estimates have been down-sized even further relative to the earlier estimates (e.g. Lean et al '95) used in earlier simulations. This is because the larger-amplitude previous estimates relied on an additional low-frequency calibration based on Baliunas' work on 'sun-like' stars, which is now believed to be invalid. So the Crowley and Hegerl abstract is sort of interesting, and they may well be right--but it doesn't really matter much either way, at least not in this context. Sorry :( - mike]

    Comment by Steve Bloom — 17 Jan 2008 @ 7:50 PM


  285. Someone please review this partial critique of the IPCC report. In particular, this statement:

    “About half the energy at the surface leaves through
    infrared radiation, and the other half is removed by the fluid dynamics of the atmosphere: convection,
    turbulence, wind, evaporation, and so forth.”

    This statement seems inconsequential, and misleading, since the globe can only emit energy by radiation or particle emission.

    [Response: The statement is approximately true, but like many things emanating from that source, misleading. It refers to the energy balance of the surface of the planet, which is indeed affected by turbulent fluid heat transfer as well as radiation. These transfers are all properly accounted for in general circulation models, so there's no sense in which this statement can be considered a "criticism" of the IPCC. As you note, the only way heat can leave the planet is through radiation (actually particle transfers are insignificant as an energy loss mechanism). It is the top of atmosphere radiation budget, rather than the surface radiation budget, that is in fact the prime determinant of climate. --raypierre]

    Comment by Matt — 17 Jan 2008 @ 8:41 PM

  286. I wonder if anyone has done the numbers regarding the total amount of hydrocarbons liberated from the ground vs the amount of carbon now in the atmosphere. All the voices talking about bio-fuels that permit carbon recycling don’t seem to touch the point that hydrocarbon mining has brought into the biosphere carbon that will stay there (it seems) until some geological time-scale process returns it to the earth. Until that occurs, the greenhouse effect gets amplified by all the carbon that’s been liberated in the past several hundred years. Temperature profiles may be only one indicator for all that extra carbon now in circulation as there are carbon repositories besides the atmosphere. Are any others showing shifts similar to those appearing on global temperature charts (eg, the acidity of waters both fresh and marine)?

    Comment by Richard Tew — 17 Jan 2008 @ 9:00 PM

  287. Gavin,
    In the Hansen ’88 projections post, you provided a data file with the A, B, & C forcings used. I’ve plotted those and don’t see the effect of ‘scenario volcanic eruptions’. I gather from the text that you were providing the non-volcanic bits.

    Since everyone is discussing these elsewhere… do you have the forcings as actually run? (That is, assuming I am interpreting what’s in those files correctly.)

    Thanks in advance.

    Comment by lucia — 17 Jan 2008 @ 10:18 PM

  288. Re #283 response: Thanks for that thorough answer, Mike.

    Reading over those links and the relevant portions of AR4 WG1 Chapter 6, I see that I wasn’t quite up with the times. I had known about the general trend toward reducing estimates of solar forcing, but I hadn’t known the part about the pre-1600 solar forcing estimates being so dependent on the post-1600 sunspot counts or that the MWP was already thought to be explicable mainly by reduced volcanic explosiveness. That would certainly explain Leif’s interest, since as I’m sure you know his latest proposed solar forcing reconstruction for that period implies that it’s small indeed (as in about the same amplitude as the 11-year solar cycle). It does appear that C+H’s work is nice confirmation for this since it seems to be a direct argument that the volcanoes can explain everything. Summing up, it sounds as if the solar irradiance trends are in the process of being demoted from small to nearly insignificant.

    IIRC it’s been said here recently that solar forcing is still thought to explain a good part of the early 20th century warming, but it sounds as if a further downward adjustment of the solar component isn’t much of an issue.

    Re Baliunas, I hadn’t known that her work had ever had that much influence. It does help explain a lot of her subsequent behavior.

    BTW, I’m not so disappointed if it’s only the solarphiles who are consternated, so no apologies necessary! ;)

    Comment by Steve Bloom — 17 Jan 2008 @ 10:43 PM

  289. Mr. McKitrick (see #284) is an economist. My confidence in his judgment of the risks from climate change is low. In all his bobbing and weaving and posturing about the risk, he never even hints that some climate changes might be both baleful and dreadfully hard to reverse if we wait too long to start. However, after a quick scan of the linked document, I think his thrust in structuring economic incentives to control emissions makes a lot of sense. I wish he would put aside his climate science opinions and participate vigorously in the debate about how to reduce emissions. A waste of expertise. His point that mechanisms should be kept flexible over the coming decades to react to new data seems obvious to me, yet I see it made almost nowhere. Instead we encourage our politicians to throw around targets for 2050 which may easily be off in either direction, and are discouraging to contemplate from where we sit today. Ironically, I think the desirability, and obvious possibility, of reacting flexibly over the decades is itself one of the strongest arguments for starting now. If McKitrick, or Lindzen, or Fred Singer (please stifle those eye rolls, I said IF) turns out to be pretty close to right, we’ll know soon enough to correct course economically, with no long-term damage done, contrary to the loud howls we hear from defenders of BAU. But if we sit around and wait, as McKitrick would have us do, and some of Hansen’s more worrisome projections are right, we’re truly screwed.

    Comment by Ric Merritt — 18 Jan 2008 @ 12:11 AM

  290. Volcanoes: Are there any publications which mention or discuss the impact (if any) of the Kilauea eruption (ongoing for 20+ years)? I am curious if this constant low-altitude injection of sulfate aerosols has any regional climatic impact (I’ve been living here for 20+ years … I just take it for granted).

    Great stuff, great discussions, great site!

    Comment by Jon Gradie — 18 Jan 2008 @ 12:50 AM

  291. Richard — yes.

    The “Start Here” link at the top of the page will help you with your questions. So will the first link under Science in the right column, to the AIP History website.

    Comment by Hank Roberts — 18 Jan 2008 @ 12:53 AM

  292. “Sea level rise, on the other hand, appears to be under-estimated by the models for reasons that are as yet unclear.”

    When are you guys going to issue a revised sea level forecast? It seems to me you have vastly underestimated the true position and that we can already see the beginnings of an exponential increase. Hansen’s forecast of ~5m by the end of the century, while brutal, looks more realistic. There still isn’t any real sense of urgency about the problem, or the political will to face up to it. People see your upper limit forecast of 60cm, shrug, and think it’s not too bad.

    Comment by Bill Tarver — 18 Jan 2008 @ 4:49 AM

  293. Re 251, 253

    If looking at ENSO influence on global temperature, keep in mind the roughly half year time lag. That means that 2007 is mainly influenced by a moderate El Nino (corresponding to an influence of about plus 0.05 to 0.07 K of global temperature), while the influence of the current La Nina will fully hit 2008 (last six ENSO months of 2007 and first six of 2008), corresponding to about minus 0.03 to 0.07 K to global temperature.

    Comparing 2007 to 1998: the ENSO influence is positive for both, with about plus 0.05 to 0.07 K and plus 0.20 to 0.25 K, respectively.
    For 2008 besides La Nina there is a minimum of the solar cycle, where we are not quite sure of the magnitude of the effect and if the roughly 10y quasi-oscillation we find in the global temperature data is really due to solar variability or just an internal oscillation of the climate system. The influence of this oscillation, be it due to the solar cycle or not, is about 0.1 K for the full cycle, i.e. a departure of about 0.05K from the average at minimum stage. Together this would give roughly minus 0.1K departure from the trend line in 2008. In addition, there will be a considerable contribution of stochastic natural variability. Nevertheless I think it is quite likely that global-warming-has-stopped claims will not recede this year. We’ll probably have to wait for 2009 and 2010 without El Nino and increasing solar radiation (or whatever the cycle is due to). In agreement with Smith et al., BTW.
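    Urs’s arithmetic can be tallied explicitly; a back-of-envelope sketch using only the midpoints of the ranges he states above:

```python
# Midpoints of the ranges given in the comment (all in K):
la_nina_2008 = -0.05   # from the stated -0.03 to -0.07 K La Nina range
solar_min    = -0.05   # half the ~0.1 K full-cycle amplitude, at minimum

departure_2008 = la_nina_2008 + solar_min
print(departure_2008)  # roughly -0.1 K below the trend line, as stated

# 2007 vs 1998 ENSO influence, again midpoints of the stated ranges:
enso_2007, enso_1998 = 0.06, 0.225
print(enso_1998 - enso_2007)  # most of the apparent 2007 "shortfall"
```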

    Comment by Urs Neu — 18 Jan 2008 @ 6:10 AM

  294. Fred Staples writes:

    [[I find the CO2 radiation saturation argument originally demonstrated by Angstrom (and the resulting logn relationship to concentration) convincing. Whatever CO2 and H2O do to warm the surface, they will do it at low altitude and low concentrations.]]

    That argument was shown to be wrong back in the 1940s. Yes, the infrared radiation from the surface is absorbed fairly low down. No, that doesn’t mean that absorption higher up doesn’t count. If the upper layers absorb more from the lower layers, they will heat up. And radiate more. And heat up the lower levels, which in turn will heat up the ground. Absorption at all levels counts, even if the lowest level absorbs 100% of the radiation from the ground.
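    Barton’s layering argument is the textbook idealized N-layer greenhouse: if each layer absorbs all the infrared reaching it, the surface still warms further with every layer added above. A toy sketch (illustrative only, not a real radiative-transfer calculation):

```python
def surface_temp(n_layers, t_eff=255.0):
    """Idealized N-layer greenhouse: with N fully absorbing layers in
    radiative equilibrium, the surface radiates (N+1) times the
    top-of-atmosphere flux, so T_surface = T_eff * (N+1)**0.25."""
    return t_eff * (n_layers + 1) ** 0.25

for n in range(3):
    print(n, round(surface_temp(n), 1))
# Each extra absorbing layer warms the surface further, even though the
# lowest layer already absorbs 100% of the surface radiation.
```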

    Comment by Barton Paul Levenson — 18 Jan 2008 @ 6:35 AM

  295. Very well (273), without the joke, Hank (197/259), there is no connection. Presumably he (McKitrick) can see what anyone who looks at the satellite data can see.

    The log rule arises from saturation, Ray (280), not the other way around. Each additional CO2 molecule added to the atmosphere finds less surface radiation to absorb.

    Reactor physics, Ray (280), depends on twentieth-century physics, and it is very much on the particle side of the wave/particle duality. Nuclear fission requires the absorption of neutrons to break down the strong nuclear force.

    Does climate science have much to do with this? The great classical physicists, Maxwell, Stefan, Boltzmann et al. (on whose work much of climate science depends), knew nothing about atomic structure.

    How, Ray, does a CO2 molecule absorb surface heat – electron shell excitation (quantum physics), or increased inter-atomic vibration (kinetic energy)? And how is the increased energy dissipated (not lost)?

    At the tropopause we have a dry N2/O2 atmosphere, with CO2 at 380 ppm (0.04%), at a temperature of about 255 K. Beyond it, eventually, is space at 3 K, close to absolute zero.

    How is that atmospheric energy lost to space?

    Simple radiative models (using Stefan-Boltzmann) are easy to construct. Complex models, I am sure, are incredibly difficult. Their validity can be judged only against their predictions, and given the inherent variability and short time scales this can only be done statistically. Hence the “fun” (259) of “line fitting” and statistical testing.
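    A simple Stefan-Boltzmann model of the kind Fred means really does fit in a few lines; for instance, the standard zero-dimensional energy balance that yields the ~255 K figure quoted above (a sketch with round-number inputs):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # total solar irradiance, W m^-2
ALBEDO = 0.3       # planetary albedo

# Balance absorbed sunlight against emitted thermal radiation:
#   S0 * (1 - albedo) / 4 = SIGMA * T^4
t_eff = (S0 * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
print(round(t_eff, 1))  # ~255 K, the effective emission temperature
```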

    Incidentally, whether or not two data sets (surface and troposphere temperatures, for example) are statistically different has nothing to do with error bars. We have to calculate the probability that they are not part of the same set of data.
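    Fred’s closing point is exactly what a two-sample test formalizes: not whether error bars overlap, but how likely it is that the two series share one underlying population. A minimal sketch of Welch’s t statistic, with made-up illustrative numbers (not real temperature data):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic: large |t| means the two series
    are unlikely to come from the same population."""
    se2 = variance(a) / len(a) + variance(b) / len(b)
    return (mean(a) - mean(b)) / se2 ** 0.5

surface = [0.40, 0.55, 0.35, 0.48, 0.52, 0.44]       # illustrative only
troposphere = [0.30, 0.42, 0.28, 0.39, 0.41, 0.33]   # illustrative only
print(welch_t(surface, troposphere))
```

    In practice one would compare the statistic against a t distribution (or use a ready-made routine) to get the probability Fred refers to.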

    Comment by Fred Staples — 18 Jan 2008 @ 7:54 AM

  296. Re Richard Tew #285

    Ocean acidification

    Comment by P. Lewis — 18 Jan 2008 @ 8:22 AM

  297. Fred, if you have read all the way through Weart’s book and the ‘What Angstrom Didn’t Know’ comment threads, you know as much as we your fellow readers here know. For those of us who don’t do quantum math, we have to rely on those who do. (So does all modern electronics; it works whether we believe in it or not.)

    All the radiation physics work is in that area. And it works.

    But you are asking the same questions Weart answers, and that people worked very hard to answer, in those two comment threads, as though you were unaware of them. Please reread them, or read them.

    Please read; it will save people vast amounts of retyping, and this is off topic here anyway.

    Comment by Hank Roberts — 18 Jan 2008 @ 11:59 AM

  298. Fred, this link has interesting elements on some of what you’re asking (how some heat is released to space from the stratosphere):
    I found it more to the point than the RC discussion in this subject, although it is probably oversimplified for the Gavin types.
    Clough and Iacono have a lot of good work on that subject; here are other names that you can look up to find info on stratospheric radiative processes: Santer, Ramaswamy, Schwarzkopf, Cess. All that info is out there; if you use the right words to search you don’t even need Scholar, you’ll find the articles. There is no reason to keep asking others to spoon-feed you, as Hank has justly pointed out. Furthermore, whatever the quantum mechanics may be, there is an abundance of observational data for the middle and high troposphere, which you can find with CERES, ARM and ERBE (pick whichever is most relevant).

    Comment by Philippe Chantreau — 18 Jan 2008 @ 1:01 PM

  299. Aren’t you just saying that one should look at long-term trends, not just a few years? (Eight years may seem like a long time to us, but it is nothing in terms of climate history.) And that when one does so, the evidence for global warming is irrefutable?
    ‘It’s All About Green Psychology’

    Comment by marguerite manteau-rao — 18 Jan 2008 @ 1:33 PM

  300. Meanwhile, in the Arctic:

    Beaufort Sea ice pack fracture

    A massive fracture of the Beaufort Sea ice pack as well as other interesting features are observed this winter in the Arctic. A new page under “Specific Ice Events” in the Education Corner shows this with satellite imagery and animations.

    In December 2007, a massive fracture of the Beaufort Ice pack was observed west of Banks island. The image below clearly shows this fracture.


    Comment by Jim Galasyn — 18 Jan 2008 @ 1:56 PM

  301. First, thanks Gavin for your many hours of dedication to this site. This site is equal to or better than a university course in climate change. Thanks to all the other contributors as well.

    Relating to this article, I really like the comments by Aaron Lewis, # 190, and Wayne Davidson, #276. Mr. Lewis’ idea, letting the plants do all the integration for us appeals to my common sense. I might just throw in the rest of the biological world. I’ve been wondering why the robins are hanging around my house in central VA in the middle of the winter. As a child I can remember them being out on the southeastern tip of Louisiana. And, I must agree wholeheartedly with Mr. Davidson, the lines on a graph are one thing and much of the Arctic Ocean, including the Northwest Passage, clear enough for normal ship traffic for almost a month last summer is quite another. Look around your world; climate change is everywhere.

    To get to the actual graph at the beginning of this article, it reminds me of many graphs I’ve seen of stock and commodity prices. Price charts are filled with the noise of millions of investors deciding what’s best for them. However, the prices can show step-like advances like this graph does. The prices will increase and meet a determined group of sellers who must be satisfied before they can advance further. The increasing temperature deviation portrayed in the graph can only proceed when the environment (the atmosphere, oceans, biosphere, etc.) stores the heat and reaches equilibrium with the previous advance.

    A further examination of the graph shows the bottoms of the troughs trending upwards, as one might see at the beginning of what could be an exponential increase in the rate of advance. Could this imply that the storage media referred to above are reaching capacity?

    Finally, two of the troughs in the graph seem to be well linked to the eruptions of two volcanoes and probably the effects of the shading particulates they added to the atmosphere. What do the models say the temperature deviation might now be without those eruptions? I suspect it would already be above 0.5 degrees C.

    Comment by Mike Tabony — 18 Jan 2008 @ 2:19 PM

  302. Fred, I’ll address a part of your post 295. You’re bouncing around a point that is part of my skepticism (but I’m leaving that alone for now), and something that was an involved discussion here a little while back. The moderate explanation of radiation transfer is that outgoing infrared radiation is absorbed intramolecularly by matching up frequencies with discrete energy levels in the rotations and vibrations of the molecular bonds, a process analogous to, though distinct from, the quantization of electron energy levels. Only greenhouse gases have the molecular/atomic layout required for this absorption. This absorption does not heat the molecule, though there was less than unanimous agreement with this.

    The molecule relieves itself either by re-radiating (up or down), by transferring the absorbed energy to translation of the molecule (which does heat the molecule, though there was considerable discussion over whether a single molecule can exhibit temperature) through a process of equipartition, or by direct transfer to another molecule’s translational (kinetic) energy, raising atmospheric temperature, via a collision. At least at lower altitudes, the latter seems to predominate, given the likelihood and frequency of molecular collisions; it also allows energy (heat) transfer to N2 and O2, et al., the non-greenhouse gases.

    That initial (equivalent) photon’s energy follows a very tortured path until it either is radiated away from the top of the atmosphere, returns to be absorbed by the surface, or spends a while in the atmosphere.

    Comment by Rod B — 18 Jan 2008 @ 2:57 PM

  303. Folks,

    Does anyone have any comments on how to reconcile the recent JPL/NASA reports on Antarctica (see the linked article) with the University of Illinois – Urbana data on southern hemisphere ice accumulation (see the linked page, at the bottom)?


    Comment by Donald Dresser — 18 Jan 2008 @ 3:00 PM

  304. Richard Tew (286) — Carbon capture and sequestration (CCS) is being seriously proposed as a way to continue to burn coal without adding carbon to the active carbon cycle. If biomass, such as biocoal, is burned instead of fossil coal, the result of CCS-firing is carbon-negative.

    Dr. James Hansen has, I believe, proposed that a concentration of carbon dioxide in the atmosphere of between 300 and 350 ppm is necessary to preserve arctic ice. From today’s 385 ppm, about 182 billion tonnes of carbon need to be removed from the active carbon cycle to reach the maximum of his estimated range. I propose 315 ppm, simply on the grounds that this was the concentration enjoyed in the 1950s. To reach this requires removing about 350 billion tonnes of carbon from the active carbon cycle. Either way, I opine that we owe it to future generations to get started right away.

    Comment by David B. Benson — 18 Jan 2008 @ 3:06 PM
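    As an aside, figures of this size can be sketched from two assumptions that are mine, not the comment’s: the standard conversion of roughly 2.13 GtC per ppm of atmospheric CO2, and an effective “airborne fraction” of about 0.43 (carbon removed from the air is partly replaced by ocean outgassing, so more than the naive amount must be withdrawn from the active carbon cycle):

```python
# Back-of-envelope sketch only; both constants are assumptions, not from the comment.
GTC_PER_PPM = 2.13        # gigatonnes of carbon per ppm of atmospheric CO2
AIRBORNE_FRACTION = 0.43  # assumed fraction of removed carbon that stays removed

def carbon_to_remove(current_ppm, target_ppm):
    """Rough GtC to remove from the active carbon cycle to reach target_ppm."""
    return (current_ppm - target_ppm) * GTC_PER_PPM / AIRBORNE_FRACTION

print(round(carbon_to_remove(385, 350)))  # ~173 GtC, near the ~182 quoted
print(round(carbon_to_remove(385, 315)))  # ~347 GtC, near the ~350 quoted
```

    Under those assumptions the sketch lands in the same ballpark as both of the figures quoted above.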

  305. Fred Staples,
    OK, let’s start simple. Does log(n) go to infinity as n goes to infinity? Since I assume you know this, HOW CAN YOU CALL THAT SATURATION?

    And, great, I’m all for learning about nuclear physics. However, don’t you think the physics might be just a wee bit different when we’re trying to understand the energy balance of the atmosphere?
    Take, for instance, your distinction between electronic excitation and vibrational excitation. The vibrational excited states and the electronic excited states are both quantum mechanical states. They absorb energy and they relax. Yes, they have different timescales, and that’s important for understanding how the greenhouse effect works. However, a CO2 vibrational state can relax collisionally or radiatively.
    I also note that you seem to want to really simplify atmospheric structure. It is not a single, homogeneous, isothermal layer. I really do not understand why you are so resistant to learning the physics behind our current understanding of climate. Without some sort of model, your statistical analyses are no more useful than Sudoku.

    Comment by Ray Ladbury — 18 Jan 2008 @ 3:26 PM
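    Ray’s point that logarithmic growth is not saturation can be made concrete with the standard simplified forcing expression ΔF = 5.35 ln(C/C0) W/m² (Myhre et al. 1998); the coefficient and the 280 ppm baseline are assumptions of this sketch, not anything asserted in the comments:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al. 1998 form; assumed here)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each doubling adds the same ~3.7 W/m^2; the increments never fall to zero,
# which is exactly why a logarithmic dependence is not "saturation".
for c in (280, 560, 1120, 2240):
    print(c, round(co2_forcing(c), 2))
```

    Diminishing returns per added ppm, yes; a ceiling that further CO2 cannot push past, no.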

  306. Donald Dresser –

    To reconcile the two links, read the first one down to the words “on land.”

    And read the second one down to the words “sea ice.”

    Does that make the difference clear?

    Comment by Hank Roberts — 18 Jan 2008 @ 3:54 PM

  307. Donald Dresser-

    I’m not sure I’m looking at the same info on the Cryosphere Today site as you, but all the pictures and charts at the bottom of the page you linked deal with sea ice extent–the area over water covered with ice, however thick or thin–whereas at a glance the Washington Post article seems to be dealing with the mass of the large ice sheets attached to land. So if I’m correct about the things you’re comparing, there’s not really anything to reconcile. The skin of ice over the water surrounding Antarctica has recently covered a somewhat larger area than normal. At the same time, the ice sheets (anchored to land) are losing mass–melting and crumbling off into the ocean around the edge of the West Antarctic Ice Sheet. No conflict. In fact, it seems to me like the ice breaking off of the ice sheet could even possibly contribute to the sea ice extent. But I may just not know what I’m talking about.

    Comment by Kevin Stanley — 18 Jan 2008 @ 4:07 PM

  308. Donald Dresser — try the transcript of the online conversation with Dr. Rignot, here. I’ve excerpted a bit that is relevant to the topic of this thread, how good models are, how good they were 30 years ago, and how they’re changing to allow better decisions:

    for example, just an excerpt:


    Cary, N.C.: What is the source of the claim from global warming skeptics that Antarctic ice is growing, not shrinking, despite the collapse of the Larsen ice shelves?

    Eric Rignot: Climate models have been predicting climate warming would increase precipitation in polar regions (because of enhanced evaporation on the oceans), which has indeed been the case in a few places (e.g. Antarctic Peninsula), but the effect is very modest. Since there is no melt in Antarctica and these models ignored the influence of glaciers, Antarctica could only grow. Reality shows otherwise. Reality shows that glaciers speed up and drive the ice sheet mass budget. This is a major shortcoming of models which we will now try to improve.

    Models predicted a loss of Antarctic mass only after a warming of 4-5 degree Celsius. We are obviously there much sooner than expected.

    Eric Rignot: … I am a bit more optimistic. I think the scientific community is coming to grasp with the Earth climate system. Slowly but surely. We are not doing very well in terms of predicting the future of ice sheets, but we are now gathering important information which will help improve the next generation of models. Global climate models were not doing very well 30 years ago. They came a long way and are now becoming more reliable. I remain optimistic that in years to come, not decades, we will produce more realistic predictions of the evolution of Greenland and Antarctica. We are learning tremendously right now about the dynamics of these systems. An unfortunate byproduct of changing the climate of the Earth so rapidly …


    Silver Spring, Md.: If Antarctic ice loss is accelerating – particularly in West Antarctica – as your work shows … what do you think odds are it will collapse this century?

    Eric Rignot: That is a very difficult question. I think a collapse of West Antarctica in the next 100-200 yr is now a concept that is back on the table; it was not on the table anymore 10 years ago; it was first put on the table in the early 1970s. But even if the ice sheet does not collapse, a loss of a significant portion of Antarctica and Greenland could raise sea level 1-2 m in the next century, and I think this is already something to worry about.

    —- end excerpt—-

    NOTE, some ellipses added, some in original, see source.

    Comment by Hank Roberts — 18 Jan 2008 @ 5:41 PM

  309. #298

    So, the stratospheric temperature stays quite stable for several years. Then there’s a large volcanic eruption. After a couple of years the temp stabilizes at a lower level (or even increases), and then it all happens once more. What does that tell us about the cause of the temperature drop? The slow and steady increase of CO2?

    Comment by lgl — 18 Jan 2008 @ 5:50 PM

  310. lgl, what’s your source for believing “So …” is a correct description? Why do you consider your source reliable?

    Comment by Hank Roberts — 18 Jan 2008 @ 6:23 PM

  311. lgl (#309) wrote:


    So, the stratospheric temperature stays quite stable for several years. Then there’s a large volcanic eruption. After a couple of years the temp stabilizes at a lower level (or even increases), and then it all happens once more. What does that tell us about the cause of the temperature drop? The slow and steady increase of CO2?

    To me at least (and I don’t have any formal background) it suggests that the stratosphere is being affected by the climate system which exists in quasi-stable regimes, but as CO2 continuously increases, that system reaches a tipping point of sorts where the climate system reorganizes itself in response to certain changes — such as the rise or fall in levels of energy in various “pools,” determining how energy flows through the climate system. The reorganization probably involves teleconnections between climate modes (atmosphere-ocean oscillations, such as the El Nino-Southern Oscillation, the Arctic Oscillation/North Atlantic Oscillation, Indian Ocean Dipole, Pacific Decadal Oscillation, etc.) where teleconnections are formed and broken.

    The reorganization will involve changes in the tendency for a given oscillation to be in one state or another, as well as the likely duration and strength of those states. Such reorganization is supposed to be common in chaotic systems. According to some of the literature I have been running across, there would appear to be a small-world network of teleconnections between the oscillations. (I’ll share titles a little later once I have had the chance to read through the essays once and can intelligently say what they are about.)

    Given the length of time between steps, I would guess that a large part of what is going on at root involves changes to a hierarchically-organized ocean circulation at different levels and scales and over different regions within the hierarchy — but that is just a guess on my part. The stratosphere is further removed from the ocean, therefore it shouldn’t be that surprising that its behavior is more step-like.

    Comment by Timothy Chase — 18 Jan 2008 @ 7:34 PM

  312. Ray (305) asserts “log(n) go(es) to infinity as n goes to infinity…” This has very little relevance to the practicalities of the physics.

    Comment by Rod B — 18 Jan 2008 @ 8:38 PM

  313. Re lgl #309

    Simpler Thought on stratosphere trend, volcanoes….

    Aerosols in the upper stratosphere reduce the amount of energy entering the climate system. Gives the surface a chance to cool and simultaneously carbon dioxide a chance to build up. A cooler surface means less thermal radiation. The additional carbon dioxide renders the troposphere more opaque to radiation, meaning that the surface will have to warm up more to reach the point that it is emitting sufficient radiation to bring the temperature of the stratosphere back up. Meanwhile, more carbon dioxide is building up in the stratosphere which has the net effect of cooling it since it is above the effective radiating layer.

    However, if you check:

    You will see that there is a downward trend early on before the first three eruptions, a shallower downward trend after the first, a slight upward after the second, and a long flat after the third. Probably has something to do with how strong the blow to the climate system was as the result of each eruption and the state the climate system was in prior to that eruption.

    So there are trends (up, down, neutral) between volcanic disturbances.

    Comment by Timothy Chase — 18 Jan 2008 @ 9:01 PM

  314. In the global climate models, what forcings would cause a reduction in temperature and CO2? Would GCMs predict another ice age if you could turn off the human CO2 contribution in the models and had the computing power to simulate the next 50,000 years?

    Comment by Mike — 18 Jan 2008 @ 11:26 PM

  315. Is there any quantifiable means to determine how much air travel and the booming aviation industry has played, and will play, in climate change? With the number of passenger aircraft and air-freight transports expected to jump hugely in the coming years, is there any way to gauge for sure the impact of the millions of tonnes of CO2 being pumped directly into the upper atmosphere at 30-37,000 ft? Does the latest computer modelling take this into account? This may also be a silly question, but is there technology available to filter and trap CO2/carbon when it leaves an aircraft’s engines? Maybe if the fuel were burnt more completely there would be less pollution; I’m guessing more refined, higher-octane jet fuel with a very narrow explosion band may help in the medium term. It’s pretty obvious that solar power is next to useless on aircraft; nuclear could be an option? Hydrogen fuel cells are too bulky, costly and dangerous; LPG likewise. Nuclear-powered aircraft seems the best bet as far as I can see, as long as stringent safety measures are implemented, especially regarding the possibility of crash landings. What do you guys think?

    Comment by Lawrence Coleman — 19 Jan 2008 @ 2:27 AM

  316. Jim Galasyn reported at 300:
    “Beaufort Sea ice pack fracture

    A massive fracture of the Beaufort Sea ice pack as well as other interesting features are observed this winter in the Arctic. A new page under “Specific Ice Events” in the Education Corner shows this with satellite imagery and animations.”

    Some of the collapsed ice pack is perennial ice:

    The implication for next summer’s Arctic sea ice area is evident.

    Comment by Petro — 19 Jan 2008 @ 4:11 AM

  317. “Ray (305) asserts ‘log(n) go(es) to infinity as n goes to infinity…’ This has very little relevance to the practicalities of the physics.”

    Oh, but it has. What Ray means to say — in science talk — is that, as n gets bigger and bigger, also log(n) goes on growing; the growth gets less and less, but it never stops, and what’s more, there is no limiting value that it will never exceed.

    Saturation would be if there were a point at which adding more CO2 would cease to have any effect; or (weaker), that log(n) (or whatever) would approach to a finite limiting value rather than infinity. Both these behaviours would warrant being called ‘saturation’. Neither occurs, not for the log function and not for the greenhouse effect either.

    Calling log behaviour ‘saturation’ is misleading at best. And at the present point in time the issue is irrelevant, as the delta of CO2 concentration is still so small that

    log(c/c0) ≈ (c-c0)/c0

    to rough approximation, i.e., near-linear behaviour.

    Or was it that what you meant by ‘very little relevance’? Then I agree. Here’s to hoping that it never becomes (very) relevant.

    Comment by Martin Vermeer — 19 Jan 2008 @ 4:26 AM
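    Martin’s near-linearity claim is easy to check numerically; the particular values below (pre-industrial 280 ppm, then-current 385 ppm) are assumptions of this sketch:

```python
import math

c0, c = 280.0, 385.0          # assumed baseline and current CO2, in ppm
exact = math.log(c / c0)      # the logarithmic form, ln(c/c0)
linear = (c - c0) / c0        # its first-order Taylor approximation
print(round(exact, 4), round(linear, 4))  # 0.3185 vs 0.375
```

    Off by under 20 per cent at today’s delta: a “rough approximation” indeed, with the log already lagging the linear term.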

  318. #310

    The links in #298 and #313 show roughly the same thing as the Technical Summary. When several measurements show the same trend I usually consider them reliable. And it does not surprise me that these large eruptions have a severe impact on the climate system.

    Comment by lgl — 19 Jan 2008 @ 4:59 AM

  319. #313

    The trends are: increase between 1973 and El Chichon, increase between 1985 and Pinatubo, and flat after 1995. And the temperature steps down about 0.5 degrees both times.
    The models don’t show this behaviour, so they must have missed something.

    Comment by lgl — 19 Jan 2008 @ 5:15 AM

  320. [[In the global climate models, what forcings would cause a reduction in temperature and CO2? Would GCMs predict another ice age if you could turn off the human CO2 contribution in the models and had the computing power to simulate the next 50,000 years?]]

    They would if you added the physics of the Milankovic cycles. At the moment GCMs don’t generally include those, since they operate on a time scale of tens of thousands of years, and most GCMs simulate a hundred years or so at a time at most.

    Comment by Barton Paul Levenson — 19 Jan 2008 @ 7:24 AM

  321. Re 311 Timothy Chase :

    I’ve also been wondering about the chaotic aspects of the climate system and the stability of the various oscillations, etc. Does anyone know if there is a possibility that at some level of greenhouse forcing there will be a major shift such that we see something completely novel appear? For example, the complete disappearance of one of the existing oscillations and the emergence of a new one, or a major alteration to the route of one or more of the oceanic currents. Does anything this radical emerge from any of the climate models?

    Comment by Craig allen — 19 Jan 2008 @ 7:25 AM

  322. Re:311 Tim Chase
    The “lower strat” temp data you reference include measurement of a significant portion of the upper troposphere, which is warming. The plots are deceptive.

    Comment by B Buckner — 19 Jan 2008 @ 9:11 AM

  323. I have come across another reference, namely
    This is in Swedish, and for those, like me, who do not understand the language: if you click just under the graph, you get a larger version. No comment.

    [Response: This is Pielke's graph and the IPCC90/92 numbers are exaggerated by about 25% from what is actually in the 1992 report. Projections from 1990 and 1992 are still higher than they were in the 95 and 2001 reports, mainly because of expectations of continued CFC and methane growth. However, they are still within the 2-sigma of the derived trends in the observations. - gavin]

    Comment by Jim Cripwell — 19 Jan 2008 @ 9:45 AM

  324. Tim Chase and lgl are writing about very short term ‘trends’ between volcanic events, but to declare a trend you need some statistical work, if I understand the word correctly. Not just a picture.
    Numbers, anyone? Where has this been published?

    Comment by Hank Roberts — 19 Jan 2008 @ 11:03 AM

    Martin (317), what I meant was that the ln(n) function might or might not apply throughout the range of concentration, and it is presumptuous, and irrelevant, to say that CO2 will continue to absorb more radiation until its concentration is infinite. [Though the idea would make moot the thorny issue of ln to the (about) 5th power of concentration ratios...]

    Comment by Rod B — 19 Jan 2008 @ 11:44 AM

  326. Tangential, for those in N. America — watch for spring. Got robins?

    “… count for today’s song map was many fewer than for the other two maps, right? This shows that robins are not yet defending their breeding territory. Robins returning to breed and nest are indeed signs of spring. That’s why increases of reports for the “Song” map will be the clearest pattern we expect to see as we track this spring’s robin migration. …”

    Redwing blackbird? tulip? hummingbird? This is meant for kids, but ‘grownups’ who haven’t quit paying attention could use this as well.

    Comment by Hank Roberts — 19 Jan 2008 @ 11:54 AM

  327. Re 312: Rod, think about it. The fact that log(n) continues to rise and diverges as n–>infinity is precisely the point, since it means that adding more CO2 continues to have an effect. If you look at the Taylor expansion, for small changes in concentration, the logarithmic term grows nearly linearly.

    Comment by Ray Ladbury — 19 Jan 2008 @ 12:12 PM

  328. Ray Ladbury wrote: “It appears that the inhabitants of the denialosphere are falling into the same trap as the creationists –- trying to find one single devastating observation or experiment that will falsify anthropogenic causation of climate change.”

    To me that is the most striking thing about the folks I would call “honest” denialists — “honest” in that they genuinely believe what they are saying, as opposed to some who are obviously disseminating fossil fuel industry propaganda.

    Despite, in most cases, having little or no real knowledge, background, or education in climate science, they seem prepared to conclude that they have discovered the simple and obvious flaw in “the anthropogenic hypothesis” that completely demolishes the whole notion of anthropogenic climate change, the simple and obvious flaw that has somehow escaped the attention of hundreds of climate scientists who have been studying the matter for decades.

    Regarding the “modelers vs. empiricists” categorization, at present it is the empiricists who have every reason to sound the loudest and most urgent alarms about anthropogenic global warming, since its empirically observable effects are far outpacing the predictions of the modelers.

    Comment by SecularAnimist — 19 Jan 2008 @ 2:18 PM

  329. Ref 323. Thanks for the comment, Gavin, but I was not concerned with the IPCC predictions. Rather, I was concerned with the correlation between the different data sets. In 213, you referred me to Vose et al. (2005), which was completed by 2005. However, the Pielke graph shows good correlation between the different data sets before 2005; it is in the last 2 years that this correlation seems to have broken down. This is why I query whether a study done in 2007 using only NASA/GISS is a proper scientific way to proceed.

    [Response: The offset in one particular year doesn't noticeably affect the correlation. On any timescale and for any period I'd be surprised if it was less than 0.95. The difference in the last couple of years is almost all due to treatment of the Arctic. I'll post something on this next week. - gavin]

    Comment by Jim Cripwell — 19 Jan 2008 @ 2:22 PM

  330. B Buckner (#322) wrote:

    Re:311 Tim Chase
    The “lower strat” temp data you reference include measurement of a significant portion of the upper troposphere, which is warming. The plots are deceptive.

    The plot was in 313, and I had given:

    Global lower stratospheric anomalies from Jan 1958 to Nov 2007

    … which shows HadAT2, UAH and RSS. I assume they are all using channel 4 (given how recent the plot is) with differences due to methodologies, but they show what appears to be good agreement on this trend at least.

    In any case, channel 4 is presumably almost all stratosphere:

    And as if that weren’t complicated enough, satellites don’t measure the temperature in a thin atmospheric layer but in a large region. The two microwave “channels” which have been most studied for climate information are MSU channels 2 and 4. Channel 2 (giving temperature T2) measures mostly the troposphere but is also affected by the surface and the stratosphere, while channel 4 (giving temperature T4) is almost entirely reflective of temperature in the stratosphere.

    December 31, 2007

    … and as such it should show very little contamination from the troposphere.


    The principal cause of stratospheric cooling
    (Wikipedia in need of correction?)

    However, there is also the question of what is principally driving stratospheric temperature anomaly trends: depletion of ozone or higher levels of carbon dioxide?

    According to:

    Figure 8.20 d Stratosphere
    Observed global annual mean surface, tropospheric and stratospheric temperature changes and GISS GCM simulations as the successive radiative forcings are cumulatively added one by one.

    … at the bottom of:

    Climate Change 2001:
    Working Group I: The Scientific Basis

    … which gets repeated in:

    The satellites also measure the lower stratospheric temperature and show a decline in stratospheric temperatures, interspersed by warmings related to volcanic eruptions. Global Warming theory suggests that the stratosphere should cool while the troposphere warms[citation needed]. However, the lower stratospheric record is mostly explained by the effects of ozone depletion, which has caused a cooling of the stratosphere[21].

    Satellite temperature measurements: Satellite measurements of the stratospheric temperature

    … it is principally ozone depletion (where ozone absorbs ultraviolet directly from sunlight) rather than higher levels of greenhouse gases.

    However, and this may be a new result, according to:

    Thursday, November 09, 2006
    Stratospheric cooling rears its ugly head….

    … the principal cause of stratospheric cooling is higher levels of greenhouse gases, and the effects of ozone on such cooling are much less significant.

    [Response: It all depends on where you are. lower strat is principally ozone-depletion related (the majority of the MSU4 trend), while mid and upper strat (and mesosphere as well) is mainly CO2. - gavin]

    Comment by Timothy Chase — 19 Jan 2008 @ 3:06 PM

  331. Ray (327), I fully understand the mathematics. My assertion (325) is simply that it is not clear, obvious or proven that the mathematical formula, ln(nb), matches the physics of concentrations from zero to infinity. To positively claim it applies to n=infinity, or even n=50, or maybe 30, or maybe…(???) as a proof of absorption, IMHO, is not relevant or helpful, and might be specious. It seems you would have a relevant argument (right or wrong I don’t know, but it’s scientifically arguable) if you kept concentrations around realistic maximum possibilities (10, 20, ??) and not try to prove your case with the ridiculous (or sublime, take your pick) infinity… or million.

    Comment by Rod B — 19 Jan 2008 @ 3:36 PM

  332. Hank Roberts (#324) wrote:

    Tim Chase and lgl are writing about very short term ‘trends’ between volcanic events, but to declare a trend you need some statistical work, if I understand the word correctly. Not just a picture.

    Numbers, anyone? Where has this been published?

    lgl’s criticism is based on short-term statistical trends. I haven’t any criticism – only questions. I wouldn’t base a criticism of the models on a short-term statistical trend, and in the case of satellite records of stratospheric and tropospheric trends, the long-term trends may be even more problematic.

    With respect to your call for numbers, I am not sure that “the numbers” would be a great deal of use at this point, as they would be showing data joined together from multiple satellites, corrected for satellite drift, etc. In any case, in the past, when actual conflict existed between the models and the satellite data, the problem lay with the satellite data, not the models. John Christy’s difficulty with being able to tell the difference between day and night is one very good example.

    However, while there are problems with model projections, (Hansen mentions cloud cover and consequently temperatures along the west coasts of continents below), in terms of lower stratosphere temperature anomaly and nearly all other global trends, the current NASA GISS Model E seems to be doing quite well.

    Please see page 672 (pdf page 12) of:

    J. Hansen et al. 2007, Climate simulations for 1880-2003 with GISS modelE, Clim. Dynam., 29, 661-696, doi:10.1007/s00382-007-0255-8.

    Comment by Timothy Chase — 19 Jan 2008 @ 3:53 PM

  333. #330

    If most of the 0.5 °C drop in the lower stratosphere is caused by ozone depletion, then how much more SW (in W/m2) has reached the troposphere and surface?

    [Response: a little (maybe tenths of a W/m2). But ozone depletion is a net cooling factor since it also has an impact on LW absorption. gavin]

    Comment by lgl — 19 Jan 2008 @ 4:01 PM

  334. Gavin-

    Can you show your work to support your claim that the 1990 IPCC projection is “still within the 2-sigma of the derived trends in the observations”?

    The 1990 IPCC projection can be found here:

    1990 IPCC predicted trend = 0.33/decade
    Range of trend in 4 observational datasets (using GISS L/O) 1990-2007: 0.20-0.22

    In order for the IPCC trend to be within 2 sigma of the observed trend requires the error to be > +/- 0.13, which seems unreasonable given that the 2-sigma error for 1979-2004 (only 7 years longer) is +/- 0.04 for the surface datasets and 0.08 for the satellites.

    What is the 2 sigma of the trends in observations 1990-2007?

    [Response: To be clear, the graph that was linked to and that I was discussing was your original attempt at discussing the 1992 supplement forecast. The numbers on that graph showed 0.6 deg C for 1990 to 2007, which even you subsequently acknowledged (but never graphically showed) should have been 0.45 deg C. That implies a trend of 0.26 degC/dec. The trends in the two GISS indices over the same period are 0.22 +/- 0.09 degC/dec and 0.26 +/- 0.11 degC/dec (2-sigma confidence, with no adjustment for possible auto-correlation, using standard Numerical Recipes methods). You could play around with the uncertainties a little or pick different datasets, but the basic result is the same. Given the uncertainty in the trends because of interannual variability and the imprecision of the data, the 1992 forecast trends are within observational uncertainty.

    Now, for 1990 it is a slightly different matter. I would estimate the 1990 forecast is closer to 0.5 for 2007 (thus 0.29 degC/dec) than the 0.57 you graphed. It would then still be within the error bounds calculated above. For your estimate of the trend (and both of us are just reading off the graph here so third party replication might be useful at this point) it would fall outside for one index, but not the other. Making different assumptions about the structure of the uncertainties in that case could make a further difference in how it should be called. Since I wasn't discussing this in the comment above, I'm happy whichever way it goes.

    For the record, your comments about this exchange in other forums are singularly unprofessional and rather disappointing. My only suggestions have been that a) you read from a graph correctly before making comparisons, and b) calculate error bars (as much as can be done) before pronouncing on significance. These might strike some as simple basic procedures. You appear to think they are optional. I'm not much worried that my professional reputation will suffer because of our apparent disagreement about that.

    One final note. I find the mode of discourse that you have engaged in - serial misquotation with multiple ad homs in various parts of the web with different messages for different audiences - unproductive and unpleasant. I am not the slightest bit interested in continuing it. - gavin]

    Comment by Roger Pielke. Jr. — 19 Jan 2008 @ 4:11 PM
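    The trend-plus-error-bar calculation being argued over above can be sketched as ordinary least squares on annual anomalies, with a 2-sigma standard error on the slope (the “standard Numerical Recipes methods” gavin mentions, with no auto-correlation adjustment). The anomaly series below is synthetic, generated with an assumed 0.22 degC/decade trend and 0.1 degC interannual noise, purely for illustration:

```python
import random
import statistics

random.seed(0)
years = list(range(1990, 2008))
# Synthetic anomalies: an assumed 0.022 degC/yr trend plus Gaussian noise.
temps = [0.022 * (y - 1990) + random.gauss(0, 0.1) for y in years]

n = len(years)
xbar, ybar = statistics.mean(years), statistics.mean(temps)
sxx = sum((x - xbar) ** 2 for x in years)
slope = sum((x - xbar) * (t - ybar) for x, t in zip(years, temps)) / sxx
residuals = [t - ybar - slope * (x - xbar) for x, t in zip(years, temps)]
se_slope = (sum(r * r for r in residuals) / ((n - 2) * sxx)) ** 0.5

print("trend: %.2f +/- %.2f degC/decade (2-sigma)" % (slope * 10, 2 * se_slope * 10))
```

    With only 18 annual points and 0.1 degC of interannual noise, the 2-sigma band is wide, which is the core of the point about short-period trend comparisons.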

  335. Re #331

    “My assertion (325) is simply that it is not clear, obvious or proven that the mathematical formula, ln(nb), matches the physics of concentrations from zero to infinity.”

    Actually it’s easily proven that it does not!
    The ln() relationship arises because the absorption lines in the CO2 spectrum become saturated at their centers, and the dependence on concentration is therefore due to the non-saturated ‘wings’ of the broadened lines, yielding a ln-dependence. This applies for the range of concentrations experienced in our atmosphere, but, I suspect, neither at very low concentrations nor at extremely high ones.

    Comment by Phil. Felton — 19 Jan 2008 @ 4:39 PM

  336. #332

    > in terms of lower stratosphere temperature anomaly and nearly all other global trends, the current NASA GISS Model E seems to be doing quite well.

    Can’t agree with you there. The model shows a decrease between the eruptions, and the total drop between 1958 and 2000 is around 0.8 deg C, while the observations you linked to show a drop close to 1.5 deg C.
    Assuming I am comparing apples to apples?

    Comment by lgl — 19 Jan 2008 @ 4:56 PM

  337. Rod, So, since you don’t believe in physics, what do you propose we use? The logarithmic dependence is a direct consequence of the fact that we are having greatest effect (near Earth) in the wings of the absorption line. As the wings are quite broad, there is no reason to expect that CO2 will magically stop absorbing IR at some concentration.
    As most of Earth’s carbon is bound up in rocks, for the range of possible concentrations of CO2 in the atmosphere, saturation quite simply will not occur.

    Comment by Ray Ladbury — 19 Jan 2008 @ 5:12 PM

  338. Gavin- Thanks for reporting back. Thanks for observing that the trends that I reported for the 1990 IPCC are outside of the 2 sigma range. By picking your own smaller trend (rather than the one I reported) you came up with a slightly different answer. I see that you call for error bars when convenient (now) and ignore them when they are not (your comparison of Hansen’s 1988 projections with data). Of course, you well know that in my posts on this I had not claimed anything about statistical relationships of observed and predicted, simply presented the central tendencies.

    On your complaints, we can both feel misrepresented I suppose, and that is why it is best to present analyses when making claims, especially when representing someone else’s work. So choose not to engage if you must, but conversations among people who disagree can lead to constructive learning.

    [Response: Hopefully the constructive learning you are doing involves checking your facts. My description of the Hansen et al trends most certainly did include discussions of the error bars in the observations and in the model simulations:

    From 1984 to 2006, the trends in the two observational datasets are 0.24 +/- 0.07 and 0.21 +/- 0.06 deg C/decade, where the error bars (2 sigma) are derived from the linear fit. The 'true' error bars should be slightly larger given the uncertainty in the annual estimates themselves. For the model simulations, the trends are for Scenario A: 0.39 +/- 0.05 deg C/decade, Scenario B: 0.24 +/- 0.06 deg C/decade and Scenario C: 0.24 +/- 0.05 deg C/decade.

    As I have stated repeatedly, discussion of differences without discussion of uncertainties is misleading, and no verification of a forecast can be done without it. You can 'report' what you like, but don't be surprised when people who bother to look up the original references correct you. I have neither mis-stated any fact, nor misrepresented your claims in any way. Your continued claims to the contrary, and your doing the precise same thing you accuse me of, are a poor reflection on your professional integrity. - gavin]
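
    The trend-plus-2-sigma calculation described in the response can be sketched as follows (a minimal illustration on synthetic data, assuming numpy is available; the actual GISTEMP and model series are not reproduced here):

```python
import numpy as np

def trend_with_2sigma(years, temps):
    """Least-squares slope in deg C/decade with a 2-sigma error bar from the fit."""
    x = np.asarray(years, dtype=float)
    y = np.asarray(temps, dtype=float)
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    # Standard error of the slope from the residual variance of the fit
    se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum((x - x.mean()) ** 2))
    return slope * 10.0, 2.0 * se * 10.0  # deg C per decade

# Synthetic series: a 0.02 deg C/yr trend plus annual noise
rng = np.random.default_rng(0)
yrs = np.arange(1984, 2007)
anoms = 0.02 * (yrs - 1984) + rng.normal(0.0, 0.1, size=len(yrs))
trend, err = trend_with_2sigma(yrs, anoms)
print(f"{trend:.2f} +/- {err:.2f} deg C/decade")
```

    As the response notes, the error bar from the fit alone understates the true uncertainty, since each annual value carries its own error.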

    Comment by Roger Pielke. Jr. — 19 Jan 2008 @ 6:10 PM

  339. Gavin,
    As a follow up to our discussion earlier this week, I have been considering your suggestion that ENSO significantly alters the global heat budget on annual or longer time scales.
    The magnitude of natural tropical variability (specifically ENSO), and its effect on the earth’s heat budget over annual to decadal periods, seems an important issue in climate science. To probe this question further, I would like to ascertain some basic information. Firstly, what is the net radiative flux (in W/m2, then converted to Joules) needed to raise the temperature of the troposphere (entire global integral) 0.2 degrees C** (or whatever is the best number attributed to the 1998 Super El Nino) over the relevant one year span? Over this same period, when considering the upper 750 m of the ocean, how does this quantity of atmospheric heating compare to the change in the ocean heat content observed for this same time period (measured in Joules or converted to W/m2) in Lyman et al. 2006 (Figure 1)? From this data, are we then able to estimate the fraction of the observed change in the global heat budget that was directly attributable to the El Nino event (since the heat content anomaly in the atmosphere was certainly correlated to the large 1998 El Nino)? Once this fraction is determined, it seems we could better speculate on the validity of the argument by Ellis (1978) that even over annual periods of time, the change in ocean heat content is a proxy for the top of atmosphere radiative imbalance, which Roger Pielke Sr. asserts approximately equals the mean non-equilibrium radiative forcing plus feedbacks. I have e-mailed Roger, and also asked him to comment on this issue, which he indicated he would do on his weblog next week.
    I hope my question is well posed and that this is a sound methodology to address it (i.e., I understand the error bars on ocean heat content limit the precision of this comparison, but it seems important to understand whether the numbers are even of the same order of magnitude).
    Thanks again for the continuing education.

    [Response: Good questions. It will take me a little while to track down some answers, but I think it should be do-able. You could make a stab at it yourself by looking at the differences in the 'Energy into Ground' (+ve down) and 'Net radiation TOA' (+ve up) in one of the AMIP-style model runs we have archived. Look at the anomalies month by month (compared to some baseline) from mid-1997 to mid-1998. The numbers are ensemble means, and so you should be looking predominantly at the El Nino impact alone. You can compare that to the evolution of the surface temperature anomaly. I'll report back later with my analysis. - gavin]

    [Response: Update. Both 'Energy into Ground' and 'Net radiation TOA' are +ve down. Time series (including zonal means) are found here: - gavin]

    Comment by Bryan S — 19 Jan 2008 @ 6:41 PM

  340. > in my posts on this I had not claimed anything
    > about statistical relationships
    > of observed and predicted, simply presented
    > the central tendencies ….

    When you “present” a “central tendency” you thereby make a statistical claim about the data, as I understand the term. No?

    > it is best to present analyses when making claims …

    At least, cite the published analysis you base your claims on.

    Comment by Hank Roberts — 19 Jan 2008 @ 7:21 PM

  341. Timothy,
    While I have a lot of respect for Tamino, if you go to the source:
    you will find that MSU channel 4 covers a range of altitudes between 10 km and 30 km, and is centered at an altitude of about 17 km. The troposphere/stratosphere boundary is at 17 km at the equator, and at 8 km at the poles, so clearly channel 4 (lower stratosphere) samples a significant portion of the troposphere. Further, as shown on Figure 9.1(f) in AR4, global warming of the troposphere is expected to warm the air to an altitude of about 17 km, with a sharp drop off and cooling after that. So the MSU channel 4 sampling below 17 km is measuring warming air, while above that cooling air.

    Comment by B Buckner — 19 Jan 2008 @ 8:11 PM

  342. Mr. Buckner, when I look for papers citing the MSU channels, I always find mention of how the raw satellite data is adjusted to deal with these questions; I don’t find anything as simple as you make it sound, however, in your last sentence there (8:11pm posting).

    Here for example there’s an extended discussion of the many factors considered to make use of the raw satellite data, including those you mention and many more. There’s a lot of work done;
    Vinnikov et al.
    Temperature Trends at the Surface and in the Troposphere
    J. Geophys. Res, 2006 –
    “… radiation measured by MSU channel 2 results from changes in surface temperature and another 10% from the atmosphere above 180 hPa….”

    You’ll find that paper mentioned at RC and elsewhere, q.v. for discussion. But see the original full paper for quite a few detailed paragraphs addressing the issues involved in making sense of the many different instruments. They do work at this.

    Comment by Hank Roberts — 19 Jan 2008 @ 8:48 PM

  343. Bryan,

    I did a preliminary analysis from the sources I linked to. From early 1997 to mid 1998, the simulated atmosphere accumulated anomalous heat at about 0.2 W/m2 on average due to the El Nino event. This corresponded to an increase of surface air temperature (in these simulations) of around 0.3 to 0.4 deg C. Over that same time, the model ocean was losing heat at approximately 0.7 W/m2. The heat out at the TOA was thus ~0.5 W/m2. Now whether this is a good approximation to reality is unclear – my sense is that it is going to be sensitive to the details of the cloud feedbacks in the tropics, but assuming for the time being that this is reasonable, it implies that the interannual variability in OHC is certainly on the same order as the long term TOA imbalance, and that the factor between TOA imbalance and OHC might vary between 0.8 and 1.2 on interannual timescales. One would therefore expect to see a plateau or even a dip in OHC growth through this period in the real world.

    These kinds of model simulations do have a couple of disadvantages, though, that might be apropos. In mid-latitudes, where SST variability is driven mainly by the atmosphere, starting off with the SST makes the fluxes go the wrong way. You could alleviate this by doing the analysis in the tropics alone – it might even be easier to compare with data on clouds or rainfall too. But this is probably ok for a first cut.
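
    The global-mean bookkeeping in the analysis above reduces to a couple of lines (the flux values are the approximate model numbers quoted in the analysis; sign conventions as stated):

```python
# Global-mean heat budget during the model El Nino (approximate values
# from the analysis above, in W/m2; positive means heat gained by that
# reservoir).
atmos_gain = 0.2    # anomalous heat accumulated by the atmosphere
ocean_gain = -0.7   # the model ocean was losing heat

# The net heat leaving at the top of the atmosphere is whatever the
# ocean lost minus what the atmosphere retained.
toa_out = -(ocean_gain + atmos_gain)
print(round(toa_out, 2), "W/m2 out at TOA")
```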

    Comment by gavin — 19 Jan 2008 @ 9:36 PM

  344. gavin> Over that same time, the model ocean was losing heat at approximately 0.7 W/m2. The heat out at the TOA was thus ~0.5 W/m2.

    Was the area the same for ocean and atmosphere?

    [Response: These are all true global means. - gavin]

    Comment by Steve Reynolds — 19 Jan 2008 @ 10:09 PM

  345. Gavin,

    With more detailed analysis, this definitely needs to be published!

    I need to think through the ramifications of this conclusion a little before commenting on details, and it’s getting late, and my family is calling. I look forward to continuing this informative discussion. Thanks for staying up late!

    Comment by Bryan S — 19 Jan 2008 @ 10:41 PM

  346. Phil (335), I concur. Ray (337) says, “As the wings are quite broad, there is no reason to expect that CO2 will magically stop absorbing IR at some concentration.”

    How broad would you guess are the wings? There is likewise absolutely no reason to expect that it won’t stop. Given the quantization of the absorption it makes logical sense that it would stop absorbing long before infinity.

    “….saturation quite simply will not occur….” My implied suggestion exactly: give it up. You have nothing to gain/prove but a small amount to lose.

    Comment by Rod B — 19 Jan 2008 @ 10:47 PM

  347. > There is likewise absolutely no reason to expect that it won’t stop.

    Which “it” are you talking about now?

    Comment by Hank Roberts — 20 Jan 2008 @ 12:21 AM

  348. Rod,

    see Eli Rabett’s page here. Also, if you go to Ray Pierrehumbert and Spencer Weart’s saturated gassy argument/part 2 pages, you can see that Venus is not even “saturated”, and more and more CO2 will cause warming well beyond the realm of what is practical for policy-making. This is going nowhere unless you want to discuss the details for academic reasons, but the idea in the blogosphere that “CO2 will be saturated soon so there is not much to worry about” is nonsense.

    Comment by Chris Colose — 20 Jan 2008 @ 1:40 AM

  349. #340 “When you “present” a “central tendency” you thereby make a statistical claim about the data, as I understand the term. No?”

    To “present” a central tendency is to make a statistical “claim”, yes. A single sample mean is an unbiased estimate of a population mean; it makes a statement.

    However – to address the case RPJ was referring to – it is possible to present a set of central tendencies without drawing any inference as to whether the central tendencies are the same or different. It is a matter of descriptive vs inferential statistics. To move from a description of central tendency to an inference about a set of tendencies you need to estimate the uncertainties in central tendency. No estimate of uncertainty, no inference.

    But – a brief digression – operating in descriptive mode rather than inferential mode is really a kind of a dodge. What the author is essentially doing is letting the reader assume the risk of making a bad inference, thus shifting the liability from author to reader. In purely scientific circles this doesn’t really matter, as every scientist knows the literature is “buyer beware”. But it can be a problem for policy makers who aren’t aware of this cultural practise. They are often forced to, sometimes unknowingly, take on credibility risk that the scientists have chosen to waive.

    Comment by Richard Sycamore — 20 Jan 2008 @ 3:50 AM

  350. It is interesting to see what aspects of the IPCC report were selected by Pielke and Tierney for discussion. They seem to shy away from the IPCC sea-level rise predictions discussed by Rahmstorf et. al:

    Since 1990 the observed sea level has been rising faster than the rise projected by models, as shown both by a reconstruction using primarily tide gauge data (2) and, since 1993, by satellite altimeter data (3) (both series are corrected for glacial isostatic adjustment). The satellite data show a linear trend of 3.3 ± 0.4 mm/year (1993–2006) and the tide gauge reconstruction trend is slightly less, whereas the IPCC projected a best-estimate rise of less than 2 mm/year. Sea level closely follows the upper gray dashed line, the upper limit referred to by IPCC as “including land-ice uncertainty.” The rate of rise for the past 20 years of the reconstructed sea level is 25% faster than the rate of rise in any 20-year period in the preceding 115 years. Again, we caution that the time interval of overlap is short, so that internal decadal climate variability could cause much of the discrepancy; it would be premature to conclude that sea level will continue to follow this “upper limit” line in future.

    I’d like to know why Dr. Pielke is focusing only on one subset of IPCC predictions, considering that Dr. Pielke calls the surface temp data “a feast for cherrypickers”, and then goes on to say “Rather than select among predictions, why not verify them all?”

    [Response: In fairness, Pielke discussed sea level rise here. -gavin]

    Comment by Ike Solem — 20 Jan 2008 @ 4:30 AM

  351. B Buckner (#341) wrote:

    Timothy, While I have a lot of respect for Tamino, if you go to the source:
    you will find that MSU channel 4 covers a range of altitudes between 10 km and 30 km, and is centered at an altitude of about 17 km. The troposphere/stratosphere boundary is at 17 km at the equator, and at 8 km at the poles, so clearly channel 4 (lower stratosphere) samples a significant portion of the troposphere. Further, as shown on Figure 9.1(f) in AR4, global warming of the troposphere is expected to warm the air to an altitude of about 17 km, with a sharp drop off and cooling after that. So the MSU channel 4 sampling below 17 km is measuring warming air, while above that cooling air.

    Tamino isn’t the only one who seems to be making that “mistake”:

    We quantify the stratospheric contribution to MSU channel 2 temperatures using MSU channel 4, which records only stratospheric temperatures.

    Qiang Fu et al., Contribution of stratospheric cooling to satellite-inferred tropospheric temperature trends, Nature 429, 55-58 (6 May 2004)

    Likewise, I notice that the material you link to:

    … refers to TLS as lower stratosphere and makes no mention of it as including troposphere.

    However, you are right that the tropopause (the boundary layer between the troposphere and the stratosphere) extends to a height of 18 km over the equator.

    Please see:

    The upper boundary of the layer, known as the tropopause, ranges in height from 5 miles (8 km) near the poles up to 11 miles (18 km) above the equator. Its height also varies with the seasons; highest in the summer and lowest in the winter.

    Moreover, it has been pointed out that the tropopause is rising as the result of global warming.

    Please see:

    The team’s results show that human-induced (anthropogenic) changes in well-mixed greenhouse gases, which are fairly evenly distributed in the atmosphere, and ozone, a greenhouse gas that is found in higher concentrations in the stratosphere, are the primary causes of the approximately 200-meter rise in the tropopause that has occurred since 1979. In their research, team members used advanced computer models of the climate system to estimate changes in the tropopause height that likely result from anthropogenic effects. They then searched for, and positively identified, these model-predicted “fingerprints” in observations of tropopause height change.

    Tropopause Height Becomes Another Climate-Change “Fingerprint”

    … and:

    We examine changes in tropopause height, a variable that has hitherto been neglected in climate change detection and attribution studies. The pressure of the lapse rate tropopause, pLRT, is diagnosed from reanalyses and from integrations performed with coupled and uncoupled climate models. In the National Centers for Environmental Prediction (NCEP) reanalysis, global-mean pLRT decreases by 2.16 hPa/decade over 1979-2000, indicating an increase in the height of the tropopause. The shorter European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis has a global-mean pLRT trend of -1.13 hPa/decade over 1979-1993.

    Santer, B.D., R. Sausen, T.M.L. Wigley, J.S. Boyle, K. AchutaRao, C. Doutriaux, J.E. Hansen, G.A. Meehl, E. Roeckner, R. Ruedy, G. Schmidt, and K.E. Taylor, 2003: Behavior of tropopause height and atmospheric temperature in models, reanalyses, and observations: Decadal changes. J. Geophys. Res., 108, no. D1, 4002, doi:10.1029/2002JD002258.

    The variability of the tropopause height gets me to thinking: is it possible that the height at which the channel 4 measurements begin is itself variable?

    Looking around, I find:

    TLS (temperature lower stratosphere) (formerly known as channel 4), for which 90% of the emissions are from between 150 and 20 hPa.

    Microwave Sounding Unit (MSU) temperature anomalies
    December 1978 – March 2006

    Now this suggests that it is going off of air pressure. Checking for the air pressure of the stratosphere at the equator, I find:

    The pressure at the tropopause ranges from ~300 hPa at high latitudes to 100 hPa over the equator.

    Dessler (2000), The Chemistry and Physics of Stratospheric Ozone, pg 34

    So that can’t be it. Judging from what I can see, you are right that there is some contamination from the troposphere. But how significant is it? We know that the rate at which temperatures are rising near the equator is considerably smaller than at the higher latitudes, at least at the surface. I suspect you will find the same thing in terms of tropospheric warming. As such, there will probably be very little contamination due to tropospheric trends.

    But this wouldn’t explain why Fu et al (2004) feel comfortable stating that it “records only stratospheric temperatures.” Perhaps Hank Roberts is right, and postprocessing in some form eliminates the contamination, or perhaps they were being somewhat imprecise. In either case, I don’t see much concern for the issue in the literature, but it would be nice to know.

    Comment by Timothy Chase — 20 Jan 2008 @ 4:42 AM

  352. Bryan S writes:

    [[Firstly, what is the net radiative flux (in W/m2, then converted to Joules) needed to raise the temperature of the troposphere (entire global integral) 0.2 degrees C** (or whatever is the best number attributed to the 1998 Super El Nino) over the relevant one year span?]]

    The atmosphere has a mass of 5.136 x 10^18 kilograms according to Walker (1977, p. 20); the fourth digit may have been revised since then but I can’t think of any specific sources offhand. The troposphere is usually taken to have 80% of the mass of the atmosphere, which would give it m = 4.109 x 10^18 kg. Dry air has a specific heat at constant pressure (yes, I know) of 1,004 Joules per kg per K, and the atmosphere is mostly dry, water vapor constituting around 0.4% by volume on average. Make it 1,010 J/kg/K for the whole troposphere. Change in temperature is dT = dH / (m cp), so dH = dT m cp and a 0.2 K change requires 8.3 x 10^20 J. A year is 365.2422 * 86400 = 3.156 x 10^7 seconds, so the power required is 2.630 x 10^13 watts. The Earth’s area is about 5.1007 x 10^14 square meters, so the required flux density is about 0.052 watts per square meter.
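
    The arithmetic in the paragraph above can be reproduced directly (same constants as quoted; a quick check, nothing more):

```python
# Reproduce the estimate above: energy needed to warm the troposphere
# by 0.2 K in one year, expressed as a flux over the Earth's surface.
M_ATM = 5.136e18           # mass of the atmosphere, kg (Walker 1977)
m_tropo = 0.80 * M_ATM     # troposphere ~80% of atmospheric mass
cp = 1010.0                # effective specific heat, J/kg/K
dT = 0.2                   # warming, K
seconds_per_year = 365.2422 * 86400
earth_area = 5.1007e14     # m2

energy = dT * m_tropo * cp                      # ~8.3e20 J
flux = energy / seconds_per_year / earth_area   # ~0.052 W/m2
print(f"{energy:.2e} J -> {flux:.3f} W/m2")
```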

    Comment by Barton Paul Levenson — 20 Jan 2008 @ 8:08 AM

  353. Rod B posts:

    [[Ray (337) says, “As the wings are quite broad, there is no reason to expect that CO2 will magically stop absorbing IR at some concentration.”

    How broad would you guess are the wings? There is likewise absolutely no reason to expect that it won’t stop. Given the quantization of the absorption it makes logical sense that it would stop absorbing long before infinity.]]

    Venus has 89 bars of carbon dioxide (965,000 ppmv at 92.1 bars pressure) and a mean global annual surface temperature of 735.3 K (both figures from Seiff et al.’s 1986 standard atmosphere for Venus). The simulations of Bullock (1997) and others indicate that in times of intense volcanism, the surface temperature on Venus can go to 900 K. So even at that level it isn’t saturated. Whether some theoretical saturation limit exists or not is irrelevant to climatology; in practice, more CO2 means more greenhouse effect.

    Comment by Barton Paul Levenson — 20 Jan 2008 @ 8:15 AM

  354. Hank

    I understand very smart people understand the issues and work hard to provide representative data. Channel 4 measures a certain layer of the atmosphere. The data needs to be processed to reflect the targeted layer. Complications arise because the targeted layer boundary varies with latitude, pressure and time, and there are huge temperature gradients at the boundary. It is not possible with this technology to get it right and perfectly separate the layers. That is why I said the lower stratosphere data is contaminated with warm troposphere data. If the lower stratosphere MSU satellite temp data is accurate, then there is a problem with global warming theory because the lower stratosphere has not cooled since 1993. I don’t think that is the case and the constant lower strat temps are more likely to be the result of a warming upper troposphere and a rising boundary between the troposphere and stratosphere.

    Comment by B Buckner — 20 Jan 2008 @ 9:39 AM

  355. Bryan and Gavin, I find your discourse very interesting and informative. Same goes for the Buckner, Roberts, Chase, et al discussion. Happy to listen and learn. Thanks.

    Comment by Rod B — 20 Jan 2008 @ 10:00 AM

  356. > Which “it” are you talking about now?

    CO2 absorbing infrared radiation.

    Comment by Rod B — 20 Jan 2008 @ 10:04 AM

  357. per Chris (348): “…the ideas in the blogosphere that “CO2 will be saturated soon so there is not much to worry about” is nonsense…”

    I never said, implied or hinted as such. I did imply that a concentration ratio of 20 – 30 times, compared to ~2x today, is possibly arguable for continuing absorption. But a million times, let alone infinite, is way beyond the realm of any known physics justifying the ln(n) formula.

    “…This is going nowhere unless you want to discuss the details for acadmeic reasons…”

    You have a good point there!

    Comment by Rod B — 20 Jan 2008 @ 10:15 AM

  358. Hank Roberts and Kevin Stanley, thanks for pointing out what I had missed regarding the Antarctic ice reports.

    Comment by Donald Dresser — 20 Jan 2008 @ 10:21 AM

  359. B Buckner (#354) wrote:

    If the lower stratosphere MSU satellite temp data is accurate, then there is a problem with global warming theory because the lower stratosphere has not cooled since 1993. I don’t think that is the case and the constant lower strat temps are more likely to be the result of a warming upper troposphere and a rising boundary between the troposphere and stratosphere.

    In response to my question in 330:

    However, there is also the question of what is principally driving stratospheric temperature anomaly trends: depletion of ozone or higher levels of carbon dioxide?

    gavin wrote:

    It all depends on where you are. lower strat is principally ozone-depletion related (the majority of the MSU4 trend), while mid and upper strat (and mesosphere as well) is mainly CO2.

    Ozone depletion has gone flat and may even be recovering a bit.

    Ergo, there isn’t that much reason to expect the lower stratosphere to continue cooling. It could even warm up a little if ozone recovers appreciably. However, the middle and upper stratosphere should continue to cool.

    Comment by Timothy Chase — 20 Jan 2008 @ 10:28 AM

  360. #354
    If this is correct, the top of the troposphere isn’t warming either (0.03 K/dec).

    Comment by lgl — 20 Jan 2008 @ 10:44 AM

    Rod B., your comments represent a misconception about spectral lines. They are not well-behaved distributions.

    You can read about them here:

    and here:

    Result: Add more CO2 and you will absorb more IR. Absorb all the IR, and the gas will emit more, which can be absorbed higher up.

    Comment by Ray Ladbury — 20 Jan 2008 @ 10:45 AM

  362. B Buckner and lgl,

    Apparently the trend in lower stratosphere temperature anomaly is pretty much what theory says it should be:

    Ramaswamy et al., Anthropogenic and Natural Influences in the Evolution of Lower Stratospheric Cooling, Science 24 February 2006: Vol. 311. no. 5764, pp. 1138 – 1141, DOI: 10.1126/science.1122587

    Comment by Timothy Chase — 20 Jan 2008 @ 10:50 AM

  363. Gavin- Thanks much for the link in #350

    Richard Sycamore- Thanks for stating what seemed fairly obvious.

    The issue that has me confused about Gavin’s complaints about my efforts to look at past IPCC forecasts is that the 1995, 2001, and 2007 forecasts and observations are clearly and unambiguously right on top of one another. One can certainly do an uncertainty analysis, but, in this case, it seems redundant. As I characterized the IPCC forecasts in my post, the forecasts are “spot on.”

    Now 1990 is a little different. And whether you like Gavin’s 0.29/decade or my 0.33/decade, it also seems logically obvious that over the long term the observations (however the future evolves) cannot simultaneously be consistent with IPCC 1990 and IPCC 1995/2001/2007. No uncertainty analysis is needed to reach this conclusion either. Now it might be that the obs fall outside the 2 sigma uncertainty 1990-2007 (using my trend) or 1990-2008 (using Gavin’s trend + UKMET 2008 forecast) or some other period, but over the long term not all IPCC forecasts can be equally accurate, and this says nothing about the truth value of climate science or the integrity of the IPCC. It does however say something about the details of predictive capabilities from different points in time, and how climate forecasts have evolved, which seems to be worth doing (even the IPCC does it!).

    [Response: My complaints were nothing to do with whether verification is worthwhile or not. We all agree it is. They had to do with presentation and accuracy in what was being discussed. You might interpret criticism of specifics as a criticism of approach but that does not logically follow and probably explains your confusion. Our conversations would be much more productive if you simply responded to actual comments rather than reflexively assuming some imagined agenda. For a start, how about apologising for mischaracterising (#338) my discussion of the Hansen et al simulations? - gavin]

    Comment by Roger Pielke. Jr. — 20 Jan 2008 @ 11:35 AM

  364. #362

    Thanks Timothy, but again I disagree.
    Still they only manage to get a flat curve between the eruptions, and:

    “Although the overall trend in temperature has been modeled previously (5, 9, 10), the steplike structure and the evolution of the cooling pattern in the observed global temperature time series has not been explained in terms of specific physical causes, whether these be external forcing and/or internal variability of the climate system. Thus, attribution of the unusual cooling features observed during the 1980s and 1990s has yet to be addressed, along with potential implications for the future.”

    Not very convincing. And when they write things like:

    “The decadal-scale temperature decline that is dominated by stratospheric ozone depletion is very likely unprecedented in the historical evolution of the lower stratospheric thermal state”…

    How can they possibly know?

    [Response: Err... because ozone-depleting CFCs are completely man made and have never existed in the Earth's atmosphere before? - gavin]

    Comment by lgl — 20 Jan 2008 @ 12:07 PM

  365. #364

    But there has been far more severe volcanic activity through history than in the last decades. How can they know the temp drop after a VEI 7 or 8 volcano?

    Comment by lgl — 20 Jan 2008 @ 12:40 PM

    As far as the ln(n) formula is concerned, Ray (305), it was introduced (empirically) by Arrhenius and modified by Hansen. Climate Audit is tracking down sources, and the Reference Frame has a sketch of a deduction. It is not supposed to work indefinitely, or at very low concentrations. Angstrom (Weart and Pierrehumbert, A Saturated Gassy Argument, here) thought the gas would be completely saturated beyond quite modest levels. Your contributors did not agree.

    However, over the range where the increase in temperature continues, the point is the doubling time. On the Arrhenius version, doubling the concentration makes the temperature increment proportional to ln 2, which is constant.
    If a CO2 concentration increase from 280 to 560 ppm increases global temperatures by 1 degree in 100 years, 800 years will be required to raise temperatures by 4 degrees at a concentration of 4,480 ppm. Is CO2 toxic at that level? On the issue of the excitation mode, I think (and will look for sources) that the consequences of electron excitation and intra-atomic excitation are different. The first will re-radiate, at the same temperature and frequency, allowing surface radiation to continue upwards and travel backwards; the second will dissipate heat into N2/O2 as Rod B (302) says.
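
    Under a pure logarithmic law, the warming depends only on the number of doublings, which is what makes the 280 → 560 → 4,480 ppm arithmetic above work out (a quick check, taking 1 degree per doubling as in the example):

```python
import math

def warming_for(c_final, c_initial, degc_per_doubling=1.0):
    """Warming under a pure ln law: proportional to the number of doublings."""
    return degc_per_doubling * math.log2(c_final / c_initial)

print(warming_for(560, 280))    # one doubling
print(warming_for(4480, 280))   # four doublings (280 * 2**4 = 4480)
```

    How long those doublings take depends entirely on the assumed emissions path, which the log law itself says nothing about.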

    If most (even much) of the absorbed energy does increase the N2/O2 temperature, how does that energy leave the atmosphere, and what happens to the back-radiation argument?

    The Saturated Gassy Post says:

    “Or it (the excited molecule) may transfer the energy into velocity in collisions with other air molecules, so that the layer of air where it sits gets warmer. The layer of air radiates some of the energy it has absorbed back toward the ground, and some upwards to higher layers. As you go higher, the atmosphere gets thinner and colder. Eventually the energy reaches a layer so thin that radiation can escape into space.”

    Which other air molecules? Can N2/O2 absorb and radiate?

    [Response: You don't need the other air molecules. It's still the greenhouse gases (CO2, CH4, H2O, etc.) that are doing the radiating -- it's just that they're doing the radiating at a level where the atmosphere is optically thin enough for the radiation to get out. Note that this remark applies to only one part of the counter-argument to the "saturation" claim (the "thinning and cooling" part that shows that saturation as understood by Angstrom wouldn't negate the greenhouse effect increase even for a grey gas saturated at sea level). The other part, which is probably the most important part for CO2 on Earth, deals with the wings of the CO2 absorption spectrum, where CO2 absorption is weak and needs greater concentrations to be saturated. Always more absorption waiting in the wings! (until you approach Venus conditions). By the way N2 can become a greenhouse gas for sufficiently dense atmospheres. It's the dominant greenhouse gas on Titan. Not a factor on Earth, though. --raypierre]

    Comment by Fred Staples — 20 Jan 2008 @ 1:25 PM

  367. We continue to run into skeptical arguments dealing with ground level global temperature noise in the 1920s-1940s period.

    For example, a Facebook blogger wrote:

    … how does one explain the earth’s rise in temperature from the early 1900′s to 1940 when HUMAN production of CO2 was significantly lower in relative terms to today’s production… Wouldn’t it make sense that temperatures should have fallen then, not risen? …

    I assume there is no one explanation which is supported by scientific evidence.

    Aerosol concentration changes fail to be convincing.

    FYI – My reply to the Facebook blogger:
    — I have an explanation for the apparent cooling period from 1940-1975 which differs from the possible explanation given by many mainstream scientists (aerosols). My explanation is that the 1920s-1940s was a warm bubble, a result of less condensation and cloud cover in the 20s and early 30s, followed by higher intensity El Nino conditions in mid 30s-40s.

    [Response: I'm glad you pointed out that this nonexistent problem is still making the rounds, but I don't buy your explanation. Either the current or previous IPCC report has many citations regarding the causes of the earlier parts of the warming. CO2 still plays a role, but basically the problem is that the warming is still small enough then that it's not beyond natural variability. I suppose some of your claims about clouds and ENSO could be included under "natural variability." And of course, it's well established by now that the main reason for the interruption of the warming is aerosols, notwithstanding that there is still considerable uncertainty regarding the secondary aerosol effect on clouds. --raypierre]

    Comment by pat n — 20 Jan 2008 @ 1:41 PM

  368. Martin (317), what I meant was that the ln(n) function might or might
    not apply throughout the range of concentration, and it is presumptuous,
    and irrelevant, to say that CO2 will continue to absorb more radiation
    until its concentration is infinite.


    Obviously, logarithmic behaviour breaks down for very low CO2 concentrations (otherwise an almost CO2-free atmosphere would exhibit a negative greenhouse effect). What happens at the very high end is anybody’s guess. But for the range we are actually looking at — say, pre-industrial to twice pre-industrial — do you have any reason to believe the models, which robustly predict log behaviour based on well established physics, have got it wrong?

    Ray’s statement is certainly relevant as a counter-statement to the claim of saturation within the range of concentrations we are actually talking about, which, IIRC, is what started this discussion.
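    The low-concentration breakdown mentioned above is easy to see with the standard simplified forcing fit, ΔF = 5.35 ln(C/C0) W/m2 (Myhre et al. 1998); taken literally at tiny concentrations it produces an unphysical, hugely negative “forcing”, which is exactly why the fit is only trusted near the range it was derived for:

```python
import math

# Simplified CO2 forcing fit (Myhre et al. 1998): Delta_F = 5.35 * ln(C/C0) W/m^2.
# Valid roughly around present-day concentrations; not at the extremes.
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_forcing(560.0))  # ~ +3.7 W/m^2 for a doubling
print(co2_forcing(0.28))   # ~ -37 W/m^2: unphysical; the fit is outside its range
```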

    Comment by Martin Vermeer — 20 Jan 2008 @ 1:44 PM

  369. Gavin- We could take this aggrieved academic show on the road – Dr. Abbott and Prof. Costello ;-)

    To recap:

    1. You complained that I posted up a graph without displaying proper uncertainty ranges

    2. I replied that I had discussed them in the text.

    3. I pointed to a graph that you posted without uncertainty ranges.

    4. You replied that you had discussed them in the text.

    Anyway, the substantive discussion of uncertainties I presented in #363 should make my views on this completely unambiguous.

    Shall we go around again? Or is everything quite clear by now? ;-)

    Comment by Roger Pielke. Jr. — 20 Jan 2008 @ 1:58 PM

  370. Lawrence,

    you awoke the technology nut in me.

    is there technology available to filter and trap
    CO2/carbon when it leaves the aircraft’s engines.

    Why burn carbon at all? Burn hydrogen.

    solar power is next to useless on aircraft,

    Not if beamed down from space by laser beam (seriously!)

    [nuclear aircraft] the possibility of crash landings

    Why allow them to land? Visit them in the air by smaller shuttle aircraft.
    (I know, I know. Perhaps not :-)

    Comment by Martin Vermeer — 20 Jan 2008 @ 1:59 PM

  371. lgl (#364) wrote:


    Thanks Timothy, but again I disagree.
    Still they only manage to get a flat curve between the eruptions, and
    “Although the overall trend in temperature has been modeled
    previously (5, 9, 10), …”

    You are quoting from the opening paragraph. It states what hasn’t been achieved — until now. And they manage to get basically the same curve (with envelope) in the simulation as we get from observation. The progressive flattening of the curve between eruptions has to do with the drop in the rate of ozone depletion. Just as I suggested would be the case in 359:

    Ozone depletion has gone flat and may even be recovering a bit.

    Ergo, there isn’t that much reason to expect the lower stratosphere to continue cooling. It could even warm up a little if ozone recovers appreciably. However, the middle and upper stratosphere should continue to cool.

    The reason is that the principal cause of cooling in the lower stratosphere is the depletion of ozone, not higher levels of greenhouse gases. Rising levels of carbon dioxide are the principal cause of cooling in the middle and upper stratosphere.

    Their case is made a bit more impressive by the temporal evolution diagrams showing temperature anomaly by latitude and year, despite the fact that they left the Quasi-Biennial Oscillation out of the model.


    Going back to your previous post…

    lgl (#360) wrote:

    If this is correct, the top of troposphere isn’t warming either. (0,03K/dec)


    That isn’t “top of troposphere.”

    Comment by Timothy Chase — 20 Jan 2008 @ 1:59 PM

  372. lgl (365) — Greenland ice core records show that, for example, the aerosols (sulfates) in the atmosphere due to the Mt Toba super-eruption about 74–71 kya disappeared in 3–6 years. Not decadal scale.

    Comment by David B. Benson — 20 Jan 2008 @ 2:00 PM

  373. lgl, Gavin’s exactly correct in answering your question about the historical record on ozone depletion.

    > likely unprecedented in the historical evolution

    Catalysis by chlorofluorocarbons; also, cooling of the stratosphere from the greenhouse changes has reached the temperature where ice clouds that enhance the destruction of ozone form more often, which is delaying the recovery.

    Prehistory offers one other mechanism that depletes the ozone layer, but if it had happened recently we wouldn’t be here to notice it:

    The third possibility — the one Crutzen pointed out in his Nobel Prize lecture: if industry had chosen bromine rather than chlorine to create halogenated fluorocarbons, the ozone layer would have been gone before we had a clue we were causing the problem.

    See references here:

    Blind luck, on that one, the economics were apparently close to 50-50 between choosing chlorine or bromine as a feedstock.

    Who knew?

    Comment by Hank Roberts — 20 Jan 2008 @ 2:41 PM

  374. #370

    TTS is at 10 km. That’s above the top of the polar troposphere and below the top of the tropical troposphere. I assumed that if there is almost no warming at 10 km there’s probably even less at the top of the tropical troposphere, but maybe I’m wrong.

    Comment by lgl — 20 Jan 2008 @ 2:45 PM

  375. #371

    But how much ozone disappeared?

    Comment by lgl — 20 Jan 2008 @ 2:48 PM

  376. #370

    Why is the temp drop between 1960 and 1975 as large as between 1990 and 2005?

    Comment by lgl — 20 Jan 2008 @ 3:34 PM

  377. Martin (368) says, “….do you have any reason to believe the models, which robustly predict log behaviour based on well established physics, have got it wrong?…”

    To be truthful, this is an area where I have doubts, but I’m a long way from claiming with any credibility that it’s wrong. But that wasn’t my (simple?) point, which explicitly questioned the ln(n) relationship at the extremes, in particular when n = infinity. I am not quarreling with the claim “within the range of concentrations we are actually talking about…”

    Comment by Rod B — 20 Jan 2008 @ 4:34 PM

  378. #366 Also the various isotopic combinations, e.g. 18O, 13C, etc.

    Comment by Eli Rabett — 20 Jan 2008 @ 5:19 PM

  379. #370. Increasing ghg concentrations are more responsible for stratospheric cooling than ozone depletion. See also Uhreck’s page referenced at the link

    Comment by Eli Rabett — 20 Jan 2008 @ 5:40 PM

  380. Re: #377: Rod, then we agree. And I assume Ray agrees too. If you look back at his post #305, you see that he never claims that the log(n) relationship holds valid to infinity, but rather that Fred Staples’ earlier claim that logarithmic behaviour is the same as saturation is simply not true. I’m surprised you missed that logic.

    Comment by Martin Vermeer — 20 Jan 2008 @ 6:46 PM

  381. lgl Says:

    > But how much ozone disappeared?

    Review. Stratospheric ozone depletion
    Issue Volume 361, Number 1469 / May 29, 2006
    Pages 769-790
    DOI 10.1098/rstb.2005.1783
    Author F. Sherwood Rowland

    Comment by Hank Roberts — 20 Jan 2008 @ 6:48 PM

  382. The earlier-century warming had a lot to do with anthropogenic and solar forcing, a lack of volcanic activity, and some internal variability. From my impression of most people on Facebook, they seem to assume that *we* think CO2 is the only thing relevant to climate, so that if the sun goes down by 5%, CO2 increases, and it cools, our theory is falsified.

    By the way, raypierre, when exactly does N2 become able to absorb/emit IR? I didn’t know that; I didn’t even think it could ever do that.

    Comment by Chris Colose — 20 Jan 2008 @ 6:52 PM

  383. #382 Chris, for all practical purposes N2 does not absorb or emit IR. However, if you REALLY want to get picky then you have to start by including collision-induced absorptions (the collision complex formed by two molecules in the act of colliding) and quadrupole moments (the normal interaction of molecules with light is due to electric dipole interactions between the photons and the molecules). You can find the quadrupole and magnetic dipole terms discussed in books on electromagnetic theory (Jackson is the default). The collision-induced stuff is so far in the weeds that you have to go to the primary literature.

    For nitrogen see Demoulin, Farmer, Rinsland and Zander, JGR 96, 13003 (1991). The optical depth from the top of the atmosphere to the top of the Alps is about 10% absorption. O2 has both quadrupole and magnetic dipole lines which are about as strong (Balasubramanian, D’Cunha and Rao, JMS 144, 374 (1990); also Rinsland et al., JQSRT 48, 693 (1992)). Both are minor but real. The amount of energy deposited into any small region of the atmosphere is zilch, so this is one of those things you can stump an expert with, but in the long run it doesn’t matter much.

    Comment by Eli Rabett — 20 Jan 2008 @ 8:56 PM

  384. Eli Rabett (#378) wrote:

    #370. Increasing ghg concentrations are more responsible for stratospheric cooling than ozone depletion. See also Uhreck’s page referenced at the link.

    Eli, I had raised this very issue (whether stratospheric cooling is principally due to ozone depletion or to rising levels of carbon dioxide) back in #330, giving links to the IPCC (2001) and Wikipedia, where it is claimed to be principally due to ozone depletion, and to your “Stratospheric Cooling Rears its Ugly Head,” where you claim it is principally due to rising levels of carbon dioxide, and Gavin said:

    Response: It all depends on where you are. lower strat is principally ozone-depletion related (the majority of the MSU4 trend), while mid and upper strat (and mesosphere as well) is mainly CO2.

    I believe he had said something to this effect a while back, but I recalled it only on the second time through. In any case, the fact that ozone is more important in the lower stratosphere helps to make sense of the trend in lower stratosphere anomalies — which have behaved pretty much as models would project. (See #362.)

    But in any case, looking at your chart (I don’t have the air pressures down, but going off of the falling air pressure with increasing height), I believe you may be capturing the switch. In the lower stratosphere, ozone depletion appears to dominate. In the middle and upper stratosphere, rising GHGs dominate. I might have preferred three curves, though: one for GHG only, one for ozone only, and one for both. You have the last two, but not the first.

    Comment by Timothy Chase — 20 Jan 2008 @ 10:49 PM

  385. Martin (380). Ray said, “…Does log(n) go to infinity as n goes to infinity? Since I assume you know this, HOW CAN YOU CALL THAT SATURATION?..”

    I dunno. Maybe I misread that. If so I apologize.

    Comment by Rod B — 20 Jan 2008 @ 11:04 PM

  386. lgl (#374) wrote:


    TTS is at 10 km. That’s above the polar troposphere and below the tropic top of troposphere. I assumed if there is almost no warming at 10 km there’s probably even less at top of tropic troposphere, maybe I’m wrong.

    Actually the “Relative Weighting Function” diagram shows a peak at 10 km, but with wings going as far down as 0 km and as far up as nearly 35 km. And being above the troposphere in the polar regions means that the channel is picking up some of the stratosphere. It is therefore measuring both the stratosphere and the troposphere — and just enough of each that their trends nearly cancel.
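    The near-cancellation described above can be illustrated with a toy weighted average. The layer trends and channel weights below are invented for illustration only; they are not the real TTS weighting function:

```python
# Toy satellite-channel trend: a weighted blend of layer trends (K/decade).
# Numbers are illustrative, chosen so tropospheric warming and stratospheric
# cooling nearly cancel in the blended channel.
layers = [
    ("lower troposphere",   0.18, 0.25),  # (name, trend K/dec, channel weight)
    ("upper troposphere",   0.10, 0.40),
    ("lower stratosphere", -0.35, 0.35),
]
channel_trend = sum(trend * weight for _, trend, weight in layers)
print(f"blended channel trend: {channel_trend:+.3f} K/decade")  # small residual near zero
```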

    Comment by Timothy Chase — 20 Jan 2008 @ 11:08 PM

  387. I bought the article for temporary access, but am having technical problems; hopefully I will read it after contacting them.

    The technical chemistry/spectrometry is probably a bit above me. My question was mainly where the “atmosphere’s density” comes in as far as allowing N2 to act as a strong greenhouse gas on Titan, and any other differences between Titan and Earth in this regard. – chris

    Comment by Chris Colose — 20 Jan 2008 @ 11:21 PM

  388. Rod (#385): the saturation ‘meme’ is alive and kicking in denialist circles well over half a century after dying a natural death in the scientific community. What appeared to be your attempt at deflecting attention from Ray’s debunking of it looked so straight out of the [edit] Deniers’ Handbook (if you can’t win the debate, add confusion) that I reacted strongly. Pardon my thin skin.

    Comment by Martin Vermeer — 21 Jan 2008 @ 1:50 AM

  389. Gavin, thanks for the link to Dr. Pielke’s review of IPCC estimates of sea level rise vs. actual measurements. His summary is a little strange, however:

    “This state of affairs should give no comfort to anyone: over the 21st century sea level is expected to rise, anywhere from an unnoticeable amount to the catastrophic, and scientists have essentially no ability to predict this rise, much less the effects of various climate policies on that rise. As we’ve said here before, this is a cherrypickers delight, and a policy makers nightmare. It’d be nice to see the scientific community engaged in a bit less spin, and a bit more comprehensive analysis.”

    Well, there’s actually a very large literature on sea level rise. There are two major components – the thermal expansion of the oceans as warming proceeds, and the contribution from melting ice sheets. Here are some examples:

    1) Willis, J. K., D. Roemmich, and B. Cornuelle (2004), Interannual variability in upper ocean heat content, temperature, and thermosteric expansion on global scales, J. Geophys. Res., 109.

    This paper, which is all about how to combine satellite altimetry data with in situ measurements to get good estimates of upper ocean heat content, has some interesting observations that apply to some discussions on this thread:

    “In addition to the rate of ocean warming, the large-scale spatial patterns of heat content variability have been estimated. These show large amounts of interannual variability in the tropics, particularly during the 1997–1998 El Nino episode. The tropics experienced rapid heat loss following the peak of the El Nino, some of which may have been exported from the tropics to higher latitudes…”

    “Figure 11a shows no sharp rise in thermosteric expansion at the onset of the ENSO event. This implies that during an El Nino event, large amounts of heat are redistributed within the ocean, but little heat is lost or gained in the global average.”

    2) Miller, L. and Douglas, B.C. (2004) Mass and volume contributions to twentieth-century global sea level rise, Nature (428).

    This paper also attempts to sort out the components of global sea level rise:

    “Concerning the causes of sea level rise, our results provide clear evidence that changes in ocean volume due to temperature and salinity account for only a fraction of sea level change, and that mass change plays a dominant role in twentieth-century GSLR. This aspect of our results is consistent with the results of Antonov et al., who show that the global oceans freshened during the latter half of the twentieth century by an amount equivalent to 1.4 mm/yr of fresh water, but goes further by indicating that the source must be continental.”

    Concerning Dr. Pielke’s attacks on RealClimate as “spin”, we can also look at

    3) Rahmstorf, S. (2007) “A Semi-Empirical Approach to Projecting Future Sea-Level Rise”, Science, 315.

    “. . .Large uncertainties exist even in the projection of thermal expansion, and estimates of the total volume of ice in mountain glaciers and ice caps that are remote from the continental ice sheets are uncertain by a factor of two. Finally, there are as yet no published physically based projections of ice loss from glaciers and ice caps fringing Greenland and Antarctica.

    For this reason, our capability for calculating future sea-level changes in response to a given surface warming scenario with present physics-based models is very limited, and models are not able to fully reproduce the sea-level rise of recent decades. Rates of sea-level rise calculated with climate and ice sheet models are generally lower than observed rates.”

    There’s also this point, which is worth thinking about:

    “Paleoclimatic data suggest that changes in the final equilibrium level may be very large: Sea level at the Last Glacial Maximum, about 20,000 years ago, was 120 m lower than the current level, whereas global mean temperature was 4° to 7°C lower. Three million years ago, during the Pliocene, the average climate was about 2° to 3°C warmer and sea level was 25 to 35 m higher than today’s values. These data suggest changes in sea level on the order of 10 to 30 m per °C.”

    So, where is the spin and lack of comprehensive analysis that Dr. Pielke is upset about? Given the warming trends in the polar regions, it seems clear that Greenland and West Antarctica will continue to lose mass to the oceans over the next century at an ever-increasing rate. Exactly how long it will take to reach equilibrium is very uncertain, however.

    However, the reports are pointing towards very rapid increases in the rate of melting. See Greenland’s Ice Melt Grew by 250 Percent, Satellites Show, National Geographic News, Sept 2006 as well as Greenland Ice Sheet Is Melting Faster, Study Says, National Geographic News, Aug 2006. Those are news reports on two papers published in Science and Nature that rely on the GRACE (Gravity Recovery and Climate Experiments) satellites for direct ice mass measurements.

    Finally – whether or not the rate of sea level rise over the next 100 years (a rather arbitrary time frame) is “catastrophic” or not depends partly on how people respond to it. Hurricane Katrina, for example, was not intrinsically catastrophic – that was a result of poor disaster planning and a failure to maintain the levees (perhaps they should have asked the Dutch engineers for advice?).

    The best advice for policy makers would be three-fold: halt the use of fossil fuel combustion for energy, implement a massive global renewable energy infrastructure program, and prepare plans for the worst-case scenarios of the unavoidable effects of global warming that are already in the pipeline.

    Comment by Ike Solem — 21 Jan 2008 @ 6:11 AM

  390. Fred,
    Yes, the log(n) dependence was derived empirically. That does not diminish the fact that there are physical reasons for it–the thick tails of the absorption spectrum.

    As to the relaxation of the excited state–if it can absorb radiation, it can emit radiation, right. It can also relax collisionally–and this is what imparts energy to N2/O2. It’s a question of the proportion in each class, and that depends on the density of the air in the vicinity of the molecule. Keep in mind, also, that collisional excitation is possible, even at relatively low temperatures. This is how energy leaves the climate in the ghg absorption bands after all.

    Comment by Ray Ladbury — 21 Jan 2008 @ 6:30 AM

  391. Fred Staples posts:

    [[If a CO2 concentration increase from 280 to 560 increases global temperatures by 1 degree in 100 years]]

    Try 3 degrees.

    Comment by Barton Paul Levenson — 21 Jan 2008 @ 7:00 AM

  392. lgl posts:

    [[Why is the temp drop between 1960 and 1975 as large as between 1990 and 2005?]]

    The mean global annual surface temperature was flat between 1960 and 1975 and rose from 1990 to 2005. Where are you getting your figures?

    Comment by Barton Paul Levenson — 21 Jan 2008 @ 7:05 AM

  393. Chris Colose writes:

    [[The technical chemsitry/spectrometry is probably a bit above me. My question was mainly where the “atmospheres density” comes in as far as allowing N2 to act as a strong greenhouse gas on Titan, and any other differences between Titan and Earth in this regard– chris]]

    The surface air pressure on Titan is 146,700 pascals as opposed to 101,325 at sea level on Earth, but the main difference is the temperature — the surface temperature on Titan is 94.5 K, compared to 287 or 288 K on Earth. In those conditions, nitrogen becomes a greenhouse gas (though methane is also important in the Titan greenhouse effect). If you want to google the name McKay along with Titan, several of Chris McKay’s articles on the subject are free on the web.

    Comment by Barton Paul Levenson — 21 Jan 2008 @ 7:09 AM

  394. Tim and Eli,
    Figure 9.1(c)&(d) in AR4 graphically portrays the separate modeled effects of GHG and ozone respectively, for the period between 1890 and 1999. Have a look. The ozone cooling has an effect to a slightly lower height, but it is hard to see that ozone dominates cooling in the lower strat as indicated by Gavin.

    Comment by B Buckner — 21 Jan 2008 @ 8:39 AM

  395. My web site got deindexed by some hacker, and no longer shows up in web searches. I remember this happened to RealClimate a while back. How did you fix the problem?

    [Response: There's a revalidation process in Google tools somewhere, it can take a while though. - gavin]

    Comment by Barton Paul Levenson — 21 Jan 2008 @ 9:21 AM

  396. Instead of dithering about the present yearly ups & downs in the overall warming pattern, how about setting our sights on what might happen if we reach 6 degrees end of this century or in 2 centuries or more.

    SIX DEGREES by Mark Lynas will finally be released here in the U.S. on Jan 22, and Amazon is giving a 5% discount for preorders (I already got mine thru, which was a lot more expensive):

    For RC’s discussion of the book, see:

    Comment by Lynn Vincentnathan — 21 Jan 2008 @ 10:29 AM

  397. #392

    We were discussing the lower stratospheric temperature.

    Comment by lgl — 21 Jan 2008 @ 11:24 AM

  398. > 9.1(c)&(d) in AR4

    I can see it — the blue in (c), compared to (d).
    Did you look at the 2 “sources for further information” listed in the caption? (I’ve quoted the caption below.)

    For convenience of others wanting to follow, this is a link to the whole chapter:

    Caption: Figure 9.1. Zonal mean atmospheric temperature change from 1890 to 1999 (°C per century) as simulated by the PCM model from (a) solar forcing, (b) volcanoes, (c) well-mixed greenhouse gases, (d) tropospheric and stratospheric ozone changes, (e) direct sulphate aerosol forcing and (f) the sum of all forcings. Plot is from 1,000 hPa to 10 hPa (shown on left scale) and from 0 km to 30 km (shown on right). See Appendix 9.C for additional information. Based on Santer et al. (2003a).

    Santer, B.D., et al., 2003a: Contributions of anthropogenic and natural forcing to recent tropopause height changes. Science, 301, 479–483

    Don’t miss this followup, Santer et al.’s response to Pielke et al.’s skeptical comment on that article; the response, available as full text online, reads as good clear hard scientific argument.

    (Good illustration of why you should always read the footnote, look up the article, and look at subsequent citations and comments and followups — science grows like a plant, finding new resources and growing in all directions. Don’t focus on a ‘founder’ or ‘origin’ with science — focus on where it is now and how it’s growing.)

    This review also looks quite helpful, and is more recent:

    Detecting and Attributing External Influences on the Climate System: A Review of Recent Advances
    T. Barnett, F. Zwiers, G. Hegerl, M. Allen, T. Crowley, N. Gillett, K. Hasselmann, P. Jones, B. Santer, R. Schnur, P. Stott, K. Taylor, S. Tett
    February 2, 2005 Journal of Climate

    It begins:

    “The “International Ad Hoc Detection group” (IDAG) is a group of spe-ci-a-lists* on climate change detection, who have been collaborating on assessing and reducing uncertainties in the detection of climate change since 1995. Early results from the group were contributed to the IPCC Second Assessment Report (SAR; IPCC 1996). Additional results were reported by Barnett et al. (1999) and contributed to the IPCC Third Assessment Report (TAR; IPCC 2001). The weight of evidence that humans have influenced the course of climate during the past century has accumulated rapidly since the inception of the IDAG. While little evidence was reported on a detectable anthropogenic influence on climate in IPCC (1990), a ‘discernible’ human influence was reported in the SAR, and the TAR concluded that ‘most of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gas concentrations’. The evidence has continued to accumulate since the TAR. This paper reviews some of that evidence, and refers to earlier work only where necessary to provide context…..”

    * Thanks to the WordPress software, some words must now be hyphenated to be allowed.

    Comment by Hank Roberts — 21 Jan 2008 @ 11:47 AM

  399. What you called (Response #367) a nonexistent problem (i.e. the 1920s–1970s up and down in global temperatures) in fact continues to be used by many skeptics to push a falsehood that global warming is just cyclical and non-anthropogenic.

    The problem still exists.

    1890-2007 graphs of temperatures at U.S. climate stations in the Midwest and Great Plains show spikes in annual temperatures in 1931 and in 1921 which appear well beyond natural variability at many climate stations.

    The dust-bowl dry conditions of the early 1930s are well documented
    as extreme in comparison to all other droughts of record in the U.S. explained by El Nino or other factors.

    Comment by pat n — 21 Jan 2008 @ 11:52 AM

  400. Gavin, here are my thoughts on the analysis you performed related to the question I posed.

    Willis (2004) and Lyman et al. (2006) observe an ocean heat content increase between early 1997 and mid 1998 of around (eyeballing) 18 zettajoules (+/- 20 zettajoules). This is equivalent to a net surface energy flux of around 0.89 W/m2 (gained into the ocean). Now from your analysis, the model runs suggest that the atmosphere gained approximately 0.2 W/m2 during this same period (due to El Nino), and that the simulated net (upward) heat flux from the ocean was 0.7 W/m2 (cooling the ocean by the same amount), leaving a loss of 0.5 W/m2 to space (ocean heat loss minus atmospheric storage).
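    For reference, the back-of-envelope conversion from a heat content change to a mean flux works out as follows. This is a sketch: the 18 ZJ figure is the comment's eyeballed value, and the roughly 1.25-year window and the division by Earth's total surface area are assumptions chosen to reproduce the quoted ~0.89 W/m2:

```python
# Convert an ocean heat content change (in zettajoules) over a time window
# into an equivalent mean surface flux in W/m^2.
SECONDS_PER_YEAR = 3.156e7
EARTH_SURFACE_M2 = 5.10e14   # total surface area of the Earth

def mean_flux(delta_heat_zj, years):
    return (delta_heat_zj * 1e21) / (years * SECONDS_PER_YEAR * EARTH_SURFACE_M2)

print(round(mean_flux(18.0, 1.25), 2))  # ~0.89 W/m^2
```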

    Now, if we accept for the sake of argument that the Willis (2004) stated heat gain is accurate (it has a significant error bar), this suggests that there is a difference between shortwave reaching the ocean surface and the modeled loss of heat to space, equal to 1.39 W/m2 over this period, contributing to the observed net 0.89 W/m2 gain of heat into the ocean. Now this difference might be attributed to changes in cloud feedback over the tropics, but this is a rather large variance.

    The 0.2 W/m2 heating of the atmosphere *directly* due to the 1998 Super El Nino thus contributed an estimated 12.6% (0.2 W/m2 / 1.59 W/m2) to the sum of all the atmospheric processes leading to the actual TOA radiative imbalance. Considering a graph of ocean heat content, it is now apparent to me why one cannot easily correlate OHCA with ENSO, because there is a bunch more going on in the system that is governing the TOA radiative imbalance.

    I therefore suggest that this analysis has some important implications. Firstly, even a very large El Nino (1998 event) will not have a dramatic direct effect on the TOA radiative imbalance over annual periods. The larger, indirect effect likely comes through changes in cloud and precipitation feedbacks in the tropics (and these may take time to adjust). Maybe this is a good reason to pay close attention to what Roy Spencer is working on in trying to observe some of these cloud feedbacks more closely.

    Based on this thinking, I see no reason not to stick to my guns when suggesting that even over annual spans, the change in ocean heat content is a good proxy (within the limitations of measurement accuracy) for the TOA radiative imbalance, which is due mainly to the sum of the non-equilibrium radiative forcings+feedbacks, following Ellis et al. (1978) and Pielke (2003).

    [Response: Be careful here. The numbers I gave were for a simulation that just had an El Nino event - but with no other forcings. Therefore the difference between that and Willis et al is much more than just in the cloud feedbacks. The estimate of the 0.2 W/m2 gain by the atmosphere over that period is probably reasonable, so you could infer a 1.09 W/m2 TOA imbalance - if the short term OHC numbers are correct. However, the Chen et al paper show a clear increase in outward LW in the ERBE data during the 1987/88 El Nino. This might be clearer if you could analyse just the tropical oceans to see if the tropics lost heat that was then taken up in mid to high latitudes. - gavin]

    Comment by Bryan S — 21 Jan 2008 @ 12:00 PM

  401. Both 389 and the Pielke essay it critiques make it clear that despite the cost and weight, hard copy distribution of AR4 is vital, because a lot of needless controversy is arising from the difficulty of fast parallel access and the lack of a good online index.

    This recalls earlier problems arising from multivolume studies like SCOPE-ENUWAR and many other large reports – sometimes the pseudo-linear narrative form of online single (or at best double) screen access gets in the way of discovering what lies only a few flippable pages away in the hard copy – if you have it.

    The more tedious the read, the better the beach you need to read it on.

    Comment by Russell Seitz — 21 Jan 2008 @ 12:08 PM

  402. Re: #389 “Figure 11a shows no sharp rise in thermosteric expansion at the onset of the ENSO event. This implies that during an El Nino event, large amounts of heat are redistributed within the ocean, but little heat is lost or gained in the global average.” (Willis et al, 2004)

    Thanks Ike for helping support my case to Gavin!

    Comment by Bryan S — 21 Jan 2008 @ 12:11 PM

  403. #373

    Where is the historic ozone record?
    Is there a proxy somewhere?

    Comment by lgl — 21 Jan 2008 @ 12:57 PM

  404. Martin (388), you’re pardoned. I would suggest being a little less Pavlovian, which I think caused many to attack me for something I didn’t say… maybe.


    Comment by Rod B — 21 Jan 2008 @ 2:03 PM

  405. B Buckner (#394) wrote:

    Tim and Eli,

    Figure 9.1(c)&(d) in AR4 graphically portrays the separate modeled effects of GHG and ozone respectively, for the period between 1890 and 1999. Have a look. The ozone cooling has an effect to a slightly lower height, but it is hard to see that ozone dominates cooling in the lower strat as indicated by Gavin.

    Perhaps the biggest thing that I notice with respect to the “well-mixed greenhouse gases” figure (c) of that diagram is that there is both warming and cooling between 75-100 hPa with respect to the greenhouse gases. To the extent that the two evenly balance in the lower stratosphere, they will tend to cancel as far as the overall effect goes.

    Moreover, the temperature change due to WMGHGs includes carbon dioxide, methane and nitrous oxide and would have been from 1880 forward. According to Hansen, carbon dioxide has played less of a role in earlier twentieth century warming, with methane playing more of a role — as methane levels flattened in the later part of the twentieth century. (See for example Hansen et al, Climate Change and Trace Gases, Phil. Trans. R. Soc. A (2007) 365, 1925–1954.)

    With respect to ozone vs. WMGHGs, it would be understandable if Gavin were to confine his analysis to the period of satellite observation. Assuming a constant geometric increase in carbon dioxide, the rate of additional forcing due to carbon dioxide would be constant from 1880 forward. The effects of CFCs and ozone depletion would have been after their introduction in the 1930s, and thus would have largely missed the period of early twentieth century warming.

    And likewise,

    Ramaswamy et al., Anthropogenic and Natural Influences in the Evolution of Lower Stratospheric Cooling, Science 24 February 2006: Vol. 311. no. 5764, pp. 1138 – 1141, DOI: 10.1126/science.1122587

    … shows a flattening in recent years and virtually replicates the satellite observations of lower stratospheric cooling, despite the fact that they did not include the Quasi-Biennial Oscillation in their calculations. The flattening doesn’t make sense under the assumption that carbon dioxide dominates the lower stratosphere, since carbon dioxide has continued to rise geometrically, but it does make sense in terms of ozone depletion, since that has flattened. Still, I wish they would split out the different greenhouse gases rather than putting them all into the category of WMGHGs.

    Comment by Timothy Chase — 21 Jan 2008 @ 2:12 PM

  406. lgl Says: Where is the historic ozone record?

    Pasting your question into the search box for your convenience:

    Clicking on the most likely result in the first screenful:

    Your second question into the search box for your convenience:

    I’d suggest repeating the searches in Google Scholar, and following the ‘related’ and ‘cited’ information forward in time.

    Google has no ‘wisdom’ option as Coby reminds us.
    Reading will be necessary.

    I’d suggest adding -SEPP to your search to reduce the bogosity level, but such decisions are yours to make.

    Are you okay taking it from there?

    Comment by Hank Roberts — 21 Jan 2008 @ 2:27 PM

  407. Thanks Hank,
    The problem is 999 of 1000 hits deal with polar region which is not that interesting, and yes I have tried adding things like ‘low latitude’.
    But never mind, I will probably find something in a few days :-)

    Comment by lgl — 21 Jan 2008 @ 3:22 PM

  408. lgl (#402) wrote:


    Where is the historic ozone record?
    Is there a proxy somewhere?

    Checking the article Hank gave us in 381, we have been measuring total column ozone since the 1920s:

    Detailed knowledge of the Earth’s atmospheric ozone distribution seasonally and geographically was outlined beginning in the 1920s by a series of experimenters, especially G. M. B. Dobson, who began regular measurements with an UV spectrometer—an improved version of Hartley’s measurement technique. Dobson realized that careful measurements of the relative intensities of solar UV radiation at different wavelengths could be converted into quantitative estimates of the amount of ozone overhead, and his initial work near Oxford established that the amount of ozone varied from day to day and month to month.

    Review. Stratospheric ozone depletion
    Issue Volume 361, Number 1469 / May 29, 2006
    Pages 769-790
    DOI 10.1098/rstb.2005.1783
    Author F. Sherwood Rowland

    (Anything in pub med more than six months old is open access, I believe. I found it invaluable in my evo days.)

    That would be before the introduction of CFCs in the 1930s, although there should have been some depletion before CFCs due to stratospheric water vapor as the result of increased levels of methane, but I believe that would have been comparatively small.

    Comment by Timothy Chase — 21 Jan 2008 @ 3:42 PM

  409. lgl Says: “… polar region which is not that interesting, and yes I have tried adding things like ‘low latitude’.”

    +historic +ozone +record -polar

    gets you, for example, this within the first page of hits, just as an example:

    Atmos. Chem. Phys. Discuss., 5, 10925–10946, 2005
    SRef-ID: 1680-7375/acpd/2005-5-10925

    Detection and measurement of total ozone from stellar spectra:
    Paper 2. Historic data from 1935–1942

    Atmospheric ozone columns are derived from historic stellar spectra observed between 1935 and 1942 at Mount Wilson Observatory, California. Comparisons with contemporary measurements in the Arosa database show a generally close correspondence. The results of the analysis indicate that astronomy’s archives command considerable potential for investigating the natural levels of ozone and its variability during the decades prior to anthropogenic interference.

    Using Google,
    –”search within” at the bottom of the first page will let you refine.
    –The minus sign is your friend.

    Google Help : Advanced Search
    Once you know the basics of Google search, you might want to try Advanced Search, which offers numerous options for making your searches more precise and

    Look at the actual search string created by using Google’s page of fill-in-boxes ‘Advanced Search’ page and you can figure out how to type them in yourself.

    I’d expect there’s a description somewhere of how to use Google as a command line tool without all the cute boxes to fill in; anyone got a pointer to that?

    Comment by Hank Roberts — 21 Jan 2008 @ 4:15 PM

  410. Re #400: “Therefore the difference between that and Willis et al is much more than just in the cloud feedbacks”.

    Gavin, you are right, this difference involves the complete scope of the external forcings and other weather processes combined. I think my central point still holds, however: that it would be difficult to change the sign of the TOA imbalance (cause the total heat budget to experience a net loss to space), even with a very strong El Nino. Maybe some modeling experiments need to be done, to see what forced changes in the tropics would allow such a scenario to occur, given the current TOA non-equilibrium external forcings? It certainly seems probable that if one considered the entire ENSO cycle, the percentage of the TOA imbalance attributable directly to ENSO would approach zero. It also seems to me that if it turns out we observe multiple years of very little heat accumulation in the system (or even small net losses to space), or a great deal of gain for that matter, it is important to ascertain whether the models are adequately simulating this variability. I do want to thank you for engaging in this very interesting conversation and wish you the best in your research.

    Comment by Bryan S — 21 Jan 2008 @ 4:44 PM

  411. #15: R. Pielke Jr.: “Models of open systems cannot in principle be “validated””

    Do you believe this to be generally true? Without qualification, it’s obviously false. I happen to be boiling some water on my stove at the moment. I’m completely confident that I could successfully validate the open-system model for it that I would write down from my undergraduate transport textbook. More complex models of open systems are easy to find in engineering – such as just about any combustion engine, or just about everything in biology.

    What did you really mean?

    [Response: He's referring to a piece by Naomi Oreskes. It's (IMO) a rather semantic distinction, that derives from the lack of absolute proof in the real world. If you define validation as 'proving true' , then it's impossible, but if you define it as 'proving useful', there is no problem. The latter is what everyone is really doing, whatever the word is that is being used. - gavin]

    Comment by Andrew — 21 Jan 2008 @ 5:46 PM

  412. Re #400 (Bryan S.): “Maybe this is a good reason to pay close attention to what Roy Spencer is working on in trying to observe some of these cloud feedbacks more closely.”

    My goodness your biases just pop out all over. There are lots of rather more credible people working on this stuff. Of course it can be safely predicted that you’ll like Spencer’s results better.

    Also regarding ocean heat content, the numbers are how reliable as long as the deep ocean data (or, perhaps more to the point in the short term, data relating to heat transfer to and from the deep oceans) is missing?

    Comment by Steve Bloom — 21 Jan 2008 @ 6:28 PM

  413. Dr. Schmidt, is this is your definition of “validation” – proving useful, as opposed to proving true?

    [Response: In some sense I suppose. Useful is always a matter of degree, therefore the binary distinction implied by validated/invalidated is not really relevant. But if a model has been shown to be useful in predicting some aspect of the real world, it could be considered valid. This is all moot though - I hardly ever use the word. - gavin]

    Comment by Richard Sycamore — 21 Jan 2008 @ 6:48 PM

  414. Andrew (411) [and response] — If people would use the Bayesian terminology of confirmation and disconfirmation rather than the quite confusing (in this context) terms of ‘proof’ and ‘disproof’, much confusion would be avoided.

    The Oreskes et al. abstract does use ‘confirmation’, but rather disparagingly, which is unfortunate. After all, if the weight of the evidence gives odds in favor of hypothesis H against hypothesis K of 100:1, which would you bet on?

    I would go further than Gavin in complaining about that abstract. It seems to treat V&V as absolute, which is not true in any practice outside of mathematics and certain rather special parts of computer science, AFAIK.

    Comment by David B. Benson — 21 Jan 2008 @ 6:49 PM

  415. A broker cannot, in principle, be proved honest.
    Some brokers can be proved useful.

    Comment by Hank Roberts — 21 Jan 2008 @ 6:54 PM

  416. Re #400:

    The larger indirect effect of years with strong El Nino conditions on global annual-average temperature may be due mainly to increases in low-level moisture in mid N.H. latitudes in winter, and not likely to result from changes in the tropics’ cloud and precipitation feedbacks.

    Evidence at:

    Comment by pat n — 21 Jan 2008 @ 7:39 PM

  417. Richard Sycamore, In a scientific context, a useful model or hypothesis is one that yields reliably true predictions. Do you have a better suggestion for how to define “true”. If you are holding out for Truth, might I suggest theology.

    Comment by Ray Ladbury — 21 Jan 2008 @ 8:17 PM

  418. Re #112: Steve Bloom,
    If you see your brother standing by the road
    With a heavy load from the seeds he’s sowed
    And if you see your sister falling by the way
    Just stop and stay you’re going the wrong way

    You got to try a little kindness
    Yes show a little kindness
    Just shine your light for everyone to see
    And if you try a little kindness
    Then you’ll overlook the blindness
    Of narrow-minded people on the narrow-minded streets
    (Glen Campbell, 1969?)

    Comment by Bryan S — 21 Jan 2008 @ 9:43 PM

  419. nos. 413 and 411

    The Oreskes piece that Gavin dismisses as dealing in “semantic distinctions” is a widely cited and highly influential piece from Science that anyone interested in the role of models in decision making should read. There is a large and valuable literature on models in public policy.

    Oreskes et al. conclude their paper with the following:

    “Finally, we must admit that a model may
    confirm our biases and support incorrect
    intuitions. Therefore, models are most useful
    when they are used to challenge existing
    formulations, rather than to validate or
    verify them. Any scientist who is asked to
    use a model to verify or validate a predetermined
    result should be suspicious.”

    We can ask hard questions of widely accepted beliefs. It is when such questions are unwelcome or summarily dismissed that science suffers. In the end, asking such questions will make our knowledge that much more rigorous, and useful, even if the questions themselves are politically uncomfortable, or challenging to the purveyors of quantitative models.

    [Response: There is a world of difference between asking 'hard questions' and using appropriate means to answer them. But sure, read the Oreskes piece, it's interesting. - gavin]

    Comment by Roger Pielke. Jr. — 21 Jan 2008 @ 10:05 PM

  420. “And of course, it’s well established by now that the main reason for the interruption of the warming is aerosols, notwithstanding that there is still considerable uncertainty regarding the secondary aerosol effect on clouds.”

    How can it be “well established,” given these facts:

    1) Worldwide emissions of sulfate aerosols continued to rise well past the mid-1970s (to approximately the mid-1990s by most analyses),

    2) The rise in temperatures in the last decades has been mainly in the northern hemisphere, even though the concentrations of sulfate aerosols are much higher in the northern hemisphere, and

    3) As you yourself note, the IPCC classifies level of scientific understanding of the cloud albedo cooling effect of aerosols as “very low.”

    Comment by Mark Bahner — 21 Jan 2008 @ 10:19 PM

  421. Well, yeah.

    Fossil fuel

    ‘Business as usual until enough damage is proven’ doesn’t work.

    Models surprise us. They allow understanding in time to take precautionary policy steps, before the damage builds up.

    What’s epidemiology? Modeling.

    That’s why the science is attacked so hard by lobbyists.

    Comment by Hank Roberts — 21 Jan 2008 @ 10:23 PM

  422. This University of Florida web page has the definition of validation I’m used to:

    In contrast, the definition of validation, “a demonstration that a model within its domain of applicability possesses a satisfactory range of accuracy consistent with the intended application of the model,” refers to its performance (Rykiel, 1996). Validation compares simulated system output with real system observations using data not used in model development.

    Defined this way, you can’t separate a judgment about whether or not any code or model is valid from the intended application of the model.

    With regard to validating GCMs (since that’s the topic here) one must ask whether the intended application is

    a) to obtain qualitative results that guide thinking? Or
    b) to predict a metric like the future annual average mean surface temperature for the next 10 years within 1% of the true value with some high degree of confidence? or
    c) Something in between.

    Obtaining validation for the former (a) is rather easier than the latter. It’s also rather important to make sure that we don’t insist we’ve achieved validations of type (b) simply because we’ve achieved (a). It’s equally bad to decide that a model is utterly invalid for any and all purposes because we haven’t been able to validate to very stringent quantitative standards.

    There is valid and there is valid. Forgetting which type of valid we mean results in the logical fallacy of equivocation.

    Comment by lucia — 21 Jan 2008 @ 10:51 PM
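
    The Rykiel-style definition quoted above (accuracy within a satisfactory range, judged on data not used in model development) can be sketched as a simple held-out comparison. A minimal illustration, in which the data, the tolerance values, and the `validate` helper are all hypothetical, chosen only to show how "valid for (a)" and "valid for (b)" can diverge:

```python
def validate(predictions, observations, tolerance):
    """'Valid' in Rykiel's sense: RMSE against held-out observations
    falls within the tolerance set by the intended application."""
    if len(predictions) != len(observations):
        raise ValueError("mismatched series")
    n = len(predictions)
    rmse = (sum((p - o) ** 2
                for p, o in zip(predictions, observations)) / n) ** 0.5
    return rmse <= tolerance

# Hypothetical held-out data: model hindcast vs. observed values.
pred = [14.2, 14.4, 14.5, 14.7]
obs = [14.1, 14.5, 14.4, 14.9]

print(validate(pred, obs, tolerance=0.3))   # loose, "type (a)" use: True
print(validate(pred, obs, tolerance=0.05))  # stringent, "type (b)" use: False
```

    The same model passes the loose tolerance and fails the stringent one, which is the "there is valid and there is valid" point in code form.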

  423. Roger Pielke,

    based on this thread, I think you are much better off asking questions instead of going on kamikaze missions against gavin, RC, AGW, models, or whatever else is on the list for that day. I am fully aware that the blogosphere loves viewing these arena fights between you, McIntyre, gavin, and the other well known climate blogs out there, but in reality it is not very productive, nor is it educating the people who have a dispassionate consideration of the issues. Maybe if ESPN wants to fund blog matches, then we can discuss, but it would be nice to learn things as well. From my impression of the blogs throughout the internet, the only comments I’ve seen from people without much understanding of the issues are “Oh, gavin got caught lying by Roger” or “Oh, Roger got destroyed by gavin.” Very cute. Maybe these people would like to read some actual science, and not bickering, and no offense, but I do not see gavin doing the bickering and cherry-picking and misrepresenting.

    You have the credentials to teach people here, at prometheus, and other places (including me). However, no one is going to take you seriously if you just complain about everything that climate science has to offer, document what you don’t like in your blog, turn off comments, then come over here, and start telling an expert modeller how crappy his profession is after throwing out misrepresentations of what he or Hansen or others said.

    I know that scientists have gotten deep into technical research, but maybe we need to revisit our grade school books, in Ch. 1, on the scientific method. The harsh reality is that we don’t have an “extra” experiment earth, and we don’t have a time machine. If models are going to be used, they need to have a certain degree of “usefulness” (e.g. predicting the past, predicting the present). Explanatory and predictive power is actually important here. You can’t “prove” beyond a shadow of a doubt that a model run at 2x CO2 will reproduce exactly what will happen. Even when you get 2x CO2, and observations match models, you can’t “prove” the model was dead on, because the model could have missed two things entirely which offset each other, so there was no real difference.

    Comment by Chris Colose — 21 Jan 2008 @ 11:52 PM

  424. #422 That’s the definition I typically go by: “satisfactory range of accuracy”. Incorrect models can be useful. They just aren’t valid in the sense of having satisfactory predictive skill. I hope the GCMs are more than that! I hope there’s more science than art to the way they are tested!

    Comment by Richard Sycamore — 22 Jan 2008 @ 12:01 AM

  425. The words “strong el nino” appear three times in this thread. Can anyone tell me how the strength of an el nino (and/or la nina) is defined and measured? One paragraph and a citation would be nice, but any and all help gratefully received.

    Comment by Fair weather cyclist — 22 Jan 2008 @ 5:21 AM

    #420 I am wondering how you can claim that most analyses say that emissions of sulphur dioxide increased from the mid-1970s to 1995.
    EPA estimates for the US: 1970: 28 000 tonnes; 1980: 22 000 tonnes; 1990: 21 000 tonnes; 1996: 17 000 tonnes.
    Estimates for Europe (EMEP): 1980: 59 000 tonnes; 1990: 42 000 tonnes; 1995: 31 000 tonnes.
    The European emission numbers also include around 3 000 tonnes from volcanic and oceanic sources.
    It is possible that there has been an increase from 2000 to the present day, caused by an increase in Asian emissions.

    Comment by oyvinds — 22 Jan 2008 @ 7:18 AM

  427. oyvinds, not to mention China and India.

    Comment by Fair weather cyclist — 22 Jan 2008 @ 8:34 AM

  428. #366.

    Fred Staples questions the validity of the logn2 formula for assessing the surface temperature impact of escalating atmospheric CO2 levels and asks for an explanation of the mechanism by which radiation escapes from the top of the atmosphere to space. In a partial response, Ray Pierrehumbert suggests that there are two counter-arguments to the “saturation” claim – the first is “thinning and cooling” at high level and the second relates to the lack of saturation in the wings of the CO2 absorption bands (at any level). He cites the second as being the more important.

    Might I ask for clarification?
    1) What proportion of total warming does Raypierre attribute to the “more important” wings and has his assessment taken into account the overlapping absorption spectra of CO2 and water vapour in these wings?
    2) At high levels of the atmosphere, where water vapour is largely absent, radiation escape to space must presumably depend upon the presence of CO2. Is it not possible, therefore, to speculate that adding extra CO2 from an initial low concentration could initially facilitate rather than interfere with radiation escape to space?
    3) Could it not follow, therefore, that the logn2 formula may be valid for the lower layers but not for the increasing CO2 concentrations of the upper layers?
    4) An excited CO2 molecule can apparently lose its energy by heating the surrounding air or by emitting a photon. Can the photon emitted ever be of longer wavelength than that absorbed? If so, the longer wavelength photon could readily reach space without further ghg interference if emitted at a level at which water vapour was absent.
    5) I understood that heat rises while radiative energy can travel in any direction. However, wouldn’t its net direction of travel be upwards if the layer below is very much more opaque than that above?
    6) How does energy high in the atmosphere ever get back down to the surface?
    7) Do atmospheric vortex engines, tornadoes etc, which deposit surface energy high in the atmosphere have other than temporary surface cooling effects if the energy so deposited can only exit the system by being translated into radiative energy, courtesy of a finite number of CO2 molecules?

    These are questions from a non physicist trying hard to understand.

    Comment by Douglas Wise — 22 Jan 2008 @ 8:49 AM
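
    For reference, the “logn2 formula” under discussion is the standard simplified expression for CO2 radiative forcing, ΔF = 5.35 ln(C/C0) W/m² (Myhre et al., 1998), which yields the same increment for every doubling of concentration. A quick numerical sketch of that property:

```python
import math

def co2_forcing(c, c0):
    """Simplified CO2 radiative forcing (Myhre et al. 1998), W/m^2."""
    return 5.35 * math.log(c / c0)

# Each doubling adds the same increment (~3.7 W/m^2), which is why
# the forcing is quoted "per doubling" rather than "per ppm".
f_first_doubling = co2_forcing(560.0, 280.0)
f_second_doubling = co2_forcing(1120.0, 560.0)
print(round(f_first_doubling, 2))   # ~3.71
print(round(f_second_doubling, 2))  # same increment again
```

    Whether, and at what concentrations, this logarithmic fit breaks down in the upper atmosphere is exactly the kind of question Douglas raises; the formula itself is an empirical fit to detailed radiative transfer calculations, not a law.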

  429. Chris (No. 423)- How to respond on blogs when one’s work is misrepresented always requires choices (e.g., let the misrepresentation stand, or perhaps look like a complainer). For example, in your comment you obviously have me confused with someone else, as we’ve never turned our blog comments off. So if you have accurate, substantive critiques, please do share them, and I’ll be happy to engage. But do get your facts right first.

    Comment by Roger Pielke. Jr. — 22 Jan 2008 @ 9:59 AM

  430. Re #425 Where “Fair weather cyclist” asks about strong El Ninos. They are described here:

    Note that eyeballing the charts shows that there is a trend for El Ninos to get stronger and La Ninas to get weaker.

    Comment by Alastair McDonald — 22 Jan 2008 @ 10:00 AM
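
    To make the question in #425 concrete: the usual yardstick is NOAA’s Oceanic Niño Index (ONI), the 3-month running mean of SST anomalies in the Niño 3.4 region, with events binned by their peak anomaly. A sketch using the commonly quoted category cutoffs (the thresholds follow NOAA convention as I understand it; treat them as an assumption, not something from this thread):

```python
def oni_category(oni):
    """Rough ENSO strength from an ONI value (3-month running mean
    SST anomaly, deg C, Nino 3.4 region). Thresholds follow the
    commonly quoted NOAA convention; treat them as approximate."""
    if oni >= 0.5:
        phase = "El Nino"
    elif oni <= -0.5:
        phase = "La Nina"
    else:
        return "neutral"
    mag = abs(oni)
    if mag >= 2.0:
        strength = "very strong"
    elif mag >= 1.5:
        strength = "strong"
    elif mag >= 1.0:
        strength = "moderate"
    else:
        strength = "weak"
    return f"{strength} {phase}"

print(oni_category(2.3))   # the 1997/98 event peaked well above 2.0
print(oni_category(-0.7))
```

    Note that this also illustrates the circularity worry in #434: the strength classification is itself a temperature measurement, albeit over a defined tropical Pacific box rather than the global mean.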

  431. I’ll try, Douglas, using your numbers

    1) That’s from the radiation physics references, not Ray’s work.
    2) CO2 is well mixed; when added it increases all through the atmosphere, not just at the top.
    3) Therefore, no.
    4) That’s in the radiation physics references; no, doesn’t follow.
    5) Opaque absorbs and re-emits and absorbs again, so, no.
    6) Same as on its way in, as radiation, plus getting absorbed.
    7) No; moving air rearranges the heat.

    Those are amateur answers from reading here, and not meant as more than doggerel; the real answers take advanced coursework, and I’m relying on having read what’s written for nonphysicists.

    Comment by Hank Roberts — 22 Jan 2008 @ 10:19 AM

  432. Douglas Wise–the main thing to do is keep in mind where the energy is coming from, and in the IR, nearly all of it is coming from Earth’s surface or from the lower, warmer layers of the atmosphere. Re 1) the only way adding more CO2 could have zero effect is if there were no photons left in the absorption band of CO2. Remember, water radiates, too, and up to the point where water peters out, most heat is transported via convection, not radiation.
    2)Remember the direction of NET energy flux. Since more energy is coming from below, the net effect will be warming.
    4)Absorption and emission are completely symmetric. If a molecule can emit a photon of a given wavelength, it can also absorb it.
    5)What do you think happens when the photon encounters an opaque layer? It gets absorbed, right? That means it stays in the system.
    6)Mainly backradiation and convection (i.e. the air is warmer when it gets back down to the surface than it would be w/o extra ghgs, so it transports less energy away from the surface)
    7)Transporting energy to the upper troposphere increases temperature, and so radiation; it promotes energy loss but the net effect depends on the ghgs above the level where you dump the energy.

    Comment by Ray Ladbury — 22 Jan 2008 @ 10:30 AM
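
    Ray’s point (6) about backradiation is the essence of the textbook single-slab greenhouse idealization: an atmospheric layer transparent to sunlight but absorbing in the IR re-emits half its radiation downward, so the surface must warm until it can shed both the solar input and the returned IR. A sketch of that idealization (a gross simplification included only to show why a surface under an absorbing layer ends up warmer, not how real GCMs work):

```python
# Idealized single-slab greenhouse: the layer absorbs all surface IR
# and re-emits equally up and down (the "backradiation").
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temps(absorbed_solar):
    """Surface and layer equilibrium temperatures (K) for a slab that
    is transparent to sunlight and opaque to surface IR."""
    # Layer must radiate the absorbed solar flux to space from its top.
    t_layer = (absorbed_solar / SIGMA) ** 0.25
    # Surface receives solar flux PLUS the layer's downward emission.
    t_surface = (2 * absorbed_solar / SIGMA) ** 0.25
    return t_surface, t_layer

# ~240 W/m^2 is roughly Earth's globally averaged absorbed solar flux.
ts, ta = equilibrium_temps(240.0)
print(round(ts), round(ta))  # ~303 K surface, ~255 K layer
```

    The real atmosphere is only partially absorbing and convection carries much of the heat, so the actual surface is cooler than this idealization suggests, but the sign of the effect is the point.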

  433. Question: how will it be possible to detect abrupt climatic changes *in time*, if they arise faster than our statistical definitions can register?
    After all, the definition of climate as a thirty-year average is completely manmade. Nature doesn’t care how we define climate. Records from ice cores suggest big climatic changes can actually occur very fast indeed. Last year’s dramatic melting of the Arctic sea ice may indicate that we are at a tipping point for the global climate system. The same goes for the recent data about dramatic melting of the Greenland ice sheet and dramatically accelerating flow in ice streams, with a correspondingly increasing outflow of icebergs from West Antarctica (NASA).

    Comment by Karsten J — 22 Jan 2008 @ 10:39 AM

  434. Alastair, thanks for that page: an interesting chart.

    It leads me to a worry though. Strong/weak ninos/ninas are frequently invoked to explain short term fluctuations in global mean temperature. Since the definition of strength is based wholly on temperature (admittedly in a defined area), is there an element of circularity or tautology in such statements?

    Comment by Fair weather cyclist — 22 Jan 2008 @ 10:52 AM

  435. Pielke Sr. turned comments off entirely at his blog.

    Pielke Jr. turned his blog off for a while. He’s back now.

    This may help:
    “RP Jr seems to find himself frequently mischaracterised, most recently by the AZ Daily Star. But how can this be? With language so precise, what room for misunderstanding could there be? Well….”

    Comment by Hank Roberts — 22 Jan 2008 @ 10:55 AM

  437. Dr. Pielke

    I did get you confused with RP Sr.’s original blog, sorry. Where has RC misrepresented your work?

    This was really not the emphasis of my post. More productive exchange amongst the experts, in places where a lot of “laymen” read, would be encouraging.

    Comment by Chris Colose — 22 Jan 2008 @ 11:03 AM

  438. 433 Karsten

    The NAS defined abrupt climate change as “…an abrupt climate change occurs when the climate system is forced to cross some threshold, triggering a transition to a new state at a rate determined by the climate system itself and faster than the cause” or for a more policy setting friendly definition, “an abrupt change is one that takes place so rapidly and unexpectedly that human or natural systems have difficulty adapting to it.”

    Hope that helps

    Comment by Chris Colose — 22 Jan 2008 @ 11:10 AM

  439. Comment #426, regarding my comment in #420, asks,

    I am wondering how you can claim that most analysis say that emisssions of sulphur dioxide increased from mid 1970ies to 1995.

    I guess I was mainly going from the IPCC Fourth Assessment Report Figure 10.26, which shows (to my keen eyes ;-)) the following worldwide SO2 emissions in TgS/yr:

    1975: 58
    1980: 63
    1985: 67
    1990: 71
    1995: 71
    2000: 70

    Are you saying that IPCC Figure 10.26 is wrong, and that worldwide sulfur dioxide emissions did not continue increasing beyond 1975?

    Comment by Mark Bahner — 22 Jan 2008 @ 12:20 PM

  440. Can someone help me with a physical chemistry problem? I’m confused by
    the contradiction between the physical and the chemical definitions of
    atmospheric pressure.

    Pressure from the whole atmosphere is

    P = (M / A) g

    where M is the mass of the atmosphere, A the area of the Earth, g
    gravity. For Earth, M = 5.136e18 kg, A = 5.1007e14 m2, and g
    = 9.80665 m s-2 give 98,745 pascals for surface pressure,
    which is close to the 101,325 Pa of the standard atmosphere. The
    discrepancy is because the USSA uses sea-level pressure and almost all
    land is above sea-level, thus land takes up a little of the atmosphere
    space. Calculating the other way, the atmosphere should have a mass of
    5.27e18 kg. Not much of a difference.

    The volume fraction of CO2 is 384 ppm. Using a carbon dioxide
    molecular weight of 44.0096 AMUs and a dry air figure of 28.9644, I get a
    mass fraction for CO2 of 0.000583. That means there’s 2.99e15
    kg of CO2 floating around up there.

    Now, Dalton’s law of partial pressures indicates the CO2
    partial pressure should be .000384 * 101325 or 38.9 pascals. But when I
    put 2.99e15 into the equation above for atmospheric pressure, assuming
    only the carbon dioxide were present, I get 57.5 Pa. Why don’t the two
    answers match? I assume there’s some simple answer I’m missing here.

    Comment by Barton Paul Levenson — 22 Jan 2008 @ 12:56 PM
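
    The simple answer to the mismatch above: Dalton’s law gives a partial pressure from the mole (volume) fraction, while weighing the CO2 column gives the mass fraction times total pressure, so the two answers differ by exactly the molecular-weight ratio 44.0096/28.9644 ≈ 1.52. A numerical check using BPL’s own inputs (note that his 38.9 Pa used the standard-atmosphere 101,325 Pa while his 57.5 Pa implicitly used the computed 98,745 Pa, which is why his ratio comes out slightly under 1.52):

```python
# Reproduce both of BPL's calculations with one consistent total
# pressure, and show the factor that reconciles them.
M_atm = 5.136e18    # kg, mass of the atmosphere
A = 5.1007e14       # m^2, surface area of the Earth
g = 9.80665         # m s^-2
P = M_atm * g / A   # total surface pressure, ~98,745 Pa

x_co2 = 384e-6                    # volume (mole) fraction of CO2
mw_co2, mw_air = 44.0096, 28.9644
w_co2 = x_co2 * mw_co2 / mw_air   # mass fraction, ~0.000583

p_dalton = x_co2 * P              # Dalton partial pressure (mole fraction)
p_weight = w_co2 * M_atm * g / A  # "weigh the CO2 column" version

print(round(p_dalton, 1), round(p_weight, 1))  # 37.9 Pa vs 57.6 Pa
print(round(p_weight / p_dalton, 4))           # = mw_co2/mw_air ~ 1.5194
```

    Physically, a heavy gas contributes more to the column weight per molecule, but the pressure each species exerts in a mixture goes by molecule count, not mass; so the column-weight number is not a partial pressure at all.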

  441. Chris (no. 437)-

    Thanks. I won’t revisit the discussions with Gavin, as they are here and on our blog for anyone who is interested/nothing-better-to-do …

    But, as an example of RCs distasteful tactics, have a look at no. 435 on this thread. I ignore 99.99% of this stuff, but why in the world would RC allow such an off-topic and incorrect comment referring to a specific individual? It is certainly not the first of this genre here.

    It is distasteful because: (A) We never turned our blog off, and (B) it provides a link to the well-worn but completely incorrect allegations by former RC participant William Connolley that I am a climate skeptic of some sort (I am not).

    As I have said before, enough of this behavior occasionally leads to some responses. I probably should ignore 100%;-) But please join the conversation on our blog if you’d like, all are welcome, especially when we get our interface updated to 2007 software!

    [Response: The authors of comments are solely responsible for those comments, just as they are on your blog. Regardless, when asked to point out anywhere where RC has misrepresented your opinions, you punted. That is probably the clearest statement you've made on this thread. - gavin]

    Comment by Roger Pielke. Jr. — 22 Jan 2008 @ 1:13 PM

  442. Douglas (428), this is one of the couple or so areas of climatology that is the basis of my skeptical questioning. Nonetheless, I’ll try to answer, or maybe shed some light on, some of your questions from an unbiased perspective. I need to do this quickly, before the smarter climatologists/physicists weigh in with their views (and maybe corrections) — though I’m already behind Hank.

    1) I can’t answer. The degree of absorption of IR “in the wings” is one of my biggest questions.
    2) It does require greenhouse gases like CO2 to accomplish the exiting radiation, since other gases radiate very little ala black/graybody type (though some will dispute even that). But it’s not obvious that adding more CO2 will increase significantly the exiting radiation. Re-radiation that heads upward now has a greater chance of being intercepted by another CO2 molecule before leaving the atmosphere. Outward radiation might go up a little, though not significantly — I just don’t know.
    3) I would think the log(n) (2???) formula would be more likely in the upper atmosphere if the concentration of CO2 increased there, as opposed to maybe linear which is likely the relationship at very low concentrations. I contend log(n) morphs into something else at very high concentrations but this wouldn’t apply in your upper atmosphere case.
    4) the photon is emitted at the same frequency it was absorbed (but see below), but this might be purely academic: because of the doppler effect, another molecule seeing the re-emission could easily see a different frequency, which it might not absorb and let pass upward. But then the radiation can still hit a molecule that has no doppler effect. It gets even messier because the initial absorption of the surface radiation can also be affected by doppler.
    5) Yes and no. Mathematically the probability of radiating up or down is equal. The radiating molecule has no idea what the concentration is in any direction. If theoretically it’s fully opaque downward, then the radiation will just return to the surface, which a pile of radiation does.
    6) Energy high up gets back to the surface predominately through IR radiation.
    7) Vortices will put energy “high” in the atmosphere. There it will stay until transferring via collision to GHGs (which can transfer energy from its translation to vibration/rotation — the radiating type) — whenever. The GHG can then radiate either up or down. Since it is starting higher in the atmosphere it could have a higher probability of escaping. I just don’t know if that would be significant, though.
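    A rough scale for the Doppler point in (4) can be sketched in a few lines; the 250 K figure below is an assumed mid-troposphere temperature, and the constants are textbook values:

```python
import math

K_B = 1.381e-23              # Boltzmann constant, J/K
M_CO2 = 44.01 * 1.661e-27    # CO2 molecular mass, kg
C = 2.998e8                  # speed of light, m/s

# Most probable thermal speed of a CO2 molecule at an assumed 250 K:
v = math.sqrt(2.0 * K_B * 250.0 / M_CO2)

# Fractional Doppler shift of the 15-micron line, delta_f/f ~ v/c:
shift = v / C

print(round(v))   # ~307 m/s
print(shift)      # ~1.0e-06
```

    The shift is about one part per million per molecule, so the Doppler effect is real but tiny; at low altitudes collisional (pressure) broadening of the lines is the larger effect.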

    These are answers from a non physicist trying hard to understand…

    ps Oops! Damn! Just noticed Ray beat me, too.

    Comment by Rod B — 22 Jan 2008 @ 1:16 PM

  443. Thanks Hank and Ray Ladbury for your responses to my post #428.

    Unfortunately, your explanations appear to contradict my understanding of Ray Pierrehumbert’s assertion that a beam of 15 micron IR directed upwards could be back radiated but could not penetrate the currently saturated layer of lower atmosphere. (Probably a fault in my logic circuits.) As one can identify 15 micron IR from space, I had assumed it must therefore have been generated above the saturated layer by reaction between CO2 (no water vapour up there) and non radiative energy taken up by convection. This led me to the thought that CO2 acts as a key in the upper atmosphere that unlocks the door to space and lets energy out (a sort of IR radiation synthesiser). I accept, of course, that its primary role, particularly at lower levels, is that of IR blocker or downward reflector. At higher levels, saturation below would prevent downward reflection reaching the surface.

    Ray Ladbury speaks of symmetry with respect to absorption and emission. He states that, if a molecule can absorb at a particular wavelength, it can also emit at the same wavelength. Maybe it can, but can it also emit at longer wavelengths if some of the energy has been lost in prior collisions?

    Finally, if CO2 is having significant effects at high levels of the atmosphere, one should expect that the effective radiating level of 15 micron IR would be rising (brightness temperature falling). Is this occurring? I have asked this before but no-one has answered. It would, to me, be compelling direct experimental evidence to put before the sceptics.

    Comment by Douglas Wise — 22 Jan 2008 @ 1:30 PM

  444. My goodness, here we go again. OK, Gavin, now I’ll respond to your provocations. You write, “when asked to point out anywhere where RC has misrepresented your opinions, you punted. That is probably the clearest statement you’ve made on this thread.”

    Your mischaracterization of my work starts with the very first sentence in this post: “John Tierney and Roger Pielke Jr. have recently discussed attempts to validate (or falsify) IPCC projections of global temperature change over the period 2000-2007.”

    I can’t speak for John Tierney, but I never made any effort to validate or falsify IPCC projections over 2000-2007. Instead I summarized the IPCC predictions along with data, making clear that no significance could be drawn at this time. In fact, I wrote, “I assume that many climate scientists will say that there is no significance to what has happened since 2000, and perhaps emphasize that predictions of global temperature are more certain in the longer term than shorter term.” Funny how that quote didn’t make it into this post when characterizing my efforts as “misguided.”

    I began with the IPCC AR4 as a first step in working back to IPCC 1990, where something could be said with a bit more certainty.

    So with this post you created a strawman argument to knock down. And rather than simply letting me have my say and moving on, you continue to provoke.

    [Response: Because you continue to misread what is said. First off this post is not just about you. Nowhere did I say you personally were 'misguided' - I said specifically, that "short term comparisons" were - a position I wholeheartedly stand by. And my only characterisation of the Tierney post (your post was not linked) was that validation/verification/falsification was 'discussed' - which it most certainly was. Tierney specifically asked for "any thoughts on how to interpret the results on Dr. Pielke’s graph". I gave mine in the comment thread - which included concluding that it was misleading for the reasons stated above. You can either disagree or agree with my criticism. But it is not, and was not, a personal attack on you - despite what you appear to have concluded. However, I am not going to let your responses here and elsewhere, which have unquestionably misrepresented my statements, just go without comment. Feel free to move on. - gavin]

    Comment by Roger Pielke Jr. — 22 Jan 2008 @ 1:59 PM

  445. Douglas, you’re assuming that the only way energy gets transported is via radiation. We have a whole atmosphere that transports energy along with it in its motions. Also, if you have a gas in the 200-300 K temperature range, it will emit in the IR if it can. You have IR photons emitted in all directions at all levels of the atmosphere. It’s what happens to them that varies. And when you add more CO2, fewer escape. Yes, the fact that CO2 can emit in the IR means that it will emit some photons high in the atmosphere that will escape. However, the fact that you have a higher flux from the warmer regions below means that NET more photons get absorbed than get emitted. After all, think what would happen if the CO2 were not in the atmosphere. IR in that wavelength range would simply escape unimpeded through the atmosphere.

    WRT emission and absorption, remember that harmonic oscillator energy levels are quantized. If an excited molecule is going to lose its excitation, it loses all of it. Conceivably, you might distort the molecule slightly via collision, etc. during relaxation, but that energy would go into kinetic energy imparted during the collision. The molecule is either excited or it is not. No in-between energy states.

    Your question about what is happening to emission in the CO2 band could have been answered definitively by the DSCOVR satellite. Instead, DSCOVR sits in mothballs–fully built and ready for launch–somewhere here in Greenbelt, MD. Somebody in Congress decided they didn’t want to know the answer.

    Comment by Ray Ladbury — 22 Jan 2008 @ 2:35 PM

  446. Douglas, one partial insight. I assume it’s theoretically possible for a molecule to absorb energy into a higher vibrational quantum level, lose some of that to translation, end up at a lower vibrational energy band, then radiate that out… at a longer wavelength. The propensity and probability of this I have to leave for more knowledgeable folks.

    I also might have misconstrued “opaque” from your earlier post and my subsequent answer. I need to revisit it. But as it ties in with #443, I don’t understand the 1st para. If back radiated re-emission is running into a layer of radiation saturation (at that freq), it won’t be re-absorbed but continue to the surface. Radiation at 15 microns escaping the atmosphere is certainly from CO2 emitting radiation outward in a low enough concentration that it won’t hit another CO2 molecule on the way. The CO2 molecule could have gotten the energy initially from re-emission of other CO2 or from collision exchange with non-GHGs.

    Your statement, “….that its primary role , particularly at lower levels, is that of IR blocker or downward reflector… ” I think would more properly read: “… primary role…IR blocker or absorber, then re-emitting either downward or upward, or losing it through collision… “.

    But I’m not sure I got your question correctly.

    Comment by Rod B — 22 Jan 2008 @ 3:14 PM

  447. Dr. Pielke, my apology for the wording. I was thinking of your ‘Spring Break’ in March 2007 — at the time it was a noticeable pause, but in hindsight a minor one in the usually daily weblogging.

    I’m just another reader, and I do urge you to be less, well, political; you’ve said you’re not a climatologist, but you seem easily upset by climatology, and all the rest of us see is the upset sometimes, not the reasoning.

    Douglas – not ‘reflector’ — greenhouse gases absorb photons, their energy (vibration, rotation, speed) increases; they transfer energy by collisions, and they emit photons. Can’t help beyond that note on terms.

    Comment by Hank Roberts — 22 Jan 2008 @ 3:24 PM

  448. Re #436 where Fair weather cyclist writes:

    “Strong/weak ninos/ninas are frequently invoked to explain short term fluctuations in global mean temperature. Since the definition of strength is based wholly on temperature (admittedly in a defined area), is there an element of circularity or tautology in such statements?”

    As you note, the definition of an El Nino is based on local sea surface temperatures in the Pacific. When one occurs, global temperatures rise. In fact it has been argued by sceptics that global warming has stopped because the maximum recorded temperature was in 1998. That was the year of the last strong El Nino. When we get the next strong El Nino it is most likely that we will have yet another record in global temperatures.

    If that is a tautology then it is true.

    Since strong El Ninos seem to happen roughly every sixteen years, then we can expect the next one within six years. The result could be disruption to global agriculture with a possibility of famine.

    Comment by Alastair McDonald — 22 Jan 2008 @ 3:27 PM

  449. Douglas,

    By making the lower levels more opaque to infrared, you then move the “effective radiating level” to lower pressures (higher altitudes), and that will produce a gradient such that all levels below P(erl) will warm insofar as T(s) > T(erl), i.e., you have a lapse rate. raypierre’s article emphasizes the importance of spectral dependence, i.e., the spectrum of upwelling and emitted radiation. When you add CO2 to the atmosphere, you increase both absorption and emission (and the stratosphere cools because the increase of emission outweighs the increase of absorption), but in the much “larger” troposphere, there is a lot more of the atmosphere above you to radiate downward. In that sense, you do get radiation emitted to space, but overall more emission downward as the planet moves closer to equilibrium.

    Comment by Chris Colose — 22 Jan 2008 @ 3:30 PM

  450. Douglas, re Ray’s 445: I agree with his first paragraph. I disagree (to an academic extent) with his 2nd paragraph. Molecules have many vibrational energy levels, not just on and off. But, as I implied earlier, I’m not knowledgeable of their propensity or capability to, say, absorb into and only into level 5 (with 1-4 empty) and then lose some to kinetic and drop to, say, level 3, leaving 1, 2, 4, 5 empty, then emitting from level 3. Maybe it can’t do that, which would make my assertion moot. (Plus I’m a little uncomfortable saying Ray is incorrect when it comes to radiation… )

    Comment by Rod B — 22 Jan 2008 @ 3:41 PM

  451. It is very hard to see 3 degrees centigrade per CO2 doubling, Barton ( 391 )

    If we start from the satellite records from 1978, the overall regression line to date produces an increase of 0.41 degrees centigrade over 29 years. But that line starts from the temperature trough, down from the previous peak 35 years earlier.
    From that peak in the forties, Hansen quotes the fall in temperature as 0.2 degrees centigrade.

    Over 60 plus years, therefore, we have observed a temperature increase of a net 0.21 degrees centigrade. Ignoring the recovery from the little ice age to the forties, and using the Arrhenius version of the log(n) “saturation laws”, we have experienced 44% of the maximum temperature effect we can expect from doubling CO2.
    The full effect, due in the twenty-first century when CO2 reaches 560ppm, will be less than half of one degree.

    If you want to attribute the Ice Age recovery to CO2 increases, and go back more than 100 years, we have about 0.6 degrees overall for 44% of the maximum effect.
    That gives a temperature increase of 1.4 degrees for CO2 doubling, which is often quoted.
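    Fred’s “44% of the maximum effect” can be checked against the logarithmic (Arrhenius-type) relation he invokes; a minimal sketch, assuming a 280 ppm pre-industrial baseline and roughly 384 ppm at the time of writing:

```python
import math

def fraction_of_doubling(c, c0=280.0):
    """Fraction of one CO2 doubling's forcing realised at concentration c,
    using the logarithmic (Arrhenius-type) relation F proportional to ln(c/c0)."""
    return math.log(c / c0) / math.log(2.0)

# With an assumed ~384 ppm versus a ~280 ppm pre-industrial baseline:
print(round(fraction_of_doubling(384.0), 2))  # ~0.46
```

    This gives roughly 46%, in the neighbourhood of the 44% quoted; the exact fraction depends on the concentrations assumed, and it is a fraction of the forcing only, not of the equilibrium temperature response.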

    Comment by Fred Staples — 22 Jan 2008 @ 3:54 PM

  452. Re #440 Barton,

    You are calculating the partial pressure using the volume (ppm) but you should be using density. The partial pressure depends on moles of gas, which is the mass divided by the molecular weight. See

    HTH, Alastair.

    Comment by Alastair McDonald — 22 Jan 2008 @ 4:02 PM

  453. Number 440 (Barton) “The volume fraction of CO2 is 384 ppm. Using a carbon dioxide molecular weight of 44.0096 AMUs and a dry air figure of 28.9644, I get a mass fraction for CO2 of 0.000583. That means there’s 2.99e15 kg of CO2 floating around up there.

    Now, Dalton’s law of partial pressures indicates the CO2 partial pressure should be .000384 * 101325 or 38.9 pascals. But when I put 2.99e15 into the equation above for atmospheric pressure, assuming only the carbon dioxide were present, I get 57.5 Pa. Why don’t the two answers match? I assume there’s some simple answer I’m missing here.”

    Try 384 ppm by weight, rather than volume.
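    The mismatch Barton describes is the mole-fraction versus mass-fraction distinction: Dalton’s law wants the mole (volume) fraction, while a weight-of-column calculation effectively uses the mass fraction. A sketch using his own numbers (his 57.5 Pa comes out slightly different from the 59.1 Pa below, presumably from the details of his column-mass formula):

```python
# Dalton's law uses the MOLE (volume) fraction; a column-mass calculation
# effectively uses the MASS fraction. The two differ by the ratio of
# molecular weights, which is the whole discrepancy.
P_SURFACE = 101325.0   # mean surface pressure, Pa
MW_CO2 = 44.0096       # g/mol
MW_AIR = 28.9644       # g/mol, dry air
ppmv = 384e-6          # volume (mole) fraction of CO2

mass_fraction = ppmv * MW_CO2 / MW_AIR
p_dalton = ppmv * P_SURFACE              # the correct partial pressure
p_from_mass = mass_fraction * P_SURFACE  # what the column-mass route gives

print(round(mass_fraction, 6))  # 0.000583, as in the comment
print(round(p_dalton, 1))       # 38.9 Pa
print(round(p_from_mass, 1))    # 59.1 Pa, larger by 44.0096/28.9644
```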

    Comment by Tracy P. Hamilton — 22 Jan 2008 @ 4:30 PM

  454. Fred, rate of change. We affect addition; removal is by natural processes that have various rates, none of them relatively fast.

    If you trickle water into a bucket with holes in it, the level rises slowly. If you pour water into the same bucket, the level rises faster. If you use a firehose, you may break the bucket.

    Plus feedbacks — like melting permafrost –increase the fill rate further.

    Current rate of CO2 increase is some 100x past rapid events. The 3 degrees number is for where temperature levels off some centuries after we stop adding extra carbon to the atmosphere. Say “when” ….

    Comment by Hank Roberts — 22 Jan 2008 @ 5:06 PM

  455. Rod, a quantum mechanical harmonic oscillator represents a bound state, and so the energy levels must be quantized. There will be multiple excited states (all multiples of the same fundamental frequency), but one (the 1st excited, I believe) particular state is relevant as the one that absorbs IR radiation. The higher states don’t absorb as strongly and are not in the IR.

    Comment by Ray Ladbury — 22 Jan 2008 @ 6:00 PM

  456. 450

    Actually, when Planck said that energy is quantized, that allowed him to compute the total irradiance of a blackbody for a given T, and it is in fact important for determining the Stefan-Boltzmann constant. Not sure where this is an issue as far as problems with Ray’s argument.


    You are ignoring feedbacks (1.2 K comes from deltaT(2x CO2) without feedbacks), and you are getting strange results for the temperature change, which is around 0.6 C over the last 3 decades, and just about 0.8 C over the last century. Moreover, we still have some warming in the pipeline even remaining at 380 ppmv-like conditions.

    Comment by Chris Colose — 22 Jan 2008 @ 6:03 PM

  457. The Physics in this post, particularly from Ray, is very quantum mechanical – “a molecule is either excited or it is not” (445).
    I have always believed quantum mechanics to be essentially a high energy phenomenon where classical physics breaks down. Incoming solar radiation will excite N2 and O2 to higher electron orbital energy levels, and they will re-radiate as you describe when they return to their original states. That is an effect inside the electron shell of which the classical physicists knew nothing.
    Does that happen at infra-red energies over a 30 degree temperature range? I doubt it. When Hug repeated the Angstrom experiment recently he saw a steady rise in temperature to a limit, without quantised discontinuities.

    At these energy levels we are on the wave side of the wave-particle duality, and your image of photons transferring from ghg molecule to ghg molecule is not convincing.

    The atmosphere is subjected to electro-magnetic radiation from the surface, which all matter will absorb to a greater or lesser extent, (stand in front of a fire), through molecular vibration and motion. Matter will conduct the heat, and radiate (by slowing down) at its surfaces, all the while observing the laws of entropy (at equilibrium, all the matter will come to the same temperature).

    If the ghg molecules absorb radiation through their vibration bands (manifested as peaks in their absorption spectra) will they not transfer kinetic energy to N2 and O2 and raise the air temperature by increasing the collision rate? Do we believe that N2/O2 molecules are mere bystanders, unaffected by their excited ghg neighbours?

    Comment by Fred Staples — 22 Jan 2008 @ 6:06 PM

  458. Ray (455), thanks. This makes sense with one exception: Since the first absorbing state would be exactly the same in all CO2 molecules, what causes the distinct line absorption spectra? A clarifying question: how do the higher vibrational energy levels get filled? Only from collision?? By more IR absorption after level 1 is filled (and before it relaxes)??

    Chris (456) Planck’s quantum energy levels, a la Planck’s Constant, are (almost) a totally different phenomenon than the quantum (“discrete” might be a better word) molecular energy states of rotation, vibration, and electronic excitation. [Molecular translation/kinetic energy tends more to Planck's quanta.]

    Fred (457) I’ll let Ray address the first two paragraphs (if he wants…) as I’m still trying to absorb it (no pun intended).

    Your last paragraph is correct. A CO2 that has picked up energy by absorbing IR into a vibration band will relax that energy either by re-emission or via collision and transfer to N2 or O2 (most likely) molecular translation energy (=kinetic = heat), the latter seeming to prevail at higher atmospheric concentrations — low altitudes.

    Comment by Rod B — 22 Jan 2008 @ 7:26 PM

  459. Um, Fred, it’s quantums all the way.
    There’s no duality.

    Comment by Hank Roberts — 22 Jan 2008 @ 7:31 PM

  460. Fred, unfortunately, you cannot pick and choose where the laws of physics are quantum and where they are classical. If a system is in a bound state, its energies are quantized. Because a solid is composed of a lattice of atoms, it will have many different vibrational modes and will absorb much of the radiation incident on it. Gasses are quite different. A diatomic molecule (undistorted by collision, etc.) has no modes that can be excited, while triatomic and above will absorb only certain frequencies. When those frequencies are in the energy range of the blackbody spectrum of the planet, the gasses are greenhouse gasses. Collisions and other interactions can affect the vibrational spectra–collisional broadening is one of the factors giving rise to the wings of the absorption lines. However, this does not change the basic nature of the phenomenon.
    Again, Fred, learning a little physics would pay dividends. It must be very difficult to argue against what you don’t understand.

    Comment by Ray Ladbury — 22 Jan 2008 @ 7:44 PM

  461. Rod, the line corresponds to a single wavelength (actually a narrow range), frequency or energy. Since the energy levels of a harmonic oscillator are quantized–> hw(n+1/2), you get absorption and radiation at hw/2, 3hw/2, 5hw/2… The ground state is hw/2–that’s the lowest state of the molecule (i.e. it’s always vibrating). The transition that absorbs IR is (I think) the one from hw/2 to 3hw/2. There are no energy states between, and to populate the ones above, you’d need higher energy photons. Keep in mind that the coupling between different states varies–that is, you have different probabilities of absorbing a photon for the different states. The 15 micron line for CO2 is a pretty strong line.
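    Ray’s quantized-ladder picture can be put in numbers; a sketch of the idealised harmonic oscillator (real CO2 is anharmonic, so the higher levels are not exact multiples, but the equal-spacing point is the one at issue here):

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

# Photon energy of the 15-micron CO2 bending-mode line:
E_photon = H * C / 15e-6   # joules

# For a quantized oscillator E_n = (n + 1/2)*h*nu, the n=0 -> n=1
# absorption needs exactly h*nu, so nu follows from the line itself:
nu = E_photon / H
levels = [(n + 0.5) * H * nu for n in range(4)]

# Gaps between adjacent levels are all h*nu -- the "no in-between" point:
gaps = [levels[i + 1] - levels[i] for i in range(3)]

print(E_photon)  # ~1.32e-20 J per photon
```

    The frequency nu comes out near 2e13 Hz, i.e. the familiar 667 cm^-1 bending band.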

    A couple of references may help refresh your memory:

    Comment by Ray Ladbury — 22 Jan 2008 @ 9:24 PM

  462. Alastair McDonald> Since strong El Ninos seem to happen roughly every sixteen years, then we can expect the next one within six years. The result could be disruption to global agriculture with a possibility of famine.

    Are you willing to take a bet that the next El Nino does not cause a statistically significant increase in famine?

    Comment by Steve Reynolds — 22 Jan 2008 @ 10:12 PM

  463. I have a silly question.

    I’ve read here that ENSO events affect the global mean temperature (as in 1998). Yet I understand ENSO to be an internal/local phenomenon, where heat is simply being redistributed. If this is so, shouldn’t the global mean temp be unaffected? Is it that the total heat content of the system (oceans + atmosphere + land) is unchanged by El Nino, but the mean surface temperature can still change?

    [Response: Not silly at all. Actually, there is a bit of a conversation above that addresses some aspects of that. I think most scientists would say that a) there is a lot of energy being redistributed during a El Nino, b) there is a clear connection between El Nino and global mean temperature, but also c) that because of cloud changes and surface temperature increases, they would be surprised if the net change at the top of the atmosphere was zero. It's not known (to me at least) what the net change should be, but it's certainly conceivable that the planet loses heat in such circumstances (all else being equal). - gavin]

    [Response: There is a lot of heat stored beneath the tropical Pacific ocean surface in the tropics. During an El Nino event, the trade winds weaken or disappear, and the tradewind-driven upwelling of cold sub-thermocline (that's the boundary between warm near surface and cold deeper waters) water which normally keeps surface temperatures low, ceases. Consequently, the surface warms up. Note that there need not be any net change in combined ocean/atmosphere heat content in this process, it's just a redistribution that does indeed tend to elevate ocean surface temperatures over a large part of the tropical Pacific. That increased surface warmth is also available to be advected poleward by air currents. As Gavin notes, it is not inconceivable that these changes do lead to a net change in the planetary heat balance--one of the reasons we need as many remote sensed and in situ tropical ocean and atmosphere observations as we can get! -mike]

    Comment by tharanga — 22 Jan 2008 @ 11:12 PM

  464. Roger Pielke, in #429 You wrote: “Chris (No. 423)- How to respond on blogs when one’s work is misrepresented always requires choices (e.g., let the misrepresentation stand, or perhaps look like a complainer). For example, in your comment you obviously have me confused with someone else, as we’ve never turned our blog comments off. So if you have accurate, substantive critiques, please do share them, and I’ll be happy to engage. But do get your facts right first.”

    Chris isn’t the only one having difficulties posting on Prometheus. I have just tried again, to post a follow-up comment on your January 9th blog post, and no luck. I can log in, but not post. I just gave up after a while, and moved on.

    And the inaccurate information presented in the NY Times article, and sourced from you, lives on. John Tierney still has not printed a correction to the statistically fallacious chart printed there, even though in the comments posted, the chart was roundly discredited.

    Comment by Paul Klemencic — 22 Jan 2008 @ 11:42 PM

  465. # 463

    El Nino does involve “mining” some heat out of the ocean to put into the atmosphere, and that redistribution does tweak the global mean temperature; it may also affect albedo a bit but I never looked into that. There is not a perfect offset between the places that cool and the places that warm, and so there can be a small net tropical average warming.

    In general, the ocean and atmosphere are constantly exchanging heat back and forth on time scales characteristic of the mixed layer (so the planet does not need to be in radiative balance on very small timescales, and El Nino does appear to tweak global temp when you look), and the changes in atmospheric circulation can affect clouds and WV, which could affect albedo or shortwave/longwave.

    Comment by Chris Colose — 22 Jan 2008 @ 11:48 PM

  466. Chris,
    Since we have moved from 280 ppm CO2 to 390 ppm CO2 we are nearly 40% of the way to a doubling. The temperature effect of that 40% has been .6C. Why would you propose that we are going to get 2.4C more out of the remaining 60%. Considering that the forcing effect of CO2 is logarithmic, meaning the later part of that doubling contributes less forcing than the earlier part, and considering that the solar influence more likely contributed at least some warming over that time period, I simply cannot see how you get 3C per doubling.

    [Response: It's hard to tease climate sensitivity out of the warming record so far, but Tilo does the arithmetic wrong and left out some important physics. First, the logarithmic effect. To use round numbers, 4*log2(390/280) = 1.91 W/m**2, and 4*log2(560/390) = 2.09 W/m*2, so you get about the same forcing again for the rest of the way to doubling. Then, Tilo's ignored the fact that part of the radiative forcing to date is canceled by aerosols (which don't grow so much in the future), and also that the ocean has delayed the warming so we haven't seen the full equilibrium warming yet. --raypierre]
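    raypierre’s round numbers are easy to reproduce; a sketch of the 4*log2 forcing rule as he states it (the functional form is his stated approximation, not a fitted expression):

```python
import math

def forcing(c_new, c_old):
    """Round-number CO2 radiative forcing in W/m^2, 4*log2(c_new/c_old),
    as used in the inline response above."""
    return 4.0 * math.log2(c_new / c_old)

print(round(forcing(390, 280), 2))  # 1.91 W/m^2: forcing realised so far
print(round(forcing(560, 390), 2))  # 2.09 W/m^2: remaining to a doubling
```

    The point of the arithmetic: because the argument of the log shrinks as the baseline rises, the second "half" of a doubling carries about the same forcing as the first, not less.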

    Comment by Tilo Reber — 23 Jan 2008 @ 1:00 AM

  467. Roger Pielke Jr has very accurately written that there is pervasive tribalism in the global warming debates. He could get much further in his posts if he remembered this before he wrote them. ;)

    He could take a lesson from Bryan S who at first took a knee-jerk reaction to the ocean heat content and AGW issue, but now on this thread is taking the time to do the science and ask thoughtful questions. Bryan S pushes his position but does it in an informed and, if I might say, humble way.

    It’s not what you say, it’s how you say it. Roger Pielke Jr does ask interesting questions; if he posed them more carefully, they could lead to a more instructive and enlightening discussion.

    I recognize his style; it’s very similar to the way law school classes are taught. The Socratic method is the normal method in that arena, but it will be misunderstood by people, like scientists, who have been trained in a completely different way.

    Comment by Joseph O'Sullivan — 23 Jan 2008 @ 1:23 AM

  468. raypierre wrote in the inline to 466:

    … and also that the ocean has delayed the warming so we haven’t seen the full equilibrium warming yet.

    Higher levels of carbon dioxide have made the atmosphere more opaque to thermal radiation, leading to a radiative imbalance at the top of the atmosphere where the rate at which energy in the form of sunlight enters the atmosphere has remained roughly constant (since about 1952, actually), but the rate at which energy in the form of infrared thermal radiation leaves the atmosphere has dropped. As long as this situation exists, the climate system will absorb more energy until it warms up enough to radiate sufficient thermal radiation and restore the balance between incoming and outgoing radiation.

    However, the oceans have a higher thermal inertia than land, meaning that they must absorb more energy in order to reach their equilibrium temperature. But in the process of warming up, the partial pressure of water vapor coming off the surface of the oceans will increase, leading to more atmospheric water vapor in a process known as “water vapor feedback.” Since water vapor is also a greenhouse gas, this will amplify the greenhouse effect of carbon dioxide — and the full equilibrium response will take a fair number of decades. Incidentally, the greater thermal inertia of the oceans is what explains why land warms more quickly than ocean, and even why the Southern Hemisphere has warmed more slowly than the Northern Hemisphere: the Southern Hemisphere has less landmass, more ocean.
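    The ocean-delay argument above can be illustrated with a one-box energy-balance model, C dT/dt = F - lambda*T; every parameter value below is an illustrative assumption, not a fitted number:

```python
# One-box energy-balance sketch of ocean thermal inertia.
# All parameter values are illustrative assumptions, not fits to data.
F = 1.9            # applied forcing, W/m^2
LAM = 1.25         # climate feedback parameter, W/m^2/K
C_HEAT = 8.0e8     # effective ocean heat capacity, J/m^2/K (~200 m mixed layer)
DT = 86400.0 * 30  # time step: one month, in seconds

T = 0.0
temps = []
for _ in range(12 * 200):            # integrate 200 years, monthly steps
    T += DT * (F - LAM * T) / C_HEAT  # forward-Euler update
    temps.append(T)

T_eq = F / LAM                        # equilibrium warming for this forcing
print(round(temps[12 * 30 - 1], 2))   # 1.17 -- after 30 years, short of T_eq
print(round(T_eq, 2))                 # 1.52 K at equilibrium
```

    With these numbers the e-folding time C/lambda is about 20 years, so after 30 years the model has realised only roughly three quarters of its equilibrium warming: the "warming in the pipeline".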

    Comment by Timothy Chase — 23 Jan 2008 @ 5:49 AM

  469. Re #439

    Yes, I claim that fig 10.26 gives a wrong impression w.r.t sulphur emissions.
    This is partly due to the fact that 10.26 is only valid up to 1990 for SO2. As seen from the different colours, the number for 2000 is a scenario, not an estimate. It also underestimates the reduction in Eastern Europe between 1985 and 1990. Updated numbers are given in section …. The estimates are now a reduction between 15 and 30% from 1980 to 2000. Anyhow, even with no SO2 reductions in this period, one should expect an increase in warming since concentrations of CO2 have continued to increase.

    An interesting phenomenon to study is that radiative forcing efficiency of aerosols vary strongly from region to region. The direct radiative forcing (“cooling”) is probably larger over south-east Asia than over Europe given the same concentration of aerosols.

    Comment by oyvinds — 23 Jan 2008 @ 6:30 AM

  470. Fred Staples (#457) wrote:

    The Physics in this post, particularly from Ray, is very quantum mechanical – “a molecule is either excited or it is not” (445).

    That it is, Fred.

    That’s why you have the lines and bands in the spectra of greenhouse gases. The HITRAN database has recorded well over a million spectral lines, their intensities and locations, the result of precision lab experiments.

    Leaving aside the collisions which distort the molecules, you can derive the spectra of the molecules from the first principles of quantum mechanics; including them, you would need the quantum mechanics plus a great deal of computer power to approach the somewhat better level of accuracy you can get in the labs. Doable, but expensive.

    Fred Staples (#457) wrote:

    I have always believed quantum mechanics to be essentially a high energy phenomenon…

    Well, fortunately physics isn’t a matter of someone’s opinion or beliefs. The stuff is repeatable and well-documented. It is observed with a great deal of accuracy at the surface (the spectra of backradiation from the atmosphere) and from satellites, including satellites that are able to view the atmosphere in over 2000 channels.

    Here are some videos:

    Multimedia Animations

    … and here is a still you might like:

    NASA AIRS Mid-Tropospheric (8km) Carbon Dioxide

    It shows carbon dioxide at 8 km. You will notice the plumes rising off the heavily populated East and West coasts of the United States.

    What is being measured is the infrared radiation being absorbed and then reemitted by carbon dioxide. The thicker the carbon dioxide, the more opaque the atmosphere becomes to the infrared radiation in that channel. So in essence, you are seeing the enhanced greenhouse effect in action when you look at that photo. And we are able to do the same thing with water vapor and methane — a few of the videos deal with those.

    Comment by Timothy Chase — 23 Jan 2008 @ 6:49 AM

  471. Re #430

    Note that eyeballing the charts shows that there is a trend for El Ninos to get stronger and La Ninas to get weaker.

    That does seem to be the case on first inspection (and maybe closer inspection too). It would seem a reasonable conjecture that this be due to background AGW superimposing itself on the natural ENSO variation. However, I don’t think that one can simultaneously hold that view (or even the possibility that it might be correct) and ascribe short term deviations from the AGW warming trend to the magnitude of a nino/nina. This is an example of the potential for circularity I mentioned in an earlier post. I’d be grateful for any comments as always.

    Comment by Fair weather cyclist — 23 Jan 2008 @ 6:52 AM

  472. Trying to understand the ENSO impact on global temperature.

    La Nina phase has a high temperature pool on the Asian side. This means high evaporation rates. A part of the excess water vapor returns east in the Walker circulation, but another part moves south or north to the middle latitudes. In the La Nina case it soon encounters land masses in Asia, Indonesia and Australia, causing heavy precipitation there.

    During the El Nino phase the process is similar, starting from the hot pool close to the American coasts, except that there are no land masses nearby. I presume that the water vapour is then mixed and distributed more widely before it is precipitated as rain, further away.

    The difference would mean that in the El Nino case the water vapor stays a longer time in the atmosphere, trapping more IR in the process.

    Gavin stated somewhere that the relative humidity is considered rather constant (in case of a warming atmosphere). Regional short term variations certainly exist, but could this radiation mechanism cause the apparent connection between ENSO and the global temperature? In addition to the latent heat transport, of course.

    Comment by Pekka J. Kostamo — 23 Jan 2008 @ 7:16 AM

  473. Re #462

    Steve Reynolds Says>

    Alastair McDonald>>
    Since strong El Ninos seem to happen roughly every sixteen years, then we can expect the next one within six years. The result could be disruption to global agriculture with a possibility of famine.

    >Are you willing to take a bet that the next El Nino does not cause a statistically significant increase in famine?

    That is rather a tasteless offer: if Alastair accepts it, he puts himself in the position of standing to win money through an increase in the number of people dying of hunger.

    Comment by Nick Gotts — 23 Jan 2008 @ 7:52 AM

  474. Tracy — the 384 ppm figure is by volume, not by mass. The mass figure would be 583 ppm. And the paradox still stands. I’m assuming the physical pressure equation isn’t appropriate for one constituent of a mixed atmosphere, but I don’t know why. Does anybody here know? Preferably a professional.

    Comment by Barton Paul Levenson — 23 Jan 2008 @ 8:03 AM
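A note for readers following the by-volume vs. by-mass thread: the conversion Barton refers to is just a ratio of molar masses. A minimal sketch, using standard molar-mass values:

```python
# Converting a CO2 mixing ratio by volume (ppmv) to one by mass.
M_CO2 = 44.01   # g/mol, carbon dioxide
M_AIR = 28.97   # g/mol, mean molar mass of dry air

ppmv = 384.0                      # mole fraction, parts per million by volume
ppm_mass = ppmv * M_CO2 / M_AIR   # same abundance expressed by mass

print(round(ppm_mass))  # ~583, matching the figure quoted above
```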

  475. #218, 222, 223 (an age ago it seems..)


    “Dear Alan, Thank you for your email. Your comment #221 in the RealClimate thread was in response to #218. This particular thread was concerned with the observations of temperature rise and not with the Hadley Centre climate model. The HC climate model of course has polar amplification, just as every other climate model does. The point was the interpolation of existing observational data over the polar regions. If you look at the raw observations that GISS uses, you can see how little data they are basing an interpolation on. Regardless of what they consider the correct spatial length scale for observations, the Arctic sees large regional changes in temperature, which are being glossed over with a large correlation length. The Had/CRU treatment of the observations simply states that the error is greater due to lacking data, [edit]. There are no EXTRA observations that GISS has access to that Had/CRU does not. Thus there is no reason to prefer GISS’ observations over Had/CRU’s observations of recent global temperature rise when the errors are taken into account. Kind Regards,”

    [Response: Can you forward me who sent this to you? It is true and freely acknowledged that everyone is working from the same data, and that the differences between the products is a measure of the uncertainty in those products. But the different approaches are all reasonable ways of dealing with the data. - gavin]

    Comment by Alan K — 23 Jan 2008 @ 8:52 AM

  476. 466 Tilo and raypierre

    I don’t think the aerosols play a huge role in this discussion, because it would be better to say “aerosols have offset everything but CO2” (e.g. methane, N2O, tropospheric ozone). In fact, the RF for CO2 and the total net RF are roughly the same. Right now we’re simply talking about the RF of 2x CO2; if it were all GHGs, or all forcings, that would be a new story. In that sense, if aerosols stay the same or decline, and the other GHGs rise, their effects will show up a bit more. I would guess you can roughly estimate the future temperature change using just CO2 + feedbacks, though I do not know how you can calculate the forcing from the water vapor feedback, just that it goes up 7%/K from Clausius-Clapeyron. This and ice-albedo haven’t shown their ugly faces yet. Dr. Pierrehumbert’s comment about the warming “still in the pipeline” still applies, which is probably about half a degree to a degree, so if you use 5.35 ln (Cf/Ci) you need to realize that is an equilibrium concept, and we are not yet in equilibrium with 380 ppm conditions.

    Actually, for those who say that the real sensitivity is ~1.2 C per 2x CO2, I would say that at equilibrium we have already hit that.

    Comment by Chris Colose — 23 Jan 2008 @ 9:26 AM
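The 5.35 ln (Cf/Ci) expression Chris mentions can be put to work in a few lines. The sensitivity parameter used here (0.8 K per W/m^2) is an illustrative assumption for the sketch, not a number from this thread:

```python
import math

def co2_forcing(C, C0):
    """Simplified CO2 radiative forcing in W/m^2: 5.35 ln(Cf/Ci)."""
    return 5.35 * math.log(C / C0)

dF_2x = co2_forcing(560.0, 280.0)   # forcing from a doubling, ~3.7 W/m^2

lam = 0.8                           # assumed equilibrium sensitivity, K per (W/m^2)
print(round(dF_2x, 2), round(lam * dF_2x, 1))
```

As noted above, this is an equilibrium relation; the realized warming at any moment lags it.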

  477. re #447. Thanks, Hank, for correcting my loose terminology in stating that CO2 molecules reflected photons (as opposed to re-emitting them) back to the surface. However, you triggered the following thought: SO2, a triatomic molecule and therefore having a dipole moment, is presumably able to absorb and emit photons of an appropriate wavelength. In the case of SO2, this must be in the visible rather than infra red range. I have read that SO2 contributes to global dimming through an albedo effect (by reflecting solar radiation out of the atmosphere). Should one more correctly state that SO2 absorbs solar radiation and re-emits some of it out of the atmosphere? If not, why not?

    Comment by Douglas Wise — 23 Jan 2008 @ 10:03 AM

  478. Douglas, “presumably” is not your friend. If you got

    > SO2, this must be in the visible

    from a source other than your own logic, would you tell us what source you’re relying on and why you consider it trustworthy? If from your own logic, I’d suggest checking the math or asking someone more competent than me for help.

    It’s important to check what you believe, even if you are sure it was true yesterday.

    Google is your friend. Google Scholar can be too.

    Try this search for starters.

    Comment by Hank Roberts — 23 Jan 2008 @ 12:02 PM

  479. Douglas, a bit more for fun:
    This (a lab notebook, nothing fancy, but interesting tidbits I think relevant to your question)

    That led to this (it’s a web archive, you may find a current version if you look around at NIST)

    Comment by Hank Roberts — 23 Jan 2008 @ 12:11 PM

  480. This is OT – sorry. Stu Ostro of the Weather Channel has been posting some very interesting studies of the relationship between global warming and weather (for which he has been hammered by trolls). He has especially remarked on the appearance of persistent, intense high pressure ridges that are contributing to extreme weather events, and other atmospheric features that are appearing in the wrong places or at the wrong times. The latest is here:

    Comment by Ron Taylor — 23 Jan 2008 @ 12:26 PM

  481. Douglas, That’s not quite what happens. Unlike CO2, SO2 is pretty reactive, so it doesn’t stay a gas, but rather condenses into tiny particles. The particles actually reflect light like a tiny mirror. So it’s not molecular spectra that are relevant but rather the effects of the SO2 aerosols.

    Comment by Ray Ladbury — 23 Jan 2008 @ 12:29 PM

  482. Re 477 Where Douglas asks “Should one more correctly state that SO2 absorbs solar radiation and re-emits some of it out of the atmosphere? If not, why not?”

    No, SO2 is extremely hygroscopic and combines with the water vapour in the air to form sulphate aerosols which reflect (scatter) solar radiation. So CO2 absorbs and re-emits thermal radiation. SO2 reflects solar radiation.

    SO2 would act as a greenhouse gas in the 8 micrometre band if water vapour wasn’t present, as that is where its vibrational frequency is.

    Comment by Alastair McDonald — 23 Jan 2008 @ 12:39 PM

  483. Re 462 where Steve Reynolds asks:

    “Are you willing to take a bet that the next El Nino does not cause a statistically significant increase in famine?”

    If I am correct there is a strong possibility of social disorder, in which case it is unlikely that I would be able to collect my bet. Would you really hand over $1000 to me rather than use it to buy food for your wife and kids?

    It is very easy to bury one’s head in the sand and say that disasters never happen. But the Boxing Day tsunami happened, Hurricane Katrina happened, and the First World War happened. Are we really more civilised and less greedy than those people who cheered when war was declared on Germany, and who were unable to prevent a repeat in 1939?

    Comment by Alastair McDonald — 23 Jan 2008 @ 12:56 PM

  484. Oh, one more for Douglas, I missed the obvious error!
    You asked about sulfur dioxide. Look up _sulfate_ to understand how aerosols reflect. Different chemical. These change over time in the atmosphere.


    Comment by Hank Roberts — 23 Jan 2008 @ 1:09 PM

  485. Timothy #468
    Everything you say here is straightforward and uncontroversial. It takes more energy to heat the water than the land or air. The ocean surface has shown a smaller increase in temperature than the air over land. OK. What I don’t understand is the lag, or decades required for equilibrium. The oceans are heated directly by solar radiation, and are warmer by a few degrees C than the atmosphere near the surface. The atmosphere is not heating the ocean, although a warmer CO2-laden atmosphere will slow down the cooling of the ocean (affecting its energy budget) and result in a warmer ocean. But why the lag? It takes more energy to heat the ocean, but why does it take more time?

    Comment by B Buckner — 23 Jan 2008 @ 1:50 PM

    #475, There are other ways to confirm the NASA GISS trend. It is utterly wrong to suggest that there has not been significant warming in the Arctic, especially since 2005, all leading to the massive ice melt of 2007 and, by the looks of it, 2008. Vertical sun disk diameters in particular exploded in size in 2005. Atmospheric refraction is directly proportional to the density (coldness) of the atmosphere as a whole (it is a little more complicated, being the sum effect of multiple density layers). Looking through 5 to 20 atmospheres, using the sun as a fixed sphere of reference, is a powerful analysis of a huge swath of the atmosphere. High Arctic oblate-sun results confirm NASA GISS and totally reject Hadley if the Met Office shows no significant warming in the Arctic, which I would find unbelievable if they did.

    Comment by Wayne Davidson — 23 Jan 2008 @ 2:00 PM

  487. Re #440 (Barton): Thank you for a very good question! You made me think real deep about something that also for me was unexpected.

    Indeed the partial pressure of CO2 (at sea level) and the weight of the CO2 fraction of the atmosphere are not equal — 38.9 Pa vs. 57.5 Pa. Nor should they be.

    Consider it this way: if you could magically remove all the oxygen, nitrogen, argon etc. and only be left with CO2, the resulting atmosphere would have a scale height of only (29/44)*8 km instead of the current 8 km. In other words, the CO2-only atmosphere would sag down, most molecules moving downward, and density at sea level would increase 44/29 times. Only then would pressure at sea level become equal to the weight of this atmosphere.

    In the current situation, where turbulence etc. keeps the atmosphere well mixed, CO2 is spread out more in the vertical direction than in the above, hypothetical CO2-only atmosphere (or equivalently, in a stagnant atmosphere where each molecular species finds it own scale height, on a very long diffusion time scale). It is the N2, O2, Ar etc. molecules that (on average) are exerting an effective uplift on the CO2 molecules. This uplift equals the difference 57.5 – 38.9 Pa that you found.

    Does this make sense?

    Comment by Martin Vermeer — 23 Jan 2008 @ 2:07 PM
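Martin’s numbers can be checked with the hydrostatic scale-height formula H = RT/(Mg). The isothermal temperature of 273 K below is an illustrative choice that reproduces the familiar ~8 km figure:

```python
# Scale height H = R*T/(M*g) for well-mixed air vs. a pure-CO2 atmosphere.
R = 8.314    # J/(mol K), universal gas constant
g = 9.81     # m/s^2, surface gravity
T = 273.0    # K, assumed isothermal temperature (illustrative)

def scale_height_km(M):
    """e-folding height of pressure for a gas of molar mass M (kg/mol)."""
    return R * T / (M * g) / 1000.0

H_air = scale_height_km(0.02897)   # ~8 km for mixed air
H_co2 = scale_height_km(0.04401)   # ~(29/44)*8 km, about 5.3 km for pure CO2
print(round(H_air, 1), round(H_co2, 1))
```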

  488. I have no idea.

    Comment by Barton Paul Levenson — 23 Jan 2008 @ 2:40 PM

  489. Mr. Buckner, what’s your source for the statements you make above, including

    > The oceans … are warmer by a few degrees C
    > than the atmosphere near the surface. The
    > atmosphere is not heating the ocean …

    This is true in some locations and probably true if you consider the overall global average, but that’s not how temperature moves.

    What you state is certainly not true where upwelling from deep water replaces warm water with cold, for example.

    This may help:

    Comment by Hank Roberts — 23 Jan 2008 @ 2:41 PM

  490. Re #475:

    Recently on Open Mind, Tamino took the GISS and Had/CRU curves and merged them in the same chart. Other than the .1 degree zero offset (due to the use of different reference periods), the curves follow nicely.

    This seems to show that GISS’ extrapolation of the Arctic sites doesn’t add to the global temp.

    Comment by henry — 23 Jan 2008 @ 2:52 PM

  491. Ray (461), just to be certain before I get to far in what looks like great info: I assume hw is h-bar X omega. Correct?

    Comment by Rod B — 23 Jan 2008 @ 3:20 PM

  492. B Buckner (#485) wrote:

    Timothy #468
    Everything you say here is straightforward and uncontroversial.

    Sometimes it helps, I figure, to explicitly state what may be well understood from a simple allusion by the regulars but which, if stated with too much brevity, may be misunderstood by the newcomers.

    B Buckner (#485) wrote:

    It takes more energy to heat the water than the land or air. The ocean surface has shown a smaller increase in temperature than the air over land. OK. What I don’t understand is the lag, or decades required for equilibrium. The oceans are heated directly by solar radiation, …

    Correct, but they are also absorbing backradiation from the atmosphere which increases over time as the result of more water vapor — acting as a greenhouse gas.

    B Buckner (#485) wrote:

    … and are warmer by a few degrees C than the atmosphere near the surface. The atmosphere is not heating the ocean, although a warmer CO2 laden atmosphere will slow down the cooling of the ocean (effecting its energy budget) and result in a warmer ocean….

    The same applies to water vapor. And as the water vapor content of the atmosphere increases, the atmosphere becomes more opaque to infrared radiation.

    B Buckner (#485) wrote:

    But why the lag? It takes more energy to heat the ocean but why does it take more time?

    In part because it is receiving solar radiation at the same rate, and to raise the temperature to the same degree will require more energy — and this can only come from the difference between incoming and outgoing radiation. In part because the atmosphere is becoming more opaque over time due to increased levels of water vapor as the ocean warms — further raising the equilibrium temperatures that the oceans must achieve if there is to be a balance between incoming and outgoing radiation at the top of the atmosphere.


    However, another factor is that the energy which the ocean receives becomes distributed as the result of ocean circulation, and since a portion of it is removed from the surface layers, the ocean surface will have to receive more energy in order to warm to the same degree. With a simple “slab” model of the ocean that involves no ocean circulation, the majority of the rise in temperature will take place within only a few years. Consequently, fitting a slab model to the observed warming will yield a smaller estimate of climate sensitivity.

    But when we take into account the fact that the ocean is not a solid slab and that it takes time for the additional heat content to become distributed (what one might call the “equilibrium distribution”), we find that the majority of the rise in temperature takes decades. For this reason, those who are intent on claiming that climate sensitivity is low seem to have an almost fatal attraction for slab oceans.

    Just this year I believe we have seen a couple of papers come through which used slab oceans, and at least one was peer-reviewed. I remember that with the latter of the two there was a fair amount of press from many of those who think that General Circulation Models with forty layers of atmosphere and twenty layers of ocean are too simple to capture the behavior of the climate system. Yet they thought that the study using a single-layer slab model of the ocean had conclusively shown that we were overestimating climate sensitivity. Funny how that works. And underestimating climate sensitivity as they do may very well prove to be a fatal mistake, for much of the world’s population, given the floods, drought and famine implied by the higher, more realistic climate sensitivity.


    Anyway, I should have mentioned the bit about ocean circulation earlier, but I was trying to keep things short, and not being a climatologist, I didn’t recall it until seeing your post. Thank you — the bit about ocean circulation needed to be included since (if I remember correctly) this is the biggest reason for the lag.

    Comment by Timothy Chase — 23 Jan 2008 @ 3:38 PM
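The slab-versus-circulating-ocean contrast Timothy describes can be illustrated with a toy two-box energy-balance model. All parameter values here (forcing, feedback, exchange coefficient, heat capacities) are rough illustrative assumptions, not numbers from any GCM:

```python
# Toy model: mixed layer with and without heat exchange to a deep reservoir.
# Units: forcing in W/m^2; feedback/exchange in W/m^2 per K;
# heat capacities in W yr m^-2 K^-1 (roughly 70 m and 1000 m of water).
def surface_warming(years, coupled, dt=0.05):
    F, lam, k = 3.7, 1.2, 0.7     # forcing, feedback, mixed/deep exchange
    Cm, Cd = 9.0, 130.0           # mixed-layer and deep-ocean heat capacities
    Tm = Td = 0.0
    for _ in range(int(years / dt)):           # simple Euler time stepping
        down = k * (Tm - Td) if coupled else 0.0
        Tm += dt * (F - lam * Tm - down) / Cm  # mixed-layer energy balance
        Td += dt * down / Cd                   # deep box slowly takes up heat
    return Tm

eq = 3.7 / 1.2   # equilibrium warming, ~3.1 K
print(surface_warming(30, coupled=False) / eq)  # slab: nearly equilibrated by year 30
print(surface_warming(30, coupled=True) / eq)   # coupled: still well short at year 30
```

The point of the sketch: with the deep box attached, the surface is still far from equilibrium after three decades, which is why fitting a pure slab to the observed record understates the eventual warming.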

  493. Re #488 where # Barton Paul Levenson Says: “I have no idea.”

    If you are going to calculate using mass then you have to use moles not kilograms. In other words you have to divide the mass of gas by its molecular weight.

    Surely it does not need a “professional” to point out that it is moles of ideal gases which have the same volume 22.4 litres IIRC.

    Cheers, Alastair.

    Comment by Alastair McDonald — 23 Jan 2008 @ 4:35 PM

  494. Timothy – thanks for taking the time to prepare a thorough and thoughtful response. It was helpful to me.

    Hank – I was referring to global average temperatures. Using 20th century averages, the average land surface temperature is 8.5 C and the average sea surface temperature is 16.1 C.

    Comment by B Buckner — 23 Jan 2008 @ 4:39 PM

  495. Rod, Yes–best I could do with an English keyboard.

    Comment by Ray Ladbury — 23 Jan 2008 @ 5:13 PM

  496. Global Temperature Trends: NASA GISS vs. Hadley Cru

    henry (#490) wrote:

    Recently on Open Mind, Tamino took the GISS and Had/CRU curves and merged them in the same chart. Other than the .1 degree zero offset (due to the use of different reference periods), the curves follow nicely.

    This seems to show that GISS’ extrapolation of the Arctic sites doesn’t add to the global temp.

    I found it.

    From the essay:

    I’ve plotted the two data sets using different y-axes, because they’re on a different scale. That’s because NASA GISS computes anomalies compared to the 1951-1980 baseline while HadCRU uses the 1961-1990 baseline.

    Clearly there’s a strong warming trend in both data sets, with GISTEMP data rising at 0.018 +/- 0.003 deg.C/yr, HadCRU at 0.019 +/- 0.003 deg.C/yr. We can also see some sizeable ups and downs, like the cooling for a few years around 1992 caused by the Mt. Pinatubo explosion, and the strong warming in 1998 caused by el Nino.

    Garbage is Forever
    August 31, 2007

    Comment by Timothy Chase — 23 Jan 2008 @ 5:26 PM

  497. B Buckner (#494) wrote:

    Timothy – thanks for taking the time to prepare a thorough and thoughtful response. It was helpful to me.

    Not a problem. Writing longer pieces helps me learn and forces me to think through the issues.

    Incidentally, I might not do a one-to-one comparison, though, between sea and land surface temperature. Most of the land is at higher latitudes and there is plenty of ocean around the tropical middle. Oceans at the same latitude may still be warmer, but probably not by as much as the global ocean vs. global land suggests.

    Comment by Timothy Chase — 23 Jan 2008 @ 5:42 PM

  498. B Buckner (485), at the risk of being presumptuous in adding to what Timothy (492) has said, I will point out that seasonal warming rarely extends much more than a meter into the land (less than half of the earth’s surface). Because part of the ocean is constantly overturning, much of the ocean’s warmed upper layer is constantly forced down to depths far greater than one meter (indeed hundreds if not thousands of meters) below the “earth’s” surface. It will take hundreds of years for this previously “warmed water” to resurface, and when it does so it will present the “delayed” effect you described.

    Besides the overturning factor described, you must realize that the surface of the ocean is subject to mixing that has little comparable equivalent on the land. Local turbulence mixes the upper few meters of the ocean, driven by local winds, waves, fish, passing boats, tides, toddlers in inner tubes, etc. The closest terrestrial equivalents to these factors are earthworms, gophers and human earth-moving machinery, including toddlers in sandboxes. Because water is more fluid than soil, this “cooling effect” is more dominant on water than it is terrestrially.

    I hope this helps

    Comment by Arch Stanton — 23 Jan 2008 @ 6:00 PM


  500. > global average temperatures

    Without knowing the waves, wind, currents, and much else, the global average number isn’t going to be useful as a way to decide how heat’s transferred. Sunlight penetrates a good ways into the ocean. Measurements have been done to come up with numbers useful for modeling, worth looking these up I think.

    One example, rather old, from one set of cruises:

    “Accounting for all factors, the net surface heat transfer to the ocean was 17.9 ± 10 W m−2….”

    Just an example, mind, not meant to be a global average number.

    Joan Baez said in a radio interview a few years ago, “when faced with a choice between a hypothetical situation and a real one, always take the real one.”

    Perhaps she learned that from her physicist father.

    Comment by Hank Roberts — 23 Jan 2008 @ 6:56 PM

    The response in #367 (raypierre) doesn’t jibe with climate station data in the Midwest and West. Another station plot (1897-2007, link below) shows that the Benton Harbor, MI annual mean temperature in 1931 was very warm (still the warmest on record for that station). Other stations, in rural and forested areas, were also hot in 1931.

    Comment by pat n — 23 Jan 2008 @ 8:09 PM

  502. 501

    There is nothing wrong with what he said. The earlier warming, say 1900-40, was caused by anthropogenic, solar, internal, and lack-of-volcanic factors, interrupted mainly by aerosols, and everything since then is mainly us. Also, I’m not sure what you’re trying to get from a single weather station, but it probably won’t work for any discussion of global warming.

    Comment by Chris Colose — 23 Jan 2008 @ 8:26 PM

  503. 473 Nick Gotts> That is rather a tasteless offer: if Alastair accepts it, he puts himself in the position of standing to win money through an increase in the number of people dying of hunger.

    You are correct; I should have added the usual terms for these bets: the winnings go to a charity of the winner’s choice. In this case a famine prevention charity should be specified.

    483 Alastair McDonald> If I am correct there is a strong possibility of social disorder. In that case it is unlikely that I would be able to collect my bet. Would you really hand over $1000 dollars to me rather than use it to by food for your wife and kids?

    There are betting brokers that will hold the stakes in low risk investments. Are there any other objections? $1000 is OK with me (with the proceeds going to a famine charity).

    Comment by Steve Reynolds — 23 Jan 2008 @ 9:34 PM

  504. Re #439 Alastair: what you are saying is correct, but IMHO irrelevant. Indeed partial pressure ratios are the same as molar or ppm-by-volume ratios — but when computing weights, you must take molecular masses into account as Barton does.

    From what I see he does his calculations right and comes up with two correct answers; it is just counter-intuitive that they differ.

    Comment by Martin Vermeer — 23 Jan 2008 @ 11:49 PM

  505. #492 Timothy,
    So #485 Buckner’s lag is caused by ocean convection. And #463 ocean upwelling can be a major source of heat loss. Two questions:

    1. What is the characteristic time-scale of this heat flux? Shorter or longer than, say, the solar cycle?
    2. You say ocean heating leads to H2O vapor, a (+) feedback GHG. Does it also not generate clouds, a (-) feedback? How much of each? Do you have a reference handy?


    Comment by An Inquirer — 23 Jan 2008 @ 11:59 PM

  506. Re # 467 Joseph Sullivan

    I’m curious about your comment that “the socratic method is the normal method in that arena [ meaning Law School]”

    I have no doubt that some law professors do use the Socratic method, but there is a common misperception about this teaching method in law school based on John Houseman’s character, Professor Kingsfield, in the 1973 movie, Paper Chase (I hope this movie is not the source of your comment).

    I’m also curious about your assertion that the socratic method “will be misunderstood by people, like scientists, who have been trained in a completely different way.”

    That is just not true. While most science and engineering courses dwell on factual knowledge, not all do – some aim to get students to think more clearly about scientific issues. And the Socratic method of discourse can be quite useful in forcing science students to think clearly. In fact, I had an undergraduate biology professor who taught that way, and years ago the journal American Scientist published a very clever essay that made use of Socratic dialog to examine the logic of hypotheses for the evolution of larger brains in primates feeding on fruits in trees (sorry, but I can’t locate the reference at this moment).

    Comment by Chuck Booth — 24 Jan 2008 @ 12:29 AM

  507. random question

    what is the real emissivity of the Earth? I’ve seen 0.769 here, 0.95, 0.615… and don’t you need to include it when doing radiation calculations, like (sigma)T^4 * (0.95) = OLR? That goes from 240 W/m2 to 227, but I never see that done. Why all these values, and where do you actually compute them?

    Comment by Greg — 24 Jan 2008 @ 1:58 AM
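One way the quoted 0.615 can arise is as an effective planetary emissivity, defined so that the observed OLR equals eps × sigma × Ts^4 at the mean surface temperature; the ~0.95 is a different quantity (broadband surface emissivity), so the two answer different questions. A quick sketch of that bookkeeping:

```python
# Effective planetary emissivity: eps = OLR / (sigma * Ts^4).
sigma = 5.67e-8   # W m^-2 K^-4, Stefan-Boltzmann constant
Ts = 288.0        # K, global mean surface temperature
OLR = 240.0       # W m^-2, outgoing longwave radiation at the top of atmosphere

eps_eff = OLR / (sigma * Ts ** 4)
print(round(eps_eff, 3))   # ~0.615
```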

  508. In # 40 Gavin said “The variance of weather is larger than the seasonal warming from 10-17 April. Do exactly the same exercise for a two-month period rather than 8 days, and it will make sense”

    That makes total sense, so I ask Gavin, or anyone who can answer or point me to where I can find the answer, this question: would not the Earth’s precession and its variable orbit around the sun have a significant influence on our climate, since they are the primary factors behind theories related to the ice ages? Geologists point to earth-surface features which demonstrate, for example, that the Atlantic Ocean was 400 feet higher than present during peak melting and coastal shorelines were 50 miles farther east during peak ice.

    Thank you – Dan

    [Response: Yes. Search on Milankovitch forcing - this is the primary driver of climate variability in the 10,000 to 400,000 year range. - gavin]

    Comment by Dan — 24 Jan 2008 @ 2:12 AM

  509. #506 Chuck Ooth*

    I have not seen Paper Chase. I am basing my assertion on my own experience when I was a student in law school and when I was a biology major as an undergraduate.

    What I had in mind was the standard practice in law school where a student takes a position and the professor will bring up arguments to counter the student’s position, and the student will in turn defend his position. The professor’s arguments at times have no validity, but the process develops the student’s legal reasoning and rhetoric skills.

    Knowingly advancing an incorrect position in a public debate would not be acceptable to the scientific community. A lawyer may push a legal theory in court that he considers incorrect if it would help his client, and as long he followed the code of ethics that behavior would be acceptable in the legal community.

    The excellent Ziman (2000) paper that Raypierre links on the “The debate is just beginning – on the Cretaceous” post explains what I mean in more detail.

    *If I’m Joseph Sullivan, then you’re Chuck Ooth. ;)

    Comment by Joseph O'Sullivan — 24 Jan 2008 @ 3:42 AM

  510. #486
    huh? I have no idea what the Hadley Centre says about Arctic warming (searching for “Arctic” on their site yields something about increased river flows into the Arctic, but by all means let me know if you find some categoric statement that they think it is or isn’t warming).

    The point of their response was the methodology of their approach, which they contrast with GISS. Hadley says that GISS has a very small amount of data which they interpolate from. Hadley meanwhile says “we don’t know” and adjusts its error bars accordingly.


    [Response: The random error bars on both are around 0.1 deg C in any one year - smaller than the difference the treatment of Arctic points make. However, GISS shows a graph of what happens if you leave out the very highest latitudes so you can see for yourself that this is not a 'random' issue. As the Arctic warms faster than the rest of the globe, there is a systematic bias that occurs if you assume that it is simply unknown. It's good that different groups adopt different strategies in this case, because that demonstrates the importance more clearly. - gavin]

    Comment by Alan K — 24 Jan 2008 @ 3:48 AM

    Inquirer, cloud formation is a bit complicated. Yes, one might expect more cloud for higher H2O vapor content, but if the upper troposphere warms substantially, you might not get more clouds. And if you do, keep in mind that clouds have both a cooling effect (keeping sunlight out) and a warming effect (keeping IR in, especially at night).

    Comment by Ray Ladbury — 24 Jan 2008 @ 8:24 AM

  512. RayPierre’s comment on the 3 degree issue (466) is unarguable, as is Hank’s (454). They are saying that whatever caused past changes in the energy balance of the planet, the resulting temperature changes will appear much later.
    We have no way of quantifying the effects of the delays, so over short current timescales (hundreds of years?, more?) it is impossible to attribute any particular change in temperature (the emergence from the little ice age, the fall from the forties to the seventies, the medieval warm period) to any particular cause.
    The problem, however, is that politicians and journalists do just that – they attribute the current warming to the increase in concentration of one of the lesser greenhouse gases. A is happening, B is happening, so A must be causing B.

    Scientists should not make that mistake, and should attempt to support the theory with validated laboratory experiments, carefully measuring the saturation effect, for example, and its range of application. We should not attempt to explain Angstrom away (What Angstrom did not Know, here) – if we do not accept his results we should repeat and extend his experiment.

    Otherwise we have only the temperature measurements (themselves very uncertain, and subject to debate) to validate the model theory.

    I used RayPierre’s log law data in my first comment on 3 degrees (451). If we are halfway to the effect of doubling the CO2 concentration, and if the initial rise from the 19th century to 1940 (about 0.4 degrees) had nothing to do with CO2 (the concentration seems to have increased from 280 to 300 ppm), we can expect less than half a degree increase from the forties to the end of this century.
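    The arithmetic above can be sketched in a few lines. The concentration figures are those quoted in this thread; the 2 C per doubling sensitivity is purely an illustrative assumption, not a claim about the true value:

```python
import math

def warming_from_log_law(c_ppm, c0_ppm=280.0, sensitivity=2.0):
    """Equilibrium warming (deg C) for CO2 rising from c0_ppm to c_ppm,
    assuming temperature scales with the log of the concentration ratio.
    The 2 C per doubling sensitivity is an illustrative assumption."""
    return sensitivity * math.log(c_ppm / c0_ppm, 2)

early = warming_from_log_law(300.0)     # 280 -> 300 ppm: about 0.2 C
doubling = warming_from_log_law(560.0)  # a full doubling recovers 2 C
```

    Because the law is logarithmic, each successive increment of CO2 buys less warming, which is why the “halfway to doubling” framing matters to the argument.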

    Arrhenius was expecting 6 degrees from doubling CO2. Would it have been a good idea to have taken him seriously, and cancelled the Western industrial revolution?

    Are we sufficiently certain to ask China and India to take that view today? More to the point, will they be sufficiently certain to agree?

    [Response: Staples' whole line of argument is completely invalidated by the fact that all GCM's make use of the very same physics that gives you the logarithmic law for radiative forcing, yet they can give a great deal of additional warming -- even up to Arrhenius' value in some cases -- when run out to 2100. You don't have to believe the GCM's have everything "right" to get my point. The point is that these models show that the same physics Staples is arguing on the basis of is undeniably compatible with several degrees C of additional warming in the future.

    As to Arrhenius, if people had paid him more attention it wouldn't have cancelled the industrial revolution. It would only mean we would not have put so much reliance on fossil fuels. We'd be fifty or more years ahead of the game on engineering for a more renewable based economy. To be fair, the moment when it became clear there was a problem was much later in the 1950's when Revelle and Suess developed the ocean chemistry showing that CO2 really could be increased by fossil fuel burning. On the other hand, the carbonate chemistry used by R&S is standard high-school stuff, and there's a good chance it would have been discovered much earlier if people had paid more attention to Arrhenius, and if Ångström hadn't set back the field by a highly publicized but botched experiment. --raypierre]

    Comment by Fred Staples — 24 Jan 2008 @ 9:33 AM

  513. Question for whomever, from Timothy, and #505 which says, “…2. You say ocean heating leads to H2O vapor, a (+) feedback GHG. Does it also not generate clouds, a (-) feedback? How much of each? Do you have a reference handy?…”

    I don’t hear anything about the larger cooling effect on the ocean (surface) stemming from increased evaporation via increased ocean temps. Is this all assumed in the current discussion, and/or what is its impact?

    Comment by Rod B — 24 Jan 2008 @ 11:07 AM

  514. Well Chris,

    Your reply in 502 about “single weather stations” ignored what I said in 501 and 399: that many other stations in rural and forested areas in both 1931 and 1921 were part of the 1920s-1940s global warm period.

    U.S. and global hydrologic and meteorological data from 1880-2007 show no significant global cooling from the 1950s to the mid-1970s. Instead, there were two warm periods, 1920s-1940s and late 1970s to the present, with an “apparent cool dip” from the 1950s-1970s sandwiched between them.

    The “apparent cool dip” was actually just getting back closer to the normal trend which had been occurring in the late 1800s and early 1900s. The 1950s-1970s “apparent cool dip” was influenced by the dominance of La Nina rather than by El Nino (which dominated the mid 1930s-1940s period).

    It’s wrong to tell the public that a reduction in aerosols is the culprit behind the 1950s-1970s “apparent cool dip”. Many scientists have done that and are now refusing to admit the mistake. As a result, public confidence in interpretations by many climate scientists continues to be low on many subjects – a big problem. Public confidence would improve if more scientists speak honestly about what they see happening, including those scientists who work in weather prediction.

    Comment by pat n — 24 Jan 2008 @ 12:22 PM

  515. Greg posts:

    [[what is the real emissivity of the Earth, I’ve seen 0.769 here, 0.95, 0.615…and don’t you need to compute that when doing radiation calculations like (sigma)T^4 * (0.95) = OLR. That goes from 240 W/m2 to 227, but I never see that done. Why all these values, and where do you actually compute that?]]

    0.95 is the emissivity of the Earth’s surface, 0.615 is the emissivity of the Earth-atmosphere system as seen from space. The 0.769 figure is from treating the atmosphere as a single slab and asking what fraction of energy from the ground would need to be absorbed to reach the correct surface temperature.
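    The relationship between these figures can be checked numerically; a minimal sketch, using the usual round values (surface temperature ~288 K, OLR ~239 W/m^2) rather than precise observations:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_emissivity(olr, t_surface):
    """The emissivity that makes eps * sigma * T^4 match the given OLR."""
    return olr / (SIGMA * t_surface ** 4)

# Earth-atmosphere system as seen from space:
eps_system = effective_emissivity(239.0, 288.0)  # about 0.61
```

    The ~0.61 that falls out is the “seen from space” figure quoted above; the 0.95 surface value would apply if you used the surface’s own emitted flux instead of the outgoing flux at the top of the atmosphere.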

    Comment by Barton Paul Levenson — 24 Jan 2008 @ 12:27 PM

  516. Re #512 (Fred Staples) “Arrhenius was expecting 6 degrees from doubling CO2. Would it have been a good idea to have taken him seriously, and cancelled the Western industrial revolution?”

    It’s too early to say :-).

    Comment by Nick Gotts — 24 Jan 2008 @ 1:00 PM

  517. Fred Staples embraces mysticism:
    “We have no way of quantifying the effects of the delays, so over short current timescales (hundreds of years?, more?) it is impossible to attribute any particular change in temperature (the emergence from the little ice age, the fall from the forties to the seventies, the medieval warm period) to any particular cause.”

    Fred, now come on. Fess up. Did you really have a straight face as you were typing this? Your contention completely ignores the fact that we have physics to help us out. The fact that we don’t know everything does not invalidate the fact that there are some things we do know.

    And I refuse to embrace the false assumption that anyone who is persuaded by the science and evidence of climate change must want to “cancel the industrial revolution”. Are fossil fuels the only source of energy? Do we really need a market place where imported tropical fruits are cheaper than locally grown produce?

    Sorry, Fred, this is a weak effort.

    Comment by Ray Ladbury — 24 Jan 2008 @ 1:38 PM

  518. Re #516 (Nick Gotts) [Re #512 (Fred Staples) “Arrhenius was expecting 6 degrees from doubling CO2. Would it have been a good idea to have taken him seriously, and cancelled the Western industrial revolution?”
    It’s too early to say :-).]

    To expand a little, the only way we can try to answer this question is by constructing one or more counterfactual scenarios, and comparing them with the world as it is. So, let’s suppose there had been much less fossil fuel readily available, or some religious taboo had prevented its use. Significant use of fossil fuels actually came quite late in the “Western industrial revolution”, which has medieval roots, and is just the latest stage of a much broader, more or less global acceleration of technological innovation, which has lasted many millennia. By 1600, before any significant fossil fuel use, many key prerequisites of the Western industrial revolution had been borrowed from other parts of the world (e.g. paper, clear glass, steel, magnetic compass, gunpowder, “Arabic” numerals), others were invented in Europe before 1600 (e.g., ocean-going ships, guns, moveable-type printing, clockwork, spectacles, telescopes, perspective drawing). Accessible fossil fuel “supercharged” European technical innovation (and the Great European Land Grab it made possible), but there is no reason to think that technical development would have stalled altogether without it, since the acceleration of innovation, and its spread around the world, go back so far. It seems possible people might have done less damage to each other and the environment in a slower industrialisation process, but by no means certain: our doppelgangers in such a possible world might well find a way to screw things up without using fossil fuels!

    Comment by Nick Gotts — 24 Jan 2008 @ 2:30 PM

  519. It would be interesting to think about a planet where a lot of fossil fuel was not readily available. If people there developed an advanced technology, presumably they would start out burning wood, and later perhaps refine methanol from it. Cars and factories would still be possible. They might make more use of wind and solar power. There’s no reason to think they couldn’t figure out nuclear fission, or eventually, fusion. (Assuming the latter can, in fact, be figured out, something I’m increasingly dubious about.)

    Comment by Barton Paul Levenson — 24 Jan 2008 @ 4:12 PM

  520. Sadly, Raypierre, I have not made my point clear.

    I am not saying that the log law is unarguable. Applied to current temperature changes and concentrations it may prove accurate; equally, it may not if the temperature effects are delayed.

    What we cannot do is validate the GCM’s by appealing to the GCM results for temperature changes 100 years from today.

    It is the degree of certainty claimed for the GCM models that I object to, not the GCM theory itself.

    Take, for example, Steve Sherwood’s contribution on the issue of troposphere temperatures versus surface temperatures:
    “The non-warming troposphere has been a thorn in the side of climate detection and attribution efforts to date. Some have used it to question the surface record (though that argument has won few adherents within the climate community), while others have used it to deny an anthropogenic role in surface warming (an illogical argument since the atmosphere should follow no matter what causes the surface to warm)”

    He seems to put the issue fairly, except that his parentheses are irrelevant. He should have concluded that, on the measurement evidence, the troposphere warming is not sufficient to support the AGW theory, and if the theory is correct the surface temperature increase looks to be exaggerated.

    And on the mysticism point, Ray, (517) I am inviting you to go back 100 years, look at the energy sources available then, and take a view.

    [Response: You are misinterpreting Steve's quote. It has to be read in the context of the subject he was working on. He is referring to problems with the data itself, and the nature of the difficulties in getting good tropospheric temperature trends, particularly in the tropics. It's still hard to do, but the best estimates now are that the troposphere is indeed warming along with the surface. With regard to the GCM's, what I'm saying is that your physical argument is utterly bogus, since if it were correct it would apply to the GCM's as well, and they don't show the behavior you infer. --raypierre]

    Comment by Fred Staples — 24 Jan 2008 @ 4:24 PM

  521. Fred, that’s precisely the problem–you keep insisting on going back 100 years. And your point is irrelevant, as by 1950, nuclear power was an option, and most of the CO2 emissions post-date this era. Moreover, to expect that human innovation would have stagnated is not reasonable.
    Not only is your understanding of climate models severely outdated, so is your understanding of the data. The discrepancy between theory and measured tropospheric warming is now within reasonable error bars.

    Comment by Ray Ladbury — 24 Jan 2008 @ 5:27 PM

  522. An Inquirer (#505) wrote:

    #492 Timothy,
    So #485 Buckner’s lag is caused by ocean convection. And #463 [#489 I believe?] ocean upwelling can be a major source of heat loss.

    It might be better to phrase this a little more exactly. Ocean upwelling can be a major source of heat loss from the surface. However, the heat still exists. It has simply been redistributed to greater depths.


    An Inquirer (#505) wrote:

    1. What is the characteristic time-scale of this heat flux? Shorter or longer than, say, the solar cycle?

    There shouldn’t be any one characteristic time-scale involved. This is a large part of the reason why today’s climate models divide the ocean into 40 levels rather than the 20 they might have made do with in the 1990s, or the two slabs (one on top of the other) that would seem to be implied when speaking of just one “characteristic time-scale” in accordance with which heat is exchanged between them. Indeed, in the thermohaline circulation, individual molecules take on average 3,500 years to complete the cycle. However, this isn’t relevant for our purposes.

    Thermal inertia and ocean circulation imply that the “full response” takes time, a form of “latent heating,” but the strongest response is immediate. Neglecting natural variability due to various climate oscillations, the temperature response is, to a first approximation, strongest at the time of a change in the total forcing and diminishes after that.

    Thus if solar irradiation rises rapidly to, say, 1950 but then goes flat, the strongest response should come while it is rising or just after it has risen. What one should not expect is for the response to be small, but then pick up, with temperatures accelerating, nearly 30 years later in 1979. Likewise, when the world economy recovered at the end of World War II, sulfates and forcing due to sulfates greatly increased, leading to a period of global cooling that began immediately. The initial effects weren’t delayed for several years or several decades — although the full effects certainly would be.
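    A minimal sketch of this behavior, assuming a single mixed-layer box with one illustrative e-folding time (the point above is precisely that the real system has many timescales, so this is only a cartoon):

```python
import math

def step_response(t_years, forcing=1.0, sensitivity=0.8, tau=8.0):
    """Temperature response (deg C) t_years after a step increase in
    forcing (W/m^2). sensitivity (K per W/m^2) and tau (years) are
    illustrative assumptions, not fitted values."""
    return sensitivity * forcing * (1.0 - math.exp(-t_years / tau))

# The year-on-year warming rate is largest right after the step
# and decays afterward; it does not pick up decades later.
rates = [step_response(t + 1) - step_response(t) for t in range(30)]
```

    The decaying `rates` list is the point: for an exponential approach to equilibrium, the biggest temperature changes come immediately after the forcing changes, not 30 years on.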


    An Inquirer (#505) wrote:

    2. You say ocean heating leads to H2O vapor, a (+) feedback GHG.

    Increasing temperature leads to increased water vapor, or in other words, increasing absolute humidity.

    For one study that looks specifically into this and its attribution to anthropogenic greenhouse gases, please see:

    Santer et al., Identification of human-induced changes in atmospheric moisture content, PNAS, 25 Sept 2007, vol. 104, no. 39, pp 15248-15253

    However, water vapor feedback is expected to and appears to leave relative humidity roughly constant.

    Please see:

    The rate of global-mean drying (~6%/K) agrees with the rate at which the saturation vapor pressure decreases with temperature in the lower troposphere, implying a nearly constant relative humidity change in water vapor mass. Such behavior at the global scale is a widely recognized characteristic of climate models (3–5, 7, 8).

    Supplementary Material to “Global Cooling After the Eruption of Mount Pinatubo: A Test of Climate Feedback by Water Vapor”
    Science 26 April 2002:
    Vol. 296. no. 5568, pp. 727 – 730

    Some of those who have argued for a lower climate sensitivity have argued that relative humidity would actually decrease due to increased precipitation. But we aren’t seeing this either in the models or empirical studies.


    An Inquirer (#505) wrote:

    Does it also not generate clouds, a (-) feedback? How much of each?

    Global warming could result in increased or reduced cloud cover. It will largely depend upon the temperature profile of the atmosphere. Moreover, clouds have both a greenhouse effect in which they trap thermal radiation and an albedo effect in which they reflect sunlight. These nearly cancel, and the net effect for a given layer of clouds is dependent upon cloud height, thickness — among other things. At night, for example, clouds tend to keep in thermal radiation but their albedo effect is of little use.

    In fact, people like Lindzen have claimed that global warming would result in fewer clouds — with the skies opening up to let out additional thermal radiation, acting as a negative feedback that would dampen global warming.

    Now at most latitudes, the models have done well at projecting trends in cloud cover. However, we have seen cloud cover decrease over the past fifteen years in the tropics, which is not predicted by some models. Whether this is due to diminished aerosol load, global warming or natural variability (e.g., ENSO) is still an open question inasmuch as the trend has been relatively short.

    But the net effect at the top of the atmosphere has been a reduction in reflected sunlight which almost exactly matches the increase in outgoing longwave radiation, implying no net warming or cooling as the result of diminished cloud cover, and over the same period, the temperature of the tropics has continued to rise.

    Please see:

    A significant decreasing trend in OSR [outgoing solar radiation] anomalies, starting mainly from the late 1980s, was found in tropical and subtropical regions (30° S-30° N), indicating a decadal increase in solar planetary heating equal to 1.9±0.3Wm-2/decade, reproducing well the features recorded by satellite observations, in contrast to climate model results. This increase in solar planetary heating, however, is accompanied by a similar increase in planetary cooling, due to increased outgoing longwave radiation, so that there is no change in net radiation. The model computed OSR trend is in good agreement with the corresponding linear decadal decrease of 2.5±0.4Wm-2/decade in tropical mean OSR anomalies derived from ERBE S-10N non-scanner data (edition 2). An attempt was made to identify the physical processes responsible for the decreasing trend in tropical mean OSR.

    A. Fotiadi, et al, Analysis of the decrease in the tropical mean outgoing shortwave radiation at the top of atmosphere for the period 1984-2000, Atmos. Chem. Phys., 5, 1721-1730, 2005

    As such, decreasing cloud cover in that region has had no net effect upon the trend in global warming.


    Of course what you appear to be interested in is climate sensitivity. One way of cutting through what uncertainty exists with respect to some of the feedbacks is to see what the climate system has done in the past.

    Bringing together the results of over 40 different paleoclimate studies of periods over the past 420 million years, the following paper…

    Royer DL, Berner RA, Park J. (2007), Climate sensitivity constrained by CO2 concentrations over the past 420 million years. Nature, 446: 530-532.

    … arrived at a fairly consistent best fit for a climate sensitivity of roughly 2.8 C per doubling of carbon dioxide. There is still a fair range of uncertainty associated with this, but given the statistical analysis performed, it is more likely that 2.8 C significantly underestimates the actual climate sensitivity than overestimates it.

    Comment by Timothy Chase — 24 Jan 2008 @ 7:57 PM

  523. Re 509 Joseph O’Sullivan

    Sorry for misspelling your name – I had it correct in my original comment, which disappeared when I hit [Post] and I couldn’t retrieve it, so I had to retype it in a hurry. And I apologize for questioning your familiarity with the Socratic Method in law. You might enjoy The Paper Chase, by the way.

    I agree that knowingly advancing a false position in a public debate would not be acceptable to the scientific community, but I see nothing wrong with doing so in a science class in order to help students see the flaws in their logic. My point is still that many scientists are quite familiar with the Socratic method – they may have studied Greek philosophy, they may have taken courses from professors who use the Socratic method (apparently, you missed that as an undergraduate, which is unfortunate), and they may use it themselves in the classroom. And many scientists, biologists especially, are very familiar with the conscious advancement of false propositions as a rhetorical trick – that’s how creationists and ID proponents frequently argue their cause. ’nuff said.

    Comment by Chuck Booth — 24 Jan 2008 @ 8:29 PM

  524. Thank you for the translation into Spanish – it is extremely useful when trying to explain to my Spanish-speaking colleagues and family members.

    Comment by Jorge Ramos — 24 Jan 2008 @ 9:09 PM

  525. 469:

    Updated numbers are given in the section. The estimate is now a reduction of between 15 and 30% from 1980 to 2000.

    The “reduction…1980 to 2000″ is basically IPCC assessment spin. Here is a quote from the section:

    “The most recent study (Stern, 2005) suggests a decrease in global anthropogenic emissions from approximately 73 to 54 TgS yr–1 over the period 1980 to 2000,…”

    But if you go to Stern, 2005, the emissions as late as 1991 are about 72 TgS yr-1. So it’s not like Stern, 2005 is saying that emissions peaked circa 1980, and declined steadily thereafter.

    So once again, we’re left with the question of why surface temperatures started rising in the mid-1970s, even though sulfur dioxide emissions did not decline.

    Comment by Mark Bahner — 24 Jan 2008 @ 10:21 PM

  526. Nick (518), an added tidbit: Ford’s (at least) early cars were built to run on ethanol because gasoline was still very new and costly.

    Comment by Rod B — 24 Jan 2008 @ 10:32 PM

  527. Can Declining Sulfur Emissions Explain the Timing of the Modern Era of Global Warming?

    Mark Bahner (#525) wrote:

    But if you go to Stern, 2005, the emissions as late as 1991 are about 72 TgS yr-1. So it’s not like Stern, 2005 is saying that emissions peaked circa 1980, and declined steadily thereafter.

    Well, I couldn’t find a copy of Stern 2005, but I found:

    David I Stern, Reversal of the trend in global anthropogenic sulfur emissions, Global Environmental Change 16 (2006) 207–220

    …. which I presume is almost as good.


    Looking at world emissions, I see a very steep climb until some time in the mid 1960s at which point the sulfur emissions continue to rise, but not anywhere as steeply. Then I see the emissions barely rise from 1978-1979, fall during early 1980s, rise again for perhaps four years, fall a little, then fall precipitously. That’s from figure 1.

    Another good graph is figure 4, “Northern and southern hemisphere emissions 1850–2000.” It breaks things out according to hemisphere. According to this graph, sulfur emissions in the Northern Hemisphere had pretty much leveled off a little before 1975 and were faltering shortly afterward. Emissions in both North America and Western Europe had been declining since at least about 1975.


    Mark Bahner (#525) wrote:

    So once again, we’re left with the question of why surface temperatures started rising in the mid-1970s, even though sulfur emissions did not decline.

    I can think of at least two reasons. First, it’s not like sulfur emissions are the only forcing in the climate system. Second, they don’t get evenly distributed, either.

    Carbon dioxide is still growing, so even if sulfur emissions remained flat, we would see temperatures begin to rise. But more importantly, it really matters where the sulfur is being emitted. If sulfur emissions are dropping in the Northern Hemisphere but rising in the Southern Hemisphere, the Northern Hemisphere will tend to matter more because that is where there is more land mass. Land has less thermal inertia than ocean, and therefore with a given forcing, land temperature will change more rapidly than ocean.

    Sulfates don’t stay put, but get carried by the wind. However, they will get carried only so far.

    Please see:

    Inferences of [sulfate mean residence time] based on concentrations of sulfate in air or precipitation are necessarily ambiguous because of continuous and distributed sources and because of sulfate formation by atmospheric oxidation of SO2 (Schwartz, 1989). From the decrease in sulfate concentration in precipitation with distance over the North Atlantic, Whelpdale et al. (1988) inferred a (1/e) decay distance of 2400 km. For a mean transport velocity of 300 to 500 km per day (Summers and Young, 1987), the corresponding mean residence time is 5 to 8 days.

    Stephen E. Schwartz, The Whitehouse Effect - Shortwave Radiative Forcing of Climate by Anthropogenic Aerosols: An Overview, J. Aerosol Sci., Vol. 27, No. 3, pp. 359-382, 1996

    On average, they will travel 1600 km before being cut in half. As such, wind circulation matters a great deal…
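    The numbers in the quoted passage fit together as a back-of-envelope check, using only the figures given above:

```python
import math

E_FOLD_KM = 2400.0  # (1/e) decay distance inferred by Whelpdale et al. (1988)

# Distance over which the sulfate burden is cut in half:
half_distance_km = E_FOLD_KM * math.log(2.0)  # roughly 1660 km

# Mean residence time for the quoted transport speeds of 300-500 km/day:
residence_days = [E_FOLD_KM / v for v in (500.0, 300.0)]  # about 5 to 8 days
```

    The halving distance of roughly 1660 km is where the “1600 km” figure comes from, and dividing the decay distance by the transport speeds recovers the 5-8 day residence times.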

    Please see:

    NASA Scientists Use Satellites to Distinguish Human Pollution from Other Atmospheric Particles
    NASA’S Earth Observatory, RELEASE NO: 02-137
    September 17, 2002

    As such, even within the Northern Hemisphere, sulfates will be more effective at arresting the rise in temperature in some places more than others.

    If they are emitted too far north, we will have white on white, with ice and snow below and no net effect upon the albedo. And even then, in the far north, a carbon content of a little more than 5% will tend to change the sign of the forcing.

    If they are too far south, they will tend to pass over less land and more water, or perhaps even drift into the Southern Hemisphere. If they are emitted along west coasts, they will tend to drift over land, but when emitted along the east coasts they will drift over water.


    Looking at Stern 2006 figure 4, “Regional trends in sulfur emissions in the 1980s and 1990s,” North America and Western Europe were beginning to reduce their emissions around 1970-5. Eastern Europe had nearly leveled off by the late 1970s and faltered in the early 1980s, with some recovery in the late 1980s. In the Northern Hemisphere, the only continent that experienced continuous strong growth was Asia, until the mid to late 1980s — further to the south, with the Pacific Ocean to the east.

    Not only the sign of the change in emissions matters, but also its magnitude. And the magnitude isn’t enough, either: you have to know where it is distributed in order to estimate the effect. Emissions in the Northern Hemisphere leveled off and dropped before global emissions. Emissions in the mid-latitudes of the Northern Hemisphere (principally North America and Western Europe) dropped before the rest of the Northern Hemisphere, between 1970 and 1975. Emissions to the north of the mid-latitudes will be more susceptible to carbon content due to more ice and snow, and emissions to the south of the mid-latitudes will have a greater tendency to be blown out over open water, where they will have less of an effect upon global average temperature, particularly along the east coasts of Asia.

    Given the graphs in Stern 2006, the fact that the modern era of global warming began in 1979 doesn’t present much of a mystery, at least from what I am able to gather. Judging from what I looked at above, any time after 1975 wouldn’t be problematic. And my focus was principally upon sulfates — without regard for the changes to the other forcings, or for that matter climate oscillations like ENSO or the Atlantic Oscillation.

    Comment by Timothy Chase — 25 Jan 2008 @ 3:48 AM

  528. Mark Bahner writes:

    [[So once again, we’re left with the question of why surface temperatures started rising in the mid-1970s, even though sulfur dioxide emissions did not decline.]]

    Because CO2 was high enough then to have a larger effect.

    Comment by Barton Paul Levenson — 25 Jan 2008 @ 8:02 AM

  529. @ Chuck Booth 523
    There’s a big difference between knowingly advancing a false position for the purpose of helping someone learn to think more clearly by finding the flaws in your argument, and knowingly advancing a false position for the purpose of actually convincing others of that false position. I for one am idealistic enough to think that the latter should be shunned in any forum, be that publications in journals, discussion on blogs, or courtroom trials. There are clearly a few people who believe otherwise…but they’ll never identify themselves as such. We can only recognize them by realizing that they’re taking an unsupported position when they really ought to know better.

    It’s tough for us lay people to sort out, though. I’m a smart guy–I even think I could learn some of the relevant math and physics if I had the time and strong enough motivation–but often when the equations start flying in a post I skim down to summary/conclusion statements to try and get the gist. I can’t, or at least am very unlikely to, check all the work for myself. I think it’s safe to say most other people won’t, either. So here’s what I do. I think of the hierarchy of evidence I learned in college: true experiment>correlational study>case study>anecdote>expert opinion>personal opinion. Lab experiments verify the properties of CO2 that lead to it functioning as a greenhouse gas, as well as a lot of (all?) the other physics in the models. The balance of experimental and correlational research leads to a pretty coherent picture of what’s going on, summarized by the IPCC. There are margins of error around all the terms, but things are constrained enough to make it seem profoundly unlikely that some new finding will fundamentally disrupt the big picture. People regularly tell stories about being able to grow certain plants in zones where they formerly wouldn’t grow, spring coming earlier, winters being milder, etc. News reports about changes in the Arctic are frequent, I’ll also count that as anecdote. You can find “professional” opinion going either way, but when bodies of scientists have issued official positions, it has AFAIK always been to endorse the IPCC synthesis. The only place where I encounter as much support for the position that the current warming isn’t anthropogenic (or has stopped, or whatever) is at the lowest tier of evidence: personal opinion. However, there are several very vocal “experts” of the non-anthropogenic position, so I’ll grant that there is some debate at the lowest two tiers of evidence.

    But my impression is that many if not most people put expert opinion much higher in the hierarchy, and then weight expert opinions differentially according to how closely the various experts match their own preexisting beliefs. So in effect, the determining factor in how many laypeople evaluate the evidence is…how well it accords with what they already believe, or want to.

    Here’s where I’m going with all this–the professional and amateur climate scientists here, and at Tamino’s place, and Eli’s, etc. often recommend that we laypersons educate ourselves about climate physics so we can grasp the significance of the evidence. I agree. But given that we know, or at least strongly suspect, that for most people that will never happen, I recommend that the climate folks also educate themselves about another professional field: social psychology, particularly persuasion and attitude change. I think it’s important. And I think those who have set themselves up as your ideological opponents are ahead of you on this. I’m not suggesting that you use any of the “dirty tricks” of marketing that the other side is already using–going over to the dark side would ruin your credibility with us idealistic people. But I really, really do think you should get more savvy about the social psychology of attitude change.

    Comment by Kevin Stanley — 25 Jan 2008 @ 11:40 AM

  530. I have tried on several previous occasions to thank Ray, Hank, Rod B and Chris for taking the trouble to reply to my previous posts (#443 and #477). Unfortunately they all disappeared when I clicked the post command. I hope I’ll be more fortunate this time.

    I think I now know the following:
    1) There is no current satellite orbiting in space capable of measuring changing brightness temperature in the 15 micron CO2 absorption band. (Needs yet-to-be-launched DSCOVR, re Ray)
    2) A photon cannot be emitted from a CO2 molecule that is itself incapable of being absorbed by a second CO2 molecule.
    3) SO2, as a gas, absorbs in the UV range but, in fact, acts as an aerosol (in droplet form) in the atmosphere and reflects solar energy but absorbs and emits nothing.
    4) The atmosphere is currently saturated in the 13.5 – 17 micron part of the CO2 absorption band. If I interpret Ray Pierrehumbert correctly (his M and M candy analogy and his CO2 columns), no free photons in the 13.5-17 range can pass through the level of saturation to the high dry atmosphere.
    5) I had assumed, therefore, that all radiation in this upper part of the atmosphere must have been created in situ by non radiative convected energy reacting with CO2. I think Ray Ladbury suggested, to the extent that I understood him correctly, that it could also be transported to this region by convection while in the grip of excited CO2 molecules. (Not sure of the relative time scales of energy transmission by convection and radiation in a ghg containing atmosphere).

    I would still like to know how energy reaching the high dry atmosphere could get back to the surface if there were an underlying layer of saturation which would presumably block access to downwelling radiation in exactly the same way as upwelling radiation from the surface is blocked. Can you have reverse convection? I thought that heat energy rose.

    What is meant by top of atmosphere? I am shrewd enough to appreciate that it is not a cellophane wrapper that divides atmosphere from space (but not that much more shrewd). Why doesn’t the atmosphere just keep expanding as extra energy is uplifted and ultimately dissipated by molecular separation by pressure drop?

    Can someone please discuss saturation with respect to water vapour? One can only assume a positive feedback with water vapour in the absence of pre-existing saturation. (I know that it won’t be saturated over dry areas but models, I think, suggest that warming won’t increase water vapour over such areas.) To what extent will a CO2/water vapour combination in the CO2 wings cause saturation equivalent to that caused by CO2 alone in the centre of its absorption band?

    Presumably, increasing ghgs will reduce the height from the ground at which saturation will occur. Why is it not more plausible to suppose that this trapping of energy nearer the surface will have more warming effect than anything happening miles up in the atmosphere?


    [Response: Sorry your previous posts disappeared. I have found that things sometimes (maybe always) disappear when entered into the comment box you get on the pop-up window, as opposed to comments accessed at the end of the article itself. Might be browser dependent. --raypierre]

    Comment by Douglas Wise — 25 Jan 2008 @ 12:22 PM

  531. [Re: # 528]

    But NOAA CMDL data shows that CO2 has been increasing in a linear fashion since measurements began in the late 1950s.

    Comment by pat n — 25 Jan 2008 @ 12:26 PM

  532. Douglas Wise,
    OK, we’re close. Sorry if what I wrote gave the impression that excited CO2 molecules transported via convection were a significant source of energy. Rather, it is thermal energy that is so transported. But thermal energy (e.g. molecular collisions) can excite CO2 in the upper atmosphere–both there and down in the lower troposphere.
    WRT saturation, be careful. Yes, a photon in that wavelength range radiated from the surface probably would not escape. However, as more CO2 is added, you get more absorption in the wings of that absorption line. Look at it this way: the depth of the absorption spectrum can be viewed as a measure of the probability that a single molecule of CO2 will absorb a photon in that range. As you move out into the wings, the probability of absorption of a single photon decreases a lot, but if you add enough CO2, you’re still likely to absorb the photon even out in the wings.
    Likewise, as you move higher in the troposphere, temperature decreases, and the thermal energy (of order ~kT) is well under the energy needed to excite the vibrational mode of CO2. However, the molecules follow a Maxwell distribution~exp(-E/kT), so there will be some that are sufficiently energetic to knock a CO2 molecule into its excited state. It can emit an IR photon as it decays. In the absence of CO2 above, that photon would escape. Add more CO2 and it will most likely be trapped. It all comes down to probabilities, but add the probabilities together and you come up with a certainty–you trap more energy.

    Energy transport to the surface from the upper troposphere can occur via convection (again, thermal energy) or backradiation. Yes, a CO2 photon will likely be absorbed on the way down, so maybe that heats the atmosphere. But a warmer atmosphere will have more thermally excited CO2 molecules, and so more IR photons, and so on. The main thing to realize is that if the energy doesn’t escape, it has to warm the climate. It may take several steps to do so, but if it doesn’t escape, those steps will happen.
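    A toy Beer–Lambert calculation makes the wings argument above concrete; the optical depths below are illustrative numbers assumed for the sketch, not measured CO2 line data:

```python
import math

def transmission(tau):
    """Beer-Lambert law: fraction of photons surviving optical depth tau."""
    return math.exp(-tau)

# Assumed illustrative optical depths: the band centre is strongly
# saturated, while a far wing of the line is optically thin.
tau_centre, tau_wing = 50.0, 0.1

for scale in (1, 2):  # doubling the CO2 column doubles every optical depth
    print(scale, transmission(scale * tau_centre), transmission(scale * tau_wing))
```

    Doubling the column leaves the saturated centre essentially opaque either way, but wing transmission drops from about 90% to about 82% — the additional absorption all happens out in the wings, which is the point Ray is making.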

    Comment by Ray Ladbury — 25 Jan 2008 @ 1:17 PM

  533. Re: #529 (Kevin Stanley)

    An excellent comment, and I agree. I would add that scientists need to recognize that we’re not always the most persuasive (or charismatic) public speakers.

    Perhaps it would be a good idea to identify those individuals with exceptional charisma and persuasive speaking skills to carry the banner of public communication. This is especially important in debate; the “winner” (meaning, the one who is most persuasive) isn’t necessarily the one who is correct, or presents arguments most logically and correctly; all too often it’s the one the listeners like, and therefore want to believe.

    I’m reminded of the 2004 vice-presidential debate between Dick Cheney and John Edwards. I’m no fan of Cheney (or Bush), and I was hoping for Edwards to “clean his clock.” But Cheney was better-prepared with logical (sometimes false, but logical) arguments and facts (sometimes distorted). I concluded, after watching the debate, that Cheney had won going away (much to my chagrin).

    But poll results the following morning showed how wrong I was. The general public viewed Edwards as the winner by a nontrivial margin. I believe this is because Edwards is likeable and good-looking, while Cheney is neither. The “winning” of the debate had nothing to do with fact (distorted or not) or logic (false or not) — it had to do with charisma.

    So, all you RC guys — who in the climate science community has the most charisma, and the most persuasive public-speaking skills? I want someone who looks like Brad Pitt and talks like JFK. Maybe we could persuade Johnny Depp to finish his Ph.D. and join the team?

    Comment by tamino — 25 Jan 2008 @ 2:15 PM

  534. Douglas Wise posts:

    [[I would still like to know how energy reaching the high dry atmosphere could get back to the surface if there were an underlying layer of saturation which would presumably block access to downwelling radiation in exactly the same way as upwelling radiation from the surface is blocked.]]

    If the upper levels are absorbing more radiation from lower levels, due to there being more CO2 in the said higher levels, they will warm up. And radiate. The radiation doesn’t have to get all the way to the ground. It just has to fall on the level under it, which then heats up a bit, and heats the level under that, etc. You can warm the ground a little bit even by warming the very topmost layer. Let me know if you want to see the math.

    [[ Can you have reverse convection? I thought that heat energy rose.]]

    Convective heat transfer does rise (though there is also such a thing as sideways “advection”). Radiation can travel in any direction, however.
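    The math Barton offers to show can be sketched with the standard grey N-layer toy model (a textbook idealization, not necessarily the calculation he has in mind): each perfectly absorbing layer added on top raises the surface temperature, even though no individual photon needs to travel from the top layer to the ground.

```python
def surface_temp(n_layers, t_eff=255.0):
    """Grey N-layer greenhouse toy model: T_s = T_eff * (N + 1)**0.25.

    t_eff is the planet's effective radiating temperature (about 255 K
    for Earth); each opaque layer radiates half its energy downward,
    warming the layer beneath it, and so on down to the surface.
    """
    return t_eff * (n_layers + 1) ** 0.25

for n in range(3):  # 0, 1, 2 absorbing layers
    print(n, round(surface_temp(n), 1))
```

    With one layer the surface sits near 303 K rather than 255 K, and each further layer adds more warming — the energy reaches the ground by warming intermediate layers, exactly as described above.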

    Comment by Barton Paul Levenson — 25 Jan 2008 @ 2:27 PM

  535. >528, 531
    Pat, the rate of increase hasn’t been linear, and your cited page doesn’t support that belief. See also

    Half the fossil fuel burned was burned up to about 1970; the second half through about 2000; we’re on the ‘third half’ now.

    See also

    Comment by Hank Roberts — 25 Jan 2008 @ 3:07 PM

  536. Re #532 Ray Ladbury

    “Likewise, as you move higher in the troposphere, temperature decreases, and the thermal energy (of order ~kT) is well under the energy needed to excite the vibrational mode of CO2. However, the molecules follow a Maxwell distribution~exp(-E/kT), so there will be some that are sufficiently energetic to knock a CO2 molecule into its excited state. It can emit an IR photon as it decays. In the absence of CO2 above, that photon would escape. Add more CO2 and it will most likely be trapped. It all comes down to probabilities, but add the probabilities together and you come up with a certainty–you trap more energy.”

    Please will you explain what happens to the energy which is trapped?

    Comment by AEBanner — 25 Jan 2008 @ 6:17 PM

  537. Mr. Banner, the answer hasn’t changed from the many other times you’ve asked the same question — a greenhouse gas molecule in the atmosphere can transfer energy by collision to oxygen and nitrogen molecules.

    Comment by Hank Roberts — 25 Jan 2008 @ 7:29 PM

  538. Mr. Banner, c’mon, you’ve posted your disbelief — a detailed statement of the physics you don’t believe in — over at CA repeatedly. Most recently p=2572#comment-193276.

    Belief isn’t something we can help with; physics, you need to either understand, or trust someone’s answer for. You won’t get a different answer by asking the same question over and over, seems to me.

    Comment by Hank Roberts — 25 Jan 2008 @ 8:30 PM

  539. Re #537 and #538 Hank Roberts

    Mr Roberts, thank you for responding to my post #536, but I’m still hoping for a reply from Ray Ladbury. I get a sense that you do not like the message I’m trying to send. Yes, I did try to get it across before in a different thread, but the thread was unfortunately cut before I received a reply which I found convincing.

    I have said already that although a sceptic about the enhanced GHG effect, nevertheless I am fully prepared to be convinced by a properly reasoned explanation based on physics. So it is not a case of “belief” as far as I am concerned, but what I have come to understand from my own consideration of the physics involved. I do, indeed, understand the physics and this is why I am not prepared to trust someone’s answer when it clearly is either wrong or, more likely, incomplete. Perhaps you, yourself, should consider whether you are simply “believing” something because you want to. If not, then I think it would be appropriate to give full consideration to an alternative, or additional, explanation based only on the physics.

    It may be that there is some other factor which is crucial to the way the GHG effect works, but which has not been mentioned so far. In such a case, I should be very keen to learn about it.

    With reference to your #537, I quote
    “a greenhouse gas molecule in the atmosphere can transfer energy by collision to oxygen and nitrogen molecules.”

    Yes, this was the sort of reply I was expecting. So it is agreed that when the CO2 molecule which had absorbed a photon next decays by collision, its excited internal energy is transferred to the atmosphere. But this is where the energy came from in the first place! So nothing has changed. The temperature of the atmosphere has not been changed as a result of this process. It is just as if nothing had happened.

    Comment by AEBanner — 26 Jan 2008 @ 8:36 AM

  540. Re #539
    “With reference to your #537, I quote
    “a greenhouse gas molecule in the atmosphere can transfer energy by collision to oxygen and nitrogen molecules.”

    Yes, this was the sort of reply I was expecting. So it is agreed that when the CO2 molecule which had absorbed a photon next decays by collision, its excited internal energy is transferred to the atmosphere. But this is where the energy came from in the first place! So nothing has changed. The temperature of the atmosphere has not been changed as a result of this process. It is just as if nothing had happened.”

    No, the photon originally emitted by the surface is absorbed by a CO2 molecule, which then transfers the energy to surrounding N2 and O2 molecules; approximately 8 kJ/mole of CO2 is thereby added to the atmosphere. However, any excess energy in N2 and O2 molecules cannot be radiated into space; the only way this can occur is if sufficient energy is collisionally transferred back to a CO2 (or other GHG) molecule sufficiently high in the atmosphere to radiate into space. According to the Boltzmann distribution, only about 1% of CO2 molecules will be in that state at the top of the troposphere.
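    The “about 1%” figure can be sanity-checked with a Boltzmann factor for the 667 cm-1 CO2 bending mode; this two-level sketch ignores degeneracy and the other vibrational modes, so it is order-of-magnitude only:

```python
import math

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e10    # speed of light in cm/s, to pair with cm^-1
k = 1.380649e-23     # Boltzmann constant, J/K

nu = 667.0           # CO2 bending-mode wavenumber, cm^-1
T = 220.0            # rough tropopause temperature, K (assumed)

E = h * c * nu                 # energy of one vibrational quantum
frac = math.exp(-E / (k * T))  # Boltzmann factor exp(-E/kT)
print(f"excited fraction at {T} K: {frac:.3f}")  # about 0.013
```

    A fraction of order 1% of CO2 molecules thermally excited at tropopause temperatures is consistent with the figure quoted above.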

    Comment by Phil. Felton — 26 Jan 2008 @ 9:24 AM

  541. AEBanner: So it is agreed that when the CO2 molecule which had absorbed a photon next decays by collision, its excited internal energy is transferred to the atmosphere. But this is where the energy came from in the first place! So nothing has changed. The temperature of the atmosphere has not been changed as a result of this process. It is just as if nothing had happened.

    I think your focus is too narrow. You have just agreed that the absorption of a photon can heat a gas. Now step back and compare an atmosphere with no CO2 with our current atmosphere. The IR from the surface escapes directly to space; no effect on the atmosphere. But in our atmosphere, IR photons get captured and the energy heats the gas.

    So how can CO2 not be heating our atmosphere?

    Not all the IR came from the atmosphere.

    Comment by Tim McDermott — 26 Jan 2008 @ 9:39 AM

  542. AEBanner, well if energy is not escaping the system–and IR radiation is the only way it can escape–then it must go into heating the system. The primary mechanisms are via collisional relaxation and backradiation. The relative importance of these two mechanisms depends on density and temperature, of course, and therefore on altitude.
    You say: “Yes, this was the sort of reply I was expecting. So it is agreed that when the CO2 molecule which had absorbed a photon next decays by collision, its excited internal energy is transferred to the atmosphere. But this is where the energy came from in the first place! So nothing has changed. The temperature of the atmosphere has not been changed as a result of this process. It is just as if nothing had happened.”

    I must say, this is a rather myopic view. If the CO2 molecule had not been present, what would have been the fate of the photon? Escape to space, no? So, something most certainly has happened. Energy that would have left the system is transformed to thermal energy, which cannot escape the system. Meanwhile, more shortwave radiation is coming from the sun. Energy in is roughly constant; energy out has decreased. And when that happens, what does the system do?

    Also I must apologize to Douglas Wise. When I referred to convective heat transport, I was speaking rather liberally and including the thermal energy transported when the air descends to the ground again. If it is less cool because it has lost energy, it will cool the ground a bit less. My use of the term was a bit too loose, and Barton graciously corrected my loose language.

    Comment by Ray Ladbury — 26 Jan 2008 @ 10:37 AM

  543. AEBanner, as a fellow skeptic maybe I can shed some light re #539. What you describe in the last paragraph is true if the CO2 molecule excited its vibration energy from a collision with another gas molecule (of any kind). That collision should lower atmospheric temperature. Then when the CO2 relaxes by another collision and passes the energy to another gas molecule, the temperature will go back to where it was.

    But this does not cover the excitation of CO2 vibration energy by infrared radiation, which is the predominant way CO2 gets excited (vibrationally). This does provide a net heating of the atmosphere when the CO2 collides and loses that energy.

    Though the question remains on the cooling of the earth surface as it radiates (which ought to be a little greater than the subsequent atmospheric heating) and how that gets mitigated/reversed with back radiation, which is just a little less than the outgoing surface radiation. This gets a little complicated, in my view, and I have questions about it. But this is different, I think, from what you are asking.

    Comment by Rod B — 26 Jan 2008 @ 10:47 AM

  544. Re AEBanner @ 539: “So it is agreed that when the CO2 molecule which had absorbed a photon next decays by collision, its excited internal energy is transferred to the atmosphere. But this is where the energy came from in the first place! So nothing has changed. The temperature of the atmosphere has not been changed as a result of this process. It is just as if nothing had happened.”

    Not quite nothing. What’s happened is that energy has been kept in the atmosphere rather than emitted to space. And since more energy is continuously being added to the atmosphere from the surface, the atmosphere must therefore warm. And it will continue to warm until the energy of the photons making it to space equals the energy being added to the atmosphere from the surface.

    It’s not hard to understand. Really.

    Comment by Jim Eager — 26 Jan 2008 @ 11:15 AM

  545. Mr. Banner, have you counted the various threads where you’re asking this same question? You won’t get a logical proof without math.

    That’s not a complete list, just a few among many.
    You’re getting lots of attention but not enough help.

    [Response: I think this subject and Mr Banner's concerns have been more than adequately dealt with. No more on this please (that's to everyone of course). - gavin ]

    Comment by Hank Roberts — 26 Jan 2008 @ 12:08 PM

  546. Hank, I understand that the rate increase in CO2 has not been linear. Thanks for clearly pointing that out.

    My comment in 531 was in ref to 528 and 501. I was explaining that while surface temperatures began rising in the late 1970s (even though sulfur dioxide emissions did not decline) BPL was wrong in his explanation (“Because CO2 was high enough then to have a larger effect.”). I was merely trying to say that there was no abrupt increase in CO2 in the mid-1970s to explain the abrupt increase in the rate of increase in global temperature in the late 1970s. I mistakenly used the words linear fashion in describing CO2 increase, which skeptics have used erroneously many times in the past in describing the rate of CO2 increase.

    Comment by pat n — 26 Jan 2008 @ 2:24 PM

  547. pat n (#546) wrote:

    Hank, I understand that the rate increase in CO2 has not been linear. Thanks for clearly pointing that out.

    My comment in 531 was in ref to 528 and 501. I was explaining that while surface temperatures began rising in the late 1970s (even though sulfur dioxide emissions did not decline) BPL was wrong in his explanation (”Because CO2 was high enough then to have a larger effect.”). I was merely trying to say that there was no abrupt increase in CO2 in the mid-1970s to explain the abrupt increase in the rate of increase in global temperature in the late 1970s. I mistakenly used the words linear fashion in describing CO2 increase, which skeptics have used erroneously many times in the past in describing the rate of CO2 increase.

    Actually CO2 levels increased quite substantially during the 1970s relative to the previous decades. I believe the following chart will give you some idea as to how it compared to previous decades, as well as how well-correlated temperature has been with CO2 concentration since 1880.

    Please see:

    Global Average Temperature and Carbon Dioxide Concentrations, 1880-2006

    … from:

    Scientific Evidence
    Increasing Temperatures & Greenhouse Gases: Woods Hole Research Center


    Of course numbers are better than charts and physical principles are better than mere correlations. We know as a matter of physics that, due to CO2′s absorption spectrum, forcing due to CO2 rises as the log of the concentration.

    Going to:

    Trends in Atmospheric Carbon Dioxide – Mauna Loa
    NOAA Earth System Research Laboratory: Global Monitoring Division

    … I was able to get the annual mean growth rate in carbon dioxide for 1959-2007, the CO2 concentration for 2003, calculate CO2 concentration for each year, then calculate the percent change in CO2-forcing from 1970 to 1980 as: 100*(Ln(CO2[1980]/280)/Ln(CO2[1970]/280)-1) where CO2[1980]=341.13 and CO2[1970]=328.54.

    The result?


    That one decade alone added nearly as much forcing due to carbon dioxide as a quarter of all of the previous decades since the beginning of the industrial revolution.
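    The percentage can be reproduced directly from the formula and the two CO2 values quoted above (280 ppm is the usual pre-industrial baseline):

```python
import math

co2_1970, co2_1980, baseline = 328.54, 341.13, 280.0

# Forcing scales with ln(C/C0); compare total CO2 forcing in 1980
# against total CO2 forcing in 1970, as a percent increase.
pct = 100 * (math.log(co2_1980 / baseline) / math.log(co2_1970 / baseline) - 1)
print(f"{pct:.1f}%")  # roughly 23.5%
```

    A result near 23.5% matches the “nearly a quarter” characterization above.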

    Comment by Timothy Chase — 26 Jan 2008 @ 6:28 PM

  548. Jim (544), you (among others) also ignore the cooling effect of radiation leaving the surface. Though maybe that just expands and confuses the question too much. Though I’m still curious.

    Comment by Rod B — 26 Jan 2008 @ 10:21 PM

  549. Re Rod @ 548: “you (among others) also ignore the cooling effect of radiation leaving the surface.”

    I don’t ignore it, Rod, but don’t forget that the surface is then rewarmed by solar insolation during the next daylight period. Energy continues to be added to the system.

    Comment by Jim Eager — 27 Jan 2008 @ 12:12 AM

  550. Re #534. Barton Paul Levenson, in answering an earlier question of mine, states that you can warm the ground a little even by warming the very topmost layer of the atmosphere. He offers a math based explanation which, I regret, would almost certainly be beyond my ability to understand. I would, however, like to ask him a few supplementaries.

    1) Given that we have ruled out “downward” convection, why doesn’t heat (energy) at the very topmost layer of the atmosphere just travel on up, thus causing atmospheric expansion? In other words, what constraints does the so-called top of the atmosphere represent?
    2) Barton says that heat at the very top will cause a small degree of warming at the bottom (ground). Surely, the closer the source of heat to the surface, the greater the warming would be? If such is the case, wouldn’t the lowering of altitude of the “almost” saturated layer, occasioned by greenhouse gas increases, be a much more important source of surface temperature increase than anything happening at the top of the atmosphere?

    I am sorry if I am being completely naive. I would also be very grateful if one of you out there could address the subject of radiative saturation with respect to water vapour in a manner similar to that which has been applied to CO2 in isolation.

    [Response: Douglas,I appreciate your desire to attain a better degree of understanding of the finer details of how the greenhouse effect works, but you're not going to get that by a dialog like this in the comments. If you just need the broad-brush explanation, ask yourself if you understand how adding insulation to your house allows you to maintain a higher temperature inside while burning the same amount of fuel in the furnace. If you understand that, you understand the greenhouse effect, since the greenhouse effect is just planetary insulation. The fact that the greenhouse effect works through its influence on radiative heat loss out the top of the atmosphere rather than convective heat loss from the outside of your house does not change the way the energy balance argument works. Now, the questions that you are asking about how CO2 affects infrared loss are answered at the broad-brush level in the "A Saturated Gassy Argument," and if that's not enough for you, there's simply no alternative to actually learning the math and physics, at least to the level of Chapter 3 and 4 of my book (draft still available online). Water vapor radiative saturation is discussed in Chapter 4. Water vapor, unlike CO2, actually does come close to being radiatively saturated in the tropics near the ground (and becomes fully radiatively saturated around 310 or 320 K). I emphasize that because of the "thinning and cooling" argument even a gas that's radiatively saturated through a considerable depth of the atmosphere gives you greenhouse warming. If Weart's verbal discussion of this isn't enough you just have to learn the greygas calculation or accept the authority of people that have. In the case of water vapor, an additional -- and more important -- factor is that water vapor concentration becomes low in the cold upper parts of the troposphere, because of Clausius Clapeyron. 
Generally speaking, I'm getting a little tired of these endlessly recurring drawn out exchanges about how the greenhouse effect works. Past a point, if the verbal explanations are unsatisfying, you just have to go read the textbooks or go take a course. --raypierre]

    Comment by Douglas Wise — 27 Jan 2008 @ 6:13 AM

  551. AEBanner writes:

    [[So it is agreed that when the CO2 molecule which had absorbed a photon next decays by collision, its excited internal energy is transferred to the atmosphere. But this is where the energy came from in the first place! So nothing has changed. The temperature of the atmosphere has not been changed as a result of this process. It is just as if nothing had happened.]]

    Your idea violates conservation of energy. If a molecule transfers energy to the atmosphere, the atmosphere heats up. Layers of atmosphere may transfer energy to one another, but the energy originally comes from the sun, and while energy is conserved, temperature is not.

    Comment by Barton Paul Levenson — 27 Jan 2008 @ 6:59 AM

  552. Rod, Of course radiation from the surface cools the surface, but that radiation depends mainly on surface temperature (ignoring emissivity for simplicity). It isn’t changed by adding more CO2. Adding CO2 merely decreases the probability of energy leaving the system (as outgoing IR, the only way it can), whether the IR is emitted from the ground or from excited CO2 in the atmosphere.

    Comment by Ray Ladbury — 27 Jan 2008 @ 8:00 AM

  553. Timothy, the link you provided in your comment (547) supports my earlier point that the 1920s-1940s was a warm bubble (367, 514).


    Comment by pat n — 27 Jan 2008 @ 11:22 AM

  554. Thank you , Raypierre, for commenting on my post #550. I will, indeed, read your book to which you made reference although whether I’ll be able to assimilate the maths is debatable.

    I did feel your somewhat irritable comment that “you were tired of these endlessly recurring drawn out exchanges about how the greenhouse effect works” was something of a put-down. Essentially, your advice boils down to the fact that I should either accept the establishment view because of the expertise of those proffering it or become an expert on the subject myself. To an extent, I can sympathise with your point of view. For you, the exchanges may be proving repetitive. For me, a newcomer to the subject, the only questions I have repeated are those which haven’t been answered or that have been answered in a manner that I regard as not totally satisfactory. I can assure you that I read very extensively on this website before seeking to ask any questions. Essentially, the only area of your Archive that caused me to have any doubts related to the two parts of the Saturation Argument, which I have read over and over again and remain somewhat unhappy with. I believe that answers to the questions I have posted would probably remove my residual doubts. Perhaps reading your book will achieve the same objective and will save you or others time and trouble.

    Comment by Douglas Wise — 27 Jan 2008 @ 3:28 PM

  555. > if the verbal explanations are unsatisfying, you just have to
    > go read the textbooks or go take a course. –raypierre]

    Nominating Ray’s comments along this line, in this and other threads: recommend compiling under the “Start Here” button, to point people to.

    Including a “this much math is needed” prerequisite explanation.

    Comment by Hank Roberts — 27 Jan 2008 @ 9:04 PM

  556. Response to ‘response’ to 284: “Solar reconstructions back to AD 1610 are based on sunspot data, not the cosmogenic isotopes”

    But see
    (TSI Reconstruction 1428-2005, Santa Fe, SORCE 2008)

    Abstract: We have used a recent reconstruction of the long-term solar wind magnetic field based on cosmic ray data [McCracken, 2007; McCracken and Beer, 2007] to infer the secular variation in TSI, based on a sensitivity of 0.5 W/m2 per nT. The TSI inferred in this manner increases by ~3 W/m2 (~0.22%) from 1428-2005. Even more remarkably, the TSI reconstruction based on McCracken’s deduced HMF strength increased by nearly 3 W/m2 in the first half of the 20th Century, a result that finds no support in other recent TSI reconstructions, which show at most a ~0.5 W/m2 increase over this period. This result casts further doubt on the McCracken [2007] solar wind magnetic field reconstruction, which is at strong variance with other recent HMF reconstructions. Alternatively, the sensitivity of TSI to HMF could be much smaller than the 0.5 W/m2 per nT suggested by Fröhlich [2008].

    Comment by Leif Svalgaard — 27 Jan 2008 @ 9:38 PM

  557. Douglas Wise,
    Perhaps if you told us what your background in math and physics was, we (or more likely Ray Pierrehumbert) could suggest an appropriate reference. I personally found Ray’s book quite helpful, but I do have a physics background. Basically, what it comes down to is that adding greenhouse gases decreases the probability that an IR photon emitted from below will escape. Fewer photons escaping means rising energy and temperature. And if you have energy in a particular mode or place, it always finds a way to distribute itself throughout the rest of the system (position and degrees of freedom). I’d be happy to try to answer questions off-line. My email is not hard to find and I do hereby give Ray my permission to forward it to you in the interest of not hijacking discussions.

    Comment by Ray Ladbury — 28 Jan 2008 @ 8:38 AM

  558. I do have a question regarding the observed lack of trend in the decrease of diurnal temperature range (DTR). As I understand it, this decrease is caused mainly by increased cloud cover (rather than some night-time effect of the greenhouse gases themselves?). UHI affects DTR, as does aviation. However, many feedback papers which propose a positive cloud feedback predict that low clouds, which control albedo more than any other kind, should decrease. My questions are: 1) how exactly do increasing GHGs reduce the DTR, 2) what is going on recently, and 3) are trends of increased cloud cover responsible for a decline in DTR consistent with models which show declining low-level cloud cover in the future as a response to warming?


    Comment by Chris Colose — 28 Jan 2008 @ 12:21 PM

  559. I was thinking about how to graphically illustrate the way signal/noise relationships mesh with “common-sense” perceptions, and I came up with a crude little excel sheet and graph. What it does is to simulate daily temperatures over many years with a cyclic variation overlain by a long-term upward trend similar to that in IPCC forecasts.

    The way it works is to assume a cyclic variation of some dependent variable over several thousand time periods (if one makes the length of each cycle about 365, then one could interpret this as daily temperatures over 30 years in, say, the northern or southern temperate zone). Then I introduce a long-term increase factor that you can tune (one could perhaps interpret this as the high, low, and middle IPCC projections for global temperature changes over the next century) and “pro-rate” that increase into an incremental amount for each smaller discrete time period (or “day”). So now we have a smooth cycle gradually tending upwards. Then, I introduce some random variation factor for any given period’s temperature (within a certain tuneable range above or below any “day’s” underlying cycle value).

    We graph that, and we get something that looks pretty “noisy”, and that even looks like some of the lowest lows are toward the end of the period and some of the highest highs are toward the beginning, but that demonstrably tends upwards over time.

    I know it’s pretty crude, but it does make the point that eyeballing a bunch of values that have a steady trend won’t always show that the trend is obvious. I think it would be very hard to spot any obvious trend by just looking at the graph.

    I am just a computer programmer without any formal background in statistics. What I’d like to know from someone with more formal training in statistics, is whether or not this is really a valid way to illustrate signal/noise problems to people.

    Comment by Howard Hawhee — 28 Jan 2008 @ 1:06 PM
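
    The simulation described in #559 can be sketched in a few lines of Python. The cycle length, trend size, and noise level below are illustrative guesses, not the spreadsheet’s actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)

days = 365 * 30                              # thirty "years" of daily values
t = np.arange(days)

cycle = 10.0 * np.sin(2 * np.pi * t / 365)   # seasonal cycle, +/- 10 units
trend = 3.0 * t / days                       # 3 units of rise over the whole span
noise = rng.normal(0, 4.0, days)             # random day-to-day variation

temps = cycle + trend + noise

# Despite the noise, a least-squares fit recovers the built-in trend:
slope_per_day = np.polyfit(t, temps, 1)[0]
print(round(slope_per_day * days, 2))        # close to the 3.0 built in
```

    With these numbers the day-to-day noise dwarfs the per-day increment, yet the fit still recovers the trend we built in, which is the point of the exercise: the eye can’t see it, but the statistics can.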

  560. Howard,

    I think you’re right, but the converse is also true. Random sequences can look like they’re showing a trend.

    Comment by Fair weather cyclist — 28 Jan 2008 @ 1:46 PM

  561. Re: #559 (Howard Hawhee)

    It seems to me that your approach will correctly make the points, namely that when the signal-to-noise ratio is low: 1) even with steady increase we can’t expect a new record every year, or even every decade; and 2) the eye can fool ya.

    Your approach sounds similar to the one I employed in this post.

    Comment by tamino — 28 Jan 2008 @ 2:28 PM

  562. tamino (#561), thanks for pointing me to your post — very nice. My intent is somewhat different, though very close to yours — I build an increase into artificial data and then (also artificially) give it some random short-term variance. In other words, we know beyond any doubt that there is a real upward trend in the “temperature” of my fake model — yet it is still very hard to make out by any “common-sense” approaches.

    Also, Fair weather cyclist (#560) is of course right that random variation can look like it shows a trend, but the longer the time series, the less likely that random variation will look like a trend. One could use my spreadsheet model to show this too: set the upward trend parameter to 0 so that there is only random variation. The trend extracted from the series will diminish as time goes on.

    Comment by Howard Hawhee — 28 Jan 2008 @ 3:48 PM
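
    Howard’s claim about pure noise in #562 can be checked the same way. This sketch, with assumed noise parameters of my own, fits a slope to many trendless random series of two lengths and compares the typical spurious trend:

```python
import numpy as np

rng = np.random.default_rng(1)

def typical_slope(n, trials=200):
    # RMS of slopes fitted to pure-noise series of length n (no real trend).
    slopes = [np.polyfit(np.arange(n), rng.normal(0, 1.0, n), 1)[0]
              for _ in range(trials)]
    return float(np.sqrt(np.mean(np.square(slopes))))

short, long_ = typical_slope(100), typical_slope(1000)
print(short > 10 * long_)   # spurious trends shrink fast as the series lengthens
```

    For white noise the typical fitted slope falls off roughly as n^(-3/2), so a tenfold longer series shows a spurious trend some thirty times smaller.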

    oh, sorry tamino (561), I skimmed your post quickly & at first didn’t realise that you were also using randomly generated data – it looked so real!

    Comment by Howard Hawhee — 28 Jan 2008 @ 3:50 PM

  564. Re: 563 (Howard Hawhee)

    Yes, it does look real! You’re not the first to mistake it for actual temperature data. In fact, one reader objected strongly to my stating that “we know without doubt that the trend is increasing,” insisting that we know no such thing — so I had to reiterate that it’s artificially generated data with a built-in non-stop upward trend. And as the post mentions, I didn’t have to work hard to make this happen. The very first artificial series I created with a realistic trend and noise level is the one featured in the post.

    Comment by tamino — 28 Jan 2008 @ 4:26 PM

  565. Chris Colose #558,

    Re the evolution of DTR changes with time. I am just an amateur, but here goes…

    Have you read Martin Wild “Impact of global dimming and brightening on global warming”?
    See sect 3.3. Also this conference report may be of use:

    1) Consider the energy fluxes in the day and night. In terms of the 24 hour temperature changes (which lead to daytime max/night time minimum): During the day incoming shortwave radiation (solar) dominates, at night it’s outgoing longwave. Increase trapping of outgoing LW and you should increase temperature. If incoming (surface incident) SW remains steady as you increase the “greenhouse” trapping of longwave then the daytime trend should be a bit less than the night – hence a reduction of diurnal range which is:
    DTR = (Daytime Max Temp) – (Night Min Temp). I think evaporation/evapotranspiration acts as a brake on the daytime maximum more than on the night minimum. If the daytime surface insolation has been reduced by “dimming”, then as the dimming abates the daytime warming takes off and “catches up” with the night-time warming trend, so reducing the decrease of DTR.

    2) In the above paper Wild et al note they’ll be doing a regionally detailed study, but I’ve not tracked that down yet as I’m otherwise engaged. Googling BSRN as suggested below may help with up to date data, but I never work off datasets, I only use primary literature (I don’t know enough to trust any results I may come up with using raw data).

    3) Levelling in DTR seems likely to be down to Sulphate Aerosols, not the same as warming driven cloud cover variance. I think!

    Try googling: Baseline Surface Radiation Network BSRN. If you need more ask and I’ll have a dig around. I think I have more on this, but am moving onto other matters so my climate change stuff is mainly on CDs.

    Comment by CobblyWorlds — 28 Jan 2008 @ 4:47 PM

  566. Re Ray Ladbury #557.

    It is extremely kind of you to offer to help sort my muddled thoughts in a private e-mail exchange. I realise that my questions are somewhat off topic on this thread and am particularly grateful to you. I have attempted to communicate with you directly but could only find an obsolete e-mail address on Google, such that my message was returned undelivered. I thought we could get the ball rolling by my sending you my e-mail address. This is:

    Comment by Douglas Wise — 29 Jan 2008 @ 11:03 AM

  567. howard, tamino and cyclist,

    here is an old paper you might enjoy, on internal climate variability.


    Comment by stevenmosher — 30 Jan 2008 @ 9:35 AM

  568. Re #550

    There is indeed no such thing as downwards convection, but upward convection depends on the difference in temperature between the surface and higher up. If the top of the atmosphere gets warmer this will reduce convection and this can also warm the surface.

    Comment by Wim Benthem — 30 Jan 2008 @ 6:44 PM

  569. Hi, excuse me if this is off topic, but can anyone offer their thoughts on the following-

    1. Is James Lovelock right when he says that we are living in a ‘Fool’s Climate’ with aerosols currently masking about 3°C of warming?

    2. If so, do we now need to get started on geo-engineering if we’re to have any chance of stabilising the climate?

    If this isn’t a good place to ask this question, can anyone suggest where I could get some informed debate on this?

    Comment by Ru Kenyon — 31 Jan 2008 @ 8:39 AM

  570. #569 Ru Kenyon,

    Here’s a RealClimate commentary on Lovelock:
    Also if you look at the top right of this page you’ll find some categorised links, under “Climate Science” click on “Aerosols” and you’ll find more there.

    I don’t remember Lovelock saying 3degC of warming is masked, but from my reading that figure is much bigger than seems reasonable. Above I referenced a paper by Martin Wild (et al); they “estimate that, over the past decades, the greenhouse forcing alone has enhanced land surface temperatures by certainly more than 0.2degC per decade, but unlikely much more than 0.38degC per decade.” The observed change in global average temperature has been 0.2degC/decade for the last 3 decades, so they’re saying this would be higher but for “dimming” due to aerosols. That’s from only one paper, but it’s not a bad ball-park figure from what I’ve read.

    With regard to geo-engineering I agree with Gavin Schmidt as posted here:
    QUOTE Think of the climate as a small boat on a rather choppy ocean. Under normal circumstances the boat will rock to and fro, and there is a finite risk that the boat could be overturned by a rogue wave. But now one of the passengers has decided to stand up and is deliberately rocking the boat ever more violently. Someone suggests that this is likely to increase the chances of the boat capsizing. Another passenger then proposes that with his knowledge of chaotic dynamics he can counterbalance the first passenger and indeed, counter the natural rocking caused by the waves. But to do so he needs a huge array of sensors and enormous computational resources to be ready to react efficiently but still wouldn’t be able to guarantee absolute stability, and indeed, since the system is untested it might make things worse. ENDQUOTE

    So I agree with those who don’t think we know enough to attempt geo-engineering.
    Sorry, but for myself I’d say we have to shape up, sort ourselves out, and massively reduce emissions.

    Comment by Cobblyworlds — 1 Feb 2008 @ 7:54 AM

  571. #570 correction.
    I missed off the link to the RC article from which I quoted, see here:

    Comment by CobblyWorlds — 1 Feb 2008 @ 3:40 PM

  572. Ru Kenyon (569) — The IPCC AR4 report (linked on the sidebar) references another IPCC report section in which the forcings for each of the causes is estimated. This should be of some help for your question.

    As for geo-engineering, I know a completely safe method: use hydrothermal carbonization to produce biocoal from biomass. Then bury the biocoal in abandoned mines or carbon landfills.
    Since nature has managed to keep most fossil coal out of the biosphere for millions of years, we can be sure this will work. I estimate the cost to be about US $100 per tonne. Removing about 350 billion tonnes of carbon from the active carbon cycle ought to be enough, at least for this century, assuming the additional 8.5 billion tonnes added yearly is either forgone or else this much is yearly removed, in addition, via biocoal sequestration.

    Comment by David B. Benson — 2 Feb 2008 @ 3:18 PM
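
    The cost arithmetic in #572 works out as follows, using the commenter’s own figures of US $100 per tonne and 350 billion tonnes:

```python
cost_per_tonne = 100        # US$ per tonne of carbon (the commenter's estimate)
tonnes = 350e9              # tonnes of carbon to remove from the active cycle

total = cost_per_tonne * tonnes
print(f"${total / 1e12:.0f} trillion")   # $35 trillion
```

    That is, the proposal as stated would cost on the order of $35 trillion in total, spread over however many decades the removal takes.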

  573. I wanted to make a comment pertinent to models and this (11-Jan) posting is the closest I could find. With 571 comments already down, I’m not optimistic this will be seen, but …

    Pete Best made some comments about IPCC scenario assumptions re. future quantities of FF – primarily oil and gas, but some new estimates on coal are pertinent also. I think he has some valid points, that FF reserve value estimates used in most IPCC scenarios are unrealistic and unlikely. Kjell Aleklett and Jean Laherrere have commented on this, which you may be familiar with.

    First question is: from what sources do IPCC model scenarios get their FF burn rates (oil, gas, coal)? I know the modelers are not likely to be petroleum geologists, so they presumably go to whoever seems to be the authority in that realm – EIA? IEA? USGS?

    I think James Hansen has recently investigated scenarios with reduced FF consumption rates but I can’t cite the specific news item that reported that – you may be able to confirm that, or show it didn’t happen after all.

    Second question: this is a big enough topic – the intersection of Climate Change and oil/gas depletion – that it probably deserves a thorough and open dialog of its own, probably face-to-face for a couple of days, so that we could have a chance to see each other and realize that the rational science-based parties on both sides are not doomster wackos and they have something to learn from each other. We (ASPO-USA) have our national meeting in Sacramento in September – perhaps we could convene a dialog on the side to explore this in an open-minded way.

    I am fully aware that some will jump to the conclusion “ASPO says oil and gas depletion will prevent global warming!” and similar stupid responses. We want to be clear that nobody associated with us is making that claim; but we do want to ensure that your models have the best available data, and then let the science and the models do what they’re supposed to do – give us the most accurate possible projections of future climate, to inform the public and policy decisions.

    We believe the public and policy-makers must be well-informed on both Peak Oil and Climate Change issues; there is a huge overlapping (common) solution space and we must avoid being blindsided by (for example) a rush toward Coal-to-Liquid technology, as a consequence of oil becoming unaffordable to most, or unavailable.

    You can email me directly at dlawrence (at) aspo-usa (dot) com if you want to communicate offline.

    Dick Lawrence

    Comment by Dick Lawrence — 2 Feb 2008 @ 8:24 PM

  574. #573, Dick Lawrence.

    You are correct about the Hansen paper…
    Kharecha & Hansen, “Implications of “peak oil” for atmospheric CO2 and climate”. Abstract & downloadable pdf available from NASA GISS. You may want to first jump to figure 1.

    You say the “intersection of Climate Change and oil/gas depletion”.

    I’d suggest it’s a three factor issue, Climate Change, Oil/Gas Depletion, Population Pressures. I specify population because it’s a key factor in direct impacts on the land biosphere that demonstrably cause climatic changes, and due to feedbacks – such as energy costs and the regional eco-system implications of a switch to bio-fuels, or even just wood from the local forest becoming the cheapest way to heat and cook. Take any one of those alone and I am confident we’d muddle through, it’s all three together that (to me) raises the possibility that we may lose our “civilisation” by the 22nd century.

    What I mean by civilisation is the organisation and common cause that allows many different cultures to cooperate across the globe to achieve the “marvels” we do, and the technological and scientific progress that results. Something as mundane as a cell-phone is the “tip of the iceberg”, with a mass of supportive infrastructure behind its conception and final production. It was with the move to agriculture thousands of years ago that sufficient food was available to allow people to administer and study, rather than their lives being centred on food production. The three stressors I state above could threaten this. I am not suggesting a “Lovelock” style outcome (the few remaining breeding pairs etc etc), but I am suggesting that over this century the forward thrust of human and technical development may stutter and fail. When considering EROEI I fear this could be a permanent failing, unless we can crack fusion.

    As an aside (possibly skewed by a UK perspective): If as seems likely we are about to slip into a recession on a (more or less) global basis, this will make the next few years very interesting from the point of view of current efforts to reduce emissions. We should be able to see how determined we really are, and to what degree the steps taken so far (e.g. carbon credits) are a mere dalliance we can afford without costs we will seriously feel. How determined will we be to invest in renewables within a financially constrained situation, where fossil fuels are a cheaper option?

    Comment by CobblyWorlds — 3 Feb 2008 @ 5:03 AM

  575. Cobblyworlds, I’d add to your 3 factors a fourth that will be equally as important as, if not more important than, the other 3: economic development. India and China have reached takeoff. Several Asian nations are not far behind. Africa probably has a couple of decades yet. Once it happens, though, the competition for resources, especially energy, will only increase.

    Comment by Ray Ladbury — 3 Feb 2008 @ 1:12 PM

  576. Reply to #573: conventional oil has been said by some people who have worked with M. K. Hubbert to have peaked. Therefore oil is set to become more expensive whilst oil companies attempt to bring online alternative oil energy sources and convert coal, and maybe gas, to oil in the future. The very fact that tar sands and shale, as well as heavy oil and deep-sea oil, are already being looked into and produced means that oil is becoming scarce. Take a look at oil reserves: 300 billion barrels were made up as of the 1980s by OPEC in order to pump more oil under their quota system, and 200 billion barrels are heavy tar sands from Alberta, Canada.

    Oil is also used in producing gas and coal, hence these two energy sources likewise become more expensive and harder to extract as oil itself becomes more expensive to produce.

    The bottom line is that within a decade conventional oil is going to cost a lot more than it does now (as we will have consumed another 300 billion barrels of the stuff), and its scarcity will threaten many things. Therefore I reckon that AGW will be the least of our worries as nation states such as the USA drive to retain their status in the world.

    It could be very scary.

    Comment by pete best — 3 Feb 2008 @ 3:57 PM

  577. #575 Ray,

    I specifically didn’t include economic development as without the 3 factors I mentioned it would not cause such a problem. Although I accept that it underlies the others, and perhaps I am being too pedantic.

    As an aside, from the BBC:

    “An Epoch in the making…
    Writing in the house journal of the Geological Society of America, GSA Today, Britain’s leading stratigraphers (experts in marking geological time) say it is already possible to identify a host of geological indicators that will be recognisable millions of years into the future as marking the start of a new epoch – the Anthropocene. “

    Comment by Cobblyworlds — 4 Feb 2008 @ 6:54 AM

  578. “… To those peoples in the huts and villages across the globe struggling to break the bonds of mass misery, we pledge our best efforts to help them help themselves, for whatever period is required—not because the Communists may be doing it, not because we seek their votes, but because it is right. If a free society cannot help the many who are poor, it cannot save the few who are rich.

    To our sister republics south of our border, we offer a special pledge—to convert our good words into good deeds—in a new alliance for progress—to assist free men and free governments in casting off the chains of poverty. …

    To that world assembly of sovereign states, the United Nations, our last best hope in an age where the instruments of war have far outpaced the instruments of peace, we renew our pledge of support—to prevent it from becoming merely a forum for invective—to strengthen its shield of the new and the weak—and to enlarge the area in which its writ may run.

    Finally, to those nations who would make themselves our adversary, we offer not a pledge but a request: that both sides begin anew the quest for peace, before the dark powers of destruction unleashed by science engulf all humanity in planned or accidental self-destruction…

    Now the trumpet summons us again—not as a call to bear arms, though arms we need; not as a call to battle, though embattled we are—but a call to bear the burden of a long twilight struggle, year in and year out, “rejoicing in hope, patient in tribulation”—a struggle against the common enemies of man: tyranny, poverty, disease, and war itself.

    Can we forge against these enemies a grand and global alliance, North and South, East and West, that can assure a more fruitful life for all mankind? Will you join in that historic effort?

    In the long history of the world, only a few generations have been granted the role of defending freedom in its hour of maximum danger. I do not shrink from this responsibility — I welcome it. I do not believe that any of us would exchange places with any other people or any other generation. The energy, the faith, the devotion which we bring to this endeavor will light our country and all who serve it — and the glow from that fire can truly light the world.

    And so, my fellow Americans: ask not what your country can do for you—ask what you can do for your country.

    My fellow citizens of the world: ask not what America will do for you, but what together we can do for the freedom of man.

    Finally, whether you are citizens of America or citizens of the world, ask of us the same high standards of strength and sacrifice which we ask of you. With a good conscience our only sure reward, with history the final judge of our deeds, let us go forth to lead the land we love, asking His blessing and His help, but knowing that here on earth God’s work must truly be our own.

    Comment by Hank Roberts — 5 Feb 2008 @ 9:43 PM

  579. Re 578. Damn! Don’t make ’em like they used to, do they?

    Comment by Ray Ladbury — 6 Feb 2008 @ 8:20 AM


    Comment by Hank Roberts — 6 Feb 2008 @ 6:33 PM

  581. Weather vs. Climate

    Where does natural variability end and climate begin?

    Whatever one might think of the temperature record of the last 5-10 years, to dismiss it as natural/random behaviour is not very scientific, unless one believes that the temperature record is largely a drunkard’s walk.

    I believe the “apparent” stagnation “could be” hugely significant if we can understand or “at least” model it.

    Currently I am looking at its implications for the “rate” (not the sign) of enthalpy uptake by the earth. Initial calculations seem to indicate that although the flux into the earth (primarily oceans) is positive, the rate of change has turned down significantly and is currently negative. Obviously this will not last. The continuing accumulation of CO2 should dictate that the trend will eventually turn once more upwards, but the apparent “stagnation” is now of sufficient duration that I believe it deserves a rational (physical) explanation.

    I have done calculations for both a slab ocean (which has a severely negative trend) and a more reasonable diffusive ocean that although much more moderate also shows a negative rate over the last 5 or so years.

    Even just a flat rate must pose questions of the following sort.

    If warming due to CO2 is continually upward (as I suspect), what has the world been doing in the last 5-10 years to “stall” the rise of the last 30-40 years?

    I think that it is more than plausible that some sort of change of climatic regime has taken place.

    I also think that we should do our utmost to try and understand what has happened and what it implies for the future.

    Even if this be “just” “natural variability”, it is vital that we understand its causes.

    As I understand it the “models” are essentially weather models, so are hopefully adept at predicting just this sort of variability. If so, what do they have to say about the climate of the last 10 years?

    Best Wishes

    Alexander Harvey

    Comment by Alexander Harvey — 12 Feb 2008 @ 8:27 PM

  582. Alexander Harvey writes:

    [[If warming due to CO2 is continually upward (as I suspect), what has the world been doing in the last 5-10 years to “stall” the rise of the last 30-40 years?]]

    It hasn’t stalled. Check here:

    Comment by Barton Paul Levenson — 13 Feb 2008 @ 8:24 AM

  583. There’s your problem: start with a wrong assumption:
    >If warming due to CO2 is continualy upward
    > (as I suspect)

    Try pasting your belief into the Google search box, followed by a question; their natural language search is rather good by now. They’ll first check your spelling, then give you a search with the words spelled right.

    Imagine how profitable Google would suddenly become if they could convince people to look up their beliefs to check whether there was any good science on the questions.

    Ah, but they’d first have to agree they were questions, wouldn’t they.


    “The predictability of timescales of seasonal to decadal averages is evaluated. The variability of a climate mean contains not only climate signal arising from external boundary forcing but also climate noise due to the internal dynamics of the climate system, resulting in various levels of predictability that are dependent on the forcing boundary conditions and averaging timescales….”
    Volume 10, Issue 6 (June 1997) Journal of Climate
    Atmospheric Predictability of Seasonal, Annual, and Decadal Climate Means and the Role of the ENSO Cycle: A Model Study
    Wilbur Y. Chen and Huug M. Van den Dool
    Climate Prediction Center, NOAA/NWS/NCEP

    Comment by Hank Roberts — 15 Feb 2008 @ 5:37 PM

  584. Alexander Harvey,

    I agree: unless you use GISS, there’s an apparent short-term levelling of the warming trend in both Hadley/CRU and GHCN.

    From what I can see (I can’t find research papers to back me up), the reason for the difference between GISS and the other two is that GISS seems to give more weight to the Arctic than the others. As to whether there really is marked warming in the Arctic, I think observations of ice conditions there suggest there is.

    I’ve commented on this here previously:
    This current situation is interesting, but measured against the sort of issue raised by RayPierre in the article at the top of that page, it’s not the most crucial gap in knowledge.

    By the way, Climate Models are Climate Models, not Weather Models. Each run of a climate model is like an individual realisation of the planet’s climate. If we had a set of identical Earths you wouldn’t expect them to have the same weather (short term), but their climates (long term) would be much more similar.
    Furthermore, here is another reason not to view climate models as weather models:

    When I was a sceptic I used to reassure myself every time there was a downward blip – i.e. I wasn’t being critical or sceptical at all. CO2 forcing changes from year to year are tiny in comparison to internal climate variability, it’s only on a decadal and greater scale that the trend becomes robust.

    In closing, have a look at this image:
    For the bottom graph (note the -1 to -0.8 interval): If that whole X axis were to cover 2 seconds we’d not get into an argument about what the underlying signal was, because in the time it’d take to walk off and make a cup of tea it’d be apparent what was going on. Alternatively, knowing that the original signal in graph A was a significant component in B would allow us to be very confident that, despite the activity between -1 and -0.8, the following long-term behaviour would be very different.

    Comment by CobblyWorlds — 17 Feb 2008 @ 5:01 AM

  585. #570&572

    James Lovelock does say that aerosols are masking ’2-3 degrees’ of warming.
    His lecture is worth a watch.

    His main argument is that the Earth is now in positive feedback & moving ‘ineluctably’ towards a hot state. He says the albedo feedback from the absence of ice at both poles would be of a similar magnitude to the forcing from all man-made emissions. Anyone have a second opinion on this?

    Given the shocking 2007 arctic melt, we now know the arctic will soon be ice free in summer. It now seems likely that the Greenland ice will pass its threshold and commit us to 5 metres of sea level rise, and that much of this could happen by 2100.

    My point is that creating a new low carbon economy & stabilising atmospheric CO2eq concentration at say 430ppm by 2030 will not be enough to stop runaway global warming. The positive feedbacks are already coming into play. Geo-engineering is now imperative if we are to avoid runaway, dangerous climate change.

    Comment by Ru Kenyon — 26 Feb 2008 @ 4:00 PM

  586. There are several posts above, mostly by Bryan S, having to do with ARGO. This article is making the rounds:

    The Mystery of Global Warming’s Missing Heat

    Comment by JCH — 20 Mar 2008 @ 11:56 AM

  587. Hasn’t a Hungarian Scientist, Ferenc Miskolczi who used to work at NASA, just published a paper claiming that there is a major flaw in the basic equations that are still being used to model global climate? Is this paper valid, and if not why not? This seems, at least to the interested observer, to be an important issue, which would be a perfect discussion point for this forum.

    Comment by Russ Willis — 21 Mar 2008 @ 3:48 AM

  588. 1. During the decades of the ’80s and the ’90s, volcanoes at 10 degrees north (Philippines), 20N (Mexico), and 40N (Oregon) spread aerosols around the world for ten years. I’ve never seen this discussed.

    2. During the 150 years of coal burning, we have engaged in the massive process of dam building. The amount of water held on the land to evaporate is not insignificant. It must figure into the equation some way.

    3. Due to population growth, 6(?) of the 10(?) largest rivers in the world no longer reach the sea.

    4. One of the changes we are seeing is ‘stratification’ of the ocean’s ‘thermoclines’, diminished vertical mixing. Given the failure to ‘mix’, increased glacial melt (cold water), and a diminished input of rivers (hot water), is it any wonder the oceans are getting colder?

    Comment by blue7053 — 1 May 2008 @ 1:32 PM

  589. #585

    “Given the shocking 2007 arctic melt, we now know the arctic will soon be ice free in summer.”

    We do?


    1. During the decades of the ’80s and the ’90s, volcanoes at 10 degrees north (Philippines), 20N (Mexico), and 40N (Oregon) spread aerosols around the world for ten years. I’ve never seen this discussed.

    Really good point. Something I’ve never seen discussed either.

    Comment by Doug — 5 May 2008 @ 2:06 PM

    > is it any wonder the oceans are getting colder?

    It would be if it were, but as it isn’t, it ain’t.
    (I think Lewis Carroll said that originally.)

    > never seen discussed

    We can help you figure out why you failed to find it, if you’ll describe how you looked and failed to find it.

    Or you could try Google: aerosols volcanic decade

    finds this comprehensive resource among much else:

    When you’ve read down that extensive page you’ll find this illustration, for example:

    Also helpful:

    The “Start Here” link at the top of the RC page
    The search box near it, type in the term you want
    The first link under Science in the right margin

    Comment by Hank Roberts — 5 May 2008 @ 3:56 PM

  591. Two questions for Gavin:

    I understand that an 8 year time frame is still not conclusive with respect to AGW. However, let’s say, in the absence of any major event like a volcano, and with ENSO cycles in the typical range etc., the same flat or slightly downward trend is maintained for a few more years: how many years would it take before you would be willing to admit that the models have significantly over-predicted the impact of CO2 emissions? Would 1 more year do it, 5, 10, 100?

    Second question: I noticed in the posts above you suggest that the 2003 heat wave in Europe and the 2007 NH ice melt are “extremely unlikely to be on its own just another fluctuation”.

    I’ve seen some numbers showing Antarctic sea ice to be unusually high. Is the Antarctic sea ice “just another” fluctuation or something else?

    Thanks for your response to so many posts.

    Comment by Leonard Herchen — 8 May 2008 @ 10:51 PM

  592. The web site

    contains a paper recently submitted by Nicola Scafetta to the EPA.

    Does this paper contain anything of interest to the IPCC climate scientists? Or is he just off course?

    [Response: It's worth noting that this is nothing to do with EPA or any official submission, it is simply placed in an online archive (like Arxiv). I'll have a look to see if there is anything of note. - gavin]

    Comment by John Burgeson — 2 Mar 2009 @ 5:11 PM

    Thanks, Gavin. The presentation is about stuff far beyond my own expertise; several of my ASA colleagues think it may be worthwhile. We’d appreciate an analysis very much.



    Comment by John Burgeson — 2 Mar 2009 @ 5:27 PM

  594. Leonard Herchen: On Antarctic sea ice: go to and scroll about 3/4 of the way down to where a graph is shown with Antarctic and Arctic sea ice trends plotted by their standard deviations. You can see that the 12-month mean Antarctic ice is still about average for the satellite period, and has mostly stayed within a couple of standard deviations of the long-term average; the 12-month mean of Arctic sea ice, on the other hand, is almost 4 standard deviations below the long-term trend, and 2007 was almost 8 full standard deviations below the long-term mean.

    Also see:

    I’m not Gavin, but I’d personally be quite surprised if we haven’t seen global mean temperatures back at at least 2005 levels within 3 years – the negative ENSO index is unlikely to last that long, and we’ll be moving up the solar cycle. If we don’t have a clear new global record within 6 or 7 years (absent a volcano, massive continuing negative MEI, or a Maunder-minimum-type solar drop) then I’d personally be reevaluating my assumptions about climate sensitivity. Somewhere in the realclimate archives I remember a post showing how often a “new record” would be expected based on an underlying trend + noise, but I can’t find it right now…

    Comment by Marcus — 2 Mar 2009 @ 5:57 PM

  595. The charts here would be very much worth bringing up to the present date. They’re very helpful.

    Comment by Hank Roberts — 2 Mar 2009 @ 6:56 PM

  596. 592: John, Gavin,

    Not much physics or analysis to address. It’s just an array of fancy pictures and curves.

    It really doesn’t matter if you use PMOD or ACRIM, since the differences are extremely small, and are evident over only a small portion of a solar cycle. There is no physical evidence that solar changes explain over 50% of the 20th century warming, and the forcing from solar changes is expected to be very small compared to the GHG changes. I don’t have the knowledge to make a judgment on which product is better, but it isn’t a huge deal for attribution efforts.

    Slides 25-32 severely misrepresent the spatial and temporal scales of historical paleoclimatic events and the distribution of early 20th century warming. The “Expectation: A significant fraction of the warming observed during the last decades is natural (sun or something else)” on slide 34 does not follow from anything Scafetta provides, and his phenomenological work as well as the Loehle paleoclimate reconstruction have all been addressed at RC before. Scafetta is (intentionally?) not accurately assessing the modern literature on the MWP, and attribution conclusions do not follow merely from such events having happened.

    Comment by Chris Colose — 2 Mar 2009 @ 8:01 PM

  597. Marcus wrote in 594:

    Somewhere in the realclimate archives I remember a post showing how often a “new record” would be expected based on an underlying trend + noise, but I can’t find it right now…

    I believe you are thinking of the chart titled “How long might you wait for a new record?” which is available in the post:

    11 May 2008
    What the IPCC models really say

    Jeff Masters expanded on this over at WunderBlog:

    Is the globe cooling?
    4 Feb 2009

    You might also find the following two posts by Tamino to be of interest…

    Global Temperature from GISS, NCDC, HadCRU
    January 24, 2008

    You Bet!
    January 31, 2008

    Comment by Timothy Chase — 2 Mar 2009 @ 11:48 PM

  598. I love Scafetta’s graph that looks like it attributes the collapse of the Inca civilization to a “cold period” in the 1400s… I think most historians think that the spread of disease from the Europeans landing in Mexico was a more likely cause of the Incan collapse (and civil war, and then the arrival of Pizarro was the straw that broke the llama’s back).

    But it really seems that once someone starts down the Skeptic Path, they begin to lose all connection with reality – see Pielke Jr’s blog where he attempts to claim that a cap-and-dividend won’t reduce emissions, Spencer’s blog and “non-anthropogenic CO2 rise”, etc.

    Comment by Marcus — 2 Mar 2009 @ 11:57 PM

  599. Ray Ladbury #575.

    However, China has population controls that will counter the change. China is also building more renewable power sources than anyone else. It makes sense for them: China is not resource rich. And even if they were, it’s better economically to sell your resources rather than use them yourself as long as you can get away with it.

    And both China and India will be able to buy newer and better technologies with less disruption than the first world would see, for much the same reason why Africa has managed to skip past the expensive and unscalable wired phone network and gone to the cheaper and more easily rolled out wireless (mobile) phone network.

    They hadn’t sunk cost into a landline system and didn’t need to ensure their stockholders get their ROI on that landline investment and so didn’t have a need to make sure that wireless rollout was expensive enough to make up for landline revenue loss.

    Comment by Mark — 3 Mar 2009 @ 7:18 AM

  600. John Burgeson wrote in 592:

    contains a paper recently submitted by Nicola Scafetta to the EPA.

    Does this paper contain anything of interest to the IPCC climate scientists? Or is he just off course?

    If you put Nicola Scafetta’s last name in the search box at the top of any webpage of Real Climate you will find that several of his papers have been reviewed here in the past.

    In particular:

    Another study on solar influence
    31 March 2006

    … which critiques:

    Scafetta, N., and B. J. West (2006), Phenomenological solar contribution to the 1900–2000 global surface warming, Geophys. Res. Lett., 33, L05708

    How not to attribute climate change
    10 October 2006

    … which critiques:

    Scafetta, N., and B. J. West (2006), Phenomenological solar signature in 400 years of reconstructed Northern Hemisphere temperature record, Geophys. Res. Lett., 33, L17718

    … and:

    A phenomenological sequel
    27 November 2007

    … which critiques:

    Scafetta, N., and B. J. West (2007), Phenomenological reconstructions of the solar signature in the Northern Hemisphere surface temperature records since 1600, J. Geophys. Res., 112, D24S03

    Comment by Timothy Chase — 3 Mar 2009 @ 2:03 PM

  601. Thank you, Timothy Chase. I understand (now) the search facility on this site and will employ it in the future.

    I’m new to this stuff and have several colleagues (interestingly, all of conservative tendency) who keep suggesting that AGW is on a shaky foundation. So be patient with my naive questions (not that you are otherwise).

    Burgy (

    Comment by John Burgeson — 4 Mar 2009 @ 2:51 PM

  602. I have been tracking the measured annual global mean surface temperature versus the IPCC trend predictions, 1995 and 2000.
    While the IPCC made no annual predictions, I see no reason one cannot construct annual bounds based on their trendline predictions to facilitate gauging the performance of their predictions to date.

    To do this I extended their prediction’s upper and lower bounds by 1.28 std. deviations (90% confidence limit) to account for natural interannual variation. The variance was calculated as twice the variance in the residuals of a linear regression on the annual surface temperature data, 1970-2000. This accounts for the natural variability one can expect between any year of interest and an arbitrary reference year: 1990 in this case.
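    The construction can be sketched as follows. This is my own reading of the procedure (function and argument names are mine); `trend_low` and `trend_high` stand in for the IPCC trendline bounds:

```python
import math

def annual_bounds(years, temps, trend_low, trend_high, z=1.281):
    """Sketch of the bound construction described above. `trend_low` and
    `trend_high` map a year to the lower/upper trendline bound; the
    residual variance of an OLS fit to the observed annual anomalies is
    doubled (the target year and the 1990 reference year both vary) and
    the trend envelope is widened by z standard deviations."""
    n = len(years)
    xbar = sum(years) / n
    ybar = sum(temps) / n
    sxx = sum((x - xbar) ** 2 for x in years)
    slope = sum((x - xbar) * (y - ybar)
                for x, y in zip(years, temps)) / sxx
    intercept = ybar - slope * xbar
    residuals = [y - (intercept + slope * x) for x, y in zip(years, temps)]
    resid_var = sum(r * r for r in residuals) / (n - 2)
    half_width = z * math.sqrt(2.0 * resid_var)  # doubled variance
    return [(trend_low(x) - half_width, trend_high(x) + half_width)
            for x in years]
```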

    A link to the graph:

    Comment by Dave Occam — 18 Apr 2009 @ 10:10 AM

  603. Dave Occam (602) — Thank you!

    Comment by David B. Benson — 18 Apr 2009 @ 2:32 PM

  604. Aren’t you lumping all models together, then saying a short trend doesn’t invalidate model X, therefore all models are still plausible?

    For example, I would think a small downward trend or flatline over an 8 year period, should make the forecasts of larger warming like 6C less likely than the 2C scenario.

    Comment by MikeN — 20 Apr 2009 @ 11:18 AM

  605. Dave Occam #602, is that right? A two-sided 90% confidence interval is +/- 1.645 sigma for the normal distribution.

    Comment by Martin Vermeer — 20 Apr 2009 @ 1:49 PM

  606. MikeN (604) — The major uncertainty is human behavior; are we or are we not going to add lots of extra carbon dioxide to the atmosphere?

    Comment by David B. Benson — 20 Apr 2009 @ 1:52 PM

    “Aren’t you lumping all models together, then saying a short trend doesn’t invalidate model X, therefore all models are still plausible?”

    Nope. A short trend doesn’t invalidate model X; it simply doesn’t prove model X is wrong.

    There is no flatline or downward trend.

    It was going up before 1998.

    It has gone up after 1998.

    And when AGW theory was just beginning in the 80s, and well into the 90s, it was all “you haven’t got enough data to say there’s even been any warming”.

    That was 80 years of data.

    Now 8 years is enough???

    Comment by Mark — 20 Apr 2009 @ 2:55 PM

  608. MikeN (604),

    The 90% confidence limit does not apply to the probability of annual temperatures falling within the two bounds; rather, the confidence limits apply to the best and worst case scenarios only. You can’t put a confidence limit on the aggregate because none were supplied by the TAR or SAR for their trendline predictions.

    1.281 std devs leaves 10% (one tail) of the population outside the limit of interest if temperatures are tracking one of the envelope trendline bounds. Only one tail is relevant for each bound. If for example temperatures are running close to the lower bound then they are testing the slowest temperature rise model/scenario(s); the upper range of natural variability would fall well within the upper bounds. So while in this example you might have 10% of the years falling below my lower confidence limit it’s highly unlikely you will have any above the upper limit.
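    The distinction from #605 is one-tailed vs. two-tailed: 1.281 sigma leaves 10% beyond it on one side, while a central 90% interval needs ±1.645 sigma. A quick check using only the Python standard library:

```python
import math

def phi(z):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

one_tail = 1.0 - phi(1.281)          # mass beyond +1.281 sigma, one side: ~0.10
two_tail = 2.0 * (1.0 - phi(1.645))  # mass outside +/-1.645 sigma, both sides: ~0.10
```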

    Comment by Dave Occam — 21 Apr 2009 @ 8:35 AM

  609. Martin Vermeer (605),

    Unfortunately, I thought quantifying the confidence limit might cause some confusion.

    The 90% confidence limit does not apply to the probability of annual temperatures falling within the two bounds but rather the confidence limits apply to the best and worst case scenarios only. You can’t put a confidence limit on the aggregate because none were supplied by the TAR or SAR for their trendline prediction.

    1.281 std devs leaves 10% (one tail) of the population outside the limit of interest if temperatures are tracking one of the envelope trendline bounds. Only one tail is relevant for each bound. If for example temperatures are running close to the lower bound then they are testing the slowest temperature rise model/scenario(s); the upper range of natural variability would fall well within the upper bounds. So while in this example you might have 10% of the years falling below my lower confidence limit it’s highly unlikely you will have any above the upper limit.

    Comment by Dave Occam — 21 Apr 2009 @ 8:49 AM

  610. Dave OK, thanks.

    Comment by Martin Vermeer — 21 Apr 2009 @ 9:07 AM

  611. Dave, so were you agreeing with me that a 10 year flatline, if it happens, would have to lower the confidence ranges for temperature increase by 2100? Don’t want to get confused by terminology. I’m saying that the 6.4C upper range temperature is less valid given a flatline. Not debunked or invalidated, but less likely than it already was, since a flatline is more likely under a 2.4C scenario than a 6.4C scenario.

    Comment by MikeN — 21 Apr 2009 @ 11:13 AM

  612. MikeN #611. Not if you find the reason for the flatline is something that isn’t going to continue.

    Without that, the most likely candidate is that the errors will get WIDER.

    And note: it isn’t flatlining.

    But even if it were, if that reason turned out to be that the ocean chemistry was changing into a new mode and about to exhaust all the stored methane in it, the temperature by 2100 would be higher than predicted.

    If it were, and that reason were that the sun is reducing its output and will for the next thousand years, then the temperature by 2100 would be lower than predicted.

    But since either could be happening, if it WERE flatlining (which it isn’t), we don’t know why and so our uncertainty would be higher. We’d have, in the words of Rummy, a known unknown. And each of those increases the uncertainty, not the trend.

    Comment by Mark — 21 Apr 2009 @ 2:17 PM

  613. Well, going off the post at the top, which shows various trends over short periods as part of general ‘weather’ variability, I’m assuming a flatline comes from this and not from some external forcing that isn’t in the models.

    In that case, I think basic math would suggest that the lower end of warming is more likely than the higher end. A flatline should be more likely in a 2C model than a 6C model, correct?

    Comment by MikeN — 21 Apr 2009 @ 7:39 PM

  614. Hi MikeN. :-)

    This question has come up in another discussion, and both Mike and I would love to get a more informed third opinion. If I may rephrase the question I think Mike is asking…

    What does the frequency of “lulls” tell us, if anything, about the climate sensitivity? Sensitivity estimates for climate range from about 1.5C to 4.5C per 2xCO2, or (equivalently) from about 0.4 to 1.2C per unit forcing (W/m^2).

    Should different sensitivities result in a different frequency of “lulls”? Can we use information about the range of variation in the short term 8-year slope to help constrain the sensitivity estimates?
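    One way to make the question concrete is a toy experiment (my own construction, not anything from the literature): generate a linear trend plus white noise and count how often an 8-year least-squares slope comes out non-positive. For a fixed noise level, a steeper underlying trend yields fewer such “lulls”, so in this idealized setting lull frequency does carry information about the trend:

```python
import random

def lull_frequency(trend, sigma=0.12, window=8, n_years=2000, seed=1):
    """Fraction of `window`-year OLS slopes that are <= 0 ('lulls') in a
    synthetic series: linear trend (deg C/yr) plus Gaussian noise (sigma)."""
    rng = random.Random(seed)
    temps = [trend * t + rng.gauss(0.0, sigma) for t in range(n_years)]
    xs = list(range(window))
    xbar = sum(xs) / window
    sxx = sum((x - xbar) ** 2 for x in xs)
    lulls = 0
    windows = 0
    for start in range(n_years - window + 1):
        seg = temps[start:start + window]
        ybar = sum(seg) / window
        slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, seg)) / sxx
        lulls += slope <= 0
        windows += 1
    return lulls / windows
```

    The hard part in practice, of course, is that real climate noise is neither white nor of precisely known amplitude.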

    Comment by Duae Quartunciae — 22 Apr 2009 @ 1:58 AM

  615. You’d have to know how the chaotic system reacts.

    Short answer: no.

    Long answer, theoretically and in general.

    An example is to take the variations in temperature and ask “how much of that is matched by solar variation?” You then subtract a variation at that frequency until you get a line that is straighter than before.

    And do that with all other variables.

    Though that gives more the attribution rather than the sensitivity. And is subject to a lot of error.

    But that’s not looking at the lulls either.

    Comment by Mark — 22 Apr 2009 @ 2:38 AM

  616. “Dave, so were you agreeing with me that a 10 year flatline, if it happens, would have to lower the confidence ranges for temperature increase by 2100?”

    Are you addressing me, MikeN? I accidentally double posted – I meant to direct my post to Martin Vermeer.

    In case you were asking my opinion: statistically speaking, I would not look at a select 10-year period, since the selection itself biases the result, but rather at the full record from the date of the prediction(s). I would consider the prediction suspect if we got a couple of years outside my bounds over the next decade (exempting years with a major episodic event).

    I think if temperatures hug the lower bound over the next few years it would make the upper 2030 bound an unlikely occurrence, but not necessarily the upper 2100 bound. Some models and scenarios produce very slow increases in temperatures in the first couple decades but accelerate more in the later years. For example scenario A2 of the 6 SRES scenarios (Fig 9.15 of my reference) produces the smallest increase in temperature in 2030 but the second highest in 2100 – for all models.

    So far the IPCC predictions are holding their own. Hypothetically speaking, if they should require modification in the future to levels outside current bounds then I don’t think we can say at this point if it is because of inaccurate models, forcing sensitivities, initial conditions or natural changes that were not anticipated. So we couldn’t say much about 2100 temperatures till that was resolved.

    Comment by Dave Occam — 22 Apr 2009 @ 11:19 AM

  617. >I think if temperatures hug the lower bound over the next few years it would make the upper 2030 bound an unlikely occurrence, but not necessarily the upper 2100 bound.

    OK, not necessarily unlikely, but wouldn’t it then make the lower 2100 bound more likely than the upper 2100 bound?

    Duae has hypothesized that a high positive feedback model that produces a 6C warming is as likely to produce lulls as a low feedback model that produces warming of 2C. Any opinion on this?

    By the way, I don’t wish to compare models of different carbon scenarios, but rather models with different feedback variables.

    Comment by MikeN — 29 Apr 2009 @ 4:35 PM

  618. Re 617: “By the way, I don’t wish to compare models of different carbon scenarios, but rather models with different feedback variables.”

    But the IPCC prediction envelope encompasses both; when the authors determined the bounds of this envelope they had both in mind. The bounds of their prediction were based on expert judgment, not statistical computation.

    If you want to tease out more information than what they summarized you need to address specific models and/or specific scenarios – but then how do you choose – statistically speaking? And then adding confidence limits for a chosen model is problematic – for experts not to mention lay readers.

    Re 617: “OK, not necessarily unlikely, but wouldn’t it then make the lower 2100 bound more likely than the upper 2100 bound?”

    Short answer, I am not qualified to say, but IMO I still think not necessarily – despite the intuitive attractiveness of your assumption. It all depends on how you determine likelihood and how well the causes of the slower than expected warming are understood in the future of your hypothetical scenario.

    But we are just splitting hairs. By the time we reach 2030 they will have greater understanding of climate, better data and models and be able to narrow the range of plausible outcomes. However there will always be some natural variation that is beyond prediction. At this point in time I don’t see that it matters in terms of decisions that have to be made today regarding policy around GHG emissions that affect resource spending prior to 2030.

    Comment by Dave Occam — 30 Apr 2009 @ 10:39 AM

  619. “but wouldn’t it then make the lower 2100 bound more likely than the upper 2100 bound?”

    It doesn’t MAKE the upper bound less likely than before.

    You can make a ***guess*** that it should, but unless you know why your model and reality are disagreeing, you don’t ***know*** that your assumption will hold.

    Comment by Mark — 30 Apr 2009 @ 11:21 AM

  620. That’s the thing. It isn’t one model, but a range of model possibilities.

    Comment by MikeN — 30 Apr 2009 @ 1:19 PM

  621. Re 620: “That’s the thing. It isn’t one model, but a range of model possibilities.”

    Right. As I understand it, capturing the results from a range of models is a proxy for capturing all the uncertainties in modeling. This might seem like they are sandbagging by playing it so safe, but I believe the models are using a nominal value for climate sensitivity to various forcings. If they were to choose only one model then it would make sense to capture the uncertainty in the sensitivities by running it with a range of values. That is probably how it will be done in the future, after they gain confidence in more evolved models and can do a better job of independently bounding the sensitivities.

    Comment by Dave Occam — 2 May 2009 @ 10:54 AM

  622. Re: #602

    I have updated the data and added near-term statistical projections to the plots, i.e. projections free of any climate-specific assumptions.

    Comment by Dave Occam — 15 May 2009 @ 4:07 PM

  623. Captured in these four plots we see that the global annual mean surface temperature, through 2008, is tracking within Intergovernmental Panel on Climate Change (IPCC) projections.


    The top chart shows actual mean temperatures relative to 1990 and annual bounds derived from the IPCC’s multi-decadal trend bounds (values on left scale). These bounds capture uncertainty in the model, the emissions scenario, and inter-annual noise. Also, a linear trend line is plotted from the measured temperature points and extrapolated out to 2030 so we can see the projected 40-year change (right scale), based on data through 2008.

    The next two charts are the associated multi-decadal temperature trend projections published by the IPCC in the Third Assessment Report, based on climate models run in the late 1990s. As it turned out, per the last chart in the lower right (published in the Copenhagen Climate Report, 2009), emissions have so far closely followed SRES A1FI. I have circled the point in the IPCC chart where model ECHAM4/OPYC predicts the 40-yr temperature change given emission scenario A1FI. You can see it (0.72C) falls right on top of the actual trend to date.

    Of course 19 years of data do not definitively define a 40 year trend, but we now have enough data to get a meaningful indication of how these 1990s models are tracking reality. So far actual temperatures are closely following median model projections.
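    The 40-year figure is just the fitted slope scaled out to 2030. A minimal sketch (names are mine):

```python
def extrapolated_change(years, anoms, start=1990, end=2030):
    """Fit an OLS trend to observed annual anomalies and scale it to the
    start-to-end interval, as in the 40-year comparison above."""
    n = len(years)
    xbar = sum(years) / n
    ybar = sum(anoms) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(years, anoms))
             / sum((x - xbar) ** 2 for x in years))
    return slope * (end - start)
```

    E.g., a fitted slope of 0.018 C/yr over 1990-2008 gives 0.018 * 40 = 0.72C for the 40-year change.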

    It is disturbing to note that should these models continue to hold true, and should we continue on the emissions path of scenario A1FI, we are facing a global temperature increase of around 4.5C, or 8.1F, before the end of the century.

    Comment by Dave Occam — 28 Jun 2009 @ 2:22 PM

  624. Link to plots for preceding post

    Comment by Dave Occam — 28 Jun 2009 @ 2:24 PM
