Tropical tropospheric trends again (again)

Taking a slightly larger view, I think this example shows quite effectively how blogs can play a constructive role in moving science forward (something that we discussed a while ago). Given the egregiousness of the error in this particular paper (which was obvious to many people at the time), having the initial blog posting up very quickly alerted the community to the problems, even if it wasn’t a comprehensive analysis. Almost ten months passed between the original paper coming out and this new analysis. The resulting paper is of course much better than any blog post could have been, and in fact moves significantly beyond a simple rebuttal. This clearly demonstrates that there is no conflict between the peer-review process and the blogosphere. A proper paper certainly takes more time and generally gives a better result than a blog post, but the latter can get the essential points out very quickly and can save other people from wasting their time.


284 comments on this post.
  1. Magnus Westerstrand:

    I just want to say that I (as a PhD student) agree with what you are saying; blogs can be a good contribution to the scientific debate, not least by keeping the public alert to the discussions going on behind the “scenes” of peer-reviewed journals.

  2. François GM:

    This 17-author paper basically says that the models would only be “incorrect” if observations were roughly outside the 0.0 to 0.5 degree-per-decade window.

    This means that no warming whatsoever over the next century would not invalidate the models.

    Interesting.

    [Response: But not true. These are hindcasts for a specific period (1979-1999) and the variance is in large part due to the shortness of that period and the degree of ‘noise’ – both internal and forced (i.e. volcanoes) – that confuses the estimate of the long-term trends. Our previous post dealt with what the models expect for periods in the future – and the longer the time frame, the less influence the short-term variations have. – gavin]

  3. Connie Smith:

    Well done. It is also worth noting that the earlier work by Douglass et al. (2004) on this subject had issues in observational data sets pointed out by Fu and Johanson (2005), Mears and Wentz (2005), and Sherwood et al. (2005).

  4. Ricki (Australia):

    Many thanks for this site yet again!

    Please don’t forget that many papers are not readily available to the public (most have to buy them). This means that the best insight the average person can get is what the blog postings and the following discussion reveal about the papers and their weaknesses/strengths.

    Also, many out there don’t have the technical expertise or time to read the papers even if they could get hold of a copy.

    Therefore, the general public relies on sites like this one to help keep up to date with the state of climate science. This is such an important issue that blog sites like this one MUST be maintained.

    [Response: Thanks. We realise that access to papers is limited for most of the public and we link to free versions where possible. In this case, I recommend reading the fact sheet (linked above) to get a good sense for what was done. – gavin]

  5. steven mosher:

    Good post gavin. I think I’m reading the trend chart wrong, but is the mean of the models higher than the mean of all 10 data sets? What do the error bars at the surface look like? Did all the models include volcanic forcing? Err maybe I should read the paper first

  6. Steve Bloom:

    How did Douglass et al get published in a respectable journal? Did the editors let it through despite being aware of the errors?

  7. Fred Staples:

    What do Messrs Santer et al mean by surface temperature? The surface I live on comprises two lawns, some areas shaded, some not, flower beds, a new black tarmac drive, a new white stone patio, and a fairly fast flowing river. How would they propose to measure its temperature? Like all the other surface temperatures (apart from the 75% of the surface covered by the oceans) they would probably suggest placing the thermometer a few meters up in the atmosphere.

    Satellites are by far the best way to measure atmospheric temperatures across the planet.

    I do urge all contributors who are so certain of the AGW theory to at least look at the UAH data (Google Global Warming at a Glance). [edit]

    From 1978 to 1996 neither the lower- nor the mid-troposphere data show any warming. The warming which lends credence to the AGW theory occurred over three years, from 1999 to 2002. Thereafter, temperatures remained “high” (about 0.2 degrees above the long-term average) and fluctuating for 4 years to 2006. (You would find it difficult to detect this change in the average temperature of your house.)

    Thereafter the temperatures fell back to 1978 levels.
    The mid-troposphere chart experienced the same relatively sharp increase from 1999 to 2002, but the increase was much lower and more rapidly reversed – hence the difference between the long-term trends.

    The models follow the CO2 concentration, which is why the Hansen “B” line and the “surface” temperatures are diverging.
    Santer’s fact sheet does not display troposphere temperatures after 2005.

    If the satellite data is anything like accurate then AGW warming theory is either plain wrong or nothing to worry about. If current (1978) temperatures persist for the next 5 to 10 years, challenges to the AGW consensus will increase rapidly, both in number and volume.

    [Response: Wrong again. But feel free to continue ignoring all previous corrections of your faulty thinking. – gavin]

  8. Vinod Gupta:

    I am not an expert by any definition, but just a layman keenly interested (concerned) about the future of the planet and my children, all our children.

    I am also very concerned that the current debate on global warming gets so specialized that the forest may well be missed out for the trees.

    There seems to be a lurking expectation among some of the climate scientists and others that a new ice age may be dawning that will yet save mankind from the apocalypse of melting polar ice and significant rise of sea levels in our lifetime. In other words, there is no warming effect of greenhouse gases and humans can carry on with Business As Usual, including a massive burn of fossil fuels.

    There is also little or no awareness among the lay public that greenhouse gases are cumulative, and that the accumulation has been growing exponentially over the last 150 years, since industrial engines appeared.

    Somewhere I read that the current production of greenhouse gases is twice as much as what the green plants (on land and in the sea) can neutralize. In other words, the “tipping point” has already been far exceeded and there cannot be any new equilibrium of production/neutralization of greenhouse gases.

    So any real solution would only come through total and serious cutback in human lifestyles and corresponding energy use.

    Please start educating the lay public about global warming in a simple (and stark) way that they can understand. Please also provide a forum for getting some expert advice on what the alternate lifestyles of the future world must look like, in order that we may have a future.

  9. Gavin (no, not that one, a different one):

    Steve (6): Peer review will never be a guarantee that published papers are free of (even obvious) errors; reviewers, editors and authors are all only human, and there will always be a chance that none of them will spot the error in time. However, here we have seen good science in action, in that the error has been spotted and corrected (well done Santer et al!). It also highlights the advantage of pre-prints being made available by the publisher, as it means that the correction is made more rapidly than would otherwise be the case (well done IJC).

    It is important, however, that corrections to high-profile or controversial papers are made both in the blogs and in the peer-reviewed literature, as otherwise the correction is all too easily dismissed by saying “RealClimate, well they would say that wouldn’t they?” ;o)

  10. Figen Mekik:

    I agree with “the other Gavin”. Peer-review isn’t the end-all filter for good science. It is simply the first one. And reviewers are often very pressed for time, so sometimes errors fall through the cracks :)

  11. Timo Hämeranta:

    The paper is available at LLNL page
    https://publicaffairs.llnl.gov/news/news_releases/2008/NR-08-10-05-article.pdf

    Its final conclusion is as follows:

    “We may never completely reconcile the divergent observational estimates of temperature changes in the tropical troposphere. We lack the unimpeachable observational records necessary for this task. The large structural uncertainties in observations hamper our ability to determine how well models simulate the tropospheric temperature changes that actually occurred over the satellite era. A truly definitive answer to this question may be difficult to obtain. Nevertheless, if structural uncertainties in observations and models are fully accounted for, a partial resolution of the long-standing ‘differential warming’ problem has now been achieved. The lessons learned from studying this problem can and should be applied towards the improvement of existing climate monitoring systems, so that future model evaluation studies are less sensitive to observational ambiguity.”

  12. Thomas Lee Elifritz:

    Please don’t forget that many papers are not readily available to the public (most have to buy them).

    On the contrary, most university doors are WIDE OPEN to the public, joe six pack merely needs to get off the couch and wander down to his or her local world class university research libraries and look them up. There are costs involved; page charges run anywhere from a few pennies up to a dime a page for the vending copy machines or the laser printer. Many of these papers may be available on a preprint server as well, and if you dig deep enough, they are available as fair use copies on the author’s web pages.

    Captcha : concluded for

  13. Kevin McKinney:

    “On the contrary, most university doors are WIDE OPEN to the public, joe six pack merely needs to get off the couch and wander down to his or her local world class university research libraries and look them up.”

    For most of us that is more than enough, unfortunately, to preclude the use of the term “readily available.” I live in an area boasting 10+ universities in the metro area, and I’d still have to budget about 2 hours of round trip driving, and a minimum of an hour–and probably more–to get in, find the material, copy what I need, and get out again. Not so practical. I’m really doing more than I should–“should” in relation to the practicalities of daily life–when I take 40 minutes online to browse a few climate papers that I *can* access.

    (captcha: “tieing booked”)

  14. Hank Roberts:

    > [Response: Wrong again. But feel free to continue
    > ignoring all previous corrections of your faulty
    > thinking. – gavin]

    Is there a search string that will find posts by an individual userid that include inline responses?

    [Response: Yes. Add “class=response” to the string. This picks out the html that we use to colour the inline responses green. – gavin]

  15. counters:

    Nice work! You all should consider featuring your “Fact Sheet” a bit more prominently, perhaps even before the fold, because it seems to be a definitive resource providing refutations to nearly every single claim advanced by skeptics in the wake of the Douglass debacle.

    I’ll third the assertion of Figen and “other Gavin”; blogging can be a way of disseminating information to a large group of people rather quickly, but it is most effective in cases like this where it is paired with the traditional, rigorous academic exercise of peer-review.

  16. Steve Bloom:

    Re #s 9 and 10: In addition to the errors in the paper seeming like the sort of thing a peer-reviewer ought to have caught, the presence of Fred Singer as a co-author ought to have singled the paper out for special attention. So, as I said, it begins to seem plausible that the editors were aware of the problems and published it anyway. IMHO there’s a reason to have done so, i.e. that it’s useful for such things to get corrected in a reasonably prominent place. But is that what happened?

  17. John Lang:

    For the past year, the satellite troposphere anomalies (20S to 20N) have been negative (as much as -0.5C) – La Nina obviously.

    But wouldn’t that change the analysis – first, the linear trend is now only 0.06C per decade, and second, the Niño 3.4 region and ENSO are bigger influences than previously thought.

  18. steven mosher:

    So, am I reading that graph wrong, or does the mean of the models exceed the mean of all 10 observation sets? And does the mean of the surface hindcast for both Land+Ocean and SST exceed the 4 observation means as well?
    Also, how about a quick list of the models used?

  19. Figen Mekik:

    The thing is, even if Fred Singer’s name is probably a flag for a lot of people, as an editor you have to treat everything fairly. If the reviewers did not find fault, then one would have to either send the paper to more reviewers (which would take an inordinate amount of time) or reject the paper based on its author list. While that may have been appropriate in this case in hindsight, it opens the door to a slippery slope. Then new or untenured scientists will have a harder time getting published than established ones in the “old boys club.”

    To avoid all that, I think peer review should be double blind so the work is judged on its own merits and not its authorship or familiarity with specific authors or their affiliated institutions.

  20. Pat Neuman:

    In #8 Vinod Gupta wrote … “Please start educating the lay public about global warming in a simple(and stark)way that they can understand”. …

    However,

    although RealClimate has done well in its efforts to do that, the required resources and responsibility to succeed on the large scale exist only in governments (who have so far failed, miserably).

  21. Chris:

    Just had a quick glance at the fact sheet.
    Have the authors considered the possible effects of El Chichon and Pinatubo in diminishing the relative tropospheric temperature peaks of the ~1983 and ~1992 El Ninos, thereby steepening the tropospheric trend?

    [Response: The volcanic effects were taken into account in the models, but the precise timing of El Nino events is going to be different in each model – that in fact is a big factor in the variability of the 20-year trends. – gavin]

  22. D White:

    As a non-scientist, but (I would like to believe) a fairly scientifically literate layman, it seems that the evidence for AGW and the physics-based explanation for it are irrefutable. The denialists in my opinion are similar to those who believe in the efficacy of acupuncture, homeopathic remedies, and chiropractic medicine. They will seize on any tiny scrap of data supporting their claims while ignoring the vast amount of objective scientifically based evidence to the contrary.

  23. Richard:

    Figen Mekik: “then one would have to either send the paper to more reviewers (which would take an inordinate amount of time) or rejecta a paper based on its authors list. While that may have been appropriate in this case in hindsight, it opens the door to a slippery slope.”

    You must be joking! Are you seriously suggesting that the Douglass et al paper should have been rejected on the basis of the authors? What are you? The thought police?

    Who is to say that the Santer et al paper is correct and free of mistake? Papers are accepted because they are based on sound research and the methodologies are also sound. That is not to mean that they are error free.

    Santer et al have produced a paper that argues a different hypothesis to Douglass et al. It does not mean by any means that Santer et al is more correct or more free of potential challenge.

    Because Gavin SAYS that it does a better job of comparing trends does not make it so.

    [Response: … though it does make it a little more likely ;) – gavin]

  24. Barton Paul Levenson:

    Thomas Lee Elifritz writes:

    most university doors are WIDE OPEN to the public, joe six pack merely needs to get off the couch and wander down to his or her local world class university research libraries and look them up. There are costs involved; page charges run anywhere from a few pennies up to a dime a page for the vending copy machines or the laser printer. Many of these papers may be available on a preprint server as well, and if you dig deep enough, they are available as fair use copies on the author’s web pages.

    Some of the most crucial journals are no longer available in paper; if you don’t have a multi-hundred-dollar-a-year subscription, you’re SOL. The premier planetology journal, Icarus, is an example. Paywalls matter. They are effectively restricting scientific data to professional scientists, which is a bad thing.

  25. Fred Staples:

    As Einstein said, Mr White (22) 10,000 observations may support a theory – it only takes one to refute it. Science would not progress without the one observation out of step.

    For example, at my University, many years before the discovery of radioactivity, a professor of Physics noticed that sealed photographic plates in a drawer had been darkened. They were adjacent to a cupboard containing Radium.

    He solved the problem by moving the plates to another room.

    The tiny scrap of data to which you refer are the satellite temperature records from 1978, which was the year of the global temperature trough following the previous peak in the 1940’s.

    Because it contradicts AGW theory, that data has been very thoroughly analysed and criticised. Adjustments have been made for all kinds of systematic errors, and at least one arithmetical error has been acknowledged and corrected.

    The results (surely worth a glance) reflect all known temperature perturbations, including the 1988 peak. They are published and plotted every month, (Google Global Warming at a Glance), and overlaid onto the GISS surface data on which Anthony Watts (Watts up with That) labours so strenuously.

    You might also be interested in Tamino’s very interesting post (Open Mind) on the long-running Central England Temperature record. His polynomial fitting suggests a new “hockey stick” shape, with an up-tick this decade from about 10.5 degrees centigrade (high, but not remarkably so) to 11.0 degrees, which would be absolutely unprecedented.

    With 15 months to go there is no sign of the up-tick. This year looks like coming out at 10.2 degrees, the 25th warmest year in the record.

  26. Gavin (no, not that one, a different one):

    Richard says “Santer et al have produced a paper that argues a different hypothesis to Douglass et al. It does not mean by any means that Santer et al is more correct or more free of potential challenge.”

    No, that is not correct. Both papers seek to determine whether the observational data are consistent with the models; however, Douglass et al use a statistical test that actually answers a different question, namely “is there a statistically significant difference between the mean trend of the ensemble and the observed trend?”. The two questions are not the same, and indeed AFAICS one would not expect the observed trend to be exactly equal to the ensemble mean even if the models were perfect, so it is hardly a reasonable test of the validity of the approach!
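
    The difference between the two tests can be made concrete with a toy Monte Carlo sketch (all numbers here are invented for illustration and are not taken from either paper): draw an “observation” from the same distribution as the model runs, so the models are perfect by construction, and apply both tests. A test against the standard error of the ensemble mean falsely “rejects” most of the time; a test against the ensemble spread rejects at roughly the nominal 5% rate.

```python
import random
import statistics

random.seed(0)

def rejection_rate(n_models, n_trials, use_sem):
    """Fraction of trials in which a 2-sigma test 'rejects' the models.

    Each trend is an invented true value (0.2 C/decade) plus model-to-model
    noise (sd 0.1); the observation is drawn from the SAME distribution,
    i.e. the models are perfect by construction.
    """
    rejections = 0
    for _ in range(n_trials):
        runs = [0.2 + random.gauss(0, 0.1) for _ in range(n_models)]
        obs = 0.2 + random.gauss(0, 0.1)
        mean = statistics.mean(runs)
        sd = statistics.stdev(runs)
        # Douglass-style: standard error of the ensemble mean;
        # Santer-style: the ensemble spread itself.
        scale = sd / n_models ** 0.5 if use_sem else sd
        if abs(obs - mean) > 2 * scale:
            rejections += 1
    return rejections / n_trials

print(rejection_rate(49, 2000, use_sem=True))   # rejects perfect models most of the time
print(rejection_rate(49, 2000, use_sem=False))  # rejects only ~5% of the time
```

    The `rejection_rate` helper and its parameters are hypothetical; the point is only that as the ensemble grows, the standard error of the mean shrinks toward zero, so the first test ends up rejecting a single noisy realisation against an ever-tighter target.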

  27. Figen Mekik:

    Richard,

    Please read my earlier post carefully. I said it would not be right to reject papers because of their authorship. That is what my comment was all about.
    You say “What are you? The thought police?” Again, please read everything one posts before jumping to conclusions about what they are saying. I encourage double-blind peer review so no one will act as the thought police.

  28. outeast:

    most university doors are WIDE OPEN to the public, joe six pack merely needs to get off the couch and wander down to his or her local world class university research libraries and look them up.

    That’s a pretty high bar. If you’re an ordinary person and simply want to check something you’ve read in a newspaper article, you’re unlikely to want (or be able) to invest several hours in doing so.

  29. tamino:

    Re: #25 (Fred Staples)

    Gavin is right, you staunchly continue ignoring all previous corrections of your faulty thinking.

    You’re assigning far too much certainty to the UAH satellite reduction. Yet it’s in disagreement with similar satellite reductions from RSS (which is the subject of *this* post), and from the U. of Washington, and from the U. of Maryland. If the satellite data are as surefire as you believe, why do 4 separate groups come to different conclusions based on the SAME raw data?

    You completely ignore the fact that satellites don’t *measure* lower-troposphere temperature. They measure temperature in very large segments of the atmosphere, and the “T2LT” channel for the lower troposphere is *inferred* from the data of those other channels by attempting to correct for the stratospheric influence and other factors.

    As for the CET record, in the context of *this* post you’re just trying to change the subject. But for your information, the *data* (which are not mine, but are freely available from the web) suggest a rate of increase of 0.5 deg.C per decade. Anyone interested in what the rate of increase presently is should download the data and fit a regression line from, say, 1975 to the present.

    As for “this year coming out at 10.2,” are you completely ignorant of the variation present in year-to-year results? The standard deviation of the residuals from a linear regression to annual averages 1975-2007 is 0.472, so we expect a range of variation of roughly +/- 0.94 deg.C from the long-term trend. I guess you’re just a member of the “it was cold yesterday in London, so global warming must be false” brigade. You’re seriously in need of an education about the statistical nature of noise; this is one of many posts I’ve done on that topic.
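
    The year-to-year scatter point can be made concrete with a small sketch on synthetic data (the trend and noise level below are invented to mimic the residual scatter quoted above; this is not the actual CET record):

```python
import random
import statistics

# Synthetic annual means: an assumed 0.03 C/yr trend plus weather noise
# of sd ~0.47 C, mimicking the residual scatter quoted above (NOT real CET data).
random.seed(1)
years = list(range(1975, 2008))
temps = [9.5 + 0.03 * (y - 1975) + random.gauss(0, 0.47) for y in years]

def ols(xs, ys):
    """Ordinary least-squares slope and intercept."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

slope, intercept = ols(years, temps)
residuals = [t - (intercept + slope * y) for y, t in zip(years, temps)]
sd = statistics.stdev(residuals)

# A single year routinely falls up to ~2 sd away from the trend line,
# so one cool (or warm) year says nothing about the long-term trend.
print(f"residual sd = {sd:.2f} C, ~2 sd range = +/-{2 * sd:.2f} C")
```

    Swapping the freely available CET annual means in for the synthetic `temps` list reproduces the kind of calculation tamino describes.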

    Finally: any flattering reference to Anthony Watts and his blog casts doubt on both your objectivity and your competence. Those interested in the “quality” of the work of Watts and his collaborators should read this and this and this and this and this and this.

  30. Hank Roberts:

    Outeast — your local library can borrow any journal they don’t carry. Ask about “Interlibrary Lo-an.”
    (Hyphen for the spamhallucinator software.)
    The Reference Librarian will have additional suggestions for searching.
    It’s easier to find than complain about.

  31. Lennart:

    Hopefully this is not too much off topic, but I was reading this article by Lomborg and am wondering where he goes wrong:
    http://www.guardian.co.uk/commentisfree/2008/oct/14/climatechange-scienceofclimatechange

    He doesn’t give any sources for his statements about lower temps and sea level rise, so I wonder if they’re correct, and if not, what the correct numbers are. Apart from exact numbers, isn’t his main error that he’s looking too much at short-term variations instead of at the longer-term trends and projections?

  32. Alan Millar:

    “You’re seriously in need of an education about the statistical nature of noise;” (Tamino)

    Ah that wonderfully scientific term ‘Noise’

    Here is the full data set from UAH since satellite data became available.

    1978 – 1994

    http://woodfortrees.org/plot/uah/from:1978/to:1994/trend/plot/uah/from:1978/to:1994

    1995 – 2000

    http://woodfortrees.org/plot/uah/from:1995/to:2000/trend/plot/uah/from:1995/to:2000

    2001 – Date

    http://woodfortrees.org/plot/uah/from:2001/to:2009/trend/plot/uah/from:2001/to:2009

    So Tamino, you often state that ‘noise’ can mask the true signal. So what is the true signal here? The cooling periods 1978 – 1994 and 2001 to date being masked by a warming trend from 1995 -2000? Or a warming trend from 1995 – 2000 being masked by cooling trends from 1978 – 1994 and 2001 – 2009?

    I cannot see why people are trying to kick up the most almighty panic over these trends, especially when this is combined with no visible acceleration in sea level rise to anywhere near the levels predicted by the models.

    Also, these trends this century are set against a background of higher-than-estimated man-made CO2 emissions, and the forcing factor of this effect must already have been at its highest level during the 21st century and can only decline as total atmospheric CO2 increases. After all, we know the climate does not have a linear response to rising CO2 levels, or the Earth would already have suffered a runaway greenhouse effect in its past.

    So how do you justify some of the apocalyptic predictions people seem happy to bandy about?

    Alan

  33. Alan Millar:

    Also just to emphasise this noise versus signal idea.

    Here is the data from 1940, when the CO2 signal started to become strongly apparent, to 1977. I use GISTEMP data because, of course, satellite data is only available from 1978.

    http://woodfortrees.org/plot/gistemp/from:1940/to:1977/trend/plot/gistemp/from:1940/to:1977

    Again what is the real long term signal and what is the noise?

    Alan

  34. Mark:

    Noise is the random variability that the system you’re not interested in is putting on the signal.

    E.g. if you’re trying to listen to someone talking, the engine is making noise.

    However, if you want to find out where the bite is, that “noise” IS your signal.

    So anything that is of a shorter period than climate is noise.

    And what period is “climate”? Well, if you ONLY had temperatures to go on, and “midsummer” was the warmest day (let’s ignore thermal inertia), how many years would you have to sum to find out “the day of midsummer” to within a week? About 50?

    Check it out, go back through the temperature records and see.
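
    Mark’s back-of-the-envelope argument can be tried out in a toy simulation (the sinusoid amplitude, the noise level, and the peak at day 172 are all invented; real station records would behave differently): average N years of noisy daily temperatures and see how close the warmest day of the average lands to the true seasonal peak.

```python
import math
import random

random.seed(2)

def estimated_midsummer(n_years, noise_sd=3.0):
    """Average n_years of noisy daily temperatures; return the warmest day.

    The 'climate' is an invented sinusoid peaking at day 172 with a 10 C
    half-amplitude; the 'weather' is Gaussian noise of sd noise_sd.
    """
    totals = [0.0] * 365
    for _ in range(n_years):
        for day in range(365):
            seasonal = 10.0 * math.cos(2 * math.pi * (day - 172) / 365)
            totals[day] += seasonal + random.gauss(0, noise_sd)
    means = [t / n_years for t in totals]
    return means.index(max(means))  # true peak is day 172 by construction

# The error in the estimated peak day shrinks as more years are averaged:
for n in (1, 10, 50):
    print(n, "years ->", abs(estimated_midsummer(n) - 172), "days off")
```

    With these made-up numbers convergence is slow, because the seasonal curve is nearly flat close to its peak – which is exactly Mark’s point about needing many years of averaging before short-period noise stops dominating.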

  35. Gavin (no, not that one, a different one):

    Alan Millar says “So what is the true signal here? The cooling periods 1978 – 1994 and 2001 to date being masked by a warming trend from 1995 -2000? Or a warming trend from 1995 – 2000 being masked by cooling trends from 1978 – 1994 and 2001 – 2009?”

    One way to decide would be to compare the plausibility of the supposed mechanism behind each hypothesis. One advantage of the theory that there has been a warming trend occasionally obscured in the short term by natural variability is that we have a mechanism that explains why there should be a warming trend (CO2) and mechanisms for explaining the variability (IIRC ENSO is responsible for quite a lot of it). Can you provide a more plausible mechanism explaining a cooling trend occasionally masked by natural variation?

    A long-term trend (e.g. 1978-2008) is much less affected by ENSO-related natural variability, and that shows a clear warming trend:

    http://woodfortrees.org/graph/uah/trend/plot/uah

    The fact that the observed long term trend shows warming strongly suggests that there isn’t an underlying long term cooling trend and the overall warming is unlikely to be due to natural variability.

    BTW isn’t the 1940-1977 cooling attributable to aerosols?

  36. Pat Neuman:

    Re #32, Millar,

    “Let us spend one day as deliberately as Nature, and not be thrown off
    the track by every nutshell and mosquito’s wing that falls on the rails”.
    Henry David Thoreau (1854)

    http://data.giss.nasa.gov/gistemp/

  37. tamino:

    Re: #32 and #33 (Alan Millar)

    You’ve illustrated, far better than I could, just how naively some people approach trend analysis.

    You say “The cooling periods 1978 – 1994 and 2001 to date…” Did you actually bother to do any analysis? What’s next? Will you tout the “cooling period from 8:15 am to 8:23 am”?

    To answer your question: it can’t be called “true signal” if it fails statistical significance tests.

    Here’s a question for you: if 1978-1994 and 2001-present are “cooling periods,” then what are the rates of cooling, AND what are the uncertainties (confidence intervals) associated with those rates? Be careful: I’m not the only professional statistician who comes around here, and you can bet we’ll all be checking your work.
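
    For anyone who wants to attempt tamino’s homework, here is a sketch of the calculation on synthetic monthly anomalies (the trend and noise figures are invented, and this naive OLS interval ignores autocorrelation, which a professional analysis would account for and which widens the interval further):

```python
import random
import statistics

# Invented monthly anomalies over a 16-year period: a small +0.005 C/yr
# trend buried in weather noise of sd 0.25 C (NOT real satellite data).
random.seed(3)
months = [1978 + m / 12 for m in range(16 * 12)]
anoms = [0.005 * (t - 1978) + random.gauss(0, 0.25) for t in months]

def trend_with_ci(xs, ys):
    """OLS slope with an approximate 95% confidence half-width."""
    n = len(xs)
    mx = statistics.mean(xs)
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * y for x, y in zip(xs, ys)) / sxx
    intercept = statistics.mean(ys) - slope * mx
    resid = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
    se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
    return slope, 1.96 * se

slope, half_width = trend_with_ci(months, anoms)
# If the interval straddles zero, calling the period a "cooling period"
# (or a warming one) is not statistically justified.
print(f"trend = {slope:+.4f} +/- {half_width:.4f} C/yr")
```

    The same function applied to any of the woodfortrees segments linked above would show whether the quoted “cooling periods” pass or fail the significance test tamino is asking about.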

  38. Chris G:

    Just thinking on trends.

    Last year a friend of mine, who is a skeptic, proposed that the loss of arctic ice was just part of a 20-year cycle. So I asked him: when was the last time that the NW passage was fully navigable? I think it was more than 20 years ago.

    Permafrost is melting; the timing of animal migrations is changing; and a host of other things that have great inherent stability are changing. I was wondering how much more energy there must be in the arctic to melt so many more cubic miles of ice. It can be estimated, but suffice to say it is a lot. Why does anyone cling to data sets that are open to interpretation to refute the basic trends that are so readily observable?

  39. Alan Millar:

    “BTW isn’t the 1940-1977 cooling attributable to aerosols?” (Gavin, no not that one, a different one)

    Well, there is conjecture that this is so (though it is unproven). If that is a fact, what trend is it actually masking?

    There was a long-term warming trend up to 1940 that did not seem connected to any man-made CO2 emissions.

    http://woodfortrees.org/plot/hadcrut3gl/from:1880/to:1940/trend/plot/hadcrut3gl/from:1880/to:1940

    If the aerosols were subsequently masking this apparently natural trend of recovery from the LIA, then continuing that underlying trend to the present time would account for most of the subsequent observed warming and leave only a small amount due to man’s CO2 emissions, indicating a much lower sensitivity to CO2 than the models hypothesise. This seems a perfectly reasonable supposition.

    It seems a hell of a coincidence to me that this long-term natural trend should cease at precisely the moment aerosols became a factor and man-made CO2 started to drive the long-term climate. But that is what we are required to believe if we are to see the models hit their targets.

    I don’t believe that the Earth’s climate is cooling; most indicators say otherwise. What I find hard to believe is the hypothesised rate of change forecast by the models, and that the Earth’s climate is mainly driven by three or more factors, i.e. CO2, CO2, and CO2!

    Alan

  40. Ray Ladbury:

    Alan Millar, did you never take a course in data analysis? The “signal” is the portion of the measurement you are interested in, while the “noise” is that which interferes with your observing the signal. In this case, the theory predicts a “warming” signal, and the data confirm this to high confidence.

    You are welcome to try and see how it does with a “cooling” signal, although I expect you will have trouble finding supporting data without cherry-picking it. Here’s a clue: “climate” is very noisy on short timescales, and measurement errors across satellite systems increase both noise and systematic error. (Note: A couple of colleagues of mine have direct experience with this in reconstructing galactic cosmic ray measurements and plasma fluxes far from Earth – no one satellite has produced enough data for a decent model, and yet there are little problems going from one dataset to another.) Best advice, though: learn the physics and you’ll understand your signals and noise much better. CO2’s effects last for decades to centuries – it’s a trend that’s pretty easy to spot.

  41. tamino:

    Re: #31 (Lennart)

    See this for a closer look at Lomborg’s “statistics.”

  42. Patrick Hadley:

    Ray Ladbury says that the theory predicts a warming signal and the data confirms this to high confidence.

    It seems a bit odd that no warming trend at all this decade is confirming to high confidence a warming trend. Imagine how confident he would be if temperatures were actually going up rather than down on GISS since Jan 2001.

  43. Alan Millar:

    Gavin, Ray, Tamino.

    I have already answered but my post has disappeared! I will try again to see if my post can beat the censorship on here and whether true discussion is allowed.

    “BTW isn’t the 1940-1977 cooling attributable to aerosols?”

    It is conjectured that this is so. It is not proven, but assuming that it is a fact, what trend is it actually masking?

    This is the long-term trend up to 1940, when it is hypothesised that aerosols started to become a significant masking factor:

    http://woodfortrees.org/plot/hadcrut3gl/from:1880/to:1940/trend/plot/hadcrut3gl/from:1880/to:1940

    Now if you extrapolate this long-standing, underlying but masked natural trend, visible in the Earth’s recovery from the LIA, to the present date, it explains most of the observed warming. It leaves only a fairly small amount of warming attributable to CO2 emissions and therefore indicates a low sensitivity to increased atmospheric CO2.

    This seems an entirely reasonable conjecture. If it is not so, then one has to assume that this long-standing natural trend came to a complete halt at precisely the same moment that aerosols became a factor and that man-made CO2 started to drive the Earth’s long-term climate.

    Unless one can produce an accurate model of the Earth’s climate (and I admit I cannot), how do you prove this with any degree of confidence? (I take comfort in the fact that no one else can either.) Even if I understood completely and could hypothesise the effect of the huge number of factors, correlations and feedback mechanisms that drive the climate, I would not have accurate measurements over any significant timescale to prove this. Technological man has not been around long enough to measure these things. I would guess that we would have to be present and measuring over a couple of glacial and interglacial periods to have great confidence in any models.

    Of course we are talking hundreds of thousands of years for this to happen, a small period indeed when we are talking about the Earth’s climate, but daunting if you are looking for a rush to judgement.

    So do I think the Earth is on a cooling trend? No, I think most indicators suggest a moderate medium-term warming trend that started well before any significant man-made CO2 emissions and continues to this date.

    Alan

  44. Richard Simons:

    On the contrary, most university doors are WIDE OPEN to the public, joe six pack merely needs to get off the couch and wander down to his or her local world class university research libraries and look them up.

    Not all people have easy access to a university library, or even a library of any description. Even when I lived in a major city, it was not easy to get to the university library, but now my nearest physics library is an 8-hour drive away. The closest public library is a little over 3 hours away, in a different jurisdiction, so I am unable to borrow books or use interlibrary loan.

  45. François GM:

    @ Tamino

    What are you talking about? Confidence intervals are estimate intervals for a true endpoint, which could be a human population parameter, the distance to the moon or a temperature trend. You don’t apply confidence intervals to the true endpoint but to proxies or measures used to estimate the true endpoint. A -0.1 temp trend from 2001 to 2008 is a -0.1 temp trend from 2001 to 2008, period. There are NO confidence intervals; it is the true final endpoint, if you define it that way. You may argue that a short-term trend from 2001-2008 is not a proper TRUE endpoint. Fine. You may then apply statistics (but which ones? good luck) to determine the confidence intervals to estimate the TRUE endpoint. But tell us: what is the TRUE endpoint? The temperature trend from 1980 to 2030? From 2000 to 2099? Up to you to cherry-pick, but PLEASE DO NOT apply confidence intervals to the true endpoint.

  46. George Ray:

    Re: #37 (Tamino)

    What would you say is the minimal period of time that one should use to ensure that it contains a “signal” (and not just noise)? And what is the formal statistical method that one would use to ensure that the time frame is long enough?

    Thanks.
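    One way to make the question concrete (a sketch with synthetic data, not the formal method tamino would use): fit an ordinary least-squares line and ask at what window length the slope first exceeds roughly twice its standard error. The trend and noise values below are made up for illustration, and white noise is assumed; real climate noise is autocorrelated, which inflates the standard error and lengthens the required window considerably.

```python
import numpy as np

def years_until_significant(trend=0.02, noise_sd=0.1, max_years=50, seed=0):
    """First window length (in years) at which the OLS slope of a
    synthetic 'trend + white noise' series exceeds twice its standard error."""
    rng = np.random.default_rng(seed)
    t = np.arange(max_years, dtype=float)
    y = trend * t + rng.normal(0.0, noise_sd, max_years)
    for n in range(5, max_years + 1):
        tn, yn = t[:n], y[:n]
        slope, intercept = np.polyfit(tn, yn, 1)   # OLS fit over first n years
        resid = yn - (slope * tn + intercept)
        sxx = ((tn - tn.mean()) ** 2).sum()
        se = np.sqrt((resid ** 2).sum() / (n - 2) / sxx)  # std. error of slope
        if abs(slope) > 2.0 * se:
            return n
    return None

print(years_until_significant())
```

    With a 0.02 degree/yr trend and 0.1 degree white noise this comes out at roughly a decade or two; adding realistic autocorrelation pushes the answer toward the WMO’s 30 years.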

  47. Richard:

    To Figen Mekik,

    My apologies for not understanding your meaning. It was just the part about “While that may have been appropriate in this case in hindsight” that worried me. I thought that the statement implied that you approved of elimination on the basis of the authorship.

    I apologise for getting it wrong in this case.

    Regards

  48. Figen Mekik:

    Richard, no worries at all.

  49. wayne davidson:

    The different Raobcore versions giving different results is quite confusing for me. From what I gather, Raobcore 1.4 should be considered the best, most corrected version; if so, why show the older ones?

    I suggest a flaw in the analysis, which is the apparent segmentation of the upper-air profile. In the Arctic, the real action is happening between 1000 and 650 mb; segmenting upper-air levels at 850, 700, and 500 mb reduces resolution and misses some really important air volumes at higher pressures, where anything can happen between 1000 and 850 mb, 850 and 750 mb, etc. The Raobcore profile looks a little unrealistic, and it is likely that the models are more correct for the lower troposphere than Raobcore.

    However, it’s very curious: the oblate-sun refraction method agrees with the Raobcore calculated trends at the same location, which is for a warmer lower troposphere in virtually every year, but not with the actual radiosonde measurements, which vary a lot, which is strange. This leads me to conclude that Raobcore resolution needs to be increased. But this depends on whether my interpretation of Raobcore is right, i.e. that it is the most accurate, corrected data available.

  50. bobzorunkle:

    As a lurker, it seems to me that Alan Millar makes a good point. Most models seem to have a very large margin of error, so that almost any recent temperature result can be said to fall within the margin and validate the model. It is therefore difficult to ‘disprove’ the theory. But basing the AGW theory upon a short warming trend from 1995-2000 seems a stretch if you can (I think reasonably) argue that the facts also support a contrary view. It comes down to “We have a theory of what causes the warming and you don’t, so we’re right and you’re wrong.” Why are “deniers” castigated for saying “We don’t know what causes the warming” when the AGW theory cannot explain what initiates the warming that CO2 exacerbates?

  51. Gavin (no, not that one, a different one):

    Alan Millar says: “‘BTW isn’t the 1940-1977 cooling attributable to aerosols?’ (Gavin, no, not that one, a different one)

    Well, there is conjecture that that is so (unproven, but). If that is a fact, what trend is it actually masking?”

    As I said in my earlier post, you need to look at the plausibility of the mechanism underlying each hypothesis to decide. If you can provide a plausible mechanism for long-term cooling, or for switching between warming and cooling phases, then go ahead and we can discuss its merits relative to the existing explanations (contained, for instance, in the IPCC reports), which IIRC suggest warming in the first part of the 20th century due to solar forcing, warming due to greenhouse gases since approximately 1970, and a plateau in the middle due to aerosols.

    BTW, it is rather doubtful that any hypothesis can be unequivocally proven based solely on observational data (only refuted). If you are willing to dismiss an explanation as being unproven conjecture, then there is very little that you will be willing to accept on either side of the debate if you are to remain consistent!

  52. Lennart:

    Re: #41 (Tamino)

    Thanks for explaining part of the story. I can’t really judge how correct your explanation is, although it seems plausible to me, but maybe more knowledgeable people can comment? Also, the question about Lomborg’s sea level statements still stands. I read Lomborg’s book Cool It and think he’s very selective and doesn’t take extreme risks into account, but he’s sort of smart, and influential people (can) use him to postpone radical climate policies endlessly. So it seems important to keep following his ‘arguments’ closely and to correct him where he’s wrong.

  53. Pascal:

    Hi Gavin

    In the figure of your factsheet, why did you compare SST and LT tropical temperatures?
    Why not use a mix of SST and land temperatures to compare to the LT temperature?
    I don’t think it’s very important, because ocean makes up a high proportion of the area, but I think it’s worrying because, in principle, the trend of land temperatures is higher than the SST trend.

  54. Barton Paul Levenson:

    Alan Millar writes:

    Ah that wonderfully scientific term ‘Noise’

    It’s a technical term in information theory. A communication is made up of noise and signal.

    So Tamino, you often state that ‘noise’ can mask the true signal. So what is the true signal here? The cooling periods 1978 – 1994 and 2001 to date being masked by a warming trend from 1995 -2000? Or a warming trend from 1995 – 2000 being masked by cooling trends from 1978 – 1994 and 2001 – 2009?

    The World Meteorological Organization defines a climate as mean regional or global weather over a period of 30 years or more. The periods you are mentioning are considerably less than that, thus more likely to represent noise than signal.

    these trends this century are against a background of higher-than-estimated man-made CO2 emissions, and also the forcing factor of this effect must have already been at its highest level during the 21st century and can only decline as total atmospheric CO2 increases.

    What in the world are you talking about? More CO2 in the air leads to greater forcing, not less.

    After all, we know the climate does not have a linear response to rising CO2 levels, or the Earth would have already suffered a runaway greenhouse effect in its past.

    It’s true that the response is not linear. It’s not true that a linear response would have led to a runaway greenhouse effect. Do you understand what a “runaway greenhouse effect” actually is?

    So how do you justify some of the apocalyptic predictions people seem happy to bandy about?

    Extrapolating from trends, I guess.

  55. Mike Tabony:

    Anyone who thinks the climate hasn’t warmed significantly since 1978, as several comments on this blog imply, hasn’t been outside enough. If you’re trapped in the office and haven’t had any direct observations of nature in the last 30 years, ask someone who has. Talk to a local forester or farmer who has been “in the field” for a couple of decades. They can add to your knowledge base considerably.

  56. Barton Paul Levenson:

    Alan Millar writes:

    What I find hard to believe is the hypothesised rate of change forecast by the models, and that the Earth’s climate is mainly driven by three or more factors, i.e. CO2, CO2, and CO2!

    Do you know what a straw man argument is? No climate scientist has ever said CO2 was the only influence on climate. Where are you getting your information? I’m guessing denialist web sites. It certainly isn’t anything you’d get from a climatology textbook.

  57. Barton Paul Levenson:

    Patrick Hadley writes:

    It seems a bit odd that no warming trend at all this decade is confirming to high confidence a warming trend.

    How many times does this urban legend need to be refuted?

    There is not a recent period of “no warming.” Eliminate the spaces and look here:

    http://www.g e o c i t i e s.com/bpl1960/Ball.html

    http://www.g e o c i t i e s.com/bpl1960/Reber.html

  58. Lauri:

    RE: #25
    The results (surely worth a glance) reflect all known temperature perturbations, including the 1988 peak. They are published and plotted every month, (Google Global Warming at a Glance), and overlaid onto the GISS surface data on which Anthony Watts (Watts up with That) labours so strenuously.

    Global Warming at a Glance takes you to JunkScience.com. Oh, my, what junk it is!
    They make the reader compare an anomaly of satellite observations with base period 1978-2008 (?, they just say ‘average’) with a surface temperature anomaly from GISS with base period 1951-1980!
    No wonder the values differ.
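    Lauri’s point can be made concrete: anomaly series are only comparable after being put on a common baseline, i.e. after subtracting each series’ mean over the same reference window. A toy sketch with made-up numbers (not real GISS or satellite data; the function name is hypothetical):

```python
import numpy as np

def rebaseline(anom, years, ref_start, ref_end):
    """Shift an anomaly series so its mean over [ref_start, ref_end] is zero."""
    mask = (years >= ref_start) & (years <= ref_end)
    return anom - anom[mask].mean()

years = np.arange(1979, 2009)
# Two hypothetical series of the SAME temperatures, expressed against
# different base periods, so their raw values differ by a constant offset.
series_a = 0.015 * (years - 1979)   # anomalies vs. a warm, recent baseline
series_b = series_a - 0.25          # same data vs. a colder, earlier baseline

# Raw values disagree by a constant 0.25...
print(series_a[-1] - series_b[-1])
# ...but on a common 1979-1988 baseline they agree exactly.
a = rebaseline(series_a, years, 1979, 1988)
b = rebaseline(series_b, years, 1979, 1988)
print(np.allclose(a, b))  # True
```

    The apparent disagreement between the JunkScience overlay and GISS is exactly this kind of constant offset, not a difference in trend.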

  59. Ray Ladbury:

    In #50, Bobzorunkle falls for the fallacy that the warming trend is short.

    Uh, Bob, look at this graph:

    http://www.globalwarmingart.com/wiki/Image:Instrumental_Temperature_Record_png

    Now, what I see is a pretty consistent trend of warming from about 1900 through the present. Yes, there are drops and rises, but the thing about climate–and especially CO2’s influence on climate–is that the longer the time series, the more it is evident.

    Now look at this one:

    http://www.globalwarmingart.com/wiki/Image:Instrumental_Temperature_Record_png

    Doesn’t the 20th century look a bit anomalous to you? What Alan Millar has is a conservative talking point–if you chop up the temperature record into tiny little bits and don’t look at the whole thing, it doesn’t look so scary.

  60. Ray Ladbury:

    Alan Millar posits that a “natural trend” is responsible for climate change. So, Alan, how does this “natural trend” create energy, and can we harvest it to solve all of our energy needs? Moreover, look closely at your temperature graph: the first half shows no warming. What is more, if you extrapolate to the left and right, the dichotomy between the 19th and 20th centuries becomes more dramatic.

    http://www.globalwarmingart.com/wiki/Image:Instrumental_Temperature_Record_png

    Now, since the LIA was a 17th-18th century phenomenon, wouldn’t you expect the “recovery” to be most rapid early on, not to accelerate the further away we get? I anxiously await your resolution of this little problem with your theory, as well as the amazing new discovery of how to create energy that you posit.

  61. Dean:

    #59

    Ray, the problem here is that the first half of the century is attributed to natural causes, the second half to CO2. Why is this a problem? Well, we hear that the 1980-present warming is “unprecedented” when in fact it’s very close to the warming from 1910-1940, which supposedly is natural. This is easily verified if you use the HadCRU data. So how can this be “unprecedented” when it’s happened before?

    Also, what drove the warming during 1910-1940? The sun? Not according to Leif Svalgaard. He has theorized that TSI has had no appreciable change since the 1700s. If he’s correct, then our understanding of the role in TSI on climate is fundamentally flawed.

    One of the main areas where analyses get into trouble is in not understanding our assumptions. We assume 1950-1980 was driven by aerosols; we don’t “know”. We assume the natural phenomena are well understood; we don’t “know”. Each of these assumptions carries with it a likelihood of being correct. As an example, we do have a pretty good understanding of how large volcanic eruptions affect global temperatures.

    Another example: no climate model predicted the melt-off of the Arctic last year. Not a single one. Even if we assume AGW had an impact, it would not have melted so much of the Arctic. Therefore, something “natural” overcame/supplemented the warming and caused the ice to melt. What was this natural cause?

    Also, on the subject of noise: how can the short-term variations in temperature be considered “noise”? Noise has no impact on the overall trend (it averages out to zero over long time frames). In this case, however, variations in temperature, when averaged, ARE variations in climate! To say that variations in temperature are purely noise is to de-link temperature from climate. You can assume it’s noise and show that the variation isn’t out of the typical range of the noise (Tamino has done just this on his blog), but you cannot claim for certain that the variation IS noise. You can gain confidence by trying to rule out other causes, but again, that’s only as good as your knowledge of said causes.

    As Jimmy Buffett says, “don’t ever forget that you just may wind up being wrong”.

  62. Pat Neuman:

    ENSO indexes suggest:

    La Nina drove the cool 1950s – mid 1970s.

    El Nino drove the warm and humid mid 1930s – 1940s.

    What drove the warm and dry dust bowl years in early 1930s?

    But what drove the warming during 1910-1940?

  63. counters:

    #61:

    Another example, no climate model predicted the melt-off of the arctic last year. not a single one.

    Pray tell, what climate model would be used to make a point prediction about the near future? I work with the CCSM, and I can’t imagine using it to make a short-term prediction about a seasonal fluctuation in a parameter such as Arctic ice.

    Now, if what you really mean is that the models used to predict ice-sheet fluctuations were off, then we can start talking from there. The simple answer is that the anomalous melt last year can be attributed to weather conditions that tended to amplify melt. It was sort of a “perfect storm” convergence of parameters which amplified melt – a relatively warm season, coupled with wind patterns which tended to push ice into warmer waters, increasing the probability that it would melt.

    I don’t think anyone here will defend the models as “perfect.” If they were perfect, then some of us would be out of jobs, because there wouldn’t be any work to be done on them. Instead, models are just as their name implies. They’re useful tools which do a large number of calculations for us. There are bound to be uncertainties and poor parameterizations in them; however, as time goes on, we are ironing out those deficiencies intrinsic to the model or developing the understanding necessary to cut them out as smoothly as possible.

    Last year’s melt isn’t some death-blow to the idea of modeling, particularly because there is an accepted hypothesis (the pro-icemelt weather conditions) which supplements the model predictions to explain what we observed.

  64. bobzorunkle:

    #59 – Thanks Ray, but the graph suggests to me a 100-year cycle where temperature anomalies were 0.2 to 0.4 degrees below (whatever the baseline was) for the 100 years from 1850 to 1975, and then perhaps a new cycle where the anomalies flip to 0.2 to 0.4 above. And since this graph is based upon the instrumental temps, the recent warm anomalies would be even smaller if the graph used the satellite temps and were continued to 2008. I’m not denying there has been warming – just saying that instead of closing our minds in favor of the AGW/CO2 arguments, there could be other “natural” factors we haven’t yet properly assessed.

  65. Chris Schoneveld:

    I can’t see anything wrong or implausible with Alan Millar’s suggestion in #39 that we are riding on a long-term natural warming trend as we come out of the LIA, a trend temporarily interrupted in the period 1945-1975 due to aerosols and slightly accelerated by CO2 emissions (but with far less climate sensitivity than assumed) thereafter. Indeed, as long as we are not able to pinpoint the cause of the warming between 1910 and 1945, it is quite conceivable that this natural trend continues to play a role in today’s climate change/global warming.

  66. Ray Ladbury:

    Dean, your attribution of early 20th century warming to natural forcers is false. Look at this graphic:
    http://www.globalwarmingart.com/wiki/Image:Climate_Change_Attribution_png

    Yes, insolation did increase about 1900. However, greenhouse gases also play an increasing role that runs a close second to insolation. It is only in about 1960 that ghgs predominate, but to dismiss them entirely is inappropriate. In any case, to average across the boundary of the 19th/20th centuries is incorrect, as there is clearly a change in the importance of the forcers in the middle of this period.
    Your contention on noise doesn’t make sense; it is only by averaging over short-term variations that the long-term trend (climate) emerges. Of course you can’t say whether any short-term trend is only noise – that’s because it’s a short-term trend! And again, what do you propose as a “cause”? Climate science has a model that explains most of the observed trends (real trends, not short-term variation masquerading in the fevered denialist mind as a trend) quite well. The denialists have bupkis, which, incidentally, is why they don’t publish.

  67. Rod B:

    Alan (32), a minor (??) clarification. Forcing from increased CO2 is greater than a linear response.

    [Response: No it isn’t. It’s logarithmic which goes up much more slowly than linear. – gavin]

  68. Patrick Hadley:

    From around 1940 to 1975 there was a period of about 35 years with no long-term warming trend. In 1975 a strong warming trend began, and in 1988 James Hansen went to Congress and made a big deal about a warming period of just 13 years. If you read his talk you will see he talked about the statistical significance of the near 0.4C anomaly of a period of just one year. I cannot find any contemporary reports of climate scientists condemning him for drawing long-term conclusions about climate change based on a warming period of just 13 years.

  69. Hank Roberts:

    > no single model
    Except several. Look up Dr. Bitz’s more recent work in Google Scholar after reviewing the older topic here:
    http://www.realclimate.org/index.php/archives/2007/01/arctic-sea-ice-decline-in-the-21st-century/

  70. Rod B:

    Mike (55), that’s the Ted Turner school of analysis: “Have you been outside today? It’s hotter than hell out there!”, which I find totally unconvincing.

  71. Rod B:

    Gavin (re 67), I think that is incorrect. I don’t recall the exact figure, but the increase is not just logarithmic but 4-5 times the ln of the concentration ratio (or the ln of the ratio to the 4th or 5th power), which increases much faster (initially) than linear until you get to a concentration ratio of 10-15. Is this not accurate?

    [Response: The forcing is ~ 5.35 log(CO2/CO2_orig). If you linearise at today’s concentration, the forcing would be approximated by 0.014*(delta CO2). So at 560ppmv it would be an increase of 2.52 W/m2. But actually the additional forcing is 2.07 W/m2 – i.e. less. Therefore forcing goes up less than linearly. – gavin]

  72. Chris G:

    Dean, et al,

    I’m not following your logic. The modelers have indicated the climate is heading in a certain direction, based on increases in CO2 and other factors. Some sample data indicate we might be further in that direction than the modelers have estimated, and you use that as evidence to reject the prediction of what direction we are heading, or what is causing the movement?

  73. Ray Ladbury:

    Bobzorunkle says “I’m not denying there has been warming – just saying that instead of closing our minds in favor of the AGW/CO2 arguments, there could be other “natural” factors we haven’t yet properly assessed.”

    Gee, Bob, and what might those “natural” factors be?

    [crickets chirping]

    Nothing? It’s OK, nobody else has come up with anything concrete and credible either. OK, how about we do science instead. You know, make a hypothesis and then look and see if the data support it. Climate science has done that, and the data are pretty darned supportive of anthropogenic causation. We’re waiting for an alternative hypothesis.

    Patrick Hadley–Nobody condemned Hansen because he was doing SCIENCE. You know, looking at trends since the dawn of the industrial age, seeing if they were consistent with his models, and finding that both those and the modern data (however limited) were consistent, he made a prediction. Anybody else you know of got a proven 20-year track record of climate prediction?

  74. Jim Eager:

    Re Patrick Hadley @68: “From around 1940 to 1975 there was a period of about 35 years with no long term warming trend.”

    False. As has been discussed here repeatedly, there was a sharp drop in global mean temperatures between roughly 1945 and 1950, which was followed by a shallow warming trend until roughly 1975, when the slope of the warming trend increased sharply:

    http://www.globalwarmingart.com/wiki/Image:Instrumental_Temperature_Record_png

    Why rely on faulty analysis or repeat deliberate disinformation when the facts are readily available?

  75. Mark:

    Or (re #73) the natural factor is our natural ability to think of how to burn oil and CO2’s natural ability to cause GW.

  76. Jim Eager:

    Re Ray Ladbury @73: “Gee, Bob, and what might those “natural” factors be?”

    And Bob, don’t forget to explain just how these natural factors would negate the known and demonstrated radiative physics of greenhouse gases.

  77. Rod B:

    Gavin (re 71), maybe I’m not grasping the linearization part. What is the formula that arrives at F = 0.014*(delta_CO2)? I.e., how is the 0.014 derived? Are the units of delta_CO2 raw ppmv? Is F in watts/meter^2? I was using straight linear math, as in when X doubles, so does Y, but it now occurs to me this gives me a big units problem. Similarly, where does the 5.35 come from, mathematical formula or laboratory observation? I assume the units of the “5.35” number must be watts/meter^2; how does that happen?

    0.014*280 = 3.92 (280 is delta_CO2 = 560 – 280);
    5.35Ln(2) = 3.71.
    Where am I going wrong here?

    Thanks for any help.

    [Response: This stuff is not hard to find. The formula F=5.35*log(CO2/CO2_orig) comes from a fit to line-by-line calculations in Myhre et al (1998). If you linearise at 380 ppm (noting that d(log(C))/dC = 1/C), the linear formula is F=(5.35/380)*(CO2-380). Plug in 560 ppmv to get what I said above and compare with the full formula. Whatever value you pick, the log formula always gives a smaller value than the linear one. – gavin]
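    For anyone wanting to check Gavin’s arithmetic, here is a quick sketch of the Myhre et al. fit against its linearization at 380 ppm (the 0.014 slope is 5.35/380 rounded; the unrounded slope gives 2.53 rather than Gavin’s 2.52 W/m2):

```python
import math

F0 = 5.35  # W/m^2 scaling in the Myhre et al. (1998) fit

def forcing_log(c, c_ref=380.0):
    """Additional forcing (W/m^2) going from c_ref to c ppmv: F0 * ln(c/c_ref)."""
    return F0 * math.log(c / c_ref)

def forcing_linear(c, c_ref=380.0):
    """Linearization of the log formula at c_ref, with slope F0/c_ref."""
    return (F0 / c_ref) * (c - c_ref)

# Gavin's example: going from 380 to 560 ppmv
print(round(forcing_log(560), 2))     # 2.07 W/m^2 from the log formula
print(round(forcing_linear(560), 2))  # 2.53 W/m^2 from the linear approximation
```

    Because the logarithm is concave, the linear approximation always overstates the forcing above the point of linearization, which is exactly Gavin’s point.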

  78. Chris:

    #57 Barton Paul Levenson

    “How many times does this urban legend need to be refuted?
    There is not a recent period of “no warming.” Eliminate the spaces and look here: …..”

    I agree that the lack of warming since 1998 is often over-emphasised. However, those who mention it do have a point. Here’s one way to rationalise it:
    (Temperatures from Hadley http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt )

    By 1990 the global temperature anomaly had already reached +0.25C in an ENSO-neutral year (average ONI ~ +0.25; the threshold required for a weak El Nino is +0.5; see http://www.cgd.ucar.edu/cas/ENSO/enso25.jpg ).
    Also the AMO was starting to switch to positive phase, just as it did in the late 1920s incidentally:
    http://en.wikipedia.org/wiki/Image:Amo_timeseries_1856-present.svg

    In the first 7 months of 1991 the temperature anomaly was +0.26C, and a strong El Nino was in the pipeline, later to peak in 1992. We will never know how warm temperatures *should* have got by 1995, when ONI finally went below -0.1 for the first time since 1989….. +0.45C perhaps? – see later…
    Instead Mt Pinatubo erupted in summer 1991, and global temperatures were lowered as a result for several years.

    Finally the knock-on effects wore off and a return to a particularly strong El Nino culminated in the record global temperatures of 1998. Let’s note the average global temperature in the 2-year period 1997-1998: +0.45C.

    Then we had 2 years of La Nina, keeping temperatures down at +0.28C for the period 1999-2000.

    From second half of 2001 to first half of 2007
    (see also http://www.cgd.ucar.edu/cas/ENSO/enso26.jpg )
    ONI showed only El Nino (weak to moderate) or neutral (apart from 3 months where it crossed the threshold into weak La Nina Dec 05 to Feb 06). Thus global temperature anomalies not surprisingly went higher again: +0.44C for the period 2001-7, slightly below the +0.45C noted earlier for the period 1997-8.

    Therefore, I think it would be reasonable to suggest that the underlying trend without Pinatubo was a rapid warming in the 1990s from 0.25C in 1990 to ~0.45C by 2000, then a levelling off where the 1997-8 average was never substantially eclipsed (highest was +0.48C in 2005), even with years of broadly ENSO-positive conditions and no massive volcano-induced cooling earlier in the decade (unlike in the 1980s and 1990s).

    Unlike others, I’m not saying that because of all this, “global warming stopped in 1998, therefore CO2 has little effect”. All I’m saying is that it’s a reasonable way of looking at the facts, and trying too strenuously to refute it is perhaps not the best way to convert reasonable newbies to the subject to consensus views on global warming.

    Also according to HadCrut, we’ve seen a global increase in temperature of ~0.4C from the peak of the early 1940s (last time strongly +ve AMO and PDO http://en.wikipedia.org/wiki/Image:Pdoindex_1900-2006.gif coincided) to the recent peak. If I’m correct in saying that the bulk of that increase was up to the late 90s, and *should* have been by the mid-90s or even earlier, then this leaves a tiny bit more room for changes in solar forcing – since it’s only after the solar max of the early 90s that the trend in solar activity from 1940 took a dive http://en.wikipedia.org/wiki/Image:Temp-sunspot-co2.svg#file. Let’s take a conservative +0.1C for solar forcing. This leaves 0.3C to be explained by natural variation in 50 years. Urban heat effects? Conservative +0.1C beyond what is already factored into the temperature series/models? That leaves about 0.2C for CO2. For the sake of argument.

    I think what happens in the next few years could be very telling. At the moment, I genuinely don’t know how to call it. But with every year that the global temperature fails to break new ground (say +0.50 on the Hadley measure) the more receptive I will be to arguments for lower-than-consensus climate sensitivities.

    Note: please don’t accuse me of not seeing the wood for the trees. I got that in spades on the sea ice thread when I argued that 2008 was not on a surefire course to surpass 2007’s record, and rather was likely set for an earlier and stronger recovery – yet I was correct (and incidentally, ice extent is now ~1.5 million km2 more than a year ago).

  79. Steve Bloom:

    OT: This interesting new paper (public access) claims to have found evidence for significant atmospheric CO2 fluctuations from 1000 to 1500 CE (and so including the MWP). They think the scale of these changes is enough to account for some of the climate change previously ascribed to solar and volcanic forcings (although note that the recent trend of the science has been to heavily discount the importance of solar forcing for that period), and that ocean changes drove the CO2 since there is a correlation with North Atlantic SSTs. (IIRC at least one of these authors [Wagner] has done some past similar work that used the same proxy [leaf stomata] to draw some conclusions that turned out to be unreasonable.)

  80. Hank Roberts:

    > I argued that 2008 was not on a surefire course
    > to surpass 2007’s record

    So did Connolley. So did others. It’s not the ideas, it’s the verbose, argumentative, confusing presentation.

  81. Chris:

    I note that even though the 2007/8 La Nina was (at least judging by ONI data) not as deep as, and certainly not nearly as long as, the 1999/2000 back-to-back La Nina, the three-month-average tropical LT temperature anomaly dipped even lower in 2008, to -0.54C (Mar-May), which was lower than at any time during the satellite era apart from during the strongest La Nina of the era, i.e. 1988/1989. The recent tropical LT anomalies have been way below the surface temperature as far as I can tell, and thus ought to bring down the tropical tropospheric temperature trend relative to the surface.

    http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt

  82. bobzorunkle:

    #73 – “Gee, Bob, and what might those “natural” factors be?”

    I am not a climate scientist so I don’t have a clue. All I know is that, as I said in my first post, it doesn’t help for AGW theorists to keep saying “We’ve got a theory and you don’t, so we’re right and you’re wrong.”

    In the past, various theories were put forward on the causes of earthquakes: the Gods are angry; a herd of elephants; meteors striking the earth; volcanic activity; plate tectonics; elastic rebound theory; and so on. At any given point in time, a theory was accepted by a consensus – until new information came to light. Just because the volcanic activity theory contained some “science” did not make it correct. Similarly, just because the CO2 theory is based on valid science, it may not correctly explain the cause of the recent warming. A new theory may well come along which supplants CO2 as the most important cause of the warming of the planet. It does not make sense to bend ourselves out of shape trying to justify a theory if the facts don’t fit. If the current models didn’t predict the levelling off of the warming, or the unusual melting of the Arctic icecap, then shouldn’t a reasonable scientist acknowledge that confidence in those models should decline? Even a little bit?

  83. Steve Bloom:

    Re #78: “Urban heat effects? Conservative +0.1C beyond what is already factored into the temperature series/models?” Tch.

    Unscientific guesswork about short-term trends is boring unless you’re willing to bet on it. I believe there are still open offers on that here and here. If you just want to speculate, the Watts Up With That blog would be a better place.

  84. RichardC:

    78 Chris said, ” I argued that 2008 was not on a surefire course to surpass 2007’s record”

    Arctic sea ice set a new record low in 2008. Ice has three dimensions, not two. Extent is an improper metric, especially since thickness is far more important than either of the other two dimensions. Thickness is inversely proportional to salt content, so it trumps all. 2008 set a new record for thinness, and also for lowest volume. Extent is a red herring which has been reported ONLY because it used to be difficult to measure thickness and volume. (NSIDC says that is changing right now)

    http://nsidc.org/images/arcticseaicenews/20080924_Figure3.jpg

    Age is an excellent proxy for thickness and inverse salt content. First year ice is almost irrelevant – movie set ice is a good description, while older ice tells the tale. Even with a cold winter and cool summer, arctic ice STILL declined to a new record low. Age^2 * extent is a reasonable first order approximation, at least for the first 3 years. After that, the increase gets more linear. Note that 2007 was mostly >2nd year ice, while 2008 was mostly 1st year ice.

  85. Ike Solem:

    RichardC’s point about the ice volume reaching a new minimum is worth repeating. It’s also true that this summer saw the most rapid rates of ice retreat yet:

    “Arctic Saw Fastest August Sea Ice Retreat On Record, NASA Data Show”
    ScienceDaily Sep. 28, 2008

    Recall also that the record low area coverage last summer was partly due to the wind/ocean circulation pattern at the time. This was used by some bloggers to claim that “wind was responsible”, but if climate skeptics are known for anything, it’s for oversimplifying complex issues.

    Thinner, young sea ice is more susceptible to being compressed by wind than is older, thicker sea ice. The discussion of the role wind played in the record 2007 low is here:

    http://www.nasa.gov/vision/earth/lookingatearth/quikscat-20071001.html

    Nghiem said the rapid decline in winter perennial ice the past two years was caused by unusual winds. “Unusual atmospheric conditions set up wind patterns that compressed the sea ice, loaded it into the Transpolar Drift Stream and then sped its flow out of the Arctic,” he said. When that sea ice reached lower latitudes, it rapidly melted in the warmer waters.

    This kind of complex interaction should be expected – relative to the larger question of arctic warming and land-sea temperature trends, such complexities create “noise” – even though the so-called noise is itself of interest and is not all random. Like many other climate issues, the complexity lies in the interaction between the atmospheric and oceanic systems.

    It is harder to measure ice volume, but this year was probably also a record low for sea ice volume:

    http://www.sciencedaily.com/releases/2008/10/081002172436.htm

    “Warm ocean waters helped contribute to ice losses this year, pushing the already thin ice pack over the edge,” said Meier. “In fact, preliminary data indicate that 2008 probably represents the lowest volume of Arctic sea ice on record, partly because less multiyear ice is surviving now and the remaining ice is so thin.”

    Given that Arctic warming is predicted to lead other signs of global warming, what are other regional indicators?

    1) 99% of Alaskan glaciers are retreating or stagnating. Yes, a few have seen record snowfalls, but those are in Pacific coastal regions, where more moisture is evaporating off the Pacific – which is in line with predictions about increasing water vapor, the main feedback amplifying CO2-induced warming.

    2) The largest ice shelf (Ward Hunt) in the N. Hemisphere fractured again this past year. That’s after losing much of its mass in 2003. Similarly, in 2006, Ellesmere Island’s Ayles ice shelf collapsed. Warming ocean and atmospheric temperatures appear to be why these shelves are steadily collapsing. Again, this is due to a complex interaction of factors:

    “The summer of 2005, Copland said, had the lowest amount of Arctic Ocean sea ice ever recorded. Pack ice normally buffers the ice shelf from ocean movement. But with little ice and strong offshore winds, waves were able to batter the ice shelf, weakening it.

    In addition, 2005 was the warmest summer recorded on Ellesmere Island since 1960, with temperatures about 3.8°F (2.1°C) above average.”

    3) You also have the warming ocean and atmosphere around Greenland, and the faster-than-predicted glacial outflow there. Last time CO2 levels were near today’s, Greenland was mostly ice-free. In several hundred or a thousand years, we should expect those conditions to return.

    4) The permafrost, like the ice shelves, is buffered from temperature changes by arctic sea ice. As the ice goes, ocean warming will have a larger effect inland, leading to faster permafrost melting.

    So, all indicators are that Arctic warming is proceeding as predicted, only somewhat faster than the climate models suggested. The faster we add fossil CO2 to the atmosphere, the faster the warming will proceed.

  86. Barton Paul Levenson:

    bobzorunkle writes:

    Why are “deniers” castigated for saying “We don’t know what causes the warming” when the AGW theory cannot explain what initiates the warming that CO2 exacerbates?

    What are you talking about? Are you talking about a natural deglaciation? We know what causes that — changes in the distribution of sunlight over Earth’s surface. Google “Milankovic cycles.” That isn’t what’s happening now.

  87. Barton Paul Levenson:

    Chris Schoneveld writes:

    I can’t see anything wrong or implausible with Allan Millar’s suggestion #39 that we are riding on a long-term natural warming trend as we are coming out of the LIA, a trend temporarily interrupted in the period 1945-1975 due to aerosols and slightly accelerated by CO2 emissions (but with far less climate sensitivity than assumed) thereafter.

    What is the mechanism which causes “coming out of the LIA?” Where is the energy coming from? How does it work?

  88. Barton Paul Levenson:

    chris writes:

    Unlike others, I’m not saying that because of all this, “global warming stopped in 1998 therefore CO2 has little effect”. All I’m saying is that it’s a reasonable way of looking at the facts,

    No. It isn’t reasonable to believe something that is demonstrably wrong. Did you look at the links I provided?

  89. Chris:

    #80 Hank Roberts: “So did Connolley. So did others.”
    You’re confusing the issue. It was when the ice extent trend was pointing steeply downwards in early Sep that I was the only one on North Pole Notes to argue it would turn around quickly (you’re implying to people that I was instead talking about pre-season projections.)
    At least you’re concise, accusing me of “verbose argumentative confusing presentation”. I can also do concise: you’re wrong, and are gratuitous in your use of unhelpful, dismissive and hypocritical adjectives.

    #83 Steve Bloom. “Unscientific guesswork about short-term trends is boring unless you’re willing to bet on it.” Your comment does not connect with your quote from my post. Attributing 0.1C of the surface record increase to urban heat effects (from micro to macro) in 50-60 years is scientifically justifiable without guesswork, and does not refer to a short-term trend. I’m sorry if it’s boring – only as boring, perhaps, as the unscientific guesswork about the extent of aerosol-induced cooling post-1940s, which many are more than happy to indulge in. (Note: I’m not saying that UH necessarily had the effect suggested, or that aerosols haven’t had a significant effect, before those skimming my words jump into attack autopilot.)

    #84 RichardC. Should really take this to North Pole Notes. Volume has the virtues of (1) still being very hard to measure despite what you say
    http://scienceblogs.com/stoat/2008/10/sea_ice_thinner.php
    and (2) being the most lagged indicator of all of NH warmth i.e. it should be the last indicator to turn around in any longer-term Arctic ice recovery.
    Remember that on the map you link to, despite the distracting psychedelic colours, in fact ALL the ice on the RH side will start 2009 as multiyear ice according to the NSIDC’s definition:
    http://nsidc.org/cgi-bin/words/word.pl?multiyear%20ice
    And if there’s less 4/6+ year old ice? Why do you think the average sea ice thickness in the Arctic never got far above 3m in the twentieth century? (And the average age of all ice never got above single digits) Because most of the thickness increases come in the first couple of years, and most old ice is “old” because it is nearing the end of its natural cycle (where it thins to zero.)

  90. Mauri Pelto:

    Chris, your comparison of temperature and specific La Nina and El Nino conditions is interesting. I agree that some of the elegance is lost in the length of the post. Temperature is not the only measure to consider in examining climate sensitivity. Sea ice extent and Greenland Ice Sheet melt extent are additional measures that are exceeding model expectations.

  91. Ray Ladbury:

    Bobzorunkle says: “In the past, various theories were put forward on the causes of earthquakes: the Gods are angry; herd of elephants; meteors striking the earth; volcano activity; plate tectonics; elastic rebound theory; etc., etc.. At any given point in time, a theory was accepted by a consensus – until new information came to light.”

    Oh, come on. This is absolute horse puckey. Find me a peer-reviewed paper that attributes earthquakes to angry Gods! At least do us the courtesy of taking the discussion seriously and sticking to SCIENCE. And while you are at it, maybe learn the difference between climate and weather.

    Pray, where are those inconvenient facts that current theory doesn’t explain? It certainly has no trouble showing that if you have a La Nina, temperatures on average will fall. As to the melting of sea ice, the theory has predicted summer melt would increase on average over time. It has. Here’s a clue: It is called GLOBAL CLIMATE CHANGE for a reason – all three words are important.

  92. Hank Roberts:

    Thanks Mauri, for help focusing on the interesting question in Chris’s post. Who’s working in that field, do you have pointers?

  93. Ike Solem:

    Always worth remembering the American Petroleum Institute’s Global Climate Science Communications Plan (1998):

    “GCSCT members who contributed to the development of the plan are A. John Adams, John Adams Associates; Candace Crandall, Science and Environmental Policy Project; David Rothbard, Committee For A Constructive Tomorrow; Jeffrey Salmon, The Marshall Institute; Lee Garrigan, environmental issues Council; Lynn Bouchey and Myron Ebell, Frontiers of Freedom; Peter Cleary, Americans for Tax Reform; Randy Randol, Exxon Corp.; Robert Gehri, The Southern Company; Sharon Kneiss, Chevron Corp; Steve Milloy, The Advancement of Sound Science Coalition; and Joseph Walker, American Petroleum Institute.”

    A few quotes…
    Global Climate Science Communications Action Plan

    Project Goal: A majority of the American public, including industry leadership, recognizes that significant uncertainties exist in climate science, and therefore raises questions among those (e.g. Congress) who chart the future U.S. course on global climate change.

    Here, the goal is to interject doubt into the discussion. If the target was AIDS, one could point out that HIV might not really be what’s responsible – maybe it’s due to genetic factors. We know now that genetics plays a role in many diseases, and this wasn’t understood 200 years ago, so maybe HIV has nothing to do with AIDS?

    Progress will be measured toward the goal. A measurement of the public’s perspective on climate science will be taken before the plan is launched, and the same measurement will be taken at one or more as-yet-to-be-determined intervals as the plan is implemented,

    This is not normal advertising or PR; this is an attempt at large-scale modification of public perception. It’s worth noting that a well-educated and scientifically literate population is less likely to fall for the claims of tobacco science experts – unless they think they are listening to “independent objective experts.” Thus, a major aim of PR firms is to cultivate apparently independent experts who will cooperate with PR efforts. The American Petroleum Institute’s lead PR firm is Edelman, which likely used its experience defeating “secondhand smoke” regulations as a selling point.

    Victory Will Be Achieved When

    * Average citizens “understand” (recognize) uncertainties in climate science; recognition of uncertainties becomes part of the “conventional wisdom”
    * Media “understands” (recognizes) uncertainties in climate science
    * Media coverage reflects balance on climate science and recognition of the validity of viewpoints that challenge the current “conventional wisdom”
    * Industry senior leadership understands uncertainties in climate science, making them stronger ambassadors to those who shape climate policy
    * Those promoting the Kyoto treaty on the basis of extant science appear to be out of touch with reality.

    So, that’s the agenda of the oil and coal industry, their financial backers, their trade association, the API, and their PR firm Edelman. Edelman maintains a stable of dozens of bloggers to push their agenda on blogs, and they are the recipients of a $100 million contract from the American Petroleum Institute to “clean up the industry’s image.”

    This type of blanket propaganda effort has been going on for a long time, but for some reason the U.S. press is very reluctant to write any stories about it. The memo in question lists tactics such as:

    Tactics:
    Establish a Global Climate Science Data Center. The GCSDC will be established in Washington as a non-profit educational foundation with an advisory board of respected climate scientists…

    Organize under the GCSDC a “Science Education Task Group” that will serve as the point of outreach to the National Science Teachers Association (NSTA) and other influential science education organizations. Work with NSTA to develop school materials that present a credible, balanced picture of climate science for use in classrooms nationwide….

    Notice that the NSTA has shown up here on RealClimate before, with regard to their refusal to distribute free copies of “An Inconvenient Truth” to science teachers; they’re also heavily funded by ExxonMobil.

    In short, it’s a campaign of lies that targets children, among others, (sounds like tobacco, doesn’t it?) managed by Edelman and the API, all intended to sway public, media and Congressional opinion in order to prevent any laws from being passed that will reduce fossil fuel combustion.

  94. Jim Eager:

    Re bobzorunkle @82: “I am not a climate scientist so I don’t have a clue.”

    Points for being honest, at least.

    “just because the CO2 theory is based on valid science, it may not correctly explain the cause of the recent warming. A new theory may well come along which supplants CO2 as the most important cause in warming of the planet.”

    Then again, it may not. After all, theories don’t just “come along”, they emerge from sustained study of the actual evidence. All of it.

    In the meantime we know, with certainty, that increasing atmospheric CO2 will make it warmer. We also know, with certainty, that will still be the case if and when your as-yet undiscovered and unexplained “natural” factor is revealed and explained.

    Why do deniers so fervently pin their hopes on as yet undiscovered and unexplained “natural” factors when reality is staring them in the face?

  95. Pat Neuman:

    In #78 Chris wrote: … “But with every year that the global temperature fails to break new ground (say +0.50 on the Hadley measure) the more receptive I will be to arguments for lower-than-consensus climate sensitivities”. …

    Chris, I don’t think anyone should be receptive to that, for several reasons. For example, S.H. temperatures for Jan-Jun 2008 had marginal positive anomalies but large (+0.76, +0.85, +0.59) positive anomalies for Jul-Sept (Sept was the highest on record for the S.H. according to NASA).

    http://data.giss.nasa.gov/gistemp/tabledata/SH.Ts.txt

  96. steven mosher:

    gavin, WRT your inline comments in #2 and #21: you indicate that volcanic forcing was included in the models used in Santer et al. Figure 3 in that paper lists the models used for the hindcast. In AR4 Section 10, I believe there is a chart showing which models used which particular forcings for the 20CEN hindcast. According to that chart, some of the models used by Santer (e.g. FGOALS, but there are others) did not model historic volcanic forcing. Am I reading the Santer paper wrong, or misreading the chart in AR4? So, a simple question: did all of the models used by Santer et al. include volcanic forcing or not?

    [Response: No. I never said they all did. – gavin]

  97. Chris Schoneveld:

    Barton Paul Levenson #87.

    Are you suggesting that there was no such thing as a LIA (after all the hockey stick doesn’t reveal one) because we can’t come up as yet with a mechanism that could have caused it?
    Maybe it was due to changes in CO2 radiative forcing (in part) since historical CO2 levels weren’t as stable as assumed by the IPCC, at least that’s what van Hoof et al. conclude from CO2 data derived from stomatal frequency analysis. Here is a link to the abstract of their paper:

    http://www.pnas.org/content/105/41/15815.full

  98. steven mosher:

    Gavin,

    Sorry for the misunderstanding. You wrote:

    [Response: The volcanic effects were taking into account in the models, but the precise timing of El Nino events is going to be different in each model – that in fact is a big factor in the variability of the 20 year trends. – gavin]

    Perhaps what you meant to say was “taken into account in some of the models, but not all” or something like that. In any case, I apologize for misreading your inline reply.

    It is true, you never said “all of the models.” Given that volcanic forcing is important, and recalling the role that representing volcanic forcing plays in validating GCM work, some might find it odd that models that don’t include this important forcing would be used in Santer, much less AR4. Then again, maybe not. Perhaps a comparison of models using volcanic forcing versus those that don’t would be enlightening.

  99. Hank Roberts:

    http://www.google.com/search?q=%2B%22steven+mosher%22+%2B%22volcanic+forcing%22

    Why not summarize the answers you’ve had on similar questions and post them as an article? It’d save some retyping.

  100. Chris:

    #95 Pat Neuman: “S.H. temperatures for Jan-Jun 2008 had marginal positive anomalies but large (.85 .59 .76) positive anomalies for Jul-Sept (Sept was highest of record for S.H. according to NASA).”

    The land temperatures you refer to were heavily skewed by the Antarctic, where NASA GISS has very sketchy coverage, and appears to “fill in” vast areas.
    For skew caused by Antarctic see:
    http://tinyurl.com/4le752
    and http://data.giss.nasa.gov/work/modelEt/time_series/work/tmp.4_obserTs250_1_2005_2009_1951_1980-0/map.pdf [needs rotating]
    and for hint at limitations of “filling in” compare with
    http://climate.uah.edu/sept2008.htm
    (SH anomaly +0.10C)

    Having said that, I am certainly interested as to why part of Antarctica was so warm in Sep (just as I am incidentally interested as to why much larger areas were so cold in Aug:
    http://climate.uah.edu/august2008.htm )

    Looking through the Antarctic station data
    http://www.antarctica.ac.uk/climate/surfacetemps/
    it does appear that several stations on the Antarctic Peninsula/islands to the north had a record-breaking mild September. This is interesting, and I don’t want to diminish its inherent importance. However, all of the stations in the rest of Antarctica (i.e. the other ~98 per cent of its area, or whatever the exact figure is) had fairly average temperatures, from what I can tell.

    The same issues apply to July:
    http://tinyurl.com/43kcgu (+0.85C)
    http://climate.uah.edu/july2008.htm (0.00C)

    Hadley’s Crutem3 SH land figures for Jul and Aug (Sep isn’t out yet) were +0.66C and +0.14C respectively, or +0.40C for the two months, compared with +0.72C for GISS. I suggest they are also skewed, but not so badly.
    http://www.cru.uea.ac.uk/cru/data/temperature/crutem3sh.txt

    Note that according to Crutem3, SH land temperatures in August were the coldest since 1992 and 1994 (post-Pinatubo cooling) and below the 1980s average of +0.15C. This would fit with anecdotal evidence from various countries in the SH, notably Australia (coldest August since 1944 in Sydney, for example).
    UAH had Aug SH anomaly at -0.19C
    http://climate.uah.edu/august2008.htm

  101. Chris:

    Sorry I should have been more precise in my previous post and given UAH SH land temperature anomalies:
    Jul: +0.26C
    Aug: -0.56C
    Sep: +0.24C
    http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt
    I don’t have the RSS figures to hand but they won’t be very different in any event.

  102. Hank Roberts:

    Quite tangential, but maybe of interest to one of the climate scientists:

    We have another ‘natural experiment’ going on, rather like the 3-day shutdown of air travel in the USA after 9/11, and like the Beijing Olympics clean-air experiment.

    http://www.nakedcapitalism.com/2008/10/baltic-dry-index-falls-another-107-down.html

    Total ocean shipping activity has dropped dramatically. That’s almost all fueled by high-sulfur fuel oil; shipping is known to make linear clouds along shipping tracks.

    The change in ship travel ought to show up in satellite imaging, I’d think — and in anything else?

  103. Chris:

    Re: the volcanic forcings. The reason I raised the issue before was that simply eyeballing the graph at the top of p5 of the factsheet reveals the following: the main reason that the tropospheric trend is steeper appears to be the lack of peaks in the 1980s with higher tropospheric anomalies than surface anomalies.
    There are in fact several points that I would make here:
    1. La Nina events in 1985, 1989 and 2008 (beyond timescale of graph) show a pattern of tropospheric negative anomalies significantly exceeding surface negative anomalies. (Not so much 1999/2000 though.)
    2. The El Nino event in 1998 (and weaker El Nino events from the mid-1990s) imply that the reverse pattern ought to be true for El Nino events.
    3. However, this did not occur for the El Nino events of 1983, 1987 and 1992 – the major apparent cause of the relative steepness of the tropospheric trend.
    4. El Nino 1983 and 1992 were associated with major volcanic cooling events – a plausible explanation for the subdued tropospheric temperatures relative to surface.
    5. El Nino 1987 was not associated with such an event.
    However, I note from
    http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt
    that the tropics reached a 6-month anomaly of +0.48C for the second half of 1987, a figure which does not appear to have been surpassed since in any year except 1998.
    Also I’ve just located the RSS data at
    ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_1.txt
    and these show +0.47C for the second half of 1987, here just edged out by the two peaks in the 2000s (+0.50C and +0.53C) as well as the 1998 peak. (The graph in the factsheet shows the peaks in the 2000s at ~0.15C higher than the peak in 1987)
    6. At the same time, I notice that the standard RSS figures for the tropics (as in link above) are for latitudes 20S to 20N, whereas the key figure in the factsheet (and the paper?) is for 30S to 30N – which extends beyond the tropics as commonly understood:
    http://en.wikipedia.org/wiki/Tropical
    7. La Nina 2008 will almost certainly have made the tropical tropospheric trend significantly shallower (relative to the surface trend)
    I don’t draw any particular conclusion from this still cursory examination, except that it seems to me there’s significant leeway to come up with differing trends depending on one’s choices of parameters for the data.

  104. Barton Paul Levenson:

    chris schoneveld writes:

    Are you suggesting that there was no such thing as a LIA (after all the hockey stick doesn’t reveal one) because we can’t come up as yet with a mechanism that could have caused it?

    We know what caused it — the Maunder Minimum. Increasing solar constant brought us out of it. But the solar constant hasn’t increased significantly in 50 years, so it can’t be driving the sharp upturn in global warming of the last 30 years. We’re not still “coming out of the Little Ice Age.” We came out of it in 1850.

  105. Lauri:

    #100: Chris Says

    Hadley’s Crutem3 SH land figures for Jul and Aug (Sep isn’t out yet) were +0.66C and +0.14C respectively, or +0.40C for the two months, compared with +0.72C for GISS. I suggest they are also skewed, but not so badly.
    http://www.cru.uea.ac.uk/cru/data/temperature/crutem3sh.txt

    Note that according to Crutem3, SH land temperatures in August were the coldest since 1992 and 1994 (post-Pinatubo cooling) and below the 1980s average of +0.15C. This would fit with anecdotal evidence from various countries in the SH, notably Australia (coldest August since 1944 in Sydney, for example).
    UAH had Aug SH anomaly at -0.19C
    http://climate.uah.edu/august2008.htm

    You are comparing apples to oranges to pears. The different data sets use different base periods for their anomalies, so the levels of the numbers will necessarily differ. Go back and adjust all the numbers to a common anomaly base period; then there would be at least some sense in the comparison.
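    As a minimal sketch of that adjustment (Python, with made-up anomaly values, not real GISS or CRU data): re-expressing a series against a new base period just subtracts the series’ mean over that period, so absolute levels shift while year-to-year changes are preserved.

    ```python
    # Sketch of converting anomalies between base periods (made-up numbers).
    # Subtracting the mean over the new base period shifts the levels but
    # leaves year-to-year changes untouched.

    def rebase(anomalies, new_base_years):
        """Re-express {year: anomaly} against the mean over new_base_years."""
        offset = sum(anomalies[y] for y in new_base_years) / len(new_base_years)
        return {year: value - offset for year, value in anomalies.items()}

    series = {2006: 0.51, 2007: 0.58, 2008: 0.72}   # hypothetical values
    rebased = rebase(series, [2006, 2007])
    # The 2007-to-2008 change is identical in both versions; only the
    # absolute levels differ. That is why cross-dataset comparisons should
    # use changes (or a common base period), not raw anomaly values.
    ```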

    By the way, GISS data does not show +.72C. For the globe it is .49C
    http://data.giss.nasa.gov/gistemp/graphs/
    and I am sure it is less for SH.

    Also, the satellite data include lower stratosphere and are, by definition, not comparable to surface temps.

  106. pat neuman:

    Chris, the 2008 monthly S.H. temperature averages from NOAA NCDC are unlike the S.H. averages from NASA, but are like the averages you show at your links. Your comment seems to explain that well.

    However, I still don’t think anyone should become receptive to arguments for lower-than-consensus climate sensitivities.

    Also, I would like to see your explanation, if you have one, for why 1931 was extremely hot and dry in the U.S.

    http://picasaweb.google.com/npatnew/ClimateDataMidwestAK#

  107. Chris Colose:

    The Van Hoof et al. paper is interesting. I’m not very familiar with proxies, but I’ve been led to believe I should trust ice cores more. If the results hold up, would the amplitude of CO2 from Vostok over glacial-interglacial cycles also be affected (if so, linearly?), or would this mostly be a higher-frequency problem? I’d like to see a paleoclimate person here do something on that.

    The LIA is partly due to solar forcing, but volcanic effects had a lot to do with it as well.

  108. Chris:

    #105 Lauri

    “You are comparing apples to oranges to pears. All the different data sets have different base periods for the anomalies. The levels of numbers should be different, therefore. Go back and adjust all numbers according to the anomaly base year. Then there would be at least some sense in the comparison.”

    Why don’t you do it yourself? GISS has Jul-Aug for 2006, 2007 and 2008 at +0.51, +0.58 and +0.72.
    http://data.giss.nasa.gov/gistemp/tabledata/SH.Ts.txt
    Crutem3 has them at +0.56, +0.12 and +0.41.
    http://www.cru.uea.ac.uk/cru/data/temperature/crutem3sh.txt
    Thus it’s obvious that GISS anomalies were going in a different direction to those of Crutem3 from 2006.
    If I had one set of shelves in one room with an apple on one of the shelves, another set of shelves in another room with an orange on one of the shelves, then moved the apple up one shelf but the orange down one shelf, would you dispute that they’d moved in opposite directions if you didn’t know which shelf each one had started on?

    “By the way, GISS data does not show +.72C. For the globe it is .49C
    http://data.giss.nasa.gov/gistemp/graphs/
    and I am sure it is less for SH.”

    Pat was referring to SH land temperatures

    “Also, the satellite data include lower stratosphere and are, by definition, not comparable to surface temps.”

    What you say is misleading. See:
    http://www.ssmi.com/msu/msu_data_description.html

    “…The MSUs are cross-track scanners with measurements of microwave radiance in four channels ranging from 50.3 to 57.95 GHz on the lower shoulder of the Oxygen absorption band. These four channels measure the atmospheric temperature in four thick layers spanning the surface through the stratosphere…
    …The brightness temperature for each channel corresponds to an average temperature of the atmosphere averaged over that channel’s weighting function. In the case of channel TMT, most of the signal is from a thick layer in the middle troposphere at altitudes from 4 to 7 km, with smaller contributions from both the surface and the stratosphere. Channel TLT uses a weighted average between the near-limb and nadir views to extrapolate the data to lower altitude, thus removing almost all of the stratospheric influence. For each channel, the brightness temperature can be thought of as the averaged temperature over a thick atmospheric layer….”

    I will concede though that I omitted to highlight that the satellite data does not cover central Antarctica, so here I guess I was unintentionally misleading too. (I would emphasise of course that the satellite data has better coverage around the edges of the continent/Antarctic Circle than GISS, and it is these areas which are the best indicators of temperature since it is here that much of the atmospheric mixing takes place.)

    Going back to your analogy, when it comes to the satellite data the pears weren’t moving up a shelf either so it’s up to you to explain the divergence.
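    The weighted-average idea in the MSU excerpt above can be sketched as follows (the temperatures and weights are illustrative only, not the real MSU weighting functions):

    ```python
    # Sketch of the MSU excerpt's point: a channel's brightness temperature
    # is the temperature profile averaged with the channel's weighting
    # function. All numbers below are illustrative.

    def brightness_temperature(layer_temps, weights):
        """Weighted vertical average of layer temperatures (kelvin)."""
        return sum(t * w for t, w in zip(layer_temps, weights)) / sum(weights)

    # Toy profile: surface, lower troposphere, upper troposphere, stratosphere.
    temps = [288.0, 260.0, 230.0, 215.0]

    # A TLT-like channel weights the lower layers heavily and removes
    # almost all of the stratospheric contribution.
    tlt_weights = [0.35, 0.45, 0.19, 0.01]
    tb = brightness_temperature(temps, tlt_weights)
    ```

    With these toy weights the stratospheric layer contributes only 1% of the average, mirroring the excerpt’s description of TLT “removing almost all of the stratospheric influence.”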

  109. Benjamin:

    There is a new paper out that we can expect to see circulated and quoted by the skeptics:

    CO2 EMISSIONS BY ECONOMIC ACTIVITIES ARE NOT REALLY RESPONSIBLE FOR THE GLOBAL WARMING: ANOTHER VIEW

    International Journal of Transdisciplinary Research Vol. 3, No. 1, 2008

    http://www.ijtr.org/Vol3No1/Tsuchida_IJTRPaper_Formatted.pdf

    The journal is apparently peer reviewed, but its focus is an interdisciplinary approach to economics, which makes it an odd choice for a climate paper. The author is a retired physics professor.

    I read a good bit of it, and it doesn’t impress me as high quality science, but I’m no climatologist and am not really able to rebut it effectively.

    So, I’m posting this here in the hope that someone(s) with more knowledge will take up the challenge.

    [Response: Nonsense I’m afraid. The attribution of the recent rise of CO2 in the atmosphere to industrial activities and deforestation is incontrovertible through dozens of lines of evidence. Anyone who insists otherwise (that it comes from the ocean – despite the isotopic evidence, budget and direct measurements of increasing ocean carbon) is living in cloud-cuckoo land. Retired professors of physics notwithstanding. – gavin]

  110. Hank Roberts:

    Oh, man — he says

    “8.5 The nuclear power industry: a tactician behind the CO2 based global warming
    In my view, the CO2 based global warming theory was contrived to revive a nuclear power generation industry that suffered from high cost infrastructure and from a bad public image after the disastrous Chernobyl accident in 1986. This contrivance was effective and appears now to be achieving its aim.”

    D’oh.

  111. Thomas Lee Elifritz:

    International Journal of Transdisciplinary Research

    And to think I was just getting used to the term : metadisciplinary.

  112. Ray Ladbury:

    Tsuchida’s paper is an absolute mess. All it does is regurgitate the “temperature leads CO2” argument and then try to pass off the whole CO2 increase as being due to decreased solubility of CO2 in H2O at higher temperature.
    His motivation is clear–he is afraid remedying climate change will require more nuclear power. This is a classic sad case of rejecting the science because you don’t like its implications.

  113. Chris:

    #106 Pat: thanks for the reference to NOAA NCDC. I just looked it up and it has SH land temps for Sep 08 as +0.44C or 10th warmest, well below Sep 05 at +0.85C. (Of course, it has very limited Antarctic coverage, but then again I don’t know how many more stations GISS takes into account.)
    http://www.ncdc.noaa.gov/oa/climate/research/2008/sep/global.html#temp
    If I’m picking only one surface dataset, it tends to be from Hadley, but maybe that’s my British bias :)
    (I would have quite a lot to say about the CET dataset though.)

    “However, I still don’t think anyone should become receptive to arguments for lower-than-consensus climate sensitivities.

    Also, I would like to see your explanation, if you have one, for why 1931 was extremely hot and dry in the U.S.”

    Not sure why you’re asking me the latter question. The obvious answer (from someone who is indeed receptive to arguments for lower-than-consensus climate sensitivities) is that it was on a par with recent hot years because temperatures at US latitudes really weren’t as much cooler in the 1930s/1940s (compared to the present) as GISS/Hadley’s best estimates (from often sketchy global coverage) suggest.
    http://data.giss.nasa.gov/gistemp/graphs/Fig.D.lrg.gif

    However, there are various other plausible explanations, for example:
    – changes in US temperatures since the 1930s/1940s show regional variation within the overall warming trend at those latitudes;
    – actually I’m struggling to think of any others, apart from inaccuracies in the US temperature record but these have tended to point the other way.

    Obviously 1930s/1940s had coincident warm phase PDO and AMO.

    BTW I do think there is a need for further examination of the causes of the record Sep 08 temperatures on the Antarctic Peninsula/nearby islands, as they are quite striking. On the face of it, and given that the records were limited in time and space to that month in that region, I would say that unusually volatile jet stream shifts are a likely intermediate cause. The next mildest Sep was in 1984, I think; not sure if that gives us much clue as to what caused any jet stream shifts/increased volatility. Obviously, as ever, global warming will be a factor; the question is how much, and what the causal relationships are.

  114. Chris Colose:

    Benjamin,

    I read (very, very quickly) about half of the piece you linked to. So far the author is not discounting the radiative effects of CO2 on temperature, but questioning whether the CO2 we emitted is actually what is causing the rise in concentration. He doesn’t understand the relationship between emissions and concentrations, but aside from that we know very well that the CO2 rise is due to anthropogenic activities, and we also know the magnitude and rate of the CO2 rise are far outside the bounds of natural variability.

    He brings up quite a bit of the “CO2 lags temperature in the Vostok ice core” stuff which has been thoroughly refuted (at least in the context that this is contradictory to AGW). Rebuttals are not hard to find (if you go to ‘start here’ on the top of the RC page, and scroll to the section that says “Informed, but seeking serious discussion of common contrarian talking points”, each one of those links discusses this claim, and similar ones).

    It’s pieces like this that David Archer was referring to when he said denialism is meant for laymen, not for scientists. Just speaking as a somewhat informed layman, I’ll assure you this guy won’t be getting any Nobels.

  115. Chris Schoneveld:

    Barton Paul Levenson (# 104),

    Who has determined the official end of “the coming out of the Little Ice Age”? If it was, for whatever reason, indeed around 1850 what mechanism caused the warming between 1900 and 1940?

    At least you appear to be one of the rare AGW’ers admitting the importance of solar forcing and who at the same time implicitly refutes Mann’s hockey stick by acknowledging the proxies that established the existence of the Little Ice Age, and for that matter, the Medieval Warm Period.

  116. Lauri:

    Chris Says:
    17 October 2008 at 8:54 AM

    #105 Lauri

    “You are comparing apples to oranges to pears. All the different data sets have different base periods for the anomalies. The levels of numbers should be different, therefore. Go back and adjust all numbers according to the anomaly base year. Then there would be at least some sense in the comparison.”

    Why don’t you do it yourself?

    I am not doing it myself, because common sense says, that if someone proposes something, (s)he is responsible for stating the argument without error. It is not the listener’s task to correct the argument.

    Pat was referring to SH land temperatures

    So you propose it’s sensible to compare land surface temperatures with satellite observations that span all around the globe. Doesn’t make any sense to me.

    However, I take back the comment on stratosphere. I had an erroneous picture in mind.

  117. Pat Neuman:

    Chris, I asked about 1931 because I haven’t been able to understand why that year was so warm and dry at so many climate stations, especially in the Midwest. El Nino may explain the warm temperatures which occurred later, in the mid 1930s-1940s, but there was no El Nino in 1931.

    Also, I’d like to see someone try to explain the extreme precipitation deficit which occurred during the Dust Bowl years in the Midwest and West. The Mississippi River nearly dried up then and it hasn’t been close to that low since. Dams and land use changes don’t explain the lack of significant precip.

  118. Mark:

    Chris, who decided we HAD an ice age to come out of?

    The same people decided when it ended.

  119. Chris Colose:

    Chris Schoneveld (#115)

    The forcing from 1900 to mid-century was mostly natural…mostly solar, a bit of lack-of-volcanic, maybe some black carbon in the arctic. The majority of the warming over the 20th century however has been in the last few decades, and the anthropogenic RF relative to 1750 outweighs natural forcings by at least an order of magnitude.

    You do not understand Mann’s work: he never denies the existence of a MWP or LIA; he argues only that the late 20th century is anomalous in the context of the last 400 years, probably the last 1,000 and maybe longer.

  120. David B. Benson:

    “Palaeoclimatologists developing region-specific climate reconstructions of past centuries conventionally label their coldest interval as “LIA” and their warmest interval as the “MWP”. Others follow the convention and when a significant climate event is found in the “LIA” or “MWP” time frames, associate their events to the period. Some “MWP” events are thus wet events or cold events rather than strictly warm events, particularly in central Antarctica where climate patterns opposite to the North Atlantic area have been noticed.”

    from

    http://en.wikipedia.org/wiki/Medieval_Warm_Period

    In particular, the MWP and LIA for particular regions are not necessarily precisely synchronous with those in Europe; indeed precise dating for some does not exist. For example, glacial terminal moraines demonstrate a local LIA in many parts of the world, in both hemispheres. But I, at least, don’t know very precise datings for these maxima. For another, limnological studies in Patagonia demonstrate a (warm) MWP at approximately the same time as in Europe, within a century or so.

  121. Chris:

    Lauri, sorry if I was short with you before. Effect of difference in base years between GISS and Crutem3 appears to be ~0.1C. I just thought the implications of the evidence would be obvious without further detail.
    Evidence from satellite data does have some weight when examining possible discrepancies in surface data, I just didn’t feel like spending extra time spelling out the detail, nuances and limitations on this occasion as I thought the person I was replying to would know what I meant by referring to it.
    Satellite data is broken down by NH/SH, land/ocean etc see
    http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt
    There is also data available from RSS e.g.
    ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_1.txt

    Pat, I’m sorry I don’t have the answers to your questions. Though I would comment re: 1931 that regional warm events are not necessarily correlated very well with El Nino conditions e.g. 2006 was the warmest year on record in the UK (HadCET), and 2003 saw a record hot summer in western Europe. Also 1976 remains the hottest ever summer in the UK at 0.4C above 2003.
    None of these were major El Nino years.
    (Sorry this probably doesn’t help much, as you will doubtless be aware of all this in any event. Maybe someone with more knowledge of US weather than myself can comment?)

  122. Phil. Felton:

    Re #108

    “Also, the satellite data include lower stratosphere and are, by definition, not comparable to surface temps.”

    What you say is misleading. See:
    http://www.ssmi.com/msu/msu_data_description.html

    “…The MSUs are cross-track scanners with measurements of microwave radiance in four channels ranging from 50.3 to 57.95 GHz on the lower shoulder of the Oxygen absorption band. These four channels measure the atmospheric temperature in four thick layers spanning the surface through the stratosphere…
    …The brightness temperature for each channel corresponds to an average temperature of the atmosphere averaged over that channel’s weighting function. In the case of channel TMT, most of the signal is from a thick layer in the middle troposphere at altitudes from 4 to 7 km, with smaller contributions from both the surface and the stratosphere. Channel TLT uses a weighted average between the near-limb and nadir views to extrapolate the data to lower altitude, thus removing almost all of the stratospheric influence. For each channel, the brightness temperature can be thought of as the averaged temperature over a thick atmospheric layer….”

    This description is from RSS, as I recall UAH uses a somewhat different approach for TLT which is thought to include more stratospheric signal

    I will concede though that I omitted to highlight that the satellite data does not cover central Antarctica, so here I guess I was unintentionally misleading too. (I would emphasise of course that the satellite data has better coverage around the edges of the continent/Antarctic Circle than GISS, and it is these areas which are the best indicators of temperature since it is here that much of the atmospheric mixing takes place.)

    The Satellite data doesn’t cover any of the antarctic continent (see:
    http://www.remss.com/data/msu/graphics/plots/MSU_AMSU_Channel_TLT_Trend_Map_v03_1.png )
    so you’re misleading again.

  123. Hank Roberts:

    See also:

    http://initforthegold.blogspot.com/2008/10/climate-sensitivity-is-not-central.html

    http://initforthegold.blogspot.com/2008/10/followup-re-huge-sensitivity-arguments.html

  124. Chris:

    Phil, so where did e.g.
    http://cat.inist.fr/?aModele=afficheN&cpsidt=18948717
    get their data from?

    [Response: MSU-LT is highly suspect near the poles because of the non-nadir issues with sea ice and perhaps the near-surface temperature structure. MSU 2 and 4 channels are much less affected by surface effects and that is what is used by Johnson and Fu. – gavin]

  125. Lauri:

    #121 Chris Says:
    17 October 2008 at 5:42 PM

    Lauri, sorry if I was short with you before. Effect of difference in base years between GISS and Crutem3 appears to be ~0.1C. I just thought the implications of the evidence would be obvious without further detail.

    Chris,
    You are correct in that the difference, in practice, between GISS and Crutem3 baselines is small. However, I just computed the difference (in GISS values) between the 1951-80 and 1979-2007 baselines and it is 0.32C. (I once tried to find out the baseline period for UAH but did not succeed. It seemed to me that the baseline is the whole data set.) This 0.32C is significant and should be used as a correction when comparing GISS to UAH data.
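    Lauri’s correction can be made concrete with a short sketch. This is not GISS data; the anomaly values below are invented purely to illustrate how re-baselining works:

```python
# A minimal sketch of re-baselining: to compare two anomaly series you
# re-express one on the other's base period by subtracting the mean
# anomaly over the new base period. All numbers here are invented.

def rebaseline(series, new_base):
    """Re-express anomalies (dict year -> deg C) relative to new_base,
    an inclusive (start_year, end_year) tuple."""
    vals = [series[y] for y in range(new_base[0], new_base[1] + 1) if y in series]
    offset = sum(vals) / len(vals)
    return {y: round(a - offset, 3) for y, a in series.items()}, offset

# Illustrative anomalies on an old (e.g. 1951-80) baseline:
old = {1979: 0.1, 1990: 0.3, 2000: 0.4, 2007: 0.6}
shifted, offset = rebaseline(old, (1979, 2007))
# 'offset' is the mean anomaly over the new base period; subtracting it
# is exactly the kind of correction described above.
```

    After the shift, the series averages to zero over 1979-2007, which is what "anomalies relative to a 1979-2007 baseline" means.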

  126. Barton Paul Levenson:

    chris schoneveld writes:

    what mechanism caused the warming between 1900 and 1940?

    The sun increased in brightness slightly, and carbon dioxide was beginning to ramp up.

    At least you appear to be one of the rare AGW’ers admitting the importance of solar forcing and who at the same time implicitly refutes Mann’s hockey stick by acknowledging the proxies that established the existence of the Little Ice Age, and for that matter, the Medieval Warm Period.

    Nobody disputes that there was a Medieval Warm Period or a Little Ice Age. What’s disputed is whether they were global or not — the former probably was not, the latter probably was. And the MWP was not warmer than the present, as some have tried to say.

    More than a dozen independent studies have come up with “hockey sticks” since Mann et al. 1998. The latter was just the first to use that kind of method to establish a historical temperature curve.

  127. Chris Schoneveld:

    Chris Colose #119,

    Chris, to demonstrate that Mann acknowledges the LIA and MWP, please indicate on the hockey stick graph well defined intervals that supposedly correspond with the MWP and LIA. To allege that I don’t understand Mann’s work is a bit stiff, thank you very much.

    And as to the rate of warming in the late 20th century, it is not much different from the rate of warming in the early 20th century.

    BTW, there are too many bloody Chris’s responding to this thread, it’s getting confusing.

  128. Chris Schoneveld:

    Mark #118, thank you for your enlightening response.

  129. Pat Neuman:

    Chris, thank you for commenting on my questions. I realize now it’s probably a distraction for me to dwell on the 1930s when we know (as Chris Colose said in #119) that … “the late 20th century is anomalous in the context of the last 400 years, probably the last 1,000 and maybe longer.”

  130. David B. Benson:

    Barton Paul Levenson (126) — Here is a review of a book by anthropologist Brian Fagan which suggests a fairly global extent for MWP:

    http://www.nytimes.com/2008/03/21/books/21book.html?ref=science

  131. Marcus:

    Re: #127: For what Mann thinks about the Little Ice Age, I would read: http://holocene.meteo.psu.edu/shared/articles/littleiceage.pdf

    Also, read Chapter 6 of the IPCC 4th Assessment Report for the overview of what (at least a year or two ago) the literature had to say about the MWP, the LIA, and temperatures in the last 1000 years in general.

    I don’t think anyone denies that the sun matters for climate, but the question is whether the variability of the sun in recent history has had the impact that we project from greenhouse gases over the next 100 years – and there, I think, a majority of your “AGW’ers” would think the evidence suggests that changes in human forcing will likely be several times (at least) larger than any solar variability we’ve seen in a thousand years or more.

  132. Hank Roberts:

    Interesting — look at the top hits in this search in Scholar.
    They’re from McIntyre, Michaels, and World Climate Report.
    http://scholar.google.com/scholar?q=Mann+%22hockey+stick%22+MWP+LIA

  133. Fred Staples:

    One of the many penalties of ageing is that people insist on explaining, slowly and carefully, ideas with which you have been completely familiar for many decades.

    Tamino (29) (Open-Mind, Central England Temperatures) predicted an up-tick (not a least-squares trend) in surface temperatures of 0.5 degrees centigrade this decade, starting from an annual average of about 10.5 degrees. With just 15 months to go, Tamino, sooner rather than later you will have to accept that you picked the wrong decade.

    I have commented many times in these posts on the near impossibility of measuring the changes in global temperatures which AGW theory predicts. The variability of all the data sets is obvious. The warming is everywhere slight. To make its presence clear, any particular forcing needs to compete with natural cycles, trends and events, as well as measurement errors, both systematic and random.

    Random errors (noise in these posts) will accentuate the variability of the data and increase the demands on the trend needed to establish significance. Systematic errors will distort the trends, and are not easy to detect. With satellite data, systematic errors like diurnal drift, altitude change, instrument deterioration etc are particularly trying. Surface (or more accurately near-surface) temperatures are vulnerable to urban development, which I observe regularly driving along the (flat) M4 motorway into London.

    Those whose business it is to do so attempt to correct for some of these, and we are presented with a variety of data, always plotted as deviations from a mean, which accentuates the false-zero effect, exaggerating the changes in relation to the absolute temperature.

    Tamino asks why I quote the UAH satellite data and not, say, the GISS “surface” data. The answer is that UAH can cross-check their results against an independent source – the long running radio-sonde data. Have a look at the Hadley Centre Frequently Used HADAT graphics. Hadley (hardly a denialist organisation) have weighted their radio-sonde data to match the satellite altitudes. They plot surface, HadAT2, UAH and RSS data together, as does Tamino in his first “this”, 29.

    The plots are similar – little or no warming from 1978 to before the El Nino peak in 1998, a sharp increase from 1999 to 2002, and varying degrees of fall-back from about 2005/2006 led, surprisingly, by the radio-sonde data.

    As well as the trends themselves we can test the fundamental AGW assumption that temperature increases must be uniform across the lapse rate temperature distribution – the surface, lower and upper troposphere temperatures must increase together. Over 30 years from 1978 (the trough following the 1945 peak) the trend increases per decade are:

    Surface: 0.16 degrees
    UAH : 0.12 degrees
    RSS : 0.18 degrees
    Radio-Sonde : 0.16 degrees
    The UAH mid-troposphere trend is 0.05 degrees per decade.
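    For context, per-decade figures like the ones above are conventionally obtained from an ordinary least-squares fit to the anomaly series, with the slope scaled from per-year to per-decade. A minimal sketch, run here on a synthetic series rather than any of the real datasets:

```python
# OLS slope of a temperature anomaly series, reported per decade.
# The input series below is synthetic, constructed to warm at exactly
# 0.16 C per decade, so the fit should recover that number.

def trend_per_decade(years, temps):
    """Least-squares slope of temps vs years (decimal years), deg C/decade."""
    n = len(years)
    my = sum(years) / n
    mx = sum(temps) / n
    slope = (sum((y - my) * (x - mx) for y, x in zip(years, temps))
             / sum((y - my) ** 2 for y in years))
    return slope * 10.0  # per year -> per decade

years = [1978 + i / 12 for i in range(360)]   # 30 years, monthly
temps = [0.016 * (y - 1978) for y in years]   # 0.016 C/yr = 0.16 C/decade
trend = trend_per_decade(years, temps)
```

    On real data the fitted slope is of course sensitive to the choice of endpoints and to large excursions like the 1998 El Nino, which is the point at issue in this exchange.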

    All these trends depend on the step change from 1999 to 2002, and the persistence of these (marginally higher) temperatures over the next 3 or 4 years to 2005/2006. If the current 1978 temperatures continue, the upward trends, such as they are, will fall away rapidly.

    No one can know what will happen over the next decade, but this data does not support the IPCC assertion that we can be 90% certain that increasing CO2 concentrations have been responsible for a substantial part of the 20th century warming, or that we can expect 3 degrees C of warming over the next century.

    [Response: I find that the penalty of running this blog is the amount of aging I end up doing patiently explaining to people the same facts over and over and having them not listen each and every time. Your confidence in the logic and physical relevance of your thinking is unfortunately highly misplaced. – gavin]

  134. Leif Svalgaard:

    19: Figen Mekik Says:
    “To avoid all that, I think peer review should be double blind so the work is judged on its own merits”

    But, AFTER the paper has been accepted, the name of the reviewers AND [even more importantly] the text of the review should be published along with the paper [in the electronic version, at least]

  135. Leif Svalgaard:

    133: [Response: I find that the penalty of running this blog is the amount of aging I end up doing patiently explaining to people the same facts over and over and having them not listen each and every time. […] – gavin]

    Gavin, how true !

  136. RichardC:

    133 Gavin, pontification is an export issue. Importation of data is NOT desired or allowed. Fred decided long ago what is ABSOLUTE FACT, and so current data is superfluous.

    You waste your time trying to educate the uneducatable Fred, but there are thousands of others whose brains aren’t stuck in the muck of past beliefs. Fred’s spouts are far removed from reality. The graphs of temperature ALL show that current temps are far higher than in the 70s, but Fred still spouts goop. “current 1978 temperatures?!” Egad, Fred, you’re embarrassing yourself. Fred, please, please, PLEASE go look at some recent data. ANY data. Don’t do it on a clean carpet, though.

  137. Tenney Naumer:

    I’m just here to remind everybody to watch the upcoming PBS Frontline Special “Heat — A Global Investigation” which will be shown both on the air and online at 9 p.m. EST, October 21st.

    http://www.pbs.org/wgbh/pages/frontline/heat/

  138. Hank Roberts:

    Needed — a link that the Contribs can drop into such Inline Responses, one that automagically searches out all prior inline responses to that user.

    (That would mostly benefit the new reader coming along who will only see the “over and over” but not find where it had been explained to the same person over and over. Heck, even Fred might benefit. And it’d save retyping, perhaps.)

    (It’s a truism from Usenet days that one way to get good information on the Net is to post what you think and await correction. Without good search tools some folks won’t recall or retrieve the corrections, but rather go on posting the same thing heedless of help.)

  139. Walt Bennett:

    OT

    Some of you may remember Alexi, who has been here several times, says enough to show that he has a grasp of the science, then delves into opinions not supported by the science.

    Well, his favorite sport these days is to hang around Yahoo CCD and pick on people with less education. We had the following exchange recently and if some science expert would like to make sport of replying to Alexi’s assertions, I’d be grateful, since I’m not likely to ever know as much science as he does, and he loves to use that as a bludgeon, as if I must be wrong since I don’t even understand his answer.

    I wrote: “CO2 has been shown in laboratory conditions to increase radiative forcing at the surface.”

    Alexi replied: “It does not look as you have grasped the concept at all, since the above statement contains at least THREE factual errors.

    “First, no controlled laboratory experiments have been conducted at the scale of 70,000 meters long, with varying air density and a special temperature profile.

    “Second, the AGW theory alleges that the “forcing” occurs at the top of atmosphere, not at the surface. The actual amount of “forcing at the surface” is a subject of heated debates because many non-trivial factors are acting at the surface.

    “Third, the “forcing” cannot be possibly ever observed in laboratory conditions because the definition of forcing contains three physically unrealizable conditions: (a) CO2 mixes up, but temperature profile stays the same, (b) stratospheric cooling equilibrates with tropopause conditions before troposphere does, and (c) troposphere-surface interface is kept frozen. None of these conditions can possibly exist, and therefore the whole concept of “forcing”, expecially the claim that is is accurately calculated to be 3.7 W/m2, is highly dubious.

    “It is really amazing how it is possible to make three brutal mistakes in a statement of 10 words long.”

    Thanks in advance for any contributions.

    – Walt

  140. Chris:

    #122 Phil “The Satellite data doesn’t cover any of the antarctic continent”
    Apart from unquestionably the majority of the outer coastal zones (i.e. from 70S)
    http://maps.grida.no/go/graphic/antarctica-topographic-map
    unlike GISS which only extrapolates from a few scattered stations (and a main concentration on the Peninsula). Not criticising GISS by the way, that’s just the reality of Antarctic surface coverage.

    Also (re: Gavin’s response) I note that both TLT and TMT use MSU 2 channel (see below). Perhaps the UAH guys use the same method as Johnson and Fu when extending their anomaly maps further over Antarctica?

    TLT = Temperature Lower Troposphere
    MSU 2 and AMSU 5
    TMT = Temperature Middle Troposphere
    MSU 2 and AMSU 5
    TTS = Temperature Troposphere / Stratosphere
    MSU 3 and AMSU 7
    TLS = Temperature Lower Stratosphere
    MSU 4 and AMSU 9
    from http://www.ssmi.com/msu/msu_data_description.html#channels

    Another point, I thought it was more high-altitude continental ice that was the problem than sea ice. RSS set their north pole band up to 82.5N, and this includes large areas covered by sea ice. Perhaps they simply want a single latitude limit, therefore they stop at 70S to avoid including a significant amount of the high-altitude ice sheet, even though there are large areas south of 70S which generate reliable enough data (as with between 70N and 82.5N).

    Sorry that this is all rather speculative, and I realise that this should probably be for the UAH guys to make clear, rather for Gavin to have to comment on.

    In any event, the only reason I referred to the satellite data in my earlier post was as a (limited) check against the GISS SH land data, in particular its record temperature anomaly for Sep. Since then my attention has been drawn to the NOAA NCDC data, which only show SH land as 12th warmest
    http://www.ncdc.noaa.gov/oa/climate/research/2008/sep/global.html#temp
    I wouldn’t therefore say that GISS is “wrong”, just that I had a valid point in raising questions about the definitiveness of the record emphasised by an earlier poster.

    And sorry if none of this has anything to do with the tropical troposphere, can’t even remember how this diversion started now!

    [Response: MSU-LT uses off-nadir radiances (unlike MSU 2 or MSU 4) and they are the problematic ones. – gavin]

  141. Pat Neuman:

    There is no “near impossibility of measuring the changes in global temperatures”. Furthermore, it’s easy to see that regional warming is taking place at many climate stations in the Upper Midwest and Alaska, which is being driven by global warming.

    http://picasaweb.google.com/npatnew/ClimateDataMidwestAK#

  142. Ray Ladbury:

    Walt, what is the link?

  143. Brian Klappstein:

    The graphs above may demonstrate the “you can’t prove our models are wrong” thesis, but they do little to convince a skeptic like myself that the models are right. Look at the (A) section: all the sonde datasets are left of the grey confidence zone of the models for a big chunk of the lower troposphere. Look at the (B) section: all the sonde data means plus UAH MSU means are below the model means. And I think it would be worse if the data were run out to 2008.

    Correct me if I’m wrong, but this paper does little more than regurgitate the Douglass results with a wider “confidence zone” for the models. If your aim is to prove that current GHG modeling of the atmosphere is sound, that’s not nearly good enough.

    Regards, BRK

    [Response: Well, since that wasn’t the aim, we are doing ok. What is needed to show that models have skill are comparisons to data that is a) well characterised, and b) with a large signal to noise ratio. Both elements conspicuously absent in the tropical tropospheric trends, but abundant in the 20th Century global mean surface anomaly, or in the stratospheric cooling, or the Arctic amplification or the response to Pinatubo, or the increase in water vapour, or the enhancement of the Antarctic polar vortex etc. Please see the IPCC report for more detailed assessment of these issues. – gavin]

  144. Walt Bennett:

    Re: #142

    Ray,

    http://tech.groups.yahoo.com/group/climatechangedebate/message/22190

    This is Yahoo CCD. You have to be a member to jump in, but membership is automatic.

  145. dagobert:

    Gavin’s response to #143
    “What is needed to show that models have skill are comparisons to data that is a) well characterised, and b) with a large signal to noise ratio.”
    Maybe I misunderstand, but wouldn’t that imply that showing the models have skill depends on a clear signal existing in the first place? My understanding of most of the (let’s call it) skeptical positions from people like Roy Spencer is that they essentially claim exactly that: the absence of a large signal compared to noise (or natural variability). The entire debate is essentially about the question of whether noise is a measurement/statistical problem or the very nature of climate itself. Or is the noise in this case something that can be clearly separated from natural variability?

    [Response: This is a real issue, but for attribution purposes the signal vs. noise has to be determined using the models. Within the models the late 20th C trend is way outside of the ‘noise’, while the 1979-1999 tropical tropospheric trend is not. Part of the uncertainty in the attribution is of course how realistic the ‘noise’ in the models is – and that can be assessed by looking at hindcasts, paleo-climate etc. Roy Spencer notwithstanding, there is no evidence that the models hugely underestimate the background variability. – gavin]

  146. Mark:

    Walt #139

    1- why must it be on the same scale? It works on a 1 m scale; why does it change at a larger scale? At what scale do you say it changes? Why?

    2- Incorrect. Prove it says that.

    3- CO2 mixes, so on the scale of kilometres it is homogeneous. (b) says nothing. (c) is ridiculous: frozen means solid, and there’s no solid surface in the sky.

    note: Won’t change him

  147. Barton Paul Levenson:

    Walt —

    Alexi’s analysis is completely wrong. To begin with, you don’t need to have a 70,000-foot tube to measure CO2 illuminance depletion. All you need is enough different lengths to verify that the Beer-Lambert-Bouguer Law holds. We can also measure the natural solar and back-radiation from the sky.

    The statement that “CO2 mixes up, but temperature stays the same” is contextless and scientifically illiterate. What temperature, under what circumstances? The phrase “stratospheric cooling equilibrates with tropopause conditions before troposphere does” is also meaningless. Alexi is just stringing sciency-sounding phrases and words together in an attempt to intimidate readers who don’t know those terms. How can you compare “stratospheric cooling” with “troposphere?” How does cooling in the stratosphere “equilibrate” with conditions at the tropopause? The phrase is just meaningless. So is “troposphere-surface interface is kept frozen.” All three “prerequisites” exist solely in Alexi’s mind.
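
    The multi-path-length check described above can be sketched numerically. The absorption coefficient below is a made-up illustrative number, not a real CO2 value; the point is only that the Beer-Lambert-Bouguer law can be verified with modest path lengths.

    ```python
    import math

    K = 0.002  # hypothetical absorption coefficient per metre (illustrative only)

    def transmittance(path_m):
        """Beer-Lambert-Bouguer law: I/I0 = exp(-K * L) for a uniform absorber."""
        return math.exp(-K * path_m)

    # No 70,000-foot tube needed: measure at several lengths and check that
    # -ln(I/I0) divided by path length is constant (i.e. the decay is exponential).
    for L in (1.0, 10.0, 100.0):
        print(L, -math.log(transmittance(L)) / L)

    # Exponential attenuation also means two cells in series behave
    # exactly like one longer cell:
    assert math.isclose(transmittance(10) * transmittance(100), transmittance(110))
    ```

    Once the law is confirmed at laboratory scales, extrapolating to an atmospheric column is arithmetic, not speculation.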

  148. Martin Vermeer:

    Walt #139, the guy is browbeating you.

    He doesn’t have any deep understanding of the science, just read enough of it to spell the difficult words right.

    I spotted one error offhand: he says

    “Second, the AGW theory alleges that the “forcing” occurs at the top of atmosphere, not at the surface. …”

    i.e., the same mistake you made, only a few kilometres further up :-)

    (For clarity, the “forcing” is a concept that applies to the atmosphere as a whole, not to any particular level. What happens at the “top of atmosphere” — the level where outgoing radiation leaves for space, not itself a very easy concept — is the restoration of equilibrium: the increase in temperature that, through the Stefan-Boltzmann law at the Earth’s brightness temperature of 255 K, restores the balance between incoming and outgoing energies. This then propagates downward along the temperature gradient, which may also change a little as temperature goes up. Contrary to what Alexi claims, there are no great uncertainties there. Not as long as we talk about the CO2 forcing per se, before the feedbacks.)

    [Response: It’s probably also worth noting that the standard radiative forcing is a diagnostic of what is going on designed so that different forcing mechanisms can be usefully compared. The physics of any particular forcing happens wherever it’s appropriate – ie. every grid box has a change in radiation if there is a change in GHG or aerosol composition; ozone changes affect the stratosphere and troposphere directly; land-use changes affect the surface reflectivity and water cycling etc. – gavin]
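
    The 255 K brightness temperature mentioned above is just the Stefan-Boltzmann effective emission temperature, and is easy to check with standard textbook round numbers for the solar constant and planetary albedo (the values below are those round numbers, not anything computed in this thread):

    ```python
    # Stefan-Boltzmann check of the ~255 K effective emission temperature.
    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0        # solar constant, W m^-2
    ALBEDO = 0.30      # planetary albedo

    absorbed = S0 * (1.0 - ALBEDO) / 4.0   # absorbed flux, averaged over the sphere
    T_eff = (absorbed / SIGMA) ** 0.25     # temperature whose emission balances it
    print(round(T_eff, 1))                 # close to the 255 K quoted above
    ```

    The ~33 K gap between this effective temperature and the observed surface mean is the greenhouse effect itself.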

  149. Brian Klappstein:

    Gavin:

    A few comments on your comments in your statement above:

    Arctic amplification – What about the Arctic warming of the 1930s? I don’t know the details, but I doubt that the models do a good job of replicating it. So if the models “don’t get” non-GHG Arctic warming, why should I believe they “get” GHG-driven polar amplification? You could argue the data on 1930s Arctic warming are too sparse both spatially and temporally to be a realistic test of model performance, but then you start to weaken your point about a “well characterized” global surface temperature. As a final comment on polar amplification, I don’t think you could argue that the South Pole temperature trends offer much support for model performance.

    As for stratospheric cooling, the trend since 1996 looks flat. Sure, it’s only 12 years, but as pointed out above, it’s got a low noise-to-signal ratio. The question regarding stratospheric temperatures is: what have you done for me lately? (model-performance-wise)

    Which brings me to my last point. If you’re going to convince skeptics like myself the models are sound, you’re going to have to get them past the 20th century. You had a posting some months back on an ocean heat content paper which ended in 2002 (I think), leaving the last 5 years blank. And now this paper ends the analysis at 1999, leaving the last 8 years blank. And yet the best data we have for many climate parameters has been gathered over the last 10 years or so.

    [Response: Your comments indicate to me that there is not much point in trying to convince you of anything. All of those features I mentioned were predicted decades before they were observed, and yet you seem to think that just one or two more years will make a difference to your thinking. Sorry, I’m not convinced. What I read instead is a grasping at straws in order to provide a fig leaf of credibility while not paying attention to what’s robust. However, you should check on the details occasionally. You appear to think that the MSU-4 (lower stratosphere) record is what I was talking about – it’s not. That record is mainly driven by ozone depletion (which is slowing due to restrictions on CFC emissions). Cooling in the upper stratosphere and mesosphere is the signal of GHG-related cooling, and that is ongoing (look up the Stratospheric Sounding Units (SSU)). I think hardly anyone would agree with you that Antarctic trends are ‘well characterised’, but the general fact that the southern hemisphere is expected to warm less rapidly than the north is itself predicted by the models.

    As for dates for model runs, these depend on the availability of emission data sets (which take time to collate) and the cutoff dates for submitting data to the central databases. The runs for IPCC AR4 were completed in 2004, with emission datasets that went mainly only up to 2000 and so many of the simulations only went to the end of 1999. However, note that we were assessing the work done in Douglass et al, and I don’t recall you complaining in those earlier discussions that they should have used a longer period. Curiously, in their figure they used 1979-1999 for the models and 1979-2004 for the observations. It turns out not to make much difference, but we preferred to compare like with like. The latest model runs will start next year and they will be up-to-date to at least 2005. – gavin]

  150. Urs Neu:

    re 125
    The baseline of satellite records (incl. UAH) is usually 1979-1998 (20 years). It might change in the future when a 30-year baseline (1979-2008) is available.

  151. Hank Roberts:

    Walt, ‘climate change debate’ wants to “keep debate alive” — look at their suggested reading at the website climatechangedebate.org (anything anybody wants to add is included). It’s a chatgroup meant for maintaining controversy; Yahoo is selling readers to advertisers.

  152. Deep Climate:

    #145
    Speaking of Spencer, his latest musings are at Watts Up in the form of a “simplified” version of a paper he is “preparing to submit” to GRL.
    http://wattsupwiththat.com/2008/10/19/new-paper-from-roy-spencer-pdo-and-clouds/

    The paper argues that climate sensitivity to CO2 is much lower according to “observation” and that simplified “models” combining the PDO and CO2 can “explain” most of 20th-century warming through PDO-induced changes in cloud cover.

    I’m not clear how the oscillation with alternating warming and cooling (presumably about some sort of mean) can cause such a strong warming trend. It even appears the combined effect with CO2 is somehow less than the PDO alone in the early part of the century, if I’m understanding the fitted curves. More to the point, the PDO-temperature relationship clearly breaks down after about 1975 (no surprise there).

    Even before we get to the argument and curves, things go badly wrong. As one commenter has already pointed out, the estimate of sensitivity to CO2 doubling of 1.1 in Schwartz 2007 has already been corrected upward by that author (to 1.9).

    It’s hard to imagine that this paper could get published without serious revision. But in the end the publication of the article may be beside the point.

    “I am posting this information in advance of publication because of its potential importance to pending EPA regulations or congressional legislation which assume that carbon dioxide is a major driver of climate change. Since the news media now refuses to report on peer-reviewed scientific articles which contradict the views of the IPCC, Al Gore, and James Hansen, I am forced to bypass them entirely.”

    I’m not aware of any “pending regulations” or “legislation” that might be enacted any time soon, say by the end of the year. But there is an election soon, so perhaps Spencer means that this is important information for voters to use in deciding which legislators to support.

  153. Rod B:

    Gavin et al. are certainly more qualified and experienced to assess climate models than I am. Nonetheless, the conclusions drawn (#143) seem overly sanguine (though admittedly my characterization is subjective). The IPCC claims the models’ globe-wide mean annual temperature is highly correlated (0.98) with the measured record (ignoring for now the question of the validity and reliability (noise) of the measurements themselves). That is clearly very positive. It also said, for example, that they missed both polar regions badly, had a 10-degree standard deviation over much of the land mass, missed significant portions of the eastern ocean basins, etc. I would agree with Brian K. that this certainly would not disprove the models by any stretch. But it certainly raises eyebrows about the overall validity. “The pieces were screwy but when averaged out together came out pretty good” may be a good indication (proof of the Law of Large Numbers??) but certainly does not prove the process. This applies whether the pieces are parts of the global temperature and the whole is the global mean, or the parts are individual models and the whole is all models thrown together (mathematically) — the latter being especially specious in my mind.

    On the other hand, “proof”, as an absolute, is not usually the appropriate metric, as has been pointed out here numerous times. So a question for anybody (with strong interest in Gavin’s professional opinion): what is your real confidence level, in percent, that the models correctly reflect the temperature deviations?

  154. thingsbreak:

    @152:

    I’m not aware of any “pending regulations” or “legislation” that might be enacted any time soon, say by the end of the year. But there is an election soon, so perhaps Spencer means that this is important information for voters to use in deciding which legislators to support.

    Spencer is probably referring to the fact that both candidates for the Presidency have at least claimed they will enact an emissions cap and trade policy. Furthermore (although I have no idea of whether Spencer was aware of it at the time of writing) Barack Obama has said that absent meaningful legislation on emissions within the first 18 months of inauguration, should he win he will allow the EPA to have the authority to regulate GHG emissions as dangerous pollutants.

    [captcha: Repairs chemist]

  155. Walt Bennett:

    Re: #154

    “Barack Obama has said that absent meaningful legislation on emissions within the first 18 months of inauguration, should he win he will allow the EPA to have the authority to regulate GHG emissions as dangerous pollutants.”

    In other words, Obama has committed to unilateral emissions reduction? Does this mean that the courts will settle how much has to be cleaned up, how soon?

    Does it seem rational for Obama to announce that he will act to curb emissions even in the absence of an agreement that other nations will do the same?

    Moral absolutism as a substitute for rational policy making?

  156. Mark:

    Walt, #154. America agreed to a democratic republic and a written constitution without wondering if anyone else would do it too.

    Because it’s the right thing to do.

  157. Deep Climate:

    Re # 154:
    “Spencer is probably referring to the fact that both candidates …”

    You do seem to agree that the timing has everything to do with the elections (both presidential and congressional) and that there are no “pending regulations”. Obviously, some unknown proposed regulations that might be imposed under some hypothetical scenario in almost two years from now hardly qualify as “pending”.

    But if both presidential candidates are proposing cap and trade systems, how is one supposed to use Spencer’s comments to usefully distinguish between them? Unless, of course, he is hoping that Sarah Palin can convince McCain that contemporary climate change is not primarily attributable to human activities (assuming she is capable of coherently expressing that point of view). And of course perhaps Spencer’s post could also be used to bolster the re-election efforts of congressional “sceptics”.

  158. Rod B:

    Walt asks, “Moral absolutism as a substitute for rational policy making?”

    Yes, so it seems. The advantage is that moral absolutism requires little knowledge.

  159. Kevin McKinney:

    Re 155, 156 & 158:

    The old adage, “Lead, follow, or get out of the way” comes to mind. I think many on this forum would welcome *any* of the above with respect to Administration policy; Bush has been rather good at standing in the way.

    As for the future, someone *always* has to go first.

  160. kevin:

    Wow. Walt and Rod are now characterizing a decision to LET THE EPA DO ITS JOB, a decision based on THE CONSENSUS VIEW OF CLIMATE SCIENTISTS WORLDWIDE, as “moral absolutism,” not “rational policy making,” and a position which “requires little knowledge.” This stuff is Orwellian in the extreme, IMO. I think the harmful “moral absolutism” is in fact coming from people with irrational beliefs that mainstream climate science must be wrong because A) it runs counter to their religious beliefs (the “God wouldn’t let us screw things up” camp, who like to say how we’re too small and insignificant to actually affect Earth’s climate) and/or B) it runs counter to their political beliefs (in that they think environmentalism = liberalism, and that liberalism = the evil commies) and/or C) it runs counter to their fundamentalist belief in the transcendent wisdom of unregulated markets (“get government out of industry’s way and everything will be all right!” [yeah, ok. Ask people who were invested in Enron. Or, nowadays, ask people who were invested in anything else. Better yet, ask a Chinese mother whose baby died because of melamine contamination. That’s what “free” {i.e. unregulated} markets do. They concentrate wealth in a few hands, and stomp everyone else into the ground.]).

    Whew, veered off there for a few minutes. Sorry. Anyway, I don’t think letting the EPA fulfill its mission in light of the best available science is in any sense “moral absolutism as a substitute for rational policy making.”

  161. Walt Bennett:

    Re: #156

    What if it is actually the wrong thing to do? Announcing an intention to “go it alone” seems likely to garner a response of “have at it!” from places like China and India. China already out-emits the US, and if I am not mistaken, India soon will.

    “Good ideas” can often be abysmal policy.

    Re: #160

    When you are prepared to ask me where I stand rather than tell me, please let me know.

  162. kevin:

    Hi, Walt. From where you stand, does allowing the EPA to have the authority to regulate GHG emissions as dangerous pollutants constitute “moral absolutism?”

    If so, then I stand behind what I said in #160. If not, well, your earlier post was a bit misleading, then.

  163. David B. Benson:

    Walt Bennett (161) — Here is the estimate of annual CO2 emissions by human activities:

    http://cdiac.ornl.gov/trends/emis/tre_glob.htm

    Summing all that, you’ll find that very little of it has so far been contributed by China or India.

  164. thingsbreak:

    @155: In other words, Obama has committed to unilateral emissions reduction?

    No, in other words, unlike the Bush administration, Obama would allow the EPA to follow its own conclusions regarding the proper implementation of the Clean Air Act.

    Does it seem rational for Obama to announce that he will act to curb emissions even in the absence of an agreement that other nations will do the same?

    The agreement will come in 2012 (and will be largely settled by 2009), whether the US is part of it in a substantive way or not. We have two options, and only two:

    1. Be a credible partner if not a leader, and commit to firm emissions reductions, and as with the Montreal Protocol present a strong, united front to the developing world with a proper package of carrots and sticks for its compliance.

    OR

    2. Continue to kick the can down the road, and allow our status of “the nation most responsible for total GHG emissions yet still doing nothing” to provide cover for India, China, Brazil, and the rest of the developing world.

    There is no “guarantee” that if we choose 1, we are 100% certain to succeed in bringing the developing world into the fold, but I’ve yet to speak to any rational party that believes that it’s even possible, much less plausible to do so if we choose 2.

  165. Walt Bennett:

    Re: #163

    lol

  166. Chip Knappenberger:

    Re: Response to Comment 143:

    Just a quick note of clarification. Gavin’s use of an “enhancement of the Antarctic polar vortex” to support model behavior is perhaps appropriate, but it seems that far and away the most important player down there is ozone depletion rather than GHG increases, at least according to Karpechko et al. (posted online today at GRL). So, while “an enhancement of the Antarctic polar vortex” may be used to support model replications of observations, it shouldn’t be used to support model replications of the observed impacts of an enhanced greenhouse effect (because models without ozone effects do not replicate the observed trends there well).

    -Chip

    [Response: We were talking about models in general, not effects of CO2 in particular – note that I threw in Pinatubo as well. I don’t suppose anyone assumed as a consequence that I thought volcanoes are caused by CO2 emissions. – gavin]

  167. Walt Bennett:

    Re: #162

    Kevin,

    You are attempting to make simple a complex issue, and David is attempting to use statistics to alter the issue. What exactly do the two of you have against facing reality?

    We are probably already past the first Hansen tipping point, the eventual loss of the Greenland Ice Sheet. If we are serious about saving ourselves from that, and from gliding past other tipping points, then we are going to have to talk rationally about where we are and where we’re headed.

    If Obama does what he’s threatening, no more coal plants will be built in the U.S. Now we all know that day must come; but how soon is it practical to get there? I’m sure there are idealists out there who say ‘NOW! STOP NOW! RIGHT NOW! TODAY!’

    Are you two among those? If so, I’d ask you who will do without electricity. I somehow suspect it will not be the rich.

    If you are not among them, I am sincerely interested in knowing what alternatives can be brought online soon enough to do without the coal plants, or perhaps what real world changes can be made in efficiency.

    My larger point was that without the cooperation of other large emitters of CO2, the Hansen tipping points will surely fall; what would be the difference in years, if the U.S. hits its targets but China and India don’t?

    What would be the real sacrifice in increased costs for goods, services, labor, heat, electricity, fuel? Would that sacrifice be borne by the U.S. alone? What is the plan to account for these increased costs? Are we talking about a Hansenish redistribution? I’d like to know if that’s part of your solution package.

    Obama is fine at saying, “Here’s what we’ll do.”

    Wait a couple of years, and see how much “we” get done. Politics is a mud bath, and everybody gets dirty. Obama may be “transformative”, but I wager that he will not find a way to alter that paradigm.

    The way I see it, he has committed another rookie mistake by announcing his unilateral intentions to curb emissions. I repeat what I said: there will be a long line of nations who wholeheartedly agree to let us go on and do that.

  168. Eric:

    Lucia at Rank Exploits has claimed to apply the methods of Santer to the Global Temperature anomaly and found that most of the model trends do not lie within the 2sigma error of the data trends.

    http://rankexploits.com/musings/2008/lets-apply-the-method-in-santer17-to-gmst-part-1/

    Are her calculations correct? I don’t have access to the data, nor the background and expertise to tell.
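
    For readers wondering what such a comparison involves: the general style of test at issue fits a least-squares trend and then inflates its standard error to account for lag-1 autocorrelation, via an effective sample size. The sketch below illustrates that approach only; it is not Lucia’s code, nor the actual method of the Santer et al. paper.

    ```python
    import math

    def trend_with_adjusted_se(y):
        """OLS trend of an evenly sampled series, with a standard error
        inflated for lag-1 autocorrelation of the residuals (sketch only;
        assumes the residuals are not identically zero)."""
        n = len(y)
        t = list(range(n))
        tm, ym = sum(t) / n, sum(y) / n
        sxx = sum((ti - tm) ** 2 for ti in t)
        b = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y)) / sxx
        a = ym - b * tm
        resid = [yi - (a + b * ti) for ti, yi in zip(t, y)]
        # lag-1 autocorrelation of the residuals
        r1 = sum(resid[i] * resid[i + 1] for i in range(n - 1)) / sum(r * r for r in resid)
        n_eff = n * (1 - r1) / (1 + r1)      # effective number of independent samples
        s2 = sum(r * r for r in resid) / (n_eff - 2)
        return b, math.sqrt(s2 / sxx)

    # Demo on synthetic data: a 0.1-per-step trend plus an oscillation.
    demo = [0.1 * i + 0.5 * math.sin(i) for i in range(50)]
    b, se = trend_with_adjusted_se(demo)
    print(b, se)
    ```

    Two trends are then judged “consistent” when their difference is small relative to the combined uncertainty, e.g. |b_obs − b_model| < 2·sqrt(se_obs² + se_model²); disagreements tend to come down to which uncertainties are included in that combination.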

  169. Hank Roberts:

    “… When the mind of a nation is bowed down by any political superstition in its government,… it loses a considerable portion of its powers on all other subjects and objects….”
    http://www.ushistory.org/paine/rights/c2-03.htm

    Any country that can run quickly ahead of the rest of the world in developing the technology that is more efficient, less polluting, more profitable, and that contributes to the general good of the world will have something valuable and, I hope, be eager to license it on very affordable terms. Ask yourself if your country is better off being ahead, or being behind, in innovation and development.

  170. David B. Benson:

    Walt Bennett (167) — I see it as an ethical issue; some philosophers have attempted to say something productive about this; Hansen already has, more briefly. The point is that the EU, U.S., Canada and Australia all need to control emissions first; so far only the EU has made minimal progress, and none has been made elsewhere (except the Province of British Columbia and a bit in Ontario).

    Joe Romm, on ClimateProgress, has a plan; he states that efficiency is the first goal and much can be done here; then in the U.S. solar thermal and wind will have an increasing role. Turns out China has lots of wind and they are certainly installing numerous wind turbines.

    [But reCAPTCHA states the final answer: “No population”.]

  171. Rod B:

    kevin (160), Speaking of little knowledge, it sounds like you are accusing Walt and me, and not Obama, of being Orwellian, in which case it behooves you to find out what it means. You also ought to bone up on the EPA’s mission and the process (unpoliticised rule of law and inconvenient things like that) it is supposed to go through, not that you would have any interest. It’s much easier to simply and autocratically decree what will be done (which in the oddest stretch imaginable Mark somehow equates with writing the Constitution, etc!!) It’s all a little strange since Obama should have little trouble accomplishing his dictum through the normal processes (though it might take an extra year or two) — but he probably doesn’t know that (there’s that “little knowledge” thing again ;-) )

    Would you be pleased if the EPA (through normal legal processes) finds CO2 a class 1 pollutant and bans it totally from all electric power generation by, say 2012, and from all motor vehicles by, say, 2015? It sounds like you would though maybe it is just the veering off thing I’m reading.

  172. Rod B:

    kevin (162), “allowing the EPA to have the authority to regulate GHG emissions as dangerous pollutants constitute[s] “moral absolutism?” Yes, as you precisely say it. The President does not have legal authority to do that, which makes it absolutism. He can pressure them to make their assessment as the Supreme Court says they can do (wrongly IMO, but that’s not relevant) and encourage the correct outcome (like what some threatened to impeach Bush for doing, …like that), but this was not the assertion.

    [Response: Actually, the supreme court already decided that EPA has this authority. The only block at the moment on doing the necessary findings and plans is at the political appointee level. And that is something that any new president can change. – gavin]

  173. Walt Bennett:

    Re: #169

    Hank,

    I could not have said it better myself.

    Let’s get started on a huge technological revolution, a post-industrial age where we blend all sorts of energy sources into a balanced, sustainable, growable solution.

    And let the U.S. lead the way. I’m all for economic transformation, a tide which lifts all boats.

    I am 100% all for a reality based dedication to such a future.

    Just do me a couple of favors along the way, such as not radically altering the cost structure for delivering fuel to those who need it, and those who seek it for their own sustenance. As I have said before, it is not their fault we are in this mess.

    I would also point out that it is realistic for me to suggest that we will engineer our way forward.

    Re: #164

    We disagree on the certainty of any sort of international agreement which will cause year-over-year reductions in emissions, within any sort of timeframe which will avert “tipping point” levels of CO2. You will recall that Kyoto was “agreed to” as well, and the targets were not met.

    Further, whether it is “right” or “wrong” to “do the ethical thing” (and I would submit that in reality, that’s a very weak argument in terms of its potential to effect change), my point was that if Obama really wants to “do the right thing”, he needs the rest of the world to do the same. It is my opinion that he sacrifices leverage toward that goal when he categorically states that he will unilaterally curb emissions.

    It seems to me that the next move after that would have to be threats of sanctions if other nations don’t reciprocate.

    And what I see at the end of all of that is serious disruption of the lives of those who can least afford the disruption. Since I consider myself among those people, and since I know I am better off than 80% of humanity, I therefore know that the disruption will be truly devastating for perhaps billions of people.

    Meanwhile, the planet will continue to warm *anyway*.

    We certainly need a worldwide commitment to a sustainable future, and if we are talking reality, then we also know that we need to find a way to actually take CO2 levels down, not wait for nature to do it.

    So, if Obama is truly interested in success, then he needs to start with a clean sheet of paper and look at what’s real and what’s possible. Showing his hand before he’s even been elected, as I said, strikes me as another of his rookie mistakes.

  174. Kevin McKinney:

    Walt, why do you keep on referring to “unilateral” cuts? It seems quite likely to me that others will be ahead of the U.S. on this; the EU for instance are busily negotiating targets now.

    Further, an Obama administration would undoubtedly be participating in talks to achieve global (hopefully) or at least internationally agreed action on mitigation. So the “unilateral” bit seems rather a straw man to me.

  175. Mark:

    Walt, 161

    What if the moon was made of cheese?

    How does the small possibility of failure make trying wrong?

    I thought the US was ‘the leader of the free world’! SO LEAD!!!

  176. Barton Paul Levenson:

    Eric writes:

    Lucia at Rank Exploits has claimed to apply the methods of Santer to the Global Temperature anomaly and found that most of the model trends do not lie within the 2sigma error of the data trends.

    http://rankexploits.com/musings/2008/lets-apply-the-method-in-santer17-to-gmst-part-1/

    Are her calculations correct? I don’t have access to the data, nor the background and expertise to tell.

    In a word, no. For details, ask Tamino, who has actually analyzed what Lucia did and pointed out her mistakes:

    http://tamino.wordpress.com

  177. Martin Vermeer:

    Walt, it is pretty clear to me that you are vastly underestimating the power of inspired leadership. There hasn’t been too much of that lately, so perhaps your pessimism is forgivable.

    The way I see it is this: there are two essentially different kinds of international treaties. There are treaties of the first kind, voluntary agreements among sovereign nations, that any one of them may join or leave. And then there are treaties that have the ambition of universality: the UN Charter, the Geneva Conventions, the Comprehensive Test Ban Treaty. These look like treaties, but are in reality emergent international law.

    Climate treaty negotiations have been seen as being of the first kind, due to rich and powerful nations behaving in that way. It doesn’t have to be so. Obama’s example may well bring about a world in which, e.g., the moral status of a coal-fired power plant burning in the atmosphere is no better than that of a nuclear device exploding in it.

    And consider also that the economic development of China and India is premised on export to the West, and will be for a foreseeable future. Imagine how they will look side by side on a shop shelf, a Chinese product from a fossil-fired economy, competing with a carbon-neutral domestic product. Can you hear the calls already for carbon-graded import duties?

    Appearances matter. Morality matters. They will play ball.

  178. Joseph O'Sullivan:

    “A proper paper definitely takes more time and gives generally a better result than a blog post, but the latter can get the essential points out very quickly and can save other people from wasting their time.”

    There is a role for blogging to fill the gaps that peer review can’t and I think Real Climate is doing a good job at it.

    #172 Rod B
    The president does not just have the authority to regulate GHG emissions, he must obey the law and regulate them. The Clean Air Act is clear about this. The Supreme Court decision didn’t even turn on this issue. The only serious issue was if the states and environmentalists had the right to sue.

    Gavin is right, now its just political delay.

  179. Martin Vermeer:

    Barton #176, this is a different computation than the one Tamino commented on.

    I am also in no position to judge the correctness, but it looks reasonable. What it shows is the effect of the structural uncertainty in individual GCMs (meaning that some of them are systematically high, others systematically low, due to flaws in the representation of the physics; most probably related to discretization/parametrization effects for clouds and/or aerosols). It shows up when you use for comparison a longer, globally averaged time series rather than a shorter, tropical-only one.

  180. Ray Ladbury:

    Walt, one of the problems we have with climate mitigation is that we’ve never confronted such a grand issue on a global scale, so nobody is quite sure how to approach it. We’re all standing around, dipping a nervous toe or two into the water and saying, “After you…” “No, after you…” It is quite possible that early adopters will make mistakes that have significant economic consequences. On the other hand, the first one that gets it right, will reap significant advantages selling technology and expertise to the rest of the world. I have to say that I am concerned that I don’t see the sort of leadership from the US business community needed to tackle this crisis. It is hard to believe that they are so myopic as to only see the risks and ignore the opportunities. This is something that has to be done. You can’t negotiate or spin the laws of physics, and as capitalists used to say in a more adventurous time, “The best way to predict the future is to create it.”

  181. Rod B:

    Gavin (172), the Court declared that the EPA can assess the question (the various self-serving spins put on it notwithstanding). It then can decide yea or nay. If it decides yea it would, in turn, have the authority to actually regulate and to determine allowances and prohibitions on emitting CO2, but would have to go through the legally established process to do so. The President can greatly influence this, as I said and you imply, but cannot decree it, as kevin said and Obama implied.

    [Response: I’ve read the findings report – there is no doubt that EPA will decide that CO2 emissions are harmful under the Clean Air Act. Whether it goes forward depends only on the appointee level. – gavin]

  182. Walt Bennett:

    Re: #177

    Martin,

    You correctly describe me as pessimistic that any year-over-year reductions, on the world level or on national levels, will materialize in time to matter.

    You write that appearances matter and morality matters. So they do. What they don’t do, of course, is rule. You can “do the right thing” and still get screwed. Is the U.S. lining up to play patsy? We will see. I only point out that the potential is there.

    Meanwhile, of course, my issue has to do with the cost burden, which will be quite real and quite “now” to those it affects.

    I’m waiting for the “moral leadership” brigade to explain their moral approach to making sure those on the edge don’t go over the edge when energy costs rise.

  183. Rod B:

    Kevin McKinney (174), the EU and members are re-negotiating their own targets that they (save a couple) missed the last time they went ahead of us. Why should we assume they are way out front this time?? And if we made actual goal-reaching cuts, it would be on our own. We’re discussing “unilateral” only because that is what Obama clearly implied, so if it is a straw man go fuss at him.

  184. Rod B:

    Joseph (178), as was done here in RC not too long ago, you’re just reading the Clean Air Act to suit your desires, not as it is actually written.

  185. Martin Vermeer:

    Walt #182

    I’m waiting for the “moral leadership” brigade to explain their moral approach to making sure those on the edge don’t go over the edge when energy costs rise.

    You seem to have no idea how relatively minimal the costs of an effective mitigation programme will actually be… we’re talking mere percents of GDP. In other words, the same sort of money we’re willing to spend on the military, another kind of investment in security. Similarly uncertain, and the same moral question arises; yet somehow the money is found and spent — setting priorities, I guess. Note well, I’m not arguing against the legitimacy of having a defence force, just pointing out that your moral problem, while real enough, is a little broader than the cost of AGW mitigation.

  186. Rod B:

    Gavin (181), which is pretty much what I said…. (though of course don’t like ;-) )

  187. Hank Roberts:

    Walt Bennett Says:
    20 October 2008 at 8:58 PM
    Re: #169
    >…
    >I could not have said it better myself.

    As the guy said after seeing _Hamlet_, eh?

    Read the rest of The Rights of Man
    and ask if you could come even close,
    and consider why the USA should lead.

    http://www.ushistory.org/paine/rights/b2-intr.htm
    —————-
    … such is the irresistible nature of truth, that all it asks, — and all it wants, — is the liberty of appearing. The sun needs no inscription to distinguish him from darkness; and no sooner did the American governments display themselves to the world, than despotism felt a shock and man began to contemplate redress.

    The independence of America … would have been a matter but of little importance, had it not been accompanied by a revolution in the principles and practice of governments. She made a stand, not for herself only, but for the world, and looked beyond the advantages herself could receive. Even the Hessian, though hired to fight against her, may live to bless his defeat; and England, condemning the viciousness of its government, rejoice in its miscarriage.

    As America was the only spot in the political world where the principle of universal reformation could begin, so also was it the best in the natural world. An assemblage of circumstances conspired, not only to give birth, but to add gigantic maturity to its principles. The scene which that country presents to the eye of a spectator, has something in it which generates and encourages great ideas. Nature appears to him in magnitude. The mighty objects he beholds, act upon his mind by enlarging it, and he partakes of the greatness he contemplates…. In such a situation man becomes what he ought. He sees his species, not with the inhuman idea of a natural enemy, but as kindred; and the example shows to the artificial world, that man must go back to Nature for information.
    ————————————–

    And

    http://www.nellbrinkley.net/tompaine.htm

    ——————
    It matters not where you live, or what rank of life you hold, the evil or the blessing will reach you all. The far and the near, the home counties and the back, the rich and the poor, will suffer or rejoice alike. The heart that feels not now is dead; the blood of his children will curse his cowardice, who shrinks back at a time when a little might have saved the whole, and made them happy.
    ——————


  188. Lawrence Brown:

    In April 2007, The Supreme Court handed down a decision
    that ruled that CO2 is a pollutant under the Clean Air Act, and the EPA has the authority to regulate tailpipe greenhouse gas emissions.
    http://environment.about.com/od/environmentallawpolicy/a/epa_greenhouse.htm

  189. Chip Knappenberger:

    Gavin (re:181),

    I know that you serve(d) as a federal expert reviewer of the EPA’s “Technical Support Document for Endangerment Analysis for Greenhouse Gas Emissions under the Clean Air Act,” but it was my impression (actually, it is a fact) that the Public Comment period is still wide open on the Advance Notice of Proposed Rulemaking on Regulating Greenhouse Gases under the Clean Air Act. One of the things that is still open for comment is the endangerment issue. Thus, I should hope that your comment that “I’ve read the findings report – there is no doubt that EPA will decide that CO2 emissions are harmful under the Clean Air Act” is based only on your personal opinion rather than an insider’s knowledge of the way things are to be.

    I for one (among many) am very actively engaged in providing comments on the DRAFT of the “Technical Support Document for Endangerment Analysis for Greenhouse Gas Emissions under the Clean Air Act.” From my initial readings, it is a superior document to the draft of the Unified Synthesis Product released for comment by the USCCSP earlier this summer (for which the public comments were sufficient to have the CCSP rethink the contents and release date of the final document), but it is still in need of some modification and, most importantly, its support of an endangerment finding is still in question.

    Now, maybe you know something that I don’t know (which most assuredly is the case :^) ), but let’s hope for procedure’s sake (if for nothing else) that the decision on endangerment isn’t already made. Otherwise, I am going to be wasting a lot of my time over the next month!

    -Chip

  190. Kevin McKinney:

    In #183 Rod B wrote:

    “Kevin McKinney (174), the EU and members are re-negotiating their own targets that they (save a couple) missed the last time they went ahead of us. Why should we assume they are way out front this time?? And if we made actual goal-reaching cuts, it would be on our own. We’re discussing “unilateral” only because that is what Obama clearly implied, so if it is a straw man go fuss at him.”

    “. . .save for a couple” is still more progress than we have made at a national level here–basically zero–so they are clearly ahead of us now–which would seem to dispose of “it would be on our own,” as well.

    It is not clear to me that Obama implied “unilateral,” given that his Administration is committed to meaningful participation in international negotiations on emissions mitigation. I think it is an assumption that you and/or Walt is/are making because you are viewing the idea in isolation from other (presumptive) Obama policies.

  191. Steve Reynolds:

    Martin: “You seem to have no idea how relatively minimal the costs of an effective mitigation programme will actually be… ”

    I would like to see a credible, peer reviewed, analysis that shows that. Do you have an example of one?

    Then the next step after determining an effective program is to see how it could be implemented without politicians multiplying the cost several fold or making it ineffective with loopholes for their friends.

  192. Joseph O'Sullivan:

    Chip Knappenberger is actually right, but only on a technicality. The rule-making process is still ongoing and it will take time before any of this is resolved by the EPA. Once this process is started it must run through all the legally required steps.

    It really is inevitable that the EPA will have to find greenhouse emissions harmful. The science and jurisprudence all point in one direction. Even if politics trump science and the EPA rules no endangerment the EPA will be overruled in the sure to follow lawsuit.

    Once that is done the language of the CAA dictates the EPA regulate CO2, a development that will traumatize Rod B ;)

  193. Ray Ladbury:

    Steve, many of the measures that can be taken have negative cost–that is they actually save money–but require an upfront investment that serves as a barrier to implementation.
    As to peer-reviewed analysis, would a real-world example suffice instead? Last year, the city of Juneau, AK reduced energy consumption by more than 30% when they were cut off from cheap hydroelectric power by avalanches. They did so without any real preparation, with little central planning and with no damage to the local economy.
    Finally, the thing people need to understand is that this is not optional. If we do not carry out steps to limit increased warming, we will suffer severe consequences down the road.

  194. Hank Roberts:

    Steve, Google.
    E.g.
    http://www.climatechange.ca.gov/research/index.html

    > EPA, carbon

    They’re slow. The lead rule just appeared; it only took fifteen or twenty years after the research became convincing.

    Mercury is still (ahem) up in the air.

    CO2 is as well documented as either of those. It takes time — and votes — to get administrators who will listen to the scientists.

  195. Mark:

    Steve, #191, the Stern Report is as close to that as you get with any economist report. Vetted and checked by lots of people for errors and omissions.

  196. Martin Vermeer:

    Steve, Ray, Mark: don’t forget the IPCC report… WG1 isn’t all there is ;-)

  197. Rod B:

    Joseph says, “…the language of the CAA dictates the EPA regulate CO2, a development that will traumatize Rod B…”

    Along with a jillion other folk, including even some proponents who are blind to the Chinese proverb (which I’ll mangle, but you get the point) to ‘be careful what you wish for’ and are so orgasmic they’re oblivious to potential STDs and other bad stuff. (I wonder if this will get past the spam censor…)

  198. Rod B:

    I think the sanguine attitude on how easy this will all be (a little this, a little that, a bunch of HPFM, and then we all sing songs around the campfire) is way off the realistic mark. But I do admire the optimism (really).

  199. Marcus:

    Rod B, Chip Knappenberger: You might want to look at: http://www.ombwatch.org/article/articleview/4308/1/83/?TopicID=2

    The general assumption is that EPA _found_ endangerment, and that is why OMB never opened the EPA email, and instead of an endangerment finding the EPA went to an ANPR as a tactic to delay until Bush was out of office.

    Obama is, in fact, basically stating, in a fairly reasonable fashion, that he is going to actually follow the Supreme Court ruling and allow EPA to find endangerment which will require action. But, as he points out, Congress is free to step in at any time in the first 18 months and write legislation that preempts the CAA for GHGs. Ideally, this new legislation should actually _control_ GHGs, but if Congress really doesn’t want GHGs to fall under the CAA it could merely pass legislation saying just that.

    And I think there is some flexibility in implementation in the CAA such that a) the EPA can start with big sources and move down (they’ve done that for other pollutants), and b) an expedited approval process could potentially be developed for “small” sources, so even GHG regulation under the CAA (which it was admittedly not designed for) might not be as bad as some fear.

    But I don’t see moral absolutism here. I see following the Law of the Land as passed by Congress and interpreted by the Supreme Court, and a perfectly reasonable window in which Congress can choose another path if they don’t like the Supreme Court’s interpretation. That’s how the different branches of the government are supposed to work together!

  200. Ray Ladbury:

    No, Rod, I don’t think it will be easy. I do think we could do much more than we are doing now fairly easily, and that those efforts might buy crucial time for us to come up with the solutions to the hard part. Do you disagree with any of that?

  201. Steve Reynolds:

    Martin, Ray, and Mark,

    I asked to see a credible, peer reviewed, analysis about “how relatively minimal the costs of an effective mitigation programme will actually be…”.

    I got an anecdote from Ray, a non-peer reviewed political report (extensively disputed in peer reviewed publications) from Mark, and the IPCC from Martin.

    I checked the IPCC AR4 SPM (last page) again, and it still says:

    Limited and early analytical results from integrated analyses of the costs and benefits of mitigation indicate that they are broadly comparable in magnitude, but do not as yet permit an unambiguous determination of an emissions pathway or stabilisation level where benefits exceed costs. {5.7}

    That does not sound very supportive of minimal costs (or even offer any certainty of a positive cost-benefit for mitigation).

  202. Mark:

    Rod #198
    And that’s worse reasoning than you accuse others of.

    Well done.

  203. Martin Vermeer:

    Steve #201, I see you haven’t given up your quote mining habits. Worse, you’re again happy to engage in the lie of conflating uncertainty within an issue with uncertainty on the issue’s reality. Please leave those dirty games to politicians.

    http://www.ipcc.ch/pdf/assessment-report/ar4/wg3/ar4-wg3-spm.pdf

    (BTW I take note that you don’t contest that this summarizes peer reviewed science. Of course it is only a SPM; don’t make me dig in the full report.)

    Look at the SPM of WG3 (above link), Tables SPM.4 (page 12) and SPM.6 (page 18). Third column in both tables “Range of GDP reduction”.

    All ranges quoted are within the ball park of a few percent of GDP that I stated, and that I stand by: below 3 percent up to 2030, and below 5.5 percent up to 2050, even for the most aggressive scenario. Less, and based on more studies, for the others.
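One way to get a feel for such numbers is to spread a cumulative GDP cost over the period as an annual growth-rate drag. The sketch below is illustrative only; the 2.5% baseline growth rate and the 22-year horizon (2008 to 2030) are my assumptions, not figures from the SPM.

```python
# Restating a cumulative GDP cost as an annual growth-rate drag.
# Assumption (illustrative, not from the SPM): baseline growth of 2.5%/yr
# and a 3% total GDP reduction reached by 2030 (22 years out).
years = 22
baseline_growth = 1.025
total_cost = 0.03  # 3% of GDP

# Growth rate that ends up 3% below baseline after `years` years:
adjusted_growth = (baseline_growth ** years * (1 - total_cost)) ** (1 / years)
drag = baseline_growth - adjusted_growth
print(f"annual growth drag: {drag * 100:.2f} percentage points")
```

Under these assumptions a 3% cumulative cost by 2030 works out to a drag of roughly 0.14 percentage points per year on growth.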

    Sure, lots of uncertainty, just as there is in the science. But this is what the odds look like. Policy making under uncertainty is the norm; those refusing to do so don’t deserve to survive.

    BTW this SPM also discusses at some length the “low hanging fruit” that would actually save money mentioned by Ray, and that you dismiss as anecdotal; those are the negative numbers figuring in the above mentioned table columns, mostly at the top. Serious researchers say so based on real-life experience.

    I rest my case. Thank you for wasting my time, perhaps someone else finds this info useful.

  204. Ray Ladbury:

    Steve Reynolds, I still have yet to see a convincing study that limits risk as well, so if risks of climate change are unlimited, we have to mitigate regardless.
    Are you seriously contending that we couldn’t do anything more? Anything we do buys time, and time is key to both mitigation and adaptation. Really, I have to say that I am very disappointed with magic-of-the-marketplace economists in this regard. All I’ve seen from them is handwringing and dire warnings of how any effort to address climate change will bring the end of prosperity. In the end, we HAVE TO address this problem, and if markets and corporations don’t provide answers, people will look to other places for them.

  205. Mark:

    Steve #201: You have one.

    Unless being looked at by parliamentary bodies before being given to the public counts as ‘not peer reviewed’.

    Which would be odd, considering they include real peers…

  206. Walt Bennett:

    With regard to mitigation, in case any of you missed it: that was and remains my point. We aren’t seriously addressing mitigation; we aren’t seriously discussing removing CO2 from the atmosphere. We aren’t even, as far as I know, discussing such basics as how big is the pot of money, and how best to apportion it.

    We are talking, so far as I know, about two things: (1) reducing CO2 emissions and (2) saving forest.

    Anybody who believes that either of those two efforts will generate a net savings in the next decade, is drinking Kool-Aid, in my not-so-humble opinion.

    I want lots more research and development of real mitigation strategies; you know, the kind that have a chance to succeed.

  207. Steve Reynolds:

    Ray: “Are you seriously contending that we couldn’t do anything more. Anything we do buys time, and time is key to both mitigation and adaptation.”

    I agree with that and what you said in 200.

    Ray: “I have to say that I am very disappointed with magic-of-the-marketplace economists in this regard. All I’ve seen from them is handwringing and dire warnings of how any effort to address climate change will bring the end of prosperity.”

    Some have also advised revenue neutral carbon taxes. But those things are most of what the _economists_ can do (advise against inhibiting action of the market).

    The market will respond by the efforts of millions of investors, managers, engineers, and workers actually solving the problems. It is going on right now in response to higher oil prices.

    Martin,

    I find it unnecessary to respond to argument by insult.

  208. Rod B:

    Marcus (199), I think that is a fairly accurate assessment.

    I would have some minor disagreement around the edges — for the record: It’s true that the EPA implementation might not be onerous; but it also could just as easily be terribly burdensome. They do have to look at cost/benefit analyses, but if they decide that the CO2 impact could conceivably destroy a majority of the planet’s population within a few decades, they are not going to give me and others much consideration for the electricity we’ll have to forego. If they are inclined, it is certainly not beyond their capability to stretch the science to their liking. Would calmer heads prevail and restrict their foray? Probably, but they still could get us started down the road to ruin. Then again, as you say, maybe not.

    Secondly, a picayune but significant point. The Court ruled that the EPA had to look at it. They did not rule that the EPA had to find CO2 a pollutant under the CAA, though it would be easy to (wrongly) infer bias toward that in Justice Stevens’ words.

  209. Rod B:

    Ray, setting aside my skepticism for the moment, I agree with what you say. Actually, even given my skepticism, I’m all in favor in doing relatively easy stuff (and “easy” could still be large-scale) as some insurance while we continue the debate and development, and to get a jump on peak oil, and for other secondary benefits. However, what scares me is taking the debate away from learned policy makers (on both sides) and, to a large degree anyway, even from the professional scientists, and dumping this “you bet your world” in the hands of a few bureaucrats. This is what’s behind my “be careful what you wish for” comment.

  210. Rod B:

    Martin V, you imply that making policy decisions in uncertainty always comes out great. I beg to differ.

  211. Kevin McKinney:

    re Martin (#203)–

    Thanks, I do think that will be quite useful information, actually!

  212. Martin Vermeer:

    Rod B #210: on the contrary. I didn’t mean to imply, nor do I believe so. Only that using the knowledge you have, even uncertain knowledge, invariably gives you better odds than ignoring it. (Well, the same odds as an edge case.) How it actually turns out… you win some, you lose some. Ask any military commander.

    Steve R #207: how convenient.

    Kevin #211: (blush)

  213. Ray Ladbury:

    Steve, the market tends to respond to events in real time. It’s got a piss poor record of responding to long-term threats–for instance it is responding to high oil prices even though we’ve known peak oil is coming for a generation. It also does a poor job of factoring in marginal costs when it determines the true cost of a commodity. The usual mitigation against this shortcoming has been regulation–which you and many other classical economists oppose. So here is the challenge:
    1)We have a threat that is known to pose significant economic risk.
    2)It is possible, even probable, that the threat will pose a mortal risk to economic prosperity if not human civilization.
    3)The consequences of the threat remain uncertain and will likely not be immediately evident until it is too late to do anything about them.
    4)How does the market come up with effective mitigation of this threat?

    So far, all we have is either denial or downplaying the risks. Think of this as market capitalism’s final exam question. The advocates of controlled economies, etc. are already hard at work on it. Market capitalism has a blank page so far.

  214. Joseph O'Sullivan:

    I’ll reply one last time then I’ll claim res judicata.

    What the Supreme Court ruled in Massachusetts v EPA from the official decision:
    “Petitioners have standing to challenge the EPA’s denial of their rulemaking petition. Pp. 12–23.”
    http://www.supremecourtus.gov/opinions/06pdf/05-1120.pdf

    A much shorter and non-technical summary from the Pew Center:
    http://www.pewclimate.org/epavsma.cfm

    For procedural reasons unique to the legal system the Court did not tell the EPA to regulate CO2, but for all intents and purposes that is what the ruling means. Even The Wall Street Journal agreed in an editorial this week.

  215. Rod B:

    Martin (212), agreed

  216. Rod B:

    Joseph, I agree with your #214, which doesn’t contradict anything I said.

    The “all intents and purposes” stuff though is an indication, not a legality. The EPA could in fact technically conclude that CO2 need not be regulated and be in full compliance with the Court order. General inferences of what the Court meant, even if valid, do not trump what the Court explicitly said.

  217. Mark:

    Walt, 206

    Your original point was to ask for help with spurious pseudo-scientific nonsense (139).

    Turns out you were a believer like him.

  218. Mark:

    Steve 207:

    ‘The market will respond by the efforts of millions of investors, managers, engineers, and workers actually solving the problems. It is going on right now in response to higher oil prices.’

    Not when these investors are the last ones to be affected by the problem and the first to pay to solve it.

  219. Hank Roberts:

    > last ones to be affected

    You imagine no one beyond your own lifespan will be affected?

  220. Steve Reynolds:

    Ray: “It’s [market] got a piss poor record of responding to long-term threats–for instance it is responding to high oil prices even though we’ve known peak oil is coming for a generation.”

    I disagree. Some companies have a poor record of responding, and may go bankrupt as a result. Others that respond better (such as Toyota) will do well. That is the market at work, eliminating organizations making poor use of resources. I wish it worked that way for government organizations.

    As for your question (which I think is a good one, even if I doubt point #2, let’s assume it for now):

    I agree the market cannot solve this by itself, since no one owns the atmosphere (the much discussed ‘tragedy of the commons’). So given our current world system, governments must set a cost for emitting GHGs. As most economists advise, this should be done with revenue neutral carbon taxes (and not cap and trade or other easily corrupted schemes).

    If the situation is as dire as you propose, the taxes will be sufficiently high for the market to direct vast resources away from fossil fuels to the most viable alternatives. The last thing we want is for these resources to be directed by politicians with a shorter term outlook than nearly anyone else.

  221. Ray Ladbury:

    Toyota did well because the Japanese domestic car market demands small, efficient cars, and that demand has expanded as we approach Peak Oil. Japan’s domestic market demands small, efficient cars because Japan taxes the bejesus out of gasoline. For the US market, Toyota (Honda, too) has been producing much larger vehicles. So it would appear to me that the Japanese government’s policies of artificially increasing the price of fuel deserve most of the credit, not necessarily the farsightedness of corporate management. My point still stands. We’ve known Peak Oil was coming for a generation, and yet we are still poised for an economic shock because markets did not favor development of alternative energy sources as long as oil was cheap.
    Now here’s something I really don’t understand. Why would you favor carbon taxes over cap and trade? Surely, cap and trade is closer to a market system. Yes, it can be corrupted or otherwise screwed up, but that is also true of taxation. And by opting for a tax you are leaving the policy in the hands of the same short-sighted politicians you so deplore.

  222. Hank Roberts:

    Market theory? Agonizing reappraisal.
    http://www.garretthardinsociety.org/articles/art_extension_tragedy_commons.html

  223. William Astley:

    B. Santer et al’s Fact Sheet.

    The following is a link to B. Santer et al.’s fact sheet. (Santer et al. in their paper assert they have refuted the conclusions of Douglass et al. 2007.)

    Looking at the Santer et al’s data, there are some unanswered questions.

    Look at the graph in the fact sheet which shows the tropical surface temperature & tropical mid-troposphere temperature Vs year, 1980 to 2005.

    There is an obvious holistic change pre 1992 vs post 1992, in surface temperature Vs mid-tropospheric temperature.

    https://publicaffairs.llnl.gov/news/…-factsheet.pdf

    There is almost no warming 1980 to 1992 of the mid-troposphere. As Douglass et al. state for that period the tropical mid-troposphere and tropical surface temperatures, track each other and there is no relative increase in the mid-troposphere temperatures. (i.e. The mid-troposphere does not warm 1980 to 1992.)

    After 1992 there is a sharp warming of both surface and troposphere. From 1992 to 2005 the mid-troposphere does warm more than the surface.

    There is however no explanation as to why starting in 1992 the mid-troposphere warming suddenly occurs. i.e. There is a mechanism change pre-1992 vs post-1992.

  224. Mark:

    Hank, #219. No, they could be affected before they die. However, anyone with enough money to be called “an investor” will be the last to be hit.

    E.g. Will someone with two houses be affected before someone who can only afford cheap housing (on a flood plain)?

    GW will affect the third world poor, then the third world rich, then the first world poor then the first world rich. By the time it gets to putting Bill Gates on his uppers or risking his life, it will likely be his children or grand children. Maybe not even then.

  225. pft:

    I try to keep an open mind, but when I read the paper whose sole purpose was to validate the models with observations since 1979, and saw that it stopped at 1999, it was obvious the reason why must have been that observations past 1999 were not predicted by the models. Despite increasing CO2 levels, temperatures are now at 1980 levels. The models can not account for this drop in temperature.

    [Response: Try to make sure that when you open your mind that your brain doesn’t drop out. The period 1979-1999 was chosen because that was what Douglass et al used, and which is the maximum period of overlap between the AR4 model simulations and the observations. See our previous post on the range of model simulations for the recent period – which include a significant number that have similar trends to observed, even while they have the same long term trends. – gavin]
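The point about short records and trend "noise" can be sketched numerically. The series below are synthetic (my illustration, not data from the post), with an assumed fixed 0.02 °C/yr trend plus year-to-year noise; fitting trends over 20-year versus 50-year records shows how much blurrier the short-period estimate is.

```python
# Illustrative sketch: how interannual 'noise' blurs a short-period
# trend estimate.  Synthetic series with a fixed 0.02 C/yr trend plus
# year-to-year noise, fit by least squares over 20 vs 50 years.
import numpy as np

rng = np.random.default_rng(0)
true_trend = 0.02  # C per year

def fitted_trends(n_years, n_trials=2000):
    """Least-squares trend estimates from noisy synthetic records."""
    t = np.arange(n_years)
    trends = []
    for _ in range(n_trials):
        series = true_trend * t + rng.normal(0.0, 0.15, n_years)
        trends.append(np.polyfit(t, series, 1)[0])
    return np.array(trends)

for n in (20, 50):
    spread = fitted_trends(n).std()
    print(f"{n}-year record: trend spread ~ {spread:.4f} C/yr")
# The spread of estimates is several times larger for the 20-year record.
```

The same underlying trend is recovered in both cases; only the scatter of the estimates differs, which is why a 1979-1999 hindcast comparison carries such wide error bars.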

  226. Hank Roberts:

    http://www.shell.com/home/content/hydrogen-en/faq/future_1204.html

    “… gradually … gradually ….”

    Yep, there’s a problem with their public position paragraph; the repetition of the word is a bit spooky.

  227. dagobert:

    Mark #223
    This is true not only for possible impacts of GW but also for the impacts of mitigation strategies like carbon taxes. If you can barely afford a small second hand car for your daily commute, you’ll be hit hard by more expensive fuel, while the typical driver of a Porsche Cayenne probably won’t even notice a difference.
    With cap and trade programs it comes down to the same thing. In the short run, it leads to higher prices for practically everything, which affects the poor before the rich even notice. In order for the markets to swing to carbon-mitigating technologies, something like an external ‘forcing’ is probably required, but it has to be applied very carefully and slowly. The car markets in Japan and Europe had decades to adjust to high fuel prices. Trying to apply the same sort of pressure too quickly to the US market could end in economic and social disaster.

  228. Rod B:

    Hank, I really appreciated your referenced article (222). I think it explains, indirectly at least, why capitalism and market-driven systems need not be destroyed (as some here have maintained) because they are insufficient in some areas, but rather should be supplemented, in those areas that capitalism or free private enterprise (a major part of individual freedoms) is not designed for or hasn’t the wherewithal to handle (à la tragedy of the commons), by appropriate government intervention, rule-making, or regulation. Subject to those requirements, leave capitalism and markets free to do what they do better than anything else.

    One caveat: some take that necessity to control and remove freedoms in certain circumstances, not as a restrained and strongly constrained action but as a rationalization to control and take liberties with total abandon to satisfy other “hidden” agendas. Got to be careful of and watch out for them.

    I don’t know if this is how you intended, but it’s how I took it.

  229. Mark:

    Dagobert #227 I have no clue as to what you’re trying to say.

    What does that have to do with #223? That was solely about how the mitigation cost is an immediate cost the wealthy pay now that won’t affect them for a long time. Point being that ‘the market’ won’t want to solve it: the capitalist system gives the ones with the most money the biggest say, whether they deserve that say or not.

  230. William Astley:

    This comment is in follow up to my comment 223. (The link to Santer et al’s fact sheet did not work in my comment 223 and it is necessary to see the graph from Santer et al’s paper to understand my comment and question.)

    http://www.realclimate.org/docs/santer_etal_IJoC_08_fact_sheet.pdf

    From the above link.

    “Figure Caption: Estimates of observed temperature changes in the tropics (30°N-30°S). Changes are expressed as departures from average conditions over 1979 to 2006.”

    Look at the graph showing the temperature anomalies, tropical surface and tropical mid-troposphere, vs. time 1979 to 2006. Prior to 1992 there is no mid-troposphere warming. From 1992 to 2006 the mid-troposphere does warm more than the surface.

    There is however no explanation as to why starting in 1992 the mid-troposphere warming suddenly occurs. i.e. There is a mechanism change pre-1992 vs post-1992.

    Douglass et al. 2007 note the same observation but reached a different conclusion than Santer et al.

    It seems it is not possible to determine which paper is correct without an explanation of the pre and post 1992 change.

  231. Lauri:

    RE 227
    With cap and trade programs it comes down to the same thing. In the short run, it leads to higher prices for practically everything, which affects the poor before the rich even notice.

    Well, this is true only if you increase the amount taxed. The fiscal system can be adjusted so that environmental taxes do not increase the overall tax rate, or the tax rate by income class. If you have international cap and trade, then, yes, the money that flows to another country has to be collected, net, one way or another.

    This brings up the question of how big a tax we are talking about. A typical CO2 price in computations is 20 $/ton CO2. That would mean (I hope I got the numbers right) 16 cents per gallon. Would the daily commuter notice this difference amid the recent ups and downs in the gas price?
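    Lauri’s per-gallon figure is easy to sanity-check. A rough calculation (mine, not Lauri’s, assuming roughly 8.9 kg of CO2 emitted per gallon of gasoline and “ton” meaning a metric ton) lands in the same ballpark:

```python
# Rough check of the CO2-tax-per-gallon arithmetic in the comment above.
# Assumptions (mine, not the commenter's): ~8.9 kg CO2 emitted per gallon
# of gasoline, and "ton" meaning a metric ton (1000 kg).
CO2_PER_GALLON_KG = 8.9
TAX_PER_TONNE_USD = 20.0

tax_per_kg = TAX_PER_TONNE_USD / 1000.0               # $/kg CO2
tax_per_gallon_cents = 100.0 * tax_per_kg * CO2_PER_GALLON_KG

print(f"{tax_per_gallon_cents:.1f} cents per gallon")
```

    At $20 per metric ton this comes out near 18 cents per gallon; slightly different emission factors or ton definitions would explain the 16-cent figure.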

  232. Fred Staples:

    My occasional attempts to challenge the (90%) certainty about AGW expressed on this site seem to have received shorter shrift than usual.

    If you doubt (136) that current data shows little warming since the seventies, have a look at Tamino’s chart (first “this”, comment 29). Pay particular attention to the temperature scale on the left hand side – 1cm is equivalent to 0.2 degrees centigrade – and think about what we are trying to measure – the global average temperature, all of it, oceans, atmosphere and continents. Better still, take a few temperature readings during the course of a few days in your own environment, and see how they vary.

    As for the underlying climate science, here is a simple experiment. Turn off the bedroom radiator and go to bed on a cold night without bedcovers. When you feel cold, wrap yourself in a thick duvet and think about what happens.

    Are you warming the duvet, or is it warming you?

    What physical characteristic is responsible for the temperature drop across the duvet, inside to outside?

    Has radiation anything to do with the warmth that you experience?

    As for “explanations”, Hank, (138) I am trying to locate one of Gavin’s where I think he said that “in this context”, presumably atmospheric radiation, “heat and energy are equivalent”. Can you help?

  233. Hank Roberts:

    pft, seriously, read the thread Gavin points to.
    Once you understand basic statistics you see the world more clearly — and realize how easily a short run of numbers gets mistaken for a trend. If you don’t have the math, at least read the words and think hard about what they’re trying to teach.

  234. Peter Williams:

    Any idea why the models don’t show the negative temperature trend at 500 hPa that the sondes do? That’s pretty interesting. Where’s the tropopause – 250?

  235. Hank Roberts:

    > negative temperature trend at 500hPa …
    > What’s the tropopause – 250?

    What latitude are you asking about, Peter? Pointer please?

  236. Marcus:

    Fred Staples (#232): Obviously, the answer you are looking for in terms of the duvet is that it reduces heat loss by reducing convective heat losses. Of course, it also reduces radiative heat losses too. The mechanism being that the outside of the duvet is cooler than the inside of the duvet, and therefore the temperature differential with the heat sink is less, and therefore loss of heat is less. The duvet’s effectiveness depends on its conductivity, and on the conductivity of the air trapped between the duvet and the body.

    Of course, in the Earth system, we are surrounded by vacuum, and therefore there is _no_ convection (or conduction) loss. Therefore, for the Earth climate system, radiation is the ONLY heat loss mechanism, and therefore radiative absorption is key.

    But I seem to recall having a previous argument with you about how a vacuum flask works, and if you haven’t figured out how one works yet, you aren’t going to understand heat transfer in the climate system either. You might note that space blankets are popular because in addition to reducing convection, they are effective at reducing heat loss through evaporation and radiation as well. And the reason that cloudless nights are cooler than cloudy ones is because radiation loss is much higher.

  237. Hank Roberts:

    Sorry, Fred

    http://www.google.com/search?q=%E2%80%9Cheat+and+energy+are+equivalent%E

  238. Hank Roberts:

    Typo; same result though, Fred:
    http://www.google.com/search?q=“heat+and+energy+are+equivalent”
    You’re looking for the first law of thermodynamics there.

  239. Peter Williams:

    Re 235: Hank, sorry – I’m not an atmospheric scientist. My background is hydrodynamics and turbulent transport. I’m just saying the sondes clearly show wider swings to the negative and positive than the models do. They’re more “S”-shaped, which is kinda interesting, because it tells you (I think) that there’s some interesting transport going on where the models have some room for improvement. What fraction of that is convective and what fraction is radiative I haven’t a clue, since it’s not my field.

  240. Peter Williams:

    PS … and so where the tropopause is matters for figuring out that transport. I don’t really know the atmosphere, so I’m not sure where that is in hPa, but it’s interesting to note that the sondes clearly show that the atmospheric lapse rate in the lowest part of the troposphere, from the surface to 500 hPa, is becoming larger in magnitude. This region of the atmosphere is becoming more convectively unstable. In contrast, the region between 500 and 200 hPa is becoming less convectively unstable. Any deviation from a purely adiabatic lapse rate should be telling you something about vertical transport. What, I don’t know.

  241. Kevin McKinney:

    Returning to an earlier topic on this thread, I came across a report on EU Kyoto compliance today:

    “In 2006, four EU-15 countries (France, Greece, Sweden and Britain) had already reached a level below their Kyoto target.

    “Eight further EU-15 member states (Austria, Belgium, Finland, Germany, Ireland, Luxembourg, the Netherlands and Portugal) project that they will achieve their targets, but projections from three member states (Denmark, Italy and Spain) indicate that they will not meet their emission reduction goals. Of the eight percent target, 2.7% has already been achieved and should rise to a 3.6% cut through existing policies and measures by 2010. Buying carbon credits will account for another 3% and reforestation for carbon sink purposes another 1.4%.

    “The report also gives a long-term estimate of the emissions situation in Europe. Although emissions are projected to continue decreasing until 2020 in all 27 members of the EU, the 20% reduction target compared with 1990, endorsed by European leaders in 2007, will remain out of reach without the implementation of additional measures, such as the EU energy and climate change package proposed by the European Commission in January 2008, the agency said.”

    See: http://www.industryweek.com/ReadArticle.aspx?ArticleID=17569

    (Captcha: “policy produced”)

  242. Hank Roberts:

    Peter, when you write
    > the sonds clearly show …

    Are you saying this based on something besides looking at the picture at the top of this page? Are you referring to an article or a post on another weblog?

  243. Fred Staples:

    No, Marcus (236), it is not the convective effect. It is the insulating effect – the TOG value, or to be more precise the thermal conductivity of the duvet – which is responsible for most of the warming. If you doubt this, try replacing the duvet with a thin sheet, which will also inhibit the convection.

    For the heat to be removed from your body you have to establish a substantial temperature differential across the duvet, which is why the inside of the duvet (and you) will feel 20 degrees centigrade warmer than the outside temperature if the duvet is thick enough. Radiative effects must be present, but they are negligible in comparison.

    Something similar is true of greenhouses. Everyone quotes the elimination of convection as the reason why the interior is warm. But you can still find radiative explanations of greenhouse warming (W in, 2W radiated from the interior, W back from the glass, W out, Page 18 of Global Warming by John Houghton, for example). To some extent this must be true – the point is that the radiative effect on the inside temperature is negligible.

    If the atmosphere consisted of Oxygen/Nitrogen only, its thermal conductivity would be very low, solar heating would be much the same, and the insulation effect (and the gravitational lapse rate) would produce a substantial temperature differential from the surface to the top of the atmosphere without any radiative absorption. Both the earth and the atmosphere would radiate directly to space. Add absorptive molecules and you will slow the radiation from the earth and add a radiative effect to the temperature differential – mainly from water vapour but also from CO2.

    I do not know how much the radiative effect will add to the surface temperature, but it is certainly not responsible for the whole 33 degrees differential attributed to the presence of the atmosphere. It is only this radiative effect which will be perturbed by additional CO2.

    If we think that the perturbation is significant, as the modellers do and many knowledgeable people do not, we must look to the data for confirmation. As Houghton and Hansen have both pointed out, we are looking for a small differential in an ill-defined temperature which is very difficult to measure accurately anywhere and which varies naturally over time, both randomly and systematically.

    The statistics of trend lines can help. A trend line will be significant if, and only if, the variance of the data about the trend line is sufficiently less than the variance of the data about its own mean. Relatively few data points can produce a significant trend if they are all close to the trend line; many data points may not produce a significant trend if they are widely dispersed.

    The F test in Excel does the maths (Hank, 233), and has the additional bonus of calculating the range of trend lines within which the true trend probably falls. Sadly, the data points must be independent, and the variations about the trend lines must be normally distributed, which means that we cannot use the monthly data (where the downward trend since 2001 is significant), because the values appear to be serially correlated.

    It is obviously easier to establish significance (and be more confident) if we have a long time period and many data points. The downward trend in the annual data since 2001 is not significant now, but if the current 1978 temperatures persist it will be.
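    As a sketch of the trend-significance idea Fred describes (mine, not his actual Excel calculation), here is a minimal ordinary-least-squares fit that returns a slope and its standard error. Note that it assumes independent, identically distributed residuals – exactly the assumption that serially correlated monthly data violates:

```python
import math

def ols_trend(y):
    """Fit y = a + b*t by least squares; return slope b and its standard error.

    Assumes independent residuals -- the assumption the comment above
    warns is violated by serially correlated monthly data.
    """
    n = len(y)
    t = list(range(n))
    tbar = sum(t) / n
    ybar = sum(y) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    sxy = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    slope = sxy / sxx
    intercept = ybar - slope * tbar
    resid = [yi - (intercept + slope * ti) for ti, yi in zip(t, y)]
    s2 = sum(r * r for r in resid) / (n - 2)   # residual variance
    se = math.sqrt(s2 / sxx)                   # standard error of the slope
    return slope, se

# Toy example: a 0.02-per-step trend plus alternating "noise"
y = [0.02 * i + (0.1 if i % 2 else -0.1) for i in range(30)]
slope, se = ols_trend(y)
print(f"trend = {slope:.4f} +/- {se:.4f} per step")
```

    The point of the comment survives in code form: whether a trend is “significant” depends on the residual scatter relative to the trend, and correcting for autocorrelation widens the error bar further.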

    My point, however, is that neither the trend data, nor the underlying theory give any support for the (90%) certainty that most of the warming during the last century was anthropogenic.

    As Sir John Houghton said, in 2001:

    “The fact that the global mean temperature has increased since the late 19th century and that other trends have been observed does not necessarily mean that an anthropogenic effect on the climate has been identified. Climate has always varied on all time-scales, so the observed change may be natural.”

    Since 2001 his doubts must have been reinforced.

    [Response: Hardly – gavin]

  244. Hank Roberts:

    Indeed.

    That short snippet from Houghton (IPCC, 2001) is being blogflogged like crazy recently by people who don’t know how to look these things up. At least they make it easy to identify them by doing so.
    Hoist on their own petards.

    http://www.cambridge.org/uk/catalogue/catalogue.asp?isbn=0521817625

    Global Warming – The Complete Briefing 3rd Edition, John Houghton
    New ISBN: 9780521528740

  245. Ike Solem:

    Re Fred Staples:
    “My point, however, is that neither the trend data, nor the underlying theory give any support for the (90%) certainty that most of the warming during the last century was anthropogenic.”

    Wrong. By your argument, if no one had collected any data, then there would be no possibility of global warming. What if there was no instrumental record whatsoever from the 20th century? What if all the data was erased (by aliens, say)? Could one then say anything with certainty?

    Of course one could! We can use physical theory to predict tides, planetary orbits, trajectories, even the effects of heating of a fluid (like our atmosphere). We are not at all reliant on statistical analysis of data to predict the future.

    It’s worth reminding readers that there are two general approaches to modeling – one is the statistical method, and the other is the dynamical method (sometimes called first principles), and there are all manner of blends.

    To review the dynamical method, we start with the atmosphere and we pretend that none of the air molecules can move – they are fixed in place. This allows us to create a “radiative balance” model, which calculates how energy from sunlight and from the earth’s surface is absorbed and emitted. This gives you an exaggerated temperature profile: too warm at the surface, too cold aloft.

    Then, we add in fluid dynamics and convection (warm air rises, right). Then, we have a radiative-convective model of the atmosphere, which allows us to calculate how changing the composition of the atmosphere changes the radiative effect.

    The so-called greenhouse gases are of interest here, in that they are mostly transparent to sunlight, but opaque to infrared. The earth’s radiative spectrum is all in the infrared, so if you increase the concentration of those gases, you should see some warming. How much? That’s what models are for.
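    A toy version of the “how much warming” question (my own illustrative sketch, not from the comment) is the classic one-layer grey atmosphere: a single layer with longwave emissivity eps over a surface in radiative balance gives Ts = Te * (2/(2 - eps))**0.25, with Te ≈ 255 K the effective emission temperature:

```python
# One-layer "grey atmosphere" toy model (illustrative only; real
# radiative-convective models are far more elaborate).
# Energy balance for a surface under a single longwave-absorbing layer:
#   Ts = Te * (2 / (2 - eps))**0.25
T_EFFECTIVE = 255.0  # K, effective emission temperature of the Earth

def surface_temp(eps):
    """Surface temperature for a layer of longwave emissivity eps (0..1)."""
    return T_EFFECTIVE * (2.0 / (2.0 - eps)) ** 0.25

for eps in (0.0, 0.8, 1.0):
    print(f"eps = {eps:.1f}: Ts = {surface_temp(eps):.1f} K")
```

    With eps = 0 you recover the airless 255 K; a fully absorbing layer gives about 303 K, overshooting the observed ~288 K surface – part of why convection has to be added to the purely radiative model, as the comment describes.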

    Now, if we want to move further into the future, we have to include the oceans, which are also absorbing heat from the atmosphere – so if we warm the atmosphere, we warm the oceans (as well as the land surface). Our planet’s surface is 70% water. This is an area of great complexity. Ocean circulation models are complex, and the exchange between the atmosphere and the ocean is even more so. Are the models reliable? Well, so far they’ve been conservative, haven’t they? The rate of Arctic melting, in reality, has exceeded the model predictions, leaving one to conclude that the models appear to be underestimating the response.

    Furthermore, the models do not predict future changes in atmospheric composition, do they? To do that, they would have to include the entire biosphere, as well as the main uncertainty, future human behavior.

    Can you produce a model that predicts future human behavior? Economists have tried, and all their results have ended in dismal failure. Will rational long-term self-interest outweigh short-term greed? Good luck predicting that one.

    Regardless, the fundamental flaw in your reasoning is that you appear not to understand the difference between statistics and dynamics at all.

  246. Ray Ladbury:

    Oh, Fred, you poor man, you are so confused!

  247. Marcus:

    Fred: What is the thermal conductivity of a vacuum? It is ZERO. So adding a low thermal conductivity atmosphere on top of the planet doesn’t do much if it is transparent to radiation. Basically, in the absence of any longwave-trapping gases, the sun will heat the surface to 255 K. The atmosphere, in equilibrium with the surface, will cool at the dry lapse rate (and be really, really cold at high altitudes). But because it is completely transparent to radiation, the surface will continue to lose heat as a 255 K blackbody straight out to space through _radiation_. Insulation only works if it blocks a relevant heat loss mechanism.

    Again, why is a clear night colder than a cloudy night? Why does a vacuum flask work? Why does a duvet with big holes or a greenhouse with lots of open windows not work very well? Why do space suits have heat exchangers to cool the astronauts? (hint: because vacuum insulates perfectly for everything except radiative heat loss)
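    To put rough illustrative numbers on Marcus’s point (my own figures, not from the comment): compare conduction through a still 5 cm air gap with the net radiative exchange between two surfaces, at roughly skin and outer-duvet temperatures, across the same gap.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
K_AIR = 0.026     # thermal conductivity of still air, W m^-1 K^-1

T_WARM, T_COLD = 305.0, 295.0   # K: roughly skin vs. outer surface
GAP = 0.05                      # m: a 5 cm layer of (hypothetically still) air

# Fourier conduction through the gap
q_conduction = K_AIR * (T_WARM - T_COLD) / GAP
# Net exchange between two black surfaces (emissivity 1 assumed for simplicity)
q_radiation = SIGMA * (T_WARM ** 4 - T_COLD ** 4)

print(f"conduction: {q_conduction:.1f} W/m^2, radiation: {q_radiation:.1f} W/m^2")
```

    Even through perfectly still air, the radiative term here exceeds the conductive one by an order of magnitude, which is why “radiative effects are negligible” doesn’t hold up.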

  248. Steve Carson:

    Sorry to be off-topic, can anyone recommend 2 or 3 good introductory climatology books. I’ve read the IPCC TAR “Overview of the Climate System” – section explaining the basics of climate. I’ve got an engineering degree. I’m motivated to learn. Recommendations appreciated. Thanks.

    And apologies for making some of you angry, but..

    There must be many thousands (tens of thousands?) visiting this site because they aren’t sure what the truth is about human-induced climate change. Systematically characterizing people with opposing viewpoints as deceitful/bad/evil or stupid is not helping your cause. Even if you can see clearly into the hearts and minds of others, keep it to yourselves and just explain climate theories as clearly as you can.

  249. Mark:

    Fred #243, no the TOG doesn’t do ANY heating. Your body does the heating and the duvet slows it getting away from your skin.

    No.
    TOG.
    Heating.

    Now add that the earth/atmosphere/space isn’t you/duvet/atmosphere and you have nothing to state.

    The TOG/duvet is only illustrative of how adding even more insulation can still make a real difference, so the “CO2 is 100% saturated” argument is not a valid reason to expect no more GW.

    Oh, and what happens to you if you are sleeping in a summer night at 20C? Will your body be at 40C? No? Then your “duvet keeps you 20C warmer” is a load too.

  250. Barton Paul Levenson:

    Fred Staples writes:

    If the atmosphere consisted of Oxygen/Nitrogen only, its thermal conductivity would be very low, solar heating would be much the same, and the insulation effect (and the gravitational lapse rate) would produce a substantial temperature differential from the surface to the top of the atmosphere without any radiative absorption.

    No, it would not. With no clouds, the albedo would be about 0.13 and the Earth and atmosphere would both be at about 277 K. You’d have no lapse rate at all.

  251. Barton Paul Levenson:

    Let me correct that — there would also be some albedo from Rayleigh scattering in the cloudless atmosphere. About 6%, so the Earth’s albedo would be perhaps 0.2 and the temperature of surface and atmosphere would be 264 K.
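    Both of Barton’s numbers come from the standard radiative-balance formula T = [S(1 - a)/(4*sigma)]**(1/4). A quick check (assuming a solar constant of about 1366 W/m²; his exact inputs aren’t stated):

```python
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR = 1366.0      # W m^-2, assumed solar constant

def equilibrium_temp(albedo):
    """Blackbody equilibrium temperature for a planet with the given albedo."""
    absorbed = SOLAR * (1.0 - albedo) / 4.0   # globally averaged absorbed flux
    return (absorbed / SIGMA) ** 0.25

for a in (0.3, 0.2, 0.13):
    print(f"albedo {a:.2f}: T = {equilibrium_temp(a):.0f} K")
```

    These inputs reproduce the familiar ~255 K (a = 0.3) and the ~264 K above (a = 0.2); the a = 0.13 case comes out near 269 K rather than 277 K, so the earlier figure presumably used slightly different inputs.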

  252. Uli:

    RE:Steve Carson (#248)
    “can anyone recommend 2 or 3 good introductory climatology books.”
    For a start, the links on this page
    http://www.realclimate.org/index.php/archives/2007/05/start-here/

    For the history of climate science
    http://www.aip.org/history/climate/index.html

    You said you have an engineering degree. Depending on your previous knowledge in physics and mathematics, I suggest Ray Pierrehumbert’s Climate Book
    http://geosci.uchicago.edu/~rtp1/

  253. Ray Ladbury:

    Steve Carson says: “Even if you can see clearly into the hearts and minds of others, keep it to yourselves and just explain climate theories as clearly as you can.”

    Steve, I agree that this is good advice and would adhere to it if I had superhuman patience. However, when the same denialists continue to post the same discredited arguments day after day, ignoring all reasoned replies, it is difficult not to draw some conclusions about either their intelligence or their honesty. Even Jesus suggested forgiving our brethren “seventy times seven” times–many of these guys would be pressing their luck even with him!

  254. Marcus:

    Re: #250: Barton, wouldn’t you still have a lapse rate even in a pure O2/N2 atmosphere? Drop in pressure == drop in temperature and all that? As I understand it, GHGs don’t change the lapse rate (well, with the exception of water), but one way of explaining their effect is that GHGs change the optical thickness of the atmosphere and therefore change the effective radiation height, right? And with zero GHGs, the effective radiation height would be zero.

  255. Marcus:

    Re: #254: Me: Sigh. Lapse rates are hard, let’s go shopping (or maybe I should retake my atmospheric chemistry and dynamics course). Clearly GHGs _can_ have an impact on lapse rates: see, for example, ozone which causes a positive lapse rate in the stratosphere. Also, I found at least one website that claims that increasing GHGs may decrease the lapse rate somewhat (a negative feedback) though I would prefer a more trusted source on that. Having said that, I still hold that an O2/N2 atmosphere would likely have something near to the dry-adiabat lapse rate.

    Tamino has a nice description of lapse rate – thanks Tamino! http://tamino.wordpress.com/2007/07/16/lapse-rate/
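    The dry-adiabatic lapse rate Marcus mentions is just the textbook g/c_p result; for reference:

```python
G = 9.81             # m s^-2, surface gravity
CP_DRY_AIR = 1004.0  # J kg^-1 K^-1, specific heat of dry air, constant pressure

# Dry-adiabatic lapse rate: -dT/dz = g/cp for an adiabatically rising parcel
lapse_rate_K_per_km = (G / CP_DRY_AIR) * 1000.0
print(f"{lapse_rate_K_per_km:.1f} K per km")
```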

    At least I’m not the only one to have confusion in this area – even Gavin can make mistakes:
    http://www.realclimate.org/index.php/archives/2004/12/why-does-the-stratosphere-cool-when-the-troposphere-warms/

    [Response: Why do people keep bringing that up? I’m fully reformed now… ;) gavin]

  256. Mark:

    Marcus, #255. The problem is how you get a lowered lapse rate. Increased uplift of air is about the only global atmospheric effect that could manage that. Or increased water content (so that energy is more efficiently transported by ferrying vapour up to condense, release the energy, and rain out). But that would require more water or more temperature first…

    Barton’s effect is correct, near enough: IR finds a pure O2/N2 atmosphere transparent. Therefore the IR radiation goes straight out. Then you have conduction into the lowest level of the boundary layer. And air is a pretty good insulator.

    That removes nearly all of the conduction that would give the atmosphere a lapse rate. The lapse rate is then just what is required for the pressure at each level to support the mass of atmosphere above.

  257. Ray Ladbury:

    Gavin, FWIW, you have my vote to no longer sit on the Group W bench.

  258. Kevin McKinney:

    Interesting. One consequence of the increasing lapse rate should then be increased turbulence–or should I say, instability–around the tropopause, right? (And I think I recently saw something about observations of this phenomenon, too.) What in turn follows from this?

  259. Mark:

    Does it matter, Kevin?

    We have what we do know.

    We have what we know we don’t know.

    What we know looks bad if we don’t change our ways, what we know we don’t know isn’t going to help things and everything else isn’t going to know how to magically help things, so is as likely to make it worse as better, so best not rely on anything other than “bad” from them.

    Remember, a pessimist is only ever pleasantly surprised.

    (note: one reason I think so many do NOT believe AGW is because they really do believe that the universe won’t let anything bad happen. God, in other words, wouldn’t let that happen. I say, why wouldn’t god try the same thing with the apple of knowledge and see if we’ve learned now whether to use that knowledge to *respect* the creation, or whether we’re no better than mindless automata, demanding our ephemeral needs be succored).

  260. Rod B:

    Mark, interesting sidebar. If you take the entire Bible at its word God allowed a ton of really bad things to happen — even purposefully caused some.

    [Response: No more theology please. – gavin]

  261. Kevin McKinney:

    Matter? In terms of the need to “change our ways,” no–or probably no, at least. In terms of understanding the larger picture, yes of course.

    I’m reading between the lines that you may think I’m one of the brigade that is forever seeking a reason to do nothing now.

    That is somebody else!

    But I am curious, and do believe that there is a good chance somebody on this forum may have a piece of the answer and may be kind enough to share it.

    (Captcha: “cities sincere.” Make of that what you will.)

  262. Peter Williams:

    Uh, re Levenson (250): if there is no radiative coupling whatsoever of the earth to the atmosphere or of the atmosphere to itself, then outside of convective instability (which would lead to an adiabatic lapse rate), the only transport mechanism is conduction. That would seem to support the notion that the atmosphere would be isothermal, unless there’s some weird effect of gravity I’m forgetting.

    Anyway it’s a pretty academic exercise since you’d be hard pressed to find a real planetary atmosphere in which either radiative or convective transport didn’t completely and totally dominate conductive transport.

    So as for (242): I am looking at the same graph as everybody else, at the top of this page. This shows sondes and models, among other things. The sondes (colored data) *clearly* show the lapse rate is changing, becoming more unstable between the surface and 500 hPa and less unstable between 500 and 250 hPa. So for all this discussion of changing lapse rates, the data’s right in front of our noses, isn’t it? :)

  263. Peter Williams:

    Ps re 243 (Fred): Sorry, man, but you’re really out of your depth. It’s good to try to understand this stuff, but even you admit that you “do not know how much the radiative effect will add to the surface temperature” (to use your words).

    Houghton is an OK book, but it’s pretty basic. I’m an astrophysicist, but I wouldn’t pretend to understand how to model the atmosphere without some serious technical reading that goes waaay beyond his book.

    And please, anybody serious about thermal transport understands how a duvet works.

  264. Hank Roberts:

    Kevin, I poked around and didn’t find anything specific about more turbulence at the tropopause; maybe you’re thinking of hurricane strength increasing with increasing lapse rate? Certainly the strength of thunderstorms and thermal columns increases with a higher lapse rate — a nice post-frontal day with a very cold air mass moving in over warm moist air on the ground makes for great hang glider flying and, yes, much more turbulent air.

    That would not necessarily go up to the tropopause — by definition the tropopause is where the lapse rate goes to zero (and it varies and moves around). What scale are you thinking of? More aircraft bumps? better lift for gliders? Higher wind shear at cloud tops? You might mean a lot of things.

    We have a tropopause because of the ozone layer, the air gets colder up to that point; above that, there’s an inversion, warmer air, where the ozone is significantly contributing to the temperature as oxygen, absorbing ultraviolet, gets heated, splits, and recombines as ozone, I gather. Ray Pierrehumbert or any other atmospheric expert can say far more than my vague understanding (have you read his “Science Fiction Atmospheres” and other work at his website? Link is in the sidebar).
    (Captcha: “he Educational”)

  265. Kevin McKinney:

    I appreciate that, Hank. I’m afraid I don’t recall very specifically what I think I saw, or I’d have done more searching myself. I was trying to reason from the general point toward something more specific, motivated purely by curiosity. Thanks, too, for the suggestion on the sidebar item.

  266. Mark:

    Kevin, there were two reasons for #259.

    1) Stop someone using that message inappropriately (“See! You could be wrong!”)
    2) It really doesn’t make much difference and there are more impactful unknowns (made up word) that deserve attention

    As to the note, it was more an idea to consider in explaining why someone really isn’t listening. I find it quite frustrating to think that someone who seems intelligent really isn’t thinking. That idea in the note was more to help reduce the frustration.

  267. Fred Staples:

    Ray Ladbury’s comment (143, ‘Greenspan, Einstein, and Reich’) does suggest that he has looked at Tamino’s composite chart (29, first “this”).

    He can see that it would be absurd to claim that the rise in temperature, 0.6 degrees, from 2000 to 2002 is “climate”, and the fall, 0.6 degrees, from 2005 to date is mere “weather”.

    We could reasonably call the increase and the decrease “weather”, ignore them both, and accept that current temperatures are back to their seventies level (take out 1998/99 El Nino peak also, stop at 1997, and the upward trend from the seventies virtually disappears).

    The fall from the Medieval Warm period into the Little Ice Age had nothing to do with additional CO2. Can we really claim that the rise to the peak in the forties was not equally natural? The acceleration in CO2 concentrations came later, and was accompanied by a fall in temperature down to the trough in the seventies.

    Are there not grounds for reasonable AGW doubt from this data?

    Mr Wexler (157, ‘Greenspan, Einstein, and Reich’) suggests that it is nonsense to treat heat energy as electromagnetic waves, because it is not possible to construct a theory of molecular absorption without photons. In fact, Grant W. Petty, University of Wisconsin–Madison, derives the resonant absorption equations on page 252 of his book A First Course in Atmospheric Radiation. He writes:
    “We see, to our amazement, that the photon frequency associated with the transition is just an integer multiple of the classical resonant frequency of the harmonic oscillator”

    Not, I have to say, to my amazement. Neutrons in reactors behave like particles, random walking across the moderator, slowing by collision, until they are absorbed by fissile material. You can calculate their wave-length equivalent (as you can for a billiard ball) but it adds nothing to the model of physical reality.

    Heat waves in the atmosphere behave like waves. Resonant absorption will convert heat energy to kinetic energy in the molecule (water vapour, CO2 et al) and be transferred by collision to the general atmosphere, from where it will be radiated into space.

    I introduced the duvet, Mark (249), to demonstrate basic insulation. You have to establish a temperature differential across an insulating barrier for heat to transfer. Think about your loft insulation, double glazing, and the clothes you wear (string vests are very effective – it is the air, not the string). Part of the temperature differential across the atmosphere, from earth to space, is due to the same effect.

  268. Hank Roberts:

    Fred, neutrons and billiard balls _are_ particles.

    If the double-slit experiment worked with billiard balls, pool tables would be different and the whole game would be more interesting.

    Also you left a few words out of your explanation, these:

    > … be transferred by collision to the general
    > atmosphere, from where it will be

    transferred back by collision to other molecules that are greenhouse gases, lather rinse repeat, until heat radiated from a greenhouse gas molecule eventually fails to interact before it is

    > radiated into space.

    Mind the gaps.

  269. Kevin McKinney:

    Re Mark, 266: “As to the note, it was more an idea to consider in explaining why someone really isn’t listening.”

    I don’t want to generate another demarche into theology, but I suspect that the ideo- or theo-logy you describe is an important psychological factor for some–Spencer and Christy come to mind.

  270. Kevin McKinney:

    Fred, your attempt to equate current temps to the 70’s just doesn’t fly. (In fact, I can’t imagine what you were thinking of.) (“. . .accept that current temperatures are back to their seventies level. . .”)

    Per the HadCrut data, as rank-ordered on the Hadley site, only five of the years from the seventies–73, 77, 79, 78, and 72–crack the top fifty warmest years. Of these, 73 is the warmest at 26th warmest, followed immediately in rank by 77 and 79; 78 and 72 are the 47th and 48th warmest. (Indeed, for the last two, their anomaly is actually below that of the reference period (61-90.))

    By contrast, *every year* of the current decade except 2000 makes the top ten. (2000 was “merely” the 13th warmest.) Eyeballing the data gives a mean around .3 degrees higher for the 2000s vs. the 70s.

  271. Mark:

    Kevin, Fred wasn’t thinking.

    Or, rather, he was thinking of talking a load of bull to flim flam anyone who comes on here to hear what’s going on.

    At least RodB sometimes asks a good question.

    Fred, never.

  272. Karsten J:

    I don’t think there’s even the faintest possibility that mankind’s dominant forces (which are in more totalitarian control than ever) will do anything about the climate problem before it’s far too late. They’re crushing the climate science under a mountain range of oil- and car-industry propaganda. The media have by now completely buried the climate crisis deep below the “financial crisis”, even though the climate crisis completely dwarfs the financial one (which not one economic theory was able to foresee or can explain, yet those “theories” are met with enormous respect in the same media industry that ridicules the overwhelmingly consensual climate science).

    “I used to believe that collective denial was peculiar to climate change. Now I know that it’s the first response to every impending dislocation.”

    http://www.monbiot.com/archives/2008/10/14/this-is-what-denial-does/

  273. Fred Staples:

    So, Mark and Hank, we believe with 90% certainty that the AGW effect, which began about 250 years ago with the Industrial Revolution (near here at Ironbridge, Shropshire, plentiful wood and water) has waited 250 years to manifest itself as a 0.3 degree increase in the average temperature in the lower atmosphere/surface. This after a 40% increase in atmospheric CO2.

    You could not measure average temperature to that degree of accuracy over 10 years in a garden shed, and if you attempted it you would need quality control procedures which are manifestly lacking across the US (probably the best available apart from the UK) – see Watts Up with That.

    If you think you know the sea surface (more than 70% of the total) temperatures sufficiently accurately, look at this post:

    http://earthobservatory.nasa.gov/Features/OceanCooling/page2.php

    I object to the unjustified certainty, and to the absence of laboratory-based experimental evidence (Angstrom’s demonstration of saturation is dismissed as “botched”, and Wood’s greenhouse experiment as irrelevant). Nothing is offered in their place.

    It would be absurd if it were not so dangerous. The US economy will suffer even more if the automobile industry fails. In the UK, power supplies will fail about seven years from now unless we invest in new coal-fired stations – it is too late for nuclear power. We are now converting corn to ethanol (when I wrote professional papers on this subject, light years ago, I raised that possibility as a joke).

    Meanwhile the chances of China and India foregoing their Industrial Revolutions are vanishingly small, so CO2 will continue to increase and the temperatures will obstinately fail to respond (I am 90% certain).

  274. Brian B:

    Hmm…so let me see if I’ve got this straight.

    Even using your 2-sigma levels, 4 of the 7 radiosonde data sets
    have trend averages that lie outside the 95% confidence interval
    for the models. In addition, the average of all radiosonde trends
    also lies outside the 95% confidence level. And this is evidence
    of NO discrepancy between the models and data?

    [Response: No. It is evidence that there is no obvious discrepancy with the models. On its own this data is not a very strong constraint on anything much, but what you can’t say is that it proves that there is a discrepancy. – gavin]

    You do understand, I hope, that the likelihood of even one data-set
    average falling outside the lower end of the interval by random chance
    is only 2.3%. The likelihood of 4 out of 7 doing so is less than 35*(p)^4,
    or 1 part in 100,000. This implies a real and obvious discrepancy between
    the data and models, with systematic errors existing for the data or models,
    or both. Your own reworking of the statistics still supports the conclusion
    of Douglass et al.
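
    The binomial arithmetic behind the “35*(p)^4” figure can be sketched directly; note it assumes the seven radiosonde estimates are statistically independent, which is exactly the assumption disputed later in this thread:

```python
from math import comb

# Chance that a single trend estimate falls below the lower 2-sigma
# bound of the model distribution (one-sided tail), as quoted above
p = 0.023

# Probability that exactly 4 of 7 independent estimates do so:
# the binomial term C(7,4) * p^4 * (1-p)^3, which the comment's
# "35*(p)^4" bound slightly overstates
exact = comb(7, 4) * p**4 * (1 - p)**3
print(exact)  # roughly 9e-6, i.e. about 1 in 100,000
```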

    I would add that ALL data sets (radiosonde and satellite) except for UMd
    agree with NO WARMING TREND at the 1-sigma level. Unless large systematic
    errors can be identified for all data sets, one must conclude that no warming
    is occurring in the troposphere at tropical latitudes.

    My final point is that the statistical approach of Douglass et al. is correct.
    When looking for SYSTEMATIC deviations between data and model simulations,
    one calculates the mean and the standard deviation of the mean for each
    and compares. One wants to know, after all, how uncertain the plotted mean value
    is.

    [Response: The observation is not the expected mean, and so the test they performed makes no sense at all. – gavin]

  275. Brian B:

    “[Response: No. It is evidence that there is no obvious discrepancy with the models. On its own this data is not a very strong constraint on anything much, but what you can’t say is that it proves that there is a discrepancy. – gavin]”

    Gavin,

    Thanks for your response. No, the data don’t “prove” there’s a discrepancy. As a scientist, I think you know better than to use such loaded language. But the data do strongly suggest that a discrepancy exists. The discrepancy is obvious to me.

    But let’s be more precise in our language. As I interpret it, the observed data have less than a 2.3% chance of occurring if the models are correct. For a 30-year trend, this would amount to a once-in-a-millennium fluctuation, assuming no systematic error. I would therefore consider the “no obvious discrepancy” hypothesis unlikely. If you disagree, then you should say at what confidence level you would consider the discrepancy obvious.

    [Response: But you have ignored the uncertainty in the data – which is much larger than this. If the data were perfect (which it isn’t – look at the spread in the different estimates of it), then you could make that kind of calculation (but I have no idea what you are comparing here – what field? which version of the data and which distribution of model simulations?). – gavin]

    The approach of Douglass et al. makes sense to me, but I can be convinced otherwise. Unfortunately, I don’t understand what you mean by “the observation is not the expected mean.” The T2 and T2LT data points are averaged over altitude and (for the theory) the various models. Why wouldn’t one then calculate the standard deviation of the mean to compare the two?

    [Response: There are two issues here – one is the forced signal, defined as the average of what you would get with the same (transient) external conditions over an infinite number of multiple Earths. Second is what any one realisation of the weather would give. It is the difference between knowing that 3.5 is the mean after throwing a die an infinite number of times, and seeing a 2 on an individual throw. The particular sequence of dice throws is a random realisation of an underlying stochastic process with a well defined mean and distribution. The same is true for the MSU trend. If you ran the planet over, you wouldn’t get the same thing. Therefore, you can’t assume that the observed MSU trend defines the underlying ‘forced’ trend – instead it has a component of random noise. This would become smaller with a longer time-series, but in the comparison period selected by Douglass (1979-1999), it is still a big term. – gavin]
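
    The dice analogy is easy to demonstrate numerically. A minimal sketch, with an assumed and purely illustrative forced trend and white-noise amplitude: fit a least-squares trend to many synthetic realisations and watch the spread of the fitted trend shrink as the record lengthens; any single realisation, like the observed MSU record, is one draw from that spread:

```python
import random
import statistics

def fitted_trend(n_years, forced=0.02, noise_sd=0.1):
    # One 'realisation of the weather': a forced linear trend plus
    # white noise, then an ordinary least-squares slope through it
    t = list(range(n_years))
    y = [forced * yr + random.gauss(0.0, noise_sd) for yr in t]
    tbar = statistics.mean(t)
    ybar = statistics.mean(y)
    num = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    den = sum((ti - tbar) ** 2 for ti in t)
    return num / den

random.seed(0)
short = [fitted_trend(21) for _ in range(2000)]   # a 1979-1999-length record
long_ = [fitted_trend(50) for _ in range(2000)]

# The spread of the estimated trend around the forced 0.02/yr shrinks
# with record length; the observed trend is one draw, not the mean
print(statistics.stdev(short), statistics.stdev(long_))
```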

  276. Brian B.:

    Gavin,

    Thank you for your substantive response. I agree with your point about the uncertainty on the model mean. The sigma you provide gives the range of possible outcomes and the data are one particular realization of those weather conditions over the 30-year period. I
    therefore accept your criticism of Douglass et al. on this point, though with some caveats stated below.

    I also agree that my calculations above have ignored the large uncertainty in the data. But this is easy to account for. When comparing two means, each with an associated uncertainty, the distribution of the means has a width given by adding the individual widths in quadrature. That is, (sigma-means)^2 = (sigma-data)^2 + (sigma-model)^2. In general, sigma-means can never be bigger than 1.4 times the larger of the two sigmas. As an example, we can compare the radiosonde averages for both T2LT and T2 to the model means. For simplicity, I will use the data mean and the average sigma of the data sets to represent a “typical” radiosonde data set. Using the broad uncertainty you provide for the models (weather noise, etc.), I calculate that the T2LT and T2 means deviate from the model means at the level of 1.25 and 1.26 (sigma-means), respectively. This implies that a typical radiosonde data set has only a 10.4 – 10.6% chance of being that far below the model predictions, in the absence of systematic deviations.

    [Response: But that is a big term also (see our previous discussion). – gavin]

    Using the language of the IPCC, I would say there’s a “likely” chance of a real discrepancy between the data and theory. This hardly agrees with your claim of “no obvious discrepancy.”

    But your position deteriorates further when considering an important refinement of this analysis. All of the radiosonde data are measuring the tropospheric temperature trend under the same noisy (i.e., weather) conditions, so we can usefully calculate the overall mean AND standard deviation of the means. The latter is calculated directly from the seven data sets, but it should be close to (sigma-data)/sqrt(7), which it is. This greatly reduces the uncertainty in the overall radiosonde mean, and implies that the model uncertainty dominates in the calculation of (sigma-means).

    [Response: But that assumes they are all independent estimates, and they certainly aren’t since they use the same raw data. – gavin]

    Consequently, the overall data means for T2LT and T2 deviate from the model means at the level of 1.82 and 1.63 (sigma-means), respectively. This implies that the aggregate radiosonde data have only a 3.5 and 5.2% chance of being that far below the model predictions without systematic deviations. I would conclude that a real discrepancy between the radiosonde data and the models is “highly likely.”
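
    The quadrature arithmetic in the paragraphs above can be sketched as follows (the z-values are the ones quoted; the tail probabilities come from the normal CDF, and the one-sided reading is this comment’s own choice):

```python
from math import erf, sqrt

def one_sided_tail(z):
    # P(Z <= -z) for a standard normal variable, via the error function
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def combined_sigma(sigma_data, sigma_model):
    # Widths add in quadrature when differencing two independent means
    return sqrt(sigma_data**2 + sigma_model**2)

# z-scores quoted above: typical radiosonde set (T2LT, T2), then the
# aggregate radiosonde means (T2LT, T2)
for z in (1.25, 1.26, 1.82, 1.63):
    print(f"z = {z:.2f}: one-sided tail = {one_sided_tail(z):.1%}")
```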

    Now, the above analysis assumes the model uncertainties given in Santer et al. My two caveats are as follows. First, doesn’t the model uncertainty include both model noise (i.e., weather fluctuations) and systematic differences among the models? It’s not valid to calculate (sigma-model) based on both. This makes the noise seem larger than it is.

    [Response: Over short periods the size of the weather noise is significantly larger than the structural differences in the models. – gavin]

    Second, it is not valid to treat the model noise as if it is unconstrained by observation. Specifically, any weather noise over the last 30 years in the troposphere must hold for the surface as well. If we have indeed experienced 1-chance-in-20 low trend for the last 30 years, it should be evident in the surface data.

    Alternatively, weather fluctuations in the models that are not consistent with the surface data should be eliminated altogether.
    Doing so, I think, would reduce the model uncertainty.

    [Response: If you screen the models to have surface trends similar to that observed, you do reduce the tropospheric range of responses, but error bars still overlap with the uncertainty in the obs. (I did this calculation back in the first comment on this subject). – gavin]

  277. tamino:

    Re: #276 (Brian B)

    … 1.25 and 1.26 (sigma-means), respectively. This implies that a typical radiosonde data set has only a 10.4 – 10.6% chance of being that far below the model predictions …

    This suggests you’re using a one-sided test. A two-sided test will double that chance; I see no justification whatever for a one-sided test.

  278. Hank Roberts:

    > So, Mark and Hank, we believe …

    Ah, Fred’s back, again attributing his ideas to other people.
    Using wattsface for his science facts, and belief tanks for his political fundamental assumptions.

    Boring, Fred.

  279. Brian B:

    tamino,

    I am not using any kind of “test.” I am making a straightforward
    and precise statement based on the numbers. There’s only a 10% chance
    of any data point falling in the range 1.25 sigma or more below the
    model mean. Note that ALL the data points fall in the lower range.
    The data themselves are one-sided, precisely because there is a discrepancy.
    This is also why averaging them together gives an even tighter condition,
    making the discrepancy all but certain.

  280. Brian B:

    [Response: But that assumes they are all independent estimates, and they certainly aren’t since they use the same raw data. – gavin]

    Gavin,

    Averaging together the data points does not assume they are independent
    estimates. Quite the opposite, it assumes they are estimates of the same
    underlying data. This is no different from your averaging together the models,
    or averaging the radiosonde data over altitude. In this case, the variation
    is due to differences in how each group analyzes the data. Since we have
    no way of determining who is right and who is wrong, this variation can be
    treated as random error. Any wrong assumptions held in common are, of course,
    systematic error.

    “[Response: If you screen the models to have surface trends similar to that observed, you do reduce the tropospheric range of responses, but error bars still overlap with the uncertainty in the obs. (I did this calculation back in the first comment on this subject). – gavin]”

    Overlapping error bars are irrelevant. You have to CALCULATE the probability
    of such an arrangement, as I’ve done above. If the model error bars reduce,
    then the probability of getting the data we see becomes even less than
    the 3 – 5% chances I’ve quoted above. For the 30-year radiosonde data, this
    requires once-in-more-than-a-millennium noise for there to be no discrepancy.
    Highly unlikely.

    [Response: Sorry, but you have completely misunderstood the nature of the uncertainty here. The errors in the radiosonde data are systematic, not random. Averaging together differently systematically biased numbers does not remove the bias, however many times you do it. Neither are the different methods used to treat the data random, they are structurally different, and can’t be averaged together to improve the estimate. As for ‘overlapping error bars’, this is a reasonable heuristic for this level of conversation – better CALCULATIONS can be found in Santer et al, 2008 (see how annoying that is?).

    Let me give you an example. Someone has a scale with a non-linear error (because the spring is worn out or something) – this has the effect of progressively under-predicting the weight as the weight gets larger. Say the true calibration is W=M*(1+M/0.81), where M is the measured weight and W the true weight. Thus the error at 1kg is 10% (the measure would be 0.9kg), and the error at 2kg is 16% (the measure would be 1.66kg), etc. Now someone takes a weight, makes a measurement, and gets 1.52kg. Someone else comes along, calibrates the scale at 1kg, estimates a correction of 10%, and estimates the true weight as 1.69kg (which is better but still off). Now you come along and suggest the two estimates are independent random estimates of the same value, and suggest that the actual value is (1.52+1.69)/2=1.61kg with some improved accuracy. Isn’t it obvious that this isn’t appropriate? (The real answer in this constructed case is 1.8kg.) – gavin]
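
    The general point – that averaging systematically biased estimates does not remove the bias – can be sketched with any under-reading scale (the bias curve below is an assumption for illustration, not the calibration in the comment):

```python
def measure(true_w):
    # Assumed bias: the scale under-reads progressively more as the
    # weight grows (1 kg reads 0.9 kg, 2 kg reads 1.6 kg)
    return true_w * (1.0 - 0.1 * true_w)

true_w = 1.6
m = measure(true_w)      # raw reading, about 1.34 kg
m_corrected = m / 0.9    # naive correction using only the 1 kg calibration point

# Treating the two as independent random estimates and averaging them:
avg = (m + m_corrected) / 2.0
# Both inputs are biased low, so the average is still biased low
print(m, m_corrected, avg, "true:", true_w)
```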

  281. tamino:

    Re: #279 (Brian B)

    I am not using any kind of “test.” I am making a straightforward
    and precise statement based on the numbers. There’s only a 10% chance of any data point falling in the range 1.25 sigma or more below the model mean.

    Obfuscation. [edit] On the basis of this “precise statement” you concluded

    there’s a “likely” chance of a real discrepancy between the data and theory

    Discrepancies can be high or low, so one must account for that when making statements such as this. All your “probabilities” are only half as large as they should be.

    You finish with the truly ludicrous:

    The data themselves are one-sided, precisely because there is a discrepancy.

    Not only do you fail to understand the nature of systematic bias and autocorrelation, this statement reveals that you assumed your hypothesis was true even before you applied any analysis.

    You made up your mind that you’re right, then fudged the numbers to make it seem objective. [edit]

  282. Brian B:

    Gavin,

    I haven’t misunderstood anything. I understood from the beginning that
    each radiosonde data set involves systematic differences in the way the
    data is processed. There may also be systematic errors that are common
    to all data sets. Averaging together the various data sets is useful for
    identifying these latter errors. The same is true of the models. They also
    contain systematic errors that differ from model to model, in addition to
    systematic errors that are common to all. Averaging them together, as you
    did in Santer, et al., allows these latter errors to be identified.
    Comparison of data and model means is not helpful for identifying
    systematic errors that are unique to each model/data set. At this level
    of comparison, such systematic errors look like random error and simply
    increase the apparent noise level. Is this distinction difficult to understand?

    In your example, the distinction I’m making is adequately demonstrated
    (though your formula seems not to work–W = M*(1+M/0.81) gives 90% error
    at M = 0.9 kg., doesn’t it?). Averaging the two results together does not
    give a more accurate estimate–I never claimed that it did. But comparison
    to the actual weight would point to a systematic error present in both analysis
    methods.

    Your overlapping error bars heuristic is not adequate at any level of discussion.
    Suppose the theory and data have the same error bars and the means are separated
    by 3 sigma. The 2-sigma error bars will overlap substantially. Does this imply
    “no significant discrepancy?” No. The distribution for the means will have error
    bars bigger by sqrt(2) = 1.4, so that the means differ by 3/1.4 = 2.1 sigma, clearly
    a statistically significant discrepancy.
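
    That overlap example is straightforward to check numerically; a sketch with two equal-width normal distributions whose means sit 3 sigma apart:

```python
from math import erf, sqrt

sigma = 1.0
mean_a, mean_b = 0.0, 3.0 * sigma    # means separated by 3 sigma

# The 2-sigma error bars [mean - 2s, mean + 2s] overlap substantially:
bars_overlap = (mean_a + 2 * sigma) > (mean_b - 2 * sigma)

# But the difference of the means has width sqrt(2) * sigma, so the
# separation in units of that width is about 2.1 sigma:
z = (mean_b - mean_a) / (sqrt(2) * sigma)

# Two-sided tail probability of a deviation at least that large
p_two_sided = 2 * 0.5 * (1 - erf(z / sqrt(2)))
print(bars_overlap, round(z, 2), p_two_sided)
```

    So overlapping 2-sigma bars coexist with a roughly 3% two-sided probability, i.e. a significant difference at the 95% level, which is the point being made.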

    Your claim of a better calculation in Santer, et al. is not annoying to me at all.
    I’ve read the paper multiple times but couldn’t find the number I was looking for.
    Perhaps I simply missed it. In that case, you should have no trouble quoting the
    confidence level calculated in Santer, et al. That’s really what I’ve asked for
    from the beginning. A lot of typing on both sides could have been saved by simply
    supplying the number.

  283. Brian B:

    tamino,

    Hmmm…you accuse me of obfuscation, ludicrousness, and fudging.
    This is not exactly a substantive reply on your part. I think you
    can do better than that. Just because you don’t like my conclusions
    doesn’t mean I’ve made a mistake or that I have a prearranged conclusion.

    Discrepancies can indeed go both ways. With random error they always do.
    The fact that the data points are all consistently low seems to point
    to a systematic error. This suspicion is supported by my calculation.

    If I fail to understand the “nature of systematic bias and
    autocorrelation,” then perhaps you can point out how. I am
    always happy to learn new things and am quick to acknowledge
    errors on my part when they become clear to me. :)

  284. tamino:

    Re: #283 (Brian B)

    I am always happy to learn new things and am quick to acknowledge
    errors on my part when they become clear to me.

    You claim to understand that deviations can go both ways. But you persist in defending the use of a one-sided statistic for a deviation that can go either way.

    This isn’t esoteric — it’s as basic as it gets, and is an error on your part. Let us know when it becomes clear to you.