RealClimate

Comments

RSS feed for comments on this post.

  1. DC FACTOID SHOWER ALERT

    Singer & Co will be unleashing it all on the media tomorrow-
    http://adamant.typepad.com/seitz/2007/12/factoid-shower.html

    [Response: Be sure to have your umbrella handy… (but actually this press conference has been cancelled) – gavin]

    [Response: Update: despite getting an email notifying us of the cancellation of this press conference, it turns out it went ahead anyway… – gavin]

    Comment by Russell Seitz — 12 Dec 2007 @ 5:31 PM

  2. Better take the kevlar one.

    Comment by Russell Seitz — 12 Dec 2007 @ 5:35 PM

  3. In your last figure, shouldn’t you exclude all models with a surface trend outside the observed confidence interval at the surface?

    [Response: The model ensemble is supposed to be a collection of possible realisations given the appropriate forcing. The actual surface or mid-troposphere trends will be different in each case. The issue here is whether the trends seen at any individual level in the obs fall within the sample of model outputs. Now you could have done this differently by focussing on the ratio of trends – and I think you could work that out from their paper, but ratios have tricky noise characteristics if the denominator gets small (as here). This is what was done in Santer et al, 2005 and the CCSP report though. Their conclusion was the same as ours (or Thorne et al), you cannot reliably detect a difference between the models and the obs. – gavin]

    Comment by viento — 12 Dec 2007 @ 5:49 PM
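Gavin's caution above about trend ratios when the denominator gets small is easy to demonstrate with a quick Monte Carlo sketch (all numbers below are purely illustrative, not taken from the paper):

```python
import random

random.seed(0)

# Illustrative values only: a "true" surface trend and an amplified
# tropospheric trend (degC/decade), plus Gaussian sampling noise.
true_surf, true_trop, sigma = 0.12, 0.19, 0.08

ratios = []
for _ in range(100000):
    surf = random.gauss(true_surf, sigma)
    trop = random.gauss(true_trop, sigma)
    ratios.append(trop / surf)

# The ratio blows up whenever the noisy denominator wanders near zero,
# so its sample spread is enormous even though the inputs are well behaved.
ratios.sort()
p5, p95 = ratios[5000], ratios[95000]
print(f"5th-95th percentile of trop/surf ratio: {p5:.2f} to {p95:.2f}")
```

Even though both trends carry ordinary Gaussian noise, the ratio's tails are huge because the surface-trend denominator occasionally passes near zero, which is exactly the "tricky noise characteristics" the response mentions.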

  4. I see your point, but I think I do not agree completely with you. Perhaps the failure to detect a difference is just a sign that the statistical test is not powerful enough, not that the signal is not there.
    Imagine for instance that we take thousands of simulations, each one conducted with realistic and unrealistic forcings and with realistic and unrealistic models. If you do not discriminate you end up with temperature profiles all over the place. I think one should define a priori criteria for the realism of the simulations.
    In other words, if we take just random temperature profiles, we probably cannot detect a signal either, but the exercise will not be very informative.

    Another unrelated question, and please, correct me if I am wrong. Higher temperatures aloft imply a stronger negative feedback (?), so what Douglas et al are arguing is that models are wrong because their feedback is too negative??

    [Response: You could certainly calculate the conditional distribution of the tropospheric trends given a reasonable estimate of the surface trend. I’ll have a look. Your interpretation of the D et al claim might be correct – I’ll think about that as well… – gavin]

    Comment by viento — 12 Dec 2007 @ 6:22 PM

  5. A few questions,

    Am I the only one that finds it odd that the observations have to be within the uncertainty of the models? Shouldn’t it be the other way around?

    [Response: It depends on the question. Douglass et al claim that the observed trends are inconsistent with the models. The range in the models occurs mainly because of unforced ‘noise’ in the simulations and given that noise (which can be characterised by the model spread), there is no difference. If we were looking at a different metric, perhaps for a longer period, for which noise was not an issue, it would be the other way around. – gavin]

    Why 2-sigma instead of only 1-sigma for the uncertainty of the models? Are there realizations of the models that show as weak a trend in the lower troposphere as models-2sigma in the last plot?

    [Response: Douglass used 2 sigma, we followed. These data are from the paper – the three lowest trends at 850 mb are 0.058, 0.073 and 0.099 degC/dec.]

    Why should things like El Niño add noise to the trend? Radiation and convection, which drive the temperature structure in the tropics, act on a very short time scale, so vertical temperatures in the tropics should relax to moist-adiabatic on that same short time scale. (This would be a problem only if we were trying to look at hourly or daily trends.)

    [Response: Because ENSO is a huge signal compared to the trends and so the structure of the last 25 years is quite sensitive to where any model El Ninos occur. You therefore need to average over many realisations.]

    “The authors of Douglass et al were given this last version along with the one they used, yet they only decided to show the first (the one with the smallest tropical trend) without any additional comment even though they knew their results would be less clear.”

    [Response: From the RAOBCORE group among others. ]

    How do you know this? Did you edit or referee the paper? If you knew this much, perhaps you could explain to us why the paper was published anyway.

    [Response: I cannot explain how this was published. Had I reviewed it, I would have made these same comments. ]

    Why is there no surface point in the last plot?

    [Response: It just shows the RAOBCORE v1.4 data which doesn’t have surface values. The surface trends and errors in the first plot are reasonably uncontroversial – I could add them if you like. – gavin]

    Comment by Frank R — 12 Dec 2007 @ 6:47 PM

  6. Viento, if you restrict the simulations you want to those that have surface trends within the obs uncertainty as defined by Douglass et al (+0.12 +/- 0.04), then you only retain 9 of the models. For the trends at 300mb, the 2 sigma range for those models is then: 0.23+/-0.13 degC/dec (compared to 0.31+/-0.25 in the full set). So you do get a restriction on the uncertainty, and in each case the RAOBCORE v1.4 is clearly within the range. At 500mb, you get 0.17+/-0.076 (from 0.23+/-0.22), which puts the obs just outside the range – but with overlapping error bars. So I don’t think the situation changes – you still cannot find a significant difference between the obs and the models.

    Comment by gavin — 12 Dec 2007 @ 7:25 PM
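The restriction procedure gavin describes can be sketched in a few lines of Python. The per-model trend pairs below are hypothetical placeholders for illustration; the real values come from the tables in Douglass et al.:

```python
import statistics

# Hypothetical (surface trend, 300 mb trend) pairs in degC/decade,
# one per ensemble member -- stand-ins, not the published numbers.
models = [(0.10, 0.20), (0.13, 0.25), (0.15, 0.28), (0.22, 0.45),
          (0.08, 0.18), (0.12, 0.22), (0.14, 0.27), (0.30, 0.55)]

# Douglass et al.'s stated observed surface trend and uncertainty.
obs_surf, obs_err = 0.12, 0.04

# Keep only members whose surface trend falls inside the observed range,
# then recompute the mean and 2-sigma spread of the 300 mb trends.
kept = [t300 for (surf, t300) in models if abs(surf - obs_surf) <= obs_err]

mean = statistics.mean(kept)
two_sigma = 2 * statistics.stdev(kept)
print(f"{len(kept)} of {len(models)} models kept: "
      f"300 mb trend = {mean:.2f} +/- {two_sigma:.2f} degC/dec")
```

As in the comment, conditioning on the surface trend narrows the spread aloft without changing the basic conclusion that the comparison is against a restricted, not a hand-picked, ensemble.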

  7. Excellent post!

    Comment by Miguelito — 12 Dec 2007 @ 7:29 PM

  8. What really needs to be discussed is the poor editorial policies of journals that publish people like Michaels and Singer. These journals still fail to meet disclosure standards that are common in the medical field.

    As Don Kennedy pointed out last year, when these journals fail to require authors to disclose their funding, “people are entitled to doubt the objectivity of the science.”

    Comment by Thom — 12 Dec 2007 @ 7:44 PM

  9. Thanks for all the replies, now I understand what the issue with El Niño could be. The ratio between the trends should follow the moist adiabat independent of where El Niños occur, though. I wouldn’t like to put data where there is none, I was just sincerely asking.

    Comment by Frank R — 12 Dec 2007 @ 7:54 PM

  10. FYI, this paper is already being widely cited on blogs by global warming deniers as proof that all of the climate models are wrong, wrong, wrong and that the whole concept of anthropogenic GHG-caused warming has been refuted.

    Meanwhile, in Bali according to The Washington Post:

    U.S. officials at U.N. climate negotiations here said Tuesday that they would not embrace any overall binding goals for cutting global greenhouse gas emissions before President Bush leaves office, essentially putting off specific U.S. commitments until a new administration assumes power in 2009, according to several participants.

    In closed-door meetings, senior U.S. climate negotiator Harlan L. Watson said the administration considers several aspects of a draft resolution circulated by U.N. officials unacceptable, according to an administration official and other negotiators. Watson specifically objected to language calling for a halt in the growth of worldwide emissions within 10 to 15 years, to be followed by measures that by 2050 would drive emissions down to less than half the 2000 levels.

    The administration also suggested eliminating language in the draft calling for “sufficient, predictable, additional and sustainable financial resources” to help poor nations adapt to climate change, on the grounds that it is vague.

    […]

    The U.S. position is expected to hold sway here not only because the United States plays such an important role on the world stage, but because negotiators are fashioning a consensus document that needs to be approved unanimously by the nearly 190 participating countries.

    In fact, U.N. Secretary General Ban Ki-Moon said in a press conference with reporters Wednesday afternoon that the U.S. opposition to language calling for industrialized countries to reduce emissions between 25 and 40 percent by 2020 had effectively taken the question of specific pollution cuts off the table at the Bali conference.

    So the current US government is rejecting both emission reductions by rich industrialized nations and adaptation assistance to poor developing nations that will experience the worst consequences of the rich nations’ emissions, and is blocking international consensus. It is hard to see this as anything other than criminal.

    Comment by SecularAnimist — 12 Dec 2007 @ 8:10 PM

  11. Anyone want to give me a layman’s version?

    Comment by joe — 12 Dec 2007 @ 8:37 PM

  12. I tried to find any press conference info, but SEPP’s site search for press conferences finds pages attacking the idea that chlorofluorocarbons degrade the ozone layer. It’s like warped time.

    Comment by Hank Roberts — 12 Dec 2007 @ 9:02 PM

  13. DOES THIS BLOG accept comments which disagree with your postings?

    [Response: Try reading them to see. We do have a moderation policy, but stick to the rules, and you’ll be fine. – gavin]

    Comment by DemocracyRules — 12 Dec 2007 @ 10:55 PM

  14. Joe says: “Anyone want to give me a layman’s version?”

    Hmmmm, a short answer might look like this:

    A new study (that is already full of fatal omissions and inaccuracies) has just come out in a legitimate peer-reviewed journal (International Journal of Climatology).

    Remember, a study needs at least two things to really be important scientifically:

    1. To come out in a legitimate peer-reviewed journal (this is true with this study).

    2. This same study has to stand up under world-wide peer-review scrutiny for accuracy (this study has already failed this criterion).

    A rundown of the study might be this:

    Independent computer models (about 23 or so world-wide, I believe) generally show a warming of the surface and even more in the troposphere in the tropics due to increased water vapor (warm the air up and it has more available water vapor, a greenhouse gas)… so a “new greenhouse gas” comes into play where the air is warmed (ouch, what a simplification).

    …the higher up you go, the less water vapor you normally get because it is too cold to have available water vapor (the rate of condensation strongly exceeds the rate of evaporation)… unless you warm it and “suddenly water vapor just appears” where it was mostly absent before. It did already exist lower down, because the air there was already warm enough to hold water vapor.

    The study states that instruments do *not* show more warming the higher you go in the tropics… even though the models do.

    Hence, independent world-wide computer models are wrong when they predict global warming in the next 100 years…

    and secondly, because computer models base their future (and present) warming predictions on increasing greenhouse gases (and they “don’t get the warming correct now”), greenhouse gases actually are not causing the warming we have been seeing for the last 100 years.

    This means then, that mainstream science only predicts global warming based on computer simulations…so global warming is not a problem.

    This then means that the warming (most of it) is part of a natural cycle (cosmic rays and solar wind) and is not man-made…

    so we can burn all the oil, coal and gas that we want without guilt (and we certainly don’t have to regulate them)…and the IPCC (Intergovernmental Panel on Climate Change) is irrevocably wrong and can be ignored with impunity.

    This also would mean that President Bush is “correct” to do nothing right now about global warming even though every other major world country is taking action (including the last holdout, Australia, on Kyoto, I believe).

    Anyway, here are some fatal problems with the study, as I understand them, that invalidate it:

    1. Even if the study were right… (which it is not) mainstream scientists use *three* methods to predict a global warming trend… not just climate computer models (which stand up extremely well for general projections, by the way, under world-wide scrutiny)… and which have for all intents and purposes already correctly predicted the future (Hansen 1988 in front of Congress, and Pinatubo).

    Now the three scientific methods for predicting the general future warming trend are:

    1. Paleoclimate reconstructions which show that there is a direct correlation between carbon dioxide increasing and the warming that follows.

    2. The current energy imbalance between the energy coming in at the top of the atmosphere (about 243 W/m2) and the fewer W/m2 now leaving, due mostly to the driving force of CO2… ergo the Earth has to heat up.

    3. Climate computer simulations that have been tested against actual records before the events actually happened… and were correct.

    Now, on to actual problems with the paper:

    Any real scientist, ahem, includes error bars in their projections because of possible variables. The study does not include them. If it did, or if the authors were honest enough to, the projections would fit the real-life records (enough to overlap the two records) and this would be a non-issue.

    Secondly, this study is dishonest and does not show all the evidence available (v1.3 and v1.4)… boing… this paper has just failed peer review. Science is an *open* process and you just don’t cherry-pick, or real scientists will correctly invalidate your results.

    Third, with this omitted data, the computer models agree with the actual data (enough for it to be a non-issue).

    Fourthly, the study does not honestly work out the error bars for the models themselves by giving them reasonable uncertainty for accounted-for unknowns such as El Niño (ENSO) and other tropical events.

    Now, there are honest unknowns with the models and how they (slightly) mismatch historical records… but they are accounted for in the big scheme of things… more work needs to be done… but it does not invalidate what the models are saying about general warming trends… umbrella, anyone?

    In other words, this study is a strawman and the authors know it.

    Comment by Richard Ordway — 12 Dec 2007 @ 11:02 PM

  15. Thanks for great post Gavin. What is John Christy’s story? I thought he was a serious scientist whose earlier work on the discrepancy between satellite data and climate models was respected at the time though, ultimately, shown to be wrong. His appearance in Martin Durkin’s film “The Great Global Warming Swindle” earlier this year and now this paper suggests he is following Richard Lindzen’s slide [edit]

    Comment by Chris McGrath — 12 Dec 2007 @ 11:40 PM

  16. I was curious about the tropical upper tropospheric trend, so I plotted this for my own satisfaction (something I urge everyone to do):

    http://climatewatcher.blogspot.com/

    While the stratospheric cooling and Arctic warming are evident, the modeled TUT maximum is not occurring.

    Comment by Al Bedo — 13 Dec 2007 @ 12:05 AM

  17. Thank you, Gavin, for the original post and to you, Richard Ordway, for the layman’s version. I now think I understand, in broad terms, the main points of the discussion.

    Comment by Steven Kimball — 13 Dec 2007 @ 1:52 AM

  18. I wish we could leave quibbling over some minor discrepancies in scientific data alone for a while and concentrate our efforts on education of the public about the matter at hand. The latest NASA report says the Arctic will be free of summer sea ice within 5 years. I say within five years because the computer model of the Naval Postgraduate School in California did not factor in the two record years 2005 and 2006. That means in actuality the Arctic summer should be free of ice well within the 5 years, and completely ice-free year round around 2040 or earlier. This year Greenland also recorded its highest-ever melt at 10%. An increasing number of top scientists now believe we have passed the tipping point… the point of no return. So stop quibbling over technicalities and let’s all concentrate and apply the blow torch to our respective leaders.

    Comment by Lawrence Coleman — 13 Dec 2007 @ 3:15 AM

  19. With Steven Kimball, “Amen, amen.” Thank you gentle folk all.

    Comment by Juola (Joe) A. Haga — 13 Dec 2007 @ 4:08 AM

  20. ref #12. I asked in another thread why CO2 is blamed for global warming if, e.g., Greenland was as warm 1000 years ago as it is today. Not only was the post not answered, it was not posted. Surely not because the question was too tricky?

    [Response: No. It’s because it makes no sense logically. It’s equivalent to this: “Why are arsonists being blamed for recent California wild fires when there were wild fires before?” – gavin]

    Comment by Alan K — 13 Dec 2007 @ 4:22 AM

  21. It seems to me that you are misquoting the paper.
    You say “Now the claim has been greatly restricted in scope and concerns only .. the rate of warming”, but the abstract of the paper says “above 8 km, modelled and observed trends have opposite signs”.

    [Response: Well that’s wrong too, even from the data in the paper. Since everyone agrees that the stratosphere is getting colder and the surface warmer, there must be a height at which the sign of the trend switches. With different estimates of the trends, that height will vary – in some models it happens between 200mb and 100mb, just as in the Douglass et al obs (RAOBCORE has the switch between 200mb and 100mb (v1.2), or 150mb and 100mb (v1.4)). In fact, none of the obs data sets nor the models have sign changes near 8km (~350 mb). – gavin]

    Comment by PaulM — 13 Dec 2007 @ 4:38 AM

  22. I echo Steven Kimball’s thanks to both Gavin and Richard.

    Comment by Nick Gotts — 13 Dec 2007 @ 5:23 AM

    Indeed, there is a clear physical reason why this is the case – the increase in water vapour as surface air temperature rises causes a change in the moist-adiabatic lapse rate (the decrease of temperature with height) such that the surface to mid-tropospheric gradient decreases with increasing temperature (i.e. it warms faster aloft). This is something seen in many observations and over many timescales, and is not something unique to climate models.

    I used to think lapse rate, water vapour and cloud feedbacks are bound together in the Tropics. For example, nearly all recent model intercomparisons show that AOGCMs poorly reproduce precipitation in 30°S-30°N, they still diverge on cloud cover evolution at different levels of the vertical column, and I don’t clearly understand for my part how we can speculate on long-term trends of tropospheric T without a good understanding of these convection-condensation-precipitation processes. Would it mean the latent heat budget, deep/shallow convection, and low-medium-high cloud cover are indifferent to the “hot spot” at 200-300 hPa? And do you think we presently have a satisfying quality of simulation for these domains?

    Another question: why would solar forcing have the same signature as GHG forcing in the Tropics? Shouldn’t the first be much more dependent on cloud cover in ascendant or subsident areas (another way to put this: is the GISS simulation of the spatial repartition of solar forcing/feedback cloud-dependent)?

    [Response:A simple and short explanation is that a warming would affect the rate of evaporation, air humidity and the Hadley cell. -rasmus]

    Comment by Charles Muller — 13 Dec 2007 @ 6:05 AM
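The moist-adiabatic amplification behind the expected tropical "hot spot" can be illustrated with a rough numerical sketch: a crude single-parcel integration of the saturated-adiabatic lapse rate along a fixed exponential pressure profile. This is a back-of-envelope illustration under textbook assumptions, not a model calculation:

```python
import math

# Constants (SI): gravity, latent heat of vaporisation, gas constant for
# dry air, dry-air specific heat, and the molecular weight ratio epsilon.
g, Lv, Rd, cpd, eps = 9.81, 2.5e6, 287.0, 1004.0, 0.622

def sat_mixing_ratio(T, p):
    # Tetens approximation for saturation vapour pressure (Pa), T in kelvin.
    es = 610.78 * math.exp(17.27 * (T - 273.15) / (T - 35.85))
    return eps * es / (p - es)

def moist_lapse_rate(T, p):
    # Standard saturated-adiabatic lapse rate formula, K per metre.
    rs = sat_mixing_ratio(T, p)
    num = g * (1 + Lv * rs / (Rd * T))
    den = cpd + (Lv**2 * rs * eps) / (Rd * T**2)
    return num / den

def parcel_temp_at(z_top, T_surf, dz=10.0):
    # Lift a saturated parcel along the moist adiabat, using a fixed
    # exponential pressure profile (7.5 km scale height) for simplicity.
    T, z = T_surf, 0.0
    while z < z_top:
        p = 100000.0 * math.exp(-z / 7500.0)
        T -= moist_lapse_rate(T, p) * dz
        z += dz
    return T

t_cold = parcel_temp_at(10000.0, 298.15)  # 25 degC surface parcel
t_warm = parcel_temp_at(10000.0, 299.15)  # 26 degC surface parcel
print(f"10 km warming for 1 K of surface warming: {t_warm - t_cold:.2f} K")
```

Because the warmer parcel releases more latent heat on ascent, its lapse rate is smaller, so 1 K of surface warming produces more than 1 K of warming at 10 km: the amplification aloft that the post and this comment discuss.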

  24. I second all those thanking Richard Ordway. Much appreciated!!

    Btw, it would be useful if there was a byline for the posts. I never know who’s writing what.

    Comment by Tony Lee — 13 Dec 2007 @ 7:04 AM

  25. If there is so much uncertainty in the observed data and the model outputs that one cannot conclude that they are significantly different, then it also follows that one cannot conclude that the models are accurately representing the real world.

    Comment by Michael Smith — 13 Dec 2007 @ 7:28 AM

  26. A further consideration is changes in stratospheric ozone in the tropics. This affects the heat balance of the uppermost troposphere. See P. M.Forster et al.,
    Effects of ozone cooling in the tropical lower stratosphere and upper troposphere, Geophys. Res. Lett., Vol. 34, No. 23, L23813, 10.1029/2007GL031994
    which appeared on AGU’s website today.

    Comment by David Parker — 13 Dec 2007 @ 7:35 AM

  27. Please correct the spelling of the name Douglass. (This helps when doing web searches.)

    [Response: … and it’s just better. My bad, apologies all round – gavin]

    Comment by Roger — 13 Dec 2007 @ 9:17 AM

  28. #18. Thanks v much for the response. (oh no) It’s not illogical. The key issue is what causes temperatures to rise. Your view is man-made CO2 emissions. But temperatures rose without man-made CO2 emissions 1000 years ago. My original post asked what, therefore, caused temperature to rise 1000 years ago.

    Your arsonist analogy works against you. Not all wildfires are caused by arsonists, so some are caused by other means. Do you agree, therefore, that not all warming events are caused by man-made CO2 emissions? In which case, what other causes are there? What was it 1000 years ago, why couldn’t it be the same thing now, and why this time round are you sure it is CO2 emissions?

    [Response: Hardly. I’m not sure where you get your information from but I’ve never suggested that warming events in the past were all related to CO2. Previous events have been tied to many causes – solar variation, Milankovitch, plate tectonics and, yes, greenhouse gases. The issue is what is happening now – solar is not changing, Milankovitch and plate tectonics are too slow, ocean circulation appears relatively stable etc. etc. and all the fingerprints that we can detect point to greenhouse gases (strat cooling etc.). Each case is separate – and the ‘arsonist’ this time is pretty clear. – gavin]

    Comment by Alan K — 13 Dec 2007 @ 11:13 AM

  29. Thanks for your extensive critique. I heard Singer pitch this stuff last month, and he wasn’t very convincing even to a gaggle of undergrads.

    The poor old Pope has somehow been inspired, during the Bali conference, to announce that global warming is all “environmentalist dogma” that puts “trees and animals” above people, and is not science. The irony inherent in his “judgment” is truly stupendous.
    link

    [Response: No he didn’t. Check out this for the real story, and the actual text of the Pope’s statement. – gavin]

    Comment by Mike — 13 Dec 2007 @ 11:23 AM

  30. Could you folks speculate on the nature of the peer review process for the International Journal of Climatology? Is it a top-tier journal, or a lesser one? In my field, we all know that some peer-reviewed journals can basically be ignored because the quality of the peer-review process is poor, and the journals are so short of material that they often fast-track papers to fill their issues. Of course, outside the field, this is unseen, so people have no ability to distinguish the better journals from the poorer ones.

    The critique here seems pretty compelling, and obvious once stated. But is it widely known in the field? Is there a reason why the peer-review process would have missed it?

    Comment by Steve — 13 Dec 2007 @ 11:37 AM

    If there is so much uncertainty in the observed data and the model outputs that one cannot conclude that they are significantly different, then it also follows that one cannot conclude that the models are accurately representing the real world.

    If you assume that comparing with this set of data is the only way we have to test models. An assumption that would be dead wrong …

    Comment by dhogaza — 13 Dec 2007 @ 12:00 PM

  32. From the essay:

    Previously, the claim was that satellites (in particular the MSU 2LT record produced by UAH) showed a global cooling that was not apparent in the surface temperatures or model runs. That disappeared with a longer record and some important corrections to the processing. Now the claim has been greatly restricted in scope and concerns only the tropics, and the rate of warming in the troposphere (rather than the fact of warming itself, which is now undisputed).

    Given the complexity of the climate system, it seems rather likely that there will always be some areas of poor fit between the models and the evidence. However, because climate models are not an instance of curve-fitting, but are instead built upon a solid (albeit cumulative) foundation of physics, the incorporation of physics to describe processes in one area tends to lead to a progressive tightening of the fit in others.

    As such, the areas where the fit is still relatively poor will tend to become smaller in magnitude, narrower in scope, and require a more specialized knowledge simply in order to have a vague idea of what they are. This would seem to be a case in point.

    As such, if one’s job depended merely upon pointing out that such areas exist, given the existence of researchers who will in essence be doing this work for you, I believe one would have a great deal of job security for quite some time to come. However, if it also depended upon making such things seem make-or-break, it would seem the bell has begun to toll.

    Comment by Timothy Chase — 13 Dec 2007 @ 1:21 PM

  33. Why does the plot included above that shows the results of the solar forcing look so different from the plot of solar forcing included in AR4, p675 fig 9.1.a?

    [Response: Scale of the forcing. Figure 9.1a is from the solar forcing estimated over this century from solar (a few tenths of W/m2). The figure above is for a 2% increase in solar, which is comparable to the impact of 2xCO2 and so the global mean change in SAT in the two figures above is comparable (around 3 to 4 deg C). – gavin]

    Comment by B Buckner — 13 Dec 2007 @ 1:54 PM

  34. Re: #18
    Another way to put it is that your initial question seems to be premised on the idea that *only* CO2 can cause warming. This is not the case, and in the past we know it’s naturally happened. However, in the last few centuries, we have good evidence that the natural mechanisms are not contributing as much to warming as the man-made inputs. This is in fact what the IPCC sections say.

    A similar objection is raised by stating that various other planets are warming, so why do we blame ourselves, since (as Sen. Thompson naively put it) there are no greenhouse gas emitting industrialists on Pluto? The obvious point is that these all have the sun in common. However, the output of the sun has not varied nearly enough to cause our warming, and each of the other planets has wildly different atmospheres and…uh…planetologies. So again, there’s no one cause we can point to that is common across all these scenarios. Each is different, so Earth’s warming is unrelated to other planets’ warming.

    Someone will correct if I got anything wrong, I’m sure. :-)

    Comment by Robear — 13 Dec 2007 @ 2:27 PM

  35. No. 28 wrote:

    “If you assume that comparing with this set of data is the only way we have to test models. An assumption that would be dead wrong …”

    Okay, what are the other ways to test the models?

    [Response: I think the key was ‘this set of data’ – there’s lots of other data that does not have either as much noise or as much uncertainty. – gavin]

    Comment by Michael Smith — 13 Dec 2007 @ 2:48 PM

  36. 1) I took a look at Figure 9.1 in the latest IPCC report, of zonal mean atmospheric temperature change from 1890 to 1999 (simulated by the PCM model). That figure shows a significant difference in the magnitude of the trend at 10 km over the tropics between solar forcing and greenhouse gas forcing. This is at odds with your first two graphics. Are you saying that the IPCC graphic is wrong?

    2) Given that your last graphic is correct (RAOBCORE v1.4), it still shows not much difference between the surface and 10 km. I’d compare it to figures 9.1a and 9.1c, but I am likely comparing apples and oranges. But it seems to me that it is closer to figure 9.1a.

    3) Bottom line, I think this comparison is highly flawed. Who is to say that the models have all the science in them needed to put the warming in the right place? Given that H2O is the dominant greenhouse gas in the tropics, doubling CO2 levels will not change temperatures much there. It is in places where water vapor is scarce that the effects of CO2 would have the most impact.

    Comment by VirgilM — 13 Dec 2007 @ 4:12 PM

  37. It is amazing to see how this paper is already being “spun” by the global warming denialist blogosphere. While RealClimate has called into question the soundness of the paper’s quite narrow conclusions of discrepancy between model predictions and measurements of the relative rate of warming of different levels of the atmosphere over the tropics, this paper is being touted by the deniers as showing that the models are wrong to predict any warming at all, and that predictions of future warming and climate change can be entirely discounted. It is really a case study in the propagandistic distortion of science (which apparently was not particularly good science to begin with).

    Comment by SecularAnimist — 13 Dec 2007 @ 4:36 PM

  38. Re #32: 1) see Gavin’s response to #29 (probably crossed your post)

    3) H2O may be the dominant greenhouse gas, but you missed that it is a feedback, not a forcing. Water vapour partial pressure is an exponential function of temperature: it just amplifies the CO2 effect — more or less independent of where you are (It requires careful spectral analysis to say so — part of all model codes).

    Comment by Martin Vermeer — 13 Dec 2007 @ 5:21 PM
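Martin's point that water vapour partial pressure is an exponential function of temperature is the Clausius-Clapeyron relation; a quick sketch using the Tetens approximation shows the roughly 6-7% increase in saturation vapour pressure per degree of warming:

```python
import math

def saturation_vapour_pressure(T_celsius):
    # Tetens approximation (hPa): adequate for illustrating the
    # near-exponential temperature dependence over liquid water.
    return 6.1078 * math.exp(17.27 * T_celsius / (T_celsius + 237.3))

for T in (0, 10, 20, 30):
    es = saturation_vapour_pressure(T)
    es1 = saturation_vapour_pressure(T + 1)
    print(f"{T:2d} degC: e_s = {es:6.2f} hPa, +{100 * (es1 / es - 1):.1f}% per K")
```

This steep dependence is why warming the tropical air column "creates" extra greenhouse water vapour, amplifying the CO2 effect as the comment says, rather than acting as an independent forcing.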

  39. This is a post at an Australian blog, Jennifer Marohasy’s, where a co-author has posted what is reported as a quote from John Christy:

    “My Environmental sciences colleague contacted JC.

    Response to RC:

    The (treatment of) errors on all datasets were discussed in the text.

    ALSO

    To quote from realclimate. org “The sharp eyed among you will notice that the satellite estimates (even UAH) – which are basically weighted means of the vertical temperature profiles – are also apparently inconsistent with the selected radiosonde estimates (you can’t get a weighted mean trend larger than any of the individual level trends!).” This was written by someone of significant inexperience. The weighting functions include the surface, and the sondes align almost exactly with UAH data … the weights are proportional, depending most on 850-400 but use all from the surface to the stratosphere. The quote is simply false.

    [Response: JC is of course welcome to point it out to us ourselves. I will amend the post accordingly. If he would be so kind as to mention whether the RSS and UMD changes are also in line with the sondes, I will be more specific as well. We have no wish to add to the sum total of misleading statements. – gavin]

    ALSO

    The LT trend for RAOBCORE v1.4 is still considerably less than the models. However, as I stated before, v1.4 relies very strongly on the first guess of the ECMWF ERA-40 reanalyses that experienced a sudden warm shift from 300 to 100 hPa with a change in the processing stream (I think stream #4 ?). Consistent with this was a sudden increase in precipitation (I think about 10%), a sudden increase in upper level divergence (consistent with the increase in precipitation) and a sudden rise in temperature. We have a paper in review on this, but we were not the first to report it. I think Uppala was the first. v1.2 had less dependence on the ERA-40 forecast model, and so was truer to the observations. There is more on this, but what I said should suffice.

    [Response: The point was made above that we are not trying to demonstrate that v1.4 or v1.2 is better, merely that there are quantitative and systematic uncertainties in the observational data set that were not reported in this paper. If JC wanted to make the point that v1.2 was better, then why was the existence of v1.4 or v1.3 not even mentioned? At minimum, this is regrettable. – gavin]

    Remember, in the paper, the fundamental question we needed to answer before the comparison was “what would models show for the upper air trends, if they did get the surface trend correct?” We found that through the averaging of 67 runs (RealClimate seems to miss this point … we wanted to compare apples with apples). Then the comparison could be made with the real world.

    John C.

    [Response: That is not the test they made. Though I did do it, in response to Viento (comment #6) – gavin]

    John is quoted as saying that one of the items in your post is false. I am not sure if you have communicated with Christy about the paper; however, as a layman I cannot tell what the truth is here. Is there any way you can explain what you said in your post so that I would be able to understand?

    Comment by Ender — 13 Dec 2007 @ 5:41 PM
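
    The disputed statement above is just a convexity fact: a weighted mean with non-negative weights that sum to one can never exceed the largest (nor undercut the smallest) of the values being averaged. A minimal sketch, using invented level trends and weights rather than any real satellite weighting function:

    ```python
    # Check of the convexity bound: a weighted mean with non-negative
    # weights summing to 1 lies between the smallest and largest of the
    # averaged values. All trend and weight values here are invented.
    level_trends = [0.08, 0.12, 0.10, 0.05, -0.02]  # hypothetical deg C/decade at 5 levels
    weights = [0.15, 0.35, 0.30, 0.15, 0.05]        # hypothetical weighting function

    assert all(w >= 0 for w in weights) and abs(sum(weights) - 1.0) < 1e-12

    weighted_mean = sum(w * t for w, t in zip(weights, level_trends))
    assert min(level_trends) <= weighted_mean <= max(level_trends)
    print(round(weighted_mean, 4))  # always a value inside [-0.02, 0.12]
    ```

    Whether the bound bites in practice depends on whether the actual weighting functions are non-negative, which is what the disagreement above turns on.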

  40. Gavin:

    Thank you for responding to me in number 31. Can you give us a link to other data sets on tropical tropospheric temperatures that have less noise and uncertainty which corroborate the models?

    Thanks again for taking the time to respond.

    [Response: I wish I could. The Thorne et al paper suggests that it will become clearer in time – another 5 to 10 years should do it. In the meantime, there are lots of signals (global SAT, Arctic sea ice, strat cooling, Hadley cell expansion (perhaps)….) with much better signal-to-noise ratios… – gavin]

    Comment by Michael Smith — 13 Dec 2007 @ 7:00 PM

  41. #34 – So if the signatures of solar and greenhouse gas forcing are the same over the last 120 years, how can we be sure that the current warming is mostly greenhouse gas induced? The point of Figure 9.1a was to show that there is a different signature for each of the various forcings, and that observations should point to the culprit.

    Of course, the IPCC report admits that solar influence on our climate is poorly understood, so who is to say that the model-derived zonally averaged temperature trends in Figure 9.1a are accurate? Also, who is to say that the models have the water vapor/cloud feedbacks correct? The IPCC rates the understanding of these processes as low, so it could be that the models are putting the warming in the wrong place for that reason. Or the modelers could have been lucky with their parameterization and got it right before the science came in.

    [Response: Perhaps you misunderstand. Fig 9.1 gives the patterns associated with the forcings for 20th Century – solar is much smaller than CO2 and the other GHGs. The figure above is what happens if the amount of forcing is similar (you get a similar pattern). There’s no inconsistency there. The big difference between solar and CO2 forcing is in the stratosphere – where CO2 causes cooling – just as is seen in the real world. – gavin]

    Comment by VirgilM — 13 Dec 2007 @ 7:06 PM

  42. Was this paper peer-reviewed?

    Comment by gringo — 13 Dec 2007 @ 7:35 PM

  43. RE: Does this blog accept comments which disagree with your postings? A simple yes or no answer would have been fine. Your oblique response does not in fact answer my question.

    Your response, “Try reading them to see. We do have a moderation policy, but stick to the rules, and you’ll be fine. – gavin”, is in fact not a clear yes or no. “Try reading them to see” only presents to me those comments which have been approved by you for posting. Among other things, this ignores the Base Rate. That is, of 1000 comments that you receive which disagree with you, how many do you accept? Then, compare that with 1000 comments that you receive which agree with you. How many of those do you accept?

    This issue is critically important for all websites and blogs which discuss global warming, given the highly charged atmosphere, and the proneness to biased views.

    Blogs without disinterested and balanced discussions and comments do not qualify as true discourse.

    Comment by DemocracyRules — 13 Dec 2007 @ 9:00 PM

  44. Gavin – thank you for your reply – this clears it up for me. I am cross posting it back to the original where I hope it clears it up for a few people there as well.

    Comment by Ender — 13 Dec 2007 @ 9:04 PM

  45. RE #26, I read the Pope’s statement, and felt it could have been stronger, but I think the Pope might be surrounded by some anti-environmentalists.

    Here’s my transcription of part of “Rome Reports” segment from 10/10/07, “Is Pope Benedict the First Eco-Pope?” aired on EWTN, and prominently featured Monckton as their climate expert (it can be viewed just past the middle of the video at http://www.romereports.com/index.php?lnk=750&id=461 ):

    Some Catholic scholars question the theory that climate change is manmade. Nor do they believe that the Pope accepts this theory. Instead they think he agrees with the conclusions of a recent Vatican conference, that the climate is changing, but the reasons for it are unknown.

    VISCOUNT CHRISTOPHER MONCKTON (scientist and participant, Vatican Climate Change Conference, 2007): “It has been noticed that on the surface of Mars warming has been going on at a rate that is very much parallel to that of what’s happening on Earth. Likewise on the surface of Jupiter. All of these planetary surfaces are exhibiting warming at the same time. Well now, is it SUVs out there in outer space or is it that large, bright, hot object bang in the center of our solar system and for which our solar system gets its name. You tell me.”

    It’s this scientific uncertainty which makes some want to see the Pope go a bit further and enter into the climate change debate.

    In other words, “some Catholic scholars” are calling the Pope a liar, since he has on several occasions been talking about the dangers of global warming and our need to mitigate it.

    Furthermore Pope Benedict XVI is NOT the first eco-pope. John Paul II was, especially when he said in “Peace with All Creation” (1990), “Today the ecological crisis has assumed such proportions as to be the responsibility of everyone…The…’greenhouse effect’ has now reached crisis proportions…”

    Back to topic, the devil seems to be in the details not in the general overview of climate change science (and working his tail off close to the Vatican).

    Comment by Lynn Vincentnathan — 13 Dec 2007 @ 9:16 PM

  46. Hi there. There is a curious conversation over at Desmogblog. John Holliday has posted a large comment there responding to this thread here, and stating that he doubts the moderators at RealClimate would allow his question to be posted.

    So I thought I would ask the moderators here if they had blocked Holliday’s post from this thread, or if he did not actually try to comment here.

    It looks to me like his purpose is more to smear RC than to argue about the Douglass paper.

    http://www.desmogblog.com/singers-deniers-misrepresenting-new-climatology-journal-article#comment-140321

    Comment by VJ — 13 Dec 2007 @ 10:07 PM

  47. Alan K, try this:
    http://en.wikipedia.org/wiki/Non_sequitur_%28logic%29

    1) Previous “Natural Events” occurred
    2) A warming event is occurring now
    3) Therefore, today’s trend is natural (even though we put in a new variable not seen in 4.6 billion years)

    anyone?

    Also, VJ (#37), you are correct- scientific sounding, content-free.

    Comment by Chris Colose — 14 Dec 2007 @ 12:14 AM

  48. Thanks, Gavin. The thing I don’t get about Douglass et al. (and your article, for that matter) is the way of treating different models as if this was a democracy. My understanding was, that GCMs were different approaches to simulating parts of the climate system. I thought a bunch of scientists and programmers put their efforts into creating a celled representation of the atmosphere, oceans etc. and try to come up with formulas and parameters which – to their best knowledge – mirror what really happens. I guess each of these more or less independent efforts has its pros and cons and some perform nicely in areas where others fail and vice versa. If we mix these together in a single mean and give it a wide (2 sigma) range, naturally everything this side of a new ice age will somehow fall into the modelled range. But even with your 2 sigma graph, most of the other, non-RAOBCORE v1.4 observations would fall out of the range (although a proper deviation for the observations would probably overlap). Wouldn’t the best approach be focusing on individual models and trying to find out, why they come up with tropospheric trends that don’t seem to match the observations? Or are you saying, that the observations themselves or Douglass et al.’s trending is deeply flawed?

    [Response: Good point. There are two classes of uncertainty in models – one is the systematic bias in any particular metric due to a misrepresentation of the physics etc, the other is uncertainty related to weather (the noise). When you average them together (as in this case), you reduce the uncertainty related to noise considerably. You even reduce the systematic biases somewhat because it turns out that this is also somewhat randomly distributed (i.e. the mean climatology of all the models is a better fit to the real climatology than any one of them). However, in this particular example, we have a great deal of weather noise over this short interval, therefore the spread of the runs due to weather is key. How we distinguish weather-related noise from physics-related noise requires longer time periods, but could perhaps be done. With respect to RAOBCORE, I don’t have a position on which analysis is best (same with the UAH or RSS differences). However, similarly reasonable procedures have come up with very different trends implying that the systematic uncertainty in the obs is at least as large as the weather-related uncertainty. That means it’s hard to come to definitive conclusions (or should be at least). – gavin]

    Comment by henning — 14 Dec 2007 @ 3:42 AM
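
    Gavin's distinction between the spread of individual runs and the uncertainty of the ensemble mean can be sketched numerically: the spread (the appropriate yardstick for comparing against a single realisation, such as the real world) is sqrt(N) times the standard error of the N-run mean. All numbers below are invented for illustration, not taken from any model:

    ```python
    # Why the spread of runs, not the standard error of the ensemble mean,
    # is the right uncertainty when comparing models against a single
    # realisation of the real world. All numbers are invented.
    import random
    import statistics

    random.seed(42)
    N = 67                                    # number of runs, as in the paper
    weather_sd = 0.10                         # hypothetical weather noise on a short-period trend
    runs = [0.20 + random.gauss(0.0, weather_sd) for _ in range(N)]

    spread = statistics.stdev(runs)           # ~ sigma: run-to-run weather variability
    stderr = spread / N ** 0.5                # ~ sigma/sqrt(N): uncertainty of the MEAN only

    # With N = 67 the spread is sqrt(67) ~ 8.2 times the standard error,
    # so an interval built from stderr alone is far too tight a target.
    assert spread / stderr > 8
    ```

    An ensemble mean pinned down to within stderr can still be perfectly consistent with a single noisy realisation anywhere inside the much wider spread.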

  49. RE #13, thanks, Richard. That really helped.

    However, I never really take their nipping around the edges in an effort to disprove AGW too seriously for several reasons:

    (1) In this case even if they were correct and the models failed to predict or match reality (which, according to this post, has not been adequately established, because we’re still within overlapping data and model confidence intervals), it could just as well mean that AGW stands and the modelers have failed to include some less well understood or unquantifiable earth system variable in the models, or there are other unknowns within our weather/climate/earth systems, or some noise, chaos, or catastrophe effect whose equation has not been found yet.

    This is sort of how “regular” science works (assuming no hidden agenda, only truth-seeking); there will always be outlier skeptics considering alternative explanations, and when they find some prima facie basis for one, will investigate. Just as most mutant genes never express as phenotypes, and if they do, most never succeed in becoming dominant in a gene pool, so there are a variety of ideas or “ideotypes” in an idea pool, most of which never pan out into actual cultural (widely shared) knowledge and behavior based on that knowledge.

    In this case, the vast preponderance of evidence and theory (such as long established basic physics) is on the side of AGW, so there would have to be a serious paradigm shift based on some new physics, a cooling trend (with increasing GHG levels and decreasing aerosol effect), and that they had failed to detect the extreme increase in solar irradiance to dislodge AGW theory.

    (2) Prudence requires us to mitigate global warming, even if we are not sure it is being caused by human emissions (and we are sure, and this new skeptical study does not reduce that high level of certainty). This is too serious a threat to take any risks, even if AGW were brought into strong doubt by new findings or theories.

    *************

    RE #40, VJ, RC fails to post my entries now and then (sometimes I think unfairly), but I find if I reword it and make it more polite or more on-topic, they do post it. So it may take several tries, especially if it has invective and ad hominem attacks, or is well outside of science (like too much on religion or economics), or is too off-topic.

    As for disputing their science on a scientific basis — I think RC scientists love to engage at that level. I’m just not savvy enough to engage them on that level.

    Comment by Lynn Vincentnathan — 14 Dec 2007 @ 8:21 AM

  50. I don’t understand your final chart. Why did you leave out the other 3 temperature series (HadAT2, IGRA and RATPAC)? Does your analysis not leave RAOBCORE v1.4 as an observational outlier, especially at 200-300 hPa, by around 0.3 degrees?

    [Response: See above. The point was simply to demonstrate the systematic uncertainty associated with the obs. Better people than me are looking into what the reality was. – gavin]

    Comment by Paul — 14 Dec 2007 @ 8:41 AM

  51. According to the article, the authors conclude that “carbon dioxide and other greenhouse gases make only a negligible contribution to climate warming.”

    If that were actually the case, then the implication would be that we would have to reduce our GHGs all the more in hopes of reducing the warming trend at least a little. I mean we can’t very well turn down the sun, or halt cosmic rays, or whatever. We have to do what we can do. And in this case every effort would have to be on reducing that tiny portion of our contributions to GW in hope of avoiding a runaway scenario and much worse harm….since it seems the harms increase almost exponentially with the warming. Every little bit of reduction in warming would help tremendously, and might be that straw lifted from the camel’s back just in time.

    I guess, if there are still people on earth in billions of years when the sun does start getting much hotter, they will be struggling to do whatever is in their power to do to reduce the warming and its harms. Where there’s life, there’s hope, and I imagine future people (if not this generation) would struggle even more to do whatever is in their power to keep life going. The mantra might be, “Johnny, the sun is causing us a lot of harm and danger, and we don’t want to add anything to that, so be a good boy and turn off that light not in use!”

    Comment by Lynn Vincentnathan — 14 Dec 2007 @ 8:52 AM

  52. A commenter on a political blog site has posted the following quote which he attributes to the “lead author” of this paper:

    “The observed pattern of warming, comparing surface and atmospheric temperature trends, does not show the characteristic fingerprint associated with greenhouse warming. The inescapable conclusion is that the human contribution is not significant and that observed increases in carbon dioxide and other greenhouse gases make only a negligible contribution to climate warming.”

    The first sentence of that statement does seem to more or less accurately describe the paper’s contention, although it seems it would be more accurate to say “does not show the characteristic fingerprint associated with the predictions of the 22 models examined in the study“.

    However, the second sentence asserting the “inescapable conclusion … that observed increases in carbon dioxide and other greenhouse gases make only a negligible contribution to climate warming” seems to go far beyond what the paper purports to have demonstrated, to the point of seriously misrepresenting the paper’s conclusions.

    Does anyone know whether the above quoted statement was actually made by the lead author of the paper? If so, is it a justifiable characterization of the actual conclusions of the paper? (Aside from whether those conclusions are scientifically sound.)

    Comment by SecularAnimist — 14 Dec 2007 @ 9:48 AM

  53. Re: Stratospheric Cooling

    I found a paper in the journal Science called “Anthropogenic and Natural Influences in the Evolution of Lower Stratospheric Cooling”: V. Ramaswamy et al., Science 311, 1138 (2006), DOI: 10.1126/science.1122587.

    In this paper is the following quote: “In Fig 3B, the comparison of WmggO3 and Wmgg shows that the overall lower stratospheric temperature decline is driven primarily by the depletion of ozone, and to a lesser extent by the increase in well-mixed greenhouse gases.”

    This paper looked at lower stratospheric temperatures between 1979 and 2003 and tried to model those temperatures using various natural and anthropogenic forcings. I must note that this conflicts with Gavin’s assertion that greenhouse gases are most responsible for stratospheric cooling during the last few decades. Gavin cannot ignore the results of the paper that I referenced.

    Gavin said: “The big difference between solar and CO2 forcing is in the stratosphere – where CO2 causes cooling – just as is seen in the real world.” The problem with this statement is that the stratosphere cooled because of O3 depletion, NOT CO2. So you can’t really look at the stratosphere to see if most of the tropospheric warming is due to solar or GHG forcings.

    [Response: You confuse the lower stratosphere with the whole stratosphere. MSU4 is mostly a lower stratospheric signal, and indeed, the trends there are mostly associated with ozone depletion. But higher up, it’s more related to CO2 (see http://www.atmosphere.mpg.de/enid/20c.html for instance, fig 4). -gavin]

    Comment by VirgilM — 14 Dec 2007 @ 3:44 PM

  54. pertaining RAOBCORE I read:

    “It turns out that the radiosonde data used in this paper (version 1.2 of the RAOBCORE data) does not have the full set of adjustments. Subsequent to that dataset being put together (Haimberger, 2007), two newer versions have been developed (v1.3 and v1.4) which do a better, but still not perfect, job”

    and

    “Nor does it show that RAOBCORE v1.4 is necessarily better than v1.2.”

    and in the response to comment #42

    ” With respect to RAOBCORE, I don’t have a position on which analysis is best”

    Care to explain?

    [Response: The researchers on RAOBCORE presumably think that v1.4 is better (otherwise why put it out?). I do not have enough knowledge on radiosonde analyses to be able to render a judgment. The main point is that the systematic uncertainty is much larger than portrayed in the Douglass et al paper and the authors knew that before their paper was published. Whether one is better or not, shouldn’t they have at least mentioned it? – gavin]

    Comment by Andre — 14 Dec 2007 @ 4:26 PM

  55. “The main point is that the systematic uncertainty is much larger than portrayed in the Douglass et al paper and the authors knew that before their paper was published.”

    Yes they should have mentioned it. But the point of the paper is to show that observational data doesn’t match model data in certain parts of the troposphere. Showing that one dataset, v1.4, overlaps the models is a pretty weak rebuttal. Why don’t the other obs datasets overlap?

    This tendency to keep responding to criticisms with the latest data (v1.4 was published this year) reinforces more than anything how unsettled the science still is. And that progress is being made…

    [Response: You missed the main criticism – the uncertainty on the model runs is completely wrong. The correct ones overlap even the earlier versions of the radiosondes. – gavin]

    Comment by Ian Rae — 14 Dec 2007 @ 7:05 PM

  56. The graphic labeled “Not quite so impressive” shows only RAOBCORE v1.4, happily up by the mean of the model data. The earlier datasets are not shown in the graphic, but yes they would overlap, down near a trend value of 0.1. So it would appear that in order to make things overlap you need models that predict global warming of less than 0.1 deg/decade! Yes, the upper error bar is around 0.5, but since the obs datasets are down at the lower bound, it’s kind of reassuring.

    Also, I thought error bounds meant any value within those bounds was equally likely. And since (peering at the diagram) around 80% of the model value error range is above the obs data, the question of a discrepancy remains statistically likely.

    Comment by Ian Rae — 14 Dec 2007 @ 8:08 PM

  57. Ian Rae, you seem to think that uncertainty in one aspect of climate science means that it is all uncertain: not true. CO2 forcing is well known. Radiosonde measurements are quite difficult (think about how they are made), so there’s lots of noise in the data. Also, your idea that any value within the error bounds is equally likely is mistaken: errors are often assumed to be normally distributed about the mean, although this, too, is just a convenient approximation.

    Comment by Ray Ladbury — 14 Dec 2007 @ 10:25 PM
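
    The point about normally distributed errors can be made concrete: under a Gaussian error model, values near the centre of a +/-2-sigma interval are far more probable than values near its edges. A stdlib-only sketch of the density ratio:

    ```python
    # Under a normal error model, values inside the error bars are NOT
    # equally likely: the density at the mean is much higher than at
    # the 2-sigma bound. (For a standard normal, the ratio is exp(2).)
    import math

    def normal_pdf(x, mu=0.0, sigma=1.0):
        z = (x - mu) / sigma
        return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

    centre = normal_pdf(0.0)   # density at the mean
    edge = normal_pdf(2.0)     # density at the 2-sigma bound

    # The mean is roughly 7.4 times more probable per unit interval
    # than the edge of the 2-sigma error bar.
    assert centre / edge > 7
    ```

    A uniform ("any value equally likely") reading of error bars would make that ratio exactly 1.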

  58. So what you are saying is that the confidence intervals around the model predictions are so large that they are essentially unfalsifiable?

    [Response: For this metric, and taking into account the errors on the observations and the shortness of the interval, yes. With different metrics, different time periods, or different observations, it’s a different story. – gavin]

    Comment by Terry — 14 Dec 2007 @ 11:55 PM

  59. As a non-professional scientist I would like to know why, if this new paper is so bad, it passed peer review. Isn’t peer review supposed to weed out bad papers?

    Comment by David Young — 15 Dec 2007 @ 1:44 AM

  60. Re #50 [David Young] David, I think of peer review as a kind of spam filter – it gets rid of most of the spam, but some slips through, and occasionally emails you would really have wanted get blocked – i.e. a really good paper gets rejected – I know this has happened to mine ;-).

    Comment by Nick Gotts — 15 Dec 2007 @ 5:25 AM

  61. #49 “So what you are saying is that the confidence intervals around the model predictions are so large that they are essentially unfalsifiable?”

    No, the models are falsifiable (in the sense of Popper); they just have not been falsified by this particular set of observational data. This is because the data are consistent with the models.

    However, the models are biased: on average they over-predict. This is undoubtedly a concern already known to the modellers.

    #50 “As a non professional scientist I would like to know why, if this new paper is so bad, that it passed peer review. Isn’t peer review supposed to weed out bad papers?”

    No process involving human beings is going to be perfect; bad papers do get through peer review sometimes. Fortunately, replies to the article appear in the journal and the original authors may submit a rebuttal. This means that science is able to recover from errors in the review process. This system has worked well in the past, and no doubt will sift out the truth in this case.

    Comment by Gavin (no, not that one, a different one) — 15 Dec 2007 @ 6:18 AM

  62. @Lynn – 44
    No. What it would mean (if it were true) is that all of the models and our understanding of what actually drives climate change would be wrong. Since the models used in AR4 depend on a certain GHG effect, none of their projections (temperature, precipitation, sea level, sea ice etc.) would command any confidence at all. The entire game would start from scratch, and nobody would spend huge amounts of money on reducing GHGs when the effect would be lost in the noise of other, yet unknown or heavily underestimated, forcings.
    Let me play the devil’s advocate for a second and string the evidence against CO2 together:
    CO2 levels are rising, but the climate sensitivity is vastly overestimated by the IPCC due to overestimated positive feedbacks. In truth, the feedbacks cancel each other out (Lindzen et al.) and the resulting sensitivity is much smaller (Schwartz). The surface temperature record is contaminated (McKitrick) and most of the observed warming is in fact land use, not GHG. If it were GHG, the radiative forcing should show higher trends in the mid troposphere, which it doesn’t (Douglass et al.). Lower temperatures in the stratosphere are caused by ozone depletion, and the Arctic is just affected by cyclic changes in winds and sea currents. So you see: nothing to worry about, especially since Loehle just showed that it was like this a mere millennium ago. And guess what? This is well within the modelled range, according to Gavin Schmidt.

    ;-) Just kidding.

    Comment by henning — 15 Dec 2007 @ 6:29 AM

  63. David Young #50, Peer review fulfills many functions–weeding out papers that are clearly incorrect, yes, but also improving papers that are flawed. And even if a paper is not 100% correct, a reviewer may decide that it would be of sufficient interest to the general community to be published. Remember, the intended audience are experts, not laymen. The assumption is that experts can read and discuss the paper and reach a conclusion as to its merits. The ultimate test is whether the paper is cited in future work. Peer review is a floor, not an absolute judgement.

    Comment by Ray Ladbury — 15 Dec 2007 @ 7:40 AM

  64. RAOBCORE: Still a few kinks…

    The people at RAOBCORE believe that v1.4 is definitely better than the earlier versions. However, I wouldn’t get too attached to v1.4 as of yet. They believe there is an issue endemic to all versions which will be fixed in the next. No doubt RAOBCORE will be a nice tool once it is done, but at the moment it looks like it has a few kinks.

    Cautionary note: The tropical mean trends 1958-1978 show warming at low levels but cooling at upper tropospheric levels, which seems unrealistic. This feature is related to a strong warming anomaly over the Eastern US which is spread to the Caribbean by the ERA-40 bg. Consequently the Caribbean stations may be overcorrected at low levels during this period. Since most tropical stations are in the Caribbean during these early days, this problem strongly affects also the global tropical means. This issue, which affects all RAOBCORE versions, will be fixed in the next version.

    RAdiosonde OBservation COrrection using REanalyses (RAOBCORE)
    Version 1.4, 30 Jan 2007
    http://homepage.univie.ac.at/leopold.haimberger/RAOBCORE_T_1.4.html

    I think I would avoid using this in the tropics for the time being, and unfortunately I can’t tell when 1.5 is coming out.

    If I understand the problem correctly, their product is designed to establish empirically how the climate system behaves according to certain metrics under near-equilibrium conditions, so the measurements have to be taken under near-equilibrium conditions. However, there was a strong weather system passing through at the time, and they took their measurements anyway. An actual case of GIGO, in a product being used to “test” the models.

    Comment by Timothy Chase — 15 Dec 2007 @ 7:48 AM

  65. Gavin,

    Re 43. So you are saying there is insufficient (statistical) confidence in the model output (per the confidence interval quoted) and the observational data (illustrated by the very large revision to RAOBCORE), taken as a joint probability distribution, to make any claims about the efficacy of the model output?

    [Response: Efficacy? If you mean to say that this data and this comparison are not very useful in characterising model skill, then the answer is yes, it is not (yet) useful. – gavin]

    Comment by Paul — 15 Dec 2007 @ 8:29 AM

  66. Gavin, will you be publishing the content of this post in the form of a rebuttal? And would such a rebuttal be subject to the same level of peer review as the original article? I encourage you to do so. It is good to see some attention being paid to confidence levels around the model predictions (red triangle series in the last graph: they’re all over the map). I can think of many studies in climatology where a match between two time-series patterns would be “not so impressive” if the authors were to correctly calculate robust confidence intervals on the two data series.

    Comment by Richard Sycamore — 15 Dec 2007 @ 10:50 AM

  67. I was forwarded what looked like a news report that made a lot of really interesting statements. It was also interesting that it swept through the deniers’ blogosphere and was replicated over and over again.

    Tracking it back, I found that it was a press release from Singer’s site. What is interesting is that almost none of the press release is relevant to the paper other than to make a passing reference to a paper that was accepted.

    Apparently, getting a paper into a peer-reviewed journal then entitles you to make other claims that you can’t make easily in something that is peer reviewed.

    In my reply, I made some simpler comments for my correspondent and the people who might eventually read the back and forth. Since no one has posted it above, I post it for your viewing. Anchor yourself so you don’t go into spin mode.

    Climate scientists at the University of Rochester, the University of Alabama, and the University of Virginia report that observed patterns of temperature changes (‘fingerprints’) over the last thirty years are not in accord with what greenhouse models predict and can better be explained by natural factors, such as solar variability. Therefore, climate change is ‘unstoppable’ and cannot be affected or modified by controlling the emission of greenhouse gases, such as CO2, as is proposed in current legislation.

    These results are in conflict with the conclusions of the United Nations Intergovernmental Panel on Climate Change (IPCC) and also with some recent research publications based on essentially the same data. However, they are supported by the results of the US-sponsored Climate Change Science Program (CCSP).

    The report is published in the December 2007 issue of the International Journal of Climatology of the Royal Meteorological Society [DOI: 10.1002/joc.1651]. The authors are Prof. David H. Douglass (Univ. of Rochester), Prof. John R. Christy (Univ. of Alabama), Benjamin D. Pearson (graduate student), and Prof. S. Fred Singer (Univ. of Virginia).

    The fundamental question is whether the observed warming is natural or anthropogenic (human-caused). Lead author David Douglass said: “The observed pattern of warming, comparing surface and atmospheric temperature trends, does not show the characteristic fingerprint associated with greenhouse warming. The inescapable conclusion is that the human contribution is not significant and that observed increases in carbon dioxide and other greenhouse gases make only a negligible contribution to climate warming.”

    Co-author John Christy said: “Satellite data and independent balloon data agree that atmospheric warming trends do not exceed those of the surface. Greenhouse models, on the other hand, demand that atmospheric trend values be 2-3 times greater. We have good reason, therefore, to believe that current climate models greatly overestimate the effects of greenhouse gases. Satellite observations suggest that GH models ignore negative feedbacks, produced by clouds and by water vapor, that diminish the warming effects of carbon dioxide.”

    Co-author S. Fred Singer said: “The current warming trend is simply part of a natural cycle of climate warming and cooling that has been seen in ice cores, deep-sea sediments, stalagmites, etc., and published in hundreds of papers in peer-reviewed journals. The mechanism for producing such cyclical climate changes is still under discussion; but they are most likely caused by variations in the solar wind and associated magnetic fields that affect the flux of cosmic rays incident on the earth’s atmosphere. In turn, such cosmic rays are believed to influence cloudiness and thereby control the amount of sunlight reaching the earth’s surface, and thus the climate. Our research demonstrates that the ongoing rise of atmospheric CO2 has only a minor influence on climate change. We must conclude, therefore, that attempts to control CO2 emissions are ineffective and pointless, but very costly.”

    Comment by Gsaun — 15 Dec 2007 @ 12:46 PM

  68. For some who have heard Prof. Douglass’ talks over the years in Rochester, this is a moment rich in drama. In his talks, there is always a heavy dose of anti-Gore sarcasm, and the belittling of climate scientists who predict anything more than the mildest consequences of global warming. He shows a slide with a vicious circle, in which predictions of significant consequences generate research funding, which in turn causes researchers to predict even more dire consequences. GCMs are deemed to be wrong because they are too complicated, while using a grid that is too coarse, and couldn’t possibly take account of all the physics correctly. One is struck by Prof. Douglass’ continued level of certainty, which hasn’t wavered over the years. Even after the claims of Christy and Spencer, which he had trumpeted, were shown to be erroneous, he didn’t waver. When someone asked, “What about the melting glaciers?” he responded that all the attention goes to glaciers that are shrinking. We don’t hear about the glaciers that are growing.

    The tone of the recent press release is no surprise, but it is a severe disappointment that the press conference (mentioned in #1) was canceled. A future playwright or composer of opera might have obtained some excellent material.

    Comment by gough — 15 Dec 2007 @ 12:50 PM

  69. Because the results between the v1.2 and v1.4 datasets were so different, I actually emailed one of the DCPS authors asking them to justify their dataset selection. From that explanation, I believe the v1.2 dataset to be the more accurate and the results based on that dataset to be more believable.

    One can be sure that if the empirical data had overlapped the model, there would be little or no discussion from modellers about error bars. In true protect-the-model form, it is asserted that, had error bars been added to the measurements, the measurement and model envelopes would overlap and, in the best case, show that the models are acceptable. It would also have shown, though, that the models could be even worse.

    I am a firm believer in model development. With respect to the atmosphere, though, the models have still not risen to the level of trustworthiness needed for future climate prediction/projection.

    [Response: Well that’s nice. Perhaps you’d like to share their explanation which curiously is not to be found in the paper itself? People wouldn’t be criticising the calculation of the error bars if it had been done properly… – gavin]

    Comment by Steve — 15 Dec 2007 @ 2:47 PM

  70. Steve (#58) wrote:

    Because the results between the v1.2 and v1.4 datasets were so different, I actually emailed one of the DCPS authors asking them to justify their dataset selection. From that explanation, I believe the v1.2 dataset to be the more accurate and the results based on that dataset to be more believable.

    I think the following is worth quoting at this point:

    Version 1.4 of RAOBCORE contains 2 major improvements compared to the versions 1.2, 1.3 described in Haimberger (2007) (J. Climate, in press). These improvements are:
    1) The dataset is updated up to December 2006
    2) The ERA-40 background modification described in Haimberger (2007) is only applied between Jan 1972 and Dec 1986. It has turned out that the ERA-40/ECMWF bg forecast time series are quite consistent with recent versions of the RSS and UAH satellite datasets, so that a modification of the ERA-40 bg is not necessary. Between 1972 and 1986, modifications of the bg are unavoidable. The bg is modified more strongly in the tropics in v1.4 compared to the modification applied in version 1.2. The differences between 1.2, 1.3 and 1.4 can be examined using the web visualization tool.

    Cautionary note: The tropical mean trends 1958-1978 show warming at low levels but cooling at upper tropospheric levels, which seems unrealistic. This feature is related to a strong warming anomaly over the Eastern US which is spread to the Caribbean by the ERA-40 bg. Consequently the Caribbean stations may be overcorrected at low levels during this period. Since most tropical stations are in the Caribbean during these early days, this problem strongly affects also the global tropical means. This issue, which affects all RAOBCORE versions, will be fixed in the next version.

    RAdiosonde OBservation COrrection using REanalyses (RAOBCORE)
    Version 1.4, 30 Jan 2007
    http://homepage.univie.ac.at/leopold.haimberger/RAOBCORE_T_1.4.html

    Looks like they would recommend using version 1.4. It also looks like they are having a problem with the tropics.

    It might also be worth looking at what the producer of a competing product has to say regarding their own product:

    Cautionary note

    It is important to note that significant uncertainty exists in radiosonde datasets reflecting the large number of choices available to researchers in their construction and the many heterogeneities in the data. To this end we strongly recommend that users consider, in addition to HadAT, the use of one or more of the following products to ensure their research results are robust. Currently, other radiosonde products of climate quality available from other centres (clicking on links takes you to external organisations) for bona fide research purposes are:

    *Radiosonde Atmospheric Temperature Products for Assessing Climate (RATPAC)
    *RAdiosonde OBservation COrrection using REanalyses (RAOBCORE)

    HadAT: globally gridded radiosonde temperature anomalies from 1958 to present
    http://hadobs.metoffice.com/hadat/

    Climate models do quite well — as measured by a variety of metrics in many different contexts. Radiosondes? Looks like there is still a substantial amount of work to be done — as indicated by the “Caution” labels.

    Comment by Timothy Chase — 15 Dec 2007 @ 5:00 PM

  71. “The interpretation of this is a little unclear (what exactly does the sigma refer to?), but the most likely interpretation, and the one borne out by looking at their Table IIa, is that sigma is calculated as the standard deviation of the model trends.”

    Does this mean that sigma is the standard deviation for the mean trend for each model over several realisations (i.e. it is the standard deviation of 22 numbers rather than 67)? If this is the case it may have artificially reduced the width of the error bars even further as the prior averaging over realizations will have reduced the variance to some extent. I would have thought the standard deviation over the 67 realisations would be a fairer estimate of the model uncertainty.

    [Response: Agreed. – gavin (yes, that one) ]

    Comment by Gavin (no, not that one, a different one) — 16 Dec 2007 @ 5:23 AM
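
The point in comment 71 can be made concrete with a toy sketch. The numbers below are entirely hypothetical (22 “models” sharing one underlying trend, a few realisations each, roughly 67 runs in total); this is not the actual model archive, but it shows why the standard deviation of per-model mean trends understates the spread of individual realisations, and why dividing by sqrt(22) again, as in the paper, gives a far too tight bound for comparison with a single observed realisation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the model archive: 22 "models", each with
# a few realisations, all sharing the same underlying trend (K/decade)
# plus internal variability.
true_trend, internal_sd = 0.2, 0.1
runs_per_model = rng.integers(1, 6, size=22)
runs = [true_trend + internal_sd * rng.standard_normal(n) for n in runs_per_model]

# Spread across all individual realisations:
sd_all = np.std(np.concatenate(runs), ddof=1)

# Spread across the 22 per-model mean trends: averaging within each
# model first shrinks the variance, so this understates the spread
# of possible realisations.
sd_means = np.std([r.mean() for r in runs], ddof=1)

# Dividing further by sqrt(22) gives the uncertainty of the ensemble
# MEAN, which is not a bound on where any single realisation
# (including the real world) should fall.
se_mean = sd_means / np.sqrt(len(runs))

print(sd_all, sd_means, se_mean)
```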

  72. RE #53 & 44. Henning, I never conceded any of those other points — the idea was that our GHG emissions were only playing a minor role in the warming (& I didn’t actually concede that either).

    However, if we pose ALL those other unfounded contrarian bizzaro-science points, it’s all the more easy to shoot them down (re policy implications):

    Even if GW and AGW are not happening, we still drastically need to reduce all the measures that involve GHG emissions through energy/resource efficiency/conservation & alt energy, not only because this will reduce many other environmental harms and lessen dependence on foreign oil and slow the depletion of finite resources, but also because it just makes eminent economic sense.

    For instance, we have reduced our GHG emissions (over our 1990 emissions) by two-thirds cost-effectively, without lowering our living standard (increasing it actually). And we could reduce more cost-effectively. Since we moved down to Texas to get on Green Mountain’s 100% wind energy (which also saves us money), I haven’t bought a bicycle. So once I get that I can cycle the 2 miles to work, rather than drive, and I’m sure improve my health & stress in the process. I also understand that cycling and walking help reduce crime, save money on road repair (by offsetting car driving), and create a more friendly community, etc.

    Reducing GHG emissions is a win-win-win-win-win-win situation. And if perchance the skeptics are correct and humans are not causing GW, and GW isn’t even happening — then our belief in the scientific facts that it is happening & our serious mitigation response to it would turn out to be the best thing that ever happened to us – the false positive bonanza.

    However, if we’re talking false negative & a do-nothing to mitigate approach, we’re really in for hell on earth.

    Comment by Lynn Vincentnathan — 16 Dec 2007 @ 10:11 AM

  73. Gavin:

    Repeating your assertion #2 that the National Press Club press conference had been canceled at DSB brought a reply from Singer saying he held it, though it is not on the NPC events calendar – maybe he bought somebody a waffle?

    Comment by Russell Seitz — 16 Dec 2007 @ 9:07 PM

  74. Apparently he talked to someone at a meeting sometime, if you read way down in this undated story, at least that’s how they describe it.
    http://afp.google.com/article/ALeqM5jnsW1wNezDB_oKpYLA5npFOC03Dg
    That John fellow at Alabama has his name spelled a new way in this article, I notice, suggesting the fact checker’s off today.
    No one signed it; Google News got it from Agence France Presse (AFP).

    Comment by Hank Roberts — 16 Dec 2007 @ 9:39 PM

    Just to get this in line. Let’s assume that the observed trends are indeed correct – wouldn’t that mean that Douglass et al.’s conclusions are correct, too? Radiative forcing should show trends in the troposphere higher than surface trends, as I understand it. If these trends were lower, the entire GHG theory would fall, right?

    [Response: No. The expected amplification has nothing to do with GHGs being the cause. If there were any real and clear differences (which there are not), it would imply either problems in the observing systems or problems with our ideas about moist convection. – gavin]

    Comment by henning — 17 Dec 2007 @ 4:07 AM

    RE #62 & “No. The expected amplification has nothing to do with GHGs being the cause. If there were any real and clear differences (which there are not), it would imply either problems in the observing systems or problems with our ideas about moist convection.”

    Gavin, you mean even if they were right and there was a significant difference between reality (assuming the obs are capturing that — which apparently they are not doing exactly) and the models, it would just mean the models need tweaking and the underlying processes rethought a bit.

    IOW, this is pretty much ado about nothing (for those interested in the macro-issues of whether or not GW is happening and whether it’s caused by our GHGs)?

    And the denialists’ main gist in their article and palaver around the web was to knock the models?

    Well, I have an answer to that attack, as well: Okay, models do not perfectly replicate reality, and perhaps they might be overestimating the problem; but then again they might be underestimating it, and we could be in much hotter water than we thought.

    Comment by Lynn Vincentnathan — 17 Dec 2007 @ 9:28 AM

  77. RE #66, Hank, that is shocking that Agence France Presse would run such a story. I think they have been a fairly good source of GW news stories. Surely they know the few people they interviewed are in the extreme minority of climate scientists, and that their ideas have been for the most part debunked.

    I know the media in general, especially here in the U.S., have been very bad in their GW coverage, and have given us the “silent treatment” on GW, and when they broke their silence, wrongly used the “balanced, pro-con” format (which is good for opinion issues, but not for science).

    But I thought the news services have been somewhat better on GW, and that it was the newspapers and TV news that refused to pick up the stories the news services offered them.

    Now I have to rethink news services (I’m writing a paper on GW and the media). I guess no one can be trusted in this, except the vast majority of climate scientists who say AGW is real.

    I’m wondering if the recent sale of AFP had anything to do with this – see http://www.reuters.com/article/technology-media-telco-SP/idUSL1556429420071217

    Comment by Lynn Vincentnathan — 17 Dec 2007 @ 10:09 AM

  78. #64 Gavin, I’m not a go between for you and other researchers. You have their email addresses. Ask them yourself! The whole idea behind scientific progress is for the researchers themselves to correspond with one another to resolve differences in results. If you do not have a good enough relationship with those researchers to air honest differences then you have basically created an inbred network of colleagues who do not review your work with a critical eye but simply rubber stamp it [edit]

    [Response: Asking for clarification of your statements as opposed to simply taking your word for something someone may have said seems appropriate. Rest assured that there is plenty of communication going on between those researchers who are actively working on this. My comments here have focussed only on two issues which do not require any expertise to assess – the incorrect calculation of the model uncertainty and the complete lack of discussion of the actual observational uncertainty. – gavin]

    Comment by Steve — 17 Dec 2007 @ 12:26 PM

  79. #73 Gavin, I understand that no one should simply take my word for anything. However, I did not feel I had the right to post statements from a private communication. The implication was that these researchers are available to any interested party with questions.

    In a follow-up email, though, I received a full explanation about the model uncertainty calculations and observational uncertainty (and I did not even ask for them). If you really want to know, all you have to do is email them.

    Comment by Steve — 17 Dec 2007 @ 2:08 PM

    An excerpt from the article above, comparing the two graphics (which show little difference):
    “If the pictures are very similar despite the different forcings that implies that the pattern really has nothing to do with greenhouse gas changes, but is a more fundamental response to warming (however caused).”

    It seems that the IPCC thinks differently and supports Douglass et al.’s point of view:
    “Figure 9.1. Zonal mean atmospheric temperature change from 1890 to 1999 (°C per century) as simulated by the PCM model from (a) solar forcing, (b) volcanoes, (c) well-mixed greenhouse gases, (d) tropospheric and stratospheric ozone changes, (e) direct sulphate aerosol forcing and (f) the sum of all forcings.”
    source:
    Chapter 9 Understanding and Attributing Climate Change, page 675
    http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter9.pdf

    It is clearly seen that, as simulated by the PCM model, different forcings generate different warming patterns in the tropical atmospheric temperature.
    What do you think about this?

    Comment by Arno — 17 Dec 2007 @ 2:16 PM

  81. Steve, ask them to come here or give you permission to quote them. No need to complicate this by asking other people to do it. You have the info, they have the right to say you can post it. Go for it!

    Comment by Hank Roberts — 17 Dec 2007 @ 2:24 PM

  82. Steve (#79) wrote:

    In a follow-up email, though, I received a full explanation about the model uncertainty calculations and observational uncertainty (and I did not even ask for them). If you really want to know, all you have to do is email them.

    Steve,

    If you are in contact with the authors of Douglass, Pearson, Singer and Christy, perhaps you could invite them to explain:

    (1) why they chose to use an older version 1.2 of RAOBCORE when the current version is 1.4;
    (2) why they think that 1.2 is superior to 1.4 even though the manufacturers regard 1.4 as having substantial improvements over 1.2 and 1.3;
    (3) why they chose not to even acknowledge the existence of versions 1.3 and 1.4 in their paper;
    (4) how they “calculated” model uncertainties; and,
    (5) why they chose to omit any acknowledgement of observational uncertainties inherent in the RAOBCORE radiosonde product.

    I am also wondering whether they are aware of the fact that RAOBCORE and HADAT carry prominent cautionary notes regarding their use and RAOBCORE specifically notes that all versions of their product have significant problems in the tropics.

    Personally, to me this seems more like the “oversight” of using earlier versions of UAH — which had a variety of technical issues, not the least of which involved the difference between night and day. Not that I mean to compare RAOBCORE to UAH.

    Comment by Timothy Chase — 17 Dec 2007 @ 3:16 PM

  83. #82 Tim, As I said in an earlier post, I have no desire to be the go-between for discussions. And I will not post the contents of non-public correspondences. That is a violation of trust. I did indeed inform them of this thread. My guess is that they will not post because of the sometimes uncivil responses that occur (as they also do on anti-AGW sites). Respectfully, you have a keyboard and the email addresses are public knowledge. If you are burning to know the answers, please send an email. If you get permission to post the response, I would love to see the civil exchange that develops.

    Comment by Steve — 17 Dec 2007 @ 5:27 PM

    Re 1. Thanks for the update – who claimed it was canceled?

    [Response:

    >Date: Tue, 11 Dec 2007 11:54:07 -0500 (EST)
    >From: info@marshall.org
    >Subject: Cancellation of SEPP Event
    >
    >The briefing on December 14, 2007 at the National Press Club organized by
    >The Science & Environmental Policy Project (SEPP)
    >has been cancelled.  We regret the inconvenience.
    

    – gavin]

    Comment by Russell Seitz — 17 Dec 2007 @ 9:22 PM

  85. As a geologist, having read and only partially understood the climate based terminology, I see a science in termoil with neither side with the ammunition to do each other in. I feel that this controversy could not be more ill-timed. With barbarians at our gates, determined to destroy western civilization, we are not in any position to throw many trillions of dollars and Euro’s into sequestering CO2. I personally feel we are in the 5th interglacial warm spell of the Pleistocene and anything that prolongs it is better than returning to another epoch of glaciation. How could it exceed the warm climate of the (175 my) Mesozoic Era? This was a good era for land based organisms. Primitive mammals, birds and flowering plants appeared. Mammals remained small because of dinosaur predation. Extensive forests florished as shown by the coal seams of the Mesa Verde fm and the very thick Paleocene coals of Wyoming. I think the problem of serious over-population and the resultant strain on natural resources and agricultural land will be our undoing in the struggle for survival, expedited by widespread nuclear technology.

    Comment by Robert Reynolds — 17 Dec 2007 @ 11:42 PM

  86. > Barbarians at The Gate
    Excellent book, I recommend it. Quite cautionary for our time too.
    http://books.google.com/books?id=3tDDlEFq1fAC&printsec=frontcover&dq=barbarians&lr=

    Comment by Hank Roberts — 18 Dec 2007 @ 12:36 AM

  87. Although it is interesting, I doubt if this discussion about the troposphere temperatures relative to the surface will resolve the argument about the impact of greenhouse gasses on global temperatures.

    This issue will not be resolved until the James Hansen prediction from the seventies has been either confirmed or rejected.

    Writing in 1978, he predicted that, “if the abundance of the greenhouse gasses continue to increase with at least the rate of the 1970s, their impact on global temperature may soon rise above the noise level”. For significance, he was looking for 0.4 degrees centigrade increase.

    There is nothing so powerful as a successful prediction.

    Without the significant increase in temperatures from 1978 to 1998, no-one (politicians, journalists and peace prize committees) would have taken the AGW argument seriously. Whether an increase of 0.91 degrees C could really have resulted from an increase in CO2 concentrations from 335ppm to 366ppm in twenty years is another matter altogether.

    It is what happened next that will be decisive. Starting from the end of 2007, how far back must we go before the temperature trend again differs significantly from zero? Against a straightforward F test, the increase in the UK data (a close proxy for the Northern Hemisphere, Ray) is significant at the 5% level in 1993, and not afterwards. And if we go back to the two warm years of 1989 and 1990, the increase from 1989 is only just significant at the ten per cent level.

    The CO2 increase since 1989 is almost the same as the Hansen increase.

    If these trends continue, and global temperatures do not rise over the next five years, the clamour from the sceptics will be deafening. Will it then be possible to construct a defence of AGW?

    Comment by Fred Staples — 18 Dec 2007 @ 7:34 AM

  88. Robert- what sea level was present 175 million years ago? Secondly, what corals were around? You cannot compare conditions many millions of years ago to what we may experience very soon (in geological scales) because it is rate of change that matters, not the precise end point.

    Climate change also puts pressure on farming because of altered rainfall patterns, salinisation as sea levels rise, increased CO2 levels changing plant respiration, and some other reasons which I cannot recall. It is more likely in my opinion that climate change, population growth, resource use and ecosystem destruction together would cause major problems.

    Comment by guthrie — 18 Dec 2007 @ 8:47 AM

  89. Robert Reynolds posts:

    [[As a geologist, having read and only partially understood the climate based terminology, I see a science in termoil ]]

    “Turmoil.”

    [[with neither side with the ammunition to do each other in.]]

    Then you clearly don’t know much about the subject.

    [[ I feel that this controversy could not be more ill-timed. With barbarians at our gates, determined to destroy western civilization, we are not in any position to throw many trillions of dollars and Euro’s into sequestering CO2. I personally feel we are in the 5th interglacial warm spell of the Pleistocene and anything that prolongs it is better than returning to another epoch of glaciation.]]

    A real geologist would know we weren’t due for an ice age for another 20,000-50,000 years, even without global warming.

    [[ How could it exceed the warm climate of the (175 my) Mesozoic Era? This was a good era for land based organisms.]]

    Doesn’t mean it would be good for us, or that the transition would be smooth or even survivable. Lava becomes great soil, but you don’t want to be there when it comes out of the volcano.

    [[ Primitive mammals, birds and flowering plants appeared. Mammals remained small because of dinosaur predation. Extensive forests florished as shown by the coal seams of the Mesa Verde fm and the very thick Paleocene coals of Wyoming.]]

    A geologist would know that the Paleocene was not during the Mesozoic.

    [[ I think the problem of serious over-population and the resultant strain on natural resources and agricultural land will be our undoing in the struggle for survival, expedited by widespread nuclear technology.]]

    Could be.

    Comment by Barton Paul Levenson — 18 Dec 2007 @ 11:08 AM

  90. Fred Staples #87 said: “If these trends continue, and global temperatures do not rise over the next five years, the clamour from the sceptics will be deafening. Will it then be possible to construct a defence of AGW?”

    Actually, since there are few skeptics who even understand climate science, let alone publish in refereed journals, they can scream as loudly as they want. Five years is a very short time to look for climatic trends – I certainly wouldn’t recommend allowing a 5-year trend to overrule the evidence emerging from a 20-year trend or a 150-year trend. I would also think that physics ought to play a role – physics says we’ll keep warming in the long term.

    Comment by Ray Ladbury — 18 Dec 2007 @ 11:29 AM

  91. I’m sorry, Ray, if my comment (89) was not clear. The trend over the last six years is downward, but that is far too short a period to mean anything.

    Before 1998 the temperature trend was upward – not just increasing but increasing significantly in the F-test sense relative to the random variation in the signal.

    However, from 1994 to 2007, 13 years, the upward trend is not significantly different from zero. There is one chance in six that the observed trend arose by accident.

    If we then go back 5 more years to 1989, 18 years, we find an annual average temperature of 10.5 degrees against 10.42 this year. The trend temperature is still upward over the entire 18 years because three of the next four years (’91/92/93) were cold (below pre-Hansen levels, actually) but the upward trend is again not statistically significant.

    If the temperatures fluctuate about current levels over the next 5 years to 2012 we will then have a total of 23 years without a significantly increasing trend. It is this period that will, I suspect, make AGW indefensible to the non-scientific establishment.

    Comment by Fred Staples — 18 Dec 2007 @ 12:47 PM
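
Fred’s significance argument can be checked with a quick Monte Carlo sketch. The trend and noise levels below are hypothetical stand-ins (not the actual UK or global series), but they illustrate why a real, steady warming trend routinely fails a 5% significance test over a 13-year window while passing easily over 30 years; in other words, a short non-significant window is weak evidence against an underlying trend:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)

# Hypothetical series: a steady 0.02 K/yr trend plus 0.15 K of
# interannual noise (roughly the scale of observed annual anomalies).
trend, noise = 0.02, 0.15

def frac_significant(window, n_sim=2000):
    """Fraction of simulated windows whose fitted trend passes p < 0.05."""
    years = np.arange(window)
    hits = 0
    for _ in range(n_sim):
        y = trend * years + noise * rng.standard_normal(window)
        if linregress(years, y).pvalue < 0.05:
            hits += 1
    return hits / n_sim

# A real underlying trend frequently fails significance over 13 years,
# but is detected almost every time over 30 years.
print(frac_significant(13), frac_significant(30))
```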

  92. if these trends continue, and global temperatures do not rise over the next five years, the clamour from the sceptics will be deafening. Will it then be possible to construct a defence of AGW?”
    Comment by Fred Staples — 18 December 2007 @ 7:34 AM

    As this question is literally rhetorical – it concerns the quality of both sides’ rhetoric – the answer may depend on the degree to which proponents of models refrain from giving them much to clamor about. The rhetoric of motives has already severely afflicted one side, but that will afford no protection to the other if its commitment to scientific candor should falter, or if it lets the semiotic abuse of models as a tool for the advancement of environmental or economic agendas get out of hand.

    Comment by Russell Seitz — 18 Dec 2007 @ 12:49 PM

  93. Fred Staples #91, It is interesting that you are more interested in what the “non-scientific establishment” thinks than what the scientists think, is it not? Wouldn’t one think that the experts would have a better appreciation for what is going on than the non-scientists? And even if your contention of cooling were correct (it is not), if there were a good reason for the cooling (e.g. decreased insolation, increased aerosols from Chinese coal-burning power plants, etc.) that would certainly not mean we are out of the soup. Thanks, Fred, but I’ll stick with physics.

    Comment by Ray Ladbury — 18 Dec 2007 @ 3:11 PM

    We have discussed the physics at great length, Ray. We agreed, I think, that both of the plausible explanations for AGW (inhibited surface radiation and “higher is colder”) require the troposphere temperature to increase more than the surface temperature. That issue is the subject of this thread.

    It is a simple matter of record that AGW would not have been taken seriously had it not been for the warming from 1978 onwards, predicted by James Hansen. Temperatures had declined from the previous peak in the thirties.

    The CO2 Science web site provides us with an F-test for two sets of data:
    Angell, J.K. 1999. Global, hemispheric, and zonal temperature deviations derived from radiosonde records. In: Trends: A Compendium of Data on Global Change. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, TN, USA.
    And

    The Global Historical Climatology Network (GHCN)

    This provides a direct comparison of the troposphere data and the surface data, measured independently, from 1978 onwards.

    For the surface data we obtain an increase of 0.8 degrees C, with an F value of 55 for 27 degrees of freedom – absolutely significant.

    For the troposphere data we have a trend not significantly different from zero, F = 0.16 for 24 degrees of freedom (to year 2004).

    How Ray, in the name of physics, can you explain those results? As I am sure you know, the F-test is testing that data against its inherent variability and its measurement error, combined. The surface temperature has increased; the troposphere temperature has not.

    Comment by Fred Staples — 19 Dec 2007 @ 7:04 PM
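
For reference, F values of the kind Fred quotes can be reproduced for any annual series: with time as the single predictor, the regression F statistic is simply the square of the t statistic on the slope, with (1, n-2) degrees of freedom. A sketch on synthetic data (the trend and noise are hypothetical, standing in for a surface record; this is not the GHCN or Angell data):

```python
import numpy as np
from scipy.stats import linregress, f as f_dist

rng = np.random.default_rng(2)

# Hypothetical annual anomalies, 1978-2005, with a 0.025 K/yr trend
# plus 0.1 K of noise, standing in for a surface temperature record.
years = np.arange(1978, 2006)
anoms = 0.025 * (years - years[0]) + 0.1 * rng.standard_normal(years.size)

res = linregress(years, anoms)

# For a single predictor, the regression F statistic is t^2 on the
# slope, with (1, n-2) degrees of freedom; its p-value matches the
# two-sided p-value reported by linregress.
F = (res.slope / res.stderr) ** 2
p = f_dist.sf(F, 1, years.size - 2)

print(F, p, res.pvalue)
```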

  95. http://www.nap.edu/openbook.php?record_id=9755&page=21

    “..the range of these trend estimates is determined by applying different trend algorithms to the different versions of the surface and tropospheric data sets. Further discussion of the uncertainties inherent in these estimates is provided in chapters 6–9.”

    http://scienceblogs.com/stoat/2007/05/the_significance_of_5_year_tre.php#

    Comment by Hank Roberts — 19 Dec 2007 @ 8:05 PM

    Fred, that information you quote from the CO2science website — do you know what’s been published since that 1999 article, following up the Christy et al. work?

    check the references and follow the cites forward.

    Comment by Hank Roberts — 19 Dec 2007 @ 8:14 PM

  97. “Trends, trends, trends”. Refocus. The substantive issue here is whether or not you have a divergence problem. If the temperature trend continues to fail to rise in lock-step with rising CO2 you have a problem; your CO2 sensitivity estimate is dropping. The real question is how much confidence you have in this estimated parameter. If you are confident, then you know that the long-term trend will pick up, despite what it appears to do in the short run. My advice is to forget about data ‘trends’ (very flaky index prone to abuse by spinmeisters) and focus on model parameter estimates. After all, “attribution is fundamentally a modelling exercise”.

    Comment by Richard Sycamore — 19 Dec 2007 @ 8:35 PM

  98. Fred, isn’t it funny how CO2″Science” has managed to fall 8 years behind on their estimates of tropospheric warming. My, the time just gets away, doesn’t it? I believe there have since been a couple of adjustments–ever upward–in tropospheric warming estimates. There may be more to come. It is not hard to understand this. Measuring tropospheric temperatures is a difficult enterprise, and the troposphere is a turbulent place where energy is transported rapidly. So between uncertainties and the difficulty of modeling energy transport in the troposphere, I’m not too concerned about the somewhat lower than predicted warming of the troposphere. Part of physics is understanding when errors preclude definitive statements, and in the face of the overwhelming evidence for anthropogenic causation from other lines, it is hard to get too overwrought.
    But it is physics that tells us that if additional CO2 absorbs more IR, then the planet must warm, and physics tells us that CO2 has to absorb more IR. And it has found no other mechanism that can account for the warming we see. Like I say, physics been bery, bery good to me. I’ll stick with it.

    Comment by Ray Ladbury — 19 Dec 2007 @ 8:42 PM

  99. Oh, Fred, you can read about this issue more here:
    http://www.realclimate.org/index.php/archives/2005/08/the-tropical-lapse-rate-quandary/
    and here:
    http://www.realclimate.org/index.php?p=170

    Isn’t it odd, that people are willing to go to all this trouble trying to discredit all the sites that measure surface temperature, and yet they take the much more fraught radiosonde measurements as gospel. Go figure.

    Comment by Ray Ladbury — 19 Dec 2007 @ 8:51 PM

  100. Re 93: “It is interesting that you are more interested in what the “non-scientific establishment” thinks than what the scientists think, is it not?”

    This is in fact a crucial issue. The “non-scientific establishment” calls the shots in the political and economic arenas. They control the resources.

    The denialists are not interested in advancing science, not at all. Their only goal is to create such an uncertainty that the required political and economic decisions are not made. In this they have been rather successful.

    However, there is a growing body of “non-scientific” parties that see climate change as an opportunity. Early adoption of new scientific findings has always given an edge in the competitive marketplace.

    By the way, denialist services … How come that always and anywhere (in any language and in any major media), if you mention climate change, within the hour there pop up two to six denialists to re-circulate the same discredited opinions? It looks like a network of “service centers”, with some underpaid and overworked youths (or retirees) copy-pasting these preset opinions and factoids under various real or assumed names. They do not have very much traction nowadays, but as loyal employees they carry on regardless.

    Comment by Pekka Kostamo — 19 Dec 2007 @ 10:59 PM

  101. Pekka, re: denialist services, interesting idea. One study I’d like to see is a correlation between denialism and latitude and/or proximity to near-sea-level locations. I suspect that there are a lot more denialists at northern latitudes, simply because some northern climes may even benefit from climate change. Likewise, there is less incentive to worry about rising sea levels if you occupy the physical (though not the moral) high ground. This is what Roger Waters has called “The bravery of being out of range.”

    So, if we wanted to visit the denialist services mothership, we should probably look on a hill somewhere near the arctic circle.

    Comment by Ray Ladbury — 20 Dec 2007 @ 9:34 AM

  102. Richard Sycamore, you’re going on at length on your subject in the other thread: http://www.realclimate.org/index.php/archives/2007/12/live-almost-from-agu%e2%80%93dispatch-3/#comment-77526
    Please don’t distract Fred right now, he’s asked about trends based on what he read elsewhere, it’s an important question, he’s been given pointers to how to do his own skeptical reading and get updated info.
    You’re getting attention in the other thread.

    Comment by Hank Roberts — 20 Dec 2007 @ 10:49 AM

  103. re 101:
    “if we wanted to visit the denialist services mothership, we should probably look on a hill somewhere near the arctic circle.”

    Thin metaphorical ice Ray.

    Had Mjoes not hailed from 70 north (Tromso advertises itself as “The Northernmost University”), where the sun shines sideways if at all and the IR optical depth to sunward is as deep as the gloom in a Bergman movie, his Nobel priorities might have been otherwise.

    Comment by Russell Seitz — 20 Dec 2007 @ 9:33 PM

  104. This argument is not about a PREDICTION. It is about a SIMULATION. There is so much variability between models and between data collections that it is not surprising that some models can be found which simulate some data.

    This does not prove the correctness of the models, because of the well-known (but little accepted) maxim that a correlation, however convincing, does not prove cause and effect.

    No model has ever convincingly predicted future climate. Global temperatures, however measured, have been relatively unchanged for some eight years, in violation of all model PROJECTIONS. Until models can be shown to be successful in prediction, why should anybody believe in them?

    Comment by Vincent Gray — 22 Dec 2007 @ 8:52 PM

  105. But, Vincent, can you cite any source to support any of what you write above? I can understand you saying you believe it. But I’ll be surprised if you can show anyone else has published research supporting what you believe. Please provide your evidence that my hypothesis about this is wrong by giving cites — that’s how science works, after all.

    Hansen’s Scenario C looks very good so far, after 20 years. http://www.realclimate.org/index.php/archives/2007/05/hansens-1988-projections/

    Eight years is insufficient data to reliably demonstrate a trend (or lack of one) against the noise level in climate. Didn’t I find this for you earlier? Has someone told you different? Who? Where?

    William Connolley gives you the information to be appropriately skeptical about what people tell you, and points out how you can download the data set and do your own statistics to test what you’re being told and shows you what you will get using standard tests of significance on one sample data set, and comments:

    “15 year trends are pretty well all sig and all about the same; that about 1/2 the 10 year trends are sig; and that very few of the 5 year trends are sig. From which the motto is: 5 year trends are not useful with this level of natural variability. They tell you nothing about the long-term change.”
    http://scienceblogs.com/stoat/2007/05/the_significance_of_5_year_tre.php#
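    (If you want to check that motto yourself, here is a short, self-contained Python sketch. The 0.02 °C/yr trend and 0.1 °C year-to-year noise are round illustrative numbers I picked, not values fitted to any real series; the point is only that the same underlying trend is rarely “significant” in 5-year windows but usually is in 15-year windows.)

```python
import numpy as np

rng = np.random.default_rng(0)

def trend_is_significant(y, t_crit=2.0):
    """True if the OLS trend of annual series y has |t-statistic| > t_crit."""
    n = len(y)
    x = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    # standard error of the slope, from the residual variance
    se = np.sqrt((resid @ resid) / (n - 2) / ((x - x.mean()) ** 2).sum())
    return abs(slope / se) > t_crit

# Illustrative numbers: 0.02 C/yr underlying trend, 0.1 C interannual noise.
trials = 1000
for years in (5, 10, 15):
    hits = sum(
        trend_is_significant(0.02 * np.arange(years) + rng.normal(0, 0.1, years))
        for _ in range(trials)
    )
    print(f"{years:2d}-year windows: {hits / trials:.0%} judged significant")
```

    Run it with different seeds: the 5-year windows flip between “significant” and “not” while the 15-year windows are stable, which is exactly Connolley’s point.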

    So, even Hansen’s 20 year old model has been quite good at predicting (and over a long enough period of years for the trends to be statistically interesting). Does having a factual basis to believe this change what you believe? Do facts make a difference?

    Comment by Hank Roberts — 22 Dec 2007 @ 10:43 PM

  106. Dr. Gray writes:

    [[No model has ever convincingly predicted future climate. Global temperatures, however measured, have been relatively unchanged for some eight years, in violation of all model PROJECTIONS. Until models can be shown to be successful in prediction, why should anybody believe in them?]]

    Climate models successfully predicted that the climate would warm, that the stratosphere would cool, that the poles would warm more than the equator, that nights would warm more than days, and they predicted quantitatively how much the Earth would cool after the eruption of Mount Pinatubo. What else do you want?

    Comment by Barton Paul Levenson — 23 Dec 2007 @ 7:12 AM

  107. Barton Paul Levenson (#106) wrote:

    Climate models successfully predicted that the climate would warm, that the stratosphere would cool, that the poles would warm more than the equator, that nights would warm more than days, and they predicted quantitatively how much the Earth would cool after the eruption of Mount Pinatubo. What else do you want?

    I would include the expansion of the Hadley cells, the rise of the tropopause, the super greenhouse effect in the tropics, and I understand they do quite well against ocean circulation. They have also predicted that the range of hurricanes and cyclones would expand — just about a year before Catarina showed up off the coast of Brazil, I believe. They are also used to understand paleoclimates, tested using hindcasting, etc..

    And we should keep in mind the fact that they aren’t based upon correlations and aren’t instances of line-fitting. They are built upon physics. Radiation transfer theory, thermodynamics, fluid dynamics, etc.. They don’t tinker with the model each time to make it fit the phenomena they are trying to model. With that sort of curve-fitting, tightening the fit in one area would loosen the fit in others. They might improve the physics, but because it is actual physics, tightening the fit in one area almost inevitably means tightening the fit in numerous others.

    *

    Vincent,

    The trend in the global average temperature for the past few years is flat only if you lop off the Arctic. And DePreSys did well at forecasting that, the recent short-lived El Nino and the La Nina. Moreover, it tells us that temperatures will remain flat for 2008, but with the coming of the next El Nino (some time around December of 2008, I presume), temperatures will begin to climb again. Or so they project.

    If you want the models to realistically model natural variability with the hot El Ninos, cool La Ninas and all, you have to initialize the models with real-world data. But that means taking measurements. Plenty of measurements. And we are getting into this now.

    Comment by Timothy Chase — 23 Dec 2007 @ 5:11 PM

  108. Did this post get lost?

    Thank you, Hank. The links in 99 are very interesting.

    Suppose I concede immediately all the contentious points relating to the radio-sonde data. This means that the people responsible did not suspect that the sun was warming their instruments, that they could not apply the necessary correction retrospectively, and that when the instruments were modified the reduction in the systematic error compensated perfectly for a rise in tropospheric temperature between 1978 and 2000, which it completely masked. We must also accept that the absence of an upward trend since 2000 cannot be used as evidence for a zero increase because “the other errors are, unfortunately, not as easy to quantify as the solar heating error. It is not clear what direction they may have pushed trends”.

    The UAH story is familiar, and their data must be the most analysed and corrected data in the field. Christy and Spencer seem to have accepted all the corrections with good grace, and the fit of their data to the RSS data is almost perfect.

    I have consequently repeated my analysis using the UAH data.

    First, there was no significant increase in lower troposphere temperatures between 1978 and 1996, and no increase at all to the end of 1995.

    Overall, from 1978 to 2007 the increase is significant and the plus or minus 5% confidence limits range from 0.35 to 0.47 degrees C.

    For the surface data we have an increase of 0.8 degrees C, with an F value of 55 for 27 degrees of freedom – absolutely significant.
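    (For anyone who wants to reproduce this sort of calculation: the F value for a linear trend is just the ratio of explained to residual variance. A short sketch, using made-up data rather than the actual series:)

```python
import numpy as np

def trend_f_value(y):
    """F statistic of a simple linear (trend) regression:
    model mean square (1 d.o.f.) over residual mean square (n - 2 d.o.f.)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    x = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    fitted = slope * x + intercept
    ss_model = ((fitted - y.mean()) ** 2).sum()
    ss_resid = ((y - fitted) ** 2).sum()
    return ss_model / (ss_resid / (n - 2))

# Made-up annual anomalies with a clear upward trend (29 points, as in 1978-2006):
rng = np.random.default_rng(1)
series = 0.03 * np.arange(29) + rng.normal(0, 0.1, 29)
print(trend_f_value(series))  # a large F means the trend is highly significant
```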

    So, the increase in tropospheric temperature over the crucial “Hansen” period is half that of the surface temperature. Your link states that it should be from 1.0 to 1.8 times the surface temperature increase. The anomaly remains.

    I will not suggest that we use Occam’s razor to resolve the dilemma, nor do I claim that this data disproves the AGW theory. What I do claim is that the CO2 global warming theory is nothing like as certain as its proponents suggest. If it were only a matter of scientific debate, it would not matter – time and further work would resolve the issues (between five and ten years on current trends, in my opinion).

    But journalists and political leaders cannot judge the merits of the case – they accept the “scientific consensus” and campaign for policies which may well prove to be foolish and even dangerous. Take, for example, the UK Environment secretary. He is quoted as claiming that UK temperatures since the seventies have risen by about one degree, which they have, and that this establishes the case for AGW.

    He omits to say that the same temperatures fell by one third of a degree between the fifties and the seventies, that 1949 was the second warmest year in the record, and that there has been no significant increase in UK temperatures since 1989.

    Comment by Fred Staples — 25 Dec 2007 @ 8:04 AM

  109. Fred, What it shows is that either
    1)there are additional sources of error in the measurements–a very likely possibility if you know anything about either the satellite or the balloon measurements.
    2)that the models are not complete–also quite likely
    3)both of the above

    None of this in any way invalidates the robust conclusion that the current warming is due to anthropogenic CO2–for which we have mountains of evidence. In science, you have to go with the preponderance of evidence, and it is very unusual for any single piece of contrary evidence to invalidate a theory, especially if there are plausible alternative explanations–as there are in abundance here.

    Comment by Ray Ladbury — 25 Dec 2007 @ 9:38 AM

  110. Hi, I’d like to see more depth here!

    Comment by Marco Feindler — 27 Dec 2007 @ 8:04 PM

  111. UAH is about to make a correction, but it will lower their data by about 0.2 degrees for the last three months http://vortex.nsstc.uah.edu/data/msu/t2lt/readme.19Dec2007

    Comment by jbleth — 30 Dec 2007 @ 5:20 PM

  112. Just a quick question….

    The tropics seem to be a favorite topic among contrarians at this point, although I suspect you are already familiar with much of this; it is part of what the “Iris Effect” is supposed to work off of. And interestingly enough, there does seem to be a reduction in cloud cover, particularly in the 20N–20S band. It reduces the outgoing reflected shortwave at the top of the atmosphere, but increases the outgoing longwave by the same amount, so that the net effect of reduced cloud cover is neither to increase the temperature (as some might have it, in order to provide an alternative “explanation” to the greenhouse effect for the current warming trend) nor to reduce temperature, as Lindzen and Christy would have it.

    And of course without the increased opacity of the atmosphere to longwave, it would be difficult to explain how the increase in TOA outgoing longwave is just managing to keep up with the reduction in TOA outgoing reflected shortwave – given the fact that the tropical SST has increased over the same period (1980s to present). Additionally, as you make clear in the post, this is an issue that has nothing actually to do with the forcing which is driving the warming trend.

    *

    Nevertheless, there is the issue of reduced cloud cover, a trend which as I understand it many models have difficulty with. And from what I understand, this pertains to the parameterization of moist air convection, which is necessary because the resolution of the models is too coarse to capture the process. How is the GISS Model E performing in this area at this point? Is it capturing the reduction in cloud cover, and do we have some understanding of the process involved?

    For what may at first seem like a bizarrely different topic: it is my understanding that Wieslaw Maslowski is doing a better job of capturing the oceanic advection in the Arctic which is melting the ice from below, and thus of explaining the trend in Arctic sea ice, using a higher-resolution model. Their model predicts 2013 without taking into account the data from either 2005 or 2007. So it would seem that the biggest problem with forecasts is with modeling convection in both the atmosphere and the ocean, caused by the complexity of the process, which simply overwhelms computer resources.

    Would there be ways of applying neural networks in the area of parameterization, or perhaps of adjusting the parameterization based upon flow conditions, or perhaps of dynamically adjusting the spatial or temporal grid so that it becomes finer when the calculations require it, as with the fractal compression of digital photographs?

    No doubt some people are already looking into things like this, but I was just wondering what sort of things are being attempted.

    Comment by Timothy Chase — 2 Jan 2008 @ 11:16 AM

  113. I have a background that includes some computer modeling/simulation of complex non-linear systems… though no knowledge of climate and atmospheric dynamics other than what I’ve picked up recently, mostly off this site.

    Can someone point me to some resources that might bring me up to speed with the modeling techniques used? The previous post kind of piqued my curiosity.

    I read statements that “curve fitting” is not used to tweak the models. I obviously understand that the models are actually simulating real physical phenomena rather than trying to derive abstract functions that map inputs to outputs through a learning or “curve fitting” process. However, are there no parameters internal to the models that are derived through adaptive techniques trying to achieve a fit for historical data?

    When Timothy Chase said “applying neural networks in the area of parameterization” which parameters are being referenced… grid sizes, time steps or parameters that are internal to the physical processes being modeled?

    Many Thanks in advance.

    Comment by Phillip Duncan — 3 Jan 2008 @ 11:35 PM

  114. Phillip Duncan (#113) wrote:

    I have a background that includes some computer modeling/simulation of complex non-linear systems… though no knowledge of climate and atmospheric dynamics other than what I’ve picked up recently, mostly off this site.

    Can someone point me to some resources that might bring me up to speed with the modeling techniques used? The previous post kind of piqued my curiosity.

    I read statements that “curve fitting” is not used to tweak the models. I obviously understand that the models are actually simulating real physical phenomena rather than trying to derive abstract functions that map inputs to outputs through a learning or “curve fitting” process. However, are there no parameters internal to the models that are derived through adaptive techniques trying to achieve a fit for historical data?

    When Timothy Chase said “applying neural networks in the area of parameterization” which parameters are being referenced… grid sizes, time steps or parameters that are internal to the physical processes being modeled?

    Well, I am afraid that I won’t be able to help you much in terms of understanding the actual parameterizations which are used – although I can point you in one direction or another which may be helpful up to a point. But when I speak of using neural networks, or of a dynamic model resolution where the local resolution might be adjusted automatically where, according to some calculation, a lower resolution is more likely to affect the results (e.g., where windspeeds and turbulence become greater), this isn’t necessarily something which current models are capable of. It may not even be a realistic suggestion, but given what limited knowledge I have, it seems reasonable at least.

    *

    With respect to the difference between curve-fitting and the sort of parameterization which is made use of in climate models, the distinction is quite important – and relatively easy to make – so I hope you don’t mind if I explain it a little first for the benefit of those who may be less knowledgeable than yourself. Models use parameterizations because they are necessarily limited (in one form or another) to finite difference calculations.

    There will exist individual cells, perhaps a degree in latitude and a degree in longitude. These cells will be of a certain finite height, such that the atmosphere will be broken into layers – with perhaps the troposphere and stratosphere sharing a total of forty atmospheric layers. Likewise, calculations will be performed in sweeps such that the entire state of the climate system for a given run is calculated perhaps every ten minutes in model time.

    Now physics provides the foundation for these calculations, but as we are speaking of finite differences, the calculations will tend to have problems with turbulent flow due to moist air convection, for example. When you have flow which is particularly turbulent, such as around the Polar Vortex, cell-by-cell calculation based on finite differences will lack the means to tell how, for example, the momentum, mass, moisture and heat leaving the cell will be split up and transferred to the neighboring cells. To handle this, you need some form of parameterization. Standard stuff as far as modeling is concerned, I would presume.

    Parameterization is a form of curve-fitting. But it is local curve-fitting in which one is concerned with local conditions, local chemistry and local physics — backed up by the study of local phenomena, e.g., what you are able to get out of labs or in field studies. It is not curve-fitting which adjusts the models to specifically replicate the trend in the global average temperature or other aggregate and normalized measures of the climate system.
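    To make the distinction concrete, here is a toy single-column sketch of one classic local parameterization, convective adjustment. This is purely my own illustration – the layer structure, the critical lapse rate of 6.5 per layer, and the mixing rule are all made up for the example, not taken from any actual GCM. Note that the one tunable constant is fixed from local physics, and nothing in the scheme ever sees a global temperature curve.

```python
import numpy as np

CRITICAL_LAPSE = 6.5  # degrees per layer; the tunable constant (made-up value)

def convective_adjust(temps, critical=CRITICAL_LAPSE, sweeps=50):
    """Toy convective adjustment on a single column.

    temps[0] is the lowest layer. Wherever the temperature drop between
    adjacent layers exceeds `critical`, mix the excess heat equally between
    the two layers. Purely local, and conserves total heat.
    """
    t = np.asarray(temps, dtype=float).copy()
    for _ in range(sweeps):
        for k in range(len(t) - 1):
            excess = (t[k] - t[k + 1] - critical) / 2.0
            if excess > 0:
                t[k] -= excess
                t[k + 1] += excess
    return t

# A column whose lowest layer is convectively unstable:
profile = convective_adjust([300.0, 285.0, 280.0])
print(profile)  # heat-conserving profile with all layer-to-layer drops <= 6.5
```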

    *

    To give an example, with the most recent NASA GISS model, they are beginning to take into account elements of the carbon cycle. So for example, if you wish to take into account how plants will respond to increases in temperatures, you need a representative species and you need studies which show how members of those representative species will respond to a specific temperature, level of carbon dioxide and perhaps tropospheric ozone. The data you get from such studies are then parameterized, providing us with a set of equations which may be applied at the cell-level at each increment of model time.

    In any case, if you would like to look at the models themselves and even examine their code, they are available – although it may take some digging to find whatever it might be that you are specifically looking for. The datasets which they are given in terms of the levels of specific greenhouse gases, aerosols, solar irradiance and the like are available. There is extensive literature on how the levels of the various of these quantities are estimated based upon empirical studies (e.g., gas bubbles which trap aerosols from earlier in this century in places like Greenland), etc..

    *

    Anyway, this would probably be a good place to look for much of the information you may be interested in:

    The frozen version used for upcoming IPCC simulations (see below) and the controls for upcoming model description papers is denoted as ModelE1 (internal version number 3.0, dated Feb. 1, 2004). This code can be freely downloaded (as a 1.2 MB gzip-ed tar file) from modelE1.tar.gz.

    GCM – Model E
    http://www.giss.nasa.gov/tools/modelE/

    The same webpage also includes technical articles detailing the changes which went into the most recent model.

    PS

    My apologies for not responding a little earlier, but I wanted to give someone else more knowledgeable than myself the opportunity to respond first.

    Comment by Timothy Chase — 5 Jan 2008 @ 5:18 PM

  115. Re #113 Philip Duncan:

    A good, very readable introduction to climate physics and the basics of climate modeling is Ray Pierrehumbert’s draft textbook, available online at:

    http://tinyurl.com/2n7sr4

    I started referring to this text while attending an undergrad course on “radiation in planetary atmospheres” that focused on the optical properties of the atmosphere – the absorption and emission aspects of the greenhouse effect. I’d already done one undergrad intro to climatology, with a quick overview of the equations of atmospheric motion, convection, vorticity, etc., but there were no programming tasks in that course.
    I haven’t read all of the draft textbook yet, but it appears to cover both these aspects and to introduce how to go about building computer models that incorporate these sets of physical laws. I found the prose very readable and progressive, explaining the steps along the route.

    Comment by Jim Prall — 11 Feb 2008 @ 6:04 PM

  116. Sorry if this is OT but water vapour gets a few mentions here and comments on the 2005 http://www.realclimate.org/index.php/archives/2005/04/water-vapour-feedback-or-forcing/ are closed — but I found another reference to the Lindzen claim that at least 98% of the greenhouse effect is water vapour:

    http://www.downbound.com/Greenhouse_Effect_s/322.htm

    The link in this article is dead but here’s another copy: http://eaps.mit.edu/faculty/lindzen/153_Regulation.pdf or http://www.cato.org/pubs/regulation/regv15n2/reg15n2g.html

    This seems to me the most likely source for the denialist propaganda machine.

    Hope this is of interest to collectors of denial memorabilia. There is a claim now doing the rounds that H_2O is only 95% of the effect so, in another couple of decades, they will be in the mainstream (which will probably have flooded their houses by then …).

    Comment by Philip Machanick — 7 Mar 2008 @ 7:26 PM

  117. I just ran into an article regarding a Roy Spencer, et al August 2007 publication claiming that satellite observation of tropical cirrus clouds called into question the manner in which climate models treated them. Using Google Scholar search I tried to find some discussion or follow up study on their claims, but found nothing. I did, however, find the following in the Wikipedia entry under Roy Spencer:

    “In August, 2007, Spencer published an article in Geophysical Research Letters calling into question a key component of global warming theory which may change the way climate models are programmed to run. [2] Global warming theory predicts a number of positive feedbacks which will accelerate the warming. One of the proposed feedbacks is an increase in high-level, heat trapping clouds. Spencer’s observations in the tropics actually found a strong negative feedback. This observation was unexpected and gives support to Richard Lindzen’s “infrared iris” hypothesis of climate stabilization. “To give an idea of how strong this enhanced cooling mechanism is, if it was operating on global warming, it would reduce estimates of future warming by over 75 percent,” Spencer said. “The big question that no one can answer right now is whether this enhanced cooling mechanism applies to global warming.”

    Is this another example of jumping to conclusions from early observational result or is there really something to this? Why hasn’t it been discussed on Realclimate? Does the Wikipedia entry need some editing?

    Comment by Ted Nation — 31 Mar 2008 @ 4:23 PM

  118. Lynn Vincentnathan says: “In this case even if they were correct and the models failed to predict or match reality (which, acc to this post has not been adequately established, bec we’re still in overlapping data and model confidence intervals)… In this case, the vast preponderance of evidence and theory (such as long established basic physics) is on the side of AGW, so there would have to be a serious paradigm shift based on some new physics, a cooling trend (with increasing GHG levels and decreasing aerosol effect), and that they had failed to detect the extreme increase in solar irradiance to dislodge AGW theory.”

    The problem with this argument is that

    1) There is a lot of physics, and there are many effects, we don’t understand in the climate system. The sheer fact that the models and scientists cannot readily explain the last 10 years is prima facie evidence of this.

    2) If the AGW is less than 2 degrees C per century then the AGW proponents have lost the political argument, because all the damage from GW is supposed to come from this level of heating. Therefore, AGW skeptics really only have to argue that the rate of heating will be less than the 2 degrees.

    3) The trend over the last 30 years of heating is about 0.33 degrees per 30 years (remember, prior to that trend the earth was cooling for 30 years). At that rate the next 90 years will see about 1 degree of heating, unless we get another acceleration of heating like the 1998 El Nino repeatedly occurring.

    4) The forcing value for CO2 is still highly unknown and subject to wide variation. The value for all the other forcings are all computed by the “inverse method”. This means that the models are fitted to the data. Therefore using past data to compare with the models is self-congratulatory and circular. The only thing that matters from a modeling perspective and a science perspective is what has happened since the models predicted the future. The score there is very bad for the models. They have failed to predict completely the recent 10 years of climate.

    5) The more accurate we make the models of climate for past data the more stringent it puts error bars around the current predictions. The fact that AGW enthusiasts keep touting the accuracy of their models actually works against you. The models and the current data appear to be so out of whack now that there is only a 5% probability that AGW is correct.

    [Response: This is garbage. Both in how you describe how modelling is done, and in your assessment of its skill. Short period comparisons are bogus because of the huge influence of short term weather events. ’10 years of climate’ doesn’t even make sense. And for longer term tests, the models do fine (see here for instance). – gavin]

    Comment by John Mathon — 7 Apr 2008 @ 1:53 PM

  119. John Mathon, I can see that you didn’t bother to read the post above before posting.
    1)Climate and Weather are different. Climate is long term. Weather is anything on a scale of a couple of decades or less.
    2)Look at the papers by Hansen et al. There is probably a lot of warming “in the pipe” that has not happened yet. We are a long way from a new equilibrium, and CO2 already in the air will keep warming things until we reach one.
    3)See 2 above.
    4)Not true. Climate sensitivity to CO2 is well established to be around 3 degrees per doubling. And most of the uncertainty (hence most of the risk) is on the high side.
    5)Read the article. It deals with the errors the authors made in calculating confidence intervals.

    There is plenty of real science here if you are interested. Or you can stay ignorant. Here are the pearls. Decide what you are.

    Comment by Ray Ladbury — 7 Apr 2008 @ 2:06 PM

  120. Re #118 Lynn is correct that a new paradigm is needed, but new paradigms are fiercely resisted. See Gavin’s response to such a suggestion.

    As Gavin says, the models do fine with the present paradigm. But that paradigm says that the greenhouse effect can be equated to solar forcing at the top of the atmosphere. Then when a volcanic eruption alters that solar forcing and gives results that match the models, it is claimed that the models have reproduced greenhouse forcing. But they have not. They have reproduced solar forcing. It has not been proved that greenhouse and solar forcing are equivalent.

    And in fact they are not! The results from the MSUs and radiosondes have shown that. Solar radiation produces diurnal forcing, but “fixed” greenhouse gases produce decadal forcing. That is why there is still a question mark over the tropical lapse rate problem.

    The optically thick greenhouse bands are saturated by definition. So the greenhouse effect does not work through Arrhenius’ scheme of radiation being blocked, as pointed out by Karl Angstrom. It operates by Tyndall’s scheme of the air near the surface being warmed by absorption. Fourier was describing Saussure’s hot box, not the glass of an Arrhenius hot house!

    It is the CO2 adjacent to the ice that absorbs most of the radiation, which warms the air most, and that melts the ice.

    Cheers, Alastair.

    Comment by Alastair McDonald — 7 Apr 2008 @ 8:13 PM

  121. Perhaps I think of things in too simplistic a way, but if the major cause of feedback is water vapor, and CO2 is much more important than solar radiation in determining feedback, then on a nice bright summer’s day a bowl of water sitting directly in the sun should evaporate at approximately the same rate as a bowl of water placed in the shade. Has anyone already measured evaporation rates in such a manner?

    Comment by steven caskey — 9 Apr 2008 @ 5:46 AM

  122. “and for longer term tests the models do just fine”: I guess I don’t see this. The 1991 IPCC “best guess” prediction is 0.2C off in just 15 years; extrapolated over a century that is more than 1C off, and it was using a climate sensitivity of 2.5. The current “best guess” uses a climate sensitivity of 3 and is likely to be even further off in the long run than the 1991 “best guess”, especially since it now has several years of temperature catching up to do.

    [Response: Any projection is a function of projected emissions and a climate model. The difference between the 1990 estimate you are referring to and later ones was the emissions projection, not the model (since, as you note, best estimate climate sensitivity has increased slightly). That’s why we use multiple scenarios, and why the 1988 projections from Hansen’s paper have stood up so well. And I think I’ve mentioned on numerous occasions the folly of looking at short period trends in a noisy system….. – gavin]

    Comment by steven caskey — 9 Apr 2008 @ 7:13 AM

  123. Steven Caskey, your way of thinking about the matter is not simplistic, but wrong. It is climate CHANGE. Of course the Sun is still the dominant source of energy coming into the climate system, but it is not CHANGING very much. What is changing is how much IR radiation greenhouse gases allow to escape from the atmosphere. Look at it this way:
    You have a bathtub with the water tap on and the drain open. The flow is such that the water level is constant in the tub: water in from the tap=water out the drain. Now block off half the drain. The source and flow of the water is still the same (the tap), but now less is escaping, so pretty soon, somebody will have a mess to clean up. We’re trying to avoid that mess.

    Comment by Ray Ladbury — 9 Apr 2008 @ 10:31 AM
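Ray’s bathtub analogy lends itself to a few lines of code. The sketch below uses hypothetical units and rates, chosen purely for illustration: a constant inflow from the tap and an outflow proportional to the water level. Halving the drain coefficient does not make the level rise forever; it rises to a new, higher steady state, which is the analogy’s point about partially blocking the escape of IR.

```python
# Toy bathtub balance: inflow is constant, outflow is proportional to the
# water level. All numbers are made up for illustration.

def equilibrium_level(inflow, drain_coeff, dt=0.1, steps=5000):
    """Integrate d(level)/dt = inflow - drain_coeff * level until it settles."""
    level = 0.0
    for _ in range(steps):
        level += dt * (inflow - drain_coeff * level)
    return level

open_drain = equilibrium_level(inflow=1.0, drain_coeff=0.5)     # settles at 2.0
half_blocked = equilibrium_level(inflow=1.0, drain_coeff=0.25)  # settles at 4.0
```

Same tap, half the drain: the equilibrium level doubles (inflow/drain_coeff goes from 2 to 4), analogous to a reduced IR escape raising the equilibrium temperature rather than heating without bound.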

  124. Well, now I have a real conundrum, because if I take some of the arguments for feedback and thermodynamic principles as explained by some of those who support the AGW theory, and I apply them to the solar forcing we know happened, I find it very easy to believe CO2 has very little influence at all.

    The oceans are hiding the real effect of global warming and the full impact won’t be felt until later. So if the oceans are supposed to be hiding the impact of AGW, couldn’t they also have hidden the effects of solar warming, and couldn’t the divergence between solar cycles and temperatures in about 1980 be a residual effect? Also, there are indications that the oceans may actually be cooling now.

    Another musing I found was that there could be long-term effects of AGW that would add another 3C to the climate sensitivity of CO2. These would include things such as trees growing further north, less snow and ice to reflect sunlight, and the increased release of methane from frozen tundra. Now if there is in fact a long-term feedback from global warming, and it does make sense at face value, and you applied that to the solar-induced warming of the early 20th century… how much temperature change is left to explain?

    Please tell me which arguments by those supporting AGW I should ignore as invalid, or why there should be a different reaction in feedback mechanisms between solar and CO2 forcing. Thanks!

    [Response: There’s no difference in most feedback effects between solar and CO2 (the differences there are mostly refer to stratospheric changes and their consequences). In a warming situation from either cause, ocean heating will slow the response simply due to its heat capacity, but in neither case does it ‘hide’ the response. There are long-term feedbacks as well (ice sheets, vegetation shifts), but these have not changed significantly (for your purposes) over the last century and so cannot explain current trends. Whatever the sensitivity is, you still have solar forcing that is (at best) 5 times smaller than the GHG forcing and which is still insufficient to explain current trends – even assuming that the long term sensitivity was valid on multi-decadal time scales (which it isn’t). – gavin]

    Comment by steven caskey — 22 Apr 2008 @ 11:37 AM

  125. Steven Caskey, Remember, we are looking at CHANGES in the forcers. The CHANGE in insolation is tiny, so unless you can figure out a feedback that operates on solar radiation and NOT on CO2, that’s a nonstarter. It’s not a matter of ignoring arguments. It’s knowing relative magnitudes and the basic science. Look at the START HERE section and commence your education.

    Comment by Ray Ladbury — 22 Apr 2008 @ 1:56 PM

  126. Dear Mr Ladbury, perhaps you didn’t read what I said, but in the world I live in it is important how much solar radiation is absorbed by the earth, not just how much is being produced by the sun. So unless you can show me a study that eliminates the loss of ice, the decrease in the amount of snow, and the growing of trees further north as factors in the amount of radiation actually absorbed by the earth; or you can show that it wasn’t solar radiation that was the primary driver for at least the first half of the 20th century; or you can show me that these things did not happen in the first half of the 20th century; or you can show that the effect of a driving force should not be doubled in the long term because of these factors, as proposed by some of those who support the AGW hypothesis; then to dismiss me as ignorant is rather arrogant on your part, is it not?

    Comment by steven caskey — 26 Apr 2008 @ 6:59 AM

  127. Steve, try the ‘Start Here’ link at the top of the page, and the first link under Science at the right side.

    Most of us here are readers — we’re not servants or waiters to whom you can address your demands for an education.

    Horse, water, drink.

    —————–
    On the topic, this may be helpful

    http://www.agu.org/pubs/crossref/2008/2008GL033454.shtml

    Comment by Hank Roberts — 26 Apr 2008 @ 11:28 AM

  128. Sorry for the tone of my last response; I just find it a bit frustrating to discuss an issue with people who seem to have such closed minds. Now, if I had said Hansen says that the climate sensitivity of CO2 is 3C in the short term but you have to add another 3C to the climate sensitivity for the long-term effects, everyone would be nodding in agreement. So I would contend that either this is not correct, or you would have to apply the same effects to solar forcing. Is this such an uneducated statement that I should be dismissed? This isn’t my field and I readily admit it, but that doesn’t mean I can’t use some basic logic. As far as my comments about the oceans hiding the heat: I could have said thermal inertia and gone on to explain, but I guess I didn’t realize that this wasn’t just a blog for us ignorant people.

    Comment by steven caskey — 27 Apr 2008 @ 7:46 AM

  129. Try citing your sources for your beliefs. It really does help when you say why you believe something, where you read it, and why you trust the source.

    Eric Raymond’s article on how to ask questions the smart way on the Internet is addressed to computer questions, but it is quite good general advice for newcomers who want to attract helpful responses.

    Comment by Hank Roberts — 27 Apr 2008 @ 12:56 PM

  130. steven caskey (128) wrote “… if I had said Hansen says that the climate sensitivity of co2 is 3C in the short term but you have to add another 3C to the climate sensitivity for the long term effects everyone would be nodding in agreement.”

    (1) Read Hansen et al. more carefully. I think you will find that the hypothesis relates to the total radiative forcing, assumed in the future to be dominated by CO2.

    (2) It is certainly the case that this remains a speculative hypothesis, but one with some logic for it. Not everybody agrees.

    Comment by David B. Benson — 27 Apr 2008 @ 2:06 PM

  131. Hank Roberts Says:
    27 April 2008 at 12:56 PM
    Try citing your sources for your beliefs. It really does help when you say why you believe something, where you read it, and why you trust the source.

    I’m not sure what I believe nor which sources to trust; if I were, this would not be a page I’d be interested in, since I would already have all the answers. As an example, I was recently pointed to a study conducted by Harries on the change in greenhouse gases’ effects on radiation over time. The change in the CO2 part of the spectrum was very small, although statistically significant, but the change in the methane area of the spectrum was much more pronounced. From my untrained perspective this would seem to indicate a far greater contribution to the greenhouse effect from methane than from CO2, and yet it was used as evidence that CO2 was the significant driver. Then there are controversies regarding how the troposphere should be reacting, and, if that isn’t bad enough, there are also controversies over how the troposphere is reacting. Then you have controversies over how important sea ice and snowfall are as far as an albedo effect. The controversies go on and on, and yet the science is being declared by so many as settled, to the point where we are trying to completely change our economy and scaring our kids with dire predictions of the future. Am I concerned that there may be actual consequences of adding so much CO2 to the atmosphere? Of course I am; too many people believe this to dismiss it as mythical. But I am also concerned that we are raising false alarms that will tarnish the credibility of climatologists, to the point where, when we do have a better grasp of the science and there is an actual climate emergency on the horizon, people will merely say, sure, oh yes, another emergency, haha. If solar cycle 24 turns out to be a weak cycle and this actually causes the temperature to go down, then how will we ever convince people that we need to prepare for solar cycle 25, which is predicted by NASA to be incredibly weak? Are we that sure of the driving forces that we can ignore these possibilities?
    Thank you all for letting me post on your page, and thank you, gavin, for your responses; they were much appreciated and allowed me to find my mistake in my interpretation of the IPCC predictions.

    Comment by steven caskey — 29 Apr 2008 @ 9:05 AM

  132. > I was recently pointed to a study …
    > Then there are controversies …
    > Then you have controversies …
    > the science is being declared by so many as settled …
    > we are trying to completely change our economy …
    > If solar cycle 24 turns out to be a weak cycle
    > and this actually causes the temperature to go down …
    > solar cycle 25 that is predicted by NASA to be incredibly weak …

    See, if you don’t have a source for what you believe, it looks like these are things you read on a blog somewhere.

    Or you’re playing the climate change bingo game and have a winner.

    When someone makes a claim about some published science, ask:

    Did you get a cite to the study?
    Where did you learn this?
    What did you look up?

    Because people make up all sorts of stuff, misunderstand or misinterpret what they read, or tell only part of a story to emphasize a talking or arguing or PR point they want to make.

    Watch out even if you get a source for the PR sites that identify themselves as providing “advocacy science” instead of peer reviewed science.

    Comment by Hank Roberts — 29 Apr 2008 @ 9:55 AM

  133. Steven, here’s what I find looking for your “Harries” reference with the information you provide. Can you clarify what you read?

    http://scholar.google.com/scholar?as_q=+co2+methane+climate&num=100&btnG=Search+Scholar&as_epq=&as_oq=&as_eq=&as_occt=any&as_sauthors=Harries&as_publication=&as_ylo=&as_yhi=&as_allsubj=all&hl=en&lr=&newwindow=1&safe=off

    Comment by Hank Roberts — 29 Apr 2008 @ 10:12 AM

  134. Steve Caskey, This is the reason why scientists study for ~10 years to get a PhD, and then work for about 5 years as a postdoc and then publish for a couple of decades before they really become influential in a field. It really does take that long to understand the relative importance of different effects, which researchers are credible, etc. As a layman, your best bet is to look at peer-reviewed literature that has been accepted by the experts. Realclimate is an invaluable resource in this regard.

    Look at relative magnitudes of different effects. Look at how long they persist. Anthropogenic CO2 is and remains the 400 pound gorilla even if we have a dip in solar activity.

    Comment by Ray Ladbury — 29 Apr 2008 @ 10:25 AM

  135. Yes, I understand the importance of a study being peer reviewed, just as in my profession, where they also refer to me as doctor. So I very seldom bother to go to blogs, and when I do I am more interested in what their references are than in what they have to say. I am referring to such things as the recent study of the temperature of the troposphere, which was peer reviewed and published in the Dec 2007 Royal Meteorological journal by Christy and others. There is also a peer-reviewed paper by a Hungarian scientist, whose name escapes me now, who worked out that the possible climate change due to CO2 in a finite atmosphere was considerably less than being projected. As for the work by Harries, I don’t recall if that was peer reviewed or not, and I have to head to work now, but I do recall the significant difference in the radiation windows between CO2 and methane and the comparison of the graphs of the change in radiation in these windows from ~1970 to 2003. I will try to find time to take a closer look at it later.

    [Response: The Hungarian study you are talking about is by Miskolczi. It appeared in an obscure Hungarian weather journal, and having looked at it myself, the standards of peer review for that journal can’t be very good. You have to look at the journal and its standards in evaluating work. In this case, several of us had a look at the paper, and it’s clear that the author made serious and elementary errors concerning application of Kirchhoff’s law and the virial theorem. This paper isn’t important enough to address in a peer-reviewed comment, but I have some Bowdoin undergrads working on a write-up of the problems in the paper, and that will ultimately be posted on RC when they’re done. As for the Christy study, that’s more or less a broad-brush review of temperature trends gotten by various groups. Just what is it that you see in that study that would cause you to doubt the seriousness of AGW as a problem? It’s still true that nobody can get these temperature trends from a physical model that leaves out the influence of CO2, and it’s still true that the trends are compatible with predictions of models that have equilibrium climate sensitivities from 1.5C to 4C. The data does not in any way support or demand low sensitivity to CO2. It doesn’t prove high sensitivity, either, which is why we are stuck making policy in the face of uncertainty. –raypierre]

    Comment by steven caskey — 29 Apr 2008 @ 11:59 AM

  136. There seems to be no doubt that CO2 is a GHG and affects climate as such; the discussion seems to be centered around the climate sensitivity and the degree of this influence. The study by Christy using the raw data would indicate a low sensitivity; however, as I have read before and refreshed my memory on today on your pages, the margin of error of the data could mean the sensitivity is much higher. What this study does do, from my perspective, is show that our ability to measure such things appears to be insufficient to draw firm conclusions one way or the other. Thank you for reminding me of Miskolczi; that is in fact the study I was referring to, and I will make it a point to read your critique of his work when it comes out.

    See, if you don’t have a source for what you believe, it looks like these are things you read on a blog somewhere

    The comments on solar cycles 24 and 25 were based on information I read about NASA’s predictions on their home page. There appears to be about a 50/50 split on what magnitude solar cycle 24 will be, while the prediction of a very weak solar cycle 25 was made by Hathaway at NASA, and there are similar predictions about solar cycle 25 made by other scientists, Russian solar physicists whose names I can research if you are interested. I must admit that as I read I pay too little attention to names, since I am not familiar enough with the personalities to draw any conclusions from who does the work.

    [Response: I am not aware of any study by Christy that would support a climate sensitivity appreciably lower than what is given in the IPCC range. Could you be more precise about just what kind of result or argument you are quoting? Steve Schwartz did have a paper in JGR which claimed the data supported a low climate sensitivity, but as Gavin pointed out in his RC post on that paper, Schwartz’s analysis was based on invalid methods. That critique will have to work its way into a regular journal article someday, but meanwhile, you can read the reasoning here on RC. We do indeed have a range of possible climate sensitivities. The 20th/21st century data does not strongly constrain which sensitivity is the right one, though study of the Eocene climate and the ice-age climate would tend to argue against the lowest end of the IPCC range, albeit not definitively at this point. What is relevant for policymakers, though, is that nothing we know at present rules out the high end of the IPCC range, or even beyond that. That is important because big damages come at the high end, so they figure importantly in the expected damage, even if they have low (or unquantified) probability. –raypierre]

    Comment by steven caskey — 29 Apr 2008 @ 7:04 PM

  137. Raypierre, your comment raises an important point: The risk cannot be bounded at a reasonable level because of the thick high-side tail on the probability distribution of sensitivities. It would seem to me that anything we could do to reduce uncertainty on the high side would pay serious dividends on the policy side. If we can rule out such high sensitivities, and if unforeseen feedbacks (e.g. outgassing of CO2 by melting permafrost, the oceans, etc.) are not as severe as feared, we might be better able to develop coherent mitigations. These are big ifs, but without progress on this front, the mitigation problem becomes a bit of a Gordian knot.

    Comment by Ray Ladbury — 29 Apr 2008 @ 8:03 PM
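Ray’s point about risk (cost times probability, dominated by the high-side tail) can be sketched numerically. Everything below is an assumption chosen only for illustration: the lognormal sensitivity distribution and the exponential damage function are stand-ins, not published estimates.

```python
import math
import random

random.seed(0)

# Assumed right-skewed sensitivity distribution (C per doubling of CO2),
# median near 3.0 with a thick right tail -- illustrative only.
samples = [random.lognormvariate(math.log(3.0), 0.35) for _ in range(100_000)]

def damage(s):
    """Hypothetical cost rising steeply with sensitivity."""
    return math.exp(s)

total_damage = sum(damage(s) for s in samples)
tail_damage = sum(damage(s) for s in samples if s > 4.5)

p_tail = sum(1 for s in samples if s > 4.5) / len(samples)  # probability share
d_tail = tail_damage / total_damage                          # damage share

# The tail above 4.5 holds a modest share of the probability but a much
# larger share of the expected damage.
```

Because the damage function is increasing in sensitivity, the tail’s share of expected damage always exceeds its share of probability; that is the sense in which the high side dominates the risk calculation even at low probability.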

  138. > , if you don’t have a source for what you believe,
    And if you don’t have a publication record in the science about which you’re commenting!
    > it looks like these are things you read on a
    > blog somewhere

    A dentist opining about dentistry, and a climatologist opining about climatology, have some trust established in their own areas of knowledge. Readers will expect they’ve got a basis for opinions, in their fields. Trust goes both ways between writers and readers.

    Comment by Hank Roberts — 29 Apr 2008 @ 8:14 PM

  139. Trust goes both ways between writers and readers

    I have no doubt that scientists on both sides of this issue have full faith in what they say and in their interpretation of the results. There may be some exceptions, of course, but I believe the vast majority of the people involved are both serious and convinced in their beliefs.

    The Christy report, which stated that the troposphere was not warming as fast as expected in AGW models, is where I got the interpretation of lower climate sensitivity, although it is certainly possible that I read someone else’s interpretation of it and that may have led me in that direction. I did read the response on this web site, and I believe I may have read it before, about the same time I read the Christy paper; I noted the main complaint being the margin of error in the temperature readings and the type of data set they chose. Is it not a logical assumption that if the troposphere is not warming faster than the surface, or is warming faster but by an amount less than that predicted by models, then this would be a reflection on climate sensitivity? This is not meant as a rhetorical question, either; I am open-minded to being pointed in the right direction should I have taken a wrong turn.

    It would seem to me that anything we could do to reduce uncertainty on the high side would pay serious dividends on the policy side

    I think reducing uncertainties is an excellent idea

    Comment by steven caskey — 29 Apr 2008 @ 10:49 PM

  140. > both sides
    > full faith in what they say and their interpretation
    I think that’s another mistake

    I don’t think science works by taking sides, nor by faith, nor by certainty. PR, however, certainly does. It’s easy to think you’re reading scientific work and find you’re actually reading a political or business PR site instead.

    Try this — pasting words taken directly from your statement above into Google, like these:

    http://www.google.com/search?num=100&hl=en&newwindow=1&safe=off&client=firefox-a&rls=org.mozilla%3Aen-US%3Aofficial&hs=NZO&q=if+the+troposphere+is+not+warming+faster+then+the+surface+or+is+warming+faster+but+by+an+amount+less+then+those+predicted+by+models+that+this+would+be+a+reflection+on+climate+sensitivity%3F&btnG=Search

    Look at the hits that come up in the first few pages.

    The top couple are from Wikipedia. Of the rest, some are from science sites; some from PR or “science advocacy” sites.

    Compare what they’re saying.

    I think you’ll agree you see claims there are “two sides” and “faith” and “certainty” — but not from the science sites. Those are the opinion/PR/argument words.

    Then try the same exercise but in Google Scholar, for a contrast.

    Comment by Hank Roberts — 29 Apr 2008 @ 11:28 PM

  141. I don’t think science works by taking sides, nor by faith, nor by certainty. PR, however, certainly does. It’s easy to think you’re reading scientific work and find you’re actually reading a political or business PR site instead.

    I’m sure I’ve read plenty of opinion pieces also. But if I believed everything I read, I would be talking about big oil or grants for studies, depending on whose side I was taking. I find my opinion to be that of a small minority, and take that as some anecdotal evidence that I am not unduly influenced by others’ opinions. Also note that my answer was in response to your sentence involving the word trust, and to me the words trust, faith, and belief all involve the same basic idea and would be difficult to answer without their use.

    Comment by steven caskey — 30 Apr 2008 @ 5:25 AM

  142. As far as reducing uncertainties goes, I will note that I did not say for PR reasons, although that was the context it was brought up in. To me this means another look at the possibilities and a closer look at the probable consequences. A better understanding of the science would allow such a reduction in uncertainties, would it not? I doubt that anyone would allow such a reduction if the data did not support it.

    Comment by steven caskey — 30 Apr 2008 @ 6:14 AM

  143. I would agree that there shouldn’t be sides, and that all the scientists in the field should be working together to iron out their differences and prove to each other that their interpretation of the data is the correct one. But to say this is happening, and that there aren’t in fact sides to this issue, is to ignore reality for the sake of argument.

    Comment by steven caskey — 30 Apr 2008 @ 6:33 AM

  144. Steven Caskey, A scientist should do his speaking on scientific matters in peer-reviewed journals and at conferences along with other experts. Any other forum is ex cathedra. Since denialists have utterly failed to publish any credible theories for the current warming epoch, their opposition cannot be considered scientific. There is only one side to the science here.
    In the interactions of scientists with politicians and the public, the objective should be education–the public needs to understand the likely consequences of the science and the possibilities for mitigation.
    Outside of these venues, the scientist is a private citizen with preferences for one strategy or another, but with no special authority.

    There simply is no credible science that challenges anthropogenic causation of the current warming epoch. We know also that warming will have adverse consequences, that the climate system has positive feedbacks that could take the situation completely out of our control and that there are still considerable uncertainties in where these “tipping points” lie. Most of the uncertainty is on the high side of the risk equation. To date the scientists have been quite conservative.

    Comment by Ray Ladbury — 30 Apr 2008 @ 7:53 AM

  145. Since denialists have utterly failed to publish any credible theories for the current warming epoch, their opposition cannot be considered scientific

    I would contend that in order to deny a theory/hypothesis one would not have to replace it with a different hypothesis/theory, but rather merely prove the current theory/hypothesis flawed. I am not saying this has been done, and I’m not saying it can be done; I am merely disagreeing with the level of responsibility to which those who disagree should be held. I do agree peer-reviewed papers are the best way to discuss an issue as far as formal results go, but on a personal note I would be distressed if differences in my field led to such division as there appears to be in yours.

    Comment by steven caskey — 30 Apr 2008 @ 9:30 AM

  146. Steven, can you put quotation marks around words that you’ve copied and pasted? I find I can’t tell if you’re echoing other people’s words in order to comment, or because you agree. Showing direct quotes, and cites, are very helpful tools for making clear whose words you’re using.

    Comment by Hank Roberts — 30 Apr 2008 @ 9:38 AM

  147. Steven Caskey, Scientific evidence is best judged in terms of how well a theory fits the observations. A theory may be incomplete and do a less than stellar job of fitting the data and still be correct. It is much easier to use comparative measures (e.g. likelihood ratio, AIC, BIC, DIC) to judge the goodness of a theory.
    A theory is only rarely disproved by finding a single piece of data so wildly at odds with the theory that the advocates of the theory just throw up their hands. Rather, the theory will be modified somewhat to account for the discrepant data. If this results in a more complicated theory, the various comparative metrics need to be looked at for the modified theory vs. all others.
    The hypothesis that CO2 produced by human activity is largely responsible for the current warming epoch does not currently have any credible alternative. Most of the other ideas don’t even merit the term hypothesis, as they lack detailed physical mechanisms.

    Comment by Ray Ladbury — 30 Apr 2008 @ 10:11 AM
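One of the comparative measures Ray names, AIC, is easy to show in miniature. The log-likelihoods and parameter counts below are hypothetical fits, chosen only to illustrate the trade-off between goodness of fit and model complexity.

```python
def aic(log_likelihood, k):
    """Akaike information criterion: 2k - 2*ln(L). Lower is better."""
    return 2 * k - 2 * log_likelihood

# Hypothetical: the complex model fits the data slightly better (higher
# likelihood) but needs three extra free parameters.
aic_simple = aic(log_likelihood=-120.0, k=2)   # 2*2 + 240 = 244
aic_complex = aic(log_likelihood=-119.5, k=5)  # 2*5 + 239 = 249

# The simpler model is preferred: its small loss of fit does not justify
# the extra parameters.
```

This is the sense in which a modified, more complicated theory has to pay for its extra parameters when compared against alternatives.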

  148. Hank Roberts Says:
    30 April 2008 at 9:38 AM
    Steven, can you put quotation marks around words that you’ve copied and pasted? I find I can’t tell if you’re echoing other people’s words in order to comment, or because you agree.

    certainly I will sorry about that

    Comment by steven caskey — 30 Apr 2008 @ 10:16 AM

  149. Ray Ladbury Says:
    30 April 2008 at 10:11 AM A theory is only rarely disproved by finding a single piece of data so wildly at odds with the theory that the advocates of the theory just throw up their hands. Rather, the theory will be modified somewhat to account for the theory

    I would add to this that it would be especially difficult when the actual theory isn’t disputed, but rather the magnitude of the positive and negative feedbacks, in a very complex environment where it isn’t possible to keep all other factors constant in order to test the outcome of changing one variable. It is the modifications to the theory that are in dispute, and there is actually less difference between those who believe the lower range of the IPCC predictions, at ~1.5 climate sensitivity, and the people termed deniers, at ~1.0 climate sensitivity, than there is between those who believe the lower end of the IPCC predictions at ~1.5 and those who believe the upper end at ~4.5. What that tells me is there is a serious need to reduce uncertainties.

    Comment by steven caskey — 30 Apr 2008 @ 11:53 AM

  150. Steven, you have to look at whether the data support a particular sensitivity. A sensitivity of 1 or even 1.5 causes way more problems for many different datasets than does a sensitivity of 3 or even 3.5. The probability distribution on sensitivity is very asymmetric: 1.0 is extremely improbable given the data, while 4.5 cannot be ruled out. Almost all of the uncertainty is on the high side.

    Comment by Ray Ladbury — 30 Apr 2008 @ 2:10 PM

  151. Ray Ladbury Says:
    30 April 2008 at 2:10 PM
    Steven, you have to look at whether the data support a particular sensitivity. A sensitivity of 1 or even 1.5 causes way more problems for many different datasets than does a sensitivity of 3 or even 3.5. The probability distribution on sensitivity is very asymmetric: 1.0 is extremely improbable given the data, while 4.5 cannot be ruled out. Almost all of the uncertainty is on the high side.

    It isn’t as important what direction the sensitivity goes as it is the range the sensitivity can be accurately predicted to, at least from a scientific standpoint; the narrower the range, the more precise the knowledge. My point was that the range is so wide that currently the very people termed deniers could be closer to the truth, if the truth turned out to be in the low end of the range, than those who are predicting the high end of the range. If the data does not support the low end of the range, then perhaps a good starting point would be to support raising the low end of the range.

    Comment by steven caskey — 30 Apr 2008 @ 5:19 PM

  152. Steven, you are misunderstanding the situation: The probability to the left of 1.5 is near zero. The probability to the right of 4.5 is not negligible. This is extremely important both from the point of view of science and risk assessment. Look up “thick-tailed distribution” and the term skew as it applies to probability.

    Comment by Ray Ladbury — 30 Apr 2008 @ 6:24 PM
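The asymmetry Ray describes can be made concrete with a toy skewed distribution. The lognormal below, with its median pinned near 3C per doubling, is an illustrative assumption, not a fitted estimate; it just reproduces the qualitative shape: almost no probability mass below 1.5, a small but non-negligible tail above 4.5.

```python
import math
import random

random.seed(42)

# Assumed right-skewed sensitivity distribution (C per doubling),
# median near 3.0 -- parameters chosen only for the sketch.
sens = [random.lognormvariate(math.log(3.0), 0.3) for _ in range(200_000)]

p_below_1p5 = sum(1 for s in sens if s < 1.5) / len(sens)
p_above_4p5 = sum(1 for s in sens if s > 4.5) / len(sens)

# p_below_1p5 comes out close to zero, while p_above_4p5 is several
# times larger: the thick right tail.
```

A symmetric bell curve centered at 3 would put equal mass on both sides; the skewed shape is why "the probability to the left of 1.5 is near zero" while "the probability to the right of 4.5 is not negligible."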

  153. Ray Ladbury Says:
    30 April 2008 at 6:24 PM
    Steven, you are misunderstanding the situation: The probability to the left of 1.5 is near zero. The probability to the right of 4.5 is not negligible. This is extremely important both from the point of view of science and risk assessment. Look up “thick-tailed distribution” and the term skew as it applies to probability.

    No, I understand exactly what you are saying, I think. Not that I know the exact statistics, but let’s say you come up with a 1% chance of 1.5 but a 15% chance of 4.5… wouldn’t this be what you mean?

    Comment by steven caskey — 30 Apr 2008 @ 6:44 PM

  154. let me rephrase that. if I know what a skewed bell curve looks like then that is what I would be looking at when viewing the climate sensitivity probabilities, correct?

    Comment by steven caskey — 30 Apr 2008 @ 9:12 PM

  155. http://tamino.wordpress.com/2007/10/27/uncertain-sensitivity/
    http://tamino.files.wordpress.com/2007/10/probs.jpg
    (see the discussion, this is just one image from the thread)

    Comment by Hank Roberts — 30 Apr 2008 @ 11:35 PM

  156. Regarding, “Now the claim has been greatly restricted in scope and concerns only the tropics, and the rate of warming in the troposphere (rather than the fact of warming itself, which is now undisputed).”

    Correct, the rate of warming is not in dispute: it’s decidedly negative and will likely continue that way for some time to come! The warming trend observed in the 1990s has more to do with fewer aerosols in the atmosphere and a more active sun. This is a better explanation than CO2-driven climate change for the following observed phenomena: a cooler stratosphere and warmer surface temperatures (particularly areas over land in the northern hemisphere, such as across Asia, where the former Soviet Union dissolved and China has modernized). What happens to the impact of CO2 in the climate models if the assumed amount of aerosols is cut in half, or more? Plus, throw in some minimal mechanism for a more active sun. How much of the CO2-driven temperature rise is eliminated once these assumptions are made? It’s my understanding that the role of CO2 has increased over the years primarily to account for more aerosols assumed in the models. Why? Just to prop up CO2? In fact, I have seen two published articles (posted on Atmoz and WUWT) that suggest the opposite: the air is cleaner today than 30 years ago. Yet the climate models assume otherwise. Go figure.

    [Response: Please do. The impact of CO2 is independent of any other forcing. -gavin]

    Comment by Chris — 1 May 2008 @ 12:27 AM

  157. Hank Roberts Says:
    30 April 2008 at 11:35 PM
    http://tamino.wordpress.com/2007/10/27/uncertain-sensitivity/
    http://tamino.files.wordpress.com/2007/10/probs.jpg
    (see the discussion, this is just one image from the thread)

    thank you, the skew was larger than I had anticipated, as was the tail, but the basic shape I did recall correctly

    Comment by steven caskey — 1 May 2008 @ 6:44 AM

  158. Hank Roberts Says:
    30 April 2008 at 11:35 PM
    http://tamino.wordpress.com/2007/10/27/uncertain-sensitivity/
    http://tamino.files.wordpress.com/2007/10/probs.jpg
    (see the discussion, this is just one image from the thread)

    I am still a bit confused by where the graphs came from. from what I read it seems they were produced by the person who runs the open mind blog? I tried looking for a similar chart in the ipcc report but was unable to find one. it may be there and I will attempt another look later.

    Comment by steven caskey — 1 May 2008 @ 7:18 AM

  159. Chris,
    You, like so many other skeptics, have fallen victim to what I call the Chinese menu fallacy: you assume that if you can just find other causes for warming that aren’t in the current models, the whole nasty business with CO2 being a greenhouse gas will go away. It won’t. Greenhouse forcing is not an adjustable parameter–it is fixed, and pretty narrowly, by independent data. So are the other forcings–with varying success. The parameters that are poorly fixed are aerosols and clouds. If you find a new forcer, that’s where the give can occur in the models. No one is propping anything up. They are merely doing what the science tells them to do.

    Comment by Ray Ladbury — 1 May 2008 @ 8:25 AM

  160. Steven, Given the shape of the curve, do you see why I am saying that the climate studies (which take ~3 deg/doubling) have been conservative? The cost of climate change to civilization probably rises exponentially with increasing temperature, so in reality, the risk (cost times probability) is dominated by that thick tail on the right side. The evidence says the denialists are almost certainly wrong, and it cannot rule out the scenarios of the alarmists like Lovelock and Hansen. In this sense, “alarmist” is not a pejorative. If sensitivity is 6 degrees per doubling, alarm is the only appropriate reaction.
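Ray’s “risk = cost times probability” point can be made concrete with a quick numerical sketch. Every number below is an illustrative assumption, not a value from the literature: a lognormal sensitivity distribution with a median of 3 K, a cost that grows exponentially with sensitivity, and a 4.5 K cutoff for the “tail”.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical right-skewed sensitivity distribution (illustrative only):
# lognormal with a median of ~3 K per doubling and a fat upper tail.
sensitivity = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=1_000_000)

# Assume, purely for illustration, that cost rises exponentially with warming.
cost = np.exp(1.5 * sensitivity)

# Risk = probability-weighted cost.  Ask how much of the expected cost
# comes from the upper tail (here, sensitivities above 4.5 K).
tail = sensitivity > 4.5
tail_prob = tail.mean()
tail_share = cost[tail].sum() / cost.sum()

print(f"P(S > 4.5 K)           = {tail_prob:.2%}")
print(f"share of expected cost = {tail_share:.2%}")
```

Even though only about 15% of the probability sits above 4.5 K in this toy distribution, the overwhelming share of the expected cost comes from that tail, which is the sense in which the risk is dominated by the right side of the curve.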

    Comment by Ray Ladbury — 1 May 2008 @ 8:46 AM

  161. Ray Ladbury Says:
    1 May 2008 at 8:46 AM
    Steven, Given the shape of the curve, do you see why I am saying that the climate studies (which take ~3 deg/doubling) have been conservative? The cost of climate change to civilization probably rises exponentially with increasing temperature, so in reality, the risk (cost times probability) is dominated by that thick tail on the right side. The evidence says the denialists are almost certainly wrong, and it cannot rule out the scenarios of the alarmists like Lovelock and Hansen. In this sense, “alarmist” is not a pejorative. If sensitivity is 6 degrees per doubling, alarm is the only appropriate reaction.

    I see what you’re saying according to the graph I looked at, however I have only seen it on a blog so far. but for a minute let’s say the graph is correct as presented, for theoretical purposes: it would greatly favor the higher climate sensitivity ranges, but it would have an incredibly large range for climate sensitivity of nearly 8C. it appears that most of the uncertainties between 1.5C and 2C have either been eliminated or discounted by the person that made the graph, and the next logical step would indeed be to try to eliminate more uncertainties, which would in all probability eliminate the tail of the graph at the high end. of course this is speculation, and in attempting to further refine numbers it could cause shifts in either direction

    Comment by steven caskey — 1 May 2008 @ 10:56 AM

  162. Ray Ladbury Says:
    1 May 2008 at 8:46 AM and it cannot rule out the scenarios of the alarmists like Lovelock and Hansen. In this sense, “alarmist” is not a pejorative. If sensitivity is 6 degrees per doubling, alarm is the only appropriate reaction

    I haven’t read anything about what lovelock has said but I do have a familiarity with what hansen has done, and that is that he has taken a climate sensitivity of 3C to be the immediate effect and then an additional 3C as a long range effect. this would be based on things such as less ice and snow, methane release, and darker vegetation further north. this hypothesis may seem very ominous on its face but actually it may be exactly the opposite, since then one would have to go back and apply the same long range forcings to the increases in solar radiation, which would decrease the amount of unaccounted-for temperature change currently being placed on co2. having just read the response from real climate to his paper and some of the postings there, it was also pointed out to me that some of these long range responses are also limited by their nature and that the long range response would actually be less with time. ie there is a limited amount of ground that could be uncovered and a limited amount of methane that could be released. I don’t have the skill to figure out what exactly applying this hypothesis to earlier solar forcing would result in, but should it ever be done, or if it has been done, I would find it of interest. some of this I said in an earlier post. I apologize in advance for some of this being repetitive.

    Comment by steven caskey — 1 May 2008 @ 12:26 PM

  163. On Miskolczi and Kirchhoff’s law: Kirchhoff’s law means absorptivity and emissivity must be equal *only when considering the same frequency.* Earth’s visible absorptivity is 0.7, but its infrared emissivity is not; 0.7 is just the visible emissivity.

    Comment by Chris Colose — 1 May 2008 @ 2:04 PM

  164. Steven, Climate models are not Chinese menus. The sensitivities of CO2, solar irradiance, etc. are determined by independent data–not by fitting to the temperature rise. If there is an unaccounted for forcer, CO2 forcing, which is tightly constrained, will not change. Less well constrained forcers, such as aerosols, clouds, etc. have some give–not CO2.

    Comment by Ray Ladbury — 1 May 2008 @ 5:25 PM

  165. Ray Ladbury Says:
    1 May 2008 at 5:25 PM
    Steven, Climate models are not Chinese menus. The sensitivities of CO2, solar irradiance, etc. are determined by independent data–not by fitting to the temperature rise. If there is an unaccounted for forcer, CO2 forcing, which is tightly constrained, will not change

    I agree there is a set forcing from co2…I believe it has a range from .8 to 1.2 from what I read, although I may be marginally off. what I am discussing is the feedback mechanisms, which are not set nor well understood, as fully exemplified by the ranges in climate sensitivities. is this not correct, or am I missing something?

    Comment by steven caskey — 1 May 2008 @ 8:37 PM

  166. as an example to further the point: in the 1990 estimate the climate sensitivity was judged most likely to be 2.5C, but this was raised to a 3.0C sensitivity based upon the increase in temperature as opposed to the increase in known forcing. now if the additional warming was due to previous solar long term forcing that hasn’t yet been included in the equation, then the sensitivity may well still be 2.5C as a best bet conclusion. of course I have no idea what the outcome of research into long term solar forcing would conclude, so this is just a possible example, but one I would think worth taking a look at, especially when long term forcing is considered by some to be equal to the short term forcing, and if so it could make a considerable difference

    Comment by steven caskey — 1 May 2008 @ 9:20 PM

  167. Gavin and Ray,

    Is “climate sensitivity” to CO2 an independent variable? The following graph shows aerosols have two effects: one direct and one indirect (via albedo).

    http://en.wikipedia.org/wiki/Image:Radiative-forcings.svg

    I assume that is the case for CO2. Although CO2 sensitivity is not a forcing component in the strict sense, it is one nonetheless via the embedded formulas in the climate models. How can you say CO2 is an independent variable when it has a supposedly indirect effect (via climate sensitivity)? Does not climate sensitivity depend on other variables? Also, you didn’t answer my question: Why do climate models assume there are more aerosols today than the past decades (please see graph below).

    http://en.wikipedia.org/wiki/Image:Climate_Change_Attribution.png

    Further, what are the accuracies of the models if the assumed amount of aerosols are cut in half? I can only assume they wouldn’t be accurate at all. So, one would either conclude that aerosols are “propping” up the role of CO2, or that the models are not accurate at all. You can choose the best description. I contend that my hypothesis (fewer aerosols, more active sun) provides a better explanation for observed results (cooler stratosphere, warmer land surface temperatures) than any climate model. Until you guys provide better results, your sense of credibility appears ill-founded at best.

    Finally, I see two Chris’ on this site of late. I’m Chris N.

    Comment by Chris N — 1 May 2008 @ 11:48 PM

  168. Remember, if you think you might be retyping a FAQ, try typing it into the Search box at the top of the page. You may find you’re right.
    Also the “Start Here” link at the top of the page is handy.

    Comment by Hank Roberts — 2 May 2008 @ 12:19 AM

  169. the paper by scafetta & west, and the reaction to it on real climate, is a great help to what I was saying about the limits on long term climate sensitivity. since you do have to treat the two forcings of solar and co2 in almost exactly the same manner, extreme predictions on long term climate sensitivity are not practical, and by going back and applying these forcings to solar forcing you can limit these feedback mechanisms by comparing to what is currently happening and what has happened in the past, and by doing so hopefully eliminate some of the uncertainties that are creating the long tail at the high end of the possible climate sensitivity ranges

    Comment by steven caskey — 2 May 2008 @ 6:06 AM

  170. Chris N., I’m not sure what you are talking about. Where do you get your information that climate models are assuming more aerosols today than in the past? That is a rather vague accusation that sounds as if it is taken out of context.
    Now, how, pray, does your model account for a cooler stratosphere? And it certainly doesn’t account for the fact that there is more warming in night-time temperatures than day-time temperatures, or any of a number of other trends. It sure makes the problem easier when you only pick a subset of the trends to fit. I think it is your understanding of the models that is ill-founded.

    Comment by Ray Ladbury — 2 May 2008 @ 7:56 AM

  171. Steven, I think you will find that the sensitivity was raised because independent data favored the new value. Raising the sensitivity in response to temperature trends would be ill advised, because you then could not use temperature trends as validation for the models. Sensitivity is not an adjustable parameter in the models. It may vary over time, but only as new data come in to support different sensitivities.

    Comment by Ray Ladbury — 2 May 2008 @ 7:59 AM

  172. Steven (aside: the shift key would really help, if you find it easy to use, both to make quotation marks before and after quoted material, and to make capital letters that help indicate when you start sentences. Paragraphs also help organize thought, as others have mentioned, for those of us with older eyes. If you can’t do that easily I understand, but if you can it’d be a kindness.)

    You wrote: “treat the two forcings of solar and co2 in almost exactly the same manner” — I’m not sure who you’re talking about doing this, the modelers? the politicians?

    — we don’t have control over the sun. Solar input is only changing by about one watt out of thirteen hundred watts per square meter, not a whole lot.

    — we do have control over CO2, which we’re doubling on the shortest time span by far ever in Earth’s history.
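For scale, the two bullet points above can be turned into a rough forcing comparison. This is a back-of-envelope sketch: the ~1 W/m² solar change is the figure from the comment, 0.3 is the usual round number for planetary albedo, and 3.7 W/m² is the standard figure for doubled CO2.

```python
# Rough comparison of the two forcings mentioned above (illustrative numbers).
delta_solar = 1.0   # W/m^2 change in total solar irradiance (from the comment)
albedo = 0.3        # planetary albedo: ~30% of sunlight is reflected
geometry = 0.25     # a sphere intercepts sunlight over 1/4 of its surface area

# Effective radiative forcing implied by the solar change:
solar_forcing = delta_solar * (1 - albedo) * geometry
print(f"solar forcing ~ {solar_forcing:.3f} W/m^2")   # ~0.175 W/m^2

co2_doubling_forcing = 3.7  # W/m^2, standard figure for 2x CO2
print(f"2x CO2 is    ~ {co2_doubling_forcing / solar_forcing:.0f}x larger")
```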

    I’m trying to figure out what point you are trying to make. Can you make it explicitly?

    Comment by Hank Roberts — 2 May 2008 @ 9:18 AM

  173. “we do have control over CO2”

    Imagine a world in which solar output was going up, CO2 concentrations were static, and the atmosphere was warming at an alarming rate. Would fatalism be the order of the day or would someone hit upon the happy idea of reducing CO2 concentrations in the atmosphere as a way of mitigating the effect of the increasing solar energy?

    Comment by Jeffrey Davis — 2 May 2008 @ 10:03 AM

  174. “Hank Roberts Says:
    2 May 2008 at 9:18 AM I’m trying to figure out what point you are trying to make. Can you make it explicitly?”

    I will try.

    [current forcing today / future long term feedbacks] = [forcing from the past/current long term feedbacks]

    The larger you predict the future long term feedback mechanisms to be, the larger the influence the long term feedback mechanisms from the past must be having on the climate today.

    current total forcing = [forcings x immediate feedback] + [past forcings x long term feedback]

    The higher the value of the long term feedback, the smaller the value of the immediate feedback must be.

    I know I have oversimplified to an incredible degree and this is not up for peer review so please don’t be too harsh but I hope it got my frame of thinking a little more clear.

    Comment by steven caskey — 2 May 2008 @ 11:20 AM

  175. Jeffrey Davis:
    Imagine a world in which solar output was going down, CO2 concentrations were up, and the atmosphere was cooling. Would fatalism be the order of the day or would someone hit upon the happy idea of increasing further CO2 concentrations in the atmosphere as a way of mitigating the effect of the decreasing solar energy?

    [Response: You’d be much better off with SF6 or some of the HFCs – cheap, inert and with GWPs many times that of CO2. – gavin]

    Comment by Leif Svalgaard — 2 May 2008 @ 12:04 PM

  176. Steven, OK, so what was the past forcing you posit is still reverberating today? Changes in insolation have been tiny. Other influences have been short-term and inconsistent (some positive, some negative). Did you ever study differential equations? Think about how the time dependences of the homogeneous and particular solution have to be related to see a consistent, monotonic effect.

    To paraphrase raypierre–the sun goes up and down and up and down, and temperature (trend) goes up. Look, it comes down to this: the energy has to come from somewhere. Where do you think it’s coming from?

    In any case, the fallacy of your argument is that somehow, CO2 forcing is determined from current forcing. It isn’t. It is determined from things like paleoclimate, past response of the atmosphere to perturbations, and so on. They are saying, “the sensitivity has to be x, because in the dim and dark past we saw y.” So unless you can produce y with a much smaller sensitivity in the dark and distant past, CO2 sensitivity in the models won’t be affected. CO2 sensitivity in the models is not a fitting parameter. It is fixed by prior information.

    Comment by Ray Ladbury — 2 May 2008 @ 12:22 PM

  177. Ray Ladbury

    My point wasn’t that long term feedback was a significant cause of the current climate. It was that if it isn’t, and it seems obvious that you believe it not to be, then there is no reason to believe it will be in the future.

    Comment by steven caskey — 2 May 2008 @ 1:14 PM

  178. Steven–the problem is that in the past we didn’t have a rapidly increasing driver that would have effects that persist for hundreds of years, and the system’s response to large perturbations may be quite different from the response to small perturbations. Past perturbations were not sufficient to melt the ice caps. This one might be. In the past, permafrost stayed frozen; now it is melting and releasing CO2. In the past, the ocean remained a net sink for CO2, but now its ability to absorb is diminishing. Believe me, I have looked for warm fuzzies to convince me that we don’t have to worry about that thick positive tail. I haven’t found them.

    Comment by Ray Ladbury — 2 May 2008 @ 1:51 PM

  179. Chris N,

    I’m not sure why you are challenging the credibility of Gavin (a highly published and renowned researcher) or anyone else when your questions and assumptions make little sense (how does an increase in the sun lead to strat cooling from a radiative viewpoint? What is “what are the accuracies of the models if the assumed amount of aerosols are cut in half?” supposed to mean?).

    Your questions on forcings and sensitivity seem ill-posed or confused. Adding CO2 is a climate forcing, not a “sensitivity.” The sensitivity tells you how much the climate changes from x amount of forcing. A climate with a very high sensitivity will change a lot from x forcing, and a climate with low sensitivity will change very little for the same x forcing. For example, if the radiative forcing from some increase in CO2 is 2 W/m2 and the climate sensitivity is 0.75 Kelvin per W/m2, then adding that amount of CO2 will give a 1.5 K increase.
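The arithmetic in that example is just forcing times sensitivity. A minimal sketch (the function name and the second, lower sensitivity value are made up for illustration):

```python
# Sketch of the forcing-times-sensitivity arithmetic in the comment above.
def warming(forcing_w_m2: float, sensitivity_k_per_w_m2: float) -> float:
    """Equilibrium temperature change = radiative forcing x climate sensitivity."""
    return forcing_w_m2 * sensitivity_k_per_w_m2

# The example from the comment: 2 W/m2 of forcing at 0.75 K per W/m2.
print(f"{warming(2.0, 0.75)} K")  # 1.5 K

# A lower-sensitivity climate changes less for the same forcing.
print(f"{warming(2.0, 0.3)} K")   # 0.6 K
```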

    C

    Comment by Chris Colose — 2 May 2008 @ 6:10 PM

  180. Leif, you write
    > would someone hit upon the happy idea of increasing further CO2
    Only someone who hated fish. And plankton. You know this.

    Comment by Hank Roberts — 2 May 2008 @ 7:40 PM

  181. My tone AND math were off. Lashings of apologies. I’d intended a clever rebuttal to the fatalism inherent in the “nothing we can do” position of the idea that solar increases were responsible for AGW. Well, of course we could and would do something. Like tinker with CO2 concentrations. Maybe. But mentally I’d switched the sign. Up was down. Etc. Hard to explain. Like calling your best friend the wrong name.

    I’m getting old.

    Comment by Jeffrey Davis — 2 May 2008 @ 9:10 PM

  182. We appear to be talking past one another. My comments are two distinct but somewhat related points. First point: fewer aerosols in the stratosphere will cause it to be cooler, all else equal, than a time when more aerosols are in the stratosphere. Thus, the present trend of a cooler stratosphere is due to less particulate matter reaching the stratosphere today than 20 or 30 years ago. Regarding the models themselves, would they still be “accurate” if the aerosol forcing is cut in half? It appears to me that the aerosol forcing has been inflated in order to get the CO2-driven models (with their inherent climate sensitivities) to reasonably match surface temperature trends. According to this graph, aerosol forcing in the models has been increasing over the years, not decreasing.

    http://en.wikipedia.org/wiki/Image:Climate_Change_Attribution.png

    If you think I don’t know what I’m talking about, please explain the graph above.

    [Response: I’m not sure what the relation of the first part of your claim is to the second part. Volcanic aerosols reach the stratosphere and cause a warming there, but don’t last very long. The anthropogenic aerosol increase in the graph you linked is mostly in the troposphere, and therefore doesn’t have nearly as much effect on the stratosphere. Aerosol forcing has been increasing, but not nearly as much as CO2 forcing in recent years, which is why the CO2 forcing is winning out and we are seeing strong warming. Some models assume less aerosol forcing, some more, which is why a range of models with different climate sensitivity can still be compatible with historical instrumental climate records. If you think there is some other feedback mechanism that could yield lower climate sensitivity than the IPCC range, and fit the temperature record with lower aerosol forcing than assumed in the range covered by IPCC, please turn that into a quantitative model and show me. Nobody has done that. Not with cosmic rays, not with solar forcing, not with fanciful “iris” cloud feedback, not with nothin’. It’s not to say it couldn’t possibly be done, but nobody’s done it, which leads me to think the proponents of low climate sensitivity are not serious about seeing whether their ideas work when turned into hard, cold numbers. –raypierre]

    Comment by Chris N — 2 May 2008 @ 10:51 PM

  183. Jeffrey Davis #181, …but your valid point is that natural disasters may be as bad as self-inflicted ones, and their mitigation just as legitimate. The fact that a very damaging development is “not our fault” — well, AGW is, but think asteroid impact or whatever — is no reason to just suffer it.

    Fatalism is a curse, and 100% self-inflicted.

    Comment by Martin Vermeer — 3 May 2008 @ 4:45 AM

  184. Steven, #174
    Are these equations meant to represent a theory you have? Or do they come from some source you can cite?

    It looks as though you are assuming there that present conditions are equal to past conditions.

    Spencer Weart goes through the science done to test that assumption, in considerable detail.

    Comment by Hank Roberts — 5 May 2008 @ 6:26 PM

  185. “Hank Roberts Says:
    5 May 2008 at 6:26 PM
    Steven, #174
    Are these equations meant to represent a theory you have? Or do they come from some source you can cite?”

    They were just equations I made up to try to make my line of reasoning a bit clearer. The point I was trying to make isn’t that things are the same as they were but rather that the long term climate sensitivity shouldn’t change that much. Thus if you predict a high long term sensitivity for current forces it would make sense to go back to past forces and use similar climate sensitivities for those and see how they should be affecting today’s climate. I understand I grossly oversimplified but it was the best way I could think of to show my line of thought. I will make it a point to read what Spencer Weart has done.

    Comment by steven caskey — 5 May 2008 @ 9:20 PM

  186. In #117 above I sought a response to a paper for which Roy Spencer was the lead author. It now appears to have also included Christy as a co-author and claimed a response of tropical cirrus clouds that should require modellers to lower sensitivity value by as much as 75%. Now Spencer is claiming that the climate science community is ignoring their results. (See “The Sloppy Science of Global Warming” posted March 20, 2008 on “Energy Tribune”).

    “By analyzing six years of data from a variety of satellites and satellite sensors, we found that when the tropical atmosphere heats up due to enhanced rainfall activity, the rain systems there produce less cirrus cloudiness, allowing more infrared energy to escape to space. The combination of enhanced solar reflection and infrared cooling by the rain systems was so strong that, if such a mechanism is acting upon the warming tendency from increasing carbon dioxide, it will reduce manmade global warming by the end of this century to a small fraction of a degree. Our results suggest a “low sensitivity” for the climate system.

    What, you might wonder, has been the media and science community response to our work? Absolute silence. No doubt the few scientists who are aware of it consider it interesting, but not relevant to global warming. You see, only the evidence that supports the theory of manmade global warming is relevant these days.”

    The paper in question appears to be,

    Article title: Cloud and radiation budget changes associated with tropical intraseasonal oscillations
    Published in: GEOPHYSICAL RESEARCH LETTERS in August, 2007.

    I don’t put a great deal of stock in what Spencer and Christy do but I would like to see some authoritative response to this paper.

    Comment by Ted Nation — 8 May 2008 @ 12:03 PM

  187. Spencer and Christy don’t have a great track record when it comes to producing results that are accurate the first time round. In part, that is likely due to the difficulty of hammering the satellite measurements into order. However, his insistence on doing science by press is inexcusable. If his paper has merit, that will come out, but to claim to have overturned climate science and complain that you aren’t getting the attention you deserve is kind of sad, really.

    Comment by Ray Ladbury — 8 May 2008 @ 1:33 PM

  188. Ted Nation (186) — I’m certainly no authority, but the global temperature may have been hotter in the mid-Holocene than now, and was certainly hotter than the global temperature in the 1950s. The Eemian interglacial (termination 2) is thought to have been quite a bit warmer than that, with termination 3 even hotter.

    So Spencer’s iris effect, if adequately proven to actually exist, does not appear to keep temperatures from rising a substantial amount more. At best, IMHO, this could only lower climate sensitivity most modestly, say from 3.0 K to 2.9 K.

    Comment by David B. Benson — 8 May 2008 @ 1:52 PM

  189. Thank you for responding to the Spencer, Christy, et al. paper, but I’m looking for something authoritative. This paper is out there bouncing around among skeptics and deniers without rebuttal. (The latest is the Australian, Jennifer Marohasy.) I thought this kind of thing was partially what Realclimate was set up to respond to. I’m familiar with the long dispute regarding satellite temperature data and how it was resolved, with Christy forced to acknowledge errors in his data. However, while the errors were unrevealed, others marshalled the evidence on the other side. I realize that it may be some time before independent analysis is done on the data from the new satellite, but an authoritative listing of catradictory evidence is called for.

    Comment by Ted Nation — 8 May 2008 @ 3:29 PM

  190. Ted Nation (189) wrote “but an authoritative listing of catradictory[sic] evidence is called for.” I’m not sure what you want. The evidence from ice cores can readily be obtained from the NOAA Paleoclimatology web site. The analysis of the Vostok ice core by Petit et al. has been converted in graphical form for a Wikipedia page:

    http://en.wikipedia.org/wiki/Image:Vostok-ice-core-petit.png

    where termination 2 is about 125 kya, termination 3 is about 240 kya and termination 4 is about 325 kya. All three show higher temperatures than at present.

    Comment by David B. Benson — 8 May 2008 @ 3:59 PM

  191. Your point about the error bars is correct. But I’m not sure it’s clear to the typical reader. I posted a version of the text below on another board and got the comment that it was a lucid explanation of the statistical issue. I thought it might be helpful to try to post it here.

    The Douglass et al. error bars tell you that you have a fairly precise estimate of average prediction, but they do not tell you that you have a very precise prediction. That’s the conceptual mistake they made — they confused the accuracy of their estimate of the average prediction with the accuracy of the prediction itself.

    A simple example can make this clear. If I ask 1000 economists to predict the average rate of inflation in the year 2100, and take the average and standard deviation of those predictions, what I’ll get is a fairly precise estimate of the average prediction. What I most assuredly do not have is a very precise prediction. In fact, it’s still just a guess. I should have no expectation that the actual inflation rate in that year will be close to that prediction. And if I then asked 100,000 economists, I’d get ten times more precision in my estimate of the average prediction. But the prediction itself would be no more accurate than the first one.

    To recap: they mistook the accuracy with which they estimated the mean prediction, for the accuracy of the prediction itself. That’s like saying that if you ask 100x as many economists, you’ll get a 10x improvement in the accuracy of your economic forecast. Nope. You’ll get a 10-fold improvement in your estimate of what the average economist thinks, that’s all.
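The economist analogy is easy to check by simulation. In this sketch (the mean of 5, spread of 2, and the sample sizes are arbitrary), the spread of the individual predictions stays put while the standard error of the mean keeps shrinking as more forecasters are polled:

```python
import numpy as np

rng = np.random.default_rng(42)

# Each "economist" (or model run) makes a noisy prediction drawn from the
# same distribution.  Illustrative numbers only.
true_spread = 2.0  # standard deviation of individual predictions

for n in (1_000, 100_000):
    predictions = rng.normal(loc=5.0, scale=true_spread, size=n)
    sd = predictions.std(ddof=1)   # spread of the predictions themselves
    sem = sd / np.sqrt(n)          # uncertainty of the *average* prediction
    print(f"n={n:>7}: spread of predictions = {sd:.3f}, "
          f"std. error of the mean = {sem:.4f}")
```

Going from 1,000 to 100,000 forecasters shrinks the standard error of the mean by a factor of ten, but the predictions themselves are no less scattered; that is exactly the distinction described above.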

    Comment by Christopher Hogan — 9 May 2008 @ 9:36 AM

  192. There’s a blog entry at climate-skeptic.com that claims to poke holes in this post.

    http://www.climate-skeptic.com/2009/01/can-you-have-a-consensus-if-no-one-agrees-what-the-consensus-is.html

    Is there any truth to it?

    [Response: It is a little confused. The point is that the supposed absence of a hot spot is a much more fundamental problem for atmospheric physics than it is a problem for greenhouse gases – specifically, the moist adiabat is fundamental to all theories of moist convection in the tropics (this is the temperature gradient that results from lifting up parcels of moist air). That gradient, because of the temperature/water vapour saturation relationship, always decreases as the surface temperature increases (thus leading to enhanced warming aloft). This is such a fundamental issue – one that long predates climate modeling or worrying about greenhouse gases – that for this to be wrong would overturn maybe a century of meteorology. Thus it is highly unlikely to be wrong and the problem is much more likely to be in the observations. Having said that, the follow-on post from this (here) demonstrates that there may well be a hot spot in any case. – gavin]

    Comment by Aaron — 14 Jan 2009 @ 4:23 PM

  193. I wrote in comment #190 “at present”. In this context “present” is the year 1950 CE.

    Comment by David B. Benson — 14 Jan 2009 @ 5:58 PM

  194. RE: 192

    I posted this on “another site”, but no one there found it particularly interesting, given certain ideological beliefs that no such hotspot exists:

    “Warming patterns are consistent with model predictions except for small discrepancies close to the tropopause. Our findings are inconsistent with the trends derived from radiosonde temperature datasets and from NCEP reanalyses of temperature and wind fields. The agreement with models increases confidence in current model-based predictions of future climate change.”
    http://www.nature.com/ngeo/journal/v1/n6/abs/ngeo208.html

    “Insofar as the vertical distributions shown in Fig. 3 are very close to moist adiabatic, as for example predicted by GCMs (Fig. 6), this suggests a systematic bias in at least one MSU channel that has not been fully removed by either group [RSS & UAH].”
    http://earth.geology.yale.edu/~sherwood/sondeanal.pdf

    “The observations at the surface and in the troposphere are consistent with climate model simulations. At middle and high latitudes in the Northern Hemisphere, the zonally averaged temperature at the surface increased faster than in the troposphere while at low latitudes of both hemispheres the temperature increased more slowly at the surface than in the troposphere.”
    http://www.atmos.umd.edu/~kostya/Pdf/VinnikovEtAlTempTrends2005JD006392.pdf

    “In the tropical upper troposphere, where the predicted amplification of surface trends is largest, there is no significant discrepancy between trends from RICH–RAOBCORE version 1.4 and the range of temperature trends from climate models. This result directly contradicts the conclusions of a recent paper by Douglass et al. (2007).”
    http://ams.allenpress.com/archive/1520-0442/21/18/pdf/i1520-0442-21-18-4587.pdf

    Also, it’s always worth pointing out that the satellite “channels” do not represent the actual temperature trends at those altitudes, but the trends of huge swaths of atmosphere that include the stratosphere to various degrees (except for TLT). The “channel” that is centered on the “hotspot” (RSS TTS — only reliable since 1987) is half troposphere and half stratosphere, a fact that is seldom (never) pointed out by people pushing this grab bag of nonsense.

    Comment by cce — 14 Jan 2009 @ 6:36 PM

  195. “Having said that, the follow on post from this (here) demonstrates that there may well be a hot spot in any case. – gavin”

    Thank you. :)

    Comment by Aaron — 15 Jan 2009 @ 9:16 AM
