RealClimate

Comments


  1. As before, it seems that the Hansen et al ‘B’ projection is likely running a little warm compared to the real world.

    =================

    Statistically significantly so.

    Enough data to reject the hypothesis (and the conclusions) that Scenario B is correct.

    [Response: But no model is 'correct' and no forecast is perfect. The question instead is whether a model is useful or whether a forecast was skillful (relative to a naive alternative). Both of these are answered with a yes. We can even go one step further - what climate sensitivity would have given a perfect forecast given the actual (as opposed to projected) forcings? The answer is 3.3 deg C for a doubling of CO2. That is a useful thing to know, no? - gavin]

    Comment by Lord Blagger — 21 Jan 2011 @ 10:15 AM

  2. Thank you.

    Comment by Edward Greisch — 21 Jan 2011 @ 10:19 AM

  3. A few curiosities from the article. Why do you calculate the climate sensitivity from Hansen’s scenario B projection when it is running a little warm? Actual temperature data appears to be tracking much better under scenario C.
    Also, any comment as to why the ocean heat content has appeared to level off during the post 2003 period?

    [Response: Scenario C assumed no further emissions after 2000, and is much further from what actually happened. I doubt very much that a linear adjustment to the forcings and response is at all valid (and I'm even a little dubious about doing it for scenario B, despite the fact that everything there is close to linear). The error bars are about +/- 1C so it isn't a great constraint in any case. As for OHC, it is likely to be a combination of internal variability, not accounting for heat increases below 700m, and issues with the observing system - compare to the Lyman et al analysis. More time is required for that to become clear. -gavin]

    Comment by Dan H. — 21 Jan 2011 @ 10:35 AM

  4. >The model simulations use observed forcings up until 2000 (or 2003 in a couple of cases) and
    > use a business-as-usual scenario subsequently (A1B).

    Does that mean the model simulations have not accounted for the unusually deep solar minimum or would that not make much of a difference?

    Any highlights in the modelling world on new or improved physics which is being included? Thanks.

    Comment by Andre — 21 Jan 2011 @ 10:50 AM

  5. Did the Hansen ‘B’ projection properly take into account the thermal inertia of the oceans? Put another way, is it possible that the Hansen ‘B’ projection gets the rate of warming wrong (too fast), but the overall final sensitivity (4.2˚C/doubling) right (and it’s just the far distant tail of the graph that will differ, but it will ultimately end at the correct temperature)?

    My untrustworthy memory tells me that the degree of thermal inertia of the oceans came as a mild surprise to science in the last decade, so the answer to these questions could be yes.

    [P.S. On a separate note, as if it couldn't get worse, reCaptcha's new method of outlining some of the characters, and sometimes only parts of them, is really, really, really annoying. And I was just finally getting used to it...]

    [Response: The recaptcha issue looks like a weird CSS issue, possibly related to an update of the live comment preview. Any pointers to what needs to be fixed will be welcome!- gavin]

    Comment by Bob (Sphaerica) — 21 Jan 2011 @ 11:12 AM

  6. Gavin,

    with regards to Dan’s question, is there any estimate of the short-term effect of ENSO on radiative forcing? Also, do you think that sea level rise is a good proxy for OHC?

    Comment by Rocco — 21 Jan 2011 @ 11:19 AM

  7. Gavin,

    [Response: The recaptcha issue looks like a weird CSS issue, possibly related to an update of the live comment preview. Any pointers to what needs to be fixed will be welcome!- gavin]

    I don’t think so. I think the captcha itself is an image generated and delivered as a unit by Google (right click and “View Image” in Firefox)… CSS has no effect on it, so I think it’s just a coincidence in timing (captcha change vs. preview change). Google must be generating that image using a different algorithm/method than they were. They’re just outsmarting themselves (and us).

    Comment by Bob (Sphaerica) — 21 Jan 2011 @ 11:54 AM

  8. So, if I read this correctly, your method yields an estimate for climate sensitivity of 3.3ºC. It’s easy to derive from this the CO2 level compatible with the policy goal of limiting the rise in global mean surface temperature to 2ºC over the pre-industrial level. I make it about 426ppm. Of course, this assumes a few things, such as that levels of other GHGs, such as methane, are returned to their pre-industrial levels, or continue to be counter-balanced by aerosol forcings.
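
    A minimal sketch of that arithmetic, assuming the usual logarithmic relation dT = S * log2(C/C0) and a 280 ppm pre-industrial baseline (both assumptions of this sketch, not stated in the comment):

        S = 3.3                # equilibrium sensitivity from the post (deg C per CO2 doubling)
        C0 = 280.0             # assumed pre-industrial CO2 (ppm)
        target_warming = 2.0   # policy goal (deg C above pre-industrial)

        # Invert dT = S * log2(C / C0) to get the equilibrium CO2 level for 2 deg C
        C = C0 * 2 ** (target_warming / S)
        print(round(C))        # ~426 ppm, matching the figure above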

    At the current rate of increase in atmospheric CO2 levels (around 2ppm/yr), we’ll pass 426ppm within less than another 2 decades. We’ll have to reduce CO2 levels after that if we’re to avoid an eventual temperature rise of more than 2ºC.

    Comment by Tim Joslin — 21 Jan 2011 @ 12:03 PM

  9. Gavin (or anyone else qualified to answer):

    when the measured temperature drops from one year to the next (as happened in 2008-2009 if I’m reading the graph correctly), where does the heat go? Upper ocean / lower ocean / atmosphere / someplace on the surface where we don’t have thermometers?

    Put another way, if our grid of measuring devices was truly comprehensive in 3-d, could we see how heat moved from one place to another from year to year? Would the result be a straight rising trend line?

    ps: Once again, thanks for all your hard work explaining these issues to the public. Some days I’m not sure how you keep your sanity.

    [Response: The heat content associated with the surface air temperature anomalies is very small, and so it can go almost anywhere without you noticing. - gavin]

    Comment by Francis — 21 Jan 2011 @ 12:08 PM

  10. For the period 2000-2010, Tamino’s analysis might be a better measure

    http://tamino.wordpress.com/2011/01/20/how-fast-is-earth-warming/#comment-47256

    It’s not a huge difference, but it does make everything a lot more consistent.

    Comment by Ray Ladbury — 21 Jan 2011 @ 12:34 PM

  11. This is a good idea. It could be improved (IMHO) (a) by assembling the yearly predictions in a pdf file that can be easily accessed here and (b) including similar graphs for the measured variables that Barton Paul Levenson has included in his list of correct predictions.

    The data seem to strongly confirm or support scenario c over the other two scenarios. Would you say that’s a fair inference to date?

    Gavin, in comment 1: But no model is ‘correct’ and no forecast is perfect. The question instead is whether a model is useful or whether a forecast was skillful (relative to a naive alternative).

    Instead of “correct”, how about “sufficiently accurate”? Box (and others who use the word “useful”) never clarified what good qualities are necessary or sufficient for a model to be “useful” if it is “false” (Box’s word.) If model c is the most skillful to date, should model c be the model that is used for planning purposes? (This looks like an idea that might appeal to Congressional Republicans.)

    You are aware (I am sure) of “sequential statistics”. Would you like to propose a criterion (or criteria) for deciding how much accuracy (or skill) over what length of time would be sufficient for deciding which of the scenarios was “most useful”?

    Comment by Septic Matthew — 21 Jan 2011 @ 1:15 PM

  12. I’m not so sure 2011 will be “easily” in the top 10, considering the strong la Nina in place and expected to last at least mid-year and perhaps beyond. Are January surface anomalies not looking that cool?

    2008, for example, appears to be ranked 11th, and this year’s la Nina seems to be a little stronger. So more atmospheric greenhouse gases, sure, but it’s only 3 years removed. Solar influence won’t be much different.

    Interesting to see arctic sea ice extent still tracking below 1 SD.

    Any updates to Schuckmann 2009 (ocean heat storage down to 2000 m)?

    Comment by MarkB — 21 Jan 2011 @ 1:49 PM

  13. re: MarkB @8

    “Interesting to see arctic sea ice extent still tracking below 1 SD.”

    From the graph it looks like it’s tracking ~4 SD below mean and ~1 SD below 2007.

    Comment by Jeffrey Davis — 21 Jan 2011 @ 1:54 PM

  14. Jeffrey Davis: “From the graph it looks like it’s tracking ~4 SD below mean and ~1 SD below 2007.”

    I don’t follow that. The dotted lines above and below the solid one (ensemble mean) are 1 SD. Maybe you’re confusing this with the vertical axis, which is sea ice extent.

    Comment by MarkB — 21 Jan 2011 @ 2:26 PM

  15. I wonder how the model ensemble and observations (especially GISTemp) track north of 60.

    Intuitively, one would think that the models’ underestimate of the decline in Arctic sea ice extent would also be reflected, at least partly, in an underestimate of the Arctic temperature rise.

    Comment by Deep Climate — 21 Jan 2011 @ 2:36 PM

    Thanks for this update. I agree with others that some mention/discussion of the natural forcings/inputs not included in the models would be helpful:

    1) Certainly the very low solar activity and prolonged solar minimum.
    2) We’ve had two rather significant La Ninas in the past 5 years
    3) The PDO has switched to a cool phase (accentuating the La Nina?) And what is that doing to the OHC?
    4) The AMO might be heading for a cool phase, and this combined with a continued cool PDO, on top of a weak Solar Cycle 24 could prove interesting.

    Related to all this is of course the issue of how AGW might be affecting the very nature of longer-term ocean cycles such as the PDO and AMO.

    The most exciting thing is we’ll get a chance to see the relative strength of all of these over the next few years, and it will be most interesting to compare the total decade of 2010-2019 to previous decades in terms of the trends in Arctic sea ice, global temps, and of course, OHC.

    Comment by R. Gates — 21 Jan 2011 @ 2:55 PM

  17. gavin : we could use this mismatch to estimate a sensitivity for the real world. That would give us 4.2/(0.27*0.9) * 0.19=~ 3.3 ºC

    But with this calculation, don’t you estimate the transient climate response (and so a transient climate sensitivity)?

    [Response: I am assuming that the sensitivity over the 27 years is linearly proportional to the equilibrium sensitivity. If I knew the transient sensitivity of this model (which I don't), I could have scaled against that. In either case, one would find that a reduction by a factor 0.19/(0.27*0.9) = 0.78 would give the best fit. - gavin]
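
    A minimal sketch of the scaling in this response, using the numbers from the quote above; the interpretation of each factor (trends in deg C/decade, a ~10% forcing adjustment) is an assumption of the sketch:

        model_sensitivity = 4.2   # deg C per doubling, the Hansen et al. (1988) model
        model_trend = 0.27        # assumed: Scenario B trend, deg C/decade
        forcing_factor = 0.9      # assumed: Scenario B forcings ran ~10% high vs. observed
        observed_trend = 0.19     # assumed: observed trend, deg C/decade

        scale = observed_trend / (model_trend * forcing_factor)   # ~0.78, the best-fit reduction
        implied_sensitivity = model_sensitivity * scale           # ~3.3 deg C per doubling
        print(round(scale, 2), round(implied_sensitivity, 1))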

    Comment by question — 21 Jan 2011 @ 3:03 PM

    Couple of questions: since H88, what are the most important changes made to the model (a non-slab ocean and improved sulfur cycle? etc., etc.)?

    If you repeated H88 with today’s version of the model, what would you get?

    Academic questions, just curious.

    Comment by steven mosher — 21 Jan 2011 @ 4:24 PM

  19. I sent off my 2nd (revised) version of the planet temperatures paper yesterday. Wish me luck…

    Comment by Barton Paul Levenson — 21 Jan 2011 @ 4:52 PM

  20. re: #19 Good luck BPL. I just read your easy greenhouse effect article. Now I’m going to find out how Fourier figured it out in the first place, and another learning curve begins. Thank you.

    Comment by One Anonymous Bloke — 21 Jan 2011 @ 5:41 PM

    #19 Looking forward to reading it, BPL.

    Comment by john byatt — 21 Jan 2011 @ 5:51 PM

  22. “Consistent with that, I predict that 2011 will not be quite as warm as 2010, but it will still rank easily amongst the top ten warmest years of the historical record.”

    Sounds like a “close shave” prediction for 2011. What’s the probability that it is or isn’t?

    Let’s ignore NASA gmao ensemble for ENSO and instead eyeball a prolonged La Nina episode for 2011, which decays into 2012 giving neutral conditions:

    In this case, we would expect around 0.2 deg C drop similar to 1999. So, 2011 will still rank amongst the top ten warmest years of the historical record, but only by 0.05 deg C.

    Given that the MEI ranked the 1999 -2000 La Nina episode quite poorly, and this year we have seen record SOI values, I wouldn’t say 2011 will “easily” be in the top ten.

    [Response: Wanna bet? - gavin]

    Comment by Isotopious — 21 Jan 2011 @ 5:52 PM

  23. re: 13

    I was looking at this graph:

    http://nsidc.org/images/arcticseaicenews/20110105_Figure2.png

    Comment by Jeffrey Davis — 21 Jan 2011 @ 5:59 PM

  24. Short learning curve :)

    Comment by One Anonymous Bloke — 21 Jan 2011 @ 6:20 PM

  25. Given the historical performance of Nasa gmao, no I don’t.

    [Response: Not sure what they've got to do with anything. But once again we have someone going on about cooling in the blog comments who backs off when pushed. Good to know. -gavin]

    Comment by Isotopious — 21 Jan 2011 @ 6:36 PM

  26. Isotopious,
    I am going to agree with you that 2011 will not be as warm as 2010 (La Nina vs El Nino and all). However, I will take the position that it will fall outside the top 10. According to the CRU data, a 0.1C drop from 2010 would knock 2011 down to 12th place. Based on the Dec. temperature drop, a 0.1-0.2C drop is quite possible. By the way, according to CRU, 2010 finished 4th by a nose to 2003.

    Comment by Dan H. — 21 Jan 2011 @ 7:14 PM

    gavin : in IPCC AR4, table 8.2 gives a 2.7 °C equilibrium climate sensitivity for GISS-EH and GISS-ER. In your text, the value is 3.3 °C. What is actually the best estimate?

    [Response: Range is 2 to 4.5, best estimate ~3 C -gavin]

    Comment by question — 21 Jan 2011 @ 7:27 PM

  28. Jeffrey (#23),

    I was referring to the 3rd graph listed in this post comparing model ensemble to data, so SD refers to the model SD. Yours compares the 1979-2000 average to recent extent.

    Comment by MarkB — 21 Jan 2011 @ 7:59 PM

  29. “An almost-record summer melt in the Arctic was also important (and probably key in explaining the difference between GISTEMP and the others).”

    This makes me wonder if, over time (decades), we should expect to see a growing gap emerge between the anomalies measured by GISS and those from the other methods. I wonder this because GISS attempts to include the Arctic and the others do not, and the Arctic is warming much faster than the global average and will likely continue to do so as the ice melt continues. Or are the anomalies (large as they are) not over a big enough area of the globe to create such a difference?

    Comment by Mike C — 21 Jan 2011 @ 8:02 PM

    The 2011 temperature result will obviously depend on ENSO. As you know, El Nino is seasonal (as the name implies), yet the anomalies have persisted unchanged into the New Year. While it is still possible for the event to decay in 2011 (as the gmao ensemble is predicting), historically, it could already have begun to decay since Christmas time….mmmm

    So I don’t want to bet, just pointing out that the odds of 2011 making the top 10 warmest years are not as good as they could be…that’s all.

    :)

    Comment by Isotopious — 21 Jan 2011 @ 8:03 PM

  31. The hindcast of the models, against the temperature record from 1900 to 2000, is indeed very impressive.

    It is that very fact that leads me to believe they have to be wrong!

    They are not comparing like to like. An apple doesn’t equal an orange no matter which way you cut it.

    The models are set up to produce the climate signal. However, there are some weather signals, such as PDO-influenced ENSO conditions, that introduce medium-term warming and cooling signals, and they can be quite large as we know. They are cyclical so, in the long term, they average out and are not an additive effect, so the long-term climate signal will always emerge from this masking weather noise.

    Now the models average out these temporary weather forcings, so they are only showing the true climate signal. Well, I can understand that; it seems a reasonable thing to do, and makes them less complicated.

    How, therefore, can a model set to match only the climate signal match so well the climate-plus-weather signal, which is what the temperature record is?

    We are told that the reason the models are not representing the 21st century record very well is that ‘weather’ conditions are temporarily masking the true climate signal, which will emerge when the weather conditions cycle.
    http://www.woodfortrees.org/plot/hadcrut3gl/from:2001%20/to:2011/trend/plot/hadcrut3gl/from:2001/to:2011

    Well I could accept that as an explanation if it wasn’t that the hindcasts of the model match the climate plus weather signal so well.

    We know that when James Hansen made his famous predictions to Congress in 1988, he didn’t know he was comparing a period in the warm end of a sixty-year PDO weather cycle with periods in the cool end. The PDO cycle was not identified until 1996.

    Surely the hindcast of the models should show periods of
    time where the climate signal is moving away, up and down, from the climate and weather signal?

    The fact that it doesn’t suggests to me that the modellers wanted to tune their models as close as possible to the temperature record so that people would have high confidence in them. However they overlooked, in their hubris, that if they were truly accurate they shouldn’t.

    So it appears that the only way we could solve this conundrum is to say that during the period of the hindcast, weather was never anything other than a very minor force. However, in the forecast period, weather has transformed itself into a major masking force.

    Sorry I can’t buy it.

    Alan

    Comment by Alan Millar — 21 Jan 2011 @ 9:01 PM

  32. Mike C #29 I’ve been wondering about this: is it correct to think that the ice mass itself acts as a sort of ‘cooling forcing’? Implying that its melt has a double whammy – less ‘cooling forcing’ plus increased sea levels…or have my perceptions led me astray again?

    Comment by One Anonymous Bloke — 21 Jan 2011 @ 11:12 PM

  33. > Alan Millar
    > woodfortrees

    You’ve used his tool to do exactly what he cautions about!
    _______________
    “I finally added trend-lines (linear least-squares regression) to the graph generator. I hope this is useful, but I would also like to point out that it can be fairly dangerous…
    Depending on your preconceptions, by picking your start and end times carefully, you can now ‘prove’ that:
    Temperature is falling! ….”
    ————— http://www.woodfortrees.org/notes

    Comment by Hank Roberts — 22 Jan 2011 @ 12:19 AM

    Alan Millar #31, reference needed for the match between climate hindcasts and weather over 1900-2000 that you say is too good.

    The first figure of the OP, for instance, shows 1980-2010. The model ensemble hindcast tracks the two big dips in the real temps, as it should if the models know about El Chichon and Pinatubo. (The OP says the simulations used the observed forcings.) That apart, what’s so impressive about the match? Can you quantify your suspicions?

    Comment by CM — 22 Jan 2011 @ 2:14 AM

  35. Alan Millar:

    If the models are behaving correctly, then they absolutely should reproduce weather effects such as ENSO. They just don’t produce the same actual weather that occurs in the real world.

    So if you look at an individual model run, you will see peaks and troughs that represent La Nina and El Nino in the model run. But those peaks and troughs don’t correspond to any real world index – they represent model weather.

    When you average together many model runs, these weather effects average out, and you don’t see the same patterns in the ensemble mean. However, you should be aware that the majority of the variance (the big gray 95% range in Gavin’s graph) is caused by these large scale model weather events. That is why we should expect the observed global temperature to bounce around quite dynamically in that 95% (even going outside it sometimes) and not cling closely to the mean. The fact that this is exactly what we see is evidence that the models are right, and not whatever it is that you are trying to imply. In the AR4 graph, the only time that the observed temperatures closely follow the model is just after Pinatubo, because observed volcanic forcings are included in the model.

    The models also illustrate nicely that more subtle, hard to delineate oscillations such as PDO do not have a significant effect on long-term global climate. Enhancing models so that they can do better in that area may improve regional modelling results, but it won’t affect the global picture much.

    On a different note, you are actually trying to argue that because the models are so good at reproducing climate, then they can’t be real. That’s a ludicrous argument, and you know it. Even really simple, naive models do a good job of reproducing climate. Even statistical models can do it.

    Comment by Didactylos — 22 Jan 2011 @ 3:24 AM

  36. Regarding model output for OHC, Gavin writes : ” As before, I don’t have the post-2003 model output”

    Why not? I don’t understand. Are you saying the models never output any OHC predictions past 2003?

    [Response: No. It is a diagnostic that would need to be calculated, and I haven't done it. That's all. - gavin]

    Comment by TimTheToolMan — 22 Jan 2011 @ 4:02 AM

  37. One Anonymous Bloke – The ice mass doesn’t act so much as a cooling ‘force’ as it does as a lid covering or insulating warmer water. The smaller the lid gets – because warmer water is eating away at its edges and its underside – the more that warmer water is exposed to the local air mass, the less that air is cooled by contact with ice. And on and on.

    Comment by adelady — 22 Jan 2011 @ 4:30 AM

  38. AM 31: The models are set up to to produce the climate signal… the modellers wanted to tune their models as close as possible to the temperature record so that people would have high confidence in them. However they overlooked, in their hubris, that if they were truly accurate they shouldn’t.

    BPL: You appear to have no idea whatsoever how these models actually work. They are not “tuned” to any historical records at all. They are physical models, not statistical models. They start with the Earth in a known state (say, 1850) and then apply known forcings and physics to see what happens. THAT is what makes the agreement with the historical data so impressive.

    You might want to get a copy of this and read it–IF you’re really interested in learning about how these models work, that is:

    Henderson-Sellers and McGuffie 2005 (3rd ed). A Climate Modeling Primer. NY: Wiley.

    Comment by Barton Paul Levenson — 22 Jan 2011 @ 7:08 AM

  39. Okay, CAPTCHA choked me twice on what I wanted to write.

    Gavin et al.: I think it would help immensely if you put the “Say It!” button AFTER the reCAPTCHA section. That way at least I wouldn’t often forget to do the reCAPTCHA at all.

    Comment by Barton Paul Levenson — 22 Jan 2011 @ 7:10 AM

    #30–You make it sound like ENSO is still going on. We’ve been in a cold phase since the beginning of last summer. In fact, this has been a fairly strong cold event in the fall of this year.

    For those that want to get current conditions and ENSO predictions, please go here:

    http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/enso_advisory/ensodisc.html

    Comment by m — 22 Jan 2011 @ 7:30 AM

  41. Re. 32 One Anonymous Bloke

    Loosely related but could be of interest:

    Some interesting work has been done by Jerry Mitrovica on how gravity shapes the regional sea-level effects of glacial melt:
    * http://harvardmagazine.com/2010/05/gravity-of-glacial-melt
    * http://www.theglobeandmail.com/news/technology/science/article9752.ece
    * http://www.goodplanet.info/eng/Contenu/Points-de-vues/The-Secret-of-Sea-Level-Rise-It-Will-Vary-Greatly-by-Region

    In Greenland, an ‘area the size of France melted in 2010 which was not melting in 1979’. (H/T to Neven)
    * http://www.msnbc.msn.com/id/41197838/ns/us_news-environment/

    If only the Greenland Ice Sheet melted, sea levels would fall along the shores of Scotland, and the Netherlands would see only one-fifth the average sea-level rise worldwide. (“Of course, that’s what they’re hoping for, even as they plan for the worst-case scenario,” says Mitrovica. “But if you’re Australian, you have a very different hope.”)

    Comment by J Bowers — 22 Jan 2011 @ 7:34 AM

  42. Can’t somebody just re-run Hansen’s old model with the observed forcings to the current date? Why are we forever wed to his old Scenarios A, B and C? I want to separate the physical skill of the model from his skill in creating instructive emissions scenarios.

    Comment by carrot eater — 22 Jan 2011 @ 7:57 AM

  43. Alan Millar: “Sorry I can’t buy it.”

    No, Alan, you just don’t understand it.

    Comment by Ray Ladbury — 22 Jan 2011 @ 9:08 AM

  44. 38 BPL

    “They are not “tuned” to any historical records at all. They are physical models, not statistical models. They start with the Earth in a known state (say, 1850) and then apply known forcings and physics to see what happens. THAT is what makes the agreement with the historical data so impressive.”

    Hmmm………..

    Unfortunately they also have to apply figures for forcings whose values and effects are not well known and are subject to ongoing debate: reflective aerosols, land use, black carbon, etc. You get to play around with them in the model.
    With black carbon, we are not even sure yet what the sign is.

    So before you run the model you have to estimate values and effects for these forcings and estimate their changing values over time even though we do not have good measurement.

    In the GISS E model, for instance, they are flat-lining the effects of the following factors since the late 1980s: black carbon, reflective aerosols, land use and ozone, all of which had changing values in the model prior to the late ’80s.

    So, BPL, you are saying that having done all this, when they ran the model the first time they got this excellent fit? They never had to go back in and tweak any forcings or assumptions?
    [edit]

    [Response: Aerosol forcings in the GISS model are derived from externally produced emission inventories, combined with online calculations of transport, deposition, settling etc. The modern day calculations are compared to satellite and observed data (see Koch et al 2008 for instance). As models improve and more processes are added the results change and are compared again to the modern satellite/obs. The inventories change because of other people reassessing old economic or technological data. The results don't change because we are trying to fit every detail in the observed temperature trend. If we could do that, we'd have much better fits. Plus, running the suite of runs for the IPCC takes about a year, they are not repeated very often. - gavin]

    Comment by Alan Millar — 22 Jan 2011 @ 9:27 AM

    #31 Alan, I got similar results, showing a decline over the last 10 years, using a Fourier filter with a 0.1 cy/yr cut-off frequency.

    Just a thought, but I wonder how much these models rely on physical laws, and how much on empirical engineering equations.

    Comment by J. Bob — 22 Jan 2011 @ 11:40 AM

  46. 38, Barton Paul Levenson: Henderson-Sellers and McGuffie 2005 (3rd ed). A Climate Modeling Primer. NY: Wiley.

    I have just purchased “Dynamic Analysis of Weather and Climate” 2nd ed. by Marcel Leroux. I was wondering whether this was considered a good book among experts in the field.

    Comment by Septic Matthew — 22 Jan 2011 @ 12:25 PM

  47. A question, when a model is initialized far in the past (1850 has been mentioned as a starting year), how can the initialization be accurate enough given the relative scarcity of data from those times? I would guess that, with three virtual centuries available to develop, systematic errors in the starting conditions could impact the final result quite a bit.
    Or is the hindcast made by initializing things in 2000, when the relevant climate parameters are much better known, and then running the model backwards (for all I know, the relevant equations work in both directions…)?

    Comment by Alfio Puglisi — 22 Jan 2011 @ 12:37 PM

  48. For Alan Millar: http://www.woodfortrees.org/notes
    “… graph will stay up to date with the latest year’s values, so feel free to copy the image link to your own site, but please link back to these notes ….”

    Comment by Hank Roberts — 22 Jan 2011 @ 12:49 PM

  49. 31, Alan Millar: The hindcast of the models, against the temperature record from 1900 to 2000, is indeed very impressive.

    Sorry to pile on, but where do you see that?

    Comment by Septic Matthew — 22 Jan 2011 @ 12:49 PM

  50. A clarification. HadSST and HadISST are different products. The latter (used in GISTEMP until 1982) interpolates missing regions while the former (used in HadCRUT) does not. The next versions of these analyses, HadISST2 and HadSST3, will be available shortly.

    Of course, when they are released, there will be much howling in certain quarters of “adjusting the past.” Said adjustments will be examined by certain bloggers who will leave out crucial explanation and details in favor of endless speculation and innuendo.

    Comment by cce — 22 Jan 2011 @ 1:04 PM

  51. In your graph of temperature measurements vs model projections, you discuss Scenario B, while it appears that Scenario C is almost identical to HadCrut3 and GisTemp. All three are about 0.185K/decade. Is this not the general trend from 1900, and considered a reflection of natural warming coming out of the LIA?

    The CAGW worry I have is based on 3K/century. I expected that we would be in the accelerating >2.0K/century range by now, 22 years after the 1988 initiation of the concerns. At any rate, while global warming is certainly still occurring, doesn’t your graph here suggest that there is only a “background” type of warming going on, the non-feedback type, rather than the kind we thought would be such a problem?

    I’m also confused by the comparison between the temperature graphs and the ocean warming graphs, certainly by the Lyman portion of the ocean heating graph. If the oceans take up and hold the heat so much more than the atmosphere, and then warm the atmosphere because the oceans are warmer, why do the two trends of ocean heat and atmospheric temperature not follow each other? Up to 2002 the Lyman measurements match, as if the ocean and atmosphere were in equilibrium. Then they diverge, and do so from the other data compilers’. Did Lyman’s methodology change in 2002?

    Rather than support CAGW, this post seems to support global warming of a moderate level, but not of a disastrous level. The Lyman divergence is very odd. Is it possible that Lyman is measuring a transfer of oceanic heat from warm waters to cool waters through circulation changes rather than increased retention of solar energy?

    [Response: 'C'AGW is not any actual scientific hypothesis, theory or result. To my knowledge it has never appeared in the text of any IPCC, NAS or Royal Society report. It only exists in contrarian blogs when people want to argue against some strawman. Since I don't know what it is, nor have I ever written about it or seen it honestly described, I have no idea whether what I wrote supports it or not. If you would like to know what the scientific projections are and why people are concerned about 'business-as-usual', please see any of the reports I mentioned above (or read this). The updates here all support that concern. - gavin]

    Comment by Doug Proctor — 22 Jan 2011 @ 1:36 PM

  52. Adelady #37, thank you. I think I know where I’ve been going wrong: it’ll still be colder at the poles (or at high altitude) whatever the average global temperature: the ice is a symptom of that, not a cause.
    J Bowers #41, Thank you too: very much of interest. My sense of wonder is being spoiled rotten :)

    Comment by One Anonymous Bloke — 22 Jan 2011 @ 1:55 PM

  53. “So to conclude, global warming continues. Did you really think it wouldn’t?”

    No, I figured it would. I study history, which tells me we have another 400 years or so of Global Warming to go with the occasional two or three decades of slight cooling here and there.

    What I find disturbing is that if Congress had instituted the draconian emission restrictions being recommended in ’88 then these observations through 2010 could be used now to pat ourselves (as in the USA/UN/whoever imposed draconian CO2 restrictions) on the back for avoiding “certain” warming.

    The other thing that bothers me: one of the lines of evidence that the current warming carries man’s fingerprint is that models without man’s influence do not reproduce the warmer reality. Applied in reverse, that logic should imply man’s influence is negligible, since the closest match to reality is the scenario with draconian emission restrictions that didn’t happen.

    [Response: You have misinterpreted the logic completely. There are very clear fingerprints of change that are only associated with changes via increasing CO2 etc. - gavin]

    Comment by John W — 22 Jan 2011 @ 2:08 PM

  54. >”[Response: Wanna bet? - gavin]”
    >”But once again we have someone going on about cooling in the blog comments who backs off when pushed.”

    ‘Easily’ hasn’t been defined. I would be surprised if you were willing to bet on GISStemp GLOBAL Land-Ocean being in the top 8. I am already betting quite heavily that it won’t be warmest, and mildly that it won’t be in the top 5, so I don’t want to bet a fortune on it not being in the top 8 and do not want to bet on GISS not being in the top 10. It should be noted that there is a big difference between those. It seems I am also at least partly agreeing with Isotopious. If it wasn’t for my existing positions, I might try to push Gavin on what he means by ‘easily’.

    (Last intrade trade price for gisstemp being in top5 was 67% so if Gavin is looking for something like even odds on Giss being warmer than +0.52C he could easily lay it off for a guaranteed profit.)

    Comment by crandles — 22 Jan 2011 @ 2:39 PM

  55. 49 Septic Matthew

    Well, my point is that a model that is tuned to match a climate signal only should not accurately track a record that is both a climate and weather signal, especially when we know that these medium-term effects can be quite strong, even if they cycle out in the longer term.

    Look at the 2001 to 2011 climate and weather signal and the GISS model which is showing an opposite, climate only, trend over the full decade.

    Now we say that that doesn’t falsify the model because a noisy weather signal is masking the climate only signal. Now that was for a forecast period.

    So let us look at the decadal trends for each of the decades since 1950, the decades when CO2 is supposed to have become a dominant forcing factor. These are also decades which the GISS model has backcast against.

    http://www.woodfortrees.org/plot/hadcrut3gl/from:1951/to:1960/trend/plot/hadcrut3gl/from:1951/to:2011/plot/hadcrut3gl/from:1961/to:1970/trend/plot/hadcrut3gl/from:1971/to:1980/trend/plot/hadcrut3gl/from:1981/to:1990/trend/plot/hadcrut3gl/from:2001/to:2011/trend/plot/hadcrut3gl/from:1991/to:2000/trend

    So what does the GISS model show for these decades?

    Well, when I look at it, the GISS decadal, climate-only signal trend matches the weather-and-climate decadal signal trend: up, down, up, up, and up. In none of them do we see an opposite trend over the whole decade.

    However in the first decade which GISS forecasts, rather than backcasts, this opposite trend in signals emerges.

    Now, I am not saying this couldn’t be coincidental, but it adds weight to my point that the model should not be so accurate on the backcast. Does this suggest excessive tuning in the backcast? Well, it’s some evidence for it.

    Alan

    Comment by Alan Millar — 22 Jan 2011 @ 2:41 PM

    One wonders, with OHC, are we not missing some at the poles? Two-thirds of ice loss appears to be from underneath, according to recent research. That is supported by other recent findings that warm currents are infiltrating northward under ice shelves and into fjords around Greenland, and into the Arctic basin more than previously thought.

    Is some of that heat playing peek-a-boo, or being sent into the atmosphere due to the physics of ice melt and growth?

    Regarding 2011 temps: the late ice growth, the almost total lack of any ice over 2.5 meters thick, and the new report on Greenland melt all indicate (to me, at least) that the La Nina (I realize LN was later in the year and most GIS melt ends about Sept., but the transition from EN to LN was underway during the melt) may not hold temps down as much as we might hope.

    But, then, we just hit a high, so I suppose the yo-yo is gonna assert itself and take us back down a bit. But don’t bet much on it.

    Comment by ccpo — 22 Jan 2011 @ 4:20 PM

  57. #53 What I find disturbing is that if Congress had instituted the draconian emission restrictions being recommended in ’88 then these observations through 2010 could be used now to pat ourselves (as in the USA/UN/whoever imposed draconian CO2 restrictions) on the back for avoiding “certain” warming.

    Huh????? It is absolutely certain that warming has occurred since 1988. So what is your point???????

    Comment by JiminMpls — 22 Jan 2011 @ 4:39 PM

  58. Re: #55 (Alan Millar)

    There are 5 best-known global temperature estimates, surface data from GISS, HadCRU, and NCDC, and lower-troposphere estimates from RSS and UAH. Four of the five show a positive (warming) trend in the last decade (but not statistically significant), but you chose to display the only one that shows a negative (not significant either) trend and declare it to be “the” trend. Why?

    There are other factors affecting global temperature besides greenhouse gases, some of which have a profound impact on short-term variations. When estimates of their impact are removed (see this), the global warming trend becomes evident. In fact, when compensated for exogenous factors, all 5 data records show a positive trend over the last decade (including HadCRU), all show 2010 as hottest year, and 4 of 5 have 2009 as 2nd-hottest.
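
    A minimal sketch of that kind of adjustment, assuming a simple multiple regression against ENSO, volcanic and solar indices; the function and its inputs are hypothetical placeholders, not tamino's actual method or data:

        import numpy as np

        def adjusted_trend(temp, enso, aod, tsi, dt_years=1.0):
            """Linear trend in temp after regressing out ENSO, volcanic and solar terms.

            temp, enso, aod, tsi: equal-length 1-D arrays (hypothetical inputs).
            """
            t = np.arange(len(temp)) * dt_years
            X = np.column_stack([np.ones_like(t), t, enso, aod, tsi])
            coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
            return coef[1]   # residual trend (deg C per year) with exogenous factors removed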

    Note to moderators: the recaptcha thing has gotten much more difficult, and is now a genuine pain in the ass. I submit that your real mission is not to provide more data for a (admittedly fascinating) scientific study of pattern recognition, it’s to disseminate accurate information on one of the world’s most important scientific issues. Please stop impeding your primary mission; get rid of recaptcha and serve your readers, not investigators of spam-detection technology.

    Comment by tamino — 22 Jan 2011 @ 4:40 PM

  59. Two related points

    First, Hansen’s A, B and C are not models, but emission scenarios that are fed into the same model. Thus A and C are much further from what actually happened than B, which is actually pretty close to reality. This points to the 1988 team having had an excellent understanding of what was likely to happen.

    Second, for an exercise such as this, comparisons of the actual emissions to those that were assumed in the model would help peg how good the models were.

    Comment by Eli Rabett — 22 Jan 2011 @ 4:45 PM

  60. #55 Well when I look at it the GISS decadal, climate only, signal trend, matches the weather and climate decadal signal trend. Up, down, up, up, and up. In none of them do we see an opposite trend over the whole decade.

    The past decade is too short a period to actually define a trend, but there is no indication whatsoever that either the climate or weather signal (whatever you mean by that????) is in the midst of any cooling trend.

    Nothing that you and John W write (or think) makes any sense because it is grounded in the misconception that the weather or climate has cooled over the past decade. It has not.

    Comment by JiminMpls — 22 Jan 2011 @ 4:49 PM

  61. Oh, and Iso-whateveryournameis….

    ENSO is not seasonal.

    Comment by JiminMpls — 22 Jan 2011 @ 4:52 PM

  62. 61

    Not this year it isn’t. Apart from the Nino 1+2 regions near the coast, there has been no change whatsoever.

    But I guess it will eventually weaken, probably this year, or in early 2012. The historical record is awfully short though….

    That’s the problem with all this magical natural variability. The little bit#h might hang around for a decade for all I know, given that climate change will lead to more extremes and tipping points…

    lol

    Comment by Isotopious — 22 Jan 2011 @ 5:32 PM

  63. RE #53 Response: “There are very clear fingerprints of change that are only associated with changes via increasing CO2 etc”

    I agree with your science assertion, basic physics. However, the logic from the IPCC:

    “Numerous experiments have been conducted using climate models to determine the likely causes of the 20th-century climate change. These experiments indicate that models cannot reproduce the rapid warming observed in recent decades when they only take into account variations in solar output and volcanic activity. However, as shown in Figure 1, models are able to simulate the observed 20th-century changes in temperature when they include all of the most important external factors, including human influences from sources such as greenhouse gases and natural external factors.”

    So, the same logic applied to models of 1988 vintage suggests human influence is negligible. I’m sure models of the 21st century are “new and improved” and surely do a much better job. The question would be is anyone prepared to apply the logic that the IPCC uses to indict human activities to acquit if model predictions don’t hold up to the test of time? (i.e.: If the temperatures don’t fit you must acquit! LOL)

    [Response: Nothing in that quote contradicts my point at all. Read the whole of the chapter on attribution, and pay special attention to the discussion of fingerprints. That the observations match patterns (for instance, stratospheric cooling/trop warming, increasing OHC, radiative signatures) that were predicted decades ago for the impacts of increasing GHGs AND that you can't explain what is seen without including the extremely well-known effect of GHGs is, to most logical people, a strengthened argument for attribution. The simulations from 1988 - or even earlier - have proved skillful, though if you think that means they were perfect (or that they would need to be), you are somewhat confused. - gavin]

    Comment by John W — 22 Jan 2011 @ 6:09 PM

  64. “I study history, which tells me we have another 400 years or so of Global Warming”

    John W, you seem to think that this would happen in the absence of CO2 emissions. How, exactly, do you jump to this conclusion?

    “History” is a vague, all-encompassing thing. Palaeoclimate, now, actually studies past temperature and climate. Objectively, not based on anecdote. And nothing in the science suggests that you are correct. What is it about “history” that offers a different explanation?

    Comment by Didactylos — 22 Jan 2011 @ 6:36 PM

  65. RE: #63

    I think you dismiss anecdotal evidence too quickly. Remember, anecdotal evidence has started many scientific inquiries, such as plate tectonics and global warming, and it is one of the main reasons why global warming is so indisputable. I don’t think I’m jumping to conclusions, although it may be fair to characterize it as unscientific. Looking at just recent history, we have the Roman Warm Period around the 1st century, 500 years later the Dark Ages (massive crop failures and starvation), another 500 years the Medieval Warm Period, and 500 years later the Little Ice Age. Of course it’s not exactly 500 years, but just as an approximation it works out. So, given that man’s activities probably do have some influence on the magnitude and duration of the current warming period, I’m guessing we’ll probably continue to warm for another 400 years or so. So, yes, you’re right, not really scientific, more like an intuitive conclusion; it may be only another 100-200 years if man’s influence is smaller than I personally believe, or I could be just plain wrong.

    Comment by John W — 22 Jan 2011 @ 7:28 PM

  66. > Alan Millar
    > Woodfortrees

    You’re doing exactly what he warns against, you realize, fooling yourself with the tool he provided (using periods too short to be useful to detect trends in these data sets).

    Anyone new to this will do well to read the notes to see what Alan is trying to do.

    “After many requests, I finally added trend-lines (linear least-squares regression) to the graph generator. I hope this is useful, but I would also like to point out that it can be fairly dangerous…
    Depending on your preconceptions, by picking your start and end times carefully, you can now ‘prove’ that:
    Temperature is falling!…”
    http://www.woodfortrees.org/notes.php

    How long a time series of temperature data do you need to determine a trend? You need to know how much variation there is, for any collection of data, to know how many data points you need to have to test.

    For annual data, one data point per year, for global temperature, you need upwards of 20 years.

    Looking at pictures? Good way to fool yourself — or others.

    There’s a good high-school-level explanation with an exercise you can do for yourself at Robert Grumbine’s site. See
    http://www.google.com/search?q=grumbine+results+trends

    He says, among much else worth reading:

    “How to decide climate trends
    …. you need at least 15 years for your average to stabilize, 20-30 being a reasonable range…. But most tempests in blog teapots are about trends….”
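
    A minimal sketch of that point, assuming purely white interannual noise of about 0.1 deg C around a 0.02 deg C/yr trend (illustrative numbers only; real data are also autocorrelated, which makes short windows even less reliable):

        import numpy as np

        rng = np.random.default_rng(0)
        true_trend, sigma = 0.02, 0.1   # deg C/yr and deg C, illustrative

        for n_years in (10, 20, 30):
            t = np.arange(n_years)
            fits = [np.polyfit(t, true_trend * t + rng.normal(0, sigma, n_years), 1)[0]
                    for _ in range(5000)]
            lo, hi = np.percentile(fits, [5, 95])
            print(f"{n_years}-yr window: fitted trend spans {lo:.3f} to {hi:.3f} deg C/yr")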

    Comment by Hank Roberts — 22 Jan 2011 @ 7:29 PM

    Alan Millar seems to have come up with a new meme for his yearly visit to object to the model-data updates – he claims there is something called climate, and another thing called weather [I'm with him so far], then a 3rd thing, which he claims is somehow a combination of the first 2, called the temperature record. I hate to see him confuse himself further, but maybe he could clarify so as not to confuse us? Or maybe he could wait for next year’s update and use the time to rethink his groundbreaking theory.

    [tamino - CAPTCHA is a bit of a pain at times, but I bet it saves the mods time weeding out some of the more ignorant contrarians - I just clicked the recycle 50 times, only 5 of them were the iffy new style]

    Comment by flxible — 22 Jan 2011 @ 7:36 PM

  68. #

    Can’t somebody just re-run Hansen’s old model with the observed forcings to the current date? Why are we forever wed to his old Scenarios A, B and C? I want to separate the physical skill of the model from his skill in creating instructive emissions scenarios.

    Comment by carrot eater — 22 Jan 2011 @ 7:57 AM

    #######

    seconded. If I had a couple wishes granted for free CPU time it would be carrot’s suggestion and my suggestion.

    Show how well H88 would do with the observed forcings
    and show how well ModelE does.

    Comment by steven mosher — 22 Jan 2011 @ 7:40 PM

  69. Including atmosphere and oceans, how much fluctuation in heat content is there in the Earth system for a model at equilibrium? Flux in equals flux out as an average, but I’m curious to what extent and on what time scales imbalances occur. I guess I’m ultimately trying to get a sense for how much annual or decadal variations in surface temperatures are a function of atmosphere-ocean heat exchange vs. earth-space imbalances (I realize the two don’t function entirely independently).

    reCAPTCHA: A mechanism for translating illegible gibberish into legible gibberish by leveraging the internet hive mind.

    Comment by Nibi — 22 Jan 2011 @ 7:48 PM

  70. Isotope, these things don’t have to be your guess, you can consult the record and model analysis. My “guess”, living on the west coast of Canada, where the phases of ENSO have noticeable influence, is that the current LaNina will decay to a neutral condition before mid-year, as most of the models suggest. :)

    Comment by flxible — 22 Jan 2011 @ 8:07 PM

  71. By the way, Alan, if you find someone who can do the work, it is possible to test significance over shorter time periods:

    “….. by removing the influence of exogenous factors like el Nino, volcanic eruptions, and solar variation (or at least, approximations of their influence) we can reduce the noise level in temperature time series (and reduce the level of autocorrelation in the process). This enables us to estimate trend rates with greater precision. And when you do so, you find …”

    See for yourself:
    http://tamino.wordpress.com/2011/01/21/phil-jones-was-wrong/#more-3350

    Comment by Hank Roberts — 22 Jan 2011 @ 8:07 PM

  72. Looking at just recent history we have the Roman Warm Period around the 1st century, 500 years later the dark ages (massive crop failures and starvation), another 500 years the Medieval Warm Period and 500 years later the Little Ice Age.

    John W … Europe is not the world, the US is! Oh, wait, the US isn’t the world, either.

    Seriously … what is the basis for your claim that regional events recorded in history accurately record *global* temperatures? Particularly in the face of paleoclimate reconstructions to the contrary?

    Comment by dhogaza — 22 Jan 2011 @ 8:22 PM

  73. The NODC OHC data starts in 1955. Your comparison starts in 1970. Do you have a link to the Model E OHC output data in annual or monthly form starting in 1955?

    Comment by Bob Tisdale — 22 Jan 2011 @ 9:50 PM

  74. 55, Alan Millar: So let us look at the decadal trends for each of the decades since 1950 the decades when CO2 is supposed to have become a dominant forcing factor.

    So it’s back to 1950 instead of back to 1900?

    Comment by Septic Matthew — 22 Jan 2011 @ 9:57 PM

  75. > since the 1950s
    Citation needed. You appear to be off by about 200 years.

    Try here:
    http://www.ipcc.ch/publications_and_data/ar4/wg1/en/tssts-2-5.html
    “… very high confidence that the effect of human activities since 1750 has been a net positive forcing …”

    Comment by Hank Roberts — 22 Jan 2011 @ 11:24 PM

  76. Ps, this might be worth rereading:
    http://www.digitalspy.com/forums/showpost.php?s=56d70aadd1c95e30814139d9aab77758&p=47494153&postcount=2401

    Comment by Hank Roberts — 23 Jan 2011 @ 12:56 AM

  77. RE #63 response

    I’m not disputing that if it quacks like a duck and waddles like a duck it’s a duck. What I’m saying is that the duck may be a small duck. The magnitude of warming is expressed in temperature (the size of the duck). 1988 predictions may have correctly identified it as a duck (manner of warming consistent with GHG forcing) but woefully missed the size of the duck.

    Comment by John W — 23 Jan 2011 @ 1:10 AM

  78. #72
    Yes, I realize the history of Western Civilization doesn’t necessarily match the world, however, I don’t believe “my take” is contradictory to paleoclimate reconstructions per se; I just don’t have to take all the uncertainties into consideration, a benefit of not being a climatologist.
    http://www.cgd.ucar.edu/asr/asr03/ccr/ammannasr1.jpg

    Comment by John W — 23 Jan 2011 @ 2:15 AM

  79. #47 – A question, when a model is initialized far in the past (1850 has been mentioned as a starting year), how can the initialization be accurate enough given the relative scarcity of data from those times?

    Since the climate models are physical models they will converge on their version of physical reality regardless of the initial conditions. Starting conditions don’t affect the final outcome if the models are run for long enough.
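
    A minimal illustration of that convergence with a toy zero-dimensional energy balance model; every parameter value here is an assumption chosen for the sketch, not taken from any GCM:

        # Toy model: C dT/dt = F - lam * T relaxes to T_eq = F / lam from any starting state.
        F = 3.7     # W/m^2 forcing (roughly a CO2 doubling)
        lam = 1.2   # W/m^2/K feedback parameter (assumed)
        C = 8.0     # W yr m^-2 K^-1 effective heat capacity (assumed)
        dt = 0.1    # years

        for T0 in (-2.0, 0.0, 5.0):           # very different initial conditions
            T = T0
            for _ in range(int(200 / dt)):    # 200 model years
                T += dt * (F - lam * T) / C
            print(T0, round(T, 2))            # all end near F/lam, about 3.1 K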

    Comment by Dave Werth — 23 Jan 2011 @ 2:47 AM

  80. I did not read through all the posts in this thread, but it seems to me that Hansen’s scenario C is the best match for measurements. What is scenario C?

    Comment by joe — 23 Jan 2011 @ 3:36 AM

  81. John W:

    You are right about one thing. You are plain wrong.

    Anecdotal evidence didn’t start global warming theories, or plate tectonics. The only reason Wegener’s continental drift theory struggled is because he couldn’t correctly propose a mechanism. In global warming, the mechanism came first, and the observations a very late second.

    A regional warm period and the LIA gives you less than two data points. To extrapolate that to a cycle requires more than intuition, it requires a really active imagination.

    Anecdotes might be a nice starting point for ideas, but when we have scientific evidence that all fits together, then that trumps anecdote every time. And the evidence says that your cycle does not exist.

    Comment by Didactylos — 23 Jan 2011 @ 5:12 AM

  82. 71 Hank et al

    I don’t know why people on here appear to think I am trying to say that the Earth is cooling. I don’t think that.

    My point is that I don’t think the models are accurate, as there is evidence they were tuned to make an apple look like an orange.

    I do not think a period of a decade is significant, in judging what a true climate signal is.

    So I would expect a model to have an opposite signal sometimes, in those sorts of time periods.

    When the hindcast is five for five and the forecast is none for one, it may be coincidence, but it raises a flag. When you combine that with how much leeway there is in how we estimate certain forcings in the models, surely you cannot just ignore this?

    Is the decade 2001-2010 the hottest on record? Yes!

    Is that evidence that the warming trend continued during the decade? No!

    Sorry, no credible scientist would assert such a thing. I don’t know if any accredited scientists on here have said that, but I would be utterly amazed if they had.

    The only way the last decade could not have been the ‘warmest ever’ is if a cooling trend had set in over the whole decade that was equal to or greater than the previous decades’ warming trends.

    I don’t know any scientist who has alleged that. The last decade is no evidence for a continuing warming trend, no evidence for a cooling trend, just evidence for a pause.

    Now you might not want to agree with the latter, but it is a fact. Not that the fact will necessarily turn out to be true; that will emerge later.

    Also, the record cannot stay paused too long, as we know the Earth’s climate is always on the move. So we shall see.

    Alan

    Comment by Alan Millar — 23 Jan 2011 @ 6:50 AM

  83. John W says:

    So, the same logic applied to models of 1988 vintage suggests human influence is negligible.

    You seem to be under the misapprehension that Scenario C somehow corresponds to negligible human influence. That could not be further from the truth. Yes, Scenario C imagines that we start to constrain our emissions but CO2 levels in the atmosphere continue to increase by 1.5 ppm per year until year 2000 and it is only after year 2000 that CO2 levels in the atmosphere cease to increase (remaining at 368ppm). Even after that, temperatures would continue to rise as we adjust to the CO2 already in the atmosphere. (See here for details of each scenario: http://pubs.giss.nasa.gov/docs/1988/1988_Hansen_etal.pdf )

    [It is also interesting to note that the stabilized level of CH4 assumed in Scenario C is 1916 ppb whereas the actual level has stabilized just under 1800 ppb.]
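
    A minimal sketch of the CO2 path that description implies, next to a rough present-day value; the 1988 start year and the ~390 ppm figure for 2010 are assumptions of the sketch, not from the comment:

        import numpy as np

        years = np.arange(1988, 2011)
        # Scenario C as described above: +1.5 ppm/yr until 2000, then held at 368 ppm
        co2_c = np.where(years <= 2000, 368.0 - 1.5 * (2000 - years), 368.0)
        co2_obs_2010 = 390.0            # ppm, approximate observed value for 2010 (assumed)
        print(co2_c[-1], co2_obs_2010)  # Scenario C sits well below what actually happened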

    Comment by Joel Shore — 23 Jan 2011 @ 10:59 AM

  84. Alan Millar:

    I don’t know why people on here appear to think I am trying to say that the Earth is cooling. I don’t think that.

    I don’t think that.

    My point is that I don’t think the models are accurate, as there is evidence they were tuned to make an apple look like an orange.

    What you are saying is that there’s evidence of scientific misconduct as the models are tuned to mislead.

    That’s a very serious charge that won’t be taken seriously by knowledgeable people.

    Comment by dhogaza — 23 Jan 2011 @ 11:09 AM

    Alan Millar: you are no longer making any sense, so I can’t address the substance of your recent comments. However, you really should be aware that your claims equate to an accusation of serious fraud.

    Do you actually understand what you are saying? You are claiming that climate scientists have routinely lied in scientific papers, kept all of this secret in some kind of giant conspiracy, and misled not just the public but all the scientific institutions in the world, as well as governments around the globe.

    Is that what you intended to say?

    Comment by Didactylos — 23 Jan 2011 @ 12:08 PM

  86. I’m very confused about all those PDO discussions; is it really contributing to the SST slowdown or not? It would be useful if some expert wrote an article about it.

    Comment by marct — 23 Jan 2011 @ 12:25 PM

  87. John W.

    Let’s throw out everything we know, and pretend that you are right. Let’s pretend there is a 500 year cycle over the last 2 millennia. Year 1, high, 501, low, 1001, high, 1501, low, 2001, high.

    Instantly we run into a problem. According to your theory, we should be at a high now, and about to cool. But you predict 400 years more warming! That’s a massive contradiction.

    But we soldier on. Let’s pretend you never said that, and that you are claiming we peaked about now.

    Firstly, if we are at a peak, then there should be no trend either way. We should be stable, at the high point. But that’s not what we are seeing, we are observing a highly significant, very rapid warming trend.

    Uh oh! Wrong again. Let’s forget this pesky cycle thing and pretend you are arguing that because the temperature has cycled in the recent past, it might in the future.

    Let’s look at the peak to valley temperature change – let’s say 1 whole degree between the mediaeval climate anomaly and the LIA (and that’s being charitable, it is actually considerably less). 1 degree over 500 years is…. 0.02 degrees per decade.

    We are seeing almost ten times that rate of warming at the present. That makes any contribution from any imagined cycle absolutely negligible. So, wrong again!
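
    For anyone who wants to check that arithmetic, a minimal sketch (the ~0.17 K/decade modern rate is an assumption, roughly the recent observed trend; the 1 K over 500 years figures are the charitable ones above):

        # Back-of-envelope comparison of the putative cycle with the modern warming rate
        swing_K = 1.0             # assumed peak-to-valley change, mediaeval anomaly to LIA (charitable)
        span_yr = 500.0           # assumed time taken for that change
        cycle_rate = swing_K / span_yr * 10.0      # K per decade -> 0.02
        modern_rate = 0.17                         # K per decade, rough recent observed trend (assumption)
        print(cycle_rate, round(modern_rate / cycle_rate, 1))   # 0.02, and a ratio of ~8.5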

    Okay, enough wrongness. How about we return to the real world and evaluate our initial supposition?

    I’m currently looking at various global temperature reconstructions over the last 2 millennia. I see no peak in the first century. I see no trough in the dark ages. I see marginally higher temperatures from 800-1200. I see the LIA clearly, although reconstructions differ as to the magnitude of the event.

    I see a massive spike representing the present day.

    History teaches us one thing clearly: current global temperatures are unprecedented in recorded history.

    Comment by Didactylos — 23 Jan 2011 @ 12:34 PM

  88. Alan Millar, #82, Have you been listening to my brother, Bloke from the Pub? He has all manner of theories if you’ll buy him a beer, and you can use them to fill your mind with oxymorons like “decadal trends”, and that will help you ignore the science some more. Or you can drop the hubris and try to learn something. Your call.

    Comment by One Anonymous Bloke — 23 Jan 2011 @ 1:03 PM

  89. > I would expect a model to have an opposite signal,
    > some times, in those sort of similar time periods.

    Wrong. You can’t _get_ a signal out of annual global temperatures in a decade. Any decade.

    You can imagine one by eyeballing lines. People do this all the time with even shorter time spans. It’s a hole many people fall into often.

    You are not alone; you’ve fallen in with a bad crowd that is easily misled.

    You misuse the line-drawing tool at woodfortrees in exactly the way he warns against. You claim you found “a fact” — you found what you expected and misinterpret it, repeatedly.

    Clue: use 20-year periods; see what you can say about it.
    Clue: http://scienceblogs.com/stoat/2007/05/the_significance_of_5_year_tre.php

    Comment by Hank Roberts — 23 Jan 2011 @ 2:05 PM

  90. 85 Didactylos

    “However, you really should be aware that your claims equate to an accusation of serious fraud.

    Do you actually understand what you are saying? You are claiming that climate scientists have routinely lied in scientific papers, kept all of this secret in some kind of giant conspiracy, and misled not just the public but all the scientific institutions in the world, as well as governments around the globe.”

    Don’t make the most ludicrous assertions which have no relation whatsoever to my words. It kind of throws doubt on your judgement and objectivity.

    It’s not my fault that you don’t like the data I have presented. Take it up with the data.

    Just to repeat, so that you may reflect.

    I have said that if a model is set up to match a certain signal (not just climate), yet matches a signal that contains an additional cyclical factor which can change that signal significantly over the short and medium term, then you would not expect it to show great accuracy over the short to medium term.

    That is such an obvious fact that there can’t be a serious argument.

    Concerning the GCMs, it is a fact that they contain forcing factors whose calculated values and effects over time are in no way agreed or settled. Black Carbon and Land Use are just a couple of examples.

    Now the modeller has to decide the value himself; he is not going to find an agreed value.

    I am sure you have heard of confirmation bias. When you look at data and variables, there is a bias towards those values that help your hypothesis. That is a well observed phenomenon in science, whether you like it or not. That is not fraud, that is human nature!

    [Response: Do you think that scientists are not well aware of the possibility of confirmation bias or have no sense of "human nature" as you call it? Possibilities don't prove, or even imply, actualities.--Jim]

    I just gave you data that shows that, on the backcast, the models, surprisingly, hit all the decadal trends since 1950 in a signal they were supposedly not tuned to match.

    On the forecast it misses the decadal trend. Not an issue in itself; it can be expected to miss occasionally for the reasons stated. The surprising thing is that it didn’t miss any on the backcast.

    Might be coincidence. Is confirmation bias involved? Well you couldn’t entirely rule it out just looking at the data.

    No fraud or conspiracy needed.

    Alan

    Comment by Alan Millar — 23 Jan 2011 @ 2:26 PM

  91. Alan Millar (#55),

    Now we know what temperature record you’re looking at (Hadcrut, variance-unadjusted). For reference, where is the GISS model hindcast from 1950 onwards you’re comparing this with?

    Comment by CM — 23 Jan 2011 @ 3:02 PM

  92. Re: #90 (Alan Millar)

    On the forecast it misses the decadal trend.

    I already asked why you base your claim about the “decadal trend” on the *only* data set (out of 5) that shows a negative trend, and when *none* of the trends is significant.

    Now you say the forecast “misses the decadal trend.” Have you computed the uncertainty level in your estimate of the “decadal trend”? Do you even know how?

    Since you don’t seem to know how meaningless “decadal trends” are, since you use the only data set that gives you what you want and ignore the others, and since you act as though there’s no uncertainty in your “trend” estimate, your level of certainty amounts to nothing more than hubris. I suggest you are an example of the “Dunning-Kruger” effect.
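
    For anyone wondering what that calculation looks like, a minimal sketch using synthetic numbers (not any particular data set): an ordinary least-squares trend over ten annual values, with its standard error. Even before any autocorrelation correction, the uncertainty is typically as large as or larger than the trend itself.

        import numpy as np

        # Synthetic decade of annual anomalies: 0.02 K/yr trend plus 0.1 K of noise (assumed values)
        rng = np.random.default_rng(0)
        years = np.arange(2001, 2011)
        anoms = 0.02 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

        # OLS slope and its standard error
        x = years - years.mean()
        slope = np.sum(x * (anoms - anoms.mean())) / np.sum(x**2)
        resid = anoms - (anoms.mean() + slope * x)
        se = np.sqrt(np.sum(resid**2) / (years.size - 2) / np.sum(x**2))
        print(f"trend = {slope:.3f} +/- {2 * se:.3f} K/yr (2-sigma, no autocorrelation correction)")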

    Comment by tamino — 23 Jan 2011 @ 4:23 PM

  93. Alan Millar:

    Why do you not understand the implications of what you say?

    All climate models explain in the literature exactly how they were constructed and used. It seems you have never read any of these papers, but despite that, you claim (sorry – “insinuate”) the scientists have tweaked things to suit themselves.

    That is an accusation of serious fraud.

    Now, can you grasp that?

    Or are you going to continue repeating yourself instead of engaging your brain?

    It’s bad enough you keep saying all this nonsense, but it really isn’t acceptable that you don’t actually understand what you yourself are saying.

    And the “data” you have presented? For laughing! You have just made vague and nonsensical claims of fraud, without understanding anything you have said. No data. You also seem to be ignoring the fact that the modelled 10 year trends suffer from the same thing any 10 year trend does – huge error bars. So no, your ramblings about “decadal trends” are not data. You aren’t even discussing decadal trends, which makes me wonder whether you even understand the term. You haven’t shown any understanding of anything else, so my money says you don’t. And just as a final insult, you cherry-picked not only the temperature product, but the interval and the starting points. We’re not morons, so we noticed. Bad luck. This means that you no longer have the benefit of an assumption of good faith.

    Oh, I’ve no patience with your ignorance. Go away and annoy someone else.

    Comment by Didactylos — 23 Jan 2011 @ 4:47 PM

  94. Alan Millar says of aerosols and land use: “Now the modeller has to decide the value himself, he is not going to find an agreed value.”

    No, Alan, the modeler will not decide on a “value” at all, but will instead model the processes using the best possible physics available. There will be a degree of subjectivity in selecting which physics is “best”, but once selected, that’s it for the model. The fact that some models perform better than others may argue that they have chosen better representations. However, the fact that pretty much all the models agree pretty well on the trend, regardless of which representations they have of the phenomena you posit, suggests that the processes you identify are only of secondary importance.

    Alan, if you put in physics that represents reality poorly, you cannot compensate for that by putting in other poor representations. The model would produce garbage. Maybe you ought to investigate how these models actually work before pontificating with such certainty. Or not. Those of us who have looked do appreciate comic relief.

    Comment by Ray Ladbury — 23 Jan 2011 @ 5:01 PM

  95. 92 Didactylos

    [edit. ok, that's it. take it elsewhere]

    Comment by Alan Millar — 23 Jan 2011 @ 5:15 PM

  96. Alan Millar — Here is an ultrasimple decadal model. Study it.
    http://www.realclimate.org/index.php/archives/2010/10/unforced-variations-3-2/comment-page-5/#comment-189329

    Comment by David B. Benson — 23 Jan 2011 @ 6:06 PM

  97. John W @ #77

    Even a duck at half the size is a pretty frightening prospect as we in Western Australian Agriculture are dealing with much lower and less predictable rainfall.

    Comment by Dale Park — 23 Jan 2011 @ 6:54 PM

  98. The only way the models can be wrong (in direction) is if they have grossly underestimated natural variability, and a global cooling trend is established in the long term observations.

    Many would think such a cooling outcome to be extremely unlikely (in the deep psyche impossible), but until the models have the ability to predict the short term variations occurring over the time interval of one year, we don’t know how well the models have estimated natural variability.

    What will the Global Temperature be in 12 measly months? Somewhere in a cloud? lol

    Comment by Isotopious — 23 Jan 2011 @ 6:55 PM

  99. Not really sure why anyone even brings up Hansen’s 1988 “predictions”. I’d be embarrassed. Scenario A assumes continued growth in CO2, pretty much in line with what has actually happened. Scenario B assumed reductions in CO2 emissions, and C assumed a major decrease in emissions starting in 2000. So the prediction has drifted far away from actual temperatures. The prediction for 2010 would be approximately twice what the actual anomaly has been. Pretty darn far off, so please stop bringing it up. It is a good example of a model proven wrong.

    [Response: Actually this comment is a good example of definitive statement that is completely wrong. How can you state something so confidently without doing the least bit of checking? (All the correct answers can be found in the linked post and data). - gavin]

    Comment by George — 23 Jan 2011 @ 7:01 PM

  100. Alan,

    Enough people have already pointed out issues with your thinking about ‘decadal trends’ and the data interpretation, so I won’t pile on, but I think it’s still worth saying a few words about the actual implications of model-obs agreement.

    You interpret very good model-observation agreement in hindcasts as evidence for model tuning, and take the exceptional agreement to mean they must be wrong (post #31). It’s important to understand why models can produce such good agreement in light of the large uncertainties in radiative forcing and climate sensitivity, and how all the models which produce similar 20th century trends can disagree by a factor of two or three in 21st century temperature projections. Indeed, models do simulate similar warming for different reasons, as discussed in, e.g., Knutti (2008). There are reasons why the AR4 runs did not span the whole possible space of aerosol forcings & sensitivity (e.g., Kiehl, 2007, GRL) and thus do not sample the full range of uncertainty. Inclusion of calculated indirect effects from aerosols, for instance, or of unknown/un-included forcings, if they are significant, may lead to more model-obs disagreements. It’s also certainly plausible that model development choices are made with knowledge of the current climate state and the observed trends in mind, even if done unconsciously.

    However, the agreement provides a useful constraint on the models’ parameter space, so this at least gives us a consistent explanation of the response to the applied perturbation. Note that the consistency between modeled and observed temperature trends is not an attribution, and is not taken to be by experts in the attribution field. But models are not tuned to the trends in surface temperature, and as Gavin noted before (at least for the GISS model), the aerosol amounts are derived from simulations using emissions data and direct effects determined by changes in concentrations.

    In weather forecasting, models assimilate information to constrain the present state in order to allow for better predictive capacity. Similarly, it can be useful to benchmark climate models against the observed record to establish some sort of reasonable initial state for future predictions. The confidence in model performance comes primarily from the fact that they are based on the fundamental laws of physics (conservation of mass, momentum, etc.) and have also now reached an exceptional level of maturity which allows them to simulate the mean state and variability in various climate variables rather well. For global temperature anomalies, we are doing pretty well. For other variables (precipitation, sea ice loss), other statistics (e.g., the mean state, extremes), or smaller spatial scales, model performance varies, and the degree to which model tuning can be accomplished while still maintaining a reasonable climatology and consistency with observations is limited.

    Finally, the same uncertainties which plague modeling of the observational record may be less important in the future. For example, I think it’s fair to say that GHGs and aerosols are of comparable importance in the 20th century (although clearly positive forcings have won out, which itself is a constraint on aerosol effects), whereas GHGs should be much more significant compared to aerosols in the coming century.

    Comment by Chris Colose — 23 Jan 2011 @ 10:44 PM

  101. #98–”. . . until the models have the ability to predict the short term variations occurring over the time interval of one year, we don’t know how well the models have estimated natural variability.”

    Nonsense. These are quite separate problems.

    Comment by Kevin McKinney — 23 Jan 2011 @ 11:31 PM

  102. Isotopious:

    Many would think such a cooling outcome to be extremely unlikely (in the deep psyche impossible), but until the models have the ability to predict the short term variations occurring over the time interval of one year, we don’t know how well the models have estimated natural variability.

    Bull. That’s equivalent to saying that just because we can’t predict whether or not it will rain on July 4th 2011 in Portland, Oregon, we can’t predict that July and August will be warmer and drier than February.

    We can model natural variability in summer weather without being able to predict exactly where summer of 2011 will fall within that range.

    You’re just wrong.

    Comment by dhogaza — 23 Jan 2011 @ 11:33 PM

  103. “Bull. That’s equivalent to saying just because we can’t predict the whether or not it will rain on July 4th 2011 in Portland Oregon, that we can’t predict that july and august will be warmer and drier than february.”

    I see what you mean; however, we could be more specific and ask if we can predict whether next January will be above or below average temperature. In this case we can’t predict the result; the science is not good enough. The physics has not been established, unlike in your example.

    So have a guess, just the good ol’ above/ below, yes/ no, 1/ 0, will suffice. No need for some ancy-fancy decimal point value…I don’t want the whole world, etc…

    Comment by Isotopious — 24 Jan 2011 @ 2:02 AM

  104. Is there any release date available for the new HadISST products? I have an issue which I believe will be fixed by the Thompson correction. I will probably delay my manuscript submission slightly if I can get my hands on the new version of HadISST soon.

    Comment by DrGroan — 24 Jan 2011 @ 3:42 AM

  105. Isotopious@98 demonstrates a deep misunderstanding of climate. The reason why CO2 trumps natural variability IN THE LONG RUN is not because it is, at present, much larger than energy fluctuations due to natural variability, but because its sign is consistent. It is the same reason why gravity, despite being the weakest of forces, trumps all others at the level of cosmology.

    And his contention that we cannot be confident in the models until they can predict on yearly timescales is utter BS. I do not know a fund manager who will predict with confidence how his fund will do on a yearly timescale, and yet they wager billions on decadal timescales. Folks, come on, think about the dynamics of the system before you post this crap!

    Comment by Ray Ladbury — 24 Jan 2011 @ 5:55 AM

  106. TimTheToolMan asks: Regarding model output for OHC, Gavin writes: “As before, I don’t have the post-2003 model output”
    Why not? I don’t understand. Are you saying the models never output any OHC predictions past 2003?

    Gavin responds : [Response: No. It is a diagnostic that would need to be calculated, and I haven't done it. That's all. - gavin]

    Now I’m even more confused. How is it that arguably the most important aspect of AGW (ie the Ocean Heat Content) has not been calculated from the model output past 2003?

    [Response: It has. But I didn't do it, and I don't have the answers sitting on my hard drive ready to be put on a figure for a blog post. And if I don't have it handy, then I would have to do it all myself (no time), or get someone else to do it (they have more important things to do). If someone who has done it, wants to pass it along, or if someone wants to do the calculation, I'd be happy to update the figure. - gavin]

    Comment by TimTheToolMan — 24 Jan 2011 @ 7:14 AM

  107. How much global warming has occurred in “polar regions” and how many temperature stations record this warming?

    Comment by steve — 24 Jan 2011 @ 7:23 AM

  108. Ray Ladbury,

    “The reason why CO2 trumps natural variability IN THE LONG RUN is not because it is, at present, much larger than energy fluctuations due to natural variability, but because its sign is consistent.”

    But doesn’t ~3.3 K sensitivity indicate that CO2 does trump natural variability in the short run?

    Comment by captdallas2 — 24 Jan 2011 @ 8:49 AM

  109. Capt. Dallas, No. That’s 3.3 K per doubling. CO2 doesn’t double overnight.
    [comment was moved to Bore Hole -moderator]

    Comment by Ray Ladbury — 24 Jan 2011 @ 9:18 AM

  110. “…The simulations from 1988 – or even earlier – have proved skillful, though if you think that means they were perfect (or that they would need to be), you are somewhat confused. – gavin]”

    I guess I am somewhat confused.

    When you say a model is skillful, you must answer the question: skillful as compared to what? Skill requires that the model performs better than a BASELINE model. One baseline model is a simple linear trend from the start of the century. This model performs as well as or better than Hansen’s prediction. It takes less than 1 second to run, uses no physics, and is more accurate. (Admittedly there are many possible different simplified models.)

    [Response: At the time (1988), there were no suggestions that climate should be following a linear trend (though if you know of some prediction along those lines from the 1980s, please let me know - the earliest I can find is from 1992, and the prediction was for 0.1 degC/dec). Instead, there were plentiful predictions of no change in mean climate, and indeed, persistence is a very standard naive baseline. Hansen's model was very skillful compared to that. To argue that a specific linear trend should be used as a naive baseline is fine - except that you have to show that your linear trend up to 1984 was a reasonable thing to do - and should you use a 10yr, 20yr, 30yr etc. period? How well did that recipe validate in previous periods (because if it didn't, it wouldn't be a sensible forecast). Post-hoc trolling for a specific start point and metric now that you know what has actually occurred is not convincing. This was explored in some detail in Hargreaves (2010). - gavin]

    To prove skillful for the point you are trying to make (CO2 is a climate driver), the model must perform statistically better than a model that doesn’t use this large CO2 forcing. Hansen’s scenario C, which assumes a significant reduction in CO2, matches what really happened to temperatures even though CO2 increased business as usual.

    [Response: No it didn't. The different scenarios have net radiative forcing in 2010 (with respect to 1984) of 1.6 W/m2, 1.2 W/m2 and 0.5 W/m2 - compared to ~1.1 W/m2 in the observed forcing since then. The test of the model is whether, given the observed changes in forcing, it produces a skillful prediction using the scenario most closely related to the observations - which is B (once you acknowledge the slight overestimate in the forcings). One could use the responses of all three scenarios relative to their specific forcings to make an estimate of what the model would have given using the exact observed forcings, but just using scenario C - which has diverged significantly from the actual forcings - is not going to be useful. This is mainly because of the time lag to the forcings - the differences between B and C temperature trends aren't yet significant (though they will be in a few years), and in 2010 do not reflect the difference in scenario. If you are suggesting that scenario C will continue to be a better fit, I think this is highly unlikely. - gavin]

    To confused people like myself, this suggests the CO2 forcing aspect of this model is WRONG based on actual performance.

    [Response: I looked into what you could change in the model that would have done better (there is no such thing as a RIGHT/WRONG distinction - only gradations of skill), and I estimated that a model with a sensitivity of ~3 deg C/2xCO2 given the observed forcings would have had higher skill. Do you disagree with that? Since that is indeed our best guess for the sensitivity, and is also close to the mid-point of the sensitivities of the current crop of models, do you agree that this is a reasonable estimate? - gavin]

    This leads confused people like myself to not trust new models until they have proved skillful against a reasonable baseline model, which is provided at the time of the model release.

    [Response: Then you are stuck with looking at old models - which in fact did prove skillful compared to naive baselines provided at the time of release (see above). I prefer to use old models and their biases in order to update my Bayesian model for what is likely to happen. If a model with a sensitivity of 4.2 deg C/2xCO2 went a little high (given current understandings of the forcings), and a model with a sensitivity of 3 deg C/2xCO2 would have been spot on, I think that is support for a sensitivity of around 3 deg C, and that is definitely cause for concern. - gavin]

    Large positive CO2 forcing calls for accelerated warming as CO2 increases.
    I don’t see this signal in the data (yet). It’s not there.

    Comment by Tom Scharf — 24 Jan 2011 @ 11:00 AM

  111. Ray Ladbury,

    I am aware that ~3.3 K is for a doubling and that atmospheric CO2 increased from ~340 ppm in 1983 to ~390 ppm in 2010. My question was, would not ~3.3 K sensitivity indicate that over that short period (27 years), CO2 warming exceeded natural variation?
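
    For scale, a minimal sketch of the equilibrium warming those numbers imply, using the standard logarithmic dependence of forcing on concentration (the transient response realised by 2010 would be smaller because of ocean lag):

        import math

        S = 3.3                  # assumed equilibrium sensitivity, K per doubling of CO2
        c0, c1 = 340.0, 390.0    # approximate CO2 (ppm) in 1983 and 2010

        dT_eq = S * math.log(c1 / c0) / math.log(2.0)
        print(f"equilibrium warming for 340 -> 390 ppm: {dT_eq:.2f} K")   # ~0.65 K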

    That would of course lead into a question of how over the period 1913 to 1940, though you could pick virtually any 27 year period, natural variability could create similar changes, but not so much now?

    Comment by captdallas2 — 24 Jan 2011 @ 12:34 PM

  112. Does your graph have a different baseline from the one published in IPCC 2007 ts26? http://www.ipcc.ch/graphics/ar4-wg1/jpg/ts26.jpg
    It has models and observations matching at about 2000, whereas you don’t?

    [Response: That seems to have a baseline of 1990-1999 (according to the caption), so that isn't the same as 1980-1999 used above. - gavin]

    Comment by tony — 24 Jan 2011 @ 12:43 PM

  113. I am new to the website. I have a basic (and what may seem like a trivial) question, but I am looking for a pointer to a place on the website that tells about the conventions used to consolidate multiple observations/measurements at different points on the surface of the earth (e.g., different oceans) at different times (i.e., the seasons and such). Any pointers to that spot would be kindly appreciated.

    Comment by Stan Khury — 24 Jan 2011 @ 1:52 PM

  114. I think Alan Millar thinks that http://www.woodfortrees.org/plot/gistemp/mean:12 is GISS model output, not data.

    He says “Well when I look at it the GISS decadal, climate only, signal trend, matches the weather and climate decadal signal trend. Up, down, up, up, and up. In none of them do we see an opposite trend over the whole decade.”

    - Which is clearly not the case with GISS Model E – go to http://data.giss.nasa.gov/modelE/transient/Rc_jt.1.01.html and hit the “show plot” button.

    Confusion over the difference between models and data aside, when we look at 30 year or longer CLIMATE trends in the data, rather than decadal WEATHER trends, what we see is http://www.woodfortrees.org/plot/hadcrut3vgl/mean:12/plot/hadcrut3vgl/from:1890/trend/plot/hadcrut3vgl/from:1950/trend/plot/hadcrut3vgl/from:1980/trend/plot/esrl-co2/offset:-350/scale:0.01
    -Accelerating warming trends as the CO2 forcing increases.

    Comment by Brian Dodge — 24 Jan 2011 @ 2:13 PM

  115. Re: #110 (Tom Scharf)

    You say

    Large positive CO2 forcing calls for accelerated warming as CO2 increases.

    You are incorrect.

    Comment by tamino — 24 Jan 2011 @ 2:24 PM

    102, dhogaza: That’s equivalent to saying that just because we can’t predict whether or not it will rain on July 4th 2011 in Portland, Oregon, we can’t predict that July and August will be warmer and drier than February.

    We can model natural variability in summer weather without being able to predict exactly where summer of 2011 will fall within that range.

    I think that’s a misleading (though commonly used) analogy, for two reasons. First, the seasonal differences are caused by solar variation, whereas we are naturally worried in AGW discussions by CO2, H2O, and CH4 effects. Second, predictions of seasonal effects are simple extrapolations of statistical records accumulated in real time over many generations, and over many summer/winter cycles, but we do not have comparable statistical records accumulated in real time over many cycles of increasing/decreasing GHGs.

    Comment by Septic Matthew — 24 Jan 2011 @ 2:56 PM

  117. Hansen’s scenario c is clearly the most accurate of the 3 to date; is any conclusion to be drawn from this?

    The graph of ocean heat content shows a slight decline in the year 2010; this co-occurred with a decline in sea surface temperatures and a record (or near record) surface temperature. That makes it look like 2010 was characterized by a slight departure from the average net transfers of heat between ocean and surface. Is that a fair statement?

    Comment by Septic Matthew — 24 Jan 2011 @ 3:06 PM

  118. captdallas2 — 24 Jan 2011 @ 12:34 PM
    “That would of course lead into a question of how over the period 1913 to 1940, though you could pick virtually any 27 year period, natural variability could create similar changes, but not so much now?”

    perhaps because over that cherrypicked 27 year period 1913 to 1940, when CO2 was much lower and rising much slower than it is now, the natural variation in solar output was up, whereas despite the natural variation in solar output being down since 1980, temperatures and CO2 are up.
    http://www.woodfortrees.org/plot/hadcrut3vgl/mean:12/offset:-0.1/plot/hadcrut3vgl/from:1913/to:1940/trend/plot/sidc-ssn/from:1913/to:1940/trend/scale:0.01/offset:-0.4/plot/hadcrut3vgl/from:1980/trend/plot/sidc-ssn/from:1980/trend/scale:0.01/offset:-0.4
    (This uses scaled and offset Sunspot Number as a proxy for solar output, since that’s all woodfortrees has available.)

    Comment by Brian Dodge — 24 Jan 2011 @ 3:13 PM

  119. re 113
    Basically, the temperatures are converted to anomalies from a common base period, weighted based on area, then averaged; where things get complicated is in dealing with missing data and other quality control issues.
    http://pubs.giss.nasa.gov/docs/1987/1987_Hansen_Lebedeff.pdf describes how Hansen did it for land
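
    For illustration, a minimal sketch of that basic recipe (made-up grid-cell values; it ignores the hard parts, i.e. missing data and quality control):

        import numpy as np

        # Made-up example: three grid cells with this year's temperatures (deg C),
        # their long-term baseline means, and their latitudes for area weighting.
        temps    = np.array([15.2, 2.1, 27.3])
        baseline = np.array([14.8, 1.7, 27.2])
        lats_deg = np.array([45.0, 65.0, 5.0])

        anoms   = temps - baseline                # convert to anomalies from the common base period
        weights = np.cos(np.radians(lats_deg))    # weight by area (proportional to cos(latitude))
        global_anom = np.sum(weights * anoms) / np.sum(weights)
        print(f"area-weighted mean anomaly: {global_anom:.2f} K")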

    Comment by jacob l — 24 Jan 2011 @ 3:18 PM

  120. “That would of course lead into a question of how over the period 1913 to 1940, though you could pick virtually any 27 year period, natural variability could create similar changes, but not so much now?”

    Natural variability is not a magic wand that just produces warming and cooling. Natural variability itself is the result of underlying processes, some known, some not known. My recollection is that the 1913-1940 warming coincided with at least two natural warming processes: increasing solar warming, and decreasing volcanic activity. As our observational capacity improves, we should expect to have fewer and fewer instances of unexplainable climate changes. In this case, GHG warming explains the last several decades nicely, but changes in known natural processes do not. Therefore, in order to come up with an alternative explanation, one has to simultaneously show why GHGs are not causing the warming they would be expected to based on physical principles, and at the same time come up with a natural source of temperature change that can match the magnitude and patterns of the observed change. Good luck…

    -M

    Comment by M — 24 Jan 2011 @ 3:23 PM

  121. SM, if you’d read the prior discussion, you wouldn’t be repeating the same question based on the same misunderstanding already answered over and over.
    Seriously, unless you’re trying intentionally to repeat the talking point that the old Scenario C is accurately describing current events (despite different assumptions and different facts) — you could avoid making that mistake.

    Have a look at the answers already given to that question. They might help.

    Comment by Hank Roberts — 24 Jan 2011 @ 3:27 PM

  122. 117 septic Mathew,

    The temperature may appear to be closer to scenario C, but that has nothing to do with the accuracy of scenario C. Each scenario predicts a response based on action or inaction to curb CO2 output. The business-as-usual scenario is pretty much the only one we should be looking at, unless thinking about doing something counts as doing something.

    In another three to five years we may be able to coax out a trend without being accused of cherry picking. Then some of the questions about natural variability may be answered.

    Comment by captdallas2 — 24 Jan 2011 @ 4:08 PM

  123. M says,

    “Natural variability is not a magic wand that just produces warming and cooling. Natural variability itself is the result of underlying processes, some known, some not known. My recollection is that the 1913-1940 warming coincided with at least two natural warming processes: increasing solar warming, and decreasing volcanic activity.”

    Solar increase has been pretty much ruled out as being significant during that period. Aerosols, natural and man made are interesting. The unknowns are more interesting though.

    Comment by captdallas2 — 24 Jan 2011 @ 4:37 PM

  124. Brian Dodge,

    I guess you could call it cherry picking since I picked it because it had a similar slope without as much CO2 increase. Kinda the point.

    BTW, there are more recent solar studies than Lean 1998, 2000 and 2005. A 1 Watt/meter-squared TOA variation in TSI during a ~11 year solar cycle “may” contribute 0.1 degree of temperature variation. I think Dr. Lean herself said that not too long ago.

    Comment by captdallas2 — 24 Jan 2011 @ 4:48 PM

  125. captdallas2 @111 — Please take the time to study the ultrasimple model in
    http://www.realclimate.org/index.php/archives/2010/10/unforced-variations-3-2/comment-page-5/#comment-189329

    Comment by David B. Benson — 24 Jan 2011 @ 5:07 PM

  126. For Stan Khury, here’s one way to go about answering questions like yours. I took your question and pasted it into Google, and from among the first page of hits, here for example is one that may be helpful to get an idea of how scientists work to make observations taken in many places at many times useful. Just an example, you’ll find much more out there.

    Geodetic Observations and Global Reference Frame …
    http://www.nbmg.unr.edu/staff/pdfs/Blewitt_Chapter09.pdf

    “… Geodetic observations are necessary to characterize highly accurate spatial and temporal changes of the Earth system that relate to sea-level changes. Quantifying the long-term change in sea-level imposes stringent observation requirements that can only be addressed within the context of a stable, global reference system. This is absolutely necessary in order to meaningfully compare, with sub-millimeter accuracy, sea-level measurements today to measurements decades later. Geodetic observations can provide the basis for a global reference frame with sufficient accuracy. Significantly, this reference frame can be extended to all regional and local studies in order to link multidisciplinary observations ….”

    Take any set of observations and you can find similar information.

    Here’s a bit about the CO2 record, for example: http://www.esrl.noaa.gov/gmd/ccgg/trends/ They explain how the seasonal variations are handled on that page to produce the annual trend. You can find much more.

    Comment by Hank Roberts — 24 Jan 2011 @ 5:21 PM

  127. 121, Hank Roberts, if the question was answered, I missed the answer.

    Comment by Septic Matthew — 24 Jan 2011 @ 6:45 PM

  128. > SM
    > Scenario C
    For example, click: http://www.realclimate.org/index.php/archives/2011/01/2010-updates-to-model-data-comparisons/comment-page-3/#comment-198490
    and earlier; find “scenario c” in this thread, and at rc globally.
    Don’t miss the inline responses.

    Comment by Hank Roberts — 24 Jan 2011 @ 7:02 PM

  129. septic Matthew – follow Gavin’s response in 110 carefully.

    Comment by Phil Scadden — 24 Jan 2011 @ 7:17 PM

  130. 121 David B. Benson,

    Read it. 1 in 4×10^40 means there is a snowball’s chance in hell that CO2 increase won’t lead to warming. I agree. I just tend to agree with Arrhenius’ final stab at sensitivity, and with Manabe. Arrhenius’ first shot was around 5.5, then he adjusted downward to 1.6 (2.1 with water vapor). Manabe kinda gets to shoot for his average of ~2.2. The Charney compromise of 1979 (1.5 – 3.0 – 4.5) never impressed me despite the rigorous mathematics.

    Anyway, the point I was making is that older solar variation estimates are overused (by both camps, er, tribes) and that natural variation could quite possibly be underestimated. There is more to climate oscillations than ENSO and the AMO, which may be part of a tri-pole (Tsonis et al. https://pantherfile.uwm.edu/aatsonis/www/JKLI-1907.pdf )

    Cherry picked though it may be, 1903 to 1940 is an interesting period.

    Comment by captdallas2 — 24 Jan 2011 @ 7:34 PM

    capt. dallas, Gee, and here I thought we ought to be going with what the evidence says, which is ~3 degrees per doubling, with 90% confidence between 2 and 4.5. We know more than did Arrhenius or Manabe.

    Comment by Ray Ladbury — 24 Jan 2011 @ 8:50 PM

  132. Okay … read your reply, but it still looks like Scenario C is the closest comparison to GISTemp and HadCRUT. Also, Lyman (2010) looks way out of line with other measurements.

    What would Scenario C tell you about CO2 sensitivity for doubling?

    Comment by Doug Proctor — 24 Jan 2011 @ 9:17 PM

  133. captdallas2 @130 — To become more impressed by the estimate of about 3 K for Charney equilibrium climate sensitivity, read papers by Annan & Hargreaves.

    I also have a zero-dimensional two-reservoir model using annualized data. The only sources of internal variability included are ENSO and the AMO. Looking at the autocorrelations, there does not appear to be anything left to explain except as random noise.

    Comment by David B. Benson — 24 Jan 2011 @ 9:20 PM

  134. “The temperature may appear to be closer to scenario C but that has nothing to do with the accuracy of scenario C”

    I’m missing something I REALLY would appreciate being corrected about. I understood that the point of graphing things together, like the measured temp and the Scenarios, was that the one Scenario that looked most like the measurement was the most likely Scenario to go with. Why is “B” better than “C”, when “C” looks most like GISTemp and HadCRUT?

    Seriously, I’d like to know. I have no idea how to interpret such comparisons otherwise and will always make the same mistake.

    Comment by Doug Proctor — 24 Jan 2011 @ 9:26 PM

  135. 129, Phil Scaddon

    thank you. I had missed it. It seems to suggest that the better fit of scenario 3 to the data might be meaningful should it persist.

    Comment by Septic Matthew — 24 Jan 2011 @ 10:13 PM

  136. Re: #134 (Doug Proctor)

    Scenarios A, B, and C are the same model, but with different forcings (different greenhouse gas emissions forecasts). Scenario B is preferred because it’s the one for which the emissions forecast is closest to what actually happened.

    But scenario B turned out to be too warm. That indicates that the model itself was probably too sensitive to climate forcings. As Gavin said, that model has a climate sensitivity of 4.2 deg.C per doubling of CO2. The best estimates of climate sensitivity (around 3 deg.C per doubling of CO2) indicate that that’s too much — in agreement with the conclusion from the model-data comparison.
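
    A very crude way to picture the size of that difference (a sketch only: the Scenario B trend value here is approximate, and a linear rescaling of the response by the sensitivity ratio is itself only a rough approximation):

        # Rescale an approximate Scenario B trend by the ratio of sensitivities
        model_sensitivity = 4.2     # K per doubling, 1988 model
        best_estimate     = 3.0     # K per doubling, current best estimate
        scenarioB_trend   = 0.26    # K/decade, assumed approximate Scenario B trend since 1984

        rescaled = scenarioB_trend * best_estimate / model_sensitivity
        print(f"rescaled trend: {rescaled:.2f} K/decade")   # ~0.19 K/decade, closer to the observations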

    Comment by tamino — 24 Jan 2011 @ 10:24 PM

  137. > It seems to suggest
    > that the better fit of scenario 3
    > to the data might be meaningful should it persist.

    What is this “it” you are relying on, SM?
    Look at what’s actually being written, not what some “it seems” “to suggest”
    Look at the assumptions at the time for each of those scenarios.
    Compare them to what people are telling you.

    Where are you finding anyone saying Scenario C has a better fit?
    Hint: fit isn’t the line drawn on the page; fit is the assumptions as well as the outcome. C has too high a climate sensitivity and a cutoff on use of fossil fuels. Reality — differs.

    Oh, don’t listen to me, listen to the scientists here.

    Look in the right sidebar for the list “… with inline responses” to keep up, and do the searches on the subject. You’ve apparently been missing the most useful information here — the scientists’ inline answers — since you haven’t read them.

    Comment by Hank Roberts — 25 Jan 2011 @ 12:06 AM

  138. Doug Proctor (#134)

    When you try to predict the climate out in the future, you have two big uncertainties. First, you have the socio-economic uncertainties (which dictate emissions and CO2 growth rates, etc.), which are mostly determined by human political choices. Secondly, you also have the actual physical uncertainties in the climate system.

    Scenarios A, B, and C were primarily about the former component. They represent possible future concentrations of the main greenhouse gases. In Scenario C, trace gas growth is substantially reduced between 1990 and 2000 so that the forcing no longer increases into the 21st century. This is not what actually happened. Scenario B was a bit more conservative about greenhouse growth rates, and it’s not what happened either, but it’s the closest one. Keep in mind also that the actual forcing is not too well known because of tropospheric aerosols. Thus, even before we talk about the actual trends in temperature, we know Scenario B is the most useful comparison point.

    Keep in mind though that actual forcing growth and Scenario B growth are not completely equivalent. Also keep in mind that the climate sensitivity (this is the physical aspect now) is a bit high in the 1988 model paper, so you’d expect some differences between observations and models.

    Comment by Chris Colose — 25 Jan 2011 @ 12:09 AM

  139. Oh, good grief, people, is this what’s got you going on this so avidly?

    … climatedepot.com Jan. 21
    … Oops-Temperatures-have-fallen-below-Hansens-Scenario-C …

    Comment by Hank Roberts — 25 Jan 2011 @ 12:14 AM

  140. Ray said

    “capt. dallas, Gee, and here I thought we ought to be going with what the evidence says, which is ~3 degrees per doubling, with 90% confidence between 2 and 4.5. We know more than did Arrhenius or Manabe.”

    Ouch Ray, is there anything we don’t know?

    Oh! Right! That brings us back to 1913 to 1940. Solar is much less likely to have driven the rise than was expected only 5 years ago (0.1 W/m^2, Wang 2005; slightly less with Svalgaard 2007, Perminger 2010), and aerosols have the largest uncertainty (when you include cloud albedo).

    Update: Zeke posted a neat look at mid century warming over at Lucia’s.
    http://rankexploits.com/musings/2011/more-mid-20th-century-warming/#more-13706

    Northern high latitudes dominated the warming? Oscillation warm phase synchronization?

    Comment by captdallas2 — 25 Jan 2011 @ 12:40 AM

  141. Captdallas,
    Yes, there are things we do not know. There are also things we do know–like about a dozen independent lines of evidence that all point to a sensitivity of about 3 degrees per doubling. And we know that it is much easier to get an Earth-like climate with a sensitivity that is higher than 3 than it is with one that is lower.

    http://agwobserver.wordpress.com/2009/11/05/papers-on-climate-sensitivity-estimates/

    All told, there’s 5% of the probability distribution for CO2 sensitivity between 0 and 2 degrees per doubling. There’s an equal amount from 4.5 to infinity. You seem awfully willing to bet the future of humanity on a 20:1 longshot.

    Comment by Ray Ladbury — 25 Jan 2011 @ 5:47 AM

  142. From where might I obtain the OHC for the model runs in a similar format, e.g. globalised, or data that is freely available in any format?

    Nothing like it seems to be archived at Climate Explorer.

    Alex

    [Response: In CMIP3 it wasn't a requested diagnostic, and so you will need to calculate it from the ocean temperature anomalies integrated over depth. For CMIP5 it is requested and so it should be available pre-computed. - gavin]
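
    For anyone attempting that diagnostic, a minimal sketch of the integration (constant density and heat capacity assumed, and made-up layer-mean anomalies; a real calculation would use the model’s grid-cell volumes):

        import numpy as np

        RHO = 1025.0          # sea water density, kg/m^3 (assumed constant)
        CP  = 3990.0          # specific heat of sea water, J/(kg K) (assumed constant)
        OCEAN_AREA = 3.6e14   # approximate global ocean area, m^2

        # Made-up layer-mean temperature anomalies (K) and layer thicknesses (m) down to 700 m
        dT = np.array([0.15, 0.08, 0.04, 0.02])
        dz = np.array([100.0, 200.0, 200.0, 200.0])

        ohc_anom = RHO * CP * OCEAN_AREA * np.sum(dT * dz)   # Joules
        print(f"0-700m OHC anomaly: {ohc_anom:.2e} J")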

    Comment by Alexander Harvey — 25 Jan 2011 @ 7:33 AM

  143. @99: Essentially you’re saying you need to be able to predict the weather in order to be able to predict the climate. This is demonstrably false.

    You’re making a fundamental error with respect to levels of analysis: You don’t need to and often simply cannot measure a phenomenon at all levels. That you cannot measure at one level simply does NOT mean you cannot at another level. You can assert it, but you’d be wrong and the number of counterexamples to your “logic” is legion.

    For example, no (modern!) baseball coach would let the results of a single managerial decision affect making that same decision over and over again. Any single event–the results of a particular baseball play–may in fact be forever unpredictable. That does not at all mean that aggregate events (e.g., wins, climate) cannot be predicted. I’d choose Helton or Pujols (very high on base + slugging percentage) to pinch hit in a crucial situation every time over some player with a low value on that statistic. That is not to say either Pujols or Helton could strike out, hit into a double play, etc. when some rookie might have hit an HR in any particular instance. In fact just over half the time they will fail. But I’d make that same managerial decision every time if I wanted to keep my job.

    Comment by jgarland — 25 Jan 2011 @ 7:43 AM

  144. Gavin,

    Thank you for the information on CMIP3/5. Sadly I would not expect to be in a position to do the calculations even if I had the anomalies.

    I notice that the ensemble trend (1993-2002), and hence your extrapolation, amounts to (1993-2010) ~12E22 Joules/17 yrs, which with the same 85% above 750m / 15% below correction (as in Hansen,… yourself et al 2005, Earth’s Energy Imbalance: Confirmation and Implications) and a straight 750m/700m ratio correction gives about 0.55 W/m^2 global (total area 5.1E14 m^2) for the period. Is this figure in agreement with your understanding, and that Model ER was tracking that rate at the 0-700m integration level?
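
    For reference, a quick numerical check of that figure, using only the numbers quoted above:

        SECONDS_PER_YEAR = 3.156e7
        EARTH_AREA = 5.1e14             # total surface area, m^2

        heat  = 12e22                   # J accumulated 1993-2010 (figure quoted above)
        years = 17.0

        flux = heat / (years * SECONDS_PER_YEAR) / EARTH_AREA   # raw 0-700m flux equivalent
        flux /= 0.85                    # 85% above 750 m / 15% below correction
        flux *= 750.0 / 700.0           # straight 750m/700m ratio correction
        print(f"{flux:.2f} W/m^2")      # ~0.55 W/m^2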

    It seems modest compared to figures quoted elsewhere (e.g. CERES analyses) and notably does not give rise to a significant model mean vs NOAA OHC discrepancy. Which is reassuring, but a little puzzling, as I have seen figures such as a requirement for ~0.9 W/m^2 quoted, and hence a search for additional stored heat beyond what can be reasonably deduced from the unadjusted NOAA OHC data. I am not bothered by the squiggles; a year or two over or under budget is not much of an issue as far as I am concerned. I further discount the 0.04 W/m^2 (atmosphere and land and melted sea ice and land ice) mentioned in your research article as relatively minor given known uncertainties.

    Which brings me belatedly to my last query. Do you have any figures for the (below 700m / above 700m) ratios for the ensemble? I should like to know if the ensemble has a significant requirement for storage below 700m.

    Many Thanks

    Alex

    Comment by Alexander Harvey — 25 Jan 2011 @ 10:53 AM

  145. Okay: as I think I understand it, Scenario C is not to be considered because it assumed that no emissions occurred after 2000, which clearly isn’t the case. However, the tracking between Scenarios and actual temperatures is best for Scenario C. Which is to suggest: 1) the emissions have virtually no impact at this time on global temperatures, or 2) all of the impact of emissions since 2000 has been offset by natural processes that have not been modelled. Either way, the correlation of actual temperatures with Scenario C is important.

    To dismiss the Scenario C correlation as not being “useful”, when “useful” is not defined (for purposes of controls under a precautionary principle?) [edit - don't go there]

    [Response: You are not getting the point. Take a classic little trick: what is 19/95? One method might be to cancel the '9's to get 1/5 (which is the correct answer). However, this method is completely wrong and so even though (coincidentally) the method came up with the right answer, it is not 'useful' (except as a party trick). The point is that the coincidence of the wrong forcing, a slightly high sensitivity and a lucky realisation of the internal variability isn't useful in the same way. Perhaps the correct answer (temporarily), but not one that was got using a correct or useful method. How do you suppose we should use that to make predictions for future events? - gavin]

    [edit - if you want to have a conversation, don't insult the person you are conversing with]

    The comparison you have done with what Hansen said in 1988 – which is still valid, as the models have not substantially changed since then – is embarrassing in your denial. If the correlations were positive, that temperatures matched Scenario B, would you accept skeptics saying, “Sure, but really, Scenario C is more useful”, and if the ocean-heat data looked like Lyman (2010), them saying “Sure, but that’s only because deeper heat is being transferred to the surface and replaced by cooler waters, but we can’t see it”?

    [Response: Huh? The models have changed a lot - the results have not changed much. There is a difference. As for Lyman et al, that is what the OHC data look like (as far as we can tell), so I don't get your point at all. - gavin]

    Comment by Doug Proctor — 25 Jan 2011 @ 12:28 PM

  146. 110, gavin in comment: If you are suggesting that scenario C will continue to be a better fit, I think this is highly unlikely. – gavin

    137, Hank Roberts: Where are you finding anyone saying Scenario C has a better fit?

    It’s very clear from the graph that the forecast from scenario c fits the data better than the other two forecasts; that the clearly counterfactual assumptions in scenario c produce a better fit (so far) suggests that scenarios a and b are untrustworthy guides to the future.

    Much has been learned since Hansen ran those models. Would it not be more appropriate to rerun the models over the same time span that Hansen ran them, using current best estimates of parameters (such as sensitivity to CO2 doubling) and see what those predict? The difference between the a,b, and c scenarios of 1988 and the a,b, and c scenarios of 2011 would, I propose, be a measure of the importance (implications for the future) of what has been learned in the time since. At least if the model itself is sufficiently trustworthy.

    If the modeled results from scenario c (with best possible parameter estimates and counterfactual CO2 assumptions) continue for a sufficient time to be closer to actual data than the modeled results from scenarios a and b (with best possible estimates of parameters and accurate CO2 assumptions), then the model that produced the computed results will have been disconfirmed.

    I don’t think the Hansen graphs have any importance at all, except historical. They were like “baby steps”, when 22 years later the erstwhile “baby” is playing in the Super Bowl. I would much rather see the outputs, over the same epoch, of the same model with the current best estimates of all inputs that have had to be estimated or measured.

    Comment by Septic Matthew — 25 Jan 2011 @ 1:06 PM

  147. #149 – Sceptic Matthew: “I don’t think the Hansen graphs have any importance at all, except historical.”

    I think that is a little harsh, and we should not underestimate the importance of consistency in forecasting the general trends, nor the context it brings to how we interpret and present our present model results.

    For one thing, given that Hansen was using a fairly simple model and did not have the benefit of the computing power and data sets that we have now, I think his model actually did a pretty good job. More to the point, the warming trend his graph shows has not, to my knowledge, been contradicted by any subsequent climate modelling, despite our better estimates of climate sensitivity etc. This suggests to me that he was getting the basics more or less right, which in turn emphasises the point that the best models and theory we have all predict and have consistently predicted the same thing: warming, and quite a bit of it by the end of this century if we keep dumping CO2 in the atmosphere at our current rates. Put this another way: are there any models out there showing a consistent cooling trend over the next 30-50 years? One also wonders how much longer we have to keep predicting the same thing, again and again, before the message really sinks home.

    Comment by Nick O. — 25 Jan 2011 @ 2:25 PM

  148. > the forecast from scenario c fits the data

    SM, you’re confused and it’s hard to see why you cling to this.

    I predict I can boil a gallon of water using gasoline and a bellows to force air, and it will take 5 minutes.

    We boil the water but do it using kerosene and a tank of compressed air.
    It takes 5 minutes.

    Did my forecast fit the data?

    C’mon. Details matter.

    Comment by Hank Roberts — 25 Jan 2011 @ 2:55 PM

  149. Doug,

    Sure, but really, Scenario C is more useful

    No, scenario B is more useful.

    You appear to think that different scenarios have different physics. They don’t. The only difference between scenarios is the emissions (which depend on economic development and politics, both of which are outside the realm of climate models to predict). They are useful to show to the policy makers: “if we do this, then that is the expected consequence”.

    So scenario C is useless since it assumed an emissions path that was nowhere near reality. Scenario B is closest to real-world emissions over the past 2 decades.

    Comment by Anne van der Bom — 25 Jan 2011 @ 3:09 PM

  150. 148, Hank Roberts.

    147, Nick O.: I think that is a little harsh,

    The 1988 model was a good step forward. Should it really not be updated with the best current information? I hope this exercise can be repeated annually or at 5 year intervals, and include the predictions from other models for comparison (one other is presented above), such as Latif et al’s model and Tsonis et al’s model, along with the simple linear plus sinusoid. I mentioned Wald’s sequential analysis (and its descendants); I hope that there is sufficient evidence to decide that one of them is really accurate enough to base policy decisions on before more than 20 more years pass.

    One also wonders how much longer we have to keep predicting the same thing, again and again, before the message really sinks home.

    Until you have a long record of reasonably consistent prediction accuracy. As long as the predictions are closer to the prediction from scenario a than to the subsequent data record, the forecasts will remain unbelievable to most people; the more such forecasts are repeated, the more unbelievable they will become, unless the data start clearly trending more toward scenario a.

    Comment by Septic Matthew — 25 Jan 2011 @ 3:32 PM

  151. > Would it not be more appropriate to rerun the models

    Asked and answered repeatedly. You really should use the search tool.
    Shorter answer: Why bother? Got a spare supercomputer, staff, and a year available?

    [Response: Actually, this isn't the case. It is more subtle. The actual model code is still available, but getting it to run on a more modern system with different compilers requires some work. In doing that work, and fixing various bugs that were found, the model becomes something a little different (look up EdGCM). The resolution of the model then (8x10) means you could run it trivially on laptop if you wanted. - gavin]

    Comment by Hank Roberts — 25 Jan 2011 @ 3:54 PM

  152. Anyone care to comment on Flanner et al., “Radiative forcing and albedo feedback from the Northern Hemisphere cryosphere between 1979 and 2008”?

    http://www.nature.com/ngeo/journal/vaop/ncurrent/full/ngeo1062.html

    The abstract ends:

    On the basis of these observations, we conclude that the albedo feedback from the Northern Hemisphere cryosphere falls between 0.3 and 1.1 W m−2 K−1, substantially larger than comparable estimates obtained from 18 climate models.

    One of the authors, Karen Shell of Oregon State University, is quoted as saying

    The cryosphere isn’t cooling the Earth as much as it did 30 years ago, and climate model simulations do not reproduce this recent effect. Though we don’t necessarily attribute this to global warming, it is interesting to note that none of the climate models used for the 2007 International Panel on Climate Change report showed a decrease of this magnitude.
    Instead of being reflected back into the atmosphere, the energy of the sun is absorbed by the Earth, which amplifies the warming. Scientists have known for some time that there is this amplification effect, but almost all of the climate models we examined underestimated the impact – and they contained a pretty broad range of scenarios.

    http://www.indymedia.org.au/2011/01/22/albedo-feedback-climate-models-underestimate-loss-of-reflectivity-in-the-arctic

    Comment by Geoff Beacon — 25 Jan 2011 @ 4:11 PM

  153. 143

    Given that it has been shown that temperature lags ENSO by about 6 months (Jones 1989),

    and that inclusion of warm water volume data in ENSO prediction schemes significantly improves forecast skill at ~6 months (McPhaden 2003).

    A 12 month prediction is much closer than you think:

    http://www.pmel.noaa.gov/tao/elnino/wwv/gif/wwv_nino.gif

    The weather-climate linkage is the first step towards understanding natural variability.

    Comment by Isotopious — 25 Jan 2011 @ 4:47 PM
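
    A minimal sketch (in Python, with synthetic stand-in series) of how a claimed ~6-month lag like the one above could be checked: correlate a global temperature series against an ENSO index shifted by 0-12 months and look for the lag with the highest correlation. The nino34 and gmst arrays below are fabricated placeholders, not real data.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 600                                              # 50 years of monthly values
        nino34 = rng.standard_normal(n).cumsum() * 0.1       # stand-in ENSO index
        gmst = np.roll(nino34, 6) * 0.1 + rng.standard_normal(n) * 0.05  # lags by 6 months

        def lagged_corr(x, y, lag):
            """Correlation of y with x shifted earlier by `lag` months."""
            if lag == 0:
                return np.corrcoef(x, y)[0, 1]
            return np.corrcoef(x[:-lag], y[lag:])[0, 1]

        lags = range(0, 13)
        corrs = [lagged_corr(nino34, gmst, k) for k in lags]
        best = max(lags, key=lambda k: corrs[k])
        print(f"best lag = {best} months, r = {corrs[best]:.2f}")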

  154. Taking advantage of a chance to advertise from gavin’s inline to #151, I’d highly recommend that those interested in basic modeling download EdGCM. It’s very user friendly: basic runs, at least, can be accomplished with no programming skill, it runs fine on any home PC, and it takes about 24 hours to do a 150-year simulation.

    We used it heavily as part of a Global Climate Processes course at UW-Madison for later undergrad and grad students, so it has a good deal of flexibility in what you can test (though the model blows up for extreme forcings like snowball Earth, I used CO2 at about 140 ppm and couldn’t get much lower than that). You can also create your own scenarios “A”, “B”, “C” etc in a text file and use your own forcings as part of the simulated projections to see how the climate responds. Alternatively, you can create your own CO2 concentration projections based on your own emission and ocean/biosphere sink/source scenarios using this carbon cycle applet created by Galen McKinley at Madison, which can then be integrated into EdGCM.

    Comment by Chris Colose — 25 Jan 2011 @ 5:02 PM

  155. Roger Pielke Sr. has a comment on his website related to an inline response to comment #3 above.

    The Lyman et al (2010) results (illustrated in the OHC figure above), are the best estimate yet of OHC trends and the associated uncertainties in the product. Explaining every wiggle in that curve is beyond the scope of a blog post, but the points I made above in discussing the potential mismatch between the GISS model AR4 runs and the observations are trivially true – they can be related to a) OHC changes not captured by the 0-700m metric, b) continuing problems with the Argo network and older measurement network, c) internal variability of the climate system that is not synchronised with internal variability in these models or d) inaccuracies in modelled ocean surface heat fluxes. The ‘truth’ is very likely to be some combination of all of the above.

    Dr. Pielke ends his comment with a call for “independent assessments of the skill at these models at predicting climate metrics including upper ocean heat content”, which of course I have no problem with at all. Indeed, I encouraged him to pursue just such an assessment using the freely available AR4 simulation archive years ago. As I said then, if people want to see a diagnostic study done, they are best off doing it themselves rather than complaining that no-one has done it for them. That is the whole point of having publicly accessible data archives.

    Comment by gavin — 25 Jan 2011 @ 6:55 PM

  156. captdallas2 @140 — For the first half of the 20th century, volcanoes made a difference:
    http://replay.waybackmachine.org/20090220124933/http://tamino.wordpress.com/2008/10/19/volcanic-lull/

    Comment by David B. Benson — 25 Jan 2011 @ 7:42 PM

  157. 155

    I don’t need an assessment of the models at predicting climate metrics including upper ocean heat content.

    They have zero skill, and the only skill they have for El Niño is because of the lag between the two.

    Besides the historical behaviour of the phenomenon, there is nothing to say why the current La Niña will eventually decay.

    That’s a very dangerous position to be in. If it doesn’t decay the change to the climate system would be off the charts.

    [Response: And if my grandmother was a mouse, she'd eat cheese. Can we please have less of the evidence-free declaratory mode? Thanks. - gavin]

    Comment by Isotopious — 25 Jan 2011 @ 8:04 PM

  158. Ray Ladbury,

    I am not particularly willing to bet anything, long shot or not. It just appears that the useful but imperfect models, and the useful but imperfect data sets the models are working with, have more downside uncertainty (lower sensitivity) than upside. My fixation on the early 20th century warming is that it is a good period to spend more effort on, to help reduce uncertainty.

    Solar influence during that period is now estimated to be much smaller than it was estimated to be less than a decade ago, meaning that aerosols and/or natural climate oscillations played a greater-than-estimated role. The natural oscillations intrigue me because of their potential to dramatically increase precipitation when they shift, which reduces heat content in a way that may not be accurately reflected in the surface temperature record.

    With warming, those oscillations should produce even more precipitation, like more and more powerful storms. Imagine that. Increased precipitation will buffer warming somewhat. How much? Will the frequency of the climate shifts increase with warming? If they do, how strong would that feedback be on GTA?

    Comment by captdallas2 — 25 Jan 2011 @ 9:28 PM

  159. Gavin, why didn’t comment #157 by Isotopious just go straight to The Bore Hole, where it belongs?

    Comment by David B. Benson — 25 Jan 2011 @ 9:51 PM

  160. What the 1988 Scenarios were or showed is one of those zombie lies that keep coming back. Eli showed a long time ago that the 1988 Scenario B predictions were a little low until ~2000 and then a little high. Further, B and C don’t really diverge until ~2000, so that effect would only show up about now anyhow. (Hint: click on the graphic to blow it up; the actual forcings until 2006 are the blue line.)

    Comment by Eli Rabett — 25 Jan 2011 @ 10:34 PM

  161. 151, Hank Roberts: Got a spare supercomputer, staff, and a year available?

    It’s something I hope to do in 2015-2020. I hope by then to be able to purchase a 64 core computer (mine only has 8 cores). I appreciate the comments by Gavin and Chris Colose about EdGCM. I’ll probably look into that.

    I think that the divergence between scenarios a and b and the data shows that the model is defective. But it’s only one item of information.

    [Response: Huh? Scenarios A and B are different, why should the results be the same? - gavin]

    Comment by Septic Matthew — 26 Jan 2011 @ 2:15 AM

  162. I have bought the web name http://www.climatebet.com and will set up the site when I have a bet to post on it. I have been trying a few bookies to set the odds, but so far I have failed. I wonder if anyone on this site would participate.

    Earlier, at #152, I asked about Flanner et al. – no response yet. As I read the abstract, it is saying that all existing climate models underestimate an important feedback, the sea-ice albedo effect. I worry about such “missing feedbacks” and am willing to back my worries with a $1000 evens bet.

    So who will take on a bet that this year will have the lowest sea-ice extent in the satellite record, as published by the NSIDC?

    I am happy to bet this even though the climate is in a La Nina phase.

    Any takers?

    Better still, does anyone know of a bookie that will take bets like this?

    Comment by Geoff Beacon — 26 Jan 2011 @ 6:33 AM

  163. Re: 110
    When you say a model is skillful, you must answer the question: skillful as compared to what? Skillful requires that the model performs better than a BASELINE model. One baseline model is a simple linear trend from the start of the century. This model performs as well as or better than Hansen’s prediction. It takes less than 1 second to run, uses no physics, and is more accurate. (Admittedly, there are many possible different simplified models.)

    If you take a trend from 1900-1984 as the baseline forecast, the 2000-2010 forecasts from scenario B improve on the baseline RMS error by 20% (using each year as a forecast). A more appropriate baseline, in my view, would be the mean from 1900-1984, since it doesn’t include an assumption of warming. Scenario B improves on that forecast by 60%. There’s skill compared to both of those baselines.

    Comment by Harold Brooks — 26 Jan 2011 @ 8:29 AM
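
    A minimal sketch, with made-up placeholder numbers, of the baseline-skill comparison described in the comment above: compare the RMS error of a set of forecasts against that of two naive baselines (a linear trend fitted to 1900-1984, and the 1900-1984 mean). The obs and scen_b arrays are synthetic stand-ins, so the printed skill values will not reproduce the 20% and 60% figures quoted above; substitute the real observations and Scenario B values to do that.

        import numpy as np

        years = np.arange(1900, 2011)
        rng = np.random.default_rng(1)
        obs = 0.007 * (years - 1900) + rng.normal(0.0, 0.1, years.size)   # fake anomalies
        scen_b = 0.7 + 0.02 * np.arange(0, 11)       # placeholder "Scenario B" for 2000-2010

        train = years <= 1984
        test = years >= 2000

        # Baseline 1: extrapolate a linear trend fitted to 1900-1984.
        slope, intercept = np.polyfit(years[train], obs[train], 1)
        trend_forecast = slope * years[test] + intercept

        # Baseline 2: the 1900-1984 mean (assumes no warming at all).
        mean_forecast = np.full(test.sum(), obs[train].mean())

        def rmse(forecast, truth):
            return np.sqrt(np.mean((forecast - truth) ** 2))

        for name, baseline in [("1900-1984 trend", trend_forecast),
                               ("1900-1984 mean", mean_forecast)]:
            skill = 1.0 - rmse(scen_b, obs[test]) / rmse(baseline, obs[test])
            print(f"skill vs {name}: {skill:.0%}")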

  164. captdallas says: “It just appears that the useful but imperfect models and the useful but imperfect data sets the models are working with, have more downside uncertainty (lower sensitivity) than upside.”

    Huh? Where in bloody hell do you get that? Have you even looked at the data constraining sensitivity? Dude, there is zero convincing evidence for a low sensitivity. Read the papers by Annan and Hargreaves or the Nature Geo review by Knutti and Hegerl.

    If we are lucky, sensitivity is 3 degrees per doubling. If very lucky, 2.7. If we win the frigging lotto, 2. The fact remains that we cannot rule out sensitivities of 4.5 or higher. Below 2 you simply don’t get an Earth-like climate.

    Comment by Ray Ladbury — 26 Jan 2011 @ 9:29 AM

  165. Can anyone explain THIS COMMENT over at Policy Lass? (Seems relevant and I can’t think of anywhere more appropriate to ask)

    [Response: Crank central. - gavin]

    Comment by J Bowers — 26 Jan 2011 @ 9:53 AM

  166. 161, Gavin: Huh? Scenario A an B are different, why should the results be the same? – gavin

    I meant that mean temperature of the world has developed (according to the data to date) differently from forecast (or “anticipated”) by scenarios A and B. It’s almost as if the increased CO2 since 1988 has had no effect.

    [Response: Huh? (again). If CO2 had no effect, there would have been almost zero net forcing at all, and no reason for the observed trends. - gavin]

    Does everyone agree with Eli Rabett in 160? That the scenarios are zombie lies?

    To continue with my imaginary future computer: I probably will not be able to afford the electricity.

    Comment by Septic Matthew — 26 Jan 2011 @ 11:14 AM

  167. re: 110 Large positive CO2 forcings calls for accelerated warming as CO2 increases.

    Since the effect of each equal increase in CO2 is less than the one before, I don’t see how this is true, or who would claim it to be true. This is the 2nd time in recent weeks that someone has claimed that there’s a widespread assertion that the warming should be accelerating due to CO2 increases.

    All of the discussion about accelerating increases in temperature that I’ve read over the last couple of years pins the effect on feedbacks, particularly changes in the land.

    Who is asserting that there should be accelerating increases in temperatures due to increases in CO2?

    [Response: This needs to be strongly caveated. For the 20th C, the increase in GHGs gave faster-than-linear forcing, and so a larger trend is expected for the last 50 years than for the first 50 years (as is observed). Thus in that sense, there has been an acceleration. Moving into the 21st century, the IPCC figure on the temperature response to the scenarios shows that eventually we also expect an increase in trend above what we are seeing now. However, there are no results that imply that we should be able to detect an acceleration in say the last ten years compared to the previous ten years. Many of the claims being made are very unspecific about what is being claimed, and so one needs to be careful. - gavin]

    Comment by Jeffrey Davis — 26 Jan 2011 @ 11:25 AM
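
    A rough worked example of the faster-than-linear forcing growth noted in the inline response above, using the common simplified expression F = 5.35 ln(C/C0) for CO2 alone and approximate concentrations; other forcings are ignored, so this only illustrates the shape of the argument.

        # CO2-only forcing via the standard approximation F = 5.35 * ln(C/C0)
        # (Myhre et al. 1998), with approximate concentrations in ppm.
        import math

        co2 = {1900: 296.0, 1950: 311.0, 2000: 369.0}   # approximate values

        def forcing(c, c0=co2[1900]):
            return 5.35 * math.log(c / c0)

        first_half = forcing(co2[1950]) - forcing(co2[1900])
        second_half = forcing(co2[2000]) - forcing(co2[1950])
        print(f"1900-1950: {first_half:.2f} W/m^2, 1950-2000: {second_half:.2f} W/m^2")
        # ~0.26 W/m^2 vs ~0.91 W/m^2: the forcing increase, and hence the expected
        # trend, is larger in the second half of the century than in the first.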

  168. Septic Matthew:

    Why are you playing games?

    Eli’s position is clear. On the 1988 paper: “The result was a pretty good prediction. Definitely in the class of useful models.”

    The “zombie lies” are the perennial falsehoods that you and others dump on us from time to time, attempting to rewrite history by confusing the scenarios or just outright lying about what they are or which is closest to reality.

    As for your plans to recreate a climate model…. I seriously doubt your ability to do that. However, I suspect you do actually understand more about models than you pretend to.

    Comment by Didactylos — 26 Jan 2011 @ 11:41 AM

  169. Septic Matthew – if increased atmospheric CO2 since 1988 were having no effect, given all the talk of natural cooling phases, wouldn’t the global mean temperature be on a distinct downward tack?

    Comment by JCH — 26 Jan 2011 @ 11:43 AM

  170. Apologies, Gavin. Rudeness is not helpful.
    As parameters are changed, the models of 1988 can’t be compared to the results of 2010, I guess. Perhaps one could say that the mix of errors in 1988 Scenario C “looks” like it matches the history, suggesting that “up” errors of 1988 were compensated for by “down” errors as things turned out. And that right now Lyman 2010 might be observing something that will show up later when the time-lag is considered.

    The goal of graphs is to reduce a complex relationship to a simple image that doesn’t distort the general connection. If Scenario C is not to be viewed “as is”, then the non cognoscenti would wonder about A and B also. And if the visual divergence between Lyman and the others is one of apples and oranges, then the presentation is misleading or disinformation.

    What we really want is to see some simple comparison of temperature data since 1988 with projections circa 1988 beyond 2010 and hindcasts from 1988 with current (2010) understanding beyond 2010. Then we can see if prior warnings were excessive or underrated and see how possible troubles have changed over the years. Then, 22 years later, we’ll get a better feel as to how much confidence we, as non-statisticians and modellers, can place in forecasts from Hansen, the IPCC et al.

    Again, I apologize for the rudeness.

    Comment by Doug Proctor — 26 Jan 2011 @ 11:46 AM

  171. “However, there are no results that imply that we should be able to detect an acceleration in say the last ten years compared to the previous ten years.”

    But to reach more than 2°C by the end of the century, there must undoubtedly be a detectable acceleration at some point – and it is somewhat surprising that no hint of any acceleration is detectable after 30 years of continuous increase in radiative forcing, isn’t it? When is this acceleration supposed to become measurable?

    [Response: Look at the projections - I'd say by mid-century. One could do a more quantitative analysis of course. - gavin]

    Comment by Gilles — 26 Jan 2011 @ 11:54 AM

  172. Concerning scenarios: first, I don’t understand the continued use of scenarios that were already wrong in the ’90s. Second, the weak point of the scenarios used in SRES is that they all rely upon continuous economic growth throughout the century – which is by no means guaranteed – and that they all exceed by far the amount of proven reserves for at least one fuel – which can hardly be considered a likely event, by definition of “proven”. So the SET of scenarios certainly does not encompass all the possibilities of the future – it is strongly biased as a whole.

    Comment by Gilles — 26 Jan 2011 @ 11:59 AM

  173. Geoff Beacon 162

    Try Intrade. They have a bet on a JAXA IJIS extent record low. The implied probability of failing to get a record is 30% to 39%, with the last trade at 32%. No-one should offer you evens if they can get better odds at Intrade; so it appears you need to offer approx 2:1 to get someone to bet against a record.

    Comment by crandles — 26 Jan 2011 @ 12:03 PM
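
    The arithmetic behind that reading of the market, sketched out: the contract pays out if there is no new record, so its price (in cents of a dollar) is an implied probability of no record, and the complement is the implied probability of a record.

        def market_reading(price_cents):
            p_no_record = price_cents / 100.0       # contract pays if there is NO new record
            p_record = 1.0 - p_no_record
            odds_to_offer = p_record / p_no_record  # odds a record-backer offers the other side
            return p_no_record, p_record, odds_to_offer

        for price in (30, 32, 39):
            p_no, p_yes, odds = market_reading(price)
            print(f"price {price}: P(no record) = {p_no:.0%}, P(record) = {p_yes:.0%}, "
                  f"fair odds to offer ~ {odds:.1f}:1")
        # At the last trade of 32, P(record) ~ 68%, so roughly 2:1 odds are needed
        # to attract someone to the "no record" side, as noted above.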

  174. SM, failure of reading comprehension.
    You’re rapidly becoming boring and digging yourself deeper.

    You ignore Gavin’s point that you can run EDGCM yourself on a home PC and redo the scenarios yourself, and whine about not having a supercomputer, presumably the 1980s vintage machine you’d need to exactly redo scenarios.

    You write:
    > Does everyone agree with Eli Rabett
    > … That the scenarios are zombie lies?

    That’s a pure example of what Eli describes, lying about what he wrote and misstating what the scenarios were or showed. It’s your spin. He wrote:
    “What the 1988 Scenarios were or showed is one of those zombie lies …”

    Why bother here if you’re just posting talking points and advertising ignorance while ignoring the answers given over and over to that stuff?

    Please, we need smarter skeptics who can respond to facts, not deny’em.

    Comment by Hank Roberts — 26 Jan 2011 @ 12:25 PM

  175. Ray Ladbury, 24 Jan 2011 9:29 am

    Perhaps I misread. In Annan and Hargreaves 2009, 2 to 3 C is the expert high-probability range.

    “If we were to further update each of these analyses with the FG likelihood, the posterior cost would reduce still further to 1.6% of GDP with a 95% limit for S of 3.5°C based on the Hegerl et al. analysis, and 2% of GDP with a 95% limit for S of 3.7°C for Annan and Hargreaves. At such a level of precision, it would probably be worth re-examining the accuracy of assumptions in some detail, such as those regarding the linearity of the climatic response to forcing, and the independence of the analyses of the distinct data sets. Nevertheless, such results may be interpreted as hinting at an achievable upper bound to the precision with which we can reasonably claim to know S, given our current scientific knowledge.”

    So the 95% likelihood range for 2xCO2 is 1.3 to 3.7, based on “our current scientific knowledge”. Of course there are questions about their methods, but they are at least avoiding “overly pessimistic” priors. I will have to look into the other papers.

    I guess looking back at 1913 to 1940 only intrigues me.

    Comment by captdallas2 — 26 Jan 2011 @ 12:29 PM

  176. “Can anyone explain THIS COMMENT over at Policy Lass?” J Bowers — 26 Jan 2011 @ 9:53 AM

    “…when you correct Sagan’s incorrect optical physics of sols, which started the whole CAGW scare, ‘cloud albedo effect’ cooling changes to heating, another AGW which is self-limiting.”

    AHA! Now, I understand.

    Sagan got cloud albedo effect backwards, and it should therefore be “nuclear summer” instead of “nuclear winter”. That misteak has fooled Lindzen into thinking there is a cloud Iris effect (GOOOOOAAAAAALLLLL – oops, OWN GOOOAAALLL, erm, own goal, uuuh, shhh,own goal). So actually the current warming from natural variation (NOT CO2, never CO2) will give more evaporation, which will give more clouds, which will magnify the warming(which will give more evaporation, more clouds, more warming, etc, etc, etc). This positive cloud feedback will self-limit when we reach 100% year round cloud cover everywhere, up from the current (~60% ?) average. Musta been what happened at the PETM – we can look forward to an Azolla swamp summers at the North Pole(which we can harvest to fertilize rice crops – fixes Nitrogen), and megafauna extinctions. Makes me feel all warm and fuzzy that we don’t have to work up a sweat over CO2 emissions.

    There is that one niggling detail – are humans megafauna?

    He goes on to claim that “Pollution increases the albedo of thin clouds because but, contrary to the present theory, decreases it for thick clouds. You can prove this by looking at thick clouds about to rain – they’re darker because increased droplet size means more direct backscattering, less light diffusely scattered.” This doesn’t make sense – more pollution, or more specifically, more Cloud Condensation Nuclei from sulfate or particulate emissions, result in more droplets. The start of cloud formation is dependent on a parcel of air reaching water vapor saturation(actually, some level of supersaturation); whatever that level is, it represents a fixed amount of water available at the onset of cloud formation. Larger numbers of CCN from pollution will divide that amount of water over a larger number of necessarily smaller droplets.

    This has been observed.

    google “pollution effect cloud droplet size”, or see “Aerosol Pollution Impact on Precipitation: A Scientific Review” By Zev Levin, William R. Cotton

    Thick clouds about to rain are not the same as thick (fluffy, high albedo) clouds with lots of pollution caused CCN.

    Alas, Policy Lass is Amiss, or a miss.

    Comment by Brian Dodge — 26 Jan 2011 @ 12:36 PM

  177. @ Gavin, many thanks for taking the time. I actually realised who he is not long after asking here. Ta.

    Comment by J Bowers — 26 Jan 2011 @ 12:40 PM

  178. Hi Gavin,

    apropos Ray’s comment at 164 above:

    “The fact remains that we cannot rule out sensitivities of 4.5 or higher”

    Back in 2007 in your “Hansen’s 1988 Projections piece” at http://www.realclimate.org/index.php/archives/2007/05/hansens-1988-projections/

    you noted that the sensitivity used in the Hansen model of about 4C per doubling was a little higher than what would be the best guess based on observations. You posed the interesting question:

    “Is this 20 year trend sufficient to determine whether the model sensitivity was too high? ”

    You answered in the negative at that time. Whilst in 2007 you noted that “Scenario B is pretty close and certainly well within the error estimates of the real world changes” that is clearly not the case now, as you acknowledge in saying that it (Scenario B) is running a little warm.

    My question is this:

    Given the fact that observations have continued to move away from model projections based on a sensitivity of 4.2C per doubling of CO2 (Scenario B), to what extent can we rule out, or at least consider as highly unlikely, sensitivities of 4.5C or higher?

    Thank you.

    Comment by Stephen — 26 Jan 2011 @ 12:41 PM

  179. 166, Gavin’s comment: [Response: Huh? (again). If CO2 had no effect, there would have been almost zero net forcing at all, and no reason for the observed trends. - gavin]

    I meant “increased” CO2, not total. Is there any good reason why, if the model is correct, the counterfactual scenario c fits the data better than the scenarios a and b, or is it merely a short-term statistical fluke with no long-term importance?

    169 Didactylos (smart e-name, by the way. I think I mentioned that once before): However, I suspect you do actually understand more about models than you pretend to.

    Thank you. I have more experience with models like this: http://www.sciencemag.org/content/331/6014/220.abstract

    The actual model is in the Supporting Online Material behind the paywall, but it’s available to any member of AAAS.

    Comment by Septic Matthew — 26 Jan 2011 @ 12:50 PM

  180. Pssst, guys, look at the TOPIC you’re linking to, in Policy Lass.

    You’re looking at a post in The Climate Denier Dunce Cap topic.

    Many have nominated excellent examples.
    Volunteer dunces have contributed their best work.

    Great site. Be aware of the context in which you’re reading.

    Comment by Hank Roberts — 26 Jan 2011 @ 12:57 PM

  181. “Then the weak point of the scenarios used in SRES is that they all rely upon a continuous economic growth throughout the century -which is by no means granted- …” Gilles — 26 Jan 2011 @ 11:59 AM

    If that’s the case, then the economic arguments – that we should wait “until the science is settled” because the application of a discount rate makes solving AGW cheaper in the future, and we can’t afford the economic dislocation if we act now – are false. If we really can’t afford to fix it now, and are less able to in the future, then we are really screwed.

    Comment by Brian Dodge — 26 Jan 2011 @ 12:58 PM

  182. SM:

    I meant that mean temperature of the world has developed (according to the data to date) differently from forecast (or “anticipated”) by scenarios A and B. It’s almost as if the increased CO2 since 1988 has had no effect.

    Actually, it’s almost as though the computed sensitivity of 4C to a doubling of CO2 in this early, 1988 model is too high (as has already been pointed out). Why, it’s almost as though the real sensitivity is more or less 3C, instead … I mean, it’s almost as though AR4’s most likely number for sensitivity, rather than the 1988 model’s higher computed value, is likely to be very close to the real value.

    Despite this too-high sensitivity number, the Scenario B projection (the only one worth looking at, since it most closely models actual CO2 emissions) was pretty darn good. As someone pointed out, it is skillful compared to simply extrapolating the trend from earlier decades …

    Comment by dhogaza — 26 Jan 2011 @ 12:58 PM

  183. PPS
    It looks like J. Bowers set up the confusion by coming here, pointing to the guy’s post over there, misidentifying it as from “Climate Lass” — and over there telling the guy that he’s linked to him at RC to get him more attention.

    “Oops” and “oops” again, twice linking to denier posts naively. Do better.

    Comment by Hank Roberts — 26 Jan 2011 @ 1:11 PM

  184. Re 126.

    Thanks for the pointer, Hank. I will try it — and hope I get some clues on how this is done. I have done much modelling in my time and I find that making sure the input is properly constructed is the sine qua non of a good process. With that solved, I then move to the two natural elements in the sequence: the derivation of the relationships (statistical association, formulaic constructions, etc.) and ultimately to the results of the model and how to make sure that a good historical fit (of statistical associations) does not inadvertently slip into a conclusion of causality.

    Comment by Stan Khury — 26 Jan 2011 @ 2:07 PM

  185. Captdallas2,
    Huh? Where in bloody hell are you getting 1.3 to 3.7. The 90% CL is 2-4.5. If you are referring to Annan’s Bayesian approach where he uses a Cauchy Prior–I have severe qualms about any Bayesian analysis that significantly changes the best estimate. In any case, a Cauchy is not appropriate, because it is symmetric and so does not capture the skew in our knowledge of the quantity. It is probably best to go with a lognormal prior as this at least doesn’t violate the spirit of maximum entropy/minimally informative priors.

    Comment by Ray Ladbury — 26 Jan 2011 @ 2:11 PM

  186. 156 David Benson,

    The volcanic lull persisted until circa 1960, 20 years after the period in question. It is also likely that other aerosols, dust from the dust bowl and accelerated manufacturing to build up for war, could have been significant. If the volcanic lull had a better fit to temperature, I would agree. The construction boom following the war may have contributed to the drop in temperatures, but the climate shift theory is a much better match to what happened.

    Comment by captdallas2 — 26 Jan 2011 @ 2:32 PM

  187. @ 182 Hank

    I (and others) have asked the guy to come to RC a number of times in the past on CIF to propose his pet theory, but he goes under a different name there, and it wasn’t until immediately after I genuinely asked here that I realised who he is (he sockpuppets in the UK MSM) – it’s not like his theory makes much sense to start with, and it’s even more difficult to follow each time he announces it in different ways. Pointing out I’d asked at RC was a kind of in-joke between us. I wasn’t spreading any confusion or attention seeking and I clearly said “Policy Lass” (read up). I frankly thought it was another faux “nail in the coffin of AGW” that crops up every week, and not being a climate modeller thought it would be a good idea to ask if someone could take a quick look in anticipation of a wave of zombie arguing points.

    Comment by J Bowers — 26 Jan 2011 @ 3:09 PM

  188. 151, Hank Roberts: Got a spare supercomputer, staff, and a year available?

    174, Hank Roberts: You ignore Gavin’s point that you can run EDGCM yourself on a home PC and redo the scenarios yourself, and whine about not having a supercomputer, presumably the 1980s vintage machine you’d need to exactly redo scenarios.

    You two guys need to get together. And I did note the comments about EDGCM.

    Comment by Septic Matthew — 26 Jan 2011 @ 3:10 PM

  189. The Role of Atmospheric Nuclear Explosions on the Stagnation of Global Warming in the Mid 20th Century – Fujii (2011)

    Hat tip to:
    http://agwobserver.wordpress.com/2011/01/24/new-research-from-last-week-32011/

    Comment by Hank Roberts — 26 Jan 2011 @ 3:11 PM

  190. Ray Ladbury,

    In the Annan and Hargreaves 2009 paper they did not discuss a lower limit, as it had no meaning for their purpose, I assume. They state 2 to 3 C as an expert-accepted range, and they calculated a 95% probability that S was less than 4 C. 3.7 was the 95% limit for their study. The range 1.3 – 2.5 – 3.7 is implied from that study.

    I agree that the Bayesian with Cauchy prior has issues, but my understanding is that if the prior is fairly “expert” the results will be reasonable. Other methods are more tolerant of ignorance, but the results tend to be less useful (too much area under the curve).

    As far as departure from the accepted “norm” (2-4.5), 1.3 – 2.5 – 3.7 is not that much of a stretch. In any case, I expect more papers in the near future using various statistical approaches.

    Comment by captdallas2 — 26 Jan 2011 @ 3:12 PM

  191. Aside — the above links lead to others including this tidbit modelers might find useful to crosscheck dust estimates from aerosol depth. (Pure speculation on my part.)

    Plutonium fallout might track amounts of dust in the atmosphere recently picked up in dust storms. The age of the isotope identifies the source: http://dx.doi.org/10.1016/j.apradiso.2007.09.019

    “Recently, the deposition rates have been boosted by the resuspension of radionuclides in deposited particles, the 239,240Pu content of which may originate from dusts from the East Asian continent deserts and arid areas.”

    Comment by Hank Roberts — 26 Jan 2011 @ 3:20 PM

  192. Captdallas: “As far as departure from the accepted “norm” (2-4.5), 1.3 – 2.5 – 3.7 is not that much of a stretch.”

    WRONG!! There is virtually no probability below about 1.5, and even 2 is stretching it. OTOH, the portion above 4.5 is quite thick-tailed. The distribution looks like a lognormal with standard deviation of about 0.91–and that’s got a pretty serious kurtosis. The problem with a Cauchy Prior is that it exaggerates the probability on the low side and under-estimates the probability on the high side. The prior should not drive the results, and it definitely should not have properties significantly different from the likelihood.

    Comment by Ray Ladbury — 26 Jan 2011 @ 3:44 PM

  193. Hank 188, the nuclear hypothesis is a weak one; not impossible, of course, to be sure, but not based on very solid data. For obvious reasons it cannot be replicated either. :) I am not closed off to a possible cooling effect as a result of nuclear explosions, but all conclusions on either side of the debate are too speculative for my blood. I still read these little contributions, just the same, as they make for interesting reading.

    Comment by Jacob Mack — 26 Jan 2011 @ 3:48 PM

  194. Ray, 184. Let us back up for a minute. In your own words, how would you justify the best estimate, as it is often called? What I mean is: statistically speaking, and based upon what you personally know about the data, what are the best indicators that this range can be believed with such confidence? I do not mean quoting the IPCC report and their probability ranges, but what have your calculations, best estimates and background in physics shown you, and how do you articulate it? Maybe I am just too confident, having completed a host of new grad stats courses and having some interesting input from a chemical engineer friend of mine, but I am curious about your critical thinking process and analysis of the claims made about this temperature range and so forth.

    Thanks,

    Jake.

    Comment by Jacob Mack — 26 Jan 2011 @ 3:53 PM

  195. Crandles 173

    Thanks for your suggestion but I couldn’t find it on the intrade site. Can you point me in the right direction.

    But a bet doesn’t allay my fears about missing feedbacks. I had hoped for some guidance on the Flanner et al. paper.

    Comment by Geoff Beacon — 26 Jan 2011 @ 4:17 PM

  196. Captdallas2: “…but the climate shift theory is a much better match to what happened.”

    Not even wrong. Dude you are arguing with 100 year old science. You might as well be arguing for the currant-bun theory of the atom!

    Comment by Ray Ladbury — 26 Jan 2011 @ 4:21 PM

  197. Ray Ladbury, #191: Good start on a Statistics I talking point. One question and one comment. Question: on what do you base the claim of zero probability of 1.5 degrees? Comment: I would argue that the high-end estimates have zero probability, statistically, due to empirically observed buffering.

    Comment by jacob mack — 26 Jan 2011 @ 4:41 PM

  198. Jacob Mack, do you want to talk in terms of Bayesian, Frequentist or Likelihood ratios? And what criterion do you want to base the designation of “best” on. The treatment by Knutti and Hegerl is quite readable and fairly comprehensive. Knutti proposes 9 criteria for evaluating the quality of a constraint, along with 9 separate constraints. You can find additional analyses at AGWObserver:
    http://agwobserver.wordpress.com/2009/11/05/papers-on-climate-sensitivity-estimates/

    Personally, I do not think any single criterion leaves the others in the dust. What is most impressive to me is the accord between the very different analyses and datasets. Almost all of them arrive at a best estimate of around 3 degrees per doubling and a “likely” range between 1.5 and 7. All are skewed right. They all pretty much preclude a sensitivity as low as 1.5. If the concept of CO2 sensitivity were wrong, or if the physics were seriously off, you would expect to get a broad range of estimates. You don’t. My takeaway from this is that CO2 sensitivity is most likely around 3 degrees per doubling, and that if that’s wrong, it’s most likely higher.

    Now in terms of stats, I think likelihood is probably the best way to look at it. If you are going to look at it in a Bayesian way, your Prior should be as uninformative as possible. One useful way of looking at it is to look at how much the best-fit parameters of your distribution (again, you probably want to use a lognormal) change as you add data. You can also look at how your likelihood (or Posterior) for your best-fit value changes as you add data. Big changes indicate either very important or inconsistent data. What you notice about climate sensitivity is that the central tendency doesn’t change much–your distribution just mostly gets narrower as you add data–a lot at first, and less as you proceed. That is usually an indication that your data are pretty well in accord. Does that make sense?

    Comment by Ray Ladbury — 26 Jan 2011 @ 4:50 PM
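
    A toy numerical illustration of the diagnostic described in the comment above: update a gridded posterior for S one line of evidence at a time and watch the central estimate stay put while the spread shrinks. The three likelihoods are invented lognormal-shaped curves, not real constraints.

        import numpy as np
        from scipy.stats import norm

        S = np.linspace(0.1, 10.0, 2000)
        posterior = np.ones_like(S)
        posterior /= np.trapz(posterior, S)          # start from a flat prior on the grid

        # (median, log-space spread) of three hypothetical, roughly concordant
        # lines of evidence -- illustrative numbers only.
        evidence = [(3.0, 0.45), (2.8, 0.35), (3.2, 0.30)]

        for median, sigma in evidence:
            likelihood = norm.pdf(np.log(S), loc=np.log(median), scale=sigma)
            posterior *= likelihood
            posterior /= np.trapz(posterior, S)
            mean = np.trapz(S * posterior, S)
            sd = np.sqrt(np.trapz((S - mean) ** 2 * posterior, S))
            mode = S[np.argmax(posterior)]
            print(f"after this constraint: mode ~ {mode:.2f} K, mean ~ {mean:.2f} K, sd ~ {sd:.2f} K")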

  199. Ray Ladbury,

    So for lognormal SD ~1 there are a lot of random, multiplicative, feedbacks and all are positive? You have a much stronger grasp of statistics than I do. I was under the impression there might be a few negative feedbacks in a chaotic atmosphere.

    Multi-decade long oscillations that impact climate go against 100 year old science?

    Guess I need to hit the books.

    ta

    Comment by captdallas2 — 26 Jan 2011 @ 5:06 PM

  200. Jacob, what precisely do you mean by “empirically-observed buffering”? The analyses that place the most stringent limits on the high side also show almost no probability on the low side (e.g. last glacial max, last interglacial and climate models). The climate models are particularly important for limiting the high-side probability. That is why I always laugh when denialists argue that the models are crap. If the models are wrong, we’re probably in real trouble! The thing is that none of these things work particularly well with low sensitivity. The claim of low (not zero) probability is due to the aggregate distribution.

    Oh, BTW, I was in error–the lognormal standard deviation is probably about 0.25. That puts the probability of S less than 1.5 at about 0.3%. On the high side, to get to such a low probability you’d have to go to about 6 degrees per doubling. The high side will always drive risk.

    Comment by Ray Ladbury — 26 Jan 2011 @ 5:11 PM
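
    Those tail figures can be checked directly, assuming the intended distribution is a lognormal with median 3 K per doubling and a log-space standard deviation of 0.25 (the median-3 parameterization is an assumption here, consistent with the best estimate discussed upthread).

        from scipy.stats import lognorm

        sigma, median = 0.25, 3.0
        dist = lognorm(s=sigma, scale=median)        # scale = exp(mu) = median

        print(f"P(S < 1.5) = {dist.cdf(1.5):.4f}")   # ~0.003, i.e. about 0.3%
        print(f"P(S > 6.0) = {dist.sf(6.0):.4f}")    # ~0.003 as well (symmetric in log S)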

  201. captdallas 2, 189: I think it is safe to say we must use caution with any statistical method and analysis. Each method and application holds inherent limitations that can lead to issues in the results. In general, very low and high ends, various extremes and outliers should ideally be eliminated, but that is not always the case. Whether the correlations are linear or curvilinear, and regardless of the regression method or the outlier-elimination process for values +/- 3 s.d.’s from the mean, error analysis and going back to the raw data are essential procedures. Sometimes, even setting aside bias and expectations, and in the face of a considerable background, a few typos, blurry vision from tired eyes, or using a new computer program can greatly change the final results.

    I agree with you, however, that there are no strong reasons to rule out low values either. That may open the door to the possibility of higher values too, in conversation, but I think that based upon the data and statistical summaries the higher ends can in fact be ruled out, as in 0/(some value) – not being possible – but that will take some work and analysis to show. By higher values I mean anything above 3.5 degrees. I also think even the clustering at 3-3.2 degrees needs more evidence and reanalysis of raw data. What methods in particular are the peer-reviewed authors cited by others here using, and why? That goes a long way in recreating an analysis, and although models (their usage and interpretation) may be unavoidable in our discussion, they should not and cannot be the main source of information, data and so forth.

    Comment by Jacob Mack — 26 Jan 2011 @ 5:11 PM

  202. # 195, I must disagree when you cite 100 years of science as if the history settles the argument in and of itself. Many prominent scientists in the past 100 years have made false statements that are still perpetuated and researched by current scientists.

    [Response: Define false. And how many "prominent" scientists have made "true" statements in that same time?--Jim]

    Sometimes a paper or two does turn the history of a science on its head, and I do not mean Galileo or anything from hundreds of years ago either. Of course caution should be used with individual papers and with lack of replicability too. However, sometimes one or two papers get replicated thousands of times afterwards and progress science greatly. The 1950s–1970s mark a 20-year period of great controversy in climate science where, at times, one author or two changed the perception of weather and climate.

    [Response: These types of vague generalizations lead nowhere. Please talk specifics and back things up.--Jim]

    Comment by Jacob Mack — 26 Jan 2011 @ 5:17 PM

  203. Ray Ladbury, 197, let us start in Bayesian and work out from there. First I will read your link(s). Next I will consult my textbooks and some of the statistical work I do. I will get back to you beginning tomorrow. I am back to my night job. By all means let us keep the discussion open [edit; as before]

    Comment by Jacob Mack — 26 Jan 2011 @ 5:21 PM

  204. Ray Ladbury @184 — The point behind using a Cauchy distribution as the prior is that it is presumably completely uninformed about both physical laws and the actual data. Assuming S cannot be less than zero requires some degree of expertise. I find using a Cauchy to be quite ingenious.

    Comment by David B. Benson — 26 Jan 2011 @ 6:04 PM

  205. Re Geoff Beacon 194

    “Thanks for your suggestion but I couldn’t find it on the intrade site. Can you point me in the right direction.”

    In ‘The Predictions Market’ menu on the left the second category is ‘Climate and weather’. This opens a second level menu which includes:
    NY City Snowfall,
    Global Temperatures, and
    Arctic Ice Extent

    Comment by crandles — 26 Jan 2011 @ 6:14 PM

  206. 173crandles
    Geoff Beacon 162

    The site is http://www.intrade.com/jsp/intrade/contractSearch/index.jsp?query=Minimum+Arctic+ice+extent+for+2011+to+be+greater+than+2007

    I went to intrade.com and started a search for arctic sea ice and it came up. As I read it, Geoff should get the 2 to 1 odds. They are betting that sea ice extent will not be lower than 2007. Geoff, I would almost bet you $100 because I would like to lose the bet. The fact that 2007 is the lowest has the deniers touting that sea ice is growing. It will be interesting to follow Intrade, because I think 2012 might have a shot at being the lowest.

    Comment by Bibasir — 26 Jan 2011 @ 6:32 PM

  207. David, The problem I have with a Cauchy Prior is that it is symmetric–when we know that the distribution for S is not! This is bound to under-estimate the high-end tail and over-emphasize the low-end tail. The result is also very sensitive to the position parameter. So, while I agree it is ingenious, I think it is also wrong, in that it violates the spirit that motivates using minimally-informative Priors. You get a VERY different result if you even use a lognormal prior with a standard deviation of 0.65-0.85 (which maximizes width/Kurtosis for a given mean).

    I mean what Annan has done is essentially cut off the high-end tail and say it isn’t relevant. Fine, but be straightforward about it.

    Comment by Ray Ladbury — 26 Jan 2011 @ 7:13 PM

  208. Ray Ladbury @202 — Aha, but the prior is to be the least informed possible. So one starts by assuming as little as possible. In particular, the lognormal distribution assumes that S is not negative. Furthermore, “a variable might be modeled as log-normal if it can be thought of as the multiplicative product of many independent random variables each of which is positive” (from
    http://en.wikipedia.org/wiki/Log-normal_distribution),
    so this distribution makes that assumption about the subjective pdf for S. One could consider adding another parameter to form the translated lognormal pdf, but the above objection still obtains.

    An alternative is the translated Weibull distribution:
    http://en.wikipedia.org/wiki/Weibull_distribution
    but again there are objections to somehow relating a subjective pdf for S to failure rates.

    The especial beauty of the Cauchy distribution is that it possesses no mean and assigns positive probability to all possible values of S. The latter is important in a Bayesian analysis, as then one does not a priori exclude any possible value. I suppose if one argues that the cloistered expert who is picking this prior knows some physics, then she might choose a translated Cauchy centered at 1.2 K for 2xCO2.

    Once the Bayesian analysis is applied to whatever evidence one has, the influence of the prior begins to diminish. The long tails are still there when starting with a Cauchy prior but the symmetry quickly goes away, with the probability of large S vastly exceeding the probability of negative S.

    Comment by David B. Benson — 26 Jan 2011 @ 8:55 PM

  209. I suspect that much of the motivation for Annan and Hargreaves is simply to highlight that the uniform prior gives excessive weight to very high sensitivity values, and that results can be very different with a different choice of prior. I think it’s quite valuable to emphasize the sensitivity of the final result to the choice of prior. However, that doesn’t give me much comfort in terms of climate change — it makes results less sure, more unsettling. For risk analysis, I’d suggest using the most pessimistic prior as a cautionary move.

    To quote myself, I wouldn’t want to ride on an airplane whose safety depended on using the Jeffreys prior rather than a uniform prior. And I don’t want safe climate policy to depend on a Cauchy (or lognormal or Weibull) prior rather than a uniform prior.

    Comment by tamino — 26 Jan 2011 @ 9:38 PM

  210. David, The fact that the Cauchy does not preclude negative values for S is actually irrelevant. We know from physics that the data will preclude these values, so all the symmetric nature of the Cauchy is doing is robbing probability from the high-end tail. There is no good reason to do this from a physics perspective–or, for that matter, from a probability perspective. And from a risk mitigation perspective, it is potentially disastrous. Think about the equivalent situation at a Fannie Mae. We can’t bound risk on the high end, so we introduce a symmetric Prior to beat down the high end of our asymmetric likelihood. Sounds like a recipe for a mortgage crisis to me. Sometimes our “uninformative” priors can be too clever by half.

    I’ve been thinking about this of late. If you have a situation where you are pretty sure that the likelihood will beat down the tails of your distribution, then what you really want in an uninformative prior is one that maximizes your width rather than the extreme tails of the distribution–relatively high standard deviation and relatively low kurtosis. These aren’t even really defined for the Cauchy.

    Uninformative does not necessarily mean unintelligent.

    Comment by Ray Ladbury — 26 Jan 2011 @ 10:03 PM

  211. 19 BPL Good luck.

    Comment by Edward Greisch — 27 Jan 2011 @ 12:12 AM

  212. TimTheToolMan asks : “Now I’m even more confused. How is it that arguably the most important aspect of AGW (ie the Ocean Heat Content) has not been calculated from the model output past 2003?”

    Gavin Replies : “[Response: It has. But I didn't do it, and I don't have the answers sitting on my hard drive ready to be put on a figure for a blog post. And if I don't have it handy, then I would have to do it all myself (no time), or get someone else to do it (they have more important things to do). If someone who has done it, wants to pass it along, or if someone wants to do the calculation, I'd be happy to update the figure. - gavin]”

    So if you’re aware it’s been done, why not simply ask for the figures from whoever did it?

    If you let me know who it was, I’ll ask if you like.

    Comment by TimTheToolMan — 27 Jan 2011 @ 1:47 AM

  213. Brian Dodge: “If that’s the case, then the economic arguments – that we should wait “until the science is settled” because the application of a discount rate makes solving AGW cheaper in the future, and we can’t afford the economic dislocation if we act now – are false. If we really can’t afford to fix it now, and are less able to in the future, then we are really screwed.”

    Actually this is quite a possibility, but what do you mean exactly by “we are screwed”? Who are these “we”, and what would “screw” us? You seem to refer to two very different problems: the possible fading of industrial society because we don’t have a good alternative to fossil fuels (which would happen even without the GH effect), and the threat of climate change (which could happen even if we found an alternative to fossil fuels, if they exist in very large amounts and we are too late to replace them).

    These problems are often mixed and confused in public opinion and the media, but they’re actually totally different, and in some sense contradictory. And a sensible discussion of which will really be the main problem is still lacking, in my view. Discussions around the climate favor the hypothesis that no real problem of fuel supply will occur, and that climate change will dominate by far all the other issues. I think this is by no means proved.

    Comment by Gilles — 27 Jan 2011 @ 2:06 AM

  214. On priors:

    I don’t think there is such a thing as an uninformative prior, in this case. Originally uninformative priors arise from symmetries that exist in models of physical systems. For example, an infinite Euclidean space (or a sphere) has a translational (or rotational) symmetry, and the symmetry makes an uninformative prior over the space reasonable. The principle of maximum entropy, used to deduce uninformative priors, is intimately tied to these symmetries. But symmetries may not exist in real-world problems, especially if the domain is not physics.

    For model hyper-parameters, especially for those limited to positive values, one can sometimes successfully use “uninformative” priors derived by maximum entropy, for the hyper-parameters are abstract and far from the likelihood. Conjugacy helps in a sense, because conjugate priors are often from the exponential family, and therefore they are always maximum entropy over some, more or less artificial symmetry.

    In the case of CO2 sensitivity, I cannot see any symmetries at all, and the likelihood is not of the exponential family. So there is simply no basis for uninformative priors.

    In the context of model fitting, strong sensitivity to the prior would mean that we do not have enough data. In a sense this is the case with CO2 sensitivity as well. And because the prior is informative, one can argue whether it really represents the correct prior information. :)

    Then there is the assumption of independence between the likelihood and the prior – that the evidence presented by them arise from independent sources. Annan says the independence holds, and I’m not really able to argue convincingly against. But the experts (prior) and the evidence (likelihood) have lived for a long time in the same world.

    I don’t know where all this leaves us. Maybe one should be careful with the bayesian formalism here?

    BTW, Martin Weitzman’s work may be interesting for those who like the Annan approach. If I have understood correctly, Weitzman basically states that utility calculations are worthless in the case of climate change, because the posterior distribution is long-tailed enough, to the upper side, that expected utility cannot be computed at all! But his justification is kind of technical.

    Comment by Janne Sinkkonen — 27 Jan 2011 @ 3:56 AM

  215. Crandles 206
    Bibasir 205

    Thanks very much for your help. I’m just getting to understand how Intrade works. The actual contract is “MIN.ARCTIC.ICE:2011>2007” and the Bid and Ask prices are 30 and 39. Because this is less than 50, I think this means that market sentiment is that the contract will fail. This means the expectation is that the minimum Arctic sea ice extent this year will be below that of 2007.

    I want to bet on the Arctic sea ice being the lowest this year so I should register with Intrade and offer to sell.

    Perhaps the traders have read Flanner et al.

    Comment by Geoff Beacon — 27 Jan 2011 @ 4:06 AM

  216. I haven’t seen this paper mentioned outside the denial space:

    Uncertainty in the Global Average Surface Air Temperature Index: A Representative Lower Limit

    Abstract

    Sensor measurement uncertainty has never been fully considered in prior appraisals of global average surface air temperature. The estimated average ±0.2 C station error has been incorrectly assessed as random, and the systematic error from uncontrolled variables has been invariably neglected. The systematic errors in measurements from three ideally sited and maintained temperature sensors are calculated herein. Combined with the ±0.2 C average station error, a representative lower-limit uncertainty of ±0.46 C was found for any global annual surface air temperature anomaly. This ±0.46 C reveals that the global surface air temperature anomaly trend from 1880 through 2000 is statistically indistinguishable from 0 C, and represents a lower limit of calibration uncertainty for climate models and for any prospective physically justifiable proxy reconstruction of paleo-temperature. The rate and magnitude of 20th century warming are thus unknowable, and suggestions of an unprecedented trend in 20th century global air temperature are unsustainable.

    [Response: The characterisation of the error in the global annual mean is wrong, and even if correct, the impact on the uncertainty in the trend is completely wrong. What journal was this published in? Quel surprise.... - gavin]

    Comment by Martin Smith — 27 Jan 2011 @ 9:03 AM
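
    A quick Monte Carlo illustration of the second half of the inline response: even if every annual anomaly carried an independent +/-0.46 C error (the very characterisation being disputed), the scatter it adds to a 120-year trend estimate is only about 0.1 C per century, so a trend of roughly the observed size would remain clearly distinguishable from zero. The trend value used below is an approximate stand-in for the observed one.

        import numpy as np

        rng = np.random.default_rng(42)
        years = np.arange(1880, 2001)
        true_trend = 0.006                   # ~0.6 C/century, roughly the observed value
        truth = true_trend * (years - years[0])

        slopes = []
        for _ in range(10000):
            noisy = truth + rng.normal(0.0, 0.46, years.size)   # independent 0.46 C errors
            slopes.append(np.polyfit(years, noisy, 1)[0])

        slopes = np.array(slopes)
        print(f"trend recovered: {slopes.mean()*100:.2f} +/- {slopes.std()*100:.2f} C/century")
        # The added scatter in the fitted trend is ~0.12 C/century, far too small to
        # make a ~0.6 C/century trend "statistically indistinguishable from 0".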

  217. A whole lotta jargon goin’ on.

    I’d make an elaborate emergency room joke (STAT!) but I can’t think of one.

    If one of you statisticians regularly teaches the subject, please offer a two-bit explanation for the cheap seats about what the issues are.

    Comment by Jeffrey Davis — 27 Jan 2011 @ 9:21 AM

  218. Re: #216 (Martin Smith)

    If the surface temperature record is so unreliable, then why does it agree so well with the satellite temperature record (also see this)?

    Comment by tamino — 27 Jan 2011 @ 9:47 AM

  219. Janne,
    I think that is my basic point–that caution is warranted. What do you think of an Empirical Bayes method? It would yield a result similar to marginal likelihood. The thing is that I have serious qualms about using a Prior that looks very different from the likelihood. That is one of the problems I have with Annan’s approach–that and the fact that negative sensitivities are unphysical.

    For an empirical Bayes method, we could use the agreement between the different methodologies on the best-fit sensitivity (~3 K/doubling) and then set the width so that the distribution is maximally uninformative. I also think that the skew of the Prior is important, especially if we are to bound the risk.

    Comment by Ray Ladbury — 27 Jan 2011 @ 9:59 AM

  220. 213 Gilles: Discount rate is irrelevant because extinction of the human race is an infinite cost. Regardless of the discount rate, the present cost is still infinite.

    Economics is irrelevant because economics is not applicable to situations where there is not a civilization in which money is used. By “screwed” we mean at best the collapse of economics because money is worth nothing if there is no food.

    Your economics arguments are irrelevant. Your fossil fuels arguments are also irrelevant because fossil fuels are useless in the absence of organized civilization that can use fossil fuels.

    In order to maintain some remnant of civilization with some form of law and order, we MUST quit using fossil fuels immediately. Notice that this does not necessarily prevent gigadeath due to widespread starvation.

    BPL has computed the date of collapse under BAU as some time between 2050 and 2055. That is very soon, and we have no time to waste.

    Is that stark enough for you?

    Comment by Edward Greisch — 27 Jan 2011 @ 10:00 AM

  221. Re: #217 (Jeffrey Davis)

    Not sure what your background is, but I think you’re talking about the discussion of priors in estimating climate sensitivity, so here’s some elementary info.

    We’re trying to figure out the probability of climate sensitivity having any given value, i.e., the probability distribution for climate sensitivity. Many believe that a superior approach to statistics is what’s called a Bayesian analysis. It’s many things, but one of its key elements is that it enables us to combine what we learn from data — which gives us the likelihood — with what we already knew, or simply already believed based on expert knowledge — which we encode in the prior distribution. The prior expresses what’s possible and what we already know (or think we know). After combining prior and likelihood, we get the posterior probability, which is what we were after in the first place.

    When we don’t have much knowledge prior to observations, we usually try to use a noninformative prior. Loosely speaking, this is one which doesn’t impose assumptions or restrictions on the final result. There’s more than one way to skin this cat, including a “maximum entropy” prior or a “Jeffreys prior” or “conjugate prior”, the form of which depends on how the system behaves (which itself may be uncertain).

    Some analysis of climate sensitivity has worked with a uniform prior — that’s just one where all values are considered equally likely prior to incorporating our data. But Annan and Hargreaves have argued that this gives too much weight to very high values of climate sensitivity — after all, it makes no sense to assume as a prior that climate sensitivity of 3 deg./doubling CO2 is equally as likely as climate sensitivity of 3 million deg./doubling CO2.

    Even so, when we apply our data, it so suppresses the 3-million-degree sensitivity that it’s not an issue. But it doesn’t suppress 20- or 30-degree sensitivity nearly as much as other choices of prior, and even though these high values end up not very likely they’re still possible, and lead to the “long tail” of the probability distribution. The long tail isn’t very big, but the consequences of such high values are so extreme that they may dominate risk analysis.

    So, Annan and Hargreaves have explored other choices — first, to show that the uniform prior is the cause of the long tail, and second, to explore what other choices may indicate.

    Most of the discussion here is about the appropriateness of one of Annan & Hargreaves’ choices for a prior, the Cauchy distribution. It moderates the long tail (solving the 3-million-degree-sensitivity problem) but not too much (some would say). Even so, it also includes nontrivial prior probability for “prior-unlikely” values, say, for negative climate sensitivity — when many believe that there’s very good reason a priori to exclude, or at least downweight, the possibility of negative climate sensitivity.

    So, we’re not only discussing the issue in general, we’re also lobbying for our favorite priors, all of which are just well-known probability distributions we think might be useful choices for this analysis. These include the Cauchy distribution (which some object to because it gives equal prior weight to negative as to positive sensitivity values and besides, it’s symmetric), the log-normal (which some object to because it excludes negative values altogether), the Weibull, etc.
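    To make that concrete, here is a toy numerical sketch (all numbers invented, not taken from Annan & Hargreaves): assume the data pin down a feedback factor f, with S = S0/(1 - f) and S0 roughly 1.2 K, so that a symmetric error in f produces the familiar fat upper tail in S; the prior then decides how much of that tail survives.

        import numpy as np
        from scipy import stats

        # Toy setup: data constrain the feedback factor f, where S = S0 / (1 - f).
        S0, f_hat, f_sigma = 1.2, 0.6, 0.15          # invented numbers
        S = np.linspace(0.1, 20.0, 20000)            # grid of sensitivities (K per doubling)
        f = 1.0 - S0 / S                             # feedback implied by each S
        likelihood = stats.norm.pdf(f, f_hat, f_sigma)

        priors = {
            "uniform on [0, 20]": np.ones_like(S) / 20.0,
            "Cauchy(3, 3)":       stats.cauchy.pdf(S, loc=3.0, scale=3.0),
        }

        for name, prior in priors.items():
            post = prior * likelihood
            post /= np.trapz(post, S)                # normalize the posterior
            tail = np.trapz(post[S > 6], S[S > 6])   # posterior probability that S > 6 K
            print(f"{name}: P(S > 6 K | data) = {tail:.2f}")

    Under these toy numbers the posterior peak sits near 3 K either way; only the upper tail changes much.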

    I hope this helps.

    Comment by tamino — 27 Jan 2011 @ 10:17 AM

  222. Tamino: “I suspect that much of the motivation for Annan and Hargreaves is simply to highlight that the uniform prior gives excessive weight to very high sensitivity values, and that results can be very different with a different choice of prior.”

    Yes, and anyone who takes a peek at James’ blog might also suspect he’s having some fun doing it. But sensitivity to the prior when one speculates over a broad range of priors isn’t exactly rare is it?

    Why not use the paleo prior of e.g. Hansen? This has the unique advantage of actually being prior. Is this not in the spirit of Bayes? This yields S (sensitivity to forcing, not to the prior) = 3 as usual, and along the way goes far toward solving another problem: “You haven’t bounded the risk” – the proper polite and professional way of saying “It’s insane to go on burning carbon like there’s no next generation.” This is solved because S (from the paleo prior) depends on paleo temperature estimates. If the paleo temps are off, S will be higher or lower but the expected melting and sea level rise are the same.

    “Two Bayesians can’t pass one another in the street without smiling at each other’s priors.”

    – Cosma Shalizi

    Comment by Pete Dunkelberg — 27 Jan 2011 @ 10:19 AM

    Maybe someone can answer a question I have been puzzled about for a while. The graphs compare the global mean temperature estimated by the models with that estimated from the data. However, this metric is only useful if “every degree is equal”, i.e. a rise in temperature of 1 degree in the Arctic is of exactly the same consequence as the same rise in the tropics (or anywhere else). Now if the change in global mean temperature is supposed to be a measure of the change in energy content of the atmosphere (assuming we are using air temperatures), or a proxy for the change in the energy content of the ocean, or in fact has anything at all to do with energy, surely the specific heat of the air should be taken into account? This must depend to quite a large extent on the humidity (and presumably also pressure and other factors), which must change dramatically around the globe. In particular it must be much higher in the tropics (on average).

    [Response: The global mean temperature anomaly is the 2D integral of temperature anomalies over the surface. In models it is exact, while in the real world it needs to be approximated in various ways (2m SAT anomalies over land, assumptions about representativeness, SST over the ocean, corrections for inhomogeneities etc.). If you were to calculate a change in atmospheric heat content, that would be closer to your suggestion, and while I don't think it would look much different, it is not the same metric. - gavin]
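    For anyone who wants the weighting spelled out, a minimal sketch (a made-up anomaly field stands in for real gridded data): on a regular lat-lon grid each cell’s area goes as the cosine of latitude, so the global mean anomaly is a cos-weighted average rather than a simple one.

        import numpy as np

        rng = np.random.default_rng(0)
        nlat, nlon = 90, 180
        lats = np.linspace(-89, 89, nlat)                    # cell-centre latitudes
        anom = rng.normal(0.4, 1.0, size=(nlat, nlon))       # fake anomaly field (deg C)

        w = np.cos(np.deg2rad(lats))                         # area weight for each latitude band
        w2d = np.repeat(w[:, None], nlon, axis=1)

        global_mean = np.sum(anom * w2d) / np.sum(w2d)
        print(f"area-weighted global mean anomaly: {global_mean:+.2f} C")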

    Comment by snowrunner — 27 Jan 2011 @ 10:24 AM

  224. Geoff Beacon 215

    I agree with your interpretation of the intrade prices. It would cost US$7 to get $10 in 9 months’ time if you are correct and there is a record low.
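    In other words (ignoring fees and the time value of money): paying $7 for a contract that settles at $10 if there is a record low implies a break-even probability of 7/10 = 0.7; if your own estimate of that probability is p, the expected profit per contract is roughly 10p - 7, so the bet only pays on average if you think p is above 0.7.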

    >”Perhaps the traders have read Flanner et.al.”
    Perhaps they have read it and got the same interpretation as you, and maybe they haven’t read it. I have only read the abstract and there will be more in the paper, but I do not see how it changes my assessment of the probabilities much, so I am puzzled by your insistence on its importance.

    The paper clearly does quantify the albedo feedback much better than before, but so what? We already knew the decline in sea ice was faster than models predict; presumably this effect was already acting over the last few years, and we had records in 2005 and 2007 but not in 2006, 2008, 2009 or 2010. I would suggest that what you need is something to indicate that the effect is going to be much stronger in 2011 than in 2010 or 2009, so that it is likely to be more important than natural variability over one year. But perhaps there is something in the paper but not the abstract to indicate that.

    Comment by crandles — 27 Jan 2011 @ 10:37 AM

  225. Jeffrey Davis,
    How familiar are you with Bayesian probability? Basically, the situation is that we have a set of data that favor a certain best-fit value for CO2 sensitivity, S~3, and a certain range for S between 2.1 and 4.5. The question is whether we have any basis for a Prior expectation of what S should be and what the probability distribution over values of S should look like before we look at that data. That Prior expectation (or PRIOR) can be updated with the likelihood (i.e. the combined probability) of our data using Bayes’ theorem. With me?

    The thing is that in Bayesian probability, there is always a degree of subjectivity to choosing a Prior. One way to minimize this is to use a Maximum Entropy Prior (really only applicable to discrete distributions) or a minimally informative prior (a generalization where you make the Prior as broad as possible subject to constraints like known symmetries). The goal in both cases is to minimize the effect of the subjective Prior on the results. OK?

    The problem is that there’s this pesky tail on the distribution of sensitivities at the high end. James Annan did a very interesting analysis in which he showed that even if you use a Cauchy Prior centered on 3 K/doubling, you can make that pesky tail go away. (Note: the Cauchy is a really nasty distribution with tails so thick none of its moments exist) Basically, I think you could say that I am a critic of the approach because the Prior seems to violate symmetries of the problem. David Benson is a fan because the Cauchy is fairly uninformative and makes no assumptions about physics, etc. And Tamino and Janne think the approach is interesting but that the very fact that the result depends critically on the Prior argues for caution when applying Bayesian probability to the problem. I hope that is not too unclear, doesn’t do violence to the situation or distort anyone’s position.
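    A quick way to see how nasty the Cauchy is (a throwaway sketch, nothing to do with the sensitivity data themselves): its mean does not exist, so sample means never settle down as the sample grows, while the median behaves.

        import numpy as np

        rng = np.random.default_rng(0)
        for n in (10**2, 10**4, 10**6):
            c = rng.standard_cauchy(n) * 3.0 + 3.0   # Cauchy centred on 3 with scale 3
            g = rng.normal(3.0, 3.0, n)              # normal with the same centre and scale
            print(f"n={n:>7}: Cauchy mean {np.mean(c):8.2f}   normal mean {np.mean(g):5.2f}"
                  f"   Cauchy median {np.median(c):5.2f}")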

    BTW, Empirical Bayes is a bastardized Bayesian approach where you cheat and look at the data at least a wee bit in order to set the best-fit values for the parameters of your distribution.

    Comment by Ray Ladbury — 27 Jan 2011 @ 10:40 AM

  226. Bibasir @ 206:

    There seems to be a pattern where record low years have a several year period of recovery prior to making a new record low. I pointed this out a few years ago, after the 2007 low, and stated with a high degree of confidence that we wouldn’t have a new record low for several more years. At this point in the cycle, I wouldn’t bet against 2011 or 2012 being a new record.

    I made the same comment a few years back, when Solar Cycle 24 was shaping up to be a dud, and so far we’ve not had an unambiguous and unarguable new record high. As SC 24 slowly reaches Solar Maximum, and CO2 concentrations continue to rise, the probability that we’ll see a new record high global temperature that no one can argue against likewise increases.

    And I say this because =that= is when the deniers are going to have a very large face full of pie.

    Have patience — the proof will be in the pudding in another year or three. All the right technologies to do something about CO2 emissions are maturing and we’ll be in better shape, in terms of the data supporting the science, to make aggressive changes by 2014.

    Comment by FurryCatHerder — 27 Jan 2011 @ 10:48 AM

  227. #209 – Tamino
    Am I reading you right in saying that, at least for A/C, you trust engineering standards more than the more exotic stats?

    Comment by J. Bob — 27 Jan 2011 @ 11:25 AM

  228. For TimTheTool:
    Use the tool: http://scholar.google.com/scholar?q=Ocean+Heat+Content
    See, e.g.: http://www.agu.org/pubs/crossref/2003/2003GL017801.shtml

    Comment by Hank Roberts — 27 Jan 2011 @ 11:58 AM

  229. re Gavin @223
    I know what the mean global temperature is (actually, I don’t, see below), but the question was why this is a meaningful metric for looking at changes over time, when you could get the same global mean from very different distributions of temperature (e.g. increase the poles, decrease the tropics) which would have very different interpretations of energy balance (at least if I am right that humidity matters)?

    I say I don’t know what the global mean is because what is actually estimated is a spatially weighted average of the (homogenised etc.) data. While this is reasonable for looking at changes over time, it is certainly not an estimate of the true mean of the surface temperature of the globe. As everyone who has ever measured the temperature knows, if a station were moved even a few metres in any direction the temperature would change. Any station that is not very rural will suffer from a heat island effect, which may be constant over time but means the station does not give an unbiased estimate of the mean temperature for the area it is supposed to represent. The altitude of the station and the surrounding area are equally important. The mean temperature is therefore an estimate of the global mean plus an unknown constant, presumed constant in time. I’m rather surprised that this doesn’t matter, i.e. that it is possible to model the global climate without any real idea what the true mean is.

    Comment by snowrunner — 27 Jan 2011 @ 12:06 PM

  230. Re: #227 (J.Bob)

    No.

    Comment by tamino — 27 Jan 2011 @ 12:09 PM

  231. Geoff Beacon, before betting too much check papers like Zhang et al., “Arctic sea ice response to atmospheric forcings with varying levels of anthropogenic warming and climate variability”.

    Above all do not bet with a crank (and how do you know, online?). Even if you are sure you have won, the crank will be surer of the opposite and will hound and even sue you to get you to pay up. That possibility alone makes your bet a poor one.

    Comment by Pete Dunkelberg — 27 Jan 2011 @ 12:10 PM

  232. Ray Ladbury: “BTW, Empirical Bayes is a bastardized Bayesian approach where you cheat and look at the data at least a wee bit to at least set the best-fit values for the parameters of your distribution.”

    IOW it spoils the fun (for certain values of fun). But haven’t you been suggesting, in effect, looking at the data just a little bit? Granted there is a difference between a little bit and a lot. Still, the discussion here seems to be about unempirical Bayes being unsatisfactory. And what was that you said about flying?

    Comment by Pete Dunkelberg — 27 Jan 2011 @ 12:20 PM

  233. 225, Ray Ladbury: BTW, Empirical Bayes is a bastardized Bayesian approach where you cheat and look at the data at least a wee bit to at least set the best-fit values for the parameters of your distribution.

    You and the others are already fully informed about all extant data, so all of your priors are at least weakly data dependent, and most of them are strongly data dependent. For computing posterior distributions in the future from future data, you probably can’t do better than use the fiducial distribution from the current best data for S itself. Even though it assigns some probability to negative numbers, the amount is tiny and negligible for practical purposes; if that annoys you, you could truncate the distribution at 0, or put all of the probability for negative values at 0.
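    A minimal sketch of those two options, using a stand-in normal for the fiducial distribution of S (N(3, 1.5), numbers invented purely for illustration):

        import numpy as np
        from scipy import stats

        S = np.linspace(-5.0, 15.0, 4000)
        pdf = stats.norm.pdf(S, 3.0, 1.5)            # stand-in "fiducial" distribution for S
        p_neg = stats.norm.cdf(0.0, 3.0, 1.5)        # mass below zero, about 0.02
        print("mass below zero:", round(p_neg, 3))

        # Option 1: truncate at 0 and renormalize
        trunc = np.where(S >= 0, pdf, 0.0)
        trunc /= np.trapz(trunc, S)

        # Option 2: keep the positive part and move the negative mass to a point mass at S = 0
        positive_part = np.where(S > 0, pdf, 0.0)
        print("continuous mass above zero:", round(np.trapz(positive_part, S), 3),
              "plus a point mass of", round(p_neg, 3), "at S = 0")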

    In his book “A comparison of Bayesian and frequentist approaches to estimation” (Springer, 2010), F. J. Samaniego argues that in order for Bayesian inference to improve upon frequentist inference (in a Bayesian sense of improve!), the prior has to be at least accurate enough; he calls this the “threshold” that the prior has to pass. Granted that fiducial distributions are not highly regarded as probability distributions by Bayesians, they are probably more accurate in real settings than other proposed distributions. I think that you’d have a lot of trouble showing that either the Cauchy or lognormal is more accurate in this case. At least if you want the prior to represent shared evidence instead of private opinion.

    Coming back to my favorite topic, (how) does the comparison of the data to scenarios a, b, and c affect the posterior distribution of S, starting either with the fiducial distribution, Cauchy, or lognormal as a prior? Is this something that can be done with EdGCM and Winbugs?

    Comment by Septic Matthew — 27 Jan 2011 @ 12:31 PM

  234. Here’s another peer-reviewed paper touted in the denialosphere, claiming recent decreased stratospheric water vapor: http://www.nature.com/news/2010/100128/full/news.2010.42.html

    It might be worth your while to discuss it.

    Comment by Septic Matthew — 27 Jan 2011 @ 12:50 PM

    I’m concerned about my impact on the environment. I heat my home with heating oil but am worried about what this is doing to the environment. I live in a rural area of Lincolnshire so there’s not much alternative to heating my home with oil except wood and LPG… but I don’t know if those are even more harmful. I have just found a heating oil website that offers Group Buying Days; this seems like a good way to help the environment because you can order with others, which helps to keep tankers off the roads, reducing CO2 emissions.
    I would like to see more information on the internet about the effects of heating oil on the environment. On most climate change sites I go on there are articles on gas and electric heating but little on the effects of heating oil.

    Does anyone have any figures about heating oil and ways to minimize my impact on the environment?

    Comment by Ken Lowe — 27 Jan 2011 @ 12:51 PM

  236. Edward :

    “213 Gilles: Discount rate is irrelevant because extinction of the human race is an infinite cost. Regardless of the discount rate, the present cost is still infinite.”

    Do you have a serious evaluation of a threshold above which the human race would disappear?

    Speaking for my own case: I’m living in a valley in France, close to a medium-size city, below a 3000 m (10,000 ft) high mountain. It’s pretty cold in winter, which allows a fair amount of snow to cover the summits; it gradually melts in summer, feeding small creeks and a river (there are huge dams ensuring hydropower and regulating the water supply some hundreds of km above). Well, not too bad a place to live in. The soil is rather fertile; I grow some vegetables in my garden, mainly for fun, but I could grow more. There are a lot of woods on the hills above, providing heating wood and even chestnuts, mushrooms, animals… Can you explain to me exactly why everybody around me, including me, should die within some decades? Do you think it is a LIKELY hypothesis, and why?

    “BPL has computed the date of collapse under BAU as some time between 2050 and 2055. That is very soon, and we have no time to waste.

    Is that stark enough for you?”
    Well, if mankind were to disappear between 2050 and 2055, it is pretty obvious that “B” couldn’t be “AU” well before this date, isn’t it? So what’s the coherence of such an estimate?

    Comment by Gilles — 27 Jan 2011 @ 12:59 PM

  237. FurryCatHerder: “And I say this because =that= is when the deniers are going to have a very large face full of pie.”

    I predict the standard response – deny harder. I also expect dropouts from the denier team, as more people think “Oh oh. We really do have a problem here.” I expect steady if slow movement toward the 65 – 75 percent majority that may be required to move politics against the force of money. But I should temper this mild optimism with the awareness that another IPCC AR is coming, and another denier onslaught is probably coming to counter it. I don’t know what the professional deniers can come up with to top the email attack, but I expect it to be surprising and impressive (to the low information majority). In other words, you’ll be surprised.

    btw Ray Ladbury thanks. I looked up “empirical Bayes.” It looks like a recognized method.

    Comment by Pete Dunkelberg — 27 Jan 2011 @ 1:01 PM

  238. Thank you to tamino and Ray Ladbury.

    I have hardly any more statistics than the rat we studied in Psych 101 over 40 years ago.

    Comment by Jeffrey Davis — 27 Jan 2011 @ 1:11 PM

  239. Septic Matthew:

    http://www.realclimate.org/index.php/archives/2010/01/the-wisdom-of-solomon/

    Comment by Rattus Norvegicus — 27 Jan 2011 @ 1:43 PM

  240. Re: Gavin’s response to #216, and #218 (tamino)

    The blog post on Patrick Frank’s paper that spammed denial space shows a graph of the surface temperature anomaly from 1880 to 2010 with gray vertical bars behind it representing Frank’s computed ±0.46 C uncertainty at each point. In 1880, the anomaly was about -0.25 C, and in 2009 it was about (eyeballing) +0.45 C. Is he really just claiming that the anomaly trend in the graph is meaningless because each data point is indistinguishable from 0? If that’s really all he is saying, then is his claim wrong because the anomaly trend so closely matches the anomaly trend in the satellite data, or is he wrong because a trend is meaningful even if each data point is indistinguishable from 0?

    Comment by Martin Smith — 27 Jan 2011 @ 1:51 PM

  241. Re: #240 (Martin Smith)

    First, his argument about the uncertainty of the surface record is bull.

    Second, even if each individual data point is indistinguishable from zero, it’s possible for the trend to be greatly distinguishable from zero.

    Third, the surface temperature records match the satellite records not just in trend, but in fluctuations about the trend, and to a very high degree of precision (way better than 0.45 deg.). If he were right about the uncertainty of the surface record, this would be impossible.

    [Response: I'll add that I am reliably informed that Eq 9 (p976) and his case 3b (p 974/5) are clearly wrong. The equation for the mean of the error instead of the error of the mean has been taken from ref #17. Other issues are that he apparently claims the error has a similar value for 30-year means as it does for a monthly mean. And the N used in p982 is not the appropriate one. This is all apart from the obvious inconsistencies already noted. - gavin]
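    The second point above is easy to check numerically with a toy series (invented numbers, not the actual surface record):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        years = np.arange(1880, 2010)
        true_trend = 0.7 / (years[-1] - years[0])            # about 0.0054 C per year
        # Each "annual anomaly" carries a 0.23 C one-sigma error, i.e. +/-0.46 C at two sigma,
        # so taken one at a time the points say little about whether any year differs from zero.
        anom = -0.25 + true_trend * (years - years[0]) + rng.normal(0.0, 0.23, years.size)

        # The least-squares trend is still overwhelmingly significant, because the trend's
        # standard error shrinks with the number and time-spread of points, not the per-point error.
        fit = stats.linregress(years, anom)
        print(f"trend = {fit.slope * 100:.2f} C/century, p-value = {fit.pvalue:.1e}")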

    Comment by tamino — 27 Jan 2011 @ 2:11 PM

    Pete Dunkelberg, yes, I was probably too flippant in calling it a bastardized Bayesian analysis. I merely meant that since the Prior is not independent of the data, it is not really Bayesian, but more likelihood based.

    Comment by Ray Ladbury — 27 Jan 2011 @ 2:13 PM

  243. Septic Matthew, independent of scenarios A, B and C, the current climate state favors 3 K/doubling based on multiple studies.

    Comment by Ray Ladbury — 27 Jan 2011 @ 2:16 PM

    Jim, I meant no specific claims against climate scientists, nor that the science should be discarded. In the history of science over the past 100 years, in all fields, there have been deliberately falsified results. Recent ones include the case at Harvard with that professor of psychology, whose name eludes me at this moment. The others I recall in recent times are: the scientist in South Korea who was found to be falsifying stem cell results, and the so-called human baby cloning in France a decade or so ago. In this thread I am raising issues with GCMs, but I am NOT implying that the models are being made to ‘lie’ or that anyone here is ‘lying.’ I do think there are issues that need to be looked at: the whole global climate system cannot be truly quantified, and estimates and approximations may be far enough from accurate that some improvements are an absolute necessity. In my next post today, in response to Ray Ladbury, I am going to go from a general discussion of Bayesian statistics to specific methods and their various advantages and limitations.

    [Response: OK hold on please. If the purpose of your earlier post that I commented on, was not to imply that your use of the term "false" equated to outright research misconduct via dishonesty, then why do you, here, defend yourself using two high profiles examples of exactly that very thing??? This makes no logical sense at all. Your claim here that you meant that scientists can sometimes get things wrong--well, thanks a lot for that light bulb insight.--Jim]

    I think starting out general and going to more specific examples is not a bad way to have an informed and intelligent dialogue, but by all means, please tell us more about your data collection and agreement with models.

    [Response: Say what? The problem is you make lots of big sweeping generalizations and assertions with no logical defense of what you assert. Who exactly is it that you expect to just accept these things? And my "data collection etc" is 100% irrelevant.--Jim]

    Finally, if you prefer specific reasoning first, then going out to generalized heuristics, that is just fine too. Neither you nor I may be seeking the other’s approval, but I do want to talk some of these things out. Bear with me, I hope, moderators, when in my next post I do as I used to in my earliest posts and show the equations, explain the general usefulness and specific utility, and then in words both quote and interpret what Bayes is and is not good for, etc… much like my early post showed a link between thermodynamics and relativity, when I first joined RC.

    [Response: Bring it. And when you do, make it cogent and relevant and understandable. Otherwise you are wasting peoples' time here--Jim]

    Comment by Jacob Mack — 27 Jan 2011 @ 2:20 PM

  245. 239, Rattus Norvegicus,

    Thank you!

    I misread the reference and I thought that it was new today. bleh.

    Comment by Septic Matthew — 27 Jan 2011 @ 2:34 PM

  246. Crandles 224

    You say

    We already knew the decline in sea ice was faster than models predict, presumably this effect was already acting over the last few years and we had records in 2005 and 2007 but not in 2006, 2008, 2009 or 2010.

    “We” may know, but others may not acknowledge this. See my comments on “Losing time, not buying time”, http://www.realclimate.org/index.php/archives/2010/12/losing-time-not-buying-time/ where I criticise the Trillion Tonne Scenario for being too optimistic. This says that when we get to emissions of a trillion tonnes of carbon since 1750 we will get a 2 deg C rise in temperature, and that this is just about bearable. But this scenario was created with models that may underestimate warming because they underestimate feedbacks, such as sea-ice albedo. (Also see “Fast and Super-fast – Disappearing Arctic sea ice”, http://www.brusselsblog.co.uk/?p=45)

    Another feedback which may be underestimated is methane from Arctic tundra. The UK Government’s Climate Change Committee have said

    On the subject of methane and climate feedback; we do not assign probabilities to methane release because we do not yet know enough about these processes to include them in our models projections.

    See “The rise in methane and the Climate Change Committee”, http://www.ccq.org.uk/wordpress/?p=120.

    To echo Tamino’s earlier comment – that doesn’t give me much comfort in terms of climate change. For risk analysis, I’d suggest using the much more pessimistic scenario. We need a plan B. See “Plan A might fail … so we need Plan B”, http://www.ccq.org.uk/wordpress/?p=139.

    Thanks for the earlier help. I’ve placed a bet with Intrade (rather “offered to sell a contract”) on this year’s Arctic sea ice.

    Comment by Geoff Beacon — 27 Jan 2011 @ 2:42 PM

  247. 243, Ray Ladbury: Septic Matthew, independent of scenarios A, B and C, the current climate state favors 3 K/doubling based on multiple studies.

    Sure. From those studies you can develop a prior distribution. Then as other data accumulate, data like the time series of global mean temperature, you can compute the posterior distribution given the new data. You could take something that approximates your idea of Hansen’s opinion back in 1988, using his actually chosen value as the mean of the prior distribution, and compute the resultant posterior distribution conditional on today’s data. Eventually data overwhelm differences between priors, so by now all reasonable priors from 1988 probably produce indistinguishable posterior distributions, don’t you think so? If not now, then soon in the future.

    Of course it’s lots of work that I can not do myself right now, but accumulating evidence from diverse sources across time is one of the advantages of Bayesian inference. A plot, by year, of the 95% credible interval for S would be meaningful, don’t you think? That’s a way of doing sequential analysis.
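    The mechanics of such a plot are simple to sketch (everything here is invented: a broad 1988-style prior and fake yearly estimates of S stand in for a real likelihood tied to observed temperatures and forcings):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        S = np.linspace(0.1, 10.0, 2000)
        posterior = stats.lognorm.pdf(S, s=0.8, scale=3.0)       # broad starting prior
        posterior /= np.trapz(posterior, S)

        for year in range(1989, 2011):
            obs = rng.normal(3.0, 2.0)                           # fake annual estimate of S
            posterior *= stats.norm.pdf(obs, loc=S, scale=2.0)   # likelihood of that estimate
            posterior /= np.trapz(posterior, S)
            cdf = np.cumsum(posterior) * (S[1] - S[0])
            lo, hi = np.interp([0.025, 0.975], cdf, S)
            if year % 5 == 0:
                print(f"{year}: 95% credible interval for S = [{lo:.2f}, {hi:.2f}] K")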

    Comment by Septic Matthew — 27 Jan 2011 @ 2:50 PM

  248. @228, Hank Roberts

    I’m well aware of the measured OHC figures and I’m aware Gavin co-authored a paper comparing the model output to measured values to 2003.

    Instead of providing utterly useless links and mistakenly thinking you’re helping, how about you provide a link to OHC figures calculated from model output post-2003?

    In case you’re not sure why you’re not helping: your “e.g.” is “GEOPHYSICAL RESEARCH LETTERS, VOL. 30, 1932, 4 PP., 2003″. Pay particular attention to the date.

    Comment by TimTheToolMan — 27 Jan 2011 @ 3:06 PM

    Jacob Mack, oh if only you’d come along sooner and told all those climate scientists that what they were trying (and succeeding at) was impossible. Think of the time you could have saved them! Of course, there is that small matter of all those verifiable (and verified) predictions the scientists have made. Wonder how that works if the model they are using is impossible? Puzzle, that. Oh, wait! Maybe we need more than your personal incredulity on which to base our understanding of the Universe. Why, yes. That might be it. It appears argument from personal incredulity is a logical fallacy. Check ‘em out, Jake. How many more can you collect:

    http://www.theskepticsguide.org/resources/logicalfallacies.aspx

    Comment by Ray Ladbury — 27 Jan 2011 @ 3:14 PM

  250. http://multi-science.metapress.com/content/c47t1650k0j2n047/?p=7857ae035f62422491fa3013c9897669&pi=4

    Patrick Frank article in Energy and Environment stating that, as a result of a ±0.46 C instrumental uncertainty, the 20th century temperature rise is statistically indistinguishable from 0. Sounds like the Dunning-Kruger effect at work.

    Comment by Mark — 27 Jan 2011 @ 3:14 PM

  251. > SM
    Read what you linked.
    Nature (a year ago) mentioned a study published in Science.

    Here’s how to check:
    http://www.google.com/search?q=site%3Arealclimate.org++“water+vapor”+stratosphere+Science

    Comment by Hank Roberts — 27 Jan 2011 @ 3:19 PM

  252. Sometimes, I feel like knocking heads together.

    On the one hand, there are people painting a picture based on extreme estimates, and even beyond extreme estimates. On the other, there are people rejecting the reality of these extreme estimates, and taking the selfish view that it’s not their problem.

    Neither view is helpful.

    The extreme scenarios may not come to pass tomorrow, but if we remain on this path, then they will, inevitably, become reality, in some form or another. Just because some of us may be dead before any of this becomes an issue is not an acceptable reason for ignoring it. Nor is it acceptable to ignore it because we live in comfortable conditions likely to be able to survive moderate warming for a while longer than most places.

    Selfishness is a useful evolutionary trait, but when we are condemning our future to oblivion, it may be a good time to look up “tragedy of the commons”, and ponder the implications.

    Comment by Didactylos — 27 Jan 2011 @ 3:42 PM

  253. Bayesian Statistics, a general primer:

    As discussed by: Hierarchical Modeling for the Environmental Sciences: Statistical Methods and Applications (2006) by James S. Clark and Alan E. Gelfand;

    Search For Certainty: On the Clash of Science and Philosophy of Probability (2009) by Krzysztof Burdzy;

    and for more specific relevance and scope: Review of the U.S. Climate Science Program’s Synthesis and Assessment Product 5.2, “Best Practice Approaches for Characterizing, Communicating, and Incorporating Scientific Uncertainty in Climate Decision Making” (2007).

    I also back up and provide a simpler but accurate explanation, in a more algebraic and easier-to-read format, taken from Statistics: A Step by Step Approach, International Edition (McGraw Hill), written by Alan Bluman. Bluman, in my opinion, writes the best and most comprehensive self-teaching textbook for general statistics. He works in terms of H0: mu = mu0 and H1: mu ≠ mu0, rather than lambda = theta or lambda ≠ theta, but not everyone here is used to looking at the symbols that way.

    In general we can look at Bayesian equations in the following manner:

    p(θ | y, λ) = p(y, θ | λ) / p(y | λ) = p(y, θ | λ) / ∫ p(y, θ | λ) dθ, where p(y, θ | λ) = f(y | θ) π(θ | λ). What is going on here is an approach to modeling where the observed data and any unknowns are treated as random variables. As we will see in a minute this can be expanded upon to include other factors and to better bridge priors and current observations after they are treated separately.
    f(y | θ) is the distributional model for the observed data y = (y1, y2, y3, y4, … y50, etc.). The vector of unknown parameters is θ, and keep that in mind, as there are aspects that do and do not depend upon θ. Current data we know or see do not depend upon θ. θ is assumed to be a random quantity sampled from a prior distribution (a lot of people talk of θ itself as a prior, but we need to delve a little deeper), which is π(θ | λ), where λ is a vector of so-called hyperparameters. λ itself is a parameter that controls such things as variation across populations or spatial similarity. If λ is known, then we can use inference (inferential statistics) for θ, the unknown parameters, from the general equation I opened this post with.

    Bayesian inference, which is now a paradigm in its usage and importance in modern science, does have advantages over the frequentist statistical philosophy: it has a more unified approach to data analysis, and it incorporates prior professional opinion/external empirical evidence into results through the prior distribution π. Here lie some issues too: if a couple of studies being heavily relied upon have some unknown flaws, the Bayes approach may or may not be able to correct for those, whereas direct empirical observations can better correct for such issues, and there are some other frequentist approaches, though much more tedious, that can better control for such errors. Now, it is impossible to observe the whole human system in real time, all the time, just like we cannot observe the whole earth and see all heat flow and temperature fluxes and what is causing such fluxes, so certainly do not throw out the baby with the bathwater either. I think re-analysis with several methods in addition to what is used may be of use, including, but not limited to, counterfactuals and some falsifiable methodology. More on that in a separate post.

    Now some more math:

    p(θ | y) = N(θ | μ, j²) N(y | θ, σ²) / p(y) ∝ N(θ | μ, j²) N(y | θ, σ²) = N(θ | (σ²μ + j²y)/(σ² + j²), σ²j²/(σ² + j²)). Again this is a general form, and there can be tweakings and rearrangements to carry out various calculations and estimations. This last equation is used when we assume a Gaussian observation model and a Gaussian prior, and from those assumptions we compute the posterior in closed form.
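    A quick numerical check of that normal-normal update, with invented numbers (prior N(mu, j^2), one observation y with known variance sigma^2):

        import numpy as np

        mu, j2 = 2.0, 4.0                      # prior mean and variance
        y, sigma2 = 3.5, 1.0                   # observation and its variance

        post_mean = (j2 * y + sigma2 * mu) / (sigma2 + j2)
        post_var = (sigma2 * j2) / (sigma2 + j2)
        print("closed form: mean", post_mean, "variance", post_var)    # 3.2 and 0.8

        # Brute-force check: multiply prior by likelihood on a grid and normalize
        theta = np.linspace(-10.0, 15.0, 200001)
        dens = np.exp(-(theta - mu)**2 / (2 * j2)) * np.exp(-(y - theta)**2 / (2 * sigma2))
        dens /= np.trapz(dens, theta)
        m = np.trapz(theta * dens, theta)
        v = np.trapz((theta - m)**2 * dens, theta)
        print("grid check:  mean", round(m, 3), "variance", round(v, 3))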

    This is a very generalized beginning to the disucssion and I welcome corrections from: statisticians, physicists and mathematicians who work in climate science as it specifically pertains to the work they do using Bayes statistics, or related methods that are of importance. The equations are straight from my textbooks and papers I use in my work, or use to analyze other’s work.

    In the more simplified terms of Bluman (2009), Bayesian statistics takes the product of the values in the numerator and divides it by the sum of the values in the denominator to obtain a value. We can sketch a tree, apply potential or probable values, and get a rough outlook on where we are at and where we may be going. However, if critical data are missing, Bayesian methods can fall short, even if the missing data or data points seem to be small and inconsequential.

    Frequency philosophy attempts to provide scientific explanations for probability, which has not been very successful. On the other hand, Bayesian philosophy can still be far too subjective as well. These ideologies and applications, as I have mentioned several times, should not be completely thrown out, or written off as complete failures as Burdzy and other experts assert, but his warnings about both methods are of immense relevance to mathematics and science in general. I also submit that the use of so much statistics in the absence of empirical evidence provides a telling point about some of the weaknesses in the GCMs. Since not all important and influential data can be input to the models, there must be assumptions being made, as seen in math and science methods along the way, but perhaps more so when modeling such a complex climate system.

    Backing up a minute: I personally work with Bayesian statistics every day in my own work. It is also used by physicians, typically less experienced ones; as they become more experienced, they can rely more on professional experience, but this is no bad reflection on the utility of Bayesian statistics itself. However, it does have advantages and disadvantages: in climate science, genetics and the environmental sciences it takes supercomputers, utilizing Markov Chain Monte Carlo integration methods among others, to compute and solve otherwise intractable integrals. This is both a good thing and a call for caution. The computers make things much easier and are in fact indispensable; however, there remain good chances for input error, computer error and, at times, mistakes missed by quality controls and error analysis. Also keep in mind that Bayesian methods carry a certain subjectivity, and a heavy reliance upon judgment which, though not absent in frequentist methodology and elsewhere, is more pronounced here. When dealing with one gene and one function at a time, though this can be an issue, it is less so; but in more vast and complex systems like a global climate system, or a whole genome, the probability of errors, and the unavoidable clashing of errors/cognitive dissonance in results, can be immense and yes, very robust. It is unavoidable that raw data must be worked up, summarized, and statistical methods applied. The incompleteness and subjectivity of Bayes makes it subject to at least some scrutiny and calls for careful re-analysis. The other issue is that we must be very careful what ‘current data’ is entered, and this holds true of any field, not just climate science. Of late there have been numerous errors in studies conducted in medicine and psychiatry, for instance, as reported over at Neurocritic, the magazine Science and various online science blogs. The main issue was the misuse, misinterpretation and analysis of statistics, setting up control/experimental groups specifically, and inferring from larger populations in general. In terms of climate science I want to see more relationships between reported warming and thermodynamics, which as we all know contains immutable laws. Statistics serves to look at uncertainty and to summarize existing data in a more cohesive manner. I want to see more studies cited as reporting 3 degrees C from a doubling of CO2 not only statistically and with an analysis of albedo changes, IR trapped, and CH4 emitted from clathrates, but showing how, more precisely, heat relates to an upward trend in temperature, thermodynamically. I want to see thermodynamics incorporated in the models, and in the Bayesian methods too. We know the four laws cannot be violated, and we know the first and second laws are most pertinent to this discussion of AGW.

    I am interested in looking at, understanding and discussing further the uncertainty in uncertainty analysis. This is something I am looking to do in an upcoming lab job I will be working in, myself in another field of science. Hence why this discussion on models, certainty, uncertainty and statistics is of such importance to me.

    I am out of time and would like to at least get this post out prior to going to work. Oh, and I agree with David B. Benson 100% in the ongoing discussion on S and the Cauchy. More on that in my next post.

    Comment by Jacob Mack — 27 Jan 2011 @ 3:52 PM

    Ray Ladbury, please show me your work. Please show the general equations you are working with and why. In this thread I am making no personal attack on anyone. Let us speak of statistics and thermodynamics, one at a time, or in separate paragraphs in the same posts. I know many climate scientists are physicists, chemists, meteorologists, and mathematicians. We are all prone to error, or to make judgments where there may be an error at times. I do not know everything, nor will I, but let us behave civilly and continue the discussion. I just posted some generalized equations and some applications under specific assumptions. I just posted some of my concerns about the methods I actually use myself and have studied in the classroom, in textbooks and in the real world. Let us continue from there, Dr. Ladbury :)

    Comment by Jacob Mack — 27 Jan 2011 @ 3:59 PM

  255. #235 OT, Ken , you can drastically reduce your use of heating oil ,
    lived in Tasmania for two years, I purchased light weight thermal underwear for the whole family, Google DAMART,

    Comment by john byatt — 27 Jan 2011 @ 5:06 PM

    Pete Dunkelberg @222 — I’m not sure what you mean by a paleo prior, but if you mean actually look at the evidence then that is not in the spirit of the cloistered expert. The cloistered expert knows the physics (and geology), but has no access to the actual data. The cloistered expert determines, as best she can, her subjective (but informed) prior and only then looks at the evidence to update to the posterior pdf.

    Ray Ladbury & Tamino — I’m certainly finding this discussion of great interest. For reasons I’ll explain in a subsequent comment, can we restrict attention to a cloistered expert in the spring of 1959 CE? The cloistered expert knows physics and geology and has access to all the literature up to that time; in particular, whatever of “The Warming Papers” appeared before then, and so of course Guy Callendar’s work as well as Arrhenius’s two attempts to compute S.

    This cloistered expert isn’t a good enough mathematician to use a uniform pseudo-distribution of the entire real line; she insists that her subjective pdf integrated over the entire real line has value 1.

    Ray Ladbury — I fear you are informed by the data when you insist that S cannot be other than non-negative. I am quite, quite certain that it is, but nowhere near the absolute certainty I have that entropy is non-negative. For entropy the pdf certainly has support bounded below by 0, but for S? I’m not so certain, so make it just an exceedingly small value.

    Comment by David B. Benson — 27 Jan 2011 @ 6:40 PM

  257. Jacob, I’m more than happy to have a discussion based on evidence. I just haven’t seen any coming from you in support of your position yet. I also have trouble with arguments that impugn the competence or integrity of an entire field of scientific endeavor. Moreover, since virtually every professional or honorific organization of scientists has taken a position in support of the science, and since the scientific consensus is arrived at via the scientific method, when you impugn the consensus, you are impugning the entire scientific community AND the scientific method.

    The thing is, Jacob, I am not an expert in climate science. I’ve worked at it and understand the basics well enough to see that the science is pretty coherent. I understand most of the statistical and analytical techniques. I know and understand a good portion of the evidence. However, ultimately I tend to buy into the consensus because I can see that it is arrived at via the scientific method. And I know from experience that the scientific method generally yields reliable consensus.

    I look at the other side of the argument, and I see these guys aren’t doing science. They are not stating clear, testable hypotheses. Their story is switching from day to day. They aren’t gathering data, and most important, they aren’t developing new understandings of the climate. When I see two groups of scientists–one developing new techniques, making new, testable hypotheses and steadily advancing their model, and the other saying, “Oh, it’s too complex to understand,” I’m going to throw my weight behind the former.

    So, Jacob, if you can show me a theory that makes as much sense of Earth’s climate and makes as many verified predictions as the current consensus model and which doesn’t imply serious problems due to warming, I’ll be the first to pat you on the back. Until then, I’m going to have to go with the folks who are doing science.

    Recaptcha: Fricking Chinese characters? Come on.

    Comment by Ray Ladbury — 27 Jan 2011 @ 8:52 PM

    David Benson, based solely on the fact that Earth was 33 degrees warmer than its blackbody temperature, on what was known of the absorption spectrum of CO2, and on the fact that Earth’s climate did not exhibit the exceptional stability characteristic of systems with negative feedback, I’d probably still go with restricting CO2 sensitivity to 0 to +infinity. I just don’t see any reason, empirical or physical, to go with a nonzero probability for a negative sensitivity. Now the exact form for the sensitivity Prior probability distribution based on 1959… that I’ll have to think about.

    Comment by Ray Ladbury — 27 Jan 2011 @ 8:58 PM

  259. Jacob Mack:

    Ray Ladbury, please show me your work. Please show the general equations you are working with and why. In this thread I am making no personal attack on anyone. Let us speak of statistics and thermodynamics, one at a time, or in separate paragraphs in the same posts. I know many climate scientists are physicists, chemists, meteorologists, and mathematicians. We are all prone to error, or to make judgments where there may be an error at times. I do not know everything, nor will I, but let us behave civilly and continue the discussion. I just posted some generalized equations and some applications under specific assumptions.

    So it’s you against thousands of physicists, chemists, meteorologists, and mathematicians.

    And we’re supposed to believe that you’ve shown them all to be wrong, based on some hand-waving posts absent of much detail, and no willingness to summarize your astounding, paradigm-scuttling, Nobel-prize-winning achievement in the form of a scientific paper that will lead to your name being established in the firmament with the likes of Galileo, Einstein, and Bohr.

    Why don’t you claim your laurels by codifying your rock-solid debunking of physics, chemistry, meteorology, and mathematics?

    Beating down Ray in a blog thread (not that he’s actually being beaten down, I’m being hypothetical here) isn’t doing science.

    C’mon, reap the laurels, the rewards, if you can actually do it you’re a shoo-in for a seat in the House, if not the Senate, and if you don’t want that, the tea party lecture circuit’s your dime.

    Lay your cards on the table in a credible venue …

    Comment by dhogaza — 28 Jan 2011 @ 12:18 AM

  260. See:
    http://dotearth.blogs.nytimes.com/2011/01/27/on-hollywood-hiv-alcohol-and-warming

    which contains a comment by Gavin. The dotearth article is on a subject I have advocated for RC: How to put the science into action. We all know that the models work well enough to support strong action. Updates to models are not required to decide that a radical departure from BAU is required immediately.

    Comment by Edward Greisch — 28 Jan 2011 @ 2:10 AM

  261. Pete Dunkelberg 231

    Thanks for the reference to Zhang et al.,
    “Arctic sea ice response to atmospheric forcings with varying levels of anthropogenic warming and climate variability”
    http://www.agu.org/pubs/crossref/2010/2010GL044988.shtml

    The abstract suggests some good news, modelling a later time for a summer free of Arctic sea-ice than one might expect from extrapolating Arctic sea-ice volumes. If falls in Arctic sea-ice volume were to keep up the pace they have had over the past decade, the Arctic would be all open sea in summer in under ten years. See http://psc.apl.washington.edu/ArcticSeaiceVolume/IceVolume.php

    From the point of view of climate modelling the all-gone moment isn’t as important as the magnitude of the change in albedo – particularly in the spring, summer and autumn.

    I don’t particularly bet to make money but because I think a market in environmental futures is important.

    I suppose I’ll have to find the time to negotiate the payment system and then read the actual paper from Zhang et al.

    Comment by Geoff Beacon — 28 Jan 2011 @ 7:10 AM

  262. Ken Lowe,
    27 Jan 2011 at 12:51 PM

    A back-of-the-envelope calculation suggests you can save very little in GHG emissions by optimising the delivery of your heating oil. If you want to really put a dent in your GHG emissions, reduce your consumption of heating oil. Insulate your home as well as you can. Lower the thermostat. Install a solar water heater; if you oversize the collector and storage tank a bit, it can help to heat your home too. A ground-source heat pump is also a good, if somewhat expensive, option to completely eliminate the need for heating oil.
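    Very roughly: burning a litre of heating oil releases around 2.5–2.7 kg of CO2, so a typical rural consumption of 1,500–2,000 litres a year is on the order of 4–5 tonnes of CO2, whereas sharing a delivery might save a few tens of kilometres of tanker driving, i.e. a few tens of kilograms of CO2 at most, well under one percent of the total.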

    Install solar panels, the UK has an exceptionally generous feed-in-tariff, use it. It won’t save you heating oil, but will lower GHG emissions from powerplants.

    I don’t think however this is the blog for this topic. There are certainly countless forums in the UK where you can get much better advice than here.

    Comment by Anne van der Bom — 28 Jan 2011 @ 7:32 AM

  263. Re #241 (tamino & gavin)

    The denial space template has been adapted to be a rhetorical blog post that extracts key paragraphs and graphics from an allegedly peer reviewed and published scientific paper. A link to the paper is always provided, but that link refers to the paper in some aggregation site, which often charges for access to a pdf of the paper. It all looks so legitimate, and few people have the time to fact check claims in the denialist blog, let alone the scientific knowledge to be able to do it.

    That’s what was done with this Patrick Frank paper, which you guys quickly refuted, but which I only had a strong suspicion was total gibberish. I suspect that most people who get referred to these denial space blogs will read them and take their conclusions on board without trying to verify them. I see lots of these denial blogs, because I am in a never-ending argument with a denier on an Investor Village message board. He claims to be an engineer, yet he continually posts denier blogs without doing any fact checking. I have refuted many of them myself, and often I refute them with info from RealClimate (thanks much), but there is so much of this denial stuff being published as rhetorical blogs, there is little hope for the casual reader. And most people are casual readers.

    Clearly, the guy I argue with is only interested in short term stock market gains, and his attitude seems to be typical of people with some connection to the coal and oil industries. It all seems orchestrated, but I am loath to wear a tinfoil conspiracy hat. And more often now, I catch myself in a dark place where I am hoping AGW will increase suddenly and shut these people up once and for all. I don’t like that.

    Comment by Martin Smith — 28 Jan 2011 @ 8:12 AM

  264. 258, Ray Ladbury: exceptional stability characteristic of systems with negative feedback,

    What did you mean by that?

    Negative feedback can produce oscillations, the simplest case being a harmonic oscillator. Increasing the negative feedback, as might happen in the atmosphere if global warming creates increased cloud cover (hence albedo), can increase the amplitude of the oscillations.

    Comment by Septic Matthew — 28 Jan 2011 @ 12:56 PM

  265. Pete Dunkleberry @ 237:

    Well … “deny harder” is always an option, but one of the primary claims supporting the denialosphere is the relatively unavoidable fact that we’ve not had the kind of record high that is unarguably a new record high.

    When we break whatever the denialosphere think is the record, which we almost certainly will in the next 3 years, they won’t have that excuse anymore.

    Comment by FurryCatHerder — 28 Jan 2011 @ 9:26 PM

    Ray Ladbury @258 — Carl Hauser is also quite interested in Bayesian reasoning, and today over a long lunch we considered the question of how the expert in 1959 CE would be able to construct a prior pdf based on what was known then. Carl was opposed to a uniform distribution over an interval [a,b] on the general grounds that a Bayesian does not exclude any values in a prior, since no amount of evidence can ever restore a non-zero probability; one’s mind is made up. That would be ok in situations such as a pdf for, say, temperature, where we already knew in 1959 that 0 K is the lower limit, but not for S. The other problem with a uniform pdf is the assumption of uniformity. In 1959 we have Arrhenius’s initial estimate of 6 K, his revision to 1.6 K and then Guy Callendar’s revision back to a higher value. So given all the things which might matter that were left out of those estimates, one is still left with a sense that around 1.6–6 K is more likely than either smaller or larger values.

    Carl suggested attempting a reductionist program, to estimate priors for those factors, such as cloud changes, about which nothing was known. We were unable to see how to combine those into a prior for S. We closed by agreeing that the prior for S would need support over the entire real line, going to zero rapidly for both negative and large positive values.

    Along the way I did suggest consulting several experts to pool their estimates for S. This was done in Tol, R.S.J. and A.F. de Vos (1998), A Bayesian statistical analysis of the enhanced greenhouse effect, Climatic Change 38, 87–112, but we agreed that in 1959 it might have been difficult to find enough experts.

    Comment by David B. Benson — 28 Jan 2011 @ 9:30 PM

  267. Congratulations; you’ve built a highly effective autocorrelative model with no explanatory power.

    Comment by John Dixon — 28 Jan 2011 @ 10:21 PM

  268. David B Benson, frequentist approaches still have some well documented advantages as well.

    Comment by Jacob Mack — 28 Jan 2011 @ 10:52 PM

  269. 266, David B. Benson, that is an interesting post. I hope that you are able to follow it up with calculations of the posterior distribution of S given current and future data.

    Comment by Septic Matthew — 29 Jan 2011 @ 12:19 AM

  270. SM:

    258, Ray Ladbury: exceptional stability characteristic of systems with negative feedback,

    What did you mean by that?

    Negative feedback can produce oscillations, the simplest case being a harmonic oscillator.

    You’re suggesting a harmonic oscillator is unstable?

    Comment by dhogaza — 29 Jan 2011 @ 1:26 AM

    Spontaneous cycles do not appear with negative feedbacks: they are intrinsically nonlinear. They occur when the equilibrium solution is unstable (with positive feedbacks), leading the system to diverge from this solution. Then nonlinear negative feedbacks kick in above some finite amplitude and produce hysteresis cycles. The system revolves around the “stable solution”, satisfying the global budget requirements (such as energy conservation) on average. Both the frequency and the amplitude of the cycles are very difficult to predict, because they rely entirely on precise nonlinear feedbacks which are not easily derived from fundamental laws (not at all, actually): well-known examples are solar cycles, ENSO, etc., whose characteristics cannot be precisely reproduced up to now.
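    The textbook toy example of this behaviour is the van der Pol oscillator (no climate content, just the mechanism): a linearly unstable equilibrium plus a damping term that only becomes negative feedback at finite amplitude gives a self-sustained limit cycle whose amplitude is set by the nonlinearity.

        import numpy as np
        from scipy.integrate import solve_ivp

        mu = 2.0  # strength of the nonlinearity

        def vdp(t, state):
            x, v = state
            # anti-damping near x = 0 (unstable equilibrium), damping for |x| > 1
            return [v, mu * (1.0 - x**2) * v - x]

        sol = solve_ivp(vdp, (0.0, 60.0), [0.01, 0.0], max_step=0.01)
        print("initial amplitude 0.01 -> late-time amplitude ~",
              round(float(np.max(np.abs(sol.y[0][-2000:]))), 2))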

    In my view, climate scientists loftily ignore the possibility of long, secular cycles that could drive variability on timescales of hundreds of years. This is by no means excluded, either by observations or by theory, and it could contribute a fair part of the natural variability of the 20th century, accordingly lowering the sensitivity to GHGs.

    [Response: Oh dear. So the thousands of papers on internal variability of the climate system were apparently written by Martian bloggers and not climate scientists? - interesting.... And we can ignore the fact that the sensitivity is barely constrained at all by 20th Century trends (mainly because of the uncertainty in aerosol forcing) and go back to making naive correlations with single factors in a hugely multivariate system.... Is there no sense in which these conversations can progress past politically-tainted declarations of personal belief? Please at least try to assimilate something from the science. - gavin]

    Comment by Gilles — 29 Jan 2011 @ 3:30 AM

    Aerosols can both cool and warm depending upon a range of physical factors. Getting to understand the effects of aerosols, timescales and complex interactions seems to be, as it should be, an active ongoing area of study. There are some very cool papers published in peer review on internal variability, to be sure.

    Comment by Jacob Mack — 29 Jan 2011 @ 2:40 PM

  273. Pardon if I’m repeating the obvious observation….

    In 1998 there was a very strong El Nino and the global annual surface air temperature surged enough to equal the upper bound of the GCM model ensemble. In 2010, the year again started with an El Nino, not as strong as 1998, but probably the strongest since then. This time global annual SAT surged again but only enough to equal the average of the model ensemble.

    Comment by Brian Klappstein — 29 Jan 2011 @ 5:30 PM

  274. Pete Dunkelberg 231

    I’ve paid the $25 and read Zhang et al., “Arctic sea ice response to atmospheric forcings with varying levels of anthropogenic warming and climate variability”
    http://www.agu.org/pubs/crossref/2010/2010GL044988.shtml

    It looks like a serious piece of work and gives estimates of the date that the Arctic will be free of sea-ice in summer as 2050 or beyond, but as I understand it:

    1. It concedes previous climate models were underestimating the fall in Arctic sea-ice.

    2. It assumes climate variability in the Arctic to be consistent with either 1948 to 2009 (or alternatively 1989 to 2009). I doubt that the past two years are typical of either of these periods.

    3. It makes no mention of “rotten ice” as reported by David Barber

    “I would argue that, from a practical perspective, we almost have a seasonally ice-free Arctic now, because multiyear sea ice is the barrier to the use and development of the Arctic,” said Barber.

    http://climateprogress.org/2009/11/08/arctic-multiyear-sea-ice-nsidc-david-barber/

    4. It necessarily has no mention of the just-published paper by Spielhagen et al., “Enhanced Modern Heat Transfer to the Arctic by Warm Atlantic Water”. Reports of this paper suggest that it is not clear how this warmer water entering lower depths of the Arctic seas affects the sea-ice, but this seems to be another unknown. If it is an unknown unknown, that is worrying.

    Allen et al., “Warming caused by cumulative carbon emissions towards the trillionth tonne”, Nature 458, 1163–1166 (30 April 2009) may be the underlying basis for the UK Government’s concentration on carbon dioxide, and thus its downplaying of other climate forcing agents such as methane and black carbon. It says:

    Total anthropogenic emissions of one trillion tonnes of carbon (3.67 trillion tonnes of CO2), about half of which has already been emitted since industrialization began, results in a most likely peak carbon-dioxide induced warming of 2 degrees Celsius above pre-industrial temperatures, with a 5–95% confidence interval of 1.3–3.9 degrees Celsius.

    Underestimated feedback effects in its climate models undermine this claim. The Arctic sea ice may be one of them. Zhang et al. may be interesting, but it does not give me more confidence in this Trillion Tonne Scenario.

    Rejection of the Trillion Tonne Scenario has enormous consequences for public policy.

    Comment by Geoff Beacon — 29 Jan 2011 @ 6:35 PM

  275. Jacob Mack @268 — I’m attempting to be a compleat Bayesian just now.

    Septic Matthew @269 — Thank you. The current attempt is to find a rational way to establish a prior distribution when little is known. For two posterior distributions, see Annan & Hargreaves; I’ll not attempt to replicate that work.

    Comment by David B. Benson — 29 Jan 2011 @ 7:24 PM

  276. 270, dhogaza: You’re suggesting a harmonic oscillator is unstable?

    Sounds like it, but really I am just asking Ray Ladbury for his definition of exceptional stability. You could call a Lorenz system or a Brusselator “stable”, even though they generate chaotic trajectories.

    Comment by Septic Matthew — 29 Jan 2011 @ 8:46 PM

  277. David Benson,
    I still don’t see how you get a negative sensitivity given what was known in 1959. There is nothing in the climate that suggests a negative sensitivity. It would require a conspiracy of feedbacks that somehow overshoots zero, giving a net negative response. I just don’t see how you get there, and certainly large negative sensitivities can be entirely ruled out. I mean, if you wanted to be conservative, you could maybe use a 3-parameter lognormal with a negative position parameter. What is more, if you have 3 estimates varying from 1.6 to 6, there’s certainly nothing there to suggest a sensitivity even below 1 degree per doubling.

    I agree that a uniform prior is problematic. If we take the estimates up to 1959, we have Arrhenius (5.5), Arrhenius (1.6), Callendar (2), Hulburt (4) and Plass (3.8). That gives us an average of 3.3 with standard deviation 1.6. A lognormal fit with a log-mean of 1.18 and a log-standard-deviation of 0.51 doesn’t give a terrible fit. If you want a more noninformative prior, you could take a broader standard deviation, say somewhere between 0.65 and 0.85.
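    For anyone who wants to check the arithmetic, here is a minimal Python sketch, assuming the five estimates listed above and a simple moment-matching fit in log space (the fit behind the quoted numbers may have been done slightly differently):

      import numpy as np

      # Pre-1959 point estimates of climate sensitivity (K per doubling)
      estimates = np.array([5.5, 1.6, 2.0, 4.0, 3.8])

      mean = estimates.mean()          # ~3.4
      sd = estimates.std(ddof=1)       # ~1.6 (sample standard deviation)

      # Lognormal fit by matching moments of log(S)
      log_s = np.log(estimates)
      mu, sigma = log_s.mean(), log_s.std(ddof=1)   # roughly 1.1 and 0.5

      print(f"mean = {mean:.2f}, sd = {sd:.2f}")
      print(f"lognormal parameters: mu = {mu:.2f}, sigma = {sigma:.2f}")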

    Interestingly, if you take all the point estimates of sensitivity from Arrhenius to the present, you get a moderately symmetric distribution, centered on about 2.8 with a standard deviation of about 1.5. The point estimates are roughly Weibull distributed with shape parameter ~2.

    The average value for S has not changed by more than a percent (from ~2.8) since 1989, and the standard deviation of estimates has fallen steadily since 1963. That’s indicative of a pretty mature understanding.

    Comment by Ray Ladbury — 29 Jan 2011 @ 9:26 PM

  278. SM, what I am saying is that if you had negative sensitivity, that would imply strong negative feedback, and you wouldn’t see much change in the climate system–in contrast to the climate we see on Earth. My sole intent is to suggest that negative sensitivities can safely be excluded from consideration for an Earthlike planet.

    Comment by Ray Ladbury — 29 Jan 2011 @ 9:31 PM

  279. 278, Ray Ladbury, that answers my question.

    Comment by Septic Matthew — 29 Jan 2011 @ 10:28 PM

  280. David B. Benson # 275, no problem, I understand.

    Comment by Jacob Mack — 29 Jan 2011 @ 10:39 PM

  281. Ray Ladbury,

    Annan’s Bayesian approach with an expert prior seemed to me a reasonable method to fine-tune the range of sensitivity, which would in turn help fine-tune the risk assessment. No approach would eliminate the possibility of values outside the predicted range unless the range was uselessly broad.

    The tighter range would just give policy makers a better target to use in determining mitigation/adaptation policy. “What if” scenarios based on the tighter range should help in making pragmatic decisions.

    A more interesting statistical exercise would be what actions are likely to be taken. Personally, I believe that a gaseous fuel infrastructure should be a priority because it increases transportation fuel options without demanding that one engine technology be scrapped in favor of another. Consumers, at least American consumers, would be more likely to accept personal transportation alternatives that allow for larger vehicles without increasing dependence on foreign oil. Others would pick another priority. Positive action will be a great compromise.

    The only reason I “cherry picked” the 1913 to 1940 range is it might help improve our understanding of climate response.

    In any case, a less contested range of climate sensitivity would help change the debate to what to do, instead of whether to do anything.

    Comment by captdallas2 — 30 Jan 2011 @ 11:09 AM

  282. I recently had an old friend send me the following claim, which he said was made by a “NASA engineer”: that without CO2 the climate would be much hotter. Coincidentally I found Ray et al discussing the possibility/impossibility of negative sensitivity. The “engineer” did not invoke feedback at all. Below is what he wrote. The numbered items are my summary of his exact words; the exact quotation follows my summary. I am tempted to simply tell my correspondent that his “NASA engineer” is deluded and leave it at that. I hate these zombie arguments and I hate replying to them. The only reason I do it now is out of respect for my friend. I read this stuff and it conjures up images from “Night of the Living Dead”, which I saw in the theater when it came out and would really rather forget.

    The argument is below. I think (1) is correct. (1A) is false in that it implies that only at ground level can CO2 absorb more energy than it emits, which is equivalent to claiming that the atmosphere can’t be heated except at ground level. I believe (2) is correct. I believe (3) would be correct with or without the pressure-broadening argument, simply because the atmosphere thins with altitude; I would say that (3) is more obfuscating than right or wrong. Ditto (4). I don’t know off the top of my head how absorption probabilities go with pressure, but for sure they decrease with decreasing pressure; focusing on band structure while excluding the huge drop in number density with altitude is obfuscatory. (5) and (6) are simply wrong. No gas at all is needed in order for radiation to escape to space. As far as I know, if the only physical mechanism under consideration is the radiative cooling of the planet’s surface (which was heated by shortwave solar radiation and re-radiated at longer wavelengths in the infrared) via radiative transport, additional gas of any kind can only result in a higher equilibrium temperature. I suppose that with a sufficient change in atmospheric density from the addition of a gas, one might expect changes in physical processes like thermal conduction and/or advection to make a difference, but that isn’t what the engineer was claiming, by my reading.

    The strangest thing is that he begins by claiming that CO2 is a trace gas and can’t have an effect, and he finishes by claiming that it has a major effect but that we poor physicists have gotten the sign of that effect wrong for the past century. Comments appreciated.

    (1)at ground level the spectral bands are at their maximum widths.
    (1A) It is here that CO2 can absorb more energy than it emits.
    (2)farther from the Earth, the temperature and pressure both decrease and
    the bands get narrower.
    (3)This means that when CO2 emits energy towards space, only some of it
    will be absorbed by the CO2 above it. However, a very very small
    amount will not be absorbed because the absorption bands are narrower.

    (4)The rate of this cooling is partly related to the mean free path – how
    far the radiation travels before it is reabsorbed. Basically, the
    farther radiation travels toward space (lower temperature and
    pressure) before being reabsorbed, the narrower the absorption band
    and the more heat is lost to space.

    (5)when this band spreading is taken into effect, it quickly becomes apparent that carbon dioxide is actually the only gas that cools the atmosphere. without carbon dioxide the atmosphere has no way to release its energy to space and the planet quickly over heats.

    (6)Up to about 11,000 feet (top of the troposphere), water vapor provides
    this capability. But above that level, there are few, if any, gases to
    cool the atmosphere.

    THE ENGINEER’S EXACT WORDS

    My research shows that heat comes first, like heating the ocean, then
    the CO2 emitted from the ocean cools the earth back down.

    The biggest problem I have is that CO2 represents only .039% of our
    atmosphere (thats 390 parts in 1,000,000). Further, according to
    Robert Clemenzi, “While the spectra of each gas is different, the
    absorption and emission spectra for a specific gas are usually
    identical. (The primary exception is fluorescence.)
    (1)Though it is
    seldom mentioned, this means that CO2 absorbs and emits IR radiation
    at exactly the same frequencies. Note however that all the radiation
    in a spectral line is not at exactly a single frequency, but instead
    in a small range (band) of frequencies. It is the width of these
    spectral lines that is affected by temperature and pressure.

    Basically, at ground level the spectral bands are at their maximum
    widths. (Maximum pressure – but not always the maximum temperature.)
    It is here that CO2 can absorb more energy than it emits. As you get
    farther from the Earth, the temperature and pressure both decrease and
    the bands get narrower.

    This means that when CO2 emits energy towards space, only some of it
    will be absorbed by the CO2 above it. However, a very very small
    amount will not be absorbed because the absorption bands are narrower.

    The rate of this cooling is partly related to the mean free path – how
    far the radiation travels before it is reabsorbed. Basically, the
    farther radiation travels toward space (lower temperature and
    pressure) before being reabsorbed, the narrower the absorption band
    and the more heat is lost to space.

    The funny thing is that when this band spreading is taken into effect,
    it quickly becomes apparent that carbon dioxide is actually the only
    gas that cools the atmosphere. That’s right, without carbon dioxide
    the atmosphere has no way to release its energy to space and the
    planet quickly over heats.

    Up to about 11,000 feet (top of the troposphere), water vapor provides
    this capability. But above that level, there are few, if any, gases to
    cool the atmosphere.”

    Comment by John E. Pearson — 30 Jan 2011 @ 1:31 PM

  283. John E Pearson:

    The biggest problem I have is that CO2 represents only .039% of our
    atmosphere (thats 390 parts in 1,000,000).

    Feed the dude 390 micrograms of LSD and, a day later, ask him if he still thinks tiny amounts of stuff can’t have big impacts …

    When I see nonsense like his statement, frankly, I don’t bother reading further. It’s that dumb, and as an engineer, I’m sure he knows of many counterexamples. As a NASA engineer, you could ask him whether or not a bit of O-ring material comprising far less than 0.039% of the total weight of a solid fuel booster could’ve caused a significant systems failure 25 years and a couple of days ago …

    Comment by dhogaza — 30 Jan 2011 @ 4:22 PM

  284. Suggest something directly educational on the question.

    (You might want to ask the supposed ‘NASA engineer’ how this old notion that’s been widely debunked (and rebunked) is presented as his new idea.)

    http://www.google.com/search?q=co2+absorbtion+band+spread+altitude
    http://www.physicsforums.com/showthread.php?p=2373492
    http://www.skepticalscience.com/The-first-global-warming-skeptic.html

    Comment by Hank Roberts — 30 Jan 2011 @ 4:38 PM

  285. John (#282),

    Can I try my layman’s take? (Corrections welcome, as always.)

    Your engineer loses the plot from the beginning by assuming that the warming effect of CO2 is to do with the CO2 absorbing more energy than it emits. The imbalance is not between IR absorbed and IR emitted by a layer of atmosphere, but between the incoming shortwave solar energy from space and the outgoing longwave energy emitted to space, due to the increasing difference between the ground temperature and the temperature of the level from which re-emitted radiation can escape to space. Moreover, without GHGs in the atmosphere, getting rid of heat would not be hard, it would be easy. It would just radiate out into space directly from ground level at all wavelengths. Your engineer is correct only that increased CO2 helps cool the stratosphere.
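    As a back-of-the-envelope illustration of that last point about an atmosphere without GHGs, here is a minimal Python sketch of the standard no-greenhouse energy-balance calculation; the solar constant and albedo are the usual textbook round numbers, not values from the comment:

      import math

      S0 = 1361.0      # solar constant, W/m^2
      albedo = 0.3     # planetary albedo
      sigma = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

      # Energy balance for a planet radiating freely from the surface:
      # absorbed shortwave = emitted longwave
      T_eff = ((1.0 - albedo) * S0 / 4.0 / sigma) ** 0.25

      print(f"equilibrium temperature without greenhouse gases: {T_eff:.0f} K")
      print("observed global mean surface temperature:          ~288 K")
      # ~255 K vs ~288 K: the ~33 K difference is the greenhouse effect.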

    Comment by CM — 30 Jan 2011 @ 5:21 PM

  286. jep@282 Oh dear.

    Looking at stuff like this, I’m always perplexed when deciding whether this kind of word salad is above my pay grade or below it.

    I’ve probably spent too much of my life dealing with stroppy teenagers who resist rules, logic, structure and everything else about algebra, science, spelling. The breathtaking certainty when assembling poorly understood facts and asserting that their own idiosyncratic connections between them deserve marks I’m not prepared to give is all too familiar.

    I call it smart-aleckry.

    Comment by adelady — 30 Jan 2011 @ 5:47 PM

  287. 284:

    Hank, I don’t recognize anything in your links as pertaining to his argument. Am I missing something? I didn’t hear anything about saturation in his argument. He started off by claiming that 390 ppm was too little to have an effect and ended by claiming that it has a big effect, but that physicists got the sign (of the effect) wrong for the past century. That doesn’t sound like saturation to me. I think it’s incoherent, but that is a separate issue. In any event he says that CO2 actually produces a cooling effect. I believe that cooling by adding trace amounts of a gas to an atmosphere is physically impossible under the assumption that only radiation physics is responsible for heat transport, which is what the guy was arguing. I don’t have a mathematical argument for my claim, just a physical one, which is that as photons pass through a gas they’re either absorbed or not. Absorption means the molecules that did the absorbing end up with additional energy, so absorption can only result in heating, never cooling. As far as I can tell the whole issue of band structure is entirely superfluous to the sign of the effect of adding gas to an atmosphere. I think that adding trace amounts of gas to an atmosphere can only heat. The heating might be negligible, or not, depending on the gas and the radiation, but it necessarily has a positive sign, doesn’t it? It seems to me that to get cooling the added gas would have to do something really weird, like decreasing the absorption probability for the molecules that were already there before the gas was added. But maybe I’m missing something. I was hoping someone here who knows more about this than me might say something useful and perhaps corroborate/correct my response.

    Regarding the guy’s honesty and whether this was his “work” or someone else’s: it occurs to me that when scientists say they’ve “done research” they generally mean they actually did original research. When other people say they’ve “researched” something they often mean only that they read about it somewhere. A general remark: I’ve found that arguments in which disrespect plays a major part are far less convincing than arguments which start like this: http://www.youtube.com/watch?v=k80nW6AOhTs (which is what impugning someone’s honesty is).

    Comment by John E. Pearson — 30 Jan 2011 @ 6:11 PM

  288. Ray Ladbury @277 — I wouldn’t have found a negative S in 1959. But with such a limited understanding of how the climate actually works, I (and Carl Hauser) prefer a more conservative prior distribution which allows for that possibility, in case it actually is found through Bayesian analysis of the evidence collected after 1959.

    Using a translated lognormal is better, but it still has a cutoff at wherever you translate zero to. Since I don’t see how to justify any particular cutoff, I’d rather have a prior distribution with none at all: supported on the entire real line and vanishingly small for large absolute values.

    With five experts giving values, I’d be tempted to use the Tol & deVos procedure to construct such a prior. Thank you for checking.
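    For illustration only (this is not the Tol & deVos construction), here is a minimal Python sketch of a prior of that general kind: supported on the whole real line, centred near the early estimates, and decaying for large |S|. The Cauchy form and its scale are assumptions made for the example:

      from scipy.stats import cauchy

      # Heavy-tailed prior centred near the mean of the five early estimates
      prior = cauchy(loc=3.3, scale=1.5)

      for s in (-5, -1, 0, 3, 6, 10):
          print(f"prior density at S = {s:>3} K: {prior.pdf(s):.4f}")

      # Nonzero mass below zero, unlike a prior truncated at S = 0.
      # How much mass belongs there is exactly the judgment call at issue.
      print("prior P(S < 0) =", prior.cdf(0.0))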

    I agree that by now there is an exceedingly good grip on the pdf for S, with Annan & Hargreaves’ latest suggesting there isn’t much of a heavy tail.

    Comment by David B. Benson — 30 Jan 2011 @ 6:19 PM

  289. 286 Adelady, I hear ya!

    CM said “Moreover, without GHGs in the atmosphere, getting rid of heat would not be hard, it would be easy.”

    Thanks for that.

    Am I correct that adding any gas at all to the atmosphere can only result in heating at least at the level where the gas is located? That seems to be what you are saying. I wasn’t thinking about the stratosphere. Delinquent that I am, I haven’t followed the whole issue of stratospheric cooling. Does the stratosphere cool because of the CO2 in the stratosphere or because of the CO2 in the troposphere? I’m thinking it cools because of a decreased heat flux into the stratosphere because outbound heat is building up?

    Comment by John E. Pearson — 30 Jan 2011 @ 6:20 PM

  290. captdallas2, I agree with you about Annan’s motivation, and while I share the motivation, I have great qualms about using a Prior that qualitatively alters the conclusions of the analysis. The question is whether it is reasonable to use a Prior that is 1) symmetric, rather than skewed right, and 2) allows negative sensitivities (which I think are unphysical). I think that perhaps one approach that might make more sense is to characterize each sensitivity determination in terms of both a “best-fit” or location parameter and a width, and look at the distributions over those parameters. I’m doing something like that in my day job, so maybe I’ll look at it.

    I realize that I’ve given you a bit of a hard time. Don’t take it personally. My goal is to ensure that we 1)all stick to the evidence when it comes to science, and 2)use valid risk mitigation techniques.

    Comment by Ray Ladbury — 30 Jan 2011 @ 7:46 PM

  291. David@288, I’m just going with physics, and I don’t see how you get enough negative feedback to get a negative sensitivity AND get 33 degrees of warming over Earth’s blackbody temperature. Such a strong negative feedback would have to apply to any forcing, right? So, how would you get glacial/interglacial cycles with such a strong negative feedback? There is a difference between what is mathematically possible and what is physically possible–and I’d contend that if you can get negative sensitivity values for a positive forcing, then our understanding of climate would have to be so flawed that we would need to toss the whole thing out and start over again.

    Comment by Ray Ladbury — 30 Jan 2011 @ 7:59 PM

  292. Ray Ladbury @291 — In the spring of 1959 in my Physics 1c lab I designed, built and tested a negative thermometer; the mercury went down when the bulb was immersed in a beaker of warm water.

    With that easily performed experiment in mind, the five expert values all derive from calculations which hold many aspects of the climate system constant (when there is no knowledge that those aspects actually are constant). So to be safe, use a prior distribution with support on the entire real line; after all, the glacial cycles might be due to something else completely un-understood in 1959.

    As for symmetry: a pdf constructed for the five experts’ estimated sensitivities using the Tol & deVos procedure won’t be symmetric, but it will have support over the entire real line. It’ll have all moments as well.

    Comment by David B. Benson — 30 Jan 2011 @ 9:29 PM

  293. Ray Ladbury,

    I don’t take it personally; really, it is just an illustration of the current conundrum. Think of it as moving away from science and into marketing. While the science is still there, it has to be communicated as an opportunity. A differential-equations professor from Bell Labs told me there are no problems, only opportunities. That stuck with me, just like KISS did. He was trying to teach me how to work with uncertainties (tolerances) to design things that would work reliably with available technology. So I can live with a 95% probability, while knowing that Murphy is still out there.

    Because of Murphy, I would never totally rule out S > 6 or S < 0. We would either be looking at a new Carboniferous period or a glacial period, but both would be bad for business. It would also be cost-prohibitive to design around either.

    Business-wise, it would also be difficult to design for 100-plus governments. A contract with one government is enough of a PITA.

    The government we in the US have to work with has already invested in promising technology; we just need to pitch the right blend of technologies.

    Co-generation has higher efficiency than stand-alone generation. Sulfur-iodine-cycle hydrogen production is a neat co-generation product. There has already been a great deal of research on improving high-pressure hydrogen storage and fuel cells, and decent work on combined natural gas/hydrogen pipeline designs. Pretty impressive, because hydrogen is a bitch to contain, and platinum-free PEM fuel cells are becoming affordable. Selling the US and the G6 on a cost-effective energy plan would be a solid start that the ROW would follow if proven.

    Clean coal is a given because coal, clean or not, is an abundant interim resource. It is better if the clean variety is used, but without co-generation of hydrogen it is not all that cost-effective yet.

    Higher-temperature nuclear reactors offer the option of hydrogen co-generation. While less efficient, off-peak or remote solar and wind offer clean hydrogen production via electrolysis.

    Energy independence and a cleaner world without sacrificing creature comforts, all while saving the planet. Chaining ourselves to trees is nowhere near as effective as selling a good idea :)

    Comment by captdallas2 — 30 Jan 2011 @ 9:57 PM

  294. captdallas2:

    Because of Murphy, I would never totally rule out S > 6 or S < 0.

    So because of this, you never go fishing on a boat, because thanks to Murphy you can’t rule out that the specific gravity or density of water is so low that your boat will sink before you can say “physics sucks!”

    I could lay out an infinite series of everyday actions we all take that, because of Murphy, are irrational because after all, you can never rule out Murphy.

    Comment by dhogaza — 31 Jan 2011 @ 12:04 AM

  295. John Pearson (on the engineer),

    You need to be very careful thinking about individual components of a local energy budget. The troposphere is currently cooling radiatively at about 2 K/day, and adding CO2 to the atmosphere generally increases the radiative cooling (primarily through increases in water vapor, though how these details play out also depends on the details of the surface budget). In the stratosphere, the increased radiative cooling with more CO2 is a ubiquitous feature of double-CO2 simulations, and this leads to a drop in the temperature there. But the troposphere can still warm with an increased radiative cooling term because the cooling is also balanced by heating through latent heat release, subsidence, solar absorption, increased IR flux from the surface, etc.

    The increased troposphere-surface warming from more CO2 is best thought of by the rate of IR escape out the top of the atmosphere, which is reduced for a given temperature. Let’s step back though and think about absorption and emission in the atmosphere.

    Suppose first that we are looking through a pinhole at an empty isothermal cavity at some temperature T. The observer will of course see Planck radiation emanating from the back wall of the cavity, given by the Planck function B(T), which gives the distribution of energy flux as a function of wavelength (or frequency). We now put an air parcel between the observer and the back wall, a parcel which absorbs IR radiation exiting from the wall to the observer. A parcel means that the medium is small enough to be isothermal and in local thermodynamic equilibrium (which then ensures that the population of the molecular energy levels will be set by molecular collisions at the local atmospheric temperature), but the parcel is also large enough to contain a large enough sample of molecules to represent a statistically significant mass of air for thermodynamics to apply. IR photons are absorbed and knock molecules into higher-energy quantum states. The collision time between molecules in the medium (which is representative of our current atmosphere) is several orders of magnitude shorter than the radiative de-excitation time, so the absorbed energy goes into the energy reservoir of the local matter, establishing a Maxwell-Boltzmann distribution at a new temperature, T+dT. Meanwhile, when radiation is emitted and escapes without being reabsorbed, the parcel cools locally.

    Our observer will look into the medium and see a transmitted (t) portion of the Planckian radiation B*t=B*exp(-τ) and the medium radiates as B(T)*[1 - exp(-τ)]. A warm parcel of air will radiate more than a colder parcel, even at the same 390 ppm of CO2 in the air due to the population of the different rotational and vibrational energy states of the GHGs from collisions with other atmospheric molecules in the LTE limit. (Also see Ray Pierrehumbert’s Physics Today article for a more thorough description).
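    Here is a minimal numerical sketch of that two-term expression; the wavelength, temperatures and optical depth below are illustrative assumptions, not values from the text:

      import math

      H = 6.626e-34   # Planck constant (J s)
      C = 2.998e8     # speed of light (m/s)
      K = 1.381e-23   # Boltzmann constant (J/K)

      def planck(wavelength_m, temp_k):
          """Spectral radiance B(T) in W m^-2 sr^-1 m^-1."""
          a = 2.0 * H * C**2 / wavelength_m**5
          return a / math.expm1(H * C / (wavelength_m * K * temp_k))

      wl = 15e-6                        # 15 microns, centre of the CO2 band
      T_wall, T_parcel = 288.0, 250.0   # warm back wall, colder parcel
      tau = 1.0                         # optical depth of the parcel (assumed)

      transmitted = planck(wl, T_wall) * math.exp(-tau)
      emitted = planck(wl, T_parcel) * (1.0 - math.exp(-tau))

      print("radiance from the warm wall alone:", planck(wl, T_wall))
      print("total seen through the parcel:    ", transmitted + emitted)
      # The parcel is colder than the wall, so the total is smaller than the
      # unobstructed wall radiance -- the 'bite' in the emission spectrum.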

    In the context of the real atmosphere, an observer looking down from space will see Planckian radiation upwelling at the surface temperature for those wavelengths where the air is very transparent. For those wavelengths in which the air absorbs effectively (such as the 15 micron CO2 band), surface radiation is effectively replaced by colder emission aloft, and is manifest as a bite in the spectrum of Earth’s emission (see this image). Because the temperature aloft is colder than the surface temperature, you can clearly see a bite in the emission at the center of the GHG absorption band, corresponding to predominant emission from the cold upper troposphere or stratosphere.

    Increasing the GHG content increases the depth or width of this bite, with the depth constrained by the coldest altitude of the body in consideration (where the “emission height” eventually propagates to) and the width increasing as the wings of the absorption features become important. For Doppler-broadened (primarily stratospheric) lines, the absorption becomes logarithmic in absorber amount. Physically, the extra GHG is causing a reduction in the total outgoing radiation at a given T, and so the planet must warm to re-satisfy radiative equilibrium with the absorbed incoming stellar flux. The whole troposphere is basically yoked together by convection, and the warming is communicated throughout the depth below the tropopause.

    Chris

    Comment by Chris Colose — 31 Jan 2011 @ 12:39 AM

  296. “Energy independence and cleaner world without sacrificing creature comforts all while saving the world. Chaining ourselves to trees is no where near as effective as selling a good idea :)

    Comment by captdallas2 — 30 Jan 2011 @ 9:57 PM”

    This exhibits a very limited understanding of the ecological services of the planet. Just scaling any of these to fully serve 9 billion – the minimum we will hit just from inertia – makes the futility of your claims obvious. When we look at all the other depletions, declines, alterations of habitat, temp increases, etc….

    It’s really important to remember we live in a finite world with myriad system faults occurring simultaneously. If we do nothing worse than burn all the coal, there is basically a slim and a none chance of a comfortable, non-tree hugging world ever existing.

    When designing for a sustainable system, you must include *everything*.

    Comment by ccpo — 31 Jan 2011 @ 3:16 AM

  297. John,

    Does the stratosphere cool because of the CO2 in the stratosphere or because of the CO2 in the troposphere? I’m thinking it cools because of a decreased heat flux into the stratosphere because outbound heat is building up?

    Far as I understand (not very far!), it’s both. (1) CO2 in the troposphere, by repeatedly absorbing and re-emitting IR, reduces upwelling IR to the stratosphere (over the wavelengths where stratospheric CO2 would absorb). (2) In the stratosphere, CO2, being a good IR emitter, radiates to space the heat energy it gains from collisions with other molecules. (3) The net effect is cooling as stratospheric CO2 emits more IR than it absorbs.

    Comment by CM — 31 Jan 2011 @ 4:07 AM

  298. dhogaza,

    LOL, there is probably no better example of “dealing” with Murphy than a boat. I think you may misunderstand me.

    Comment by captdallas2 — 31 Jan 2011 @ 7:25 AM

  299. “Clean coal is a given”

    Clean coal is a myth.

    There’s dirty coal, and very dirty coal, like the nasty stuff they mine in Germany.

    Why can’t we make the coal companies put their own money into carbon capture research? As ideas go, it’s not the best plan we have. If they can make it work, fine – win for capitalism. If they can’t – capitalism wins again, and we can kiss goodbye to those fossilised irrelevant industries.

    Comment by Didactylos — 31 Jan 2011 @ 8:44 AM

  300. David, it’s easy to see how a negative thermometer would work–you just have to have the glass expand more than the mercury. That’s just physics. What possible physics could you have that would cool the planet when it traps more energy? And equally important, would that planet resemble Earth at all in its climate?

    Keep in mind that feedback is pretty indiscriminate wrt the source of the energy, so you are positing a mechanism that decreases planetary temperature when you add energy to it–and so increases planetary temperature when you take energy away. I don’t see how you get there with physics.

    Besides physics, my objection to including the negative reals is that that probability must come from somewhere, and if you are stealing that probability from the positive tail (and where else would it come from), you are biasing your result toward that which you would desperately like to achieve. To that end, what is to stop you from positing a prior that peaks at zero?

    The spirit of maximum entropy/minimum information says that we have to incorporate the physics, and especially the symmetries, into the Prior. That also includes the antisymmetries–e.g., the skew. The physics tells us that it is much easier to get an Earthlike climate with a high sensitivity than with a very low sensitivity. Maybe in that sense I am an Empirical Bayesian, but I get very uncomfortable when I have a Prior that qualitatively changes the conclusions of the analysis, AND my data are not dominant, AND I have no good physics-motivated reason for choosing a prior with very different characteristics than my data.

    I think we must also look at what we mean by a conservative analysis. In a scientific sense, the most conservative choice would be to take a prior that includes the negative reals. Hell, take the entire complex plane while we are at it! However, in an engineering sense, the question we must be conservative with regard to is: How bad can it be? If our prior significantly changes the answer to that question without a good physical reason, then we have good reason to call into question our choice of Prior.

    Comment by Ray Ladbury — 31 Jan 2011 @ 9:07 AM

  301. LOL!

    Didactylos, ccpo

    “Clean coal” is not a term I invented. It is an oxymoron that is intended to describe coal processes that are cleaner than the other options. “Greenhouse effect” is not very descriptive either, but that is the term we still work with to describe the gases that regulate global temperature.

    Economically, there is a very low probability that coal will not be used in the future, and the best option is that its use be as “clean” as possible. I did not “exclude” any technology in the search for sustainability. I just gave “my” opinion of a logical start in the quest for sustainability.

    The tree-hugging reference is an example of how not working within the system produces results that run counter to the intent. Rent a huggers, people hired to chain themselves to trees in the Pacific Northwest, were pretty effective in driving some logging companies into other businesses like real estate development. Converting forest into subdivisions was not the desired result. Had the huggers worked with groups like Ducks Unlimited and other hunting groups, for example, more land would have been preserved as natural forest.

    Perhaps you two have a less divisive game plan that is better?

    Comment by captdallas2 — 31 Jan 2011 @ 11:47 AM

  302. > Rent a huggers, people hired …

    Got any evidence for this claim? Paying someone to break the law is illegal. You’re accusing someone. Evidence?

    Comment by Hank Roberts — 31 Jan 2011 @ 12:57 PM

  303. captdallas2:

    My biggest objection to your compromise is that coal is only cheap because of massive subsidies and failure to count the human and environmental cost. Once regulators pull the rug out from coal, the economic landscape should be very different.

    And that’s just taking into account the direct damage from coal. If CO2 is also given a cost, then coal will stay buried.

    As for your random shots at “huggers” – environmentalists are not a homogeneous group, and there are plenty of nuts that saner people could do without.

    If you want to continue discussing coal, you should probably head over to the open thread.

    Comment by Didactylos — 31 Jan 2011 @ 2:04 PM

  304. Coal is still, in terms of BTUs per kilogram, the most efficient energy source of anything that we have used or can currently use. This is very easy to confirm, too. If we need to lower emissions, then yes, do the two things already recommended: have corporations invest more money into capture methods, and burn coal more efficiently as well. Keep in mind that coal is very important to the economy of several states as well.

    Comment by Jacob Mack — 31 Jan 2011 @ 2:39 PM

  305. Captdallas, as soon as someone comes up with a viable “clean coal” strategy that doesn’t negate the cheapness of coal and that doesn’t dump CO2 into the atmosphere, I’ll think about coal. However, having lived in Appalachia, I do not count myself a big fan of what big coal does to:
    1)the environment
    2)the miners
    3)the political process

    Coal ain’t pretty.

    And even with coal, I think it is a given that standards of living will suffer. Cheap energy is a thing of the past.

    Comment by Ray Ladbury — 31 Jan 2011 @ 3:17 PM

  306. Hank Roberts,

    This has moved off topic. The classified ads in Portland, Oregon had a few listings for people looking for rent-a-huggers when I was there in the early ’90s. Thought that was common knowledge.

    dallas

    Comment by captdallas2 — 31 Jan 2011 @ 3:49 PM

  307. Ray Ladbury wrote: “Cheap energy is a thing of the past.”

    I would say that cheap energy is a thing of the future — once the mass production of inexpensive, high-efficiency photovoltaic materials scales up.

    I would say that cheap energy is NOT a thing of the past — the “cheapness” of fossil fuels was an illusion. They were never “cheap”, they only appeared to be cheap because their full costs were hidden.

    Comment by SecularAnimist — 31 Jan 2011 @ 5:12 PM

  308. Ray Ladbury wrote: “And even with coal, I think it is a given that standards of living will suffer. Cheap energy is a thing of the past.”

    And even if energy is more expensive in the future, it does not follow that “standards of living will suffer” — if we stop wasting three quarters of the energy we use:

    Simple changes like installing better building insulation could cut the world’s energy demands by three-quarters, according to a new study.

    Discussions about reducing greenhouse gas emissions usually concentrate on cleaner ways of generating energy: that’s because they promise that we can lower emissions without having to change our energy-hungry ways. But whereas new generation techniques take years to come on stream, efficiency can be improved today, with existing technologies and know-how.

    To calculate how much energy could be saved through such improvements, Julian Allwood and colleagues at the University of Cambridge analysed the buildings, vehicles and industry around us and applied “best practice” efficiency changes to them.

    Changes to homes and buildings included triple-glazing windows and installing 300-millimetre-thick cavity wall insulation, using saucepan lids when cooking on the stove top, eliminating hot-water tanks and reducing the set temperature of washing machines and dishwashers. In transportation, the weight of cars was limited to 300 kilograms.

    They found that 73 per cent of global energy use could be saved by introducing such changes.

    Maximally efficient use of abundant solar energy means that most of humanity’s “standard of living” will improve, not “suffer”, from phasing out fossil fuels.

    Comment by SecularAnimist — 31 Jan 2011 @ 5:19 PM

  309. Ray Ladbury @300 — But glass does not expand more than mercury.

    Seriously now, Annan & Hargreaves write “… the lower bound of 0 [degrees Celsius] on the prior is uncontentious: a negative value of sensitivity implies an unstable climate system …” This is the exact opposite of your earlier argument that it implies stability. If there is no agreement now, when we know so much about Terra’s climate, surely there would have been none in 1959, so allow some very small subjective probability that it is actually negative, for reasons unknown then. [I hasten to add that we now have an abundance of evidence that S is not only positive but most likely close to 3 K.]

    As for stealing from the upper tail, not necessarily. The smallest of the five calculations gave S = 1.6 K. So keep the pdf quite small up to, say, 1.5 K. This steals from the interval [0, 1.5) to be distributed over the negative reals, at least up to some ridiculously large absolute value.

    Complex S? The sensitivity S is the temperature response to 2xCO2, and nobody has yet suggested complex temperatures. Physics does set some limitations, especially in matters thermodynamical.

    If I were attempting to physically justify the possibility of negative S, I would have some difficulty coming up with reasons a priori. In 1959 I would simply point out the known glacial cycling, and that although the Croll-Milankovitch correlation was known, it remained in the to-be-demonstrated category.

    Also, the determined pdf for S is only to apply over perhaps two doublings as there remain, even today, various unknowns which (probably) apply at high temperatures. As for low ones, Carl Hauser reminded me that if it is cold enough the CO2 snows out, a definite nonlinearity.

    Comment by David B. Benson — 31 Jan 2011 @ 6:03 PM

  310. David, actually, if you take the classical definition of sensitivity, Annan and Hargreaves are correct–it is proportional to 1/(1-f). A negative sensitivity would require f greater than 1–that is, more feedback than an infinitely sensitive system. I know that won’t work! I was trying to give you the benefit of the doubt and think about how you might trigger a negative response with a positive input.
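    For concreteness, a tiny numerical sketch of that relation; the no-feedback response of about 1.2 K per doubling is an assumed round number:

      S0 = 1.2  # K per doubling, assumed no-feedback (Planck) response

      for f in (0.0, 0.3, 0.6, 0.9, 0.99, 1.1):
          S = S0 / (1.0 - f)
          print(f"f = {f:4.2f}  ->  S = {S:8.1f} K per doubling")
      # As f approaches 1, S blows up; only f > 1 yields a negative S.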

    Regardless of how you look at it, though, a negative sensitivity is unphysical. And CO2 raining out is a positive feedback–as would be vaporization of CO2 with an injection of heat. I can think of lots of ways of getting a sensitivity greater than zero–just none less than zero.

    And as to swapping around probability–it is certainly desirable that the PDF be smooth. I just don’t see what is gained by entertaining negative sensitivity values that we know will be ruled out by the first likelihood we look at. If all you want is a thick-tailed function, how about a Pareto? Granted, it won’t take you all the way down to zero, but arbitrarily close.

    Now a philosophical question. Let’s say that your choice of prior reduces the tail to the point where we can neglect sensitivities greater than 5. Would you trust it enough to base mitigation efforts on it? Are you 100% convinced that the probability of S greater than 5 is negligible?

    Comment by Ray Ladbury — 31 Jan 2011 @ 10:16 PM

  311. Somebody back a ways was looking for recent ocean heat content data:

    http://www.noaanews.noaa.gov/stories2010/20100920_oceanwarming.html

    (hat tip to JCH posting at Tamino’s AMO thread for that)

    Comment by Hank Roberts — 31 Jan 2011 @ 10:19 PM

  312. And there it is:
    http://journals.ametsoc.org/doi/abs/10.1175/2010JCLI3682.1

    Comment by Hank Roberts — 31 Jan 2011 @ 11:43 PM

  313. Forgive me if you’ve already answered this, but could the oceans’ absorption of more meltwater than projected (which I presume would tend to cool the upper 700 m, on average) offset some amount of heat absorption from the atmosphere and thus contribute to the stasis of measured ocean heat content over the past five years?

    Comment by DCBob — 1 Feb 2011 @ 10:58 AM

  314. DCBob, It’s negligible.

    Comment by Ray Ladbury — 1 Feb 2011 @ 12:07 PM

  315. > DCBob … stasis
    Citation needed for “stasis” — where did you get that?

    Do you know how many data points you need to say whether there’s a trend, for these data sets? Short time spans are wiggles; those don’t tell you anything useful.
    I found this:
    http://www.skepticalscience.com/images/robust_ohc_trenberth.gif
    here: http://www.skepticalscience.com/news.php?p=2&t=78&&n=202

    Your source?

    Comment by Hank Roberts — 1 Feb 2011 @ 12:10 PM

  316. Hank, Ray and DCBob: thermodynamic processes consist of two cycles, cooling through refrigeration and energy transfer through heating. DCBob, yes, there are contradictory data currently in peer review: some state that the oceans are in stasis, some show warming, and some show cooling. The Argo floats had some calibration issues on, as best as I can recall, more than one occasion; SkepticalScience did a write-up on it, I believe last year (but it could have been 2009). At any rate, thermodynamics predicts that there will be times of ocean warming, cooling and stasis regardless of where we may find ourselves now in terms of ocean temperature changes. Whether natural or artificial, thermodynamic cycles consist of refrigeration and heating processes. Oceans have very high heat capacity and specific heat due to the properties of water and the large amount of water on this planet.

    SkepticalScience also did a write-up on the second law of thermodynamics on October 22, 2010, but it could have been far more in depth; then we could see how the oceans are affected by the processes described by immutable laws. All of climate, all of weather, all heating, cooling and temperature changes obey the laws of thermodynamics, but it seems to me that few in the general public understand what that truly means. I think high school classes should be offered dedicated to the physics of heat flow (a transfer of thermal energy due to temperature differences) and temperature (a statistical average, over a field, of the kinetic motion of molecules). In other words, in physics we say temperature is a field; in physical chemistry we tend to add that the averaging of motions leads to a statistical average we call temperature. You can have more heat, say in a larger-mass object, but have the same exact heat as a lower-mass object. Heat does not necessarily equal temperature, but it can lead to a change in temperature. Then of course heat can be used to perform work, and we have latent heat; and of course we can apply energy / perform work, and generate heat loss.

    Comment by Jacob Mack — 1 Feb 2011 @ 1:11 PM

  317. Hank, regarding short time spans: that is oftentimes true, but not a universal truth. If a tsunami kills 300,000 people and causes 3 billion dollars in damage, and it is the first of its kind in 300 years or more, well, it may indicate a change. Not that we can attribute that activity to global anything (warming, cooling or stasis), but it would certainly lead to some speculation, questioning, intuitions, more hypothesis formation, research, testing, potential funding, and some sort of results, not to mention funding for humanitarian efforts, engineering/construction projects, and so forth. If a few more of similar magnitude and location hit, then we may have a weather trend, and if it is very unusual for the climate, that could lead to further climate research. If the oceans are in stasis (and I do not know that they are), and have been for five years or a little more, then that is a very significant and telling sign that something is different than previously believed. What that something is, I do not know, but it may mean any number of things: not enough Argo floats, changing chaotic weather patterns, a conflict in how the data are analyzed and summarized, not enough past-decade measurements to compare with current ones, or any number of human factors.

    Comment by Jacob Mack — 1 Feb 2011 @ 1:18 PM

  318. Correction: meant to say and have the exact same temperature, not same exact heat.

    Comment by Jacob Mack — 1 Feb 2011 @ 1:19 PM

  319. David and Ray, the problem with using any prior is that the robustness of that prior, and its utility in any projection, is, to say the least, highly subjective. Any tail we look at (left, right, or two-tailed) must be analyzed individually, and we must make sure we use as much data as possible before statistically analyzing any uncertainty. The basic principles of statistics do work, but just one factor can turn them on their head: say, a loaded die, a weighted coin, a far noisier climate, or a more complex weather system. This is nothing new, just very basic and fundamental, but I cannot tell you how many times very smart people working with statistics and mathematics forget simple, indispensable data or principles, myself included in the past.

    Comment by Jacob Mack — 1 Feb 2011 @ 1:28 PM

  320. Oh, and I realize that as a rule in Bayesian statistics we look at past data, at first excluding current or new data and then including it after a statistical analysis; but even so, data analysis is of extreme importance, both past and present. In medicine, if clinical data with good results contradict a Bayesian analysis, the Bayesian analysis is dismissed in that setting and new research must be set up to see what went wrong. For example, there were some findings in the Journal of Hypertension where iatrogenic high BPs were wrongly linked, not to the total probability of developing morbidity/mortality risk factors, but to the bottom-line results, by a huge margin.

    Comment by Jacob Mack — 1 Feb 2011 @ 1:52 PM

  321. Those who are interested in OHC should read the Willis-Trenberth-Pielke email exchanges. If nothing else, it will help pass the time until the next OHC paper is completed! Among other things, they discuss the deep-ocean paper to which Hank linked, and they discuss von Schuckmann et al. They’re interesting exchanges. Trenberth refers to an “embargoed” paper – Lyman et al. (2010)? RC has an update on it here.

    For what it is worth, Bob Tisdale claims the ARGO data is now showing warming in the upper 700 meters.

    Also, on the Trenberth graph to which Hank linked: the article.

    Comment by JCH — 1 Feb 2011 @ 3:02 PM

  322. JCH # 321, thanks for the link to the email exchange. I have a problem believing there is missing heat at all. Your thoughts?

    Comment by Jacob Mack — 1 Feb 2011 @ 3:16 PM

  323. Skimming through this incredibly long discussion since – just the 21st of January? – it struck me that Dr. James Hansen’s “Storms of my Grandchildren” really helped to clarify the bigger picture for me. It answered the question of “why 350?” very well and worked to isolate the important from the less so. Instead of getting deeper in the proverbial reeds, my recommendation is to read Dr. Hansen’s recent works (the “Paleoclimate Implications for Human-Made Climate Change” known as the “Milankovic” draft freely available on Hansen’s web site was also very helpful to me as a layperson, http://www.columbia.edu/~jeh1/mailings/2011/20110118_MilankovicPaper.pdf).

    Apologies to Dr. Hansen and moderators for unavoidably creating a bald commercial recommendation, but the information it contains would really stand people here on a useful footing. I love RealClimate, wish I had time to really absorb it all. I’ve also purchased Gavin’s beautiful book, “Climate Change: Picturing the Science”, which is absolutely next on my reading list.

    Comment by Paul Suckow — 1 Feb 2011 @ 4:07 PM

  324. Ray Ladbury @310 — Tol & deVos
    http://www.springerlink.com/content/x324801281540j8u/
    in effect consult 5 experts, 3 from the IPCC SAR group. We chose three of them here and fit a Gumbel prior to the experts’ expectation and standard deviation as most experts point at a right skewed prior, and do not want to exclude greenhouse gas induced global cooling.
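    A minimal Python sketch of that Gumbel-fitting step, matching a Gumbel distribution to a single expert’s stated expectation and standard deviation by moments (the example numbers are placeholders, not values from Tol & deVos):

      import math
      from scipy.stats import gumbel_r

      EULER_GAMMA = 0.5772156649015329

      def gumbel_from_moments(mean, sd):
          beta = sd * math.sqrt(6.0) / math.pi   # scale
          mu = mean - EULER_GAMMA * beta         # location
          return gumbel_r(loc=mu, scale=beta)

      prior = gumbel_from_moments(3.0, 1.5)      # placeholder expert values
      print("check mean, sd:", prior.mean(), prior.std())
      print("prior P(S < 0):", prior.cdf(0.0))   # small but nonzero
      print("prior P(S > 6):", prior.sf(6.0))    # right-skewed upper tail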

    The evidence is then entered to form the posterior, one expert at a time, and the resulting pdf shifts probability from negative S to the right tail (roughly speaking). As a compleat Bayesian, I object to the empirical Bayesian last step of combining the experts via a weight depending upon the Bayes factor for each expert. It looks to be a double use of the evidence to me.

    Instead I suggest weighting the 5 values you looked up (thank you again) by how long before 1959 the analysis was done. So ranking from highest weight to lowest, I think we have 3.3, 4, 2, 1.6, 5.5. The exact weights have yet to be chosen, but I’m sure there is a fairly objective means to do so. The combined prior is then modified by the evidence, and the resulting posterior will look about like the posterior in Figure 5 of Tol & deVos: no support for values less than 0, skewed right but not heavy-tailed the way a posterior started from a uniform distribution is.

    Thanks for participating in quite an interesting (and for me, at least, useful) exchange. Unless you see some unresolved issues, I think we have a sufficiently decent closure.

    Comment by David B. Benson — 1 Feb 2011 @ 6:18 PM

  325. Jacob Mack @319 — In Bayesian reasoning, one begins with a prior probability, often called subjective to distinguish it from the frequentist approach. However, these prior probabilities can be informed by existing knowledge, not merely guesses. The choice of the word “subjective”
    http://www.merriam-webster.com/dictionary/subjective
    for Bayesian priors is perhaps unfortunate.

    Comment by David B. Benson — 1 Feb 2011 @ 7:33 PM

  326. Jacob Mack, Have you read E. T. Jaynes “Probability Theory: The Logic of Science”? It is the probability text he never quite managed to finish in his lifetime, finally published by Oxford University Press. It is truly a gem of a book, and it does an excellent job outlining how to minimize subjectivity in a Bayesian analysis. He has an excellent treatment of Maximum Entropy and minimally informative Priors, and there is at least one priceless nugget of wisdom in each chapter. Check it out if you haven’t.

    Comment by Ray Ladbury — 1 Feb 2011 @ 8:30 PM

  327. David,
    The values I quoted are all most probable values. Each of those values is the mode of a probability distribution (or likelihood) for S. Will the distribution of modes reflect the uncertainty in S? I don’t think so. The modes wind up pretty close to 3, and the distribution of all 62 modes is roughly Weibull (s=2) but with a somewhat longer positive tail. That is quite a bit different than the sensitivity distributions, which tend to be heavily skewed right. I would contend that the rightward skew is telling us something–that it is easier to make a climate look like Earth with a high S value than a low S value. Should not our Prior reflect that? And if it does not, we have changed the results by our choice of Prior–certainly not in the spirit of minimally informative priors.

    I agree that a Gumbel might work, but you get very different values depending on the shape parameter you choose. A lognormal could also work, and since we are talking ultimately about a feedback, would it not better reflect the physics?

    Comment by Ray Ladbury — 1 Feb 2011 @ 8:53 PM

  328. Jacob Mack at 322 JCH # 321, … I have a problem believing there is missing heat at all. Your thoughts?

    I think they will find more heat in the deep ocean than both Willis and Pielke seem to think is there during their email exchanges (Willis may have other thoughts now,) but perhaps less than Trenberth is hoping/expects.

    Comment by JCH — 2 Feb 2011 @ 10:08 AM

  329. Gavin,

    This seems like a really rich set of information, now that the record is approaching climate timescales. Is there a way to figure out how much of the difference is due to predicted vs. actual forcing, and how much is due to a difference between the modeled and actual response (i.e., sensitivity)? This might be what you are doing in the penultimate paragraph, but if so it was a little too concise for me to follow. Seems like an obvious question to address, so I am thinking someone has done it?

    Comment by LinC — 2 Feb 2011 @ 12:28 PM

  330. David 325, yes. I am sorry if I was unclear. I realize that it is not just guesswork. I work with Bayesian methods all the time. When doctors or psychologists use the prior and include newer data, and construct a tree, it is not as if it is useless. I do know, though, that professional experience and empirical results can at times trump Bayesian predictions based upon constructed trends in any field. I see it happen enough myself, in my work.

    Ray 326, I own many books that outline such approaches, but being I have not read that book, I will read it first.

    JCH 328: that seems reasonable.

    Comment by Jacob Mack — 2 Feb 2011 @ 1:23 PM

  331. Ray Ladbury @327 — According to Tol & deVos, the original Arrhenius estimate of 5.5 K included no information about the standard deviation, so they rather cleverly did it for Arrhenius.

    Following something similar to their procedure for the five estimates available before 1959 leads to something similar in shape to their Figure 5: skewed right but not heavy-tailed.

    Annan & Hargreaves clearly show that the heavy tail is an artifact of starting with a uniform distribution. Starting with the procedure outlined over these several comments, quite similar to the work of Tol & deVos, leads to a more sensible distribution once updated by the evidence. The entire approach satisfies my sense of what a compleat Bayesian requires (subject to the change in the Tol & deVos procedure I already mentioned).
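    To see the uniform-prior artifact concretely, here is a toy grid-based update in Python. The “likelihood” (a Gaussian constraint on the feedback factor, with assumed numbers) is only a stand-in for real evidence, so the printed probabilities are illustrative, not estimates:

      import numpy as np
      from scipy.stats import norm, lognorm

      S0 = 1.2                              # assumed no-feedback response (K)
      S = np.linspace(0.1, 20.0, 4000)      # sensitivity grid (K)

      # Toy likelihood: pretend the evidence constrains the feedback factor
      # f = 1 - S0/S to be roughly Normal(0.65, 0.15).
      likelihood = norm(loc=0.65, scale=0.15).pdf(1.0 - S0 / S)

      priors = {
          "uniform on the grid": np.ones_like(S),
          "lognormal (mu=1.1, sigma=0.5)": lognorm(s=0.5, scale=np.exp(1.1)).pdf(S),
      }

      for name, prior in priors.items():
          post = prior * likelihood
          post /= np.trapz(post, S)                     # normalise on the grid
          p_tail = np.trapz(post[S > 6.0], S[S > 6.0])  # posterior P(S > 6)
          print(f"{name:30s}  posterior P(S > 6) = {p_tail:.2f}")
      # The same likelihood gives a much heavier upper tail with the uniform
      # prior than with the lognormal prior.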

    Unlike some, for determining a prior distribution I’m perfectly prepared to take a hand-drawn estimate; it need not necessarily be drawn from any of the usual (or unusual) families of distributions. After all, to determine the posterior a computation is required.

    Since in 1959 little of the physics was understood one has to be quite conservative about allowing what, at first approximation, appear to be outlandish or even physically impossible values; the evidence will resolve the matter provided the total range of the evidence has support in the prior.

    Comment by David B. Benson — 2 Feb 2011 @ 5:40 PM

  332. David, one could just as easily say that the lack of a heavy tail in the analysis of A&H is due to their choice of Prior. Indeed, since all of the likelihoods (i.e., the data) point to a heavy tail, that is precisely why I am a bit leery of forcing it to vanish.

    And it is certainly not valid to state that one must be conservative about allowing outlandish values on the negative side while using a Prior that makes them go to zero on the positive side!

    One of the reasons why I am being very conservative about this issue is that the appropriate risk mitigation approaches are quite different for a sensitivity of 3 and a sensitivity of 8 or even 6. Indeed, I think that is the motivation behind using a Prior that makes the high end vanish! However, if we are wrong, then our mitigation will be totally ineffective.

    Do I think sensitivity is 8? No. I think it’s much more likely to be around 3. However, unless we have good physical reasons for the choices we make for our prior, I don’t see how we can do a truly valid Bayesian analysis. As to negative sensitivities, I’m quite satisfied to toss them. They would require feedbacks greater than 1, and if S is negative, our understanding of climate is so screwed we need to start from scratch to begin with.

    Comment by Ray Ladbury — 2 Feb 2011 @ 6:48 PM

  333. Ray Ladbury @332 — A compleat Bayesian does not know anything about the evidence (i.e., data) when forming a prior probability distribution. If updating by the evidence produces a heavy tail, so be it, but if a heavy tail arises solely from starting with a uniform prior, we haven’t done our job as (Bayesian) scientists. That is Annan & Hargreaves’ point as I take it.

    To repeat, a compleat Bayesian isn’t willing to have any unsupported interval without absolutely compelling reasons. Even as late as IPCC SAR some of the experts were not willing to exclude some negative values, so certainly not in 1959.

    In 1959 we presumably have 5 “most likely” values from four authors. That’s enough to follow a procedure quite close to that of Tol & deVos; the resulting prior allows some quite small probability assigned to the negative real line. Now apply all the evidence. The result will look rather similar to that in both Tol & deVos and also in Annan & Hargreaves (although for both papers, there are still other sources of evidence which should be included, I think).

    Now we have a rationally constructed posterior. It has no support for negative S and might, as you state, look similar to a Weibull distribution with shape parameter ~2 and mode ~2.8 K. As Bayesian scientists we have done the best we can; we hand that over to the risk analysis experts (which certainly does not include me).

    We haven’t let economics or risk analysis or anything else such as politics influence the analysis resulting in the determined posterior. We might, however, vary the determination of the prior in various sensible ways (uniform is not one; varying between Gumbel, translated lognormal, normal and Cauchy is) to report the sensitivity of the result to the choice of prior; ideally this should be low because the evidence predominates now in 2011.

    This exercise determines S, the so-called Charney climate sensitivity. However, looking at longer time scales, a study of the Pliocene suggests ESS is ~5.5 K for 2xCO2. Looking at shorter time scales, we have the transient response for the next century, ~2/3rds of S from AOGCMs.

    But whatever, the climate response to the forcings to date has already upped the temperature by ~0.6 K over some sense of normal and that alone is enough to cause serious economic harm to:
    Europe in 2003
    Russia, Ukraine and Kazakhstan in 2010
    Southeast Asia in 2010–2011
    Queensland in 2011
    with various predictions that it will become seriously worse over the next 50 years.
    That ought to be enough for risk analysis and policy formation.

    Comment by David B. Benson — 2 Feb 2011 @ 7:35 PM

  334. David,
    The problem with “letting the data tell us” whether we have a thick tail is that: 1) we are not at a point where we would be data dominated, especially in the tails; 2) the data may already be telling us that there is a thick tail.

    Again, we can, by choosing different Priors, reach very different conclusions about what sort of mitigation is needed.

    Also, we cannot necessarily treat all estimates as imposing equal constraints on the overall distribution. An analysis that produces a very broad probability distribution for sensitivity will not impose as strong a constraint as a sharply peaked distribution. So we need somehow to take the entire probability distribution into account–not just the best-fit values. One way to do this would be to fit both means (or modes or medians) to a distribution and fit widths to another distribution. I’ve been working on something like this for my day job. I wonder if it might work in this application.
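
    As a minimal numerical sketch of that weighting idea (not anything from the papers under discussion): take the five “most likely” values quoted elsewhere in this thread (1.6, 2, 3.3, 4 and 5.5 K) and attach purely illustrative widths to them; a precision-weighted combination then automatically down-weights the broad estimates.

      # Sketch only: the modes are the five values quoted in the thread; the widths
      # are invented for illustration and carry no information about the real studies.
      import numpy as np

      modes  = np.array([1.6, 2.0, 3.3, 4.0, 5.5])   # K per 2xCO2
      widths = np.array([1.0, 0.8, 1.5, 2.0, 2.5])   # assumed 1-sigma widths, illustrative

      precisions = 1.0 / widths**2
      combined_mean  = np.sum(precisions * modes) / np.sum(precisions)
      combined_width = np.sqrt(1.0 / np.sum(precisions))
      print(f"precision-weighted mean ~ {combined_mean:.2f} K, width ~ {combined_width:.2f} K")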

    One last thought: Given that our result seems to depend critically on the choice of Prior, might we not instead average over the Priors? It might yield a better result.

    Comment by Ray Ladbury — 2 Feb 2011 @ 9:00 PM

  335. Ray Ladbury @334 — As I understand it, a compleat Bayesian confronted with several models uses the weighted predictions from the various models; the weighting is somehow related to the Bayes factor. I don’t see that as the same as taking a weighted average over various priors, but I could be persuaded otherwise.

    I don’t see the criticality, provided one begins with a handful of values from experts; any rational treatment will probably give about the same resulting prior. That prior will lead to something similar to a Weibull distribution, which is not heavy or fat tailed.

    What does give a fat tail is using the uniform distribution as a prior. That choice does not appear rational to me, to Carl Hauser, nor, I take it, to Annan & Hargreaves; indeed I believe you stated that it has problems.

    For those experts not offering advice on the standard deviation (or better, width) of their estimate, somehow the widths have to be estimated. Tol & deVos offer one rational procedure, but others could be tried.

    Comment by David B. Benson — 2 Feb 2011 @ 9:51 PM

  336. David, I believe you could also use a broad lognormal as a Prior and still get a thick tail out to at least 8 or so, or with a Pareto.

    Model averaging can be done over Priors or Posteriors. One way to do it would be to start with a Prior that is a superposition of simpler Priors: A uniform + a Cauchy + a Lognormal…with each weighted uniformly to begin with, but with weights changing with data. In effect, you start with a uniform Prior over models.
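
    A rough grid sketch of that superposition idea (the grid, the prior parameters and the synthetic Gaussian “likelihood” standing in for real evidence are all illustrative assumptions):

      # Start with equal weights on several candidate priors, update each on the same
      # synthetic likelihood, and let the marginal likelihoods re-weight the priors.
      import numpy as np
      from scipy import stats

      S = np.linspace(0.01, 20, 4000)   # grid of sensitivities (K per 2xCO2)
      dS = S[1] - S[0]

      priors = {
          "uniform":   np.ones_like(S),
          "cauchy":    stats.cauchy.pdf(S, loc=3.0, scale=2.0),
          "lognormal": stats.lognorm.pdf(S, s=0.5, scale=3.0),
      }
      priors = {k: p / np.sum(p * dS) for k, p in priors.items()}   # normalize on the grid

      likelihood = stats.norm.pdf(S, loc=3.0, scale=1.0)   # synthetic stand-in for the evidence

      marginals = {k: np.sum(priors[k] * likelihood * dS) for k in priors}
      weights = {k: marginals[k] / sum(marginals.values()) for k in priors}   # from a uniform model prior

      # Model-averaged posterior over S.
      posterior = sum(weights[k] * priors[k] * likelihood / marginals[k] for k in priors)
      print("posterior model weights:", {k: round(w, 3) for k, w in weights.items()})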

    If we are to choose a Prior, that choice must be based on our knowledge of the system we are modeling. It makes no sense to say we know enough to eliminate a uniform Prior while at the same time allowing negative sensitivities, which are in my opinion unphysical given the current understanding of climate (even in 1959).

    This still leaves the problem of how to model the entropy (if you will) of each sensitivity estimate. Without that, we wind up giving each estimate equal weight, and we know that is not right. One way we could approach this is essentially by modeling the moments separately. The advantage here is that we could use the method of moments to get a best fit for any distribution form and so look at model dependence. Alternatively, we need a way to factor in the full distribution, since, for purposes of risk mitigation, we are interested not just in the mode but also in 90% or even 99% worst-case estimates of sensitivity.
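
    The moment-matching step itself is straightforward; a small sketch, assuming a lognormal form and placeholder target moments (not numbers from any analysis discussed here):

      # Method of moments for a lognormal: given a target mean and variance of S,
      # the shape and scale follow in closed form.
      import numpy as np

      mean, var = 3.0, 2.25          # placeholder target moments (K, K^2), illustrative only

      sigma2 = np.log(1.0 + var / mean**2)   # sigma^2 = ln(1 + var/mean^2)
      mu = np.log(mean) - 0.5 * sigma2       # mu = ln(mean) - sigma^2/2
      mode = np.exp(mu - sigma2)             # mode of the fitted lognormal

      print(f"lognormal mu = {mu:.3f}, sigma = {np.sqrt(sigma2):.3f}, mode = {mode:.2f} K")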

    Comment by Ray Ladbury — 3 Feb 2011 @ 5:10 AM

  337. The Cauchy distribution is a fine way to go. I think weighting has to be very well explained in terms of how, why and when. Of course, as virtually any statistics book or paper states, one should always have full access to all raw data, in any field. The reasons given are: to replicate findings as many times as possible/needed in a given time span, to look for any unintentional errors, biases or oversights, and to control for the occasional dishonesty, as can happen in any field. The applications of the Cauchy distribution are truly incredible in risk assessment and various areas within physics.

    Some books to consider (the ones I have actually read, referenced, reviewed, and in many cases applied to my work):

    Schaum’s Easy Outline: Probability and Statistics. Spiegel, Murray; Schiller, John; Srinivasan, Alu. McGraw-Hill Professional Publishing, 02/2002.
    Information Theory and the Central Limit Theorem. Johnson, Oliver. Imperial College Press, 07/2004.
    Nonlinear Signal Processing: A Statistical Approach. Arce, Gonzalo R. John Wiley & Sons, Incorporated, 2005.
    Statistical Inference in Science. Sprott, Duncan A.; Bickel, Peter J.; Diggle, P. Springer-Verlag New York, Incorporated, 06/2000.
    Mathematical Statistics. Shao, Jun. Springer-Verlag New York, Incorporated, 03/1999.
    Data Analysis: A Bayesian Tutorial. Sivia, D. S.; Skilling, John. Oxford University Press, 06/2006.
    Statistical Analysis of Stochastic Processes in Time. Lindsey, J. K.; Gill, R.; Ripley, B. D. Cambridge University Press, 08/2004.
    Statistical Problems in Particle Physics, Astrophysics and Cosmology: Proceedings of PHYSTAT05. Lyons, Louis; Ünel, Müge Karagöz. Imperial College Press, 2006.
    Theory of Multivariate Statistics. Brenner, David; Fienberg, S.; Casella, G. Springer-Verlag New York, Incorporated, 08/1999.
    Lévy Processes and Stochastic Calculus. Applebaum, David; Bollobás, B.; Fulton, W. Cambridge University Press, 07/2004.
    Calculus: Single Variable. Hughes-Hallett, Gleason, McCallum et al., 4th edition. Wiley, 2005.

    Comment by Jacob Mack — 3 Feb 2011 @ 4:44 PM

  338. However, in statistics and probability one can never really rule out the low end, just as one can never really rule out the high end. Thus, mathematically or statistically speaking, a 1.5 degree increase from doubling CO2 could be possible, and conversely, a 7 degree increase is possible. A wide range of values in a CI could land within the critical region and have some assigned probability, or they could land in the non-critical region and still be possible even though the null hypothesis is rejected. The reason why I accept a possible low-end temperature increase with doubling and reject a high one is the actual laws of physics, and nothing more. Thermodynamics does not allow for two things: knowing with great certainty where all the heat and refrigeration processes lead, and the predicted cooling/stasis effects that reduce the potential for such high warming. Three things should be the focus when using statistics of any sort and whatever data we do have:

    (1.) Apply and use thermodynamics as an interpretive tool.

    (2.) Reduce uncertainty with more solid data points.

    (3.) Analyze raw data in a wider scope with mathematical/statistical methods.

    3 degrees may be correct. I am not convinced that the clustering shown is going to happen. I cannot see how we can get much more warming with all of the cooling processes witnessed in nature.

    Comment by Jacob Mack — 3 Feb 2011 @ 4:58 PM

  339. Ray Ladbury @336 — From Annan & Hargreaves we know that even a Cauchy prior does not result in a heavy-tailed posterior, so it seems unlikely that result would change with a lognormal prior, rationally chosen.

    In 1958 we would have the 5 modes you found. That is enough to apply the method in Tol & deVos [where I remind you that even for IPCC SAR at least two of the experts assigned nonzero likelihood to negative S, unphysical or not]. In that method the 5 prior estimates of S are weighted by their Bayes factors to obtain the reported posterior, which clearly is not heavy tailed.

    I don’t see any remaining difficulties. The ‘old fashioned’ use of a uniform prior is replaced by the estimates of 5 experts and combined in what I now begin to understand is the compleat Bayesian method employed in Tol & deVos. Thank you.

    Jacob Mack @337 — Where any of the 5 experts fail to provide any guidance as to their own estimate of standard deviation (or better, width), Tol & deVos have a sensible way to assign a normal distribution. That same method could be applied to assigning a Cauchy distribution, which certainly appears to be more conservative.

    Comment by David B. Benson — 3 Feb 2011 @ 6:44 PM

  340. David, a Cauchy has the wrong characteristics: for one thing, it is symmetric. For another, it supports negative values for S, which is unphysical. I don’t see why you are resistant to a broad lognormal, for instance.

    We also need more than just the best estimates to update whatever Prior or Priors we use.

    And finally, again, are you comfortable banking the fate of civilization on a choice of Prior? I think that when the data do not preclude an outcome, it is risky to rely on a Prior to do so.

    Comment by Ray Ladbury — 3 Feb 2011 @ 7:12 PM

  341. David B. Benson 339: Indeed, that method does seem reasonable. I just wonder what state we will be in when we arrive at doubling. I suspect the low to mid range.

    Comment by Jacob Mack — 3 Feb 2011 @ 7:21 PM

  342. Ray Ladbury @340 — I meant, of course, translated Cauchy and normal distributions so that the mode agrees with the value of S determined by the expert. Annan & Hargreaves truncated their Cauchy to have support only in the interval [0,100]. Considering the way Bayesian updating is done, I doubt this makes much difference.

    I have already explained several times now the two reasons for including negative numbers in the prior for S.

    I don’t know what a broad lognormal is, but a lognormal with the same mode is likely to give much the same posterior, although some experimentation or further analysis would be required to decide the matter.

    Ideally there is enough evidence that the choice of prior matters little. I opine that any of the various choices we have considered will give much the same result. It is only the extraordinary nature of the uniform distribution (and close relatives) which gives a heavy-tailed posterior distribution, as Annan & Hargreaves demonstrate.
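
    A quick grid calculation makes this concrete. Purely for illustration, take a likelihood that is Gaussian in the feedback strength lambda = 3.7/S (which is what skews it toward high S; the numbers are assumptions, not any published constraint) and compare the posterior mass above 6 K under a uniform prior on [0, 20] and under a translated Cauchy:

      # Illustrative only: a skewed synthetic likelihood combined with two priors.
      import numpy as np
      from scipy import stats

      S = np.linspace(0.1, 20, 2000)
      dS = S[1] - S[0]

      # Likelihood of S via an assumed Gaussian constraint on lambda = 3.7/S (W/m2/K).
      likelihood = stats.norm.pdf(3.7 / S, loc=1.25, scale=0.4)

      priors = {
          "uniform on [0, 20]":       np.ones_like(S),
          "Cauchy (mode 3, scale 2)": stats.cauchy.pdf(S, loc=3.0, scale=2.0),
      }

      for name, prior in priors.items():
          post = prior * likelihood
          post /= np.sum(post * dS)
          print(f"{name}: P(S > 6 K) = {np.sum(post[S > 6] * dS):.3f}")

      # The uniform prior leaves substantially more tail mass above 6 K than the Cauchy does.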

    Civilization is at risk from the ongoing transient response. It’s hard enough to persuade decision makers about that over the next 89 years; I fear you’ll run up against the discount rate when attempting to discuss 189 years.

    Jacob Mack @341 — Hopefully it is still if, not when, the concentration reaches 550 ppmv. At that point there will continue to be a transient response as actually reaching the equilibrium requires a millennium or so. In any case, the most likely value for S is close to 3 K for 2xCO2.

    Comment by David B. Benson — 3 Feb 2011 @ 8:30 PM

  343. Jacob #338,

    With all your stats references and discussion on this thread, I thought you were building up to something more rigorous than an argument from incredulity (”I am not convinced … I cannot see how…”) based on vague hand-waving about “cooling processes”.

    Comment by CM — 4 Feb 2011 @ 3:38 AM

  344. David, do you think that a flat Prior from 0 to 20 would have been an unreasonable choice in 1959? After all, estimates over the next decade or so ranged from 0.1 to 9.6. If not, then perhaps a model-averaged approach would be best. Now as to determining weights, we could take several approaches. One would be to use AIC weights à la Anderson and Burnham (note: if we are going to use a translated Cauchy, that’s another parameter). We could start with uniform weighting of the Priors and let the data decide which ones have the most support. I would feel more comfortable with this than with selecting a Prior that determines the outcome.

    We still have the issue of how well each estimate constrains the distribution and of independence–some of the estimates rely on the same data more or less. When this happens, do we: 1)Average the results, 2)Take the “best”, 3)Take the most recent, or 4)something horribly complicated?

    Comment by Ray Ladbury — 4 Feb 2011 @ 5:47 AM

  345. CM 343, I am not done yet :) But then again, the cooling processes are well established. The references just present a background to the discussion. The real answers, and the evidence of how much we cannot know, come from thermodynamics, the first and second laws, anyway. Go see the longer posts on the previous page too. No hand waving here.

    Comment by Jacob Mack — 4 Feb 2011 @ 3:09 PM

  346. David B. Benson, I agree in terms of transient responses and weather variations, but I am not so confident in how equilibrium is treated.

    Comment by Jacob Mack — 4 Feb 2011 @ 3:31 PM

  347. Jacob Mack @346 — In the 1979 NRC report on CO2 & Climate
    http://books.nap.edu/openbook.php?record_id=12181&page=R1
    we find what is now called the Charney equilibrium sensitivity, written S. It includes only the so-called fast feedbacks. Attempting to take all feedbacks into account produces what is called the Earth System Sensitivity, ESS, as in

    Earth system sensitivity inferred from Pliocene modelling and data
    Daniel J. Lunt, Alan M. Haywood, Gavin A. Schmidt, Ulrich Salzmann, Paul J. Valdes & Harry J. Dowsett
    Nature Geoscience 3, 60 – 64 (2010)
    http://www.nature.com/ngeo/journal/v3/n1/full/ngeo706.html

    Equilibration time from AOGCMs suggests a millennium and a bit more for S. For ESS it takes longer, maybe two or three millennia.

    Comment by David B. Benson — 4 Feb 2011 @ 5:20 PM

  348. Ray Ladbury @334 — A uniform prior is and always was a cop-out. The method used in Tol & deVos appears to me to be exactly what a compleat Bayesian uses, although I would make two changes: Gumbel to something similar but heavy tailed; translated normal to translated Cauchy. Their method weights the various experts’ pdfs by the Bayes factor once the envelope of evidence is opened.

    To illustrate the use of the Cauchy pdf by doing everything much too simply, start with the 5 available modes of 1.6, 2, 3.3, 4 and 5.5. The unweighted average is 3.28 K; use that as the only mode in this simplified procedure. Now, as you say, S less than 0 is unphysical, so the compleat Bayesian simply says it is highly unlikely: the subjective probability that S is less than zero is but 5%. Knowing that value and the mode suffices to determine all the parameters of the translated Cauchy pdf.
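
    Reading that construction literally, the scale follows directly from the Cauchy CDF, F(x) = 1/2 + (1/pi) arctan((x - x0)/gamma); a short check, using only the numbers quoted in this comment:

      # Translated Cauchy prior with mode at the unweighted average of the five modes
      # and the subjective constraint P(S < 0) = 5%; F(0) = 0.05 pins down the scale.
      import numpy as np
      from scipy import stats

      modes = [1.6, 2.0, 3.3, 4.0, 5.5]
      x0 = np.mean(modes)                          # 3.28 K
      gamma = x0 / np.tan(np.pi * (0.5 - 0.05))    # solve F(0) = 0.05 for the scale

      prior = stats.cauchy(loc=x0, scale=gamma)
      print(f"mode = {x0:.2f} K, scale = {gamma:.2f} K, P(S < 0) = {prior.cdf(0):.3f}")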

    One would actually have to read the five papers to determine, as an expert, how to weight the five a priori; otherwise, just directly follow Tol & deVos’s standard Bayesian procedure, which uses goodness-of-fit to do the post facto weighting to produce the resulting posterior pdf. The advantage of the latter is that no judgement is required; the advantage of the former is in discovering that the 5 expert opinions were indeed not all completely independent.

    Comment by David B. Benson — 4 Feb 2011 @ 6:05 PM

  349. If the real-world temperature data is tracking scenario C, and scenario C was developed on the assumption that no more carbon was emitted post 2000, does that not mean that (in terms of the model) the effects we are seeing in the real world are the equivalent of a massive reduction in carbon in the atmosphere (relative to where we actually are)? That is, the model predicted we would see this kind of temperature trend if we cut all carbon emissions and therefore reduced the carbon in the atmosphere?

    What I’m saying is: if the assumption for scenario C at 2010 is 2000-level carbon emissions minus a yearly reduction (resulting from zero human emissions), and the reality is 2000 emissions plus actual emissions, then scenario C was wrong by the difference, right? Scenario C overestimated the sensitivity, in that having “x” more carbon in the atmosphere produced a result predicted to occur with significantly less carbon in the atmosphere. So the model should have had less sensitivity to carbon. That makes sense to me.

    In terms of the other scenarios, I don’t know what the base assumptions were for them, but assuming scenario B most likely matches the carbon emissions we have seen to date, it is clearly wrong. I’ll give scenario A the benefit of the doubt and assume it was based on significantly more carbon being emitted.

    Comment by Elliot — 4 Feb 2011 @ 8:37 PM

  350. Elliot,
    No, what it means is that the climate sensitivity used by Hansen was higher than current best estimates… as Gavin has pointed out many a time.

    Comment by Ray Ladbury — 4 Feb 2011 @ 9:58 PM

  351. Elliot, see #109; this was the response:

    [Response: No it didn't. The different scenarios have net radiative forcing in 2010 (with respect to 1984) of 1.6 W/m2, 1.2 W/m2 and 0.5 W/m2 - compared to ~1.1 W/m2 in the observed forcing since then. The test of the model is whether, given the observed changes in forcing, it produces a skillful prediction using the scenario most closely related to the observations - which is B (once you acknowledge the slight overestimate in the forcings). One could use the responses of all three scenarios relative to their specific forcings to make an estimate of what the model would have given using the exact observed forcings, but just using scenario C - which has diverged significantly from the actual forcings - is not going to be useful. This is mainly because of the time lag to the forcings - the differences between B and C temperature trends aren't yet significant (though they will be in a few years), and in 2010 do not reflect the difference in scenario. If you are suggesting that scenario C will continue to be a better fit, I think this is highly unlikely. - gavin]

    Try reading all the comments.

    Comment by john byatt — 4 Feb 2011 @ 10:27 PM

  352. Relevant to determining a prior for S in 1959 is
    http://www.realclimate.org/index.php/archives/2010/01/the-carbon-dioxide-theory-of-gilbert-plass/
    where he “lucked out” with 3.6 K in his Tellus paper
    http://onlinelibrary.wiley.com/doi/10.1111/j.2153-3490.1956.tb01206.x/abstract

    Comment by David B. Benson — 4 Feb 2011 @ 11:31 PM

  353. “No, what it means is that the climate sensitivity used by Hansen was higher than current best estimates… as Gavin has pointed out many a time.”

    And we’re in this weird solar minimum … which of course denialists insist proves “it’s the sun”, proving “CO2 means nothing” (stupidity left for the reader to judge). (Hint: if it were just the sun, we’d be seeing cooling…)

    Comment by dhogaza — 5 Feb 2011 @ 12:41 AM

  354. As an amateur “graph eyeballer”, I am disappointed in my fellow eyeballers’ performances here. T is obviously about to pierce scenario C and leave it in the dust. As quickly as T fell away from scenario B, it could also catch up.

    I don’t know how it will do this. Us graph eyeballers are ethically restrained from guessin’ on things, but maybe China and India will install scrubbers on their smokestacks. Somethun’ like that could happen.

    Comment by JCH — 5 Feb 2011 @ 10:34 AM

  355. What ocean setting was used for the 1988 runs? I used mixed-layer with deep diffusion.
    Year   CO2-eq (ppm)   Temp anomaly (°C), 1958-2010
    1958 309.2 0.05
    1959 307 0
    1960 301 -0.08
    1961 291 -0.18
    1962 287 -0.14
    1963 249 -0.3
    1964 223 -0.39
    1965 255 -0.45
    1966 281 -0.33
    1967 294 -0.24
    1968 280 -0.24
    1969 274 -0.37
    1970 298 -0.28
    1971 310 -0.29
    1972 316 -0.25
    1973 311 -0.19
    1974 305 -0.24
    1975 283 -0.21
    1976 307 -0.28
    1977 323 -0.16
    1978 326 -0.14
    1979 329 -0.08
    1980 338 0
    1981 341 0.02
    1982 275 -0.12
    1983 251 -0.33
    1984 305 -0.21
    1985 330 -0.11
    1986 332 -0.08
    1987 341 0
    1988 351 -0.01
    1989 363 0.03
    1990 364 0.11
    1991 297 -0.08
    1992 221 -0.36
    1993 303 -0.25
    1994 343 0.02
    1995 361 0.05
    1996 367 0.14
    1997 372 0.13
    1998 384 0.2
    1999 392 0.36
    2000 397 0.42
    2001 397 0.37
    2002 399 0.4
    2003 401 0.45
    2004 400 0.51
    2005 399 0.48
    2006 399 0.51
    2007 400 0.55
    2008 401 0.55
    2009 405 0.52
    2010 410 0.52

    Comment by jacob l — 9 Feb 2011 @ 1:53 AM
