RealClimate

Comments


  1. re: 16, 22, 23, 34 regarding Lindzen & Choi 2009. This paper simply has some important mistakes and some serious problems in the robustness of its methodology. I’m happy to say that I’m a co-author on a paper that examines this technique and its robustness in some detail (though not as much detail as I would like, but of course GRL has stringent length requirements) – just accepted in GRL. When it comes out I’ll provide a link.

    [Response: Want to write a guest post on the issues? - gavin]

    Comment by Chris ODell — 5 Jan 2010 @ 12:35 AM

  2. A couple of reviews showing why Lindzen and Choi 2009 probably falls into the “not even wrong” category:

    http://agwobserver.wordpress.com/2009/12/05/comments-on-lindzen-choi-2009/#comment-421

    http://chriscolose.wordpress.com/2009/03/31/lindzen-on-climate-feedback/

    http://julesandjames.blogspot.com/2009/11/roy-spencer-debunks-lindzen-and-choi.html

    http://julesandjames.blogspot.com/2009/08/quick-comment-on-lindzen-and-choi.html

    When even Roy Spencer doesn’t buy a denialist argument, you know it must be bad.

    Comment by Ray Ladbury — 5 Jan 2010 @ 9:49 AM

  3. Chris O’Dell’s paper is now in press. On Lindzen and Choi, they state:
    http://www.agu.org/journals/pip/gl/2009GL042314-pip.pdf

    “Another recent attempt to estimate sensitivity and λ [Lindzen and Choi, 2009] (LC09) notes that there are many pitfalls to be avoided in assessing climate feedbacks in models using observations of radiation at TOA. While they adopt a procedure to avoid one of these pitfalls, they fail to recognize and account for several others, they do not account for external forcings,
    and their use of a limited tropical domain is especially problematic. Moreover their results do not stand up to independent testing.”

    Comment by Bobo — 8 Jan 2010 @ 7:17 AM

  4. Oh sorry, I forgot to post Trenberth et al.’s conclusion on Lindzen and Choi 2009: “As shown here, the approach taken by LC09 is flawed, and its results are seriously in error. The LC09 choice of dates has distorted their results and underscores the defective nature of their analysis.”

    Comment by Bobo — 8 Jan 2010 @ 7:21 AM

  5. I moved some relevant comments from the Plass thread over.

    Comment by gavin — 8 Jan 2010 @ 3:07 PM

  6. Since you’ve seeded commentary here, maybe the guest thread comments should be closed, so there’s a single, linear discussion?

    [Response: Let's keep this thread for the meta-issues, and that thread for the specifics of the L&C study and the response. - gavin]

    Comment by dhogaza — 8 Jan 2010 @ 4:19 PM

  7. There are interesting comments here:
    http://agwobserver.wordpress.com/2009/12/05/comments-on-lindzen-choi-2009/

    Comment by Pete Dunkelberg — 8 Jan 2010 @ 4:34 PM

  8. I could have saved myself a bit of trouble if I’d known the guest post was coming. ;)

    Comment by thingsbreak — 8 Jan 2010 @ 4:35 PM

  9. Gavin –

    You just seem to continue as if the roof had not fallen in.

    Why is there computer code in the Climategate files that artificially adjusts earlier temperatures lower and later higher? Somebody needs to come forward and say: That code was never used in any of our published work, including IPCC reports. I’ve been looking everywhere and I can’t find any team member saying that. Have I missed it? You are a prominent correspondent with Jones, Mann, et al. Can’t you demand that they make this simple affirmation? If nobody will make this statement, how can a member of the public do anything other than conclude that all of your and their work is scientific fraud? Help!!!!

    [Response: We covered this at the time. The fudge factor was based on a PC analysis of the divergence between the MXD reconstruction and the actual temperatures, and was used to examine whether the calibration statistics would be significantly affected in the event that the divergence was anthropogenic. It was used (very explicitly) in an Osborn et al. submission in 2004 that was never published (it was in Google cache for a while, but no longer), and it has not been used anywhere else. It has nothing to do with any temperature record, has not been used in any published reconstruction, and is not the source of the hockey stick blade anywhere. - gavin]

    Comment by Francis — 8 Jan 2010 @ 4:44 PM

  10. Hi Gavin,

    I’m not familiar with the details of the peer-review process. What happens when a paper such as LC09 is published, but is later shown to have major flaws that render its conclusions irrelevant? I’m aware that comments can be submitted and challenging papers can be submitted and published (as is the case here). However, if enough evidence is built to demonstrate the flaws of such a paper, is it ever removed and “un-published?”

    Thanks for clarifying,

    -Dan

    [Response: Only very rarely. It is not a crime to be wrong, or to come to an erroneous conclusion, or to make a mistake. So most rebutted papers stay in the literature. I'm only aware of papers being withdrawn in the case of proven fraud (Schön, the Korean stem-cell debacle, etc.). For the more common kinds of mistakes/errors, people usually correct them next time around. - gavin]

    Comment by Dan Ives — 8 Jan 2010 @ 5:57 PM

  11. “If the answers were obvious, we wouldn’t need to do research.”

    Whoops.

    You’d still need to confirm it. We all thought Newton had it dead on for a very long time, as it was obvious and worked.

    ;)

    I’m quite encouraged by this. Real old-school science: a paper rebuts another paper and is itself rebutted in a third.

    “Guys, I think you have it wrong.”
    “Oooh, check your math and data framing; you’ve got statistical weirdness in it, and you left out a major factor that will impact your results.”
    “Darn it – I know I’m on to something…gonna refine it and go again.”
    “Bring it on! I gots lots of red sharpies ready to mark you up.”
    “We’ll see about that!”
    “Oh, btw, can I get that data set and code? It was an interesting take that gives me another I might use in my own research.”
    “Certainly. Do I get a mention in the paper?”
    “Footnote only; but I’ll forward you it before publication for review.”

    Or, alternately, the phone is hung up with a FU at the request.

    Comment by Frank Giger — 8 Jan 2010 @ 5:59 PM

  12. Gavin, thank you for taking a moment to answer my questions.

    I agree, making honest mistakes and being wrong is certainly forgivable, especially when trying to answer difficult scientific questions. I followed your link to DotEarth where Lindzen states that he plans to submit a new version, so it will be interesting to see how he addresses the critiques.

    I enjoy reading the posts on this site, and your effort to educate people about climate science and debunk the myths and misinformation is a noble undertaking. Thank you.

    Cheers,

    -Dan

    Comment by Dan Ives — 8 Jan 2010 @ 6:47 PM

  13. You say “LC09 was not a nonsense paper – that is, it didn’t have completely obvious flaws that should have been caught by peer review “. I beg to differ.
    It is a priori obvious that one cannot determine the climate sensitivity from an incomplete energy balance over the tropics. LC09 ignores the fluxes of heat into and/or out of the region via the atmosphere, and the flux of heat into the ocean. As Trenberth et al. point out, these are large terms, and they simply cannot be ignored. This is a glaringly obvious error that any competent reviewer should have picked up. It undermines the whole analysis and makes it worthless. In my view, the other issues raised by Trenberth et al. are important, but secondary to this fundamental problem.

    Comment by Tom Wigley — 8 Jan 2010 @ 6:59 PM

  14. >In a telephone interview today, Dr. Trenberth told me that the flaws in the Lindzen-Choi paper “have all the appearance of the authors having contrived to get the answer they got.”
    Gavin, following up on Dan’s question about papers being withdrawn: if it finally appears that the data was deliberately contrived to get this conclusion, how will that affect Lindzen and Choi’s ability to get future papers published? The deniers will continue to push this paper even after it has been countered by scientists.

    Thank you personally for your dedicated work on the recent Climategate hoo-hah. It takes a lot of your time, but I learned enough to discuss it with some of my students at school.

    Comment by Michael Sweet — 8 Jan 2010 @ 7:05 PM

  15. It’s nice to see the distinction in the intro that LC09 was “not a nonsense paper” even while disputing its conclusion. I’m trying to get people to see that peer review does not guarantee that everything published is *correct*, but rather that it must at minimum be arguable, plausible, and worth considering – on the page. The great frustration with the recent breakdowns in peer review is the publication of papers that were indeed nonsense, and a waste of everyone’s time. That was evident to anyone familiar with the subject, but of course contrarians still latch on to the nonsense titles and run with them…

    One frivolous question: all these subscription-required journal sites offer single-article purchase, but at what seem astronomical prices – $15 to $30 for a single article of a few pages. (I have the good fortune to work at a university that subscribes to most of the best journals)
    My silly question: does anybody ever actually pay those steep single-article prices? Who – journalists? It’s not like these works have commercial applications to warrant paying from a corporate expense account. Couldn’t the journals get more revenue with lower prices?

    Comment by Jim Prall — 8 Jan 2010 @ 7:41 PM

  16. Can someone help me here?

    I’m currently engaged in an argument with someone about global warming and while I know this exists, despite a large amount of google searching, I can’t find a peer-reviewed paper that (quoting other party here, excuse the obvious baiting):

    1. Demonstrate causation, not just correlation of CO2 levels relative to global temperature.
    2. Use real-world empirical evidence, not flawed computer models.
    3. Show empirical evidence for temperature rises following CO2 level increases, not before.
    4. Demonstrate that CO2 is the sole major forcing in global temperature changes, not a minor player in a much larger game, involving clouds, solar flux and CRF.
    5. Show that CO2 levels and greenhouse effect is not already saturated.

    Help! Can someone point me to any links / papers with which I can refute his claims? I’m just unable to find them, but I know you guys will have the answers!

    -Josh

    [Response: Here is a good place to start. - gavin]

    Comment by Joshua — 8 Jan 2010 @ 8:23 PM

  17. Joshua, 16:

    Your friend doesn’t like models?

    Ask him/her why he thinks atmospheric absorption is saturated, if he didn’t use some sort of model to describe that absorption. While he tries to figure that out, surprise him with some empirical evidence from satellites. I think John Harries has some relevant papers. Plenty of material on here as well, as to why the saturation argument is flawed.

    If he thinks CO2 is a minor forcing compared to solar, ask him how he knows that without using some sort of model. He’ll show you some dubious correlations. Then you can ask him to meet his own challenge of showing causation, not correlation, without using a model.

    Try it. I’m curious what happens.

    Comment by tharanga — 8 Jan 2010 @ 9:20 PM

  18. (corrected)

    RE Joshua

    I’m currently engaged in an argument with someone about global warming and while I know this exists, despite a large amount of google searching, I can’t find a peer-reviewed paper that (quoting other party here, excuse the obvious baiting):

    There are several posts on Skeptical Science (also check the “argument” page and recent archives), with links to published papers, as well. They may not be exactly what you’re looking for, but they are close and are informative to read.

    1. Here and here.

    2. Here

    3. Here. You might also want to look into the Permian-Triassic extinction event.

    4. “Sole” is a bit of a strawman – there are several factors, but CO2 is the primary one responsible for the recent temperature trends.

    5. #54

    The search function of RC is also essential. For example, using the term “saturate” yields this post.

    Good luck.

    Comment by Deech56 — 8 Jan 2010 @ 9:23 PM

  19. Michael Sweet asks “If it finally appears that the data was deliberately contrived to get this conclusion how will that affect Lindzen and Choi’s ability to get future papers published? The deniers will continue to push this paper even after it has been countered by scientists.”

    OK, now why should this affect their future work? They were wrong. They were not as diligent as one might have hoped. I hardly think this casts aspersions in general on their ability or honesty. Hell, I’m just happy when the denialists actually publish something. Get the wreckage off the road and everybody learn from it and move on. I’m sure none of us have any interest in playing the character assassination games. After all, we have evidence.

    Comment by Ray Ladbury — 8 Jan 2010 @ 9:38 PM

  20. OT (or maybe not), cartoon for those feeling beleaguered by the CRU hacking incident -
    http://3.bp.blogspot.com/_2fgn3xZDtkI/SziXnfjHy1I/AAAAAAAACxE/cjWviqrQGzM/s1600-h/TrickSTRIP(MINI).jpg

    Comment by Lynn Vincentnathan — 8 Jan 2010 @ 10:42 PM

  21. It seems to me that GRL needs to tighten up the ship, possibly considerably. I believe their high-volume “letters” orientation is causing review problems.

    Comment by Jim Bouldin — 8 Jan 2010 @ 10:51 PM

  22. The fact that Lindzen is able to get an obviously flawed paper published is a warning that peer review isn’t perfect, but also an indication that far from the contrarian case being suppressed, it gets published even if it’s rubbish. And it’s not the first time.

    So much for the grand conspiracy theory.

    Comment by Philip Machanick — 8 Jan 2010 @ 11:01 PM

  23. Joshua, don’t let people fool you by making things up and then challenging you to prove their imaginative exaggerations–you can’t (they can’t either). If they say it’s true, ask them what source they’re relying on.

    Comment by Hank Roberts — 8 Jan 2010 @ 11:04 PM

  24. Jim Prall, given the cost of subscriptions and the pressure of deadlines, Eli has now and again paid for a paper. If you think about university costs, it even makes sense to have a small sum set aside for this as long as no one abuses it. And yes, the journals could get more if they dropped the individual article cost, OTOH their goal is to maximize revenue which mostly comes from library subscriptions.

    On the other topic, it would be impossible to prove that mopery was afoot with LC09, and given that, it’s not worth trying.

    Comment by Eli Rabett — 9 Jan 2010 @ 12:15 AM

  25. Jim Prall:

    Like Eli, I’ve also paid for a paper a couple times. No library has a subscription to every last thing, and sometimes you just don’t want to wait a couple days for them to get you a copy.

    So it’s not unheard of.

    Comment by tharanga — 9 Jan 2010 @ 12:44 AM

  26. I’d love the denialosphere to explain the steady decrease of C13 in the atmosphere:

    http://www.barrettbellamyclimate.com/page33.htm
    http://www.barrettbellamyclimate.com/page34.htm

    Extrapolate that line back to 0 and you get 1655 AD :)

    I look forward to seeing how the “denialosphere” explains this.

    Comment by Garrett — 9 Jan 2010 @ 4:30 AM

  27. Francis #9, Gavin, the Osborn et al. article is here in cache:

    http://74.125.155.132/scholar?q=cache:Njnn1fHjn0QJ:scholar.google.com

    Unfortunately html. Go to Section 4.3 for the goods.

    Comment by Martin Vermeer — 9 Jan 2010 @ 4:46 AM

  28. Just as a precautionary note, my blog article (linked in comment #2 by Ray Ladbury, and also the topic of the RC article entitled “Advocacy vs. science”) was a response to a blog posting by Lindzen, not the peer-reviewed published article. The counter-arguments I presented do not apply to the article in the literature, and this has generated some confusion on my blog posting.

    Comment by Chris Colose — 9 Jan 2010 @ 5:55 AM

  29. This seems like another example of a failed and flawed process that needs fixing with a stronger and truly independent review system. Last time I posted with this view, I was told that it was absolutely fine and worked well. The ‘conflict of interest’ aspect needs to be taken out.

    Comment by Bill — 9 Jan 2010 @ 6:14 AM

  30. That’s the one thing about peer review that seems a little odd to the public at large, perhaps. If a paper’s findings turn out to be wrong, then how come it got published in the first place? I would have thought that such peer review is not really peer review but inferior peer review, if other scientists find the paper to be flawed. Why was it not picked up?

    Some would say such scientists are wrecking science, because their review was not sufficient yet found the paper worthy of publication. Something seems odd here.

    Comment by pete best — 9 Jan 2010 @ 6:21 AM

  31. Josh: 1. Demonstrate causation, not just correlation of CO2 levels relative to global temperature.

    BPL:

    Fourier, J.-B. J. 1824. “Memoire sur les Temperatures du Globe Terrestre et des Espaces Planetaires.” Annales de Chemie et de Physique 2d Ser. 27, 136-167.

    Tyndall, J. 1859. “Note on the Transmission of Radiant Heat through Gaseous Bodies.” Proceed. Roy. Soc. London 10, 37-39.

    Arrhenius, S.A. 1896. “On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground.” Phil. Mag. 41, 237-275.

    Royer, D.L. 2006. “CO2-forced climate thresholds during the Phanerozoic” Geochim. Cosmochim. Acta 70, 5665-5675.

    Came R.E., J.M. Eiler, J. Veizer, K. Azmy, U. Brand, and C.R. Weidman 2007. “Coupling of surface temperatures and atmospheric CO2 concentrations during the Palaeozoic era.” Nature 449, 198-201.

    Doney, S.C. et al. 2007. “Carbon and climate system coupling on timescales from the Precambrian to the Anthropocene” Ann. Rev. Environ. Resources 32, 31-66.

    Horton, D.E. et al. 2007. “Orbital and CO2 forcing of late Paleozoic continental ice sheets” Geophys. Res. Lett. L19708.

    Fletcher, B.J. et al. 2008. “Atmospheric carbon dioxide linked with Mesozoic and early Cenozoic climate change” Nature Geoscience 1, 43-48.

    W. M. Kurschner et al. 2008. “The impact of Miocene atmospheric carbon dioxide fluctuations on climate and the evolution of the terrestrial ecosystem” Proc. Natl. Acad. Sci. USA 105, 449-453.

    Lean, J.L. and D.H. Rind 2008. “How natural and anthropogenic influences alter global and regional surface temperatures: 1889 to 2006.” Geophys. Res. Lett. 35, L18701.

    Royer, D.L. 2008. “Linkages between CO2, climate, and evolution in deep time” Proc. Natl Acad. Sci. USA 105, 407-408.

    Zachos, J.C. 2008. “An early Cenozoic perspective on greenhouse warming and carbon-cycle dynamics” Nature 451, 279-283.

    2. Use real-world empirical evidence, not flawed computer models.

    See above. For carbon dioxide rising, see

    Keeling, C.D. 1958. “The Concentration and Isotopic Abundances of Atmospheric Carbon Dioxide in Rural Areas.” Geochimica et Cosmochimica Acta, 13, 322-334.

    Keeling, C.D. 1960. “The Concentration and Isotopic Abundances of Carbon Dioxide in the Atmosphere.” Tellus 12, 200-203.

    For the new carbon dioxide being anthropogenic in origin, see

    Suess, H.E. 1955. “Radiocarbon Concentration in Modern Wood.” Sci. 122, 415-417.

    Revelle, R. and H.E. Suess 1957. “Carbon Dioxide Exchange between Atmosphere and Ocean and the Question of an Increase of Atmospheric CO2 During the Past Decades.” Tellus 9, 18-27.

    3. Show empirical evidence for temperature rises following CO2 level increases, not before.

    Google “PETM,” or check here, where a tight correlation is shown between temperature anomalies and CO2 level in the same year:

    http://BartonPaulLevenson.com/Correlation.html

    In a natural deglaciation, temperature rise does indeed precede carbon dioxide increase, because warmer water holds less CO2 and it bubbles out of the ocean. The additional CO2 then raises the temperature further in a feedback. But that is NOT what is happening now. We know from its radioisotope signature that the new CO2 is coming from fossil fuels and deforestation, not the ocean.

    4. Demonstrate that CO2 is the sole major forcing in global temperature changes, not a minor player in a much larger game, involving clouds, solar flux and CRF.

    This is a straw-man argument. Nobody competent ever said CO2 was “the sole major forcing in global temperature changes.” It happens to be the major (not the only) cause of the present global warming, but at other epochs other causes have been more important. See the Lean paper referenced above for an example of how they sort out change attribution.

    5. Show that CO2 levels and greenhouse effect is not already saturated.

    At the lowest levels of the atmosphere, it is mostly saturated–and it doesn’t matter. The atmosphere as a whole is *never* entirely saturated, and can’t be, and every level contributes to the surface temperature. Please read:

    http://BartonPaulLevenson.com/Saturation.html

    Comment by Barton Paul Levenson — 9 Jan 2010 @ 7:01 AM

  32. Re #13
    Spot on Tom.

    I voiced very similar here back in August.

    Many people are delving into the subsequent analysis and ignoring that the framework and assumptions are flawed.

    Peer review: Why was no one from the Trenberth/Fasullo team asked to review?
    Did *anyone* really review the paper, and what were their comments?

    Originally http://www.realclimate.org/?comments_popup=930 #127
    ————————————————————————–
    Re Some thoughts on Lindzen’s paper http://www.leif.org/EOS/2009GL039628-pip.pdf

    Lindzen attempts to show that there is a relationship between LW radiative flux anomaly and SST anomaly 20N-20S
    over a 15 year period. Sensitivity (K/(W/m2)) is estimated as the ratio of these properties.

    He also discusses the failure of some AMIP based models to replicate the ERBE observations, which I do not specifically comment on here.

    4 principal criticisms

    1. The analysis is constrained to tropical oceans.
    2. The heat budget for the 20N-20S area is not closed, either in i) area or ii) total heat budget (particularly latent heat).

    3. Simply eyeballing the SST and OLR anomaly graphs together does not give a confident impression of a statistically significant relationship between LW flux and temperature anomalies.

    4. Analysis is constrained to delta T of >0.2 K, which for instance excludes the Pinatubo event, which is clearly captured in OLR and ASR with little SST response.

    My main concern is with (2).
    The 20N-20S area of analysis:
    1 Is arbitrarily constrained
    2 Has boundaries across which there are significant oceanic and atmospheric meridional heat fluxes. http://www.cgd.ucar.edu/cas/Trenberth/trenberth.papers/i1520-0442-21-10-2313.pdf
    These fluxes vary seasonally by 2-8 PW (equivalent to 4-16 Wm-2 globally, or about 16-64 Wm-2 over the 20N-20S oceans) at both 20N and 20S.
    A 10% interannual variation would be equivalent to about 6-7 Wm-2 over the 20N20S area
    3 Has an unbounded heat budget, particularly in latent heat, which whilst having a relatively uniform mean in the tropics:
    http://ds.data.jma.go.jp/gmd/jra/atlas/surface-1/lheatsfc_ANN.png
    has considerable year to year variability:
    http://ds.data.jma.go.jp/gmd/jra/atlas/surface-1/lheatsfc_sd.png
    which is typically 5-15 Wm-2, much larger than the OLR variation http://ds.data.jma.go.jp/gmd/jra/atlas/surface-1/dlrsfc_sd.png
    and larger than the 7Wm-2 peak-to-peak OLR anomaly in Lindzen’s paper

    Consequently, whilst the temperature variations may be correlated to a greater or lesser extent with radiative flux anomalies (<7 Wm-2), they could be wholly or partially caused/explained by interannual variations in

    1 Meridional heat export across 20N and 20S (ocean or atmosphere)(?~6-7 Wm-2)
    2 Latent heat (5-15 Wm-2)

    which, if true, unfortunately renders any subsequent analysis of sensitivity (however well-founded in theory) redundant as the heat budget is not closed.
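
    The per-area conversions above can be checked in a few lines (a back-of-envelope sketch: the Earth radius is standard, while the ~0.72 ocean fraction of the 20N-20S band is an assumed round number that reproduces the quoted ranges):

```python
# Convert petawatts of meridional heat flux to W/m^2 over the globe and
# over the tropical (20N-20S) oceans. The 0.72 ocean fraction is assumed.
import math

R = 6.371e6                                    # Earth radius, m
A_earth = 4 * math.pi * R**2                   # ~5.1e14 m^2
A_band = A_earth * math.sin(math.radians(20))  # surface area between 20N and 20S
A_ocean = 0.72 * A_band                        # assumed ocean area of the band

for flux_pw in (2, 8):                         # seasonal flux range, PW
    w = flux_pw * 1e15
    print(flux_pw, round(w / A_earth, 1), round(w / A_ocean, 1))
```

    This recovers roughly 4-16 Wm-2 globally and 16-64 Wm-2 over the tropical oceans, as stated.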

    Comment by cumfy — 9 Jan 2010 @ 7:28 AM

  33. Jim Prall,
    I know how much work went into your climate citation index. I just want to tell you that no matter how much it was, it was worth it. I know of no stronger evidence of consensus. I cite it to denialists at least 3 times a week!

    Comment by Ray Ladbury — 9 Jan 2010 @ 8:10 AM

  34. Jim Bouldin says, “It seems to me that GRL needs to tighten up the ship, possibly considerably. I believe their high volume “letters” orientation is causing review problems.”

    I disagree. There needs to be a place where ideas can be vetted even before a toothpick comes out clean. They are still a good publication. Every now and again, you find an acorn.

    Remember, the purpose of peer review is not to ensure that everything that is published is correct, but to make sure it is sufficiently correct and sufficiently novel to be of interest to one’s peers. At the very least, I think the community learned something from poking at this work. And hey, if it really had been correct, it would have been highly significant.

    Comment by Ray Ladbury — 9 Jan 2010 @ 8:14 AM

  35. Nonstationarity of error terms is a serious problem in time-series analysis, but I don’t have a good sense of how well this issue has been treated in climate analysis.

    Could someone on this site comment on commenter #32 on Revkin’s blog, at http://community.nytimes.com/comments/dotearth.blogs.nytimes.com/2010/01/08/a-rebuttal-to-a-cool-climate-paper/?sort=oldest&offset=2 ?

    Is this a warranted criticism in the light of the references he cites?

    [Response: A criticism of what? Non-stationarity in historical time series is affected by a lot of things - homogeneity problems, forcings as well as the internal dynamics of the climate system. Climate models show very similar kinds of phenomenology, but it doesn't impact the interpretation of their climate sensitivity, or projections. Vyushin et al (2009) is quite relevant - gavin]

    Comment by HCG — 9 Jan 2010 @ 9:29 AM

  36. 30: Pete Best wrote ” … Some would say some scientists are wrecking science due to the fact their review was not sufficient and found it worthy of publication. …”

    Read the comments on this thread. No one has ever claimed that peer review is perfect. Arrhenius’ 1896 paper on GW predicted a climate sensitivity of something like 6C. Angstrom, in I forget when, about 1901, argued that the climate was saturated with respect to CO2. It was over a half century before Plass showed the error in Angstrom’s results. Plass found a no-feedback value of about 3C for climate sensitivity. I’m not quite sure when the accepted value of the no-feedback sensitivity settled to whatever it is, but I know that by the late ’70s the Charney report quoted a feedback sensitivity of about 3C. This value has held for the past 30 years with narrowing error bars. This is the normal way that science proceeds. One might argue that Einstein corrected Newton’s “errors”. If nothing was ever published until it was absolutely free of all error, I think it’s pretty safe to claim that no scientific paper on any subject would ever have been or ever will be published.

    Comment by John E. Pearson — 9 Jan 2010 @ 10:25 AM

  37. Re: #29,30

    Individual scientists doing honest research resulting in a submitted journal article cannot know all that there is to know related to the data, its analysis and interpretation (or modeling). That, and occasionally, you get referees who for whatever reason (hey, they are human) decide to give the review a half-hearted effort. So even if 1 or 2 referees did not catch missteps A or B, the community of “referees” in the same research field who read the published paper (or hear a presentation at a conference) will “catch” the errors and eventually publish their own work of refutation, whether pointing out flaws in methodology or presenting new data or both. And often it doesn’t stop there, with counter-claims made against counter-claims, based on evidence, until useful knowledge emerges. And even this useful knowledge will be challenged as new data arise. And so on; in science no knowledge is sacred. The mistakes of individual researchers are eventually weeded out by the collective works of many and useful understanding emerges.

    What you’ve seen is one of the many powerful engines of science at work to advance our knowledge of the world. All published papers in science are wrong at some level, some more so than others, and the evidence-based sifting out of wrongness is the means to deeper understanding.

    Contrast this to the “we know absolutely that we are right” model of ideology. Nothing is learned – ever.

    Comment by Spaceman Spiff — 9 Jan 2010 @ 10:54 AM

  38. Lynn, re the weather question:

    http://global-warming.accuweather.com/

    … December 2009 temperature results from UAH…….
    “The anomaly for the North Polar region was +1.96C, which is warm, but look at the North polar ocean anomaly….+3.16 C …. relative extreme warmth up in the Arctic Ocean for December. …. The December 2009 anomaly for the lower 48 of the U.S……-1.46 C, which is the coldest month compared to normal since October of 2002. (based on UAH only).”

    Comment by Hank Roberts — 9 Jan 2010 @ 11:32 AM

  39. I know this comment doesn’t belong here – so – In addition to apologising – I’m hoping that, as well as the inevitable flames, someone might be able to point me in the right direction.

    Given that there is no such thing as a free lunch (first law of thermodynamics), and given that we puny humans are, despite our puniness in comparison to the massive Earth-Sun system, now producing enough greenhouse gases to seriously adversely affect our climate in the medium term, what is the likely climatological effect of all the wind- and wave-power systems which will be coming onstream in the medium term round the globe? Has anyone even begun to consider modeling and calculating this?

    Comment by David — 9 Jan 2010 @ 1:12 PM

  40. I know this comment doesn’t belong here – so – In addition to apologising – I’m hoping that, as well as the inevitable flames

    Has anyone even begun to consider modeling and calculating this?

    Don’t know why there’d be flames … it’s a reasonable question, though one might ask “why didn’t you spend 30 seconds in Google, like I just did?”:

    Here’s a paper on the possible effects of large-scale wind farms.

    Didn’t google for the wave power part of your question, I’ll leave that for you, if you don’t mind. If you find something interesting, post it here!

    Comment by dhogaza — 9 Jan 2010 @ 2:01 PM

  41. Re David @20, seeing as the direct thermal contribution of global annual fossil fuel combustion has repeatedly been shown here to be minuscule compared to the increase in greenhouse forcing (most recently here: http://www.realclimate.org/index.php/archives/2009/10/an-open-letter-to-steve-levitt/ ), is there any reason to think that capturing a small portion of total wind and wave kinetic energy would have a more significant effect?

    Comment by Jim Eager — 9 Jan 2010 @ 2:05 PM

  42. David,
    My gut reaction is that wind/wave/tide/solar farms are too puny to have direct climatological effects. Some local weather effects, maybe. My argument is as follows: the reason fossil fuels are affecting the climate is their greenhouse gas after-effects. The direct heating effect of burning fuels is at *least* a couple of orders of magnitude less. The effects of wind/wave/tide/solar farms, whatever those effects might be, are going to be on the order of their power generation, like the direct heating effect of burning fuel. These “green” technologies don’t produce the kind of accumulating waste product that is the real problem with fossil fuels.
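
    A back-of-envelope sketch of that argument (assuming ~15 TW for world primary energy use and the standard simplified CO2 forcing expression 5.35 ln(C/C0) Wm-2 with 385 ppm vs. a 280 ppm baseline; all are round-number assumptions, not figures from this thread):

```python
# Direct heating from all human energy use vs. radiative forcing from the
# CO2 added so far. Inputs are approximate round numbers.
import math

A_earth = 4 * math.pi * (6.371e6) ** 2   # Earth's surface area, m^2
direct = 15e12 / A_earth                 # ~15 TW spread globally, W/m^2
forcing = 5.35 * math.log(385 / 280.0)   # simplified CO2 forcing, W/m^2

print(round(direct, 3), round(forcing, 2))
```

    The greenhouse term comes out dozens of times larger than the direct heating term, and unlike the direct term it keeps growing as CO2 accumulates.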

    Comment by GFW — 9 Jan 2010 @ 2:20 PM

  43. Continuing my post #37, from the referee’s perspective.

    Again, no single referee (or reviewer) can find all the mistakes in a paper because he/she has incomplete knowledge (and oh yeah, we’re human). Hopefully, the worst problems are caught, explanations are tightened, error analysis is improved, etc. Two referees are better than one, and three are better still, but with diminishing returns given that the primary role of the reviewer is to ascertain that the paper meets established minimum scientific standards and is of interest to the community. One of the jobs of the science editor is to ascertain that the referee is doing his/her job appropriately.

    What I said just above and in post #37 provides good reasons that science does not rely on the findings of a single refereed journal paper. With time, the work of many other investigators, and new data, shoddy analyses and less useful hypotheses lose out to those which lead to explanations that best encompass the data and better predict the behavior of nature. Science is not a democracy of ideas. Instead, it is a ruthless process of scrutinizing explanatory ideas against the real world. If an idea isn’t useful in predicting the behavior of nature, it doesn’t survive long.

    So again, as far as I can tell this looks to be an example of good science in action.

    Comment by Spaceman Spiff — 9 Jan 2010 @ 3:01 PM

  44. > is there any reason to think …?

    Locally, sure.

    http://enperublog.com/2009/10/20/limas-fog-nets-catching-water-for-the-citys-poor-featured/

    http://www.google.com/search?q=redwood+capture+water+mist+fog

    Coming across a climate question, I’ll usually Google first, before thinking (or deciding not to think), because “It’s a poor sort of memory that only works backwards”; almost invariably, since I last updated my own memory, something new has been learned, and often Google finds it.

    Comment by Hank Roberts — 9 Jan 2010 @ 3:05 PM

  45. I have to agree with Tom Wigley (#13) – there were a number of obviously questionable things about this paper (not the results, the procedure) from the first time I saw it, and I’m no expert. A competent referee should have at least asked what their procedure was for selecting the time intervals for analysis (no objective criterion given??) and what effect different choices would have (where are the error bars?). They should have asked for more justification for the tropical restriction and why they thought extra-tropical energy flows were not an issue. And they perhaps should have noticed something odd about the feedback analysis. Maybe these questions were asked, and Lindzen somehow got around them with some argument or other. Or maybe the review criteria for GRL need some examination if they don’t encourage that sort of level of reviewer attention.

    Peer review in other fields is much stricter than this – for example, in mathematics reviewers sometimes spend weeks going over an article’s argument with a fine-toothed comb. Maybe that level isn’t necessary for geophysics – but it sure looks like there’s a need for a bit more effort here.

    Comment by Arthur Smith — 9 Jan 2010 @ 3:40 PM

  46. Arthur Smith@45, I think the referees may have felt some pressure to allow publication, since Lindzen is a prominent skeptic (or at least pseudo-skeptic), and the implications were significant if he was right. Personally, I think it’s probably better to have this one published and demolished rather than floating around the blogosphere (in various incarnations) as another zombie. In a letters journal like GRL, even if a letter is wrong, it can sometimes have a useful technique or two–not in this case, but sometimes.

    All a rejection would have accomplished is giving Lindzen a chance to play martyr on all the denialist blogs.

    Comment by Ray Ladbury — 9 Jan 2010 @ 4:18 PM

  47. Barton Paul wrote: “5. Show that CO2 levels and greenhouse effect is not already saturated.” Nice job digging up all those peer-reviewed studies!…

    Real Climate has also extensively covered this with at least four posts on this issue as well:

    Wow, I thought this saturation issue had been resolved for quite a while now…especially since mainframes started finally being able to do calculations for multiple layers of the atmosphere starting in the 1960s and 1970s. Amazing that the contrarians are still bringing this up.

    http://www.realclimate.org/index.php/archives/2007/06/a-saturated-gassy-argument/

    http://www.realclimate.org/index.php/archives/2007/06/a-saturated-gassy-argument-part-ii/

    http://www.realclimate.org/index.php/archives/2005/11/busy-week-for-water-vapor/

    http://www.cfa.harvard.edu/hitran//

    Comment by Richard Ordway — 9 Jan 2010 @ 5:11 PM

  48. Thanks, guys – a large amount of information there! :)

    Comment by Josh — 9 Jan 2010 @ 5:17 PM

  49. Re #36 The issue is the media and the fact that they jump on this type of paper, for in anything that creates an argument there is a story. The media treats a peer-reviewed paper as gospel, and it happens all of the time. If this paper is as poor as RC says, then surely the reviewers were not very good at reviewing.

    It could have been refuted by the reviewers as part of the process of peer review.

    Comment by pete best — 9 Jan 2010 @ 5:23 PM

  50. For sure locally, Hank, just as thermal power plants (and even hydro electric plant impoundments, for that matter) and built-up urban areas also have a significant climate effect on a local scale, but I took David’s question to refer to the climate system as a whole.

    Comment by Jim Eager — 9 Jan 2010 @ 7:23 PM

  51. For years I have been wondering why there has been no comment from a process/automation engineer about temperature controls with only positive or negative feedbacks.

    Any process engineer will tell you what the end result is in a system that has only positive feedback components: the process ends at an extreme temperature. So it goes with Earth’s climate. Nowhere is there any mention of the negative feedback components of Earth’s climate, or of the study of them. CO2 and methane are studied, but in its history Earth has had very different values for both gases, and we don’t live in a boiler.

    Hasn’t any climate scientist been to process automation 101? For Earth’s climate to be as stable as it is, there must be both positive and negative temperature-dependent coefficients limiting the change.

    [Response: The Planck long wave emission (σT^4) is the dominant negative feedback. Everything else is just modifying that. - gavin]
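
    As a back-of-envelope illustration of gavin’s point (numbers mine: σ is the Stefan-Boltzmann constant, and 255 K is a standard value for Earth’s effective emission temperature), linearizing σT^4 gives the strength of that restoring response:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0    # approximate effective emission temperature of Earth, K

# Derivative of sigma*T^4: extra outgoing long-wave flux per degree of warming
planck_response = 4 * SIGMA * T_EFF ** 3
print(f"{planck_response:.2f} W/m^2 per K")  # ~3.76
```

    Every degree of warming sheds a few extra W/m² to space, which is why positive feedbacks modify, rather than overwhelm, the system.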

    Comment by HH — 9 Jan 2010 @ 9:41 PM

  52. I’m still curious as to exactly what they were measuring. It’s already known we had an ice age recently. I don’t think anyone has postulated the gross change in solar radiation which would explain this without amplification, so it seems the Earth’s temperature has to be sensitive to SOME stimulus.

    Many of the skeptics have suggested most of the twentieth century temperature change is due to the sun. If so, wouldn’t these measurements measure sensitivity to a mixture of CO2 change and solar change – even if the whole thing were correct?

    Comment by David Weisman — 9 Jan 2010 @ 9:52 PM

  53. Ray, I’ve never been fond of the “letters” approach to science publishing, which connotes late-breaking, novel, newsworthy–and generally short–publications. That’s a media-circus-like approach to science that is out of place IMO (where did that whole idea come from anyway?). What exactly can you really say, and explain well, and defend–forget about introducing properly–in four pages? It serves nobody to be so over-extended that you can’t catch fundamental mistakes. And it serves even less when you get put through a Rick Trebino-like experience in trying to point out such mistakes in a formal Comment, if such are even allowed.

    Comment by Jim Bouldin — 9 Jan 2010 @ 10:12 PM

  54. I’m curious – I assume it’s normal for studies to rebut single papers, but is this amped up particularly in climate science? I’m wondering if the climate wars, which are so hard-fought in the political arena and semi-popular blog literature, have accelerated the practice.

    Comment by barry — 9 Jan 2010 @ 10:16 PM

  55. “Peer review in other fields is much stricter than this … Maybe that level isn’t necessary for geophysics – but it sure looks like there’s a need for a bit more effort here.”

    Is that statement meant to cover climate science publications in general, or just those of contrarians?

    Comment by Walter Manny — 9 Jan 2010 @ 11:21 PM

  56. Re: (Jim Prall, 15): “… question: does anybody ever actually pay those steep single-article prices?”

    If you are an AGU member ($20/year) you can buy a “multi-choice” subscription: 10 papers for $40, 20 for $30, or 40 for $50 (barely over $1 each). Much less than a GRL subscription.

    If you are quite interested in the subject, subscriptions to Nature and Science will likely prove more economical over the course of a year than single-paper purchases there. Often, PNAS papers are open access.

    For web searches, use scholar.google.com (in addition to regular Google), and search by title or author and year. Often there will be a link off to the right to a PDF copy of the paper. You can also try the authors’ sites, where you can often obtain papers.

    To organize your library of papers, if you get deep into this, I recommend Biblioscape. From scholar.google you can download citations directly into the program’s database, find things easily later, link to a paper’s website and a PDF on your hard drive, relate one paper to another, take notes, etc., etc. $300 (ouch), but I can’t function without it anymore.

    Comment by Larry — 9 Jan 2010 @ 11:44 PM

  57. @gavin: “I’m only aware of papers being withdrawn in the case of proven fraud”

    A bit of an exception which proves the rule, but there is one quite surprising instance – a paper by C.L. Siegel and Richard Bellman was accepted in the Annals of Mathematics before Siegel had seen the page proofs – Bellman had handled the proofs. Siegel withdrew the paper and demanded that the issue – which had already been printed and shipped to subscribers – be recalled and reprinted with his further minor revisions to the proofs. Which was done.

    I wouldn’t expect that to happen today though.

    Comment by Andrew — 9 Jan 2010 @ 11:59 PM

  58. @pete best: “30.Thats the one thing about peer review that seems a little odd to the public at large perhaps. If its findings turn out to be wrong then how come it got published in the first place?”

    A large amount of referee work in many fields is handed to graduate students. This is, oddly enough, one reason that it is easy to get a bad paper published on a well-known problem, and sometimes hard to get even a good paper published on a not-so-well-known problem.

    One other point is that “Letters” journals are often for rapid publication of work that, for one reason or another, seems to need rapid publication. Sometimes, although it is not supposed to, this results in a lower standard. The layman should probably think of “Letters” publications as ink still drying.

    Comment by Andrew — 10 Jan 2010 @ 12:12 AM

  59. @Ray Ladbury: “I think the referees may have felt some pressure to allow publication, since Lindzen is a prominent skeptic (or at least pseudo-skeptic), and the implications were significant if he was right. Personally, I think it’s probably better to have this one published and demolished rather than floating around the blogosphere (in various incarnations) as another zombie.”

    I think there ought never be consideration of who wrote a paper in deciding whether to publish, although maybe I’m old fashioned? Most of my papers were reviewed by anonymous referees – a system I was trained in, although more recently not. I don’t see what’s wrong with anonymous review, although current styles of referring to previous work can make things transparent.

    On the other hand I do agree that it’s good to have this line of argument out in the open though, since Lindzen and Choi apparently thought it worth working out. I am quite unimpressed with the turning point detection they used, surely MIT has some statistician who can straighten them out on that. Change point detection in such cases should not be done “in vacuo” but with respect to the overall estimation – you don’t really care as much about what those dates were as you care about whether your climate sensitivity can be more honestly or closely bounded. Likely best will be using a probability distribution for the regime change as opposed to a change point, which can reduce variance and also increase robustness. Or if one loves point estimation too much, you could use a soft model like the log odds on the distribution of the change points (e.g. logistic change).

    Whenever a transition at an uncertain time is modeled as a certain transition (at the best estimated time) most models that estimate the best time work to reduce some residual as much as possible – typically by fitting as much sample error as possible, thereby biasing the point estimate. Almost any reasonable transition smoothing reduces the influence of the data in the immediate neighborhood of the transition, leading to much less risk of “overfitting” (i.e. cherry picking). As soon as you consider correlates conditioned by the transitions you are really asking for trouble unless you have confidence that those transitions are really nailed down.

    Comment by Andrew — 10 Jan 2010 @ 12:32 AM

  60. So much going on. Busy, busy, busy. The heat island effect in urban areas is cumulative, and any power plants of any kind are peripheral to cars, asphalt (an absorber of short wave radiation while emitting long wave; the darker the better at this), industrial scale bakeries/food processing, etc. Here in my very small area, the immediate harbor of Lake Erie doesn’t freeze over due to the coal-fired (formerly Niagara Mohawk, now NRG) 600 MW Dunkirk, NY power plant. If I had to estimate the area of the lake which doesn’t freeze from the hot cooling water expelled from the facility, I’d say it’s roughly 10-15x the footprint of the entire property (not just the plant). I’m not sure how many football fields that would be, but it would be measured in football fields, not in large units like states or even counties.

    A very interesting but little known textbook called “Energy, Physics and the Environment” by McFarland, et al. turned out to be a very good primer on pretty much all known forms of energy and the general footprint of each. The book’s description says it’s for a student who has already had college physics 101, but I would recommend it to anyone. It offers a pretty balanced, dispassionate presentation of what is available, feasible and known. The authors also delve into the fringes (eg: fusion) and the whole lifespan of technologies (starting with mining) in a pretty clear format. It’s not a perfect book, but I would recommend it to anyone who has an interest in getting a broad overview of energy extraction methods and their real costs, from cradle to grave. It’s not a book with an agenda other than to present the facts, so readers are left to make up their own minds. Many of the chapter-end questions are easy mathematically but thought provoking, and answers, without the steps to obtain them, are provided for all the questions. Of course there are areas which could have been better treated, but it’s very, very good for what it is.

    I bring up that book to go back to things like tidal energy and extracting it to do work (covered briefly, but with equations, in the book, while wind is treated a bit more extensively, although friction and overcoming the initial inertia tend to be ignored while apparently well-established constants, which I have found elsewhere, dominate the equations, but I digress). Tidal energy, from what I’ve read about it, just converts existing energy, which already does work (on shores, the ocean floor, wherever) that isn’t put to human use. What I learned from that book, which I hadn’t before considered (like the cooling which occurs due to evaporation), is that when water “falls” from one location and lands on another, latent heat is released, which can be used, in conjunction with the force of gravity, to do work (extractable energy). This can be observed, if you’re somewhat sensitive, by sticking your hand into a bowl of water and comparing how that feels to pouring that same bowl of water in a direct stream onto the same part of your hand. One will feel colder than the other, but neither makes any significant effect on the room temperature (and this analogy could probably be used in some clever way to distinguish between weather and climate as well). The temperature of the water after it falls onto a surface (as when performing work on a turbine) is higher than its starting temperature (assuming all other regional temperatures are the same, including the surface it hits; some substances dissipate heat more easily, others more slowly, as in wood, an insulator, vs. copper, a conductor), but we’re not talking about large amounts of heat. There is no fire involved in these processes, and this, in my mind, is crucial: there is no STEAM involved. STEAM = water, converted into steam by constant high-temperature burning of some substance.

    In coal and nuke plants, steam is used to move turbines to create work; the same with active geothermal plants (the ones which use molten magma, not the kind that rely on localized air temp in near-surface bore holes), so water vapor, along with whatever other emissions the heat source emits, is sent into the atmosphere. With solar, wave (tidal) and wind, no steam is required and no massive human-generated heat source is required. No aquifers depleted, no heat island effect. That isn’t to say that there couldn’t be ecological problems (fish which depend on shores where tidal power might be useful are already in danger, bats and birds are being imperiled by poorly placed wind turbines, and a heat island effect from a concentrated solar installation could create its own problems), but, combined with conservation, it’s all better than anything we have. My personal conclusion is that if the power technology uses steam (as all coal, active-magma geothermal and nuke plants do), then it’s not sustainable. We’re talking about some pretty massive amounts of water for each, and there are lots of places running out of water. Pumping more and more sequestered CO2 into the atmosphere is very, very harmful for our (humanity’s) long term survival, but if we wipe out our fresh water by pollution and overuse, then doomsday comes much sooner. We’ve lived with water like it’s endless, and worse, let corporations treat it like it’s in endless supply, and those days are coming to a close, even here in America. In the meantime, we can argue about things like whether or not LC09 has any real meaning while each and every one of us contributes to what could be our demise if we don’t get serious.

    Comment by Shirley — 10 Jan 2010 @ 12:35 AM

  61. Larry says: 9 January 2010 at 11:44 PM

    Alternatively, marrying a faculty member of a university works a treat. Just pick one at a school w/bulk access to journals.

    Comment by Doug Bostrom — 10 Jan 2010 @ 12:36 AM

  62. @HCG: “35.Nonstationarity of error terms is a serious problem in time-series analysis, but I don’t have a good sense of how well this issue has been treated in climate analysis.”

    It’s a very well understood issue, but you have to understand that nonstationary residuals may very well occur in nonstationary time series. It is not always a ‘flaw’, it can be a fact.

    There is no reason to expect in general that a nonstationary stochastic process will have stationary residuals. In fact, it is sort of exceptional for that to occur – exceptional in any of several senses which can be made mathematically precise.

    Comment by Andrew — 10 Jan 2010 @ 12:46 AM

  63. Here are some similar concerns Roy Spencer has regarding LC09: http://www.drroyspencer.com/2009/11/ Scroll down.

    Comment by H Hak — 10 Jan 2010 @ 4:30 AM

  64. Off topic, but the UK press is reporting in this story that oceanic cycles are producing the weather changes many of us are seeing in the NH, and that they’ll continue. The story quotes Mojib Latif, who gives the same story as he did last year – cooling for at least 20 years, plus a whole lot more about weather for the next few decades.

    A Professor Tsonis is also quoted as saying that multi-decadal oscillations explain all the major changes in world temperatures in the 20th century. “We can expect colder winters for quite a while”.

    I believe that the GCMs don’t account for these MDOs that Latif and Tsonis say account for climate shifts. What is RC’s position on this? Because press stories like this here in the UK are likely to gain a lot of traction

    [Response: Wow. Quite frankly I find these comments (assuming the recent quotes are accurate and in context) very strange. Neither Tsonis nor Latif can have done any kind of attribution study with data from this winter, and so their connection of two weeks of negative AO to some multi-decadal cycle is just speculation. Latif's paper in 2008 made predictions for the period 2000-2010 which are guaranteed not to come true (this year would need to be as cold as a year in the 1970s) - and for quite well understood reasons (see this recent paper by Dunstone and Smith (2010) (sub. reqd.)). Tsonis's paper was discussed here by his co-author. - gavin]

    Comment by AxelD — 10 Jan 2010 @ 6:13 AM

  65. Andrew @59 says “Change point detection in such cases should not be done “in vacuo” but with respect to the overall estimation – you don’t really care as much about what those dates were as you care about whether your climate sensitivity can be more honestly or closely bounded.”

    Important point–and one that is under-appreciated. You have to treat the change date (or any other such quantity) as a fitted parameter. This became very clear in an analysis I did where I developed a fitting routine based on the Akaike Information Criterion. If the additional parameters don’t add information, they should be omitted and a simpler model used. Failure to do so can really distort results.
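
    A minimal sketch of that idea (the toy data and code are mine, not Ray’s actual routine): fit a single mean versus a two-mean model whose change date is itself searched over, and charge for the extra parameters with AIC.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
y = rng.normal(0.0, 1.0, n)  # toy series: pure noise, no real change point

def aic(rss, n, k):
    # Gaussian least-squares AIC, up to an additive constant
    return n * np.log(rss / n) + 2 * k

# Model A: a single mean (k = 1 parameter)
rss_a = float(np.sum((y - y.mean()) ** 2))

# Model B: a change date plus a mean on each side; the searched-over
# date itself counts as a parameter (k = 3)
rss_b = min(
    float(np.sum((y[:t] - y[:t].mean()) ** 2)
          + np.sum((y[t:] - y[t:].mean()) ** 2))
    for t in range(5, n - 5)
)

print(aic(rss_a, n, 1), aic(rss_b, n, 3))
```

    Note that searching over change dates always lowers the raw misfit (rss_b < rss_a even on pure noise); the whole question is whether the improvement survives the parameter penalty.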

    Regarding the whole LC’09 analysis, the words “too clever by half” spring to mind.

    And Re: the peer review process, I suspect that an anonymous review would have been impossible. Everyone knew Lindzen had been working on this. He’d essentially published the analysis on WUWT. In any case, I think that reviewers often give some leeway to dissenting opinions, regardless of authorship.

    Comment by Ray Ladbury — 10 Jan 2010 @ 7:59 AM

  66. Walter Manny,
    We’re talking Geophysics Research Letters here, ferchrissake. The whole point of the publication is to get results out in front of the community with rapidity. The very fact that LC’09 was published, despite obvious flaws that had been identified even in its blog version, simply puts paid to the assertion in the denialosphere that you guys can’t get published. You can get published–it’s just that what you publish is usually wrong or uninteresting.

    Comment by Ray Ladbury — 10 Jan 2010 @ 8:06 AM

  67. barry asks, “I’m curious – I assume it’s normal for studies to rebut single papers, but is this amped up particularly in climate science? I’m wondering if the climate wars, which are so hard-fought in the political arena and semi-popular blog literature, have accelerated the practice.”

    I see no difference between climate science and any other field at the level of journals or conferences. It’s only in the blogosphere, the editorials and the political arena where things get nasty. In other fields I’ve seen some really nasty, life-long feuds, too. Scientists are human–well most of us. They said that von Neumann was a Martian who just learned to do a good imitation of human behavior by later life.

    Comment by Ray Ladbury — 10 Jan 2010 @ 8:12 AM

  68. Jim Bouldin,
    Well, as someone who had the job of following geophysics for a physics magazine, I didn’t devote a whole lot of attention to GRL. The real purpose as you say was “breaking news”–discoveries of interest to the community. In reality, it has come to be a place where you publish results when you don’t want to deal with the rigor and time of an intense peer review. It’s not a dumping ground, and I did faithfully scan the tables of contents every month, but GRL does have a low SNR.

    Comment by Ray Ladbury — 10 Jan 2010 @ 8:17 AM

  69. @dhogaza(40)& GFW (42) Thanks guys. The reason for apologising and expecting flames was that I was going off-topic. The reason for posting here first rather than doing a Google search first (dhogaza – you are correct) was the usual obvious one: gambling on a better result by asking within a forum of genuinely interested parties than by doing a Google search, however well you frame the query. Indeed – subsequent Google searches turned up little in (so far) 20 min of work. However, there was one item, so, regarding “if you come across anything interesting post it here” – well, all right then:
    http://www.ucalgary.ca/~keith/papers/94.Kirk-Davidoff.SurfaceRoughnessJAS.p.pdf
    But only from one of the same authors, unfortunately.

    Comment by David — 10 Jan 2010 @ 8:48 AM

  70. HH says, “Any process engineer will tell you what the end result is in a system that has only positive feedback components, process ending in the extreme temperature.”

    Actually, only the ones who haven’t studied infinite series would tell you that. As long as each term in the series decreases geometrically, the series converges:

    http://mathworld.wolfram.com/Series.html

    And as Gavin points out, there are negative feedbacks.
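
    A quick numerical illustration of that convergence (sketch mine): with a feedback factor f, the amplified response is the geometric series 1 + f + f^2 + …, which converges to 1/(1 - f) whenever |f| < 1.

```python
def amplification(f, terms=200):
    # Partial sum of the feedback series 1 + f + f^2 + ... + f^(terms-1)
    return sum(f ** k for k in range(terms))

f = 0.5  # net positive feedback, but each round trip is smaller than the last
series_sum = amplification(f)
closed_form = 1.0 / (1.0 - f)
print(series_sum, closed_form)  # both ~2.0: positive feedback, finite response
```

    A positive feedback with f < 1 amplifies the response without any runaway; only f ≥ 1 "ends at the extreme".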

    Comment by Ray Ladbury — 10 Jan 2010 @ 8:54 AM

  71. I think you are making a serious mistake by ignoring what is happening right now in the real world. The populous regions of the northern temperate zone are experiencing historically unprecedented cold — and I don’t just mean since systematic temperature records were kept, I mean since people started to make written records. Frost in New Orleans, snow in Orlando and Chihuahua, Ireland and the British Isles completely covered in snow. I know this has to do with the Arctic oscillation and that temperatures elsewhere are warm, but politically, the consequences of this event are obvious to me — most people just aren’t going to believe the world is getting warmer when the world around them is manifestly colder than it has ever been. And any hope of political action in the U.S. to reduce carbon emissions is gone, completely, for at least the next few years, until people’s immediate experience changes. That’s a fact, and you need to acknowledge it, and come down out of the ivory tower and address it.

    Just some friendly advice

    [Response: "Colder than it's ever been"? Really? - gavin]

    Comment by cervantes — 10 Jan 2010 @ 9:52 AM

  72. Well, at the same time Eastern Canada had very unseasonably hot weather. Roughly 15 degrees above the average. Since there are no big cities there, nobody speaks about it.

    Comment by Yvan Dutil — 10 Jan 2010 @ 10:22 AM

  73. LC’09 and the subsequent discussions have been fascinating. As an unintended consequence, they seem to have taken some of the wind out of the deniers who often pollute the RC discussions. Good riddance. They have been cluttering up the RC discussions. Maybe I am wrong, but I don’t believe that an MIT professor and an MIT post doc could have accidentally made the mistakes they have made. [edit - this is not ok]

    Obviously I am not nearly as charitable as the non-denier RC experts whose opinions I respect.

    Comment by Dallas — 10 Jan 2010 @ 10:24 AM

  74. “More generally, this episode underlines the danger in reading too much into single papers……Research at the cutting edge – where you are pushing the limits of the data or the theory – is like that. If the answers were obvious, we wouldn’t need to do research.”

    I fully agree with this. Also, it is good to realise, that together with the good research, countless flawed, biased, erroneous papers get published annually in peer-reviewed scientific journals. This may come as a surprise to those not involved in the process themselves.

    I do not mean that we should abandon the scientific process. I do believe that in the long run it will deliver the goods. Emphasis on the long run. Especially with something as complicated as global warming.

    Comment by Joe — 10 Jan 2010 @ 10:27 AM

  75. re#71: Colder than it has been since 1981-2, then 1962-3, then 1947, then… oh, this weather stuff is boring

    Comment by Bill — 10 Jan 2010 @ 10:33 AM

  76. @51 says:
    “Nowhere are there any mentions of the negative feedback components of earth climate, or the study of them.”

    Because climatologists are actually interested in understanding how Earth’s climate works, they do investigate and assess negative as well as positive feedbacks. See, if you weren’t interested in understanding how nature behaves, you’d go around cherry-picking only those data which confirm what you already know must be true. Of course, such behavior would lead to nothing learned, contrary to the actual development of knowledge and understanding since the early 19th century.

    For starters try here and here for discussions of the effects and amplitudes of both types of feedback. An enormous amount of work has been done to understand all of the important feedback mechanisms.

    Comment by Spaceman Spiff — 10 Jan 2010 @ 10:42 AM

  77. Eastern Canada is NOT warmer than usual. In Montreal, we are having a “normal” winter. However, November was warm with some 15 C days.

    It was -20 C last night.

    Comment by Joe Blanchard — 10 Jan 2010 @ 10:47 AM

  78. Just out of curiosity.

    Is there any paper or study that tried to come up with a figure of the max theoretical temperature that the earth could achieve?

    Comment by Joe Blanchard — 10 Jan 2010 @ 11:08 AM

  79. Dallas@73 Let’s not play this game. LC’09 had some significant flaws that were spotted fairly quickly once it was published. If his intent had been to deceive, do you think he would have published it and put it out in front of all the smart people, or would he have made the rounds of denialist blogs and received their adulation?

    Lindzen and Choi played by the rules. They published. This puts them lightyears ahead of the typical denialists. Their ideas were put out in front of the community, flaws were found, and now L&C have acknowledged the flaws and promised to submit an improved version. Kudos to them. That is how you play the game of science.

    Anti-science is what happens when one publishes on blogs frequented by laymen who read things refracted through their own ideological prism and shout down the voices of scientific reason that point out flaws. I have been harshly critical of Lindzen in the past for overstepping the bounds of scientific respectability in op eds, public debates, etc.

    I’ll even go so far as to take back my assertion that LC’09 is “not even wrong” and apologize to the authors. It had clear flaws, but remember the authors aren’t experts in interpreting satellite data. Maybe they have an ideological bias, but I think that they were honestly trying to call attention to something they thought was important.

    The important thing to take away from this is the way science works–the individual scientists have biases, but the product of the collective effort of the scientific community winds up with those biases greatly diminished.

    Comment by Ray Ladbury — 10 Jan 2010 @ 11:08 AM

  80. @Ray Ladbury: “Important point–and one that is under-appreciated. You have to treat the change date (or other such parameter) as a parameter.”

    Absolutely. This falls into the category of “do we actually have to say this?” Apparently it is necessary to say it.

    On the other hand, simply counting parameters is really not recommended, nor are simple complexity measures like AIC, etc. Too much distributional dependence and sample independence is needed for those approximations to work. You really want to be looking at eigenvalues of the Fisher information if you want to live with efficient estimation. I personally have given up on efficient estimation in practice, but superefficient estimation is probably nonviable in climate, where the political pressure on credibility probably makes it impossible to use methods which, however objectively superior, necessarily include arbitrary choices.

    And another “need we say it?” point is that change points that were parameterized in preliminary or unsuccessful analyses reduce the significance of the final analysis even if those parameters do not appear – or else the “multiple testing” form of the prosecutor’s fallacy can occur. To make this concrete, suppose you will require the probability of a false positive by no more than 5% in order to accept an analysis. Well, as long as you have more than 100 analyses up your sleeve, then none of those analyses need be anything other than chance for you to have a 99% chance of finding one test with 95% significance. (The formula is 1 – (1 – p)^n where p is the probability of a false positive and n is the number of independent trials with that probability.)

    And another “need we say it?” point is that change points that were parameterized in preliminary or unsuccessful analyses reduce the significance of the final analysis even if those parameters do not appear – or else the “multiple testing” form of the prosecutor’s fallacy can occur. To make this concrete, suppose you require the probability of a false positive to be no more than 5% in order to accept an analysis. Then, as long as you have more than 100 analyses up your sleeve, none of those analyses need be anything other than chance for you to have a 99% chance of finding one test with 95% significance. (The formula is 1 – (1 – p)^n, where p is the probability of a false positive and n is the number of independent trials with that probability.)
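
To make the arithmetic above concrete, here is a minimal sketch using the figures from the comment – a 5% level, 100 analyses, and 7 binary change-point choices:

```python
# Family-wise chance of at least one false positive among n independent
# tests, each with false-positive probability p.
def p_any_false_positive(p, n):
    return 1 - (1 - p) ** n

# More than 100 analyses at the 5% level: ~99% chance of a "significant" hit.
print(round(p_any_false_positive(0.05, 100), 3))  # 0.994

# 7 change points, each with 2 choices, give 2**7 candidate divisions.
print(2 ** 7)  # 128
```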

    Note that this is an additional complication beyond possible sensitive dependence of the inference on the location of the change points – it is only necessary that some parameterizations of the model result in an uncertain distribution of the value of interest for the significance to be much reduced; it is not required that the values actually fitted exhibit this sensitivity for the inference to suffer the loss of significance. This criticism will apply not just to the Lindzen and Choi analysis, but to other “counter-analyses”, unless care is taken to avoid this problem.

    Comment by Andrew — 10 Jan 2010 @ 11:48 AM

  81. Given the large number of comments on the peer-review process in general and in the LC09 case in particular, it is probably worthwhile to give a bit more backstory to our Trenberth et al. paper. On my first reading of LC09, I was quite amazed and thought if the results were true, it would be incredible (and, in fact, a good thing!) and hence warranted independent checking. Very simple attempts to reproduce the LC09 numbers simply didn’t work out and revealed some flaws in their process. To find out more, I contacted Dr. Takmeng Wong at NASA Langley, a member of the CERES and ERBE science teams (and a major player in the ERBE data set), and found out to my surprise that no one on these teams was a reviewer of LC09. Dr. Wong was doing his own verification of LC09 and so we decided to team up.

    After some further checking, I came across a paper very similar to LC09 but written 3 years earlier – Forster & Gregory (2006), hereafter FG06. FG06, however, came to essentially the opposite conclusion from LC09, namely that the data implied an overall positive feedback in the earth’s climate system, though the results were somewhat uncertain for various reasons as described in the paper (they attempted a proper error analysis). The big question, of course, was: how is it that LC09 did not even bother to reference FG06, let alone explain the major differences in their results? Maybe Lindzen & Choi didn’t know about the existence of FG06, but certainly at least one reviewer should have. And if they also didn’t, well then, a very poor choice of reviewers was made.

    This became clear when Dr. Wong presented a joint analysis he & I made at the CERES science team meeting held in Fort Collins, Colorado in November. (http://science.larc.nasa.gov/ceres/STM/2009-11/index.html). At this meeting, Drs. Trenberth and Fasullo approached us and said they had done much the same thing as we had, and had already submitted a paper to GRL, specifically a comment paper on LC09. This comment was rejected out of hand by GRL, with essentially no reason given. With some more inquiry, it was discovered that:

    1) The reviews of LC09 were “extremely favorable”
    2) GRL doesn’t like comments and is thinking of doing away with them
    altogether.
    3) GRL wouldn’t accept comments on LC09 (and certainly not multiple comments), and instead it was recommended that the four of us submit a stand-alone paper rather than a comment on LC09.

    We all felt strongly that we simply wanted to publish a comment directly on LC09, but gave in to GRL and submitted a stand-alone paper. This is why, for instance, LC09 is not directly referenced in our paper abstract.
    The implication of statement (1) above is that LC09 basically skated through the peer-review process unchanged and that the selected reviewers had no problems with the paper. This, together with GRL’s summary rejection of all comments on LC09, appears extremely sketchy.

    In my opinion, there is a case to be made that the peer-review process is flawed, at least for certain papers. Many commenters say the system isn’t perfect but in general works. I would counter that it certainly could be better. For AGU journals, authors are invited to give a list of proposed reviewers for their paper. When the editor is lazy or short on time, they may just use the suggested reviewers, whether or not those reviewers are appropriate for the paper in question. Also, when a comment on a paper is submitted, the comment goes to the editor who accepted the original paper – a clear conflict of interest.

    So yes, the system may work most of the time, but LC09 is a clear example that it doesn’t work all of the time. I’m not saying LC09 should have been rejected or wasn’t ultimately worthy of publication, but reviewers should have required major modifications before it was accepted for publication.

    Comment by Chris ODell — 10 Jan 2010 @ 11:55 AM

  82. > 77 Joe Blanchard says: 10 January 2010 at 10:47 AM
    > Eastern Canada is NOT warmer than usual.

    Joe, it is LESS COLD than usual, that’s what the map shows.
    This does not mean you feel warmer, nor that local temps aren’t cold.

    It’s the _anomaly_ from the longterm mean shown on the map.

    I wish Andy Revkin explained this every time; his DotEarth thread posters make the same mistake repeatedly.

    Comment by Hank Roberts — 10 Jan 2010 @ 12:23 PM

  83. Andrew@80 This may be taking the discussion too far afield, but if Gavin will indulge our side discussion…

    I agree that the Fisher Information is a fundamental quantity, but I am also not quite ready to give up on quantities like AIC, BIC, DIC… for the simple reason that I think overfitting is a significant problem in many analyses. In some cases in my field, under-fitting is also an issue. For example, in the example you gave, the changepoints are additional parameters, and the likelihood would have to improve exponentially to justify the added complexity. I don’t see how you get the same Occam’s-razor effect just with Fisher Information (I’ll admit this could be due to the fact that I’m just a dumb physicist). One thing I have noticed is that for a “good” model the decrease in log-likelihood is less than linear as you add data, while for a “bad” model, it doesn’t improve or may even worsen.
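
The AIC trade-off described above can be sketched with a toy fit (my own construction, not from the comment: a straight line plus noise compared against a needlessly high-order polynomial; the 2k penalty means the maximized log-likelihood must rise by more than 1 per extra parameter to pay its way):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.3, x.size)  # truth: straight line plus noise

def gaussian_aic(k, rss, n):
    # Least-squares AIC with unknown noise variance, up to an additive constant.
    return n * np.log(rss / n) + 2 * k

for deg in (1, 5):  # parsimonious fit vs. an overfitted polynomial
    coeffs = np.polyfit(x, y, deg)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    print(deg, round(gaussian_aic(deg + 1, rss, x.size), 2))
```

The higher-degree fit always lowers the residual sum of squares, but the 2k term charges it for the extra parameters.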

    BTW, speaking of overfitting, have you seen this wonderful example:

    http://blogs.discovermagazine.com/cosmicvariance/2007/07/13/the-best-curve-fitting-ever/

    Comment by Ray Ladbury — 10 Jan 2010 @ 1:15 PM

  84. @HH, Ray Ladbury: “70.HH says, “Any process engineer will tell you what the end result is in a system that has only positive feedback components, process ending in the extreme temperature.”

    Actually, only the ones who haven’t studied infinite series would tell you that. As long as each term in the series decreases geometrically, the series converges.”
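
The convergence point is just the geometric series: if each successive round of feedback multiplies the previous increment by a gain g with |g| < 1, the total response is finite. A toy check (g = 0.5 is an arbitrary illustrative value, not a climate number):

```python
# Initial warming dT0, each feedback round adding g times the previous increment.
dT0, g = 1.0, 0.5
partial = sum(dT0 * g ** n for n in range(200))  # converges, no runaway
closed_form = dT0 / (1.0 - g)                    # geometric-series limit: 2.0
print(partial, closed_form)
```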

    HH is thinking of feedback in the context of systems, which is a bit different than a series of positive terms.

    What HH has in mind is that the linear system of differential equations y’ = A y is unstable if A is not identically zero and none of the elements of the matrix A are negative. This is an elementary consequence of Perron-Frobenius theory, which provides that such A have a positive eigenvalue. It follows that nonlinear equilibria or periodic solutions with such linearizations are unstable.
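
A minimal numerical illustration of that claim (the matrix entries are arbitrary nonnegative values chosen for the example, not physical feedbacks):

```python
import numpy as np

# y' = A y with A nonnegative and not identically zero: Perron-Frobenius theory
# gives a real eigenvalue equal to the spectral radius, positive here, so the
# equilibrium y = 0 is unstable (exponential growth along that eigenvector).
A = np.array([[0.10, 0.30],
              [0.20, 0.05]])
eigvals = np.linalg.eigvals(A)
print(max(eigvals.real) > 0)  # True
```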

    Since about the time of Budyko and Sellers, it has been useful to view climate as a nonlinear system of equations (I have in mind works of Ghil and others – see North et al., http://ams.allenpress.com/archive/1520-0469/36/2/pdf/i1520-0469-36-2-255.pdf).

    In this sort of picture, you expect equilibria to be characterized by their linearizations, hence requiring some sort of negative feedback to be stable; the most important such feedback (radiation into space) already having been mentioned.

    It is necessary in such a model, in the presence of both positive and negative feedbacks to attempt to assess the stability of a purported equilibrium. However one should not be too concerned with this sort of “phase portrait” since even in the case of a known stable equilibrium the issue is whether the climate will remain in a happily habitable region – not whether it will eventually return to a habitable region after leaving it.

    Comment by Andrew — 10 Jan 2010 @ 1:19 PM

  85. Hank Roberts,

    I don’t care much for the map. I know that here in Montreal it is NOT less cold – I checked the history myself. That map is not reliable (the MET office admitted it but they have not corrected it).

    Comment by Joe Blanchard — 10 Jan 2010 @ 2:05 PM

  86. “In this sort of picture, you expect equilibria to be characterized by their linearizations, hence requiring some sort of negative feedback to be stable”

    And this is not the picture that represents climate feedbacks, Andrew.

    Sorry, but there it is.

    Comment by Completely Fed Up — 10 Jan 2010 @ 2:29 PM

  87. Comments that are flagged as spam get deleted automatically? Now there’s some blog functionality. :(

    Comment by Steve Bloom — 10 Jan 2010 @ 2:37 PM

  88. @Ray Ladbury: “84.Andrew@80 This may be taking the discussion too far afield, but if Gavin will indulge our side discussion…

    I agree that the Fisher Information is a fundamental quantity, but I am also not quite ready to give up on quantities like AIC, BIC, DIC… for the simple reason that I think overfitting is a significant problem in many analyses. In some cases in my field, under-fitting is also an issue. For example, in the example you gave, the changepoints are additional parameters, and the likelihood would have to improve exponentially to justify the added complexity. I don’t see how you get the same Occam’s-razor effect just with Fisher Information (I’ll admit this could be due to the fact that I’m just a dumb physicist). One thing I have noticed is that for a “good” model the decrease in log-likelihood is less than linear as you add data, while for a “bad” model, it doesn’t improve or may even worsen.

    BTW, speaking of overfitting, have you seen this wonderful example:

    http://blogs.discovermagazine.com/cosmicvariance/2007/07/13/the-best-curve-fitting-ever/

    We certainly agree on the importance of knowing how good one’s fit is, and whether it is due to chance. However under- or over- fitting are really only approximate ideas of model quality.

    One problem with the various xIC ideas is that they come from a parametric model of the likelihood which doesn’t accommodate a lot of things. For example, how many parameters does an SVM have? Or a regression tree?

    The Occam’s-razor effect of all those xIC-type information criteria comes from the Fisher information in the first place – in some sense the appropriate “dimensionality” for a model that will be estimated by conventional means is the number of eigenvalues of the Fisher information which are positive. However, when you do not know the true parameter of a system, you are using an estimate of the Fisher information, and so you are up against the question of the number of positive eigenvalues of the Fisher information at some other, hopefully nearby point. The various xIC ideas all correspond to this sort of inference. It’s one way the Fisher information contributes to model assessment.

    However it can be shown that if you want to estimate the true parameter, then the parameterization of your model itself matters. The Fisher information provides a Riemannian metric on the manifold of parameters. On this Riemannian manifold, it turns out that the kernel of the heat equation (using the Laplace-Beltrami operator from the Riemannian metric given by the Fisher information) provides a family of “reference” priors which are known in advance to outperform lots of other forms of estimation, especially in the case of a high-dimensional model with lots of noise. These “superefficient” estimators will beat all the unbiased forms of estimation (which are limited to being merely efficient). As a result, xIC-type model selection, appropriate to efficient estimators, will choose a much lower-dimensional model than optimal. The superharmonic estimators “know” lots of stuff about the geometry of the whole manifold; your xIC stuff only knows about the geometry of the manifold in the neighborhood of your estimate of the parameter. You can think of the superharmonic estimator as being able to “see over the horizon” of a bumpy likelihood landscape – extremely powerful stuff.

    This sort of stuff is starting to hit the open literature (see list below) – the Japanese school of information geometry is hot on this sort of trail.

    Here’s the problem. There isn’t just one “best” superefficient estimator. Consistent with “Stein’s Paradox” – the granddaddy of all such “shrinkage” estimators – if you have one such estimator for a model, you have infinitely many and no objective way to prefer one over the others. So you pick your point or set of superefficiency, um, because it’s your favorite color, I don’t know. You can be quite confident of beating the more sensible-appearing lower-complexity efficient estimators, and it is in any objective sense the sort of estimation you should be doing if you really want the best possible answer. But the hang-up is trying to explain why policy makers should prefer your estimate over worse-performing estimators which can at least be pretended to be objective. Ask the policy makers their favorite color when picking the estimator? There’s a reason why statisticians spent the last 50 years sweeping superefficient estimation under the rug. They got away with it for a long time because superefficient estimation does best when you have a very high complexity model and not as much data to fit it as you would like.
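
The “Stein’s Paradox” shrinkage idea mentioned above has a compact demonstration: in dimension three or more, the James–Stein estimator has lower expected squared error than the raw observations for any true mean. A sketch shrinking toward the origin (the dimension, seed, and zero true mean are arbitrary choices for the illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.zeros(10)            # true mean vector, dimension >= 3
x = rng.normal(theta, 1.0)      # one noisy observation per coordinate

# Positive-part James-Stein estimator, shrinking toward the origin.
s = float(np.dot(x, x))
shrink = max(0.0, 1.0 - (x.size - 2) / s)
js = shrink * x

mle_err = float(np.sum((x - theta) ** 2))
js_err = float(np.sum((js - theta) ** 2))
print(js_err < mle_err)  # True for this draw; true on average for any theta
```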

    I have seen stuff like that Laffer curve fit. As a long time practitioner of finance, I am a confirmed economics skeptic, if not an outright economic denialist.

    Useful references:

    Amari, S. (1987). Differential geometry of a parametric family of invertible linear systems – Riemannian metric, dual affine connections, and divergence. Mathematical Systems Theory 20, 53–82.

    Amari, S. and Nagaoka, H. (2000). Methods of Information Geometry. American Mathematical Society / Oxford University Press.

    Kass, R. and Vos, P. (1997). Geometric Foundations of Asymptotic Inference. Wiley-Interscience.

    Komaki, F. (2006). Shrinkage priors for Bayesian prediction. Annals of Statistics. (http://arxiv.org/pdf/math/0607021)

    Tanaka, F. and Komaki, F. Asymptotic expansion of the risk difference of the Bayesian spectral density in the ARMA model. (http://arxiv.org/abs/math/0510558)

    Ghosh, M., Mergel, V. and Datta, G. S. Estimation, prediction and the Stein phenomenon under divergence loss. (http://dx.doi.org/10.1016/j.jmva.2008.02.002)

    Comment by Andrew — 10 Jan 2010 @ 2:39 PM

  89. @CFU: “And this is not the picture that represents climate feedbacks, Andrew.”

    I guess the link to North et al. was broken?

    http://authors.library.caltech.edu/11329/1/NORjas79.pdf?

    Comment by Andrew — 10 Jan 2010 @ 2:45 PM

  90. Joe Blanchard writes:
    > “(the MET office admitted it but they have not corrected it).”

    Where did you read that, Joe? I can’t find a source for your claim.

    I hope you’re not misstating this guy’s blog column comments:
    http://www.bbc.co.uk/blogs/paulhudson/2010/01/a-frozen-britain-turns-the-hea.shtml

    Comment by Hank Roberts — 10 Jan 2010 @ 2:46 PM

  91. Gavin, just wanted to give you guys a 2 thumbs up for this site. This subject alone is worth the cost of admission.

    Being a lay person, it is very educational to see how science actually works. I think most of us lay people have pictures in our minds of overstuffed chairs, cigars and brandy in the profs’ lounge! Absolutely intriguing!

    Thanx for allowing us into your world.

    Comment by Leo G — 10 Jan 2010 @ 2:57 PM

  92. Several people have commented with the opinion that the peer review process may have been flawed in this case. This may be so, but as a layman I would like to offer my opinion on what peer review should be, at least from my perspective.

    It seems to me that people are saying that papers that are wrong should be weeded out at the review level, but personally I think this is wrong unless the reviewer can show clear evidence of intentional ‘errors’. The contradiction of a paper should happen at the published level rather than the pre-published level if the paper in question is not obviously intentionally wrong.

    I say this because doing it at the review level keeps it hidden from us laymen. We don’t see the contradiction to the paper, all we hear about is “they refused to publish it”.

    Basically a concept from law is appropriate here: “Justice should not only be done, but be SEEN to be done.” Papers prevented from even being published are like secret trials. We (meaning us laymen) have to take both sides at their word rather than see the claim counter-claim process out in the open.

    Wrong is not bad… fraudulent is. Being wrong can still educate the layman on the process by which science moves forward, and can help people like me to see that the science IS being done fairly and completely.

    My argument would go both ways – if this paper was allowed through, but any contradictory papers weren’t then yes, ‘peer review’ would have failed because it would have been used to push one side or the other.

    Peer review should be to ensure that the published paper is as good as it can be… not to decide whether it is publishable at all (except as I said in extreme cases). If Lindzen and Choi want to stake their reputations on a paper, they should be allowed to, even if it is wrong. Let the follow up papers show why and how they are wrong, and no one can complain that they were treated unfairly.

    Basically, peer review should be there to allow the authors of a paper to see the kinds of arguments that will be made against the paper, and to thus modify the paper pre-publication to address those kinds of criticism. If the authors wish to go ahead with a paper that the reviewers feel is flawed, then that is their risk to take.

    I guess what I am saying is that any other way of working would put the journals in the position of being the gatekeeper, deciding what is and isn’t “science”, and giving them the ability to bias the field one way or another, even if unintentional. The journals should be the formal discussion forum, not the final word on what is or is not correct – they are there to keep the discussion formal, not to decide what should be discussed.

    Comment by Simon Rika — 10 Jan 2010 @ 2:58 PM

  93. Oh, I see. Sorry for the digression, folks.
    Joe’s looking at today’s temperatures in Montreal, comparing them to
    http://www.metoffice.gov.uk/corporate/pressoffice/2010/images/20100106b-chart.jpg (26 December through January 1st).
    This may help:
    http://www.weather.com/outlook/events/weddings/monthly/CAXX0301?from=36hr_topnav_wedding
    – 23F Jan. 1st in Montreal;
    – 3F yesterday in Montreal.
    That’s the weather. Now back to your climate discussion, I hope.

    Comment by Hank Roberts — 10 Jan 2010 @ 3:03 PM

  94. Good morning, cervantes: it’s winter.

    Record high temperatures beat out record lows by two to one over the past decade.

    Comment by Barton Paul Levenson — 10 Jan 2010 @ 3:17 PM

  95. Comment by cervantes — 10 January 2010 @ 9:52 AM

    It’s important not to dismiss Cervantes’ objection too lightly.

    – Average people are members of the electorate. To a greater or lesser extent, policymakers are responsive to the electorate.

    – If the average person is unable to reason out something so basic as the difference between weather and climate, it is quite unlikely they’ll be able to follow the science behind climate change.

    – For the specific case cited by Cervantes, if the average person is not helped to reason out why today’s weather is an unreliable indicator of the future, the average person is not going to be able send a signal of concern to policymakers.

    – As Cervantes indicates, with such a poorly prepared electorate, the policy response to climate change will be severely slowed.

    No surprise to denizens of RC, but there is an excellent site with friendly and comprehensible explanations of virtually all of the misunderstandings encountered by the average person with regard to climate science.

    Here’s how that site explains how to sort out confusion over weather versus climate:

    http://www.skepticalscience.com/global-warming-cold-weather.htm

    Comment by Doug Bostrom — 10 Jan 2010 @ 3:57 PM

  96. I should add with regard to Cervantes’ remarks, what little actual scientific research (as opposed to opinion polls) has been performed on public understanding of climate science indicates that the public (in the U.S. at least) has been dithering around a fairly poor level of understanding for the past 15 years.

    Public thinking about climate is actually surprisingly good, considering the firehose of deception directed into the ears of John Q. Citizen, but is not up to delivering a useful message to policy makers.

    Beyond reactive battling against malicious PR campaigns there’s a huge job of remedial education waiting to be done here. Overcoming susceptibility to misleading PR requires shovel work at a basic level, and more than shovels it needs patience.

    Comment by Doug Bostrom — 10 Jan 2010 @ 4:13 PM

  97. @Simon Rika: “It seems to me that people are saying that papers that are wrong should be weeded out at the review level, but personally I think this is wrong unless the reviewer can show clear evidence of intentional ‘errors’.”

    Oh my no. You want the referees to catch as many of your unintentional errors as possible to save you the difficulty of having them immortalized in print.

    Comment by Andrew — 10 Jan 2010 @ 4:43 PM

  98. #93 Hank and Joe

    This should be under the Unforced Variations thread, but it came up here, so…

    Hank, the anomaly map you cite is for Dec 26-Jan 1. The current cold snap started after that.

    This cold snap is indeed unusual – at least in eastern North America. It’s *significantly* colder than normal in most of the US east of the Rockies. The cold reaching down into the SE United States is particularly unusual – not just in terms of the temps reached, but more so in the duration of the cold. It is very strange – especially in an El Nino year when we were expecting a mild winter.

    There is a clear cause for the strange weather pattern: a very strong negative Arctic Oscillation. This negative AO is NOT NATURAL: it is an effect of severe warming in the Arctic.

    From http://www.wunderground.com/blog/JeffMasters/comment.html?entrynum=1398

    “A new atmospheric pattern emerges: the Arctic Dipole
    In a 2008 article titled, Recent radical shifts of atmospheric circulations and rapid changes in Arctic climate system Zhang et al. show that the extreme loss of Arctic sea ice since 2001 has been accompanied by a radical shift of the Arctic atmospheric circulation patterns, into a new mode they call the Arctic Rapid change Pattern. The new atmospheric circulation pattern has also been recognized by other researchers, who refer to it as the Arctic Dipole (Richter-Menge et al., 2009). The old atmospheric patterns that controlled Arctic weather–the North Atlantic Oscillation (NAO) and Arctic Oscillation (AO), which featured air flow that tended to circle the pole, now alternate with the new Arctic Dipole pattern. The Arctic Dipole pattern features anomalous high pressure on the North American side of the Arctic, and low pressure on the Eurasian side. This results in winds blowing more from south to north, increasing transport of heat into the central Arctic Ocean.”

    While these reports concerned earlier episodes of the Arctic Dipole pattern, the same appears to be occurring now. The high pressure over Greenland is forcing cold arctic air southward and causing the unusually cold weather in eastern North America.

    So, ironically, the unusually cold weather we are experiencing is most likely an effect of global warming! (I’m just a lay person trying to connect the dots. I’m sure that I’ve misunderstood more than one thing along the way.)

    [Response: Don't get carried away with pop attributions. The AO has a very strong random component, even while there is some evidence that its pdf can be shifted by increasing GHGs, volcanoes, solar etc. The expected tendency as CO2 increases is towards slightly more positive phases (Miller et al., 2006), with a similar tendency associated with volcanic effects ('winter warming') and long-term solar. You could make a vague (and I think weak) argument that the current phase of the solar cycle could give a slight tendency towards a negative AO, but the magnitude of any forced tendency is much, much smaller than what we've seen so far this winter. AFAIK, there is no evidence or model study that we expect greater variance in the AO as a result of any of these forcings. - gavin]

    Comment by Jiminmpls — 10 Jan 2010 @ 5:11 PM

  99. Hypothetically, let’s suppose GRL picked three reviewers; Spence Royer, a well known AGW skeptic and flat earth creationist, Lad Raybury, a middle of the road physick who accepts the mainstream view of AGW but is aware of its shortcomings, and Lord Blowhawk, a warmist who spends half his time running arcane incomprehensible models and the other half blogging about how the sky is falling. Whatever their sociopolitical bent, they have the math/physics/science chops to understand the fundamentals of LC09 and make a reasonable assessment of its merit.

    Spence’s initial reaction is “Aha, this will put another nail in the AGW coffin”, but he takes his job as a reviewer seriously, and finds a few flaws or weaknesses, suggests some improvements, and recommends publication.

    Lad’s initial reaction is “this isn’t even wrong – they are obviously unaware of FG06”, but he takes his job as a reviewer seriously, and finds a few flaws or weaknesses, suggests some improvements, and although he doubts this paper will significantly advance the science, maybe some young Turk thinking about how FG06 and LC09 differ will make a breakthrough, so he recommends publication.

    Lord Blowhawk’s reaction is “more denialist propaganda disguised as Real Science”, but he takes his job as a reviewer seriously, and finds a few flaws or weaknesses, suggests some improvements, makes notes of not-so-obvious flaws where it can be attacked after publication, hopes that maybe the embarrassment of a crap paper will quiet some of the denialist camp and help influence policy makers, and recommends publication.

    Which is my take on why unimportant (and I’m specifically avoiding “bad” or “error ridden”, as these are usually post facto judgments) papers, “useful fools” of publication, make it into print.
    (I suppose I should include the usual “None of the people or events depicted in this scenario are actual events or depictions of real people. It is a fictional account intended only for illustrative purposes, and any names similar to real world people are coincidental and used here for the edification of readers, without sarcasm or intent for ridicule” disclaimer &;>)

    Comment by Brian Dodge — 10 Jan 2010 @ 6:13 PM

  100. Re#71 Gavin’s Response

    I read Revkin’s Dot Earth blog and was surprised by the statement that rain was falling in Greenland in winter. I then did a Google search and found this article -

    Rain speeds Antarctic Peninsula glacier melt – http://www.reuters.com/article/idUSTRE50F35D20090116

    This got me curious about the effects of rain on the glaciers and if more rain led to more and quicker melting. And if this was considered in the glacier melt models. I found the following articles:

    Reducing the uncertainty in the contribution of Greenland to sea-level rise in 20th and 21st centuries by Bugnion (2000) – http://www.uas.alaska.edu/envs/publications/pubs/Motyka_et_al.pdf
    This provides MIT rain versus snow modeling to calculate runoff (such as ice melts faster than snow, bare ice will not freeze rain until the next snow fall). The paper concludes “The changes in sea level estimated by all three models for the 20th and 21st centuries cannot be distinguished from zero at any confidence level.” This does not seem to support the Reuters article but it is limited in scope to Greenland.

    Submarine melting at the terminus of a temperate tidewater glacier, LeConte Glacier, Alaska, U.S.A. by Moytyka et al (2003) – http://www.uas.alaska.edu/envs/publications/pubs/Motyka_et_al.pdf
    This paper does state that melt rates are highest in late summer and after heavy rain. And “However, it is likely that submarine melting does contribute directly and indirectly to both short- and long-term changes in terminus position. If so, we suggest that prolonged periods of exceptionally heavy rain, coupled with warm fjord water temperatures, could trigger terminus destabilization of a tidewater glacier.We note that LeConte Glacier began its retreat in fall of 1994, after a long period of exceptionally heavy rain.”

    Greenland Ice Sheet Surface Mass Balance Variability (1988–2004) from Calibrated Polar MM5 Output by Jason E. Box et al (2006) – http://polarmet.osu.edu/jbox/pubs/Box_et_al_J_Climate_2006.pdf
    The paper only mentions it as a “liquid water lubrication of ice sheet flow, as suggested by Zwally et al. (2002).” It does not say anything about the actual contribution to the melting of the ice by rain.

    Elimination of the Greenland Ice Sheet in a High CO2 Climate by Ridley et al (2005) – http://epic.awi.de/Publications/Rid2005a.pdf
    Describes saturation of liquid water in the snowpack.

    The Dynamic Response of the Greenland and Antarctic Ice Sheets to Multiple-Century Climatic Warming by Huybrechts, and de Wolde (1999) – http://ams.allenpress.com/perlserv/?request=get-document&doi=10.1175%2F1520-0442%281999%29012%3C2169%3ATDROTG%3E2.0.CO%3B2
    It gives a good description of rain versus snow in the models, but does not seem to include anything about rain increasing the ice melt. It may, but I didn’t understand the equations.
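    The rain-versus-snow partitioning these models describe typically splits precipitation by air temperature; a minimal sketch (the thresholds are illustrative guesses of mine, not values from Huybrechts and de Wolde):

```python
def snow_fraction(t_celsius, t_all_snow=0.0, t_all_rain=2.0):
    """Fraction of precipitation falling as snow at air temperature t.

    Below t_all_snow everything falls as snow, above t_all_rain
    everything falls as rain, with a linear ramp in between (a common
    scheme in mass-balance models; thresholds here are illustrative).
    """
    if t_celsius <= t_all_snow:
        return 1.0
    if t_celsius >= t_all_rain:
        return 0.0
    # linear ramp between the all-snow and all-rain thresholds
    return (t_all_rain - t_celsius) / (t_all_rain - t_all_snow)

# At -5 C all precipitation is snow; at +1 C it is an even split:
print(snow_fraction(-5.0), snow_fraction(1.0))
```

    A hard cutoff at 0 °C is the simplest variant; the ramp just acknowledges that mixed rain/snow is common near freezing.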

    Greenland and Antarctic Mass Balances for Present and Doubled Atmospheric CO2 from the GENESIS Version-2 Global Climate Model by Starley L. Thompson and David Pollard (1997) – http://ams.allenpress.com/perlserv/?issn=1520-0442&issue=05&page=0871&request=get-document&volume=010&ct=1
    Contains equations for refreezing and meltwater corrections for models.

    Greenland’s climate: A rising tide by Quirin Schiermeier (2004) – http://www.nature.com/nature/journal/v428/n6979/full/428114a.html
    It’s behind Nature’s pay wall. Google listed this quote: “Less snowfall and more rain would cause the ice to disappear at a faster rate than …” which supports the statement in the Reuters article.

    Modelling Changes in Glacier Mass Balance That May Occur as a Result of Climate Changes, by Roger J. Braithwaite and Yu Zhang © 1999 Swedish Society for Anthropology and Geography. – http://www.jstor.org/pss/521488.
    Another pay wall. Google quote: “Simulated snow melt and rain being insignificant in amount over the Antarctic ice sheet”

    After I finished searching I went back to Revkin’s blog and then to the blog where he first mentioned rain in Greenland. I followed the link to his source of the data and found the following message in a box at the top:

    From Weather-Forecast.com – http://www.weather-forecast.com/locations/uummannaq20/forecasts/latest -
    Alert: We recently moved our weather-forecasts to a new machine. An incompatibility with our existing code has resulted in equivalent rain being forecast rather than snow, regardless of whether the temperature is close to or below zero. Update 22:30 GMT 7-Jan-2009: This bug has been fixed. We apologize for the error.

    So I guess Revkin got some bad info about it raining in Greenland.

    Re #40 – Google time plus searching and reading papers then writing this = 2 hours+, not 30 seconds. I know it would have been quicker to just post a question, but going through the papers was rewarding and something to do on a very cold Sunday afternoon.

    If anyone else knows of some good papers not behind pay walls please let me know. I watched 22 inches of snow melt to a few inches over a cold (just above freezing), wet Christmas day last month and I am curious about any acceleration of the glaciers and ice packs that rain can cause.

    Thanks

    Comment by Jason Miller — 10 Jan 2010 @ 7:13 PM

  101. Comment by Simon Rika — 10 January 2010 @ 2:58 PM

    Simon–you are just wrong in suggesting that journals should not be gate keepers. Many times I have recommended that a manuscript be rejected for this journal (x), but it would make a good paper for journal y. Everyone knows that journal y is more prestigious and that the quality of manuscripts received is very high, but 80% are rejected. The biggest reason for rejecting manuscripts has nothing to do with mistakes or errors. Rather, the manuscript just does not reach the level of quality and impact required to be published in journal y. These lower quality but “valid” studies can be published in lower tier journals.

    Comment by Bill DeMott — 10 Jan 2010 @ 7:58 PM

  102. Sorry about the confusion in my comment above. My intent was to say that a manuscript rejected by a more prestigious journal can make a good paper for a lower tier and often more specialized journal. Every time a scientist starts writing a manuscript he or she needs to decide the target journal. This is in part a matter of strategy. “Should I aim for the more prestigious journal or just be satisfied with the less competitive journal?”

    Comment by Bill DeMott — 10 Jan 2010 @ 9:06 PM

  103. #98 Jiminmpls:

    Your statement about it being “*significantly* colder than normal” may be true for the southeast & south central U.S., but I don’t think it is true in the Northeast. At least here in Rochester, NY, I would say that the departure is not particularly dramatic. Yes, it’s certainly been colder than average for the last couple of weeks, but not dramatically so. In fact, I would venture to say that a year when we did not have a period this cold sometime during the winter would be anomalous.

    Comment by Joel Shore — 10 Jan 2010 @ 9:55 PM

  104. Jason Miller wrote in 100:

    If anyone else knows of some good papers not behind pay walls please let me know. I watched 22 inches of snow melt to a few inches over a cold (just above freezing), wet Christmas day last month and I am curious about any acceleration of the glaciers and ice packs that rain can cause.

    I would recommend doing a search in http://scholar.google.com for glaciers rain acceleration modeling pdf and you will probably find papers that fit the bill. Maybe not the ones that you are specifically looking for, but…

    One paper that seems like it may address a few of your questions is the following:

    A physically based approach to compute melt involves the assessment of the energy fluxes to and from the surface. At a surface temperature of 0C, any surplus of energy at the surface-air interface is assumed to be used immediately for melting. The energy balance in terms of its components is expressed as:

    QN+QH+QL+QG+QR+QM = 0 (1)

    where QN is net radiation, QH is the sensible heat flux, QL is the latent heat flux (QH and QL are referred to as turbulent heat fluxes), QG is the ground heat flux, i.e., the change in heat of a vertical column from the surface to the depth at which vertical heat transfer is negligible, QR is the sensible heat flux supplied by rain and QM is the energy consumed by melt. As commonly defined in glaciology, a positive sign indicates an energy gain to the surface, a negative sign an energy loss. Melt rates, M are then computed from the available energy by:

    M = QM/(ρw*Lf) (2)

    where ρw denotes the density of water and Lf the latent heat of fusion. Energy-balance models fall into two categories: point studies and distributed models.

    pg. 366, Regine Hock (2005), Glacier melt: a review of processes and their modelling, Progress in Physical Geography 29, 3, pp. 362–391 (60 citations)
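    Equations (1) and (2) can be combined directly; a minimal sketch (the flux values in the example are invented for illustration, while the water density and latent heat of fusion are standard constants):

```python
RHO_W = 1000.0   # density of water (rho_w), kg m^-3
L_F = 3.34e5     # latent heat of fusion (L_f), J kg^-1

def melt_rate(qn, qh, ql, qg, qr):
    """Melt rate M in metres of water equivalent per second.

    Inputs are the fluxes of equation (1) in W m^-2, with the
    glaciological sign convention: positive = energy gain to the
    surface. Closing the balance gives QM = -(QN+QH+QL+QG+QR); a
    negative QM means energy is being consumed by melt, and equation
    (2) converts that surplus into a melt rate.
    """
    qm = -(qn + qh + ql + qg + qr)   # energy consumed by melt
    surplus = max(0.0, -qm)          # no melt if the balance is a deficit
    return surplus / (RHO_W * L_F)

# Illustrative mid-summer fluxes: 150 net radiation, 40 sensible,
# -10 latent, 0 ground, 5 from rain -> 185 W m^-2 available for melt,
# roughly 5 cm of water equivalent per day.
print(round(melt_rate(150.0, 40.0, -10.0, 0.0, 5.0) * 86400, 3))
```

    Note how small the QR (rain) term is next to net radiation in this example, which fits the papers above: rain matters mostly through warmth, lubrication, and destabilization rather than direct melt energy.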

    Hope this helps…

    Comment by Timothy Chase — 10 Jan 2010 @ 10:21 PM

  105. PS RE Jason Miller

    One search term I have often found useful at times is “review”. At least in evolutionary biology it isn’t uncommon for a review to contain references to over a hundred papers, and typically a good review will be just that — a review of the current state of investigation into its subject, fairly in depth yet written for the nonspecialist. If you are looking for answers to some fairly basic questions, getting too many results but each too narrow in their focus to deal with those questions, “review” may get you what you want.

    Comment by Timothy Chase — 10 Jan 2010 @ 10:30 PM

  106. Joel Shore says: 10 January 2010 at 9:55 PM

    #98 Jiminmpls:

    “Your statement about it being “*significantly* colder than normal” may be true for the southeast & south central U.S., but I don’t think it is true in the Northeast.”

    Meanwhile here in Western Washington we’ve been setting some record highs the past few days. Last month, record lows. Global warming, disproved and proved again, within a month, uh-huh. Watt’s up with that?

    Comment by Doug Bostrom — 10 Jan 2010 @ 11:50 PM

  107. Personally, I think some science publishing has not caught up with the Internet.

    1) “In print” in a scholarly journal used to have a specific meaning, and there were fairly long lags from printing, to getting responses, and printing them.

    2) I’m really not exactly sure what “in print” really means these days.
    After all, a journal issue can easily contain anything from articles subjected to ferocious review to those subjected to none (G&T for example, seemingly). This is independent of whether something is read on-line or actually received in a bound journal.

    3) While this would certainly take work, I would really wish that journals:

    a) Do whatever they do now, clearly labeling content in some way w.r.t. the level of review. Some are certainly clear about this, others not.

    b) But this business of letters/articles, etc. seems unnecessarily broken in the Internet age. I know it’s more work, but I’d sure love to see a blog thread provided for each such article, open for a while following publication, and tightly moderated, to encourage substantive comments by experts.

    c) As it is, if a non-expert wants to ask:

    “What was the reaction to this paper?” it can take a lot of work to find:
    “This paper was refuted fairly quickly.” because that fact may well be scattered all over the place. Immediate, simple comments like “reference XYZ wasn’t mentioned and comes to opposite results” ought to be easy.

    It might *never* be refuted by a real paper, if it’s not worth the bother. There is no simple way to go from a paper directly to proposed refutations, although you can look up citations, and see what they say, if you have access to those journals. I’ve sometimes tried this. It wastes a lot of time.

    Anyway, it would be awfully nice if credible quick feedback could easily be directly associated with an article.

    Comment by John Mashey — 11 Jan 2010 @ 12:23 AM

  108. Andrew@88, I’ve just started to look at Information Geometry. I can already see that it would be quite powerful. However, I work in a rather applied field–I got in trouble with one reviewer for even using AIC in a recent paper. He said that it must be an obscure technique because he’d never heard of it! I suggested an alternative explanation and the paper was published.

    I suspect I would be lynched if I tried to bring non-Euclidean geometry into the mix.

    Comment by Ray Ladbury — 11 Jan 2010 @ 6:04 AM

  109. @Andrew #97

    I didn’t mean they shouldn’t catch errors. I meant they should not use such errors to refuse to publish unless they are clearly intentional. An intentional error would be one the authors refuse to correct or to explain why their way is better/correct. But if they do have an explanation for why their way should be considered correct, then, if they still want it published, the paper should be published with a notification that the reviewers have raised concerns and the authors have addressed them.

    I know I’m probably just confusing you even more – I guess I can’t explain myself properly, and this is off topic. Sorry.

    @Bill DeMott #101

    I’m not sure what you described is much different from what I proposed, except perhaps in wording. Being published in a different journal is not the same as being refused publication, to me anyway. I’m simply saying if the journal wants to publish it, and the authors address any issues raised by the reviewers, then they have done their job, and it is up to other scientists to refute or ignore it. I am not saying in this particular case the reviewers did a good job of raising these issues or the authors addressed them.

    I just noticed that some people seemed to be saying that even if the paper has raised valid questions, if there were any errors it should be refused publication, regardless of any benefit it might have.

    In this particular case, I got the impression that the journal in question was one of those lower tier journals that publishes just about anything. Was I wrong in that impression?

    Comment by Simon Rika — 11 Jan 2010 @ 6:12 AM

  110. Simon: “I didn’t mean they shouldn’t catch errors. I meant they should not use such errors to refuse to publish unless they are clearly intentional.”

    Or if the errors are bad enough that pursuing the paper requires a new paper WITHOUT the errors to allow at least some sort of progress.

    If there are too many errors, you can’t use the paper at all. E.g. if you were a 17 year old high school student writing a paper on string theory, there are likely to be a LOT of unintentional errors, and anyone wanting to use that paper for anything will have to repeat a lot of work finding out what is an error and what is arguable.

    Comment by Completely Fed Up — 11 Jan 2010 @ 9:30 AM

  111. John Mashey, I think that part of the problem IS the Internet age. After all, substandard work has always found its way into obscure journals. In the past, it would moulder there and generate the lack of attention it so richly deserves. Now, there is no journal so obscure (the Quarterly Journal of the Hungarian Meteorological Service, ferchrissake!?!) that a substandard paper can’t go viral (or virial in Miskolczi’s case;-) ) among the gullible.

    In the case of LC’09, they did at least publish in a mainstream journal, available to the community. I just wish the peer review could have been better so that it could have been either definitively rejected, or the authors could have taken their best shot and stood or fell (most likely, fell) on the merits of their argument.

    You have to admit that the process plays out more efficiently in the peer-reviewed literature than in the blogosphere.

    Comment by Ray Ladbury — 11 Jan 2010 @ 9:59 AM

  112. #106 Doug, Gavin and Hank,

    Since when are GRL and NSIDC “pop” news sources? If you don’t recognize this cold as an extreme weather event, you’re not paying attention. Occasional nights that dip below freezing are not unusual in southern FL. A week-long freeze is pretty much unprecedented.

    So, what is causing this extreme cold weather event?

    From the NSIDC:

    “The phase of the AO is described in terms of an index value. In December 2009 the AO index value was -3.41, the most negative value since at least 1950, according to data from the NOAA Climate Prediction Center…….The negative and positive phases of the AO set up opposing temperature patterns. With the AO in its negative phase this season, the Arctic is warmer than average, while parts of the middle latitudes are colder than normal. The phase of the AO also affects patterns of precipitation, especially over Europe. ”

    From http://www.agu.org/pubs/crossref/2008/2008GL035607.shtml

    “Recent radical shifts of atmospheric circulations and rapid changes in Arctic climate system”

    *Extreme* negative phase and *radical shifts* in the AO don’t sound like business as usual to me.

    And no, the climate models don’t predict this change in the AO. They didn’t predict the rapid decline in Arctic sea ice observed since 2005, either. And warm temps in Washington State are consistent with the Dipole pattern.

    References
    Francis, J.A., W. Chan, D.J. Leathers, J.R. Miller, and D.E. Veron, 2009, “Winter Northern Hemisphere weather patterns remember summer Arctic sea-ice extent”, Geophysical Research Letters, 36, L07503, doi:10.1029/2009GL037274.

    Honda, M., J. Inoue, and S. Yamane, 2009. Influence of low Arctic sea – ice minima on anomalously cold Eurasian winters, Geophys. Res. Lett., 36, L08707, doi:10.1029/2008GL037079.

    Overland, J. E., M. Wang, and S. Salo, 2008: The recent Arctic warm period. Tellus, 60A, 589–597.

    Richter-Menge, J., and J.E. Overland, Eds., 2009: Arctic Report Card 2009, http://www.arctic.noaa.gov/reportcard.

    Simmonds, I., and K. Keay (2009), Extraordinary September Arctic sea ice reductions and their relationships with storm behavior over 1979–2008, Geophys. Res. Lett., 36, L19715, doi:10.1029/2009GL039810.

    Wu, B., J. Wang, and J. E. Walsh, 2006: Dipole anomaly in the winter Arctic atmosphere and its association with sea ice motion. J. Climate, 19, 210-225.

    Zhang, X., A. Sorteberg, J. Zhang, R. Gerdes, and J. C. Comiso (2008), Recent radical shifts of atmospheric circulations and rapid changes in Arctic climate system, Geophys. Res. Lett., 35, L22701, doi:10.1029/2008GL035607.

    Comment by Jiminmpls — 12 Jan 2010 @ 7:43 AM

  113. Comment by Jiminmpls — 12 January 2010 @ 7:43 AM

    Jim, are we disagreeing? I don’t think so.

    Absolutely, it’s an extreme weather event; if you’ve got water pipes installed in your attic because that seemed ok during the short period Florida has been under development and you now have a volunteer sprinkler system dripping through the ceiling, you’re going to damply stand there saying “What the hell is all this talk about warming, for Pete’s sake?”

    There is no plan for when and where heat is going to be delivered. As I mentioned, we had a string of record cold temperatures set here in Western Washington last month, this month we’ve had some record highs. The only solid conclusion I can draw from this scanty local data is that for any location we choose we’ll find our archive of measurements to be pretty brief. Our detailed weather history for any location is paltry.

    Now widen the scope, take a snapshot of the entire globe and weather effects largely vanish. Then we see that whatever the situation in any particular bit of the world, such as your wet Barcalounger, the overall temperature signal is not so different from what it was prior to the onset of the cold snap in Florida and NE Europe. From this viewpoint we can get a datapoint that’s useful when thinking about climate.

    Keep the same global viewpoint, take snapshots through time and only then can we draw any inferences about the overall direction of Earth’s climate.

    Not so hard, really, but when you’ve got your phone in one hand calling the insurance adjuster and the other gripping the hose of a shopvac sucking water out of your carpet the big picture is not going to be uppermost in mind.

    Comment by Doug Bostrom — 12 Jan 2010 @ 12:52 PM

  114. re: 111
    Ray: I wasn’t saying the Internet made for goodness.
    I would much rather not have commentary spread all over the blogosphere. I have long been irritated at the asymmetry between articles and letters in many places. I simply observe that it would be value-add for a journal to figure out how to use the Internet to collect well-moderated material “near” an article.

    Journals already publish mixes of material, with combinations of:

    a) Time-lag
    b) Size
    c) “Credibility” (of peer-review, knowledgeable editorial board, review, etc.).

    I just think we ought to think about how to use the Internet more efficiently to promote good work. Personally, I think that publishing something, only to have serious refutations placed next to my article within a few days, might make me think harder.

    Some of what goes on now reminds me too much of MS Fnd in a Lbry. :-)

    Comment by John Mashey — 13 Jan 2010 @ 3:26 AM

  115. To be fair to Latif, he has hit back at the distortion in the Guardian:
    http://deepclimate.org/2010/01/11/mojib-latif-slams-daily-mail/
    http://www.guardian.co.uk/environment/2010/jan/11/climate-change-global-warming-mojib-latif

    More here from earlier distortions:
    http://deepclimate.org/2009/10/02/an-email-exchange-with-mojib-latif/
    http://deepclimate.org/2009/10/02/anatomy-of-a-lie-how-morano-and-gunter-spun-latif-out-of-contro/

    WUWT ran with the story as well, and offered Latif a guest post to explain his position which I see as an insulting gesture given it’s the Mail’s reporter, David Rose, who should be doing the explaining.

    Comment by JBowers — 15 Jan 2010 @ 6:49 AM

    At 51 Gavin said “The Planck long wave emission (sigma*T^4) is the dominant negative feedback. Everything else is just modifying that.”

    Infrared radiation accounts for about 42% of the heat removed from the surface, whereas about 48% of the surface heat removed is in the form of latent energy. Would this not make surface water evaporation the dominant negative feedback?

    [Response: There's no evaporation to space. - gavin]

    Of course there are both positive and negative feedbacks but the climate system has to be dominated by negative feedbacks. Turn off the sun and it would cool very quickly. Any engineer will tell you a system dominated by positive feedbacks is unstable.

    Comment by Muhammad Bear — 16 Jan 2010 @ 7:45 PM

  117. At 116 “There’s no evaporation to space. – gavin”

    Agreed. My understanding is that the process (as far as latent heat is concerned) is through convection and the heat is transferred to the troposphere once the water vapour condenses and the latent energy is released in the form of sensible heat. I wasn’t suggesting that water vapour convects to space.

    My point was merely that more heat is taken away from the surface through evaporation than through electromagnetic radiation.

    [Response: What happens at the surface is only a part of the change that determines the planetary response. More importantly, you need to think about the energy fluxes at the top of the atmosphere - which are radiative, and that is why the sigma*T^4 response is the dominant negative feedback. - gavin]

    Comment by Muhammad Bear — 17 Jan 2010 @ 12:49 AM

  118. Muhammad Bear,

    You have positive feedbacks confused with diverging feedbacks.

    Comment by Barton Paul Levenson — 17 Jan 2010 @ 7:05 AM

  119. MB: My point was merely that more heat is taken away from the surface through evaporation than through electromagnetic radiation.

    BPL: Your point is wrong. The Earth’s surface radiates about 389 watts per square meter on mean global annual average, compared to losing 80 watts per square meter to latent heat and 17 to sensible heat. That means 80% radiation, 16% evaporation.
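    The sigma*T^4 response Gavin mentioned can be made concrete in a few lines; a rough sketch using standard textbook values (Stefan–Boltzmann constant, ~288 K surface, ~255 K effective emission temperature), not figures taken from this thread:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_response(t_kelvin):
    """Extra outgoing longwave flux per degree of warming, W m^-2 K^-1.

    Differentiating F = sigma*T^4 gives dF/dT = 4*sigma*T^3: warming
    increases emission, which is what makes this a negative feedback.
    """
    return 4.0 * SIGMA * t_kelvin ** 3

# A ~288 K surface radiates sigma*T^4 ~ 390 W/m^2, close to the 389
# quoted above; at the ~255 K effective emission temperature the
# response is the canonical ~3.8 W/m^2 per degree.
print(round(SIGMA * 288.0 ** 4), round(planck_response(255.0), 2))
```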

    Comment by Barton Paul Levenson — 17 Jan 2010 @ 7:06 AM

  120. Gavin’s inline implies a different idea as to what “negative feedback” could mean.

    Not what I’d call a feedback, since it makes the questioning of negative feedbacks in climate models somewhat silly: it makes temperature itself a negative feedback.

    Comment by Completely Fed Up — 17 Jan 2010 @ 7:39 AM

  121. CFU,
    It is clearly a feedback, as the energy loss increases with increasing temperature.

    Comment by Ray Ladbury — 17 Jan 2010 @ 9:08 AM

  122. Okay. I was possibly making a really bad point.

    I was just limiting the feedback analysis to the response to surface insolation; i.e. incoming short wave radiation tries to heat the surface and IR and convection are negative feedbacks to this process.

    BPL and Gavin are right as I was ignoring the effects of longwave radiation and the GH. I was analysing this as feedbacks later in the process. I probably misunderstood Gavin’s initial point as I thought he was just referring to the Earth’s surface as the blackbody.

    Comment by Muhammad Bear — 17 Jan 2010 @ 12:41 PM

  123. Muhammad Bear,
    It’s easy to get mixed up, especially with slightly different usages of the term “feedback”.

    Comment by Ray Ladbury — 17 Jan 2010 @ 1:52 PM

    And the other problem: lots of the simplifying assumptions used in figuring out temperatures depend on a local equilibrium and no spectral response.

    Yet GG activities require knowing when and how they are both broken.

    Comment by Completely Fed Up — 17 Jan 2010 @ 2:36 PM

  125. Ray, 121, however, since it’s what we’re trying to measure, making temperature the feedback is more than a little weird.

    Comment by Completely Fed Up — 17 Jan 2010 @ 2:37 PM

  126. BPL, 119: Your point is wrong.

    I have just looked at this again. Actually, my point was correct (in intent) but very badly expressed. I should have said that most of the energy is removed from the Earth’s surface through the convection of sensible heat and latent energy compared with energy removed through NET longwave radiation.

    When viewed in terms of upward longwave released less downward longwave received less energy is released through radiative transfer. I just wanted to clarify what I meant.

    Comment by Muhammad Bear — 18 Jan 2010 @ 11:13 AM

  127. CFU, It is not that temperature is the feedback, but rather that the feedback (thermal radiation) scales with temperature. What is more, there is nothing that unusual about this. Most other feedbacks–e.g. water vapor, albedo from ice/snow, CO2 from melting permafrost or clathrates, etc.–also scale with temperature, although not as directly.

    Comment by Ray Ladbury — 18 Jan 2010 @ 11:22 AM

  128. > When viewed in terms of upward longwave released less
    > downward longwave received less energy is released
    > through radiative transfer. I just wanted to clarify what I meant.

    You might want to try one more time, with a few more short words.
    “Viewed in terms of” is about as vague as it gets.

    Comment by Hank Roberts — 18 Jan 2010 @ 11:25 AM

  129. Ray, feedbacks (especially negative ones) are usually counted as, for example, cloud cover. Or the brightening of land by desertification.

    I’ve not heard temperature characterised as one before.

    Comment by Completely Fed Up — 18 Jan 2010 @ 12:07 PM

  130. Muhammad Bear wrote in 117:

    My point was merely that more heat is taken away from the surface through evaporation than through electromagnetic radiation.

    Barton Paul Levenson wrote in 119:

    Your point is wrong. The Earth’s surface radiates about 389 watts per square meter on mean global annual average, compared to losing 80 watts per square meter to latent heat and 17 to sensible heat. That means 80% radiation, 16% evaporation.

    Muhammad Bear wrote in 126:

    I have just looked at this again. Actually, my point was correct (in intent) but very badly expressed. I should have said that most of the energy is removed from the Earth’s surface through the convection of sensible heat and latent energy compared with energy removed through NET longwave radiation.

    When viewed in terms of upward longwave released less downward longwave received less energy is released through radiative transfer. I just wanted to clarify what I meant.

    You are making some good points. However in discussions it is oftentimes easier to focus on points of disagreement rather than agreement. In the recent thread Plass and Surface Budget Fallacy they discuss two of the energy budgets — or “balances” if you think of a “balance” in terms of what you have left over after all of the “accounting”.

    By the principle of the conservation of energy, if the rate at which energy enters a given volume is greater than the rate at which it exits the volume, then the net rate at which it enters the volume is positive and the amount of energy in that volume will increase over time. The same thing applies to mass, momentum and other conserved quantities. And the same thing applies to thermal energy if one includes the generation of thermal energy due to a dissipative process in the term for thermal energy entering the volume.

    You can certainly consider energy in terms of the surface budget, but you may also consider it in terms of the ocean budget, atmospheric budget, land budget, tropics budget and so on. At the same time, the fundamental energy budget is that which exists for the climate system as a whole — and this is what Gavin was pointing to when he inlined in 117:

    What happens at the surface is only a part of the change that determines the planetary response. More importantly, you need to think about the energy fluxes at the top of the atmosphere – which are radiative, and that is why the sigma*T^4 response is the dominant negative feedback.

    Gavin’s point is of fundamental importance — the energy budget of the climate system as a whole is what lies at the center of the greenhouse effect that raises the temperature of the climate system above the effective radiating temperature of the climate system — that is the temperature that the planet would have in the absence of the atmosphere. Effectively, the top of the atmosphere is the surface that bounds that volume, and thus what crosses that surface, entering or leaving the volume, is of fundamental importance to the climate system as a whole. But at the same time it helps to be able to think in terms of the other budgets. And I think that in our discussions we sometimes lose sight of the latter when focusing on the former.
    *
    Mohammad Bear wrote in 116:

    Of course there are both positive and negative feedbacks but the climate system has to be dominated by negative feedbacks. Turn off the sun and it would cool very quickly. Any engineer will tell you a system dominated by positive feedbacks is unstable.

    I am not exactly sure how engineering applies the term “positive feedback,” and it is possible that there is some important difference. However, as “feedback” is applied in climatology, it is thought of principally in relation to “radiative forcing.” There is the forcing that raises the surface temperature by a certain number of degrees. Increased solar radiation, for example. Then there is the response to this initial forcing: a rise in temperature, which in the absence of an atmosphere would simply result in an increase in the rate at which thermal radiation is emitted by the surface — according to the sigma*T^4 response that Gavin mentioned, where sigma is the Stefan–Boltzmann constant and the T^4 comes from the Stefan–Boltzmann law for blackbody radiation.

    But then there is evaporation, which removes heat from the surface but moves heat into the atmosphere. In terms of the fundamental energy budget this isn’t yet what would be considered a “feedback” as it has neither increased nor reduced the net rate at which energy enters or leaves the climate system. However, when the water evaporates from the surface this results in increased water vapor — and water vapor is a greenhouse gas.

    This increases the opacity of the atmosphere to thermal radiation. This will reduce the rate at which energy escapes the climate system, but energy will continue to enter the climate system at the same rate as before. Therefore the amount of energy in the climate system will increase. And likewise the amount of thermal energy in the climate system will increase.

    Now this will not mean any sort of runaway global warming where the temperature increases without limit. Such a thing is clearly impossible. Nor will it mean runaway global warming in the sense that applied to the evolution of Venus. In fact it won’t mean runaway global warming in any sense at all. However, it does mean that the surface has to warm further if the climate system is to emit thermal radiation at a rate that is equal to the rate at which energy is entering the system.

    As such the temperature response is greater than that which would result from the forcing (in this case, increased solar radiation) alone. This is why we refer to the feedback as a “positive” feedback. In contrast, a “negative” feedback would result in an overall increase in temperature that is smaller than that which would come from the initial forcing.
    *
    Now you had stated, “Any engineer will tell you a system dominated by positive feedbacks is unstable.”

    However, if by “unstable” you are referring to a runaway effect, this isn’t the case.

    Of course if increased water vapor results in an increase in temperature this will result in more water vapor that will result in a further increase in temperature, but on Earth at least this generally won’t result in a runaway effect, since each additional increase in water vapor results in a smaller increase in temperature than the one before. Positive feedback is limited just as a geometric sum is limited when each successive term is smaller than the term before it. Thus, for example, (1/2)^0+(1/2)^1+(1/2)^2+(1/2)^3+…+(1/2)^(n-1)+(1/2)^n is less than 2 so long as n is finite and approaches 2 only as n approaches infinity.
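    The converging series can be checked directly; a toy calculation (the gain value is my own illustration, not a climate estimate):

```python
def amplified_warming(dt0, f, terms=1000):
    """Total warming from an initial increment dt0 with feedback gain f.

    Each round of feedback adds f times the previous increment, so the
    total is the geometric series dt0 * (1 + f + f^2 + ...), which
    converges to dt0 / (1 - f) whenever 0 <= f < 1: positive feedback
    amplifies the initial warming, but does not run away.
    """
    assert 0.0 <= f < 1.0, "a gain of 1 or more would mean a runaway"
    return sum(dt0 * f ** n for n in range(terms))

# With f = 1/2, a 1-degree initial warming converges to 2 degrees:
print(round(amplified_warming(1.0, 0.5), 6))
```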

    However, all this talk in terms of forcings and feedbacks is immaterial as far as climate models themselves are concerned. They don’t work in terms of forcings and feedbacks. They are based upon the principles of physics. Generally speaking, talk in terms of “forcings” and “feedbacks” is strictly for our convenience — when we analyze how the climate system increases in response to an increase in the rate at which energy enters the system, a reduction in the rate at which it escapes, etc..

    Comment by Timothy Chase — 18 Jan 2010 @ 1:50 PM

  131. CFU, you didn’t read what I wrote. TEMPERATURE is NOT the negative feedback. Rather the negative feedback SCALES WITH TEMPERATURE to the 4th power. This is a different matter. Water vapor also scales with temperature pretty directly. And most other feedbacks have some sort of scaling with temperature.

    Comment by Ray Ladbury — 18 Jan 2010 @ 2:13 PM

  132. Muhammad Bear wrote in 126:

    When viewed in terms of upward longwave released less downward longwave received less energy is released through radiative transfer. I just wanted to clarify what I meant.

    Hank Roberts wrote in 128:

    You might want to try one more time, with a few more short words. “Viewed in terms of” is about as vague as it gets.

    Oh dear!

    I rather liked the expression, myself. “Viewed in terms of”, “from the perspective of”, “in this context” and so on. But I suppose that’s my dialectics again. Don’t blame Chris Matthew Sciabarra — it really was a problem well before I even ran into him. And of course he would tell you that the blame ultimately rests with Aristotle. But I suppose all of that is really beside the point…

    In either case, you might find the last sentence of the preceding paragraph a bit clearer — it was expressing the same idea.

    There Muhammad Bear had stated:

    I should have said that most of the energy is removed from the Earth’s surface through the convection of sensible heat and latent energy compared with energy removed through NET longwave radiation.

    Upward longwave released (upwelling longwave radiation) less (minus) downward longwave received (downwelling longwave radiation) is net longwave radiation. And the rate at which net longwave radiation carries thermal energy away from the surface is smaller than the rate at which thermal energy is carried away — from the surface — through evaporation.

    (But of course, for all intents and purposes, the only way thermal energy will ultimately leave the climate system itself is through thermal radiation. And from this perspective (that is, viewed in the context of the energy budget of the climate system as a whole) evaporation carrying away latent energy is simply moving thermal energy around within the climate system but never actually getting it out — except insofar as it facilitates the loss of thermal energy through radiation at higher altitudes.)

    Comment by Timothy Chase — 18 Jan 2010 @ 2:59 PM

  133. Muhammad Bear: “Any engineer will tell you a system dominated by positive feedbacks is unstable.”

    I am an engineer and can say that Muhammad Bear is an incompetent engineer. Specifically, for a linear feedback model with forward gain G and feedback gain H, the input-to-output transfer function is
    G/(1+GH); if -1 < GH < 0, there is amplification without a runaway effect.
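    As a quick numerical sketch of that transfer function (illustrative values only):

```python
# Closed-loop gain A = G / (1 + G*H) of a linear feedback model.
# With -1 < G*H < 0 the denominator lies between 0 and 1, so the
# input is amplified yet the output stays finite; only at G*H = -1
# (the pole of the transfer function) does the response run away.

def closed_loop_gain(G, H):
    loop = G * H
    if abs(1.0 + loop) < 1e-12:
        raise ValueError("G*H = -1: unbounded (runaway) response")
    return G / (1.0 + loop)

# Forward gain 1 with feedback gain -0.5: amplification by a factor of 2.
print(closed_loop_gain(1.0, -0.5))  # 2.0
```

    As GH is pushed toward -1 the gain grows without bound, which is the engineer’s notion of instability; anywhere short of that, positive feedback just means finite amplification.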

    Comment by RB — 18 Jan 2010 @ 3:24 PM

  134. Re RB 133

    Is “unstable” synonymous with being prone to “runaway effects,” or could he have simply been referring to the amplification itself? In either case and as you would no doubt point out — amplification is not a problem. Unbounded amplification — runaway effects — that would be a problem.

    Comment by Timothy Chase — 18 Jan 2010 @ 4:02 PM

  135. “(But of course, for all intents and purposes, the only way thermal energy will ultimately leave the climate system itself is through thermal radiation)”

    An irrelevant distinction: the only way energy will leave the climate system is through radiation.

    It’s complicated enough as it is.

    Comment by Completely Fed Up — 18 Jan 2010 @ 4:03 PM

  136. I wonder if those who do not understand feedbacks can explain how lagging on a hot water tank can work.

    After all, the lagging produces no heat of its own.

    Comment by Completely Fed Up — 18 Jan 2010 @ 4:05 PM

    I apologize for the tone in (#133); that was uncalled for. Since BIBO (bounded input, bounded output) stability requires avoiding GH = -1 (a phase of 180 degrees at unity gain), when designing circuits we typically shoot for a 60-degree phase margin (i.e., a phase not exceeding 120 degrees, or 60 degrees below 180). This results in some overshoot, but the oscillatory response to transient changes is damped.

    Comment by RB — 18 Jan 2010 @ 4:10 PM

  138. I explain the same in #402, #415 here:
    http://www.realclimate.org/index.php/archives/2010/01/unforced-variations-2/comment-page-9/#comments
    and in the two comments here:
    http://www.realclimate.org/index.php/archives/2010/01/unforced-variations-2/comment-page-23/#comment-155264

    Comment by RB — 18 Jan 2010 @ 4:14 PM

  139. MB, your point is still wrong. Why do you count longwave radiation as NET but convection and conduction as GROSS? The inputs are:

    161.2 watts/square meter sunlight
    324.8 watts/square meter IR from the atmosphere

    The outputs are:

    17 watts/square meter sensible heat
    80 watts/square meter latent heat
    389 watts/square meter longwave radiation

    You are taking two streams from the output side and comparing them to the net from one input stream and one output stream. Why?
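    For what it’s worth, the streams above can be checked in a few lines (the values are the ones quoted in this comment):

```python
# Surface energy streams (W/m^2) as quoted above.
inputs = {"absorbed sunlight": 161.2, "downwelling IR": 324.8}
outputs = {"sensible heat": 17.0, "latent heat": 80.0, "upwelling longwave": 389.0}

# The gross streams balance: energy into the surface equals energy out.
print(round(sum(inputs.values()), 1))   # 486.0
print(round(sum(outputs.values()), 1))  # 486.0

# NET longwave (upwelling minus downwelling) is the quantity MB set
# against the GROSS convective streams (17 + 80 = 97).
net_longwave = outputs["upwelling longwave"] - inputs["downwelling IR"]
print(round(net_longwave, 1))  # 64.2
```

    Mixing a net quantity (64.2) with gross quantities (97) is exactly the apples-to-oranges comparison being objected to here.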

    Comment by Barton Paul Levenson — 19 Jan 2010 @ 7:30 AM

  140. I, too, am curious about the effects of rain on the glaciers and if more rain led to more and quicker melting.

    Comment by California Blogger — 26 Jan 2010 @ 6:44 AM

    I think this is a great closing comment: “…the episode underlines the danger in reading too much into single papers…go against the mainstream (in either direction) …. conclusions will not stand up … takes a while for this to be clear. Research at the cutting edge pushing the limits of the data or the theory … if the answers were obvious, we wouldn’t need to do research.”

    I agree with this … and if the science were settled, there would also be no debate on the science, only on the policy solution.

    The issue is that this really is a policy matter, as discussed at http://climate-check.blogspot.com/. Magritte’s painting of a pipe (“Ceci n’est pas une pipe”) is a painting, not an actual pipe, and how well it fits your idea of a pipe depends on your interpretation, filtered through one’s lens or world view of a pipe rather than through the eyes of the painter.

    The IPCC’s probabilities are not statistical tests of scientific hypotheses about whether a model describes an outcome (I would say that the IPCC’s assigned numerical (Delphi) probabilities are truly nonsense). The policy issue is highly leveraged on the 100-year model outputs and claims of warming of 1.5-4.5 °C, or whatever it is these days. The model outputs are not capable of verification or falsification by hypothesis testing, because the model projection is 100 years out of sample. The models can be tested in sample, i.e., in a backcast. Determination of the heat-trapping capability of GHG “forcings” is science; applying these in a nonverifiable out-of-sample (100-year) projection (not a “prediction”) of a temperature increase is not science, it is computer code.

    And we are still at an early stage of understanding other important forcings and feedback interactions. Over some time (?) the models’ projections will be tested; I imagine that over the next 5 years the models will be improved, expanded to include important non-manmade forcings, and tested, allowing better confidence. As things stand now, the way the models are being used and the results communicated results in a lot of criticism and questions about the uncertainties, as discussed on this RealClimate page.

    The different theories or models will indeed play out as a “rugby scrum” over some time (a few years?) until one model or a group of models is more or less accepted. Then everyone, including the public and even the politicians, will understand with greater confidence. It is not like solving Poincaré’s conjecture, a mathematical proof starting from basic premises. The models now are not hypotheses but conjectures: unproven hypotheses, although underpinned by basic physical science.

    Policy models are not required to be “complete” or “correct” descriptions (models) of actual working systems. They simply have to be deemed acceptable for making policy decisions. As in Douglas Elmendorf’s macroeconomic models to look at stimulus and jobs.

    Pesaran (1996, 2004) defines three criteria for the evaluation of policy models: relevance, consistency and adequacy. Relevance: does the model meet its required purpose? Consistency: what else is known that may be useful, and is it being used? Adequacy: the usual statistical measures of goodness of fit. Consistency and adequacy are important when building a model, but a generalized form of relevance is the most important thing for evaluating one. Dagum (1989) said “knowledge is of value only when it is put to use,” and Marschak said “knowledge is useful if it helps to make the best decisions.” Also, Keynes said of Alfred Marshall, “Marshall arrived early at the point of view that the bare bones of economic theory are not worth much in themselves and do not carry one directly to any useful practical conclusions. The whole point lies in applying them to the interpretation of current economic life.”

    Comment by Danley Wolfe, PhD, MBA — 28 Jan 2010 @ 3:34 AM
