
Comments


  1. Also the irony of a reconstruction with the straightest shaft since the original “hockey stick”. More slope, but still, pretty darn straight compared to the other reconstructions (as summarized in the NRC 2006 report) which showed a lot more interdecadal variability… makes the 20th century really stand out as a change in the long term system behavior…

    Comment by M — 20 Aug 2010 @ 6:51 PM

  2. “our model gives a 80% chance that [the last decade] was the warmest in the past thousand years”

    I would be interested in knowing the next highest posterior probability for the warmest decade. Looking at “rolling decades” (as M&W do) yields a lot of 10-year periods (just shy of 1000, really). I would think that if a single 10-year period has an 80% chance at being the warmest, then that doesn’t leave much room for other 10-year periods to also have high probabilities. (although I may not have my Bayesian interpretation straight here…)

    Comment by ABG — 20 Aug 2010 @ 7:57 PM
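
    Since exactly one rolling decade is the warmest in any single posterior draw, the probabilities across all the rolling decades must sum to one, so an 80% decade does indeed leave only 20% to share among the other ~990; ABG's intuition is right. A minimal sketch of the bookkeeping, using made-up posterior draws rather than anything from M&W's model:

      set.seed(1)
      draws <- matrix(rnorm(1000 * 99), nrow = 1000)    # fake draws: 1000 samples x 99 decades
      warmest <- apply(draws, 1, which.max)             # which decade is warmest in each draw
      p <- table(factor(warmest, levels = 1:99)) / nrow(draws)
      sum(p)                                            # always 1: the decades partition the draws
      sort(p, decreasing = TRUE)[1:2]                   # top two posterior probabilities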

  3. Thanks for the link to R. Here’s hoping I can figure out how to use it.

    [Response: Unless you're an experienced programmer or a polymath or genius with access to some good R books, don't be in a hurry.--Jim]

    recaptcha what is a cuounkno?

    [Response: It's an R function :)]

    Comment by Edward Greisch — 20 Aug 2010 @ 8:14 PM

  4. There are many comments about the normalisation based on hemispheric average temperature. This would overweight the northern data points, which would also explain why the curve is almost a carbon copy of Kaufman's.

    Comment by Yvan Dutil — 20 Aug 2010 @ 8:59 PM

  5. I think there is a lot of misunderstanding of the 80% quote from McShane & Wyner. They started off the paper by supposedly showing that the proxy data is worthless, worse than random garbage, and then proceeded to go ahead anyway and calculate past temperatures based on that worse than random data. So I don’t think that they’re saying that there is REALLY an 80% chance that 97-07 was the warmest decade, but rather, given their model and the [edit - watch the inflammatory rhetoric] data, that that is the result you get. To use that quote out of the context that explains that the quote is based on [edit] data is misleading because it makes it sound like the authors are agreeing that there really is an 80% chance that 97-07 was the warmest decade.

    [Response: Needless to say (except it isn't), they demonstrate nothing of the sort. They use a single method (lasso) with relatively short holdout periods and show that it can't beat some noise models. How does that demonstrate that the underlying data is 'worthless'? All sorts of reasons might explain this - lasso might not be very good, their noise models are mis-specified, the validation metric is flawed, the holdout period too short etc. Think about ways that you could use to test this and try to be a little more dispassionate in addressing these issues.... - gavin]

    Comment by Mindbuilder — 20 Aug 2010 @ 9:09 PM

  6. M&W’s conclusion that the proxy data is worse than random data at predicting temperatures is hard to believe. Even if you think there is very little relationship between the proxies and temperature, it seems like there ought to be SOME relationship. At least slightly better than random data. If I was going to test M&W’s evaluation of the proxy data, probably the first thing I would do is create simulated proxy data by adding various amounts of noise to the thermometer record and then compare the noisy thermometer record to random data using M&W’s method. I suspect it may turn out that less noisy random data scores better than slightly noisier data that really does correlate somewhat well to the actual thermometer temperatures. The other thing that might truly be throwing off the correlation with proxy data is any residual contamination of UHI or other errors in the thermometer record that would not be present in the proxies.

    When I referred to the data as worse than random garbage in my above comment, I was just trying to make clear what I thought M&W's position on it was. I wasn't necessarily agreeing with it. Hence the word "supposedly". I think you should make my comment whole, though it would perhaps be acceptable and less inflammatory to leave out just the word garbage.

    Comment by Mindbuilder — 20 Aug 2010 @ 9:51 PM
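
    A rough sketch of the pseudoproxy test Mindbuilder proposes, with ordinary least squares and holdout RMSE standing in for M&W's actual lasso pipeline (everything here is illustrative, not their code):

      set.seed(42)
      n     <- 150                                           # years of "instrumental" record
      temp  <- 0.005 * (1:n) + arima.sim(list(ar = 0.5), n)  # toy temperature: trend + red noise
      proxy <- temp + rnorm(n, sd = sd(temp) / 0.5)          # pseudoproxy at signal-to-noise 0.5
      noise <- rnorm(n)                                      # pure-noise "proxy" for comparison
      hold  <- 121:150                                       # holdout block
      rmse  <- function(x) {
        fit <- lm(temp[-hold] ~ x[-hold])                    # calibrate outside the holdout
        sqrt(mean((temp[hold] - (coef(fit)[1] + coef(fit)[2] * x[hold]))^2))
      }
      c(proxy = rmse(proxy), noise = rmse(noise))            # the proxy should win if the method is sane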

  7. Gavin:

    Needless to say (except it isn’t), they demonstrate nothing of the sort. They use a single method (lasso) with relatively short holdout periods and show that it can’t beat some noise models. How does that demonstrate that the underlying data is ‘worthless’? All sorts of reasons might explain this – lasso might not be very good

    In fact, one might wonder if they didn’t search for a method that wouldn’t beat some noise models (i.e. lasso) for the data at hand …

    Comment by dhogaza — 20 Aug 2010 @ 10:56 PM

  8. M&W’s conclusion that the proxy data is worse than random data at predicting temperatures is hard to believe. Even if you think there is very little relationship between the proxies and temperature, it seems like there ought to be SOME relationship. At least slightly better than random data.

    Oh, regarding tree rings, at least, you'd imagine they've just proven that Liebig's law of the minimum doesn't really work, in which case a bunch of increases in ag production efficiency are no better than what would've occurred randomly, and probably a bunch of other similar things.

    The underlying problem, IMO, is that proxies are chosen for real, physical, reasons, and to say that some b-school profs have proven they don’t reflect a particular environmental signal runs right up against practical and successful applications in today’s real world.

    [Response: Thank you. The beginning of their abstract's second sentence reads: "The relationship between proxies and temperature is weak..." Well there's an interesting finding, but damned if they didn't hide it in a paper in a statistics journal!! I'm looking forward to the explanation of why temperate and boreal trees' primary and secondary meristems go dormant during the cold season (= no growth) but those of tropical trees, which have no such season, do not, and why ITRDB tree ring data at temperature limited sites show such suspiciously high correlations between the growth rates of the different trees in a stand. The foresters and arboriculturists are going to be really upset at this news--they are under the impression that tree growth somehow or other responds to temperature. Ain't that nuts?--Jim]

    Comment by dhogaza — 20 Aug 2010 @ 11:01 PM

  9. Mindbuilder:

    You say that adding noise to the temperature record would be a good way to evaluate, but doesn’t that build in a bias in a way? If the noise is ‘noisy enough’ you’ll get the same data you had to start with (generally) when you average.

    Comment by David — 20 Aug 2010 @ 11:21 PM

  10. What bothers me the most is that the curve represents two data sets, proxies and thermometers. It is the thermometer records that make it look alarming.

    [Response: Yes, that's because they ARE alarming, at least to those who know the physical and biological ramifications of rapidly raising the planet's temperature almost a degree C, and at a rapidly increasing rate at that.--Jim]

    What basis is there for the assumption that two data series can be reasonably displayed on the same chart?

    [Response: Correlation over their common interval.--Jim]

    Also, is there no proxy data for the past 150 years? If not, why don't we go find some? If yes, can we display the curve without the temp data so we can see what it looks like?

    [Response: Yes, lots of it. That's what the correlations with the instrumental are based on. And (2) yes, it's been done in a number of papers, especially those exploring the tree ring vs T divergence phenomenon--Jim]

    Comment by Thomas — 20 Aug 2010 @ 11:45 PM

  11. Some quick metacomments. (I have been too lazy to read the paper and the critiques, may do that later.)

    R is currently a de facto standard in statistical programming, and well worth learning because of its popularity. R is not perfect: for example, it is quite slow, has confusing scalar-vector semantics, and has awkward N-D arrays compared to, e.g., Python's numpy. But as a programming language it is relatively modern.

    Lasso etc.: Lasso or L1 regularization has been popular in predictive models because it gives sparse solutions – only some of the variables tend to contribute strongly. This comes directly from its interpretation as a prior with a strong peak at zero. In the predictive sense, Lasso may work a bit better than L2 (gaussian prior) with a huge number of variables, but it is not favored for inference, that is, where the coefficients of the model are interpreted, because as a prior it is unnatural. New evidence indicates that a Cauchy prior is a good compromise (Gelman et al.), but it may be numerically difficult in large problems (leading to non-convex optimization).

    Without commenting on the models of the M&W paper, going Bayesian is in general good. Bayesian models make assumptions explicit, make the handling of missing data more principled, and allow hierarchical assumptions about the error structure. For example, different kinds of proxies may be tied together with common priors. But the wide range of possibilities may also be a downside in a politically charged issue like this.

    BTW, "80% chance of X given the model" means just what it says. If the model is bad, the result is meaningless. Just remember that you cannot really talk of probabilities without conditioning them on some model, implicit or explicit.

    Comment by Janne Sinkkonen — 21 Aug 2010 @ 1:05 AM
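
    For readers who want to see the sparsity Janne describes, a minimal demonstration assuming the glmnet package is installed (alpha = 1 gives the lasso / L1 penalty, alpha = 0 the ridge / Gaussian prior):

      library(glmnet)
      set.seed(7)
      x <- matrix(rnorm(100 * 50), 100, 50)      # 50 candidate predictors
      y <- x[, 1] - 0.5 * x[, 2] + rnorm(100)    # only two of them actually matter
      lasso <- cv.glmnet(x, y, alpha = 1)
      ridge <- cv.glmnet(x, y, alpha = 0)
      sum(coef(lasso, s = "lambda.min") != 0)    # lasso: a handful of nonzero coefficients
      sum(coef(ridge, s = "lambda.min") != 0)    # ridge: essentially all of them nonzero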

  12. “our model gives a 80% chance that [the last decade] was the warmest in the past thousand years”….

    Yeh, except the paper was about the lack of reliability of proxies. They pretty much state they aren't reliable. So the statement above refers only to the outcome of the model, not to the authors' own view of reality. The model gave them that answer; they just don't believe the model to be accurate.

    Hope that helps.

    [Response: What would help more is if you actually thought about it a little. How do they address reliability? They take a new method ('lasso') that is untried and untested in this context and compare it to noise models on holdout periods that are shorter than have been used before, using a metric that hasn't been used before. If they don't believe that their method is any good, then none of the conclusions hold about anything. I therefore doubt that it is actually what they think. - gavin]

    Comment by suyts — 21 Aug 2010 @ 2:04 AM

  13. I'm not remotely qualified to remark on the nitty-gritty statistical treatments in this M&W paper, but what I found remarkably conspicuous was the cumbersome load of political content it carried, and what appears to be framing. By the time I'd waded through the extended and arguably refracted history of the world of hockey stick enthusiasts, I was left wondering not only whether I was reading Senate testimony rather than a scientific research article, but also whether I should bother spending any of my (tiny) precious mind on the scientific appendix attached to the popular-media piece comprising the first section.

    Comment by Doug Bostrom — 21 Aug 2010 @ 2:06 AM

  14. Martin Vermeer mentioned another possible problem which I’ve found interesting.
    http://scienceblogs.com/deltoid/2010/08/a_new_hockey_stick_mcshane_and.php#comment-2729979

    What do you think?

    Comment by Anders M — 21 Aug 2010 @ 3:07 AM

  15. In order to validate their methods, they use an interpolation task rather than an extrapolation task. That is, they fit the model on the temp. record, minus a middle chunk, and test how well their model can infer the missing middle part.

    This has raised eyebrows. Clearly, providing both endpoints makes it much easier to estimate the middle part. The fact that random-noise models can "beat" proxy-based models at this task may be an artifact of the error function (RMSE on the unsmoothed outputs, IIUC – which I probably didn't).

    At any rate, the interpretation that “proxies can’t predict temperatures better than random processes” is itself a serious extrapolation!

    I wonder if the RC gurus have an opinion on this particular point.

    Comment by Visiteur du Matin, Chagrin — 21 Aug 2010 @ 4:08 AM

  16. I’m in the process of writing my own advanced statistical software in VB2008 because I’m having so much trouble dealing with R. I don’t know if R even does things like unit root tests and cointegration regressions.

    Comment by Barton Paul Levenson — 21 Aug 2010 @ 5:36 AM

  17. Jim: Thanks for the advice on R. I took the Algol 20 course in 1964 and did some self-study on LISP and a bunch of other languages between then and now, but I much prefer hardware engineering over software. I learned statistics as a physics undergrad and at the Army's school for quality control. I will hold off on installing R unless you can point me to a free online course in it.

    Comment by Edward Greisch — 21 Aug 2010 @ 7:13 AM

  18. Climate confuser Bjorn Lomborg makes up a new story, copied by ‘news’papers around the world: A 7-metre sea level rise is no big deal, and costs only 600 billion per year to cope with: http://bit.ly/Nonsens

    “Tokyo coped with 5 metre subsidence since the 1930s, so we can cope with sea level rise as well”. Forgets to mention that subsidence was stopped in 1975 (http://bit.ly/Toksub), and that Tokyo is still 6+ metres above sea level!

    Comment by Kees van der Leun — 21 Aug 2010 @ 7:48 AM

  19. M&W shorter: "The data don't work and are worse than random noise, but damned if they don't reproduce a hockey stick. Gee, who ya gonna believe, me or your lyin' eyes."

    Comment by Ray Ladbury — 21 Aug 2010 @ 9:37 AM

  20. BPL [off-topic], try the tseries or urca packages.

    Comment by CM — 21 Aug 2010 @ 9:47 AM
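
    A short sketch of what CM suggests, assuming the tseries package (urca provides ur.df() and ca.jo() for the same jobs):

      library(tseries)
      set.seed(3)
      x <- cumsum(rnorm(200))        # a random walk, so it has a unit root
      y <- x + rnorm(200)            # cointegrated with x by construction
      adf.test(x)                    # augmented Dickey-Fuller unit root test
      po.test(cbind(y, x))           # Phillips-Ouliaris cointegration test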

  21. Why does this site delete opposing comments so much? I don’t see such antics at sites such as wattsupwiththat or climateaudit? Are you guys agenda driven and trying to hide something? Inquiring minds want to know.

    [Response: Case in point. Do you think your comment adds anything to the discussion on this topic? - gavin]

    Comment by tom s — 21 Aug 2010 @ 9:48 AM

  22. #18–Not to mention that Tokyo’s seafront is a little smaller than the world’s total built-up seafront–and far richer than the mean, to boot.

    Comment by Kevin McKinney — 21 Aug 2010 @ 9:58 AM

  23. Well, the blog’s been a tad slow of late, but out in the real world I’ve enjoyed reading my local-language paper, with comments on recent extreme events by Gavin (in NYT) and particularly Stefan. If this is you being “preoccupied”, carry on!

    Comment by CM — 21 Aug 2010 @ 9:59 AM

  24. But I know I have submitted posts of substance in the past, yet you delete them. [insults removed]

    [Response: If you think that having a discussion about science revolves around insulting our integrity, you are very confused. I have absolutely no interest in playing games with you - either make substantive points, or don't bother. Your call. - gavin]

    Comment by tom s — 21 Aug 2010 @ 10:00 AM

  25. I would like to see further discussion of the interpolation vs extrapolation debate as it relates to proxy calibration. The authors discuss this point in some detail and feel that the proxies should outperform noise in both cases. Also, empirical AR and Brownian noise outperform the proxies in almost all blocks, including the beginning and end, which are more extrapolation-based. Can you clarify?

    [Response: Interpolation in a system with low frequency variability is always going to be easier than extrapolation. For instance, in a simple example take a situation that is flat, then a trend, then flat. If you take out a chunk in the middle, and fit red noise to the two ends, you have a lot of information about what happened in the middle. For extrapolation, all noise models will eventually revert to the mean of the calibration period, and so if that wasn't actually the case, the proxies should outperform the noise. And of course the shorter the period you take out, the harder it is for proxies to outperform. There isn't much difference between Brownian motion and AR(1) on these short time periods though. This is amenable to testing with the code provided (R_fig9[abc] for instance, but it takes some time to run). – gavin]

    Comment by Ohio — 21 Aug 2010 @ 10:20 AM
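
    A toy illustration of the extrapolation point in the response above (nothing here comes from M&W's setup): an AR(1) model fitted to a trending calibration period forecasts a decay back to the calibration mean, so if the real series keeps trending, the noise model falls behind.

      set.seed(11)
      calib <- arima.sim(list(ar = 0.7), n = 100) + 0.02 * (1:100)  # trending calibration series
      fit   <- arima(calib, order = c(1, 0, 0))                     # AR(1) noise model
      fc    <- predict(fit, n.ahead = 50)$pred                      # extrapolate 50 steps
      tail(fc, 1)                 # close to the calibration mean, not the continued trend
      coef(fit)["intercept"]      # the level the forecast reverts to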

  26. Why does this site delete opposing comments so much? I don’t see such antics at sites such as wattsupwiththat or climateaudit?

    Gee, confirmation bias, much? You don't think that wattsupwiththat or climateaudit delete comments? The mind boggles.

    [Response: Discussion of blogs comment policies on threads that are actually related to substantive issues, just leads to distraction. Let's try to stay on topic please. - gavin]

    Comment by Steve Metzler — 21 Aug 2010 @ 10:24 AM

  27. One more question; in your GISS-USA-temp-anom-adjusted-for-UHI data set, why do you adjust the early part of the century down and the latter portion up? If you are correcting for UHI shouldn’t it be the other way around? Why cool the 1930s when cities were smaller, wouldn’t it have been even warmer if the cities were larger yet you adjust down? Makes no sense.

    [Response: These are anomalies. It makes no difference whatsoever. Please at least try and address issues in the post. - gavin]

    Comment by tom s — 21 Aug 2010 @ 10:27 AM

  28. David Jones and I, of the Clear Climate Code project, will be taking part in the surface temperatures workshop at the Met Office, and I hope we may be able to contribute to the ongoing project. Steve Easterbrook should also be there, at my suggestion.

    Comment by Nick Barnes — 21 Aug 2010 @ 10:36 AM

  29. I always read, and try to check the comments, especially all those wonderful links, but rarely post as I lack the level of science training necessary to contribute. However, I do know English and logic, and am good at spotting fluffing and fudging, such as that in the recent Curry-Schmidt hot air balloon. And on that subject, Dr. Schmidt's description of Dr. Curry's qualifications and concerns garnered more respect from me than anything she said, which speaks volumes to those attacking the former and this site for a lack of courtesy.

    The idea that somehow WUWT and CA are pure as the driven snow about comment moderation, while RC is off the charts for prejudice, would be laughable if it didn't create an appearance that does not resemble reality.

    One of the reasons I come here is to continue to educate myself, and I find that moderation is exactly as stated – gentle, patient, and sticking to science until the commenter begins to be more obvious, if less genuine, in spouting the party line in the fake skeptic movement.

    The moving goalposts in the misinformation movement have recently trumpeted “unfair” moderation and lack of courtesy as more important than reality and facts.

    This is wrong, and approaches evil, affecting as it does our future on the planet. Even without the convincing developments of the last 40 years of climate science and its expansion to cover a wide variety of evidence and disciplines, it should be obvious to the simpleminded that our planet is finite, our consumption increasing, imitators from the world previously unable to use so much are on the increase, and resources are being used up. Ever increasing difficulty in maintaining a supply of polluting fuels of convenience is continuing to make decades of government subsidies of fossil fuels instead of alternatives a more obvious scandal of pollution and waste. Does anyone even count the loss of those polluting substances, in addition to their obvious poisoning of our world?

    So again, this is wrong, approaching evil.

    Comment by Susan Anderson — 21 Aug 2010 @ 11:17 AM

  30. [Response: What would help more is if you actually thought about it a little. How do they address reliability? They take a new method ('lasso') that is untried and untested in this context and compare it to noise models on holdout periods that are shorter than have been used before, using a metric that hasn't been used before. If they don't believe that their method is any good, then none of the conclusions hold about anything. I therefore doubt that it is actually what they think. - gavin]

    Gavin, it is possible I'm reading it wrong, but the way I read the paper is this: I first read the title to see what the authors were trying to determine. They didn't state they were trying to ascertain if the last decade was the warmest or not. They wanted to see if proxies were reliable enough to make such statements with any degree of certainty. They selected (right or wrong) the method they thought most appropriate. Their methods gave them the 80% result. They later, and emphatically, state the proxies aren't reliable enough to make such statements. Note, in this post, I'm not making a judgment as to the validity of their conclusions; I'm stating the 80% ratio isn't part of their conclusions.

    Comment by suyts — 21 Aug 2010 @ 11:45 AM

  31. suyts: if your argument is that their model is rubbish, I think you may find quite a few people who agree with you. But if that includes the authors, one wonders why they published at all…. unless it was to generate column inches.

    But I don't expect criticism of the paper to focus on the model. For all its flaws, it's not particularly interesting. The errors are obvious. But their claims about "reliability" are way off, and I expect those to attract wide criticism.

    To be blunt: if they can’t extract the signal from the noise, that is because they are Doing It Wrong.

    And their complaints about failing to predict the present warming when the last 30 years are withheld: that’s just plain moronic. I can’t wait to see this paper comprehensively dissected.

    Finally: their jabs at climate scientists are just rude. Who allows them to publish stuff like that? I hope the paper gets some heavy editing before it is published.

    As if we didn’t have enough examples of what happens when statisticians wander outside their domain knowledge and don’t bother to collaborate already!

    Comment by Didactylos — 21 Aug 2010 @ 12:48 PM

  32. I would just like to give you guys some credit for not jumping to conclusions as some other sites seem to have done. Since the code is supplied, and since you folks here should be able to actually test their work, I look forward to what I hope will be an even-handed review of the work once it is more thoroughly looked at. As I have told others, this will either stand the test of time, or it won't. Though certainly there will be those who cling to it or dismiss it one way or the other.

    But again, props for not flying off the proverbial handle. Already too much of that on both sides of the fence.

    Comment by BPW — 21 Aug 2010 @ 1:26 PM

  33. Ho-hum. Same stuff, different day.

    The FUDsters resort repeatedly to two memes, hoping to deliver themselves from the unpleasant conclusions of science: 1) the temperature record is unreliable, and 2) climate scientists are corrupt. Now that climategate has sunk beneath the waves of reality, they turn again to hopes they can break the hockey stick. Alas for the current celebration at WUWT, it is proving as obdurate as ever.

    But deniers are indefatigable. One has only to look to creationists for an example foretelling the future of climate science denial; this latest flop will not deter them from cobbling together new ways to confuse the public and waste the time of their betters. One waits with weary resignation for the next pothole they will dig in the path of science.

    Comment by Adam R. — 21 Aug 2010 @ 2:01 PM

  34. “I think there is a lot of misunderstanding of the 80% quote from McShane & Wyner. They started off the paper by supposedly showing that the proxy data is worthless, worse than random garbage…”

    Well yes, there would seem to be misunderstanding right there, and possibly even misunderstanding on the part of the authors.

    The fact that high frequency rich signals (random or otherwise) used as covariates to a noisy signal were preferred over possibly physically relevant proxies by shrinkage estimation of a high dimensional model is neither particularly surprising, nor does it say much about the relationship between the proxy signals and the noisy signal.

    If I show you a satellite image of two ants on Mount Kilimanjaro, don't be surprised if a shrinkage model for processing the image spends all its parameters on Kilimanjaro and none of them on the ants.

    Why is climate change a signal like ants compared to noise like Kilimanjaro? Because climate change is slow compared to the time scale on which the proxies are measured, and the proxies have chaotic (i.e. broadband) higher frequency components (which from the climate change point of view are noise). Now plain old mean square error (MSE) is a broadband measure (Plancherel's theorem), which means that MSE regards equal amplitude features in proportion to their bandwidth. So the climate change signal is swimming uphill in the MSE criterion just because it's narrowband, let alone that its amplitude is small compared to higher frequency components. Picking MSE without some sort of frequency compensation was telling the model estimation to try and model something other than climate change. OK, the authors didn't think that through. I'm not sure why they didn't, though.

    But when you use a high bias estimator (anything that reduces the parameter count down by orders of magnitude, as was used in this paper) then you run the risk of the estimator annihilating anything small, and in the MSE way of measuring things, that means that any narrowband feature may be ignored.

    So it’s not surprising that a high-bias estimator locks onto high frequency content. And it doesn’t say much about the things it didn’t lock onto, either.

    Comment by Andrew — 21 Aug 2010 @ 2:19 PM
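
    Andrew's "MSE is broadband" point is Parseval/Plancherel for the discrete Fourier transform, which a few lines of R can verify: the mean square of a series equals the mean of its squared Fourier amplitudes (scaled by the length), so MSE sees a feature only through its total squared amplitude, never through where it sits in frequency.

      set.seed(9)
      x <- rnorm(256)
      f <- fft(x)
      c(mean(x^2), mean(Mod(f)^2) / 256)   # equal, up to floating point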

  35. Barton Paul Levenson: “I’m in the process of writing my own advanced statistical software in VB2008 because I’m having so much trouble dealing with R. I don’t know if R even does things like unit root tests and cointegration regressions.”

    R is a language, the free software alternative to S. Together, these are by far the preferred languages of academic statisticians. A very large number of modern statistical techniques make their first appearance in R.

    Although I have exactly zero use for cointegration and unit root tests (which might seem odd coming from a professor of quantitative finance), this is the sort of thing one can be sure R will have plenty of choices for, as Googling "cointegration R package" shows.

    In my previous career as a portfolio manager, up until the turn of the millennium, I wrote entire real time high frequency trading systems in S-plus (which is essentially equivalent to R). (Full disclosure disclaimer: in exchange for an S-plus product endorsement I got some extra support regarding source code they didn't usually provide). So you can do quite a bit with R.

    Anywhere there is a statistics department, there will be some people who are fluent in S and R, so maybe check with the local statisticians.

    Note that currently, I use numerical Python (aka numpy) because of the seamless inclusion of multithreaded linear algebra and support for memory-mapped files (our data sets are on the order of terabytes). I would like to suggest you skip right to numerical Python, but it has a noticeably steeper learning curve than R.

    Comment by Andrew — 21 Aug 2010 @ 2:37 PM

  36. ‘The fire down below’: @Revkin interviews scientist Guillermo Rein (University of Edinburgh) on the environmental significance of underground peat fires: http://nyti.ms/RevPeat

    Comment by Kees van der Leun — 21 Aug 2010 @ 3:10 PM

  37. My stats abilities may not qualify me to comment on the number-crunching part of this M&W paper, but putting that aside, firstly, something that struck me was the rather overtly political commentary that went on in places, especially in some of the introductory section – a lot of which seems to have been sourced via areas of media or politics. That raised a couple of eyebrows here.

    Secondly, the viral way in which a pre-publication MS flew around the internet was, er, interesting. On WUWT I did comment along the lines that high-fives were being done in less time than it would normally take to read the abstract, although perhaps my language was a little less temperate than that. I guess it will be interesting to see how the final version compares with the pre-publication one…. of which several thousand copies are likely saved on various hard-drives…

    Cheers – John

    Comment by John Mason — 21 Aug 2010 @ 4:39 PM

  38. Help!

    I need to simulate the SOI for another 100 years or so. I can’t relate the damn thing to anything, even by Fourier analysis (which I’m probably doing wrong). It just looks like a total random walk.

    How can I simulate the distribution, given the mean, standard deviation, kurtosis, and skewness? Is there a mathematical technique for doing so? This is advanced stats, so anyone who can help… Au secours! Au secours! Sauvez moi!

    Comment by Barton Paul Levenson — 21 Aug 2010 @ 6:47 PM

  39. Barton Paul Levenson @38 — Start by reading how to generate a sample from a Gaussian distribution.

    Comment by David B. Benson — 21 Aug 2010 @ 7:29 PM

  40. hey BPL,

    This might help (uses the probability integral transform).

    Comment by apeescape — 21 Aug 2010 @ 9:29 PM
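
    A minimal sketch of the probability integral transform apeescape's link describes: draw uniforms and push them through the (here, empirical) quantile function, which reproduces the distribution of whatever data you feed it.

      set.seed(5)
      obs <- rnorm(500, sd = 1.5)                # stand-in for an observed SOI series
      u   <- runif(1200)                         # ~100 years of monthly values
      sim <- quantile(obs, probs = u, type = 8)  # inverse-CDF sampling
      c(mean(sim), sd(sim))                      # moments track those of obs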

  41. #30 suyts:

    They selected(right or wrong) the method they thought most appropriate. Their methods gave them the 80% result. They later, and emphatically, state the proxies aren’t reliable enough to make such statements.

    But why then even bother to compute a worthless result?

    Why am I reminded of “It isn’t happening, it’s not our fault, and anyway it will be good for us?”

    Comment by Martin Vermeer — 21 Aug 2010 @ 9:49 PM

  42. A question, and two points of clarification.

    First, thanks for the graphics that show various "Lasso" outputs with the Tiljander proxies omitted. This post's "No Tiljander Proxies" figure with the six reconstructions (1000 AD – 2000 AD) is an extension of M&W10's Figure 14. As such, it's a little hard to interpret without reference to that legend.

    According to Fig. 14:

    * The dashed Green line is the M&W10 backcast for Northern Hemisphere land temperature that employs the first 10 principal components on a global basis.

    * The dashed Blue line is the backcast for NH land temperature that employs the first 5 PCs (global), as well as the first 5 local (5×5 grid) PCs.

    * The dashed Red line is the backcast for NH land temperatures that employs only the first PC (global).

    All three dashed lines appear to be based on the use of the entire data set used in Mann et al, (PNAS, 2008)–both Tree-Ring proxies and Non-Dendro proxies.

    This is an important consideration, as there seems to be general agreement that Mann08's Dendro-Including reconstructions are not grossly affected by the exclusion of the Tiljander proxies. This point was made in Mann08's Fig. S8a (all versions).

    However, there has been much contention over the extent to which NH land reconstructions restricted to non-dendro proxies are affected by the inclusion or exclusion of the Tiljander varve data. See, for example, the 6/16/10 Collide-a-scape thread The Main Hindrance to Dialogue (and Detente).

    Mann08 achieved prominence because of its novel findings: the claim of consistency among reconstructions based on Dendro proxies, and those based on Non-Dendro proxies. Mann08 also claimed that the use of Non-Dendro proxies could extend validated reconstructions far back in time. This is stated in Mann08's abstract (see also press release).

    Thus, the matters of interest would be addressed if the dashed and solid traces in this post’s “No Tiljander Proxies” figure were based only on the relevant data set: the Non-Dendro proxies.

    (The failure of Non-Dendro reconstructions to validate in early years in the absence of Tiljander was raised by Gavin in Comment #414 of "The Montford Delusion" thread (also see #s 483, 525, 529, and 531). While progress in that area would be welcome, it is probably difficult to accomplish with M&W10's Lasso method.)

    The first point of clarification is that the Tiljander proxies cannot be meaningfully calibrated to the instrumental temperature record, due to increasing influence of non-climate factors post-1720 (Discussion). This makes them unsuitable for use by the methods of Mann08 — and thus by the methods of M&W10 — which require the calibration of each proxy to the 1850-1995 temperature record.

    The second point of clarification concerns this phrasing in the post:

    People seem inordinately fond of obsessing over the Tiljander proxies (a set of four lake sediment records from Finland that have indications of non-climatic disturbances in recent centuries – two of which are used in M&W).

    The “four lake sediment records” used in Mann08 are “Darksum,” “Lightsum,” “XRD,” and “Thickness.” The authors of Tiljander et al (Boreas, 2003) did not ascribe meaning to “Thickness,” because they derived “Darksum” by subtracting “Lightsum” from “Thickness.” Thus, “Thickness” contains no information that is not already included in “Lightsum” and “Darksum.”

    In other words, there are effectively only three Tiljander proxies (Figure).

    [Response: We aren't going to go over your issues with Mann et al (2008) yet again - though it's worth pointing out that validation for the no-dendro/no-Tilj is quite sensitive to the required significance, for EIV NH Land+Ocean it goes back to 1500 for 95%, but 1300 for 94% and 1100 AD for 90% (see here). But you missed the point of the post above entirely. The point is not that M&W have the best method and its sensitivities need to be examined, but rather that it is very easy to edit the code and do whatever you like to understand their results better i.e. "doing it yourself". If you want a no-dendro/no-Tiljander reconstruction using their methodology, then go ahead and make it (it will take just a few minutes - I know, I timed it - but to help you along, you need to change the selection criteria in R_fig14 to be sel <- (allproxy1209info[,"StartYear"] <= 1000) & (allproxy1209info[,2] != 7500) & (allproxy1209info[,2] != 9000) (no_dendro) and change the line proxy <- proxy[,-c(87:88)] to proxy <- proxy[,-c(32:35)] (no_tilj)). Note that R_fig14 does not give any info about validation, so you are on your own there. The bottom line is that it still doesn't make much difference (except the 1PC OLS case, which doesn't seem very sensible either in concept or results anyway). - gavin]

    Comment by AMac — 21 Aug 2010 @ 9:59 PM
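
    For convenience, the two edits from the inline response above, collected with the assignment operators unmangled (per the response, the first is the no-dendro selection in M&W's R_fig14 script and the second drops the Tiljander columns):

      sel <- (allproxy1209info[, "StartYear"] <= 1000) &
             (allproxy1209info[, 2] != 7500) &
             (allproxy1209info[, 2] != 9000)   # no_dendro
      proxy <- proxy[, -c(32:35)]              # no_tilj (instead of -c(87:88))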

  43. BPL, I have a book called "El Nino: Storming Through History" which uses historical data going back several hundred years (can't remember the author off hand), but you might be able to extract some info about the SOI from his data. It won't be easy, but there is something to chew on in that book.

    Comment by Rattus Norvegicus — 21 Aug 2010 @ 10:37 PM

  44. “The foresters and arboriculturists are going to be really upset at this news–they are under the impression that tree growth somehow or other responds to temperature. Ain’t that nuts?–Jim”

    Wasn’t there a recent paper that showed crop yields went down with rising temperatures? Maybe precipitation is what has led us to the idea there is a correlation to temperature.

    [Response: Bill, there have been thousands of papers describing effects of T on plants over the years. The key point wrt tree rings is that sites are chosen to attempt to minimize the confounding of P with T, depending on which variable one is interested in (a lot of dendro sites focus on P, don't forget). If you went from 5 to 11 k ft in the Cascades, for example, say Mt Jefferson, you could find many sites on steep, S-facing aspects that respond primarily to P. As you ascended to near treeline, and avoided such exposures, you would increasingly sample stands that are limited primarily by T. You could also find places where both could be limiting, which would require a more complex model to explain the variation.--Jim]

    Comment by Bill Hunter — 21 Aug 2010 @ 10:58 PM

  45. BPL : “I need to simulate the SOI for another 100 years or so. I can’t relate the damn thing to anything, even by Fourier analysis (which I’m probably doing wrong). It just looks like a total random walk.

    How can I simulate the distribution, given the mean, standard deviation, kurtosis, and skewness? Is there a mathematical technique for doing so? This is advanced stats, so anyone who can help… Au secours! Au secours! Sauvez moi!”

    There are lots of mathematical techniques for doing this; the obvious first cut would be to compute the distribution function of the maximum entropy distribution with the specified moments, which is from an exponential family, so if all you wanted was to sample from a distribution with specified first four moments, there you are. One could (and many do) argue from information theoretic considerations that the maximum entropy distribution would be the only distribution to have the given moments which did not contain additional “information”.

    However I would think you want to sample from a space of realizations that are hard to distinguish from the actual ENSO process. That is, you want to choose lengths of history which have the desired marginal information (the moments you mentioned) but which also have the right distributions of time-separated observations. In other words, if you knew that the ENSO was really an ARMA(2,3) process with some heavy tailed innovations, then you would want to use realizations of that process with innovations chosen from the appropriate distribution.

    Now this is actually pretty hard to do if you cannot confidently write down the process which generated the ENSO data, which I suppose is the case. You don’t really know whether the process is this or that nonlinear differential equation, and you aren’t really sure what the distribution of forcing should be, etc. One assumes that you would like to believe things like inertial range turbulence (at least the 3-D stuff on small scales) tells you something about the things which will provide your innovations. However you’re not going to get a low dimensional process that way, and that is a bit of a problem.

    What is pretty straightforward is to model the data with a high dimensional linear model (ARMA, but not in those coordinates) and then apply model reduction, Bayesian estimation, and shrinkage. Your model will usually have innovations closer to Gaussian than your original data, which takes the pressure off fancy modeling of the distribution of innovations.

    You can also use this same machinery, but with the data transformed in various ways to include nonlinear aspects of the evolution. For example, instead of each observation x(t) being represented as just that, you can represent each observation as a vector, say v(t) = (x(t), x(t)^2, x(t)^3), and then a MIMO model for linear evolution of v(t) ends up capturing some nonlinear parts of the evolution. (You can check with the quadratic map y(t+1) = c y(t)(1 - y(t)) to see how this can work.) Now choosing such a nonlinear embedding is nontrivial, but there's a lot of stuff lying around the machine learning bag of tricks for doing that (look up Mercer's theorem, the "kernel trick", or, for the more mathematically adventurous, reproducing kernel Hilbert space methods in machine learning). And there are other approaches.

    Strangely enough though, there are numerical models for the Southern Oscillation lying around that could save you the trouble of modeling it yourself. For example my thesis adviser actually did this back in 1991 (http://www.atmos.berkeley.edu/~jchiang/Class/Fall08/Geog249/Week13/gv91.pdf, also http://www.environnement.ens.fr/perso/claessen/ateliers/SSA/biblio/Vautard_Ghil_92.pdf). Using one of these would save you the trouble of cooking up your own.

    Comment by Andrew — 21 Aug 2010 @ 11:36 PM
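
    A rough sketch of the fit-and-simulate route Andrew outlines, hedged heavily: the ARMA(2,3) order is arbitrary here, and the stand-in series below should be replaced by the actual monthly SOI values.

      soi <- as.numeric(arima.sim(list(ar = c(0.5, 0.2), ma = 0.3), n = 720))  # stand-in data
      fit <- arima(soi, order = c(2, 0, 3))                   # fit an ARMA(2,3)
      sim <- arima.sim(n = 1200,                              # ~100 years of months
                       model = list(ar = coef(fit)[1:2], ma = coef(fit)[3:5]),
                       sd = sqrt(fit$sigma2)) + coef(fit)["intercept"]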

  46. On a vaguely related note (statistics journals: the last refuge?), McKitrick’s rejoinder to AR4 on surface temps was recently published in the new Statistics, Politics and Policy. Which aspires to publish articles “in which good statistical analysis informs matters of public policy and politics” – or was it the other way around? McK’s arguments were briefly discussed here a few months ago. Any prospect of getting a comment published? Or would that be too much flogging of a dead horse, already?

    [Response: As I have stated before, these papers demonstrate only that if you are persistent enough you can get anything published - you just need to go further and further afield until you get reviewers who don't know enough about the domain to realise what's wrong. My enthusiasm for dealing with these issues is limited, but it is easy to see what is wrong with this latest variation. He has calculated the spatial pattern of the Arctic Oscillation, ENSO and PDO etc. and put those patterns in with the multiple linear regression. The patterns come from a reanalysis, not the surface temperature dataset, and so are mis-specified from the beginning (and he includes both the NAO and the AO, which is redundant). But the main problem is that these patterns do not have an unknown random effect on temperature patterns! On the contrary, they are known very well, and the impact of each phenomenon on the 1979-2002 trends in CRU can be calculated precisely. If you really wanted to correct his method, you would have calculated a new set of trends corrected for the indices and then done the regression. He did not, though it might be fun for someone else to.

    However this is just a diversion. In his other 'new' paper with Nierenberg, he slides in an admission that, yes, indeed, there is spatial correlation of the residuals in the MM07 model (duh!), and since he knows that individual realisations of the GISS model come up with fits to the 'socio-economic' variables as good as he has calculated in the real world, the practical significance of these continued variations is zero. (Note that even though he has calculated the results using individual runs, he has not reported them in his papers, which only include the ensemble mean results and where he bizarrely claims they have the same spatial characteristics as individual runs - they do not). He is also still claiming that de Laat and Maurelis (2006) supports his results when they do not (recall they found a pattern in the satellite trends as well!). So, it is unclear what further repetition of these points will really serve since he clearly is not interested in getting it right. - gavin]

    Comment by CM — 22 Aug 2010 @ 12:56 AM

  47. #41 Martin Vermeer

    I imagine that they'd have to compute the results before they could check them; one can't check results that haven't been computed yet. Don't get me wrong, I'm not well versed enough in this kind of statistical analysis to say whether the methodologies were proper or not. I'll wait for the publishing and read the back and forth before I'll be able to make a determination. What I'm saying is that to read some papers, one has to view statements contextually. As pointed out earlier, given the tone and tenor of the background and the conclusions, and given that the 80% comment is in the same paragraph where they state, "proxies cannot capture sharp run-ups….", I would find it very odd that they are asserting the 80% probability. I'm not trying to rain on anyone's parade, I'm just pointing out that there may be a different interpretation of that particular statement.

    Comment by suyts — 22 Aug 2010 @ 2:31 AM

  48. 40 apeescape: Thanks for the interesting URL.

    Comment by Edward Greisch — 22 Aug 2010 @ 2:58 AM

  49. tom s — 21 August 2010 @ 10:27 AM
    “…in [the] GISS-USA-temp-anom-adjusted-for-UHI data set, why [is] the early part of the century down and the latter portion up? [In] correcting for UHI shouldn’t it be the other way around?”

    If the moderators will permit a partial repost of a comment I made in another thread – somewhat relevant to “Doing it yourselves” climatology -
    Menne et al http://www1.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/menne-etal2010.pdf, following the pioneering work done by Anthony Watts and his surfacestations.org volunteers, found “…no evidence that the CONUS average temperature trends are inflated due to poor station siting.” On the contrary, they found that the corrections made to the data resulted in “… bias in unadjusted maximum temperature data from poor exposure sites relative to good exposure sites is, on average, negative while the bias in minimum temperatures is positive (though smaller in magnitude than the negative bias in maximum temperatures).”

    The basics of anthropogenic global warming are fairly straightforward – CO2 is a greenhouse gas; because of the lapse rate, water vapor condenses or freezes out in the troposphere and acts mainly to amplify the effect of CO2; humans are burning a lot of fossil C and increasing the CO2 in the atmosphere; the surface of the earth is warming; the cryosphere is retreating; the climate that supports civilization is rapidly changing; and consequently we are facing an uncertain future – but the details are complex, and it's easy to "misunderestimate" the way climate works in detail.

    For instance, the reradiation of IR from GHGs in the atmosphere back to the surface heats the surface; because of the density gradient in the atmosphere, and the lapse rate effect on water vapor, most of the action takes place near the surface. If the surface warms by 1 degree, then a constant adiabatic lapse rate would mean every altitude would warm by 1 degree. But a rising parcel of air doesn't just cool by adiabatic expansion, but also by radiation, and more CO2 will increase the radiation, and increase the environmental lapse rate. But the radiative cooling is time dependent, and a steeper lapse rate will increase convection and decrease the time over which a rising parcel can radiate heat away, increasing the relative amount of adiabatic versus radiative cooling. Increased temperature will increase the absolute humidity according to the Clausius-Clapeyron equation; a larger amount of water vapor will decrease the density of air, all else being equal, which will increase convection and the relative amount of adiabatic versus radiative cooling. But the additional water vapor will radiate heat away more quickly, having the opposite effect; however, when the temperature drops to where the water begins condensing, the latent heat released will decrease the lapse rate to the moist adiabatic lapse rate. Because of the different intermolecular forces between water molecules as vapor in air, water, and ice, the wavelengths of emission and absorption are shifted; some of the radiation from the water/ice droplets at the top of a cloud can escape to space because the atmosphere above it is transparent at its wavelengths, whereas the same radiation from droplets at the bottom of a cloud will be absorbed and re-emitted in random directions by the droplets above, including back down to the originating droplets.

    Trying to figure out how all this “should” work in your head will make smoke come out your ears; one needs to read a lot of scientific literature, and do a lot of analysis assuming one has the knowledge. Most people don’t have the time or the math chops. Arguing “I think the climate scientists did it wrong” when one doesn’t know what the scientists did or how to do it is silly. Using different statistical methods and getting a different answer is hard, but usually the difference is a matter of accuracy, not basic principles – Idso shows a lower sensitivity, less temperature increase for a doubling of CO2, but still an increase. No statistical analysis of the proxy record will ever disprove any of the basics I stated above.

    Figuring out which effects are dominant, and quantifying how large they are is important, but the current level of inaccuracy in our understanding of these processes is insufficient to invalidate the basics of AGW. Even prominent skeptics like Monckton, Lindzen, Spencer, Idso, McKitrick and Michaels accept the basics, and I suspect Watts, Goddard, and maybe even Morano do as well.

    Comment by Brian Dodge — 22 Aug 2010 @ 4:38 AM
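
    For the Clausius-Clapeyron step in Brian's chain of reasoning, a back-of-envelope check using Bolton's (1980) empirical fit for saturation vapor pressure (the function below is that approximation, nothing from a climate model):

      es <- function(T) 611.2 * exp(17.67 * (T - 273.15) / (T - 29.65))  # Pa, T in kelvin
      (es(288) / es(287) - 1) * 100   # ~6-7% more saturation vapor pressure per extra kelvin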

  50. R is not that hard to learn. I use it exclusively nowadays. If you understand C programming then you should not find R too difficult. Hey, I learnt R without a C background. Barton, I think that you are really wasting your time developing your own statistical package when R will do it for you.

    Comment by Richard Steckis — 22 Aug 2010 @ 6:01 AM

  51. Watts, true to form, left the 80% business out of his selected quotes, so I served notice of the omission to WUWT's long-suffering readers last week.

    Too bad the proxies remain as underwhelming as the funding available for augmenting them.

    Comment by Russell Seitz — 22 Aug 2010 @ 7:46 AM

  52. Gavin, that was clear, thanks. No, nothing McKitrick's been up to makes me think that he's driven by a wish to get it right. But the journal and its readers might want to, so hopefully someone will take you up on the tip on how to correct him.

    Comment by CM — 22 Aug 2010 @ 7:53 AM

  53. Anders M #14: upon further consideration I don’t think this idea is valid as I described it there. I was too quick. I now rather tend to think that it also has to do with their choice of the number (10) of PCs to include.

    There is some discussion of this at DeepClimate.

    Comment by Martin Vermeer — 22 Aug 2010 @ 9:28 AM

  54. I think that "Statistics, Politics and Policy" is the journal that Wegman started up recently. If this is the case, I'm not sure the journal is interested in getting it right.

    Comment by Rattus Norvegicus — 22 Aug 2010 @ 9:59 AM

  55. Um.

    “… the current level of inaccuracy in our understanding of these processes is insufficient to invalidate the basics of AGW …”

    Help needed with that somehow.

    Comment by Hank Roberts — 22 Aug 2010 @ 10:26 AM

  56. 54 (Hank Roberts),

    Um.

    “… the current level of inaccuracy in our understanding of these processes is insufficient to invalidate the basics of AGW …”

    Help needed with that somehow.

    Come on, you can’t not misunderstand what wasn’t unsaid, nor doubly and negatively implied, by the reverse of the opposite of that contrariwise statement.

    [Response: :)]

    Comment by Bob (Sphaerica) — 22 Aug 2010 @ 11:12 AM

  57. Hank, I'm not entirely sure, but that might have something to do with the flux capacitor not being properly aligned with the quantum wave generator?

    Sorry, I’m not sure. Maybe, someone else has a clue?

    Comment by John P. Reisman (OSS Foundation) — 22 Aug 2010 @ 11:39 AM

  58. suyts,

    They calculate Bayesian probabilities in their model which enable them to construct probabilistic interpretation of their results. I think the main reason they present the 80% result was to illustrate this feature in their model. They also calculate this result:

    … we estimate a zero posterior probability that the past thousand years contained run-ups larger than those we have experienced over the past ten, thirty, and sixty years (again, the largest such run-ups on record). This suggests that the temperature derivatives encountered over recent history are unprecedented in the millennium.

    They then "emphatically" minimize the significance of this particular result because the model could not capture the recent upswing in temp. They therefore argue that the 80% result (which goes over the whole reconstruction) may be a bit too high because the model won't detect the MWP (kinda fluffy logic).

    So there are two results: 80% and the 0%. The latter of which they think is unrealistic, and the former they think is a tad charitable… At least that’s my interpretation.

    Also, I don't think articulating uncertainties (which is essentially what M&W2010 are doing) is a "worthless result" — that's basically a statistician's job. But I would have liked to see some more forward thinking in producing models with a greater signal instead of just giving a warning. Of course that would require collaboration with physicists!

    Comment by apeescape — 22 Aug 2010 @ 11:43 AM

  59. Bill Hunter

    Wasn’t there a recent paper that showed crop yields went down with rising temperatures? Maybe precipitation is what has led us to the idea there is a correlation to temperature.

    Plant physiologists aren’t as naive or ignorant as you assume. There’s a reason why tree proxies for paleoclimatology are chosen from regions near a species’ altitudinal or latitudinal range limit, where precipitation is KNOWN TO BE MORE THAN ADEQUATE to support healthy growth.

    Comment by dhogaza — 22 Aug 2010 @ 12:16 PM

  60. 57 apeescape says:

    Thank you for better articulating what I've been trying to state. The 80% quote has to be taken in context of their statements.

    Comment by suyts — 22 Aug 2010 @ 12:49 PM

  61. Thanks, Rattus, and everyone who offered help. I appreciate it.

    Comment by Barton Paul Levenson — 22 Aug 2010 @ 2:35 PM

  62. Edward (17):

    R is an extremely powerful tool–I would definitely install it and begin to play around with it (if you have time). You can learn a lot that way, and it comes with a set of manuals, one of which is an introduction to R. If you’re doing basic stuff it’s not so bad. Problems arise when you try to program something complex without sufficient background in the various exceptions and ins and outs of the language. Frankly, R exacerbates this issue with some vagaries that will absolutely make you tear your hair out and which make no obvious sense (I can give numerous examples from recent experience). Because the help menus are generally poor/cryptic, you almost have to buy at least one or two books. I highly recommend Joseph Adler’s “R in a Nutshell”–very helpful, a very good blend of statistics and programming how to.

    The USGS offers an online course yearly, starting soon and I believe they archive past years’ sessions. It’s geared more toward standard statistical analysis using existing R packages, not programming. There are others too. I think the best way though is to experiment on your own, with a good book.

    http://www.fort.usgs.gov/brdscience/LearnR.htm
    http://www.stats.ox.ac.uk/~ruth/RCourse/

    Comment by Jim — 22 Aug 2010 @ 2:42 PM
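
    For anyone following Jim's advice, a first session in the R console might look like this (all base R, no packages needed):

      x <- rnorm(100)       # 100 draws from a standard normal
      summary(x)            # five-number summary plus the mean
      hist(x)               # quick histogram
      help.start()          # opens the bundled manuals Jim mentions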

  63. On my blog I am trying to describe the climate change debate in Russia as it relates to the wildfires.

    http://legendofpineridge.blogspot.com/2010/08/ria-novosti-explores-possible-causes-of.html

    I just have a question.

    An RIA Novosti article quotes a prominent Russian scientist, Roman Vilfand, the head of the Russian state meteorological center. This may be the agency known in Russia as the Federal Service for Hydrometeorology and Environmental Monitoring or ROSHYDROMET ("Росгидромет"), or it may be a different agency. Dr. Vilfand is described by RIA Novosti (8-13-10) as "the head of the Russian state meteorological center" and by Business News (8-16-10) as "previously the head of Russia for Hydrometeorology."

    Comment by Snapple — 22 Aug 2010 @ 3:11 PM

  64. Snapple, have you read the NOAA report about the Russian heatwave?

    http://www.esrl.noaa.gov/psd/csi/moscow2010/

    Comment by Warmcast — 22 Aug 2010 @ 3:54 PM

  65. Dear Warmcast,

    I had not read this authoritative report and will link it on my post. They say the heatwave was due to blocking, as do the articles I cited. Thanks.

    Comment by Snapple — 22 Aug 2010 @ 5:23 PM

  66. Regarding http://www.esrl.noaa.gov/psd/csi/moscow2010/

    “The current heat wave is therefore all the more remarkable coming on the heals of such extreme cold.”

    I think the author meant “heels”….

    Stuff like that drives me buggy!! How can I take the study & data seriously with such stupid mistakes?

    Comment by CStack — 22 Aug 2010 @ 11:50 PM

  67. With respect to BPL’s frustration starting with R, and others:

    For those new to R, its power is in the vast array of 3rd party packages – libraries, if you like. R itself is a fairly small language, one which provides real/complex numerics; vector, matrix, array and list entities; more statistically oriented data frames; and a fair bit more. The real meat is in the packages provided at the CRAN site (Comprehensive R Archive Network); at last count there are 2473 packages available, covering every contingency I should expect.
    Tutorial and primer style documents, provided by third parties and also by some of the R team, may be found here. Plenty to look at and work through – for solid coverage of not just R, but also a suitable Windows editor that can work with R (i.e. hooks up with R to run your edited scripts etc.), and quite a few environment-handling issues not well covered elsewhere, check out Petra Kuhnert and Bill Venables’ notes here.
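    As a hedged illustration of the package workflow (glmnet is just one CRAN package, picked because the lasso comes up in this thread; any other installs the same way):

      # fetch a package from a CRAN mirror once, then load it per session
      install.packages("glmnet")       # one-time download and install
      library(glmnet)                  # attach it to the search path
      help(package = "glmnet")         # list its functions and documentation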

    Comment by Donald Oats — 23 Aug 2010 @ 12:04 AM

  68. 62 Jim Bouldin: Thanks

    Comment by Edward Greisch — 23 Aug 2010 @ 2:46 AM

  69. Bob (Sphaerica)@56

    Yeah, yeah.

    Comment by Nick Gotts — 23 Aug 2010 @ 5:19 AM

  70. #63 Snapple

    I have a summary report that addresses recent events including work from Jim Hansen, the NOAA/ESRL report, NCDC, the OMM/WMO (World Meteorological Organization) and some news highlights

    http://www.ossfoundation.us/projects/environment/global-warming/summary-docs/leading-edge/2010/aug-the-leading-edge

    I continue to update the page as the month progresses though.



    Comment by John P. Reisman (OSS Foundation) — 23 Aug 2010 @ 5:59 AM

  71. A bit OT but meanwhile in Virginia…
    http://www.newsplex.com/vastatenews/headlines/101155564.html

    After a hearing Friday afternoon in Albemarle County, Judge Paul Peatross says he will have a ruling within ten days on Virginia Attorney General Ken Cuccinelli’s demand for records related to former University of Virginia professor Michael Mann’s research.

    Isn’t it time for someone in the climate science community to do something like this?

    http://rationalwiki.org/wiki/Lenski_affair

    The Lenski affair was a poorly conceived stunt by Andrew Schlafly to denigrate the groundbreaking research of National Academy of Sciences member Richard Lenski, in which Lenski and his student Zachary Blount actually observed evolution happening. Schlafly’s stunt backfired completely and led to one of the best responses to creationism to date. It is now one of the most famous incidents in creation/evolution circles on the Internet.

    The fame of “the affair” left the safe confines of the Internet in September 2009 when it received a brief mention in Richard Dawkins’s book The Greatest Show on Earth following a longer discussion of the results themselves.[1]

    The actual correspondence is posted at the rationalwiki.org link I provided; I believe it is a must-read and could serve as a template for anyone in the science-based community who has to deal with attacks against their own integrity and honesty, regardless of their area of expertise.

    Comment by Fred Magyar — 23 Aug 2010 @ 8:20 AM

  72. Nice to see you guys bringing this up for comment.

    I read the paper, and will commend the authors on writing the paper so it can be easily understood. Many academic papers are indecipherable when they don’t need to be. The target audience is typically other academics though.

    The central point of the paper is that the proxies are simply not good enough to draw any reasonable conclusions from. Anybody can confirm this intuitively by examining the graphs of the individual proxies; they are a mess.

    I thought they made a good point on how the confidence of the reconstruction was overestimated due to the method of verification. Using start and end blocks allowed the verification to know the starting and ending points, and it filled in the rest. This gave it an “unfair” advantage in reconstruction ability. When given only a starting point, it fared worse.

    The use of a smarter null model, one that tracks local temperatures, but has no long term trending skill, was also enlightening. This null model performed as well as the constructed model, which indicates backcast skill of the constructed model was questionable.

    Probably the most important point which is glossed over in most reviews is that different models that have almost identical verification scores have wildly different backcasts. Who’s to say which model is really best?

    [Response: Read the paper more carefully. They only tested 'lasso' for verification scores, not any of the other models, and not any model that other groups have used. You can test which methods work best, but they did not do these tests. One must be careful therefore not to over-interpret. - gavin]

    Arguing over the minutiae of the statistics isn’t very useful; trees just aren’t very good thermometers.

    [Response: Tell us another one. Better yet, explain why the rings of the trees in a stand very often go in the same direction wrt ring width, from year to year, far beyond what could be explained by chance. When you're done with that, we'll start into the experimental forestry and tree physiology literature--Jim]

    I think climate science would be wise to not engage in a math war against the professional statisticians in this case.

    [Response: ...or some statisticians (and posters) who might bother to consider biological reality before getting lost in numbers.--Jim]

    It’s not really very relevant when the main concern is where the climate is going, not where it has been.

    Comment by Tom Scharf — 23 Aug 2010 @ 8:40 AM

  73. #66 Cstack

    You apparently can’t take it seriously because you are too busy looking at the molehill to see the mountain.

    It is a serious problem in some people’s brains though. I suppose if you found a comma out of place in a version of Mark Twain’s Huckleberry Finn, you would conclude that Twain’s work is not that good because some editor missed that comma.



    Comment by John P. Reisman (OSS Foundation) — 23 Aug 2010 @ 8:51 AM

  74. Second estimate: 2051 is the year the fraction of Earth’s land surface in severe drought hits 70%, and global human agriculture collapses. My previous estimate was 2037, so we’re a little better off.

    I’ve written up the statistical analysis as an article. I’ve asked Tamino to check the statistical work; waiting for his reply. Would any pro climatologists be willing to look at the article before I submit it? I desperately want this to pass peer review.

    And if I’m right, the word has to get out. Soon. Please help.

    Comment by Barton Paul Levenson — 23 Aug 2010 @ 9:03 AM

  75. “[Response: Read the paper more carefully. They only tested 'lasso' for verification scores, not any of the other models, and not any model that other groups have used. You can test which methods work best, but they did not do these tests. One must be careful therefore not to over-interpret. - gavin]”

    This is true. But it does look like they did test verification scores for different models in Fig 9, in which the lasso scored similarly to the PC and other methods. They did not show the reconstructions of all the methods.

    I just take from it that there are some statistical methods which generate a hockey stick, and there are some methods that don’t generate a hockey stick, and the methods are roughly equal from a purely statistical point of view.

    It may be that climate science has some proprietary “secret sauce” that shows the hockey stick method clearly superior (proxy selection, etc.), I’m just not convinced of that. Of course I am equally subject to confirmation bias as everyone else.

    [Response: There are ways to test whether methods work by using synthetic data derived from long model runs. These are obviously less complex or noisy than the real world, but if a method doesn't work for that, it is unlikely to do so in the real world. As I said, these tests were not done in M&W. - gavin]
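    (A toy version of the test gavin describes might look like the following R sketch. Everything is invented – a random walk stands in for the model run, and the noise levels are arbitrary – but it shows the shape of a pseudoproxy experiment: build fake proxies from a known truth, calibrate on the recent segment only, then score the reconstruction where the answer is known.)

      set.seed(7)
      Tmod    <- cumsum(rnorm(1000, sd = 0.03))             # stand-in "model" temperature
      proxies <- replicate(30, Tmod + rnorm(1000, sd = 1))  # 30 noisy pseudoproxies
      calib   <- 901:1000                                   # the "instrumental" era
      fit     <- lm(Tmod[calib] ~ proxies[calib, ])         # calibrate on that era only
      recon   <- cbind(1, proxies) %*% coef(fit)            # apply the weights everywhere
      cor(recon[1:900], Tmod[1:900])                        # skill where truth is known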

    Comment by Tom Scharf — 23 Aug 2010 @ 9:35 AM

    Tom Scharf says: “I thought they made a good point on how the confidence of the reconstruction was overestimated due to the method of verification. Using start and end blocks allowed the verification to know the starting and ending points, and it filled in the rest. This gave it an “unfair” advantage in reconstruction ability. When given only a starting point, it fared worse.”

    I’m not sure I understand this. To me it looks like it is their method (M&W’s) that consists in giving the model the starting bit and the ending bit, and seeing how well it does at guessing the (small) bit in the middle.

    By contrast, I was under the impression (perhaps misguided) that usual validation methods in climate science involve giving one end of the data to the model, and see how well it matches the (unseen) other end, without "showing" it the other end point.
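    (To make the contrast concrete, a toy R sketch with invented data – a random walk scored two ways, once with both endpoints of the held-out block shown, once with only one side:)

      set.seed(8)
      x <- cumsum(rnorm(150))                 # a slowly wandering series
      interp <- 51:100                        # hide a middle block
      guess_mid <- approx(c(50, 101), x[c(50, 101)], xout = interp)$y
      guess_end <- rep(x[51], 50)             # hide one end: persistence from the edge
      mean((guess_mid - x[interp])^2)         # error when both endpoints are known
      mean((guess_end - x[1:50])^2)           # error when only one side is known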

    Or have I, hopefully, got that wrong?

    Comment by toto — 23 Aug 2010 @ 10:45 AM

  77. I’ll be the first to admit my knowledge of statistics is minimal (OK almost 0), but my knowledge of logic and my knowledge of programming give me a few insights.

    So when I read:

    “Second, the blue curve closely matches the red curve from 1850 AD to 1998 AD because it has been calibrated to the instrumental period which has served as training data”

    Now logic, to me anyway, states that you must be able to match the proxies to the instrument record. Otherwise you don’t have the slightest clue what the proxies actually represent.

    So do they, or don’t they?

    And if they do, why does this paper attempt to dismiss that correlation? Seeing as it’s vital to the whole exercise.

    Then, secondly, the paper gets a firm thumbs down from me the second it tries to predict future change by extending a hindcast forward into the future.

    My logic says that if we can determine the impact of climate on the proxies, we can reconstruct the past climate from the proxies (within the limits of known error). However, unless we know significantly more about climate triggers and include them in the statistical model, we cannot predict anything in the future, because we would need to know much, much more about the changes in climate change triggers (CO2, methane, solar cycle etc.) in order to be able to do so.
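    (That calibrate-in-the-overlap, reconstruct-outside-it logic can be sketched in a few lines of R; all the data here are invented toys:)

      set.seed(5)
      truth <- cumsum(rnorm(300, sd = 0.05))     # "unknown" past temperature
      proxy <- 2 * truth + rnorm(300, sd = 0.5)  # a proxy that responds to it
      cal   <- lm(T ~ P, data = data.frame(T = truth[151:300], P = proxy[151:300]))
      recon <- predict(cal, newdata = data.frame(P = proxy))  # hindcast all 300 steps
      cor(recon[1:150], truth[1:150])            # skill outside the calibration window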

    As soon as a paper does that, to my logic, its credibility goes down the pan.

    But then I’m neither a statistician nor a climate scientist…

    But their spectacular failure to predict any future change, imho, does not mean the data is rubbish. It means the climate is different to any in the last 1,000 years.

    Or did I miss something?

    Comment by NeilT — 23 Aug 2010 @ 11:14 AM

    Here is Wegman’s DIY review of his new DIY statistics journal. I am deeply (at least three sigma) shocked that it accepts submissions by invitation only.

    WIREs is a WINNER
    Edward J. Wegman, Yasmin H. Said, David W. Scott
    Abstract: http://onlinelibrary.wiley.com/doi/10.1002/wics.85/full

    “A group of us met with the editorial management of John Wiley and Sons… our new journal was launched officially in July-August 2009 titled as Wiley Interdisciplinary Reviews: Computational Statistics. … a hybrid review publication that is by invitation only, “

    The success of PR focus groups, left and right, in powering up new vanity presses to dumb down climate science recalls what Auden wrote of the power of comic rhyme: “instead of an event requiring words to describe it, words had the power to create an event”. Pop science becomes self-parody when words take over and leave sense behind.

    Comment by Russell Seitz — 23 Aug 2010 @ 12:28 PM

  79. Tom Scharf #75

    This is true. But it does look like they did test verification scores for different models in Fig 9, in which the lasso scored similarly to the PC and other methods. They did not show the reconstructions of all the methods.

    Surely you mean Figure 12? I don’t agree — the two Lasso versions score clearly poorer than even ten PCs, which arguably isn’t even the best among the OLS solutions. And this presupposes that the way the authors do cross-validation is proper and relevant for testing the suitability of a reconstruction technique for what it is actually used for. As Gavin says, there are more definitive ways of doing that, and they weren’t used. And plotting the spread of the reconstructions by a couple dozen not-quite-optimal (by a questionable metric) alternative techniques, and referring to that as the uncertainty of reconstruction, is surprisingly fuzzy thinking for statisticians.

    BTW don’t trust your intuition too much. Only scientifically trained intuition works… sometimes :-)

    Comment by Martin Vermeer — 23 Aug 2010 @ 2:19 PM

  80. re: #78 Russell Seitz

    Oh, that is not the only strange thing about that journal.
    See its Editorial Board, which claims (noticed by Deep Climate a while back):

    “Edward J. Wegman, Bernard J. Dunn Professor of Data Sciences and Applied Statistics, George Mason University
    Yasmin H. Said, Professor, Oklahoma State University, Ruth L. Kirschstein National Fellow, George Mason University (this is very strange)
    David W. Scott, Noah Harding Professor of Statistics, Rice University”

    The OSU catalog (http://www.okstate.edu/registrar/Catalogs/E-Catalog/2009-2010/Faculty.html) and the associated PDF, created 08/05/09, both list Yasmin H. Said as an Assistant Professor in Statistics, still there 08/12/10, but not at the OSU statistics department (http://statistics.okstate.edu/people/faculty.htm).
    =======
    A. In any case, back to the original topic. There is a fascinating chain of connections that may illuminate the provenance of this strange paper published by statisticians with no obvious prior experience in this turf. This chain includes:
    McShane (now in Marketing @ Northwestern) and Wyner (Wharton, Statistics) … which references a paper by:

    J. Scott Armstrong, Wharton Marketing … in same building
    A regular speaker @ Heartland conferences and founder/cofounder of the Intl. J. of Forecasting, of which one of the Associate Editors is:

    Bruce McCullough, Drexel (~ 1 mile away)
    http://www.pages.drexel.edu/~bdm25/
    “Associate Editor, International Journal of Forecasting (1999 – now)
    and he is also:
    Associate Editor, Computational Statistics and Data Analysis (2003 – now)

    of which WEGMAN has been advisor and frequent author since 1986.

    But that leads back to an odd sequence around the Wegman Report:
    Anderson, Richard G., Greene, William H., McCullough, Bruce D., and Vinod, H. D. (2005) “The role of data and program code archives in the future of economic research,” Federal Reserve Bank of St. Louis, Working Paper 2005-014B.
    http://research.stlouisfed.org/wp/2005/2005-014.pdf
    http://climateaudit.org/2005/04/22/anderson-et-al-2005-on-replication

    The paper is about economics, with the *lead author* at the St. Louis Federal Reserve (i.e., tax funds are paying for this), and a strange footnote that whacks MBH and praises MM, with 6 citations about paleo, and the bibliography includes an uncited reference to a “mimeo” from McKitrick.
    After this was published, McIntyre referenced it.
    I.e., this is “meme-laundering”: McK gives this to one of the authors, most likely McCullough. Now it’s in a more credible place.

    Then, the WR cites that, a second wash. Then:

    2008
    Richard Anderson, William H. Greene, B. D. MCCULLOUGH and H. D. Vinod “The Role of Data/Code Archives in the Future of Economic Research”
    Journal of Economic Methodology 15(1), 99-119, 2008 http://www.pages.drexel.edu/~bdm25/agmv.pdf
    This includes all 7 paleoclimate references, including the mimeo!
    This looks like the third wash cycle, finally in a peer-reviewed journal, presumably.

    2009
    B. D. MCCULLOUGH and Ross R. MCKITRICK
    Check the Numbers: The Case for Due Diligence in Policy Formation The Fraser Institute, February http://www.pages.drexel.edu/~bdm25/DueDiligence.pdf
    This includes a cornucopia of climate anti-science references, via thinktank Fraser [MAS2010].
    The mimeo reference finally disappeared.

    I haven’t yet untangled all the relationships through the journals …

    but the way I’d put it is that there is a TINY handful of (economists, marketing folks, and statisticians) who:
    a) Often start DIY journals of one sort or another
    b) Have strong dedication to obscuring inconvenient climate science, often publishing such in places unlikely to face credible peer review.

    Of course, then Wegman and Said organized 2 sessions at Interface 2010 (statistics conference), inviting Fred Singer, Jeff Kueter (GMI) and Don “imminent global cooling.” Said commented strongly on the awful behavior of climate scientists.

    Comment by John Mashey — 23 Aug 2010 @ 2:20 PM

  81. Re: #43 Rattus Norveigicus
    The book title is “El Nino in History: Storming through the Ages” by
    César N. Caviedes. Gainesville : University Press of Florida, c2001
    ISBN 0813020999; LC catalog # C296.8 .E4 C39 2001X

    http://www.amazon.com/El-Nino-History-Storming-Through/dp/0813020999/ref=sr_1_5?ie=UTF8&s=books&qid=1282592961&sr=8-5

    This sounds interesting – first I’d heard of it. Caviedes works in historical geography at UFL. The Amazon.com writeup mentions another “magisterial” work in this genre:

    Late Victorian Holocausts: El Niño Famines and the Making of the Third World by Mike Davis, 2002. ISBN 978-1859843826

    http://www.amazon.com/Late-Victorian-Holocausts-Famines-Making/dp/1859843824/ref=sr_1_1?s=books&ie=UTF8&qid=1282593147&sr=1-1

    The latter author is described in one review as Marxist; in any event it’s clear this book is sharply critical of European colonialism, attributing massive famines to its influence (set in a context of El-Nino-driven shifts in regional climate).

    Comment by Jim Prall — 23 Aug 2010 @ 2:57 PM

  82. Homer-Dixon on the need to prepare for possible climate shocks: http://nyti.ms/HDshoc

    Comment by Kees van der Leun — 23 Aug 2010 @ 4:20 PM

  83. The Caviedes book is quite good. I’ve had it in my library for several years and it provides quite a bit of interesting information including such gems as historical shipwreck counts in areas prone to El Nino storminess. Really interesting stuff.

    Comment by Rattus Norvegicus — 23 Aug 2010 @ 4:24 PM

  84. Why highlight such garbage as that statistical analysis paper? They fail the basic mechanistic test, that of applicability of their datasets to the questions they ask.

    For example, consider regional reconstructions of temperature based on lake sediment analysis in areas that were once covered by glaciers. Who doubts that periods of glaciation and persistent ice cover could be distinguished from periods of biological productivity and high rainfall, or periods of drought?

    So, if you look specifically at lake sediment profiles across a wide range, and correlate them to one another, you can start to understand how the deglaciation since the last glacial maximum progressed.

    Trying to shoehorn all the proxies into one dataset and then make claims about a single variable, the “global average surface temperature” is nonsensical.

    Each type of proxy has different rules and tells you different things, some very qualitative, some very quantitative. You can’t lump them all together in this manner – well, you can, using a statistical formula, but that’s just gibberish masquerading as science.

    That argument applies to the initial hockey stick graph, too – a more convincing argument might be a video reconstruction of global ice extent over the last 50,000 years. However, if you used that ice extent in an effort to get temperature estimates, resulting in a “hockey stick” graph, you’d be attacked by statisticians out to respin the numbers.

    What the statistician will not do, however, is attempt to question the global ice coverage map c. 5000 ya, for example – that’s far too specific for a general smear effort.

    Comment by Ike Solem — 23 Aug 2010 @ 6:16 PM

  85. Do it yourself reply to http://dotearth.blogs.nytimes.com/2010/08/23/study-finds-no-link-tying-disaster-losses-to-human-driven-warming/
    “Study Finds No Link Tying Disaster Losses to Human-Driven Warming”
    AMERICAN METEOROLOGICAL SOCIETY
    Laurens M. Bouwer
    I think RealClimate debunked this kind of thing some time ago. RealClimate search gives me too many articles. I don’t see the right one, do you? Dotearth needs some expert commenting.

    Comment by Edward Greisch — 23 Aug 2010 @ 8:10 PM

  86. suyts #47, apeescape #58, I read it a bit differently. Their remark

    insofar as the proxies cannot capture sharp run-ups,

    to me very much reads “the proxies are worthless”.

    What is worse is that the above implication is just factually wrong. This “loss of sensitivity” meme is widespread, but consider that, even over the late 20th C divergence period, the response of (specifically) tree rings to year-to-year temperature variations remains full strength, without any “divergence”… loss of sensitivity just cannot explain this.

    The above remark would never have made it past domain-savvy reviewers. There is an extended history to this subject, see, e.g.,
    http://www.cce-review.org/evidence/Climatic_Research_Unit.pdf

    that M&W seem blissfully unaware of.

    Of course it is unsatisfactory that we don’t know the real reason for the divergence phenomenon, although progress has been made. But in research like this you do the best you can, based on the knowledge you have, noting the caveats. The quoted remark is much more than a caveat, and IMHO (as also a non-expert, but somewhat familiar with backgrounds) inappropriate.

    Comment by Martin Vermeer — 24 Aug 2010 @ 1:02 AM

  87. Ike Solem: “Each type of proxy has different rules and tells you different things, some very qualitative, some very quantitative. You can’t lump them all together in this manner – well, you can, using a statistical formula, but that’s just gibberish masquerading as science.”

    Well you can, actually, but it’s harder than it looks to these authors. You gave part of the reason in your post – that these many proxies really bear witness to lots of things, and the model was fit in a way which didn’t adequately reduce the scope of the sort of explanation which was sought – just the biggest explanation.

    What makes things a little worse is that knowing how the proxies relate to lots of things seems to make you think that they should be sorted out intelligently. This makes sense if there is so much information that you are really only using light duty statistics. When you are up against really difficult problems with high dimensionality and noise, then the “meaning” of the various sources of information falls by the wayside, and what matters most is how all the data can be “holographically” combined to maximize the flow of information from all the observations to the inference being attempted.

    This may sound a little like hand waving, but we can give the classic example from meteorology – if you have two bad thermometers in a sealed room with a fan and you want to know the history of the room temperature from the sequences of thermometer readings, one strategy is to decide which thermometer was the good one and ignore the other. Another strategy is to average the two thermometers. Depending on how accurate the two thermometers are, either one of these strategies could be better than the other. But if you do the simple calculus problem based on mean square error, you find that the optimal combination of the two thermometers is their average weighted by their “accuracies” (inverse variances). So if the two thermometers are equally accurate, the simple mean of the two is best. However, (and this is my point), it is NEVER the best combination to ignore one of the two thermometers unless it has unbounded variance. That is not really that likely.
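    (That claim is easy to check numerically. A hedged toy in R, with two simulated thermometers of known error variance:)

      set.seed(6)
      truth <- rnorm(10000)
      t1 <- truth + rnorm(10000, sd = 1)     # the better thermometer
      t2 <- truth + rnorm(10000, sd = 3)     # the worse thermometer
      w  <- (1/1^2) / (1/1^2 + 1/3^2)        # inverse-variance weight on t1 (= 0.9)
      mean((t1 - truth)^2)                   # ignore the worse one: ~1.0
      mean(((t1 + t2)/2 - truth)^2)          # plain average: ~2.5, worse still
      mean((w*t1 + (1 - w)*t2 - truth)^2)    # weighted: ~0.9, best of the three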

    Now if the fan doesn’t mix the air too well, you have to take into account the cross-correlation of the errors, but this is easy enough.

    Now one can also take into account that one might not really care about the overall mean temperature of the room, one might care about how long it will take paint to dry on the walls, which is a somewhat different question. One might find that for this purpose, a different combination of the thermometers is best than the best combination for the overall average. And if one wanted to know whether the paint was drying uniformly, then yet another combination might be best.

    This argument extends both to multiple thermometers, or multiple weather stations with multiple quantities measured, etc. If one follows along this trail a bit, one arrives in the well understood field of “Data Assimilation”.

    So far, we have thought about Mean Squared Error, and how to minimize it. This will end up providing us with unbiased estimators.

    But we probably know by now that unbiased estimators are not “all that” and we might consider that there are reasons to prefer biased estimators – robustness for one, and better accuracy in the high noise regime. When we go there, other combinations come into play – not always linear.

    And then there is the situation where we can combine additional data – say one of the thermometers is part of a thermometer-barometer instrument and we get pressure readings. We might assume that the ideal gas law would tell us how to connect pressure and temperature, leading to a likely nonlinear combination of the pressure and temperature readings. It would be entirely possible that the noise in the thermometer signals would not be that much like the noise in the barometer signal, depending on how frequently the instruments were sampled. If the sampling frequency is high enough, then the different response times of the various instruments introduce dynamic correlations (e.g. lagged covariances, etc.).

    Even so, you can dump just about any such concoction of instruments, physics, and “questions to be answered” into something called an Extended Kalman Filter. This has been around for a long time, and indeed people use it for Data Assimilation, and it works pretty well as long as you have enough data and are willing to slosh a little structured linear algebra around.

    OK so what comes out of this relatively simple starting point are some reasonable principles for modeling noisy data:

    1. It’s rarely a good idea to forgo the use of any data, no matter how unlikely it may seem.

    2. The way to combine data series depends strongly on the question which is being asked – so even if you have the same data, asking different questions can lead to much different ways in which the data are combined.

    3. When you have plenty of data, this is a fairly well understood problem with a good old solution.

    What’s not to like here?

    Well it’s that “have enough data” thing. There is a vast variation in the quantity, quality, and “physics” of the various proxy series. To get what we want from it, we are pretty much up against it for getting lots more data.

    In and of itself, that is not the biggest problem in the world since the statistical methods being used in this part of the climate science world were kind of improvised, sort of old school. It’s pretty clear that one should be able to get a lot of mileage out of higher bias estimators. I’ve said it here for years. So I really don’t blame McShane and Wyner for breaking open the wrapper on this sort of attack. However, the Lasso is not exactly the first tool I would reach for (precisely for the same reason M&W chose it, oddly enough). When I was reading their paper, this was when I thought the wheels might not stay on the cart for the whole road. There are lots and lots of ways to gain efficiency of estimation, and sparsity is a brutal choice which introduces extreme dependence on arbitrary coordinate choice, an effect which increases with dimension. Essentially, the Lasso works if your high dimensional problem is really a low dimensional problem, except that someone just stuck in a lot of useless extra data series and you don’t know which ones those are. It is very common in social sciences, public health, and economics, where data sets are frequently a pile of questionnaires that someone originally wrote a long time ago that have been left unchanged (to keep them fixed) even though the people filling them out have changed a lot. Or business surveys where the economists really didn’t know what predicts inflation, so they covered as many things as were easy to ask for. So you get handed this giant heap of data series, and many of them are irrelevant, but it is hard for a human to sort out. The Lasso (and its friends the Garotte, etc.) can chop off uninformative data series very well indeed.

    But the Lasso is not so good if you are trying to sieve out information which is common to the many different series, but which in each one is obscured by different noise distributions. As we saw above, it’s rarely the best idea to completely ignore the worst thermometer when the better thermometers are still bad. Even after transforming to a different coordinate system, throwing out components only works if there is such a thing as an “efficient basis” – in other words if after changing coordinates, most of the coordinates are useless. Useless for what?

    This is where we bring in the dependence on the “question being answered”. Because the Fourier transform is unitary (or, if you don’t speak math, because I said so –) mean square error gives equal weight to prediction skill at every frequency. Now the best combination of the data for that chore is not going to be the same as the best combination for predicting on some small frequency range – such as only the low frequencies. However if you don’t filter out prediction of high frequencies, then the lasso will keep combinations of data according to how well they predict temperature on ALL frequencies, and most of that temperature variation is not climate.

    You can very likely expose this behavior of their method by randomly choosing a combination of several of the proxy series, then passing that through a narrowband filter, adding the complementary frequencies of the true target, and seeing how that gets modeled by the Lasso-based technique of M&W. The Lasso will very likely zero out some of the component series that make up this target, even though that component is present in the target. Well, yes, but it’s only present in a narrow band. This would not be the first time a high bias estimator couldn’t see one particular tree due to the surrounding forest.
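    (A small experiment in this spirit – hedged: it assumes the CRAN package glmnet and invents all the data – will typically show the L1 penalty discarding noisy-but-informative series where a keep-and-shrink L2 penalty would not:)

      library(glmnet)                            # assumes install.packages("glmnet")
      set.seed(2)
      n <- 150; p <- 40
      signal <- rnorm(n)                         # the common component
      X <- sapply(1:p, function(i) signal + rnorm(n, sd = 3))  # 40 bad copies of it
      las <- cv.glmnet(X, signal, alpha = 1)     # lasso: sparse selection
      rid <- cv.glmnet(X, signal, alpha = 0)     # ridge: keep and shrink everything
      sum(coef(las, s = "lambda.min") != 0) - 1  # how many series the lasso retains
      c(lasso = min(las$cvm), ridge = min(rid$cvm))  # cross-validated errors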

    OK so at this point, I think I have explained that yes, you DO want to toss all the data into a big pot and apply good statistics, and with enough data, that’s been a done deal for a long time. But to cope with the problem at hand, there is not enough data, so one has to use a better statistical method.

    And M&W are not crazy to pick a high bias superefficient estimator, that’s really the only game in town. I am encouraged that people are going in that general direction.

    However M&W have made an unfortunate choice of method, which says more about what field they come from, and their lack of experience with some aspects of high dimensional superefficient estimation than anything else. (Yes I’ve seen other people mention their lack of climate science, but that’s not as big a problem.) The mistake of aiming at a sparse model is not confined to climate science problems – similar disasters await them in some aspects of finance. Had they gotten their ducks in a row statistically, I don’t think they would have been fooled by the lack of interpretability resulting from their model, nor would their predictions have been so awful.

    That being said, what would have been better? I haven’t thought too deeply about it, but obviously, to the extent that one can HONESTLY filter out extraneous components from the model target, one will get a better result. This will require care, but it doesn’t seem Herculean when so many times we see people using what amounts to a five year Nyquist rate.

    Then one should avoid using hard thresholding, or things that amount to it (like the L1-penalty in the Lasso, etc.). Yes, lots of econometricians “like” that, because they “like” to believe that the world really is simple if only you look at it the right way. One must use superefficient methods, but one must respect that high dimensional signals can go in a lot of directions. Happily it’s not too hard to come across superefficient methods that suit – e.g. Rudolf Beran’s “REACT”, etc.

    Comment by Andrew — 24 Aug 2010 @ 3:31 AM

    I think it’s time to stop quibbling over past research and focus on the important question: how do we glean reliable paleo-temperature data from the various temperature proxies, given their known drawbacks? We must develop a method agreeable to researchers on both sides of the debate, and go from there. Any suggestions?

    Comment by Louis Hooffstetter — 24 Aug 2010 @ 9:27 AM

  89. IMO, “researchers on both sides of the debate” is an erroneous presumption.

    “The debate” doesn’t define “research”–rather the reverse. Debaters quibble. Researchers don’t (or at least, shouldn’t.) Researchers evaluate evidence as objectively as possible (or should) and modify, discard or develop their theories accordingly.

    So my suggestion, in short, is that if “we” stick to substance, are guided by evidence as it develops, and generally allow the scientific process to work, “we” won’t constantly be “quibbling over past research.” In fact, we’ll probably find that some of the “past research” is pretty valuable, and that the methods already developed retain considerable utility, even as new methodologies are created and applied.

    Comment by Kevin McKinney — 24 Aug 2010 @ 11:30 AM

    Maybe someone can answer the question raised in the discussion about how you choose PCA factors in your reconstruction. To me it is pretty straightforward to use an estimate of the noise level as a guideline. Once you have a noise estimate, the weight factor is simply the signal over its variance.
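    (One common version of that guideline, sketched in base R on invented data: compare each eigenvalue against what same-sized pure noise would give, sometimes called parallel analysis.)

      set.seed(4)
      X <- matrix(rnorm(100 * 20), 100, 20)
      X[, 1:3] <- X[, 1:3] + rnorm(100)     # three columns share a common signal
      ev    <- prcomp(scale(X))$sdev^2      # observed PC variances
      noise <- prcomp(scale(matrix(rnorm(100 * 20), 100, 20)))$sdev^2
      which(ev > noise)                     # PCs standing above the noise floor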

    Comment by Yvan Dutil — 24 Aug 2010 @ 12:30 PM

  91. Re : #78 and #80

    Re: Politicised stats. journal.

    The culmination/application of their ‘original’ research into social networks.

    Comment by deconvoluter — 24 Aug 2010 @ 1:20 PM

  92. Any chance of setting up a permanent open thread? There are times when I would like to ask a question or note something that is not connected to the topic du jour.

    You might even want multiple open threads: headline news (Pakistan, Russia, etc.), scientific news (papers, conferences, events, etc.), and communications/politics (how to present results to a lay public, dealing with denialists, pending legislation, etc.).

    It might encourage participation by lay people who feel uneasy diving into technical discussions, but do have real concerns and would like to learn more. (Which could be a good thing or a distraction, depending on your mission.)

    Just my 2 cents…

    Comment by JimCA — 24 Aug 2010 @ 6:28 PM

  93. #92 JimCA

    Please keep in mind that the moderators of this site are working scientists who actually need to sleep once in a while. . ., well okay, not Gavin so much, but vampire qualities aside, rest can be quite healthy.

    You know what they say ‘A healthy scientists is a day, helps keep the denialists away’.



    Comment by John P. Reisman (OSS Foundation) — 25 Aug 2010 @ 12:05 AM

  94. My general assessment, at this time, about the McShane and Wyner paper is that it is a McKi ‘trick’

    http://www.ossfoundation.us/projects/environment/global-warming/myths/ross-mckitrick

    The illuminating connections provided by John Mashey in post #80 do show connections to the Fraser Institute and McKitrick.

    I know, probably just a coincidence ;)

    Hmmm. . ., fancy that, M&M are referenced four times in the M&W paper.

    And Yasmin Said is an Editor in Chief? Where have I heard that name before?

    I’m going by memory here, but wasn’t she used by M&M somehow? And wasn’t she put on an investigating panel to investigate M&M results in connection to the Wegman report and continued investigation on the Hockey Stick?

    I will dig around, but it is interesting I think.



    Comment by John P. Reisman (OSS Foundation) — 25 Aug 2010 @ 12:28 AM

  95. 84 Ike Solem: “periods of glaciation and persistent ice cover could be distinguished from periods of biological productivity”
    Leads to an interesting question. I remember from somewhere something like the following:
    Tropical oceans are clear because they are too hot for phytoplankton. Cold oceans are turbid because lots of phytoplankton grow in them. [neglecting nutrients] Thus ice ages [not snowball Earth ages] are more lively than interglacials. Besides, the continental shelves are dry during glacial maxima, increasing land area for land organisms.
    So humans should be happy to have increased glaciation? Is the holocene really all it’s cracked up to be? Or are our previous cities just hidden by 400 feet of water?

    Comment by Edward Greisch — 25 Aug 2010 @ 1:34 AM

  96. My fingers and my brain sometimes are at odds with each other…

    ‘A healthy scientist a day, helps keep the denialists away’.

    I think that was Goethe (paraphrased of course)?

    On the Yasmin Said thing. If that was her on the Wegman investigation committee, it may be a case of the fox guarding the hen house.

    Comment by John P. Reisman (OSS Foundation) — 25 Aug 2010 @ 5:24 AM

  97. Perhaps someone can clear up my reservations about using tree-ring data at all for temperature reconstructions?
    I have two main doubts. One: while it is plausible that tree growth should be dependent on climate factors such as rainfall or hours of sunlight, is it realistic to expect that variations of global temperature of 1°C or less should be detectable? Secondly, the variation of tree-ring growth from one season to the next is substantial. However, this “signal” lies in a frequency range outside of what one would expect of a global temperature change, which should have a frequency in the per-decade or per-century range. In other words, can such a low frequency signal be extracted reliably out of the high frequency noise?
    M&W seem to be saying no to both questions, so why even attempt to use tree-ring data?

    [Response: M&W have no insight into tree-rings at all. Where did you get that from? But to answer your question, tree rings do show fidelity to temperatures if you go to areas where trees are sensitive to temperatures. Read Salzer et al for instance. - gavin]

    [Response: Plausible? Trees, as much or more than any group of organisms on the planet, are 110% integrated with environmental variables, with temperature arguably of first importance among them. They constantly respond, in a bazillion ways, directly and indirectly to temperature--they have no choice. Their carbon balance, and hence fitness and survival, is directly and strongly related to temperature, as selected over tens of millions of years. Any suggestion that growth (and hence tree ring characteristics) from thermally limited sites are insensitive to temperature is wrong at the most fundamental level. As for your question: (1) we are often talking about more than 1 degree C variation over long time spans, and (2) you enhance regional/global s:n by including as many sites as you can, just as you enhance it at the local scale by sampling a collection of trees, and at the tree level by taking two or more cores.--Jim]
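    (Jim’s point (2) is easy to verify numerically. A hedged toy in R with an invented slow signal: stacking N noisy “cores” recovers it, with signal-to-noise improving roughly as sqrt(N).)

      set.seed(3)
      tt   <- 1:1000
      slow <- 0.5 * sin(2 * pi * tt / 500)            # low-frequency "climate" signal
      core <- function() slow + rnorm(1000, sd = 2)   # one very noisy core
      for (N in c(1, 10, 100)) {
        stack <- rowMeans(replicate(N, core()))       # average N cores
        cat(N, "cores: cor with signal =", round(cor(stack, slow), 2), "\n")
      }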

    Comment by Tom Peel — 25 Aug 2010 @ 8:40 AM

  98. re: #97
    M&W seem to have learned what they know of tree rings from that authoritative source on the topic called the Wegman report … which plagiarized/mangled it from Bradley (1999), as per Deep Climate, Dec 2009, recently updated into the modern age of color (or colour), DC, July 2010.

    Following WR tradition, M&W vaguely reference Bradley … with even less evidence of actually studying it. The WR at least looked at it enough to extract a few tables and diddle a few pages of text, although with lots of “confounding factors” added to the mix, as well as at least one direct inversion of Bradley’s conclusions.

    However, the WR, p.10 did make a major contribution, the use of “artifacts” in an innovative way:

    “Paleoclimatology focuses on climate, principally temperature, prior to the era when instrumentation was available to measure climate *artifacts.* Many natural phenomena are climate dependent and, where records are still available, these phenomena may be used as proxies to extract a temperature signal. Of course the proxy signals are extremely noisy and thus temperature reconstruction becomes more problematic as one attempts reconstructions further back in time…”

    In bits and pieces, WR p.10 fairly clearly comes from Bradley(1999), pp.1-10, especially p.1, which does *not* use “artifacts”. The last sentence above betrays some misunderstanding, of course.

    McShane & Wyner write, p.1:
    “The key idea is to use various *artifacts* of historical periods which were strongly influenced by temperature and which survive to the present.”

    Who knows, maybe paleoclimate will adopt this exciting new terminology, the idea that proxies are artifacts? Maybe not.

    Comment by John Mashey — 25 Aug 2010 @ 10:53 AM

    Stefan Rahmstorf and Martin Vermeer have, in their latest sea level study, projected sea level rise to 2100 based on current and past warming trends.

    Would it be possible to do it the other way around – indicate past temperatures (the last 2000 years or so) based on sea level?

    Has it been done? Hope I’m not too off topic.

    Comment by Preben — 25 Aug 2010 @ 1:01 PM

  100. ‘artifact’ may go back a ways:
    http://www.google.com/search?q=climate+%2Bartifact

    Comment by Hank Roberts — 25 Aug 2010 @ 1:16 PM

    Here is a tree ring study from Ireland that concerns the influence of different climate parameters:

    http://climate.arm.ac.uk/publications/tree_rings.pdf

    Comment by Ibrahim — 25 Aug 2010 @ 2:56 PM

    Something this article (and many others) ignores is that there are far more paleoclimate proxies than tree rings. Chironomids, foraminifera, cosmogenic isotopes, conodonts (if you wanna get “really” paleo), varves, oxygen isotope ratios in clam shells and other organic matter, and of course, the more well known ice cores. I’m a budding paleoclimatologist, not a statistician, so I can’t speak to any of the methods used, but there seems to be an attitude in many circles and in this article that tries to pretend that tree rings are all we have, which couldn’t be further from the truth. The best use of proxies is when combining several methods. Take a look at this paper on Baffin Island: http://www.glyfac.buffalo.edu/Faculty/briner/buf/pubs/Thomas_et_al_2010.pdf On the 14th page, you can see a graph depicting the use of several proxies and how they all agree quite nicely, especially considering the temporal scale in question. The McShane and Wyner article claims it’s very difficult to reconstruct past climates, but apparently they haven’t been paying attention. There are people out there who have gotten pretty good at it.

    Comment by Shirley J. Pulawski — 25 Aug 2010 @ 3:46 PM

  103. According to the late B. Kliban, the phrase is actually “An apple every eight hours keeps three doctors away.”

    Comment by Barton Paul Levenson — 25 Aug 2010 @ 4:28 PM

  104. NASA/NOAA study finds El Niños are growing stronger, suggests possibly due to climate change: http://bit.ly/NinoSt

    Comment by Kees van der Leun — 25 Aug 2010 @ 4:32 PM

  105. [Sorry, now with the right link] NASA/NOAA study finds El Niños are growing stronger, suggests possibly due to climate change: http://bit.ly/NinoStr

    Comment by Kees van der Leun — 25 Aug 2010 @ 4:37 PM

  106. Jim,

    In your response to Tom Peel (#97), your use of hyperbole, absolutes and generalizations doesn’t help your argument very much.

    I’d say that all organisms are 100% ‘integrated with environmental variables’. However, if trees can be ’110% integrated’, then you may be onto something!

    [Response: Well, I'm sorry but that's wrong. Any organism that cannot regulate its internal temperature, either metabolically or by movement, has a greater T dependency than those that can do so. Perennial woody plants, and trees in particular, are least able to get around this problem. The exaggeration was to emphasize this point, given that some seem to feel that a couple centuries of plant development and physiology research can be dismissed because they are fonder of number crunching than biology.--Jim]

    You are correct. Temperature is likely the environmental variable of ‘first importance’ for some trees, but definitely not for all. Careful evaluations of the characteristics of each species and the local conditions at the sampling locations are needed before this assertion can be assumed to be correct. Has this been done?

    I don’t think that anyone is suggesting that trees are ‘insensitive to temperature’. However, some are suggesting that there may be other variables involved! Assuming that these other variables can be ignored or treated as constant is also ‘wrong at the most fundamental level’.

    [Response: I was referring to thermally limited sites over largish spatial scales. We're talking about regional to global temperatures here. Of course there are other determinants--and you can be sure that dendrochronologists are well aware of them--especially when it comes to distinguishing between thermal and hydric effects. Your statement about "careful evaluations" is quite off target. It is not necessary, and in most cases not even possible, to have detailed site knowledge--the whole point of the sampling is to obtain that knowledge. What is necessary is a general ecological sense of how various species are likely to respond in different geographic, physiographic, and demographic situations. This knowledge has been built up by field experience over many decades. Your last statement is certainly true but is a complete straw man, because nobody argues any such thing.--Jim]

    More cores. Sounds good! I love field work. Let’s go get some more cores to validate the existing data sets!

    [Response: Deal. How soon can you make it to Laramie?]

    Comment by fhsiv — 25 Aug 2010 @ 5:01 PM

  107. re: #102, #101, #98
    Again, it is virtually certain that M&W were learning paleoclimate from the Wegman report and maybe other places … and then cited Bradley(1999) to make it look plausible. The first paragraph of M&W has “artifacts”, but there are two better clues there (at least, I haven’t looked really carefully yet.)

    [C’mon Hank, go for it!]

    Comment by John Mashey — 25 Aug 2010 @ 9:56 PM

  108. Sorry I didn’t get a chance to thank those who helped me a couple of threads ago, which really helped (if not the confirmed denialists, at least others who may have been swayed by their arguments).

    Now I need more help. Any good response to this “the greenhouse effect has been disproved” argument:

    Actually I’d check my understanding of The Second Law of Thermodynamics….and Stefan-Boltzmann equations underpinning the AGW …Man Made Global Warming theory – NATURAL GREENHOUSE EFFECT…It was long ago debunked. As a matter of fact: NASA debunked it nearly 40 years ago during Apollo

    I can sort of handle the entropy thing with — if GW is impossible because of entropy, then so is life (which also defies entropy…at least for a while), and we’re not really here.

    Comment by Lynn Vincentnathan — 26 Aug 2010 @ 2:07 AM

  109. Re: John Mashey@107 etc.

    Do we have a statsgate in the making?
    Or am I lowering myself to the same level as…??

    Comment by Warmcast — 26 Aug 2010 @ 2:08 AM

  110. #102 Shirley J. Pulawski

    I made a page to show the scope of the climate science knowledge base here:

    http://www.ossfoundation.us/projects/environment/global-warming/what-we-know

    It is a little summary, and a copy of the NCDC paleo section.

    Your point should be held up higher and in the bright light. So many are arguing that we know so little, when in fact it is the convergence of a tremendous body of data from multiple disciplines that makes up, not only the foundations of climate science, but the extraordinary structure of understanding that has come together.



    Comment by John P. Reisman (OSS Foundation) — 26 Aug 2010 @ 4:54 AM

  111. #103 Barton Paul Levenson

    OT (please forgive) and shameless plug for just how cool Basel is!

    Re. the possible origins of ‘An apple a day’ phrase near Basel?

    When here in Switzerland, I live near the edge of the Allschwillerwald, which is a short distance from Dornach, where Goethe and Rudolph Steiner resided. The locals around here seem to think that the phrase originated in Dornach.

    All the major drug companies are here as well as the Bank of International Settlements. Pope Martin V ordered the council to convene in Basel in 1424 and the council did convene 1431 – 1449. Though the history is amazing prior to that as well, with our very own Roman settlement, gladiator coliseum, and Roman theater dating back 2300 years ago, it is the central role that Basel played between the church, the academia and the city that positioned this region as a center for intellectual, spiritual and academic development. I would not be surprised if the phrase ‘An apple a day. . .’ originated around here.

    Even the story of Wilhelm Tell involved an apple! Though I don’t know if there is a connection here?

    BTW, the Roman theatre seats 3000 and is still used today.

    http://www.augusta-raurica.ch/e/menu/index.php

    This week the Alexander Fest ended on the 25th.

    http://www.theater-basel.ch/spielplan/stueck.cfm?s_nr=4212

    Romerfest is this weekend.

    http://www.roemerfest.ch/

    Comment by John P. Reisman (OSS Foundation) — 26 Aug 2010 @ 4:58 AM

  112. #107 John Mashey

    Thank you for your wonderful investigative work :)

    Comment by John P. Reisman (OSS Foundation) — 26 Aug 2010 @ 5:00 AM

  113. Preben #99 asks if anyone has thought of using sea level rise as a thermometer.

    Yes.

    Comment by Philip Machanick — 26 Aug 2010 @ 5:19 AM


  115. Lynn, Re #108 – See http://scienceblogs.com/illconsidered/2008/09/greenhouse-violates-thermodynamics.php

    Comment by Silk — 26 Aug 2010 @ 7:59 AM

    You are correct. Temperature is likely the environmental variable of ‘first importance’ for some trees, but definitely not for all. Careful evaluations of the characteristics of each species and the local conditions at the sampling locations are needed before this assertion can be assumed to be correct. Has this been done?

    Of course it’s not been done; paleoclimatologists are so stupid they never thought of the need to do careful site selection (facepalm). They just stick pins into a voodoo-doll-like map of the planet while blindfolded and dead drunk, and use that as their site selection criterion.

    [edit - play nice]

    Comment by dhogaza — 26 Aug 2010 @ 8:21 AM

  117. Lynn Vincentnathan wrote: “… life (which also defies entropy…at least for a while) …”

    Life does not defy entropy. Or to say it another way, to defy entropy “for a while” is not to defy entropy at all.

    Comment by SecularAnimist — 26 Aug 2010 @ 10:20 AM

  118. ‘apple a day’:
    http://www.straightdope.com/columns/read/2141/whats-the-story-with-johnny-appleseed

    Comment by Hank Roberts — 26 Aug 2010 @ 10:46 AM

  119. For John Mashey — is your work being collected in any one place? I find bits and pieces, glad to dig in, pointers welcome. The statistics journal has corrected the typo in the title of the article on their website list of upcoming issues; anyone know if the draft has been revised at all?

    Comment by Hank Roberts — 26 Aug 2010 @ 11:04 AM

  120. Re #117 – Life does challenge the spirit of the second law if not the law itself.

    Comment by pete best — 26 Aug 2010 @ 11:08 AM

  121. Lynn 108,

    The person you’re quoting is either lying or relaying a lie. Nobody ever “disproved” AGW.

    Comment by Barton Paul Levenson — 26 Aug 2010 @ 11:59 AM

  122. Johann 112,

    Very good for you.

    Comment by Barton Paul Levenson — 26 Aug 2010 @ 12:01 PM

    Lynn Vincentnathan @108 — I recommend reading “Into the Cool” for an entertaining and enlightening essay on the role of life in the thermodynamics of Terra.

    Comment by David B. Benson — 26 Aug 2010 @ 12:33 PM

    Here’s yet another denialist attack, using a blog source esteemed by denialists everywhere (you never see these “cutting edge” attacks in peer-reviewed journals), at http://www.climatechangefraud.com/climate-reports/7525-leading-us-physicist-labels-satellitegate-scandal-a-catastrophe

    Leading US Physicist Labels Satellitegate Scandal a ‘Catastrophe’
    Written by John O’Sullivan, via e-mail | 19 August 2010

    Respected American physicist, Dr Charles R. Anderson, has waded into the escalating Satellitegate controversy publishing a damning analysis on his blog [http://objectivistindividualist.blogspot.com/2010/08/satellite-temperature-record-now.html].

    In a fresh week of revelations when NOAA calls in their lawyers to handle the fallout, Anderson adds further fuel to the fire and fumes against NOAA, one of the four agencies charged with responsiblity for collating global climate temperatures. NOAA is now fighting a reargaurd legal defense to hold onto some semblance of credibility with growing evidence of systemic global warming data flaws by government climatologists.

    NOAA Systemically Excised Data with ‘Poor Interpolations’…

    Comment by Lynn Vincentnathan — 26 Aug 2010 @ 4:05 PM

  125. Re: #108

    1. Life and entropy. The idea that life violates the second law never had any basis. It is based on the evolution of complexity. But …

    As taught, most elementary thermodynamics is developed for closed systems. The entropy is a function which reaches its maximum value at thermodynamic equilibrium. Even though life exists in an open system and never reaches equilibrium, it is still possible to work with entropy. In this case the entropy of the living thing plus its surroundings is always rising, because life involves irreversible processes which create entropy; the extra ends up outside the organism (see the one-line inequality after the links below). The process of evolution is also accompanied by entropy production in the world outside.

    2. Re: greenhouse effect. The opposite of disproof. It has been ‘proved’ better than most things by looking at the spectrum of the downward infra-red. Please see Fig. 2 here:

    http://www.skepticalscience.com/empirical-evidence-for-co2-enhanced-greenhouse-effect.htm

    3. The “disproof” of the greenhouse effect. That claim has itself been debunked better than anything else. You are spoilt for choice:

    http://scienceofdoom.com/2010/05/28/the-first-law-of-thermodynamics-meets-the-imaginary-second-law/

    http://arxiv.org/PS_cache/arxiv/pdf/0802/0802.4324v1.pdf

    http://scienceblogs.com/stoat/2010/05/comment_on_falsification_of_th.php

    http://scienceblogs.com/stoat/upload/2010/05/halpern_etal_2010.pdf
    (the rebuttal in the journal)

    Eli Rabett’s site
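    (The bookkeeping in point 1 can be put in one standard line; nothing here is specific to any of the links above:)

      \[ \Delta S_{\mathrm{total}} = \Delta S_{\mathrm{organism}} + \Delta S_{\mathrm{surroundings}} \ge 0 \]

    Local order can grow so long as the entropy exported to the surroundings more than compensates.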

    Comment by Geoff Wexler — 26 Aug 2010 @ 5:47 PM

  126. Lynn @ 108

Ah yes, NASA. That hotbed of denialism.

    “Global warming is the unusually rapid increase in Earth’s average surface temperature over the past century primarily due to the greenhouse gases released by people burning fossil fuels.”

    NASA fact sheet

Anyway, it’s not clear at what point this alleged debunking supposedly occurs. Did they even cite a paper that can be checked?

    Anybody can make stuff up and argue generalities. Or you can do some denier-style jujitsu and throw it back at them: the debunking was debunked (if any “debunking” of AGW occurred, this is probably true anyway) and 30-something years ago NASA, etc., etc., rockets, etc., then Hansen said, etc.

    Life gets energy from the sun which feeds organizing activity. However entropy doesn’t always protect us from badly encoded fools who like to play with fire…

    Comment by Radge Havers — 26 Aug 2010 @ 6:46 PM

  127. #108 Lynn

Seeing as the people you seem to be dealing with aren’t in the business of citing good science, a good resource might be Skeptical Science. They’re in the process of rewriting all the arguments at differing levels of complexity: basic, intermediate, advanced. For this particular exercise you’re looking for 64 and 73 on the All Arguments list. And their phone app is free if you need instant access to solid backup.

    Comment by adelady — 26 Aug 2010 @ 8:09 PM

I love you guys. It almost always lightens my mood to see the kind of serious thought that works through these comments. There are not only reams of realistic, detailed material here (careful, analytical, truthful, relevant, etc.), but also help for my amateur attempts to derail the fake skeptics with their self-serving gotchas at my main hangout, DotEarth. Special thanks to Edward Greisch for calling attention to that post, which I found appalling – my skills are not up to the need to call it out.

    John Mashey, your research is outstanding, and I hope you won’t mind if I steal it to demonstrate the self-referential nature of that material. (I tried a more biological adjective but the spam filter didn’t like it, I think.)

    Shirley Pulawski, 102, so obvious but only after you mention it. Great stuff!

    It seems to me Andy Revkin is eager to embrace counter information; perhaps he is prone to wishful thinking. It is presumptuous of me to guess at this, but it is sad to see those who wish to exploit scientific uncertainty given a platform there. However, if you look at the comments in the BBC and the Guardian, this donnybrook is all too common. It’s so easy to promote doing nothing … and so dangerous.

    Comment by Susan Anderson — 26 Aug 2010 @ 8:36 PM

  129. Jim,

    I’m a dumb geologist, so forgive me!

    You said “Any organism that cannot regulate its internal temperature, either metabolically or by movement, has a greater T dependency than those that can do so. Perennial woody plants, and trees in particular, are least able to get around this problem.”

    Why do you say that trees cannot regulate their internal temperature? Don’t they do this metabolically at both the high and low ends of the range of temperatures to which they are exposed?

    Also you said, “It is not necessary, and in most cases not even possible, to have detailed site knowledge…”. Ahh! a statement that crystallizes my concerns with any inferences made on the basis of analyses of small population dendro data sets. Aren’t your ‘regional to global’ analyses simply a collation of data from a number of relatively small, local scale studies? If you don’t know the metadata (slope aspect, slope inclination, soil/rock type, soil moisture/hydrologic variability, biological factors, etc.) associated with a set of specific core samples, how do you know to what extent you are not measuring variables other than temperature? Is this filtered out in your regional/global analyses by some sort of statistical method?

In my business (soil engineering), the analytical data associated with a set of samples has very little meaning or credibility unless the location and condition of the samples (obtained with standardized sampling and analytical methods, and described by detailed exploration logs and maps) are precisely known. As a matter of fact, much of my work is created by practitioners who come to conclusions and render opinions (i.e. bad designs) based solely on the laboratory analysis of samples with poorly constrained provenance.

    But then again, this may not be an appropriate analogy. After all, in soil engineering, we do not rely on statistical analyses of measurements of indirectly related variables from samples obtained from a specific site to model the future behavior of materials at sites somewhere else in the world!

    Finally, you said “How soon can you make it to Laramie?” No can do! But, I’d like to help with some foxtails and bristlecones in CA.

    [Response: Not meaning to imply you're dumb. Sometimes it's hard to tell where people are coming from. I will try to respond to this and a couple other comments as soon as time avails.--Jim]

    Comment by fhsiv — 26 Aug 2010 @ 10:23 PM

Susan Anderson 26 August 2010 at 8:36 PM,

“Special thanks to Edward Greisch for calling attention to that post, which I found appalling – my skills are not up to the need to call it out.”

    Did you mean this article?

What did you find appalling about it?

    Comment by Anne van der Bom — 27 Aug 2010 @ 1:57 AM

Susan (#128),

    My view is that Revkin relies very heavily on Pielke Jr., gives his views undue weight, and fails to look critically at any of his stuff, including blog ramblings. Dr. Steig put it tactfully:

    http://www.realclimate.org/index.php/archives/2009/12/who-you-gonna-call/

Also, here is my belated response to the thread you’re referring to. Revkin’s mistakes are not limited to the original post.

    Andy Revkin writes:

    “On the Russian heat wave, NOAA has stated flatly that human-driven warming isn’t involved (in a draft analysis):
    http://www.esrl.noaa.gov

    The conclusion is blunt and clear:”

    Sounds like the spin one would normally find on a denier blog. Your own quote indicates only that the analysis finds that human-driven global warming isn’t the sole or primary cause. It certainly does not “state flatly” that it wasn’t involved.

“Despite this strong evidence for a warming planet, greenhouse gas forcing fails to explain the 2010 heat wave over western Russia. The natural process of atmospheric blocking, and the climate impacts induced by such blocking, are the principal cause for this heat wave. It is not known whether, or to what extent, greenhouse gas emissions may affect the frequency or intensity of blocking during summer.”

    and of course there are other perspectives from climate scientists on this.

    As for the disaster loss study, I had a look. As suspected, it relies very heavily on Pielke Jr.’s work. The author has also collaborated with Pielke and to some extent Landsea (skeptic of the hurricane link to global warming). The cited papers are weighted a bit towards hurricane landfalls in the CONUS. They also cite one on earthquakes, which seems odd for the purposes of the study, since I’ve seen scant research on global warming’s link to earthquakes. But it does give them an extra point in the No column. Scanning the citation list for an independent non-Pielkian assessment, I find one:

    http://www.ingentaconnect.com/content/klu/nhaz/2005/00000036/00000003/00000977

    “These suggest that the occurrence of flood disasters could be mainly induced by local human activities before the mid-1980s, and thereafter mainly by abnormal precipitation in Xinjiang. Meteorological and hydrological records showed that the number of heavy rainfall events and the frequency of rainstorm flood disasters increased since the 1980s. In addition, siltation of reservoirs and loss of flood control structures are partly responsible for the increase of flood-damaged area. These results suggest that the increasing trend in flood disasters in Xinjiang since the middle 1980s could be attributed, at least in part, to an increasing trend in annual precipitation.”

    There are in fact a number of studies cited that show a trend in disaster losses even after normalization. See Table 1 at the end of the study:

    http://journals.ametsoc.org/doi/pdf/10.1175/2010BAMS3092.1

Note, however, a bit of spin used by the author to dismiss the one cited above. “Since this effect is not quantified, it is hard to conclude whether or not losses have increased due to an increase in extreme rainfall only.”

    That reminds me of Andy’s leap above. An NOAA analysis concludes that natural weather patterns were the principal cause of an extreme heat weather event. Andy takes the logical leap and concludes global warming played absolutely no role, which doesn’t follow. Few are claiming that the Russian heatwave was caused by global warming only. Similarly, it’s a strawman to suggest that anyone’s claiming the increase in raw flood disaster costs in China is solely due to the effects of increased precipitation from GW. However, the study indicates that it clearly played a role. There are 8 citations from Table 1 that indicate an increase in normalized disaster losses. So it takes a few leaps of illogic, some by the author, and another by Andy (which takes the abstract a step further by saying “no link”), to get from there to Andy’s nothing-to-worry-about headline:

    “Study Finds No Link Tying Disaster Losses to Human-Driven Warming”

    which is entirely wrong.

    Comment by MarkB — 27 Aug 2010 @ 3:28 AM

  132. A third huge chunk has just broken off of an ice shelf in the Canadian Archipelago:

    “A large parcel of ice has fractured from a massive ice shelf on Ellesmere Island in Nunavut, marking the third known case of Arctic ice loss this summer alone.

    The chunk of ice, which scientists estimate is roughly the size of Bermuda, broke away from the Ward Hunt Ice Shelf on the island’s northern coast around Aug. 18, according to NASA satellite imagery.

    At 40 metres thick, the Ward Hunt Ice Shelf is estimated to be 3,000 to 5,000 years old, jutting off the island like an extension of the land.

    “The cracks are going right to the mainland, basically, right to Ellesmere Island,” John England, a professor of earth and atmospheric sciences with the University of Alberta, told CBC News on Tuesday. “So, in the core of the ice shelf itself, the fracturing is occurring.

“I think that’s really quite significant, that it’s like the most resistant and most tenacious part of the ice shelf is now being dismantled.”

    Will any of these events alter denialists’ positions?

    I doubt it.

    Comment by wili — 27 Aug 2010 @ 7:17 AM

Thanks. I enter this conversation with great hesitation, as I am a non-scientist amateur, though (sadly) despite my deficits my scientific literacy and critical thinking are high compared to the general population.

    Yes, that’s the article.

re Roger Pielke: I walk on eggshells there, which is why I didn’t bring it up. If you imagine what it must be like to be under constant attack for almost three decades, recently from both sides, you might find it easier to understand why someone with a mildly conservative bent and a dislike of extremist positions has become friendly with Pielke. Andy hammers away at the damage that exaggerations and unsupported statements do. Especially recently, anything and everything is fodder for attack, and they keep getting better at the offside stuff, like labeling comment policy as prejudice and claiming “censorship” and “blacklisting”. I am saddened by the turn over the last couple of years towards the fake skeptic fans and away from the developing evidence of catastrophic potential, but I can understand it. No matter how hard he tries to return us all to the other overpopulation and overexploitation issues on a finite planet, the climate wars go on and on. That’s not to say I like the enabling of disinformationalists, but if you look at the BBC and Guardian comments, you can see that it is a lot of work to maintain civility on any public forum.

    I have more to say but my security apparatus is calling me so will save it for later. I mention this because I sometimes have computer trouble when using RealClimate and DotEarth, and am highly suspicious that malice is not entirely absent in some of these problems.

    Comment by Susan Anderson — 27 Aug 2010 @ 10:13 AM

  134. Should have taken time to proof the above.

Anne van der Bom, perhaps “appalling” is too strong a word. I found it misleading, and it was the overall sense of proportion that bothered me. Since the attack machine is so well armed and ubiquitous, I hate to see them given a handle.

    For specifics, I recommend MarkB’s response above.

The fact that each extreme weather event is not specifically attributable to climate change due to global warming is a given, but the overall trends seem obvious if people just move outside their politics and look at world rather than just local news. The recent shortage of MSM coverage of the increasing Pakistani flood disaster is a case in point.

    Wili’s note about the latest ice breakoff is another example. Of course the antis are busy saying it’s normal, it will block the other ice and cause a stoppage. Anything to convince those who don’t have the time, interest, or energy to look it up for themselves, and want to believe it’s nothing much.

I do think it important to notice all the great work Andy has done over the years, and the current efforts he makes to put things in proportion. Dissing him as if his whole record is dirty is certain to make him decide it is not worth the trouble to distance himself from his less credible fans.

    Comment by Susan Anderson — 27 Aug 2010 @ 10:58 AM

RE #108, and thanks for all your help. I did respond with the Coby Beck argument, but here is what I got back — I don’t really know what he’s talking about (it’s on the Catholic Answers Forum at http://forums.catholic.com/showthread.php?p=6997120#post6997120):

    Actually what was said Is The Second Law of Thermodynamics is ignored – which makes the equation used to suport the Green House Effect, as presented – wrong

    Now, try applying this “An inert object can only radiate the amount it absorbs – Not twice… We can not even get a perfect 1 to 1 transfer [ In a Perfect heat transfer ]

    Now take out Stefan-Boltzmann equations wrongly [ and knowingly ] applied.

    You have merely parroted THE SAME OLD EQUATIONS The equations used BY ALL AGW’ers – CRU – NASA – IPCC – KYOTO – Mr Mann – Hansen – Schmidt – et al

    I’ll give you fair warning This is the Words of Dr. Judith Curry [ You do know who she is at NASA don't you? ]

    Quote: Dr. Curry: “I’m contacting NASA about this.”

    Comment by Lynn Vincentnathan — 27 Aug 2010 @ 11:25 AM

oops, one more peanut from this peanut gallery. The original provocation at DotEarth was buried in an earlier post, where Andy Revkin “selected” a comment extracting the single paragraph from the NOAA article. It came from one wmar, who is a clone of Marc Morano if not MM himself (the only reason I don’t think he’s MM is that MM often signs his name) and who posts a constant stream of borrowed secondhand disinformation, very sciencey and polite, heavily reliant on WUWT et al. When I googled, I found the denialosphere had seized on the NOAA paragraph and was crowing about it. Looking at climate articles around the internet, it seems DotEarth is specifically targeted: when the larger NYTimes readership weighs in, the comment proportion is more like 90:10, instead of the 50:50 or 60:40 denial regularly seen at DE.

    Comment by Susan Anderson — 27 Aug 2010 @ 11:27 AM

SOS. Please, if anyone has time, weigh in on this!

    http://dotearth.blogs.nytimes.com/2010/08/26/on-harvard-misconduct-climate-research-and-trust/

    Comment by Susan Anderson — 27 Aug 2010 @ 11:32 AM

  138. RE trees, or plants in general, and the impact of temp. I came across this article, which suggests that for rice the increasing minimum diurnal temps are having a negative impact now, with the maximum diurnal temps expected to have negative impact when they reach a certain point.

    The article: http://www.pnas.org/content/early/2010/07/26/1001222107.abstract?sid=f116865b-4e70-4ef1-abea-e5b209a3bbb8

I think that’s interesting, since they say that the deaths in the 2003 European heat wave were due more to the higher nighttime temps (not getting cool enough), when the body needs to recuperate from the daytime heat. (And in the organic gardening class I just took I learned that plants can, like us, be harmed by viruses, bacteria, and other micro-organisms; so I’m developing a sympatico with plants now.)

Anyway, I’m just wondering if this minimum diurnal temp is something to look at for all plants. I know it is increasing faster than the diurnal max temps, and is a very good signal for AGW.

    There are also studies on the effects of both increasing temps and CO2 (and temps/CO2/water) on trees…

    Comment by Lynn Vincentnathan — 27 Aug 2010 @ 11:49 AM

Lynn, sounds like you’re participating in a religious forum, carrying explanations of physics people give you here, posting them over there, and hoping to improve their understanding. The people you’re talking to are going to denial sites, getting explanations there, and posting them. It seems like neither you nor they understand the physics, but are exchanging posts of secondhand information. I doubt this can educate anyone.

    Why not instead try pointing people to the FAQs and suggesting they read?

    Comment by Hank Roberts — 27 Aug 2010 @ 11:59 AM

135 (Lynn),

    There are certain people that you cannot reason with, because the mechanics of reason are simply a weapon in their personal arsenal, rather than being something that they apply to the real world to understand it.

I recently had a run-in with a guy who likes to go by the name of “tallbloke”. I took it all seriously until I visited his site and found out he actually believes that Dayton Miller successfully disproved the Theory of Relativity.

    That said, once again, I’d advise writing for the people “listening in,” not the person you’re actually arguing with (because he’ll always find some loony response to twist the conversation).

    All you need to do is to discredit his line of attack (easily done in this case). There’s no arguing against fiction, so demonstrate that it’s fiction.

    A little bit of research on the supposed quote by Dr. Curry demonstrates that it only appears in one form, from a bizarre article that’s been duplicated on about a dozen conservative really-out-there sites on the web. Never on a real news site, never anything more professional or substantive, but only in places equivalent to forwarding the “Mars will get bigger than the moon on August 27th” e-mail.

    A link to the supposedly suppressed and recently published “paper” demonstrates it to be the sort of thing an eighth grader might throw together for a science assignment:

    This is a paper?

    The list of “authors” is similarly unimpressive.

    So the bottom line answer here to the denier is “wow, you’ll believe anything you read as long as it supports your own firm desire for AGW to be a hoax!”

    Comment by Bob (Sphaerica) — 27 Aug 2010 @ 12:02 PM

  141. Susan,

I can’t seem to find the comment to support this, but I recall a comment on a blog about a year ago that said something to the effect of “Andy’s an alarmist from way back but we’re working on him”. It stuck with me because it’s notable that there are maybe a couple dozen regulars over there with the same rhetoric, all recommending each other’s posts, no matter how nutty. The “we” is interesting. I wouldn’t go so far as to say that there is an organized effort specifically at his blog; it could be just individuals of like-minded ideology identifying with each other. I think contrarians see Revkin and some other journalists as susceptible to their misinformation (though not easily so) if they keep hammering away at it through a variety of methods. Perhaps they are right about that. There’s certainly evidence to support it.

I agree that Revkin’s life’s work shouldn’t be judged by some poor comments. I tend to have high expectations of those who don’t have obvious anti-science agendas. When I see comments such as the specific one regarding global warming and the Russian heatwave, or headlines like the one on disaster losses, or uncritical reliance on RPJ as an unassailable source, I tend to cringe. He should know better at this point.

    Comment by MarkB — 27 Aug 2010 @ 12:23 PM

Thanks, Hank (#139) — I’ve actually been holding my own pretty well, answering the “WV is the biggest GHG” type of arguments with skill and my own knowledge (much of that gained from years of reading RC & other good sources). Perhaps that’s why the denialist group there has taken it to higher levels I cannot address on my own.

I still need to address these guys for the sake of the others reading the posts. Thanks, Bob (#140), for your advice — and, yes, it seems that the “Greenhouse Effect on the Moon” paper is the one to which they refer.

I can return to answering them my previous way (when I can use science to fight bogus science), or with the “Even if you’re right about AGW (and we all hope you are), still…” approach. There are plenty of good reasons to do the things that help mitigate AGW, aside from the fact that AGW is happening — one of them “entropy” itself, or the profligate using up of finite resources.

    Comment by Lynn Vincentnathan — 27 Aug 2010 @ 1:11 PM

  143. #102 Shirley J. Pulawski

    A few more perspectives. . .

‘Ignorance is bliss’, so they say. As long as they don’t look, it’s not there.

    I think I wrote something about this 4 or 5 years ago. It’s like a child with their eyes closed singing in gleeful joy to their parents, ‘You can’t see meeeee’!!!

Invert the scenario to the antithesis and that’s it. As long as they don’t look, global warming is not human caused. It’s a truly pathetic defense in the light of the bulk and scope of the scientific evidence. Reason and common sense are nowhere involved in the debate from their limited position.

    I think ‘immature’ is about the nicest word possible to describe the behavior.

    Naiveté, I think, has to be considered off the table for anyone that claims to subscribe to the scientific method in thought or deed.

Ignorant is probably the best word to describe it, in that they literally do have to ignore all the other evidence to pull off such an immature and gleeful position of childish bliss.


Fee & Dividend: Our best chance | Learn the Issue | Sign the Petition
A Climate Minute: The Natural Cycle | The Greenhouse Effect | History of Climate Science | Arctic Ice Melt

    Comment by John P. Reisman (OSS Foundation) — 27 Aug 2010 @ 2:01 PM

Lynn@138 – Virtually all food plants are very sensitive to temperature ranges, especially day vs night, especially during the period of bloom and pollination, and then in conjunction with humidity. For instance, tomatoes will not pollinate above their preferred temp range; fruit set stops for periods of unusually high temps or unusually high humidity, and those situations are becoming more “normal” where I live. Another: most fruit trees will not set if the night temps fall too low [even short of a frost] within about 48 hours of pollination. And during periods of high temps the trees will use available moisture for transpiration, to keep cool, instead of using it to grow the fruit. Many more examples; both the increasing average temps and the increasingly common extremes are already a problem for food production. Temperature and humidity extremes have effects on a wide variety of the disease problems of food plants as well, higher values of either enabling many unfriendly organisms. It really doesn’t bode well for the future, even well short of BPL’s desertification scenarios.

    Comment by flxible — 27 Aug 2010 @ 3:06 PM

Lynn (#142)–

I think the biggest point is probably the gulf between the fact that no real body is a perfect blackbody and the conclusion drawn that the greenhouse effect is incorrect. To argue that other things besides the greenhouse effect affect temperature is true; to argue that that means there is no greenhouse effect is NOT – it’s just a non-sequitur. Call it a slightly more subtle than usual strawman.

    Their graph shows minimum Lunar temps of around 100 K; that’s about 100 K colder than anything the Earth ever sees. Do they really expect anyone to believe that it’s all due to Earth’s greater mass? That’s an LOL.
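    To put rough numbers on it (a back-of-envelope Stefan-Boltzmann sketch; the albedos 0.11 and 0.30 are round textbook values, not figures from their paper):

    % Effective temperature T_e = [S(1 - a)/(4*sigma)]^(1/4), with
    % S = 1361 W/m^2, sigma = 5.67e-8 W m^-2 K^-4, and assumed round albedos a:
    \[
    T_e(\mathrm{Moon}) \approx \left[\frac{1361(1-0.11)}{4\sigma}\right]^{1/4} \approx 270\ \mathrm{K},
    \qquad
    T_e(\mathrm{Earth}) \approx \left[\frac{1361(1-0.30)}{4\sigma}\right]^{1/4} \approx 255\ \mathrm{K}
    \]

    Mass appears nowhere in that formula. The Moon swings enormously around its effective temperature because there is no atmosphere or ocean to store and redistribute heat, while Earth sits roughly 33 K above its 255 K effective temperature thanks to the greenhouse effect.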

    Give the Moon a Terrestrial atmosphere and you’d see much different temperatures, until that atmosphere leaked away again.

    Comment by Kevin McKinney — 27 Aug 2010 @ 3:14 PM

  146. BBC Radio 4 on climate.

Next, Roger Harrabin is presenting a couple of programmes on climate, including interviews with Bob Watson, Nigel Lawson and Stephen McIntyre. His own account so far carries the vague remark:

    some climate scientists think the warming will be restricted to a tolerable 1C or 1.5C.

    This is journalese.
1C warmer than when? I.e. now, or the beginning of the industrial revolution, or some other time?

    By when will this warming be reached?

    Will the warming of 1C or 1.5C then stop?

    Under which scenario i.e how much extra greenhouse gas is assumed and how fast will it be liberated?

OR is he perhaps (or his editor) thinking of the climate sensitivity to 2×CO2?

    1C or 1.5C will be ‘tolerable’ for whom?

Sloppy writing like this tends to go with sloppy thinking. It is not a good omen for another so-called investigation by an environmental correspondent. I hope I am wrong.

    Comment by deconvoluter — 27 Aug 2010 @ 3:52 PM

  147. Lynn (#108)

    there’s a parody of the Hertzberg et al. nonsense at DenialDepot:

    http://denialdepot.blogspot.com/2010/06/apollo-mission-giant-leap-contradicting.html

    It’s side-splittingly funny, and also quite pedagogical — read it, and you should be able to figure out what’s wrong with Hertzberg et al., without any need to get into esoteric arguments about entropy.

    Comment by CM — 27 Aug 2010 @ 4:03 PM

  148. Does anyone here care to comment on the paper by Spencer & Braswell at JGR?

    Link to Spencer’s Blog:
    http://www.drroyspencer.com/2010/08/our-jgr-paper-on-feedbacks-is-published/

Giving it a brief read-through was interesting, but I was interested in the reaction here, particularly on their methods and the paper’s effect going forward.

    Comment by sambo — 27 Aug 2010 @ 4:19 PM

“the fact that no real body is a perfect blackbody”

Not perfect, but the emissivity of most of the surface is pretty close to 1 (typically above 0.9, often above 0.99). Don’t forget that this refers to the infra-red.

Try e.g. Rubio et al., 2003, Int. J. Remote Sensing, Vol. 24, No. 24, 5370–90.

    Even snow is nearly black at these wavelengths (try Knuteson)

    Comment by Geoff Wexler — 27 Aug 2010 @ 4:24 PM

  150. Lynn 135,

    Your [edit] correspondent is parroting the line that for a layer of atmosphere to radiate the same amount both up and down (since it has a top and a bottom) somehow violates conservation of energy. It radiates “twice as much” as it should, in the view of these incompetents.
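    A minimal one-layer balance shows why no energy is created (an idealized sketch: an isothermal layer that absorbs perfectly in the IR, heated only by an upwelling flux F from below; both simplifications are assumptions, not a model of the real atmosphere):

    % Equilibrium for the idealized layer at temperature T_a: the absorbed
    % flux F equals the emission from its two faces, each sigma*T_a^4:
    \[
    F = \sigma T_a^4 + \sigma T_a^4 \qquad\Rightarrow\qquad \sigma T_a^4 = F/2
    \]

    Each face carries only half of what the layer absorbs, so total out equals total in; the downward half is simply the back-radiation they object to.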

    Comment by Barton Paul Levenson — 27 Aug 2010 @ 4:41 PM

  151. Thanks everyone. DotEarth is getting an earful.

MarkB, unquestionably a clique, extremely organized, but we are not allowed to mention it in polite company. Thanks for your help. You may notice that Andy selected my comment, which suggested a visit to RealClimate for info on the Montford book and quoted Stephen Schneider. I regret the need to use kid gloves, but especially since he started his job at Pace and left the regular reporting job, he does quite often call out the deniers. We just have to keep hammering away.

    Remember – as was said to Lynn Vincentnathan, in commenting one is writing not for the regular nasties, but for the literate lurker who might think they have a point. It’s important to point at correct information and make one’s points both polite and obvious if possible.

    More flies are caught with honey than with vinegar.

    Comment by Susan Anderson — 27 Aug 2010 @ 5:45 PM

Re Lynn Vincentnathan – it may help to point out that a layer of atmosphere has the potential to absorb radiation from above and from below, and thus will tend to emit radiation upward as well as downward. Try referring to “Schwarzschild’s equation”, which describes radiant intensity along a path through an absorbing medium at LTE (a perfect blackbody has an optical thickness of infinity with zero scattering). An object emits as much as it absorbs if it is at the same temperature as that which absorbs what it emits and emits what it absorbs (at LTE). The surface can be approximated as a perfect blackbody at the relevant wavelengths, with a range of temperatures – global average about 288 K. Because of nonlinearities, the average emitted flux won’t exactly correspond to the flux emitted at the average temperature, but the spatial and temporal variability of surface temperature is small enough that using the global average temperature can be a useful approximation. Space, as it relates to climatologically significant radiant fluxes, can be approximated as a perfect blackbody near absolute zero.
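    For reference, the simplest (no-scattering, LTE) form of that equation can be written schematically as follows, with κ the absorption coefficient, ρ the absorber density and B(T) the Planck function:

    % Schwarzschild's equation: radiance I along a path s is attenuated
    % toward, and regenerated from, the local Planck function B(T).
    \[
    \frac{dI}{ds} = \kappa\rho\left[B(T) - I\right]
    \]

    The blackbody limit is just the case where the path is optically thick enough that I has fully relaxed to B(T).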

Or to simplify, try: an object that is a perfect blackbody on all sides would emit in all directions according to its dimensions and temperature on each side, and would absorb according to what it intercepts coming from those directions. An object that only absorbs a fraction of the radiation it intercepts will emit a fraction of the radiation that a blackbody would emit.

    Comment by Patrick 027 — 27 Aug 2010 @ 10:59 PM

  153. Susan Anderson: Thanks for your comments on dotearth. They are very good. Andy Revkin’s problem is that he has a degree in journalism and needs a degree in physics. Everybody needs a degree in physics. Without actually understanding the reality, one tends to drift with the current. To be grounded in reality means to have done the experiments personally. Doing experiments is a good therapy when somebody is trying to drive you crazy, which is what the denialists are trying to do.

    How good are you at math?

    Children of today have the advantage of Dragonfly TV. I hope the schools are requiring them to watch it and do their own experiments.

    Comment by Edward Greisch — 27 Aug 2010 @ 11:22 PM

  154. Edward and Susan,

I’m taking it that the references are to the current thread on the Hauser imbroglio. It is interesting that the two links he highlights have to do with press coverage of climate change and the tendency of Nature and Science to highlight headline findings. I have a comment awaiting moderation there, but the whole comment can be seen at Eli’s place (current sea ice head post). It seems rather wrongheaded to criticize the science for the excesses of the press releases; by doing this, he is only playing Anthony Watts’ game.

    Comment by Rattus Norvegicus — 28 Aug 2010 @ 12:25 AM

Re #124: As a physicist, I will pipe in on that article that you were wondering how to respond to… First of all, I don’t know how the physicist quoted comes to be called a “leading U.S. physicist”. I am sure he is a fine guy and appears to have had a successful career in physics, but there is no evidence that he is considered by colleagues to be in any way special, e.g., he hasn’t been elected a fellow of APS or AAAS or won any awards from them, at least according to this blurb at the website of the small company that he founded: http://www.andersonmaterials.com/about_us.html#Charles

And, while it is hard to make heads or tails of his claims on the satellites, I read something here that he wrote about the greenhouse effect http://climaterealists.com/index.php?id=5926 and it is riddled with serious misunderstandings. His claim that the cooling effect of greenhouse gases in absorbing incoming solar IR is much greater than the effect in absorbing outgoing IR is nonsense (partly because of the considerations below, and partly because he says that 45% of the incoming solar radiation is in the IR without understanding that almost all of that is in the near-IR, where CO2 does not have significant absorption and water has some absorption, but less than in the far IR; see http://www.globalwarmingart.com/images/7/7c/Atmospheric_Transmission.png ). His claims about how little IR the surface emits make no sense whatsoever… They violate what we know from the Stefan-Boltzmann equation (and the measured fact that most of the earth’s surface has an emissivity in the IR that is close to 1, i.e., only a few percent below that of a perfect blackbody emitter). And, furthermore, he seems completely unaware of the fact that scientists understand the role played by other transfer mechanisms besides radiative transfer within the troposphere. That is why the focus is on calculating the radiative forcing at the top of the atmosphere, since the only way that energy can be transferred (to any significant extent) between the earth-atmosphere system and space (and the sun) is via radiation. So, whatever abilities Dr. Anderson has in his field of materials science, he seems woefully ignorant of basic physical facts about the atmosphere and climate.
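    To put a number on the Stefan-Boltzmann point (288 K is the usual global-mean surface temperature; the emissivity of 0.95 is just an illustrative value within the measured range, not a claim about any particular surface):

    % Surface IR emission F = epsilon*sigma*T^4, with sigma = 5.67e-8 W m^-2 K^-4:
    \[
    F = \varepsilon\sigma T^4 \approx 0.95 \times 5.67\times10^{-8} \times (288)^4 \approx 370\ \mathrm{W\,m^{-2}}
    \]

    That is, the surface radiates hundreds of watts per square metre, which cannot be reconciled with his claims of very small surface emission.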

    Comment by Joel Shore — 28 Aug 2010 @ 8:21 AM

  156. I am reading about the Russian wildfires. The official media are quoting NASA about the locations of the fires. It’s a big country.

    http://legendofpineridge.blogspot.com/2010/08/russias-2010-wildfires-miser-pays-twice.html

I think we are helping them pinpoint their fires. Putin dismantled the federal firefighting service, and now Russian environmentalists are talking about this mistake.

    I think the Russians are very thankful to NASA. President Medvedev is affirming that there is global warming. Before this summer, Medvedev spread the same conspiracy theories as our denialists Inhofe and Cuccinelli, but the party line has changed. Of course, we don’t know if his words will lead to new commitments.

I know they say that the immediate cause of the fires was a “blocking event,” but scientists also say that it is possible such events could become more extreme with global warming. They are being conservative about this because their computer models can’t relate global warming to “blocking events” yet.

    http://legendofpineridge.blogspot.com/2010/08/president-medvedev-says-russians-have.html

    Comment by Snapple — 28 Aug 2010 @ 8:37 AM

  157. #148 sambo

    Looks like Spencer might be the poster boy this season since Monckton will have to regroup and come up with all new tired old arguments.

It looks like another one of those “we don’t know everything, therefore we don’t know anything” papers. . . move along, nothing here to see, move along.

    So, another link in the Lindzen chain. . . and I thought that had been dealt with. Silly me.

    On to the conclusions!!! Please pardon my farcical unskilled examination.

Para. 56: Spencer claims his simple model shows something.

Para. 57: He claims “internal radiative forcing” (aka natural variability mechanisms) is a better term than the previous “internal climate variability” (aka weather) because the latter does not distinguish between radiative and non-radiative temperature change. I personally don’t know what non-radiative temperature change is; maybe someone else can help me out on that.

In para. 58 he seems to assert that we are still not good at predicting the weather due to complexity, and that feedback inside the weather signal is hard to identify. Hmmm. . . I wonder, if you removed the weather variables and dealt with the identifiable climate-related components, might one be able to see if feedbacks are identifiable? I dunno, sounds kinda crazy.

In para. 59: ‘some’ AR4 models showed roughly parallel lines in short- and long-term feedbacks, but it’s too soon to say short-term and long-term feedbacks are the same. He’s probably right about that, but wrong about the direction of inferred feedback. He is pointing out the uncertainty in the AR4, of course.

In para. 60 he seems to be saying that examining some of the wiggly lines produces a slope in 9 years of data similar to feedbacks diagnosed by Lindzen and Choi [2009] as well as Spencer et al. [2007], and that it’s not obvious how this work relates to long-term climate sensitivity.

In para. 61 I think he put in the extra effort to say that extra caution is needed in interpreting regression relationships between surface temps and TOA fluxes, and in interpreting temp variations.

In para. 62 he underscores the notion that climate is really complex, and that since we can’t freeze the system in place, we really can’t know what’s going on at that moment. He further purports that we need to understand more stuff as it moves through time.

In para. 63 he nails it all together by saying it is clear we can’t be accurate regarding short-term feedbacks (or even long-term climate sensitivity). Finally, he claims that he hopes his work will help everyone realize that he has provided some really important stuff here about how much we don’t know, and that he is happy to help correct everyone’s misunderstanding.

To summarize, I would guess that he wrote this paper so that some key phrases could be found in the peer-reviewed literature, such as “It is clear that the accurate diagnosis of short‐term feedbacks (let alone long‐term climate sensitivity) from observations of natural fluctuations in the climate system is far from a solved problem.”

    His blog article is a bit more obvious

    http://www.drroyspencer.com/2010/08/our-jgr-paper-on-feedbacks-is-published/

    After years of rewrites and hostile reviewers. . ., . . . the climate system has given the illusion of positive feedback.

    Ah, so that explains observed trends in rising temperatures, melting glaciers, more named storms, sea level rise and shifting seasons and bio systems. It’s an illusion.

    Now it all makes sense and we can sleep better at night with our warm fluffy comforter and pillows :)

I’m looking forward to a more erudite examination of his paper though. Focusing on how to recognize feedback inside signals is what it’s all about. I’m not so confident that was the reason this paper was written.


Fee & Dividend: Our best chance | Learn the Issue | Sign the Petition
A Climate Minute: The Natural Cycle | The Greenhouse Effect | History of Climate Science | Arctic Ice Melt

    Comment by John P. Reisman (OSS Foundation) — 28 Aug 2010 @ 8:50 AM

  158. For my book-in-progress on sea level rise, I need to compile a list of countries and/or cities that are certain to have problems with sea level rise. I shall call these my “poster-children of sea level rise.” Any suggestions for the list will be welcome.

    Comment by Hunt Janin — 28 Aug 2010 @ 10:11 AM

“Andy Revkin’s problem is that he has a degree in journalism and needs a degree in physics. Everybody needs a degree in physics. Without actually understanding the reality, one tends to drift with the current.”

    I partially agree. It would be so much better if journalists could read and follow the published papers, at least the conclusions. A physics, maths or similar degree would certainly help. But ideally I would hope for more from a senior science journalist:

    (a) Some experience in scientific research.
    (b) Experience in having his/her work monitored for accuracy.

The reason for (a) is that so many people in the media have no idea what happens in a research community. They also don’t know much about how papers are published, rejected, corrected, improved by later papers, etc. I suspect that the ‘social networking’ model applies most to environmental correspondents who prefer gossip to mastering their subject.

BBC Radio had an approximation some time ago, i.e. John Maddox, who used to provide long radio reports called “Scientifically Speaking”. He started life as an astrophysicist of modest accomplishments and was for many years editor of Nature as well as a reporter for the BBC. Of course he too had his critics.

    Comment by Geoff Wexler — 28 Aug 2010 @ 10:25 AM

  160. Some of Spencer’s blog followers will need to selectively agree w/him on his feedback paper while still emphatically disagreeing with his heretical refusal to comply with G&T dogma. Deemed definitely wrong on a fundamental matter, correct on something else much more subtle, a strange and tense heterodoxy but the man is too valuable to be discarded as an apostate.

    Tough row those folks must hoe, positively rocky.

    Comment by Doug Bostrom — 28 Aug 2010 @ 11:05 AM

  161. Edward Greisch (and *thank you* for the compliment; it really does help to be encouraged):

Before I start, note that Andy Revkin has a degree in biology, and has been covering climate change well since the mid-70s. He’s only gone off a bit recently, and I think aside from his natural desire to see things as not as bad as they are (who would not join him), he is tired of the wars, having been in the crosshairs forever. Some of us hotheads are partly to blame. Hatred makes enemies, and we too can be too quick on the draw. This is not to say that I don’t totally agree with the many who are trying to enlighten him about the consequences of his making 97:3 look more like 60:40. He does try, a lot, and deserves credit for it. You should also know that he is a pioneer for the NYTimes on the media side: DotEarth was the beta for their new comment formats, and his use of Twitter was also cutting edge in the beginning. This all consumes a lot of time. When I taught drawing, I noted that the best potential students (those best able to listen, absorb new concepts and apply them) changed from physicists etc. to computer scientists in the 90s. It seemed to me the best brains were going there.

    Back to your question, this old dog got stymied by differential equations in 1973 or thereabouts at MIT, and dropped out to become an artist. I was doing biochemistry, but ended up teaching drawing to a lot of good scientists (Feynman was one of our alumni), and drawing from life is another good reality check.

    Since very few people are willing or able to climb the heights of analytical science, let alone physics, my efforts are bent towards convincing observant laypeople to believe their own eyes and do their own research, rather than believing the slanted digests that have become ubiquitous since the tobacco era and all the dragon’s teeth thinktanks. The new info on the Koch brothers is an eyeopener, but I haven’t seen much on the MSM yet (of course Rachel Maddow did a good job with it, she’s a superb reporter, digs and has a twinkle with it).

    I was once told a person didn’t know cars but they knew people, which helped in choosing a used car. I think that’s the best we can do.

    Until the Jonas Brothers, Miley Cyrus, the Black-Eyed Peas and all the other high-energy-media types climb on board and start thinking about their promotion of virtual reality as life, there’s not much hope, I think.

    Comment by Susan Anderson — 28 Aug 2010 @ 11:20 AM

  162. How will the Polar Bear survive?

    Comment by Basil Hatford — 28 Aug 2010 @ 1:52 PM

  163. Really, global warming science has had a bit of a victory in the Russian petrostate.

    “[T]he Russian position has always been that climate change is an invention of the West to try to bring Russia to its knees,” says Vladimir Chuprov, director of the Greenpeace energy department in Moscow.

    When President Medvedev visited Tomsk two months after the Copenhagen Climate Conference, he characterized the global-warming debate as “some kind of tricky campaign made up by some commercial structures to promote their business projects.”

    Russia’s President Medvedev recently stated that global warming is happening. RIA Novosti (7-31-10) reports that President Medvedev stated:

    “What is happening to our planet’s climate should motivate all of us, I mean, states and heads of non-governmental organizations, to take more active steps to resist global warming.”

    This quote was in reference to the Winter Olympics in Sochi, not the fires. Still, it seems to be the terrible fires that got the official media talking about global warming–even tho’ scientists do not attribute the immediate cause of the fires to global warming.

This affirmation of global warming is an about-face for President Medvedev, formerly chairman of GAZPROM’s board.

    http://legendofpineridge.blogspot.com/2010/08/union-of-concerned-scientists-debunks.html

    Comment by Snapple — 28 Aug 2010 @ 3:45 PM

  164. Susan Anderson @161 — Illuminating, thank you.

    And, as always, thank you for your efforts.

    Comment by David B. Benson — 28 Aug 2010 @ 4:50 PM

On Spencer’s blog, he states “As we show in the new paper, the only clear signal of feedback we ever find in the global average satellite data is strongly negative, around 6 Watts per sq. meter per degree C.” That would mean the 8 degree warming at the termination of the last ice age would have been accompanied by a 48 W/m^2 negative feedback response. Eyeballing the graph at http://en.wikipedia.org/wiki/File:Vostok_420ky_4curves_insolation.jpg, it looks like the Milankovitch forcing was only about 50 W/m^2 – which would mean that a net change of 2 W/m^2 would raise the temperature by 8 degrees. The time constants then were 5-10k years, so that may make a difference.
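    Writing out that first step explicitly (taking Spencer’s quoted feedback parameter and the round 8 K deglacial warming at face value):

    % Flux response implied by a feedback parameter lambda = 6 W m^-2 K^-1
    % (Spencer's figure) for a sustained warming Delta T = 8 K:
    \[
    \Delta F = \lambda\,\Delta T = 6\ \mathrm{W\,m^{-2}\,K^{-1}} \times 8\ \mathrm{K} = 48\ \mathrm{W\,m^{-2}}
    \]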

    Comment by Brian Dodge — 28 Aug 2010 @ 9:56 PM

  166. “We’ve been a little preoccupied recently, but there are some recent developments in the field of do-it-yourself climate science that are worth noting.”

    Uhh….you guys are climatologists and bloggers, what could be so preoccupying?

    Comment by CStack — 29 Aug 2010 @ 1:09 AM

Looking at the Uni Bremen site, the NW and the NE passages are now open for business for, I believe, only the second time in history, and the season still has a few more weeks of melting to go. The Arctic ice charts seem to indicate that this summer is already the 3rd-worst melt ever, just behind the summers of 2007 and 2008, and it could well take second spot when the melt season draws to a close.

    Comment by Lawrence Coleman — 29 Aug 2010 @ 5:58 AM

  168. I disagree with the notion that in order to be an effective science writer one needs a scientific degree. Elizabeth Kolbert, Chris Mooney and Carl Zimmer are good writers, and I am not sure they have degrees in the fields about which they write.

    Comment by Deech56 — 29 Aug 2010 @ 6:00 AM

  169. I can’t talk about the science because I don’t know much science, but I know that defending science/scientists from censorship is very important.

Persecution and censorship of scientists happens more in Russia, but US politicians and fossil fuel entities discredit and persecute US climate scientists, too. They want to censor scientists by intimidating them and ruining their reputations. The illegal release of the Climategate emails made me notice this.

    The denialists and their sponsors are very hypocritical to complain about censorship when really they are trying to censor real scientists. They don’t fool me because I know exactly how political operators do this in Russia, and the tactics being used against climate scientists are pretty much the same.

    Politicians in the pockets of fossil fuel entities and juries should not be deciding what is good science. Scientists should be deciding this, and they are not in a huge conspiracy. In Russia professional scientific organizations are not as strong as here, and they have more fake science. Still, real Russian scientists are pretty brave, and they are mostly true professionals.

    I have an observation and a question.

    OBSERVATION

    I am reading about the Russian fires. The Russian government is using data from US satellites to locate the fires, but so is Russian Greenpeace; and Greenpeace is using the data to confront the government when they don’t tell the truth.

The Russian government even shut down the website of the state forestry-protection agency (RCFH) when RCFH reported that there were fires in the radioactive Bryansk forest. An RCFH official said they shut their site to stop panic, but according to Russian Greenpeace one expert on the fires, V.V. Kostin, was fired for not following an order about releasing information to journalists:

“The culmination of the government’s information policy was an order from the head of Rosleskhoz forbidding employees of the agency and of organizations subordinate to it from communicating with the mass media or providing information about forest fires without clearance from the directorate for science, education, international cooperation and information support. One of Rosleskhoz’s leading specialists in fighting forest fires, V.V. Kostin, was outright fired on the basis of this order for unsanctioned communication with journalists.”

(Translated from the Russian original.)

    http://www.greenpeace.org/russia/ru/news/4922865

    QUESTION

    Scientists seem to agree that the immediate cause of the heatwave in Russia was “blocking,” but they also said that their computer models can’t presently tell them if blocking will become more of a problem due to global warming/climate change.

Would it be correct to say that we really don’t know if this severe blocking was due only to weather? Would it be correct to say that this heatwave COULD be due to global warming, but that scientists can’t yet demonstrate this?

    Comment by Snapple — 29 Aug 2010 @ 9:16 AM

  170. re: #168 (Deech56)

    Correction partially accepted. I found Chris Mooney and Elizabeth Kolbert to be well worth reading.

It is OK for a non-scientist to write about science provided he or she avoids pontificating about technical issues by substituting gossip for understanding. There is also a difference between what a columnist writes and what a senior reporter writes; the latter has even more responsibility to know his or her stuff. Perhaps there could be a special web site where prospective environmental writers could open their efforts to scrutiny by serious experts who could check them for accuracy and depth?

The lesson which I learned from the CRU email hack is that some environmental writers were too easily swayed by a determined contrarian offensive, and that this was partly due to years of experience in being rewarded for superficiality, fostered by an inability to go deep enough into the issues. How can a non-specialist who relies on chat know how much credence to give to Stephen McIntyre or Andrew Montford?

    Comment by Geoff Wexler — 29 Aug 2010 @ 9:58 AM

Hi flxible, #144. Thanks for your insights about plants. Could you give me some sources for them? I’m revising a paper for publication on food rights and climate change. Thanks.

    Comment by Lynn Vincentnathan — 29 Aug 2010 @ 10:33 AM

  172. Hunt Janin, 28 August 2010 at 10:11 AM

    The Netherlands being one of the most obvious problem areas wrt sea level rise, I suppose you’re already aware of the Delta Commission’s report. In case not,
    here it is.

    Comment by Anne van der Bom — 29 Aug 2010 @ 10:35 AM

Thanks all for your help with the “black bodies, pooh, ergo no GH effect” business. Here is the site the denialist finally admitted to getting his low-down from on how AGW is now irrefutably disproved :) http://climatology.suite101.com/article.cfm/no-greenhouse-effect-in-semi-transparent-atmospheres

Apparently it is the well-esteemed climate scientist Ferenc Miskolczi who has definitively disproved the GH effect, so we can all go home now….

    ((How come there are more denialists now than 20 years ago, when AGW had not reached the golden .05?))

    Comment by Lynn Vincentnathan — 29 Aug 2010 @ 10:43 AM

  174. Snapple, try MT’s thread here on blocking events and climate:
    http://initforthegold.blogspot.com/2010/08/seeking-precedent-for-asian-jet-stream.html

    Comment by Hank Roberts — 29 Aug 2010 @ 12:02 PM

  175. Lynn@171 re plant climate sensitivity
My understanding comes from a lifetime of “anecdotal” experience in food production, primarily fruit and juice, and a passion for my own strain of tomatoes — for some tomato info scroll down to “Causes of Poor Tomato Fruit Set” here – tomatoes are a very important commercial crop – and for the basics on fruit tree temperature needs see here and also here — an interesting observation in my area is that after a mild winter, when there is likely to be a poor fruit set, the following fall when the night temperatures start cooling again, some trees will pop up random blooms [having finally made up their “chill hour” needs], which of course doesn’t result in any fruit production, with dormancy about to start.

With an awareness of climate change I also take note of reports such as this on temperature effects on seed production, and general evaluations like this. Our food production is well adapted to specific climatic conditions, and it doesn’t take much of a shift in one aspect to wreak havoc on commercial production, as seen in Canadian prairie grain production this year, where poorly timed heavy rains resulted in a total wipeout in large areas. On top of the Russian drought-induced wheat-export ban, I think the coming year will see very high prices.

    Comment by flxible — 29 Aug 2010 @ 12:53 PM

  176. Thanks, Dr. Roberts.

    John Reisman–your site seems like it is not working at the moment. You have some good news links.

    Comment by Snapple — 29 Aug 2010 @ 1:36 PM

  177. #168 Deech56

I agree. Writers don’t need a scientific degree. They do need to understand the relevant contexts. Unfortunately that is not so easy. It’s like when you are hiking in the mountains: you see a ridge up ahead and you think, ahhh, I’m almost there; then you get to the ridge and realize you are not there, so you need to go on to the next ridge.

Climate science has a lot of ridges. And getting to one or even a few is probably not enough to have the relevant context for writing.

I have to admit that I see the difficult challenge. One who is unknowledgeable in such contexts may very well hear someone say “but it’s the sun”, and have someone tell them that solar output is increasing, and without even checking they might just think: well, if that’s true, then it makes sense.

    Or it’s natural cycle. And without context, it sounds good.

It’s a real challenge. So it needs more chipping away at. Then those looking may begin to see that there are a lot of ridges, rather than just the one they were presented with. And then they start realizing that that was a fake ridge. Once they find a few fake ridges, the science context might start taking hold.

    #169 Snapple

Generally climate drives weather, but predictability of weather is difficult. Trends will play themselves out and will probably help improve the climate models. Then we may have more predictability.

Blocking might become more prevalent, but the resolution needed to predict it may still be a way down the road.

    I did some climate/weather context on it:

    http://www.ossfoundation.us/projects/environment/global-warming/summary-docs/leading-edge/2010/aug-the-leading-edge

    #176 Snapple

    It can go down when it gets hit with too many requests, but it has a cron job that reboots the site after 5 minutes.


Fee & Dividend: Our best chance | Learn the Issue | Sign the Petition
A Climate Minute: Natural Cycle | Greenhouse Effect | Climate Science History | Arctic Ice Melt

    Comment by John P. Reisman (OSS Foundation) — 29 Aug 2010 @ 2:38 PM

  178. Brian Dodge (#165): Actually, the plot that you referred to shows the change in insolation at one particular latitude at one particular time of the year. The global annual change in insolation from the Milankovitch oscillations is very close to zero. The actual global forcings come almost entirely from the changes in albedo (due to the growing or shrinking ice sheet and secondarily vegetation changes) and changes in the levels of greenhouse gases (with an additional small contribution due to changes in aerosol loading). See, for example, Hansen’s discussion here: http://naturalscience.com/ns/articles/01-16/ns_jeh2.html

    But your basic point is well-taken: with such a large negative feedback, it becomes very difficult to explain what happened during the ice age–interglacial cycles. Either there has to be a huge forcing operating that we don’t know about (or a huge underestimate of the forcing due to ice albedo change or what-have-you), or there has to be some reason why the negative feedback didn’t operate during those times.

    Comment by Joel Shore — 29 Aug 2010 @ 3:01 PM

  179. Re #173.

    He does not claim to have disproved the greenhouse effect, only to have discovered a new version of the saturation argument, i.e., to have shown that the effect is harder to enhance by adding more greenhouse gas.

    I have not understood it, partly because I have not had the time.

    Just one question/comment stands out, which may not be trivial. His use of the term “thermal equilibrium” between the atmosphere and surface is highly ambiguous. Two systems at such different temperatures are never in thermal equilibrium*; they can of course approximate steady-state conditions, which is very different … but English is not his native language.

    Is this a sophisticated variation of Gerlich and Tscheuschner’s revised 2nd law of thermodynamics?

    —————–
    * At equilibrium we would all be dead.

    Comment by Geoff Wexler — 29 Aug 2010 @ 4:16 PM

  180. Re: My previous comment:

    http://www.realclimate.org/wiki/index.php?title=Ferenc_Miskolczi

    Fatal oversight; forgot to check RC first.

    Comment by Geoff Wexler — 29 Aug 2010 @ 5:21 PM

  181. Re: #179. Thanks for deleting (I hope) the comment I called “my previous comment”.

    Comment by Geoff Wexler — 29 Aug 2010 @ 6:49 PM

  182. > Snapple says: 29 August 2010 at 1:36 PM
    > Thanks, Dr. Roberts.

    Errrror; no Dr. with my name, I’m a dropout …

    Comment by Hank Roberts — 29 Aug 2010 @ 9:12 PM

  183. How come there are more denialists now than 20 years ago?

    Denialism consists of sources and sinks (i.e., consumers). So far the sources have become more active and ugly whenever there is a sign of action, e.g., at Kyoto and Copenhagen. Not surprisingly, expensive campaigns pay off, and the number of consumers has increased too.

    Will it ever be checked? Only by education at all levels, including that of scientists who think that what they have learned from the media is sufficient for them to make judgments outside their expertise.

    The garbage to which you have directed us, Lynn, has its part to play. Many scientists, for example, have no idea about the nature of the propaganda which is being used. Some of them think that ‘skeptical’ papers are being suppressed, but might get a shock if they were better informed about the ones that got through (like McLean, de Freitas and Carter, or Gerlich & Tscheuschner (correctly spelt this time), and also, very likely, Miskolczi). These papers provide faint echoes of the Chinese Cultural Revolution, when academics were ridiculed by the uninformed.

    Comment by Geoff Wexler — 30 Aug 2010 @ 5:17 AM

  184. 183 (Geoff),

    Will it ever be checked? Only by education at all levels…

    I disagree from two perspectives.

    First, my experience with “normal people” who have not taken an almost morbid interest in the subject is that they don’t really care, or know about it, one way or the other. It’s a distant problem compared to their everyday lives, and it has a sort of nagging gravity to it, the way nuclear war did in the sixties. It was always there, but you couldn’t worry about it every moment of every day, and disarmament seemed so impossible that you just hoped a nuclear war never happened and tried not to think about it.

    Climate change, for most people, is a lot like that, IMO. They know it’s there. They even strongly suspect it’s true. But they see no point to taking it further, because the perceived repercussions of either action or inaction are so awful, they just would rather worry about saving for their kid’s college education, or not being included in the next round of layoffs, or whatever, and let someone else worry about the big stuff.

    Secondly, I think the garbage we see on the Internet magnifies the size of that population ten thousand fold. They are very vocal, but if you look, it’s the same twenty or thirty regulars, day after day after day, on all of the sites, both science and denial psycho-science (I won’t even honor them with the tag “pseudo-science”). There are probably a few hundred retired engineers that have nothing better to do than to get online and trumpet how much smarter they are than everybody else, because they can see through this wicked conspiracy. After that, they probably have a few thousand fans. But that’s it. There aren’t as many as it seems.

    Bring it up with a random group of people, and most of them know nothing.

    The noise is all out of proportion to the reality, and the reality is very, very different from the noise. Not better… it’s more of a resigned, frightened apathy than the angry, rabid denial that is seen on blogs.

    It’s still a problem, and the deniers still need to be called out to prevent their eventual use as a convenient excuse for inaction by the common man. But correcting everyday apathy will require two things.

    The first is education, not on the complex details and denial fallacies, but rather simply on the gravity of the problem, the likelihood of future events, and the importance of reasoned, measured action now, instead of frantic, panicked action too late. I think too many people think it won’t happen for a century, and that by then we’ll figure out a technology that neatly makes the problem vanish.

    The second, unfortunately, will be a string of high profile, nasty climate events. As much as people talk about The Greatest Generation, America would not enter WW II until they were dragged in, kicking and screaming, by a nasty dose of Pearl Harbor. The same is true of terrorism and 9/11.

    Curiously, the 1918-1919 flu pandemic, which should have been a wake-up call for virus research and management, instead led Americans to merely want to forget it as quickly as possible.

    I don’t like it. I just think that this is the way it is. It’s human nature, particularly in today’s modern society where we buy insurance for everything, and count on 4 minute EMT response times, life saving heart surgery, and such, while our everyday lives are so full of money/paper/system management that there’s little room for deep thinking or future social planning.

    Comment by Bob (Sphaerica) — 30 Aug 2010 @ 9:12 AM

  185. As a brief anecdote, I was at a family picnic last weekend. My niece had signed up for an environmental biology course as a high school sophomore, and was given a global warming book to read over the summer as prep work.

    She pulled the typical teen attitude of rolling her eyes, scrunching her face, and disdainfully whining “why do I care about that, I mean come on?” That’s disheartening, even just as an attitude towards education, though if she finishes the book she might come to care, or her teacher might enlighten her.

    But not one of the dozen other people there, from college students to recent college grads to adults, had anything to say on the issue (except for me, after a long pause while hoping desperately that someone else would show the same level of interest).

    And when I said that she should be concerned, that it may be the biggest factor in her generation’s lifetime in many different ways… no one agreed or disagreed. There was just silence, followed by a quick and successful effort to change the subject.

    Comment by Bob (Sphaerica) — 30 Aug 2010 @ 10:02 AM

  186. It has been a while since I’ve tried to read econometrics. But as far as I can tell, we have a novel statistical method that a) when presented with (structured) noise and data, preferentially selects noise over data, and b) fails to reproduce (e.g.) the Medieval Warm Period or the dip in temperature at the Maunder minimum.

    Unless I’ve read the equation on Page 13 wrong, it is based on a method with no penalty for missing the intercept, and only penalizes error in the slope (rate-of-change) coefficient. So, under this method, it does not matter at all if you get the actual temperature correct, it only matters that you get a good fit to the slope. Interestingly, the resulting reconstruction has two pieces, one where the slope is down, one where the slope is up.

    The other aspect of method that I find puzzling is that it constrains the sum of the (absolute value of the) regression coefficients, without respect to the units in which the underlying series are measured.

    For economists, that wouldn’t matter. Typically, economists estimate elasticities (which are constructed to be unitless). Here, it seems to me that the method fundamentally introduces a totally irrelevant tradeoff between the units in which an underlying series is measured and the resulting estimated relationship.

    In other words, again looking at the equation on page 13, if you rescaled one of your underlying series (say by dividing by 1,000,000), for a given Lagrange multiplier lambda, you’d get a new solution that would reduce the coefficient on that series, in this constrained estimation, nearer to zero.

    In short, I don’t think this method is robust to the *units* in which the underlying proxy series are recorded.

    Again, to an economist, that typically doesn’t much matter. Or, it can be made not to matter by estimating a log-linear regression so that the coefficients are unitless elasticities.

    So, I don’t pretend to understand the method, but if that’s a constrained multivariate OLS, then as a matter of arithmetic, the solution is not going to be independent of the units chosen for the underlying predictors.
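
    Here is a minimal numerical check of that arithmetic point in Python, using scikit-learn’s Lasso (the data are synthetic and purely illustrative; I am not claiming this matches the paper’s exact setup):

        import numpy as np
        from sklearn.linear_model import Lasso

        # Two equally informative synthetic predictors of y.
        rng = np.random.default_rng(0)
        x1 = rng.normal(size=200)
        x2 = rng.normal(size=200)
        y = x1 + x2 + rng.normal(scale=0.5, size=200)

        X_orig = np.column_stack([x1, x2])
        X_resc = np.column_stack([x1, x2 / 1000.0])  # same information, different "units"

        for label, X in [("original units", X_orig), ("x2 / 1000", X_resc)]:
            print(label, Lasso(alpha=0.1).fit(X, y).coef_)

        # In the rescaled case the second coefficient would have to be ~1000
        # to carry the same signal, so the L1 penalty zeroes it out: the fit
        # now depends on the units chosen for the predictors.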

    What I’d really like to see is a plot of the (absolute value of the) Lasso regression coefficients vis-a-vis the regression coefficients from the unconstrained OLS. The point of the lasso is to constrain variation, but in fact nothing is free. I would suspect that an unintended side effect of this particular formulation is that the use of the constrained estimation means that the choice of physical units for the underlying series is now material.

    If so, wouldn’t it be a hoot if one of the things this approach did was simply to minimize the prediction’s reliance on those series that just happened to be expressed in smaller physical units?

    If true, well, if you arbitrarily re-weight the predictors to satisfy the lambda constraint (page 13), you’d kind of expect to see poor performance.

    Apologies if I’ve missed the point. But it seems like running a regression constraining the sum of the regression coefficients, when the underlying series are measured in arbitrarily different physical units, is just asking for trouble.

    What I didn’t see in the paper is a test of whether or not this method would reproduce temperature changes from pseudo-data in which the proxies are definitely related to the temperature by construction.

    In other words, this looks like bad science to me. What you have is a conclusion that this method, which ignores the intercept and constrains the sum of the regression coefficients, is unable to reproduce the standard errors found around other hockey stick reconstructions.

    Hmm.

    If I had to pick a favorite part of the paper, it would have to be the section on the Medieval Warm Period.

    Let me see if I have this right. This novel statistical approach does not reproduce any Medieval Warm Period. They explain this by saying that the model does not capture modern warming either, so you wouldn’t expect it to capture the Medieval Warm Period.

    Comment by Christopher Hogan — 30 Aug 2010 @ 10:11 AM

  187. Bob (Sphaerica), Ignoring for the moment that I often disagree with your positions on substance, I think your comment 184 is an apt and cogent assessment of the public situation.

    Comment by Rod B — 30 Aug 2010 @ 10:23 AM

  188. Bob @ 184

    they don’t really care, or know about it, one way or the other

    Probably true for many kinds of normal people, which is not to deny that all the noise does have an effect. Around the water cooler, I’ve certainly seen people’s opinions blow with the wind on this topic, denialism gaining the most traction as part of a suite of political issues and manipulations. It’s almost like a social parroting induced by politically obsessive appeals and badgering.

    Somehow a culture that respects rigor over one that celebrates sensate aggravation and magical thinking needs to be encouraged. As it is, people get amped up or attached to a meme, and it becomes hard to have a sensible conversation on any topic.

    Comment by Radge Havers — 30 Aug 2010 @ 10:31 AM

  189. Bob@185 & 186 makes an astute analysis of it all. :(

    Comment by flxible — 30 Aug 2010 @ 10:55 AM

  190. >>The second, unfortunately, will be a string of high profile, nasty climate events. As much as people talk about The Greatest Generation, America would not enter WW II until they were dragged in, kicking and screaming, by a nasty dose of Pearl Harbor. The same is true of terrorism and 9/11.

    >>Curiously, the 1918-1919 flu pandemic, which should have been a wake up call for virus research and management, instead lead Americans to merely want to forget it as quickly as possible.

    I don’t want to go too far off topic, but your second paragraph undermines your first. Terrorism never was a serious problem. The 9/11 attacks killed 3,000 people, but many times that number die in car accidents every year. The main cause of accidents is speeding and inattention, but I don’t see any campaigns to make people drive smarter. Likewise, the wars waged to stop terrorism did not. Afghanistan and Iraq are as unstable as ever, and by CIA estimates, terrorism is as much a threat as before 9/11 (though, as I stated before, the threat is small compared to others, and mainly emotional).

    It is easy to get people whipped up for war. But it is much harder to get them to take common-sense action against more real but less dramatic threats, such as flu epidemics or climate change. The fact that the public did not want to invest in virus research after the awful 1918-1919 flu epidemic seems to point out that, even if we have a relatively large environmental disaster, the public won’t necessarily want to take action, either.

    I just think we should take each situation by itself without generalizing, especially about human responses, which are hard to quantify and predict.

    Comment by Paul Tremblay — 30 Aug 2010 @ 11:17 AM

  191. #186 Christopher Hogan:

    Unless I’ve read the equation on Page 13 wrong, it is based on a method with no penalty for missing the intercept, and only penalizes error in the slope (rate-of-change) coefficient. So, under this method, it does not matter at all if you get the actual temperature correct, it only matters that you get a good fit to the slope. …

    I don’t think so. All betas including the intercept are penalized for getting the fit wrong, in the usual least-squares way. Additionally, all betas except beta-0 get penalized for not being zero. This penalty is linear, not quadratic, and seems to push all betas that are already relatively small toward zero. I have understood from descriptions of the Lasso that this tends to effectively eliminate most of them… in this case I really wonder if this is a sensible thing to do, as the whole idea with multi-proxy reconstructions is precisely to extract a weak common-mode signal from a large number of noisy proxies combined.

    The other aspect of method that I find puzzling is that it constrains the sum of the (absolute value of the) regression coefficients, without respect to the units in which the underlying series are measured.

    True, but… “the matrix of predictors X [i.e., the table of proxy time series -- MV] is centered and scaled”. So the units/scale are not entirely arbitrary.
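
    A minimal sketch in Python (scikit-learn assumed; synthetic data, purely illustrative) of why that centering and scaling matters: once each column is standardized, rescaling a predictor no longer changes the lasso fit.

        import numpy as np
        from sklearn.linear_model import Lasso
        from sklearn.preprocessing import StandardScaler

        # Synthetic data; only the "units" of the second column differ.
        rng = np.random.default_rng(0)
        x1 = rng.normal(size=200)
        x2 = rng.normal(size=200)
        y = x1 + x2 + rng.normal(scale=0.5, size=200)

        for label, col2 in [("original units", x2), ("x2 / 1000", x2 / 1000.0)]:
            Z = StandardScaler().fit_transform(np.column_stack([x1, col2]))
            print(label, Lasso(alpha=0.1).fit(Z, y).coef_)

        # Both lines print essentially the same coefficients: centering and
        # scaling the predictor matrix removes the dependence on units.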

    Comment by Martin Vermeer — 30 Aug 2010 @ 11:55 AM

  192. Apologies for being off-topic but a judge in Virginia has ruled against the state Attorney General, Ken Cuccinelli, in the “fraud” case against Michael Mann when he was at the University of Virginia. From the article at http://www2.timesdispatch.com/news/news/2010/aug/30/judge-rules-against-cuccinelli-uva-case-ar-479707/, “A judge ruled today that Attorney General Ken Cuccinelli hasn’t shown the University of Virginia has documents relevant to his fraud investigation against former U. Va. climate scientist Michael Mann.

    In a six-page decision, Albemarle County Judge Paul M. Peatross Jr. also ruled that the attorney general also has not sufficiently “stated the nature of the conduct” believed to constitute possible fraud by Mann alleged to satisfy the requirements of the law under which the office can issue a civil investigative demand for information from the university.”

    Once again, the anti-science denialists have been ruled against.

    Comment by Dan — 30 Aug 2010 @ 12:05 PM

  193. A too-literal “do it yourself” comment: just been getting bids on replacing a very old roof. Roofers’ bids for ‘cool roof’ shingles are about 2x their bids for ordinary shingles, but from reading, the ‘cool roof’ shingle stock from the manufacturer costs only about ten percent more.

    I suspect we’ll use rolled roofing, to be hand-painted white by me. Sigh.

    Comment by Hank Roberts — 30 Aug 2010 @ 1:21 PM

  194. Also off topic, before the distortions start (may be too late already):

    http://reviewipcc.interacademycouncil.net/ReportNewsRelease.html

    Comment by Martin Vermeer — 30 Aug 2010 @ 1:48 PM

  195. #192 (OT but down to earth.. almost)

    Hank.

    The idea would even work in countries with temperate climates, such as Britain, because white-coloured roofs would help to reflect the radiated heat from homes and offices back into the building during winter months, said Dr Chu.

    http://www.independent.co.uk/environment/climate-change/obamas-climate-guru-paint-your-roof-white-1691209.html

    Your climate may be different, but will you be looking for a special paint with low emissivity in the infra-red?

    Comment by Geoff Wexler — 30 Aug 2010 @ 3:34 PM

  196. Re #194

    It was introduced just now (Radio 4’s 10PM news) by an assertion that it was in response to a string (or series) of errors by the IPCC. They quoted the one about the disappearance of the Himalayan glaciers but failed to substantiate any others. For extra analysis they went to Roger Pielke.

    Comment by deconvoluter — 30 Aug 2010 @ 5:56 PM

  197. > cool roofs … emissivity in the infrared

    Look for a high number; a low number means the roofing holds the heat and puts it into the building; a high number means it’s radiating energy, in the infrared, back into the sky.

    “… Emissivity? … ability to release absorbed heat…. between 0 and 1, or 0% and 100%, to express emittance…. most roofing materials can have emittance values above 0.85 (85%)…. EPA will post emissivity values for all products on the ENERGY STAR Qualified Products List to assist consumers in their purchasing decision. Longer term, EPA plans to revisit the possibility of adding an emissivity component to the ENERGY STAR specification.”

    Higher emissivity means the roof doesn’t get as hot during the day.

    http://www.energystar.gov/index.cfm?c=roof_prods.pr_roof_emissivity
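
    For a rough sense of what the emittance number buys you, here is a back-of-envelope Stefan-Boltzmann calculation in Python (the 60 C roof temperature and the two emittance values are assumptions, for illustration only):

        # Power radiated per square meter by a roof surface at 60 C (333 K),
        # for a low- and a high-emittance material (Stefan-Boltzmann law).
        SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
        T = 333.0         # assumed hot-roof temperature, K
        for emissivity in (0.25, 0.90):
            print(f"e = {emissivity}: {emissivity * SIGMA * T**4:.0f} W/m^2")

        # ~175 W/m^2 vs ~630 W/m^2: the high-emittance surface sheds far
        # more heat back to the sky, so it runs cooler during the day.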

    Comment by Hank Roberts — 30 Aug 2010 @ 7:07 PM

  198. ps, here’s the EPA’s current list:
    http://downloads.energystar.gov/bi/qplist/roofs_prod_list.xls
    http://downloads.energystar.gov/bi/qplist/roofs_prod_list.pdf

    The ‘cool roof’ material costs about ten percent more from the manufacturer, according to folks studying it in academia. Roofers are asking 2x for “cool” vs ordinary shingles so far. I’ll haggle.

    Comment by Hank Roberts — 30 Aug 2010 @ 7:14 PM

  199. #197 Hank.

    White is black…

    when it comes to the infra-red properties of paint (like most other substances). At least that is the conclusion I confirmed on looking at the top few rows of the spreadsheet to which you linked.
    Of course you might want it to be IR-black to keep you cool in the summer, and I might want it to be IR-white to keep me warm in the winter, and save CO2 emissions.

    So where does that leave Dr. Chu’s advice for us Brits (see #195)?

    Well, there is this:

    http://www.freepatentsonline.com/4131593.html

    So far nothing else except the metallic option.

    Comment by Geoff Wexler — 31 Aug 2010 @ 4:39 AM

  200. Hank (30 August 2010 at 1:21 PM),

    Another idea?

    Comment by Anne van der Bom — 31 Aug 2010 @ 7:02 AM

  201. I tried posting this in the new thread. First try, I got Captcha problems. Second try, they told me it was a duplicate posting. I don’t know if it got in or not; thus my posting here. Apologies if this is a duplicate.

    OT: Gavin, Mike, Ray, somebody who has published in peer reviewed journals–I really need someone to look over my paper and make sure I’m not making any stupid mistake that will get it rejected on the first pass. Tamino is very kindly reviewing the statistics for me, and it’s a largely statistical analysis paper. But I don’t want to get the climate science wrong. If you’re sick of hearing about this from me, let me know and I’ll shut up.

    Comment by Barton Paul Levenson — 31 Aug 2010 @ 8:06 AM

  202. To Barton Paul Levenson @38: This paper has relevant recipes…

    Rangaswamy, M., Weiner, D., and Ozturk, A., “Computer Generation of Correlated Non-Gaussian Radar Clutter,” IEEE Trans. on Aerospace and Electronic Systems, v.31, p.106-116 (1995).

    Comment by Chris G — 31 Aug 2010 @ 12:44 PM

  203. Thanks, Chris.

    Comment by Barton Paul Levenson — 31 Aug 2010 @ 1:26 PM

  204. This seems like the applicable thread to ask this question: I am having trouble finding details on the radiation models in the recently released Community Earth System Model (CESM). I am interested in both solar spectrum absorption and scattering (as a function of wavelength) and the flip side, infrared from Earth. All I can find and download are user guides for the linked model. In the interest of “Doing it yourselves”, do you have any links or info that may help? Thanks.

    Comment by AJ — 1 Sep 2010 @ 8:36 PM

  205. Re: ~#193 Hank

    Have you considered enhancing the transpiration ‘of’ your roof instead of its albedo?

    http://www.guardian.co.uk/environment/2010/sep/02/green-roof-urban-heat-island

    [Stuart Gaffin’s site is down now, which is another reason why I cannot vouch for its effectiveness.]

    Comment by Geoff Wexler — 4 Sep 2010 @ 10:10 AM

  206. Bill McKibben, founder of 350.org and author of ‘Eaarth’, with David Letterman on climate change and the 10/10/10 ‘Global Work Party’: http://bit.ly/McKLet (Youtube video)

    Comment by Kees van der Leun — 5 Sep 2010 @ 12:58 PM

  207. This blog piece alleging that the Gulf oil spill has dramatically affected the North Atlantic current is doing the rounds: http://europebusines.blogspot.com/2010/08/special-post-life-on-this-earth-just.html Can we ignore it as largely unfounded speculation? This http://environmentalresearchweb.org/cws/article/news/43681 would suggest so.

    Comment by Hugh Laue — 7 Sep 2010 @ 5:10 AM

  208. Geoff, yes, I checked; a ‘green roof’ adds weight not allowed by code on old structures; rebuilding to support them isn’t in the code yet in our area (every old house has different issues; cheaper to tear down and rebuild completely).

    Some white “cool” roofing plastic is the same stuff used as the base waterproof layer for green roofs, which might leave the option open for later structural improvement.

    No good pure choices for roofs I can find. All have issues.

    Comment by Hank Roberts — 7 Sep 2010 @ 11:34 AM

  209. Speaking of “do-it-yourself,” there is a rather big brouhaha at the “Climate Skeptic” blogsite. One of the commentators, under the moniker of “Russ R,” has posted the following to demonstrate the statistical relevance of recent climate models. I have challenged him to peer-review his findings–which he believes are unique to climate science–but he has refused. I wonder if anyone at Real Climate could comment upon them?

    This is the verbatim post; the complete trainwreck thread can be seen at the link above. Russ R posts:

    “I’ve fed the UAH and GISS data into excel and done a little basic statistical analysis.

    Based on IPCC (1990), we are testing whether the 1990-2010 data support a predicted temperature rise of 0.3 degrees per decade or 0.03 degrees per year.

    UAH data (1990 – 2010)
    Observations = 247
    Slope = 0.01840195
    Standard Error = 0.001953884
    Ho: slope ≠ 0.03
    T Stat = 5.935894464

    GISS data (1990 – 2010)
    Observations = 247
    Slope = .019243
    Standard Error = 0.001579303
    Ho: slope ≠ 0.03
    T Stat = 6.811077087

    Turning to IPCC (1995), we are testing whether the 1995-2010 data support a predicted temperature rise of 2.5 degrees per century or 0.025 degrees per year.

    UAH data (1995 – 2010)
    Observations = 187
    Slope = 0.013700253
    Standard Error = 0.002937661
    Ho: slope ≠ 0.025
    T Stat = 3.84651187

    GISS data (1990 – 2010)
    Observations = 187
    Slope = 0.016804263
    Standard Error = 0.002198422
    Ho: slope ≠ 0.025
    T Stat = 3.728008917

    “If I recall correctly, a t-stat greater than 1.96 indicated a significant difference at the 95% confidence level. But I’ll leave the interpretation of the above t-stats to Waldo.

    “The raw data are here, for anyone who’d like to check my work.”

    Russ R then corrects himself a few posts later.

    “And after typing that last comment, I just caught one of my own mistakes.

    In my statistical analysis for the IPCC (1995) predictions, I accidentally used a prediction of 2.5 degrees per century when I should have used 2.0 degrees per century for the null hypothesis. This was a misreading on my part and it changes the T-stats substantially. (This mistake did not impact the 1990 analysis.)

    Correction follows for IPCC (1995):

    UAH data (1995 – 2010)
    Observations = 187
    Slope = 0.013700253
    Standard Error = 0.002937661
    Ho: slope ≠ 0.020
    T Stat = 2.144477

    GISS data (1990 – 2010)
    Observations = 187
    Slope = 0.016804263
    Standard Error = 0.002198422
    Ho: slope ≠ 0.020
    T Stat = 1.45365

    Using the corrected T-stats, the P-values (two-tailed) are 0.03199 (UAH) and 0.14605 (GISS). Therefore, the UAH data would allow us to reject the IPCC (1995) predictions with more than 95% confidence. The GISS data show a difference with only ~85% confidence.”

    Thanks.

    Comment by Waldo — 7 Sep 2010 @ 1:45 PM

  210. Waldo @209 — Use statistical tests to determine how long an interval produces a result one can be confident of. For climate, 15 years might work but 20 years is safer.

    Nothing of significance can be learned from a mere decade of data.
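
    A minimal sketch of that kind of test in Python, with synthetic monthly data (the trend and noise level are assumed, purely for illustration):

        import numpy as np
        from scipy import stats

        # Synthetic monthly anomalies: an assumed 0.017 deg C/yr trend plus noise.
        rng = np.random.default_rng(2)
        t = np.arange(360) / 12.0  # 30 years, in years
        series = 0.017 * t + rng.normal(scale=0.12, size=t.size)

        for years in (10, 15, 20, 30):
            n = years * 12
            res = stats.linregress(t[:n], series[:n])
            print(f"{years:2d} yr: slope = {res.slope:+.4f} +/- {2 * res.stderr:.4f} deg C/yr")

        # The +/- range narrows as the window grows. Real monthly data are
        # also autocorrelated, which widens these ranges further, so this
        # white-noise sketch understates the short-window uncertainty.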

    Comment by David B. Benson — 7 Sep 2010 @ 4:53 PM

  211. Waldo, given that the guy doesn’t even know how to formulate a proper null hypothesis, I think we can dispense with peer review and simply reject.

    Comment by Ray Ladbury — 7 Sep 2010 @ 7:04 PM

  212. Thanks David and Ray. I hope you don’t mind if I cross-post your responses?

    Comment by Waldo — 7 Sep 2010 @ 9:02 PM

  213. Waldo, this search finds a good high-school-level explanation with exercises for the student, written by someone who knows how to do this stuff professionally.
    http://www.google.com/search?q=grumbine+how+detect+trends

    Always provide the best, most useful answer you can, even if the person asking won’t follow the pointer; some later reader may appreciate it.

    Comment by Hank Roberts — 7 Sep 2010 @ 10:28 PM

  214. It looks to me like someone also hasn’t considered the error bounds on what the IPCC models would predict for a 15-20 year trend. The IPCC prediction is invalidated when the 95% error bars on the observed trend don’t intersect the 95% bounds on the prediction.

    Comment by Phil Scadden — 7 Sep 2010 @ 11:14 PM

  215. @ Waldo 7 September 2010 at 1:45 PM re Russ R “…demonstrate the statistical relevance of recent climate models …”

    “For the mid-range IPCC emission scenario, IS92a, assuming the “best estimate” value of climate sensitivity[7] and including the effects of future increases in aerosol concentrations, models project an increase in global mean surface temperature relative to 1990 of about 2 deg C by 2100.” IPCC Second Assessment Climate Change 1995 – http://www.ipcc.ch/pdf/climate-changes-1995/ipcc-2nd-assessment/2nd-assessment-en.pdf

    Over the period of his analysis – 1995-2010 – CO2 increased by only ~30 ppm, from ~360 to 390+ ppmv, with current fossil C emissions at ~7-8 GT/yr. By 2100, under scenario IS92a, anthropogenic C emissions will be ~20 GT/yr (see fig 2, p10 of the IPCC report). His extrapolation fails to account for this, and he’s making the same sort of inept (intentional?) mistakes as Monckton, demonstrating the statistical irrelevance of bad analyses.

    One also might wonder why he’s choosing to attack the 1995 IPCC report instead of AR4, which came out in 2007, or why he chose 1990 and 1995 as start dates when data are available back to 1980.

    Comment by Brian Dodge — 7 Sep 2010 @ 11:20 PM

  216. Waldo@209 – Thanks for cross-posting the statistical results here at Real Climate. I’d appreciate having them checked out by anyone who’s got the know-how, time and interest.

    To provide readers here with some context I’ll copy over my original concerns with the predictive accuracy of the warming projections made in IPCC (1990) and (1995):

    “Regarding IPCC (1990), it predicted: “An average rate of increase of global mean temperature during the next century of about 0.3°C per decade (with an uncertainty range of 0.2—0.5°C per decade) assuming the IPCC Scenario A (Business-as-Usual) emissions of greenhouse gases;…” In the 20 years since then, the world has gone about “business as usual” regarding emissions, but actual warming has been only 0.184°C per decade (UAH) or 0.192°C per decade (GISS). Both observations are below the lower bound of uncertainty given by the prediction.

    Moving on to IPCC (1995), they reduced their warming prediction by approximately a third compared to their 1990 estimate: “Temperature projections assuming the “best estimate” value of climate sensitivity, 2.5°C, (see Section D.2) are shown for the full set of IS92 scenarios in Figure 18. For IS92a the temperature increase by 2100 is about 2°C. Taking account of the range in the estimate of climate sensitivity (1.5 to 4.5°C) and the full set of IS92 emission scenarios, the models project an increase in global mean temperature of between 0.9 and 3.5°C”
    Evidence from 1995 to date: emissions have been business as usual, but actual warming has been only 0.137°C per decade (UAH) or 0.168°C per decade (GISS).”

    The data I used to test the projections are neatly compiled here: http://woodfortrees.org/data/uah/from:1990/plot/uah/from:1990/trend/plot/gistemp/from:1990/plot/gistemp/from:1990/trend
    If you prefer, you can find original data sets here: http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt and here: http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts.txt

    And Waldo was kind enough to post the statistical results above, so I won’t duplicate his efforts.

    David @210: “Use statistical tests to determine how long an interval produces a result one can be confident of. For climate, 15 years might work but 20 years is safer. Nothing of significance can be learned from a mere decade of data.” Well David, there are 20 years of data to test the IPCC (1990) projections, and 15 years for the IPCC (1995) projections. The results show with 99.99…% confidence that the 1990 predictions were too high, and with very good (>85%) confidence that the 1995 predictions were too high.

    Ray @211: “…given that the guy doesn’t even know how to formulate a proper null hypothesis, I think we can dispense with peer review and simply reject.” It would be a more convincing rejection if you actually evaluated the statistical results on their merits and pointed out any errors or misinterpretations. Thanks.

    Comment by Russ R. — 8 Sep 2010 @ 12:38 AM

  217. Phil Scadden @214 – “It looks to me like someone also hasn’t considered the error bounds on what the IPCC models would predict for a 15-20 year trend.” I have never seen any 95% error bars in the IPCC’s reports; otherwise I would have happily considered them. I did take into consideration the IPCC (1990) stated range of uncertainty (0.2 – 0.5 deg C per decade), and the actual warming trend as measured by both GISS (0.192) and UAH (0.184) was outside that range.

    Brian Dodge @215 – If I’m understanding your concern correctly, you’re saying the CO2 emissions in Scenario IS92a are back-end loaded, so the early years (1995-2010) should by definition see less warming than the average of 2.0 deg C per century? If so, that’s a fair point. Is there a more specific warming forecast that can be tested with the observations since the 1995 report was issued? Or do we have to wait until 2100 to validate the model?

    “One also might wonder why he’s choosing to attack the 1995 IPCC report instead of AR4 which came out in 2007; or why he chose 1990 and 1995 as start dates when data is available back to 1980?” No need to wonder… I’ll explain as clearly as I can. I’m testing the 1990 and 1995 projections for exactly the reason specified by David B. Benson @210. “Nothing of significance can be learned from a mere decade of data”. And I chose 1990 and 1995 as start dates because I’m not interested in how well models can be tuned to replicate the past… the success of a model is based solely on how well it can predict the future.

    Comment by Russ R. — 8 Sep 2010 @ 1:24 AM

  218. Russ R., The statistics are inextricably related to the errors–which you utterly ignore. Look at the confidence intervals rather than the point estimates. Moreover, consider that the IPCC estimates are equilibrium values, and we probably have not achieved equilibrium on a timescale of a decade (oceans take ~30 years to reach equilibrium). I’m sorry, Russ, but you might want to look at the analysis that has been done (some of it on this very site) before claiming your Nobel prize.

    Comment by Ray Ladbury — 8 Sep 2010 @ 4:07 AM

  219. Ray Ladbury @218,

    No need to be snide… I’m not planning on flying to Norway any time soon.

    You say I “utterly ignore” the errors. Not so. The IPCC (1990) warming projections gave a range of uncertainty, and in the 20 years since they were published, the warming trend has been below the lower bound of that range (see my comment #217 responding to Phil Scadden). IPCC (1995) gave a range of estimates that, by my reading, were the point estimates for the set of IS92 scenarios and a range of climate sensitivities, so they are not “error bars” per se (see my comment #216). If you can point me toward where I could find the uncertainties around the 1995 projections, it would be appreciated.

    In regard to your “timescale of a decade” comment, I’m looking at 20 years of observed temperatures since IPCC (1990) and 15 years since IPCC (1995). If you’re arguing that it requires no less than 30 years of observations before a model’s predictions can be reliably tested, you’re saying that all of the models currently in use are not only untested, but are, in any practical sense, untestable.

    Your argument around confidence intervals and equilibrium timescales is a double-edged sword. While these factors make it more difficult to reject a model, they also reduce the amount of confidence in the model’s predictions. At some point, wide uncertainties and long timescales render predictions untestable.

    There’s a rather crude saying (which I believe originates in quantum physics) likening models which can’t be tested to toilets which can’t be flushed. I’ll leave the punch line to your imagination.

    Joking aside, if you’re serious about the predictive ability of climate models (and not merely their ability to recreate the past), you should want to see model outputs rigorously tested and validated. If your argument is that I’m going about testing predictions in the wrong way, then please point me in the right direction. How would you propose testing model predictions of warming against observations? (And if it’s been discussed in a prior thread, please direct me there.)

    Comment by Russ R. — 8 Sep 2010 @ 9:18 AM

  220. “Waldo@209 – Thanks for cross-posting the statistical results here at Real Climate.”

    You are welcome, Russ, I thought you’d like to have some experts look over your equations just in case you might have missed something along the way.

    Comment by Waldo — 8 Sep 2010 @ 11:47 AM

  221. Ray, you dodged Russ’s request. If he’s to consider these “error bars,” he’d like you to point to what they actually are outside this 0.2-0.5 degrees/decade range which, if I recall correctly, doesn’t come with any specific confidence level in those older reports. Either way, link to it, or at least explain it to us. Don’t just sit back and throw out vague criticisms. It’s completely unconstructive.

    Further, you don’t test a model against the error bars; you test it against the most likely predicted result, and then get the confidence that this most likely result is the same as the observed result. So your null hypothesis is a slope of whatever the average predicted warming was, i.e., 0.25 degrees/decade; then you take the observed values, fit a slope to them, and test it against that null using a t-test. This is how it is done. The specific model predictions for each year are not data, as much as you’d like them to be, so you don’t test each group of points like a normal t-test. You certainly don’t test against the lower-bound 95% CI; that would be double counting the error. If you did, then you’d basically be able to say X is within, for example, the 60% confidence interval of the 95% confidence interval of Y. Err, no, not how you do it.
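
    For what it’s worth, the mechanics described above look roughly like this in Python (synthetic observations standing in for a real series; the 0.025 deg C/yr null is an arbitrary stand-in for a projected trend):

        import numpy as np
        from scipy import stats

        # Null hypothesis: the observed trend equals the predicted trend.
        rng = np.random.default_rng(1)
        t = np.arange(247) / 12.0  # ~20 years of monthly data, in years
        obs = 0.019 * t + rng.normal(scale=0.1, size=t.size)  # synthetic anomalies

        res = stats.linregress(t, obs)
        t_stat = (res.slope - 0.025) / res.stderr
        p_val = 2 * stats.t.sf(abs(t_stat), df=t.size - 2)
        print(f"slope = {res.slope:.4f}, t = {t_stat:.2f}, p = {p_val:.3f}")

        # Caveat: monthly anomalies are autocorrelated, so the plain OLS
        # standard error here is too small and the t-statistic too large.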

    Comment by Wally — 8 Sep 2010 @ 1:35 PM

  222. Russ R. @219 — The climate exhibits multidecadal internal variability. This variability is not predictable in a deterministic sense. Crudely speaking, this means that the error bounds on predictions have to be fairly wide.

    But even ultrasimple climate models offer some predictability. Here is mine:
    http://www.realclimate.org/index.php/archives/2010/03/unforced-variations-3/comment-page-12/#comment-168530

    Comment by David B. Benson — 8 Sep 2010 @ 2:32 PM

  223. Russ R 219,

    Try here:

    http://BartonPaulLevenson.com/ModelsReliable.html

    Comment by Barton Paul Levenson — 8 Sep 2010 @ 3:24 PM

  224. Russ R.,

    > so the early years (1995-2010) should by definition see less warming than
    > the average of 2.0 deg C per century?

    Not by definition, by model results, but yes. Have you looked at the projection you’re discussing? [1] It’s not a straight line. It curves up. And the estimate given is explicitly for 2100, not for 2010. It won’t do to take a straightedge ruler, draw a line from 1990 to 2100, and say the model fails if observations to date don’t have the same slope. The IPCC in 1995 did not project a linear warming and you cannot just divide 2.0 deg C by ten to get a 0.2 deg warming rate for the first decades.

    (Incidentally — since the projection is a 2 deg rise from 1990 to 2100, [2] isn’t that eleven decades and 0.18 deg per decade, well within one standard error of the GISS observations?)

    I think what you’d really want to do is take the SAR’s model output for 1990-2010 (or 1995-2010, which is very short, but whatever), calculate a linear trend for that period, and test that against the trend in the observations (with uncertainties). For inspiration, look e.g. here:
    http://www.realclimate.org/index.php/archives/2009/12/updates-to-model-data-comparisons/

    Now, I don’t know if the model output data is online (anyone got a link?), so I don’t have a least squares fit for you. But perhaps we can get some use out of that straightedge ruler after all: You can get an idea of the 1990-2010 model trend by taking the graph of the projection [1], printing it out, and drawing a line from the origin through the graph at year 2010. If you prefer to draw the line through the graph at 1995 and 2010, go ahead, same difference. What do you get? I got a line crossing 2100 at about 1.43 deg C, so I figure we’re looking at a projection of about 0.14 deg per early decade, which seems almost complacent compared to what GISTEMP is showing.

    Or 0.13 deg/decade, really, if we’re dividing by eleven decades. Uh, what was the UAH observed trend again?
    :-)

    Notes:

    [1] IPCC SAR WG1, p. 322, fig. 6.21 (the solid “all” line). All you climate science memorabilia buffs out there, take note: the IPCC has finally got the full FAR and SAR digitized and online (http://www.ipcc.ch/publications_and_data/publications_and_data_reports.htm). Don’t try this over dial-up, though, the SAR WG1 report alone is 51 MB.

    [2] Just to be clear, we’re talking about the SAR’s projection for the then mid-range IPCC emission scenario (IS92a), assuming the “best estimate” value of climate sensitivity (then 2.5 deg C) and including the effects of future increases in aerosol, using a simple upwelling diffusion-energy balance climate model.
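
    To make the straightedge exercise concrete, here is a toy calculation in Python; the quadratic shape below is purely an assumption standing in for the SAR’s upward-curving projection, not the model’s actual output:

        import numpy as np

        # An illustrative projection rising 2.0 deg C from 1990 to 2100,
        # curving upward rather than rising linearly.
        years = np.arange(1990, 2101)
        s = (years - 1990) / 110.0
        proj = 2.0 * (0.55 * s + 0.45 * s**2)  # hypothetical shape, ends at 2.0 deg C

        avg_slope = 2.0 / 11.0                                   # deg C/decade, 1990-2100
        early_slope = (proj[years == 2010][0] - proj[0]) / 2.0   # deg C/decade, 1990-2010
        print(f"1990-2100 average: {avg_slope:.3f}; 1990-2010 chord: {early_slope:.3f}")

        # The early-decade slope (~0.11 deg/decade here) is well below the
        # naive 2.0/11 = 0.18 deg/decade average, which is the point above.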

    Comment by CM — 8 Sep 2010 @ 4:35 PM

  225. Wally, You and Russ can find the answers here, among other places on this very site:

    http://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/

    http://www.realclimate.org/index.php/archives/2009/12/updates-to-model-data-comparisons/

    The fact that Russ is declaring the models dead without having even done a 5-minute search suggests he isn’t really serious. This isn’t that hard.

    Comment by Ray Ladbury — 8 Sep 2010 @ 5:25 PM

  226. Ray,

    “The fact Russ is declaring the models dead without having even done a 5 minute search suggests he isn’t really serious. This isn’t that hard.”

    Please, drop this kind of BS. It’s nothing but an ad hominem appeal to motives. Stay on topic.

    Second, those links don’t answer any of the issues brought up. One is talking about AR4, which is 2007, right? That report is not being discussed here; it’s far too early to test it. Then the next talks about Hansen 1988, but it doesn’t TEST it at all. It simply draws a couple of lines on a graph. No statistical test was used AT ALL. The author drew on the graph and proclaimed something “reasonable.” Sorry, no sale.

    Maybe you think this stuff is easier than it is?

    Comment by Wally — 8 Sep 2010 @ 6:18 PM

  227. Russ R – the point you are trying to make has been discussed often before. As a starting point, look at fig 1 in:
    global cooling wanna bet
    This is a model output which shows that a decade of slow warming is completely consistent with the physics. At decadal scales there is too much internal variability in the climate system to be making predictions.

    More discussion on this can be found in:
    What the ipcc models really say and
    decadal predictions

    Comment by Phil Scadden — 8 Sep 2010 @ 6:32 PM

  228. CM,

    “It’s not a straight line. It curves up. And the estimate given is explicitly for 2100, not for 2010. It won’t do to take a straightedge ruler, draw a line from 1990 to 2100, and say the model fails if observations to date don’t have the same slope. The IPCC in 1995 did not project a linear warming and you cannot just divide 2.0 deg C by ten to get a 0.2 deg warming rate for the first decades. ”

    The 1990 result was linear, or at least very close to it; the scanned-in PDF is not wonderfully clear.

    “I got a line crossing 2100 at about 1.43 deg C, so I figure we’re looking at a projection of about 0.14 deg per early decade, which seems almost complacent compared to what GISTEMP is showing.”

    I don’t think there is a lot of validity to testing a model through roughly 10% of the predicted time scale if the predictions drastically change in the next 90%. For instance, if you look at figure 18 in IPCC 1995, all the predictions are basically the same through at least 2020. So our models are not distinguishable for at least 25 years, though they don’t truly diverge from each other until 2050 to 2060.

    So, I think what we’re coming to here is that IPCC 1990 sucks, and everything after that is untestable for at least the next 20 or 30 years. And people wonder why we’re skeptical of doing all that is proposed to save us from this disaster?

    Comment by Wally — 8 Sep 2010 @ 6:50 PM

  229. Wally, the subject is internal variability, and those two posts are quite relevant. The fact of the matter is that the warming seen to date is quite consistent with predictions. Do you have any idea how tiring it is to have neophytes/trolls (one cannot tell the difference) come on here every 2 months or so claiming to disprove climate change based on a very simple, WRONG statistical analysis? And the mistakes are always the same.

    I suggest both of you pay a visit to Tamino’s and READ.

    Comment by Ray Ladbury — 8 Sep 2010 @ 7:46 PM

  230. Russ R.,
    The models are testable. They reproduce the overall behavior of Earth’s climate quite well over millennia. They reproduce regional patterns fairly convincingly. And I am sorry if you don’t like the fact that climate takes decades to settle. Don’t blame me. Take it up with reality.

    Comment by Ray Ladbury — 8 Sep 2010 @ 7:51 PM

  231. I would also say that estimating 95% confidence limits on model predictions for a 1-2 decade period is difficult, to say the least. It is my understanding that so far we don’t have remotely enough computer power to explore the possibility space for this level of internal variability. It is only on longer scales that climate trends in response to particular forcing scenarios can be predicted with any confidence.

    Comment by Phil Scadden — 8 Sep 2010 @ 8:52 PM

  232. > take a straightedge ruler, draw a line from 1990 to
    > 2100, and say the model fails

    That’s
    http://www.realclimate.org/index.php/archives/2010/08/monckton-makes-it-up/

    Comment by Hank Roberts — 8 Sep 2010 @ 9:07 PM

  233. Ray,

    Well, my post to CM is still awaiting moderation, despite lacking anything that could even remotely be considered abusive, and it may have helped here; but I suppose I might as well restate some of those things, both to specifically address what little you brought up and maybe to get through this apparently biased filter (which is famous around the climate blogosphere, though I’d never run into it myself until now).

    Anyway, yes, of course the issue is variability, but you had better expand on what you think of as “internal variability.” Are you referring to the variability of the actual climate, or of the models, or both? I assume, because you’re likely referring to the first figure from that “what the IPCC models really say” article, that you’re mostly talking about the models’ internal variability. Well, that I have little use for. The only reason you have variability in a model is because your model lacks confidence in trying to account for what we think of as a chaotic system, and you need many runs (think simulations of a baseball season or playoffs). This doesn’t really matter when it comes time to test your model, because your model has essentially turned into the pooled set of model runs. It is no longer any individual run. And like I said before, you don’t really care about the model’s internal error when it comes time to test it against reality. You don’t get to double count error and say something like, well, this is within the 95% interval of the 95th-percentile run. You want to know how it did against the average of all the model runs, because, as I said, this is essentially your model now.

    “The fact of the matter is that the warming seen to date is quite consistent with predictions.”

    Prove it, or give me a link that actually tests this statistically instead of just drawing lines and calling it “reasonable” as Gavin, if I recall correctly, did.

    “Do you have any idea how tiring it is to have neophytes/trolls (one cannot tell the difference) come on here every 2 months or so claiming to disprove climate change based on a very simple, WRONG statistical analysis?”

    I love how you attempt to sneak in a little ad hominem attack while playing the victim card here. I don’t care if you’re tired; more to the point, it doesn’t matter in establishing what’s right or wrong. If you’re so tired, too tired to adequately explain your position or levy any reasonable criticism to help us reach the truth of this matter, why do you bother commenting? I was around reading and commenting on some of those threads; I guess the problem here is that you (which I mean both specifically and generally) can’t actually convincingly show us what’s wrong, much less what’s actually right.

    “I suggest both of you pay a visit to Tamino’s and READ.”

    Please stop talking down to me, and Russ for that matter; to the unbiased, undecided reader you just make your argument look weak. I’ll be happy to read something specific you think makes your case, but telling me I need some sort of general refresher is nothing short of attempting to make an argument out of an insult. Just stop. That doesn’t work out here in what you refer to as “reality” in that post to Russ.

    Comment by Wally — 8 Sep 2010 @ 9:30 PM

  234. Wally, there is no getting around the 30-year wait for some kinds of prediction. You want some long-term validation? How about
    Broecker
    Okay, that was based on an incredibly primitive model and frankly lucky. However, models make MANY predictions that you can validate. BPL’s link earlier (model predictions) has a good list. Do you really want to wait another 30 years before you do anything? Suppose they were right?

    Comment by Phil Scadden — 8 Sep 2010 @ 10:37 PM

  235. Wow… I’ve got some catching up to do:

    David B. Benson @222 – Very nice work there. What I like most is that you state a clear prediction (with a range of uncertainty) for a specific time period (2010s: 0.686 +- 0.024 K), such that anyone can easily look at it in 2020 for validation. I only wish all the projections out there were so straightforward.

    Barton Paul Levenson @223 – That’s quite the reading list… and after skimming through I haven’t found anything I disagree with. However, I notice Hansen (1988) isn’t on your list of model predictions that have been confirmed.

    CM @224 – Thanks for your critique, most useful thing I’ve read all day… Very much appreciated. I’ll correct my work and update the results here. I’ll first have a crack (below) at statistically testing Hansen (1988) since I’ve finally found the model output data (though I haven’t seen any error ranges).

    Ray @225 & Wally @226 – I’ve already seen lots of squiggly lines on charts presented as evidence either supporting or refuting model predictions. I’m simply trying to apply some basic statistics to come up with an objective evaluation of my own, without cherry picking, distorting, or otherwise trying to bias the conclusion. My hypothesis is that certain predictions were too high, but if the data say they’re good, I’ll happily accept what the data say. And I appreciate honest input and guidance if I’m doing something incorrectly.

    So, to test Hansen (1988), I’m using the model output data from here: http://www.realclimate.org/data/scen_ABC_temp.data
    If anyone can point me to uncertainty values, it would be appreciated.

    I’ve seen plenty of back and forth debate over whether Scenario A or B is more appropriate. I’m in no position to choose one over the other, so I’ll test them both (I’ll throw Scenario C in as well, since the data are already in the table).

    Using a start year of 1988 and an end year of 2010, I get the following best fit linear regression slope coefficients (deg C / year):

    Scenario_A Scenario_B Scenario_C
    Slope 0.029844862 0.027396245 0.019274704

    I’ll test those against the following GISTEMP and UAH observations from 1988 to present (up to July 2010) available here: http://woodfortrees.org/data/uah/from:1988/to:2010.58/plot/gistemp/from:1988/to:2010.58

    I’ve noticed that some evaluations of Hansen (1988) use 1984 as a start date. I’m choosing not to follow that approach, instead only testing predictions against future observations.

    Here are the results:

    SUMMARY OUTPUT (UAH)
    Observations 271
    Slope 0.017105791
    Standard Error 0.001694254

    Scenario_A Scenario_B Scenario_C
    Ho: 0.029844862 0.027396245 0.019274704
    T Stat 7.518986439 6.073738521 1.280157957
    P value 5.52891E-14 1.24966E-09 0.200489589

    SUMMARY OUTPUT (GISTEMP)
    Observations 271
    Slope 0.018145974
    Standard Error 0.00137222

    Scenario_A Scenario_B Scenario_C
    Ho: 0.029844862 0.027396245 0.019274704
    T Stat 8.525519713 6.741099703 0.822557306
    P value 0 1.57192E-11 0.410759786

    Would appreciate any thoughts, guidance, etc.
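
    One caveat on the standard errors above: monthly anomalies are strongly autocorrelated, which inflates naive OLS t-statistics. Here is a minimal sketch of the usual lag-1 adjustment in Python (synthetic AR(1) data; the trend, noise, and autocorrelation values are assumptions for illustration):

        import numpy as np
        from scipy import stats

        def trend_test(y, dt_years, null_slope):
            """OLS trend vs. a hypothesized slope, with a lag-1
            autocorrelation correction to the standard error."""
            t = np.arange(y.size) * dt_years
            res = stats.linregress(t, y)
            resid = y - (res.intercept + res.slope * t)
            r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
            n_eff = y.size * (1 - r1) / (1 + r1)  # effective sample size
            se_adj = res.stderr * np.sqrt(y.size / max(n_eff, 3.0))
            t_stat = (res.slope - null_slope) / se_adj
            p_val = 2 * stats.t.sf(abs(t_stat), df=max(n_eff - 2, 1.0))
            return res.slope, se_adj, t_stat, p_val

        # Synthetic example: 271 months of AR(1) noise around a 0.018 deg C/yr trend.
        rng = np.random.default_rng(0)
        noise = np.zeros(271)
        for i in range(1, noise.size):
            noise[i] = 0.6 * noise[i - 1] + rng.normal(scale=0.08)
        y = 0.018 * (np.arange(271) / 12.0) + noise
        print(trend_test(y, 1.0 / 12.0, null_slope=0.027))

        # The adjusted standard error is noticeably larger than the naive
        # one, so apparent rejections become much weaker.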

    Comment by Russ R. — 8 Sep 2010 @ 10:50 PM

  236. > Tamino’s
    Site search is a useful tool; try these for some places to begin:
    http://www.google.com/search?q=site:tamino.wordpress.com/+Hansen+prediction

    Comment by Hank Roberts — 8 Sep 2010 @ 11:39 PM

  237. @236: I tried the first six or so links in the search and they return 404. Using the search box on the site for “hansen” comes up empty. Could you please post links to exact posts?

    Comment by Nick Rogers — 9 Sep 2010 @ 12:52 AM

  238. Phil,

    I know there is no way to get around the 30-year wait (or more). This is not good or bad; it’s just “reality,” as Ray so kindly points out. But just because it’s difficult, or takes a lot of time, to really gain knowledge or confidence in a theory does not mean we can simply settle for less confidence before we drastically alter our lifestyle so as to avoid some dubiously predicted catastrophe.

    Yes, we all know raising the GMT by 4 degrees or more over 100 years or less is going to have some adverse effects. Yes, we also know that we should probably expect at least some warming (say 0-2 degrees/100 years). What we disagree about is the confidence surrounding how much warming, and what that means we should do. So you ask, suppose they are right? Well, even if they are right, I still don’t think we understand the Earth’s climate well enough to know whether, on the whole, this “catastrophic” warming will even be bad. Sure, some people will likely be hurt in a myriad of ways, but it would be completely naive to not also study and consider possible benefits. Yet I don’t think very many people are looking at such things, do you?

    So, ok, lets suppose they are right, despite any convincing proof that they are, now convince me I need to do something about it.

    Comment by Wally — 9 Sep 2010 @ 1:47 AM

  239. > Tamino’s
    Ah, that blog had some WordPress problems; older topics may be missing.
    I don’t have the time to find you individual posts, and blog posts aren’t going to answer basic questions without some background; most of us here are just readers like yourself, not professional searchers or reference librarians.
    Try the “Start Here” link above, or http://www.google.com/search?q=site:realclimate.org+Hansen+predictions, or
    http://www.google.com/search?q=site:skepticalscience.com+Hansen+predictions
    Most of this assumes you’ve had statistics 101. If you haven’t, Robert Grumbine will help, and invites questions.
    http://moregrumbinescience.blogspot.com/2009/01/results-on-deciding-trends.html

    Comment by Hank Roberts — 9 Sep 2010 @ 2:43 AM

  240. Ray: “Russ R., The statistics are inextricably related to the errors–”
    True, but the specific problem is the following: in order to prove a theory, you have to test definite predictions that are substantially different from other theories. Consider for example the problem of general relativity. The 43″/century precession of the perihelion of Mercury was successfully explained by GR, but it wasn’t a definite proof, because it could be produced by a number of other causes (actually ANY deviation from a 1/r² law produces a precession!) that could possibly be ill-known and underestimated (a possible ellipticity of the sun, for instance). BUT the deflection of light from stars close to the sun during the eclipse in 1919 was definitely, beyond any doubt, twice the classical Newtonian result, and there was absolutely no hope of explaining it except by relativistic effects. Proving a theory “beyond any doubt” requires exactly that – actually for me *means* exactly that: exhibiting a definite observational result that is really impossible to explain without the theory at stake (same for all “classical” great experiments like Michelson-Morley, Davisson and Germer, etc.).

    I don't blame climate science for uncertainties: I just don't understand very well how one can both recognize the uncertainties AND claim that science is proved "beyond any reasonable doubt" – and put the blame on those who question these uncertainties.

    Comment by Gilles — 9 Sep 2010 @ 2:46 AM

  241. Oh, and if you search for a few minutes you’ll find this sort of help:
    http://eesc.columbia.edu/courses/ees/climate/labs/globaltemp/index.html

    Comment by Hank Roberts — 9 Sep 2010 @ 2:51 AM

    Gilles: "I just don't understand very well how one can both recognize the uncertainties AND claim that science is proved 'beyond any reasonable doubt' – and put the blame on those who question these uncertainties."

    Very simple.
    1)The models have been around since the late 70s and predicted the trend reasonably well for the level of maturity they had at the time. These are dynamical, not statistical, models, so if they had been substantively wrong, you would have expected large errors by now.
    2)The models predict MANY things in addition to temperature.
    3)The trend thus far is consistent with the predicted trend, modulo the inherent internal variability of the climate system.

    Perhaps a visit to Barton’s page would be in order, Gilles.

    http://www.bartonpaullevenson.com/ModelsReliable.html

    And then maybe you can explain to us why you guys want the models to be wrong so badly that you can’t even wait for the required period to assess the prediction–after all, if the models are wrong, it doesn’t make the reality of climate change go away.

    Comment by Ray Ladbury — 9 Sep 2010 @ 4:14 AM

  243. @239: I am sorry, that’s not helpful. You posted a link that turned out to be dead. If you don’t have time to search to where that material is now, that’s OK, but please don’t suggest that someone might not know how to use Google or might not have had statistics 101. That doesn’t win you any favors.

    Comment by Nick Rogers — 9 Sep 2010 @ 4:34 AM

  244. Russ,
    My question is why are you concentrating on only a single quantity–temperature–despite the fact that
    1)the time series is too short
    2)you don’t have good estimates for the uncertainties (they are provided in one of the references I gave)
    3)temperature is a particularly noisy variable
    4)we don’t know how much warming is in the pipeline before the system returns to equilibrium

    There is also the question of what, precisely, you intend to do with this info. Let’s say you find significant disagreement (you won’t). Does that mean that the models are substantively wrong? Perhaps. However, it could also imply 1)more warming in the pipeline, 2)the models do not include all the global dimming we are seeing (and this is a transient effect, probably due to aerosols from fossil fuel consumption in India, China, etc.).

    I really think that your time would be better spent learning the science rather than trying to assess predictions you don’t understand.

    Comment by Ray Ladbury — 9 Sep 2010 @ 5:00 AM

    Wally 238: I still don't think we understand the Earth's climate well enough to know if, on the whole, this "catastrophic" warming will even be bad.

    BPL: How does “harvests failing around the world due to massive increases in drought” grab you?

    Comment by Barton Paul Levenson — 9 Sep 2010 @ 5:34 AM

  246. Gilles

    You know that the basics are way beyond doubt – the physical properties of gases being the central issue. After that there are observations of many kinds in dozens of fields, and they go together to show a coherent picture. That picture is consistent with the underlying physics.

    Uncertainties? Absolutely. But they only show gaps in the jigsaw, they don’t change the overall picture.

    Science is not a house of cards. Even if one or several lines of evidence need adjustment or replacement, the structure will not fail. The only killer item is the foundation in basic physics, and overturning that is Nobel Prize material. Even then, there will still be all the other lines of investigation and evidence which will need a new basis to align around. Take out the underlying physics and you then have to explain everything from lasers to the climate temperature record and sea level rise on the new basis.

    Climate disruption is the big picture on the box. We’ve pretty well got the corners right and the edges look pretty good. The fact that we can’t yet see the pieces to complete a few patches in the picture we’ve so far put together does not mean that the boats in the foreground are upside down or that the horizon should be beneath the waterline. The picture is coming together and even if we get frustrated that we can’t see how to complete a house or a tree in the background it does not mean that we should chuck it all back in the box and start again.

    Uncertainties do not change or challenge the overall picture, they just tell us where to do the next lot of work.

    Comment by adelady — 9 Sep 2010 @ 7:10 AM

  247. Russ (#235),

    You're welcome. But there are more pitfalls for the unwary. Before you rush back to Excel to do more tests, I'd like to suggest you read and digest some of the links people have given above, plus the FAQ and other posts here:
    http://www.realclimate.org/index.php/archives/2004/12/index/#ClimateModelling

    Among other things, you may want to read up on what model “tuning” actually involves, since you appear to think that the models are tinkered with until they match the past temperature trend.

    As for testing Hansen ’88, the link I provided already covers that (see also discussion of the limitations of the model, scenarios, and different temperature records in this earlier post.)

    Wally (#228),

    > I don’t think there is a lot of validity to testing a model through
    > roughly 10% of the predicted time scale

    Well, the elapsed time in empirically observable reality is what we have to work with. Feel free to suggest a better way to test the projection that does not involve time travel, psychic powers, or substituting a straight line of your own fancy for the model projection.

    The state of the art has moved on a bit since the 1990 projection, done with a simple model before aerosol forcing was taken properly into account. Still, if you compare with a “no warming” null, even the 1990 projection doesn’t half suck so far.

    And yes, I do wonder why you are “skeptical” about addressing a risk just because it’s poorly bounded. Poorly bounded risk is not a happy thought.

    Comment by CM — 9 Sep 2010 @ 8:15 AM

  248. Ray Ladbury – Please stop assuming my intent. I don’t “want the models to be wrong”. I want to objectively assess how well some widely publicized models have predicted warming over the longest time period available. I don’t have any preferred outcome. The facts are what they are.

    “why are you concentrating on only a single quantity–temperature–despite the fact that
    1)the time series is too short
    2)you don’t have good estimates for the uncertainties (they are provided in one of the references I gave)
    3)temperature is a particularly noisy variable
    4)we don’t know how much warming is in the pipeline before the system returns to equilibrium”

    1) The time series wasn’t too short to evaluate the Hansen (1988) model in the links you provided me. How is it too short now, even with another 2-3 years of added observations?
    2) I saw mentions of uncertainty in one of the links you gave (+/-0.05 and +/-0.06 deg C/decade), but I couldn’t trace them back to the original data, which is why I asked. (FYI, the slope of both data sets is well outside this range of uncertainty.)
    3) Yes, temperature is a noisy variable, which is why I’m looking at the oldest predictions. Again, you seemed happy with the use of temperature data to support the model in the links you sent.
    4) How do you even know whether the current temperature is above or below equilibrium? Why assume that warming is in the pipeline and not cooling? Transitioning from an El Nino to a La Nina would suggest cooling lies ahead, right?

    “There is also the question of what, precisely, you intend to do with this info.” I intend to use what the data say to better understand the reliability of climate projections. Wouldn’t you want to do the same? And at present, the data are saying that the Hansen (1988) temperature projections for both Scenarios A and B are significantly higher than the subsequent warming trend in the 22.5 years since they were published. The same can be said for the IPCC (1990) temperature projection and the 20 years of available temperature data. IPCC (1995) is looking much more reliable, based on CM’s analysis above (comment #224). Yes, all of these assessments may change if temperatures move sharply in the future.

    "Let's say you find significant disagreement (you won't)." Actually, I did. Take a look at the T-stats I posted above, which for Scenarios A and B range from 6.1 to 8.5. Wouldn't you call those significant?
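
    [For illustration, a minimal sketch in R of this kind of two-trend comparison; all numbers below are hypothetical placeholders, not the actual figures under discussion:]

      # Compare an observed trend with a projected trend (deg C per year).
      b_obs   <- 0.017   # observed OLS trend (hypothetical)
      se_obs  <- 0.003   # its standard error (hypothetical)
      b_proj  <- 0.028   # model-projected trend (hypothetical)
      se_proj <- 0.004   # its standard error (hypothetical)
      t_stat  <- (b_proj - b_obs) / sqrt(se_obs^2 + se_proj^2)
      p_val   <- 2 * pnorm(-abs(t_stat))   # two-sided, normal approximation
      c(t = t_stat, p = p_val)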

    “Does that mean that the models are substantively wrong? Perhaps.” No, it doesn’t mean the models are wrong, at least not structurally. But it does suggest that some factors may be poorly understood, with incorrect values assumed. It could also mean that some structural elements are not factored into the models.

    "However, it could also imply 1)more warming in the pipeline, 2)the models do not include all the global dimming we are seeing (and this is a transient effect, probably due to aerosols from fossil fuel consumption in India, China, etc.)." You'll need to build a case for point 1 as to why there might be "warming in the pipeline". However, I agree with you on point 2. The models don't factor in everything… they are simplifications. Testing them from time to time is the only way to see how well they predict the future.

    Comment by Russ R. — 9 Sep 2010 @ 8:33 AM

  249. Barton,

    “How does “harvests failing around the world due to massive increases in drought” grab you?”

    Maybe I did not make myself entirely clear. I won't be scared into believing you; you need to post a large and exhaustive set of studies on how climate change will affect basically everything. Grabbing one headline, without a source, is a meaningless scare tactic. Feel free to try again though.

    Comment by Wally — 9 Sep 2010 @ 9:29 AM

  250. Russ R #235: I get for your GISTemp data set, using ordinary least-squares with some software I had lying around:

    n = 271

    Least Squares solution:
    Trend: 0.01814597 +/- 0.00137222

    Jackknife solution:
    Trend: 0.01814603 +/- 0.00136439

    The agreement seems to suggest that you are ignoring the autocorrelation present (right?), which is significant for monthly data and should be accounted for.
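
    [For illustration, a minimal sketch in R of such a fit, using a synthetic stand-in for the 271-month series; the real check would read in the GISTEMP monthly anomalies:]

      set.seed(1)
      n    <- 271
      time <- seq_len(n) / 12                  # time in years
      # synthetic anomalies: a trend plus AR(1)-ish noise, deg C
      temp <- 0.018 * time + 0.1 * as.numeric(arima.sim(list(ar = 0.7), n))
      fit  <- lm(temp ~ time)                  # ordinary least squares
      summary(fit)$coefficients["time", 1:2]   # trend and naive std. error
      # lag-1 autocorrelation of the residuals; if large, the naive
      # standard error above is too small
      acf(resid(fit), lag.max = 1, plot = FALSE)$acf[2]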

    On a more general note, the test you are doing (the most appropriate one being against the B scenario) kicks over the straw man that the model outputs are “perfect”. Of course they are not. Here Hansen writes that the doubling sensitivity of the model used, 4.2 degrees, was at the upper edge of the then-known uncertainty range of 3 +/- 1.5 degrees. No, he gives no explicit uncertainty estimate, but that’s the ball park we’re looking at. I think 30% off was pretty good for then.

    Comment by Martin Vermeer — 9 Sep 2010 @ 9:58 AM

  251. @246: “Uncertainties do not change or challenge the overall picture, …”

    That's false. If it were true, a prediction that the Earth is warming at a rate of 100 degrees per decade (center), give or take 500 degrees per decade (uncertainty), would make sense and be scientifically interesting. It would do neither.

    Uncertainties are a critical part of any prediction.

    Comment by Nick Rogers — 9 Sep 2010 @ 10:36 AM

  252. CM,

    “Well, the elapsed time in empirically observable reality is what we have to work with. Feel free to suggest a better way to test the projection that does not involve time travel, psychic powers, or substituting a straight line of your own fancy for the model projection.”

    Good lord, do you really need to resort to hyperbole and appeals to ridicule to make your argument? First, it only makes me less likely to seriously consider anything you say. And second, it just makes you look irrational. But to the point, I already stated all that can be done about this problem. That is to test the models that we can (which suck) and to simply recognize that the other models are little more than a guess since they can’t be validated.

    “Still, if you compare with a “no warming” null, even the 1990 projection doesn’t half suck so far. ”

    The null hypothesis may be no warming, but there are alternative hypotheses which include lower amounts of positive feedback, etc. So we can do better than test it against no warming: we can test it against what has actually happened, and compare those test results between several models. You don't just take any model that beats nothing; you take the best model. F=ma/2 would describe motion better than no equation at all, but F=ma is better.

    “And yes, I do wonder why you are “skeptical” about addressing a risk just because it’s poorly bounded. Poorly bounded risk is not a happy thought.”

    The concept of risk is poorly bounded, CM; that's why it's risk. This one just happens to be EXTREMELY poorly bounded, if you want to use that term. I know my risk of death in a car accident is, on average, say, 1/100,000, but how much does that change based on the car I drive? How I drive? Where I drive? Pooled risk might be better understood, but my individual risk is, as you might say, poorly bounded. But just because it's poorly bounded doesn't mean I ignore it. I generally obey the speed limit, I check my blind spot and use a blinker when changing lanes, I bought pretty much the safest car in my budget, I use my seat belt, I even took a highway driving class at Sears Point Raceway. But all those things come at fairly trivial marginal cost. The Sears Point class was only a couple hundred bucks, was a ton of fun, and saved me about as much on my insurance as it cost. It doesn't cost me anything to drive 65 instead of 85 (in fact it saves me money), nor does it cost anything to buckle a seat belt; I was going to spend $20-24K on a car anyway, and the marginal safety improvement over other similar models came at negligible cost (and this car was again safer than more expensive ones).

    You can probably tell where this is going, but with climate change, the proposed risk reductions are extremely expensive, so much so that they may even be worse than the problem itself. You've probably heard the term "the cure is worse than the disease"? Well, in this case, we don't even know whether the disease has negative effects, how bad those effects would be, or how likely it is we'll get the disease. Yet we do know that cutting back CO2 emissions to the scale required to make a serious dent will have extremely negative effects, and we don't even know if we could do that given the continued development of the rest of the world. What kind of suffering would take place in just America if the cost of energy, say, doubled, which is a very conservative estimate given the steps we'd likely need to take to get CO2 emissions back to around 1980 or 1990 levels for the next century? What about the developing world? How many lives are you going to negatively affect by essentially preventing the industrialization and modernization that require so much energy? Sure, maybe you can count on alternative sources of energy, but those currently aren't going to solve this problem, and it's far from certain they ever will. Unless, that is, you support large-scale construction of nuclear power all over the globe, which I do, for many reasons outside of global warming.

    Anyway, I could go on and on about the rationale behind my skeptical stance, but I suppose you get the idea, and it's quite likely I'll just get some terse ad hominem in response anyway, so I'll just stop.

    Comment by Wally — 9 Sep 2010 @ 10:39 AM

  253. > please don’t suggest that someone … might not have had statistics 101

    Nobody’s asking you personally to disclose that. You decide where to begin, based on what you know.

    We’re readers like you, not here to “win any favors” from you.

    The science assumes some statistics; if you don’t have that background or, like me, learned it a third of a century ago, it’s helpful to read at least something like Grumbine on trends for the basic idea.

    A lot of resources on using Excel for climate point to old topics at Tamino’s blog that are currently unavailable, for example this page:
    http://processtrends.com/toc_trend_analysis_with_excel.htm (which is still worth a look; I just found it this minute, not a recommendation yet).

    I see some readers at Tamino’s have found and maybe archived some of the missing material. That will help.

    Comment by Hank Roberts — 9 Sep 2010 @ 10:58 AM

  254. Tim Chase’s explanation of how to search for cached topics from Tamino’s: http://tamino.wordpress.com/2010/07/01/on-thin-ice/#comment-43321

    Comment by Hank Roberts — 9 Sep 2010 @ 11:03 AM

  255. “Yes, we also know that we should probably expect at least some warming (say 0-2 degrees/100 years).”

    How do we know this?

    "What we disagree about is the confidence surrounding how much warming and what that means we should do."

    Wrong wrong wrong wrong wrong. Try again.

    We /know/ what the impact of forcing on temperature is. This is called ‘climate sensitivity’. We DO NOT need models to know this.

    See http://julesandjames.blogspot.com/2006/03/climate-sensitivity-is-3c.html

    "So you ask: suppose they are right?"

    This isn't really a question, because we /know/ climate sensitivity; we don't need models to tell us the temperature impact of GHG emissions.

    "I still don't think we understand the Earth's climate well enough to know if, on the whole, this 'catastrophic' warming will even be bad."

    Great. Try reading the IPCC AR4. Here are a bunch of people who /do/ know what the impacts will be. They are experts. You are not. Learn something.

    "Sure, some people will likely be hurt in a myriad of ways, but it would be completely naive not to also study and consider possible benefits."

    Straw man. Costs and benefits are analysed.

    “Yet, I don’t think very many people are looking at such things, do you?”

    Wrong.

    "So, OK, let's suppose they are right, despite the absence of convincing proof that they are; now convince me I need to do something about it."

    Convincing proof already exists. You don’t appear to have read any.

    - We know climate sensitivity is 3 degrees (+/- 1.5 deg)
    - We know what current GHG emissions are
    - We know what proportion of current GHG emissions end up in atmosphere
    - We know, therefore, what future emissions will do to the atmosphere, and hence what they will do to T (see the back-of-envelope sketch after this list)
    - We have done a LOT of studies of benefits and negatives of increasing T
    - You /don't/ need to do something about it. This won't affect you, since you are rich, and probably too old to really be impacted. However, if you don't do anything about it, you leave a legacy for your children and grandchildren that they may be utterly incapable of dealing with.
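
    [For concreteness, a back-of-envelope sketch in R of that chain of reasoning, assuming the standard logarithmic CO2-forcing relation; the 560 ppm figure is just the usual doubling benchmark, not a forecast:]

      S  <- 3.0    # equilibrium climate sensitivity, deg C per CO2 doubling
      C0 <- 280    # pre-industrial CO2 concentration, ppm
      C  <- 560    # a hypothetical future concentration, ppm
      S * log(C / C0) / log(2)   # equilibrium warming: 3.0 deg C here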

    Any questions?

    Comment by Silk — 9 Sep 2010 @ 11:16 AM

  256. Wally, ocean pH change is better understood and faster than climate change:
    http://www.nature.com/nature/journal/v425/n6956/abs/425365a.html
    http://www.nature.com/cited/cited.html?doi=10.1038/425365a

    Comment by Hank Roberts — 9 Sep 2010 @ 11:18 AM

    @253: Good lord, skip the lectures on the importance of statistics. They are totally uncalled for. We know that "science assumes some statistics" and that understanding statistics is important. Please post when you have something valuable to say, e.g., when you have links for your resource; otherwise you are just cluttering the discussion (as I am in this post, but I will keep it short and will never post on that topic again).

    Comment by Nick Rogers — 9 Sep 2010 @ 11:33 AM

  258. Martin Vermeer @250 – “The agreement seems to suggest that you are ignoring the autocorrelation present (right?), which is significant for monthly data and should be accounted for.”

    Thank you for your response, and yes, you’re correct, I hadn’t considered autocorrelation. Quick calculation shows the lag-1 autocorrelation in the monthly GISTEMP data is very high (0.761)… which I suspect is more than enough to invalidate the calculated significance of a simple OLS regression. I hadn’t anticipated this issue, and will look at ways to correct for it. Suggestions are welcome. (Ah the joys of ‘doing it yourself’…)
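
    [One common first-order fix, sketched in R: treat the residuals as AR(1) and shrink the effective sample size accordingly (a Quenouille-style adjustment; generalized least squares or an explicit ARMA error model would be more thorough). The inputs are the figures quoted in this thread:]

      r1    <- 0.761                    # lag-1 autocorrelation of residuals
      n     <- 271                      # number of monthly observations
      n_eff <- n * (1 - r1) / (1 + r1)  # effective sample size, about 37
      se_naive <- 0.00137               # naive OLS std. error of the trend
      se_adj   <- se_naive * sqrt(n / n_eff)
      se_adj                            # roughly 2.7 times the naive value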

    "On a more general note, the test you are doing (the most appropriate one being against the B scenario) kicks over the straw man that the model outputs are 'perfect'. Of course they are not." I don't expect the models to be perfect. I expect every model prediction to be "precisely wrong" while aiming to be "generally right".

    My issue is that two people with different axes to grind can still manage to look at the same model prediction and the same observed data and disagree over whether the model is “generally right” or not because both sides will set different arbitrary, self-serving thresholds for what constitutes “generally right”. Statistics would appear to be the objective way to put such disputes to rest, and I personally try to be as objective as possible.

    And yes, I’m aware that Hansen (1988) used a higher estimate for climate sensitivity than is currently popular. That’s why I suspect those predictions came out on the high side. I don’t see this as justification for discrediting modeling, rather for better understanding some of the factors that go into models. If the predictions came out on the low side, that would be equally informative.

    Comment by Russ R. — 9 Sep 2010 @ 11:41 AM

  259. Wally, are you aware that computer models are used successfully for many, many applications where we have no ability to run a full-scale test? Whether it’s simulating the explosion of a nuclear bomb, modelling the failure modes of a hydroelectric dam, or ensuring an aeroplane is safe to fly – we use models all the time.

    Climate models are among the better-validated models, since we have over 30 years of data from our 1:1 model, so we don't have to rely entirely on backcasting. Your claims that the models don't perform well are simply wrong. If your "null hypothesis" is constant warming (which is a really stupid thing to do, which is why the scientists didn't use that as a null), do you understand that this means you have just assumed that the world is constantly warming, and that you have no explanation for why? Consequently, we would still need to act to deal with this warming, but since we wouldn't know what is causing it, we wouldn't know how to tackle it. But you aren't that stupid, of course. You do, however, need to go and look up what a null hypothesis actually is, because currently you have it wrong.

    Your arguments about the cost of reducing carbon to the developing world are just shocking. Climate change will hit developing countries hardest, since those countries are already at the limit of liveability. They are also countries which can really benefit from investment in a completely carbon-free economy. The risks of action and inaction may still be an area of active research, but the current conclusions are utterly different from what you seem to think.

    And you evidently haven’t been paying attention, because those serious about tackling climate change are in favour of using nuclear power where appropriate.

    Looking through your rambling post, I’m struggling to find any position of yours that isn’t based on a misconception. Get the facts straight, then the conclusion follows easily.

    "Anyway, I could go on and on about the rationale behind my skeptical stance"

    Nothing you have said so far is in agreement with the facts, so I look forward to a clearer explanation of your rationale after you have re-examined the evidence.

    Comment by Didactylos — 9 Sep 2010 @ 11:50 AM

  260. 252 (Wally),

    …but with climate change, the proposed risk reductions are extremely expensive, so much so that they may even be worse than the problem itself. You've probably heard the term "the cure is worse than the disease"? Well, in this case, we don't even know whether the disease has negative effects, how bad those effects would be, or how likely it is we'll get the disease.

    This is unnecessarily alarmist. Do you have any citations for these statements?

    First, everything I’ve read says that professional economists estimate the cost at between 1% and 3% of GDP, which is pretty minor. No skyrocketing costs. Just retooling, and jobs/economy involved in the process.

    Second, it also addresses other problems, like peak oil and strategic dependence on oil and coal rich countries. We have to get off of fossil fuels eventually, so why not do so in a comfortable, controlled fashion, instead of under the gun of “oh my God, there’s so little left!”

    Third, the world has transitioned in the past from horses and sailing ships to steamships and more, and from broadsheets to telegraph to radio to television to the Internet. Change happens, and it's not by default bad and expensive.

    Fourth, if we wait and then find out that the problem is big, the expense will then be huge. Then we have to put the brakes on faster, and worry about having reached a CO2 level that is already dangerous and can’t be undone. We really will then be in the situation you’re trying to avoid, causing economic pain and suffering when it all could have been avoided with an earlier but more conservative approach.

    Fifth, no free market democracy, no matter what is done, is going to implement policies that strangle the economy and cause the sort of damage you’re describing. In the worst of cases, politicians that try to do so would be voted out of office. That sort of behavior could not come about until climate change itself was so damaging and indisputable that the populace was panicking, which will be too late.

    So you are promoting a complete lack of action to avoid what is nothing more than the illusion of dangerous action, which in the end will result in the sort of chaos and suffering that you use as the basis for your argument.

    Comment by Bob (Sphaerica) — 9 Sep 2010 @ 12:05 PM

  261. David Archer’s classes:
    http://www.youtube.com/view_play_list?p=FA75A0DDB89ACCD7

    “PHSC 13400: Global Warming
    This 10-week course for non-science majors focuses on a single problem: assessing the risk of human-caused climate change. The story ranges from physics to chemistry, biology, geology, fluid mechanics, and quantum mechanics, to economics and social sciences. The class will consider evidence from the distant past and projections into the distant future, keeping the human time scale of the next several centuries as the bottom line. The lectures follow a textbook, “Global Warming, Understanding the Forecast,” written for the course. For information about the textbook, interactive models, and more, visit: http://forecast.uchicago.edu/

    Hat tip to Lionel Smith for the pointer, in a comment here:
    http://scienceblogs.com/deltoid/2010/09/steven_schneider_and_the_skept.php?utm_source=mostactive&utm_medium=link#comment-2785002

    Comment by Hank Roberts — 9 Sep 2010 @ 12:39 PM

  262. Adding to Hank’s link:

    http://www.statsoft.com/textbook/multiple-regression/

    Comment by Jacob Mack — 9 Sep 2010 @ 12:46 PM

  263. Russ R. @235 — Thank you.

    I now have a more realistic, but still very simple, model which will make short-range "predictions" if you supply future net forcings and also future values for the AMO, an index of internal variability, and the SOI, another index of internal variability. Since neither is predictable in a deterministic sense, one would need to sample against an ARMA-type model for both. For the SOI there is enough data available to construct one; this is much more problematic for the AMO.

    Nonetheless, this would give approximate 10–20 year predictions with appropriate error intervals.
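
    [A rough sketch in R of what sampling against an ARMA-type model might look like; the index series here is synthetic, standing in for the real SOI record:]

      set.seed(42)
      soi  <- arima.sim(list(ar = 0.5, ma = 0.3), n = 600)  # fake monthly index
      fit  <- arima(soi, order = c(1, 0, 1))                # fit ARMA(1,1)
      # draw 100 plausible 20-year (240-month) index trajectories
      sims <- replicate(100, arima.sim(
                list(ar = coef(fit)["ar1"], ma = coef(fit)["ma1"]),
                n = 240, sd = sqrt(fit$sigma2)))
      dim(sims)   # 240 x 100 matrix, one column per trajectory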

    Comment by David B. Benson — 9 Sep 2010 @ 1:13 PM

  264. Bob,

    I'm sorry, 1-3% of GDP yearly is minor? This is the effect of basically going into a recession every year. Over a few years this would have the effect of legislating ourselves into another great depression. Your belief that 1-3% of GDP is minor is simply an indication of your ignorance.

    Sources for energy cost increases under proposed cap-and-trade legislation are literally all over the place. Here's a piece from the Heritage Foundation: http://www.heritage.org/research/testimony/the-economic-impact-of-the-waxman-markey-cap-and-trade-bill

    Its impact is estimated at losing about 1M jobs, three thousand dollars per household yearly, and about $400B in GDP yearly. That is not by any means minor.

    Other points:

    “Second, it also addresses other problems, like peak oil and strategic dependence on oil and coal rich countries.”

    Whether we use oil for commerce/industry is going to ultimately be irrelevant for peak oil or defense. F-22s are never going to be solar powered. We'll always need to protect our oil supplies.

    “Third, the world has transitioned in the past from horses and sailing ships to steam ships to more, and from broad sheets to telegraph to radio to television to the Internet. Change happens, and it’s not by default bad and expensive.”

    This is a complete red herring and an awful analogy to boot. Have we ever transitioned from a cheaper, more efficient, more easily harvested fuel, to something that is more expensive, and more difficult to use? That's what we're considering when going from coal/oil to solar/wind. This change is bad and expensive. Whatever changes have come before have absolutely ZERO bearing on the current technologies and resources available.

    “Fourth, if we wait and then find out that the problem is big, the expense will then be huge. Then we have to put the brakes on faster, and worry about having reached a CO2 level that is already dangerous and can’t be undone.”

    This may be true, but it's only an argument from ignorance. You're essentially trying to tell me that because we don't know if A will happen, we should protect ourselves from A at pretty much an unquantifiable cost because, well, it MIGHT be worse later, but you can't tell me what the chances are around that "might." Quite illogical.

    “So you are promoting a complete lack of action to avoid what is nothing more than the illusion of dangerous action”

    I reject your conclusion, based on the false assumption that the action is not "dangerous": a yearly loss of 1-3% of GDP from something like cap and trade alone would be devastating.

    Comment by Wally — 9 Sep 2010 @ 1:33 PM

  265. Nick Rogers, your reply to Adelady is certainly right: “Uncertainties are critical part of any prediction.” I’d guess that when she wrote @246 “Uncertainties do not change or challenge the overall picture” she meant that greater uncertainty doesn’t move the midrange, it widens both extremes — it is a reason to worry more, rather than less, about the worst case.

    If you’re, say, the Nick Rogers who’s written several climate books, don’t take simple answers personally. We don’t know who you are (you can put a URL into the “Website” field if you want more known).

    Pointers to basic statistics get given here repeatedly, because later readers may find them useful. No offense meant.

    Comment by Hank Roberts — 9 Sep 2010 @ 2:11 PM

  266. To #259:

    “Your claims that the models don’t perform well are simply wrong.”

    “Nothing you have said so far is in agreement with the facts.”

    It is telling that instead of “simply” showing why exactly the claim (which applies to specific models, see previous page) is wrong or what exactly in the post is not in agreement with the facts (I take “nothing” for what it is – a pose), you have chosen to write 6 paragraphs of invective.

    Comment by Nina Myers — 9 Sep 2010 @ 2:49 PM

  267. Nick Rogers @257 — Unfortunately this method of communication requires a superabundance of repetition. Learn to deal with it, possibly by just scanning quickly over comments (thoughtfully) offered to others less knowledgeable than you.

    Thank you.

    Comment by David B. Benson — 9 Sep 2010 @ 3:28 PM

  268. Wally (#252), your advice is well taken. Hyperbole is best avoided. That goes also for your claim that the testable models suck and the ones not yet testable are little more than guesses.

    I think Hargreaves (2010) has a more reasonable take. Already the Hansen (1988) forecast had skill. Lately, it’s turning out to be high, but for reasons that are largely understood. From what we now know about oceans and aerosols, and on current best estimates of climate sensitivity, we’d expect it to be too high. Since current climate models take this knowledge into account, we’d expect them to do at least as well.

    I have a trip to make, so I’ll leave the discussion there.

    Comment by CM — 9 Sep 2010 @ 4:44 PM

  269. Nick #251, good grief! If projections were contrary to observations then we’d have to re-examine them. As we’re not projecting anything like your suggestion, or any other outlandish possibility, I don’t see your point.

    My worry about the projections is that we seem to have understated several things. The Arctic melt is the obvious one at the moment, but methane seems to be bubbling up around both poles and that doesn’t seem outlandish to me. It’s worrying.

    Comment by adelady — 9 Sep 2010 @ 6:45 PM

  270. Wally, you’re comparing the cost of taking action to — what? Your assumption that nothing could go wrong with business as usual?

    Seriously, look for something you can read about the costs of going on in the direction we’re headed. Example:

    Estimates of the long-term US economic impacts of global climate change-induced drought. – 2010 –
    http://www.osti.gov/bridge/servlets/purl/984152-0fKpjM/984152.pdf

    “This report quantifies some of the potential economic impacts of a subset of potential consequences of global climate change, namely changes in domestic agricultural productivity, changes in water available for consumption to large consumers of water, and changes in hydroelectric power consumption caused by global climate change-induced drought in the United States…. this report examines a range of realizable outcomes of different severities to gain a better understanding of the range of possible economic consequences…..”

    Comment by Hank Roberts — 9 Sep 2010 @ 7:17 PM

  271. Or, Wally, read some of the science.

    This is — or should be — an eye-opener.

    http://www.nature.com/ismej/journal/v4/n9/full/ismej2010107a.html

    Commentary
    The ISME Journal (2010) 4, 1090–1092; doi:10.1038/ismej.2010.107; published online 8 July 2010
    Dangerous shifts in ocean ecosystem function?

    Comment by Hank Roberts — 9 Sep 2010 @ 7:24 PM

  272. Nick Rogers, Tell ya what. We’ll quit emphasizing the importance of proper statistical analysis as soon as you guys actually start doing it. Deal?

    Comment by Ray Ladbury — 9 Sep 2010 @ 7:31 PM

  273. Didactylos

    “Wally, are you aware that computer models are used successfully for many, many applications where we have no ability to run a full-scale test? Whether it’s simulating the explosion of a nuclear bomb, modelling the failure modes of a hydroelectric dam, or ensuring an aeroplane is safe to fly – we use models all the time.”

    Uh, Mr. Didactylos, we absolutely have full scale tests for all of those things. People don't just jump right on a plane that has never been test flown and only undergone some sort of simulated flight. Similarly, our hydroelectric models have how many real-world dams to build off of? Do you think we went straight from E=mc^2 to Hiroshima? These models were, of course, of varying degrees of accuracy upon their first conception, and as time has passed, models for building "aeroplanes" in silico have improved greatly thanks to repeated real-world experimentation, but real-world tests are essential for every model. Eventually this will be true of climate science as well, as both time passes and technological improvements make it easier to run actual experiments.

    The rest of your post is utter gibberish, blindly throwing around insults and making grand claims of "limits of livability" without any grounding in fact. I suppose India is at the "limits of livability"? Please, show me that research piece.

    Comment by Wally — 9 Sep 2010 @ 7:31 PM

  274. BPL: "How does 'harvests failing around the world due to massive increases in drought' grab you?"
    Wally: "Maybe I did not make myself entirely clear. I won't be scared into believing you; you need to post a large and exhaustive set of studies on how climate change will affect basically everything." (Wally — 9 September 2010 @ 9:29 AM)

    google scholar search for global+warming+drought

    Dai, Aiguo, Kevin E. Trenberth, Taotao Qian, 2004: A Global Dataset of Palmer Drought Severity Index for 1870–2002: Relationship with Soil Moisture and Effects of Surface Warming. J. Hydrometeor, 5, 1117–1130. Cited by 328
    “Together, the global land areas in either very dry or very wet conditions have increased from 20% to 38% since 1972, with surface warming as the primary cause after the mid-1980s. These results provide observational evidence for the increasing risk of droughts as anthropogenic global warming progresses and produces both increased temperatures and increased drying.”

    Constraint to Adaptive Evolution in Response to Global Warming, Science 5 October 2001: Vol. 294. no. 5540, pp. 151 – 154 DOI: 10.1126/science.1063656
    “Despite genetic variance for traits under selection, among-trait genetic correlations that are antagonistic to the direction of selection limit adaptive evolution within these populations. Predicted rates of evolutionary response are much slower than the predicted rate of climate change.”

    Translocation experiments with butterflies reveal limits to enhancement of poleward populations under climate change. S. L. Pelini, J. D. K. Dzurisin, K. M. Prior, C. M. Williams, T. D. Marsico, B. J. Sinclair, and J. J. Hellmann (2009) PNAS 106, 11160-11165
    “…we have evidence that facilitation of poleward range shifts through enhancement of peripheral populations is unlikely in either study species.”

    Keeping Pace with Fast Climate Change: Can Arctic Life Count on Evolution? Oxford Journals Life Sciences Integrative and Comparative Biology Volume44, Issue2 Pp. 140-151.
    Dominique Berteaux, Denis Réale, Andrew G. McAdam and Stan Boutin
    “Our conclusion is that evolution by natural selection is a pertinent force to consider even at the time scale of contemporary climate changes. However, all species may not be equal in their capacity to benefit from contemporary evolution.”

    Science 15 August 2003: Vol. 301. no. 5635, pp. 929 – 933 DOI: 10.1126/science.1085046 Climate Change, Human Impacts, and the Resilience of Coral Reefs

    Flattening of Caribbean coral reefs: region-wide declines in architectural complexity, Proc Biol Sci. 2009 Aug 22;276(1669):3019-25. Epub 2009 Jun 10.
    “In the Caribbean and elsewhere, reef-building corals now face new threats from climate change, particularly in the form of thermally induced coral bleaching and mortality, which are becoming increasingly frequent and extensive as thermal anomalies intensify and lengthen.”

    Science 20 August 2010: Vol. 329. no. 5994, pp. 940 – 943 DOI: 10.1126/science.1192666 Drought-Induced Reduction in Global Terrestrial Net Primary Production from 2000 Through 2009 Maosheng Zhao* and Steven W. Running
    “The past decade (2000 to 2009) has been the warmest since instrumental measurements began, which could imply continued increases in NPP; however, our estimates suggest a reduction in the global NPP of 0.55 petagrams of carbon. Large-scale droughts have reduced regional NPP, and a drying trend in the Southern Hemisphere has decreased NPP in that area, counteracting the increased NPP over the Northern Hemisphere. A continued decline in NPP would not only weaken the terrestrial carbon sink, but it would also intensify future competition between food demand and proposed biofuel production.”

    Science 22 February 2008: Vol. 319. no. 5866, pp. 1080 – 1083 DOI: 10.1126/science.1152538 Human-Induced Changes in the Hydrology of the Western United States
    “The results show that up to 60% of the climate-related trends of river flow, winter air temperature, and snow pack between 1950 and 1999 are human-induced. These results are robust to perturbation of study variates and methods. They portend, in conjunction with previous work, a coming crisis in water supply for the western United States.”

    Science 25 June 2010: Vol. 328. no. 5986, pp. 1642 – 1643 DOI: 10.1126/science.1186591 Climate Change: Dry Times Ahead
    “In the past decade, it has become impossible to overlook the signs of climate change in western North America. They include soaring temperatures, declining late-season snowpack, northward-shifted winter storm tracks, increasing precipitation intensity, the worst drought since measurements began, steep declines in Colorado River reservoir storage, widespread vegetation mortality, and sharp increases in the frequency of large wildfires.”
    “All of these changes, as well as dramatic warming and drying elsewhere in the region and deep into Mexico, are consistent with projected anthropogenic climate change, but seem to be occurring faster than projected by the most recent national (2) and international (3) climate change assessments…”

    Projected changes in drought occurrence under future global warming from multi-model, multi-scenario, IPCC AR4 simulations, J Sheffield, EF Wood – Climate Dynamics, 2008 – Springer
    “Recent and potential future increases in global temperatures are likely to be associated with impacts on the hydrologic cycle, including changes to precipitation and increases in extreme events such as droughts.”

    Global Warming and the Water Crisis, ]S Kanae – Journal of Health Science, 2009
    “… Less water availability and increased drought due to global warming cannot be fully mitigated by high-tech measures such as desalination, even if we neglect the fact that such measures consume much fossil fuel.”

    doi:10.1016/j.foreco.2009.09.001 A global overview of drought and heat-induced tree mortality reveals emerging climate change risks for forests, Forest Ecology and Management Volume 259, Issue 4, 5 February 2010,
    “Although episodic mortality occurs in the absence of climate change, studies compiled here suggest that at least some of the world’s forested ecosystems already may be responding to climate change and raise concern that forests may become increasingly vulnerable to higher background tree mortality rates and die-off in response to future warming and drought, even in environments that are not normally considered water-limited.”

    http://www.ars.usda.gov/research/publications/publications.htm?seq_no_115=155952, Responses of Wheat Varieties Released since 1903 to Increasing Atmospheric Carbon Dioxide
    “…newer varieties did not show a stronger carbon dioxide response when growth and yield were compared at a common CO2 concentration of 290 and 370 ppm.”
    “In addition, the newer varieties showed a strong decrease in protein content and baking quality…”

    Footprint of temperature changes in the temperate and boreal forest carbon balance, Geophysical Research Letters (GRL) paper 10.1029/2009GL037381, 2009
    “The authors show that while current mean annual temperatures do not correlate well with current NEP [net ecosystem productivity], temperature changes spanning the recent past (1980–2002) may be important factors that influence current carbon balance. In particular, changes in past springtime temperatures seem to have had the greatest effect on current annual NEP, the authors find. Their results also suggest that if global warming continues, forests will not continue to be carbon sinks in the future but may instead become carbon sources.”

    Comment by Brian Dodge — 9 Sep 2010 @ 7:59 PM

  275. Wally: “People don’t just jump right on a plane that has never been test flown and only undergone some sort of simulated flight. ”

    me: see Test pilot.

    Wally: “Eventually this will be true of climate science as well”

    me: see hindcasting.

    Didactylos: “modelling the failure modes of a hydroelectric dam”

    Wally: “we absolutely have full scale tests for all of those things”

    Me: Really, full scale tests of failures of dams. Wow.

    Wally, is there anything in the world that you know anything about?

    Comment by elspi — 9 Sep 2010 @ 9:03 PM

  276. “Your belief that 1-3% of GDP is minor is simply an indication of your ignorance.” Wally — 9 September 2010 @ 1:33 PM

    According to http://www.tradingeconomics.com/Economics/GDP-Growth.aspx?Symbol=USD
    “From 1947 until 2010 the United States’ average quarterly GDP Growth was 3.31 percent reaching an historical high of 17.20 percent in March of 1950 and a record low of -10.40 percent in March of 1958.”
    Using their chart tool: from January 1993 to January 2001, the average GDP growth rate was 3.81 percent/yr; from January 2001 through January 2009, it was 1.75 percent/yr.

    According to Wikipedia, "Another proposed definition of depression includes two general rules: 1) a decline in real GDP exceeding 10%, or 2) a recession lasting 2 or more years," and "Production as measured by Gross Domestic Product (GDP), employment, investment spending, capacity utilization, household incomes, business profits and inflation all fall during recessions." Depending on whether the actual number is 1% or 3% of GDP, and what the economic climate is, it probably wouldn't cause a recession, let alone a depression.

    “Whether we use oil for commerce/industry is going to ultimately be irrelevant for peak oil or defense. F-22s are never going to be solar powered. We’ll always need to protect our oil supplies.” Wally

    For FY 2009, Federal spending was 28 % of GDP, Department of Defense spending was 4.8% of GDP. http://www.census.gov/compendia/statab/2010/tables/10s0459.pdf

    “…a yearly loss of 1-3% of GDP from something like cap and trade alone would be devastating.” Wally

    According to https://www.cia.gov/library/publications/the-world-factbook/geos/us.html, 2009 US GDP was 14.14 trillion dollars.

    According to http://www.eia.doe.gov/emeu/perfpro/news_m/index.html?featureclicked=3, nineteen major energy companies reported revenues of $260.6 billion for the second quarter of 2010; ~$1.0 trillion for the year, or ~7.4% of GDP.
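
    [Checking that arithmetic in R:]

      gdp      <- 14.14e12       # 2009 US GDP, dollars
      energy_q <- 260.6e9        # quarterly revenues, 19 energy companies
      4 * energy_q / gdp         # ~0.074, i.e. about 7.4% of GDP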

    Replacing fossil fuel expenditures with renewable expenditures wouldn’t be a loss, and it would be less devastating than the current fossil fuel and military expenditures in the Mideast.

    Comment by Brian Dodge — 9 Sep 2010 @ 9:53 PM

  277. Brian,

    “Using their chart tool: from January 1993 to January 2001, the average GDP growth rate was 3.81 percent/yr; from January 2001 through January 2009, it was 1.75 percent/yr.”

    Right, so you’d basically stagnate growth at least, or slap it negative, which would be devastating as your population continues to grow.

    “Depending on whether the actual number is 1% or 3% of GDP, and what the economic climate is, it probably wouldn’t cause a recession,let alone a depression.”

    Uh, did you even look at your facts? Over a 9-year period we averaged 1.75 percent growth. If we assume a 1-3% drag on growth, and that recent growth is a better indicator of future growth than the pre-2000 figures, you basically have a 50-50 chance at positive or negative growth. So… it's quite possible this would lead to a recession or even a depression, especially since growth is never linear. Let's say we had entered the most recent recession with this extra 1-3% drag from cap and trade… TADA! Recession turns to depression.

    [Response: Completely OT, no more please. Jim]

    Comment by Wally — 9 Sep 2010 @ 10:25 PM

  278. 273 Wally sez: “Do you think we went straight from E=mc^2 to Hiroshima?”

    What does the one have to do with the other? You might want to learn some physics, and some history of physics.

    Comment by John E Pearson — 9 Sep 2010 @ 10:45 PM

  279. Wally 249,

    I’m about to have a paper published on the subject. In the meantime, I recommend reading these:

    Battisti, D.S., and R.L. Naylor 2009. “Historical Warnings of Future Food Insecurity with Unprecedented Seasonal Heat.” Science 323, 240-244.

    Dai, A., K.E. Trenberth, and T. Qian 2004. “A Global Dataset of Palmer Drought Severity Index for 1870–2002: Relationship with Soil Moisture and Effects of Surface Warming.” J. Hydrometeorol. 5, 1117-1130.

    Comment by Barton Paul Levenson — 10 Sep 2010 @ 2:03 AM

  280. Wally #277, you’re holding the wrong end of the stick on this… the 1-3% should be subtracted from the GDP itself, not from the growth rate. From consumption, not investment. Read about it in the IPCC’s WG3 report… what 3% of GDP means is that we’ll still be getting richer all the time — only a year or two later.

    During the Vietnam war, U.S. military spending went as high as 10% of GDP, while the economy just went on growing… you’re seriously underestimating the robustness of the US (and Western countries’) production system. All this depression talk is alarmism of the worst kind.

    Comment by Martin Vermeer — 10 Sep 2010 @ 2:06 AM

    Wally 252: we do know that cutting back CO2 emissions to the scale required to make a serious dent will have extremely negative effects,

    BPL: And how do we know that, precisely?

    Comment by Barton Paul Levenson — 10 Sep 2010 @ 2:07 AM

    Wally 264: I'm sorry, 1-3% of GDP yearly is minor? This is the effect of basically going into a recession every year.

    BPL: You and he both have it wrong. That’s a projection of how much lower GDP will be in 2030 compared to if we take no action, and assuming AGW doesn’t cause any further damage.

    Your basic mistake is assuming we’ll be just fine if we do nothing. We won’t be.

    Comment by Barton Paul Levenson — 10 Sep 2010 @ 2:13 AM

  283. Dear Wally: *sigh*

    I didn’t really expect you to go away and learn anything, but I hoped so all the same.

    It’s not what you don’t know – it’s the vast ocean of knowledge that you aren’t even aware exists. You don’t know how much you don’t know.

    For example: the test ban treaty means modelling is now the only method available for nuclear testing. Planes are expensive. Manufacturers do conduct destructive tests, but only the absolute minimum that they are required to do. Thousands more scenarios are run completely virtually, and validated by the full scale tests.

    I could go on, but if you aren’t interested in learning then evidently I am wasting my time.

    Your odd claim that we will be able to do real world climate experimentation in the future is just bizarre. It’s another clear indicator that you are deeply, truly out of your depth. We already do many real world tests, which are the basis of our models. But we can never do full-scale climate simulations because we only have one Earth-like planet, and we can only run it at 24 hours per day.

    That’s quite a major limitation, and destructive testing is severely contra-indicated.

    You mention liveability in India. Why you picked a country with one of the highest rates of malnutrition in the world is beyond me. It's another example of your inability to find out the first thing about what you are talking about. It's not difficult. All you need to do is click. And continue to learn.

    Comment by Didactylos — 10 Sep 2010 @ 3:03 AM

    “I reject your conclusion, based on the false assumption that the action is not ‘dangerous’: a yearly loss of 1-3% of GDP from something like cap and trade alone would be devastating.”

    Wally – If you're going to argue the economics, come prepared.

    Go away and read the Stern review.

    Very short summary

    “Using the results from formal economic models, the Review estimates that if we don’t act, the overall costs and risks of climate change will be equivalent to losing at least 5% of global GDP each year, now and forever. If a wider range of risks and impacts is taken into account, the estimates of damage could rise to 20% of GDP or more. In contrast, the costs of action – reducing greenhouse gas emissions to avoid the worst impacts of climate change – can be limited to around 1% of global GDP each year.”

    Cap and trade alone would have a tiny impact on the US economy. Obviously it depends on where you set the cap, but the suggestion that it could cost 1% of US GDP is based on… nothing.

    Comment by Silk — 10 Sep 2010 @ 3:46 AM

  285. Wally, Had you been paying attention, you would have seen that:
    1)increasing temperatures decrease yields of rice and many other important food crops
    2)increase (along with increasing CO2) survivability and vigor of noxious weeds such as poison ivy
    3)increase drought
    4)increase severe weather events

    And all of this as human population increases to 10 billion by mid century. What is more, you assume that all of the investment in mitigation is lost, when in reality,
    1)the development of a non-fossil fuel economy is essential even in the absence of climate factors due to the finitude of fossil fuels
    2)Investment in technology pays dividends down the road.

    Comment by Ray Ladbury — 10 Sep 2010 @ 4:59 AM

  286. 264 (Wally),

    I’ll ignore the % of GDP issue, since everyone else is hammering you with it.

    Whether we use oil for commerce/industry is going to ultimately be irrelevant for peak oil or defense. F-22s are never going to be solar powered. We’ll always need to protect our oil supplies.

    This is silly. Obviously we will need oil for certain critical things for a long time, which is all the more reason to be more careful with it. Using our reserves up so you can drive to the mall three times a week is a huge waste.

    This is a complete red herring and an awful analogy to boot. Have we ever transitioned from a cheaper, more efficient, more easily harvested fuel, to something that is more expensive, and more difficult to use?

    It’s a perfect analogy. What, the only good analogy would be to point to that time in 1520 when we converted from fossil fuel use to zorgon power? The point of the analogy is that the world changes. It always has and always will. When the world changes that doesn’t instantly make everyone poorer. Quite the opposite, it creates jobs, new opportunities, and ultimately frees people up to focus on other things.

    To be honest, when I look at the complexity involved in harvesting oil reserves (locate, drill, extract, transport, refine, transport, and also clean up the inevitable mess) compared to something like wind (build a tower and power grid, then maintain it) I don’t know what you can be thinking by claiming that a FF infrastructure is so wonderful.

    You also twisted the argument. You ignored the point that the world has revamped its infrastructure many times in the past, to instead argue that a change to wind and solar power “is bad and expensive.” Based on what? Your personal opinion? One hundred years from now people are going to wonder why the heck we wasted more than a century on FF use.

    Note that your argument is also equivalent to “human civilization is doomed.” If we can’t safely get off of FF use now, with the benefits of FF to ease the transition, how are we going to do so when it’s completely gone? Are you saying that when FFs run out, it’s going to be the dark ages for all time?

    This may be true, but it's only an argument from ignorance. You're essentially trying to tell me that because we don't know if A will happen, we should protect ourselves from A at pretty much an unquantifiable cost because, well, it MIGHT be worse later, but you can't tell me what the chances are around that "might." Quite illogical.

    Except it’s not “might”, it’s “almost certainly.” It’s also not an unquantifiable cost, although ignoring the problem may be. As far as being unable to tell you what kind of chances… well, you know you just made that up. What did you think the IPCC AR4 report was for?

    “I reject your conclusion, based on the false assumption that the action is not ‘dangerous’; a yearly loss of 1-3% of GDP from something like cap and trade alone would be devastating.”

    Alarmism, and in a far worse and more incorrect form than when that term is used for those people who (correctly) recognize that climate change is a problem. You have no basis for your argument, except for emotional assertions.

    The economy is the denier’s polar bear. It’s the emotional symbol that is used to appeal to people’s fears, to get them to stop thinking and ignore the facts.

    Comment by Bob (Sphaerica) — 10 Sep 2010 @ 7:28 AM

  287. 264 (Wally),

    I do want to add one point on the GDP issue. When the money is spent, that does not mean that it simply vanishes, which is how you are behaving. The money goes to people, to perform labor, and to the owners of various resources (land, metals, vehicles, tools), to construct something new. It’s not very different from what happens now, except that different people are going to receive the money for doing different, new things.

    That new thing is replacing an old thing which will eventually wear out anyway. If the money isn’t spent on the new thing, it must instead be spent maintaining and replacing the old, worn out thing. Old oil tankers get decommissioned and replaced by new ones. Old refineries need parts continuously fixed and replaced. Oil wells run dry and are abandoned, and new ones must be built.

    All of this is part of the necessary expense of maintaining the existing, but not eternal and unchanging and “cheap,” FF infrastructure. There is a huge, huge expense behind fossil fuels.

    So spending some amount of money on wind, solar, hydro, nuclear and other power sources is merely a choice not to put some or all of that same money back into continued fossil fuel use.

    The money doesn’t just vanish. No harm whatsoever will come to the economy. Period. In fact, my own personal prediction is that once we are invested in that path and get some momentum, it will be historically viewed as something of a New Renaissance, the time when the world got cleaner and smarter and better organized, and abandoned a dirty, failing, clumsy and inevitably short-term power source.

    The only real difference here, if we do things methodically and rationally, is that the poor, rich, Texas oil barons aren’t going to be as rich. Which is why they’re fighting it tooth and nail… just like every other owner of an obsolescent technology has fought change throughout human civilization.

    [Response: Let's get back on topic please]

    Comment by Bob (Sphaerica) — 10 Sep 2010 @ 8:00 AM

  288. I’m not entirely sure this is “back on topic” in the strictest sense, but it’s in the spirit of DIY, I think–DIYers always steal from the pros...

    August GISS anomaly is now online, reported at .74 C. That’s a 2-way tie for 3rd-warmest ever, behind (you may have guessed it) August 1998 (.76 C.) Didn’t check/don’t remember the confidence intervals on the monthly anomalies, but clearly this was an August just as toasty globally as it was here in the Southeast.

    UAH had a (predictably) lower, but still warm .51 (of course, that’s for the lower troposphere, not the surface.) We’ll see what NCDC says in a few days.

    Comment by Kevin McKinney — 10 Sep 2010 @ 11:07 AM

  289. Wow, way too much to tackle here, but I’ll try.

    “I do want to add one point on the GDP issue. When the money is spent, that does not mean that it simply vanishes, which is how you are behaving. The money goes to people, to perform labor, and to the owners of various resources (land, metals, vehicles, tools), to construct something new. It’s not very different from what happens now, except that different people are going to receive the money for doing different, new things.”

    True, that money doesn’t “vanish,” but if you’re putting money to less productive use, the marginal difference in that efficiency does effectively vanish. If we use X dollars to make Y power now, but in the future you need X+Z dollars to make Y power because of cap and trade or whatever legislation, you’ve lost the ability to use Z dollars for other kinds of investments or consumption. Effectively, Z dollars are now gone from the new system compared to the old system. Further, this will happen every year this system is in place. So it will continually slow growth, not simply slow growth for a few years until we return to some sort of normal growth, as one poster claimed above.

    “The point of the analogy is that the world changes. It always has and always will. When the world changes, that doesn’t instantly make everyone poorer. Quite the opposite, it creates jobs, new opportunities, and ultimately frees people up to focus on other things.”

    That depends on the kind of change, Bob. That was my point, and you ignored it. If we transition to more expensive, less efficient fuels, that’s change for the worse. We’ll use our current wealth to create new wealth less efficiently. That’s the point.

    “You ignored the point that the world has revamped its infrastructure many times in the past, to instead argue that a change to wind and solar power “is bad and expensive.” Based on what?”

    Cost per unit energy.

    >Except it’s not “might”, it’s “almost certainly.”<

    No, it isn't "almost certainly"; that's what we've been attempting to discuss until the GDP discussion started, and no one has been able to prove such a thing.

    "1)increasing temperatures decrease yields of rice and many other important food crops
    2)increase (along with increasing CO2) survivability and vigor of noxious weeds such as poison ivy
    3)increase drought
    4)increase severe weather events"

    I won't bother to go into a lot of detail, since this poster hasn't even attempted to prove such things, but every one of these issues has been debunked.

    "1)the development of a non-fossil fuel economy is essential even in the absence of climate factors due to the finitude of fossil fuels"

    True, but that doesn't mean we have to impose painful top-down legislation. As fossil fuels become more scarce, the price will go up naturally, and the alternatives will become more competitive. Plus, by that point, the alternatives might actually have improved as well.

    "2)Investment in technology pays dividends down the road."

    Depends on the technology and the amount of investment. We could likely dump billions into cold fusion and never get anything productive. Similarly, what's the point of using up X dollars just to make the technology if you'll never actually be able to save at least X dollars in the future with it? These kinds of statements are far too simplistic to be at all useful or constructive.

    "Using the results from formal economic models, the Review estimates that if we don’t act, the overall costs and risks of climate change will be equivalent to losing at least 5% of global GDP each year, now and forever."

    So you've now combined the results of climate models of dubious validity with economic models? I see expanding error bars….

    "Obviously depends on where you set the cap, but the suggestion that it could cost 1% of US GDP is based on … nothing."

    Silk, it's based on increased power costs, which are pretty straightforward to evaluate.

    "Your basic mistake is assuming we’ll be just fine if we do nothing. We won’t be."

    This hasn't been established, as much as you'd like it to be. You can't prove the magnitude of warming, much less its effects on the entirety of world and human actions.

    And this one is too rich to pass up:

    "273 Wally sez: “Do you think we went straight from E=mc^2 to Hiroshima?”

    John: "What does the one have to do with the other? You might want to learn some physics and history of physics."

    First, you're talking to someone who holds a BS in physics, among other things. Second, the idiocy of your comment is self-evident, so I'll let it speak for itself. Good laugh, though, John. Next time, think before you cast an insult; you might just make yourself look like a fool again.

    Comment by Wally — 10 Sep 2010 @ 12:48 PM

  290. > a BS in physics
    I’ll believe that, but I’d bet it was more than 30 years ago. Much has been learned since. Have you read Spencer Weart’s history? First link under Science sidebar.

    Comment by Hank Roberts — 10 Sep 2010 @ 3:14 PM

  291. “First, you’re talking to someone who holds a BS in physics, among other things.”

    Wally may need to consider that the RealClimate PhDs have kindly indicated to him that this is not the place for those “other things”.

    Comment by flxible — 10 Sep 2010 @ 3:24 PM

  292. Re: Silk (284)

    Stern released an update:

    In June 2008, Stern said that because climate change is happening faster than predicted, the cost to reduce carbon would be even higher, at about 2% of GDP instead of the 1% in the original report.

    Source here.

    The Yooper

    Comment by Daniel "The Yooper" Bailey — 10 Sep 2010 @ 3:31 PM

  293. Wally, maybe you didn’t notice, but your “answer” failed to actually answer anything.

    Which really has been our complaint all along. Now, stop appealing to your own imaginary authority, and go and seek some knowledge.

    We’ll wait.

    Comment by Didactylos — 10 Sep 2010 @ 3:44 PM

  294. Kevin, I see 0.53 – where are you looking?

    0.53 is identical to July, and in line with the end of El Nino.

    Comment by Didactylos — 10 Sep 2010 @ 3:50 PM

  295. There are two basic tactics used by those who profit enormously from fossil fuels and who therefore wish to obstruct and delay the urgently needed phase-out of fossil fuels:

    1. Deny that the problem exists. Everyone here is familiar with the various forms of AGW denial.

    2. Deny that alternatives to fossil fuels exist. This includes denigration and disparagement of alternative energy sources, exaggeration of the importance of fossil fuels (e.g. without fossil fuels we will live in caves and wear animal skins — just like people in the 19th century did, I guess), exaggeration of the cost of transitioning from a fossil fuel economy to a wind/solar based economy, etc.

    What Wally is offering here is the second tactic. I don’t accuse him of deliberate dishonesty; I have no reason to doubt that he has embraced in all innocence and good faith the obstructionist propaganda that has been spoon-fed to him — and to a great many people — by those who wish to protect the billion-dollars-per-day profits of the fossil fuel corporations at any cost.

    The facts are that the AGW problem is FAR WORSE than most people realize, including (I believe) some climate scientists; and on the other hand, the solution is FAR EASIER than most people realize, if they are not paying close attention to ongoing developments in renewable energy and efficiency technologies.

    We have a very horrible problem, that we can solve rather quickly, easily and inexpensively if we choose to do so. The obstacles are not technical or economic; the only real obstacle is the vast, entrenched power and wealth of the fossil fuel corporations which has enabled them for a generation to obstruct solutions.

    Comment by SecularAnimist — 10 Sep 2010 @ 4:04 PM

  296. Sea Surface Temperatures in the Atlantic’s main hurricane development region had their warmest August on record, 1.23C above normal: http://bit.ly/JeffMWB

    Comment by Kees van der Leun — 10 Sep 2010 @ 4:09 PM

  297. “”1)increasing temperatures decrease yields of rice and many other important food crops
    2)increase (along with increasing CO2) survivability and vigor of noxious weeds such as poison ivy
    3)increase drought
    4)increase severe weather events”

    I won’t bother to go into a lot of detail, since this poster hasn’t even attempted to prove such things, but every one of these issues has been debunked.” Wally — 10 September 2010 @ 12:48 PM

    For those who might be interested in some details from the literature –

    (1)
    Global scale climate–crop yield relationships and the impacts of recent warming, stacks.iop.org/ERL/2/014002
    “For wheat, maize and barley, there is a clearly negative response of global yields to increased temperatures. Based on these sensitivities and observed climate trends, we estimate that warming since 1981 has resulted in annual combined losses of these three crops representing roughly 40 Mt or $5 billion per year, as of 2002.”

    Rice yields decline with higher night temperature from global warming, http://www.pnas.org/content/101/27/9971.full
    “We analyzed weather data at the International Rice Research Institute Farm from 1979 to 2003 to examine temperature trends and the relationship between rice yield and temperature by using data from irrigated field experiments conducted at the International Rice Research Institute Farm from 1992 to 2003. Here we report that annual mean maximum and minimum temperatures have increased by 0.35°C and 1.13°C, respectively, for the period 1979–2003 and a close linkage between rice grain yield and mean minimum temperature during the dry cropping season (January to April). Grain yield declined by 10% for each 1°C increase in growing-season minimum temperature in the dry season,
    “Relationships between grain yield and temperature or radiation were evaluated by using yield data from field experiments conducted under irrigated conditions with optimal management at the IRRI Farm from 1992 to 2003.”

    Nonlinear temperature effects indicate severe damages to U.S. crop yields under climate change, http://www.pnas.org/content/106/37/15594.abstract
    “We find that yields increase with temperature up to 29° C for corn, 30° C for soybeans, and 32° C for cotton but that temperatures above these thresholds are very harmful. The slope of the decline above the optimum is significantly steeper than the incline below it.”
    “Holding current growing regions fixed, area-weighted average yields are predicted to decrease by 30–46% before the end of the century under the slowest (B1) warming scenario and decrease by 63–82% under the most rapid warming scenario (A1FI) under the Hadley III model.”

    Radically Rethinking Agriculture for the 21st Century, Science 12 February 2010: Vol. 327. no. 5967, pp. 833 – 834 DOI: 10.1126/science.1186834
    “Climate change also has important implications for agriculture. The European heat wave of 2003 killed some 30,000 to 50,000 people (3). The average temperature that summer was only about 3.5°C above the average for the last century. The 20 to 36% decrease in the yields of grains and fruits that summer drew little attention.”

    (2)
    Biomass and toxicity responses of poison ivy (Toxicodendron radicans) to elevated atmospheric CO2. Proc Natl Acad Sci U S A. 2006 Jun 13;103(24):9086-9. Epub 2006 Jun 5.
    “In this 6-year study at the Duke University Free-Air CO(2) Enrichment experiment, we show that elevated atmospheric CO(2) in an intact forest ecosystem increases photosynthesis, water use efficiency, growth, and population biomass of poison ivy. The CO(2) growth stimulation exceeds that of most other woody species. Furthermore, high-CO(2) plants produce a more allergenic form of urushiol. Our results indicate that Toxicodendron taxa will become more abundant and more “toxic” in the future, potentially affecting global forest dynamics and human health.”

    Response of an allergenic species, Ambrosia psilostachya (Asteraceae)[ragweed], to experimental warming and clipping: implications for public health (American Journal of Botany. 2002;89:1843-1846.)
    “Warming increased ragweed stems by 88% when not clipped and 46% when clipped.” “Although warming caused no difference in pollen production per stem, total pollen production increased by 84% (P < 0.05) because there were more ragweed stems."

    Changes in biomass and root:shoot ratio of field-grown Canada thistle (Cirsium arvense), a noxious, invasive weed, with elevated CO2: implications for control with glyphosate, Weed Science, 52:584–588. 2004
    "…the study indicates that carbon dioxide–induced increases in root biomass could make Canada thistle and other perennial weeds that reproduce asexually from belowground organs harder to control in a higher [CO2] world." "At the time of herbicide application, growth at elevated [CO2] had resulted in a small but significant increase in shoot biomass in both years but a larger, significant increase in root biomass (2.5- to 3.3-fold) relative to ambient [CO2]."

    There is some good news – "Production of morphine in wild poppy (Papaver setigerum) (Ziska et al. 2008b) (Figure 1) showed significant increases with both recent and projected CO2 concentrations." http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2649213/

    Comment by Brian Dodge — 10 Sep 2010 @ 5:04 PM

  298. Didactylos, #294–

    This is what I was looking at. Seems like it’s the correct table, but let me know if I’ve missed something.

    http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts.txt

    Comment by Kevin McKinney — 10 Sep 2010 @ 5:04 PM
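
    For anyone who wants to pull that number programmatically rather than eyeball the table, a rough sketch in R. It assumes the GLB.Ts.txt layout of one data row per year, with the year in the first column and the twelve monthly anomalies (in hundredths of a degree C) in the next twelve; that layout should be double-checked against the file's own header before trusting the result.

      # Fetch the 2010 monthly anomalies from the GISS table linked above.
      # Layout assumption: year, then Jan..Dec in 0.01 deg C; missing values
      # appear as asterisks and will come through as NA.
      txt  <- readLines("http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts.txt")
      row  <- grep("^2010", txt, value = TRUE)[1]     # first row starting "2010"
      flds <- strsplit(trimws(row), "[[:space:]]+")[[1]]
      aug  <- as.numeric(flds[9]) / 100               # field 1 = year, 2-13 = Jan-Dec
      aug   # should land near the .74 quoted above, if the layout assumption holds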

  299. Hank,

    It was going on six years ago now. And how much has BS-level physics really changed in the last, I don’t know, 20 years at least? All the basic courses (quantum mech, thermo, electromagnetism) are pretty much unchanged. Some of the elective-type classes may have been improved upon, updated, or even rotated out of fashion, but undergrad physics is probably one of the most static majors, as far as content goes, of all the sciences over the last 50 years. Compare that to, say, biology. We change what we teach undergrads almost yearly.

    Comment by Wally — 10 Sep 2010 @ 7:11 PM

  300. Secular,

    You’re creating strawmen. Most of us in the skeptic camp do agree with AGW; we disagree with catastrophic AGW.

    Second, we don’t deny that alternatives exist; we point out that they are currently impractical and/or expensive.

    Comment by Wally — 10 Sep 2010 @ 7:14 PM

  301. Wally: “I won’t bother to go into a lot of detail, since this poster hasn’t even attempted to prove such things, but every one of these issues has been debunked.”

    Hmm. Let’s see. Rice yields:

    PNAS: Rice yields decline with higher night temperature from global warming
    http://www.pnas.org/content/101/27/9971.full

    http://www.reuters.com/article/idUSTRE6790HS20100810

    Poison Ivy?

    http://news.nationalgeographic.com/news/2006/05/060530-warming.html

    Drought and floods?

    http://www.csmonitor.com/Books/2008/0304/p13s02-bogn.html

    http://www.nature.org/initiatives/climatechange/issues/art19624.html

    You know, Wally, normally when you are going to lie, it’s customary to make it a little more difficult to prove. Three tiny little Google searches, and now you can sweep up the ashes of your credibility.

    Oh, and by the way, John holds a PhD in physics… and so do I. We can recommend texts if you’d like.

    Comment by Ray Ladbury — 10 Sep 2010 @ 8:13 PM

  302. Ray, Wally’s information on rice yields most likely comes from Watts Up With That, which is subsequently cross-posted on Climate Skeptic.

    He will most likely defer to the blogsites’ version of recent history.

    Comment by Waldo — 10 Sep 2010 @ 10:03 PM

  303. Wally, did you learn in school what’s in Weart’s book (first link, right sidebar)? That book covers up to about the 1980s, so that much history ought to have been in the textbooks in 1992 or so — but I’d be surprised if it was.

    Did you take biology? Understand rates of change?

    Did you read the ocean pH link above?

    Have you read the phenology material in older topics here at RC?

    What does “catastrophic” mean to you? It might mean many things:
    – something so fast it happens in your own lifetime
    – something that affects your own wealth
    – something that changes easily in one direction but isn’t easily reversed
    – something that lasts centuries or millennia
    – rates of change faster than anything but an asteroid impact has caused

    Or something else. What’s it mean to you?

    Comment by Hank Roberts — 10 Sep 2010 @ 10:10 PM

  304. Somewhat OT

    I’ve started a project that I’ve called GoGCM. I wanted to see if anyone would care to comment on the idea and/or how to go about it.

    http://gogcm.blogspot.com/

    Thanks for any feedback.

    Comment by Arrow — 10 Sep 2010 @ 11:42 PM

  305. Wally:

    You’re creating strawmen. Most of us in the skeptic camp do agree with AGW; we disagree with catastrophic AGW.

    Ahhh, Wally outs himself as being just another garden-variety cut-and-paste denialist (well, he probably did earlier, but I couldn’t be bothered working through most of his bullshit).

    There’s no more massive strawman than the denialist invention … “AGW” vs. “CAGW”.

    Wally – please tell us in your own words what you believe to be the difference between the two. Please use numbers, and show your work.

    Comment by dhogaza — 10 Sep 2010 @ 11:46 PM

  306. FYI: from Climate Skeptic, Russ R’s latest equations verbatim:

    “RE: Hansen (1988) – Autocorrelation issue.

    Wally et al,

    I spent a few hours last night reading up on the various approaches for identifying and correcting autocorrelation in time series data.

    Since I don’t have access to professional statistics software with GLS tools, I’m limited to excel-based OLS methods. I’ve settled on the AutoRegressive (AR) approach, which basically runs a standard OLS regression twice – first regressing the data against 1-period lagged observations to identify and strip out auto-correlation, and second to regress the residuals against the date to identify trend and significance.

    Here’s a summary of the preliminary results:

    Hansen’s temperature projections (1988-2010)
    Scenario: Scenario_A Scenario_B Scenario_C
    Slope: β = 0.029844862 0.027396245 0.019274704
    (Source: http://www.realclimate.org/data/scen_ABC_temp.data )

    UAH data set (1988 – 2010.58)
    Observations: n = 271
    Slope: β = 0.017105791
    Autocorrelation: ρ = 0.834750724
    (Source: http://woodfortrees.org/data/uah/from:1988/to:2010.58/plot/gistemp/from:1988/to:2010.58 )

    GISTEMP data set (1988 – 2010.58)
    Observations: n = 271
    Slope: β = 0.017105791
    Autocorrelation: ρ = 0.760693177

    (Note the higher autocorrelation coefficient for the UAH data.)

    UAH AR(1) autoregression
    Observations: n = 270
    Intercept: α = 0.022953044
    Lag-1: β = 0.838341028

    GISTEMP AR(1) autoregression
    Observations: n = 270
    Intercept: α = 0.097138485
    Lag-1: β = 0.76108575

    Both data sets were stripped of autocorrelation as modeled above, and the residuals used for the next stage of regression.

    UAH Residuals regression
    Observations 270
    Slope 0.019506406
    Standard Error 0.006566299
    Scenario: Scenario_A Scenario_B Scenario_C
    Ho: Slope ≠ 0.029844862 0.027396245 0.019274704
    TStat = 1.574472273 1.2015656 -0.035286678
    Pvalue (2 tail) 0.115378307 0.229531876 0.971851146

    GISTEMP Residuals regression
    Observations 270
    Slope 0.01920112
    Standard Error 0.004669017
    Scenario Scenario_A Scenario_B Scenario_C
    Ho: Slope ≠ 0.029844862 0.027396245 0.019274704
    TStat = 2.279653399 1.755214024 0.015759938
    Pvalue (2 tail) 0.022628253 0.079222706 0.987425909

    Conclusions: As expected, the T-stats are lower than before. Due to the higher degree of autocorrelation in the UAH data, they only give us 77% & 88% confidence that we can reject the predictions for Scenarios A & B respectively. The GISTEMP data show a significant difference between predictions and observations with 98% and 92% confidence.

    As always, please let me know if you have any concerns with the method, and I’m happy to email the spreadsheet I used to anyone who’d like it.

    If the above work checks out, I’ll repeat with IPCC (1990).”

    [Response: With all due respect, what do you think this means? Forecasts are always 'different' from the actuality for all sorts of reasons - and the further out you go, the more different they will be. Yet forecasts still have skill if they get closer to the true outcome than any other method. For instance, a forecast of 0.2 deg C/decade will be statistically significantly different to a reality of 0.19 deg C/dec given a long enough period. However, this is clearly skillful given the alternative of no change. Please think about what you are trying to calculate. - gavin]

    Comment by Waldo — 10 Sep 2010 @ 11:47 PM
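
    For readers who want to replicate or stress-test this, here is a minimal R sketch of the two-stage procedure described above, run on synthetic data: the series, the AR coefficient, and the projected slope are all placeholders, to be swapped for the real GISTEMP or UAH anomalies and the scenario trends linked in the comment.

      # Stage 1: estimate lag-1 autocorrelation and prewhiten the series.
      # Stage 2: regress the prewhitened series on time and test the trend.
      set.seed(42)
      yr   <- seq(1988, 2010.5, by = 1/12)               # monthly time axis
      anom <- 0.017 * (yr - 1988) +
              as.numeric(arima.sim(list(ar = 0.8), length(yr), sd = 0.1))

      n    <- length(anom)
      lag1 <- lm(anom[-1] ~ anom[-n])                    # x_t on x_{t-1}
      rho  <- unname(coef(lag1)[2])
      pw   <- anom[-1] - rho * anom[-n]                  # prewhitened series

      fit   <- lm(pw ~ yr[-1])
      slope <- unname(coef(fit)[2]) / (1 - rho)          # prewhitening shrinks the
      se    <- coef(summary(fit))[2, 2] / (1 - rho)      # trend by (1 - rho)

      proj   <- 0.0298                                   # placeholder scenario slope
      t_stat <- (slope - proj) / se
      p_val  <- 2 * pt(-abs(t_stat), df = fit$df.residual)
      round(c(slope = slope, t = t_stat, p = p_val), 4)

    One design note: regressing the prewhitened series on time estimates b(1 - rho) rather than the trend b itself, which is why the slope and its standard error are rescaled before the t-test.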

  307. BPL: congrats on getting your paper accepted. This is really “doing it yourself” at a very high level.

    Comment by Rattus Norvegicus — 10 Sep 2010 @ 11:48 PM

  308. What does “catastrophic” mean to you?

    Oh, Hank, c’mon, you know the answer. It means any scientific result hinting that he might need to abandon his political ideology if he cares about the health and welfare of his kids (present or future).

    Comment by dhogaza — 10 Sep 2010 @ 11:48 PM

  309. Wally 289: If we us X dollars to make Y power now, but in the future you need X+Z dollars to make Y power because of cap and trade or what ever legislation, you’ve lost the ability to use Z dollars for other kinds of investments or consumption.

    BPL: Unless by doing so you free up W dollars that used to go to health care, lost productivity, funerals, and property damage from the pollution that is no longer being created. Then you might break even or even do better.

    Comment by Barton Paul Levenson — 11 Sep 2010 @ 5:58 AM

  310. SecularAnimist 295,

    I agree completely. The AGW issue is radicalizing me very fast. I’ve been a liberal Democrat more or less since I quit the Libertarian Party in 1990, but I never really regarded the GOP as the Party of Evil until they legalized torture and embraced the full-scale war on science (and it’s not just AGW, folks, it’s evolution and cosmology and astronomy and geology and HIV-causes-AIDS). And I’m beginning to seriously doubt that the Democrats, well-intentioned as some of them may be, are capable of doing something about the problem or even understanding it.

    The big problem is, most Americans regard themselves as either Democrats or Republicans, usually after generations of family tradition. A third party is very unlikely to be able to win in the time civilization has left. And if one does, it might be the Tea Party.

    Comment by Barton Paul Levenson — 11 Sep 2010 @ 6:03 AM

  311. Brian Dodge 297,

    These are great citations, but could you please list the authors as well?

    Comment by Barton Paul Levenson — 11 Sep 2010 @ 6:08 AM

  312. Wally 300: Most of us in the skeptic camp do agree with AGW; we disagree with catastrophic AGW.

    BPL: You’re still wrong.

    Comment by Barton Paul Levenson — 11 Sep 2010 @ 6:10 AM

  313. Waldo, repeating a lie makes one a liar. And lying to oneself counts. I haven’t yet decided whether Anthony micro-Watts is actually stupid enough to believe what he posts. I am less willing to let Willy plead to the lesser charge of stupidity.

    Comment by Ray Ladbury — 11 Sep 2010 @ 6:46 AM

  314. Global scale climate–crop yield relationships and the impacts of recent warming
    David B Lobell and Christopher B Field stacks.iop.org/ERL/2/014002

    Radically Rethinking Agriculture for the 21st Century
    N. V. Fedoroff, D. S. Battisti, R. N. Beachy, P. J. M. Cooper, D. A. Fischhoff, C. N. Hodges, V. C. Knauf, D. Lobell, B. J. Mazur, D. Molden, M. P. Reynolds, P. C. Ronald, M. W. Rosegrant, P. A. Sanchez, A. Vonshak, and J.-K. Zhu http://www.faculty.ucr.edu/~jkzhu/articles/2010/833.pdf

    Rice yields decline with higher night temperature from global warming
    Shaobing Peng, Jianliang Huang, John E. Sheehy, Rebecca C. Laza, Romeo M. Visperas, Xuhua Zhong, Grace S. Centeno, Gurdev S. Khush, and Kenneth G. Cassman http://www.pnas.org/content/101/27/9971.full

    Nonlinear temperature effects indicate severe damages to U.S. crop yields under climate change
    Wolfram Schlenker and Michael J. Roberts http://www.pnas.org/content/106/37/15594.abstract

    Biomass and toxicity responses of poison ivy (Toxicodendron radicans) to elevated atmospheric CO2
    Jacqueline E. Mohan, Lewis H. Ziska, William H. Schlesinger, Richard B. Thomas, Richard C. Sicher, Kate George, and James S. Clark http://www.pnas.org/content/103/24/9086.full?sid=2f6085af-a211-4277-8af7-13ddd758df9c
    [FWIW, I live about 10 miles from the Duke University FACE site, and have developed a poison ivy allergy in the last ten years. I have a magnificent specimen, a 4-inch-diameter poison ivy vine growing on my property to the top of a ~60-foot red oak.]

    Response of an allergenic species, Ambrosia psilostachya (Asteraceae), to experimental warming and clipping: implications for public health
    Shiqiang Wan, Tong Yuan, Sarah Bowdish, Linda Wallace, Scott D. Russell and Yiqi Luo http://www.amjbot.org/cgi/content/abstract/89/11/1843

    Changes in biomass and root:shoot ratio of field-grown Canada thistle (Cirsium arvense), a noxious, invasive weed, with elevated CO2: implications for control with glyphosate, Weed Science, 52:584–588. 2004
    LH Ziska, S Faulkner, J Lydon http://ddr.nal.usda.gov/dspace/bitstream/10113/10283/1/IND43643718.pdf

    Rising CO2, Climate Change, and Public Health: Exploring the Links to Plant Biology
    Lewis H. Ziska, Paul R. Epstein, and William H. Schlesinger http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2649213/
    [This source for my snarky comment about more morphine (e.g., stronger heroin, not really a good thing: more ODs, more profit for Afghan warlords) is a good overview with links to a lot of other work]

    Comment by Brian Dodge — 11 Sep 2010 @ 9:26 AM

  315. RV 306,

    I haven’t submitted the paper yet. I need to have some climate scientist look it over to make sure I didn’t include some dumb mistake that will get it rejected on the first pass. Tamino is looking at the statistics for me.

    Comment by Barton Paul Levenson — 11 Sep 2010 @ 10:02 AM

  316. 256, Hank Roberts: Wally, ocean pH change is better understood and faster than climate change:
    http://www.nature.com/nature/journal/v425/n6956/abs/425365a.html
    http://www.nature.com/cited/cited.html?doi=10.1038/425365a

    I couldn’t access the article, only the abstract. Turnabout is fair play, I suppose, as people sometimes can’t access the articles in Science that I cite. Anyway, accumulation of anthropogenic CO2 is supposed to lower ocean pH from 8.1 to 7.7 or maybe, in the extreme, 7.3. Is that correct? Study of the biological effects of this “acidification” is in its infancy, and would make a good thread of its own some day.

    Comment by Septic Matthew — 11 Sep 2010 @ 12:14 PM

  317. 261, Hank Roberts: The lectures follow a textbook, “Global Warming, Understanding the Forecast,” written for the course. For information about the textbook, interactive models, and more, visit: http://forecast.uchicago.edu/

    Cool (!) Thanks for the link.

    Comment by Septic Matthew — 11 Sep 2010 @ 12:21 PM

  318. Wally, you have to understand that the current warming epoch happens to coincide with what is likely to be the peak human population on the planet. Even if it were to only decrease agricultural yields, water supplies or healthy living space marginally, the results could be catastrophic. All of the evidence to date suggests that the effects will be largely negative and more than marginal. My suggestion would be to look at climate change as one facet of THE PROBLEM for our generation and the next–achieving a sustainable and still vibrant economy. If we do this, we stand a chance of preserving civilization. If we fail, then the Anthropocene will indeed be a very thin geologic stratum.

    Comment by Ray Ladbury — 11 Sep 2010 @ 12:26 PM

  319. Re: SM (313)

    Here’s an additional Caldeira piece on the subject:

    “Our results indicate that atmospheric release of CO2 will produce changes in ocean chemistry that could affect marine ecosystems significantly, even under future pathways in which most of the remaining fossil fuel CO2 is never released. Thus chemical effects of CO2 on the marine environment may be as great a cause for concern as the radiative effects of CO2 on Earth’s climate.”

    Source available here.

    The Yooper

    Comment by Daniel "The Yooper" Bailey — 11 Sep 2010 @ 1:16 PM

  320. Wally wrote: “Most of us in the skeptic camp do agree with AGW; we disagree with catastrophic AGW.”

    That is one of the forms of denialism. The evidence that unmitigated AGW resulting from anything close to continued business-as-usual use of fossil fuels will have catastrophic consequences is overwhelming. It is particularly overwhelming since those catastrophic consequences are already occurring right before our eyes.

    Wally wrote: “Second, we don’t deny that alternatives exist; we point out that they are currently impractical and/or expensive.”

    The claim that wind and solar are “currently impractical and/or expensive” is false. Solar and wind are the fastest-growing sources of new electricity generation in the world, and both are growing at record-breaking double-digit rates every year.

    According to WorldWatch Institute’s Renewables 2010 Global Status Report:

    “For the second year in a row, in both the United States and Europe, more renewable power capacity was added than conventional power capacity (coal, gas, nuclear). Renewables accounted for 60 percent of newly installed power capacity in Europe in 2009, and nearly 20 percent of annual power production … Globally, nearly 80 GW of renewable capacity was added … Wind power additions reached a record high of 38 GW … Solar PV additions reached a record high of 7 GW … Almost all renewable energy industries experienced manufacturing growth in 2009, despite the continuing global economic crisis … Nearly 11 GW of solar PV was produced, a 50-percent increase over 2008 … Major crystalline module price declines took place, by 50–60 percent by some estimates … Wind power received more than 60 percent of utility-scale renewables investment in 2009 …”

    Wally wrote: “You’re creating strawmen.”

    Far from demonstrating that I am “creating strawmen,” you have given a perfect illustration of exactly what I was talking about.

    You have denied that the problem exists — by denying the seriousness of the problem, ignoring the overwhelming evidence that it is indeed already profoundly serious and that it is rapidly becoming ever more serious.

    You have denied that the solution exists — by disparaging alternative energy sources as “currently impractical and/or expensive”, ignoring the overwhelming evidence that the key alternatives (wind and solar) are already being widely, rapidly and profitably deployed all over the world.

    Your comment is a textbook example of the two tactics of obstructionism that I described.

    Comment by SecularAnimist — 11 Sep 2010 @ 2:18 PM

  321. Another for SM – a dedicated acidification blog

    Comment by flxible — 11 Sep 2010 @ 2:35 PM

  322. Arrow 304,

    I’ve been teaching myself to write radiative-convective models (single-column approximations of the atmosphere plus surface). I’ve written a tutorial on how to do that. Would you like a copy?

    Comment by Barton Paul Levenson — 11 Sep 2010 @ 4:56 PM
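
    For anyone curious what the very simplest version of such a model looks like (this is emphatically not BPL's tutorial, just a toy sketch with illustrative values), here is a gray, purely radiative single column in R: N fully opaque longwave layers over a surface that absorbs all the sunlight, relaxed to equilibrium. Real radiative-convective models add convective adjustment and wavelength-dependent absorption on top of this skeleton.

      # Toy single-column gray radiative model. At equilibrium the surface
      # satisfies T_s^4 = (N + 1) * T_e^4, with T_e the emission temperature.
      sigma <- 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
      S0    <- 240          # absorbed solar flux, W m^-2
      N     <- 2            # number of opaque layers

      Tk <- rep(255, N + 1) # Tk[1] = surface; Tk[2..N+1] = layers, bottom up
      for (step in 1:50000) {
        E <- sigma * Tk^4                        # blackbody emission
        net_sfc <- S0 + E[2] - E[1]              # sun + back-radiation - emission
        net_lay <- numeric(N)
        for (k in 1:N) {
          from_below <- E[k]                     # surface or layer below
          from_above <- if (k < N) E[k + 2] else 0
          net_lay[k] <- from_below + from_above - 2 * E[k + 1]
        }
        Tk <- Tk + 0.002 * c(net_sfc, net_lay)   # crude relaxation step
      }
      round(Tk, 1)  # surface ~335.7 K for N = 2, i.e. 255.1 * 3^(1/4)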

  323. http://www.google.com/search?q=site%3Aipcc.ch+ocean+acidification
    or if you don’t like that source try these:
    http://www.google.com/search?q=ocean+acidification+global+conspiracy+world+government

    Comment by Hank Roberts — 11 Sep 2010 @ 6:23 PM

  324. Ray, congrats on the Phys. Ph.D.; I’m about to finish my own in a field of genetics.

    For your papers:

    The PNAS piece: “Grain yield declined by 10% for each 1°C increase in growing-season minimum temperature in the dry season, whereas the effect of maximum temperature on crop yield was insignificant. This report provides a direct evidence of decreased rice yields from increased nighttime temperature associated with global warming.”

    Correlation, not causation. This paper would have been far more interesting if they had set up several plots of rice crops in greenhouses and exposed them to different temperatures. Experiments, Ray; they should have taught you how useful they are while you were getting that Ph.D. Correlations are nice, and can hint at something being possible, but controlled experiments prove it. There is probably a reason this ended up in PNAS.

    The others are not research pieces, but blogs or news reports. So, I’m still waiting…

    Comment by Wally — 11 Sep 2010 @ 9:27 PM

  325. Gavin,

    “Forecasts are always ‘different’ from the actuality for all sorts of reasons – and the further out you go, the more different they will be. Yet forecasts still have skill if they get closer to the true outcome than any other method. For instance, a forecast of 0.2 deg C/decade will be statistically significantly different to a reality of 0.19 deg C/dec given a long enough period. However, this is clearly skillful given the alternative of no change. Please think about what you are trying to calculate.”

    I won’t speak for Russ, but I think it should be pretty obvious what this means. The scenarios that were rejected at 98% and 92% confidence did not do particularly well at predicting the climate. Like you said, there could be a lot of reasons for this. Well, now that we know they missed by a pretty large margin, we should try to explain why. If it’s simply because the CO2 levels were not accurately predicted, and we could update the model with that information and find a near match, GREAT! I’m all for an honest pursuit of the truth here. But if you do that and you still find that the models are off by a fair bit, well, you might want to start testing some new theories. Maybe the CO2 forcing isn’t as strong, or the feedbacks, or other factors. Whatever. It’s honestly unfortunate you can’t do this more quickly in climate science; I can do it about every week, with a controlled experiment no less, in my field. Not that I’m trying to one-up anything, but you have to recognize the limitations. Even if these models do roughly work with these kinds of “hindcasting” tests, you then run into the issue that you can tune the model to do anything you want it to. Just fiddle with the set of parameters slightly, and voila! But will that help you going forward? Don’t know; need to wait again.

    [Response: You are very confused. Given enough data, 0.19 is different from 0.20 with 100% significance. Does it mean that a forecast of 0.20 was not useful in anticipating 0.19? The size of the significance is irrelevant. Skill is what you need to assess. This is measured by comparing the RMS error of the forecast to the RMS error of what you would have anticipated without the forecast (S = 1 - RMS(forecast)/RMS(naive)). So given a naive forecast of 'no change' (which is what was being put forward by others at the time), the skill of the Hansen forecast is easy to calculate and very positive (Hargreaves, 2010). And if you think I can 'get the model to do whatever I want it to by playing with a few parameters', you are even more confused. - gavin]

    Comment by Wally — 11 Sep 2010 @ 9:37 PM
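
    The skill score gavin defines is straightforward to compute once the three series are lined up. A minimal R sketch, with invented numbers standing in for the observations, the forecast, and the naive no-change baseline:

      # Skill S = 1 - RMS(forecast error) / RMS(naive error); S > 0 means the
      # forecast beats the naive baseline. All series here are placeholders.
      obs      <- 0.019 * (0:21)       # "observed" anomalies over 22 years
      forecast <- 0.020 * (0:21)       # model forecast of the same period
      naive    <- rep(0, 22)           # "no change" baseline

      rms <- function(e) sqrt(mean(e^2))
      1 - rms(forecast - obs) / rms(naive - obs)   # close to 1: highly skillful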

  326. Barton,

    “Unless by doing so you free up W dollars that used to go to health care, lost productivity, funerals, and property damage from the pollution that is no longer being created. Then you might break even or even do better.”

    That’s quite an IF there that you avoided.

    Comment by Wally — 11 Sep 2010 @ 9:39 PM

  327. BPL 304,

    I would love any help you can give. If it’s only me that will be doing this I don’t think it will get very far. I’d probably try to get the tutorial on the site, although if you wanted it as a post I’m sure we can arrange something. I have a few other posts to get out of the way first in terms of some of the basics for how the model will be set up, but I could definitely see the radiative-convective models being high on the priority list.

    If you want to help with the project, I would certainly welcome anyone who is willing. Email me if you think you’d be interested or just to send me the tutorial. Either way, I appreciate the help.

    Comment by Arrow — 11 Sep 2010 @ 9:58 PM

  328. Wally, I’m surprised that even though you say you are about to get a degree in “genetics,” you seem utterly unaware of the difficulty of conducting controlled experiments in fields such as agriculture and agronomy. It would appear that you need to expand your reading a bit.

    Also, you might want to attach wheels to the goalposts–it will make them easier for you to move rapidly. I note that you’ve gone from “debunked” to “requiring additional confirmation”. This is, of course, opposed to your position–which has zero evidence.

    Evidence, Wally, it’s what’s for dinner when you’re a scientist. I hope you learn that someday.

    After that, you might want to study statistics and hypothesis testing. Here’s a start–you cannot compare a point estimate of a model to an experimental result. Google confidence intervals.

    Comment by Ray Ladbury — 12 Sep 2010 @ 9:45 AM
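
    A small R illustration of that last point, on invented data (a real test would also account for autocorrelation, which widens the interval): the comparison that matters is between the model's trend and the confidence interval of the fitted slope, not the point estimate.

      # Does a projected trend fall inside the 95% CI of an observed trend?
      set.seed(1)
      yr   <- seq(1988, 2010, by = 1/12)
      anom <- 0.017 * (yr - 1988) + rnorm(length(yr), sd = 0.15)  # synthetic
      fit  <- lm(anom ~ yr)
      confint(fit, "yr", level = 0.95)  # compare a model trend to this interval,
                                        # not to coef(fit)["yr"] alone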

  329. Wally, in what field of genetics are you about to get a Ph.D., if you care to say? If you have been reading the journals, you should be more, not less, aware than most of the rate of change problems.
    http://eco.confex.com/eco/2010/techprogram/P22682.HTM
    I realize an almost-Ph.D. is very focused on one particular subject, but I suspect that if you ask around in your own department you’ll find a lot of information about climate change.

    Comment by Hank Roberts — 12 Sep 2010 @ 2:06 PM

  330. Is there anything more tiresome than willfully ignorant people who refuse to educate themselves, cavalierly dismiss the evidence presented to them, and cling obstinately to the sophistry and disinformation that they have embraced, and then post comments on blogs proclaiming that they are “still waiting” for others to devote their own time and energy to convince them of facts that are fully understood and accepted by the entire world’s scientific community, but which they are ideologically driven to reject no matter what?

    Comment by SecularAnimist — 12 Sep 2010 @ 4:18 PM

  331. 289 Wally babbled: “”273 Wally sez: “Do you think we went straight from E=mc^2 to Hiroshima?”

    John: "What does the one have to do with the other? You might want to learn some physics and history of physics."

    First, you're talking to someone who holds a BS in physics, among other things. Second, the idiocy of your comment is self-evident, so I'll let it speak for itself. Good laugh, though, John. Next time, think before you cast an insult; you might just make yourself look like a fool again.

    Nonsense isn’t any less nonsensical just because it is part of popular culture.

    Comment by John E Pearson — 12 Sep 2010 @ 4:36 PM

  332. Gavin, that was of very little help. I assure you I am not confused; I’ve been using and testing models in contagious disease and genetic networks for years. I fully understand the concept of confidence intervals, and how confidence goes up through increased sampling or decreased variability. I also fully understand that creating a model that is better than anything else is certainly of value, even if it is by no means perfect (this would be every genetic network model ever created by anyone). All these issues, however, do not mean you can escape poor statistical tests of your model by just hand-waving away the very large differences found. What you’re attempting to tell me is that the ~20 years of climate data we’re using to test the model is basically a perfect set of data that can distinguish between extremely small differences in trend. This is not the case, as you can tell from the results of the test posted by Russ originally. Just look at the confidence differences between the 3 slopes. Those are hardly staggering levels of confidence, given a nearly 2x difference in yearly change between the observed .0171 and Scenario A’s .0298. And the one that was about 1.6x the observed slope fell within the 95% CI, though not by much. You can’t have it both ways, Gavin.

    “And if you think I can ‘get the model to do whatever I want it to by playing with a few parameters’, you are even more confused.”

    Maybe it is you that is confused. Take your random model using a fairly small set of differential equations, say 5 or 6 equations, and let’s assume each equation has 3 parameters. Now try to fit it to a variety of curve shapes by changing those 15 or so parameters. You might be surprised what you can get. Now, I’ve never actually seen the code for Hansen’s model, but I’m guessing it’s a great deal bigger than that. Personally, I’ve made models with a hundred equations and maybe 500 parameters. Those models can pretty much do whatever I want them to. The trick is getting them to match the experiment.

    [Response: Why do you think that GCMs are like random low-order differential equations? They are nothing like that. The vast majority of the code is tied very strongly to very well known physics (conservation of energy, radiative transfer, equations of motion etc.) which can't be changed in any significant aspect at all. The number of variable parameters that can be used for the tuning exercises you seem to want to do is tiny (a handful or so), and even they can't be changed radically because you'll lose radiative balance, or reasonable climatology. I strongly suggest you actually look at some GCM code, and read a couple of papers on the subject - GCMs are far more 'stiff' than you imagine. - gavin]

    Comment by Wally — 12 Sep 2010 @ 5:57 PM

  333. Ray,

    I’m not at all saying experiments would be easy, but they are quite possible. Particularly for testing rice yields against temperature.

    “Also, you might want to attach wheels to the goalposts–it will make them easier for you to move rapidly. I note that you’ve gone from “debunked” to “requiring additional confirmation”. This is, of course, opposed to your position–which has zero evidence.”

    I did not move anything. I asked you to prove your case. You merely hinted at it being possible. It is unfortunate you fail to understand the difference. Providing some evidence for something is one thing; proving it to be true is another. A lesson in science I suppose you missed?

    Comment by Wally — 12 Sep 2010 @ 6:04 PM

  334. Hank,

    “I realize an almost-Ph.D. is very focused on one particular subject, but I suspect that if you ask around in your own department you’ll find a lot of information about climate change.”

    Actually, I fairly regularly go to talks from faculty and students working on various bio solutions to our CO2 problems, and a few work on those ag issues we’re talking about as well. It is a fairly big part of our department.

    Comment by Wally — 12 Sep 2010 @ 6:11 PM

  335. Wally, I am becoming less and less likely to believe that you are a scientist. Yes, I presented evidence. Evidence is what science is about. It is not about proof.

    Now the evidence I presented is more than sufficient to establish decreased agricultural yield, increased drought and severe weather, etc. as credible risks. The appropriate next step would be to find an upper bound on the risk posed by these threats. The problem here is that there is a non-negligible probability of warming over 5 degrees C, which would result in catastrophic consequences. This means that the only appropriate action is risk avoidance. All of this, Wally, is based on the available evidence, none of which you have even challenged in any serious manner.

    You, on the other hand have presented nothing but your own incredulity based on your own ignorance.

    Comment by Ray Ladbury — 12 Sep 2010 @ 7:16 PM

  336. “This paper would have been far more interesting if they had set up several plots of rice crops in greenhouses and exposed them to different temperatures. Experiments, Ray; they should have taught you how useful they are while you were getting that Ph.D. Correlations are nice, and can hint at something being possible, but controlled experiments prove it.” Wally — 11 September 2010 @ 9:27 PM
    ” I would prefer that this article refered to actual real experiments. … strong benefits from increased CO2 with much lower temperature dependence. …”
    wattsupwiththat.com/…/rice-yields-co2-and-temperature-you-write-the-article/

    http://books.irri.org/3540589066_content.pdf International Rice Research Institute. 1976. Climate and Rice. Los Baños, Philippines.
    “Controlled-environment facilities cannot copy climates in which plants are to be selected or tested, nor are they meant or designed to do so. They cannot reproduce climates, since climate embraces an hour-by-hour, day-by-day, and year-by-year variation. Their function is to isolate particular and critical features of the environment, so that their effects can be examined and subsequently reproduced at will.”
    “Early experiments in the Pasadena phytotron led Went to suggest that many plants are favored in their growth and development by night temperatures lower than the daytime optimum,” Global warming is reducing this diurnal variation – see below.
    “At high day and night temperatures (35 deg C day and 30 deg C night), sterility increases as a result of smaller pollens and non-dehiscence of the anthers (Sato et al., 1973).”
    “Top growth of rice plants after transplanting is, in general, linearly accelerated by raising average temperature from approximately 18°C to 33°C (Ueki, 1966; Place et al., 1971; Sato, 1972a; Chamura and Honma, 1973; Osada, Takahashi, Dhammanuvong, Sasiprapa, and Guntharatom, 1973; Yoshida, 1973). Above and below this range, the growth notably decreases.”
    “Sasaki (1927) reported that the elongation of rice leaves increased with rising temperature from 17 to 31 deg C; thence it tended to decrease and practically ceased at 45 deg C.”
    “At [0] stage, temperature-ripening grade curves have two characteristics. One is higher optimum temperature of about 26°C, the other is the rapid decline of ripening grade at high temperatures above optimum.”
    “It is also observed that the temperature curve of ripening grade at [ + 20] stage, as compared with that at [-20] stage, shows slow decline in the low temperature range, but rapid decline in the higher temperature range.”
    “Temperature of water – Under laboratory conditions, Kondo and Okamura (1932) reported that elongation is greatest at 25 deg to 30 deg C. At 35 deg to 40 deg C, it is retarded and the plant dies if submergence is prolonged.”
    Science already had shown by 1976 that temperatures above the optimum caused various adverse effects, and that the rate of onset versus temperature was faster than for temperature declines below optimum.

    http://books.irri.org/3540589066_content.pdf ‘Climate Change and Rice’, S. Peng K.T. Ingram H.-U. Neue L.H. Ziska (Eds.)
    “There is a large body of literature on the individual effects of CO2 and temperature on rice physiology. In summary, CO2 concentration is directly related to biomass production (Baker et al. 1988), crop development rate (Baker et al. 1988), leaf-level water-use efficiency (Allen et al. 1988), and grain yield (Yoshida 1976; Cure 1985). For tropical areas, increased temperature leads to faster crop development (Nishiyama 1976), higher respiration rates (Munakata 1976), spikelet sterility (Yoshida et al. 1981; Mackill et al. 1982), and reduced grain yield (Imai et al. 1983).”
    “Though there are few data on the interactive effects of elevated CO2 and temperature, speculated interactions based on their individual physiological effects are mostly negative. For example, increased temperature hastens crop development, thereby shortening the time from planting until maturity and reducing the total time for photoassimilation and yield development. Elevated CO2 on the other hand increases the rate of CO2 uptake thereby offsetting, at least in part, the negative effect of increased temperature. Unfortunately, elevated CO2 also hastens crop development (Baker et al. 1988) and seems likely to exacerbate the negative effects of increased temperature on total duration of crop growth. Preliminary experimental data support this view and indicate that increased temperatures cause large reductions in rice yield, which are not compensated for by increases in CO2 (US DOE 1989). Extrapolation of these data to real-world field conditions suggest the possibility that tropical rice grain production of current cultivars could decline about 7 or 8% for each 1 deg C rise in temperature, seriously affecting world food supply, (US DOE 1989).” [prediction from models based on real data, field and controlled environment studies, in 1989.]
    “Although these results suggest dramatic effects of climate change on rice production, they are based on only one experiment under highly controlled conditions for one rice cultivar . Additional cultivars must be studied under actual tropical conditions with variable temperatures to verify the above data and to verify whether they can be extrapolated to real-world rice production.”

    http://www.pnas.org/content/101/27/9971.full
    “Here we report that annual mean maximum and minimum temperatures have increased by 0.35°C and 1.13°C, respectively, for the period 1979–2003 and a close linkage between rice grain yield and mean minimum temperature during the dry cropping season (January to April). Grain yield declined by 10% for each 1°C increase in growing-season minimum temperature in the dry season,”
    “Relationships between grain yield and temperature or radiation were evaluated by using yield data from field experiments conducted under irrigated conditions with optimal management at the IRRI Farm from 1992 to 2003.” – confirming the 1989 predictions, by studying rice yields, growing under actual real world tropical conditions with varying temperatures. Except that the observations were worse than the predictions.
    Irrigation and fertilizer were maintained at constant levels, and “Mean radiation also increased during the same period …” so light wasn’t a factor.
    “The increase in minimum temperature was 3.2 times greater than the increase in maximum temperature, which is consistent with the observation that minimum temperature has increased approximately three times as much as the corresponding maximum temperature from 1951 to 1990 over much of the Earth’s surface…” – see reference above to “early experiments” by Went at the Pasadena phytotron.

    Repeating a body of experiments already performed by a host of plant physiologists over the last hundred years isn’t necessary, let alone interesting.

    What do you think, according to Liebig’s law, caused the decline in rice production? Cosmic rays? It wasn’t the Lindzen Iris closing – radiation increased. Natural cyclic variations of unknown origin?

    Comment by Brian Dodge — 12 Sep 2010 @ 9:08 PM

  337. Repeating a body of experiments already performed by a host of plant physiologists over the last hundred years isn’t necessary, let alone interesting.

    Not to mention the body of evidence from the real-world experience of the folks growing Wally’s food since he was in diapers, but like many young grad-school hotshots he assumes any thought or objection he has is entirely original and incisive, so investigating what’s been done in the real world over the last hundred years isn’t necessary.

    Enough with the citizen auditors’ misunderstandings, please! This is not the “do-it-yourself climate science” that’s worth noting.

    Comment by flxible — 12 Sep 2010 @ 11:43 PM

  338. Wally:
    > bio solutions to our CO2 problems

    What _are_ your CO2 problems then? What do you think are problems?

    Comment by Hank Roberts — 12 Sep 2010 @ 11:55 PM

  339. ” they can’t be changed radically because you’ll lose radiative balance, or reasonable climatology…GCMs are far more ‘stiff’ than you imagine. – gavin”

    Sorry Gavin, but how is that compatible with the large uncertainty in climate sensitivity? If they were that stiff, they should all give the same result! You just seem to forget (I can’t imagine that you don’t know it!) that:
    1) average temperature is not uniquely linked to enthalpy content

    2) there is ample room for the temporary storage of energy in the oceans, to drive limit cycles on century, and probably millennial, scales

    3) GCM are not accurate enough to describe non linear feedbacks like cloud covering – hence their very poor predictive power on ENSO cycles for instance – which could also easily change the energy balance – just modifying the albedo by 1% is already a large change.
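    For scale, a back-of-envelope check of that last point (the 340 W/m² mean insolation and 0.30 planetary albedo are standard round numbers; “1%” is computed both ways since the comment doesn’t specify absolute or relative):

    # How big a forcing is a "1%" albedo change? Standard round numbers only.
    insolation = 340.0  # global-mean top-of-atmosphere insolation, W/m^2
    albedo = 0.30       # planetary albedo

    for label, new_albedo in (("absolute +0.01", albedo + 0.01),
                              ("relative +1%", albedo * 1.01)):
        # change in absorbed solar flux
        forcing = -(new_albedo - albedo) * insolation
        print(f"{label}: {forcing:+.2f} W/m^2 (2xCO2 is about +3.7 W/m^2)")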

    Ray “The problem here is that there is a non-negligible probability of warming over 5 degrees C, which would result in catastrophic consequences. ”
    The problem here is that there is no sensible way of quantifying properly this “non-negligible probability”, just because, as I argued already, there is no theory of probabilities applied to an unknown theory.

    Comment by Gilles — 13 Sep 2010 @ 5:18 AM

  340. Arrow,

    I wasn’t thinking of posting the tutorial, but just of giving you a document with some of the basics in it so you could develop your own posts.

    I’ll need your email address. Mine is readermail1960@gmail.com.

    Comment by Barton Paul Levenson — 13 Sep 2010 @ 5:52 AM

  341. Wally: What you’re attempting to tell me is that the ~20 years of climate data we’re using to test the model is basically a perfect set of data that can distinguish between extremely small differences in trend. This is not the case.

    BPL: If the differences in trend are “extremely small,” it means the model was close to the reality, doesn’t it?

    Duh.
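    For what it’s worth, here is a minimal Monte Carlo sketch of how much uncertainty an ordinary least-squares trend from ~20 years of annual data carries. The 0.1°C white-noise scatter is an illustrative assumption (real residuals are autocorrelated, which widens the spread further):

    # How precisely does a 20-year annual record pin down a trend?
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(20.0)
    true_trend = 0.02  # degC/yr, illustrative
    noise_sd = 0.10    # degC interannual scatter, illustrative

    # Fit a line to many synthetic 20-year records and collect the slopes.
    slopes = [np.polyfit(years, true_trend * years + rng.normal(0.0, noise_sd, 20), 1)[0]
              for _ in range(10_000)]
    print(f"recovered trend: {np.mean(slopes):.4f} +/- {np.std(slopes):.4f} degC/yr")

    With these numbers the 1-sigma spread is about ±0.004°C/yr on a 0.02°C/yr trend, i.e. genuinely small differences between modelled and observed trends are hard to distinguish over so short a window, while large ones are not.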

    Comment by Barton Paul Levenson — 13 Sep 2010 @ 5:55 AM

  342. Wally

    “So you’ve now combined the results from climate models with dubious validity with economic models? I see expanding error bars….”

    Indeed. But not expanding in a direction that’s going to help /you/. Those ‘expanding error bars’ make it increasingly hard to rule out something catastrophic occurring.

    “Silk, its based on increased power costs, which are pretty straight forward to evaluate.”

    I’m calling you out here. Show me some /evidence/ that cap and trade in the US would cause a 1% contraction in the US economy.

    Indeed, your argument seems to be that growth would be down 1% EVERY year. So back that up, too.

    Comment by Silk — 13 Sep 2010 @ 9:15 AM

  343. 321, flxible, thank you.

    Comment by Septic Matthew — 13 Sep 2010 @ 11:19 AM

  344. 320, Secular Animist: The claim that wind and solar are “not currently practical and/or expensive” is false. Solar and wind are the fastest growing sources of new electricity generation in the world, and both are growing at record-breaking double-digit rates every year.

    Add to that: Somerville et al in the 13 August 2010 Science, page 790, report that the global production of biofuels equals the consumption by Germany of all liquid fuels.

    All energy supplies receive government subsidies, and the subsidies for wind and solar are higher than the subsidies for some others. Subsidies have to be paid by reductions in GDP elsewhere, but with costs of renewables declining it is likely that they will soon have smaller subsidies than most other sources.

    Comment by Septic Matthew — 13 Sep 2010 @ 11:29 AM

  345. Gilles @339 — ENSO is far from cyclic, being best described as a quasi-periodic oscillation. Cloud cover has nothing to do with it.

    Septic Matthew @344 — Fossil fuels receive much higher subsidies, at least in the USA, than do wind/solar. That is all actually OT here on RealClimate. Take it to, for example,
    http://climateprogress.org/

    Comment by David B. Benson — 13 Sep 2010 @ 1:10 PM

  346. Gavin,

    I’ll happily go through an exercise in getting one of these models to do damn near anything. Would you like to suggest a paper we could start with?

    [Response: You need to start with the code, not a paper. That comes afterwards. Try EdGCM, NCAR CSM or GISS ModelE. - gavin]

    Comment by Wally — 13 Sep 2010 @ 2:44 PM

  347. Brian, that’s a nice list of quotes. But I never claimed this was easy. Nor do the experimental conditions have to perfectly reproduce the climate. I can take an embryo out of its egg/mother and explant it into growth media. That media is good enough to allow the embryo to grow for at least several hours, during which I can perform a CONTROLLED experiment. The conditions do not have to be perfect; you just need to be able to get them to work reasonably well, so that you can create a controlled environment for your experiment. If it takes $1B to do such an experiment in rice crops, that’s not really my problem. And it certainly doesn’t allow us to place increased confidence in simple observation-based correlations.

    Comment by Wally — 13 Sep 2010 @ 2:49 PM

  348. Gilles,

    Gavin: “they can’t be changed radically because you’ll lose radiative balance, or reasonable climatology…GCMs are far more ’stiff’ than you imagine.”

    Gilles: “Sorry Gavin, but how is it compatible with the large uncertainty of the climate sensitivity? if they were that stiff, they should all give the same result ! you just seem to forget (I can’t imagine that you don’t know it !) that”

    Exactly. If you can push one of this handful of parameters that you cannot arrive at experimentally such that you attain some sort of “unreasonable” climate, it should become pretty obvious that those unknown parameters can have drastic effects on the model outputs.

    It’s a little comical that Gavin can in one breath tell us that the models are based on well-known physics that gives them a great deal of certainty, then in the next tell us that changing a few of the unknown parameters gives us an “unreasonable” climate. All while being condescending and ridiculing those he presumably would like to convince.

    Here’s a hint Gavin, create an argument that makes sense and don’t attempt to tell me I’m “confused” or something similar. If I’m missing something, explain it rationally, logically and respectively, otherwise you are not going to get your point across and only further entrench me in my position, even if it is, in part, based on ignorance. Remember, ignorance is no crime; being willfully ignorant, however, can be. But since we’re here talking, you should assume there is at least some chance a respectful and logical argument could swing my opinion on one matter or another. But if you and the others around here wish to assume that is not true, well, that’s what you’ll get. A bit of a self-fulfilling prophecy that would be.

    [Response: Ignorance is indeed curable. The cure starts with a dose of background reading on model development (Schmidt et al., 2006 is a reasonable example of the genre). There are also FAQs on climate models that might be helpful. Both should help you formulate what it is that you actually want to know. - gavin]

    Comment by Wally — 13 Sep 2010 @ 3:07 PM

  349. 345, David B. Benson: That is all actually OT here on RealClimate. Take it to, for example,

    It’s just a slight detail added to Secular Animist’s assertion about the viability of alternatives. Here’s another report on biofuels, not exactly new:

    That is all actually OT here on RealClimate. Take it to, for example,

    It seems (whine, whine, whine — oh woe is me) that some topics don’t become off-topic until I comment on them, even when I write in support of Alternative Energy.

    Comment by Septic Matthew — 13 Sep 2010 @ 7:27 PM

  350. BPL @ 340 – I’m trying to find people who will want to contribute to this project, which is why I was suggesting posting it or anything else you think you can contribute. While I plan on being active on this project, I know that it won’t work without other people getting involved. My email is bloggingarrow@gmail.com. I’ll email you soon in order to get the tutorial. I’ll probably ask if you want to contribute as well, but I’ll understand if you have other demands on your time. If anyone else would like to help, however, check out the project site or send me an email.

    PS I’ve been looking at the GISS ModelE code and I have some questions (minor technical clarifications, like what does this variable mean, what is the purpose of such and such subroutine). Can one of the RC moderators let me know who I should email my questions to? Should it be Gavin or is there someone more appropriate (I don’t want to bore him with all my mundane questions)? Thanks for any help.

    [Response: email me. - gavin]

    Comment by Arrow — 13 Sep 2010 @ 8:09 PM

  351. Gavin,

    “You need to start with the code, not a paper. That comes afterwards. Try EdGCM, NCAR CSM or GISS ModelE. – gavin]”

    No Gavin, you start with the paper, because the raw code is meaningless without an explanation of which equations are which, and what’s coming from where. I’m not going to spend a week trying to piece a fairly large model together, finding all the background for each equation from scratch. Luckily the GISS model does have a paper to go with it, which happened to be the Schmidt paper you gave me. It’s far too late to do much with it now, but give it a day or so.

    And those FAQs from your own site…not useful here.

    But hey, you’ve stopped with the appeals to ridicule and the condescending explanations of simplistic issues that don’t answer the criticism, so maybe we can get somewhere.

    Comment by Wally — 13 Sep 2010 @ 10:31 PM

  352. It is interesting that Wally and Gilles both emphasize the uncertainty in climate sensitivity, but without any indication of which way those uncertainties are skewed. It is pretty much impossible to understand Earth’s climate if you presume sensitivity is less than 2 degrees per doubling. However, even 2 degrees per doubling would give us a temperature rise of about 4 degrees by the end of the century–well into the red zone. It is much easier to make the models work with a sensitivity of 4 degrees per doubling–which puts us into the range of mass extinction by century’s end.
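    For readers who want the arithmetic behind those numbers, here is a sketch using the standard approximation that equilibrium warming scales with the log of the CO2 ratio; the ~1120 ppm endpoint (two doublings from a 280 ppm preindustrial baseline) is an illustrative high-emissions assumption, not a prediction:

    # dT = S * log2(C/C0): equilibrium warming for climate sensitivity S.
    import math

    def equilibrium_warming(sensitivity, c_final_ppm, c_initial_ppm=280.0):
        """Warming in degC for sensitivity S (degC per CO2 doubling)."""
        return sensitivity * math.log2(c_final_ppm / c_initial_ppm)

    for s in (2.0, 3.0, 4.0):
        print(f"S = {s} degC/doubling -> {equilibrium_warming(s, 1120.0):.1f} degC")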

    Guys, I’ll keep saying it until you get it. Uncertainty is not your friend.

    Comment by Ray Ladbury — 14 Sep 2010 @ 7:09 AM

  353. Wally: The irony! You first lecture Gavin on what he can do with GCMs, and when he points out you don’t know how they work (which you admit) you accuse him of being condescending.
    You say that logical argument could swing your opinion, but it actually looks like you have already reached the conclusion you want.

    Comment by Rocco — 14 Sep 2010 @ 1:53 PM

  354. Rocco,

    You apparently did not understand my post. I may not be able to write out any particular model from scratch or make perfect sense of the raw code without a paper to explain it, but that does not mean I do not appreciate what these models are comprised of. One issue is the raw code; the other is basic theory and rationale. They are two different things.

    Comment by Wally — 14 Sep 2010 @ 3:35 PM

  355. Wally 13 September 2010 at 3:07 PM

    Here’s a hint Gavin, create an argument that makes sense and don’t attempt to tell me I’m “confused” or something similar. If I’m missing something, explain it rationally, logically and respectively, otherwise you are not going to get your point across and only further entrench me in my position,

    In an earlier comment you said something similar:

    Good lord, do you really need to resort to hyperbole and appeals to ridicule to make your argument? First, it only makes me less likely to seriously consider anything you say.

    You’re not here to do anyone any favours except yourself. Whether you let your emotions interfere with your judgment is your choice.

    Comment by Anne van der Bom — 14 Sep 2010 @ 5:45 PM

  356. Wally: But you have yet to demonstrate that you really “appreciate what these models are comprised of”. Perhaps it would be wise to do so before you try to convince climate modellers that you understand GCMs better than they do.

    Comment by Rocco — 14 Sep 2010 @ 5:48 PM

  357. Wally says: ”I may not be able to write out any particular model from scratch or make perfect sense of the raw code without a paper to explain it, but that does not mean I do not appreciate what these models are comprised of.”

    Do you have any idea how often I had students in introductory physics classes try to tell me, “I may not understand the math, but I just know relativity can’t be right!”

    Comment by Ray Ladbury — 14 Sep 2010 @ 7:11 PM

  358. Anne,

    My emotions are not the problem. The problem is getting useful criticism that can advance my own knowledge, or as you want to put it, “do me favors,” from a generally abusive set of posts.

    There is not much to gain if someone more or less calls you stupid and tells you to read a stats book. If there is something specific that causes someone to say such a thing, that can be useful, however. Unfortunately, it’s difficult for most people to do both at the same time, but I’ve tried to pull what I can out of this discussion.

    Comment by Wally — 14 Sep 2010 @ 7:35 PM

  359. “That doesn’t win you any favors.” — Wally
    “as you want to put it ‘do me favors’” — Wally
    “explain it rationally, logically and respectively” — Wally

    People are giving you the same answers others get when they ask the same questions, and you’re asking very frequently asked questions so far, and taking offense at almost everything.

    Maybe finishing the PhD first would be a good idea, before starting to dig into a whole different field?

    Comment by Hank Roberts — 14 Sep 2010 @ 10:08 PM

  360. @359: Hank, the first quote that you attribute to Wally was mine. Pay attention, please. I won’t comment on your insulting tone as I promised not to do it.

    On the debate: of the 9 posts I currently see on this page, only 2 have *something* to do with climate science – 351 and 352. The others are pointless attacks on how someone does not really know anything and has to go take lessons (5 posts), and predictable reactions to that (2 posts). Pathetic, particularly from the side of the attackers.

    On models: of course, the ultimate truth is in the code, but the argument in 351 that there has to be some guiding document behind any good model that would describe its behavior without going into a lot of detail has merit as well. Could anyone please point Wally to such a document for, say, ModelE? I would do that myself, but I am not familiar with that model.

    Comment by Nick Rogers — 15 Sep 2010 @ 2:46 AM

  361. Wally 14 September 2010 at 7:35 PM

    The problem is getting useful criticism

    That requires effort from both sides. Focus is key. Try not to get carried away by economics and etiquette. And even in the field of climate science there is too much to cover in a few exchanges of comments. Focus on those areas where you see most issues (I think you have already identified climate models as one of those areas, so a suggestion might be to try and stick to that subject).

    Gavin saying “you’re confused” is not being offensive, he is being direct. He doesn’t know you and cannot look inside your head. He only has your comments. If those comments show confusion, he’ll tell you that. Directly and without detours. I always find it very helpful in a discussion if people leave me no guesswork about where they stand.

    but I’ve tried to pull what I can out of this discussion.

    Like a famous president once said “Don’t ask what your country….”. Maybe if you forget about what you can get out of this discussion and start looking for what you can contribute, you’ll learn much more from it. For example, instead of demanding evidence from the other posters, bring up some yourself.

    Do you really think you’ve already tried enough? You knew you were entering the lion’s den, don’t tell me you didn’t expect stiff opposition to your viewpoints. Don’t give up so easily.

    Comment by Anne van der Bom — 15 Sep 2010 @ 5:35 AM

  362. Ah, sure enough, ‘favors’ was initially used by Nick Rogers; y’all are both complaining about being referred to statistics and code sources.

    But those are regular answers to frequently asked questions.

    It’s easy to make a blog exchange of text into a Monty Python sketch,* but it takes effort to have a human conversation in text without visual and verbal cues to feeling. Make the effort, and others here will work with you. Yes, there are always going to be people trying to stir things up, and people thin-skinned enough to rise to it; people do make mistakes. Don’t take it personally. Nobody’s sounded angry or upset with you yet, tho’ you’ve heard some impatience. Seriously, look at “Start Here” and Weart’s book, and consider that even easily irritated and cranky people will respond to questions that are asked in the right way, e.g.
    http://www.catb.org/esr/faqs/smart-questions.html
    ______
    * http://www.ibras.dk/montypython/episode29.htm#11

    Comment by Hank Roberts — 15 Sep 2010 @ 9:39 AM

  363. Nick, reread 351. You’re asking people to give Wally what he has now.

    Wally writes “Luckily the GISS model does have a paper to go with it, which happened to be the Schmidt paper you gave me…. FAQs from your own site…not useful here.”

    Luckily? happened to be?

    He’s been given what he asked for, and without having read it yet he is dismissing it as luck and happenstance.

    That’s not respective.

    The Model E page says how to join the mailing list. Done that?
    There are a lot of people doing what you all are asking for help with.
    Ask good questions, get better answers.

    Comment by Hank Roberts — 15 Sep 2010 @ 10:59 AM

  364. http://imgs.xkcd.com/comics/physicists.png

    Comment by Hank Roberts — 15 Sep 2010 @ 12:18 PM

  365. Gavin, you write: “The number of variable parameters that can be used for the tuning exercises you seem to want to do are tiny (a handful or so), and even they can’t be changed radically because you’ll lose radiative balance, or reasonable climatology.”

    I am interested to know your reaction to Jeffrey T. Kiehl, 2007. Twentieth century climate model response and climate sensitivity. GEOPHYSICAL RESEARCH LETTERS, VOL. 34, L22710, doi:10.1029/2007GL031383, 2007

    My reading of Kiehl is that there is a great deal of uncertainty in historical estimates of aerosol forcings and that these uncertainties are one of the primary sources of the uncertainty in the climate sensitivities generated by the climate models. Although I am not familiar with all the ins and outs of creating climate models, it seems to me that Kiehl’s observations at least create an appearance that climate models are “tuned” to historical data via choices of historical aerosol forcing from a plausible range of estimates.
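    A toy version of the compensation Kiehl describes may make that appearance concrete: in a simple energy-balance view the simulated warming goes roughly as sensitivity times net forcing, so a higher sensitivity paired with a more negative aerosol forcing reproduces much the same 20th-century warming. All numbers below are illustrative, not taken from any actual model:

    # Kiehl-style degeneracy: dT ~ (S / F_2x) * (F_ghg + F_aerosol).
    F_2X = 3.7   # W/m^2 per CO2 doubling (standard value)
    F_GHG = 2.6  # illustrative 20th-century greenhouse forcing, W/m^2

    # Different (sensitivity, aerosol forcing) pairs give nearly the same warming.
    for sensitivity, f_aerosol in ((2.0, -0.8), (3.0, -1.4), (4.5, -1.8)):
        warming = (sensitivity / F_2X) * (F_GHG + f_aerosol)
        print(f"S = {sensitivity:.1f} degC, F_aer = {f_aerosol:+.1f} W/m^2 "
              f"-> dT ~ {warming:.2f} degC")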

    [Response: These are two different issues. I was discussing the tuning of the model physics, while you are alluding to the potential for the tuning of the model forcings. I cannot speak for any of the other groups, but we chose our aerosol forcings based on a best guess estimate of aerosol distributions (from an offline aerosol model) and an estimate of the aerosol indirect effect from the literature (-1 W/m2 pre-industrial to present). We did not go back and change this based on our transient temperature response in the 20th C simulations. For AR5, the model has changed significantly, the aerosol distribution as well, and yet in the configuration that requires an imposed aerosol indirect effect we are using the same magnitude as in AR4 (i.e. -1 W/m2 pre-industrial to present). No tuning is being done. We will also have configurations in which the indirect effect is calculated from first principles - this is going to give whatever it gives. And again, we are not going to tune it based on the transient temperature response. - gavin]

    Comment by pauld — 15 Sep 2010 @ 1:42 PM

  366. Gavin’s inline response @ #332 received the response:

    “Here’s a hint Gavin, create an argument that makes sense. . .” That’s two insults in ten words–”hint” is pretty clearly sarcastic, while the assertion by implication that the argument did not make sense would have a pretty high probability of causing offense with most people, IMO.

    And what was Gavin’s comment?

    “Why do you think that GCMs are like random low order differential equations? They are nothing like it. The vast majority of the code is tied very strongly to very well known physics (conservation of energy, radiative transfer, equations of motion etc.) which can’t be changed in any significant aspect at all. The number of variable parameters that can be used for the tuning exercises you seem to want to do are tiny (a handful or so), and even they can’t be changed radically because you’ll lose radiative balance, or reasonable climatology. I strongly suggest you actually look at some GCM code, and read a couple of papers on the subject – GCMs are far more ‘stiff’ than you imagine. – gavin]”

    By contrast, there is to my eye no insult here. The closest thing to it is the initial question “Why do you think. . .” But when I (for example) ask for information, there’s a tacit assumption that whoever I ask actually does know more than I do on that topic. I can hardly fault them for accepting that assumption; rather, I should be grateful for their time. I am certainly not doing them a favor! The remainder is straight declarative sentences, substantive and relevant.

    The final sentence is a recommendation, true enough, but as I said, when you come around looking for information, there’s a tacit assumption. . .

    Comment by Kevin McKinney — 15 Sep 2010 @ 2:10 PM

  367. Cognitive bias isn’t your friend when people are telling you that you have everything wrong. I’m reasonably certain this is the only reason that Wally can’t extract anything from these discussions.

    xkcd is very interesting, today. Painfully apt, too.

    Comment by Didactylos — 15 Sep 2010 @ 2:31 PM

  368. Gavin, can you expand on your response to 365? How exactly do you use the information that the aerosol indirect effect is 1 W/m2 from the preindustrial in a model? I assume it is not just a global perturbation to the radiation budget. Do you add an extra cooling to the radiative transfer calculations when aerosols are present? Does there have to be both aerosols and clouds? I don’t see how you could have a configuration with an ‘imposed’ indirect effect, without essentially building a simple parameterisation.

    Secondly, have you seen the paper: Knutti, Why are climate models reproducing the observed global surface warming so well?, GRL, 2005. Knutti seems to suggest that the trend in temperatures over the 20th century has been used, consciously or not, in model development.

    Finally, why is there so much room within CMIP for modelling groups to choose their own forcings? Shouldn’t at least the aerosol concentration, ozone concentration, etc., as well as any offline/imposed forcing values, be consistent across the board?

    [Response: The details need to be read - this paper (Hansen et al, 2005) has the rationale and actual mechanics of the parameterisation (section 3.3.1). Not sure where the idea came from that this was not a parameterisation though. Knutti's paper is interesting (as is almost everything he writes), but as I said (and I can't speak for any other groups), we do not tune the forcings to get a better match to the transient temperatures. We aren't far off though and so we have no incentive to do so - that might be a factor in some collective emergent behaviour. As for why groups vary in what they use to force the models, it is inevitable given the range of sophistication in modelling groups. I'll give an easy example - many early codes did not have absorption or scattering by aerosols in the radiation code. This is a hard problem, and not necessarily the first thing you would get to when you start out. Yet these groups still wanted to include some accounting of the effects of volcanoes (which are clear in the record). So they applied the aerosol forcing from volcanoes as if it was the exact equivalent of a dimming of the sun. This works ok for some things (global cooling), but doesn't work at all for strat. warming, circulation impacts, ozone feedbacks etc. So along comes a committee of climate modellers who try and get everyone to do the same experiment that involves volcanic forcing. Can they mandate that everyone use the exact same aerosol mass distribution in the stratosphere? No, because that won't work at all for some groups. Do they force everyone to the lowest common denominator? (i.e. a change in TSI?) Again no, because once people have implemented complex physics in the code, they hate turning it off to keep people who didn't make that effort happy. They want to see if it works. So you are stuck - everyone will end up applying the forcings their own way. This is even more of a problem when it comes to the complex 3D forcings - tropospheric aerosols, indirect effects and ozone. Plus you have the even more fundamental issue that the amount of radiative forcing from a physical change in GHGs or aerosols is a function of the model base climate and the radiative transfer code - so even if you forced everyone to do things identically, you still aren't going to get identical impacts. Of course, some experiments like this are useful (2xCO2 + slab ocean for instance) - but they are useful only for comparing models. For comparing to the real world, over-specification of some ideal (but uncertain) forcing restricts the phase space of where the models can be and underestimates the uncertainty. In some cases an unfortunate choice can mean that none of the models can get anywhere close to the real world - and what is the point in that? - gavin]
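    To make concrete what “applying a forcing” means in the simplest possible setting, here is a zero-dimensional energy-balance sketch — nothing like a GCM, and every parameter value below is an illustrative assumption:

    # C dT/dt = F(t) - lam*T, stepped forward with annual time steps.
    import numpy as np

    C_HEAT = 8.0  # effective heat capacity, W yr m^-2 K^-1 (shallow-ocean scale)
    LAM = 1.2     # feedback parameter, W m^-2 K^-1 (~3 degC per CO2 doubling)

    years = np.arange(1900, 2001)
    forcing = 0.02 * (years - 1900.0)  # slow greenhouse-like ramp, W/m^2
    forcing[years == 1963] -= 2.5      # crude one-year volcanic dimming

    temp = 0.0
    for f in forcing:
        temp += (f - LAM * temp) / C_HEAT
    print(f"temperature anomaly in {years[-1]}: {temp:.2f} degC")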

    Comment by Marty Singh — 15 Sep 2010 @ 5:00 PM

  369. Nick,

    I’ve been looking at ModelE in what time I can spare (which hasn’t been much) for the last 2 days, and I’m not exactly satisfied with the explanations for it. The Schmidt paper that supposedly describes it does a pretty good job of explaining the basic rationale from a pretty high level; however, the model code is not explained. We need a list of where the model equations and parameter values are coming from, including references and discussion. I know most model papers have a hard time with this because it is quite a lot of work and requires many, many additional pages that journals often don’t want to bother with. But it makes reproducing these models very difficult to impossible.

    However, to go back to the point of looking at GISS ModelE, it’s obvious from the Schmidt paper that they are estimating, guessing, or “tuning” several parameter values, such as the aerosols that were brought up before, drag coefficients, radiation scattering by clouds, cloud formation, and precipitation.

    Gavin is now attempting a bait and switch in his latest comment to pauld. We were talking about GCMs in their entirety. Earlier he wanted me to believe these things were ‘stiff’, but now he’s retreating from that position and saying that the “physics” is known but the forcings are more malleable. I was speaking to the entirety of these models, including CO2/aerosol forcings, as well as black-body radiation and conduction. Obviously we know some things very well, but others not so much.

    Comment by Wally — 15 Sep 2010 @ 6:59 PM

  370. Off topic, I suppose,but
    “Nanodiamonds Discovered in Greenland Ice Sheet, Contribute to Evidence for Cosmic Impact”:
    http://www.sciencedaily.com/releases/2010/09/100914143626.htm
    Clovis comet?

    Comment by David B. Benson — 15 Sep 2010 @ 7:19 PM

  371. Anne,

    “Gavin saying “you’re confused” is not being offensive, he is being direct. He doesn’t know you and cannot look inside your head. He only has your comments. If those comments show confusion, he’ll tell you that.”

    No, it isn’t direct; it’s ridicule, plain and simple. Since, as you say, he can’t look inside my head, there are several other possibilities beyond me being confused:

    1) Gavin is confused
    2) My post was confusing (notice this is different than ME being “very confused”)
    3) There is just some general misunderstanding through slightly different use of vocabulary or something, which you might not qualify as confusion on anyone’s part

    Gavin does not have the required information to make a broad statement about my general understanding from just a couple of posts (you even state as much). Thus, making such a statement about me personally, instead of about my argument, is an appeal to ridicule. I’m not being sensitive here, I’m stating the facts. Comments such as that one are ad hominems. If you, or Gavin or Hank, don’t want me to point them out, you all had better get together and decide not to throw them out there.

    And of course after this statement Gavin goes on to describe a statistical concept that is extremely remedial, as if somehow he’s “teaching” me, which doesn’t even do anything to advance his argument. He came in assuming I don’t even have a high schooler’s understanding of stats. As Hank was moaning about, that isn’t doing him any favors. Isn’t it at least one of the reasons for this blog to help convince people like me?

    And Kevin, my “hint” was quite serious. Was it disrespectful? Probably, but I generally don’t give a lot of respect to those that don’t show me any.

    Anyway, the general point is that Gavin and I, as well as others on this board, would be better served if we depersonalized this debate. No more “you’re confused”, “you’re wrong”, “you need to go learn some stats”. If you have a point, explain it as it pertains to the arguments only, not to those making them. As we know, such statements are ad hominems regardless of truth. I’m sure I’m ignorant of a great many things, but someone else stating as much in argument is pointless. Let the facts be the facts.

    Comment by Wally — 15 Sep 2010 @ 7:21 PM

  372. 1- Livingston and Penn have popped up again.
    http://www.probeinternational.org/Livingston-penn-2010.pdf

    2- There’s also an October Science Magazine ref which I’ve misplaced

    3- My first real live experience with MSM. From the Financial Post (Energy rag???):

    “…The upshot for scientists and world leaders should be clear, particularly since other scientists in recent years have published analyses that also indicate that global cooling could be on its way. Climate can and does change toward colder periods as well as warmer ones. Over the last 20 years, some $80-billion has been spent on research dominated by the assumption that global temperatures will rise. Virtually no research has investigated the consequences of the very live possibility that temperatures will plummet. Research into global cooling and its implications for the globe is long overdue…”

    Read more: http://opinion.financialpost.com/2010/09/16/lawrence-solomon-chilling-evidence/#ixzz0zjDkaElt

    This is surely OT. Does it belong anywhere on RC?

    Comment by John Peter — 16 Sep 2010 @ 4:05 PM

  373. > Lawrence Solomon

    Shorter Lawrence Solomon — maybe the sun _won’t_ come up tomorrow!
    It’s Possible! We should plan for that happening!!!

    On the less dramatic possibility of another solar minimum, not to worry even if it happens:

    http://www.agu.org/pubs/crossref/2010/2010GL042710.shtml

    Comment by Hank Roberts — 16 Sep 2010 @ 4:14 PM

  374. John Peter @372 — Sure you didn’t find it on The Onion?

    Comment by David B. Benson — 16 Sep 2010 @ 5:34 PM

  375. Gavin: Earlier I received the following response from you to the question I asked at about #365:
    “[Response: These are two different issues. I was discussing the tuning of the model physics, while you are alluding to the potential for the tuning of the model forcings. I cannot speak for any of the other groups, but we chose our aerosol forcings based on a best guess estimate of aerosol distributions (from an offline aerosol model) and an estimate of the aerosol indirect effect from the literature (-1 W/m2 pre-industrial to present). We did not go back and change this based on our transient temperature response in the 20th C simulations. For AR5, the model has changed significantly, the aerosol distribution as well, and yet in the configuration that requires an imposed aerosol indirect effect we are using the same magnitude as in AR4 (i.e. -1 W/m2 pre-industrial to present). No tuning is being done. We will also have configurations in which the indirect effect is calculated from first principles - this is going to give whatever it gives. And again, we are not going to tune it based on the transient temperature response. - gavin]”

    I appreciate your response. While it helped clarify some issues in my mind, it raises a host of other questions that would be off-topic on this thread.

    I think that it would be very helpful for RealClimate to post a discussion of Jeffrey T. Kiehl, Twentieth Century Climate Model Response and Climate Sensitivity, 34 Geo. Res. Lett. L22710 (2007), http://www.atmos.washington.edu/twiki/pub/Main/ClimateModelingClass/kiehl_2007GL031383.pdf , along with the related article, Stephen E. Schwartz, Robert J. Charlson and Henning Rodhe, Quantifying Climate Change – Too Rosy a Picture?, 2 Nature Reports: Climate Change 23 (2007). http://www.ecd.bnl.gov/pubs/BNL-78121-2007-JA.pdf

    The way you describe incorporating estimates of historical aerosol forcing sounds reasonable, but I am surprised that you are not aware of how other modelers are handling this issue. What do you think of Schwartz et al.’s recommendation that “A much more realistic assessment of present understanding would be gained by testing the models over the full range of forcings”? Is there any movement towards implementing this suggestion in the next IPCC, or at least developing a standard set of historical forcings and evaluating models based upon them?

    Comment by Pauld — 16 Sep 2010 @ 5:35 PM

  376. RE 372

    Science Writer added this:
    “…The phenomenon has happened before. Sunspots disappeared almost entirely between 1645 and 1715 during a period called the Maunder Minimum, which coincided with decades of lower-than-normal temperatures in Europe nicknamed the Little Ice Age…”

    Full story at: http://news.sciencemag.org/sciencenow/2010/09/say-goodbye-to-sunspots.html

    Comment by John Peter — 16 Sep 2010 @ 5:38 PM

  377. Gavin,

    Thanks for your detailed reply and reference. I come from the idealised modelling world, so when I see ‘imposed’ I read ‘specified’ ;-).

    Your point is well taken – the inputs used must be appropriate for the level of physics within each model. However, your assertion that allowing more freedom in terms of inputs leads to a wider exploration of the phase space is at odds with Knutti’s conclusion, in which the correlation between aerosol forcing and climate sensitivity narrows the ensemble of 20th century runs.

    Do you think we should adopt his interpretation of the 20th century runs as conditional on observations? If this is the case, we can’t really claim reproduction of the global mean 20th century temperature as any kind of prediction success. While it may be true that formal attribution studies do not rely on the global mean temperature history, the IPCC literature does give a lot of weight to these figures as showing model skill.

    If we don’t adopt this view, we have to explain why the aerosol forcing–climate sensitivity correlation exists. While you, and no doubt other modelling groups, are adamant that you only use the climatology and seasonal cycle for tuning, is there no possibility that the 20th century change enters at some level? Most of these models have been around for a long time in one form or another, and thus the models’ ability to simulate the 20th century climate is known to those choosing aspects of the model to improve.

    I would appreciate hearing your views on this issue.

    [Response: The basic fact is that the 20th C trend does not constrain the total aerosol forcing or the climate sensitivity and the space of allowable combinations that provide basically the same temperature trend is large. Clearly we could create a set of model runs that mapped out the uncertainty in each independently and get a much wider range of 20th C trends (and indeed this has been done in simpler contexts). You can then ask what the projections are for the future given the actual temperature trend (plus or minus some tolerance), and you will get a range that is quite close to the range of AR4 projections (maybe a little wider). That is certainly a useful exercise. But the AR4 models were not created in such a fashion, and so thinking about them like that is not as useful. Each group strives to create the best simulation they can and therefore most of the models cluster around 'best estimates' of various quantities. But even then, some of the models didn't use any aerosol indirect effects and have late 20th C trends that are significantly too warm. Nonetheless, the existence of a reasonable match to the observational trend in a specific GCM does not preclude the potential existence of another reasonable match with a different sensitivity and a different forcing. Thus you have to look at the models as plausible and consistent renditions of what happened, but that are not definitive. Widening the scope of variables, time periods, and complexity will give much better chances of constraining important facets of the projections. - gavin]

    Comment by Marty Singh — 16 Sep 2010 @ 8:31 PM

  378. Marty Singh (368) and Wally (369) (Anyone else as well)

    If you want to understand how GCMs work, I would encourage you to look at my GoGCM project. My intent with this project is to try to show how these models are made, what assumptions go into them, and what they can, and can’t, do.

    http://gogcm.blogspot.com/

    Comment by Arrow — 16 Sep 2010 @ 9:25 PM

  379. “Gavin is now attempting a bait and switch in his latest comment to pauld.”

    “Anyway, the general point is that Gavin, and I, as well as others on this board would be better served if we depersonalized this debate.”

    Indeed, “we” would. Personally, I think Gavin makes a major contribution toward that end by not responding to the first quoted comment. Many could profit by his refusal to take (perhaps unintended) slurs personally.

    Oh, and “ad homs” are arguments of the form: “A is a bad person, therefore his argument is discredited.” Simple name-calling is not an ad hom, much less so a statement (correct or not) that someone is “confused.”

    Comment by Kevin McKinney — 16 Sep 2010 @ 9:52 PM

  380. Anne et al.,

    I have become convinced after many long conversations with Wally that he is acutely sensitive to any criticism and so becomes reactionary whenever anyone disagrees with him, even when the criticism is deserved, and that he is genuinely tone-deaf to his own online persona. He honestly does not know how he comes off.

    The ‘you-are-not-in-my-head’ / ‘you-don’t-know-me’ response is also a motif with him; I have no idea why.

    It is probably better not to engage in these kinds of conversations.

    Comment by Waldo — 16 Sep 2010 @ 10:46 PM

  381. Hank @373
    David @374

    I believe both of you are in error.

    According to the Science blog it is actually the global warming that is causing the sunspots to start to disappear.

    Thanks to both of you for your help anyway. It pointed me in the right direction.

    Comment by John Peter — 16 Sep 2010 @ 11:23 PM

  382. for those who didn’t click the link above:

    GEOPHYSICAL RESEARCH LETTERS, VOL. 37, L05707, 5 PP., 2010
    doi:10.1029/2010GL042710

    On the effect of a new grand minimum of solar activity on the future climate on Earth

    Georg Feulner, Stefan Rahmstorf
    Potsdam Institute for Climate Impact Research, Potsdam, Germany

    The current exceptionally long minimum of solar activity has led to the suggestion that the Sun might experience a new grand minimum in the next decades, a prolonged period of low activity similar to the Maunder minimum in the late 17th century. The Maunder minimum is connected to the Little Ice Age, a time of markedly lower temperatures, in particular in the Northern hemisphere. Here we use a coupled climate model to explore the effect of a 21st-century grand minimum on future global temperatures, finding a moderate temperature offset of no more than −0.3°C in the year 2100 relative to a scenario with solar activity similar to recent decades. This temperature decrease is much smaller than the warming expected from anthropogenic greenhouse gas emissions by the end of the century.
    —-

    Comment by Hank Roberts — 16 Sep 2010 @ 11:34 PM

  383. Hank, Don – After a lot of screwing around here’s a copy of what I hope is my last post to the Morningstar forum:

    Re: Re: Greenhouse effect vs Maunder Minimum — a few seconds ago | Post #2899434

    I suggest that the reason it’s being reported now is as stated in the 3rd paragraph of the OP:

    We are now in the onset of that next sunspot cycle, called Cycle 24 – these cycles typically last 11 years — and Livingston and Penn have this month published new, potentially ominous findings in a paper entitled Long-term Evolution of Sunspot Magnetic Fields: “we are now seeing far fewer sunspots than we saw in the preceding cycle; solar Cycle 24 is producing an anomalously low number of dark spots and pores,” they reported.

    Don

    You are cherry picking. Read further, the Science blog clearly states that “it is global warming that is causing the sunspots to disappear”.

    Seriously now, it’s a bit late — but — I just finished reading the entire Penn–Livingston paper. The paper presents evidence that magnetic field measurements can be used as a proxy for sunspot size and for sunspot intensity. The only “trends” they mention are in their statistical PDFs (probability distribution functions) correlating the magnetic field measurements to the spots’ sizes and intensities. It’s a solar physics instrumentation paper; good, careful work.

    P&L reference only 2008 sunspot data, almost three years old. Nowhere do they even mention climate science or the Maunder Minimum. The Science reporter, Phil Berardelli, who should be canned for misrepresenting the P&L work, slipped that one in on you. To put it simply, you’ve been had.

    Read P&L.

    John

    Of course P&L’s work should be funded and continued. The biggest problem in astrophysics is meaningful data. Anyone willing to do peer reviewable tracking should be encouraged.

    Comment by John Peter — 17 Sep 2010 @ 12:26 AM

  384. @380: So what, we are going to keep repeating now that it is better not to make personal attacks, while still making them, like you?

    On topic:

    Is there anybody on this thread who has been able to reproduce the results reported by ModelE?

    [Response: Yes. ;) But what is in question here? The code is available, and as long as you have enough computer power (a laptop) you can run it. If you want replication of the basic results independent of modelE, look at the other GCMs. Perhaps you can be a little clearer about what you want? - gavin]

    Comment by Nick Rogers — 17 Sep 2010 @ 1:01 AM

  385. I should have been more clear.

    Is there anybody in this thread [except Gavin and other keepers of RC] who has been able to reproduce [any of] the results reported by ModelE [concerning global temperatures, for years past 2010, like those on the page below]?

    http://data.giss.nasa.gov/modelE/transient/dangerous.html

    Example graph:

    http://data.giss.nasa.gov/cgi-bin/cdrar/do_LTmapE.py

    Comment by Nick Rogers — 17 Sep 2010 @ 1:09 AM

  386. Wally 371: No more, “you’re confused”, “you’re wrong”, “you need to go learn some stats”.

    BPL: We’re no longer allowed to tell anyone they’re wrong? I have news for you, Wally–sometimes people really are wrong. All viewpoints are NOT equally valid.

    Comment by Barton Paul Levenson — 17 Sep 2010 @ 5:43 AM

  387. Thanks for #373 and #382, Hank–a denialist “regular” whom I encounter has pushed this line periodically in the past. My impression is that Livingston & Penn are presenting a reasonable case as astronomers studying solar activity, and that they are not responsible for the misuse of their conclusions by such as Lawrence Solomon. Is that the case, or are they more similar to the Soon & Baliunas prototype? Here’s a report from NASA science news, which to me suggests the former:

    http://science.nasa.gov/science-news/science-at-nasa/2009/03sep_sunspots/

    Of course, the idea that a new solar minimum would result in Little Ice Age-like conditions is logical enough in a way, but ignores the obvious fact that climate can be affected by more than one factor at a given time. Ignoring AGW is logical enough, I suppose, if you don’t believe it exists–which is to say that the “new solar minimum” meme represents a denier position that at least has some internal consistency (by no means a given in that community.) But of course you can’t–successfully!–go so far as to use the assumption that AGW isn’t real as evidence that AGW isn’t real!

    #380–I think you’re right, Waldo.

    #381–”According to the Science blog it is actually the global warming that is causing the sunspots to start to disappear.”

    Wow, that’s some feedback loop! ;-)

    Comment by Kevin McKinney — 17 Sep 2010 @ 6:42 AM

  388. In case anyone wants to read another item about the rapid growth of renewable energy, here is one:

    http://www.renewableenergyfocus.com/view/12482/solar-pv-shipments-to-exceed-16-gw-in-2010-but-tough-2011/

    Two of my opinions: (1) 16 GW per year of new generating capacity is a truly non-negligible amount of new power generation; (2) although exponential growth cannot continue forever on a finite planet, doubling every 1–2 years (as currently for biofuels, wind and solar) can continue for the next 20 years, as far as can be known now.
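    A quick compounding check of point (2); the 16 GW/yr starting rate is from the report linked above, and the doubling times are the comment’s assumption:

    # Annual capacity additions after 20 years of sustained doubling.
    START_GW_PER_YR = 16.0  # new solar PV capacity added in 2010, per the report

    for doubling_time in (1.0, 2.0):  # years per doubling (assumed)
        final_rate = START_GW_PER_YR * 2.0 ** (20.0 / doubling_time)
        print(f"doubling every {doubling_time:.0f} yr -> "
              f"{final_rate:,.0f} GW/yr of new capacity in year 20")

    Even the slower doubling implies annual additions in year 20 that are several times today’s entire world generating capacity (roughly 5,000 GW), which gives a sense of both the scale claimed and why the growth must eventually saturate.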

    No matter what you think about anything else, that will be a lot of new power generation by any standard.

    Comment by Septic Matthew — 17 Sep 2010 @ 1:37 PM

  389. John Peter says: 16 September 2010 at 11:23 PM
    ”According to the Science blog it is actually the global warming that is causing the sunspots to start to disappear.”

    The “Science blog” John Peter refers to has to be Wattsup.

    Congrats, JP, for the best Poe I’ve seen in quite a while.

    Google finds that phrase and excerpts from the source, thus:

    “NASA: Are Sunspots Disappearing? | Watts Up With That?
    Sep 3, 2009 … This disappearing act is possible because sunspots are made of magnetism…. I expect the feedback effect of CO2 is causing a disturbance to the magnetic field and inevitably leading to the sun going out and the extinction of all life in the solar system. Possibly within days….”

    Comment by Hank Roberts — 17 Sep 2010 @ 2:02 PM

  390. Barton,

    “We’re no longer allowed to tell anyone they’re wrong? I have news for you, Wally–sometimes people really are wrong. All viewpoints are NOT equally valid.”

    I suppose I should have further explained the “you’re wrong” example. First, stating such doesn’t really matter; you need to explain why, and thus it will become obvious. What is often found here is a grand claim of how someone is wrong or confused, and then little is actually proven. Such as in Gavin’s example, where he explained a concept that doesn’t apply to this situation, and then, after I explained why, he hasn’t responded. So, was Gavin confused about calling me confused? Better to just stick to facts and save the grand claims of who’s right, wrong, or confused.
    Second, if all you do is say the equivalent of “you’re wrong, go read X,” you’re basically putting forth a ridicule, because you need to establish what specifically in X you’re talking about and where/why it is useful in this discussion. Just telling someone to go read is not an argument; it’s an ad hominem.

    [Response: Nonsense. If someone makes a claim that is based on false premises, telling them to go read some background is perfectly appropriate. You are not the first person to ask very similar questions and so answers given in other cases are likely to be relevant. Indeed, the existence of this record is one of the boons of the internet since, hopefully it allows for some progress in the discussion. Please note that my ability to engage in multiple conversations at all times is constrained by the existence of the rest of my life (including my day job). If you feel that you have a question I haven't addressed, then repeat it (but leave out the attitude - it doesn't increase my inclination to be responsive). - gavin]

    Comment by Wally — 17 Sep 2010 @ 2:16 PM

  391. Kevin,

    >Oh, and “ad homs” are arguments of the form: “A is a bad person, therefore his argument is discredited.” Simple name-calling is not an ad-hom, much less so a statement (correct or not) that someone is “confused.”<

    You don't need to have a strict if A, then B type attachment to the insult or ridicule. Gavin attempted to add strength to his argument by casting this ridicule, in the first sentence no less. Further, ad hominems do not need to be strict insults. They can come in many forms, but the basic point is that they are about the author of the argument and not the argument itself. Thus, saying "you're X", where X = something bad, abusive or similar, is going to pretty much always be an ad hominem. Logically, there are ways to prove these kinds of statements, but generally they would then be red herrings to your original topic anyway.

    [Response: Please stick to real issues. Endless debates about what is and is not ad hom and who is the most outraged are extremely tedious. - gavin]

    Comment by Wally — 17 Sep 2010 @ 2:34 PM

  392. Surely you’re joking, John Peter.

    Comment by David B. Benson — 17 Sep 2010 @ 2:35 PM

  393. John Peter refers to the opening post of the (inverted) “discussion” thread accompanying the article @ Science: http://news.sciencemag.org/sciencenow/2010/09/say-goodbye-to-sunspots.html

    The thread reads much like something from WUWT.

    Comment by arch stanton — 17 Sep 2010 @ 2:58 PM

  394. http://www.google.com/search?q=do_LTmapE

    Nick, are you getting your material from old blog posts Google turns up searching on that string? It was apparently quite a popular topic on that side of the blogosphere for a while.

    But did you check your second link?
    It doesn’t find anything at the moment.

    Comment by Hank Roberts — 17 Sep 2010 @ 4:56 PM

  395. I’m referring to blogstuff mentioning that old link a few years ago.

    The ridicolous 40% by 2020 campaign | Kiwiblog
    Jul 20, 2009 … http://data.giss.nasa.gov/cgi-bin/cdrar/do_LTmapE.py.
    http://www.kiwiblog.co.nz/

    Tropical Troposphere « Climate Audit
    Apr 26, 2008 … http://data.giss.nasa.gov/cgi-bin/cdrar/do_LTmapE.py

    Comment by Hank Roberts — 17 Sep 2010 @ 5:00 PM

  396. For folks who persist in trying to make computer models look silly by suggesting they break when fed extreme numbers that ignore physically realistic limits, a comparison to experience with aircraft simulation might be instructive:

    http://catless.ncl.ac.uk/Risks/26.16.html#subj12

    “… a simulator cannot be known accurately to represent what would happen during unusual piloting rudder-reversal behavior because, well, until the accident nobody knew at what point airframe structure would fail (it turned out to be some one-third stronger than required by certification regulations)!”

    Simulators do not necessarily accurately represent the behavior of aircraft close to the “edge” of their “flight envelope”, and they cannot be taken to do so for flight outside the envelope. Aerodynamicists study these “out of envelope” characteristics by use of wind tunnel models, but actual aircraft are not flown in flight test “out of envelope” except for certain restricted manoeuvres prescribed in the certification regulations …. For most “out of envelope” flight, aerodynamicists can make very well-educated guesses (from their wind-tunnel modeling) as to what might happen, but they are the first people to say that they are not at all certain. Nobody goes out to flight-test Boeing 747 aircraft in partially-inverted almost-vertical semi-spins, such as what happened to a China Air Lines Boeing 747 over the Pacific near San Francisco in 1985 …[link to details in original post] So there are limits to what simulators can achieve, and it is a matter for research how much “out of envelope” behavior can be usefully and veridically simulated.
    _______________

    And for global climate, we’re piloting our flight by committee(s).

    Comment by Hank Roberts — 17 Sep 2010 @ 5:49 PM
