
Michaels’ new graph

Filed under: — gavin @ 26 March 2009

Every so often people who are determined to prove a particular point will come up with a new way to demonstrate it. This new methodology can initially seem compelling, but if the conclusion is at odds with other more standard ways of looking at the same question, further investigation can often reveal some hidden dependencies or non-robustness. And so it is with the new graph being cited purporting to show that the models are an “abject” failure.


The figure in question was first revealed in Michaels’ recent testimony to Congress:

The idea is that you calculate the trends in the observations to 2008 starting in 2003, 2002, 2001, etc., and compare that to the model projections for the same period. Nothing wrong with this in principle. However, while it initially looks like each of the points is bolstering the case that the real world seems to be tracking the lower edge of the model curve, these points are not all independent. For short trends, there is significant impact from the end points, and since each trend ends on the same point (2008), an outlier there can skew all the points significantly. An obvious question then is how does this picture change year by year? Or if you use a different data set for the temperatures? Or what might it look like in a year’s time? Fortunately, this is not rocket science, and so the answers can be swiftly revealed.
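The shared-endpoint effect is easy to demonstrate with a small sketch. The anomalies below are made-up illustrative numbers, not any actual temperature record; the point is only that every trend ends on the same 2008 value, so one cool final year drags all the short trends down together.

```python
import numpy as np

# Made-up annual global-mean anomalies (deg C) for 1998-2008;
# purely illustrative, not any actual dataset.
years = np.arange(1998, 2009)
anoms = np.array([0.53, 0.30, 0.28, 0.40, 0.46, 0.47,
                  0.45, 0.48, 0.43, 0.41, 0.33])

def trend(start_year, end_year=2008):
    """Least-squares trend (deg C per decade) over start_year..end_year."""
    mask = (years >= start_year) & (years <= end_year)
    return 10.0 * np.polyfit(years[mask], anoms[mask], 1)[0]

# Every one of these trends ends on the same 2008 value, so a single
# cool endpoint pulls all the short trends down together: the points
# plotted against start year are not independent of one another.
for start in range(2003, 1997, -1):
    print(start, round(trend(start), 3))
```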

First off, this is what you would have got if you’d done this last year:

which might explain why it never came up before. I’ve plotted both the envelope of all the model runs I’m using and 2 standard deviations from the mean. Michaels appears to be using a slightly different methodology that involves grouping the runs from a single model together before calculating the 95% bounds. Depending on the details that might or might not be appropriate – for instance, averaging the runs and calculating the trends from the ensemble means would incorrectly reduce the size of the envelope, but weighting the contribution of each run to the mean and variance by the number of model runs might be ok.
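Why grouping matters can be seen in a toy sketch. Everything here is invented (model counts, forced trends, noise level standing in for internal variability; none of it comes from an actual model archive); it contrasts bounds computed from every run pooled with equal weight against bounds computed from per-model ensemble means, the latter being the approach that incorrectly shrinks the envelope.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: five hypothetical models with different numbers of runs.
# Each run's short-term trend = the model's forced trend plus noise
# standing in for internal variability. All numbers are invented.
n_runs = [5, 3, 1, 4, 2]
forced = [0.20, 0.25, 0.18, 0.22, 0.24]          # deg C per decade
run_trends = [f + rng.normal(0.0, 0.15, n) for f, n in zip(forced, n_runs)]

# (a) Pool every run with equal weight: retains the full spread of
# run-to-run internal variability.
pooled = np.concatenate(run_trends)
half_a = 2 * pooled.std()

# (b) Average each model's runs first, then spread across per-model
# ensemble means: the averaging damps internal variability, which is
# why trends of ensemble means tend to give too narrow an envelope.
model_means = np.array([t.mean() for t in run_trends])
half_b = 2 * model_means.std()

print("pooled 2-sigma half-width:", round(half_a, 3))
print("ensemble-mean 2-sigma half-width:", round(half_b, 3))
```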

Of course, even using the latest data (up to the end of 2008), the impression one gets depends very much on the dataset you are using:

More interesting perhaps is what it will likely look like next year once 2009 has run its course. I made two different assumptions – that this year will be the same as last year (2008), or that it will be the same as 2007. These two assumptions bracket the result you get if you simply assume that 2009 will equal the mean of the previous 10 years. Which of these assumptions is most reasonable remains to be seen, but the first few months of 2009 are running significantly warmer than 2008. Nonetheless, it’s easy to see how sensitive the impression being given is to the last point and the dataset used.
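The sensitivity to the final data point can be checked directly by appending hypothetical values for 2009 and recomputing a trend. Again, the anomalies are invented for illustration only; the two appended values mimic the "2009 repeats 2008" and "2009 repeats 2007" assumptions.

```python
import numpy as np

# Invented anomalies (deg C) for 2001-2008; illustrative only.
years = np.arange(2001, 2009)
anoms = np.array([0.40, 0.46, 0.47, 0.45, 0.48, 0.43, 0.41, 0.33])

def trend_with_2009(value_2009):
    """2001-2009 trend (deg C per decade) given a hypothetical 2009 anomaly."""
    y = np.append(years, 2009)
    a = np.append(anoms, value_2009)
    return 10.0 * np.polyfit(y, a, 1)[0]

cool = trend_with_2009(0.33)   # assume 2009 repeats the cool 2008
warm = trend_with_2009(0.41)   # assume 2009 matches the warmer 2007
# One assumed year is enough to shift every short trend that ends on it.
print(round(cool, 3), round(warm, 3))
```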

It is thus unlikely this new graph would have seen the light of day had it come up in 2007; and given that next year will likely be warmer than last year, it is not likely to come up again; and since the impression of ‘failure’ relies on you using the HadCRUT3v data, we probably won’t be seeing too many sensitivity studies either.

To summarise, initially compelling pictures whose character depends on a single year’s worth of data and only if you use a very specific dataset are unlikely to be robust or provide much guidance for future projections. Instead, this methodology tells us a) that 2008 was relatively cool compared to recent years and b) short term trends don’t tell you very much about longer term ones. Both things we knew already.

Next.


225 Responses to “Michaels’ new graph”

  1. 151
    Mark says:

    Consensus.

    If EVERYONE who drops a ball sees it fall DOWN, there is a consensus that things fall DOWN. Not UP.

    Because of this, is the probably apocryphal story of the apple hitting Newton on the head proof that gravity isn’t science? I mean, it’s a CONSENSUS, isn’t it. We all agree on that experiment, don’t we. ‘cept maybe David Copperfield who will say he can make it float and disappear.

    So if consensus is NOT science and therefore your implication that if there is consensus it cannot BE science, gravity isn’t scientific.

    Really weird people out there.

  2. 152
    BFJ Cricklewood says:

    #147 Dan

    re 146. That funding side, as in the tremendous amount of disinformation funded by Exxon/Mobil, right?

    A mere drop in the ocean compared to what the state spends.

    …you have an agenda! Unlike the data which are unbiased.

    This is a fundamental error. Different selections of facts can point to different conclusions.

    …what “consensus” means in the scientific sense…Consensus and peer-review are two of the foundations of all science.

    Consensus can be bought. As can peer-reviewers.

    [Response: This is hilarious. You obviously don't know any scientists either. But my pointing out that this is complete nonsense without a shred of evidence to back it up can simply be dismissed by the claim that I too am doing my paymaster's bidding. You have no idea how far away this is from the truth. - gavin]

  3. 153
    Mark says:

    [Response: You have absolutely no idea how the federal government works.]

    YES HE DOES!!!

    He’s watched ALL the X-Files and read all the way up to level 10 on Xenu biography. And David Ike told him about the TRUTH.

    They’re all non-human lizard overlords who hide EVERYTHING and want to keep the real humans from realising that the lizards are in charge.

    And he knows that there are no lobbying efforts from any corporation to coerce corruption in government because The Corporation Is Your Friend (Please Report For Termination. Have A Nice Day) and they wouldn’t do anything *bad*. Not like governments do. ‘cos governments are “peopled” by our inhuman lizard overlords whereas corporations are built from the VERY BEST humans.

    So there!

  4. 154
    BFJ Cricklewood says:

    #148 dhogaza

    If government-funded science biases results as you claim it does, …a state change in the government-funded scientific consensus would be impossible.

    That wrongly assumes no change in thinking in the government.

  5. 155
    Timothy Chase says:

    Hank Roberts in 117 wrote:

    >> Adam Gallon
    > [Response: Do even you take this kind of nonsense seriously?
    > Please take the conspiracy crap somewhere else….

    He’s _from_ the ‘somewhere else’; n.b. anti-Gore link behind his name.

    He thinks Al Gore wants to blow up the King of England. A few too many role playing games if you ask me.

  6. 156
    Ray Ladbury says:

    BFJ Cricklewood, OK. Let me get this straight. We have temperature measurements that show each subsequent decade of the last 3 is warmer than the last, but you don’t trust them because the data are processed by the government. We have satellite measurements showing we’ve lost 2 trillion tons of ice in the past 5 years, but you don’t trust them because they’re government satellites. We’ve got glaciers retreating the world over, but you don’t trust those measurements because government scientists made them. We have a hundred and fifty years of climate science all of which supports anthropogenic causation of the current warming, but you don’t trust all that work because some of the scientists worked for the government.

    You contend that the whole motivation is for governments to extend their power, even though the measures needed to address climate change will alter the very fabric of the economies that have supported governments up to now.

    Gee, BFJ, given that this blog is about climate science, and you have no interest in science of any kind, since it’s all a government plot, as I have said above, it would appear we have nothing to discuss. Have a nice life, and enjoy your irrelevance.

  7. 157
    BFJ Cricklewood says:

    #149 Mark

    “how much confidence can one have in the “consensus” if one side of the argument is funded vastly more than the other?”

    Yup. Compare the entire university funding grant of the world to the combined total revenue of the oil/coal/gas and tobacco industry.

    No, compare the oil/coal/gas (why tobacco?) spend on climatology, to state spending on climatology. Peanuts.

    (Your comments on fuel tax I again cannot make head or tail of)

  8. 158
    David B. Benson says:

    BFJ Cricklewood (146) — In many, if not all, branches of science there are some researchers who don’t manage to put it together properly. So there are always a few producing way-out, easily seen to be wrong, kooky papers.

    Usually competent researchers just ignore the work as such individuals quickly gain an unfavorable reputation. That is certainly my view of some noisy few, from my amateur back-bench in climatology.

  9. 159
    JBL says:

    A plea from a lurker:

    BFJ Cricklewood is proposing an argument whose natural conclusion is that *no* statement of fact can be trusted, as the person or institution making the statement has motivations that might cause them to lie. This view is completely consistent, totally uninteresting, and self-evidently being applied selectively to reject those views for which s/he has other reasons to reject. I see no value in Gavin or other non-troll commenters on this site paying him any further attention.

    HOWEVER,

    the argument BFJ Cricklewood is proposing is conceivably somewhat persuasive to someone who knows nothing about the (for example) NSF funding process. And, in fact, even though I’m an academic-in-training, I don’t know very much about this process, either. It would be far more instructive if those people who *do* know about it could describe it in a little detail, rather than spending their time pointing out the (self-evident) fact that BFJ Cricklewood is engaging in nutcase conspiracy theorizing.

    In particular, I gather that NSF receives a pot of money from Congress each year. Then there is some process by which this money gets distributed to individual researchers, institutions, or students. How exactly are these decisions made? Does Congress direct the funds in any way? I know that people outside of the NSF are invited to take part in the grant-evaluation process — who takes part, and what is their role? What is the basic decision-making process? Has anyone on this thread ever taken part in an NSF grant review and so could describe it briefly?

  10. 160
    BFJ Cricklewood says:

    #159 JBL

    BFJ Cricklewood is proposing an argument whose natural conclusion is that *no* statement of fact can be trusted, as the person or institution making the statement has motivations that might cause them to lie.

    No, you repeat the earlier mistake of inserting conspiracy into my argument. It’s more about systemic bias, the ultimate source of funding unavoidably having some influence – e.g. by selecting which institutions get money. This influence operates up at the political level, which is why it goes way over the head of your typical working scientist, who has technical issues in the forefront of his consciousness. And it applies mainly to disciplines like climatology, where the conclusions can have large political implications.

    [Response: Ahh.... so now we are unwitting dupes, unconsciously doing the bidding of our reptilian overlords! As the earlier commenter said, this a completely invulnerable shield of nonsense, and so, we must reluctantly bring down the curtain on this amusing interlude. No more responses on this please. - gavin]

  11. 161
    Timothy Chase says:

    Ray Ladbury wrote in 156

    BFJ Cricklewood, OK. Let me get this straight. We have temperature measurements that show each subsequent decade of the last 3 is warmer than the last, but you don’t trust them because the data are processed by the government. We have satellite measurements showing we’ve lost 2 trillion tons of ice in the past 5 years, but you don’t trust them because they’re government satellites. We’ve got glaciers retreating the world over, but you don’t trust those measurements because government scientists made them. We have a hundred and fifty years of climate science all of which supports anthropogenic causation of the current warming, but you don’t trust all that work because some of the scientists worked for the government.

    Sounds about right — right down to your quantum mechanics, the absorption and emission of photons (Einstein was a patent clerk) by matter, the spectra, any inconvenient fact. All the studies, papers, scientists and sciences one vast conspiracy to hide the truth of his world view. Reminds me of some of the more extreme views I found among young earth creationists.

    Of course the larger the conspiracy, the more difficult it would be to keep it quiet:

    But more importantly, the longer a supposed conspiracy takes place and the wider the conspiracy, the more the number of people who must necessarily be involved, and the greater the number of chances each conspirator has for messing up, inadvertently slipping up and letting out enough details that the conspiracy will be discovered. In fact, the likelihood increases more or less exponentially with the amount of time involved, the number of conspirators, and the amount of evidence which must be covered-up.

    A conspiracy of silence
    By Timothy Chase, BCSE member
    http://www.bcseweb.org.uk/index.php/Main/AConspiracyOfSilence

    But even today the UK is home to a few who believe that the sun revolves around the earth.

    Date Index for geocentrism, 02-2005
    http://www.freelists.org/archive/geocentrism/02-2005

  12. 162
    David B. Benson says:

    JBL (159) — NSF used to mainly use individual reviewers, much as in peer review for paper publication. It seems that NSF is going for more panels these days; I’ve served on two and will never do it again, even if asked, which I won’t be, being retired.

    The panelists are supposed to read the funding requests before meeting to discuss the requests and then write a bit about the requests all while NSF program managers are observing.

    In either form of review, the funding requests fall into three easily divisible groups: definitely fund, don’t fund no matter what, and the hard cases. The hard cases are those whose funding depends upon the amount of funds that NSF has to grant, so some words about importance are necessary to provide some guidance to the NSF program directors.

    In any case the NSF program directors have the final say about who is funded and at what level, subject only to review and potential changes by the NSF oversight committee. In this aspect it is slightly different, but not much, from the editorial process used by most quality journals.

    [reCAPTCHA agrees: "confer- hearing".]

  13. 163
    Deech56 says:

    RE JBL 30 March 2009 at 6:10 PM:

    I cannot speak for NSF, but I do work at NIH on extramural programs, so I have some knowledge about grant processes, and I have published some medical research papers so I have some knowledge of the peer-review publishing process.

    Congress makes an appropriation to the funding agencies to use for disbursement through grants. A researcher (Principal Investigator or PI for short) submits a grant application to the agency. The application describes the aims of the research, research plan, personnel and a proposed budget that covers the cost of the research project. The budget would normally include salaries, fringe benefits, supplies, equipment, and overhead for the institution (university or research center).

    The grant gets grouped together with other grants and the set of grants is reviewed by other researchers in the field (peers – at NIH they are mainly outside the government), who score the grant based on the importance (and novelty) of the specific aims, feasibility based on preliminary data and experimental plan and past performance of the PI. This is the basis of peer-review of funding.

    There is another peer review process for the publication of research findings. A set of authors (driven by the lead or senior author – first and last names on the author list) writes a manuscript and submits it to a scientific journal. There are various tiers of journals based on impact – Science and Nature rank near the top, followed by various specialty journals. Gavin, et al. can fill in the order for climatology, but I would assume that Energy and Environment doesn’t rank very high.

    The journal editor assigns the manuscript review to two or three reviewers (other scientists who should have the technical expertise to judge the work) and the reviewers try to pick apart the arguments that the authors are trying to put forward and give a judgment as to whether the article is publishable or not (or could be improved to make it publishable). If the manuscript is rejected, the authors will usually redo the manuscript and submit to another journal – the authors and the publishers look for a good fit, and authors often aim high.

    Congress will sometimes exert influence by directing some of the appropriation to a category ($X billion for AIDS, for example) or provide earmarks that are not part of the NIH or NSF budget and are not peer-reviewed.

    Peer review is an important part of the process, but the charge of conspiracy, whether applied to climate or viral origin of AIDS, falls flat. There are examples of truly novel research that goes against conventional wisdom; Marshall and Warren managed to publish their “heretical” H. pylori research and the Alvarezes got their K/T boundary paper published in Science. I know medical researchers who established strong reputations by thinking outside the box.

  14. 164
    Deech56 says:

    RE David B. Benson 30 March 2009 at 7:10 PM: But wouldn’t the discretion exercised by NSF Program Staff be somewhat limited? My experience is that the more highly scored applications generally get funded and applications with lower scores do not. I do know that there is a gray area that can be more up in the air (and there are adjustments for “program balance”).

    I also wonder how many programs get competed through contracts – contracts also get funded through a peer-review system, unless the agency can justify “other than full and open competition” and not have the process get challenged by a competitor.

  15. 165
    Eli Rabett says:

    David Benson has it about right. Most individual investigator proposals are evaluated by letter review. Special programs (center grants, etc.) are evaluated in panels. Panels, which tend to look at proposals over a large range of fields, will often receive written reviews from experts. NSF panels tend to be fairly small, 10-15 people. Each person is asked to be a primary reviewer on ~5 and a secondary reviewer on ~5 proposals in the panel. The job of the secondary is to summarize the letter reviews. A third person keeps track of the discussion and writes the review. Everyone has to sign off on every review.

    In NIH almost everything runs over very large panels called study sections. Very intense stuff.

    NASA runs mostly by panels, at least in the areas Eli is familiar with

    If Eli was a grad student getting ready to graduate, rather than an old and tired bunny, he would call up a program officer in his area and discuss how the reviews are done in that directorate, what reviewers tend to stress. Anyone who actually has a new faculty position should volunteer to serve on a panel to get an idea of what wins.

    Have to go now, I owe a review

  16. 166
    Timothy Chase says:

    JBL wrote in 159:

    BFJ Cricklewood is proposing an argument whose natural conclusion is that *no* statement of fact can be trusted, as the person or institution making the statement has motivations that might cause them to lie.

    I would change that to “either has motivations or may have motivations that might cause them to lie.” This way he can assume that they are merely being hapless pawns fooled by the grand conspiracy until they start detailing why they are certain.

    JBL wrote in 159:

    This view is completely consistent, totally uninteresting, and self-evidently being applied selectively to reject those views for which s/he has other reasons to reject. I see no value in Gavin or other non-troll commenters on this site paying him any further attention.

    Not too far removed from the theory that one’s brain is in a vat being zapped by aliens. Or that some omnipotent being created the world last Wednesday. But I’ve spent eighty pages before analyzing Descartes’ six Meditations almost line by line. I probably wouldn’t want to go through that again — and yet it would be a great deal more interesting than playing Cricklewood’s game.

    What is it with all these tin-foil hats anyway? Are we being flash-mobbed by a convention of schizophrenics?

    JBL wrote in 159:

    HOWEVER,

    the argument BFJ Cricklewood is proposing is conceivably somewhat persuasive to someone who knows nothing about the (for example) NSF funding process….

    ….Has anyone on this thread ever taken part in an NSF grant review and so could describe it briefly?

    Can’t help you there, but I would be interested.

  17. 167
    Ray Ladbury says:

    “Never argue with an idiot. They drag you down to their level then beat you with experience.”–Jawaad Abdullah

    If a man alleges that all the evidence is tainted by politics, that all the scientists are tainted by politics and refuses to even discuss the theory, the chances of having a discussion based on science are pretty slim. Mr. Cricklewood would reduce climate change to politics because he knows he cannot win on the basis of the evidence.

    Don’t play that game. Don’t feed the troll.

  18. 168
    Dan says:

    It is truly a sad reflection on the state of science education when a software engineer (i.e., Cricklewood) actually believes he knows something that all the major climate science professional societies across the world do not. It reminds me of the “chemtrail” people. Now that’s one to Google for a laugh.

  19. 169
    JBL says:

    Thanks to everyone who responded!

    Timothy Chase wrote in 166: “but I would be interested.”

    Yes, me too — it was certainly more interesting to hear a description of why BFJ Cricklewood was wrong directed at those of us who mostly just listen in than it would have been to see a few more posts noting that BFJ Cricklewood is completely nuts.

    I think I’m straying into Hank Roberts’ territory in noting this, but writing *to trolls* is almost always worthless: they don’t learn, and it sidetracks threads into uninteresting tangents. But writing *about why trolls are wrong* (and directing the comments elsewhere) can be quite interesting, and also feeds them less.

  20. 170
    David B. Benson says:

    Deech56 (164) — NSF Program Directors usually, almost always, take the advice of the reviewers. But in the gray area where the proposals are not of the very highest quality and the program dollars just about all committed, some judgement calls must be made.

  21. 171
    Timothy Chase says:

    Re: JBL (159), David B. Benson (162), Deech56 (163, 164), Eli Rabett (165)

    So basically one of the key problems with Cricklewood’s “theory” is that the decision-making that determines which studies get funded and which do not is itself highly decentralized.

    However, I would argue that another (albeit related) key problem is the decentralized nature of the process of scientific discovery. Those who decide what gets funded won’t know beforehand what will be discovered — and they wouldn’t be able to keep track of all of the interconnections which will be discovered by the vast number of independent and highly intelligent minds involved in this process. To do so would greatly exceed the intelligence of any central authority, whether an individual or a committee.

    Something about all of this sounds oddly familiar.

    However, I would argue it is also closely related to why science is so powerful:

    The justification for a conclusion supported by several independent lines of investigation is generally far greater than that which it receives from any given line of investigation considered in isolation.

    In science there are a great many largely independent lines of evidence and investigation, each of which are supported by a great many more.

  22. 172
    Eli Rabett says:

    The more speculative a proposal is, the more the reviewers want to see preliminary evidence that there is some there somewhere or other.

  23. 173

    #26-29 Gavin, as would be expected, anti-science media picked up word for word on the Dyson remarks we discussed. Tragically predictable, and quite human I am afraid; only Dyson can completely untangle the mess. I doubt he will, given that retractions are rare amongst high-reputation scientists.

    http://mediamatters.org/countyfair/200903300047?show=1

  24. 174
    Mark says:

    “No, compare the oil/coal/gas (why tobacco?) ”

    For those who aren’t whacko nutjobs but are likewise unsure why tobacco was included, Philip Morris (big Tobacco) is a big supporter of all the better funded skeptic circuits. The reason for this is that if it can be “proven” that climate science is wrong about GW, then maybe the biologists are wrong about the dangers of smoking.

    And if Body Thetan Cricklewood wants only what’s spent on climatology for these people, then it should be the same for government sponsored work. Exclude meteorology even, not just biology, engineering, astronomy, particle physics,… But should include all lobbying efforts by the companies, since they wouldn’t waste money on lobbying if they didn’t think it would work and wouldn’t miss out countering works that would see their revenue sink like a stone.

    You still have more money on the anti side.

    PS I now have Batfink on my mind ‘cos of that guy: “your truths cannot harm me! My brainpan is like a shield of steel!”

    PS any way to skip the “please reword ‘cos it’s spam” when there’s naff all spammy about the message? Or at least show up the text with the bad words in there.

  25. 175
    Mark says:

    Apparently either “muc ho” or “dine ro” were spam.

    How?

    The day that spammers start selling e is a day wordpress sites will die. Or maybe I should say “Th day that spammrs start slling is a day wordprss sits will di”. Heck even wordprss will have to change its name…

    I can understand WHY there’s a spam filter. It seems as if it’s not only throwing the baby out, but the bath, sink, plumbing and the family dog out with the bathwater.

  26. 176
    Chris S says:

    Re #171 Timothy Chase

    “However, I would argue that another (albeit related) key problem is the decentralized nature of the process of scientific discovery. Those who determine what funding gets done won’t know beforehand what will be discovered — and they wouldn’t be able to keep track of all of the interconnections which will be discovered by the vast number of independent and highly intelligent minds that are involved in this process.”

    One is reminded of the recent UK GM Field-scale trials. For example see the range of commentaries here:

    http://www.agbioworld.org/biotech-info/articles/biotech-art/farmscaleevaluations2.html

    Note e.g. the contrast between Nigel Williams & Conrad Lichtenstein.

  27. 177
    Deech56 says:

    RE Timothy Chase 30 March 2009 at 21:33

    “So basically one of the key problems with Cricklewood’s ‘theory’ is that the decision-making that determines which studies get funded and which do not is itself highly decentralized.”

    Good point – for the most part, it’s out of the hands of the government (or more correctly, the government relies on the reviewers for advice and almost always follows their advice). Program people are also glad that grant/contract review is separate from the influence of Congress and lobbyists. We really want the science to be the highest quality – a successful grant or contract portfolio (advancement of the science, publications) is good for everyone.

  28. 178
    Deech56 says:

    Timothy Chase (171): Of course, the response to your reasonable conclusion (as expressed to me by one of the leading practitioners of the Chewbacca defense) is usually that scientists are engaged in “groupthink.” This impression is fed by the utterings of people like Spencer, W. Gray and Lindzen.

  29. 179
    Mark says:

    “that scientists are engaged in “groupthink.” This impression is fed by the utterings of people like Spencer, W. Gray and Lindzen.”

    Which is another group think.

    Odd, eh?

  30. 180
    Geoff Wexler says:

    Re: Dyson (please correct if I have made an error)

    The most substantive point in the Wikipedia article is

    “The effect of carbon dioxide is more important where the air is dry, and air is usually dry only where it is cold. The warming mainly occurs where air is cold and dry, mainly in the arctic rather than in the tropics, mainly in winter rather than in summer, and mainly at night rather than in daytime. The warming is real, but it is mostly making cold places warmer rather than making hot places hotter. To represent this local warming by a global average is misleading,”

    While the last sentence may have some merit (for different reasons from those above) , the rest seems dubious.

    1. Positive feedback caused by rise in water vapour (caused by warming) accounts for perhaps half of the estimated warming and this will be located most where the air is humid in contradiction to Dyson’s “cold and dry”.

    2. The enhanced CO2 will also have a direct effect where the air is humid because its absorption spectrum does not completely overlap the water vapour.

    2a). Some of the extra CO2 may end up lying above (i.e. at a greater height than) a humid region. Why can’t this act to make hot places hotter?

    [Point 2 is also in Michael Tobis's web page who makes other points]
    —————————-
    By the way I think that Dyson is a great enough physicist without the hype being given to his contributions in some places. But the conjecture that he is right about everything needs to be tested. In the past he appears to have assumed Moore’s law for everything for a century or more. I’m surprised that his economic models only contain growing exponential functions. How about Malthus?

  31. 181
    dhogaza says:

    For those who aren’t whacko nutjobs but are likewise unsure why tobacco was included, Philip Morris (big Tobacco) is a big supporter of all the better funded skeptic circuits. The reason for this is that if it can be “proven” that climate science is wrong about GW, then maybe the biologists are wrong about the dangers of smoking.

    Don’t forget that the tobacco industry has also invested in the “DDT is harmless and environmentalists banned it because they want to kill poor black people in Africa” scam.

  32. 182

    Re #181

    They don’t have to prove that climate science is wrong. Just throw some doubt on the science. Then that doubt rolls over onto “cigarettes cause cancer” and people take a chance.

    But the spreading of doubt has worked! See http://tigger.uic.edu/~pdoran/012009_Doran_final.pdf It is no use calling for more education of the public. What we need is more public relations training in “some scary scenarios” for scientists.

    Cheers, Alastair.

  33. 183
    Deech56 says:

    RE Mark 31 March 2009 at 8:1 AM

    Me: This impression [groupthink] is fed by the utterings of people like Spencer, W. Gray and Lindzen.

    Mark: Which is another group think.

    But the denial crowd really can’t agree on anything (CO2 up? No warming? Solar? Surface temps?) except that the published literature is wrong. That fact doesn’t seem to bother them. Excuse me – I think my head is going to explode.

  34. 184

    Weaving of Threads, part I of II

    Deech56 wrote in 178:

    Timothy Chase (171): Of course, the response to your reasonable conclusion (as expressed to me by one of the leading practitioners of the Chewbacca defense) is usually that scientists are engaged in “groupthink.” This impression is fed by the utterings of people like Spencer, W. Gray and Lindzen.

    And your acquaintance would be right after a fashion. Dialogue is a form of group-think in which the group is capable of far more than any individual in isolation.

    Please see:

    Likewise, this is the principle behind dialogue. It is largely a matter of mathematics. If you have two individuals where each has only three insights which neither shares with the other, each individual is able to make only three connections between any two points. However, if these two individuals come together, there exists the possibility of making fifteen different connections. Bring in a third person and the number goes up to twenty-eight, and a fourth brings it to sixty-six. And if instead of simple, directional two-term connections, one thinks in terms of paths between all the available points, with one individual there are six possibilities, but with four people the number of potential paths goes up to more than 479 million.

    A conspiracy of silence
    By Timothy Chase, BCSE member
    http://www.bcseweb.org.uk/index.php/Main/AConspiracyOfSilence
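    For what it’s worth, the arithmetic in the passage above can be checked directly: treating each person as contributing three insights, the “connections” are unordered pairs of pooled insights and the “paths” are orderings of all of them. A quick sanity check using Python’s `math.comb` and `math.factorial`:

    ```python
    from math import comb, factorial

    def connections(people, insights_each=3):
        """Unordered pairs among all pooled insights: C(3p, 2)."""
        return comb(people * insights_each, 2)

    def paths(people, insights_each=3):
        """Orderings visiting every pooled insight once: (3p)!."""
        return factorial(people * insights_each)

    print(connections(2))   # 15 connections for two people
    print(connections(4))   # 66 for four
    print(paths(1))         # 6 paths for one person's three insights
    print(paths(4))         # 479001600 -- "more than 479 million"
    ```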

    I have seen this principle at work — particularly at St. John’s College:

    But this isn’t simply a matter of abstract theory. I have seen this in action at St. John’s College. At this school, we would read things like “On the Origin of Species,” Plato’s “The Republic” or St. Augustine’s “Confessions,” then come in and discuss what we had read.

    Oftentimes people wouldn’t have read the assignments, and the discussion would simply turn into some sort of bull session where people would debate poorly thought-out personal opinions. This happened a good majority of the time. Alternatively, one person would try to dominate the discussion, and we would end up discussing only his views.

    However, every once in a while we would have a genuine dialogue where insight would build upon insight upon insight until the illumination was almost blinding. Individuals who normally didn’t seem that terribly bright would have insights which made them seem like geniuses. After an especially good discussion, you would leave the classroom, and it would feel like you were six feet off the ground. It would take more than an hour to come back down to earth.

    ibid.

    That is the power of human thought and civilization:

    But to leave things on a somewhat more positive note, these principles also suggest something of the power of human thought itself. The history of thought is the history of an ancient and ongoing dialogue. New participants come and older participants go, but the understanding of the community of participants becomes wider, deeper and stronger over time — thanks to the participation of everyone involved.

    ibid.

    … and it is key to understanding science:

    Empirical science plays a very important part in that dialogue, but in a certain sense it could be viewed as something even wider: a dialogue between humanity and the world in which we live. It is a dialogue in which the questions we ask of nature determine what kind of answers we receive from it — which then affects what questions we will ask afterwards. But this dialogue does not proceed along any one line of conversation. There are many different threads which are largely independent of one-another. With congruence between different, independent lines of investigation, the conclusions which we reach take on far greater justification than any one line of investigation would be capable of by itself.

    ibid.

  35. 185

    Weaving of Threads, Part II of II

    Science and civilized thought aren’t merely some sort of echo chamber or mass delusion — and if someone were to argue that they are, they would be guilty of self-referential incoherence insofar as the very fabric of their own thought is dependent upon that “mass delusion.” How could they possibly know what they claim to know? This is the problem with radical skepticism.

    So as not to post something especially long, I will refer you to something I wrote some time ago. I posted the piece in DebunkCreation, although it was actually part of a much longer paper.

    The part that I am referring you to begins here, a few paragraphs down:

    Something Revolutionary: A Critique of Kant’s Transcendental Idealism
    Part 9, Section 22: The Meaning of Self-Referential Incoherence

    At this point, I would like to introduce what I call “the norm of self-referential coherence.” This norm prohibits self-referential incoherence. Thus to understand the meaning of the norm, one must understand the meaning of self-referential incoherence. I will present an example before attempting a definition.

    Re: [DebunkCreation] Epistemic Abyss
    Fri Nov 11, 2005 4:09 am
    http://groups.yahoo.com/group/DebunkCreation/message/81678

    But in short, if a radical skeptic were to claim that all of this is simply a mass delusion, then in logic he couldn’t claim to know this, or even to know that the proposition is meaningful.

    However, the above is concerned with the global problem of radical skepticism, not some more localized form of denialism. So how do we respond to that? Is it possible for the scientific consensus to be wrong?

    First we must admit that there exist widespread interdependencies between the sciences.

    Please see the following comment, and in particular the section that begins:

    Now I will begin my second example. Roughly at the time of Darwin, it was considered a recognised fact that the earth and the sun couldn’t be more than a few million years old: the only fires known were chemical fires, and the only other source of energy we could conceive of for the sun was energy released as a result of gravitational collapse. On the basis of the latter, Lord Kelvin calculated that the age of the sun had to be in the range of millions of years, not thousands of millions. This required evolution to take place at a rate which seemed unlikely.

    Do Scientific Theories Ever Receive Justification?
    A Critique of the Principle of Falsifiability
    16 November 2007 at 1:12 PM
    http://www.realclimate.org/index.php/archives/2007/11/bbc-contrarian-top-10/#comment-68052

    Likewise, we must admit that errors are possible, but then we can point to the fact that science is self-correcting:

    Scientific theories are a form of knowledge, but they are a form of corrigible knowledge — and science itself is a fallibilistic, self-correcting endeavor — in which progress is real, and knowledge is cumulative despite the errors which may be made along the way.

    http://www.realclimate.org/index.php?p=583#comment-94611

    … and that even in the case of major paradigms for which there existed a great deal of evidence but which were later replaced, much of what was at one time thought to be true has in fact been preserved in the form of a correspondence principle between the older theory and its newer replacement — that the greatest difference between the two lies simply in the languages in which the theories are expressed (ibid.).

    The particular conclusions of empirical science draw their strength, for the most part, from multiple, largely independent lines of empirical investigation. The more these accumulate, the stronger the justification for those conclusions becomes, such that even when some of the stronger conclusions are eventually replaced, we may know with some confidence that much of their content will be preserved, and that the difference will largely consist of the form (language) in which that content is expressed.

    A great deal of empirical evidence has accumulated for the major conclusions of climatology, evidence from many largely independent lines of investigation. In logic, if one is to forgo radical skepticism in favor of skepticism of a more delineated form with respect to empirical conclusions justified by reference to evidence, one cannot offer a broad, philosophic argument that by its very nature would undermine all human thought; one must instead provide arguments of a more delineated form. In logic, one cannot arbitrarily proclaim the major conclusions of climatology unwarranted without addressing the issue of the evidence that supports them — and presenting an alternative which is equally coherent and capable of explaining that evidence with the same degree of specificity.

    This is something which denialists will never attempt because even they see the utter futility of such an endeavor.

  36. 186
    Ray Ladbury says:

    Geoff Wexler, That Dyson quote is a wonderful illustration of how a very smart guy can be very wrong when he ventures outside his realm of expertise. How wet does he think the atmosphere is above cloudtops? As I’ve said before, I don’t think Dyson likes to get bogged down in details, and therein lies the devil.

  37. 187

    I had written towards the end:

    In logic, one cannot arbitrarily proclaim the major conclusions of climatology unwarranted without addressing the issue of the evidence that supports them — and presenting an alternative which is equally coherent and capable of explaining that evidence with the same degree of specificity.

    This is something which denialists will never attempt because even they see the utter futility of such an endeavor.

    Weaving of Threads, Part II of II

    Case in point:

    So, Jim Bob, with your degree in “physical science” and computer-programming skills, perhaps you could enlighten us on exactly what evidence the “skeptical” side has presented. ‘Cause try as I might, I can’t find jack in the published literature that is at all convincing. Maybe you could start with a model of Earth’s climate that has a CO2 sensitivity less than 2 degrees per doubling? No? How about an explanation of how “solar effects” explain the past 30 years of warming when solar luminosity has been pretty much constant over that period? No? How about a learned treatise on how either a solar or PDO mechanism can warm the troposphere while cooling the stratosphere? Nope? Or how about how a local “oscillation” gives rise to a sustained warming lasting decades? No, huh?

    Ray Ladbury
    31 March 2009 at 11:26 AM
    http://www.realclimate.org/index.php/archives/2009/03/a-potentially-useful-book-lies-damn-lies-science/#comment-116887

    *
    Captcha fortune cookie:
    to listen

  38. 188
    iheart says:

    Seeing how the envelope in any of the graphs spans a possible trend range of 1.6 deg C, while the observed temp has only varied 0.4 over the graph time span, not only is the model envelope an abject failure, the whole thing fails by design, especially with 75% play in the envelope. What is being proven or disproven here? And why so many “told you so’s” from the commenters? It’s getting old.

    [Response: You miss the point entirely. Short term trends are too variable due to the internal variability to constrain long term sensitivity. - gavin]
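    Gavin’s point about internal variability is easy to check numerically. A minimal Monte Carlo sketch (all numbers illustrative: a 0.02 °C/yr underlying trend plus independent year-to-year noise of 0.1 °C):

    ```python
    import random

    def trend(series):
        """Ordinary least-squares slope of a series against its index."""
        n = len(series)
        xbar = (n - 1) / 2.0
        ybar = sum(series) / n
        num = sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
        den = sum((i - xbar) ** 2 for i in range(n))
        return num / den

    def trend_scatter(window, n_sims=2000, true_slope=0.02, noise=0.1):
        """Standard deviation of fitted trends across noisy realizations."""
        rng = random.Random(42)
        slopes = [trend([true_slope * t + rng.gauss(0.0, noise) for t in range(window)])
                  for _ in range(n_sims)]
        mean = sum(slopes) / n_sims
        return (sum((s - mean) ** 2 for s in slopes) / n_sims) ** 0.5

    # 6-year trends scatter by more than the underlying trend itself;
    # 30-year trends pin it down an order of magnitude more tightly.
    print(trend_scatter(6))    # ~0.024 °C/yr
    print(trend_scatter(30))   # ~0.002 °C/yr
    ```

    The point is not the particular numbers but the ratio: with a six-year window, the noise-driven scatter in the fitted trend swamps the signal, which is why a short trend says almost nothing about long-term sensitivity.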

  39. 189
    iheartheidicullen says:

    I do now see that you are correct in your response, Gavin. Maybe this argument should not be made from either side then, at this time. Thanks

  40. 190
    Mark says:

    “But the denial crowd really can’t agree on anything”

    Oh, they all agree that AGW isn’t a problem.

    There’s a large group (think about where the term “groupthink” has its etymological roots) that believes anything the government does is WRONG.

    There’s plenty of agreement.

    They don’t agree on WHY or HOW.

    Mostly because they don’t CARE about having a theory, they just want to tear one down. No need for a consistent theory if all you want is another one torn down, is there? Having a theory of your own is likely to result in your being proven wrong. So don’t give a counter-theory, just make out how bad the AGW theory is. DO NOT REPLACE IT. A replacement is not wanted, and it hurts the denialist aim: remove AGW as a theory.

  41. 191

    Deech56 wrote in 183:

    But the denial crowd really can’t agree on anything (CO2 up? No warming? Solar? Surface temps?) except that the published literature is wrong. That fact doesn’t seem to bother them.

    We took different paths, but we appear to have arrived at the same place.

    Deech56 wrote in 183:

    Excuse me – I think my head is going to explode.

    Quite possibly.

  42. 192
    walter crain says:

    “abject failure”! sounds familiar…

  43. 193
    Hank Roberts says:

    Read what Hansen wrote about the NYT before you fall for the spin in their article about Dyson and Hansen.
    http://solveclimate.com/blog/20090329/ny-times-invents-climate-science-war

    Go to Hansen’s page for the full letter he wrote.

    He’s quite open to Dyson’s point of view, when understood. It’s a mature response.

    Recommended.

  44. 194
    Jason says:

    Of course the graph is dependent on the end point. Of course the various points are highly correlated. Why would anyone expect differently?

    DID MICHAELS CLAIM OTHERWISE?

    No.

    He is showing that, independent of starting point, the models aren’t doing particularly well. He is making this argument SPECIFICALLY because previous attacks have claimed that the starting point was cherry picked. As far as I know, he hasn’t been attacked for choosing the most recent data as an end point, because until now that was considered the right thing to do.

    Using the end of 2008 is ONLY cherry picking if we know that temperatures are going to go up significantly in 2009 and beyond. Are we sure of this? Is Real Climate ready to make a prediction for 2009 and 2010 temperatures?

    If you are going to criticize his graph because he doesn’t use 2009 and 2010 data, then you should tell us what you expect temperatures in those years to be, so we can revisit the issue should your expectations be wrong. [I'll be using HadCRUT to see how accurate you were]

    OTOH, if we don’t know that 2009 and 2010 temperatures are going to be up significantly, then there really isn’t any basis for criticizing the graph.

    Everybody agrees that a very warm 2009 will render this graph useless, while a very cold 2009 will mean that the model mismatch has been greatly understated. What is news, in your post, is that “next year will likely be warmer than last year”. That’s good to know. Would you care to quantify this so we can understand just how misleading an endpoint 2008 really is?

    [Response: Actually Michaels did show his graph with 2009 data filled in equal to 2008. That wasn't my invention. But he didn't show what you get using GISTEMP and he didn't show what would happen if (as is likely) 2009 is warmer than 2008. And yes, that is a prediction. The point is that Michaels claimed that these results imply that the models are 'abject failures'. Such a strong conclusion should rely on more than a single year of data no? - gavin]
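    The non-independence of the plotted points can be made concrete: because every trend shares the 2008 endpoint, perturbing that single value moves all of the points together, and the shortest trends move the most. A toy anomaly series (hypothetical numbers, not real temperature data):

    ```python
    def ols_slope(series):
        """Least-squares slope of a series against its index."""
        n = len(series)
        xbar = (n - 1) / 2.0
        ybar = sum(series) / n
        num = sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
        return num / sum((i - xbar) ** 2 for i in range(n))

    # Hypothetical anomalies for 1998-2008: a steady 0.02 deg C/yr rise
    temps = [0.02 * t for t in range(11)]

    def trends_to_endpoint(series, min_len=3):
        """Trend from each start year to the common final year."""
        return [ols_slope(series[start:])
                for start in range(len(series) - min_len + 1)]

    base = trends_to_endpoint(temps)            # every value is 0.02
    cold_end = temps[:-1] + [temps[-1] - 0.1]   # one anomalously cold final year
    perturbed = trends_to_endpoint(cold_end)

    # All nine points drop together; the shortest trend even flips sign.
    for b, p in zip(base, perturbed):
        print(round(b, 3), round(p, 3))
    ```

    In a plot of this kind, the apparently independent confirmations at each start year are really one data point, the endpoint, seen nine times.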

  45. 195
    Jim Bouldin says:

    Jason (194):
    “He is showing that, independent of starting point, the models aren’t doing particularly well”

    No, he THINKS he is showing that. What he’s actually showing is that short term plateaus, or even declines, can occur in a long-term upward trend, which is absolutely not news. Also, what does he have to say when the observed trends are higher than predicted by the models? Let me guess “the temporal variance is too high–the models never correctly predict the time course of T evolution, and therefore are not to be trusted”

    “Using the end of 2008 is ONLY cherry picking if we know that temperatures are going to go up significantly in 2009 and beyond.”

    Wrong. It’s also cherry picking if you wait for a relatively low year and then do your analysis. As mentioned in the article, why didn’t he do it last year?

    “Everybody agrees that…a very cold 2009 will mean that the model mismatch has been greatly understated.”

    Yeah everybody…except the ones who know something about it. Model mismatch for what exactly? For how long? And since you’re fond of odds-making, what are your odds for a “very cold 2009″ given that like 9 of the warmest years in the last century have occurred in the last 11 years?

  46. 196
    Jason says:

    Michaels is making this presentation to a political audience. While I would not agree that the models are an “abject failure”, he is presenting to people who have been told that “the science is settled”. Properly nuanced testimony on Capitol Hill falls on deaf ears.

    A quick review of your boss’s public statements shows a similar lack of nuance. Climate sensitivity has been “nailed” at three degrees centigrade. Hardly any acknowledgment is made of how very limited our understanding of the climate system is.

    And I have a hard time faulting him for it. Hansen and Michaels both understand the environment in which their statements are being interpreted. They are using stronger language and fewer caveats than I would like. But they aren’t talking to me. They are talking to congress and to the public at large, and they are adjusting their statements accordingly.

    I honestly find “the science is settled” to be a far more unreasonable statement than “the models are an abject failure”. Science (which the models are part of) is evolving based on real world observations. Our understanding of climate WILL improve. The current models WILL be replaced. Climate sensitivity in the new models WILL be higher or lower than in the old ones. Congressional testimony about global warming WILL remain largely devoid of nuance. [Those are my four predictions.]

    [Response: The only scientist I can find that has ever said the 'science is settled' is...... Patrick Michaels (last paragraph). You will not find such a statement made in any post on RC. - gavin]

  47. 197
    David B. Benson says:

    I’ll hazard a prediction for the global surface temperature for 2009 CE: one of the cooler years this century (2000 CE onwards), but in the top 12 overall. I base this solely on the prolonged solar minimum just now. If a goodly El Niño happens to come along, I probably lose.

  48. 198
    Vernon says:

    With Lucia showing here:
    http://rankexploits.com/musings/2009/multi-model-mean-trend-for-models-forced-with-volcanic-eruptions-mega-reject-at-95/

    What if we pick 95% as our confidence intervals?
    Well… then we don’t reject this multi-model mean in 1974, or 1996 and for a few years after. So, if you feel bound and determined to save the reputation of the models, you should think up reasons why 1974 or 1996 are the “correct” years for testing models over the “longer term”, while simultaneously claiming you picked these entirely at random.
    At the 95% confidence interval, the multi-model mean using volcanic cases only are rejected if we happen to use 2001 to compute the initial trend. What if 2000 is the right start year? We reject the multi-model mean based on cases with volcanic forcing.
    So, to those who think these “rejections” are due to selecting a short period for analysis: Nope! These rejections are due to the observed earth temperature veering away from the projected values.

    And a study by Evan et al (2009) The Role of Aerosols in the Evolution of Tropical North Atlantic Ocean Temperature Anomalies

    Observations and models demonstrate that northern tropical Atlantic surface temperatures are sensitive to regional changes in stratospheric volcanic and tropospheric mineral aerosols. However, it is unknown if the temporal variability of these aerosols is a key factor in the evolution of ocean temperature anomalies. Here, we elucidate this question by using 26 years of satellite data to drive a simple physical model for estimating the temperature response of the ocean mixed layer to changes in aerosol loadings. Our results suggest that 69% of the recent upward trend, and 67% of the detrended and 5-year low pass filtered variance, in northern tropical Atlantic Ocean temperatures is the mixed layer’s response to regional variability in aerosols.
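    The “simple physical model” described in that abstract is essentially a slab mixed-layer energy balance: the layer’s temperature anomaly integrates radiative forcing against a damping feedback, C dT/dt = F(t) − λT. A minimal sketch of that class of model (all parameter values here are illustrative assumptions, not those of Evan et al.):

    ```python
    # Slab mixed-layer energy balance: C * dT/dt = F(t) - LAM * T
    RHO = 1025.0       # seawater density, kg/m^3
    CP = 3990.0        # specific heat of seawater, J/(kg K)
    DEPTH = 50.0       # assumed mixed-layer depth, m
    LAM = 15.0         # assumed damping feedback, W/(m^2 K)
    SECONDS_PER_YEAR = 3.156e7

    def mixed_layer_response(forcing, dt_years=1.0 / 12.0):
        """Euler-integrate the slab ocean; monthly steps keep it stable.

        forcing: radiative forcing anomalies (W/m^2), one value per step.
        Returns the temperature anomaly (K) after each step.
        """
        heat_capacity = RHO * CP * DEPTH        # J/(m^2 K)
        dt = dt_years * SECONDS_PER_YEAR
        temp, out = 0.0, []
        for f in forcing:
            temp += dt * (f - LAM * temp) / heat_capacity
            out.append(temp)
        return out

    # A sustained -1 W/m^2 aerosol forcing relaxes toward F/LAM ~ -0.067 K,
    # with an e-folding time C/LAM of roughly five months.
    response = mixed_layer_response([-1.0] * 600)   # 50 years, monthly
    print(round(response[-1], 3))                   # -0.067
    ```

    Driving such a model with a satellite-derived aerosol forcing time series, rather than a constant, is the kind of calculation the abstract describes; the realism lives almost entirely in the forcing data and the chosen depth and feedback.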

    And with Hansen’s presentation at Copenhagen found located at http://www.columbia.edu/~jeh1/2009/Copenhagen_20090311.pdf

    where he said:

    The aerosol forcing is negative and substantial, but the truth is that, based only on first principles, we do not know the aerosol forcing as well as indicated.

    We do not have measurements of aerosols going back to the 1800s – we don’t even have global measurements today.

    Any measurements that exist incorporate both forcing and feedback.

    Aerosol effects on clouds are very uncertain.

    Does this not show that the models incorporating volcanic forcing cannot model aerosol forcing, since there are no measurements to use for parameterization and, per Hansen, we do not know enough to work from first principles?

    [Response: You are confusing many separate issues. a) volcanic stratospheric aerosol loads are reasonably well known back to Agung (1963). Earlier eruptions (such as Krakatoa etc.) are estimated based on ice core sulphate loads in both hemispheres. They are probably ok (though it gets worse going into the pre-industrial). The size distribution and particle type for these eruptions is reasonably well known and the radiation perturbations match well what was observed by ERBE etc. This is not what Hansen is talking about. b) there are many natural aerosols in the atmosphere. In the tropical Atlantic there are significant amounts of dust that come in from the Sahara. Depending on the rainy season in any one year, there might be more or less dust. Dust has a direct radiative effect and hypothesised impacts on ice nucleation, the variance in the dust then, might affect sea surface temperatures. This is mostly what Evan et al are talking about. c) anthropogenic aerosols - mainly sulfate and nitrate (from emissions of SO2 and NOx/NH3) have a strong direct effect and undoubted liquid cloud nucleation impacts (the indirect effects). These are not well known and are what Hansen is talking about. - gavin]

  49. 199
    Jason says:

    Jim Bouldin (195):

    “Wrong. It’s also cherry picking if you wait for a relatively low year and then do your analysis. As mentioned in the article, why didn’t he do it last year?”

    Off the top of my head I can think of two different blogs that performed substantially similar analyses a year ago.

    There are two reasons (besides declining global temperatures) why this issue is receiving increased attention now:

    1. There is a readily available public archive of model runs.

    2. Many of those models have remained largely static since AR3.

    This means that for the first time a large number of models can be readily tested against temperature data recorded AFTER those models were finalized.

    Each year that passes will give us another year of real data to compare to the models. More years mean more statistical certainty when performing these analyses.

    “Yeah everybody…except the ones who know something about it. Model mismatch for what exactly? For how long? And since you’re fond of odds-making, what are your odds for a “very cold 2009″ given that like 9 of the warmest years in the last century have occurred in the last 11 years?”

    I AM very fond of odds making. I would bet that the IPCC prediction for the first three decades of this century as I understand it (0.2 degrees per decade) overstates the amount of global warming we will actually experience. I would also bet that Hansen’s estimate of climate sensitivity at 3 degrees Centigrade is too high.

    I would be flexible in terms of creating a structure for such a bet. Both parties should believe that winning proves something about the other’s a priori assumptions. A one-year bet probably wouldn’t do, since you could blame a cold 2009 on random noise, and I would feel compelled to agree. (And anyway, 2009 needs to be a fair bit warmer than 2008 to make Michaels’ graph look bad in hindsight.)

    Is this something you are interested in?

  50. 200
    Jason says:

    Gavin wrote:

    “But he didn’t show what you get using GISTEMP”

    I think that providing GISTEMP is a very good thing that GISS does. But I, and many other skeptical folks, view the GISTEMP product with some suspicion: in particular, the tendency of retroactive temperature adjustments to increase the trend, and your statement about only 0.25 FTEs being used to produce it.

    These issues are not so troubling that I would refuse to use it if it were the only option. But HadCRUT is available. It is just as well accepted by the climate science community. And there is no obvious trend to their retroactive adjustments.

    So I don’t think it is unreasonable to use HadCRUT for analyzing global temperatures and not bother comparing the results to GISTEMP. If I convince anybody to take a bet, I would want to use HadCRUT to determine the results (even if this meant that I lost).

    [Response: I was not suggesting that GISTEMP was better (though that is arguable), but I was alluding to the fact that where there are structural uncertainties in an observational quantity (like GMST or MSU-LT) then ignoring them in assessing significance is wrong. It is much better to use both UAH and RSS, or HadCRUT and GISTEMP (and NCDC) than it is to simply pick the one that gives you the trend or character you prefer. As for GISTEMP, it is what it is - an analysis of the raw data. As part of the calculation it needs to estimate the difference between rural and urban trends - it does this for the whole time-series and so will change as more data comes in. There is nothing 'suspicious' about this, and anyone who claims otherwise has no clue what they are talking about. Read the papers to see what is done and why. - gavin]

