RealClimate

Comments

  1. Thanks for your continued debunking. Just think if we all got a nickel for every time we had to answer this nonsense!

    Comment by John Atkeison — 26 Mar 2009 @ 4:49 PM

  2. There is a difference between a skeptic and a denier: a skeptic does not play rhetorical games nor play statistical tricks. When such games are played, how can one not call them denialists?

    Comment by Ken — 26 Mar 2009 @ 4:55 PM

  3. “Next” will be:

    http://jennifermarohasy.com/blog/2009/03/the-available-evidence-does-not-support-fossil-fuels-as-the-source-of-elevated-concentrations-of-atmospheric-carbon-dioxide-part-1/

    presumably.

    [Response: Well, we probably won't bother as we debunked this stupidity a couple of times in our first few months of launching RealClimate (and it really is an idea that exposes anyone subscribing to it as a truly phenomenally ignorant person, more than just about anything else out there, on par with the earth-is-flat idea (if not stupider, since the earth *does* look flat from where we stand on it...)). See here and here--eric]

    Comment by Ed Davies — 26 Mar 2009 @ 5:10 PM

  4. It almost makes one wish that 2008 had been much hotter, doesn’t it? Why, I thought that I must have forgotten how to read graphs for a moment there.
    Thanks for being honest enough to tell the truth, even if it does hurt our chances of driving these facts home.

    [Response: Don't be an ass. It's unbecoming. - gavin]

    Comment by James Staples — 26 Mar 2009 @ 5:12 PM

  5. Why? What could possibly be his motives to do such sloppy work?

    If it is scientific oversight then he really harms his stature as a Research Professor at the University of Virginia.

    If it is duplicity motivated by ideology then he becomes dangerous.

    Comment by Richard Pauli — 26 Mar 2009 @ 6:01 PM

  6. Poor fellow, he must be torn between his “stature as a Research Professor at the University of Virginia” and his fellowship at the Cato Institute.
    That would be “Research Professor of Environmental Sciences,” and “Senior Fellow in Environmental Studies.”
    Nope, no ideology to see here. Move along.

    Comment by chris — 26 Mar 2009 @ 6:28 PM

  7. Thank you! This thread will certainly be useful in replying to the ignoranti.

    Comment by David B. Benson — 26 Mar 2009 @ 7:35 PM

  8. David B. Benson, the idea that ‘anyone who questions conventional AGW theories is ignorant’ has been thoroughly debunked – here and elsewhere. Please use the start here link at the top of the page.

    Comment by Michael — 26 Mar 2009 @ 8:02 PM

  9. Why would he do it?

    Because millions (well, tens of thousands really; this is not an important graph) will see it, and in a short time.

    Six months from now they will remember it as fact, and what’s worse, even some of the people who read Gavin’s nice explanation of why it’s wrong will still remember it as having been “true”.

    That’s why he does it.

    It doesn’t even matter if it is a lie; as long as people hear it several times, it will be accepted by most as having been fact.

    Comment by Dale Power — 26 Mar 2009 @ 8:07 PM

  10. Michael says, “…the idea that ‘anyone who questions conventional AGW theories is ignorant’ has been thoroughly debunked – here and elsewhere.”

    OK, so where are the smart ones?

    Comment by Ray Ladbury — 26 Mar 2009 @ 8:22 PM

  11. Michael, who are you quoting?

    You seem to have misread a reference to “ignoranti” and put your misreading into quotes as though someone else said it. Nobody did.

    David’s mock-latin refers to those more traditionally called know-nothings or ignoramuses (those whose eternal posture is that they “do not know” — no matter how informed; whose stance is that of “constant adversary” — look it up*).
    __________
    * http://en.wikipedia.org/wiki/Ignoramus_(drama)

    Comment by Hank Roberts — 26 Mar 2009 @ 8:30 PM

  12. Gavin, I’m wondering what you and Michaels did differently: his up to 2008 picture looks different from yours. Also, what would your response be if 2009 looked like 2008, and the data stayed near the lower boundary of confidence in the models? Would the trend be robust then? At what point would it be, if not?

    Comment by Chris — 26 Mar 2009 @ 8:36 PM

  13. Michael (8) — The statement you quoted was not even implied by what I wrote.

    Comment by David B. Benson — 26 Mar 2009 @ 8:49 PM

  14. If I understood correctly, you criticize Michaels for misusing short-term data (actually cherry-picking end points) to give the impression that future observations will be close to the low end of the IPCC range.

    RC: “However, while it initially looks like each of the points is bolstering the case that the real world seems to be tracking the lower edge of the model curve, these points are not all independent”

    If so, why didn’t you plot graphs since 1979, when satellite measurements began, if you are so sensitive about procedures that cherry-pick start and end points? Just to check out whether the real world really “tracks the low end” or not?

    So, RSS shows 0.16 deg C per decade for 1979-2009, UAH shows 0.13 deg per decade for 1979-2009, and HadCRUT 0.17 over the same period. The lower end of the IPCC range is somewhere about 0.15 deg C per decade if I am not wrong. Now, what exactly do you think to accomplish with such a critique of Michaels? Both satellite data sets, and HadCRUT as well, pretty closely track the lower end of the IPCC range over 1979-2009. Models predict a by-and-large constant rate of warming, which is around 0.15 deg C per decade over the previous 30 years. What’s controversial in projecting that rate of warming into the future? Do you think that the basic IPCC science captured by the models is somehow wrong? That the rate of warming shouldn’t be constant?

    [Response: The models track the temperatures since 1979 very well so I'm not sure what point you are making. The warming is not expected to be constant. - gavin]

    Comment by Nickolas — 26 Mar 2009 @ 9:02 PM

  15. #3 Jennifer Marohasy is employed by an Australian extreme right wing think tank, whose job description seems to involve taking a contrarian view towards anything that suggests rampant laissez faire capitalism is not good for the environment. She has, for example, stated that the drying up of the once mighty Murray River is perfectly natural and nothing to worry about. But global warming is obviously the big one for these people as it is in the US. Greenhouse temperature rise is a massive refutation of the proposition that the world should be run by businessmen for businessmen (http://www.blognow.com.au/mrpickwick/33318/Addicted_to_CO2.html), and the ideological representatives of business employ people like Marohasy to try to muddy the waters and the air.

    Comment by David Horton — 26 Mar 2009 @ 9:52 PM

  16. Ah yes, the infamous Cato Institute…I remember them from an ‘article’ in the Arizona Republic here in Phoenix, claiming that second-hand smoke was harmless. This was only a couple of years ago. Kind of takes your breath away.

    [Response: Literally. - gavin]

    Comment by Steve Missal — 26 Mar 2009 @ 11:02 PM

  17. The conceit of using segments this short at all needs some justification. The whole exercise is rather silly…let’s look at long-term trends that are interesting statistically. Or more importantly, trends that are interesting mechanistically! This wouldn’t move me even if Michaels were right.

    Comment by DrCarbon — 27 Mar 2009 @ 12:08 AM

  18. Please don’t use that “rocket science” cliche. Not least because rocket science is the simplest science there is — action and reaction are equal and opposite.

    But rocket engineering — now, that’s tricky!

    Comment by John Gribbin — 27 Mar 2009 @ 2:27 AM

  19. Re #2: “There is a difference between a skeptic and a denier: a skeptic does not play rhetorical games nor play statistical tricks. When such games are played, how can one not call them denialists?”

    I personally think the term “pseudo-skeptic” is more accurate and descriptive. The stereotypical pseudo-skeptic is the pet shop owner in Monty Python’s dead parrot sketch.

    Comment by Alan of Oz — 27 Mar 2009 @ 3:33 AM

  20. What troubles me most is that this has been presented to the US Congress as something authoritative.

    Comment by Bruce Tabor — 27 Mar 2009 @ 5:12 AM

  21. Re #20

    I hope that the relevant people in the US Congress will be (or will have been) informed of this correction directly.

    Comment by Geoff Wexler — 27 Mar 2009 @ 6:23 AM

  22. My two cents worth….

    Until we can show a dew point increase at 10 km, which is a CO2 forcing requirement for the current models, we will probably have skeptics. At present, satellite data taken since 1979 have shown a stable dew point, which if correct would reduce the CO2 forcing argument.

    Perhaps more work needs to be done with the satellite measuring systems and techniques. The world may be getting warmer due to rising CO2, but you still need to close off any holes. Simply stating from our side that the sensors or readings are faulty is not good enough.

    I’m sure the skeptics will get plenty of air time in the interim!

    [Response: You are possibly a little confused. The 'Dew point' is the temperature at which an air parcel needs to be cooled in order to cause water vapour to condense (I'm sure you know that, but I repeat it for clarity). Thus it is a measure of specific humidity (the total amount of water vapour in the parcel). And specific humidity has been increasing at all levels of the troposphere (IPCC AR4 3.4) (the stratosphere too, but that is a somewhat separate issue). Thus the dew point has been increasing as well. I unfortunately see no let up in the number of pseudo-skeptics. - gavin]

    Comment by Jonas — 27 Mar 2009 @ 7:07 AM

  23. “I personally think the term “pseudo-skeptic” is more accurate and descriptive.” But what is the purpose in seeming skeptical yet not acting as one? To deny a proposition that they do not wish to accept.

    Denial is the creed. Denialist the name.

    Comment by Mark — 27 Mar 2009 @ 7:19 AM

  24. Did you (the good people at RC) send some sort of correction to the US Congress?

    The Michaels graph is very deceptive indeed (probably purposely so) – I must admit it took me a minute or so to see what exactly was depicted and why it is the sheerest nonsense.

    Unfortunately your own graph does not help much, seen from a PR viewpoint. I am afraid that the message people will take from your graph is: “Hey, even the AGW-scientists get a cooling trend” (because the leftmost points are below zero), while not realising that a trend based on a few years (and hence the whole graph) is rubbish.

    [Response: The graph is cut off at 5 years, but if you extend it further the overall impression changes considerably (since both the model spread and the very short term trends are much wider). If I was in the business of propaganda, I would have used that instead of just using the parameters set by Michaels. But that's the difference. - gavin]

    Comment by Dick Veldkamp — 27 Mar 2009 @ 7:21 AM

  25. Absolutely hilarious. Failure to check that result for dependence on the endpoint would be unforgivable even in my field (economics) let alone in the hard sciences.

    So what is it with these people? I used to think that a lot of the mistakes among denialist scientists were due to their failure to grasp the difference between experimental data and observational (non-experimental) data. For a lot of folks trained in experimental method, the error term associated with a data point is the measurement error, period. So they can’t quite grasp that the error term when you model something from observational data is a different thing entirely.

    If that is the root cause of their confusion, then this analysis is of a piece with the Douglass et al tropical tropospheric trends analysis. There, they treated each model prediction as if it were a datapoint, ignoring the large irreducible uncertainty around each datapoint. Here, the terminal datapoint is treated as if it had no transient component. What I see in common is that in both cases, they completely failed to come to grips with the nature of the error term they were dealing with. As if it were just some tiny little measurement error that could be ignored (as usual).

    So, never assume conspiracy when stupidity provides an adequate explanation of the facts. But for this one, I really do wonder. If you don’t grasp that there is a large transient component in any one year’s temperature measurement, you have no business talking about trends. Let alone testifying in front of the Congress.

    Comment by Christopher Hogan — 27 Mar 2009 @ 7:53 AM

  26. I personally think the CATO Institute is overrated as the science institution it may claim to be. The real argument with the lay public is over how serious AGW is:

    http://www.nytimes.com/2009/03/29/magazine/29Dyson-t.html?pagewanted=1&_r=2&hp

    As the future can be theorized at will, and can actually morph into many shapes, the battle lines are drawn now: how will we shape it? Freeman Dyson offers the greatest counter-arguments, the most potent ones, more than anything Michaels has done. However, his wife reasons correctly on this issue, so much for spousal influence!

    A couple of things he states are flat wrong:

    “Most of the time in history the Arctic has been free of ice,” Dyson said. “A year ago when we went to Greenland where warming is the strongest, the people loved it.”

    The first part is flat-out rubbish, a shocking statement from a man of science; there is no evidence of a wide-open Arctic Ocean until very recently. It is still, apparently, a thriving old English myth, to the demise of many explorers: Martin Frobisher, Sir John Franklin, Cabot, etc. I suggest Dyson check out the history of Bowhead whalers (17th to 20th centuries), never going from Alaska to Greenland chasing their prey, always separated by the frozen Northwest Passage, which is no longer so frozen in the summer anymore. Not even the fierce Norse, in the middle of the medieval warming period, are said to have visited Alaska. Alice, his wife, should know that polar bears are threatened at the fringes of the ice world, not where there is still a lot of ice, so anyone can comfortably say that they are doing fine.

    The latter statement from Greenlandic people may be true for some residents who don’t particularly love long winters; virtually everyone in the Arctic enjoys warm weather, but not to the detriment of their entire way of life.

    Dyson’s claim “where is the AGW damage?” is the most frequently used statement by all anti-AGW theorists; the answer is again most clearly seen in the wide-open Arctic Ocean, no longer a hiding place for nuclear submarines, where Polarstern, a German ship, ventured where no ship has sailed freely without ice before, amazingly close to the North Pole last year. It is also astounding that all polar nations are scrambling as they never have before to claim vast swaths of ocean now seasonally open for trade; maybe this is not damage, per se, but one may only imagine what will come next if the climate doesn’t cool down. Finally, surely, the biology of ice-dwelling creatures is forever transformed if there is no ice.

    [Response: Dyson is wrong on many issues and displays a shocking ignorance of climate modelling despite holding very strong views on the matter. However, the example you picked out here is mainly semantic. 'History' as you assume means recorded history - and certainly the Arctic has not been ice free for any of that, but Dyson is referring to 'Earth history' (i.e. the geological timescale), and if you look at that he is likely correct - Arctic ice started forming in earnest about 14 million years ago having been absent for most of the last 65 million years. However, the relevance of this for today's situation is zero. Thus while he can rightly be accused of offering up red-herrings, this isn't quite as damning as you imply. - gavin]

    Comment by wayne davidson — 27 Mar 2009 @ 8:35 AM

  27. Christopher Hogan, that’s why “pseudo skeptics” isn’t right. If anything they are credulous. ANYTHING that says “AGW is false” is accepted without thought.

    Comment by Mark — 27 Mar 2009 @ 8:58 AM

  28. 26r

    Dyson wrote, a couple of years ago:

    “I have studied the climate models and I know what they can do. The models solve the equations of fluid dynamics, and they do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry, and the biology of fields and farms and forests.”

    I was more or less inclined to take him at his word, given his reputation as a thoughtful and accomplished scientist. Where is it in particular that you believe his “shocking ignorance” lies?

    [Response: First off, he is confusing models that include the carbon cycle with those that have been used in hindcasts of the 20th Century and are the basis of the detection and attribution of current climate change. Those don't need a carbon cycle because we already know what the atmospheric CO2 concentration was. Second, his slanders of Jim Hansen reveal someone who has never actually read a Hansen paper nor met him in person (I would be astonished if this was not the case). Statements such as "They come to believe models are real and forget they are only models" reveal that he has never had a conversation with a climate modeller - our concerns about ice sheets for instance come about precisely because we aren't yet capable of modelling them satisfactorily. If you want to read about what modellers really say (as opposed to how they are stereotyped) try here. Where is this religious environmentalism he thinks we modellers revel in? It doesn't exist - and that demonstrates clearly his ignorance on these matters. I'd be more than happy to discuss these things with him if he had any interest in furthering his education. - gavin]

    Comment by wmanny — 27 Mar 2009 @ 9:09 AM

  29. #26 Thanks Gavin. Ha, but the cited article implies “history”, not geological history, which, left alone, needs a reply. However, Dyson’s argument is still pretty bad and disconnected:
    if he claims an open Arctic Ocean is “normal” as per Earth history, then where will the polar bears and walruses and seals hang out? Even more compelling, what would happen to the entire Earth’s climate if the ice disappears? We know of other damages: coral bleaching is still a scourge not often told, and ocean acidification is another. He seems not to grasp the damage done by quick climate transitions. It’s not been a question of AGW triggering an end to the world, but of a rapid change causing chaos for not only humans, but all living things.

    Comment by wayne davidson — 27 Mar 2009 @ 10:44 AM

  30. “I was more or less inclined to take him at his word, given his reputation as a thoughtful and accomplished scientist.”

    So why don’t you take the hundreds of thoughtful and accomplished scientists who are part of the consensus at their word?

    Comment by Marcus — 27 Mar 2009 @ 10:52 AM

  31. Re:http://www.realclimate.org/images/2008_from1979.jpg

    The temperature vs time data are noisy. Slopes derived from this graph will be noisier. Isn’t it much better to look at the actual T vs t graph than at trends derived from the differences of arbitrary endpoints?

    Re: Dyson

    I’m afraid he’s gone emeritus.

    Comment by sidd — 27 Mar 2009 @ 11:28 AM

  32. http://www.realclimate.org/images/2008_from1979.jpg
    What a mess!
    Does this suggest that the claims of increases in the order of 4.5-6 (or more) K are highly unlikely, as temperatures do seem to be bumping along the bottom of the predicted ranges?
    Also what about publishing how modelled predictions from 5 or 10 years ago compare with measured temperatures?
    That would be an interesting test as to the accuracy of models, as I believe that GisTemp E, as modelled 5 years ago, is now .15K above current temperature.

    [Response: What is GisTemp E? GISTEMP is the analysis of station and satellite data, while GISS ModelE is a GCM that was part of the simulations done for AR4. They have little or nothing to do with each other except by virtue of being in the same building. Those simulations are included in the figures I showed before, and your 'belief' is not accurate. For the 8 runs I have easily available (GISS-ER and GISS-EH), 2008 temperatures range from 0.12 to 0.59 deg C above the 1980-1999 mean. GISTEMP was 0.19 (0.17 for HadCRUT3v) on the same baseline. Thus while 2008 was cooler than the expected trend in the absence of any natural variability, it is still well within the envelope of what the models expect. - gavin]

    Comment by Adam Gallon — 27 Mar 2009 @ 11:30 AM

  33. 30. That’s why the “more or less”. Appeals to authority are problematic, as are appeals to consensus. In the age of blogs and confirmation bias, there is no question it is difficult to navigate the opposing points of view and to determine who, if anyone, has it right. It’s a complex topic. Dyson’s is an interesting perspective, and obviously not the only one.

    Comment by wmanny — 27 Mar 2009 @ 11:42 AM

  34. Dyson should serve as a cautionary tale to any scientist commenting well outside the area of his expertise. While I do not know the motivation behind his animus against climate science, some other technophiles I know are horrified by the prospect that for a while the march of their pet scientific subject might have to take a back seat to survival.

    Comment by Ray Ladbury — 27 Mar 2009 @ 12:32 PM

  35. That looks like a fatal flaw for Michaels. I wonder what would happen if the same approach was applied to other climate metrics, like sea surface temperature, water vapor feedback strength, and precipitation-evaporation changes. That approach is avoided by Michaels & co, because despite various levels of uncertainty in each dataset, they all point in the same direction.

    Probably the best thing to show Congress (after a short explanation of those graphs) is a video of the model results, as seen on Google Earth:

    http://www.metoffice.gov.uk/climatechange/guide/keyfacts/google.html

    The various layers of interest for Google Earth are here:

    Global climate model temperature projections

    USA regional CO2 emissions

    Global per capita CO2 emissions

    NSIDC Glaciers and Climate Change

    British Antarctic Survey ice shelf retreat

    Here, you can see a geologic history of plate tectonics – worthwhile for thinking about past climate regimes.

    That shows there are a whole lot of effects that go with a rise in temperature, and any honest analysis would examine all of those issues, not just one.

    For example, the energy increase represented by a rise in atmospheric and surface temperatures is a fraction of the energy increase represented by ocean warming.

    IPCC AR4 points:

    “Over the period 1961 to 2003, global ocean temperature has risen by 0.10°C from the surface to a depth of 700 m”

    “The patterns of observed changes in global ocean heat content and salinity, sea level, thermal expansion, water mass evolution and biogeochemical parameters described in this chapter are broadly consistent with the observed ocean surface changes and the known characteristics of the large-scale ocean circulation.”

    By the way, Reuters published a decent article on the role of global warming, the oceans and other factors in Australia’s drought:

    http://www.reuters.com/article/latestCrisis/idUSSP141565

    Global warming 37 pct to blame for droughts-scientist
    Wed Mar 25, 2009, By David Fogarty, Climate Change Correspondent, Asia

    SINGAPORE, March 25 (Reuters) – Global warming is more than a third to blame for a major drop in rainfall that includes a decade-long drought in Australia and a lengthy dry spell in the United States, a scientist [Peter Baines] said on Wednesday.

    For more background, see Humans and Nature Duel Over the Next Decade’s Climate, Richard A. Kerr, Science, Aug 10, 2007 – a generally sympathetic treatment of natural variations (including a review of Peter Baines and the AMO).

    Where the natural variation claims fail is in their presumption of periodic oscillatory behavior. We do see such periodic behavior in seasons and tides, which is due to planetary orbits. There, the time series analysis and the physical mechanism are both reliable and well-understood. That is definitely not the case for the other natural variations such as the PDO and the AMO, which rely on fairly crude statistical correlations (fish catches?) and suffer from a huge lack of observational ocean data, especially subsurface data, as well as from short time periods. The longer the time period, the more robust the time series analysis – pulling out a 20-30 year cycle from 100 years’ worth of data is not very robust. Notice also that this really is a curve-fitting procedure, unlike climate models.

    Time series analysis takes a dataset and fits a series of harmonic curves to it. It is like “rocket science” in that the details are highly complicated, and fiddling with those complex details can have a large result on the outcome. Thus, the situation is ripe for bogus but hard-to-evaluate claims, and everything related to it needs careful examination.

    For example, due to the lack of ocean data, secondary data is often used to infer what the ocean is doing – thus, the AMO analysis relies not on ocean temperature measurements, but rather on air pressure measurements as a proxy for ocean behavior – iffy at best.

    The point here is that if climate denialists focused their nitty-gritty questions on the issue of natural variations to the same extent they did with climate models, then you would see that forecasts based on climate models are far more reliable than forecasts made using the purely statistical, curve-fitting approach of time series analysis. The natural variations contain a large random component, depending on the specific variation – for example, a huge volcanic eruption six months from now would require new short-term climate predictions (but the long-term forecast would remain unchanged).

    Thus, when Don Easterbrook gets up and says that the world is locked into a new cooling cycle “because of the PDO”, you should be able to tell him why that’s nonsense.

    Comment by Ike Solem — 27 Mar 2009 @ 1:07 PM
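
    For readers curious what “fitting a series of harmonic curves” looks like in practice, here is a minimal sketch (not anyone’s published method; the 60-year period and the synthetic trend-plus-noise data are assumptions chosen purely for illustration) of fitting a linear trend plus one sinusoid by least squares:

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1900, 2009) + 0.5                                 # hypothetical annual time axis
        anom = 0.007 * (years - years[0]) + rng.normal(0, 0.1, years.size)  # trend + noise, deg C

        period = 60.0                                 # assumed cycle length in years
        # Design matrix: intercept, linear trend, and one harmonic (sine + cosine)
        X = np.column_stack([
            np.ones_like(years),
            years - years.mean(),
            np.sin(2 * np.pi * years / period),
            np.cos(2 * np.pi * years / period),
        ])
        coefs, *_ = np.linalg.lstsq(X, anom, rcond=None)
        print("fitted trend (deg C/decade):", 10 * coefs[1])
        print("fitted 'cycle' amplitude (deg C):", np.hypot(coefs[2], coefs[3]))

    Even on data that are nothing but trend plus noise, the fit returns a nonzero “cycle” amplitude, which is the point above: with a short record, a curve-fitting exercise will happily find an oscillation whether or not any physical mechanism exists.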

  36. Gavin,

    Thanks for providing more details of the results of this type of analysis.

    First, a minor clarification, you write:

    “The idea is that you calculate the trends in the observations to 2008 starting in 2003, 2002, 2001…. etc, and compare that to the model projections for the same period. Nothing wrong with this in principle.”

    But that is not exactly what we do. We don’t compare observations with the same time period in the models (i.e. the same start and stop dates), but to all model projections of the same time length (i.e. 60-month to 180-month) trends from the projected data from 2001-2020 (from the A1B run) (the trends of a particular length are calculated successively, in one month steps from 2001 to 2020). This provides us with a sample of all projected trend magnitudes from a particular model run under an emissions scenario (A1B) that is close to reality (at least for the next 20 years). I bring this up just to clarify that we are comparing observations with the distribution of all model trends of a particular length, not just those between a specific start and stop date (i.e. we capture the full (or near so) impact of internal model weather noise on the projected trend magnitudes). I’ll gladly provide you with more details if you would like them to try to exactly replicate what we have done so far.

    While this methodology doesn’t eliminate your point that the trends from different periods in the observed record (or from different observed datasets) fall at various locations within our model-derived 95% confidence range (clearly they do), it does provide justification for using the most recent data to show that sometimes (including currently), the observed trends (which obviously contain natural variability, or weather noise) push the envelope of model trends (which also contain weather noise). If the observed trends (at any time—now, or last year, or 10 years ago, or 5 years from now) fall outside the range of model-expected trends of a particular length, then there is the possibility that something is amiss.

    The things that possibly are amiss are numerous. They include possible issues with the models (e.g., weather noise, climate sensitivity), the observed temperatures (e.g., GISS vs. HadCRUT), the forcings (e.g., observed forcings don’t match A1B forcings), or simply that rare events do happen.

    We have not, as of yet, sorted all these things out. Consequently, I personally think that it is still too soon to conclude from our analysis alone (as it now stands) that the models are “abject” failures (although, admittedly, I am not exactly sure what that means). In my comments in a previous thread, I said that our analysis is a more appropriate reference for the statement in the Cato ad than is the Douglass et al. paper—primarily because ours directly assesses the most recent period and uses a global dataset—but I fully understand that the issue is far more complex than a single citation can do justice to.

    An entire chapter of the IPCC AR4 is devoted to model evaluation, which is the subheading under which I think our analysis falls. So, I think that the dismissal of our work with “Next” is a bit unjust; instead, we have developed a legitimate methodology for illuminating several important issues—whatever the cause of the current observation/model mismatch, it probably deserves greater scrutiny. I would think you would be encouraging us to push this issue forward in the peer-reviewed literature—although perhaps with more guarded conclusions than are bandied about in the popular press.

    In that light, I appreciate any constructive suggestions/comments that you may have. I think once you fully understand our methodology that you will find it to be a reasonable approach to comparing observations with model expectations on large spatial (global), but intermediate (5-15 years) temporal scales. Perhaps you will differ in how we interpret our results, perhaps not—as they have yet to be fully arrived at.

    -Chip

    [Response: But Chip, who is bandying these things about? Who described this as a proof that the models were "abject" failures? I'm pretty sure that it wasn't us..... - gavin]

    Comment by Chip Knappenberger — 27 Mar 2009 @ 1:40 PM
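
    A minimal sketch of the moving-window trend comparison described in the comment above (the variable names and synthetic series are mine, not from the actual analysis; real use would substitute monthly model output for 2001-2020 and an observed trend from GISTEMP or HadCRUT):

        import numpy as np

        def ols_slope(y):
            """Ordinary least squares slope of y against its time index, per step."""
            x = np.arange(y.size)
            return np.polyfit(x, y, 1)[0]

        def window_trends(series, window):
            """All trends of length `window`, advancing one step at a time."""
            return np.array([ols_slope(series[i:i + window])
                             for i in range(series.size - window + 1)])

        rng = np.random.default_rng(1)
        months = 240                                   # stand-in for one model run, 2001-2020
        noise = np.convolve(rng.normal(0, 0.12, months), np.ones(3) / 3, mode="same")
        model_run = (0.02 / 12) * np.arange(months) + noise   # ~0.2 deg C/decade plus reddened noise

        window = 120                                   # e.g. all 120-month (10-year) trends
        trends = window_trends(model_run, window) * 120        # convert to deg C per decade
        lo, hi = np.percentile(trends, [2.5, 97.5])
        observed_trend = 0.0                           # hypothetical observed 10-year trend
        print(f"model 95% trend range: {lo:.2f} to {hi:.2f} deg C/decade")
        print("observed trend inside range?", lo <= observed_trend <= hi)

    Note that successive windows overlap in all but one month, so the sampled trends are far from independent; this is the same caveat the post raises about overlapping observed trends.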

  37. So you want to say that the flat trend since 1996/97, according to both UAH and RSS, is consistent with the models? You, Mr Schmidt, once said on this blog, if I remember correctly, that you would consider a decade without warming inconsistent with the models. What about 1996-2009 without warming?

    [Response: Not flat. Do please fact check before posting. - gavin]

    Comment by Nickolas — 27 Mar 2009 @ 1:43 PM

  38. Apart from his genius (Emeritus or not, he is still a better man than most of us), Freeman Dyson is the sort of contrarian I cherish. He delights in disturbing any consensus that shows signs of becoming a matter of belief rather than evidence; but he always has a reasonable position to advocate. His position on climate change is that if it gets out of hand, we can probably find a technical fix. (He has suggested genetic engineering to produce a trillion carbon-eating trees.) I think he is probably right (though I do not think that he figured in the time needed to genetically engineer his trees nor to check the stability of that engineering; nor indeed the loss of what those trees would have to displace. My own hunch is that the best bet for such a fix may be in genetically engineering carbon-absorbing monocellular marine life).

    The problem Dyson does not address is central. It is not one of science or modelling. It is that “we probably can” does not equal “we will”. Maybe we can’t. In that case we had better get on with doing the job the hard, greenhouse-gas-emissions-reducing way now, because the longer we put it off, the harder it will be, and the higher the risk of disastrous positive feedbacks setting in. Maybe we can, but will we? We are very unlikely to develop and prove a worthwhile and effective technical fix if we do not recognise the problem and devote to it sufficient resources to create the effective demand for the fix. The conclusion Dyson points to, of just letting global warming chug along for now, only makes sense if we are, collectively, willing to take risks that really scare me with the only place we have to live on.

    As for Dyson on modelling: most complex-simulation modellers in all fields slip from time to time into thinking that their models picture reality. I have caught myself doing that a couple of times. Our present climate forecasting models do not include the carbon cycle; but in some years’ time are we not likely to want it in? And I guess Dyson noted that the models he looked at simply could not assess a possible carbon-absorbing technical fix.

    Dyson on Hansen? As most people on RealClimate will know, Hansen has done a magnificent job not only on the science but also on fighting the argument through the various establishments. I guess that Dyson does not understand the way the latter necessarily conditions the presentation of the science. If he met Hansen, I suspect that Hansen would educate him in a way that you, Gavin, good as you are, could not manage (and I could not even aspire to). And they would get on very well together.

    Comment by D iversity — 27 Mar 2009 @ 1:54 PM

  39. #28. So Gavin,

    So in regard to Dyson’s climate change credibility,

    can’t you approach it by saying that Dyson is not a currently publishing scientist in peer-reviewed journals whose current work stands up under peer review?

    Comment by Richard Ordway — 27 Mar 2009 @ 2:52 PM

  40. His graph shows ~0.12C/decade over a period of 15 years while yours shows perhaps 0.2. That’s a durn good fit for people on opposite sides of the fence. Yeah, his timing smells of cherry pie, but then we know some of the reasons why 2008 was on the cool side: decreased solar output, Chinese pollution, and the PDO. 95% is not a magic number, especially when one has answers for much of the variance.

    Comment by RichardC — 27 Mar 2009 @ 3:05 PM

  41. wmanny said:”In the age of blogs and confirmation bias, there is no question it is difficult to navigate the opposing points of view and to determine who, if anyone, has it right.”

    No, it isn’t. Defer to the experts if you are not one. To determine who is expert in a particular area of science, look at publications in the field’s peer reviewed journals and grant funding.

    Comment by t_p_hamilton — 27 Mar 2009 @ 3:25 PM

  42. gavin: “The idea is that you calculate the trends in the observations to 2008 starting in 2003, 2002, 2001…. “

    Maybe you stated this somewhere and I missed it, but how are you defining ‘the trends’? Is this a least squares fit or just a difference of end points? Same question for Chip.

    [Response: OLS. - gavin]

    Comment by Steve Reynolds — 27 Mar 2009 @ 4:57 PM

  43. Dyson wants to genetically engineer carbon-eating trees? What does he think trees do?

    Comment by Jeffrey Davis — 27 Mar 2009 @ 5:52 PM

  44. Yes, the trend since 1996 is not flat, but since 1997 it is. Since 2001 we have sharp global cooling. Most probably, the 2001-2010 trend will be negative. If 12 years of flat trend and 8 years of negative trend are not inconsistent with models predicting 3 degrees C of warming on a century scale, what is? Is there any imaginable state of affairs on a decadal or slightly longer time scale which is inconsistent with the models? Do you still stick to your earlier assertion that a decade without warming would be a problem for the models? If yes, you should reconsider your present attitude toward the models. If not, what is the new theory – when do models become inconsistent with observations? With two decades of cooling or flat trend? Or three? Or four?

    Comment by Nickolas — 27 Mar 2009 @ 5:57 PM

  45. #5 and #6: Michaels is now only a retired professor at the University of Virginia, so his current employed position should be identified as the Cato Institute:

    http://www.dailyprogress.com/cdp/news/local/article/former_climatologist_will_pursue_research_work/1857/

    Looks like staying employed at UVA would have required disclosing his private clients, and that was unacceptable.

    Comment by Brian Schmidt — 27 Mar 2009 @ 6:10 PM

  46. Re #43: Dyson wants trees that produce diamonds as fruit; like just about everything else he says, it is within the context of his Dyson Sphere. Most of his stuff is really just informed science fiction but he has contributed to Maths.

    Re #23: Dyson is a pseudo-skeptic; he has failed to question his own ideas on AGW after having his errors pointed out. Dyson is also a WW2 veteran, and calling a WW2 veteran a “denier” is akin to calling them a Nazi.

    Comment by Alan of Oz — 27 Mar 2009 @ 7:04 PM

  47. Re: #42

    Same answer as Gavin.

    -Chip

    Comment by Chip Knappenberger — 27 Mar 2009 @ 7:59 PM

  48. t_p_hamilton: “Defer to the experts if you are not one. To determine who is expert in a particular area of science, look at publications in the field’s peer reviewed journals and grant funding.”

    Do you also apply that technique to economics? Most people here do not seem willing to be consistent about believing economic experts the way they believe science experts.

    Comment by Steve Reynolds — 27 Mar 2009 @ 8:02 PM

  49. “Quirk’s Exception” (failed attempt to shut thread down prematurely).
    Please proceed.

    Comment by Hank Roberts — 27 Mar 2009 @ 8:56 PM

  50. Do not your figures show a 2 sigma deviation and Michaels’ figure shows the 95% CI? A 2 sigma ellipse allows for 86.5% confidence and is therefore a wider level of acceptance than the 95% CI.

    [Response: The difference is .4499736% - you wouldn't be able to see it due to the thickness of the line. - gavin]

    Comment by Richard Steckis — 27 Mar 2009 @ 9:48 PM
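
    For anyone who wants to check the arithmetic in the response: for a one-dimensional normal distribution, ±2 sigma covers about 95.45%, so the gap between that and a 95% interval is the 0.45% quoted; the 86.5% figure corresponds instead to a 2-sigma ellipse in two dimensions. A quick check (a sketch using scipy, nothing more):

        from scipy.stats import norm, chi2

        cover_2sigma_1d = norm.cdf(2) - norm.cdf(-2)          # ~0.9545
        print(f"1-D +/-2 sigma coverage:      {cover_2sigma_1d:.6%}")
        print(f"difference from a 95% CI:     {cover_2sigma_1d - 0.95:.6%}")

        # The ~86.5% figure is the probability mass inside a 2-sigma ellipse in 2-D:
        print(f"2-D 2-sigma ellipse coverage: {chi2.cdf(2 ** 2, df=2):.4%}")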

  51. Do you also apply that technique to economics? Most people here do not seem willing to be consistent about believing economic experts the way they believe science experts.

    That might have to do with my 401K having largely kicked the bucket recently despite economists insisting that the problems of the first half of the 20th century had been solved, while gravity still causes me all sorts of bruises when I stumble and fall …

    Captcha suggests I get drunk when I think about the triumphant success of economics: more SAUTERNE.

    Comment by dhogaza — 27 Mar 2009 @ 11:18 PM

  52. Gavin: What does “OLS” mean?
    I forwarded your email to my congressman’s office.

    [Response: Sorry, Ordinary Least Squares - the basic linear regression technique. - gavin]

    Comment by Edward Greisch — 28 Mar 2009 @ 12:20 AM
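
    Since the question of what “the trend” means comes up repeatedly in this thread, here is a minimal illustration of an ordinary least squares trend on monthly anomalies, expressed in deg C/decade, alongside the naive endpoint difference asked about in #42 (synthetic data, not any particular dataset):

        import numpy as np

        rng = np.random.default_rng(2)
        months = 120                                    # a 10-year window
        t = np.arange(months)
        anom = 0.0015 * t + rng.normal(0, 0.1, months)  # ~0.18 deg C/decade plus noise

        slope_per_month = np.polyfit(t, anom, 1)[0]     # ordinary least squares fit
        print("OLS trend:        %.2f deg C/decade" % (slope_per_month * 120))

        endpoint_rate = (anom[-1] - anom[0]) / (months - 1) * 120
        print("endpoint 'trend': %.2f deg C/decade" % endpoint_rate)

    The OLS estimate uses every point in the window, so it is far less sensitive to whether the first or last month happened to be unusually warm or cold; that is exactly why differencing endpoints is a poor way to characterise a short record.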

  53. #43 Jeff:
    There is ample opportunity to improve trees and other plants if we can agree that the key productivity criterion is efficiency of carbon capture. Cultivated potatoes are no longer what they used to be in the wild.

    In fact, an improvement on some of the sequestration ideas. Probably more greenwash than a solution, though.

    Comment by Pekka Kostamo — 28 Mar 2009 @ 12:44 AM

  54. Alan of Oz says:

    “Most of his stuff is really just informed science fiction but he has contributed to Maths.”

    Most of his stuff informed science fiction? Contributed to Maths?

    You obviously do not understand much about physics. Dyson’s contribution to physics has been huge.

    Climate science has its base in physics. I doubt very much that there is a climate scientist alive whose understanding of physics is as profound as Dyson’s.

    Comment by Jari — 28 Mar 2009 @ 1:38 AM

  55. Re. No. 37
    Yes, UAH 1996-2009 is not flat, but 1997-2009 is, for UAH, RSS and HadCRUT. Starting in 1998, 2000, 2001 and so on would show significant cooling trends. The only starting year since 1997 that would give a warming trend for these datasets is 1999. (GISTEMP is different as it was the only dataset that showed 2007 as a particularly warm year. Also, compared with all the others, it has 2005 rather than 1998 as the warmest year.)

    Comment by sven — 28 Mar 2009 @ 2:18 AM

  56. Gavin,

    Thank you for the informative graphs. They have prompted me to make my first blog post. My question will expose my ignorance on the AGW topic but we all have to start somewhere… Have there been model runs that assume no human emissions of CO2 from burning fossil fuels? (Showing the envelope the trend should fall within naturally) It would be very interesting to me to see the same graphs you provided with the natural expectation for the gray area. I think this would go a long way in visually showing me how we are affecting the global temperature by letting me see side by side how things should be naturally versus how things are, as illustrated by your graphs. I apologize in advance first of all if this is common knowledge and secondly, if my terminology is incorrect.

    Tad

    [Response: Yes. The picture in question is here (from IPCC AR4 WG1 Fig 9.5). The lower panel is what you get without any human-caused emissions of greenhouse gases and aerosols, the upper panel is what you get if you include them. The black line is the observed temperature. - gavin]

    Comment by Tad Boyd — 28 Mar 2009 @ 2:42 AM

  57. #37

    To be sure, the thirteen years shown in the graph Gavin provided are upward trending.

    Below is an interesting 1-2-3 snapshot of the same data in more recent, decreasing windows. Again, I do not mean to suggest AGW theory is in any sense disturbed by these graphs or that we are in a cooling state. Clearly, though, there is a relative flatness in the last decade. Key word: relative. I think it’s an interesting mini-trend, others do not.

    http://www.woodfortrees.org/plot/uah/from:1999/to:2009/plot/hadcrut3vgl/from:1999/to:2009/plot/uah/from:1999/to:2009/trend/plot/hadcrut3vgl/from:1999/to:2009/trend

    http://www.woodfortrees.org/plot/uah/from:2000/to:2009/plot/hadcrut3vgl/from:2000/to:2009/plot/uah/from:2000/to:2009/trend/plot/hadcrut3vgl/from:2000/to:2009/trend

    http://www.woodfortrees.org/plot/uah/from:2001/to:2009/plot/hadcrut3vgl/from:2001/to:2009/plot/uah/from:2001/to:2009/trend/plot/hadcrut3vgl/from:2001/to:2009/trend

    [Gavin or current moderator, I am not savvy enough to turn these links into "here, here, and here" -- please let me know how, or change it on your own. I apologize for the clutter otherwise.]

    Comment by wmanny — 28 Mar 2009 @ 8:37 AM

  58. One thing that might happen is a larger year-to-year variation in temperature just because we are dealing with larger numbers. I am wondering if a larger spread is contributing to the severity of the flooding in North Dakota. The flooding seems early by a month compared to historic huge floods. A deeper winter followed by a more rapidly warming spring than historically known might account for the extra surge of water. I’ve seen exploration of the effect of warming on alpine snowpack, but it ought to affect lower-lying snow as well.

    Finding extreme endpoints may be easier as warming continues.

    Comment by Chris Dudley — 28 Mar 2009 @ 8:53 AM

  59. From response to 28:
    “I’d be more than happy to discuss these things with him if he had any interest in furthering his education. – gavin”

    I presume he would, as he seems to lack the arrogance implied in the comment above.

    Comment by Billy Ruffn — 28 Mar 2009 @ 9:04 AM

  60. Sven says, “The only starting year since 1997 that would give a warming trend for these datasets is 1999.”

    Which happens to be the first year where the starting point is not strongly influenced by the 1998 El Nino. Next!

    Comment by Ray Ladbury — 28 Mar 2009 @ 9:12 AM

  61. Jari says, “I doubt very much that there is a climate scientist alive whose understanding of physics is as profound as Dyson’s.”

    And by the same token, the number of physicists who have a better understanding of climate than Dyson are legion. Expertise matters. Do you ask your electrician for advice on a heart problem?

    Comment by Ray Ladbury — 28 Mar 2009 @ 9:15 AM

  62. > I think it’s an interesting mini-trend

    http://dx.doi.org/10.1016/S0165-1765(99)00251-7
    http://tamino.wordpress.com/2007/10/21/how-to-fool-yourself/
    http://plasmascience.net/tpu/downloads/Kleppner.1992.pdf
    http://www.youtube.com/watch?v=TZm7BM1WIEc

    Comment by Hank Roberts — 28 Mar 2009 @ 9:20 AM

  63. Steve Reynolds, Where economists are in general agreement (a rarity), I tend to listen to them. When their opinions contradict physical reality (e.g. assuming current models of economic growth can continue indefinitely), physical reality wins.

    Comment by Ray Ladbury — 28 Mar 2009 @ 9:20 AM

  64. And how hard would it be to add error bars or confidence intervals to the charting at woodfortrees? Without them, folks like wossname up there can just use it to fool themselves.

    Comment by Hank Roberts — 28 Mar 2009 @ 9:53 AM

  65. This is a bit off-topic, but a general comment on mitigating the effects of low-occurrence, large-magnitude outliers on calculated trends: use robust estimation methods rather than methods based on first- and second-order statistics such as OLS. For example, report median and median absolute deviation rather than mean and standard deviation. I mention this because mean and st.dev. are very sensitive to outliers and (I think) should be “taken with a grain of salt” unless skew and kurtosis are also reported. In contrast, median and median absolute deviation are pretty robust (in the colloquial sense as well as the mathematical one).

    There’s a nice article by Willmott et al. on robust statistics in a recent issue of Atmospheric Environment: “Ambiguities inherent in sum-of-squares-based error statistics”, vol. 43, p. 749 (January 2009). And some book refs for those who are interested: Robust Statistics by Peter Huber, and Robust Regression and Outlier Detection by P. Rousseeuw and A. Leroy. The Section of Statistics at Katholieke Universiteit Leuven also has a very useful website, http://wis.kuleuven.be/stat/robust/

    Chris

    Comment by Chris G — 28 Mar 2009 @ 10:35 AM
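
    A minimal sketch of the robust summary statistics mentioned above (median and median absolute deviation), showing how much less a single outlier moves them than it moves the mean and standard deviation; the numbers are made up purely for illustration:

        import numpy as np

        def mad(x):
            """Median absolute deviation about the median."""
            x = np.asarray(x, dtype=float)
            return np.median(np.abs(x - np.median(x)))

        clean = np.array([0.10, 0.12, 0.08, 0.11, 0.09, 0.13, 0.10])
        with_outlier = np.append(clean, 1.50)           # one wild value

        for label, data in [("clean", clean), ("with outlier", with_outlier)]:
            print(f"{label:12s} mean={data.mean():.3f}  std={data.std(ddof=1):.3f}  "
                  f"median={np.median(data):.3f}  MAD={mad(data):.3f}")

    (If you want the MAD on the same scale as a standard deviation for normally distributed data, multiply it by roughly 1.4826.)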

  66. I found this useful

    ..From which you can see (I hope) that the series is definitely going up; that 15 year trends are pretty well all sig and all about the same; that about 1/2 the 10 year trends are sig; and that very few of the 5 year trends are sig…

    Comment by PeteB — 28 Mar 2009 @ 10:50 AM
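
    A minimal sketch of why 5-year trends are rarely significant while 15-year trends usually are, using the standard error of an OLS slope (white-noise residuals are assumed here for simplicity; real monthly data are autocorrelated, which widens the uncertainties further):

        import numpy as np

        def slope_and_2se(y):
            """OLS slope and an approximate 2-sigma slope uncertainty, per step."""
            x = np.arange(y.size)
            slope, intercept = np.polyfit(x, y, 1)
            resid = y - (slope * x + intercept)
            se = np.sqrt(resid.var(ddof=2) / ((x - x.mean()) ** 2).sum())
            return slope, 2 * se

        rng = np.random.default_rng(3)
        series = 0.0015 * np.arange(360) + rng.normal(0, 0.1, 360)   # 30 years at ~0.18 C/decade

        for years in (5, 10, 15):
            window = years * 12
            slope, unc = slope_and_2se(series[-window:])
            print(f"last {years:2d} yr: {slope * 120:+.2f} +/- {unc * 120:.2f} deg C/decade")

    With the same underlying warming rate throughout, the 2-sigma uncertainty on the 5-year trend is about as large as the trend itself, while the 15-year estimate is comfortably distinguishable from zero, consistent with the summary quoted above.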

  67. The graphs selected by wmanny (#56) show a cooling trend for the period 2001-2009. This is obvious.

    But this per se doesn’t say anything about climate sensitivity being wrong (or the models, for that matter). It just shows that CO2 forcing is not fast enough to override all other forcings. If CO2 emissions were, say, only 1/5 of what they are, we would see much more pronounced cooling periods – even assuming today’s calculated sensitivity to be accurate.

    How do the models respond if you feed them with the ENSO of the last 7 or 8 years?

    (I’m a layman. I don’t know if it really works this way.)

    Comment by Alexandre — 28 Mar 2009 @ 11:08 AM

  68. Alexandre, I’m just another reader here, not a scientist, not a statistician.

    But I’ve read the comments and links. You haven’t, or haven’t understood why wmanny’s wrong.

    So are you.

    Here’s why (shorter version of what’s been said and linked above)

    wmanny picked too short a period to know anything about a trend.

    That’s all.

    Comment by Hank Roberts — 28 Mar 2009 @ 12:40 PM

  69. Can I enquire how the model ensembles would perform against the satellite temperature data (RSS/UAH)?

    Comment by Don Keiller — 28 Mar 2009 @ 1:31 PM

  70. #54 Jari,

    “Climate science has its base in physics. I doubt very much that there is a climate scientist alive whose understanding of physics is as profound as Dyson’s.”

    With respect to the NY Times article, he didn’t show a well-reasoned understanding, rather a matter-of-fact collage of incoherent thoughts. His mastery of physics may be doubtless, but his grasp on climate science is doubtful.

    I would rather read a paper on climate by Dyson; if you have one, please link…

    Comment by wayne davidson — 28 Mar 2009 @ 1:42 PM

  71. Hank,

    I am asking a question, and not a rhetorical one. Repeatedly, though you seem unwilling to notice them, I throw caveats around like rice at a wedding. Is it that you believe I am wrong to ask the question? If not, what am I wrong about? Why is “relative flattening” such a difficult premise? In any event, since you are inclined to turn my “mini-trend” into a trend, I offer yet another olive branch — how about we call it a “micro-trend”? I’ll go to nano- or pico- if necessary, but the question stands.

    Walter

    Comment by wmanny — 28 Mar 2009 @ 1:54 PM

  72. Getting back to the Michaels graph
    #36
    Some constructive criticism for Chip:

    a) It appears that the trend length is wrong in the Michaels graph. Checking against the years in Gavin’s graph seems to show an “off by one” error. For example, the trend since 1998 in HadCRUT (i.e. 0) is labeled as an 11-year trend, but in fact is the 10-year trend.

    b) As Gavin pointed out, the CI calculation is unclear. Obviously it should change because of (a) above, but you should clarify how it was calculated, and justify the departure from Gavin’s calculation. Or better yet, just use the correct calculation.

    c) The uncertainty envelope for the observations should be included, as Gavin has done.

    d) The other major data set, GISTEMP, should be included as Gavin has done.

    Once you’ve done all that, you should also include the same graph for 2007, to make it apparent how much short-term trends can change in one year.

    I have one more suggestion, but that will require a separate comment.

    Comment by Deep Climate — 28 Mar 2009 @ 2:04 PM

  73. #47 et al – Re: Knowing which experts to trust

    As with Pat Michaels, one good approach is indeed to look for a relationship between their source of funds, and the likely impact of their findings on the prospects and objectives of their benefactors. The case of the tobacco company scientists who found no link between smoking and lung cancer springs to mind.

    Climatology as a whole should not escape this scrutiny either. I think I’m right in saying its ultimate source of funds – peer reviewers included – is virtually always the state; and, in the main, climatology’s findings currently provide the state with an excellent new rationale to expand itself, and vastly extend its control over society.

    Being a scientist does not make a person immune to motivations of ideology and vested interest, however objective they otherwise may be. The lingering question is not so much what experts do tell the layman, but what they don’t.

    Comment by BFJ Cricklewood — 28 Mar 2009 @ 2:08 PM

  74. For Don Keiller:

    http://scholar.google.com/scholar?q=how+the+model+ensembles+would+perform+against+the+satellite+temperature+data+RSS+UAH%3F

    Remember to note the “cited by” numbers and click there to see what later publications have relied on the article; read the footnotes, read the citing papers, check for errata, and note how the idea has changed over time)

    For contrast, you’ll find you’ve posted a question much beloved of some of the usual sites that argue it ain’t happening. To make the comparison, use Google instead of Scholar:

    http://www.google.com/search?q=how+the+model+ensembles+would+perform+against+the+satellite+temperature+data+RSS+UAH%3F

    Comment by Hank Roberts — 28 Mar 2009 @ 2:17 PM

  75. re 73. And why would the 160+ sovereign countries and the thousands of climate scientists conspire amongst themselves AND STILL MANAGE TO HIDE THE CONSPIRACY?

    Even harder, do so with GWB in charge of the US government…

    Try your explanation on sites about Sept 11th. After that attack, governments had nearly carte blanche to do what they wanted with no oversight.

    Comment by Mark — 28 Mar 2009 @ 2:39 PM

  76. #68 Hank Roberts

    I think my problem was a poor choice of words. I’m a layman as well as a non-native English speaker.

    Yes, my knowledge goes as far as to know that 7 or 8 years is too short a period to determine any climatic trend. My tentative reasoning did not aim to imply that.

    I’ll try to rephrase it. The speed of the warming depends on the pace of GHG emissions. Were this pace slow enough, you could even have much longer periods of cooling without challenging what we know about climate sensitivity. You could, say, have 25 years of cooling like we had in the middle of last century. Other forcings would have room to cause that only because the GHG increase would be so slight (like it was last century). Yet this would not disprove the man-made greenhouse effect.

    Even at our present emission pace, if these other forcings had enough intensity on the cooling side, we could have a 25-year cooling period even now. If solar radiation had a steady, strong enough many-decade drop, for instance, it could stabilize or even reduce the global temperature during this period, and this would not refute what is known about climate sensitivity to GHGs.

    So this is why I asked about ENSO. If it were possible to feed the models with the last 7 or 8 years of ENSO, what would they show? Wouldn’t they get much closer to the observed temperatures? Wouldn’t better ex-post knowledge of the other forcings allow them to be isolated and better show the influence of GHG forcing in this recent period?

    (I don’t know if ENSO is technically a “forcing”. I ask for some tolerance to inaccuracy here…)

    Comment by Alexandre — 28 Mar 2009 @ 2:40 PM

  77. Gavin said: “…short term trends don’t tell you very much about longer term ones.”

    #24 Dick Veldkamp said to Gavin:
    “…from a PR viewpoint … I am afraid that the message people will take from your graph is: “Hey, even the AGW-scientists get a cooling trend” (because the leftmost points are below zero), while not realising that a trend based on a few years (and hence the whole graph) is rubbish.”

    This is a very real problem. Perhaps the addition of long-term trend information within the graph would help. Here’s my attempt to do just that:

    http://deepclimate.files.wordpress.com/2009/03/global-surface-trends2.gif

    The dashed line is the trend from 1979 to 2008 (0.17 deg C/decade in both GISTEMP and HadCRUT).

    Ex. 1 Year 2003. Here we see that HadCRUT short-term trend is -0.25 deg C/decade. Yet the long term HadCRUT trend from 1979 has not changed. 1979-2003 is exactly the same as 1979 to present (0.17 deg C/decade).

    Ex. 2 Year 1998. As is well known, the 10-year trend since the large El Nino is flat in HadCRUT and only 0.1 deg C/decade in GISTEMP. And yet the long term trend from 1979 is up from about 0.15 deg./decade in 1998 to 0.17 deg./decade at present.

    It may surprise some that a long term trend can be greater than both trends in two sub-periods, as is the case for almost all “pivot” points as far back as 1995 in these data sets. That’s a fact that Michaels and the Cato Institute would like to keep well hidden. They shouldn’t get away with it.

    Comment by Deep Climate — 28 Mar 2009 @ 2:47 PM
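
    Deep Climate’s point that a full-period trend can exceed the trends in both sub-periods sounds paradoxical, so here is a minimal synthetic demonstration (a trend analogue of Simpson’s paradox; the step-change data are purely illustrative and not meant to represent the real record):

        import numpy as np

        def trend_per_decade(years, anom):
            """Ordinary least squares slope, converted to deg C per decade."""
            return np.polyfit(years, anom, 1)[0] * 10

        years = np.arange(1979, 2009)
        # Flat at 0.0 deg C before the pivot year, flat at 0.3 deg C from it onwards.
        anom = np.where(years < 1998, 0.0, 0.3)

        early = years < 1998
        print("1979-1997 trend: %.2f deg C/decade" % trend_per_decade(years[early], anom[early]))
        print("1998-2008 trend: %.2f deg C/decade" % trend_per_decade(years[~early], anom[~early]))
        print("1979-2008 trend: %.2f deg C/decade" % trend_per_decade(years, anom))

    Both sub-period trends are zero, yet the full-period trend is clearly positive, because the later period simply sits at a higher level than the earlier one; picking a recent “pivot” year therefore says little about the long-term trend.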

  78. Why is “relative flattening” such a difficult premise? In any event, since you are inclined to turn my “mini-trend” into a trend, I offer yet another olive branch — how about we call it a “micro-trend”? I’ll go to nano- or pico- if necessary, but the question stands.

    You need to drop the word “trend” entirely. “trend”, in statistics (and therefore science), implies significance. You’re seeing noise, not a trend, and it’s not interesting, it’s *expected*.

    Comment by dhogaza — 28 Mar 2009 @ 2:53 PM

  79. You obviously do not understand much about physics. Dyson’s contribution to physics has been huge.

    Actually, his most important contribution was to condensed matter physics. And while it may be true that most if not all of climate science is attributable to condensed matter physics, most of that we take for granted now; when the subject under analysis here is the land, sea and atmospheric response to incident solar radiation, we are indeed mostly referring to MACROSCOPIC physics.

    Sometime in the near future, work by Dyson as incorporated into condensed matter physics may very well yield appropriate technological solutions to the climate change problem, but the fundamental result is not dependent on it. We knew about this problem long before we had big supercomputers. Condensed matter physics makes it easier.

    Captcha : may fossil

    Comment by Thomas Lee Elifritz — 28 Mar 2009 @ 3:26 PM

  80. re 73. Funding from a source that has a financial stake in the outcome raises questions, but does not necessarily invalidate the science. In the case of Pat Michaels, his misrepresentation of James Hansen’s data, plus this particular exercise in obfuscation, cause me to distrust anything he says.

    In physical sciences like climate science it is very hard to get away with dishonesty, since dishonesty in method or data is rather easy to expose (as this post shows). Of course there can always be honest differences about the interpretation of data.

    Conspiracy of governments? Are you kidding? Kyoto would have been a roaring success if anything like that were afoot.

    Comment by Ron Taylor — 28 Mar 2009 @ 3:36 PM

  81. BFJ Cricklewood wrote in 73:

    #47 [Brian Schmidt] et al – Re: Knowing which experts to trust

    As with Pat Michaels, one good approach is indeed to look for a relationship between their source of funds, and the likely impact of their findings on the prospects and objectives of their benefactors. The case of the tobacco company scientists who found no link between smoking and lung cancer springs to mind.

    Maybe I can help. Here are the organizations that Patrick J. Michaels has (or has had) direct ties to…

    http://www.exxonsecrets.org/index.php?mapid=1365

    … with those that have received money from Exxon indicated by dollar signs. I have gone ahead and expanded the links for key individuals on those which have received the most money — and links to the organizations that those key individuals are tied to are automatically filled in. The organizations that have received the most funding are ALEC (American Legislative Exchange Council), CEI (Competitive Enterprise Institute), CFACT (Committee for a Constructive Tomorrow), the George C. Marshall Institute, Heartland Institute and Heritage Foundation. (The CATO Institute is still one of the smaller players so I’ll let people dig into that one for themselves.)

    Clicking on the links that I have included just above will take you to the entries for those organizations in SourceWatch. There you will find out, for example, that ALEC, the Competitive Enterprise Institute, the Heartland Institute, and the Heritage Foundation have been involved in the defense of both cigarettes and fossil fuels. However, back in the map, if you hover your mouse over any organization and left-click on “More Info” you can bring up a bunch of information, including the main details, overview, further description, funding from Exxon for each year — well, you get the idea. Follow the links and it will even bring you to PDFs of the forms where Exxon declares its “charitable contributions” for tax purposes.

    However, the map is only good for Exxon. It doesn’t include the other oil and fossil fuel companies. But it is a start. Hopefully you can take it from here.

    Comment by Timothy Chase — 28 Mar 2009 @ 4:09 PM

  82. Mr. Cricklewood, you should check your assumption.

    You’re assuming all climate science is paid for by governments — you believe most of the money spent on understanding this is in academia or civil service jobs rather than industry? And that it’s part of a plot by governments to extend their power?

    I posted this link in the Climate Models FAQ topic a while back, suggesting this be part of the developing FAQ. Check it out, just to give you a start on learning how industry has been using climate modeling for a long time, successfully.
    http://www.nap.edu/openbook.php?record_id=5470&page=11

    Proprietary work for industry isn’t published routinely; you have to look a bit harder to find out about it.

    Don’t expect industry to publish their scientific work. And don’t expect industry’s PR position to be consistent with its proprietary research funding.

    http://scholar.google.com/scholar?num=50&hl=en&lr=&newwindow=1&safe=off&q=%22public+health%22+%22sound+science%22+%2Bpetroleum&btnG=Search

    You have to be more skeptical about this kind of thing or you’ll fall for people who claim public health is s_c__l_sm.

    Comment by Hank Roberts — 28 Mar 2009 @ 4:18 PM

  83. PS to my comment above…

    You can also bring up information on Patrick J. Michaels — the guy at the center of the web on…

    http://www.exxonsecrets.org/index.php?mapid=1365

    … by hovering your mouse over him and left-clicking.

    This is some of his own funding (including coal):

    Dr. Michaels has acknowledged that 20% of his funding comes from fossil fuel sources (http://www.mtn.org/~nescncl/complaints/determinations/det_118.html). Known funding includes $49,000 from the German Coal Mining Association, $15,000 from the Edison Electric Institute and $40,000 from Cyprus Minerals Company, an early supporter of People for the West, a “wise use” group. He received $63,000 for research on global climate change from the Western Fuels Association, above and beyond the undisclosed amount he is paid for the World Climate Report/Review. According to Harper’s magazine, Michaels has received over $115,000 over the past four years from coal and oil interests.

    … and here is the Exxon funding for the major organizations followed by Michaels’ role in them:

    CEI – $2,005,000 – CEI Expert
    CFACT – $582,000 – Board of Academic and Scientific Advisors
    George C. Marshall Institute – $840,000 – Author
    Heartland Institute – $676,000 – Expert
    Heritage Foundation – $530,000 – Policy Expert

    Comment by Timothy Chase — 28 Mar 2009 @ 6:11 PM

  84. Hank Roberts:
    “And don’t expect industry’s PR position to be consistent with its proprietary research funding.”

    Exactly. The oil and gas industry dismisses climate models when they make projections of future climate – but at the same time it uses paleoclimate models to help figure out where oil and gas may be located.

    Comment by Gary Strand — 28 Mar 2009 @ 6:26 PM

    Looking for the best GCM that uses anthropogenic surface processes (anthropogenic forcings) without economy-specific data (i.e. GDP, average income, etc.).

    Comment by GBS - Aesthetic Engineer — 28 Mar 2009 @ 6:26 PM

  86. #67 Alexandre

    My apologies if I do not fully understand your context.

    I thought about what you were getting at, and to put it in context you may need to understand that natural variability exists in both the natural and the anthropogenically influenced climate.

    You can’t say that the anthropogenic effect is being completely overridden by natural variability right now, though. If that were true, then the average temperature would be following the natural trend-line, which is far below where we are.

    Take a look at this chart:

    http://www.ossfoundation.us/projects/environment/global-warming/natural-variability/overview/image/image_view_fullscreen

    The current lower short term trend/temperature is still above what would be expected in the natural cycle. Short term trends are meaningless when you are looking at long-term and overall forcing and inertia.

    Comment by John P. Reisman (OSS Foundation) — 28 Mar 2009 @ 6:43 PM

  87. #81
    IMHO, SourceWatch.org (yes, I’m a sometime volunteer contributor) is a great resource.

    The SourceWatch article on Pat Michaels begins thus:

    Patrick J. Michaels (±1942- ), also known as Pat Michaels, is a “global warming skeptic” who argues that global warming models are fatally flawed and, in any event, we should take no action because new technologies will soon replace those that emit greenhouse gases.

    Michaels, who has completed a Ph.D. in Ecological Climatology from the University of Wisconsin-Madison (1979) is Editor of the World Climate Report. He is also associated with two think tanks: a Visiting Scientist with the George C. Marshall Institute and a Senior Fellow in Environmental Studies with the Cato Institute.

    The Cato ad, Michaels’ recent House testimony and the RealClimate commentary posts have all been added to the article.

    Comment by Deep Climate — 28 Mar 2009 @ 6:44 PM

  88. Alexandre (76) — Learning about how climate attribution studies are done might aid you with your question, to which I don’t know the answer.

    Comment by David B. Benson — 28 Mar 2009 @ 6:46 PM

  89. Chip, you say:

    “I bring this up just to clarify that we are comparing observations with the distribution of all model trends of a particular length, not just those between a specific start and stop date (i.e. we capture the full (or near so) impact of internal model weather noise on the projected trend magnitudes)”

    Your i.e. does not follow. Are you really claiming that models perfectly simulate “internal model weather noise”? Do you believe that if you examine model output you are looking at the real world?

    I mean, if you put a variation (like a sine curve) on top of an increasing trend (exponential, linear or logarithmic), you get a combined trend + variation – read the time series link. However, are you assuming that these natural variations are periodic oscillations? Are you thinking of the trend and the variations in such hopelessly simplistic terms – as if we could model global climate with a handful of exponential and trigonometric functions, as Douglass & Knox tried to do?

    Even granting the unwarranted simplification, suppose we apply your procedure to a sine curve superimposed on a linear temperature increase, but instead of using short time scales we use long ones – the time scales at which natural fluctuations average out. Yes, we know that at short time scales natural variations can dominate the global warming trend – but why didn’t you look at 320 months, 640 months, etc.?
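    Here is what I mean, as a minimal sketch with invented numbers (a 60-month sine “oscillation” of 0.15 deg C riding on a fixed 0.0015 deg C/month warming; both numbers are assumptions, not data): a window much shorter than the oscillation period can show a strongly negative slope, while a window spanning many periods recovers the underlying trend.

      import numpy as np

      # Toy series: steady 0.0015 deg C/month warming plus a 60-month, 0.15 deg C sine.
      t = np.arange(600)                      # months
      period, amp, trend = 60.0, 0.15, 0.0015
      y = trend * t + amp * np.sin(2 * np.pi * t / period)

      # Short window chosen on the downswing of the oscillation (months 15-45)
      short = slice(15, 46)
      short_slope = np.polyfit(t[short], y[short], 1)[0]

      # Long window covering ten full cycles
      long_slope = np.polyfit(t, y, 1)[0]

      print(f"months 15-45 'trend': {short_slope:+.4f} deg C/month (the oscillation dominates)")
      print(f"600-month trend:      {long_slope:+.4f} deg C/month (close to the underlying 0.0015)")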

    Take a look at the trends in ocean heat content over the past few decades – the models predict a steady increase, but the data shows a steplike advance.

    http://www.physorg.com/news133019164.html

    New research suggests that ocean temperature and associated sea level increases between 1961 and 2003 were 50 percent larger than estimated in the 2007 Intergovernmental Panel on Climate Change report.

    The results are reported in the June 19 edition of the journal Nature. An international team of researchers, including Lawrence Livermore National Laboratory climate scientist Peter Gleckler, compared climate models with improved observations that show sea levels rose by 1.5 millimeters per year in the period from 1961-2003. That equates to an approximately 2½-inch increase in ocean levels in a 42-year span.

    The ocean warming and thermal expansion rates are more than 50 percent larger than previous estimates for the upper 300 meters of oceans…

    “This is important for the climate modeling community because it demonstrates that the climate models used for assessing sea-level rise and ocean warming tie in closely with the observed results,” Gleckler said.

    That’s another puzzling thing about your approach – the focus on a single metric. Why not look at ocean temperature trends vs. model predictions as well? How about Arctic sea ice volumes vs climate model predictions – what does that one look like?

    From that perspective, your work just looks like propaganda aimed at introducing doubt into the discussion, nothing more – right along the lines of “the world is slipping into a cooling period because of the PDO.”

    Now, for real issues related to climate modeling, keep in mind that the real internal variability is starting to be captured – not by statistical hand-waving, but rather by using ensembles of general circulation models that rely on numerical integration of the climate equations:

    Improved Surface Temperature Prediction for the Coming Decade from a Global Climate Model, Smith et al. Science 2007

    Previous climate model projections of climate change accounted for external forcing from natural and anthropogenic sources but did not attempt to predict internally generated natural variability. We present a new modeling system that predicts both internal variability and externally forced changes and hence forecasts surface temperature with substantially improved skill throughout a decade, both globally and in many regions. Our system predicts that internal variability will partially offset the anthropogenic global warming signal for the next few years. However, climate will continue to warm, with at least half of the years after 2009 predicted to exceed the warmest year currently on record.

    http://www.precaution.org/lib/warmer_after_2009.070810.pdf

    That’s another test of climate models, specifically of skill in predicting the natural variation contributions – but models have already been well-tested, for example, Pinatubo confirmed that the basic radiative balance and ocean heat absorption issues were fairly well understood, and the 2xCO2 = 3C heating still seems about right.

    The only thing left for denialists to do is nit-pick over a small group of residual issues, while also using PR methods to spread the message.

    Comment by Ike Solem — 28 Mar 2009 @ 6:47 PM

  90. #76 Alexandre

    I understand about poor choices of words, not an uncommon problem in such a complex subject. I’m sure I still do it all the time :)

    Here is the general picture of the Climate Forcings:

    http://www.ossfoundation.us/projects/environment/global-warming/forcing-levels

    I’m a layman also, but from what I can tell, you would actually need a long-term negative forcing that reduced the net positive forcing in order to get a longer-term negative trend. Since the net forcing is still around +1.6 W/m2, I expect continued long-term warming. If we increased aerosol output through our industrial processes, we would increase the negative forcing component. There are large risks in increasing pollution as well, though.

    As Dr. Hansen and others have pointed out, we made a deal with the devil and we are the devil in this context. We added positive forcing and negative forcing. If we get to a point where we can no longer pollute due to effects on human population… well, you can begin to imagine the dilemma.

    Also, the increase in GHGs in the last century cannot reasonably be considered slight when considering forcing levels.

    Major greenhouse gases:

    - CO2: Pre-industrial: 280 ppm; Current (Feb. 2007): 382 ppm; Increase = 36%
    - Methane: Pre-industrial: 715 ppb; Recent: 1774 ppb*; Increase = 148%
    - Nitrous oxide: Pre-industrial: 270 ppb; Recent: 319 ppb*; Increase = 18%

    This took us from a natural-cycle forcing range of around 0 W/m2 to -3.4 W/m2 (interglacial/glacial) to a whopping +3.8 W/m2 to -3.4 W/m2.

    It is difficult to see this as ‘slight’.
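    The CO2 piece of those numbers is easy to check with the standard simplified expression from Myhre et al. (1998), dF = 5.35 ln(C/C0) W/m2. A quick check (the doubling case is included just for reference):

      import math

      def co2_forcing(c_ppm, c0_ppm=280.0):
          """Simplified CO2 radiative forcing (Myhre et al. 1998): dF = 5.35 * ln(C/C0) W/m2."""
          return 5.35 * math.log(c_ppm / c0_ppm)

      print(f"280 -> 382 ppm: {co2_forcing(382.0):+.2f} W/m2")   # about +1.66 W/m2
      print(f"280 -> 560 ppm: {co2_forcing(560.0):+.2f} W/m2")   # doubling: about +3.7 W/m2

    That is the CO2 term only; the other greenhouse gases add to it and the negative aerosol forcing subtracts from it, which is roughly how the net figure ends up near +1.6 W/m2.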

    Even at solar minimum and if that minimum period persists, it is hard to fathom the positive forcing and feedbacks not overriding the negative. There would have to be a physical reason for it to do so.

    As far as ENSO goes, I’m pretty sure its effects are fairly well considered, and those considerations are getting better all the time. There are a lot of pieces in the climate puzzle.

    Comment by John P. Reisman (OSS Foundation) — 28 Mar 2009 @ 6:55 PM

  91. Dale Power says:

    It doesn’t even matter if it is a lie, as long as people hear it several times, it will be excepted by most as having been fact.

    How very true Dale Power, in all sorts of contexts.

    Comment by Ian Lee — 28 Mar 2009 @ 7:15 PM

  92. 82 Hank Roberts

    Mr. Cricklewood, you should check your assumption.
    You’re assuming all climate science is paid for by governments — you believe most of the money spent on understanding this is in academia or civil service jobs rather than industry?

    No, not all is by governments. Just the (overwhelming) majority of it.

    And that it’s part of a plot by governments to extend their power?

    Governments, like other organisations, spend money on what will help them grow and prosper. Call that a ‘plot’ if you must.

    Comment by BFJ Cricklewood — 28 Mar 2009 @ 7:42 PM

  93. Mr. Cricklewood says: “Climatology as a whole should not escape this scrutiny either. I think I’m right in saying its ultimate source of funds – peer reviewers included – is virtually always the state; and, in the main, climatology’s findings currently provide the state with an excellent new rationale to expand itself, and vastly extend its control over society.”

    Congratulations Mr. Cricklewood, with this paranoid rambling, you have graduated from the ranks of paranoid crank to full blown nutjob. I hope you will use these new powers wisely.

    Comment by Ray Ladbury — 28 Mar 2009 @ 8:01 PM

  94. Deep Climate wrote in 87:

    #81
    IMHO, SourceWatch.org (yes, I’m a sometime volunteer contributor) is a great resource.

    The SourceWatch article on Pat Michaels begins thus:…

    Definitely a great resource. Extensive and in depth. Still I like the interactive map at Exxon Secrets:

    http://www.exxonsecrets.org/index.php?mapid=1365

    A lot of facts if you know how to use it.

    However, another good place to try is of course DeSmogBlog at:

    http://www.desmogblog.com

    They may not have one definitive entry on each subject, but they have a bunch of articles on Pat Michaels, ALEC (American Legislative Exchange Council), CEI (Competitive Enterprise Institute), CFACT (Committee for a Constructive Tomorrow), George C. Marshall Institute, Heartland Institute, and Heritage Foundation. I am sure that many of the other organizations show up as well.

    Comment by Timothy Chase — 28 Mar 2009 @ 8:32 PM

  95. GBS – Aesthetic Engineer (85) — I’m not sure your question makes sense. AFAIK all the GCMs currently in use have skill or they wouldn’t be used. AFAIK all just include the physics, nothing about economics, etc.

    Comment by David B. Benson — 28 Mar 2009 @ 9:55 PM

  96. 28. Gavin;
    The notion that Freeman “has never had a conversation with a climate modeller” seems completely over the top.

    His contributions to hydrodynamics reflect long association not just with Princeton modelers like Mahlman, but with La Jolla oceanographers like Longuet-Higgins, and I suspect his views reflect the former’s oft-stated skepticism about the difficulty of making decadal projections when decadal processes and parameters are imperfectly understood.

    It is always a temptation to impute too much to too few words – a NYTimes report of a cocktail-party remark, or a simple declarative sentence addressed to NYRB readers, should not be conflated with a Phys. Rev. review article.

    [Response: Sure - but his statements sound like nothing I have heard or would have implied based on discussions with actual modellers ( and I've talked to most). If he wanted such a conversation I'd be happy to host it here. Dyson may be wicked smart but on this he is way out. - gavin]

    Like the Lowells and Cabots, chic ubergeeks may speak only to each other and to God, but as RC gets cut a lot of slack for its devotion to clarity, a physicist’s physicist like Dyson may deserve some as well.

    Comment by Russell Seitz — 28 Mar 2009 @ 10:29 PM

  97. #56 Tad – Gavin’s reply

    Thank you Gavin. The linked graphs were a great help in putting this discussion into context for me.

    Tad

    Comment by Tad Boyd — 28 Mar 2009 @ 10:40 PM

  98. Note to John P. Reisman:

    This is a just-published report on the role of the negative aerosol forcing over the Atlantic ocean:

    http://www.sciencemag.org/cgi/content/abstract/1167404v1

    The Role of Aerosols in the Evolution of Tropical North Atlantic Ocean Temperature Anomalies, Evan et al. Science Mar 26 2009

    “…it is unknown if the temporal variability of these aerosols is a key factor in the evolution of ocean temperature anomalies. Here, we elucidate this question by using 26 years of satellite data to drive a simple physical model for estimating the temperature response of the ocean mixed layer to changes in aerosol loadings…”

    The press release is here:
    http://www.sciencedaily.com/releases/2009/03/090326141553.htm

    The main things that raise questions are the nature of the simple physical model they used to estimate the temperature response of the oceans, and the resulting error bars on their estimates of the aerosol contribution. This claim is a little iffy, though:

    The result suggests that only about 30 percent of the observed Atlantic temperature increases are due to other factors, such as a warming climate. While not discounting the importance of global warming, Evan says this adjustment brings the estimate of global warming impact on Atlantic more into line with the smaller degree of ocean warming seen elsewhere, such as the Pacific.

    See earlier research from this group:
    http://www.sciencedaily.com/releases/2006/10/061010022224.htm

    The question involves the warming of the Atlantic, as reported in Levitus 2005, who reported 7.7, 3.3, and 3.5 x 10^22 Joules for the Atlantic, Pacific and Indian Oceans. That is a long-term trend over the period from 1955-2003, while the Evan et al. paper only looks at the past 26 years of aerosol data (satellite limitations), meaning that a direct comparison is difficult. Anyway, the most up-to-date ocean study seems to be this:

    http://www.nature.com/nature/journal/v453/n7198/abs/nature07080.html

    Our ocean warming and thermal expansion trends for 1961–2003 are about 50 per cent larger than earlier estimates but about 40 per cent smaller for 1993–2003, which is consistent with the recognition that previously estimated rates for the 1990s had a positive bias as a result of instrumental errors.

    That doesn’t really match with an aerosol explanation, however – the question will be to see what datasets Evan et al. used. If the warming was overestimated, then the effect of aerosol reduction would also be overestimated.

    In any case, this is legitimate science, though ocean response estimates based on simple models tend not to be very reliable, and the percentage claims must have very large error bars, probably asymmetric error bars at that – systematic errors are common in simple ocean models.

    To understand this, take a look at CLIMATE CHANGE: Confronting the Bogeyman of the Climate System, Richard Kerr, Science 2005

    In particular, see Carl Wunsch’s comments:

    “The ocean flow is a complicated beast,” he said. Calling the ocean conveyor the thermohaline circulation (THC) has come to imply that only differences in temperature and salt content drive it. In fact, “the crucial element for knowing what the ocean is doing is knowing what the wind is doing,” he said…

    … As long as the wind blows, essential parts of the THC such as the warm Gulf Stream will continue to flow, Wunsch said, “and I don’t know how to stop the wind.” A safer label for the ocean conveyor might be the meridional (north-south) overturning circulation (MOC), many at the workshop concluded.

    Notice also that the most modern coupled atmosphere-ocean models fail to produce anything like a halt in the MOC. These are some of the reasons why simple ocean models don’t carry much weight.

    Now, as far as the media effort to assign a percentage of the Australian drought to global warming, that is a very strange issue indeed:

    Global warming 37 pct to blame for droughts-scientist
    Wed Mar 25, 2009 Reuters

    Peter Baines of Melbourne University in Australia analysed global rainfall observations, sea surface temperature data as well as a reconstruction of how the atmosphere has behaved over the past 50 years to reveal rainfall winners and losers. What he found was an underlying trend where rainfall over the past 15 years or so has been steadily decreasing, with global warming 37 percent responsible for the drop.

    However, a little searching reveals that Dr. Peter Baines has never published anything related to drought in Australia, whatsoever. Where does this data come from, that Reuters published as fact?

    Not a very good sign at all. How does Reuters (David Fogarty reporting as the Reuters Climate Change Correspondent) justify that? The article has appeared in quite a few places, such as the Toronto Star – but why? Normally, an article like this follows the publication of a scientific article – and how did we go from being unable to assign any specific event to global warming, to being able to assign a 37% contribution from global warming? Could it be 75%? Or 15%? Or are we just supposed to take it on faith?

    Comment by Ike Solem — 28 Mar 2009 @ 11:30 PM

  99. Governments, like other organisations, spend money on what will help them grow and prosper. Call that a ‘plot’ if you must.

    In other words, most of modern science is simply a plot to extend the power of government, and has no basis in objective reality or data, right?

    May I buy you a cigarette? Or 1,000,000 of them? And light them for you?

    Because, I’m sure, just like Richard Lindzen, probably the most scientifically credible skeptic out there (tenured at MIT and all), you believe that the evidence that cigarette smoking can lead to lung cancer and heart disease is just a “plot” to extend government power.

    Comment by dhogaza — 29 Mar 2009 @ 12:10 AM

  100. Ray Ladbury No.60

    “Which happens to be the first year where the starting point is not strongly influenced by the 1998 El Nino. Next!”

    And so all the years after 2000 (inclusive) are likewise not influenced by the 1998 El Niño. Next!

    Comment by sven — 29 Mar 2009 @ 1:02 AM

    Scientists know that much of the best science is supported by NSF and NIH. People who believe that climate scientists are just hyping the issue might also believe that cancer is not a health risk and that AIDS is not contagious. After all, many scientists with government grants are studying these diseases. When a scientist has data he or she believes in, they try to turn it into publications and maybe even grants. However, grant reviewers are very critical and do not accept BS. NSF grants in my field typically get 6-10 reviews by experts in the field, and the ratings need to be mostly “excellent” with some “very goods” to have much chance of success. This is why NSF funding success is on the order of 10-15% and many scientists have given up trying.

    Comment by Bill DeMott — 29 Mar 2009 @ 1:05 AM

  102. Re #54 “Dyson’s contribution to physics has been huge.”

    The guy is a good physicist, no doubt about that. But what exactly is his contribution, beyond stretching the imagination of his readers?

    Comment by Alan of Oz — 29 Mar 2009 @ 1:06 AM

  103. Gavin, the “!” scale for physicists goes to 11, and I’d say Dyson is maybe a 7, no more than 8.*

    I think it’d be great if you’d invite him to something (maybe an ideal opportunity to have a pair of threads, one for the invited scientists and the other for us kibitzers, for focus).
    _______________________
    * As evidence, this idea is at least at 10 on the “!” scale:
    http://www.springerlink.com/content/n870972259510024/

    Comment by Hank Roberts — 29 Mar 2009 @ 1:19 AM

  104. “Every so often people who are determined to prove a particular point will come up with a new way to demonstrate it. ”

    Isn’t this in fact exactly your approach ;-)

    Comment by Joe — 29 Mar 2009 @ 4:40 AM

  105. Re:Knowing which experts to trust; towards a lay scrutiny of climatology

    BFJC : …[one should] look for a relationship between their source of funds, and the likely impact of their findings on the prospects and objectives of their benefactors. The case of the tobacco company scientists who found no link between smoking and lung cancer springs to mind.

    …[Climatology's] …ultimate source of funds – peer reviewers included – is virtually always the state; and, in the main, climatology’s findings currently provide the state with an excellent new rationale to expand itself, and vastly extend its control over society.

    Ray Ladbury : Congratulations Mr. Cricklewood, with this paranoid rambling, you have graduated from the ranks of paranoid crank to full blown nutjob. I hope you will use these new powers wisely.

    Perhaps if you could provide the substance of your disagreement – as opposed to merely the statement thereof – we could engage in a fruitful debate. Are you of the opinion that state-funded science should be beyond such scrutiny?

    BFJC: Governments, like other organisations, spend money on what will help them grow and prosper. Call that a ‘plot’ if you must.
    DHOGAZA : In other words, most of modern science is simply a plot to extend the power of government, and has no basis in objective reality or data, right?

    Your two points are unrelated. Yes, obviously governments do take over science to further their own interests, an idea given a huge boost by the Manhattan Project. But how objective this politicised science is depends on what political implications, if any, emerge. Probably not very much from the square on the hypotenuse, but plenty if people can be convinced of AGW, regardless of whether or not it turns out to be true.

    DHOGAZA : …I’m sure…you believe that the evidence that cigarette smoking can lead to lung cancer and heart disease is just a “plot” to extend government power.

    It’s no good pretending that a substantial part of the attraction of this to governments, is that it gives them an easy excuse to raise taxes. As with green taxes, and impending AGW taxes, these are never offset by tax reductions elsewhere, indicating where the state’s true interests lie.

    Comment by BFJ Cricklewood — 29 Mar 2009 @ 5:35 AM

  106. Gavin,

    I am not a stats person but what I have read by the people that seem to know quite a bit, for example Lucia over on The Blackboard on 16 March 2009 wrote:

    you’ll see the multi-model mean based on models driven by volcanic forcing is rejected at the 90% confidence level for every year between 1960 and 1998. We get some fail-to-rejects for shorter trends — but it’s well known that type II error (i.e. failing to reject models that are wrong) is common when one uses short trends. Needless to say, this multi-model mean trend is rejected for 2001.

    What if we pick 95% as our confidence intervals?

    Well… then we don’t reject this multi-model mean in 1974, or 1996 and for a few years after. So, if you feel bound and determined to save the reputation of the models, you should think up reasons why 1974 or 1996 are the “correct” years for testing models over the “longer term”, while simultaneously claiming you picked these entirely at random.

    At the 95% confidence interval, the multi-model mean using volcanic cases only are rejected if we happen to use 2001 to compute the initial trend. What if 2000 is the right start year? We reject the multi-model mean based on cases with volcanic forcing.

    So, to those who think these “rejections” are due to selecting a short period for analysis: Nope! These rejections are due to the observed earth temperature veering away from the projected values.

    Does this not apply to what you did? Using shorter trends to get a “fail to reject”? I am not saying this to argue that Michaels is right, because other sites I have read imply that he is not at the 95% confidence level, but that what he says is correct at the 90% confidence level. What does that imply?

    Comment by Vernon — 29 Mar 2009 @ 6:38 AM

  107. Correction to #105: It’s no good pretending that a substantial part of the attraction of [cigarette taxes] to governments, is not that it gives them an easy excuse to raise taxes. As with green taxes, and impending AGW taxes, these are never offset by tax reductions elsewhere, indicating where the state’s true motives lie.

    Comment by BFJ Cricklewood — 29 Mar 2009 @ 6:44 AM

    BFJ, less petrol being used in the UK means that the massive amount of tax collected on fuel will be lost.

    According to your theory, this should mean that the UK would be against AGW.

    They are not.

    Your theory is disproven.

    [Response: No more on this. It's just getting tedious - gavin]

    Comment by Mark — 29 Mar 2009 @ 7:35 AM

  109. On the subject of graphs & effects of AGW.
    May I ask for comment on this.
    http://www.telegraph.co.uk/comment/columnists/christopherbooker/5067351/Rise-of-sea-levels-is-the-greatest-lie-ever-told.html
    “One of his most shocking discoveries was why the IPCC has been able to show sea levels rising by 2.3mm a year. Until 2003, even its own satellite-based evidence showed no upward trend. But suddenly the graph tilted upwards because the IPCC’s favoured experts had drawn on the finding of a single tide-gauge in Hong Kong harbour showing a 2.3mm rise. The entire global sea-level projection was then adjusted upwards by a “corrective factor” of 2.3mm, because, as the IPCC scientists admitted, they “needed to show a trend”.

    [Response: Do even you take this kind of nonsense seriously? Please take the conspiracy crap somewhere else. It is too tedious to bother with. - gavin]

    Comment by Adam Gallon — 29 Mar 2009 @ 7:59 AM

  110. Jari in 54 wrote:

    You obviously do not understand much about physics. Dyson’s contribution to physics has been huge.

    Alan of Oz wrote in 102

    The guy is a good physicist, no doubt about that. But what exactly is his contribution, beyond streching the imagination of his readers?

    Well, he is rather interesting in this regard. His first great achievements in theoretical physics were roughly by age 26 — in 1949:

    Dyson is best known for demonstrating in 1949 the equivalence of the formulations of quantum electrodynamics that existed by that time — Richard Feynman’s diagrammatic path integral formulation and the operator method developed by Julian Schwinger and Sin-Itiro Tomonaga. A by-product of that demonstration was the invention of the Dyson series.

    http://en.wikipedia.org/wiki/Freeman_Dyson

    Not that unusual — as far as major contributions to theoretical physics go. However, he didn’t stop there:

    A seminal work by Dyson came in 1966 when, together with A. Lenard and independently of Elliott H. Lieb and Walter Thirring, he proved rigorously that the exclusion principle plays the main role in the stability of bulk matter [12]. Hence, it is not the electromagnetic repulsion between electrons and nuclei that is responsible for two wood blocks that are left on top of each other not coalescing into a single piece, but rather it is the exclusion principle applied to electrons and protons that generates the classical macroscopic normal force. In condensed matter physics, Dyson also did studies in the phase transition of the Ising model in 1 dimension and spin waves[11] Dyson was awarded the Lorentz Medal in 1966 and Max Planck medal in 1969.

    ibid.

    He was still doing major (“seminal,” in this case) work in 1966, around the time he was 43. Normally theoretical physicists aren’t doing any of their great work beyond their late twenties or early thirties, but he continued making major contributions well into his forties, an age by which most have long since settled into teaching physics rather than producing new results.

    But of course, most people probably know him for his science-fiction ideas, e.g., the Dyson sphere and the Dyson tree. It takes a certain degree of education simply to be able to understand the nature of his greater achievements. And of course the Dyson sphere and Dyson tree are good examples of his signature optimism, the latter involving genetic engineering to make trees produce items that we want, acting as factories even for poorer villages. Actually this reminds me of how genetic engineers intend to use bacteria; for example, we are already using bacteria to produce insulin.

    What I find ironic is that his can-do optimism is in this case working against our ability to do something about our dependence on fossil fuels and the climate change that dependence is causing: switching to alternative energy, preserving modern civilization and the world economy beyond Peak Oil and Peak Coal, and preventing climate change from becoming such a huge problem that it destroys the world economy — and more than likely leads to a series of highly destructive wars over limited resources. And yet it is only by making this sort of change to our civilization that we are likely ever to get to the point at which we could engage in the sort of genetic engineering and bending of higher organisms to our purposes that he envisions. Doubly ironic, I suppose. A little too unbelievable to write as fiction, actually.

    Comment by Timothy Chase — 29 Mar 2009 @ 8:34 AM

    BFJ Cricklewood, I am sorry, but I do not see how I can have a fruitful debate with a conspiracy theorist who believes that the entire scientific community is committing a giant fraud to increase its funding. That accusation is so far beyond the pale that it indicates you and objective reality are so unfamiliar with each other that you don’t even exchange holiday greetings.

    First, you have to realize that it is the entire scientific community against which you level those accusations. Not one single, solitary professional or honorific scientific organization has dissented from the consensus opinion on climate change. Not one. And it’s been examined in minute detail by the NAS, the AGU and a veritable alphabet soup of scientific societies.

    Second, the whole premise that anyone benefits from our having to address climate change is simply absurd. It means that all scientific inquiry has to take a back seat to survival for at least a generation or maybe two. How does that benefit me as a frigging rocket scientist?

    Third, if there were such a conspiracy, the strongest incentives would be to blow the lid off of it, become famous AND be hailed as a hero of science. Can you imagine an ambitious, young scientist resisting that temptation of fame and glory AND doing the right thing as well? Ben Franklin said, “Two men can keep a secret if one of them is dead.” We’re talking about a conspiracy involving hundreds of thousands of folks who generally are pretty lousy at lying.

    Sir, you owe to the entire scientific community, to the very profession of scientist a heartfelt apology. You owe to yourself an honest examination of whether your silly, conspiracy-laced view of the world holds water. Until you realize this, I am afraid we have nothing to discuss.

    Comment by Ray Ladbury — 29 Mar 2009 @ 8:38 AM

  112. Sven, OK work with me here. If you start before 1998, your starting data is quite close to the big El Nino year. If you start in 1999, then 1999 is Bigger than 1998, right? Therefore it doesn’t influence the starting point.

    Take a little while if you need it to let that sink in.

    Comment by Ray Ladbury — 29 Mar 2009 @ 8:42 AM

  113. I think folks are drawing entirely the wrong conclusion from Dyson’s skepticism. He’s as smart as they come. His very early work on QED is brilliant. The lesson is that no matter how smart you are, if you venture far outside your realm of expertise, your opinions count no more than the man on the street. Expertise matters. Years of study matter. Actively publishing in a field and advancing the state of understanding matters. The real question I would like to hear Dyson answer is why anyone should take seriously his opinions on a matter in which he is far from expert.

    Comment by Ray Ladbury — 29 Mar 2009 @ 8:56 AM

  114. John Reisman (#90) –

    My mention of “slight” was in the context of a hypothetical lower emission pace (“were this pace slow enough…”). This hypothetical scenario is not proving useful in the debate, so I’ll give it up.

    Your elaborate answer was useful for its own content, though. Thanks for your reply.

    Thanks to David Benson (#88) too.

    Comment by Alexandre — 29 Mar 2009 @ 11:14 AM

  115. dhogaza wrote in 99:

    Because, I’m sure, just like Richard Lindzen, probably the most scientifically credible skeptic out there (tenured at MIT and all), you believe that the evidence that cigarette smoking can lead to lung cancer and heart disease is just a “plot” to extend government power.

    Not that uncommon a view among Libertarians and Objectivists in the past. Ayn Rand held to such a view — at least until she developed lung cancer. It was romanticized in her novels. The heroes smoked — it represented the promethean power over fire, the “fire” lit in the active mind — and even helps one achieve a higher level of attention, or as an Objectivist would put it, “focus one’s mind.” Then she gave up smoking, presumably right there in her doctor’s office.

    Objectivists and other Libertarians continue to have an uncomfortable relationship with smoking to this day. But of course nowadays most Objectivists and Libertarians acknowledge the risks to one’s health that smoking poses — at least for some people. And yet some smoke — and of course feel guilty about it since this demonstrates a lack of will power, or they will go one step further and try to rationalize it, arguing that it isn’t an issue if they don’t have a family history of smoking-related illnesses, or that since it helps one focus one’s mind, smoking is somehow worth the risk.

    Even if you were to convince them that smoking is some sort of absolute sin they would still have a problem with outlawing it. They will pretty much always have a problem with taxation for any “worthwhile” goal — including the maintenance of an army, police force, sewage system, highway system — pretty much anything. I never did find out what they thought when it came to inoculations against disease.

    Nowadays the Objectivist movement is fragmented into a number of different groups. You have those that stuck with Leonard Peikoff (who has since stepped down) and the Ayn Rand Institute, those that went with David Kelley and the Objectivist Center (since renamed the Institute for Objectivist Studies and then the Atlas Society), and the Kiwis associated with Lindsay Perigo (who has since stepped down) and the Free Radical in New Zealand.

    The most liberal of these groups is probably David Kelley’s Atlas Society. They get along with other Libertarians. Those with ARI don’t — as a way of differentiating themselves from those who do. (But it wasn’t always this way.) The Kiwis actually called themselves Libertarianz (with a “z”) but had a falling out with David Kelley’s group.

    Some of the more consistent Libertarians are anarcho-capitalists following in the footsteps of Murray N. Rothbard (a former Objectivist). Others are constitutionalists more along the lines of Friedrich A. Hayek. Both Rothbard and Hayek were students of Ludwig von Mises, who also taught Alan Greenspan and George Reisman, the latter two of whom were followers of Ayn Rand.

    *

    But the Ayn Rand Institute on the right and Atlas Society on the left still take pretty much the same view on smoking…

    Here is ARI:

    Ayn Rand, Smoking, & Atlas Shrugged
    by Andrew Lewis (May 19, 2000)
    http://www. capmag.com/article.asp?ID=566

    … and here is AS:

    Smoking
    Answered by William Thomas
    http://www. objectivistcenter.org/cth–1275-Smoking.aspx

    Me? I was in what I thought of as “No Man’s Land,” standing with no one, recognizing no authority greater than reality itself, and standing for dialogue when I formed “The Objectivist Ring.” But that was after I split with ARI. I felt guilty about smoking when I was aligned with ARI and broached the topic with Dr. Andrew Bernstein and then again when I “moderated” The Objectivist Ring and broached the subject with a good friend and dialectician.

    *

    BFJ Cricklewood wrote in 107:

    Correction to #105: It’s no good pretending that a substantial part of the attraction of [cigarette taxes] to governments, is not that it gives them an easy excuse to raise taxes. As with green taxes, and impending AGW taxes, these are never offset by tax reductions elsewhere, indicating where the state’s true motives lie.

    BFJ Cricklewood is himself a Libertarian — and here he is arguing with an Anarcho-Soc.ialist, apparently:

    No, I recognise that you (plural) are working on a blueprint for a type of soc.ialism without central planning. (And further that this blueprint is the central topic of discussion on this forum).

    Re: The SPGB House Style
    BFJ Cricklewood
    Sat Feb 2, 2002 4:43 pm
    http://tech.groups.yahoo.com/group/WSM_Forum/message/12035

    … and here he is making reference to Friedrich A. Hayek’s “problem of economic calculation”:

    But before you allocate resources you need to know what to allocate them for. If there is enough meat resource for 100 steaks or 500 burgers, how do you calculate how much to allocate for each? How are these opportunity costs transmitted to the consumer to enable him to make a rational and informed decision?

    << ROBIN the mechanism for ensuring the efficient allocation of resources without market prices in the light of thier opportunity costs(i.e. calculation in kind, a self regulating sysytetm of stock control and the crucially, the law of the minimum) >>

    Calculation in kind is precisely what causes the combinatorial explosion of choices problem.

    ibid.

    When I met Mar.xists from South Africa and one of them said of me, “No use arguing with him — he’s an Objectivist and his mind is all made up,” I thought that it was proof of the superiority of Objectivism over Mar.xism — proof that our arguments were superior and that we would win in the long-run. Nowadays it seems more like pig-headedness.

    Personally, I doubt there is much use in arguing with Cricklewood. His mind is all made up — and no one can change it, except with the fairly remote possibility of himself.

    Comment by Timothy Chase — 29 Mar 2009 @ 11:59 AM

  116. To dismiss Dyson is to dismiss thought. He thinks about things far more broadly and openly than is suggested by the inference that he’s a brilliant old coot who is past his prime and out of his bailiwick. I have read Dyson and found him to be a far more interesting writer and thinker than anyone here, which is meant neither to insult nor surprise. For those who are fearful of his challenge to climate orthodoxy, though, here is a quotation from a recent publication of his:

    “One of the main causes of warming is the increase of carbon dioxide in the atmosphere resulting from our burning of fossil fuels such as oil and coal and natural gas.”

    He thinks decidedly outside the box, though, particularly about biomass and the future of biotechnology and farming, and there’s one aged scientist who I hope lives a lot longer. I understand perfectly well that climate modelers want to own the expertise about climate – it’s only human – but surely contributions from outside that narrow field should be welcomed, if disagreed with, rather than being dismissed out of hand.

    Comment by wmanny — 29 Mar 2009 @ 12:07 PM

  117. >> Adam Gallon
    > [Response: Do even you take this kind of nonsense seriously?
    > Please take the conspiracy crap somewhere else….

    He’s _from_ the ‘somewhere else’; n.b. anti-Gore link behind his name.

    Comment by Hank Roberts — 29 Mar 2009 @ 12:33 PM

    I say, let’s forget the negative thoughts on Dyson – I like having my mind expanded – and concentrate on what seems to be working. My questions are uninformed, and one of you might like to extend yourself for a chat offline. I basically don’t understand the entire role of anthropogenic forcing and would like some clarity. The parameters mentioned in McKitrick and Michaels 2007 don’t make much sense to me and I was hoping for some enlightenment. Or another paper to read that might have more in the way of breaking down the anthropogenic parameters.

    In case you want to see what an aesthetic engineer does you can see my TOPEX/POSEIDON visualization that I did for EAPS.

    Comment by GBS - Aesthetic Engineer — 29 Mar 2009 @ 2:24 PM

    On another topic – does anyone use the student in situ data from GLOBE for their GCMs? I think at one point GISS did.

    Comment by GBS - Aesthetic Engineer — 29 Mar 2009 @ 3:19 PM

  120. “To dismiss Dyson is to dismiss thought.”

    Muh? So Dyson is the God Of Thought???

    To dismiss Dyson is to dismiss Dyson. No more, no less.

    And it’s funny how you come to that with Dyson yet fail to think the same if it’s someone saying “AGW is real, it’s now and we must do something”.

    Comment by Mark — 29 Mar 2009 @ 3:55 PM

  121. I am not a stats person but what I have read by the people that seem to know quite a bit, for example Lucia over on The Blackboard on 16 March 2009 wrote:

    I’m not a stats person either, but I think Lucia is using a test that Santer uses, but for some reason doesn’t point out that Santer also stated that this test could return false negatives. I don’t think her CIs take this into account.

    Comment by Doktor Bettnässer — 29 Mar 2009 @ 4:21 PM

  122. Walter, I agree that Dyson is brilliant, but he is far, far outside his realm of expertise. Dyson tends to be a “big-picture” kind of guy who doesn’t sweat the details. Climate change is one of those inconvenient details that could derail his technophilic vision of the future. I think he tends to be a bit naive–not uncommon in a physicist.
    It is crucial to pay attention to those who are actively advancing understanding in the field.

    Comment by Ray Ladbury — 29 Mar 2009 @ 4:59 PM

  123. GBS – Aesthetic Engineer (118) — I encourage you to read W.F. Ruddiman’s “Plows, Plagues and Petroleum”. He also has a guest thread here on RealClimate regarding the early anthropocene.

    I also suggest David Archer’s “The Long Thaw”.

    Comment by David B. Benson — 29 Mar 2009 @ 5:23 PM

    #100 Ray, I’ve just done some recalculations of high Arctic sunset locations; it turns out I didn’t do them to standard during 2001. After the powerful El Niño of 1998, there was an extended La Niña:
    http://www.appinsys.com/GlobalWarming/ENSO.htm
    (would like to put graphs instead of linking)
    …. During that one, in 2001, I filmed and observed 7 sunsets a full 2 degrees below the astronomical horizon (this may be a world record). Flash forward to now: the same number of observations during roughly the same days, again at the end of a powerful La Niña >> Results: none, zip, no high-zenith-distance sunsets beyond 92 degrees. I interpret this as the effect of a La Niña starting from a higher-energy atmosphere, corresponding to overall higher temperatures throughout the atmosphere at the beginning and the end of this La Niña (not necessarily fully measured by conventional means due to lack of data resolution).
    I give one example on my website. Look for the sun line pictures.

    Comment by wayne Davidson — 29 Mar 2009 @ 5:28 PM

  125. Re Mark #120

    wmanny wrote in 116:

    To dismiss Dyson is to dismiss thought.

    Mark wrote in 120:

    Muh? So Dyson is the God Of Thought???

    To dismiss Dyson is to dismiss Dyson. No more, no less.

    And it’s funny how you come to that with Dyson yet fail to think the same if it’s someone saying “AGW is real, it’s now and we must do something”.

    Mark, you might find this of interest… Comment #43, Monckton Flunks Latin at Deltoid

    Comment by Timothy Chase — 29 Mar 2009 @ 6:31 PM

  126. PS

    to the above…

    Paste the following into your browser, but get rid of the space:

    http://wattsupwiththat.com/2009/03/24/guardian-headline-leading-climate-scientist-democratic-process-isnt-working/#comment-104058

    Comment by Timothy Chase — 29 Mar 2009 @ 6:55 PM

  127. @Doktor Bettnässer

    I’m not a stats person either, but I think Lucia is using a test that Santer uses, but for some reason doesn’t point out that Santer also stated that this test could return false negatives. I don’t think her CIs take this into account.

    All statistical tests can return both false positives and false negatives. In section 6 of their article, Santer et al 2008 discuss a feature that can cause their test to return slightly more false positives than intended. That is, when the test is designed to result in 5% false rejections of the null hypothesis, it may result in a few more. This is illustrated in their figure 5, and can be seen by noting that the result of synthetic tests at the 5% level appears to return a bit more than 5% false positives. Squinting at their graph, it looks like they get somewhere between 5% and 6% false rejections when they set their criteria to return 5% false rejections.

    You are correct that I do not correct for this in my blog posts but I have mentioned it from time to time.

    I have also mentioned several other issues with the test in Santer.

    Though Santer did not discuss the possibility, it is known that t-tests involving linear fits will result in too many false negatives if the underlying trend is non-linear. Non-linearities are very evident in the underlying trend exhibited by the model mean over all models: there are dips associated with volcanic eruptions. These dips do not average out over different runs of individual models. I’ve estimated the effect on the surface trends computed since the 80s, and the increase in false negatives is larger than the rate of false positives for a similar period.

    I do not do the corrections for either effect in the recent blog posts, but have discussed these issues rather generally and even specifically, for example here.

    Because the magnitude of the counteracting effects depends on the degree of non-linearity, the rms of the residuals to a linear fit, the length of the trend and the temporal auto-correlation of the “weather noise”, it is difficult to generalize whether the method will result in too many false positives or too many false negatives. One must make assumptions and estimate the effect in each instance. However, though I make no formal correction, I know the order of magnitude of both effects, and correcting would not change the reject/fail-to-reject diagnosis in any important way.

    For what it’s worth, I haven’t specifically checked for the effect of non-linearities on the underlying trend (as estimated based on the model mean) on the liberality of the test on the tropospheric trend presented in the test reported in Table III in Santer et al. Given the reported magnitudes of d*, I anticipate if I did check, the major results would remain intact provided one sticks to the specific years used in the analysis. The d*’s in that table are sufficiently below the threshold for the 95% confidence intervals, though, it is possible this could change if the test of the tropospheric trends included more recent data. (This, would, of course depend on what the more recent data show.)
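    For readers who want to see the general shape of the adjustment being discussed, here is a bare-bones sketch with invented monthly data (this is not the calculation from my posts or from Santer et al., just the usual lag-1 autocorrelation correction to the standard error of an OLS trend, compared against an assumed model-mean trend):

      import numpy as np

      def trend_with_adjusted_se(y):
          """OLS trend plus a lag-1-autocorrelation-adjusted standard error
          (the effective-sample-size idea discussed above)."""
          n = y.size
          x = np.arange(n, dtype=float)
          b, a = np.polyfit(x, y, 1)                 # slope, intercept
          resid = y - (a + b * x)
          r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
          n_eff = n * (1.0 - r1) / (1.0 + r1)        # effective sample size
          s2 = np.sum(resid**2) / (n_eff - 2.0)
          se_b = np.sqrt(s2 / np.sum((x - x.mean())**2))
          return b, se_b

      # Invented monthly anomalies: weak trend plus AR(1)-like noise, 96 months.
      rng = np.random.default_rng(2)
      noise = np.zeros(96)
      for i in range(1, 96):
          noise[i] = 0.6 * noise[i - 1] + rng.normal(0.0, 0.08)
      y = 0.0015 * np.arange(96) + noise

      b, se = trend_with_adjusted_se(y)
      model_mean_trend = 0.0019                      # hypothetical model-mean trend, per month
      t_stat = (b - model_mean_trend) / se
      print(f"obs trend {b:+.4f} +/- {se:.4f} per month, t vs model mean = {t_stat:+.2f}")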

    Comment by lucia — 29 Mar 2009 @ 7:28 PM

  128. Wayne, That is cool! Interesting correlation. Any idea of the cause?

    Comment by Ray Ladbury — 29 Mar 2009 @ 7:31 PM

  129. timothy chase,
    wow – thanks for the detective work. man those guys are unscrupulous.

    Comment by walter crain — 29 Mar 2009 @ 9:07 PM

  130. #127 Ray,

    Any analysis is difficult because we lack resolution; it would be nice to have more stations everywhere. Since that is not possible, we must rely on other means. The best possibility: the ice is thinner. The observed sunsets are purely over sea ice, with ray paths exactly over NW Passage ice. Local measurements show thinner ice, but the National Ice Center site is offline:

    http://www.natice.noaa.gov/products/arctic/index.htm

    I can’t confirm if it is so. But an upper-air profile from a land station may differ from conditions low over sea ice, as the lower atmosphere may be warmer over thinner ice compared with previous years. Refraction is stronger in colder air, so it’s likely that the lower atmosphere over thinner ice is warmer.
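    The size of the ordinary temperature effect can be seen from the standard Saemundsson refraction approximation (which only covers normal near-horizon refraction, nothing like the extreme below-horizon sunsets I am describing, but it does show that colder air refracts more):

      import math

      def horizon_refraction_arcmin(true_alt_deg, pressure_hpa=1010.0, temp_c=10.0):
          """Saemundsson's approximation for astronomical refraction near the horizon,
          with the usual pressure/temperature scaling.  Not valid for strong
          inversions or ducting."""
          base = 1.02 / math.tan(math.radians(true_alt_deg + 10.3 / (true_alt_deg + 5.11)))
          return base * (pressure_hpa / 1010.0) * (283.0 / (273.0 + temp_c))

      for t in (10.0, -10.0, -30.0):
          print(f"air at {t:+5.1f} C: refraction at the horizon ~ {horizon_refraction_arcmin(0.0, temp_c=t):5.1f} arcmin")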

    Comment by wayne Davidson — 29 Mar 2009 @ 9:22 PM

    I have a question about another piece of testimony which seems quite compelling.
    It’s Christy’s testimony about the models on CO2 and cloud sensitivity (which overstate warming or are wrong, according to him):
    http://waysandmeans.house.gov/media/pdf/111/ctest.pdf

    Are the models actually this lousy and does he have a point? Isn’t he one of the lead authors of the IPCC?

    [Response: His graph on the 1988 projections has been significantly distorted by playing with the baseline. See here for a much better discussion of the same issue (summary: the models projected ~0.23+/-0.04 deg C/dec, and the data showed ~0.20+/-0.04 deg C/dec from 1984 to 2008). His second graph is exactly what we discussed in the main post. His characterisation of Spencer's work is a huge overstatement since Spencer's paper only looked at the MJO (Madden-Julian Oscillation, or intra-seasonal oscillation) which is a dynamic oscillation in the tropics and not a response to surface warming at all. He doesn't discuss climate sensitivity at all. Climate model results can indeed be usefully questioned - but Christy's testimony was not one of those efforts. - gavin]

    Comment by KTB — 30 Mar 2009 @ 4:21 AM
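
    (A small synthetic illustration of the baseline point in the response above: shifting the baseline moves a curve up or down but leaves its trend unchanged, which is why a baseline-shifted comparison can look worse than it is. The series here is made up; nothing is taken from Christy’s figure.)

        import numpy as np

        t = np.arange(1984, 2009)
        y = 0.02 * (t - 1984)                 # synthetic anomaly series, 0.2 deg C/decade
        rebase_a = y - y[:10].mean()          # anomalies relative to the first 10 years
        rebase_b = y - y[-10:].mean()         # anomalies relative to the last 10 years

        print(np.polyfit(t, rebase_a, 1)[0])  # ~0.02 deg C/yr
        print(np.polyfit(t, rebase_b, 1)[0])  # ~0.02 deg C/yr -- trend identical
        print(rebase_a[-1] - rebase_b[-1])    # ~0.3 deg C apparent offset from re-baselining alone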

  132. Nickolas writes:

    Since 2001 we have sharp global cooling.

    I regressed NASA GISS temperature anomalies 2001-2008 on year and got a positive slope that was statistically insignificant. Where did you get “sharp global cooling” from?

    Comment by Barton Paul Levenson — 30 Mar 2009 @ 6:07 AM
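
    (For readers who want to reproduce the kind of check Barton describes, a minimal Python sketch follows. The anomaly values are placeholders, not the actual GISS numbers — substitute the real annual means before drawing any conclusion.)

        import numpy as np
        from scipy import stats

        years = np.arange(2001, 2009)
        anoms = np.array([0.54, 0.63, 0.62, 0.54, 0.68, 0.61, 0.66, 0.54])  # placeholder anomalies, deg C

        fit = stats.linregress(years, anoms)
        print("slope = %.4f deg C/yr, p-value = %.2f" % (fit.slope, fit.pvalue))
        # A p-value well above 0.05 means the 8-year slope is statistically
        # indistinguishable from zero -- neither "sharp cooling" nor significant warming.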

  133. Billy Ruffn accuses Gavin of arrogance for saying he could educate Freeman Dyson on climate science.

    You know what, Billy? I accuse Freeman Dyson of arrogance for saying he can educate climate scientists on climate science, when he himself has never studied the subject.

    And I accuse you of arrogance for spouting off about Gavin when you don’t understand the discussion to begin with.

    CAPTCHA: “multiplied flames”

    Comment by Barton Paul Levenson — 30 Mar 2009 @ 6:12 AM

  134. wmanny writes:

    Why is “relative flattening” such a difficult premise? In any event, since you are inclined to turn my “mini-trend” into a trend, I offer yet another olive branch — how about we call it a “micro-trend”? I’ll go to nano- or pico- if necessary, but the question stands.

    If it’s not statistically significant, it’s not a “trend.”

    Comment by Barton Paul Levenson — 30 Mar 2009 @ 6:46 AM

  135. BFJC writes:

    I’m sure…you believe that the evidence that cigarette smoking can lead to lung cancer and heart disease is just a “plot” to extend government power.
    It’s no good pretending that a substantial part of the attraction of this to governments, is that it gives them an easy excuse to raise taxes. As with green taxes, and impending AGW taxes, these are never offset by tax reductions elsewhere, indicating where the state’s true interests lie.

    How about evaluating what’s true on the basis of the evidence instead of endlessly blathering about who benefits, and other worthless ad hominem arguments? It doesn’t matter if the government wants to rape puppies and torture small children–either what it says about tobacco is right, or it’s wrong. Gassing about their motives proves nothing one way or the other.

    Comment by Barton Paul Levenson — 30 Mar 2009 @ 6:55 AM

  136. barton, re: nickolas’ “sharp cooling” – that’s funny/sad. sounds like a claim that’s been through several non-scientist hands. somebody changed “decrease in warming” (a very questionable starting point) to “cooling”, then someone added “sharp” – it’s like they played telephone.

    Comment by walter crain — 30 Mar 2009 @ 7:21 AM

  137. Somebody was watching…

    Smoking
    Answered by William Thomas
    http://www. objectivistcenter.org/cth–1275-Smoking.aspx

    … is gone.

    But I have a copy.

    If you want one you can get it here for the time being:

    http://209.85.173.132/search?q=cache:ztCMHH9B1LcJ:www.objectivistcenter.org/cth–1275-Smoking.aspx+http://www.objectivistcenter.org/cth%E2%80%931275-Smoking.aspx&cd=1&hl=en&ct=clnk&gl=us

    Comment by Timothy Chase — 30 Mar 2009 @ 10:16 AM

  138. Timothy, from the Google Cache page the link to “current page” still finds it. Slight difference from the link as you have it formatted:
    http://www.objectivistcenter.org/cth–1275-Smoking.aspx

    Uh, oh, watch for software help.
    As typed this is two hyphens —
    And in the link, between “cth” and “1275″ should be two hyphens. It looks like something is changing those to a single long dash, from the preview. That would break the link, eh?

    Comment by Hank Roberts — 30 Mar 2009 @ 11:06 AM

  139. Ike (re:89),

    You ask:

    “Your i.e. does not follow. Are you really claiming that models perfectly simulate “internal model weather noise”? Do you believe that if you examine model output you are looking at the real world?”

    I answer:

    Our analysis captures the impact of model weather noise on the short-term model surface temperature trends. Whether or not the models capture the general characteristics of ‘natural variability’ is one of the things we are examining.

    -Chip

    Comment by Chip Knappenberger — 30 Mar 2009 @ 11:31 AM

  140. Deep Climate (re:72),

    Thanks for your comments.

    a) Our trend calculations are not “off by a year.” There are 11 years of data from 1998-2008.

    b) Our CI calculation is more general than Gavin’s. It is explained a bit in Michaels’ testimony as well as in my comment 36.

    c) Gavin does not include an “uncertainty envelope for the observations.” However, his point that the value of observed trends changes given the start and stop date of the observations is valid. Michaels showed the data from the last stop date available (i.e. the current values as of December 2008). In our paper, we’ll show the time history of n-year trends (over all start and stop dates).

    d) We will include the GISS data as well in our paper.

    -Chip

    Comment by Chip Knappenberger — 30 Mar 2009 @ 11:47 AM

  141. Hank Roberts wrote in 138:

    Uh, oh, watch for software help.
    As typed this is two hyphens —
    And in the link, between “cth” and “1275″ should be two hyphens. It looks like something is changing those to a single long dash, from the preview. That would break the link, eh?

    Don’t type in the two hyphens. Copy and paste, then get rid of the space if it is there. Or just click on the link for the cached.

    It works for me when I click the link for the cached version.

    In any case, remove the space from the original address, put it into Google. Hit search. The original page will turn up as one of the results — the first result — and then “Cached” will appear as a link directly beneath it — to the right.

    However, since that has the long hyphen another approach is to enter the following into Google:

    site:www.objectivistcenter.org smoking

    Once again, the first result will be the “Smoking” article and you will have a link to the Google “Cached” version underneath and to the right.

    Hey — I was helping to track down creationist organisations on the web and compile evidence for the British Centre for Science Education for a while. They leave tracks, you can follow them, and you can tell other people how to find them. However, the cache will remain in Google only for so long. Maybe two weeks.

    So it helps to get the copy.
    *
    Captcha fortune cookie:
    Siberia sacks

    Sounds like someone is carrying around some extra baggage…

    Comment by Timothy Chase — 30 Mar 2009 @ 12:01 PM

  142. Hank, you are right — I just didn’t understand.

    The old web page is still there. Just me being paranoid — this time.

    Comment by Timothy Chase — 30 Mar 2009 @ 12:25 PM

  143. Deep Climate writes in 77:

    Ex. 2 Year 1998. As is well known, the 10-year trend since the large El Nino is flat in HadCRUT and only 0.1 deg C/decade in GISTEMP. And yet the long term trend from 1979 is up from about 0.15 deg./decade in 1998 to 0.17 deg./decade at present.

    The two subperiods… For the sake of the example I will focus on the GIS figures…

    Deep Climate continues in 77:

    It may surprise some that a long term trend can be greater than both trends in two sub-periods, as is the case for almost all “pivot” points as far back as 1995 in these data sets. That’s a fact that Michaels and the Cato Institute would like to keep well hidden. They shouldn’t get away with it.

    You would be speaking of the trend in temperature anomaly (warming) from 1979 to present being greater than either of the two shorter trends from 1979 to 1998 and from 1998 to present — if I understand you correctly.

    I take it that this would be similar to the following:

    The 2008 season strongly reinforces the thirty-year downward trend in Arctic ice extent. The 2008 September low was 34% below the long-term average from 1979 to 2000 and only 9% greater than the 2007 record (Figure 2). Because the 2008 low was so far below the September average, the negative trend in September extent has been pulled downward, from –10.7 % per decade to –11.7 % per decade (Figure 3).

    2 October 2008
    Arctic Sea Ice Down to Second-Lowest Extent; Likely Record-Low Volume
    Despite cooler temperatures and ice-favoring conditions, long-term decline continues
    http://nsidc.org/news/press/20081002_seaice_pressrelease.html

    The long-term ice area loss from September of 1979 to September of 2007 was -10.7%. Then we actually gained some ice area from 2007 to 2008 (although we lost about half the ice volume going from 2007 to 2008 — if I remember correctly). But even though the trend was negative from 1979 to 2007 and then positive from 2007 to 2008, the trend from 1979 to 2008 was more negative (-11.7% per decade) than the trend from 1979 to 2007 (-10.7%).
    *
    Captcha fortune cookie: deplores the

    We both do.

    Comment by Timothy Chase — 30 Mar 2009 @ 1:21 PM
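
    (The “pivot” behaviour Deep Climate and Timothy describe — a full-period trend steeper than the trend in either sub-period — is easy to demonstrate with made-up numbers. The data below are synthetic; the same arithmetic applies to the sea-ice example.)

        import numpy as np

        def ols_slope(x, y):
            return np.polyfit(x, y, 1)[0]

        x = np.arange(10.0)
        # Two nearly flat segments, the second offset upward from the first.
        y = np.concatenate([0.0 + 0.01 * np.arange(5), 0.5 + 0.01 * np.arange(5)])

        print(ols_slope(x[:5], y[:5]))  # ~0.01 (first sub-period)
        print(ols_slope(x[5:], y[5:]))  # ~0.01 (second sub-period)
        print(ols_slope(x, y))          # ~0.08 (full period, steeper than both)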

  144. I wonder what the above graphs look like if you use more accurate satellite data, i.e. RSS or MSU data instead of GISS or HadCRUT3, as in recent years the land-based temperature records have been diverging upwards from the satellite data.

    [Response: No they haven't. UAH is the main outlier though. However, you would need to use the MSU-LT diagnostics from the models to have a like-with-like comparison. You can't assume that the trends are the same as the surface air temperature. - gavin]

    Comment by stumpy — 30 Mar 2009 @ 3:55 PM

  145. #111 Ray Ladbury

    ..[you are] a conspiracy theorist who believes that the entire scientific community is committing a giant fraud to increase its funding

    I neither claim nor imply ‘conspiracy’. Everything I refer to happens in the open.
    Ditto ‘fraud’, at least as far as the scientific community is concerned. The politicisation of science is driven by political structures, by the state paying for science, thereby systematically selecting some scientists over others.

    Not one single, solitary scientific professional or honorific science organization has dissented from the consensus opinion on climate change. Not one. And it’s been examined in minute detail by the NAS, AGU and a veritable alphabet soup of scientists.

    Not too surprising really, given they all have the same ultimate employer.

    the whole premise that anyone benefits from our having to address climate change is simply absurd. and means that all scientific inquiry has to take a back seat to survival for at least a generation or maybe two. How does that benefit me as a frigging rocket scientist?

    It obviously doesn’t. Benefits only go to the state (and to climatologists/meteorologists at the expense of rocket and other scientists).

    if there were such a conspiracy, the strongest incentives would be to blow the lid off of it, become famous AND be hailed as a hero of science

    As mentioned, conspiracy by scientists doesn’t enter into it. The overall mix and flavour of science is set by its paymasters, who fund, select and reward institutions and people they prefer – the criteria for which may well include factors other than their scientific prowess.
    But Yes, scientists who dare to differ would be able to make a name for themselves. But, given the single (and political) source of funding, what is the likelihood of such people being funded in the first place, or maintaining funding once they step out of line?

    [Response: What a load of twaddle. First off most funding for all sciences is decided among their peers (on grant review panels etc.) not by government bureaucrats; most funded research has nothing to do directly with what gets discussed in the media or on blogs; almost all of it is funded without thought to what the conclusions will be; and there are dozens of different sources of funding (NSF, NASA, NOAA, foundations, DOE, university endowments etc.). The idea that they all share an identical agenda is laughable. - gavin]

    Comment by BFJ Cricklewood — 30 Mar 2009 @ 3:57 PM

  146. #135 Barton Paul Levenson

    How about evaluating what’s true on the basis of the evidence instead of endlessly blathering about who benefits, and other worthless ad hominem arguments? It doesn’t matter if the government wants to rape puppies and torture small children–either what it says about tobacco is right, or it’s wrong. Gassing about their motives proves nothing one way or the other.

    Where there are no easy answers, how much confidence can one have in the “consensus” if one side of the argument is funded vastly more than the other? Facts do not speak for themselves.

    Comment by BFJ Cricklewood — 30 Mar 2009 @ 4:19 PM

  147. re 146. That funding side, as in the tremendous amount of disinformation funded by Exxon/Mobil, right? Because if you are not being equally skeptical with their disinformation campaign, why? Oh, because you have an agenda! Unlike the data which are unbiased.

    BTW, learn about what “consensus” means in the scientific sense before spouting off about things that you apparently do not know. Hint: Consensus and peer-review are two of the foundations of all science. Always have been.

    Comment by Dan — 30 Mar 2009 @ 4:38 PM

  148. Where there are no easy answers, how much confidence can one have in the “consensus” if one side of the argument is funded vastly more than the other?

    Which side of the argument are you thinking about?

    1. The government-funded scientists who were long unconvinced that our CO2 emissions would have a significant effect on climate?

    2. Or the government-funded scientists who later changed their mind as more and more scientific work supported the supposition?

    If government-funded science biases results as you claim it does, such a state change in the government-funded scientific consensus would be impossible.

    Comment by dhogaza — 30 Mar 2009 @ 4:43 PM

  149. “how much confidence can one have in the “consensus” if one side of the argument is funded vastly more than the other?”

    Yup. Compare the entire university grant funding of the world to the combined total revenue of the oil/coal/gas and tobacco industry.

    How can we believe the denialists when their funding comes from a group that has a captive market and can raise ‘taxes’ without losing revenue (unlike governments)?

    Gavin, I thought you were going to knock this sort of crap on the head. If not, then I’ll repeat the UK has a HUGE tax base from fuel taxes. Either the UK has the only altruistic government (which is killing their tax revenue for the benefit of other governments) or you’re wrong.

    Oracle says: funds balk.

    How untrue.

    Comment by Mark — 30 Mar 2009 @ 4:49 PM

  150. #145 Response, Gavin

    The idea that they all share an identical agenda is laughable.

    I can well imagine that’s how it looks from the inside. But the fact is they all do share a common sponsor. And one way or another, either directly or indirectly, he who pays the pipers will unavoidably call the single tune.

    [Response: You have absolutely no idea how the federal government works. There is no single tune - more a complete cacophony of random notes that very occasionally give rise to some beat frequency for a short time. The rest of the time, it's mostly chaos combined with a huge inertia. No black helicopters anywhere to be seen. - gavin]

    Comment by BFJ Cricklewood — 30 Mar 2009 @ 4:50 PM

  151. Consensus.

    If EVERYONE who drops a ball sees it fall DOWN, there is a consensus that things fall DOWN. Not UP.

    Because of this, is the probably apocryphal story of the apple hitting Newton on the head proof that gravity isn’t science? I mean, it’s a CONSENSUS, isn’t it. We all agree on that experiment, don’t we. ‘cept maybe David Copperfield who will say he can make it float and disappear.

    So if, as you imply, consensus is NOT science and anything with a consensus behind it cannot BE science, then gravity isn’t scientific.

    Really weird people out there.

    Comment by Mark — 30 Mar 2009 @ 5:01 PM

  152. #147 Dan

    re 146. That funding side, as in the tremendous amount of disinformation funded by Exxon/Mobil, right?

    A mere drop in the ocean compared to what the state spends.

    …you have an agenda! Unlike the data which are unbiased.

    This is a fundamental error. Different selections of facts can point to different conclusions.

    …what “consensus” means in the scientific sense…Consensus and peer-review are two of the foundations of all science.

    Consensus can be bought. As can peer-reviewers.

    [Response: This is hilarious. You obviously don't know any scientists either. But my pointing out that this is complete nonsense without a shred of evidence to back it up can simply be dismissed by the claim that I too am doing my paymaster's bidding. You have no idea how far away this is from the truth. - gavin]

    Comment by BFJ Cricklewood — 30 Mar 2009 @ 5:12 PM

  153. [Response: You have absolutely no idea how the federal government works.]

    YES HE DOES!!!

    He’s watched ALL the X-Files and read all the way up to level 10 of the Xenu biography. And David Icke told him about the TRUTH.

    They’re all non-human lizard overlords who hide EVERYTHING and want to keep the real humans from realising that the lizards are in charge.

    And he knows that there are no lobbying efforts from any corporation to coerce corruption in government because The Corporation Is Your Friend (Please Report For Termination. Have A Nice Day) and they wouldn’t do anything *bad*. Not like governments do. ‘cos governments are “peopled” by our inhuman lizard overlords whereas corporations are built from the VERY BEST humans.

    So there!

    Comment by Mark — 30 Mar 2009 @ 5:22 PM

  154. #148 dhogaza

    If government-funded science biases results as you claim it does, …a state change in the government-funded scientific consensus would be impossible.

    That wrongly assumes no change in thinking in the government.

    Comment by BFJ Cricklewood — 30 Mar 2009 @ 5:22 PM

  155. Hank Roberts in 117 wrote:

    >> Adam Gallon
    > [Response: Do even you take this kind of nonsense seriously?
    > Please take the conspiracy crap somewhere else….

    He’s _from_ the ‘somewhere else’; n.b. anti-Gore link behind his name.

    He thinks Al Gore wants to blow up the King of England. A few too many role playing games if you ask me.

    Comment by Timothy Chase — 30 Mar 2009 @ 5:31 PM

  156. BFJ Cricklewood, OK. Let me get this straight. We have temperature measurements that show each subsequent decade of the last 3 is warmer than the last, but you don’t trust them because the data are processed by the government. We have satellite measurements showing we’ve lost 2 trillion tons of ice in the past 5 years, but you don’t trust them because they’re government satellites. We’ve got glaciers retreating the world over, but you don’t trust those measurements because government scientists made them. We have a hundred and fifty years of climate science all of which supports anthropogenic causation of the current warming, but you don’t trust all that work because some of the scientists worked for the government.

    You contend that the whole motivation is for governments to extend their power, even though the measures needed to address climate change will alter the very fabric of the economies that have supported governments up to now.

    Gee, BFJ, given that this blog is about climate science, and you have no interest in science of any kind, since it’s all a government plot, as I have said above, it would appear we have nothing to discuss. Have a nice life, and enjoy your irrelevance.

    Comment by Ray Ladbury — 30 Mar 2009 @ 5:40 PM

  157. #149 Mark

    “how much confidence can one have in the “consensus” if one side of the argument is funded vastly more than the other?”

    Yup. Compare the entire university funding grant of the world to the combined total revenue of the oil/coal/gas and tobacco industry.

    No, compare the oil/coal/gas (why tobacco?) spend on climatology, to state spending on climatology. Peanuts.

    (Your comments on fuel tax I again cannot make head or tail of)

    Comment by BFJ Cricklewood — 30 Mar 2009 @ 5:46 PM

  158. BFJ Cricklewood (146) — In many, if not all, branches of science there are some researchers who don’t manage to put it together properly. So there are always some few producing way-out, easily-seen-to-be-wrong, kooky papers.

    Usually competent researchers just ignore the work as such individuals quickly gain an unfavorable reputation. That is certainly my view of some noisy few, from my amateur back-bench in climatology.

    Comment by David B. Benson — 30 Mar 2009 @ 5:46 PM

  159. A plea from a lurker:

    BFJ Cricklewood is proposing an argument whose natural conclusion is that *no* statement of fact can be trusted, as the person or institution making the statement has motivations that might cause them to lie. This view is completely consistent, totally uninteresting, and self-evidently being applied selectively to reject those views which s/he has other reasons to reject. I see no value in Gavin or other non-troll commenters on this site paying him any further attention.

    HOWEVER,

    the argument BFJ Cricklewood is proposing is conceivably somewhat persuasive to someone who knows nothing about the (for example) NSF funding process. And, in fact, even though I’m an academic-in-training, I don’t know very much about this process, either. It would be far more instructive if those people who *do* know about it could describe it in a little detail, rather than spending their time pointing out the (self-evident) fact that BFJ Cricklewood is engaging in nutcase conspiracy theorizing.

    In particular, I gather that NSF receives a pot of money from Congress each year. Then there is some process by which this money gets distributed to individual researchers, institutions, or students. How exactly are these decisions made? Does Congress direct the funds in any way? I know that people outside of the NSF are invited to take part in the grant-evaluation process — who takes part, and what is their role? What is the basic decision-making process? Has anyone on this thread ever taken part in an NSF grant review and so could describe it briefly?

    Comment by JBL — 30 Mar 2009 @ 6:10 PM

  160. #159 JBL

    BFJ Cricklewood is proposing an argument whose natural conclusion is that *no* statement of fact can be trusted, as the person or institution making the statement has motivations that might cause them to lie.

    No, you repeat the earlier mistake of inserting conspiracy into my argument. It’s more about systemic bias, the ultimate source of funding unavoidably having some influence – e.g. by selecting which institutions get money. This influence operates up at the political level, which is why it goes way over the head of your typical working scientist, who has technical issues in the forefront of his consciousness. And it applies mainly to disciplines like climatology, where the conclusions can have large political implications.

    [Response: Ahh.... so now we are unwitting dupes, unconsciously doing the bidding of our reptilian overlords! As the earlier commenter said, this a completely invulnerable shield of nonsense, and so, we must reluctantly bring down the curtain on this amusing interlude. No more responses on this please. - gavin]

    Comment by BFJ Cricklewood — 30 Mar 2009 @ 6:47 PM

  161. Ray Ladbury wrote in 156

    BFJ Cricklewood, OK. Let me get this straight. We have temperature measurements that show each subsequent decade of the last 3 is warmer than the last, but you don’t trust them because the data are processed by the government. We have satellite measurements showing we’ve lost 2 trillion tons of ice in the past 5 years, but you don’t trust them because they’re government satellites. We’ve got glaciers retreating the world over, but you don’t trust those measurements because government scientists made them. We have a hundred and fifty years of climate science all of which supports anthropogenic causation of the current warming, but you don’t trust all that work because some of the scientists worked for the government.

    Sounds about right — right down to your quantum mechanics, the absorption and emission of photons by matter (Einstein was a patent clerk), the spectra, any inconvenient fact. All the studies, papers, scientists and sciences become one vast conspiracy to hide the truth of his world view. Reminds me of some of the more extreme views I found among young earth creationists.

    Of course the larger the conspiracy, the more difficult it would be to keep it quiet:

    But more importantly, the longer a supposed conspiracy takes place and the wider the conspiracy, the more the number of people who must necessarily be involved, and the greater the number of chances each conspirator has for messing up, inadvertently slipping up and letting out enough details that the conspiracy will be discovered. In fact, the likelihood increases more or less exponentially with the amount of time involved, the number of conspirators, and the amount of evidence which must be covered-up.

    A conspiracy of silence
    By Timothy Chase, BCSE member
    http://www.bcseweb.org.uk/index.php/Main/AConspiracyOfSilence

    But even today the UK is home to a few who believe that the sun revolves around the earth.

    Date Index for geocentrism, 02-2005
    http://www.free lists.org/archive/geocentrism/02-2005

    Comment by Timothy Chase — 30 Mar 2009 @ 6:52 PM

  162. JBL (159) — NSF used to mainly use individual reviewers, much as peer review of paper publication. It seems that NSF is going for more panels these days; I’ve served on two and will never do it again, even if asked, which I won’t, being retired.

    The panelists are supposed to read the funding requests before meeting, discuss them, and then write a bit about each request, all while NSF program managers are observing.

    In either form of review, the funding requests fall into three easily divisible groups: definitely fund, don’t fund no matter what, and the hard cases. The hard cases are those whose funding depends upon the amount of funds that NSF has to grant, so some words about importance are necessary to provide guidance to the NSF program directors.

    In any case the NSF program directors have the final say about who is funded and at what level, subject only to review and potential changes by the NSF oversight committee. In this respect it is slightly different, but not much, from the editorial process used by most quality journals.

    [reCAPTCHA agrees: "confer- hearing".]

    Comment by David B. Benson — 30 Mar 2009 @ 7:10 PM

  163. RE JBL 30 March 2009 at 6:10 PM:

    I cannot speak for NSF, but I do work at NIH on extramural programs, so I have some knowledge about grant processes, and I have published some medical research papers so I have some knowledge of the peer-review publishing process.

    Congress makes an appropriation to the funding agencies to use for disbursement through grants. A researcher (Principal Investigator or PI for short) submits a grant application to the agency. The application describes the aims of the research, research plan, personnel and a proposed budget that covers the cost of the research project. The budget would normally include salaries, fringe benefits, supplies, equipment, and overhead for the institution (university or research center).

    The grant gets grouped together with other grants and the set of grants is reviewed by other researchers in the field (peers – at NIH they are mainly outside the government), who score the grant based on the importance (and novelty) of the specific aims, feasibility based on preliminary data and experimental plan and past performance of the PI. This is the basis of peer-review of funding.

    There is another peer review process for the publication of research findings. A set of authors (driven by the lead or senior author – first and last names on the author list) writes a manuscript and submits it to a scientific journal. There are various tiers of journals based on impact – Science and Nature rank near the top, followed by various specialty journals. Gavin, et al. can fill in the order for climatology, but I would assume that Energy and Environment doesn’t rank very high.

    The journal editor assigns the manuscript review to two or three reviewers (other scientists who should have the technical expertise to judge the work) and the reviewers try to pick apart the arguments that the authors are trying to put forward and give a judgment as to whether the article is publishable or not (or could be improved to make it publishable). If the manuscript is rejected, the authors will usually redo the manuscript and submit to another journal – the authors and the publishers look for a good fit, and authors often aim high.

    Congress will sometimes exert influence by directing some of the appropriation to a category ($X billion for AIDS, for example) or provide earmarks that are not part of the NIH or NSF budget and are not peer-reviewed.

    Peer review is an important part of the process, but the charge of conspiracy, whether applied to climate or viral origin of AIDS, falls flat. There are examples of truly novel research that goes against conventional wisdom; Marshall and Warren managed to publish their “heretical” H. pylori research and the Alvarezes got their K/T boundary paper published in Science. I know medical researchers who established strong reputations by thinking outside the box.

    Comment by Deech56 — 30 Mar 2009 @ 7:19 PM

  164. RE David B. Benson 30 March 2009 at 7:10 PM: But wouldn’t the discretion exercised by NSF Program Staff be somewhat limited? My experience is that the more highly scored applications generally get funded and applications with lower scores do not. I do know that there is a gray area that can be more up in the air (and there are adjustments for “program balance”).

    I also wonder how many programs get competed through contracts – contracts also get funded through a peer-review system, unless the agency can justify “other than full and open competition” and not have the process get challenged by a competitor.

    Comment by Deech56 — 30 Mar 2009 @ 7:27 PM

  165. David Benson has it about right. Most individual-investigator proposals are evaluated by letter review. Special programs are evaluated in panels (center grants, etc.) Panels, which tend to look at proposals over a large range of fields, will often receive written reviews from experts. NSF panels tend to be fairly small, 10-15 people. Each person is asked to be a primary reviewer on ~5 and a secondary reviewer on ~5 proposals in the panel. The job of the secondary is to summarize the letter reviews. A third person keeps track of the discussion and writes the review. Everyone has to sign off on every review.

    In NIH almost everything runs over very large panels called study sections. Very intense stuff.

    NASA runs mostly by panels, at least in the areas Eli is familiar with

    If Eli was a grad student getting ready to graduate, rather than an old and tired bunny, he would call up a program officer in his area and discuss how the reviews are done in that directorate, what reviewers tend to stress. Anyone who actually has a new faculty position should volunteer to serve on a panel to get an idea of what wins.

    Have to go now, I owe a review

    Comment by Eli Rabett — 30 Mar 2009 @ 7:37 PM

  166. JBL wrote in 159:

    BFJ Cricklewood is proposing an argument whose natural conclusion is that *no* statement of fact can be trusted, as the person or institution making the statement has motivations that might cause them to lie.

    I would change that to “either has motivations or may have motivations that might cause them to lie.” This way he can assume that they are merely hapless pawns fooled by the grand conspiracy until they start detailing why they are certain.

    JBL wrote in 159:

    This view is completely consistent, totally uninteresting, and self-evidently being applied selectively to reject those views which s/he has other reasons to reject. I see no value in Gavin or other non-troll commenters on this site paying him any further attention.

    Not too far removed from the theory that one’s brain is in a vat being zapped by aliens. Or that some omnipotent being created the world last Wednesday. But I’ve previously spent eighty pages analyzing Descartes’ Six Meditations almost line by line. I probably wouldn’t want to go through that again — and yet it would be a great deal more interesting than playing Cricklewood’s game.

    What is it with all these tin-foil hats anyway? Are we being flash-mobbed by a convention of schizophrenics?

    JBL wrote in 159:

    HOWEVER,

    the argument BFJ Cricklewood is proposing is conceivably somewhat persuasive to someone who knows nothing about the (for example) NSF funding process….

    ….Has anyone on this thread ever taken part in an NSF grant review and so could describe it briefly?

    Can’t help you there, but I would be interested.

    Comment by Timothy Chase — 30 Mar 2009 @ 7:37 PM

  167. “Never argue with an idiot. They drag you down to their level then beat you with experience.”–Jawaad Abdullah

    If a man alleges that all the evidence is tainted by politics, that all the scientists are tainted by politics and refuses to even discuss the theory, the chances of having a discussion based on science are pretty slim. Mr. Cricklewood would reduce climate change to politics because he knows he cannot win on the basis of the evidence.

    Don’t play that game. Don’t feed the troll.

    Comment by Ray Ladbury — 30 Mar 2009 @ 7:51 PM

  168. It is truly a sad reflection on the state of science education when a software engineer (i.e., Cricklewood) actually believes he knows something that all the major climate science professional societies across the world do not. It reminds me of the “chemtrail” people. Now that’s one to Google for a laugh.

    Comment by Dan — 30 Mar 2009 @ 7:59 PM

  169. Thanks to everyone who responded!

    Timothy Chase wrote in 166: “but I would be interested.”

    Yes, me too — it was certainly more interesting to hear a description of why BFJ Cricklewood was wrong directed at those of us who mostly just listen in than it would have been to see a few more posts noting that BFJ Cricklewood is completely nuts.

    I think I’m straying into Hank Roberts’ territory in noting this, but writing *to trolls* is almost always worthless: they don’t learn, and it sidetracks threads into uninteresting tangents. But writing *about why trolls are wrong* (and directing the comments elsewhere) can be quite interesting, and also feeds them less.

    Comment by JBL — 30 Mar 2009 @ 8:20 PM

  170. Deech56 (164) — NSF Program Directors usually, almost always, take the advice of the reviewers. But in the gray area where the proposals are not of the very highest quality and the program dollars are just about all committed, some judgement calls must be made.

    Comment by David B. Benson — 30 Mar 2009 @ 8:50 PM

  171. Re: JBL (159), David B. Benson (162), Deech56 (163, 164), Eli Rabett (165)

    So basically one of the key problems with Cricklewood’s “theory” is that the decision-making that determines which studies get funded and which do not is itself highly decentralized.

    However, I would argue that another (albeit related) key problem is the decentralized nature of the process of scientific discovery. Those who determine which research gets funded won’t know beforehand what will be discovered — and they wouldn’t be able to keep track of all of the interconnections which will be discovered by the vast number of independent and highly intelligent minds that are involved in this process. To do so would greatly exceed the intelligence of any central authority, whether it be an individual or a committee.

    Something about all of this sounds oddly familiar.

    However, I would argue it is also closely related to why science is so powerful:

    The justification for a conclusion supported by several independent lines of investigation is generally far greater than that which it receives from any given line of investigation considered in isolation.

    In science there are a great many largely independent lines of evidence and investigation, each of which are supported by a great many more.

    Comment by Timothy Chase — 30 Mar 2009 @ 9:33 PM

  172. The more speculative a proposal is, the more the reviewers want to see preliminary evidence that there is some there somewhere or other.

    Comment by Eli Rabett — 31 Mar 2009 @ 12:06 AM

  173. #26-29 Gavin, as would be expected, the anti-science media picked up on the Dyson words we discussed, word for word. Tragically predictable, quite human I am afraid; only Dyson can completely untangle the mess. I doubt he will, given that retractions are rare amongst high-reputation scientists.

    http://mediamatters.org/countyfair/200903300047?show=1

    Comment by Wayne Davidson — 31 Mar 2009 @ 1:16 AM

  174. “No, compare the oil/coal/gas (why tobacco?) ”

    For those who aren’t whacko nutjobs but are likewise unsure why tobacco was included, Philip Morris (big Tobacco) is a big supporter of all the better-funded skeptic circuits. The reason for this is that if it can be “proven” that climate science is wrong about GW, then maybe the biologists are wrong about the dangers of smoking.

    And if Body Thetan Cricklewood wants to count only what’s spent on climatology for these people, then it should be the same for government-sponsored work. Exclude meteorology even, not just biology, engineering, astronomy, particle physics,… But the tally should include all lobbying efforts by the companies, since they wouldn’t waste money on lobbying if they didn’t think it would work, and they wouldn’t pass up countering work that would see their revenue sink like a stone.

    You still have more money on the anti side.

    PS I now have Batfink on my mind ‘cos of that guy: “your truths cannot harm me! My brainpan is like a shield of steel!”

    PS any way to skip the “please reword ‘cos it’s spam” when there’s naff all spammy about the message. Or at least show up the text with the bad words in there.

    Comment by Mark — 31 Mar 2009 @ 4:27 AM

  175. Apparently either “muc ho” or “dine ro” were spam.

    How?

    The day that spammers start selling e is a day wordpress sites will die. Or maybe I should say “Th day that spammrs start slling is a day wordprss sits will di”. Heck even wordprss will have to change its name…

    I can understand WHY there’s a spam filter. It seems as if it’s not only throwing the baby out, but the bath, sink, plumbing and the family dog out with the bathwater.

    Comment by Mark — 31 Mar 2009 @ 4:30 AM

  176. Re #171 Timothy Chase

    “However, I would argue that another (albeit related) key problem is the decentralized nature of the process of scientific discovery. Those who determine what funding gets done won’t know beforehand what will be discovered — and they wouldn’t be able to keep track of all of the interconnections which will be discovered by the vast number of independent and highly intelligent minds that are involved in this process.”

    One is reminded of the recent UK GM Field-scale trials. For example see the range of commentaries here:

    http://www.agbioworld.org/biotech-info/articles/biotech-art/farmscaleevaluations2.html

    Note e.g. the contrast between Nigel Williams & Conrad Lichtenstein.

    Comment by Chris S — 31 Mar 2009 @ 4:30 AM

  177. RE Timothy Chase 30 March 2009 at 21:33

    “So basically one of the key problems with Cricklewood’s ‘theory’ is that the decision-making that determines which studies get funded and which do not is itself highly decentralized.”

    Good point – for the most part, it’s out of the hands of the government (or more correctly, the government relies on the reviewers for advice and almost always follows their advice). Program people are also glad that grant/contract review is separate from the influence of Congress and lobbyists. We really want the science to be the highest quality – a successful grant or contract portfolio (advancement of the science, publications) is good for everyone.

    Comment by Deech56 — 31 Mar 2009 @ 4:46 AM

  178. Timothy Chase (171): Of course, the response to your reasonable conclusion (as expressed to me by one of the leading practitioners of the Chewbacca defense) is usually that scientists are engaged in “groupthink.” This impression is fed by the utterings of people like Spencer, W. Gray and Lindzen.

    Comment by Deech56 — 31 Mar 2009 @ 5:05 AM

  179. “that scientists are engaged in “groupthink.” This impression is fed by the utterings of people like Spencer, W. Gray and Lindzen.”

    Which is another group think.

    Odd, eh?

    Comment by Mark — 31 Mar 2009 @ 8:09 AM

  180. Re: Dyson (please correct if I have made an error)

    The most substantive point in the Wikipedia article is

    “The effect of carbon dioxide is more important where the air is dry, and air is usually dry only where it is cold. The warming mainly occurs where air is cold and dry, mainly in the arctic rather than in the tropics, mainly in winter rather than in summer, and mainly at night rather than in daytime. The warming is real, but it is mostly making cold places warmer rather than making hot places hotter. To represent this local warming by a global average is misleading,”

    While the last sentence may have some merit (for different reasons from those above) , the rest seems dubious.

    1. Positive feedback caused by the rise in water vapour (caused by warming) accounts for perhaps half of the estimated warming, and this will be concentrated where the air is humid, in contradiction to Dyson’s “cold and dry”.

    2. The enhanced CO2 will also have a direct effect where the air is humid because its absorption spectrum does not completely overlap the water vapour.

    2a). Some of the extra CO2 may end up lying above (i.e. at a greater height than) a humid region. Why can’t this act to make hot places hotter?

    [Point 2 is also on Michael Tobis's web page, where he makes other points]
    —————————-
    By the way I think that Dyson is a great enough physicist without the hype being given to his contributions in some places. But the conjecture that he is right about everything needs to be tested. In the past he appears to have assumed Moore’s law for everything for a century or more. I’m surprised that his economic models only contain growing exponential functions. How about Malthus?

    Comment by Geoff Wexler — 31 Mar 2009 @ 10:29 AM

    For those who aren’t whacko nutjobs but are likewise unsure why tobacco was included, Philip Morris (big Tobacco) is a big supporter of all the better-funded skeptic circuits. The reason for this is that if it can be “proven” that climate science is wrong about GW, then maybe the biologists are wrong about the dangers of smoking.

    Don’t forget that the tobacco industry has also invested in the “DDT is harmless and environmentalists banned it because they want to kill poor black people in Africa” scam.

    Comment by dhogaza — 31 Mar 2009 @ 10:56 AM

  182. Re #181

    They don’t have to prove that climate science is wrong. Just throw some doubt on the science. Then that doubt rolls over onto the “cigarettes cause cancer” science, and people take a chance.

    But the spreading of doubt has worked! See http://tigger.uic.edu/~pdoran/012009_Doran_final.pdf It is no use calling for more education of the public. What we need is more public relations training in “some scary scenarios” for scientists.

    Cheers, Alastair.

    Comment by Alastair McDonald — 31 Mar 2009 @ 11:21 AM

  183. RE Mark 31 March 2009 at 8:09 AM

    Me: This impression [groupthink] is fed by the utterings of people like Spencer, W. Gray and Lindzen.

    Mark: Which is another group think.

    But the denial crowd really can’t agree on anything (CO2 up? No warming? Solar? Surface temps?) except that the published literature is wrong. That fact doesn’t seem to bother them. Excuse me – I think my head is going to explode.

    Comment by Deech56 — 31 Mar 2009 @ 11:49 AM

  184. Weaving of Threads, part I of II

    Deech56 wrote in 178:

    Timothy Chase (171): Of course, the response to your reasonable conclusion (as expressed to me by one of the leading practitioners of the Chewbacca defense) is usually that scientists are engaged in “groupthink.” This impression is fed by the utterings of people like Spencer, W. Gray and Lindzen.

    And your acquaintance would be right after a fashion. Dialogue is a form of group-think in which the group is capable of far more than any individual in isolation.

    Please see:

    Likewise, this is the principle behind dialogue. It is largely a matter of mathematics. If you have two individuals where each has only three insights which neither shares with the other, each individual is able to make only three connections between any two points. However, if these two individuals come together, there exists the possibility of making fifteen different connections. Bring in a third person and the number goes up to twenty-eight, and a fourth brings it to sixty-six. And if instead of simple, directional two-term connections, one thinks in terms of paths between all the available points, with one individual there are six possibilities, but with four people the number of potential paths goes up to more than 479 million.

    A conspiracy of silence
    By Timothy Chase, BCSE member
    http://www.bcseweb.org.uk/index.php/Main/AConspiracyOfSilence

    I have seen this principle at work — particularly at St. John’s College:

    But this isn’t simply a matter of abstract theory. I have seen this in action at St. John’s College. At this school, we would read things like “The Origin of the Species,” Plato’s “The Republic” or “St. Augustine’s Confessions,” then come in and discuss what we had read.

    Oftentimes people wouldn’t have read the assignments, and the discussion would simply turn into some sort of bull session where people would simply debate poorly thought-out personal opinions. This happened the good majority of the time. Alternatively, some one person would try to dominate the discussion, and we would simply end up discussing his views.

    However, every once in a while we would have a genuine dialogue where insight would build upon insight upon insight until the illumination was almost blinding. Individuals who normally didn’t seem that terribly bright would have insights which made them seem like geniuses. After an especially good discussion, you would leave the classroom, and it would feel like you were six feet off the ground. It would take more than an hour to come back down to earth.

    ibid.

    That is the power of human thought and civilization:

    But to leave things on a somewhat more positive note, these principles also suggest something of the power of human thought itself. The history of thought is the history of an ancient and ongoing dialogue. New participants come and older participants go, but the understanding of the community of participants becomes wider, deeper and stronger over time — thanks to the participation of everyone involved.

    ibid.

    … and it is key to understanding science:

    Empirical science plays a very important part in that dialogue, but in a certain sense it could be viewed as something even wider: a dialogue between humanity and the world in which we live. It is a dialogue in which the questions we ask of nature determine what kind of answers we receive from it — which then affects what questions we will ask afterwards. But this dialogue does not proceed along any one line of conversation. There are many different threads which are largely independent of one-another. With congruence between different, independent lines of investigation, the conclusions which we reach take on far greater justification than any one line of investigation would be capable of by itself.

    ibid.

    Comment by Timothy Chase — 31 Mar 2009 @ 12:14 PM

  185. Weaving of Threads, Part II of II

    Science and civilized thought aren’t merely some sort of echo chamber or mass delusion — and if someone were to argue otherwise they would be guilty of self-referential incoherence insofar as the very fabric of their thought is dependent upon that “mass delusion.” How could they possibly know what they claim to know? This is the problem with radical skepticism.

    So as not to post something especially long, I will refer you to something I wrote some time ago. I posted the piece in DebunkCreation, although it was actually part of a much longer paper.

    The part that I am referring you to begins here, a few paragraphs down:

    Something Revolutionary: A Critique of Kant’s Transcendental Idealism
    Part 9, Section 22: The Meaning of Self-Referential Incoherence

    At this point, I would like to introduce what I call “the norm of self-referential coherence.” This norm prohibits self-referential incoherence. Thus to understand the meaning of the norm, one must understand the meaning of self-referential incoherence. I will present an example before attempting a definition.

    Re: [DebunkCreation] Epistemic Abyss
    Fri Nov 11, 2005 4:09 am
    http://groups.yahoo.com/group/DebunkCreation/message/81678

    But in short, if a radical skeptic were to claim that all of this is simply a mass delusion, then in logic he couldn’t claim to know this or to even know that the proposition were meaningful.

    However, the above is concerned with the global problem of radical skepticism, not some more localized form of denialism. So how do we respond to that? Is it possible for the scientific consensus to be wrong?

    First we must admit that there exist widespread interdependencies between the sciences.

    Please see the following comment, and in particular the section that begins:

    Now I will begin my second example. Roughly at the time that Darwin wrote, it was considered a recognised fact that the earth and the sun couldn’t be more than a few million years old: the only fires known were chemical fires, and alternatively, the only other source of energy which we could conceive of for the sun was that released as the result of gravitational collapse. On the basis of the latter, Lord Kelvin calculated that the age of the sun had to be in the range of millions of years, not thousands of millions. This required evolution to take place at a rate which seemed unlikely.

    Do Scientific Theories Ever Receive Justification?
    A Critique of the Principle of Falsifiability
    16 November 2007 at 1:12 PM
    http://www.realclimate.org/index.php/archives/2007/11/bbc-contrarian-top-10/#comment-68052

    Likewise, we must admit that errors are possible, but then we can point to the fact that science is self-correcting:

    Scientific theories are a form of knowledge, but they are a form of corrigible knowledge — and science itself is a fallibilistic, self-correcting endeavor — in which progress is real, and knowledge is cumulative despite the errors which may be made along the way.

    http://www.realclimate.org/index.php?p=583#comment-94611

    … and that even in the case of major paradigms for which there existed a great deal of evidence but which were later replaced, much of what was at one time thought to be true has in fact been preserved in the form of a correspondence principle between the older theory and its newer replacement — that the greatest difference between the two lies simply in the languages in which the theories are expressed (ibid.).

    What the particular conclusions of empirical science draw their strength from consists for the most part in the multiple, largely independent lines of empirical investigation. The more they accumulate, the stronger the justification for those conclusions become, such that even when some of the stronger conclusions are eventually replaced, we may know with some confidence that much of their content will be preserved, and that the difference will largely consist of the form (language) in which that content is expressed.

    A great deal of empirical evidence has accumulated for the major conclusions of climatology, evidence from many largely independent lines of investigation. In logic, if one is to trade radical skepticism for skepticism of a more delineated form with respect to empirical conclusions justified by reference to evidence, one cannot offer a broad, philosophic argument that by its very nature would undermine all human thought; one must provide arguments of a more delineated form. In logic, one cannot arbitrarily proclaim the major conclusions of climatology unwarranted without addressing the issue of the evidence that supports them — and presenting an alternative which is equally coherent and capable of explaining that evidence with the same degree of specificity.

    This is something which denialists will never attempt because even they see the utter futility of such an endeavor.

    Comment by Timothy Chase — 31 Mar 2009 @ 12:19 PM

  186. Geoff Wexler, That Dyson quote is a wonderful illustration of how a very smart guy can be very wrong when he ventures outside his realm of expertise. How wet does he think the atmosphere is above cloudtops? As I’ve said before, I don’t think Dyson likes to get bogged down in details, and therein lies the devil.

    Comment by Ray Ladbury — 31 Mar 2009 @ 12:36 PM

  187. I had written towards the end:

    In logic, one cannot arbitrarily proclaim the major conclusions of climatology unwarranted without addressing the issue of the evidence that supports them — and presenting an alternative which is equally coherent and capable of explaining that evidence with the same degree of specificity.

    This is something which denialists will never attempt because even they see the utter futility of such an endeavor.

    Weaving of Threads, Part II of II

    Case in point:

    So, Jim Bob, with your degree in “physical science” and computer-programming skills, perhaps you could enlighten us on exactly what evidence the “skeptical” side has presented. ‘Cause try as I might, I can’t find jack in the published literature that is at all convincing. Maybe you could start with a model of Earth’s climate that has a CO2 sensitivity less than 2 degrees per doubling? No? How about an explanation of how “solar effects” explain the past 30 years of warming when solar luminosity has been pretty much constant over that period? No? How about a learned treatise on how either a solar or PDO mechanism can warm the troposphere while cooling the stratosphere? Nope? Or how about how a local “oscillation” gives rise to a sustained warming lasting decades? No, huh?

    Ray Ladbury
    31 March 2009 at 11:26 AM
    http://www.realclimate.org/index.php/archives/2009/03/a-potentially-useful-book-lies-damn-lies-science/#comment-116887

    *
    Captcha fortune cookie:
    to listen

    Comment by Timothy Chase — 31 Mar 2009 @ 12:54 PM

  188. Seeing how the envelope in any of the graphs spans a 1.6 deg C possible trend, while the observed temp has only varied 0.4 over the graph time span, not only is the model envelope an abject failure, the whole thing fails by design, especially with 75% play in the envelope. What is being proven or disproven here, and why so many “told you so’s” from the commenters? It’s getting old.

    [Response: You miss the point entirely. Short term trends are too variable due to the internal variability to constrain long term sensitivity. - gavin]

    Comment by iheart — 31 Mar 2009 @ 1:05 PM
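
    (A rough Monte Carlo illustration of the point in the response above: with plausible year-to-year noise on top of a fixed 0.02 deg C/yr trend, 10-year trend estimates scatter far more widely than 30-year ones. The noise parameters are illustrative guesses, not fitted to any model or data set.)

        import numpy as np

        rng = np.random.default_rng(0)
        true_trend, n_years, n_sims = 0.02, 30, 2000

        def ar1_noise(n, phi=0.6, sigma=0.1):
            # Simple AR(1) "weather noise" generator.
            x = np.zeros(n)
            e = rng.normal(0.0, sigma, n)
            for i in range(1, n):
                x[i] = phi * x[i - 1] + e[i]
            return x

        t = np.arange(n_years)
        trends_10, trends_30 = [], []
        for _ in range(n_sims):
            y = true_trend * t + ar1_noise(n_years)
            trends_10.append(np.polyfit(t[:10], y[:10], 1)[0])
            trends_30.append(np.polyfit(t, y, 1)[0])

        print("10-yr trends: mean %.3f, spread (std) %.3f" % (np.mean(trends_10), np.std(trends_10)))
        print("30-yr trends: mean %.3f, spread (std) %.3f" % (np.mean(trends_30), np.std(trends_30)))
        # The decade-long estimates wander widely around 0.02, so a single
        # short window says little about the underlying long-term rate.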

  189. I do now see that you are correct in your response, Gavin. Maybe this argument should not be made from either side then, at this time. Thanks

    Comment by iheartheidicullen — 31 Mar 2009 @ 1:31 PM

  190. “But the denial crowd really can’t agree on anything”

    Oh, they all agree that AGW isn’t a problem.

    There’s a large group (think about where the term “groupthink” has its etymological roots) that believes that anything the government does is WRONG.

    There’s plenty of agreement.

    They don’t agree on WHY or HOW.

    Mostly because they don’t CARE about having a theory; they just want to tear one down. There’s no need for a consistent theory if all you want is another one torn down, is there? Offering one is only likely to result in you being proven wrong. So don’t give a counter-theory, just make out how bad the AGW theory is. DO NOT REPLACE IT. A replacement isn’t wanted, and it hurts the denialist aim: remove AGW as a theory.

    Comment by Mark — 31 Mar 2009 @ 1:41 PM

  191. Deech56 wrote in 183:

    But the denial crowd really can’t agree on anything (CO2 up? No warming? Solar? Surface temps?) except that the published literature is wrong. That fact doesn’t seem to bother them.

    We took different paths, but we appear to have arrived at the same place.

    Deech56 wrote in 183:

    Excuse me – I think my head is going to explode.

    Quite possibly.

    Comment by Timothy Chase — 31 Mar 2009 @ 2:09 PM

  192. “abject failure”! sounds familiar…

    Comment by walter crain — 31 Mar 2009 @ 2:52 PM

  193. Read what Hansen wrote about the NYT before you fall for the spin in their article about Dyson and Hansen.
    http://solveclimate.com/blog/20090329/ny-times-invents-climate-science-war

    Go to Hansen’s page for the full letter he wrote.

    He’s quite open to Dyson’s point of view, when understood. It’s a mature response.

    Recommended.

    Comment by Hank Roberts — 31 Mar 2009 @ 3:05 PM

  194. Of course the graph is dependent on the end point. Of course the various points are highly correlated. Why would anyone expect differently?

    DID MICHAELS CLAIM OTHERWISE?

    No.

    He is showing that, independent of starting point, the models aren’t doing particularly well. He is making this argument SPECIFICALLY because previous attacks have claimed that the starting point was cherry picked. As far as I know, he hasn’t been attacked for choosing the most recent data as an end point, because until now that was considered the right thing to do.

    Using the end of 2008 is ONLY cherry picking if we know that temperatures are going to go up significantly in 2009 and beyond. Are we sure of this? Is Real Climate ready to make a prediction for 2009 and 2010 temperatures?

    If you are going to criticize his graph because he doesn’t use 2009 and 2010 data, then you should tell us what you expect temperatures in those years to be, so we can revisit the issue should your expectations be wrong. [I'll be using HadCRUT to see how accurate you were]

    OTOH, if we don’t know that 2009 and 2010 temperatures are going to be up significantly, then there really isn’t any basis for criticizing the graph.

    Everybody agrees that a very warm 2009 will render this graph useless, while a very cold 2009 will mean that the model mismatch has been greatly understated. What is news, in your post, is that “next year will likely be warmer than last year”. That’s good to know. Would you care to quantify this so we can understand just how misleading an endpoint 2008 really is?

    [Response: Actually Michaels did show his graph with 2009 data filled in equal to 2008. That wasn't my invention. But he didn't show what you get using GISTEMP and he didn't show what would happen if (as is likely) 2009 is warmer than 2008. And yes, that is a prediction. The point is that Michaels claimed that these results imply that the models are 'abject failures'. Such a strong conclusion should rely on more than a single year of data no? - gavin]

    Comment by Jason — 31 Mar 2009 @ 3:16 PM

  195. Jason (194):
    “He is showing that, independent of starting point, the models aren’t doing particularly well”

    No, he THINKS he is showing that. What he’s actually showing is that short term plateaus, or even declines, can occur in a long-term upward trend, which is absolutely not news. Also, what does he have to say when the observed trends are higher than predicted by the models? Let me guess “the temporal variance is too high–the models never correctly predict the time course of T evolution, and therefore are not to be trusted”

    “Using the end of 2008 is ONLY cherry picking if we know that temperatures are going to go up significantly in 2009 and beyond.”

    Wrong. It’s also cherry picking if you wait for a relatively low year and then do your analysis. As mentioned in the article, why didn’t he do it last year?

    “Everybody agrees that…a very cold 2009 will mean that the model mismatch has been greatly understated.”

    Yeah everybody…except the ones who know something about it. Model mismatch for what exactly? For how long? And since you’re fond of odds-making, what are your odds for a “very cold 2009″ given that like 9 of the warmest years in the last century have occurred in the last 11 years?

    Comment by Jim Bouldin — 31 Mar 2009 @ 4:06 PM

  196. Michaels is making this presentation to a political audience. While I would not agree that the models are an “abject failure”, he is presenting to people who have been told that “the science is settled”. Properly nuanced testimony on Capitol Hill falls on deaf ears.

    A quick review of your boss’s public statements shows a similar lack of nuance. Climate sensitivity has been “nailed” at three degrees centigrade. Hardly any acknowledgment is made of how very limited our understanding of the climate system is.

    And I have a hard time faulting him for it. Hansen and Michaels both understand the environment in which their statements are being interpreted. They are using stronger language and fewer caveats than I would like. But they aren’t talking to me. They are talking to congress and to the public at large, and they are adjusting their statements accordingly.

    I honestly find “the science is settled” to be a far more unreasonable statement than “the models are an abject failure”. Science (which the models are part of) is evolving based on real world observations. Our understanding of climate WILL improve. The current models WILL be replaced. Climate sensitivity in the new models WILL be higher or lower than in the old ones. Congressional testimony about global warming WILL remain largely devoid of nuance. [Those are my four predictions.]

    [Response: The only scientist I can find that has ever said the 'science is settled' is...... Patrick Michaels (last paragraph). You will not find such a statement made in any post on RC. - gavin]

    Comment by Jason — 31 Mar 2009 @ 4:34 PM

  197. I’ll hazard a prediction for the global surface temperature for 2009 CE: one of the cooler years this century (2000 CE onwards) and in the top 12 overall. I base this solely on the prolonged solar minimum just now. If a goodly el Nino happens to come along, I probably lose.

    Comment by David B. Benson — 31 Mar 2009 @ 4:53 PM

  198. With Lucia showing here:
    http://rankexploits.com/musings/2009/multi-model-mean-trend-for-models-forced-with-volcanic-eruptions-mega-reject-at-95/

    What if we pick 95% as our confidence intervals?
    Well… then we don’t reject this multi-model mean in 1974, or 1996 and for a few years after. So, if you feel bound and determined to save the reputation of the models, you should think up reasons why 1974 or 1996 are the “correct” years for testing models over the “longer term”, while simultaneously claiming you picked these entirely at random.
    At the 95% confidence interval, the multi-model mean using volcanic cases only are rejected if we happen to use 2001 to compute the initial trend. What if 2000 is the right start year? We reject the multi-model mean based on cases with volcanic forcing.
    So, to those who think these “rejections” are due to selecting a short period for analysis: Nope! These rejections are due to the observed earth temperature veering away from the projected values.

    And a study by Evan et al. (2009), The Role of Aerosols in the Evolution of Tropical North Atlantic Ocean Temperature Anomalies, reports:

    Observations and models demonstrate that northern tropical Atlantic surface temperatures are sensitive to regional changes in stratospheric volcanic and tropospheric mineral aerosols. However, it is unknown if the temporal variability of these aerosols is a key factor in the evolution of ocean temperature anomalies. Here, we elucidate this question by using 26 years of satellite data to drive a simple physical model for estimating the temperature response of the ocean mixed layer to changes in aerosol loadings. Our results suggest that 69% of the recent upward trend, and 67% of the detrended and 5-year low pass filtered variance, in northern tropical Atlantic Ocean temperatures is the mixed layer’s response to regional variability in aerosols.

    And with Hansen’s presentation at Copenhagen, found at http://www.columbia.edu/~jeh1/2009/Copenhagen_20090311.pdf

    where he said:

    The aerosol forcing is negative and substantial, but the truth is that, based only on first principles, we do not know the aerosol forcing as well as indicated.

    We do not have measurements of aerosols going back to the 1800s – we don’t even have global measurements today.

    Any measurements that exist incorporate both forcing and feedback.

    Aerosol effects on clouds are very uncertain.

    Does this not show that the models which incorporate volcanic forcing cannot model aerosol forcing, since there are no measurements to use for parameterization and, per Hansen, we do not know enough to work from first principles?

    [Response: You are confusing many separate issues. a) volcanic stratospheric aerosol loads are reasonably well known back to Agung (1963). Earlier eruptions (such as Krakatoa etc.) are estimated based on ice core sulphate loads in both hemispheres. They are probably ok (though it gets worse going into the pre-industrial). The size distribution and particle type for these eruptions is reasonably well known and the radiation perturbations match well what was observed by ERBE etc. This is not what Hansen is talking about. b) there are many natural aerosols in the atmosphere. In the tropical Atlantic there are significant amounts of dust that come in from the Sahara. Depending on the rainy season in any one year, there might be more or less dust. Dust has a direct radiative effect and hypothesised impacts on ice nucleation, the variance in the dust then, might affect sea surface temperatures. This is mostly what Evan et al are talking about. c) anthropogenic aerosols - mainly sulfate and nitrate (from emissions of SO2 and NOx/NH3) have a strong direct effect and undoubted liquid cloud nucleation impacts (the indirect effects). These are not well known and are what Hansen is talking about. - gavin]

    Comment by Vernon — 31 Mar 2009 @ 5:05 PM
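
    For readers curious what a “simple physical model for estimating the temperature response of the ocean mixed layer” might look like, here is a rough, hedged sketch of that kind of slab-ocean energy balance. The depth, feedback parameter and forcing series below are placeholder assumptions for illustration, not the numbers used by Evan et al.

        # Toy slab ("mixed layer") ocean: dT/dt = (F - lambda*T) / (rho*cp*h)
        import numpy as np

        rho, cp = 1025.0, 3990.0   # seawater density (kg/m^3), heat capacity (J/kg/K)
        h = 50.0                   # assumed mixed-layer depth (m)
        lam = 1.5                  # assumed net feedback parameter (W/m^2 per K)
        C = rho * cp * h           # heat capacity per unit area (J/m^2/K)

        dt = 86400.0               # one-day time step (s)
        days = 26 * 365
        # illustrative aerosol forcing: dust loading slowly declining over 26 years,
        # so the (negative) forcing relaxes toward zero
        forcing = np.linspace(-1.0, 0.0, days)   # W/m^2

        T = 0.0                    # mixed-layer temperature anomaly (K)
        for F in forcing:
            T += dt * (F - lam * T) / C          # forward-Euler energy balance

        print(f"SST anomaly after {days // 365} years: {T:+.2f} K")

    The point of such a model is only to translate a prescribed radiative forcing history into an SST response with a realistic thermal lag; it says nothing about what sets the forcing in the first place.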

  199. Jim Bouldin (195):

    “Wrong. It’s also cherry picking if you wait for a relatively low year and then do your analysis. As mentioned in the article, why didn’t he do it last year?”

    Off the top of my head I can think of two different blogs that performed substantially similar analyses a year ago.

    There are two reasons (besides declining global temperatures) why this issue is receiving increased attention now:

    1. There is a readily available public archive of model runs.

    2. Many of those models have remained largely static since AR3.

    This means that for the first time a large number of models can be readily tested against temperature data recorded AFTER those models were finalized.

    Each year that passes will give us another year of real data to compare to the models. More years mean more statistical certainty when performing these analyses.

    “Yeah everybody…except the ones who know something about it. Model mismatch for what exactly? For how long? And since you’re fond of odds-making, what are your odds for a “very cold 2009″ given that like 9 of the warmest years in the last century have occurred in the last 11 years?”

    I AM very fond of odds making. I would bet that the IPCC prediction for the first three decades of this century as I understand it (0.2 degrees per decade) overstates the amount of global warming we will actually experience. I would also bet that Hansen’s estimate of climate sensitivity at 3 degrees Centigrade is too high.

    I would be flexible in terms of creating a structure for such a bet. Both parties should believe that winning proves something about the other’s a priori assumptions. A one year bet probably wouldn’t do since you could blame a cold 2009 on random noise, and I would feel compelled to agree. (And anyway, 2009 needs to be a fair bit warmer than 2008 to make Michaels’ graph look bad in hindsight.)

    Is this something you are interested in?

    Comment by Jason — 31 Mar 2009 @ 5:10 PM
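
    Jason’s point that “more years mean more statistical certainty” can be quantified in a simple, idealized way: for a record with independent year-to-year noise, the standard error of a least-squares trend shrinks roughly as the record length to the power 3/2. The sketch below assumes white noise of 0.1 deg C; real data are autocorrelated, which makes the true uncertainties somewhat larger.

        # Approximate 95% uncertainty of an OLS trend vs. record length (white noise).
        import numpy as np

        sigma = 0.1   # assumed interannual noise, deg C

        def trend_se(n_years, sigma):
            """Standard error of the OLS slope, in deg C/decade."""
            t = np.arange(n_years)
            return 10 * sigma / np.sqrt(np.sum((t - t.mean()) ** 2))

        for n in (8, 15, 30):
            print(f"{n:2d} years: +/- {2 * trend_se(n, sigma):.2f} deg C/decade")

    Under these assumptions, with only eight years the uncertainty is larger than the roughly 0.2 deg C/decade being argued over, which is the nub of the disagreement above.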

  200. Gavin wrote:

    “But he didn’t show what you get using GISTEMP”

    I think that providing GISTEMP is a very good thing that GISS does. But I, and many other skeptical folks, view the GISTEMP product with some suspicion — in particular, the tendency of retroactive temperature adjustments to increase the trend, and your statement about 0.25 FTEs being used to produce it.

    These issues are not so troubling that I would refuse to use it if it were the only option. But HadCRUT is available. It is just as well accepted by the climate science community. And there is no obvious trend to their retroactive adjustments.

    So I don’t think it is unreasonable to use HadCRUT for analyzing global temperatures and not bother comparing the results to GISTEMP. If I convince anybody to take a bet, I would want to use HadCRUT to determine the results (even if this meant that I lost).

    [Response: I was not suggesting that GISTEMP was better (though that is arguable), but I was alluding to the fact that where there are structural uncertainties in an observational quantity (like GMST or MSU-LT), ignoring them in assessing significance is wrong. It is much better to use both UAH and RSS, or HadCRUT and GISTEMP (and NCDC), than it is to simply pick the one that gives you the trend or character you prefer. As for GISTEMP, it is what it is - an analysis of the raw data. As part of the calculation it needs to estimate the difference between rural and urban trends - it does this for the whole time-series and so will change as more data comes in. There is nothing 'suspicious' about this, and anyone who claims otherwise has no clue what they are talking about. Read the papers to see what is done and why. - gavin]

    Comment by Jason — 31 Mar 2009 @ 5:24 PM

  201. Jason:

    “Off the top of my head I can think of two different blogs that performed substantially similar analyses a year ago.”

    OK, and they prove what exactly?

    “There are two reasons (besides declining global temperatures) why this issue is receiving increased attention now:”

    So now global temperatures have not just plateaued, they’re declining? Surface temps? Upper parts of troposphere? Upper ocean? Lower ocean?

    “This means that for the first time a large number of models can be readily tested against temperature data recorded AFTER those models were finalized.”

    Wrong. In no particular order: (1) Models are never “finalized” and stating they ARE is pretty revealing, (2) No, there are climate models going back to at least the 1950s against which you can test subsequent T data, (3) You’re saying that the six years between the TAR and AR4 are capable of “testing” whether the IPCC TAR models are good or not? Then you absolutely do not understand the point of these two posts on the topic.

    “Each year that passes will give us another year of real data to compare to the models. More years mean more statistical certainty when performing these analyses.”

    Yes, and we now have about 120 years of pretty good data against which to evaluate the models, and they show unequivocally that GHGs are driving global temperature increases. So your additional years are going to change that?

    “I AM very fond of odds making. I would bet…”

    Who said anything about betting? Hansen’s not the only one with a ~3 degree sensitivity estimate, and by no means is it even the highest. Anyway, what are your odds for a “very cold 2009″, you didn’t say.

    “Is this something you are interested in?”

    A bet that can’t be decided for more than 20 years and proves what (other than grandstanding)? Not in the least.

    Comment by Jim Bouldin — 31 Mar 2009 @ 5:53 PM

  202. > betting on climate
    Google for it. Stoat keeps a list; so do other climate bloggers (not here at RC though).

    And relevant to betting and phenology (as Jim Bouldin’s guest topic is now closed), this:
    http://news.stanford.edu/news/2001/october31/alaskabet-1031.html
    Someone might want to look at what’s happened with that, seems suitable for statistical treatment.

    Comment by Hank Roberts — 31 Mar 2009 @ 6:05 PM

  203. David B. Benson wrote in 197:

    I’ll hazard a prediction for the global surface temperature for 2009 CE: one of the cooler years this century (2000 CE onwards) and in the top 12 overall. I base this solely on the prolonged solar minimum just now. If a goodly el Nino happens to come along, I probably lose.

    I’m certainly not an expert, but I would expect ENSO to remain fairly neutral for the rest of the year — possibly beginning to climb towards the end. A good El Nino? 2 or 3 years. Watch the North Pacific Gyre Oscillation. From what I understand it tends to lead ENSO by 10 to 12 months. Not sure how well the pattern is holding this time around, though.

    Comment by Timothy Chase — 31 Mar 2009 @ 6:06 PM

  204. Just a thought – I’ve just read Michaels’ article and your rebuttal, and I think your point about endpoints is well taken: what it means to me is that he’s a little early to press – if 2009 is warmer, then his premise is trashed. On the other hand, your suggestion that he is using data selectively seems more appropriately directed at your own rebuttal – there are four global temperature series that I know of – you mention only two. Of these, GISS seems to offer the data kindest to your hypothesis, but significantly diverges from the other three – perhaps, if averages are to be used, all of the congruent temperature records should be averaged for this analysis.

    [Response: Your claim about GISTEMP is unfounded. The outlier series is UAH, not GISTEMP. However, both UAH and RSS are measures of MSU-LT - a different quantity than the SAT - and need to be compared with the same diagnostic from the models (which I don't have handy). The variance of the MSU data is not the same as the SAT and so could make a significant difference. - gavin]

    Comment by Steve — 31 Mar 2009 @ 11:16 PM

  205. For chaps like Michaels, smug with your graph prowess, making fun of our good game of hockey (of hockey stick fame).

    Take a look:

    http://www.eh2r.com/index_pop_ups/warming.html

    of warming evidence not needing data, just pictures: the sun itself, used as a fixed sphere, gets mangled by the atmosphere, so it is a very good temperature evaluator, since the density of the atmosphere as a whole depends on its temperature… Since it is supposed to be cooling — since 1998, or better, since 2005, if you like — why are high Arctic sunsets trending since 2005 as if they were seen from further south?
    Why not look around for alternative ways of measuring/observing GW?
    It’s better than simple ad hoc, half-baked graph manipulations, hey?

    Comment by Wayne Davidson — 1 Apr 2009 @ 6:45 AM

  206. Jason wrote in 196:

    Hardly any acknowledgment is made of how very limited our understanding of the climate system is.

    “… how very limited our understanding of the climate system is”? I believe you are overstating your case.

    Climate models have done fairly well in terms of a variety of predictions. For example, they predicted the expansion of the Hadley cells, the poleward movement of storm tracks, the rising of the tropopause, the rising of the effective radiating altitude, the circulation of aerosols in the atmosphere, the transmission of radiation through the atmosphere, the clear sky super greenhouse effect that results from increased water vapor in the tropics, the near constancy of relative humidity, polar amplification, and the cooling of the stratosphere while the troposphere warmed.

    I understand they do rather well with the ocean. And they predicted the expansion of the range of hurricanes and cyclones — about a year before Catrina showed up off the coast of Brazil. Not the sort of thing that had ever happened before.

    Of course there are areas where the models seem to do less well. For example, while they have predicted the expansion of the Hadley cells, they appear to have underestimated the rate of that expansion — and thus how rapidly the subtropics would move north. They appear to have underestimated the rate at which sea ice would be lost in the Arctic. They did not take into account glacier slippage until fairly recently — and had in essence assumed that ice would simply melt in place. As such, they had underestimated the rate at which we would lose glaciers.

    These are areas where they have tended to fall down, but typically this has been due to their underestimating the effects of climate change by failing to take into account all of the positive feedbacks. Not the sort of thing I would be playing up if I wanted to say that global warming isn’t a serious issue.

    Then there are areas where the results of models have been more mixed. For example, they did not predict the trend in cloud loss in the tropics, which means that they underestimated the positive trend in outgoing infrared radiation. However, it also means that they underestimated the negative trend in outgoing visible light. And the net effect in terms of warming has been roughly that of one trend cancelling the other.

    But in any area where models are doing poorly, this suggests that there are certain phenomena which are not being taken into account in terms of the physical processes that are included in the models. And once these phenomena are included, it is my understanding that models perform better overall — not just in the areas which modelers sought to improve. Currently it is my understanding that models could be improved with respect to aerosols and clouds. But I understand we are making progress in both. Particularly clouds.

    Comment by Timothy Chase — 1 Apr 2009 @ 10:44 AM

  207. CORRECTION to the above post…

    With respect to the hurricane that showed up off the coast of Brazil, I had called it “Catrina,” but that should have been “Catarina.”

    Comment by Timothy Chase — 1 Apr 2009 @ 10:49 AM

  208. Here’s an amusing blast from the past regarding Michaels.

    http://www.cato.org/testimony/ct-pm072998-2.html

    Showing Scenario A and removing B and C amounts to fraud in my view. Also cherry-picking bad satellite data (granted it wasn’t known to be bad at the time) of the southern hemisphere seems almost comical. Does anyone here have experience with Congressional testimony? Are policymakers this gullible?

    Comment by MarkB — 1 Apr 2009 @ 6:50 PM

  209. David B. Benson (123) Thank you. I saw the PDS series on the first you mentioned. My problem with it was that it did not seem rigorous enough. I will check out the other one. I appreciate the time you took to respond to my inquiry.

    Comment by GBS - Aesthetic Engineer — 2 Apr 2009 @ 12:00 AM

  210. #204

    In my quest to show more trend info on one graph, I display GISTemp 5-year, 10-year and 20-year trends for all end points from 2008 back to 1990.

    http://deepclimate.files.wordpress.com/2009/04/gistemp-trends.gif

    The wild fluctuations of the 5- and 10-year trends can be clearly seen, as can the relative stability of the 20-year trend (and the simple decade-over-decade measure, dubbed “10-yr-diff”). The two trend measures based on 20 years of data dipped ever so slightly in 2008, but are still ahead of their 1998 end-point counterparts. So much for the “stopping” of global warming in 1998!

    IPCC AR4 WG1 Chapter 10 doesn’t seem to give projections that are directly comparable to the MSU-LT swathe, but they would appear to be close to the surface projections. As I recall, though, there’s a lot more variance in both the model projections and the observations.

    It’s also true that UAH is the outlier – and in more ways than one. The UAH annual cycle, as elaborated at Open Mind (Tamino) and my blog at Deep Climate, remains unexplained.

    http://tamino.wordpress.com/2008/10/30/annual-cycle-in-uah-tlt/

    http://deepclimate.org/2009/03/26/seasonal-divergence-in-tropospheric-temperature-trends-part-2/

    It’s safe to say that both Mears (of RSS) and Christy (UAH) are aware of the issue, but so far there’s no published work or commentary on this. Maybe soon …

    Comment by Deep Climate — 2 Apr 2009 @ 1:17 AM
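
    A minimal sketch of the calculation Deep Climate describes — least-squares trends of several window lengths for every end year — is given below. The series here is synthetic (an assumed 0.17 deg C/decade warming plus noise) purely to keep the example self-contained; real GISTemp annual anomalies would be substituted in practice.

        # Trends of several window lengths for every end year, as in the linked graph.
        import numpy as np

        rng = np.random.default_rng(1)
        years = np.arange(1960, 2009)
        anom = 0.017 * (years - years[0]) + rng.normal(0, 0.1, years.size)

        def trend_ending(end_year, window):
            """OLS trend (deg C/decade) for the `window` years ending in `end_year`."""
            sel = (years > end_year - window) & (years <= end_year)
            return 10 * np.polyfit(years[sel], anom[sel], 1)[0]

        print("end    5-yr   10-yr   20-yr   (deg C/decade)")
        for end in range(1990, 2009):
            print(f"{end}  {trend_ending(end, 5):+.2f}  {trend_ending(end, 10):+.2f}"
                  f"   {trend_ending(end, 20):+.2f}")

    Even with a constant underlying trend, the 5-year column swings between strongly negative and strongly positive values while the 20-year column barely moves — exactly the behaviour visible in the linked GISTemp plot.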

  211. ““Off the top of my head I can think of two different blogs that performed substantially similar analyses a year ago.”

    OK, and they prove what exactly?”

    They prove that your suggestion (That Michaels’ analysis was timed to take advantage of a cold 2008) is unwarranted.

    ““There are two reasons (besides declining global temperatures) why this issue is receiving increased attention now:”

    So now global temperatures have not just plateaued, they’re declining? Surface temps? Upper parts of troposphere? Upper ocean? Lower ocean?”

    I thought that YOUR claim is that 2008 surface temps were lower than 2007, and that is why this analysis has been published now. Have I misunderstood?

    ““This means that for the first time a large number of models can be readily tested against temperature data recorded AFTER those models were finalized.”

    Wrong. In no particular order: (1) Models are never “finalized” and stating they ARE is pretty revealing,”

    You are arguing semantics to avoid admitting the obvious point. Climate models are only valuable if they can tell us something about the FUTURE.

    If I produce a GCM that models perfectly the state of the world up until the day I publish it, but looks like pink noise afterwards, it is utterly useless.

    The archived model runs were each produced using a specific version of a specific model with specific inputs. While the models and scenarios may evolve, the specific version of the models and scenarios used to create these runs will remain fixed for all eternity. Call this fixed, finalized, formalized or a snapshot. The point remains the same.

    The experiment that Michaels and Lucia and many others are performing is designed to test whether or not the models are capable of providing us with information about periods of time AFTER they are published.

    “(2) No, there are climate models going back to at least the 1950s that you can test subsequent T data on,”

    First, those models were not held out as accurate forecasts of future temperature. It would have been helpful if, in 1975, the owners of these climate models had written to Newsweek informing them that: A) their story about global cooling was wrong because B) climate models have clearly demonstrated that temperatures are about to head up rapidly. Had they done so, maybe the New York Times wouldn’t have repeated the story one month later.

    Second, many previously published model runs lack sufficient specificity to be tested. Was Hansen’s 1988 congressional testimony an accurate forecast of future temperatures, or a gross exaggeration? It depends on how you interpret his words, and how you center the 1988 data point. You can find numerous analyses making contradictory arguments while using the same data thanks to this lack of specificity. The model runs which are available today are much less vulnerable to this sort of argument.

    “(3) You’re saying that the six years between the TAR and AR4 are capable of “testing” whether the IPCC TAR models are good or not?”

    I’m not sure how many years are sufficient. Obviously I am not yet convinced that the models are an abject failure, so I suppose that not enough time has passed to make such an assessment. If the next 8 years of global temperature look like the last 8 years, I’ll almost certainly conclude that the models greatly overstated climate sensitivity. (I still won’t call them an abject failure. By establishing a testable hypothesis, they will have benefited science, even if that hypothesis is ultimately rejected.)

    “Yes, and we now have about 120 years of pretty good data against which to evaluate the models, and they show unequivocally that GHGs are driving global temperature increases. So your additional years are going to change that?”

    It is VERY easy to model the past. Models should not be published until they have successfully modeled the past.

    It is VERY difficult to model the future of a system as complex as the earth.

    If GCMs prove unable to do the latter, there are very few people who will care about the former, and attempts to legislate greenhouse emissions will fail.

    “Who said anything about betting? Hansen’s not the only one with a ~3 degree sensitivity estimate, and by no means is it even the highest. Anyway, what are your odds for a “very cold 2009″, you didn’t say.”

    I’m not making odds for that. Neither a very cold 2009 nor a very warm 2009 would mean very much to me.

    “A bet that can’t be decided for more than 20 years and proves what (other than grandstanding)? Not in the least.”

    But that is precisely what this thread is about. A bet HAS been made by the climate community that the models will prove to be accurate. If the bet is lost, so are efforts to reduce emissions. Michaels is basically saying: “Hey guys; Remember that bet you made? Things aren’t going the way you expected.”

    I agree that it is too early to call the bet. But people are going to keep on performing this analysis until, one way or another, the bet is called.

    Comment by Jason — 2 Apr 2009 @ 2:22 PM

  212. Jason said:

    It is VERY easy to model the past. Models should not be published until they have successfully modeled the past.

    It is VERY difficult to model the future of a system as complex as the earth.

    Wow, your concept of a GCM is very different from mine. Could you please explain to me how you can easily model past climate behavior, but not be able to model future behavior? A GCM that is not built from first principles is not a model, it is a fraud. I hope you don’t think that climate modelers keep a table of past weather somewhere so that their GCMs can spit out the right numbers for historical runs.

    So why do you think that modeling the past is easier? Does the physics change? Does the chemistry? Is the fluid dynamics of the future more complex?

    Comment by Tim McDermott — 2 Apr 2009 @ 3:59 PM

  213. Re Jason’s 211

    Jason wrote in 211:

    It is VERY easy to model the past. Models should not be published until they have successfully modeled the past.

    It is VERY difficult to model the future of a system as complex as the earth.

    Tim McDermott responded in 212:

    Wow, your concept of a GCM is very different from mine…. A GCM that is not built from first principles is not a model, it is a fraud…

    So why do you think that modeling the past is easier? Does the physics change? Does the chemistry? Is the fluid dynamics of the future more complex?

    Jason, as I alluded to earlier, climate models aren’t based upon simple correlations and aren’t instances of line-fitting. They are built upon physics: radiative transfer theory, thermodynamics, fluid dynamics, etc. They don’t tinker with the model each time to make it fit the phenomena they are trying to model. With the sort of curve-fitting which your statement implies, tightening the fit in one area would loosen the fit in others. But with actual climate models, when they improve the physics — because it is actual physics — tightening the fit in one area almost inevitably means tightening the fit in numerous others.

    This is why our knowledge of climatology has grown and grown quite rapidly. And this is why, as I indicated above in 206, your stating in 196:

    Hardly any acknowledgment is made of how very limited our understanding of the climate system is.

    … betrays a profound ignorance of the state — or for that matter even the nature — of climatology as a science.
    *
    Jim Bouldin wrote in 201

    A bet that can’t be decided for more than 20 years and proves what (other than grandstanding)? Not in the least.

    Jason responds in 211:

    But that is precisely what this thread is about. A bet HAS been made by the climate community that the models will prove to be accurate. If the bet is lost, so are efforts to reduce emissions. Michaels is basically saying: “Hey guys; Remember that bet you made? Things aren’t going the way you expected.”

    Don’t flatter yourself — or Michaels for that matter.

    Hansen’s Scenario B from 1988 proved to be rather accurate over the past twenty years. Even though his climate sensitivity was a little off at the time. Even though he was specifically focusing on carbon dioxide. Climate models are approximations. Uncertainties exist, but — as in the case of the law of large numbers — they tend to cancel each other out. And getting things approximately right is often more than enough.

    When predicting which states will be suffering from extreme drought by 2100. When predicting when all of the glaciers in the Himalayas will be gone. Or how high the sea level will rise. If it is a meter and a half by 2110 instead of 2100, or two-thirds of Florida is under water when they were projecting half, I doubt people will say that listening to the climatologists was a complete waste.

    Yes, this is a bet of sorts. But you are adding up all the costs associated with doing something about climate change without taking into account all the costs associated with doing nothing, aren’t you? And instead of focusing on the fairly accurate projections that Hansen made in 1988, you have chosen in essence to focus on one year — 2008 as the endpoint of Michaels’ “test” — and say that the models aren’t performing that well for that year — given the internal variability of the climate system which plays such a big role in the short run, but such a minor role in the long.

    Now given the actual nature of the “bet,” perhaps you can be a little more clear about whose interests you and Michaels are actually looking out for.

    Comment by Timothy Chase — 2 Apr 2009 @ 5:43 PM

  214. Jason, you need to read this:
    http://www.aip.org/history/climate/index.html

    and this:
    http://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/

    and this:
    http://www.realclimate.org/index.php/archives/2008/01/uncertainty-noise-and-the-art-of-model-data-comparison/

    Comment by Jim Bouldin — 2 Apr 2009 @ 5:49 PM

  215. #211 Jason said:
    They prove that your suggestion (That Michaels’ analysis was timed to take advantage of a cold 2008) is unwarranted.

    Not really. The earlier analyses were timed to take advantage of the exceptionally cool La Nina winter. For example, WattsUpWithThat had a particularly cretinous post comparing January 2008 to January 2007 (the warmest January on record in most datasets).

    Comment by Deep Climate — 2 Apr 2009 @ 7:43 PM

  216. Re: Politicised science

    #159 JBL

    …the argument BFJ Cricklewood is proposing is conceivably somewhat persuasive to someone who knows nothing about the (for example) NSF funding process

    This misses the point, which is how the NSF (or other beneficiary) is selected in the first place. This issue is above rather than in the science community.

    #161 Timothy Chase

    …quantum mechanics, the absorption and emission of photons (Einstein was a patent clerk)..

    (The above was by way of a suggestion that political funding would then equally colour these issues.)
    No, since – unlike with AGW – there is no obvious political spin to be put on them.

    #167 Ray Ladbury

    If a man alleges that all the evidence is tainted by politics, that all the scientists are tainted by politics and refuses to even discuss the theory, the chances of having a discussion based on science are pretty slim.

    I do elsewhere attempt to discuss the science. But even if I didn’t, that still would not invalidate discussion of how science is structured.

    Mr. Cricklewood would reduce climate change to politics because he knows he cannot win on the basis of the evidence.

    FWIW, I am undecided on the evidence, but unequivocal that blinkering oneself to the relationship between funding and evidence submitted is no answer.

    #168 Dan

    ..Cricklewood) actually believes he knows something that all the major climate science professional societies across the world do not.

    And what is that?

    #171 Timothy Chase

    So basically one of the key problems with Cricklewood’s “theory” is that the decision-making that determines which studies get funded and which do not is itself highly decentralized.

    Within the narrow confines of politics though.

    #174 Mark

    And if Body Thetan Cricklewood wants only what’s spent on climatology for these people [coal industry et al], then it should be the same for government sponsored work. Exclude meteorology even…

    No, don’t. Overall government spending on climatology issues is thousands of times larger than all industry put together.

    Comment by BFJ Cricklewood — 3 Apr 2009 @ 8:44 AM

  217. Thetan level 10 says “No, don’t. Overall government spending on climatology issues is thousands of times larger than all industry put together.”

    Nope, lobbying alone is a million-dollar business. The US government even keeps a record of such expenditure.

    And when websites are supported by Exxon, that’s money to add. When David Evans is paid to make a speech about how AGW models are a farce, his expenses are paid by Exxon and pals.

    Millions more is spent on ensuring a business-friendly atmosphere for the Big Oil and Big Tobacco industries than is spent on climatology.

    Comment by Mark — 3 Apr 2009 @ 9:54 AM

  218. Yes, “blinkering oneself to the relationship between funding and evidence submitted is no answer.”

    You can look that up, you know. Did you bother?

    http://scholar.google.com/scholar?q=research+funding+affects+results%3F

    You’re right to suspect something might be happening.

    You’re wrong to assume you know the answer without investigating.

    As long as you keep going in the wrong direction, you’re most likely going to end up where you’re headed.

    Pray consider the possibility that you may be mistaken.

    http://scholar.google.com/scholar?q=research+funding+affects+results%3F

    http://www.springerlink.com/content/r654521305u8547k/
    Referenced by 34 newer articles

    Take your time. Facts are hard to choke down; they don’t completely support anyone’s closely held assumptions.

    Comment by Hank Roberts — 3 Apr 2009 @ 9:57 AM

  219. Cricklewood, Note the title of this blog. Now go to the “About” button at the top of the page and read. To wit:
    “The discussion here is restricted to scientific topics…”

    Much as we all might love to eviscerate your tired regurgitation of the arguments of Crichton on the right and Feyerabend et al. on the left, that’s off topic.
    The problem with your argument is that your very premise doesn’t even hold up. Climate change is not good for governments because it disrupts business as usual, and that disrupts tax revenues. It’s clear you are as ignorant of politics and government as you are of science.

    What is more, you need posit no dark conspiracies–political or otherwise–to understand why contrarian climate science doesn’t prosper. It doesn’t prosper because it isn’t productive. It doesn’t advance understanding of climate. In short, it’s a dry well.

    Here is my recommendation. Go out and figure out how science actually works. Talk to some actual scientists. Send off a letter to a grant-making organization or two. LISTEN. Read Spencer Weart’s excellent history of climate change. Maybe try to learn some of the science. Don’t keep going down your current path. It leads straight to crackpotville.

    There are plenty of scientists here. We do science every day. We compete for grants. We publish. Trust me. Your paranoid fantasies don’t ring a bell with us. That is not how science works.

    Comment by Ray Ladbury — 3 Apr 2009 @ 9:57 AM

  220. “FWIW, I am undecided on the evidence, but unequivocal that blinkering oneself to the relationship between funding and evidence submitted is no answer.”

    Then why do you continue to blinker yourself to funding of the denialists?

    Comment by Mark — 3 Apr 2009 @ 10:17 AM

  221. BJ, it’s ridiculous to ascribe all funding of climate research to a “pro-AGW machine.” The only way to get the huge dollar amounts people claim is to include all sorts of normal duties as part of the “machine.” For example, a University professor’s activities involve things like, say, actual teaching, advising, and committee work. But if his salary is counted in toto as part of said “machine,” you’ve effectively decided that most of his time is actually spent pro bono, not doing his “job” of proving that AGW is all that it is claimed to be.

    Also, the idea of the “pro-AGW machine” ignores the facts that:

    a) Most papers are relatively focussed and technical, hence neither support nor controvert the AGW thesis directly; why are the dollars that paid for them counted towards “machine funding?”

    b) The contrarian papers that do get published should, by this conspiratorial logic, also be counted as part of the “AGW machine,” since Richard Lindzen’s or Roy Spencer’s salary (for instance) is under the academic umbrella; clearly this is nonsensical, however. They shouldn’t be counted as “machine funding,” either.

    c) Since we see most of the academic research output as neutral, and a small amount actually antagonistic with respect to the AGW question, the money funding legitimate climate research does not “buy results,” as claimed. Maybe most scientists and most funders are actually concerned with what they claim: advancing our understanding of the universe?

    By contrast, I know of no reason to think that Big Oil is anything but satisfied with the money they put into the Heartland Institute, the Cato Institute, etc., etc.

    Comment by Kevin McKinney — 3 Apr 2009 @ 11:58 AM

  222. Responding to BFJ Cricklewood, Ray Ladbury wrote in 156:

    We have a hundred and fifty years of climate science all of which supports anthropogenic causation of the current warming, but you don’t trust all that work because some of the scientists worked for the government.

    I responded to Ray Ladbury in 161:

    Sounds about right — right down to your quantum mechanics, the absorption and emission of photons (Einstein was a patent clerk) by matter, the spectra, any inconvenient fact. All the studies, papers, scientists and sciences one vast conspiracy to hide the truth of his world view. Reminds me of some of the more extreme views I found among young earth creationists.

    BFJ Cricklewood responds to me in 215:

    (The above was by way of a suggestion that political funding would then equally colour these issues.)
    No, since – unlike with AGW – there is no obvious political spin to be put on them.

    The trouble is that what I am describing is the physical basis for the greenhouse effect.

    The greenhouse effect is the result of molecules being stimulated into vibrational, rotational and rovibrational states of excitation. This may be the result of the absorption of photons or of molecular collisions in which they gain energy. De-excitation occurs either through molecular collisions in which they lose energy or through the emission of photons.

    The absorption of photons results in the warming of the atmosphere and their emission results in the cooling of the atmosphere. Absorption of thermal radiation cools the thermal spectrum of the earth as seen from space; radiation emitted by de-excitation is what results in the further warming of the surface; and the surface continues to warm until the rate at which energy is radiated from the earth’s climate system (given the increased opacity of the atmosphere to longwave radiation) is equal to the rate at which energy enters it.

    The wavelengths at which they absorb and emit photons are described by their absorption/emission spectra — and give rise to images like this:

    Aqua/AIRS Global Carbon Dioxide
    http://svs.gsfc.nasa.gov/vis/a000000/a003400/a003440/index.html

    We had a couple of posts on it here:

    Part I: A Saturated Gassy Argument
    by Spencer Weart and Raymond T. Pierrehumbert
    26 June 2007
    http://www.realclimate.org/index.php?p=455

    … and here:

    Part II: What Ångström didn’t know
    by Raymond T. Pierrehumbert
    26 June 2007
    http://www.realclimate.org/index.php?p=456

    The physical basis for the greenhouse effect is principally that of Quantum Mechanics and more broadly that of Quantum Statistical Mechanics.

    *

    I wrote in 171:

    So basically one of the key problems with Cricklewood’s “theory” is that the decision-making that determines which studies get funded and which do not is itself highly decentralized.

    BFJ Cricklewood responds to me in 215:

    Within the narrow confines of politics though.

    But I wrote in 171:

    However, I would argue that another (albeit related) key problem is the decentralized nature of the process of scientific discovery. Those who determine what funding gets done won’t know beforehand what will be discovered — and they wouldn’t be able to keep track of all of the interconnections which will be discovered by the vast number of independent and highly intelligent minds that are involved in this process. To do so would greatly exceed the intelligence of any central authority, whether it be an individual or committee.

    You see, the trouble is you can’t separate science in that fashion.

    The basis in physics for explaining the greenhouse effect is essentially the same as that for describing photovoltaic devices, or that which Einstein used to suggest the possibility of lasers. The same principles form the basis for our ability to perform calculations in chemistry and biochemistry at the quantum level. It is how we are able to understand and predict the behavior of tunnel diodes.

    It is the same as what goes into infrared detection used in the military by fighter jets:

    AFRL-VS-HA-TR-2004-1145
    Environmental Research Papers, No. 1260
    Users’ Manual for SAMM2, SHARC-4 and MODTRAN4 Merged
    H. Dothe, et al.
    http://www.dtic.mil/…GetTRDoc.pdf

    Now since you have helped me illustrate this principle, clearly you are deeply involved in the conspiracy, and as such only two questions still remain to be answered.

    First, as one of the conspirators, are you using your real name or a pseudonym?

    Second, if it is the latter, what name should I write the check out to?

    Comment by Timothy Chase — 3 Apr 2009 @ 12:18 PM
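
    The verbal account above — the surface warming until outgoing radiation again balances incoming, with greater longwave opacity forcing a warmer surface — can be made concrete with the standard textbook single-layer “gray atmosphere” toy model. This is an idealization for illustration only, not a GCM, and the emissivity values below are just round numbers.

        # Single-layer gray atmosphere: raising longwave opacity raises surface T.
        SIGMA = 5.670e-8            # Stefan-Boltzmann constant, W/m^2/K^4
        S0, albedo = 1361.0, 0.30   # solar constant (W/m^2) and planetary albedo

        def surface_temperature(epsilon):
            """Equilibrium surface temperature (K) for longwave emissivity epsilon."""
            absorbed = S0 * (1 - albedo) / 4.0              # absorbed solar flux
            return (absorbed / (SIGMA * (1 - epsilon / 2.0))) ** 0.25

        for eps in (0.0, 0.75, 0.80):
            print(f"epsilon = {eps:.2f}: Ts = {surface_temperature(eps):.1f} K")

    With no longwave absorption the surface sits near 255 K; an emissivity around 0.75-0.8 brings it up to the observed ~288 K, and any further increase in opacity pushes the equilibrium higher still — the qualitative content of the greenhouse effect, without any of the spectral detail the linked posts discuss.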

  223. BJFC continues to use ad hominem argument. Attention BJ: IT DOESN’T MATTER what the sources of funding are. It only matters WHETHER THE ARGUMENTS PRESENTED ARE CORRECT OR NOT. Why don’t you get this?

    Google “ad hominem wikipedia”

    CAPTCHA: “QUESTION stayed”

    Comment by Barton Paul Levenson — 4 Apr 2009 @ 5:00 AM

  224. Barton: Rather than look at BJFC as an ad hominem wielding troll, I think it is useful to consider him as a prime example of how we all see the world through our own filters. I think if you asked him what he thought of the Millikan oil drop experiment, his response would be “huh?” But I suspect the initial response of the vast majority of the folks who went on to become working scientists would include “cool” and “elegant.”

    It may be impossible for the BJFCs of the world to understand why someone with the intelligence and accomplishments needed to become a working scientist would settle for scientist pay when a fresh MBA starts in six figures (the median package for Wharton grads is $145K). I suspect that they assume ulterior motives because they have no other explanation.

    oracle says: term beautify

    Comment by Tim McDermott — 4 Apr 2009 @ 11:34 AM

  225. “I think it is useful to consider him as a prime example of how we all see the world through our own filters.”

    That’s not a filter. That’s “AGW sensitive glasses” like Zaphod has…

    We *don’t* think like him (in the main).

    Comment by Mark — 4 Apr 2009 @ 12:30 PM

Sorry, the comment form is closed at this time.
