RealClimate

Comments


  1. I’ve been perusing some of the skeptic blogs where there is much air-punching and back-slapping. The predominant theme, particularly in the reader comments, is that this new adjustment means global warming hasn’t actually been happening these last 30 years, it was all a NASA glitch. Or to loosely paraphrase Bob Carter, “global warming stopped in 1934”.

    They don’t seem to realise the global trend over the last 30 years still shows dramatic warming, both in the US and especially globally. In fact, the trend seems to me a more important statistic than the “top ten warmest years”. What I would be interested in seeing is the change in the global warming trend since 1975. It was around .17C per decade – does anyone have exact figures from before and after?

    Comment by John Cook — 10 Aug 2007 @ 6:03 PM
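A rough way to answer that question is to fit an ordinary least-squares line to the annual anomalies from 1975 onward. The sketch below is a minimal illustration, not the GISS method; it assumes a plain two-column (year, anomaly in deg C) text file such as the Fig.A2.txt data linked in a later comment, and "global_annual.txt" is only a placeholder filename.

    # Minimal sketch: least-squares warming trend (deg C per decade) since 1975.
    # Assumes a two-column text file of year and annual anomaly in deg C;
    # "global_annual.txt" is a placeholder name, not an official GISS file.
    import numpy as np

    def trend_per_decade(years, anomalies, start_year=1975):
        mask = years >= start_year
        slope, _intercept = np.polyfit(years[mask], anomalies[mask], 1)  # deg C per year
        return 10.0 * slope

    data = np.loadtxt("global_annual.txt")
    years, anoms = data[:, 0], data[:, 1]
    print(f"Trend since 1975: {trend_per_decade(years, anoms):+.3f} deg C per decade")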

  2. It isn’t “much ado about nothing”, it’s an embarrassment. I don’t usually agree with you, but I have always considered you to be a credible and honestly contrary opinion until you made that statement.

    Do you think we can now be “99% certain” that 1938 was the warmest year in the last 1000 years, or are we still 99% certain that it was 1998? How certain are we of ANYTHING that Hansen says, now?

    The problem with most skeptics isn’t that we don’t have open minds… we do. I know I do at least. The problem is the grotesque hubris we associate with scientists who are championing scientific conclusions that we intuitively know to be MUCH shakier than they are letting on. This only reinforces that.

    [Response: Sure it's embarrassing, but only the end result determines whether it matters or not. This doesn't matter for anything important. Would you characterise the blogosphere reaction as proportionate to the 0.03 deg C shift between 1934 and 1998 in the US temps? Perhaps you'd like to point me to any statements that said anything about being 99% certain that 1998 was the warmest year in the US? Read the Hansen quote above - written in 2001! -gavin]

    Comment by DaveS — 10 Aug 2007 @ 6:07 PM

  3. They don’t seem to realise the global trend over the last 30 years still shows dramatic warming, both in the US and especially globally. In fact, the trend seems to me a more important statistic than the “top ten warmest years”. What I would be interested in seeing is the change in the global warming trend since 1975. It was around .17C per decade – does anyone have exact figures from before and after?

    Yes. 1940-1975 the temperature was falling whilst CO2 was increasing.

    Apparently it’s due to pollution; however, now the evidence is that pollution causes the temperature to rise.

    It’s been a bad week for AGW.

    Comment by Nick — 10 Aug 2007 @ 6:07 PM

  4. Actually to be fair, your statement that “the people who work on the temperature analysis (not me), looked into it and found that this coincided with the switch between two sources of US temperature data.” is incorrect. Steve M pointed out where the error came from in his blog posts and his email notifying GISS of the problem. The GISS people simply confirmed that he was correct.

    [Response: Not so. He saw the jump but did not speculate as to the cause. - gavin]

    Comment by Tex — 10 Aug 2007 @ 6:18 PM

  5. John:
    I have been monitoring the same blogs. Clearly the magnitude of the correction is of no great significance to the overall trend. Some on those sites have become inappropriately elated. But I seriously dispute your assertion that this means the majority of regular contributors believe that this proves global warming isn’t happening. Certainly that was not Steve McIntyre’s position. Where it is dramatically important is in the arguments over access to data and the ability to replicate methods. In this instance, GISS’s error provided Steve McIntyre with a proverbial “smoking gun”.

    Frankly, my hope is that this “ado” does in fact lead to a dramatic increase in openness and access to data and statistical techniques and code used to analyze climate data.

    Comment by bjc — 10 Aug 2007 @ 6:22 PM

  6. Here is a back-of-envelope calculation of the effect of this on world temp analysis. I’ve posted this a few places now.

    It is a 0.3% change to world temp anomaly results after 2000.

    0.003C.

    The error was only in data for the lower 48 states, and was 0.15C for that data. The lower 48 is about 2% of the earth’s surface. 0.15 x .02 = 0.003C

    Global temp change over the last century is 0.8 – 1.1C depending on method. 0.003C out of 1C (in that range, and easy to calculate) is 0.3%.

    Comment by Lee — 10 Aug 2007 @ 6:29 PM
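The arithmetic in the comment above is easy to check mechanically. A minimal sketch; the 2% area share, the 0.15 deg C post-2000 correction, and the 1 deg C century warming are the rough figures quoted in the comment, not official values.

    # Back-of-envelope check of the global impact of the US-48 correction,
    # using the rough figures quoted in the comment above (not official values).
    us48_correction_c = 0.15    # approximate post-2000 correction to the US-48 anomaly (deg C)
    us48_area_fraction = 0.02   # lower 48 states as a rough fraction of Earth's surface
    century_warming_c = 1.0     # round figure within the quoted 0.8-1.1 deg C range

    global_effect_c = us48_correction_c * us48_area_fraction
    print(f"Effect on the global anomaly: ~{global_effect_c:.3f} deg C")
    print(f"Share of century warming: ~{100 * global_effect_c / century_warming_c:.1f}%")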

  7. Gavin: It would be helpful if both graphs used the same Y-axis scale.

    Comment by bjc — 10 Aug 2007 @ 6:30 PM

  8. Perhaps you’d like to point me to any statements that said anything about being 99% that 1998 was the warmest year in the US?

    I apologize. I appear to have mixed up my climatologists. I think it was one of your co-bloggers who made that statement.

    It should also be noted that most people fully understand that this is US-only data. The manner in which that is pointed out, however, is characteristic of the same sort of “hubris” I mentioned earlier… it’s somewhat dismissive and something along the lines of “This is US only, though. The rest of the world’s data is still almost as good as we thought the US data was a couple of days ago.”

    A more healthily skeptical reply would note that it is, indeed, US only, and only affects global temperatures slightly, but, given the fact that the US has the most reliable and well-maintained network, it raises concerns about the quality of data we have been using across the board.

    On a related note… is it true that the “margin of error” for the “global surface temperature” is actually larger than the net warming in the 20th century? I’ve seen that claim thrown around and figured it wouldn’t hurt to ask.

    Comment by DaveS — 10 Aug 2007 @ 6:50 PM

  9. Re: #3 (Nick)

    When you say

    Yes. 1940-1975 the temperature was falling whilst CO2 was increasing.

    you are mistaken. As I have pointed out before on this blog, there is no justification for such a claim.

    The most revealing aspect of the AGW debate is this: when the “warmers” make a claim, it’s based on serious research and considerable effort, and if an error is found, it’s admitted and corrected ASAP. When “deniers” make a claim (like yours), it’s based on the lack of serious research or considerable effort, and if an error is pointed out, it’s excused away or triggers a scrambling attempt to change the subject.

    Comment by tamino — 10 Aug 2007 @ 6:53 PM

  10. As I have pointed out before on this blog, there is no justification for such a claim.

    Perhaps he would justify it with the 2 charts in this post.

    Comment by DaveS — 10 Aug 2007 @ 7:00 PM

  11. Gavin, a query re your link to the global mean (http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.txt). I downloaded that data a few months ago and comparing it to now, the values after 2000 are exactly the same. Is this because the change to global T is smaller than two decimal places? Or has that data not been updated yet?

    Bjc, you’re right that most of the blogs themselves aren’t saying global warming isn’t happening, it’s more happening in the readers’ comments. Overall, I found the emphasis is on comparing 1934 to other top tens – very little to no mention of the change to the global trend.

    [Response: Changes are too small to show. - gavin]

    Comment by John Cook — 10 Aug 2007 @ 7:07 PM

  12. I think it’s also worth considering that many people have taken this to indicate that any future corrections will, like this one, indicate a lesser trend in 20th century warming (at any locale or globally). But this is a clear gambler’s fallacy: you can’t point to the roller and extrapolate what they’ll roll next based on what they have rolled so far.

    Worth considering also the seeming desperation of some to associate “audits” with showing that warming is less than we thought. “Audits” can easily show that warming has been more than we thought.

    Comment by Justin — 10 Aug 2007 @ 7:28 PM

  13. Run for the ICE. I find this a common meme on the AGW side. When the land record gets a little quake, the instinct is to run for the ICE. Actually, it’s pictures of ice, and as we have been told, pictures have no value. Unless, that is, they are pictures of “nightlights” used by Hansen or pictures of ice, preferably with polar bears, used by Gore.

    Consider this. If one makes an error about data input files (the first thing you check in IV&V), it is not advisable to run to a slippery surface. The Arctic ice record probably needs a good audit as well.

    Comment by steven mosher — 10 Aug 2007 @ 7:34 PM

  14. The positive result of this brouhaha: an army of people who up until this week dismissed the temperature graphs now not only embrace them, but embrace them to a point-to-point accuracy on the scale of 1/100th of a degree.

    Comment by spilgard — 10 Aug 2007 @ 7:34 PM

  15. Just to expand on my last point at 12: (somehow) many see this as indicating that we can’t trust anyone (especially Hansen) to handle the data properly. In fact, if a slight error – the possibility of which was already indicated by Hansen 6 years ago – causes us to become Cartesian skeptics about him, then why haven’t the small detections and corrections in the computer code used in models, which NASA indicates publicly, done the same?

    My mother slightly overcooked the garlic bread in the oven tonight; I guess I can never be sure whether she will make the best prepared food for my friends when they come over. How can I know what will happen????!!!

    J

    Comment by Justin — 10 Aug 2007 @ 7:44 PM

  16. spilgard,

    I concur. This whole debacle reminds me of a chess game I had last night with my significant other (gotta love him). We’re both pretty good players, and at first he was winning (took my knight AND bishop; I had no hits), but eventually the tables turned. Near the end he took one of my pieces and declared: “Aha! I’ve got your piece!” to which I replied, “Honey, you used one of your last pieces to knock one of my many; I’m winning” ;)

    S. Mosher (comment 13) says AGW’s (I hate polarization) run to the ICE, but no – we simply remind everyone, with a little alarm, that the trend is clear.

    Comment by Justin — 10 Aug 2007 @ 7:58 PM

  17. The argument from “denialists” has been: We’re not sure about the data because there may be abnormal heating due to the artificial environments close to the temperature stations.

    Here we have had literally billions of dollars put into research of climate data, and the only person who catches a glaring error is an outsider.

    What if the change due to asphalt or AC units had been more gradual? Should we have any confidence that anyone will be able to, much less want to, find the error?

    If we can only detect the anomalies when there is an obvious spike in the data, and the only people that make corrections are “denialists”, doesn’t that cast ANY doubt in your mind on the integrity of the process?

    Any at all?

    even like . . . one Iota?

    [Response: First lesson: don't believe your own propaganda. - gavin]

    Comment by zac — 10 Aug 2007 @ 8:03 PM

  18. RE 14. We never dismissed the temperature graphs.

    1. Believers: accepted them without question.
    2. Deniers: ignored them without cause.
    3. Doubters: put questions to them.

    The doubters were right.
    The deniers, like a stopped clock, got lucky.
    The believers ran for the ice.

    It strikes me that the belief that ice melts only because it gets warm is somewhat simplistic. So Gavin,
    perhaps we could benefit from a lesson in Arctic ice formation and decay and the various causes – wind stress, temperature, albedo, soot, salinity, wave roughness, leads, pressure ridges, etc etc etc.

    Now I know that shrinking ice has the same propaganda impact as air conditioners by weather stations, but a little more depth on the ice issue might be a good idea,
    thin-ice metaphors and all that.

    Oh, which stations do you use to measure Arctic temps?

    [Response: Leave the 'believers' and 'doubters' rhetoric at home. No scientist believes anything in the sense you claim. Nor, as I made abundantly plain two weeks ago, is anyone who works with real data blind to the problems there are. The ice isn't just good propaganda, it's a hard-nosed fact that makes fooling around with AC theories of climate change moot. Arctic temperatures are directly measured by the Arctic Buoy program, but in the GISS analysis, the temperatures are extrapolated from nearby land stations. That works quite well, but it would be better to assimilate the buoy data directly - there's been some discussion I think, but I don't know what the status is. - gavin]

    Comment by steven mosher — 10 Aug 2007 @ 8:09 PM

  19. steven mosher (#13) wrote:

    Run for the ICE. I find this a common meme on the AGW side. When the land record gets a little quake, the instinct is to run for the ICE. Actually, it’s pictures of ice, and as we have been told, pictures have no value. Unless, that is, they are pictures of “nightlights” used by Hansen or pictures of ice, preferably with polar bears, used by Gore.

    Well, I think it makes a little more sense for supporters of science to point at the arctic ice as this is something people are likely to understand than it does for climate “skeptics” to point at the Dust Bowl and Great Depression of the 1930s. Do skeptics really want the American public thinking in those terms….?

    Consider this. If one makes an error about data input files (the first thing you check in IV&V), it is not advisable to run to a slippery surface. The Arctic ice record probably needs a good audit as well.

    Don’t worry – I suspect you will be getting a fair amount of data in the next month and a half. Pictures, charts, commentary… the whole nine yards.

    As for the global climate, expect a little breather until 2009….

    Earth will feel the heat from 2009: climate boffins
    By Lucy Sherriff
    10th August 2007 15:31 GMT
    http://www.theregister.co.uk/2007/08/10/climate_model/

    It appears that climatologists are in the process of improving their short-range forecasting.

    Comment by Timothy Chase — 10 Aug 2007 @ 8:20 PM

  20. There was a bunch of chatter on Slashdot regarding this. I felt the need to point out that there were so many other disparate sources of evidence for global warming that there was really no question, even if said glitch actually was significant:

    1) Scientists are witnessing ice shelves in Antarctica falling into the sea.

    2) The North Pole is melting so that there will soon be a North-West Passage to which Canada is laying claims.

    3) Much of the global warming data does not come from NASA.

    4) Ski areas in the Alps are soon to be going out of business.

    5) There is glacial melting everywhere.


    6) Indonesia’s islands are being submerged by rising sea level.

    I’m sure this group can come up with many more proofs that are independent of the problematic data.

    Comment by niiler — 10 Aug 2007 @ 8:27 PM

  21. RE 16. Remind me the trend is clear? Clear?
    Clear ± a little fuzzy? How clear? Crystal clear?
    Foggy-windshield clear? Tinted-glass clear? Leaded-glass clear? Clear? Trends are positive or negative.
    Trends have slopes. Those slopes have errors.

    Now, I never denied the trend. I question the trend. As I have pointed out on CA, I am a confirmational holist.

    Simply, all theory is underdetermined by data. All observation is theory laden. Falsification can always be avoided by appealing to other data ( sea ice, SST, species migration, etc etc etc).

    Acceptance of theory has an epistemic component (“fits” the “data”), a pragmatic component (makes useful predictions), and a social component (the dreaded consensus).

    What I noted is a meme. When the land record is attacked, believers tend to run for the ice. Primarily because it serves as a common-sense reference point.
    Folk wisdom. Ice melts when it gets warm. It’s rooted in common experience, but I suspect it’s more complicated than the cubes in my G&T.

    Comment by steven mosher — 10 Aug 2007 @ 8:29 PM

  22. RE 18, Gavin.

    I will gladly leave the believer/denier rhetoric at home. Funny, I have never seen you inline a similar chastisement to people who throw the denier label around.
    I think it is an instructive distinction relative to the issue of confirmation bias. Be that as it may, I count you therefore in the doubter crowd: neither a believer in AGW nor a disbeliever. A true skeptic.

    Shrugs. ( that is the skeptic handshake)

    Which Arctic land stations?

    Comment by steven mosher — 10 Aug 2007 @ 8:40 PM

  23. 12: Worth considering also the seeming desperation of some to associate “audits” with showing that warming is less than we thought. “Audits” can easily show that warming has been more than we thought.

    So let’s do them and find out. Is it too hot, too cold, or is my porridge just right? This has passed the point of being scientific data. It has become financial data. If someone wants to start taxing me and interfering with my economy based on these data and models, it’s time for full disclosure and accurate accounting.

    http://www.giss.nasa.gov/research/briefs/hansen_07/
    “…Figures 1 and 2, makes clear that climate trends have been fundamentally different in the U.S. than in the world as a whole…”

    Maybe. Or maybe they make it clear that there’s even more of a problem with the data in the rest of the world. Perhaps if we had quality data for the rest of the world, the trends wouldn’t be so fundamentally different?

    If you want me to believe in catastrophic global warming, you’ve got to convince me it’s not just a case of Garbage In Garbage Out. So far, I’m not convinced. I’m less convinced today than I was last week.

    We need to be confident in the data to believe the trends you show us in the nice pretty pictures. Between this error and photos at surfacestations.org, I’m not confident in the US data, let alone those from the rest of the world. Peer review is for science. The IPCC can do all the peer review they want. That doesn’t cut it for economies. Economies use accounting and audits. Failure to do so leads to situations like Enron and WorldCom. If we just want to do science, stick with peer review. If someone wants to influence economies, it’s long past time for auditors to start poking around.

    If NASA doesn’t want to publish source code and all data, at a minimum maybe they should hire an accounting firm to audit climate related data, methods and processes and to issue a public report on the quality they find (and that firm should hire Steve McIntyre ;-). I’m not saying I’d trust that report as much as full and open disclosure, but it would be a start. Both a formal audit and full disclosure would be fantastic!

    Oh. And changes to published data should be version controlled, with something akin to release notes.

    Opaque or unaudited data should not find its way into the policy debates. Ever.

    Comment by SCP — 10 Aug 2007 @ 8:42 PM

  24. Just to expand on my last point at 12, (somehow) many see this as indicating that we can’t trust anyone (especially Hansen) to handle the data properly.
    I think the point is that we shouldn’t have to trust someone – as in a single person or entity such as NASA GISS – to develop what is in effect policy for our country.
    It’s undemocratic and unscientific.
    A person could make a mistake. Ahem.

    Imagine if we were to use only one thermometer to measure the global temp, say Houston.

    [Response: Presumably in that case you're happy that other people are producing global temperature records, for example CRU - William]

    Comment by papertiger — 10 Aug 2007 @ 8:46 PM

  25. It’s interesting, that bit about the Arctic ice retreating to its furthest extent already.
    We better get some ice cores from that unmelted portion before that record is lost forever. Who knows what revelations it will add to the Antarctic, Greenland, and assorted glacial ice series – oh wait, we are talking about Arctic ice. Sneaky bit of misdirection by you guys.

    Comment by papertiger — 10 Aug 2007 @ 8:52 PM

  26. I am wondering, if the ocean is rising and flooding out Indonesia, why isn’t it flooding Cape Cod? Are they not all at sea level?

    Comment by papertiger — 10 Aug 2007 @ 8:57 PM

  27. RE 18, one last thing. Gavin, the “Ice” is not a hard-nosed fact. The “ice” is observed with an instrument.
    Records are collected and processed by code written by humans, subject to error and audit. Not facts. Not raw sensory input. Not facts, but interpretations of data collected by instruments, processed by code. This is different from the stuff in my Gin & Tonic. I have not looked at the chain of custody for that “data”. I reserve judgement. The present record of managing data and the lack of outside audit does not inspire confidence. It deepens the need for systematic doubt.

    There are issues with the land instrument record, even though the summer heat is a hard-nosed fact. And the SST record has instrument issues (buckets, buoys – didn’t you write up something on a boo-boo made in a recent paper that thought the sea was cooling?).

    Anyway, as you know from spending a considerable time futzing with data and anomalies and adjustments and renormalizations…

    All observations are theory laden.

    [Response: Of course. But just like (I think) Tuzo Wilson said "Nothing in geology makes sense except in terms of plate tectonics", nothing makes sense in current climate change without anthropogenic factors. That's not one dataset here or one anecdote there. The body of work that demonstrates consistency with the mainstream understanding is huge. And these little hiccups don't come anywhere close to affecting it. To deny that.... well.... - gavin]

    Comment by steven mosher — 10 Aug 2007 @ 8:59 PM

  28. There is a term in accounting called “immaterial”.

    I think we could learn to use it again.

    J

    Comment by Justin — 10 Aug 2007 @ 9:03 PM

  29. Earth will feel the heat from 2009: climate boffins
    By Lucy Sherriff
    10th August 2007 15:31 GMT
    http://www.theregister.co.uk/2007/08/10/climate_model/

    It appears that climatologists are in the process of improving their short-range forecasting.

    Interesting. It seems to be improved just enough to cover the time period right after the next election.
    You sure this is a non-political website?

    Comment by papertiger — 10 Aug 2007 @ 9:05 PM

  30. Re 25. Here, tiger, from wiki:

    There are two aspects of confirmation holism. The first is that observations are dependent on theory (sometimes called theory-laden). Before accepting the telescopic observations one must look into the optics of the telescope, the way the mount is constructed in order to ensure that the telescope is pointing in the right direction, and that light travels through space in a straight line (which itself is sometimes not so, as Einstein demonstrated). The second is that evidence alone is insufficient to determine which theory is correct. Each of the alternatives above might have been correct, but only one was in the end accepted.

    That theories can only be tested as they relate to other theories implies that one can always claim that test results that seem to refute a favoured scientific theory have not refuted that theory at all. Rather, one can claim that the test results conflict with predictions because some other theory is false or unrecognised. Maybe the test equipment was out of alignment because the cleaning lady bumped into it the previous night. Or, maybe, there is dark matter in the universe that accounts for the strange motions of some galaxies.

    That one cannot unambiguously determine which theory is refuted by unexpected data means that scientists must use judgments about which theories to accept and which to reject. Logic alone does not guide such decisions.

    Comment by steven mosher — 10 Aug 2007 @ 9:07 PM

  31. 29.

    It would probably be best to stay out of that discussion. The same thing could easily be said about many other interlocutors in this discussion, even the “auditors” themselves.

    If you do nothing else, interpret the arguments in their best light.

    J

    Comment by Justin — 10 Aug 2007 @ 9:08 PM

  32. 24.

    I agree that we shouldn’t uncritically agree with just one source of scientific analysis. Yet even if we relied solely on NASA, NASA GISS already agreed with McIntyre even before McIntyre decided to audit climate (i.e. before his blog) – in 2001.

    Of course, there are other issues where they obviously disagree, but Gavin was quite clear and open about this – certainly not the tactics of a dogmatist that invites you to rely on NASA’s word alone. I think we ought to appreciate that.

    J

    Comment by Justin — 10 Aug 2007 @ 9:15 PM

  33. Re 27.

    Nothing comes close, YET. The point is that science is always contingent. Wilson is merely stating this. If we gave up plate tectonics, we would have to redo a bunch of work. It is pragmatically valorized, not epistemically more secure. As Quine noted, no theory faces facts in isolation. This is why no single fact overturns accepted theory. There is always a balancing act. Facts can be ignored, epicycles created, results questioned. It is a huge undertaking to overturn a theory, and with no viable replacement, highly unlikely.

    All the more reason to free the code.

    No one denies the consistency of the body of evidence. That is the whole point of theory-laden observation and confirmation bias. Surely you have encountered some folks who never saw a negative feedback they liked? Shrugs.

    Comment by steven mosher — 10 Aug 2007 @ 9:35 PM

  34. #20, etc on Indonesia. Please don’t just quote lead sentences — too often, as in this case, they are attention getters with zero substance.

    The article quoted in no way claims that Indonesia has lost ANY islands due to sea rise.

    At least the substance of the article explains what has actually been happening: “The issue has become a hot topic after Indonesia upset neighbouring Singapore recently by banning sand exports to the city state, blaming sand mining for literally wiping some of its islands off the map.”

    Why wasn’t the lead about sand exports? Because that’s not an exciting topic like fear of being inundated by ever-rising sea levels.

    Comment by MrPete — 10 Aug 2007 @ 9:41 PM

  35. Did not Gore say that 9 of the warmest years were in the last decade? The adjustment now indicates that 4 of the warmest years were in the 1930s and 3 of the warmest were in the last decade or so (for the USA). And you are saying much ado about nothing.

    [Response: Global means! And there, nothing has changed. - gavin]

    Comment by Gerald Machnee — 10 Aug 2007 @ 9:50 PM

  36. Re: 33,

    “It is pragmatically valorized not epistemically more secure.”

    No, Wilson said that it doesn’t MAKE SENSE without plate tectonics, not that it would be inconvenient without it. Those are two different claims. Likewise, evolution by natural selection of genes is the only thing that thus far can explain biological change in the distant past, even if we don’t quite understand the gory details of that past.

    J

    Comment by Justin — 10 Aug 2007 @ 10:06 PM

  37. I agree with Gavin that calling people “warmers” or “deniers” is not productive. What we now know is that there was an error in the GISS record and the error was discovered by Steve McIntyre. GISS was notified and, to their credit, they investigated, found the error and corrected the record. The real question is whether or not there are more errors in the record. My instinct tells me that GISS is looking at the data to ensure that there are no more errors. IMHO, it would be better if GISS released all their raw data, the rationale and code for any and all adjustments, and any other data that is relevant to the temperature record. This would allow everyone to inspect the data. If there are additional errors, they could be found and corrected. By releasing this data, the confidence in the record could be improved.

    Comment by Robert Burns — 10 Aug 2007 @ 10:21 PM

  38. Re #29: The Met Office news release says nothing about the model covering only the time after the next general election (likely to be in 2010), but perhaps there is something in the text of the article (which I haven’t read) linking the model to the British election cycle. Gavin knows more about both of these subjects than I, so perhaps he can comment.

    Comment by S. Molnar — 10 Aug 2007 @ 10:22 PM

    Interesting. It seems to be improved just enough to cover the time period right after the next election.
    You sure this is a non-political website?

    The last election was in 2005, so the next might be in 2009, but it could be called anywhere from 2007 to 2010, whenever Gordon Brown or his government decide suits them.

    I assume you are talking about UK politics after all, as it is a UK site being linked, and the research quoted was done by the Met Office in the UK.

    Comment by stuart — 10 Aug 2007 @ 10:24 PM

  40. Does the CRU correct for the urban heat island effect as the GISS does?

    Congrats to the GISS in correcting discovered errors.

    Comment by VirgilM — 10 Aug 2007 @ 10:36 PM

  41. I have a related question. On the web page it now states:
    “Our analysis, summarized in Figure 1 above, uses documented procedures for data over land (1), satellite measurements of sea surface temperature since 1982 (2), and a ship-based analysis for earlier years (3). Our estimated error (2σ, 95% confidence) in comparing nearby years, such as 1998 and 2005, increases from 0.05°C in recent years to 0.1°C at the beginning of the 20th century. Error sources include incomplete station coverage, quantified by sampling a model-generated data set with realistic variability at actual station locations, and partly subjective estimates of data quality problems (4).”

    I have never seen a “partly” subjective estimate of data quality issues. How does one integrate this with
    objective methods and report a std. dev.?
    How much of the 2-sigma band is objective, how much is partly subjective, how was this confidence interval constructed, and was this partly objective, partly subjective method tested?

    Which individual’s subjective judgement was used to assess the “data quality” issues, and what repeatable methodology did they use? Did they employ a rating system?
    Is it documented? Were the individuals who did the subjective assessment instructed in the tendency of subjective assessments to regress to the mean? If more than one individual was used to subjectively assess data quality, did those individuals go through a normalization process?

    Does partly subjective mean guess?

    That page is a treasure trove. Keep digging, then we get to the china station issues.

    Comment by steven mosher — 10 Aug 2007 @ 10:44 PM

  42. Sigh.

    If the aforementioned skeptics had taken the time (2 minutes?) to plot both datasets in Excel, they would have found that there is little to celebrate. I guess that this speaks to the immensity of their intellectual dishonesty.

    Old CONUS Data:
    http://uploader.ws/upload/200708/OLD_CONUS.gif

    New CONUS Data:
    http://uploader.ws/upload/200708/NEW_CONUS.gif

    Comment by Bruno — 10 Aug 2007 @ 11:01 PM
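For anyone without Excel, the same comparison takes only a few lines of Python. A minimal sketch, assuming the pre- and post-correction annual US-48 anomalies have been saved locally as two-column (year, anomaly) text files; "old_conus.txt" and "new_conus.txt" are placeholder names, not the GIF links above.

    # Minimal sketch: overlay the pre- and post-correction US-48 annual anomalies
    # and report the largest single-year change. Placeholder filenames; assumes
    # both files cover the same years in the same order.
    import numpy as np
    import matplotlib.pyplot as plt

    old = np.loadtxt("old_conus.txt")
    new = np.loadtxt("new_conus.txt")

    plt.plot(old[:, 0], old[:, 1], label="Before correction")
    plt.plot(new[:, 0], new[:, 1], label="After correction")
    plt.xlabel("Year")
    plt.ylabel("US-48 anomaly (deg C)")
    plt.legend()
    plt.savefig("conus_comparison.png")

    print("Largest single-year change:", np.max(np.abs(new[:, 1] - old[:, 1])), "deg C")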

  43. The finding seems similar to “one small step for a man, one giant leap for mankind” – but instead, it’s “one small temperature change but one giant argument for openness and transparency” in data collection and selection methods, sites selected, algorithms and source code availability and so on. Where the goal from all parties is seeking the truth, an open source approach will be helpful in negating either pro- or con- arguments on AGW. Unfortunately, as ClimateAudit has documented, a number of climate researchers have maintained that their data, methodologies, algorithms and source code are private. And hence, not tested, vetted nor reproducible by others. That NASA’s GISS had a Y2K error in its data analysis is a powerful argument that secret science is bad regardless of the field or the source.

    [Response: What is secret? The algorithms, issues and choices are outlined in excruciating detail in the relevant papers: http://data.giss.nasa.gov/gistemp/references.html - gavin]

    Comment by Eric — 10 Aug 2007 @ 11:02 PM

  44. My mother slightly overcooked the garlic bread in the oven tonight; I guess I can never be sure whether she will make the best prepared food for my friends when they come over.

    Some people might argue that cooking garlic bread is more of an art than a science.

    Some people might argue that adjusting the surface record to account for UHI, etc., is more of an art than a science.

    Etc.

    Comment by DaveS — 10 Aug 2007 @ 11:11 PM

  45. Gavin, would you mind clearing up my question from earlier? Is it true that the margin of error in the calculation of “global surface temperature” is greater than the observed warming in the last 100 years?

    That is a point that I keep seeing. Is it true?

    [Response: No. The error on the global mean anomaly is estimated to be around 0.1 deg C now, increasing slightly before 1950 or so and a little larger in the 1880s. The global mean change is around 0.7 deg C, i.e. significantly larger. - gavin]

    Comment by DaveS — 10 Aug 2007 @ 11:16 PM

  46. A minor correction to my previous post (#34): I was unaware of the blog that gavin linked to. I had in mind several blogs listed on digg.com that have been receiving the bulk of the attention today. None of those blogs had graphs of the actual CONUS data.

    But now that I know that (at least) one blog went through the trouble of plotting the data in Excel… I’m even more stunned. How desperate does one have to be in order to cling on to such trivial differences?

    Comment by Bruno — 10 Aug 2007 @ 11:21 PM

  47. Perhaps this would be a good time to remind some people (and I’m sorry if I bore the rest of you by harping on this) that AGW is a prediction, not an explanation. It wasn’t derived by examining past temperature records, noting a rise, and picking on CO2 as the cause. It’s based on the measured increase in atmospheric CO2, combined with knowledge of its properties WRT infrared radiation. (Plus a lot of other factors, of course.) That some pre-industrial year, or series of years, might have been warmer than today is relevant only so far as it reminds us that CO2 is not the only possible driver for temperature.

    Comment by James — 10 Aug 2007 @ 11:32 PM

  48. Five minutes with this new data set – applying smoothing functions (at varying spans, non-causal filters), then some sinusoidal fits and analysis of residuals – brings doubt to the presence of an underlying US warming trend, let alone any room for anthropogenic effects!
    No warming trend in the US – let’s hope the global data sets are better verified and validated. This is entirely embarrassing for climate science.

    We will need complete transparency in the global temperature records and any ‘adjustments’ made (techniques, methodologies), as clearly climate scientists are not up to the task. Work will need to be verified by the broader scientific community.

    regards, Steve

    Comment by Steven — 10 Aug 2007 @ 11:36 PM

  49. re 30. Steve Mosher — “observations are dependent on theory”

    In the words of Albert Einstein:


    “… On principle it is quite wrong to try founding a theory on observable magnitudes alone. In reality the very opposite happens. It is theory which decides what we can observe.”

    In post #44 at http://www.kendallharmon.net/t19/index.php/t19/article/4811/
    I give a concrete example of this principle insofar as it pertains to temperature measurements by making an analogy with special relativity.

    Comment by Philip Pennance — 10 Aug 2007 @ 11:48 PM

  50. One has to wonder how many other “anomalies” Mr McIntyre might have come across during his audit, but declined to inform anyone, as they happened to indicate that temperature data had been incorrectly revised too far downward.

    Comment by Dylan — 10 Aug 2007 @ 11:57 PM

  51. I want to concur with:
    35.”We will need complete transparency in the global temperature records and any ‘adjustments’ made (techniques, methodologies), as clearly climate scientists are not up to the task. Work will need to be verified by the broader scientific community.”

    Absolutely true. Anything less is poor science.

    And in response to the question about ‘trusting Hansen’. :
    24.”I think the point is that we shouldn’t have to trust someone as in a single person or entity such as Nasa GISS to develope what is in effect policy for our country.”

    Yes. Isn’t the phrase ‘trust but verify’? Look, we didn’t trust Einstein, we ran independent experiments to confirm his theories. That’s science. The data and the internal adjustment algorithms need to be made completely public and transparent so they all can be verified. This is true regardless of the public policy implications that make this more consequential than many other scientific issues.

    One of the responses to a comment that used the word ‘hubris’ said: “Sure it’s embarrassing, but only the end result determines whether it matters or not. …”

    Wrong attitude! A more appropriate response is to consider this an issue of data collection/analysis integrity and quality, and thus, even if the change is small, to ensure that *processes* are put in place to make sure there are not other errors lurking in the adjustment algorithms. So not just this but other datasets should be opened up for independent review.

    Whether you are a ‘believer’ or a ‘skeptic’ in AGW, this should be a point of common agreement: Let us at least get the facts right. Let us build confidence in what the facts are through transparency of data and independently reviewable internals of adjustment algorithms.

    “More importantly for climate purposes, the longer term US averages have not changed rank. 2002-2006 (at 0.66 ºC) is still warmer than 1930-1934 (0.63 ºC – the largest value in the early part of the century)”
    So recent 5 year temps in the US have been a mere 0.03C greater than in the 1930s? Fascinating.

    Comment by Patrick — 11 Aug 2007 @ 12:12 AM

  52. Aha! I was writing up a blog entry on this very topic, and then came here and was happy to see what you wrote; it agrees pretty much with what I said. I also looked at the data a bit, too, to show that despite the claims, this doesn’t change a whole lot.

    http://www.badastronomy.com/bablog/2007/08/10/is-it-hot-in-here-or-is-it-just-me/

    I’m glad I stopped by before posting!

    Comment by Phil Plait, aka The Bad Astronomer — 11 Aug 2007 @ 12:32 AM

  53. OK, there is an easy solution to all these complaints.

    How about if we double or triple NASA GISS’s budget? Americans complaining about all the stuff that should be done should be in favor of that, and should start writing their Senators and Representatives tomorrow for special earmarks, because they think it’s important to get things right.
    Also, express a wish to pay higher taxes for this [hence, non-taxpayers need not apply]. For the surface stations, demand a lot of extra money as well.

    I do assume that people complaining about lack of access to everything have, for instance, explored data.giss.nasa.gov in its entirety, and downloaded the extensive code from http://www.giss.nasa.gov/tools (everyone reads FORTRAN 90, right?), and the code is fairly short, even if there’s 191MB of data to check over as well. [I’ll admit, I only downloaded the code to look at, I didn’t pore over the data sets.] I assume everyone complaining has the expertise to actually help?

    Patrick: can you explain to us your expertise in physics, algorithms, statistics, analysis of imperfect data, software engineering, simulations? You have done most of this stuff professionally, right? [Of course, as an anonymous poster, it will be hard for us to check.]

    I don’t want to pick on anyone in particular, but a lot of the complaints in this thread have not seemed very well-informed about realistic methods of doing real science… but seem more likely to impede real work. Sigh, science is hard enough to get right as it is.

    Comment by John Mashey — 11 Aug 2007 @ 1:08 AM

  54. #52
    “If there were no warming trend at all, you’d expect the hottest 10 years to be randomly distributed…”

    This is an incorrect assumption. You could see a recent warming without it being part of an ever-increasing function – for example, on the rising phase of a sinusoid.

    If you bother to fit some single- or multiple-frequency sinusoidal models to the data and then examine the residual components, I believe you will quickly negate a requirement for an increasing, sustained signal component, i.e. a US warming trend.

    regards, Steve.

    Comment by Steven — 11 Aug 2007 @ 2:08 AM

  55. Steve (#48)

    “We will need complete transparency in the global temperature records and any ‘adjustments’ made (techniques, methodologies), as clearly climate scientists are not up to the task. Work will need to be verified by the broader scientific community.”

    Are you aware of this remarkable device used in the scientific community known as “publishing”? It seems that many of the techniques, methodologies and “adjustments” are found right there, clearly explained in the scientific literature.

    As for the global temperature records, did you know that you can get the data for any station in the GISS network here: http://data.giss.nasa.gov/gistemp/station_data/

    before or after “adjustments” documented in the “scientific literature” are applied?

    I normally advocate complete transparency in science, including open source code and experimental results. But your points in this regard are so silly that they are barely worth replying to.

    Comment by ChrisC — 11 Aug 2007 @ 2:32 AM

  56. Steven Mosher (#21) wrote:

    Now, I never denied the trend. I question the trend. As I have pointed out on CA I am a confirmational Holist.

    Sounds like Quine to me.

    Actually he’s not all bad – at least the bits he gets from Pierre Duhem.

    Simply, all theory is underdetermined by data.

    No theory follows with Cartesian certainty from any given set of observations. Not bad.

    So for example, I may have a theory that the world is older than 10,000 years. I might cite geologic formations and the rate at which they form, perhaps fossils, radiocarbon dating, a series of fossilized trees and their rings, the distribution of species, etc. But interpreting much of this evidence as evidence for the age of the world would involve other theories. Ok.

    Then what about the theory that the world is older than five decades? Well, you may not have been here five decades, so I suppose you could hold the theory that any evidence that the world is older than five decades was simply placed there, perhaps by space aliens. Then if someone claims that they are sixty years old and they know that the world has been here at least that long, well, perhaps they only think they are sixty years old, or perhaps they are lying.

    Come to think of it, this is where your “social component” might come in handy – but perhaps we will get to that a little later.

    Now what about the claim that the world is at least five seconds old. I certainly think it is, but you may regard this as nothing more than a “theory” which is “underdetermined” by the “data,” where the data consists of the experience of “memories” and anything else which I might claim as evidence for an older world. But at this point we aren’t even talking about observation per se – we are talking about memory.

    *

    So lets move on to observation.

    You write:

    All observation is theory laden.

    So does this mean that when I look at an iceberg floating off, it might actually not be floating away? Does this mean that if the temperature displayed by a thermometer I am holding is rising, the temperature might not actually be rising? Or does this mean that I might only think that the displayed temperature of the thermometer is rising, or that the thermometer is in my hand – or that I have a hand?

    If so, I would begin to wonder whether you are engaging in the philosophy of science or in some freshman philosophy bull session. I would also have to wonder just how desperate the “global warming skeptics” have gotten that they find it necessary to appeal to this kind of reasoning.

    *

    But lets continue.

    You write:

    Falsification can always be avoided by appealing to other data ( sea ice, SST, species migration, etc etc etc).

    This isn’t the way that I normally hear it.

    From what I understand, falsification can always be avoided by appealing to another hypothesis. For example, Newton’s gravitational theory predicted that Mercury would have a particular elliptical orbit, but even in Newton’s day they could see that its orbit tended to shift relative to what Newton’s theory predicted. An auxiliary hypothesis was introduced, namely that there was another planet by the name of Vulcan which was closer to the sun and exerted a gravitational pull on Mercury, causing its orbit to shift.

    Now if one claimed that Vulcan was of such a special nature that there would never be any evidence for it other than the observed precession of Mercury’s orbit, then it would be an ad hoc hypothesis. But supporters of this theory made predictions which could be tested independently of that which their hypothesis was introduced to explain. As such, it was an auxiliary hypothesis, not an ad hoc one. As a consequence, it was a scientific theory at the time.

    *

    But lets focus on the phrase “appealing to other data.” A little later, you speak of a theory as “fitting the data” and “making useful predictions.” Somehow I think these are all interconnected issues. A theory is justified by the evidence, or as you prefer to call it, “the data.” But a theory is also used to explain the evidence, or alternatively, “the phenomena.” And some theories explain more evidence than others. This makes them more useful. Likewise, some theories make more specific predictions than others and this makes them more useful.

    But if someone offers a theory involving Descartes’ demon which they then invoke to explain any inconvenient fact, they might find this “useful,” but it wouldn’t be useful in quite the same way. Why? Because it would be consistent with anything. As such, it wouldn’t make any specific predictions. Or to put it another way, it wouldn’t be testable.

    However, if someone offers a theory which explains the orbit of the planets and the paths of objects which fall on the earth, one which can be used to make specific calculations of where things will be at any specific moment, this is useful. And if it does so and works under a great many circumstances, explaining a wide variety of phenomena, then it is even more useful. At this point we have good reason for not abandoning it the first time we encounter evidence which tends to disconfirm it. It is reasonable to look for an auxiliary hypothesis.

    Why?

    Because there exists a large body of evidence which supports that theory. Or alternatively, because that theory explains a great deal of evidence. Much of this evidence is independent of other evidence. Someone investigates the paths of planets in the outer solar system. Someone else investigates the paths of the moons around Jupiter. Someone else studies the inner planets or the orbit of the moon.

    Then there are those who drop cannon balls or apples. All of these largely independent lines of investigation examine phenomena which are explained by Newton’s gravitational theory, but they also provide evidence for the theory. And even when some later theory comes along and supersedes an earlier theory, it must be consistent with and account for the evidence which the earlier theory accounted for.

    This is in fact the basis for what is known as the correspondence principle between classical mechanics and quantum mechanics, between classical mechanics and special relativity, and between Newton’s gravitational theory and Einstein’s General Theory of Relativity. In a certain sense, while much of the form of an earlier theory is discarded, the material of that theory – consisting of the evidence which justified it and which it accounted for – is preserved in the theory which supersedes it.

    *

    A hypothesis or theory which is justified by multiple lines of investigation is generally justified to a far greater degree than it would be if it were simply justified by any one line of investigation considered in isolation.

    Now the vast majority of the scientific community has accepted the view that:
    1. The earth is getting warmer;
    2. greenhouse gases are largely responsible for this; and,
    3. That what has been raising the level of greenhouse gases are human activities.

    You on the other hand are still stuck on (1). Not dogmatically denying it, I understand, but simply doubting it with your healthy, “scientific” skepticism.

    So in the interest of science, lets look at the evidence:

    1. We have surface measurements in the United States which show an accelerating trend towards higher temperatures.
    2. There are temperature measurements being taken by planes and satellites, and they show that the troposphere is warming – just as we would expect.
    3. The stratosphere is cooling – just as is predicted by the anthropogenic global warming theory. (Incidentally, the latter of these is something which cannot be explained by any theory based upon solar variability.)
    4. Measurements of temperatures at the surface of the ocean show that these temperatures are increasing.
    5. Measurements of temperatures at various depths show warming as far down as 1500 meters.
    6. Measurements of sea level show that it has been rising just as we would expect from thermal expansion.
    7. Gravitometric measurements of Greenland and Antarctica which are showing net ice loss in both cases.
    8. We can witness sea-ice loss in the Arctic which is dramatically accelerating.
    9. We are seeing the acceleration of glaciers in both Greenland and Antarctica, particularly within the last few years. Greenland is no doubt affected by black carbon, but Antarctica is much more isolated.
    10. We are witnessing the rise of the tropopause.
    11. There is the poleward migration of species – just as one would expect with rising temperatures.
    12. There is the increased intensity of hurricanes just as we would expect from rising sea surface temperatures.
    13. There is the accelerating decline of glaciers throughout the world with few rare exceptions.
    14. There is the rise in temperatures at greater depths in the permafrost.
    15. There is the rapid expansion in the last few years of thermokarst lakes throughout parts of Siberia, Canada and Alaska.
    16. There are changes in ocean circulation – just as has been predicted by climate models, for example, with temperatures rising more quickly over land.
    17. We are seeing the disintegration of permafrost coastlines in the arctic.
    18. We are witnessing changes in the altitude of the stratosphere.
    19. We are getting temperature measurements from countries throughout the world which show the same trends.
    20. When we perform measurements using only rural stations, we see almost identical trends compared to those which we get when we perform measurements with all surface stations.

    All of this constitutes evidence for global warming. Some of it constitutes strong evidence for a particular theory of the mechanism by which this warming is taking place. But you would have us discard a conclusion which is based upon such a large body of evidence based upon a fraction of a degree for a particular year for a relatively small region of the globe.

    Somehow I doubt a fidelity to science is what motivates you.

    Comment by Timothy Chase — 11 Aug 2007 @ 3:02 AM

  57. I had written (#19):

    Earth will feel the heat from 2009: climate boffins
    By Lucy Sherriff
    10th August 2007 15:31 GMT
    http://www.theregister.co.uk/2007/08/10/climate_model/

    It appears that climatologists are in the process of improving their short-range forecasting.

    papertiger (#29) responded:

    Interesting. It seems to be improved just enough to cover the time period right after the next election.

    You sure this is a non political website?

    Different country – Hadley out of England – notice the UK in the website address. But I suppose they could be part of a global conspiracy. You have to watch those conspiratorial types pretty darn closely – particularly the scientists when they start getting uppity….

    As for the Arctic sea, I don’t know its political affiliation for certain, but the blue would seem to be a dead giveaway.

    Comment by Timothy Chase — 11 Aug 2007 @ 4:03 AM

  58. Gavin, if the US48 corrections indeed were imperceptible in the global record, I would expect the GISS global record to either not change at all or to roughly correspond to the US48 corrections (mainly due to rounding errors since the effect is small).

    However, diff-ing the GISS global record versions, I find none of the above. E.g. the 2005 US48 correction of -0.3°C is reflected as a +0.01°C change in the new global dataset. How come?

    [Response: What's your source for that? The wayback machine version from May 17 2007 is 0.01 degree greater than the current version, i.e. it went down with the correction as you would expect. The data for Fig.A2 doesn't appear to have been updated yet (and the update stamp confirms that). I'll ask that they do. - gavin]

    Comment by Wolfgang Flamme — 11 Aug 2007 @ 4:04 AM
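Checking what actually changed between two saved versions of the global file can be automated. A minimal sketch, assuming both versions are available locally as plain two-column (year, anomaly) text files; the filenames are placeholders, not official GISS URLs.

    # Minimal sketch: report the years whose values differ between two saved
    # versions of a (year, anomaly) file. Filenames are placeholders.
    import numpy as np

    def load_as_dict(path):
        # Each row of the file is (year, anomaly); build a year -> anomaly map.
        return {int(year): anom for year, anom in np.loadtxt(path)}

    old = load_as_dict("global_old_version.txt")
    new = load_as_dict("global_new_version.txt")

    for year in sorted(set(old) & set(new)):
        change = new[year] - old[year]
        if abs(change) > 1e-6:
            print(f"{year}: {old[year]:+.2f} -> {new[year]:+.2f}  (change {change:+.2f})")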

  59. Steve…

    “If you bother to fit some single / multiple frequency sinusoidal models…”

    Some basics…

    Fitting any old set of functions to a given data set is often not very useful. Usually one has some justification for the basis functions one is using – what’s yours? You do realize that given enough terms it is possible to fit any function (data set) exactly… of course, such fits are not useful in the context of this issue.

    Have you tried using only, say, 75% of the data set, doing the same fits, and then using the other 25% to see if your fits have any predictive power at all?

    Comment by D. Donovan — 11 Aug 2007 @ 4:39 AM
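The holdout test suggested above is straightforward to run. A minimal sketch (not anyone's published method): fit a candidate model to the first 75% of an annual series and score it on the remaining 25%; the year/anomaly arrays are assumed to have been loaded separately, and the 60-year period is an arbitrary illustrative choice.

    # Minimal sketch of the suggested holdout test: fit on the first 75% of an
    # annual series, evaluate the root-mean-square error on the last 25%.
    import numpy as np

    def holdout_rmse(years, anoms, fit_func, train_frac=0.75):
        n_train = int(len(years) * train_frac)
        predict = fit_func(years[:n_train], anoms[:n_train])
        resid = anoms[n_train:] - predict(years[n_train:])
        return np.sqrt(np.mean(resid ** 2))

    def linear_fit(x, y):
        coeffs = np.polyfit(x, y, 1)
        return lambda xs: np.polyval(coeffs, xs)

    def sinusoid_fit(x, y, period=60.0):
        # Fixed-period sinusoid plus offset, fit by linear least squares.
        design = np.column_stack([np.sin(2 * np.pi * x / period),
                                  np.cos(2 * np.pi * x / period),
                                  np.ones_like(x)])
        c, *_ = np.linalg.lstsq(design, y, rcond=None)
        return lambda xs: (c[0] * np.sin(2 * np.pi * xs / period) +
                           c[1] * np.cos(2 * np.pi * xs / period) + c[2])

    # years, anoms = ...  # e.g. loaded from an annual anomaly file
    # print("linear   holdout RMSE:", holdout_rmse(years, anoms, linear_fit))
    # print("sinusoid holdout RMSE:", holdout_rmse(years, anoms, sinusoid_fit))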

  60. BTW, how does this affect the accuracy of Hansen’s models? The ones that were portrayed as so stunningly accurate?

    [Response: Not at all. If you recall, the evaluation of those 1988 projections was against the global data, which hasn't changed. - gavin]

    Comment by tom — 11 Aug 2007 @ 4:41 AM

  61. So recent 5 year temps in the US have been a mere 0.03C greater than in the 1930s? Fascinating.

    And what did the 1930s bring the United States? Drought and the Dust Bowl throughout the Midwest. Perhaps Steinbeck’s greatest novel, “The Grapes of Wrath”. Images by folks like Dorothea Lange and Walker Evans that burned into the soul of the country.

    “only” 0.03 °C warmer than a time that brought significant hardship to many, many farmers.

    And, hey, keep in mind that we’re really concerned about GLOBAL warming. The current temps are part of a global phenomenon; the high US temps in the 1930s weren’t.

    Comment by dhogaza — 11 Aug 2007 @ 5:27 AM

  62. [[If you want me to believe in catastrophic global warming, you’ve got to convince me it’s not just a case of Garbage In Garbage Out. So far, I’m not convinced. I’m less convinced today than I was last week.]]

    Then you’re not paying attention.

    Global warming doesn’t rely only on the surface temperature station record, though that record is reliable enough. It has been detected in sea surface temperatures (are there urban heat islands and poorly sited temperature stations on the ocean?). It has been detected in boreholes. It has been detected in the record from balloon radiosondes. It has been detected in the record from satellites. It has been detected in melting glaciers, tree lines moving toward the poles, animals migrating toward the poles. It has been confirmed again and again and again and those who doubt simply don’t understand the situation.

    Comment by Barton Paul Levenson — 11 Aug 2007 @ 6:05 AM

  63. True enough, any curve shape — even a step or a spike — can be reproduced with sinusoids. But what physical process does each sinusoid represent?

    Comment by spilgard — 11 Aug 2007 @ 6:05 AM

  64. [[I am wondering if the ocean is rising and flooding out Indonesia, why isn’t it flooding Cape Cod? are they not all at sea level?]]

    Sea level is not the same everywhere in the world. It is affected by the Earth’s rotation, gravitational anomalies, currents, differences in local salinity, and differences in local temperatures. Sea level on average is rising. That doesn’t mean it’s rising at exactly the same rate everywhere in the world.

    Comment by Barton Paul Levenson — 11 Aug 2007 @ 6:07 AM

  65. “but, given the fact that the US has the most reliable and well-maintained network, it raises concerns about the quality of data we have been using across the board.”

    On what do you base the idea that our network is best? Why is it better than for example that of Sweden?

    Comment by DavidU — 11 Aug 2007 @ 6:38 AM

  66. Gavin:
    The two charts you present are grossly misleading; is there no way to amend them to use the same y-axis?

    Comment by bjc — 11 Aug 2007 @ 6:53 AM

  67. 43: “Unfortunately, as ClimateAudit has documented, a number of climate researchers have maintained their data, methodologies, algorithms and source code are private.”
    “[Response: What is secret? The algorithms, issues and choices are outlined in excruciating detail in the relevant papers: http://data.giss.nasa.gov/gistemp/references.html - gavin]”

    How is “outlining” anything equivalent to actually posting the data and code?

    [Response: Because a) the raw data are publicly available and b) papers are supposed to contain enough detail to allow others to repeat the analysis. If a paper says 'we then add A and B', you don't need code that has "C=A+B". - gavin]

    Comment by Frank Ch. Eigler — 11 Aug 2007 @ 7:07 AM

    I’m a physician and not a climate scientist, and am a little confused about the points being made here. Are people arguing that the GISS correction suggests that N America is not warming, that the hemispheric or global means are not increasing, or what? How do the respondents tie this in with sea temperatures and satellite data? What do they think of the recent correction of the satellite data anomaly? And what then of the measured sea level rises? Sorry, it’s just that I’m having trouble following the logical thread of this argument and some of the conclusions being bandied around.

    Comment by jodyaberdein — 11 Aug 2007 @ 7:42 AM

  69. An important statement relevant to this issue was made as part of the recommendations from the CCSP Synthesis and Assessment Report on Tropospheric temperature trends:
    http://www.climatescience.gov/Library/sap/sap1-1/finalreport/default.htm
    see particularly ch 6 (recommendations)
    http://www.climatescience.gov/Library/sap/sap1-1/finalreport/sap1-1-final-chap6.pdf

    “To ascertain unambiguously the causes of differences in data sets generally requires extensive metadata for each data set. Appropriate metadata, whether obtained from the peer-reviewed literature or from data made available on-line, should include, for data on all relevant spatial and temporal scales:
    • Documentation of the raw data and the data sources used in the data set construction to enable quantification of the extent to which the raw data overlap with other similar data sets;
    • Details of instrumentation used, the observing practices and environments and their changes over time to help assessments of, or adjustments for, the changing accuracy of the data;
    • Supporting information such as any adjustments made to the data and the numbers and locations of the data through time;
    • An audit trail of decisions about the adjustments made, including supporting evidence that identifies non-climatic influences on the data and justifies any consequent adjustments to the data that have been made; and
    • Uncertainty estimates and their derivation.
    This information should be made openly available to the research community.”

    “The independent development of data sets and analyses by several independent scientists or teams will serve to quantify structural uncertainty and to provide objective corroboration of the results. In order to encourage further independent scrutiny, data sets and their full metadata should be made openly available. Comprehensive analyses should be carried out to ascertain the causes of remaining differences between data sets and to refine uncertainty estimates.”

    Last week, Bush signed a bill on “America COMPETES”. There is a relevant part on the open exchange of data and metadata:

    “SEC. 1009. RELEASE OF SCIENTIFIC RESEARCH RESULTS.

    (a) Principles- Not later than 90 days after the date of the enactment of this Act, the Director of the Office of Science and Technology Policy, in consultation with the Director of the Office of Management and Budget and the heads of all Federal civilian agencies that conduct scientific research, shall develop and issue an overarching set of principles to ensure the communication and open exchange of data and results to other agencies, policymakers, and the public of research conducted by a scientist employed by a Federal civilian agency and to prevent the intentional or unintentional suppression or distortion of such research findings. The principles shall encourage the open exchange of data and results of research undertaken by a scientist employed by such an agency and shall be consistent with existing Federal laws, including chapter 18 of title 35, United States Code (commonly known as the `Bayh-Dole Act’). The principles shall also take into consideration the policies of peer-reviewed scientific journals in which Federal scientists may currently publish results.”

    As a climate researcher, I wholeheartedly support the above principles. In my opinion research scientists (and particularly government research scientists) should not be given any “choice” in this matter if they wish to receive government research funding, publish their research in the peer reviewed journals of the major professional societies, and have their data used in assessment reports.

    Yes, all this adds to the cost of doing research, and even the COMPETES bill is apparently an unfunded mandate. But it’s a cost we need to accommodate in some way. I have seen too many examples in the climate field where scientists do not want to make their data and metadata available to skeptics such as Steve McIntyre since they don’t want to see their research attacked (and this has even been condoned by a funding agency). Well, in the world of science, if you want your hypotheses and theories to be accepted, they must be able to survive attacks by skeptics. Because of its policy importance, climate research at times seems like a “blood sport.” But in the long run, the credibility of climate research will suffer if climate researchers don’t “take the high ground” in engaging skeptics.

    With regard to Steve McIntyre and climateaudit. In the early days of McIntyre’s attacks on the “hockey stick”, it was relatively easy to dismiss him as an industry “stooge.” Well, given his lengthy track record in actually doing work to audit climate data, it is absolutely inappropriate in my opinion to dismiss him. Climateaudit has attracted a dedicated community of climateauditors, a few of whom are knowledgeable about statistics and are interesting thinkers (the site also attracts “denialists”). For all the auditing activity at climateaudit, they have found relatively little in the way of bona fide issues that actually change something, but this is not to say that they have found nothing. So, taking the high ground, let’s thank Steve and the climateauditors if they actually find something useful, assess it and assimilate it, and move on. Such actions by climate researchers would provide less fodder for the denialists, in my opinion.

    Comment by Judith Curry — 11 Aug 2007 @ 8:44 AM

    The difference between the US and world readings is still curious. In Britain the 1930s temperatures were only matched very recently. Indeed, when I was a boy in the 1960s, older people were always saying how much warmer it was then. It turns out they were right. Why is this, and which parts of the world have warmed to give the sharp upward increase in global temperatures?

    [Response: The GISTEMP website allows you to compare any two time slices, so have a look there, comparing say 1930-1935 to 2000-2005. You'll find that almost everywhere is warmer than then (including Europe), but not the SE US, nor much of the Eastern Pacific. - gavin]

    Comment by David Price — 11 Aug 2007 @ 8:55 AM

  71. I may have missed it, but I don’t think anyone has linked James Hansen’s response to this issue.

    http://www.columbia.edu/~jeh1/distro_LightUpstairs_70810.pdf

    By the way, I tried to contact him to be added to his email list and found that his email has been disabled. Given the vicious attacks on him by right-wing blogs and talk radio, I suppose his inbox has been overwhelmed by messages calling for his head. It is sad. He is an outstanding scientist of the highest integrity and one of my heroes – a man who will speak the truth to power at whatever cost.

    Comment by Ron Taylor — 11 Aug 2007 @ 9:25 AM

  72. [ Barton: Sea level is not the same everywhere in the world. It is affected by the Earth’s rotation, gravitational anomalies, currents, differences in local salinity, and differences in local temperatures. Sea level on average is rising. That doesn’t mean it’s rising at exactly the same rate everywhere in the world. ]

    Odd you didn’t mention that maybe some plates might be sliding lower into the ocean. Exactly how much of a rise is there? And how does it vary across the globe?

    Comment by BlogReader — 11 Aug 2007 @ 9:30 AM

  73. Gavin said:

    If a paper says ‘we then add A and B’, you don’t need code that has “C=A+B”

    You might want to rethink that, considering the denial folks seem to need to be led around by the nose.

    Comment by wildlifer — 11 Aug 2007 @ 9:52 AM

    Theory-laden observation: one of my favorite examples.

    from Hansen 2001:

    “The strong cooling that exists in the unlit station data in the northern California region is not found in either the periurban or urban stations either with or without any of the adjustments. ”

    Rural can’t be cooler. And it can’t be cooler after our perfect adjustment scheme.
    Go figure. We are talking about the early 1900s here, and defining urban/small town/rural using 1980 population data and a photo from a satellite.

    They continue:

    “Ocean temperature data for the same period, illustrated below, has strong warming along the entire West Coast of the United States. This suggests the possibility of a flaw in the unlit station data for that small region. After examination of all of the stations in this region, five of the USHCN station records were altered in the GISS analysis because of inhomogeneities with neighboring stations (data prior to 1927 for Lake Spaulding, data prior to 1929 for Orleans, data prior to 1911 for Electra Ph, data prior to 1906 for Willows 6W, and all data for Crater Lake NPS HQ were omitted) so these apparent data flaws would not be transmitted to adjusted periurban and urban stations.”

    The ocean is warm, so the cool station record must be wrong. The 5 stations that don’t agree must be wrong. And we know they must be wrong because we have records of the temperature of water, taken by guys hauling in buckets of water somewhere in the Pacific.

    Now, very often on the coast the land tracks the ocean temp. Let’s see where these stations are.

    In this region (they don’t specify the exact extent of the region) they mention:

    1. Lake Spaulding. Go google-map that. It’s about 200 miles from the warm ocean, on the other side of the mountains.

    2. Willows, a bit closer to the ocean. It’s in the Sacramento Valley.

    3. Electra. About 200 miles from the ocean, on the far side of a mountain.

    4. Crater Lake: closer to the ocean, but in a crater, separated from the ocean by a ridge line.

    5. Orleans. Much closer to the ocean. Behind a big ridge line.

    I’m stumped. Why would these 5 sites show unusual cooling? Hansen’s reply: they can’t. There must be something wrong with the station data.
    Must. Well, there are multiple explanations:

    1. Could be something wrong with the ocean measurements.

    2. Could be something amiss with the assumption that these site records always correlate well with the ocean data.

    3. Could be something wrong with the warmer stations they are compared to. Where is the list of the other stations?

    How to decide? Confirmation bias to the rescue.

    Dump the cold data.

    Continuing:

    “If these adjustments were not made, the 100-year temperature change in the United States would be reduced by 0.01°C.”

    A small amount; it does not even matter. So why drop it?

    Comment by steven mosher — 11 Aug 2007 @ 10:31 AM

  75. re: “Another week, another ado over nothing.”

    If that is the last warming error in the GISS temp analysis, then perhaps it is an embarrassing but minor issue.

    After an error is found, an organization’s sales staff typically weigh in with “It is only one small error; the rest of the product is still solid.” Meanwhile the technical folks scurry around and make sure that it was the only mistake. Take a step back and look, Gavin; your business card may need some updating.

    re: “What is secret? The algorithms, issues and choices are outlined in excruciating detail in the relevant papers: http://data.giss.nasa.gov/gistemp/references.html

    Sorry, I don’t see the excruciating detail. I found no pseudo-code, no code, no scripts. Had Hansen released code and scripts, Steve McIntyre would have caught the problem long ago; their absence just made him work harder. Until code and scripts are fully released, other errors are going to be slower to surface. I am betting you know this.

    Comment by John Norris — 11 Aug 2007 @ 10:34 AM

    One mistake and the “gotcha” squad comes out in full force and says the whole enterprise is wrong! Baloney!
    The modelers admitted the error and promptly corrected it. This is what should be focused on.
    Jeremy Bernstein, the physicist and science writer, visited Erwin Schrödinger near the end of his life. Schrödinger, who developed the wave equation of quantum mechanics, told Bernstein, “There is something that the ancient Greek scientists knew that we seem to have forgotten.” He paused and then said, “and that is modesty.”
    It seems that Schrödinger was wrong. The modelers had the humility to openly acknowledge the error and to correct it.

    Comment by Lawrence Brown — 11 Aug 2007 @ 10:42 AM

    It’s already been this hot (1998) in the past. In other countries it has been hotter in other years. Polar bears still live. There have always been droughts and floods. Now we just get better informed than 50 years ago and before.

    In my opinion the rate of change doesn’t say anything about the following years. Temperature records show that after one year with an increase (maybe even a dramatic one), one or two years later it has been cooling again.
    From what I gather there is no gain of momentum, but a lot of jitter in the raw data as well as in the annual means.
    If anyone could point me to (freely accessible?) scientific literature suggesting something like a momentum in global annual mean temperatures, please feel free to post: I am hungry to learn every day!

    PS: I am currently working on some statistics concerning temperatures (min, avg, max). I will let you know when I’m ready to “publish”… As a preview you can look at this:
    http://www.07000deuschl.de/co2/pubs/avgcharts_ffm/avgs.jpg
    http://www.07000deuschl.de/co2/pubs/avgcharts_ffm/deriv.jpg

    http://www.07000deuschl.de/co2/pubs/avgcharts_ffm/index.html

    Comment by Matthias — 11 Aug 2007 @ 10:56 AM

    No. The error on the global mean anomaly is estimated to be around 0.1 deg C now, increasing slightly before 1950 or so and a little larger in the 1880s.

    What do you mean it’s “estimated” to be? You don’t know what the error is? What’s the error on the error of the GST?

    On what do you base the idea that our network is best? Why is it better than for example that of Sweden?

    Stop obfuscating. It may not be better than Sweden–but I imagine it is. That’s not the point. What’s the network like in Botswana? Nigeria? Rural Mongolia? Siberia? Geez.

    Well, given his lengthy track record in actually doing work to audit climate data, it is absolutely inappropriate in my opinion to dismiss him. –Judith Curry

    It is inappropriate and unprofessional to dismiss anyone as an “industry stooge” simply because they disagree with you. Unsubstantiated ad hominem attacks are never appropriate.

    [Response: If we knew exactly what the error was, we'd fix it and then it would be zero. All errors are therefore estimated (based on spatial coverage, measurement accuracy, etc.). - gavin]

    Comment by DaveS — 11 Aug 2007 @ 11:05 AM

  79. More importantly for climate purposes, the longer term US averages have not changed rank. 2002-2006 (at 0.66 ºC) is still warmer than 1930-1934 (0.63 ºC – the largest value in the early part of the century)

    Sum total of this change? A couple of hundredths of degrees in the US rankings and no change in anything that could be considered climatically important

    So, it is climatically significant that 1930-1934 is 3 hundredths of a degree cooler than 2002-2006, but it is not climatically significant that 1934 and 1998 have now changed places, by 3 hundredths of a degree.

    Can you explain your criteria for assessing when 3 hundredths of a degree is climatically significant and when it is not? Because from where I sit it looks like the criterion is “significant if it supports the AGW hypothesis, insignificant if not”.

    [Response: The longer the average the more relevant, and the more significant the difference. I was just trying to make the point that long term averages didn't shift. If we consider 0.1 (as above) the 'significant' level for single years, the five year mean significance is roughly 0.1/sqrt(5)= 0.045 and so the 1930-1934 and 2002-2006 difference isn't significant, but the difference to 1998-2002 certainly is. - gavin]
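    A back-of-envelope version of that arithmetic (a sketch only, using the assumed ~0.1 deg C single-year uncertainty quoted above):

        import math

        sigma_year = 0.1                        # assumed single-year uncertainty (deg C)
        sigma_5yr = sigma_year / math.sqrt(5)   # ~0.045 deg C for a 5-year mean

        d_1930s = 0.66 - 0.63                   # 2002-2006 minus 1930-1934
        d_1998 = 0.79 - 0.63                    # 1998-2002 minus 1930-1934
        print(round(sigma_5yr, 3), d_1930s > sigma_5yr, d_1998 > sigma_5yr)
        # 0.045 False True -> the first gap is within the noise, the second is not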

    Comment by Enviroagnostic — 11 Aug 2007 @ 11:14 AM

  80. More importantly for climate purposes, the longer term US averages have not changed rank. 2001-2006 (at 0.66 ºC) is still warmer than 1930-1934 (0.63 ºC – the largest value in the early part of the century) (though both are below 1998-2002 at 0.79 ºC).

    2001-2006 is longer than 1930-1934 – what does it mean to compare averages of periods of different length?

    And may we please get preview back?

    [Response: I mistyped. It is 2002-2006, I'll correct the post. Thanks. (Preview is coming but for some reason the compreval plugin doesn't want to work). - gavin]

    Comment by llewelly — 11 Aug 2007 @ 11:32 AM

    While the error may be minor, it does make good copy.

    Comment by captdallas2 — 11 Aug 2007 @ 11:33 AM

  82. Can someone tell me – what are the primary sources of scatter in the global mean temp from year to year?

    [Response: El Nino/La Nina is important, the North Atlantic Oscillation during the winter can be seen, but mostly it is just 'weather' - unforced variability (which locally can be quite large) averaging out to give deviations of about 0.1 deg C from year to year. - gavin]

    Comment by Herb — 11 Aug 2007 @ 11:37 AM

  83. Just for global warming skeptics:
    http://www.nowpublic.com/english_man_swims_1km_north_pole_raise_climate_change_awareness

    Just for everyone else on the cause of global warming (non-scientific):
    It’s true that it’s happening. It’s true that it is new to the Earth; CO2 levels are higher than they’ve been for … how many years? The question is the cause. It’s not from living plants, they reduce CO2; it’s being caused by a change of some kind… It sounds reasonable to assume that the change must be large to have so large an effect on the amount of CO2. It’s most likely a living organism that emits CO2 and whose population is consistently increasing, indicating that it has no natural predator to thin out its numbers… Could it be a fish? Nah, we’d kill them and eat them. Could it be an animal? Could be, but we’d keep their numbers down through hunting, and we’d see them. Could it be an insect? Nah, we’d kill them to keep their numbers down; they are pests after all. Well, what’s causing it: “we have met the enemy and they are us”.

    Comment by Harold Ford — 11 Aug 2007 @ 11:50 AM

  84. Re #56: [All of this constitutes evidence for global warming. Some of it constitutes strong evidence for a particular theory of the mechanism...]

    An interesting & informative read, if a bit long, but I think it’s still backwards. After all, most if not all of the points you cite as evidence of warming would be equally valid for warming from any cause, whether it be an increase in the sun’s output, cosmic rays, or the side-effects of UFO exhaust. That means having to refute each and every alternative explanation that the denialist community dreams up – and denialists being what they are, having to repeat each separate refutation ad nauseam.

    Why not approach it from the other direction, from the theory of the mechanism? The amount of CO2 in the atmosphere has increased, a fact that only the completely irrational denialists try to dispute. That this increase is from fossil fuel burning is also beyond any reasonable question. The properties of CO2 have been measured quite accurately, and are likewise difficult for the denialists to dispute. So from that, and known radiation theory (which has been experimentally verified in many ways), we discover that there is this mechanism by which increasing atmospheric CO2 causes warming.

    If the denialist community wants to try to refute this mechanism, they have to do some hard math and/or experimental science, rather than just randomly sniping at this or that data set. Instead of the foundation for the theory, your long list of warming examples becomes evidence supporting it. If one or more are shown to fail, due to instrument error or the effects of other climate drivers (e.g. sulfate aerosols), that doesn’t undermine the existence of the mechanism.

    Comment by James — 11 Aug 2007 @ 12:12 PM

  85. Gavin:
    Steve McIntyre graciously accepted my suggestion and the two figures you present now are viewable side-by-side. Hopefully it will stimulate further explorations and careful assessments of the published temperature records.

    Comment by bjc — 11 Aug 2007 @ 12:36 PM

  86. RE 77.

    Matthias, your data are from one city in Germany. Hardly representative of the globe.

    J

    Comment by Justin — 11 Aug 2007 @ 1:32 PM

  87. What I’d like to see is a least-squares analysis of these data sets, which would calculate a growth rate over the period of the data sets. Simple maxima and minima do not make a trend by themselves. The 1934 data for the US might be just an aberration in the trend. As I look at the graph of the data, just eyeballing it as it is, it certainly appears to me that there is a clear upward trend over the course of the 20th Century in the US data set.

    Some of the comments also make it clear to me that there are a lot of folks who think of “pollution” as generic. That is, of course, total nonsense. The effects of particulates (e.g. fly ash), and carbon dioxide are quite different, as I’m sure those of you who are actually scientists know.

    [Response: US trend (1880-2006): 0.043 +/- 0.021 deg C/decade (95% confidence levels), significant but less than the global trend. For the more recent period 1975-2006: 0.3 +/- 0.16 deg C/dec. - gavin]
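    Illustrative only: a least-squares trend with an approximate 95% interval, run here on made-up data (swap in the actual GISS series to reproduce numbers like those quoted in the response):

        import numpy as np
        from scipy import stats

        years = np.arange(1975, 2007)
        anoms = 0.03 * (years - 1975) + 0.1 * np.random.randn(years.size)  # stand-in anomalies

        slope, intercept, r, p, stderr = stats.linregress(years, anoms)
        print("trend: %.3f +/- %.3f deg C/decade"
              % (slope * 10, 1.96 * stderr * 10))  # simple +/- 1.96*SE interval,
                                                   # ignoring autocorrelation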

    Comment by Gene Hawkridge — 11 Aug 2007 @ 1:50 PM

  88. It’s really no surprise how the antiwarming crowd jumps on any small error in the science (in this case a minor error in one data set) to discredit all the science. It is the same tactic the anti-evolutionists use. They do not understand that both theories, AGW and evolution, are based on multiple converging lines of evidence and to bring down either theory would require seriously attacking each of those independent lines of evidence successfully. This instance wasn’t even a small dent and still the fools cry hoax.

    Comment by Peter Houlihan — 11 Aug 2007 @ 2:15 PM

  89. James (#84) wrote:

    An interesting & informative read, if a bit long, …

    My apologies for the length, particularly so soon in the thread. Believe it or not, I was actually inclined to write something a bit longer to address some of your later concerns and to more properly treat some technical issues, but that would have been imposing a great deal upon everyone else, and I believe that given the length I had imposed more than enough as it is. Besides, what you don’t mention at one point you can always bring in later.

    … but I think it’s still backwards. After all, most if not all of the points you cite as evidence of warming would be equally valid for warming from any cause, whether it be an increase in the sun’s output, cosmic rays, or the side-effects of UFO exhaust. That means having to refute each and every alternative explanation that the denialist community dreams up – and denialists being what they are, having to repeat each separate refutation ad nauseam.

    Granted, for something general purpose, one should focus more on the mechanism, particularly anthropogenic greenhouse gas emissions. We actually have that nailed. Likewise, there should be more focus on the foundations of climatology in physics. However, I was responding to Steve Mosher – and he is inclined towards skepticism with regard to the mere fact of global warming. Likewise, McIntyre is trying to cast doubt on the phenomenon by focusing on a small fraction of a percent in temperature for one year in the United States – which is after all what this thread is about. So focusing more on the warming and less on the mechanism seemed warranted, given the context.

    Comment by Timothy Chase — 11 Aug 2007 @ 2:39 PM

  90. “Stop obfuscating. It may not be better than Sweden–but I imagine it is. That’s not the point. What’s the network like in Botswana? Nigeria? Rural Mongolia? Siberia? Geez.”

    How am I obfuscating here? Your original post was along the lines that since “the best” network had given rise to an error, all the others were in doubt. I took Sweden as an example since I happen to be staying there for a few months now.
    The reliability of data from different regions is of course important to know when deciding how reliable the averages are, but just assuming that our data are the best is no good. You mention some countries you find unreliable, but what about Australia, New Zealand, Japan, or South Africa? Or Canada, which has more land area than the US and thereby makes up a larger part of the world average.

    Comment by DavidU — 11 Aug 2007 @ 2:50 PM

  91. #82 (including response)

    Doesn’t the scatter in the surface temperature record really boil down to the fact that most of the energy in the climate system is stored BELOW the surface in the oceans? Thus, vertical transports of energy that would be insignificant when compared to the total sub-surface energy can have a measurable impact on SURFACE ocean temperatures.

    Comment by Jerry — 11 Aug 2007 @ 2:55 PM

  92. 88. “It’s really no surprise how the antiwarming crowd jumps on any small error in the science …to discredit all the science.”

    This was a classic tactic of the tobacco industry. Any small error in a science article was used to describe the research as ‘flawed’ and unusable. Tobacco lawyers also sued researchers to get patient data: I know one scientist who was unable to visit Texas for a number of years because there was a warrant out for his arrest; he had refused an order obtained by the tobacco industry to turn over his patient records and data.

    Comment by richard — 11 Aug 2007 @ 3:23 PM

  93. re 92
    Don’t any of those folks read their grandmothers’ journals for when the apples bloomed?

    This year the apples bloomed earlier than in 1934.

    It is hard to argue with an apple tree.

    Comment by Aaron Lewis — 11 Aug 2007 @ 3:24 PM

  94. #6: Lee

    [edit] Percent statements only make sense for ratio-scaled data (where 0 means none), not interval-scaled data (where 0 is an arbitrary point on a scale). Anomalies are interval-scaled. If you change the base reference for the anomalies, the percentage “change” in the anomaly will likely be a different amount. Your statement is as sensible as saying a change in the outside temperature from 10C to 15C makes it 50% warmer.

    Comment by RomanM — 11 Aug 2007 @ 3:26 PM

  95. All scientists are skeptics. To “believe” is associated with faith and religion, not science. To “deny” is associated with “believing” regardless of contrary evidence, and regardless of the fact that doing so is unscientific.

    Those who use this terminology easily expose themselves as naive, manipulated, and propagandized. This goes both ways.

    Comment by Chaz — 11 Aug 2007 @ 3:46 PM

  96. Roman,
    No. I was explicitly looking at the total change in temp over a stated time interval, with the beginning and end times stated: i.e., the total change over the last century, from the value at the beginning to the value at the end, was somewhere between 0.8C and 1.1C. That change is not sensitive to the base reference. Changing the base would change the absolute values for the beginning and end, but would leave the difference between them unchanged.

    Yes, it is slightly awkward – but only slightly. I chose to state it that way because claims are being bandied all over the internet that the change in the temp/trend was several percent.

    Comment by Lee — 11 Aug 2007 @ 3:53 PM

  97. Here is the response of an Inuit child about all this climate change:
    http://www.putfile.com/pic.php?img=6219791

    Comment by Maqaitik Hunter — 11 Aug 2007 @ 3:53 PM

  98. Steve Mosher (#74) wrote:

    Theory-laden observation: one of my favorite examples.

    from Hansen 2001:

    “The strong cooling that exists in the unlit station data in the northern California region is not found in either the periurban or urban stations either with or without any of the adjustments.”

    Rural can’t be cooler. And it can’t be cooler after our perfect adjustment scheme.
    Go figure. We are talking about the early 1900s here, and defining urban/small town/rural using 1980 population data and a photo from a satellite.

    They continue: …

    Hansen (2001):

    A closer look at United States and global surface temperature change
    J. Hansen, et al (2001)
    http://pubs.giss.nasa.gov/docs/2001/2001_Hansen_etal.pdf

    They state:

    After examination of all of the stations in this region, five of the USHCN station records were altered in the GISS analysis because of inhomogeneities with neighboring stations (data prior to 1927 for Lake Spaulding, data prior to 1929 for Orleans, data prior to 1911 for Electra Ph, data prior to 1906 for Willows 6W, and all data for Crater Lake NPS HQ were omitted)…

    As such, they are responding to sources of contamination which may not have been regarded as that important – prior to 1929. Not only does this have little to do with current trends, but it demonstrates a concern for accuracy. And as a matter of fact, the elimination of such contamination tends to reduce the apparent rise in temperatures over the twentieth century rather than magnify it.

    In an earlier thread you claimed to be interested in accuracy and were attacking urban sites for their presumed contamination by the urban heat island effect. We of course pointed out that you get virtually the same warming trend whether you use all stations or just rural stations – which you didn’t even seem to acknowledge.

    Now Hansen is correcting some fairly obvious contamination, and you find this particularly problematic because he is applying some common sense. You vaguely suggest it involves ulterior motives, on the basis of a kind of philosophic skepticism which, on close analysis, would appear to be of the radical kind, roughly on par with Descartes and Hume.

    For some reason, I don’t think that identifying the actual trends and causes of world temperatures (or more generally, for that matter) is your highest priority.

    Comment by Timothy Chase — 11 Aug 2007 @ 3:54 PM

  99. 96 Lee

    Then you are not talking about anomalies. An anomaly is a departure from a norm. If you want to say that you are looking at the change from some starting point to some end point, you are entitled to do so, but don’t call that an anomaly unless you want us to believe that somehow the temperature at the beginning of the 1900s was the normal state of affairs.

    If this website wishes to edit my post, I would appreciate it if they would not make it look like I was quoting myself. Thanks.

    Comment by RomanM — 11 Aug 2007 @ 4:06 PM

    Roman, a change in anomaly over that time period is equivalent to the change in temp over that period. If the anomaly went from -0.3C to +0.7C, then the change in anomaly is equal to the change in temp, which is a delta of 1.0C.

    Make any change to the reference period, change the baseline, and all that happens is you create equivalent offsets to the beginning and ending anomaly values, and the delta is unchanged.

    My percentage calculation references the delta temp (or delta anomaly, which is equivalent), not the absolute anomaly value.

    I wasn’t even imprecise in my language. The adjustment causes a slight change in the anomaly values post 2000, which changed the delta temp, which is what I explicitly referenced in my percent calculation.
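    A tiny numeric check of that point (made-up values):

        start, end = -0.3, 0.7              # anomalies at the two endpoints (deg C)
        for offset in (0.0, 0.14, -0.25):   # pretend we re-based to different reference periods
            print((end + offset) - (start + offset))   # delta is 1.0 every time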

    Comment by Lee — 11 Aug 2007 @ 4:24 PM

  101. Re 98 – Tim, I am so grateful for your knowledge, skill and patience in deconstructing plausible sounding strawman arguments and revealing them for what they are. Keep up the good work – and I am sure that it involves a great deal of work on your part. Your posts are a great asset on this site.

    Comment by Ron Taylor — 11 Aug 2007 @ 4:27 PM

    It is troublesome that in this day and age we still seem to be using, or overusing, an 18th-century approach to data capture: a thermometer mounted on a post. The problem is, thermometers don’t really measure what we need to know about energy in the atmosphere, so their measurements are largely irrelevant for models. Thermometers report some unidentified hodge-podge of LTE, upwelling and downwelling radiation fields, air currents, and radiation from the surroundings, whatever those happen to be.

    What we need — it is 2007 after all — are lateral and vertical radiation field measurements concurrent with pressures, separate pressure fluctuations, wind velocity, and humidity, all sampled at least every five minutes. Stations should be located inside and outside UHIs. I say let’s stop wasting time with post-office or whatever thermometers. Let the private sector manage and sell this data. If NASA is going to be in the business of modeling, it needs clean, complete data, and should trash the rest.

    Comment by Allan Ames — 11 Aug 2007 @ 4:30 PM

    Hansen’s correction shows that in the United States temperature oscillates with a frequency of about 65-70 years. This oscillation was identified by Schlesinger and Ramankutty in 1994. The last peak was 1940 and we are about to experience the next decline. There is no evidence for “global warming” from Hansen’s record.

    The same oscillation, without overall warming, is to be found in the published regional records for China, and for many well-maintained local records.

    The supposed “global warming” shown in the surface record is an artifact caused by the many inaccuracies and biases in the recording of local temperatures (where average temperatures are never measured) and their influence by local changing circumstances.
    [edit]

    Comment by Vincent Gray — 11 Aug 2007 @ 4:45 PM

  104. Re #7. Was there an answer to this reasonable request? It would be quite illustrative to see the US and global anomalies on a similar scale, don’t you think? Concerning the global anomaly graph, somebody should tell the graphics person not to plot values off the chart, but to choose the y-axis so that everything fits inside the picture.

    Comment by Dodo — 11 Aug 2007 @ 4:52 PM

  105. RE: 92/Richard: Monbiot says that tobacco company documents disclose a concerted effort to fog the secondary smoke issue by discrediting science in general–using AGW research as the focal point of the disinformation attack. From that flowed junkscience.com, its progeny, and most if not all of AGW denial. As such, the denial collective toils, consciously or unconsciously, in the service of tobacco company (and by derivation, extractive company) profits. How proud they must be.

    88. “It’s really no surprise how the antiwarming crowd jumps on any small error in the science …to discredit all the science.”

    This was a classic tactic of the tobacco industry. Any small error in a science article was used to describe the research as ‘flawed’ and unusable.

    Comment by ghost — 11 Aug 2007 @ 5:03 PM

  106. #103

    The probability that the cumulative effect of local “inaccuracies and biases” would be a statistically significant global warming signal is minuscule. (Unless there’s a conspiracy… calling Michael Crichton!)

    Comment by Jerry — 11 Aug 2007 @ 5:11 PM

  107. Three Questions.

    Had McIntyre discovered that the error changed the data in the other direction and made it much warmer, would he have brought it to light?

    Would 1934 still be the warmest year had the calculations for a year started on any date other than January 1 through December 31? Had years been calculated from any other starting date, it seems very likely that the time period we call 1934 would not stand out at all. The years before and after 1934 were considerably colder, unlike the years in recent decades. Recent years are all much closer together in warmth, hence the mean being higher for a longer period.

    1934, Dust Bowl, right? If in 1998 we had scraped the plains of plants so that dust could blow around, would it have significantly beaten the pants off 1934 for hottest year on record?
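    For what it’s worth, here is a rough sketch (on stand-in monthly data) of how one could test whether the ranking survives shifting the 12-month window:

        import numpy as np

        rng = np.random.default_rng(0)
        monthly = rng.normal(size=127 * 12)            # stand-in monthly anomalies, 1880-2006

        def block_means(x, start):
            x = x[start:]
            n = (x.size // 12) * 12
            return x[:n].reshape(-1, 12).mean(axis=1)  # one mean per 12-month block

        for start in range(12):                        # January start, February start, ...
            means = block_means(monthly, start)
            print(start, 1880 + int(np.argmax(means))) # nominal year of the warmest block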

    [Response: Actually, it wasn't clear which way the error would go once it was spotted. I doubt that it would have got as much publicity if it had gone the other way though. - gavin]

    Comment by FP — 11 Aug 2007 @ 5:22 PM

  108. As I said at DeLong’s typepad, where he has a good piece on this now:

    You can draw a straight line right across the 1934 peak on the revised graph [as I could see it there], and see that it is still lower than the 200x wiggles.

    Yeah, Rush was blabbering about fraudulent NASA scientists, and “Seventy years ago, it was way warmer than it is now” etc. BTW, why can’t groups of people like NASA scientists sue for libel if someone defames them?

    [Response: It's not worth the bother..., though sometimes one is tempted! - gavin]

    Comment by Neil B. — 11 Aug 2007 @ 5:26 PM

  109. In 103, Vincent Gray writes: ‘Hansen’s correction shows that in the United States temperature oscillates with a frequency of about 65-70 years. This oscillation was identified by Schlesinger and Ramankutty in 1994. The last peak was 1940 and we are about to experience the next decline. There is no evidence for “global warming” from Hansen’s record.’

    How interesting. I realize you are only saying that Hansen’s data does not demonstrate global warming, but you imply that global warming does not exist, and is merely a phase of a natural oscillation.

    Since, as you say, this idea has been around since 1994, and since the conclusion contradicts the conclusions of hundreds of peer reviewed studies, surely dozens of peer reviewed articles have been published in support of the idea. Could you refer us to, say, three or four published in the past five years?

    Comment by Ron Taylor — 11 Aug 2007 @ 5:38 PM

  110. #58
    Gavin, I think that explains the difference, thank you.

    Comment by Wolfgang Flamme — 11 Aug 2007 @ 6:01 PM

    The Y2K comment was McIntyre being flip. It has caused unneeded confusion. One of the dangers of blog science (a tendency for catchy headlines and the like, versus boring titles).

    The station picture thing was related in an indirect way. Watts et al. had been looking at individual station pictures and also, in some cases, correlating them to the station temp graphs. Looking at individual temp graphs showed some large adjustments tending to happen at JAN2000. That’s what popped out to the eye and led to the insight. The adjustments are actually bimodal rather than normal around the mean, so it was easier to see an effect than one might expect. (None of this is to defend the station photo effort, which I’ve ridiculed, but to explain the form of the connection as it is. Really it is much more one of those things of one idea leading to another that happens in science. Heck, some of my most interesting results from lit searches did not come from a search or from references to references, but from being physically in the library among the books or bound journals and seeing what ends up next to each other…)

    Comment by TCO — 11 Aug 2007 @ 7:21 PM

    49: (science being theory- or experiment-led). Both can happen. The fractional quantum Hall effect, the Michelson-Morley experiment, the Mössbauer effect, the discovery of penicillin, etc., are examples of unexpected results. Of course, it is also often the case that instruments are specifically designed based on an experiment.

    Comment by TCO — 11 Aug 2007 @ 7:29 PM

  113. Vincent Gray (#103) wrote:

    Hansen’s correction shows that in the United States temperature oscillates with a frequency of about 65-70 years. This oscillation was identified by Schlesinger and Ramankutty in 1994. The last peak was 1940 and we are about to experience the next decline. There is no evidence for “global warming” from Hansen’s record.

    The same oscillation, without overall warming, is to be found in the published regional records for China, and for many well-maintained local records.

    The supposed “global warming” shown in the surface record is an artifact caused by the many inaccuracies and biases in the recording of local temperatures (where average temperatures are never measured) and their influence by local changing circumstances.

    The article is a bit dusty after thirteen years, I would presume. As such I was unable to find it, but I did find the abstract. It turned out to be a letter to Nature. Not peer-reviewed as I understand it, but nevertheless of some quality, I believe.

    I have bolded the sentence which I found most significant, given the context:

    An oscillation in the global climate system of period 65–70 years
    Michael E. Schlesinger & Navin Ramankutty
    Nature 367, 723 – 726 (24 February 1994)
    http://www.nature.com/nature/journal/v367/n6465/abs/367723a0.html

    In addition to the well-known warming of 0.5 °C since the middle of the nineteenth century, global-mean surface temperature records display substantial variability on timescales of a century or less. Accurate prediction of future temperature change requires an understanding of the causes of this variability; possibilities include external factors, such as increasing greenhouse-gas concentrations and anthropogenic sulphate aerosols, and internal factors, both predictable (such as El Niño) and unpredictable (noise). Here we apply singular spectrum analysis to four global-mean temperature records, and identify a temperature oscillation with a period of 65–70 years. Singular spectrum analysis of the surface temperature records for 11 geographical regions shows that the 65–70-year oscillation is the statistical result of 50–88-year oscillations for the North Atlantic Ocean and its bounding Northern Hemisphere continents. These oscillations have obscured the greenhouse warming signal in the North Atlantic and North America. Comparison with previous observations and model simulations suggests that the oscillation arises from predictable internal variability of the ocean–atmosphere system.

    Despite the title of Chapter 11, “Regional Climate Projections,” you seem to have thought that its purpose was to detail observations made in the past rather than to make projections regarding the future. Now it would appear that the letter which you are citing in support of your highly idiosyncratic views was written by authors who clearly do not share them. You might wish to consider reading things more carefully before commenting on them.

    Comment by Timothy Chase — 11 Aug 2007 @ 7:40 PM

  114. As I posted in the Arctic sea ice extent thread, the fact that Arctic sea ice is at an historic low this year is more the result of data changes made this January.

    Prior to these changes (which may also contain the same kind of errors as this thread is about) 1995 was the lowest Arctic sea ice extent.

    Here is a BEFORE and AFTER of the changes made in the data this January (hopefully the adjustments do not contain errors.)

    http://img401.imageshack.us/img401/2918/anomalykm3.gif

    Comment by John Wegner — 11 Aug 2007 @ 7:42 PM

    #114 nice graphic, but wait until you see the present-day satellite pictures vs the (modeled) satellite pictures of 100 years ago.

    Comment by DocMartyn — 11 Aug 2007 @ 8:21 PM

  116. John Wegner (#114) wrote:

    As I posted in the Arctic sea ice extent thread, the fact that Arctic sea ice is at an historic low this year is more the result of data changes made this January.

    Prior to these changes (which may also contain the same kind of errors as this thread is about) 1995 was the lowest Arctic sea ice extent.

    Here is a BEFORE and AFTER of the changes made in the data this January (hopefully the adjustments do not contain errors.)

    http://img401.imageshack.us/img401/2918/anomalykm3.gif

    John, the following chart from 2005 would suggest that you are once again being creative…

    28 September 2005
    Sea Ice Decline Intensifies
    Figure 1: September extent trend, 1978-2005
    http://nsidc.org/news/press/20050928_trends_fig1.html

    I must admit, though, that this is better than your decimal place trick.

    Comment by Timothy Chase — 11 Aug 2007 @ 8:29 PM

  117. papertiger said:

    “I am wondering if the ocean is rising and flooding out Indonesia, why isn’t it flooding Cape Cod? are they not all at sea level?”

    It ain’t quite so straightforward. Simply put, all sea level curves are local. This is for a few reasons. For one, the massive ice loads present at the last glacial max depressed the earth’s crust beneath them (intuitive) and made it bulge upward along their edges (maybe not so intuitive, but similar to the beam problems I’m sure you enjoyed in diff eq). The removal of these loads caused a global isostatic readjustment, still ongoing, as the mantle flows in toward the depressions. This means that some places (previously beneath loads) are still rising quickly, others (those on the bulges) are falling slowly, and still others (in the far field) have had complicated post-glacial rises then falls.

    Another important factor is that the global sea surface is nowhere near ‘flat’ in the way you might be picturing. Gravity varies spatially, as does the mass of seawater, resulting in hills and valleys on the sea surface. These move in time, sometimes very dramatically (e.g. El Nino, ENSO, etc.).

    Finally, many places are tectonically active, and interseismic strain accumulation (and subsequent release during fault rupture) can complicate measurements mightily. This is the case in much of Indonesia, for example.

    All said, measurements of sea level rise try hard to take these complications into account.

    I’m not sure but I think Cape Cod might be in the forebulge region of the Laurentide ice sheet, so today the coastline there may be sinking in part due to post-glacial isostatic adjustment. Thus the question to ask is if recent sea level rise at Cape Cod has increased in rate (because the rate of isostatic change should be nearly imperceptible at the century scale).

    It’s Saturday night – off to a party!

    Comment by Rich Briggs — 11 Aug 2007 @ 9:20 PM

  118. #59
    Yes, I do understand the irrelevance of ‘fitting’ a single-frequency sinusoid – that was my entire point (I work in science/engineering, so I understand Fourier analysis). Sorry, my remark was perhaps obtuse. I was simply criticising the hyper-analysis of linear trend segments in this data set (at arbitrarily defined endpoints, with subjective methods of pre-filtering the data).
    #55
    Yes, I work in research and publish myself. Have you involved yourself in an examination of the GISS data sets? Things are not as transparent as you readily state. The fact that this error has been present for so many years (referenced so often), and required great effort to reverse-engineer, indicates this is a serious problem.

    It is not as if people did not care, and this was an error ‘missed’. A stream of scientists/engineers have been calling for a more transparent accounting, and we have been ignored under the absurd cackling of ‘Denialists!’

    I think it is time for less social/political polarity on these issues, before things get entirely out of hand (if they have not already). The climate science community would do themselves no harm by applauding Stephen McIntyre’s, shall we say, tenacity. Perhaps they could work with him in addressing other concerns raised within the CA sphere (which has the participation of many scientists/engineers). This is a different world from traditional ‘publishing’ science – one with serious ramifications for humanity and the earth – therefore you will have to put up with us ‘outsiders’ having our say, auditing your data, questioning your results, even if it is outside the realm of our immediate expertise. This is the burden of having chosen such an important realm of scientific investigation – so be it.

    regards, Steve

    Comment by Steven — 11 Aug 2007 @ 10:01 PM

  119. RE 98. Ataraxia. Pyrrhonism. Wiki it.

    Hansen had several choices. See if you can name them all, grasshopper.

    Comment by steven mosher — 11 Aug 2007 @ 10:40 PM

  120. Steven (#118) wrote:

    It is not as if people did not care, and this was an error ‘missed’. A stream of scientists/engineers have been calling for a more transparent accounting, and we have been ignored under the absurd cackling of ‘Denialists!’

    Steve,

    There is a great deal of evidence besides US land-based surface temperatures which demonstrates that global warming is quite real and that it is accelerating.

    I rattled off a list of twenty points (just before #57), one of the most dramatic of which, in my mind, is global glacier mass balance – by our best estimates, it threatens severe water shortages for over a billion people in Asia alone by the end of the century. Twenty largely independent lines of evidence that the world is undergoing a process of global warming – a process which, I will add, can and will take on a life of its own as various positive feedbacks come into play.

    What is happening right now up in the arctic should be a pretty good indication of this. You guys might not take it seriously or might like to think it is all made up, but a number of national governments would appear to take it quite seriously – judging from how the US, Russia and Canada are trying to stake claims on the oil below – which until now have been out of reach.

    However, you guys see one US temperature reading rise 0.02 degrees Celsius and another fall 0.01 degrees Celsius, and are ready to say that global warming never happened. The mineralogist who crowned himself “climate auditor” is doing everything he can to encourage this. Somehow I think he is one of the last people in the world that one should ever call upon to keep a profession honest.

    You guys like to complain that you need more “transparency.” You can download the climate models if you want to, and if you are willing to look a little, you can generally find the data. Heck, you can get the raw data on all US land stations and stations throughout the world, along with charts, if you are willing to visit:

    GISTEMP – EdGCM Dev – Trac
    http://dev.edgcm.columbia.edu/wiki/GISTEMP

    The guy in charge of the project is even trying to get pictures of all the stations for you and to help you identify how close they are to urban centers. But somehow a bunch of guys snapping pictures, who refuse to learn statistics, mathematics, physics, the simple fundamentals of the discipline they are attacking, or even what information is actually available, are going to tell climatologists how to conduct their science.

    Give me a break!

    I am no more of an expert than yourself, but at least I am willing to learn.

    Comment by Timothy Chase — 12 Aug 2007 @ 12:26 AM

  121. PS to #120

    Steve,

    As the charts in Gavin’s essay above suggest, it is quite true that the United States has had it easier than much of the rest of the world so far. Don’t count on our luck continuing as this progresses, though. If the models are as on the money as Jim Hansen’s twenty-year predictions (which were based on a single run with a model much more primitive than today’s, mind you), we are going to make up for it.

    By 2080, there will be a permanent dust bowl forming in the Southwest and another in the Southeast. We won’t be able to grow wheat south of the Canadian border. Wheat. In a world that will be facing severe water shortages, greatly reduced agricultural harvests, and the depletion of fish populations.

    This stuff is serious – and you guys are doing everything you can to stand in the way of the science because it’s telling you what you don’t want to hear. You want to continue with business as usual – even though we know where that will lead.

    Oh well.

    Maybe tomorrow I can quit trying to figure out the twists and turns of human psychology and focus on learning more of the science. Genuine human achievement – in a world which may soon be seeing a great deal less of it. I guess we will see.

    Time for bed.

    Comment by Timothy Chase — 12 Aug 2007 @ 1:07 AM

  122. Tim:”It turned out to be a letter to Nature. Not peer-reviewed as I understand it, but nevertheless of some quality, I believe.”

    Hmm? Letters to Nature are certainly peer-reviewed, and held to the highest standards as they are the most well-read of all scientific publications.

    Comment by Carl — 12 Aug 2007 @ 4:33 AM

  123. I have been musing on why the “we need to audit the scientists” meme offends me so much. An exemplar of the meme from SCP in #23:

    I’m not confident in the US data, let alone those from the rest of the world. Peer review is for science. The IPCC can do all the peer review they want. That doesn’t cut it for economies. Economies use accounting and audits. Failure to do so leads to situations like Enron and WorldComm. If we just want to do science, stick with peer review. If someone wants to influence economies, it’s long past time for auditors to start poking around.

    If NASA doesn’t want to publish source code and all data, at a minimum maybe they should hire an accounting firm to audit climate related data, methods and processes and to issue a public report on the quality they find (and that firm should hire Steve McIntyre ; -).

    Leaving aside the irony that both Enron and WorldCom were audited every year by an outside accounting firm, and the implied comparison of climate scientists with convicted felons, I have to wonder who the audit memers think they could hire to perform a valid audit. I’ve done enough data analysis to know that you have to know the field to know which data manipulations are valid and which aren’t. How does an accountant, say, decide which adjustments to the temperature records are acceptable? Accountants have the GAAP, FASB, and IRS code and regulations to guide them when they are checking a company’s books. Climatology doesn’t have any equivalent. Except, perhaps, for getting a PhD in physics, chemistry, biology, oceanography, or geology and then doing post-doc time with climatologists. In the same way that a non-accountant is going to get hopelessly lost trying to audit Exxon Mobil’s books, so will non-climatologists trying to audit climatology.

    There is another dimension of difficulty with the audit meme. Unless the auditors are at least as skillful in climatology as those being audited, there is a powerful draw for the auditor to start seeing what is being audited as a tutorial: “this is how to solve this kind of problem and that technique works here” sort of attitude. And an auditor who falls into this trap will never find a problem.

    Finally, you are not likely to entice a skillful climatologist into auditing someone else’s work.

    Since we all make mistakes, how should we verify the work of climate scientists? Not surprisingly, science already has a mechanism: replicate results. The most effective way to check a scientist’s work is to try to reproduce what he did. And guess what, the climate science community is already doing that. Is there only one historical temperature data set? No. Is there only one GCM? No. Are there lots of studies that come up with a hockey stick? Yes.

    So the answer to the audit meme is that old-time science: reproduce results. The climate audit folks are free to build their own temperature history and publish the results for scrutiny. They are free to build their own models and run them out to 2100 to see what they predict. That’s science.

    (anybody want to start a pool on how long it is until someone starts blaming black helicopters on the IPCC?)

    Comment by Tim McDermott — 12 Aug 2007 @ 6:24 AM

  124. The Before and After sea ice extent images I linked to in #114 are from the Cryosphere Today.

    The Before image is the saved version from the WayBackMachine in December 2006 (before all the data was changed), while the after is today’s graph (so it contains six months more data.)

    Comment by John Wegner — 12 Aug 2007 @ 7:43 AM

  125. Here are two Visible satellite images from a few hours ago of the NorthWest Passage (the satellites don’t centre right over it so you need a few different pictures.)

    You could probably get through with a ship right now.

    http://rapidfire.sci.gsfc.nasa.gov/realtime/single.php?2007223/crefl2_143.A2007223202001-2007223202500.4km.jpg

    http://rapidfire.sci.gsfc.nasa.gov/realtime/single.php?2007223/crefl2_143.A2007223184001-2007223184500.4km.jpg

    Comment by John Wegner — 12 Aug 2007 @ 7:50 AM

  126. Gavin,
    I still have not seen an explanation for the adjustment to years prior to 2000. The error McIntyre found should only have affected data for years 2000-2007. Why the adjustment to 1998 and 1934?

    [Response: The corrections were of different signs in different stations. The urban adjustment uses the rural trends (which may have changed slightly) to correct urban stations and therefore the correction may differ slightly from what was used before. That made quite a few 0.01 or 0.02 differences before 2000. Compare this (from May) with this (from July). Note as well that occasionally there are corrections and additions to the GHCN data for earlier years that get put in at the same time as the monthly update. Thus this magnitude of difference - fun though it might be to watch - is not climatologically significant. - gavin]

    Comment by Ron Cram — 12 Aug 2007 @ 9:00 AM

  127. If 1934 had been the warmest year globally, we may have expected that 5000 year old Alps ice man to have melted back then, instead of in the early 90s. Where’s their thinking cap? Though, of course, local weather can be different from the global average.

    Comment by Lynn Vincentnathan — 12 Aug 2007 @ 9:41 AM

  128. I read in climateaudit.org that GISS does not share its code/algorithm for “fixing” the data. Gavin can you inform us why so? Just curious.

    [Response: The algorithms are fully described in the papers - basically you use the rural stations trend to set the urban stations trend. It's a two-piece linear correction, not rocket science. - gavin]

    Comment by Jay — 12 Aug 2007 @ 10:22 AM
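
    For readers wondering what a two-piece linear urban correction of the kind Gavin describes might look like in practice, here is a minimal sketch in Python. It is an illustration only, not the GISS code: the station series, the break year and the noise are all made up, and the real analysis uses combinations of neighbouring rural records rather than a single synthetic one.

    import numpy as np

    def two_piece_adjust(years, urban, rural_mean, break_year):
        """Remove a two-leg linear (urban minus rural) trend so the urban
        station inherits the trend of the surrounding rural stations."""
        diff = urban - rural_mean
        x1 = years - years[0]                                        # slope before the break
        x2 = np.where(years > break_year, years - break_year, 0.0)   # extra slope after the break
        design = np.column_stack([np.ones_like(x1), x1, x2])
        coeffs, *_ = np.linalg.lstsq(design, diff, rcond=None)
        return urban - design @ coeffs

    years = np.arange(1900, 2001, dtype=float)
    rng = np.random.default_rng(0)
    rural_mean = 0.005 * (years - 1900) + rng.normal(0, 0.1, years.size)   # made-up rural mean
    urban = rural_mean + 0.01 * np.maximum(years - 1950, 0) + rng.normal(0, 0.1, years.size)
    adjusted = two_piece_adjust(years, urban, rural_mean, break_year=1950)
    print("urban trend before/after adjustment (deg per decade):",
          10 * np.polyfit(years, urban, 1)[0], 10 * np.polyfit(years, adjusted, 1)[0])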

  129. European Temperature records are also wrong :)

    Comment by Count Iblis — 12 Aug 2007 @ 10:35 AM

  130. John Wegner (#125) wrote:

    The Before and After sea ice extent images I linked to in #114 are from the Cryosphere Today.

    The Before image is the saved version from the WayBackMachine in December 2006 (before all the data was changed), while the after is today’s graph (so it contains six months more data.)

    John,

    Regarding the sea ice anomalies…

    Ok. The images which you are using are the correct images.

    However, I would look at them very closely. For example, the low that you are picking out for 1996 (actually the end of 1995) is especially narrow. Given the size of the graphic, it isn’t that great of a surprise that it isn’t showing up from one year to the next. I picked out the low for the end of 2005, put my finger on it and allowed your graphic to switch from one year to the next.

    Blammo! Same point.

    Principally what you are seeing is compression as the result of including one additional year in the same size graphic. It gets rather dramatic in terms of the difference in the images as you get towards the end, but this has to do with how quickly things are progressing in the arctic. Incredible.

    Now I believe you can find all of the data files on the web. However, what I am seeing is principally gzipped archives on FTP servers. And there is a great deal of data in those files. Additionally, the methods used to process those files into the graphics that you see at various sites are described in some detail, although perhaps not as much detail as I might like.

    You’ve got to remember: the people who process the files and produce the graphics have jobs. The graphics are essentially labors of love. And to describe them in the detail that you or I might like would be time taken away from something else. If you want them to produce that level of detail, step-by-step instructions for reproducing their graphics, it will take resources.

    But no doubt there is more that would be required to make this genuinely available to everyone with a net connection. For example, you would want to have the software available for free. But it would be pretty big. So preferably the software should reside on an internet server, perhaps as Perl. This way anyone would have access to it. But then since a fair number of people might be using it at the same time, you will want the servers to be able to handle the traffic. More resources. Then you should have the actual code downloadable as well. More resources.

    Are you beginning to get the picture? What all the “auditors” out there want is to have all of the resources dedicated to making it possible for them to duplicate what the experts do point-for-point and pixel-by-pixel. But they want these resources without any expansion of the budgets for the relevant agencies and organizations. And this means time and money taken away from a great many other things.

    Anyway, someone who is more familiar with the rendering of data or of the statistical methods which are used might be able to say more. I myself have only a basic understanding. But I do know how to put my finger on the screen. And I believe that little more than this is required to understand the differences between the two images that you see.

    My apologies for suggesting that you might have manipulated the images. Clearly you haven’t. I was just remembering the traveling decimal point from a few weeks back.

    Comment by Timothy Chase — 12 Aug 2007 @ 11:56 AM

  131. Count Iblis (#128) wrote:

    European Temperature records are also wrong

    Yep.

    Or to be more precise, they were “wrong.”

    Following your link I see that we had previously underestimated how bad things currently are relative to the past – because we had overestimated how bad things were in the nineteenth century. This means that our current trajectory is worse than we thought and that things are likely to get worse than we thought sooner than we thought.

    European heat waves double in length since 1880

    New data published by NCCR Climate researcher Paul Della-Marta show that many previous assessments of daily summer temperature change underestimated heat wave events in Western Europe. The length of heat waves on the continent has doubled and the frequency of extremely hot days has nearly tripled in the past century. In their article in the Journal of Geophysical Research–Atmospheres Della-Marta and his colleagues from the University of Bern present the most accurate measures of European daily temperatures ever. They compiled evidence from 54 high-quality recording locations from Sweden to Croatia and report that heat waves last an average of 3 days now–with some lasting up to 13 days–compared to an average of around 1.5 days in 1880. “These results add more evidence to the belief among climate scientists that Western Europe will experience some of the highest environmental and social impacts of climate change,” Della-Marta said.

    NCCR Climate
    http://www.nccr-climate.unibe.ch/

    Nice spin though, Count Iblis.

    Comment by Timothy Chase — 12 Aug 2007 @ 12:28 PM

  132. Gavin,

    In #126 above, I asked about the changes prior to the year 2000 when the error McIntyre found related to years 2000-2007. I was, of course, referring to changes to U.S. temp anomalies as that was the dataset that NASA changed. Your answer did not directly address why years prior to 2000 should be affected. Are you saying that in the corrected dataset NASA lumped in some other adjustments as well as the adjustments required to fix the error McIntyre found? Also, you linked to global temp anomalies which does not address the question. For US temp anomalies, the old version is found at http://web.archive.org/web/20060110100426/http://data.giss.nasa.gov/gistemp/graphs/Fig.D.txt and the new version is at http://data.giss.nasa.gov/gistemp/graphs/Fig.D.txt As you can tell by comparing the two, a large number of adjustments were made in years prior to 2000, but (as far as I know) no explanation for these changes has been made. Has NASA explained these changes somewhere so I can read more about them?

    [Response:Actually, that old version is from 2006. I can't say from this whether the changes were from the latest correction, or simply additions/corrections to the database from GHCN. If it is from the latest correction, it's because the GISS urban adjustments come after the step we are discussing now. Therefore those changes could propagate down the line. Since the adjustments are linear in nature, the further back in time you get the more they'll have an impact I suppose. - gavin]

    Comment by Ron Cram — 12 Aug 2007 @ 12:54 PM
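
    Anyone who wants to see exactly which years changed can simply diff the two Fig.D.txt files Ron links above. Here is a rough sketch of how one might do that, assuming each data line starts with a four-digit year followed by the annual anomaly (the actual file layout and header may differ slightly):

    import urllib.request

    OLD = ("http://web.archive.org/web/20060110100426/"
           "http://data.giss.nasa.gov/gistemp/graphs/Fig.D.txt")
    NEW = "http://data.giss.nasa.gov/gistemp/graphs/Fig.D.txt"

    def read_anomalies(url):
        """Parse 'year value' pairs, skipping any line that doesn't fit that shape."""
        values = {}
        with urllib.request.urlopen(url) as resp:
            for line in resp.read().decode("ascii", "ignore").splitlines():
                parts = line.split()
                if len(parts) >= 2 and parts[0].isdigit() and len(parts[0]) == 4:
                    try:
                        values[int(parts[0])] = float(parts[1])
                    except ValueError:
                        pass
        return values

    old, new = read_anomalies(OLD), read_anomalies(NEW)
    for year in sorted(set(old) & set(new)):
        if abs(old[year] - new[year]) > 0.005:       # report shifts larger than rounding
            print(year, old[year], "->", new[year])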

  133. Gavin, thanks for the links. I looked at Hansen’s paper on the urban correction/surface temperature (my first cursory look at a climate science paper… makes Ed Witten’s papers look terse). Seems pretty simple I agree (even simpler than Classical Mechanics, i.e., rocket science). Hope this is not too simplistic and that your models are predicting future trends accurately and verified repeatedly as physicists usually expect. Anyways, good luck and thanks for responding to curious strangers.

    Comment by Jay — 12 Aug 2007 @ 1:30 PM

  134. Gavin – response in #128 and earlier – I can’t tell whether you are being deliberately difficult or genuinely don’t understand some people’s concerns.

    Yes, the papers describe what the algorithms SHOULD do but does the ACTUAL CODE do exactly what you think it does? Clearly in one instance (at least), it did not.

    You seem to imply that the code is so simple that it does not need to be released, it only does A+B=C but if the actual code is released, anyone can verify that it correctly does A+B=C. Until the actual code and the basic station data that it processes are released, you are going to get some people wondering if there are more mistakes and they want to see exactly how corrections to individual data sets are applied. Science should be completely transparent and reproducible – so show people exactly how you arrive at the actual numbers. If the coding is fine then you are not vulnerable on that accusation – sure, people will then argue whether the UHI corrections (for example) are correctly applied or of the right magnitude but that is how science advances. If there are further coding problems then swallow your pride and thank whoever finds them for helping everyone understand the science better.

    Judith Curry makes excellent points in #69 – even if you dismiss others as skeptics or deniers, please listen to her!

    Comment by IL — 12 Aug 2007 @ 1:47 PM

  135. RE 101.

    Timothy is not practicing “deconstruction”. A deconstruction of my text would involve a variety of techniques to show how distinctions I used or binary oppositions I used were undermined by themselves in the text. Or perhaps how the implications of various tropes (figures of speech like metaphor) served to undermine the “stability” of the text, leaving one in a state of not knowing or not being able to fix or determine the meaning of the text. In short, deconstruction is a method of commentary that multiplies the meanings of texts and shows how they cannot be controlled by method.
    That’s the way Derrida explained it to me. Although I find it ironic that his anti-method has come to mean “any kind of critique”.

    Comment by steven mosher — 12 Aug 2007 @ 2:46 PM

  136. “Yes, the papers describe what the algorithms SHOULD do but does the ACTUAL CODE do exactly what you think it does?”

    That would be the purpose of repeating the experiment yourself. The code would not need to be released to allow another person the ability to verify it. Really not a hard concept.

    also, the above corrections were not code errors as far as I’ve read.

    Comment by ks — 12 Aug 2007 @ 3:14 PM

  137. The happiness of the denialists in this case is the same as the happiness from the Creationists/ID crowd when a part of evolution theory turns out to be incorrect or incomplete. They can use it to say that the scientists were wrong… This is actually a fantastic example of science at work. Somebody found the error, it was corrected. good.

    Comment by Mark UK — 12 Aug 2007 @ 3:58 PM

  138. #134: IL

    I’m all for as much openness as we can get, but:

    IL: how much software have *you* written, QA’d, documented, released, and made available *for free* (or close)? Have you done the work to make code portable among machines? Do you ship extensive test suites?

    Do you understand that all this costs money and time?

    Please make a case that you actually understand this well enough to have a non-naive discussion. For instance, I think a fine topic for RC would be:

    + start with the general issue of current science publishing that involves computer generated results. What’s the tradeoff between:
    - (one extreme) just publishing the results, like in the old days and
    - (other extreme) provide complete, portable, well-software-engineered source code, documentation, makefiles, extensive test suites, with code tested on a variety of platforms ranging from individual PCs (Windows, Macs, Linux, at least), through supercomputers.
    [This takes a lot of work, and not all science researchers have professional-grade software-engineering resources available. Should that be required by journals?]

    + apply this in the climate research world. How much effort does this take? What do people do?

    [There are serious arguments in other disciplines about similar issues, i.e., there are real issues about which reasonable, informed people can disagree.]

    My personal opinion is that GISS does a very good job on software engineering, release, and accessibility … but then I only have 40 years experience with such things, so maybe I’m still naive about it, and could certainly be convinced otherwise by informed arguments.

    IL: Assuming you are American, how much more taxes are you willing to pay to increase the resources for government agencies to put more of their code professionally online? There are lots of other government numbers generated by computers, and it is by no means obvious that spending $X more for GISS to do even better would be more generally useful than spending $X on agencies whose results are also important, but whose code is far less accessible.

    If you’re not American, you need to convince those of us who are that we should spend more money on this.

    For instance, total areas, out of the world’s 510,065,600 km^2:
    17,075,200 km^2 Russia (3.3%)
    9,984,670 km^2 Canada (1.96%)
    9,826,630 km^2 USA (1.93%)
    2,166,086 km^2 Greenland (0.42%)
    14,056,000 km^2 Arctic Ocean (2.76%)

    If I *really* wanted to improve the accuracy of *global* numbers, I’m not sure I’d spend most of my effort chasing USA numbers around.

    In particular, if I were Canadian, and I really wanted better numbers, I’d be chasing:
    http://climate.weatheroffice.ec.gc.ca/Welcome_e.html
    http://www.msc-smc.ec.gc.ca/ccrm/bulletin/national_e.cfm

    The latter says “spring temperatures” +1.5C over 60 years, i.e., ~.25C/decade. Is that accurate? Has the code been checked? Canada is slightly bigger than the USA and therefore carries slightly more weight in global calculations, although of course each are only ~2%. Canada has plenty of seriously-rural weather stations. Has any auditor visited them, lately? Russia is 3.3%, but Canada is probably easier to check, especially for Canadians.

    If one *really* wants to get more accurate calculations, Canada is slightly more important (by area) than the USA, although, of course, the numbers might not be to everyone’s taste, since climate science expects Canada to warm faster than the US.

    Note: none of this is intended to ding Canada: I actually care about Canadian temperatures and precipitation because we own ski condos in B.C. and pay taxes up there.

    Comment by John Mashey — 12 Aug 2007 @ 4:31 PM

  139. #134
    In my own field it is actually considered best to NOT release code, but give algorithms and data. The simple reason being that code is hard to read and mistakes in someone else’s code are easy to miss. Instead it is much better if the person wishing to verify a result writes his/her own code directly from the described algorithm. If the new program does not give the same result then either the algorithm is incorrect or one of the two programs is.
    Normally it is easier to write your own new code than to completely understand the other person’s code too.

    Comment by DavidU — 12 Aug 2007 @ 4:33 PM
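
    As a toy illustration of the cross-checking DavidU describes, here is the same least-squares trend computed two independent ways and compared. The data are made up; the point is only that two implementations written from the same algorithm description should agree to within rounding.

    import numpy as np

    rng = np.random.default_rng(42)
    years = np.arange(1975, 2007, dtype=float)
    temps = 0.017 * (years - 1975) + rng.normal(0, 0.1, years.size)   # synthetic annual anomalies

    # Implementation 1: library routine
    slope_a = np.polyfit(years, temps, 1)[0]

    # Implementation 2: the textbook formula, written out by hand
    x, y = years - years.mean(), temps - temps.mean()
    slope_b = (x * y).sum() / (x * x).sum()

    assert abs(slope_a - slope_b) < 1e-9, "the two implementations disagree"
    print(f"trend: {10 * slope_a:.3f} deg C per decade (both methods agree)")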

  140. steven mosher (#135) wrote:

    Timothy is not practicing “deconstruction.” A deconstruction of my text would involve a variety of techniques to show how distinctions I used or binary oppositions I used were undermined by themselves in the text. Or perhaps how the implications of various tropes (figures of speech like metaphor) served to undermine the “stability” of the text, leaving one in a state of not knowing or not being able to fix or determine the meaning of the text. In short, deconstruction is a method of commentary that multiplies the meanings of texts and shows how they cannot be controlled by method.

    That is hard deconstructionism – well-explained in synoptic form.

    From what I understand, soft deconstructionism may simply distinguish between meaning and significance, where the meaning of the text is independent of either the author or the reader, but is dependent upon a shared language, but where the significance will be dependent upon the reader’s context, which will include their historical context, the issues of the day, their values and standards. There will be ambiguities, but these can generally be resolved by reference to the larger context within the original text. But of course the significance, even for the author, may best be identified and explored by bringing in other elements.

    However, the methods of hard deconstructionism typically consist of tearing one element or another out of its original context so that the reader will no longer refer to the original text, but will instead focus on the secondary text’s elaborate schemes of overinterpretation through ambiguous, arcane and often highly idiosyncratic language where the interpreter seeks to impose his or her own often political agenda upon the original text, or alternatively, seeks to show that the written or even spoken word offers only the illusion of communicating the intent of the writer or speaker.

    Of course, if the latter were true, the hard deconstructionist would be entirely incapable of communicating it. What we are speaking of at this point is simply an elaborate attempt to “demonstrate” the self-referentially incoherent position of radical skepticism with respect to all intended meaning.

    Comment by Timothy Chase — 12 Aug 2007 @ 5:08 PM

    Timothy (56), sorry, but passing off a whole bunch of possible indications, most of which have measurement errors greater than the indicated trend, and most of which come with very mushy cause and effect, as an undeniable global truth is every bit as bad, scientifically, as skeptics (but not me) justifying our position because the AGW theory is not 100% proven in every aspect and detail and tied up in nice little packages with pretty bows. You forgot the “cherry trees blossomed earlier last year” as another undeniable “proof”.

    Comment by Rod B — 12 Aug 2007 @ 5:24 PM

  142. Re #139: DavidU — I agree that writing my own code is often easier than understanding someone else’s.

    However, when the two programs do not give the same answer, it is oft the case that both are wrong!

    Comment by David B. Benson — 12 Aug 2007 @ 5:26 PM

    To keep this in proper perspective, the area of the U.S. is 3.6 million square miles (9.3 million sq. km) and the surface of the globe is 197 billion (with a B) sq. mi. or 509.6 billion sq. km. The ratio is 1.8×10^-5. Too small for any global consequences. Global warming isn’t affected.

    We’ve learned once again that to err is human. Which is why peer review and constructive criticism are so important to the sciences, as well as in other areas of human endeavor.

    Comment by Lawrence Brown — 12 Aug 2007 @ 6:58 PM

  144. Re #138:

    First, for my credentials — over the years I’ve personally written somewhere on the order of 500,000 lines of code. I’ve also designed another several hundred thousand, reviewed several million, and QA’d some unknown amount. There are a lot of people here who use code I’ve written over the years. Some open source, some proprietary.

    ANYWAY, it doesn’t cost anything at all to open source software, except some FTP bandwidth, and it greatly improves both the quality and functionality of the code in the long run. And frankly, if the code is so hard to comprehend that no one else with programming skills can read it (I have 28 years in the industry), the programmers need to be fired.

    If someone were asking me my opinion, I’d say that the people who are spending the money don’t want to lose their budgets for either software developers or giant computers. That’s one of the biggest reasons software is kept closed.

    Comment by FurryCatHerder — 12 Aug 2007 @ 7:32 PM

  145. I want to contrast the stand-up attitude of Ruedy, the good sense of Dr. Curry and Gavin’s openness with something quite different.

    Here is the start of an email, written by a principal in this matter. I will not name him.

    “Recently it was realized that the monthly more-or-less-automatic updates of our global
    temperature analysis (http://pubs.giss.nasa.gov/abstracts/2001/Hansen_etal.html) had a flaw in
    the U.S. data. In that (2001) update of the analysis method (originally published in our 1981
    Science paper – http://pubs.giss.nasa.gov/abstracts/1981/Hansen_etal.html) we included
    improvements that NOAA had made in station records in the U.S., their corrections being based
    mainly on station-by-station information about station movement, change of time-of-day at
    which max-min are recorded, etc.”

    It was realized. Mistakes were made.

    Some examples from other areas and how they responded:

    http://www.the-scientist.com/news/display/39805/

    http://pineda-krch.blogspot.com/2007/06/show-me-code.html

    Quoting from the latter to show the guy at Columbia how it is supposed to be done.

    “Dear Colleagues, This is to inform you that we must retract Hall, B.G. and S. Salipante. 2007. Measures of clade confidence do not correlate with accuracy of phylogenetic Trees. PLoS Comp. Biol 3: (3) e51. As a result of a bug in the Perl script used to compare estimated trees with true trees, the clade confidence measures were sometimes associated with the incorrect clades. The error was detected by the sharp eye of Professor Sarah P. Otto of the University of British Columbia. She noticed a discrepancy between the example tree in Figure 1B and the results reported for the gene nuoK in Table 1. At her request I sent her all ten nuoK Bayesian trees. She painstakingly did a manual comparison of those trees with the true trees and concluded that for that data set there was a strong correlation between clade confidence and the probability of a clade being true. She suggested to me the possibility of a bug in the Perl script. Dr. Otto put in considerable effort, and I want to acknowledge the generosity of that effort.”

    Comment by steven mosher — 12 Aug 2007 @ 7:38 PM

  146. For those interested, I’ve posted a comparison of GISS data before vs after applying the recent correction on my blog.

    Comment by tamino — 12 Aug 2007 @ 7:40 PM

  147. Land area Canada 9.09 million square km
    Land area US 9.16 million square km

    Comment by sam — 12 Aug 2007 @ 8:09 PM

  148. #120
    Thanks Timothy for your response.
    My goal is not to debate every point on your list, but rather to assert that one can be an AGW skeptic and still of sound mind. You presume that someone with a different view on the merit of this science must have some sinister motive – this is a polarising and simplistic view.

    In the scientific community I am part of, members would not mind me proposing the following refutations to your first two points, as it would provide them an opportunity to correct my misunderstanding of the underlying ideas.

    1. We have surface measurements in the United States which show an accelerating trend towards higher temperatures.

    If I low pass filter the US surface temperature time series and examine the derivative of the signal, I see no evidence for this accelerating trend. The rate of recent increase (and length of its duration) is comparable to earlier periods in the record.

    2. These are temperature measurements being taken by planes and satellites, and they show that the troposphere is warming – just as we would expect.

    I believe the latest data does not support this point (for the tropics) and certainly does not match climate modelling predictions.

    Christy J. R., W. B. Norris, R. W. Spencer, J. J. Hnilo (2007), Tropospheric temperature change since 1979 from tropical radiosonde and satellite measurements, J. Geophys. Res., 112, D06102, doi:10.1029/2005JD006881

    “Several comparisons are consistent with a 26-year trend and error estimate for the UAH LT product for the full tropics of +0.05 ± 0.07, which is very likely less than the tropical surface trend of +0.13 K decade^-1.”

    regards, Steve

    Comment by Steven — 12 Aug 2007 @ 8:16 PM

  149. Re 140: Thank you, Timothy Well said.

    Comment by Ron Taylor — 12 Aug 2007 @ 8:44 PM

  150. As the former Mayor of New York once said, “When I make a mistake it’s a beaut”. The Surface Area of Earth is
    197 million square miles (with an M). What are three orders of magnitude among friends? It still holds that global climate figures aren’t affected.

    Comment by Lawrence Brown — 12 Aug 2007 @ 8:45 PM

  151. Re: #143 (Lawrence Brown)

    Sorry, but the area of the globe is 196 million (with an “m”) square miles, about 510 million (with an “m”) square km.

    To err is indeed human.

    Comment by tamino — 12 Aug 2007 @ 9:07 PM

  152. re: #144 F.C.H.

    Well, I said “there are real issues about which reasonable, informed people can disagree.” [FCH at least has relevant expertise, which is not clear for some people expressing strong opinions.]

    After all, “open source” is not new in computing, having started no later than the early 1950s with John von Neumann’s distribution of the IAS plans, and continuing exchange of code via user groups (like SHARE & DECUS) in the 1950s/1960s onward, the UNIX dispersion of the 1970s, the related Software Tools User Group of the 1980s, and lately Linux, Apache, etc, etc.

    Technical users (especially) have long shared code, but thankfully it has indeed gotten a lot easier to share, given the Internet and WWW. Making tapes gets old [at Penn State, in the early 1970s, we made hundreds of tapes of code I'd written, and that actually cost money] … but it cost more to design professional-grade distributions, and if there hadn’t been some grant money to help, we wouldn’t have done it. This was for code we knew people actually wanted to *use* daily, not just maybe look at to see if they could find bugs.

    The original UNIX distributions from Bell Labs originally said ~ “No warranty, no support, and don’t call us with bugs.” … but people knew perfectly well that we would in fact spend some time informally supporting it. Of course, we had monopoly money to play with…

    Anyway, the issue is that in organizations in which researchers are supposed to generate research, it is a legitimate argument to figure out
    - what level of source/data/test/documentation availability is appropriate,
    - how much time researchers should spend responding to questions,
    - what level of availability is required to publish in various journals
    - and the tradeoff between getting results out in a timely fashion, with normal peer review (and, consequently, some bugs left unfound), versus doing excruciatingly-long testing, outside code reviews, etc, etc.

    I don’t think there is any one right answer. I do think it is important for science to figure out answers, and if we raise the bar, we’d better figure out how to raise the funding, because it usually costs money. Sometimes it’s worth it, sometimes it’s not.

    When I was at SGI, we went through a major exercise to figure out what we had to do to *usefully, professionally* make certain pieces of internal code open source,[not just stick them up on an FTP site], i.e., including the support found to be necessary.

    Many of us wanted to do this, but we found that it simply wasn’t free, so we did it (spending a fair amount of money) for some things (like XFS -> Linux), and we didn’t for others, like Showcase. We also spent money on things like supporting Samba [i.e., hiring Jeremy Allison for a while to do that].

    I would be delighted if every scientific paper that used computers offered a professional-grade distribution … but I can’t make myself believe that would be cost-effective. I have many times been in the position of doing budgets for projects where there was a choice between just building/using software and doing the work to make it usefully accessible elsewhere, and I’ve opted for the latter as often as possible, but it *was not* free.

    Others may legitimately have other opinions, although I’d hope people will say why they have relevant experience (as FCH did).

    Comment by John Mashey — 12 Aug 2007 @ 10:59 PM

  153. Roger Pielke Sr and 14 others published an article in the latest BAMS noting problems with USHCN adjustment procedures beyond TOBS. Is it really possible to correct for station moves…especially in areas of complex terrain? Is it really possible to correct for UHI when land use changes around the station have not been documented very well, if at all? One flaw in the GISS adjustment code has been discovered. How do we know if there are more flaws? Gavin says that their methodology has been well documented, however, how do we know if that methodology was implemented correctly in the code? I understand that the U.S. has one of the best climate networks in the world (and better history information). What about Africa and Asia? It is clear to me that we don’t know what is UHI and what is a CO2 signal in our temperature records. That said, I know that the climate over the Northern Rockies (where I live) is changing. Our winters are milder and our snowpack hasn’t been doing well. It could be all the junk that China is putting in the air is affecting the weather patterns over the Pacific. Eventually that affects patterns over the Northern Rockies. Others say that this pattern change is CO2 induced. Neither side has yet to prove their case. I know the GISS people are high on CO2 causation, but they lose credibility when they keep their code secret (mistakes and all) from those who want to check their work.

    Comment by VirgilM — 12 Aug 2007 @ 11:21 PM

  154. This is how Stephen Mosher says “it should be done…”

    She painstakingly did a manual comparison of those trees with the true trees and concluded that for that data set there was a strong correlation between clade confidence and the probability of a clade being true. She suggested to me the possibility of a bug in the Perl script. Dr. Otto put in considerable effort, and I want to acknowledge the generosity of that effort

    So Stephen apparently thinks that it’s OK that the researchers didn’t open-source the Perl script, and OK that the researcher uncovering the error had to do a lot of work.

    On the other hand, NASA is Evil! Evil! Evil! for not having open sourced their code, and Evil! Evil! Evil! for writing an impersonal and slightly ungrammatical e-mail that correctly described what happened.

    Comment by dhogaza — 13 Aug 2007 @ 1:06 AM

  155. #147
    Land area Canada 9,984,670 square kilometers
    Land area US 9,629,091 square kilometers

    #144
    One thing you need to keep in mind here is that the “programmers” in many scientific projects are not professional programmers, but rather just one of the climatologists/biologists/chemists… in the project who knows how to program, and who sometimes learnt it 20 years ago and is still using the same programming language as back then. People still write code in line-numbered Fortran 77!
    These programs usually work just fine, they know how to write working code, but to someone like the two of us who knows programming as a subject of its own it often looks hair-raising.
    Hiring a professional programmer to do the work instead is normally not even close to realistic within a typical project budget, so there really isn’t anyone around to fire for their hard to read code.

    #142
    I agree with your point too. I actually often write two separate codes for doing the same job, in different languages, and don’t trust them until they both give the same results.

    Comment by DavidU — 13 Aug 2007 @ 2:45 AM

  156. Timothy. RE 98.

    Apparently the blog ate my reply (same thing happened over at Watts’ place, so I blame my WiFi).

    I’ll give a condensed version:

    You wrote:
    “As such, they are responding to sources of contamination which may not have been regarded as that important – prior to 1929. Not only does this have little to do with current trends, but it demonstrates a concern for accuracy. And as a matter of fact, the elimination of such contamination tends to reduce the apparent rise in temperatures over the twentieth century rather than magnify it.”

    Actually they don’t show contamination; they hypothesize it to explain the record. They took cooling out.

    My main issue with that paragraph in Hansen 2001 is the lack of supporting data and analysis in the text or figures. If you like I will detail all the missing parts. But look for yourself. Find the ANALYSIS in the text. Not a description of the analysis. Simple example:
    which sites in the “region” were the 5 sites in question compared to?

    Next:

    “In an earlier thread you claimed to be interested in accuracy and were attacking urban sites for their presumed contamination by the urban heat island effect. We of course pointed that you get virtually the same warming trend whether you use all stations or just rural stations – which you didn’t even seem to acknowledge.”

    Let me schematize the UHI argument for you using Peterson 2003, which Hansen cites (as submitted) and which Parker quotes.

    1. UHI exists (I’ll link studies if you like),
    but see Peterson’s FIRST SENTENCE.
    2. We expect to see differences between Rural and Urban stations.
    3. These differences are NOT observed ( peterson, Parker, Hansen)in the climate network
    4. THEREFORE, Urban stations must be well sited in COOL PARKS.

    From Parker:
    Furthermore, Peterson (2003) found no statistically
    significant impact of urbanization in an analysis
    of 289 stations in 40 clusters in the contiguous United
    States, after the influences of elevation, latitude, time of
    observation, and instrumentation had been accounted
    for. One possible reason for this finding was that many
    “urban” observations are likely to be made in cool
    parks, to conform to standards for siting of stations.

    From Peterson’s conclusion:

    “Therefore, if a station is located within
    a park, it would be expected to report cooler temperatures
    than the industrial sections experience. But do
    the urban meteorological observing stations tend to be
    located in parks or gardens? The official National
    Weather Service guidelines for nonairport stations state
    that an observing shelter should be ‘‘no closer than four times the height of any obstruction (tree, fence, building, etc.)’’ and ‘‘it should be at least 100 feet from any paved or concrete surface’’ (Observing Systems Branch 1989). If a station meets these guidelines or even if any attempt to come close to these guidelines was made, it is clear that a station would be far more likely to be located in a park cool island than an industrial hot spot. ”

    SO, you get the argument: we expect a difference, we find no difference, THEREFORE urban sites are in cool parks.

    Simple Question: How do you test this last part?
    You look. Go to Surfacestations. Look at Tucson (parking lot), Eureka (on a roof), Santa Rosa (on a roof),
    Paso Robles (on concrete next to a freeway), and Newport Beach (on a roof).

    So now the argument looks like this.

    1. UHI exists:
    2. We expect to see differences between Rural and Urban stations.
    3. These differences are NOT observed
    4. Perhaps, Urban stations are well sited in COOL PARKS.
    5. We haven’t found many if any Urban sites in a cool park.
    6. Perhaps the Rural sites are corrupted at the MICROsite level. Things not visible on nightlights,
    things like nearby asphalt, buildings, wind shelter,
    Non compliant things.

    So, how to test #6? Look. Take a look at Tahoe City and Happy Camp.

    Essentially it’s the same logic as Hansen’s. He saw weird cooling, assumed the sites must have something wrong with them, and hypothesized contamination. We see similar weirdness (no difference between urban and rural)
    and hypothesize that rural sites have microsite issues.
    Then we take the extra step of LOOKING. Peterson never checked his supposition of COOL PARKS. We did. After checking 289 sites we haven’t found an urban site in a cool park. In parking lots? Yes. On rooftops? Yes. On utility poles? Yes.

    Now, I would not call the matter closed. It’s never closed. If you like pick up a camera and go find a cool park site.

    Comment by steven mosher — 13 Aug 2007 @ 5:58 AM

  157. RE 56: Long Post timothy.

    I’ll hit a few key points:

    “Sounds like Quine to me.

    Actually he’s not all bad – at least the bits he gets from Pierre Duhem.”

    Really? Quine’s Two Dogmas was published in 1951 and Duhem’s work was not allowed to be published until 1954.
    ZING!

    Next:

    “Now what about the claim that the world is at least five seconds old.
    I certainly think it is, but you may regard this as nothing more than a “theory”
    which is “underdetermined” by the “data,” where the data consists of the experience
    of “memories” and anything else which I might claim as evidence for an older world.
    But at this point we aren’t even talking about observation per se – we are talking about memory.”

    Quine would say it’s a theory. A well supported theory. One that would be very complicated to
    give up.

    NEXT:

    “So does this mean that when I look at an iceberg floating off,
    it might actually not be floating away? ”

    No. There is a choice. A theory of human perception ( that explains perceptual illusions) Or a theory
    of human delusion. The first is more useful than the second. Both are underdetermined.

    “Does this mean that if I am looking at the temperature displayed by a thermometer
    I am holding is rising, that the temperature might not be rising? ”

    No. There is a choice, this time three options. The last being “instruments accurately record physical quantities.”
    Again, Quine would say they have the same epistemic status. Some are more useful at things like prediction.

    Next:

    “If so, I would begin to wonder whether you are engaging in the philosophy of science
    rather than in some freshman philosophy bullsession.
    I would also have to wonder just how desperate the “global warming skeptics”
    have gotten that they find it necessary to appeal to this kind of reasoning.”

    I think someone who didn’t even know that Quine published before Duhem needs to actually read Quine.
    And you will find that confirmational holism makes one more amenable to AGW acceptance.

    NEXT:

    “Falsification can always be avoided by appealing to other data ( sea ice, SST, species migration, etc etc etc).

    This isn’t the way that I normally hear it.

    From what I understand, falsification can always be avoided by appealing to another hypothesis.”

    Both ways actually.

    NEXT:

    “But lets focus on the phrase “appealing to other data.” …..

    A hypothesis or theory which is justified by multiple lines of investigation
    is generally justified to a far greater degree than it would be if it were simply
    justified by any one line of investigation considered in isolation.”

    Yes, just as quine says.

    NEXT:

    “Now the vast majority of the scientific community has accepted the view that:
    1. The earth is getting warmer;
    2. greenhouse gases are largely responsible for this; and,
    3. That what has been raising the level of greenhouse gases are human activities.

    You on the other hand are still stuck on (1). Not dogmatically denying it,
    I understand, but simply doubting it with your healthy, “scientific” skepticism.”

    Stuck on #1. Presently I am looking at #1. Have to start somewhere. Now, force me to decide, and I will say
    yes, the earth is probably getting warmer. That “probably” needs to be quantified and independently confirmed, and its magnitude estimated.

    NEXT:

    So in the interest of science, lets look at the evidence:

    1. We have surface measurements in the United States which show an accelerating trend towards higher temperatures.”

    Hmm. Which ten year trend shows a higher rate: 1997-2006 ( last ten) or say (1927-1936)?

    Lets start with that one. I’ll address the other 19 in due course, but first things first. The simple task
    of measuring air temps.

    Comment by steven mosher — 13 Aug 2007 @ 6:44 AM

  158. [[If I low pass filter the US surface temperature time series and examine the derivative of the signal, I see no evidence for this accelerating trend.]]

    Why didn’t you just do a linear regression like everybody else in the world? You’re essentially admitting that you had to distort the data to get the result you wanted.

    Comment by Barton Paul Levenson — 13 Aug 2007 @ 6:49 AM

  159. Why is there an assumption by the deniers that the US temp. data is the best?
    Is this true? I find that my regional NWS daily temp figures are often several degrees under what my home temp. gauge is recording.

    Comment by rick — 13 Aug 2007 @ 8:23 AM

  160. RE 154.

    You missed two points and invented a third.

    1. The point was not the grammar. The point was attribution. Ruedy wrote a fine mail thanking SteveMc
    for his work. The example I cited showed proper attribution for finding an error.
    Like so, for the slow: “I would like to thank Person X for finding…”

    2 Nothing was said about NASA. I did not name the person who wrote the mail. And as far as the
    accuracy goes you need to do some more reading.

    3. Who said evil? You did. If I had to use a word it would be obfuscating, or opaque, or lacking grace.

    Some people like Ruedy, Dr. Curry, and Gavin are gracious. Hell, Gavin puts up with me.
    Other folks are less gracious, myself included. Let’s leave it at that.

    Comment by steven mosher — 13 Aug 2007 @ 8:47 AM

  161. Steven (#148) wrote:

    #120
    Thanks Timothy for your response.

    My goal is not to debate every point on your list, but rather to assert that one can be an AGW skeptic and still of sound mind. You presume that someone with a different view on the merit of this science must have some sinister motive – this is a polarising and simplistic view.

    In the scientific community I am part of, members would not mind me proposing the following refutations to your first two points, as it would provide them an opportunity to correct my misunderstanding of the underlying ideas.

    I am not a member of the scientific community – simply a philosophy major turned programmer who is on the outside – and with little practice in statistics. As such, there are certainly people that I would defer to, particularly in this area. Tamino is one obvious case – as I am impressed with his objectivity and skill.

    1. We have surface measurements in the United States which show an accelerating trend towards higher temperatures.

    If I low pass filter the US surface temperature time series and examine the derivative of the signal, I see no evidence for this accelerating trend. The rate of recent increase (and length of its duration) is comparable to earlier periods in the record.

    Going off memory at the moment, I remember that the later trend of perhaps the last fifteen years was higher than the trend of the last thirty, but I believe it was not statistically significant, and I had originally mentioned as much. There may be a valid approach which can demonstrate statistical significance. For example, one might perform an analysis in terms of the monthly averages and filter out the annual cyclical behavior, but I do not know the results of such an exercise. At present I would have to agree with you on this point and I will omit this statement in the future until I know otherwise.

    (One point: even if it is demonstrated at some point that there is such an “acceleration,” I would expect it to be temporary, with a new higher slope being established until such time as we are able to reduce the geometric increase in the rate at which we are emitting carbon dioxide into the atmosphere. We have actually done worse in the past seven years than the previous ten, from what I understand.)

    Steven continues:

    2. These are temperature measurements being taken by planes and satellites, and they show that the troposphere is warming – just as we would expect.

    I believe the latest data does not support this point (for the tropics) and certainly does not match climate modelling predictions.

    There are other obvious examples. What is happening in the arctic is progressing much more rapidly than any of the models would predict – and I suspect that this will lead to more rapid climate change at a global level than models would predict. Alternatively, they are having some difficulty modeling the Indian Monsoon and a small western region of the Tibetan Plateau where glaciers are currently increasing in size as the result of increased snowfall.

    The ability to observe the effect does not automatically imply the ability to model it – insofar as models have to be grounded in the actual physics, not the mere ability to identify trends through statistical analysis. I am sure there are others.

    But the B scenario of Hansen’s 1988 projections performed admirably well in terms of global temperature – and it was primitive by today’s standards, for example, in the fact that it was based upon a single run. But overall, they are doing quite well, for example, in terms of the modeling of ocean circulation as of roughly 2001. The new GISS model is performing far better at modeling clouds, although there continue to be surprises in this area. For example, the recently discovered twilight zone where an invisible zone of higher water vapor extends for several kilometers beyond the visible edges of clouds. Likewise the modeling of both aerosols and the carbon cycle are at fairly early stages of development.

    But I strongly suspect that Hadley’s new approach involving initialization with real-world data, including natural variability, will become the norm – which should improve the modeling of climate at a variety of levels. Improved resolution will likewise improve model performance. We were able to model hurricane formation only with the creation of the NEC Earth Simulator – but then in 2003 projected increased cyclone formation in the South Atlantic, and Catarina formed in late March of 2004.

    *

    In any case, I genuinely appreciate the correction regarding temperature trends in the United States and will keep this in mind from now on.

    Comment by Timothy Chase — 13 Aug 2007 @ 8:49 AM

  162. Steven Mosher,

    I will have to get back to most recent posts of 156 and 157 a little later. Currently I am about to go off to work. I hope you will understand. But if someone else would like to respond in the interim, I most certainly wouldn’t have a problem with this, although I would likely still personally respond later.

    Comment by Timothy Chase — 13 Aug 2007 @ 9:17 AM

  163. Re: #13, #18, #21, #22, #27, #30, #33, #41, #74, #79, #135, #145, #156, #157, #160

    Steven,

    Having read each of your posts, in which you use some fancy, uncommon words and refer to theories which I have not seen expressed before (which of course means nothing more than this: you have some education that I don’t have), I am left with one question: How, applying the theories you espouse, can you ever come to believe anything?

    That is a serious question, and I guess we can approach it from the other side as well: What would it take for you to accept that this long list of unusual climate observations are related, and that they describe a coherent theory which is a) highly plausible and b) much more plausible than any other theory which has been applied to the same set of observations?

    As Gavin said, in so many words: how do you explain these observations without a human component?

    Further: academic exercises are all well and good (and I mean exactly that: they are essential and useful); but at what point do we put down the textbooks and notebooks and start trying to enact policies based on our observations?

    Comment by Walt Bennett — 13 Aug 2007 @ 9:50 AM

  164. Gavin started this topic with the sentence: “Another week, another ado over nothing.” How fitting! Now, 162 comments and 20 blog entries later, one has to wonder at most commentators’ ability to get worked up about something that is supposed to be nothing. But I suppose their zeal will not be diminished until the last denialist, skeptic, heretic or dissident has been silenced. This “ado” seems to be part of a recurring pattern in human behaviour.

    Comment by Dodo — 13 Aug 2007 @ 10:00 AM

  165. Freeman Dyson has written a nice article in “Edge” expressing a heresy that GW climate models may be wrong. I think he makes too much of climate models and ignores other evidence, but there is no denying he is still a very smart guy.

    http://www.edge.org/3rd_culture/dysonf07/dysonf07_index.html

    Any comments on his thoughts?

    [Response: Michael Tobis has a good commentary. - gavin]

    Comment by Alex Tolley — 13 Aug 2007 @ 10:25 AM

  166. The point was not the grammer. The point was attribution.

    My understanding is that McIntyre pointed the finger at something that was wrong, but that the NASA group actually figured out what it was and did the corrections.

    So the scenario’s a bit different than in the letter you cited, where the woman in question did a huge amount of work and figured out in detail what was wrong.

    And attribution has nothing to do with correctness, anyway.

    And my point about the source code and work involved stands:

    Note that she computed the phylogenetic trees by hand, not using the script, and deduced that there must be an error in the script.

    Publishing algorithms, not code, is SOP in many fields, climate science is not exceptional in this regard.

    Comment by dhogaza — 13 Aug 2007 @ 10:46 AM

  167. Re: #158 (BPL)

    [[If I low pass filter the US surface temperature time series and examine the derivative of the signal, I see no evidence for this accelerating trend.]]

    Why didn’t you just do a linear regression like everybody else in the world? You’re essentially admitting that you had to distort the data to get the result you wanted.

    I disagree. A low-pass filter is a standard, and rather reliable, method of removing fast fluctuations from data, revealing the slower “trend”-like signal. I certainly wouldn’t characterize it as an attempt to “distort the data.”

    Unfortunately I don’t have monthly data for the lower-48 U.S. states, just annual averages. But looking at that data, I too don’t see any statistically significant acceleration in recent years (since 1975) in lower-48 U.S. temperatures.

    Comment by tamino — 13 Aug 2007 @ 12:01 PM
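
    A small sketch of the two approaches being argued over in this exchange, run on made-up annual anomalies with a steady (non-accelerating) trend: smooth the series with a simple moving-average low-pass filter and look at its year-to-year differences, and separately fit ordinary linear regressions to the early and late halves of the record. The numbers and window length here are arbitrary; this only shows the mechanics, not the answer for the real data.

    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(1950, 2007)
    temps = 0.015 * (years - 1950) + rng.normal(0, 0.15, years.size)   # steady warming plus noise

    # (1) moving-average low-pass filter, then the derivative of the smoothed curve
    window = 11
    smooth = np.convolve(temps, np.ones(window) / window, mode="valid")
    derivative = np.diff(smooth)                      # deg per year of the smoothed series
    print("smoothed slope, first vs last decade (deg/decade):",
          10 * derivative[:10].mean(), 10 * derivative[-10:].mean())

    # (2) ordinary linear regression over the two halves of the record
    mid = years.size // 2
    early = 10 * np.polyfit(years[:mid], temps[:mid], 1)[0]
    late = 10 * np.polyfit(years[mid:], temps[mid:], 1)[0]
    print("regression trends, early vs late half (deg/decade):", early, late)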

  168. I don’t want to start a tempest in a teapot, but what is the deal with Hansen’s email?

    Comment by captdallas2 — 13 Aug 2007 @ 12:50 PM

  169. If the lower 48 US states have not been significantly warming (is that what people here are saying), then there must be other places that ARE significantly warming, if the global warming idea is accurate. Isn’t that the whole idea behind “the global average temperature” increasing? Average means that some places might stay the same & some might even get colder, but on average the whole data set for the entire world shows a warming trend. (And, it might be that the places currently staying the same or getting colder might eventually show a warming trend, if the problem continues.)

    And if there’s lots of quibbling about how correct the data is, can we also use some other measures of temperature change, such as ice melting. I think the net melting in the Arctic, and the melting in the Old and New Worlds that has allowed some archaeological finds from thousands of years ago makes a good case that the world is warming. Not to mention the warming of the oceans.

    Then there is a good theory to go along with this — the greenhouse effect — which not only explains the natural warmer than expected temps on earth, but also the colder temps on Mars, and much warmer temps on Venus.

    It seems to me it doesn’t take a rocket scientist or fancy statistician to come to grips with the fact that the world “ON AVERAGE” is warming. I assume that smart people like Steve McIntyre aren’t really suggesting there is no global warming — an idea that would go against this other evidence and a well-established theory.

    I think it’s wonderful that scientists are able to tell us what is happening, so that hopefully we can solve this problem before it gets really bad.

    Comment by Lynn Vincentnathan — 13 Aug 2007 @ 12:57 PM

  170. gavin: In the response to the second comment you refer to “only the result”. But this result is a result after adjustments due to a factor which in “climate science” cools the earth today and therefore must be compensated for, but exactly what is done isn’t published. It’s also a result after other not published steps taken.

    E.g., Hansen showed in 1999 that the warmest year on the US record was 1934, but two years later he had a very different temperature record, with higher temperatures in the last decades and lower ones earlier in the 20th century. The data is plotted in both charts here:

    http://www.coyoteblog.com/coyote_blog/2007/08/a-temperature-a.html

    The problem is we only get data after the climate scientists have secretly adjusted them. Also we don’t know which non-rural stations are excluded due to the rules described on the NASA GISS data page (rejection of stations not within the long-term trend). We don’t have raw station data or algorithms for selection or adjustment.

    [edit]

    [Response: Presumably that is a new definition for the word secret? As in ‘secretly’ publishing all the adjustments and consequences in the open peer reviewed literature? Please read the references: Hansen et al 2001. – gavin]

    Comment by Magnus Andersson — 13 Aug 2007 @ 12:59 PM

  171. re: 9. “When “deniers” make a claim (like yours), it’s based on the lack of serious research or considerable effort, and if an error is pointed out, it’s excused away or triggers a scrambling attempt to change the subject.”

    Indeed. Or running away entirely, unable to admit being in error intentionally or not. In fact, we see it here by various anti-science layman skeptics/deniers who throw out any possible, usually unpublished comment (e.g. the surfacestations.org dribble) about GW, desperately trying to make it stick. And who keep repeating it as if repetition somehow magically makes it true, and to heck with literally thousands of climate researchers who publish in peer-reviewed journals. Or who make absurd assertions without any data or science to support their statements, yet when confronted still do not make an apparent effort to learn why they were wrong.

    Comment by Dan — 13 Aug 2007 @ 1:18 PM

  172. If the lower 48 US states have not been significantly warming

    But they have been significantly warming, no one in his right mind disputes that. The discussion has been whether the warming has been accelerating, and to my mind, over the last 30 years or so it hasn’t — the warming has been steady rather than getting faster.

    The rate of warming seen in surface temperature data (since the correction to USHCN data and update of NASA GISS) since 1979 (the beginning of satellite temperature data) matches almost exactly the rate seen in satellite data for the lower-48 states of the U.S. This is illustrated here.

    Comment by tamino — 13 Aug 2007 @ 1:20 PM

  173. Re 121
    I am so glad to hear that the DUSTBOWL will not form until 2080.

    http://www.drought.unl.edu/dm/monitor.html and the state of the Sierra Snow Pack had me worried.

    On the other hand, just as the Arctic sea ice is melting faster than the models predicted, perhaps North America may be drying out faster than the models predicted?

    I do not think that one dry year makes a drought or a trend, but any noticeable change in the heat distribution of the Northern Hemisphere should put us on alert for follow-on effects.

    Prudent men should consider and prepare.

    Comment by Aaron Lewis — 13 Aug 2007 @ 1:26 PM

  174. #169
    As I mentioned in an earlier post, I am in Sweden right now, and here they are noticing the warming in ways that they don’t need any instruments for.
    E.g. the news here has recently reported that wild oaks are now spreading in an area about 200 miles north of where oak could be found 15 years ago. There have been oaks in that area before, a few thousand years ago.
    This is near a town called Örnsköldsvik; you can find it in Google Earth, about 63 degrees north, which is quite far north even considering that they are warmed by the Gulf Stream here.

    Comment by DavidU — 13 Aug 2007 @ 1:29 PM

  175. RE 166.

    You missed the points again. I’ll make it simple for you.

    Some have claimed the algorithms are documented in the text.

    1. They are not. They are generically described.
    2. IF THEY WERE documented in the text, the issue remains:
    3. Does the code implement the algorithm as designed.

    [edit]

    [Response: 'Algorithms' are just descriptions in various flavours of formality. If there is a question of whether the code correctly implements the algorithm it will be apparent if someone undertakes to encode the same algorithm and yet comes up with a different answer. Absent that, there is no question mark over the code. So if I generically describe a step as 'we then add A and B to get C', there is only a point in checking the code if you independently add A and B and get D. That kind of replication is necessary and welcome. With that, all parties will benefit. Simple demands to see all and every piece of code involved in an analysis presumably in the hope that you'll spot the line where it says 'Fix data in line with our political preference' are too expansive and unfocused to be useful science. - gavin]

    Comment by steven mosher — 13 Aug 2007 @ 1:42 PM

  176. Re #169: [I think it’s wonderful that scientists are able to tell us what is happening...]

    What’s even more wonderful is not that climate science can tell us what is happening, but that it told us, years ago, that it would happen. Isn’t that what science is supposed to be about: not just explanation, but prediction?

    I’m going to harp some more, but there’s a quote in #157 that’s illustrative of the backwards viewpoint some people have taken:

    “Now the vast majority of the scientific community has accepted the view that:
    1. The earth is getting warmer;
    2. greenhouse gases are largely responsible for this; and,
    3. That what has been raising the level of greenhouse gases are human activities.”

    A more accurate statement would be something like

    The vast majority of the scientific community has accepted the view that:

    1) The amount of CO2 in the atmosphere has risen considerably during the industrial period, and many lines of evidence show that the increase is human-caused.

    2) Theory, well supported by experiment, says that increased atmospheric CO2 will trap more IR, leading to warmer temperatures.

    3) There are many different lines of evidence that show that the Earth has in fact warmed by an amount consistent with theory.

    This makes all the fuss over a minor adjustment to a few points in one data set (of many) supporting one line of evidence (again, of many) seem rather out of proportion, doesn’t it?

    Comment by James — 13 Aug 2007 @ 2:24 PM

  177. I have this doubt:
    If it is man-made CO2 that pushes global warming, naively I would expect that the US should be among the world regions in which the warming is MORE severely felt.
    But now, after the NASA revision, it seems that the US is among the places LESS affected by long-term warming…

    Is there something to ponder here, or is everything perfectly clear?

    Comment by Mario — 13 Aug 2007 @ 2:47 PM

  178. Gavin, could you address your fallibility as a scientist? Could you be wrong? Have you ever been wrong? How much do you really know? I know you trust in your models and conclusions, but I have learned never to trust anyone claiming to have the final word (on anything). I personally think your position on global warming is more rational than the deniers’ claims, but I would be a fool to put any money on your predictions (if I were a gambling man). Also could you please address your fallibility without using “if we wait any longer it will be too late” or without using the word ‘consensus’? Thanks much.

    [Response: Moi, infallible? Hardly. I've made my fair share of coding errors and the like. But the whole point is that you shouldn't pay any particular attention to me or any individual scientist - we all might have prejudices and biases that might colour what we say. However, assessment bodies like the IPCC and National Academies are much more impartial and since they go through very rigorous peer review, the chances that one person's biases end up in the final product are very small. So, don't listen to me. Read the IPCC report instead. - gavin]

    Comment by Michael — 13 Aug 2007 @ 2:48 PM

  179. Re: #177 (Mario)

    I think your primary misconception is that CO2 emissions tend to have a stronger impact on the region in which they’re emitted. The truth is that CO2 is a “well-mixed” gas which stays in the atmosphere a long time, so it quickly (on a timescale of about 7 or 8 months) distributes itself around the globe. Hence CO2 emissions tend to affect the entire globe, rather than preferentially affecting the region in which they’re emitted.

    Sulfate aerosols, on the other hand, *do* tend to preferentially impact the region in which they’re emitted, because they have such a short atmospheric lifetime. These have a cooling effect, and historically have been very strong in the U.S. This may partially explain why the U.S. has shown less warming over the century than the globe as a whole.

    Finally, during the “modern global warming era” — 1975 to the present — as far as I can tell the U.S. has warmed just as fast as the rest of the world.

    Comment by tamino — 13 Aug 2007 @ 2:48 PM

  180. tamino (#167) wrote:

    But they have been significantly warming, no one in his right mind disputes that. The discussion has been whether the warming has been accelerating, and to my mind, over the last 30 years or so it hasn’t — the warming has been steady rather than getting faster.

    I wouldn’t be at all surprised if statistics can show this in the case of global temperatures. Probably someone already has. But I actually doubt that an analysis in terms of the monthly data would turn up anything. My main point was simply that using the appropriate methods one may be able to uncover signals that superficially look like they would be drowned out by the noise.

    Anyway, my apologies.

    I really should have found a better way of making the same point. I will be more careful in the future.

    Comment by Timothy Chase — 13 Aug 2007 @ 2:59 PM

  181. Aaron Lewis (#173) wrote:

    Re 121
    I am so glad to hear that the DUSTBOWL will not form until 2080.

    Actually spelling it as two words seems to be more common. But one word spelled that way would seem to be a definite improvement over how I have spelled it in the past.

    http://www.drought.unl.edu/dm/monitor.html and the state of the Sierra Snow Pack had me worried.

    On the other hand, just as the Arctic sea ice is melting faster than the models predicted, perhaps North America may be drying out faster than the models predicted?

    The more information and the more resources the better.

    My central point in bringing it up in the first place is that most Americans still seem to be under the impression that climate change will have relatively little impact upon the United States – but this isn’t what the models say come the 2080s. Personally, I think that what may happen in Asia is a great deal more important, at least in terms of the numbers and global effect.

    However, the United States has a disproportionate effect upon global policy, and this is largely a function of conservative attitudes. Despite the recent urbanization of conservatism, I suspect that pointing out the consequences of climate change in relation to US agriculture may be one of the more effective arguments, at least in the states.

    I do not think that one dry year makes a drought or a trend, but any noticeable change in the heat distribution of the Northern Hemisphere should put us on alert for follow-on effects.

    Prudent men should consider and prepare.

    I agree. And things may move more quickly as a result of the meltdown in the Arctic – and this is something that I may mention in passing. However, despite my earlier mistake regarding temperature trends, I think it is important to claim no more than what the weight of the evidence (either in terms of modeling or obvious trends) can bear. Otherwise it is all too easy for those on the other side to label us alarmists.

    There are things which I worry about more than the drought in either the United States or Asia. I suspect that is obvious. But I won’t make it the centerpiece in any attempt to convince others that we need to do something about climate change.

    But that is my own personal approach. I can certainly see others reasonably taking a different view.

    Comment by Timothy Chase — 13 Aug 2007 @ 3:26 PM

  182. Re #144 I have to say I agree with FCH that source code should be made available in science generally. Although I take the point that doing any more than just putting it online somewhere is going to take resources, I don’t think that is a sound argument against doing at least that much. Certainly in my own field (agent-based modelling), I’m among those trying to establish a norm that code, along with parameter settings and metadata concerning the hardware and software environment in which model runs were undertaken, should be put in the public domain.

    Comment by Nick Gotts — 13 Aug 2007 @ 3:51 PM

  183. re 169. Lynn V. wrote”If the lower 48 US states have not been significantly warming (is that what people here are saying), then there must be other places that ARE significantly warming, if the global warming idea is accurate. Isn’t that that whole idea behind “the global average temperature” increasing?”

    Interesting developments in Crete.

    http://www.npr.org/templates/story/story.php?storyId=12707473&ft=1&f=1004

    Not that GW is necessarily the only culprit … but poor sustainability choices combined with apparent warming seem to have their drawbacks…

    Comment by J.S. McIntyre — 13 Aug 2007 @ 3:51 PM

  184. #170: Yes, gavin. These documents we have, but I’m a bit disappointed if that’s all you have. I’ve just started to go through them as well as possible, but of course there is no raw data or program code, and, most important, I don’t think there are detailed algorithms. It seems largely to be a defence of methodology. Before I ask more specific questions I will, however, search for all the answers on the site and elsewhere.

    Now, the question of why the 1999 US Temperature Anomaly 1880-1998 data are so different from the 2001 data wasn’t answered. I shouldn’t have to find out myself which adjustments differ and what their respective impacts are. I strongly doubt I can solve this from these (more heuristically rhetorical than mathematical?) documents that you gave me a link to.

    Again: Why are the US Temperature Anomaly 1880-1998 data in figure 6, page 37 of “GISS analysis of surface temperature change” (1999) so very different from the US Temperature Anomaly 1880-1999 data on page 22 of “A closer look at United States and global surface temperature change” (2001)?
    These documents:
    http://pubs.giss.nasa.gov/abstracts/1999/Hansen_etal.html
    http://pubs.giss.nasa.gov/abstracts/2001/Hansen_etal.html

    Just a general, not a detailed, answer why. Thanks!

    [Response: The answers are in the papers! Basically, it is because of the corrections due to Time of Observation biases and calibrations for known station moves and the like. - gavin]

    Comment by Magnus Andersson — 13 Aug 2007 @ 3:52 PM

  185. # 178 Michael

    “but I would be a fool to put any money on your predictions (if I were a gambling man).”

    So, would you put money against GISS predictions? If you think you’d be a fool to put $$ on their predictions, surely it would be very unfoolish to put money against them? Do you think a global 5-year average around 2020 will be the same as, or colder than, one around 2005? I’m still hunting for someone to propose that side of a bet over on http://www.longbets.org...

    Comment by John Mashey — 13 Aug 2007 @ 3:54 PM

  186. Gavin, you make peer review sound flawless (re: response in #178).

    http://www.eurekalert.org/pub_releases/2004-08/cu-bse081204.php

    [Response: Interesting study. But still, I have never described peer review as flawless. When done properly and thoroughly (as it was for the IPCC reports) it is very useful at improving the quality of a document. When done badly..... well, let's just say that it isn't always done well. It's just one step in improving the credibility of a statement. Necessary, but not sufficient! - gavin]

    Comment by Dave Blair — 13 Aug 2007 @ 4:13 PM

  187. Re: #179 Tamino

    Thanks for the explanation,

    but your answer induces another “naive” doubt:
    if CO2 is “world global” in its warming effects
    while “short lifetime” sulfate aerosols are much more local

    then for the 1940-1975 cooling trend I would also expect a clear divergence
    between, say, the northern and southern hemispheres of the globe

    Now for example in

    http://iabp.apl.washington.edu/Papers/JonesEtal99-SAT150.pdf

    (see figure 4, p. 10)

    I can’t see that much of a difference…

    Comment by Mario — 13 Aug 2007 @ 4:22 PM

  188. Simple demands to see all and every piece of code involved in an analysis presumably in the hope that you’ll spot the line where it says ‘Fix data in line with our political preference’ are too expansive and unfocused to be useful science. – gavin

    Then just host a tarball–or give public read access to the SVN/CVS repo–and go about your business, doing the “useful science”. Is there any reasonable argument for withholding any of the relevant source code? How is that “expansive”?

    [Response: Because working codes are generally a mixture of scripts, programs, utilities, dead ends, previous methodologies and unused options and not 'nice' web applets that anyone can run. Pulling together everything needed for the analysis so that it can be run elsewhere 'out of the box' is a significant undertaking. We've done that for climate models because that flexibility is required due to the number of users. For programs that just one person runs, it's much more difficult and time consuming in general. Before going to all that bother, demonstrate that there indeed is some ambiguity in the published descriptions. Without that, this conversation is just posturing. - gavin]

    Comment by DaveS — 13 Aug 2007 @ 4:28 PM

  189. If a paper says ‘we then add A and B’, you don’t need code that has “C=A+B”. – gavin

    No, but if the paper says C=f(g(h(A + g(h(B))))), it would be “nice” to have f(), g(), and h() fully specified and reproducible, with full access to the datasets containing A and B.

    [Response: Read the paper and you will see that the GISS methodology for the urban adjustments are not that complex and all the raw data is already available. - gavin]

    Comment by DaveS — 13 Aug 2007 @ 4:30 PM

  190. I saw the site on temperature anomalies. I wonder, is there one on rainfall anomalies?

    Comment by David Price — 13 Aug 2007 @ 5:38 PM

  191. Re 181
    Drought in California will have major impact on 30 million people, from lettuce farmers that flood their fields with Sierra snow melt to computer programmers that want a morning cup of coffee. A drought like the one that started in 1280 would be worse for California than any foreseeable earthquake, volcano, or plague.

    I would rather risk being labeled an alarmist by speaking up than have people perish as a result of my not raising the issue soon enough, or not making my warning sufficiently urgent. So at what point do you raise the alarm? 2 % probability? 50 %? Do you wait until you are certain? By then, the danger may be so close that it is impossible to avoid. When does the philosopher speak? I think that a lot of people are going to come to grief because respectable scientists do not raise the alarm in time.

    I think that the changes in the Arctic are evidence of changes in Northern Hemisphere surface sea temperatures and atmospheric circulation patterns. I think the surface sea temperatures in the Eastern Pacific are similar to those that I saw last year at this time, and I feel that those sea surface conditions contributed to California’s lack of precipitation last winter. Last year, I thought it was going to give us a wet year, and instead the storm track moved north and we were dry. I think the same thing is going to happen this year. What is my confidence level? Maybe 33% – not high, but given the stakes, it is a heck of a bet. I would not give it a second thought if the Arctic ice was not melting, and my fruit trees were not blooming a day earlier every year.

    What happens if I shout DROUGHT too loudly? It is not like shouting fire in a crowded theater.
    The worst that could happen is a few farmers put in drip irrigation systems and a few home owners convert their lawns to sedums. It is not like everybody is going to run for Oregon. Heck, half of the households in our little cul-de-sac do not believe in global warming anyway.

    Comment by Aaron Lewis — 13 Aug 2007 @ 6:00 PM

  192. Presumably that is a new definition for the word secret? As in ’secretly’ publishing all the adjustments and consequences in the open peer reviewed literature? Please read the references: Hansen et al 2001. – gavin

    Here is the Hansen et al paper on the urban adjustment: “The urban adjustment, based on the long-term trends at neighboring stations, introduces a regional smoothing of the analyzed temperature field. To limit the degree of this smoothing, the present GISS analysis first attempts to define the adjustment based on rural stations located within 500 km of the station. Only if these stations are insufficient to define a long-term trend are stations at greater distances employed.”

    I have some questions:
    – How does the analysis “attempt to define the adjustment based on rural stations”? What is the methodology?
    – How are the local rural stations chosen?
    – What is the test for insufficiency?
    – In the event that local stations are deemed insufficient, how do you choose the longer range ones?
    – Why 500km? Was that arbitrarily chosen?
    – What is the methodology for making the adjustment for long range ones? How does it differ?

    Also from the paper, regarding station history adjustments: “One of the best opportunities to make useful station history adjustments is in the United States. The USHCN stations were selected as locations with nearly complete temperature records for the 20th century, but also because they have reasonably good station history records that permit adjustments for discontinuities.”

    – Um… this doesn’t describe the adjustment… at all.

    Now, maybe I’m missing something, but I don’t see any papers referenced here which could answer any of these questions. Perhaps that is why some people are characterizing it as “secret”.

    I’ll be the first to admit that I have NO IDEA whether there are or are not published papers out there that answer these, but they aren’t cited in Hansen et al 2001. With no mention of methodology on such an important alteration of the raw data, how can this possibly be called “peer-reviewed”? I really don’t mean to sound snarky…. I’m sure I’m just missing something.

    [Response: Umm... try reading past the abstract - all the full papers are freely available online. - gavin]

    Comment by DaveS — 13 Aug 2007 @ 6:03 PM

  193. Umm… try reading past the abstract – all the full papers are freely available online. -gavin

    I quoted from sections 4.1.2 (Station History Adjustment) and 4.2.2 (Urban Adjustment), the relevant sections on the adjustments. I honestly quoted the most descriptive excerpts from the two sections… a full reading does not convey any more information toward answering my questions than do those excerpts. (You HAD to know that, so why such a dismissive response?)

    All of my questions still stand.

    [Response: Apologies if I jumped to conclusions, but I'm reading the same paper as you and the answer to your first question is clear: the rural stations are determined by the night light index (Section 3 and Plate 1). For the others, 'insufficient' refers to whether there are any rural stations within the radius (with enough values to define a clear trend). Of course, 500km was somewhat arbitrarily chosen - you want to minimise the size of the circle of influence while still maintaining enough stations for the method to work. The station adjustments are simply taken from USHCN itself - there is nothing extra added for that (and so no description required). If you think these choices matter, do it yourself with different numbers and see what happens. I guarantee that trying to do it for yourself will give you a much greater insight into the problems and potential solutions than simply looking at badly written fortran.... - gavin]
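    To make the “do it yourself” suggestion above concrete, here is a minimal sketch of just the neighbour-selection step, in Python. It is not the GISS code: the station record fields, the minimum station count and the fallback radius are illustrative assumptions, not the published values.

        import math

        def distance_km(lat1, lon1, lat2, lon2):
            # Great-circle distance on a spherical Earth (radius 6371 km)
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dlon = math.radians(lon2 - lon1)
            c = (math.sin(p1) * math.sin(p2) +
                 math.cos(p1) * math.cos(p2) * math.cos(dlon))
            return 6371.0 * math.acos(max(-1.0, min(1.0, c)))

        def rural_neighbours(target, stations, radius_km=500.0, min_count=3):
            # Rural (night-light index 0) stations within radius_km of the target.
            # If too few are found, widen the search, in the spirit of the paper's
            # "stations at greater distances"; the 1000 km fallback and the minimum
            # count of 3 are assumptions for illustration only.
            near = [s for s in stations
                    if s is not target and s["lights"] == 0 and
                    distance_km(target["lat"], target["lon"],
                                s["lat"], s["lon"]) <= radius_km]
            if len(near) < min_count and radius_km < 1000.0:
                return rural_neighbours(target, stations, 1000.0, min_count)
            return near

    The long-term trend of the mean of those rural neighbours is then what the urban record gets adjusted towards.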

    Comment by DaveS — 13 Aug 2007 @ 6:21 PM

  194. #161
    No worries.

    Now, moving on to global temperature anomalies, does anyone think there is an accelerating trend in the global data? Does anyone have some analysis of the time series to support this?

    regards, Steve

    Comment by Steven — 13 Aug 2007 @ 6:55 PM

  195. Re Gavin’s response in 175:
    “If there is a question of whether the code correctly implements the algorithm it will be apparent if someone undertakes to encode the same algorithm and yet comes up with a different answer.”

    Gavin,

    Geez. Just support the release of the code and scripts. The only thing worse than if there were more errors would be if there were more errors and no one caught them. By supporting practices that make it difficult to find errors, you are diminishing your value as a scientist.

    [Response: You completely misread my point. I am certainly all for as much openness as possible - witness the open access for the climate models I run. But the calls for the 'secret data' to be released in this case are simple attempts to reframe a scientific non-issue as a freedom of information one (which, as is obvious, has much more resonance). My points here have been to demonstrate that a) there are no 'secret' data or adjustments, and b) that there is no reason to think there are problems in the GISTEMP analysis. The fact that no-one has attempted to replicate the analysis from the published descriptions is strong testimony that the calls for greater access are simply rhetorical tricks that are pulled out whenever the initial point has been shown to be spurious. - gavin]

    Comment by John Norris — 13 Aug 2007 @ 7:11 PM

  196. First of all, Gavin, I wanted to say that it’s cool of you to come in here and actually interact with us and answer these types of questions. Regardless of whether we agree with you on a particular point (or are even qualified to agree or disagree), you deserve credit for that. It’s rare.

    Onto my response…

    …the answer to your first question is clear: the rural stations are determined by the night light index… –gavin

    Well, the paper says that the “analysis” is “based on” the unlit stations. The use of such otherwise superfluous language suggests that there is something more going on than a simply-described, routine adjustment.

    And regarding the insufficiency, should it not be made clear what constitutes a “clear trend”? The results could NEVER be reproduced without that information, which leads me back to my last question: how was this “peer-reviewed”? Are there publicly-available peer review documents?

    Also from the paper: “The hinge date [for the urban adjustment] is now also chosen to minimize the difference between the adjusted urban record and the mean of its neighbors.”

    What makes anyone think that this is correct? Is there some research that lends credibility to this approach? There is nothing cited for that either. It sounds like this method would arbitrarily minimize UHI when adjusting the data… rather, it sounds like the hinge point is chosen in a way that guarantees the highest possible trend. Maybe I’m reading that wrong though.

    [Response: I think you have read it wrong. They don't want the urban trends to have an influence at all, and so they look at the nearby rural trends. The urban long term change is then adjusted (using a two piece linear fit to the mean rural station trend) so that it matches the rural trend. They use a two-piece linear fit (with an arbitrary hinge) because you don't want to arbitrarily fix a particular date at which the urban trends become important (they were doing that prior to the 2001 paper). You could do better I think, for instance fitting a low order spline to the rural numbers, but I'm not sure that the improvement in fit would justify the work. The main point is not that the Hansen et al method is 'right', because there is no 'right' answer. There are just different approaches. Eli Rabett had a good example of how this works - and his 'reverse engineering' revealed that a) the GISS adjustment is a simple two-piece linear trend and b) the urban trends after adjustment are the same as the rural trends. That's prima facie evidence that the code does what it claims to do. - gavin]
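    For anyone attempting the kind of replication mentioned above, here is one possible two-piece (hinged) linear fit in Python. It is a sketch of the general technique described in the response, not the GISS implementation, and the hinge search range is an arbitrary choice.

        import numpy as np

        def two_piece_fit(years, series):
            # Continuous two-piece linear fit: try each interior year as the
            # hinge and keep the one with the smallest sum of squared residuals.
            years = np.asarray(years, dtype=float)
            series = np.asarray(series, dtype=float)
            best = None
            for hinge in years[2:-2]:
                # Columns: intercept, slope before the hinge, extra slope after it
                A = np.column_stack([np.ones_like(years),
                                     years - hinge,
                                     np.where(years > hinge, years - hinge, 0.0)])
                coef = np.linalg.lstsq(A, series, rcond=None)[0]
                rss = np.sum((series - A @ coef) ** 2)
                if best is None or rss < best[0]:
                    best = (rss, hinge, coef)
            return best[1], best[2]  # chosen hinge year and the three coefficients

    Fitting something like this to the mean of the nearby rural series, and adjusting the urban record so its long-term change matches the fitted trend, is the essence of the adjustment described above.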

    Comment by DaveS — 13 Aug 2007 @ 7:17 PM

  197. Re: #187 (Mario)

    I agree that from the graphs in the paper you reference “I can’t see much of a difference.” So, I took the NASA GISS data for the northern and southern hemispheres, and computed the difference (northern – southern) from 1880 to the present. You can view the graph here.

    As you can see, the northern hemisphere did indeed cool relative to the southern hemisphere from about 1945 to about 1975.
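    For those who want to reproduce that difference series, a minimal sketch in Python (the file names and plain year/anomaly format are assumptions about how the downloaded hemispheric series are stored):

        import numpy as np

        # Hemispheric annual means (hypothetical file names; two columns each:
        # year, anomaly in deg C), assumed to cover the same years
        yr, nh = np.loadtxt("nh_annual.txt", unpack=True)
        _, sh = np.loadtxt("sh_annual.txt", unpack=True)

        diff = nh - sh                               # northern minus southern
        mid = (yr >= 1945) & (yr <= 1975)
        slope = np.polyfit(yr[mid], diff[mid], 1)[0]
        print("NH minus SH trend, 1945-1975: %+.3f deg C per decade" % (10 * slope))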

    Comment by tamino — 13 Aug 2007 @ 7:24 PM

  198. Re: #196 (tamino)

    One more note: for most of the century it’s evident from the graph that the northern hemisphere warms relative to the southern. That’s because land warms faster than ocean (due to the immense thermal inertia of the oceans) and most of the land is in the northern hemisphere. The period 1945-1975 diverges from this pattern.

    Comment by tamino — 13 Aug 2007 @ 7:41 PM

  199. What happened to my comment on #196? Was there something inappropriate? Please let me know.

    Comment by bjc — 13 Aug 2007 @ 7:52 PM

  200. I urge everyone to review Eli’s rework, but make sure you read Steve McIntyre’s commentary.

    Comment by bjc — 13 Aug 2007 @ 8:23 PM

  201. So Gavin, besides posting replies here, how was your weekend? You have done yeoman’s work answering the queries. Thanks. I think a sticking point is that most of us would have a hard time replicating the GISS temps based on Hansen, et al. 2001 simply because we are not trained in the field. But I am sure many outside my field would have trouble replicating some of the vaccine work I published based on their reading of the Materials and Methods sections I painstakingly wrote. That’s the nature of the beast. Training in a scientific field does count for something.

    I would just like to point out that with ClimateAudit.org down, Steve McIntyre has posted a couple of items on Anthony Watts’ blog (“Does Hansen’s Error “Matter”? – guest post by Steve McIntyre” and “Lights Out – Guest post by Steve McIntyre”). And I would recommend tamino’s data analysis to get an idea of the effects of the correction. It looks to me like the correction of the data for the lower 48 leads to a better slope match with the satellite data.

    Comment by Deech56 — 13 Aug 2007 @ 8:32 PM

  202. RE #191, & Drought in California will have major impact on 30 million people, from lettuce farmers that flood their fields with Sierra snow melt to computer programmers that want a morning cup of coffee. A drought like the one that started in 1280 would be worse for California than any foreseeable earthquake, volcano, or plague.

    I just started SIX DEGREES by Mark Lynas (had to order it thru http://www.amazon.co.uk) & am starting on the first chapter. He writes about what happened in 1280 as happening with a 1 to 2C increase. And I understand (from other sources) that’s already in the pipes from previous GHG emissions. Yes, such an increase & concomitant harms will be very very bad for much of the U.S. (& the world).

    Comment by Lynn Vincentnathan — 13 Aug 2007 @ 9:17 PM

  203. “We of course pointed that you get virtually the same warming trend whether you use all stations or just rural stations – which you didn’t even seem to acknowledge.”
    This is being claimed wrt US temps.
    Is this true wrt global temperature studies? That is, do rural-only stations show the same trend? If so, what is the published study on it?

    Comment by Patrick — 13 Aug 2007 @ 9:29 PM

  204. Oh, PUH-leeze, Gavin. You’re not talking to the computer illiterate. I’d tinker with the models if I had the right FORTRAN compiler, but all my best efforts — and I was working in FORTRAN 28 years ago — have been for naught. One thing that publishing the code would do is get it ported to something that doesn’t require a proprietary compiler. The next thing is I suspect someone would find a way to parallelize the code that would let it be run on distributed networks, rather than being limited to tightly coupled, shared memory systems — or at least, I’ve been told it currently only runs on tightly coupled systems.

    [Response: Are we talking about climate models now? The available ModelE code was only for OpenMP systems, but does compile on a number of different platforms and compilers. Our current in-house version has MPI as well, and will compile with g95 (runs pretty slow though). At some point in the near future, I'll update the available code to the version with the same physics but more flexible coding. - gavin]

    Comment by FurryCatHerder — 13 Aug 2007 @ 9:58 PM

  205. Gavin thanks for your reply to 195. Obviously we are keeping you busy. I appreciate your dedication to the science and the discussion. Onto your comments:

    “You completely misread my point. …”

    Not sure how, I completely read your words. Your words said that the code should not have to be publicly inspected; the public should create a new program based on the original algorithm. If the public gets different results, then the public can raise an issue with the Author.

    Reviewing code cuts to the chase. Making a new program makes the process more complex. Why on earth would you want to make it more complex?

    ” … I am certainly all for as much openness as possible – witness the open access for the climate models I run. …”

    I am for as much openness as possible too, please support release of Hansen’s source code and scripts.

    “…But the calls for the ’secret data’ to be released in this case are simple attempts to reframe a scientific non-issue as a freedom of information one (which, as is obvious, has much more resonance). …”

    I don’t think so. I think people are curious how it works and if it really works correctly. Sounds like science to me. If you are correct, then get it released, and the issue as you described goes away. If I am correct, people will take the information, run it, look at it, size it up, and comment. How can that not be good for Climate Science?

    ” … My points here have been to demonstrate that a) there are no ’secret’ data or adjustments, and b) that there is no reason to think there are problems in the GISTEMP analysis. …”

    a) The code and scripts are apparently kept secret from me – thus the implementation of the data and adjustments is indeed secret. I don’t think your point a is valid.
    b) Before Steve McIntyre found the subject GISTEMP problem you describe above, I don’t think you had reason to believe that that GISTEMP problem was there. I don’t think your point b is valid either.

    ” … The fact that no-one has attempted to replicate the analysis from the published descriptions is strong testimony that the calls for greater access are simply rhetorical tricks that are pulled out whenever the initial point has been shown to be spurious. – gavin”

    Funny, I think it is strong testimony when someone releases important results publicly but withholds important information on generating those results. Hansen’s code and script details are obviously very important towards generating those GISTEMP results.

    It is perfectly reasonable to subject every piece of code for completely transparent review if the code establishes one of very few standard data sets for GW measurement, as Hansen’s does. It may be poorly configured code and scripts that are embarrassing for Hansen to release, but the world will get past that if it otherwise passes scrutiny.

    [Response: The methodology is important to the results, and the methodology is explained in detail (with the effect of each individual step documented) in the papers. The fact that the results are highly correlated to the results from two independent analyses of mostly the same data (CRU and NCDC) is a strong testimony to the robustness of those results. However, you are I think wrong on one point: the rhetoric for more access and more data is actually insatiable. As one set of code is put out, then the call goes up for the last set of code, and the code and results from the previous paper, the residuals of the fits, and for the sensitivity tests and so on. Given that all this takes time to do properly and coherently (and it does), there will never be enough 'openness' to squash all calls for more openness. Whatever the result from releasing the current code (which very few of the people calling for it will ever even look at), the 'free the code' meme is too tempting for the political advocates to abandon. People who are genuinely interested in all of these questions will, I assure you, be much happier in the end if they code it themselves. Think of it as tough love. ;) - gavin]

    Comment by John Norris — 13 Aug 2007 @ 10:34 PM

  206. Hansen et al 2001 says: “Only 214 of the USHCN and 256 of the GHCN stations within the United States are in “unlit” areas.”

    The data set http://data.giss.nasa.gov/gistemp/station_data/station_list.txt
    lists 294 USHCN and 362 GHCN with lights=0 and 308 USHCN and 371 US GHCN as dark (A). Can you please obtain a list of the 214 USHCN and 256 GHCN sites that were actually used in Hansen et al 2001. What accounts for the difference between the numbers in the data set and the article.

    [Response: I have nothing to do with that analysis and so you need to ask the authors. Bear in mind that GISTEMP is being updated in real time, as are the source datasets. What was available in 2001 is not necessarily the same as what is available now. But like I said, ask them. - gavin]

    Comment by Steve McIntyre — 13 Aug 2007 @ 11:06 PM

  207. #205. For applied economics articles in major economics journals e.g. American Economic Review, it is mandatory to archive code and data as used, at the time of submission of the article. There’s no reason why this sort of “best practices” should not also be adopted in climate science, where there are important policy considerations.

    And by the way, if you’re worried that no one’s going to look at the code, I promise that I will look carefully at the code.

    [Response: As implied above, you will be much better off doing it yourself. It's not a complicated procedure, and you could try all sorts of different methodologies on a consistent platform. If you come up with something substantially different, I'll be surprised, but that would be constructive science. Graduate students sometimes put a lot of effort in deciding which choice to make in an analysis. My advice is invariably to simply do it one way and then go back and see if doing it the other way matters. If it doesn't matter, it isn't worth worrying about, and if it does, then you have an interesting result. The point is that idle theorising about potential issues is a waste of time when it is only a matter of days to actually find out. Work it out and see. If you are worried about microsite issues, do the analysis with only the 'good' sites. If you worry about urban issues, throw out the urban stations, etc. Each person's interests are independent, and the direction their tests will take them is unique. Waiting on someone else to do your analysis for you is foolish. -gavin]

    Comment by Steve McIntyre — 13 Aug 2007 @ 11:14 PM

  208. The fact that no-one has attempted to replicate the analysis from the published descriptions is strong testimony that the calls for greater access are simply rhetorical tricks that are pulled out whenever the initial point has been shown to be spurious. – gavin

    I think that’s nonsense. No one should have to “attempt” to replicate the results. They should be able to pull the actual, complete and unaltered dataset that was used and replicate it. They should see each and every station that was used to do the urbanization adjustment for any particular station. Every bit of that should be available, and there is absolutely no defensible reason why it isn’t.

    I’m sorry, Gavin, but you’re wrong on this. People shouldn’t have to jump through hoops and do tons of trial-and-error hoping to “replicate” the results of a paper that somewhat ambiguously (despite your differing opinion on that) describes methodology, nor should their failure to do so be considered evidence that they are asking for the data in bad faith.

    And, in the end, it shouldn’t matter if they are asking in bad faith or not. Every last scrap of data should be available to everyone, even people who disagree with your conclusions, and ESPECIALLY to people who are trying to tear apart your conclusions.

    However, you are I think wrong on one point, the rhetoric for more access and more data is actually insatiable. As one set of code is put out, then the call goes up for the last set of code, and the code and results from the previous paper, the residuals of the fits, and for the sensitivity tests and so on…

    So, what’s wrong with that? I was shocked over the last few days to learn the extent to which those things AREN’T available, to be honest.

    [Response: Underlying all of this is, I think, a big misconception about what replication means in an observational field like climatology. Everyone is working with ambiguous and imperfect data - whether that is from weather stations, paleo-records, models or satellites. Every single source needs to be processed in ways that are sometimes ad hoc (how to interpret isotopes, how to model convection, how to tie two satellite records and, yes, how to adjust for urban heating effects). Robustness of a result is determined by how sensitive the results are to those different ad hoc assumptions (do different climate models have similar sensitivity, do results from south Greenland match those of North Greenland, does the UAH MSU record match the RSS MSU record). The important replication step is not the examination of somebody's mass spec, or the analysis of lines of code, but in the broader results - do they conform with other independently derived estimates? Climate science is not pure mathematics where there is (sometimes) a 'right' answer - instead there are only approximations. These are outlined in innumerable papers which for decades have been the method of recording procedures, assumptions and sensitivities. To be sure, there are sometimes errors in coding (in the MSU record, or in climate models), but these come to light because the end results seem anomalous, not because there are armies of people combing through code. The independent replication of results is by far the more important stage in evaluating something.

    Take the Greenland ice cores. The US and EU wanted to drill an ice core in central Greenland. They could have drilled one core, split the samples and done independent analysis of each sample. That would have given a good test of how good analytical techniques are. Fine. But what they elected to do was to drill two separate cores, 30 miles apart, and do everything independently. This was much, much more useful. Firstly they found that for most of the cores the results were practically identical (which demonstrated the robustness of the analytics as well as sharing the samples would have), but more importantly, they found that the cores diverged strongly near the bottom - a sure sign that there was something wrong at one or (as it ended up) both sites. Without the independent replication that non-robustness would not have been uncovered. Thus while there are already two independent replications of the GISS temperature analysis (CRU and NCDC) - both of which show very similar features, there is always room for more. That would demonstrate something. Looking through code that does a bi-linear fit of data will not. - gavin]

    Comment by DaveS — 13 Aug 2007 @ 11:19 PM

  209. “Patrick: can you explain to us your expertise in physics, algorithms, statistics, analysis of imperfect data, software engineering, simulations? You have done most of this stuff professionally, right? [Of course, as an anonymous poster, it will be hard for us to check.]”

    I already replied to this beside-the-point challenge but it was not posted.
    Not sure why. Let me briefly say: I have a PhD in Computer Science and know many of the matters/areas related to this subject well enough to do some work here, but my skill/background is beside the point. I joined the call others made for the scientists in the area to do good, professional science by being as reproducible as possible with data and data analysis algorithm transparency. Open the details and code up for review. If I am being asked to do what those working in the field won’t do, I’d have to decline, as I have a job. But Steve McIntyre seems eager to review things, so work with him. Willingly. (See Judith Curry, #69, she ‘gets it’.)

    Comment by Patrick the PhD — 14 Aug 2007 @ 12:13 AM

  210. A problem with this thread is that we have two extremes, as seen before in other disciplines:

    - Respected domain researchers (like Prof. Curry in this case), occasionally joined by others, like statisticians or software engineers who might have something constructive to add, who clearly want to improve the science, and are happy to make normal-science critiques. Reasonable people can and do disagree about the mechanisms, cost tradeoffs, etc.

    On the other hand:

    - People who have little interest in improving the science, or actually getting better answers, and mainly want to slow down research whose answers they don’t like, raise the publication bar very high, and in general, cost-effectively waste researchers’ time.

    If you don’t believe this goes on:

    Consider “Good Epidemiology Practices (GEP)”, which describe the rules for doing good studies. This is good, and versions are created/debated by various researchers. Who could argue with that?

    In Allan Brandt’s “The Cigarette Century”, p306-307: it turns out that in late 1992:
    “Following years of fighting epidemiologists, Philip Morris now initiated a campaign for “Good Epidemiological Practices,” organized to “fix” epidemiology to serve the industry’s interests by changing standards of proof. One of the objective of the program, an internal memo explained, was to “impede adverse legislation.”
    A really detailed analysis can be found (including the role of Steve Milloy) in:
    http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1446868
    The tobacco case is unusual, in that large amounts of *internal* documentation is publicly available.
    Google: philip morris gep tobaccodocuments.org
    gets plenty of pointers, including strategy:
    ‘generate letters to editors (scientific/general media) promoting study and highlighting EPA/other weak ETS methodology inc… encourage libertarian policy groups to promote study and criticise the weakness of epidemiology, especially on ETS * independent scientists to push feature articles promoting confounders arguments.’

    Effective strategies proliferate.

    For a view of the more widespread use of these techniques (for instance, Milloy has moved on from secondhand smoke to fighting AGW), see:

    Chris Mooney, “The Republican War on Science”, especially:
    Ch6 Junking “Sound Science”
    p 71: ‘”sound science” means requiring a higher burden of proof before action can be taken to protect public health and the environment.’

    The Data Quality Act is supposed to assure the Federal Agencies “use and disseminate accurate information”. Who could argue with that?
    See:
    http://library.findlaw.com/2003/Jan/14/132464.html
    http://www.washingtonpost.com/wp-dyn/articles/A3733-2004Aug15.html
    http://en.wikipedia.org/wiki/Data_Quality_Act
    It has 12 separate references in Mooney’s book. On the surface, it’s supposed to assure good data. In practice, it has been used to inhibit publication of awkward data.
    ======

    Anyway, I continue to be amazed at Gavin’s patient willingness to reply to posters who seem more like anonymous drive-by sockpuppets to me and often seem clueless about serious software engineering and what it costs. Having spent some time looking at climate websites, I think GISS does a terrific job cost-effectively making data and (lots of) code available.

    I appreciate the different levels of effort involved in getting code done for a research study versus producing a systems program product that other people are expected to use. Again, if somebody is really interested in improving worldwide data, spending most of their effort chasing the 2% USA Lower-48 is very weird. Reasonable, well-informed people can and do disagree about good procedures, but much of this thread seems right out of the Philip Morris GEP or Data Quality Act playbooks, and it does not help good science…

    Comment by John Mashey — 14 Aug 2007 @ 12:16 AM

  211. Gavin wrote in #205: However, you are I think wrong on one point, the rhetoric for more access and more data is actually insatiable. As one set of code is put out, then the call goes up for the last set of code, and the code and results from the previous paper, the residuals of the fits, and for the sensitivity tests and so on.

    When you work on something unimportant, you are right, folks don’t care much and nobody asks many questions. But if you are working on something important, then I’m sorry but people will ask for more and more information. That’s the way it goes.

    But don’t these types of questions come up anyway when someone is peer reviewing the work? If I am reviewing another engineer’s work, it’s common for me to ask that certain inputs change so that I can be satisfied of the impact on outputs. It’s a quick sanity check that catches more issues than you might imagine.

    If you are only providing descriptions, then my guess is that most peer reviewers aren’t at all scrutinizing to the level that most think things are being scrutinized.

    If I tell you I have a million lines of code that prove something you believe to be a stretch, and my code is secret, then the investment needed by you to demonstrate an error in my thinking is astronomical if you have to recreate everything from scratch.

    This is what absolutely bugs me about all this climate research. As I’ve noted before, 1MLOC typically has 5,000 to 10,000 bugs lurking about unless you have an army of folks on QA.

    [Response: As I said above, complex codes can't be derived directly from the papers and so should be available (for instance ModelE at around 100,000 lines of code - and yes, a few bugs that we've found subsequently). The GISTEMP analysis is orders of magnitude less complex and could be emulated satisfactorily in a couple of pages of MatLab. - gavin]

    Comment by Matt — 14 Aug 2007 @ 12:45 AM

  212. Gavin,

    Here is some speculation that might be worth looking into by someone with sharper physics than mine.

    I grew up in the desert of Eastern Washington State. I go back from time to time, and the weather has cooled. It used to be that every couple of years, late June or early July would have temperatures in the 110s. That doesn’t seem to happen now; I’ve been there in June when the temp didn’t get over 95 for a week. The common attribution of this change is irrigation. There are new irrigated vineyards, orchards, and even corn where there used to be only dry-land wheat.

    My understanding of such things tells me that increasing humidity should depress high temperatures (and put a floor under lows). If this is true, then we may have the opposite of urban heat islands. At least in places where irrigation is increasing.

    In poking around the web, I could find no records of relative humidity. I did find that about 2% of the total area of the US is under irrigation (to the tune of about 150 billion acre-feet a year); that the area under irrigation fell by about 10% from 1972 to 1984.

    There are some interesting possibilities in looking at humidity as a moderator of temperature. Should high temperatures in areas that humans have desiccated (LA has captured entire rivers to feed its water needs) count in temperature trends? Should areas that irrigation has cooled? Was 1934 so hot because so much of the country was very dry? Did the growth of irrigation suppress temperatures in the mid 20th century?

    Comment by Tim McDermott — 14 Aug 2007 @ 1:43 AM

  213. #146
    Thanks for that Tamino. That’s a link I’ve already used on two other forums. Grand stuff.

    Comment by Mike Donald — 14 Aug 2007 @ 1:47 AM

  214. A common definition of whether a software application is implemented correctly is that it “conforms to the specification” (a similar definition is sometimes used for the quality of products produced by a manufacturing process).

    If I follow Gavin’s argument (and I do respect and appreciate Gavin’s efforts even if I disagree on this point), we should test Microsoft Word by developing a duplicate of Microsoft Word based on a hypothetical (since it’s from MS) detailed specification, and if the duplicate performs differently than Microsoft Word, then we may have found an error in Microsoft Word (or the duplicate)? I suppose this approach might work but it does not seem to be a particularly realistic way to test software for conformance with the specification nor is it used widely to test the accuracy of software implementations. For all but relatively simple algorithms, it is a potentially labor intensive and time consuming approach to checking the accuracy of an implementation. This is not an issue of “political advocates” but straightforward traditional software engineering and also a major tenet of the open source movement. Thanks for listening.

    Comment by Eric — 14 Aug 2007 @ 1:47 AM

  215. John Norris wrote: Reviewing code cuts to the chase. Making a new program makes the process more complex. Why on earth would you want to make it more complex?

    Reading the code won’t cut to the chase. In the first place, the code is the least informative artifact in the software development process. It has had too much of the what and why removed. What you really need to understand a program is a statement of the requirements for the program, a layout of the architecture of the program, and then a description of the high and low level design of the components of the program. Once you have all that, then you can start to make sense of the code. I doubt that much documentation exists.

    I once worked for the Navy modeling *mumble*mumble* The point of the exercise was to gain understanding of certain phenomena and how certain weapons responded. Our product was understanding, not software. Our work cycle was, approximately, to find an area that didn’t seem to be right to us. We might do some research, we might run our simulation against new conditions. Then we would decide to try something, code it, and run it. Then if it was a better match for our reference data, we would keep it and start over. We didn’t have to be perfect; we couldn’t be perfect. We just had to keep getting better.

    If we had to have operated in the kind of fishbowl you think would be good for climate science, our productivity would have gone in the toilet. Everything would have taken two or three times longer, because we would write documents we didn’t need for our work, but would be required by our auditors. And we would have been interrupted constantly by people wanting to know why we didn’t use longer variable names (F77, of course), don’t we know that common blocks are the root of all evil, and good gawd there’s a GOTO.

    Exxon Mobil made, after taxes, nine thousand dollars every second of 2006. To think that they would not have folks poring over the code looking for things that just look funny (so that they could feed it to a blog to raise a stink) is, well. Not everybody just wants to understand.

    Comment by Tim McDermott — 14 Aug 2007 @ 3:01 AM

  216. Re #206

    Tim,

    It is already known that humidity sets a ceiling on high temperatures. For instance, in a jungle the temperature seldom goes above 80 F, but in deserts over 100 F is not uncommon.

    Water has several effects on climate, the first being that it prevents the surface from warming, because it absorbs the solar radiation as latent heat. This leads to more water vapour, which is a greenhouse gas, and this heats the air near the surface. But water vapour is a lighter gas than air, with a molecular weight of 18 compared with air at around 29. Thus the ‘wet’ air convects because it is warmer and lighter. The convection leads to cooling, and the water vapour condenses and forms clouds. This cuts off the supply of solar radiation to the surface during the day and causes cooling. At night the blackbody radiation from the clouds keeps the surface warmer than on clear nights.

    So it is not really the humidity that is causing the cooling. It is just a rather unpleasant side effect from the surface water.
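
    To put a rough number on the buoyancy point above — this is purely a back-of-envelope sketch using the ideal gas law, with illustrative values for pressure and temperature rather than anything taken from the discussion:

    # Rough ideal-gas estimate of how water vapour lightens air.
    R = 8.314        # J/(mol K)
    P = 101325.0     # Pa, illustrative surface pressure
    T = 300.0        # K (~27 C), an illustrative warm humid surface
    M_DRY = 0.029    # kg/mol, dry air
    M_H2O = 0.018    # kg/mol, water vapour

    def density(vapour_fraction):
        # Mean molar mass of the mixture, then rho = P*M/(R*T)
        M = (1.0 - vapour_fraction) * M_DRY + vapour_fraction * M_H2O
        return P * M / (R * T)

    print(density(0.00))   # dry air, roughly 1.18 kg/m^3
    print(density(0.03))   # ~3% water vapour by moles, slightly lighter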

    Comment by Alastair McDonald — 14 Aug 2007 @ 5:18 AM

  217. [[Others say that this pattern change is CO2 induced. Neither side has yet to prove their case. I know the GISS people are high on CO2 causation, but they lost credibility when they keep their code secret (mistakes and all) from those who want to check their work.]]

    The code from the GISS GCM is available on the web:

    http://www.giss.nasa.gov/research/modeling/

    As for CO2 accumulation raising surface temperatures, that has been clear since John Tyndall demonstrated that CO2 is a greenhouse gas back in 1859. It doesn’t depend on modern GCM results.

    Comment by Barton Paul Levenson — 14 Aug 2007 @ 6:15 AM

  218. Re #172 — Tamino, you were right and I was wrong. For some reason I had gotten it into my head that you (or the original poster) were saying the lower 48 were not warming, whereas apparently you were saying the warming trend was not accelerating. My bad.

    Comment by Barton Paul Levenson — 14 Aug 2007 @ 6:22 AM

  219. [[I have this doubt:
    If it is man-made CO2 that pushes global warming, naively I would exspect that the US should be among the world regions in which the warming is MORE severely felt.
    But now, after NASA revision, it seems that US are among the places LESS affected by long term warming…
    There is something to ponder upon or everything is perfectly clear?
    ]]

    Mario —

    CO2 is well mixed in the troposphere due to convection and turbulence. On a regional scale the proportion of carbon dioxide in the air is pretty much the same everywhere.

    Comment by Barton Paul Levenson — 14 Aug 2007 @ 6:25 AM

  220. For all but relatively simple algorithms, it is a potentially labor intensive and time consuming approach to checking the accuracy of an implementation.

    So if the GISS code implements a relatively simple algorithm, you don’t mind if only the algorithm(s), and not the code, have been published?

    That would seem to be the implication of your post.

    Gavin:

    The GISTEMP analysis is orders of magnitude less complex and could be emulated satisfactorily in a couple of pages of MatLab

    Hmmm, hey, it *is* relatively simple.

    Comment by dhogaza — 14 Aug 2007 @ 6:29 AM

  221. [[Now, moving on to global temperature anomalies, does anyone think there is an accelerating trend in the global data? Does anyone have some analysis of the time series to support this?]]

    I took the GISS global annual temperature anomalies from 1881 to 2000 and divided them into 12 decades, each with a time variable T = 1 to 10, and regressed the figures on T for each decade. I then took the coefficient of the T term for each decade and regressed that on the decade number 1-12. The trend was up but not significant, which is perhaps attributable to the small sample sizes involved.
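
    A minimal sketch of that two-stage calculation, assuming the 120 annual anomalies are already in a plain text file (the file name is hypothetical; this is not the code actually used):

    import numpy as np

    # Hypothetical file: GISS global annual anomalies for 1881-2000, one per line
    anoms = np.loadtxt("gistemp_global_annual_1881_2000.txt")

    decade_slopes = []
    for d in range(12):                               # 12 decades
        y = anoms[d * 10:(d + 1) * 10]                # one decade of anomalies
        t = np.arange(1, 11)                          # T = 1..10 within the decade
        decade_slopes.append(np.polyfit(t, y, 1)[0])  # per-year trend in that decade

    # Second stage: regress the decadal slopes on the decade number 1..12
    accel = np.polyfit(np.arange(1, 13), decade_slopes, 1)[0]
    print("Change in decadal trend per decade:", accel)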

    Comment by Barton Paul Levenson — 14 Aug 2007 @ 6:55 AM

  222. BlogReader #72 said: Odd you didn’t mention that maybe some plates might be sliding lower into the ocean. Exactly how much of a rise is there? And how does it vary across the globe?

    Well, plates do slide against each other, but sea level changes due to tectonics tend to be on the order of 1 cm per thousand years, and these would generally result from doming events. Sliding plates can cause more dramatic changes when there is elastic springback after tension has been released, but there haven’t been any such events in human recorded times as far as I have found.

    Any change in sea levels due to tectonic events would have both local and global results: local changes will be due to the changes in plate tensions and buoyancy – global changes will result if the size of the oceanic basin changes.

    Comment by john mann — 14 Aug 2007 @ 7:02 AM

  223. Realclimate said:
    “Another week, another ado over nothing.

    … In the global or hemispheric mean, the differences were imperceptible (since the US is
    only a small fraction of the global area)….”

    From the link in 184: Hansen et al., 2001 says-
    “… Although the contiguous U.S. represents only about 2% of the world area, it is important that the analyzed temperature change there be quantitatively accurate for several reasons.
    Analyses of climate change with global climate models are beginning to try to simulate the patterns of climate change, including the cooling in the southeastern U.S. [Hansen et al., 2000]. Also, perceptions of the reality and significance of greenhouse warming by the public and public officials are influenced by reports of climate change within the United States…”

    If Hansen et al. said accurate analysis of temperature changes is “important”, can we conclude now that the call for transparency is not actually “another ado about nothing”?

    If I were inclined to read between the lines, that paragraph from Hansen et al, might raise my eyebrows for different reasons too, but that’s a topic for another day.

    Comment by scp — 14 Aug 2007 @ 8:03 AM

  224. On the Toronto Star’s website this morning, Stephen McIntyre is quoted as saying that he caused “a micro-change. But it was kind of fun.”

    Comment by Bob Beal — 14 Aug 2007 @ 8:07 AM

  225. #207. Gavin, I think that the problem that you’re failing to come to grips with is that when results are used for policy purposes, people expect different forms of due diligence than little academic papers, which have received only the minimal due diligence of a typical journal review. People are entitled to engineering-level and accounting-level due diligence.

    The reason that current replication practices in econometrics require archiving of code is that this makes post-publication due diligence much more efficient. There are a number of interesting links and discussion of replication policy at Gary King’s website http://gking.harvard.edu/replication.shtml . The McCullough and Vinod articles are relevant and were relied on in the American Economic Review adopting its replication policy. (The editor at the time is presently the Chairman of the Federal Reserve.)

    Your statement that no one had apparently tried to emulate the Hansen methods is itself evidence of the burden in trying to run the gauntlet of assembling the data and decoding the methods and precisely illustrates the obstacles to replication discussed in McCullough and Vinod (and its references) and why the American Economic Review changed its policy to require such archiving.

    In addition, the GISS temperature series is essentially an accounting exercise, rather than a theoretical exercise. In an accounting audit, you don’t just hand a bunch of invoices to company auditors and say – well, do your own financial statements. Yes, maybe they’d learn something from the exercise, but that’s not the way the world works. Their job is not to re-invent the company’s accounting system, but to verify the company’s accounting system. Sure there’s a role for re-doing Hansen’s accounts from scratch, but there’s also a role for examining what he did. If he’d archived his code, then it would have been easy to see where he switched from using USHCN raw data to adjusted data. You’d ask – why did you change data sets? You might also consider the possibility that if they’d gone to the trouble of properly documenting and archiving their code, maybe they’d have noticed the error themselves.

    The other problem that arises is that, if I do what you said – emulated GISS methods and arrived at different numbers, it’s not hard to imagine a situation where GISS said that my implementation of their method was wrong [edit] and right away everyone has a headache trying to sort out what’s going on.

    The purpose of inspecting source code is precisely to avoid these sorts of games. I asked for code to avoid pointless controversies [edit]. Contrary to an impression that you’ve given, I wouldn’t try to run their code as is. My own practice is to re-write the code in R (as you would do in Matlab), recognizing that it is fairly trivial, and then try to test variations.

    However even in “trivial” code, little things creep in. If you can read their Fortran code, this can elucidate steps and decisions that may not be described in the written text. [edit]

    If one wants to test the impact of (say) using only rural stations on Hansen’s numbers or of using “good” stations on US temperature data, something that is on my mind, then you need to benchmark the implementation of Hansen’s methods against actual data as used and actual results, step by step, to ensure that you can replicate their results exactly and then see what the effects of changing assumptions or methods are. Only by such proper benchmarking can one ensure that you are analyzing the effect of rural stations and not unknown differences in methodology. This seems so self-evident that I don’t understand why you are contesting it.

    [Response: Because, frankly, I find the 'audit' meme a distraction at best. I am much more interested in constructive science. Scientifically, independent replication - with a different set of 'trivial' assumptions - is far more powerful (viz. the Greenland ice core example) than any amount of micro-auditing. If there is a difference, then go to the code differences to see why (i.e. UAH and RSS), but if you can show that the main result is robust to all sorts of minor procedural changes, then you've really said something. You have all the data sets from USHCN, GHCN, and GISS and you have demonstrated in a number of plots that all the GISS adjustment does is make a bi-linear adjustment to the stations based on close neighbour rural stations. How difficult is that to code? If the net result is significantly different than the GISS analysis then look into it further. If it isn't, then why bother? In this field, methodology is not written in stone - it's frequently ad hoc and contains arbitrary choices. Pointing out that there are arbitrary choices is simply telling us what we already know - showing that they matter is the issue. That kind of constructive analysis is how the rest of the field works - if you think you can do better and make better choices that are more robust to problems in the data, then that makes a great paper. Simply saying something is wrong without offering a better solution is just posturing. It's worth pointing out that the GISTEMP analysis started out exactly because they were unhappy with how the station data were being processed elsewhere. - gavin (PS. edits to keep discussion focussed)]
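
    As a rough illustration of the kind of “bi-linear adjustment to close neighbour rural stations” being described — a continuous two-legged straight-line fit to the urban-minus-rural difference, with the break year left as a free parameter — the following is only a sketch under those assumptions, not the GISS code; the array names are hypothetical:

    import numpy as np

    def bilinear_adjust(urban, rural_mean, years, knee_year):
        # Fit a continuous two-legged ("hinge") line to the urban-minus-rural
        # difference, then subtract that fit so the urban series tracks its
        # rural neighbours at long time scales.
        years = years.astype(float)
        diff = urban - rural_mean
        hinge = np.maximum(years - knee_year, 0.0)
        X = np.column_stack([np.ones_like(years), years, hinge])
        coef, *_ = np.linalg.lstsq(X, diff, rcond=None)
        return urban - X @ coef

    Whether an adjustment of that sort matters is then just a question of comparing the trend of the adjusted series against the unadjusted one.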

    Comment by Steve McIntyre — 14 Aug 2007 @ 8:33 AM

  226. For those interested, I’ve made a few supplemental charts from Gavin’s datasets, in particular the US and Global temperatures on the same chart:

    http://graphoilogy.blogspot.com/2007/08/us-temperature-revision.html

    Comment by Khebab — 14 Aug 2007 @ 9:02 AM

  227. Gavin:
    You don’t seem to have addressed the point that the AEA has and presumably enforces a policy of archiving all data and code. Are you suggesting that such a practice has no value?

    Khebab:Thank you, these are useful charts especially the one overlaying US and Global temperatures. (The one I twice asked Gavin to produce!) It sure focuses attention on the 30s and the question of whether the US pattern was really regional or global. I quickly checked Canadian data for the same time period and it seems to conform to the US pattern, which of course means that its weighting is significantly higher than 2% – probably approaching 13% of land based temperature measures.

    Comment by bjc — 14 Aug 2007 @ 10:07 AM

  228. I’ve never bothered to reproduce the entire NASA GISS analysis, mainly because I trust their results and reproducing it would be too much work. Besides, HadCRU and NCDC have done essentially the same work (albeit with different algorithm choices) and got essentially the same results — that’s pretty damn robust.

    Most of the work would be in acquiring and formatting all the data and metadata. The fact is that making public all the code for all the programs and all the scripts wouldn’t much lighten the workload. Besides, I’m not at all interested in finding out whether running the same program on the same data will produce the same results — that’s obvious! Nor am I interested in going through all the programs and all the scripts line-by-line looking for potential errors. Program/script code can sometimes reveal simple errors, but it doesn’t really help very much in determining whether the answer is right according to the stated algorithm. Debugging my own code is a royal pain; debugging someone else’s is an exercise in self-flagellation.

    I’d much rather acquire the data and metadata, then write my own programs to process it according to the stated algorithm. If my results agree, then I’d know that they got it right — and I did too. If not, I’d start debugging my own code until I was satisfied it was correct. If the results still disagree, I’d say so.
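
    The comparison step itself is the easy part — a minimal sketch, where both anomaly series are assumed to sit in hypothetical text files with one value per year:

    import numpy as np

    mine = np.loadtxt("my_us_anomalies.txt")      # my independent implementation
    giss = np.loadtxt("giss_us_anomalies.txt")    # published GISS numbers

    diff = mine - giss
    print("largest disagreement:", np.abs(diff).max(), "deg C")
    print("agree to within 0.01 deg C?", np.allclose(mine, giss, atol=0.01))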

    I’d never bother to debug someone else’s code. There are too many ways to make the code look wrong when in fact it’s right! And too many ways to make the code look right when it’s wrong. I’m not interested in disentangling someone else’s twisted logic, which very well might look right/wrong when in fact it’s wrong/right.

    If NASA wants to release the code, and people want to pore over it line-by-line, more power to ‘em. But I’d rather my tax dollars were spent funding real NASA research than preparing programs and scripts for public release, to feed a call for “openness” which is really more motivated by a desire to discredit than a desire to discover.

    As for actually reproducing the analysis from what is published, it’s a lot of grunt-work but really is not that complicated. Why haven’t the denialists done so? Here’s my theory: it involves a lot of actual work. You guys want to know whether NASA got it right? Get busy — put up or shut up.

    Comment by tamino — 14 Aug 2007 @ 10:24 AM

  229. Economics would have much less need to verify code correctness if its theory were ever allowed to meet observation.

    Comment by Goedel — 14 Aug 2007 @ 10:29 AM

  230. #226. Khebab, Could you show the global data before and after? That’s what’s important, after all.

    Comment by Rob Negrini — 14 Aug 2007 @ 10:33 AM

  231. Didn’t the “1998 is the warmest year” claim originate with the NOAA/WMO figures? That is, wasn’t it NOAA the media was citing all this time, not NASA/GISS? Have NOAA’s rankings of largest anomalies been affected here, or do they remain the same?

    [Response: Globally, all the indices showed that 1998 was a record breaker. In the GISS analysis, 2005 just pipped it to the post (as it does in NCDC product). For CRU, 1998 is still the warmest. The differences between the years are small, and the different products have slightly different rankings. Nothing from NOAA or CRU is affected by this correction to the GISTEMP analysis (and even there the global mean changes are too small to see). - gavin]

    Comment by Climate Tony — 14 Aug 2007 @ 10:40 AM

  232. Dr. Curry writes “Climateaudit has attracted a dedicated community of climateauditors, a few of whom are knowledgeable about statistics (the site also attracts ‘denialists’)” and “in the long run, the credibility of climate research will suffer if climate researchers don’t ‘take the high ground’ in engaging skeptics….”

    Mr. McIntyre and the “few … knowledgeable about statistics” are facing a choice Saul Alinsky has written about in “Rules for Radicals” — when you have a little success as a critic, you will need to decide whether you’re trying to improve the institution, or tear it down. You then choose either to stay with the people who got you to the gates, outside, or leave them to go inside.

    Dr. Curry’s good advice is the same advice she gave over at CA to the people about the kid whose high school paper chart error got pointed out last month, deflating her attack on Hansen — when someone points out an error, check it, fix it, and move on.

    A huge one-sided error like that high school kid’s graph blows the whole paper.
    Little errors, when those are caught, that improves the product.

    The successful ‘auditors’ who understand statistics are, in fact, improving the product. This can’t please the ‘denial’ crowd.

    Comment by Hank Roberts — 14 Aug 2007 @ 10:43 AM

  233. Re: #230 (Rob Negrini)

    You can see that comparison here.

    Comment by tamino — 14 Aug 2007 @ 10:47 AM

  234. Re: #227
    I’ve just added the chart.

    Comment by Khebab — 14 Aug 2007 @ 11:00 AM

  235. If anyone wants to go through the source program of the GISS climate model, God bless them! Below is a small sample of FORTRAN statements from ModelE taken from “Field Notes From A Catastrophe” by Elizabeth Kolbert (pp. 101-102)
    C**** COMPUTE THE AUTOCONVERSION RATE OF CLOUD WATER TO PRECIPITATION
    RHO=1.E5*PL(L)/(RGAS*TL(L))
    TEM=RHO*WMX(L)/(WCONST*FCLD+1.E-20)
    IF(LHX.EQ.LHS) TEM=RHO*WMX(L)/(WMUI*FCLD+1.E-20)
    TEM=TEM*TEMC
    IF(TEM.GT.10.) TEM=10.
    CM1=CMO
    IF(BANDF) CM1=CMO*CBF
    IF(LHX.EQ.LHS) CM1=CMO
    CM=CM1*(1.-1./EXP(TEM*TEM))+1.*100.*(PREBAR(L+1)+
    *PRECNVL(L+1)*BYDTsrc)
    IF(CM.GT.BYDTsrc) CM=BYDTsrc
    PREP(L)=WMX(L)*CM
    END IF
    C**** FORM CLOUDS ONLY IF RH GT RHOO
    219 IF(RH1(L).LT.RHOO(L)) GO TO 220
    C**** COMPUTE THE CONVERGENCE OF AVAILABLE LATENT HEAT
    SQ(L)=LHX*QSATL(L)*DQSATDT(TL(L),LHX)*BYSHA
    TEM=-LHX*DPDT(L)/PL(L)
    QCONV=LHX*AQ(L)-RH(L)*SQ(L)*SHA*PLK(L)*ATH(L)
    *-TEM*QSATL(L)*RH(L)
    IF(QCONV.LE.0.0.AND.WMX(L).LE.0.) GO TO 220
    C**** COMPUTE EVAPORATION OF RAIN WATER, ER
    RHN=RHF(L)
    IF(RHF(L).GT.RH(L)) RHN=RH(L)

    There must be untold thousands of lines of code. It takes the largest computers of today about a month for a single run to simulate 100 years of climate. It would be an enormous job for any individual, or even group of individuals.
    Another difficulty is that even though the components of the model that use the basic laws of physics are the same for different models, the ones that are parameterized differ from model to model.

    Comment by Lawrence Brown — 14 Aug 2007 @ 11:09 AM

  236. Link to “the 2001 paper describing the GISTEMP methodology” is broken. Climate Audit has been sabotaged. An act of Gaia?

    [Response: link fixed... thanks. - gavin]

    Comment by barry — 14 Aug 2007 @ 11:12 AM

  237. When 5 of the top 10 hottest years were within the last decade, people worried. Now only 4 have happened since the 80s. again, the perception of a problem has diminished significantly.

    Your criticism of climate science would be more persuasive if you weren’t wrong yourself.

    5 of the top 10 hottest years GLOBALLY are still in the last decade. Which has been the claim all along. As has been pointed out, the “G” in “AGW” doesn’t stand for “The Lower 48″.

    Sheesh.

    Comment by dhogaza — 14 Aug 2007 @ 11:18 AM

  238. Skeptics/deniers often claim (like at Watts Up With That?) that climate scientists (like Hansen) won’t release their algorithms/SW (you should see a joke in there…). I doubt that, and if it isn’t so, where can I find evidence of the algorithms/SW to show around?

    Comment by Neil B. — 14 Aug 2007 @ 11:54 AM

  239. In response to Steve Mosher’s #156

    I wrote:

    As such, they are responding to sources of contamination which may not have been regarded as that important – prior to 1929. Not only does this have little to do with current trends, but it demonstrates a concern for accuracy. And as a matter of fact, the elimination of such contamination tends to reduce the apparent rise in temperatures over the twentieth century rather than magnify it.

    steven mosher (#156) responded:

    Actually the don’t show contaimination they Hypthesize it to explain the record. They took cooling out

    My main issue with that paragraph in Hansen 2001 is the lack of supporting data and analysis in the text or figures. If you like I will detail all the missing parts. But look for yourself. Find the ANALYSIS in the text. Not a description of the analysis. Simple example: which sites in the “region” were the 5 sites in question compared to?

    Looking at the raw data for Lake Spaulding, you can most definitely see what would appear suspect: it goes from roughly 11.75 to 7.25 within the span of approximately ten years – very early on in the records. That is suspicious. It is particularly suspicious when you compare it with the neighboring rural stations.

    Which stations?

    This will give you a list:

    http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?lat=39.32&lon=-120.63&datatype=gistemp&data_set=0

    There are four rural neighboring stations within 71 miles.

    Here is a chart of the raw data:

    http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425745010040&data_set=0&num_neighbors=1

    You can compare it with the neighboring rural stations:

    http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425745010040&data_set=0&num_neighbors=3

    One station doesn’t go back that far. Another shows almost comparable variability – though not enough to result in such a distortion – and the nearest neighbor shows nothing comparable.

    Additionally, he does not simply posit contamination, but points out that we know in fact that such contamination took place in those years – and identifies the known potential sources. Can we say that these stations were affected in this way? Of course not. If we knew the exact way in which they had been “contaminated,” we could correct for this. But since we do not have this information, the uncertainty of the effect combined with the evidence that such an effect may have played a part in the temperature records is enough to throw out those years of data.

    More importantly, the reason for dismissing the earliest data (at least in the case of the station that I investigated) is that the extreme variability at the earliest point was inconsistent with that of other stations and unrepresentative of any long term trend. I believe you will find that the same is true of the other stations if you are inclined to investigate as I have done. And as he said, the distortion would have been transmitted to the urban stations as the result of their methodology.

    *

    In any case, this is very early on in the temperature record. What we are concerned with when it comes to global warming is principally the period from 1978 forward. An adjustment of this sort would presumably require more justification during this period – particularly since we have better kept records and more rigorous procedures in place.

    I wrote:

    In an earlier thread you claimed to be interested in accuracy and were attacking urban sites for their presumed contamination by the urban heat island effect. We of course pointed out that you get virtually the same warming trend whether you use all stations or just rural stations – which you didn’t even seem to acknowledge.

    Steven Mosher wrote:

    Let me schematize the UHI arguement for you using Peterson 2003, which Hansen cites ( as submitted) and which Parker quotes.

    1. UHI exists: ( I’ll link studies if you like),
    but see Petersons FIRST SENTENCE.
    2. We expect to see differences between Rural and Urban stations.
    3. These differences are NOT observed ( peterson, Parker, Hansen)in the climate network
    4. THEREFORE, Urban stations must be well sited in COOL PARKS.

    No need to link to the studies – I found them myself. Finding them takes only a few minutes, but the reading takes a bit longer.

    Assessment of Urban Versus Rural In Situ Surface Temperatures in the Contiguous
    United States: No Difference Found
    THOMAS C. PETERSON
    VOL. 16, NO. 18 Journal of Climate 15 SEPTEMBER 2003
    http://www.ncdc.noaa.gov/oa/wmo/ccl/rural-urban.pdf

    Steven Mosher wrote:

    From Parker:
    Furthermore, Peterson (2003) found no statistically significant impact of urbanization in an analysis of 289 stations in 40 clusters in the contiguous United States, after the influences of elevation, latitude, time of observation, and instrumentation had been accounted for. One possible reason for this finding was that many “urban” observations are likely to be made in cool parks, to conform to standards for siting of stations.

    That is a very brief synopsis – enough to give the reader some idea of what is in a well-known paper – which they should probably read anyway, since in my estimation it is an excellent example of what research should be like.

    Peterson goes through quite some history in terms of the studies which were performed prior to his study and lists the various factors the authors failed to take into account. There are uncertainties, and he acknowledges them. As far as the various factors which would influence station temperatures are concerned, he listed what the major ones were and the empirically determined adjustments that these would require – as identified by past literature – and then filtered out those effects.

    I quote:

    d. The approach used in this analysis

    The main insight from the literature review is that most assessments of urban heat island contamination do not rigorously deal with potential inhomogeneities in the data. When inhomogeneities have not been fully dealt with, it is impossible to have confidence that the analyses correctly determined the impact of urbanization on the temperature record. However, when adjusting long time series for inhomogeneities due to factors such as station moves and changes in observing practices, the effect of urbanization may be inadvertently compensated for as well. Thus, it is doubtful that these two intertwined issues can ever be 100% successfully separated. The work presented here, therefore, will not look at differences in trends. Instead, the approach used will evaluate the effect of urban warming in a subset of the U.S. network by comparing temperatures of nearby rural and urban stations.

    …However, one can adjust the data to account for some natural and most artificial inhomogeneities. Specifically, careful attention will be paid to adjusting the data to account for the natural effects due to differences in elevation and latitude as well as the artificial effects due to differences in time of observations, differences in instrumentation, and the effects of non-standard siting practices, namely, rooftop installations. Once the data are adjusted for these factors, it will be possible to accurately assess the impact of urbanization on the climate record.

    A detailed analysis of each of these factors as well as explanation of the manner in which such factors may be expected to affect the readings is given. Additional studies are cited for each of these effects. Progress, building upon earlier, more well-established work. The residual would be what is left over after all other factors are accounted for. This is what is attributed to the urban heat island effect: 0.04 C. It is not statistically significant. Once results were obtained, additional tests were performed for robustness.

    You wrote:

    SO, you get the argument: we expect a difference, we find no difference, THEREFORE urban sites are in cool parks.

    Peterson suggests that there are a variety of well-known reasons why the urban heat island effect is relatively insignificant. Park cool island effects are certainly one of these. Another potential mitigating factor will be bodies of water – since urban sites are more likely to be near bodies of water. Clouds are more likely to form over urban areas as the result of urban heat, but in the process they are more likely to shade those areas where sites are located, reducing the effects of solar radiation.

    Simple Question: How do you test this last part?
    You look. Go to Surfacestations. Look at Tuscon ( parking lot) Eureka ( on a roof) Santa Rosa ( on a roof)

    How do you test for a strawman or cherry-picking? Park cool islands are just one of the well-known effects which may account for the unimportance of urban heat islands. Besides, a photo won’t pick up the thermal structure, but this is precisely what we are interested in, as heat cells are what determine the boundaries which isolate a park cool island from its surrounding environment.

    Peterson states:

    The gradients of temperature within a city can be quite steep. Examining UHI using a radiosonde mounted to a car, Klysik and Fortuniak (1999) found ”permanent existence of heat cells” during the night in which ”each housing estate placed on the outskirts of the city distinguished itself very sharply from surroundings in terms of its thermal structure. Open areas (gardens, parks, railway yards) were then sharply separated regions of cold air. Thermal contrast at the border between the housing estates and the fields covered with snow (horizontal gradients of temperature) reached several degrees centigrade per 100 m.”

    Likewise, as has been pointed out to you on a rather large number of occasions:

    http://www.realclimate.org/index.php/archives/2007/07/no-man-is-an-urban-heat-island/

    … a site which is in a poor location will not produce a trend. A site where an air conditioner is installed beside it at some point will not produce a trend – but a jump. These are things which would be picked up by statistical analysis – but they would not produce a trend. As has also been pointed out to you on numerous occasions, we get virtually the same trend whether we are using all stations or simply rural stations. We get quite similar trends whether we are using surface stations or the lower troposphere. The lower troposphere has higher variability, but shows essentially the same trend.
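
    A toy illustration of that jump-versus-trend point, using a synthetic series rather than any actual station data:

    import numpy as np

    # Flat series with noise, plus a 1 deg C jump at year 30 (say, an air
    # conditioner installed next to the sensor).
    rng = np.random.default_rng(0)
    years = np.arange(60)
    temps = 0.1 * rng.standard_normal(60)
    temps[30:] += 1.0

    # Crude step-change scan: find the split that maximizes the mean difference.
    splits = np.arange(5, 55)
    jumps = [abs(temps[s:].mean() - temps[:s].mean()) for s in splits]
    best = splits[int(np.argmax(jumps))]
    print("detected jump near year", best)

    # The within-segment slopes stay near zero even though the series jumped.
    print(np.polyfit(years[:best], temps[:best], 1)[0])
    print(np.polyfit(years[best:], temps[best:], 1)[0])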

    Steven Mosher wrote:

    Now, I would not call the matter closed. It’s never closed. If you like pick up a camera and go find a cool park site.

    So I have noticed.

    Comment by Timothy Chase — 14 Aug 2007 @ 12:30 PM

  240. This is a response to Steven Mosher’s #157. My apologies at the outset, but this will include some philosophy, although principally in defense of the contextual, fallibilistic, self-correcting approach of empirical science – as Steven Mosher’s position is fundamentally opposed not merely to the approach of climatology but to that of science. However, I doubt that I will ever have to deal with anything of this nature again, at least in this forum.

    Please feel free to pass over it if this is not something which interests you. However, if you are inclined to read it, I have tried to make this easier by dividing it into titled sections.

    *

    Duhem and Quine

    I had written:

    Sounds like Quine to me.

    Actually he’s not all bad – at least the bits he gets from Pierre Duhem.

    Steven Mosher responded:

    Really? Quine’s Two Dogmas was published in 1951 and Duhems work was not allowed to be published until 1954.
    ZING!

    You are speaking of “The Aim and Structure of Physical Theory,” but this is an English translation of the book “La Theorie physique: son objet et sa structure,” which was published in 1906, although the original article which expressed the central insight was published as an essay in 1892 as “Quelques reflexions au sujet des theories physiques.”

    In all honesty, I couldn’t remember after all these years whether it was 1892 or 1893, so I had to look it up. But in an essay of my own – one which critiqued early twentieth century empiricism, including logical positivism in all its major forms (first individually and then as a whole, by means of self-referential argumentation), operationism, operationalism, etc., and finally offered a critique of the analytic/synthetic dichotomy by means of self-referential argumentation – I referred to it in a critique of Karl Popper’s principle of falsifiability and expounded roughly the same argument.

    As I have said, I was a philosophy major.

    *

    The Fatal Flaw in Relativism and Radical Skepticism

    I had written:

    Now what about the claim that the world is at least five seconds old. I certainly think it is, but you may regard this as nothing more than a “theory” which is “underdetermined” by the “data,” where the data consists of the experience “memories” and anything else which I might claim as evidence for an older world. But at this point we aren’t even talking about observation per se – we are talking about memory.

    You responded:

    Quine would say it’s a theory. A well supported theory. One that would be very complicated to give up.

    Since he is a strong coherentist, he would argue that even such distinctions as those between identification and evaluation, perception and emotion, subject and object are not ultimately based in observation, but are simply part of a web of belief in which even the law of identity is a subjective preference of sorts. His argument is essentially that in dealing with quantum mechanics, one may choose to employ an alternate logic. This much is true – as such an alternate could very well offer one greater theoretical economy.

    But from this he concludes that it may be appropriate to abandon even at root the law of identity itself. As such, his “pluralistic realism” is fundamentally a form of extreme relativism. This however runs into a problem in that for an alternate logic to be regarded as an alternative to conventional logic, it must be internally consistent, but one must presuppose conventional logic and the law of identity in some form simply in order to test for internal consistency.

    Any physical theory which is internally inconsistent would be logically untenable. Even in the case of quantum mechanics, one would have to abandon the theory if it proved internally inconsistent. However, assuming quantum mechanics is tenable, which is something which I believe we could both agree upon for our present purposes, it must be internally consistent.

    Quantum mechanics may be counterintuitive – and it is.

    However, for it to make specific predictions, even if they are expressed only in probabilistic form, it must be internally consistent – otherwise, as a matter of logic it would be possible to derive any contradictory set of predictions, in which case it would be untestable. Given the need for internal consistency, the law of identity is at root unavoidable, although one may in logic choose a formalism which expresses itself in some alternate formulation of it.

    I will also point out that extreme relativism presupposes a form of radical skepticism in which the world that exists independently of our awareness of it is fundamentally unknowable. It should be clear given your statements regarding Quine’s theory that this applies to Quine as well. However, extreme skepticism is self-referentially incoherent.

    This means that it has a fatal flaw which is closely related to the internal inconsistency which results from stating “I know that there is no knowledge.” This problem lies in the fact that when one asserts that radical skepticism is true, implicit in one’s assertion is the assertion that it is something which one knows. As such, there is a fundamental, fatal internal inconsistency implicit in its affirmation which renders it entirely untenable.

    *

    The Purpose of this Forum

    Now I should quickly point out that the above is merely a brief summary. To properly deal with the issues which I have mentioned above at the level of technical philosophy would take a great deal longer. But this is not a graduate level philosophy course. This is a forum devoted to climatology. For this reason, I will not go into any more depth on these issues than I have treated them here. Duhem’s thesis could very well be a different issue as it deals with the interdependence of scientific knowledge, but since it is already something that we both agree upon I believe that is unnecessary.

    *

    The Nature of Science

    Now with regard to climatology, no doubt you will point out that in my previous post it became quite clear that in arriving at any of the conclusions of the authors, one had to presuppose a fairly extensive background of assumptions – in very large part the conclusions of earlier papers. However, it should also be clear that this is how science works. It is cumulative.

    Even when an earlier theory is superseded by a more advanced theory, however much the form in which the more advanced theory is expressed differs from the earlier theory, it must be consistent with the evidence which formed the basis for the acceptance of the earlier theory. The form changes, but much of the substance is preserved.

    This forms the basis for the correspondence principle which you are no doubt aware of. As such, there is nothing invalid in the cumulative nature of climatology. Additionally, it is clearly fallibilistic. We will make mistakes. But given the systemic nature of human knowledge and empirical science, we can expect to uncover our mistakes in time.

    There are degrees of justification, and where a given conclusion is justified by multiple independent lines of investigation, the justification that it receives is often far greater than the justification that it would receive from any one line of investigation considered in isolation. This applies to all empirical science. As such, the fact that its methods are fallible and its conclusions fail to achieve Cartesian certainty cannot be held against it any more than it could be held against any empirical proposition.

    *

    Given the preceding sections, I believe we can skip much of the rest which follows, at least those sections dealing with issues in philosophy and the philosophy of science.

    *
    You state:

    “You on the other hand are still stuck on (1).” Not dogmatically denying it, I understand, but simply doubting it with your healthy, “scientific” skepticism.”

    Stuck on #1. Presently I am looking at #1. Have to start somewhere. Now, force me to decide, and I will say, Yes, the earth is probably getting warmer. That “probably” needs to be quantified and independantly confirmed and it’s magnatude estimated.

    We will return to some of this in a moment.

    But how much is to be determined by means of scientific investigation, and the results of such an investigation are to be regarded as our best estimate until further scientific investigation determines otherwise. It is not something to be decided by means of philosophy, word games or ideology.

    So in the interest of science, lets look at the evidence:

    1. We have surface measurements in the United States which show an accelerating trend towards higher temperatures.”

    Hmm. Which ten year trend shows a higher rate: 1997-2006 ( last ten) or say (1927-1936)?

    Statistically, since 1978 they have been increasing worldwide according to virtually every study which has investigated it. Likewise, the global temperature was considerably higher in 1998. This has been sufficient for every major scientific body which has taken a position on climate change to acknowledge that it exists and for mainstream science to acknowledge that by any rational human standard it is quite dangerous. Word games at this point will, by our best understanding, endanger a great many people, the world economy and quite possibly even more.

    Lets start with that one. I’ll address the other 19 in due course, but first things first. The simple task of measuring air temps.

    We have dealt with that at length – in the thread:

    http://www.realclimate.org/index.php/archives/2007/07/no-man-is-an-urban-heat-island/

    Now returning to your earlier statement, you begin by quoting me and then respond:

    “You on the other hand are still stuck on (1).” Not dogmatically denying it, I understand, but simply doubting it with your healthy, “scientific” skepticism.”

    Stuck on #1. Presently I am looking at #1. Have to start somewhere.

    But this is not what has been suggested ever since you began participating in the debate here.

    From your previous post:

    Now, I would not call the matter closed. It’s never closed. If you like pick up a camera and go find a cool park site.

    This has clearly been your attitude since you first arrived – which is in line with the radical skepticism that I have dealt with above.

    *

    Your Motivation

    Previously I had stated that the active skeptics with regard to anthropogenic global warming were generally either motivated by a misguided concern for the economy, financial concerns or ideology. The last of these was exemplified by you and your interaction with Ray Ladbury. At the time I did not know what your ideology was, but it was obvious that you are fairly intelligent and that your motivation has nothing to do with a concern for the genuine facts of the matter.

    As I like to know who I am dealing with, I decided to do some digging.

    Within five minutes, I found that you are the president of the Population Research Institute, a spinoff of Human Life International, a pro-life organization. You advocate population growth, as you view any attempt at zero population growth as being contrary to your pro-life stand.

    Here is the evidence for your position as president of PRI:

    An Interview with Steven W. Mosher, President of the Population Research Institute
    By John Mallon
    http://www.pop.org/main.cfm?id=151&r1=10.00&r2=1.00&r3=0&r4=0&level=2&eid=678

    Here is the logic of your ideological position against the acknowledgement of what your organization views as the environmentalist nature of climatology – as expressed by your vice president:

    300 Million and the Environment
    Friday, October 20, 2006
    By Joseph A. D’Agostino
    http://theologyofthebody.blogspot.com/2006/10/300-million-and-environment.html

    Now I do not care to debate ideology with you. However, your ideology is irrelevant to climatology and your approach is fundamentally anti-science. You will not be swayed by any evidence or argumentation.

    We have no further reason to debate you.

    Comment by Timothy Chase — 14 Aug 2007 @ 12:48 PM

  241. “If anyone wants to go through the source program of the GISS climate model, God bless them!”

    Sure, I’ll go through it, and I suspect many others in the ‘open source’ SW community would jump at the chance to help others utilize this.

    Where is it available?

    [Response: http://www.giss.nasa.gov/tools/modelE - this is the code that was run for the AR4 simulations, and so is now a little out of date (Feb 2004) - thus any issues are likely to have been fixed in our current version. But it gives you a feel for what the models are like and how they run. I plan on putting up the latest version at some point in the near future. - gavin]

    Comment by Patrick — 14 Aug 2007 @ 12:56 PM

  242. > #20 Niller writes:

    2) The North Pole is melting so that there will soon be a North-West Passage to which Canada is laying claims.

    Your information is wrong. The Northwest passage is already open for commercial purposes.

    The open, peer-reviewed journal Science reported that in 1999 the Northwest passage was already open:

    “In 1999, Russian companies sent two huge dry docks to the Bahamas through the usually unnavigable Northwest Passage, which winds through the labyrinthine…”

    This is old news in the open, world-wide, peer-reviewed scientific community.

    http://www.sciencemag.org/cgi/content/summary/291/5503/424

    http://www.google.com/search?hl=en&q=1999+russians+northwest+passage+bahamas&btnG=Google+Search

    Comment by Richard Ordway — 14 Aug 2007 @ 1:11 PM

  243. This general reader found Gavin’s comments dead on. The only point of vulnerability seems to be the danger that non-scientists, like McIntyre, can animate the boob circus masters to work through the political system and fasten a choke hold on a field of science. You can see what happens then if you review the grim story of Bush administration interference and outright sabotage of science for political ends in the blog by Rick Piltz, Climate Science Watch.
    I suspect it will not be enough to gather round the water hole from time to time and agree on how unfair all this is, and perhaps the scientists in this field, climate science, who will always be working with close approximations and arbitrary bits here and there, will need to set up a self-defence organization. What will they need to defend? Their right to work the science as they please, not as meddling strangers want to control them to do, which would strangle the science. Perhaps it is turning out that Mann had exactly the right way to handle the non-scientific opposition: stiff them, knock them down, give them a few more kicks. Like Terry O’Reilly played hockey – always finish a check.
    The Union of Concerned Scientists seems to have impact, and you can see they use some of the weapons of the system itself.

    Comment by garhane — 14 Aug 2007 @ 1:13 PM

  244. Re: #225, and Gavin’s response,

    As a software engineer myself, I can see where Steve is coming from. Software is the analytical tool by which these scientific conclusions are reached. As I recall from reading the history of the development of computers in climate science, it was not possible to crunch the numbers faster than the time span being measured until the advent of fast computers. This means that these computers, and the software which runs on them, are the tools.

    In fact, to go Steve one better: it matters on what hardware the simulation was run, and it matters what the OS was, and it matters what the release level was, and it matters what else was on that computer. Any of these things could cause corruption.

    As Steve points out, this is about policy-level decision making, and I can assure you that this member of the public seeks the highest possible standards from science which has such potential to define economic paradigms far into the future. Probably the future of the nation state depends on how well nations organize to confront AGW.

    So, Gavin, there is just a little import to these issues. I’m sure you’d agree. And while your point is valid that good science means doing it yourself and seeing what happens, Steve is also right that controversy (“my results are different than yours”) only leads us back to where we are. In other words, audit-as-you-go is a very sensible approach.

    I disagree with Steve on one point: the auditing can, and probably should, be in-house. It will take considerable understanding of the processes involved to fairly audit them. However, the auditor needs to have broader knowledge than the climate scientists and engineers in a few areas: 1) he/she must be well versed in the issues I cited above regarding hardware and OS; 2) he/she must have a strong background in statistics and other forms of analysis. It would be mighty helpful if this person or persons had previous software auditing experience.

    The sad truth is that there is no alternative. Gavin, your approach leads us here: there could be an error in both your process and somebody else’s; verifying them against each other will not reveal the error. Or: your results are different than somebody else’s, but he is poorly funded and has little support, and his concerns are dismissed. Or: Nobody has the time or money to replicate your gigantic simulations, so there is nothing with which to compare your results.

    Unless you have some other, heretofore unknown empirical method for validating the results of your simulations, then I side with Steve: your processes must be thoroughly and competently audited, on an ongoing basis.

    The work you do is just too important.

    Comment by Walt Bennett — 14 Aug 2007 @ 1:34 PM

  245. RE #225 & when results are used for policy purposes, people expect different forms of due diligence than little academic papers, which have received only the minimal due diligence of a typical journal review. People are entitled to engineering-level and accounting-level due diligence.

    I don’t know much about this discussion, the codes & data sets, but I think there is a big difference between balancing the account books and addressing climate change so that we avert harms.

    I remember how, as a bookkeeper many years ago (before they used computers for it), I sweated bullets trying to find that 1 cent mistake I had made. I couldn’t just give them a penny; I had to find the mistake and correct it.

    However, there may be some parallels with engineering a bridge so that it is strong enough to withstand maximum traffic load under worst-case conditions, except that with global warming we’re not talking about a bridge load of cars crashing into a river, but the possibility of huge problems around the world that may harm millions or even billions of people over the long time the CO2 is in the atmosphere (maybe up to 100,000 years even — see http://www.realclimate.org/index.php/archives/2005/03/how-long-will-global-warming-last/).

    In that situation, we need to err on the side of avoiding global warming and its harms, rather than on the side of helping fossil fuel companies that refuse to diversify.

    As I’ve mentioned on this site many times, mitigating GW can be done to a large extent cost-effectively, and the potential harms are so great, that we really don’t need much evidence global warming is happening and is or will be harmful. Scientific standards of .05 significance are way too stringent. Even if there is less than a 50/50 chance the climate scientists are right that global warming is happening and is or will be harmful, we need to mitigate the problem. That should be the policy of a moral, even self-interested society. Anything else is tantamount to dereliction of the policy-maker’s duty. And it means we shouldn’t even be having debates about policy at all. It should have been our policy to mitigate GW with all our effort since at least 1990 — 5 years before the 1st scientific studies reached .05 on GW.

    But here’s an idea. Rather than spend a lot of time and expense trying to find the mistakes of climatologists (I think they find each other’s mistakes — and get feathers in their caps when they find some gross ones that really matter), those who question the climate scientists’ accuracy and conclusions could really mess them up by getting the whole world to substantially mitigate GW. That would mess up their data sets. They wouldn’t have any increasing CO2, and if they are right that the temps rise with carbon, then they wouldn’t have any rising temps either (allowing for some lag time to pass). And then what could their codes do about it, without such great data?

    Comment by Lynn Vincentnathan — 14 Aug 2007 @ 1:52 PM

  246. re: #241 Patrick
    Great! [although it's a little strange that anyone really interested in this didn't know where the code was: SOFTWARE is one of the top-level menu items on the GISS website], but when people make a big effort to provide well-documented, relatively portable code, it’s nice if people read it.

    This is actually an interesting test case:
    a) How about describing your background, i.e., familiarity with F90, OpenMP, and the relevant physics? What target machine(s) and software will you be using? (likewise for other people you get interested in this).

    b) How about a status report, say once a week? Tell us how far you’ve gotten, and how much effort it took?

    [These are not new questions, I've often asked these of people using software I'd written, to get feedback about match with intended audience.]

    Comment by John Mashey — 14 Aug 2007 @ 1:55 PM

  247. What effect did the drought and lack of irrigation have on the 1934 average temperature, as compared with more recent years where, even during a drought, millions of acres are irrigated and potentially reduce the extreme high temps through evaporation?

    Comment by Ray — 14 Aug 2007 @ 2:58 PM

  248. In 241 Patrick is willing to go through the program, and even sounds eager to do so. As Gavin points out in one of his responses above, Model E contains about 100,000 lines of code. Back in the stone age of computers, I carried boxes of punch cards for both programs and data to a CDC 6600 located at a local university. If there was a bug in the program, it was a painstaking process to locate and correct it. And these were programs with far fewer statements (by a factor of 100).
    If folks are willing to go through it, to search for discrepancies, my hat’s off to them.

    Comment by Lawrence Brown — 14 Aug 2007 @ 3:33 PM

  249. Re: #247
    Interesting point.

    One may wish to consider the effect of irrigation on the Ogallala Aquifer:
    http://www.waterencyclopedia.com/Oc-Po/Ogallala-Aquifer.html
    http://en.wikipedia.org/wiki/Ogallala_Aquifer

    Comment by John Mashey — 14 Aug 2007 @ 4:39 PM

  250. Rod asks about the effect of irrigation. Google Scholar has plenty of info, try a search or two. For example:
    http://www.agu.org/cgi-bin/wais?hh=A41E-0074

    Comment by Hank Roberts — 14 Aug 2007 @ 5:24 PM

  251. Tamino, your claim in #9 that it didn’t “cool” between 1940 and 1975 is incorrect. It’s not much, but if you chart the global mean at NOAA’s GCAG, you’ll see the anomaly went from slightly under 0 to about -.1 C.

    It’s not much “cooling”, sure. The trend is only -.01 C a decade and the significance is only 82%. But it did go down.

    (GISS analysis using either 51-80, 61-90, or 71-00 as the base shows -.14 – which slightly confuses me as to why changing the base doesn’t affect the anomaly.)
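
    A quick numerical check of the base-period question — re-baselining shifts every anomaly by the same constant, so a fitted trend cannot change; the temperature series and base windows below are made up purely for illustration:

    import numpy as np

    years = np.arange(1940, 1976)
    temps = 14.0 - 0.003 * (years - 1940) + 0.05 * np.sin(years)  # fake data

    # Anomalies relative to two different (made-up) base periods
    base_a = temps[(years >= 1951) & (years <= 1970)].mean()
    base_b = temps[(years >= 1961) & (years <= 1975)].mean()

    trend_a = np.polyfit(years, temps - base_a, 1)[0]
    trend_b = np.polyfit(years, temps - base_b, 1)[0]
    print(trend_a, trend_b)   # identical slopes; only the offset differs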

    But I did notice that you saw the cooling trend for 1945-75 later, in #197. And you make a good point about the software in #228, which makes all this discussion before and after about “the code” rather perplexing.

    ———————-

    Gavin, I understand your point in #188 perfectly; that’s a great way to explain it. I wish you’d put it that way in #43 :) So let’s just give all that junk to those asking for it and we don’t have to spend any more time on it. If that’s how it’s all used, just give it out that same way and be done with it. I do disagree a bit with #189, at least if Dr. McIntyre has a reason to say he doesn’t fully understand it, which I have no reason to doubt. If somebody like him, with as much insane detail as he goes into statistically on his site, can’t figure it all out, who can? For example, your response to #211: if it’s just “a couple of pages of MatLab”, why is all this debate going on? Or is he just complaining for no reason? I don’t see that. Plus, as he later points out, it’s not the original stuff anyway if he “reproduces it”. Why reinvent the wheel? Although you do make good points about balancing against the methods that other adjustment schemes use, showing that the result is probably stable.

    I commend you for the link to the references, but many of those papers and the papers they reference are not linked or available. Which ones have the algorithms in them is also rather difficult to tell. And as John N. pointed out in #75, it’s not all that detailed; I don’t see it. I think the point of all this is that it isn’t up to those who want to independently validate everything to demonstrate a need; rather, it’s up to those being audited to assist. I suppose that’s what the argument is about. That, and “read all the papers” isn’t a very satisfying answer to a lot of this.

    And your response to #195… Perhaps others have tried to “replicate the analysis from the published descriptions”, and it’s more difficult or less complete than you’re making it out to be? (I’m just asking if you’ve considered that as well as you could have….) And in #196, the link to Eli’s blog and the discussion there shows there is an issue where “both sides” are sure they have valid points. I don’t think the views are mutually exclusive.

    But again, I believe the disagreement is over whether “start Reading The Fine RFCs, all the information you need is there” is a valid answer or not. I’m sure some specifics might clear this up, so now we’re back to “just give all of it and let’s stop pointlessly debating matters of opinion”. Your discussion with John N. and your reply in #205 are prime evidence of the conflicting viewpoints, and your response to Steve M. in #206… Um, I believe he both has and hasn’t gotten an answer. And the response in #208 seems rather beside the point that was being made. And so on, further down the line with the other back-and-forth comments.

    Thanks.

    ———————-

    Dylan in #50, if you actually read what Dr. McIntyre says on his blog and the subjects he’s interested in talking about (mainstream accepted scientific literature), that’s been the point all along; it’s not what the data shows, it’s making sure it’s correct by independently validating it. That’s what confuses many of us; the audits could show more warming than previously thought, so why do so many attribute other motives to what’s just a bunch of data collection and validation?

    ———————-

    Patrick, #51, I certainly agree with you.
    Good point in #144 FCH.

    ———————-

    But John M., in #53, those are the GCMs. (Later discussion from #240 on starts talking about the GCMs again…). But the adjustment software is not there. Still, good point!!!

    Instead of arguing about .1C or .003C or whatever, or complaining about doing station surveys, why don’t we all work on doubling or tripling their budget? (Perhaps more support would occur if the results could easily be replicated and independently verified?) And in #138, the point is that this is a publicly funded organization; the citizens own that code. Whether somebody has experience coding or not is immaterial. If it’s copyrighted or purchased software, why not just say that? If it’s a mess and not one package, why not just give it up? What’s the big deal about giving a researcher code that exists only to adjust the data?
    Certainly it has limited use to anyone else. Or how about a published paper with instructions and/or algorithms detailed “all in one place” instead, then? All the pushback or ignoring of the subject confuses some of us. On the other hand, your comments in #210 strike me as somewhat like Gavin’s in #208. As I said above, I think the issue of “serious software engineering and what it costs” is beside the point: if all the materials are little tidbits and expensive to put together, give them out as is! “Prove there’s a need” is not really a good answer.

    shrug

    ———————-

    That is a good point in #155, David. I knew a person with a doctorate in physics who did a lot of complicated analytical coding in C, but wasn’t a programmer. I understood the basic flow, but the code was a mess. That’s aside from the fact that I didn’t understand the math itself, which was the bulk of everything. :)

    ———————-

    All other discussion of other issues, or quibbling about details, detracts from what should be the conversation and goals. One such distraction is trying to compare periods of less than 15-30 years (like 2002-2006). We should be talking the way Gavin did in the note in #87 about US temp trends (“For the more recent period 1975-2006: 0.3 +/- 0.16 deg C/dec”). My take on it all is “So 1934 and 1998 are the same anomaly for the US. So what.”

    My question: is the base period still 1961-1990 instead of 1971-2000? And does it matter, and why or why not?
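    (A toy illustration of the base-period question, using made-up numbers rather than GISS data: changing the base period shifts every anomaly by the same constant, so trends and year-to-year differences don’t change; only the zero line does.)

    import numpy as np

    # Made-up annual mean temperatures in deg C -- NOT real GISS or NOAA data.
    years = np.arange(1950, 2007)
    temps = 14.0 + 0.01 * (years - 1950) + np.random.default_rng(1).normal(0.0, 0.1, years.size)

    def anomalies(base_start, base_end):
        """Anomalies relative to the mean over the base period [base_start, base_end]."""
        base = temps[(years >= base_start) & (years <= base_end)].mean()
        return temps - base

    a6190 = anomalies(1961, 1990)   # 1961-1990 base
    a7100 = anomalies(1971, 2000)   # 1971-2000 base

    # The two series differ only by a single constant offset...
    print("offset is constant:", np.ptp(a6190 - a7100) < 1e-12)
    # ...so fitted trends are identical either way.
    print("trend, 61-90 base: %+.4f /yr" % np.polyfit(years, a6190, 1)[0])
    print("trend, 71-00 base: %+.4f /yr" % np.polyfit(years, a7100, 1)[0])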

    Comment by Hal P. Jones — 14 Aug 2007 @ 5:34 PM

  252. Re: #251 (Hal P. Jones)

    We’ve been through this before, on another thread. But it’s an important distinction, so here goes …

    I don’t dispute that 1975 was slightly cooler than 1940. I disputed the claim that “1940-1975 the temperature was falling.” I often hear it said that mid-century we experienced 30 or more years of cooling, and frankly, it just ain’t so.

    Fit a trend line 1940 to 1975; the slope is slightly negative (cooling). Now fit a trend line 1950 to 1975. The slope is slightly positive (warming). And that is *not* 30+ years of cooling. It’s more correct to say that it cooled from about 1944 to 1951 (7 years), then levelled off for 24 years.
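    (For anyone who wants to check this, here is a minimal sketch, using a synthetic anomaly series with roughly that shape rather than the actual GISS or NOAA record, of fitting the two trend lines by ordinary least squares:)

    import numpy as np

    # Synthetic annual anomalies, deg C: level through the mid-1940s, a dip to
    # about 1951, then roughly level, plus noise.  Placeholder data only.
    years = np.arange(1940, 1976)
    anoms = np.where(years <= 1951, 0.10 - 0.02 * np.clip(years - 1944, 0, None), -0.04)
    anoms = anoms + np.random.default_rng(0).normal(0.0, 0.06, years.size)

    def trend_per_decade(y0, y1):
        """OLS slope over the inclusive year range [y0, y1], in deg C per decade."""
        m = (years >= y0) & (years <= y1)
        return 10.0 * np.polyfit(years[m], anoms[m], 1)[0]

    print("1940-1975: %+.3f deg C/decade" % trend_per_decade(1940, 1975))
    print("1950-1975: %+.3f deg C/decade" % trend_per_decade(1950, 1975))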

    Then there’s the fact that neither the cooling 1940-1975 nor the warming 1950-1975 is statistically significant. It’s not cooling or warming; it’s fluctuating.

    The oft-repeated claim of 30+ years of global cooling mid-century is excellent fodder for denialist propaganda. But it’s simply not true.

    Comment by tamino — 14 Aug 2007 @ 6:31 PM

  253. # 251 Hal

    Somehow the message isn’t getting through.
    How much computer-based scientific research & software engineering do you do? I’m happy to debate with people with relevant experience [as always, reasonable people can differ], but when people essentially keep insisting things are free, it goes rather contrary to a *lot* of experience.

    My concern is to *cost-effectively* help good science happen, and I want my tax dollars to be used well. I’d be delighted to see GISS budget doubled or tripled … and I’d bet we’d see more things on websites, but I’d trust GISS to figure that out, given that they do better than many, from what I can see.

    Yes, 2X-3X more budget … but no strings. I wouldn’t insist on procedures designed to slow down research, like those I mentioned in #210. PLEASE go read some of the things I pointed to there.

    Comment by John Mashey — 14 Aug 2007 @ 6:41 PM

  254. John Mashey, thanks for #210. This is really cautionary:
    http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1446868

    Comment by Hank Roberts — 14 Aug 2007 @ 6:54 PM

  255. Thanks Tamino. I think the issue here is one of phrasing. The proper way to say that is “The trend from 1940-1975 went in a negative direction.” And if we talk about “The trend from 1880-2006 went in a positive direction” that’s fine too.

    Sure, I don’t disagree that 7 years of falling coupled with 24 years of being steady can be described in a great many ways. My point is that choosing your measurement period can allow anything to be shown.

    My outlook is that if we talk about what it’s doing at some arbitrary point (pick one: 1893-1900, 1900-1909, 1929-1944, 1944-1976, 1992-1998, 1992-2006, or make up your own) as being the greatest rise/fall of x years, that’s a disservice. That there’s no doubt that from 1950-2006 it’s gone up a lot (.1C/decade, globally) — that is the key, and the subject, I think. The meaning of that is a different subject, and I would say so are the accuracy and a whole lot of other subjects rolled into one. It’s difficult if not impossible to try and talk about all of them at once!!!!

    All I’m saying is that it has globally trended up .1C/decade, in how we measure it, over the last 56 years (100% significance). We shouldn’t be discussing short-term trends; that’s my only real point.

    Comment by Hal P. Jones — 14 Aug 2007 @ 7:09 PM

  256. There is an unfortunate interpretation that those wishing to see source code wish only to find fault. But verification (or what some call “auditing”) cuts both ways: while it may find faults, the process can also confirm that a method is implemented properly, and would thereby strengthen the interpretation of the output of the computer model. I suspect there is a strong organizational-culture difference between those of us in engineering and those in academic settings. We do make mistakes in engineering, and we go through significant quality assurance steps to prevent errors from occurring, or to find errors that might have occurred. Perhaps a description of the quality assurance steps used in the design, implementation and verification of the various GISS data analyses and models would be helpful?

    Comment by Eric — 14 Aug 2007 @ 7:10 PM

  257. Re: Gavin’s response to McIntyre’s #225. I’m an academic lightweight, but I feel I have a good reason to disagree. Having the code and making specific changes to it is like doing controlled experiments. I think that is a very good way to do sensitivity analyses. Changing multiple factors at once seems a less effective method of learning, even if it could be a faster way to find a more robust model. Perhaps I’ve misunderstood the point?

    Comment by Steve L — 14 Aug 2007 @ 7:17 PM

  258. I understand your message, John; I just don’t think the same way about it that you do. I hope you understand that key point. I spent quite a bit of time explaining what this debate is about. One group doesn’t see the need and the other group does. Fine.

    GISS does a wonderful job, and I don’t question anyone’s motives. It just seems to me that this non-issue is taking up a lot of effort better spent elsewhere. I have no control over their funding, but if I did I’d increase it. Moot.

    Nope, not a programmer. Immaterial. Zip up everything in the code directory and subdirectories and make it available for download. Case closed, questions finished. Issue done. Costs nothing. The code is produced by the government and is not secret; it should be available. Regulations don’t prohibit its release. All this other stuff is just cobwebs in the way. Who cares why anyone wants it or whether it needs to be checked? Just put it out there.

    But I’ll tell you what my qualifications are. I worked for the government for many years, and know exactly how everything functions (probably more than I want to, in fact!!!). I’ve owned businesses, been in charge of large teams of people, and ran the IT (with the help of multiple team leaders) in an organization of hundreds. I have been involved in this industry since the ’70s and my degree is in computer science. I have almost 10 years’ experience teaching computers and networking, down to the digital-theory level and up to the business and policy aspects. I also have training and various levels of experience in Ada, tcl/tk, dBase, BASIC, shell scripts, batch files, HTML, XML, Pascal, various IDE stuff, CVS, and so on. I certainly consider myself qualified to discuss these issues, and on multiple levels: political, economic, social, scientific and otherwise.

    Comment by Hal P. Jones — 14 Aug 2007 @ 7:56 PM

  259. Here’s the correct link to “the 2001 paper describing the GISTEMP methodology”.

    http://pubs.giss.nasa.gov/docs/2001/2001_Hansen_etal.pdf

    Comment by barry — 14 Aug 2007 @ 8:27 PM

  260. Have any of the people who want to audit any of the existing models done their own model? I’d think that would be a convincing exhibit of bona fides, to show that they know how this stuff works and make their own code public.

    I realize that some infighting about what data to include would be an issue — some people may want to rule in or out particular data sets or methods.

    But if a group set up a public-source model, they’d get a lot of attention and be able to prove their competence in the field.

    Comment by Hank Roberts — 14 Aug 2007 @ 8:45 PM

  261. Re #251 (Hal Jones): Just to note that McIntyre is two degree levels short of being “Dr.”.

    Comment by Steve Bloom — 14 Aug 2007 @ 9:35 PM

  262. Re: #259

    Hank,

    I really think the best way is for NASA-GISS (and other similar organizations) to have robust DP audit departments – I’d be shocked if they didn’t have something along those lines established already.

    The issue would be: what is their expertise in this sort of validation? In other words, as I mentioned earlier, there are issues which go beyond climate science and even beyond the programming language used. These areas of expertise are not critical for the software engineers and scientists who do the modeling, but they are essential for the auditors.

    I’d love to hear from Gavin with regard to what NASA-GISS already has in place.

    Of course, Steve would say: not enough, or else why did he and not they find the data-switch error?

    Comment by Walt Bennett — 14 Aug 2007 @ 9:48 PM

  263. Re: #197-198 (Tamino)

    Many thanks again for the effort in producing a well-documented answer to my naive doubts.

    your graph at
    http://tamino.files.wordpress.com/2007/08/nh-sh.jpg
    on the northern vs. southern hemisphere temperature anomaly is really interesting and thought-provoking.

    For example…

    1. If the main driver of global warming is man-made CO2 increase, then it would seem that already in the 1880s (!)
    the Northern Europe-US “industrial revolution” was somehow able to make itself felt through a (quite fast) Northern Hemisphere differential warming.
    True: railways and carbon burning were rising very fast then,
    but can this be enough?
    One could have a look, say, at figure 2 of
    http://www.epa.gov/climatechange/emissions/globalghg.html

    2. Thermal inertia of the southern oceans must really be “immense”, as you say, because even in recent years, when northern sulfate aerosols are far in the past and CO2, with its “7 or 8 months” diffusion time, is well mixed over the globe, the North-South differential is still rising!

    but this – I suppose – could for example be the effect of a possible strong acceleration in recent northern CO2 emissions…

    3. But then another notable kind of “thermal inertia” must be operating in the US, otherwise, after the demise of sulfate aerosols, one could expect a quick alignment to – say – a kind of northern hemisphere mode,

    that is, a QUICKER warming “to recover lost time”, so to speak.

    On the contrary, the US now seems to be the warming laggard among the northern regions,
    as if the sulfate aerosol era had left a lasting heritage…

    Now if we assume some kind of big “thermal inertia” all can be neatly explained, and this is perhaps the correct thing to do,

    but then

    unless we also find a robust way of independently verifying and measuring “thermal inertia”,

    admitting this additional “free entity” into the discourse greatly reduces the forcefulness of our theoretical construction,
    because other competing explanations of global warming would become workable too:

    it would be enough for these “competitor theories” to adjust the non-directly-measured-but-conveniently-assumed “free entity” to whatever level best fits their needs.

    One would then be forced to admit a higher level of ignorance about climate mechanisms than is pleasant.

    Comment by Mario — 14 Aug 2007 @ 10:25 PM

  264. “so why do so many attribute other motives to what’s just a bunch of data collection and validation?”

    I guess I am more cynical, but after seeing what Bush did to the Gulf Coast after the storms, and after seeing him put the scientist that companies use as a paid witness to defend lead-poisoning cases into a high-ranking position at the EPA, it is pretty obvious to me that there are other motives and agendas at play here. And one side has proven itself to lie and exaggerate exponentially more than the other.

    Comment by FP — 14 Aug 2007 @ 10:35 PM

  265. Re #260 – So what is your point?

    Comment by Gerald Machnee — 14 Aug 2007 @ 10:59 PM

  266. Steve L: Having the code and making specific changes to it is like doing controlled experiments. I think that is a very good way to do sensitivity analyses. Changing multiple factors at once seems a less effective method of learning, even if it could be a faster way to find a more robust model. Perhaps I’ve misunderstood the point?

    Not a software type, are you? The nasty fact is that software is formally chaotic. There are several ways to get to that conclusion, but the one I like best is that a running program on a von Neumann computer is an iterated map, and iterated maps can be chaotic. Which adds up to the fact that making random changes to code usually blows things up.

    That is beside the point, however. The game that Gavin, or any modeler of natural processes, plays with his models is to try to learn how things work. Coding a new feature, or trading one way of calculating for another, are, essentially, experiments. The code is not important. The equations expressed in the code are important. And running the model is how you evaluate a particular set of equations. The rules of economics or engineering don’t really apply here. In a profound way, a sensitivity analysis is meaningless in this domain.
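    (A toy illustration of the iterated-map point, using the textbook logistic map rather than anything from Model E: a change in the sixth decimal place of the starting value, far smaller than any “small” code change, produces completely different trajectories within a few dozen steps.)

    def iterate(r, x, n):
        """Run the logistic map x -> r*x*(1-x) for n steps and return the trajectory."""
        traj = []
        for _ in range(n):
            x = r * x * (1.0 - x)
            traj.append(x)
        return traj

    a = iterate(3.9, 0.500000, 50)   # in the chaotic regime (r = 3.9)
    b = iterate(3.9, 0.500001, 50)   # the same rule with a one-part-in-a-million tweak

    for step in (5, 20, 49):
        print("step %2d: %.4f vs %.4f" % (step, a[step], b[step]))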

    Comment by Tim McDermott — 14 Aug 2007 @ 11:15 PM

  267. Gerald, 260 corrects an error in 251; Dr. McIntyre is someone else.

    Comment by Hank Roberts — 14 Aug 2007 @ 11:28 PM

  268. The side discussion of tectonics in the context of sea level rise is getting a bit tangled (“BlogReader said: Odd you didn’t mention that maybe some plates might be sliding lower into the ocean . . .and john mann replied, “Well, plates do slide against each other, but sea level changes due to tectonics tends to be in the order of 1cm per thousand years . . .”). And this epic set of comments is WAY too good to get sidetracked, so I won’t wade in with much more.

    Barton summed it up nicely: Sea level is rising on average, as determined by several independent lines of evidence (satellite ranging; satellite gravity measurements; tide gauge analyses, among others) – but the effects on individual coastlines are decidedly local. A previous Realclimate post (http://www.realclimate.org/index.php?p=314) touched on these issues without explicitly dealing with coastal changes (unless I missed a more relevant post; sorry if so). In most places eustatic sea level rise due to AGW is purely bad news. All this is probably best taken up in another thread sometime.

    Comment by Rich Briggs — 15 Aug 2007 @ 12:26 AM

  269. Re. #258: amazing. That paper contains not one formula that I can see. If the algorithm is as trivial as claimed, surely it might fit into the space of one of the many graphs supplied in the index?

    Has anyone here gone to the trouble of turning those ten pages of dense text into something more directly comprehensible (i.e., a formula)? If so, would they mind posting it here?

    Comment by Ralph Becket — 15 Aug 2007 @ 12:36 AM

  270. Gavin, thanks for the link, I’ve downloaded source and am looking through it now. While you may argue such code-sharing isn’t important like primary work, it certainly helps in diffusion of knowledge.

    JohnM: My platform is Linux. In 25 years of SW experience, I have studiously avoided Fortran as best I could, but unfortunately have to run into it occasionally; I can certainly compile, read, and understand it as well as most other languages I work with. As I mentioned before, I have a day job and other responsibilities besides, so my intention in doing this myself is curiosity/education, not to find any bug. Whether it parlays beyond that I can’t commit to now; expect no weeklies, they are painful enough even when paid.

    The one thought I did have that might be a contribution is that, like the SETI project, there is perhaps a way to harness many people’s machines to run these GCM simulations. Have others had this idea as well? I think so. Anyway, an available open-source GCM code base would be an enabler of such an effort, hence my ‘what the heck, I’ll run it’ bid to take it.

    [Response: Watch this space... - gavin]

    Comment by Patrick the PhD — 15 Aug 2007 @ 12:43 AM

  271. I feel persuaded by Gavin on this one RE: response to McIntyre.

    Both don’t seem to disagree in the value of the check and balance system of replicability. Where they disagree is in the required level of precision in the record. However, in this case, the accuracy of the record remains the same even after the corrections, for all intents and purposes (and that accuracy was coupled with a good level of precision nonetheless). It seems as if McIntyre is not so much concerned with accuracy (and I think that Gavin was right to refer to his exercise as “micro auditing”) as he is with precision; I don’t think he has made the case for why greater precision must be had (at least in this instance).

    What Gavin is arguing is that when a question of accuracy arises, it is much more fruitful to form a competing replication. Since there are different codes used for the record constructions anyway (we can, I think, rightly assume this), it’s clear that the code changes aren’t going to affect whether or not the record is accurate. Analysing a replication through creating a competing (and perhaps, null) replication will tell you a lot more.

    Justin

    Comment by Justin — 15 Aug 2007 @ 1:49 AM

  272. When a “skeptic” scientist offers up a view of why they may disagree with your theory you are very quick to dismiss them (I have never seen so much as a “well you have a point..”). So people do build things from scratch and it ends up in a “he said-she said” argument.
    This issue gives everyone a huge opportunity to move closer together on climate change analysis as it cuts to the heart of many skeptics’ problems. Computer models. Silly dim old skeptics can’t quite believe how computer models are able to model something as complex as the weather. Out to 100 years in the future. But they do, and many people say that such forecasts should dictate public policy. So when you say it’s about the science, build your own model, you are being disingenuous: it has become both about the science and the models. Climate science without the models would be meaningless.
    I am a drive-by skeptic with no scientific training. What do they call such people? Let me see… oh yes, voters. There are billions of us. Threads like this, with the many ways you have tried to deny there is a need to open any aspect of your scientific investigation to anyone who wants it, make it less likely we will vote for any “anti-global warming” measures, which today we see as futile and economically destructive.

    Comment by Alan K — 15 Aug 2007 @ 2:02 AM

    re 259 H Roberts: …if a group set up a public source model…
    It is a surprise that this hasn’t been proposed already.
    The Linux project shows that massive public code bases can be developed.
    With modern cluster and sharing management, large amounts
    of computer power are freely available.
    Maybe the Free Software Foundation could be the vehicle.
    Or, Anthony Watts seems to be able to inspire public
    participation. All we need is one Torvalds!

    Comment by hillrj — 15 Aug 2007 @ 3:24 AM

  274. Re #259: I don’t know the details, but IIRC NCAR’s CCSM is set up more or less for wide access, which raises the question of why the “auditors” are interested in the GISS model in particular.

    Re #262: It was an appeal to lack of authority, Gerald.

    Comment by Steve Bloom — 15 Aug 2007 @ 3:31 AM

  275. Well, apologies to Jim Hansen, but that has to be the worst coding style I’ve ever seen. No indentations or blank lines, barely any comments, gotos all over the place… if he’d wanted to make it unreadable he couldn’t have done a better job. And I assume that’s Fortran-77? It sure doesn’t look like ’95. God forbid that I knock ’77, because it kept me employed as a programmer for eight years, but please tell me they’re not still coding in it.

    Comment by Barton Paul Levenson — 15 Aug 2007 @ 6:48 AM

  276. Tamino states in #252 that:

    “I often hear it said that mid-century we experienced 30 or more years of cooling, and frankly, it just ain’t so. . . . Then there’s the fact that neither the cooling 1940-1975 nor the warming 1950-1975 is statistically significant. It’s not cooling or warming; it’s fluctuating. . . . The oft-repeated claim of 30+ years of global cooling mid-century is excellent fodder for denialist propaganda. But it’s simply not true.”

    According to the NOAA global temperature anomaly data, prior to 1981, the highest average global temperature anomaly in a single year was .2143 degrees Celsius, in 1944 (the average global temperature anomaly in 1940 was .1187 degrees Celsius, nearly .1 degrees Celsius below the peak level attained in 1944). From January 1945 to December 1976 (a total of 384 months), the global temperature anomaly was lower than the 1944 value in 369 months, or 96% of the total. The standard deviation of the first difference of the global temperature anomaly is about .108 degrees Celsius. In the same 1945 to 1976 period, in 317 months, or 83% of the total, the global temperature anomaly was more than one standard deviation below the 1944 average. Furthermore, in 206 months in that same period, the global temperature anomaly was more than two standard deviations below the 1944 average.

    Were the global temperature “not cooling or warming; [but] fluctuating” between 1945 and 1976, one would have expected roughly 50% of the data points in that period to be below that 1944 average, 33% to be more than one standard deviation below it, and 5% to be more than two standard deviations below it.

    For those who claim that this writer is cherry-picking: if we instead take the average global temperature anomaly for the 1940 to 1944 period, .1434 degrees Celsius, the corresponding percentages are 90%, 60% and 31%. These percentages are significantly different from the expected percentages of 50%, 33% and 5%.

    Furthermore, Tamino claims that the relatively flat temperatures between 1950 and 1975 dispel the notion of a general cooling trend. The average global temperature anomaly during this period was -.0036 degrees Celsius (the corresponding figure for the 1952 to 1975 period is .0063 degrees Celsius). In 277 months out of 312 (89%) during this period the global temperature anomaly was below the 1940 to 1944 average; 179 months (57%) were more than one standard deviation below the 1940 to 1944 average; 93 months (30%) were more than two standard deviations below the 1940 to 1944 average.

    Simply put, the global temperature anomaly was cooler in the 1945 to 1970 period, and these cooler temperatures were statistically significant compared with the five-year period between 1940 and 1944, when global temperatures reached their highest level attained before 1980. Regardless of your position on AGW, the above analysis is not “denialist propaganda”, but a careful examination of reality based on the data.
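    (For readers who want to repeat this kind of tally, a minimal sketch, with synthetic monthly anomalies standing in for the NOAA series, of counting months below a reference value and below one and two standard deviations of the first difference:)

    import numpy as np

    rng = np.random.default_rng(2)
    # Synthetic monthly anomalies for 1945-1976 (384 months); placeholder data only.
    anoms = -0.05 + rng.normal(0.0, 0.10, 384)

    ref = 0.2143                          # the 1944 value quoted above
    sd = np.std(np.diff(anoms))           # std dev of the month-to-month first difference

    for label, threshold in (("below 1944 value", ref),
                             ("more than 1 sd below", ref - sd),
                             ("more than 2 sd below", ref - 2 * sd)):
        print("%-22s %3.0f%%" % (label, 100 * np.mean(anoms < threshold)))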

    Comment by John Tofflemire — 15 Aug 2007 @ 7:02 AM

    Re #226. Thanks for the graphs. Looks much better now that global warming doesn’t fly off the chart anymore. Let’s hope the GISS graphics person takes note.

    Comment by Dodo — 15 Aug 2007 @ 7:36 AM

  278. #272 Alan,

    This incident proves that climate scientists take skeptics seriously when they bring something new and worthwhile to the table.

    But all I see from skeptics is that they recycle the same already-refuted arguments over and over again, millions of times, I would estimate. Nobody has time to even take note of this stuff.

    For instance, you just trucked out the old “models don’t work” chestnut again. Been there, done that; nobody has time to keep refuting it over and over again for every skeptic with a keyboard [edit]. There are just too many of you.

    Comment by Tom Adams — 15 Aug 2007 @ 8:09 AM

  279. re: 272. Do you apply your model skepticism broadly? Or selectively to science? For example, the next time you step on an airplane for a flight, do you realize that models were used to develop and test the plane? Or do you demand to see the open models that were used first before allowing the plane to take off? BTW, those models were also tested and peer-reviewed.

    “…which today we see as futile and economically destructive.” Ah, now the unobjective statement behind your question becomes obvious. “WE”, the voters, do not see that at all. Check the various polls re: the need for action on GHGs.

    Comment by Dan — 15 Aug 2007 @ 8:10 AM

  280. Re: #276 (John Tofflemire)

    You have misunderstood me. I’ll say it again…

    I have never disputed that the period 1950 to 1975 was cooler than the period 1940 to 1945. What I dispute (correctly) is that it was cooling *from* 1945 *to* 1970. If you want to say that mid-century we saw 25-30 years of “cooler,” that’s one thing. But if you say 25-30 years of “cooling,” then you are quite simply mistaken.

    The “denialist propaganda” to which I refer is the claim that the globe was cooling for 30+ years mid-century. The impression which is intended is that the planet cooled, and kept cooling, for three decades. It just ain’t so.

    We beat this to death already on an earlier thread.

    Comment by tamino — 15 Aug 2007 @ 8:20 AM

  281. # 278 Tom, why should people not continue to express doubts? Just because you say so? [edit]

    # 279 I do apply my model scepticism broadly. I have done enough (albeit financial) modelling to know that you can make the output whatever you want it to be. A plane will have been proved to fly, i.e., in real life. Would you believe a model that told you what type of flight will exist in 100 yrs?

    Comment by Alan K — 15 Aug 2007 @ 8:52 AM

  282. RE #277, that’s science for you. We lay people look at something and see not much happening, but a scientist looks at it and sees all sorts of things. It’s a matter of many years of education and training. And I’ll trust what the scientists say over what the untrained laypeople say any day.

    BTW, I just read on ClimateArk.org that British scientists have predicted the temps will level off for 2 years, then after 2009 rise sharply, and all hell is about to break loose. See:
    http://www.climateark.org/shared/reader/welcome.aspx?linkid=81736 and
    http://www.climateark.org/shared/reader/welcome.aspx?linkid=81891

    Of course, if you’re 99 and expect not to be here then, you’ll probably miss the worst.

    Comment by Lynn Vincentnathan — 15 Aug 2007 @ 9:03 AM

  283. 279. “re: 272. Do you apply your model skepticism broadly? Or selectively to science? For example, the next time you step on an airplane for a flight, do you realize that models were used to develop and test the plane?”

    Dan, an inapt analogy. We have had tens of thousands of actual airplane flights of experience to validate that the models match observation. Airplanes were not built on models alone, but on multiple test flights. We have yet to live through the 21st century to validate climate models longterm. It’s an ongoing validation and calibration, year by year.
    For climate modelers, the challenge is that it’s such a complex system with complex cause-and-effect that one can match past experience yet have a model with not much skill in predicting the future (viz 281).
    For example, I could give you a computer model that perfectly matches how the stock market performed up to today. Would you put your entire life savings into using it to make a bet on the stock market in the future? I suspect you would give it multiple ‘trial runs’ before making such a bet, right?

    [Response: You are erecting and knocking down strawman arguments. Climate models have nothing to do predictions of the stockmarket. In climate modelling, there are indeed many 'trial runs' that give people confidence in their projections. See previous threads on this exact same point: here and here. Further discussions of climate models is OT in this thread. - gavin]

    Comment by Patrick the PhD — 15 Aug 2007 @ 9:27 AM

  284. We haven’t touched on this before that I have seen, but something occurred to me last night as I read the latest climate story from Elizabeth Kolbert in The New Yorker. Her story is about declining bee populations. I also saw in last week’s Newsweek that certain Central American frogs were wiped out in 2 years in the late 1980s. Also, certain butterflies no longer exist in their previous habitat, having moved north and up to seek the cooler temperatures they prefer.

    In other words, nature is changing out from under us, long before we ‘feel’ the impacts of climate change in our daily lives. And long before ice sheets break up, or storm patterns change for the worse, or previously wet regions turn to desert.

    Long before that, animals and plants which are much more sensitive to environmental changes will feel the effects and be forced to adapt. This will have unknown consequences for man, but one thing is for sure: there will be consequences.

    Comment by Walt Bennett — 15 Aug 2007 @ 10:42 AM

  285. Another issue with having the code is that there is a very strong tradition in the open source community of improving code. Some of the fastest compilers, as I think John Mashey will testify to, are products of open source efforts.

    On the subject of “support”, I was an early UNIX user — I’ve seen 7th Edition and System III source code, as well as some variants that never saw the light of day. We managed to do just fine without support from Bell Labs. The same was true in the early days of Linux, back when we installed it using a stack of floppy disks. There was no support for Linux then, and look at it today. I wouldn’t be surprised to learn that a lot of the platforms running these models are Linux systems :)

    Comment by FurryCatHerder — 15 Aug 2007 @ 11:05 AM

  286. Steve McIntyre observed that the GISS corrections do attempt to eliminate UHI for urbanizing sites. However, Steve McIntyre also observed that the GISS corrections added a warming trend to sites that are rural in nature and haven’t moved for quite some time. I’d like Gavin to explain the physical reason why a warming trend was added to rural sites and how it was added. This is no small matter. Subtracting UHI from urbanizing sites, but adding UHI to non-urbanizing sites, still keeps UHI effects in the GISS analysis. The validation of climate models can’t be done with surface data that is corrupted with effects unrelated to climate change. This makes any error found a BIG deal in my mind, because it opens the possibility of more errors.

    [Response: The virtue of reading the references:

    We reiterate a caveat that we have discussed elsewhere [Easterling et al., 1996b; Peterson et al., 1998c; Hansen et al., 1999]: the smoothing introduced in homogeneity-adjusted data may make the result less appropriate than the unadjusted data for local studies.

    Why might that be?

    The urban adjustment, based on the long-term trends at neighboring stations, introduces a regional smoothing of the analyzed temperature field.

    Therefore any one station, even if rural, has its trend set by the average of raw rural station data in the area. – gavin]
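    (A toy numeric sketch of the regional-smoothing idea described in the response above, with hypothetical station trends rather than the actual GISTEMP adjustment code:)

    # Hypothetical raw long-term trends (deg C/decade) for the rural stations
    # surrounding one location -- illustrative numbers only.
    rural_raw_trends = {"rural_A": 0.12, "rural_B": 0.18, "rural_C": 0.09, "rural_D": 0.15}

    # The homogeneity adjustment smooths toward the regional rural mean, so the
    # adjusted long-term trend at any one station (urban or rural) reflects this
    # average rather than that station's own raw trend.
    regional_mean = sum(rural_raw_trends.values()) / len(rural_raw_trends)
    print("regional rural-mean trend: %.3f deg C/decade" % regional_mean)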

    Comment by VirgilM — 15 Aug 2007 @ 11:11 AM

  287. Re: #286

    Agreed.

    My point was, climate change will affect these populations in a direct way long before they affect humans in a direct way.

    And yet, these indirect effects will have significant consequences for humans. For example, without bees we cannot grow as much of certain fruits, vegetables and even nuts as we demand.

    That’s just one of what will certainly be many examples.

    Comment by Walt Bennett — 15 Aug 2007 @ 11:38 AM

  288. #283 & We have yet to live through the 21st century to validate climate models longterm.

    Well, see, the scientists want to do as much as they can now with whatever evidence they have, before what actually happens during the 21st century has a chance to validate or invalidate their models. That’s because by 2100 there may not be any scientists left. They may have to do like the rest of the remnants of humanity will be doing — scrounge around for food, fight off marauders, escape mega floods, hurricanes, storms, and forest fires, and such. :(

    Comment by Lynn Vincentnathan — 15 Aug 2007 @ 11:52 AM

  289. Gavin, so what if the http://www.surfacestations.org effort determines that some of the rural stations used in the regional average are corrupted with non-climate-change effects? This may not subtract enough trend from urbanized sites and may add too much trend to the rural sites.

    Of course, the nearest USHCN site to me is Huntley Exp Station, MT. They irrigate all around the station, so while it can be considered rural, it has land-use effects corrupting the climate change signal (possible cooling effect during May-Aug?). Is it possible to know how the level of irrigation has changed in the area during the last 100 years? And even if we knew the irrigation levels, do we know how to correct the data?

    I think we need to fund a climate station network that sites stations free of land use effects and changes.

    [Response: The regional trend is the average trend of the regional rural stations. In the raw data, some will be more than the mean, some less. There is a funded climate station network - CRN - which is exactly what you want. - gavin]

    Comment by VirgilM — 15 Aug 2007 @ 12:13 PM

  290. Ahem, looks like the numbers got moved. Just to clarify, #260 is Hank Roberts discussing people making their own ways of performing adjustments. #261 is Steve Bloom correcting me on Steve McIntyre not having a doctorate (the title I used in #251). Hank (#267) responds to Gerald (#265), who asked what the point was of Steve Bloom’s comment about the title of doctor. (So Hank: same person, wrong title.)

    Which is correct; I was wrong, and I went and looked it up. It’s a BSc from U Toronto in 1969, and he graduated from Oxford in 1971 after studying philosophy, politics, and economics. I’m unsure whether Oxford included a degree, and of what kind if so; it just says he graduated. I just always thought he had a PhD, sorry.

    Also, Ralph (#269) comments on “#258” (barry’s link to Hansen 2001 in #259); hillrj (#273) is commenting on Hank in #260, not #259; Steve B. (#274) has #259 and #262, but that’s #? and #265; etc.

    FP, in #264 you are talking about the US govt and matters of politics vis-à-vis ulterior motives. I was talking about the people surveying or auditing.

    Justin, in #271 you talk about precision. I don’t think that’s it. It’s more an issue of validating a method by analyzing the method itself versus validating it by creating a different method that does the same thing. I suppose the discussion is all about which is the “better way” to see if things are doing what they’re supposed to. Gavin thinks the other methods already around do that. Steve wants to check the original method directly. So what I see is that one doesn’t see the need and the other doesn’t understand why the need has to be demonstrated. I don’t think there’s a solution to that.

    Steve B, in your #274 I believe you’re saying that my mistakenly using “Dr.” as a title made what I wrote an appeal to (a lack of) authority. Reading the bio, it seems to me that regardless, he’s qualified to statistically analyze these sorts of things. YMMV

    #276 John T., #280 tamino: I suppose it depends on whether you’re talking about cooling constantly every year versus the general trend.

    Comment by Hal P. Jones — 15 Aug 2007 @ 1:13 PM

  291. Regarding the adjustments: at one time (August 1999) 1934, 1921, 1931, 1953 and 1998 was the order of US temperature records.

    See:
    http://www.giss.nasa.gov/research/briefs/hansen_07/fig1x.gif

    From:
    http://www.giss.nasa.gov/research/briefs/hansen_07/

    [Response: Interestingly, that was prior to the adoption of the USHCN corrections for time of observation biases etc. - which is what the 2001 paper was all about. - gavin]

    Comment by John Wegner — 15 Aug 2007 @ 1:38 PM

  292. This is definitely an interesting discussion to read as a non-climatologist scientist. All of the attention and scrutiny is, in the long run, very good for the science.

    I would agree w/ many of the comments above…the skeptics, denialists, auditors, whatever…they are making the case for AGW stronger with their probing. I’m interested to hear about the results of the due diligence.

    And I would love to contribute computing power in the SETI-type way mentioned in #270.

    Comment by Brian — 15 Aug 2007 @ 1:44 PM

  293. #292
    Are you looking for something like this?
    Climateprediction@home

    Comment by DavidU — 15 Aug 2007 @ 2:10 PM

  294. Hal,

    when you say “validating the method”, do you mean validating the code or validating the method without its coded form?

    Justin

    Comment by Justin — 15 Aug 2007 @ 2:17 PM

  295. Re#293: DavidU…thanks!…I will give that site a look

    Comment by Brian — 15 Aug 2007 @ 2:23 PM

  296. I trust that those who are clamoring for access to the computer code used by climatologists, in order to scrutinize those codes for possible errors, are similarly motivated to do quality-control checks on the models underlying the gloom-and-doom economic forecasts of dealing with AGW:

    Emissions Dilemma High Cost Of Reductions Estimated; Others Say Doing Nothing Would Cost More

    By ALAN ZIBEL

    Associated Press

    August 14, 2007

    WASHINGTON

    Making big cuts in emissions linked to global warming could trim U.S. economic growth by $400 billion to $1.8 trillion over the next four decades, a new study says.

    The study published Monday by a nonprofit research group partially funded by the power industry concludes that halving emissions of carbon dioxide – the main greenhouse gas linked to global warming – will require “fundamental” changes in energy production and consumption.

    The Electric Power Research Institute said the most cost-effective way to reduce the level of carbon dioxide in the atmosphere is to make many changes at once, including expanding nuclear power, developing renewable technologies and building systems to capture and store carbon dioxide emitted from coal plants. Reducing demand for fossil-fuel power is also key, the institute said.

    The EPRI cost estimate is based on a 50 percent cut in total U.S. carbon emissions from 2010 levels by 2050. Without such a cut and the shifts in technology it would bring, the Energy Department projects that U.S. carbon emissions will rise from about 6 billion metric tons a year in 2005 to 8 billion metric tons by 2030.

    The report calls for more modest cuts in emissions than some proposals being considered in Congress. Bigger cuts could well be more expensive…

    Disclaimer: Many years ago, I was involved in research funded by EPRI.

    Comment by Chuck Booth — 15 Aug 2007 @ 2:50 PM

  297. Hal P. Jones (#290) wrote:

    Ahem, looks like the numbers got moved. Just to clarify, #260 is Hank Roberts discussing people making their own ways of performing adjustments. #261 is Steve Bloom correcting me on Steve McIntyre not having a doctorate (The title I used in #251) Hank (#267) responds to Gerald (#265) asking what the point of Steve Bloom’s comment about not having the title of doctor. (So Hank, same person, wrong title).

    Quick note: even when on rare occasion a post gets removed (and this has happened to me before, when I implied a rather ill-chosen comparison), if you have hyperlinks to the posts that you are referring to, people will still be able to follow. Anyway, thank you for the effort that you are putting into keeping the context.

    It helps.

    Personally, I have a variety of problems with Steve McIntyre. First, he is not a climatologist, and he has deliberately made use of statistical methods which he should know are invalid to try to discredit the hockey stick. Second, he tries to create the impression that taking photos of the stations will be sufficient to determine whether they are capable of providing accurate and useful information, that is, that somehow the photos will show whether or not the stations are in park cool islands, whether other statisticians are using the appropriate statistical methods for separating the signal from the noise, etc. Third, he cherry-picks the stations and misrepresents the data which is being received from them. Fourth, he pretends as if a problem with a particular station would result in a continuing upward trend, when all it would produce is a jump, not a trend. Fifth, he pretends as if this is the only source of information we have for reliably determining trends.

    So I do not question his qualifications as a statistician. [edit]

    That said, he is attracting more talent at present. Who knows? Maybe something of real value will come out of his group despite his involvement. Stranger things have happened.

    But I won’t be holding my breath.

    Comment by Timothy Chase — 15 Aug 2007 @ 3:15 PM

  298. Re #296, it makes me wonder why anyone thinks that fossil fuels, starting with oil, are going to be around in sufficient quantities to fuel economic growth for decades to come. From all of the available evidence I would suggest not. And to think that something else is available to replace oil, then gas and coal, immediately and with no economic pain, is at present seemingly presumptuous and slightly foolish.

    Still, Peak Oil people are seen as doommongers, much like environmentalists, and as yet not quite mainstream enough thinking.

    Comment by pete best — 15 Aug 2007 @ 4:21 PM

  299. Chuck Booth (#296) wrote:

    I trust that those who are clamoring for access to the computer code used by climatologists, in order to scrutinize those codes for possible errors, are similarly motivated to do quality-control checks on the models underlying the gloom-and-doom economic forecasts of dealing with AGW:

    Emissions Dilemma High Cost Of Reductions Estimated; Others Say Doing Nothing Would Cost More…

    Electric Power Research Institute
    http://www.sourcewatch.org/index.php?title=Electric_Power_Research_Institute

    Hmmm… Exxon funding, Chauncey Starr in previous years from the George C. Marshall Institute…

    Starr, Chauncey (George C. Marshall Institute)
    http://www.mediatransparency.org/recipientprofileprinterfriendly.php?recipientID=137

    Involved in…

    HARVARD CENTER FOR RISK ANALYSIS
    According to its website, the HCRA “was launched in 1989 with the mission to promote public health by taking a broader view. By applying decision science to a wide range of risk issues, and by comparing various risk management strategies, HCRA hopes to empower informed public responses to health, safety and environmental challenges by identifying policies that will achieve the greatest benefits with the most efficient use of limited resources.” (http://www.hcra.harvard.edu/about.html; accessed 03/29/06)

    according to Center for Science in the Public Interest / Integrity in Science Database
    http://www.cspinet.org/integrity/nonprofits/harvard_university.html

    … tactics out of the tobacco industry playbook.

    Not good.

    PS

    Chuck – Great to see you made it to this front now that the other has cooled down.

    Same war, though.

    Comment by Timothy Chase — 15 Aug 2007 @ 4:48 PM

  300. Resources for researching the disinformation industry:

    Integrity in Science Database
    Center for Science in the Public Interest
    http://www.cspinet.org/integrity

    Source Watch
    http://www.sourcewatch.org

    Center for Media and Democracy: PR Watch.org
    http://www.prwatch.org

    Media Transparency
    http://www.mediatransparency.org

    Climate Science Watch
    http://www.climatesciencewatch.org

    DeSmogBlog
    http://www.desmogblog.com

    Society of Environmental Journalists
    http://www.sej.org
    See links for more: http://www.sej.org/resource/index18.htm

    Comment by Timothy Chase — 15 Aug 2007 @ 5:04 PM

    Still, Peak Oil people are seen as doommongers, much like environmentalists, and as yet not quite mainstream enough thinking.

    Favorite Far Side cartoon:

    Two fish are outside their small fishbowl watching a fire consume their fishbowl castle. One says to the other, “Thank heavens we made it out in time. Of course, now we’re equally screwed.”

    Comment by Jeffrey Davis — 15 Aug 2007 @ 5:17 PM

  302. re: #301
    Peak Oil
    ASPO-USA will be held in Houston in October. You can check who’s speaking and decide whether they actually know anything, or whether they are random doommongers.
    (ASPO = Association for the Study of Peak Oil & Gas),
    http://www.aspo-usa.com/

    Alternatively, look at
    http://www.lastoilshock.com/

    and get Strahan’s book (Amazon Canada or UK, not USA).

    Peak = 2015 +/- 5 years is the consistent estimate. Personally, I’m planning for $10/gal gas here within 10 years.

    Peak Oil & Global Warming are rather tightly coupled. If we do our best to burn up oil fast, we not only increase CO2, but when the gas gets really expensive, we get to have a World Depression that makes it hard to invest very quickly in replacements, and will almost certainly mean we’ll be burning a lot of coal before we figure out how to sequester the CO2.

    Maybe we can avoid this; younger folks will get to see it firsthand!

    Comment by John Mashey — 15 Aug 2007 @ 6:36 PM

  303. Tamino says (#280):

    “The “denialist propaganda” to which I refer is the claim that the globe was cooling for 30+ years mid-century. The impression which is intended is that the planet cooled, and kept cooling, for three decades. It just ain’t so.”

    In #256 Tamino said:

    “It’s more correct to say that it cooled from about 1944 to 1951 (7 years), then levelled off for 24 years.”

    In fact, one could claim, with a reasonable degree of validity, that it is “denialist propaganda” that the earth warmed, and kept warming, for the 30+ year period from 1976 to 2006. For example, the average global temperature anomaly in 1976 was -.1182 degrees Celsius (using the NOAA data). In 1981 the average global temperature anomaly was .2392 degrees Celsius, an increase of .3574 degrees Celsius in just 5 years. In 1996, the average global temperature anomaly was .2564 degrees Celsius, just .0172 degrees higher than 15 years earlier. In 1998, just two years later, the average global temperature anomaly was .5772 degrees Celsius. In comparison, the average global temperature anomaly in 2006 was .5391 degrees Celsius, or .0381 degrees Celsius cooler than in 1998!

    Thus, one could claim that the earth’s temperature rose rapidly between 1976 and 1981, then essentially “fluctuated” between 1981 and 2006, except for the rapid increase between 1996 and 1998. Under such logic one could further claim that the earth has not “warmed, and kept warming,” for three decades, to paraphrase Tamino’s statement noted above.

    Of course, such a claim is sheer nonsense. The earth has been in a warming period since 1976, and to deny that is to deny reality. That is, temperatures have, on average, risen and have, on average, tended to remain at elevated levels. Similarly, between 1944 and 1976, temperatures, on average, fell and, on average, tended to remain at suppressed levels. The difference is that the total temperature increase in the warming period since 1976 has been much greater than the total temperature decrease that took place in the 1944 to 1976 cooling period. That is also reality.

    Now, Hal Jones notes (#290) that:

    “#276 John T., #280 tamino: I suppose it depends on whether you’re talking about cooling constantly every year versus the general trend.”

    In fact, between 1976 and 2006, in 17 years the year-on-year change in the average global temperature anomaly was positive and in 13 years the year-on-year change in the average g.t.a. was negative. In other words, average temperatures fell nearly half the time year-on-year during a warming period! The earth’s temperature fluctuates up and down whether the period is warming or cooling. It NEVER constantly warms or cools!

    Why is all of this important, and why is this thread not “much ado about nothing”? The U.K.-based Climatic Research Unit last week forecast that the average global temperature anomaly in 2014 will be .3 degrees Celsius higher than in 2004. Since the NOAA average g.t.a. in 2004 was .5344 degrees Celsius, the CRU is forecasting that the NOAA g.t.a. in 2014 will be .8344 degrees Celsius (+/- some confidence interval). There is a lot riding on this forecast for, if it is correct, then it would be impossible for anyone but the delusional to argue that AGW is real. On the other hand, if the actual average g.t.a. is significantly lower than this figure (crossing our fingers that we don’t have an accursed major volcanic eruption to screw things up), the AGW theory as currently held will very much be in doubt. Thus, it is crucial that the temperature measurements taken over the next seven years be as accurate as possible. Steve McIntyre has done everyone a favor (most of all AGW proponents) here in this regard.
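    (A small sketch of the year-on-year point, using synthetic anomalies with an underlying warming trend rather than the NOAA record, showing that a fair share of the yearly changes can be negative even while the long-term trend is clearly up:)

    import numpy as np

    rng = np.random.default_rng(3)
    years = np.arange(1976, 2007)
    # Synthetic anomalies: an underlying 0.17 deg C/decade rise plus interannual noise.
    anoms = 0.017 * (years - 1976) + rng.normal(0.0, 0.10, years.size)

    diffs = np.diff(anoms)
    print("year-on-year increases:", int(np.sum(diffs > 0)))
    print("year-on-year decreases:", int(np.sum(diffs < 0)))
    print("overall trend: %+.3f deg C/decade" % (10 * np.polyfit(years, anoms, 1)[0]))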

    Comment by John Tofflemire — 15 Aug 2007 @ 6:55 PM

  304. Sorry! I meant to say: “it would be impossible for anyone but the delusional to argue that AGW is false”.

    Comment by John Tofflemire — 15 Aug 2007 @ 6:58 PM

  305. RE #3

    Nick, you are referring to Ramanathan, V. et al. (2007). That is not its conclusion, and the study has no bearing on the 1940-70 period.

    Comment by Chris — 15 Aug 2007 @ 7:10 PM

  306. As Gavin has continued to point out, this study indeed means nothing. See the graph of global temperatures, which will not change.
    http://www.epa.gov/climatechange/images/triad-pg.gif

    The U.S. map gives you some feel for how variable climate change can be regionally and locally over long periods of time
    http://www.epa.gov/climatechange/images/tempanom.gif

    Comment by Chris — 15 Aug 2007 @ 7:13 PM

  307. #306

    And what assurance do you have that the global temperature will not change?

    Comment by Bill Nadeau — 15 Aug 2007 @ 8:26 PM

  308. Re 288, Lynn says:”Well, see the scientists want to do as much as they can now with whatever evidence they have, before what actually happens during the 21st century has a chance to validate or invalidate their models. That’s because by 2100 there may not be any scientists left.”

    This will be more than compensated for by the fact that there won’t be any economic forecasters left either. There’s a lot of angst about the future of civilization as we approach the point of no return. Nobody wants to see our progeny as nomads roaming the Arctic, living at a subsistence level. So in that light, when our backs are to the wall, which they soon will be, the skeptics and their “uncertainty” arguments will be drowned out by reality.
    BTW, the source code for ModelE is readily available online. See Gavin’s response in #211. He may have been playing cat and mouse for a while and then concluded that some of us were too challenged to find it (like yours truly).

    Comment by Lawrence Brown — 15 Aug 2007 @ 9:33 PM

  309. In this highly polarized debate, I feel that everyone should remember that the vast majority do not deny that man has an impact on this planet. The question is how much, how soon. That is not irrational.

    The label “denier” is a bit much. “Skeptical” is better. While the correction of the US historical data does not indicate cooling, it does indicate warming without acceleration. That would be a good thing. While that may apply to only 2% of the earth’s surface, it is an indication that new technology to reduce emissions and improve efficiency may be implemented before the tipping point, with less financial impact.

    If you are absolutely certain that all the data is 95%-plus correct, unwilling to fine-tune the data with other statistical methods, and unwilling to bear peer review from less-than-cliquish peers, then the science is settled. Some are not that certain.

    To the request for code, that should not be required unless someone has built their own model based on the provided algorithms and data and found significant differences. If the results cannot be replicated from the provided information by a competent source, then comparison of code may be in order. Is that unreasonable?

    Since the majority of the science now boils down to statistical analysis, should not a variety of statistical methodologies be used to validate the models?

    I am not a climatologist; I only studied engineering, so the only thing I can offer is the KISS rule. If it is a statistical problem, get a statistician.

    Comment by dallas tisdale — 15 Aug 2007 @ 9:43 PM

  310. Bill, I think Chris means that you won’t be able to see the difference in the chart — before and after this particular correction being discussed here is made. The picture — at that scale — won’t change visibly. Chris, izzat what you meant?

    Comment by Hank Roberts — 15 Aug 2007 @ 9:52 PM

  311. Quick question.

    1998 was originally calculated to be warmer than 1934, and was so until at least 2001. By this year, it was calculated to be warmer, which the recent correction reversed. Does anyone know when 1998 surpassed 1934 in the NASA calculations? I just read a story where it was said that 1998 had “long been believed” to be warmer than ’34 but that’s a bit of an exaggeration.

    Comment by cce — 15 Aug 2007 @ 10:53 PM

  312. Sorry that should be “1998 WASN’T originally calculated to be warmer than 1934 and WASN’T so until at least 2001″

    Comment by cce — 15 Aug 2007 @ 11:33 PM

  313. #311. You are incorrect to say that “1998 was originally calculated to be warmer than 1934” – I presume we’re talking U.S. here. 1934 was originally almost 0.6 deg C warmer than 1998 (Hansen et al 1999 and the NASA 1999 news release). In the next two years, 1998 gained 0.6 deg C on 1934.

    Contrary to one of Gavin’s posts, the time-of-observation bias adjustment was included in Hansen et al 1999.

    [Response: Not so. Read the abstract of Hansen et al (2001): "Changes in the GISS analysis subsequent to the documentation by Hansen et al. [1999] are as follows: (1) incorporation of corrections for time-of-observation bias and station history adjustments in the United States based on Easterling et al. [1996a]". – gavin]

    The dramatic increase in 1998 relative to 1934 appears to originate in Karl’s “station history adjustment”, which was added to NASA calculations between 1999 and 2001, with dramatic results. [edit]

    [Response: Also not true. The Plate 2 in Hansen et al (2001) clearly shows that the effect of the TOBS adjustment between the 1930s and 1990s is larger than that of the station history adjustment (both of which are significant however). - gavin]

    Comment by Steve McIntyre — 16 Aug 2007 @ 12:35 AM

  314. re: #270
    Well, if not weekly, let us know what happens sometime; most of the people who’ve argued about this have just seemed to disappear.

    ====
    re: SETI@home, etc

    1) Here’s a good list of such projects:
    http://en.wikipedia.org/wiki/List_of_distributed_computing_projects, of which one is:

    2) http://en.wikipedia.org/wiki/Climateprediction.net

    3) Distributed PCs can work well for certain kinds of work, of which SETI@home, finding primes, etc are good examples. Note that 2) is an ensemble project: that is, each PC runs a completely separate simulation that will fit there, and people look at the ensemble results. [Of course, if someone really doubts a simulation code, running a bunch of instances on different machines won't remove their doubt ... in fact, it would increase the need to check each run. :-)]

    Algorithms that work well this way usually have the following characteristics:
    A) There are a huge number of independent tasks.

    B) Ideally, as a machine becomes idle, it gets the next task (hopefully small), spends a lot of time computing, and then hands back a short answer [ideally, YES or NO], i.e., the work has a HIGH compute:communication ratio.

    (A lot of particle physics people have used workstation/PC farms to look for interesting events (which hardly ever happen). Each system takes an event, spends a lot of time looking for interesting patterns, and then returns YES or NO. SETI@home is similar to this, as are various other kinds of problems. Somewhat similar have been renderfarms at special-effects shops; I don’t know current times, but in olden days a system might run 2 hours to generate 1 frame of a movie.)

    C) If you get a result back from a machine that you don’t own, whose software environment might be unknown, you can VERIFY interesting results easily. Hence, if some PC doing SETI@home says “YES”, you can easily check that by rerunning the algorithm. [If a PC somehow uses bad code and misses the little green men, you may not notice.] Of course, if you are dealing with PCs you don’t trust, you can at least send the same inputs to multiple machines and compare.

    D) Unfortunately, none of this helps much to parallelize a single run of physics-based time-stepped gridded algorithms across multiple distributed PCs, especially with dense 3D grids.

    I don’t think any of the projects in the above are of this type. If you know of a project that is (seriously) doing this, please post.

    People can and do make serious CFD or FE codes work on Linux clusters, for example, but it certainly takes work, and people usually use fast Ethernet, Myrinet, IB, etc, and dedicated machines to minimize latency.
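    A minimal sketch of the pattern in B) and C) above (a toy example only, not code from any of the projects listed): each worker grabs a small independent task, computes for a while, and returns a one-bit answer that can be re-verified cheaply.

        # Toy "embarrassingly parallel" pool with a high compute:communication ratio.
        # Everything here (chunk numbering, the "interesting" test) is illustrative only.
        from multiprocessing import Pool

        def looks_interesting(chunk):
            # long computation, short answer: (chunk id, YES/NO)
            total = sum((x * 31 + 7) % 1009 for x in range(chunk * 100000, (chunk + 1) * 100000))
            return chunk, total % 17 == 0

        if __name__ == "__main__":
            with Pool() as pool:                        # idle workers pull the next small task
                results = pool.map(looks_interesting, range(50))
            hits = [c for c, flag in results if flag]
            # cheap verification: rerun only the chunks that claimed "YES"
            confirmed = [c for c in hits if looks_interesting(c)[1]]
            print(confirmed)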

    (more later, relatives visiting for a couple days).

    Comment by John Mashey — 16 Aug 2007 @ 1:24 AM

  315. #279 oh and “Ah, now the unobjective statement behind your question becomes obvious. “WE”, the voters do not see that at all. Check the various polls re: the need for action on GHGs.”

    I’m sure every poll says there is a need for action on GHGs; the same polls probably say people will vote for higher taxes to help the poor and would be happy to donate 10% of their income to charity. The reality, meanwhile, is that people actually vote with their feet.

    eg. the first one off the top of google

    http://www.oag.com/oag/website/com/OAG+Data/News/Press+Room/Press+Releases+2006/Global+Warming+fears+fail+to+dampen+demand+for+air+travel+0910064

    Comment by Alan K — 16 Aug 2007 @ 1:35 AM

  316. I would imagine that the uncertainties on these numbers would mean that any reordering of the rankings would be statistically meaningless, as you point out. What are the uncertainties on these numbers? Without them I’m not even sure how to interpret what I’m looking at. This also means that there shouldn’t have been any hoo-ha when 1998 and 2006 turned out to be the warmest years.

    Comment by Jake Ruseby — 16 Aug 2007 @ 2:16 AM

  317. Gavin,

    As you say, “The algorithms, issues and choices are outlined in excruciating detail in the relevant papers” and “The error on the global mean anomaly is estimated to be around 0.1 deg C now.”

    However, to me the fact that no one caught a 0.15 deg C error in the US data for several years shows that the level of transparency has been inadequate to the goal of achieving accuracy. It would certainly be easier for organizations providing “corrected” data to include all the corrections in their database than for outside researchers to hunt through various articles and from them make guesses about exactly when and how various corrections have been made to each piece of data. Until this level of transparency has been achieved, I think it is reasonable for people to be skeptical of claims that any area’s temperature record is accurate to 0.1 deg C.

    [Response: The error in the global mean anomaly is around 0.1 deg as I said. The fact that a revision 0.15 deg C in the US made no appreciable difference to the global mean implies that local area errors need to be much larger to have a significant impact. Note that no claim is made that individual stations data is correct to 0.1 deg C - the low error for the global mean comes from the power of large numbers and the large scale averaging that goes on. - gavin]

    Comment by DWPittelli — 16 Aug 2007 @ 8:29 AM

  318. Yes Gavin, I understand that the US is only 2% of the world’s area and so the 0.15 C in the US is trivial globally. However, no one noticed a 0.15 C error in the US data for several years, and I am skeptical that data in most of the rest of the world is under much better scrutiny.

    Comment by DWPittelli — 16 Aug 2007 @ 8:49 AM

  319. Given the amount of noise in the data, it is not surprising that the 0.15 C error wasn’t detected immediately. Year to year variation in the USA’s mean temperature is far greater than 0.15 deg. So the error would likely not have been spotted by *anyone* until several subsequent years of data were collected and analyzed.

    And given that skeptics have had full access to both the corrected and raw data for years, and given that there has been plenty of funding available for skeptics’ activities, the fact that this 0.15 boo-boo is all the skeptics have to show for their efforts is an indication that the data overall are pretty robust. If there *were* serious problems with the data, a couple of dedicated analysts with a few 10′s of K of funding from Exxon-Mobil would have been able to uncover them. Exxon and others would have been throwing money at research and analysis instead of paying to have puff-pieces published in the National Enq^H^H^HReview, WSJ, and other partisan publications.

    Comment by caerbannog — 16 Aug 2007 @ 9:23 AM

  320. Dallas Tisdale said: Since the majority of the science now boils down to statistical analysis, should not a variety of statistical methodology be used to validate the models?

    The adjustments to the historic temperature record are not “the models.” Climate models are generally constructed from first principles of physics and chemistry. They are independent from historical data except that the skill of the model is judged, in part, by its ability to reproduce historical trends. Since there are several independent temperature series that all agree to the level of accuracy needed to evaluate models, there seem to be better places to spend the science budget.

    Comment by Tim McDermott — 16 Aug 2007 @ 10:09 AM

  321. re 309 –

    Some thoughts from a non-scientist type who has been watching this unfold and come to some conclusions accordingly regarding labels such as denialism, skepticism and the so-called “debate”. I don’t comment on this site too often – I don’t have the technical background to hold my own, for one thing. I, like many people in my situation, have to rely upon what I can read in reports and science popularizations. Often, it comes down to observing little things like tactics, asking questions about motives, and understanding the basics of how science works across all fields. In that regard I am qualified to comment.

    Simply put, this is a debate, yes, but only in the rhetorical sense, IMHO. As a scientific debate, I believe the facts supporting GW so far outweigh the arguments against GW that to call it a debate would be, at best, silly. There is just too much data, from far too many sources, to suggest that there is any legitimacy to the so-called AGW “Skeptics” position.

    In fact, from all appearances, what is really occurring is what David Michaels, professor at the George Washington University School of Public Health, correctly characterized as manufactured uncertainty:

    http://www.ajph.org/cgi/content/abstract/95/S1/S39

    See also:

    http://www.ucsusa.org/news/press_release/ExxonMobil-GlobalWarming-tobacco.html

    http://thepumphandle.wordpress.com/2007/01/11/exxonmobil-says-it-will-stop-manufacturing-uncertainty-%E2%80%93-who-is-next/

    http://www.motherjones.com/news/feature/2005/05/some_like_it_hot.html

    Peer review is another indicator. If there is a real scientific debate, it would be there where I expect to see it in play. But this does not seem to be the case:

    http://www.sciencemag.org/cgi/content/full/306/5702/1686

    It also helps to understand the expertise of the people involved. Paralleling the so-called Evolution-Creationism debate, the majority of these folks criticizing GW science have no background in the science they seek to criticize, do no research of substance into the phenomena but instead engage in often spurious critiques of the science, much like Intelligent Design’s Discovery Institute, which does no research, just publishes books and op-ed pieces while funding efforts to undermine school policy in places like Kansas and Pennsylvania. Even from a casual perspective, this has to raise questions as to the legitimacy of the critiques they offer of the people who actually studied and work within the field of climatology.

    A great dissection at how far apart the methodology of the so-called “skeptics” is from that of the climatologists can be found here:

    http://www.realclimate.org/index.php?p=74

    While on the surface this link is a discussion of sci-fi author Michael Crichton’s cherry-picking of data to support his work of fiction, it is also an excellent snapshot of the types of arguments and tactics employed by the denialists over the past few years. Far too often, we see denialists take bits and pieces of data out of context and attack the resulting straw man they create (like the attempt to discredit Mann et al’s Hockey Stick), creating an illusion of a debate, of doubt far in excess of the actual uncertainty, that is at best disingenuous and, if I may be so bold, possibly criminal, particularly if the long-term effects of AGW end up being as dire as the middle-of-the-road projections suggest.

    In light of this, to suggest that the GW “Skeptic” crowd is truly Skeptical in the sense of how skepticism is employed in science is, at best, quite a stretch:

    http://www.skeptic.com/about_us/discover_skepticism.html

    Or are you seriously willing to suggest that people like Crichton or the makers of “The Great Global Warming Swindle” …

    http://www.realclimate.org/index.php/archives/2007/03/swindled/

    …represent a skeptical approach to the issue? To suggest that this current issue, which, more and more, appears to be the tempest-in-a-teapot the lead article makes it out to be, somehow refutes global warming (as many of the folks in the Denialist camp appear to want people to believe)?

    I could go on, but I think you get where I’m coming from. It’s not any one thing that causes problems for the AGW “Skeptic” crowd, but instead an overwhelmingly obvious pattern of tactics and less-than-forthright behaviors that cause problems for them with anyone who takes the time to watch and research their methodology in action over time. They are not doing any science of significance; they are not offering anything new to give the science something to chew on. Instead, they are sowing uncertainty, often playing fast and loose with the facts. People like Steve McIntyre can keep playing this game, doing their part in enabling behavior in the general population classically referred to as ‘Fiddling While Rome Burns’, and they will likely succeed in the continual seeding of doubt, at least for a time.

    But whatever it is you think they are doing, it is apparently not about engaging in legitimate skepticism.

    Comment by J.S. McIntyre — 16 Aug 2007 @ 10:45 AM

  322. DWPittelli (#318) wrote:

    Yes Gavin, I understand that the US is only 2% of the world’s area and so the 0.15 C in the US is trivial globally. However, no one noticed a 0.15 C error in the US data for several years, and I am skeptical that data in most of the rest of the world is under much better scrutiny.

    If you just think in terms of the normal distribution, assuming thermometers could only measure with an accuracy of a degree, then the larger the number of thermometers, the more accurate the average. A large enough number of thermometers could make the uncertainty regarding the average temperature arbitrarily small, since the uncertainty of the average is proportional to the inverse of the square root of the number of thermometers.

    Statistics 101.
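    A minimal sketch of that 1/sqrt(N) behaviour (illustrative numbers only, nothing to do with the actual GISS processing): each simulated thermometer is off by up to a degree or so, yet the error of the average shrinks as the count grows.

        # Illustrative only: the standard error of the mean falls off as 1/sqrt(N).
        import random, statistics

        def error_of_average(n_thermometers, true_temp=15.0, trials=2000):
            errors = []
            for _ in range(trials):
                readings = [true_temp + random.gauss(0, 1.0) for _ in range(n_thermometers)]
                errors.append(statistics.mean(readings) - true_temp)
            return statistics.pstdev(errors)

        for n in (1, 10, 100, 1000):
            print(n, round(error_of_average(n), 3))     # roughly 1.0, 0.32, 0.1, 0.03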

    I assume that those who have actively participated in this debate for a while will be aware of this.

    Others might want to check out the following for a look at this from a refreshing perspective…

    The Power of Large Numbers
    July 5th, 2007
    http://tamino.wordpress.com/2007/07/05/the-power-of-large-numbers

    PS

    I might like to point out that this is a special case of the principle that a conclusion which receives justification from multiple independent lines of investigation is often capable of a degree of justification far greater than that which it would receive from any one line of investigation considered in isolation. The evidence from surface stations, satellite measurements of the troposphere, sea surface temperatures, etc. all add up, and much more quickly than someone just off the street might think.

    Comment by Timothy Chase — 16 Aug 2007 @ 11:06 AM

  323. Incidentally, while I most certainly don’t mean to suggest that it is common, no doubt there are some who believe or at least suspect that climatologists are deliberately manipulating the numbers in order to make it appear as if temperatures are going up or going up more rapidly than they actually are.

    Time to retire that belief. As the result of the recalculation, 1998 went down, not up.

    Additionally, for those who believe that climatologists simply aren’t concerned with accuracy, it is worth keeping in mind that the recalculation was part of a deliberate and systematic attempt to improve accuracy. It succeeded.

    And looking at the chart for the global average temperature trend, there really isn’t room for much doubt with regard to the direction of the trend or, for that matter, the approximate magnitude of the trend.

    Comment by Timothy Chase — 16 Aug 2007 @ 11:26 AM

  324. [[ Silly dim old skeptics can’t quite believe how computer models are able to model something as complex as the weather. Out to 100 years in the future.]]

    You have weather confused with climate. Weather is chaotic and can’t be predicted beyond about five days. Climate is a long-term regional average (formally, 30 years or more), and is deterministic. An example to distinguish the two: I don’t know what the temperature will be tomorrow in Cairo, Egypt. But it’s a safe bet that it will be higher than in Stockholm, Sweden.

    Comment by Barton Paul Levenson — 16 Aug 2007 @ 11:35 AM

  325. re: #281 FCH
    I must run off, and I’ll come back to this, but FCH: think about why your analogy doesn’t fit very well.

    - Open source is useful and cost-effective for some kinds of software.
    - It isn’t for others, i.e., the return on the effort to make it widely open, document it appropriately, respond to bug reports, etc isn’t worth it.

    In particular, UNIX/Linux source:
    - was/is used by programmers who typically want to use the resulting programs often, for real work, and as needed, make modifications to make them do additional work, or write related systems software
    - provided a way to get a large mass of widely-useful software onto some hardware platform or other
    - was/is of direct use to software engineers with the relevant expertise and motivation to make it work, with or without any help
    - is structured as large collections of usually-small, relatively-independent modules, of which many have purposes and code easily accessible to a high school student with no special background. Anybody who can read and compile C can add a new flag to a lot of commands.

    - UNIX: Ken & Dennis & Brian & co were happy to make source available for *lots* of things … but had they been told they had to make everything they ever did available, and that their code would be assessed for its quality, and that various random people would be suggesting changes they needed to consider … that would have been the end of that. As it happens, over time, some people got to be considered as people who might actually have useful suggestions, but only because we proved over some years that we understood what was going on and were actually useful. Without naming names, some people were appalled at some of the “improvements” that were done elsewhere…

    It is a labor of love for Linus and his lieutenants to do what they do.

    Let us also reflect that GCC (a fine piece of software), in the last few years, has finally started to acquire global optimizations of the sort found in {MIPS, HP, IBM, etc} compilers 20 years ago. Likewise, things like XFS for Linux didn’t happen because a few people were looking at filesystem code and decided they’d like to play at doing better :-)

    Now, if there were a UNIX/Linux systems programmer-sized community of people whose daily work is climate-simulation&analysis, who have both the relevant scientific backgrounds and programming skills and motivation to knowledgably examine source code and help improve it, that would be nice.

    [One of these years, I'll have to do an essay that backtracks the 60-year history of open-source, and look at why it works well where it does, and not where it doesn't ... but not here :-)]

    Comment by John Mashey — 16 Aug 2007 @ 11:38 AM

  326. #324, the problem is that you can’t just average climates. There are just too many variables. The climate on a Caribbean island is much too different from that in the Canadian Rocky Mountains. Regional and microclimates are important, as are variance, wind, humidity, clouds, etc.

    Comment by Dave Blair — 16 Aug 2007 @ 11:49 AM

  327. Ref: 320

    I understand. The historical data and proxy data may benefit from further statistical analysis, meaning the data used to determine the skill of the models; sorry if validate was a poor word.

    As far as the expense goes, given the urgency of the situation?

    It is obvious that the Earth is warming and obvious that some portion of that warming is anthropogenic. I am confident that the business as usual mentality is changing, a change for the better. I just don’t share 95% confidence in the estimates of the rate of global warming.

    Comment by dallas tisdale — 16 Aug 2007 @ 12:02 PM

  328. Over at Number Watch they have a chart showing the ever increasing “Difference Between Raw and Final USHN Data Sets” as well as photos of temperature sensors near outdoor air-conditioning fans, etc. They claim the first graph shows the adjusting factors are partly responsible for the warming trends, and their second complaint is that temperature stations aren’t monitored and corrected well enough (or is the first the answer to the second?) Comments please, ty.

    [Response: The adjustments to the US data are not responsible for global trends. Continental trends on every continent (except Antarctica) show similar patterns. The adjustments certainly do matter, and the largest one is related to changes in the time of day people in the US took measurements, other ones deal with station moves and instrument biases. Are they claiming that known problems shouldn't be corrected for? - gavin]

    Comment by Neil B. — 16 Aug 2007 @ 12:14 PM

  329. Dave Blair (#326) wrote:

    #324, the problem is that you can’t just average climates. There are just too many variables. The climate on a Caribbean island is much too different from that in the Canadian Rocky Mountains. Regional and microclimates are important, as are variance, wind, humidity, clouds, etc.

    Actually you can – if you perform multiple runs with slightly different initial conditions. It gives you the spread – and it gets more accurate the higher the resolution.

    With the NEC Earth Simulator from a few years back we were performing 32 trillion floating point calculations per second. Things have improved since then. Hadley is now initializing with different conditions drawn from empirical observations over consecutive days, and by running their models against measurements from past years they have found surprising accuracy in their projections over the near-term scale of a decade.

    And that is with models grounded in physics, not some attempt to fit the model to the data. Climate models do the former, not the latter.
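    A toy sketch of the ensemble idea (purely illustrative; this is not the Earth Simulator or Hadley code): run the same deterministic, chaotic “model” from slightly perturbed initial states and look at the ensemble statistics rather than any single run.

        # Toy ensemble: perturbed initial conditions in a chaotic map.
        # Individual runs are unpredictable; the ensemble mean and spread are not.
        import random, statistics

        def toy_model(x0, steps=100):
            x = x0
            for _ in range(steps):
                x = 3.9 * x * (1 - x)      # a standard chaotic map standing in for "weather"
            return x

        members = [toy_model(0.5 + random.uniform(-1e-6, 1e-6)) for _ in range(200)]
        print("ensemble mean:  ", round(statistics.mean(members), 3))
        print("ensemble spread:", round(statistics.pstdev(members), 3))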

    Comment by Timothy Chase — 16 Aug 2007 @ 12:14 PM

  330. #316 & This also means that there shouldn’t have been any hoo-ha when 1998 and 2006 turned out to be the warmest years [in the U.S.].

    I don’t think there’s been any hoo-ha over which year in the U.S. recorded the highest temps. Even in years when the entire world records the highest temp (which is more pertinent to the concept of “global” warming), there’s no hoo-ha on any channel I watch. (Until last week I didn’t have cable, so maybe the science channels mentioned it.)

    The sad truth is that the well-oiled media just don’t mention global warming much at all (and they talk about solutions even less), with only a slight pick up after Katrina (or A.K.), followed by a sharp deceleration. The public brainwaves have been flatlined by the media on global warming.

    The only thing that might grab media and public attention is very very severe global warming disasters, one upon another (like the equivalent impact of a Katrina happening every month … and assuming we are not at war at the time) … which means we will have already passed the runaway tipping point of no return by the time people take global warming seriously enough to start doing something about it.

    I sincerely hope I’m wrong.

    Which brings me to the best GW policy IMHO: Hope for the best, and expect (& try to avert) the worst.

    Comment by Lynn Vincentnathan — 16 Aug 2007 @ 12:20 PM

  331. Re: 313 “You are incorrect to say that ’1998 was originally calculated to be warmer than 1934′”

    That’s why I corrected it in the post immediately following.

    Just to reiterate,

    1998 (US) was originally calculated to be cooler than 1934.
    This remained so until at least 2001.
    By 2006, it was recalculated to be slightly warmer (that is, it had a “bigger number”)
    Now, it is slightly cooler again.

    Does anyone know when 1998 “surpassed” 1934, since skeptical sites seem to be making a big deal about how “long” 1998 was on top?

    “In the next two years, 1998 gained 0.6 deg C on 1934.”

    Certainly you mean 0.06 degrees.

    [Response: Actually it's larger than that - about 0.3 deg C. The difference that the TOBS adjustment and station history moves made in the US ranking was quite large. For the 1930's to 1990's there is about 0.2 deg C for TOBS bias correction, 0.1 deg C for station history adjustment, 0.02 for instrument changes, -0.03 for urban effects as applied to the USHCN raw data - all of this is described in the 2001 paper (see plate 6). McI's claim of 0.6 is probably from a misreading of figure 6 in the 1999 paper which used the convention of Dec-Nov 'years' rather than the more normal Jan-Dec average. The appropriate comparison is Figure A2 (d) (1999) and Plate 6(c) (2001). - gavin]
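    As a back-of-the-envelope check of the figures quoted in the response above (nothing official, just the arithmetic):

        # Rough sum of the quoted 1930s-to-1990s adjustments (deg C)
        adjustments = {"TOBS": 0.2, "station history": 0.1, "instruments": 0.02, "urban": -0.03}
        print(round(sum(adjustments.values()), 2))     # 0.29, i.e. roughly the 0.3 deg C cited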

    Comment by cce — 16 Aug 2007 @ 12:29 PM

  332. You are reported in the Daily Telegraph as saying that global warming is “a global phenomena”. I hope the hack got it wrong and you in fact called it “a global phenomenon”, which it is BTW.

    [Response: Possibly I can blame the phone connection.... - gavin]

    Comment by barkerplace — 16 Aug 2007 @ 1:21 PM

  333. #327 & I just don’t share 95% confidence in the estimates of the rate of global warming.

    It does seem that the rate is fairly slow in lay (though perhaps not geological) terms. And we seem to be only at the beginning of seeing the GW trend come out of the noise. I think the first studies to reach .05 significance on AGW were in 1995. So it may be too early to tell how fast the warming might speed up, or stay steady, or decelerate. That’s why they have a big range in projected scenarios (involving variations in sensitivity and GHG emissions).

    My thinking has been that scientists tend to underestimate the problem, since they can only work with quantifiables. There are many factors that are thought to have impact, but are not (easily) quantifiable. Like the melting of the ice sheets & mechanics of their disintegration.

    Now using my kitchen physics, when I defrost my Sunfrost frig (www.sunfrost.com) everything looks very stable, then after a long time lag (kitchen, not geological, time), the top ice sheet just breaks off KA-BOOM. Something like catastrophe theory in mathematics (which I know very little about) might be more applicable to cryosphere dynamics than linear algebra.

    Then, of course, when the world’s ice and snow start vanishing en mass, that leaves dark land and sea to absorb more heat.

    Other factors include a warming world releasing nature’s stores of GHGs — as in ocean hydrates and permafrost. Again, I guess it’s hard to quantify what these rates will be. But I imagine there will be threshold points for this — the point at which ice melts, say, at various levels of the underground or sea (tho sea currents impact this too) — and recent studies find some ocean methane hydrates at shallower levels than previously thought, and other studies indicate stored permafrost carbon going a lot deeper than previously thought. Undersea landslides are also a factor.

    And then at a certain point of land/vegetation desiccation (warming causes more WV to be held in the atmosphere, taking it out of the land & plants), and fiercer wind storms, we can expect greater forest and brush fires. Those not only reduce the CO2 sequestering plant-life, but also release CO2 into the atmosphere. And again, I imagine that such events would be hard to quantify, though I’m sure scientists are busy working on that.

    There’s just a whole lot left out of the codes and equations and calculations that might not only indicate increases in the warming, but also accelerations of it — like some wild domino effect (I remember the nuclear fission demonstration as a kid with mousetraps and ping-pong balls).

    I recently read on ClimateArk.org that British scientists have predicted the temps will level off for 2 years, then after 2009 rise sharply, and all hell is about to break loose. See:
    http://www.climateark.org/shared/reader/welcome.aspx?linkid=81736 and
    http://www.climateark.org/shared/reader/welcome.aspx?linkid=81891

    But, again, does their “code” contain all these hard-to-quantify factors? Are they being too scientifically cautious with “fudge factors”, if they included them at all? Maybe it’ll even be worse than they suggest, but I sure HOPE (Help Our Planet Earth) that they are wrong and we luck out.

    Comment by Lynn Vincentnathan — 16 Aug 2007 @ 1:22 PM

  334. Gavin said:

    Therefore any one station, even if rural, has its trend set by the average of raw rural station data in the area

    Even if there is no indication of anomalous data in the rural station?

    [Response: The GISS analysis is done to get the best regional trend, not the most exact local trend. Once again, read the reference: "the smoothing introduced in homogeneity-adjusted data may make the result less appropriate than the unadjusted data for local studies". If you want to know exactly what happened at one station, look at that one station. If you want to know what happened regionally, you are better off averaging over different stations in the same region. The GISS adjusted data is *not* the best estimate of what happens at a locality, but the raw material for the regional maps. Think of an example of a region with two rural stations - one has a trend of 0.15 deg/dec, the other has a trend of 0.25 deg/dec - the regional average is 0.2 deg/dec, and in the GISS processing, both stations will have the trend set to 0.2 deg/dec prior to the gridding for the maps. Different adjustments for different purposes. - gavin]
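    To make the two-station illustration in the response concrete (this is only the idea being described, not the actual GISS processing code):

        # Two rural stations, their trends, and the regional trend used for gridding.
        station_trends = {"rural_A": 0.15, "rural_B": 0.25}        # deg C per decade
        regional = sum(station_trends.values()) / len(station_trends)
        gridded_input = {name: regional for name in station_trends}
        print(round(regional, 2), gridded_input)   # 0.2 for the region; both stations set to it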

    Comment by nanny_govt_sucks — 16 Aug 2007 @ 2:25 PM

  335. Gavin,

    In reading Hansen 2001 I came across this:

    “The strong cooling that exists in the unlit station data in the northern California region is not found in either the periurban or urban stations either with or without any of the adjustments. Ocean temperature data for the same period, illustrated below, has strong warming along the entire West Coast of the United States. This suggests the possibility of a flaw in the unlit station data for that small region. After examination of all of the stations in this region, five of the USHCN station records were altered in the GISS analysis because of inhomogeneities with neighboring stations (data prior to 1927 for Lake Spaulding, data prior to 1929 for Orleans, data prior to 1911 for Electra Ph, data prior of 1906 for Willows 6W, and all data for Crater Lake NPS HQ were omitted)”

    Now, the elimination of this data on the presumption of contamination doesn’t lead to much of a change. That is not my issue. My issue is the documentation of the process. Since the “other sites” in the region are not listed, I can’t really duplicate the analysis.

    But I tried. So I looked at Lake Spaulding and sites within 70 km or so. I had no idea what kind of neighborhood was used around each site to do the checks. Anyway, if you compare Lake Spaulding with Tahoe City, Colfax, and Nevada City you can see that Lake Spaulding has a cooling trend from 1914-1931 (not 1927) that differs from these other stations. Then Lake Spaulding starts correlating with nearby stations in a more regular fashion. The problem is, I’m curious about what objective statistical method was used to judge homogeneity. The text doesn’t say. Related question:
    When you ingest data from USHCN, do you just ingest monthly means, or do you pull in daily detail data (tmax, tmin)?

    Second, I looked at Crater Lake NPS HQ (which is a very cold place relative to its surrounding sites). Which sites did Hansen 2001 compare Crater Lake to? The two I glanced at showed no trend differences with Crater Lake; they were warmer than the Crater Lake station. In fact the difference in bias (there was no trend difference)
    between Crater Lake and the other station was fully accounted for by altitude differences (using Hansen’s 6 C per km). Crater Lake just happens to be an isolated, snowy, cold place. But since the warming trend is what matters (hey, the Arctic is cold), I was wondering if I could get a couple of pointers before I plunge into Crater Lake in any depth.

    Can I get a pointer to the EXACT test used for homogeneity, and a pointer to the exact list of stations that Crater Lake NPS HQ was compared to (or a radius from its location)?

    Comment by steven mosher — 16 Aug 2007 @ 4:00 PM

  336. #329, Timothy, weather prediction is also based on physics and has some large computing power running its models too. The prediction for the weather this afternoon will probably be pretty accurate, but 2-3 weeks from now? Same for climate predictions: you say they are accurate for the next decade, and maybe so, time will tell, but when we hear the predictions for 50 or 100 years from now, that’s when you wonder why all the movie stars and politicians are getting involved.

    Comment by Dave Blair — 16 Aug 2007 @ 4:17 PM

  337. Two questions. Beginner´s questions, actually:

    - Is there any chance of mankind quitting fossil fuels before the planet runs out of them? (Unlikely in my view, but I would like to hear more educated guesses)

    - What levels of CO2 would we reach if all that buried carbon is released to the atmosphere? (and what would that mean in terms of GW?)

    Comment by Alexandre — 16 Aug 2007 @ 4:21 PM

  338. #335
    Well the obvious question here is: Have you sent an email to the authors of the paper you discuss?

    Unless they are retired or dead, the authors of a paper are normally the best people to ask. Unless you have asked them, you will rapidly become suspected of preferring to make a lot of noise rather than actually getting answers.

    Comment by DavidU — 16 Aug 2007 @ 4:40 PM

  339. In another argument (sci.physics.foundations), comment was supplied on http://www.numberwatch.co.uk/manmade.htm

    What I found interesting here was that the website was showing a substantially different graph of global satellite temperature than the RSS and UAH curves. Can anyone enlighten me on where their graph may have come from?

    Also in that context, http://www.dailytech.com/Blogger+Finds+Y2K+Bug+in+NASA+Climate+Data/article8383.htm
    claims “Hansen refused to provide McKintyre with the algorithm used to generate graph data, so McKintyre reverse-engineered it. The result appeared to be a Y2K bug in the handling of the raw data.”
    Now, given that the algorithms seem to be published, I find the claim weird. Anyone know the back story here?

    Comment by Phil Scadden — 16 Aug 2007 @ 4:55 PM

  340. Dave Blair, your belief that studies of weather and climate are the same is one that’s frequently asserted; it’s been answered elsewhere repeatedly; response would be off-topic here.

    You could find that clarified in the basic info at
    http://www.realclimate.org/index.php/archives/2007/05/start-here/

    Comment by Hank Roberts — 16 Aug 2007 @ 4:56 PM

    [[#324, the problem is that you can’t just average climates. There are just too many variables. The climate on a Caribbean island is much too different from that in the Canadian Rocky Mountains. Regional and microclimates are important, as are variance, wind, humidity, clouds, etc.]]

    Hit the wrong button. Apologies if this message shows up twice.

    The things you cite — wind, humidity, clouds — certainly affect the local temperature. But the temperature itself always measures the same thing — the heat content of the surface or the low atmosphere. It’s perfectly valid to average the temperatures of different regions, since it’s the same sort of thing being measured.

    Comment by Barton Paul Levenson — 16 Aug 2007 @ 5:05 PM

  342. [[ What levels of CO2 would we reach if all that buried carbon is released to the atmosphere? (and what would that mean in terms of GW?)]]

    We have at least enough coal and oil to quadruple the CO2 in the atmosphere, which would most likely raise global temperature about 5 K. We’d probably lose the polar ice caps; summer at each pole would always be enough to melt all the ice. A lot of the present coastal cities would be under water, including New York, Miami, Houston. The entire country of Bangladesh would be submerged.

    Comment by Barton Paul Levenson — 16 Aug 2007 @ 5:12 PM

  343. RE Steven Mosher (#335)

    As we have both noticed, Gavin tries to keep the door open for everyone. But I would like it if we could keep this polite.

    I believe it would be in everyone’s interest.

    Of course I realize that you are making an effort in this way at present. But in light of whats happened before, I think you can understand and perhaps even share my concern.

    Comment by Timothy Chase — 16 Aug 2007 @ 5:26 PM

  344. If denialists, instead of giving climate scientists nonsense arguments, would go outside and plant some trees, perhaps we will survive a few more centuries.

    http://environment.newscientist.com/article/dn12496-forget-biofuels–burn-oil-and-plant-forests-instead.html

    Comment by catman306 — 16 Aug 2007 @ 5:27 PM

  345. Phil, look up the ‘Global Warming Swindle’ stuff, I think the first graph may be from that program; it’s being posted about on a lot of skeptic discussions but with no attribution that I can find for it.

    Comment by Hank Roberts — 16 Aug 2007 @ 6:07 PM

  346. Because a) the raw data are publicly available and b) papers are supposed to contain enough detail to allow others to repeat the analysis. If a paper says ‘we then add A and B’, you don’t need code that has “C=A+B”. – gavin

    And this is why so many serious scientists have trouble dealing with the public. This may be true and it’s probably even ideal in the academic and scientific world. But a refusal to release the code in the political arena is suicide. You went through this with Mann’s code, so why repeat the same mess? Are those lessons so easily forgotten? Heck, the refusal to release the code was discussed in the halls of Congress almost ten years after the fact!!!! Is it really so easy to forget? The vast majority of people are going to assume you are hiding something when you refuse to release the code. A large portion of the legitimate skeptics will view the paper as an attempt to confuse and stall those who are trying to replicate your work. If you have a legitimate reason not to release the code, then say what it is. Squeezing extra papers from software is a legitimate reason. Release part of the code if you have to. Refusing to release the code simply because you feel like keeping it secret is political suicide.

    Comment by Sparrow (in the coal mine) — 16 Aug 2007 @ 6:09 PM

  347. sparrow’s points explain why so much money has been spent to encourage denialists: more pressure on climate scientists, lessened probability that real steps might be taken to actually do something about global warming, steps that will, in all likelihood, cost somebody money. We have denialists because their existence helps save somebody money.

    Comment by catman306 — 16 Aug 2007 @ 6:46 PM

  348. Phil, that “28 years” graph is found in a PowerPoint file written by a David Archibald for the Lavoisier Group (slide #1), but he gives no source for it or much else. Without a source, it’s just argument.

    Comment by Hank Roberts — 16 Aug 2007 @ 6:49 PM

  349. The skeptics are troublesome for GISS and climate scientists everywhere.

    Solution: Transparency.

    It is as simple as that.

    [Response: We publish hundreds of papers a year from GISS alone. We have more data, code and model output online than any comparable institution, we have a number of public scientists who comment on the science and the problems to most people and institutions who care to ask. And yet, the demand is always for more transparency. This is not a demand that will ever be satisfied since there will always be more done by the scientists than ever makes it into papers or products. My comments above stand - independent replication from published descriptions - the algorithms in English, rather than code - are more valuable to everyone concerned than dumps of impenetrable and undocumented code. - gavin]

    Comment by John Wegner — 16 Aug 2007 @ 7:09 PM

  350. Justin, I think the two go together (#294): the method and the calculations to implement the method’s goals. The code purports to do something. If you have the specific code used, you can analyze it and ensure that it accomplishes what it’s supposed to be doing. Um, if I write a program to generate random numbers, I can run that program over a period of time and validate that they are random. I am trying not to go into anything OT here…

    Thanks for the idea! I can see why some would have problems with those issues, Timothy (#297). But let me cover those issues from what I see happening. 1. I don’t know enough about it to know if anyone did or didn’t do anything to Dr. Mann’s graphs. But the point of that is you want an outsider, who is an expert at a discipline, to examine the scientific validity of something from their angle. How good he is at it or not, like I said, is not my area and a different issue anyway. 2. He’s not trying to do anything with the stations. Photographic documentation is part of the standards, and it lets you go relook and see how things are later without having to go back. Plus some aren’t sited well. 3. That may be. I think we have to wait and see once every one of them is looked at, so everything can be analyzed. I am only interested in having the best data available. No matter which way it goes. 4,5 I don’t get that out of it.
    Let’s see. It’s not his project, he doesn’t have control over who goes to what site where. Sure, perhaps some of it is a little over-enthusiastic, but maybe just because it’s interesting and exciting. Sure, instead of “good” it’s maybe better to use “meeting siting standards and minimizing adjustments”, and not “bad” but “not meeting siting standards and needs a lot of adjustments” would be more neutral, but it gets tedious to read that. However, if 7 people get 7 sites and 6 are “bad” and 1 is “good” and he compares them to see what effect that may have, I don’t consider that “cherry picking”; it’s just what’s there. I can’t fault anyone for focusing on the one issue of the sites and possible effects and not bringing in glaciers, sea ice, ocean temperatures, carbon dioxide and methane levels, albedo and the rest into every discussion and aspect of this.

    But Pete (#298), some of this assumes that solar, wind, hydrogen, hydro, nuclear, clean coal, and the like won’t become more viable and less expensive in the future, that no new oil discoveries of any import will ever happen, and that the oil in shale won’t at some point become comparable in cost to that in the ground. Who knows, perhaps those with untapped liquid oil are buying it from others to save their own? I think there’s a reason it’s called “black gold”.

    Now you’ve lost me Timothy (#299). Why wouldn’t an oil company (or any industry) fund endeavors to protect its interests? Governments and schools do it all the time. If I’m doing experiments that could turn out to be helpful to some entity, why wouldn’t they fund me? They’re going to fund somebody, even if it’s those trying to find new sources of oil, doing R&D into related fields (solar panels for example), and so on. I don’t know about you, but I like having gasoline, electricity, available food and the like. It’s just economics. Think about this: when the government of the United States, which makes money off of every bit of gasoline sold in the entire country, starts complaining about oil company profits (which it benefits from, every single one), why are its motivations any less suspect than those of, say, Chevron/Texaco?

    You are misunderstanding me, I believe, from what you said in (#303), John. I’m not making any comment on the validity, scope, or importance of the trend from year x to year y or how somebody interprets it. Or the validity, scope, or importance of the measurements themselves or how somebody interprets them. I’m just saying that if you go year to year you can say one thing, and if you pick specific periods or specific lengths of time, you can say other things. I totally agree that we need to know as much as possible exactly what’s going on, and as accurately as possible. That the trend of the measurements shows we are warming over time is not really up for debate; anyone can chart it at NOAA.

    Speak for yourself, Lawrence (#308); I certainly want to see our progeny as nomads roaming the Arctic on a subsistence level…. But that’s beside the point; it’s always been easy to get ModelE, I believe.

    Well cce, it’s like this (#311): the difference between the two years is not great enough to matter, nor is the effect upon the global numbers even if it were. But the point is that finding and correcting errors is the goal of many people, regardless of what “they’re trying to do” or what others think “they’re trying to do”. On “either side”.

    Jake, it’s all about the trend dude. (#316)

    I don’t think this is the same conversation going on in (#317), since I agree with both DWPitelli’s comment and Gavin’s response.

    You’ve lost me also, caerbannog (#319). Your first paragraph is fine; the second is just, just, just… not very helpful in supporting the argument in the first.

    J.S. McIntyre (#321). I see it all the time from everyone. Nobody has a lock on the rhetoric about this subject.

    In an issue such as this, there’s going to be a very very wide range of people that believe a whole lot of things(#323). Matters of opinion are always such.

    (#324), I don’t think that’s a very good analogy Barton. “Climate” has nothing to do with saying “It will be cold in the Arctic and Antarctic this December” or “Water boils at 100C”. (Comparing Egypt and Sweden on the same day.) Nah, I don’t have a better one. But your comment in (#341) bears a bit of explanation: “the temperature” itself always measures the same thing, namely the temperature of the area you are measuring, in the material you are measuring, with the device you’re using to measure it.

    I think that rather than how you phrased it Dallas (#327), it is obvious the anomaly is trending up, and if the measurements are stable as to the anomaly, then we are warmer “now” than we were “then”.

    Alexandre, in #337 most of those two questions are answered in my posts, I think. The short answer is…. Nobody knows the answer to either. But we can make projections, some of which will be true and others not, to various levels of accuracy and margins of error. That’s what everyone is discussing all the time. All that you can get is the opinions of those who have looked into various aspects of this.

    Comment by Hal P. Jones — 16 Aug 2007 @ 7:32 PM

  351. Re #335: [Anyway If you compare Lake Spaulding with Tahoe City, Colfax, Nevada city you can see that Lake spaulding has a cooling trend from 1914 -1931 ( not 1927) that differs from these other stations.]

    I happen to live in the area, and thought I remembered something about this lake. Sure enough, the first hit from Google gives me this:

    “Lake Spaulding rests at an elevation of 5,014 feet in a glacier carved bowl of granite. The lake has a surface area of 698 acres surrounded by giant rocks and a thick pine forest. The lake was originally built for hydraulic mining in 1912…”

    OK, so people built a dam, and changed a square mile or so of rocky river valley into a lake. Water has a lot more thermal inertia than rock, and at those middle elevations it would probably get a good thickness of ice accumulating every winter. Wouldn’t you expect a cooling trend in nearby temperatures, quite independently of any larger trends?

    Comment by James — 16 Aug 2007 @ 7:40 PM

  352. Re 321: Amen.
    Nobody ever erected a statue to a critic.

    Comment by Lawrence Brown — 16 Aug 2007 @ 8:11 PM

  353. RE 349

    Gavin. [edit]

    It’s two pages of code… At least tell us which method you used for inhomogeneity testing.

    Easterling? Kohler? Salinger? Zhang? SNHT? Berry?
    Vincent? There are bunches. Which was used?

    [Response: RTFR. GISS only adds the urbanisation adjustment and does not do any homogeneity testing, the station history adjustments are taken from USHCN as is clearly described. The only further culls were for obvious weirdness such in the early Cal. data as was, again, clearly described. What is so hard about reading the papers? - gavin]

    Comment by steven mosher — 16 Aug 2007 @ 8:39 PM

  354. Re: Steven Mosher

    Steve,

    One thing.

    I don’t ever really harbor any hard feelings towards anyone. And assuming you feel the same way and can make it out to Seattle some time, I would be willing to spring for a couple of drinks.

    Strictly non-alcoholic. Hopefully tea or coffee would be alright.

    Here’s my email address:

    timothychase at g mail.com (No spaces, well you can figure that out.)

    Honestly – feel free to take me up on this.

    Comment by Timothy Chase — 16 Aug 2007 @ 8:41 PM

  355. RE 322 and others: I have repeatedly seen the assertion that if you have large enough numbers, errors in data will be statistically insignificant. However, let’s say I am conducting an experiment in the biology lab measuring how the production of an enzyme by bacteria in petri dishes is affected by different nutrients, and some of those dishes were contaminated by fungus that competed for those nutrients. If only a few were affected, that might not change the results. But if 50% or 30% or even 20% were contaminated, no matter how many dishes I had I would always get incorrect results. One of the ways to protect against this is to visually examine the dishes (including with a microscope) to look for contaminants. At some point it is important to come out from behind the computer screen and check where the data is coming from and how it is collected, and look for possible errors in tabulation/computation.

    A very recent example of finding that the real world does not always fit the computer model is: http://www.nasa-news.org/documents/pdf/Wentz_How_Much_More.pdf

    Comment by Gary — 16 Aug 2007 @ 8:44 PM

  356. Fix the bad data.

    From NOAA:

    In summary, climatic normals are intended to serve as a baseline for both spatial and temporal comparison. They are values derived from the frequencies of identically distributed observations over a 30-year period of record. At most locations, however, non-climatic influences such as station relocations, siting changes, instrument changes and recalibrations, etc. preclude obtaining a climatically homogeneous record of daily observations for 30 years. The statistical problem of detecting the full range of these inhomogeneities from the observational record is currently intractable.

    Comment by steven mosher — 16 Aug 2007 @ 8:59 PM

  357. You don’t “fix” data — nobody’s going to go back through history and change what’s recorded.

    You replicate — with better instruments. That’s what’s being done, rolling out new stations with better gear and more consistent criteria.

    You run in parallel for a while. That gives you a parallel record of the old instrument sites and the new ones.

    Then you can evaluate the old data because you’ve been able to check each of the old instrument setups running in parallel with the new instruments.

    This is, perhaps, exactly what the know-nothings want to avoid by insisting the old data be fiddled or discarded.

    Because a consistently biased observer is still a reliable observer —- and the long record made by a consistently biased observer becomes more valuable once you’ve run the old observer and the new observer in parallel.

    Get a grip, folks, the new instruments going out set up with the new criteria are going to make all the old data _more_ useful.
    Just as it is.
    Without throwing it out. Without fiddling with it.
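    A sketch of that overlap idea (purely illustrative, not any agency’s actual procedure): estimate the old instrument’s consistent bias from the period when old and new run side by side, and the long old record stays useful exactly as recorded.

        # Toy overlap calibration: estimate a constant instrument bias from a parallel run.
        import random, statistics

        truth = [15 + random.gauss(0, 2) for _ in range(365)]           # one year of "true" temps
        old_sensor = [t + 0.4 + random.gauss(0, 0.1) for t in truth]    # consistent +0.4 deg bias
        new_sensor = [t + random.gauss(0, 0.05) for t in truth]         # better instrument

        offset = statistics.mean(o - n for o, n in zip(old_sensor, new_sensor))
        print(round(offset, 2))   # close to 0.4: the old record can be interpreted, not rewritten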

    Comment by Hank Roberts — 16 Aug 2007 @ 9:56 PM

  358. Timothy Chase Says:
    11 August 2007 at 4:03 AM
    I had written (#19):

    Earth will feel the heat from 2009: climate boffins
    By Lucy Sherriff
    10th August 2007 15:31 GMT
    http://www.theregister.co.uk/2007/08/10/climate_model/

    It appears that climatologists are in the process of improving their short-range forecasting.

    papertiger (#29) responded:

    Interesting. It seems to be inproved just enough to cover the time period right after the next election.

    You sure this is a non political website?

    Different country – Hadley out of England – notice the UK in the website address. But I suppose they could be part of a global conspiracy. You have to watch those conspiratorial types pretty darn closely – particularly the scientists when they start getting uppety….

    So you think that climate models can predict weather? You endorsed their site as if it were authoritative. That same office predicted an unremarkable 2007 summer for the UK. As we have seen, the UK had an exceptionally wet summer, well below average temperatures, and widespread floods.

    Why do you endorse this model, if not due to politics?

    Comment by papertiger — 16 Aug 2007 @ 11:14 PM

  359. Come on, that’s a link to The Register.
    Good grief, if you don’t read the actual science paper, at least read the comments of someone who has:

    “… So I read the actual paper containing the new predictions. It turns out that the press reports are considerably overblown (surprise!); they give the unmistakeable impression that the HadCRU team has made definitive predictions of the future progress of global warming over the next decade or so, as though we now know with confidence how global average temperature will evolve up to 2014. If you read the actual paper you’ll find that is simply not so…. ”

    http://tamino.wordpress.com/2007/08/14/impure-speculation/

    Comment by Hank Roberts — 16 Aug 2007 @ 11:43 PM

  360. Gavin said:

    Think of an example of a region with two rural stations – one has a trend of 0.15 deg/dec, the other has a trend of 0.25 deg/dec – the regional average is 0.2 deg/dec, and in the GISS processing, both stations will have the trend set to 0.2 deg/dec prior to the gridding for the maps.

    Why not just say the regional average is 0.2 deg/dec and leave the individual station data alone?

    Anyway, I thought the purpose of the GISS adjustments is to correct any urban heating, not just to homogenize all stations.

    [Response: The GISS-adjusted record is simply the last step in the process before you grid the data, it is not a claim that this is what was the most accurate history for that station. The purpose is to provide a regional and global analysis that is based on rural trends. - gavin]

    Comment by nanny_govt_sucks — 16 Aug 2007 @ 11:47 PM

  361. Paper indeed sir! Humans are pattern-seeking creatures. Scientists deal in substantiated trends. See the difference between this and your fallacy? If not, review and report back.

    Comment by Mark A. York — 17 Aug 2007 @ 12:04 AM

  362. The graph of global mean temperatures has disappeared from Gavin’s text. Let’s hope it will soon re-appear with a y-axis similar to that in the US graph. This is basic stuff in statistical graphics: if you compare two time series, you visualize them in similar coordinates. And no values off the chart, please.

    It’s not that GISS does not know how to visualize their global data. There is for example one graph, showing that global warming more or less stopped about six years ago:
    http://data.giss.nasa.gov/gistemp/graphs/Fig.C_lrg.gif

    Comment by Dodo — 17 Aug 2007 @ 1:23 AM

  363. Re in-line comment to #349 [My comments above stand - independent replication from published descriptions - the algorithms in English, rather than code - are more valuable to everyone concerned than dumps of impenetrable and undocumented code. - gavin]

    But Gavin (and you know I’m no denialist, nor do I think this matter undermines the immense weight of evidence from AGW from numerous independent lines of research), code should not be impenetrable and undocumented. If it is, that’s a flaw in the work. Whether it’s a big flaw or a little flaw depends on context, but it’s a flaw.

    Comment by Nick Gotts — 17 Aug 2007 @ 4:03 AM

  364. #359 – Errr… it looks more like the anomaly had a large peak in 1998 and has then returned to a rising trend. It would also be helpful to show data from before 1997.

    Comment by Nick — 17 Aug 2007 @ 4:35 AM

  365. The demands for transparency and the cackling over a teensy adjustment in temps don’t reflect well on a group that allowed an order of magnitude change in a y-axis scale to back up one of their talking points. And that’s simply as a question of form and rhetoric. Counter science with science, folks, not with Death by Quibble.

    Comment by Jeffrey Davis — 17 Aug 2007 @ 8:01 AM

    “So you think that climate models can predict weather? You endorsed their site as if it were authoritative. That same office predicted an unremarkable 2007 summer for the UK. As we have seen, the UK had an exceptionally wet summer, well below average temperatures, and widespread floods.”

    Actually, despite the fact that the summer isn’t yet over, they predicted the temperature to be about average – and it has been so far. They also predicted the northern part of the UK to tend towards wetter than average. The southern part was predicted to tend towards drier and that’s where the main error on the part of the forecast has been. The UKMO state that the seasonal forecast is experimental and has a success rate of roughly 60%.

    However, that is using a different forecasting model (it’s based on NAO predictions and SSTs partly using statistical techniques), and by a completely different group (the seasonal forecasts are carried out by the UKMO whereas the climate research is done by the Hadley Centre which is a semi-autonomous division – that might seem a trivial distinction but it’s not). The DePreSys model uses the HADCM3 model but with improved data initialisation. It is more akin to running the current UKMO GM for ten years than their seasonal forecast.

    That said, the project has been running for over four years and is based in the UK, so any link with US elections is rather fanciful and a bit silly.

    Finally, their paper is really a marker for a work in progress. It’s possibly the first time that anyone has done a climate prediction of this type. They acknowledge the limitations of the DePreSys forecast and I’d expect the forecast to change with newer runs.

    Comment by Adam — 17 Aug 2007 @ 8:40 AM

  367. [ gavin : My comments above stand - independent replication from published descriptions - the algorithms in English, rather than code - are more valuable to everyone concerned than dumps of impenetrable and undocumented code. ]

    Huh? I’m a computer programmer, and the first thing I do when tackling a problem is skip the docs and go straight to the code, as that’s where the skeletons lie. What you’re advocating is like Ford telling a jury that they shouldn’t look at the blueprints of their Pinto or at executive memos but rather at press releases.

    This is akin to security by obscurity. It is the last thing that I would expect from scientists.

    [Response: Again, you miss the point. It is better for someone to come up with the same answer by doing it independently - that validates both approaches. - gavin]

    Comment by BlogReader — 17 Aug 2007 @ 8:47 AM

  368. Skeptics leaping on the need for NASA to revise their figures have reached Sweden, where I work. Letters to the editor in regional newspapers (hd.se), no less. Under the heading “En Obekväm Nyhet” which means “An Uncomfortable Piece of News”, the letter writer neglected to mention that the new figures were for the US, and simply said that temperatures in 1934 were as warm as 1998, that this news (studiously avoided by the press because it’s so uncomfortable) would “disappoint” people with so much invested in the climate threat, and even that “the climate catastrophe has been cancelled”!

    Needless to say, there’s a critical reply/correction on the way to the editor!

    Keep up the good work RealClimate team.
    P.

    Comment by Paul Miller — 17 Aug 2007 @ 8:48 AM

  369. RE351.

    The cooling TREND at Lake Spaulding (compared to nearby sites) is nearly linear. That would indicate something like a sensor going bad. Post-1931 it matches the other sites in the area nicely. Station history records might show replacement of the sensor. Haven’t checked that yet.

    Comment by steven mosher — 17 Aug 2007 @ 9:20 AM

  370. RE 175. Gavin inlined:


    “Response: ‘Algorithms’ are just descriptions in various flavours of formality. If there is a question of whether the code correctly implements the algorithm it will be apparent if someone undertakes to encode the same algorithm and yet comes up with a different answer.”

    Well, if we both made the same error they would match. If we used different math libraries and one had a flaw, they would mismatch for a different reason. If they didn’t match, how would we resolve the mismatch? By sharing each other’s code. Plus, NASA software policy encourages you to share code.

    Next:

    “Absent that, there is no question mark over the code. So if I generically describe a step as ‘we than add A and B to get C’, there is only a point in checking the code if you independently add A and B and get D. That kind of replication is necessary and welcome. With that, all parties will benefit. ”

    I would have to do more than merely claim that the results didn’t match, right? I suspect that you would want to see my code. Can you imagine if 10 people tried to match what you did and then all sent you their code to check? That would be rather a waste of time for you.

    Next:
    “Simple demands to see all and every piece of code involved in an analysis presumably in the hope that you’ll spot the line where it says ‘Fix data in line with our political preference’ are too expansive and unfocused to be useful science. – gavin”

    I don’t expect to find comments like that. Fundamentally I believe in transparency. The default should be to release the code, unless there is an IP issue.

    [Response: Why talk about hypotheticals? Do your emulation and see. - gavin]

    Comment by steven mosher — 17 Aug 2007 @ 9:42 AM

  371. J.S. McIntyre (#319). I see it all the time from everyone. Nobody has a lock on the rhetoric about this subject.
    ==============

    Actually, for the sake of reference, it was post 321.

    No one said otherwise, and I find it interesting you would infer that was my sole point.

    As I outlined in my remarks, there is a very large difference between what we see emerging from the people promoting the science of Global Warming and the so-called “AGW Skeptics”, far beyond just “rhetoric”.

    Comment by J.S. McIntyre — 17 Aug 2007 @ 10:09 AM

  372. re #367,

    Do climatologists do the programming themselves, or do they give the algorithms to programmers or comp. sci. students (in the case of a university) to program?

    [Response: Depends on the size of the group. The Hadley Centre and NCAR have specific programmers, GISS is a smaller institution and the scientists do most of the work themselves though they do get some help from GSFC programmers. - gavin]

    Comment by Dave Blair — 17 Aug 2007 @ 10:18 AM

  373. Re 358 papertiger: “the UK had an exceptionally wet summer, well below average temperatures” – in fact, temps for June were 1.1 degrees above the 1961–1990 average; temps for July were 0.3 degrees below the 1961–1990 average.

    Is this the same papertiger as in ref 24:

    Just to expand on my last point at 12, (somehow) many see this as indicating that we can’t trust anyone (especially Hansen) to handle the data properly.
    I think the point is that we shouldn’t have to trust someone – as in a single person or entity such as NASA GISS – to develop what is in effect policy for our country.
    It’s undemocratic and unscientific.
    A person could make a mistake. Ahem.

    So we can’t trust anyone handling data? I know who I would trust to attempt to be accurate and honest. Sadly typical that those who cast aspersions on the validity of this scientific work don’t seem to be able (or willing?) to treat facts with the same respect as those they criticise.

    Comment by kevin rutherford — 17 Aug 2007 @ 10:32 AM

  374. > I would have to more than merely claim that the results didn’t match, right?

    You’d have passed a peer review and gotten a publication in a science journal.
    You’d have coauthors, whose track record and reputation would support your conclusion.

    You’d be taken seriously and people would assume you were doing honest science, and look for flaws in your approach.

    Why not try it, as Gavin and many others suggest? So far every group that _has_ done a climate model finds much the same result.

    Once you’ve done it — you’ll understand how it’s done.

    As Willy Ley supposedly said, analysis is all very well, but you can’t understand a locomotive by melting it down and analyzing the mess. You have to build one to understand how it works.

    Comment by Hank Roberts — 17 Aug 2007 @ 10:42 AM

  375. Gavin: While the magnitude of what happened is small, the dimensionality — errors by a respected agency — is bad for the agency. My town once had a school superintendent whose approach to criticism was to 1) ask the selectmen to form a study committee, 2) appoint the critic as the chair, and 3) make the “full resources of the department” available for the study. The dynamics of this response are interesting. First, the critic cannot decline without losing credibility. Second, the school department still maintained most of the control of the study process. Third, it got to know what was being found and correct additional problems before they got out of hand. Further, the committees sometimes found solutions that would have required paid consultants, so they more than paid for themselves. Everyone was happy.

    It is not possible for anyone to review all or even most of a GCM, so there will always be questions. You might find it useful to have a mechanism in place for semi-formal review of data, design, or algorithms by outsiders via the internet. If it were all done on the web, it might even pay for itself.

    Comment by Allan Ames — 17 Aug 2007 @ 10:59 AM

  376. Papertiger (#358) wrote:

    So you think that climate models can predict weather? You endorsed their site as if it were authoritative. That same office predicted an unremarkable 2007 summer for the UK. As we have seen, the UK had an exceptionally wet summer, well below average temperatures, and widespread floods.

    Why do you endorse this model, if not due to politics?

    I don’t know what exactly they said about the summer of 2007 other than a forecast made back in January of a 60% chance of breaking temperature records – but that would have been worldwide. Precipitation for Great Britain? Not seeing anything as of yet. But perhaps you have a link to their prediction.

    In any case, precipitation is supposed to increase for England over time. A large part of it is geography. Located between the polar and Ferrel cells, it is at a latitude of low pressure where warm moist air will tend to result in precipitation. It has the whole Atlantic to the west, and increased evaporation over the Atlantic will make floods more likely in England. Moreover, this is just the trend which we have been seeing over the past several decades. Roughly linear. But winter is when they get more precipitation – and it is during that time of the year that precipitation has increased by 35%.

    By contrast, we are expecting the Hadley cell south of the United States to expand, moving the high pressure between the Hadley and Ferrel cells north, which will diminish precipitation. A higher rate of evaporation will mean that soil dries out more quickly. And in continental interiors this will become especially pronounced, as the land warms up more quickly, leading to a lower relative humidity. Then we should also see precipitation events either diminish in frequency or overall amount, except for extreme events which cause flooding.

    *

    All of this is independent of what the weather does during any particular year. As climatologists generally tell you, their predictions are about average behavior, not the weather on any particular day, month or year. But the Hadley forecast is different. Climatology doesn’t attempt to predict the weather for a given period but the attractor, a probability distribution in what is called phase space in which the weather for any given day will be embedded. It does this by performing many different runs with slightly different initial conditions.

    The butterfly effect will cause the “forecast” for any given day to differ from run to run, but since climatology is only concerned with the average behavior, the butterfly effect is for the most part irrelevant. Physics is what drives the average behavior, the probability distribution as a whole. Some models will be better at capturing that behavior than others, depending upon how individual runs are calculated – for example, in terms of the resolution or the approach to calculation, which at a certain level becomes a matter of resolution as well.

    *

    In terms of climatology as a whole, the one thing that we have the most accurate understanding of is how higher levels of greenhouse gases will affect the climate system as a whole. This is a matter of radiation physics – something we are able to understand in terms of quantum mechanics, measure in the lab and observe with infrared imaging of the atmosphere. This is as solid as it gets.

    But unfortunately, while we can control the amount of greenhouse gases which we put into the atmosphere, the effects of our behavior – at least in terms of carbon dioxide – won’t be felt until roughly forty years hence. Then the paths as determined by our behavior in the present will begin to diverge.

    Carbon dioxide stays in the atmosphere. Therefore the effects of our behavior in terms of carbon dioxide will necessarily be cumulative. The more years we have high emissions, the more carbon dioxide there will be in the atmosphere, and thus the higher the temperature that will be required for the amount of thermal radiation leaving the system to equal the amount entering it. The basis for this amounts to little more than the radiative properties of carbon dioxide and conservation of energy.

    However, there is one big uncertainty regarding carbon dioxide: the feedback associated with the carbon cycle. At what point will various positive feedbacks kick in? This is partly a matter of how plants respond, precipitation patterns, ocean circulation and so on.

    *

    Anyway, I won’t deny that I am political. For example, despite their personal flaws, Winston Churchill is my favorite statesman of the twentieth century and I have a great deal of affection for George Orwell. I really doubt they liked one another, but that’s a different matter.

    Likewise, I place a great deal of emphasis upon individual rights and property rights, and upon the free market as a result of my understanding of economics, and a great deal of emphasis upon climatology and the importance of limiting the threat of climate change. At a certain level, I am probably less political than most in that I try to always give precedence to identification rather than evaluation.

    But I am human and I have plenty of flaws. I make plenty of mistakes. I will get annoyed with people, sometimes strongly so. I will lose my temper. No doubt I have my prejudices. There are people I distrust, and people I consider friends. I even have my favorite television series: Babylon 5, although it hasn’t been on air for years.

    However, science knows no politics. Particularly when one is dealing with physics. It is essentially the study of cause and effect. Despite the complexity of the phenomena it studies, climatology falls into that category.

    Now of course individual scientists do have their own personal politics and prejudices, and this will color their views, but assuming one is living in a free society, the evolution of our scientific understanding over the long term will be essentially independent of that, largely for the same reasons that the free market works so well, at least as I understand it.

    *

    Anyway, I am not sure how much stock I put in the Hadley forecast, particularly for next year. I put more stock in the general trend which it predicts over the next decade, but even then I have some doubts. For example, it is quite possible that NASA has a better model. From what I understand, they are both very good models, world class.

    I would hate to have to pick between the two. But I believe that the general approach that the people at MET are employing will be more powerful than what we have done in the past, initializing the model with our best measurements of real-world data regarding ocean conditions and the like from consecutive days.

    In any case, this approach is new. It isn’t what they would have used to arrive at their predictions for this year. So they may be more successful at predicting the next decade than whatever their predictions were for this year.

    I am hopeful that it will work. Assuming it does, this approach will give us a better understanding of the conditions in which we tend to plan various projects. While it won’t help us control the trends, if it works it will at least help us with mitigating the effects of climate change.

    Comment by Timothy Chase — 17 Aug 2007 @ 11:01 AM
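
    The ensemble point made in the comment above can be illustrated with a toy chaotic system; the logistic map here merely stands in for a climate model. Individual runs diverge from tiny initial-condition differences (the butterfly effect), yet statistics over many runs remain stable. A minimal sketch with made-up parameters:

```python
import numpy as np

# Toy illustration of the ensemble idea: individual trajectories of a
# chaotic system diverge from tiny initial-condition differences, but
# statistics over many runs are far more stable.  The logistic map is
# only a stand-in for a real climate model.
def logistic_run(x0, r=3.9, steps=200):
    x = x0
    traj = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        traj.append(x)
    return np.array(traj)

rng = np.random.default_rng(1)
ensemble = np.array([logistic_run(0.5 + rng.normal(0, 1e-6))
                     for _ in range(100)])

# After a few dozen steps any two members differ completely...
print("spread of final values:", ensemble[:, -1].min(), ensemble[:, -1].max())
# ...yet a statistic averaged over the whole ensemble is reproducible.
print("ensemble mean of long-term average:", ensemble[:, 50:].mean())
```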

  377. RE 353. Gavin inlines.

    “RTFR. GISS only adds the urbanisation adjustment and does not do any homogeneity testing, the station history adjustments are taken from USHCN as is clearly described. The only further culls were for obvious weirdness such in the early Cal. data as was, again, clearly described. What is so hard about reading the papers? – gavin]”

    Well. Obvious weirdness is not an algorithm. The text states that there are 5 stations in NorCal (actually one is in Oregon) that exhibited a cooling trend not shared by other sites in the region. “The region” is not specified. Electra is 400 miles from Crater Lake. So my first question is: what do you mean by region? That’s simply a practical question. Which sites were looked at to do the comparison? And it’s a fair question. Sites within 1200 km, 1000 km, 100 km?

    Now when you compare, let’s say, Lake Spaulding to its three closest neighbors, you will see that it is definitely “weird”. A gross measure of weirdness is (TahoeCity_Tmean – LakeSpaulding_Tmean). A simple linear regression on this shows that (T–S) changes on a nearly linear basis from 1914–1931 – 0.27 C or something thereabouts, with an R-squared of 0.92. Thereafter the slope of (T–S) is roughly 0.06. The same goes for the pairing of Nevada City and Lake Spaulding. The comparison with other nearby sites was similar but not as dramatic.

    The question is: what is the TEST for weirdness? Was a weirdness test performed? Now, the paper cites Easterling and Peterson on finding inhomogeneities within series. So I assume you used a calculation to quantify weirdness. The text doesn’t say.

    For interested folks you will find a nice description of some of the issues here:

    http://www.climahom.eu/software/docs/Prezentation_T_homogenization.pdf.

    Page 32-34 shows an adequate description of methods.

    Anyway, Lake Spaulding looks weird from 1914–1931.

    Crater Lake just looks cold. Again, there is no list in the text or footnotes of “nearby stations”. So I picked the closest one, Prospect, to see if (P–C) – “temps at Prospect minus temps at Crater Lake” – was “obviously weird”.

    Well, the only thing I saw, at first blush, was that Crater Lake (station altitude 6475 ft) was consistently cooler than Prospect (station altitude 2482 ft). The trend of (P–C) looked to be fairly linear.

    If you corrected for the difference in altitude, the sites came out pretty close on absolute temp. I didn’t see any weirdness in trend differences.

    Hence my question.

    a. Was Crater Lake compared to other sites?
    b. Which sites?
    c. What test for weirdness was performed?

    I assumed it was a test for inhomogeneity, and since Easterling & Peterson are cited, I assume that this was the test performed.

    The issue isn’t the 0.01 change.

    [Response: You appear to still be confusing me with the authors of the papers. If you think there is something of interest in all of this, do a specific study on that area. I have no knowledge, nor interest, in the temperatures of Crater Lake in 1920. Doing the analysis yourself will make clear all of the answers to your questions. If you think that the procedure was invalid, do the study that demonstrates that. - gavin]

    Comment by steven mosher — 17 Aug 2007 @ 11:29 AM
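
    A rough sketch of the pairwise difference-series check described in the comment above, using hypothetical data rather than the actual USHCN/GISS records or procedure: regress the reference-minus-candidate difference before and after a suspected break year, and apply a standard lapse-rate offset when comparing stations at different altitudes.

```python
import numpy as np

# Sketch of the pairwise check described above (hypothetical data, not
# the USHCN/GISS procedure): regress the reference-minus-candidate
# difference and compare slopes before and after a suspected break year.
# A large slope with high R^2 before the break, and a near-zero slope
# after, is the kind of "weirdness" a drifting sensor would produce.
def slope_and_r2(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    fitted = slope * x + intercept
    ss_res = np.sum((y - fitted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return slope, 1.0 - ss_res / ss_tot

years = np.arange(1905, 1950)
rng = np.random.default_rng(2)
reference = 10.0 + rng.normal(0, 0.3, years.size)             # e.g. a Tahoe City stand-in
candidate = reference - 1.0 + rng.normal(0, 0.3, years.size)  # e.g. a Lake Spaulding stand-in
# Inject a linear sensor drift before 1931 into the candidate series.
candidate[years <= 1931] -= 0.03 * (1931 - years[years <= 1931])

diff = reference - candidate
before = years <= 1931
print("before break (slope, R^2):", slope_and_r2(years[before], diff[before]))
print("after break  (slope, R^2):", slope_and_r2(years[~before], diff[~before]))

# A crude altitude comparison: adjust by a standard lapse rate
# (~6.5 deg C per km) before comparing absolute temperatures.
lapse = 6.5 / 1000.0                      # deg C per metre
altitude_diff_m = (6475 - 2482) * 0.3048  # Crater Lake minus Prospect, in metres
print("expected altitude offset (deg C):", lapse * altitude_diff_m)
```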

  378. Steven Mosher (#370) wrote:

    I don’t expect to find comments like that. Fundamentally I believe in transparency. The default should be to release the code, unless there is an IP issue.

    Steven,

    I would have to say that like me, you are something of an open book. I for one reason, and you for another. You are a fairly public figure. That happens when one testifies before Congress and writes several books, some of which are currently selling on Amazon at a reduced price.

    At a certain level, I would have to say that I even admire you. In some ways we are a bit alike. I think both of us are roughly as passionate about our values. And this means that people can see rather deeply into both our characters – if they know how to look.

    Comment by Timothy Chase — 17 Aug 2007 @ 11:35 AM

  379. Re # 336 David Blair
    “you say they are accurate for the next decade, maybe so and time will tell, but when we hear the predictions for 50 or 100 years from now, that’s when you wonder why all the movie stars and politicians are getting involved”

    The GC models are based on known laws of physics, well-understood biogeochemical cycles, assumptions about GHG emissions levels (up, down, or no change), etc. They can’t account for unpredictable events, such as volcanoes or asteroids hitting the Earth. Are you expecting the laws of physics to change any time soon? Or maybe a major alteration to the carbon cycle?

    Comment by Chuck Booth — 17 Aug 2007 @ 11:39 AM

  380. Yes, this tiny insignificant NASA mistake is “Another week, another ado over nothing.” It’s weird how we can go on and on about nothing.

    But here’s something: tonight (Fri, Aug 17) on Bill Moyers Journal on PBS (9 pm in my area), Bill will interview Mike Tidwell, author of The Ravaging Tide: Strange Weather, Future Katrinas and the Coming Death of America’s Coastal Cities.

    That’s something significant.

    Comment by Lynn Vincentnathan — 17 Aug 2007 @ 11:42 AM

  381. The argument about conditions at rural weather stations is dumb. Talk to anyone in the fruit industry or agricultural extension agents, and bloom dates have been coming earlier and earlier. Moreover, you can see the date of greening from satellite data. No heat pumps or asphalt in all those sections and sections of orchard.

    Every apple tree, every grape vine is a little integrating thermometer. Any one of them may not be as accurate as a laboratory thermometer, but there are so many of them that, overall, their ability to detect small changes in temperature is high. As long as we are getting earlier greening dates and earlier bloom dates, things are getting warmer.

    Bottom line: the actual temperature measured at a weather station does not matter. What matters is how warming affects the plants and animals that we need to survive. What matters is how the heat stresses people. What matters is how the heat stresses our electrical grid. What matters is how heat affects our water supplies. What matters is the expanded range of insect pests and other pathogens. The temperatures measured at any set of weather stations do not indicate the actual scope of any of these issues. Really, the numerical value of temperature is a proxy for actual damages.

    Comment by Aaron Lewis — 17 Aug 2007 @ 11:49 AM
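
    For readers unfamiliar with the “integrating thermometer” idea above, growing degree days are a standard phenological measure: daily warmth above a base temperature accumulates until a bloom threshold is reached, so a warmer spring blooms earlier. A toy sketch with made-up numbers, offered only as illustration:

```python
import numpy as np

# Illustrative sketch of the "integrating thermometer" idea: growing
# degree days accumulate daily warmth above a base temperature, so a
# warmer spring reaches the bloom threshold earlier.  All numbers here
# are made up for illustration.
def bloom_day(daily_mean_temps, base=5.0, threshold=200.0):
    gdd = np.cumsum(np.clip(daily_mean_temps - base, 0.0, None))
    reached = np.nonzero(gdd >= threshold)[0]
    return int(reached[0]) if reached.size else None

days = np.arange(365)
seasonal = 10.0 + 12.0 * np.sin(2 * np.pi * (days - 100) / 365.0)

print("bloom day, baseline climate:", bloom_day(seasonal))
print("bloom day, +1 deg C warmer: ", bloom_day(seasonal + 1.0))
```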

  382. Aaron, Timothy — excellent posts.

    As a reader here, beating my familiar drum — I do urge you to provide cites with your statements, to help out new readers coming to the topic, especially later on.
    It’s a lot of extra work to do that (this is why programmers don’t comment code they don’t expect to publish, too!).

    For later readers, it’s the footnotes and cites that separate sound writers from the merely opinionated posters. Without those only the people who already know the literature can tell the two groups apart. I know you’re good. Please show off (grin) for later readers.

    Comment by Hank Roberts — 17 Aug 2007 @ 12:18 PM

  383. Going fast now.
    Take a look at the jetstream. It hints that a second monster surge of warm air from Siberia is heading over the north pole onto Greenland. This is something I’ve never seen. Has the polar air circulation pattern changed?
    In any event, a second arctic ice melt is now underway.

    Comment by ken rushton — 17 Aug 2007 @ 12:30 PM

  384. Ken, can you provide a link to what you’re talking about please?
    Have you read about the two big patterns described, in this study? http://pubs.giss.nasa.gov/abstracts/2007/Liu_etal_2.html

    Comment by Hank Roberts — 17 Aug 2007 @ 1:01 PM

  385. Speaking of climate models and surface temperatures, the folks at the Hadley Met Office have a recent paper in science that applies to this issue and seems fairly interesting:

    Improved Surface Temperature Prediction for the Coming Decade from a Global Climate Model. Smith et al Science Aug 10 2007

    It seems to be an effort to use global coupled circulation models to estimate the internal variability of the climate system over the next decade. Internal variability refers to unforced changes in the climate system (see here for the definition of a forcing). These would include things like El Nino and fluctuations in ocean circulation and heat transport. The model they use is something called the “Decadal Climate Prediction System” (DePreSys) based on the Hadley Center Model. Animations from the Hadley Center are available here.

    To test this strategy they used a number of ‘hindcasts’ covering timeframes from 1950 – 2005 or so. They seem to do a pretty good job of it. Here’s an important quote:

    “Because the internal variability of the atmosphere is essentially unpredictable beyond a couple of weeks, and the external forcing in DePreSys and NoAssim is identical, differences in predictive skill are very likely to be caused by differences in the initialization and evolution of the ocean.”

    (the difference is that DePreSys contains information on initial conditions, and NoAssim does not. If you think about a weather (atmosphere) forecast model for the one-week future, it’s very sensitive to initial conditions. Thus, DePreSys seems to be an ‘ocean weather’ forecast model)

    Once again, the point is made that the more comprehensive the data on current ocean conditions is, the better the decadal predictions will be (just as in weather models, which rely entirely on the satellite and radiosonde networks for initialization). Unfortunately, the head honchos at NASA no longer believe that monitoring the Earth system is part of their job…???

    The paper concludes by presenting a forecast for global surface temperatures for the coming decade:

    “Both NoAssim and DePreSys, however, predict further warming during the coming decade, with the year 2014 predicted to be 0.30° ± 0.21°C [5 to 95% confidence interval (CI)] warmer than the observed value for 2004. Furthermore, at least half of the years after 2009 are predicted to be warmer than 1998, the warmest year currently on record.”

    (Yes, 1998 is still the warmest year on record)

    This seems like an interesting paper – do any of the professional modelers out there have any comments?

    Comment by Ike Solem — 17 Aug 2007 @ 1:28 PM
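
    For context, hindcast skill comparisons like the one in Smith et al. are typically scored with something like the root-mean-square error of each system against observations. A minimal sketch using synthetic numbers, not the paper’s data:

```python
import numpy as np

# Sketch of how hindcast skill comparisons are typically scored
# (synthetic numbers, not the Smith et al. data): the root-mean-square
# error of each forecast system against observations.
rng = np.random.default_rng(3)
n_hindcasts = 40
observed = rng.normal(0.0, 0.15, n_hindcasts)              # observed anomalies
no_assim = observed + rng.normal(0.0, 0.12, n_hindcasts)   # uninitialized runs
depresys = observed + rng.normal(0.0, 0.07, n_hindcasts)   # initialized runs

def rmse(forecast, truth):
    return np.sqrt(np.mean((forecast - truth) ** 2))

print("NoAssim RMSE: ", rmse(no_assim, observed))
print("DePreSys RMSE:", rmse(depresys, observed))
# A lower RMSE for the initialized system is what gets interpreted as
# added skill from the ocean initial conditions.
```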

  386. My comments above stand – independent replication from published descriptions – the algorithms in English, rather than code – are more valuable to everyone concerned than dumps of impenetrable and undocumented code. – gavin

    Gavin, you are talking about scientific utility when politics is about perception. They are independent of each other. Whether the code is impenetrable or not is irrelevant to public perception. Maybe there is a happy middle ground (such as an official page or RC blog post dedicated to listing papers, algorithms and where to find the data), but the current policy isn’t cutting it. That being said, I do not envy what you have to deal with. I’m sure simply reading/answering 400 comments on one blog post is a major drain on one’s time.

    Comment by sparrow (in a coal mine) — 17 Aug 2007 @ 1:45 PM

  387. Gavin,

    As long as we are culling weird cooling data (Lake Spaulding is messed up, probably a sensor going bad over time), how about we clean up the “weird” warming issues:

    To wit:

    ““The Recent Maximum Temperature Anomalies in Tucson: Are They Real or an Instrumental Problem?”

    http://gallery.surfacestations.org/main.php?g2_view=core.DownloadItem&g2_itemId=21224

    Excerpt:
    “…during 1986 and 1987 maximum temperature records were being set on 21 days and 23 days respectively. In 1988 this increased to 38 days, and to 59 in 1989.” and “With one exception, in 1988 and 1989, there were no other stations within 1000 miles that set any kind of record on the dates…””

    Turns out NOAA changed sensor suppliers…

    Comment by steven mosher — 17 Aug 2007 @ 1:53 PM

  388. This is off-topic, but I think it’s pretty important and I don’t know where else it would be appropriate to post.

    Nature News has a report on a new paper published in Ecology Letters that deals with how rising temperatures stunt the growth of tropical forests, which would severely decrease their ability to remove CO2 from the atmosphere or even make them net emitters of the gas.

    Here is the link to the news report:
    http://www.nature.com/news/2007/070806/full/070806-13.html

    And the reference to the paper:
    Feeley, K. J. et al. Ecol. Lett. 10, 461-469 (2007)

    And here are some important excerpts from the news report:

    “Global warming could cut the rate at which trees in tropical rainforests grow by as much as half, according to more than two decades’ worth of data from forests in Panama and Malaysia. The effect — so far largely overlooked by climate modellers — could severely erode or even remove the ability of tropical rainforests to remove carbon dioxide from the air as they grow.

    The study shows that rising average temperatures have reduced growth rates by up to 50% in the two rainforests, which have both experienced climate warming above the world average over the past few decades. The trend is shown by data stretching back to 1981 collected from hundreds of thousands of individual trees.

    …. The trends measured by Feeley suggest that entire tropical regions might become net emitters of carbon dioxide, rather than storage vessels for it. “The Amazon basin as a whole could become a carbon source,” Feeley says.

    Feeley and his colleagues analysed data on climate and tree growth for 50-hectare plots in each of the two rainforests, at Barro Colorado Island in Panama, and Pasoh in Malaysia. Both have witnessed temperature rises of more than 1ºC over the past 30 years, and both showed dramatic decreases in rates of tree growth. At Pasoh, as many as 95% of tree species were affected, Feeley and his colleagues report. The research has also been published in the journal Ecology Letters”

    I’m afraid there are too many positive feedbacks in the climate system that will soon kick in and give us a really hard time.
    But here we are speeding down a road that all rational analysis indicates leads to an abyss, while the “skeptics” (Steve Moshers) of the world keep quibbling about the color of the numbers on the speedometer.

    I’m sure that as the Roman Empire was falling, lots of “skeptics” were debating the barbarians’ sense of fashion and whether they were really such unfriendly folk.

    Comment by Rafael Gomez-Sjoberg — 17 Aug 2007 @ 2:03 PM

  389. Hank (#382) wrote:

    As a reader here, beating my familiar drum — I do urge you to provide cites with your statements, to help out new readers coming to the topic, especially later on.

    It’s a lot of extra work to do that (this is why programmers don’t comment code they don’t expect to publish, too!).

    I will try to do more of that in the weeks ahead, but as I intend to put my links together on the web over time, it will probably become easier to keep track of them and refer to them. Currently I tend to work from memory a little too much, and the good majority of what I write is extemporaneous, so too many references may slow me down a bit.

    I promise I will try harder, though, even if it slows me down a bit.

    Comment by Timothy Chase — 17 Aug 2007 @ 2:37 PM

  390. Re 383: just got back from 81N on Ellesmere; had two weeks of 15–20 C.

    Comment by ziff house — 17 Aug 2007 @ 2:46 PM

  391. 379, Chuck, asteroids hitting the Earth are also based on physics. Weather prediction is also based on physics, complex models, and supercomputing, as is climate science. However, weather predictions are easily testable. Predictions from science are influenced by the number of starting variables, the formulas and the complexity. There is no mathematical formula for different cloud types; you have to learn them from pictures. We can also look at earthquake prediction for a comparable science – again, the results are unreliable.

    Weather prediction is an excellent comparison to climate science, tobacco science is a poor comparison.

    Comment by Dave Blair — 17 Aug 2007 @ 3:01 PM

  392. RE 378.

    Timothy. Wrong again. First you try to sneak a G.E. Moore argument by me (“here is a hand”) without giving him attribution. Moore is one of my favorites. You should have recognized my Moorean trick in the UHI discussion.

    Anyway, now you confuse me with Steven W.

    This has happened on several previous occasions.

    1. I had a friend being interviewed for a TS/SAR position at Monterey. DIS was giving him the full monty inspection talking to all his friends. The lead investigator made the same mistake you did, thinking my middle initial was W.

    2. When I applied for my TS/SAR the same question came up. Visiting the PRC, back in the day, was a no-no. To make matters worse I had a Chinese girlfriend. Proving I wasn’t him was fun but easy. I’m cute, he’s not. He’s Catholic, I’m agnostic. Funny story: I went to a local charity event (a liberal thing; I’m libertarian). I get introduced to a Catholic priest and he mistakes me for Steven W.

    3. I get lots of nice emails back from think tanks and talk show hosts who think I’m him. I’m always straight with them, so I offer you the same consideration.

    When you get time you should read my dead friend

    http://en.wikipedia.org/wiki/William_A._Earle

    And the most serene soul I ever had the pleasure to study with.

    http://en.wikipedia.org/wiki/Erich_Heller

    A cool guy from the Earle gang. He was a grad student with Earle while I was doing my Honors with Earle. Very funny dude.

    http://en.wikipedia.org/wiki/Peter_Suber

    Next I will twist your brain and get you to read Alvin Plantinga. You can of course Google him… oh wait, wiki has him. Nice guy. Uber-brilliant with a very weird take on certain issues. You go read. It’ll set your hair on fire.

    Comment by steven mosher — 17 Aug 2007 @ 3:01 PM

  393. Ike Solem #385:

    Try tamino (Open Mind), James Annan (James Empty Blog) & William Connolley’s blog (Stoat) (linked on the right). There’s some more stuff there.

    Comment by Adam — 17 Aug 2007 @ 3:10 PM

  394. Re #390: Wow. That’s hot enough for some serious melting. Did you happen to observe or hear about any notable effects?

    Comment by Steve Bloom — 17 Aug 2007 @ 3:33 PM

  395. This issue makes me wonder where the seams are in the global or U.S. temperature data between the various methods of temperature measurement – like the seam at the switch from alcohol to mercury thermometers, and likewise from mercury thermometers to satellite and digital measures. Does anyone know what years these seams are? Thanks.

    Comment by Dr. J — 17 Aug 2007 @ 3:40 PM

  396. > Now you confuse me with Steven W. [Population Research Institute --hr]
    > This has happened on several previous occassions.

    Gavin could probably edit those errors claiming you’re the other guy, if you point them out.
    Misattribution happens, but leaving confusion around isn’t kind to later readers.

    Speaking of leaving confusion around,

    > a GE Moore argument …. my Moorian trick in the UHI discussion.

    I wish you would distinguish between a science discussion and a debate. This goes along with my pleas for cites.
    Hard argument in science is different, or should be, by intent. I don’t know if this is taught nowadays.

    I recall my dad teaching biology grad students, long ago, in the 1950s, to argue in seminars — never meaning to “win” but always to clarify and, perhaps, improve everyone’s necessarily incomplete and muddy view of the subject — to help one another, as Beckett says, “fail better” next time.

    Comment by Hank Roberts — 17 Aug 2007 @ 3:47 PM

  397. In response to comment 377, Gavin says: “…Doing the analysis yourself will make clear all of the answers to your questions. If you think that the procedure was invalid, do the study that demonstrates that. – gavin]”

    The more I get into the discipline of global warming, the more I come to believe that most, if not all, skeptics do not do original work. They believe their job is to criticize the work of others. They appear to object to having others do to them what they, the critics, do – that is, to open themselves to being criticized – in much the same way that many power companies begrudge having to pay their own rates to buy back power from individuals who generate a surplus of power from solar panels. Why do something original, when you can throw darts at the work of someone else?

    Comment by Lawrence Brown — 17 Aug 2007 @ 4:11 PM

  398. Re #368. Maybe the Swedish paper wouldn’t have jumped to the wrong conclusion if GISS had published an orderly press release about what was wrong and so on. By trying to sweep the mistake under the carpet, our dear RealClimate scientists helped the other side’s extremists to misunderstand the news. Good lesson.

    And to our deep disappointment, Gavin’s world temperature graph is back unchanged, with the misleading y-axis spoiling the US-ROW comparison, and the ridiculous off-chart point making a “dramatic” effect.

    Comment by Dodo — 17 Aug 2007 @ 4:23 PM

  399. Re #390

    Actual highest temperature (at Eureka, Ellesmere) in the first 15 days of August was 17 deg C. Only 6 days above 10 deg C (daily high). Average of mean temp for period was 7 deg C, so yes a little warm for the Canadian Arctic, but hardly extreme!

    Which hot spot were you at actually (where you had the two weeks of 15-20)?

    Comment by snrjon — 17 Aug 2007 @ 4:28 PM

  400. #397, There are other ways to prove accuracies in climate data, especially recent ones. I do this myself by a new, unique method of measuring heat in the atmosphere using the sun as a fixed sphere of reference. The traditional techniques used to come up with global average temperatures match sun oblateness variances. Critics must come up with their own way of confirming or denying the validity of GT measurements. My opinion has never been negative of most official GT results published. My own independent work reinforces that opinion, to the point that sceptics seem to have overused their armchairs and are largely not contributing anything but nonsense in the greater quest of narrowing down a more exact GW trend.

    Comment by wayne davidson — 17 Aug 2007 @ 4:29 PM

  401. Nick Gotts, BlogReader and others — I am a (retired) computer scientist and have often and continue to write undocumented one-off computer programs which become more and more impenetrable as the experimental results are developed. Such programs were and are never intended to be read by anyone else.

    So Gavin, bless his patient soul, has the right of it completely: go roll your own.

    And as for perceptions, any rational observer will conclude that this matter is indeed a tempest in a teapot.

    Comment by David B. Benson — 17 Aug 2007 @ 4:43 PM

  402. Steven Mosher (#392) wrote:

    Timothy. Wrong again. First you try to sneak a G.E. Moore argument by me (“here is a hand”) without giving him attribution. Moore is one of my favorites. You should have recognized my Moorean trick in the UHI discussion.

    Actually I haven’t studied any Moore, although it would be possible for me to use someone’s argument without knowing it – if I had run into it in the past. Things tend to get rather thoroughly integrated if you think about them too much. Likewise, being a good Aristotelian, I tend to think that there is very little which is new under the sun. Whatever we might do already exists as a potentiality of human action.

    But the “hand” obviously goes back to Descartes in his room by the fire. And who knows? He might have been recalling Plato’s “The Republic,” one of my favorite books, specifically the scene in the cave.

    It pays to read the classics. Philosophers tend to recycle things a fair amount, oftentimes as a tribute to earlier philosophers.

    Anyway, now you confuse me with Steven W.

    This has happened on several previous occasions.

    Well, I am sure you can see why.

    As a libertarian, you would in all likelihood be strongly anti-Communist. Likewise, he prides himself on his education, having obtained a PhD. He is strongly opposed to acknowledging the scientific basis of climatology, quite passionate about it, actually, and willing to go to great lengths against that which he opposes.

    Finally, he has a background in deconstructionism. In some ways, not that unusual for a libertarian nowadays, but this is one of the areas in which I have claimed some familiarity.

    *

    I wouldn’t call myself a libertarian but a classical liberal. Same thing, I suppose. And a large part of my motivation in the past was the result of my opposition to totalitarianism. I tried infiltrating a cult that had used brainwashing in the past at one point, for example.

    I also have passing familiarity with Alvin Plantinga. If I remember correctly, he posed a self-referential argument against evolutionary biology a while back. I suspect he knew better. Some poor creationist tried to spring it on me at one point – without attribution. There is some good analytic philosophy out there, though. For example, I like some of the work that has been done on the relationship between coherentism and foundationalism.

    Comment by Timothy Chase — 17 Aug 2007 @ 5:12 PM

  403. Re#400: “Critics, must come up with their own way in confirming or denying the validity of GT measurements.”

    I agree. Those who are truly and scientifically skeptical of results need to carry out the work and present it to the scientific community. It seems like the ‘auditors’ are claiming that they don’t have to do this, but at the same time claiming they are only doing what scientists should (i.e., be skeptical of conclusions).

    Well, guess what, we have a framework for doing that in a formal way and it’s called science. Why can’t the auditors just propose, design, and conduct science like everyone else?

    In fact, here’s the link to the AGU conference in December, abstracts are due Sept. 6th.
    http://www.agu.org/meetings/fm07/

    Comment by Brian — 17 Aug 2007 @ 5:14 PM

  404. Responding to someone else, Hank Roberts (#396) wrote:

    I wish you would distinguish between a science discussion and a debate. This goes along with my pleas for cites.

    Hard argument in science is different, or should be, by intent. I don’t know if this is taught nowadays.

    Debate certainly has its place in our society, but I myself prefer dialogue.

    Debate and dialogue are about as different as night and day. In debate, there exists a genuine tendency almost to the very core of it to think in terms of opposition, to adopt an us vs. them view of the world and thereby place pragmatic concerns above truth itself.

    In contrast, dialogue is a cooperative endeavor in which the “us vs. them” is regarded essentially as a passing illusion. The goal of dialogue is the discovery of the truth, and in it one recognizes the fact that by cooperating with others and sharing one’s insights one stands a far better chance of understanding reality than if one were working alone.

    As such I believe the proper intent and orientation of science and dialogue are the same.

    Comment by Timothy Chase — 17 Aug 2007 @ 5:38 PM

  405. Gavin’s response to #367: [Response: Again, you miss the point. It is better for someone to come up with the same answer by doing it independently - that validates both approaches. - gavin]

    If they differ?

    [Response: Then you investigate further. -gavin]

    Comment by dallas tisdale — 17 Aug 2007 @ 6:52 PM
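
    A minimal sketch of what “investigate further” can look like when two independent implementations disagree, using synthetic arrays in place of the two analyses:

```python
import numpy as np

# Minimal sketch of investigating a disagreement between two independent
# implementations (synthetic arrays stand in for the two analyses):
# locate where the results diverge and by how much.
rng = np.random.default_rng(4)
result_a = rng.normal(0.0, 1.0, 100)
result_b = result_a.copy()
result_b[57] += 0.05   # pretend one implementation differs at one step

if np.allclose(result_a, result_b, atol=1e-3):
    print("Implementations agree within tolerance.")
else:
    diff = np.abs(result_a - result_b)
    worst = int(np.argmax(diff))
    print(f"Largest disagreement at index {worst}: {diff[worst]:.4f}")
    # From here one would compare intermediate quantities step by step
    # to isolate which stage of the two methods diverges.
```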

  406. [Response: Then you investigate further. -gavin]

    But, but, that means they might actually have to do some work, rather than sit around trying to poke holes in the work of others.

    Comment by wildlifer — 17 Aug 2007 @ 7:14 PM

  407. A rather obvious reason why auditors and skeptics are not jumping at the chance to build their own climate models is because there are not millions of dollars to fund the computers/servers, programmers, and statisticians required for such a project. Correct me if I’m wrong, but someone who asks the U.S./Canadian/any-European government to fund a model that is being conducted by a “skeptic” would not be receiving any funding.

    [Response: Actually, I don't know about that. Presumably they would start with one of the existing models, they'd have to demonstrate some proof of concept of their idea and that they were capable of doing the work involved. Red flags for the funders would be if there was some pre-determined conclusion before any experiments were done, or if the literature was extremely selectively cited in the proposal. To get funding, you need to deal with the best arguments from the mainstream, not cherry-picked talking points. It's not too high a hurdle if people are serious. - gavin]

    Comment by Carl G — 17 Aug 2007 @ 8:16 PM

  408. Re #401
    It may well be that, once released as it is, the “code” proves itself quite unusable, i.e. too difficult to be deciphered by the average – and even the somewhat better than average – climate auditor…

    (Just usable to be rerun as-is, to obtain uselessly identical results…)

    But saying to somebody whom perhaps you have kept waiting for months:

    “even if you ask for it, I will not give my uncommented code to you,
    and that’s BECAUSE I’m worried that you would lose your precious time in vain attempts at mastering its complexity”

    has a quite hollow ring to it.

    Even a non-paranoid is thus brought to imagine that some additional untold reason MUST lie behind it.

    It could be the desire, easy to understand, to fully own the results of one’s work, or else the fear of opening easier ground for totally stupid and factious arguments…

    Possibly there are cases in which these elements somewhat justify a “more reserved” attitude,

    but for figures such as these, deemed weighty enough to justify fundamental changes in the lifestyle of all mankind, it seems to me (and I suspect to many others) that these petty problems are immeasurably irrelevant.

    So the minimum acceptable, to make it easier for the “auditors” and to silence skeptics, should be releasing together with the results:
    input data, explanation of the algorithms AND runnable code.

    Comment by Mario — 17 Aug 2007 @ 9:19 PM

  409. Anyone participated in this?

    “Through the program for Climate Model Diagnosis and Intercomparison, LLNL provides the
    international leadership to develop and apply diagnostic tools to evaluate the performance of climate
    models and to improve them. Virtually every climate modeling center in the world participates in this
    unique program.” http://www.ofes.fusion.doe.gov/FusionDocuments/07SCOverview.pdf.

    Comment by Hank Roberts — 17 Aug 2007 @ 9:43 PM

  410. gavin> It is better for someone to come up with the same answer by doing it independently – that validates both approaches.

    How does this compare with the methods used to find the error in the satellite temperature data?

    [Response: That is the approach that was taken then. An independent analysis was done (RSS) making a whole set of different assumptions to the original UAH analysis. It ended up giving a different answer. At which point people started combing through the codes for reasons why. It was only when there was an independent emulation to compare to that the problem (with the LECT correction) was discovered. After all, people had been saying that there was something fishy about the UAH analysis for years prior to that. - gavin]

    Comment by Steve Reynolds — 17 Aug 2007 @ 10:14 PM

  411. [ Again, you miss the point. It is better for someone to come up with the same answer by doing it independently - that validates both approaches. - gavin ]

    I think that just validates that both people made the same assumptions and/or mistakes.

    [ Benson: undocumented one-off computer programs which become more and more impenetrable as the experimental results are developed. Such programs were and are never intended to be read by anyone else. ]

    I’m hoping that climate modeling software is more than just a one off hobby project. Implying that they can’t be understood by someone else makes me think of junior programmers that wave their hands when trying to describe why their program does something when they don’t really know.

    Or to put it another way: a fudge factor here, a smoothing of data there, a rejection of rural sites there and pretty soon what you’re modeling is what you want to see.

    [Response: The GISS Climate model code is available: http://www.giss.nasa.gov/tools/modelE , the issue with the GISS urban adjustment is orders of magnitude simpler. - gavin]

    Comment by BlogReader — 17 Aug 2007 @ 10:17 PM

  412. Hank Roberts wrote (#396) in response to Steven Mosher (#392):

    > Now you confuse me with Steven W. [Population Research Institute –hr]
    > This has happened on several previous occassions.

    Gavin could probably edit those errors claiming you’re the other guy, if you point them out.
    Misattribution happens, but leaving confusion around isn’t kind to later readers.

    Hank,

    He isn’t saying that I confused him with Steven W. Mosher before, but that other people had confused him with Steven W. Mosher before.

    He states:

    Anyway, now you confuse me with Steven W.

    This has happened on several previous occasions.

    1. I had a friend being interviewed for a TS/SAR position at Monterey. DIS was giving him the full monty inspection talking to all his friends. The lead investigator made the same mistake you did, thinking my middle initial was W.
    2. When I applied for my TS/SAR…
    3. I get a lot of nice emails…

    As such, judging from his post, I have made this mistake only once.

    But you are right.

    Back when he was giving Ray Ladbury such a hard time, I had made that comment about people with ideological commitments. It got him so riled that he turned his attention to me, so I wrote a post in which the first three sentences were (No Man is a (Urban Heat) Island, Comment #193):

    Not personally.

    However, if you have ever taken time out for economics you might have learned about the division of labor. Population growth tends to result in that sort of thing and the efficiencies of scale which follow from it.

    The Population Research Institute headed by Steven W. Mosher is devoted to population growth, so at that point I was trying to get his attention. Then my last sentence in that post was:

    But in any case, one begins with identification which precedes evaluation, and in communication, one begins with the assumption that others are engaged in a similar process – until one has sufficient evidence for thinking otherwise.

    Not that subtle, I must admit, but I was trying to tell him that not only did I know who he was, but that I had the evidence. Other than that, it was a pretty innocuous post. After that we didn’t hear from him for several weeks, so I kind of assumed I was right.

    But then he came back and after a bit focused on me rather intently.

    At the end of two long posts, I responded #240:

    Here is the evidence for your position as president of PRI:

    An Interview with Steven W. Mosher, President of the Population Research Institute
    By John Mallon
    http://www.pop.org/main.cfm?id=151&r1=10.00&r2=1.00&r3=0&r4=0&level=2&eid=678

    Here is the logic of your ideological position against [the acknowledgement of the scientific nature of climatology due to your organization's view that it is a trojan horse for environmentalism] – as expressed by your vice president:

    300 Million and the Environment
    Friday, October 20, 2006
    By Joseph A. D’Agostino
    http://theologyofthebody.blogspot.com/2006/10/300-million-and-environment.html

    Now I do not care to debate ideology with you. However, your ideology is irrelevant to climatology and your approach is fundamentally anti-science. You will not be swayed by any evidence or argumentation.

    We have no further reason to debate you.

    (Note: the sentence with the brackets was garbled due to my exhaustion at the time, so I have just cleaned it up.)

    Anyway, he disappeared for a couple of days after that. But what I had to say just recently (#378) would have made him nervous if he knew how to read between the lines. And if left unchallenged, it would have gotten other people interested, and they would have started digging.

    Who knows?

    If they had done some digging, they might have discovered that what I was hinting at involved things that would cast Steven W. Mosher in a very bad light.

    You must admit there are some similarities – just as I pointed out in #402. Personally, it wouldn’t surprise me at all if Steven W. Mosher were obsessed with RealClimate – given Steven Mosher’s little war on the scientific status of global warming. But I might regard it as cause for some concern.

    Anyway, I am glad that our Steven Mosher isn’t that Steven Mosher.

    It sets my mind at ease.

    Comment by Timothy Chase — 17 Aug 2007 @ 10:43 PM


  413. A rather obvious reason why auditors and skeptics are not jumping at the chance to build their own climate models is because there are not millions of dollars to fund the computers/servers, programmers, and statisticians required for such a project. Correct me if I’m wrong, but someone who asks the U.S./Canadian/any-European government to fund a model that is being conducted by a “skeptic” would not be receiving any funding.

    If the skeptics had any prospects of posing a serious challenge to the current scientific consensus, they sure as h-e-double-hockeysticks would have gotten tons of funding from Exxon or the Saudis or whomever.

    Given that Exxon and Co have been very generous in terms of funding scientifically incompetent hacks, you can be sure that they would not hesitate to throw truckloads of money at *competent* scientists who could make a serious scientific case against global warming.

    But in spite of the *billions* of dollars of cash they have on tap, the big oil companies have not been able to find *any* competent climate scientists to fund.

    That should tell you something…

    Comment by caerbannog — 17 Aug 2007 @ 10:45 PM

  414. Nitpickery: Drop the comma to get the web page (the blog software underscores punctuation sometimes, breaking links)
    http://www.giss.nasa.gov/tools/modelE

    These docs, linked from that page, answer a lot of questions
    http://www.giss.nasa.gov/tools/modelE/HOWTO.html#part0_3

    But gee, Gavin, can’t you make it run on a Palm Pilot or an iPod so every child can play? …. sorry ….

    Comment by Hank Roberts — 17 Aug 2007 @ 11:24 PM

  415. I see some repetitive misconceptions in the blog arguments between skeptics and the rest of us. One is a confusion of weather, particularly local weather, with overall trends in climate. Another is a tendency to talk about science independently of the world around us. It’s clear to me that it’s warmer on average now than it was 40 years ago, and I know older people were saying the same thing to me 40 years ago. One simple example is Martha’s Vineyard’s main harbor which in the early 1900s froze over hard; I don’t know when this stopped. Also, climate change appears to include polarization of warm and cold. This year in the US and Europe (and maybe elsewhere) we’ve seen extremes on both sides. In the US those above the jet stream see cool, those on it see wild, and those below it see hot. If the hot is hotter and the cold is colder, that’s a change that is part of the same phenomenon.

    About the northern European summer, I’ve heard La Nina mentioned. It made me think of other readings on the subject of how the Gulf Stream mechanism is being broken down, though I get the impression it is not being blamed for the summer about to be past. Here’s the latest I saw on this, not from the primary source, but a reasonable summary nonetheless:
    http://news.bbc.co.uk/2/hi/science/nature/6946735.stm
    For those of you who prefer an excerpt:
    “Writing in the journal Science, they say it may now be possible to detect changes related to global warming.
    The Atlantic circulation brings warm water to Europe, keeping the continent 4-6C [1C=1.8F] warmer than it would be otherwise.
    As the water reaches the cold Arctic, it sinks, returning southwards deeper in the ocean.
    Some computer models of climate change predict this Atlantic Meridional Overturning Circulation, of which the Gulf Stream is the best-known component, could weaken severely or even stop completely as global temperatures rise …
    Last year the same UK-led team published evidence that the circulation may have weakened by about 30% over half a century.”

    One common thread I see in the reams of calculations meant to prove that data adjustments call theory and observation into question is a tendency to overlook the connection of all parts of the planet with all other parts, and the way their patterns play with each other. As an artist, I love the whorls and spirals, and can’t help looking at them as all connected.

    I’m always surprised that people can ignore all those telling pictures of changed glaciers everywhere, and the wild weather and related deaths, problems with water and wars for scarce resources, drought, wildfires, and say something nasty and in many cases untrue about Al Gore as if that were some kind of answer.

    Comment by Susan — 18 Aug 2007 @ 12:04 AM

  416. RE #383: “going fast now” – sorry not to provide a link:
    look at the last 4 days of this animation of the Jetstream:

    Comment by ken rushton — 18 Aug 2007 @ 12:41 AM

  417. Re 382 (Hank)
    Everything in 381 can be found in IPCC

    However, I confirmed some of the information in the IPCC by going to the produce market and talking to fruit growers. I looked in my journal for when my fruit trees bloomed. I checked to see how ripe my apples are this week. I talked to USDA. I talked to ag guys with mud on their boots. All good data points, if you put them in the context of Sampling Techniques by Cochran (I still use the 3rd edition).

    The truth is not in peer reviewed journals, the truth is in nature. You are not likely to find a peer reviewed journal telling you that you can melt a hole for ice fishing with a dog turd, or that you can melt a channel across an ice covered pond with a handful of coal dust so you can get to town in your wooden canoe. That is what every boy should learn by doing. Peer reviewed journals never talk about the really important things in life. And, global warming is about the really important things in life.

    Three years ago, when I was saying that in my experience with snow and ice the Greenland Ice Cap was likely to disintegrate faster than the models and IPCC were suggesting, everyone thought I was a kook. Now, it seems that I may have been right. See (http://www.dailyindia.com/show/166067.php/Greenland-ice-cap-meltdown-to-cause-22ft-floods). I look at (http://stratus.ssec.wisc.edu/products/rtpolarwinds/) and consider how those winds are blowing across open and warm water, and I think 300 years might be a little optimistic. Ice turns to rubble in the rain. (Ask an ice climber – someone that looks at 20,000 feet of ice every year as if his life depended on it. Ja, I’ve done a little ice climbing.) And, as long as the ice is there, it is going to support a thermal low that will pull air (full of latent heat) off the North Atlantic Drift. Sometimes peer review confuses conventional wisdom and being correct. An unstated assumption in the conventional wisdom about the melt of the Greenland ice was that the Arctic sea ice would remain intact. It is not.

    Regarding the issue of fresh water under the Arctic Ice (as constrained by the Arctic gyres), a quick Google brings up articles such as http://ams.confex.com/ams/pdfpapers/84555.pdf. Then, thinking through the physics from different starting conditions brings the conclusion that Arctic Ice is different from Antarctic Ice, because the Arctic Ice is floating in fresh water. (Fresh water delivered by all of the rivers that drain into the Arctic.) If you Google, you can find other people have come to the same conclusion. I have not checked for peer review, but the physics works and I am confident that it is correct. I trust my physics more than I trust peer review. I trust it regardless of the fact that the role of fresh water under Arctic ice is ignored in the NSIDC pages on Arctic sea ice. Regardless of the fact that I do not see it in the community climate models. This bit of physics is enough to explain why the ice is melting faster than the models predicted. On the other hand, maybe it is in the models and I just do not see it. Or, maybe by 2008, we will have a climate model that says the Arctic ice will be gone by 2010. What else do you suppose it will say?

    Today, I am looking at (http://pm-esip.msfc.nasa.gov/amsu/index.phtml?2) and (http://www.osdpd.noaa.gov/PSB/EPS/SST/sst_anal_fields.html) and asking what they can tell me about how Antarctic deep waters are interacting with deeply submerged Antarctic ice. Interesting physics.

    Did Darwin cite a bunch of journals or did he; go, look, measure, and think?

    I believe that every instrument goes out of calibration way too often. I believe that every piece of software has bugs in it. In the fall, I carry a fruit knife, because I know our apples have worms in them, but I eat them anyway. I believe that the deer are out in the orchard eating my apples right now! (Sound of hooves on the walk.)

    Comment by Aaron Lewis — 18 Aug 2007 @ 1:50 AM

  418. Further to #407, I’ve wondered about this: I suspect that if they were sufficiently motivated a company like ExxonMobil could get a climate model up and running pretty quickly. They have large scale computing, they have experts in computer modelling, and they can download modelE to get themselves started; you’d probably want to employ someone with experience in climate modelling and a couple of similar ‘postdoc’ level people… but what would they do with it?

    I’m unconvinced of the value of self-appointed non-expert auditing; as a practising scientist I find multiple evidence lines (i.e. satellite or borehole reconstructions) far more convincing than a heavily audited single reconstruction. I’d also be interested in complete re-analyses based on the same data.

    Comment by SomeBeans — 18 Aug 2007 @ 1:52 AM

  419. #405 &

    Gavin’s response to #367: [Response: Again, you miss the point. It is better for someone to come up with the same answer by doing it independently - that validates both approaches. - gavin]

    If they differ?

    [Response: Then you investigate further. -gavin]

    I guess science and accounting DO have something in common (re #225 & 245). As a bookkeeper I would add things up, but I’d add them up again to make sure my adding was correct. If I got a different sum, I’d add them up a third time. Sometimes I got 3 different sums. :)

    Comment by Lynn Vincentnathan — 18 Aug 2007 @ 9:14 AM

  420. Re #401 [I am a (retired) computer scientist and have often and continue to write undocumented one-off computer programs which become more and more impenetrable as the experimental results are developed. Such programs were and are never intended to be read by anyone else.]

    Then in my view, they are not best practice in science, which is by its nature a collective endeavour, and should strive for as much transparency in method as possible. I know what you describe is very often done, I’ve done it myself, and I don’t want to suggest it seriously undermines the fine work done by Gavin and colleagues – but maybe, if they had felt they should make sure all the code they use is suitable for the public domain, this error would have been noticed earlier, either by them or by someone else.

    Comment by Nick Gotts — 18 Aug 2007 @ 9:33 AM

  421. RE412: Hi Tim.

    Maybe I’m W after all. If you wiki “here is a hand” you
    will see why I thought you were cribbing a G.E. Moore argument.

    on snarky days my version goes “here is a finger!”

    I come and go here so sometimes won’t see all your
    threads.

    Sorry if you think I’m singling you out; you just happen to be entertaining and engaging, more so than most.

    Cheers

    Comment by steven mosher — 18 Aug 2007 @ 9:36 AM

  422. It’s cost effective for the fossil fuel brigade to fund sceptic organisations. Very cost effective.

    Spending money on simulations would just prove the AGW viewpoint and if they didn’t I dare say the climatologists would love to get their own back.

    http://www.exxonmobil.com/Corporate/Files/Corporate/gcr_contributions_public06.pdf

    Comment by Mike Donald — 18 Aug 2007 @ 9:43 AM

  423. RE #418 & Exxon running a climate model. I, for one, wouldn’t trust their code, especially if the results disproved GW. With millions of lines of code, I imagine they would be able to slip a few sneaky things in.

    There’s been a lot of talk about “heroes” — sportsmen, people caught in life-threatening situations, and such — but the climate scientists are my heroes. When this site opened, I was actually a bit surprised that not only were there scientists untempted by oil money who had not gone over to the dark Exxon side, but were unafraid of attacks by the unknowledgeable, wrongly motivated, and (perhaps in some cases) Exxon-funded.

    I regret having read some comments on another blog (linked in some post above) against Hansen. I was saddened (but not surprised) by the meanness and viciousness. The same people who would participate in destroying others’ property and persons through global warming, and adamantly deny they had anything to do with it (against all evidence that they had), would not be above such verbal wickedness.

    James Hansen and the other climate scientists, and those valiantly struggling to reduce GHGs and to encourage others to do so, are my true heroes in this day and age.

    Okay, the scientists made an insignificant error, which changes nothing when corrected (and they did correct it). I imagine if they later found 1887 to be the warmest year in the U.S., or even the world, it probably wouldn’t disprove AGW, since AGW is about trends, not single numbers.

    When the new warmest year comes out, say, in 2008, what will the scientists’ response be? “A single year’s stats do not prove anything, even if they do fit the trend [which has already indicated global warming].” As much as I would like to see more people convinced about global warming, I respect this scientific approach.

    Comment by Lynn Vincentnathan — 18 Aug 2007 @ 9:55 AM

  424. [[By trying to hide the mistake under the carpet our dear Real Climate scientists helped the other side’s extremists to misunderstand the news.]]

    If you think your opponents can’t honestly believe their position, but must secretly agree with you and be lying for evil reasons, then you don’t understand the issue at hand.

    Nobody tried to “hide the mistake under the carpet.” That’s an accusation of lying, and is itself a lie.

    Comment by Barton Paul Levenson — 18 Aug 2007 @ 10:37 AM

  425. [[The truth is not in peer reviewed journals, the truth is in nature. You are not likely to find a peer reviewed journal telling you that you can melt a hole for ice fishing with a dog turd]]

    Well, that’s the problem with science, all right. It just hasn’t devoted enough attention and analysis to dog turds.

    Comment by Barton Paul Levenson — 18 Aug 2007 @ 10:42 AM

  426. Timothy.

    Care to comment on the sensor supplier change I noted in #387?

    It’s a peer-reviewed paper. The sensor supplier changed.
    TMAX goes up; the sensors were changed back in the 1990s.
    The USAF also had problems with this supplier and
    the particular sensor in question.

    Will this undermine AGW? No. Will it undermine the credibility of “authorities” and thereby diminish the force of appeals to authority?

    Timothy. Renounce the bad sensors.

    Comment by steven mosher — 18 Aug 2007 @ 11:14 AM

  427. #413: Given that Exxon and Co have been very generous in terms of funding scientifically incompetent hacks, you can be sure that they would not hesitate to throw truckloads of money at *competent* scientists who could make a serious scientific case against global warming.

    But in spite of the *billions* of dollars of cash they have on tap, the big oil companies have not been able to find *any* competent climate scientists to fund.

    Flimsy argument, as it cuts both ways too easily:

    If AGW were true, then you can bet every government in the world would be falling all over themselves to ensure it didn’t blossom into a larger problem.

    But in spite of the *trillions* that world governments have on tap, they have opted to do nothing about it. Hence, it must not be true.

    Pretty weak, eh?

    Comment by Matt — 18 Aug 2007 @ 11:24 AM

  428. re: 427

    Everything isn’t symmetrical.

    Comment by Jeffrey Davis — 18 Aug 2007 @ 11:51 AM

  429. Re #426 [But in spite of the *billions* of dollars of cash they have on tap, the big oil companies have not been able to find *any* competent climate scientists to fund.

    Flimsy argument, as it cuts both ways too easily:

    If AGW were true, then you can bet every government in the world would be falling all over themselves to ensure it didn’t blossom into a larger problem.]

    It’s your analogy that’s flimsy. The oil companies (primarily Exxon) that are funding and supporting denial that AGW is a serious problem, must be assumed to want to convince governments and publics of this position. If there were competent climate scientists who had (for example) knowledge of some important new negative feedback mechanism they believed might seriously alter future climate outcomes, and wanted to incorporate in a new model, funding them would be a highly cost-effective approach to doing so. I can see no reasons for them not to do so if any such climate scientists existed.

    Governments, on the other hand, are subject to enormous pressures against taking effective action to curb AGW. First, direct pressure from the oil companies and other special interest groups such as power companies, auto manufacturers, building firms and airlines. Second, because many of the measures they would need to take would be highly unpopular, e.g. raising gasoline prices and taxes on air travel. Third, because measures taken by any one government would be insufficient to limit the problem, and many such measures would, if taken by one government, put its economy at a disadvantage relative to those of other states: hence all the attempts at international agreements.

    Comment by Nick Gotts — 18 Aug 2007 @ 11:59 AM

  430. Brad DeLong called Tobin Harshaw who was the ‘Opinionator’ at the New York Times: http://delong.typepad.com/sdj/2007/08/tobin-harshaw-o.html

    Comment by Patrick — 18 Aug 2007 @ 12:17 PM

  431. Some years back I recall a climatologist posting, here I think, that he worked on very sophisticated climate models — with significant research that is proprietary, that couldn’t be talked about — for the petroleum industry, and commenting that all the large oil companies have used them for years. They understand what happened, that’s where the oil came from.

    They’re modeling the climate in which the sediment _accumulated_ to be able to predict where to drill _now_ for oil, accounting for continental drift.

    Comment by Hank Roberts — 18 Aug 2007 @ 12:19 PM

  432. #429: The funding given by environmental groups to groups showing AGW exists far exceeds that given to other groups by oil companies. In any case, this argument is irrelevant (ad hominem); if exxon funded scientists can show that AGW doesnt exist, one should be able to find where their model(s) differed in assumptions and argue against those assumptions decisively.

    My concern, in general, is that journal articles favorable towards the idea of AGW have been published for quite a long time. This is because, whether you admit it or not, a climatology journal is more likely to publish a paper showing the grand importance of climatology over a paper which concludes that the climate is more or less unpredictable and/or uncontrollable [I am not suggesting that this is the sole basis for publication, merely that it is an ever-present bias].

    Then, modelers funded by environmental groups or governments under political pressure to “do something” about AGW are making hundreds of microassumptions about the data favoring AGW (with or without citing the aforementioned published papers). When the data comes from various datasets with multiple adjustments made to most rows, the possibility for conscious or unconscious data manipulation increases greatly; anybody who has ever tried to beat a rival model or make a political point (usually that you, the modeler, or your dept. needs more funding or is justified in existing) has felt this constant temptation. Other assumptions that greatly worry me are that AGW modelers claim to be able to predict in, say, decade-sized bins how much the avg. global temperature will increase, but will admit if pressed that they cannot say much about a given year, and can say virtually nothing about any given region of a planet in a given year. If that is the case, how are assumptions about the damages and benefits of global warming worldwide being made… are they being conjured out of thin air? are they completely unreliable MLEs? Policy is being made on this very point: that global warming is bad, and that the bad outweighs the good.

    Finally, I am curious how these models are cross-validated (or if they are validated at all) and what type of model is typically used to fit the data.

    This is my first foray into sharing my thoughts on AGW; hopefully they aren’t old and tired points/questions.

    Comment by Carl G — 18 Aug 2007 @ 1:05 PM

  433. #432
    Carl, could you please clarify what you mean when you say the following?
    “Finally, I am curious how these models are cross-validated (or if they are validated at all) and what type of model is typically used to fit the data.”

    Comment by DavidU — 18 Aug 2007 @ 1:31 PM

  434. Re #417 [Did Darwin cite a bunch of journals or did he; go, look, measure, and think?]

    The dichotomy is a false one. Although there were a few scientific journals in Darwin’s time, journals did not have the central place they do today in a scientific community that is very much larger. However, Darwin was highly active in an international network of scientific correspondence, without which his work, and in particular the marshalling of evidence for “The Origin of Species”, would not have been possible. See http://www.darwinproject.ac.uk/.

    Comment by Nick Gotts — 18 Aug 2007 @ 1:36 PM

  435. Re#431, Hank says: “They’re modeling the climate in which the sediment _accumulated_ to be able to predict where to drill _now_ for oil, accounting for continental drift.”

    I’m not sure exactly what you’re referring to, so correct me if I’m wrong. This isn’t really climate modeling in the sense this thread has been discussing. This is more about modeling the how, why, when, and where of sediment transfer and accumulation. Climate indeed has significant effects on sediment flux (e.g., more precip –> more erosion –> more runoff –> more sediment pumped into a basin, and so on). And, yes, where there are large accumulations of sediment, you find hydrocarbons.

    Sedimentary researchers are borrowing concepts from GCMs for modeling sedimentary systems. For example, see the CSDMS project. Atmospheric and climate scientists are probably the leaders in quantitatively understanding, and therefore modeling, the complex interactions of various forcing, feedbacks, teleconnections, and the like.

    The fact that oil companies model complex systems should come as no surprise…and to say they don’t have ‘competent’ people is also ludicrous. They realize the strength of the scientific case for AGW, they’re not idiots…the focus is on misinforming and confusing the public re policy. The independent ‘auditors’ are just one piece of that puzzle. I would like to see the auditors produce some real science and have it be ‘audited’ by climate scientists….I have a funny feeling that’s not gonna happen.

    Comment by Brian — 18 Aug 2007 @ 1:37 PM

  436. I agree wholeheartedly with Lynn Vincentnathan(#423). Dr. James Hansen is my hero also. I trust him.

    My trust is not shared by the likes of a Jack Kelly, who writes editorial opinions in the Toledo Blade (today, for instance, 8/18/07). He rips Dr. Hansen, Al Gore, and who knows who else, who report on the severe global warming we’ve already experienced, and will experience, if we don’t cut our burning of fossil fuels drastically. His trash in today’s Blade is about the fifth such one he’s written in the last year or so. How The Blade allows him to write such drivel is a mystery to me, but it does seriously cause me to question continuing my subscription to that paper. The parts I need can be read at the local library.

    It’s amazing the denial of AGW in this area. The local rag is partly to blame. The convenient denialists grab onto an article like Kelly wrote, and turn off any further reading or research.

    Facts like the Arctic ice hitting its lowest point in recorded history this summer, and the projection that it’s likely to be ice free in the summer of 2030, get lost in the orgy of celebrating over NASA’s very minor mistake. Also thrown out is the fact that the global temperature from 6/06 to 6/07 was the warmest ever.

    Comment by Jack Roesler — 18 Aug 2007 @ 1:38 PM

  437. #432: the flat-earth theory gets even less funding than the anti-global-warming theory. Does that mean we should be skeptical about the idea that the earth is not flat?

    Exxon and the coal industry have billions or trillions to lose; if there were even a 5% chance that funding scientific work to disprove global warming would pay off, then they could sink in billions. The fact that they do nothing more than fund relatively little PR speaks volumes.

    Comment by Tom Adams — 18 Aug 2007 @ 1:39 PM

  438. Re #427: [If AGW were true, then you can bet every government in the world would be falling all over themselves to ensure it didn’t blossom into a larger problem.]

    Not to get off into politics, but I really think you need to take a more cynical^H^H^H objective view of government. Consider for instance the cost of levee and bridge maintenance, and why governments don’t fall all over themselves to ensure those don’t blossom into larger problems…

    Comment by James — 18 Aug 2007 @ 2:08 PM

  439. #437: I already said that this was an irrelevant point. Perhaps oil companies know that nobody will accept the results of a team that they fund, precisely because any result will be discarded as biased from the outset. But, again, it does not matter who funds what, only what the assumptions and processes involved in the modeling are.

    #433: For example: You can, using a neural network and enough variables, fit 100% of the data in any dataset given enough complexity in the NN architecture (you can achieve something close to this in other models as well). However, such a model would almost certainly predict the future with very little accuracy. This is because the variability in observed data (i.e. station temperature records) leads to artificial associations between the input variables and what level the observed data “happened to be at” in your data set. Until your completed model is used to predict already known values from another data set that weren’t used in building the model, or (better yet) until you wait to see how the predictions compare to the actual future values, the model cannot be trusted. I have not even directly asked a climate modeler, but I have seen statements online that imply that this process doesn’t occur… obviously one shouldn’t judge based on hearsay, so I am asking a more knowledgeable group about the modeling process.
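    A minimal sketch of that hold-out idea, assuming Python with numpy; the “observations” below are synthetic and purely illustrative, not climate data:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 20)
    y = 0.5 * x + rng.normal(scale=0.1, size=x.size)     # noisy synthetic "observations"

    train, test = np.arange(0, 15), np.arange(15, 20)    # hold out the last 5 points

    for degree in (1, 9):
        fit = np.poly1d(np.polyfit(x[train], y[train], degree))   # fit on training data only
        train_err = np.mean((fit(x[train]) - y[train]) ** 2)
        test_err = np.mean((fit(x[test]) - y[test]) ** 2)
        print(degree, train_err, test_err)

    # The degree-9 fit matches the training points almost exactly but will generally do
    # far worse on the held-out points: in-sample agreement alone says nothing about skill.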

    Btw: “the modeling process” may differ widely from model to model, in which case I’d appreciate hearing some things about the variety of what’s out there.

    Comment by Carl G — 18 Aug 2007 @ 2:12 PM

  440. To simplify/add to my last post: what temperature predictions more than 5 years out have been at all accurate? Could a model now predict the mean global temperature for 2010-2015 accurately? If it cannot, it is completely worthless.

    Furthermore, this prediction should come from a model TODAY. You cannot continuously feed new data into a model without destroying your ability to evaluate its accuracy. Assume the model is bad. As the predictions begin going astray, new data will re-adjust the parameters so that the predictions are accurate once again. Acceptable alternatives are using a validation data set (not used in building the model) or simply using the predictions from an older model and comparing them to current temperatures.

    Is there data available to compare predicted temperatures for 2005-2007 from a late 90s (or any other arbitrary time periods) model to actual temperatures ? This is actually rather important. If the model fails to work, this would be a very troubling sign. If it is consistently accurate, one removes the question of predictive power and leaves only the question of whether or not the data itself represents what we are trying to measure.

    [Response: See Hansen's 1988 projections. -gavin]

    Comment by Carl G — 18 Aug 2007 @ 2:22 PM

  441. Re #432 “Policy is being made on this very point: that global warming is bad, and that the bad outweighs the good.” Really? News of this policy development seems to have escaped me.

    Comment by Jerry — 18 Aug 2007 @ 2:23 PM

  442. Ahh, one last thing. Gavin, the two charts you show in the article have different axes: one shows anomalies between -1.5 and +1.5 and the other between ±0.6.

    This is not your “beck” moment, but it might be interesting to avoid the “chartsmanship” issue.

    Just a thought.

    [Response: This makes no sense. They are different quantities, with different levels of noise. Why should they be on the same scale? Global means have much less variability than regional ones, or local ones. Each scale is chosen to show the whole record in as much detail as is practical. (Plus they are taken directly from the GISS website and so weren't drawn by me in any case). - gavin]

    Comment by steven mosher — 18 Aug 2007 @ 2:46 PM

  443. #439
    I am familiar with neural networks, and the phenomenon you describe is not something that is particular to them. You can also fit a simple polynomial to match any finite set of data points, given a high enough degree of the polynomial. This is indeed a problem when it comes to things like the statistical models used in e.g. regression.

    Climate models are _not_ statistical models.

    A climate model is a first-principles model, like for example the models used to study air flow around an aeroplane wing before it is built. These models are based on the fundamental laws of physics, not statistical fits. One starts out with the laws of mechanics describing fluid flow, heat transfer and so on, then sets up the basic geometry of the system. For a climate model this would be the earth, its topography and the presence of water and other materials at an initial time. Then one lets the model run and reads off data as time passes in the model. The accuracy of a model is judged, for example, by starting it up with the known weather and climate in 1965 and then comparing the model’s predictions with the observed climate in the following years.

    If the model does not do well, one does not just change some parameters to make it fit the data, as one would have to do in a statistical model. Instead one looks at what part of the actual physics one has left out and tries to add that without making the model so complex that our computers can’t run it.
    The climate models used today do give correct predictions for the 1970s if you start them out at some time in the 1960s.
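    As an extremely simplified illustration of what “first principles” means here – not any actual GCM – one can integrate a zero-dimensional energy-balance model in Python, built only from the Stefan-Boltzmann law plus an assumed albedo, effective emissivity and heat capacity (all three are illustrative round numbers):

    SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0         # solar constant, W m^-2
    ALBEDO = 0.3        # planetary albedo (assumed)
    EMISSIVITY = 0.61   # effective emissivity standing in for the greenhouse effect (assumed)
    HEAT_CAP = 4.0e8    # effective heat capacity of the surface layer, J m^-2 K^-1 (assumed)

    def step(temp, dt=86400.0):
        """Advance the surface temperature (K) by one day from the energy imbalance."""
        absorbed = S0 * (1.0 - ALBEDO) / 4.0
        emitted = EMISSIVITY * SIGMA * temp ** 4
        return temp + dt * (absorbed - emitted) / HEAT_CAP

    temp = 255.0                      # arbitrary cold starting point
    for _ in range(365 * 50):         # integrate forward 50 years
        temp = step(temp)
    print(round(temp, 1))             # settles near 288 K, set by the physics, not by fitting

    Nothing here was tuned to a temperature record; change the physics (say, the assumed emissivity) and the answer changes accordingly, which is the sense in which such a model is tested against observations rather than fitted to them.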

    Comment by DavidU — 18 Aug 2007 @ 3:00 PM

  444. #441: I think that goes without saying; we would want to increase global warming if the positives (defined as some socioeconomic metric) outweighed the negative, or at least would not care so much to try and stop it. Or, alternatively, if AGW is shown to not exist in addition to showing that NGW (N = natural — not sure of the jargon used in this field yet) does exist:
    1) The phenomenon could be ignored (if found to be a net positive)
    2) Resources could be spent to move people out of coastal areas, to begin buying farmland further north in canada, etc (if found to be a net negative).

    It is most likely that global warming, regardless of the cause, will cause both positive and negative effects in most areas with some distribution of net positive and net negative effects across the globe. I find it ridiculous that many people assume that GW is a net negative across the globe.

    Comment by Carl G — 18 Aug 2007 @ 3:02 PM

  445. #432 #439
    Regarding the decade predictions with a lack of predictions for specific places and points in time.

    Part of the problem here is that “climate” is essentially a description of the average temperatures, humidities, amounts of rain and so on in an area during a year. Large averages like this are much easier to predict than the effects at a specific place.

    A simple everyday example of this is the bubbles in a pot of boiling water. From physics we know that when water is heated enough it will boil. So if we turn up the heat on the stove we can predict that the water in the pot will begin to boil. Predicting at which precise point at the bottom of the pot the first bubble will form when the water begins to boil is much much harder.

    Comment by DavidU — 18 Aug 2007 @ 3:08 PM

  446. Tea or coffee, Steven?

    Comment by Timothy Chase — 18 Aug 2007 @ 3:30 PM

  447. Re #444 [It is most likely that global warming, regardless of the cause, will cause both positive and negative effects in most areas with some distribution of net positive and net negative effects across the globe. I find it ridiculous that many people assume that GW is a net negative across the globe.]

    Do you have specific evidence of anyone “assuming” that, rather than arguing it? There are very good reasons to expect it, the most general of which is that social, technical and ecological systems are adapted to current climatic conditions, and rapid change in any direction is likely to be highly disruptive. For more specific reasons, see for example the IPCC AR4 Working Group III Summary for Policymakers (http://www.ipcc.ch/SPM040507.pdf) or the Stern report (http://www.hm-treasury.gov.uk/independent_reviews/stern_review_economics_climate_change/stern_review_report.cfm).

    Comment by Nick Gotts — 18 Aug 2007 @ 3:33 PM

  448. BlogReader — The one-off projects are not ‘hobbies’. The results are the basis for scientific papers.

    Nick Gotts — Actually, this technique IS the best scientific practice. Rather than taking the (large amount of) time to polish and document the program, it is more productive to write and polish the paper.

    I have never heard of a case in which a referee wanted to see the computer code to complete his task of checking that the submitted paper was, at least on the surface, correct.

    Comment by David B. Benson — 18 Aug 2007 @ 3:51 PM

  449. [[The funding given by environmental groups to groups showing AGW exists far exceeds that given to other groups by oil companies.]]

    Oh, really? What are the respective numbers, and what is your source?

    Comment by Barton Paul Levenson — 18 Aug 2007 @ 4:34 PM

  450. #443: Thanks, I did not know this (re: first principles model vs. statistical model). So, then, the modeling process is to find which of the known physical laws are required to be inputs in the model given a desired level of accuracy. Should infinite computing power be available, the model would be perfect (assuming perfect input data).
    All laws used in current modeling are or have been assumed to be accurate. Climate skeptics, then, can challenge the truth of the physical laws, human errors in the modeling, or the input data only.

    Do the current models predict the 1980s, 1990s, 2000s (so far) well? I would be concerned only if error continued to rise decade by decade (indicating a rising trend that does not exist).

    #445: Complete agreement here. I am very annoyed when MLEs of where the first bubble will appear are touted as accurate because the overall “averaged” model is accurate, though (ex. hurricane forecasts).

    #447: I would argue that canadian tundra // Siberia will undoubtedly become more valuable if local temperatures rise. This is a silly example, but there are many areas where the net effect of warming would be positive even when considering the fixed cost of relocating persons/industries. Even then, we should rather consider [Global warming costs - global warming benefits] and compare it to the costs of reducing greenhouse gases; if it is cheaper to do nothing, that is what should be done. It’s better to spend $1T to move everyone off the shoreline, out of floodplains, and out of deserts than it is to spend $2T so people can stay put (this assumes that money can represent sociopolitical concerns, and also that spending $2T would actually do anything for the climate). The world isn’t going to stay in a constant state on any political/economic/social front, regardless of global warming’s effect. I will look at the IPCC’s assumptions sometime in the next few days and comment back here if the thread is still alive.

    Comment by Carl G — 18 Aug 2007 @ 4:36 PM

  451. [[Could a model now predict the mean global temperature for 2010-2015 accurately? If it cannot, it is completely worthless.]]

    Classic non sequitur. A model can predict the temperature in 2015 given that everything else (including trends) stays the same. Nobody can predict the future exactly, but we don’t have to in order to know that global warming is a bad idea.

    Comment by Barton Paul Levenson — 18 Aug 2007 @ 4:37 PM

  452. [[ I find it ridiculous that many people assume that GW is a net negative across the globe.]]

    You’d be surprised by how many people consider trillions of dollars in property damage and millions of deaths to be a bad thing.

    Comment by Barton Paul Levenson — 18 Aug 2007 @ 4:40 PM

  453. Re #448 [Nick Gotts — Actually, this technique IS the best scientific practice. Rather than taking the (large amount of) time to polish and document the program, it is more productive to write and polish the paper.

    I have never heard of a case in which a referee wanted to see the computer code to complete his task of checking that the submitted paper was, at least on the surface, correct.]

    I think we’ll have to agree to disagree. I don’t dispute that yours is currently the majority opinion, but one journal I write and review for (Journal of Artificial Societies and Social Simulation) now includes in its form for reviewers a question about whether code is made available. I believe (I’d need to check this) there has also been discussion recently in Nature about whether statistical programs used to generate results should be required to be open source. Also, it’s not just a matter of publication: if code is impenetrable and undocumented, even the author themselves cannot reasonably be confident it is doing what it is supposed to do.

    Comment by Nick Gotts — 18 Aug 2007 @ 4:46 PM

  454. Carl G. writes: “I find it ridiculous that many people assume that GW is a net negative …”

    Carl, rate of change. We know quite well how fast natural systems adapt to changes.
    The rate of global change from human activity is 100x faster than any event in the past other than a major asteroid strike.
    http://www.sciencemag.org/cgi/content/abstract/303/5659/827?ijkey=5S8jAr5Xb.x0A

    From that page look at the Supplemental Material (PDF), the second chart on p.6. That’s the change in the rate of change — very little change for many tens of thousands of years, then rapid change in the ecology as the climate changed naturally since the end of the last ice age. And we’re now driving change 100x faster.

    We know how fast natural systems can change from paleoecology work — because they either moved along with the climate or they died out if the rate of change was too fast. Rates of change well within what has happened in the past, naturally, suffice to drive species out of areas where they can live. When it happens one continent at a time, life goes on elsewhere.

    100x faster, this century, globally.

    Comment by Hank Roberts — 18 Aug 2007 @ 5:01 PM

  455. Hank Roberts (#431) wrote:

    They’re modeling the climate in which the sediment _accumulated_ to be able to predict where to drill _now_ for oil, accounting for continental drift.

    For example, if one models the Cretaceous period, one should be better able to identify the regions where oil shale will have formed. This would cut down on the cost of exploratory drilling, or better yet, identify areas which are likely to contain oil deposits – prior to obtaining the rights to drill there.

    Comment by Timothy Chase — 18 Aug 2007 @ 5:13 PM

  456. Nick Gotts — That is interesting and new to me.

    I meant to write that the code is impenetrable by anybody but the author. For the sort of one-off that I am considering, no documentation is required because it is written in a very clear programming language (Standard ML) and I know what each part is supposed to do. But it was not written as a product, any more than a typical lab bench experiment is intended to be used, or understood in detail, by any but the experimenter.

    Computer programs written for distribution are another matter. There, attention is paid to the readability of the code, and there is some documentation provided.

    Comment by David B. Benson — 18 Aug 2007 @ 5:16 PM

  457. #450
    The current models have done a good job. For the accuracy of state-of-the-art models you will have to ask someone else, e.g. Gavin; I’m a computational physicist, not a climate modeler, myself. But if you look at the paper Gavin mentions at the end of #440 you can see how close to the current actual climate the models of the late 80s got. Needless to say, both models and computers have advanced a lot since then.

    Most skeptics seem to object to far less concrete things than the ones you mention, but the things you mention are what they, at least indirectly, challenge.

    Comment by DavidU — 18 Aug 2007 @ 5:20 PM

  458. Re #457

    David, you don’t really think that Gavin is going to cite papers whose models are not close to the observed climate, do you?

    Every financial document supplied in the UK has to contain the phrase “Past performance is not a guide to future performance.” Yet people still have the simplistic idea that if scientists can find a projection which fits the past performance of the climate, then that is a guide to the future performance of the climate.

    Both the economic system and the climate system are chaotic. Need I say more?

    Comment by Alastair McDonald — 18 Aug 2007 @ 6:01 PM

  459. Re 440: “Could a model now predict the mean global temperature for 2010-2015 accurately? If it cannot, it is completely worthless.”

    My understanding is that models make projections of possible future outcomes based on choices that society makes in the economic, social and political arenas. There’s a time lag between choices made today and their future effects, which are not yet affecting short term projections.
    As for your feeling that if they can’t accurately predict the warming for 2010-2015 then they’re completely worthless, it falls short on two grounds. First, just because something isn’t completely efficient doesn’t mean it’s unnecessary. If this were so, we’d have scrapped our Dept. of Defense long ago. Second, do the words ‘risk assessment’ mean anything to you? If you own a home and carry fire insurance on it, is it because you have accurately predicted that your home will burn down? Of course not. You’ve made an assessment that the cost of paying the insurance company is worth the protection, against the worst possible outcome, should the uncertain occur. It’s preferable to doing nothing even though uncertainty exists.
    There are uncertainties in AGW for sure. But look around. Check out the difference in extent of most mountain glaciers compared to 30 years ago, or the extent of sea ice in the Arctic compared to a few decades ago. There are many manifestations all around us. Should we stand around whistling Dixie? Or should we be doing what the wise and practical homeowner does and start protecting our one and only world?

    Comment by Lawrence Brown — 18 Aug 2007 @ 6:56 PM

  460. Alastair McDonald (#458) wrote:

    Both the economic system and the climate system are chaotic. Need I say more?

    The weather is chaotic, Alastair. The climate system is stable except at certain branching points – if for example you were balanced on the edge of an albedo flip. A switch in bimodal ocean circulation would be another. This is all physics. No tulip craze here.

    Comment by Timothy Chase — 18 Aug 2007 @ 7:01 PM

  461. #429 Nick Gotts: It’s your analogy that’s flimsy. The oil companies (primarily Exxon) that are funding and supporting denial that AGW is a serious problem, must be assumed to want to convince governments and publics of this position.

    Your belief is that if another fuel source arrived tomorrow, Exxon et al would be out of business? Exxon is in the business of moving energy from point A to point B. Today, point A is the Middle East, and point B is your local gas station. Where point A begins in the future is largely irrelevant. Someone will need the skills to broker deals and ensure supplies are flowing. If it’s biodiesel you can bet Exxon will still play a major role, and even if it’s locally produced hydrogen you can bet the major oil companies will still play a role. Never underestimate the power of a brand. Dell could be selling lawn tractors starting tomorrow and they’d quickly figure out the supply chain and beat John Deere at their own game.

    The same can be said for electric cars. GM wants to sell you a car, with a belief attached to that car that the car makes your life better. If it’s an electric car that folks want, the GM will make an electric car. If it’s a squirrel powered car, then GM will make a squirrel powered car.

    As the old saying goes, the railroad companies failed because they believed they were in the business of trains. They weren’t. They were in the business of moving goods from point A to point B. They viewed trucks as competition rather than as a way to shore up their weaknesses.

    Walmart, FedEx, Microsoft, Exxon all exist NOT for the reason most believe they exist. The rules can change, and they will still be on top. Why? Because few can do what they do as efficiently. Their strength is understanding the complex. Feeding a country of several hundred million the trillions in $ they require in fuel is not a task for anyone except a large corporation. The fuel can be whatever you want it to be.

    Comment by Matt — 18 Aug 2007 @ 7:40 PM

  462. #460 Timothy Chase: The weather is chaotic, Alastair. The climate system is stable except at certain branching points – if for example you were balanced on the edge of an albedo flip. A switch in bimodal ocean circulation would be another. This is all physics. No tulip craze here.

    And you think financial markets aren’t chaotic?

    Chaotic systems are also usually not very well understood. After all, if you really understood the knobs that mattered, then slight changes in initial conditions wouldn’t result in big changes at the output.

    Comment by Matt — 18 Aug 2007 @ 7:50 PM

  463. #457 DavidU: But if you look at the paper Gavin mention at the end of #440 you can see how close to the current actual climate the models in the late 80’s got.

    Here’s a fun exercise (unfortunately I can’t post graphics). Take a historic global temperature record expressed as an anomaly. Import that into your favorite graphics package, and build a triangle shape that graphically shows Hansen’s prediction: in 20 years, the warming will be a max of +0.7°C or a min of 0.17°C.

    Now slide that triangle around the record and get a feel for just how often that prediction would be correct. The “prediction” he made is correct about 50% of the time. If you limit the prediction to 15 years, that prediction is correct about 65% of the time since 1850. Hansen’s prediction was also true from 1905 all the way through 1948. In fact, if you believe the temp will increase, find a period in time where Hansen’s prediction was NOT true! There weren’t that many times.

    I could make a statement today about the the position of the DJIA in 20 years quite easily. Does that mean I understand the DJIA? Not really. All it really shows is that I was able to look at past performance, and come up with a window that would fit about 50% of the time in a bull or bear market. Make a prediction, then wait.

    [Response: You confuse statistical forecasting which knows nothing about the underlying physics (and in your case is simple linear extrapolation) with physical modelling based on first principles. How is a linear extrapolation going to help assess a prudent level of emissions? Or assess the likelihood of changes in rainfall or storminess? Physics-based modelling is both much harder, and more useful. - gavin]

    Comment by Matt — 18 Aug 2007 @ 8:02 PM

  464. You folks should know about this if you don’t already and have a response. It’s not an amateurish looking product:

    http://xxx.lanl.gov/PS_cache/arxiv/pdf/0707/0707.1161v2.pdf

    Falsification Of The Atmospheric CO2 Greenhouse Effects Within The
    Frame Of Physics
    Authors: Gerhard Gerlich, Ralf D. Tscheuschner

    [edit]

    [Response: It's garbage. A ragbag of irrelevant physics strung together incoherently. For instance, apparently energy balance diagrams are wrong because they don't look like Feynman diagrams and GCMs are wrong because they don't solve Maxwell's equations. Not even the most hardened contrarians are pushing this one.... - gavin]

    Comment by Neil B. — 18 Aug 2007 @ 8:28 PM

  465. Re 434
    One of the things that Darwin did was write letters asking people to go, look, and measure for him. When he was writing his book on worms, he went and talked to guys with mud on their boots. He was my kind of guy.

    Comment by Aaron Lewis — 18 Aug 2007 @ 9:18 PM

  466. Re # 432, 439 CarlG: “Finally, I am curious how these models are cross-validated (or if they are validated at all) and what type of model is typically used to fit the data.” ““…the modeling process” may differ widely from model to model, in which case I’d appreciate hearing some things about the variety of what’s out there.”

    You might check these references (links provided to the abstracts):

    D. Sornette, A. B. Davis, K. Ide, K. R. Vixie, V. Pisarenko, and J. R. Kamm (2007) Algorithm for model validation: Theory and applications. PNAS 104, 6562-6567
    http://www.pnas.org/cgi/content/abstract/104/16/6562

    R. K. Heikkinen, M. Luoto, M. B. Araujo, R. Virkkala, W. Thuiller, and M. T. Sykes (2006) Methods and uncertainties in bioclimatic envelope modelling under climate change. Progress in Physical Geography 30, 751-777
    http://ppg.sagepub.com/cgi/content/abstract/30/6/751

    G. A. Fine (2006) Ground Truth: Verification Games in Operational Meteorology. Journal of Contemporary Ethnography 35, 3-23
    http://jce.sagepub.com/cgi/content/abstract/35/1/3

    M. Lahsen (2005) Seductive Simulations? Uncertainty Distribution Around Climate Models. Social Studies of Science 35, 895-922
    http://sss.sagepub.com/cgi/content/abstract/35/6/895

    M. Donatelli, M. Acutis, G. Bellocchi, and G. Fila (2004) New Indices to Quantify Patterns of Residuals Produced by Model Estimates. Agron. J. 96, 631-645
    http://agron.scijournals.org/cgi/content/abstract/96/3/631

    Oreskes N., Shrader-Frechette K., and Belitz K. (1994) Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences. Science, Vol. 263, no. 5147, pp. 641-646
    http://www.sciencemag.org/cgi/content/abstract/263/5147/641

    Comment by Chuck Booth — 18 Aug 2007 @ 10:43 PM

  467. Barton Paul Levenson> You’d be surprised by how many people consider trillions of dollars in property damage and millions of deaths to be a bad thing.

    While I will agree that sea level rise will eventually cause a lot of property damage, where is the evidence that AGW would cause more deaths than would be caused by mitigation efforts?

    That includes extending the time required for the people of developing nations to rise out of poverty, and the likely reduction of resources supplied from developed nations to help provide clean water and reduce disease.

    Comment by Steve Reynolds — 18 Aug 2007 @ 11:20 PM

  468. Chuck’s right about those, and those are just a few examples. The literature on comparisons of the various large climate models, and the work done comparing their results, is just astonishing to me as an outside reader, as soon as I start looking for it.

    The Pew Trust had addressed this a while back, I just found their page here, with a pointer to the Lawrence Livermore climate model comparison and validation program (that I’d just found a few days ago from another angle).

    Pew covered this in describing Crichton’s errors: http://www.pewclimate.org/state_of_fear.cfm

    Excerpt follows:

    Crichton seems unaware that the discussion of climate model validation is a common feature of publications utilizing these models and model errors and biases are often explicitly quantified and described. Similarly, Crichton appears unaware of the various model comparison, evaluation, and validation projects that currently exist. For example, the Program for Climate Model Diagnosis and Intercomparison at Lawrence Livermore National Laboratory has been conducting model comparison and validation tests since 1989 (including the climate models used by the IPCC), and published a publicly available report of its research in the summer of 2004. [see http://www-pcmdi.llnl.gov/ ]

    Comment by Hank Roberts — 19 Aug 2007 @ 12:24 AM

  469. As far as I know environmental organisations fund very little *research*, they’re substantially PR organisations. In the UK I’d expect climate research to be mainly funded by the NERC, I assume there are some direct funding mechanisms for the Met. Office.

    I don’t see any problems with climate modelling results coming out of the oil industry, they do a lot of science in other areas rather competently. If it goes into the peer-reviewed literature and stands the test of time then it’s a useful contribution.

    Comment by SomeBeans — 19 Aug 2007 @ 2:04 AM

  470. #458
    Yes, Alastair, unless you want to do more than mention buzzwords you need to say a lot more!

    If you take some time to learn the mathematics behind chaotic systems you will learn that even though they are unpredictable in some ways (this is the part that is emphasised in the popular science books), they are also highly predictable when it comes to averages. This kind of stability is studied in an area called “ergodic theory”, and it is a theory in the mathematical sense of the word, not in the sense “theory = hypothetical”.
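    A small numerical illustration of that point, assuming Python with numpy: the logistic map at r = 4 is a standard chaotic system, so two nearly identical starting points diverge within a few dozen steps, yet their long-run time averages agree closely.

    import numpy as np

    def trajectory(x0, n=200000, r=4.0):
        """Iterate the chaotic logistic map x -> r*x*(1-x) and return the whole orbit."""
        xs = np.empty(n)
        x = x0
        for i in range(n):
            x = r * x * (1.0 - x)
            xs[i] = x
        return xs

    a = trajectory(0.2)
    b = trajectory(0.2 + 1e-10)                 # almost identical initial condition

    print(np.max(np.abs(a[:100] - b[:100])))    # already of order 1: no pointwise predictability
    print(a.mean(), b.mean())                   # both close to 0.5: the average is predictable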

    You also seem to be repeating the exact mistake that I have already talked about. Climate models are not statistical fits to the past! They are first-principles models, or simulations if you like that word better. If you do not believe that systems with chaotic behaviour can be modeled in this way then you have far better things to worry about. For example, the air flow models used to design modern aeroplanes are exactly this kind of model, including chaotic turbulent flows around the wings of the aeroplane.

    Financial systems are not hard to model because they are chaotic, they are hard to model because we do not know any mathematical laws underlying them. Because of this we have to make guesses and introduce random components in the models, and that leads to a completely different kind of unpredictability.

    Regarding your first point, yes I expect that if there were papers where a model made a bad prediction, and the reason for the problem was not already well understood, Gavin would mention it. Gaps like that, in any model, normally point the way to good improvements and are the kind of things scientists really keep an eye open for.

    Comment by DavidU — 19 Aug 2007 @ 3:55 AM

  471. re: 442 and axes

    An objection about axes (axises?). In this thread? Well, at least there’s no irony shortage.

    Comment by Jeffrey Davis — 19 Aug 2007 @ 6:44 AM

  472. Re #470

    I should have written that both the financial markets and the climate are dynamical systems. Chaos is just one state of a dynamical system when there is strong positive feedback. The global climate system has been relatively stable for the last 10,000 years because negative feedbacks have dominated. Now, however, with the positive feedbacks from the ice albedo effect out of control in the Arctic, we can expect the whole climate system to become unstable and a runaway warming to follow.

    It is possible to build a computer model which reproduces chaos, but since the climate has been stable, the climate scientists have not needed to incorporate that sort of code. In other words they have matched their models to the climate they know, not the real one!

    When they have tried to reproduce the abrupt changes which happened before 10,000 years ago they have failed, but they don’t like to admit that.

    There were about twenty models used in the IPCC report and all gave different results for climate sensitivity. If Gavin and Jim Hansen’s model gave the right results then the other nineteen must have given the wrong results. Why didn’t Gavin cite them? They outnumbered his model 19 to 1. They are the consensus!

    Comment by Alastair McDonald — 19 Aug 2007 @ 10:18 AM

  473. Gavin, I understand what you said — the two graphs at the top came from GISS, and illustrate
    “… different quantities, with different levels of noise. Why should they be on the same scale? Global means have much less variability than regional ones, or local ones.”

    It might be useful to help people get a better idea how their region is varying, compared to other regions.
    I know where I live, in the Pacific coastal fog belt, we’re not experiencing much local change.
    If it weren’t for the science being done, we locally wouldn’t have a clue anything was changing in the world.
    It might help people be a bit more aware that change is happening elsewhere, beyond their horizon.

    There’s a song called “Before Believing”
    ____
    “How would you feel if the world was falling apart around you
    Pieces of the sky were falling in your neighbor’s yard
    But not on you
    Wouldn’t you feel just a little bit funny
    Think maybe there’s something you oughta do ….”
    ————-
    ——— http://www.lyricsdepot.com/emmylou-harris/before-believing.html

    Comment by Hank Roberts — 19 Aug 2007 @ 10:21 AM

  474. Re #467 [While I will agree that sea level rise will eventually cause a lot of property damage, where is the evidence that AGW would cause more deaths than would be caused by mitigation efforts?

    That includes extending the time required for the people of developing nations to rise out of poverty, and the likely reduction of resources supplied from developed nations to help provide clean water and reduce disease.]

    Aside from sea-level rise, which if more than minimal will deprive many millions of homes and livelihoods, likely effects include disappearance of high-altitude glaciers and snows on which around 1/6 of the world’s population depend for water supply; drought-affected areas increasing; flood events increasing; disruption of fisheries; increase in malarial areas. Tropical areas and the poor are likely to be disproportionately affected – most of what benefits there are will accrue in higher latitudes. (All this from the IPCC AR4 WGII Summary for Policymakers). These are only the most likely and most direct effects. Disruption of the Asian monsoon, if it happens, will be devastating. Increased variability and unpredictability of conditions year to year, likely in a dissipative system being pushed out of its current basin of metastability, could make things very difficult for farmers worldwide. Less directly, many of the above effects would carry the risk of refugee crises, disease outbreaks, state breakdown, and what I’d call “wars of distraction” launched by ruling elites under pressure.

    On the other side, while there will undoubtedly be high costs to any serious attempt at mitigation, this would also require something like a global agreement (covering at least the rich world, India and China, and probably other states with large and currently poor populations) which would inevitably have to bring in issues other than greenhouse gas emissions – such as those you mention – if only because these states will say, reasonably enough, that they cannot bring their populations on board without serious help in those other areas.

    In other words, if you really care about clean water and reducing disease, get behind climate change mitigation. Not only is it directly essential to solving those issues (tropical glacier melt, malaria); it could also serve as a catalyst for wider international cooperation on more equal terms than in the past.

    Comment by Nick Gotts — 19 Aug 2007 @ 11:07 AM

  475. RE #427 & If AGW were true, then you can bet every government in the world would be falling all over themselves to ensure it didn’t blossom into a larger problem. But in spite of the *trillions* that world governments have on tap, they have opted to do nothing about it.

    See, this is how it works. Politicians need money to run for office. The oil companies are happy to oblige. For instance, they gave about the same amount to both the Clinton and Bush Sr campaigns. That way, no matter who wins, they have a door into the Oval Office.

    Then there are the media — who not only report on global warming (or fail to do so), but also on elections. Well, who funds them? Advertisers. I remember when ComEd was coming up for a long-term contract in Chicago; all of a sudden I saw lots of ads on TV for ComEd electricity. Since IL was not deregulated and ComEd was a natural monopoly there, I thought it strange that they would advertise when everyone has to buy from them. Then it dawned on me: yes, their contract is coming up & what they are buying is media support more than public support.

    So, actually, we probably never will get a president who does anything about global warming. They’ve all sold their souls to the dev-oils.

    Comment by Lynn Vincentnathan — 19 Aug 2007 @ 11:38 AM

  476. Gavin says in #463: [Response: You confuse statistical forecasting which knows nothing about the underlying physics (and in your case is simple linear extrapolation) with physical modelling based on first principles. How is a linear extrapolation going to help assess a prudent level of emissions? Or assess the likelihood of changes in rainfall or storminess? Physics-based modelling is both much harder, and more useful. - gavin]

    Exactly! Predicting with a highly simplified model of a not-well-understood complex system amounts to little more than guessing. We can put some simple physics behind it to make it look impressive, but in the end it’s guessing.

    This paper, to me, really underscores this: http://www.cgd.ucar.edu/ccr/publications/meehl_cmip.pdf

    Here we have a bunch of models in very close agreement on CO2 sensitivity, but not even close on precipitation rates. How can that be?

    How do you rule out that each modeler went in with a pre-conceived notion of what the CO2 sensitivity should be (and remarkably all models got very close to that) but at the same time the models were all over the place for rain? Is there really a rigorous physical model at the heart of all this, or is it a bunch of small physical systems strung together with feedbacks and feedforwards tweaked by the modeler to get the output they expect (confirmation bias)?

    If the former, then we’d expect independently derived models to have very similar output for CO2, rain, clouds, etc. If the latter, then at the heart of each model is really nothing more than intuition (a la Hansen’s prediction).

    [Response: Your reading is based on an incorrect assumption that climate sensitivity is something you can easily fix. That's just not so. The range in the CMIP simulations wasn't all that narrow in any case (2 to 4.1ºC at equilibrium I think), and yes, precipitation is more variable (1 to 3% increase per degree of warming). But the tweaking is not done on the sensitivity to CO2 but to the present day climatology - of rainfall, clouds etc. All of the interesting sensitivities are emergent properties, and I assure you that they are not simply tunable based on intuition. - gavin]

    Comment by Matt — 19 Aug 2007 @ 12:45 PM

  477. Nick Gotts> …disappearance of high-altitude glaciers and snows on which around 1/6 of the world’s population depend for water supply; drought-affected areas increasing; flood events increasing; disruption of fisheries; increase in malarial areas.

    Water supply issues can be addressed (and improved over present conditions) by building dams.

    Most of your other concerns are speculation, with no consensus that they are likely to be severe (or even exist for some, such as malaria).

    If human life is your standard, aggressive AGW mitigation is likely a very counter-productive approach.

    Comment by Steve Reynolds — 19 Aug 2007 @ 1:35 PM

  478. Re #461: [GM wants to sell you a car, with a belief attached to that car that the car makes your life better. If it’s an electric car that folks want, the GM will make an electric car.]

    This is at best a half truth, because GM has invested large amounts of money in a) persuading large segments of the buying public that they want a particular sort of car; and b) establishing a design & manufacturing infrastructure to build the sort of car that they’ve persuaded their market to want. Those establish a feedback cycle with a lot of inertia, so that they keep on building & selling particular sorts of cars, even when – as we’ve seen over the last 40 years or so – large parts of the market clearly prefer something different. Undoubtedly any company caught in such a cycle could break out of the loop, but few actually manage to do so.

    I see this same pattern at all levels, from individuals who seemingly can’t alter self-destructive behaviors on up to entire societies. Indeed, I expect this is at the root of a lot of denialism: the denialists have established a set of habits, and reject any information that suggests a need to change them.

    Comment by James — 19 Aug 2007 @ 2:17 PM

  479. #472
    Yes, they are both dynamical systems; if they were not, then both of them would be static in time. Feedbacks are not what gives rise to chaos, though for some dynamical systems they can be. The mathematically simplest chaotic systems are composed of just two linear functions, no fancy stuff needed.
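
    As a purely illustrative sketch (my own, with arbitrary numbers), the “tent map” is such a system: two straight-line pieces, yet a microscopic difference in the starting point grows until the two runs bear no resemblance to each other.

    def tent(x, mu=1.99):
        # piecewise-linear "tent" map; mu just below 2 avoids a floating-point artefact
        return mu * x if x < 0.5 else mu * (1.0 - x)

    x, y = 0.412, 0.412 + 1e-10   # two almost identical initial conditions
    for _ in range(60):
        x, y = tent(x), tent(y)

    print(abs(x - y))             # no longer ~1e-10; the separation is now macroscopic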

    The fact that you think that one needs to input some kind of special “chaos code” into the models just shows that you lack a basic understanding of chaotic systems. First-principles models used to model both climate and fluid flows are in themselves chaotic on some scales unless there is something artificially damping the system.

    I can only repeat the basic fact that you again ignore. These models are _not_ constructed by fitting them to the climate of the past. They are based on setting up the laws of physics together with our best knowledge of the state of the earth at _one_ point in time, and are then allowed to run.

    This is exactly the kind of first-principles modelling used to build aeroplanes, cars, boats, and computer chips, and used by e.g. NASA and ESA to compute orbits for space missions to other planets.

    Comment by DavidU — 19 Aug 2007 @ 2:28 PM

  480. Hansen distinguishes between the captains of industry and the jesters.
    He’s right; there are very, very few people right now whose opinions matter — the big top decision-makers.

    Comment by Hank Roberts — 19 Aug 2007 @ 3:48 PM

  481. Steve Reynolds (#477) wrote:

    Water supply issues can be addressed (and improved over present conditions) by building dams.

    What we are speaking of is years of drought over large areas occasionally interrupted by periods of severe flooding. These will include Australia, large parts of Asia (with over a billion facing severe water shortages in the latter half of this century due to the disappearance of glaciers on the Tibetan Plateau), and severe droughts in both the US southwest and southeast as a result of the poleward expansion of the Hadley cell.

    You might try dams and irrigation – assuming you can find the fresh water to dam, have the means to transport it, and can somehow irrigate a large enough area to make a dent in the problem – but then you face a much higher rate of evaporation in a more arid climate. The soil will dry out more quickly. Then there is increasing heat stress, which already appears to be having a measurable effect (in terms of atmospheric CO2) on the ability of plants to sequester carbon dioxide during the hotter, drier years.

    Steve Reynolds (#477) continued:

    Most of your other concerns are speculation, with no consensus that they are likely to be severe (or even exist for some, such as malaria).

    No doubt it is possible to cherry-pick the more speculative. However, hemorrhagic dengue is already in Taiwan and establishing itself in Mexico.

    Lethal type of dengue fever hits Mexico
    By Mark Stevenson
    Sunday, April 1, 2007 – Page updated at 02:04 AM
    http://seattletimes.nwsource.com/html/nationworld/2003645837_dengue31.html

    In fact hemorrhagic dengue is in the process of becoming endemic to Taiwan due to warmer winters…

    Second dengue fever patient dies in Taiwan
    (November 1, 2006)
    http://www.sciencedaily.com/upi/index.php?feed=Science&article=UPI-1-20061101-13131500-bc-taiwan-dengue.xml

    … and then there is India:

    More dengue fever cases reported in India
    (October 17, 2006)
    http://www.sciencedaily.com/upi/index.php?feed=Science&article=UPI-1-20061017-10114900-bc-india-dengue.xml

    But lets look at some more and what climatology has to say about this sort of thing:

    Climate connections: The International Research Institute for Climate Prediction (IRI) specializes in providing climate prediction information in support of human development. In the Greater Horn of Africa, an area comprising parts of ten countries, new climate models are showing accuracy in predicting outbreaks of the cattle disease Rift Valley Fever. The region’s economy, heavily dependent on cattle trade across the Red Sea, can be devastated by blanket trade bans related to this disease. Maxx Dilley of the IRI has been working with several countries and organizations to encourage more rational cattle trading regulatory system based on accurate reporting and transparent governance, supported by information provided by the climate models. On the other side of the continent, in the West African Sahel, nomadic herders are at risk of many infectious diseases including epidemic malaria and bacterial meningitis (both deadly to non-immune populations and thus the cause of catastrophic health problems that severely hamper social and economic development in the region). IRI health scientists Madeleine Thomson and Steve Connor are part of a new study called NOMADE, which uses climate information to predict and then combat these climate-related diseases in the Sahel. The NOMADE study aims to inform regional epidemic control initiatives spanning six West African countries from Mauritania to Chad, which aim to dramatically improve health and economic development prospects in this semi-arid region.

    Earth Institute Fights Poverty Worldwide
    Scientists target hunger, malaria, economic development
    http://www.earthinstitute.columbia.edu/news/2004/story02-13-04.html

    Steve Reynolds (#477) continued:

    Most of your other concerns are speculation, with no consensus that they are likely to be severe (or even exist for some, such as malaria).

    Not much more speculative than my assumption that I will be able to get to work tomorrow without becoming a casualty in a fatal car accident, that the company I am working for will still be in business by the time I arrive, and that the building I work at will not have burned to the ground as the result of arson. In fact, I am quite hopeful that I will receive a paycheck by the end of the pay period. With regard to malaria, see above.

    Steve Reynolds (#477) continued:

    If human life is your standard, aggressive AGW mitigation is likely a very counter-productive approach.

    I suppose it depends upon what you mean by “aggressive AGW prevention.” If you meant getting rid of all internal combustion engines tomorrow, I believe you might be right. But when one considers the droughts and famine which are virtually a given with the current business-as-usual approach, changing our trajectory over time while we have the time would seem the proper course. Mitigation will be an almost futile exercise if we don’t.

    Comment by Timothy Chase — 19 Aug 2007 @ 3:58 PM

  482. RE Steve 477: I find that view simplistic at best. Let’s push the reasoning: water supplies threatened, let’s build dams. In the process of doing so, let’s have a few more species compromised or extinct. The ecosystems including these species are then changed and will inevitably become less productive and less generous of their services to us humans. So we will have to devise other (energy consuming) engineering solutions to compensate for that: no fish, let’s do aquaculture, and industrially produce the pellets to feed the fish. Of course, this will require more work and energy than the natural systems (check out how much energy and organic matter is necessary to make a pound of farmed salmon). From one engineering solution to another, we progressively change this whole planet into an exclusive support system for humans. When that is all done, what kind of quality of life can these humans expect? How much work will they have to shoulder? How “rich” will any one of them actually be? I remember something similar to that being considered in a SF book called “The Godwhale” (T.J. Bass). It was not a very optimistic view of how satisfying, or even viable, such a world would be.

    We have no precise idea of how our societies will be affected by large scale collapses of ecosystems. However, it looks like we are willing to do that experiment.

    Comment by Philippe Chantreau — 19 Aug 2007 @ 4:23 PM

  483. Timothy Chase (#322):

    Perhaps they also taught you in Statistics 101 not to assume a normal distribution in your errors.

    Such an assumption is especially unwarranted in the case of temperature readings, given that siting problems almost always will increase temperature readings compared to clean sites (e.g., UHI, asphalt, buildings; I have not yet heard of a thermometer placed 10 feet from a tank of outgassing liquid nitrogen, although such is possible).

    [Response: Think about this. If a tree grows, or a station is moved from the south side to the north side of a building, if you go from a city centre to an airport, if you 'do the right thing' and get rid of the asphalt etc... all of these will add a cooling artifact to the record. The assumption that all siting issues are of one sign is simply incorrect. - gavin]

    Comment by DWPittelli — 19 Aug 2007 @ 4:24 PM

  484. Matt, I think you’re missing Gavin’s point about the difference between statistical and physical modeling, so let’s take a simpler example: planetary orbits.

    If you were to sit out in the backyard with a good telescope and collect data every night on the positions of all the planets, after a while you’d have a dataset that you could then use to predict the future positions of the planets – no physics required at all. How long would you have to do this in order to get a comprehensive predictive dataset? (Hint – it relates to the longest orbital period… for example, Saturn takes 29.4 years to circle the sun…)

    On the other hand, you could use the physics developed by Kepler and Newton to mathematically predict the positions of the planets into the future – but could you make an absolute deterministic prediction? No, you could not – and see http://www.pnas.org/cgi/content/full/98/22/12342 for a discussion as to why not.
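
    For what it’s worth, here is a rough sketch (my own, in Python, with the usual textbook simplifications of a single planet and a fixed sun) of the physics-based alternative: integrate Newton’s law of gravitation forward from one snapshot of position and velocity, with no fitting to past observations at all.

    import math

    GM = 4.0 * math.pi ** 2      # sun's GM in AU^3/yr^2 (so a 1 AU circular orbit takes 1 yr)
    dt = 0.001                   # time step in years

    # Earth-like initial state: 1 AU from the sun, circular-orbit speed 2*pi AU/yr
    x, y = 1.0, 0.0
    vx, vy = 0.0, 2.0 * math.pi

    t = 0.0
    while t < 1.0:               # integrate forward one year
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -GM * x / r3, -GM * y / r3
        vx, vy = vx + ax * dt, vy + ay * dt   # semi-implicit Euler step
        x, y = x + vx * dt, y + vy * dt
        t += dt

    print(x, y)                  # back near (1, 0) after one orbit, as Kepler's laws predict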

    Now, let’s try and make the leap to the climate system. Just as one simple example of the statistical effort to look at climate, there’s a historical record known as “The Farmer’s Almanac”. This is a simple compilation by farmers of planting dates and crop yields – and in the past, when the climate was stable, this was actually a good guide for farmers. In today’s increasingly unstable climate, no one pays any attention to the old almanacs. This should in itself be a qualitative indication that the climate is becoming unstable – again, that’s a purely statistical argument.

    So, what about deterministic modeling of the climate system? There’ve been many, many discussions of this on RC, and perhaps you should go look at Learning from a simple model, as just one of many examples. Or see the links in #385 above.

    It all depends on what kind of questions you want to ask. For weather predictions, accuracy disappears within a few weeks – but ocean forecasts seem to retain accuracy on decadal timescales – and when you go to climate forcing effects, the timescale moves toward centuries, with the big uncertainties being ice sheet dynamics, changes in ocean circulation and the biosphere response.

    Thus, in climate science, both the statistical and the ‘deterministic’ methods are coming up with similar results. Most scientists would view that as pretty good confirmation of predictions made back in the 70s.

    Comment by Ike Solem — 19 Aug 2007 @ 5:15 PM

  485. Timothy Chase > But when one considers the droughts and famine which are virtually a given with the current business-as-usual approach…

    Timothy, your sources seem to be mostly news articles and interest groups. Where is the peer-reviewed science that indicates these dire predictions are likely? Even the IPCC admits that they do not know if the cost of mitigation is less than the cost of doing nothing.

    There is a similarity in ignoring real science in both the extreme alarmist and denialist arguments.

    Comment by Steve Reynolds — 19 Aug 2007 @ 5:55 PM

  486. Dodo,

    You are the one who seems to be guilty of cherry-picking here. Why did you choose
    that one graph in #362 to emphasize? Why not link to the whole collection at:

    http://data.giss.nasa.gov/gistemp/graphs ?

    Anyway, no one is trying to sweep problems under the carpet – with the possible
    exception of NOAA, whose remarkable decision to switch to a 1971-2000 baseline for
    their temperature anomaly calculations has been utterly ignored by press outlets.

    Here’s the background:
    “As far as the NOAA issue goes, the use of a baseline to calculate temperature
    anomalies relates to the issue of what is meant by ‘anomaly’. Now, in 2000 NOAA
    decided to start using the time period 1971-2000 as the baseline for calculating
    their anomalies, in contrast to the widely accepted use of the 1961-1990 time period
    for their baseline.

    The differences in the two anomalies are fairly dramatic; see for example

    using NOAA’s 1971-2000 baseline, summer 2006:
    http://www.emc.ncep.noaa.gov/research/cmb/sst_analysis/images/archive/monthly_anomaly/monanomv2_200606.png

    Using the 1961-1990 baseline: summer 2006
    http://www.bom.gov.au/bmrc/ocean/results/SST_anals/SSTA_20060625.gif

    Also, NOAA uses the 1971-2000 baseline for their 2005 Arctic Climate Report, but
    does not explicitly discuss this. Obviously, this gives the perception that warming in
    the Arctic is much less severe than it actually is.”

    So, why did NOAA do that? I’ve emailed them repeatedly asking for an explanation,
    with the constant response of “We’ll get back to you on that”. They’ve held
    workshops on this issue, and it turns out that NOAA data plays a different role than
    NASA-GISS data:

    “NOAA’s climate data and forecast products are key elements of the weather-risk
    market. The lack of bias in NOAA’s products and NOAA’s provision of equal access are
    of utmost importance in providing a “level playing field” for the weather-risk
    market. Most of the major participants in the weather-risk market employ specialists
    with an expertise in meteorology and climatology. Broadening the weather-risk market
    to a wider range of companies requires that NOAA’s climate data and forecasts evolve
    and become easier to understand and use.”

    Let’s walk through this: NOAA shifts to a 1971-2000 baseline in 2000 (they are
    claiming that this is the ‘normal’ period). If you buy weather risk insurance,
    deviations from ‘degree-day-normal’ conditions are used to calculate payouts. Thus,
    if more recent conditions are considered ‘normal’, that creates a) a perception that
    the warming is less than it is, and b) less financial risk for the weather insurance
    industry.
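
    For what it’s worth, a small Python sketch with made-up numbers (mine, purely illustrative) shows both halves of the argument: switching to the warmer 1971-2000 baseline lowers every reported anomaly by a constant, but the trend – the difference between years – is exactly the same either way.

    years = list(range(1961, 2007))
    temps = [14.0 + 0.015 * (y - 1961) for y in years]   # invented series warming at 0.015 C/yr

    def baseline(y0, y1):
        vals = [t for y, t in zip(years, temps) if y0 <= y <= y1]
        return sum(vals) / len(vals)

    anom_6190 = [t - baseline(1961, 1990) for t in temps]
    anom_7100 = [t - baseline(1971, 2000) for t in temps]

    print(anom_6190[-1] - anom_7100[-1])   # constant offset of 0.15 C, for every year
    print(anom_6190[-1] - anom_6190[0])    # 0.675 C rise over the record...
    print(anom_7100[-1] - anom_7100[0])    # ...identical with the other baseline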

    This is deliberate data manipulation by a government agency for very questionable
    purposes (and they still won’t go on record about their rationale for doing this!).
    Obviously, they should switch back to the generally accepted 1961-1990 baseline,
    shouldn’t they?

    Comment by Ike Solem — 19 Aug 2007 @ 6:05 PM

  487. This kind of spinning makes me gag.

    http://www.boston.com/news/globe/editorial_opinion/oped/articles/2007/08/19/warming_debate_scene_1_take_2/

    Comment by catman306 — 19 Aug 2007 @ 6:09 PM

  488. Philippe Chantreau> From one engineering solution to another, we progressively change this all planet into an exclusive support system for humans.

    While I think that is a little overstated, I understand your point.

    Please notice that I qualified with ‘If human life is your standard,…’. If human life is not your standard, then you probably can justify aggressive AGW mitigation.

    I’m defining ‘aggressive AGW mitigation’ as something more than a small (less than $20/tC) carbon tax or equivalent.

    Comment by Steve Reynolds — 19 Aug 2007 @ 6:22 PM

  489. US landmass is 2% of the total earth surface, about 8% of dry land, right?

    What percentage of GISS-cited temp gauges are in the US versus the rest of the world?

    How many temp gauges are sited in oceans? What percentage of the total?

    Comment by Dave — 19 Aug 2007 @ 7:33 PM

  490. Re #467 Steve Reynolds says:

    “While I will agree that sea level rise will eventually cause a lot of property damage, where is the evidence that AGW would cause more deaths than would be caused by mitigation efforts?

    That includes extending the time required for the people of developing nations to rise out of poverty, and the likely reduction of resources supplied from developed nations to help provide clean water and reduce disease.”

    I would be curious to know what commitments to these worthy efforts would be undermined by programs to mitigate CO2 emissions. Somehow, I doubt that commitments in those areas are likely to increase dramatically in the midst of the mounting problems created by climate change. I believe that a major source of funds for CO2 mitigation would come from resources otherwise committed to business-as-usual energy production.

    The cost of climate change in agriculturally productive areas, plus damage from sea level rise is likely to vastly exceed the cost of mitigation. Just a personal estimate, of course.

    Comment by Ron Taylor — 19 Aug 2007 @ 8:15 PM

  491. Steve Reynolds (#485) wrote:

    Timothy, your sources seem to be mostly news articles and interest groups. Where is the peer-reviewed science that indicates these dire predictions are likely? Even the IPCC admits that they do not know if the cost of mitigation is less than the cost of doing nothing.

    You state, “Even the IPCC admits that they do not know if the cost of mitigation is less than the cost of doing nothing.”

    So you stated in the 16 May 2007 thread A bit of philosophy, comment #30:

    I do not think there is any consensus that says that. Even the IPCC has doubts about cost/benefit of mitigation at any CO2 level:

    “Limited and early analytical results from integrated analyses of the costs and benefits of mitigation indicate that these are broadly comparable in magnitude, but do not as yet permit an unambiguous determination of an emissions pathway or stabilization level where benefits exceed costs [3.5].” (from SPM3)

    I responded in #57:

    I don’t believe they were denying the basics of marginal utility theory.

    Given some unit of resources (dollars, for example), one devotes it to where it is most needed, then with the next dollar you do the same with regard to the needs that are left. But at some point, the utility associated with satisfying a given need (in the descending order of climate change needs) will be less than that which the dollar might satisfy elsewhere for some other kind of need. They were not suggesting that there shouldn’t be any resources devoted to preventing climate change; otherwise it kind of defeats the whole purpose of issuing the report. What they were pointing out is that we aren’t exactly able to determine the precise point at which other needs become more pressing – per dollar of investment.
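
    A toy sketch of that reasoning (entirely my own numbers, in Python): allocate each successive dollar to whichever need currently has the highest marginal benefit, with diminishing returns on every need. Nothing gets all the money, and nothing gets none of it.

    needs = {"clean water": 10.0, "disease control": 8.0, "climate mitigation": 9.0}
    decay = {"clean water": 0.7, "disease control": 0.8, "climate mitigation": 0.95}

    marginal = dict(needs)               # current benefit of the next dollar for each need
    spending = {k: 0 for k in needs}

    for _ in range(100):                 # allocate 100 dollars, one at a time
        best = max(marginal, key=marginal.get)
        spending[best] += 1
        marginal[best] *= decay[best]    # diminishing returns

    print(spending)                      # all three needs end up funded

    The hard part, as the quoted passage says, is that in reality we do not know these curves precisely – only that they exist.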

    You state, “Timothy, your sources seem to be mostly news articles and interest groups. Where is the peer-reviewed science that indicates these dire predictions are likely?”

    Well, let’s look at drought.

    Here is a brochure from the Hadley Centre – a little more readable than some of the more technical papers, but it has references to the peer-reviewed papers if you decide you want to look them up.

    Effects of climate change in developing countries
    November 2006
    http://www.metoffice.gov.uk/research/hadleycentre/pubs/brochures/COP12.pdf

    It describes some of the recent drought conditions, compares observed drought and modeled drought conditions from 1950 (observed was roughly 20%) to 2000 (observed was roughly 30%), then makes projections based upon climate models and the business as usual SRES A2 scenario where roughly 50% of the world’s land will be experiencing drought by 2100 at any given time. But this is probably on the conservative side.

    Now let’s look at glaciers, since they are what feed many of the world’s major rivers – such as the six major rivers of China.

    Here is something from the beginning of a paper in Current Science:

    Global warming has resulted in large-scale retreat of glaciers throughout the world. This has led most glaciers in mountainous regions such as the Himalayas to recede substantially during the last century and has influenced the stream run-off of Himalayan rivers.

    Glacial retreat in Himalaya using Indian Remote Sensing satellite data
    Anil V. Kulkarni, et al
    CURRENT SCIENCE, VOL. 92, NO. 1, 10 JANUARY 2007, pg 69

    Now just limiting ourselves to what has gone on so far:

    The investigation has shown overall 21% reduction in glacial area from the middle of the last century. Mean area of glacial extent was reduced from 1.4 to 0.32 sq. km between 1962 and 2001. In addition, the number of glaciers has increased between 1962 and 2001; however, total areal extent has reduced. The number of glaciers has increased due to fragmentation. Numerous investigations in the past have suggested that glaciers are retreating as a response to global warming. As the glaciers are retreating, it was expected that tributary glaciers will detach from the main glacial body and glaciologically they will form independent glaciers. Systematic and meticulous glacial inventory of 1962 and 2001 have now clearly demonstrated that extent of fragmentation is much higher than realized earlier. This is likely to have a profound influence on sustainability of Himalayan glaciers.

    Glacial retreat in Himalaya using Indian Remote Sensing satellite data
    Anil V. Kulkarni, et al
    CURRENT SCIENCE, VOL. 92, NO. 1, 10 JANUARY 2007, pg 74

    To get a sense of what is happening worldwide, you can check the global glacier mass balance chart at the bottom of this page:

    SOTC: Glaciers
    http://nsidc.org/sotc/glacier_balance.html

    That is from the National Snow and Ice Data Center’s:

    State of the Cryosphere
    http://nsidc.org/sotc/

    You can also check here to see a map of glacier advance and retreat throughout the world:

    http://www.globalwarmingart.com/wiki/Image:Glacier_Mass_Balance_Map_png

    What little blue you see is where glaciers are advancing. All the yellow and brown represent retreat.

    Now remember, glaciers feed many of the world’s major rivers. Once the glaciers of the Tibetan Plateau are gone, the six major rivers running through China will dry up. Incidentally, if you are concerned only with climate change insofar as it affects US agriculture I can look that up, too.

    Steve Reynolds wrote:

    There is a similarity in ignoring real science in both the extreme alarmist and denialist arguments.

    I am referring to the science, and as far as I can tell you haven’t even made an effort to bring any of that to the table.

    Comment by Timothy Chase — 19 Aug 2007 @ 8:15 PM

  492. PS to #489

    I should mention that Steve Reynolds #485 was responding to #481. Somehow he only remembered to mention who he was responding to rather than what he was responding to. I thought I would help.

    … and now for something completely different:

    Well, not really.

    Here is an interesting paper from earlier this month on the Asian Brown Cloud. Locally it is supposed to amplify the greenhouse effect in the lower troposphere as much as greenhouse gases. Reducing the pollution will significantly reduce the rate at which glacier mass balance is lost in the Himalayas. Anyway, the bit I will quote indicates the importance of these glaciers to the fresh water supply of Asia.

    The above model results have significant implications because the observed air-temperature trend over the elevated Himalayas has accelerated to between 0.15–0.3 K per decade [18] during the past several decades. This large trend is thought to be the major cause for the Himalayan-Hindu-Kush glacier ablation. The Himalayan-Hindu-Kush region has seen a marked retreat in the glaciers that serve major Asian rivers such as the Yangtze, the Indus and the Ganges. The rapid melting of these glaciers, the third-largest ice mass on the planet, if it becomes widespread and continues for several more decades, will have unprecedented downstream effects on southern and eastern Asia.

    Warming trends in Asia amplified by brown cloud solar absorption
    Ramanathan, et al
    Nature 448, 575-578 (2 August 2007)
    http://intl.emboj.org/nature/journal/v448/n7153/full/nature06019.html
    (open access)

    However, there is a price of sorts – while locally the Asian Brown Cloud amplifies the greenhouse effect, globally it masks the greenhouse effect due to the aerosol-induced global dimming.

    Comment by Timothy Chase — 19 Aug 2007 @ 9:00 PM

  493. Belated response to Steve Bloom 394, snrjon 399: I was at the head of Tanquary Fiord, an area sheltered by two mountain ranges, and I think in recent years those temps are not unusual. Streams that were trickles 11 yrs ago were running high [uncrossable]. On the other hand, glacier toes were in the same position as they were on the ’75 topo. From the air every stream had good flow from permafrost melt; there is little snow as the Arctic is quite dry. The ice sheets looked desiccated, like ice cubes left in the freezer too long.

    Comment by ziff house — 19 Aug 2007 @ 9:50 PM

  494. DWPittelli (#483) wrote:

    Timothy Chase (#322):

    Perhaps they also taught you in Statistics 101 not to assume a normal distribution in your errors.

    Such an assumption is especially unwarranted in the case of temperature readings, given that siting problems almost always will increase temperature readings compared to clean sites (e.g., UHI, asphalt, buildings; I have not yet heard of a thermometer placed 10 feet from a tank of outgassing liquid nitrogen, although such is possible).

    I was speaking of random errors – and given the nature of the “error” which was discovered, this seemed quite appropriate, but you are speaking of a systematic bias on the part of a given site or sites.

    If you are speaking of a poorly installed site, this will not create a continually rising trend. If you are speaking of the urban heat island effect, we can use just rural sites and we get a virtually identical trend. If you are speaking of the installation of a barbecue or air conditioner right next to a site, this will produce a jump, not a continually rising trend.
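
    A toy illustration of that distinction (hypothetical numbers, mine): a one-off siting problem adds a fixed offset from the date it appears – a step, not a trend.

    flat_station = [15.0] * 30                       # a station with no real warming
    biased = [t + (0.5 if i >= 15 else 0.0)          # +0.5 C bias appears in year 15
              for i, t in enumerate(flat_station)]

    print(biased[14], biased[15])        # 15.0 -> 15.5: a single jump
    print(biased[-1] - biased[15])       # 0.0: no further rise after the jump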

    If you are thinking about urban growth, this is something which can be and is now tracked by satellite measurements of night lights. Likewise, we can perform satellite measurements of the lower troposphere which produce trends with virtually the same slope, although somewhat higher variability. Not much room for the Urban Heat Island effect there.

    *

    Given the nature of the world, conclusions are rarely guaranteed by any given set of evidence. But the more evidence one acquires and/or the more independent lines of inquiry which lead to the same conclusion (e.g., that the average global temperature is rising at a given rate), the more justification that conclusion receives.

    Yet there are those who think that if it is possible to cast some doubt upon any given piece of evidence, no conclusion can ever be justified. Such an approach is roughly comparable to the obsessive/compulsive disorder in which one has to keep checking to make sure that the door is locked because, even though one has already checked five times, it is at least possible that one is misremembering. If someone with such a disorder keeps this up, they will never leave the house.

    If your house is burning down, you better hope that the firemen who are getting ready to leave and put out that fire aren’t suffering from this disorder.

    I believe we have a fire to put out.

    PS

    Gavin does an inline to #483 which is worth checking out and deals with some other aspects, namely, whether errors in site temperature measurements will generally be positive.

    Comment by Timothy Chase — 19 Aug 2007 @ 10:41 PM

  495. Is anyone aware of a website somewhere hosting versions of the GHCN monthly mean data in CSV format, or at least in a format easily read into Excel etc.? Something like the format of the GISS data would be perfect.

    Comment by Dylan — 19 Aug 2007 @ 11:07 PM

  496. Timothy Chase (not using numbers because they change)> Here is a brochure from the Hadley Centre…

    I looked at your referenced PR material, but did not see much peer-reviewed support for your dire predictions. They did have projections of increased food production due to higher CO2 concentrations, though.

    For all the glacier concerns: I still have not seen a good reason why building dams is not a complete solution. It is what we already do where glaciers do not exist.

    Comment by Steve Reynolds — 20 Aug 2007 @ 12:06 AM

  497. Re 483. “[Response: Think about this. If a tree grows, or a station is moved from the south side to the north side of a building, if you go from a city centre to an airport, if you ‘do the right thing’ and get rid of the asphalt etc… all of these will add a cooling artifact to the record. The assumption that all siting issues are of one sign is simply incorrect. - gavin]”

    Isn’t this a misunderstanding? What Gavin describes are just measures for removing a warm bias from the microclimate, not adding a cooling artefact. A thermometer is supposed to be in the shade anyway, so trees and south side-north side issues will not cause artificial cooling in well-mixed air.

    Comment by Dodo — 20 Aug 2007 @ 1:42 AM

  498. Ike Solem: quote ocean forecasts seem to retain accuracy on decadal timescales unquote

    Do the ocean forecasts for the NH correctly display the late thirties/early forties temperature blip? Better still, do they post facto track the blip when it is not smeared by the Folland Parker adjustment?

    I see there’s an interesting model that uses SSTs as the driver for climate change — I’ve wondered why no-one uses lighthouse records (westerly facing lighthouses would give a fair proxy for SSTs) to clean up the SST record.

    JF

    Comment by Julian Flood — 20 Aug 2007 @ 5:06 AM

  499. [[ where is the evidence that AGW would cause more deaths than would be caused by mitigation efforts?]]

    Common sense?

    Comment by Barton Paul Levenson — 20 Aug 2007 @ 8:24 AM

  500. [[For all the glacier concerns: I still have not seen a good reason why building dams is not a complete solution. It is what we already do where glaciers do not exist.]]

    Less glacial runoff = less water to dam
    More drought = less water to dam
    Hotter, drier conditions = faster evaporation from dam reservoirs.

    Comment by Barton Paul Levenson — 20 Aug 2007 @ 8:56 AM

  501. [[Isn’t this a misunderstanding? What Gavin describes are just measures for removing a warm bias from the microclimate, not adding a cooling artefact. A thermometer is supposed to be in the shade anyway, so trees and south side-north side issues will not cause artificial cooling in well-mixed air.]]

    If the surroundings of a station are getting lighter in color with time it will induce a spurious cooling trend.

    Comment by Barton Paul Levenson — 20 Aug 2007 @ 8:57 AM

  502. #496 – Steve Reynolds – building dams is expensive, and not always a good idea in an earthquake zone.
    Plus, you need nice narrow valleys with good rock to anchor the foundations in, or else you need tens of millions of tons of rock and earth and concrete to make a large gravity dam. This would still fail to solve issues with evaporation.
    Suffice it to say, if dams were a really good idea for this problem, I would expect construction companies to be queueing up for work, and dam-building programs to be making their way onto the agenda. That they appear not to be suggests that dams will not solve this problem. If we have some engineers from India reading this, perhaps they will be able to comment.

    Otherwise, you could provide some evidence that dams can be built in the correct places, and that people are thinking of doing so.

    Comment by guthrie — 20 Aug 2007 @ 9:02 AM

  503. http://oyhus.no/AbsenceOfEvidence.html

    Comment by Hank Roberts — 20 Aug 2007 @ 9:48 AM

  504. Re: 496 and 502 Dams as a solution to a water shortfall

    Dam-building is not to be taken lightly – dams can have undesirable effects on people and their cities (displaced) and the environment (severely disturbed). The Three Gorges Dam is a great example of that:

    http://www.american.edu/ted/threedam.htm

    “However, social costs of resettlement and environmental damage
    are enormous. Environmental sustainability of the project in
    relation to massive resettlement and ecological damage is to be
    focused in this paper. Chinese officials estimate that the
    reservoir will partially or completely inundate 2 cities, 11
    counties, 140 towns, 326 townships, and 1351 villages. About 23800
    hectares, more than 1.1 million people will have to be resettled,
    accounting for about one third of the project’s cost. Many critics
    believe resettlement would fail and create reservoir refugees. The
    forced migration would raise social unrest. Many of the residents
    to be resettled are peasants. They would be forced to move from
    fertile farmland to much less desirable areas….
    The project will also cause devastating environmental damage,
    increasing the risk of earthquakes and landslides. It will also
    threaten the river’s wildlife. In addition to massive fish
    species, it will also affect endangered species, including the
    Yangtze dolphin, the Chinese Sturgeon, the Chinese Tiger, the
    Chinese Alligator, the Siberian Crane, and the Giant Panda.
    Moreover, silt trapped behind the dam will not only deprive
    downstream regions, but also will impede power generation from the
    back-up. Construction of the dam would require extensive logging
    in the area. Finally, the dam and the reservoir will destroy some
    of China’s finest scenery and an important source of tourism
    revenue.”

    Comment by Chuck Booth — 20 Aug 2007 @ 10:13 AM

  505. #478 James: This is at best a half truth, because GM has invested large amounts of money to a) persuade large segments of the buying public that it wants a particular sort of car; and b) establishing a design & manufacturing infrastructure to build the sort of car that they’ve persuaded their market to want.

    From an engineering perspective, understanding strong, lightweight structures is key to all automobiles, and you can bet the auto companies spend a fortune trying to make all cars (even monster SUVs) much lighter. That same technology—in fact all the non-engine technology—is directly applicable to hybrids and pure electric cars. The major pieces needed for both (batteries, brushless motors and controllers) will be purchased from suppliers.

    So the cost to switch from 1% hybrid/99% ICE to 99% hybrid-and-electric/1% ICE will be relatively small IF the public is willing to live with the tradeoffs.

    [edit]

    At some point in the very near future an electric car will offer more for less money, and at that point you can bet every car maker in the world will switch. But until those economics actually make sense, they won’t. You can chalk that up to black helicopters or simple market demands.

    [I see this same pattern at all levels, from individuals who seemingly can’t alter self-destructive behaviors on up to entire societies. Indeed, I expect this is at the root of a lot of denialism: the denialists have established a set of habits, and reject any information that suggests a need to change them.]

    Or perhaps those behaviors aren’t as destructive as you claim them to be???

    [Response: Please keep the rhetorical excesses to a minimum. There are plenty of places for that elsewhere on the web. - gavin]

    Comment by Matt — 20 Aug 2007 @ 10:26 AM

  506. Steve Reynolds wrote: “There is a similarity in ignoring real science in both the extreme alarmist and denialist arguments.”

    That is incorrect. There is no similarity whatsoever. Denial of the reality of anthropogenic global warming, or denial of the likely horrific consequences thereof, is entirely based on ignoring “real science”. Many of your comments here are a good example.

    On the other hand, “Alarm” is an entirely appropriate response to the “real science” of global warming. Numerous “real” scientists conducting “real” scientific research on global warming, whose results are published in “real” peer-reviewed journals, have been quoted in interviews as characterizing their own research as “alarming”.

    Comment by SecularAnimist — 20 Aug 2007 @ 10:26 AM

  507. #487 Catman306: This kind of spinning makes me gag

    Which aspect of the article bugged you the most? I’ll note that the “science is settled” argument evokes the same response in me.

    [Response: I'm curious. Where have you seen this claim made here? I think it more likely that you are falling for a classic false dichotomy: the false idea there are simply two classes of scientific knowledge, 'settled' versus 'unsettled'. No scientist thinks this way, and no statements here or in the IPCC reports can be read this way. The only people using this phrase appear to be politicians trying to spin any uncertainty into total ignorance. I would advise against arguing with caricatures. - gavin]

    Comment by Matt — 20 Aug 2007 @ 10:30 AM

  508. Steve Reynolds wrote that mitigating anthropogenic global warming will lead to “… extending the time required for the people of developing nations to rise out of poverty, and the likely reduction of resources supplied from developed nations to help provide clean water and reduce disease.”

    Where is the evidence that mitigating global warming will have either of these outcomes? I have never seen any such evidence, only this talking point repeated over and over.

    On the other hand, several other commenters have already posted links to real science that strongly indicates that unmitigated anthropogenic warming will exacerbate the poverty of developing nations, decimate existing fresh water supplies and increase disease. And several international agencies, including those working to advance the Millennium Development Goals of reducing poverty worldwide, have opined that unmitigated global warming threatens to undermine this agenda and negate any advances in reducing poverty and promoting well-being in the developing world.

    Comment by SecularAnimist — 20 Aug 2007 @ 10:34 AM

  509. Too late for the Yangtze dolphin. It’s “officially” extinct.

    http://www.guardian.co.uk/environment/2007/aug/08/endangeredspecies.conservation

    Comment by wildlifer — 20 Aug 2007 @ 10:57 AM

  510. Gavin has a good sense of things when he writes:

    “the false idea there are simply two classes of scientific knowledge, ’settled’ versus ‘unsettled’. No scientist thinks this way, and no statements here or in the IPCC reports can be read this way. The only people using this phrase appear to be politicians trying to spin any uncertainty into total ignorance. I would advise against arguing with caricatures. – gavin”

    Agreed, spinning any uncertainty into ignorance is rhetoric. As is spinning any hunch into QED. A while back, I believe there was a well-deserved pummeling of Beck here. The issue of Gore’s accuracy came up – precisely, claims made in AIT. Given the current reordering of the hottest-year batting order, do you care to comment on Gore’s accuracy? For the record.

    [edit]

    [Response: Gore's statement was that nine of the ten warmest years globally occurred since 1995. This is true in both GISS indices and I think is also true in the NOAA data and CRU data. So that's pretty accurate. - gavin]

    Comment by steven mosher — 20 Aug 2007 @ 11:16 AM

  511. Re 491 – Timothy, you keep referring to the loss of glaciers on the Tibetan Plateau causing a reduction in flow of six major rivers in China. Actually, the situation is more serious than you indicate. The six rivers are vital sources of fresh water, not just for China, but also for India, Bangladesh, Pakistan, and much of Southeast Asia. The rivers are the Yangtze and Yellow in China, the Indus in Pakistan, the Ganges and Brahmaputra in India and Bangladesh, and the Mekong in Southeast Asia. The scope of the potential problem would appear to be much greater than you are indicating.

    Comment by Ron Taylor — 20 Aug 2007 @ 11:42 AM

  512. On the reasoning that NOT to mitigate GW is good to developing nations, brought up especially by Steve Reynolds.

    Here in Brazil I hardly see an advantage to further warming. Our economy is largely dependent on agriculture, and climate change is already a cause of concern, as traditional farmland is getting less rain than before. Permanent rivers have stopped flowing in an especially harsh dry season. We have started having hurricanes. Dry spells are happening in the Amazon, suggesting some major change ahead. Coffee-bean cultivation is slowly being pushed southwards, as its flowers don’t tolerate temperatures over 35ºC.

    I have seen people advocate less GW mitigation on behalf of poor countries. Please don’t do it. It’s either naïve or plain mean.

    Comment by Alexandre — 20 Aug 2007 @ 11:49 AM

  513. Mr Chase, as far as I can tell the argument still stands: “The cost of mitigation may not be less than the cost of doing nothing.” If humanity were to vanish from the face of the earth tomorrow we would still see warming, partly because of the long-term effects of the CO2 we have already added to the environment. Your cost estimates should not be warming vs no warming, but warming vs slightly more warming.

    Comment by Michael — 20 Aug 2007 @ 11:56 AM

  514. Re # 477 Steve Reynolds: “Most of your other concerns are speculation, with no consensus that they are likely to be severe (or even exist for some, such as malaria).”

    Researchers who study this seem to think it very definitely is a problem, a serious one:

    M. Pascual, J. A. Ahumada, L. F. Chaves, X. Rodó, and M. Bouma (2006) Malaria resurgence in the East African highlands: Temperature trends revisited.
    PNAS | April 11, 2006 | vol. 103 | no. 15 | 5829-5834
    http://www.pnas.org/cgi/content/full/103/15/5829 (Open Access)

    See also the Commentary on this article in the same issue:
    Malaria risk and temperature: Influences from global climate change and local land use practices
    Jonathan A. Patz*, and Sarah H. Olson. PNAS | April 11, 2006 | vol. 103 | no. 15 | 5635-5636
    http://www.pnas.org/cgi/content/full/103/15/5635 (Open Access)

    Perhaps Steve Reynolds knows something the researchers don’t?

    Comment by Chuck Booth — 20 Aug 2007 @ 12:01 PM

  515. Alexandre:
    The data suggests that Brazil is not warming, unless you live in one of the major Brazilian conurbations. On the other hand massive changes in land use will certainly impact regional climates and regional rainfall.

    Comment by bjc — 20 Aug 2007 @ 12:07 PM

  516. Has anyone asked the many other big climate modeling projects whether they were using the data with which this newly discovered problem was reported?

    Comment by Hank Roberts — 20 Aug 2007 @ 12:16 PM

  517. Alexandre, there is no sincerity in the “poor country” anti-mitigation argument. It is simply another rhetorical talking point. It’s interesting to note that visible media figures have latched on to that argument while also arguing against increasing aid to poor countries. Furthermore, all the problems deemed more urgent than GW will be made worse by it. The ultimate farce is to argue that increased CO2 will improve yields without considering rain patterns (whose importance you’re aware of in Brazil), biological agents, or, as you mentioned, plain temperature. I’m sure that coffee would grow fine with high CO2 concentrations, so long as the temp remains below 35, conditions easily achieved in the lab.

    Comment by Philippe Chantreau — 20 Aug 2007 @ 12:33 PM

  518. bjc #513
    Could you please post the data you mention? That interests me even if I’m not a scientist. I live in a small town some 200 km north of Sao Paulo, and changes in the frequency of frosts and the duration of winter cold are noticeable compared to, say, 20 years ago. Agricultural research institutes and the textile industry, for example, are already reacting to those changes.

    And the initial reasoning remains: policies of deliberate slow mitigation of GW are hardly in the interest of poor, tropical countries.

    Comment by Alexandre — 20 Aug 2007 @ 12:37 PM

  519. Re 516: I was under the following impression: GCMs use physics only and are adjusted for better understanding of physical phenomena or inclusion of new physical parameters; temps are used as an indicator of how well the model performs. Considering how small the change is (as very well shown by Tamino on Open Mind), I would not expect change to be warranted. Am I mistaken?

    Comment by Philippe Chantreau — 20 Aug 2007 @ 12:40 PM

  520. You know, I wonder if the denialists on this thread such as Matt, Dodo or Steve Reynolds would care to comment on the following topic (which actually relates to the original topic of the post):

    NOAA’s decision to switch to a 1971-2000 baseline in 2000. Isn’t this a deliberate manipulation of data, which results in far lower temperature anomalies being reported by NOAA as ‘data’? Isn’t this a much larger (and deliberate) distortion of data than the original subject of this paper?

    No comments?

    [Response: I don't think so. First off, the baselines don't have any effect on the trends, and the trends are the key for climate change issues. Secondly, NOAA has more constituencies than just climate scientists. People need info on climatology for all sorts of purposes (insurance, engineering etc.) and it makes sense to provide up to date numbers. This is a bit of a red herring.... - gavin]

    Comment by Ike Solem — 20 Aug 2007 @ 12:42 PM

  521. Re: 511+512 (et al)

    Climate change is bad for agriculture. Farmers plant this year what grew best last year. They have to. There’s no other metric. If temperatures and rainfall flop around, yields suffer. Greater energy in the atmosphere will produce greater variability, simply by definition. Even a zero-sum game — no change in global means for temps and rainfall — would reduce yields due to increased variability in regional figures.

    Not for nothing is agriculture the image of stability. Undermine stability and Katy bar the door.

    Comment by Jeffrey Davis — 20 Aug 2007 @ 1:05 PM

  522. Ike, you’ve been sparring with denialists so much, you’re starting to think like them (yikes!). However, I have no doubt that anything comparable to what you mention but going the “other way” would set the denialist blogosphere ablaze…

    Comment by Philippe Chantreau — 20 Aug 2007 @ 1:08 PM

  523. Re #505: I think you missed my point completely. For instance, you say:

    [From an engineering perspective understanding, strong, lightweight structures is key to all automobiles and you can bet the auto companies spend a fortune trying to make all cars (even monster SUVs) much lighter.]

    Which I suppose is true as far as it goes (especially if you factor in the cost of lighter materials), but misses the obvious: the easiest way to make a car lighter is to make it smaller. That’s where the advertising/manufacturing cycle comes into play. So consider your following point:

    [At some point in the very near future an electric car will offer more for less money, and at that point you can bet every car maker in the world will switch.]

    That might well turn out to be correct, but I would bet that if it does you will still have US automakers turning out oversized electric SUVs that get 15 miles/kWh, while the Japanese build smaller vehicles that get 35, and it’s perfectly possible to build something that gets 70.

    The point is that there are two possible ways to reduce CO2 emissions. There’s the electric SUV approach, which uses as much or more energy than at present, but gets it from less CO2-intensive sources such as the electric grid. Then there’s the “drive a smaller car less”, which reduces the actual amount of energy used. A lot of marketing effort goes into creating & reinforcing public attitudes that favor the first, and denigrate the second.

    Comment by James — 20 Aug 2007 @ 2:03 PM

  524. Steve Reynolds,
    While I wholeheartedly support the position that we need to be very careful not to make things worse by our mitigation efforts, I think your reasoning is flawed.
    First, there has been plenty of good research done on potential risks due to climate change. Here’s a webpage from EPA:
    http://www.epa.gov/climatechange/effects/health.html

    Now, certainly whether these effects will occur is uncertain, but the proper way to deal with this in risk mitigation is to scale effort by the potential cost of the adverse event multiplied by its probability. And indeed if there are a range of costs with different probabilities, you integrate over the probability vs. cost distribution. DOD has carried out similar studies, and there have been peer-reviewed studies of various aspects (e.g. crop yields, extinction, biodiversity, etc.).
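    In symbols, the effort is scaled by the expected cost: sum (or integrate) cost times probability over the range of outcomes. A minimal numerical sketch in Python; the damage curve and probability weights below are placeholders for illustration, not estimates from any study:

    import numpy as np

    # Hypothetical warming outcomes (deg C) and a made-up probability distribution over them
    warming = np.linspace(1.0, 6.0, 51)
    prob = np.exp(-0.5 * ((warming - 3.0) / 1.0) ** 2)
    prob /= prob.sum()                   # normalize so the weights sum to 1

    cost = 0.5 * warming ** 2            # placeholder damage function, convex in warming

    expected_cost = np.sum(cost * prob)  # discrete version of integrating cost over probability
    print(expected_cost)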
    Given that all the infrastructure of civilization has evolved during the past 10000 years of relative climatic stability, it is not unreasonable to assume that significant changes will impact that infrastructure adversely.
    However, even if there are benefits, we will be most likely to capitalize on them if climate change occurs at a pace that can be managed by a market economy rather than a command economy.
    Another problem I see in your reasoning is that you assume that it is either climate change mitigation OR development. This is a false dichotomy. Development will occur–third world nations will not ask our permission. India and China certainly did not. Development and climate change are not alternative agendas–rather they are two sides of the same coin–sustainable development. Somehow we have to develop economies that can grow without trashing the environment.

    Comment by Ray Ladbury — 20 Aug 2007 @ 2:06 PM

  525. If the error was only in data after 2000, why did the temperatures pre-2000 change? E.g., why did the relative rankings of 1934 & 1998 change?

    Comment by WVL — 20 Aug 2007 @ 2:08 PM

  526. Gavin, I find it troubling that you are showing resistance to weather station ‘investigations’. Do you have a good grasp on the quality of the US network data? Worldwide data? Shouldn’t you be leading the charge on these station audits (or at least involved or supportive) since you use this data in your models?

    [Response: Resistance is useless.... where have I heard that before? Seriously though, I haven't expressed any resistance to their efforts. I have expressed a great deal of scepticism about whether they will achieve anything, but it's a free country and people can go around photographing things if they want. And for the umpteenth time, the data is not 'used in models'. -gavin]

    Comment by Michael — 20 Aug 2007 @ 3:09 PM

  527. Steve Reynolds (#496) wrote:

    Timothy Chase (not using numbers because they change)> Here is a brochure from the Hadley Centre… (#491)

    I looked at your referenced PR material, but did not see much supported by peer-reviewed studies supporting your dire predictions. They did have projections of increased food production due to higher CO2 concentrations, though.

    For all the glacier concerns: I still have not seen a good reason why building dams is not a complete solution. It is what we already do where glaciers do not exist.

    You are correct with regard to India and China. Greater precipitation implies greater biomass for rice and wheat, although this will be largely offset by decreased nutritional value.

    Dams – we can probably look into that in greater depth later.

    They will have increased harvests as the result of increased precipitation according to the Hadley projections. However, the point remains that according to the very same projections, 50% of the world will be experiencing drought at any given time. Additionally, the nutritional value of rice (according to FACE open air experiments) will be diminished by nearly the same degree as biomass increases by 2050. If nutritional value were to decline by an equal amount, there would be no net benefit.

    Please see:

    Rising carbon dioxide could make crops less nutritious
    Jia Hepeng
    4 March 2005
    http://www.scidev.net/News/index.cfm?fuseaction=readNews&itemid=1969&language=1
    (Sorry – no technical article found as of yet.)

    We also need to keep in mind the timing of the rainy season even where precipitation increases, and this will adversely affect many crops…

    El Nino events typically lead to delayed rainfall and decreased rice planting in Indonesia’s main rice-growing regions, thus prolonging the hungry season and increasing the risk of annual rice deficits. Here we use a risk assessment framework to examine the potential impact of El Nino events and natural variability on rice agriculture in 2050 under conditions of climate change, with a focus on two main rice-producing areas: Java and Bali. We select a 30-day delay in monsoon onset as a threshold beyond which significant impact on the country’s rice economy is likely to occur. To project the future probability of monsoon delay and changes in the annual cycle of rainfall, we use output from the Intergovernmental Panel on Climate Change AR4 suite of climate models, forced by increasing greenhouse gases, and scale it to the regional level by using empirical downscaling models. Our results reveal a marked increase in the probability of a 30-day delay in monsoon onset in 2050, as a result of changes in the mean climate, from 9-18% today (depending on the region) to 30-40% at the upper tail of the distribution. Predictions of the annual cycle of precipitation suggest an increase in precipitation later in the crop year (April-June) of ~10% but a substantial decrease (up to 75% at the tail) in precipitation later in the dry season (July-September). These results indicate a need for adaptation strategies in Indonesian rice agriculture, including increased investments in water storage, drought-tolerant crops, crop diversification, and early warning systems.
    (pg. 7752)

    Research Institute in the Philippines suggest that rice yields are closely linked to mean minimum temperatures during the dry season; for every 1 C increase in the minimum temperature, rice yields decrease by 10% (24). At a global scale, increased CO2 concentrations could partially offset expected yield declines caused by lower soil moisture and higher temperature, but recent models suggest a significantly smaller fertilization effect from CO2 than previously predicted (25). Global models that combine precipitation, temperature, and CO2 effects for the A2 scenario generally show reduced yields in the tropics and increased yields in temperate zones (26). (ibid, pg 7756)

    Assessing risks of climate variability and climate change for Indonesian rice agriculture
    Rosamond L. Naylor, et al
    PNAS | May 8, 2007 | vol. 104 | no. 19 | 7752-7757
    http://www.pnas.org/cgi/content/abstract/104/19/7752

    Now I will of course include increased precipitation and crop biomass in India and China whenever revisiting these topics in the future. However, I am curious whether you will do the same with respect to the diminished nutritional value of these crops, the increased global prevalence and severity of droughts, diminished agricultural output in Indonesia, etc., or, as is suggested by your response, do you intend to “accentuate the positive” with regard to climate change by omitting the costs?

    Comment by Timothy Chase — 20 Aug 2007 @ 3:40 PM

  528. I understood you, J.S. I was just making the point that I’ve seen both “sides” hit the rhetoric machine at times. I’m not commenting on who does it more; it probably depends on where you are and who else is there. Like this disagreement over whether output matching between two methods is the same thing as a code review, or whether one approach is “better” than the other. Just trying to be neutral. In fact, I’d like to see both done. Even if I had the skills and the time to do both, one of them I couldn’t do. Gavin obviously believes one is worthless to do. McIntyre obviously believes one is very worthwhile to do. I don’t know. And I don’t really even care. Audit it. Don’t. Whatever.

    Lawrence Brown, you mentioned original work. An audit of “the code” would be original work, wouldn’t it? The results and the output could even be useful for doing the adjustments in the future. Like I said, why not do both? He’s willing to do one of them.

    Some here are saying replicate the adjustments, then if there are problems, dig further. If McIntyre is willing to go through the code itself, which seems a much more difficult thing to do, then if all is okay there’s nothing else to do. That argument would be, why worry about the intermediate step? Especially when his goal isn’t to verify the output, it’s to verify the code itself and understand exactly how it’s doing what it’s doing? Hasn’t anyone ever wanted to take apart a car to see how it’s put together (rather than how to build one)?

    Sheesh, let him spend the time even if it’s a waste of it. I don’t see why everyone cares so much one way or the other.

    I find it interesting, the idea that if a scientist works for or is funded by (ExxonMobil, ChevronTexaco, Shell) then they are corrupted. Let’s see, what would happen if they couldn’t get any scientists. No exploration for more sources, no drilling, no refining, no quality control… I don’t know about you, but I like having gasoline available, and having it be less than a million dollars an ounce. Plus it’s quick to stop into the store for beer and cigarettes. Come on, they’re not “fighting” or “promoting” global warming, they’re spending their time finding, getting, processing and delivering gasoline. (And of course, governments make more money on it than any of the companies do, either individually or in total.) I will agree that if they’re doing anything, they’re doing PR. But as I started out, nobody has a lock on rhetoric, and nobody has a lock on PR, either.

    I don’t trust or distrust anyone based upon where they get their funding or not, who they work for or don’t, or anything else. I find this interesting on a level of the discussion and the ideas.

    Comment by Hal P. Jones — 20 Aug 2007 @ 3:44 PM

  529. Ron Taylor (#511) wrote:

    Re 491 – Timothy, you keep referring to the loss of glaciers on the Tibetan plateau causing a reduction in flow of six major rivers in China. Actually, the situation is more serious than you indicate. The six rivers are vital sources of fresh water, not just for China, but also for India, Bangladesh, Pakistan, and much of Southeast Asia. The rivers are the Yangtze and Yellow in China, the Indus in Pakistan, the Ganges and Brahmaputra in India and Bangladesh, and the Mekong in Southeast Asia. The scope of the potential problem would appear to be much greater than you are indicating.

    I genuinely appreciate the correction. As you can see, I am still having problems just keeping up with the projected changes in precipitation.

    Thank you.

    Comment by Timothy Chase — 20 Aug 2007 @ 3:48 PM

  530. Prior to this incident, I think that I would have been with the “release the code and let them spend their time deciphering it if that’s what they want to do” faction. But we’ve just seen a huge media circus over a tiny correction that doesn’t affect anybody’s conclusions.

    It is pretty much a certainty that a program of any length is going to contain some bugs, most of them trivial, as well as some statistical and mathematical issues that somebody might quibble with.

    If people are prepared to point to a statistically insignificant correction as undermining all conclusions, they will go nuts over any kind of actual bug, whether or not it affects the output.

    Not to mention the inevitable ejaculations of “I’m a software engineer, and this is the worst code I’ve ever seen! How can anybody trust anything from these guys?”

    So I think your strategy is the right one. If you release the code, you’ll be spending your time in endless arguments about such things as whether rounding errors in the least significant digit can build up. Describe the algorithms in English, and if anybody thinks that you screwed them up badly enough to make a difference, let them recalculate the averages and show that they get meaningfully different numbers.

    Comment by trrll — 20 Aug 2007 @ 4:09 PM

  531. Re 520. Thanks Ike, for bringing up the concept of climate, for a change. For practical reasons, it is defined as a collection of meteorological measurements, their averages and fluctuations over a period of 30 years. So if we look at the warming between 1976 and 1998 (no significant warming since), we don’t even have a time series long enough to call a climate.

    But as we have all seen, “global climate” has different rules than our classical climates, with which we defined Koeppen’s zones. So we can pick any period of weather data and read all kinds of climate messages from it. And everybody (I hope) gets to play the game: skeptics are happy to point out that global warming stopped at the end of 2001.

    Comment by Dodo — 20 Aug 2007 @ 4:18 PM

  532. PS to 527, response to Steve Reynolds (#496)

    This might have some relevance regarding Hadley projections of greater harvests in India and China with CO2 fertilization and increased precipitation in those areas…

    The presumed benefits of CO2 enrichment have been overestimated as the result of the use of enclosure studies. Free-air concentration enrichment demonstrates a yield enhancement of approximately half of what was projected by enclosure studies – results which had yet (as of 2006) to be incorporated into the models.

    Model projections suggest that although increased temperature and decreased soil moisture will act to reduce global crop yields by 2050, the direct fertilization effect of rising carbon dioxide concentration ([CO2]) will offset these losses. The CO2 fertilization factors used in models to project future yields were derived from enclosure studies conducted approximately 20 years ago. Free-air concentration enrichment (FACE) technology has now facilitated large-scale trials of the major grain crops at elevated [CO2] under fully open-air field conditions. In those trials, elevated [CO2] enhanced yield by ~50% less than in enclosure studies. This casts serious doubt on projections that rising [CO2] will fully offset losses due to climate change.
    (pg 1918)

    The FACE experiments clearly show that much lower CO2 fertilization factors should be used in model projections of future yields; however, the present experiments are limited in the range of growing conditions that they cover. Scientists have not investigated the interactive effects of simultaneous change in [CO2], [O3], temperature, and soil moisture. Technological advances suggest that large-scale open-air facilities to investigate these interactions over controlled gradients of variation are now possible. Although we have projected results to 2050, this may be too far in the future to spur commercial R&D, but it must not be seen as too distant to discourage R&D in the public sector, given the long lead times that may be needed to avoid global food shortage.
    (pg 1921)

    Food for Thought: Lower-Than-Expected Crop Yield Stimulation with Rising CO2 Concentrations
    Stephen P. Long
    30 JUNE 2006 VOL 312 SCIENCE 1918-1921

    Comment by Timothy Chase — 20 Aug 2007 @ 4:19 PM

  533. RE Gavin’s response: Okay, that does make some sense… though I would think that anyone buying weather insurance would prefer to use the 1961-1990 baseline for calculation of insurance payouts, while anyone selling weather insurance would certainly prefer the 1971-2000 baseline. If the argument is that the decision to switch baselines was based on economic reasons – that I can agree with. What seems objectionable is NOAA’s use of this baseline in their (purely?) scientific report on The State of the Arctic. That little anomaly chart on the cover is based on the 1971-2000 baseline, isn’t it? What would it look like with the 1961-1990 baseline?

    RE NOAA and our upcoming hurricane season:
    I was just perusing NOAA’s website and they are still claiming that the main driving force behind this season’s predicted greater-than-normal hurricane activity is the Atlantic Multidecadal Oscillation…

    http://www.noaanews.noaa.gov/stories2007/s2905.htm

    “The climate patterns responsible for the expected above-normal 2007 hurricane season continue to be the ongoing multi-decadal signal (the set of oceanic and atmospheric conditions that have spawned increased Atlantic hurricane activity since 1995), warmer-than-normal sea surface temperatures in key areas of the Atlantic Ocean and Caribbean Sea, and the El Nino/La Nina cycle”

    Regarding El Nino, here’s the recent update from Australia: http://www.bom.gov.au/climate/enso/
    “The past three or four weeks has seen a gradual strengthening of La Nina indicators: the near-equatorial Pacific has cooled both on and below the surface, the Trade Winds have been mostly stronger than normal and cloudiness has been lower than average over much of the tropical Pacific. However, it’s too early to tell if these signs are the beginnings of a sustained trend, especially when considered in the context of the fluctuating ENSO indicators that were apparent between May and mid-July. So the chance of a La Nina developing is still probably about 50:50, although it’s difficult to make this kind of assessment with high precision.”

    50:50? Hardly a definitive cause.

    There are indeed warmer-than-normal sea surface temperatures in the Atlantic basin, but NOAA appears unable to ascribe that to anything – are they still banned from using the phrase “global warming”?

    As far as the AMO, the notion that NOAA is promoting here is that since 1995 the ‘conveyor belt’ has sped up, bringing warm water further north. Well, the measurements should reflect that theory…

    See John A. Church’s Perspective in Science, 17 Aug 2007 : Oceans: A Change in Circulation? It appears based on ship-borne measurements that northward heat transport has decreased by 20% in the North Atlantic. The main point is that there is such poor historical data that it’s hard to be clear about what’s been going on, and only now is it even possible to estimate the annual variability… but nowhere does there seem to be any real factual support for the AMO theory that NOAA has repeatedly put forward ( see http://www.magazine.noaa.gov/stories/mag184.htm and RC guest commentary 2006).

    RE #522
    Am I ‘thinking like a denialist’? I’m not quite sure what that would entail… overemphasizing certain minor issues while ignoring everything else? I’d be very happy to see evidence for decreased climate sensitivity and a slower pace of global warming – that would mean we have extra time to get off fossil fuels – but that just doesn’t seem to be the case – rather, the opposite seems to be true.

    Also, the political efforts to silence (some) NOAA scientists are pretty well-documented, aren’t they? NOAA is supposed to be a reliable source for journalists and the public… and they’re failing that mandate.

    Comment by Ike Solem — 20 Aug 2007 @ 4:48 PM

  534. Regarding the oft-mentioned “CO2 fertilization”:

    The growth of plants in most of the world is limited by many factors other than the CO2 concentration in the atmosphere. The most important of such factors are water availability, soil fertility, amount of sunlight (for example, many rainforests are growth-limited by cloudy skies), and temperature.

    So in most places you can add as much CO2 as you want and the plants won’t grow one iota faster because their growth is limited by one or more of the other factors. Plants will only grow faster in a CO2 enriched atmosphere if they have all the water, soil nutrients, and sunlight that they want, plus the right temperatures. The only possible exception to this is water availability, but this is just my own speculation. Plants lose a lot of water by evaporation when they open their stomata (pores in their leaves) to absorb CO2 (a small percentage of plant species, especially cacti, have particular mechanisms to drastically reduce this problem). So increased CO2 will allow them to open the stomata for a shorter amount of time and thus lose less water. This could allow for extra growth, but only if there are enough soil nutrients, light, and the temperature is right.
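    This is essentially Liebig's law of the minimum: growth tracks the scarcest input, so extra CO2 only shows up as extra growth when CO2 is actually the binding constraint. A toy sketch of that logic in Python; the factor names and numbers are invented purely for illustration:

    def growth_index(co2, water, nutrients, light):
        # Each argument is the fraction of the plant's requirement that is met (0..1);
        # growth is capped by whichever factor is most limiting.
        return min(co2, water, nutrients, light)

    # Water-limited site: raising CO2 availability changes nothing
    print(growth_index(co2=0.9, water=0.4, nutrients=0.8, light=0.9))   # 0.4
    print(growth_index(co2=1.0, water=0.4, nutrients=0.8, light=0.9))   # still 0.4

    # Only when nothing else is limiting does extra CO2 translate into extra growth
    print(growth_index(co2=0.9, water=1.0, nutrients=1.0, light=1.0))   # 0.9
    print(growth_index(co2=1.0, water=1.0, nutrients=1.0, light=1.0))   # 1.0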

    See also my post #388 that talks about recent findings of stunted growth in tropical forests where the temperature has increased by just one degree C. Many crops would also be stunted by higher temperatures, and by large shifts in precipitation.

    Here is an excerpt from a USDA publication that looked at the effect of temperature on plant photosynthesis:
    (http://www.ars.usda.gov/research/publications/publications.htm?SEQ_NO_115=156279)

    “High temperatures often inhibit plant growth, with photosynthesis considered among the most sensitive plant functions to high temperature. Most temperate C3 plants exhibit a broad photosynthetic temperature optimum between 20° and 35°C with peak CO2 assimilation often occurring near 30°C. It is well understood that increasing leaf temperatures beyond this range reduces photosynthetic efficiency by stimulating photorespiration.”

    This web site has a nice, short introduction to photosynthesis:
    http://wc.pima.edu/~bfiero/tucsonecology/plants/plants_photosynthesis.htm

    Check also the Wikipedia entry on photosynthesis.

    So it is much more likely that a warming planet will see reduced crop yields, rather than any potential benefit from “CO2 fertilization”. As the paper cited above says, the optimum temperature for most temperate plants is 20 to 35 degC, so you can imagine what would happen to agriculture once temperatures in many places of the world climb into the mid/upper 40s in the summer for longer periods of time. Also, if temperature stress increases the rate of photorespiration (see Wikipedia) the stressed vegetation becomes a net source of CO2. This is a pretty nasty positive feedback!!!

    Comment by Rafael Gomez-Sjoberg — 20 Aug 2007 @ 5:37 PM

  535. RE#533

    I’m afraid I need to eat a little crow here… after looking over NOAA datasets it turns out that the CRUTEM2v dataset that the State of the Arctic report is based on does indeed use the 1961-1990 baseline – it’s only the ‘degree-day normals’ for the continental US that use the 1971-2000 baseline. Mea culpa. The rest should be accurate.

    Comment by Ike Solem — 20 Aug 2007 @ 5:43 PM

    Re Ike 533: I was just (jokingly) referring to the “conspiratorial” NOAA idea (although you have a point as to what they’re permitted to say). Denialists love conspiracies and spot green helicopters (running on biodiesel, I’m sure) as easily as they find gazillions in funding for “alarmist climatologists.”

    Comment by Philippe Chantreau — 20 Aug 2007 @ 6:54 PM

  537. bjc & Brazilians — My understanding is that the ITCZ has been moving northwards. The last time this occurred, AFAIK, was about 40,000 years ago. At that time, almost the entire Amazon basin was savanna. A Dr. Jose Mendoza predicts that the Amazon basin will become a warm, dry savannah once again…

    Comment by David B. Benson — 20 Aug 2007 @ 7:39 PM

  538. #523 James: The point is that there are two possible ways to reduce CO2 emissions. There’s the electric SUV approach, which uses as much or more energy than at present, but gets it from less CO2-intensive sources such as the electric grid. Then there’s the “drive a smaller car less”, which reduces the actual amount of energy used. A lot of marketing effort goes into creating & reinforcing public attitudes that favor the first, and denigrate the second.

    The “drive a smaller car” mandate is an example of you looking at your own life and deciding everyone should be just like you, isn’t it? And why single out the SUV? Why not go after anyone with a home air conditioner? Why not go after those that take airplane flights for non-essential purposes? Where would you draw the line?

    Comment by Matt — 20 Aug 2007 @ 8:09 PM

  539. Gavin in #507: Response: I’m curious. Where have you seen this claim made here? I think it more likely that you are falling for a classic false dichotomy: the false idea there are simply two classes of scientific knowledge, ’settled’ versus ‘unsettled’.

    I don’t hear the “SiS” (“science is settled”) claim from most on this board, but there are a few that really want to embrace the SiS and move on to talking about just how bad the countless disasters will be. And I think they believe those disasters are a certain thing.

    But the original poster was referring to the popular media, and I was merely noting that while the article he cited really made him ill, I thought it reasonable, and that it is the articles and statements from the popular media pushing the “SiS” line that rub me the wrong way.

    I heard an interview with Laurie David on NPR in which anyone that called with the slightest disagreement was bashed over the head with the “SiS–let’s move on” argument.

    I’m glad you asked, Gavin. It makes me feel a bit better about the debate as a whole.

    Comment by Matt — 20 Aug 2007 @ 8:17 PM

  540. To add to what David Benson says:

    The transition of the Amazon from rainforest to savannah will involve huge fires that will release enormous amounts of CO2 into the atmosphere. I shudder to think of that possibility.

    In 2005 the Amazon had the worst drought in 40 years (http://www.nature.com/news/2005/051010/full/051010-8.html), and this year seems to be heading in the same direction (http://news.mongabay.com/2007/0529-amazon.html). The forest might not be able to handle too many consecutive years of below-average rainfall.

    At the same time, Colombia (where I was born) has been experiencing unusually heavy rainfall since 2005 because of the same northward shift in the Intertropical Convergence Zone (ITCZ). This has already caused severe flooding, land-slides, crop losses, etc. All the rain that was supposed to fall on the Amazon is falling now on Colombia.

    The weather almost everywhere in the world is visibly shifting towards more extreme conditions, while the “skeptics” keep arguing about the color of the housing of some thermometers in Wyoming.

    Comment by Rafael Gomez-Sjoberg — 20 Aug 2007 @ 8:24 PM

    #521 Jeff Davis: Climate change is bad for agriculture. Farmers plant this year what grew best last year. They have to. There’s no other metric. If temperatures and rainfall flop around, yields suffer. Greater energy in the atmosphere will produce greater variability. Simply by definition. Even a zero sum game — no change in global means for temps and rainfall — would reduce yields due to increased variability in regional figures.

    Not for nothing is agriculture the image of stability. Undermine stability and Katy bar the door.

    If you have driven across the US, you will see all sorts of stuff being grown at a range of latitudes. Major corn production occurs in Minnesota and Missouri which have dramatically different climates.

    Heck, the most massive vegetables you have ever seen are grown in Alaska (http://www.gadling.com/2007/07/16/giant-mutant-like-vegetables-at-alaska-state-fair/) because of the longer days.

    We’ve adapted our crops for each location over hundreds of years of genetic engineering, and I suspect farmers continue to do this year over year in real time as the climate changes.

    Comment by Matt — 20 Aug 2007 @ 8:24 PM

  542. #479 DavidU: I can only repeat the basic fact that you again ignore. These models are _not_ constructed by fitting them to climate in the past. They are based on setting up the laws of physics together with our best knowledge of the state of earth at _one_ point in time and are then allowed to run.

    I’d be more inclined to believe you if I didn’t see so many constants scattered throughout the various models.

    I gave up counting the number of constants inside the first files I opened in modelE ghy_drv.f and ghy.f.

    In some (few) places, the constants have references. In other places, there is nothing to explain why a certain constant was selected.

    In some places, we see constants specified with excruciating detail–8 significant digits or more.

    In other places, we see one or two SDs.

    In GHY.f we see a plethora of comments that, frankly, are scary. “broot back to original values”, “changed..for 1.5m root depth”,”back to full heat capacity, to avoid cd oscillations”, “set im=36, jm=24″, “divide canopy heat capacities by 10.d0″

    And “use snow conductivity of 0.088 w/m/c instead of 0.3″

    Sure, there’s a lot of physics here. But, as I originally asserted, it looks like there is a heck of a lot of tweaking based on intuition. Is there a master doc that explains all this?

    Sorry, but a casual romp through the source tends to re-affirm my belief that there’s a lot of intuition in the model.

    Comment by Matt — 20 Aug 2007 @ 9:08 PM

  543. Re #538: [The “drive a smaller car” mandate is an example of you looking at your own life and deciding everyone should be just like you, isn’t it?]

    No, it’s an example of looking at available technology, and thinking about how CO2 emissions could most easily be reduced. I draw on my own life for illustration: a practical example is worth hours of theoretical argument. I manage to live quite happily without a large SUV; and indeed I’m wealthier and happier because I do so, therefore the advertising is false.

    [Why not go after anyone with a home air conditioner?]

    Because it didn’t come up? But since you raised the subject, I’ll just mention that thanks to decent insulation and some shade trees, I manage to live comfortably without one. That’s more money that stays in my pocket instead of going to the power company :-)

    [Why not go after... Where would you draw the line?]

    I’m not sure what you mean by “go after”? I’m talking about changing people’s attitudes, and the choices they make as a result, so we might look at the selling of air travel, and possibly less CO2-intensive alternatives. I don’t, in fact, understand why people willingly subject themselves to the various unpleasantnesses of commercial air travel, especially when their destination is rapidly becoming indistinguishable from their starting place. Why fly halfway around the world to see McDonalds’ signs in Chinese?

    Comment by James — 20 Aug 2007 @ 9:14 PM

  544. Himalayan Glaciers

    Ron Taylor (#511) wrote:

    Re 491 – Timothy, you keep referring to the loss of glaciers on the Tibetan plateau causing a reduction in flow of six major rivers in China. Actually, the situation is more serious than you indicate. The six rivers are vital sources of fresh water, not just for China, but also for India, Bangladesh, Pakistan, and much of Southeast Asia. The rivers are the Yangtze and Yellow in China, the Indus in Pakistan, the Ganges and Brahmaputra in India and Bangladesh, and the Mekong in Southeast Asia. The scope of the potential problem would appear to be much greater than you are indicating.

    I looked it up. Here is an article describing what you are talking about:

    Himalayan Glacier Retreat Blamed on Global Warming
    Tuesday, January 16, 2007
    http://geology.com/news/2007/01/himalayan-glacier-retreat-blamed-on.html

    .. and here is the technical paper the above article was based on:

    Glacial retreat in Himalaya using Indian Remote Sensing satellite data
    Anil V. Kulkarni, et al
    CURRENT SCIENCE, VOL. 92, NO. 1, 10 JANUARY 2007
    http://www.ias.ac.in/currsci/jan102007/69.pdf

    Thanks again.

    Comment by Timothy Chase — 20 Aug 2007 @ 9:43 PM

  545. Re # 528 Hal P. Jones: “I find it interesting, the idea that if a scientist works for or is funded by (ExxonMobil, ChevronTexaco, Shell) then they are corrupted.”

    Not necessarily corrupted, but certainly restricted in what they can publish and say publicly. Not surprisingly, corporate scientists are paid to serve the company’s goals. As a result, some of their scientific data are never published because they contain proprietary information. And some data are published, but scientists’ research papers are screened by company lawyers before they are submitted for publication. Clearly, the six Nobel Prizes awarded to scientists at Bell Labs
    (http://tinyurl.com/29u3uc) attest to the fact that some corporations sponsor cutting-edge research. But, I strongly suspect that any ExxonMobil geologist who says publicly that AGW is a serious threat to the planet will quickly be out of a job. We’ve already seen what happens to NASA scientists who offer their opinion about AGW.

    Comment by Chuck Booth — 20 Aug 2007 @ 9:44 PM

  546. Re #538:

    Why worry about the production of CO2 at all? Just go carbon neutral. Plant a few trees, pay someone in India not to drive, piggyback onto someone else’s private jet, hide behind “you’re just attacking the messenger”, etc.

    The fact is, the US could reduce its CO2 production by 5% overnight if those that believe it should be reduced actually cut their own production by a measly 10%. Hybrid cars are economically feasible, but guess what, the believers think it’s industry that is the problem and not their own consumption. The “government will save us from ourselves” mentality.
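    Spelling out the arithmetic behind that claim: a 5% national cut from a 10% individual cut requires the people making the cut to account for roughly half of US emissions. A one-line check in Python, with that 50% share as an explicit (and debatable) assumption:

    believer_share_of_emissions = 0.5   # assumed fraction of US emissions attributable to "believers"
    individual_cut = 0.10               # each of them cuts their own emissions by 10%
    print(believer_share_of_emissions * individual_cut)   # 0.05, i.e. a 5% national reduction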

    Comment by Bill Nadeau — 20 Aug 2007 @ 9:51 PM

  547. A bit off topic, I suppose, but FYI: Media Matters for America (www.mediamatters.org) maintains a collection of its blog posts highlighting misrepresentation of global warming science in the media: Climate of Smear: Global Warming Misinformation (http://mediamatters.org/action_center/global_warming/).
    Guaranteed to elevate your blood pressure.

    Comment by Chuck Booth — 20 Aug 2007 @ 10:40 PM

  548. [[skeptics are happy to point out that global warming stopped at the end of 2001]]

    Denialists say all kinds of stupid things, but that one kind of takes the cake. Here are the mean global annual temperature anomalies for 2001 to 2006 (NASA GISS):

    2001 57
    2002 68
    2003 67
    2004 60
    2005 76
    2006 65

    A cursory examination shows that every value later than 2001 is higher than the value for 2001.
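    For anyone who wants to check that directly, the comparison (and a rough trend) takes a few lines of Python. The values above appear to be in hundredths of a degree C; treating them that way is an assumption here:

    import numpy as np

    anomalies = {2001: 57, 2002: 68, 2003: 67, 2004: 60, 2005: 76, 2006: 65}

    print(all(v > anomalies[2001] for y, v in anomalies.items() if y > 2001))  # True

    # Rough least-squares trend over these six years, in 0.01 C per year
    years = np.array(sorted(anomalies))
    values = np.array([anomalies[y] for y in years])
    print(np.polyfit(years, values, 1)[0])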

    Comment by Barton Paul Levenson — 21 Aug 2007 @ 6:10 AM

  549. Re: 533

    The baseline on the State of the Arctic report cover figure is 1961-1990. The curve is from CRUTEM2v (see Fig. 6 in the report), which uses 1961-1990 as a baseline. I don’t think the 0 line is particularly important in this case, since you can see the trend without it.

    Comment by Harold Brooks — 21 Aug 2007 @ 6:43 AM

  550. Matt (#541) wrote:

    Not for nothing is agriculture the image of stability. Undermine stability and Katy bar the door.

    If you have driven across the US, you will see all sorts of stuff being grown at a range of latitudes. Major corn production occurs in Minnesota and Missouri which have dramatically different climates.

    Well, it is projected that by 2020 we will no longer be able to grow wheat in the United States. That would seem significant.

    Heck, the most massive vegetables you have ever seen are grown in Alaska (http://www.gadling.com/2007/07/16/giant-mutant-like-vegetables-at-alaska-state-fair/) because of the longer days.

    Unfortunately global warming won’t lengthen the day down here. Additionally, you are talking about a cooler environment up there, perhaps the fifties during the summer. Plenty of moisture. Down here, with the expansion of the Hadley cell, a higher rate of evaporation and lower relative humidity in the continental interior, it will be rather dry for crops.

    We’ve adapted our crops for each location over hundreds of years of genetic engineering, and I suspect farmers continue to do this year over year in real time as the climate changes.

    I have some hope for genetic engineering. Retroelements were involved in the domestication of rice. If I remember correctly there is a family of MITEs. Fairly small even for retroelements. Still active. Likewise, we have been radiating seeds for some time – in order to develop new varieties. But that takes time – and we will need to have a better understanding of the genomes of the species we are modifying if we are going to try and speed that up. Even artificial selection takes time – and depends upon pre-existing genetic variation. Still, it won’t help us much in the lower 48 if there isn’t enough water.

    Oh, and you might want to remember that since domesticated plants are adapted to our needs, they are fairly pampered. Weeds will do better with climate change. So will various natural pests.

    Comment by Timothy Chase — 21 Aug 2007 @ 7:20 AM

  551. Matt, Your insinuation that climate models are riddled with adjustable parameters provides an excellent example of why it is pointless to have the code “audited”. The problem is not with the coding, but rather with the fact that most of those calling for auditing don’t understand nearly enough about climate science to determine whether any anomalies or deficiencies they see are significant. It is exactly parallel to the whole debate over weather station siting. Finding a few badly sited weather stations will not make the problem of changing climate go away. Finding a few bugs or deficiencies in the models will not make the issue go away. There are simply too many independent lines of evidence and investigation, all of which point to 1) the fact that climate is changing, 2) that rising CO2 is the predominant culprit and 3) that the sensitivity is about 3 degrees C per doubling.
    Those calling for stringent auditing of code or station siting could learn about the science of climate change with a fraction of the vain effort they are devoting to poking holes in it. They would then understand that the issue needs to be addressed, and that the sooner we start addressing it, the more likely we will be able to do so without draconian restrictions on our liberties and economic well being.

    Comment by Ray Ladbury — 21 Aug 2007 @ 8:10 AM

  552. FYI, another example of sloppy “auditing” is discussed here: http://atmoz.org/blog/2007/08/20/audit-the-auditor/

    Comment by caerbannog — 21 Aug 2007 @ 10:12 AM

  553. #550 Timothy: Well, it is projected that by 2020 we will no longer be able to grow wheat in the United States. That would seem significant.

    Where do you come up with this? You are stating that we cannot grow wheat in the year 2020 in spite of us being able to grow wheat today all the way down to the US-Mexico border? Source?

    Comment by Matt — 21 Aug 2007 @ 10:17 AM

  554. RE #475 & 427 & why governments aren’t doing much.

    What I wrote above has merit, but I also thought of other reasons. The main one is that AGW is caused mainly by nonpoint-source pollution. That means it’s caused by us, more than by governments and single businesses (though they also contribute mucho).

    So, while there’s really quite a bit governments can do (that they aren’t doing now, esp USA) to reduce their own GHGs, and pass regs and laws and incentives to get the public to do so, ultimately we the people have to solve the problem. And if the public were in on this, businesses would be providing us with lower GHG emission products — which is slowly beginning to happen, esp since businesses save money doing so.

    I was sort of surprised that, when the Jewel food chain in the Chicago area joined the government’s Green Lights program in the early 90s and got a low-interest loan to change all their conventional tube lights to ones with reflectors and electronic ballasts (reducing lighting electricity by 3/4 and saving the chain $1 million per year, paying off the loan within the first year), they didn’t use that as a marketing strategy: “Jewel cares about the Earth!” But at least I made the effort to shop only at Jewel, and not the other supermarkets. And I think many others would switch to companies that sold the same products but involved less GHG emissions.

    Comment by Lynn Vincentnathan — 21 Aug 2007 @ 10:39 AM

  555. #551 Ray Ladbury: Matt, Your insinuation that climate models are riddled with adjustable parameters provides an excellent example of why it is pointless to have the code “audited”. The problem is not with the coding, but rather with the fact that most of those calling for auditing don’t understand nearly enough about climate science to determine whether any anomalies or deficiencies they see are significant.

    Sorry, Ray, but once science begins to drive public policy and control the distribution of billions of dollars then an extra level of scrutiny must be applied.

    Imagine your statement above and apply it to bridges or airplanes. It is absolutely absurd. FAA engineers understand a fraction of what Boeing engineers understand about flight and building airplanes. But that isn’t their job. Their job is to understand enough to make sure that Boeing is doing their job correctly. If you cannot explain your source code well enough to someone in another field with a solid technical background, then you have really failed in making your case.

    Agree with your statement that there are many lines of evidence. But those are mostly historical in nature: they show what HAS warmed. The models are important because they are forward looking and show what WILL warm. Additionally, these all exist independently. Whether or not there is evidence things are warming doesn’t matter to the integrity of the model. If we ARE warming (and we are), then the integrity of the model becomes more important (back to the public policy bit above).

    Comment by Matt — 21 Aug 2007 @ 11:26 AM

  556. > once science begins to drive public policy and control
    > the distribution of billions of dollars then an extra
    > level of scrutiny must be applied.

    By definition, once science is added to politics, an extra level of scrutiny has been made available.

    Your arguments are all in the direction of holding off applying the extra information science offers, and staying with market and political control of how money’s spent.

    Right?

    Comment by Hank Roberts — 21 Aug 2007 @ 12:27 PM

  557. “FAA engineers understand a fraction of what Boeing engineers understand about flight and building airplanes.” Total nonsense. Aeronautical engineers understand flight and building airplanes, regardless of who they work for. Engineers working for Boeing on automation systems used to assemble subsections or manage parts inventories may not have a clue about flight. FAA engineers who work on aircraft certification understand everything there is to know about flight and building airplanes. For communication to happen, you must have common background between the parties communicating. It is not enough to be a software engineer to analyze code used for GCMs. You also have to know what there is to know about climate, hence the truth of Gavin’s point earlier on this thread about the codes and the validity of Ray’s remarks. All this is not much more than a distraction.

    Comment by Philippe Chantreau — 21 Aug 2007 @ 12:52 PM

  558. Re #546: [Hybrid cars are economically feasible, but guess what, the believers think it’s industry that is the problem and not their own consumption.]

    In the real world, most of us can only consume what manufacturers choose to offer for sale. As it happens, I’ve been driving a Honda Insight hybrid for the last four years. (Averaging 70.5 mpg.) I’d like to replace it with something with even better fuel economy, but guess what? Nothing built today even comes close. How can I choose a more fuel-efficient vehicle, if the automakers choose not to make one?

    Comment by James — 21 Aug 2007 @ 1:31 PM

  559. RE 483.

    GAVIN inlined.

    “Response: Think about this. If a tree grows, or a station is moved from the south side to the north side of a building, if you go from a city centre to an airport, if you ‘do the right thing’ and get rid of the asphalt etc… all of these will add a cooling artifact to the record. Assumptions that all siting issues are of one sign is simply incorrect. – gavin]”

    1. Microsite issues can be (+) or (-) in sign. YES.
    2. Changes in instrumentation (cables) required
    siting closer to buildings. Think.

    3. You can speculate about the distribution or INVESTIGATE. Consider this: consider that you screw up and read in the wrong file from USHCN! Some sites will be hotter, some will be cooler. What was the actual outcome, Gavin? Was the outcome a net positive
    or net negative?

    4. Growing trees? Shading of a site will hit TMAX.
    Looking at TMAX (NOT TMEAN)
    will give you a cleaner signal of this potential
    contamination. Also you are likely to see TMAX change
    in a NONLINEAR FASHION, reaching an asymptote when the tree fully shades the station at all times. A signature of sorts.

    5. Asphalt hits TMIN. It stores heat (like the ocean)
    and gives it up slowly. Narrowing of diurnals, narrowing of variance in TMIN, is a first-order sign
    of UHI contamination.

    6. Speculating that the distribution of microsite issues will have mean = 0 is a nice hypothesis. The way we intend to test this is to survey as many sites as feasible and then crunch numbers.

    [Response: What numbers are you crunching? No-one is looking at the actual effect any of the issues really have. Where are the controls? You can idly speculate all you want about how dramatic it will all be in the end, but absent someone demonstrating that it makes a real difference, there is nothing going on. Plus there are dozens of real reasons to expect TMIN to go up faster than TMAX - it does not imply UHI. Same with your other pop 'fingerprints' - gavin]

    Comment by steven mosher — 21 Aug 2007 @ 1:46 PM

  560. Haven’t read this yet.

    Page 3 summarizes microsite issues…

    http://ams.allenpress.com/archive/1520-0469/10/4/pdf/i1520-0469-10-4-244.pdf

    Comment by steven mosher — 21 Aug 2007 @ 1:58 PM

  561. If you cannot explain your source code well enough to someone in another field with a solid technical background, then you have really failed in making your case.

    So the source code should contain a tutorial on climate science? Be realistic…

    Comment by Robin Levett — 21 Aug 2007 @ 2:05 PM

  562. Interesting reading.

    http://gking.harvard.edu/files/replication.pdf

    Money shot: “If I give you my data, isn’t there a chance that you will find out that I’m wrong and tell everyone?

    Yes. The way science moves forward is by making ourselves vulnerable to being wrong.”

    and this:

    It is the policy of the American Economic Review to publish papers only if the data used in the analysis are clearly and precisely documented and are readily available to any researcher for purposes of replication. Authors of accepted papers that contain empirical work, simulations, or experimental work must provide to the Review, prior to publication, the data, programs, and other details of the computations sufficient to permit replication. These will be posted on the AER Web site.

    Comment by steven mosher — 21 Aug 2007 @ 3:12 PM

  563. #542
    Well I can’t really comment on the specific code you have read, since I have neither written it nor read it. But most of your comments seem rather pointless without their full context.
    I’ll just make two comments

    1. That different constants have different numbers of significant digits is not strange at all. You don’t usually specify the price of a candy bar
    with 8 significant digits, but if you are giving tolerances for motor parts for a jet engine you’ll want as many digits as you can get. Different things have different relevant scales.

    2. Even though the dynamics of a model is given by basic laws of physics, you will still have quite a few parameters that you need to specify, both for the initial conditions at the time when you start running the model and also to specify that it is actually this planet you are simulating.
    The same set of dynamical laws applies to the behaviour of the atmosphere of Mars and the earth. So in order to simulate the climate of the earth you will have to say where there are continents and mountain ranges, the amount of gas in the atmosphere, and something as simple as the size of the earth.
    Parameters are not the same thing as intuition-based fiddling; they are needed to specify that it is the climate of our planet that is modeled. (A minimal illustration follows below.)
    Nor do parameters rule fiddling out; to judge that, you have to actually know enough of the science to understand the model on your own.
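    To illustrate the physics-versus-parameters distinction with the simplest possible example, here is a zero-dimensional energy balance in Python. The Stefan-Boltzmann law is the same for every planet; the parameters just say which planet you are talking about. Textbook values are used; nothing here is taken from any GCM:

    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4 (physics, same everywhere)

    def effective_temperature(solar_constant, albedo):
        # Balance absorbed sunlight against emitted thermal radiation
        absorbed = solar_constant * (1 - albedo) / 4.0
        return (absorbed / SIGMA) ** 0.25

    # Same law, different planetary parameters
    print(effective_temperature(solar_constant=1361, albedo=0.30))  # Earth: about 255 K
    print(effective_temperature(solar_constant=589, albedo=0.25))   # Mars: about 210 K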

    Comment by DavidU — 21 Aug 2007 @ 3:13 PM

  564. #555
    Here you demonstrate a serious misunderstanding about how verification works. The FAA engineers have to understand at least as much as the people at Boeing. They might not need to design new planes on a day to day basis, but they could do the job if that was what they had to do.
    You also underestimate the gap in knowledge between scientific fields when you think that one should be able to quickly explain everything to someone with “a solid technical background”. I work in materials physics and it would take me weeks to fully explain much of my work to someone with a PhD in e.g. astrophysics. We both would have a lot of background knowledge of physics, but not the knowledge relevant for understanding the other field without actually learning a lot of it. Not to mention how clueless we would be if we talked to some mathematicians about their research.
    The span of modern science is vast.

    Comment by DavidU — 21 Aug 2007 @ 3:28 PM

  565. steven mosher (#559) wrote:

    5. Asphalt hits TMIN. It stores heat (like the ocean) and gives it up slowly. Narrowing of diurnals, narrowing of variance in TMIN, is a first-order sign of UHI contamination.

    Steven,

    TMIN goes up on account of the opacity of greenhouse gases. It is at night that the earth will tend to cool off by thermal radiation, but if you have the feedback between the atmosphere and ground, it will take longer for the thermal radiation to leave the system.

    As for whether or not the record is accurate, try comparing it to the strictly rural stations or to the lower troposphere. Virtually identical trends – although there is greater variability in the lower troposphere.

    These guys aren’t advocating zero population growth – they are just doing science.

    Comment by Timothy Chase — 21 Aug 2007 @ 3:40 PM

  566. Ray Ladbury> First, there has been plenty of good research done on potential risks due to climate change.

    Yes, but as stated in your link: “Health outcomes in response to climate change are the subject of intense debate.” Many here want to claim that these are proven to be severe effects.

    Ray> …you assume that it is either climate change mitigation OR development.

    I do not mean to assume that. I’m just saying that aggressive (costing more than $20/tC) mitigation is likely to do more harm than good. I think I’m in agreement with the majority of economists on this, so I do not see why some (not Ray) are calling me a denialist on this basis.

    Comment by Steve Reynolds — 21 Aug 2007 @ 4:30 PM

  567. 508 SecularAnimist Says: ‘Where is the evidence that mitigating global warming will have either of these outcomes? I have never seen any such evidence, only this talking point repeated over and over.’

    There was a lot of discussion of this here:

    http://www.realclimate.org/index.php?p=453
    comments 185 and after.

    Comment by Steve Reynolds — 21 Aug 2007 @ 5:07 PM

  568. Hank Roberts Says: “Your arguments are all in the direction of holding off applying the extra information science offers, and staying with market and political control of how money’s spent. Right?”

    For myself, seeing a successful audit of the AGW evidence and quantitative effects would make me support more aggressive action.

    Comment by Steve Reynolds — 21 Aug 2007 @ 5:17 PM

  569. #561 Robin: So the source code should contain a tutorial on climate science? Be realistic…

    Of course not. But the source should have some measure of traceability back to a real-world equation or constant some place. Some places in the source are great: the reference to IPCC2, for example, for ice accumulation. Perfectly clear.

    Other places you just see a constant that got changed by almost an order of magnitude. No explanation. No reference to a paper. Tweaking?

    [Response: Units. -gavin]

    Comment by Matt — 21 Aug 2007 @ 7:17 PM

  570. #558 James: In the real world, most of us can only consume what manufacturers choose to offer for sale. As it happens, I’ve been driving a Honda Insight hybrid for the last four years. (Averaging 70.5 mpg.) I’d like to replace it with something with even better fuel economy, but guess what? Nothing built today even comes close. How can I choose a more fuel-efficient vehicle, if the automakers choose not to make one?

    Well you can either assume “the man” is holding out on you and not offering you that 200 MPG engine or the tires that never wear out, or you can assume you are bumping against the laws of physics.

    My calcs show a 1500 pound car (two-seater Smart car) would require about 11 HP to cruise at 60 MPH (including various drags), and about 60 HP to go from 0 to 60 in 12 seconds. A rule of thumb is 1/10 GPH per horsepower, so 11 HP at cruise would be 1.1 GPH, or about 54 MPG. Your heavier car is already beating the rule of thumb by quite a significant margin, which is really testament to the great engineering.
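    Laying that arithmetic out explicitly in Python (the 0.1 gallons per hour per horsepower figure is the rule of thumb cited above, not a measured value):

    cruise_hp = 11            # estimated power to hold 60 mph in a ~1500 lb car
    gph_per_hp = 0.10         # rule-of-thumb fuel burn, gallons per hour per horsepower
    speed_mph = 60

    gallons_per_hour = cruise_hp * gph_per_hp    # 1.1 gal/h at cruise
    mpg = speed_mph / gallons_per_hour           # roughly 54.5 mpg
    print(gallons_per_hour, mpg)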

    If you want greater MPG, either live with a smaller car, increased pollution output, or reduced acceleration.

    Interesting that when the Smart car was brought to the US, it had to have additional pollution controls added to comply with US laws. That brought the MPG of that car down from 60 to 37 MPG (although the article indicates 50 MPG should have been possible). http://www.wired.com/cars/futuretransport/news/2005/05/67405

    Article also gives some insight on how poorly these types of cars have sold worldwide, which really indicates that your preferences are shared by only a slim minority of drivers worldwide.

    Comment by Matt — 21 Aug 2007 @ 7:32 PM

  571. Gavin, I think you have done a great job of responding to these comments. One observation on all this – many of the critics don’t seem to want to do any coding, and don’t seem to thoroughly read the climate papers. I liked your statement about ‘tough love’. The intent in asking for the code seems to be not to do science, but to discredit it.

    Comment by KH — 21 Aug 2007 @ 9:17 PM

  572. I’m afraid that I agree with the skeptics that even when published our codes are unnecessarily impenetrable, inadequately validated, inadequately linked to the literature (which itself is unnecessarily impenetrable, although IPCC reports are a big help in the latter regard).

    We climatologists do not seem willing to acknowledge that our unanticipated responsibilities really do require more formality and more accountability than was the case when we were pursuing what amounted to a peculiar and idiosyncratic academic curiosity.

    Our most adamant critics do not seem willing to acknowledge how difficult, expensive and risky such a change would be even in the best, most civilized and most supportive of circumstances. Such benign circumstances are not the ones those same critics are, for the most part, willing to grant us.

    Comment by Michael Tobis — 21 Aug 2007 @ 9:30 PM

  573. Gavin — I echo what KH says…

    Comment by David B. Benson — 21 Aug 2007 @ 9:33 PM

  574. Re #568:
    Steve Reynolds, who would you nominate as auditors of the AGW evidence? Karl Rove? the Pope? your mother? me? maybe somebody with a Nobel Prize in literature or economics? the Dalai Lama?
    Maybe some other scientists that know a lot about climate & earth sciences?

    You seem to not have the foggiest idea of how science is done in this day and age.

    All scientific disciplines, and especially all the natural sciences, have pretty good auditing systems already in place: replication of experiments/analysis, and peer review. The whole scientific community is engaged in a constant auditing of each other’s results. We all use somebody else’s results/work as a basis for our own work, and we constantly replicate what other scientists do. If somebody writes a paper explaining a particular methodology for doing an experiment or analyzing data, or provides some new data, somebody else is going to try using that methodology or those data in their own work. If the method or data are not good, chances are very high that this second person is going to spot the problem pretty quickly. I certainly don’t want to base my own experiments or analyses on flawed methods or data. Plenty of cases of scientific fraud have been spotted that way. And the more newsworthy a particular scientific result/report is, the faster it will be audited by replication, expansion and/or derivation. We are all after fame and glory and proving that some newsworthy result/data is flawed or can be improved substantially is a very good way to gain notoriety. So there’s a strong motivation in science to outdo one another, and that has an implicit “auditing” component to it. I’m not a climate scientist so my experience is in other disciplines, but I would be very surprised if climate science works very differently. I think the system works as well as human fallibility permits.

    So please, if you think that the whole community (thousands) of scientists that work on climate research are not doing a good job of auditing one another, you better come with a very good idea of who should audit their work. It must be somebody that understands all the intricacies (every nook and cranny) of the subject.

    You and all the other pro-audit folks are very welcome to go back to school and get a PhD in climate/earth sciences so that you can begin auditing things yourselves. All this Monday-morning quarterbacking is pretty silly.

    Comment by Rafael Gomez-Sjoberg — 21 Aug 2007 @ 9:46 PM

  575. There’s one good measure of how well “auditing” works — check the financial system.

    Comment by Hank Roberts — 22 Aug 2007 @ 12:13 AM

    In the US, agency rule-making and enforcement processes are well known for their opportunities for public involvement and for how democratic they are, but this openness is not unlimited. There are limits because, without them, involved parties could grind things to a halt to prevent a decision they did not like.

    The calls for transparency in science might just be an end run around the safeguards that allow agencies to work. The tactics are very similar. The calls for code to be released are like the requests for documents in legal proceedings that are attempts to drag the process out, with the goal of getting the other side to quit by making the process expensive and time-consuming.

    Comment by Joseph O'Sullivan — 22 Aug 2007 @ 12:20 AM

    Of course not. But the source should have some measure of traceability back to a real-world equation or constant somewhere. Some places in the source are great: the reference to IPCC2, for example, for ice accumulation. Perfectly clear.

    Other places you just see a constant that got changed by almost an order of magnitude. No explanation. No reference to a paper. Tweaking?

    At what level of detail do you want this referencing? To a climatologist who knows what he’s looking at, is the code as opaque as you claim it is?

    I’m a lawyer; I’m familiar with trying to explain complicated legal concepts simply, and I’m also familiar with people who think that all they need to do is read a couple of books to know how to be a lawyer. No amount of commentary in the source code will satisfy those who want it to stand on its own without further explanation; that is, no amount of commentary short of a full course in climatology. If you want to audit the code, learn the climatology first.

    Comment by Robin Levett — 22 Aug 2007 @ 2:45 AM

  578. I’ve been grappling with Time of Observation Bias and in particular the adjustments made to the annual temperature data to adjust for TOB.

    TOB occurs when temperatures for the prior day are included in the current day’s record. If the prior day was warmer or cooler than the day of record, then this could result in too high or too low a value for the maximum or minimum, respectively.

    Time of Observation Bias can affect monthly averages and the monthly figures are adjusted for TOB. All of which is fine. The problem is the adjusted monthly figures appear to be compiled into annual figures (correct me if I am wrong), which have a significant TOB adjustment.

    There are two reasons why annual and multi-annual temperature data should not have any significant Time of Observation Bias.

    The first reason is that over a year TOB can only result from the day prior to the period in question. If the period is a year then any TOB from a single day will be averaged over a large number of days and hence will be very small, i.e. TOB must be trivial over a year or longer. So, while individual months can have significant TOB, each subsequent month in the series has a TOB in the opposite direction for the simple reason its bias results from data that should have been included in the previous month’s data. And so over the whole year any bias in individual months is eliminated (except of course the bias from the day prior to the start of the year).

    Even if this were not true (and I am quite sure it is) there is a second reason there cannot be a significant TOB over a year.

    The two halves of the year have more or less equal and opposite monthly TOB. If the first half of the year has a warming bias then the second half of the year will have an equal cooling bias assuming the Time of Observation remains constant, which the adjustment method assumes.

    If the annual temperature data has a significant TOB, and hence a TOB adjustment, it must be noise from the TOB estimating method; it cannot come from Time of Observation biases accumulated over the whole year, because it’s impossible to accumulate such biases.

    [Response: I think you misunderstand the nature of the TOB. The situation is more that historical temperatures were taken in the afternoon. Now they are taken in the morning. That imparts a cooling bias to all temperatures and does not cancel out in any averaging procedure. - gavin]

    Comment by Philip_B — 22 Aug 2007 @ 5:15 AM
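
    To make the exchange above concrete, here is a toy simulation (invented numbers, not the NOAA adjustment method) of why switching a max/min thermometer's reset time from afternoon to morning shifts the long-term mean rather than cancelling over a year: a warm afternoon just after an afternoon reset can set the next day's maximum too, while a cold morning just after a morning reset can set the next day's minimum, so each observing schedule carries its own systematic bias.

        import numpy as np

        rng = np.random.default_rng(0)
        hours = np.arange(365 * 24)
        diurnal = 6.0 * np.sin(2 * np.pi * (hours % 24 - 9) / 24)   # peak mid-afternoon
        weather = np.repeat(rng.normal(0.0, 4.0, 365), 24)          # day-to-day variation
        temp = 12.0 + diurnal + weather                              # hourly temperatures

        def annual_mean_of_extremes(obs_hour):
            """Mean of daily (Tmax + Tmin)/2 for 24 h windows ending at obs_hour."""
            daily = []
            for day in range(1, 365):
                window = temp[day * 24 + obs_hour - 24 : day * 24 + obs_hour]
                daily.append(0.5 * (window.max() + window.min()))
            return float(np.mean(daily))

        print("afternoon (17:00) readings:", round(annual_mean_of_extremes(17), 2))
        print("morning   (07:00) readings:", round(annual_mean_of_extremes(7), 2))
        # The two annual means differ systematically; a station that switched from
        # afternoon to morning observations would show a spurious step unless adjusted.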

  579. #571 “The intent in asking for the code seems to be not to do science, but to discredit it.”

    As those who do not accept that significant AGW is occurring appear to be unable to mount a case through the peer-reviewed science literature, the intent is, yes, to discredit but also to intimidate. It is the tobacco industry tactic all over again.

    If AGW skeptics have valid hypotheses to explain the various datasets (not just temperature but other observations as well), why are they not showing up in quantity in the peer-reviewed literature?

    Comment by richard — 22 Aug 2007 @ 7:35 AM

  580. re 574

    “…the more newsworthy a particular scientific result/report is, the faster it will be audited by replication, expansion and/or derivation. We are all after fame and glory and proving that some newsworthy result/data is flawed or can be improved substantially is a very good way to gain notoriety.”

    This can’t be emphasized enough. This faux “debate” has been going on for years, and the so-called AGW “skeptics” have, at best, seen virtually all of their pet criticisms routinely shot down and, more importantly, have been unable to substantiate any of their attempts to “refute” the science in the only arena that actually matters – the scientific arena.

    Comment by J.S. McIntyre — 22 Aug 2007 @ 9:23 AM

  581. Matt,
    The fact of the matter is that most people calling for “auditing” do not have the background to assess the code in the first place. Just because one can string together a few lines of code doesn’t mean one has the background to understand scientific code in any field.
    The case for climate change depends in no way upon the models. It was laid out in 6 easy steps recently on this site. None of those steps was particularly dependent on modeling. Where the code is important is in LIMITING our assessment of risk–is it credible that all the ice at both poles will melt? The models say no, not in the short term. Is it credible that we could have a runaway greenhouse effect on Earth? Again the models say no. The fact that there are uncertainties in the models does not discredit their predictions–it just means we have to weight their predictions with those uncertainties.
    Want to understand the models? Learn the science. Then it will be obvious what the code is doing.

    Comment by Ray Ladbury — 22 Aug 2007 @ 9:46 AM

  582. Gavin,

    You inlined

    “[Response: Gore’s statement was that nine of the ten warmest years globally occurred since 1995. This is true in both GISS indices and I think is also true in the NOAA data and CRU data. So that’s pretty accurate. - gavin]”

    In the trailer for AIT, the first pronouncement is that “the 10 hottest years measured occurred in the last 14 years and the hottest was 2005.”

    Can you toss me a pointer to the data with the associated errors?

    [Response: I'm pretty sure it refers to the NCDC land+ocean analysis: http://www.ncdc.noaa.gov/oa/climate/research/anomalies/anomalies.html - the hottest ten years are all 1995 onwards - they've recently upgraded their analysis, so it might have been slightly different a year ago. In the GISS analysis, 1990 sneaks into the top ten (displacing 1997), and so the phrase would have been in the last 16 years (assuming it's written in 2006). Errors are estimated to be around 0.1 deg C on any individual year, and so there is a little uncertainty in any ranking, but nothing would change the basic thrust. - gavin]

    Comment by steven mosher — 22 Aug 2007 @ 11:32 AM
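
    For readers who want to see how such a ranking is produced, here is a small sketch of deriving a "top ten warmest years" list from annual anomalies, and of how a roughly 0.1 deg C per-year uncertainty blurs the exact ordering without changing the basic picture. The anomaly values below are made-up placeholders, not GISS or NCDC numbers:

        import numpy as np

        # Hypothetical annual global anomalies (deg C); placeholders only.
        anomalies = {1990: 0.39, 1995: 0.45, 1996: 0.33, 1997: 0.46, 1998: 0.61,
                     1999: 0.40, 2000: 0.41, 2001: 0.52, 2002: 0.60, 2003: 0.60,
                     2004: 0.53, 2005: 0.65, 2006: 0.59}

        top_ten = sorted(anomalies, key=anomalies.get, reverse=True)[:10]
        print("nominal top ten:", sorted(top_ten))

        # Perturb each year by N(0, 0.1) to mimic ~0.1 deg C uncertainty and count
        # how often the nominally warmest year still comes out on top.
        rng = np.random.default_rng(1)
        years = np.array(list(anomalies))
        values = np.array([anomalies[y] for y in years])
        first_place = sum(
            years[np.argmax(values + rng.normal(0.0, 0.1, values.size))] == 2005
            for _ in range(1000)
        )
        print("2005 ranked warmest in", first_place, "of 1000 perturbed draws")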

    In the letters column of the London Daily Telegraph today somebody claims that recent research has shown that the sensitivity of the climate to CO2 is a third lower than previously thought. Has anybody heard anything about this? If so, where does it come from?

    [Response: It refers to the Schwartz paper alluded to above. The conclusion is unlikely to stand – but watch this space… – gavin]

    Comment by David Price — 22 Aug 2007 @ 11:54 AM

  584. # 555 Matt: “.. once science begins to drive public policy and control the distribution of billions of dollars then an extra level of scrutiny must be applied.”

    I don’t see any evidence that climate science is driving public policy in the U.S. (at least not on the federal level). But why should scientific concerns about AGW be any different from other scientific concerns that have influenced public policy, such as smoking, AIDS, pollution, declining fisheries, etc.? Isn’t scientifically-informed public policy the reason we (in the U.S.) have the National Academies of Science (“Advisors to the Nation on Science, Engineering, and Medicine”; http://www.nationalacademies.org), not to mention the National Institutes of Health, National Science Foundation, NOAA, NWS, USGS, Centers for Disease Control and Prevention, et al?

    Comment by Chuck Booth — 22 Aug 2007 @ 12:24 PM

  585. re 581.

    That’s a good one.

    The first model I looked at, as a “programmer” with no knowledge of the science, had a glaring error in the very first routine. They read data in from an external source without performing any checks on whether they had picked up the right file. Sounds kinda familiar.

    The second model was DOD-verified. At the end of the day I found that half the code didn’t execute because of a mistaken goto. I had no clue about what was going on in that code.

    The third model was a phased array radar. I had no clue what the heck that thing did. Two days’ work and I found an error in the 3D transformation matrix — the guy left out a sin or cos term. The code had been in use for 10 years. Fully validated. That pissed off a bunch of people who saw 10 years of data go down the drain.

    The next piece of code was also validated and in use for ten years. One week in, I found a piece of code that was dead (i.e., not executed, but expected to be executed). That invalidated 10 years of study. Oops.

    These were simple, rudimentary checks. Did you read the right file? Did you record the file you read? Does every bit of your code execute? Do you have test cases? (A minimal example of such a provenance check follows this comment.)

    NONE of this requires climate science knowledge. The fact that a frigging mining guy found errors is prima facie evidence that a code review is in order.

    The irony of the last point is rich and creamy.

    Attacked as a know-nothing “mining guy”, McIntyre finds an error.

    He finds an error without the source; Ray says they could not find errors with the SOURCE.

    The bottom line is this: Hansen and crew are scientists. They are not engineers. They are not software engineers. Were they, the last little burp would have been less likely.

    Comment by steven mosher — 22 Aug 2007 @ 12:28 PM
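
    A minimal example of the kind of rudimentary check described above: record exactly which input file a run read, with its size and a SHA-256 digest, so a later review can confirm the intended data were used. This is generic practice, not any particular group's workflow, and the default file name is a placeholder:

        import hashlib, json, os, sys

        def input_provenance(path):
            """Return a small provenance record for one input file."""
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 16), b""):
                    digest.update(chunk)
            return {"path": os.path.abspath(path),
                    "bytes": os.path.getsize(path),
                    "sha256": digest.hexdigest()}

        if __name__ == "__main__":
            # "station_data.txt" is a placeholder name, not a real dataset file.
            record = input_provenance(sys.argv[1] if len(sys.argv) > 1 else "station_data.txt")
            print(json.dumps(record, indent=2))  # log alongside the run's output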

  586. One interesting question the brouhaha highlights is, “why did we see global cooling from ~1940 to 1970?”

    One possible answer:

    On the Effect of the World War II Bombing and the Nuclear Bomb Test to the Regime Shift of the Mean Global Surface Temperature (SAT SST) Abstract; 2001 paper in Japanese.

    Perhaps we just need to start slagging Nevada again? And lay off the Iranians and North Koreans, so they can start atomic testing and save the planet?

    Comment by EthanS — 22 Aug 2007 @ 12:54 PM

  587. Re #570: [Well you can either assume “the man” is holding out on you and not offering you that 200 MPG engine or the tires that never wear out, or you can assume you are bumping against the laws of physics.]

    No, you’re missing an important point: we aren’t talking about 200 mpg carburetors here. We know that it’s possible to build a car that gets better than 70 mpg, because there’s one sitting in my driveway. (And I can see many ways to tweak the design for better economy.) We also know that it isn’t being sold any more, and that the nearest competitors get markedly poorer fuel economy. That’s fact.

    [Article also gives some insight on how poorly these types of cars have sold world wide, which really indicates your desires are a very slim majority of drivers world wide.]

    Which is my other point. How much money do the automakers spend on advertising fuel efficient cars, versus how much on the oversized gas guzzlers? It’s the feedback loop again: the guzzlers sell because they’re advertised (and a lot of the advertising sells qualities that are independent of brand). Change the advertising, and you’ll change what sells.

    Comment by James — 22 Aug 2007 @ 1:20 PM

  588. RE Steven (#585):

    I hope you don’t mind if I don’t quote your entire piece, but you make some good points, and I think the post is well worth reading. But I would like to address some concerns with regard to climate models.

    As I understand it, NASA makes available the following: documentation, source code, the various external data sets – in a variety of configurations. With regard to the IPCC simulations, the output from these are made available as they are completed. Validation, technical discussions and papers dealing with various modules and their coupling are also available. Then of course there is all the technical literature on climatology itself.

    Everything one would need to evaluate the models – assuming one has the appropriate level of expertise. Well beyond me, obviously. However, it always pays to keep in mind that, given the nature of human cognition and communication, there will always be aspects which are tacit rather than articulated. Much of cognition and communication consists of shifting the boundary between the two.

    It might surprise many to know this, but given the need for standardization, McDonald’s has a virtual encyclopedia occupying an entire shelf in which they have attempted to articulate every detail in order to achieve that standardization. I know – I worked there at one point. Science probably wouldn’t progress so quickly, given the complexity of its subject matter, if scientists attempted to always articulate things with that level of detail. But they come rather close.

    *

    Incidentally, I know it has been repeated on a number of occasions, but I think it might help to remind people that climate models are not based on surface temperature records. They aren’t based on trends. They are based on first principles – physics, basically – so any “error” regarding the small fraction of a degree by which 1934 and 1998 were different is irrelevant to their validity, and of small relevance in terms of the trends in global average temperature.

    Comment by Timothy Chase — 22 Aug 2007 @ 1:25 PM

  589. PS to #588

    My apologies for not hyperlinking to #585 (22 August 2007 at 12:28 PM) in #588 (22 August 2007 at 1:25 PM). I was paying more attention to the content and had honestly forgotten.

    Comment by Timothy Chase — 22 Aug 2007 @ 1:31 PM

  590. A man goes to the hospital with severe chest pain, a shooting pain in his left arm, shortness of breath, and when the intern on call listens with a stethoscope she hears a highly irregular heartbeat, typical of heart attack victims. The intern orders an EKG, which shows the classic pattern of heart attack, so she pronounces that he’s suffered a major heart attack and orders the appropriate treatment.

    Suddenly another doctor comes in. Hold the phone! The software used by that EKG machine has never been validated! It’s not “open source!” It can’t be trusted! Tell that patient to go home, we’ll call him back as soon as everyone agrees that the EKG software doesn’t have a “bug.”

    Validating the EKG software is a good idea. But let’s not make the already overworked interns do it, and let’s not make the already underfunded hospital pay for it. And since we have a plethora of lines of evidence of life-threatening illness — so many that even if the EKG is totally SNAFU there’s still no doubt — quit stalling, for GOD’S SAKE get that patient into the critical care unit. Stat.

    Comment by tamino — 22 Aug 2007 @ 2:06 PM

    Gavin, why don’t you want to post what this really proves, namely that the claim that all ‘station temperature error would be detected and corrected as part of the process’ is blatantly not true, or else the rather large errors that have been carried for the last seven years would have been detected. Also, even though you do not want to admit it, in Hansen (2001) he specifically says he is relying on the data from the stations to be accurate, but we now know that, per WMO/NOAA/NWS guidelines, the stations are not sited correctly. This means that there is no way to know the accuracy of the data. It also means that Hansen’s UHI adjustment could be wrong, which would significantly change the whole instrumented picture! That is why this error is so important: it shows that errors are not detected or corrected!

    [Response: You are simply mistaken. Jumps in stations temperatures are indeed found in the NOAA data processing and are incorporated into the GISS analysis. However, GISS does not do that analysis, NOAA does, and the error in the processing was at GISS. Therefore, NOAA had no chance to find that error and your claims that this shows that the NOAA analysis is lacking, have no merit whatsoever. The bottom line remains, do the calculation to show that your issues have a practical effect. - gavin]

    Comment by Vernon — 22 Aug 2007 @ 3:13 PM

  592. #591,
    Vernon – there are constant and continuing efforts to check and correct the surface temperature datasets. For example: Comparison of trends and low-frequency variability in CRU, ERA-40, and NCEP/NCAR analyses of surface air temperature, Simmons et al, JGR 2004

    As they point out,
    “In reality, however, observational coverage varies over time, observations are themselves prone to bias, either instrumental or through not being representative of their wider surroundings, and these observational biases can change over time. This introduces trends and low-frequency variations in analyses that are mixed with the true climatic signals. Progress in the longer term depends on identifying and correcting model biases, accumulating as complete a set of historic observations as possible, and developing improved methods of detection and correction of observational biases.”

    One of the motivations for this paper (18 pages of close-spaced comparison and discussion of the CRUTem2v, ERA-40 and NCEP/NCAR analysis) was the claim by Kalnay and Cai (2003) that much of the reported warming over North America was local and due to urbanization and land use changes (based on NCAR/NCEP). As a result of all this effort, the authors can say, with some confidence, that

    “Results for North America cast doubt on Kalnay and Cai’s (2003) estimate of the effect of urbanization and land use change on surface warming.”

    It’s pretty clear that climate scientists are well aware of the effect of observational biases, and also that they spend a great deal of time and effort on objectively taking these biases into account.

    P.S. For those who are interested, they also explain why anomalies are used rather than absolute measurements:
    “The CRU data are anomalies computed with respect to station normals for 1961–1990, a period chosen because station coverage declined during the 1990s. The reanalyses have accordingly been expressed as anomalies with respect to their own monthly climatic means for 1961–1990…. Working with anomalies rather than absolute values avoids the need to adjust for differences between station heights and the terrain heights of the assimilating reanalysis models.”

    Comment by Ike Solem — 22 Aug 2007 @ 5:04 PM
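
    As a toy illustration of the point in the P.S. above (invented numbers): two hypothetical stations at very different elevations have very different absolute temperatures, but expressing each relative to its own 1961–1990 mean removes the offset, so the shared signal can be averaged without any height adjustment:

        import numpy as np

        years = np.arange(1951, 2001)
        warming = 0.015 * (years - 1951)                  # common trend, deg C per year
        rng = np.random.default_rng(2)
        valley = 14.0 + warming + rng.normal(0.0, 0.2, years.size)    # low, warm station
        mountain = 5.5 + warming + rng.normal(0.0, 0.2, years.size)   # high, cold station

        def anomaly(series):
            """Series minus its own 1961-1990 average (the station 'normal')."""
            return series - series[(years >= 1961) & (years <= 1990)].mean()

        regional = 0.5 * (anomaly(valley) + anomaly(mountain))        # two-station mean
        print("mean anomaly, 1991-2000:", round(float(regional[years >= 1991].mean()), 2))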

    Validating the EKG software is a good idea. But let’s not make the already overworked interns do it, and let’s not make the already underfunded hospital pay for it. And since we have a plethora of lines of evidence of life-threatening illness — so many that even if the EKG is totally SNAFU there’s still no doubt — quit stalling, for GOD’S SAKE get that patient into the critical care unit. Stat.

    I’m sure the “auditors” would be willing to provide photographs. That’ll fix up everything!

    Comment by wildlifer — 22 Aug 2007 @ 5:04 PM

  594. A note on terms:
    “auditing” — as defined by people who do auditing — doesn’t mean what the CA people want to do. They want to do what’s called inspection; auditing is sampling for quality control.

    This is from one of the real experts in studying how people make mistakes. Did you know about half of all big spreadsheets have significant errors in them? Most bankers don’t ….

    Ray Panko (University of Hawaii), “A Rant on the Lousy Use of Science in Best Practice Recommendations for Spreadsheet Development, Testing, and Inspection,” January 2007

    “Convergent Validation — Science works best when there is convergent validation, meaning that you try to measure the same thing in several ways. If the results all come out the same, then you can have some confidence in the science.

    “… we have three sources of information about spreadsheet error detection…. (I will use the term inspections instead of auditing, because auditing means sampling for quality control, not the attempt to detect as many errors as possible).”

    http://panko.cba.hawaii.edu/

    Comment by Hank Roberts — 22 Aug 2007 @ 6:09 PM

    Another quote from Dr. Panko, and I think this one goes to the heart of the approach required to be helpful:

    “Although we still have far too little knowledge of spreadsheet errors to come up with a definitive list of ways to reduce errors, the similarity of spreadsheet errors to programming errors suggests that, in general, we will have to begin adopting (yet adapting) many traditional programming disciplines to spreadsheeting….

    “In programming, we have seen from literally thousands of studies that programs will have errors in about 5% of their lines when the developer believes that he or she is finished (Panko, 2005a). A very rigorous testing stage after the development stage is needed to reduce error rates by about 80% (Panko, 2005a). Whether this is done by data testing, line-by-line code inspection, or both, testing is an onerous task and is difficult to do properly. In code inspection, for instance, we know that the inspection must be done by teams rather than individuals and that there must be sharp limits for module size and for how many lines can be inspected per hour (Fagan, 1976). We have seen above that most developers ignore this crucial phase. Yet unless organizations and individuals are willing to impose the requirement for code inspection or comprehensive data testing, there seems little prospect for having correct spreadsheets in organizations. In comparison, the error reduction techniques normally discussed in prescriptive spreadsheet articles probably have far lower impact.

    “Whatever specific techniques are used, one broad policy must be the shielding of spreadsheeters who err from punishment. In programming, it has long been known that it is critical to avoid blaming in code inspection and other public processes. For instance, Fagan (Fagan, 1976) emphasized that code inspection should never be used for performance evaluation. As Beizer (Beizer, 1990) has emphasized, a climate of blaming will prevent developers from acknowledging errors. Edmondson (Edmondson, 1996) found that in nursing, too, a punitive environment will discourage the reporting of errors.

    Quite simply, although the error rates seen in research studies are appalling, they are also in line with the normal accuracy limits of human information processing. We cannot punish people for normal human failings.”

    http://panko.shidler.hawaii.edu/SSR/Mypapers/whatknow.htm

    Comment by Hank Roberts — 22 Aug 2007 @ 6:26 PM

    Steven Mosher, no, I didn’t say that you or anyone else couldn’t find errors–just that you would have no idea whether they were significant or not.
    Yes, McIntyre found an error–an error that changes nothing of significance about the scientific consensus. Code always has errors. Analyses always have errors and uncertainties. This does not invalidate the code or the analysis.
    Science is a human activity that takes into account the fact that people will make mistakes. That is why the strands of evidence for anthropogenic causation of climate change are many and varied. An error here and there will do nothing to dent the strength of evidence. Correction of such errors is just science as usual. Those who jump all over this process merely demonstrate that they don’t understand science.

    Comment by Ray Ladbury — 22 Aug 2007 @ 7:00 PM

  597. ====CRU has 1998 as the warmest year but there are differences in methodology, particularly concerning the Arctic (extrapolated in GISTEMP, not included in CRU) which is a big part of recent global warmth.====

    Where can I get an original paper which mentions that CRU does not include Arctic temperatures?

    Comment by David Ahlport — 22 Aug 2007 @ 8:31 PM

    Re # 584 I wrote, “I don’t see any evidence that climate science is driving public policy in the U.S. (at least not on the federal level).”

    I just saw in yesterday’s (Aug. 21) newspaper that the U.S. Congress is proposing to spend $6.7 billion in the next fiscal year to combat global warming, an increase of nearly one-third from this year. Bills moving through Congress would increase funding to reduce GHG emissions and oil dependency, and promote the use of geothermal and other renewable energy sources. So, I guess one could argue that AGW concerns are driving public policy to some degree. On the other hand, in a total budget of about $3 trillion (for 2007), $6.7 billion seems pretty trivial, esp. compared to the defense budget of nearly $700 billion (including the so-called War on Terror[ism]).

    Comment by Chuck Booth — 22 Aug 2007 @ 10:38 PM

    Steven Mosher and the other climate audit folks are really excited about all this. They should consider what it would be like looking at it from another, layman-type perspective (mine): a guy who prides himself on nitpicking every little bit of data he can get managed to find an error that has no significance (which he himself called a micro-error). He had done it before, finding the same type of discrepancy (with Mann’s paper) that did not affect the overall results (confirmed by other studies/lines of evidence). That is the fruit of intense (full-time?) effort spent trying to invalidate scientific work he dislikes, or believes is sloppy, or driven by ulterior motives, whatever. Countless hours of effort to find 2 errors that have no real significance for the results. Impressive.

    As far as I’m concerned, it could very well mean that the guys actually doing the research (those who have their names before the date in the parentheses) are careful enough about what is significant that all that the “auditors” can find is not. Just my opinion, of course.

    BTW, Tim’s other 19 points (permafrost, species shifts, satellite data, boreholes, etc.) have not been properly challenged yet, I think…

    Comment by Philippe Chantreau — 23 Aug 2007 @ 11:41 AM

  600. re: #590

    That’s a false analogy.

    The EKG software will have been subjected to independent Verification, Validation, Qualification, and Certification prior to release for production use in its intended areas of application. Additionally, all users of the software will be trained in its correct applications and in understanding the results from the applications. Finally, I will say that the software will very likely have been developed under approved Software Quality Assurance procedures and processes and is maintained under additional SQA procedures and processes.

    [edit]

    Comment by Dan Hughes — 23 Aug 2007 @ 2:23 PM

  601. Anyone who reads comp.risks knows the problems.
    http://catless.ncl.ac.uk/Risks/24.80.html

    Comment by Hank Roberts — 23 Aug 2007 @ 3:59 PM

  602. Hank Roberts (#601) wrote:

    Anyone who reads comp.risks knows the problems.
    http://catless.ncl.ac.uk/Risks/24.80.html

    True, but you could ask someone to forward you an email as well. If you habitually underestimate the intelligence of those you are dealing with, you might even forget to reformat it. In this respect, comparable to a certain profile…

    Comment by Timothy Chase — 23 Aug 2007 @ 5:17 PM

  603. re: #600
    All application areas that I’ve ever dealt with have their own tradeoffs between development and QA, i.e., how much money and schedule time one will spend on the latter. Research software is different from production software in its tradeoffs as well, in any area.

    Medical software has no bugs?
    Be serious.
    [I used to work with builders of medical systems like CAT & MRI scanners. Those people were very careful, but...]

    Comment by John Mashey — 23 Aug 2007 @ 6:46 PM

  604. Hank Roberts (#36) wrote:

    Re Peter Ward, after reading his book, I don’t see why you say “If he were right, basically there wouldn’t be anything we could do about it at this point anyway.”

    You are right. I had misread the following interview. Fortunately I was able to find it again.

    Peter Ward
    The scientist on climate change, mass extinctions, and other crazy global-warming consequences
    07-12-07
    http://www.lacitybeat.com/article.php?id=5816&IssueNum=214

    Second paragraph states:

    More ominously, Ward argues, a universally sweltering climate kick started a feedback cycle, disrupting the delicate balance that keeps our planet breathing. To reach a carbon-saturation of 800 to 1,000 parts per million – a figure currently posited by climate modelers – could be to precipitate a level of destruction not seen for millions of years. And it won’t take millions of years to get there. “The first big mass mortalities of humans will likely start around 2050,” Ward says. “And by 2100, this will just be an unrecognizable globe to us.”

    1000 ppm – we certainly won’t reach that by 2050.

    I have an interest in him in part because of my interest in the H2S scenario. When the paragraph puts that together with “big mass mortalities” around 2050, it makes it sound like this is the beginning of that scenario. But it can’t happen that early. You need roughly 1000 ppm, and even then it could take a while for that to become a factor. From what I understand, with the strong feedbacks from the carbon cycle under BAU, we should be at 720 to 1030 ppm by 2100.

    Of course this was before we discovered that the rate of emissions had actually doubled since the 1990s, so where are we headed if this is the new BAU? The more forcing, the greater the feedback. But there he is probably thinking famine. However, it is argued that climate change is already a significant factor in some wars and famines.

    But when I think of 2040, I am thinking that we really can’t do much to affect how bad things will be at that point. It’s already been decided, more or less. What we do now will determine where we go from there: does it get a little worse or a lot worse?

    Anyway, now I will probably buy the book – as soon as I am able to afford it. In any case, I should be going to bed.

    Comment by Timothy Chase — 24 Aug 2007 @ 1:53 AM

  605. Re. Ron Taylor (#511), Timothy Chase (#529) and others:

    The retreat of many mountain glaciers in Asia is a serious issue with respect to water resources for tens of millions of people (many people, indeed). But I think it is an exaggeration (perhaps an inadvertent one) to say that it is a serious issue for hundreds of millions of people.

    IPCC AR4 WG1 SPM says
    (under “Fresh water resources and their management” of
    “C. Current knowledge about future impacts”)
    | In the course of the century, water supplies stored in glaciers
    | and snow cover are projected to decline, reducing water
    | availability in regions supplied by meltwater from major mountain
    | ranges, where more than one-sixth of the world population
    | currently lives.
    Note that they say “glaciers and snow cover”, not just glaciers.

    Similarly, Stern Review says in boldface
    (Section 3.2 “Water”, p. 76 of Cambridge U.P. edition)
    | Melting glaciers and loss of mountain snow will increase flood
    | risk during wet season and threaten dry-season water supplies
    | to one-sixth of the world’s population (…).
    Here also “mountain snow” is mentioned, though the text that follows is sometimes confusing (e.g. it says that 250 million people in western China depend on glacier meltwater).

    A scientific review article cited by both IPCC and Stern is
    T.P. Barnett, J.C. Adam and D.P. Lettenmaier, 2005:
    Potential impacts of a warming climate on water availability in snow-dominated regions.
    Nature, 438, 303 – 309.
    As the title suggests, it mainly deals with the projected decline of snow cover rather than of glaciers, and it does say that “approximately one-sixth of the world’s population lives within this snowmelt-dominated, low-reservoir-storage domain”. I think that the phrase “one-sixth of world population” in IPCC and Stern reports should be interpreted in this context.

    Another estimate that seems to have become popular through the Stern Review is 500 million in South Asia (the Himalaya & Hindu-Kush region) and 250 million in China. I subjectively think that the Chinese part is reasonable if it is the number of people who depend on either glacier melt or snow melt (mostly the latter), but the South Asian part is still an exaggeration even with this interpretation. Maybe the text just says that 500 million people live in the Ganges river basin, whether or not they depend on melt water. But if so, inserting such a piece of information in this context is very misleading. More likely, it seems that the fact that the headwaters of many large rivers such as the Ganges are in the Himalayas makes people (including the writers of review reports) feel as if all the water in these rivers comes from the Himalayas. Actually some does, but some does not.

    Comment by Kooiti Masuda — 25 Aug 2007 @ 3:10 AM

  606. re: # 603

    I did not say, “Medical software has no bugs.”

    Comment by Dan Hughes — 25 Aug 2007 @ 10:49 AM

  607. Re: my comment (#605)

    Excuse me, I made a trivial mistake.
    > IPCC AR4 WG1 SPM says
    It is IPCC AR4 WG2 SPM, not WG1.

    Comment by Kooiti Masuda — 25 Aug 2007 @ 11:57 AM

  608. Peter Ward is, for this century, talking about the more immediate issues. Like:

    “Global wheat stockpiles will slip to their lowest levels in 26 years ….
    … Canadian officials said the country expected its harvest to be slashed by a fifth as a result of drought.
    … Australia – the world’s third-largest wheat exporter and a key supplier to Asian regions and South America – has also warned harvests may be reduced by warmer-than-expected temperatures experienced in the spring.
    Crops in the Black Sea area of Europe, however, have been ruined by bad weather
    … Chinese production is expected to fall by 10% as a result of both flooding and droughts.”
    http://news.bbc.co.uk/2/hi/business/6962211.stm

    Comment by Hank Roberts — 25 Aug 2007 @ 12:13 PM

  609. Two thoughts:
    1) A new way septics self-identify: continuing to claim GISS made a programming “Y2K” error.

    2) News about how adding a new and more accurate set of instruments to an existing data set changes the information (slightly) — the ARGO ocean temperature/salinity data coming in.
    http://www.agu.org/pubs/crossref/2007/2007GL030452.shtml

    Comment by Hank Roberts — 25 Aug 2007 @ 3:51 PM

    Re 605 by Kooiti Masuda – Thank you for these words of caution. However, this recent article seems to indicate serious concern in China.

    http://www.commondreams.org/headlines06/0507-05.htm

    And this would indicate that the concern is shared in India.

    http://www.commondreams.org/headlines06/0507-05.htm

    Comment by Ron Taylor — 25 Aug 2007 @ 4:27 PM

  611. Sorry – the second link should have been

    http://www.msnbc.msn.com/id/16313866/

    Comment by Ron Taylor — 25 Aug 2007 @ 4:30 PM

    Interesting article today also on China’s dilemma. That country’s situation by itself is a quasi-experiment on the problem of the cost of prosperity.

    Comment by Philippe Chantreau — 25 Aug 2007 @ 8:35 PM

    Sorry, forgot to mention the article is in the NYT.
    http://www.nytimes.com/2007/08/26/world/asia/26china.html

    Comment by Philippe Chantreau — 25 Aug 2007 @ 8:39 PM

    In the time spent arguing about this, the code could have been consolidated and re-written 28.53852239181 times. Now, almost 3 weeks later, we’re back where we started!

    Comment by Raplh Smythe — 29 Aug 2007 @ 6:20 PM

  615. Can anyone here explain to me how Hansen’s algorithm for combining different data versions works – and thereby show me that his verbal descriptions are (a) accurate (b) sufficient. The problem is described at http://www.climateaudit.org/?p=2018 and some preceding posts.

    In the cases of Praha-Libus, Gassim and Joenssu – to pick three examples – there are two station versions for each station. In each case, during the period of overlap, the values in each version are identical. However, in each case one of the versions has one value missing. As a result, Hansen re-states the values for the other series in the above cases by -0.1, 0.1 and 0.2 deg C. If anyone can explain how these values are calculated based on published literature (or otherwise), I’d much appreciate it.

    Comment by Steve McIntyre — 2 Sep 2007 @ 3:06 PM
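
    This is not an answer to the question above, and it is not a reconstruction of the GISTEMP combination step; it is only a sketch of the generic "offset over the overlap, then merge" way of combining two versions of a station record, to make the kind of calculation under discussion concrete. With identical overlapping values, this naive version gives a zero offset, which is exactly why the 0.1–0.2 deg C shifts reported above call for an explanation. All numbers are invented:

        import numpy as np

        def combine_versions(primary, secondary):
            """Offset `secondary` by its mean difference from `primary` over the
            overlap, then average where both report a value. Inputs are
            {year: annual value} dicts; missing years are simply absent."""
            overlap = sorted(set(primary) & set(secondary))
            offset = float(np.mean([primary[y] - secondary[y] for y in overlap]))
            merged = {}
            for year in sorted(set(primary) | set(secondary)):
                vals = []
                if year in primary:
                    vals.append(primary[year])
                if year in secondary:
                    vals.append(secondary[year] + offset)
                merged[year] = float(np.mean(vals))
            return merged, offset

        # Two versions identical over their overlap, one with a missing year,
        # mirroring the situation described above; the naive offset here is zero.
        v1 = {1951: 10.2, 1952: 10.5, 1953: 9.8, 1954: 10.9}
        v2 = {1952: 10.5, 1954: 10.9, 1955: 10.1}     # 1953 missing
        merged, offset = combine_versions(v1, v2)
        print("offset applied:", offset)              # -> 0.0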

  616. Re #615: Sorry, Lex, you’ve got the data and now it’s time for you to come up with your own numbers. If you get a substantially different result from the professionals, I’m sure there will be plenty of volunteers to go over your work product and figure out where you went wrong. Only after that would there be any value in doing the type of comparison you propose.

    Comment by Steve Bloom — 3 Sep 2007 @ 1:38 AM

    I’m hosting a wiki with statistical analysis capability that should be useful. Here is an editable application that illustrates the concept: GHCN_Duplicate_time_series_analysis. That page serves up the GHCN data for a station, highlighting records with any variance.

    If anyone would point me in the right direction, I’ll try to implement formulae for combining multiple series and share it. Please feel free to try yourself by pressing “edit”, and to contact me if you’d like help.

    Regards, Mike

    Comment by Michael Cassin — 4 Sep 2007 @ 2:30 AM

    So after about a month of debate I tried to create my own temperature estimate. I used 20 stations, one month (January), and avoided missing data as much as possible; I filled in data for one station for one year (1991). All are rural, zero-light stations. These are my results compared to the NCDC figures for the USA lower 48:

    [edited...we're not going to go down the slippery slope of allowing people to post long strings of data in the comments. feel free to create an external URL where the data are available, then just provide that external link in your comment]

    Comment by jacob l — 6 Sep 2007 @ 2:23 PM

    I am simply reporting an advert on page A16 of today’s TNYT: the advert states that “global warming is not a crisis” and asks one to call Al Gore to ask him to debate a Mr. Chris Horner. The supporting ‘evidence’ given includes “NASA says 1934 was a warmest year on record in the U.S., not 1998.”

    The sponsor appears to be ‘The Heartland Institute’ and has a web site at

    http://www.heartland.org

    I thought you might care to know about this…

    Comment by David B. Benson — 6 Sep 2007 @ 4:10 PM

  620. For those still interested, Jim Hansen has made the source code for the temperature analysis available.
    http://data.giss.nasa.gov/gistemp/sources/

    Comment by Rob Jacob — 7 Sep 2007 @ 11:35 PM

