
CRU Hack: More context

Filed under: — gavin @ 2 December 2009

Continuation of the older threads. Please scan those (even briefly) to see whether your point has already been dealt with. Let me know if there is something worth pulling from the comments to the main post.

In the meantime, read about why peer-review is a necessary but not sufficient condition for science to be worth looking at. Also, before you conclude that the emails have any impact on the science, read about the six easy steps that mean that CO2 (and the other greenhouse gases) are indeed likely to be a problem, and think specifically how anything in the emails affect them.

Update: The piece by Peter Kelemen at Columbia in Popular Mechanics is quite sensible, even if I don’t agree in all particulars.

Further update: Nature’s editorial.

Further, further update: Ben Santer’s mail (click on quoted text), the Mike Hulme op-ed, and Kevin Trenberth.

1,285 Responses to “CRU Hack: More context”

  1. 601
    floundericiousWA says:

    Indeed, whistleblowers are typically people who already have access to the information in question…that is “people sticking their necks out from within”

    The most charitable description for the hackers is “investigative journalist.” A journalist can be charged with numerous crimes in the course of acquiring and disseminating sensitive information, but the act of reporting the information and notifying the public is a protected act (absent a gag order).

    Journalism requires courage because you’re likely to end up with powerful people hating your guts and being sued by everyone under the sun.

    The fact is that these hackers illegally accessed data and there are numerous local, national, and international laws which put them firmly in the wrong.

  2. 602
    Timothy Chase says:

    Here are three more major figures in tobacco and AGW denial campaigns:

    Steven J. Milloy (Mr. Junk Science)
    http://www.sourcewatch.org/index.php?title=Steven_J._Milloy

    Thomas Gale Moore
    http://www.sourcewatch.org/index.php/Thomas_Gale_Moore

    Roger Bate
    http://www.sourcewatch.org/index.php?title=Roger_Bate

    … and a few more organizations involved in both denial campaigns:

    American Enterprise Institute
    http://www.sourcewatch.org/index.php?title=American_Enterprise_Institute

    Americans for Prosperity
    http://www.sourcewatch.org/index.php?title=Americans_for_Prosperity

    Burson-Marsteller (PR firm)
    http://www.sourcewatch.org/index.php?title=Burson-Marsteller

    Cato Institute
    http://www.sourcewatch.org/index.php?title=Cato_Institute

    DCI Group (PR firm)
    http://www.sourcewatch.org/index.php?title=DCI_Group

    Frontiers of Freedom
    http://www.sourcewatch.org/index.php?title=Frontiers_of_Freedom

    Heritage Foundation
    http://www.sourcewatch.org/index.php?title=Heritage_Foundation

    Independent Institute
    http://www.sourcewatch.org/index.php?title=The_Independent_Institute

    International Policy Network
    http://www.sourcewatch.org/index.php?title=International_Policy_Network

    John Locke Foundation
    http://www.sourcewatch.org/index.php/John_Locke_Foundation

    Junk Science (Steven J. Milloy)
    http://www.sourcewatch.org/index.php?title=JunkScience.com

    The Advancement of Sound Science Coalition (TASSC)
    http://www.sourcewatch.org/index.php?title=The_Advancement_of_Sound_Science_Coalition

    Incidentally, I recognize many of these names from the organizations in the Exxon disinformation network that has also been funded by the Scaife, Bradley, Koch and Coors (Castle Rock) foundations — to the tune of over $262,000,000 between 1985 and 2007. (Not all IRS 990s for 2008 are available yet.)

  3. 603
    Rob says:

    Gavin
    In the CRU (HadCRUT) reconstruction, GHCN data are used. Which series are used, GHCN-raw or GHCN-homogenized? If you don’t know, could you please ask Phil Jones or any contact you may have at CRU.

  4. 604
    ZZT says:

    Interestingly, there have been other recent times when groups of experts (e.g. the CIA) provided the necessary evidence/justification for political action (cf. the certainty over Iraq’s WMDs).

    It tends to be that, after the dust has settled, the experts aren’t as vehement.

    Be careful of politicians with an eye for a crisis!

  5. 605
    manacker says:

    Didactylos (576)

    Whistle-blowers get their insider info from whatever sources are available. Despite a lot of speculation out there, it is still unclear who got what information and how they got it, pending a full investigation.

    Whistle-blower protection laws exist in the UK and USA, and it remains to be seen whether or not they cover those who leaked the emails.

    The point is that the data were paid for by taxpayer funding, so the taxpayer has a right to see them if he so desires under the FOIA. To what extent that includes correspondence related to the data between scientists receiving taxpayer funding is a point that will need clarification.

    If data were withheld or destroyed this could represent an infraction of the FOIA, but this is all still pending investigation.

    I personally do not see that much will happen on the legal front, but it is too early to say what happened here and what the legal ramifications and consequences will be for both the whistle-blowers and the scientists involved.

    But those are just my thoughts, which BTW are just as valid as yours.

    Max

  6. 606
    Dan says:

    re: 575. Wow. Most of your gross, illogical assumptions, claims and implications are patently ridiculous, without a shred of objective evidence to support them. Science is about hypotheses, data collection, analysis, conclusions, peer review and further hypotheses. The emails have no bearing at all on the data collected or the evidence. You would be wise to stick to known scientific facts instead of wishful thinking that the science behind AGW is somehow flawed. Every major climate science society across the world (e.g., the AMS, AMOS, NSF, RMS, NOAA, the AGU) agrees on the role that AGW has played in warming global average temperatures since the 1970s. The idea that you, as a layman, somehow know something that literally thousands of peer-reviewed climate researchers do not is the height of scientific arrogance.

  7. 607
    Dendrite says:

    There has been a great deal of talk about trust (or lack of it) in scientists and others involved in this controversy. Given that many of us non-climate-scientists and non-Americans are not familiar with all the individuals making the running, it would be very helpful if someone could prepare a brief document or table showing which, if any, of the big players (on either side of the argument) have been involved in previous scientific controversies, such as:

    The health risks of inhaling asbestos fibres;

    The health risks associated with lead compounds in vehicle fuel;

    The health risks associated with smoking tobacco;

    The link between HIV and AIDS.

    If zero people fall into any of these categories, this should be a simple task.

  8. 608
    RaymondT says:

    Gavin, I recently purchased your fascinating book, “Climate Change: Picturing the Science” by Gavin Schmidt and Joshua Wolfe, which you edited and co-authored. In this book, you make a comparison between the development of the theory of continental drift and AGW theory. You wrote: “However, once it is clear that a new theory explains observations better than the previous theory and that its predictions have been validated, the scientific community will switch over very quickly. For continental drift, that point came in the 1960s, when unequivocal evidence of sea-floor spreading at mid-ocean ridges gave rise to the theory of plate tectonics, suddenly making sense of many different anomalies, including those highlighted by Wegener decades ago.”
    There is a fundamental difference between proving the continental drift theory and “proving” AGW theory. In the former case, continental drift was demonstrated purely by experimental (observational) means (for example, the observation of alternating patterns of magnetic polarity on both sides of rift faults). In contrast, AGW theory relies heavily on numerical modeling, since the effect of the CO2 forcing is determined indirectly by history-matching previous global temperature records. In the case of continental drift there is one dominant driver: the movement of the magma as it approaches and moves the earth’s crust. In the case of AGW theory there are several drivers. My question is then: what would be the equivalent, for AGW theory, of the “magnetic polarity reversal” observations that switched scientists’ thinking on continental drift?

  9. 609
    David B. Benson says:

    Those actually interested in recent global temperatures should follow the link in my comment #547.

  10. 610
    Ken W says:

    Re: 572
    Bruce, “sticks and stones … ” ;-)

    I’m glad to see that you apparently agree that humans are indeed sometimes responsible for doing bad things, and that therefore your original post (which implied that those who accept AGW haven’t outgrown some imagined childish tendency to blame things on themselves) was not really based on reality.

    Bruce (#499) wrote:
    “Do you remember as a kid when something went wrong and you were sure it was your fault, but couldn’t figure out what you had done to cause it? Like when parents get divorced and their children blame themselves because of what they (the children) had done?

    This is inherent in human beings, it is a mechanism we use to keep from doing things that will hurt us, and sometimes it goes awry. Are their people in the world that never outgrow this?”

  11. 611
    JLS says:

    Hank,

    On your #462 “JLS, that an old, old discussion going back to the Vietnam era, see e.g.”

    Yup. Pournelle himself admits to having been an unabashed promoter of SDI, which, given the immaturity of ABM systems then, was more of a competitive strategy to demoralize the Soviets than a 1980s program deliverable. Russel, Possony, Pournelle and others from think-tanks like RAND were devising such ways to beggar the Soviets by provoking them into disproportionately costly tech development and deployment races. AFAIK Asimov had less access to developments, both technical and political, and could not produce good critiques of SDI and other Cold War competitive strategies.

    Ironically, it turned out that “Nuclear Winter” was not well-modelled climate science either. The competitive-strategy use of the technology is better grounded as a Cold War-winning story (at least to those formerly of the Reagan administration), yet historians do not view it as the only, or even the main, factor either.

  12. 612
    Ken W says:

    Re: 575
    “4. …
    Response: An unsubstantiated claim, but it has nothing to do with Climategate in any case”

    So Dr. Weaver and University of Victoria spokeswoman Patty Pitts are just making up the break-ins?

    “6. …
    Response: Not “virtually identical”. The satellite (tropospheric) record shows a slower warming trend than the surface record, even though the GH theory indicates that it should be more rapid”

    RSS TLT 1979 – 2008 trend is warming of 0.153 K/decade
    GISS Surface Temperature 1979 – 2008 trend is warming 0.177 K/decade.

    Read the link below and then come back and explain how the datasets aren’t “virtually identical”:
    http://tamino.wordpress.com/2008/01/24/giss-ncdc-hadcru/
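
    For what it’s worth, decadal trends like the 0.153 and 0.177 K/decade figures above are ordinary least-squares slopes fitted to the temperature series. A minimal sketch with made-up anomaly numbers (not the actual RSS or GISS data):

```python
def decadal_trend(annual_anomalies):
    """Ordinary least-squares slope of annual anomalies, in K per decade."""
    n = len(annual_anomalies)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(annual_anomalies) / n
    covariance = sum((x - mean_x) * (y - mean_y)
                     for x, y in zip(xs, annual_anomalies))
    variance = sum((x - mean_x) ** 2 for x in xs)
    return 10.0 * covariance / variance  # per-year slope -> per-decade

# A hypothetical 30-year series warming steadily at 0.017 K/yr:
demo_series = [0.017 * year for year in range(30)]
print(round(decadal_trend(demo_series), 3))  # -> 0.17
```

    The published figures are of course computed from the real datasets and come with uncertainty estimates, but the underlying slope calculation is of this form.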

    “7. …
    Response: A matter of opinion. Is Steve McIntyre, for example, financially beholden to anyone (such as IPCC), or is he an independent operator (bring evidence for your response)? His qualifications as a statistician are unquestioned.”

    Given his 30-year career in the mining industry, one shouldn’t rule out that Mr. McIntyre may have a financial interest in the ongoing profitability of various mining and oil companies. And a BS in math hardly qualifies someone as a statistician of “unquestioned” qualifications. My own undergraduate degree in math didn’t even require an introductory statistics course (though I took one as an elective).

    “8. …
    Response: A matter of opinion. A group of independent auditors acting in the service of the public could scrutinize the science, in the interest of the taxpaying public, who paid for, and hence who owns the science.”

    This has already been done. Multiple times. Try reading any of the publications put out by the National Academies of Science (from any G8 country), the US Climate Change Program, the American Geophysical Union, or virtually any other scientific organization.

    http://dels.nas.edu/dels/rpt_briefs/climate_change_2008_final.pdf
    http://www.climatescience.gov/default.php
    http://www.agu.org/sci_pol/positions/climate_change2008.shtml

  13. 613
    Doug Bostrom says:

    Whoops, the limbo bar has been lowered again. Any contortionists care to try slithering under? How about Max? Max is double-jointed, maybe he can take a shot at it.

    http://www.guardian.co.uk/environment/2009/dec/08/met-office-warmest-decade

    http://www.nytimes.com/2009/12/09/science/earth/09climate.html?hp

  14. 614
    RaymondT says:

    To Philip Machanick: Thanks a lot for the links to the papers, especially the one by Foster et al. The paper argues against the claim by McLean et al. 2009 that the decadal and longer-term variation in large-scale tropospheric temperatures can be explained by a single factor, El Nino/ENSO. Even before reading this paper, my impression was that the multi-decadal oscillations were beyond the ENSO scale, as presented at the WCC3 conference in Geneva this year. I would be curious to see how McLean et al. will rebut the paper by Foster et al. My understanding from the presentation of Mojib Latif at the WCC3 was that ENSO is an initial-value problem, but that the multi-decadal oscillations are beyond the ENSO scale and are a mixture of initial-value and boundary-forcing problems. Could an ENSO oscillation have long-term effects, in terms of changing the global humidity content, which would last longer than the ENSO event itself? I understand that climatologists assume that relative humidity is constant globally.
    I really enjoy this website, which I recommend whenever I have an opportunity, especially when I blog with skeptics. It is important that skeptics (such as myself) discuss their understanding of climate with climatologists.

  15. 615
    Geoff Wexler says:

    [Originally intended for Eric’s thread on balance which is now closed]

    The point of balance or neutrality appears to have suddenly shifted. In the UK, almost every single interview about the Copenhagen conference includes a question about the email leaks.
    ——————————————–
    Evolution, Neutrality and the greenhouse gas theory of global warming.

    Yesterday archaeological reporter/actor Tony Robinson launched Channel 4’s (UK) new series “Man on Earth”.

    This is what he has written about the series. “Climate change is part of our heritage, and throughout various moments of history, we’ve either adapted to the changing climate and have ultimately been successful, or we have failed to adapt, and ignored it, in which case disaster has arisen. We can choose either to deal with it or not. We have in the past, and it’s there to see.”

    That appears to emphasise adaptation over mitigation. The first programme dealt with past episodes of global warming and cooling and involved interviewing paleoclimatologists and visits to museums containing human skulls. So why am I still grumbling, at least about the first programme?

    Because these huge natural climate swings were dealt with in some detail without once mentioning the greenhouse gas contribution. This is so often how this topic is taught. There was nothing in the last 50 minutes (I missed the beginning) which might have displeased Fred Singer, who propagates the “climate is always changing” mantra. It is just as if some editor had said ‘we’ll leave out greenhouse gases because they are controversial’. This was combined with another fault, i.e. that the whole story was presented as if it were 100% certain. Just history. When will greenhouse gas mechanisms take their rightful place in the teaching of science, instead of being segregated as part of green environmentalism and entertaining controversy? Perhaps later in the series?

  16. 616
    Louis6439 says:

    Manacker: You are dead right: “Complete transparency and openness to independent audit in science are paramount.” Anyone who can’t (or refuses to) see this is not a scientist, no matter how many letters follow their name.

  17. 617
    richard says:

    I am totally depressed! I just read the comments over at Andy’s Dot Earth site, and you’d think more people would get it. They dump all OVER the Copenhagen meeting without considering the poor and the homeless. Forget the climate science, people – what about helping the poor of the world?? Even if the science is wrong, can’t we find the heart to help another human being out? It’s depressing. Or disgusting. Are there no morals anymore?

  18. 618
    Radge Havers says:

    SA on manacker

    “That means you are repeatedly, deliberately, knowingly lying — and, you are lying to people who know you are lying, and you know that they know you are lying. Which seems a strangely futile thing to do.”

    Generally, it’s an attempt at asserting power and a way of showing contempt. It’s probably futile to make an appeal to constructive behavior in this case, but good luck with that anyway.

  19. 619
    John MacQueen says:

    Re: 576 Didactylos

    There is no evidence the data was stolen.

    All the evidence IMHO points to an internal leak likely by a computer systems administrator.

    The file seems to be a dump for a possible FOI act disclosure that was denied.

  20. 620
    jerryg says:

    Max writes:
    “1. a crime was committed by the hackers;
    Response: This depends on whether or not the whistle-blowers are protected under the UK (or US) whistle-blower protection law

    2. the stolen emails were released in a way that indicates the attempt to disrupt the Copenhagen conference rather than permit any meaningful investigation of their contents;
    Response: A matter of opinion. It could have been simply to expose the weakness of the AGW premise”

    I thought a whistle-blower would release everything. Instead, we have a hacker who “manipulated” the emails and code to create the impression he wanted. Why would the hacker “hide” the rest of the data he has? Could it be that he “deleted” the emails and code that didn’t fit his agenda?

  21. 621

    manacker #575:

    Is Steve McIntyre, for example, financially beholden to anyone (such as IPCC), or is he an independent operator (bring evidence for your response)? His qualifications as a statistician are unquestioned.

    Here is his personal bio off his own web site. Where do you see “statistician” in there?

    In any case, who are you alleging is financially beholden to the IPCC? Here’s their budget. Most of the money is for meetings and administration.

  22. 622
    Lee A. Arnold says:

    Max #575: “3. the fossil fuel industry is lobbying Washington at roughly fourteen to one compared to environmentalists;
Response: This claim is blatantly false – the taxpayer funding for climate research in support of the AGW premise as promulgated by IPCC far exceeds any funding by the fossil fuel industry for research to prove the contrary.”
    Incorrect. U.S. government subsidies (direct and indirect business subsidies) to the fossil fuels industry exceed $20 billion annually, and that doesn’t include national security spending (some of which is fossil-fuel related.) Since all money is fungible, spending on climate research pales in comparison to what the U.S. taxpayer contributes to the fossil fuel industry’s research and public-relations budgets.
    But it is true that the fossil fuel industry would not have to spend much to get all the data and run the models or run their own models, in order to disprove climate science. So no doubt they have done so, and couldn’t disprove it. Which is why they have aimed at shooting the messenger (i.e. the scientists.)

  23. 623
    kevin king says:

    Just a thought on the code leaked from the CRU.

    In commercial software companies, the journey from development to production is a long one. But one thing is a given: the devs, integrators and production support operators all abide by certain well-documented standards and methodologies that ensure what comes out at the other end doesn’t resemble in any shape or form that which was left for all the world and his dog to see on that server in Tomsk. This is probably the software industry’s equivalent of the scientific method. Ultimately: rubbish in, rubbish out.

    Standards include very basic things:

    Unit testing and reproducible results: if I check something in at 18.00, then I expect an email in my mailbox the next day when some or any of the test cases fail. I also expect to see a pretty clear exception message telling me why it failed.

    Code reviews.

    Self-documenting code.

    Continuous builds (see above).

    Logging standards.

    EXCEPTION HANDLING, THE MOST IMPORTANT PART OF ANY CODE: IF SOMETHING FAILS, I HAVE TO KNOW ABOUT IT.

    Portable code, i.e. no hard-coded references to someone’s home directory.

    Use of a version control system to store source code, with tagging and branching of source code, etc. (i.e. release management). CVS to store data and metadata so I know what the hell the data means.

    A standard build and packaging tool.

    Etc., etc.

    What is shocking about the CRU revelations is the TOTAL lack of any of these things.

    When I was new to development, I once checked some code into CVS without adding any comments. About half an hour later the lead architect took me aside and told me never to do it again. So I never did. I know how important it is to track changes to software and to have an audit trail. Of course the code here is not being deployed to a commercial site, but the same rules must apply. The consequence of not applying the above rules is self-evident: if they aren’t applied (and they were not, and are not, being applied at the CRU), it’s clear that only rubbish can come out.
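
    To illustrate two of the standards listed above (loud exception handling and no hard-coded paths), here is a minimal Python sketch; the file format and function name are hypothetical and not taken from the CRU code:

```python
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("station_loader")

def load_station_series(path):
    """Read one 'year value' pair per line into a dict, failing loudly."""
    series = {}
    try:
        for line in Path(path).read_text().splitlines():
            if not line.strip():
                continue  # tolerate blank lines only
            year, value = line.split()
            series[int(year)] = float(value)
    except (OSError, ValueError) as exc:
        # Log a clear message and re-raise; never swallow the failure.
        log.error("could not load %s: %s", path, exc)
        raise
    return series
```

    The path arrives as a parameter rather than as a hard-coded home directory, and an unreadable file or a malformed line raises immediately instead of quietly producing rubbish.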

  24. 624
    Chris Hirst says:

    It comes down to trust. Scientific experts produce papers that lay people will never read and often cannot understand. But can we trust them to do a good job and make impartial conclusions? The CRU emails would suggest that increasingly we can’t.

    I generally agreed with the AGW thesis and was in favour of carbon reductions, but now I’m not sure. Transparency is essential if there is to be any hope of getting something passed in Congress. There’s simply no way it can happen now as long as the research is seen as tainted. And it IS tainted, no matter how much AGWers think it’s an outlier or doesn’t matter.

    Dude, it matters. I don’t trust these guys right now. Get it right. If you can’t get anything passed, you will have no one to blame but yourselves. Science doesn’t do arrogance well.

    Thanks for the blog Gavin. Lots of good info here, but I would seriously lose the sarcasm. It simply doesn’t work well any more.

  25. 625
    David B. Benson says:

    Justin Waits (599) & RaymondT (608) — Please read “The Discovery of Global Warming” by Spencer Weart:
    http://www.aip.org/history/climate/index.html
    after reading Andy Revkin’s review:
    http://query.nytimes.com/gst/fullpage.html?res=9F04E7DF153DF936A35753C1A9659C8B63

  26. 626
    David B. Benson says:

    RaymondT (614) — I’m an amateur here, but for reasons given in Ray Pierrehumbert’s
    http://geosci.uchicago.edu/~rtp1/ClimateBook/ClimateBook.html
    and some of his papers (available from his web site), El Nino is rather unlikely to have effects which last long beyond the end of the episode.

    I’m still attempting to understand El Nino, and one of the better web resources seems to be
    http://oceanworld.tamu.edu/resources/ocng_textbook/chapter14/chapter14_02.htm
    (Kindly let us know if you find a still better one!)

  27. 627
    Rod B says:

    Timothy Chase (602), your character assassination casts a wide net. I’m impressed.

  28. 628
    grumpy software architect says:

    kevin king #623: why do you think that CRU code is part of a software product? The same rules do not apply to single-use software at all. Your examples are all down at the coding level in a product development process, and you left out the stuff about quality management: applying different approaches to defect discovery, managing defect injection processes, balancing software lifetime against development cost. Continuous builds and standardised logging, FFS… the point is how defect injection and removal work over time, not whether the code is commented (regardless of your personal scars).

    If we were to treat the statistical scripting that produces these numbers in the gold-plated, cost-per-LOC way that a lot of these coders want, then paradoxically problems would NEVER get fixed: there would be strong incentives NOT to go back and mess with the algorithms, and incentives to say “well, it’s just too expensive to fix that part of the code, we will work around it in the new development”; if you haven’t heard that, you haven’t worked on a seriously large and complex software product that costs real money to build. I’d rather have an environment where people are inclined to mess with the code and compete with each other in producing the right numbers, not one where the algorithms, and the judgement calls in their selection, are hidden behind layers of code review, commenting policies and a “standard build and packaging tool”, all stored on the one true Subversion server.

    People keep pointing to current development processes and a number have referred to OSS development, but many seem to have forgotten the scandalous business of the changes to the Linux memory manager years ago, where a significant change was made to the memory management behaviour in a maintenance release. Yes, the code was reviewed to death and it worked, but it still hurt a bunch of people because the higher level design decisions were not reviewed by, and could not be reviewed by, the code review processes that some people seem to want to apply to the CRU programs.

    Another, much older example comes from a very large and significant software product, an OS: in the early 1980s that code had a significant bug whereby users could be locked out of the OS if they were unexpectedly disconnected. The bug was still there in the mid 90s when I encountered it, and I happened to know someone who years earlier had worked on finding and fixing it. They knew roughly where the problem was, but the risk to the system of rewriting that code was too great (the Linux MM problem from early this decade is the example of what happens when you take the risk and it does not work out).

  29. 629
    Michael Dodge Thomas says:

    The latest from Willis Eschenbach,

    http://wattsupwiththat.com/2009/12/08/the-smoking-gun-at-darwin-zero/

    and published today (12/8), has been sent my way by skeptic correspondents several times this evening, and IMO needs to be addressed on a priority basis.

  30. 630
    Timothy Chase says:

    kevin king wrote in 623:

    In commercial software companies the journey from development to production is a long one.

    Agreed.

    kevin king wrote in 623:

    But one thing is a given. The devs, integrators and production support operators all abide by certain well documented standards and methodologies that ensures what comes out at the other end doesn’t resemble in any shape or form that which was left for all the world and his dog to see on that server in Tomsk.

    Left there by whom? I don’t think the author of that intended it either for production or publication.

    Additionally, it sounds like you are assuming the standards applicable to a software company, not some in-house project, and present-day standards, as opposed to something that may be a decade old or more.

    kevin king wrote in 623:

    Unit testing and reproducible results.

    At my former company (prior to its getting bought by something larger than Microsoft) we had unit testing, done by the developer. We had smoke testing, done by a tester assigned to the team. We had regression testing to ensure (as best as possible) that none of the earlier functionality was broken by the fixes.

    This was automated, run by someone who wrote the scripts for such testing. At least until we got rid of the local testers and shipped work overseas. Then we went over a year without any regression testing. No one apparently knew how to run the software that ran the scripts.

    kevin king wrote in 623:

    If I check something at 18.00 then I expect an email in my mailbox the next day when some or any of the test cases fail. I also expect to see a pretty clear exception message telling me why it failed.

    Preferably.

    kevin king wrote in 623:

    Code reviews.

    We talked about it. Oddly enough, that was just when things were getting especially desperate: we had already gone through several rounds of layoffs, and everyone was doing the work of three.

    kevin king wrote in 623:

    Self-documenting code.

    Sounds great. It also sounds rather recent. With Classic VB, for example (which came out during the 1990s and ceased being supported by Microsoft in April 2008), there was no self-documenting code; you had inline comments. The only self-documenting code I have seen so far is documented in XML (where objects, properties and methods are automatically documented), and at that point you are talking object-oriented.

    That is fairly recent. VB5 and VB6 were pseudo-object-oriented and compiled, yet they were not self-documenting. And VB6 came out in 1998.

    What about scripting languages? Fortran (which is common in climatology) is compiled, and in its most recent incarnations is likely self-documenting, but when was that introduced? IDL — which came out in 1977 and is apparently the language this code was written in — is described as only partially self-documenting even in its most recent incarnations.

    kevin king wrote in 623:

    Continuous builds – see above

    Yep – definitely a software development company. We were doing software for major companies with massive networks – the software was used for monitoring those networks in order to more efficiently manage the hardware they had rather than buy more hardware – and we didn’t usually do a build more than once a week.

    The client software we developed (part of a three-tier solution working against an Oracle database) was large enough that the project had ceased to compile in Visual Studio even when it was only half the size. We had a construction worker who taught himself assembly, then C, then C++, and then VB. He wrote a program such that it could compile individual projects (custom components, including controls and DLLs) without keeping open the projects that they depended upon as part of the project group.

    That was in-house, and it was run manually, rarely more than once a week. (If we ran it more than once a week, chances were someone had screwed up.) And it (the first half of the name was “Build” and the other half referred to a lady dog — the idea of the coder who wrote it) didn’t get put into the code repository until 2005. That was our in-house software.

    The rest of your “requirements” likewise sound like a professional software development company, not in-house software development, and not like you are used to reworking legacy code.

    Ideally things should meet these standards, but inhouse software and legacy software are unlikely to meet such standards. And even today some of what you demand simply isn’t available in languages.

    kevin king wrote in 623:

    Of course the code here is not being deployed to a commercial site, but the same rules must apply.

    These are modern standards, largely applied by modern, professional software companies, that in many cases simply could not have been applied a decade or more ago — and some of which simply can’t be applied to some languages even today.

    The code was obviously legacy. The language it was written in was from the 1970s and it may have been almost as old. Even in a professional software development company intent upon selling software for use by large clients many of those standards would not have been standard even in the first few years of this millennium.

    And there is every reason to believe that what came out of that software never saw the light of publication. You are trying to apply standards to it which could very well have not even been conceived of at the time that the code was originally written.

    I would have to say that either you are unfamiliar with the history of software development or are being deliberately anachronistic in your application of the current standards of software development to software developed in an earlier time.

  31. 631
    John E. Pearson says:

    Re 623: I disagree with your entire post. I essentially never do any of that stuff and I assure you that I don’t produce rubbish. If I did all the standards-type stuff that you insist is necessary, I’d never get anything done. I haven’t looked at the CRU code and have no idea how big it is. Research codes are notorious for being poorly documented. Whether exception handling is as important as you claim depends on how the code is used.

  32. 632
    manacker says:

    Lee A. Arnold

    You wrote:

    “U.S. government subsidies (direct and indirect business subsidies) to the fossil fuels industry exceed $20 billion annually, and that doesn’t include national security spending (some of which is fossil-fuel related.)”

    You are comparing apples with oranges when you talk about “subsidies” to the energy companies. How about the corporate taxes they pay? The net flow of money is from the fossil fuel industry.

    But that is irrelevant. What we are talking about here is government taxpayer funding of climate research to support the AGW premise versus funding from energy companies to prove the opposite.

    And here the pro-AGW government funding far outweighs what energy companies are spending to counter AGW.

    In other words, my statement is correct that “the taxpayer funding for climate research in support of the AGW premise as promulgated by IPCC far exceeds any funding by the fossil fuel industry for research to prove the contrary.”

    Max

    [Response: Wrong, wrong, wrong and wrong. Research from government sources is not designed to ‘support the AGW premise’ – the vast majority supports the satellite programs, which I’m pretty sure are not deciding which photon they register based on whether it supports warming or cooling. And the research grants that are funded at NSF can be looked up – and although I’ve challenged people repeatedly, no-one has ever found any evidence of claims that they are only funding research to support AGW. And finally, the research that oil companies pay for (and I have a number of colleagues who’ve received research grants on paleo-climate from the likes of Shell etc.) is perfectly fine in general and not tied to any political outcome. You are confusing the millions the oil companies have spent/spend on spreading disinformation with research. CFACT, CEI, FoS, Heartland etc. do not do scientific research. – gavin]

  33. 633
    John Mashey says:

    Kevin King:
    Sigh.
    How long have you been in this business?

    I am afraid you simply do not understand the difference between engineering production software that must exist in multiple releases, have test harnesses, manage databases, etc., with software engineers, QA/release teams, and so on, and the code researchers write for themselves to analyze data, sometimes for one-shot papers.

    OF COURSE one does all that stuff for production-grade code. GISS does a pretty good job this way when appropriate. It’s easier now than it used to be.

  34. 634
    Jason says:

    619:

    The file seems to be a dump for a possible FOI act disclosure that was denied.

    I would be interested in seeing the wording of an FOIA request that could manage to gather together that collection of documents.

    It would have to look something like “Everything in your home directory that could be even remotely interesting, plus everything else.”

    I’m not an expert on the relevant legislation but I would have expected genuine FOIA requests to be far more specific than anything that could have resulted in that collection.

    I’m a little bit puzzled by this line of reasoning anyway, though — if it was gathered in response to an FOIA request that was subsequently and legally denied, doesn’t that mean that anybody who releases that data anyway is not a whistleblower but rather automatically a lawbreaker?

    And why hasn’t anyone come forward and said “Hey, that looks like it was collected in response to my FOIA request — look, here is what I asked for: …”?

    Simply saying it was stolen by a hacker seems far more parsimonious.

  35. 635
    Jason says:

    Kevin King wrote:

    What is shocking about the CRU revelations is the TOTAL lack of any of these things.

    Why?

    We’re talking about straightforward implementations of mathematical tools that are run to generate data. Not enormous idiot-proof shrink-wrapped products used by millions of people, or flight control systems.

    My company creates scientific software that sells for the price of a good car. We put a lot of effort in to making that software high performance, user-friendly, idiot proof, and bug free, using all the normal tools of the commercial software design trade.

    However, I also often write little one-off programs to test a theory and refine an algorithm. Once I have it working, then I go to all the additional trouble of making the code meet our standards and incorporating it into the normal code base. This easily takes at least ten times as much effort as actually writing the test code itself did. It is therefore not surprising that scientists on a shoe-string budget wanting to write their own software to test their theories fail to adhere to software design Best Practices. The software is not the product; the data is.

    It isn’t even a disaster that this means their little test programs (and they are little) may have bugs in them affecting their results, because the ultimate test is not thousands of sceptics poring over their source code, or feeding in physically impossible values to test exception handling, but rather other researchers writing completely independent code, often using completely independent data, and seeing if they can come up with the same results.

    Nobody is saying policy should be decided on the back of what CRU’s in-house software produces. If it didn’t basically agree with all the other lines of evidence then it would be ridiculed and the authors would be scratching their heads trying to see what they did wrong.

    Frankly, I’ve seen a lot worse. They even have variable and function names more than a single letter long! And comments referencing published articles! If you really think that code is all that bad then you seriously need to get out more.

  36. 636
    Jason says:

    Lee A. Arnold:

    But it is true that the fossil fuel industry would not have to spend much to get all the data and run the models or run their own models, in order to disprove climate science. So no doubt they have done so, and couldn’t disprove it. Which is why they have aimed at shooting the messenger (i.e. the scientists.)

    This is, indeed, the ultimate question that I’ve never even seen sceptics ask themselves, let alone attempt to answer:

    If the evidence did not support AGW, and the scientists were fudging the figures, why would Exxon et al continue to fund “think tanks” and “policy institutes” rather than simply spending that money to acquire and analyse the raw data and release their findings along with the source code to prove once and for all that AGW was a scam?

  37. 637
    Geoff Wexler says:

    Re #623
    Software people and science people.
    Much of recent computational physics since the second world war grew up before it was called software and certainly before it was commercial. It was often written informally by scientists who had not taken computer science degrees. That did not stop it from being hugely successful. Much theoretical and applied physics ought to have collapsed by now if your criticism had been seriously significant. Lots of invalid PhD theses would have had to be junked. The reason why this did not happen is that the more important calculations ended by being checked by others who wrote their own programs. That is not very different from the way experimental results are checked.

    The record of commercial software has not been so wonderful. What other industry managed to sell so many products for so long to so many people which broke down so many times? And who is responsible for the lax security at the UEA server, software experts or scientists? The same question applies to the hackers themselves.

  38. 638
    Richard says:

    On providing context to Phil Jones’s statement “The two MMs have been after the CRU station data for years. If they ever hear there is a Freedom of Information Act now in the U.K., I think I’ll delete the file rather than send to anyone…,” Gavin responded with “The FOI requests started in 2007 and they were turned down then…not because there is anything wrong or embarrassing about the data, but because some of it is restricted by agreements with third parties.” (this is from a Nov 24 post in this thread)

    IMO, Gavin’s statement does not pass the common sense test. If there were third-party agreements in place then Jones would hardly have to resort to deleting data. I can see no other reasonable context for Jones’s comments than a willingness to destroy data in opposition to the law rather than see it fall into the hands of those whom he clearly sees as the enemy. This behavior can be fairly described in many ways, but being consistent with the generally accepted principles of scientific research would not be among them.

    [Response: That was hyperbole born of frustration- no data was destroyed and none will be. And there are third party agreements. – gavin]

  39. 639

    Justin Watts: If the planet is 4.5 billion years old, how can we predict how its climate will behave by studying a small amount of data from a mere 10,000 or even 1 million years, if it were available?

    BPL: Because climate operates on a characteristic time scale of 30 years, not billions.

  40. 640

    Raymond T: My question is then: “What would be the equivalent of the ‘magnetic polarity reversal observations’ in the continental drift theory which would have switched the thinking of the scientists in the AGW theory?”

    BPL: The high-altitude spectral observations taken in the 1940s which disproved the “saturation” objection to AGW theory raised by Angstrom and Koch in 1901.

  41. 641

    JLS: “Nuclear Winter” was neither good modelled climate science.

    BPL: Actually, it was. The famous 1984 “Nuclear Autumn” paper righties still cite as “disproving” it got the plume heights wrong by a factor of three. See Turco et al. 1991.

  42. 642

    John Macqueen: All the evidence IMHO points to an internal leak likely by a computer systems administrator.

    BPL: WHAT “evidence?” Care to cite it?

  43. 643

    #636 Jason:

    Back in the 90s, the Global Climate Coalition led by ExxonMobil DID fund scientists to determine the cause of global warming. GCC’s own scientists concluded that human emissions of GHGs were responsible so they promptly squelched that report and began funding front groups to deny the science and to confuse the public.

    Revkin writes about this here:

    http://www.nytimes.com/2009/04/24/science/earth/24deny.html

  44. 644

    Jason, exactly. I’ve often cited the mutually inconsistent nature of common denialist arguments as good reason for believing the mainstream.

    But another very good one is looking at the choices made by Heartland, Cato, et al: if they are so confident that reality is “on their side” then it would be logical to hire researchers and fund studies. Instead, the vast preponderance of effort goes into hiring PR people.

    http://hubpages.com/hub/Climate-Cover-Up-A-Review

  45. 645
    kevin king says:

    Just have to totally disagree with 631. If you don’t test basic code you’ve got a problem. You’ve evidently never worked in a production environment. Exception handling is the thing. Ah ha — the CRU doesn’t write code for production environments. Okay, but it still seems to me the need to handle exceptions correctly and consistently is important. Exceptions are not handled correctly in some of the code from the CRU.

    And John Mashey, 633: the emails talk about different releases of the software. There are multiple versions. Are you seriously suggesting these releases are not stored in a repository that is regularly backed up? Are they stored on the developer’s laptop?

    I would actually have to advertise agile development here, and simple unit testing. Okay, I have no experience of coding Fortran modules, but let’s assume the following. I write a piece of code that runs a collection of combined algorithms that I put together to combine data from 2 different datasets. I now add another piece of code that uses the above, which I’ve packed off undocumented in some file I forgot to name correctly. Let’s call it fudge.gy. The new piece of functionality appears to work. It returns apparently reasonable results. Unfortunately I’m not aware of the fact that the script I forgot to test is silently swallowing exceptions and returning unreliable data. Had I constructed some unit tests against which I could compare input and output data, I would have picked this up.

    I mean okay, let’s skip the continuous builds. Let’s just have a simple reproducible set of unit tests where I can test input data against output data. Surely that’s not too much to ask.
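    To make that scenario concrete, here is a minimal, hypothetical Python sketch (the function names and the data are invented for illustration; nothing here is taken from the CRU code): a routine that silently swallows an exception returns plausible-looking but wrong output, while a version with an explicit check fails loudly — and a fixed input/output unit test distinguishes the two.

    ```python
    def combine_series(a, b):
        """Combine two data series point by point, swallowing errors silently."""
        out = []
        for i in range(len(a)):
            try:
                out.append(a[i] + b[i])
            except IndexError:
                # The silent failure mode: a too-short second series is papered
                # over, and the unmatched values pass through unchanged.
                out.append(a[i])
        return out

    def combine_series_checked(a, b):
        """Same combination, but mismatched inputs raise instead of degrading."""
        if len(a) != len(b):
            raise ValueError("series lengths differ: %d vs %d" % (len(a), len(b)))
        return [x + y for x, y in zip(a, b)]

    def test_combine():
        # A reproducible input/output check, as suggested above.
        assert combine_series_checked([1.0, 2.0], [0.5, 0.5]) == [1.5, 2.5]
        # The sloppy version quietly returns a wrong-shaped answer for bad
        # input instead of raising — exactly what a unit test would expose.
        assert combine_series([1.0, 2.0, 3.0], [0.5, 0.5]) == [1.5, 2.5, 3.0]

    test_combine()
    ```

    The point isn’t this particular check; it’s that a fixed input with a known expected output would flag the silently swallowed exception the moment it appeared.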

    And 635. Of course I have seen worse code. But trillions of dollars
    rest on code like this. Not much of an excuse.

  46. 646
    John E. Pearson says:

    Re: 639 “Justin Watts: If the planet is 4.5 billion years old, how can we predict how its climate will behave by studying a small amount of data from a mere 10,000 or even 1 million years, if it were available?

    BPL: Because climate operates on a characteristic time scale of 30 years, not billions.”

    And because the concern is over what happens during the next 100 years not the next 100 million years. There are on the order of a billion people who rely on the water that flows out of the Himalayan glaciers. If those rivers become seasonal or begin undergoing large seasonal fluctuations in volume in the next 50 years the upheaval will be unprecedented. Note that this is the case whether the current temperatures have occurred 0 times or 1,000 times in the past 100 million years. Denialists love to go on about how the current temperatures and co2 levels are not unique in world history. What is unique, as Gavin has pointed out dozens of times, is the combination of rapid rise in conjunction with a world population approaching 7 billion and an advanced technological society.

  47. 647
    John E. Pearson says:

    645: Nowhere did I say I don’t test my codes. Obviously I do. The fact is that whenever I have something I think is worth saying that involves computer codes, I publish it, and if other people find it interesting they will write their own code and reproduce it. In that way my results are verified, and it is in this way that the CRU code has been validated. Even if it were true that responding to climate change will cost trillions, trillions are not resting on the output of any single code.

  48. 648
    Josh L says:

    Hi RC,

    Hats off for keeping up with the high volume of questions. Could I also please second the request by 629 by asking for an explanation of the article he linked – http://wattsupwiththat.com/2009/12/08/the-smoking-gun-at-darwin-zero/ – I’ve just read it and I’m curious to know your opinion as to why the results differ greatly from the previously published figures?

    [Response: See here. – gavin]

    Also, one more, if I may –

    Reading through the headlines on that site, I also came across a rant about one of the emails referring to the CRU being potentially funded by Shell Oil and/or other oil companies.

    Apologies if you’ve already responded to a similar question – I’m just getting a grip on this whole situation, but can you explain how the funding requests were concluded and if I could ask why the CRU would risk exposing themselves to claims of conflict of interests, in the circumstances? Here is the link to the article – http://wattsupwiththat.com/2009/12/04/climategate-cru-looks-to-big-oil-for-support/

    Thanks for your time,

    -Josh

    [Response: Oil companies have sometimes funded real scientists to do research on issues that were interesting to them. Deep time paleo-climate is a frequent subject, for instance. But at no time do the companies sponsoring the research get to determine the outcome (at least not with any of the scientists I know). This is very different to oil companies sponsoring disinformation efforts on climate change, which involve no research but plenty of misrepresentation instead. The former is fine; the latter, not so much. – gavin]

  49. 649
    KTB says:

    Somebody else posted this earlier, but I’m posting this again as it seems important.
    The blog posting’s methodology seems sound and the article is well written. I can’t directly spot any glaring errors.
    I am myself curious how these “homogenizations” are made, especially for stations this isolated. The answer may be that individual stations don’t matter, but there must be some reason why this station’s data is processed the way it is. This would also shed light on how the data is processed generally.

    Thanks in advance, I know you are busy and probably tired of “debunking”/explaining this sort of stuff, but this article was in my opinion well written, and I can’t find it addressed anywhere else.

    http://wattsupwiththat.com/2009/12/08/the-smoking-gun-at-darwin-zero/

    [Response: See here. – gavin]

  50. 650
    John says:

    635: “It isn’t even a disaster that this means their little test programs (and they are little) may have bugs in them affecting their results because the ultimate test is not thousands of sceptics poring over their source code, or feeding in physically impossible values to test exception handling, but rather other researchers writing completely independent code, often using completely independent data, seeing if they can come up with the same results.”

    Let me just quote a bit about the scientific method from Wikipedia:

    “Another basic expectation is to document, archive and share all data and methodology so they are available for careful scrutiny by other scientists, thereby allowing other researchers the opportunity to verify results by attempting to reproduce them. This practice, called full disclosure, also allows statistical measures of the reliability of these data to be established.”

    And this is where the climate scientists at large fail. You cannot scrutinize the data fully. You cannot look into the details of the methodology, since the little unreported details are hidden in the source code, and you plain and simple cannot determine if the scientist is producing poor and error-prone code. And this state of affairs will of course give scientists producing poor code a competitive advantage in the rush to push out papers.

    And we need to fix this now! Open the data and open the code. Otherwise there will always be a cloud of mistrust against your results.

    [Response: I can see that you feel strongly about this. So let’s make a deal – you accept any studies for which the data is public domain and the code available and we’ll just stick to discussing the conclusions drawn from that. Obviously, I’m not responsible for the whole community and so I can’t enforce anything, but there is nothing of importance that can’t be replicated using open data/source code. Or would the presence of a single solitary study using proprietary data devalue the work of everyone else? If you answer yes to that, then there is very little point continuing any dialogue. – gavin]