Where’s the data?

Filed under: — group @ 27 November 2009

Much of the discussion in recent days has been motivated by the idea that climate science is somehow unfairly restricting access to raw data upon which scientific conclusions are based. This is a powerful meme and one that has clear resonance far beyond the people who are actually interested in analysing data themselves. However, many of the people raising this issue are not aware of what and how much data is actually available.

Therefore, we have set up a page of data links to sources of temperature and other climate data, codes to process it, model outputs, model codes, reconstructions, paleo-records, the codes involved in reconstructions etc. We have made a start on this on a new Data Sources page, but if anyone has other links that we’ve missed, note them in the comments and we’ll update accordingly.

The climate science community fully understands how important it is that data sources are made as open and transparent as possible, for research purposes as well as for other interested parties, and is actively working to increase accessibility and usability of the data. We encourage people to investigate the various graphical portals to get a feel for the data and what can be done with it. The providers of these online resources are very interested in getting feedback on any of these sites and so don’t hesitate to contact them if you want to see improvements.

Update: Big thank you to all for all the additional links given below. Keep them coming!

407 Responses to “Where’s the data?”

  1. 251
    Aaron Lewis says:

    Re 241: Ray,
    True, the models are consistent with each other. However, how consistent are they with nature? How many of those models forecast the timing and extent of the recent and ongoing Arctic sea ice decline? How many suggested that (GRACE) might be detecting loss of Antarctic ice in 2009?

    I suggest that the failure to forecast Arctic sea ice decline is symbolic of the models’ inherent inability to deal with initial ice melt as a rather local process, rather than as a larger volumetric process.

    The problem is that even small amounts of ice melt set up albedo and water vapor feedback processes that result in the abrupt melt of adjacent ice. More worrisome is that local melt can set up mechanical stresses that result in progressive structural collapse.

    I assert that all ice melt processes resulting from global warming will occur much more abruptly than suggested by any of the GCMs.

    Each of the disciplines had its own literature, and they exchanged ideas. They knew the kinds of approaches that other groups were using for their models, and nobody took a dramatically different approach. The builders of each model were drawing on the peer-reviewed literature. None of the 23 models were built by “lone wolf” scientists. Nobody pulled a “Feynman” and made a point of not reading the literature. Thus, similar conceptual flaws appear in each model.

  2. 252
    donQ says:

    @Christopher Hogan.

    I also downloaded and ‘played’ with code.
    This code is considerably different and has sufficient (actually quite good) documentation along with it.
    If you compare modelE with the source code that was suggested very early in this thread, you will see/understand the ‘frustration’ and the criticisms.

    I’d rather ‘play’ with modelE …


  3. 253
    paulina says:

    Where IS the data, indeed?

    I’m looking for the evidence to support the claim made by “some prominent climate scientists” in the New York Times, as presented by Andy Revkin:

    “[Some prominent climate scientists] say that the e-mail messages…have damaged the public’s trust in the evidence that humans are dangerously warming the planet”

    What basis is there for claiming that the public’s trust in this evidence has been damaged?

    What basis is there for attributing that alleged damage to the email messages, rather than, for instance, to misleading media on the issue?


  4. 254
    caerbannog says:

    “If I pay taxes, then my meteorological services at all levels should provide me with the data for free as a citizen.”

    Then why did I have to pay 25 bucks for my driver’s license renewal? A hundred bucks for my passport?

  5. 255
    Ron Broberg says:

    Fast, cheap, good. Pick two.

    Click here for fast and cheap.

    I’ll update this later in the week as I beat out installation and configuration documentation.

  6. 256
    Matthew says:

    I want to thank Gavin for supporting this discussion. On the whole, I think that it has been fruitful.

    I especially appreciated the comment by Nick Barnes about the project using Python.

    My PhD was in statistics, and I have experience modeling nonstationary multivariate biological (behavioral and physiological) time series using nonlinear dynamic models in SAS and Fortran-77. These projects, as I’ll call them, are much smaller in scale than the climate modeling.

  7. 257
    dhogaza says:

    I’ll leave the question about big-endian for the programmers; surely there’s somewhere else to pursue that.

    Well, I’m a big-endian programmer, but what do you expect, seeing as I’m 55 years old? (sob :( )

    (which end is big is left as an exercise for the reader, QED).

  8. 258
    information = data says:

    Int. J. Climatol. 25: 1055–1079 (2005)
    Published online in Wiley InterScience (DOI: 10.1002/joc.1148)
    FROM 1796 TO 2002
    Armagh Observatory, College Hill, Armagh BT61 9DG, Northern Ireland, UK
    Received 5 May 2004; revised 15 November 2004; accepted 15 November 2004

  9. 259
    Mike Flynn says:

    I am a self-uneducated retiree with some scientific background.

    Does anybody on either side of this argument claim to be able to foresee the future? As far as I can see, computer models have the same predictive ability as chicken entrails, although with chicken entrails you at least get to eat a chicken as a side benefit.

    Although it is tempting to believe that the past can be extrapolated into the future, it just ain’t so.

    What usually happens is that people (not just scientists) confuse “assumptions” with “predictions”. Newton’s thoughts on gravity, Einstein’s thoughts on gravity, etc., were all deemed to be practically useful at the time. Good enough to “predict” eclipses, demolish the luminiferous ether (believed in by a majority of scientists, I believe), and so on.

    The future is unknowable. Prove that you can foresee even the next 5 seconds repeatedly with one hundred percent accuracy, and I will “predict” that civilisation as we know it, will end. You can easily work out why.

    I won’t hold my breath.

    [Response: Fortune telling is a class-B misdemeanor in NY state, and obviously no-one here is doing that! However, predictions are the lifeblood of science. Given certain, well-defined conditions, scientific theories will predict what the result of a specific experiment will be. If the conditions are not met (the temperature is different, or it wasn't a vacuum, etc.), the prediction doesn't work. With respect to the future of the climate system, there are some external forces that are quite predictable - the orbit of the Earth and its variations, the continuing rise in CO2 emissions, etc. Another example would be the climate consequences of a big volcano. In each case, you can use climate models to project the impacts on temperature, circulation etc. In fact, people have done this. In 1991 Hansen et al. predicted the temperature drop that was expected from the Pinatubo eruption well in advance of it happening. In 1988, the same team projected how temperatures would increase under 3 scenarios of rising greenhouse gases. For the scenario that came closest to being realised, the trends were a pretty good match to what actually happened. Other groups have made skillful projections for El Niño events and their consequences months in advance. So finding useful information about the future is not impossible, even if it isn't fortune telling as described in the NYS penal code. - gavin]

  10. 260
    Cardin Drake says:

    “We do not hold the original raw data but only the value-added (quality controlled and homogenised) data.”
    I’m afraid that it is true, and it appears to be even worse than it sounds.
    One would think that they could simply reverse the adjustments, and recalculate the original data. If you look at the annotations of the programmers doing the coding, however, it appears that they have lost track of how the original data has been adjusted.

    [Response: The issue is not any adjustments, but simply that they maintain a master database with all of the data sets merged together. Having done something similar (though smaller scale) with multi-sourced data - some I digitised myself, some done by others, gathered over a 10 year period, including calibration adjustments and typo and meta data corrections, I think I have a pretty good idea why you can't just break it down into constituent parts. As with CRU, all I have to give out is the merged product. - gavin]
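    [A hypothetical Python sketch may make gavin's point concrete; this is not the actual CRU or GISS code, and the station names, offsets and values are invented. Once per-source calibration offsets and corrections are folded into a single merged series, the product no longer carries the provenance needed to decompose it:]

```python
def merge_series(sources):
    """sources: list of (name, calibration_offset, {year: value}) tuples.
    Returns a single merged {year: value} dict; provenance is discarded."""
    totals, counts = {}, {}
    for _name, offset, series in sources:
        for year, value in series.items():
            # Offset is applied in place; the raw value is gone after this.
            totals[year] = totals.get(year, 0.0) + (value + offset)
            counts[year] = counts.get(year, 0) + 1
    return {y: totals[y] / counts[y] for y in totals}

stations = [
    ("A", +0.3, {1950: 14.1, 1951: 14.0}),  # offset from a calibration note
    ("B", 0.0,  {1950: 13.9}),
]
product = merge_series(stations)
# The product alone cannot tell you which source contributed what,
# or what the pre-adjustment values were.
```

    [To recover station A's raw 1950 value you would need the original sources again, which is exactly the situation described above.]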

  11. 261
    Tilo Reber says:

    Where is the raw HadCRU, GISTEMP, and NOAA NCDC data? Saying that it comes from GHCN doesn’t help. We have to know which stations over what time periods are being used. Reproduction is also going to require a site history for the surface stations – for example, if a site is moved, or if there is a change of time for data sampling. There seems to be a shortage of this kind of information – especially for HadCRU. Then we need all of the adjustment algorithms. This is probably more important than the code. Given the algorithms, we can quickly write our own code. Then, having the raw data, the metadata, and the algorithms, reproduction should be possible. Excuses like “we did this and you didn’t” won’t work. If we didn’t do what the sources did, it’s because the sources didn’t supply the information about what was done. Then if reproduction isn’t possible, there is a problem. I hope that this is not just one more example of Real Climate trying to appear to be transparent while at the same time hiding the important stuff.

    [Response: Oh, I don't know. Perhaps you could start by reading the papers? Or looking at the GISTEMP code? You keep saying stuff is being hidden, but when the whole thing is in front of you, you don't look. Someone else in the comments has already downloaded the GISTEMP code and got it to work in the time that this post has been up, while you have been complaining for months that it's all some secret. Who do you think has more credibility? - gavin]
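    [For readers who want to follow gavin's suggestion, the core operation in these products – converting station records to anomalies against a base period – is short. A minimal Python sketch, purely illustrative; the real GISTEMP code additionally handles gridding, station combination and urban adjustment:]

```python
def monthly_anomalies(temps, base=(1961, 1990)):
    """temps: {(year, month): temperature in deg C}.
    Returns anomalies relative to the base-period mean of each calendar month."""
    # Build the base-period climatology, one mean per calendar month.
    clim = {}
    for m in range(1, 13):
        vals = [t for (y, mm), t in temps.items()
                if mm == m and base[0] <= y <= base[1]]
        if vals:
            clim[m] = sum(vals) / len(vals)
    # Subtract the climatology; months with no base-period data are dropped.
    return {(y, m): t - clim[m]
            for (y, m), t in temps.items() if m in clim}
```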

  12. 262
    Jody says:

    Gavin, I hate to ask you to add one more thing to your already sizeable list, but since you’ve done an outstanding job dealing with the craziness so far I thought I’d suggest it here.

    Could you do a FAQ for the “ClimateGate” scandal, like you did for the objections to AGW? “I read that the CRU deleted the data.” “No, no data was deleted. See here and here for more info.” That type of thing.

    If I knew enough about everything….

  13. 263
    Neal Asher says:

    “Anyway, with the links provided, could a skeptic/denier perform an analysis that would disprove man made global warming?”

    The comment above appeared in an earlier post. I suggest that anyone asking questions like that should google Bertrand Russell’s teapot, or just go here:

  14. 264
    James Allan says:

    Couldn’t not contribute an aerosol data source ;-)

  15. 265
    AndyL says:

    The Data Sources page looks like an excellent resource. Will it be maintained, or is this a one-off exercise?

    [Response: It's set up as a special page, so it won't go off the front page, and we will aim to keep it up to date. There's been a lot of suggestions of data portals even we didn't know about, so it probably is going to be useful. - gavin]

  16. 266

    Mike Flynn:

    As far as I can see, computer models have the same predictive ability as chicken entrails

    BPL: Do a little more research. Start here:

  17. 267
    Milo says:

    There’s plenty of climate model output available here:
    It is requested that those downloading the data make a note of what they would like it for, but that’s the only requirement.

  18. 268
    Matthew L. says:

    Wow, what a major storm in a teacup. Although I find it disturbing that CRU appears to have destroyed a lot of the raw data, the availability of satellite measurements over the period of the most recent warming means that there is plenty of data out there that is independent of any possible CRU ‘fiddling’.

    As a layman, I have no way of using the data supplied in the links here. For most people it is more interesting to see graphical representations. I find the following site very useful for comparing the various records as it has them all, including the most recent data, all in the same place. It appears to be mainly factual, although do I detect a very slight sceptical slant? It also has some links detailing some interesting stories from history showing how climate change has affected real people in the past.

    particularly this chart which overlays the various data sets:

    As an aside, can historical records be used in any scientific way to inform climate models? For instance, is it possible to know whether the climatic conditions that generated the following fascinating story are in any way indicative of global conditions, or can these kinds of multi-year cooling events be a purely local phenomenon? Swedish ice-borne invasion of Denmark; the Treaty of Roskilde

    [Response: I detect the same thing. There are some rather odd methodological choices, a distinct lack of understanding the concepts of significance and some weird editorialising. So not climate4me! - gavin]

  19. 269
    Mike G says:

    Neal, you apparently either don’t understand Russell’s teapot or you don’t understand the basis for AGW (or both). The point of Russell’s analogy is that it’s impossible to prove a negative for a belief that is based on personal credulity rather than evidence. The analogy does not apply to claims based on empirical evidence.

    We know from direct measurements of the absorption spectra that CO2 is in fact a greenhouse gas. We know from direct measurement that it is increasing in concentration in the atmosphere. We know from direct measurements of the isotopic ratios of that CO2 that most of the increase is due to fossil fuel use and land use changes rather than natural sources like volcanoes. We know from measurements that the stratosphere is cooling while the troposphere warms, and that the warming is greatest at night, both observations which are consistent with greenhouse warming and inconsistent with almost any other mechanism.

    Performing an analysis disproving anthropogenic warming would mean showing either that MANY of these measurements are in serious error or that other mechanisms exist which can explain the evidence better. In either case, the analysis would prove a positive – the existence of errors, or of more powerful mechanisms.
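    [The first of those measurements is commonly summarised in the literature by a simple logarithmic fit. A sketch, assuming the Myhre et al. (1998) approximation ΔF ≈ 5.35 ln(C/C0) W/m²; the concentration values below are round illustrative numbers, not precise observations:]

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate radiative forcing from a CO2 concentration change,
    using the logarithmic fit of Myhre et al. (1998):
    dF = 5.35 * ln(C / C0) W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Pre-industrial (~280 ppm) to roughly 2009 levels (~387 ppm):
print(round(co2_forcing(387.0), 2))  # about 1.73 W/m^2
```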

  20. 270
    Dan Hughes says:

    Ron at #255

    The README.TXT file indicates successful completion of Steps 0-3.

    Is that correct or does the README.TXT file need to be updated?


  21. 271
    Ron Broberg says:

    @Dan Hughes#270: Is that correct or does the README.TXT file need to be updated?

    The README.txt is correct. Until I can either produce the SBBX.HadR2 file in STEP4 or consume it in STEP5, I won’t claim success for the STEPs 4 and 5. I think some of my problem is just a confusion of what STEP4 does. Looking closer, it appears to update an existing SBBX.HadR2 file with data from oiv2mon files. But if I start with an existing SBBX.HadR2 downloaded from GISS, I bump into the bigendian/littleendian issue. This doesn’t seem insurmountable … but I have a day job. I’ll see what I can do tonight.

    If anyone has constructive suggestions, they can email at ronbroberg in the company of
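    [For anyone else hitting the same byte-order wall: Fortran unformatted sequential files such as SBBX.HadR2 frame each record with 4-byte length markers, so they can be read from Python without the Fortran toolchain at all. A rough sketch, assuming big-endian markers; the function name is this editor's, not anything from GISTEMP:]

```python
import struct

def read_fortran_record(f, big_endian=True):
    """Read one record from a Fortran unformatted sequential file.
    Each record is framed by identical leading and trailing
    4-byte length markers."""
    prefix = ">" if big_endian else "<"
    head = f.read(4)
    if len(head) < 4:
        return None  # end of file
    (nbytes,) = struct.unpack(prefix + "i", head)
    payload = f.read(nbytes)
    (tail,) = struct.unpack(prefix + "i", f.read(4))
    if tail != nbytes:
        raise IOError("record markers disagree - wrong byte order?")
    return payload
```

    [Reading the file this way sidesteps the endianness flags entirely, at the cost of reimplementing the record parsing yourself.]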

  22. 272
    Norman Loeb says:

    The new Data Page is a wonderful idea. One small correction: Under Climate data (processed): Cloud and radiation products (ISCCP, CERES-ERBE), I suggest changing the CERES-ERBE link to the following:

  23. 273
    dale says:

    RE: 50 Jimi Bostock

    If you really want skeptics to embrace your movement (theories), you should do better than attack them. Most people I know, when challenged, try to explain their position, backed by FACTS. Your name-calling and arrogance don’t help your cause. Those who attack generally do so from fear. Why don’t you take time to retrench and present fully your methods, data, and code? Perhaps then the argument will be settled, by removing the emotional controversy. (But you need to release everything – warts and all.)

  24. 274
    David says:

    This data sets page is a great resource. There are some here I did not know about. Thanks for putting it together. Do you mind if I link to it?

    A good source of data is NCAR. I’m not sure how you would list this in your hierarchy (there is much data available, perhaps only some of it relevant to climate research). Some of this data is available to everybody; some of it is restricted to people with NCAR computing accounts (mostly due to restrictions placed on the data when they receive it, and other practicalities). Let’s face it, WMO Resolution 40 was/is a double-edged sword – it gave us access to some information that was previously difficult (or impossible) to obtain, and at the same time it restricted what we could do with the data. I’d love to see a time when all data collected for weather and climate research could be freely exchanged, but I think that day is sometime in the future, and may never happen.

    We used to make global hourly SYNOP observations available, but the Irish Met office was kind enough to let us know that only 6-hourly observations were covered by WMO Resolution 40. Everything else was considered “special” data – and every country had its own rules on “special” data. We don’t have the staff to figure out which is which, and to separate it, so we had to stop making the data available. For those interested, a quick google search found

    It would be nice if the WMO, or some other body, made available an archive of all available data. Maybe someday this will happen. I’m sure it’s just a matter of money.

    My understanding is that the data CRU “lost” was in the form of paper output and obsolete tapes. This apparently happened in the mid-1980s. Sorry I cannot provide a reference for this; this “understanding” accumulated over the last few days from reading various web sites and blogs – and may not be correct. I know that we have had to recycle stacks of paper, and magnetic tape, because it became impractical to store, and resources were not available to put them on more modern storage. The reality is that no university has the capability, or the responsibility, to make any data they may have once had available to others. People say that the data should have been kept because now important (and expensive) decisions are being made based on the analysis of that data. But in 1985 that was not known.

    If anybody here has influence with the IPCC or WMO, perhaps you can encourage them to make such a freely available data repository? But again, that would cost money.

    Since the “science” has apparently been more or less settled, perhaps that money should go toward possible mitigation efforts? But I still think an international data repository would be very valuable. While I’m tempted to suggest that all climate scientists retire, or move on, and no new ones be made, I think that would also be a mistake. To store and make the data available, and to retain climate scientists, is a lot cheaper than trying to resolve the mess we have (or might have) found ourselves in. If the data is available to everybody, perhaps one of them might find a mistake, or something new. The reality is, like it or not, that the science has not been settled – sure, there is some evidence the Earth might be warming, and CO2 is an obvious cause, but if you really think we know what is going on, I suggest you might be wrong (of course, you could be right).

    I have concerns about how the data might have been analyzed. But I have no interest in reanalyzing it. Well, I have the interest, but not the time. Enough people have done this – there might be problems looking for a signal in so much noise, but I think the people who have done this work did it as carefully and properly as possible.

    Climate scientists find themselves in an awkward position. They have been thrust into a position of guiding policy. This was in part because they had no choice – they found something that may be a real concern, and in order to get that concern heard they had to publicize it. There is some concern that they thrust this into the public view in order to enhance their careers, or their funding. This might be true in some cases, but I have no doubt that there is a real concern – and when you are concerned about something you have to try to do something about it. Scientists have been wrong before, and MAYBE they are this time. But they would be negligent if they saw a potential problem and did not try to fix it, even if it meant they had to delve into politics (where most of them have little skill).

    You have to admit, when we can’t predict the weather for next week, the public has to wonder about our ability to predict the weather (or climate, which is after all just the average of weather) for the next decade. Yes, I know they are different problems, but most of the public does not see that, and after all they are mostly based on similar sets of data and models.

  25. 275
    Walter Pearce says:

    Just heard the Diane Rehm panel discussion on Copenhagen, which included Michael Mann along with a “resident scholar” from the American Enterprise Institute. I commend it to anyone, especially in the skeptic community, who still has questions about the email incident as well as the reality of the recent warming.

  26. 276

    The medical profession has discovered that a poison contained in a United Nations variant of fudge was the cause of a debilitating brain disease among scientists, now termed Climate Wars Syndrome (CWS).

    The disease was secretly suspected by sceptical scientists to have spread rapaciously among the scientific community for two decades and to have taken a terrible grip over the reasoning powers of many. Victims can be identified by their green and alarmist complexion. Other side effects include an irrational hatred of mankind and a Tourette syndrome-like verbal abuse of anyone who uses fossil fuels. Threats of violence may occur. The world first learned of these sensational developments from the Internet on Friday November 20th 2009. The story broke that both the underlying cause of CWS and an effective treatment had been discovered by the due diligence of one man working at the UK’s Climate Research Unit (CRU). A vast community of Internet surfers soon memorialised these profound events by naming them, ‘Climategate.’

    From leaked documents we understand that the catalyst for this epoch change in science occurred when a climatologist and self-taught computer programmer known only as ‘Harry’ was sitting at his laboratory computer chewing on some fudge. Only after three long years working on this problem, in a sudden eureka moment, did it finally dawn on him. In Harry’s hands was the cause of the brain-fog mystery.

    “F**k! It’s the fudge! It’s serial!” he cried.

    Inadvertently, Harry has become the hero the public associate with solving one of the great mysteries of modern science. Since those findings have appeared on the Internet the world has quickly accepted that it was the UN’s foul fudge that caused scientists to suffer this dreadful disease.

    Meanwhile, epidemiologists and clinicians have been quick to identify the hallucinogenic properties of the offending fudge to further unravel the mystery. Incredibly, the fudge has been found to contain a psychotropic substance that acts primarily upon the central nervous system, where it alters brain function, resulting in changes in perception, mood, consciousness and behavior, leading patients to feel delusions of grandeur and a sense of spiritual purpose in their lives.

    It appears lone-wolf Harry, whiling away his time in the CRU laboratory, subliminally faced the truth and, by a process of ‘cognitive dissonance’, shocked himself out of the effects of the psychotropic intoxicant, a drug now known to cause the hallucinogenic appearance of a mythical beast known as ‘Man-Bear-Pig’ (MBP). Other experts who have replicated Harry’s experiments confirm the efficacy of the cognitive-dissonance reasoning process as a cure. Apparently, most recovering ‘addicts’ (for this fudge-eating was clearly an addiction) soon notice a change, starting with improvements in the appearance of their eyes, which lose their tainted green colouration.

    Other convalescing climatologists, that body of scientists identified as the worst fudge sufferers, are reporting the same side effects as Harry. Symptoms include anxiety, guilt, shame, anger, embarrassment, stress, and other negative emotional states that torment the patient. Epidemiologists have coined the name ‘Climate War Syndrome’ (CWS) to describe the fudge-induced malady. Both ‘Climategate’ and ‘Climate War Syndrome’ have fast entered common usage, giving a new handle on what was one of the great mysteries of our time.

    Of course, like any serious disease, there will always be patients who won’t respond well to treatment. Those worst cases, permeated with the deepest shade of green, are believed to be James Hansen, Michael ‘upside down’ Mann and Phil Jones, who, it’s feared, may all need to be quarantined in isolation for several years.

    [Response: Not really appropriate, but a step above the, umm... let's say, less cerebral sceptic offerings... - gavin]

  27. 277

    In the “documents” section of the FOIA2009 archive, there are several IDL programs that add “artificial corrections” to temperature data (for a list of files that match “artificial”, see here). One such program is linked here. This graph shows tree-ring data before and after the corrections defined in the file (for more background, see here). Doesn’t the corrected graph look like the hockey stick? Were such corrections used in the construction of the hockey stick?

    [Response: No. These "corrections" were never used anywhere. See comments passim. - gavin]

  28. 278
    Tilo Reber says:

    “As with CRU, all I have to give out is the merged product. – gavin]”

    How do you know that the merged product represents the original data? Unless you either have the original, or enough metadata to point you to the places where you got the pieces of your merged product, the idea that your merged product represents reality is simply unsupportable.

    [Response: You are welcome to go back to the original sources and do it again yourself. The point is that there is only one product (the current database), but the raw data still exists (in its less than perfectly accessible form). - gavin]

  29. 279
    donQ says:

    @Ron Broberg
    it is good to see that you are trying to make this GISTEMP work.

    Your first attempt appears to have been less successful than you originally thought, and someone had to call you on it. So while you were not trolling (sorry for that), you were over-excited and produced a personal attack, as it were, based on your confusion.

    Hopefully you will get it to work and in the process create good documentation, and truly become the first person in this thread to have run GISTEMP.

  30. 280
    Garrett Jones says:

    I just saw an article in the UK Times that claims the original temp data (i.e. raw, unadjusted in any way) was destroyed/mislaid during a move of facilities. Further, according to this article, the only thing now in the possession of the CRU is the smoothed and corrected data produced in accordance with CRU’s model. Could you comment on whether this is true, and on a hypothetical: were it in fact true, would the research be repeatable, in the sense that a good experiment can be repeated and arrive at the same answer? Sorry, someone may have asked this before; I just did not have the heart to wade through all the comments. Thanks.

    [Response: The raw data is in the custody of the met services who originated it. CRU is just a collation, not a temperature measuring organisation. - gavin]

  31. 281
    Nick Barnes says:

    Ron, if you are using GNU Fortran, the endianness magic can be done by setting GFORTRAN_CONVERT_UNIT before invoking the executable. This can set the endianness of any “units” (like file IDs) to big-endian, little-endian, platform-endian, swap-endian, etc.

    On the ccc-gistemp project I hacked out all the ksh driver scripts a long time ago – replacing them with a single /bin/sh script – and rearranged all the files into consistent directories, and so on, so my code and scripts aren’t the same as the released GISTEMP any more, but the relevant part of my file reads like this:

    echo "====> STEP 5 ===="
    # GISS binary data files are big-endian, so ours are too, including
    # the intermediate file generated by and the SBBX.HadR2 ocean
    # file which comes out of STEP4. We use GFORTRAN_CONVERT_UNIT to tell
    # our code to treat all these files as big-endian. This Will Not Work
    # if the Fortran compiler is not GNU Fortran.

    GFORTRAN_CONVERT_UNIT="big_endian:10,11,12" bin/SBBXotoBX.exe 100 0 > log/SBBXotoBX.log
    GFORTRAN_CONVERT_UNIT="big_endian:10,11" bin/zonav.exe > log/zonav.Ts.ho2.GHCN.CL.PA.log
    GFORTRAN_CONVERT_UNIT="big_endian:10,11,12" bin/annzon.exe > log/annzon.Ts.ho2.GHCN.CL.PA.log

    Hoping this makes it through the RC comment filter. If this interests you, feel free to join us on ccc-gistemp.
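    [A trick that complements Nick Barnes’ GFORTRAN_CONVERT_UNIT approach: the first 4-byte record marker of a Fortran unformatted file usually makes the byte order obvious, since only one interpretation yields a plausible record length. A hypothetical Python sketch; the 10^6 threshold is an arbitrary sanity bound, not anything from GISTEMP:]

```python
import struct

def guess_record_endianness(path):
    """Heuristic: unpack the first 4-byte Fortran record marker both ways;
    the interpretation yielding a small positive length is probably right."""
    with open(path, "rb") as f:
        raw = f.read(4)
    if len(raw) < 4:
        return "ambiguous"  # file too short to tell
    (big,) = struct.unpack(">i", raw)
    (little,) = struct.unpack("<i", raw)
    plausible = lambda n: 0 < n < 10**6
    if plausible(big) and not plausible(little):
        return "big"
    if plausible(little) and not plausible(big):
        return "little"
    return "ambiguous"
```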

  32. 282
    VagabondAstronomer says:

    After days of reading through this, the more tempting it is to simply throw up my hands in frustration. The denialist will still continue to deny AGW, regardless of how much data you give/throw/otherwise convey to them. The primary forces opposed to AGW are simply too well financed; their pockets are very deep, they have control over numerous outlets and are quite crafty at picking-and-choosing-and-releasing data that supports their claims that AGW is not real, and that while climate change MAY BE real (and that is a big, fat MAY BE), it is most certainly not human in origin. This is not science against science here any longer; it is a clash of belief systems. The funny thing is that, like most belief systems, it is easier to say that the side with data supporting their claims has turned their science into a near religion. This is what happens with Creationists; their number one charge against evolutionists is that the evolutionists have turned their “theories” into religion.
    The media outlets that show a strong pro-industry bias are almost always controlled, in some fashion, by parties that stand to lose big if any mandates are handed down. Of course they are going to fight this, as this is going to be the biggest paradigm shift in recent human history.
    I fear that there will soon be a new group emerging from this; the GCC embracers. These are former denialists who now accept that AGW is real, but alas, it is too late to do anything about, so we simply need to just move along and accept our new, somewhat warmer and regionally flooded world.
    Just a few thoughts…

  33. 283
    Peter Houlihan says:

    Before the “emails” were released, Michael Mann had a paper in press. It appeared in the recent issue of Science.

    Global Signatures and Dynamical Origins of the Little Ice Age and Medieval Climate Anomaly
    Michael E. Mann et al. Science 27 November 2009:
    Vol. 326. no. 5957, pp. 1256 – 1260

    Mann is one of the folks being lambasted for a lack of transparency. This paper is a case study in transparency and data availability.

    From the text of the paper:

    “Further details of reconstruction procedure, associated statistical validation and skill assessments, uncertainty estimation procedures, data used, and MATLAB source codes for the analysis procedures are provided in the Materials and Methods.”

    When you go to the online supplement, you will find this:

    “Computer codes and data. Computer codes (MATLAB), data, and supporting information for analysis in main paper. File are packaged as a compressed archive, in *.zip format; users should download the compressed file to their machine and decompress the file on their local hard drive, using the instructions below. (22 MB) [This is a link in the online version]”

    And lest we forget where we could be spending our time and energy: it’s a very interesting paper and well worth a read.

  34. 284
    David says:

    Re: “You’re using a common name as a userid — whoever you are, people won’t know if the next “David” along who asks a naive question is from you again, playing games, or from someone with an honest question.”

    That’s a fair point, and I apologize. I won’t do it again, no matter how much I may be bothered by the tone of some of the posts here.

  35. 285
    donQ says:

    @Ron Broberg,
    using your instructions + code provided and starting from scratch, while running Ubuntu 9.10 x64 in Parallels and using this after “./” produces:

    start STEP0

    At line 10 of file sorts.f (unit = 1, file = ‘antarc.txt’)
    Fortran runtime error: End of file
    Bringing Antarctic tables closer to input_files/v2.mean format
    collecting surface station data
    … and autom. weather stn data
    … and australian data
    replacing ‘-’ by -999.9, blanks are left alone at this stage
    adding extra Antarctica station data to input_files/v2.mean
    0.00 not ok
    0.00 not ok
    created v2.meanx from v2_antarct.dat and input_files/v2.mean
    removing pre-1880 data:

    GHCN data:
    removing data before year 1880.0000
    created v2.meany from v2.meanx

    Replacing USHCN station data in v2.mean by USHCN_noFIL data (Tobs+maxmin adj+SHAPadj+noFIL)
    reformat USHCN to v2.mean format
    replacing USHCN station data in by USHCN_V2 data (all adjustments, but ignoring fill-ins)
    unifying station-ids
    ./unify_us_ids[3]: 9641C_200907_F52.avg: cannot open [No such file or directory]
    mv: cannot stat `9641C_200907_F52.avg.1′: No such file or directory

    Finding offset caused by adjustments
    extracting US data from GHCN set
    removing data before year 1980.0000

    Getting USHCN data:
    removing data before year 1880.0000
    done with ushcn

    Created ushcn-ghcn_offset_noFIL
    At line 21 of file cmb2.ushcn.v2.f (unit = 3, file = ‘ushcn-ghcn_offset_noFIL’)
    Fortran runtime error: End of file
    created v2.meanz

    Replacing Hohenspeissenberg data in v2.mean by more complete data (priv.comm.)
    disregard pre-1880 data:


    Is this similar to the output that you are seeing?

  36. 286

    Interested in direct links to cosmic ray count data, AMO, PDO, and the major Nino SST indices?

  37. 287
    Tonyb says:

    53 Jiminmpls said:

    “28 November 2009 at 9:40 AM
    Fair is fair. I demand that the Heartland Institute, George Marshall Institute, American Petroleum Institute, Western Fuels Association, Sen Inhofe, McIntyre, Singer, Ball, Michaels, and all other organizations and individuals attempting to influence public policy in regards to climate change IMMEDIATELY make their entire email archives available for public scrutiny and analysis.”

    According to most of the people posting here none of those organisations have anything at all worth saying so what would be the point?

  38. 288
    Rob Bauer says:

    The Antarctic Glaciological Data Center (AGDC) at NSIDC houses a collection of Paleoclimate/Ice Core data collected under the NSF Antarctic Glaciology program.


  39. 289

    There’s a data-processing program buried in the FOIA archive that adds hard-coded corrections to raw tree-ring data. Did this program play a part in the composition of the Hockey Stick graph?

    [Response: No. See comments passim. - gavin]

  40. 290
    Rod B says:

    Ray Ladbury (241) says, “Matthew, there are no true skeptics left–merely the ignorant, the wilfully ignorant, the denialists and the wingnuts.”

    So what do you call people who deny the reality of true skeptics (your self-serving definition aside)?

  41. 291
    Jason Patton says:

    Not sure if it would be data overload, but RCM code is pretty easy to find too. I know from NARCCAP, at least, MM5, WRF, and RegCM3 have code available. I think WRF might make you do a quick registration first, though. It’s probably not worth listing RCM output since it’s highly individualized.

  42. 292
    Alfio Puglisi says:

    I can confirm what Ron Broberg (#271) says. I also arrived at step 4 and met the following message:

    iyrbeg-new 1880 iyrbeg-old 1476853760

    where the latter number is “1880” read with the bytes in the wrong order (an endianness mismatch). I am using 32-bit Ubuntu on Intel.

    By the way, it took me a little more than an hour to get the whole thing running, including downloading everything, installing most of the needed tools (my Ubuntu box didn’t even have ksh and the Fortran compiler installed) and rewriting the Python extension installer script. Not out-of-the-box, but not so hard either :-)
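
    That iyrbeg-old value is exactly what a byte-swap predicts: 1880 written big-endian and read little-endian comes out as 1476853760. A quick illustrative check in Python:

    ```python
    import struct

    # 1880 written as a big-endian 32-bit integer...
    raw = struct.pack('>i', 1880)          # b'\x00\x00\x07\x58'
    # ...and misread as little-endian on an Intel box:
    misread = struct.unpack('<i', raw)[0]
    print(misread)  # 1476853760 -- the bogus iyrbeg-old above
    ```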

  43. 293
    Alw says:

    Re 129

    Gavin the quote from Hansen 2001 stated metadata defining “all” changes, not as you put it “any metadata”.

    Whilst the change of time of measurement is one of the multitude of changes, it does not comprise metadata defining all of the changes.

  44. 294
    IJH says:

    Okay, here’s a question about the Times’ missing-raw-data story, beyond where the data still is: what did the CRU mean when they said:

    “Data storage availability in the 1980s meant that we were not able to keep the multiple sources for some sites, only the station series after adjustment for homogeneity issues.”

    Wouldn’t the cost of keeping a few more numbers on tapes (or whatever) be trivial compared to cost of the whole enterprise? Or was something other than cost the issue?

  45. 295
    Ray Ladbury says:

    Rod B., A true skeptic must acquaint himself or herself with the evidence, acknowledge it, and seek plausible alternative explanations. Do you know of any true skeptics, Rod? I don’t. If you know of any, perhaps you can tell me their theories on how the troposphere can warm and the stratosphere cool via a mechanism other than a greenhouse one. Or what credible alternative energy mechanism accounts for 30 years of warming and the loss of trillions of tons of ice. That’s for starters.

  46. 296

    Gavin, thanks for your reassurance that the corrected data was not used. I’m looking for “comments passim” that you mentioned. Perhaps I’m just a blind old fool, but I’m not finding them in this list of comments. Can you direct me to your previous answer? Thank you for your patience. Kevan

  47. 297
    Ron Broberg says:

    @donQ#285: Is this similar to the output that you are seeing?

    Yes. Next update, I’ll post an output log.
    Also, I scripted a diff of the NASA GISTEMP output against my file. The largest diffs are at the endpoints (1880–1900 or so) and are likely related to the Hohenpeissenberg file. Currently I am downloading the one hosted in NL rather than using the one provided in the NASA tarball. I’ll switch back to NASA’s and rerun tonight.
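
    For anyone wanting to script a similar comparison, here’s a hypothetical sketch in Python — the two-column year/value format and the sample numbers are assumptions for illustration, not the actual GISTEMP output layout:

    ```python
    # Compare two year/anomaly listings and report the year with the
    # largest difference between them.
    def load(lines):
        """Parse 'year value' lines into a {year: value} dict."""
        out = {}
        for line in lines:
            parts = line.split()
            if len(parts) >= 2 and parts[0].isdigit():
                out[int(parts[0])] = float(parts[1])
        return out

    # Made-up sample values standing in for the two output files:
    nasa = load(["1880 -0.30", "1881 -0.25", "1882 -0.21"])
    mine = load(["1880 -0.35", "1881 -0.26", "1882 -0.21"])

    diffs = {yr: abs(nasa[yr] - mine[yr]) for yr in nasa.keys() & mine.keys()}
    worst = max(diffs, key=diffs.get)
    print(worst)  # 1880
    ```

    Restricting the comparison to the intersection of years keeps the script honest when the two runs cover slightly different periods.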

    @Nick Barnes:
    Thanks for the tip. I’ll give them a try tonight.

  48. 298
    Dave E says:

    #50–”I have never known anything to be settled in my long life”–so you go to bed hoping that you won’t fall off the earth before you wake up, gravity being one of those things that still isn’t settled. You hear there is some cooling going on–when it’s shaping up to be the hottest decade ever; each of the last 10 years has been among the 15 hottest on record. Yes, 1998 was hot, but the trend still seems to be up. Maybe you should look a little more closely at what the sceptics say and see if it really holds water–that’s the basis of the comparison of deniers to creationists: they both ignore the evidence of their surroundings.

  49. 299
    Shirley says:

    Gavin, I’m sorry to bother you with more nonsense, but I’m trying to deal with some people making a fuss about the Times UK article about the magnetic tapes and paper notes that were thrown out to save space. I know I saw something here at RC on this, but haven’t been able to find any clarification despite some fancy “ctrl-F” work in the posts. Also, in the wake of that article, it might be a time/sanity saver to update something about it in the body of this or the “…context” post, to clarify the quantity, quality and extent of documentation available for reconstruction (i.e., how much methodology was written up, if anyone wanted to reverse-engineer the oldest data to extrapolate what would have been on the tapes), or how that data might compare to other data sets from the same period. I personally see data on 1980s magnetic tapes as about as valuable as data scribbled on cocktail napkins late at night in the dark, but I’m only an undergrad… I guess I’d better not lose any of my notebook pages from now until I die or I might be in trouble. Anyway, I’d like to have something to tell people who think this is some big meaningful plot or omission of science, to put it into perspective.

  50. 300
    Mark V Wilson says:

    On the question as to whether some Briffa data were used in one of the hockey stick graphs you say “No, see comments passim”. I’ve searched the comments for “Briffa” and “hockey stick” and not found anything. What comments answering the question are you referring to?
