


1934 and all that

Filed under: — gavin @ 10 August 2007

Another week, another ado over nothing.

Last Saturday, Steve McIntyre wrote an email to NASA GISS pointing out that for some North American stations in the GISTEMP analysis, there was an odd jump in going from 1999 to 2000. On Monday, the people who work on the temperature analysis (not me) looked into it and found that this coincided with the switch between two sources of US temperature data. There had been a faulty assumption that these two sources matched, but that turned out not to be the case. There were in fact a number of small offsets (of both signs) between the same stations in the two different data sets. The obvious fix was to make an adjustment based on a period of overlap so that these offsets disappear.
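The fix described above — estimating each station's offset from a period when both sources overlap, then removing it — can be sketched in a few lines. All numbers below are made up for illustration; this is a sketch of the idea, not the GISTEMP code:

```python
import numpy as np

# Hypothetical monthly anomalies for one station from two sources that
# overlap for 60 months. Source B happens to sit 0.15 C below source A.
rng = np.random.default_rng(0)
overlap_a = rng.normal(0.5, 0.2, 60)          # source A during the overlap
overlap_b = overlap_a - 0.15                   # source B during the overlap
later_b = rng.normal(0.6, 0.2, 84) - 0.15      # source B, after the switch

# Estimate the offset from the overlap period and remove it, so the
# later record lines up with the earlier source's baseline.
offset = np.mean(overlap_b - overlap_a)
adjusted_later = later_b - offset

print(round(offset, 2))  # -0.15
```

With real stations the offset differs from station to station (and in sign), which is why the net effect on the US mean was modest.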

This was duly done by Tuesday, an email thanking McIntyre was sent, and the data analysis (which had been due in any case for the processing of the July numbers) was updated accordingly, along with an acknowledgment to McIntyre and an update of the methodology.

The net effect of the change was to reduce mean US anomalies by about 0.15 ºC for the years 2000-2006. There were some very minor knock on effects in earlier years due to the GISTEMP adjustments for rural vs. urban trends. In the global or hemispheric mean, the differences were imperceptible (since the US is only a small fraction of the global area).
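A rough back-of-the-envelope check shows why the global effect is imperceptible. Assuming the contiguous US covers roughly 2% of the Earth's surface (a figure assumed here for illustration, not given in the post):

```python
# Why a 0.15 C shift in the US mean barely moves the global mean.
# The ~2% area fraction is an illustrative assumption.
us_area_fraction = 0.02
us_anomaly_shift = 0.15  # degrees C, from the correction described above

global_shift = us_area_fraction * us_anomaly_shift
print(round(global_shift, 4))  # 0.003 C -- far below the noise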

There were however some very minor re-arrangements in the various rankings (see data [As it existed in Sep 2007]). Specifically, where 1998 (1.24 ºC anomaly compared to 1951-1980) had previously just beaten out 1934 (1.23 ºC) for the top US year, it now just misses: 1934 1.25 ºC vs. 1998 1.23 ºC. None of these differences are statistically significant. Indeed in the 2001 paper describing the GISTEMP methodology (which was prior to this particular error being introduced), it says:

The U.S. annual (January-December) mean temperature is slightly warmer in 1934 than in 1998 in the GISS analysis (Plate 6). This contrasts with the USHCN data, which has 1998 as the warmest year in the century. In both cases the difference between 1934 and 1998 mean temperatures is a few hundredths of a degree. The main reason that 1998 is relatively cooler in the GISS analysis is its larger adjustment for urban warming. In comparing temperatures of years separated by 60 or 70 years the uncertainties in various adjustments (urban warming, station history adjustments, etc.) lead to an uncertainty of at least 0.1°C. Thus it is not possible to declare a record U.S. temperature with confidence until a result is obtained that exceeds the temperature of 1934 by more than 0.1°C.

More importantly for climate purposes, the longer term US averages have not changed rank. 2002-2006 (at 0.66 ºC) is still warmer than 1930-1934 (0.63 ºC – the largest value in the early part of the century) (though both are below 1998-2002 at 0.79 ºC). (The previous version – up to 2005 – can be seen here).

In the global mean, 2005 remains the warmest (as in the NCDC analysis). CRU has 1998 as the warmest year but there are differences in methodology, particularly concerning the Arctic (extrapolated in GISTEMP, not included in CRU) which is a big part of recent global warmth. No recent IPCC statements or conclusions are affected in the slightest.

Sum total of this change? A couple of hundredths of degrees in the US rankings and no change in anything that could be considered climatically important (specifically long term trends).

However, there is clearly a latent and deeply felt wish in some sectors for the whole problem of global warming to be reduced to a statistical quirk or a mistake. This led to some truly death-defying leaping to conclusions when this issue hit the blogosphere. One of the worst examples (but there are others) was the ‘Opinionator’ at the New York Times (oh dear). He managed to confuse the global means with the continental US numbers, he made up a story about McIntyre having ‘always puzzled about some gaps’ (what?), declared that the error had ‘played havoc’ with the numbers, and quoted another blogger saying that the ‘astounding’ numbers had been ‘silently released’. None of these statements are true. Among other incorrect stories going around are that the mistake was due to a Y2K bug or that this had something to do with photographing weather stations. Again, simply false.

But hey, maybe the Arctic will get the memo.


620 Responses to “1934 and all that”

  1. 101
    Ron Taylor says:

    Re 98 – Tim, I am so grateful for your knowledge, skill and patience in deconstructing plausible sounding strawman arguments and revealing them for what they are. Keep up the good work – and I am sure that it involves a great deal of work on your part. Your posts are a great asset on this site.

  2. 102
    Allan Ames says:

It is troublesome that in this day and age we still seem to be using, or over using, an 18th century approach to data capture, a thermometer mounted on a post. Problem is, thermometers don’t really measure what we need to know about energy in the atmosphere, so their measurements are largely irrelevant for models. Thermometers report some unidentified hodge-podge of LTE, upwelling and downwelling radiation fields and air currents and radiation from the surround, whatever that happens to be.

    What we need — it is 2007 after all — are lateral and vertical radiation field measurements concurrent with pressures, separate pressure fluctuations, wind velocity, and humidity, all sampled at least every five minutes. Stations should be located inside and outside UHI’s. I say let’s stop wasting time with postoffice or whatever thermometers. Let the private sector manage and sell this data. If NASA is going to be in the business of modeling, it needs clean, complete data and trash the rest of it.

  3. 103
    Vincent Gray says:

Hansen’s correction shows that in the United States temperature oscillates with a frequency of about 65-70 years. This oscillation was identified by Schlesinger and Ramankutty in 1994. The last peak was 1940 and we are about to experience the next decline. There is no evidence for “global warming” from Hansen’s record.

    The same oscillation, without overall warming, is to be found in the published regional records for China, and for many well-maintained local records.

The supposed “global warming” shown in the surface record is an artifact caused by the many inaccuracies and biases in the recording of local temperatures (where average temperatures are never measured) and their influence by local changing circumstances.
    [edit]

  4. 104
    Dodo says:

    Re #7. Was there an answer to this reasonable request? It would be quite illustrative to see the US and global anomalies on a similar scale, don’t you think? Concerning the global anomaly graph, somebody should tell the graphics person not to plot values off the chart, but to choose the y-axis so that everything fits inside the picture.

  5. 105
    ghost says:

    RE: 92/Richard: Monbiot says that tobacco company documents disclose a concerted effort to fog the secondary smoke issue by discrediting science in general–using AGW research as the focal point of the disinformation attack. From that flowed junkscience.com, its progeny, and most if not all of AGW denial. As such, the denial collective toils, consciously or unconsciously, in the service of tobacco company (and by derivation, extractive company) profits. How proud they must be.

    88. “It’s really no surprise how the antiwarming crowd jumps on any small error in the science …to discredit all the science.”

    This was a classic tactic of the tobacco industry. Any small error in a science article was used to describe the research as ‘flawed’ and unusable.

  6. 106
    Jerry says:

    #103

The probability that the cumulative effect of local “inaccuracies and biases” would be a statistically significant global warming signal is minuscule. (Unless there’s a conspiracy… calling Michael Crichton!)

  7. 107
    FP says:

    Three Questions.

Had the error McIntyre discovered changed the data in the other direction and made it much warmer, would he have brought it to light?

Would 1934 still be the warmest year had calculations for the year started on any date other than January 1 through December 31? Had the years been calculated from any other starting date, it seems very likely that the time period we call a year would not stand out at all. The years before and after 1934 were considerably colder, unlike the years in recent decades. Recent years are all much closer together in warmth, hence the mean being higher for a longer period.

1934, Dust Bowl, right? If in 1998 we had scraped the plains of plants so that dust could blow around, would it have significantly beat the pants off 1934 for hottest year on record ever?

    [Response: Actually, it wasn't clear which way the error would go once it was spotted. I doubt that it would have got as much publicity if it had gone the other way though. - gavin]
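The second question above — whether a "year" anchored to a different start month would still single out 1934 — amounts to comparing calendar-year means against all 12-month running means. A sketch with synthetic data (illustrative values, not GISTEMP numbers):

```python
import numpy as np

# Compare January-December means with 12-month means begun in any month.
# The series here is synthetic, standing in for monthly US anomalies.
rng = np.random.default_rng(1)
monthly = rng.normal(0.0, 0.3, 240)   # 20 hypothetical years of anomalies

# All 12-month running means; window i covers months i .. i+11.
running = np.convolve(monthly, np.ones(12) / 12, mode="valid")

# Calendar-year means are just every 12th window, starting at month 0.
calendar = running[::12]

# There are far more candidate "years" than calendar years, so a year
# that wins on the calendar grid need not win over every window choice.
print(running.size, calendar.size)  # 229 windows vs 20 calendar years
```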

  8. 108
    Neil B. says:

    As I said at DeLong’s typepad, where he has a good piece on this now:

    You can draw a straight line right across the 1934 peak on the revised graph [as I could see it there], and see that it is still lower than the 200x wiggles.

    Yeah, Rush was blabbering about fraudulent NASA scientists, and “Seventy years ago, it was way warmer than it is now” etc. BTW, why can’t groups of people like NASA scientists sue for libel if someone defames them?

    [Response: It's not worth the bother..., though sometimes one is tempted! - gavin]

  9. 109
    Ron Taylor says:

    In 103, Vincent Gray writes: ‘Hansen’s correction shows that in the United States temperature oscillates with a frequency of about 65-70 years. This oscillation was identified by Schlesinger and Ramankutty in 1994. The last peak was 1940 and we are about to experience the next decline. There is no evidence for “global warming” from Hansen’s record.’

    How interesting. I realize you are only saying that Hansen’s data does not demonstrate global warming, but you imply that global warming does not exist, and is merely a phase of a natural oscillation.

    Since, as you say, this idea has been around since 1994, and since the conclusion contradicts the conclusions of hundreds of peer reviewed studies, surely dozens of peer reviewed articles have been published in support of the idea. Could you refer us to, say, three or four published in the past five years?

  10. 110
    Wolfgang Flamme says:

    #58
    Gavin, I think that explains the difference, thank you.

  11. 111
    TCO says:

The Y2K comment was McIntyre being flip. It has caused unneeded confusion. One of the dangers of blog science (tendency for catchy headlines and the like, versus boring titles).

The station picture thing was related in an indirect way. Watts et al. had been looking at individual station pictures and also in some cases correlating them to the station temp graphs. Looking at individual temp graphs showed some large adjustments tending to happen at JAN2000. That’s what popped out to the eye and led to the insight. The adjustments are actually bimodal rather than normal around the mean, so it was easier to see an effect than one might expect. (None of this is to defend the station photo effort, which I’ve ridiculed. But to explain the form of the connection as it is. Really much more one of these things of one idea leading to another that happens in science. Heck, some of my most interesting results from lit searches did not come from a search or references to references, but from being physically in the library among the books or bound journals and what you end up finding next to each other…)

  12. 112
    TCO says:

49: (science being theory or experiment led). Both can happen. The fractional quantum Hall effect, the Michelson-Morley experiment, the Mössbauer effect, the discovery of penicillin, etc. are examples of unexpected results. Of course, it is also often the case that instruments are specifically designed based on an experiment.

  13. 113
    Timothy Chase says:

    Vincent Gray (#103) wrote:

    Hansen’s correction shows that in the United States temperature oscillates with a frequency of about 65-70 years. This oscillation was identified by Schlesinger and Ramankutty in 1994. The last peak was 1940 and we are about to experience the next decline. There is no evidence for “global warming” from Hansen’s record.

    The same oscillation, without overall warming, is to be found in the published regional records for China, and for many well-maintained local records.

    The supposed “global warming” shown in the surface record is an artifact caused by the many inaccuracies and biases in the recording af local temperatures (where average temperatures are never measured) and their influence by local changing circumstances.

The article is a bit dusty after thirteen years, I would presume; I was unable to find the full text, but I did find the abstract. It turned out to be a letter to Nature. Not peer-reviewed as I understand it, but nevertheless of some quality, I believe.

The sentence I found most significant, given the context, is the one noting that these oscillations have obscured the greenhouse warming signal:

    An oscillation in the global climate system of period 65–70 years
    Michael E. Schlesinger & Navin Ramankutty
    Nature 367, 723 – 726 (24 February 1994)
    http://www.nature.com/nature/journal/v367/n6465/abs/367723a0.html

    In addition to the well-known warming of 0.5 °C since the middle of the nineteenth century, global-mean surface temperature records display substantial variability on timescales of a century or less. Accurate prediction of future temperature change requires an understanding of the causes of this variability; possibilities include external factors, such as increasing greenhouse-gas concentrations and anthropogenic sulphate aerosols, and internal factors, both predictable (such as El Niño) and unpredictable (noise). Here we apply singular spectrum analysis to four global-mean temperature records, and identify a temperature oscillation with a period of 65–70 years. Singular spectrum analysis of the surface temperature records for 11 geographical regions shows that the 65–70-year oscillation is the statistical result of 50–88-year oscillations for the North Atlantic Ocean and its bounding Northern Hemisphere continents. These oscillations have obscured the greenhouse warming signal in the North Atlantic and North America. Comparison with previous observations and model simulations suggests that the oscillation arises from predictable internal variability of the ocean–atmosphere system.

    Despite the title of Chapter 11 “Regional Climate Projections,” you seem to have thought that its purpose was detailing observations made in the past rather than making projections regarding the future. Now it would appear that the letter which you are citing in support of your highly idiosyncratic views was written by authors who clearly do not share them. You might wish to consider reading things more carefully before commenting on them.
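For readers curious what the singular spectrum analysis mentioned in the abstract involves, here is a minimal sketch on synthetic data. The series, window length, and rank are illustrative assumptions, not the authors' actual settings:

```python
import numpy as np

# Minimal SSA: embed the series in a trajectory matrix, decompose it
# with an SVD, and diagonal-average the leading components back into
# a smoothed series. Data are synthetic: trend + ~67-year cycle + noise.
rng = np.random.default_rng(2)
t = np.arange(140)                                     # 140 synthetic "years"
signal = 0.005 * t + 0.1 * np.sin(2 * np.pi * t / 67)  # trend + oscillation
series = signal + rng.normal(0.0, 0.05, t.size)

L = 40                                      # embedding window length
K = series.size - L + 1
X = np.column_stack([series[i:i + L] for i in range(K)])  # trajectory matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)

def reconstruct(rank):
    """Diagonal-average the leading-rank approximation back to a series."""
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    out = np.zeros(series.size)
    counts = np.zeros(series.size)
    for i in range(K):             # column i holds samples i .. i+L-1
        out[i:i + L] += Xr[:, i]
        counts[i:i + L] += 1
    return out / counts

smooth = reconstruct(3)   # trend plus the oscillation pair, noise suppressed
```

The point of the technique is that a quasi-periodic component like a 65-70 year oscillation shows up as a pair of leading singular vectors, separable from both the trend and the noise.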

  14. 114
    John Wegner says:

    As I posted in the Arctic sea ice extent thread, the fact that Arctic sea ice is at an historic low this year is more the result of data changes made this January.

    Prior to these changes (which may also contain the same kind of errors as this thread is about) 1995 was the lowest Arctic sea ice extent.

    Here is a BEFORE and AFTER of the changes made in the data this January (hopefully the adjustments do not contain errors.)

    http://img401.imageshack.us/img401/2918/anomalykm3.gif

  15. 115
    DocMartyn says:

#114 nice graphic, but wait until you see the present-day satellite pictures vs. the (modeled) satellite pictures of 100 years ago.

  16. 116
    Timothy Chase says:

    John Wegner (#114) wrote:

    As I posted in the Arctic sea ice extent thread, the fact that Arctic sea ice is at an historic low this year is more the result of data changes made this January.

    Prior to these changes (which may also contain the same kind of errors as this thread is about) 1995 was the lowest Arctic sea ice extent.

    Here is a BEFORE and AFTER of the changes made in the data this January (hopefully the adjustments do not contain errors.)

    http://img401.imageshack.us/img401/2918/anomalykm3.gif

    John, the following chart from 2005 would suggest that you are once again being creative…

    28 September 2005
    Sea Ice Decline Intensifies
    Figure 1: September extent trend, 1978-2005
    http://nsidc.org/news/press/20050928_trends_fig1.html

    I must admit, though, that this is better than your decimal place trick.

  17. 117
    Rich Briggs says:

    papertiger said:

    “I am wondering if the ocean is rising and flooding out Indonesia, why isn’t it flooding Cape Cod? are they not all at sea level?”

It ain’t quite so straightforward. Simply put, all sea level curves are local. This is for a few reasons. For one, the massive ice loads present at the last glacial max depressed the earth’s crust beneath them (intuitive) and made it bulge upward along their edges (maybe not so intuitive, but similar to the beam problems I’m sure you enjoyed in diff eq). The removal of these loads caused a global isostatic readjustment, still ongoing, as the mantle flows in toward the depressions. This means that some places (previously beneath loads) are still rising quickly, others (those on the bulges) are falling slowly, and still others (in the far field) have had complicated post-glacial rises then falls.

    Another important factor is that the global sea surface is nowhere near ‘flat’ in the way you might be picturing. Gravity varies spatially, as does the mass of seawater, resulting in hills and valleys on the sea surface. These move in time, sometimes very dramatically (e.g. El Nino, ENSO, etc.).

    Finally, many places are tectonically active, and interseismic strain accumulation (and subsequent release during fault rupture) can complicate measurements mightily. This is the case in much of Indonesia, for example.

    All said, measurements of sea level rise try hard to take these complications into account.

    I’m not sure but I think Cape Cod might be in the forebulge region of the Laurentide ice sheet, so today the coastline there may be sinking in part due to post-glacial isostatic adjustment. Thus the question to ask is if recent sea level rise at Cape Cod has increased in rate (because the rate of isostatic change should be nearly imperceptible at the century scale).

    It’s Saturday night – off to a party!

  18. 118
    Steven says:

    #59
    Yes, I do understand the irrelevance of ‘fitting’ a single frequency sinusoid, – that was my entire point (I work in science/engineering so understand Fourier analysis) – Sorry, my remark was perhaps obtuse. I was simply criticising the hyper-analysis of linear trend segments in this data set (at arbitrarily defined endpoints, with subjective methods of pre-filtering the data)
    #55
    Yes, I work in research and publish myself. Have you involved yourself in an examination of the GISS data sets? Things are not as transparent as you readily state. The fact that this error has been present for so many years (referenced so often), and required great effort to reverse-engineer, indicates this is a serious problem.

    It is not as if people did not care, and this was an error ‘missed’. A stream of scientists/engineers have been calling for a more transparent accounting, and we have been ignored under the absurd cackling of ‘Denialists!’

I think it is time for less social/political polarity on these issues, before things get entirely out of hand (if they have not already). The climate science community would do themselves no harm by applauding Stephen McIntyre’s – shall we say – tenacity, and perhaps working with him in addressing other concerns raised within the CA sphere (which has the participation of many scientists/engineers). This is a different world from traditional ‘publishing’ science – with serious ramifications for humanity and earth – therefore you will have to put up with us ‘outsiders’ having our say, auditing your data, questioning your results – even if it is outside the realm of our immediate expertise. This is the burden of having chosen such an important realm of scientific investigation – so be it.

    regards, Steve

  19. 119
    steven mosher says:

    RE 98. Ataraxia. Pyrrhonism. Wiki it.

    Hansen had several choices. See if you can name them all, grasshopper.

  20. 120
    Timothy Chase says:

    Steven (#118) wrote:

    It is not as if people did not care, and this was an error ‘missed’. A stream of scientists/engineers have been calling for a more transparent accounting, and we have been ignored under the absurd cackling of ‘Denialists!’

    Steve,

    There is a great deal of evidence besides US land-based surface temperatures which demonstrates that global warming is quite real and that it is accelerating.

I rattled off a list of twenty points (just before #57); one of the most dramatic, to my mind, is global glacier mass balance – by our best estimates, it threatens severe water shortages for over a billion people in Asia alone by the end of the century. There are twenty largely independent lines of evidence that the world is undergoing a process of global warming – a process which, I will add, can and will take on a life of its own as various positive feedbacks come into play.

    What is happening right now up in the arctic should be a pretty good indication of this. You guys might not take it seriously or might like to think it is all made up, but a number of national governments would appear to take it quite seriously – judging from how the US, Russia and Canada are trying to stake claims on the oil below – which until now have been out of reach.

However, you guys see the reading on one US temperature rise 0.02 degrees Celsius and the other fall 0.01 degrees Celsius, and are ready to say that global warming never happened. The mineralogist who crowned himself “climate auditor” is doing everything he can to encourage this. Somehow I think he is one of the last people in the world that one should ever call upon to keep a profession honest.

You guys like to complain that you need more “transparency.” You can download the climate models if you want to, and if you are willing to look a little, you can generally find the data. Heck, you can get the raw data on all US land stations and stations throughout the world along with charts if you are willing to visit:

    GISTEMP – EdGCM Dev – Trac
    http://dev.edgcm.columbia.edu/wiki/GISTEMP

    The guy in charge of the project is even trying to get pictures of all the stations for you and help you identify how close they are to urban centers. But somehow a bunch of guys snapping pictures who refuse to learn statistics, mathematics, physics or the simple fundamentals of the discipline they are attacking or even what information is actually available are going to tell climatologists how to conduct their science.

    Give me a break!

    I am no more of an expert than yourself, but at least I am willing to learn.

  21. 121
    Timothy Chase says:

    PS to #120

    Steve,

As the charts in Gavin’s essay above suggest, it is quite true that the United States has had it easier than much of the rest of the world so far. Don’t count on our luck continuing as this progresses, though. If the models are as on the money as Jim Hansen’s twenty-year predictions (which were based on a single run with a model much more primitive than today’s, mind you), we are going to make up for it.

By 2080, there will be a permanent dust bowl forming in the southwest and another in the southeast. We won’t be able to grow wheat south of the Canadian border. Wheat. In a world that will be facing severe water shortages, greatly reduced agricultural harvests and the depletion of fish populations.

    This stuff is serious – and you guys are doing everything you can to stand in the way of the science because its telling you what you don’t want to hear. You want to continue with business as usual – even though we know where that will lead.

    Oh well.

Maybe tomorrow I can quit trying to figure out the twists and turns of human psychology and focus on learning more of the science. Genuine human achievement – in a world which may soon be seeing a great deal less of it. I guess we will see.

    Time for bed.

  22. 122
    Carl says:

    Tim:”It turned out to be a letter to Nature. Not peer-reviewed as I understand it, but nevertheless of some quality, I believe.”

    Hmm? Letters to Nature are certainly peer-reviewed, and held to the highest standards as they are the most well-read of all scientific publications.

  23. 123
    Tim McDermott says:

    I have been musing on why the “we need to audit the scientists” meme offends me so much. An exemplar of the meme from SCP in #23:

    I’m not confident in the US data, let alone those from the rest of the world. Peer review is for science. The IPCC can do all the peer review they want. That doesn’t cut it for economies. Economies use accounting and audits. Failure to do so leads to situations like Enron and WorldComm. If we just want to do science, stick with peer review. If someone wants to influence economies, it’s long past time for auditors to start poking around.

    If NASA doesn’t want to publish source code and all data, at a minimum maybe they should hire an accounting firm to audit climate related data, methods and processes and to issue a public report on the quality they find (and that firm should hire Steve McIntyre ; -).

Leaving aside the irony that both Enron and WorldCom were audited every year by an outside accounting firm, and the implied comparison of climate scientists with convicted felons, I have to wonder who the audit memers think they could hire to perform a valid audit? I’ve done enough data analysis to know that you have to know the field to know which data manipulations are valid and which aren’t. How does an accountant, say, decide which adjustments to the temperature records are acceptable? Accountants have GAAP, FASB, and the IRS code and regulations to guide them when they are checking a company’s books. Climatology doesn’t have any equivalent. Except, perhaps, for getting a PhD in Physics, Chemistry, Biology, Oceanography, or Geology and then doing post-doc time with climatologists. In the same way that a non-accountant is going to get hopelessly lost trying to audit ExxonMobil’s books, so will non-climatologists trying to audit climatology.

There is another dimension of difficulty with the audit meme. Unless the auditors are at least as skillful in climatology as those being audited, there is a powerful draw for the auditor to start seeing what is being audited as a tutorial: “this is how to solve this kind of problem and that technique works here” sort of attitude. And an auditor who falls into this trap will never find a problem.

    Finally, you are not likely to entice a skillful climatologist into auditing someone else’s work.

Since we all make mistakes, how should we verify the work of climate scientists? Not surprisingly, science already has a mechanism: replicate results. The most effective way to check a scientist’s work is to try to reproduce what he did. And guess what, the climate science community is already doing that. Is there only one historical temperature data set? No. Is there only one GCM? No. Are there lots of studies that come up with a hockey stick? Yes.

    So the answer to the audit meme is that old-time science: reproduce results. The climate audit folks are free to build their own temperature history and publish the results for scrutiny. They are free to build their own models and run them out to 2100 to see what they predict. That’s science.

    (anybody want to start a pool on how long it is until someone starts blaming black helicopters on the IPCC?)

  24. 124
    John Wegner says:

    The Before and After sea ice extent images I linked to in #114 are from the Cryosphere Today.

    The Before image is the saved version from the WayBackMachine in December 2006 (before all the data was changed), while the after is today’s graph (so it contains six months more data.)

  25. 125
    John Wegner says:

    Here are two Visible satellite images from a few hours ago of the NorthWest Passage (the satellites don’t centre right over it so you need a few different pictures.)

    You could probably get through with a ship right now.

    http://rapidfire.sci.gsfc.nasa.gov/realtime/single.php?2007223/crefl2_143.A2007223202001-2007223202500.4km.jpg

    http://rapidfire.sci.gsfc.nasa.gov/realtime/single.php?2007223/crefl2_143.A2007223184001-2007223184500.4km.jpg

  26. 126
    Ron Cram says:

    Gavin,
    I still have not seen an explanation for the adjustment to years prior to 2000. The error McIntyre found should only have affected data for years 2000-2007. Why the adjustment to 1998 and 1934?

    [Response: The corrections were of different signs in different stations. The urban adjustment uses the rural trends (which may have changed slightly) to correct urban stations and therefore the correction may differ slightly from what was used before. That made quite a few 0.01 or 0.02 differences before 2000. Compare this (from May) with this (from July). Note as well that occasionally there are corrections and additions to the GHCN data for earlier years that get put in at the same time as the monthly update. Thus this magnitude of difference - fun though it might be to watch - is not climatologically significant. - gavin]

  27. 127
    Lynn Vincentnathan says:

    If 1934 had been the warmest year globally, we may have expected that 5000 year old Alps ice man to have melted back then, instead of in the early 90s. Where’s their thinking cap? Though, of course, local weather can be different from the global average.

  28. 128
    Jay says:

    I read in climateaudit.org that GISS does not share its code/algorithm for “fixing” the data. Gavin can you inform us why so? Just curious.

    [Response: The algorithms are fully described in the papers - basically you use the rural stations trend to set the urban stations trend. It's a two-piece linear correction, not rocket science. - gavin]
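The two-piece linear correction Gavin describes might be sketched as follows. The data, hinge year, and fitting details here are illustrative assumptions, not the actual GISTEMP code, which is described in the papers:

```python
import numpy as np

# Sketch: fit the urban-minus-rural difference with two line segments
# joined at a knee year, then subtract that fit from the urban record.
# All series below are synthetic.
years = np.arange(1900, 2007)
rng = np.random.default_rng(3)
rural = 0.006 * (years - 1900) + rng.normal(0, 0.1, years.size)
# Urban station: same climate signal plus urban warming after 1950.
urban = rural + np.where(years > 1950, 0.01 * (years - 1950), 0.0)

diff = urban - rural
knee = 1950  # hinge year (chosen by the fit in a real procedure)
# Design matrix for a continuous two-segment line: intercept, base
# slope, and an extra slope that turns on after the knee.
A = np.column_stack([
    np.ones(years.size),
    years - years[0],
    np.clip(years - knee, 0, None),
])
coef, *_ = np.linalg.lstsq(A, diff, rcond=None)
adjusted_urban = urban - A @ coef

# After adjustment the urban trend should match the rural trend.
resid_trend = np.polyfit(years, adjusted_urban - rural, 1)[0]
print(abs(resid_trend) < 1e-3)  # True
```

The design choice here (a hinged line rather than a single slope) lets the correction handle urbanization that begins partway through a station's record.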

  29. 129
  30. 130
    Timothy Chase says:

    John Wegner (#125) wrote:

    The Before and After sea ice extent images I linked to in #114 are from the Cryosphere Today.

    The Before image is the saved version from the WayBackMachine in December 2006 (before all the data was changed), while the after is today’s graph (so it contains six months more data.)

    John,

    Regarding the sea ice anomalies…

    Ok. The images which you are using are the correct images.

    However, I would look at them very closely. For example, the low that you are picking out for 1996 (actually the end of 1995) is especially narrow. Given the size of the graphic, it isn’t that great of a surprise that it isn’t showing up from one year to the next. I picked out the low for the end of 2005, put my finger on it and allowed your graphic to switch from one year to the next.

    Blammo! Same point.

    Principally what you are seeing is compression as the result of including one additional year in the same size graphic. It gets rather dramatic in terms of the difference in the images as you get towards the end, but this has to do with how quickly things are progressing in the arctic. Incredible.

    Now I believe you can find all of the data files on the web. However, what I am seeing is principally gzs in ftps. And there is a great deal of data in those files. Additionally, the methods used to process those files into the graphics that you see at various sites is described in some detail, although perhaps not as much detail as I might like.

    You’ve got to remember: the people who process the files and produce the graphics have jobs. The graphics are essentially labors of love. And to describe them in the detail that you or I might like would be time taken away from something else. If you want them to produce the level of detail, step by step instructions for the reproduction of their graphics, this will take resources.

    But no doubt there is more that would be required to make this genuinely available to everyone with a net connection. For example, you would want to have the software available for free. But it would be pretty big. So preferably the software should reside on an internet server, perhaps as Perl. This way anyone would have access to it. But then, since a fair number of people might be using it at the same time, you would want the servers to be able to handle the traffic. More resources. Then you should have the actual code be downloadable. More resources.

    Are you beginning to get the picture? What all the “auditors” out there want is to have all of the resources dedicated to making it possible for them to duplicate what the experts do point-for-point and pixel-by-pixel. But they want these resources without any expansion of the budgets for the relevant agencies and organizations. And this means time and money taken away from a great many other things.

    Anyway, someone who is more familiar with the rendering of data or of the statistical methods which are used might be able to say more. I myself have only a basic understanding. But I do know how to put my finger on the screen. And I believe that little more than this is required to understand the differences between the two images that you see.

    My apologies for suggesting that you might have manipulated the images. Clearly you haven’t. I was just remembering the traveling decimal point from a few weeks back.

  31. 131
    Timothy Chase says:

    Count Iblis (#128) wrote:

    European Temperature records are also wrong

    Yep.

    Or to be more precise, they were “wrong.”

    Following your link I see that we had previously underestimated how bad things currently are relative to the past – because we had overestimated how bad things were in the nineteenth century. This means that our current trajectory is worse than we thought and that things are likely to get worse than we thought sooner than we thought.

    European heat waves double in length since 1880

    New data published by NCCR Climate researcher Paul Della-Marta show that many previous assessments of daily summer temperature change underestimated heat wave events in Western Europe. The length of heat waves on the continent has doubled and the frequency of extremely hot days has nearly tripled in the past century. In their article in the Journal of Geophysical Research–Atmospheres Della-Marta and his colleagues from the University of Bern present the most accurate measures of European daily temperatures ever. They compiled evidence from 54 high-quality recording locations from Sweden to Croatia and report that heat waves last an average of 3 days now–with some lasting up to 13 days–compared to an average of around 1.5 days in 1880. “These results add more evidence to the belief among climate scientists that Western Europe will experience some of the highest environmental and social impacts of climate change,” Della-Marta said.

    NCCR Climate
    http://www.nccr-climate.unibe.ch/

    Nice spin though, Count Iblis.

  32. 132
    Ron Cram says:

    Gavin,

    In #126 above, I asked about the changes prior to the year 2000 when the error McIntyre found related to years 2000-2007. I was, of course, referring to changes to U.S. temp anomalies as that was the dataset that NASA changed. Your answer did not directly address why years prior to 2000 should be affected. Are you saying that in the corrected dataset NASA lumped in some other adjustments as well as the adjustments required to fix the error McIntyre found? Also, you linked to global temp anomalies which does not address the question. For US temp anomalies, the old version is found at http://web.archive.org/web/20060110100426/http://data.giss.nasa.gov/gistemp/graphs/Fig.D.txt and the new version is at http://data.giss.nasa.gov/gistemp/graphs/Fig.D.txt As you can tell by comparing the two, a large number of adjustments were made in years prior to 2000, but (as far as I know) no explanation for these changes has been made. Has NASA explained these changes somewhere so I can read more about them?

    [Response: Actually, that old version is from 2006. I can't say from this whether the changes were from the latest correction, or simply additions/corrections to the database from GHCN. If they are from the latest correction, it's because the GISS urban adjustments come after the step we are discussing now; those changes could therefore propagate down the line. Since the adjustments are linear in nature, the further back in time you go, the more impact they'll have, I suppose. - gavin]
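    (The other fix discussed in this post, removing the offset between two sources of the same station record by aligning them over their period of overlap, is also simple to sketch. The station values below are invented; this is the idea only, not the actual GISTEMP code.)

```python
def align_on_overlap(old_series, new_series):
    """Shift `new_series` so it agrees with `old_series`, on average,
    over the years the two sources share.

    Both arguments are dicts mapping year -> temperature. The constant
    offset is estimated from the overlap and subtracted from the newer
    source, so the jump between the two sources disappears.
    """
    overlap = sorted(set(old_series) & set(new_series))
    if not overlap:
        raise ValueError("no overlapping years to estimate the offset from")
    offset = sum(new_series[y] - old_series[y] for y in overlap) / len(overlap)
    return {year: temp - offset for year, temp in new_series.items()}
```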

  33. 133
    Jay says:

    Gavin, thanks for the links. I looked at Hansen’s paper on the urban correction/surface temperature (my first cursory look at a climate science paper… makes Ed Witten’s papers look terse). Seems pretty simple, I agree (even simpler than Classical Mechanics, i.e., rocket science). Hope this is not too simplistic, and that your models are predicting future trends accurately and are verified repeatedly, as physicists usually expect. Anyway, good luck and thanks for responding to curious strangers.

  34. 134
    IL says:

    Gavin – response in #128 and earlier – I can’t tell whether you are being deliberately difficult or genuinely don’t understand some people’s concerns.

    Yes, the papers describe what the algorithms SHOULD do but does the ACTUAL CODE do exactly what you think it does? Clearly in one instance (at least), it did not.

    You seem to imply that the code is so simple that it does not need to be released, it only does A+B=C but if the actual code is released, anyone can verify that it correctly does A+B=C. Until the actual code and the basic station data that it processes are released, you are going to get some people wondering if there are more mistakes and they want to see exactly how corrections to individual data sets are applied. Science should be completely transparent and reproducible – so show people exactly how you arrive at the actual numbers. If the coding is fine then you are not vulnerable on that accusation – sure, people will then argue whether the UHI corrections (for example) are correctly applied or of the right magnitude but that is how science advances. If there are further coding problems then swallow your pride and thank whoever finds them for helping everyone understand the science better.

    Judith Curry makes excellent points in #69 – even if you dismiss others as skeptics or deniers, please listen to her!

  35. 135
    steven mosher says:

    RE 101.

    Timothy is not practicing “deconstruction.” A deconstruction of my text would involve a variety of techniques to show how distinctions I used or binary oppositions I used were undermined by themselves in the text. Or perhaps how the implications of various tropes (figures of speech, like metaphor) served to undermine the “stability” of the text, leaving one in a state of not knowing or not being able to fix or determine the meaning of the text. In short, deconstruction is a method of commentary that multiplies the meanings of texts and shows how they cannot be controlled by method.
    That’s the way Derrida explained it to me, although I find it ironic that his anti-method has come to mean “any kind of critique.”

  36. 136
    ks says:

    “Yes, the papers describe what the algorithms SHOULD do but does the ACTUAL CODE do exactly what you think it does?”

    That would be the purpose of repeating the experiment yourself. The code would not need to be released to allow another person the ability to verify it. Really not a hard concept.

    Also, the above corrections were not code errors, as far as I’ve read.

  37. 137
    Mark UK says:

    The happiness of the denialists in this case is the same as the happiness of the Creationists/ID crowd when a part of evolutionary theory turns out to be incorrect or incomplete. They can use it to say that the scientists were wrong… This is actually a fantastic example of science at work: somebody found the error, and it was corrected. Good.

  38. 138
    John Mashey says:

    #134: IL

    I’m all for as much openness as we can get, but:

    IL: how much software have *you* written, QA’d, documented, released, and made available *for free* (or close)? Have you done the work to make code portable among machines? Do you ship extensive test suites?

    Do you understand that all this costs money and time?

    Please make a case that you actually understand this well enough to have a non-naive discussion. For instance, I think a fine topic for RC would be:

    + start with the general issue of current science publishing that involves computer generated results. What’s the tradeoff between:
    - (one extreme) just publishing the results, like in the old days and
    - (other extreme) provide complete, portable, well-software-engineered source code, documentation, makefiles, extensive test suites, with code tested on a variety of platforms ranging from individual PCs (Windows, Macs, Linux, at least), through supercomputers.
    [This takes a lot of work, and not all science researchers have professional-grade software-engineering resources available. Should that be required by journals?]

    + apply this in the climate research world. How much effort does this take? What do people do?

    [There are serious arguments in other disciplines about similar issues, i.e., there are real issues about which reasonable, informed people can disagree.]

    My personal opinion is that GISS does a very good job on software engineering, release, and accessibility … but then I only have 40 years experience with such things, so maybe I’m still naive about it, and could certainly be convinced otherwise by informed arguments.

    IL: Assuming you are American, how much more taxes are you willing to pay to increase the resources for government agencies to put more of their code professionally online? There are lots of other government numbers generated by computers, and it is by no means obvious that spending $X more for GISS to do even better would be more generally useful than spending $X on agencies whose results are also important, but whose code is far less accessible.

    If you’re not American, you need to convince those of us who are that we should spend more money on this.

    For instance, total areas, or world’s 510,065,600 km^2:
    17,075,200 km^2 Russia (3.3%)
    9,984,670 km^2 Canada (1.96%)
    9,826,630 km^2 USA (1.93%)
    2,166,086 km^2 Greenland (0.42%)
    14,056,000 km^2 Arctic Ocean (2.76%)
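    (Those shares are easy to check against the listed areas:)

```python
# Areas in km^2, as listed above.
WORLD = 510_065_600
areas = {
    "Russia": 17_075_200,
    "Canada": 9_984_670,
    "USA": 9_826_630,
    "Greenland": 2_166_086,
    "Arctic Ocean": 14_056_000,
}
shares = {name: 100.0 * a / WORLD for name, a in areas.items()}
```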

    If I *really* wanted to improve the accuracy of *global* numbers, I’m not sure I’d spend most of my effort chasing USA numbers around.

    In particular, if I were Canadian, and I really wanted better numbers, I’d be chasing:
    http://climate.weatheroffice.ec.gc.ca/Welcome_e.html
    http://www.msc-smc.ec.gc.ca/ccrm/bulletin/national_e.cfm

    The latter says “spring temperatures” +1.5C over 60 years, i.e., ~.25C/decade. Is that accurate? Has the code been checked? Canada is slightly bigger than the USA and therefore carries slightly more weight in global calculations, although of course each are only ~2%. Canada has plenty of seriously-rural weather stations. Has any auditor visited them, lately? Russia is 3.3%, but Canada is probably easier to check, especially for Canadians.

    If one *really* wants to get more accurate calculations, Canada is slightly more important (by area) than the USA, although, of course, the numbers might not be to everyone’s taste, since climate science expects Canada to warm faster than the US.

    Note: none of this is intended to ding Canada: I actually care about Canadian temperatures and precipitation because we own ski condos in B.C. and pay taxes up there.

  39. 139
    DavidU says:

    #134
    In my own field it is actually considered best NOT to release code, but to give algorithms and data. The simple reason is that code is hard to read, and mistakes in someone else’s code are easy to miss. Instead, it is much better if the person wishing to verify a result writes his/her own code directly from the described algorithm. If the new program does not give the same result, then either the algorithm or one of the two programs is incorrect.
    Normally it is easier to write your own new code than to completely understand the other person’s code, too.
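    (DavidU’s strategy, two programs written independently from the same described algorithm and compared on the same data, can be illustrated with a toy anomaly calculation. The data and baseline period here are invented.)

```python
def anomalies_v1(temps, base_start, base_end):
    """Implementation A: anomalies relative to a baseline-period mean."""
    base = [t for y, t in temps if base_start <= y <= base_end]
    mean = sum(base) / len(base)
    return [(y, t - mean) for y, t in temps]

def anomalies_v2(temps, base_start, base_end):
    """Implementation B: the same described algorithm, written independently."""
    total = 0.0
    count = 0
    for year, temp in temps:
        if base_start <= year <= base_end:
            total += temp
            count += 1
    baseline = total / count
    return [(year, temp - baseline) for year, temp in temps]
```

    If the two disagree, either the algorithm description or one of the programs is wrong; agreement alone does not prove both are right, of course, since both could share the same mistake.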

  40. 140
    Timothy Chase says:

    steven mosher (#135) wrote:

    Timothy is not practicing “deconstruction.” A deconstruction of my text would involve a variety of techniques to show how distinctions I used or binary oppositions I used were undermined by themselves in the text. Or perhaps how the implications of various tropes (figures of speech, like metaphor) served to undermine the “stability” of the text, leaving one in a state of not knowing or not being able to fix or determine the meaning of the text. In short, deconstruction is a method of commentary that multiplies the meanings of texts and shows how they cannot be controlled by method.

    That is hard deconstructionism – well-explained in synoptic form.

    From what I understand, soft deconstructionism may simply distinguish between meaning and significance, where the meaning of the text is independent of either the author or the reader, but is dependent upon a shared language, but where the significance will be dependent upon the reader’s context, which will include their historical context, the issues of the day, their values and standards. There will be ambiguities, but these can generally be resolved by reference to the larger context within the original text. But of course the significance, even for the author, may best be identified and explored by bringing in other elements.

    However, the methods of hard deconstructionism typically consist of tearing one element or another out of its original context so that the reader will no longer refer to the original text, but will instead focus on the secondary text’s elaborate schemes of overinterpretation, couched in ambiguous, arcane and often highly idiosyncratic language, in which the interpreter seeks to impose his or her own (often political) agenda upon the original text, or, alternatively, seeks to show that the written or even spoken word offers only the illusion of communicating the intent of the writer or speaker.

    Of course, if the latter were true, the hard deconstructionist would be entirely incapable of communicating it. What we are speaking of at this point is simply an elaborate attempt to “demonstrate” the self-referentially incoherent position of radical skepticism with respect to all intended meaning.

  41. 141
    Rod B says:

    Timothy (56), sorry, but assembling whole bunches of possible indications, most of which have measurement errors greater than the indicated trend, and most of which come with very mushy cause and effect, into an undeniable global truth is every bit as bad, scientifically, as skeptics (but not me) justifying our position because the AGW theory is not 100% proven in every aspect and detail and tied up in nice little packages with pretty bows. You forgot “cherry trees blossomed earlier last year” as another undeniable “proof”.

  42. 142
    David B. Benson says:

    Re #139: DavidU — I agree that writing my own code is often easier than understanding someone else’s.

    However, when the two programs do not give the same answer, it is oft the case that both are wrong!

  43. 143

    To keep this in proper perspective, the area of the U.S. is 3.6 million square miles (9.3 million sq. km) and the surface of the globe is 197 billion (with a B) sq. mi., or 509.6 billion sq. km. The ratio is 1.8×10^-5: too small for any global consequences. Global warming isn’t affected.

    We’ve learned once again that to err is human, which is why peer review and constructive criticism are so important in the sciences, as well as in other areas of human endeavor.

  44. 144
    FurryCatHerder says:

    Re #138:

    First, for my credentials — over the years I’ve personally written somewhere on the order of 500,000 lines of code. I’ve also designed another several hundred thousand, reviewed several million, and QA’d some unknown amount. There are a lot of people here who use code I’ve written over the years. Some open source, some proprietary.

    ANYWAY, it doesn’t cost anything at all to open source software, except some FTP bandwidth, and it greatly improves both the quality and functionality of the code in the long run. And frankly, if the code is so hard to comprehend that no one else with programming skills can read it (I have 28 years in the industry), the programmers need to be fired.

    If someone were asking me my opinion, I’d say that the people who are spending the money don’t want to lose their budgets for either software developers or giant computers. That’s one of the biggest reasons software is kept closed.

  45. 145
    steven mosher says:

    I want to contrast the stand-up attitude of Ruedy, the good sense of Dr. Curry, and Gavin’s openness with something quite different.

    Here is the start of an email, written by a principal in this matter. I will not name him.

    “Recently it was realized that the monthly more-or-less-automatic updates of our global
    temperature analysis (http://pubs.giss.nasa.gov/abstracts/2001/Hansen_etal.html) had a flaw in
    the U.S. data. In that (2001) update of the analysis method (originally published in our 1981
    Science paper – http://pubs.giss.nasa.gov/abstracts/1981/Hansen_etal.html) we included
    improvements that NOAA had made in station records in the U.S., their corrections being based
    mainly on station-by-station information about station movement, change of time-of-day at
    which max-min are recorded, etc.”

    It was realized. Mistakes were made.

    Some examples from other areas and how they responded:

    http://www.the-scientist.com/news/display/39805/

    http://pineda-krch.blogspot.com/2007/06/show-me-code.html

    Quoting from the latter to show the guy at Columbia how it is supposed to be done.

    “Dear Colleagues, This is to inform you that we must retract Hall, B.G. and S. Salipante. 2007. Measures of clade confidence do not correlate with accuracy of phylogenetic Trees. PLoS Comp. Biol 3: (3) e51. As a result of a bug in the Perl script used to compare estimated trees with true trees, the clade confidence measures were sometimes associated with the incorrect clades. The error was detected by the sharp eye of Professor Sarah P. Otto of the University of British Columbia. She noticed a discrepancy between the example tree in Figure 1B and the results reported for the gene nuoK in Table 1. At her request I sent her all ten nuoK Bayesian trees. She painstakingly did a manual comparison of those trees with the true trees and concluded that for that data set there was a strong correlation between clade confidence and the probability of a clade being true. She suggested to me the possibility of a bug in the Perl script. Dr. Otto put in considerable effort, and I want to acknowledge the generosity of that effort.”

  47. 147
    sam says:

    Land area Canada 9.09 million square km
    Land area US 9.16 million square km

  48. 148
    Steven says:

    #120
    Thanks Timothy for your response.
    My goal is not to debate every point on your list, but rather to assert that one can be an AGW skeptic and still of sound mind. You presume that someone with a different view on the merit of this science must have some sinister motive – this is a polarising and simplistic view.

    In the scientific community I am part of, members would not mind me proposing the following refutations to your first two points, as it would provide them an opportunity to correct my misunderstanding of the underlying ideas.

    1. We have surface measurements in the United States which show an accelerating trend towards higher temperatures.

    If I low pass filter the US surface temperature time series and examine the derivative of the signal, I see no evidence for this accelerating trend. The rate of recent increase (and length of its duration) is comparable to earlier periods in the record.

    2. These are temperature measurements being taken by planes and satellites, and they show that the troposphere is warming – just as we would expect.

    I believe the latest data does not support this point (for the tropics) and certainly does not match climate modelling predictions.

    Christy J. R., W. B. Norris, R. W. Spencer, J. J. Hnilo (2007), Tropospheric temperature change since 1979 from tropical radiosonde and satellite measurements, J. Geophys. Res., 112, D06102, doi:10.1029/2005JD006881

    “Several comparisons are consistent with a 26-year trend and error estimate for the UAH LT product for the full tropics of +0.05 ± 0.07, which is very likely less than the tropical surface trend of +0.13 K decade−1.”

    regards, Steve
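    (The check described in point 1 can be sketched like this; a centered moving average stands in for the low-pass filter, and the series below is synthetic.)

```python
def moving_average(series, window):
    """Crude low-pass filter: centered moving average; `window` must be odd."""
    half = window // 2
    return [
        sum(series[i - half:i + half + 1]) / window
        for i in range(half, len(series) - half)
    ]

def first_difference(series):
    """Discrete derivative: change between consecutive smoothed values."""
    return [b - a for a, b in zip(series, series[1:])]
```

    Applied to a purely linear series, the recovered rate of change is constant; applied to a real temperature record, one would compare the recent values of the derivative against earlier periods, which is the comparison Steve describes.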

  49. 149
    Ron Taylor says:

    Re 140: Thank you, Timothy. Well said.

  50. 150

    As the former mayor of New York once said, “When I make a mistake, it’s a beaut.” The surface area of Earth is 197 million square miles (with an M). What are three orders of magnitude among friends? It still holds that global climate figures aren’t affected.
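    (With the corrected figure, the arithmetic gives roughly 1.8 percent, about three orders of magnitude larger than the 1.8×10^-5 quoted in #143, though still a small fraction of the globe:)

```python
US_AREA_SQMI = 3.6e6      # U.S. area from comment 143
EARTH_AREA_SQMI = 1.97e8  # 197 million square miles, as corrected here

ratio = US_AREA_SQMI / EARTH_AREA_SQMI  # roughly 0.018, i.e. about 1.8%
```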

