
Antarctic warming is robust

Filed under: — gavin @ 4 February 2009

The difference between a single calculation and a solid paper in the technical literature is vast. A good paper examines a question from multiple angles and finds ways to assess the robustness of its conclusions to all sorts of possible sources of error — in input data, in assumptions, and even occasionally in programming. If a conclusion is robust over as much of this as can be tested (and good peer reviewers generally insist that this be shown), then the paper is likely to stand the test of time. Although science proceeds by making use of the work that others have done before, it is not based on the assumption that everything that went before is correct. It is precisely because there is always the possibility of error that so much rests on ‘balance of evidence’ arguments that are mutually reinforcing.

So it is with the Steig et al paper published last week. Their conclusions, that West Antarctica is warming quite strongly and that even Antarctica as a whole has been warming since 1957 (the start of systematic measurements), were based on extending the long-term manned weather station data (42 stations) using two different methodologies (RegEM and PCA) to interpolate to undersampled regions, using correlations from two independent data sources (satellite AVHRR and the Automated Weather Stations (AWS)), validations based on subsets of the stations (15 vs 42 of them), etc. The answers in each of these cases are pretty much the same; thus the issues that undoubtedly exist (and that were raised in the paper) — with satellite data only being valid on clear days, with the spottiness of the AWS data, with the fundamental limits of the long-term manned weather station data itself — aren’t that important to the basic conclusion.

One quick point about the reconstruction methodology. These methods are designed to fill in missing data points using as much information as possible concerning how the existing data at that point connects to the data that exists elsewhere. To give a simple example, if one station gave readings that were always the average of two other stations when it was working, then a good estimate of the value at that station when it wasn’t working would simply be the average of the two other stations. Thus it is always the missing data points that are reconstructed; the process doesn’t affect the original input data.
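As a toy illustration of that idea (the stations and numbers below are invented, and this is far simpler than the actual RegEM machinery), suppose station C always reads as the average of stations A and B when it is working; then its gaps can be filled from A and B while every observed value is left untouched:

```python
import numpy as np

# Hypothetical stations: C historically equals the average of A and B.
rng = np.random.default_rng(0)
a = rng.normal(-20.0, 2.0, 120)   # 120 monthly values at station A
b = rng.normal(-25.0, 2.0, 120)   # 120 monthly values at station B
c = (a + b) / 2.0                 # C follows the A/B average...
c[40:60] = np.nan                 # ...but has a gap of missing months

# Reconstruct only the missing points; observed values are untouched.
filled = c.copy()
gap = np.isnan(filled)
filled[gap] = (a[gap] + b[gap]) / 2.0
```

Only the entries flagged as missing are replaced; the rest of the series is exactly the input data, which is the point being made above.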

This paper clearly increased the scrutiny of the various Antarctic data sources, and indeed within the week, errors were found in the records from the AWS sites ‘Harry’ (West Antarctica) and ‘Racer Rock’ (Antarctic Peninsula) stored at the SCAR READER database. (There was a coincidental typo in the listing of Harry’s location in Table S2 in the supplemental information to the paper, but a trivial examination of the online resources — or the paper itself, in which Harry is shown in the correct location (Fig. S4b) — would have indicated that this was indeed only a typo). Those errors have now been fixed by the database managers at the British Antarctic Survey.

Naturally, people are interested in what effect these corrections will have on the analysis of the Steig et al paper. But before we get to that, we can think about some ‘Bayesian priors‘. Specifically, given that the results using the satellite data (the main reconstruction and source of the Nature cover image) were very similar to those using the AWS data, it is highly unlikely that a single station revision will have much of an effect on the conclusions (and clearly none at all on the main reconstruction, which didn’t use AWS data). Additionally, the quality of the AWS data, particularly any trends, has been frequently questioned. The main issue is that since they are automatic and not manned, individual stations can be buried in snow, drift with the ice, fall over etc. and not be immediately fixed. Thus one of the tests Steig et al. did was a variation of the AWS reconstruction that detrended the AWS data before using them – any trend in the reconstruction would then come solely from the higher quality manned weather stations. The error in the Harry data record gave an erroneous positive trend, but this wouldn’t have affected the trend in the detrended-AWS reconstruction.
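A minimal sketch of that detrending step (the function and numbers here are my own illustration, not the paper's code): subtract the least-squares linear fit from an AWS series before it is used, so that any trend in the final reconstruction has to come from the manned stations.

```python
import numpy as np

def detrend(series, t):
    """Subtract the least-squares linear fit from `series`."""
    slope, intercept = np.polyfit(t, series, 1)
    return series - (slope * t + intercept)

# Synthetic AWS-like series: a spurious warming trend plus noise
t = np.arange(600, dtype=float)  # 50 years of monthly steps
aws = 0.002 * t + np.random.default_rng(1).normal(0.0, 1.0, t.size)
aws_dt = detrend(aws, t)

# After detrending, the fitted slope of the series is numerically zero,
# so this series can no longer contribute any trend of its own.
residual_slope = np.polyfit(t, aws_dt, 1)[0]
```

An erroneous trend in an input like Harry's is removed by this operation before it can influence the result, which is why the detrended test is informative here.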

Given all of the above, the Bayesian prior would therefore lean towards the expectation that the data corrections will not have much effect.

The trends in the AWS reconstruction in the paper are shown above. This is for the full period 1957-2006 and the dots are scaled a little smaller than they were in the paper for clarity. The biggest dot (on the Peninsula) represents about 0.5ºC/dec. The difference that you get if you use detrended data is shown next.

As we anticipated, detrending the Harry data affects the reconstruction at Harry itself (the big blue dot in West Antarctica), reducing the trend there to about 0.2°C/dec, but there is no other significant effect (a couple of stations on the Antarctic Peninsula show small differences). (Note the scale change from the preceding figure — the blue dot represents a change of 0.2ºC/dec).

Now that we know that the trend (and much of the data) at Harry was in fact erroneous, it’s useful to see what happens when you don’t use Harry at all. The differences with the original results (at each of the other points) are almost undetectable. (Same scale as immediately above; if the scale in the first figure were used, you couldn’t see the dots at all!).

In summary, speculation that the erroneous trend at Harry was the basis of the Antarctic temperature trends reported by Steig et al. is completely specious, and could have been dismissed by even a cursory reading of the paper.

However, we are not yet done. There was erroneous input data used in the AWS reconstruction part of the study, and so it’s important to know what impact the corrections will have. Eric managed to do some of the preliminary tests on his way to the airport for his Antarctic sojourn and the trend results are as follows:

There is a big difference at Harry of course – a reduction of the trend by about half – and an increase of the trend at Racer Rock (the error there had given an erroneous cooling), but the other points are pretty much unaffected. The differences in the mean trends for Antarctica or WAIS are very small (around 0.01ºC/decade), and the resulting new reconstruction is actually in slightly better agreement with the satellite-based reconstruction than before (which is pleasing of course).
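For reference, a "°C per decade" trend like the ones quoted here is typically just an ordinary least-squares slope over monthly anomalies, rescaled from per-month to per-decade. A sketch with synthetic numbers (not the actual reconstruction data):

```python
import numpy as np

def trend_per_decade(anomalies):
    """OLS slope of a monthly series, rescaled to degrees per decade."""
    months = np.arange(anomalies.size)
    slope_per_month = np.polyfit(months, anomalies, 1)[0]
    return slope_per_month * 12 * 10  # per month -> per year -> per decade

# Synthetic 50-year monthly series with a true trend of 0.1 degC/decade
rng = np.random.default_rng(2)
series = (0.1 / 120) * np.arange(600) + rng.normal(0.0, 0.5, 600)
est = trend_per_decade(series)  # should land near 0.1 degC/decade
```

With realistic noise levels the estimate scatters around the true value, which is why small differences of order 0.01ºC/decade between reconstructions are well within the uncertainty.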

Bayes wins again! Or should that be Laplace? ;)

Update (6/Feb/09): The corrected AWS-based reconstruction is now available. Note that the main satellite-based reconstruction is unaffected by any issues with the AWS stations since it did not use them.

375 Responses to “Antarctic warming is robust”

  1. 151
    AL says:

    Dear sir,

    I commented earlier on the double entry of Racer Rock on your “corrected” graph. Having looked further at the graphs, I now realise that the second entry is actually the result from your “difference” graph which has erroneously made its way onto the “corrected” graph. I realise that these are initial results, but maybe you should modify them so that the graph points lie on the graph in question and not on another graph?



    [Response: Actually I added the extra point because the initial bottom panel didn’t show the RR point at all. I agree it’s not perfect, but given that the creator of the graph is now at one of the ends of the Earth, it will have to do for now. – gavin]

  2. 152
    dhogaza says:

    As for SAP, one of the database variants is open source

    Yes, SAP open sourced it after the competition (Oracle, mostly) kicked it to the curb.

    The important part of SAP – i.e. the part that provides revenue – is not open source.

    Oracle provides and contributes to many open source projects, including InnoDB, the transactional database engine for MySQL (the M in LAMP).

    However, nothing critical to Oracle’s commercial success is open source. They leverage the open source world to increase the value of their closed source products, enhancing their revenue.

    As for the code for things like MS Vista, this is available under certain circumstances to Governments

    Which is analogous to how scientists share information with each other.

    Note that none of the companies above do the kind of open sourcing that the denialosphere is demanding of climate (but not other) science.

  3. 153
    dhogaza says:

    SOX Certification requires a ton of testing for correctness.

    Once again, testing is not a proof of correctness. Your education is apparently lacking …

    When you say “professional scientists in the field – the researcher’s peers”, would you not consider computer scientists peers, at least for that part of the work that is in their area of expertise?

    Not particularly. I’m qualified to review the code that makes up a GCM, but not the mathematical model it implements, the physics underlying the mathematical model, etc. I would essentially have to take the code’s word for it, so to speak. Useless unless you think bitching about programming style tells us something about the accuracy of the model results.

    Do you really think your average computer scientist is qualified to “audit” the code in any reasonable way? I don’t. And I have absolutely no doubt that the average denialist screaming for the code is not.

  4. 154
    dhogaza says:

    There’s a nice irony here, given that academic computer scientists as a whole aren’t particularly noted for their superior programming skills …

    A bunch of them are a hell of a lot better at probing odd corners of discrete mathematics and the theory of computational complexity than they are at writing industrial strength code.

  5. 155
    Mike Walker says:

    National Academy of Sciences 6-22-06. “Our view is that all research benefits from full and open access to published data-sets and that a clear explanation of analytical methods is mandatory. Peers should have access to the information needed to reproduce published results, so that increased confidence in the outcome of the study can be generated inside and outside the scientific community.”

    Full article here:

    If a researcher’s work is good, then full transparency will prove that it is. If it is not, then full transparency will also prove that it is not. Either way, science benefits. Those who don’t believe this, simply do not have confidence that their science will stand on its own.

    [Response: You seem to be making the assumption that transparency is a magic bullet that will allow us to separate good science from bad. That is simply not the case. A perfectly executed analysis based on a faulty assumption is bad science regardless of how transparent the code is. – gavin]

  6. 156
    Rod B says:

    tamino (142), I think BPL was implying that his analysis, which has CO2 and temperature going in very tight lockstep, can be projected as a general rule. That degree of lockstep correlation certainly does not come anywhere near the relationship of eons ago, for example. OTOH, if he really was professing that the CO2-temp correlation is applicable only from the late 1800s to the early 2000s and only for that period’s range of concentration and temp, I have little disagreement — though the 10^-41 equivalent certainty is quite a stretch: 10^-41 is a factor smaller than your Planck’s constant (…showoff!)

  7. 157
    Mike Walker says:

    dhogaza #153: 7 February 2009 at 10:00 PM says “Do you really think your average computer scientist is qualified to “audit” the code in any reasonable way?”

    I agree; I don’t think “research code” would benefit much from an audit by a computer scientist. I think all that is desirable is simple testing, by someone competent in software assurance, that the code works as intended. The person who programmed something never catches all of the errors.

    The main benefit from publishing the code is that the code provides detailed documentation of the mathematical and statistical methods used. The code itself is unimportant, but what it documents about the research is where the value is.

  8. 158
    Richard Simons says:

    It seems to me that if you are interested in checking the validity of a model, you should not be pestering for the code. Instead, you should get the details of the model, then write your own, different, code and see if the results are the same. That way, you are much less likely to miss any errors that might be in the code.

    As a complete outsider to climate research, I get the feeling that a lot of this demand for code is driven by a desire to find reasons to ignore the implications of the work while putting in as little effort as possible.

    [Response: Bingo! – gavin]

  9. 159
    Terry says:

    Gavin Re #146: I have always subscribed to the philosophy that proper peer review should always be undertaken independently. From my perspective, and in my own professional area of atmospheric science, that means that review verification should always be done a-priori, using alternate methods or developing the method using one’s own knowledge of the subject matter. I never ask for the code or spreadsheets, and prefer to re-create them from my own knowledge. On the other hand, the original should provide sufficient information to give the reviewer enough guidance to correctly replicate the steps involved. That said, it is not an easy task to convey the same kind of information for the type of work that climate change requires. However, with the advent of package systems, e.g. MATLAB (that I know nothing about) or SAS for example, a roadmap is relatively easy to generate. I follow this site regularly, as I do CA and WUWT, and I don’t share the same opinion on the intent of those sites as some do, nor do I share the same opinion of some on those sites regarding RC. There are bone-heads on both sides in this battle of the blogs. On this particular argument I am of two minds, but I lean towards your view, especially in regard to reviews needing to be a-priori, though as above a roadmap of the methodology would be useful. Those who wish to re-construct the results can then use their own procedures by following it. Interestingly enough, I see one poster at CA who is successfully de-constructing the Steig data using alternate procedures. Now that is how it should be done.

  10. 160
    Mark says:

    Ron 149, I think it was “alms” to the enemy. Not arms.

  11. 161
    Danny Bloom says:

    Doug McKeever, above, Feb. 29 at 2:59 pm said: “I watched the catchy little cartoon mentioned by #2 (Danny Bloom) and although it is cute and some of it is even a bit thought-provoking, the animation is marred by an error that many of my students fall for, too, until they think it through. The cartoon implies that melting ice near the North Pole will cause sea level to rise. Of course, as the cartoon says, the North Pole is over ocean, not land. Melting of LAND ice causes sea level to rise, or displacement of land ice into the sea, but melting of sea ice (pack ice) does not raise s.l. A simple demonstration is to monitor the water level in a glass both before and after an ice cube melts… the water level doesn’t change (if the TEMP of the water increases, that will raise the level slightly, as the warmer water has lower density and thus greater volume, but that’s minuscule in a glass of water).”

    Duly noted, Doug. I will pass on your good comments to the cartoonist in New Jersey who made that video. I often get confused on that issue too. But yes, important point, thanks.

  12. 162
    AL says:


    the fact that you added the point probably explains why the point appears to be in the wrong position. Would it be an idea to mention the fact that you have added a point to a graph in future? Were any other points added or changed?

    [Response: no. – gavin]

  13. 163
    tamino says:

    Re: #156 (Rod B)

    I think BPL was implying that his analysis, which has CO2 and Temperature going in very tight lockstep, can be projected as a general rule

    I prefer actually to read what he wrote. Like his very first two sentences:

    Some global warming deniers claim that the correlation between temperature and carbon dioxide is vague and coincidental. I’ll check that here with what’s called a “linear regression.”
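    For readers unfamiliar with the term, a minimal version of such a check might look like the following; the numbers below are synthetic stand-ins of my own, not the data BPL actually used:

```python
import numpy as np

# Synthetic stand-ins: CO2 in ppm and a temperature anomaly that rises
# with ln(CO2) plus noise (the coefficient 2.0 is arbitrary here).
rng = np.random.default_rng(3)
co2 = np.linspace(290.0, 370.0, 100)
temp = 2.0 * np.log(co2 / 290.0) + rng.normal(0.0, 0.05, 100)

# Ordinary least-squares regression of temperature on ln(CO2)
x = np.log(co2)
slope, intercept = np.polyfit(x, temp, 1)
r = np.corrcoef(x, temp)[0, 1]    # correlation coefficient
```

    A correlation coefficient near 1 indicates the "tight lockstep" being discussed; the strength of the real-world correlation is, of course, exactly what BPL's analysis was testing.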

  14. 164
    Jaydee says:

    152. dhogaza

    This is getting a little off topic, but:

    “Open Source is not standard practice in commerical enterprises.”

    “However, nothing critical to Oracle’s commercial success is open source. They leverage the open source world to increase the value of their closed source products, enhancing their revenue.”

    Sure Oracle is not open source. But Oracle must have had a reason for shelling out $ for InnoDB, mainly that MySQL has started to get significant traction in areas that it cares about.

    MySQL is the database behind Wikipedia, with 1.2 TB of data as of 2006. It could have changed since then.

    Note also that Oracle actually runs its own website with Oracle on Linux.

    My main point is that for most companies it is wrong to say that they use closed source or open source. Most use both, whether they know it or not.
    Back on topic, I am in complete agreement with the general point that open sourcing the code used for analysis is not necessary. In fact it is better to have several different sets of software in use, so that differences in results can point out issues with any one program.

    I also agree with Gavin’s new post on replication, which concludes that it is (generally) more important to look at the assumptions and conclusions drawn than to expect to find errors in the code.

  15. 165
    dhogaza says:

    Sure Oracle is not open source. But Oracle must have had a reason for shelling out $ for InnoDB, mainly that MySQL has started to get significant traction in areas that it cares about.

    This has nothing to do with the entirely bogus claim that “open source is standard practice in the commercial world”, made originally by someone whose company depends on sales of (ahem) proprietary, closed-source software.

  16. 166
    dhogaza says:

    My main point is that for most companies it is wrong to say that they use closed source or open source. Most use both, whether they know it or not.

    And I never claimed otherwise. Please read more closely.

  17. 167
    James says:

    Mike Walker Says (7 February 2009 at 11:16 PM):

    “The main benefit from publishing the code is that the code provides detailed documentation of the mathematical and statistical methods used. The code itself is unimportant, but what it documents about the research is where the value is.”

    Except that this is seldom the case in practice, in my experience anyway. (I might mention that I’ve earned a good share of my income for the last couple of decades from re-engineering & extending scientific & engineering programs.) The code’s really a last resort for trying to figure out what it’s supposed to do. It’s far more efficient to have some sort of external information about what it’s supposed to do – the sort of information you might find in a paper – before you try to figure out whether that’s what it really does, or whether it can be done faster or with less memory.

    The best documentation is, in fact, documentation.

  18. 168
    Ike Solem says:

    This open source argument is getting ridiculous. Scientists don’t present their lab notebooks for public inspection unless there is a lawsuit or legal matter – for example, scientists who find lead contamination in medications can expect to have their notebooks examined in detail during legal cases against the manufacturer, and they can expect that the lawyers will use any errors or mistakes in their notebooks to attempt to discredit their work – and usually the goal is to portray the scientist in question as sloppy or unreliable. This is just the standard legal approach, always aimed at influencing the opinions of the jury, not at explaining the science to them.

    The lack of desire of many posters on this thread to discuss the actual science is interesting in that regard – in fact, this thread sounds like a collection of trial lawyers out to discredit the witness, doesn’t it?

    So, let’s take a look at a somewhat similar (but different) situation: satellite- and radiosonde-derived temperature trends in the Arctic. The results are controversial – everyone agrees things are warming, but the question is about the structure of the warming and the role of snow and ice cover:
    “Vertical structure of recent Arctic warming” Graversen 2008

    Here we examine the vertical structure of temperature change in the Arctic during the late twentieth century using reanalysis data. We find evidence for temperature amplification well above the surface.

    Several scientists looked at this, disagreed and sent the following letters off to Nature:
    “Arctic tropospheric warming amplification?” Thorne 2008
    “Recent Arctic warming vertical structure contested” Grant et al 2008
    “Arctic warming aloft is data set dependent” Bitz & Fu 2008

    The point here is that Graversen et al. published a reanalysis of data that was disputed by others, who showed that different datasets don’t reproduce the trend aloft. No one is disputing the overall warming trend; rather they are disputing the spatial distribution of the warming trend across the region.

    Graversen et al. replied to the criticisms:

    Essentially, the argument is over whether the reanalysis approach comes up with the right answer. This kind of scientific debate is normal, and you probably never saw this issue come up in the press, because everyone involved agrees that the Arctic is warming rapidly, and that’s not really a very controversial position. The press likes to run articles on global warming that feature a prominent denialist on one hand and some climate scientist on the other – if both agree on global warming, that might upset the paper’s owners.

    For example, take the Examiner newspaper chain, one of the last homes of blatant denialism in the U.S. media. Typical recent headlines include “Oceans are cooling according to NASA” (Justin Berk, Examiner, Jan 21 2009) – although everyone knows that that was a result of a flawed instrument. Why does the press get away with such blatant lies, and why do other news outlets ignore it? “Press lies to public” – that’s not a good headline, I suppose.

    A similar article by the Examiner is “An inconvenient truth: The Earth is cooling”, 12/23/08. That one repeats the talking points – we’ve been cooling since 1998, the oceans are cooling, etc. etc. This PR effort at pushing the “earth is cooling theme” is not helped by scientific studies that show warming, so those studies get attacked. Why? Well, the Examiner paper ownership is a little revealing:

    From Slate: The Billionaire Newsboy: Making sense of Philip Anschutz’s Examiners. By Jack Shafer, March 24, 2005.

    Quote: “Nobody thinks Anschutz is a fool. An oil wildcatter raised by an oil wildcatter, he moved into the railroad business in the early 1980s..

    The oil industry and the railroad industry are among those most opposed to government regulations and renewable energy initiatives that would eliminate coal combustion as well as foreign oil imports, the final result being a coal-free, low-carbon energy infrastructure. The railroads are opposed because they make a living transporting coal from mine to power plant; that’s where much of the money is made. Thus, it is really not surprising that a billionaire involved in oil and railroads would sponsor a network of newspapers that engages in lying to the public on climate matters.

    The same general theme also prevails in Britain. If you want to see a vitriolic response to this paper, take a look at the British Spectator, owned by the Barclay Brothers, another set of reclusive (British) billionaires: As Mark Twain might have put it, there are three kinds of lies — lies, damned lies and global warming science. and so on.

    Here’s the heart of their argument:

    The flaw they identified was that, since Antarctica has so few weather stations, the computer Steig used was programmed to guess what data they would have produced had such stations existed. In other words, the findings that caused such excitement were based on data that had been made up.

    Yes – that is called a data-infilling procedure, and it’s what you have to do when not enough data has been collected. It’s also used in the oceans, where the paucity of data is also a problem in past reconstructions of temperature. Any scientist who was really concerned about the issue would also call for MORE data collection, more satellites – but the denialists never do – in fact, the likes of Pielke et al. have publicly stated that we shouldn’t be funding more data collection programs.

    Well… I can’t wait to read the submitted contributions of all the skeptics, and their detailed examinations of all the reanalysis data and results – to Nature and Science, that is, not to this thread.

  19. 169
    Mike Walker says:

    James #167: 8 February 2009 at 1:09 PM writes: “The best documentation is, in fact, documentation.”

    Agreed, but if [edit–enough of this here. any additional generic commentary on issues of ‘replication’, etc. should take place in the comment thread of the most recent posting which deals specifically w/ such topics]

  20. 170
    Ron Taylor says:

    Mark (160) the orginal quote may have been “alms” but my colleague clearly meant “arms.” And I think that applies in this case. Do not give ARMS to the enemy. What many denialists are looking for is arcane details that they can attack. Why provide them with that if that is their goal?

  21. 171
    Hank Roberts says:

    BBC science writer deals with coverage and blog responses:

  22. 172
    Mark says:

    Ron, “alms” means aid. arms as armaments would be merely a subset.

  23. 173
    Hank Roberts says:

    This should make the difference between arms and alms quite clear in practice:

  24. 174
    Brian Klappstein says:


    My choice of “eyeballed” spatially representative stations didn’t return 1980 as the warmest year in Antarctica as you pointed out. However, I’ve just run across references to some papers that apparently do show 1980 to be the warmest year in Antarctica over at WorldClimateReport: Chapman et al 2007 and Monaghan et al 2008.

    Any comment?

    Regards, BRK

  25. 175
    Hank Roberts says:

    Brian, did you read the above information, or look up the papers?
    WCR doesn’t give you the citing papers or references to what they’re saying. They’re an advocacy site.
    Brian, use your web browser and find “monaghan” in this thread.
    You’ll find the name immediately before the words
    Brian Klappstein Says: 5 February 2009 at 11:03 AM

    Beyond that, the links below took me eight seconds to find. Keep track, see if what you found at WCR gets brought up to date. It’s an advocacy site, not a science site. You can do better, easily.

    and see

    “There is very convincing evidence in this work of warming over West Antarctica,” said Andrew Monaghan, a scientist at the National Center for Atmospheric Research in Boulder, Colo., who was not involved with the research.

    Dr. Monaghan, who had not detected the rapid warming of West Antarctica in an earlier study, said the new study had “spurred me to take another look at ours — I’ve since gone back and included additional records.”

    That reanalysis, which used somewhat different techniques and assumptions, has not yet been published, but he presented his revised findings last month at a meeting of the American Geophysical Union.

    “The results I get are very similar to his,” Dr. Monaghan said.

    Quoted from NYT January 22, 2009

  26. 176
    Brian Klappstein says:

    Hank Roberts #175

    Thank you for your courteous response. WCR may be an advocacy site but they did something valuable for people like me who generally don’t have access to these papers, they published the temperature series graphs therein.

    In any case, I look forward to Dr. Monaghan’s update.

    Regards, BRK

  27. 177
    naught101 says:

    gavin said: And what do we do for people who don’t have access to Matlab, IDL, STATA or a fortran 95 compiler? Are we to recode everything using open source code that we may never have used? What if the script only runs on Linux? Do we need to make a Windows GUI version too?

    Nothing, no, that’s ok, and no.

    The point of open source would be letting others improve on your work, i.e. just release what you have; then if someone really wants a Windows GUI version, they can code that on top of what you’ve already produced. There’s absolutely no onus to make it work for everyone, just to show people what you’re doing (they may then choose to re-write it in a different language, and may improve it, or may make some stupid mistakes, but at least it’ll all be in the open).

  28. 178
    Hank Roberts says:

    > something valuable … published the graphs

    At the top of this thread, click the link to the Nature article page.

    There, look at the box with the red bar at the top “ARTICLE LINKS” where you’ll see these with links:

    * Figures and tables
    * Supplementary info
    * See Also
    o Editor’s Summary

    This is the link from “Figures and tables”

    Collect the entire set. Always look at the primary source, and compare what advocacy sites show. Sometimes they show everything. Not always though.

  29. 179
    Jeff Id says:

    I got the RegEm to work. My last comment didn’t show up in moderation so cut this one if you need to ;) . Since I don’t have the parameters you used these were my best guesses.

    [Response: Well done. Now you have learned something. You now also have the means to test what the results are and are not sensitive to – try looking at monthly anomalies rather than raw values, the impact of including more eigenvalues, and the various other issues discussed in Tapio’s papers. – gavin]

  30. 180
    Jeff Id says:

    I’m sorry, it’s very difficult to tell when a post is cut or there is a transmission problem. This is the same as before so if it was cut that should speed your decision along.

    Are you saying the AWS data is normalized prior to usage because I wasn’t sure from this chunk in methods where it says —

    “and using the READER occupied weather station temperature data to produce a reconstruction of temperature at each AWS site for the period 1957-2006.”

    Would you mind telling me what value was used for
    This one has me pretty confused; what would you consider an appropriate value for a TTLS run?

    Finally, I would also be interested in running the satellite data and processing algorithm to mask for clouds. Is there some ETA on that?

  31. 181
    pete best says:

    Re #168, An excellent post Ike and poignant. Yes the lies on climate change are also often repeated all of the time by the people who minds prefer to believe these articles. I come across them all of the time in the comments sections of newspapers climate science articles.

    It’s a load of left-wing nonsense (for some reason it’s a left-wing conspiracy), which I believe means that all of the people wanting to see a low-carbon economy are against capitalism, which is a slightly odd argument.

    It has been cooling since 1998. Again, another odd one, but used all of the time, as a decade is not enough time statistically for any climate analysis, as stated here many times.

    The Sun is to blame and the world is cooling. In other words, the Sun is not active and it is solely responsible for temperatures on earth. Again, we know that it’s 0.2 W/m^2, which is significant but not that significant when it comes to all forcings, both -ve and +ve.

    The most recent thread comment is once again about Hansen’s original papers on the A, B, C warming scenarios, which they make out were wrong. Once again RC has covered this, and Hansen was more right than wrong 20 years ago, but hey, a wrong is a wrong, right?

    It just goes on and on and on this way. It’s all getting a bit pointless, and very technical here at RC. This Antarctic work by Steig is going to be the new hockey stick for the foreseeable future, I imagine, for now the denialists are arguing against the very tools and methods scientists used to get the results of this study. I am slightly amazed that RC has actually posted a thread on the subject, as why should climate scientists be grilled on their methods, the same methods and tools repeated and used for a lot of past papers and the science as a whole? Thousands of peer reviewed papers have been submitted and published this way, so why all of a sudden is it seemingly wrong? Do we now have to change the tools and methods of climate science, or science itself? For I imagine that science is where these tools and methods come from.

    Utterly Bizarre.

  32. 182
    jeff Id says:

    “I am slightly amazed that RC has actually posted a thread on the subject, as why should climate scientists be grilled on their methods, the same methods and tools repeated and used for a lot of past papers and the science as a whole?”

    And I am actually interested in understanding. If you take Dhog’s word for anything, I’m a climate naturalist; however, Dhog is a politician with loud bluster and little knowledge. I am more interested in the detail. If the Antarctic is warming, I want to understand the calculations, because that is important to know. If the math is clearly disclosed and reproduced, those who wish to debunk it can try but will ultimately fail. As gavin once told me, the data is the data.

    Concealing the math and data will do no good and appears (falsely) to be evidence of an ends justify the means policy.

    Let us have the code and data, the chips will fall as physics intends.

  33. 183
    Ray Ladbury says:

    Jeff Id says: “Concealing the math and data will do no good and appears (falsely) to be evidence of an ends justify the means policy.”

    Absolute horse puckey. If you can’t derive them for yourself with perhaps a little bit of assistance, you don’t have the expertise to understand them in any case. Why the hell do you think it takes a decade to become a climate scientist? What the hell do you think graduate students are doing for all that time? Do you really think that everything you don’t understand must be easy? Jeeebus!

  34. 184
    Hank Roberts says:

    > concealing

    Climatology: The Purloined Letter

  35. 185
    dhogaza says:

    Concealing the math and data will do no good and appears (falsely) to be evidence of an ends justify the means policy.

    There you are with your accusations again …

  36. 186
    Hank Roberts says:

    By the way, if the occasional commenter “concerned of Berkeley” is concerned and from California (the IP number would tell, maybe), reading and asking locally would be a good start.

    Asking questions about what you’ve read shows you’re trying and gives an idea how much you understand. You won’t get far if you just repeat that you’re lost and confused. Give your last known location, and what you can see from where you are now. We’ll help find you.

  37. 187
    pete best says:

    I am sure that a lot of people are interested in the methods, computer code and mathematics used to demonstrate that the Antarctic is warming in the paper, but if climate science (and other sciences) want closure on this, then surely the whole thing should be independently analysed by a scientific body of relevant people, maybe from an esteemed US or UK organisation such as the Royal Society or the Met Office, much like the hockey stick was.

    If Climate Audit have an issue with the methods used and want to tell the rest of us that the computer code and data used are an issue, then surely they are calling into question the whole crux of climate science. This needs to be resolved so we can move on. If this paper stands, then post peer review work will find any issues. They tried to find them with the hockey stick, but that also stood up to very close scrutiny.

  38. 188
    kevin says:

    pete, it ain’t going to happen like that. This is partly what the IPCC was intended to do, after all, and the denialist world has imagined that into a sinister conspiracy to wreck the American economy. (Maybe they will now uncover evidence that the IPCC pioneered securitized (finance) swaps.) Moving on is precisely the last thing denial-world wants.

    An instructive example could start with your comment: “They tried to find them with the hockey stick but that also stood up to very close scrutiny.” Yeah, but in denial-world today, the hockey stick was completely discredited [edit-no need to repeat the usual canards]

    IMO, what will happen is that the denial-sphere will be slowly ablated by repeated contact with reality. The bad news is that we don’t have a lot of time, but the good news is that recent political/cultural history shows that this process does continue, even in a media environment that is less than ideal. In the mean time, we have to continue countering the spin with fact and perspective, more or less calmly and civilly–with wit, brevity and a little humor as appropriate.

  39. 189
    Hank Roberts says:

    Or, you can remember that Eric left for Antarctica last week and will be back in three or four months with another data collection that will take a year or so to analyze, then we’ll know more.

    Good grief, people, why not just demand a special prosecutor.

    Nothing needs to be resolved and we’ve been moving on.

    The gang hollering back there is doing the usual routine.

    Why all the effort all of a sudden to question everything and yell fraud everywhere? Could it be ….

    Oh yeah, time to fill up the news columns with crap so the journalists can claim they’re publishing “balanced” reporting next to the AAAS meeting news.

  40. 190
    Jay says:

    I’m curious if you saw this in the Guardian today.

    Any comments? Apologies if you’ve already addressed it.

    [Response: Oh, very nice. Not quite sure what a ‘computer jockey’ is though. Maybe it’s someone who skillfully uses computers to best work out what’s happening in the world. That would be different to an advocacy scientist who will say anything, however contradictory and misleading, in order to come to exactly the same conclusion every time (I should point out that I have a hard time seeing where the ‘scientist’ bit comes in). Pat will be on the Hill today wasting everyone’s time with yet another piece of ‘advocacy’. Maybe I’ll get a mention! – gavin]

  41. 191
    Mike G says:

    I really have a hard time understanding how exactly having the code provides closure or resolves anything other than that Eric didn’t fudge the results. The meat of replication is in the methods, not in the code.

    Let’s look at a real-world example from my line of work. Say I aim to produce a survivorship curve and determine maximum sustainable yield for a sea anemone, using the Gulland-Holt method to fit the von Bertalanffy growth model. That probably makes no sense to most readers here, but knowing nothing more than that (none of my data, no script) it should be a fairly trivial task for anyone who knows population modeling to reproduce the work, because the models are not original to this application (neither is RegEM). They should also be able to judge whether this method is appropriate or not for this system.

    Now if I distribute the file that will calculate this, and the data I collected, then assuming you could figure out how to properly enter the data, you will get the same answer I did. Do you know if it’s a reasonable answer? Do you know if the program is a correct representation of the method? Do you know whether it’s an appropriate method to use for this system anyway? Does it allow you to go on to expand on the work? Not unless you already understand the method well enough that it should be obvious how to reproduce it with minimal hand-holding. All the program proves to you is that I didn’t fudge the results and that I know how to write a functional script. Being asked for evidence of this is pretty insulting to a scientist.
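
    To make that concrete, here is a minimal sketch of the Gulland-Holt fit I mentioned, with invented tag-recapture numbers (emphatically not real survey data). Under von Bertalanffy growth, dL/dt = K(Linf - L), so regressing growth rate on mean length gives a slope of -K and an intercept of K*Linf:

```python
# Sketch of the Gulland-Holt method: regress growth rate on mean length;
# the slope gives -K and intercept/K gives the asymptotic length Linf.
# The tag-recapture numbers below are invented for illustration only.

# (length at release, length at recapture, years at liberty)
recaptures = [(2.0, 3.1, 0.5), (3.5, 4.2, 0.5), (5.0, 5.5, 0.5),
              (6.5, 6.8, 0.5), (8.0, 8.1, 0.5)]

xs = [(l1 + l2) / 2 for l1, l2, dt in recaptures]   # mean length
ys = [(l2 - l1) / dt for l1, l2, dt in recaptures]  # growth rate dL/dt

# Ordinary least-squares slope and intercept.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

K = -slope            # von Bertalanffy growth coefficient (per year)
Linf = intercept / K  # asymptotic length

print(f"K = {K:.3f}/yr, Linf = {Linf:.2f}")
```

    Anyone who knows the model could write this from the method name alone, which is the point: the script adds nothing you could not reconstruct from the methods section.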

    Also, as a scientist, my job is to increase our understanding of reality. Why would I put effort forth to help out someone who has a reputation of trying to muddy the waters?

  42. 192
    dhogaza says:

    Good grief, people, why not just demand a special prosecutor.

    That wouldn’t satisfy them, special prosecutors tend to be somewhat objective …

  43. 193
    william says:

    #181 Pete Best
    You made the comment:
    “The Sun is to blame and the world is cooling. In other words, the Sun is not active, and it is solely responsible for temperatures on Earth. Again, we know that it’s 0.2 W/m^2, which is significant but not that significant when it comes to all forcings, both -ve and +ve.”

    I’m not sure if this was sarcasm or you mis-typed. The sun is ultimately responsible for temperatures on earth. Are you pointing out what our current knowledge is for variations in the sun’s output and the degree that those variations relate to variations in the earth’s temperature?

  44. 194
    kevin says:

    #181 Pete B: [obvious sarcasm] “the sun is solely responsible for temperatures”

    #193 William: “the sun is ultimately responsible for temperatures”

    Do you see the difference between those statements, William? (It seems as if you’re trying to set up a straw man to beat up on.)

    To be clear: we all know where the energy comes from. But the people who act as though variations in the sun’s activity account for all observed modern warming (i.e. “the sun is solely responsible for temperatures”) are ignoring “our current knowledge is for variations in the sun’s output and the degree that those variations relate to variations in the earth’s temperature,” as well as our current knowledge about all other forcings and feedbacks in the system.

  45. 195
    Hank Roberts says:

    Fiscal policy: Science or Advocacy?

    When economics hits politics it is difficult to determine when the economist becomes the advocate (see, e.g., the green jobs thread at this blog and, most recently, comments on this post). Fisheries scientists have the same problem. Serendipitously, here is how AFS President Bill Franzin addressed it in the January 2009 issue of Fisheries (page 4):
    In this article, I review an important issue about the adoption of policy positions by a professional scientific society such as AFS: the fine line between scientific objectivity and advocacy. …

  46. 196
    William says:

    #194 Kevin
    I don’t think there’s any doubt that the discussion is about how much energy is received from the sun and what happens to the temperature of the earth when it gets here. The details are in explaining the degree of influence of changes in albedo, changes in land use, the oceans, clouds, aerosols, volcanoes, changes in the solar cycle and the orbit and tilt of the earth, as well as an increase in CO2 in the atmosphere from today’s 0.0383% to something greater.

  47. 197
    Chris says:

    #188 Kevin: “In the mean time, we have to continue countering the spin with fact and perspective, more or less calmly and civilly–with wit, brevity and a little humor as appropriate.”

    So when I tried to argue in the summer, in the North Pole Notes thread, that the Arctic had not reached a tipping point, was that “spin” that had to be countered? Or was I the one countering the spin of other posters? (Of course, I was accused of far worse than spin.) It’s a shame this press release from the Met Office, by its Head of Climate Change, appeared 6 months too late to back me up:

    “Recent headlines have proclaimed that Arctic summer sea ice has decreased so much in the past few years that it has reached a tipping point and will disappear very quickly. The truth is that there is little evidence to support this…….”

    Of course, she also mentions the strong evidence pointing to “a complete loss of summer sea ice much later this century”. Sure, that’s a more reasonable starting point for debate in my opinion.

    Back to #188: “…Moving on is precisely the last thing denial-world wants…
    …IMO, what will happen is that the denial-sphere will be slowly ablated by repeated contact with reality…”

    I suggest that the more you can avoid the labels “denial”, “denialosphere” etc. (cf. “civilly” above?), the more success you may have. I can’t see what they achieve, and would suggest “skeptic” is more polite.

    Using offensive terms is just likely to provoke more of the attitudes you disapprove of. And then we end up with the polarisation as summed up by Ms Pope: “Overplaying natural variations in the weather as climate change is just as much a distortion of the science as underplaying them to claim that climate change has stopped or is not happening.”

    N.b. I don’t remember using such terms myself here (“alarmist” perhaps?), but feel free to correct me if I’m wrong.

  48. 198
    Mark says:

    MikeG, how does the code help prove the ***science*** right or wrong?

    And if you have 5 hours to spend either publishing code or doing more experimentation, which gets the most science done?

  49. 199
    Hank Roberts says:

    Hat tip to Robert Grumbine:

    Hat tip to ReCaptcha:
    “Sunfire Concerto”

  50. 200
    Mike G says:

    Mark, I think you’re trying to argue exactly my point. Having the code doesn’t prove or disprove anything unless the author either completely made up the results or the code doesn’t actually perform the method the author says it does. You have to understand the reasoning behind the method to say anything more about the code than “yep, it runs and produces the same answer.” If you already understand the method, the code itself shouldn’t be required to replicate the results.

    Like Gavin pointed out, if you’re looking for errors that change the result or its significance, the code would be about the last place to look. Maybe Eric should take it as a compliment that the obfuscation crew haven’t found substantive errors elsewhere so now they’re looking to the code as a last hope. ;)

    Now, if someone truly wanted to expand on the science, existing code would be a great starting point, but who here really thinks that’s what the CA and WUWT crowd has in mind when they demand to see it?