Antarctic warming is robust

Naturally, people are interested in what effect these corrections will have on the analysis of the Steig et al paper. But before we get to that, we can think about some 'Bayesian priors'. Specifically, given that the results using the satellite data (the main reconstruction and source of the Nature cover image) were very similar to those using the AWS data, it is highly unlikely that a single station revision will have much of an effect on the conclusions (and clearly none at all on the main reconstruction, which didn't use AWS data). Additionally, the quality of the AWS data, particularly any trends, has been frequently questioned. The main issue is that since the stations are automatic and not manned, individual stations can be buried in snow, drift with the ice, fall over etc. and not be immediately fixed. Thus one of the tests Steig et al. did was a variation of the AWS reconstruction that detrended the AWS data before using them – any trend in the reconstruction would then come solely from the higher quality manned weather stations. The nature of the error in the Harry data record gave an erroneous positive trend, but this wouldn't have affected the trend in the detrended-AWS reconstruction.

Given all of the above, the Bayesian prior would therefore lean towards the expectation that the data corrections will not have much effect.
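The detrended-AWS test described above can be sketched in a few lines. This is a toy illustration only (the actual analysis used RegEM in Matlab; the series here is synthetic), but it shows why an erroneous trend in a detrended input cannot leak into the reconstruction's trend:

```python
import numpy as np

def detrend(series, t):
    """Remove the least-squares linear trend from a series,
    keeping its mean so only the trend is taken out."""
    slope, _ = np.polyfit(t, series, 1)
    return series - slope * (t - t.mean())

# Toy AWS record: a spurious warming trend on top of monthly noise
rng = np.random.default_rng(0)
t = np.arange(600)  # 50 years of monthly anomalies
aws = 0.002 * t + rng.normal(0.0, 1.0, t.size)  # ~0.24 C/decade trend

flat = detrend(aws, t)
print(np.polyfit(t, aws, 1)[0] * 120)   # trend in C/decade, roughly 0.24
print(np.polyfit(t, flat, 1)[0] * 120)  # essentially zero after detrending
```

Feed `flat` rather than `aws` into the infilling step and any erroneous trend at Harry is gone by construction; the detrended series still contributes its month-to-month covariance.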

The trends in the AWS reconstruction in the paper are shown above. This is for the full period 1957-2006 and the dots are scaled a little smaller than they were in the paper for clarity. The biggest dot (on the Peninsula) represents about 0.5ºC/dec. The difference that you get if you use detrended data is shown next.

As we anticipated, detrending the Harry data affects the reconstruction at Harry itself (the big blue dot in West Antarctica), reducing the trend there to about 0.2°C/dec, but there is no other significant effect (a couple of stations on the Antarctic Peninsula show small differences). (Note the scale change from the preceding figure — the blue dot represents a change of 0.2ºC/dec).

Now that we know that the trend (and much of the data) at Harry was in fact erroneous, it’s useful to see what happens when you don’t use Harry at all. The differences with the original results (at each of the other points) are almost undetectable. (Same scale as immediately above; if the scale in the first figure were used, you couldn’t see the dots at all!).

In summary, speculation that the erroneous trend at Harry was the basis of the Antarctic temperature trends reported by Steig et al. is completely specious, and could have been dismissed by even a cursory reading of the paper.

However, we are not yet done. There was erroneous input data used in the AWS reconstruction part of the study, and so it’s important to know what impact the corrections will have. Eric managed to do some of the preliminary tests on his way to the airport for his Antarctic sojourn and the trend results are as follows:

There is a big difference at Harry of course – a reduction of the trend by about half – and an increase of the trend at Racer Rock (the error there had given an erroneous cooling), but the other points are pretty much unaffected. The differences in the mean trends for Antarctica, or for West Antarctica (WAIS), are very small (around 0.01ºC/decade), and the resulting new reconstruction is actually in slightly better agreement with the satellite-based reconstruction than before (which is pleasing of course).

Bayes wins again! Or should that be Laplace? ;)

Update (6/Feb/09): The corrected AWS-based reconstruction is now available. Note that the main satellite-based reconstruction is unaffected by any issues with the AWS stations since it did not use them.


375 comments on this post.
  1. François GM:

    Dr Schmidt,

    I have three questions:

We now know there were errors at two Antarctica stations. How do we know there were no errors in the measurements at the other stations?

    [Response: One never knows there are no errors. So you aim to show results are robust despite that possibility. - gavin]

    What would be the normal procedure to inform the editors at Nature of the errors in the paper ?

    [Response: People send in corrections if necessary, or refer to it in another publication. - gavin]

    Have scientists also tried to interpolate data in undersampled regions that show warming – or only in those that show cooling ?

    [Response: The interpolation doesn't care what the trends are. Similar techniques have been used for sea surface temperature data and the like - they show warming, but I don't really follow your question. - gavin]

    [Response: Since the study clearly (e.g. Figure 3) reconstructs cooling over East Antarctica during certain sub-intervals (e.g. 1969-2000) and warming over others (e.g. the long term, 1957-2006), it's hard to make any sense of the question. -mike]

  2. Danny Bloom:

    There is a very funny — and sad — animation video about the North Pole melting by New Jersey creator Dominic Tocci here:

    http://toccionline.kizash.com/movies/the_north_pole_is_melting/

  3. Ricki (Australia):

    Again we have a debate around the small issues while the big issue of "is AGW real" loses out. Of course, even if Antarctica were cooling, that would make no difference to the case for how much we must reduce emissions. This paper confirming that Antarctica is in fact just like the rest of the planet, WARMING, is yet another nail in the coffin of the deniosphere.

    I restate what I have said before on this site: the issue of reducing emissions is not political and will equally affect both sides of politics.

    We are intelligent beings capable of conceiving what the future holds. We have to use this knowledge to act as stewards of this planet (the only one we have).

    PS. not criticising RC, I think you guys are doing a really important thing here — keep up the good work.

  4. Aaron Lewis:

    At the other end of the world, USGS has published: Past Climate Variability and Change in the Arctic and at High Latitudes (http://www.climatescience.gov/Library/sap/sap1-2/final-report/)

    Some of which is well worth reading. Chapter 6 has good tidbits about the Greenland Ice Sheet, and comments on modeling ice.

  5. Fred Spinks:

    Above you say that the source for the Nature cover is the satellite data. Is that really so, i.e. the picture illustrates a ~25 year trend, or is the source the interpolated and extended ground station data over the full 50 years?

    [Response: It's the interpolated and extended trend over 50 years. - gavin]

  6. jeff Id:

    A link to my recent post requesting again that code be released.
    [edit]
    I believe your reconstruction is robust. Let me see the detail so I can agree in public.

    [Response: What is there about the sentence, "The code, all of it, exactly as we used it, is right here," that you don't understand? Or are you asking for a step-by-step guide to Matlab? If so, you're certainly welcome to enroll in one of my classes at the University of Washington.--eric]

  7. Hank Roberts:

    Eric, he’s convinced himself you have some seekrit code. Note how his line goes from “the” code to “your” code over a few lines of text.

    He doesn't want the tools. He wants paint-by-numbers advice on using them with the database. What he calls "antarctic code" — cookbook help.
    Homework help. Science fair project help.

    Heck, you might consider doing that when you’re back from Antarctica, not for the anklebiters, but for high schoolers; maybe someone like Tamino or Robert Grumbine would take it on as an educational toolkit.

    “Here’s the tool kit. Here’s the database. Now, let’s look at the data, and think about how to use these tools. How do we approach a data set like this? Well, the first step is …..”
    _________________
    “Offices Nicelys” says ReCaptcha.

  8. Hank Roberts:

    Ah, Eric’s inline response came as I was typing mine. Same conclusion.
    __________________
    “Quayle executors”

  9. MarcH:

    Query regarding manned stations and the possible effect on validating automatic stations against manned stations.

    Looking at Amundsen-Scott data from GISS (see link below) there is considerable difference in variability of readings between 1957-1977 and post 1977. The change appears to coincide with significant changes at the base.

    Given there actually is such a thing as a measurable "heat island effect" on manned temperature stations in Antarctica (and the chart for Amundsen-Scott suggests there is), would such an effect affect your validation of automatic stations against manned stations such as Amundsen-Scott?

    Any comments?

    http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=700890090008&data_set=1&num_neighbors=1

    This graph shows a zig-zag pattern from 1957 to about 1977 with a temperature range of about 1 degree. From 1977(?) onwards the amplitude of temperature range dramatically increases up to about 3 degrees C. The overall trend is gently upwards but there is a step down in 1977.

  10. dhogaza:

    Well, here’s what Jeff Id is really saying …
    [edit -- thanks for your support dhogaza, but I'm not allowing 'jeff id's' rants to show up here, even if passed on to me by someone else--eric]
    Minus the bluster at the end, it’s the same-old same-old – climate science is a fraud.

    People like this should simply be ignored. He’s going to claim fraud no matter how patient Eric is or how much help Eric gives him.

    [edit]

  11. dhogaza:

    Ah, Jeff Id, I thought I recognized the name …

    He’s famous for mathematically proving that temperature reconstructions which have a hockey stick shape are a statistical artifact.

    He's eagerly awaiting notification that he's won the Fields Medal.

  12. Rosli Omar:

    Dear Gavin,

    If I remember right, climate scientists were previously claiming that the cooling trend in East Antarctica was anticipated by climate models even as the West was warming. Now that it is shown that East Antarctica is also warming (is that the correct inference?), not just West Antarctica, how do you reconcile the claim?

    thanks & best wishes,
    rosli

    [Response: Um.. No. You have most of this wrong. You might go back and read our earlier posts which explain this.--eric]

  13. dhogaza:

    thanks for your support dhogaza, but I’m not allowing ‘jeff id’s’ rants to show up here, even if passed on to me by someone else

    OK, fine … I understand … interested folks can just click on jeff id’s handle and judge for themselves …

  14. chuck:

    any comments on how this will affect the error in the overall trend estimates?
    is it still the same?
    cheers,

    [Response: I'm sure it will make no meaningful difference whatsoever. And keep in mind, as is very clear in the paper, using the AWS data is not the best method in the first place. There simply is not enough AWS data over enough time to demonstrate good validation at most places. That's why we used the satellite data, not the AWS data, as our primary tool for estimating the spatial covariance of the temperature field.--eric]

  15. Brian Klappstein:

    “….that Antartica is in fact just like the rest of the planet, WARMING…”

    (#3 Ricki)

    True enough if you calculate the trend from 1957. However, how much has the continent warmed since 1980? I don’t see much of a warming trend over the last 25 years, certainly not in the biggest area, East Antarctica.

    I would guess from looking at some of the station data that the warmest year in the station record for most of Antarctica is 1980, so in that sense it's certainly not like the rest of the planet.

    Regards, BRK

    [Response: Nope! Best way to do scientific work is to do it, rather than guessing (though as Gavin shows, thoughtful guessing often saves a lot of pointless speculation). The graph below shows the warmest monthly anomaly for each station; if you average over years first you get the same answer (the median warmest monthly anomaly is July 1992; the median warmest year is 1991). If you look at the long term records only (so excluding the AWS) you get June 1989 and 1989, respectively. For West Antarctica alone, the warmest year in the records (you have to include the AWS as there are no long term stations) is 1996. Of course, I'm not including 2007 or 2008 because our satellite data stops in 2006, so we cut everything off there for an apples-to-apples comparison. Dave Bromwich tells me 2007 is the warmest at Byrd (central West Antarctica). I haven't taken the time to look. You can though! It's all on the READER web site.
    Still, your point is not unreasonable — not all of Antarctica has warmed significantly, and in the last few decades East Antarctica's trend has flattened. Nowhere do we claim otherwise.--eric]

  16. Edward Greisch:

    I think François GM’s third question was trying to trick you into admitting that you fudged the data. Just say that you did no such thing. You turned the mathematical crank. You didn’t cherry pick the data. You used standard mathematics to test all of the data for errors in the data gathering machinery. The machinery broke down twice. End of story.

    Most people don't understand that science is a hard job. They may think that you have a "cushy" job. Nature never hands you textbook data. Scientists are NOT authorities. NATURE is the only authority. You are not a political movement, you are a scientist. Most people accuse scientists of sticking together. That is why everybody in the country or world needs to be trained in science. They need to experience for themselves the difficulties you went through in writing this paper. There are those that will say that your paper is completely wrong merely because it is less perfect than God could have done.

    Q, on Star Trek, was correct when he called us a pathetic species. Or at least our K-12 science education is pathetic.

    I just started reading “Only a Theory” by Kenneth R. Miller. He says that the feature of Americans that makes the US the most scientific country in the world and also the most anti-scientific country in the world is that Americans are DISRESPECTFUL. The problem is that most Americans misdirect their disrespect. YOU are our representative in Antarctica. You are the best and smartest person we have for finding out what is happening to the climate there. Other Americans should see you as a friend. Instead, they imagine all kinds of nonsense.

  17. Hank Roberts:

    http://scienceblogs.com/aetiology/2007/09/deck_is_stacked_against_mythbu.php

    Don’t repeat bad information; that’s its way of spreading, taking you over long enough to replicate. Defend yourself from the passionate urge to repeat bad information — even if you think you’re combating it. It wins when you let it repeat through you.

    “Correcting misinformation can backfire.
    “… within 30 minutes, older people misremembered 28 percent of the false statements as true. Three days later, they remembered 40 percent of the myths as factual.”

  18. Alan:

    Somewhat loosely related, but also an example of binary thinking.

    There are reports about a paper (http://www.theage.com.au/national/droughts-cause-found-20090204-7xxk.html) that says a warm Indian Ocean is responsible for Australia's drought and that La Niña doesn't bring rain to the SE. I understand that the recent La Niñas have not brought rain as expected, but surely that expectation was based on previous observations.

    Could this be an instance of a tipping point that “tipped” in 1992?

  19. Alan:

    Re: #17

    Hank, correlation is not causation; it could just be that comprehension and memory deteriorate with age. I'm 50 and I'm surprised "older people" even remember that much about a leaflet. It's also about how much you care, eg: name everything you ate/drank in the last 48hrs. If the subjects are sick, and do care, then a leaflet won't cut it; they will want to talk to a doctor.

    And even if their conclusion is correct, then to me this just means the clarity and simplicity of the myth busting is important. I come here looking for that clarity and often find it (such as the elegant saturation argument for why water is a feedback and not a forcing). This particular article has been no different and is an excellent reference for mythbusting the still sizable proportion of pseudo-skeptics amongst the world's opinion columnists.

  20. Mark:

    re #19: “correlation is not causation”. How often is that seen, even when “causation” is never put forward, just the FACT of correlation.

    If 80% of the myths are remembered as truths, then that is a correlation between repeated writing of the myth and the belief of that myth and a LACK of correlation between correcting the myth and the belief of that myth.

    Where in there is the causation mentioned?

    NEVER.

    DO NOT overuse pithy little statements else you start using them when they do not belong. And it doesn’t belong here. No causation was proposed. Just there was correlation.

  21. Fred Spinks:

    Gavin (or Eric assuming he’s not incommunicado),

    I’m looking at the plot of the reconstructed AWS trends at the end of this entry and a few things jump out.

    1. 62 stations???

    [63 -- the graph is cut off at 62 for some reason (hey, I'm traveling!). #63 was 0.124 degrees/decade, and is now at 0.120 degrees/decade. O.M.G!]

    2. It's a plot so there are no significance details, but only 5 of 47 stations not in West Antarctica show a negative trend. How is it possible that all of Antarctica is not demonstrably warming?
    3. Trying to think like a McIntyre now and pick on outliers, which stations are #2, #36 and #60?

    [Response: #2 and #36 are at the base of the Antarctic Peninsula (on the Weddell side). #60 is 'Theresa', not too far from "Harry". If you were still thinking like McIntyre, you'd accuse me of dishonestly including those two stations which are arguably 'really' on Antarctic Peninsula in my average for West Antarctica (they are just south of 72 S). Sorry, but doing that makes the comparison with the satellite data even better. Oh wait, about face! Channeling SM.. hang on a sec.... "Better idea: Steig is right about West Antarctica. It's the Antarctic Peninsula where he made up data. Yeah, that must be it. The Antarctic Peninsula is cooling!!" ...sigh...--eric]

  22. William Hughes:

    In the maps above you are using a two-dimensional figure (the area of a circle) to graphically represent a one-dimensional value (temperature). This makes your graphic a bit hard to follow and under-represents the point you are making. Also, the data-to-ink ratio for those maps is low.

    I refer you to “The Visual Display of Quantitative Information” by Tufte. If you haven’t already read it you will find it directly applicable to your work here.

    Thanks for clarifying this issue; I appreciate the calm perspective on what it means.

  23. william:

    Post #15 Brian
    It’s possible that AWS’s don’t show a warmest year prior to 1980 because they had not been installed prior to 1979.

  24. Jason:

    As pointed out elsewhere, Steig et al say the following:

    “Although [Monaghan et al 2008] concluded that recent temperature trends in West Antarctica are statistically insignificant, the results were strongly influenced by the paucity of data from that region. When the complete set of West Antarctic AWS data is included, the trends become positive and statistically significant, in excellent agreement with our results.”

    If I understand this correctly, you are saying that Monaghan et al 2008 did not use the complete set of West Antarctic AWS data, and that the statistical significance of the West Antarctic trends (in the surface data) depends on using data which Monaghan et al did not. Is this correct?

    I note that the selection of data in Monaghan et al was not arbitrary. In fact, they may have noticed the hiccup at Harry before anybody else:

    “Harry Station data are suspicious after 1998; data used in analysis are through 1998.”

    Does Eric have any comment on the respective decisions made as to the inclusion of data in Monaghan et al versus Steig et al?

    Would Monaghan et al have reached the same conclusions as Steig et al if they were using AVHRR data?

    Given the recent kerfuffle over the Harry data, is it possible that the original Monaghan conclusion about the statistical insignificance of the West Antarctic surface data was correct?

    [Response: You can search on Monaghan in google and you'll see that he has publicly stated he likes our study. I've talked with Andy at length about this and they've redone their analysis and now get our result. I don't know all the details. They are working on a paper on it. Andy might be willing to respond if you ask him, but I suspect he's smarter than me (i.e. he doesn't pay much attention to what a few random people with time to post comments on blog have to say. ;) --eric]

  25. Brian Klappstein:

    “…[Response: Nope! Best way to do scientific work is to do it, rather than guessing (though as Gavin shows, thoughtful guessing often saves a lot of pointless speculation)...."

    The graph provided in the response shows the warmest year in the manned station record appears to be 1989. The AWSs look to have a warmest year of 1997 or 1998. However, as evidence against my speculation of 1980 as the warmest year in Antarctica, the AWSs don't go back to 1980 with a few rare exceptions, and probably a lot of the manned stations don't either.

    [edit] If you looked at only the stations which span 1980, how many points on that graph would be eliminated?

    I contend that 1980 was probably the warmest year in the Antarctic record, if you consider a spatially weighted average of the stations which span 1980.

    Regards, BRK

    [Response: Curious about what analysis your contention is based on, but no. the warmest year in the average of all occupied stations is 1996. Eric really has left the building now, so more detailed analysis you'll have to do yourself or wait until he gets back in 3 months time. - gavin]

  26. Hank Roberts:

    Alan riffs on what I posted, Mark riffs on Alan's post. Look at the link, follow it to the WaPo article, and contribute your useful critique of the paper studying the effect of repeating myths to debunk them. Don't rely on secondary or worse sources and believe you've criticized a research paper. That's what's happening with the Antarctic issue, eh? Read the original, critique that. It saves much wasted time and confusion.

  27. henry:

    Eric – about your chart in #15, can I ask a few things?

    1. Where would I find a database of the numbers you used?

    2. Any particular reason you "split" the data into the automatic and occupied stations? Could they be re-grouped into west/east sites, regardless of the type?

    3. It appears, just from eyeball, that the “median warmest year” changes, depending on the type of station. The automatic median appears later (about ’98) than the manned does (about ’90). Would separate charts support this observation?

    4. Is there data available of the coolest monthly anomaly for each station?

    Just point me to the data, I’ll do the homework…

    [Response: All of the data is at http://www.antarctica.ac.uk/met/READER . The quality of the manned station data is much better than the AWS stations, which only started to be deployed recently in any case. - gavin]

  28. Hank Roberts:

    Oh, and, Alan — repetition of myth had been edited out of Dhog’s posting. That’s the reason I reminded Dhog, there’s a reason not to repeat myths when trying to debunk them. One myth, many pointers, eh?

  29. Tenney Naumer:

    These people never publish any of their oh-so-learned debunkings in peer-reviewed journals.

    Yet they are so deluded that they think they are way ahead of the crowd.

    I guess it is just one more conspiracy.

    Recaptcha Antoinet Judging

  30. william:

    #15 Brian
    I went to the Antarctic data site provided by Gavin to Henry at #27 (http://www.antarctica.ac.uk/met/READER) and looked at all the temperature records for all 65 sites.
    I’m not sure what the chart is trying to describe in terms of warmest year but 39 of the sites did not begin reporting data until 1991 or later. Only two sites were even operational in 1980. Additional sites began reporting temps by year as follows: 1981-(2), 1983-(1), 1984-(2), 1985-(2), 1986-(5), 1987-(4), 1988-(2), 1989-(1) and 1990-(5).

    [Response: That's the AWS data. The long term manned stations are here: http://www.antarctica.ac.uk/met/READER/surface/stationpt.html - gavin]

  31. Aaron Lewis:

    Re 19
    NO! Hank has it exactly correct.
    When correcting a myth or falsehood, NEVER repeat it. This is the advice that public relations firms always give their clients. Lawyers do NOT repeat the charges brought against their clients; they simply deny all wrongdoing. At the highest level, White House press briefings for the last few years often did not even answer questions; they simply recited their side of the story. These guys (and gals) were pros. Watch the tapes and learn. Unfortunately, we have a more complex storyline to put across. Go take a course in public relations. Public relations is different from teaching, because teacher-student relations are sustained and the student expects to be tested.

  32. Brian Klappstein:

    “…[Response: Curious about what analysis your contention is based on, but no. the warmest year in the average of all occupied stations is 1996. Eric really has left the building now, so more detailed analysis you’ll have to do yourself or wait until he gets back in 3 months time. - gavin]…”

    (Gavin)

    I took a sampling of stations, leaving out those right next door to another station to get as even a spatial representation as possible for all of Antarctica. I used Scott Amundsen, Scott Base, Vostok, Faraday, Halley, Casey, Syowa, Davis, Novolazarevskaya, and Dumont.

    After rationalizing them to StDev numbers, I just averaged the series to get a crude but effective guess at the warmest year in Antarctica. 1996 is a warm year, but the warmest is 1980.

    Regards, BRK

  33. dhogaza:

    Oh, and, Alan — repetition of myth had been edited out of Dhog’s posting.

    Actually, no myth was edited out of my post (not that I buy into that particular argument in the first place). What was edited out was certain accusations made against Eric made by Jeff Id at his blog.

  34. Mark:

    Hank, 26. I only riffed on the “correlation is not causation” being used when there was only the fact of correlation (without speculation as to whether there is “a causation”).

    That and “Never attribute to malice that which can be explained by incompetence”. Both are HEAVILY overused. And the latter one where incompetence is, if anything, WORSE than malice.

  35. william:

    #30 William
    That explains it.
    Thanks for the link Gavin!

  36. pete best:

    http://www.newscientist.com/article/dn16545-antarctic-bulge-could-flood-washington-dc.html

    Looks like Antarctica's mass exerts a gravitational influence, but as the mass lessens so does its gravity, and hence water bulges around some parts of the USA. Interesting if true.

  37. Doug McKeever:

    Pardon me for departing from the higher intellectual plane normal for RealClimate posts, but my comment is far simpler than the many eloquent and vociferous comments regarding technicalities in the Antarctic study in this thread.

    I watched the catchy little cartoon mentioned by #2 (Danny Bloom) and although it is cute and some of it is even a bit thought-provoking, the animation is marred by an error that many of my students fall for, too, until they think it through. The cartoon implies that melting ice near the North Pole will cause sea level to rise. Of course, as the cartoon says, the North Pole is over ocean, not land. Melting of LAND ice causes sea level to rise, or displacement of land ice into the sea, but melting of sea ice (pack ice) does not raise sea level. A simple demonstration is to monitor the water level in a glass both before and after an ice cube melts: the water level doesn't change (if the TEMP of the water increases, that will raise the level slightly, as the warmer water has lower density and thus greater volume, but that's minuscule in a glass of water).

  38. Hank Roberts:

    > certain accusations made against Eric …

    Label doesn’t matter. ‘Myth’ is from the newspaper or blog or press release. See the study for the research.

    The point is, don’t repeat bad information.

    Myth, accusation, whatever label you use — don’t repeat the bad information, it makes it more memorable.

    Even when you follow with a correction, you aren’t helping if you’ve repeated the bad info first.

    Clue. It’s how people work. Aaron is right. Look at how professionals convince. They understand the science. You should too, to be effective.

    Repeating bad info in order to yell about it is self-indulgent and counterproductive. No rule against doing that if it’s what you like doing.

    My advice — don’t. I’m delighted to see that kind of stuff edited out when it’s reposted here. Good.

  39. dhogaza:

    I used Scott Amundsen, Scott Base, Vostok, Faraday, Halley, Casey, Syowa, Davis, Novolazarevskaya, and Dumont.

    You used 10 of the 62 stations used by Steig?

  40. Ryan O:

    Read RC for a while, but never posted. Posting virgin here. Anyway, I have some input on this issue.
    .
    To check for a gross error due to the Harry-Gill merger (short of re-running the reconstruction with the corrected data), all you need to do is regress a time-series of the residuals between RegEM and the READER data for each station. If Harry led RegEM to infill inappropriately such that the trends (which is what the authors were looking at) were changed significantly, then the regression for each station would show a significant slope.
    .
    I did this for all 63 stations and found that the effect was very minor and irrelevant to Steig et al.'s conclusions. All of the stations show essentially no slope. So this skeptic has convinced himself that Harry is inconsequential to the conclusions in the paper.
    .
    Anyway, graphs and scripts are posted at CA. Cheers, guys.
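The check described in #40 can be sketched as follows, as a toy version with synthetic stations (not Ryan O's actual scripts), assuming the reconstruction and the READER observations are aligned arrays of monthly values:

```python
import numpy as np

def residual_trend(recon, obs, t):
    """Slope (per time step) of recon-minus-obs residuals; a slope
    near zero means the infilling added no spurious trend."""
    r = np.asarray(recon, float) - np.asarray(obs, float)
    return np.polyfit(t, r, 1)[0]

# Synthetic example: five stations where the reconstruction tracks
# the observations up to noise, so residual slopes should be ~0
rng = np.random.default_rng(1)
t = np.arange(300)
slopes = []
for _ in range(5):
    obs = 0.001 * t + rng.normal(0.0, 0.5, t.size)
    recon = obs + rng.normal(0.0, 0.1, t.size)  # faithful reconstruction
    slopes.append(residual_trend(recon, obs, t))

print(max(abs(s) for s in slopes))  # all essentially zero
```

A station where the infilling had manufactured a trend would show a residual slope well away from zero.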

  41. Jim Eager:

    Re Doug McKeever @37: true, floating ice does not normally alter water level as it melts, unless the ice is melting in salt water and the ice has a lower percentage of salt than the surrounding water. In that case there is a small, but nevertheless positive, net increase in water level. That said, the net change is negligible in practical terms.
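The point in #37, with the salinity caveat just above, can be checked with a back-of-the-envelope Archimedes calculation; the densities below are approximate textbook values (they vary with temperature and salinity):

```python
# Densities (kg/m^3), approximate textbook values
RHO_ICE = 917.0
RHO_FRESH = 1000.0
RHO_SEA = 1027.0

def level_change_fraction(rho_water, rho_melt=RHO_FRESH):
    """Net volume change when 1 kg of floating ice melts, as a
    fraction of the meltwater volume: (melt vol - displaced vol) / melt vol."""
    displaced = 1.0 / rho_water  # Archimedes: floating ice displaces its weight
    melt = 1.0 / rho_melt        # volume of the 1 kg of meltwater
    return (melt - displaced) / melt

print(level_change_fraction(RHO_FRESH))  # 0.0: no rise in fresh water
print(level_change_fraction(RHO_SEA))    # ~0.026: small rise in seawater
```

So melting fresh floating ice raises the level of fresh water not at all, and of seawater by only about 2.6% of the meltwater volume: real, but negligible, as noted above.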

  42. william:

    A big thanks to Gavin for pointing me to the Antarctic Climate Data used by Eric for this study.
    As a learning experience (and don't laugh!), I'm entering the McMurdo Station average monthly temperatures from 1956 to date into Excel and then using one of the stats functions to plot a least squares line! So far I'm up to Dec 1976, and when I experimented with plotting a graph so far, voila! I got a chart showing a trend line with an upward slope.

    Within Excel it shows two results: r squared = .003 and y = .0066x - 18.156. I understand that the .0066 is the upward slope over the time period I have inputted, but can someone explain what the -18.156 y intercept and r squared values are really describing in terms of the temperature information from McMurdo?

    Sorry this is so basic, but I'm fascinated to work with the real numbers for the first time, especially since my last stats course was 30 years ago.
    Thanks
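The three Excel numbers in #42 can be reproduced in a few lines with made-up monthly data (the real McMurdo values are on the READER site): the slope is the trend in degrees per month, the intercept is the fitted value at month zero (a baseline, not a trend), and r squared is the fraction of the temperature variance the trend line explains.

```python
import numpy as np

# Hypothetical monthly-mean temperatures (deg C); x counts months from 0
rng = np.random.default_rng(2)
x = np.arange(240)
temps = -18.0 + 0.0066 * x + rng.normal(0.0, 3.0, x.size)

slope, intercept = np.polyfit(x, temps, 1)  # Excel's y = slope*x + intercept
fitted = slope * x + intercept
ss_res = np.sum((temps - fitted) ** 2)
ss_tot = np.sum((temps - temps.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                  # fraction of variance explained

print(slope)      # deg C per month; multiply by 120 for deg C/decade
print(intercept)  # fitted temperature at month 0, a baseline, not a trend
print(r2)         # small: the trend explains little of the monthly scatter
```

A tiny r squared alongside a positive slope is exactly the situation in #42: the fitted trend is upward, but month-to-month scatter dwarfs it.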

  43. tamino:

    Re: #32 (Brian Klappstein)

    You should not have “rationalized to STDev numbers.” This reduces the impact of those series which show higher variance; your method is crude but not effective.

    I ran the same sample, without “normalizing” the data, and for that sample 1996 is definitely warmer than 1980.
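
    The normalization point can be seen in a toy example: dividing each series by its own standard deviation before averaging shrinks exactly the high-variance series, which is why it mutes the signal. This is a hypothetical sketch, not the actual calculation in question:

```python
def pstdev(xs):
    """Population standard deviation."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def average_series(series_list, normalize=False):
    """Point-by-point average of equal-length series; optionally divide
    each series by its own standard deviation first, which down-weights
    the high-variance series in the result."""
    if normalize:
        series_list = [[x / pstdev(s) for x in s] for s in series_list]
    n = len(series_list)
    return [sum(vals) / n for vals in zip(*series_list)]
```

    With two series, one ten times more variable than the other, the plain average is dominated by the big series while the normalized average treats both the same, losing the amplitude information.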

  44. ike solem:

    There seems to be some misunderstanding of the general business of attribution and robustness and causation and correlation when it comes to 1) trends in data produced from observing physical systems and 2) the predictions of numerical models of the physical system and 3) experiments involving the physical system.

    In climate science, #3 is almost never possible. A physicist might run an experiment several times to collect more data; not possible on Earth-scale systems. Thus, you must rely on #1 and #2.

    Take a related issue, the rise in ocean heat content over the past century or so: Simulated and observed variability in ocean temperature and heat content, PNAS, 2007

    Observations show both a pronounced increase in ocean heat content (OHC) over the second half of the 20th century and substantial OHC variability on interannual-to-decadal time scales. Although climate models are able to simulate overall changes in OHC, they are generally thought to underestimate the amplitude of OHC variability.

    Using simulations of 20th century climate performed with 13 numerical models, we demonstrate that the apparent discrepancy between modeled and observed variability is largely explained by accounting for changes in observational coverage and instrumentation and by including the effects of volcanic eruptions…

    If the question is expansion of the subtropical dry zones and precipitation changes, then the data from rain gauges, snowpack levels, soil water measurements, river flow etc. is compared to the predictions of forced climate models. For example, see Human-Induced Changes in the Hydrology of the Western United States, Science, Jan 2008. This is a pressing issue:

    The west naturally undergoes multidecadal fluctuations between wet and dry periods (12). If drying from natural climate variability is the cause of the current changes, a subsequent wet period will likely restore the hydrological cycle to its former state. But global and regional climate models forced by anthropogenic pollutants suggest that human influences could have caused the shifts in hydrology (2, 13–15). If so, these changes are highly likely to accelerate, making modifications to the water infrastructure of the western United States a virtual necessity.

    Regardless of what the question is, this is how the detection-and-attribution studies are done. If it’s Arctic sea ice thickness, then the data from nuclear submarines is compared to sea ice thickness models forced by observed greenhouse gas emissions and volcanic eruptions, etc. If the models and the data disagree, then something is missing – for example, the volcanic eruptions and aerosols were not included in early climate models. It’s also possible that the data collection was itself flawed, and that the model is correct.

    The key issue is to make sure that models and data are kept separate. Here, we run into the complexity of data-infilling procedures – models produce unlimited data, but the real world is a different story. Assumptions made during data-infilling may not be accurate (especially as regards ocean circulation). Still, it’s the only way to make comparisons, and the estimates are tied to real-world data. The only way to get better estimates is to collect more data.

    These studies always include issues like the role of urban heat islands, the role of agricultural irrigation and surface cooling, the role of El Nino and other cycles, etc – those who claim that they don’t are either dishonest or ignorant. For example, in California the drought cycles are mostly set by El Nino cycles (say, 50% effect). However, this is against a background of steadily rising winter temperatures due to fossil-fuel and deforestation-induced global warming – meaning that when “normal” drought conditions occur, they are going to be worse than ever seen before, and the “normal” heat waves are going to break all records (as in Australia today).

    Another related example of a detection-and-attribution issue is the steadily collapsing ice shelves around Antarctica. For example, Reuters states unambiguously that Antarctic ice shelf set to collapse due to warming – Mon Jan 19, 2009. Is that justified?

    Well, if you look at the surface temperature ABOVE the ice shelf, as these recent realclimate posts do, then you see a warming trend, and if you look at the waters UNDER the ice shelf, you see a warming trend. So, yes, that is attributable to global warming – even though the exact date of the breakup might depend on “natural cycles”.

  45. Tenney Naumer:

    William, did you figure it out yet?


  46. Alan:

    “Correlation is not causation” – Seems like I poked a sacred cow.

    Hank, I apologise, I took your original post out of context; I thought you were talking about the article. However, the correlation in the study is that “state a myth, old people become confused”. There is nothing that says it CAUSES people to become confused, as everyone here seems to blithely accept. I agree that repeating a myth will make it more well known; however, repeatedly debunking it will make it well known for being a MYTH.

    Oh, and lawyers and politicians do not debate in order to enlighten; they debate to win. They do not have to worry about intellectual honesty; they simply have to convince people. These are the people who create myths; they are not into busting them unless it’s politically convenient. The suggestion that science should use this style of debate completely misses the point of scientific debate.

  47. jeff Id:

    Gavin, Please describe the meaning of climatology in this statement in methodologies. Is it meaning RegEm reconstructions of the temperature stations?

    “We make use of the cloud masking in ref. 8 but impose an additional restriction that requires that daily anomalies be within a threshold of +/-10 C of climatology.”

    [Response: The average temperatures for that month over the whole satellite record. Nothing to do with the reconstruction. - gavin]
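
    Gavin’s description amounts to a simple screen. A hedged Python sketch, using a toy data layout of (month, temperature) pairs rather than the actual AVHRR processing:

```python
# The "climatology" is the long-term mean for each calendar month over
# the whole record; daily values whose anomaly from that mean exceeds
# the threshold are discarded (e.g. as cloud-contaminated).
def monthly_climatology(records):
    """records: list of (month, temp) pairs. Returns {month: mean temp}."""
    sums, counts = {}, {}
    for month, t in records:
        sums[month] = sums.get(month, 0.0) + t
        counts[month] = counts.get(month, 0) + 1
    return {m: sums[m] / counts[m] for m in sums}

def screen(records, threshold=10.0):
    """Keep only records within +/- threshold of that month's climatology."""
    clim = monthly_climatology(records)
    return [(m, t) for m, t in records if abs(t - clim[m]) <= threshold]
```

    In this sketch the climatology is computed from the same records being screened; in practice it would come from the full multi-year satellite record.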

  48. Rod B:

    Pete (36), interesting and entertaining article. But was it tongue-in-cheek? Or were they smoking too many of those no-name cigarettes? :-P

  49. Rod B:

    Alan, well said post (46). But to be picayune, OT, and likely unpopular, I’ll stick up a little for lawyers. Lawyers cannot introduce in a court anything that is not probative or that has not already been generally broached by other witnesses. They can’t really create myths. Politicians are not so restricted, so they can and do make up myths.

  50. Ryan O:

    Sorry . . . should have asked this earlier with my other post. I was wondering if Dr. Steig was planning on placing the updated reconstructions on his website? I know he’s off to very cold places right now and this is probably the last thing on his mind. :) If he was planning on placing the updated reconstructions there, I would just like to thank him in advance.

  51. Hank Roberts:

    Sorry for the digression, Gavin, my bad.

    People, read the study. It’s mentioned here in a broader article:
    http://www.unc.edu/~sanna/ljs07aesp.pdf (find “CDC” for that bit).

    Here’s the source:

    http://www.journals.uchicago.edu/doi/abs/10.1086/426605
    DOI: 10.1086/426605
    How Warnings about False Claims Become Recommendations

    “… Repeatedly identifying a claim as false helped older adults remember it as false in the short term but paradoxically made them more likely to remember it as true after a 3 day delay. This unintended effect of repetition comes from increased familiarity with the claim itself but decreased recollection of the claim’s original context. Findings provide insight into susceptibility over time to memory distortions and exploitation via repetition of claims in media and advertising.”

    These are the voters you’re looking for. Note how effectively the PR sites use this repetition of false claims. Don’t do it.

  52. Alan:

    #49: Thanks, I like MY lawyer too. I hesitated and was going to put columnists but the original post I was alluding to used lawyers so I followed on.

    #50: I appreciate the point, but I believe there is much more to it than simply stating the myth; “care factor” and comprehension skills are just two of the most obvious factors. If repetition were the only thing at play, then how does it explain the roaring success of badastronomy.com, which (almost) single-handedly turned “the moon landings were faked” from a long-standing myth (with its own documentary) into an international joke?

  53. Michael Tobis:

    Eric, you snark: ” What is there about the sentence, “The code, all of it, exactly as we used it, is right here,” that you don’t understand? “

    I don’t understand how you think that could be true. You link to a nicely documented and from all appearances elegant library of matlab functions. Where are the data files? Where is the script that invoked those functions and plotted those graphs?

    There is absolutely no substantive reason this should not be distributed along with the relevant publication. You shouldn’t be sniffing at people who want to replicate your work in toto. You should be posting a complete makefile that converts input data into output data. This is common practice in exploration seismology, thanks to the example of John Claerbout at Stanford University, and that in a field where there are sensible commercial reasons for confidentiality. A related effort, called Madagascar, is being developed at U Texas and is 100% open source.

    The paradoxical backwardness of science in regard to adopting the social lessons of high tech is well analyzed in this blog entry by Michael Nielsen.

    RC again climbs on its high horse, doing none of us any good. You guys are the good guys. Please act like it.

    [Response: Michael, with all due respect, you are holding climate science to a ridiculously high ideal. i.e. that every paper have every single step and collation available instantly upon publication for anyone regardless of their level of expertise or competence. I agree that would be nice. But this isn't true for even one of the 1000's of papers published each month in the field. It certainly isn't a scientific necessity since real replication is clearly best done independently and science seems to have managed ok up til now (though it could clearly do better). Nonetheless, the amount of supplemental information has grown enormously in recent years and will likely grow even more extensive in future. Encouraging that development requires that the steps that are being taken by the vanguard be praised, rather than condemned because there are people who can't work out how to set up a data matrix in Matlab. And what do we do for people who don't have access to Matlab, IDL, STATA or a fortran 95 compiler? Are we to recode everything using open source code that we may never have used? What if the script only runs on Linux? Do we need to make a Windows GUI version too?

    Clearly these steps, while theoretically desirable (sure why not?), become increasingly burdensome, and thus some line needs to be drawn between the ideal and the practice. That line has shifted over time and depends enormously on how 'interesting' a study is considered to be, but assuming that people trying to replicate work actually have some competency is part of the deal. For example, if there is a calculation of a linear trend, should we put the exact code up and a script as well? Or can we assume that the reader knows what a linear trend is and how to calculate one? What about a global mean? Or an EOF pattern? A purist would say do it all, and that would at least be a consistent position, even if it's one that will never be satisfied. But if you accept that some assumptions need to be made, you have a responsibility to acknowledge what they are rather than simply insist that perfection is the only acceptable solution. Look, as someone who is pretty heavily involved in trying to open out access to climate model output, I'm making similar points to yours in many different forums, but every time people pile on top of scientists who have gone the extra mile because they didn't go far enough, you set back the process and discourage people from even doing the minimum. - gavin]

  54. Dr Terry:

    Looks like there are more problems with BAS data identified over at CA.

    [Response: Identified at BAS you mean. The first two changes are dealt with above, with only minimal impact. However, if you look at what was changed in the last two changes, it was only months with very spotty coverage which doesn't make it into the mean monthly data used in the Steig et al paper. For instance, look at the difference between the .html and .txt files (http://www.antarctica.ac.uk/met/READER/aws/Clean_Air.All.temperature.txt and http://www.antarctica.ac.uk/met/READER/aws/Clean_Air.All.temperature.html). Obviously if any further errors are found, the analysis will be looked at again. - gavin]

  55. Alan:

    My last OT post on “simplicity and clarity” vs “hard and fast rules” in mythbusting and science in general.

    What I am saying is:
    Politics and advertising aim to convince the audience via emotional responses such as trust, fear, etc. Science aims to enlighten its audience by asking them to think; but it gets worse: science also asks its audience to be self-critical in the first instance.

    Some academic backup:
    Dawkins, Sagan and many others have pointed out that a very large portion of mankind who were taught science were taught the way I was (i.e. a dictionary of factoids); most still do not even realise science is a philosophy, let alone what is unique about that philosophy.

    My $0.02
    If all you want to do is convince voters then by all means use a PR firm rather than a website such as this one. Personally I want to have my cake and eat it, ie: convince via enlightenment.

  56. Doug Bostrom:

    Re #48: See NASA sea surface topography information: http://topex-www.jpl.nasa.gov/index.html

  57. Florifulgurator:

    Release the “complete code” or not?
    (Assuming straightforward code.)

    Methinks to not immediately release it verbatim is a better service to science/software quality assurance:

    1) If the complete code contains an error, then an unwitting auditor might run it and get the same (but wrong) values, and thus an error could get overlooked.

    2) If an auditor has to assemble the code independently, an error could be avoided (or perhaps a different one introduced), resulting perhaps in different values, and then it is clear that an error exists.

    If existence of an error is established, THEN the complete code should be released asap to locate the error.
    On the other hand, the complete code should be released ONLY after independent verifications have shown identical results.

    There is good reason to keep some details “secret”.

    (A good practice in software QM would be to recode things in an entirely different language. Not easy to grasp for some. I once got yelled at for verifying C/Tcl-generated results in Perl.)

  58. pete best:

    Dear RC,

    Isn’t part of the verification and peer review of these types of studies one that references the software used and/or written to analyse the data, and the algorithms and testing procedures involved? I am presuming that all scientific software written for such purposes is of a scientific standard and, if not, that it is verified by other teams of developers.

    Therefore I am presuming that CA and the other people raging/waging war here on this subject are in error about this study and its data sources, software, algorithms and methods used. All completely scientific in nature and valid?

  59. dhogaza:

    “Findings provide insight into susceptibility over time to memory distortions and exploitation via repetition of claims in media and advertising.”

    So what evidence do you have that a study such as this, working with a presumably random sample of older people, applies to the kind of self-selecting, possibly more educated than average, and on average likely younger audience scavenging for knowledge at a science site?

    I’d say you’re skating on thin ice to claim that the efficacy of myth debunking in a science blog addressing such an audience necessarily parallels the efficacy of advertisements during Perry Mason reruns (for instance).

    I don’t buy it, I doubt that you have supporting data, and I will continue to repeat and debunk myths in forums such as this whenever I please.

  60. wmanny:

    #54 R

    “Obviously if any further errors are found, the analysis will be looked at again.”

    Just to clarify, I take it by the title of the thread that RC’s view is that Nature’s cover story is sound, the data are robust, and the last week’s questions and corrections amount to mere quibbles. Further, RC is convinced that these errors are stand-alone, and do not indicate potential sloppiness in the rest of the data. Do I assume correctly?

    [Response: The results featured on the cover and in the paper are based on AVHRR satellite data, not AWS data. The latter was used only as a secondary check on the results. And even the results using the AWS data are insensitive to the issues raised thus far. Really not that difficult to understand. -mike]

  61. Jason:

    Gavin,

    In response to #53 you say that:

    “you are holding climate science to a ridiculously high ideal. i.e. that every paper have every single step and collation available instantly upon publication for anyone regardless of their level of expertise or competence”

    I agree that this is a very high standard, and one that is not realistic.

    But it IS important that sufficient information be released that experts in the field can replicate your results.

    Pointing somebody to the Matlab RegEM package and to the original source data is most certainly not sufficient.

    [Response: For someone who is an expert? For sure it's enough. I might post on a recent replication exercise I did and how it really works, rather than how people think it should work. - gavin]

    [edit - Eric isn't here to discuss this, and so lay off that until he is.]

  62. duBois:

    I fail to see how access to the code is useful to anyone other than the crank and the dilettante. The code is not the process being described. The code isn’t the reality. A scientist hoping to duplicate the results of an experiment would go his own way.

  63. Mark:

    pete #58.

    Try writing that again; nothing in it holds together long enough to work out what you’re saying.

    E.g. if you’re saying that all data and all processes have to be made available, then you’re wrong. The paper itself has enough information to convey the conclusions and how they were arrived at.

    You can recreate a similar study and if you get a different answer, you can post that you had a different result. If you got the same, post that you got the same result through a different avenue (this proves the robustness of the original conclusion).

    Full data disclosure can HELP when such a repeat test comes up with an experimental answer that is different, in that the why can be investigated (but BOTH systems need their original data exchanged; after all, the second could be wrong).

    However, full disclosure can be BAD. If you just repeat the same thing, you may not get a different result precisely because you repeated the same thing, and thereby miss the inherent error in the original.

    But it can also be bad if you pass all your info to all and sundry. Then someone sees a fit that looks odd and complains. Not knowing that this was already countered within the original paper. Why? Because they aren’t looking for how accurate the paper is, they’re looking for how it could be WRONG. Skeptical of the paper, they are not skeptical of their own conclusions.

    A bit like some of the more infamous names on RC.

    HOWEVER

    I don’t know whether this post was needed. Your post was that bad.

  64. Michael Tobis:

    In reply to Gavin at #53:

    I agree that there is no general solution. Indeed, I have just spent two weeks trying to get CAM3.1 running on a machine where it worked fine eight months ago! Distributing HPC cluster code and helping people run it is a huge problem, but not the one I am addressing here.

    It’s clear we are talking about pure Matlab scripts in the present case. That’s a different matter: moderate sized plain text scripts running on a widely available commodity platform. Stuff you can archive and track and ship. Stuff that could be extremely useful to other people doing research on related matters, whether they are our colleagues or self-appointed “auditors”.

    The basis for my complaint is the claim that a URL pointing to the libraries can realistically be called not just “the code” but “all of it” with emphasis (editorial comment to #6 above).

    That is simply and, to anyone who actually touches code, obviously, ridiculously, false.

    Whether actually keeping track of the complete workflow with each bit of output is onerous is another question. It is indeed the case that climate scientists are not trained to do so, but in fact once you get in the habit, it turns out to be considerably less work, rather than more so. Since this is well-known to commercial programmers, (or even commercial designers and writers) who tend to be better trained in software tools, their expectation is that you are being disingenuous rather than just missing the point of how software (and other digital production) is normally produced in professional settings.

    So there are really two main questions: if this is hard work for the scientist, for heaven’s sake, why is it hard work? (And the corollary: how are you confident that your results are correct?)

    The second question is, if you have the workflow properly organized, should you share it? In most cases the answer to this second question would, in my opinion, be affirmative.

    Claiming that “the code, all of it” is at the link provided in #6 is risible. It doesn’t help in improving the reputation of climate science, which I thought after all was a main point of the exercise here.

    A single statement by an RC editor that is demonstrably and visibly false is a very large error. You guys have taken on a big and important job with this site, and you have my gratitude and respect for it. However, that implies taking some care to avoid obvious and embarrassingly wrong mistakes. This sort of thing is costly, and you need to take as much care with your commentary as with your articles.

    Finally, there is the question as to why scientists refuse to make any explicit effort to learn anything or adopt anything from the private sector in matters of management or software development in particular. This is a long-standing complaint of mine. Scientists have an attitude that we are in some sense superior to other professionals. We are different but not especially smarter or better, and our isolation does us no credit.

    [Response: Michael, you are grossly overstating your case here. "scientists refuse"? In general? This is obviously false. Scientists are adopting new things all the time - but there is a learning curve which can't be wished away, and people have to have a good idea that it is worth learning something new. To that end, the importance of the IPCC AR4 archive in demonstrating to modelling groups the benefit from external analysis can not be overstated. Showing that this helps the scientific process is much more efficient than hectoring scientists about how closed minded they are. Scientists are actually quite rational, and allocate investments according to relatively well known potentials for return. This doesn't change in an afternoon because someone comes up with a neat API for a new python script. - gavin]

  65. Nicolas Nierenberg:

    Dr. Schmidt, I think you are taking the argument about posting code and processes to the extreme. No one is assuming that you have to do something so that someone who is not literate in the particular area can reproduce the result. On the other hand having read through the on line description of methods in the paper I can’t conceive of how I would even begin to process the satellite data. I don’t know which sites have been selected and why, and I’m sure I don’t know what functions precisely with various options would be required.

    Having looked at the AWS data for example I’m sure that Dr. Steig has scraped it from the original site. I’m equally certain that the satellite data has been scraped and processed from the original site. I think a reasonable request is to simply make available the data files and intermediate steps in their current form. Essentially what Dr. Steig has said that he provides to “legitimate” researchers.

    As a final note, while putting up the complete process and data may sound burdensome, it is my experience that the discipline of putting this together in a repeatable way would most likely save time in the long run when doing this type of work, as well as substantially improving the quality. This is why it is standard practice in commercial enterprises.

    [Response: Open source is not standard in commercial enterprises. - gavin]

  66. Nicolas Nierenberg:

    This is something that I have had difficulty following from the paper. If the AVHRR data only begins in 1979 how can it be the main source of the trends that begin in 1957?

    [Response: It's not. The trends come primarily from the long-term manned stations. The spatial pattern comes from the satellites. - gavin]

  67. caerbannog:


    Well, here’s what Jeff Id is really saying …
    [edit — thanks for your support dhogaza, but I’m not allowing ‘jeff id’s’ rants to show up here, even if passed on to me by someone else–eric]
    Minus the bluster at the end, it’s the same-old same-old – climate science is a fraud.

    People like this should simply be ignored. He’s going to claim fraud no matter how patient Eric is or how much help Eric gives him.

    [edit]

    If Jeff Id wants to be taken seriously, then he should follow the example of “John V” of opentemp.org and perform his own independent analysis.

    IIRC, what started out as a skeptic’s (John V’s) critical look at GISTEMP turned out to be a nice confirmation of the quality of the GISTEMP product. And that, as I recall, kinda took the wind out of surfacestations.org’s sails.

  68. Nicolas Nierenberg:

    So this is also unclear to me from the paper. So the occupied land station results are used unmodified, but the satellite data is used to compute how temperatures in unmeasured areas covary with the manned weather stations?

    [Response: Yes. - gavin]

    Also, since the results changed, albeit only slightly, with the change in the AWS data, they are influencing the trends in some way. Are they used as primary data like the manned sites, or more in the way that the satellite data is used to fill in the gaps?

    [Response: Just like the satellite. - gavin]

  69. Nicolas Nierenberg:

    Dr. Schmidt, as the chairman of a public company that produces one of the most popular open source reporting products in the world I know for a fact that large commercial enterprises use open source every day! We can start with Apache and go from there. (Can I include a plug? :-)

  70. ike solem:

    Re#63, Nicolas.

    Perhaps your request is reasonable… similarly, I’d like to see all correspondence between the American Petroleum Institute directors, their PR firm Edelman, and the corporate management of all the members of the American Petroleum Institute. A complete record of all communications between Edelman & API employees and media corporation managers would be great, too.

    Likewise, can we have the same for all members of the Edison Electric Institute (the coal-electric lobby group) and the Western Fuels Association?

    As you say, “As a final note, while putting up the complete process and data may sound burdensome, it is my experience that the discipline of putting this together in a repeatable way would most likely save time in the long run when doing this type of work, as well as substantially improving the quality. This is why it is standard practice in commercial enterprises.”

    Unfortunately, the API doesn’t have a very good record:

    In April 2005, it was reported that API was overseeing a $27 million study on the health effects of benzene, funded by the major oil companies BP, Chevron Texaco, ConocoPhillips, Exxon Mobil and Shell Chemical. The study was launched in 2001, in response to a National Cancer Institute study that found “workers exposed to average levels of benzene had a risk of developing non-Hodgkin’s lymphoma more than four times greater than the general population.” … Benzene is a component of gasoline, so oil companies were concerned that tighter benzene regulations would affect their operations.

    Although the API research wouldn’t be complete until 2007, information from “depositions, proposals to oil companies and other documents collected by a Houston law firm in unrelated lawsuits” suggests that “the results of the study already have been predicted.”

    There’s nothing new about this: in 1998, the API was caught trying to recruit “independent scientists” to act as covert spokespeople for their agenda: INDUSTRIAL GROUP PLANS TO BATTLE CLIMATE TREATY, By JOHN H. CUSHMAN JR, April 26, 1998

    The draft plan calls for recruiting scientists to argue against the Administration, and suggests that they include “individuals who do not have a long history of visibility and/or participation in the climate change debate.”

    It appears that they’ve run out of credible “independent third party spokespeople” – but it would be nice to have the complete record of all their activities – don’t you agree?

  71. Nicolas Nierenberg:

    Following up on my earlier post, after reading Dr. Schmidt’s reply to my query and going back to the article I think I can explain in English how this reconstruction worked. I think it is pretty clever, which I guess shouldn’t be surprising.

    They wanted to figure out continent wide trends for the Antarctic over a significant period of time, but the issue is that the sites providing temperature data are quite sparse, with poor coverage in some areas. However since about 1979 there is satellite coverage that can be used for temperature measurement. So what they did was to take observations during the period of time where there was satellite coverage and see whether they could predict the temperature of the areas covered by satellite alone. They additionally tested using AWS data to see if they could predict that.

    The math to do the prediction is nontrivial, as it seems the covariance depends on things like the season and other factors, but including all these factors they report that they get a good result. In other words, by plugging the manned station data into their prediction machine they come up with a reasonable approximation of the values that are recorded by the satellite and the AWS.

    To test this they made sure that the system worked well in two different time periods of the satellite era. It seems they also took the most complete 15 manned sites and ran it through the engine to see if it did a good job of predicting the other sites. I can’t tell whether they did this for the entire temperature record, or just the satellite era.

    They then ran their engine from the pre-satellite era to the satellite era to fill in the gaps in spatial coverage. I assume that in the satellite era they just filled in using the actual satellite data. This is probably automatic with the RegEM algorithm.

    I would like to verify that they tested over the entire period the ability of the 15 “best” sites to predict the rest of the sites. Then the only other question is whether the manned station data is solid over the entire period from 1957.
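    Roughly, the split-period verification described above could be sketched as follows. This is a toy least-squares stand-in on synthetic data, NOT the actual RegEM method of the paper; the sizes and the linear model are my assumptions, purely to illustrate the calibrate-then-verify logic.

    ```python
    # Toy sketch: learn a linear map from sparse "station" series to a dense
    # "satellite" field during one half of the overlap period, then check how
    # well it predicts the withheld half (the split-period test).
    import numpy as np

    rng = np.random.default_rng(0)

    n_months, n_stations, n_grid = 336, 15, 50      # hypothetical sizes
    modes = rng.normal(size=(n_months, 3))          # shared large-scale climate modes
    stations = modes @ rng.normal(size=(3, n_stations)) \
        + 0.1 * rng.normal(size=(n_months, n_stations))
    satellite = modes @ rng.normal(size=(3, n_grid)) \
        + 0.1 * rng.normal(size=(n_months, n_grid))

    half = n_months // 2
    # Calibrate on the first half of the "satellite era"...
    coef, *_ = np.linalg.lstsq(stations[:half], satellite[:half], rcond=None)
    # ...and verify on the withheld second half.
    pred = stations[half:] @ coef
    r = np.corrcoef(pred.ravel(), satellite[half:].ravel())[0, 1]
    print(f"verification correlation: {r:.2f}")
    ```

    If the stations really do carry the large-scale signal in the field, the verification correlation is high, and one can then run the same mapping backward into the pre-satellite era with some confidence.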

  72. Hank Roberts:

    Eric’s remark appears to have set off quite a blogsturm, but if you go past the first few pages Google finds you’ll see his offer was serious:

    OCEANOGRAPHY … INTRODUCTORY TOPICS WILL INCLUDE THE BASICS OF:
    1) USING MATLAB 2) STATISTICS FROM DATA 3) LINEAR …
    15613 A 3 TTh 1000-1120 JHN 027 STEIG,E Open 4/ 5 J …
    http://www.washington.edu/students/timeschd/SPR2007/ocean.html

    Does MIT have an online free course that teaches the equivalent?
    Clearly a lot of people do want to learn how to use MatLab.

    Here’s one page that discusses making MATLAB available online:
    http://hess.ess.washington.edu/math/docs/al_be_stable/al_be_calc_paper_v2.pdf

    “The MATLAB Web Server is an extension to MATLAB that allows a web browser to submit data to a copy of MATLAB running on a central server, and receive the results of calculations, through standard web pages. We have chosen to use a central server, rather than distributing a standalone application that runs on a user’s personal computer, because: i) the web-based input and output scheme is platform-independent; ii) the existence of only a single copy of the code minimizes maintenance effort and ensures that out-of-date versions of the software will not remain in circulation; and iii) the fact that all users are using the same copy of the code at a particular time makes it easy to trace exactly what method was used to calculate a particular set of results.”

    Note in that paper that even with those precautions, more accurate calculation than MATLAB provides can be done; they say which scientists’ papers to consult. You’ll recognize some of the names.

    Point being — a brief search turns up, besides the current tsurris, quite a few serious studies associated with Eric’s name and MATLAB use. Many of those describe various ways of teaching the tool, or refer to ways to get better results when MATLAB isn’t sufficient. Some of the websites you’ll find offer links to toolkits and tutorials.

    This is not all that hard to find, and I’d never looked for it until now. People who sit demanding that what they want be brought to them need to get up and go ask a librarian what’s available for the asking.

    Else you get multiple copies of advice — all outdated — accumulating. That’s seriously discussed as a problem.

    The cautions therein are being illustrated in this thread as people demand access to a toolbox.

    J. Baldwin of the old Whole Earth Catalog cautioned:

    You don’t know how to use a tool until you know six ways to break it — or break what you’re working on with it. You will learn.

  73. dhogaza:

    Dr. Schmidt, as the chairman of a public company that produces one of the most popular open source reporting products in the world I know for a fact that large commercial enterprises use open source every day!

    This true statement is very different from your original false statement, which was:

    As a final note, while putting up the complete process and data may sound burdensome, it is my experience that the discipline of putting this together in a repeatable way would most likely save time in the long run when doing this type of work, as well as substantially improving the quality. This is why it is standard practice in commercial enterprises.

    Open Source is not standard practice in commercial enterprises.

    If you believe it is, perhaps you can point me to vendor-authorized sources for:

    1. MS Vista
    2. Oracle 10g
    3. Adobe PhotoShop CS
    4. Apple Aperture

    This list is not meant to be complete.

  74. Jason:

    #69,

    API hides and otherwise obfuscates its correspondence because they have something to hide. Their results are tainted by their agenda, and access to their internal correspondence would reveal that.

    Steig et al do not have anything to hide. We (or at least most of us) assume that their methodology is free of any material bias.

    It is for this reason that we expect much greater disclosure.

    Steig et al is NOT some advocacy paper by a paid interest group. It is a Nature cover story. Our expectations are justifiably different.

  75. Barton Paul Levenson:

    I know correlation is not causation, but in re those deniers who say the correlation between temperature and CO2 is vague and temporary, here’s where I checked and found out they were dead wrong:

    http://www.geocities.com/bpl1960/Correlation.html

    Remove the hyphen before pasting the URL into your browser. (fixed!)

  76. Terry:

    Open Source is not standard practice in commercial enterprises.

    As an employee of a commercial software vendor (mentioned in post #70) I can assure you that open source IS a standard in most, if not all, of the Fortune 100 companies that I work with. Dr. Schmidt, if you have a cite that says otherwise, I’d be glad to see it. As it is, I’m not inclined to take your statement as fact.

    [Response: That's great. But it is clearly true that most commercial software is not open source: SAP, Windows, Oracle, Adobe etc. Someone must be keeping them in business. - gavin]

  77. Pat Neuman:

    Re: #3 (Ricki),
    What will it take to really close the deniosphere? Do we need more ground-based temperature stations in the Antarctic?

    The nails in the coffin are not doing the job. For example, on Feb. 18th in Minnetonka, MN … “Wiita’s class will show the Al Gore movie “An Inconvenient Truth,” which suggests man is to blame for a warming planet, and the British-made movie “The Great Global Warming Swindle,” which suggests Al Gore’s claims are a farce.” …

    Is this the best they can do? Who should teach and what should be shown?

    http://www.chanvillager.com/news/schools/climate-change-open-discussion-minnetonka-101

  78. Anonymous Coward:

    Gavin (#53),

    I have no horse in this race and I won’t doubt the science because of trifles such as this one but I have to say that I’m disappointed.
    Of course there’s no need to release anything in order to prove that the paper is not a fraud or something…

    [edit - Eric isn't around to respond to these kinds of claims, and I do not want to speak for him. Please lay off until he can answer for himself]

    I find the notion that scientists should adopt PR practices quite repellent, by the way. PR may fool some people, but hopefully not the types who would follow a blog such as RC.

  79. dhogaza:

    Steig et al do not have anything to hide. We (or at least most of us) assume that their methodology is free of any material bias.

    It is for this reason that we expect much greater disclosure.

    Steig et al is NOT some advocacy paper by a paid interest group. It is a Nature cover story. Our expectations are justifiably different.

    Which is why y’all have the same expectations for work done in all fields of science, not just climate science, right? I mean … when Nature publishes a paper on particle physics y’all flood the blogosphere with demands for immediate publishing of all the data, code, etc used by the researchers, right? Y’all are pestering CERN and other facilities for the source code to all the software that runs various colliders, instruments, etc, right?

    I hope any perception on my part that perhaps climate science is being singled out due to political considerations is proven false by evidence that a steady, unwavering series of requests for openness greets the publication of every paper in Science, Nature and other high-profile journals, regardless of subject matter.

    Can you help me out a bit here? Can you document that authors of papers in other fields are being held to the same level of criticism and subjected to the same pressure as climate scientists?

    Is there a “physics audit” or “population ecology audit” site you could point me to, to put any perception of bias I might hold to rest?

  80. Mark:

    #70: why not do that yourself? If the figures are the same then the authors can continue without interruption.

    Oracle says: Masses arriving.

    He knows his stuff…

  81. Hank Roberts:

    > “… is a standard”

    That’s the nice thing about standards.
    There are so many of them.

  82. Michael:

    dhogaza says:
    “Which is why y’all have the same expectations for work done in all fields of science, not just climate science, right?”

    Climate science affects many of us very personally, as it will take each one of us doing his part to solve this crisis. I can tell you that is the main reason this topic is so heated.

  83. dhogaza:

    As an employee of a commercial software vendor (mentioned in post #70) I can assure you that open source IS a standard in most, if not all, of the Fortune 100 companies that I work with. Dr. Schmidt, if you have a cite that says otherwise, I’d be glad to see it.

    You and your boss are making the claim, *you* provide the cite.

    I’m curious: is this an ANSI standard or an ISO open-source standard that Fortune 100 companies are following?

    Or, perhaps, are you claiming that many Fortune 100 companies *use* some open source software?

    Which of course is totally irrelevant to the original claim, which is that it’s standard practice for business to release the source to *their* code.

    You guys aren’t just moving the goalposts, you’re knocking them to the ground.

  84. Mike Walker:

    #69 ike solem:

    API as referenced in Gavin’s response in #63 refers to “Application Programming Interface”, not “American Petroleum Institute”.

  85. IANVS:

    Gavin, many of us concerned citizens really appreciate the long hard work you guys put in to keep the public informed. We owe a great deal of what we have learned about climate change to you and other dedicated scientists like James Hansen. My children & grandchildren & I thank you. Keep up the good work.

    What is the best link at RealClimate that addresses in layman terms why the Milankovitch cycles are not the principal cause of the global warming we have been experiencing since the 1800s? Or is there a better treatment of the subject elsewhere, for laymen of course?

    I often encounter claims that Milankovitch cycles are to blame for the current warming, esp. by self-proclaimed earthscientists at blogs like SciGuy. thx http://blogs.chron.com/sciguy/archives/2009/01/since_the_skept.html#c1231955

    [Response: Try the IPCC FAQ 6.1, 2.1, and perhaps Coby. - gavin]

  86. Hank Roberts:

    > the notion that scientists should adopt PR practices …

    And where did you read this notion? Not here. You think that was said somewhere? And why do you trust the source?

    Nobody suggested scientists adopt PR methods. I (a reader, not a scientist) suggested not repeating bad information, because we know doing so makes the bad information more memorable. I and a third reader pointed to the research. That’s about how people learn and remember facts.

    Whoever’s spinning this is really good — at PR. You believed it.

  87. SecularAnimist:

    dhogaza wrote: “Can you document that authors of papers in other fields are being held to the same level of criticism and subjected to the same pressure as climate scientists?”

    Scientists in other fields whose work has illuminated certain “inconvenient truths” that threatened the profits of wealthy, powerful industries — e.g. the tobacco, meat or pesticide industries — have been subjected to similar pressures, similarly bogus “skepticism”, and similar accusations of “ideological” motives.

  88. Mike Walker:

    RE:

    Michael Tobis #53
    Jason #61
    Michael Tobis #64
    Nicolas Nierenberg #65
    Anonymous Coward #78

    Transparency is always good, whether in business or in science. When someone avoids transparency it doesn’t necessarily mean they have something to hide, but it makes you wonder, and I think that is really the crux of this type of issue. Archiving (which really means publishing) the code and data used to perform the research has many benefits and no downside. If the methods used are correct, then confidence in your research is enhanced. If your methods are not correct, then they can be corrected and science is advanced.

    I don’t think that reasonable people are going to hold your code to the same standard as commercial software. No one expects research code to be anywhere near as robust as a commercial software package, but clearly it must meet some minimum standard: basically, that it correctly performs as intended. While I don’t dispute the argument that there would be some overhead involved in properly archiving this information, it is minimal and has to be less than the amount of time spent justifying not archiving it.

    To me the bottom line is: if you are confident that your methodology is correct, you have no reason not to archive it, and the only reason not to archive it would be that you are not confident that your methods are correct. As Michael Tobis says in #53: “You guys are the good guys. Please act like it.”

  89. Trent1492:

    @Bart Paul Levenson,

    Thanks for the link to your “test” page! I have just one nitpick and one suggestion. The nitpick is that you said “(this data uses the mean for 1961-1990 as the base figure).” I believe that GISS uses 1951-1980 as the base period. The suggestion is that you keep that table up to date for each new year; that way, when I link to your page, the inactivists cannot claim that the link is dated.

    Once again I want to say thanks for the effort.

  90. dhogaza:

    Heh … yes, SecularAnimist, but it’s not just because they were publishing in highly-regarded journals and therefore held to “higher standards” by “critics” whose only interest was to “hold science to the highest standard”. :) Just like climate science …

  91. Mike Walker:

    Florifulgurator #57) “If an auditor has to assemble the code independently, an error could be avoided (or perhaps a different one introduced), resulting perhaps in different values, and then it is clear that an error exists.”

    Yes, but where does the error exist? The best way to find out would be a direct comparison of the two codes, but you only have one.

    [Response: But you save a huge amount of effort if the independent replication of the result is achieved without that, and that can add a great deal of scientific progress over and above checking someone's arithmetic. See this comment I made when this came up previously. - gavin]

  92. tamino:

    Re: #88 (Mike Walker)

    Archiving (which really means publishing) the code and data used to perform the research has many benefits and no downside.

    The downside is that the level of transparency demanded by far too many “hacks” is work. And when vultures are waiting to tear it to shreds because they want to undermine science rather than advance it, it’s vastly more work. When you’re underfunded in spite of your research being critical to the future progress of civilization, and you have way better things to do than respond to an endless stream of critics with an agenda, you have to make a choice: is my time better spent providing transparency for those who don’t really want that but just want to find fault, or is my time better spent advancing our knowledge of climate science?

    Believe it or not, science already has self-correction mechanisms. Responsible scientists report corrections to the interested parties and knowledge is improved. Vultures blog about what a fraud the scientists are.

  93. Jason:

    Gavin wrote:

    “I might post on a recent replication exercise I did and how it really works, rather than how people think it should work.”

    1. It looks like you did NOT replicate the results in question. :)

    2. Although I haven’t even read your paper (I have read the paper I imagine you are failing to replicate) I can confidently predict that the authors of the original analysis will blame your non-replication on your use of different procedures, procedures which they deem less legitimate than those employed in the original paper.

    Certainly the details of what you and they did in your respective analyses are highly relevant, and absolutely essential to resolving the discrepancy.

  94. dhogaza:

    I don’t think that reasonable people are going to hold your code to the same standard as commercial software. No one expects research code to be anywhere near as robust as a commercial software package, but clearly it must meet some minimum standard: basically, that it correctly performs as intended

    If non-scientists are going to tell scientists how to do their job, should there be some tit-for-tat?

    Like, say, maybe scientists should insist that professional software engineers be required to prove their software correct, using the mathematical methods developed in the late 1960s and early 1970s.

    Since non-practitioners are apparently more qualified than practitioners to determine what practices are acceptable in any given field, software types telling scientists how to do their job shouldn’t balk if scientists insist that software types provide proofs of correctness, right?

  95. Mark:

    Terry #76. Your statement is almost categorically refuted by ANY open discussion thread that talks about FOSS software, and especially about whether Linux is ready (or not) for the desktop.

    Pop over to El Reg, for example (www.theregister.co.uk), or to Slashdot (slashdot.org) and ZDNet (www.zdnet.com).

    Reading those, you get the distinct impression that ANY FOSS program is infinitely inferior to any Microsoft product, that NO “real” company would use it, and that anyone looking after such a system is

    a) Expensive
    b) Hard to find
    c) Irreplaceable
    d) not a replacement for the Easy, Secure MS AD system that Works So Well(tm)

    Not to mention the strange names.

    Either this horde is a bunch of trained and paid shills or your assertion is incorrect.

  96. Mark:

    BPL #75. It’s even more inane than that. There is a proposed causation, and correlation is used to TEST WHETHER THE CAUSE IS SEEN!!!!

    It isn’t “Hey, we have warming temperatures. Wonder what’s causing it. CO2 fits the bill, we’ll go with that”.

    It’s “Hey, CO2 is being pumped out and that’s a greenhouse gas. I wonder if there’s evidence of this happening in the real world? If it is, we may need to rethink our energy strategy.”.
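    That order of reasoning can be made concrete with a toy example. Everything below is synthetic and hypothetical (the numbers are invented, and the log-CO2 forcing form is just the standard textbook approximation): the point is only that the physical prediction comes first and the correlation is the test.

    ```python
    # Theory first: rising CO2 implies a forcing roughly proportional to
    # log(CO2). Then we check whether the "observed" temperatures correlate
    # with that predicted forcing. Synthetic data for illustration only.
    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(1880, 2009)
    co2 = 290.0 * np.exp(0.004 * (years - 1880))     # hypothetical rising CO2 (ppm)
    forcing = np.log(co2 / co2[0])                   # predicted driver ~ log(CO2)
    temp = 3.0 * forcing + rng.normal(scale=0.1, size=years.size)  # anomaly + weather noise

    # The theory predicts a positive relation; the data either show it or not.
    r = np.corrcoef(forcing, temp)[0, 1]
    print(f"r(log CO2, T) = {r:.2f}")
    ```

    With a genuine underlying link the correlation comes out strongly positive despite the year-to-year noise; if the predicted cause were absent, it wouldn’t.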

  97. Nicolas Nierenberg:

    Dr. Schmidt,

    I was hoping for an attaboy for my brief explanation (post #71) of how the study worked. It may have been obvious to you, but I had the feeling that very few people on this blog actually understood it. The study is very clever, interesting, and well worth understanding. Anyway I guess I just gave myself one!

    For me this blog worked brilliantly in this case. I asked a couple of clarifying questions, went back and read the article again, and the light went on. Thanks!

    As to open source I must be missing your point about this. I don’t understand how that was related to my post #65 which doesn’t mention open source.

  98. dhogaza:

    1. It looks like you did NOT replicate the results in question.

    2. Although I haven’t even read your paper (I have read the paper I imagine you are failing to replicate)

    You might want to read the paper before claiming that Gavin didn’t do what he claims to have done. The paper describes the calculations performed, and notes items that were clarified by the author of the original paper with personal communication (helping ensure that the calculations were, indeed, replicated).

    The paper also notes that more recent datasets were used – this doesn’t affect Gavin’s claim that the original paper’s methodology was replicated.

    Gavin reaches a different conclusion than the authors of the original paper, but not because he didn’t replicate their methodology.

    BTW still waiting for the vendor-authorized source release of Oracle 10g, in particular. I’d like to hack it to be more compliant with the latest SQL standard, and this is impossible without the source.

  99. Mark:

    dhogaza, how about me, an unknowing ordinary guy (as far as knowing how petrol works), demanding proof from the oil companies that their product is safe and isn’t going to go all apocalyptic on us when we hit a magic number? I mean, I don’t know how it COULD destroy the earth, but has ANYONE checked to see if the earth will explode when we take out 100 more barrels?

    I DEMAND PROOF!!!

  100. Mark:

    PS I demand they stop now until they know and have proven that oil extraction is safe and will not cause the world to explode.

  101. dhogaza:

    Reading that you get the distinct impression that ANY FOSS program is infinitely inferior to any Microsoft product and that NO “real” company would use it. And anyone looking after such a system is

    a) Expensive
    b) Hard to find
    c) Irreplaceable
    d) not a replacement for the Easy, Secure MS AD system that Works So Well(tm)

    Tell it to Google.

    As for your other posts, I have no idea what you’re talking about.

  102. John Mashey:

    re: 88 Mike Walker

    I’m always interested in why people believe what they believe, because differing beliefs often come from different histories. [We’ve gone through multiple cases of that here, due to differing experiences with different kinds of software models.]

    Can you tell us some more where your opinions about software come from? I.e., what sorts of software development (product, research, application domain, sizes, staffing) you’ve been involved with?

  103. Mike Walker:

    dhogaza #94 -6 February 2009 at 4:51 PM wrote: “Like, say, maybe scientists should insist that professional software engineers should be required to prove their software correct, using the mathematical methods developed in the late 1960s and early 1970s.”

    Let me get this right: you are saying climate science shouldn’t be held to a high standard because the computer software industry isn’t? Even if your insights about the software industry are correct, that hardly sounds like a logical argument. “Why should I have to eat my spinach if he doesn’t?”

    Most large businesses have become very strict about software standards compliance in the last 10 years. Very few of our customers will accept software that is not ISO 90003 compliant, and in the US, SOX compliance is also mandatory. Meeting these standards is expensive, but it is no longer optional for most software companies that sell B2B.

  104. Mike Walker:

    tamino #92 6 February 2009 at 4:45 PM wrote: “The downside is that the level of transparency demanded by far too many “hacks” is work. And when vultures are waiting to tear it to shreds because they want to undermine science rather than advance it, it’s vastly more work.”

    If the science is correct it will stand on its own. I recently read about the efforts to disprove Einstein’s theory of relativity in the early 20th century. We tend to think of relativity as something that has always been accepted, but it met with a lot of resistance when he first proposed it. And the only reason it is accepted now is because it was vigorously tested and many people tried to prove him wrong.

    Again, I think it boils down to this: if your methods are correct, then transparency will prove it. The only reason to oppose transparency is if you are not confident.

    [Response: No, often it's just a desire not to waste time, because as you say, the science will out. - gavin]

  105. Mike Walker:

    dhogaza 6 February 2009 at 4:51 PM writes: “Since non-practitioners are apparently more qualified than practitioners to determine what practices are acceptable in any given field, software types telling scientists how to do their job shouldn’t balk if scientists insist that software types provide proofs of correctness, right?”

    I think climate science would benefit greatly if it accepted more help from professional statisticians and professional software developers. No one can be expected to be an expert at everything. You won’t find me making critical statements about the scientific or statistical elements of the research, because I am not qualified to do so. But software development is my area of expertise.

  106. David B. Benson:

    IMO software development experience has almost nothing to do with climatology.

    Maybe we could stick with the latter here on RealClimate?

  107. Mike Walker:

    John Mashey #101: 6 February 2009 at 6:35 PM writes: “Can you tell us some more where your opinions about software come from? I.e., what sorts of software development (product, research, application domain, sizes, staffing) you’ve been involved with?”

    I am the founder and CEO of a small software development company that I started 25 years ago and that specializes in providing ERP-related software to large agricultural companies. Our systems integrate financial and operational data to perform planning and forecasting functions. These systems are based heavily on financial and operational data analysis and utilize both statistical and physical models as well as linear programming to plan and monitor company operations. My company has about 50 development staff, not counting specialized consultants working under contract such as mathematicians, accountants and veterinarians. We have customers on every continent except Antarctica. A typical customer would be Cargill, Inc., so we work mostly with large companies.

    I am NOT an AGW skeptic. But I do have a number of criticisms of climate science in its current state, mostly based on issues relating to computer science. Obviously, I favor more transparency regarding the publishing of the methods used to process the data behind the research. While I do realize there is some history between the principals here and at CA, I think the science is more important than these types of personal issues, and publishing the data and the code would lend a lot of credibility to the research. At least it would with me. And I can’t see why someone who is confident in his methods would not do this enthusiastically.

  108. Doug Bostrom:

    #95: El Reg, eh? Does the name “Steven Goddard” ring a bell, at all?

    Circling back to the original nit about publishing source code, Tamino is entirely correct in asserting that –publishing– code, versus –using– it, is real work. Code can function perfectly but may be extremely difficult for another party to analyze and (almost as bad from the coder’s perspective) just plain –ugly–. Sort of like going out in public with your shirt and pants on backwards: the basic function of the clothing is still met, but you won’t be happy.

    How about another analogy? If you could fix your own roof, would you be satisfied if your roofer forbade you access to a ladder to do the work? That’s more to do with the Free versus closed source thing. If you can’t code, you really can’t understand why real coders care so much about the issue, but if you can pound a nail you can certainly understand how annoying it would be if you were not allowed the use of a hammer.

  109. Mike Walker:

    dhogaza #94 6 February 2009 at 4:51 PM writes: “Since non-practitioners are apparently more qualified than practitioners to determine what practices are acceptable in any given field, software types telling scientists how to do their job shouldn’t balk if scientists insist that software types provide proofs of correctness, right?”

    Not at all. We have a huge budget that goes entirely to outside entities who specialize in auditing and testing software to ensure compliance with a number of ISO and non-ISO standards. None of our customers would trust their companies’ operations to our systems if we didn’t.

    But are you implying that climate scientists are somehow special, in that they could not benefit from having experts in specific fields contribute to or audit their work? Are climate scientists expected to be masters of all the fields utilized in their research? I would never trust one of my programmers to design the accounting functionality or the linear programming logic we use in our systems. I have accountants and mathematicians on staff or on retainer for those functions.

  110. Mike Walker:

    David B. Benson #106 6 February 2009 at 8:06 PM writes: “IMO software development experience has almost nothing to do with climatology, Maybe we could stick with the latter here on RealClimate?”

    Software has little to do with climatology, but everything to do with climate science. Virtually everything done in climate science uses computers and computer software to analyze the data. And GCMs, those things don’t run on an abacus or a slide rule.

  111. Mike Walker:

    Doug Bostrom #108: 6 February 2009 at 8:49 PM writes: “How about another analogy? If you could fix your own roof, would you be satisfied if your roofer forbade you access to a ladder to do the work?…”

    I think you make my point precisely. I have tried to fix my own roof and my own plumbing. Now I do not hesitate to call a roofer or a plumber. These are not my areas of expertise, and in the long run it is much cheaper for me to fork it over up front than call them in to fix what I broke. I don’t do my own taxes either.

    So why should a climate scientist do all of his own computer programming without at least being willing to have it audited for free by posting it on the Internet? It’s the whole open-source thing. There are tons of people, especially in computer science, only too willing to give away their expertise.

    Everyone likes to be right, myself no less than anyone else. But AGW and the science behind it are too important to our future. Getting the science right should be the overriding concern, and if someone finds a mistake in someone else’s computer code or statistical methods, then that will advance the science and we are all better off.

  112. Mike Walker:

    tamino #92: 6 February 2009 at 4:45 PM writes: “The downside is that the level of transparency demanded by far too many “hacks” is work. And when vultures are waiting to tear it to shreds because they want to undermine science rather than advance it, it’s vastly more work.”

    This is the same problem Einstein encountered when he published the theory of relativity. But hardly anyone remembers the “hacks” and “vultures” that tried to tear down his theories… why? Because he was right and they were wrong. And it was the process of trying to prove Einstein wrong that actually proved his theories right. Which is how science works. It’s called the scientific method.

  113. tamino:

    Re: Mike Walker

    It’s so easy to trivialize the effort required for others to do things your way. Especially when you don’t have to lift a finger.

  114. Michael Tobis:

    Tamino’s point in #92 is partly true. I find his argument about the difficulty of publishing results such that they can easily be replicated unconvincing, though. One should always organize one’s work that way as a matter of elementary sound practice, and it is common practice elsewhere.

    On the other hand, it is certainly true that the hacks want to find errors to use as weapons in their overdrawn attacks on the core conclusions of science. Coping with those attacks is indeed an important larger issue here. The advantage in this matter to eschewing openness is not clear, though. The failure to use new tools to enhance the efficacy of the scientific method dominates any convenience effect from having unqualified people poking at the code.

    Any confidence that the scientific method will win in the end only works if the scientific method is sufficiently followed for its advantages to work. Not only that but we need it to work fast enough to keep us from that cliff we seem to be barreling toward. Remember that cliff? In principle science will out, maybe, but in practice it can take generations.

    If standard practice in software engineering can accelerate the advantages of the scientific method and nonstandard practice (inattention to workflow as a component of work product integrity) can obscure those advantages, which choice should we take? Perhaps Gavin is right in suggesting that I overgeneralize when I say that scientists generally arrogantly refuse to learn from commercial practice. But those scientists who complain about ‘learning curves’ and ‘demonstrated benefits’ in the context of elementary best practices are not serving as counterexamples to the generalization.

    Rather than being entirely argumentative here, allow me to commend to those willing to address this matter constructively the work of Greg Wilson of the University of Toronto: here is his excellent article called Where’s the Real Bottleneck in Scientific Computing? and his course on software carpentry is here. The relevant lesson for present purposes is automated builds.

  115. John Mashey:

    re: #107 Mike Walker
    Thanks, useful.
    You’d dropped off the earlier thread, but here I explicitly addressed some of the differences between commercial enterprise software products and software written to do science. Could you perhaps go back and take a look at that?

    I have run into this before: especially if someone has spent much of their career in one particular domain, it is all too easy to over-generalize the rules from that domain into others. This was true even inside Bell Labs 30 years ago, as there were at least half a dozen different combinations of requirements. Messes often occurred when managers who moved from one division to another were too dogmatic about bringing all their methodology with them.

  116. dhogaza:

    This is the same problem Einstein encountered when he published the theory of relativity. But hardly anyone remembers the “hacks” and “vultures” that tried to tear down his theories… why? Because he was right and they were wrong. And it was the process of trying to prove Einstein wrong that actually proved his theories to be right. Which is how science works. It’s called the scientific method.

    Scientists understand the scientific method. I dare say they understand it better than the self-proclaimed software hackers who insist on telling them how to do their job.

    There’s an implication in your statement that is absolutely false, and that is that climate scientists are somehow trying to prevent people from trying to prove them wrong.

    I suspect you know better.

  117. Mike Walker:

    Michael Tobis #114: 6 February 2009 at 10:43 PM
    John Mashey #115: 6 February 2009 at 10:49 PM

    I really don’t understand how either of your responses has anything to do with archiving code. I haven’t made any comments implying that researchers write bad code, should write better code, or should use better tools and procedures. I’m sure that in general most researchers’ code is not up to the standards I would require of my programmers, but my programmers are not scientists, and scientists (probably) are not professional programmers. Some may be, I don’t know.

    And how elegant their code is (or is not) has nothing to do with archiving it. The ultimate measure of computer code is: does it work? It can be the ugliest, most poorly written, unmaintainable monstrosity imaginable, but if it has a one-time use, who cares? The reason for researchers to archive their code isn’t to show off their programming skills; it is to allow others to evaluate their research and analytical methods. It is actually more like supplemental but very detailed documentation of their step-by-step methods. It would be nice if the code were well written from a CS standpoint, but does it really matter? Only if the code doesn’t work, or if there are material errors in the code that affect the results and conclusions reached by the researcher.

    I think you can come up with a million reasons not to archive one’s code, but they are just excuses. There is no legitimate reason not to provide the code that explains the researchers methods better than any SI he can provide.

    [Response: I'm not sure this is true. In the environment that climate studies find themselves in, putting out anything that is functional but poorly written simply invites politically motivated attacks on the quality of the code. You can see from this week's shenanigans that anything is fodder for those attacks, and so in many ways legitimate scientists face a damned-if-you-do/damned-if-you-don't dilemma. It is actually not surprising that many opt out. - gavin]

  118. dhogaza:

    This is the same problem Einstein encountered when he published the theory of relativity. But hardly anyone remembers the “hacks” and “vultures” that tried to tear down his theories… why? Because he was right and they were wrong.

    No. Einstein’s audience were his fellow physicists, not “hacks” and “vultures”.

    If the local hilltop moonshine distiller had demanded that Einstein publish his papers at a level he could understand, documented from the level of basic algebra up, piling on accusations of fraud, dishonesty, intentional refusal to provide enough information for him to replicate the work … Einstein’s eyes would’ve bugged out, I’m sure.

    I’m exaggerating, but in principle this is what’s being asked for. Not the level of openness and accessibility professional scientists in the field – the researcher’s peers – need to understand and, if necessary, undermine or disprove the work. But rather some unachievable level of “openness” that will allow any tom, dick, and harry – jeff id, for instance – to “audit” the work.

    It’s odd that individual, random people outside science seem to feel that they, personally, should be able to tell scientists how they must do their jobs.

    Oh, by the way, demonstrating that software adheres to a standard is not a proof of correctness, but since you said “not at all” when I asked my question, can I expect a proof of correctness, along with the source to your ERP product, soon?

    It is open source, I trust.

  119. Nicolas Nierenberg:

    Dr. Schmidt, there has been a lot of back and forth here about generalities of software development and data management. However, in this particular case it seems that the intermediate data and methods are available to “legitimate” researchers. So it seems more a question of willingness to post these, than it is a question of effort or style of development.

    I think then, that the discussion ought to focus on this question. What is the advantage to science of not simply making this available?

  120. Hank Roberts:

    Tell us why you put “legitimate” in quotation marks, Mr. N.

    I recall this or similar words from one of the big ice core grants years back — the cores and data were required to be shared with legitimate researchers who would pay a proportion of the cost of the work done to acquire them. Reasonable.

    How do you know who’s legitimate in a field? Look at their work.
    The cost of the work in time and effort is always considerable.

    The obvious waste of time that can be created by people whose goal is to interfere with what they think politicians will decide, by interfering with the scientific work in the area, is another cost.
    There’s a whole industry devoted to that interference. You know it.

    This is a caution about that sort of cost — it’s well avoided:
    http://www.ajph.org/cgi/content/abstract/91/11/1749
    EK Ong, SA Glantz – American Journal of Public Health, 2001

    Cited by 127

    Climatology is overlapping now with public health. Same conflicts.

    “The life so short, the craft so long to learn.”

    “There are in fact two things, science and opinion; the former begets knowledge, the latter ignorance.”

  121. Hank Roberts:

    Cited by 127 — here’s that link, working:
    http://scholar.google.com/scholar?num=50&hl=en&lr=&newwindow=1&safe=off&cites=13366054661093759722

  122. Hank Roberts:

    One from those 127 really should be read, for those who didn’t bother to look at any of the rest of them. Ask yourself what you want to know about this:

    Health Affairs, 24, no. 4 (2005): 994-1004
    doi: 10.1377/hlthaff.24.4.994
    © 2005 by Project HOPE
    Tobacco Control

    The Power Of Paperwork: How Philip Morris Neutralized The Medical Code For Secondhand Smoke

    Daniel M. Cook, Elisa K. Tong, Stanton A. Glantz and Lisa A. Bero

    A new medical diagnostic code for secondhand smoke exposure became available in 1994, but as of 2004 it remained an invalid entry on a common medical form. Soon after the code appeared, Philip Morris hired a Washington consultant to influence the governmental process for creating and using medical codes.

    Tobacco industry documents reveal that Philip Morris budgeted more than $2 million for this “ICD-9 Project.” Tactics to prevent adoption of the new code included third-party lobbying, Paperwork Reduction Act challenges, and backing an alternative coding arrangement.

    Philip Morris’s reaction reveals the importance of policy decisions related to data collection and paperwork.
    ———————————————-

    No other industry except the tobacco industry has had their tactical document files opened to public view — these archives aren’t documenting only one industry’s tactics. This is how it is done.

    This is one reason why scientists in fields under attack share work with legitimate scientists — those who have shown they can do the work.

  123. Nigel Williams:

    It would be very rare for the software of a GCM to be written from scratch. Rather, I would expect that it’s made up of a series of fragments of code that have been developed and tested a bit at a time and then combined with the latest bits to make a viable lump of code that reflects the prevailing state of the art.

    With this development style it is very likely that many of the important components are written in obscure and often very ‘expedient’ code, which is likely to be utterly opaque to the casual decompiler, and often similarly difficult for the author to re-visit a few months or years after it was first put together and its function proven.

    I imagine that making code do what the climatologist wants it to do is a very hands-on process – a line, a tweak at a time until the subroutine performs as it should. It’s unlikely to be like coding a bit of accounting software, where the development brief is nailed down before it starts. But always, if the block diagram doesn’t work the code won’t either, so a lot of time gets spent on the whiteboard.

    My point being that while it is dutiful of authors to make the code of their GCMs available, there are very few people in the world who can both understand the syntax and then see how that code relates to the climate parameters involved. Invariably the code will be capable of improvement, but the point is: does it do an acceptable job of telling the story so far?

    Only by checking the code’s output against skilled expectations, a grid square at a time, can it be seen to be valid. No mean feat for those who have written it; next to impossible for any outsider to prove that it works, but easy for them to find some trivial potential fault.

    If critics want to do better they should build their own GCMs from scratch, like the big boys do.

  124. Mike Walker:

    dhogaza #118 7 February 2009 at 12:28 AM writes: “Oh, by the way, demonstrating that software adheres to a standard is not a proof of correctness, but since you said “not at all” when I asked my question, can I expect a proof of correctness, along with the source to your ERP product, soon?”

    SOX Certification requires a ton of testing for correctness. And, of course, as a privately developed product the source code is proprietary.

    But what does this have to do with “archiving” code for government-sponsored public research? We are not publishing research conducted on the taxpayer’s dime that will be used to form public policy. If we were, I would expect to archive the code where it was publicly available.

    When you say “professional scientists in the field – the researcher’s peers”, would you not consider computer scientists peers, at least for that part of the work that is in their area of expertise?

    I’m not expecting the researcher to make any extra effort to explain his code. If it works and correctly produces the results that support the researcher’s work, then it speaks for itself. If it doesn’t, it also speaks for itself. Why should any climate scientist not welcome experts in related fields confirming his work? If his work is correct, then why do you think that is not what will happen?

    Nicolas Nierenberg 7 February 2009 at 12:36 AM writes: “I think then, that the discussion ought to focus on this question. What is the advantage to science of not simply making this available?”

    I agree.

    Dr. Schmidt, why do (some) climate scientists appear so apprehensive about having their work reviewed by experts from other fields? Specifically that part of their work that is outside of climate science itself, such as statistics and computer science?

    [Response: I see no evidence for that at all. I see a reluctance to have hordes of know-nothings magnify every typo into a federal case, but all the good scientists I know welcome constructive advice from any quarter. The key is the word 'constructive' - scientists are mostly driven by the desire to understand something about the world, and help in doing so is always appreciated. People whose only contribution is to malign or cast aspersions, whatever their background, are simply of no use in that endeavour. - gavin]

  125. Mike Walker:

    RE: Gavin’s response to my post #117: Gavin wrote: “and so in many ways legitimate scientists face a damned if you do/damned if you don’t dilemma”.

    While this may be true, I think you have too little faith in science. Advances in our scientific understanding have always been controversial. Galileo, Darwin, Einstein. They all had their detractors, but who remembers their detractors names? If you let your detractors prevent you from doing the best science possible, then they have already won.

    BTW - Am I the only one that can’t see Captcha? It usually takes me 3 or 4 tries to get words I can actually make out.

  126. Mark:

    Mike walker, #107, you’re not doing any of the bloody work either.

    Someone can try their own climatology work and publish THEIR paper. If their paper agrees with the current swathe of papers, then those papers are corroborated. Problem is that the ones who don’t believe (and they aren’t skeptics either) don’t want to do their own paper, because it won’t show anything remarkably different from the other papers and won’t show what they want to tell either. At least not without so many deliberate errors that the debunking of the paper becomes trivial.

    They can’t stop the papers of others either.

    So they demand “make work” to take time away from the science and put it into digging up all the data, explaining each semi-colon, and explaining how computers work (all the way up from the Hollerith machine to quantum computing).

    Having no time left, they can’t produce any more papers. And so they can then crow “SEE! They can’t prove that AGW is happening any more, so they stopped looking! HA!”.

    And what’s a CEO doing on here demanding open sourcing the code, data, personal history etc? Is your work under GPL3? If not, go and do the work yourself and open your code.

  127. Mark:

    re 101. Well, I have posited that there is a risk in drilling and extracting oil. The risk is the earth blows up at a certain extraction.

    I have not seen any scientific papers trying to assess the risk, so the risk is unknown.

    And, like the people who say “I don’t believe in either side, so let’s do nothing until they BOTH clean up their acts” (which is what the denialists want, so hardly a punishment for them), I say we should do nothing about extracting oil until the risk is PROVEN unfounded.

    Again, like the denialists, I don’t know WHAT could make oil extraction cause the world to blow up, but then again, THEY are the engineers and scientists involved in it, so they should look for it.

  128. ApolytonGP:

    I agree that skeptic critics WILL over-magnify typos. Will NOT publish in archived journals (allowing fair responses, a fixed position of criticism, and easy-to-read, well-written, snark-free science/math comments) and instead will play their blog games. Will NOT finish their speculative blogged forays. Will NOT write up analyses that show the robustness of the work. All that said, the benefits of openness are still far greater than restricting access out of fear of unfair critics. Especially for something like the Antarctica work, which is really a bold approach to the problem. Analysis of the method here and its suitability may be even more useful than the result of the infilling exercise.

  129. duBois:

    What does re-use of the code prove about the underlying reality? Nothing. What will prove that the experiment as described shows something about reality is an independent methodology.

    What cranks and deniers know is that code is a threshing floor of arcana. Lots of chaff to throw in the public’s eyes. Look at how long-lasting the issue of CO2 as a leading vs. lagging indicator has been. And that’s easy to understand. Look at the efforts to debunk the “hockey stick”. Deniers still bring it up even though completely independent studies discovered the same phenomenon.

    Demand for access to code is a demand for ever tinier bits of minutiae to carp over. You want to debunk an experiment? Re-do it.

  130. Barton Paul Levenson:

    Guys, thanks for the compliments on the web page. I’ll see what I can do about keeping it updated, or at least pointing people to primary sources for the data.

    On the revealing-code issue. That’s not how real peer-reviewed science works. I’ll try to give an example.

    Let’s say a stellar astronomy paper computes a lot of absolute magnitudes. The original code is for a Cray supercomputer, written in a Fortran-95 compiler that cost the university $6,000. Here it is:

        do i = 1, NumNearbyStars
          read(7, 70) StarName, V, pi
    70    format(A17, F7.2, F7.3)
    
          M = V + 5d0 + 5d0 * log10(pi)
    
          write(1, 80) i, StarName, V, pi, M
    80    format(I3, 2X, A17, F10.2, F9.3, F10.2)
        end do
    

    But that code is never published. Now, an amateur who is well-read in astronomy decides to check the calculations. He’s running Windows 95 on an ancient Pentium with software coprocessor emulation turned on to avoid the floating-point bug. The input data file the astronomers used is available on the journal’s web site. He selects three stars as test cases, downloads a freeware Basic compiler, and writes his own code:

        for i = 1 to 3
          read V, p, nam$
          M = V + 5 + 5 * (log(p) / log(10))
          print i; "  ";
          print nam$; "  ";
          print using("##.##", V); "  ";
          print using("#.###", p); "  ";
          print using("##.##", M)
        next i
    
        end
    

    ‘ Data:

        data 11.01, 0.772, "Proxima Centauri."
        data  9.54, 0.549, "Barnard's Star. "
        data  3.50, 0.274, "52 Tau Ceti. "

    He gets 15.45, 13.24, and 5.69 as the absolute magnitudes of the three stars, respectively. He then checks the astronomers’ output file. Let’s look at three scenarios.

    1. The astronomers got the same figures. Code confirmed.

    2. The astronomers list 15.47, 13.26, and 5.71 for those three stars. The amateur checks his own code for bugs, putting in some simple test cases like parallax = 0.100. Then he writes to the journal. The journal informs the authors. The authors check their code. Turns out the coder dropped some LSD when he was coding and put down "5.02d0" where he should have put down "5d0". A correction is printed in the next issue of the journal, but there’s no change to the authors’ conclusion, which is, let’s say, that Hipparcos satellite parallaxes are superior to Yerkes Observatory parallaxes from 1928.

    3. The astronomers’ figures are totally off. Note to journal, journal tells authors, authors check, find a major error which vitiates their entire thesis. The journal prints a retraction, or possibly invites the amateur to draw up his critique in a follow-on paper. This is rare but it does happen.

    In no case does the amateur himself need the original code.
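    For what it’s worth, the amateur’s spot-check is easy to reproduce in a few lines of, say, Python; the formula is just the standard absolute-magnitude relation M = V + 5 + 5·log10(π), with the parallax π in arcseconds. This is an illustrative re-implementation, not anyone’s published code:

    ```python
    import math

    def abs_mag(V, parallax):
        """Absolute magnitude from apparent magnitude V and parallax (arcsec)."""
        return V + 5.0 + 5.0 * math.log10(parallax)

    # The three test stars from the example above (V, parallax in arcsec).
    stars = [
        ("Proxima Centauri", 11.01, 0.772),
        ("Barnard's Star",    9.54, 0.549),
        ("Tau Ceti",          3.50, 0.274),
    ]

    for name, V, p in stars:
        print(f"{name:17s}  V={V:5.2f}  pi={p:5.3f}  M={abs_mag(V, p):6.2f}")
    # -> M = 15.45, 13.24, and 5.69, matching the amateur's figures.
    ```

    Three different languages, three different machines, same three numbers: that is the sense in which replication never needed the original Fortran.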

    Gavin, Mike, somebody, is there some way to display a monospaced font in this blog? My code examples always turn out looking lousy no matter how I play with & nbsp ; and similar tricks.

    [Response: I surrounded the code bits with 'pre' and '/pre'. That seems to do the trick. - mike]

    Captcha: “entire secret”

  131. Nicolas Nierenberg:

    Dr. Schmidt, with regard to your answer to #124. Let’s agree that casting aspersions, ad hom attacks, and trivial observations are a waste of time. That has nothing to do with the question of what the problem is with simply posting what Dr. Steig already has available. If other people choose to waste their time with it, why is that your problem?

    #130, no problem in the case where the algorithm and the description of the process are trivial and clear. This is not the case in Dr. Steig’s paper. Even reading the online methods section of the paper would not provide enough information to know exactly what was done.

    Someone was mentioning Einstein earlier. I can tell you that when a physicist or mathematician publishes they need to include all the work. If someone tries to prove them wrong they don’t have to speculate on how they arrived at the result. The argument can center on much more fundamental grounds.

    In any event, the process is moving inexorably towards full disclosure. Because space limitations in journals are not an issue online, things like more complete methods sections are included. Authors are expected to put supplemental information online. Despite any resistance, I’m sure it will simply become standard practice to put all the intermediate work required to get to the result online, and researchers who are used to holding this type of stuff back will just have to deal with it.

    The nice side effect is that their own work will be more effective and reproducible. This way when that grad student who has been doing the analysis decides to go to another lab, the new person will be able to figure out what is going on.

  132. Eli Rabett:

    To summarize

    “Great fleas have little fleas
    upon their backs to bite ‘em,
    And little fleas have lesser fleas,
    and so ad infinitum.
    Yet data set fleas are larger still
    Than the great disks which hold em
    And no one knows where a misprint lurks
    Until they stumbles upon one”

    Captcha: barked all right

    -ER with apologies Augustus de Morgan

  133. Hank Roberts:

    Yeah. Mr. N, Eric’s in Antarctica for 3 months.
    You want a 3-month field day by the WTFU crowd while he’s out of town.
    What does this accomplish?

    You do understand the point of the uproar is to influence legislation and regulation, don’t you?

    Remember this thread? It’s one of the great examples of how this is being done:

    - Prometheus: Less than A Quarter Inch by 2100 Archives
    Once CO2 is regulated by EPA, then I’d agree with your analogy. …. To show standing, then, Massachusetts needs to show a particular stake in EPA making the … Roger, you asked what possible difference a quarter inch average rise in …
    http://sciencepolicy.colorado.edu/prometheus/archives/climate_change/001004less_than_a_quarter_.html

    If you don’t recall it, you should reread it and think about what you’re trying to accomplish and what he’s trying to accomplish there.

    Remember:
    “… more than $2 million for this ‘ICD-9 Project.’ Tactics to prevent adoption of the new code …”

    That’s the way industry uses this.

    Nobody here thinks Microsoft was the first company to use FUD (“Fear, uncertainty, doubt”) in the market, do you?

    Candle in the dark. Lots of windbags around. Keep it lit.

  134. Eli Rabett:

    NN should have a talk with Bill Gates.

  135. Pat Neuman:

    Re #125 Mike said … “If you let your detractors prevent you from doing the best science possible, then they have already won”.

    I agree with that, but doing your best science possible despite the detractors can be costly.

  136. Hank Roberts:

    http://talkingpointsmemo.com/docs/files/1233855544%5BUntitled%5D_Page_2.jpg

    There’s the proposal to cut the science budget.
    This is what creating uncertainty and mistrust of science accomplishes.

  137. Jaydee:

    While I agree with the general points you (Gavin) make about there being no requirement to open source code, I thought I would comment on the prevalence of Free Open Source in business and the software world in general.

    First most of the internet infrastructure is run with open source code, including two thirds of the webservers:

    http://news.netcraft.com/archives/2009/01/16/january_2009_web_server_survey.html

    including http://www.realclimate.org

    http://uptime.netcraft.com/up/graph?site=www.realclimate.org

    Although open source on the desktop is pretty rare in business, on the server side it is common.

    As for SAP, one of the database variants is open source

    http://www.sapdb.org/

    Oracle provides and contributes to many open source projects, including InnoDB, the transactional database engine for MySQL (the M in LAMP).
    http://oss.oracle.com/

    As for the code for things like MS Vista, this is available under certain circumstances to Governments http://www.out-law.com/page-3296

  138. Eli Rabett:

    The code for more than a few GCMs is available. Two that come to mind are the Community Model and the GISS model. So far, none of the usual suspects have had at it.

  139. Rod B:

    BPL (75), Your link is curiously interesting, but hardly a proof certain of your implication. In substance, you first blow out of the water the GW explanation of CO2 following, by long periods, the rise of temperatures in long-ago times. Secondly, it does not address the relative marginal changes when the concentration is much smaller or much greater than the data (though you might not have meant that). Then, what’s very intriguing, if not amusing, you come up with a chance of randomness a million times smaller than Planck’s constant, a la Heisenberg’s uncertainty principle. I know the uncertainty principle does not address probability — but really!! All of which shows that mathematics is a construct and has no control over physics. Your analysis seems to show a decent correlation of CO2 concentration and global temperature within the range measured; this is interesting in its own right and might be indicative of something, but proves nothing.

  140. Timothy Chase:

    dhogaza wrote in 118:

    If the local hilltop moonshine distiller had demanded that Einstein publish his papers at a level he could understand, documented from the level of basic algebra up, piling on accusations of fraud, dishonesty, intentional refusal to provide enough information for him to replicate the work … Einstein’s eyes would’ve bugged out, I’m sure.

    I’m exaggerating, but in principle this is what’s being asked for. Not the level of openness and accessibility professional scientists in the field – the researcher’s peers – need to understand and, if necessary, undermine or disprove the work. But rather some unachievable level of “openness” that will allow any tom, dick, and harry – jeff id, for instance – to “audit” the work.

    Then there are the talk show hosts who have found that they can achieve higher ratings by playing to distrust of science and conspiracy theories among the Toms, Dicks and Harrys of the world, conspiracy theories which are never disproven but only become more elaborate when faced with counterevidence (as in, “… all these other scientists must be in on it, too!”), the well-documented Exxon-funded think tanks, etc.

    It is possible that you might achieve a level of evidence and detail that would satisfy a particular Tom, Dick or Harry, but there will be ten who are just a little more paranoid or ideological, or a little less honest or bright, ready to fill their shoes and demand even more, always resenting the mere existence of any shred of evidence that their politics might have to make room for the discoveries of science.

  141. Timothy Chase:

    Hank Roberts wrote in 136:

    http://talkingpointsmemo.com/docs/files/1233855544%5BUntitled%5D_Page_2.jpg

    There’s the proposal to cut the science budget.
    This is what creating uncertainty and mistrust of science accomplishes.

    Hank, you might want to see this:

    Science funding cuts stayed, still in Senate version of recovery bill
    http://www.writeslikeshetalks.com/2009/02/07/science-funding-cuts-stayed-still-in-senate-version-of-recovery-bill/

    It looks like a fair amount of the science funding was preserved. Instead of a 52.32 percent cut we are seeing a 10.74 percent cut. Still, the Senate version isn’t the final version: the final is supposed to be some sort of compromise between the House and the Senate. We will see what happens between now and then to that funding, and to the funding for renewables, the grid, etc.

  142. tamino:

    Re: #139 (Rod B)

    When you say “you first blow out of the water the GW explanation of CO2 following by long periods the rise of temperatures in long ago times” you reveal how ignorant you are. BPL’s analysis does nothing of the kind, and you have to have a rather perverse prejudice to think it does.

    Your statement about “a chance of randomness a million times smaller than Planck’s Constant” is silly. Probability is a dimensionless quantity while Planck’s constant is not, and with a different choice of units I can make Planck’s constant take any numerical value at all (in fact a common choice in particle physics is to choose units such that h = 2*pi). You just wanted to mention Planck and Heisenberg to make yourself look educated.

    As for “proves nothing,” in fact it proves exactly what BPL set out to show: that idiotic claims of no recent correlation between CO2 concentration and temperature depend on cherry-picking, using such a small time span that the noise in the data drowns out the signal.

  143. Martin Vermeer:

    I’ve said it before and I’ll say it again: providing source has nothing to do with scientific replication. Running the code a scientist provides and getting the same result proves nothing scientifically; all it proves is that the result was not made up (and thus insisting on doing this is an implicit insult). Real replication is done by writing your own code, which for Eric’s work would presumably be just a ‘harness’ of scripts using standard library code and loading and saving data files. All in a day’s work for a practicing scientist.

    The reason this may not be obvious to some is lack of familiarity with standard methods used in the field and well documented in Eric’s link: http://web.gps.caltech.edu/~tapio/imputation/ and Tapio Schneider’s article linked from there. There is a simple backgrounder in SVD and PCA analysis techniques, with a simple example that you may want to read first, linked from my name; Ch. 12.

    Providing source is a courtesy to people outside the community of peers, and contrary to some here, I don’t demand it but, for the record, do highly appreciate it.
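    For readers unfamiliar with the SVD/PCA machinery mentioned above, here is a toy sketch in Python/NumPy — emphatically not Schneider’s RegEM code, just an illustration on fabricated data of the building block: center a data matrix, decompose it with the SVD, and note how few components carry the variance. The “stations,” loadings, and noise level are all made up for the example:

    ```python
    import numpy as np

    # Fabricated example: 100 "time steps" at 5 "stations", all driven by one
    # common signal plus noise, so a single principal component dominates.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 4 * np.pi, 100)
    weights = rng.uniform(0.5, 1.5, 5)        # hypothetical station loadings
    X = np.outer(np.sin(t), weights) + 0.1 * rng.standard_normal((100, 5))

    Xc = X - X.mean(axis=0)                   # center each column
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

    explained = s**2 / np.sum(s**2)           # variance fraction per component
    print("variance explained per PC:", np.round(explained, 3))

    # A rank-1 truncation already reconstructs most of the field:
    X1 = (U[:, :1] * s[:1]) @ Vt[:1, :]
    print("rank-1 RMS error: %.3f" % np.sqrt(np.mean((Xc - X1) ** 2)))
    ```

    Real reconstruction methods (RegEM and relatives) iterate this kind of decomposition while imputing missing entries; the point here is only that the core operations are standard library calls, which is why a competent replicator can write their own harness rather than re-run the author’s.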

  144. Hank Roberts:

    http://www.cosis.net/abstracts/EGU2008/05209/EGU2008-A-05209-1.pdf
    Geophysical Research Abstracts,
    Vol. 10, EGU2008-A-05209, 2008
    SRef-ID: 1607-7962/gra/EGU2008-A-05209
    EGU General Assembly 2008 © Author(s) 2008
    Significant warming of West Antarctica in the last 50 years

    http://www.pnas.org/content/105/34/12154.abstract
    August 12, 2008, doi: 10.1073/pnas.0803627105
    PNAS August 26, 2008 vol. 105 no. 34 12154-12158
    Ice cores record significant 1940s Antarctic warmth related to tropical climate variability
    David P. Schneider, Eric J. Steig
    Edited by Richard M. Goody, Harvard University

    There are a few other papers out there; at least one (which was it?) was reassessed after the current paper was published.

    In total, a handful of serious papers.

    Weigh this against the total volume of bloggery committed on the subject, if anyone has a handy tool to get a total (Google, somehow, maybe?). And from experience we can be sure that months and years from now there will still be people new to the subject dropping the name “Harry” into conversations, with vague insinuations of fraud picked up by reading the kind of garbage that gets blogged.

    Making the details of the work behind published papers public sounds wonderful from the outside. Scientists who’ve experienced having their work taken by someone who can use it faster to their own benefit know the risks of giving away one’s tool kit and knowledge to one’s competitors. That game was played much harder thirty years ago when I was last in academia — no Internet; getting someone’s lab notes or draft grant application and beating them to publication or application wasn’t even considered wrong. It happened. As Jesse Unruh remarked about California, faculty fought because there was so little at stake.

    I recall people used to set traps for that kind of behavior — ‘carelessly’ leaving out fake lab notebooks (and later even laptops) where they could be stolen, to take down people willing to steal that kind of information and tie them up wasting time instead of doing their own honest work.

    There are always people who’d rather mess with someone else’s work than do their own, for many reasons.

    Openness — and the plagiarism detection tools online — are a good thing. Better documentation of detailed background makes it hard to jump into publication using someone else’s work.

    But the new crowd — the politically active, and the businesses that want to fog and confuse the science to avoid regulation — are far more able to do damage to the research and the chance for support. Letting them have all the work and all the background sounds really noble.

    But you need to read the tobacco papers — the only industry that’s had much of its archive disclosed after losing all the way through the court system. What’s disclosed are tactics that are standard in business lobbying — they include faking papers, faking science, and pressuring journals — and they consider this normal and fair.

    A few other such patterns have been documented — even more recently.
    See below.

    I respect Michael Tobis’s point of view. But when you’re going to open up everything to everyone, you have to know this happens, and think about the amount of money at risk from the science and the payback for those out to destroy your reputation as a scientist.

    Seriously, folks, read the top five here:

    http://scholar.google.com/scholar?num=50&hl=en&lr=&newwindow=1&safe=off&cites=16026924779708694907

  145. Hank Roberts:

    And for those who don’t look, this one at least:

    Int J Occup Environ Health. 2007 Jan-Mar;13(1):107-17.Links
    Comment in: Int J Occup Environ Health. 2007 Jul-Sep;13(3):356; author reply 356-8.

    Industry influence on occupational and environmental public health. Huff J.

    National Institute of Environmental Health Sciences, Research Triangle Park, NC 27709, USA.

    Traditional covert influence of industry on occupational and environmental health (OEH) policies has turned brazenly overt in the last several years. More than ever before the OEH community is witnessing the perverse influence and increasing control by industry interests. Government has failed to support independent, public health-oriented practitioners and their organizations, instead joining many corporate endeavors to discourage efforts to protect the health of workers and the community. Scientists and clinicians must unite scientifically, politically, and practically for the betterment of public health and common good. Working together is the only way public health professionals can withstand the power and pressure of industry. Until public health is removed from politics and the influence of corporate money, real progress will be difficult to achieve and past achievements will be lost.

    _____________
    Climatology _is_ an issue of public health

  146. Rob:

    Mr. Schmidt,
    You do not seem to grasp how software conceptually works.

    In claiming that it could be used for “politically motivated attacks” you seem to believe that a piece of software in the hands of an “evil minded” person can produce different results than when it is used “nicely”. I am a software engineer; I don’t know much about climate science (I’m interested, though), but I know for sure that SW always does the same thing over and over again. That is actually the point of using it.

    Linking to a library of functions and saying “here it is, all of it” is like pointing to a tool box with hammers, nails and saws and telling someone to go build a house. You NEED to explain in what order the tools shall be used (makefile…).

    Best Regards,
    Rob

    [Response: Thanks, but I think I do know what software does ;) . The point I was making above concerns people who aren't generally interested in running the code at all. For some, the names of variables, inefficient looping, or cryptic inline comments all become fodder for ridicule whether or not the code works as advertised. Code can always be improved of course, but not everything that people use (with its imperfections) is going to fare well exposed to the glare of people whose only goal is to make you look bad. Thus people who don't have the time to make their code bullet-proof often hesitate to make it public at all. This is just an observation, not a justification. - gavin]
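    To make Rob’s toolbox analogy concrete: the individual routines are the tools, and a short driver script is the ‘makefile’ that fixes the order in which they run. The sketch below is entirely hypothetical — none of these function names or steps come from any actual climate code — it only illustrates that a re-user needs the recipe, not just the toolbox:

```python
def load_station_data():
    # Stand-in for reading raw station files (hypothetical data).
    return [[1.0, 2.0], [2.0, 3.0]]

def quality_control(data):
    # Stand-in for removing flagged or duplicated records.
    return data

def reconstruct(data):
    # Stand-in for the actual analysis step.
    return [sum(row) / len(row) for row in data]

def main():
    # The ordering below is the "makefile": load -> QC -> reconstruct.
    data = load_station_data()
    data = quality_control(data)
    return reconstruct(data)

if __name__ == "__main__":
    print(main())
```

    Handing someone only the three functions, without `main()`, is exactly the toolbox-without-instructions situation Rob describes.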

  147. Barton Paul Levenson:

    Mike, thanks. I can’t believe I forgot and . Shows what years away from my old field of computer programming can do…

  148. Rob:

    Mr. Schmidt,
    It doesn’t matter how crappy a piece of SW becomes; it does what it has always done, and if someone isn’t serious enough to avoid making a point out of the crappiness, they are disqualified from further comment. Does that really bother you?
    If you were to publish the SW, anyone who is truly interested in replicating your work can do so with the help of the original SW, crappy or not.
    Doing so, however, REQUIRES some basic step-by-step instructions (makefile, script, etc.). This is really my point; please comment.

    [Response: It doesn't bother me particularly, and the code I work on is available for all to see, warts and all. Other people may be more sensitive. As to your real point, the answer is that it depends. I'll have another post tomorrow sometime discussing this in more detail. - gavin]

  149. Ron Taylor:

    Nicholas, in an ideal world I would agree with total openness. But you may not realize the level of damage being done to public understanding and policy discussions by dishonest denialists. In the current environment, I would stick with advice I was given by a colleague in an entirely different context some years ago: Do not give arms to the enemy!

  150. Joseph O'Sullivan:

    People are making lots of comparisons between their areas and the request for information from scientists, so I’ll add my own opinion based on my experience.

    In lawsuits the opposing parties routinely make requests for information. Each side wants to win and a tactic commonly used is to make requests that are onerous. The goal is not to find facts, it’s to make the opponent spend undue amounts of time and money to derail the opponent’s efforts. This type of behavior unchecked could grind the legal system to a halt. To ensure that the system functions there are whole sets of rules that govern the process.

    When rules and regulations are made in the US the public is allowed to get involved, including asking for information about the science that was used making the decisions. Again rules govern the process to make sure it is not bogged down by unreasonable requests.

    Climate science has become as adversarial as the legal and regulatory systems, but without the necessary safeguards. I think it’s no coincidence that some people are focusing on the science.

  151. AL:

    Dear sir,

    I commented earlier on the double plotting of the Racer Rock point on your “corrected” graph. Having looked further at the graphs, I now realise that the second point is actually the result from your “difference” graph, which has erroneously made its way onto the “corrected” graph. I realise that these are initial results, but maybe you should modify them so that the graph points lie on the graph in question and not on another graph?

    regards

    Alan

    [Response: Actually I added the extra point because the initial bottom panel didn't show the RR point at all. I agree it's not perfect, but given that the creator of the graph is now at one of the ends of the Earth, it will have to do for now. - gavin]

  152. dhogaza:

    As for SAP, one of the database variants is open source

    http://www.sapdb.org/

    Yes, SAP open sourced it after the competition (Oracle, mostly) kicked it to the curb.

    The important part of SAP – i.e. the part that provides revenue – is not open source.

    Oracle provides and contributes to many open source projects, including InnoDB, the transactional database engine for MySQL (the M in LAMP).
    http://oss.oracle.com/

    However, nothing critical to Oracle’s commercial success is open source. They leverage the open source world to increase the value of their closed source products, enhancing their revenue.

    As for the code for things like MS Vista, this is available under certain circumstances to Governments http://www.out-law.com/page-3296

    Which is analogous to how scientists share information with each other.

    Note that none of the companies above do the kind of open sourcing that the denialosphere is demanding of climate (but not other) science.

  153. dhogaza:

    SOX Certification requires a ton of testing for correctness.

    Once again, testing is not a proof of correctness. Your education is apparently lacking …

    When you say “professional scientists in the field – the researcher’s peers”, would you not consider computer scientists peers, at least for that part of the work that is in their area of expertise?

    Not particularly. I’m qualified to review the code that makes up a GCM, but not the mathematical model it implements, the physics underlying the mathematical model, etc. I would essentially have to take the code’s word for it, so to speak. Useless unless you think bitching about programming style tells us something about the accuracy of the model results.

    Do you really think your average computer scientist is qualified to “audit” the code in any reasonable way? I don’t. And I have absolutely no doubt that the average denialist screaming for the code is not.

  154. dhogaza:

    There’s a nice irony here, given that academic computer scientists as a whole aren’t particularly noted for their superior programming skills …

    A bunch of them are a hell of a lot better at probing odd corners of discrete mathematics and the theory of computational complexity than they are at writing industrial strength code.

  155. Mike Walker:

    National Academy of Sciences 6-22-06. “Our view is that all research benefits from full and open access to published data-sets and that a clear explanation of analytical methods is mandatory. Peers should have access to the information needed to reproduce published results, so that increased confidence in the outcome of the study can be generated inside and outside the scientific community.”

    Full article here: http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=6222006

    If a researcher’s work is good, then full transparency will prove that it is. If it is not, then full transparency will also prove that it is not. Either way, science benefits. Those who don’t believe this, simply do not have confidence that their science will stand on its own.

    [Response: You seem to be making the assumption that transparency is a magic bullet that will allow us to separate good science from bad. That is simply not the case. A perfectly executed analysis based on a faulty assumption is bad science regardless of how transparent the code is. - gavin]

  156. Rod B:

    tamino (142), I think BPL was implying that his analysis, which has CO2 and temperature going in very tight lockstep, can be projected as a general rule. That degree of lockstep correlation certainly does not come anywhere near the relationship of eons ago, for example. OTOH, if he really was professing that the CO2–temp correlation is applicable only from the late 1800s to the early 2000s, and only for that period’s range of concentration and temperature, I have little disagreement — though the 10^-41 equivalent certainty is quite a stretch: that’s a factor of 10^-41 smaller than your Planck’s constant (…showoff!)

  157. Mike Walker:

    dhogaza #153: 7 February 2009 at 10:00 PM says “Do you really think your average computer scientist is qualified to “audit” the code in any reasonable way?”

    I agree; I don’t think “research code” would often benefit from an audit by a computer scientist. I think all that is needed is simple testing, by someone competent in software assurance, that the code works as intended. The person who programmed something never catches all of the errors.

    The main benefit from publishing the code is that the code provides detailed documentation of the mathematical and statistical methods used. The code itself is unimportant, but what it documents about the research is where the value is.

  158. Richard Simons:

    It seems to me that if you are interested in checking the validity of a model, you should not be pestering for the code. Instead, you should get the details of the model, then write your own, different, code and see if the results are the same. That way, you are much less likely to miss any errors that might be in the code.

    As a complete outsider to climate research, I get the feeling that a lot of this demand for code is driven by a desire to find reasons to ignore the implications of the work while putting in as little effort as possible.

    [Response: Bingo! - gavin]

  159. Terry:

    Gavin, re #146: I have always subscribed to the philosophy that proper peer review should be undertaken independently. From my perspective, and in my own professional area of atmospheric science, that means review verification should always be done a priori, using alternate methods or developing the method from one’s own knowledge of the subject matter. I never ask for the code or spreadsheets, and prefer to re-create them from my own knowledge. The original should, however, provide sufficient information to give the reviewer enough guidance to correctly replicate the steps involved. That said, it is not an easy task to convey that kind of information for the type of work that climate change requires; with the advent of package systems, e.g. MATLAB (which I know nothing about) or SAS, a roadmap is at least relatively easy to generate.

    I follow this site regularly, as I do CA and WUWT, and I don’t share the opinion some here hold of the intent of those sites, nor the opinion some on those sites hold of RC. There are bone-heads on both sides in this battle of the blogs. On this particular argument I am of two minds, but I lean towards your view, especially in regard to reviews needing to be a priori; as above, though, a roadmap of the methodology would be useful. Those who wish to re-construct the results can then use their own procedures by following it. Interestingly enough, I see one poster at CA who is successfully de-constructing the Steig data using alternate procedures. Now that is how it should be done.

  160. Mark:

    Ron 149, I think it was “alms” to the enemy. Not arms.

  161. Danny Bloom:

    Doug McKeever, above, Feb. 29 at 2:59 pm said: “I watched the catchy little cartoon mentioned by #2 (Danny Bloom) and although it is cute and some of it is even a bit thought-provoking, the animation is marred by an error that many of my students fall for, too, until they think it through. The cartoon implies that melting ice near the North Pole will cause sea level to rise. Of course, as the cartoon says, the North Pole is over ocean not land. Melting of LAND ice causes sea level to rise, or displacement of land ice into the sea, but melting of sea ice (pack ice) does not raise s.l. A simple demonstration is to monitor the water level in a glass both before and after an ice cube melts…the water level doesn’t change (if the TEMP of the water increases, that will raise the level slightly as the warmer water has lower density and thus greater volume, but that’s minuscule in a glass of water).”

    Duly noted, Doug. I will pass on your good comments to the cartoonist in New Jersey who made that video. I often get confused on that issue too. But yes, important point, thanks.

  162. AL:

    Gavin,

    the fact that you added the point probably explains why the point appears to be in the wrong position. Would it be an idea to mention the fact that you have added a point to a graph in future? Were any other points added or changed?

    [Response: no. - gavin]

  163. tamino:

    Re: #156 (Rod B)

    I think BPL was implying that his analysis, which has CO2 and Temperature going in very tight lockstep, can be projected as a general rule

    I prefer actually to read what he wrote. Like his very first two sentences:

    Some global warming deniers claim that the correlation between temperature and carbon dioxide is vague and coincidental. I’ll check that here with what’s called a “linear regression.”

  164. Jaydee:

    152. dhogaza

    This is getting a little off topic, but:

    “Open Source is not standard practice in commerical enterprises.”

    “However, nothing critical to Oracle’s commercial success is open source. They leverage the open source world to increase the value of their closed source products, enhancing their revenue.”

    Sure, Oracle is not open source. But Oracle must have had a reason for shelling out $ for InnoDB, mainly that MySQL had started to get significant traction in areas that Oracle cares about.

    http://sql-info.de/mysql/oracle-acquires-innodb.html

    MySQL is the database behind Wikipedia, with 1.2 TB of data as of 2006. It could have changed since then.

    Note also that Oracle actually runs its own website with Oracle on Linux.

    http://uptime.netcraft.com/up/graph?site=www.oracle.com

    My main point is that for most companies it is wrong to say that they use closed source or open source. Most use both, whether they know it or not.
    —–
    Back on topic, I am in complete agreement with the general point that open-sourcing the code used for analysis is not necessary. In fact it is better to have several different sets of software in use, so that differences in results can point out issues with any one program.

    I also agree with Gavin’s new post on replication, which concludes that it is (generally) more important to look at the assumptions and conclusions drawn than to expect to find errors in the code.

  165. dhogaza:

    Sure Oracle is not open source. But Oracle must have had a reason for shelling out $ for InnoDB, mainly that MySQL has started to get significant traction in areas that it cares about.

    This has nothing to do with the entirely bogus claim that “open source is standard practice in the commercial world”, made originally by someone whose company depends on sales of (ahem) proprietary, closed-source software.

  166. dhogaza:

    My main point is that for most companies it is wrong to say that they use closed source or open source. Most use both, whether they know it or not.

    And I never claimed otherwise. Please read more closely.

  167. James:

    Mike Walker Says (7 February 2009 at 11:16 PM):

    “The main benefit from publishing the code is that the code provides detailed documentation of the mathematical and statistical methods used. The code itself is unimportant, but what it documents about the research is where the value is.”

    Except that this is seldom the case in practice, in my experience anyway. (I might mention that I’ve earned a good share of my income for the last couple of decades from re-engineering & extending scientific & engineering programs.) The code’s really a last resort for trying to figure out what it’s supposed to do. It’s far more efficient to have some sort of external information about what it’s supposed to do – the sort of information you might find in a paper – before you try to figure out whether that’s what it really does, or whether it can be done faster or with less memory.

    The best documentation is, in fact, documentation.

  168. Ike Solem:

    This open source argument is getting ridiculous. Scientists don’t present their lab notebooks for public inspection unless there is a lawsuit or legal matter – for example, scientists who find lead contamination in medications can expect to have their notebooks examined in detail during legal cases against the manufacturer, and they can expect that the lawyers will use any errors or mistakes in their notebooks to attempt to discredit their work – and usually the goal is to portray the scientist in question as sloppy or unreliable. This is just the standard legal approach, always aimed at influencing the opinions of the jury, not at explaining the science to them.

    The lack of desire of many posters on this thread to discuss the actual science is interesting in that regard – in fact, this thread sounds like a collection of trial lawyers out to discredit the witness, doesn’t it?

    So, let’s take a look at a somewhat similar (but different) situation: satellite- and radiosonde-derived temperature trends in the Arctic. The results are controversial – everyone agrees things are warming, but the question is about the structure of the warming and the role of snow and ice cover:

    http://www.nature.com/nature/journal/v451/n7174/abs/nature06502.html
    “Vertical structure of recent Arctic warming” Graversen 2008

    Here we examine the vertical structure of temperature change in the Arctic during the late twentieth century using reanalysis data. We find evidence for temperature amplification well above the surface.

    Several scientists looked at this, disagreed and sent the following letters off to Nature:

    http://www.nature.com/nature/journal/v455/n7210/full/nature07256.html
    “Arctic tropospheric warming amplification?” Thorne 2008

    http://www.nature.com/nature/journal/v455/n7210/full/nature07257.html
    “Recent Arctic warming vertical structure contested” Grant et al 2008

    http://www.nature.com/nature/journal/v455/n7210/full/nature07258.html
    “Arctic warming aloft is data set dependent” Bitz & Fu 2008

    The point here is that Graversen et al. published a reanalysis of data that was disputed by others, who showed that different datasets don’t reproduce the trend aloft. No one is disputing the overall warming trend; rather, they are disputing the spatial distribution of the warming trend across the region.

    Graversen et al. replied to the criticisms:
    http://www.nature.com/nature/journal/v455/n7210/full/nature07259.html

    Essentially, the argument is over whether the reanalysis approach comes up with the right answer. This kind of scientific debate is normal, and you probably never saw this issue come up in the press, because everyone involved agrees that the Arctic is warming rapidly, and that’s not a very controversial position. The press likes to run articles on global warming that feature a prominent denialist on one hand and some climate scientist on the other; if both parties agree on global warming, that might upset the paper’s owners.

    For example, take the Examiner.com newspaper chain, one of the last homes of blatant denialism in the U.S. media. Typical recent headlines include “Oceans are cooling according to NASA” (Justin Berk, Examiner, Jan 21 2009) – although everyone knows that result came from a flawed instrument. Why does the press get away with such blatant lies, and why do other news outlets ignore it? “Press lies to public” – that’s not a good headline, I suppose.

    A similar article by the Examiner is “An inconvenient truth: The Earth is cooling”, 12/23/08. That one repeats the talking points – we’ve been cooling since 1998, the oceans are cooling, etc. etc. This PR effort at pushing the “earth is cooling theme” is not helped by scientific studies that show warming, so those studies get attacked. Why? Well, the Examiner paper ownership is a little revealing:

    From Slate: The Billionaire Newsboy: Making sense of Philip Anschutz’s Examiners. By Jack Shafer, March 24, 2005.

    Quote: “Nobody thinks Anschutz is a fool. An oil wildcatter raised by an oil wildcatter, he moved into the railroad business in the early 1980s…”

    The oil industry and the railroad industry are among those most opposed to government regulations and renewable energy initiatives that would eliminate coal combustion as well as foreign oil imports, the final result being a coal-free, low-carbon energy infrastructure. The railroads are opposed because they make a living transporting coal from mine to power plant; that’s where much of the money is made. Thus, it is really not surprising that a billionaire involved in oil and railroads would sponsor a network of newspapers that engages in lying to the public on climate matters.

    The same general theme also prevails in Britain. If you want to see a vitriolic response to this paper, take a look at the British Spectator, owned by the Barclay Brothers, another set of reclusive (British) billionaires: “As Mark Twain might have put it, there are three kinds of lies — lies, damned lies and global warming science,” and so on.

    Here’s the heart of their argument:

    The flaw they identified was that, since Antarctica has so few weather stations, the computer Steig used was programmed to guess what data they would have produced had such stations existed. In other words, the findings that caused such excitement were based on data that had been made up.

    Yes – that is called a data-infilling procedure, and it’s what you have to do when not enough data have been collected. It’s also used in the oceans, where the paucity of data is likewise a problem for past reconstructions of temperature. Any scientist who was really concerned about the issue would also call for MORE data collection and more satellites – but the denialists never do; in fact, the likes of Pielke et al. have publicly stated that we shouldn’t be funding more data collection programs.
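    To see what infilling means in miniature, here is a deliberately stripped-down sketch: one synthetic station record with a gap is filled by regressing it against a covarying complete record over their overlap. (The paper itself uses RegEM, an iterative regularized version of this idea; the single-predictor regression below is only an illustration on made-up data.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Two covarying synthetic "station" records; the second has a gap.
n = 120
common = rng.normal(size=n).cumsum()          # shared regional signal
a = common + 0.2 * rng.normal(size=n)         # complete station
b = 0.9 * common + 0.2 * rng.normal(size=n)   # station with missing data
missing = np.zeros(n, dtype=bool)
missing[40:60] = True
b_obs = b.copy()
b_obs[missing] = np.nan                       # 20 missing values

# Fit b ~ a on the overlap, then predict into the gap.
ok = ~missing
coef = np.polyfit(a[ok], b_obs[ok], 1)
b_filled = b_obs.copy()
b_filled[missing] = np.polyval(coef, a[missing])

# Compare the infilled values with the "true" values we held back.
err = float(np.abs(b_filled[missing] - b[missing]).mean())
print(f"mean absolute error in the infilled gap: {err:.3f}")
```

    The filled values are not “made up”: they are constrained by the covariance with the observed record, and because we generated the truth here we can check that the gap is recovered to within the noise level.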

    Well… I can’t wait to read the submitted contributions of all the skeptics, and their detailed examinations of all the reanalysis data and results – to Nature and Science, that is, not to this thread.

  169. Mike Walker:

    James #167: 8 February 2009 at 1:09 PM writes: “The best documentation is, in fact, documentation.”

    Agreed, but if [edit--enough of this here. any additional generic commentary on issues of 'replication', etc. should take place in the comment thread of the most recent posting which deals specifically w/ such topics]

  170. Ron Taylor:

    Mark (160), the original quote may have been “alms,” but my colleague clearly meant “arms.” And I think that applies in this case. Do not give ARMS to the enemy. What many denialists are looking for is arcane details that they can attack. Why provide them with that, if that is their goal?

  171. Hank Roberts:

    BBC science writer deals with coverage and blog responses:

    http://www.bbc.co.uk/blogs/thereporters/richardblack/2009/01/clearing_the_air.html

  172. Mark:

    Ron, “alms” means aid. arms as armaments would be merely a subset.

  173. Hank Roberts:

    This should make the difference between arms and alms quite clear in practice:
    http://jennifermarohasy.com/blog/2009/01/led-author-of-antarctic-warming-paper-claims-libel/

  174. Brian Klappstein:

    Tamino#43

    My choice of “eyeballed” spatially representative stations didn’t return 1980 as the warmest year in Antarctica as you pointed out. However, I’ve just run across references to some papers that apparently do show 1980 to be the warmest year in Antarctica over at WorldClimateReport: Chapman et al 2007 and Monaghan et al 2008.

    Any comment?

    Regards, BRK

  175. Hank Roberts:

    Brian, did you read the above information, or look up the papers?
    WCR doesn’t give you the citing papers or references for what they’re saying. They’re an advocacy site.
    Brian, use your web browser and find “monaghan” in this thread.
    You’ll find the name immediately before the words
    Brian Klappstein Says: 5 February 2009 at 11:03 AM

    Beyond that, the links below took me eight seconds to find. Keep track, see if what you found at WCR gets brought up to date. It’s an advocacy site, not a science site. You can do better, easily.

    http://www.agu.org/pubs/crossref/2008/2007JD009094.shtml

    and see

    http://www.nytimes.com/2009/01/22/science/earth/22climate.html?ref=science

    “There is very convincing evidence in this work of warming over West Antarctica,” said Andrew Monaghan, a scientist at the National Center for Atmospheric Research in Boulder, Colo., who was not involved with the research.

    Dr. Monaghan, who had not detected the rapid warming of West Antarctica in an earlier study, said the new study had “spurred me to take another look at ours — I’ve since gone back and included additional records.”

    That reanalysis, which used somewhat different techniques and assumptions, has not yet been published, but he presented his revised findings last month at a meeting of the American Geophysical Union.

    “The results I get are very similar to his,” Dr. Monaghan said.

    Quoted from NYT January 22, 2009

  176. Brian Klappstein:

    Hank Roberts #175

    Thank you for your courteous response. WCR may be an advocacy site, but they did something valuable for people like me who generally don’t have access to these papers: they published the temperature series graphs therein.

    In any case, I look forward to Dr. Monaghan’s update.

    Regards, BRK

  177. naught101:

    gavin said: And what do we do for people who don’t have access to Matlab, IDL, STATA or a fortran 95 compiler? Are we to recode everything using open source code that we may never have used? What if the script only runs on Linux? Do we need to make a Windows GUI version too?

    Nothing, no, that’s ok, and no.

    The point of open source would be letting others improve on your work, i.e. just release what you have; then if someone really wants a Windows GUI version, they can code that on top of what you’ve already produced. There’s absolutely no onus to make it work for everyone, just to show people what you’re doing (they may then choose to re-write it in a different language, and may improve it, or may make some stupid mistakes, but at least it’ll all be in the open).

  178. Hank Roberts:

    > something valuable … published the graphs

    At the top of this thread, click the link to the Nature article page.

    There, look at the box with the red bar at the top “ARTICLE LINKS” where you’ll see these with links:

    * Figures and tables
    * Supplementary info
    * See Also
    o Editor’s Summary

    This is the link from “Figures and tables”

    http://www.nature.com/nature/journal/v457/n7228/fig_tab/nature07669_ft.html

    Collect the entire set. Always look at the primary source, and compare what advocacy sites show. Sometimes they show everything. Not always though.

  179. Jeff Id:

    I got the RegEm to work. My last comment didn’t show up in moderation so cut this one if you need to ;) . Since I don’t have the parameters you used these were my best guesses.

    http://noconsensus.wordpress.com/2009/02/10/deconstructing-a-reconstruction-of-temperature-trend/

    [Response: Well done. Now you have learned something. You now also have the means to test what the results are and are not sensitive to - try looking at monthly anomalies rather than raw values, the impact of including more eigenvalues, and the various other issues discussed in Tapio's papers. - gavin]

  180. Jeff Id:

    I’m sorry, it’s very difficult to tell when a post is cut or there is a transmission problem. This is the same as before so if it was cut that should speed your decision along.

    Are you saying the AWS data are normalized prior to use? I wasn’t sure from this chunk in the methods, where it says:

    “and using the READER occupied weather station temperature data to produce a reconstruction of temperature at each AWS site for the period 1957-2006.”

    Would you mind telling me what value was used for
    OPTIONS.regpar
    This one has me pretty confused – what would you consider an appropriate value for a TTLS run?

    Finally, I would also be interested in running the satellite data and processing algorithm to mask for clouds. Is there some ETA on that?
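
    For readers puzzled by the same question: as I understand Tapio Schneider’s RegEM documentation, in TTLS mode OPTIONS.regpar is the truncation level k, i.e. how many leading singular directions of the augmented data matrix are retained. Below is a minimal, self-contained illustration of truncated total least squares – in Python rather than RegEM’s Matlab, with made-up data, and not the RegEM code itself:

```python
import numpy as np

def ttls(A, b, k):
    """Truncated total least squares: solve A x ~ b using only the leading
    k right singular directions of the augmented matrix [A b]."""
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C, full_matrices=False)
    V = Vt.T
    n = A.shape[1]
    V12 = V[:n, k:]   # predictor rows of the discarded directions
    V22 = V[n:, k:]   # response rows of the discarded directions
    return -V12 @ np.linalg.pinv(V22)

# Synthetic problem with noise in both A and b (the TLS setting):
rng = np.random.default_rng(1)
A_true = rng.normal(size=(200, 3))
x_true = np.array([[2.0], [-1.0], [0.5]])
b = A_true @ x_true + rng.normal(0.0, 0.1, size=(200, 1))
A_noisy = A_true + rng.normal(0.0, 0.05, size=A_true.shape)

x_hat = ttls(A_noisy, b, k=3)   # k plays the role of OPTIONS.regpar
print(x_hat.ravel())            # close to [2, -1, 0.5]
```

    The choice of k trades variance against bias, which is exactly why the reconstruction’s sensitivity to it is worth testing.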

  181. pete best:

    Re #168, An excellent post, Ike, and poignant. Yes, the lies about climate change are repeated all of the time by people whose minds prefer to believe such articles. I come across them constantly in the comments sections of newspapers’ climate science articles.

    It’s a load of left-wing nonsense (for some reason it’s a left-wing conspiracy), which I take to mean that all of the people wanting to see a low-carbon economy are against capitalism, which is a slightly odd argument.

    It has been cooling since 1998. Another odd one, but used all of the time, even though a decade is not enough time for any statistically meaningful climate analysis, as stated here many times.

    The Sun is to blame and the world is cooling. In other words, the Sun is inactive and it is solely responsible for temperatures on Earth. Again, we know that it’s about 0.2 W/m², which is significant but not that significant when it comes to all forcings, both -ve and +ve.

    The most recent thread comment is once again about Hansen’s original papers on the A, B, C warming scenarios, which they make out were wrong. Once again RC has covered this, and Hansen was more right than wrong 20 years ago, but hey, a wrong is a wrong, right?

    It just goes on and on and on this way. It’s all getting a bit pointless and very technical here at RC. This Antarctic work by Steig is going to be the new hockey stick for the foreseeable future, I imagine, for now the denialists are arguing against the very tools and methods scientists used to get the results of this study. I am slightly amazed that RC has actually posted a thread on the subject, for why should climate scientists be grilled on their methods, the same methods and tools repeatedly used for a lot of past papers and the science as a whole? Thousands of peer-reviewed papers have been submitted and published this way, so why all of a sudden is it seemingly wrong? Do we now have to change the tools and methods of climate science, or of science itself? For I imagine that science is where these tools and methods come from.

    Utterly Bizarre.

  182. jeff Id:

    “I am slightly amazed that RC has actually posted a thread on the subject, for why should climate scientists be grilled on their methods, the same methods and tools repeatedly used for a lot of past papers and the science as a whole?”

    And I am actually interested in understanding. If you take Dhog’s word for anything, I’m a climate naturalist; however, Dhog is a politician with loud bluster and little knowledge. I am more interested in the detail. If the Antarctic is warming, I want to understand the calculations, because that is important to know. If the math is clearly disclosed and reproduced, those who wish to debunk it can try but will ultimately fail. As Gavin once told me: the data is the data.

    Concealing the math and data will do no good and appears (falsely) to be evidence of an ends justify the means policy.

    Let us have the code and data, the chips will fall as physics intends.

  183. Ray Ladbury:

    Jeff Id says: “Concealing the math and data will do no good and appears (falsely) to be evidence of an ends justify the means policy.”

    Absolute horse puckey. If you can’t derive them for yourself with perhaps a little bit of assistance, you don’t have the expertise to understand them in any case. Why the hell do you think it takes a decade to become a climate scientist? What the hell do you think graduate students are doing for all that time? Do you really think that everything you don’t understand must be easy? Jeeebus!

  184. Hank Roberts:

    > concealing

    Climatology: The Purloined Letter

  185. dhogaza:

    Concealing the math and data will do no good and appears (falsely) to be evidence of an ends justify the means policy.

    There you are with your accusations again …

  186. Hank Roberts:

    By the way, if the occasional commenter “concerned of Berkeley” is concerned and from California (the IP number would tell, maybe)
    reading and asking locally would be a good start.

    Asking questions about what you’ve read shows you’re trying and gives an idea how much you understand. You won’t get far if you just repeat that you’re lost and confused. Give your last known location, and what you can see from where you are now. We’ll help find you.

    http://www.climatechange.ca.gov/
    http://calclimate.berkeley.edu/
    http://gspp.berkeley.edu/programs/cccc.html
    http://www.globalwarmingcalifornia.net/city/berkeley.html
    http://www.dailycal.org/article/103582/uc_berkeley_study_on_climate_change_sparks_executi
    http://www.telegraph.co.uk/earth/earthcomment/3530703/How-Berkeleys-Bank-could-help-fight-climate-change.html
    http://www-esd.lbl.gov/CLIMATE/index.html

  187. pete best:

    I am sure that a lot of people are interested in the methods, computer code, and mathematics used to demonstrate in the paper that the Antarctic is warming, but if climate science (and other sciences) want closure on this, then surely the whole thing should be independently analysed by a scientific body of relevant people, maybe from an esteemed US or UK organisation such as the Royal Society or the Met Office, much as the hockey stick was.

    If Climate Audit have an issue with the methods used and want to try to tell the rest of us that the computer code and data used are a problem, then surely they are calling into question the whole crux of climate science. This needs to be resolved so we can move on. If this paper stands, then post-peer-review work will find any issues. They tried to find them with the hockey stick, but that also stood up to very close scrutiny.

  188. Kevin McKinney:

    pete, it ain’t going to happen like that. This is partly what the IPCC was intended to do, after all, and the denialist world has imagined that into a sinister conspiracy to wreck the American economy. (Maybe they will now uncover evidence that the IPCC pioneered securitized (finance) swaps.) Moving on is precisely the last thing denial-world wants.

    An instructive example could start with your comment: “They tried to find them with the hockey stick but that also stood up to very close scrutiny.” Yeah, but in denial-world today, the hockey stick was completely discredited [edit-no need to repeat the usual canards]

    IMO, what will happen is that the denial-sphere will be slowly ablated by repeated contact with reality. The bad news is that we don’t have a lot of time, but the good news is that recent political/cultural history shows that this process does continue, even in a media environment that is less than ideal. In the mean time, we have to continue countering the spin with fact and perspective, more or less calmly and civilly–with wit, brevity and a little humor as appropriate.

  189. Hank Roberts:

    Or, you can remember that Eric left for Antarctica last week and will be back in three or four months with another data collection that will take a year or so to analyze, then we’ll know more.

    Good grief, people, why not just demand a special prosecutor.

    Nothing needs to be resolved and we’ve been moving on.

    The gang hollering back there is doing the usual routine.

    Why all the effort all of a sudden to question everything and yell fraud everywhere? Could it be ….

    http://www.aaas.org/

    http://news.aaas.org/2009/0210aaas-annual-meeting-opens-in-chicago.shtml

    http://www.aaas.org/news/releases/2009/0205sp_sea_level.shtml

    Oh yeah, time to fill up the news columns with crap so the journalists can claim they’re publishing “balanced” reporting next to the AAAS meeting news.

  190. Jay:

    I’m curious if you saw this in the Guardian today.

    http://www.guardian.co.uk/commentisfree/cifamerica/2009/feb/06/antarctic-warming-climate-change

    Any comments? Apologies if you’ve already addressed it.

    [Response: Oh, very nice. Not quite sure what a 'computer jockey' is though. Maybe it's someone who skillfully uses computers to best work out what's happening in the world. That would be different to an advocacy scientist who will say anything, however contradictory and misleading, in order to come to exactly the same conclusion every time - (I should point out that I have a hard time seeing where the 'scientist' bit comes in). Pat will be on the Hill today wasting everyones time with yet another piece of 'advocacy'. Maybe I'll get a mention! - gavin]

  191. Mike G:

    I really have a hard time understanding how exactly having the code provides closure or resolves anything, other than showing that Eric didn’t fudge the results. The meat of replication is in the methods, not in the code.

    Let’s look at a real-world example from my line of work. Say I aim to produce a survivorship curve and determine maximum sustainable yield for a sea anemone using the Gulland-Holt method for the von Bertalanffy growth model. That probably makes no sense to most readers here, but knowing nothing more than that – none of my data, no script – it should be a fairly trivial task for anyone who knows population modeling to reproduce the work, because the models are not original to this application (neither is RegEM). They should also be able to judge whether this method is appropriate or not for this system.

    Now if I distribute the file that will calculate this and the data I collected, assuming you could figure out how to properly enter the data, you will get the same answer as me. Do you know if it’s a reasonable answer? Do you know if the program is a correct representation of the method? Do you know whether it’s an appropriate method to use for this system anyway? Does it allow you to go on to expand on the work? Not unless you already understand the method well enough to begin with that it should be obvious how to reproduce it with minimal hand holding. All the program proves to you is that I didn’t fudge the results and I know how to write a functional script. Being asked for evidence of this is pretty insulting to a scientist.
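
    Mike G’s example rather proves his own point: the Gulland-Holt fit is fully specified by its name, so anyone can re-implement it without his script. A minimal sketch with invented tagging data (the regression of growth rate on mean length has slope -K and intercept K·L∞; all numbers here are made up for illustration):

```python
import numpy as np

# Hypothetical tag-recapture data: lengths (mm) at release and recapture,
# each individual at liberty for one year (kept constant for simplicity).
L_release   = np.array([20.0, 35.0, 50.0, 62.0, 75.0])
L_recapture = np.array([34.0, 46.0, 58.0, 68.0, 79.0])
years_at_liberty = np.ones(5)

# Gulland-Holt: regress observed growth rate on mean length over the interval.
rate = (L_recapture - L_release) / years_at_liberty
mean_len = (L_release + L_recapture) / 2.0

slope, intercept = np.polyfit(mean_len, rate, 1)
K = -slope            # von Bertalanffy growth coefficient (per year)
Linf = intercept / K  # asymptotic length (mm)

print(f"K = {K:.3f}/yr, Linf = {Linf:.1f} mm")
```

    Anyone with the method description gets this far; the code file itself adds nothing beyond confirming the arithmetic.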

    Also, as a scientist, my job is to increase our understanding of reality. Why would I put effort forth to help out someone who has a reputation of trying to muddy the waters?

  192. dhogaza:

    Good grief, people, why not just demand a special prosecutor.

    That wouldn’t satisfy them, special prosecutors tend to be somewhat objective …

  193. william:

    #181 Pete Best
    You made the comment:
    “The Sun is to blame and the world is cooling. In other words, the Sun is inactive and it is solely responsible for temperatures on Earth. Again, we know that it’s about 0.2 W/m², which is significant but not that significant when it comes to all forcings, both -ve and +ve.”

    I’m not sure whether this was sarcasm or a mis-type. The sun is ultimately responsible for temperatures on earth. Are you pointing out what our current knowledge is of variations in the sun’s output and the degree to which those variations relate to variations in the earth’s temperature?

  194. kevin:

    #181 Pete B: [obvious sarcasm] “the sun is solely responsible for temperatures”

    #193 William: “the sun is ultimately responsible for temperatures”

    Do you see the difference between those statements, William? (It seems as if you’re trying to set up a straw man to beat up on.)

    To be clear: we all know where the energy comes from. But the people who act as though variations in the sun’s activity account for all observed modern warming (i.e. “the sun is solely responsible for temperatures”) are ignoring “our current knowledge is for variations in the sun’s output and the degree that those variations relate to variations in the earth’s temperature,” as well as our current knowledge about all other forcings and feedbacks in the system.

  195. Hank Roberts:

    http://www.env-econ.net/2009/02/fiscal-policy-science-or-advocacy.html

    Fiscal policy: Science or Advocacy?

    When economics hits politics it is difficult to determine when the economist becomes the advocate (see, e.g., the green jobs thread at this blog and, most recently, comments on this post). Fisheries scientists have the same problem. Serendipitously, here is how AFS President Bill Franzin addressed it in the January 2009 issue of Fisheries (page 4):
    In this article, I review an important issue about the adoption of policy positions by a professional scientific society such as AFS: the fine line between scientific objectivity and advocacy. …

  196. William:

    #194 Kevin
    I don’t think there’s any doubt that the discussion is about how much energy is received from the sun and what happens to the temperature of the earth when it gets here. The details are in explaining the degree of influence of changes in albedo, changes in land use, the oceans, clouds, aerosols, volcanoes, changes in the solar cycle and the orbit and tilt of the earth, as well as an increase in CO2 in the atmosphere from today’s 0.0383% (383 ppm) to something greater.

  197. Chris:

    #188 Kevin: “In the mean time, we have to continue countering the spin with fact and perspective, more or less calmly and civilly–with wit, brevity and a little humor as appropriate.”

    So when I tried to argue in the summer, in the North Pole Notes thread, that the Arctic had not reached a tipping point, was that “spin” that had to be countered? Or was I the one countering the spin of other posters…? (Of course, I was accused of far worse than spin.) It’s a shame this press release from the Met Office appeared 6 months too late to back me up:

    http://www.metoffice.gov.uk/corporate/pressoffice/2009/pr20090211.html – by Met Office Head of Climate Change:

    “Recent headlines have proclaimed that Arctic summer sea ice has decreased so much in the past few years that it has reached a tipping point and will disappear very quickly. The truth is that there is little evidence to support this…….”

    Of course, she also mentions the strong evidence pointing to “a complete loss of summer sea ice much later this century”. Sure, that’s a more reasonable starting point for debate in my opinion.

    Back to #188: “…Moving on is precisely the last thing denial-world wants…
    …IMO, what will happen is that the denial-sphere will be slowly ablated by repeated contact with reality…”

    I suggest that the more you can avoid the labels “denial”, “denialosphere”, etc. (cf. “civilly” above?), the more success you may have. I can’t see what they achieve, and would suggest “skeptic” is more polite.

    Using offensive terms is just likely to provoke more of the attitudes you disapprove of. And then we end up with the polarisation as summed up by Ms Pope: “Overplaying natural variations in the weather as climate change is just as much a distortion of the science as underplaying them to claim that climate change has stopped or is not happening.”

    N.b. I don’t remember using such terms myself here (“alarmist” perhaps?), but feel free to correct me if I’m wrong.

  198. Mark:

    MikeG, how does the code help prove the ***science*** right or wrong?

    And if you have 5 hours to either do code publishing or more experimentation which gets the most science done?

  199. Hank Roberts:

    Hat tip to Robert Grumbine:

    http://www.scruffydan.com/blog/

    Hat tip to ReCaptcha:
    “Sunfire Concerto”

  200. Mike G:

    Mark, I think you’re trying to argue exactly my point. Having the code doesn’t prove or disprove anything unless the author either completely made up the results or the code doesn’t actually perform the method the author says it does. You have to understand the reasoning behind the method to say anything more about the code than “yep, it runs and produces the same answer.” If you already understand the method the code itself shouldn’t be required to replicate the results.

    Like Gavin pointed out, if you’re looking for errors that change the result or its significance, the code would be about the last place to look. Maybe Eric should take it as a compliment that the obfuscation crew haven’t found substantive errors elsewhere so now they’re looking to the code as a last hope. ;)

    Now, if someone truly wanted to expand on the science, existing code would be a great starting point, but who here really thinks that’s what the CA and WUWT crowd has in mind when they demand to see it?

  201. James:

    Chris Says (12 February 2009 at 5:57 PM):

    “I suggest that the more you can avoid the labels “denial”, “denialosphere” etc (c.f. “civilly” above?), the more success you may have. I can’t see what it achieves, and would suggest “skeptic” is more polite.”

    It might be polite, but it wouldn’t be accurate, since a true skeptic should be impartially skeptical. Those people are anything but impartial: they freely extend willing belief to the flimsiest theories that accord with their pre-existing world view, while exercising their “skepticism” only on what conflicts.

    Furthermore, if they want to be treated politely, perhaps they should take to heart the old saw about getting what one gives…

  202. Mark:

    OK, misread then Mike, sorry.

  203. pete best:

    Re, RE #190, Gavin, Of course you get a mention:

    Perhaps the most prominent place to see how climatologists mix their science with their opinions is a blog called RealClimate.org, primarily run by Gavin Schmidt, one of the computer jockeys for Nasa’s James Hansen, the world’s loudest climate alarmist.

    He is obviously not happy with the idea of something warming and cooling. Apparently the WAIS does not matter much; it’s all about the EAIS. It ain’t warmed much, if at all, and the warming happened in the 1960s and ’70s with nothing since, so as Antarctica is not warming, neither is the world ;)

  204. Anne van der Bom:

    12 February 2009 at 5:57 PM Chris:

    Of course, she also mentions the strong evidence pointing to “a complete loss of summer sea ice much later this century”. Sure, that’s a more reasonable starting point for debate in my opinion.

    Is it more reasonable only because the statement is more in line with your opinion? What you like and what is reasonable are different things.

    and would suggest “skeptic” is more polite.

    By definition, a skeptic is someone who has not yet made up his mind. That does not apply to the majority of the ‘denialists’. Try again.

  205. Nick Gotts:

    Chris@197,
    “Denialist” etc. are used because they are accurate terms. Take a look at the Scienceblogs site denialism blog, for documentation of the similarities between AGW denialism and other denialism such as creationism, HIV-AIDS denialism, etc. It has all the same pathologies: quote-mining, conspiracy theories, lack of any coherent theory of its own or significant empirical work, constant repetition of already-refuted objections, mutually inconsistent claims…

  206. pete best:

    Re #204, Anne, does she not have a point, that natural variability can make a lot seem to be happening when it isn’t yet? The last time I looked, climate sensitivity was the same as the Charney limit in the IPCC reports, hence 3C for 550 ppmv. It could happen faster if the sensitivity is higher (cooling agents are perhaps not as cooling as thought), and longer-term feedbacks may double the Charney limit, but as yet there is no IPCC update, only one from James Hansen, which, although coming from him it might be seen as true, still needs to go through the IPCC process to be accepted by world opinion and to shape our reaction.

    Science is a slow process; IPCC reports every 5 years or so are probably about right. Do we need anything quicker, of a more disastrous nature, to get our governments to do something more urgently? Is that the message of the so-called alarmists, I wonder.

    The precautionary principle.

  207. Hank Roberts:

    James, seriously, it’s worth understanding the difference. This is good: http://www.scruffydan.com/blog/
    See “Antarctic warming derangement syndrome” and
    “How real people (aka non-cranks) debate science”
    “a useful way to differentiate between skeptics and deniers”

    Avoiding using the word accomplishes nothing except to give them free rein in the blogs they haunt. And there are paid professionals posting to blogs advocating notions that deny the science, in climate topics.

    I started tracking some of the pros with Google — it led me to more climate scientists’ blogs than I knew existed. The paid pros have time and staff to locate them and copypaste their talking points.

    Check this thread out: http://www.scruffydan.com/blog/?p=1476

  208. Kevin McKinney:

    Chris, my recommendation of civility was just that–not a claim that this is what invariably happens on this site (or any other.) I don’t recall participating much in the debate around 2008 sea ice extent, but if you’re angling for a tip of the hat for predicting that we wouldn’t reach a new minimum record, then it is freely yours. You called that correctly, IIRC.

    As to the “denial” label, that has been debated here before. My take, for what it is worth, is that as several posters said above, it is recommended by accuracy. The Wattses of this world are *not* skeptical–let me use an example from this week. As I am sure many here are aware, the Audubon Society released a study documenting northward shifts of a great many bird species in response to warming. One denialist of my online acquaintance responded by looking at their data (a good thing to do, of course) and noting that a significant number of species had increased populations (a perfectly good observation.) However, he then went on to attribute that increase to the warming as well–with no additional support of the idea whatsoever. Extreme *lack* of skepticism, needless to say! (I’m guessing the explanation of the population increases lies in the DDT ban, not in climatic factors–the timing would be about right.)

    Did I call him out on this lack of skepticism? Yes. Did I call him a “denialist”? No. (Except just now.) I think the label is much more useful in third person than in second. In third person, it accurately names the phenomenon; in second, it tends to sound a whole lot more like simple name-calling. And you are, of course, quite right to say nobody appreciates being called names.

    I’ve had multiple interactions with the particular poster I referred to above. We have disagreed constantly. But I’ve always kept it respectful, and always tried to take the discussion back to the evidence (while trying to follow Hank’s suggestion to repeat the nonsense as little as possible–though this partially conflicts with maintaining some level of comprehensibility for readers jumping late onto the thread.) Unfortunately, I haven’t convinced him on the merits yet, but I notice he’s stopped branding everyone who accepts the science as an hysteric.

  209. Kevin McKinney:

    “I notice he’s stopped branding everyone who accepts the science as an hysteric.” Well, in second person, anyway!

    ;)

  210. Ray Ladbury:

    You know, I’d be more likely to use the label “skeptic” if denialists showed any real… well, skepticism. Instead, I see a willingness to grasp at any straw, no matter how tenuous, if it justifies inaction. They tell us, “It isn’t happening!” Then when we point to incontrovertible evidence that it is, we get appeals to explanations that have no workable mechanism and that contradict the evidence. We get accusations of scientific fraud on a scale that would make even the most committed grassy-knoll or moon-landing conspiracist blush. It seems that such “skeptics” will embrace any theory except the one that has all the evidence (and the physics) in its favor. That ain’t a skeptic.

  211. Rod B:

    Kevin, Blindly accepting prima facie global warming as the cause of the northern migration is perfectly proper? But attributing increased population to global warming shows a serious lack of scientific questioning? What is the difference pray tell?

  212. Kevin McKinney:

    Rod, the study in question made the attribution–presumably on the basis of the evidence they presented. (“Presumably,” because I have only read the news summary, not the original study.)

    On the other hand, the commenter offered no support for his–shall we call it a “suggestion?” He took a correlation (eyeballed at that), assumed causation, and then presented his conclusion as fact, with no apparent thought for the possibility of confounding variables.

    A clear enough difference for you?

  213. Ray Ladbury:

    Rod, Where did you get that the cause of northward migration was being blindly accepted. Hank has produced studies that provide pretty good evidence that the northward migration is climate related. I’ve seen bupkis for evidence that the population increase can be similarly attributed. Do YOU have any evidence?

  214. Hank Roberts:

    Rod, the difference is that the animals move to stay within the climate/temperature zone. That’s what’s moving.

    You’ve known this for years.

    As climate changes, garden zone map does too, Climate change is happening a lot … the USDA interactive computer format…. means that the new zone map can in fact indicate temperature shifts ….
    http://www.azcentral.com/style/hfe/outdoors/articles/2006/05/18/20060518gardenzone0518.html

    Slight increases in temperature can force a species to move toward its preferred, … Common Garden Plant Threatened By Climate Change
    http://www.sciencedaily.com/releases/2004/01/040108080103.htm

  215. dhogaza:

    Kevin, Blindly accepting prima facie global warming as the cause of the northern migration is perfectly proper?

    But of course we don’t blindly accept it. For many species of plants and animals we have a damned good idea of the climatic environment in which they thrive or perish. To say that ecologists blindly assign changes in ecological patterns to global warming is just another denialist-style smear of science of the kind you insist you don’t do (but do frequently, anyway).

    It’s like the fact that horticultural zones are moving northward in the eastern US. We know it’s due to warming because 1) we know that the plants thriving in those northward-moving zones require more warmth and could not grow so far north in the past despite gardeners trying and 2) we measure temperature and have records of the same.

  216. Kevin McKinney:

    Here’s a link to the Audubon report. Curiously, I didn’t find the population data tables that my antagonist said he looked at. Maybe elsewhere on the Audubon site?

    http://www.audubon.org/news/pressroom/bacc/pdfs/Birds%20and%20Climate%20Report.pdf

  217. Kevin McKinney:

    OK, here they are:

    http://www.audubon.org/news/pressroom/bacc/pdfs/Appendix.pdf

    A lot of squinting required here. Interesting that a conclusion was alleged in very little time. Maybe the “refuting” comment came from somewhere else in denialworld?

  218. MC:

    I was interested in the use of RegEM to fill in the AWS data using information from the satellite data. I also think that making the source code for an analysis available is helpful, but as other posters have pointed out, all you really need is adequate method documentation and the data.
    To this end, as a physicist, the first thing I would do to check what was produced in the paper would be to overlay each station’s data with the satellite data, with error bars included, and then make a judgement as to how to proceed. RegEM is a mathematical method that should be consistent with the much slower hand-cranking method of checking each station, having a think, and providing a conservative estimate.
    In post 182 you are quite right, Jeff. The data is paramount.
    I see that the AWS data is available. I take it the satellite data is on its way?
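
    MC’s overlay check is easy to sketch. Everything below is synthetic stand-in data – the real inputs would be a READER AWS record and the collocated satellite grid cell, and the satellite data were not yet public:

```python
import numpy as np

rng = np.random.default_rng(0)
months = 120  # ten years of monthly means, deg C
t = np.arange(months)

# Stand-in series: seasonal cycle + slow interannual signal + independent noise.
common = 10 * np.sin(2 * np.pi * t / 12) + 1.5 * np.sin(2 * np.pi * t / 40)
aws = common + rng.normal(0.0, 0.5, months)   # pretend AWS record
sat = common + rng.normal(0.0, 0.8, months)   # pretend collocated satellite cell

def monthly_anomalies(series):
    """Subtract each calendar month's own mean (removes the seasonal cycle)."""
    out = series.astype(float).copy()
    for m in range(12):
        out[m::12] -= series[m::12].mean()
    return out

a_aws, a_sat = monthly_anomalies(aws), monthly_anomalies(sat)
r = np.corrcoef(a_aws, a_sat)[0, 1]   # how well the two records co-vary
sigma = (a_aws - a_sat).std()         # rough 1-sigma error bar on the overlay
print(f"anomaly correlation r = {r:.2f}, residual sigma = {sigma:.2f} C")
```

    The same few lines, run per station on the real records, would give exactly the error-barred overlay MC describes before committing to any infilling method.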

  219. Rod B:

    Anne, Nick, et al. And if you keep it up long and loud enough “denialism” and denialist” might eventually even make it into a dictionary as a real word. ;-)

  220. Rod B:

    Kevin (212), I haven’t read the study either, but I think I can still make a pretty solid assessment that the study contains a bunch of population data that seems to indicate a northern migration. That the migration is caused by global warming cannot be anything other than a broad inference, reasonable as it may be, made by the Audubon Society, which is exactly what your commenter did. I presume (with much confidence) that they did not actually poll the birds. But if I’m wrong there, I’m curious: what percent said they’re moving because of the climate, what percent for the change of scenery, and what percent for a better job? ;-)

  221. Rod B:

    Hank and Kevin: Anyone can infer almost anything, with reasons, from a large set of data, given enough math manipulation and scientific wording. But, taking you all at face value that the Audubon and other studies show a reasonable correlation, leading to an indicated causation, between migrating bird populations and climate change, how do those same analysts explain the increasing populations?

  222. Ray Ladbury:

    Rod B. said: “Anne, Nick, et al. And if you keep it up long and loud enough “denialism” and denialist” might eventually even make it into a dictionary as a real word.”

    No, Rod, I would say that if YOU keep it up long enough, denialism will become a persistent enough phenomenon that the word will have to be included.

  223. Mark:

    221. Do you have proof of the increased population that

    a) explains the increase, and
    b) accounts for the removal of a large cause of death?

    Because you already have “removal of the pesticide stops killing the pests”, which you would expect to lead to an increase in the pest (else why develop a pesticide in the first place?).

    So, much as “the sun is heating up” is included in AGW models, you must include in your model the effect of pesticide removal on the population.

    Odd, isn’t it, that you are doing exactly what you accuse others in AGW research of doing (worse, since they aren’t even doing it): attributing to one element a result that is clearly attributable to several.

  224. Sekerob:

    Can we plz return to the topic instead of the usual attempts of derailing. Rod B and others have allot of reading to do on bird behaviour driven by climate change. Start e.g. reading up on the Great Tit and the Bewick swan. The death of hundreds of Magellan penguins flushing onto the coast of Brazil.

    Back to the robustness of Antarctic warming… the winter sea ice peak was 1.2 million km² lower in 2008 than in 2007. The current sea ice minimum is over 1 million km² less than at the same time in 2008! A sign of cold-climate weather noise? Sea-current effects such as an Indian Ocean hot spot? Hidden volcanoes (tongue in cheek)? Air temps? Global cooling in general? I’m sick of these to-the-4th-decimal “unresolved discrepancy” detractors on an insignificant data point.

  225. cugel:

    Rod B :

    If you look at page 5 of the Audubon report you will see how the attribution of cause is made. You may, of course, suspect that this is the result of data-manipulation for some nefarious purpose; nobody can or would stop you from doing that.

    The question of population numbers (as opposed to ranges) is entirely separate, and would best be addressed directly to the Audubon Society, I think.

  226. t_p_hamilton:

    Rod B said:”Anyone can infer with reason almost anything from a large set of data, given enough math manipulation and scientific wording. ”

    Makes you wonder how scientists ever decide if a theory is right or not. This kind of statement is the last refuge of those who don’t have the ability to show how the data supports their ideas. For example, it is common to hear creationists (another species of denier) make the statement that they just interpret the data differently.

  227. dhogaza:

    I presume (with much confidence) that they did not actually poll the birds.

    Wah, birds can’t talk, plants can’t talk, so we have no way of knowing if they’re responding to climate change!

    Do you have any idea how stupid that sounds, Rod B?

  228. dhogaza:

    Anyone can infer with reason almost anything from a large set of data, given enough math manipulation and scientific wording.

    And, this is just as bad.

    There’s a reason we use the “denialist” word to describe you.

  229. Hank Roberts:

    Rod, you know you’ve asked this sort of thing, maybe even this exact question before. Have you made the slightest effort to look it up for yourself this time? It’s really easy to learn how to learn.

    J. Field Ornithol., 51(3): 220-228
    RECOVERY OF AN AMERICAN ROBIN POPULATION AFTER EARLIER DDT USE
    BY DONALD L. BEAVER

    In a series of papers by Wallace and his students, the effects of DDT on the American Robin (Turdus migratorius) on the campus of Michigan State University (MSU) and the surrounding East Lansing area were documented over a 15-year period (Wallace, 1958, 1959, 1960, 1962; Mehner and Wallace, 1959; Kettunen, 1961; Wallace et al. 1961, 1964; Bernard, 1962, 1963; Boykins, 1964; Tweist, 1965; Wallace and Boykins, 1965). DDT spraying on the MSU campus began in 1955 and ended in the fall of 1962. After a year of no spraying, DDT was replaced by methoxychlor in 1964, but residues of DDT and its breakdown products (DDE, DDD) remained high, especially in earthworms. As a result, die-offs continued for three years (Bradfield, 1972). Robins began to show signs of lessened mortality by 1967, in probable response to decreasing levels of DDT in earthworms and soil (Bradfield, 1972). Success of reproduction was monitored for only one season, in 1962 (Tweist, 1965). McWhirter and Beaver (1977) noted some successful hatching of robins’ eggs on the campus, but no formal study has been conducted in the intervening 10 years since the last study by Bradfield (1972). The purpose of this paper is to report on the condition of the robin population at MSU 17 years after the last use of DDT to control Dutch Elm disease.

    Just an example from dozens of papers that Google Scholar finds using a few search terms from the paper and your question. This is not _the_ answer for _all_ populations of _all_ birds. But you knew that too, of course.

  230. Jim Galasyn:

    Note also that flightless birds are in severe decline: Most penguin species in rapid decline.

  231. Rod B:

    Ray (222), fair enough retort. But it is still humorous, if not really odd, that so many of you are sometimes eruditely, often stridently trying really hard to label us skeptics with a word that isn’t even recognized in civil societies (other than in wiki…)

  232. dhogaza:

    This is not _the_ answer for _all_ populations of _all_ birds.

    And certainly not for the increasing populations of all birds, given that overall we’re seeing a decline rather than an increase in songbird numbers in the United States …

  233. James:

    Rod B Says (13 February 2009 at 11:38 PM):

    “Anne, Nick, et al. And if you keep it up long and loud enough “denialism” and denialist” might eventually even make it into a dictionary as a real word.”

    It’s not? http://www.askoxford.com/concise_oed/denial?view=uk : “1 the action of denying. 2 Psychology refusal to acknowledge an unacceptable truth or emotion.” Add standard English suffixes -ist and -ism, and there you are :-)

  234. Rod B:

    Mark (223), et al: While missing my point you have also aided it. You simply hypothesize and conjecture (albeit maybe reasonably) a cause for the population increase — the same thought process that Kevin’s commenter gets beat up over. I wasn’t supporting or refuting either scenario. (I actually think there is some justification for the warming scenario, though on the surface it seems weaker than claimed — less than 60% moving north, more than 40% moving south doesn’t seem very striking.) I instead questioned the reaction that readily accepts a thinking process and conclusion if it supports your view and blasts the very same process, or totally rejects the conclusion, if it violates the orthodoxy. (This past year’s heat wave in Texas — probably the highest sustained since good measurements were taken — being solid proof of global warming; that the next worst heat wave was in the early 1920s is insignificant cherry picking by the skeptics, e.g.) More to the point, I have no quarrel with making reasonable inferences and conjectures based on the data. I think Audubon probably did just that. But their (or rather other readers’) taking it to the level of unassailable irrefutable proof, with contrary or mitigating ideas absolutely not allowed, is a bridge too far.

    For those who incredibly believe there is one and only one way to analyze, group, combine and otherwise manipulate a data set, you need more exposure (which is really strange since some of you do this professionally!!??) Cugel, I don’t mean nefariously at all. The Audubon report itself went through a range of conclusions from the same analysts with the same data set: 1. could be anything, 2. climate might be one cause, 3. climate seems to have decent correlation, 4. THAT’S IT! None of that was nefarious, I’m sure; and it might be right. And don’t forget the statistical analysis of correlation and causation is a mathematical construct developed by man, and is not a pure rule of nature bestowed by God (or whoever does this), even though one has to run with the best guess.

    I think this is related to the thread’s topic, though, as Sekerob says, it’s a bit of a stretch. Sorry.

  235. Rod B:

    James, adding suffixes to words is one way to make up your own new words. :-P

  236. Jim Eager:

    “…trying really hard to label us skeptics with a word that isn’t even recognized in civil societies (other than in wiki…)”

    But then necessity is the mother of invention.

  237. Hank Roberts:

    http://www.marine-geo.org/

    —-excerpt follows—–
    The Marine Geoscience Data System (MGDS) provides access to data portals for the NSF-supported Ridge 2000 and MARGINS programs, the Antarctic and Southern Ocean Data Synthesis, the Global Multi-Resolution Topography Synthesis, and Seismic Reflection Field Data Portal. These portals were developed and are maintained as a single integrated system, providing free public access to a wide variety of primarily marine geoscience data collected during expeditions throughout the global oceans. In addition to these Data Portals, MGDS also is host to the US Antarctic Program Data Coordination Center (USAP-DCC) which provides tools to help scientists find Antarctic data of interest and satisfy their obligation to share data under the NSF Office of Polar Programs (OPP) data policy.

    The MGDS expedition catalog and data repository is powered by a relational database, which can be queried through web-based keyword and map searches, and is accessible through web services that follow the specifications of the Open Geospatial Consortium (OGC)….

  238. Rod B:

    Jim E., that’s true. As I said, use it long and loud enough and it will eventually get in. ;-)

  239. Eli Rabett:

    MC, the satellite data is for a few km up. It is necessarily different from surface data due to the lapse rate. The best you could do is compare trends/anomalies. As a physicist, the first thing you should do is figure out what each measurement measures.
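
    Eli’s point (that the satellite channels sit several degrees colder than the surface because of the lapse rate, so only anomalies and trends are comparable) can be sketched with purely synthetic numbers; nothing below is real MSU or surface data:

```python
import math

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    return sxy / sxx

years = list(range(1979, 2009))
# Synthetic surface series: a 0.02 C/yr trend plus an interannual wiggle.
surface = [0.02 * (y - 1979) + 0.1 * math.sin(0.9 * y) for y in years]
# Synthetic satellite series: the same evolution, but ~5 C colder aloft.
satellite = [t - 5.0 for t in surface]

# Absolute means differ by ~5 C, yet the decadal trends agree:
print(ols_slope(years, surface) * 10)    # C/decade at the surface
print(ols_slope(years, satellite) * 10)  # same trend aloft
```

    A constant vertical offset cancels out of both anomalies and trends, which is why trend/anomaly comparison is the meaningful test.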

  240. Hank Roberts:

    http://www.pnas.org/content/106/6/1844

    “… To avoid extinction, emperor penguins will have to adapt, migrate or change the timing of their growth stages. However, given the future projected increases in GHGs and its effect on Antarctic climate, evolution or migration seem unlikely for such long lived species at the remote southern end of the Earth.”

  241. Eli Rabett:

    An important point often missed is that people who do research on physical systems are quite happy to use mathematical/statistical methods that are imperfect, but Planck’s constant unlikely (sorry Tamino=:), to be so for the systems that they study. This drives mathematicians and statisticians wild.

    The first MBH papers were such a case. Yes, the method was not standard, yes, if you picked stupidly unrealistic parameters you could force a much smaller hockey stick shape under some conditions. McIntyre and McKitrick exploited this.

    Without a doubt, just about any piece of programming for scientific papers has such an issue. It is a feature, not a bug, because, by understanding the system very well, and the math a bit, one makes progress at a much more rapid rate.

  242. James:

    Rod B Says (14 February 2009 at 4:11 PM):

    “James, adding suffixes to words is one way to make up your own new words.”

    Sorry, but that’s just the way the English language works. Make up a new noun, say “quark” for a type of subatomic particle, then you pretty much automatically get “quarks” for more than one of them, though you’re not likely to find it in the dictionary because it’s not a new word, just the (main) way English forms the plural. Same with the -ist suffix for a person who does such & such. Invent the bicycle, and you get bicyclist for free – which I think most people would consider a legitimate, recognized word though it’s not in the Webster’s Dictionary on my desk, and in the on-line OED only as a link to bicycle. Indulge in denial, especially in the psychiatric sense of the second definition above, and you can be labeled a denialist :-)

  243. Ray Ladbury:

    Rod B., Perhaps you guys would prefer the term “selective skeptics,” since your skepticism only extends to matters of science where the theory is solid and the evidence incontrovertible. You guys seem more than willing to grasp at any straw to justify further inaction.

  244. Hank Roberts:

    Ray, there’s a perfectly good word for the behavior — it’s a general aspect of human cognition, quite well established and understood widely.

    Explained and illustrated here:

    http://news.bbc.co.uk/2/hi/science/nature/7081882.stm

    http://newsimg.bbc.co.uk/media/images/44222000/jpg/_44222685_ostrichfarmbbc203.jpg

  245. Sekerob:

    Rod B et al, for the occasion I composed a new word, “Ortichitis”: the affliction of advanced denialism. (The ostrich farmer I knew in France gave them diapers. The reason was not incontinence, but to catch the eggs before they fell to the ground and broke. It was a robust system, from my own observations :-)

  246. Sekerob:

    new keyboard required… “Ostrichitis” of course

  247. Chris:

    This is not specifically on topic but I can find no other place to post an inquiry. Sorry.

    George Will in today’s Washington Post resurrects the 1970s global cooling myth and adds this statement:

    “[A]ccording to the U.N. World Meteorological Organization, there has been no recorded global warming for more than a decade….”

    The global cooling part is easy enough to debunk, but what is that? He provides no reference, and I can’t find anything on the WMO site to support it. (It certainly is not supported by their press release titled “1998-2007 Is Warmest Decade on Record”!)

    Does anyone know what he’s talking about?

  248. David B. Benson:

    Chris (247) — Methinks he is just MSU (Making Stuff Up).

  249. David B. Benson:

    Chris (247) — For a more thorough fisking of George Will (and the specific point you raise), see

    http://climateprogress.org/2009/02/15/george-will-global-cooling-warming-debunked/

    [reCAPTCHA points out that "grandchildren asked".]

  250. Rod B:

    James (242), I don’t disagree with your general thesis: it’ll get in sometime. But, just for the record, “bicyclist” is in all of my spellcheckers (5) and listed in a ton (24 actually — 8 less than that for “cyclist”) of dictionaries at onelook.com. It’s also listed in my Webster’s of 1956 — don’t know what’s wrong with yours. “Denialist” is in none.

  251. Rod B:

    Ray, “solid theory” and “incontrovertible evidence” are of course subjective phrases whose meaning is determined by whatever a person thinks of any particular aspect.

  252. Hank Roberts:

    Well, Rod, your task of filling up the Antarctic Warming thread with tempting off-topic posts, and encouraging others to reply to you, is going along well. Past 250 responses now, and the substantive ones have about tapered off to zero.

    What was it you were here to accomplish?

  253. Ray Ladbury:

    Rod B., No, actually, solid theory is quite well defined in science (e.g. based on principles accepted by the vast majority of experts), as is incontrovertible evidence (e.g. >90% CL). Once again, you are in denial about this one subject–but that is a subject for psychology, not physics.
    I am disappointed that you seem to have given up all pretense of trying to understand the science, Rod. Your emphasis now seems to be to divert conversation away from the hard subjects and onto semantic controversies that you find less taxing. A pity.

  254. Jim Galasyn:

    Rod, thermodynamics is not Derrida. Before humans burned the coal and petroleum, Earth was in radiative equilibrium with space.

    Humans abruptly added 500 gigatons of carbon to the atmosphere in the form of “heat-trapping” gases. Earth is now in radiative disequilibrium with space. For Earth to return to equilibrium, the atmosphere must warm. There is ample evidence that abrupt warming is very bad for the biosphere.

    Do you disagree?

  255. Thomas Lee Elifritz:

    It’s also listed in my Webster’s of 1956 — don’t know what’s wrong with yours. “Denialist” is in none.

    Is ‘absolutist’ in there?

  256. Rod B:

    Hank, just trying to toss a little reality out there. Most of the subthreads I participated in were more or less on topic, and, as you say, those have about tapered off to zero. I started none of the subthreads, though depending on your interpretation it might be said I began the OT “denialist” argument. That one probably didn’t deserve anything beyond my initial post #219 — not much more than a simple observation, but its reality seemingly made everyone mad.

  257. Sekerob:

    Okay, following Hank’s lament, a semi-attempt to get back to the south pole region… Yesterday, contrary to what I wrote a few days ago, the Antarctic summer sea ice area was 1.2–1.3 million km² below the same time last year. The minimum calculated from the NCDC data normally occurs on average around Feb 21–26, so another week or more of melt is to be expected, and from the NSIDC chart that is not outside reasonable expectation.

    http://nsidc.org/data/seaice_index/images//daily_images/S_timeseries.png

    The last big minimum occurred in 1993, and I wonder what drove the growth/recovery since. Was it increased precipitation (snow), thus more incoming water vapor, on land and sea ice? Has the Antarctic now reached a tipping point too, where the additional precipitation is overwhelmed by further (air) temperature rise? The SST maps indicate a mix of warmer and cooler bordering ocean regions:

    http://polar.ncep.noaa.gov/sst/ophi/color_anomaly_SW_ophi0.png
    http://polar.ncep.noaa.gov/sst/ophi/color_anomaly_SE_ophi0.png
    http://polar.ncep.noaa.gov/sst/ophi/color_anomaly_IND_ophi0.png

    So, taking the Antarctic temperature history of recent decades, are sea ice area and volume (if such data exist) good supporting evidence for the robustness of the Antarctic temperature trend analysis?
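
    The timing argument above (a climatological minimum around Feb 21–26, hence another week or so of melt expected) is mechanical once a daily area series is in hand. A minimal sketch, with a synthetic seasonal cycle standing in for the real NCDC/NSIDC daily data:

```python
import math

def minimum_doy(days, areas):
    """Day of year on which the daily area series is smallest."""
    return days[areas.index(min(areas))]

days = list(range(1, 366))
# Synthetic Southern Hemisphere cycle: minimum near day 52 (late February),
# maximum about half a year later; the amplitudes here are made up.
areas = [10.0 - 8.0 * math.cos(2 * math.pi * (d - 52) / 365.25) for d in days]

print(minimum_doy(days, areas))  # → 52
```

    With a real series one would apply the same argmin per year and average the resulting days of year to get the climatological minimum date.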

  258. Vernon:

    [edit]

    I have read some work done by Ryan O about this study and he raised some questions that seem relevant to me. In that light, I have some questions so that I can better understand the paper.

    For Table S1, why was 40% complete calibration information picked for the cutoff?
    Why are stations that do not meet the 40% data cutoff (Enigma Lake, LGB20, LGB35, Larsen Ice Shelf, and Nico) included?

    Why are stations that exceed the 40% data cutoff (Byrd, Mt. Siple, pre-correction Harry) excluded? If it was because they did not show enough verification skill, then how could they be used to show a correlation in Table S2?

    While the AWS reconstruction and AVHRR trends seem similar over 1957-2007, this is not the complete picture. AVHRR shows warming from 1980 to 2006, while AWS shows essentially no warming (basically flat) over 1980-2006. The trend prior to 1980 was created by RegEM using manned-station data and so is the same for both AWS and AVHRR. It appears that the AWS benchmark does not provide the needed certainty.

    Thanks for your time.

    I now have some additional questions. Dr. Steig says that he used the original Schneider code while the article says that he used the modified Rutherford-Mann adapted code. Which is it and if it is the Rutherford-Mann adapted code, where can that be found?

    Regards

    [Response: You are probably going to have to wait until Eric gets back for most of these kinds of questions. But the code used is the code on Tapio Schneider's website (using the TTLS options as described). Any decisions on what cut-offs get applied in a Table are always a balance between including enough data to be meaningful while avoiding adding noise. The caption says that the included stations have > 40% data in the calibration period (pre 1995) and verification scores greater than the cut off. The logic would imply that those not included didn't make one or other of the criteria. - gavin]
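
    The screening gavin describes amounts to a two-part test: more than 40% of months present in the pre-1995 calibration period, and a verification score above the cut-off. A minimal sketch with hypothetical station records; the month count, scores, and zero skill threshold below are illustrative assumptions, not values from the paper:

```python
def passes_screen(record, verification_score, min_complete=0.40, min_skill=0.0):
    """record: monthly calibration-period values, None where months are missing.

    A station is kept only if it clears BOTH criteria, so it can drop out
    on either completeness or verification skill alone.
    """
    completeness = sum(1 for v in record if v is not None) / len(record)
    return completeness > min_complete and verification_score > min_skill

calib_months = 456  # hypothetical: 38 years (1957-1994) of monthly values
sparse = [0.0] * 100 + [None] * (calib_months - 100)  # ~22% complete
dense = [0.0] * 300 + [None] * (calib_months - 300)   # ~66% complete

print(passes_screen(sparse, verification_score=0.5))  # → False (completeness)
print(passes_screen(dense, verification_score=0.5))   # → True
```

    A station can therefore be excluded on either criterion alone, which is one way the inclusion pattern Vernon asks about can arise.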

  259. Chris:

    David (249) – thanks for the link. I knew that it had to be wrong, but I thought Will must have misinterpreted something. I couldn’t believe that he would just make it up–he’s conservative but not really of the Limbaugh/Hannity/Savage type. But as far as I can tell, that is exactly what he did.

    Of course, he did say in his previous column (“How Congress Trumps Darwin”) that the Endangered Species Act is an attempt by Congress to fix evolution: apparently he thinks that species extinctions from dam building, deforestation, and so forth are examples of natural selection. So I suppose that not too much is to be expected.

  260. William:

    #215 dhogaza
    Regarding the discussion about bird/animal/plant ranges. To the extent that climate varies in cycles from ice age to ice age, can someone define here what the “normal” ranges should be for the species you are discussing for this period of time midway between ice ages? For simplification, perhaps you can start with one species and compare where they range today with where they are supposed to be if there were 100 ppm less CO2 in the air.

  261. Rod B:

    Ray (253), no, I fully agree that “solid theory”, etc. is properly defined. I said that deciding if this or that is actually solid evidence is a subjective process.

    I post contrary/skeptic comments on the science if 1) I actually have an opposing view on this or that aspect (a minority of the science posts — I do not disagree with any and all of the science as many here stupidly claim that skeptics do), and 2) if I have at least some basic level of understanding in that aspect. Occasionally I simply ask clarifications. I do occasionally, as you aver, comment tangentially on posts because they are exaggerated, hyperbolic or outrageous, sometimes directly related to the science (your “best known science” comment on another thread being directly related and exaggerated, IMO), sometimes not. The latter I try to limit but find it difficult to pass up sometimes. Ironically, the latter seems to generate vociferous and voluminous responses — much more than I would deem necessary — and can take an undue share of a thread. I do need to even better limit myself here.

  262. Chris S:

    #260 William

    Perhaps a more immediate (and economically “interesting”) examination of range change can be found by looking at the recent spread of outbreaks of bluetongue virus (see here: http://www.reoviridae.org/dsRNA_virus_proteins/outbreaks.htm ). (For further commentary see the Institute of Animal Health website here: http://www.iah.bbsrc.ac.uk/disease/bt_aw.shtml ). I’d be interested in your thoughts.

  263. Hank Roberts:

    William, your question has many assumptions, not useful ones.

    Animals aren’t “supposed” to be anywhere in particular.

    A “species” isn’t a rigid definition over geological time.

    Population genetics is the place to start looking for adaptation over time; the current problem is the rate of change being far greater than anything in the past, and the impediments to adaptation also being enormous.

    You’re asking about ecological change, and ecology doesn’t work with “one species” at a time either — changes occur in the frequency of genes in populations, in the mix of populations in an area, in the timing of their interactions.

    And we know how fast ecology changes while holding complex relations together — about as fast as the ice moved, in the areas where glaciation happened, most of the time, but look at the mammoths found frozen with stomachs full of green plants. They turned the wrong way during one blizzard, probably, and crossed a line, and died.

    Rate of change is what matters for life on the planet, and we’ve pushed the rate of change beyond what has happened in the past except in the previous great extinctions. We’re in the middle of a great extinction now.

    There is no ‘normal’ or ‘should’ about how life works.

  264. Hank Roberts:

    Sekerob, thanks for the pointer to the Arctic ice.
    Do we have a topic for that currently? It’s time, again.

  265. James:

    William Says (16 February 2009 at 11:53 AM)

    “…can someone define here what the “normal” ranges should be for the species you are discussing for this period of time…”

    You’re missing the point. “Should be” doesn’t enter into the discussion at any point. The operative words are “were” and “are”. The study examines historic ranges of numerous species. Those ranges WERE such & such, and now ARE some distance further north. Observational evidence, from which it’s possible to draw the fairly obvious conclusion that the northward shift is due to warming.

  266. William:

    Chris
    Interesting indeed. The studies you cited had a very limited time span and made no conclusions as to whether the movement of this virus was unnatural given the climate we are experiencing at this stage between the previous glacial period and another that may begin thousands of years from now. Neither describes whether the virus made similar incursions in the recent past of 1-2000 years ago, when there may have been similar warmings. It may be that the huge changes in European land use humans have made over the last 1000 years are also a factor.

  267. Mark:

    William #260, what about at the tail end of an ice age?

    We ain’t in the middle.

  268. Hank Roberts:

    Latest North Pole thread still open:
    http://www.realclimate.org/index.php/archives/2008/08/north-pole-notes-continued/

  269. Rod B:

    Jim G.(254), Excessive temperature increase will affect the biosphere, but “abrupt” is dicey. I might be missing your point.

  270. Hank Roberts:

    Rod, make a minimal effort, would you, to look this stuff up?
    There are whole topics here discussing that question.
    You’ll miss every point if you regularly question every word.
    And you’ll divert every thread by doing so. Consider your goals.

  271. Chris:

    For anyone who might be curious about the source of the (off-topic) George Will quote (247), it appears to be this BBC News article; see the section headed “Rises ‘stalled’”.

    Will is taking out of context a sentence that was poorly written to begin with. All the WMO said was that 2008 (a La Nina year) would probably be cooler than 1998 (the big El Nino year, of course); but the BBC’s author misleadingly writes, “This would mean that temperatures have not risen globally since 1998 when El Nino warmed the world.”

    The author follows with some more sensible quotes from the WMO’s Michel Jarraud, but the damage is done.

  272. Jim Galasyn:

    Rod, if I read you correctly, your only quibble with my statement is the use of “abrupt” to describe the climate response; you accept the basic thermodynamics.

    “Abrupt” changes in climate have occurred with large excursions in the planet’s carbon budget, typically from flood basalts, and on a time scale of millennia. Mass extinctions were the result.

    What’s happening today is on the time scale of decades, so maybe I should have said “ultra-abrupt.”

  273. William:

    #254 Jim
    Hansen at Giss on Sept 2003 stated the following: (http://www.giss.nasa.gov/research/news/20030903/) “The enhanced GCM showed the world’s oceans were storing heat at a rate of about 0.2 W/m2 in 1951, and in the past 50 years, as atmospheric temperatures warmed, the rate of heat storage increased to about 0.75 W/m2, capturing more heat from the atmosphere. “This increase in ocean heat storage shows that the planet is out of energy balance,” Hansen said. “This energy imbalance implies that the atmosphere and ocean will continue to warm over time, so we will see continuing climate change.”

    However, if you look at ocean temperatures over the last 5-1/2 years since that statement there has been no significant heating of the oceans. Link here: http://climatesci.org/2009/02/13/article-by-josh-willis-is-it-me-or-did-the-oceans-cool-a-lesson-on-global-warming-from-my-favorite-denier/

    It looks like at least for the last 5 years there has not been any “excess” heat to store. Perhaps we are back in equilibrium. Within another 10-200 years we should find out for sure.

  274. William:

    Hank
    I appreciate your comments. You may have supplied my answer in that “animals aren’t supposed to be anywhere in particular.” Animals seem to do fairly well surviving 100-degree differences between seasons through a variety of adaptations. I think it is arguable whether half a degree one way or the other is the cause of a “great extinction”.
    I’d suggest that chopping down trees, draining swamps, rerouting rivers, paving over prairie, clearcutting or burning forests, and hunting to extinction is a much clearer way of contributing to ecological change than trying to link it to 385 ppm of CO2.
    Thanks
    William

  275. Jim Galasyn:

    William, if Earth were in radiative equilibrium with space, there would be no observed trend in atmospheric mean temperature.

    Because there is a robust upward trend, we can infer that Earth is not in equilibrium.

  276. Hank Roberts:

    Sorry, William, picking a few words out of a pointer isn’t solving your problem. You really need to look this stuff up if you want to learn it. If you just want to play ‘arguable’ then you’re not doing science.
    Rate of change, rate of adaptation, ecological community, complexity.
    Migration, food availability, seasonality, warmth, day length.

  277. William:

    Hank
    Thanks for your comments, and I acknowledge that I am not trying to do science within the short span of these comments. My argument is that species extinction is continuing unabated as a result of human activities completely unrelated to CO2 production. I cannot recall a species having been declared extinct from an increase of 100 ppm of CO2, nor from a 0.5°C increase in temperature over the last 160 years. Zoologists can point to hundreds of extinct species and many more endangered species that are directly a result of the negative impact of humans on their environment. I’ve read the same warnings that say IF CO2 doubles, or IF we burn all the coal, or IF the icecaps melt, but for the silverback gorillas being eaten for meat, their problems are a bit more immediate than worrying about it being a bit hotter a hundred years from now.

  278. Hank Roberts:

    William, you understand strawman argument, you’ve set out some there.
    Why bother? What are you here to learn? CO2 is going to double and the ice caps are going to melt. Ocean pH is well along to an excursion.
    Sure, idiots with guns are more of a risk to the particular gorillas.

    What’s your reason for contributing to RC?

  279. Hank Roberts:

    > look at ocean temperatures over the last 5-1/2 years

    William, good grief. You’re pulling stuff out of the dustbins, proclaiming you’ve found something brand new.

    The amount of debunking already done isn’t worth retyping into this thread where it’s off topic. You can look this stuff up.

    Think about what you know about trend detection.

    http://scienceblogs.com/stoat/2007/05/the_significance_of_5_year_tre.php#more
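
    The point of the linked post can be made with one back-of-envelope formula: the standard error of an OLS trend over an n-year window, assuming white interannual noise about the trend line (the 0.1 C noise level here is an assumption for illustration):

```python
import math

def trend_stderr_per_decade(n_years, noise_sd=0.1):
    """Standard error (C/decade) of an OLS trend fitted to n_years annual
    points, assuming white noise of noise_sd (C) about the trend line."""
    xbar = (n_years - 1) / 2.0
    sxx = sum((k - xbar) ** 2 for k in range(n_years))
    return 10.0 * noise_sd / math.sqrt(sxx)

# How the trend uncertainty shrinks with window length:
for n in (5, 10, 30):
    print(n, round(trend_stderr_per_decade(n), 3))
# 5 years:  se ~0.32 C/decade -- a 0.2 C/decade signal is lost in the noise
# 30 years: se ~0.02 C/decade -- the same signal stands out clearly
```

    With these numbers a ~0.2 C/decade signal is well under one standard error for a 5-year window but roughly ten standard errors for a 30-year window, which is why 5-year ocean-heat or temperature trends carry so little weight.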

  280. Dave Andrews:

    Hank,

    “The paid pros have time and staff to locate them and copypaste their talking points.”

    How does Gavin fit into this scenario then?

    [Response: Ummm..... not at all? - gavin]

  281. Jim Galasyn:

    William, for an example of how changing climate can precipitate large-scale disasters, check out the well-timed new post, Bushfires and Extreme Heat in SE Australia.

  282. Ray Ladbury:

    William, Google Golden tree frog or Harlequin tree frog.

  283. William:

    Ray
    I did a search on tree frogs as you suggested, and in about 15 minutes I learned something. Climate change is not what is killing amphibians. Suggested causes of the extinctions include fungus carried by invasive species, hydroelectric projects, and toxic chemicals and pesticides. Although climate is mentioned, it’s the fungus brought in by invasive species that is killing the amphibians, not a half degree of temperature.

    “No evidence for precipitous declines of harlequin frogs (Atelopus) in the Guyanas”
    http://www.informaworld.com/smpp/content~content=a903336055~db=all

    “The chytrid fungus is found throughout the world, possibly carried by invasive species such as bull frogs.”
    http://news.bbc.co.uk/2/hi/asia-pacific/4602116.stm

    Lethal amphibian fungus ‘in UK’
    The American bullfrog (Rana catesbeiana) is native to the central and eastern US and parts of Canada. The UK colony was probably derived from animals kept as pets that escaped or were released.
    “If it does get into British species, it’s going to be very difficult to get rid of,” Dr Cunningham told the BBC News website. at: http://news.bbc.co.uk/2/hi/science/nature/4249136.stm

    The Kihansi spray toad, a small toad that is virtually extinct in the wild after its habitat was destroyed by a hydroelectric project in Tanzania, only exists today, thanks to efforts by the Bronx Zoo, the Detroit Zoo in Michigan and the Toledo Zoo in Ohio, which are planning to reintroduce the species it to its native habitat.
    While climate change, habitat destruction and pollution are factors in their rapid disappearance, Pramuk says the main suspect is the onset of a deadly disease – the chytrid fungus – now found in frog populations around the world.
So what can we do to help? “Live green,” says Pramuk. “Recycle, avoid using toxic chemicals and pesticides, and don’t purchase frogs caught in the wild.” at: http://www.nydailynews.com/lifestyle/2008/02/23/2008-02-23_amphibians_under_threat_of_extinction-2.html
    Thanks for the suggestions.
    William

  284. William:

    Ray
With a little more research, Google provides a dozen additional causes besides the chytrid fungus for declining amphibian populations: invasive plant, animal and fish species as predators, weed invasion, erosion, sedimentation of streams, habitat fragmentation, urbanization, dredging, conversion of wetlands, deforestation, pollution, the pet trade, consumption of frog legs as food, and an increase in UV-B radiation (around 300 nm).

In terms of overall habitat destruction, humans have altered nearly half of the Earth’s land mass over the past 150 years, and that figure could rise to 70% within 30 years.

In addition, the decline in amphibian populations had already been recognized by the early 1980s. GISS temperature anomalies were at or below zero up until about 1980. Any rise in temps occurred between 1980 and 2000, after many amphibian populations were already in decline from one or more other causes.
    see: http://data.giss.nasa.gov/gistemp/graphs/

  285. herbert stencil:

    Re #278: “[William] What’s your reason for contributing to RC?”

    What a good question Hank. And what would your response be?? [edit]

  286. Ray Ladbury:

    William, I said specifically to look for info on the golden tree frog. The evidence is quite strong there. None of the fungi were contributing factors there. You are either a poor reader or you are being disingenuous.

  287. Chris S:

    #266 William
Thanks for your comment, although I would note that neither link was a study; rather, they are reports. Unfortunately the second link seems to be dead at the moment, but it did state that the virus has previously made only brief, sporadic appearances in Southern Europe. (Indeed, according to Mellor et al. in Preventive Veterinary Medicine Vol. 87 (2008) pp. 4-20, the most recent outbreaks of BT in Europe are further north than this virus has ever previously occurred anywhere in the world.)
Whilst land-use could be a factor, it has been suggested that the real limiting factors of this disease were winter temperature and the distribution of its primary vector. Purse et al. (2005) found that its spread is due to “recent changes in European climate that have allowed increased virus persistence during winter, the northward expansion of Culicoides imicola, the main bluetongue virus vector, and, beyond this vector’s range, transmission by indigenous European Culicoides species” ( http://www.nature.com/nrmicro/journal/v3/n2/full/nrmicro1090.html )
    For further studies see: http://www3.interscience.wiley.com/cgi-bin/fulltext/118916065/HTMLSTART , http://www.ncbi.nlm.nih.gov/pubmed/11189725?dopt=abstract and elsewhere.

    A further question to you – what factors do you think keep tropical diseases in the tropics?

  288. Sekerob:

Chris S., Gee, why’s malaria not prevalent in Europe, where it was common even not far from where I live less than a century ago?

  289. Sekerob:

With my previous comment… only in 1970 was Italy declared malaria-free, and now:

    Climate change brings malaria back to Italy

  290. Nick Gotts:

    William@283,
You say “climate change is not what is killing amphibians”, but one of the sources you cite in your support says, specifically:
    “While climate change, habitat destruction and pollution are factors in their rapid disappearance…”
    Multiple causation is common in complex systems. A lot of denialists seem to have problems with it.

  291. Chris:

    Not to be a pest about this (247, 271), but I’ve posted a blog entry on the George Will column. Since I’m not a professional (or even a scientist), I’d appreciate any comments or corrections, either here or on my blog.

    Thanks.

    [Response: See also - gavin]

  292. Mark:

    Nick, #290. Although it’s hard to tell which way it goes, they seem to have a problem with multiple causation all over the shop.

    I.e. when it comes to what’s causing it, there are two sorts of denialist

    Type A: “It’s the SUN!!!!”
    Type B: “Huh, so you don’t think the sun could be the cause???”

    Type A has a problem with multiple causation. They see the sun as the sole source of heating. Multiple causation is impossible or merely invisible. Though they ought to be the sort that Type B are complaining about:

Type B thinks that the climate science community has never considered that there is ANY OTHER cause for warming than anthropogenic global warming, mistaking the number one cause for the ONLY cause. That is what the Type A denialist would say if they were pro-AGW, but this blindness to multiple causation seems to be a denialist thing.

  293. Chris S:

    #288 Sekerob
Malaria was present (though not common) in Europe through both the MWP and the Little Ice Age – at least if the clinical diagnoses of mediaeval reports are correct. Furthermore, there were outbreaks right up until 1880 & beyond (including a minor one just after WWI). Malarial outbreaks tailed off (during a warming phase) mainly due to: continued drainage of wetlands for agriculture – thus destroying the vector’s habitat; an increase in livestock providing alternative blood sources for the vector; and a reduction in rural populations & improvement in buildings reducing the availability of human hosts.

  294. Mark:

    Will says: “With a little more research, google provides a dozen additional causes besides the chytrid fungus for declining amphibian populations”

    But you do now admit that climate change has caused another additional plague for the species.

  295. Sekerob:

Great Chris S, so after reading up on what you did not know before, you found yourself an answer as to why tropical diseases are concentrated in the tropics… but now Dengue and West Nile are moving up too, and a few more.

  296. Chris S:

    #295 Sekerob
    What part of that did I not know?* I’m aware that vector distribution is a major limiting factor in tropical diseases and the recent outbreak of WNV in the US is illustrative of that. In addition it’s worth keeping an eye out for Chikungunya, African Horse Sickness, Usutu virus and Tick-borne encephalitis (louping ill) arriving in Europe soon as their vectors undergo range expansion along the same lines as Culicoides imicola.

    (*I should note that I am an entomologist with interest in Arboviruses and their vectors)

  297. William:

#285 Herb Thanks for your inquiry, but I believe the discussion here is restricted to scientific topics, not why I am interested in discussing impending frog extinctions.
#290 Nick I don’t understand your point in calling me a denialist. I’m not a scientist. However, Feynman stated in 1969 that “Science is the belief in the ignorance of experts”, and other contributors to this site much smarter than I am have already pointed out that “any scientist outside his known specialty is just another commentator”.

Ray #286 Although you do not say so explicitly, you must be referring to the Alan Pounds study published in Nature in 2006. Other experts in the field do not agree with Pounds’ conclusions:
    “Most experts agree that the disease-causing chytrid fungus Batrachochytrium dendrobatidis is taking a terrible toll on frogs and toads. One in three species worldwide is threatened with extinction.
    “There seems to be convincing evidence that chytrid fungus is the bullet killing amphibians,” said University of South Florida biologist Jason Rohr, lead author of the study, published in a recent issue of the journal Proceedings of the National Academy of Sciences. “But the evidence that climate change is pulling the trigger is weak at this point.”
    Beer to Blame? Rohr and colleagues don’t completely discount the role of global warming in amphibian declines. But they say decades of data show only that some correlation exists between rising air temperatures and Latin American amphibian extinctions—and that data are well short of proving causation.
    In fact, the researchers found that, in the Latin American countries they studied, beer and banana production were actually better predictors of amphibian extinctions than tropical air temperature.
    While beer and bananas are certainly not to blame, the whimsical comparison makes a point.
    “We can’t jump to conclusions of causality based on a correlation—especially when we’re talking about 60 or 70 species,” Rohr said.”
    http://news.nationalgeographic.com/news/2008/12/081201-global-warming-frogs_2.html

Pounds is touting the Golden Tree Frog as the first species extinction linked to global warming, but until the frog experts agree we’ll have to agree to disagree.

After a little further research, the cloud cover mechanism Pounds suggests is questionable:
    Ellis et al. (2004) provides a global map of cloud cover trends from 1987-2001, which includes the period of maximum amphibian extinction. If you look at the study area of Pounds et al., Ellis found no change in cloud cover in that region during the period 1987-2001.
    Ellis T.D., et al., 2004. Evaluation of cloud amount trends and connections to large scale dynamics. 15th Symposium of Global Change and Climate Variations, Paper No. 5.7, American Meteorological Society.

    [Response: No more frogs on an Antarctic thread. Please stay focused people. - gavin]

  298. William:

    #293 Chris
    Habitat destruction, such as draining swampland for agricultural use has been cited as one of the factors putting pressure on amphibian populations such as frogs.

  299. Sekerob:

    Chris S. #296

So the question “what factors do you think keep tropical diseases in the tropics?” was rhetorical?

    http://www.world-television.se/world_television.se/mnr_stat/mnr/ECDC/431/index.php

    Some of these diseases came along to any country that has a seafaring history doing Africa/Asia, irrespective of LIA or MWP climate state.

  300. Rod B:

    Nick Gotts (290), “…are factors in …” is the phrase often used to get someone to infer a cause that otherwise has little correlation.

  301. Ike Solem:

    Basically, the issue here seems to be about how to go about reconstructing the time-evolution of a temperature field from a limited set of point measurements of that field.

To do that, you must have some idea of the homogeneity of the field – how does it vary spatially, roughly? For example, a lake in summer has a thin layer of warm water over much colder deeper water; measuring only the upper six inches gives an exaggerated estimate of total lake heat content.

    A similar problem exists with measurements of ocean heat content and oceanic heat transport. Direct observations are scattered through space and time, and with the ocean, organized structures can persist far longer than in the atmosphere (since the oceans are more viscous). For example, when the Gulf Stream hits the cold North Atlantic, eddy mixing occurs, and that can send rings of warm water traveling north. It’s easy for observations to miss such rings, and they are also smaller than model grid scales. Practically, that leads to larger uncertainties in heat transport estimates. Nevertheless, the estimates of ocean warming are robust and unchallenged: http://www.sciencemag.org/cgi/content/abstract/287/5461/2225

    In Antarctica, the most visible result of the net warming is the thinning and breakup of ice shelves that have persisted for many thousands of years (similar to the loss of mountain glaciers of equal age). That could be due to atmospheric and/or oceanic warming, but something is causing it.

    What this paper did was attempt to improve the estimates of the Antarctic temperature field history by using satellite data to guide the data-infilling algorithm, as described:

    In essence, we use the spatial covariance structure of the surface temperature field to guide interpolation of the sparse but reliable 50-year-long records of 2-m temperature from occupied weather stations. Although it has been suggested that such interpolation is unreliable owing to the distances involved, large spatial scales are not inherently problematic if there is high spatial coherence, as is the case in continental Antarctica.

    For example, if you put a thermometer in one side of a warm bowl of water, it will read the same as on the other side – the field is homogeneous. Compared to the oceans, the Antarctic continent has “high spatial coherence” – so it’s likely that the procedures used are reliable, and the peer review process should catch any glaring technical errors – and then, you have the vanishing ice shelves as physical evidence.
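The covariance-guided infilling being described can be illustrated with a toy sketch (this is not the actual RegEM algorithm used in the paper, and all numbers below are synthetic): when sites share a coherent signal, a gap at one site can be reconstructed by regressing it on neighboring sites over their period of overlap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy field: three sites sharing one coherent signal plus local noise.
# "High spatial coherence" means the shared signal dominates the local noise.
n = 600
common = 0.1 * rng.normal(size=n).cumsum()  # slowly varying shared signal
sites = np.column_stack([common + 0.2 * rng.normal(size=n) for _ in range(3)])

# Suppose site 0 stops reporting for the last 200 time steps.
observed = sites.copy()
observed[-200:, 0] = np.nan

# Infill by regressing site 0 on sites 1 and 2 over the overlap period --
# i.e. letting the spatial covariance structure guide the interpolation.
have = ~np.isnan(observed[:, 0])
X = np.column_stack([np.ones(have.sum()), observed[have, 1:]])
coef, *_ = np.linalg.lstsq(X, observed[have, 0], rcond=None)

gap = ~have
Xg = np.column_stack([np.ones(gap.sum()), observed[gap, 1:]])
filled = Xg @ coef

# Because coherence is high, the infilled values track the withheld truth.
rmse = float(np.sqrt(np.mean((filled - sites[gap, 0]) ** 2)))
print(f"infill RMSE: {rmse:.2f} (local noise sd was 0.2)")
```

If the local noise were comparable to, or larger than, the shared signal (low coherence, as in the ocean case), the regression coefficients would be poorly constrained and the infill would be unreliable.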

    Why not also do this with ocean temperatures around Antarctica? Answer: there is so little real historical data that the exercise would be futile, since the spatial coherence is definitely lower. This lack of data leads to problems in attributing the loss of the ice shelves to either ocean or atmospheric effects, which is likely why the authors are cautious about attribution:

    Although the influence of ozone-related changes in the SAM [Southern Annular Mode] has been emphasized in recent studies of Antarctic temperature trends, the spatial and seasonal patterns of the observed temperature trends indicate that higher-order modes of atmospheric circulation, associated with regional sea-ice changes, have had a larger role in West Antarctica.

    Thus, the detected trend is robust and the attribution is complex.

The thing to keep in mind is that this business of reconstructing fields from isolated data points is central to climate science. One of the most valuable approaches is to use types of data that average out the time-space variability. Take precipitation – one side of a mountain can get soaked while the other gets nothing, meaning that isolated rain gauges are poor choices for estimating yearly precipitation. The rain gauge data, however, are critical for studying the timing of seasonal variations. For getting at total precipitation, scientists focus instead on several other dependent variables – soil moisture, river flow rates, and snowpack accumulation. (Frogs, by the way, serve a similar function as indicator species.) Such studies show a trend of increasing drought in subtropical regions.

    The indicator species in Antarctica are the ice shelves.

  302. Mark:

    What did that little “nugget” of information bring to the party in #300, RodB?

    “are factors in” can also mean that there are factors in something. Are you complaining that Nick is being correct??? Or are you complaining that he isn’t ignoring other factors?

  303. Jim Eager:

    Re Nick @290: “Multiple causation is common in complex systems. A lot of denialists seem to have problems with it.”

    Exactly. Simple minds seek out simple answers, even if they are incomplete or flat out wrong. Or mutually exclusive, as so many denier arguments are: “There is no warming, the recorded temperature rise is caused by faulty weather stations and the urban heat island effect. And besides, there’s no proof humans have caused the warming anyway, it’s the sun!”

The fact that climate change could play a role in the rise of chytrid fungus, and in the expansion of invasive species, never dawned on William. Nor did the vertical movement of climate zones up the slopes of the cloud forest in which the golden toad lived.

But, as Captcha counsels, we do “constitutionally allow” deniers their right to make themselves look inconsistent and foolish in public.

  304. Carrick:

    For what it’s worth, in the field of research I am in, we regularly collect 50-GB of data from multiple geographical sites and dozens of sensors.

I am responsible for maintaining the repository, and everything for an experiment is kept in a given directory, from the process-control scripts that drive the recording software to the analysis scripts that produce the different levels of analysis of the data.

Any of my colleagues can log in, copy the directory, and run the scripts (based on instructions in the enclosed Readme file) to duplicate everything I have done. When manual intervention is required (for example, for data dropouts), that is stored in an exceptions file that the scripts can automatically parse, and which is human readable by my colleagues, so they know what additional “massaging” of the data has been done (and even better, they are at liberty to make their own changes and see how that affects the outcome).
    We provide code+data+scripts to run the programs to interested researchers, and we don’t screen for who is a “colleague” and who is not. I use MacOSX, but the software can be compiled and run on both Windows and Linux with little fuss.

I do not think it is too much to ask climate scientists to provide a similar “cookie cutter” template set of scripts showing how they took the raw data and processed it. Languages like MATLAB are scriptable and available on all major platforms, so there is no reason the entire process couldn’t be initiated with one or more MATLAB drivers.
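The exceptions-file idea can be sketched in a few lines (a hypothetical illustration – the station names, values, and file layout below are invented for the example, not taken from any real archive): every manual correction is recorded as data, so a one-command driver reproduces the processed set exactly.

```python
import csv
import io

# Hypothetical raw data and exceptions file, inlined here for a
# self-contained example; in practice these would live in the
# experiment directory alongside the driver script.
RAW = """station,year,temp
harry,1994,-18.2
harry,1995,-17.9
gill,1994,-21.4
"""

# Each row documents one manual intervention: which record, what was
# done, and why -- machine-parseable, but also readable by colleagues.
EXCEPTIONS = """station,year,action,value,reason
harry,1995,replace,-19.1,splice error corrected against original archive
"""

def load(text):
    return list(csv.DictReader(io.StringIO(text)))

def apply_exceptions(rows, exceptions):
    """Apply documented corrections; unknown actions fail loudly."""
    fixes = {(e["station"], e["year"]): e for e in exceptions}
    out = []
    for r in rows:
        e = fixes.get((r["station"], r["year"]))
        if e:
            assert e["action"] == "replace", f"unknown action {e['action']}"
            r = {**r, "temp": e["value"]}
        out.append(r)
    return out

data = apply_exceptions(load(RAW), load(EXCEPTIONS))
print(data[1]["temp"])  # prints "-19.1", the corrected value
```

The point is that the correction itself is part of the recorded inputs, not an undocumented hand edit, so anyone re-running the driver gets exactly the same processed series and can see why each value was changed.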

    Not going to end this with a rant, but I do think more can be done in the climate community [ad hom deleted]

  305. dhogaza:

    For what it’s worth, in the field of research I am in, we regularly collect 50-GB of data from multiple geographical sites and dozens of sensors.

I am responsible for maintaining the repository, and everything for an experiment is kept in a given directory, from the process-control scripts that drive the recording software to the analysis scripts that produce the different levels of analysis of the data.

Any of my colleagues can log in, copy the directory, and run the scripts (based on instructions in the enclosed Readme file) to duplicate everything I have done.

    Nice to have funding for software infrastructure at this level.

    Now … is every piece of work done by your researchers who are *using that data* documented to the same level of detail, including productization to the level of scripts that work on multiple unix platforms?

Or do you think that just possibly some of those researchers have their own homebrew scripts and code lying around that they use when analyzing the data?

  306. William:

    #303 Jim
    Please read the literature as you have the causality reversed. The chytrid fungus was introduced into untouched habitats by invasive species not by warming.
    I thought we were done with frogs?

    [Response: We are! - gavin]

  307. Jim Galasyn:

    At least it’s not frogs:

    Global warming ‘changing balance’ of marine life in polar seas

    Global warming is changing the distribution, abundance and diversity of marine life in the polar seas with “profound” implications for creatures further up the food chain, according to scientists involved in the most comprehensive study of life in the oceans ever conducted.

  308. Hank Roberts:

    > marine life, polar oceans
    That’s what Dr. Le Quere is studying and some climate models are including. This will get very interesting very fast.

    http://dx.doi.org/10.1016/j.envsoft.2008.06.004
    Is my model too complex? Evaluating model formulation using model reduction

    One for Tamino — time series:

    http://arjournals.annualreviews.org/doi/pdf/10.1146/annurev.marine.010908.163801
    Contributions of long-term research and time-series observations to marine ecology …
    HW Ducklow, SC Doney, DK Steinberg – Annu. Rev. Mar. Sci, 2009 – Annual Reviews
    … top-down model is that climate change induces bottom-up responses in lower trophic level populations. Examples include changes in phytoplankton size classes …

    … Searching for ecological responses to climate mode variability enables us to formulate testable hypotheses about the mechanisms of ecosystem change.
    The CalCOFI program has sampled phytoplankton, zooplankton and fish distributions with species-level resolution, associated physical variables, primary production (PP), and nutrients over a 200,000 km2 region off Southern California since 1949….

  309. dhogaza:

    Global warming is changing the distribution, abundance and diversity of marine life in the polar seas with “profound” implications for creatures further up the food chain, according to scientists involved in the most comprehensive study of life in the oceans ever conducted.

Damn, the worldwide conspiracy of scientists vs. humanity has reached a new peak!

    Whatever explanation is possible for these obviously fraudulent observational results?

    It couldn’t be 20-20 vision, could it?

    (thanks for the post, and excuse the snark)

  310. Chris S:

    >implications further up the food chain.

    In my mind this is the direction that future research needs to take, both to clarify the effects climate change is having/will have on natural systems & to add further to the body of evidence that the climate is changing for those who remain to be convinced.

Whilst the loss of frogs in the tropics & the potential extinction of charismatic species like the polar bear are regrettable, they are unlikely to cause much more than a shrug of the shoulders amongst many outside the “envirofascist” movement. What is required is evidence that changes in natural systems will affect us economically, and how. I note with amusement that certain areas of the blogosphere are trumpeting the FACE experiments showing that increased CO2 leads to increased plant growth, whilst at the same time ignoring the study from the same source demonstrating that plant defences against insect attack are weakened. Above I have alluded to evidence that vectors of animal diseases are moving north into areas where livestock will not previously have had any contact with them (and thus lack any natural immunity). The same cannot (yet) be said for plant viruses, but there is concern that the advancing phenology of plant virus vectors like aphids will lead to diseases being introduced to crops at an earlier – more vulnerable – stage in their development.

The exercise of proving beyond doubt that there is a warming trend (or not) in Antarctica has its merits, but I feel it must be getting towards the time that the debate should move on to whether the effects of warming are going to be beneficial or detrimental – and where (if anywhere) the effects will be worst. There seems to be an assumption that the third world will bear the brunt of what is to come; I wonder whether that is the case, or whether the agricultural powerhouses in the northern temperate zone will suffer more from the ecological changes that warmer temperatures might bring?

  311. Sekerob:

Chris S. The discussion of whether AGW is beneficial or detrimental is, to me, sick. The implications are profound; earlier discussions and tabulations elsewhere of the effects of just 1 meter of SLR are dramatic. Lomborg’s claims of fewer deaths if it gets warmer are just beyond belief… he’s playing on “the haves’” emotions, whilst ignoring disasters costing many more lives elsewhere.

    Earth is no Petri dish to experiment with. The 2 legged bacteria called homo sapiens sapiens are doing just that. 9 billion of them by 2050? Nope, mayhem long before that.

  312. Hank Roberts:

    George Will’s nitwittery is taken to task in the Washington Monthly online today: http://www.washingtonmonthly.com/

    Note for Stoat, goes with the “Ice Age” collection

    February 18, 2009

WHERE THERE’S A WILL, THERE’S NO WAY…. George Will not only published an error-filled column on global warming, Brad Johnson notes that the conservative columnist “is also recycling his own work, republishing an extended passage from a 2006 column — which Think Progress debunked — almost word for word.”*

    __________
    * http://thinkprogress.org/2006/04/03/willful-deception/

  313. Ray Ladbury:

    Sekerob and Chris, Personally, I think that both the polar-bear and the cost-benefit analysis have their place. Emblematic species like the polar bear remind us of what we stand to lose irreparably in our natural heritage. The cost-benefit analysis is needed to bring along homo beancountus. They both unequivocally favor addressing climate change aggressively. The only way you can cook the numbers to make it look otherwise is to ignore any semblance of competent risk analysis–the approach taken by Lomborg.

  314. Chris S:

    #313 Ray,

    Don’t get me wrong, I think the loss of the polar bear would be tragic, but I’m also fully aware that “Joe Plumber”, “Dave Accountant” and “Jethro Farmer” would likely take the news without even blinking.

    Drawing attention to range changes of economic species (and predictions of such using (e.g.) climate envelope modelling) is much more likely to make “Dave” and especially “Jethro” sit up & take a bit more notice than pointing out the extinction of a pretty, but obscure frog/butterfly/plant or the temperature trend on a virtually uninhabited continent. “Joe” may be a lost cause however.

  315. Chris Colose:

The human vs. economic calculus seems to be a consistent issue in climate and environmental discussions. There is no consensus view in the economics community on how one should “weigh” species loss, human suffering and displacement, and the like. Long-term climate trajectories are also often ignored. The impacts of climate change go beyond how goods are sold in markets.

Industrialization separates community from nature. Land is an alienable commodity called “real estate,” soil a “natural resource,” and food an exchange value bought and sold by a medium called “money.” Food cultivation, exploiting fossil fuels, etc. are merely business enterprises to be operated strictly for generating profit in a market economy. It is hard to disagree with Chris S on the need for a more personalized “motivation” for climate change action. Climate change is, as Ray Pierrehumbert called it, a “catastrophe in slow motion.” It is small compared to daily weather fluctuations and does not discriminate between those who emit fossil fuels and those who are still developing. As such, the primary motivation for action, right now, by “big industry” is finances… not polar bear pictures.

  316. Nick Gotts:

    “There seems to be an assumption that the third world will bear the brunt of what is to come, I wonder whether that is the case, or will the agricutural powerhouses in the northern temperate zone suffer more from the ecological changes that warmer temperatures might bring?” – Chris S.

It’s not an assumption; it’s based on extensive research. See the AR4 report of WGII for work up to 2005. Initially, temperate agriculture will increase in productivity, while tropical agriculture (except initially in east and south-east Asia) declines. Large parts of Asia will suffer in the medium term from the disappearance of Himalayan glaciers and snowpack, which regulate river flow. Of course even in northern temperate regions there will be areas that suffer agriculturally from early on (e.g. the SW and SE of the USA, southern Europe), but the populations there are unlikely to starve, because they live in rich, stable states.

  317. Chris S:

    #316 Nick Gotts

    (I’ll stop posting OT on this thread after this but I feel this point should be made clear).

Whilst I haven’t read all of the IPCC report you mention, I have looked at the European section quite closely before now. I would point out that the (four) references for the section (well, sentence) on arboviruses concentrate mainly on the Mediterranean region and are largely out of date – as you say, it reports the work up until 2005. I posted this link earlier but you may not have seen it:

    http://www.reoviridae.org/dsRNA_virus_proteins/outbreaks.htm

    I draw your attention to reports of outbreaks of bluetongue since 2005, including an outbreak in Sweden in 2008. I’ll repeat that for effect: an outbreak of a tropical disease in Sweden in 2008.

    Bluetongue is a relatively benign disease but there will be serious economic impact if it continues to be endemic in temperate Europe. The rate it has spread in the last 3-4 years has really made people sit up & take notice. African horse sickness (with a 60-95% mortality) is a far more damaging virus that has the same vector as bluetongue and is not the only livestock disease with the potential to spread into the temperate zone.

    (reCaptcha: Ill GLOOM very apt!)

  318. Hank Roberts:

    http://www.e360.yale.edu/content/feature.msp?id=2115

    ——-excerpt——–
    e360: Are you saying that the rate of acceleration of the Pine Island and Thwaites glaciers is so rapid because of changes at their seaward ends, and so much more ice is coming off the continent into the water that this obviously is going to impact sea level rise?

    Bindschadler: That’s an important point and an issue that was hotly debated in the glaciological community for the last twenty years — this idea of, do the ice shelves matter to the ice upstream that is grounded, in regards to sea level rise? But in the Antarctic Peninsula, we finally got a definitive answer, because there the perfect natural experiment was run for us and we got to observe it, where these ice shelves like the Larsen B rapidly disintegrated over the course of just a few weeks. It was there, then it’s not there, and we were able to see that the glaciers that fed that ice shelf accelerated dramatically, more than 500 percent in just a couple years. So, that really settled the debate, told us that, yes, the ice shelves do buttress the ice upstream, and if you get rid of the ice shelf, the ice upstream really accelerates and it comes running into the ocean, and it will change sea level. In the case of Pine Island and Thwaites, the ice shelf hasn’t disintegrated, but it’s thinning quite rapidly, so it’s gradually going away and allowing the ice upstream to accelerate more and more.

    e360: And in the theoretical case that Pine Island and Thwaites glaciers completely dump into the ocean — obviously it’s not going to happen in the near-future — what kind of sea level rise would they contribute to?

    Bindschadler: That portion of West Antarctica, that third that flows northward primarily through those two glaciers, has the potential to raise sea level 1 ½ meters. That’s sort of an upper bound, a worst case. But the time scale is what really matters. Some say that we won’t see these ice shelves disappear in our lifetime — I’m not so sure. I think we might well.

    e360: Are you kidding?

Bindschadler: No, not at all.
    —–end excerpt———

  319. tom:

“Don’t get me wrong, I think the loss of the polar bear would be tragic”

the extinction of almost ALL large carnivores in a relatively short time frame (100 years?) is inevitable, given population growth.

    Those fictional characters you condescendingly mentioned are like you. That is, they won’t ever lay eyes on a polar bear in their lives.
So define how the extinction of this animal, however unlikely at this time, is a tragedy.

  320. Kevin McKinney:

    tom, tragedy is independent of who does or does not witness it. If (God forbid) you were to cash it in tomorrow, my lack of acquaintance with you would not affect the tragic nature of the event.

  321. Mark:

    tom 319. Tell me how your mum having a stroke and being hospital bound is a tragedy to me.

    The only reason it would be sad to hear this and for me to say “sorry to hear that” is because WE CARE.

    Without care, we are no longer social.

    The death of the polar bear would be as bad as the death of the entire United States to me. Why? Because neither should happen but then again I’m not going to be affected directly by either loss.

    So, tell me, why should I care about your family? Or a million USians? (Heck, why do people pay Children In Need to save children they will never even know existed, knowing that there will be yet more needing saving next year?)

    Because we care.

    And that care is as much for the loss of the polar bear as any human I’m never going to meet. Because where do I turn off the caring? Species? Race? Family? Utility of their existence alone?

    Or should I care for any loss?

  322. tom:

    People cash it in every day. I suppose to their immediate family, each can be tragic.

    But losing a species is not my idea of a “tragedy,” unless loss of said species affects human lives on a large scale.

    I like to watch tigers on the Nature channel as much as anyone, but if tigers were on the prowl near where my children played, the tigers would have to go. Same goes with polar bears.

    [Response: This is as clear a statement of human arrogance as I have ever read. The logical end point of such thinking is complete extinction for everyone because, though you might not like this, our species is not separate from the biosphere, and cannot live without it. Extinction is forever, but you seem to think that we know enough about the biosphere to cheerfully eliminate whole species, genera (kingdoms?) just because they bother you. I am sure proud to be sharing a planet with you. - gavin]

  323. Hank Roberts:

    > tragedy

    It’s a statistical expression:

    “…in the tragedies of Euripides, the protagonist’s margin of freedom grows ever smaller….” (Britannica)

    Simplifying an ecosystem makes it less stable. Once you’ve simplified your ecosystem down to, say, animals comprising people, flies, mosquitos, rats, cockroaches, and bedbugs, you have almost no margin of freedom remaining.

    Also referred to as the “removing rivets” approach to ecological management:

    “… You look out the window and notice someone is removing rivets … There are thousands of rivets in the plane and the person is just harvesting a few of them …”

    Urban Wildlife Management
    Clark Edward Adams, Kieran Jane Lindsey, Sara J. Ash

    For a solipsist (“apres moi, rien”) tragedy is only personal inconvenience.

  324. Mark:

    Hank, 323: “For a solipsist (‘apres moi, rien’) tragedy is only personal inconvenience.”

    I would not consider someone who took that attitude to be a complete human being.

  325. Ray Ladbury:

    You know, Tom, I do hope to see a polar bear in the wild some day–maybe not close up, but in the wild. I’ve already seen tigers in the wild–stared into their eyes from an open jeep not 8 feet away. I’ve seen lions and cheetahs hunt and leopards haul a kill into a tree to keep the hyenas away. From your response, I can only assume you have not been similarly blessed. Your loss. But to see the loss of all these magnificent animals forever will diminish us as humans, and fools like you will never know the difference.

  326. Hank Roberts:

    Mark, Gavin, yep, thinking like that seems to be missing a few rivets.
    I hope our species doesn’t turn solipsistic. There’s no line separating what we are from the life around us. I expect if we raised people for a few generations without a living world we’d find that much of what keeps us alive is borrowed, by each kid, from the world around us, and our gut commensals wouldn’t thrive if not acquired fresh from the world by each kid, by each generation. It’d be a desperate experiment.
    http://www.sciencemag.org/cgi/content/abstract/292/5519/1115
    http://dnaresearch.oxfordjournals.org/cgi/content/abstract/16/1/1

  327. sidd:

    “No life is an island, entire of itself; every one is a piece of the continent, a part of the main. If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend’s or of thine own were: any death diminishes me, because I am involved in life, and therefore never send to know for whom the bell tolls; it tolls for thee.”

    paraphrasing Donne

  328. James:

    Tom Says (24 February 2009 at 3:24 PM):

    “…if tigers were on the prowl near where my children played, the tigers would have to go.”

    I could say as much for some humans of my acquaintance, and of a great many with whom I hope not to become acquainted :-)

  329. Philippe Chantreau:

    Tom, share with us the reasons why we would want to reproduce on a planetary scale the miserable fiasco that they called “Biosphere II.” Some tragedy you have in store for all of us…

  330. Chris S:

    Re#319
    Gee tom I didn’t know it was possible to infer so much about a person and what they have and haven’t seen from a few lines of text!

    But to answer your question: if you don’t see the tragedy in this statement “the extinction of almost ALL large carnivores in a relatively short time frame ( 100 years? ) is inevitable, given population growth” then I think you may have made my point for me.

  331. pete best:

    Re #325, Burnt some carbon seeing them then, Ray? Nice though they were, therein lies part of the dilemma. It’s easy being wowed by the world’s amazing sights and places that a vast swathe of people have seen and continue to want to see. You probably had a camera too, and a laptop to look at all of the amazing memories, a large motor vehicle to see them in, and plane flights galore to get you there.

    Ah what a wonderful carbon world we live in.

  332. Nick Gotts:

    Chris S. #317,

    Yes, I wouldn’t dispute that various animal (and crop) diseases/pests will become more prevalent in temperate zones, but crop and pasture yields are expected to rise for up to a 2 K (global mean) increase, and the range of crops that can be grown to expand. These effects are already noticeable in Scotland, where I live and whose land use change I model: wheat can now be grown in places it couldn’t before, and the growing season for grass is longer. On the other hand, livestock (particularly cattle) farmers are, counter-intuitively, needing to bring their stock indoors when they didn’t before, because the ground doesn’t freeze for nearly so long in winter, and stock left outside churn it up. So effects in temperate regions will be mixed and complex; but in most tropical regions, there appear to be few beneficial effects even of small temperature rises; and the biggest medium-term problem is probably the expected shrinkage of Himalayan and Andean glaciers and snowpack, which currently regulate river flow.

  333. Chris S:

    #332 Nick Gotts

    Broadly I agree with everything you say in this post. I think there may have been some misinterpretation of the post you originally responded to – I said “There seems to be an assumption that the third world will bear the brunt of what is to come, … [will] … the northern temperate zone suffer more from the ecological changes…?” I did not state the 3rd world won’t suffer (although on reflection I can see how that inference could be drawn from what I wrote), just that the northern hemisphere will probably suffer more than is expected/predicted.

    There is an interesting page at the Warwick HRI site on potential impacts on crops ( http://www2.warwick.ac.uk/fac/sci/whri/research/climatechange/cgpests/ ) I’d be interested to know whether your models take any account of predicted changes in the phenology & survival rates of pest species outlined there. Particularly, have you looked at some of the new research coming out linking insecticide resistance in aphids to a lack of cold-tolerance and the implications of the increased survival of resistant clones during warmer winters?

  334. Mark:

    Pete, 331. So you can’t drive to see the animals, fly, or even walk (you have to breathe!!!! OMG!!!), and if you don’t see the animals, you shouldn’t care about them.

    So this would be a lose-lose situation, then?

  335. tom:

    I’m sure viewing large carnivores in their natural habitat is a nice vanity project for the privileged elite, but for those who have to deal with these creatures on a daily basis, they are not so wonderful.

    I would recommend that we relocate polar bears and tigers to YOUR neighborhoods, but we all know what the response would be, don’t we?

    [Response: Now you're an Inuit? Give me a break. I do appreciate the way your 'thinking' works here - anyone who appreciates that ecosystems are fragile, necessary and irreplaceable is a vain elitist who only thinks that fauna are there for them to look at. Your viewpoint is blinding you, my friend. - gavin]

  336. Kevin McKinney:

    tom, how about this? If you don’t like living near the polar bears, tigers, or whatever, you move to Manhattan. Or Topeka, if the rent is too high.

    Unlike the bears, you have an option.

    [Captcha says, inscrutably, "simpleness 50."]

  337. Nick Gotts:

    Chris S@333,
    Up to now my models have been rather abstract, and focused on how farmers interact (imitating each other, approving/disapproving, asking/giving advice) – focused on the socio-economic rather than the biophysical – and I’m now shifting my main focus to modelling energy demand, so the land use stuff is on the back burner until I can get some more funding for it. I would then hope to go more deeply into the biophysical side. I don’t offhand know of models looking at phenological changes’ effects on pests, but I imagine there must be some. If you want to pursue this further, my work email isn’t hard to find.

  338. SecularAnimist:

    On-topic re: Antarctic warming, from Associated Press today:

    Study: Antarctic glaciers slipping swiftly seaward
    Antarctic warming worse than thought, melting glaciers
    could raise sea levels over 3 feet
    ELIANE ENGELER
    AP News
    Feb 25, 2009 08:42 EST

    Antarctic glaciers are melting faster across a much wider area than previously thought, scientists said Wednesday — a development that could lead to an unprecedented rise in sea levels.

    A report by thousands of scientists for the 2007-2008 International Polar Year concluded that the western part of the continent is warming up, not just the Antarctic Peninsula.

    Previously most of the warming was thought to occur on the narrow stretch pointing toward South America, said Colin Summerhayes, executive director of the Britain-based Scientific Committee on Antarctic Research and a member of International Polar Year’s steering committee.

    But satellite data and automated weather stations indicate otherwise.

    “The warming we see in the peninsula also extends all the way down to what is called west Antarctica,” Summerhayes told The Associated Press. “That’s unusual and unexpected.”

    For the International Polar Year, scientists from more than 60 countries have been conducting intense Arctic and Antarctic research over the past two southern summer seasons — on the ice, at sea, and via icebreaker, submarine and surveillance satellite.

    The biggest west Antarctic glacier, the Pine Island Glacier, is moving 40 percent faster than it was in the 1970s, discharging water and ice more rapidly into the ocean, Summerhayes said.

    The Smith Glacier, also in west Antarctica, is moving 83 percent faster than it did in 1992, he said.

    All the glaciers in the area together are losing a total of around 103 billion tons (114 billion U.S. tons) per year because the discharge is much greater than the new snowfall, he said.

    “That’s equivalent to the current mass loss from the whole of the Greenland ice sheet,” Summerhayes said, adding that the glaciers’ discharge was making a significant contribution to the rise in sea levels. “We didn’t realize it was moving that fast.”

    The glaciers are slipping into the sea faster because the floating ice shelf that would normally stop them — usually 650 to 980 feet (200 to 300 meters) thick — is melting.

    The warming of western Antarctica is a real concern.

    “There’s some people who fear that this is the first signs of an incipient collapse of the west Antarctic ice sheet,” Summerhayes said.

    Antarctica’s average annual temperature has increased by about 1 degree Fahrenheit (0.56 degrees Celsius) since 1957, but is still 50 degrees Fahrenheit (45.6 degrees Celsius) below zero, according to a recent study by Eric Steig of the University of Washington.

    Summerhayes said sea levels will rise faster than predicted by the Intergovernmental Panel on Climate Change, a group set up by the United Nations.

    A 2007 IPCC report predicted a sea level rise of 7 to 23 inches (18 to 58 centimeters) by the end of the century, which could flood low-lying areas and force millions to flee. The group said an additional 3.9 to 7.8 inches (10 to 20 centimeters) rise was possible if the recent, surprising melting of polar ice sheets continues.

    Summerhayes said the rise could be much higher.

    “If the west Antarctica sheet collapses, then we’re looking at a sea level rise of between 1 meter and 1.5 meters (3 feet, 4 inches to nearly 5 feet),” Summerhayes said.

    Ian Allison, co-chair of the International Polar Year’s steering committee, said many scientists now say the upper limit for sea level rise should be higher than predicted by IPCC.

    “That has a very large impact,” Allison said, adding that extremely large storms which might previously have occurred once in a year would start to occur on a weekly basis.

    The IPY researchers found the southern ocean around Antarctica has warmed about 0.2 degrees Celsius (0.36 degrees Fahrenheit) in the past decade, double the average warming of the rest of the Earth’s oceans over the past 30 years.

    It seems to me that pretty much every new study of the observed effects of anthropogenic global warming includes the quote: “We didn’t realize it was moving that fast.”

  339. tom:

    A poster writes:

    “No life is an island, entire of itself; every one is a piece of the continent, a part of the main. If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend’s or of thine own were: any death diminishes me, because I am involved in life, and therefore never send to know for whom the bell tolls; it tolls for thee”

    By the way, there’s a tragedy going on in the animal world as we speak. A group of animals is being slaughtered indiscriminately by the millions every day.

    They are known as cows and pigs.

    I sure hope you can sleep at night with all those bells tolling at each of those deaths that diminishes you.

    [Response: This is as textbook illogical as it comes. Now only vegetarians can care about ecosystems and the environment. Got it. - gavin]

  340. Hank Roberts:

    Could be captioned: “Don’t let them get your goat”:
    http://members.tripod.com/~darrens/troll4.jpg

  341. Jim Eager:

    I think tom has had more than enough opportunity to demonstrate that he has nothing constructive to add to the discussion.

  342. pete best:

    Re #334. Are you trying to tell me that the breathing of 6.5 billion people is a problem? Well, it’s a first in the history of the planet, so all that organic tissue breathing out CO2 might be a problem, but then again won’t all the cattle be one too? Eating meat is additional to seeing it, I guess.

  343. sidd:

    Mr. Tom writes on the 25th of February 2009 at 11:31:

    “There’s a tragedy going on in in the animal world as we speak. A group of animals is being slaughtered indiscrimiantely by the millions every day.”
    Indeed, sir, there is. I have had the misfortune to have some acquaintance with the feedlots of the Colorado plains by Greeley, as well as other Confined Animal Feeding Operations in Ohio and Pennsylvania, where tens of thousands of helpless creatures are imprisoned in conditions so horrific as to beggar my imagination. Perhaps someday you, Mr. Tom, might familiarize yourself with the reason a pig’s tail in one of these facilities is not cropped completely off…

    Mr. Tom writes further:
    “I sure hope you can sleep at night with al those bells tolling at each of those deaths that diminishes you.”

    That is the reason I converted to free range meat some years ago. And I do bow my head and spare a thought for the animals I eat. I sleep much better at night, but of course, sir, you are correct, my sleep could and will improve, as I improve my habits.

    I apologize to the moderators for the off topic comments. Please feel free to delete this post. However I felt that I ought to respond to Mr. Tom.

    sidd

  344. Ray Ladbury:

    Pete Best, My goal is to preserve civilization and make it sustainable, not to go back to a hunter-gatherer existence, or even a medieval feudalism. Like it or not, if we want people to preserve the wonders of the world, we will have to find ways for them to enjoy them.
    I learned something when I was in the Peace Corps. I was feeling guilty about my privileged lifestyle and status (at least relative to my African peers), and a wise woman who had lived her life doing development told me, “You can give away everything you have and the only thing you’d accomplish is you’d be poor too. You have to have a firm footing to give people a hand up.”
    By all means, we should conserve. That does not mean eschewing all transport or modern advantages.

  345. Kevin McKinney:

    If tom is a vegetarian, I’ll eat my tofu. Sarcasm, pure & simple.

    Jim Eager is right; this subthread is going nowhere.

  346. Jim Eaton:

    In Michael Pollan’s book, “In Defense of Food,” he suggests, “Eat food. Not too much. Mostly plants.”

    But he doesn’t go so far as to say everyone should be a vegetarian.

    But he does suggest that if you eat meat, you should get it from animals eating a natural diet. For cows, that is grass, not seeds like corn. This results in a healthier food for consumption.

    Of course, we cannot raise as many cows or other animals with this suggestion, because there is not enough land available (or we cut down yet more rain forests to convert to pasture). But it would have its benefits to the planet. Reduced consumption of meat likely would result in healthier people, and a smaller number of ungulates would reduce the amount of methane being emitted to the atmosphere.

    The eating habits of many (especially Americans) have a great impact on CO2 emissions, whether it be from feedlot cows or fruits and vegetables transported long distances. It is yet another change we all need to make, amongst many, to slow global warming.

    Oracle: Pressure lecture

  347. Mark:

    re 342.

    Where do you think the carbon you breathe out comes from? You aren’t a nuclear thermopile so cannot be creating it from nothing.

    It comes from Your Food.

    Carbohydrates.

    What do you think they contain a lot of?

    Carbon?

    Yes.

    Now the plant matter isn’t a fusion reactor, is it. So where does it get its Carbon from?

    CO2.

    I believe you’ve said in the past CO2 is plant food.

    So you breathe out CO2. Plant breathes in CO2. You eat plant. You breathe out CO2. Plant breathes in CO2. …

    It’s called the carbon cycle.

    and 6.5Billion people eat a lot of plants. And all those plants breathe CO2.

    It kind of scales itself.

  348. pete best:

    Re #344, a bit of a sloppy answer considering carbon is carbon; we all burn some, but should it be less, and if so how much less, I wonder. The USA constructed its whole way of life on fossil fuel usage, 20 tonnes per person on average I believe, or as near as damn it. So what is the solution, I wonder?

  349. Mark:

    re: 348.

    Scotland, which is peopled by first world inhabitants and is quite a long way north, uses about 1/5th the energy per household of the average North American household.

    Maybe there’s a start.

  350. Jim Eager:

    Pete, you obviously missed the words “Peace Corps” in Ray’s post. In other words, he didn’t go to Africa as a tourist.

    You are so far off base on this one that you can’t even see the base any more.

  351. pete best:

    Re #350, regardless, it’s all carbon use; the military use a lot, and most people do not have the luxury of being in the Peace Corps and doing it themselves, travelling around the globe in planes and burning carbon to do things. It just begs the question: what is the solution?

    Do nothing – BAU scenario
    Do something but not enough – 2 ppmv down to 1 ppmv perhaps
    Do a lot but wish upon a star if you think life will continue as it is now – avert the climate disaster people seem happy to tell everyone about, and avert population decimation and mass unrest.

    But how do we avert it all? We can all shout about the science endlessly and get wound up by the deniers and skeptics, but where is the solution? There is not one. Just another load of people arguing about nuclear, renewables etc. whilst we continue the carbon burn and carry on with our normal lives. Prosperity and progress will render this site useless.

  352. Mark:

    “where is the solution, there is not one.”

    Yes there is.

    Don’t burn fossil fuels.

    There’s your solution.

    At the moment we have too much infrastructure that relies on fossil fuels. So change the infrastructure (in exactly the same way as you build more roads to new cities, build new houses for new families to live in, or build new powerstations to produce more power and all the infrastructure building that comes from entropy being an unstoppable feature of reality).

    At the moment we demand too much energy to easily change power sources. So use less. The average Scottish household uses 20% of the energy of the average US household. Sweden uses something less than 1/3. Neither is in a temperate area that can get by without a lot of heating or lighting to be comfortable in its luxuries. Neither is a second-world or third-world country with a lower standard of living. Neither is living in caves, eating tree roots and berries and whatever dead animal they find.

    There may not be ones you WANT to solve the problem with, but then again if my arm is gangrenous, I STILL don’t want it sawn off. If I could keep the arm and lose the gangrene, I would much prefer that option.

    But you can blame your parents, your grandparents and back 5-8 generations for the mess they made so that they could have a good life and hang the mess they caused. If it hadn’t been for their profligacy CO2 levels would not be so high and the infrastructure would not be so dependent on fossil fuels.

    And if YOU don’t take the hard steps, your children will have to work even harder to undo the mess YOU left behind, and that of their ancestors back 6-9 generations.

  353. PaulM:

    One of the flaws of this paper is the reliance on a small number of principal components. I’ve just been looking at Steig et al again: they say “peninsula warming averages 0.11 ± 0.04 °C per decade”. That’s 0.55 degrees over the last 50 years. But if you look at the BAS site they say the peninsula has warmed by 2.8 °C over the last 50 years, which is roughly what you can see by looking at the station data. So if Steig et al’s “reconstruction” is correct, the station data overestimate warming by about a factor of 5! One of their conclusions ought to be that the peninsula is warming much less than previously thought. But curiously this is not the conclusion they reached or the story reported in the media.
    So the “reconstruction” is clearly wrong. Where has the peninsula warming gone? Answer: it has been smeared and diluted over the continent by the inappropriate use of a small number of PCs. All of this ought to have been obvious to Eric Steig and his co-authors and the Nature referees and editors.
    [For those familiar with Fourier analysis: It is like taking a spiky function and trying to represent it with just 3 Fourier modes. The spike gets lower and broader]

    [Response: Surprisingly enough, this was realised by the authors and they discussed it: "A disadvantage of excluding higher-order terms (k > 3) is that this fails to fully capture the variance in the Antarctic Peninsula region. We accept this trade-off because the Peninsula is already the best-observed region of the Antarctic." - gavin]
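    PaulM’s Fourier analogy is easy to check numerically. The following sketch (purely illustrative; NumPy assumed, and the grid size and mode count are arbitrary choices, nothing from the paper) rebuilds a unit spike from only its three lowest-frequency Fourier modes: the peak collapses from 1 to about 0.02 while the integral is preserved, i.e. the spike gets lower and broader.

    ```python
    import numpy as np

    # Illustrative only: a unit spike on a periodic domain, rebuilt from
    # just its 3 lowest-frequency Fourier modes. The truncated version is
    # much lower and much broader, while the integral (DC term) is kept.
    n = 256
    spike = np.zeros(n)
    spike[n // 2] = 1.0               # unit spike at the centre

    coeffs = np.fft.rfft(spike)
    coeffs[3:] = 0.0                  # keep only modes k = 0, 1, 2
    truncated = np.fft.irfft(coeffs, n)

    print("original peak:", spike.max())                        # 1.0
    print("truncated peak:", round(float(truncated.max()), 4))  # 0.0195
    print("integral preserved:", np.isclose(truncated.sum(), spike.sum()))
    ```

    The truncated peak is exactly 5/256 here, and the half-maximum width spreads over dozens of grid points: the “energy” of the spike has been smeared across the domain, which is the intuition behind the dilution argument.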

  354. Ray Ladbury:

    Pete, I share your frustration. Indeed, I think the frustration you feel is one of the reasons why denialists have such a hard time accepting the science. It looks as if the only solution would be to cut back to zero fossil fuel use immediately. However, this isn’t an option in a world of 6.5 billion people dependent on fossil fuels for everything from the food they eat to the clothes they wear. We cannot conserve ourselves out of this mess.

    At the same time, conservation is critical, since it is something we can do now that buys us time for developing more effective solutions, mitigations and models.

    FWIW, I am fully cognizant of how privileged I have been to have such experiences, and I hope I have used them to advantage–for instance indicating the shortsightedness of Tom’s trolling. To commit to a fight, it’s sometimes an advantage to have experienced some of what you are fighting for.

  355. SecularAnimist:

    Ray Ladbury wrote: “My goal is to preserve civilization and make it sustainable, not to go back to a hunter-gatherer existence, or even a medieval feudalism.”

    Medieval feudalism refers to a political system, not a level of technological development. It is certainly conceivable to imagine a high-tech version of medieval feudalism. I believe such a scenario has been imagined in various science fiction novels. Frank Herbert’s Dune would be an example. Some, including myself, see parallels between the modern corporation and the medieval feudal state.

    The fundamental basis of any civilization is agriculture. My view is that the essential ingredient of a technologically advanced civilization is the ability to generate, distribute and use electricity. Fortunately there are plenty of ways to produce abundant electricity without carbon emissions, principally with solar, wind and hydro energy.

  356. pete best:

    Re #352, let’s forget the blame game, shall we? Let’s just accept the orthodox position on the matter and devise the solution(s), but as it stands I doubt it’s going to happen as yet. Yes, some definite large-scale projects on wind, CSP etc., but it’s not strategic enough, and the post-Kyoto treaty coming up to start by 2012 needs to be a major step forward: 80% cuts all round by some time in the future (2050 I believe), or Hansen’s moratorium on coal by 2030, or the introduction of CCS to keep coal or postpone the moratorium until it is ready. And so it goes on and on. Talk, talk, talk so far; some action, but cuts small and often gobbled up in China and India etc.

    I reckon it’s peak fossil fuels before we go on an eco war.

  357. Jim Eager:

    “Where has the peninsula warming gone?”

    Hmmmm, heat of fusion comes to mind.

  358. Mark:

    re:#356

    Most of the world’s energy is gobbled up by the USA.

    A 30% reduction (which doesn’t even take them down to UK levels of use) frees up around 15% of the energy demand.

    What are the chinese going to do with all that extra energy? Run a Stargate?

    Don’t let someone else doing bad stop you from doing good. Or you’re worse than they are.

  359. pete best:

    #358, I doubt it works like that, and since when is the USA going to consume less energy? They are just going to try getting it from a different source at best, or lobby for the CCS revolution that will come too little, too late.

    President Obama stated that our way of life is non-negotiable again, even though he wants an alleged energy revolution. So what gives, apart from the rhetoric of his speeches, I guess?

  360. Barton Paul Levenson:

    Mark writes:

    Most of the world’s energy is gobbled up by the USA.

    This isn’t even close to being true. The USA’s share of world energy use is about 20%.

  361. Sekerob:

    BPL, just let him and a few more go. Not worth your time.

    China a few months ago was reported to have just passed the USA on CO2 contribution, now each sitting on about 25%. How efficient the path from fossil fuel to CO2 is, I don’t know, but my feeling is that the USA might be a bit better, so a 20-25% range on global energy consumption I’d guesstimate. The French are just smiling ear to ear, having just signed to build 4 new nuclear plants in Italy.

    Now a question for the American brethren. Why are you running your office aircos at 20°C and lower, when 22-24°C is much better, and healthier? Just imagine if the USA would just switch those energy gobblers 1 or 2°C up.

    Back on the main topic: NSIDC daily charts are back up, currently showing about 1.2 million square km less summer sea ice extent than the same time last year… global cooling we wish. http://nsidc.org/data/seaice_index/images/daily_images/S_timeseries_thumb.png

  362. Hank Roberts:

    This may help:
    http://www.eia.doe.gov/emeu/international/energyconsumption.html

  363. Hank Roberts:

    On topic:
    Article:
    http://www.newscientist.com/article/mg20126971.700-how-to-survive-the-coming-century.html

    Interactive map:
    http://www.newscientist.com/embedded/mg20126971700-surviving-in-a-warmer-world

    Click on Antarctica.
    See this text pop up, in Google Earth:
    ___________________________________
    Western Antarctica

    This area will be simply unrecognisable. Instead of the vast ice sheets, it will be densely inhabited with high-rise cities.

    Get directions: To here – From here
    _______________________________________________________

  364. ApolytonGP:

    353, Gavin reply:

    But Gavin, I don’t think you are addressing his point that the low number of PCs is inappropriate given the spikiness of the function. And that it transfers peninsula warming into the interior. Now, that guy may be wrong…but you are not addressing the issue in debate. Merely making a textualist comment about the paper.

    [Response: If people want to make specific points, they should make them. I'll often try to intuit what someone is trying to get to, but I'm not psychic. In this case, I didn't find people expressing shock that they have worked out the number of eigenmodes used in an analysis, when it was clearly stated in the paper, particularly interesting. With respect to your point, the number of modes is always a balance between including more to include smaller scale features, and not including modes that are not climatically relevant or that might be contaminated by artifacts. There are a number of ad hoc rules to determine this - 'Rule N' for instance, but I don't know exactly what was used here. It doesn't appear to make much difference, but it's a fair point to explore. - gavin]
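    For readers unfamiliar with it, ‘Rule N’ (Preisendorfer) retains only those eigenmodes whose eigenvalues exceed what pure noise of the same dimensions would typically produce. A rough sketch of the idea on synthetic data follows (all sizes, amplitudes, the seed and the 95th-percentile cutoff are illustrative assumptions, not the settings of any published analysis):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "station" data: 300 time steps x 40 stations, with two
    # genuine large-scale modes buried in unit-variance noise.
    t = np.linspace(0, 1, 300)
    signal = (3 * np.outer(np.sin(2 * np.pi * t), rng.normal(size=40))
              + 2 * np.outer(np.cos(6 * np.pi * t), rng.normal(size=40)))
    data = signal + rng.normal(size=(300, 40))

    def eigvals(x):
        """Eigenvalues of the covariance structure, largest first."""
        x = x - x.mean(axis=0)
        return np.sort(np.linalg.svd(x, compute_uv=False) ** 2)[::-1]

    obs = eigvals(data)

    # Rule N: build a null distribution from many pure-noise datasets of
    # the same shape; retain modes whose eigenvalue beats the 95th
    # percentile of the corresponding rank in the noise ensemble.
    null = np.array([eigvals(rng.normal(size=data.shape)) for _ in range(200)])
    n_keep = int(np.sum(obs > np.percentile(null, 95, axis=0)))
    print("modes retained:", n_keep)
    ```

    The two planted modes stand far above the noise floor and are always retained; how many marginal modes pass the cutoff depends on the noise realization, which is exactly why such rules are called ad hoc.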

  365. Aylamp:

    363 Hank Roberts

    From New “Scientist”.

    “According to models, we could cook the planet by 4 °C by 2100.”

    According to who – Kate Moss?

    http://www.iht.com/articles/2007/02/08/news/rny09R.php

  366. Philippe Chantreau:

    OK BPL, but the US represents what fraction of the world’s population? So the per capita energy consumption is quite high, perhaps not as high as Canada but still much higher than most, including China. Of course China is striving to change that, scary prospect.

  367. Hank Roberts:

    Aylamp inquires about the New Scientist article:

    >> According to who — Kate Moss?

    For the statement Aylamp questions,
    New Scientist cites their source.

    Fifth paragraph.

  368. Hank Roberts:

    Some sources on per capita energy consumption:

    The page I linked recently above may help:
    http://www.eia.doe.gov/emeu/international/energyconsumption.html

    As of 2005, some numbers here:
    http://earthtrends.wri.org/text/energy-resources/variable-351.html
    (cited to: http://data.iea.org/ieastore/default.asp )

  369. Ryan O:

    Gavin says: “If people want to make specific points, they should make them.”
    .
    With all due respect, he did make a specific point: The method used by the authors smeared the peninsula warming over the interior. Or, to put it slightly differently, the low number of PCs used was insufficient to properly capture the geographical distribution of the temperature trends.
    .
    Stating that the authors “realized that the disadvantage of not including higher order terms” led to an inaccurate depiction of peninsula warming in no way addresses the statement that the failure to include higher-order terms also had the effect of transferring peninsula warming to the interior. The followup that “there are ad hoc rules” to determine this is similarly irrelevant. It doesn’t matter what the ad hoc rules are . . . if the application of one or more of those rules resulted in an inaccurate geographic distribution of temperature trends, then the rule was either wrong, inappropriately used, or both.
    .
    And if, as you imply, the higher-order terms are contaminated by artifacts (and thus cannot be used) – while simultaneously the lower-order terms are shown to be insufficient to accurately depict the geographical distribution of trends – then the obvious conclusion is that the available information is insufficient to support the main conclusion of the paper: heretofore unreported significant warming in West Antarctica.

    [Response: We have a situation where we don't have complete information going back in time. The information we do have has issues (data gaps, sampling inhomogeneities, possible un-climatic trends). The goal is to extract enough information from the periods when there is more information about the spatial structure of temperature covariance to make an estimate of the spatial structure of changes in the past. Since we are interested in the robust features of the spatial correlation, you don't want to include too many PCs or eigenmodes (each with ever more localised structures) since you will be including features that are very dependent on individual (and possibly suspect) records. Schneider et al (2004) looked much more closely at how many eigenmodes can be usefully extracted from the data and how much of the variance they explain. Their answer was 3 or possibly 4. That's just how it works out. The fact is that multiple methods (as shown in the Steig et al paper) show that the West Antarctic long term warming is robust and I have seen no analysis that puts that into question. You could clearly add in enough modes to better resolve the peninsular trends, but at the cost of adding spurious noise elsewhere. The aim is to see what can be safely deduced with the data that exists. - gavin]

  370. Ryan O:

    Gavin, thanks for the reply. I do appreciate it. However, I feel that it misses the point. For the sake of this discussion, I will accept the offering that using more than 3 or 4 PCs results in incorporating artifacts into the reconstruction. In that case, the authors are perfectly justified in truncating the rank at 3.
    .
    What the reply doesn’t address, however, is that by using only the first 3 PCs, the geographical distribution of temperature trends is not accurately captured. The Big Deal associated with the paper – and it definitely generated a lot of excitement – was that the warming was not merely restricted to the peninsula. But a 3-PC analysis appears to inaccurately transfer peninsula warming to all of West Antarctica.
    .
    So I am in agreement that the aim is to see what can be safely deduced with the data that exists. Unfortunately, if the higher order modes cannot be used due to contamination or artifacting and the lower order modes do not provide enough geographical resolution, then the data does not support the conclusion.
    .
    By the way, the AWS recon shows no statistically significant warming in West Antarctica post-1969 and most of the stations show a negative trend post-1979 – a very different picture of West Antarctica.

    [Response: You appear to be under some misapprehension here. Neither the PCA nor the RegEM methodologies know anything about physical location. Thus any correlation between stations, or similar weightings for a particular mode, occur because there are real correlations in time. There can be no aphysical 'smearing' simply because of physical closeness in the absence of an actual correlation. Higher order modes are not distinguishable from noise and so shouldn't be used (whatever the cut-off point), but the remaining modes define the spatial scales at which the reconstruction is useful. And that is roughly at the semi-continental scale in this case. Making strong statements about smaller regions would not be sensible, though the pointwise validation scores in held back data given in the supp. mat. indicate that there might not be much of a problem. - gavin]

  371. ApolytonGP:

    I don’t quite get the argument for cutting off higher PCs. Isn’t it true that infinite PCs gives back the original data? And aren’t lots of stats analyses done on the data themselves? You seem to be a bit too strong in thinking that PCA will cut bad stuff and keep good. It has a real danger of doing the reverse. It’s just a math transform.

    [Response: You don't need an infinite number of PCs. There are only as many PCs as there are initial timeseries (assuming they are not degenerate), and using all of them gives you the original data back - at which point there was no point in doing PCA at all. In these kinds of applications, the PCs are used as filters, with higher modes explaining less and less of the joint variance. If you are interested in what the ensemble of the data gives that doesn't depend on one or two elements, it makes sense to focus on the first few modes. Higher modes are much more sensitive to small random variations in ways that can be checked using Monte Carlo simulations and which form the basis of some of the criteria for how many modes you keep. - gavin]
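[Editor's note: Gavin's point, that keeping all PCs simply returns the original data while truncation acts as a filter, can be seen in a few lines of numpy on synthetic stand-in data.]

```python
import numpy as np

# Synthetic stand-in for 120 months of anomalies at 8 "stations".
rng = np.random.default_rng(0)
anom = rng.standard_normal((120, 8))
anom -= anom.mean(axis=0)

# SVD-based PCA: anom = U S Vt. Truncating to k modes is a rank-k filter.
U, S, Vt = np.linalg.svd(anom, full_matrices=False)

def reconstruct(k):
    return U[:, :k] * S[:k] @ Vt[:k, :]

# Keeping all 8 modes returns the original data to machine precision,
# so full-rank PCA adds nothing; the value is in the truncation.
print(np.allclose(reconstruct(8), anom))   # True
# A 3-mode filter keeps only the leading joint variance.
share = (S[:3] ** 2).sum() / (S ** 2).sum()
```

The higher modes discarded by `reconstruct(3)` carry the residual, more localized variance, which is exactly the part most sensitive to individual records.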

  372. Ryan O:

    Gavin says: “You appear to be under some misapprehension here. Neither the PCA nor the RegEM methodologies know anything about physical location. Thus any correlation between stations, or similar weightings for a particular mode, occur because there are real correlations in time.”
    .
    No, that doesn’t have anything to do with it. It doesn’t matter that PCA and RegEM don’t know or care about physical distance. What matters is whether the amount of variation captured by the PC changes with location.
    .
    Let’s say that the first PC captures 50% of the variation in the data set as a whole. That does not allow you to say that it captures 50% of the variation in the peninsula, 50% of the variation along the Adelaide coast, 50% of the variation near the Ross ice shelf, and so forth. It may capture 80% of the variation in the plateau but only 15% of the variation at the Ross ice shelf. This can result in major over/under estimation of the signal depending on location. The “smearing” does not depend on RegEM or PCA “knowing” the station locations; it’s entirely dependent on the geographic distribution of the variation in the real data. Using fewer PCs results in less noise, certainly, but also results in an inability to fully capture variation that changes with physical location. PCs aren’t magic. Every one of them contains some information and some noise. Not using higher-order PCs necessarily results in removing some information. You are exactly right in that it is a trade-off. The question here is whether the trade-off chosen by the authors results in a conclusion that can be supported by the data.
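[Editor's note: the point Ryan O makes here, that a PC's explained variance can vary strongly with location even though PCA knows nothing about geography, is easy to illustrate on made-up data. The station groupings below are hypothetical, not the actual Antarctic network.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
common = rng.standard_normal(n)                     # one shared mode
# "Peninsula" stations load heavily on the mode; "interior" stations barely.
loadings = np.array([3.0, 3.0, 3.0, 0.3, 0.3, 0.3])
anom = np.outer(common, loadings) + rng.standard_normal((n, 6))
anom -= anom.mean(axis=0)

U, S, Vt = np.linalg.svd(anom, full_matrices=False)
rank1 = U[:, :1] * S[:1] @ Vt[:1, :]

# Fraction of each station's own variance captured by the first mode:
# large for the first three stations, small for the last three, even
# though the decomposition never saw a coordinate.
captured = 1 - ((anom - rank1) ** 2).sum(0) / (anom ** 2).sum(0)
```

Here PC1 explains most of the variance at the heavily loaded stations and very little at the others, so a single aggregate "variance explained" figure hides a strong geographic pattern.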
    .
    The way to test this is to compare the results from the PCA to independent records by location. I have done this, and done it in detail. I have compared the sat recons to the AWS recon. I have compared all the recons to the ground data. And I can tell you that for the peninsula, the grid points corresponding to ground stations in the main/PCA recons spend more time outside the 95% confidence intervals for the means (doesn’t matter whether you use 24, 36, 48, 60, 72, 84, 96…192 month means) than within. Many West Antarctica stations spend 30% or more of their time outside the 95% confidence intervals, and most are outside the lower 95% interval in the 1970-1980 timeframe and outside the upper 95% interval in the 2000-2006 timeframe (paired Wilcoxon test). Not only that, but the shapes of the difference-in-means curves have geographical significance – meaning the choice of 3 PCs insufficiently captures the evolution of temperature with location.
    .
    The fact that RegEM or PCA don’t care about physical distance is not relevant.

    [Response: But what's your null hypothesis here? That temperatures at a location must be gaussian? That they have no auto-correlation? Why? There is nothing magic about 3 PCs - they were chosen based on the prior analysis of Schneider et al. Nothing much is going to change with 4 or 5 - and how much variance do they explain in any case? - gavin]

  373. Ryan O:

    Gavin,
    .
    First, I know you’re in the difficult position of answering questions about a paper you did not write – so I hope I’m not coming off as too argumentative. :) I’m not sure you understood what I meant, which is most likely lack of clarity on my part.
    .
    The test I describe is comparative between ground record and recon or recon A and recon B. The null hypothesis is nothing more complex than there is no difference in the sample means. There is no underlying model. I make no assumptions about the distribution of temperatures or degree of spatial autocorrelation. I simply compare the means corresponding to the same physical location over various time periods in order to determine how the means evolve temporally and whether differences in the evolution are statistically significant.
    .
    All of the provided recons, when compared to each other gridpoint to gridpoint (or, for the AWS and ground station comparison, station location to the corresponding gridpoint in the satellite recon), show differences in the means that are statistically unlikely if they were due to chance alone. The shape of the curve of differences is easily associated with geographical location. In other words, the geographic distribution of temperature change differs between the two satellite recons (minor), either satellite recon and the AWS recon (major), and either satellite recon and the manned station records (major).
    .
    As a finer point, one could potentially assume that the satellite recons are correct and it is the station records that are incorrect. However, this requires that the ground station records within specific geographic regions all evolve incorrectly in the same manner. This is entirely implausible.
    .
    The plots of the differences in means are not random. They are so not random that I was able to group the manned stations with 10+ years of data and all of the AWS stations from the AWS recon (a total of 79 stations) simultaneously into 6 geographically distinct regions by curve shape alone with only 1 incorrectly placed station (I had placed D-47 in an adjacent region). The differences are not due to ground instrumentation error. They are solely a function of the lack of geographical resolution in the satellite recons.
    .
    The comment about whether 3 or 4 PCs would have changed anything is irrelevant. The only relevant question to ask is “How many PCs would be required to properly capture the geographic distribution of temperature trends?” with the followup question of, “Does this number of PCs result in inclusion of an undesirable magnitude of mathematical artifacts and noise?”
    .
    If the answer to the followup is “yes”, then the conclusion is that the analysis is not powerful enough to geographically discriminate temperature trends on the level claimed in the paper. It doesn’t mean the authors did anything wrong or inappropriate in the analysis (I am NOT saying or implying anything of the sort). It simply means that the conclusion of heretofore unnoticed West Antarctic warming overreaches the analysis.
    .
    In answer to your question about how much variation they need to explain, that is entirely dependent on the level of detail they want to find. If they wish to have enough detail to properly discriminate between West Antarctic warming and peninsula warming, then they must use an appropriate number of PCs. If they wish to simply confirm that, on average, the continent appears to be warming without being able to discriminate between which parts are warming, then the number of PCs required is correspondingly less.
    .
    [edit]

    [Response: Sorry, but any statement that contains a 95% confidence interval implies an underlying null hypothesis and a distribution. You don't get a pass on that. - gavin]

  374. Ryan O:

    Gavin, the null hypothesis is that there is no difference in the means. The Wilcoxon test does not require any assumptions on the distribution of differences. Had I used a t-test, you’d have a point; but the Wilcoxon test does not have that restriction.

    [Response: Fair enough, but that just gives a finding that there is "a significant difference in the means" over some time period. Your interpretation assumes that there is some expected distribution of what this should be in any particular reconstruction and that must depend on the temporal auto-correlation of the time-series among other things. Wilcoxon also does not tell you how big the deviations are. The fact is that the number of retained PCs above two doesn't impact the long term trends (for instance, for the AWS reconstruction the trends in the mean are -0.08, 0.15, 0.14, 0.16, 0.13 deg C/dec for k=1,2,3,4,5). - gavin]
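[Editor's note: for readers following the exchange, the paired signed-rank test being discussed can be sketched in plain numpy. In practice one would use `scipy.stats.wilcoxon`; this is a simplified large-sample version run on synthetic series, not the actual station or reconstruction data.]

```python
import numpy as np
from math import erfc, sqrt

def signed_rank_p(d):
    """Two-sided Wilcoxon signed-rank p-value via the normal approximation
    (fine for n >~ 25; zero differences dropped, ties not rank-averaged)."""
    d = d[d != 0]
    n = d.size
    ranks = np.argsort(np.argsort(np.abs(d))) + 1   # ranks of |d|
    w_plus = ranks[d > 0].sum()
    mean = n * (n + 1) / 4
    sd = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return erfc(abs(w_plus - mean) / sd / sqrt(2))

rng = np.random.default_rng(0)
station = rng.standard_normal(120)                        # "ground record"
recon_unbiased = station + 0.1 * rng.standard_normal(120)
recon_shifted = station + 0.3 + 0.1 * rng.standard_normal(120)

# Null: zero median paired difference. Distribution-free, as noted above,
# but as Gavin says it flags *that* the series differ, not why or by how
# many degrees.
p_shifted = signed_rank_p(recon_shifted - station)
p_unbiased = signed_rank_p(recon_unbiased - station)
```

The shifted reconstruction is rejected decisively while the unbiased one is not, which mirrors the limitation Gavin raises: the test detects a difference in location without quantifying it.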

  375. Ryan O:

    Gavin,
    .
    “Fair enough, but that just gives a finding that there is “a significant difference in the means” over some time period.”
    .
    Agreed.
    .
    “Your interpretation assumes that there is some expected distribution of what this should be in any particular reconstruction and that must depend on the temporal auto-correlation of the time-series among other things.”
    .
    Not quite. The results of the test itself can only tell me whether I can accept or reject the null hypothesis (no difference in means) at the chosen confidence level (in this case, 95%) for a particular set of paired data. The test itself gives me no additional information. At this point, the only thing I know is that they are different. By the way, the Wilcoxon test most certainly can be used to give you estimates of the difference in sample means and can be used to yield exact confidence intervals and p-values.
    .
    For the next part, that I assume an expected distribution, this is true only in the sense that the null hypothesis is that there is no statistically significant difference in the means. I do not need to make any additional assumptions.
    .
    At this point, you are entirely correct that we have exhausted what the Wilcoxon test can tell us. It cannot tell us that there are geographical differences and it cannot tell us which data set is “right”.
    .
    You can, however, run an additional test. In this case, the null hypothesis I chose is that there is no correlation between curve shape and geographical location. I make no prior assumptions about the shape of the curve. I then attempt to group the curves based on shape. Were the curves entirely arbitrary, this would be a fruitless task, but they are not. The groupings are easy to see. Following the grouping, I plot the locations of the groups and find that they are strongly correlated geographically.
    .
    If you wish to challenge the rigor of this, your concerns would be legitimate based only on the information I’ve provided so far. I have not yet challenged my grouping statistically (though I will). As a warm-and-fuzzy test, however, I would have no problem providing all the curves to you and letting you try to group them yourself. I am confident that you will arrive at a very similar grouping.
    .
    “The fact is that the number of retained PCs above two doesn’t impact the long term trends (for instance, for the AWS reconstruction the trends in the mean are -0.08, 0.15, 0.14, 0.16, 0.13 deg C/dec for k=1,2,3,4,5).”
    .
    This makes no statement on where those trends occur – which is how this latest discussion started. The aggregate measurement is similar. The geographic distribution of those trends may not be. I haven’t run it myself, but I’d be willing to bet that the geographic distribution changes significantly as higher-order PCs are included.
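[Editor's note: the check Ryan O proposes here, whether the geographic pattern of trends shifts as modes are retained even while the spatial-mean trend stays put, is straightforward to run on synthetic data. The six-station layout and trend values below are invented for illustration and say nothing about the actual reconstruction.]

```python
import numpy as np

rng = np.random.default_rng(0)
months = 600
t = np.arange(months) / 120.0                       # time in decades
# Warming confined to three of six "stations", plus white noise everywhere.
load = np.array([0.5, 0.5, 0.5, 0.0, 0.0, 0.0])     # deg C per decade
anom = np.outer(t, load) + 0.5 * rng.standard_normal((months, 6))
anom -= anom.mean(axis=0)

U, S, Vt = np.linalg.svd(anom, full_matrices=False)

def station_trends(k):
    """Per-station linear trends (deg C/decade) of the rank-k reconstruction."""
    recon = U[:, :k] * S[:k] @ Vt[:k, :]
    return np.polyfit(t, recon, 1)[0]

# Spatial-mean trend for each k (the aggregate number Gavin quotes)
# versus the per-station pattern (the quantity in dispute).
mean_trend = {k: station_trends(k).mean() for k in (1, 2, 3, 6)}
pattern_full = station_trends(6)
```

Comparing `station_trends(k)` across k, rather than only the mean, is exactly the kind of diagnostic that would settle whether the distribution of the trend, and not just its aggregate, is stable under truncation.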