Friday roundup

A few items of interest this week:

Katrina Report Card:

The National Wildlife Federation (NWF, not to be confused with the ‘National Wrestling Federation’, which has no stated position on the matter) has issued a report card evaluating the U.S. government response in the wake of the Katrina disaster. We’re neither agreeing nor disagreeing with their position, but it should be grist for an interesting discussion.

An Insensitive Climate?:

A paper by Stephen Schwartz of Brookhaven National Laboratory, accepted for publication in the AGU Journal of Geophysical Research, is already getting quite a bit of attention in the blogosphere. It argues for a CO2-doubling climate sensitivity of about 1 degree C, markedly lower than just about any other published estimate and well below the low end of the range cited by recent scientific assessments (e.g. the IPCC AR4 report). Why are Schwartz’s calculations wrong? The early scientific reviews suggest a few reasons: firstly, that modelling the climate as an AR(1) process with a single timescale is an over-simplification; secondly, that applying a similar analysis to a GCM with a known sensitivity would likely give an incorrect answer; and finally, that his estimates of the error bars on his calculation are very optimistic. We’ll likely have a more thorough analysis of this soon…
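
For readers who want a feel for the mechanics, here is a minimal sketch (not Schwartz’s actual code) of the chain of reasoning at issue: treat the global temperature anomaly as an AR(1) process with a single relaxation time, estimate that time from the lag-1 autocorrelation, and convert it to a sensitivity via lambda = tau/C. Every number below (the synthetic series, the assumed heat capacity) is an illustrative assumption; the reviewers’ point is precisely that a single timescale is too simple a description of the real system.

```python
# Minimal sketch (not Schwartz's code): infer a relaxation time tau from the
# lag-1 autocorrelation of an annual "temperature anomaly" series, then
# convert it to an equilibrium sensitivity via lambda = tau / C.
# All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_years, true_tau = 125, 5.0              # years
phi = np.exp(-1.0 / true_tau)             # AR(1) coefficient for annual steps

anom = np.zeros(n_years)                  # synthetic single-timescale series
for t in range(1, n_years):
    anom[t] = phi * anom[t - 1] + rng.normal(0.0, 0.1)

r1 = np.corrcoef(anom[:-1], anom[1:])[0, 1]   # lag-1 autocorrelation
tau_est = -1.0 / np.log(r1)                   # since r1 = exp(-1/tau)

C = 17.0        # assumed effective heat capacity, W yr m^-2 K^-1 (illustrative)
F_2xCO2 = 3.7   # canonical forcing for doubled CO2, W m^-2
print(f"tau ~ {tau_est:.1f} yr -> 2xCO2 sensitivity ~ {F_2xCO2 * tau_est / C:.1f} K")
```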

It’s the Sun (not) (again!):

The solar cyclists are back on the track. And, to nobody’s surprise, Fox News is doing the announcing. The Schwartz paper gets an honorable mention even though a low climate sensitivity makes it even harder to understand how solar cycle forcing can be significant. Combining the two critiques is therefore a little incoherent. No matter!

  1. VirgilM:

    1) What is Stephen Schwartz’s response to the un-peer reviewed “early scientific reviews”? 2) If a journal allowed a bad anti-AGW paper to be published, then how can I be assured that these same journals didn’t allow bad pro-AGW papers to be published? Do we need a stronger peer-review process?

    [Response: There is; it's called an assessment process (e.g. IPCC, NRC, etc). See our previous article "Peer Review: A Necessary But Not Sufficient Condition". -mike]

    3) So in the last 100 years, the Sun had zero impact on the change of globally averaged temperatures? How much of the warming in the last 100 years can be attributed to the Sun? The IPCC doesn’t even try to answer the last question, but I get asked that question by the public that I interact with.

    [Response: Hmmm? Not sure whom you are referring to as claiming that the sun has had zero impact on global mean temperature. I'm also not sure which IPCC you are referring to. The IPCC Third and Fourth Assessment reports both contained detailed summaries of Detection/Attribution studies that have indeed attempted to estimate the relative roles of natural (solar and volcanic) and anthropogenic forcing in observed 20th century temperature changes. I'd suggest reading Chapter 9 ("Understanding and Attributing Climate Change") of the Fourth Assessment Working Group 1 report; the full report is available online here. -mike]

  2. bjc:

    The NWF report is informative in what regard? It is pot banging pure and simple.

  3. John Cook:

    The sun certainly doesn’t have zero impact on temperature. Tung’s study (the one mentioned in that Fox News article) estimates the impact on global temperature from the solar cycle (e.g. from solar minimum to maximum over ~5.5 years) at about 0.18 degrees. But note that this is superimposed on top of global warming – Tung had to detrend the temperature data to filter out the CO2 warming in order to find the solar signal.

    Tung’s follow-up paper (http://www.amath.washington.edu/research/articles/Tung/journals/solar-jgr.pdf) has some interesting conclusions. Independently of models, he calculates a climate sensitivity of 2.3 to 4.1K, confirming the IPCC estimate. And as we’re currently at solar minimum, he estimates that solar warming over the next 5 or 6 years is going to add double the amount of warming as we head towards a solar maximum around 2013.

    It’s ironic that the studies that skeptics quote as proving the sun is causing global warming (e.g. Solanki 2003 gets mentioned a lot) actually conclude the opposite.

  4. Alastair McDonald:

    The group write, “Why are Schwartz’s calculations wrong?”

    But Schwartz is being published in a peer reviewed journal. How can he possibly be wrong :-? Are you saying that peer review is not enough :-? If that is true then all the science published since the late 20th century must be suspect.

    [Response: At the risk of being repetitious, please see our previous article "Peer Review: A Necessary But Not Sufficient Condition". -mike]

    Perhaps the climate models, which only date to the 1960s, are wrong too! Surely, if Schwartz’s calculations are based on the models and he is wrong, then the models must be wrong too.

    OTOH, Schwartz is only considering the oceans. 30% of the surface of the Earth is covered with land. ‘Not a lot!’ as Paul Daniels used to say. But it is where we live, so for us it is important. The models say that the land areas will warm twice as much as the oceans, so his 1C rise will mean a 2C rise for us.

    But if we look at his calculations, which include recent ocean cooling, one wonders if there is not a flaw somewhere. The Arctic ice has been melting much faster than the models predicted, and perhaps that cooling was just an aberration produced by icy water flowing from the Arctic Ocean. When Schwartz calculated the average ocean warming, he only included the increase in the sensible heat of the oceans, but he should also have included the increase in latent heat from the loss of sea ice.

    That would have needed a factor of 80 for every cubic metre of perennial Arctic ice melted, along with the same factor for every cubic meter of ice calved and melted from the base of the Antarctic ice shelves.
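
    For what it’s worth, the factor of 80 invoked here is presumably the familiar ratio of the latent heat of fusion of ice to the specific heat of liquid water, i.e. melting a kilogram of ice absorbs roughly as much energy as warming a kilogram of liquid water by 80 °C:

    ```latex
    \frac{L_f}{c_w} \approx \frac{334\ \mathrm{kJ\,kg^{-1}}}{4.18\ \mathrm{kJ\,kg^{-1}\,K^{-1}}} \approx 80\ \mathrm{K}
    ```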

    The point to note is that this error would explain the slow rise in ocean temperature despite the current CO2 forcing. And when the sea ice disappears then that factor will have to be added to the ocean warming instead of being subtracted from it as happens now. If, using round figures, we assume we have had a warming of 1.0C over the last century, and that another 0.5C has been diverted to melting ice, then the true warming would have been 1.5C. When the ice disappears then the warming will be enhanced by the 0.5C that is no longer going into melting the ice, and temperatures will rise by 2.0C per century.

    The bottom line is that Schwartz’s calculations are correct.

    [Response: Hardly. We'll have more on this soon, we promise. -mike]

    It is the underlying science which underestimates the melting of the sea ice which is wrong.

  5. Aaron Lewis:

    Why is the Schwartz paper getting all this attention?

    In Equation 1, he assumes that Q ≈ E, then he does a bunch of hocus-pocus and comes out with an equilibrium climate sensitivity and time constant of values in a range to make Q ≈ E. Along the way, he forgets that by only using instrumental records, he has assumed a very short time constant. We do not have instrumental records from the days when Earth was a snowball or all tropical. As Earth went into its snowball stages, as it came out of its snowball stages, as Earth went into its tropical stages, and as Earth came out of its tropical stages, Q did NOT come close to equalling E!

    If Q does not approximate E, then the value for dH/dt in Equation 2 is much larger, and dT/dt in Equation 3 is larger. Et cetera!

    Unless he offers compelling evidence for the stability of ocean heat content and the stability of the heat content of polar ice on geologic time frames, Schwartz cannot expect the reader to accept Equation 1.

    I would say that a careful reading of this paper in the context of careful remote sensing strongly argues for a higher equilibrium climate sensitivity and longer time constant than Schwartz proposes. Try redoing the math, only instead of assuming Q ≈ E, assume that Q is related to E through various functions of atmospheric physics. Oh, but wait! Then, you would have the global climate models that offer a higher equilibrium climate sensitivity and longer time constants.
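
    For readers without the paper to hand, the single-compartment energy balance being referred to is, as best it can be reconstructed from this discussion, of the form

    ```latex
    \frac{dH}{dt} = Q - E, \qquad H \approx C\,T_s, \qquad Q \approx E \ \ \text{(Eq.\ 1, near steady state)},
    ```

    so that the response is characterized by a single relaxation time τ and the equilibrium sensitivity follows as λ ≈ τ/C. The objection above is that during large climate transitions Q and E differ substantially, so dH/dt (and hence dT/dt) is not small and the near-equilibrium assumption breaks down.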

  6. Jerry:

    Schwartz’s model seems awfully simplistic; in fact, I don’t see much difference between his calculations and those presented in Sec. 12.6 of the textbook “Global Physical Climatology” by Dennis Hartmann.

  7. Chris S:

    I love reading this kind of material on this blog. Fox is reporting it much differently than the article states it on the right (from that link), and then, come to find out, those of you who have read his work conclude that in the end his results are in line with what the IPCC predicts. They don’t tell you that. Guaranteed my father will come home talking about how “they discovered what causes global warming again”. And now I can kill that too with my newly acquired knowledge from you all. Thanks.

  8. Richard LaRosa:

    Stephen Schwartz of BNL gave a talk to the Science Club of Long Island at SUNY Stony Brook in 2004. I knew BNL was interested in sequestering CO2 in the ocean so I asked him about ocean acidification due to CO2. He said not to believe everything you see in the Sunday Times. I replied that I was referring to two recent papers in Science. He said don’t believe it until a great consensus of scientists duplicates the findings and agrees. Something fishy about this.

  9. Lawrence Brown:

    This is regarding the Katrina “Report Card”.

    Who speaks for the Corps?

    The Army Corps of Engineers has become an all too convenient whipping post for the failures of the New Orleans levees.

    But the levees that were originally built were as much a product of public policy decisions as of the Corps’ design criteria. The levees were designed to withstand a category 3 hurricane, which may have been a rarer occurrence then than it is today.

    The process is that the Corps presents an initial proposal before Congress containing planning, construction, maintenance and other costs for, say, a 150 or 200 year flood (a flood that has a recurrence interval, on average, of the specified time period). The Congress makes its decision based on its budget and against competing expenses in the budget. Federal funds were requested to bolster the levees several years before Katrina but no money was authorized.

    There was a great deal of complacency at all levels of government. After all, New Orleans had escaped disaster so often in the past, and so there was a mindset among the officials and the public that it would continue to do so. Live and learn. When Rita hit in the Gulf afterward, everyone was better prepared with emergency plans and evacuations. The Corps has many conscientious and dedicated employees, who are willing and able to protect against future foreseeable flooding but will be constrained by what the decision makers are willing to budget toward that cause.

  10. John Mashey:

    re: #9

    The Economist has a succinct current story about New Orleans,
    http://www.economist.com/world/na/displaystory.cfm?story_id=9687404
    Among other things, it says:

    “But the government has done practically nothing to discourage rebuilding in the most flood-prone areas….In the end, most homeowners were able to rebuild precisely where they were before and still qualify for federal flood insurance.”

    While I normally don’t read the Wall Street Journal’s editorial page, it had an article by Lomborg today, one point of which was right: encouraging building on some coasts is crazy.

  11. John Mashey:

    http://www.forbes.com/forbes/, current issue (Sept 3), has a nice section on solar power, but also an unfortunate editorial by Steve Forbes, praising the (negative) analysis of An Inconvenient Truth by Mary Ellen Gilder (George’s daughter), a *medical school student*, pointing at:
    http://www.oism.org/pproject (a site that RC fans may recognize).

    I’ve suggested that “debunking bad papers” might be given a new section, with one thread per paper, to get such out of other threads. This one wouldn’t be worth it … except for being featured in Forbes.

  12. Rod B:

    Just an observation RE 9, et al: Rita was better prepared for because the preparation was done by Texas and Texas cities (though they too had their bad moments). New Orleans and LA, as first responders, get an F- for their response and their bumblings before, during and after Katrina. I find it odd that the entire focus is on the Feds, who in fact deserve at least a D if not a C (or C-) for their efforts. (Though nobody could handle a levee breach.) I think I heard the Mayor of NO is parlaying his total incompetence into a run for Governor… (or maybe Senator, I can’t recall).

  13. Hank Roberts:

    Richard, thanks for the report on Schwartz’s disbelief in the ocean pH work.
    Now that’s scary. It’s not fancy physics, it’s physical chemistry.

  14. B Buckner:

    Re: 9, while you make many valid points, the levees were in fact poorly designed and constructed, and this falls right in the lap of the Corps.

  15. Lawrence Brown:

    Respecting B. Buckner’s comment in 14, from the partial plans that I saw in the media shortly after the breach of the levees, the underground supports didn’t go deeply enough into the loose surrounding terrain and should have penetrated bedrock to prevent overturning. I know nothing about the inspection at the construction sites, but some few inspectors are notoriously lax in accepting shoddy construction methods, too high a proportion of water in the concrete for example. But there’s plenty of culpability to go around. Engineers are hardly ever given carte blanche on what they can build. Pressure will come from those responsible for the budget to build at a quicker pace and at less cost.
    In addition, the government didn’t have its priorities straight on spending. The Boston Big Dig was built at a cost of $15 billion. The estimated cost to upgrade the New Orleans levees to protect against a cat. 5 hurricane was $2.5 billion. It would have been well worth it. It had been anticipated in the years before Katrina but no money was allotted.

  16. Hank Roberts:

    > levees, the underground supports should have penetrated bedrock …

    Impractical: “Southern Louisiana has been built up out of sediments transported from the interior of the continent down the Mississippi River. Tens of thousands of feet of these soft sediments overlie crystalline bedrock….”

    http://www.nae.edu/nae/bridgecom.nsf/weblinks/CGOZ-6ZQPVT?OpenDocument

    Lessons from Hurricane Katrina
    John T. Christian
    Volume 37, Number 1 – Spring 2007

    Geotechnical conditions and design flaws both contributed to the failure of the levees in New Orleans.

    This is a good summary referencing some 6,000 pages of technical reports on the Louisiana failure.

    Short answer: the builders used wishful thinking, optimism, and hope, and called it engineering.

  17. Joe Romm:

    I also have some posts debunking Schwartz. The basic question is whether scientists (i.e. the IPCC consensus) are overestimating or underestimating climate change. Schwartz’s results would imply the former — but all observations and much recent research strongly suggest the latter. See

    http://climateprogress.org/2007/08/21/are-scientists-overestimating-or-underestimating-climate-change-part-i/
    http://climateprogress.org/2007/08/22/are-scientists-overestimating-or-underestimating-climate-change-part-ii/
    http://climateprogress.org/2007/08/23/are-scientists-overestimating-or-underestimating-climate-change-part-iii/

  18. Rod B:

    “Re:9 while you make many valid points, the levees were in fact poorly designed and constructed, and this falls right in the lap of the Corps.”

    And NOL and LA had absolutely nothing to say about it?

  19. B Buckner:

    RE: 15 Bedrock is about 2,000 feet below the ground surface, so that was not the issue. The Corps made many rookie, and frankly unprofessional, mistakes. They ignored ample subsurface information that indicated the existence of permeable layers beneath the dikes that should have been cut off to prevent a boiling or quick condition that undermined the foundation when water rose on one side of the wall. They also ignored the presence of soft compressible soils beneath the dikes. The dikes subsequently settled several feet so that they were no longer set at the design flood level, and were then overtopped, something they were not designed to withstand. Overall, their performance was an embarrassment.

  20. dhogaza:

    And NOL and LA had absolutely nothing to say about it?

    Not directly, only indirectly through the political process, and then only in regard to the big picture, i.e. “how big a hurricane to guard against”, not, say, “what kind of concrete to use”.

  21. SecularAnimist:

    Regardless of whether you attribute hurricane Katrina to anthropogenic global warming, it is likely representative of the sort of catastrophic “extreme weather events” that will become more and more frequent as a result of global warming. (Recall that Houston narrowly escaped an even worse disaster from hurricane Rita.) The moral of the response to the Katrina disaster is that even the wealthiest, most powerful and most technologically advanced nation on Earth is entirely unprepared and unable to respond effectively to such events.

  22. David B. Benson:

    Do better to hire the Dutch engineers…

  23. Lawrence Brown:

    Re #16 “Short answer: the builders used wishful thinking, optimism, and hope, and called it engineering.”

    C’mon Hank, you don’t know that. It’s this kind of inflammatory rhetoric that causes confusion and contention among the public. Since I found Realclimate about 4 months ago, you’ve been among the more cogent and analytical posters here. It seems out of character for you to indulge in this kind of subjective verbiage. See B. Buckner’s more reasoned response in comment #19. Most career civil employees and officers are professional in their work. They don’t make as much as they might outside but there are gratifications in serving the public.

    John Mashey is onto something in comment #10. Why keep doing the same thing and risk getting the same results? Eventually a storm greater than Katrina will come ashore, especially since greater-magnitude storms have been occurring more frequently lately. Maybe New Orleans ought to do what Galveston did after a storm surge in 1900 caused 6,000 deaths.

    They jacked up the buildings and deposited dredged sand beneath the raised structures and built a large seawall for further protection. They could attempt the same in New Orleans to raise some of the lower sections of the city above sea level.

  24. bigcitylib:

    # 8 Schwartz has a website here

    http://www.ecd.bnl.gov/steve/pubs.html#preprints

    And here is the conclusion of his 2007 paper:

    Quantifying climate change — Too rosy a picture?

    “The century-long lifetime of atmospheric CO2 and the anticipated future decline in atmospheric aerosols mean that greenhouse gases will inevitably emerge as the dominant forcing of climate change, and in the absence of a draconian reduction in emissions, this forcing will be large. Such dominance can be seen, for example, in estimates from the third IPCC report of projected total forcing in 2100 for various emissions scenarios as shown at the bottom of Fig. 1. Depending on which future emissions scenario prevails, the projected forcing is 4 to 9 W m-2. This is comparable to forcings estimated for major climatic shifts, such as that for the end of the last ice age.”

  25. Phil. Felton:

    “Do better to hire the Dutch engineers…

    Comment by David B. Benson — 25 August 2007 @ 14:39″

    Or the guys who designed and built this:
    http://en.wikipedia.org/wiki/Thames_Barrier

  26. Nick Stokes:

    I thought the answer to the Schwartz puzzle is simply mishandled signal analysis. Starting at the bottom of p 13, he describes the filtering he applied. Without detrending, he gets a relaxation time constant of 15-17 years, which would give a sensitivity much in line with other calculations. But then he applies detrending, which he concedes is a high pass filter. By attenuating long-term effects in this way, he gets a much shorter time constant of 5-7 years, which then leads to the lower sensitivity.

    Detrending is quite inappropriate here. His argument for using it seems to be that without it, he gets inconsistent data between the first and second half-periods of data. All that means is that his method cannot resolve the longer term effects properly, and so he removes them. The root problem is that he is trying to find a transfer function for a system by dividing the output spectrum by the input, but the input is poorly known, and has a small range of frequencies.
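
    A quick numerical illustration of this point (a toy example, not a re-analysis of the paper): linear detrending acts as a crude high-pass filter, and applying it to a series that contains a slow forced component shortens the apparent AR(1) time constant. All the numbers below are made up for illustration.

    ```python
    # Toy example: AR(1) "internal variability" with tau = 15 yr plus a slow,
    # roughly linear forced component.  Estimating tau from the lag-1
    # autocorrelation before and after linear detrending shows how the
    # high-pass filtering shortens the apparent time constant.
    import numpy as np

    rng = np.random.default_rng(1)
    n, tau_true = 125, 15.0
    phi = np.exp(-1.0 / tau_true)

    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, 0.05)
    x += 0.007 * np.arange(n)            # slow forced signal (~0.7 C / century)

    def tau_lag1(series):
        r1 = np.corrcoef(series[:-1], series[1:])[0, 1]
        return -1.0 / np.log(r1)

    years = np.arange(n)
    detrended = x - np.polyval(np.polyfit(years, x, 1), years)
    print("apparent tau, raw series      :", round(tau_lag1(x), 1), "yr")
    print("apparent tau, after detrending:", round(tau_lag1(detrended), 1), "yr")
    ```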

  27. Hank Roberts:

    Lawrence, when I say “builders” I mean the people who hire the engineers, and the politicians who pay for the work and tell the public that it’s good and their money well spent. From that study — based on the first 6,000 pages of reports — it seems like the engineering information was sufficient to say the builders were wrong, and some engineers did warn about the situation. The pressure’s always going to be strong on engineers, as on climatologists, to provide data but not warnings, I imagine.

    These huge retrospective studies of failures, I guess, are how they figure out what their professional responsibility to warn is, in cases where the builders and politicians don’t want the public to know a failure is likely.

    Compare the way earthquake building standards get changed after each major earthquake — there’s a cycle of engineering reports, proposals to upgrade the way buildings are built, a long period of political negotiation, and maybe change or maybe not. There’s a whole academic field studying how this kind of change happens.

    Remember this one? http://www.fema.gov/news/newsrelease.fema?id=6202
    “… the Northridge earthquake …. discoveries alarmed the structural engineering community, building codes officials and emergency managers, especially in earthquake prone areas. The findings called into question all building codes developed over the previous 20 years that addressed this type of construction…..”

    That’s what engineers can do — “call into question” — but can they do any more?

    Climatologists have a tougher situation — they don’t get a “next time” toward which to make recommendations.

  28. Vernon:

    Well I have been ad hom’ed on a bunch of other pro CO2 AGW sites but no one has been able to point out where my facts or conclusions are wrong. Any one here want to take a shot at it?

    [Response: Vernon, the reaction you get is probably directly proportional to your reluctance to listen. But here are the answers again. - gavin]

    Here are the facts and conclusions:

    Hansen (2001) states quite plainly that he depends on the accuracy of the station data for the accuracy of his UHI off-set

    [Response: Of course. - gavin]

    WMO/NOAA/NWS have siting standards

    Surfacestations.org’s census is showing (based on where they are at now in the census) that a significant number of stations fail to meet WMO/NOAA/NWS standards

    [Response: They have not shown that those violations are i) giving measurable differences to temperatures, or ii) they are imparting a bias (and not just random errors) into the overall dataset which is already hugely oversampling the regional anomalies. - gavin]

    There is no way to determine the accuracy of the station data for stations that do not meet standards.

    [Response: There is also no way to determine the accuracy of the stations that do either. Except for comparing them to other nearby stations and looking for coherence for the past, and actually doing some measurements of temperature now. - gavin]

    Hansen uses lights=0 in his 2001 study

    Due to failure of stations to meet siting standards, lights=0 does not always put the station in a rural environment

    [Response: False. You are confusing a correction for urbanisation with micro-site effect. UHI is a real problem, and without that correction the global trends would be biased high. The Hansen urban-only US trend is about 0.3 deg C/century warmer than the rural trend (which is what is used). Therefore the lights=0 technique certainly does reduce urban biases. - gavin]

    At this time there is no way to determine the accuracy of Hansen’s UHI off-set

    [Response: The effect diminishes with the size of town, it is actually larger than corrections based on population rises, and it gives results that are regionally coherent and you have yet to show that any objective subsampling of the rural stations makes any difference. - gavin]

    Any GCM that uses this off-set has no way to determine the accuracy of the product being produced.

    [Response: GCMs don't use the surface station data. How many times does that need to be pointed out? - gavin]

    Tell which facts I got wrong!

    Oh and if you did not catch this, it means that GISS GCM is pretty worthless till they figure this out.

    [Response: GCM physics is independent of the trends in the surface data - no changes to that data will change a single line of GCM code or calculation. If you want to have a continued discussion then address the responses. Simply repeats of the same statements over and again is tiresome and pointless. - gavin]

  29. ray ladbury:

    Re #12. Rod. B. Oh, now come on. The federal response to Katrina would have been comic had the consequences not been so tragic. Chertoff didn’t even know there were refugees at the Superdome until NPR pointed it out to him.
    Rod, I travel a lot overseas and work with lots of foreign nationals on international satellite collaborations. This one massive failure did more to destroy faith in the US federal government than anything else–including Iraq. This was the federal government saying: “You’re on your own.”

  30. Lawrence Brown:

    “Climatologists have a tougher situation — they don’t get a “next time” toward which to make recommendations.”

    After the Tacoma Narrows Bridge (“Galloping Gertie”) collapsed on November 7, 1940, engineers began to pay much more attention to wind stress. Often we learn from our mistakes and miscalculations, though at great expense.
    Future climatologists will learn from what today’s climatologists are doing. Hopefully, they’ll be able to fine-tune or more fully understand such things as the details of parameterizing partial cloud cover inside a model’s cell.
    First somebody does something, then somebody does it better. The Wright Brothers’ flight was measured in minutes, and two decades later Lindbergh made a non-stop flight across the Atlantic.

    Whether you’re a scientist, climatologist or engineer, it smooths the process if you have the support of those who control the purse strings. Here’s what Jim VandeHei and Peter Baker of the Washington Post had to say on Sept. 2, 2005:

    “In recent years, Bush repeatedly sought to slice the Army Corps of Engineers’ funding requests to improve the levees holding back Lake Pontchartrain, which Katrina smashed through, flooding New Orleans. In 2005, Bush asked for $3.9 million, a small fraction of the request the corps made in internal administration deliberations. Under pressure from Congress, Bush ultimately agreed to spend $5.7 million. Since coming to office, Bush has essentially frozen spending on the Corps of Engineers, which is responsible for protecting the coastlines, waterways and other areas susceptible to natural disaster, at around $4.7 billion.
    “As recently as July, the White House lobbied unsuccessfully against a plan to spend $1 billion over four years to rebuild coastlines and wetlands, which serve as buffers against hurricanes. More than half of that money goes to Louisiana.”

    Bottlenecks are always found at the top of the bottle. When this nation has short-sighted leaders, we’ll all inevitably pay in the long run.

  31. Vernon:

    Gavin, thank you for your input and I have addressed your responses in-line. I bolded the original post.

    Here are the facts and conclusions:

    Hansen (2001) states quite plainly that he depends on the accuracy of the station data for the accuracy of his UHI off-set

    [Response: Of course. - gavin]

    Vern’s Response:

    Yeah, Gavin agreed with me.

    WMO/NOAA/NWS have siting standards

    Surfacestations.org’s census is showing (based on where they are at now in the census) that a significant number of stations fail to meet WMO/NOAA/NWS standards

    [Response: They have not shown that those violations are i) giving measurable differences to temperatures, or ii) they are imparting a bias (and not just random errors) into the overall dataset which is already hugely oversampling the regional anomalies. - gavin]

    Vern’s Response:

    i) They do show that the stations are not in accordance with NOAA/NWS guidelines. No one knows what this is doing to the station accuracy.

    ii) This is a red herring; it does not matter what they are doing, what matters is that no one knows what this is doing to accuracy.

    iii) Oversampling does not matter; Hansen (2001) is not about trends, it is about adjustments to individual stations for UHI, TOD, and station movement.

    [Response: False. The Hansen study is precisely about calculating regional and global trends. It specifically states that for local studies, looking at the raw data would be better. If you don't know why a study is being done, you are unlikely to be able to work out why some details matter and some do not. - gavin]

    There is no way to determine the accuracy of the station data for stations that do not meet standards

    [Response: There is also no way to determine the accuracy of the stations that do either. Except for comparing them to other nearby stations and looking for coherence for the past, and actually doing some measurements of temperature now. - gavin]

    Vern’s Response:
    Gavin, this is another red herring. Hansen assumed the stations did meet the accuracy requirements; it can be shown this assumption is not supported.

    One reason to be cautious about the inferred urban warming is the possibility that it could be, at least in part, an artifact of inhomogeneities in the station records. Our present analysis is dependent on the validity of the temperature records and station history adjustments at the unlit stations.

    So Hansen says that the data from the surface stations needs to be accurate for his methodology to work.

    [Response: Hansen appropriately acknowledges that if the data are seriously flawed, then the GISS analysis will have included those flaws. But again you are mistaken in what you think has been shown. No-one has demonstrated that there are significant problems with enough stations to change the regional picture. For you "individual non-compliance"="unusable data", but this has not been shown at all for a significant number of single stations, let alone their regional average. - gavin]

    Further, Hansen made additional assumptions (his definition of rural):

    We are implicitly assuming that urban (local human induced) warming at the unlit stations is negligible. We argue that this warming can be, at most, a few hundredths of a degree Celsius over the past 100 years.

    Hansen uses lights=0 in his 2001 study

    Due to failure of stations to meet siting standards, lights=0 does not always put the station in a rural environment

    [Response: False. You are confusing a correction for urbanisation with micro-site effect. UHI is a real problem, and without that correction the global trends would be biased high. The Hansen urban-only US trend is about 0.3 deg C/century warmer than the rural trend (which is what is used). Therefore the lights=0 technique certainly does reduce urban biases. - gavin]

    Vern’s Response:
    Gavin, I am not confusing anything. You have a nice red herring but I did not say that the currently used UHI off-set does not reduce urban biases. I said there is no way to know the accuracy of the UHI off-set. You have not disputed this, and saying you’re doing something that you cannot prove is right is not much better than doing nothing.

    [Response: First off, the UHI correction is a trend, not an offset. Secondly, you are confused. Where is there a 'lights=0' station that is in an urban environment? An urban environment is a city, not a building or a road. Thirdly, you are asking for something that is impossible. How can anyone prove that the correction is correct? Science doesn't work like that. Instead, you make assumptions (that rural trends are more representative of the large scale than urban ones), and you calculate the regional trends accordingly. You might dispute that assumption, but at no stage can anyone 'prove' that it is perfectly correct. - gavin]

    At this time there is no way to determine the accuracy of Hansen’s UHI off-set

    [Response: The effect diminishes with the size of town, it is actually larger than corrections based on population rises, and it gives results that are regionally coherent and you have yet to show that any objective subsampling of the rural stations makes any difference. - gavin]

    Vern’s Response:
    Your response has nothing to do with my statement. You then follow up by talking about the census-based UHI off-set, which Hansen specifically says his methodology is better than.

    Any GCM that uses this off-set has no way to determine the accuracy of the product being produced.

    [Response: GCMs don’t use the surface station data. How many times does that need to be pointed out? - gavin]

    Vern’s Response:
    yet another red herring. I never claimed that you used surface station data. You use the trends, which are, in part, formed by using Hansen (2001) off-sets based on the surface station data. [edit]

    Tell which facts I got wrong!

    Oh and if you did not catch this, it means that GISS GCM is pretty worthless till they figure this out.

    [Response: GCM physics is independent of the trends in the surface data - no changes to that data will change a single line of GCM code or calculation. If you want to have a continued discussion then address the responses. Simply repeats of the same statements over and again is tiresome and pointless. - gavin]

    Vern’s Response:
    yet another red herring. Gavin, you can continue to mischaracterize what I said but it will not change the facts. The facts are that the surface station trends are used by the GISS GCM as an input. There is no way, with the work that Hansen has currently done in Hansen (2001), to know if the trends which use his off-sets are any good. Remember garbage in, garbage out?

    So your basic response consists of conceding that the surfacestations.org census is showing that a significant number of stations, to date, are not in compliance. You offer nothing to show what the impact of being out of compliance is. You offer nothing to show that if Hansen’s assumptions are wrong, his results are still right. You offer red herrings as to why this would affect the GISS GCM.

    [edit]

    [Response: Pay attention here. I said the surface data are not used in the GCM. You insist that they are. Since I think I know what's in the GCM quite well, I am certain that I am not mistaken on this. If you think I am wrong, please point out the lines of code where these data are used. This is a straightforward point of fact; if you cannot concede this, there is absolutely no point continuing this discussion. - gavin]

  32. Zeke Hausfather:

    Re: 11
    That Gilder essay makes the amusing claim that the Oreskes article should be discounted because it was published in Science’s “Essays” section rather than in the peer reviewed literature. Perhaps she is right; from now on, I’ll only cite surveys that have been peer reviewed.
    [/sarcasm]

  33. Aaron Lewis:

    I know that sea level was discussed in this group at some length in July. However, with the new evidence that changes in atmospheric and thus oceanic circulation may have obscured changes in sea level (http://environment.newscientist.com/article/dn12547-flatter-oceans-may-have-caused-1920s-sea-rise.html ) , is there any evidence that the previously apparently static sea levels caused groups to self-censor data on ice sheet melting? That is: Is there ice sheet melting data out there that was not published because it seemed inconsistent with sea level data?

  34. Jim Eager:

    Re 28 Vernon : “Any GCM that uses this off-set has no way to determine the accuracy of the product being produced.”

    Vernon, yet again you prove that you have no idea what a GCM is or how they work.
    This is not an ad hom, merely a statement of fact based on your own expressed misconceptions.

  35. Jerry:

    Re #31
    This exchange reminds me of a student I once had who was convinced from the very
    beginning that anything I told her was a lie. Needless to say, I wasn’t
    able to persuade her of much!

  36. Rod B:

    Ray, and what exactly were all those refugees doing at the Superdome besides wondering where the supplies NOL promised were and where the buses NOL promised to take them to safety were? Not knowing that most of those buses were sitting at the bus lot, untouched and by then mostly submerged. I don’t say the Feds’ response was anything great. But the attitude, as you express it, that the Feds — not the State, the locals, or ourselves — are responsible to kiss us and make us better is a never-winning situation, and not the Feds’ fault.

  37. beefeater:

    Well I feel better already. A non-peer reviewed blog refuting this study! Oh who to put faith in? A website, or a peer-reviewed study published in a respected scientific journal? I just can’t decide.

  38. Eli Rabett:

    Vernon’s mission is to anger Gavin, at which point the climate blogger ethics panel in permanent session over at Climate Audit and Climate Science will convict Real Climate of being a very nasty place indeed. If Gavin tires or allows Vernon to keep propagating ignorance, well, that’s fine with Vernon. The whole purpose of the Vernon exercise is to put Real Climate into Zugzwang.

    So one needs another strategy: The Time Out Box TM

    When you get someone whose mission in life is provocation, send them to the Time Out box. Put a notice on the front page: The following children are sitting in the Real Climate Time Out Box: Vernon. The box would have links to their comments showing why they were sent there. If Vernon decides that there is no future in claiming that surface data is used in GCMs after being told that it was not so, after being asked to provide some evidence, etc. well, he has the keys to the box in his own hand.

  39. steven mosher:

    RE 1.

    Peer review is a method of ensuring that Bayesians don’t get surprised since they share priors with their peers.

    Hostile scrutiny is the evolutionary test of an idea’s fitness.

    [Response: Peer review does not go around simply squashing ideas that aren't agreed on by the reviewer. Instead, it is a review of the logical basis of an idea, not whether the idea is right or wrong, or likely or unlikely. Frankly, it often doesn't even test that properly. It is a minimum test of an idea's fitness, not the end result. - gavin]

  40. Ron Taylor:

    Rod (36) I think the supplies were to come from FEMA. No city can possibly store emergency supplies for a disaster of this scale. Yes, the city was supposed to have buses. But the bus drivers all fled. Should have been anticipated.

    No one expects the feds to kiss us. But planning for disasters of this scale has to be a federal problem or we will be needlessly duplicating expertise and planning all over the country.

  41. dallas tisdale:

    ref: 36 One good thing happened because of Katrina, local and state officials learned that FEMA is there to provide assistance, not usurp authority. Calling for the equivalent of martial law in the wake did not speak well for LA’s and NOL’s planning.

    Disaster planning has improved dramatically throughout the country because of NOL. Of course it is easier to blame a federal administration.

  42. steven mosher:

    RE 28. Vernon writes, and gavin inlines

    “Surfacestations.org’s census is showing (based on where they are at now in the census) that a significant number of stations fail to meet WMO/NOAA/NWS standards

    [Response: They have not shown that those violations are i) giving measurable differences to temperatures, or ii) they are imparting a bias (and not just random errors) into the overall dataset which is already hugely oversampling the regional anomalies. - gavin]”

    I’ve pointed this out before so I do not understand why some cannot RTFM

    1. Leroy’s study that forms the basis of the new standards for the CRN (which gavin endorses) indicates that poor siting, like that shown by Surfacestations.org, can lead to microsite errors on the order of 1-5C. RTFM.
    Watts and company are relying on the published peer reviewed science. Their prior is that bad siting leads to a 1-5C error.

    2. Pielke’s study of Colorado sites showed microsite bias. However, Karl objected that this “micro site issue”, while important, was limited. Surfacestations.org has shown the issue to be more widespread than Karl imagined. Karl’s prior was wrongly assumed to be a rare-event type of distribution. So much for priors.

    3. The first study of the new network (the CRN) found that microsite issues (see the Ashland study) were critical. In this particular case it was siting a weather station by a runway that caused an issue.

    Simply, Surfacestations.org does not have to re-prove what

    1. Karl admits is true
    2. Hansen admits is true
    3. CRN studies prove.

    Namely THIS. The WMO scientists were correct when they established siting guidelines. They expressed a consensus of scientific thought.

    In short. WMO have standards for a reason. It is the consensus of scientists that improper siting leads to improper readings. This finding has not been seriously challenged anywhere in peer reviewed literature. Karl called for better siting, HANSEN called for better siting. The WMO calls for better siting. Gavin has called for better siting by pointing to the CRN.

    Only a siting sceptic could argue that bad siting is OK.

  43. dhogaza:

    In short. WMO have standards for a reason. It is the consensus of scientists that improper siting leads to improper readings. This finding has not been seriously challenged anywhere in peer reviewed literature. Karl called for better siting, HANSEN called for better siting. The WMO calls for better siting. Gavin has called for better siting by pointing to the CRN.

    Which is all meaningless blather. It does not speak to the pertinent question:

    Can we draw meaningful conclusions from the historical data we have?

    Those same scientists calling for a better network in the future also happen to believe that yes, we CAN draw meaningful conclusions from the historical data.

    A bunch of photographs aren’t going to change that conclusion.

  44. Eli Rabett:

    That microsite bias exists is not an issue. That it goes in only one direction is. There are more USHCN sites near trees than air conditioners. Moreover, Peterson has shown that trends at urban sites are essentially the same as trends at rural sites.

  45. ray ladbury:

    Rod B.,
    While I agree that there was failure on all levels, what was new in NO was that the failure reached all the way to the top. The Bush administration’s political appointees utterly failed to appreciate the gravity of the situation–that is clear from the emails. I do not let LA or NO off the hook, but the perception of our allies (and our enemies) is that the government of the most powerful nation on Earth allowed civil society to break down for an extended period of time–and it makes them wonder just how strong we are.

  46. ray ladbury:

    Steven Mosher,
    Let us presume that there is a dataset with a systematic error of unknown origin. Two graduate students start looking for the source of the error. Graduate student A begins looking at every nut and bolt in the apparatus that is the source of the data. Grad student B looks at the dataset and characterizes the error–maybe looks at the data at different stages trying to find where the error is creeping in and what might be the nature of the error.
    I will bet you dollars to donuts (granted, not that great odds given what the dollar is doing these days) that not only will grad student B find the source of the error first, she will also have a better idea of how to correct for it and salvage the dataset rather than starting over. Data are never perfect.
    By all means, let us apply the standards to any new stations. However, there is zero chance that the audits of stations will produce evidence of any problem the analyses cannot handle.

  47. Craig Allen:

    There is an excellent article on Schwartz’s paper at the blog.

    The lack of sophistication of the model he uses in the study is revealed in this quote from the paper:

    Finally, as the present analysis rests on a simple single-compartment energy balance model, the question must inevitably arise whether the rather obdurate climate system might be amenable to determination of its key properties through empirical analysis based on such a simple model. In response to that question it might have to be said that it remains to be seen. In this context it is hoped that the present study might stimulate further work along these lines with more complex models…. Ultimately of course the climate models are essential to provide much more refined projections of climate change than would be available from the global mean quantities that result from an analysis of the present sort.

  48. Craig Allen:

    Hey, where did the preview button go? I didn’t mean to submit that last comment yet, that last paragraph is meant to be a block quote and I was only just getting started otherwise …

  49. Neil B.:

    This is important, but didn’t (?) get much media attention:

    Mine fires in China, India etc., burning coal, put out about as much greenhouse CO2 as US gas consumption. Suppressing them substantially would have a significant impact on reducing greenhouse gas increases.

    Link

  50. Rod B:

    Ray, one reason why the feds didn’t fully appreciate Katrina’s situation might be because both LA and NOL told FEMA that the levees were intact hours after they had been breached. Are the critical foreigners the ones who had just been crushed by a tsunami and helped extraordinarily by our military? Or maybe the self-centered French et al because we’re 30 years late with our periodic saving their butts from the Germans? Actually, some of what you and they say is valid. I do not give the Feds high marks by a long shot, and ultimately they have to carry the ball for this scale of disaster. I just think we should appropriately spread the blame. Plus I think Mississippi has put LA to shame with their recovery.

  51. Dylan:

    Does anyone know of a plot over time of total reflected radiation from the Earth back into space (as measured by satellites)?
    Presumably this would show a gradual decrease over the last 100 years.

  52. dhogaza:

    Does anyone know of a plot over time of total reflected radiation from the Earth back into space (as measured by satellites)?
    Presumably this would show a gradual decrease over the last 100 years.

    File this under “the obvious”, but we haven’t had satellites for 100 years…

  53. Dylan:

    Sure we have…we’ve had at least one for 4 billion years! But uh, yes, obviously 100 years is asking a tad much. Was meant to be 10 (although 20 or 30 would be better).

  54. bigcitylib:

    A bit OT, but here is an honest to gawd email exchange between British Coal’s Richard Courtney and various other deniers on the topic of carbon sequestration:

    http://bigcitylib.blogspot.com/2007/08/great-balls-of-dry-ice-deniers-at-play.html#links

  55. Bob B:

    surface data are not used in the GCM

    Surely the GCM includes parameters that are calibrated against the surface data?

    It seems hard to believe that improved surface data wouldn’t, for example, improve our estimates of CO2 sensitivity?

    [Response: They are compared against things like the absolute annual mean, the seasonal change and the diurnal range. But these are much better characterised than the trends that are being discussed. And it's worth bearing in mind that the errors in the GCM range up to a few degrees or so, and so are much larger than differences any of this will make. Therefore the skill scores are not going to be greatly affected and the code will not change. - gavin]

  56. bjc:

    #44 Eli:
    The problem is that trees are likely to be far more representative of the environment that is being measured, i.e., a given climate in a given region, than are air conditioners unless you are trying to measure the temperature of NYC.

  57. Vernon:

    Gavin, thanks again for this discussion. I removed both comments when there was no response to my comment.

    [edit for conciseness]

    Vern’s 2nd Response:

    So you do agree that not meeting the site guidelines will inject 0.8 – 5.4 degrees C of error?

    You have not presented any facts to show that the injected error will not cause a bias, remember, I am not arguing that there is one, only that it is not possible to tell.

    Finally, per CRN, Hansen’s 250 rural stations are not hugely oversampled, since they are putting out 100 to give 5-degree national coverage. Further, CRN states 300 stations will be needed to reduce climate uncertainty to about 95%.

    Well, as to whether I understand what Hansen says he is doing, as to the trend vs temperature delta, I could be wrong on this, and if I am, enlighten me, but I understood that Hansen is taking the urban stations, processing them, then the rural stations, processing them for each grid, then, on a yearly basis, doing the delta between rural and urban for each grid cell per year, then taking those to do the UHI off-set. Then that off-set is applied to individual stations as part of GISTEMP processing. (I know I am simplifying this since there is actually urban, semi-urban, and unlighted.) It is not trend vs trend. The only part I really have questions on is which off-set he was doing in what order, or was he doing all variations and then taking the mean?

    [edit]

    Vern’s 2nd response: Well you did not disagree with this one the first time around but do now. Fine, please show me your evidence that all 250 stations meet site guidelines. If not, then I believe that the 0.8 – 5.4 degrees C from failure to meet site standards far exceeds the few hundredths of a degree C over the past 100 years that Hansen assumes. You presented no facts or logic, just made a statement. Please back that up with facts or studies.

    [edit]

    Vern’s 2nd response: I addressed the UHI off-set above. I apologize if I did not say it clearly enough but poorly sited rural stations do not give accurate rural data. I did not say light=0 was an urban environment, you just did. I said that light=0 did not give you a good rural environment. CRN says that a good rural site will be: ‘not be subject to local microclimatic interferences such as might be induced by topography, katabatic flows or wind shadowing, poor solar exposure, the presence of large water bodies not representative of the region, agricultural practices such as irrigation, suspected long-term fire environments, human interferences, or nearby buildings or thermal sinks.’

    Then you drag out a red herring: there is proof that the stations are not meeting site guidelines. Hansen said that his work needs accurate data to be correct. It can be shown that that assumption is not supported by the stations. You still have not addressed this.

    [edit]

    Vern’s 2nd response: Gavin, you do use the surface station data, just not directly. In your Present-Day Atmospheric Simulations Using GISS ModelE: Comparison to In Situ, Satellite, and Reanalysis Data (2006) you use the station data to verify that the model is correct. ‘As in the other diagnostics, the differences among the different models are small compared to the offset with observations.’ So you are using the surface station data to make your model better. If the surface station data is wrong, then your model also suffers.

    Ok, worthless was a bit much, but if you do not know the errors in the stations, you do not know how much error is being injected into your model, as you have optimized against a ‘real’ baseline that may or may not reflect actual climate change.

    [Response: First, I am not disputing that microsite effects can offset temperature readings. However they can offset them in both directions, it will be the net effect that matters. Second, the impact of the microsite effect only enters the trend calculation if it changes, not if it is a constant offset (since only anomalies are used). Thirdly, the GISTEMP analysis has a smoothing radius of 1200 km, this means that for the continental US there is nothing close to 200 degrees of freedom in the regional temperature trends the Hansen papers try to estimate. Eyeballing it you would guess something more like 10 or even less. That is why the regional trends are hugely oversampled. Finally, if you look at the GCM comparisons to the CRU data in the paper you cite, you will notice that the comparison is to the regional absolute temperatures and the seasonal cycle and that local errors can be large. The microsite issues are not going to make any difference to that comparison. Trust me on that. - gavin]
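
    To make the second point in the response above concrete, here is a tiny sketch with made-up station numbers (not GISTEMP code): a constant microsite offset drops out of the anomaly trend entirely, and only an offset that changes over time biases the trend.

    ```python
    # Illustration of constant offsets vs. trends: anomalies (and hence trends)
    # are unchanged by a constant microsite offset; only an offset that changes
    # over time biases the trend.  Hypothetical numbers, not GISTEMP code.
    import numpy as np

    years = np.arange(1950, 2006)
    true_temp = 14.0 + 0.02 * (years - years[0])          # 0.02 C/yr trend

    well_sited   = true_temp
    const_offset = true_temp + 2.0                        # e.g. always near a wall
    growing_bias = true_temp + 0.01 * (years - years[0])  # bias that grows in time

    def trend_per_decade(series):
        anomalies = series - series.mean()
        return 10 * np.polyfit(years, anomalies, 1)[0]

    for name, s in [("well sited", well_sited),
                    ("constant +2C offset", const_offset),
                    ("slowly growing bias", growing_bias)]:
        print(f"{name:20s} trend = {trend_per_decade(s):.2f} C/decade")
    ```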

  58. Hank Roberts:

    bjc, are you saying the criteria for new stations should _not_ mention shade if there are trees in the area?
    Remember, these are not “climate” measuring devices, they’re thermometers for air temperature.

  59. FurryCatHerder:

    On the subject of raising the level of New Orleans –

    When I was there a year ago we couldn’t figure out why there was so much dirt inside of everything. Then we realized the 1″ thick layer of dirt was silt from the flooding. All that would be needed is another 96 Katrinas and the Lower Ninth Ward will be at sea level.

    But seriously, there are parts of the Upper Ninth Ward — Musician’s Village, in particular — that are being built at the proper grade. It was really interesting for me to be in one family’s new house looking down on the houses across the street.

    New Orleans needs to be where it is, and it’s not going to move because of storms. Further down river is further out into the Gulf of Mexico. Further up river is swamp. Many parts of the city are several feet below sea level, and raising any neighborhood with fill would require completely rebuilding the infrastructure.

    The fault for Katrina lies with multiple people and entities. The city cannot be evacuated simply because there are too few roads out — I-10 to the east and west, the Causeway to the north (the Gulf is to the south — wrong way). There are 6 lanes east, 6 lanes west, 4 lanes north, for a total of 16 lanes. Normal traffic is 2 seconds per lane per vehicle, or 8 vehicles per second. Assume a million people (less than it was), 4 people per vehicle (there aren’t enough buses, including the 100 or so school buses that weren’t used, to evacuate the city), that’s 250,000 vehicles. That starts to look do-able until you figure out the rest of it, and that’s where it breaks down.
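
    A quick back-of-envelope check of the flow arithmetic above, taking the stated assumptions at face value (16 outbound lanes, a 2-second headway per lane, 4 people per vehicle, roughly a million people):

    ```python
    # Back-of-envelope check of the evacuation arithmetic above, using the
    # stated assumptions: 16 outbound lanes, 2 s headway per lane per vehicle,
    # 4 people per vehicle, ~1 million people.
    lanes = 16
    headway_s = 2.0                          # seconds per vehicle per lane
    people = 1_000_000
    people_per_vehicle = 4

    vehicles = people / people_per_vehicle   # 250,000 vehicles
    flow = lanes / headway_s                 # 8 vehicles per second
    hours = vehicles / flow / 3600
    print(f"{vehicles:,.0f} vehicles at {flow:.0f} veh/s -> about {hours:.1f} hours")
    ```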

    The evacuation for Rita had people stuck on the highway, in traffic jams, when the storm made landfall. Many people who evacuated never made it where they were going — the traffic was so bad that they turned around before getting to where they were driving. I live 200 miles from Houston, and the evacuation of Houston wrecked traffic where I live. The only way to avoid another Katrina is to make cities like New Orleans (and Houston) storm-proof.

  60. bjc:

    Hank:
    I agree that the thermometers are there to measure air temperature and for local weather recording purposes, but the data is also being used to build a measure of climate trends and these records are being adjusted for a whole range of microsite effects. But not all microsite effects are conceptually equivalent. UHI, anthropogenic heat sources and asphalt are non-representative of the 5*5 regional climate grids, but trees in a forested region are most definitely part of the climate, as is packed earth/rocks in a desert region, viz., the weather station at UA in Tucson. Difficulties obviously arise where a region has a complex, heterogeneous landscape or a multiplicity of land uses… then we need to ensure an adequate sampling of the differing land uses/landscapes. It strikes me that non-rural sites are almost by definition non-representative and as such should be excluded from the database, and not adjusted or corrected. If this is done then UHI becomes a non-issue!

  61. Vernon:

    Gavin, thank you for taking the time to answer me; I am enjoying this discussion. I do have a few questions based on your last response.

    [Response: First, I am not disputing that microsite effects can offset temperature readings. However they can offset them in both directions, it will be the net effect that matters. Second, the impact of the microsite effect only enters the trend calculation if it changes, not if it is a constant offset (since only anomalies are used). Thirdly, the GISTEMP analysis has a smoothing radius of 1200 km, this means that for the continental US there is nothing close to 200 degrees of freedom in the regional temperature trends the Hansen papers try to estimate. Eyeballing it you would guess something more like 10 or even less. That is why the regional trends are hugely oversampled. Finally, if you look at the GCM comparisons to the CRU data in the paper you cite, you will notice that the comparison is to the regional absolute temperatures and the seasonal cycle and that local errors can be large. The microsite issues are not going to make any difference to that comparison. Trust me on that. - gavin]

    Vern’s Response:

    I am glad that we have reached agreement that microsite effects matter for stations that do not meet site guidelines (guidelines covering local microclimatic interferences such as might be induced by topography, katabatic flows or wind shadowing, poor solar exposure, the presence of large water bodies not representative of the region, agricultural practices such as irrigation, suspected long-term fire environments, human interferences, or nearby buildings or thermal sinks), which surfacestations.org is bringing to light.

    However, your second point is not valid for Hansen (2001). No study I have read indicates that the error at a surface station that does not meet site guides will change over time. If there were anything indicating that the error would be constantly changing in a random manner, then I would agree with you, but there is no evidence of that. The effect, I believe, would be consistent until something changed in the environment. It would be wrong, but it would be consistently wrong. This hurts Hansen (2001), since he is doing the temp delta for the grid cell and there is a limited number of lights = 0 stations (~250). The mere fact that it is wrong in a small data pool will have an even larger impact.

    I also disagree that there is enough information about the stations to make a case for a binomial distribution. Basically, you’re saying the error has an equal chance of being warm or cold, but I have not seen any studies to back up that position. That is why I do not believe the Hansen (2001) UHI off-set is valid until further due diligence is accomplished, in light of the failure of several of his assumptions.

    Your third point about GISTEMP is not quite valid. Why? Because you have already applied Hansen’s UHI off-set to the stations, so no matter how big you make the pool at that point, the data is already tainted. Since you do not know by how much, or for which stations, there is no statistically valid way to correct for it.

    As for your last point, I have to disagree. Why? Because Hansen’s UHI off-set is used. An off-set is applied to individual stations and, at this point, there is no way to know if it is valid. Why is Hansen’s UHI off-set wrong? Because of the microsite issues. I see no way of fixing Hansen’s work without studying the lights = 0 stations to determine what the microsite issues are. Once they are known, he can redo his work and you should get an accurate UHI off-set.

    Additionally, even with over-sampling at the global level, there is nothing that indicates that microsite problems are a local (USA-only) issue. Without a study that actually does an assessment of individual sites, as time-consuming or as hard as that would be, there is no indicator that the microsite issues do not cause bias.

    Do you know of such a study?

  62. J.C.H:

    They learned a lot from the evacuation of Galveston and Houston, which was a total fiasco that easily could have become a human catastrophe had a Cat 5 plowed directly into the two cities.

    In the weeks between the two storms Texans had a field day mocking the incompetence of LA and NOL. They got a well-deserved comeuppance right on the old kisser. They fared no better, and they had a lot of extra time to get ready.

    With the storm about 4 days out a Houston city official pronounced on the news that he knew of no structures in the city of Houston that would survive a Cat 5, and that sent millions of people onto the freeways, which immediately locked up like the worst case of constipation in history. In a few hours there was no gas and no food available along the freeway.

    There is no way to storm proof a city. Mother nature is just that darn powerful.

  63. Eli Rabett:

    bjc, are you arguing that all the sites should have shade? Or air conditioners (pretty common in the US)? The point is that there is a mix, and the shaded sites will be cooler than the unshaded ones. It really has not been shown how close to an A/C unit the thermal sensor has to be for that to have an effect, or whether the effect would be a step when the unit was installed, etc. Otherwise, what Gavin said about trends.

  64. Mike Alexander:

    I liked Schwartz’s paper. I could actually follow most of it, and his approach is very similar to my own amateur efforts. My results are different: I get the standard result. One source of error is his 20th century temperature increase, for which he uses 0.57 C. If I subtract the 1900 CRU value from the 2000 value I get 0.53, which is close to the value Schwartz uses. But that’s not the right value to use, because the temperature series shows short-term quarter-degree fluctuations, so you have to use the *trend* value. For example, in the database I use the temp value in 2000 was +0.277, but the average value over 1995-2006 was +0.377. Similarly, the temp value for 1900 was -0.253, while the average value over 1895-1906 was -0.369. The temperature increase using the single-year points is +0.53, but using the averaged values it’s 0.75. Figure 5 of the linked webpage shows the trend line I constructed using a running centered 20-year linear regression. The change in trend temperature obtained using this measure is +0.78. With this larger delta T the implied forcing is 2.6 watts/m^2 instead of 1.9, which is *larger* than the 2.2 watts/m^2 for the greenhouse effect, implying the sensitivity is greater than 0.3.

    Now this is just a plain bonehead error. Another source of error is the deep ocean response. The definition of climate sensitivity says it is an *equilibrium* quantity. This means the deep ocean response has to be included. The problem is that the deep ocean response is so slow that it mostly hasn’t taken place over a few decades. In fact you can roughly represent the climate response as a two-phase first-order response: one phase is short and the other is much longer. In this case the *apparent* sensitivity obtained by considering only short-term dynamics is depressed by maybe 20% or so.

    So using the same approach as Schwartz, we have 2.6 (not 1.9) watts of apparent climate forcing, plus the 0.3 watts of aerosol forcing impact he grants, magnified by 1.2 to account for deep ocean effects, to give 3.5 watts of apparent forcing compared to 2.2 watts of actual greenhouse forcing. In other words the climate sensitivity appears to be 3.5/2.2 = 1.6 times larger than the 1.1 C CO2x2 value he favors. The actual value consistent with his own approach is thus about 1.8 C for a CO2 doubling.
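    (A short sketch of the arithmetic in this comment; every number is taken from the comment itself or from Schwartz’s figures as quoted there, and none has been independently checked.)

        # Reworking the numbers quoted in the comment (none independently checked).
        schwartz_sensitivity = 1.1     # deg C per CO2 doubling, the value Schwartz favors
        ghg_forcing = 2.2              # W/m^2 greenhouse forcing used in the comment
        aerosol_term = 0.3             # W/m^2 aerosol impact the comment says Schwartz grants
        deep_ocean_factor = 1.2        # ~20% allowance for the slow deep-ocean response

        # The implied forcing scales with the temperature change used:
        # 0.57 C (endpoint years) gave ~1.9 W/m^2, so the 0.78 C trend value gives:
        implied_forcing = 1.9 * (0.78 / 0.57)                             # ~2.6 W/m^2

        apparent = (implied_forcing + aerosol_term) * deep_ocean_factor   # ~3.5 W/m^2
        revised = schwartz_sensitivity * apparent / ghg_forcing
        print(round(apparent, 1), round(revised, 2))             # ~3.5 W/m^2, ~1.7-1.8 C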

  65. Hank Roberts:

    > If there was anything that indicated that the changes would
    > be constantly changing in a random manner, then I would agree
    > with you but there is no evidence of that. The effect, I believe,
    > would be consistent until something changed in the environment.

    Parking lots? Weekdays vs. weekends.
    Trash burning? Day of the week
    Air conditioner? On/off cycles, building hours, thermostat
    Water sprinklers? On/off cycles, drought indexes, time of day
    Freeways? time of day, day of week
    Peeling paint? day/night
    Nesting birds? Springtime ….
    Nesting bats? Time of day, season of year ….
    Spiderwebs? “… along came the rain, and washed the spider out …”

  66. ray ladbury:

    Vernon, do not forget that you are dealing with an oversampled system for the purposes of comparison to GCMs. Moreover, the way to estimate the systematic errors is from the data – not by examining every station down to the last tree or building. This will be true unless every station has a comparable bias – and it is safe to conclude that the amount of oversampling is at least 3:1, so in all probability no information is lost.

  67. bjc:

    Eli:
    You really have to give people more credit. All I am saying is that the environment that surrounds the measurement device should be representative of the region the data are meant to represent. How hard is that? The problem with urban settings is that they actually are not representative of very much on an area basis. Clearly we are looking at trends, not absolute measures, but with an adequate number of representative stations there would be no issue about UHI trends. Don’t you agree? The issue would be moot.

  68. Petro:

    Vernon reasoned:
    “Without a study that actually does an assessment of individual sites, as time-consuming or as hard as that would be, there is no indicator that the microsite issues do not cause bias.”

    This is plainly false. The temperature records compiled by different groups are not only in agreement with each other regarding the trend in global temperatures, they are also in agreement with other observations indicating global warming. If your issue caused significant bias, there would be a discrepancy, and that is not the case.

    However, if you have a hunch you could demonstrate otherwise, please carry out the time-consuming and hard assessment yourself and publish the results in a scientific journal. Why harass professionals to do it for you? They are competent enough to find more relevant topics for their research.

  69. Vernon:

    Hank, if you have proof of any of those things, please present the evidence. I am going strictly by errors associated with poor sites. Surfacestations.org is showing bad stations that do not meet the guidelines. What else could be happening, I do not know.

    Ray, you’re on the wrong page. This is addressing Hansen (2001), which does not have that amount of oversampling. CRN says that to get 95 percent confidence within CONUS takes 300 stations; Hansen only has ~250 lights = 0 stations.

    Also, surfacestations.org is showing that a lot of stations do not meet site guidance. We know that failure to meet site guidance injects 1-5 degrees C of error.

    Finally, this is not about getting just the trend. It is about getting the lights = 0 temperature for all relevant stations and getting the temp delta with the remaining urban stations.

    So, there is no way to know what the actual temp delta is since we do not know the quality of the stations. This is fully addressed by NOAA/CRN in how they are building a quality climate network.

    I just do not think we have 30 years to wait on them so, Hansen, if he wants his UHI off-set from (2001) needs to get funding to validate his stations.

    Anyway, Ray, you’re going after the wrong thing: for lights = 0 there is no oversampling, and since the goal is to get the actual temp to do a temp delta, it would still be wrong.

  70. Hank Roberts:

    > representative

    What data set do you rely on, if you want to find a representative half acre, in your neighborhood? Or if you don’t have data, how would you decide?

  71. Rod B:

    Furry (59), well put.

  72. Philippe Chantreau:

    RE 71: I agree that Furry’s comment is astute. No city in the world is designed to be quickly and efficiently evacuated. However, the reasons for that may be similar to the reasons why cities are not storm-proof either. Cities are accumulations of structures that corresponded to more immediate needs at a given time, without the specific, consistent risk analysis that would place one given priority (i.e. storm resistance or “evacuability”) high on the list. Priorities, risks and benefits translate into actions according to how we perceive them at the moment when we do the analysis. If the focus in the design of a city was that, no matter what, it had to withstand a Cat 5 hurricane, then cities would be mostly compliant with that. If the focus was that, no matter what, you had to be able to get 90% of the population out of there in 36 hrs, then they would probably be able to achieve close to that. Of course, there would be much groaning and moaning from anti-regulation groups arguing that, historically, the likelihood of such an occurrence doesn’t deserve the effort and regulatory burden on the city’s overall structure.

    The main problem is the gap between the objective reality of risk and our perception of it at the time when the risk is integrated into a risk/benefit analysis. Imagine that you have to design a 747 and the emphasis imposed by management on your department is marketability and production costs. To achieve that, you route hot compressed air through the center fuel tanks. Then an accident happens involving the fuel/air mixture in the center tank exceeding a flammability threshold because of, among other things, the extra heat from the hot air ducts. That was deemed unlikely enough at the design stage. Now consider that this led to a catastrophic in-flight explosion and your son or daughter was on that airplane. You would see that risk in a different light (not necessarily better). Risk assessment (perception) and its politics are the main drivers of current climate change policies (or the lack thereof). Yet these are highly subjective areas. There are things even more difficult than building accurate climate models. Strangely enough, humans are both the best and the worst equipped to accurately perceive objective realities.

  73. Hank Roberts:

    Vernon, this is silly. The instrumentation across the world developed over the past century or more, starting with boxes with mercury thermometers and pocket watches and calendars and paper and ink.

    Your old car or old house doesn’t meet contemporary guidelines. Your old education doesn’t. Your old dental work doesn’t. You don’t throw them out; you improve on what’s done now.

    The guidelines are for installing new instruments. Once the new network is in place, running in parallel to the old equipment, it allows getting more information out of the old data collection by verifying the existing instruments.

    If you want to ruin the ability to know what’s going on — go out there and move instruments around, change their location, change their paint, change their environment, and then declare them now “reliable” — do you understand why that’s foolish?

    A consistently biased instrument is as valuable as a perfectly accurate instrument — once you know the bias. You don’t mess with the old gear. You install better gear to the new guidelines, nearby, and run them in parallel.
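    (A minimal sketch of the run-in-parallel idea, with invented numbers: the offset of the old instrument is estimated from the overlap period with the new reference instrument and then removed from the older record.)

        import numpy as np

        rng = np.random.default_rng(1)
        months = 120                                   # 10 years of overlap, monthly means

        reference = 15.0 + rng.normal(0, 0.5, months)               # new, well-sited gear
        old_station = reference + 1.8 + rng.normal(0, 0.2, months)  # old gear, ~+1.8 C bias

        # Estimate the bias as the mean difference over the overlap period...
        estimated_bias = np.mean(old_station - reference)

        # ...and remove it from the old station's earlier record (same constant bias assumed).
        historical = 14.0 + 1.8 + rng.normal(0, 0.5, 600)
        corrected_history = historical - estimated_bias

        print(round(estimated_bias, 2))                             # close to 1.8
        print(round(historical.mean(), 2), round(corrected_history.mean(), 2))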

  74. John Mashey:

    re: #59 FCH
    I haven’t been to New Orleans for several decades, so hopefully you can offer some more insight. I have a concern that is likely to be shared by many, but which, of course, is very difficult to get politicians to mention or to subject to any reasonable debate. As you note, NOL can’t move either upstream or downstream, but the real question is:

    How much will it cost, and who will pay for it, to keep NOL viable in:
    2020
    2050
    2100
    2200
    Americans live there, and of course NOL is a sentimental favorite, but sooner or later economics matter as well:

    a) NOL/LA can afford to spend some money on its own behalf, although LA is a net recipient of federal money: as of 2001, it got ~$8B more than it sent, of which 24% came from CA, 16% from NY, 10% from NJ, and 14% from (CT, WA, CO, NV). LA also got 10% from IL, 5% from TX, 5% from MI, 4% from MN, and 2% from WI, and the latter states would seem to benefit more directly from having LA where it is, although presumably all of us benefit somewhat. The Mississippi River is rather valuable.

    b) There are economic benefits to having LA where it is that do not accrue to LA/NOL; I have no idea how LA captures revenue from being where it is, and how close that is to the economic value.

    c) The Corps of engineers spends money to build.
    (I.e., this is planned work).

    d) Finally, there are potential subsidies from the Federal treasury for:
    - Disaster relief & rebuild
    - Flood insurance [given the pullback of private insurers]
    (i.e., these happen less predictably).

    There are clear historical facts (*), and some predictions as in:
    http://www.sciencedaily.com/releases/2000/01/000121071306.htm

    A)* New Orleans is slowly sinking (3 ft/century).

    B)* The Mississippi has been known to flood, although NOL has usually escaped that.

    C)* The Mississippi really *wants* not to go through NOL, but down the Atchafalaya channel, as well described in John McPhee’s “Control of Nature,” bypassing not only NOL but Baton Rouge. It has generally shifted channels about every 1000 years, and last did so around 1000 AD. It would have already shifted except for large and continuing efforts by the Corps of Engineers, starting in 1954 when Congress budgeted money for the Old River control effort.
    http://en.wikipedia.org/wiki/Atchafalaya_River
    http://www.newyorker.com/archive/1987/02/23/1987_02_23_027_TNY_CARDS_000348555

    D) Sea level is expected to rise. Although the following is simplistic, it’s still worth studying: zoom in to LA, set sea-level-rise =0, then +1m, which we probably *won’t* get by 2100, unless these nonlinear melting effects happen.

    http://flood.firetree.net/?ll=43.3251,-101.6015&z=13&m=7

    Needless to say, don’t go much higher unless you want to get depressed.

    E)* NOL certainly gets hit by hurricanes; recall that Katrina actually missed.

    F) Temperatures will rise, and (maybe) that will increase the frequency of more intense hurricanes.

    Hence, the real policy questions (for which I certainly don’t know the answers):
    - how much will it cost to keep NOL viable, and in what form, and for how long? [some of this depends strongly on the actual rate of sea-level rise, a subject of some contention, and one of the reasons it is *very* important to keep refining models and improving their skill as inputs to rational planning.]

    - who will pay for it?

    - and is the opportunity cost worth it? Given $X, is it better to spend the money building big levees around NOL, or to look ahead 50 years, diverting development upstream, and figuring out how to handle the possible jump to the Atchafalaya? Alternatively, if there is $YT available for building levees and dealing with other seacoast issues along the Gulf/Atlantic coasts during this century, what fraction of it does NOL get? And of course, the world doesn’t end in 2100 either (I hope).

    None of this is arguing for “abandon NOL now” as I suspect that is a bad idea, although it seems to approximate what the current administration is doing (without saying so) … but sooner or later, the level, structure, and priority of investment has got to start being debated more publicly.
    [Maybe someone knows some good reports.]

    Anyway, FCH: you live not too far away, and you’ve been there recently. Any opinions on any of this?

  75. Vernon:

    Hank, that is so wrong it is not even funny. The site guidance from NWS/WMO far preceded what CRN is doing, but it was done for the same reason. Your argument does not address the issues and is just misdirection.

    [edit - no personal comments].

    This is not about a trend, it is about the temp delta. Hansen (2001) is doing off-sets. I will agree that after he has the temp delta he finds the trend, but the trend does not matter until then. He is doing urban – rural = UHI off-set. A consistent bias is going to give bad numbers.

    Finally, this is not about how accurate the instrument is, but how accurate the station is. If the station is not sited in accordance with the guidance, then injecting 1-5 degrees C of error is not going to get you the actual temp delta.

  76. Dan:

    re: 73. Indeed it is. The entire surfacestations.org canard has been shown here by numerous comments to be quite unscientific. A very limited sample of station photographs by “volunteers” is not in the least objective or scientific. Yet the Vernons of the world harp on it with little if any basis. More important, despite umpteen mentions, the data are almost trivial compared with the numerous proxies and other *global* data that show the clear global warming trend. The Vernons of the world keep repeating the “noise” ad nauseam as if it were essential fact, go away for a while, and then come back as if the issue were somehow still essential. There is no excuse for the failure to objectively assess what surfacestations.org has done. When one assesses it that way, it is simply a red herring and nothing more.

  77. bjc:

    Hank:
    I think you have the logic reversed. The fact that we use surface temperature data to assess climate implicitly assumes that the stations are representative. My point is that, besides any flaws in individual sites, the dependence on stations that are close to major population centers (a) is an inadequate sampling and (b) necessarily introduces a potentially confounding UHI trend that cannot be adequately controlled for without sufficiently many more “representative” stations; and if you have a sufficient number of more representative stations you wouldn’t need the urban stations that have limited representativeness.

    Dan (#75):
    The effort to build a better climate network is proof positive that the network of existing weather stations is flawed in terms of instrumentation, micro-climate effects and geographic coverage. The Pielke and Watts effort is simply underscoring what Karl et al. already know; otherwise why the expensive push for an updated network?

  78. Ray Ladbury:

    Vernon, quite frankly, single studies do not interest me. But you need to define your terms – 95% confidence of WHAT? 300 stations measuring WHAT?
    In the end, the proof of the pudding is in the eating, and the trends observed by Hansen et al. support those seen in completely independent measurements. And even if there were errors in his analysis, they will not be found or quantified by a bunch of amateurs who don’t understand the science traipsing around through poison ivy and photographing thermometers near barbecue grills. The data will tell us what the biases are.

  79. Vernon:

    RE: 75 Dan,

    The surfacestations.org census does not have to be scientific; that, sir, is a red herring. All it has to show is that a station does not follow the site guides. Pictures from ‘volunteers’ are just as good for this purpose as a professional film crew.

    You have not addressed my argument. Gavin agrees that failure to meet station siting guides will inject error and that surfacestations.org is enough to tell if the station is meeting the guide or not.

    [Response: I said no such thing. That some microsite issues impart biases in certain conditions does not imply that every so-called issue highlighted in a photograph actually does or not. Without more information, you just have insinuation, not science. - gavin]

    Once again I will state my argument. I do not doubt global warming, I do doubt the direct instrumented ‘accelerated’ warming.

    My hypothesis is that Hansen’s assumptions in his 2001 work for the UHI off-set are not supported by the evidence. I have presented my facts and logic, which have survived discussion with Gavin.

    I would be pleased to hear your analysis of the flaws of my argument. But failing to address any of the facts or logic I presented and calling it ‘noise’ is not. Please produce some facts or logic to support your attack.

    Finally, Dan, the fact that Hansen’s UHI off-set could be wrong is not trivial. It is applied to every station in GISTEMP as part of the station adjustment process prior to processing. This means a bias is being introduced that cannot be corrected, ever, since by definition it is applied globally to the data.

    [edit]

    [Response: You have a very odd idea about what Hansen et al are doing. They detect a clear UHI-related trend and remove it. In what sense 'can it not be corrected'? Just take the raw data and do what you want to it yourself. The GISTEMP analysis doesn't preclude anyone else's analysis, and if you want to do it differently, go ahead. - gavin ]

  80. Barton Paul Levenson:

    Vernon writes:

    [[A consistent bias is going to give bad numbers.]]

    And to get good numbers out of those bad numbers, you compensate for the bias.

    There is no such thing as unbiased data. The fossil record is biased toward creatures with hard parts. The local motions of galaxies and quasars are biased by their redshifts. And temperature stations can be biased in their readings. You don’t throw the data out, you compensate for the biases.

  81. Vernon:

    Barton, you’re wrong on two points. I make no assumption that well-sited stations would not have some bias, nor do I make any assumptions about the direction of the bias.

    What I do say is that surfacestations.org is showing that a significant number of stations are poorly sited, and the impact is to inject 1-5 degrees C of error per station. Since Hansen (2001) is doing a temp delta, getting the rural (lights = 0) stations’ temps wrong is a critical failure.

    I make no claim on whether Hansen’s UHI off-set is high or low, only that, based on our current understanding, there is no proof that it is right. All Hansen needs to do is eliminate the lights = 0 stations that fail to meet site guides and redo the math.

    Your ‘to get good numbers out of those bad numbers, you compensate for the bias’ is flatly untrue within this context. The bias is Hansen’s UHI off-set, which is applied to all stations. Please explain how you compensate for this? If I am misunderstanding you and you’re talking about a bias in the rural stations, please show me how to remove this without knowing which stations are affected and by how much. Please remember this is not looking for a signal trend; it is looking for the actual temp from the local rural stations.

    Finally, this is a huge red herring. Once again, I am not addressing random bias that may exist in stations that meet site guides. I do not know what it is, and for this argument, I do not care.

    So other than dragging some fish around, please point out where my facts and logic are wrong in the context of my argument.

  82. Vernon:

    Gavin, once again thank you for taking the time to address my arguments.

    Gavin, I believe your statement is just false. Science is based on observation. Are you saying that a picture cannot show whether a station is meeting site guides? That is all I am taking out of this: either the station meets site guidance or not. If it does not, studies show that the site will be off by 1-5 degrees C. Which part of this is wrong?

    I have a very clear idea what Hansen (2001) is doing. You accepted that back in #57 and did not disagree with me then. I do not disagree that he detects UHI and comes up with an off-set. The problem is with the accuracy. It comes down to: yearly urban temp – rural temp = UHI off-set. Hansen makes the assumption:

    We are implicitly assuming that urban (local human induced) warming at the unlit stations is negligible. We argue that this warming can be, at most, a few hundredths of a degree Celsius over the past 100 years.

    The error in station siting makes that assumption unsupportable.

    Your falling back on “if you do not like it, do your own” is a sad way to conduct a discussion. Maybe you do not mean it like that, but that is the appearance.

    What is wrong with my facts or logic? We started having a discussion. I presented facts and logic, you challenged me, I defended. We were coming down to less and less that we disagreed on. Now, I think you do not like my argument, but you are finding less and less you can challenge, so you do this.

    [Response: Your logic is the most faulty. Take the statement above, 'science is based on observation' - fine, no-one will disagree. But then you imply that all observations are science. That doesn't follow at all. Science proceeds by organised observation of the things that are important. You cannot quantify a microsite problem and its impact over time from a photograph. If a site's photograph is perfect, how long has it been so? If it is not, when did it start? These are almost unanswerable questions, and so this whole photographic approach is unlikely to ever yield a quantitative assessment. Instead, looking at the data, trying to identify jumps, and correcting for them, and in the meantime setting up a reference network that will be free of any biases to compare with, is probably the best that can be done. Oh yes, that's what they're doing. - gavin]

  83. L Miller:

    Vernon are you trying to claim an improperly sited station will be out by 1 deg C one year, 5 deg C the next?

    The notion that the station’s temperature readings would jump around like that is absurd. Even if it were to happen, there would still be usable data in the signal that can be extracted and used. Random noise, while not desirable, does not remove the underlying trends.

    It should be even more obvious that a constant error will not remove the underlying trends either. If a station reads high by 3 deg you can still spot an underlying trend with ease. I.e., if a station reads 17 deg when it should read 14, and 20 years later it reads 18 deg when it should read 15, you still get a 1 deg temperature increase.

    To make a difference in the final calculation of the trend you need a trend in site placement issues. Assuming all site placement issues result in higher temperature readings, progressively worse site placement over time will introduce a false positive trend in the final results. Progressively better practices will introduce a false negative trend into the final results, even though the error in the site is positive.

    As far as I can tell all the issues that could introduce a false trend into the final results are being accounted for, but if you have some that are not I’m sure the people here would love to hear them. As I noted above though, none of the issues you have discussed so far seem capable of introducing a false trend. What you have talked about so far is simply noise in the data, which isn’t a good thing but doesn’t render the data useless either.
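    (A small illustration of the distinction drawn above, using invented numbers: a constant offset or random noise leaves the fitted trend essentially unchanged, while a bias that drifts over time does alter it.)

        import numpy as np

        rng = np.random.default_rng(2)
        years = np.arange(1950, 2011)
        true_temp = 14.0 + 0.02 * (years - years[0])          # true 0.02 C/yr warming

        def trend(series):
            return np.polyfit(years, series, 1)[0]            # deg C per year

        constant_bias = true_temp + 3.0                       # badly sited but stable
        noisy = true_temp + rng.normal(0, 0.5, years.size)    # noisy but unbiased
        drifting_bias = true_temp + 0.01 * (years - years[0]) # siting degrades over time

        print(round(trend(constant_bias), 3))   # 0.020 (unchanged)
        print(round(trend(noisy), 3))           # about 0.02 (noisier, not biased)
        print(round(trend(drifting_bias), 3))   # 0.030 (a false extra trend)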

  84. Vernon:

    L Miller, could you be any more wrong? I am not claiming that the errors change at all. In fact, I have stated that I would expect the error from not meeting the site guide to be consistent year by year.

    Second, I am talking about Hansen (2001), where he is taking the urban temp, subtracting the rural (lights = 0) temp on a grid-cell basis, and getting a UHI off-set. He does this for every year of the time series. From this he develops his UHI off-set trend.

    What I am pointing out is that his assumptions are shown not to match the facts. That injects error at the temp delta stage, which then gets propagated throughout his work.

  85. bjc:

    I think the more trenchant issues are how you compensate for “inconsistent” biases and at what point you discard problematic data. What is very problematic is the domination of the current temperature record by urban stations, to the extent that identifying the UHI trend becomes extremely difficult. For example, of the 47 stations included in the GISS data set for Brazil, only one meets the stated criterion for rural stations of having a population of less than 10,000. That one station is on an island in the South Atlantic!

  86. Hank Roberts:

    > To make a difference in the final calculation of the trend you need a trend in site placement issues. … Progressively
    > better practices will introduce a false negative trend into the final results, even though the error in the site is positive.

    That’s a simple clear explanation for why it’s important _not_ to go moving the old stations around to “improve” them.
    (Cynically, it might be why people are agitating to do exactly that — to screw up the data by “improving” the stations.)

    Instead new stations are put in. That improves the old data by adding more, and more accurate, cross-checks.

  87. Vernon:

    Gavin, thank you for your input.

    [Response: Your logic is the most faulty. Take the statement above, ’science is based on observation’ - fine, no-one will disagree. But then you imply that all observations are science. That doesn’t follow at all. Science proceeds by organised observation of the things that are important. You cannot quantify a microsite problem and its impact over time from a photograph. If a site’s photograph is perfect, how long has it been so? If it is not, when did it start? These are almost unanswerable questions, and so this whole photographic approach is unlikely to ever yield a quantitative assessment. Instead, looking at the data, trying to identify jumps, and correcting for them, and in the meantime setting up a reference network that will be free of any biases to compare with, is probably the best that can be done. Oh yes, that’s what they’re doing. - gavin]

    I really like the way you moved from the specific (my argument) to the general (nothing to do with my argument) and then proceeded to take me to task for something I did not say. I said ‘science is based on observation’ and asked ‘are you saying that a picture cannot show whether a station is meeting site guides?’ You seem to disagree with neither of these two points.

    I will admit that I have a hard time following your logic, but you’re basically saying that whether the pictures show a station meeting site guidance does not matter, because you cannot use them to determine the amount or history of the error. I do not see what that has to do with my argument. My argument is quite simple: either a site is compliant or non-compliant with site guidance. I believe that a picture will show whether the site is compliant or not, which you appear to agree on. If it is not, then I expect, based on the studies, that the error the site reports will be between 1-5 degrees C, but that does not matter. What matters is that the site should not be used by Hansen et al. (2001) to determine the UHI off-set.

    So I have to ask, what is faulty about my facts or logic?

    Here is my argument:

    Hansen (2001) states quite plainly that he depends on the accuracy of the station data for the accuracy of his UHI off-set. (You agree with this.)

    WMO/NOAA/NWS have siting standards (You agree with this)

    Surfacestations.org’s census is showing (based on where they are now in the census) that a significant number of stations fail to meet WMO/NOAA/NWS standards (You agree with this)

    There is no way to determine the accuracy of the station data for stations that do not meet standards. (You agree with this, well actually you seem upset that this is not being provided.)

    Hansen uses lights=0 in his 2001 study (You agree with this.)

    Due to the failure of stations to meet siting standards, lights=0 does not always put the station in a genuinely rural environment (You agree with this.)

    At this time there is no way to determine the accuracy of Hansen’s UHI off-set (You will not commit to this so where did I get it wrong?)

    Any GCM that uses this off-set has no way to determine the accuracy of the product being produced. (You do not agree with this, but since you use the surface station temp as a diagnostic, then it does have an impact.)

  88. Vernon:

    Actually, Hank, this is being addressed by the NOAA/CRN project: 300 stations at sites that have been extensively studied to ensure they have none of the current problems. It is described at http://ams.confex.com/ams/pdfpapers/71817.pdf
    and specifically:

    a. will most likely remain in a stable tenure or
    ownership for at least the next century.

    b. are not envisioned as being areas of major
    development during at least the next century.

    c. that will not be subject to local microclimatic
    interferences

    ‘microclimatic interferences such as might be induced by topography, katabatic flows or wind shadowing, poor solar exposure, the presence of large water bodies not representative of the region, agricultural practices such as irrigation, suspected long-term fire environments, human interferences, or nearby buildings or thermal sinks.

    It is being done in two phases. Phase I is 100 stations (about half done) and Phase II is another 200 stations. With 200 stations they expect to reach roughly 95 percent confidence.

    Once we have this, no more adjusting data.

  89. richard:

    82. “Your falling back on ‘if you do not like it, do your own’ is a sad way to conduct a discussion”

    Seems to me a number of attempts have been made to address your concerns. You remain unsatisfied with the answers given. The best route for you to take would be to do as has been suggested: take the raw data and carry out your own analysis. Then submit the results to the peer-review process. At that point the discussion could proceed.

  90. Majorajam:

    Vernon,

    I think what people are trying to tell you here is that you cannot undermine Hansen by innuendo about his data set. If it is your assertion that flawed surface stations systematically bias the data such that Hansen’s UHI adjustment is too low, surely this should be easy enough to demonstrate by closer examination (and here I’m thinking of a scientific experiment using hard data, and even a hypothesis involving a physical explanation. Photos, not so much). Another way of saying this is that you have to establish the relevance of the objection to the data (in the context of systematic bias) before its prevalence. You have not pointed to a study that does this, nor produced any analysis of your own. Why not?

  91. Jim Eager:

    Re 84 Vernon: “I am not claiming that the errors change at all. In fact, I have stated that I would expect the error from not meeting the site guide to be consistent year by year.”

    Which means that the errors can easily be compensated for, something that you have repeatedly been told is done, yet you continue to refuse to believe it.

  92. Petro:

    Vernon,

    You have been told in plain English several dozen times in this thread alone that the microsite issues you worry about have been addressed and dealt with in a scientific manner.

    Below, the dialogue is rendered in formal form:

    Science: A
    Vernon: Not A, since B
    Science: Not A does not follow from B
    Vernon: Not A does follow from B

    Do you realize which side in this controversy has more evidence for his argument?

    Do you realize which side has a burden of proof to collect more evidence to support his argument?

    Do you realize that your insistence that leading climate researchers carry out a time-consuming study on behalf of your argument is a very arrogant position?

  93. James:

    Re #87: [...since the pictures will show whether the station meets site guidance does not matter because you cannot use them to determine the amount or history of the error.]

    One of many things you’re ignoring here (which no one else seems to have pointed out) is that a picture of a site only shows site conditions at the moment it was taken. There are hundreds if not thousands of sites, many with records going back 50 to 100 years. Say you find a site where your picture shows some factor that might cause a bias: when did that factor come into play? Was it there a century ago, or was it built last week? (And of course the reverse: maybe a site meets your standards today, but how about back in 1943, when it was a busy AAF training facility?)

    It seems that what you’re really calling for is a record of what the site was like when first constructed, and through all the years between. Now unless you have a time machine you’re not telling us about, it’s going to be impossible to get such a record, so by your logic we should throw away that century of records, and start fresh, no?

  94. Barton Paul Levenson:

    [[So other than dragging some fish around, please point out where my facts and logic are wrong in the context of my argument.]]

    Climatologists DO analyze the record of individual stations, and they do it by comparing it to other stations within a wide radius. It is easy to pick out outliers.
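    (As a purely illustrative sketch, with invented numbers rather than real station data, this is the sort of neighbour comparison that makes outliers easy to spot.)

        import numpy as np

        rng = np.random.default_rng(3)
        years = np.arange(1980, 2011)
        regional = 0.02 * (years - years[0]) + rng.normal(0, 0.2, years.size)

        # Five hypothetical neighbouring stations sharing the regional signal...
        neighbours = [regional + rng.normal(0, 0.15, years.size) for _ in range(5)]
        # ...and one suspect station that develops a spurious extra 1 C after 1995.
        suspect = regional + np.where(years > 1995, 1.0, 0.0) + rng.normal(0, 0.15, years.size)

        neighbour_mean = np.mean(neighbours, axis=0)
        difference = suspect - neighbour_mean

        # Flag years where the suspect departs from its neighbours by more than
        # three standard deviations of the pre-1996 difference series.
        sigma = difference[years <= 1995].std()
        print(years[np.abs(difference) > 3 * sigma])   # mostly the post-1995 years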

    The theory of a significant bias in the land surface temperature stations fails empirically because the same trends they show are also shown in other ways:

    Sea temperature readings show warming:

    Gille, S.T. 2002. “Warming of the Southern Ocean Since the 1950s.” Sci. 295, 1275-1277.

    Levitus, S., Antonov, J., Boyer, T.P., and Stephens, C. 2000. “Warming of the World Ocean.” Sci. 287, 2225-2229.

    Levitus, S., Antonov J., and Boyer T. 2005. “Warming of the World Ocean, 1955-2003.” Geophys. Res. Lett., 32, L02604.

    Are there urban heat islands on the ocean?

    The balloon radiosonde record shows warming:

    http://members.cox.net/rcoppock/Angell-Balloon.jpg

    The satellite temperature records show warming:

    http://members.cox.net/rcoppock/UAH-MSU.jpg

    Melting sea ice shows warming:

    http://nsidc.org/news/press/20050928_trendscontinue.html
    http://nsidc.org/data/seaice_index/

    Glacier retreat shows warming:

    http://nsidc.org/sotc/glacier_balance.html

    Boreholes show warming:

    http://www.ncdc.noaa.gov/paleo/globalwarming/pollack.html

    Rising sea levels show warming:

    http://sealevel.colorado.edu/
    http://en.wikipedia.org/wiki/Image:Recent_Sea_Level_Rise.png

    The theory that land surface temperature readings are biased upward has no validity unless those readings are higher than the readings everyone else is getting. They aren’t. All your theories are worthless if they don’t match the evidence. You can argue till the cows come home about how badly sited the temperature stations are, but if no bias shows up in the data, your arguments are meaningless. You’re developing a theoretical basis to explain a result that doesn’t exist.

  95. Pekka Kostamo:

    Vernon repeats in every message in a very propaganda-like manner the same claim “the error the site will be reporting will be between 1-5 degrees C”. I wonder where this comes from?

    I could believe in such large impacts of imperfect siting on the daily max temperature on a calm and sunny day. Mixing by wind and/or shading by clouds reduces the UHI impact substantially both day and night. On most days in most locations the UHI is not detectable.

  96. JamesG:

    Since we know that the US is over-represented, and that leaving the affected stations out won’t affect the global graph, or the US graph overly much, then why on earth are you guys still arguing? Do you have to argue everything on principle? Anthony Watts is correct – just accept it. The reason right-wingers have such a strong argument is because the idea that you can correct data from unknown errors (up or down) is so obviously dumb that you look like you don’t care if data is accurate or not. Imagine if you said “Well done Anthony, we’ll take out these poor stations when you’ve finished the survey”, and you then did the new analysis with no change in the results. You’d pull the teeth of the opposition, save a bit of extra analysis effort, and we could all move on.

    Incidentally are you all still unaware that the administration knew the levees were going to break but didn’t tell anyone? Not only that but the maintenance engineers had been sent to Iraq. The myth that Katrina resulted from AGW has conveniently obscured this info and let them off the hook. Happy about that are we? See http://www.gregpalast.com for the truth. Lessons? Sticking to the facts is always the better idea.

  97. Lawrence Brown:

    Well, the President of the United States is visiting New Orleans today, the second anniversary of Katrina. Thank God for that! The first time he went he regaled the residents with how he used to sow his wild oats in N.O. during his youth, and how poor Trent Lott lost one of his homes. Hope his script is an improvement this time.

    Found this site that gives the data sources for the global maps from GHCN data, and breaks down these sources. It clearly shows the land data sources for Vernon, and anyone else who’s interested. There’s also an input calculator for generating trend maps. http://data.giss.nasa.gov/gistemp/maps/

    Using the default data, except changing the type of projection to polar, shows that the poles are far ahead of the rest of the planet and in the most danger from global warming.

  98. Hank Roberts:

    JamesG, you haven’t been reading the thread carefully, check your assumptions please.

    There’s no list of “affected stations” — there are claims station data should be thrown out based on pictures.
    Looking at the data from the station tells you the record, a picture shows what it looks like today.
    The report you wish for has been done excluding all urban stations (made no difference).
    Error correction is done routinely and tested regularly.
    Much of contemporary science seems “obviously dumb” til you study it.

  99. dhogaza:

    The reason right-wingers have such a strong argument is because the idea that you can correct data from unknown errors (up or down) is so obviously dumb that you look like you don’t care if data is accurate or not.

    “Science is dumb, scientists dumber”

    That about sums it up, I guess.

  100. Dan:

    re: 99. I would add to that slightly to say “We do not understand or want to learn about science and we can not admit when we are wrong. Therefore science is dumb, scientists dumber.” ;-)

  101. Ray Ladbury:

    James G., so… how are we supposed to correct for biases in the data if we don’t analyze the data to look for biases in it? I mean, it is not as if there aren’t checks we could use – time series, comparisons to surrounding stations, filtering, averaging. That is how you find biases, errors, etc. – not by traipsing through the poison ivy and looking for the odd barbie near a thermometer. What is more, there are completely independent trends we can look at to see if they are consistent with what we are seeing in our data. As a scientist who often has to draw conclusions with very limited data, I see a dataset like this and begin to salivate. What is more, I believe the very analysis you ask for has been done (urban stations removed), and no significant difference was seen.

  102. Falafulu Fisi:

    firstly, that modelling the climate as an AR(1) process with a single timescale is an over-simplification;

    Now, if it is not AR(1), then what order of transfer function would be the right one?

    The fact of the matter is that you either predetermine the order to be able to build a model, or else use system identification algorithms (linear or non-linear) to identify the best model order for you if you have no clue what order would best model the system.

    Here are some points:

    If you think that AR(1) is an over-simplification, then you must accept that you have full prior knowledge of the dynamics of the system. However, in reality this is not the case. On the other hand, if you accept that you don’t know a lot about the dynamics of the system, then you must accept that AR(1) is NOT an over-simplification.

    BTW, does any member of RealClimate have any knowledge of linear and non-linear system identification and state-space algorithms? The reason I am asking is that it seems you have no clue about the question you have just asked, namely whether AR(1) is an over-simplification or not. You can pretty much fit any AR model to any data; however, the AR model with minimum error can be identified via the AIC (Akaike information criterion). So saying that an AR model of order 1 is an over-simplification sounds like a comment from someone who has no clue about the subject.

    [Response: AR(1) processes have certain properties. You can examine the data, in particular the auto-correlation structure as in Schwartz's paper, and check whether those properties are found. In this case, they are not. Pretty good evidence that you are not dealing with a simple AR(1) process with a single timescale; therefore AR(1) is an over-simplification. We'll put up more details on this soon. - gavin]
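    (A minimal sketch of the kind of check described here, on synthetic data only: a sum of a fast and a slow AR(1) process stands in for a system with more than one timescale, and its sample autocorrelation is compared with the single-exponential decay an AR(1) fit would predict. No claim is made about the actual temperature series.)

        import numpy as np

        rng = np.random.default_rng(4)

        def acf(x, max_lag):
            """Sample autocorrelation function for lags 1..max_lag."""
            x = x - x.mean()
            denom = np.sum(x * x)
            return np.array([np.sum(x[:-k] * x[k:]) / denom for k in range(1, max_lag + 1)])

        # Synthetic series: a fast AR(1) plus a slow AR(1), i.e. two timescales.
        n = 2000
        fast = np.zeros(n)
        slow = np.zeros(n)
        for t in range(1, n):
            fast[t] = 0.5 * fast[t - 1] + rng.normal()
            slow[t] = 0.98 * slow[t - 1] + 0.2 * rng.normal()
        series = fast + slow

        rho = acf(series, 20)
        ar1_prediction = rho[0] ** np.arange(1, 21)   # single-timescale AR(1) decay

        # For a genuine AR(1) the two columns would agree; here the observed ACF
        # decays much more slowly at long lags than the AR(1) fit predicts, which is
        # one way a single-timescale fit can misrepresent the effective response time.
        for k in (1, 5, 10, 20):
            print(k, round(rho[k - 1], 2), round(ar1_prediction[k - 1], 2))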

  103. steven mosher:

    RE 82. Vernon. Here is one way to respond to Gavin’s inline.

    Gavin inlines:

    “Response: Your logic is the most faulty. Take the statement above, ’science is based on observation’ – fine, no-one will disagree. But then you imply that all observations are science. That doesn’t follow at all. Science proceeds by organised observation of the things that are important. You cannot quantify a microsite problem and its impact over time from a photograph. If a site’s photograph is perfect, how long has it been so? If it is not, when did it start? These are almost unanswerable questions, and so this whole photographic approach is unlikely to ever yield a quantitative assessment. Instead, looking at the data, trying to identify jumps, and correcting for them, and in the meantime setting up a reference network that will be free of any biases to compare with, is probably the best that can be done. Oh yes, that’s what they’re doing. – gavin]”

    1. Let’s accept the notion that pictures don’t matter.

    Hansen uses pictures taken from satellites (night lights) to judge urbanization.

    The pictures were taken, as best I can figure, in the mid 1990s. So yes, pictures are useless. Nightlights is junk. The pixel extent, as best as I can figure, is 2.7 km. NOW, consider what Gavin would say if the resolution of Anthony’s pictures was 2.7 km per pixel, and IF Anthony used such a crude measure of urbanization.

    So what Gavin is telling you is that a picture taken from a satellite in 1995, measuring the intensity of light from a 2.7 km square, TELLS YOU about the urban/peri-urban/rural nature of that site from 1880 to 2006. That’s a smart pixel.

    Further, the population figures used by Hansen are decades out of date, circa 1980. AND he does not correct for the known and documented errors in certain sensors, like the H-83.

    2. Bias can be identified and corrected by numerical methods. True. However, certain methods are fragile, while others are robust.

    Did Peterson or Hansen identify the issues with the Asheville ASOS? NOPE. Why? Peterson’s method is weak. This has been documented. Why does the Asheville ASOS matter? Well, Gavin mentions the CRN (the REFERENCE network). In the VERY FIRST test where NOAA compared the CRN to the historical network, they found unexpected microsite issues, issues missed by ham-fisted numerical methods. I’ll let the lead scientist of the CRN explain for Gavin, who seems to be a microclimate sceptic.

    “At the Asheville site, the effect of siting difference between the ASOS and CRN led to a ΔTlocal effect of about 0.25 °C, much larger than the ΔTshield effect (about -0.1 °C). This local warming effect, caused by the heat from the airport runway and parking lots next to the ASOS site, was found to be strongly modulated by wind direction, solar radiation, and cloud type and height. Siting effect can vary with different locations and regions as well. This term, undoubtedly, needs to be taken into account in the bias analysis if two instruments of interest are separated by a significant distance.”

    [Response: You seem to be continually confused by UHI, which is known to have a significant and largely single-signed trend over time, and microsite issues, which do not. Hansen's study was to remove UHI effects. If lights=0 in 1995, I think it's pretty likely that lights=0 in 1880 as well. And once again, I am not a micro-site-effect skeptic, I am a photos-of-stations-today-give-quantitative-and-useful-ways-to-characterise-changes-in-microsite-effects-through-time skeptic. Think about that difference and stop claiming the former. - gavin]

  104. tamino:

    Re: #103 (Steven Mosher)

    If you’re going to say things like, “Peterson’s method is weak. This has been documented,” then you’d better give a reference or link to exactly where it’s documented. I’m not going to just take your word for it.

  105. Falafulu Fisi:

    Barton said…
    You don’t throw the data out, you compensate for the biases.

    And what numerical methods do you use to compensate for the biases? Just curious to see if you really understand what you’ve just said.

  106. ray ladbury:

    #103 Steven Mosher
    You are indeed the master of the straw man argument. Actually, you can tell a lot via satellite photos–that’s why DOD uses them. That’s why citrus farmers pay for photos of Brazilian orange groves to help them anticipate orange juice futures. And, yes, where there are no lights at night, you have less urbanization (viz. the difference between N. Korea and S. Korea, or Nigeria and Zaire).
    Now, pray, what does a picture of a station tell us about the data? It might make us wonder about a possible bias, but if we’ve looked at the data we’ll know that already. In fact, if we see something suspicious in a photo, what will we do? Go to the data and see if we see anything funny confirmed. You learn a lot more about the data from the data. I’d have thought that would be obvious.

  107. Falafulu Fisi:

    ray ladbury said…
    You learn a lot more about the data from the data.

    In what way, Ray? Look, if you can’t use sophisticated analytical algorithms for analyzing images, then your statement above is meaningless. BTW, please don’t prompt me to start going into the area of computer vision and data mining for image analysis, since it is one of the areas in which I am interested in algorithm development.

  108. Hank Roberts:

    > methods …. used to compensate for the biases?

    You probably know this is a question people take quite seriously and the methods are being checked and will be improved as more data comes in and newer stations are put in place and accumulate parallel records.
    For example http://climatesci.colorado.edu/publications/pdf/R-318.pdf

  109. dhogaza:

    Actually, you can tell a lot via satellite photos–that’s why DOD uses them.

    Perhaps not the best example, given Colin Powell’s performance at the UN a few years ago.

    (don’t get me wrong, Mosher and the rest are full of s…)

  110. JamesG:

    Hank and the others: Oddly, you are reading something I didn’t write. No, there is no list yet of affected stations – the survey is still ongoing. Yes, an analysis has been done to exclude urban stations, but the concern is about micro-site effects at rural stations: there may be few of these – I don’t know, but neither does anyone yet. Correcting for biases is hard work – why bother? Just exclude the suspect stations. It’ll make no difference except it says that you actually care about quality control. Even if you do want to further correct for bias, then photos of the stations are rather handy to tell us what biases may exist (heating or cooling), so Watts’s effort is valuable in any event. All I’m saying is stop this petty naysaying about a sensible exercise in QC – it just makes you look dogmatic, biased and silly. For those here who say they are scientists, you should know that the best quality data is the data that doesn’t need any corrections at all. If you salivate at the prospect of correcting data then you should see a shrink. And I didn’t say science or scientists are dumb: I said this particular argument is dumb, and if you’d take your blinkers off you’d see it too.

  111. Ray Ladbury:

    Falafulu Fisi, I’m not sure I understand your point, or indeed whether you have one. Images are data. Of course you learn more if you analyze them in detail. My question is what you learn from a snapshot of a site that you couldn’t learn from a detailed examination of the data – not bloody much that I can see. How else do you estimate biases and systematic errors other than by analysis of the data and the models it pertains to? And fascinating as your insights on computer vision and data mining might be, I don’t see how they are pertinent here.

  112. Ray Ladbury:

    James G: What you are ignoring is the fact that imperfect data are not bad data. There is still information content, and by correcting those data you can emerge with a better product than you would have if you eliminated them. One way to see this is to look at the likelihood – it always becomes more definite as you add information. Quality control does not consist of eliminating “bad data”, but rather of understanding all your data – their random and systematic errors. By eliminating “bad” data you may be throwing out important information, and it opens up a minefield as to how different researchers define “bad”. Understanding systematic errors, on the other hand, is usually a transparent process. And if you don’t like data analysis, might I suggest a career other than science.

  113. dhogaza:

    Correcting for biases is hard work – why bother?

    Because you want as many data points as possible.

    Just exclude the suspect stations.

    This is really naive.

    It’ll make no difference except it says that you actually care about quality control.

    And this is an incredibly offensive thing to say to the scientists involved at NASA (you are saying that since they’ve not all bought Brownie cameras, they don’t care about quality control).

  114. J.C.H:

    Inhofe:

    http://tinyurl.com/ynoggu

  115. Ray Ladbury:

    Re 114.
    All I can say is that I wonder what color the sky is on James Inhofe’s planet. Of course, this is a guy who equates winning a prize for research with being paid to endorse a position. Wow!

  116. Hank Roberts:

    JamesG wrote, on 30 August 2007 at 5:12 AM:
    “Correcting for biases is hard work – why bother?”

    “Math is hard.” — Barbie

    People pushing the dump-it-all talking point know the upgraded stations are going in, and that their data will allow doing the “hard work” so we get better info from the deep historical archive, after doing comparisons of the old stations running in parallel with the new.

    If much of the historical data were thrown out completely, based on these snapshots, rather than having the “hard work” done, it would be far more difficult to reach any conclusion about long-term trends, and many more years would have to elapse to have any confidence.

    Who profits from that? Where are people getting this PR talking point?

  117. Timothy Chase:

    Ray Ladbury (#106) wrote:

    Actually, you can tell a lot via satellite photos–that’s why DOD uses them. That’s why citrus farmers pay for photos of Brazilian orange groves to help them anticipate orange juice futures.

    I have seen false-color satellite images used to identify minerals by means of their spectra, and simply from the opacity of carbon dioxide to infrared (yes, the same observations that let us see carbon dioxide actually performing its role in the greenhouse effect) we are able to get detailed information on topography and surface altitude for geologic surveys.

    Figure 20. The 2-μm CO2 absorption strength (A) can be converted to topographic elevation (B). The derived elevations matches the USGS Digital Elevation Model (DEM) (C). The CO2 absorption strength image (A) is brighter for increasing strength. Because the atmospheric path length is smaller with increasing elevation, the absorption strength decreases, becoming darker in the image. The DEMs (B, C) are brighter for increasing elevation, thus are inversely correlated with the CO2 strength in (A).
    http://speclab.cr.usgs.gov/PAPERS/tetracorder/FIGURES/fig20.dem.abc.gif

    Figure 21. The 2-μm CO2 absorption strength versus USGS DEM elevation shows a linear trend with an excellent least squares correlation coefficient.
    http://speclab.cr.usgs.gov/PAPERS/tetracorder/FIGURES/fig21.co2_graph.gif

    Imaging Spectroscopy:
    Earth and Planetary Remote Sensing with the USGS Tetracorder and Expert Systems
    Roger N. Clark et al.
    Journal of Geophysical Research, 2003.
    http://speclab.cr.usgs.gov/PAPERS/tetracorder/

  118. Falafulu Fisi:

    Ray Ladbury said…
    How else do you estimate biases and systematic errors other than by analysis of the data and the models it pertains to?
    By training the software system used for real-time satellite image analysis to automatically distinguish what is normal behavior from what is anomalous behavior and alert the operators. Any anomalous images that show up, or are detected, would raise an alert.
    So the biases could have been detected much earlier, because biases are anomalous behavior.

    I believe that NASA is already doing satellite image and vision data mining:

    NASA DATA MINING REVEALS A NEW HISTORY OF NATURAL DISASTERS

    http://www-robotics.jpl.nasa.gov/groups/ComputerVision/

    NASA also developed their own freely available vision toolkit.

    NASA Vision Workbench toolkit

  119. DH:

    Re: 74 John Mashey,

    See the NSF independent investigation at:

    http://www.ce.berkeley.edu/~new_orleans/

    I will try to answer some of your questions, but the cognitive dissonance that one experiences after living in NOLA for a decade, evacuating from Katrina, dealing with FEMA, Louisiana Road Home, insurance companies, changing jobs, etc, has addled my brain.

    NOLA is one of the craziest, most mixed-up cities that I have ever lived in. Partly this comes from its history as a port city that has been in the hands of the Spanish, French, English, and US. It received the same immigrants as the industrial northeast, mostly the Irish and Italians. Add to this cultural gumbo the descendants of slaves, freed slaves, and Creoles. Diversity makes NOLA unique and gives it strength, but it is also its greatest weakness. The difficulty of coming to a consensus on any decision results in a hodgepodge of action and inaction. Sometimes I wondered how the city survived pre-Katrina. It was a city in decline, with the banking businesses moving to Atlanta, the oil companies moving to Houston, and the port becoming mechanized. This left higher education, tourism and health care as the core of its economy.

    When Katrina hit, the weaknesses of the city, state and federal governments were laid bare. The most vulnerable were the poor, since they were the most dependent upon the government for help. The immediate response was one of the most bungled bureaucratic operations in US history. The only thing that did work well was the evacuation: ~1.2 million people left the city and surrounding parishes in around 36 hours. There had been some practice. The evacuation for Georges in 1998 exposed many of the crucial deficiencies, which were largely corrected for the next evacuation, for Ivan in 2004. Subsequent fine-tuning resulted in the orderly evacuation of the people who could evacuate and chose to. The largest, most glaring failure was in evacuating the people who did not have access to a car. Plans were in the works to use school buses, but these had not been fully implemented and certainly not practiced. Such evacuations take a lot of planning, because when you are under the gun, if you have not planned and practiced ahead of time, it most likely will not get done.

    NOLA is now caught between a levee and a wet place.

    The anniversary will be marked by politicians making promises and showing their concern. When they leave and the news cycle has reset, NOLA will return to being “the city that care forgot.” The practical issues of recovery will remain. I have not heard of any of the candidates talking about realistic governmental reforms that can deal with this and future disasters. Politicians resist reforming entrenched bureaucracies run by political cronies who are hamstrung by layers of bureaucracies put in place to make sure that the political cronies don’t run off with all of the money. I am old enough to be cynical of both parties because both practice political patronage, the only difference being who receives the patronage. I am young enough to be hopeful that true reform can take place, where competent government can flourish.

    [[How much will it cost, and who will pay for it, to keep NOL viable in:
    2020
    2050
    2100
    2200]]

    To do the repairs and upgrades to protect the city will cost ~$10 billion; this includes wetland restoration. Will it be done right? Who knows, but at least the citizenry is much better informed. Who will pay? As with most federal projects, ~20% is absorbed by the state. You as a taxpayer will probably pay about as much for it as for Boston’s Big Dig. In general, I would like to see a greater contribution from individual states towards the building of infrastructure.

    [[Americans live there, and of course NOL is a sentimental favorite, but sooner or later economics matter as well:

    a) NOL/LA can afford to spend some money on its own behalf, although since LA is a net recipient of federal money, as of 2001, it got ~8B more then it sent, of which 24% came from CA, 16% from NY, 10% from NJ, 14% from (CT, WA, CO, NV). LA also got 10% from IL, 5% from TX, 5% from MI, 4% from MN, 2% from WI, and the latter states would seem to benefit more directly from having LA where it is, although presumably all of us benefit somewhat. The Mississippi River is rather valuable.

    b) There are economic benefits to having LA where it is that do not accrue to LA/NOL; I have no idea how LA captures revenue from being where it is, and how close that is to the economic value. ]]

    A geographer once noted that NOLA is built in a place where no city should have been built, but a city needs to be there, primarily due to the need for a port. Around half of the US agricultural exports pass through the port as well as other materials. With the growing demand for ethanol, the port is needed to ship it to northern refineries (it cannot be pumped through pipelines). However, the port business is very competitive and not as labor intensive as before the introduction of containers (the Warehouse District is mostly condos now). Thus, dollars flowing into the local economy from wages is relatively small.

    The other industry that the country relies upon, for better or worse, is the petrochemical industry. Luckily the refineries have the wherewithal to fund their own protection, so the $4/gal gas was short-lived and the pumping of natural gas was not severely disrupted. Again, it takes relatively few people to run these facilities, so there is not as much of a gain as you might think. Because of NOLA’s proximity to the oil platforms and the understandable NIMBY attitude in other states, the refineries aren’t moving any time soon.

    One might think that the oil from offshore drilling sites could bring some economic benefit. But, the royalties used to go to the federal government, since most platforms are outside the three mile limit for Louisiana and are under federal jurisdiction. (For onshore public domain leases, states generally receive 50% of rents, bonuses, and royalties collected). Louisiana had to lobby to receive funds back to address coastal and wetland degradation. This changed in 2006 where ~27 % of the money bypasses the Fed and goes directly to the gulf states, specifically for coastal restoration.

    A significant amount of coastal damage was done by the early oil industry cutting canals in the wetlands, allowing saltwater intrusion. Additionally, the building of levees for flood control prevented the springtime flooding necessary for the constant renewal of “land.” Another pernicious but not well known villain is an invasive species called nutria. These voracious rodents (think of a water-dwelling guinea pig the size of a small dog with the coloring of a rat) consume large quantities of marshland grasses and wreak havoc (like rabbits in Australia).

    [[c) The Corps of engineers spends money to build.
    (I.e., this is planned work).]]

    Ah, the Army Corps of Engineers (ACOE), one of New Orleans’ favorite villains, along with FEMA, supplanted corrupt politicians (we liked them so much, we kept voting them back into office) and crime as coffee house conversation. I could go on forever about the Corps, but will limit myself to just a few points. For a thorough analysis of the failure of the flood protection system (planning, funding, designing, and political), the NSF report (at http://www.ce.berkeley.edu/~new_orleans/) and articles by Michael Grunwald in the Washington Post provide a clear description of the series of errors that led to the flooding of NOLA and St. Bernard Parish.

    When I moved to NOLA, I found residents held an unshakable, bordering on religious, faith in the Corps. They were supposed to always overbuild projects (e.g., dams) and they would keep NOLA safe. When I asked about the Old River Project, they said have no fear, not going to happen (I agree with you that this is a big threat to NOLA and SE LA in general). The floodwall projects surrounding the drainage canals were nearing completion and their thick sturdy appearance belied their vulnerabilities hidden in the sandy and peaty soils beneath.

    With regard to the COE and money, there are a myriad of decision-making and organizational defects that have caused problems over the years. A prominent one is that the Corps’ budget comes from earmarks. I am not sure of the specific mechanism for how the yearly budget is set, but with the NOLA levee system there was always uncertainty from year to year. I didn’t follow it much when I lived there – the politics hid below the surface like most pork-barrel spending. This also leads to a fragmentation in planning for the whole Mississippi watershed from the federal to the local level. There is movement afoot in Congress to address this, but little headway is being made, mainly due to _____ (fill in your favorite political gripe). It is interesting to note that the presidential budget requests for the levees were the same as the congressional requests until 1993, when the presidential requests became substantially lower than the congressional requests (NSF report, Fig. 13.1). Since this disparity has spanned two administrations with several turnovers in Congress, it appears that NOLA has been a political football for a while.

    From a scientist’s perspective, the most egregious errors occurred during the design process. It has been widely reported that the design was defective. “Defective” is not the right word. Prior to beginning construction of the floodwalls, whose failures resulted in the lion’s share of flooding in NOLA, the Corps built a full scale test wall on a levee and soil nearly identical to the conditions found at the sites of the NOLA canals. The design was to take 8 feet of water above the top of the levee with a 1.25 safety factor (i.e., the design should fail at ~10 ft of water). However, the structure failed at 8 ft, primarily due to a mechanism that had not been appreciated before. So far, so good. However, these results were not incorporated in the final design. How that happened is still a mystery. Several red flags were raised along the way by the companies involved in both design and construction, but for some reason this study was not communicated effectively to the engineers.

    Such a design flaw could have been tolerated if the safety factor were high enough. But, the Corps “tradition” was to set the safety factor to 1.3 for levees (good enough for protecting mainly agricultural land), and this value was adopted for the floodwalls (not good enough for protecting a major urban center). A safety factor of 2 – 3 would have prevented most of the flooding and subsequent costs for the central portion of NOLA (i.e., west of the Industrial Canal) with all of the hospitals and Superdome.

    When I explained the floodwall story to a Russian engineer, he said “We know this well! It’s a Potemkin village!”

    [[d) Finally, there are potential subsidies from the Federal treasury for:
    - Disaster relief & rebuild
    - Flood insurance [given the pullback of private insurers]
    (i.e., these happen less predictably).]]

    Who is responsible? Upon reading the history, there were numerous decisions at all levels that led to this disaster. By law, the Corps takes complete responsibility: the subcontracted design and construction firms bear no liability. But, in the ultimate Catch-22, Congress had also passed a law in 1927 absolving the Corps of liability. Ultimately, no one had responsibility, which I think goes a long way toward explaining things.

    Who will pay? All the US citizens will contribute some, but the people affected will bear most of the burden. Perhaps rightly so, but it is tough to blame the victims of a system that was mostly beyond their control.

    What of the future? The future climate and its effect on sea level is one parameter that needs to be considered. Is it the doomsday scenario? Or is it the Pollyannaish “what global warming?” I suspect that it is somewhere close to the middle of the IPCC projections of ~3 mm/yr. Either way, NOLA, like all major coastal cities, will remain vulnerable to hurricanes. This is true regardless of the hotly debated uptick in severe storms. It doesn’t matter whether it is an active or quiet season; it only takes one. Remember that NOLA dodged a bullet with Andrew in an otherwise quiet season.

    I yearn to return to NOLA and participate in the rebuilding. Who knows what the future has in store.

  120. Eli Rabett:

    Re #103: If you actually RTFR you would realize that the picture (it is really a composite of a great number of pictures) is only useful because of the ground-truth measurements backing it up. Since the ground-truthing had only been done for the continental US, the GISTEMP folk only used the satellite method for the continental US.

    This is EXACTLY what people, including me, have been trying to tell Anthony Watts and Co., including Steve Mosher.

  121. Richard Ordway:

    re. 114: Yeah, a newspaper is also reporting the many “global-warming-is-false” “peer-review” studies as fact:

    …I mean, “if they are ‘peer-reviewed’ then they must be correct, right?

    – that global warming is not happening?” (The poor, poor, poor public that has to try to sift through all this ‘seemingly equal’ material as real science gets trashed.)

    http://www.hawaiireporter.com/story.aspx?d87f58c3-be16-4959-88e2-906b7c291fd6

  122. Falafulu Fisi:

    Ray Ladbury said…
    How else do you estimate biases and systematic errors other than by analysis of the data and the models it pertains to?
    By training the software system used for real-time satellite image analysis to automatically distinguish what is normal behavior from what is anomalous behavior and alert the operators. Any anomalous images that show up, or are detected, would raise an alert.
    So the biases could have been detected much earlier, because biases are anomalous behavior.

    I believe that NASA is already doing satellite image and vision data mining:

    “NASA DATA MINING REVEALS A NEW HISTORY OF NATURAL DISASTERS”
    http://www.nasa.gov/centers/ames/news/releases/2003/03_51AR.html

    NASA also developed their own freely available vision toolkit.

    “NASA Vision Workbench toolkit”
    http://ti.arc.nasa.gov/visionworkbench/
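    For illustration, the anomaly-detection idea above can be sketched without any computer vision at all: a rolling robust z-score on a single station’s record flags readings that jump away from their own recent baseline. Everything below (the window length, the threshold, the toy record) is a hypothetical example, not anyone’s operational method:

        import numpy as np

        def flag_anomalies(series, window=120, threshold=4.0):
            """Flag points that deviate strongly from a trailing baseline.

            series    : 1-D array of monthly temperature anomalies (toy data here)
            window    : number of trailing months used as the baseline
            threshold : robust standard deviations counted as "anomalous"
            """
            series = np.asarray(series, dtype=float)
            flags = np.zeros(series.size, dtype=bool)
            for i in range(window, series.size):
                baseline = series[i - window:i]
                center = np.median(baseline)
                spread = 1.4826 * np.median(np.abs(baseline - center))  # robust sigma via MAD
                if spread > 0 and abs(series[i] - center) > threshold * spread:
                    flags[i] = True
            return flags

        # toy example: a stable record with a spurious +3 C shift injected at month 400
        rng = np.random.default_rng(0)
        record = rng.normal(0.0, 0.5, size=600)
        record[400:] += 3.0
        print(np.where(flag_anomalies(record))[0][:5])  # first flagged months, near 400

    A real system would, of course, cross-check flagged months against neighboring stations before calling anything a bias.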

  123. Falafulu Fisi:

    To the moderator: it is appalling that messages are delayed by more than 24 hours. I have posted a few and they don’t appear at all. If you want live debate, then allow messages to appear immediately.

    [Response: Sorry, but moderation is essential to prevent spam and keep threads from being filled more than they are already with garbage. This is a managed space, not a free for all, and reflective comments are more welcome than knee-jerk responses. If you don't like it, there are plenty of free-fire forums elsewhere. - gavin]

  124. Dodo:

    Re 121. Dear Falafulu. You should understand that the tone of your messages may influence their eventual fate. I have sometimes posed impolite questions to the RC group, and noticed that such are not tolerated.

    Like what? Well, I wanted to know how a collective opinion is formed at RC when a post is signed as “group”. This was not answered, probably because I implied that “groupthink” is involved.

  125. Hank Roberts:

    FF, make sure you click “reload” after opening a page.

    (Gavin, it may be worth checking the “Recent Comments” code — that is definitely lagging way behind, and using it to navigate seems to bring up an old (cached?) version of the thread.)

    Just now, I clicked on the last comment shown in this thread from the right hand “Recent Comments” list, and it took me to what’s now #115. It looked like that was really the last comment.

    I clicked “Reload” and it now is showing me the thread up to #121.

    Probably the “Recent” list isn’t updating, but I don’t understand why it would take me to a copy of the page that doesn’t go on from the comment that “Recent” led me to — but to a copy that ends there.

    That may well be the cache in my browser not checking or not getting an update properly.

    To paraphrase Mr. Reagan: “Trust, but reload.” — Me.

    [Response: The cache should reset whenever a new comment is posted which means that the recent comments should also update. I'll try and find time to investigate.... - gavin]

  126. Lawrence Brown:

    In comment 114, JCH refers us to a site of Senator Inhofe! If he’s a credible reference, we’re all doomed. I believe he once said that global warming was the ‘biggest hoax ever perpetrated on the American people’! Where did Inhofe get his climatology degree, Walmart?!

  127. steven mosher:

    Gavin inlines.

    “[Response: You seem to be continually confused by UHI which is known to have a significant and largely single-signed trend over time and microsite issues which do not."

    Actually the consensus of peer-reviewed studies on microsite issues shows the exact opposite of what you claim. You should RTFM, but you won’t. Pielke’s study, Oke’s study, Gallo’s study…. Very simply, UHI causes can also be seen at the microsite level. The causes are the same; they are scale invariant.

    I will detail them for you.

    1. Canyon effects: these happen at the “urban level” and the microsite level. Think multipath radiation.
    2. Sky view impairment: happens at the urban level and the microsite level.
    3. Wind shelter: happens at both scales.

    1–3: it’s just geometry, Gavin.

    4. Evapotranspiration: asphalt doesn’t breathe like soil, whether it’s 10 miles of asphalt or 10 feet.
    5. Artificial heating: essentially a density question. Same cause, different scales.

    The sign of the bias for UHI and microsite effects is the same because the causes are the same.

    Essentially UHI is recapitulated on a smaller scale. My prior is that the sign of the effects will be likewise. Fortunately the published science backs me up and not you.

    Bottom line: the distribution of microsite bias has been demonstrated to be a warming bias, just like UHI. Why? Because the causes are the same. Geometry effects (canyon, sky view, shelter) and material effects (evapotranspiration and artificial heating) are the same whether the scale is 10 meters or 10 km.

    I do not see why you deny the published science on this and pretend that there is any study showing the opposite.

    "Hansen’s study was to remove UHI effects. If lights=0 in 1995, I think it’s pretty likely that lights=0 in 1880 as well. "

    Really? That’s not the issue. The issues are these:

    1. It’s light intensity, as Hansen notes in 2001.
    2. If lights = 0 in 1995, is lights = 0 in 2007? (Hansen questions this himself.)
    3. How was lights verified? (Taps foot.)
    4. Lights accuracy is 2.7 km. Given the error in site locations, how accurate is lights?
    5. Here’s a fun test: the population of Marysville in the late 1800s versus the early 1900s. That’s a trick question – gold rush. I would not assume that rural stays rural or urban stays urban, and I would not rely on 1980 population figures or 1995 satellite photos. Hansen does. You are smarter than that, Gavin.

    But if we want to speculate, we might say this: a site that hasn’t moved in 100 years, that is photo-documented today as being rural, and that is photo-documented as being in compliance with CRN guidelines should be trusted over a site located in a parking lot.

    Please say no.

    "And once again, I am not a micro-site-effect skeptic, I am a photos-of-stations-today-give-quantatative-and-useful-ways-to-characterise-changes-in-microsite-effects-through-time skeptic. Think about that difference and stop claiming the former. - gavin]”

    Well, since you ignore the science of the matter I have little choice. Go ahead and cite the study that shows that microsite contamination has a mean of zero. (Taps foot….) Further, there is more to the surface station records than photos. For example, they verify the lat/lon; in some cases the lat/lon is off by miles (think nightlights pixels, Gavin). They also verify the elevation; elevation changes have gone unrecorded (think lapse rate, Gavin). They verify the instruments. Did you know that almost 5% of the stations have used a sensor that measures 0.6 C high, and that this flaw has been documented and not corrected?

    Another way to look at this: if microsite contamination is zero, then the CRN is useless. But you have promoted the CRN here. Why? Isn’t the USHCN good enough? If the microsite bias is normally distributed with mean = 0, then why fuss at all?

    Here is my thought: I give Gavin the benefit of the doubt. Despite the priors, despite all the studies that show microsite contamination is warming-biased (just like UHI, because the causes are the same), I think the right thing to do is to use CRN guidelines. A microsite issue might cool a site or warm a site, so let’s agree to use sites that don’t have obvious issues. Psst, Gavin – you already conceded this a while back, so watch the thin ice.

    So, we agree. Use good sites. Stop the bickering. CRN is good; “parts” of the USHCN are good.

    Pick good sites. Good buoys, good satellites, good buckets, good tree rings, good samples. Get this whole instrument/data/sampling issue behind us. Agreed? Good.

    Now, how many sites for the US?

    1. As many as South America?
    2. As many as Africa?

    You pick the land mass, Gavin. I’ll count the stations that Hansen uses for that land mass, and then we will pick the same number for the US. And we will pick stations that meet the goodness guidelines of the CRN.

    OK?

    [Response: Despite your claims, there is a fundamental difference between UHI and microsite effects. In two nearby stations within a city, UHI effects will be consistent. Microsite effects will not be. Even in general, there is no reason to expect microsite effects at any two sites to be correlated in time. Thus microsite-related jumps are detectable in the records if they are significant. I said weeks ago that you should do an average of the stations that passed your criteria and see whether that gives a different answer, go ahead. That is science - test your assumption that this matters. Stop dicking about playing word games here and do some work. - gavin]
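    For what it’s worth, the test suggested in the inline response above is simple to run once a list of stations that pass the siting criteria exists: average their anomalies, average everything, and compare the trends. A minimal sketch, with entirely hypothetical anomaly data standing in for the two groups:

        import numpy as np

        def ols_trend(years, series):
            """Least-squares trend, returned in degrees C per decade."""
            slope, _ = np.polyfit(years, series, 1)
            return 10.0 * slope

        # hypothetical annual anomalies (deg C) for stations that pass / fail siting criteria
        years = np.arange(1950, 2007)
        rng = np.random.default_rng(1)
        signal = 0.02 * (years - years[0])                      # ~0.2 C/decade underlying trend
        passed = signal + rng.normal(0, 0.15, (40, years.size))
        failed = signal + rng.normal(0, 0.15, (40, years.size))

        print("trend, passed stations only:", round(ols_trend(years, passed.mean(axis=0)), 3), "C/decade")
        print("trend, all stations        :", round(ols_trend(years, np.vstack([passed, failed]).mean(axis=0)), 3), "C/decade")

    If the two numbers differ by more than the sampling noise, the siting issue matters for the trend; if not, it doesn’t.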

  128. Hank Roberts:

    Nice tidbit from Sigma Xi, full article here:
    http://www.americanscientist.org/template/AssetDetail/assetid/55905

    “Revolutionary Minds
    “Thomas Jefferson and James Madison …. recorded temperatures twice daily, at dawn and 4:00 p.m. Initially, they followed the guidelines of the Royal Society and placed their instruments in an unheated room on the north side of their homes. Madison, however, having noted ice outside when the temperature inside remained above freezing, moved his thermometer to the porch on February 10, 1787. Jefferson didn’t follow suit until 1803, but the record the two generated thereafter closely matches more-recent measurements at the Charlottesville, Virginia, area weather stations located between their plantations. …”

    Comment: I hope someone will go out and photograph the Charlottesville, Va. area weather stations, and dig the old temperature records out of the archive. Since we know the date when the first thermometer was moved from the British standard location to the revolutionary new outdoor location, and the year when the second one was moved, it will be possible to look for the jump in the measurements when the move happened, to check for any bias in the measurements, and probably to correct the older measurements using the contemporary data set.

    Nice work, Founders.
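    What Hank describes is essentially a changepoint adjustment with a documented change date. A minimal sketch of the idea, assuming the move date is known (as it is here) and using made-up numbers for the offset and the record:

        import numpy as np

        def adjust_for_move(temps, move_index):
            """Shift the pre-move segment so its mean matches the post-move segment.

            Crude homogenization: it assumes the true climate is roughly stationary
            across the comparison windows, so the offset is attributable to the move.
            In practice you would difference against a neighboring reference record.
            """
            offset = np.mean(temps[move_index:]) - np.mean(temps[:move_index])
            adjusted = np.array(temps, dtype=float)
            adjusted[:move_index] += offset
            return adjusted, offset

        # hypothetical annual means: the indoor-sited years read ~1.5 C warm (made-up value)
        rng = np.random.default_rng(2)
        pre = 14.5 + rng.normal(0, 0.4, 16)    # thermometer in an unheated room
        post = 13.0 + rng.normal(0, 0.4, 20)   # thermometer moved outdoors
        record = np.concatenate([pre, post])
        adjusted, offset = adjust_for_move(record, move_index=16)
        print("estimated offset:", round(offset, 2), "C")   # ~ -1.5, i.e. early years shifted down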

  129. Hank Roberts:

    http://www.americanscientist.org/content/AMSCI/AMSCI/Image/FullImage_200783141738_846.jpg

  130. Timothy Chase:

    Regarding microsite issues…

    Does it really make sense to say that the bias will necessarily always be positive? Or even that on the whole it will necessarily be positive? I would think that it is the average temperature which we wish to measure, and if it is the average, then for any positive deviation from that average there must exist a negative deviation of the same magnitude somewhere – at least among potential sites with actual or potential microsite issues. As such it would seem that on the whole potential microsite issues must be neutral – simply as a result of what we mean by “average.”

    Or am I missing something here?

  131. John Mashey:

    re: #118 DH
    Thanks, that was very useful: I was hoping someone who really knew NOLA well could comment. The reference looks interesting.

  132. dhogaza:

    Does it really make sense to say that the bias will necessarily always be positive? Or even that on the whole it will necessarily be positive?

    Of course not. Wee-willie-wick waving isn’t sufficient, these people need to get out and gather some real data if they want to make a case.

  133. JamesG:

    Ray Ladbury, Hank and others: You assume that suspect data must be included in the analysis as if there were a dearth of data, but in the US there isn’t, and it makes little sense for the US to be oversampled in relation to the rest of the world. Strictly speaking we probably just need enough good sites to calibrate the satellite. Urban sites are (I trust) already excluded, so rejection of suspect sites is not exactly a radical new idea. And no, rejecting sites is not the same as dumping data – the data are still available. Watts’s current findings are that around 50% of sites are affected, which still leaves a lot of compliant sites and is still far more representation than in other parts of the world. If a site is compliant now, then it is also quite a good assumption that it was compliant in the past. Another odd assumption here is that software which failed to pick up a 2 degree step change in 2000 will manage to identify less-obvious micro-site effects. Who’s being naive? Wouldn’t it be better if we just used sites that needed no adjustments whatsoever? Just maybe it’s possible – we’ll see. Can anyone argue with that being the best data? Don’t you think that would pull the teeth from any counter-arguments? Regarding QC and the possible lack of it, don’t you think that cat is already out of the bag? And stop telling me what science is or isn’t! It used to be about experimentation and finding the truth from disparate sources of information – photos too (yes, just like in CSI); data adjustment is an unpleasant side issue that is often necessary but should be avoided if possible. I strongly suspect that, had access to satellites been less easy, a photo survey like this would have been done in the first place, or at least someone might have picked up the phone and asked the observers whether a site was rural and compliant. So much cheaper!

  134. JamesG:

    Timothy Chase: Yes, you are missing the fact that errors don’t necessarily cancel; they often compound – it is really situation dependent. However, I think that if the results were plotted, then good results would be indicated by a Gaussian distribution, but a bias would likely show up as an obviously skewed distribution. This was my usual practice in data gathering: I once identified a faulty contact from a double-headed skew. Additional data may not actually be necessary.
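    JamesG’s distribution check is easy to make concrete: collect each station’s mean difference from its rural neighbours and test whether that distribution is skewed. A minimal sketch with hypothetical differences (scipy’s standard skewness test is one reasonable choice, not the only one):

        import numpy as np
        from scipy import stats

        # hypothetical: each value is one station's mean difference from its rural neighbours (deg C)
        rng = np.random.default_rng(3)
        unbiased = rng.normal(0.0, 0.3, 200)                    # symmetric scatter around zero
        biased = np.concatenate([rng.normal(0.0, 0.3, 160),     # most sites fine...
                                 rng.normal(0.8, 0.3, 40)])     # ...plus a tail of warm-biased ones

        for name, diffs in [("symmetric network", unbiased), ("warm-tailed network", biased)]:
            stat, p = stats.skewtest(diffs)
            print(f"{name}: skewness = {stats.skew(diffs):+.2f}, p = {p:.3g}")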

  135. Ray Ladbury:

    OK, let’s look at the question of bias. How might it creep in? Well, it would have to be some source of noise that we weren’t taking into account. To be of serious concern, it would have to be monotonically increasing or decreasing and always positive or negative. Steve Mosher’s comments notwithstanding (and they don’t withstand anything resembling serious scrutiny), nobody I know of has described such a source of noise.
    If such a bias were present, how would we find it? Three possible ways:
    1) With no knowledge of its cause, characteristics or severity, we could start inspecting every weather station and HOPE we find something.
    2) We could attempt to look for some sign of such a bias in the data – after all, it is bound to have characteristics of its own that deviate from our expected signal.
    3) We could look at independent sources of data that pertain to our signal and see if the trends are consistent.

    2 and 3 have already been done. 1 is probably a hopeless quest unless supplemented by 2 and 3. So without an indication from 2 and 3 that there is any sort of issue, 1 is a chimera.

  136. Ray Ladbury:

    JamesG, no, it is not that there is a dearth of data. Rather, it is that excluding data without a really, really good reason (and “it’s got errors” is not a good reason) can introduce biases into your data. It is much better to understand your data with all the errors that contribute to them.
    Moreover, changing the way the data are analyzed now would make comparisons to past behavior much more difficult. It raises questions of not just whether but exactly how you must go back and reanalyze past data, and any reanalysis can itself introduce new biases and errors.
    An example: In the 1980s, the US changed the way it calculates its unemployment rate. The changes seemed reasonable – to be included in the numerator, a person had to be “actively seeking work”, and the armed forces were included in the denominator. However, the changes introduced a downward bias in the unemployment rate that has made it very difficult to judge how we compare to past epochs or to other countries. Similarly, the changes in the way the inflation rate has been calculated since the ’90s have produced an inflation rate that is artificially low and very hard to compare to past inflation rates, despite the fact that the changes made were economically sensible. The system ain’t broke. Keep your freakin’ hands off of it.

  137. richard:

    133: “Watts’s current findings are that around 50% of sites are affected”

    By that, I suppose you are suggesting that the data from those sites are ‘bad’. Wouldn’t you have to demonstrate that by first examining the data from that site and comparing it to nearby sites, then showing that trends were different? “Guilt by photograph” would not seem to be sufficient at this stage.

  138. dean:

    The Katrina report shouldn’t be here. It’s a purely political issue and political discussions have been repeatedly removed in an effort to keep things on topic.

  139. Hank Roberts:

    > 133: “Watts’s current findings are that around 50% of sites are affected”

    You’d have to mean he’s designed an experiment, chosen a statistical test, then had raters sort the stations into good and bad by looking at the photographs (and shown that the agreement between raters is consistent and repeatable, of course), then had someone compare the data from the stations on the resulting two lists (without knowing which list is supposed to be the “good” one and which the “bad”, of course) and shown a meaningful difference between them.

    Where would this be published? Please don’t say “Energy and Environment” …
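    Once the stations have been sorted into two lists the way Hank describes, the blinded comparison could be scored with something as simple as a permutation test on per-station trends. A sketch with hypothetical trend values (nothing here comes from real station data):

        import numpy as np

        def permutation_test(a, b, n_perm=10000, seed=0):
            """Two-sided permutation test for a difference in group means."""
            rng = np.random.default_rng(seed)
            observed = np.mean(a) - np.mean(b)
            pooled = np.concatenate([a, b])
            count = 0
            for _ in range(n_perm):
                rng.shuffle(pooled)
                diff = np.mean(pooled[:len(a)]) - np.mean(pooled[len(a):])
                if abs(diff) >= abs(observed):
                    count += 1
            return observed, count / n_perm

        # hypothetical per-station trends (C/decade) for the "good" and "bad" photo ratings
        rng = np.random.default_rng(4)
        good = rng.normal(0.18, 0.05, 60)
        bad = rng.normal(0.18, 0.05, 60)    # same underlying trend in this toy case
        obs, p = permutation_test(good, bad)
        print(f"difference in mean trend: {obs:+.3f} C/decade, p = {p:.2f}")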

  140. Timothy Chase:

    JamesG (#134) wrote:

    Timothy Chase: Yes, you are missing the fact that errors don’t necessarily cancel; they often compound – it is really situation dependent. However, I think that if the results were plotted, then good results would be indicated by a Gaussian distribution, but a bias would likely show up as an obviously skewed distribution. This was my usual practice in data gathering: I once identified a faulty contact from a double-headed skew. Additional data may not actually be necessary.

    No, errors do not necessarily cancel.

    But unless you are speaking of the actual installation itself – that is, if what you are speaking of is location – microsite issues will tend to cancel, as the potential microsites with a positive bias must be perfectly balanced by those with a negative bias, and the potential microsite issues which introduce a positive bias must be perfectly balanced by those which introduce a negative bias. This follows simply from what we mean by “average” and from the fact that what we mean by “bias” is deviation from the average.

    It would only be possible to claim that all biases are positive if one redefined bias with respect to the minimum rather than the average. But this is obviously not what we mean by “bias.” As such, the claim that microsite issues “recapitulate” urban heat island effects…

    Essentially UHI is recapitulated on a smaller scale. My prior is that the sign of the effects will be likewise. Fortunately the published science backs me up and not you.

    steven mosher, 127

    … is entirely nonsensical.

    And that was my point. With regard to this, the actual shape of the distribution is irrelevant.

    It is possible that there are a great many more potential microsite issues which introduce a positive bias, but only if they tend to be much smaller than those that introduce a negative bias. It is also possible that the positive biases tend to be larger, but only if they are fewer in number. It is possible, of course, that the biased sites tend to have larger positive biases – I believe this is the skewed distribution which you are suggesting – but that would be possible only if the positively biased sites were fewer in number.

    Now it is possible that the actual locations of the sites are systematically biased. But this would have to be the subject of rigorous empirical studies. One might argue that the actual construction of the sites tends to result in a positive bias. But this would have to be similarly demonstrated. And it is worth noting that the most rigorous study performed to date of urban heat island effects has shown them to be rather negligible – as a result of cool park island effects. Likewise, it is worth noting that the trends produced by all stations essentially parallel those produced by the rural stations alone, even in terms of the detailed shape of the curves.

    And it would seem that the only way that one could “reasonably” argue that a bias is introduced into the system that grows worse over time in a way that will produce a measured trend that increases linearly over time where no actual trend exists is by assuming that it is the result of a deliberate distortion involving a great many people – including those in a great many other countries.

    But even then it would have to find its way into satellite measurements, sea surface temperature measurements, sea measurements at different depths, and borehole measurements. It would leave unexplained the rising temperatures of the lower troposphere – which nearly match the surface-station trend, although with greater variability. Then there is the expansion of the Hadley cells and the subtropical regions to the north of them, the rise in the tropopause, the rise in sea level both as the result of melting and of the thermal expansion of the ocean, the predicted cooling of the stratosphere, the radiation imbalance at the top of the atmosphere, etc.

    It would also leave unexplained our observations of what is happening to the cryosphere, including the rather dramatic decline in Arctic sea ice since 1958 – which this year alone has seen an area minimum 25% lower than the previous minimum. It would leave unexplained the declining mass balance of the glaciers, Greenland and Antarctica, the acceleration of the glaciers along the West Antarctic Peninsula, and the increase in the number and severity of icequakes in Greenland.

    Any claims with respect to the actual shape of the distribution other than those which necessarily follow from what we mean by average would have to be made on the basis of empirical studies. No doubt you would not claim that the trends we see are merely the results of distortions in how we measure temperatures, but there are many climate skeptics who still do – and there is a vast body of evidence weighing against them.

    No doubt there is some distortion in temperature measurements. But any claims with respect to the nature of this distortion would have to be the subject of rigorous empirical studies, and to claim that it is especially significant would be difficult, given the vast body of literature which suggests otherwise. Similarly, there is a far larger body of evidence beyond the surface-station literature which suggests that the rise in global temperature, and climate change itself, is quite significant.

  141. JamesG:

    Ray:
    Since the photos tell us where to look, you don’t need to look at every site. I said already that I don’t think it makes much of a difference, but it is good QC and good PR. We disagree on what constitutes good data, but what biases do you think would be introduced by excluding suspect data? A cooling bias? By using standards-compliant sites? That implies that you think the data indeed have an abnormal warming bias. IMO a small amount of trustworthy data is far better than a lot of poor, corrected data: an opinion forged from bitter experience.
    Richard: It’s no secret that micro-site effects affect temperatures – that’s why there are standards in the first place. Personally I’d exclude the sites because I just prefer “clean” data; others would analyse them. OK, fine, but the photos tell you where to start looking. Others just want to ignore the issue, which is not good politics.
    Hank: He will publish it somewhere – where doesn’t matter, as it’ll be the same text. Good and bad are already clearly defined according to official standards.

  142. Sean O:

    I agree with James G above that if a site is audited and found to be bad, throw it out. I would extend that to say that all of its data should be thrown out as well and the funding for that station should be dropped (why would we fund a station that doesn’t live up to standards?). Of course that opens up the conversation of appropriate uses of government treasure which is really what the NO conversation in this meandering thread is all about.

    I question Ray L’s analysis (135) as I understand it. You seem to imply that regular physical auditing doesn’t happen and wouldn’t matter. Is this true? I haven’t dug into the standards, so I am simply asking the question – aren’t all of the sites regularly inspected and audited? If not, then I am shocked that anyone would believe any number the stations produce. If they are audited, then why does Watts find that 50% of them are bad (is that number confirmed? a quick internet search couldn’t find the source even though it is oft repeated)? Once again this probably goes to the question of appropriate uses of government treasure.

    If the data are oversampled, then throwing out bad data is much better than trying to understand the individual errors and correct for them mathematically. Any mathematical “correction” is simply an approximation anyway, and the resulting data have larger error bars than necessary. The entire data sampling program already has enough error variation in it – no need to introduce more just because one thinks one knows how to mathematically correct outliers.

    Gavin/Vernon – you guys are talking past each other and not to each other. It is a shame because you are both intelligent and well educated. Sitting on the outside and watching it is almost comical. I think that if you would sit down for a beer, you would likely agree on more than you disagree with.

    I think that everyone gets that GCMs don’t use surface data (especially after Gavin has stated it at least half a dozen times above). I would argue that this is a problem. I know that the computer technology of 20 years ago might have been unable to handle that level of modeling, but that is likely NOT the case today. I have stated here, as well as on my own global warming site http://www.globalwarming-factorfiction.com, that we need to invest much more effort in developing these types of models to really understand what is going on and how to make significant impacts. Oops – now I am back to the appropriate use of government treasure again. Hate how that happens.

  143. L Miller:

    Sean, the problem is what constitutes a bad site.

    If a site reads higher temperatures than it should but is consistent, so that it accurately reflects changes in temperature, is it a bad site?

    If a site reads temperatures accurately today but didn’t do so in the past, and therefore does not report the temperature trends properly, is it a bad site?

    The technique of photographing sites to determine if they meet placement standards assumes the former is a bad site and the latter is a good site. To study temperature trends, however, the first site is useful and the second is not. It’s all about what you are going to use the data for.
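    The distinction is easy to demonstrate with toy numbers: a constant warm offset leaves the trend untouched, while a bias that grows over time corrupts it. A minimal sketch (the signal, offset, and drift below are all made up for illustration):

        import numpy as np

        years = np.arange(1950, 2007)
        true_anomaly = 0.02 * (years - years[0])        # hypothetical 0.2 C/decade climate signal

        constant_offset_site = true_anomaly + 1.5       # reads 1.5 C high, always
        drifting_bias_site = true_anomaly + np.linspace(0.0, 1.5, years.size)  # bias grows with time

        def trend(series):
            return 10.0 * np.polyfit(years, series, 1)[0]   # C per decade

        print("true trend          :", round(trend(true_anomaly), 2))
        print("constant-offset site:", round(trend(constant_offset_site), 2))  # same trend as the truth
        print("drifting-bias site  :", round(trend(drifting_bias_site), 2))    # inflated trend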

  144. Barton Paul Levenson:

    [[ Wouldn’t it be better if we just used sites that needed no adjustments whatsoever?]]

    No, it wouldn’t, because we’d have a much lower sample size and less small-scale resolution. You don’t throw out biased data, you correct for the biases.

    Look, there is no empirical data that is free from all biases. The fossil record is biased against preserving creatures without hard parts. The local motions of galaxies and quasars are biased by their cosmological red shifts. You don’t throw out biased data, you correct for the biases. That’s not just true in climatology, it’s true everywhere in science.

  145. Hank Roberts:

    JamesG writes: “He will publish it somewhere – where doesn’t matter as it’ll be the same text.”

    None of this silly science stuff needed, then, eh? No peer review needed?

  146. ray ladbury:

    James G., Excluding data for no good reason (and if you can correct for the biases, you have no good reason) is bad science. Should we ignore the sample standard deviation because it provides a biased estimate of the population standard deviation? No, because we understand the reasons why, and we know how to correct for it. Excluding data can exclude science. Moreover, if a site provides data of such low quality that they contain no information, then that site can and will be detected and effectively excluded in the analysis. When it comes to a choice between good science and PR, PR will have to take a back seat.

  147. Lawrence Brown:

    Dean in #138 says: “The Katrina report shouldn’t be here. It’s a purely political issue……..”

    Well, I’m not so sure. Two very powerful storms (Katrina and Rita) landing within a month in 2005 could be a portent of things to come if surface ocean waters continue to warm, and all indications are that they will. These hurricanes can have immediate tragic consequences for life and property, and we need leaders who are aware of this. In our system our leaders are political. If a dictatorship were as vulnerable to hurricanes as we in the U.S., I wouldn’t think of the problem as a dictatorial issue. These extreme events cross a number of disciplines, climate change included.

  148. Steve Bloom:

    Steve “Lex Luthor” McIntyre and the other ringleaders of this little audit charade are well aware that the huge over-sampling of the continental U.S., plus the fact that it shows significantly less trend than the globe as a whole, will mean little if any adjustment in the end. What they are really after is acceptance of the “stations that don’t meet standards should have all their data thrown out” meme, so that it can be used to attack the global land data. Consistent with this goal, they will focus on identifying and analyzing “bad” U.S. stations for as long as possible. They will say that they need to entirely finish that task before doing their own analysis of the remaining U.S. stations, which they will rig in any case. IOW the script is obvious.

  149. FurryCatHerder:

    Re #147:

    Well,I’m not so sure. Two very powerful storms(Katrina and Rita) landing within a month in 2005 could be a portent of things to come if surface ocean waters continue to warm and all indications are that they will.

    I think there is a lot of politics surrounding hurricanes and climate change. Considering that it’s after the anniversary of Katrina, and we’re just now getting to “F”, and the projection for 2006 was way off, I’m going for “the observed data isn’t matching the rhetoric”.

    Be careful with rhetoric — when rhetoric doesn’t match reality, people will use that to attack the message, even if it is otherwise valid.

  150. Timothy Chase:

    Regarding the Stephen Schwartz paper…

    I ran into a paper from 2005 earlier today that made one of these mistakes regarding response times and therefore climate sensitivity – then quickly looked up a critique. The authors of the original paper (a couple of astronomers) were out of their depth when they treated the ocean as a single box rather than running their calculations with a number of layers. Current GCMs use forty, I believe.

    The results were the same: a climate sensitivity considerably lower than what we know it to be. Other mistakes were made as well, but this was apparently the main one.

    This kind of gambit seems to be getting rather popular with the degree’d climate skeptics.

    Anyway, the critique was:

    Comment on “Climate forcing by the volcanic eruption of Mount Pinatubo” by David H. Douglass and Robert S. Knox
    T. M. L. Wigley, C. M. Ammann, B. D. Santer, K. E. Taylor
    Geophysical Research Letters, Vol. 32, L20709, doi:10.1029/2005GL023312, 2005

    Pretty short and quite readable.

  151. Lawrence Brown:

    Re 149: “Be careful with rhetoric — when rhetoric doesn’t match reality, people will use that to attack the message, even if it is otherwise valid.”

    Et tu, Brute! I’m still in admiration of the plans and the steps you’re taking in using alternative fuels and energy efficiency. But there is a definite climate component to extreme meteorological events. Whether or not there’s a trend to more frequent Cat 4 and 5 storms remains to be seen. One swallow doesn’t make a summer. But it does seem like we’re hearing about storms of greater magnitude even if the frequency of hurricanes has been lower in the past few years.

    Politics unfortunately does get to decide who gets what slice of the pie in the protective stage and in the aftermath, since politicians control the purse strings.
    Rhetoric is sometimes described as showy and elaborate language that sounds effective but is mostly empty of ideas or sincerity. I’m a true believer that there’s a cause-and-effect relationship between the strength of storms and the warmer waters in the tropical Atlantic, just north of the equator, where hurricanes are born and gain in strength as they lumber across the now-warmer waters in their path. There’s no disbelief or exaggeration for effect (hyperbole) intended. It’s counter-productive, as we learned from the deceptions of past and current administrations about going to war.

  152. Lawrence Brown:

    If there’s any doubt whether there is a strong correlation between the temperature of the Atlantic Ocean’s surface waters and the intensity of hurricanes, click on the link to the graph below, based on an August 2005 paper in “Nature” by Dr. Kerry Emanuel of MIT.
    http://www.cleartheair.org/hurricane_globalwarming_still.html

  153. Mike Alexander:

    I’ve studied Schwartz’s paper more thoroughly. He gets a very low sensitivity value of about 0.3 (corresponding to about 1.1 C temperature change from a CO2 doubling). Very crudely, this can be conveyed by his assertion of a 2.2 watt per square meter greenhouse forcing in the 20th century and a 0.57 C increase in temperature: 0.57 temp change / 2.2 forcing = 0.26, even smaller than 0.3.

    He gets the 2.2 watt per sq meter from this figure from IPCC 2001:
    http://www.grida.no/climate/ipcc_tar/wg1/fig6-8.htm

    I see about 2.0 there. I have detailed CO2, N2O and CH4 data. When I calculate using just CO2 I get only 1.2 watts per square meter for the 20th century forcing increase. But if I add N2O and CH4 using the IPCC 2001 simplified expressions I get 1.6. And if I then tack on CFC forcings from IPCC 2001 I get 1.9, so the ~2.0 from the figure checks out.
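    The CO2-only number can be checked against the IPCC TAR simplified expression, ΔF ≈ 5.35 ln(C/C0) W/m². Taking roughly 296 ppm in 1900 and 369 ppm in 2000 – approximate concentrations quoted here only for illustration – gives about the same 1.2 W/m²:

        import math

        def co2_forcing(c_ppm, c0_ppm):
            """IPCC TAR simplified expression for CO2 radiative forcing (W/m^2)."""
            return 5.35 * math.log(c_ppm / c0_ppm)

        # approximate atmospheric CO2 concentrations (ppm); illustrative, not exact
        c_1900, c_2000 = 296.0, 369.0
        print(round(co2_forcing(c_2000, c_1900), 2), "W/m^2")   # ~1.2, as in the comment above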

    Schwartz uses a 0.57 C temperature increase for the 20th century. I use the HadCRUT3v data from here:
    http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3vgl.txt

    Now if I simply read the 1900 and 2000 temperature off this sheet I see they are different by 0.53. If you look at the CRUTEMv data the difference is 0.55 so I can see where the 0.57 might have come from.

    But when I pass a trend line (a running 20-year linear regression) through the data and look at the 1900-2000 change I get about 0.77 C. And if I look at the 1905-2005 difference I get about 0.87 C. Since Schwartz finds a 5-year lag, it makes sense to compare the 1905-2005 temperature change to the 1900-2000 forcing change. Even with this revised temperature change and the 1.9 forcing (instead of 2.0), the sensitivity value comes out at 0.46 (corresponding to a 1.7 degree increase from a CO2 doubling).

    Now the IPCC 2001 report (same figure as before) suggests -0.2 net forcing from solar + aerosols. With their 2.0 watt/m^2 value for greenhouse gases this gives 1.8 for the total forcing. With my 1.9 value it would come to a 1.7 watt/m^2 total forcing. Even with this reduced forcing and the 0.87 temperature change I get, a sensitivity of 0.51 is obtained, which corresponds to 1.9 C for a CO2 doubling.

    The consensus sensitivity is about 3 C for a CO2 doubling. To get that we need another 0.46 C temperature increase that is unrealized. This is what is considered “in the pipeline”.
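    The conversions being used throughout the comment above – sensitivity λ = ΔT/ΔF, then warming per doubling = λ × 3.7 W/m² (the standard forcing for doubled CO2, i.e. 5.35 ln 2) – are compact enough to reproduce in a few lines, using the numbers already quoted:

        F_2XCO2 = 3.7   # W/m^2, standard forcing for doubled CO2 (5.35 * ln 2)

        def warming_per_doubling(dT, dF):
            """Implied warming for 2xCO2 given a temperature change dT (C) and forcing dF (W/m^2)."""
            sensitivity = dT / dF              # K per (W/m^2)
            return sensitivity, sensitivity * F_2XCO2

        cases = [("0.57 C over 2.2 W/m^2 (Schwartz-style numbers)", 0.57, 2.2),
                 ("0.87 C over 1.9 W/m^2 (trend-line dT, GHG-only forcing)", 0.87, 1.9),
                 ("0.87 C over 1.7 W/m^2 (trend-line dT, net forcing)", 0.87, 1.7)]
        for label, dT, dF in cases:
            lam, per_doubling = warming_per_doubling(dT, dF)
            print(f"{label}: lambda = {lam:.2f}, ~{per_doubling:.1f} C per doubling")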

    The source of this delayed heating is the effect of the deep ocean. Using a simple model in which I divide the world into three compartments – (1) land plus the atmosphere above land, (2) surface ocean plus the atmosphere above the ocean, and (3) the deep ocean – I can explore the effects of lags on the system.

    The land and deep-ocean compartments each have an interchange rate with the surface-ocean compartment. The land compartment has a small heat capacity and so has little impact on system dynamics over a period of decades (which is the time scale of interest, since my trend serves to produce a running 20-year smoothing).

    Assuming a fairly rapid interchange between land and ocean air masses, the effect of deep-ocean interchange on system dynamics in the multi-decade time frame is not very important. What the deep ocean *does* affect is the extent of the temperature change due to a forcing change. For example, over a period of a few decades, the impact of a step increase in forcing will be about 85-90% of the full effect for a deep-ocean exchange rate consistent with a ~2000-year equilibration period. Increasing the exchange rate to what would produce equilibrium in, say, ~1000 years would reduce the impact to about 80% or so. That is, the *true* sensitivity would be 18-25% greater than the figures above suggest, depending on the importance of deep-water heat exchange.

    Applying these corrections to my 0.51 sensitivity value from above, I get 0.6-0.64 for the sensitivity, corresponding to 2.3-2.4 degrees for a CO2 doubling. It’s still not 3 C, but if I assume larger deep-ocean interchange rates I can make it bigger. The issue is that this heating “in the pipeline,” which reflects deep-ocean effects, isn’t going to show up quickly. It might take a couple of centuries to manifest, so is it relevant? Obviously it will be relevant when looking at paleoclimate forcing versus response behavior, but is it relevant today?
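    For readers who want to play with the idea, here is a minimal sketch of the kind of three-compartment model described above (land, surface ocean, deep ocean) under a step forcing. The heat capacities, exchange coefficients and feedback parameter are illustrative placeholders, not tuned values; the feedback of 1.2 W/m²/K corresponds to roughly 3 C per doubling:

        import numpy as np

        # --- illustrative parameters (placeholders, not tuned to observations) ---------
        lam = 1.2      # W/m^2/K   climate feedback (~3.7/3, i.e. roughly 3 C per doubling)
        C_land = 2.0   # W*yr/m^2/K  land + overlying air
        C_surf = 10.0  # W*yr/m^2/K  ocean mixed layer + overlying air
        C_deep = 500.0 # W*yr/m^2/K  deep ocean
        k_ls = 5.0     # W/m^2/K   land <-> surface-ocean air exchange (fast)
        k_sd = 0.7     # W/m^2/K   surface <-> deep ocean exchange (slow)
        F = 3.7        # W/m^2     step forcing (doubled CO2), switched on at t = 0

        def run(years=300.0, dt=0.05):
            """Forward-Euler integration of the three coupled heat budgets."""
            n = int(years / dt)
            T = np.zeros((n, 3))    # columns: land, surface ocean, deep ocean (K above start)
            for i in range(1, n):
                Tl, Ts, Td = T[i - 1]
                dTl = (F - lam * Tl + k_ls * (Ts - Tl)) / C_land
                dTs = (F - lam * Ts + k_ls * (Tl - Ts) + k_sd * (Td - Ts)) / C_surf
                dTd = k_sd * (Ts - Td) / C_deep
                T[i] = T[i - 1] + dt * np.array([dTl, dTs, dTd])
            return np.arange(n) * dt, T

        t, T = run()
        for yr in (10, 30, 100, 300):
            i = int(yr / 0.05) - 1
            print(f"year {yr:3d}: land {T[i, 0]:.2f} C, mixed layer {T[i, 1]:.2f} C, deep ocean {T[i, 2]:.2f} C")

    With these placeholder numbers the land and mixed layer do most of their warming within a decade or two, while the deep ocean – and hence the last fraction of the equilibrium response – takes centuries, which is the “in the pipeline” point being made above.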

  154. Michael Tobis:

    Apropos of nothing in particular, I thought I ought to report this here.

    I just got an unsolicited invitation in my email. I contacted them to see if it was spam, and they told me I was invited as an “influential blogger”. Shucks. Flattery will get you somewhere, but I am way behind on this proposal I need to get out, so I’ll pass.

    A global shipping insurance concern is basically putting on a global-change event for executives. My sense is that this is for real. I’m not sure whether I’m supposed to tell my six billion closest friends more about the event or not, so I’ll just quote the interesting bit from the invitation:

    To address the major disconnect surrounding future risks for businesses, Marsh, the world’s largest insurance broker and strategic risk advisor, has created the Center for Risk Insights to help businesses better understand the risks they face today and to identify and explain threats looming on the horizon. As part of this effort, the Marsh Center recently surveyed over one hundred board level executives of Fortune 1000 companies to gauge their perceptions on the greatest business risks. Highlights of the findings include:[...]

    - An astounding half of executives surveyed do not believe that global climate change resulting in long-term environmental and economic impacts is likely to occur.

  155. Fernando Magyar:

    Re 154,

    I guess you could look at those numbers and conclude that the glass is half full. I’d also be curious to know which industries they represent.
    Regards, from one of your six billion closest friends.

  156. Lawrence Brown:

    Re 154:”- An astounding half of executives surveyed do not believe that global climate change resulting in long-term environmental and economic impacts is likely to occur.”

    There’s a story (I don’t know whether it’s apocryphal or not) that President Eisenhower was surprised to learn that half of the American people were below average in intelligence. (I’m aware of the difference between median and mean.) Maybe this is the half that obeyed the ‘Peter Principle’ and rose to levels above their capabilities.

  157. TokyoTom:

    James Annan addressed Schwartz’s paper a week before this post. It’s a surprise that no one has noted it:

    http://julesandjames.blogspot.com/2007/08/schwartz-sensitivity-estimate.html

    James has also commented on Schwartz’s earlier Nature piece concerning the IPCC: http://julesandjames.blogspot.com/2007/09/pile-on.html#links

  158. TJT:

    Just today I listened to a lecture on the active sun and its implications for climate change, given by one of our physics professors (solar-magnetospheric physics). His main point was that the sun’s poloidal magnetic field activity (with geomagnetic field activity used as a proxy) has increased and that this has profound effects on climate. He did not identify any specific mechanism (he alluded to effects on cosmic rays, of course brought up the Svensmark et al. paper, and mentioned effects on the ozone layer and some other vague things).
    He said that the solar scientists in the IPCC did not take this effect (the effect of the sun’s poloidal magnetic field on Earth’s climate) into account, and have therefore underestimated the solar influence on recent warming in the 4th IPCC report.

    I challenged this idea (which I probably should not have done, since he has a big influence here…), but got only the reply that the IPCC solar scientists have not taken this poloidal-field effect into account in their estimates, because it is only during the past couple of years that scientists have even known about it. He was very specific that solar maxima, increased sunspot numbers, etc. are related to the sun’s toroidal field, but that it is the poloidal field that seems to have more effect on Earth’s temperature.

    How? In my opinion, he did not explain this satisfactorily.

    It also seems that at least some scientists studying solar activity and geomagnetism put a lot of emphasis on the fact that the geomagnetic activity index curve (for the poloidal component, at least) seems to match Earth’s surface temperature curve, and hence the temperature increase is explained!

    I would expect a bit more intellectual honesty…

    So, what is the deal with the sun’s poloidal and toroidal field strengths and their effect on Earth?

  159. Fuser:

    I agree with Paul. Also, we should stop spending time, money, and effort on things like Iraq and start concentrating on this problem, which sometimes seems to be a lost cause.

    Thanks,

    fuser

  160. Barton Paul Levenson:

    [[So, what is the deal with the sun’s poloidal and toroidal field strengths and their effect on Earth?]]

    What the hell does “poloidal” mean? I’ve studied astronomy since the 1970s and I never heard that term before.

  161. Barton Paul Levenson:

    [[What the hell does “poloidal” mean? I’ve studied astronomy since the 1970s and I never heard that term before.]]

    Well, apparently it’s a real technical term — “A divergenceless field can be partitioned into a toroidal and a poloidal part.”

  162. Hank Roberts:

    Google Scholar: Results — about 426 for +poloidal +climate.
    170 articles since 2002; 16 articles during 2007.

    This one is behind a paywall; it sounds like a current review, and a good reminder of how short-term our information is.

    http://www.blackwell-synergy.com/doi/abs/10.1111/j.1468-4004.2007.48223.x

    “Following 20 years without satellite magnetic coverage, the first five years of the International Decade for Geopotential Field Research have provided the geomagnetic community with a wealth of high-quality data from several near-Earth satellites: Ørsted, SAC-C and CHAMP. Combined with ground-based and aeromagnetic data, this has opened numerous opportunities for studies ranging from core flow, mantle electrical conductivity, lithospheric composition and ocean circulation to the dynamics of ionospheric and magnetospheric currents using one or more satellites. Here, I review our current state of knowledge, and discuss the challenges to maximizing the utility of the satellite data.”

    One full text article this year discusses what’s known:

    http://arxiv.org/abs/physics/0703187

    “Since the beginning of the 20th century, the correlation in the 11-year solar cycle between the sunspot number and geomagnetic aa-index has been decreasing, while the lag between the two has been increasing. We show how this can be used as a proxy for the Sun’s meridional circulation, and investigate the long-term changes in the meridional circulation and their role for solar activity and terrestrial climate. …”

  163. Fred Staples:

    I apologise for posting this comment here, but I wanted to reply to Mr Barton Levenson (“The CO2 problem in 6 easy steps”, post 253), as the original thread seems to have closed.

    The equations Mr Levenson used relate incoming shortwave solar radiation to outgoing longwave radiation and, via the Stefan-Boltzmann law and the geometry of the Earth, arrive at an equilibrium temperature of about 255 K. The 33 K increase in temperature at the surface of the Earth is then attributed, by one explanation or another, to back radiation from the atmosphere.
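
    A quick sketch of that bare-rock equilibrium calculation (the 1368 W/m^2 solar constant and 0.3 albedo below are the usual round values, assumed here rather than taken from the earlier thread):

        # Bare-rock equilibrium: absorbed shortwave = emitted longwave.
        SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
        S0, ALBEDO = 1368.0, 0.3   # solar constant and planetary albedo (assumed round values)

        absorbed = S0 * (1 - ALBEDO) / 4   # ~239 W/m^2, averaged over the sphere
        t_eq = (absorbed / SIGMA) ** 0.25
        print(round(t_eq))                 # ~255 K, the "bare rock" figure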

    What happens if we apply the same radiative equations to something which we can control, measure, and understand: a real greenhouse?

    A much simplified account, based on the earth’s greenhouse equations, follows:

    Shortwave radiation from the sun, W per square metre, falls on the glass, passes through, and warms the interior, which emits longwave radiation. This in turn warms the glass to a temperature T’, at which it emits longwave radiation W both outwards (balancing the sun’s input) and inwards.

    To balance the back radiation, the greenhouse interior must warm further and emit longwave radiation 2W at a higher temperature T.

    The ratio T/T’ is then the fourth root of 2, or about 1.189.
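
    Written out in code, the balance just described is simply the following (this merely restates the argument before turning to why it fails):

        # The single-pane balance described above, restated (not endorsed):
        # the glass at T_glass emits W both outward and inward, so the interior
        # must radiate 2W:  sigma * T_in**4 = 2 * sigma * T_glass**4
        ratio = 2 ** 0.25
        print(round(ratio, 3))  # 1.189 -- the fourth root of 2 quoted above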

    This looks plausible (when the ratio is applied to the Earth, the author calculates a “greenhouse” increase of 38 K), but it is, sadly, nonsense. Nonsense to the fourth power, one might say.

    The greenhouse interior is warmed by direct sunlight and heats up because it is insulated (the thermal conductivity of glass is low and convection is suppressed). The radiative effect, which must be present, is negligible. The much-derided G and T paper explains this at great length.

    Mr R. W. Wood (whom G and T cite as required reading for all climatologists) demonstrated the absence of a significant radiative effect by actually measuring the temperatures in a glass-covered greenhouse and an IR-transparent one simultaneously.

    So, my question to Mr Levenson is this: why do we attribute our life-giving 33 K to back radiation? What is wrong with the adiabatic lapse rate, which Mr O’Reilly laboured to explain, and which we can see and feel whenever we use an aerosol can or drive up a hill?

  164. Ray Ladbury:

    TJT (#158), given the amount of handwaving he was doing, I’m surprised he didn’t take flight. An assertion without a mechanism is meaningless. The only mechanism these solar types have come up with is GCR. Only one thing wrong: GCR fluxes aren’t changing. During the space era they have been remarkably consistent, to the point where the models we use to calculate bit-flip rates in memories are basically the same as they were in the 80s. Neutron fluxes (a proxy for GCR) have also not changed since the 1950s. If there were a solar/GCR mechanism, one would expect it to be active only when the GCR flux was changing, since it would act by changing the amount of solar radiation that makes it to Earth.
    So, on the one hand, you have a mechanism that is known to be active (greenhouse heating by CO2) and is well understood and that we know should be increasing. On the other hand, you have a mechanism that nobody can quite parse out, the cause of which doesn’t seem to be present, but that solar types are just sure must be the real culprit. Hmm, which to believe?
    Unfortunately, being an expert in one scientific field is no guarantee of competence in another–nor of being able to realize whether you are competent in that other field.
    BTW, you can read about geomagnetism here:
    http://www.phy6.org/earthmag/mill_5.htm
    Specifically poloidal and toroidal fields:
    “In a spherical geometry, like that of the Sun, magnetic fields can be divided into two classes, toroidal and poloidal fields. If the field is axially symmetric, poloidal field lines lie in meridional planes (like those of the dipole field) while toroidal field lines form circles around the axis of symmetry.”

  165. Hank Roberts:

    Fred, this may help; an agricultural greenhouse isn’t in a vacuum.

    http://www.ems.psu.edu/~fraser/Bad/BadGreenhouse.html

  166. Barton Paul Levenson:

    [[So, my question to Mr Levenson is this: why do we attribute our life-giving 33 K to back radiation? What is wrong with the adiabatic lapse rate, which Mr O’Reilly laboured to explain, and which we can see and feel whenever we use an aerosol can or drive up a hill?]]

    Because it’s the greenhouse effect that gives you the surface temperature from which you start counting off the lapse rate.

  167. Fred Staples:

    Mr Levenson’s comment, 166, is interesting.

    If I understand it (which is by no means certain), he is saying that the Earth’s surface temperature is increased from the “bare rock” level of 255 K to 288 K by back radiation from the atmosphere. Adiabatic convection then cools the atmosphere and establishes the observed lapse rate from 288 K back to 255 K at the top of the troposphere. The role of the greenhouse gases is presumably to increase the warming of the atmosphere by their enhanced absorption, and simultaneously to increase the back radiation.

    But the atmosphere is warmed primarily by the earth; heat can flow only in one direction, and the earth’s surface is warmer than the atmosphere.

    The ‘blanket’ theory (the atmosphere as a whole has a low thermal conductivity, so it is warmer on the inside) is more plausible, but assigns only a minor role to radiative effects, which is what Mr Wood found with his greenhouses.

  168. Fred Staples:

    The link from Mr Roberts, 165

    [Response: edited--the reader has provided you with a link to a rigorous explanation of how the atmospheric greenhouse effect works. We'll end this particular thread with that. If you'd like to try to overturn the Planck radiation law, submit a publication to the peer-reviewed literature. We'll be happy to feature it if it gets published (well, anywhere but "Energy and Environment"). -mike]

  169. Timothy Chase:

    Fred Staples (#167) wrote:

    The role of the greenhouse gases is presumably to increase the warming of the atmosphere by their enhanced absorption, and simultaneously to increase the back radiation.

    Actually, greenhouse gases primarily cool the atmosphere through their direct effect of re-radiating thermal radiation; the troposphere is warmed principally by moist convection. Collisional energy delivered by moist convection is converted into thermal radiation, in addition to the thermal radiation that greenhouse gases absorb from emission by the surface and by other greenhouse gases.

    Incidentally, I should point out that Levenson agrees with all of this, as he understands the scientific explanation of the greenhouse effect.

    Fred Staples (#167) wrote:

    But the atmosphere is warmed primarily by the earth; heat can flow only in one direction, and the earth’s surface is warmer than the atmosphere.

    “Heat,” as thermal (kinetic) energy, flows in both directions, even with non-greenhouse gases and with liquids and solids. To argue otherwise would be equivalent to presupposing the existence of an entropy-creating anti-Maxwell’s demon. However, heat flows predominantly coldward; this, too, is statistical in nature.

    Thermal energy flowing back to the surface in the form of thermal radiation, slowing the rate at which the surface is able to cool, is the proper application of the “blanket” analogy, one of the analogies by which the greenhouse effect is first explained to those who have not yet grasped the actual physics.

    Fred Staples (#167) wrote:

    The ‘blanket’ theory (the atmosphere as a whole has a low thermal conductivity, so it is warmer on the inside) is more plausible, but assigns only a minor role to radiative effects, which is what Mr Wood found with his greenhouses.

    If heat, in the form of kinetic (translational) energy, flowed in only one direction as you claimed above, then the atmosphere would have to be the same temperature as the surface, since it would never be able to transfer this translational energy back to the surface. And as I pointed out in comment #555 of the discussion of “Part II: What Angstrom didn’t know,” we have a great deal of detailed evidence for the radiative effects of greenhouse gases. In fact, we are able to measure the back radiation.

    If you do the math, rather than armchair theorizing that assumes random kinetic energy can be transferred in only one (coldward) direction, you will see that the mainstream view works. Yours does not.

  170. Barton Paul Levenson:

    [[If I understand it (which is by no means certain), he is saying that the Earth’s surface temperature is increased from the “bare rock” level of 255 K to 288 K by back radiation from the atmosphere. Adiabatic convection then cools the atmosphere and establishes the observed lapse rate from 288 K back to 255 K at the top of the troposphere. The role of the greenhouse gases is presumably to increase the warming of the atmosphere by their enhanced absorption, and simultaneously to increase the back radiation.]]

    The greenhouse effect heats the Earth from 255 K to what would be 336 K or so in the absence of sensible and latent heat exchange. That 81 K difference is cut back to only 33 K by conduction, convection, and evaporation of seawater.

    [[But the atmosphere is warmed primarily by the earth; heat can flow only in one direction, and the earth’s surface is warmer than the atmosphere.]]

    “Heat can flow only in one direction” is completely wrong if you’re talking about radiative effects. Cooler objects radiate to warmer ones all the time, as well as vice versa.
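
    To put rough numbers on that, here is a minimal sketch (the 288 K and 255 K used below are purely illustrative round figures for the surface and for a cooler atmospheric layer):

        # Both streams of thermal radiation exist; only the *net* flow is one-way.
        SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

        def blackbody_flux(t_kelvin):
            """Blackbody emission at t_kelvin (emissivity of 1 assumed)."""
            return SIGMA * t_kelvin ** 4

        up = blackbody_flux(288)    # warm surface, ~390 W/m^2
        down = blackbody_flux(255)  # cooler layer radiating back down, ~240 W/m^2
        print(round(up - down))     # net flux ~150 W/m^2, from warm to cool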

  171. Ray Ladbury:

    Barton and Fred, might I suggest we amend Fred’s statement to “net heat transfer goes in only one direction”? And of course we have to assume that no work is done to transfer heat in the opposite direction, etc.
    The blanket analogy is quite imperfect, especially where convection and latent heat are important. Yes, I know I’ve used it too, but what is really happening is that by altering the temperature profile of the atmosphere, GHGs change the net radiative cooling.

  172. Hank Roberts:

    > 165, 168
    Also recommended:
    http://www.ems.psu.edu/~fraser/Bad/BadFAQ/BadGreenhouseFAQ.html