RealClimate

Comments


  1. Gavin – It is interesting – at least from eyeballing the final figure, we’d need to see some statistics to be certain – how the individual model ensemble runs match not just the magnitude of OHC variability since the 1970s but the timing of the variability so well. That seems to suggest the variability is driven more by external forcing (the volcanoes, as you suggest) than internal variability. Otherwise, there’s no reason to expect the ensembles to match up the timing of the swings in OHC. If correct, that serves as yet another reminder that one needs more than two or three decades of data to properly estimate decadal climate variability.

    Comment by Simon D — 19 Jun 2008 @ 2:29 PM

  2. Most timely. Thank you.

    Comment by David B. Benson — 19 Jun 2008 @ 3:37 PM

  3. Why does it look like the ocean cooling leads the eruptions? Is that an artifact of the smoothing?

    [Response: Yes. There is a 3 year smooth line of the volcanic forcing so that you can see the effect. - gavin]

    Comment by Sili — 19 Jun 2008 @ 4:39 PM

  4. Thanks so much! Very timely indeed!

    If you have no objections, I would like to put a teaser of this post in my blog, with a link to RealClimate.

    [Response: no problem. - gavin]

    Comment by Tenney Naumer — 19 Jun 2008 @ 6:14 PM

  5. Thanks for the very clear discussion and update – it is greatly appreciated.

    RE#1 Keep in mind that any quasi-periodic oscillations in the climate system (such as the best understood, the El Nino / Southern Oscillation) will themselves be affected by a warming ocean. Also, even long datasets like the 20th century El Nino / La Nina record are of no use in predicting the onset and strength of the next El Nino.

    Similarly, the increase in global ocean heat content is probably going to have some effects on the global ocean circulation – effects that are also very difficult to predict. A good example there is the Southern Ocean, where indications seem to be that the ocean may start warming a bit faster:

    http://www.sciencedaily.com/releases/2006/12/061205113036.htm

    ScienceDaily (Dec. 5, 2006) — The Southern Ocean may slow the rate of global warming by absorbing significantly more heat and carbon dioxide than previously thought, according to new research.

    The Southern Hemisphere westerly winds have moved southward in the last 30 years. A new climate model predicts that as the winds shift south, they can do a better job of transferring heat and carbon dioxide from the surface waters surrounding Antarctica into the deeper, colder waters.

    However, what that means is that land and atmospheric temperatures might be a bit lower, but it also means that the ocean will warm more rapidly, possibly increasing the thinning of ice shelves and sea ice.

    (regarding the carbon issue, a data-based study indicates that the increased winds will not strengthen the Southern ocean carbon sink, but will instead weaken it:
    http://www.sciencedaily.com/releases/2007/05/070517142558.htm
    The same appears true in the North Atlantic:
    http://www.sciencedaily.com/releases/2007/10/071022120224.htm )

    Comment by Ike Solem — 19 Jun 2008 @ 6:40 PM

  6. Great snakes! In the absence of significant volcanic eruptions the trend line eventually goes up FAST. It seems the models at least are telling us that, once the aerosol lid comes off, the GHGs lurking in the background resume pushing the systems hard. Looks really bad for us if we don’t get any significant eruptions in the next decade or two. Yes, I’ve already given up any hope of CO2 reductions. Thus it seems we are now at the mercy of Nature.

    Comment by cat black — 19 Jun 2008 @ 7:52 PM

  7. Quote from the Nature Summary.

    “We add our observational estimate of upper-ocean thermal expansion to other contributions to sea-level rise and find that the sum of contributions from 1961 to 2003 is about 1.5 ± 0.4 mm yr⁻¹, in good agreement with our updated estimate of near-global mean sea-level rise.”

    http://www.nature.com/nature/journal/v453/n7198/abs/nature07080.html

    So the 3.2 mm/yr estimate is off by a factor of 55% ???

    http://sealevel.colorado.edu/current/sl_noib_ns_global.jpg

    [Response: No. You are comparing the current (14 yr?) trend with the trend over 40 years. There has been an apparent acceleration of late.... - gavin]

    Comment by Lowell — 19 Jun 2008 @ 8:08 PM

  8. So, does this change the “amount of global warming in the pipe” a la Jim Hansen?

    [Response: Not substantially - the two GISS models bracket the observed trends and the respective "in the pipeline" values are between 0.5 and 0.6 deg C. - gavin]

    Comment by Chris ODell — 19 Jun 2008 @ 8:09 PM

  9. Good post, interesting info. Just wanted to let you know, after a long and much-appreciated absence, I am sure, that “polar cities” have been given a second nickname, that is “Lovelock Cities” — in honor of the great man of England, Dr James Lovelock. He has seen the images of these blueprints and approved, saying in an email: “Thanks for showing me those images, Danny. It may very well happen and soon”.

    Of course, Dr Lovelock says it might happen in 20 to 40 years, by 2050 at the latest. I am a bit younger than him, so I am still saying it won’t happen for another 500 years, but that it’s time to start thinking about these ideas now.

    Welcome to the Lovelock Cities blog here:

    http://northwardho.blogspot.com

    Comment by Danny Bloom — 19 Jun 2008 @ 8:13 PM

  10. As in all such sets it takes a while to get the mistakes out and it is not good to jump on the next new thing. BTW, the linear fit in Levitus is in error

    Comment by Eli Rabett — 19 Jun 2008 @ 9:01 PM

  11. If ocean temperatures are in fact rising, and what you are seeing is not just natural long-term variability (which is a real possibility), at what point do you think the speed and duration of off-shore winds (before dawn for example) might be measurably increased? If average off-shore (from land to water) wind intensities actually were also rising and could be charted, perhaps this would be a way to reinforce the implications inherent in the data and the theory?

    Comment by Vern Johnson — 19 Jun 2008 @ 9:48 PM

  12. Is this new analysis likely to help resolve the ‘missing ocean heat’ puzzle that was reported a few months ago? And if so, what other significant discrepancies between the models and data remain to be solved?

    Comment by Craig Allen — 19 Jun 2008 @ 10:06 PM

  13. “…There has been an apparent acceleration of late…. – gavin]…”

    That’s not true if you define “of late” as the last 4 years. As of right now, global sea level has been declining for 2 years. Also noted is the fact that the OHC paper discussed appears to stop at 2003. Much interesting stuff has happened post 2003.

    Regards, BRK

    Comment by Brian Klappstein — 19 Jun 2008 @ 10:43 PM

  14. I find it quite curious how all of our measuring devices that are discovered to have some kind of bias tend to show cooling compared to model predictions and need corrections towards warming. None is discovered with a bias towards too much warming that needs correction. It’s as if the devices didn’t want us to believe in global warming. Radiosondes, MSU satellites, inlets and buckets, now the devices of the Argo project too…

    [Response: Your curiosity appears quite limited. The pre-war bucket corrections reduced the trend, UHI corrections reduce the trend, the XBT corrections are basically neutral in the long term, revisions to the various North Atlantic THC time series reduced the trend, the stratospheric trend in the radiosondes was reduced, .... etc - gavin]

    I recently read about NASA people being surprised about the accuracy of their satellites for measuring sea-level change. They said it happened to be more than 10 times better than expected, because they showed the “right” sea-level increase. But I fail to understand how a measuring device can get more accurate results than predicted given its physical capabilities and limitations.

    Comment by Nylo — 19 Jun 2008 @ 11:41 PM

  15. Sorry, what NASA says is that TOPEX/POSEIDON happened to give results three times more precise than expected regarding sea-level, not ten. They expected an accuracy of 5.9 inches and have found it to be 1.8 inches.

    Comment by Nylo — 19 Jun 2008 @ 11:55 PM

  16. Is there a hypothalamus in the earth’s biosystem that serves to regulate
    and reset optimal temperature, comparable to the system that performs that
    function in humans?

    Logically, it sounds like the ocean, since people do mention
    the heat releases from the ocean in the way of regulation of temperature.

    However, to myself, that sounds more like an auxiliary system turn-on feature
    to regulate an ideal temperature.

    For example, people shiver when the body’s muscles try to equalize
    the thermal differences it is sensing in reaction to fevers or the like. People
    shed excess heat through their heads, hands, feet, sweating, and blah..blah..

    In other words, what and where is the order giver, like the hypothalamus, in
    our biosystem?

    Thanks in advance.

    Comment by Cheska — 20 Jun 2008 @ 12:28 AM

  17. 9 Danny Bloom: Lovelock is right, you are wrong. In 500 years, if we haven’t gotten GW under control this century, we will be extinct. Forget about polar cities, we don’t have time to build them before going extinct.
    Environmental policy = energy policy
    Energy policy = environmental policy
    because Global Warming
    can lead to Hydrogen Sulfide gas coming out of the oceans.

    Hydrogen Sulfide gas will Kill all people. Homo Sap will go
    EXTINCT unless drastic action is taken.

    October 2006 Scientific American

    “EARTH SCIENCE
    Impact from the Deep
    Strangling heat and gases emanating from the earth and sea, not
    asteroids, most likely caused several ancient mass extinctions.
    Could the same killer-greenhouse conditions build once again?
    By Peter D. Ward
    downloaded from:
    http://www.sciam.com/article.cfm?articleID=00037A5D-A938-150E-A93883414B7F0000&sc=I100322
    ………………..Most of the article omitted………………….
    But with atmospheric carbon climbing at an annual rate of 2 ppm
    and expected to accelerate to 3 ppm, levels could approach 900
    ppm by the end of the next century, and conditions that bring
    about the beginnings of ocean anoxia may be in place. How soon
    after that could there be a new greenhouse extinction? That is
    something our society should never find out.”

    Press Release
    Pennsylvania State University
    FOR IMMEDIATE RELEASE
    Monday, Nov. 3, 2003
    downloaded from:
    http://www.geosociety.org/meetings/2003/prPennStateKump.htm
    “In the end-Permian, as the levels of atmospheric oxygen fell and
    the levels of hydrogen sulfide and carbon dioxide rose, the upper
    levels of the oceans could have become rich in hydrogen sulfide
    catastrophically. This would kill most of the oceanic plants and
    animals. The hydrogen sulfide dispersing in the atmosphere would
    kill most terrestrial life.”

    http://www.astrobio.net is a NASA web zine. See:

    http://www.astrobio.net/news/modules.php?op=modload&name=News&file=article&sid=672

    http://www.astrobio.net/news/modules.php?op=modload&name=News&file=article&sid=1535

    http://www.astrobio.net/news/article2509.html

    http://astrobio.net/news/modules.php?op=modload&name=News&file=article&sid=2429&mode=thread&order=0&thold=0

    These articles agree with the first 2. They all say 6 degrees C or
    1000 parts per million CO2 is the extinction point.

    The global warming is already 1.3 degrees Fahrenheit. 11 degrees
    Fahrenheit is about 6 degrees Celsius. The book “Six Degrees” by
    Mark Lynas agrees. If the global warming is 6 degrees
    centigrade, we humans go extinct. See:
    http://www.marklynas.org/2007/4/23/six-steps-to-hell-summary-of-six-degrees-as-published-in-the-guardian

    “Under a Green Sky” by Peter D. Ward, Ph.D., 2007.
    Paleontologist discusses mass extinctions of the past and the one
    we are doing to ourselves.

    ALL COAL FIRED POWER PLANTS MUST BE
    CONVERTED TO NUCLEAR IMMEDIATELY TO AVOID
    THE EXTINCTION OF US HUMANS. 32 countries have
    nuclear power plants. Only 9 have the bomb. The top 3
    producers of CO2 all have nuclear power plants, coal fired power
    plants and nuclear bombs. They are the USA, China and India.
    Reducing CO2 production by 90% by 2050 requires drastic action
    in the USA, China and India. King Coal has to be demoted to a
    commoner. Coal must be left in the earth. If you own any coal
    stock, NOW is the time to dump it, regardless of loss, because it
    will soon be worthless.
    I have no financial connection to the nuclear power industry.

    Comment by Edward Greisch — 20 Jun 2008 @ 12:41 AM

  18. Does anyone have an idea what that decline beginning in 1997 is related to?

    Comment by chipf — 20 Jun 2008 @ 2:03 AM

  19. Hi Gavin

    I apologize if I didn’t understand, but I thought that there was also an OHC “problem” between 2003 and 2006 or even 2007.
    In this last period, I read from Willis (NOAA) that there was a difference between altimetric sea level (Jason) and the sum of mass (GRACE) and steric (ARGO) “sea level”.
    Please could you set me straight?

    Comment by Pascal — 20 Jun 2008 @ 4:06 AM

  20. Gavin: Haven’t you noted the impacts of ENSO on OHC in prior posts? It seems to me that there would be a better correlation between those two than volcanoes and OHC. I know I’ve seen a graph comparing OHC and ENSO, but right now I can’t find it in the realclimate archives. Could you post one, please?

    Regards.

    [Response: I've speculated on this previously, but I don't have any hard numbers nor figures. I suspect that the reason why volcanoes show up more strongly in the ocean-wide analysis is because they have a global effect. Impacts of ENSO on OHC are likely to be more regional - implying perhaps a more ambiguous global signal. Perhaps the authors of the latest study could be persuaded to do the ENSO-OHC regression? - gavin]

    Comment by Bob Tisdale — 20 Jun 2008 @ 4:55 AM

  21. Edward Greisch posts:

    ALL COAL FIRED POWER PLANTS MUST BE CONVERTED TO NUCLEAR IMMEDIATELY TO AVOID THE EXTINCTION OF US HUMANS.

    By “immediately,” do you mean something like “in the next 15 minutes?”

    Comment by Barton Paul Levenson — 20 Jun 2008 @ 6:10 AM

  22. sorry Gavin, Willis is from NASA, not from NOAA

    but Eric Leuliette, from NOAA, said at the Ocean Sciences Meeting in Orlando on 7 March 2008:

    “Interpreting the sea level record from altimetry

    Summary: Closing the budget

    Envisat, Jason‐1, and the tide gauges independently confirm total sea level budget …

    Use a method similar to Willis et al., 2008
    Common four‐year period: 2003.5 – 2007.5
    Total sea level from altimetry: Jason‐1 and Envisat
    Ocean mass from GRACE CSR RL04, C20 from satellite
    laser ranging, and annual model for geocenter
    Steric sea level from Argo floats
    Optimal interpolation of altimetry and Argo
    Glacial Isostatic Adjustment (GIA) corrections for
    altimetry and GRACE sea level measurements.

    Steric sea level: –0.5 ± 0.5 mm/year (ARGO)
    Ocean mass from GRACE: +0.9 ± 0.8 mm/year
    Steric + mass: +0.4 ± 0.8 mm/year
    Total sea level from altimetry +3.2 ± 0.8 mm/year

    What do you think of this, or is it off topic?

    [Response: I think there is some mixing of time scales here. For the short period data the error bars are larger than for the full satellite period, and if you want to close the budget for just the most recent 4-year period the uncertainties in each individual term are very significant. It's not obvious to me that this is well constrained at all. The Domingues et al paper closes the budget over a 40 year period where the uncertainties in trends are much less. - gavin]
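
    As a rough illustration of the numbers quoted above, here is a minimal sketch in Python that simply sums the quoted short-period terms and compares them with the altimetry total, assuming the stated uncertainties are independent and can be combined in quadrature (the error treatment in the underlying papers is more careful than this):

        # Rough arithmetic check on the short-period budget quoted above.
        # Assumption: the quoted uncertainties are independent 1-sigma errors
        # that can be combined in quadrature.
        import math

        steric = (-0.5, 0.5)   # Argo steric sea level, mm/yr (value, uncertainty)
        mass   = (0.9, 0.8)    # GRACE ocean mass, mm/yr
        total  = (3.2, 0.8)    # altimetry total sea level, mm/yr

        sum_val = steric[0] + mass[0]
        sum_err = math.sqrt(steric[1]**2 + mass[1]**2)
        gap     = total[0] - sum_val
        gap_err = math.sqrt(total[1]**2 + sum_err**2)

        print(f"steric + mass = {sum_val:+.1f} ± {sum_err:.1f} mm/yr")
        print(f"residual      = {gap:+.1f} ± {gap_err:.1f} mm/yr")
        # The residual (~2.8 mm/yr) exceeds the combined uncertainty of the
        # quoted terms, which is the short-period discrepancy being discussed.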

    Comment by Pascal — 20 Jun 2008 @ 6:23 AM

  23. I really don’t think humans are in danger of extinction.

    1) In pre-history humans survived even Tambora’s impacts and ice ages (Humans are thought to have originated 200k years ago).

    2) Humans have lived for generations before modern technology on every continent, apart from Antarctica. And in harsh environments from the Kalahari to the Arctic circle. We are mobile and our intelligence and language give us the ability to adapt on a sub-generational timescale.

    Gavin,
    Thanks for the write-up.
    Get ready for the wave of corrections and retractions from the denialist lobby. ;)

    Comment by CobblyWorlds — 20 Jun 2008 @ 8:15 AM

  24. re: 23

    “I really don’t think humans are in danger of extinction.”

    Extinction isn’t the only bad outcome from a calamitous warming. Wars over such elemental things as food and water menace many 3rd world countries. (And we’re mired in endless war in the MidEast over oil.) The hundreds of millions threatened with famine and increased spread of disease won’t feel so cavalier about the costs.

    Comment by Jeffrey Davis — 20 Jun 2008 @ 8:23 AM

  25. re: #13 and #19

    Are you referring to this?

    [Response: I imagine it's probably the actual paper, and I'd point out that this concludes there is a problem with one or more of Argo/Jason/GRACE over a four year period (2003-2007), and isn't relevant to the Domingues paper which is about the attribution over a much longer period and does not involve any of those observing systems. - gavin]

    Comment by Dan Hughes — 20 Jun 2008 @ 8:39 AM

  26. RE: 23

    Hey CobblyWorlds,

    You mean a “denialist” reference such as below:
    http://www.pmel.noaa.gov/tao/elnino/wwv/gif/wwv.gif
    http://www.pmel.noaa.gov/tao/jsdisplay/ Heat Content, 20 Deg Isotherms or even SSTs for 10 buoys (You choose!)

    Interesting that the volcanic influence seems to follow the source by 7 years. I see this data and I get curious about the data sets used in the study when I see TAO/Triton graphs such as these…

    As Dr. Schmidt would likely suggest, the NOAA data is probably a regional phenomenon and not global. Though, when I look at the data at this site ( http://www.pmel.noaa.gov/pirata/display.html ) I see a similar lack of an increasing heat trend. If this is true and there is still supposed to be a heat content increase, caused by thermal expansion, it makes me wonder about the apparent conflict between the data posted by NOAA and the study.

    I hope that the most recent satellite sent up (Jason 2) will help resolve most of the issues. I suspect the observations by GRACE may be regional and am hoping the observations by the OSTM/Jason 2 mission package will provide a deep space global overview.

    (Though, most of the descriptions I have read of the Jason 2 mission suggest that the observation window is also regional. Too bad; a long term global/atmospheric dimension measure coupled with the change in the refraction/deflection of solar light by the atmosphere or the top 100 meters of ocean water, at the edge of the terrestrial sphere, might offer an interesting insight when coupled with the observations of SeaWiFS…)

    Cheers!
    Dave Cooke

    Comment by l david cooke — 20 Jun 2008 @ 9:04 AM

  27. Re #17 [Edward Greisch] “ALL COAL FIRED POWER PLANTS MUST BE
    CONVERTED TO NUCLEAR IMMEDIATELY”

    Even if desirable, this is utterly impossible; and putting a claim in upper-case doesn’t make it any stronger.

    Comment by Nick Gotts — 20 Jun 2008 @ 9:46 AM

  28. Being somewhat direct, Eli agrees with CobblyWorlds that humans are not threatened by extinction. On the other hand, Eli and Cobbly and you and yours are threatened with death.

    Comment by Eli Rabett — 20 Jun 2008 @ 9:46 AM

  29. Semi–OT (and as mentioned by l david cooke): I am very thankful that our governments can work together in the form of international science projects. I see this as a hopeful sign. I wish the best for Jason-2 and all the folks that continue to work so hard on it (and all of us that have helped pay for it). More data here is very useful.

    http://www.csiro.au/news/Jason2Launch.html

    Comment by Arch Stanton — 20 Jun 2008 @ 10:04 AM

  30. Re #17 [Edward Greisch] “ALL COAL FIRED POWER PLANTS MUST BE
    CONVERTED TO NUCLEAR IMMEDIATELY”

    I have to say that I agree with Edward. But since it is impossible then we are all doomed :-(

    The problem is that no-one will face up to that possibility, so that when it becomes obvious that that is the way we are heading, it will be too late! Everyone is mocking Edward Greisch, but what hope have we of cutting back on our fossil fuel consumption when even a small rise in the price of oil has sparked strikes and riots?

    There is no way that we are going to take the action needed to prevent the catastrophe that will end civilisation.

    Cheers, Alastair.

    Comment by Alastair McDonald — 20 Jun 2008 @ 10:43 AM

  31. Re # 16 Cheska “Is there a hypothalmus in the earth’s biosystem that serves to regulate
    and reset optimal temperature, to make a comparison system function that
    is found in humans.”

    I’ll venture a guess here, and say no, there isn’t.

    Comment by Chuck Booth — 20 Jun 2008 @ 11:24 AM

  32. Edward Greisch wrote: “ALL COAL FIRED POWER PLANTS MUST BE CONVERTED TO NUCLEAR IMMEDIATELY”

    I agree that we urgently need to phase out the burning of coal to generate electricity as quickly as possible.

    Nuclear power plants cannot be built “immediately”. It is simply impossible to build nuclear power plants fast enough to significantly reduce CO2 emissions from coal-fired electricity generation within the time frame that this needs to occur.

    Conservation and efficiency improvements, on the other hand, can be implemented almost immediately. When electricity prices skyrocketed in California a few years ago, conservation measures effectively reduced electricity consumption on a time scale of weeks.

    Solar- and wind-generated electricity can be brought online much faster than nuclear power, and the USA has sufficient solar and wind energy resources to produce far more electricity than the country currently uses. And there are other technologies available: recovering waste heat from industrial processes and using it to generate electricity could produce more electricity than all the nuclear power plants in the USA.

    Alastair McDonald wrote: “… since it is impossible then we are all doomed … There is no way that we are going to take the action needed to prevent the catastrophe that will end civilisation.”

    It is not “impossible”. Full exploitation of available wind and solar energy resources, combined with maximizing efficiency, “green building” technologies, electrification of transport (electric rail and electric cars), supplemented with sustainably produced biofuels where appropriate, all using existing technologies, can relatively easily accomplish the transition to a near-zero carbon energy economy within the necessary time frame.

    Whether we WILL do what we most certainly CAN do, is another story. The barriers are not technological nor economic, they are political. And in that regard, I am also pessimistic.

    But sinking hundreds of billions of dollars into boondoggles like nuclear power plants and “clean coal” is going to accomplish nothing, and worse, will squander and waste both time and financial resources that would be far more effectively spent on clean renewables and efficiency.

    Comment by SecularAnimist — 20 Jun 2008 @ 11:27 AM

  33. Dan Hughes (25), Gavin

    Is it now fair to say that the real continuing “problem” with Argo/Jason/GRACE is that there is misplaced or overstated reliance upon altimeter derived measurements as a proxy in support of heat in the upper ocean when there may be a case for thermosteric expansion of the ocean deep producing the same or similar consequent sea level rise? If so, was the use of altimeter data for comparative purposes in finding “spurious cooling” in the Argo data appropriate in the first instance given such expansion of the deep?

    I ask for the following reasons.

    The ARGO system has a depth domain of no more than 2,000 m or 48% of ocean volume (see:
    http://wo.jcommops.org/cgibin/WebObjects/Argo.woa/1/wo/sCZrymVtJHXMpc6XImH1mM/6.0.40.4.8.4.1.0.1.3.3 )
    whereas I understand the oceans have an average depth of 3,800 m. Dr. G. C. Johnson also appears to be prolific in his writings on the warming of the ocean deep, and Dr. Roger Pielke Sr. took note of the significance of Johnson et al. (2007) by writing, “This is an important paper with respect to diagnosing the radiative imbalance of the climate system (i.e. global warming and cooling). Moreover, if heat is being stored in deep depths, this would help explain why sea level continues to rise yet the upper ocean has not been warming in recent year[s]. It also means that the feedback of this heat into the atmosphere is delayed, or even lost for a very long.” (see http://climatesci.org/2008/02/07/deep-ocean-heat-accumulation-a-diagnosis-of-its-magnitude/).

    Dr. Willis himself says in
    Willis et al. (2008), **In situ data biases and recent ocean heat content variability. Journal of Atmospheric and Oceanic Technology (in revision)**,
    that ARGO data still show “no significant warming or cooling is observed in upper-ocean heat content between 2004 and 2006”, and in
    Willis et al. (2008), **Assessing the globally averaged sea level budget on seasonal to interannual timescales**, he further says:
    “First, from 2004 to the present, steric contributions to sea level rise appear to have been negligible… Although the historical record suggests that multiyear periods of little warming (or even cooling) are not unusual, the present analysis confirms this result with unprecedented accuracy.”

    Lastly, I am not aware of any comment from Dr. Hansen about the Domingues et al. 2008 paper. I understood from **Earth’s Big Heat Bucket** (at http://earthobservatory.nasa.gov/Study/HeatBucket/ )
    that Dr. Hansen looked to the ocean and Willis for the “smoking gun” of earth’s energy imbalance caused by greenhouse gases. Do you know if Dr. Hansen is ready to herald the “smoking gun” based upon Domingues et al. 2008?

    Thank you for your time.

    Comment by BRIAN M FLYNN — 20 Jun 2008 @ 11:47 AM

  34. gavin,

    Is there a reason why volcano-induced cooling appears to be larger in magnitude in the models than in the observations? (Especially in the GISS models, it seems to me, although it’s hard to pick out individual models in that graph.) Perhaps it’s due to the extra smoothing of the data, but I don’t know if that explains all of it. I think I’ve noticed this in surface temperature data-model comparisons as well. Are the models over-sensitive to volcanic forcing?

    [Response: The smoothing is part of it, but there is also some uncertainty in the stratospheric aerosols. The overall fit to radiative perturbations and temperature changes is pretty good though (see Hansen et al, 2007). - gavin]

    Comment by NU — 20 Jun 2008 @ 11:59 AM

  35. Re: #32:
    “It is not “impossible”. Full exploitation of available wind and solar energy resources, combined with maximizing efficiency, “green building” technologies, electrification of transport (electric rail and electric cars), supplemented with sustainably produced biofuels where appropriate, all using existing technologies, can relatively easily accomplish the transition to a near-zero carbon energy economy within the necessary time frame.”

    As you indicate, much of the problem is political, not technological. But it is extremely complicated.

    China and India, China in particular, are heavily involved in the construction of coal powered electric plants that will be sending large amounts of CO2 into the atmosphere for the next 40 years.

    The efforts of these developing countries may overpower the efforts of the United States even if the U.S. did what it should in the conservation efforts and in the efforts to use solar, wind, nuclear, and other power sources that will not produce CO2.

    And I do not see within the U.S. the political climate to allow us to do nearly enough in this direction. I see some movement in this direction, but it will not be sufficient to keep the worst case situations from happening.

    I am very pessimistic about the long term future. But fortunately for me my age (67) will likely keep me from seeing the worst of it. It is my grandchildren that I worry about.

    Comment by AlCrawford — 20 Jun 2008 @ 12:04 PM

  36. There is not enough uranium to convert all our coal-fired power plants to nuclear. Current supplies and estimated reserves of uranium are only good for about 40 years.

    Comment by John Lang — 20 Jun 2008 @ 5:29 PM

  37. #27

    Re #17 [Edward Greisch] “ALL COAL FIRED POWER PLANTS MUST BE
    CONVERTED TO NUCLEAR IMMEDIATELY”

    Even if desirable, this is utterly impossible; and putting a claim in upper-case doesn’t make it any stronger.

    “Utterly impossible” seems a touch harsh. Ordinary large coal-fired and nuclear power stations are similar machines. Many systems are essentially identical – primary cooling (cooling towers / sea water cooling), turbo-generators, power handling (unit transformers; switchyard), water management, civil works / admin / security (well, sort of).

    The differences are all on the other side of the turbine house, where coal has boiler houses, coal handling / milling, flue gas handling, ash handling and disposal. It’s far from immediately obvious that you couldn’t just slot in a PWR containment building and hook up the existing steam lines to the heat exchanger.

    “Impossible”, no. Economic, well, that’s unlikely, but there might be a few cases where it could work.

    Comment by GlenFergus — 20 Jun 2008 @ 7:59 PM

  38. Uranium is mined and has a “peak uranium” issue, just like “peak oil”.

    It can buy 10-20 years of time, but it isn’t a solution. Reactors are also ridiculously expensive and time consuming to construct, even if we decided to drop everything and build them. Without massive subsidies they aren’t going to get built, which is a drag on the economy.

    Guess Again.

    Comment by Lamont — 20 Jun 2008 @ 8:06 PM

  39. I agree with #32. See http://www.sciam.com/article.cfm?id=hydrogen-house it can be done and apparently rather easily.

    It is political and embedded. China and India are the not so new ditch diggers of corporate America.

    Assuming the disaster scenario is as significant as advertised, the only answer is to treat global warming with the same concern as the nuclear threat.

    The problem is everything we eat, use and buy depends on oil and China. There is not a single thing in your supermarket that got there without oil, organic or otherwise. Almost everything in your house, what you wear, what you use, etc…, is made in China and most likely out of oil.

    America is a drug addict whose system cannot survive without the drug (oil).

    Cut off the oil and break off trade with China, and the US will instantly lose its power on the world stage and some other country will take its place. And then, who would stop them?

    The question is no longer can it be stopped, but what is the best strategy to survive it?

    You can’t stop evolution.

    Comment by Tim — 20 Jun 2008 @ 10:36 PM

  40. To counter the woe and gloom about our ability to solve this …

    Nanosolar just announced that their new printing press for printing solar voltaic thin film runs at 100 feet (30 m) per minute and can theoretically be pushed to 2000 feet (610 m) per minute. At the slower speed, this means that a single press is able to pump out 1 Gigawatt of generation capacity per year. The total US generation capacity is 1000 GW. So at the slower speed 100 printers could produce enough panels in 10 years to replace the current total US generation capacity. At the faster speed 50 printers would do it in a year.
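
    A quick back-of-the-envelope check of that arithmetic (a sketch only; the 1 GW-per-press-per-year figure is taken from the claim above, not independently verified):

        # Back-of-the-envelope check of the capacity figures quoted above.
        press_rate = 1.0               # GW per press per year, claimed at 100 ft/min
        speedup = 2000 / 100           # claimed theoretical speedup (20x)
        us_capacity_gw = 1000.0        # quoted total US generation capacity

        slow = 100 * press_rate * 10           # 100 presses running for 10 years
        fast = 50 * press_rate * speedup * 1   # 50 presses running for 1 year
        print(slow >= us_capacity_gw, fast >= us_capacity_gw)   # True True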

    They say the printers will cost $1.6 million each.

    They don’t say what it costs to run them, or what the electricity cost will be once you factor in all manufacturing costs, framing and installation. But clearly it will drop the cost of solar generated electricity significantly.

    If this reduces the cost of solar below the cost of coal fired electricity, then it seems inevitable that we will see a rapid switch to solar.

    (Of course the various energy storage technologies available and in development would need to be ramped up also, possibly along with adjustments to energy usage patterns and the installation of direct current transmission lines. But this is all doable.)

    There are many things that technology can’t solve, but our emissions problem clearly can be, in spite of the clear preference of the political classes for dumb, destructive, dirty technologies.

    Comment by Craig Allen — 21 Jun 2008 @ 1:18 AM

  41. I had the impression that looking at ocean heat content rather than tropospheric temperatures has the advantage that the signal is less variable, because of the ocean’s large heat capacity and resulting lag time in response (and a smoothing of the response). Therefore I wouldn’t have expected volcanic eruptions to leave a temperature signal in the ocean. Am I missing something?

    Comment by Bart Verheggen — 21 Jun 2008 @ 4:47 AM

  42. Am I reading the first chart correctly: The heat content of the oceans has risen 10 fold in the last 40 years? How is that possible without a huge rise in either mass or temperature (neither of which seems to have happened)?

    [Response: The graph is of the anomaly - not the absolute amount. The total energy in the oceans (compared to water at 0ºC) is vastly greater (mass of the ocean 1.4×10^21 kg x average temperature (maybe 5ºC) x specific heat (~4000 J/kg/C) = ~2.8×10^25 J). - gavin]
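
    For what it’s worth, here is a minimal sketch in Python reproducing that order-of-magnitude estimate (the mass, mean temperature and specific heat are the rough values given in the response, not precise figures):

        # Order-of-magnitude total ocean heat content relative to water at 0°C,
        # using the rough values quoted in the response above.
        mass_ocean    = 1.4e21   # kg, approximate mass of the ocean
        mean_temp_c   = 5.0      # °C, rough average ocean temperature
        specific_heat = 4000.0   # J/(kg·°C), approximate for seawater

        heat_content = mass_ocean * mean_temp_c * specific_heat
        print(f"{heat_content:.1e} J")   # ~2.8e25 J, far larger than the plotted anomaly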

    Comment by Greg — 21 Jun 2008 @ 10:19 AM

  43. #41 Bart Verheggen,

    The volcanic eruptions that have this effect were tropical ‘Plinian’ eruptions. These eruptions (named after Pliny the Younger’s account of Vesuvius) eject sulphate aerosols and ash into the stratosphere, where they reflect sunlight back into space. This has a fundamental impact on the global energy budget, causing a cooling due to reduced incoming sunlight. So it would be expected to appear. The ocean integrates out (smooths) much of the weather that impacts surface/atmospheric temperatures.

    There are 3 factors governing global planetary temperature at its most basic level:

    *incoming radiation (shortwave light from the sun)
    *albedo (the amount of incoming radiation reflected back to space)
    *outgoing radiation (outgoing longwave radiation)

    Change any of those three and you can change planetary temperature.

    Comment by CobblyWorlds — 21 Jun 2008 @ 10:38 AM

  44. Sorry Bart,

    Forgot to add…
    The observed response is mainly damped by shorter mixing times of upper ocean layers (not to be confused with centennial overturning). The oceans are in a two-way flux of energy with the atmosphere; reduce the incoming shortwave incident upon the surface and you get cooling, and the response of the upper wind-mixed layer is quite rapid.

    Comment by CobblyWorlds — 21 Jun 2008 @ 10:49 AM

  45. Re #17 [Edward Greisch] “ALL COAL FIRED POWER PLANTS MUST BE
    CONVERTED TO NUCLEAR IMMEDIATELY”

    KK. On it.

    *grabs bag of spanners, Makita Drill Set, and heads for the door*

    Comment by Paul G. Brown — 21 Jun 2008 @ 11:30 AM

  46. Willis et al., J. Geophys. Res., 113, C06015, doi:10.1029/2007JC004517, report oceanic enthalpic changes beyond the period covered by Domingues et al. They find little or no change in heat content for the upper ocean from about 2003 to now. I remain unconvinced of the efficacy of the current generation of models.

    [Response: Why might that be? The Willis paper clearly states that there are a) unresolved issues with trends in one or more of the data sets they are looking at, and b) that short term variability in trends is to be expected both from past data and as seen in the models. This should be contrasted with the much longer time frame considered in Domingues et al (where issues of short term variability are much less important and trends much more defined). Please explain your reasoning as to why the latter study apparently weighs less in your deliberation. - gavin]

    Comment by A. Fucaloro — 21 Jun 2008 @ 12:16 PM

  47. #40 Craig, this is interesting.

    About the storage, I did myself a little calculation for pumped hydro in the Finnish situation.

    At 1 km depth, a cubic m of water represents 10^7 J of energy. 1000 m^3/s thus represents 10 GW of power, about what a Finland sized country needs.

    Over a day, this requires a storage volume of 10^8 m^3. Compare this to the volume of the projected Helsinki-Tallinn rail tunnel, 10^7 m^3. A facility could even be built in connection with this project. Bedrock excavation with dynamite is relatively inexpensive, and the rail can be used to get the rubble out…
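
    The arithmetic above checks out; a minimal sketch (rounding g to 10 m/s², as the 10^7 J per cubic metre figure implies):

        # Pumped-hydro numbers from the comment above, with g rounded to 10 m/s^2.
        rho  = 1000.0   # kg/m^3, density of water
        g    = 10.0     # m/s^2, rounded
        head = 1000.0   # m, 1 km depth

        energy_per_m3 = rho * g * head                   # -> 1e7 J per cubic metre
        power_w       = energy_per_m3 * 1000.0           # at 1000 m^3/s -> 1e10 W = 10 GW
        day_volume_m3 = power_w * 86400 / energy_per_m3  # one day of storage -> ~8.6e7 m^3

        print(energy_per_m3, power_w / 1e9, day_volume_m3)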

    Comment by Martin Vermeer — 22 Jun 2008 @ 3:42 AM

  48. Ref 17, Edward Greisch,

    How do we maintain +3 ppm of CO2 per year for two hundred years, getting to 900 ppm, when we know oil and gas reserves are already approximately half gone, so their extraction rate, and therefore their annual CO2 contribution, will be falling soon? Likewise coal reserves aren’t good for 200 years at today’s extraction rates; a peak is more likely within a few decades. There certainly aren’t the fossil fuel reserves to allow the current mechanism of CO2 accumulation to continue.

    Are you assuming different net sources of CO2 dominating soon?
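
    For reference, the arithmetic behind the 900 ppm figure, as a sketch (the ~385 ppm starting value is an assumed approximate 2008 concentration):

        # How long a constant +3 ppm/yr takes to reach 900 ppm from ~385 ppm.
        start_ppm = 385.0   # assumed approximate 2008 CO2 concentration
        rate      = 3.0     # ppm per year, as quoted
        target    = 900.0

        years = (target - start_ppm) / rate
        print(f"~{years:.0f} years")   # ~172 years, i.e. late in the next century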

    Comment by clv101 — 22 Jun 2008 @ 5:01 AM

  49. Gavin writes:

    [Response: The graph is of the anomaly - not the absolute amount. The total energy in the oceans (compared to water at 0ºC) is vastly greater (mass of the ocean 1.4×10^21 kg x average temperature (maybe 5ºC) x specific heat (~4000 J/kg/C) = ~2.8×10^25 J). - gavin]

    Gavin!!! That 5 C should be 278 K and the final figure should be 1.5×10^27 Joules!!!

    [Response: Not really. I gave the baseline I was using which is a common standard. Using absolute zero doesn't give a good idea of what energy is actually usable. - gavin]

    Comment by Barton Paul Levenson — 22 Jun 2008 @ 6:07 AM

  50. Oil, gas and coal are cheap, have over 100 years of know-how and energy infrastructure invested in them, and hence they entrench the mind. Politically and economically they dominate too, as they are cost effective and readily available globally. The USA also gets a lot of economic and political leverage from oil in the form of petro dollars, allowing the USA to print a lot of money to keep this system going and get a free lunch on the back of investments in dollars.

    Therefore the whole idea of replacing it with something else, or indeed of just meeting future demand (7 TW) with alternatives, is only gaining credence in regard to oil and gas, as they are expensive at the moment and demand is rising. Coal on the other hand is locally available (China, Russia, Europe and the USA have plentiful reserves) and hence offers some energy security in an uncertain world.

    Pioneering alternatives and a new economic and political landscape is hard work for politicians and will be a long time in coming. Although the landscape is moving slightly in regard to electricity production and transport, we have not really got a strategy yet, just some ideas at the moment.

    Even if the USA hybridised every car today, in 7 years’ time we would still need the same amount of oil as we use today due to economic growth. 2 to 3% per annum is enough to double energy demand in 30 years.
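
    That doubling-time rule of thumb checks out; a quick sketch:

        # Doubling time for compound growth at 2-3% per year.
        import math

        for rate in (0.02, 0.023, 0.03):
            years = math.log(2) / math.log(1 + rate)
            print(f"{rate*100:.1f}%/yr -> doubles in ~{years:.0f} years")
        # ~35, ~30 and ~23 years respectively, consistent with the claim above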

    Comment by pete best — 22 Jun 2008 @ 6:55 AM

  51. #26 L. David Cooke,

    If Gavin said you were citing regional effects I’d agree.

    This 7 year lag: I know it sounds stupid but I can only guess from the first link that you’re attributing the 1998 (ENSO related?) dip to 1991’s eruption of Pinatubo. As I’ve obviously got the wrong end of the stick, could you clarify?

    There are known issues with the most recent data that as far as I know still need to be addressed. Yet some consider themselves so much ahead of the oceanography field that they can claim implications from that data. The Domingues paper makes a significant revision as compared to Levitus, but it’s still in reasonable agreement with the models. Indeed in removing the 1970-1980 hump it seems to make a better fit.

    So are we really supposed to believe that the models were working OK until a few years ago when they “fail” and that “failure” is best interpreted not as “there’s something else afoot here”, but as “the models are wrong”?

    Sounds desperately contrived to me! ;)

    Comment by CobblyWorlds — 22 Jun 2008 @ 1:18 PM

  52. RE: 51

    Hey Cobbly,

    The first statistically significant dip in the Pacific Ocean SST and heat content to occur after the Mt. Pinatubo eruption appears to coincide with the 1998 ENSO dip, when the average global surface atmospheric land temperature appeared to take a significant rise. In reviewing the precipitation and historic records following large tropical volcanic releases there appears to be a similar pattern. Though it is entirely possible I am seeing a pattern where there is not one. Though if you were to remove up to 5% of the energy input to a region of the ocean during the peak input time, it could take several years for the anomaly to circulate and be detected.

    Another interesting pattern seems to be related to volcanic activity as well. According to the data record that I researched, I have seen there is an anomaly that appears to occur between 15 and 20 years after a large eruption. Historically, there appears to be a curious signature of a NH temperate zone drought associated with the volcanic activity. Though again it is possible these are only observations due to a search for them and they are not necessarily truly correlated.

    As to the Argo and the satellite data I concur; however, I suspect that for the TAO/Triton and PIRATA data sets the data is generally sound in its raw form, not requiring much in the way of correction. If the data collected by these arrays are correct and there is not an invalidation of the raw measurements, then I suggest that the measurement data record would be more correct than the model record used in the heat content study above. If this is true then yes, I would suggest there might be a problem with the models. I know, a lot of ifs; however, as usual my preference is to go with known data rather than “created” data when I can.

    As to models being wrong or describing an alternative variable, maybe I have not done enough research. Most of the model work I have seen described by Dr. Schmidt and others here has indicated that they are very careful in the creation of single variable attributes to reduce the possibility of competing or confounding variables. Based on due diligence by these experts I would suspect that there may be more of an issue with the description of the nature of the described attribute rather than an error in the model itself.

    Cheers!
    Dave Cooke

    Comment by l david cooke — 22 Jun 2008 @ 2:32 PM

  53. I realize that this is way off topic, but it concerns arctic ice, which we have discussed at length in the past. We are now coming to the time of year when, looking back at the data with 20/20 hindsight, the big melt started in 2007. From here on, for the next few weeks, we will have an idea of whether 2008 is going to follow in the steps of 2007. 1st July (Canada Day) 2007 was when the largest single day’s ice melt recorded since 1979 occurred. May I suggest that
    http://nsidc.org/data/seaice_index/images/daily_images/N_timeseries.png is a good reference. It seems to be updated daily.

    Comment by Jim Cripwell — 22 Jun 2008 @ 2:57 PM

  54. CobblyWorlds

    “So are we really supposed to believe that the models were working OK until a few years ago when they “fail” and that “failure” is best interpreted not as “there’s something else afoot here”, but as “the models are wrong”?”

    Why not? The models were largely constructed around the data at the time; therefore, when the models are run against the gathered data and are subsequently tweaked a little, it is no surprise that they were a good fit.

    As an example, send me a random data sample of roulette spins and I will send you a betting model that will earn you money at roulette. You can prove its efficacy by running it against the supplied data and you will see that it works. I can do that every time, 100% guaranteed. The trick is to keep the model working when you send me new data. I can keep it working for a time by tweaking the sequencing, betting amounts, etc., but it will eventually be overwhelmed by new data and be falsified.

    That is because this model operates in an area of science that is settled, Probability Theory, unlike Climate Science which clearly isn’t.

    Comment by Alan Millar — 22 Jun 2008 @ 3:20 PM

  55. The models were largely constructed around the data at the time,

    Except, as has been endlessly pointed out, this isn’t how they are constructed.

    The trick is to keep the model working when you send me new data

    And, because models are built on our best understanding of the underlying physics, rather than built to match old data, models do much better than perform that trick.

    They predict things that haven’t been previously measured, like stratospheric cooling … in other words they send you new data before you measure it, rather than simply match new data you send them.

    Comment by dhogaza — 22 Jun 2008 @ 4:59 PM

    dhogaza

    “And, because models are built on our best understanding of the underlying physics, rather than built to match old data, models do much better than perform that trick.”

    The underlying physics were present in the period 1945 – 1975, why therefore were there no large scale models produced at that time that predicted the subsequent warming to 1998?

    [Response: There were. Look up Spencer Weart's site on the discovery of global warming, or look up Manabe and Wetherald (1967) discussed in Peterson, Connolley and Fleck (2008). - gavin ]

    I would have been far more impressed and convinced if there had been. But of course models constructed using data at that time would not have shown the subsequent warming, because there was not an obvious relationship between CO2 emissions and warming observable at that time.

    Mankind always wants instant solutions and gratification, it’s natural, but the scientific community and approach must recognise this and be cautious with its pronouncements until it has a very high confidence level in its theories.

    We are way short of this in relation to how the planet’s various climatic processes react to and combine the huge number of forcing and feedback factors.

    Comment by Alan Millar — 22 Jun 2008 @ 6:37 PM

  57. #53 OT – watching the ice:

    NCEP’s weekly analysis also gives a nice perspective. Put these beside each other in your browser:

    Ice concentrations: 14 June 2008 and 14 June 2007. There’s slightly more this year (but it’s younger and, presumably, thinner).

    Sea surface temperatures: 14 June 2008 and 14 June 2007.

    Last year a large positive SST anomaly developed in the Bering Sea in July, and extended into the East Siberian Sea. Water temps north of the Bering Strait were as high as +8°C. Almost swimmable…

    Comment by GlenFergus — 22 Jun 2008 @ 7:37 PM

    “Except, as has been endlessly pointed out, this isn’t how they are constructed.”

    Quite true, dhogaza. Amazing how some people just don’t want to absorb that inconvenient fact, though.

    Comment by Jim Eager — 22 Jun 2008 @ 8:43 PM

  59. I love you guys. The data doesn’t fit the model predictions so we’ll just apply “corrections” to the data to make it fit. And you call it science? I mean, this is what he said to Reuters:

    http://africa.reuters.com/top/news/usnBAN946269.html see paragraphs 5 and 6

    [Response: You are interpreting something that isn't there. They corrected data that was incorrect and that led to a better match. They did not correct the data in order to get a better match. Given the demonstrated problem in the XBT data, what would you have them do instead? - gavin]

    Comment by greg smith — 23 Jun 2008 @ 2:59 AM

  60. Ref #57 from GlenFergus “(but it’s younger and, presumably, thinner).” We have been through this before. Most of the ice that is currently turning into open ocean is what I call “annual ice”; ice which is, by definition less than one year old. Each year about 9 million sq kms of open ocean turn into ice, and each year about the same amount changes from ice into open ocean. This is “annual ice”. So far as I am aware, the thing that controls how much annual ice there is, and how thick it is, is the length and coldness of the winter. Last winter in the Arctic was, in comparison to recent years, long and cold. So one would not necessarily expect the current annual ice to be “thinner”. In any event, “thinner” than what?

    Comment by Jim Cripwell — 23 Jun 2008 @ 6:41 AM

  61. RE: 59

    Hey Greg and Dr. Schmidt,

    It would be helpful though if the nature of how the data was corrected was generally known. For those of us who are laymen, we would expect that the data correction is only for data where there is missing data. Given this, how do you fill in the holes? Two methods come to mind: one is where you would use the thirty year average high and low. Another method would be where you would interpolate between the day before and the day after. It would seem that the best method would be to use the 30 year average for that day and adjust it by the deviation seen in the prior and following measured period.

    As for data fitting the model, Greg, I think I can see your point based on my former bias. It just seems terribly convenient when a “correction” is reported and the description of the application of the correction is not part of the data set.

    There have been occurrences in the past where a corrective value has been used to broad-stroke correct a data set where there is a known and possibly tested offset. For example, even today there is a corrective value that needs to be applied to about 2% of the ARGO buoys in the Atlantic based on the serial number of the depth gauge. However, the remaining 98% do not require a correction, nor do all the “deviant” buoys need to get the same correction.

    If you were to review the USHCN data sets, there is a correction applied to certain data sets that are related to urbanization. The question is, are these corrections applied even during days in which the cloud cover would be dense enough to negate or reduce the necessity of the correction? You also have the issue of non-insolation heat source corrections. To go a little off topic myself, I think I can share why this question reappears so often when talking about models.

    The point is, laymen are being asked to trust and not to look over the shoulder of experts, when they clearly question the validity of the experts due to the chaotic nature of the data sets appearing to invalidate the experts’ predictions. That “everyman” is being asked to trust when a trust condition does not exist leads to more distrust in many minds. Full disclosure, which was the embodiment of the nature of science in the early part of the 20th century, would be welcome. However, we perceive an issue when data is withheld or enveloped in higher math that is difficult for us to understand.

    The point is, is it not incumbent on the expert to disclose the data correction or algorithm? If the data set or calculations can lead to commercial value or funding restrictions, the evident answer is no. The idea of science for the sole purpose of science appears to have been lost. On the other hand, if this avenue is not taken there are insufficient funds to perform the science. I suspect it has become a balancing act as budgets come into play….

    Greg, I suspect you may have to make a decision. Does the historic record of the source inspire trust? If yes, then accept the data as represented. If no, then pursue evaluation by experts you trust. In short, know your source…

    Cheers!
    Dave Cooke

    Comment by l david cooke — 23 Jun 2008 @ 7:30 AM

  62. #56 Alan Millar,

    …until it has a very high confidence level in its theories.

    Over 95% good enough for you? It is for me.

    Re models I refer you to the replies above. In addition, you may prefer to use Occam’s Razor to whittle away the most straightforward explanations to suit your preconceptions. I prefer to use it to leave me with the simplest and least contrived explanations.

    #52 L David Cooke,
    I still see that as ENSO related. Given the immense local and seasonal variability of the oceans (as with the atmosphere) I recommend not trying to draw conclusions with regard to AGW from local observations, although when studying ENSO etc such local observations can be very helpful.

    #57 Glen Fergus,
    The thinning currently extending from Beaufort/Chukchi to the pole is less extensive for June this year than June 2007 (Cryosphere Today / AMSR-E). Furthermore last year’s exceptional melt rate occurred in early July (NSIDC 2007). So things are looking as uncertain as before; although I still expect acceleration in July and August, I am now less sure William Connolley will lose his bet (I’d expected greater thinning and polynya formation by now). Watch out for the June outlook from ARCUS, it’ll be interesting to see if stances change.

    Comment by Cobblyworlds — 23 Jun 2008 @ 8:22 AM

  63. I’m new to this and have a couple of simple questions:
    Why are climatologists always adjusting their data? How do they know in which direction to adjust them?

    [Response: The fundamental problem is that long climate time series are based on data that was designed to do something else. Weather stations were there for weather reports and forecasts, ocean temperatures were for all sorts of reasons - none of which were estimating 50 year long trends. That means that people made decisions about changes of technique, instruments, data treatment etc. without thinking about what effect it would make on the long term trend. Now, we have a different perspective on these data and those decisions need to be dealt with in some way. Corrections usually come from either physical modelling of what changed (i.e. the bucket issue or the XBT fall rate), or on comparison across different stations, change point analysis etc. Those corrections are made and can be of either sign. If, additionally, they end up giving a better fit to the models, that adds to the confidence that the models are reasonable. - gavin]
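
    Purely as an illustration of the general point (this is not the actual procedure used for the bucket, XBT or any other correction mentioned above), here is a toy sketch of how removing a documented instrument offset changes a long-term trend, and how the correction can go in either direction depending on the sign of the bias:

        # Toy example: a documented instrument change adds a constant offset
        # partway through a record; removing the documented offset changes the
        # long-term trend. Not the actual method used for any real correction.
        def slope(y):
            """Ordinary least-squares slope of y against 0, 1, ..., n-1."""
            n = len(y)
            xm = (n - 1) / 2
            ym = sum(y) / n
            num = sum((i - xm) * (v - ym) for i, v in enumerate(y))
            den = sum((i - xm) ** 2 for i in range(n))
            return num / den

        true_series = [0.01 * i for i in range(50)]   # underlying trend: 0.01 per step
        bias = 0.3                                    # offset after an instrument change at step 25
        raw = [v + (bias if i >= 25 else 0.0) for i, v in enumerate(true_series)]
        corrected = [v - (bias if i >= 25 else 0.0) for i, v in enumerate(raw)]

        print(f"raw trend:       {slope(raw):.4f}")        # inflated by the step
        print(f"corrected trend: {slope(corrected):.4f}")  # back to ~0.01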

    Comment by Robert Wood — 23 Jun 2008 @ 8:33 AM

  64. I think most of the misunderstanding about all this comes from people always getting suspicious if anybody corrects measurements and/or feeds the measurements through a statistical model rather than using the raw values. But with something like this, the raw values would probably not reveal anything useful. Measuring the temperature in my garden at noon tells me how warm it was at 12 o’clock in my garden and nothing more. To derive a global trend from something like this, one has to correct for location, time of observation, kind of thermometer, wind and all sorts of other factors – or just drop the effort altogether. How and based on what these calculations take place needs a solid understanding of the underlying physics, obviously, but in the end there’s no way around them. The fact that these corrections mostly seem to move measurements closer towards modeled values could be explained either by the Rosenthal effect or by the fact that the models have it about right. I believe the Rosenthal effect is real but I can’t imagine it dominating climate science on such a scale.
    I guess another possible explanation would be that a particular (not necessarily correct) assumption in a model is at the same time used to correct measurements – but while this would be hard to detect on its own, it would probably only apply to a rather small subset of a bigger picture and would sooner or later stick out and lead to an improved understanding.

    Comment by Henning — 23 Jun 2008 @ 8:46 AM

  65. RE: 62

    Hey Cobbly,

    Here is the list of the measured ENSO history. As you can see the TAO/Triton measured values do not coincide with the ENSO lows that you express explanatory confidence in. http://www.cpc.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml

    Hence, I suspect we need a better explanation of why the period and variation in the data record does not appear to match up with the graph. This does not invalidate your observation, it only suggests that your observation is a partial explanation for the anomaly.

    The most interesting thing is, when looking further back in the history there appear to be similar periods of deviation, though the depths of the deviations do not match up even though the temperature deviation measures are similar. This would tend to suggest there are likely multiple variables moving in the same direction at the same time.

    Do you have any suggestions to explain the anomaly? I have offered mine…

    Cheers!
    Dave Cooke

    Comment by l david cooke — 23 Jun 2008 @ 9:00 AM

  66. The underlying physics were present in the period 1945 – 1975; why, therefore, were there no large scale models produced at that time that predicted the subsequent warming to 1998?

    Take a good look at the computer you used to type your blitheringly ignorant response. It is hundreds of thousands of times more powerful than the computing power available in the ENTIRE WORLD in 1945. The Manhattan Project did its modeling using rooms of people pulling handles on manual calculating machines, card sorters, etc. The first implosion models were two-dimensional because preliminary results for three-dimensional models took so long to calculate. Into the 1950s computers were still so slow and primitive that they could only provide limited insight into fusion bomb design. And these were the supercomputers of the day.

    So, to bounce your question back to you, why were airplanes designed without the benefit of the detailed aerodynamic modeling used to test today’s designs? Does the fact that the computing power didn’t exist somehow lead you to the conclusion that aerodynamic theory is all wrong?

    As Gavin points out, modeling of climate and weather grew as computing power grew, as is true of modeling in every field you can think of, and it was a lot earlier than you imagine.

    Comment by dhogaza — 23 Jun 2008 @ 9:34 AM

  67. re 56.

    A much more interesting paper is Robocks 1978 paper.

    http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2F1520-0469(1978)035%3C1111:IAECCC%3E2.0.CO%3B2&ct=1

    Comment by stevenmosher — 23 Jun 2008 @ 10:21 AM

  68. RE: 62 continued

    Hey Cobbly,

    I forgot to add that the comments I have been making do not relate to AGW or CC. The point of my posts was simply to examine the oceanic heat content data sets within the ITCZ, for both the Pacific and Atlantic. To me it appears there is a conflict between the long term data reconstructed in the reported study driving this thread and the TAO/Triton data set over the time frame of the latter. Granted, the TAO/Triton data set is small; however, it covers the greatest percentage change in the measured oceanic heat content known in recent history.

    (That the heat content represented in the NOAA data sets is limited to a statistical confidence of between 58 and 89% is not entirely unlike the current confidence level I would expect for the IPCC data sets related to AGW/CC.)

    When I perform a side by side comparison of the data table and rotate the graphic in the original post relating to the ocean volume change in response to heat content, the deviation of change in the graph looks relatively as expected up until the 1998/1999 time period. That the heat content was cooler longer should not have made the volume significantly lower. Hence the reason I suspect that there was likely a separate variable playing a part, whether it was an after effect of Mt. Pinatubo or a massive heat release from overturning, transitional barrier salinity, clouds, storms, … ad nauseam.

    The point is that the observation of a reduction in heat content was not limited to the Pacific, but appeared to be zonal for nearly 180 deg. My research to date shows a low correlation for most of the more commonly suggested alternate variables. Hence my consideration that the depth of the deviation may be related to a recent volcanic event, only that it took 7 years for it to be detected.

    Does this help reduce the apparent miscommunication?

    Cheers!
    Dave Cooke

    Comment by l david cooke — 23 Jun 2008 @ 10:23 AM

  69. Alan Millar and Greg Smith, Now let me see if I’ve got this straight. You come in here with absolute ignorance of climate modeling and climate science and are ready to levy a charge of scientific fraud against the entire scientific community. That about right?
    Realclimate is a wonderful resource for finding out how climate science actually works. You can use it for that, or you can continue to make ignorant accusations. Your choice.

    Comment by Ray Ladbury — 23 Jun 2008 @ 10:36 AM

  70. Re #60

    “Ref #57 from GlenFergus “(but it’s younger and, presumably, thinner).” We have been through this before. Most of the ice that is currently turning into open ocean is what I call “annual ice”; ice which is, by definition less than one year old. Each year about 9 million sq kms of open ocean turn into ice, and each year about the same amount changes from ice into open ocean. This is “annual ice”. So far as I am aware, the thing that controls how much annual ice there is, and how thick it is, is the length and coldness of the winter. Last winter in the Arctic was, in comparison to recent years, long and cold. So one would not necessarily expect the current annual ice to be “thinner”. In any event, “thinner” than what?”

    Than the multiyear ice it replaced since last year. Even during the last winter there was a loss of multiyear ice due to outflow through the Fram strait.
    http://nsidc.org/images/arcticseaicenews/200804_Figure5.png
    http://nsidc.org/images/arcticseaicenews/200804_Figure4.png
    http://nsidc.org/arcticseaicenews/2008/040708.html
    To see the flow out through the Fram checkout the Quikscat movie at the foot of this page:
    http://ice-glaces.ec.gc.ca/App/WsvPageDsp.cfm?Lang=eng&lnid=43&ScndLvl=no&ID=11892

    Comment by Phil. Felton — 23 Jun 2008 @ 11:32 AM

  71. “hundreds of thousands” — Not enough. The MANIAC’s floating point division was so slow you could watch it on the front panel lights. One debugging technique was to tune an AM radio to static and then place it up by the multiplier circuits. If it didn’t sound right, you knew something went wrong, either in the code or in the physical computer.

    Think billions.

    That’s just per uni-core processor.

    Comment by David B. Benson — 23 Jun 2008 @ 2:03 PM

  72. In all cases where the data does not match the model one modifies the data?

    I think I learned in High School science class something about rejecting the theory instead of the data… but that’s so old school. I mean, who rejects a hypothesis that everyone believes in anymore? That’s just downright 19th century.

    [Response: Of course not. There are plenty of data that show that the models have problems (tropical rainfall, cloud distributions etc.) that are not going to change. But there are plenty of datasets where there are known problems. What would you have the people that produce them do? Not fix known problems? If there is still a discrepancy with the model, you start again - is there a reason why the model could be wrong? are you comparing like with like? are there additional issues with the data product? - gavin]

    Comment by Paulidan — 23 Jun 2008 @ 4:05 PM

  73. #65 L David Cooke,

    I think I get where you’re coming from.

    You’re comparing 2 variables.

    1) ENSO index – which is related to sea surface temperature.
    http://www.cpc.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml
    2) Warm Water Pool volume West of Galapagos – which is related to the volume enclosed by the 20degC isotherm and the surface.
    http://www.pmel.noaa.gov/tao/elnino/wwv/gif/wwv.gif

    The reason these 2 indices don’t match up as you expect is what goes on at depth throughout the evolution of the El Nino event. Check out this graphic: http://www.pmel.noaa.gov/tao/elnino/nino_profiles.html

    What’s happening is that in Spring 1998 the sea surface temperatures give a +ve ENSO index. However, as seen in the above graphic, in March the warm pool’s volume has decreased because the warm water has migrated to leave a much thinner warm pool. Hence the warm pool volume is decreasing in the early part of March even as the ENSO index shows it’s an El Nino.

    Re 52,

    The temperate NH drought associated with a Plinian eruption may be interesting. Just going back onto the issue of looking at temperature or some other variable in a small location: here in the UK a significant part of the warming trend is due to atmospheric circulation changes, not the direct enhanced greenhouse effect. This is because changes in the winter Arctic Oscillation have caused a decline in the number of wintertime blocking highs from continental Europe that block the jetstream. The blocking high pressure was a frequent British pattern in the winter; it brought colder continental/arctic air, and the blocking stopped mild air from the Atlantic (jetstream deflection). The change in the AO behaviour is thought to be driven more by stratospheric cooling, due mainly to the enhanced greenhouse effect, e.g. Shindell 1999, “Northern Hemisphere winter climate response to greenhouse gas, ozone, solar and volcanic forcing”. Google pubs.GISS Shindell et al. 1999 – you should find it in the top 3, free from GISS.

    Plinian eruptions eject particulates into the stratosphere; whilst these cool the surface/troposphere by reducing incident sunlight, they warm the stratosphere by absorbing the light they don’t reflect. Such changes could well impact jetstream tracks, as the change in the Arctic Oscillation has impacted Britain. I’ve not read about it (as far as I can remember right now) but it sounds quite feasible.

    “The point is that the observation of a reduction in heat content was not limited to the Pacific, but appeared to be zonal for nearly 180 deg.”

    The PIRATA data shows only 2 years which is way too short.

    You almost certainly wouldn’t see anything looking at individual sites, it’s hard enough on land. But water has a specific heat capacity of 4.186 joule/gram degC.

    10^22 J is 10,000,000,000,000,000,000,000 joules.

    Looks like a big number, until you consider its context.

    Imagine how many grams there are in the ocean down to 300 or 700 metres….

    The ocean surface is 169.2 million square kilometers (Wikipedia), that’s 169,200,000,000,000 square metres. A cubic metre of water is about 1 tonne, which is 1,000,000 grams. So it takes 4,186,000 joules to warm 1 tonne of water by 1 degC. I can’t factor in the depth profile, so I’m not going to calculate based on those figures. But see how fast all the zeros in the 10^22 ballpark figure get eaten up.

    10,000,000,000,000,000,000,000 joules is enough to warm the top 14 metres of the ocean’s surface area by a bit over 1 degree C. You’d have to allow for a bit of shallower water than that for the coasts, but ultimately when you factor in 300 or 700 metre depth you get a tiny temperature increase. The only way to find that amongst the noise of short term and local variance is by careful processing of masses of data.
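    For anyone who wants to check those zeros, a minimal back-of-envelope version of the same arithmetic (assuming a uniform density of one tonne per cubic metre and ignoring coastal shallows):

        heat = 1e22                    # joules
        c_p = 4.186                    # joules per gram per degC
        area = 169.2e6 * 1e6           # ocean surface area in m^2 (169.2 million km^2)
        grams_per_m3 = 1e6             # roughly one tonne of water per cubic metre

        grams_warmed = heat / c_p              # grams warmed by 1 degC
        volume = grams_warmed / grams_per_m3   # cubic metres
        depth = volume / area                  # equivalent uniform layer depth
        print(round(depth, 1), "metres")       # about 14 m, as stated above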

    Comment by CobblyWorlds — 23 Jun 2008 @ 4:11 PM

  74. re: #66 & #71
    Historical correction:

    In 1945, pretty much the total of electronic computers was: the Atanasoff-Berry Computer, some Colossus machines at Bletchley Park, and the Harvard Mark I. ENIAC was just starting to come up, and its clock was 60-125 kHz. It started with 20 words of memory, although it later acquired a 100-word core memory.
    Hence, even an iPhone’s 620 MHz ARM CPU is about 5000-10000X higher in clock rate, better than that on performance, and vastly larger in memory.

    MANIAC didn’t get turned on until 1952 [it was among the small batch of "open source" machines derived from John von Neumann's plans, of which the only one left is JOHNNIAC, from RAND Corp.]

    As for later dates:
    1964: CDC 6600, first really successful supercomputer
    Clock = 10 MHz, main memory = 256K (60-bit) words, call it 2MB.

    1976: Cray-1, first really successful vector supercomputer; typically $8M or so.
    Clock = 80 MHz, main memory = up to 1M 64-bit words, i.e., 8MB [huge!].
    Any current laptop could easily beat it.

    At the Computer History Museum, we have a tiny piece of a Colossus, one rack of the ENIAC, and the entire JOHNNIAC. [And for the next year, courtesy of Nathan Myhrvold, we have one of the two working mechanical Babbage Engines in the world, which is cranked daily, and must be seen to be appreciated.] Of course, we have a CDC 6600 and Cray-1 as well.

    Anyway, it *was* possible to run a “large-scale” simulation in 1976, if by large-scale, someone means “fits in 8MB of memory, with processor much slower than laptop, and costs $8M.”

    Comment by John Mashey — 23 Jun 2008 @ 4:36 PM

  75. “1976: Cray-1, first really successful vector supercomputer; typically $8M or so.
    Clock = 80Mhz, main memory = up to 1M 64-bit words, i.e., 8MB [huge!].
    Any current laptop could easily beat it”

    Even the iPhone could beat it by a fairly wide margin. That doesn’t even take into account the fact the Cray 1 wasn’t available in the time frame we are talking about and was a huge jump in its time.

    Comment by L Miller — 23 Jun 2008 @ 6:29 PM

  76. RE: 73

    Hey Cobbly,

    Actually, for the purpose of clarity, I was using the table to define the period of occurrence of an El Nino pattern. It was not for comparative purposes. The comparison that I had intended is between the ocean heat content of the study and the Pacific ocean heat content noted by the TAO/Triton system.

    The original graphic was used to demonstrate the strong negative Heat Content spike that occurred between 1998 and 2000. The point I was attempting to make in 65 was the occurrence of the spike did coincide with an El Nino; however, the depth of the spike exceeded recent historic values.

    Looking over the data set that was used in the study it would appear that the amplitude of the 1998-2000 cooling exceeded any prior data set which would suggest that El Nino alone must not have been responsible for the depth of the amplitude of the measured signal.

    To me it would seem there is a high possibility that the additional variable for the spike has to be outside direct ENSO influence. I appreciate your preliminary analysis; however, it suggests that for the deviation in amplitude noted there had to be a significant event that could have influenced the amplitude as measured. It appears it either had to be an equipment error or a very large phenomenon. I imagine a reduction in insolation for a year would have played a part, though I do not know this for a fact.

    That the drop in heat content was demonstrated at a few of the initial long term PIRATA sites as well suggests that the cooling extended to bodies of water other than those that would be directly affected by an El Nino wind or ocean current. The end result as I see it is that the pattern noted in the study may have been mirrored in the TAO/Triton data set up until 1998; however, it appears to have diverged since then.

    As to the introduction of the new subject regarding the normal wintertime blocking anti-cyclone that sets up NW of Ireland: on UKweatherworld I have tried to address the anti-cyclonic pattern that has been absent recently (having advanced to the Barents Sea north of Norway for the 2004-2007 winter seasons). Last year was the first time that the normal pattern has set up in nearly 5 years, though it was still around a hundred miles north and further west than normal. The interesting thing I saw was the location of the pressure wave at the 250mb altitude and the surface phenomena that occurred both NW of Ireland and over Austria/Switzerland for the last three years. It almost appeared the upper level high split into two, forming the two surface features with the same cold air pocket driving the two pressure centers.

    The major difference I saw in the NCEP* data set was the location of a cyclonic pressure wave that appeared to take up residence at the Pole this past winter, as opposed to the anti-cyclonic pressure wave of the past two years.

    * http://nomads.ncdc.noaa.gov:9091/ncep/dates
    “Chart Type: No Hemp 250MB Analysis Hgts_Isotachs Stn Plots”
    and again the similar chart based on either 950mb or the surface.

    Cheers!
    Dave Cooke

    Comment by l david cooke — 23 Jun 2008 @ 6:53 PM

  77. Sea levels still seem to be a problem. Annual rise 1mm to 3mm depending on who is measuring. Accuracy for NASA Jason: 3cm to 4cm. Note the currently falling sea level per NASA: http://www.jpl.nasa.gov/images/jason/20080616/chart-browse.jpg

    Comment by Gary — 23 Jun 2008 @ 7:29 PM

  78. I am unclear on something.

    I understand ENSO events to principally be redistributions of water of different temperature in the Pacific.

    As a redistribution there should be no effect on overall global average sea level.

    Is there some other process that is being posited? Escape of heat to the atmosphere or vice versa?

    Comment by John Lederer — 23 Jun 2008 @ 8:46 PM

  79. Re #77: the problem seems to be open-mindedness. Chaos is not easily anticipated. I would be more comfortable with responses that indicated uncertainty than certainty. Overconfidence is not a virtue in predicting climate or weather.

    Comment by captdallas2 — 23 Jun 2008 @ 9:20 PM

  80. Re 69: Ray, this is what he actually said in the Reuters article, if you had bothered to look:

    Fellow report author John Church said he had long been suspicious about the historical data because it did not match results from computer models of the world’s climate and oceans.

    “We’ve realigned the observations and as a result the models agree with the observations much better than previously,” said Church, a senior research scientist with the climate centre.

    “And so by comparing many XBT observations with research ship observations in a statistical way, you can estimate what the errors associated with the XBTs are.”

    Maybe climatologists have a different set of glasses with which to look at data. If I did this in my profession I would be fired, and rightly so.

    [Response: I hope that you are not in a profession that the public relies on - if you ignored known problems with your data and refused to deal with them, then you would deserve to be fired. - gavin]

    Comment by greg smith — 23 Jun 2008 @ 9:35 PM

  81. #74 John Mashey:

    CSIRAC, circa 1949, is a von Neumann machine, and is intact in a Museum in Melbourne. No longer goes, of course…

    Comment by GlenFergus — 23 Jun 2008 @ 11:09 PM

  82. Paulidian, I’m not sure you were paying much attention in High School Science class. I teach first year university physics experimental technique. In one experiment my students test the equations of circular motion. It’s not uncommon for their results to disagree with these equations.

    Rather than encouraging them to run from the room screaming “where’s my Nobel Prize” I tell them first to check their calculations, at which point most of the discrepancies disappear. In those cases where they don’t I suggest they review their experimental technique. Occasionally unexplained discrepancies remain, but usually they realise they were doing something wrong. If they have time to retake the data it’s usually much better.

    It’s always possible that these 400 year old theories are wrong, but not even a first year student would seriously believe we would throw out the model based on a single set of data. In this case the data from the more accurate measuring devices fits well with the models. The data from the older, less accurate devices does not, so people have checked whether the older devices might have a systematic problem and no one is surprised to discover they do. When corrected both devices fit well with the models.

    Comment by feral sparrowhawk — 23 Jun 2008 @ 11:23 PM

  83. I am disappointed that the study only goes to 2004. It seems odd to cut off analysis at 2004 when the Lyman paper shows a cooling trend to 2005. Is there an explanation for such an arbitrary cut off point?

    And even though vulcanism correlates with a few cooling inflection points, there are other cooling inflection points that lack volcanic explanations. At best it may explain 50% of the variation.

    I am curious if people have read the recent article and could comment.

    Greenland Ice Core Analysis Shows Drastic Climate Change Near End Of Last Ice Age

    http://www.sciencedaily.com/releases/2008/06/080619142112.htm

    “The ice core showed the Northern Hemisphere briefly emerged from the last ice age some 14,700 years ago with a 22-degree-Fahrenheit spike in just 50 years, then plunged back into icy conditions before abruptly warming again about 11,700 years ago.”

    Such changes certainly do not seem to be the result of vulcanism or CO2. What energy source could create such changes? 22 degrees in 50 years makes the recent trend pale.

    Comment by gusbobb — 24 Jun 2008 @ 12:46 AM

  84. One very interesting part of the paper is the speculation that thermal expansion of the deep ocean (depth > 700m) is the unknown contributor in the sea level rise budget (see fig 3a, orange line in the paper). To close the budget they pick a deep-ocean thermal expansion of 0.2 mm/yr (which corresponds to 0.2 W/m2 at 700m depth).
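    A crude consistency check on those two numbers (the thermal expansion coefficient used here is an assumed round value of about 1.3e-4 per degC; the real coefficient varies with temperature and pressure, so this is only ballpark):

        flux = 0.2                    # W/m^2, assumed heat uptake below 700 m
        alpha = 1.3e-4                # per degC, assumed thermal expansion coefficient
        rho_cp = 4.1e6                # J/(m^3 degC), volumetric heat capacity of seawater
        seconds_per_year = 3.156e7

        rise = alpha * flux / rho_cp * seconds_per_year   # metres per year of thermosteric rise
        print(round(rise * 1000, 2), "mm/yr")             # about 0.2 mm/yr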

    Comment by Aslak Grinsted — 24 Jun 2008 @ 4:11 AM

  85. #76 L David Cooke,

    With any event in the climate system it’s often impossible to be sure whether what you’re seeing is just the feature at hand (like an EN event), or whether there’s complication from other processes. However, the 1998 EN was particularly intense (in terms of its impact on global temperatures), so it wouldn’t surprise me if it had a particularly intense impact upon the Warm Pool volume. There may be something else going on, but I haven’t the time to start searching journals for discussion of the ’98 EN event.

    I wasn’t intending to introduce a new issue, merely to illustrate an analogous process that seems to support your suggestion of a link between NH droughts and volcanic activity. With what’s going on in the Arctic we may (should) already see impacts on Northern Hemisphere synoptic patterns, my over-riding interest at present. I shall go and lurk over at UK Weatherworld to see what’s being said. But I prefer to rely on primary published research (as I’m all too aware I don’t know enough to start looking at raw met. data to try to find any impact).

    Regards

    Cobbly.

    Comment by Cobblyworlds — 24 Jun 2008 @ 6:18 AM

  86. Dallas Tisdale (re #79), Uncertainty is an integral part of science–but it has to be quantified. This is invariably done in studies of climate change. Within any reasonable level of uncertainty, anthropogenic causation of climate change is beyond doubt. Weather is indeed chaotic. Climate is not.

    Comment by Ray Ladbury — 24 Jun 2008 @ 7:40 AM

  87. re: 83
    “22 degrees in 50 years makes the recent trend pale.”

    Of course, the scary thing is that the climate is capable of changing 22 degrees in 50 years. Were it to suddenly ramp up like that tomorrow, the world in 5 years would be dramatically different and in 50 years, we wouldn’t recognize it.

    Comment by Jeffrey Davis — 24 Jun 2008 @ 7:43 AM

  88. Greg Smith, I did indeed read the piece, and I find nothing wrong with applying a correction to data when I find a source of error. You are assuming that the data were corrected because they did not conform to theory. Were that the case, why would Church have “LONG been suspicious”? Rather, a discrepancy was found between data and model. Both sets of researchers went back and looked at the model and the data. The error was found in the data, not the model. It takes a pretty jaundiced and paranoid view of the scientific process to find anything there that is not above board. I think you need to go back and read over what was done and examine what it says about YOUR underlying attitudes. They reveal much more about you than they do about the scientific process.

    Comment by Ray Ladbury — 24 Jun 2008 @ 7:45 AM

  89. Re gusbob @ 83: “What energy source could create such changes? 22 degrees in 50 years makes the recent trend pale.”

    Yep, amazing what a relatively small change in insolation, plus a rapid change in albedo, plus a rapid change in atmospheric water vapour, CO2 and methane can do, isn’t it?

    Comment by Jim Eager — 24 Jun 2008 @ 8:52 AM

  90. RE: #63 Why are climatologists always adjusting their data? How do they know in which direction to adjust them?

    In order to compare the price of, say, gasoline in 1973 to the price in 2008, economists correct the 1973 values for inflation so they can report it in 2008 dollars. When civil engineers measure distances using a metal tape, they may have to correct their values for ambient temperature due to the thermal expansion or contraction of the tape. In the laboratory, very precise temperature measurements (e.g., to the nearest 0.001 degree C) made with a glass thermometer have to be corrected by reference to calibration data specific to that thermometer to account for flaws in the glass capillary. To see the kinds of corrections oceanographers apply to ocean temperatures measured at sea, read this:
    https://darchive.mblwhoilibrary.org/bitstream/1912/169/3/Nansen_Bottles.pdf

    Failure to correct for a known bias would leave economists, engineers, and scientists with data having limited utility.
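    As a trivial sketch of the tape-measure case (the expansion coefficient is the usual textbook value for steel; the field numbers are made up purely to show the form of the correction):

        alpha_steel = 1.2e-5      # per degC, nominal expansion coefficient of steel
        t_calibration = 20.0      # degC at which the tape reads true
        t_field = 35.0            # degC during the measurement (made-up value)
        measured = 250.000        # metres read off the tape (made-up value)

        # A warm tape is physically longer, so the raw reading understates the distance.
        corrected = measured * (1 + alpha_steel * (t_field - t_calibration))
        print(round(corrected, 3))    # 250.045 m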

    Comment by Chuck Booth — 24 Jun 2008 @ 9:40 AM

  91. Re: 63 & 90. Here’s another analogy. If I gave you copies of all the maps of part of the coastline for the last hundred years and asked you to calculate local sea level changes from them over that time, I think it would be very likely that you could refine your answer with successive attempts backed by increasing research into the map makers and their methods etc.

    Comment by Adam — 24 Jun 2008 @ 9:47 AM

  92. re: #81 GlenFergus
    Oops, forgot that one – thanks for the reminder.
    However, just to be precise, there are von Neumann machines (most) and the tiny handful of them derived from his Princeton plans, and CSIRAC is one of the former, but not the latter, i.e., it was an independent effort.

    I’m glad it has a safe home. If for some reason anyone wants to throw it away, and no one in Australia wants it, give us a call. [That's how we got JOHNNIAC, rescued about a day before it was hauled away for junk, because an engineer who'd worked on it just happened to park in a back parking lot of the museum that was dumping it.]

    In any case, this history is a reminder that people take a lot of compute power for granted, but it certainly didn’t exist in 1945, and even a 1976 Cray-1 wasn’t much in current terms.

    For anyone interested in a readable introduction to the topic,

    Smarr and Kauffman, Supercomputing and the Transformation of Science

    is a little old, but pretty useful, and cheap on Amazon used.

    Comment by John Mashey — 24 Jun 2008 @ 9:51 AM

  93. Gary writes: “note currently falling …” and points to:
    http://www.jpl.nasa.gov/images/jason/20080616/chart-browse.jpg
    Gary, what is charted there and how long a time span do you need to assess whether there is a change in the trend? How do you know what you claim to see there? You’re not just following wiggles, are you?

    Comment by Hank Roberts — 24 Jun 2008 @ 10:03 AM

  94. 89# Jim Eager Says:
    Yep, amazing what a relatively small change in insolation, plus a rapid change in albedo, plus a rapid change in atmospheric water vapour, CO2 and methane can do, isn’t it?

    Did the authors attribute both the rapid rise and fall in temperatures to water vapor, CO2 and methane? That seems unlikely due to observed lag times. Changes in insolation would be my first suspect. They mentioned changing atmospheric circulation, and I would think changing ocean oscillations and temperatures would contribute.

    Comment by gusbobb — 24 Jun 2008 @ 10:48 AM

  95. RE: 90

    Hey Chuck,

    I don’t know that it would limit the utility as much as it would limit the precision you could apply to the data set. Hence, part of the question that comes into play is how do scientists extend the precision so that it can be tied to the original instrument? A lack of understanding by many laymen can be linked to this question. The point being (at the risk of my being redundant), the question of trust. (Regardless that people use engineered bridges or work/live/travel in engineered structures every day. Seems they are willing to trust what they can see…)

    I believe you have touched on one of the issues that I and others have had when dealing with AGW/CC theories: when the level of precision cannot be confirmed for the data set and the analysis carried out falls outside of the precision limits (i.e. a CC variation of 0.65 deg C against raw measurement precisions of +/- 2 degrees), trust becomes an issue.

    If you replace an instrument with a more precise measure it is usually important that the two run concurrently to ensure the variation can be confirmed over the normal range of measurement. What makes it even more difficult is the measurement techniques that may have changed. Depending on the person, the reading of the value can have some level of subjectiveness. Hence, a measure could change not only because of an instrument change, but also because of a change in the reader.

    So beyond the technical bias aspects which affect the precision, as you suggest, we have confidence issues by some in regards to data set validity. Then we also have the additional issues that can be introduced when you add proxies where there could be multiple variables affecting the amplitude of an attribute. This does not mean that the science is faulty, only that we laymen may be ignorant of the protocols for the experiments and measurements used to establish the data sets.

    Cheers!
    Dave Cooke

    Comment by l david cooke — 24 Jun 2008 @ 10:58 AM

  96. gusbobb @ 83:

    “Such changes certainly do not seem to be the result of vulcanism or CO2. What energy source could create such changes?”

    Catastrophic release of methane from clathrate, perhaps. That would also nicely explain the short duration of the event.

    Comment by Jan Rooth — 24 Jun 2008 @ 12:03 PM

  97. It’s clear from a number of contributions here that what denialists want is a rule that errors in data can only be corrected if the correction leads to a worse fit between climate models and the data.

    Comment by Nick Gotts — 24 Jun 2008 @ 12:30 PM

  98. Gavin et al,

    Question for you guys. It seems clear to me that adjustments made to raw surface data should [overall] adjust recent data down (or past data up) due to UHI. Yet every statistical audit of GISS and GHCN adjustments (that I can find so far) finds them to do the exact opposite overall.

    What are the contributors stronger than UHI that explain this? In simple terms. Thank you.

    [Response: Not sure why you think this. The non-UHI corrected GISS analyses show a larger trend than the corrected product (See Hansen et al 2001). But there are USHCN corrections that are related to time-of-observation biases, location changes (city centre to airport for instance) that go the other way. - gavin]

    Comment by Radar — 24 Jun 2008 @ 1:10 PM

  99. RE: #63 & #90:

    Corrections to observations are made without regard to the theory. Corrections are made to better reflect reality, whatever that reality happens to be. If the corrections turn out to better support the theory, that’s more evidence that the theory is correct.

    Perhaps skeptics are confusing the actual making of the correction with the stimulus for investigating whether a correction is needed. The former is done without regard to theory. The latter often is prompted by mismatch to theory. There’s nothing at all wrong with that; it is the standard way that science in all fields is done.

    Comment by Tom Dayton — 24 Jun 2008 @ 1:40 PM

  100. Gavin I hope you (or other RC folks but you are the insider at NASA) will do some posts about the latest James Hansen testimony in congress. Rightly or wrongly, I think Hansen’s dramatic interpretations of the threats from AGW are going to be the key media “reference points” going forward.

    Comment by Joseph Hunkins — 24 Jun 2008 @ 1:55 PM

  101. Corrections to observations are made without regard to the theory.

    I don’t think this is generally true or even possible, since observers are generally believers in the theory in question, and thus potentially influenced in any subjective aspect of the observation.

    Sometimes the prevailing theory/hypotheses will be used in an effort to formulate corrections, reject outlier data, etc. Whenever this is done it seems the author should provide the rationale to avoid criticism for a type of circular reasoning.

    Comment by Joseph Hunkins — 24 Jun 2008 @ 2:31 PM

  102. Rejecting outlier data without a lot of careful analyses and very solid justification proving that the data was measured incorrectly and not because it doesn’t fall where it “should” is a big no-no in science. And scientists actually check each other for that kind of thing. That’s part of what peer review is for, but more than that, if you earn a reputation as someone who manipulates data as a means to prove your pet theory or model, funding starts to dry up, your papers start to get rejected and basically people stop trusting your work and scientific integrity. Scientists are pretty blunt and pretty cruel that way. They have to be.

    Comment by Figen Mekik — 24 Jun 2008 @ 2:53 PM

  103. RE: #101:

    It certainly is possible for corrections to be made without regard to theory! Suppose you discover that every temperature sensor made in a particular shop in a particular week under-reports temperature by 1 degree. You verify that by testing a sample of them in the lab. Theories of climate have nothing to do with that.

    Now you use serial numbers to find all those particular temperature sensors that were deployed, and increase your records of their observed temperatures by 1 degree. You do so for _all_ those sensors, regardless of whether their observations to date have agreed with your theory.

    Any data points that, when uncorrected, exceeded your theory’s prediction, now exceed your theory’s prediction even more due to the corrective addition of 1 degree. You do _not_ refrain from applying the correction to those data points. You apply the correction based purely on criteria having nothing to do with your theory, and you see whether your theory comes out better or worse.

    This sort of correction happens all the time, in all branches of empirical science. As Figen Mekik pointed out in #102, an essential part of science is an active and sometimes vicious gang of your peers who are all too willing to point out your mistakes and biases.

    Comment by Tom Dayton — 24 Jun 2008 @ 4:34 PM

  104. Re #100 [Joseph Hunkins] “Rightly or wrongly, I think Hansen’s dramatic interpretations of the threats from AGW are going to be the key media “reference points” going forward.”

    I certainly hope so – and if you’re right, that in itself will show how mistaken those who are arguing that Hansen has made a tactical blunder are.

    Comment by Nick Gotts — 24 Jun 2008 @ 5:00 PM

  105. 101. Mr. Hunkins makes a sweeping generalization about how scientists view data versus theory and how they treat raw data. I would be very interested in an analysis of this in a specific area of climate science or oceanography. Where do you see this happening? Please be specific and provide references.

    Comment by Paul Middents — 24 Jun 2008 @ 5:05 PM

  106. Re: 102. “Rejecting outlier data without a lot of careful analyses and very solid justification proving that the data was measured incorrectly and not because it doesn’t fall where it “should” is a big no-no in science.”

    That’s not strictly speaking true. Any time an experiment has a heavy tailed error distribution it can result in measurements you might as well throw away even though the measurement process was as correct as it can be.

    A lot of people used to think that naturally occurring error distributions would be close to Gaussian, based on the central limit theorem, and that actually covers a lot of situations. But it does not cover all situations.

    Ultimately, the question of whether a datum should be taken at face value or not is one for robust statistical methods. One reasonable approach is to apply only methods of inference and estimation which are smooth in the Prokhorov metric. This roughly and qualitatively means that the answers you get are insensitive to a small proportion of gross errors in the data and small errors in the remainder of the data. In the past few decades many very useful ways to obtain this sort of property in inference and estimation have been devised.

    For example, it was well known for a long time that the median was much less susceptible to error than the mean, but the median was also a much less efficient estimate – in other words you paid a big penalty from the median “throwing away” so much of your data. But it has also long been known that there are other estimates which are equally robust, but which are more efficient (for example the Hodges-Lehmann estimate (median of pairwise means) is very efficient if the error distribution is symmetric).
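    A small sketch of those estimators on data with one gross error (just the mechanics; the efficiency and robustness trade-offs are worked out properly in the robust statistics literature):

        import numpy as np
        from itertools import combinations

        def hodges_lehmann(x):
            # Median of the pairwise means (Walsh averages), including each point with itself.
            pairs = [(a + b) / 2 for a, b in combinations(x, 2)] + list(x)
            return float(np.median(pairs))

        rng = np.random.default_rng(1)
        sample = np.append(rng.normal(10.0, 1.0, 49), 1000.0)   # one wild, mis-keyed reading

        print(np.mean(sample))            # dragged far from 10 by the single bad point
        print(np.median(sample))          # robust, but statistically less efficient
        print(hodges_lehmann(sample))     # robust and more efficient than the median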

    I suspect that one of the main reasons people in the climate field do not reflexively only use Prokhorov smooth methods all the time (as we do in my field of finance) is that they know they might have to explain their work to an unsophisticated audience (such as government officials).

    Comment by Andrew — 24 Jun 2008 @ 6:53 PM

  107. Joseph Hunkins, Figen Mekik is correct–you would never reject data simply because it did not fit a particular theory. In so doing you might be throwing away a chance for a very important discovery–and scientists are interested in nothing more than this.
    While it is true that theory can introduce bias into an analysis, the way this happens is much more subtle than this–e.g. by influencing our ideas of what can be measured or perhaps looking for errors in outliers more than in data that fit the theory. However, a good scientist can and will try to check for these biases–and if he doesn’t, his peers certainly will, and not kindly.
    There are ways of checking data for self-consistency without reference to a theory. Such methods can also be used to detect bogus data. As an example, suppose you have a moderate-sized sample, and you have a single outlier. Is it a mistake, or is it physics? One way to check is to gather more data and see if you get more outliers and whether they start to resemble a feature–e.g. a second mode, a thick tail, etc. If you can’t find any more data, you can see if you can find “similar” data (generated by at least some of the same processes) and look that way. In addition, you can look at the order plots, do bootstrapping… The data analysis toolbox is huge and crammed to the gills with interesting tools.

    Comment by Ray Ladbury — 24 Jun 2008 @ 7:00 PM

  108. RE: 107 and Joseph Hunkins:

    Probably, Joseph, you are applying your knowledge of science as it is usually taught in high school and even some undergraduate college classes, as the purest form of experimentation. That is, if your hypothesis is not supported by your pure experiment, you must accept that result as absolute, give up that hypothesis and the theory that created it, modify your theory so it produces a different hypothesis, and then run a brand new experiment.

    The actual practice of science doesn’t work in only that way. It works differently not because scientists don’t rigorously follow the scientific method as my previous paragraph described it. Rather, science never has and never should work only as my previous paragraph described it. “Normative” (prescriptive–ideal) decision making uses every scrap of information that is available. The form of science that my previous paragraph describes is only one small part of the ideal scientific method, and it is not possible at all when dealing with irreproducible historic data that you’ve already explored thoroughly.

    A gentle introduction to the messiness of real science is the topic of “quasiexperimentation.” I was brought up on Cook and Campbell’s book having that title, but there are lots of more recent and brief overviews. By the way, that book is not about climatology. It is about research methodology in general, though its examples are from behavioral science. Climatology in no way gets an exemption from scientific method. All these explanations by me and other bloggers are not excuse-making for climatology. We’re explaining how science does and should work.

    Comment by Tom Dayton — 24 Jun 2008 @ 11:19 PM

  109. My comment was not clear enough above. I am not suggesting that observations are generally unreasonably influenced by theory. I would suggest that that happens sometimes, and is a potentially serious problem. My point is that divorcing theory and observation is not as easy as was suggested, and scientists sometimes fail to do this. A good example where this is *routinely* done is “creation science” where accredited scientists reject overwhelming evidence simply because it is incompatible with the key features of creation “science”.

    Ray I think the notion that data is never rejected because it is incompatible with theory is far too optimistic. More importantly, there is often some room for subjectivity when applying corrections and even in initial measurements (e.g. reading a thermometer).

    Paul, my point was not as dramatic as you seem to think, though observational and correction bias would make a fantastic (and unfundable?) study.

    Comment by Joe Hunkins — 25 Jun 2008 @ 3:55 AM

  110. Andrew, If you look at what Figen said, I think you will find that it is consistent with your more detailed explanation. I am grateful to you for emphasizing the statistical nature of outlier identification, though. It’s a problem I often confront in my day job.

    Comment by Ray Ladbury — 25 Jun 2008 @ 6:57 AM

  111. Re gusbobb @94: I suspect I’ll regret the resulting thread diversion, but exactly what change in insolation do you propose that is capable of producing a change of 22 degrees in 50 years?

    Jan Rooth @96 was more specific than I in suggesting sudden and massive release from methane clathrates. His comment on short duration stems from the fact that methane is a potent greenhouse gas, but reduces to less potent CO2 and H2O in the atmosphere fairly rapidly.

    Comment by Jim Eager — 25 Jun 2008 @ 9:21 AM

  112. RE 111, I think the life span for CH4 (of which there are many gigatons in the Arctic region just waiting to be released with the warming that’s already in the pipeline) is about 10 years. With our rapid warming and great CH4 releases compounding, I think this could spiral the warming way out of control in a positive feedback fashion.

    See David Archer’s post, http://www.realclimate.org/index.php/archives/2005/12/methane-hydrates-and-global-warming/langswitch_lang/po

    Comment by Lynn Vincentnathan — 25 Jun 2008 @ 10:17 AM

  113. 110. “Andrew, If you look at what Figen said, I think you will find that it is consistent with your more detailed explanation.”

    No, I won’t. The part that is definitely irreconcilable is where he says:

    “Rejecting outlier data without a lot of careful analyses and very solid justification proving that the data was measured incorrectly and not because it doesn’t fall where it “should” is a big no-no in science.”

    The problem is that he implies that rejecting data is only justifiable when one can show that the data was “measured incorrectly”. There are distributions of errors for which this is not true – for which it is better to ignore or down-weight some data intrinsically as part of the estimation.

    For example, consider estimating something like a stability boundary of a chaotic system (or, to use the recent slang, a “tipping point”). If you can only take data from a system for a finite time, you don’t have the luxury of finding out what eventually happens; so there will be some cases of instability which do not manifest themselves in a short time, and which have smaller excursions than some cases of stability. If the mixing time of the chaos is faster than the time scale you can observe, or if there is some randomness in the system, the best option may well be to use a statistical model to determine stability; probably a logistic regression to estimate the probability of instability.

    In this case, you run the danger of a purely artificial estimation problem called “separation”, which happens if you correctly identify all the stable and unstable examples as such. Because you will only have a finite set of data, there will then be a range of model parameters which are equally likely; i.e. the likelihood function is degenerate (the Fisher information is singular). The parameter errors which result from this are huge, and the model, despite getting all the data to “lie on the theoretical line”, is not so useful for prediction.

    What one normally does in this situation (as pointed out by Firth) is to add an information-free prior distribution (Jeffreys’ rule), and what this ends up doing is down-weighting the data which are most influential. So even though the observations of those data are correct, and even though the naive model agrees with those data, and the outcomes of the model for those data are the most important, the best thing to do is to down-weight those data and move the “theoretical line” to AGREE LESS with them. In particular, if you happen to have observations which land exactly on the stability boundary, the very thing you are trying to estimate, you will be best off IGNORING THEM COMPLETELY. In other words, if you actually take data exactly where you want it, then irrespective of whether the measurement is correct or completely broken, you will throw that data out.

    This sort of example – where TOO MUCH AGREEMENT BETWEEN THEORY AND EXPERIMENT leads to a pathology resolved by rejecting good data – is not something that most scientists keep in the back of their mind, unless they are thinking about some sort of malfeasance.
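    A bare-bones illustration of the separation problem (a toy one-dimensional logistic fit by gradient ascent; the quadratic penalty below is only the simplest stand-in for the Jeffreys-prior correction cited from Firth, not the real thing):

        import numpy as np

        # Perfectly separated data: every negative x is class 0, every positive x is class 1.
        x = np.array([-3.0, -2.0, -1.0, 1.0, 2.0, 3.0])
        y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

        def fit_logistic(x, y, penalty=0.0, steps=20000, lr=0.1):
            # Gradient ascent on the logistic log-likelihood for a single slope b,
            # with an optional quadratic penalty standing in for an informative prior.
            b = 0.0
            for _ in range(steps):
                p = 1.0 / (1.0 + np.exp(-b * x))
                grad = np.sum((y - p) * x) - penalty * b
                b += lr * grad
            return b

        print(fit_logistic(x, y))               # keeps growing as you allow more steps: no finite maximum
        print(fit_logistic(x, y, penalty=1.0))  # penalised estimate settles on a finite value

    With separation the unpenalised slope just keeps climbing with more iterations, which is the degenerate likelihood described above; the penalised fit stays finite.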

    “I am grateful to you for emphasizing the statistical nature of outlier identification, though. It’s a problem I often confront in my day job.”

    I actually think just about every modern scientist is probably up to their neck in this stuff, whether they know it or not.

    Comment by Andrew — 25 Jun 2008 @ 10:52 AM

  114. I think Ray was referring to the part where I said without further analyses. BTW, and maybe a minor point but I am a she. :)

    Comment by Figen Mekik — 25 Jun 2008 @ 11:19 AM

  115. Joseph Hunkins, “Creation science” is not science, and those who purport to practice it are not scientists. There may be some scientists out there who are over zealous in purging their data of “outliers,” but they aren’t doing science when they do this. In so doing, they preclude ever discovering new science, so I would contend that any scientist worth his or her salt will instead use statistical techniques and stringent investigation to identify potential outliers. Experimentalists love nothing better than to rub the nose of a theorist in a theoretical failure. It is the closest the experimentalist will ever get to the feeling of sacking a quarterback in the Super Bowl. I do not know of anyone who would forego that possibility merely to conform with theory.

    Comment by Ray Ladbury — 25 Jun 2008 @ 11:28 AM

  116. This has been quite the enlightening exchange this morning. My thanks to all commenters.

    Comment by David B. Benson — 25 Jun 2008 @ 1:55 PM

  117. I am a little troubled by an approach to data that starts off with “There is a divergence in the data between it and my theory. What errors in data gathering might there be that would explain this divergence?”

    Doesn’t this tend to bias the corrections?

    In an ideal world the hunt for problems in the data would be balanced, i.e. as much attention would be paid to “what errors in data gathering might there be that causes a false congruence between my theory and the data”.

    But the real world, and real people, are not perfect.

    [Response: Who said that this was the starting point? We have stressed here over and again that discrepancies that exist perforce demand that three things be looked at concurrently: 1) is there an issue with the data? 2) is there an issue with the theory (or its quantification in the model)? and 3) are we correctly comparing like with like? In climate science, as in every science, the resolution of any discrepancy is distributed amongst these alternatives. - gavin]

    Comment by John Lederer — 25 Jun 2008 @ 2:46 PM

  118. Tropical oceans expose riddle over global-warming equation

    A probe into levels of an important greenhouse gas above the tropical Atlantic has challenged assumptions about key sources of global warming, scientists said on Wednesday.

    Researchers found that natural chemicals in the atmosphere west of equatorial Africa destroyed 50 percent more ozone in that region than expected.

    This process also reduced concentrations of methane, another powerful greenhouse gas.

    It may well apply in oceans around the world and if so, it would pose major questions about how Earth’s inventory of global warming gases is calculated, they said. …

    Scientists led by John Plane at the University of Leeds, northern England, analysed a year of ozone and methane measurements taken at the Cape Verde Atmospheric Observatory on Sao Vicente, an island some 500 kilometres (380 miles) west of Senegal. …

    “Global models get levels of ozone in the troposphere about right. So if destruction rates are much higher than thought that means it must be coming from somewhere else,” University of York scientist Lucy Carpenter told AFP.

    Comment by Jim Galasyn — 25 Jun 2008 @ 5:26 PM

  119. All you climate change deniers can put Dr. Hansen’s statement in your pdf reader, read it and weep along with the rest of us.
    Twenty years ago we had twenty years to do something about the rise in CO2 in our atmosphere. We’re almost out of time.
    http://www.columbia.edu/~jeh1/

    June 23, 2008

    Comment by catman306 — 25 Jun 2008 @ 5:51 PM

  120. Just a layman’s question : could Benford’s Law be used to check the quality of data, and if so, in which instances?
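    Not an answer from the thread, but a sketch of what such a first-digit check might look like on a batch of positive measurements (whether Benford’s Law should be expected to hold for a particular climate data set is a separate question):

        import numpy as np

        def first_digit_freqs(values):
            # Frequency of leading digits 1-9 among strictly positive values.
            digits = [int(str(v).lstrip('0.')[0]) for v in values if v > 0]
            return np.bincount(digits, minlength=10)[1:] / len(digits)

        benford = np.log10(1 + 1 / np.arange(1, 10))     # expected first-digit frequencies

        data = np.random.lognormal(mean=3.0, sigma=2.0, size=10000)   # spans many orders of magnitude
        print(np.round(first_digit_freqs(data), 3))
        print(np.round(benford, 3))                      # broadly similar for wide-ranging data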

    Comment by Francois Marchand — 25 Jun 2008 @ 6:17 PM

  121. John Lederer, Why do you jump to the conclusion that the motivation to find the error came from the discrepancy with theory? Any decent experimentalist would want to find a real discrepancy–that’s how they get famous. The discrepancy has existed for a very long time–only now has it been explained. If the motivation were to make theory and data match by whatever means, don’t you think they could have done so more quickly?

    Comment by Ray Ladbury — 25 Jun 2008 @ 7:06 PM

  122. John Lederer,

    I did not write that the only time scientists look for errors in observations is when the observations deviate from the theory!

    Of course scientists do their best to make the observations accurate right from the get-go. But problems sometimes get missed, and problems sometimes don’t manifest until much later.

    When there _is_ a deviation from the theory, it is a clue that there _might_ be a problem with the observations. It’s a clue just like it’s a clue when the shop that built the sensor tells you they fired a worker who built sensors numbered 456 through 655, for falsifying the quality control records. Regardless of where the clue comes from, the scientists have a responsibility to follow up.

    As Ray wrote in #117, you don’t _only_ investigate the observations, you investigate the theory as well. Then you let the chips fall where they may.

    Comment by Tom Dayton — 25 Jun 2008 @ 7:41 PM

  123. Ray – I can only hope you are right to have what seems like boundless optimism that bias and subjectivity rarely rear their ugly heads among scientists and then are flushed out by the peer review process.

    Comment by Joe Hunkins — 26 Jun 2008 @ 2:43 AM

  124. Tom Dayton, I would say that the interest in points that don’t fall on the theoretical curve is two-fold:
    1)They could be in error
    2)They could be new physics
    You certainly don’t want to preclude the possibility of discovering new physics for the sake of conformism (“conformist” is not an adjective usually applied to scientists). You also don’t want to proclaim new physics for any tiny disagreement. So it is natural to look at these points AND to look at what theory predicts very closely. As Asimov said, “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’, but ‘That’s funny …”

    Comment by Ray Ladbury — 26 Jun 2008 @ 4:57 AM

  125. Nick Gotts #97:

    My variation on this theme:

    When the data is corrected towards confirmation of AGW, it is tampering with the data to fit the models.

    When the data is corrected in the other direction, it is proof that the denialists were right all along and there is nothing to worry about.

    Comment by Anne van der Bom — 26 Jun 2008 @ 7:27 AM

  126. Gavin, Looking at the second figure in your writeup, it appears that the model runs with volcanic aerosols are matching the long-term trend in ocean heat content very well. When one takes a closer look, however, there is a possibly important observation. Firstly, during each of the major volcanic events, the models show a higher amplitude change in ocean heat than is present in the Domingues curve. In fact, for the Agung eruption, the Domingues reanalysis is completely out of phase with the model ensemble mean. It is not clear from the labeling, but I assume the upper model realizations are for runs with no volcanic forcing added. Now my question: Do you think that the volcanic forcing is overdone in most of the recent-vintage AOGCM model runs? In other words, if the model aerosol input is tweaked until a good history-match is achieved with observed volcanic forcing (as measured by observed decreases in ocean heat content), will they then produce a long-term trend in TOA radiative imbalance that is too high relative to observations?

    [Response: The aerosol distribution for Agung is the least well observed of recent volcanoes, and so the uncertainty in the forcing is non-negligible - and the observed data has higher uncertainty too. So it could be tweaked either way. However, the volcanoes are acting like a release valve, deflating the OHC for a couple of years at a time. So while there will be less OHC going forward from every eruption (Gleckler et al 2005 found impacts even today from Krakatoa), that doesn't impact the trend subsequently. Think of it like a sweet jar - if you add in 2 sweets a day for weeks, and every so often scoop out a handful, the long term trend is lower, but the trend in between scoops is the same. - gavin]
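    The sweet jar analogy is easy to make concrete; a toy version with arbitrary numbers (nothing here is tuned to real forcings):

        import numpy as np

        days = 3650
        jar = np.cumsum(np.full(days, 2.0))      # add 2 sweets every day

        for scoop_day in (1000, 2000, 3000):     # occasional "eruptions"
            jar[scoop_day:] -= 300               # scoop out a handful

        # The level is lower after each scoop, but the slope between scoops is unchanged.
        print(jar[999] - jar[0], "gained over days 0-999")
        print(jar[2999] - jar[2000], "gained over days 2000-2999")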

    Comment by Bryan S — 26 Jun 2008 @ 7:54 AM

  127. Joeduck, the only thing I have faith in is that humans will act in a manner that they believe best furthers their interests. I also have faith that intelligent people will be better able to perceive where their interests lie. As a scientist, it is in my interest to detect any errors I make before they are detected by my peers. It is in my interest to detect any errors my peers will make. It is in my interest to know the difference between errors and discrepancies with theory that are indicative of new physics. That is how one’s prestige and influence increase as a scientist. You have to realize that science is a collective, competitive enterprise. It doesn’t depend on the judgment of any single individual, but rather on the collective judgment of a huge community of peers–all of whom think they’re smarter than you and are just waiting for the chance to prove it.
    And I would like to thank you for a novel experience–this is the first time anyone has ever accused me of boundless optimism.

    Comment by Ray Ladbury — 26 Jun 2008 @ 8:52 AM

  128. RE: #124 & #103:

    I agree with you, Ray.

    I anticipate someone else now will object that it’s not “fair” to adjust the theory to match observations that didn’t match the theory. So I’ll respond in advance:

    First of all, the goal is to make the theory reflect reality. That’s a different goal than when you’re betting on your prediction of a soccer match’s outcome. When betting on a soccer match you’re not allowed to adjust your prediction as the game progresses, because the only purpose of betting is to test your predictive skill. (Plus the secondary goal of drinking beer in a pub. Or maybe I’ve got the priorities reversed….) But in science the goal is good theory, which largely means good match of theory to observation. (There are other aspects of theory quality, such as fruitfulness and explanatory power.)

    Then there is the fact that climate models are models of (theories of) physical processes. Adjusting such a model requires you to let the chips fall where they may, just as correcting observations for a flaw in an instrument does (see #103). If your model under-estimates the June 1979 temperature by one degree, you cannot simply add to the model a line “If it’s June 1979, add a degree.” Instead you must change the model’s description of some physical entity or process, and accept the consequences not just for June 1979, but for every other date–even if that makes the model fit worse on other dates. That’s why you don’t change the model based simply on mismatch of one or a few observations. You use the mismatch as a stimulus and guide to figuring out what parts of the model might be sensible to change, based on supportive rationale and data in addition to the mismatch that caught your attention.
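
    As a toy illustration of that distinction (a deliberately trivial stand-in, nothing like an actual climate model), compare an ad hoc date-specific patch with a change to a physical parameter that affects every time step:

    import numpy as np

    forcing = np.linspace(0.0, 2.0, 24)            # toy monthly forcing
    observed = 0.55 * forcing + np.random.default_rng(0).normal(0, 0.02, 24)

    def toy_model(forcing, sensitivity=0.50):
        # Trivial stand-in for a physical model: response = sensitivity * forcing.
        return sensitivity * forcing

    # Ad hoc patch: fix one month by fiat. Every other month is untouched,
    # so nothing has been learned about the underlying physics.
    patched = toy_model(forcing)
    patched[6] = observed[6]

    # Physically motivated change: adjust the sensitivity parameter and accept
    # the consequences at every time step, for better or worse.
    retuned = toy_model(forcing, sensitivity=0.55)

    print("RMS error, ad hoc patch :", np.sqrt(np.mean((patched - observed) ** 2)))
    print("RMS error, retuned model:", np.sqrt(np.mean((retuned - observed) ** 2)))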

    Comment by Tom Dayton — 26 Jun 2008 @ 9:11 AM

  129. I think that several of you have leaped to a conclusion other than what I intended to suggest when I pointed to a possible bias in finding errors.

    Real world data collection and recording is messy — the recent example of buckets, intake inlets, and SST should suggest how messy it can be.

    When I have a theory and real world data seems to support my theory except for some untoward mismatches here and there, I am likely to look closely at the data for those mismatches and see if there might be some problem in its collection or recording that biased it away from the theory. Data gathering being messy, I am likely to find some problems.

    If, on the other hand, the data and my theory match, I am likely to tell myself what a smart lad I am and treat myself to a quiet, restrained jig of delight. I will check the data, but I am likely to believe that it is right and my search for error will be less than dogged.

    Such an approach is not that of a perfect scientist, but I am not perfect and neither is anyone else.

    Thus we are likely to have two possible errors in our method: a less than diligent search for errors when data matches our theory, and a less than diligent search for the types of errors that would move data that does not match our theory even further away.

    That is the result of being human.

    There are two ways to fight against this human proclivity. One way is rigorous self-discipline. The other is to provide full, easy access to the data and the detailed means of gathering it to those who oppose our theory.

    Rigorous self-discipline is commendable, but we don’t have perfect self-discipline. We just were not made that way.

    Comment by John Lederer — 26 Jun 2008 @ 11:11 AM

  130. RE: 129

    John Lederer, now I understand the points you have been trying to get across. I agree that the risks you describe exist.

    There is a third way that the scientific process fights against that human proclivity: Other folks are developing their own theories in competition with you. If they can’t get their theories to predict as accurately as yours does, they will be frustrated and maybe even suspicious, and will demand that you reveal all the relevant aspects of your theory, even if you did not do so originally.

    Comment by Tom Dayton — 26 Jun 2008 @ 12:10 PM

  131. Tom, Ray, John, Figen, others: Many good points by all of you above, but I think John’s points in 129 about the difficulties of maintaining a pristine set of observations are particularly relevant to the Global Warming discussion. MM’s complaints about temperature stations and the GISS corrections seem reasonable to me, as do concerns about data sharing. Reading the comments above leads one to wonder why these remain contentious issues.

    … humans will act in a manner that they believe best furthers their interests. I also have faith that intelligent people will be better able to perceive where their interests lie.

    Ray, I agree with the first part, disagree with the second. Like you I know a lot of smart folks, but unlike you I find that intellectuals cling *as stubbornly as others* to their questionable ideas. [Shermer’s “Why People Believe Weird Things” suggests why, for example, hoax medicine like homeopathy is more popular among the well-educated.] Theoretical complexity often makes it easy to be stubborn about beliefs because so few have the time to learn the underlying math/science. This relates to peer review in an interesting way – friends and insiders are less likely than outsiders to challenge defects. Are critics correct when they suggest this “friendly review” system is common in climate science, where authors know each other well?

    Comment by Joseph Hunkins — 26 Jun 2008 @ 5:39 PM

  132. RE: #130:

    John Lederer, lest I be misunderstood:

    I am very, very, very confident that the existing mechanisms in climatological science far more than adequately cover those risks.

    Comment by Tom Dayton — 26 Jun 2008 @ 11:24 PM

  133. Re #126: Gavin, to follow up, it seems to me that the issue is whether climate modelers are inadvertently dipping into the sweet jar with two hands to obtain the history match. If in the model climate, there are three sweets per day added to the jar (instead of two), then two handfuls being scooped out periodically might be required to balance the books.

    Dropping the analogy: if the aerosols added to the model climate are greater than those added to the real climate, and this produces a history match, it must then point to a lack of fidelity between the model climate and the actual climate. Maybe some parameterizations are not good, or some processes are incompletely represented in the model, or weather processes and feedbacks may not be handled appropriately, or possibly there are other unknown issues. The point is that the history match would be for the wrong reasons, and such a match would not portend skill for future predictions. Hence, this reasoning is what drove my previous question about aerosols.

    The bottom line for me (on the ocean heat content time series) is that sure, the climate system has been in a positive radiative imbalance over the last several decades, and sure, this is likely attributable largely to human-caused changes in GHG forcing, but I am not convinced that this new paper is a vindication of model forecasting skill. It seems possible, based on the Domingues reanalysis, that in nearly all climate models, the net TOA radiative imbalance due to the added GHGs may be higher than that observed in the actual climate. In other words, most of the current models have the sensitivity to GHG increases too high, and this has been compensated for by adding too much aerosol forcing, thereby giving the superficial appearance of a robust history match. In support of my curiosity, I might reference slide 3 in the Wijffels PowerPoint presentation given during the workshop. It is not hard to make the case that the amount of aerosols added to the model produces a big difference in the long-term trend in ocean heat content. Is my curiosity about this issue off base, in your opinion?

    [Response: You need to distinguish volcanic from tropospheric aerosols. The timing and magnitude of the former are much better constrained than the latter. In general though you are correct - good hindcasts only show the consistency of the calculation, but do not show that this is the only way that such consistency can be achieved. However, the bigger the number of matches, the harder it is for alternative scenarios to do so. Thus, getting the mean SAT trend right is easier than the SAT trend+stratospheric trend, which is easier than SAT+strat+OHC, which is easier than SAT+strat+OHC+short responses to volcanoes... etc. You get the idea. In a Bayesian sense, the more matches there are - especially if they are independent - the higher your posterior likelihood is that the original theory was correct. This is how it always works. - gavin]
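
    A minimal sketch of that Bayesian bookkeeping, with made-up likelihood ratios for each independent match (the numbers are purely illustrative):

    # Posterior odds grow multiplicatively with each independent diagnostic the
    # model matches, assuming each match is more likely if the underlying theory
    # is right than under an alternative.
    prior_odds = 1.0                 # even odds to start (illustrative)
    likelihood_ratios = {            # P(match | theory ok) / P(match | alternative)
        "mean SAT trend": 2.0,
        "stratospheric trend": 2.0,
        "OHC trend": 2.0,
        "volcanic OHC response": 2.0,
    }

    posterior_odds = prior_odds
    for diagnostic, lr in likelihood_ratios.items():
        posterior_odds *= lr
        prob = posterior_odds / (1.0 + posterior_odds)
        print(f"after matching {diagnostic:22s}: P(theory ok) = {prob:.2f}")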

    Comment by Bryan S — 27 Jun 2008 @ 8:11 AM

  134. Joseph,

    I appreciate and agree with most of your points. Of course measurements have errors associated with them and of course some of these errors may be caused inadvertently by the inherent biases of the observer. A lot of the science rests on quantifying those errors and uncertainties. But my point is that a good scientist would be upfront about those errors, and actually it’s more than that. A scientist NEEDS to be upfront about it if he/she wants to stay in the business as a respected member of the scientific community.

    Where I disagree is that we (climate scientists) aren’t all a bunch of friends who will be gentle with each other’s work. I have been pretty lucky to work with the scientific questions of interest to me and with leading people in the field. But friendliness goes only so far. The peer review process is often anything but friendly, especially if the reviewer chooses to be “anonymous.” And I know, as do all my colleagues, that this luck I speak of can change fast if I start to fudge data or make too many assumptions. As colleagues we really hold each other’s toes to the fire, otherwise our science will lose its integrity. You learn to develop a thick skin and accept and critically evaluate negative feedback on your work. Fast! And actually it’s that most horrendous review you receive that makes your work much stronger in the end when it is finally accepted for publication. Though you may be peeved at first, that harsh review is often very appreciated because it helps avert much potential embarrassment when the work is finally out there in the literature for many, many years.

    Comment by Figen Mekik — 27 Jun 2008 @ 9:55 AM

  135. RE: #131

    Joseph Hunkins wrote “Are critics correct when they suggest this “friendly review” system is common in climate science, where authors know each other well?”

    Joseph, I don’t know the climate science community. But the communities I do know–scientific research methodology and behavioral science–seem to have the same degree of overlapping authoring, peer reviewing, committee sitting, grant approving, and so on, as does climate science. In my scientific communities, “insider trading” usually is handled by formal checks and balances, but when that fails it is handled by informal checks and balances–usually more brutally.

    In most scientific fields, there is so little money that competition is fierce. That combines with fierce idealism. (You don’t become a scientist to get rich.) Young scientists sometimes are shocked at how quickly their friends become vicious critics, and then just as quickly become friends again. An analogy is the business world, where “Sorry, it’s just business” is a common conversation between friends. Likewise, prosecuting lawyers and defending lawyers can be the best of chums outside the courtroom, but horribly vicious to each other inside.

    Evidence is in most of the stories you read about any field of science. Good science reporters (e.g., those of Science News and Scientific American) stick skeptical comments by a scientist’s peers into nearly every story about that scientist’s discovery. You’ll notice that sometimes those comments are downright dismissive or even hostile.

    “Chum” has two meanings in scientific communities, and one of those is related to “shark.”

    Comment by Tom Dayton — 27 Jun 2008 @ 10:18 AM

  136. Joeduck and John Lederer, The thing is, you can’t just think of it in terms of “data” and “theory”. Each experiment has errors associated with it, and our anticipation of the errors likely given the experimental method affects our expectations of how much we should trust the data. Likewise, we will have much more confidence in theoretical predictions (and hence distrust of “outliers”) for a theory that has been well validated previously. What is more, different scientists place different levels of trust in data and in theory, and the folks taking or gathering the data are usually not the same folks constructing the theories. There is an inherent antagonism between theorists and experimentalists. A theorist likes nothing more than to look at an experimentalist’s data and say, “See, I told you so!” An experimentalist, on the other hand, fantasizes about the day when he or she will be able to slap the theorist across the face with a datasheet that contradicts the theory. It is not usual for an experimentalist to turn away meekly and say, “Yeah, my data kind of sucks.” They will naturally be reluctant to pass up the opportunity for fame and glory. And the theorist will be secretly sweating, checking equations, examining assumptions, etc.
    Look at the classic example of how the concept of the ether was overturned. The wave theory of light had a lot of very strong evidence and there was at the time no evidence against it. The concept of a wave propagating without a medium was unthinkable, so by implication, the ether had to exist. Yet, it stubbornly refused to reveal itself. Rather than give up, experimenters kept coming up with more and more sensitive experiments until Michelson and Morley drove the nail into the coffin. Now this led to not just relativity, but also the quantum revolution. More important, though, a conceptual revolution was required. So, really, the conservatism in rejecting the ether was appropriate. I don’t think the sort of bias you are talking about is a major problem in science as a collective enterprise–and that is how science actually gets done.

    Comment by Ray Ladbury — 27 Jun 2008 @ 11:07 AM

  137. Definition:

    Friendly reviewer–one who warns you before completely trashing your research

    Keep in mind–when I review your research, my credibility–even my career–is on the line. I’ve known husband/wife researchers who trash each other’s research if it deserves it.

    The most you can expect from a reviewer is constructive criticism.

    Comment by Ray Ladbury — 27 Jun 2008 @ 11:26 AM

  138. Ray, Figen: I don’t think the sort of bias you are talking about is a major problem in science as a collective enterprise–and that is how science actually gets done.

    I agree with this strongly. However I’m not clear to what extent it is a *minor problem*, and the stakes are so high that I would be comforted to see, for example, more critical discussion of Jim Hansen’s GISS data as well as his generalizations than I expect to see here at RC due to that “no friendly fire” factor.

    Figen, unless I am mistaken, *harsh insider criticism* of a colleague as high up as Jim Hansen would be a challenge to one’s career – is that a fair generalization?

    [Response: Not at all. - gavin]

    If it is ever true to even a small extent then …. Houston, we have a problem.

    Comment by Joe Hunkins — 27 Jun 2008 @ 2:00 PM

  139. Joe,

    If I received a detailed though harsh review from Jim Hansen, I may be a little mortified at first, but I would cherish the fact that he took the time and took my work seriously enough to provide a harsh review. Then, after getting over the emotions of both elation that he reviewed my work at all and dejection that he didn’t like it, I would sit down and go through his review point by point to see how I could improve my work. That would be the opportunity of a lifetime to seriously improve my work and potentially (hopefully) make a huge impact in the science.

    I am being sincere in this. And though not from Hansen, I have received many harsh reviews from some pretty hefty scientists and am alive to tell the tale :) Actually I am much better off for it. Nothing is as sobering and as educational as a good, long review.

    Comment by Figen Mekik — 27 Jun 2008 @ 2:45 PM

  140. Gavin and Figen, I guess I was not clear enough. Are there many examples of colleagues criticizing Hansen’s sometimes extraordinary claims about pending catastrophe?

    [Response: What extraordinary claims are those? I'd prefer you took something directly from one of his many writings rather than an interpretation of them on random websites. If they are indeed extraordinary (which remains to be seen), we can see whether he has been criticised and by whom. - gavin]

    Comment by Joe Hunkins — 27 Jun 2008 @ 3:06 PM

  141. RE: #140

    Joe, if your suspicions are correct, you should see evidence in the history of climate science. Take a gander at Spencer Weart’s free site, which has even more content than his book. http://www.aip.org/history/climate/

    Keep in mind that the medium-to-long term history is the scale of most relevance, because the corrective mechanisms we’ve been writing about do take a bit of time. The major reason they take a bit of time is that new observations are a key to the scientific corrective mechanisms, and making new observations takes more time than, say, writing an Op-Ed piece.

    Comment by Tom Dayton — 27 Jun 2008 @ 3:41 PM

  142. Gavin -

    Sure, I’ll find some Hansen quotes that IMHO are extraordinary and post them, but that approach won’t answer my question. The issue is simple. In the private sector if you call out your superiors you will often be squashed. I suspect, based mostly on common sense and reports from dissident climatologists, that a variation on this happens in the sciences, though far more subtly.

    In fact Hansen himself was subjected to the type of non-scientific, political and ego-driven pressures I am talking about. To his credit he resisted and reported it (it made headline news around the globe). You seem to think that type of pressure always comes from climate skeptic camps, and I’m saying this is not a reasonable assertion and in fact is a dangerous one that blinds those within the climate community to their own personal bias challenges.

    I’m *hypothesizing* that bureaucratic social pressures are keeping new NASA researchers from some research, and certainly from generalizations, they might otherwise consider/express.

    Tom Dayton – yes, there should be evidence of this and I will follow up at that site – thx.

    Do you find the claims of people like Pielke and Chris Landsea dubious, i.e., are you comfortable that grants and reviews are done with no regard to any political considerations?

    [Response: I have reviewed and participated on many panels that award climate related funding and I have never seen even a hint that political considerations were important or even relevant. The important factors are interest, tractability and competence. Look up what the NSF (for instance) actually funds before spouting off about how it is all political. - gavin]

    I do not know the extent to which this happens or poisons the well. All I’d assert as obvious is that humans – scientists or not – do not divorce themselves from these social and ego pressures. [edit]

    Comment by Joseph Hunkins — 27 Jun 2008 @ 6:38 PM

  143. RE: #142

    Joe, I understand that your experience is in the private sector, so that’s the most salient example for you to bring to bear. (I’ve been in academia, the private sector, and government.) The process and community of science is more multifaceted and wide-ranging than the private sector is.

    In particular, the process of publishing in peer-reviewed scientific journals crosses lots of boundaries, involving reviewers and editors from anyplace as long as they have appropriate expertise. Likewise, granting agencies use reviewers from all over, not just from their agency, not just from their branch of government, not just from government at all.

    There are lots of reasons for granting money. Occasionally it is even done just to end a long-lived and distracting controversy, by letting the applicant rigorously test some hypothesis that the reviewers have very low confidence will be supported. But that is done only occasionally, because if the result is as the granters expect, then the results are unlikely to be published, meaning it was a waste of money. But occasionally journals will publish such papers, for the same reason of ending a distracting controversy.

    Sometimes journals will even publish a paper purely to make a point, as in “Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials” http://bmj.bmjjournals.com/cgi/content/full/327/7429/1459

    Comment by Tom Dayton — 27 Jun 2008 @ 7:46 PM

  144. Joseph Hunkins, The thing that I think you are missing is that in science it is considered petty to take offense when you are criticized on the quality of your science. I can and do criticize my group leader on the soundness of the science, and he does the same with me. I am not free to call my colleagues “ignorant jerks” without some expectation of repercussions. I have seen graduate students go toe to toe with Nobel Laureates when the students thought they were right. The worst they got was maybe a somewhat condescending nod of approval and a correction. It would be naive to think that there is no politics in funding, but in science it is generally acknowledged that we have a common goal of advancing the state of knowledge as far as we can take it given a limited pool of resources. This tends to breed funding decisions based on the merits of the research and the researchers.
    This meritocratic process can be short-circuited, but it is generally done by nonscientists who have priorities different from advancing the state of knowledge. Yes, there are probably young researchers who would like to look into an idea they have, but the thing that keeps them from doing it is not bureaucracy, but lack of funding for the task–and the funding decisions are usually made by nonscientists.
    Scientists want more than anything else to keep doing science–to understand some portion of the Universe that fascinates them. They will suppress a lot of ego to do so–and some of them have a lot of ego to suppress.

    Comment by Ray Ladbury — 27 Jun 2008 @ 8:24 PM

  145. I was probably too subtle in my last comment.

    Anybody who thinks climatology is not following “the scientific method” should read this paper: “Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials” http://bmj.bmjjournals.com/cgi/content/full/327/7429/1459

    Comment by Tom Dayton — 27 Jun 2008 @ 8:38 PM

  146. That’s hilarious Tom. I actually work with some folks (they are not scientists) who are very skeptical of gravity!

    Comment by Figen Mekik — 28 Jun 2008 @ 12:55 PM

  147. RE: #144:

    Joseph Hunkins, the “Mediarology” page by Stephen Schneider is relevant to both the issue of social and other biases in science, and to the appropriateness of an assortment of methods in science: http://stephenschneider.stanford.edu/Mediarology/MediarologyFrameset.html

    RE: #145:

    That parachute paper I cited is not only funny, it is dead serious. It was actually thoroughly researched and written exactly as thousands of similar papers are. If you read it while mentally substituting “enzyme X” for every mention of parachutes, and “disease Y” for every mention of falling out of an airplane, you’ll see that its approach is completely legitimate.

    The relevance to global warming denial is that many poorly-schooled denialists claim science requires us to effectively ignore the well-established physical causal mechanism of greenhouse gases preventing escape of radiation, in favor of tightly controlled experiments on the entire planet. In fact, scientists use all manner of information for decision making, including knowledge of plausible causal mechanisms such as parachutes’ effects on air resistance and therefore speed, and CO2’s effects on radiation emitted from the Earth.

    Comment by Tom Dayton — 28 Jun 2008 @ 6:50 PM

  148. Lots of thoughtful comments above.

    I think it was inflammatory for me to call Jim Hansen’s climate claims “extraordinary” but since I said I’d post something I’d suggest the following is not well supported by data or IPCC’s summary of the situation:


    As an example, let us say that ice sheet melting adds 1 centimetre to sea level for the decade 2005 to 2015, and that this doubles each decade until the West Antarctic ice sheet is largely depleted. This would yield a rise in sea level of more than 5 metres by 2095.

    Of course, I cannot prove that my choice of a 10-year doubling time is accurate but I’d bet $1000 to a doughnut that it provides a far better estimate of the ice sheet’s contribution to sea level rise than a linear response.

    In my opinion, if the world warms by 2 °C to 3 °C, such massive sea level rise is inevitable, and a substantial fraction of the rise would occur within a century. Business-as-usual global warming would almost surely send the planet beyond a tipping point, guaranteeing a disastrous degree of sea level rise.

    Although some ice sheet experts believe that the ice sheets are more stable, I believe that their view is partly based on the faulty assumption that the Earth has been as much as 2 °C warmer in previous interglacial periods, when the sea level was at most a few metres higher than at present.
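
    A quick check of the arithmetic in the quoted scenario, assuming 1 cm of ice-sheet contribution in the 2005–2015 decade and a constant 10-year doubling time thereafter:

    # Sum the per-decade contribution from 2005-2015 through 2085-2095.
    contribution_cm = 1.0
    total_cm = 0.0
    for decade_start in range(2005, 2095, 10):     # nine decades to 2095
        total_cm += contribution_cm
        contribution_cm *= 2.0

    print(f"cumulative rise by 2095: {total_cm:.0f} cm")   # 511 cm, i.e. more than 5 m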

    My biggest concern about the AGW *science* discussion is the degree to which it’s considered acceptable to emphasize the possibility of catastrophic change while failing to point out the far more likely scenarios (such as IPCC’s).

    [Response: Hansen's statements are a model for how to express a dissenting opinion in a scientific discussion without insults, accusations of bad faith and unsupportable statements of certainty. People are free to disagree and criticise, but you will actually find very few do, because no-one is very confident that they can put an upper bound on sea level rises in the next century that is negligible - and that includes the IPCC authors. At recent meetings I've attended on ice sheet dynamics, the concern is palpable that we are not in a position to do so. This doesn't just worry Hansen. - gavin]

    Gavin your optimism about how little politics wind up influencing research is encouraging. Do you see those principles as extending here to RC? It sure seems there is a great *reluctance* to welcome (or even allow?) scientists with legitimate credentials who pose challenges to the views that prevail here, but I suppose this could be that they are simply reluctant to step into the fray.

    [Response: People challenge us here all the time, and there is no "reluctance" to deal with serious scientists - all of us have such interactions at meetings and workshops continuously. What gets tiresome is the continual parade of junk masquerading as neo-Galilean revelations. But if you have someone in mind, encourage them to participate. - gavin]

    Comment by Joseph Hunkins — 28 Jun 2008 @ 10:20 PM

  149. RE:#144 A minor clarification: internal funding decisions, such as those that are made by the Director of the National Science Foundation and various subcommittees, are made by scientists. The amount of money that each funding agency receives is decided by politicians. My PhD advisor was the Director of NSF for a year, which gave me a little more insight into how NSF works.

    Comment by Jeff — 29 Jun 2008 @ 9:34 AM

  150. Re # 147 Tom Dayton “That parachute paper I cited is not only funny, it is dead serious. It was actually thoroughly researched and written exactly as thousands of similar papers are.”

    Sorry, Tom, but I can’t help but think it is satire.

    Comment by Chuck Booth — 29 Jun 2008 @ 9:50 AM

  151. Re 147 and my response (148)

    Tom, I re-read your comments in 147 and now see the point you are making in citing the parachute article, and the point the authors of the parachute article were trying to make.

    Comment by Chuck Booth — 29 Jun 2008 @ 9:58 AM

  152. Joeduck, I’d like to emphasize Gavin’s point re: Hansen’s quoted position on the WAIS. Note that he does not attack the credibility of those who disagree. Where his position differs, he prefaces his statement with “In my opinion…” Moreover, his opinion is not beyond the pale of scientific opinion. We know that the models are significantly underpredicting melting, and Hansen’s ideas can be viewed as a reasonable bounding engineering estimate of how bad things could get. That, too, has value.

    Comment by Ray Ladbury — 29 Jun 2008 @ 3:18 PM

  153. I would like to add my two cents by saying that though I am sure Joe doesn’t intend to insult anyone, posing questions like “what would happen to your career if you challenged Jim Hansen” is not only insulting to me (since my name was used in the original question) but it is also very insulting to Jim Hansen and the community of climate scientists as a whole.

    Scientists discuss and challenge ideas and scientific interpretations. There is nothing personal about that. And challenging ideas will not hurt anyone’s career when those challenges are based on observations and facts. Gavin said all this much more eloquently above.

    But I, for one, find it a little demeaning to assume that I or any of my colleagues would change their opinion about something or their interpretations of their observations because another, better-established scientist disagreed with them. Challenging ideas is welcome in any intellectual pursuit; challenging another’s integrity or honesty without evidence is not.

    Here’s a quote from Eleanor Roosevelt: “Great minds discuss ideas; Average minds discuss events; Small minds discuss people.”

    Comment by Figen Mekik — 29 Jun 2008 @ 4:07 PM

  154. Gavin – thanks, good enough for me. I have taken up enough of your time hassling the Hansen points. Huge bonus points to you for this quote which I really enjoyed:

    What gets tiresome is the continual parade of junk masquerading as neo-Galilean revelations

    Figen you are right I didn’t mean to insult you, rather I am very interested in the social interactions in science as a potential source of bias (as well as a potential source of enlightenment). There is a new field trying to quantify this stuff by graphing relationships among decision makers.
    Eleanor quote is good and appropriate.

    Ray – thanks for keeping me level headed here at the RealClimate Club, where the drinks are cold and the science is hot.

    Comment by Joe Hunkins — 29 Jun 2008 @ 7:06 PM

  155. Tom, re: the Mediarology piece by Dr. Schneider. Very interesting, and a good intro to issues that have a lot more significance as climate change has risen to the top of the environmental agenda. I think everybody would agree that the press treatments of climate issues generally leave much to be desired.

    Comment by Joe Hunkins — 29 Jun 2008 @ 7:11 PM

  156. There are a couple of things missing from the parachute paper, namely the role of informed consent and Institutional Review Board approval required for research on human subjects (although tossing chimps out of airplanes with & without parachutes may represent a suitable animal model).
    For more information, see http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.htm, particularly section 46.116 -
    “(a) Basic elements of informed consent. Except as provided in paragraph (c) or (d) of this section, in seeking informed consent the following information shall be provided to each subject:”
    “(2) A description of any reasonably foreseeable risks or discomforts to the subject;”

    Unfortunately, these legal requirements aren’t being applied to what Roger Revelle noted in 1956 – “Human beings are now carrying out a large scale geophysical experiment of a kind that could not have happened in the past nor be reproduced in the future.”

    Arguably, there are well funded politically influential segments of our society and their allies in the current administration who have waged a campaign to obfuscate and minimize the “reasonably foreseeable risks” posed by anthropogenic CO2 and global warming.

    Comment by Brian Dodge — 30 Jun 2008 @ 12:51 AM

  157. Re Gavin’s comment to #148, I wonder how much of this “palpable concern” at our lack of understanding of dynamic ice sheet processes is due to it being solidly within the domain of geophysics, whereas the threat of a collapse of the food production system under the combined pressures of climate, ecosystem degradation, population and peak oil, which may happen well before, is much more interdisciplinary. People easily miss the overview on things outside their field (goes for me too).

    Comment by Martin Vermeer — 30 Jun 2008 @ 8:53 PM

  158. I have seen the paper published in Nature. I have a small comment.
    XBT temperature measurements are affected by several inaccuracies, not only depth errors arising from incorrect fall-rate coefficients.
    At a minimum, there are contributions from instrumental biases (coupling among the device, wire, thermistor, …) and accidental biases (notably launching conditions). For example, the fall-rate equation appears to depend on seawater temperature (and perhaps on ship speed too). In that case, correctly evaluating the XBT depth is more difficult than previously estimated.
    In addition, a significant fraction of the historical XBT profiles stored in databases lack metadata and details concerning the recording systems.
    The manufacturers of XBT probes quoted an accuracy of 0.1°C for the instrument and 0.2°C for the recording system.
    Statistically speaking, it seems that biases and measurement errors produce XBT temperature values warmer than reality.
    In conclusion, climatological analyses and extrapolations of ocean temperature trends that use XBT measurements without correctly accounting for XBT errors and inaccuracies are problematic.
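
    To see how sensitive the inferred depth is to the fall-rate coefficients, here is a short sketch using the standard quadratic fall-rate form z(t) = a·t − b·t²; the two coefficient sets are commonly quoted original and revised values for T-7 probes and should be treated as illustrative rather than authoritative.

    # Depth from the XBT fall-rate equation z(t) = a*t - b*t**2, comparing two
    # coefficient sets (approximate, commonly quoted original-manufacturer and
    # revised T-7 values; the point is the size of the depth bias, not the
    # exact numbers).
    def xbt_depth(t_seconds, a, b):
        return a * t_seconds - b * t_seconds ** 2

    original = dict(a=6.472, b=0.00216)
    revised = dict(a=6.691, b=0.00225)

    for t in (30.0, 60.0, 90.0):                   # seconds after launch
        z_old = xbt_depth(t, **original)
        z_new = xbt_depth(t, **revised)
        print(f"t = {t:4.0f} s: original {z_old:6.1f} m, "
              f"revised {z_new:6.1f} m, difference {z_new - z_old:5.1f} m")

    A depth difference of metres to tens of metres in the stratified upper ocean maps directly into an apparent temperature bias at a given nominal depth, which is why the fall-rate issue matters for heat-content estimates.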

    Comment by franco reseghetti — 1 Jul 2008 @ 1:41 AM

  159. Re # 156 Brian Dodge “although tossing chimps out of airplanes with & without parachutes may represent a suitable animal model”

    Not likely – federal animal care and use regulations are even more stringent than the federal policy on the use of human subjects in research (45 CFR 46, which you cite).
    http://www.aphis.usda.gov/lpa/pubs/awact.html
    http://books.nap.edu/html/labrats/

    Sorry for straying further off topic. :)

    Comment by Chuck Booth — 2 Jul 2008 @ 1:00 PM

  160. How is “planetary radiative imbalance” calculated? Is it measured or inferred?

    Comment by Richard Sycamore — 20 Aug 2008 @ 10:46 AM

  161. Roger Pielke Sr. has an Opinion article in Physics Today, Nov08, page 54 entitled “A broader view of the role of humans in the climate system”, (cf. his website http://www.climatesci.org/publications/pdf/R-334.pdf). He presents 4 years of data on the global changes in upper ocean heat content, obtained from J. Willis at JPL, and the article draws some contrarian conclusions.

    I have three questions:

    1. Has this article been peer reviewed?

    2. Are these data reliable and consistent with other data on ocean heat content?

    3. What conclusions can be drawn reliably from only 4 years of such data?

    I would be interested in Real Climate’s judgment.

    ————–

    Comment by Jan Dash — 13 Nov 2008 @ 10:41 AM

  162. Jan, I am familiar with the piece. It is marked as “Opinion,” so it has not been peer reviewed. This is the same strategy they used to circumvent peer review with the Scafetta and West numerology crap a few months ago. Can’t say how consistent these data are, but 4 years of data is bupkis. It’s a pretty sad effort in my opinion.

    Comment by Ray Ladbury — 13 Nov 2008 @ 1:00 PM

  163. Jan Dash, while others were puffing molehills and crew cutting mountains, I wandered off through Gavin’s links in that post (Mountains and molehills) on SLR and Argo, etc.

    I think you’ll find it worthwhile reading – eighth paragraph, starts with:

    In contrast to this molehill,

    Comment by JCH — 13 Nov 2008 @ 1:36 PM

  164. http://moregrumbinescience.blogspot.com/2008/10/pielkes-poor-summary-of-sea-ice.html

    That includes a pointer to Willis’s blog, where he comments on what Pielke says about his work, usefully.

    Comment by Hank Roberts — 13 Nov 2008 @ 2:02 PM

  165. Just for completeness, here are excerpts from an exchange between Pielke Sr. and Willis on the JPL Blog “It’s a Sure Bet” – by Josh Willis:

    Roger A, Pielke Sr Says:
    August 14th, 2008 at 1:23 pm
    Josh-
    I am puzzled by your weblog, and have weblogged on it. You are ignoring the value of heat in Joules (not surface temperature) as the primary global warming metric, despite your pioneering research using heat content change in Joules in the upper ocean to diagnose the radiative imbalance of the climate system.
    Best Regards
    Roger

    Willis says:
    Roger, thank you for the comment and the cross-link to my blog…True, ocean heat content is the better metric for global warming, and the past few years of no warming are interesting. But tacked on to the 50-year-record of ocean warming before that, the last four years pretty much ARE just a wiggle…Between the long-term records of ocean heat content, land and ocean surface warming, global sea level rise (about 20 cm over the last 100 years) and the increase in atmospheric CO2, you get a pretty simple, consistent picture of man-made warming. No models required…

    Despite all the uncertainties, I think it is pretty clear that humans have already warmed the planet. And if we continue to add more CO2 to the atmosphere, we will warm it even further.

    Ref:
    http://blogs.jpl.nasa.gov/?p=8
    ———–

    Comment by Jan Dash — 13 Nov 2008 @ 3:54 PM

