Hansen’s 1988 projections

As mentioned above, with a single realisation, there is going to be an amount of weather noise that has nothing to do with the forcings. In these simulations, this noise component has a standard deviation of around 0.1 deg C in the annual mean. That is, if the models had been run using a slightly different initial condition so that the weather was different, the difference between the two runs’ mean temperatures in any one year would have a standard deviation of about 0.14 deg C, but the long term trends would be similar. Thus, comparing specific years is very prone to differences due to the noise, while looking at the trends is more robust.
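
As a quick illustration of the scaling implied here (the difference of two independent realisations, each with ~0.1 deg C of annual noise, has a standard deviation of √2 × 0.1 ≈ 0.14 deg C, while the fitted trends remain close), here is a minimal synthetic sketch. The noise level and trend are taken from the text; everything else is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1          # assumed interannual weather noise (deg C), per the text
years = np.arange(1984, 2007)
trend = 0.024 * (years - years[0])   # a 0.24 deg C/decade forced signal

# Two "runs" that differ only in their weather noise
run1 = trend + rng.normal(0, sigma, years.size)
run2 = trend + rng.normal(0, sigma, years.size)

# The year-by-year difference has std ~ sqrt(2)*sigma ~ 0.14 deg C...
diff_std = np.sqrt(2) * sigma
print(f"expected std of run1 - run2: {diff_std:.2f} deg C")

# ...but the fitted long-term trends agree much more closely
t1 = np.polyfit(years, run1, 1)[0] * 10
t2 = np.polyfit(years, run2, 1)[0] * 10
print(f"trends: {t1:.2f} vs {t2:.2f} deg C/decade")
```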

From 1984 to 2006, the trends in the two observational datasets are 0.24 +/- 0.07 and 0.21 +/- 0.06 deg C/decade, where the error bars (2σ) are derived from the linear fit. The ‘true’ error bars should be slightly larger given the uncertainty in the annual estimates themselves. For the model simulations, the trends are, for Scenario A: 0.39 +/- 0.05 deg C/decade, Scenario B: 0.24 +/- 0.06 deg C/decade and Scenario C: 0.24 +/- 0.05 deg C/decade.
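
For readers who want to reproduce this kind of trend-plus-error-bar calculation, here is a minimal least-squares sketch. The temperature series below is synthetic (illustrative values only, not the actual GISS or model data); the 2σ trend uncertainty is derived from the residuals of the linear fit, as in the numbers quoted above:

```python
import numpy as np

# Synthetic annual global-mean anomalies, 1984-2006 (deg C):
# a 0.24 deg C/decade trend plus 0.1 deg C of weather noise.
rng = np.random.default_rng(1)
years = np.arange(1984, 2007)
temps = 0.024 * (years - 1984) + rng.normal(0, 0.1, years.size)

# Ordinary least-squares slope with centred predictor
n = years.size
x = years - years.mean()
slope = (x @ temps) / (x @ x)            # deg C per year
resid = temps - temps.mean() - slope * x

# Standard error of the slope from the residual variance
se = np.sqrt((resid @ resid) / (n - 2) / (x @ x))
print(f"trend: {10*slope:.2f} +/- {10*2*se:.2f} deg C/decade (2 sigma)")
```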

The bottom line? Scenario B is pretty close and certainly well within the error estimates of the real world changes. And if you factor in the 5 to 10% overestimate of the forcings in a simple way, Scenario B would be right in the middle of the observed trends. It is certainly close enough to provide confidence that the model is capable of matching the global mean temperature rise!

But can we say that this proves the model is correct? Not quite. Look at the difference between Scenario B and C. Despite the large difference in forcings in the later years, the long term trend over that same period is similar. The implication is that over a short period, the weather noise can mask significant differences in the forced component. This version of the model had a climate sensitivity of around 4 deg C for a doubling of CO2. This is a little higher than our best guess (~3 deg C) based on observations, but is within the standard range (2 to 4.5 deg C). Is this 20 year trend sufficient to determine whether the model sensitivity was too high? No. Given the noise level, a trend 75% as large would still be within the error bars of the observations (i.e. 0.18 +/- 0.05), assuming the transient trend would scale linearly. Maybe with another 10 years of data, this distinction will be possible. However, a model with a very low sensitivity, say 1 deg C, would have fallen well below the observed trends.
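
The scaling argument can be checked directly: if the transient trend scales linearly with sensitivity, a model with 75% of the sensitivity would give 75% of Scenario B’s trend, which still lies inside the observed 2σ ranges (all numbers from the text above):

```python
# 75% of Scenario B's 0.24 deg C/decade trend
scaled = 0.75 * 0.24   # 0.18 deg C/decade

# Observed 2-sigma trend ranges quoted above (deg C/decade)
met_station = (0.24 - 0.07, 0.24 + 0.07)   # 0.17 to 0.31
land_ocean  = (0.21 - 0.06, 0.21 + 0.06)   # 0.15 to 0.27

for lo, hi in (met_station, land_ocean):
    print(f"{scaled:.2f} in [{lo:.2f}, {hi:.2f}]: {lo <= scaled <= hi}")
```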

Hansen stated that this comparison was not sufficient for a ‘precise assessment’ of the model simulations and he is of course correct. However, that does not imply that no assessment can be made, or that claimed errors of 100 to 400% in the projections (claims that are themselves erroneous) can’t be challenged. My assessment is that the model results were as consistent with the real world over this period as could possibly be expected and are therefore a useful demonstration of the model’s consistency with the real world. Thus when asked whether any climate model forecasts ahead of time have proven accurate, this comes as close as you get.

Note: The simulated temperatures, scenarios, effective forcing and actual forcing (up to 2003) can be downloaded. The files (all plain text) should be self-explanatory.


293 comments on this post.
  1. pete best:

     Did you ever doubt that he was wrong?

  2. Alexander Ac:

    RealClimate has discussed the global dimming effect on the average temperature several times. Is the dimming effect (if there is any?) included in the model projections and is there a potential for a more rapid global temperature increase after hypothetical stopping of air pollution and subsequent cleaning of air?

    I.e. could the observed temperature increase be mitigated by the air pollution?

  3. Peter Erwin:

    GISS produces two estimates – the met station index (which does not cover a lot of the oceans), and a land-ocean index (which uses satellite ocean temperature changes in addition to the met stations). The former is likely to overestimate the true global SAT trend (since the oceans do not warm as fast as the land), while the latter may underestimate the true trend, since the SAT over the ocean is predicted to rise at a slightly higher rate than the SST.

    Um… let’s see: “SAT” = surface air temperature? And “SST” = sea surface temperature? So is the idea that the satellite measurements are of the sea surface temperature, which is predicted to be cooler than the air temperature immediately above it?

    (I was really confused by this passage, until I went looking elsewhere and found an explanation of “SAT”; initially, I was thinking it was some kind of shorthand for “satellite measurements.” It’s occasionally useful to unpack some of the acronyms you use…)

    [Response: My bad. I've edited the post to be clearer. -gavin]

  4. Johnno:

    Another possible forcings scenario has been discussed on energy forums, namely that all fossil fuels including coal will peak by 2025. If fuel burning drives most of the forcing then the curve should max out then decline. If the model can work with sudden volcanic eruptions I wonder if it can give a result with a peaking scenario.

    [Response: I'd be extremely surprised if coal peaked that soon. There is certainly no sign of that in the available reserves, but of course, you can run any scenario you like - there will be a paper by Kharecha and Hansen shortly available that shows some 'peak' results. - gavin]

  5. J:

    Nice post, Gavin. One tiny question, though. You write:

    “Given the noise level, a trend 75% less, would still be within the error bars of the observation (i.e. 0.18+/-0.05) [....]”

    This seems a bit confusing. I would think that “75% less” than 0.24 would be 0.06, not 0.18. (Also, the comma after “less” is not really appropriate…)

    Maybe something like this:

    “Given the noise level, a trend with only 75% of this magnitude would still be within the error bars …”

    or perhaps

    “Given the noise level, a trend that was only 75% as large …”

    [Response: You are correct. Edited. -gavin]

  6. Mitch Golden:

    Could someone clarify the difference between this analysis and the one Eli Rabbett posted on his blog a year ago, here and here, and here?

    That analysis gave me the impression that Scenario C was the closest. Is the difference the use of ‘efficacies’?

  7. Magnus W:

     The fossil fuel supply has of course been discussed by the IPCC, but as always some disagree with the conclusions.
    http://www.newscientist.com/article.ns?id=dn4216

  8. FurryCatHerder:

    Re #5:

    Oh, great, grammar police!

    Re Gavin’s comment to #4:

     “Peak Oil” isn’t just about running out, it’s also about production and cost. Being permanently impatient, I’m going to have a hard time waiting for Hansen et al.’s report, but I’ll be paying careful attention to the references, and particularly the fields of study which are included. I don’t have anything against climatologists, but I expect to see economists and petroleum engineers heavily referenced in any paper that addresses “peak oil”.

    It’ll also be interesting to watch electric consumption in the States in the wake of Al Gore’s slideshow and the nationwide movement to ban incandescents.

  9. John Wegner:

     The Land-Ocean temperature trend line is below Scenario C for most of the record and the end point in your graphic – 2005. Scenario C assumed GHG forcings would stabilize in the year 2000, which they clearly have not.

     Extending the data out to last year – 2006 – we see the temperature trend is still below Scenario C and about 0.2C below Scenario B. I am asserting that the 1988 models over-estimate the impact on global temperature.

    Since Scenario A assumed accelerating GHGs, I’m assuming we can now discard that assumption and the drastic increase in temperatures which resulted. Scenario A predicted a temperature increase of 0.95C to 2005 while global temperatures increased by about 0.55C and GHG concentrations are not accelerating.

  10. BCC:

    Unless the geologists who measure these things really screwed up, coal ain’t peaking any time soon:

    “Total recoverable reserves of coal around the world are estimated at 1,001 billion tons – enough to last approximately 180 years at current consumption levels”

    Now, China is certainly ramping up coal consumption, but not that fast.

    If the world was running out of coal, there would be a lot more attention on nukes, wind, etc.

  11. Sean O:

    Great article. Thank you for writing it. I cross posted about this article on my site (http://www.globalwarming-factorfiction.com) but I thought I would make a few comments here as well. I have encouraged all of my readers to regularly come to RealClimate since I respect much of what is written here.

    My readers know that I frequently call for more research and better modeling techniques on the issue of global climate change and global warming. It is something that I think is necessary before our governments spend billions or trillions of dollars to change behavior.

    I think public discussion on this subject is very important and encourage more scientists to do this outside of the veil of the scientific community. My concerns about the accuracy of models are confirmed with this analysis – the 3 models that are described in the first analysis are off by 50%, 10% and 25%.

     It is interesting that even the observed measurement has some ambiguity in it since there is no “standard” way of measuring global averages. This is yet another way for models to not be precise enough since we don’t have a gold standard to live up to.

    There is an often repeated saying “Close enough for government work.” In this case, I think we should hold the politicians to a higher standard and we need to have models that can more accurately predict observed occurrences. Many of you will be familiar with another saying “Close only counts in horseshoes and hand grenades.”

    Once again, great article. Thank you for writing on this subject.

    [Response: Umm.... you're welcome. But basically you are saying that nothing will ever be good enough for you to be convinced. The 3 forcing scenarios are nothing to do with the model - they come from analyses of real world emissions and there will always be ambiguity when forecasting economics. Thus a spread is required. Given the scenario that came closest to the real world, the temperatures predicted by the model are well within the observational uncertainty. That is, even if you had a perfect model, you wouldn't be able to say it was better than this. That is a purely empirical test of what is 'good enough' - and not just 'for government work' (which by the way is mildly offensive). How could it do better by this measure? Perhaps you would care to point to any other predictions of anything made in the mid-1980s that are as close to the obs as this? - gavin]

  12. Kieran Morgan:

    Not every one is convinced that there’s that much coal left,

    http://globalpublicmedia.com/richard_heinbergs_museletter_179_burning_the_furniture
    Richard Heinberg’s Museletter #179: Burning the Furniture | Global Public Media

  13. Jim Eager:

    Re # 9 John Wegner: “The Land-Ocean temperature trend line is below Scenario C for most of the record and the end point in your graphic – 2005…..
    I am asserting that the 1998 models over-estimate the impact on global temperature.”

    Apparently you missed this passage in the original:

    “The former is likely to overestimate the true global surface air temperature trend (since the oceans do not warm as fast as the land), while the latter may underestimate the true trend, since the air temperature over the ocean is predicted to rise at a slightly higher rate than the ocean temperature. In Hansen’s 2006 paper, he uses both and suggests the true answer lies in between.”

  14. Dan:

     re: 9. “Extending the data out to last year – 2006 – we see the temperature trend is still below Scenario C and about 0.2C below Scenario B. I am asserting that the 1988 models over-estimate the impact on global temperature.”

    Except it appears that your assertion does not consider, as stated above, that:
    “In these simulations, this noise component has a standard deviation of around 0.1 deg C in the annual mean. That is, if the models had been run using a slightly different initial condition so that the weather was different, the difference in the two runs’ mean temperature in any one year would have a standard deviation of about 0.14 deg C., but the long term trends would be similar. Thus, comparing specific years is very prone to differences due to the noise, while looking at the trends is more robust.”

  15. Chip Knappenberger:

    Gavin,

    I have no doubt that your statement “Thus when asked whether any climate model forecasts ahead of time have proven accurate, this comes as close as you get” is correct. Just as I am sure that somewhere in the range of the IPCC projections (or “forecasts” if you prefer), lies the true course of temperature during the next 20-50 years or so (and maybe longer). In hindsight, we’ll be able to pinpoint which one (or which model tweaks would have made the projections even closer), just as you have done in your analysis of Hansen’s 1988 projections. Projections are most useful when you know in advance that they are going to be correct into the future, and are not so much so (especially when they encompass a large range of possibilities) when you only determine their accuracy in hindsight. Surely knowing in 1988 that for the next 20 years the global forcings (and temperatures) were going to be between scenarios B&C rather than closer to scenario A would have been valuable information (more valuable then, in fact, than now). Given that the forcings from 1984 to present (as well as the temperature history) lie at or below the low end of the projections made in 1988, do you suppose that this will also prove to be true into the future? If so, don’t you think this is important information? And that perhaps some of the crazier SRES scenarios on the high side should have been discounted, if not in the TAR, then definitely in AR4?

    -Chip
    to some degree, supported by the fossil fuels industry since 1992

     [Response: Chip, Your credibility on this topic is severely limited due to your association with Pat Michaels. Hansen said at the time that B was most plausible, and it was Michaels (you will no doubt recall) who dishonestly claimed that scenario A was the modellers' best guess. It would indeed have been better if that had not been said.

    However, I'd like to underline the point that the future is uncertain. A responsible forecast must take both a worst case and best case scenario, with the expectation that reality will hopefully fall in between. Most times it does, but sometimes it doesn't (polar ozone depletion for instance was worse than all scenarios/simulations suggested). Your suggestion that we forget all uncertainty in economic projections is absurd. I have much less confidence in our ability to track emission scenarios 50 or 100 years into the future than I have in what the climate consequences will be given any particular scenario, and that is what underlies the range of scenarios we used in AR4. I have previously suggested that it would have been better if they had come with likelihoods attached - but they didn't and there is not much I can do about that. -gavin]

  16. FurryCatHerder:

    Re #10:

    Again, having the fuels in the ground doesn’t mean anyone can afford to get them OUT of the ground.

    This is the note that always gets in the way –

    “3 Proved reserves, as reported by the Oil & Gas Journal, are estimated quantities that can be recovered under present technology and prices.”

    My recollection is that Kyoto (?) assumed $20/bbl oil. Seen that lately? Think you’ll see it any time soon?

    Here’s the next one –

    “9 Based on the IEO2006 reference case forecast for coal consumption, and assuming that world coal consumption would continue to increase at a rate of 2.0 percent per year after 2030, current estimated recoverable world coal reserves would last for about 70 years.”

    The IEO2006 paper forecasts having NO remaining fossil reserves, and basically being bankrupt in 2077. Those last few years of production are going to be unaffordable to all but the heirs of Bill Gates and Sam Walton.

    The IEO2006 paper also assumes a quadrupling of Chinese coal consumption. That behavior is going to have to contend with this problem — whether that was their goal or not, the skies over China are now so dirty that our spy satellites can’t even see the ground over many cities. Here is a bigger image. I don’t see China sustaining growth in air pollution for too many more years.

    In addition to rapidly declining air quality — as shown by those satellite images of China — the IEO2006 paper ignores the rising costs of energy as a fraction of living expenses. Just about everything we buy includes “energy” as a cost.

    I understand that large estimates of recoverable fossil fuels are central to making a case for the risks of burning those fuels, but the longer term risk, if we manage to survive burning everything we’ve got in the ground, is taking a path that is completely dependent on those fuel sources and finding ourselves living on a baked planet with no energy to do anything about it.

  17. BCC:

    And yes, I realize the driving issue is economics, not total reserves, but I don’t see coal becoming uneconomic any time soon (I wish it would).

  18. Wacki:

    “There have however been an awful lot of mis-statements over the years – some based on pure dishonesty, some based on simple confusion.”

    For a side by side comparison of Pat Michaels ‘confusion’ please go here:

    Logical Science’s analysis of Pat Michaels

    It amazes me that this is the most interviewed commentator by a factor of two on CNN.

  19. cce:

    A minor point: Hansen first presented these projections in his November 1987 Senate testimony. It didn’t receive much attention, which is why they decided to hold the next hearings during the dead of summer.

    Regarding peak coal, there seems to be some disagreement about this. For example, this report puts the worldwide peak (not US peak) between 2020 and 2030: http://www.energywatchgroup.org/files/Coalreport.pdf

  20. Janice:

    Re # 8

    Thank goodness for so-called ‘grammar police’ within this forum to remind everyone of the importance of tight wording.

    Legal cases too often ‘fall over’ on technicalities even this small.

    Indeed countries have gone to war over differing interpretations of rights implied to them by a single preposition in a treaty or declaration.

  21. Ian Rae:

    I’m curious about whether the model has been run out into the future for several hundred years? Do these models fall apart outside some time range or just level off (as the distance from the forcing grows)? Thanks for any insight on this.

    [Response: Yes they have. They don't fall apart, but they do get exceptionally warm if emissions are assumed to continue to rise. However, on those timescales, physics associated with ice sheets, carbon cycle, vegetation changes etc. all become very important and so there isn't much to be gained by looking at models that don't have that physics. - gavin]

  22. Marcus:

    How much of an effect did the stabilization of methane concentrations have on forcing changes? My rough estimate indicates that if methane concentrations had continued to rise at their 1980s rate, that would have led to more than a 0.05 W/m2 over observations. And if Bousquet et al. (2006) are correct about the stabilization being a temporary artifact of wetland drying, then future forcing might jump up again. (if on the other hand it is a result of reduced anthropogenic emissions from plugging natural gas lines, changes in rice paddy management in China, landfill capping in the EU and US, etc. – then maybe we’ve been more successful at mitigation this past decade than is otherwise estimated)

    Also, in response to FurryCatHerder: I work with energy economists. All of them believe that at prices near $100/barrel, so many otherwise marginal sources become profitable (tar sands, oil shale, coal to liquids, offshore reserves, etc.) that those present an effective cap on price increases for many decades. And $100/barrel is not yet high enough to really put much of a dent in demand. Mind you, they all think that there is good reason to reduce fossil fuel use as early as possible in any case… but more for climate, geopolitical, and other environmental reasons…

  23. Thomas Lee Elifritz:

    Hansen first presented these projections in his November 1987 Senate testimony. It didn’t receive much attention, which is why they decided to hold the next hearings during the dead of summer.

     And that was an extremely hot and dry summer for most people, was it not? I distinctly remember discovering that the only thing between me and total agony was my very extensive deciduous black walnut, maple and oak leaf cover. There is a really big difference between total scorch, and the greatly reduced temperatures, and the elevated oxygen and humidity of the decomposing forest floor. It will be interesting to see what this summer brings for us.

    I just don’t see how we can recover from this with anything less than a complete technological breakthrough solution, as it appears we are already too over extended technologically on this planet. It’s going to require people to adjust their critical thinking in far more ways than just energy reduction. Is the average joe American scientist capable of this sort of creative out of the box thinking? From what I’ve seen thus far, no. At least not with this administration.

  24. Sean O:

    Re. 11.

    Gavin – perhaps I wasn’t clear. I thought this was great effort for a rather old model and surely advances in technology should be able to improve on this.

     However, as a mechanical engineer (no longer practicing), industry would never design a product that had such a high error rate as the result would be a guaranteed lawsuit and likely loss of life. From a scientific POV, this may be acceptable but politicians are using this kind of data to create legislation that will cost billions and likely cost lives in hopes of saving lives. The error rate is simply too large to make that kind of investment. We need to invest in better research and better models before that kind of investment is justified. Showing 3 graphs and finding that one is “kind of” correct and therefore could be used 50-100 years later is bad engineering. If a model cannot get within a few percent on average for a 15-20 year test, then how far off will it be for 2100?

     The Wall Street Journal said that I was a semi-skeptic and that is pretty accurate. I speak to both sides of this argument on my site http://www.globalwarming-factorfiction.com. I am willing, ready, and anxious to “believe” but I need better math than I have seen so far. If I saw decent math getting to the conclusions, then I am willing to invest the multi-billions (and the toll in lives that will be lost due to the decline in treasure) that will be required to fix the problem.

    This site is great because it gives some very compelling arguments on the climate. We simply need to invest more in getting better science.

     [Response: But what is your standard for acceptable? My opinion, judging from the amount of unforced (and hence unpredictable) weather noise is that you would not have been able to say that even a perfect model was clearly better than this. Think of it as the uncertainty principle for climate models. After another 20 years we'll know more, but as of now, the models have done as well at projecting long term trends as you can possibly detect. Thus you cannot have a serious reason to use these results to claim the models are not yet sufficient for your purpose. If you will not make any conclusion unless someone can show accurate projections for 100 years that are within a few percent of reality then you are unconvincible (since that will never occur) and thus your claim to open-mindedness on this issue is a sham. Some uncertainty is irreducible and you deal with that in all other aspects of your life. Why is climate change different? As I said, point me to one other prediction of anything you like that was made in the mid-1980s that did as well as this. - gavin]

  25. Hank Roberts:

    > industry would never design a product that had such a high error rate
    > as the result would be a guaranteed lawsuit and likely loss of life.

    Who did design the current technology?

  26. Leonard Evens:

    Re comment 24 by Sean O.:

     This is just the old argument that we can’t do anything until we are certain. The fallacy in that argument was, and still is, that we are doing something to change the radiative behavior of the atmosphere. The appropriate response, without knowing more, other things being equal, would be to stop that right now, but I don’t suppose that you propose closing down all fossil fuel emissions until the science is better understood.

  27. Chip Knappenberger:

    Gavin,

     Thanks for the response. As far as my “credibility” on the issue as addressed in my comment (#15), I was largely paraphrasing your results. I wasn’t relying on credibility to make my point, thus my association with Pat Michaels (and, for full disclosure, my boss) doesn’t seem to impact the contents of my comment–that it is easy to take credit in hindsight for a correct forecast, but, if you can’t use it to reliably identify a future course, then it is of little good. You seem to be of the mind that Dr. Hansen was able to reliably see the future (or at least 20 years of it) in his scenario B, so, what is your opinion about the next 20 years? Did Dr. Hansen’s Scenario A serve any purpose? Does IPCC SRES scenario A2? A1FI? If Dr. Hansen never imagined Scenario A as being a real possibility for the next 20 years, I guess indicated by his description “Scenario A, since it is exponential, must eventually be on the high side of reality in view of finite resource constraints and environmental concerns, even though the growth of emissions in Scenario A (~1.5% yr-1) is less than the rate typical of the past century (~4% yr-1)” then his subsequent comment (PNAS, 2001) “Second, the IPCC includes CO2 growth rates that we contend are unrealistically large” seems to indicate that Dr. Hansen doesn’t support some of the more extreme SRES scenarios. Am I not correct in this assumption?

    I guess the point of your posting was that IF we know the course of the actual forcing, climate models can get global temperatures pretty well. But you go beyond that, in suggesting that Dr. Hansen knew the correct course of the forcing back in 1988. Isn’t that what you meant in writing “Thus when asked whether any climate model forecasts ahead of time have proven accurate, this comes as close as you get.”?

     So, in the same vein, which is the IPCC SRES scenario that we ought to be paying the most attention to (i.e. the one made “ahead of time” that will prove to be accurate)?

    -Chip Knappenberger
    to some degree, supported by the fossil fuels industry since 1992

  28. Hank Roberts:

    It’s amazing how our politicians can tell the truth while maintaining deniability:
    ___________________________________
    Yesterday, White House spokesman Tony Snow said Bush wants new regulations because “you’ve got a somewhat different atmosphere now, ….”

  29. tom:

    Ok . A hypothetical math question.

    Let’s say for argument’s sake we wanted to achieve a certain global temperature in , say , 25 years.

    Given the track record of Government, what are the chances that could be achieved?

    I’d say almost zero.

    [Response: Nothing to do with the government. If the temperature rise was not around 0.5 deg C warmer than today, it would not be achievable under any circumstances. However, radically different outcomes will be possible by 2050, and certainly by 2100. - gavin]

  30. Sam Gralla:

    Gavin–thank you for this post. As a physics student very much used to operating on the “make prediction; test prediction” model of determining the reliability of a theory, I appreciate thorough discussion of realistic expectations for these climate models. One question I would ask, being a non-specialist, is what the models gain us over a, say, “straight curve-fitting” approach in the first 10-20 years regime you examine here. For example, just looking at these graphs, I feel like I could have drawn in 3 lines (A,B,C) by hand, based on what seems plausible given recent trends, and be reasonably confident that the near future would lie somewhere within those predictions (and statistically, maybe even land right on one of them). Is this the case? Of course, I wouldn’t expect to be able to do that for 50 to 100 years, which is the interesting timescale for global warming.

    [Response: For a short period, the planet is still playing "catch up" with the existing forcings, and so we are quite confident in predicting an increase in global temperature of about 0.2 to 0.3 deg C/decade out for 20 years or so, even without a model. But the issue is not really the global mean temperature (even though that is what is usually plotted), but the distribution of temperature change, rainfall patterns, winds, sea ice etc. And for all of those elements, you need a physically based model. As you also point out, that kind of linear trend is very unlikely to be useful much past a couple of decades which is where the trajectory of forcings we choose as a society will really make the difference. - gavin]

  31. wayne davidson:

     Congrats Gavin, it brings recent intellectual crimes to light. How can anyone, especially top-level scientists, and sometimes the odd science fiction book writer, come to any conclusion that GCM’s are bunk? It would be good to constantly refer this site to any prospective journalist or blogger inclined to devote some ink to the regular “arm chair” contrarian scientist, usually equipped with a snarky philosophy rather than facts. The basic contra GCM argument is dead.

  32. Mark Chandler:

     Great to see this revisited in this way. It’s something that comes up a lot in our workshops with teachers. I think it’s worth pointing out that the GCM in EdGCM is the GISS Model II, the same as was used by Hansen et al., 1988. Thus, anyone who can download and run EdGCM (see link to the right) can reproduce for themselves Hansen’s experiments. The hard-coded trends are now gone, but it’s easy to create your own, and you can try altering the random number seed in the Setup Simulations form to create unique experiments for an ensemble (as Gavin mentions, this wasn’t done for the original 1988 paper). The only differences in the GCM are a few bug fixes related to the calculation of seasonal insolation (a problem discovered in Model II in the mid-1990s) and an adjustment to the grid configuration that makes Model II’s grid an exact multiple of the more recent generations of GISS GCMs (like Model E).

  33. tom:

    29.

    I’m not following. How will radically different outcomes be possible?

    [Response: By controlling emissions of CO2, CH4, N2O, CFCs, black carbon etc. By 2050, a reasonable difference between BAU (A1B) and relatively aggressive controls (the 'Alternative Scenario') could be ~0.5 deg C, and by 2100, over 1 deg C difference. And if we've underestimated sensitivity or what carbon cycle feedbacks could do, the differences could be larger still. - gavin]

  34. pat n:

    An interview with renowned climate scientist James Hansen | By Kate Sheppard | Grist Main Dish 15 May 2007

    http://www.grist.org/news/maindish/2007/05/15/hansen/index.html?source=daily

    In my January 31, 2006 letter to the U.S. Department of Commerce (DOC) Office of Inspector General (IG) I requested the DOC IG to see to it that NASA oversee NOAA on climate change and global warming (comment 1).

    http://npat1.newsvine.com/_news/2007/05/15/720628-an-interview-with-renowned-climate-scientist-james-hansen-by-kate-sheppard-grist

  35. cce:

    Re: 24

    Choose any mathematical scenario you like. They range from bad to catastrophe. Hansen provided 3 emissions scenarios to bracket what was likely, based on models that are now 20 years old. The results were as accurate as anyone could have hoped for. Yet, somehow, for some reason, that isn’t good enough. No one is going to be able to predict what actual emissions of the various greenhouse gases will be over the next 100 years. No one will be able to predict what volcanic activity there will be. We do not understand all of the possible feedbacks, but the more we know, the worse it looks. The only way the models can be substantially wrong is if there is an unknown feedback that will swoop in and negate all of the warming in the nick of time. Only a fool would delay policy action based on such a “hope.” We know the basic mechanisms well enough to know that doing nothing is a recipe for disaster.

    And please, the idea that industry always carefully weighs the cost to society is a bit silly. How many decades did it take the tobacco industry to admit in public what their own scientists knew about their products? How many people died during this time? How aggressively did the auto companies pursue safety? Did it really take until the mid-’60s to recognize the importance of seatbelts? Delay, delay, delay.

  36. Steve Horstmeyer:

    RE 15 & 27

    Chip:

    There are many who read the comments and replies contained here who are not “in the business” and who cannot readily discern what comments come from what side of the issue.

    Note how Gavin included complete disclosure in his first paragraph about Dr. Hansen being his boss. Then note how you failed to do the same about your boss, Pat Michaels, until Gavin pointed it out.

    Why did you do that? Was it merely an oversight? Or was it an attempt to lessen the impact of your association with your boss who has been convincingly exposed as being willing to manipulate the findings of others for what appears to be a predetermined political/economic goal?

    I cannot convince myself you simply forgot to disclose this important information, after all this is the type of communication you do all the time and you know how important the perception of bias is in any debate.

    So Gavin scores points on the “credibility scale” and in my ledger you have lost a few.

    Steve Horstmeyer

  37. Chip Knappenberger:

    Steve (re comment 36):

    For 1, the contents of my comment 15 were in no way dependent on my association with Pat Michaels (reread the comment to see for yourself). I agreed with Gavin’s analysis about which scenario was closest to the observations, but quibbled with his underlying sentiment that this was known to be the case in 1988. And if so, I wondered whether we know the most likely pathway for the next 20-50 years (my guess was that it does not include the extreme IPCC SRES scenarios such as A2).

    For 2, I included my usual disclaimer about being funded by the fossil fuels industry (in an attempt to head off replies to my comments being dominated by folks pointing out that association).

    For 3, I sort of thought it was common knowledge that I work for Pat Michaels (perhaps I need to add this to my disclaimer as well)

    For 4, I repeat my first point…my comment 15 was in no way a defense of Pat Michaels; it had to do with hindsight v. foresight. True, I am keenly aware of the contents of Hansen 1988 because of my work for Pat, and thus, I guess, have a tendency to comment on them.

    If my failure to point out my association with Pat (or even simply that an association with him exists, whether pointed out or not) severely reduces my “credibility,” so be it. But rarely do I invoke credibility in my comments. If you don’t believe what I am saying, then look it up. I try to include quotes and references to help you (and others) cross-check what I am commenting on.

    -Chip Knappenberger
    to some degree, supported by the fossil fuels industry since 1992
    and Pat Michaels is my boss

  38. Rod B.:

    re 35: just a sidebar tidbit: in 1956 Ford made a great corporate effort to push safety. It resulted in the industry saw, “Ford sold safety; Chevy sold cars!”

  39. Peter Williams:

    Just wondering what you use for initial conditions. Seems like a shame to waste processors doing ten years simulated time waiting for the transients to settle down so you can pick out the secular trend. Not that there’s necessarily a better way of doing it.

    I’d look in the paper to find out but I’m assuming the article’s not in the open access literature.

    Thanks,
    Peter

    [Response: It's available. Just follow the link. (The answer is at the end of a 100 year control run). - gavin]

  40. Craig Allen:

    Re: 24: “industry would never design a product that had such a high error rate as the result would be a guaranteed lawsuit and likely loss of life.”

    The climate system was not built by industry.

    Just like the San Andreas fault was not built by industry and models do not accurately predict when, where and how big quakes will be. The San Andreas fault is far less predictable than the climate, but it does not stop industry from proceeding with operations using building and other specifications that are (hopefully) designed to take account of and minimize the risks involved in building near an active fault line.

    If the geophysics science community were to come to the conclusion that something we are doing was 90% likely to be significantly and irreversibly increasing the frequency and strength of damaging earthquakes, would industries insist that they be able to continue to do it until the frequency, strength and timing of quakes could be predicted with precision? Obviously not. Why then is this course of action OK with the climate?

  41. robert davies:

    Re: 24.

    We know we need to transition to renewable energy sources – this must happen in the next century regardless of climate change. But the potentially severe impacts of a quickly warming world up the ante; therefore, though the model predictions have significant error bars, a risk management perspective demands that significant mitigation steps be taken immediately. Gavin’s analysis, posted here, serves to further support this conclusion.

    An analogy: Hurricanes approaching the Gulf coast pose a threat to refineries. These refineries have safe operating limits w/ respect to wind and water; a full shutdown requires more than 24 hours to accomplish and millions of dollars. If forecasters and consultants present them w/ a roughly 10% likelihood of exceeding those limits w/in roughly 36 hours, the refineries will shut down. From a risk management perspective they deem the multi-million dollar shutdown cost a reasonable investment, based on a 10% threat likelihood. (These figures are approximate.)

    While it’s more difficult to assign a percentage-level threat to severe climate change risk, the stakes are infinitely higher. Taken in total, the vast majority of evidence suggests that anthropogenic forcings are now dominant and that human impact over the next century will be significant, possibly catastrophic. From the standpoint of policy, it’s not about certainty, but risk management.

  42. DocMartyn:

    Could you please do a reprint of the figure, this time adding error bars?
    There is absolutely no point in trying to compare the “real” data (actually data that has been through the sausage machine a few times) with the models without them.
    Just how big are the error bars in each of the scenarios?

  43. Dave Rado:

    Re. #18

    It amazes me that this [Pat Michaels] is the most interviewed commentator by a factor of two on CNN.

    Has an academic study been done that proves that he is? (If not, where does your 2:1 figure come from?)

    Also, have any academic studies been done that show similar conclusions about other contrarians?

    And have any academic studies been done that show what proportion of press/media coverage of global warming is sympathetic to the argument that it is anthropogenic and serious, what proportion is antipathetic to that argument, and what proportion is neutral; and how the proportions have altered over recent years?

  44. Paul M:

    Today I felt like I walked out of the house and into an oven. If this is the 0.5 degree difference over 20 years, I would say we are in big trouble. This reminds me of those nightclubs with the second-rate band where a fire breaks out. At first the people find it amusing and cool, then it gets out of control and there is panic, and the eventual end is many, many charred bodies and tragedy. Right now we are in the amused stage of this proverbial fire, observing and trying to figure it out. Wait until the real burning and panic; that will eventually come. All I can do is sit outside on the porch and have a beer and wonder where all the bees are. Our only hope is a quantum paradigm shift, which is a possibility. We need to save our grandchildren.

  45. pat n:

    I think this response by James Hansen was a highlight of the interview (#34).

    A molecule of CO2 from coal, in a certain sense, is different from one from oil or gas, because in the case of oil and gas, it doesn’t matter too much when you burn it, because a good fraction of it’s going to stay there 500 years anyway. If we wait to use the coal until after we have the sequestration technology, then we could prevent that contribution.

  46. Randy:

    While not a scientist, I clearly understand that burning fossil fuels emits greenhouse gases, though the degree of warming is obviously open for heated debate and, frankly, a lot of not-so-friendly jabs on this and other sites. What appears not to be understood is that, despite someone’s prior posting, the extraction of fossil fuels is already getting more difficult and expensive, and will continue to do so. Coal and gas costs are already significantly higher than would have been suggested a number of years ago with additional taxes, and it is a foregone conclusion that regulations will be passed to push those costs even higher.

    My point: I have no faith that anything short of a technological breakthrough is going to reverse the growth of CO2. The world had less than 1 billion people in 1750, 4 billion when we had the coldest temperatures in the ’70s, over 6 billion now, and is forecast to have over 9 billion somewhere down the road. Even if we significantly slow the burning of fossil fuels, does anyone really expect CO2 to go down? All these people breathe, clear land to live, and will burn some fossil fuels. What the US does to control emissions will have no significant effect.

    I have faith that science and technology will develop the answers we need, but it will take time. Think outside of the box and quit pointing fingers.

  47. Hank Roberts:

    > industry would never design a product …

    It’s worth remembering that each improvement in the earthquake building codes has been opposed by industry on the basis that it wasn’t likely that another bad earthquake would happen, and it would cost too much, and not be economical.

    This is well studied. So is the process of getting improvements made:

    “… Promoting seismic safety is difficult. Earthquakes are not high on the political agenda, because they occur infrequently and are overshadowed by more immediate, visible issues. Nevertheless, seismic safety policies do get adopted and implemented by state and local governments. …The cases also show the importance of post-earthquake windows of opportunity…. The important lesson is that individuals can make a difference, especially if they are persistent, yet patient; have a clear message; understand the big picture; and work with others.
    (c) 2005 Earthquake Engineering Research Institute”
    http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=EASPEF000021000002000441000001&idtype=cvips&gifs=yes

  48. Eli Rabett:

    1. Gavin is looking at the forcings; I was looking at the mixing ratios in most of my posts, but did show the forcings from the follow-up 1998 PNAS paper in a later one. You could make an argument from the 1998 paper that C was “slightly better” at the time, but, of course, well within the natural variability; besides which, forcings in the early 1990s may have been decreased by the collapse of the Soviet Union, Pinatubo, etc. There may be compensating differences between how the forcings were calculated in the 1988 paper and how the scenarios’ mixing ratios differ from what actually happened.

    2. Hansen clearly explained that in the near run (a few decades), climate sensitivity is not very important:

    “Forecast temperature trends for time scales of a few decades or less are not very sensitive to the model’s equilibrium climate sensitivity (reference provided). Therefore climate sensitivity would have to be much smaller than 4.2 C, say 1.5 to 2 C, in order for us to modify our conclusions significantly.”

    3. One answer to Knappenberger’s rather childish comment –gee, how could Hansen know the future– is, of course, that the three scenarios were chosen to bracket the possible, to explore the range of what could happen, and the middle scenario was built to be the most likely. So Chip is miffed that what was thought to be most likely did happen. Well might he be, as it shows that reasonable emission scenarios can be built. The other answer is Thomas Knutson’s reply to a rather similar attempt by Pat Michaels:

    “Michaels et al. (2005, hereafter MKL) recall the question of Ellsaesser: “Should we trust models or observations?” In reply we note that if we had observations of the future, we obviously would trust them more than models, but unfortunately observations of the future are not available at this time. “

  49. Richard LaRosa:

    Re: Think outside the box. We have a vast quantity of cold, nutrient-rich water below 1000-meter depth. We have solar energy stored in the surface of the tropical ocean to act as a source for a heat engine. The heat sink is the cold bottom water which the heat engine can pump up to cool the ocean surface and the overlying atmosphere. We use some of the power to spread the cold water over the surface so that it does not sink below the layer where phytoplankton convert dissolved CO2 into organic matter that increases the mass of their bodies to feed other ocean creatures. The pumping brings up macro- and micro-nutrients required for photosynthesis, which otherwise can’t get through the thermocline in the tropical ocean. This increases the primary production of the tropical ocean and helps to repair the damage of overfishing. Using CO2 to create organic phytoplankton mass leaves less CO2 to acidify the ocean and interfere with formation of calcium carbonate shells and skeletons.

  50. Svet:

    Have the forcings from 1959 to present been growing exponentially or linearly? I had the impression that, with the growth of China and India, at least the CO2 forcings had been growing exponentially. Certainly the graph of CO2 levels at NOAA doesn’t look linear. If the growth has been exponential, then shouldn’t Scenario A be the scenario being tested against?

  51. Hank Roberts:

    >NOAA
    Eh? Look at 1990-2000 on the scenarios and on the NOAA CO2 graph.
    >Have the forcings from 1959 to present been growing exponentially or linearly?
    The answer appears to be “No.”
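    One reason the concentration graph and the forcing can differ in character: the standard simplified expression for CO2 forcing is logarithmic in concentration, so even exponentially growing concentrations produce roughly linear forcing growth. A minimal sketch (the 0.45 %/yr growth rate and 315 ppm baseline are illustrative assumptions, not the actual NOAA record):

```python
import math

def co2_forcing(c_ppm, c0_ppm=315.0):
    # Standard simplified CO2 forcing expression: 5.35 W/m^2 * ln(C/C0)
    return 5.35 * math.log(c_ppm / c0_ppm)

# Hypothetical concentration path growing exponentially at 0.45 %/yr,
# sampled every decade
conc = [315.0 * 1.0045 ** t for t in range(0, 50, 10)]
forcing = [co2_forcing(c) for c in conc]
steps = [b - a for a, b in zip(forcing, forcing[1:])]

# Exponential concentration growth => equal forcing increments per decade,
# i.e. linear forcing growth.
print(["%.3f" % s for s in steps])  # → ['0.240', '0.240', '0.240', '0.240']
```

    So “exponential CO2 concentrations” and “exponential forcing growth” (as in Scenario A) are not the same claim.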

  52. Johnno:

    Re #49: the normal bell curve can look exponential between -2 and -1 standard deviations, so extrapolate with care. Coal mining and burning will stop one day due to the quality of the reserves, not the size, i.e. when there is no longer a $ profit or energy gain.

  53. Bill Tarver:

    Randy, re comment No. 46.

    Your response sounds like an excuse for inactivity. Inactivity means BaU which, in the long term, spells possible catastrophe. It is possible to have a civilized lifestyle with lower CO2 emissions. Per capita, the US emits the most CO2 of anyone, and it wouldn’t require too much expenditure or effort to bring it down. Most West European countries have half the per capita emissions of the US, and Sweden manages with a quarter. No one looking at Sweden would conclude their lifestyle is inferior to yours. The world population may well expand to 9bn, but most of these extra people will emit a fraction of the greenhouse gases that a citizen of the USA does. Your population may be 300 million but, if you compare your emissions with those of, say, China, your effective population is roughly the size of theirs. This isn’t an anti-US rant: we all have to reduce our emissions. Doing nothing is not an option.

  54. Mike Donald:

    #4 Peak coal that soon? I’ve heard that a single state in the US has more energy in coal than the oil energy in Saudi Arabia.

    As for peak oil that’s a tricky one. Might be sooner than 2025. Have a look at the Tables in this link. Good old Exxon claims “no sign of peaking” bless them. And OPEC “deny peak oil theory” apparently!

    http://www.worldoil.com/Magazine/MAGAZINE_DETAIL.asp?ART_ID=3163&MONTH_YEAR=Apr-2007

  55. Urs Neu:

    Re 27
    Chip,
    During the last few years CO2 emissions have increased at a rate equal to or even faster than the worst SRES scenarios (e.g. A1FI). So I am not so sure that these scenarios are so unrealistic. We can just hope that the rates since 2002 are noise and not a trend…

  56. David Donovan:

    Re 49.

    The graph you link to shows NET TOTAL CO2 in the atmosphere. This is not the same thing as the emission rate referred to in the different scenarios (sort of like the difference between a curve and its derivative).
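    The curve-vs-derivative point can be made concrete with a toy integration (made-up numbers, not the actual scenarios): even perfectly flat emissions produce a steadily rising concentration curve, because concentration integrates emissions (minus sinks, ignored here).

```python
# Flat emission rate (the "derivative"), in ppm-equivalent added per year;
# illustrative values only.
emissions = [2.0] * 10

conc = [315.0]                 # hypothetical starting concentration (ppm)
for e in emissions:
    conc.append(conc[-1] + e)  # the concentration curve integrates the rate

print(conc[0], conc[-1])       # → 315.0 335.0: rises despite flat emissions
```

    A rising concentration graph therefore says nothing by itself about whether the emission rate is rising.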

  57. Bruce Tabor:

    Re #24 Sean O,

    “From a scientific POV, this may be acceptable but politicians are using this kind of data to create legislation that will cost billions and likely cost lives in hopes of saving lives.”

    No one with any credibility would do a risk analysis that ONLY includes ONE side of the equation as well as EXAGGERATES the costs of action. The Stern report is one attempt to balance the equation. The expected cost of doing nothing far exceeds the cost of taking action.

    Re #4, #10, #12, #16 etc.
    My problem with Peak Oil is that once it is upon us it will trigger a desperate search for alternatives that will include making gasoline from coal, tar sands and oil shales (among others). These are much more CO2 intensive ways to power vehicles than oil, and would potentially accelerate global warming.

  58. James Annan:

    Since Chip is here, it would be remiss to fail to mention his own prediction, made (jointly with Pat Michaels) as recently as 1998:

    That 1998′s extremely warm temperatures were largely confined to one calendar year makes the annual record high temperature 1998 has established quite a difficult one to break.

    If we were of a betting sort (and there are some nasty rumors going around that we are), we would be willing to wager that the 10-year period beginning in January 1998 and extending through December 2007 will show a statistically significant downward trend in the monthly satellite record of global temperatures.

    Surely such a wager should sound interesting to those who think the planetary temperature will increase several tenths of a degree during that period.

    No reasonable offers refused…

    Of course, Gavin is too much of a gentleman to bring it up…but I’m not :-)

  59. Mike Donald:

    #57
    ” trigger a desperate search for alternatives”

    Funnily enough that very same issue of World Oil has a very good editorial on alternatives. Looks like it needn’t be that desperate if the concept of extractus digitus grabs the politicians.

    http://www.worldoil.com/Magazine/MAGAZINE_DETAIL.asp?ART_ID=3170&MONTH_YEAR=Apr-2007

    “I am struck by the lack of fundamental breakthroughs required for an abundant, clean energy future, whether in electricity generation from wind, coal (IGCC), ocean thermal, ocean wave, ocean tide, solar, nuclear, or liquids from coal-to-liquids, gas-to-liquids, biofuels, bio-engineered fuels, and so on.”

  60. Mike Donald:

    #48
    Eli,

    “Thomas Knutson’s reply”

    I don’t think the link works so could you repost that part for us ta?

  61. Vinod Gupta:

    I am not a trained climatologist or scientist to comment on the issue of global warming, but I do have some very serious thoughts on the matter:

    - Every human act that uses external energy contributes to global warming, however large or minuscule.

    - Even the act of growing and consuming food is not carbon-neutral, as a lot of external energy is used for cultivation, fertilizers, transportation, storage, etc.

    - Every human appears to have 10, 20 or more horses yoked with him (the prime movers that burn fossil fuels and make our current lives comfortable) which consume oxygen and spew out far more carbon dioxide than man would alone.

    - the natural eco-system could perhaps only balance off animal (including man) respiration and plant photosynthesis. Perhaps no surplus carbon sink exists at all to absorb the emissions caused by burning of fossil fuels accumulated in the earth over millions of years.

    - In short, no human solution to global warming is possible for the industrial man and industrial civilization is ultimately doomed. It is only a question of how much time we may have in facing irreversible climate change and damage.

  62. tom:

    RE:44

    Where do you live? Because the United States just had a very cold April.

  63. John Wegner:

    The results to date show that the 1988 model overestimates the impact of GHGs on temperature.

    This has important consequences. If the GHG-temperature link is on the low end of the range of estimates (1.5C to 4.5C per doubling), then global warming will not be a significant problem.

    The results to date suggest that the link is indeed on the low end of the range, maybe even below the 1.5C lower limit. If that is the case, then global warming will not be a problem at all.

  64. Eli Rabett:

    #60, #48 You can find a working link in a further comment of mine on the MKL paper, K being the Chipster. Now this is not simple blog whoring on my part, but actually evidence of Eli having a senior moment. Among other things, as Knutson and Tuleya point out, Michaels, Knappenberger, and wadda you know Landsea

    Propose to adopt what appears to be a plausible but low-end scenario of future radiative forcing, whereas Houghton et al. (2001) indicates that even stronger radiative forcing scenarios than we use in KT04 are also plausible.

    Bad estimates of forcing are a tradition at New Hope Environmental Services for argument pushing as is

    In contrast to Michaels et al., who exclusively emphasize uncertainties that lead to smaller future changes, uncertainties are noted that could lead to either smaller or larger changes in future intensities of hurricanes than those summarized in the original study, with accompanying smaller or larger societal impacts.

    There is nothing new under the hot Virginia sun.

  65. tamino:

    On the one hand, we have Vinod Gupta (#61) saying, “… no human solution to global warming is possible for the industrial man and industrial civilization is ultimately doomed.”

    On the other hand, we have John Wegner (#63) saying, “The results to date suggest that the link is indeed on the low end of the range, maybe even below the 1.5C lower limit. If that is the case, then global warming will not be a problem at all.”

    Lord, what fools these mortals be!

  66. Hank Roberts:

    John Wegner wrote:
    > The results to date show that … global warming will not be a problem at all.

    Where did you publish this or where did you find it published?

    There are unfortunately a great many people named “John Wegner” who show up in a Google search for climate studies, from a lot of different places; I can’t figure out whether you’re talking about your own publication or something you read.

    What’s your source, please, and why do you consider it to be reliable?
    Have you reconciled your source with the IPCC’s published statements, or do you have reason to consider it more reliable?

  67. ChrisC:

    Sean O (Re comment 24)

    I’m an engineer by training myself (recently made the switch to meteorology). Before moving into this field, I worked in the mining industry in Australia. I saw mining companies, including the majors, pour millions, sometimes tens of millions, of dollars into projects on the rumour that a significant deposit could be found. Projects have been greenlighted here and overseas with huge risk and uncertainty associated with them. This does not even mention the risks and uncertainty associated with sending people to work in deep shafts.

    A seismic event killed a miner here last year and trapped another two for two weeks. There must have been a risk assessment done on the mine, and uncertainties were no doubt present in this analysis. Yet men were sent into the shaft and put at risk.

    The petroleum industry is the best example of this. Test wells in deep water cost over a million US smackers to drill. The expansion into the Gulf of Mexico and the Caspian Sea for meager returns shows the massive risk and uncertainty associated with petroleum.

    My point is that “industry” decisions are full of uncertainty. To wax lyrical about a hypothetical industry that does not act until near certainty is reached is to break with reality.

    To put it in context, our best guess (from GCMs) is that temperatures will rise due to GHG emissions, and this carries substantial risk. Our best guess in 1988 has been shown to be remarkably accurate. 2 + 2 = ?

  68. Barton Paul Levenson:

    [[I have faith that science and technology will develop the answers we need, but it will take time. ]]

    And in the meantime we should just go on burning all the fossil fuels we want?

    The solutions are here now: Solar, wind, geothermal, biomass, conservation. No breakthroughs required. Mass production will in and of itself drive down the capital costs, and of course for solar, wind and geothermal the fuel costs are zero.

  69. Barton Paul Levenson:

    [[In short, no human solution to global warming is possible for the industrial man and industrial civilization is ultimately doomed. It is only a question of how much time we may have in facing irreversible climate change and damage. ]]

    Switching to renewable sources of energy should do the trick.

  70. tom:

    I believe that will happen, sooner, rather than later.

    When the price of gas goes up to 4 dollars a gallon, as it probably will this summer, you will see consumers flocking to alternate choices.

  71. wayne davidson:

    #58, You brought out a curious, I might say puzzling, reason why the media give any credence to the Michaels and Lindzens of this world, when all they have to do is look at their non-existent betting records. It’s a good thing they are not putting their money where their mouths are; otherwise they would be quite poor. I will repeat this again and again: anyone interested in doing a story on a climate change expert out there? Look at their temperature projection record. Sports writers do performance statistics all the time; a science story might be just as important!

  72. Walt Bennett:

    Re: #45 and the pull quote “a good fraction of it’s going to stay there 500 years”: One of the main points regarding AGW which I took from AIT and have relied on since that time is that CO2 is highly persistent in the atmosphere. It has seemed to me from the beginning that this is the crucial point, because the lag effect and the persistence equate to ‘built in’ warming far into the future.

    However, there seems to be hot dispute over the residency time of CO2. I have seen estimates as low as 5 years.

    My question is this: what research/reference material is available which justifies the more long-term predictions? I have searched for such corroboration and thus far have come up empty.

    Walt Bennett
    Harrisburg, PA

  73. tom:

    # 58.

    What “media” are you talking about? Remember, the #1 objective of most media outlets (i.e. the ones that are corporate) is to maximize the wealth of the stockholders, not make editorial decisions about climate scientists’ credibility. This IS a capitalist country, you know.

  74. Thomas Lee Elifritz:

    The solutions are here now:

    No, they are not. Human beings have overpopulated the Earth far beyond its natural carrying capacity, and are essentially sustained by burning coal, oil, gas and uranium. You need to revisit the concept of ‘numbers’.

  75. John L. McCormick:

    RE # 69 Barton, you cannot simply throw down your wild assumption and expect anyone to take you seriously.

    How about putting some more thought into:

    [Switching to renewable sources of energy should do the trick]

  76. James Hargrove:

    On the first graph it was suggested that B is the best match. From real-world experience, wouldn’t it be more reasonable to suggest that A is what has happened, i.e. that exponential growth in forcings has occurred? It could be that this is evident in the curve as well, both before and after the 1991 volcano, in that the curve is steeper than B before 1991 and then, after the significant flat portion, crosses B at a similar slope. However, looking at the overall shape of the curve, it might be more reasonable to suggest that neither A nor B can be inferred directly from the data. There is too much noise. Unfortunately, it seems that in two regions the data are steeper than A. It would be interesting to see the derivative of these curves to see at what points the slopes are in the best agreement, and when they are not.
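    The slope comparison suggested above can be sketched with sliding-window least-squares fits. This is a minimal illustration on synthetic stand-in series (the trend values, noise level, and 10-year window are arbitrary assumptions, not the actual model or observational data):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1958, 2006)
# Synthetic "observed" and "model" annual series: linear trends plus noise
obs = 0.020 * (years - 1958) + rng.normal(0, 0.1, years.size)
model = 0.025 * (years - 1958) + rng.normal(0, 0.1, years.size)

def windowed_slopes(t, y, width=10):
    """Least-squares slope in each sliding window of `width` years."""
    return np.array([np.polyfit(t[i:i + width], y[i:i + width], 1)[0]
                     for i in range(t.size - width + 1)])

s_obs = windowed_slopes(years, obs)
s_model = windowed_slopes(years, model)

# Window (by start year) where the two local slopes agree best
closest = years[np.argmin(np.abs(s_obs - s_model))]
print("decade starting %d has the best slope agreement" % closest)
```

    With noise this large, the windowed slopes jump around considerably, which is essentially the commenter’s point about short-period comparisons.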

  77. Dan:

    re: 62. No, the US did not have “a very cold April”. It was just below average, nationwide. Some regions were considerably above average. The actual NOAA stats indicate it was the 47th coolest April out of the past 113 years and about 0.3 F below average. That is not “very cold” but rather is closer to the middle of the pack, nationally.

  78. J.C.H:

    April saw the largest demand for gasoline ever recorded. I now think the American people will buy just as much gasoline at $4 plus as they were at $3 plus.

  79. Jim Eager:

    Re #46 Randy: “All these people breathe….”

    You’re not suggesting that human respiration in any way increases GHGs, are you?

    If so, you really need to do some reading.

  80. Chip Knappenberger:

    A couple of replies:

    James (comment 58), as you know, we have discussed this bet previously, see your posting on it.

    And I have admitted that we would have lost. The same holds true today. Yes, we cherry-picked the start of the period of the bet – that was the point. And yes, we would have lost despite that, as indeed there will be no statistically significant downward trend in the global average temperature in the lower troposphere as measured by satellites (take your pick of either UAH or RSS datasets) from January 1998 through December 2007 (the period of our proposed bet). In fact, the UAH data from January 1998 through present (April 2007) is (a non-significant) 0.072 ºC/decade and the RSS trend for the same period is (a non-significant) 0.012 ºC/decade (overall, from the start of the record until now, the UAH and RSS global lower tropospheric trends are (a significant) 0.15ºC/dec and 0.18ºC/dec, respectively). Is there anything more that you would like me to add?

    Perhaps another wager? How about this – I’ll take the low end of the IPCC range of projected warming and you take the high end, and whichever of us reality proves closer to will be the winner. (Note that I would have won this bet for a period of the past 20 years using Dr. Hansen’s 1988 projections to define the range – as shown by Gavin’s analysis.)

    My point here is that I personally believe, and probably so do a lot of other folks, that the high end of the IPCC temperature (and forcing) range is unrealistic. My question is: why include it? Why did Dr. Hansen include his Scenario A, when now Gavin contends that the real “forecast” was Scenario B? What is the real “forecast” now?

    Eli (Comments 48, 64), do you think the emissions scenario used in Knutson and Tuleya (2004) was reasonable? (Hint: they assumed a 1%/yr increase in atmospheric CO2 concentration from present day for 80 years; the current rate, depending on how you define “current”, is somewhere between 0.5 and 0.6%/yr.) If they used an “idealized” scenario, then they should have only given an “idealized” conclusion – not one with a date attached to it that is likely to be far sooner than observations suggest. And as far as the “hot Virginia sun” goes, we’ve been overall a bit chillier than normal since about February – kind of unpleasant during winter, but kind of nice now! :^)

    Urs, (comment 55), thanks for the information about global CO2 emissions. I have seen the numbers for 2003 and 2004(?) but nothing since, do you have, or can you point me to, the global CO2 emissions numbers for more recent years? Thanks!

    -Chip Knappenberger
    to some degree, supported by the fossil fuels industry since 1992

  81. Thomas Lee Elifritz:

    What is the real “forecast” now?

    The real forecast is 383 ppm rising at 2 ppm/year, a minimum carbon dioxide sensitivity to doubling of 3 C, adding positive feedbacks, some of which are unknown, yields a 5 C increase in global average temperatures by 2100, and of course, time does not stop in 2100. There is 15 feet of water in West Antarctica just waiting to be released. By all available metrics, we have initiated a ‘carbon dioxide catastrophe’. You can spin it all you want, but the numbers and the paleoclimate history of the planet Earth are already starkly clear to most intelligent, credible and inquiring minds.

  82. John L. McCormick:

    RE # 80, Chip, try this link:

    http://www.esrl.noaa.gov/gmd/ccgg/trends/

    It will give you measurements up to Apr. 2007, both Mauna Loa and global, with monthly means going back to 1958.

  83. Hank Roberts:

    Mr. Knappenberger, you’ve already used that talking point in this thread, and it’s been answered; did you read the answer? If not look above, find timestamp 10:29 am.
    Read through the second paragraph of Gavin’s answer, in green font.

  84. Thomas Lee Elifritz:

    FYI : West Antarctica Melting/QuickScat in the news today :

    http://www.livescience.com/environment/070515_antarctic_melt.html

  85. tom:

    # 78 – because gas was cheap in April. Despite the ‘whining’ by us ‘muricans about gas prices, it’s RIDICULOUSLY cheap.

    I think $ 4.00 will be a little wake up call.

  86. John L. McCormick:

    RE # 81

    Thomas, you said:

    [There is 15 feet of water in West Antarctica just waiting to be released.]

    and add glacial meltback, Greenland meltback and ocean thermal expansion. Yes, they are also being, and waiting to be, released.

    So, an honest, intelligent, credible and inquiring mind can ponder this question: Can capitalism survive that amount of sea level rise?

  87. fieldnorth:

    Re: Think outside the box. No need for breakthroughs, I’m optimistic the evolution of existing technologies will make CO2 emissions surprisingly easy to control. My principal worry is businesses, politicians and green pressure groups doing much damage in the process. Witness the stupid biofuel craze, where forests are cleared for energy crops .

  88. wayne davidson:

    #80, MSU data has been questioned once or twice in the past; for instance, from NOAA, Sep 2005:

    “Scientists at the University of Washington (UW) developed a method for quantifying the stratospheric contribution to the satellite record of tropospheric temperatures and applied an adjustment to the UAH and RSS temperature records that attempts to remove the stratospheric contribution (cooling influence) from the middle troposphere record. This method results in trends that are larger than those from the respective source.”

    Have we forgotten the UW with the UAH, as with UW-UAH?

    From just a guy who sees significant warming trends with a telescope and a digital camera.

  89. John Wegner:

    The RSS MSU Lower Troposphere temperature trend might now be going down versus 1998.

    January 1998 was about 0.6C above average while the latest figures for April 2007 are only about 0.2C above average.

    http://www.remss.com/msu/msu_data_description.html#msu_amsu_trend_map_tlt

  90. Chip Knappenberger:

    Mr Roberts (re: 83),

    In Gavin’s response that you point to, he concludes “I have previously suggested that it would have been better if they had come with likelihoods attached – but they didn’t and there is not much I can do about that.” I take that to mean that he thinks that all scenarios are not equally likely. I agree. I am taking the position that I don’t think the high-end scenarios are very likely at all – for instance, SRES A1FI results in a CO2 concentration of somewhere around 970ppm by 2100, and SRES A2 produces ~850ppm. In comment 81, Thomas Lee Elifritz thinks the real forecast for CO2 concentration is 383 ppm + 2 ppm/yr, which produces ~570ppm by 2100 – an amount less than nearly all SRES scenarios! In 1988, Dr. Hansen wrote that his scenario B was “perhaps the most plausible.” So obviously people have opinions as to which scenarios are more likely than others.
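
    The back-of-envelope extrapolation behind the “~570 ppm by 2100” figure, together with a compound-growth variant at the ~0.5–0.6 %/yr rate mentioned earlier in the thread, can be checked in a couple of lines. Both growth rates are the thread’s own figures, not forecasts.

```python
# Back-of-envelope CO2 extrapolations using figures quoted in this thread:
# 383 ppm in 2007 growing at 2 ppm/yr (linear), versus compound growth at
# an assumed 0.55 %/yr. Neither is a forecast; both just check the arithmetic.
start_year, start_ppm = 2007, 383.0
years_ahead = 2100 - start_year

linear_2100 = start_ppm + 2.0 * years_ahead            # constant absolute growth
compound_2100 = start_ppm * 1.0055 ** years_ahead      # constant percentage growth

print(f"linear:   {linear_2100:.0f} ppm in 2100")
print(f"compound: {compound_2100:.0f} ppm in 2100")
```

    The gap between the two answers is the crux of the Scenario A vs. B argument: over a century, a constant absolute increment and a constant percentage growth rate diverge substantially.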

    Gavin concluded his original post with “Thus when asked whether any climate model forecasts ahead of time have proven accurate, this comes as close as you get.”

    So why is everyone being so coy? I am just asking what people think are the most likely scenarios, i.e. which climate model forecasts made “ahead of time” do you think will prove to be most accurate? My hunch is that not many will give much credence to the high-end IPCC scenarios. Perhaps I will be wrong. You all have opinions, so what are they? And how do they square with the IPCC SRES scenarios?

    -Chip Knappenberger
    to some degree, supported by the fossil fuels industry since 1992

  91. Hank Roberts:

    John Wegner wrote:
    > The results to date show that … global warming will not be
    > a problem at all.

    Where did you publish this or where did you find it published?

  92. Mark Chopping:

    re: #86 John, I don’t see any evidence of a decline in the TLT data presented on that page. 1998 was an exceptional year — why would you select it as a basis for comparison? Or did I miss something?

  93. Hank Roberts:

    Chip, nobody’s being “coy” — have you looked at the sea level rise numbers, or the Arctic ice area numbers, compared to the projections?

    Have you looked at the change in the view of the Antarctic ice, from changeless to changing unexpectedly?

    Perhaps you missed this little gem of childhood:

    “… a voice came to me out of the darkness and said,
    ‘Cheer up! Things could be worse.’
    So I cheered up and, sure enough, things got worse.”

    People like me with no expertise are here to try to learn what’s known, and perhaps to remind the scientists to speak more clearly, use one idea per paragraph, and aim for the national 7th grade average reading level.

    You’re here presenting PR positions for your clients, right? paid time?
    Or can you differ with their public stance on your own time?

    The latest IPCC report explicitly doesn’t include changes the climate scientists tell us are clearly happening, because the changes got published after the cutoff.

    http://scienceblogs.com/stoat/2007/04/egu_thursday_1.php

    So — opinion? My opinion is I trust the research more than the PR.
    Look at the history, from asbestos to lead to tobacco to pesticides to now. What can you tell about what industry PR claims? They’re lies. They minimize risk and put off certainty with …. opinions.

  94. Hank Roberts:

    John Wegner wrote:
    > The RSS MSU Lower Troposphere temperature trend might now
    > be going down versus 1998.

    You’re reading from the error Pielke Senior made and admitted making, using a period that’s too short to determine a trend. It’s blown already:
    http://scienceblogs.com/stoat/2007/05/another_failed_pielke_attempt.php

  95. James Annan:

    Chip,

    The simple answer is that SRES isn’t climate science, it is economics with a bit of politics thrown in.

    Turning SRES into climate changes, that is where the climate science comes in.

    Note, however, that over 2 decades, the SRES question hardly matters, and you have to go to wide extremes to make a noticeable difference (eg Hansen’s A, and C forecasts).

    Over 100 years, the emissions matter a whole lot more. But over that long a time scale, they are much more a matter of choice, and less a matter of the inertia in the current infrastructure. In no small part, we get to choose whether we prefer A2 or B1 etc. We don’t get to choose climate sensitivity.

  96. Hank Roberts:

    > Chip K
    Check yourself out here, when wondering why people think estimated ranges may be too conservative:

    http://www.wunderground.com/education/ozone_skeptics.asp

  97. Barton Paul Levenson:

    [[The solutions are here now:
    No, they are not. Human beings have overpopulated the Earth far beyond its natural carrying capacity, and are essentially sustained by burning coal, oil, gas and uranium. You need to revisit the concept of 'numbers'.
    ]]

    No, thanks, I was satisfied with the calculus, linear algebra, etc. I took way back in my college days.

    You need to revisit the concept of “renewable energy.” The world does not have to run on fossil fuels.

  98. Barton Paul Levenson:

    [[RE # 69 Barton, you cannot simply throw down your wild assumption and expect anyone to take you seriously.
    How about putting some more thought into:
    [Switching to renewable sources of energy should do the trick]
    ]]

    Why do you assume I haven’t?

  99. Eli Rabett:

    Chip is doing it again, when he tries to trash Hansen by saying

    I am taking the position that I don’t think the high end scenarios are very likely at all–for instance, SRES A1FI results in a CO2 concentration of somewhere around 970ppm by 2100, SRES A2 produces ~850ppm.

    he conveniently neglects to mention that the 1988 paper characterized Scenario A :

    These scenarios are designed to yield sensitivity experiments for a broad range of future greenhouse forcings. Scenario A, since it is exponential, must eventually be on the high side of reality in view of finite resource constraints and environmental concerns, even though the growth of emissions in scenario A (~1.5%/yr) is less than the rate typical of the past century (~4%/yr).

    James Annan in comment 95 is exactly right: in the long run the actual scenario is a matter of political will; in the short run, it is pretty easy to get it right and to bracket the possibilities, something that Hansen recognized from the get-go, and that Chip is trying to obfuscate.

  100. jimmiraybob:

    My point here is I personally believe, and probably so do a lot of other folks, that the high end of the IPCC temperature (and forcing) range is unrealistic. My question is why included it? Why did Dr. Hansen include his scenario A…
    - Comment by Chip Knappenberger, 16 May 2007 @ 11:39 am

    Am I missing something? I’m not a climate scientist or engineer, so the question isn’t snark. Don’t the high-, intermediate- and low-end scenarios present a range of model predictions, each based on a grouping of different assumptions & factors? And wouldn’t the purpose of this, especially in the relatively early stages of a research program, be to provide a basis for testing data against the model scenarios in order to evaluate the various assumptions & factors? Why then wouldn’t you include all three? If you were to leave out one or more possible scenarios, even if at the time you don’t necessarily consider it likely, wouldn’t you be opening yourself to charges of not being rigorous in your approach? It seems that including A, B and C and then testing the real-world data against each is science 101. Certainly even in applied engineering design there are scenarios that may not be judged at the time to be likely but are still considered, especially if the consequences of not doing so could reflect on the credibility of the design team.

  101. John L. McCormick:

    RE # 98

    [Why do you assume I haven't? ]

    Barton, put aside your wishful thinking and get serious about the 4 trillion kilowatt-hours of US 2006 electric power demand (minus the 10-20 percent of high-end efficiency capability) being replaced by:

    a. how many wind towers;
    b. how many sqft of solar panels (absent storage capability)

    their output being fed into a grid that today struggles to manage the flow of bulk power transfers.

    Wishing an end to fossil fuel consumption is about all we believers seem to offer.

    Give us some numbers, please.

  102. Thomas Lee Elifritz:

    You need to revisit the concept of “renewable energy.”

    Surely you jest. I am a pioneer of alternative energy. I certainly don’t have to justify my contributions to anybody.

    The world does not have to run on fossil fuels.

    Yes it does. We are not anywhere near closing any supply chain loops for any aspect of alternative and/or renewable energy. Ideally, the solar irradiance is adequate, but until we come up with some technological breakthroughs in optical and/or thermal energy conversion, we are many many orders of magnitude away from anything even remotely qualified as sustainable, and thus still utterly dependent upon fossil fuels. The numbers are simply not there yet, no matter how cavalierly you wave them away with simple monocrystalline solar cells and small wind generators, lead acid batteries, 12 volt DC appliances and inverters. I encourage you to install them, as I have developed these techniques over the previous several decades, but unless you’re extremely clever and resourceful, these systems will not deliver food to your table from great distances.

    Are you that clever and resourceful? Are you ready for subsistence farming in your back yard? Are you ready to dump that Victorian home and/or cheap stick McMansion, and build a superinsulated Earth home? How do you intend to pay your taxes and mortgage? Are you familiar with the amount of capital investment required for a complete alternative solution? Multiply that by billions.

    Clearly this is one part of the solution, but I find it incredible that you can completely trivialize that from the comfort of your armchair. That’s not how science is done and technology is developed. If you’re going to talk, you need to walk.

  103. Hank Roberts:

    http://www.sciencemag.org/cgi/content/abstract/316/5825/709

  104. Joel Shore:

    Chip:

    In regards to the IPCC emissions scenarios, things get a bit confusing when one is dealing with predictions of things we ourselves can influence. So, yes, I will agree with you that the high end IPCC scenarios are unlikely to come to pass but that is because I don’t think humanity will be that stupid in the face of this problem and will, at some point, ignore what the Cato Institute and Western Fuels Association thinks is best (despite the best efforts of you and Patrick Michaels!) and actually do what is best for humanity.

    So, in a sense, we are back at the Easterbrook Fallacy: It is exactly because we are going to take action to avoid the worst scenarios that they are unlikely to actually come to pass. However, it obviously doesn’t then follow that we should do nothing to avoid them because they aren’t likely happen!

  105. Thomas Lee Elifritz:

    You need to revisit the concept of “renewable energy.” The world does not have to run on fossil fuels.

    [note to moderator]

    Rather than arguing here with people, it would probably be more productive to point them to some fairly recent ‘numbers’ :

    http://www.sciencemag.org/cgi/content/abstract/298/5595/981

    Review

    ENGINEERING:

    Advanced Technology Paths to Global Climate Stability: Energy for a Greenhouse Planet

    Hoffert et al.

  106. bender:

    There’s a very simple yes/no question in #42 still awaiting reply. Unfortunate that the moderator departed at #40. I know: folks are busy.

    [Response: Not sure what yes/no question you are referring to, or indeed what error bars are being referred to in #42. Observational errors on any one annual mean temperature anomaly estimate are around 0.1 deg C, and the errors from the linear fits are given in the text. If you were more precise, I might be able to help. - gavin]

  107. Heidi:

    Funny how we looked at the very same set of forecasts and came up with a much less favorable evaluation.

    Let’s use the verification period from mid-1984 to Feb 2007. How much have temperatures risen since then? Our figures show that the average global temperature has risen between 0.37 deg C and 0.43 deg C, depending on how we smooth it. Either way, this agrees roughly with the commonly quoted warming rate of 0.17 deg C per decade.

    From the graphs in Hansen’s paper, we interpolate the following forecasts of warming for the 1984-2006 period….

    Scenario A 0.84 deg
    Scenario B 0.64 deg
    Scenario C 0.48 deg

    So, forecast minus observed….

    Scenario A 0.84 deg – 0.40 deg = error of 0.44 deg
    Scenario B 0.64 deg – 0.40 deg = error of 0.24 deg
    Scenario C 0.48 deg – 0.40 deg = error of 0.08 deg.
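
    The subtraction in the table above can be reproduced directly. The scenario warmings are the commenter’s interpolations from Hansen (1988) and the ~0.40 deg C observed figure is the commenter’s own estimate; neither is re-derived here.

```python
# Reproduces the forecast-minus-observed arithmetic in the table above.
# The scenario warmings are the commenter's interpolations from Hansen (1988)
# and 0.40 deg C is the commenter's observed estimate, taken as given.
observed = 0.40
scenario_warming = {"A": 0.84, "B": 0.64, "C": 0.48}

for name, forecast in scenario_warming.items():
    print(f"Scenario {name}: {forecast:.2f} - {observed:.2f} = "
          f"{forecast - observed:+.2f} deg C")
```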

    Our interpretation is that Scenario A is the one that has occurred…

    “Scenario A assumes continued exponential trace gas growth, scenario B assumes a reduced linear growth of trace gases, and scenario C assumes a rapid curtailment of trace gas emissions such that the net climate forcing ceases to increase after the year 2000.”

    http://adsabs.harvard.edu/abs/1988JGR….93.9341H

    The rate of CO2 growth has continued to increase.

    So, in summary, Hansen overforecast the warming by more than a factor of two (0.84 deg vs 0.40 deg). Even allowing Scenario B as the verification, the forecast is still off by 60% (0.64 vs 0.40).

    Hansen’s forecast, however, should not be totally discredited. At the time of his research, global warming was a fringe idea and the warming trend had scarcely even begun. Warming has occurred, and somewhat dramatically, just not as strongly as forecast.

    [Response: Who are 'we'? I ask because I find a number of aspects of 'your' analysis puzzling. First off, there is no way that Scenario A was more realistic - the proof is in figure 1 above which gives the real forcings used. Secondly, I cannot discern what your interpolation procedure is for analysing the data - it is not a linear trend (which are given above) - and without that information it is unclear whether it is appropriate and what the error bars on the forced component would be due to the unforced (random) noise component associated with any one year in any one realisation. And thirdly, you don't give a source for your observational data - absent these three things I cannot comment on your conclusions. - gavin]

  108. Heidi:

    re #58

    I don’t know how you can look at the satellite record since 1998 and not conclude that the temperature has fallen.

    “Statistically significant trend” is open for debate…but clearly satellite measured temperatures are lower than in 1998.

    http://upload.wikimedia.org/wikipedia/en/7/7e/Satellite_Temperatures.png

  109. Timothy Chase:

    Heidi (#107) wrote:

    “Statistically significant trend” is open for debate…but clearly satellite measured temperatures are lower than in 1998.

    http://upload.wikimedia.org/wikipedia/en/7/7e/Satellite_Temperatures.png

    Impressive chart.

    For the last twenty-five years, the new highs have been coming along with a fair amount of regularity. The trend lines seem quite telling.

    I would say that the temperatures are definitely going up.

  110. Ralph:

    re #70, #85 Gas at $4/US Gallon?? Perish the thought!

    Over here in the UK, today’s pump price was 95.9p/litre.

    Using 1 US gallon = 3.785 litres, that works out at roughly 3.63 GBP.

    Using xe.com’s exchange rate of 1 GBP = 1.97705 USD, that equates to $7.18/gallon. (When I last ran this calculation, the USD was even weaker, and the sums worked out to c. $8.50/gallon.)

    The point, though, is this: even at that price, people still drive their cars over here. Traffic volume continues to increase in this green and pleasant land, so I suspect it’s going to take something a bit sterner than a mere $8/US gallon to materially affect people’s decisions about transport and energy use.
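
    The unit conversion above checks out; as a quick sketch, using the litres-per-gallon and exchange-rate figures quoted in the comment:

```python
# Check of the pump-price conversion in the comment: UK pence per litre to
# US dollars per US gallon, using the conversion factors quoted above.
pence_per_litre = 95.9
litres_per_us_gallon = 3.785
usd_per_gbp = 1.97705                     # exchange rate quoted from xe.com

gbp_per_gallon = pence_per_litre / 100.0 * litres_per_us_gallon
usd_per_gallon = gbp_per_gallon * usd_per_gbp

print(f"{gbp_per_gallon:.2f} GBP/US gal = {usd_per_gallon:.2f} USD/US gal")
```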

  111. James Annan:

    Heidi,

    “Statistically significant trend” is open for debate

    “Statistically significant trend” was their choice, not mine, and they freely admit they got it wrong, so it’s a bit silly for you to pretend otherwise. Obviously given the size of the temperature spike in the monthly satellite record for 1998, it may be some time before this measurement is beaten. However, the annual average record in surface temperatures should be overtaken in the next few years. I’d bet money on it happening by 2010, in fact. Any takers?

  112. DocMartyn:

    “Observational errors on any one annual mean temperature anomaly estimate are around 0.1 deg C”

    There are a finite number of temperature recording stations, and these are placed around the Earth in a heterogeneous manner. It is therefore obvious that each station requires a different weighting, and, coupled with the natural variation of temperature, there can be no simple formula for determining the SD, as in +/- 0.1 deg C.
    What is the actual error for the years recorded? What are the error bars of projections A, B and C in the year 2010?

    [Response: Estimates of the error due to sampling are available from the very high resolution weather models and from considerations of the number of degrees of freedom in the annual surface temperature anomaly (it's less than you think). How would I be able to know the actual error in any one year? If I knew it, I'd fix the analysis! Finally, what do you mean by the 'error bars of projection'? These were single realisations, and so don't have an ensemble envelope around them (which is how we would assess the uncertainty due to weather noise today). That would have given ~0.1 deg C again in any one year, much less difference in the trend. If you want uncertainty due to the forcings, then take the span of Scenario A to C, if you want the uncertainty due to the climate model, you need to compare different climate models which is a little beyond this post - but look at IPCC AR4 to get an idea. - gavin]
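
    One concrete piece of the sampling question above is area weighting: a global mean cannot be a plain average over stations or grid cells, because cells at high latitude represent less surface area. A minimal sketch on a synthetic anomaly field (the grid and anomaly values here are invented for illustration, not real data):

```python
import numpy as np

# Illustrates the area-weighting point: on a regular lat-lon grid each cell
# must be weighted by ~cos(latitude). The anomaly field is SYNTHETIC:
# a uniform 0.5 deg C anomaly with an extra 1.0 deg C north of 60N.
lats = np.arange(-87.5, 90.0, 5.0)            # cell-centre latitudes
lons = np.arange(2.5, 360.0, 5.0)
anom = np.full((lats.size, lons.size), 0.5)
anom[lats > 60, :] += 1.0                     # exaggerated Arctic warming

weights = np.cos(np.radians(lats))            # relative area per latitude band
weighted = np.average(anom.mean(axis=1), weights=weights)
unweighted = anom.mean()                      # naive mean overweights the poles

print(f"area-weighted: {weighted:.3f} deg C, unweighted: {unweighted:.3f} deg C")
```

    The unweighted mean overstates the global anomaly here because the polar bands, where the extra warming sits, cover far less of the Earth’s surface than an equal count of grid cells suggests.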

  113. Barton Paul Levenson:

    [[Barton, put aside your wishful thinking and get serious about the 4 trillion kilowatt-hour US 2006 electric power demand (minus the 10-20 percent of high end efficiency capability) being replaced by:
    a. how many wind towers;
    b. how many sqft of solar panels (absent storage capability)
    their output being fed into a grid that today struggles to manage the flow of bulk power transfers.
    ]]

    John, put aside your wishful thinking and get serious about the amount of new power-related infrastructure built every year, and how quickly we could switch to renewables if the bulk of that were directed that way. For an example of how quickly the US can produce something it really wants to produce, you might consider the amount of US shipbuilding in 1938 versus that in 1943.

    I didn’t say renewables could do it all this minute. What I said was that renewables could do it all, a point which I will continue to stand by despite your patronizing attitude.

  114. Barton Paul Levenson:

    [[The world does not have to run on fossil fuels.
    Yes it does. We are not anywhere near closing any supply chain loops for any aspect of alternative and/or renewable energy. Ideally, the solar irradiance is adequate, but until we come up with some technological breakthroughs in optical and/or thermal energy conversion, we are many many orders of magnitude away from anything even remotely qualified as sustainable, and thus still utterly dependent upon fossil fuels.
    ]]

    No it doesn’t. We can add large amounts of wind, solar, geothermal and biomass to the mix now, and in 10-50 years can be running entirely off renewables. All that’s necessary is the political will to do it. The repeated statement that we need technological breakthroughs in order for renewable energy to be useful is a lie. Solar thermal plants work now, so do windmills. No new technology required.

    We are utterly dependent on fossil fuels in 2007. We do not have to be utterly dependent on fossil fuels in 2027.

    [[ Are you ready for sustenance farming in your back yard? Are you ready to dump that Victorian home and/or cheap stick McMansion, and build a superinsulated Earth home? How do you intend to pay your ta-xes and mort-gage? Are you familiar with the amount of capital investment required for a complete alternative solution? Multiply that by billions.]]

    Straw man. All I said was that we should switch to renewable sources of energy. Living like a cave man as the solution to global warming is a lie thought up by people like Rush Limbaugh and repeated by thoughtless people like yourself.

    [[Clearly this is one part of the solution, but I find it incredible that you can completely trivialize that from the comfort of your armchair. That's not how science is done and technology is developed. If you're going to talk, you need to walk. ]]

    Jeez, Tom, please don’t let my old professors know I don’t know how science is done. They might take back my degree in physics.

  115. Barton Paul Levenson:

    [[I don't know how you can look at the satellite record since 1998 and not conclude that the temperature has fallen.
    "Statistically significant trend" is open for debate...but clearly satellite measured temperatures are lower than in 1998.
    ]]

    Not as of 2005-2006 they aren’t. Your information is obsolete.

  116. Heidi:

    re # 108, 110

    The bet from 1998 was better discussed in #80, apparently by the guy who made the bet.

    Clearly, the raw temperature has fallen since 1998 (in the satellite data), but apparently not the trend as they defined it to be calculated.

  117. FurryCatHerder:

    Re #108:

    However, the annual average record in surface temperatures should be overtaken in the next few years. I’d bet money on it happening by 2010, in fact. Any takers?

    Which year is “by 2010”? I’ll take you up on 2009 being lower than 1998. 2011 or 2012 – very likely not.

    1998 was exceptional in a lot of ways. Let’s hope 2011 or 2012 aren’t equally exceptional.

  118. Walt Bennett:

    Repost, hoping for a reply:

    Re: #45 and the pull quote “a good fraction of it’s going to stay there 500 years”: One of the main points regarding AGW which I took from AIT and have relied on since that time is that CO2 is highly persistent in the atmosphere. It has seemed to me from the beginning that this is the crucial point, because the lag effect and the persistence equate to ‘built in’ warming far into the future.

    However, there seems to be hot dispute over the residency time of CO2. I have seen estimates as low as 5 years.

    My question is this: what research/reference material is available which justifies the more long-term predictions? I have searched for such corroboration and thus far have come up empty.

    Walt Bennett
    Harrisburg, PA

  119. Thomas Lee Elifritz:

    We can add large amounts of wind, solar, geothermal and biomass to the mix now, and in 10-50 years can be running entirely off renewables.

    Still having trouble with orders of magnitude, I see. Did you even bother to read the article I posted? Carbon and hydrocarbon combustion are integrated high temperature processes, with fertilizer and metals production thrown into the mix, the same fertilizers and metals that permit the production and transportation of food over great distances. Unless everyone is producing their own food in their back yard, renewing the soil every year from biomass, which we know now is physically impossible given the population of the Earth, then it simply won’t work. By not addressing the actual scales involved, you are entering the realm of irrationality. It’s not that it can’t work, it’s simply that it won’t work given the population of the Earth. An alternative energy solution given today’s technology would require drastic reductions in population.

    Wind perhaps has the greatest potential from an energy-cost standpoint, but solar has severe materials bottlenecks at this time. Do you know why?

  120. Alastair McDonald:

    Re #118 Since no-one else has replied I will give it a go.

    It does not seem to be generally recognised that CO2, like H2O, is a feedback. When temperatures rise the oceans give off more CO2, and when temperatures fall the cold oceans absorb more CO2. Thus the CO2 content of the atmosphere depends on sea surface temperatures, amongst other things.

    So, although each molecule of CO2 that escapes from the oceans will, on average, be back in the ocean again in five years time, if the sea surface temperature rises the increase in the atmospheric CO2 will remain.

    But just to be quite clear, the warming of the oceans is being caused by the increase in CO2 due to man’s activities. But that increase will be self-sustaining. In fact it will soon get worse, because at present 40% of the increase is being dissolved in the oceans. Half of that CO2 is removed from the atmosphere by rain in the Arctic and buried deep in the ocean by the descending leg of the THC (the Conveyor ocean current). When the Arctic sea ice disappears, the atmospheric CO2 will no longer be removed in the NH. Thus the 40% of man-made CO2 which is currently being removed into the oceans will be reduced to the 20% which is being absorbed in the Weddell Sea.

    Assuming the world does not get so hot that all vegetation is killed off, eventually the creation of peat bogs and the burial of trees will remove the excess CO2 from the atmosphere, but that will take perhaps thousands of years. Of course, if civilised man continues to rule the planet, no trees will be buried as they will be needed to fuel his extravagant life style.

  121. John L. McCormick:

    RE # 113 Barton, you said,

    [I didn't say renewables could do it all this minute. What I said was that renewables could do it all, a point which I will continue to stand by despite your patronizing attitude.]

    Barton, my attitude is not patronizing; it is critical of your refusal to back up any of your hand-waving claims that:

    [renewables could do it all].

    Where did you ever get that notion?

    Will China, India, Brazil, Indonesia, EU, US, Canada, Mexico, Papua New Guinea, Egypt, Russia……..run their economies entirely on renewables? Of course not. No economy can except maybe the remarkable Amish people.

    Read the business announcements of US, China and India energy growth reliant upon coal, coal, coal, coal. If you think you can change that scenario, do something more than make pronouncements.

    You do not serve your position by arguing what no one else is proposing, i.e.,[renewables could do it all].

    That is all I am going to contribute to this discussion with you. And I care just as deeply as you do about saving our children from the future we are designing for them.

  122. TheBlackCat:

    I don’t mean to sound sarcastic but aren’t the high end estimates, by definition, unrealistically high? Isn’t that the whole point of having such estimates? If it wasn’t unrealistically high it would be the best estimate (or at least closer to that), not the high end estimate. Aren’t worst-case scenarios supposed to be worse than you really think it is going to be? I may be missing something, but I find all this talk about the high end estimate being too high both obvious and trivial. Based simply on the name I would be surprised, and extremely worried, if they weren’t too high.

  123. Eli Rabett:

    #118, Walt, there is a long explanation about how long CO2 mixing ratios persist in the atmosphere on Real Climate (use the search box), but the short answer is:

    If you are asking how long on average a single particular molecule of CO2 persists in the atmosphere, five years is a good estimate,

    If you are asking how long an increase in the total amount of CO2 in the atmosphere will persist, 500 years is a reasonable estimate (see the RC post for details). See also #120

  124. Chip Knappenberger:

    Eli,

    I am not following your Comment 99… “Chip is doing it again…when he neglects to mention…” I used precisely that Hansen (1988) quote in my original comment (#27). So what have I neglected to do? Your point is precisely my point: Scenario A was never thought to be correct. This is definitely true in the long run (as Hansen’s (1988) quote makes clear), and Gavin’s claim that the close match with Scenario B shows that “Thus when asked whether any climate model forecasts ahead of time have proven accurate, this comes as close as you get” indicates that Scenario A was never thought to be correct in the short run either.

    Obviously, we, the denizens of the world, will determine the future course of human emissions to the atmosphere as well as other perturbations to the earth. And our actions will ultimately determine which SRES scenario was closest to predicting the future. Reality, rather than idealism, will largely determine how fast fossil fuel emissions will decline… and reality, believe it or not, involves issues of greater personal urgency to most of the world’s ~6.5+ billion (and growing) people than matters of climate.

    -Chip Knappenberger
    to some degree, supported by the fossil fuels industry since 1992
    (and, as is the case with virtually all of you, a beneficiary of the fossil fuels since birth)

    [Response: Chip, you seem to insist that we break everything down into correct and incorrect projections. There is no such binary distinction. The reason why we have best case and worst case scenarios is not because they are likely, but because they are possible. The scenario generation process was designed to give possible futures and for modellers to ignore such possibilities is irresponsible. I point out again that these results do not span the whole space of possibilities - witness polar ozone depletion which was much worse and much more rapid than models predicted. Remember, there are always unknown unknowns! -gavin]

  125. John Wegner:

    I have just recalculated the trendline in the RSS lower troposphere satellite data using the most recent April 2007 data.

    Get this: the trend is 0.0001K per decade.

    So I would say the bet (referred to above in #80) is not over yet.

    Moreover, the lower troposphere temperature has declined by 0.723C from April 1998 (the peak of the El Nino event) to April 2007 (a mild La Nina event).

  126. tamino:

    Re: #124 (John Wegner)

    What’s the starting point of your analysis?

  127. Jim Manzi:

    Gavin:

    Thanks for the excellent post. We have gone back and forth some on this issue, and your analysis and interpretation is a real example of the intellectual integrity that we can expect from a scientist speaking in his area of technical expertise.

    The first several paragraphs of your post go into the details on a question of fact: which projected Scenario ended up being closest to the actual measured forcings that occurred between Hansen’s prediction and today. I took Hansen’s word for it in his 2006 NAS paper that these were closest to Scenario B, but the depth of your presentation was very useful.

    You then proceed to analyze the fit of model predictions vs. actual results through 2006. You note that the Scenario B forecast is a good fit to the observed outcomes. I think that there are two key points here:

    1. You highlight the first point very clearly. Exactly as you say:

    “But can we say that this proves the model is correct? Not quite. Look at the difference between Scenario B and C. Despite the large difference in forcings in the later years, the long term trend over that same period is similar. The implication is that over a short period, the weather noise can mask significant differences in the forced component.”

    You then proceed to conduct a simple and important piece of sensitivity analysis, basically asking what is the maximum sensitivity that we could exclude as a possibility at the 95% significance level. I don’t see an explicit statement of the maximum value, but you say that a sensitivity of 1C would fall below it. In this sensitivity analysis you are explicit that you are “assuming the transient trend would scale linearly”. I presume that there is some physical basis for this assumption, but isn’t this a huge assumption in calculating the probability distribution for the actual sensitivity?

    2. The second point is that the exact start and end dates that we pick for the evaluation of prediction vs. actual turn out to be critical. Suppose instead of evaluating this forecast with 1988 as the base through 2006 (i.e., 18 years from forecast), we have evaluated it through 2000 (i.e., 12 years from forecast). In that case (reading approximately from the chart published in Hansen’s paper), the Scenario B forecast would have been about 0.2C of warming and actual would have been indistinguishable from 0C warming. This would have implied massive % error. While I suppose it’s possible that a detailed analysis of the actual forcings through 2000 might have reconciled this finding, even if they did not I think we would both agree that a reasonable analyst would not have concluded in 2000 that we had just falsified the prediction. (This is one of the reasons that Crichton’s claim that Hansen was “off by 300%” or whatever was so outrageous.) All this is of course the flip-side of your appropriately modest claim that we have not yet proved the accuracy of the model – or more precisely, measured the fact of most interest: the sensitivity – at the 18-year point.

    I think what this point highlights is that without an a priori belief about the relationship between time from prediction and an expected confidence interval for prediction accuracy, it’s very hard to know where the finish line is in this kind of an evaluation.

    Gavin, once again thanks for taking the time to develop and post this material.

    [Response: If you restricted the analysis to 1988 to 2000 (13 years instead of 23 years of 1984-2006), no trends are significant in the obs or scenario B (scenario A is and scenario C is marginally so), although all error bars encompass the longer term trends (as you would expect). Short term analyses are pretty much useless given the level of noise. The assumption that the transient sensitivity scales with the equilibrium sensitivity is a little more problematic, but it's a reasonable rule of thumb. Actually though, there is slightly less variation in the transients than you would expect from the equilibrium results. -gavin]

  128. wayne davidson:

    #124 Maybe NOAA needs to be corrected, as their RSS lower troposphere trend is .20 C/decade

    http://lwf.ncdc.noaa.gov/oa/climate/research/2007/apr/global.html#Temp

    #113 Barton, How unimaginable it is to have a cluster of 1 million wind turbines over an apt oceanic location; I wonder how much energy that would produce? Maybe we can produce more wind towers than cars for a few years?

    By the way, the planetary lapse rate = -g/Cp is one of the most elegant equations in atmospheric physics; do you know who first derived it?
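That formula is easy to evaluate numerically. A minimal sketch for Earth; the constants below are standard textbook values supplied here as assumptions, not taken from the thread:

```python
# Dry adiabatic lapse rate: Gamma = -dT/dz = g / cp
# Constants are standard values for Earth (assumed here, not from the comment).
g = 9.81      # gravitational acceleration, m s^-2
cp = 1004.0   # specific heat of dry air at constant pressure, J kg^-1 K^-1

gamma = g / cp                      # K per metre
print(f"{gamma * 1000:.1f} K/km")   # about 9.8 K per kilometre for dry air
```

The elegance the comment alludes to is that only two quantities appear: how hard gravity pulls and how much heat the gas can store per unit mass.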

  129. Philippe Chantreau:

    RE # 124: how much urgency do you think the climate now represents for the people of Australia?

  130. wayne davidson:

    #125 Mr Wegner, if it is the mid-troposphere, consider UW-RSS as a more valid calculation. UW gets it more right (cheers to the guys and gals at the U of Washington). There is no way little old me would have picked up the historical all-time 2005 warming signal in February-March 2005 with an amateur telescope and not powerful reliable weather satellites.

  131. Chip Knappenberger:

    Well, Gavin,

    The thing that got me commenting in this thread in the first place was your claim, in your original post, that a pretty good a priori forecast had been made with Scenario B. The reason, I guess, that Scenario B is considered “the” forecast is that the other two scenarios were known at the time to be relatively unlikely, if not outright impossible. Thus, today, you point to Scenario B as the true “forecast”. I ask, then, if you were in the same position today, as Dr. Hansen was in 1988, which scenario would you point to as the “perhaps most plausible”? Instead of qualifying the SRES scenarios, the IPCC chose to present them all as equally likely. Based upon your subsequent comments, I think we agree that they are not all equally likely, and we know this a priori. Further, Dr. Hansen (and yourself and many others) do not like it when the results of Scenario A are presented against actual observations to claim that a gross error had been made. Why? Because Scenario A was never really considered by you all to be reflective of the way things would evolve. In the IPCC’s case, the results from the high-end scenarios are splashed all over the press indicating how bad things may possibly become. If you don’t like comparisons being made between observations and Scenario A (because it wasn’t considered to be very likely), then how do you feel about advertising outcomes produced with the high-end SRES scenarios (when they aren’t very likely either)?

    Maybe your answer is that all SRES scenarios should be considered Scenario Bs, and that the IPCC didn’t include the next to impossible Scenarios A and C. I could accept this as an answer. But I think that you think that the set of SRES scenarios that could really be considered Scenario Bs is less than the total number.

    Thanks for the fair treatment and consideration of my comments.

    -Chip Knappenberger
    to some degree, supported by the fossil fuels industry since 1992

    [Response: There are two separate points here. First, scenario A did not come to pass, therefore comparisons of scenario A model results to real observations are fundamentally incorrect - regardless of how likely anyone thought it was in 1988. Secondly, Jim said B was 'most plausible', not that A was impossible. We are climate scientists not crystal ball gazers. So can I rule out any of the BAU scenarios? No. I can't. If people who've looked into this more closely than I have for many years think these things are possible, I would be foolish to second guess them. More to the point, all the scenarios assume that no action is taken to reduce emissions to avoid climate change. However, actions are already being taken to do so, and further actions to avoid the worst cases will continue. If that occurs, does that mean it would have been foolish to look at BAU? Not at all. It is often by estimating what would happen in absence of an action that an action is taken. So if in 100 years, emissions tracked the 'Alternative Scenario' rather than A1F1, would that mean that the A1F1 scenario was pointless to run? Again no. The only test is whether it is conceivable that the planet would have followed A1F1 - and it was (is). -gavin]

  132. John Wegner:

    To tamino #126.

    Sorry, I should have included the starting date – January 1998. My comments were specifically about the bet referred to in #80 – that there would be a statistically significant downward trend in lower troposphere temperatures (from RSS or UAH) from January 1998 to December 2007. People seem to give the RSS group more credibility even though the UAH temps are virtually identical.

    If the current temperature from April 2007 remains in place for the remainder of the year, there will be a downward trend in temperatures of 0.0003C per decade in regards to the bet although that is unlikely to be significant.

    What is significant, however, is the amount of heat that the 1997-98 El Nino dumped into the atmosphere and the fact that temperatures are now 0.7C lower than the peak of 1998. The current La Nina pattern may, in fact, result in a continuing decline in temperatures.

    I believe this also has bearing on the accuracy of Hansen’s model predictions.

  133. cats:

    Thanks for the post.

  134. Walt Bennett:

    Eli,

    Thanks for the tip. I am anxious to read the link, but I cannot find it no matter how I try. Might you help me locate it?

    Alistair, thank you for that boilerplate, which makes a lot of sense. Do you have any references you can share?

  135. tamino:

    Re: #125, #131 (John Wegner)

    First of all, your numbers are wrong. The trend from Jan. 1998 to Apr. 2007 is not 0.0001 K/decade, it’s 0.01 +/- 0.34 K/decade. I’m guessing that instead of multiplying the annual rate by 10, you divided by 10.

    Second, you’ve demonstrated an interesting fact, namely: EVEN IF you do the most extreme cherry-picking possible (and Jan. 1998 is the most extreme choice), you *still* get a positive trend rate, albeit not statistically significant.

    Third, the trend for the entire RSS MSU data set is 0.18 +/- 0.06 K/decade. Taking the data ONLY from 1998 to Apr. 2007, the range of values is -0.33 to +0.35 K/decade, which range includes the whole-dataset estimate 0.18 K/decade. So, the data from 1998 to 2007 provide no statistical evidence that the trend is any different from that of the entire dataset.

    What you’ve essentially shown is that for the MSU data, even cherry-picking will not produce your desired result.
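The trend-and-error-bar arithmetic in this exchange is easy to reproduce in outline with an ordinary least-squares fit. A minimal sketch; the synthetic monthly series below stands in for the actual RSS data, which are not reproduced here, so the printed numbers are illustrative only:

```python
import numpy as np

def trend_with_2sigma(t, y):
    """OLS slope and its 2-sigma uncertainty, in units of y per unit of t."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    # standard error of the slope from the residual variance
    se = np.sqrt(resid @ resid / (len(t) - 2) / ((t - t.mean()) ** 2).sum())
    return slope, 2.0 * se

# Synthetic monthly anomalies, Jan 1998 - Apr 2007: a weak trend plus weather noise.
rng = np.random.default_rng(42)
t = 1998.0 + np.arange(112) / 12.0
y = 0.018 * (t - t[0]) + rng.normal(0.0, 0.15, t.size)

slope, err = trend_with_2sigma(t, y)
print(f"trend = {10 * slope:+.2f} +/- {10 * err:.2f} K/decade")
```

With only about nine years of noisy monthly data the 2-sigma range spans a few tenths of a K/decade, which is the point made above: a short window simply cannot distinguish a flat trend from the long-term one.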

  136. Urs Neu:

    Re 80 (and further on)
    Chip
    I have seen the global emission numbers from Mike Raupach (CSIRO and chair of the Global Carbon Project). They will be published soon in PNAS.
    2002/2003 and 2003/2004 have shown an increase of around 5%, 2004/2005 of around 3%.
    I fear that Hansen’s Scenario B emissions trend has turned out to be the most adequate only because of the collapse of the Soviet Union in the 1990s. If you look at what has happened during the last few years in China and India, at how tough negotiations on emission caps are, and at how much coal there is in China, etc., I do not know how you can think that the high emission scenarios are so unlikely. Of course we hope they will not happen, but at the moment I cannot see any reason to exclude them.

  137. Eli Rabett:

    Walt, try this: How long will global warming last by David Archer. I was hurried at the time I wrote, even Rabetts have day jobs.

  138. Richard Ordway:

    re. 43 [Also, have any academic studies been done that show similar conclusions about other contrarians?]

    Are you trying to deceive the public? Who are you working for?

    I don’t care a whit about academic studies…and neither should you or the public …unless you have a political agenda.

    “Academic study” means that your facts are not necessarily checked to see if you are lying (such as “academic studies” that show that tobacco smoking is not harmful to your health…or that global warming is not happening…and there are plenty of examples of both that are just lies).

    Make it a peer-review study in an established peer-review journal…that stands up under scrutiny and then you can start paying attention.

    I can’t believe that you said that.

  139. Marion_Delgado:

    I think the climate denialists are further refining our models for disingenuousness every day.

    Climate science says, “this is a range given our inability to completely predict massive human and institutional actions and the availability and development of technology, so here are some scenarios – if the conditions are X0, we see X1 as a likely outcome. If they’re Y0, we see Y1 as a likely outcome. If they’re Z0, we see Z1 as a likely outcome. We see X0, Y0 and Z0 as being in the range of likely outcomes.” Not only is the true picture closest to Y0, the midrange estimate, and the one the representative of climate science presented as likeliest, but the data follow the model prediction for Y0 very closely.

    The denialists come back with the same tired dishonest argument about X1-Z1 being imprecise and X1 being wrong.

    At some point you have to tell them they need new squid ink because the ocean of plausibility has already been saturated with their old formula and doesn’t have the water-clouding power it used to have.

  140. Marion_Delgado:

    Alastair:

    But that increase will be self sustaining. In fact it will soon get worse, because at present 40% of the increase is being dissolved in the oceans. Half of that CO2 is removed from the atmosphere by rain in the Arctic and buried deep in the ocean by the descending leg of the THC (the Conveyor ocean current.) When the Arctic sea ice disappears, the atmospheric CO2 will no longer be removed in the NH. Thus the 40% of man made CO2 which is currently being removed into the oceans will be reduced to the 20% which is being absorbed in the Weddell Sea.

    Apparently, we may already be there, at least in the Southern Ocean:

    ENVIRONMENT:
    Southern Ocean Nears CO2 Saturation Point
    Stephen Leahy

    BROOKLIN, Canada, May 17 (IPS) – Climate change has arrested the Southern Ocean’s ability to absorb greenhouse gases from the atmosphere, researchers announced Thursday.

    That will make it more difficult to stabilise carbon dioxide (CO2) levels in the atmosphere and to reduce the risks of extreme forms of global warming.

    The Southern Ocean has been absorbing less CO2 from the atmosphere since 1981, even though levels have increased 40 percent due to burning of fossil fuels. Oceans absorb half of all human carbon emissions, but the Southern Ocean is taking up less and less and is reaching its saturation point, reported an international research team in the journal Science.

    This is the first evidence of the long-feared positive feedbacks that could rapidly accelerate the rate of climate change, pushing impacts to the extreme end of the scale.

    “This is serious. All climate models predict that this kind of feedback will continue and intensify during this century,” said Corinne Le Quere of Britain’s University of East Anglia and the paper’s lead author.

    “With the Southern Ocean reaching its saturation point, more CO2 will stay in our atmosphere,” Le Quere said in a statement.

    This new research is not reflected in the recent reports from the Intergovernmental Panel on Climate Change (IPCC).

  141. Dave Rado:

    As someone has completely misunderstood my previous post I’ll re-word it slightly, and I hope I’ll get a sensible response this time:

    Re. #18

    It amazes me that this [Pat Michaels] is the most interviewed commentator by a factor of two on CNN.

    Has a peer reviewed study been done that proves that he is? (If not, where does your factor of two figure come from?)

    Also, have any peer reviewed studies been done that show similar conclusions about other contrarians?

    And have any peer reviewed studies been done that show what proportion of press/media coverage of global warming is sympathetic to the argument that it is anthropogenic and serious, what proportion is antipathetic to that argument, and what proportion is neutral; and how the proportions have altered over recent years?

    Dave

  142. Hank Roberts:

    See also:

    http://www.realclimate.org/index.php?p=160
    7 Jun 2005
    How much of the recent CO2 increase is due to human activities?
    Contributed by Corinne Le Quéré, University of East Anglia.

    and
    http://www.agu.org/pubs/crossref/2006/2005GB002511.shtml
    and
    http://lgmacweb.env.uea.ac.uk/lequere/publi/Le_Quere_and_Metzl_Scope_2004.pdf

    And much else. I hope Dr. Le Quéré will say more here soon. I’ve really been hoping to hear from the biologists more.

  143. Alastair McDonald:

    Hi Walt,

    Re #135 The general opinion amongst most climate scientists and oceanographers is that a slow down in the THC will cause a cooling based on what they think happened at the start of the Younger Dryas stadial (mini ice age.) However, the oceanographer who pioneered that belief is now having second thoughts. See Was the Younger Dryas Triggered by a Flood? by Wallace S. Broecker in Science 26 May 2006.

    Meanwhile in Germany they have been investigating CO2 and ocean. See Biopumps Impact the Climate. They published a paper in Science today described here Antarctic Oceans Absorbing Less CO2, Experts Say and here http://www.nature.com/news/2007/070514/full/070514-18.html

    The paper shows that I am wrong to think that it is only in the Arctic where there is a danger of the CO2 failing to be sequestered. Also their figures for the proportions of CO2 will be more accurate than those quoted by me which were very rough but of the right order.

    I think it is obvious that the situation is pretty scary. What is worse is that scientists who run RealClimate don’t seem to be aware of the danger. Perhaps it is not their speciality and they prefer not to comment.

    HTH, Alastair.

  144. FurryCatHerder:

    Re #108:

    However, the annual average record in surface temperatures should be overtaken in the next few years. I’d bet money on it happening by 2010, in fact. Any takers?

    Which year is “by 2010”? I’ll take you up on 2009 being lower than 1998. 2011 or 2012 — very likely not.

    1998 was exceptional in a lot of ways. Let’s hope 2011 or 2012 aren’t equally exceptional.

  145. John Wegner:

    To tamino #135 – I’m guessing you rounded the figures from RSS while I used all the data and a statistical package to derive the trend. Try it again.

  146. tamino:

    Re: #142 (John Wegner)

    I did no rounding. I used the monthly means downloaded from the RSS website. Using a program I developed myself, which has been in use among astrophysicists for over a decade, I get a trend from Jan. 1998 to Apr. 2007 of 0.00122 K/yr, which is 0.0122 K/decade. To test the remarkably unlikely possibility that the program is wrong, I also loaded the data into Excel in order to compute the trend. I got exactly the same result. Perhaps you should try it again.

    And in fact, it doesn’t really matter whether it’s 0.0122 K/decade or 0.0001 K/decade, it’s still greater than zero, and it’s still true that even cherry-picking can’t produce the result you want.

    And of course, it’s still +/- 0.34 K/decade.

  147. Bruce Tabor:

    Re. 105 Thomas Lee Elifritz ,

    Thanks for the reference, Thomas. Interestingly, the paper discusses using superconducting nitrogen (N2) liquid pipelines as an efficient transmission network. A recent Scientific American article proposed that superconducting hydrogen (H2) be used instead – eliminating the duplication of “power lines” and hydrogen pipelines.

    Bruce

  148. Blair Dowden:

    Re #143: About the southern ocean absorbing less carbon dioxide, the Nature paper said:

    The main cause of the changes seems to be a relatively rapid increase in average wind strengths over the Southern Ocean, Le Quéré and her team report. These stronger winds, thought to be driven by the depletion of the ozone layer over Antarctic regions, churn up the ocean and bring more dissolved carbon up from the depths.

    My thoughts on this are:

    1) The cause is said to be ozone depletion, only indirectly linked to global warming.

    2) Are wind speeds supposed to increase over most of the oceans? I think we are talking about a local effect here that should not be extrapolated.

    3) Why does ocean churn reduce CO2 uptake? I thought CO2 absorption was slowed down by the lack of mixing with lower ocean layers.

    Later they say “An increase in global temperature is predicted to worsen the effect, since warmer waters hold less gas.” This makes sense, but the question is by how much. This paper does not address that issue.

  149. Timothy Chase:

    Alastair McDonald (#143) wrote:

    Meanwhile in Germany they have been investigating CO2 and ocean. See Biopumps Impact the Climate. They published a paper in Science today described here Antarctic Oceans Absorbing Less CO2, Experts Say and here http://www.nature.com/news/2007/070514/full/070514-18.html

    The paper shows that I am wrong to think that it is only in the Arctic where there is a danger of the CO2 failing to be sequestered. Also their figures for the proportions of CO2 will be more accurate than those quoted by me which were very rough but of the right order.

    The paper you referred to is:

    Saturation of the Southern ocean CO2 sink due to recent climate change
    Corinne Le Quere, et al.

    Not available unless you have a subscription, but there is a little introductory article at:

    Polar ocean is sucking up less carbon dioxide
    Windy waters may mean less greenhouse gas is stored at sea.
    Michael Hopkin
    Published online: 17 May 2007
    http://www.nature.com/news/2007/070514/full/070514-18.html

    The mechanism is a little strange: not the reduced mixing, but increased mixing due to the winds resulting in carbon being brought up from below. A real step in the direction of the ocean becoming an emitter rather than a sink.

    In any case, I know I have brought this up before, but another carbon cycle feedback is kicking in: heat stress is reducing the ability of plants to act as carbon sinks, at least during the warmer, drier years. Plenty of positive feedback here, too, although not quite as significant in terms of the greenhouse effect. Higher temperatures mean increased evaporation from the soil. Evaporation from the soil means that plants will have less water. Plant die-off means carbon dioxide is released and more soil is exposed, increasing evaporation, and thus drying out the soil for neighboring plants. Then there are the fires, which release a great deal of carbon dioxide. I remember reading a paper recently (I could look it up) which demonstrated that the fires are typically associated with human settlement, at least in the Amazon river valley – which makes this process more controllable.

    Impact of terrestrial biosphere carbon exchanges on the anomalous CO2 increase in 2002–2003
    W. Knorr
    GEOPHYSICAL RESEARCH LETTERS, VOL. 34, L09703, doi:10.1029/2006GL029019, 2007

    Now of course increased evaporation also means increased precipitation, but this tends to fall prematurely over the source of the evaporation – the oceans. And even when it does fall over land, it tends to be more sporadic. Floods. Not that conducive for growing crops.

    Then we have the decreased albedo of ice during the spring, which implies greater absorption of sunlight; then we have the ice quakes, which are becoming more severe in Greenland, and the rapidly moving subglacial lakes in the Antarctic.

    It looks like some long-anticipated feedback loops are getting started – and some unanticipated ones as well.

    I think it is obvious that the situation is pretty scary. What is worse is that scientists who run RealClimate don’t seem to be aware of the danger. Perhaps it is not their speciality and they prefer not to comment.

    I suspect the latter: they do the science and leave the panic to others. Probably works best that way – they can do what they are good at, and we can take care of the other part. A neat division of labor. In any case, this will take time, just probably not as much as we thought. But if we don’t light the fuse to something other than global warming, I strongly suspect that people will still be here a million years from now, even if they are only hanging on by their fingernails.

  150. Bruce Tabor:

    Hi Walt (re #72, 118 and others).

    I must admit I share your frustration regarding an explanation of how long CO2 stays in the atmosphere. I’ve made a little progress by myself which I’ll attempt to share.

    There are two separate issues in this problem: the CO2 residence time and the CO2 response function.

    If you look at Figure 1, page 5 of this document:
    http://www.royalsoc.ac.uk/displaypagedoc.asp?id=13539
    You will see the residence time of carbon (CO2) in the atmosphere is about 3 years. Your typical CO2 molecule will spend 3 years in the atmosphere before being taken up by the ocean or terrestrial biosphere. But you will also notice CO2 has a residence time of 6 years in the surface ocean and 5 years in the living terrestrial biosphere. The fluxes to and from the atmosphere are huge and approximately balance, and this is why the residence time of CO2 is so short. This approximate balance of fluxes in both directions is a reflection of the equilibrium exchange of CO2 between the atmosphere and the ocean/terrestrial biosphere.

    However, the equilibrium exchange does not help us answer our question. What happens when we put more than the equilibrium CO2 in the atmosphere? This is dealt with by the CO2 response function. A lot of work on this has been done by Fortunat Joos et al in Bern, which I don’t pretend to understand in detail at this stage:
    http://www.climate.unibe.ch/~joos/
    However, if you go to footnote (a), page 34 at the bottom of Table TS.2 in the Technical Summary of the IPCC WG1 AR4 report there is an equation which summarises the Bern CO2 response function:
    http://ipcc-wg1.ucar.edu/wg1/Report/AR4WG1_TS.pdf
    I’ll try and reproduce it here:
    “The CO2 response function used in this report is based on the revised version of the Bern Carbon cycle model used in Chapter 10 of this report [which did not help me understand the function - Bruce]
    (Bern2.5CC; Joos et al. 2001) using a background CO2 concentration value of 378 ppm. The decay of a pulse of CO2 with time t is given by
    a0 + a1*exp(-t/t1) + a2*exp(-t/t2) + a3*exp(-t/t3)
    where a0 = 0.217, a1 = 0.259, a2 = 0.338, a3 = 0.186, t1 = 172.9 years, t2 = 18.51 years, and t3 = 1.186 years, for t < 1,000 years.”
    Now I would intuitively have thought that the decay function would be related to the size of the “pulse”, but a simple way to understand this function is to multiply it by 100 ppm and use a spreadsheet to calculate the amount left over 1000 years. I calculated 57 years to reach 50 ppm. The function does not decay to zero, but to 21.7 ppm – so calculating a mean is meaningless – although it is clear from my reading that slower processes which operate over thousands of years have been ignored in the equation.

    This is a compartmental model with three decay processes that operate over different time scales. You can see this more clearly by plotting the response function versus the log of the time scale. It has 3 sloped straight lines joined by curves before becoming flat around 1000 years (I wish I could show this here). The initial decay is dominated by 0.186exp(-t/1.186), the second by 0.338exp(-t/18.51), and the last by 0.259exp(-t/172.9). These times (t1, t2 and t3) are sometimes called the e-folding times, the time to decay to 1/e of the original value (not quite the half-life).

Physically it means that the initial fast decay process requires a large driving force (departure from the equilibrium CO2). Then it peters out and each of the two slower processes takes over in turn. [This is why I don't understand why the size of the pulse - the concentration gradient, i.e. the driving force - is not specified. Although it may make sense if the compartments are arranged consecutively. I have to think about it. I'm struggling to find an explanation of the model.]

    I hope this helps!

  151. wayne davidson:

146 Tamino, Consider the physical impossibility of having the warmest year in history with all its heat radiation confined to surface stations. RSS or UAH calculations from raw data seem totally unrealistic; the UW effort looks right on.

  152. Jim Manzi:

    Re: 127

    Gavin, thanks for your thoughtful in-line comment.

    I thought about it a little bit, and did the following quick analysis, in all cases eyeballing the numbers from the chart in Hansen’s 2006 published paper:

    1. Averaged the two observational time series to create an estimated actual temperature for each year.

    2. Calculated the 18-year temperature change for each 18-year period starting in 1958 and ending in 1988 (the year the prediction was published), i.e., 1958-1975, 1959-1976 and so on through 1971-1988.

    3. Did the same calculation for each 13-year period starting in 1958 and ending in 1988.

    4. Calculated the mean measured change (.14C) for the population (N=14) of 18-year periods, and then calculated the Std. Dev. (.22C) for this population.

    5. Calculated the mean change (.11C) and Std. Dev. (.20C) for the population (N=19) of 13-year periods.

6. Made the synthetic forecast that an analyst would have made in 1988 based on the simple trend: the projected change in temperature over the 13 years ending in 2000 (.11C) and the 18 years ending in 2005 (.14C) vs. 1988.

    7. Added these projected changes to the actual temperature in 1988 to create temperature forecasts for 2000 and 2005 as of 1988 based on simple trend.

    8. Recorded the Scenario B temperature forecast published in 1988 for the years 2000 and 2005, and termed this “model forecast”.

    9. Compared the model forecast to the simple trend-based forecast calculated in Step 7 for the years 2000 and 2005.
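Steps 2-5 above can be sketched in a few lines of Python. The series here is an illustrative stand-in, since the real values were eyeballed from Hansen's chart; note that an "18-year period" such as 1958-1975 spans 17 year-to-year steps, which is how the N=14 start years (1958 through 1971) come out:

```python
import statistics

def period_changes(anomalies, span):
    """Temperature change over every `span`-calendar-year period
    (i.e. span-1 yearly steps) in a year -> anomaly dict."""
    return [anomalies[y + span - 1] - anomalies[y]
            for y in sorted(anomalies) if y + span - 1 in anomalies]

# Stand-in series: a flat 0.01 C/yr ramp over 1958..1988 (illustrative
# only; the real values were read off the chart in Hansen's 2006 paper).
demo = {1958 + i: 0.01 * i for i in range(31)}

changes18 = period_changes(demo, 18)   # periods 1958-1975 .. 1971-1988
print(len(changes18))                  # 14 periods, matching step 4's N
print(round(statistics.mean(changes18), 3),
      round(statistics.pstdev(changes18), 3))
```

With the real observational series, the mean and standard deviation of `changes18` are the .14C and .22C figures in step 4.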

    The analytical results are as follows:

    1. As is obvious from visual inspection of the chart, the model prediction is basically spot-on for 2005. The simple trend-based forecast for 2005 was .46C, while actual in 2005 was .70C, so the forecast error for the simple trend was .23C, or about 1 Std. Dev. for 18-year periods.

    2. The model prediction for 2000 was .55C, while actual was .35C. Forecast error for the model was therefore .20C, or about 1 Std. Dev. for 13-year periods. Forecast error for the simple trend in 2000 was about 0.5 Std. Dev.

    3. These results are directionally robust against various starting periods for the analysis, including 1964 (the low point in the historical temperature record provided in Hansen) and 1969, as well as for 13-year and 18-year period mean and Std. Dev. calculated for the whole dataset, including the years 1989 – 2005 not available at the publication date for the forecast.

    Observations are as follows:

    1. We can not reject the null hypothesis of continuation of the simple pre-1988 trend based on evaluation of model performance through 2005.

    2. Once normalized for the underlying trend and variance in the data for differing forecast periods, the model was off by the same number of Std. Dev. in 2000 as the simple trend-based forecast was in 2005. That is, the relative performance of the model vs. simple trend is highly dependent on the selection of evaluation period.

3. The model forecast for the period 2010 – 2015 plateaus at an increase of about .70C, which is about .50C, or slightly more than 2 Std. Dev., greater than the simple trend-based forecast for this period, so as per Hansen’s comments in his 2006 paper, that should be a period when evaluation of the forecast will permit more rigorous evaluation of model performance vs. the simple pre-1988 trend.

    Gavin, I don’t think any of this contradicts anything that you put in your post to start this thread. As you said clearly, model results to date are consistent with an accurate model but do not prove that the model is accurate.

  153. bender:

    Parameterized models typically fail soon after they are used to simulate phenomena in domains outside those in which the model was parameterized. As climate continues to warm, the probability of model failure thus increases. The linear trend fit of the Hansen model to observations during the last few decades thus does not imply that we can expect more of the same. This is why I am interested in the uncertainty associated with the parameters that make up the current suite of parameterizations. How likely is it that those parameters are off, and what would be the consequences, in terms of overall prediction uncertainty? Is moist convection the wild card? Why or why not? Tough question, I know. Possibly even overly presumptive. Maybe the best way to answer is through a new post?

[Response: You need to remember that the models are not parameterised in order to reproduce these time series. All the parameterisations are at the level of individual climate processes (evaporation, cloud formation, gravity wave drag etc.). They are tested against climatology generally, not trends. Additionally, the models are frequently tested against 'out of sample' data and don't 'typically fail' - viz. the last glacial maximum, the 8.2 kyr event, the mid-Holocene etc. The sensitivity of any of these results to the parameterisations is clearly interesting, but you get a good idea of that from looking at the multi-model meta-ensembles. One can never know that one has spanned the full phase space, and so one is always aware that all the models might agree and yet still be wrong (cf. polar ozone depletion forecasts), but that irreducible uncertainty cuts both ways. So far, there is no reason to think that we are missing something fundamental. -gavin]

  154. James Annan:

    FCH (#117) I meant including 2010, and I won’t offer long odds but I reckon it has to be more likely than not. For a top-of-my-head calculation, 2007 looks like a miss but 3 more shots, each with a roughly independent 1/3 chance of exceeding 1998 means an 8/27 probability of not breaking the record over this time (hence 19/27 that the record is broken).
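The top-of-the-head arithmetic checks out exactly:

```python
from fractions import Fraction

p_miss_one_year = Fraction(2, 3)    # ~2/3 chance a single year fails to beat 1998
p_no_record = p_miss_one_year ** 3  # three roughly independent shots (2008-2010)

print(p_no_record)       # 8/27 chance the record survives
print(1 - p_no_record)   # 19/27 chance the record is broken
```
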

    According to NASA GISS, 2005 already broke the old record.

  155. Steve Bloom:

    Re #153 (JA): 2007 looks like a miss? Just based on the GISS combined land/ocean monthlies through April (I assume what’s used for the record, but perhaps it’s land only?), 2007 is second warmest after 2002 (which after a fast start had a relatively cool remaining year and so fell out of the running) but warmer than both 1998 and 2005. Are you expecting La Nina to have its way?

  156. Barton Paul Levenson:

    [[By not addressing the actual scales involved, you are entering the realm of irrationality. It's not that it can't work, it's simply that it won't work given the population of the Earth. An alternative energy solution given today's technology would require drastic reductions in population.]]

    No it wouldn’t. You don’t understand how markets work or how governments can affect them. There’s nothing irrational about what I said, but there’s something ignorant about assuming the US has to stay locked in the infrastructure patterns it now possesses.

  157. Barton Paul Levenson:

    [[That is all I am going to contribute to this discussion with you.]]

    Super. Then I’ll have the last word by pointing out that renewables can do it all. Maybe not this year, maybe not ten years from now, but to say that it will never happen in our lifetime is just not justified. Technological change can happen very rapidly. Compare the number of horse-drawn buggies being manufactured in 1910 versus 1920. Compare the number of ships built in the US in 1938 versus 1943. Compare the number of electronic calculators in use in 1972 versus 1982. Do I need to go on? We are not locked into the present infrastructure forever. Hundreds of billions of dollars are spent on energy infrastructure every year in this country. How long did it take Brazil to go from gasoline to ethanol for their cars?

  158. Barton Paul Levenson:

    [[#113 Barton, How unimaginable it is to have a cluster of 1 million wind turbines over an apt oceanic location, I wonder how much energy that would produce? May be we can produce more wind towers than cars for a few years? ]]

    I would love to see that. I understand Denmark is already producing 20% of its electricity from wind, and a lot of that comes from offshore arrays.

    [[By the way, planetary Lapse rate = -g/Cp is one of the most elegant equations in atmospheric physics, do you know who was the first person who has derived it? ]]

    I’m not sure, but it goes fairly far back. My guess would be 19th century.

  159. Bruce Tabor:

    Re 127 Jim Manzi,

    Hi Jim,

When I use the averaged data as you described from 1958 to 1988 and fit a linear regression I get a slope of 0.09 degC/decade with 95% CIs of 0.04 to 0.14 degC/decade. This is significant, but well below Gavin’s estimate. Using this model to forecast the temperature in 2005 I get an estimate of 0.35C +/- 0.14C, well below the recorded anomaly of 0.7C. I am using the standard approach to estimating and forecasting using ordinary least squares (OLS).

This tells me a forecast based on OLS over this time period is not appropriate. And indeed Gavin has not used OLS to do the forecast, but simply to compare predicted and observed trends. He has done this for a period when the trend is approximately linear, so a linear model is appropriate.

If I had the data for the 3 models I could formally test the hypothesis that their slopes differ for 1988 to 2006. As it is, a forecast for 2005 based on OLS regression for 1988 to 2006 has a mean of 0.61C with a 95% CI from 0.37C to 0.84C. This just tells me what is obvious from the plot: models B and C are consistent with the data, while model A is too high.
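For anyone who wants to reproduce this kind of fit, here is a bare-bones OLS sketch in Python; the anomaly series below is a stand-in ramp at roughly the 0.09 degC/decade slope, since the averaged GISS data are not reproduced in this thread:

```python
def ols(xs, ys):
    """Ordinary least squares: return (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

years = list(range(1958, 1989))
# Stand-in anomalies rising 0.09 C/decade (illustrative, not the GISS data).
temps = [0.009 * (y - 1958) for y in years]

slope, intercept = ols(years, temps)
print(f"trend:         {slope * 10:.2f} C/decade")
print(f"2005 forecast: {intercept + slope * 2005:.2f} C")
```

With the real averaged series this is the fit that gives the 0.35C +/- 0.14C forecast quoted above.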

    Cheers,

    Bruce

    ps. I get the same estimates for trend (including errors) as Gavin for the 2 observational datasets.

  160. Timothy Chase:

    bender (#153) wrote:

Parameterized models typically fail soon after they are used to simulate phenomena in domains outside those in which the model was parameterized. As climate continues to warm, the probability of model failure thus increases. The linear trend fit of the Hansen model to observations during the last few decades thus does not imply that we can expect more of the same. This is why I am interested in the uncertainty associated with the parameters that make up the current suite of parameterizations. How likely is it that those parameters are off, and what would be the consequences, in terms of overall prediction uncertainty? Is moist convection the wild card? Why or why not? Tough question, I know. Possibly even overly presumptive. Maybe the best way to answer is through a new post?

    My own thoughts, for what they are worth…

    I would expect convection and turbulence to come into play, especially hurricanes as they send heat into the atmosphere and cool the ocean below on a large scale, and are themselves so unpredictable.

    But there are other things. For example, we don’t presently know all the positive feedbacks, and such feedbacks, even if they were known (and could be appropriately modeled) would undoubtedly introduce a chaotic element much like turbulence itself. Then of course the grid itself implies that the data has only so much resolution. But at this point, there are the model parameters themselves which are known with only a certain level of accuracy. Undoubtedly we will be able to improve upon that and include more of the feedbacks.

It helps, too, to keep in mind that the global model works in concert with regional models. Global calculations determine the background in which regional models are applied, but they don’t have the same level of detail or use as many variables. One recent story involving this is that, while the global models predict temperatures to rise in places like Chicago and Washington DC by a certain amount, regional models are better able to keep track of the time of day when precipitation is likely to occur. Since this is later in the afternoon, the ground has a chance to heat up, and as a consequence, rather than being absorbed, the precipitation is more likely to evaporate, drying out the soil, the plants, etc. As such, they are expecting more dry hot summers, with temperatures likely to be hotter: 100-110 F in the 2080s. And this will affect the global behavior over time. Global models help tune the regional, but the regional will in turn help to tune the global.

    Finally, models don’t predict specific behavior. What they predict is a certain probability distribution of behaviors – using multiple runs with slightly different initial conditions. This helps to account for uncertainty with respect to initial conditions and determine the robustness of the results. Likewise, multiple models which may assume different parameters or take into account different mechanisms help to determine the robustness of various results.

  161. Dick Veldkamp:

    #113 #158 A million wind turbines offshore?

    Back of the envelope: the current common turbine size is 3 MW, so that makes 3,000,000 MW for such a windfarm. Because of variability in wind, average output would be something like 40% of this (offshore), or 1,200,000 MW.

    What does that mean? World energy demand is 15 TW (2.5 kW/head) of which 1/6 is electricity, which gives 15e9/6 = 2.5e9 kW = 2,500,000 MW. Thus 1 million turbines would provide half of the world’s electricity. Now that would be something.
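Spelled out in code (all figures are the estimates already stated above: 3 MW turbines, a 40% offshore capacity factor, 15 TW world demand of which 1/6 is electricity):

```python
n_turbines = 1_000_000
rated_mw = 3.0
capacity_factor = 0.40            # typical offshore estimate used above

avg_output_mw = n_turbines * rated_mw * capacity_factor
world_electricity_mw = 15e6 / 6   # 15 TW = 15e6 MW, one sixth is electricity

print(f"farm average output:      {avg_output_mw:,.0f} MW")
print(f"world electricity demand: {world_electricity_mw:,.0f} MW")
print(f"share of world supply:    {avg_output_mw / world_electricity_mw:.0%}")
```
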

    Currently there is 75,000 MW installed, and the expectation for 2020 is 300,000 MW (1/10 of the figure above, generating 5% of the world’s electricity).

    Denmark already gets 20% from wind, Germany and Spain are at 6% or so.

There are some more figures in Wikipedia: http://en.wikipedia.org/wiki/World_energy_resources_and_consumption but they seem not to be entirely clear on the difference between installed wind power and average output power.

  162. Jim Manzi:

    Re: 159

    Hey Bruce:

    Thanks for pushing the analysis.

    You write:

    “When I use the averaged data as you described from 1958 to 1988 and fit a linear regression I get a slope of 0.09 degC/decade ”

    Check (or at least very close to what I get, but I’m eyeballing numbers from a chart).

    You’ll find, as I’m sure is obvious to you, that this slope estimate is highly sensitive to the starting year for the calculation. This is because the behavior in the first several years of the dataset is very different than in the balance of the dataset. 1958 – 1963 has essentially flat temperatures, then there is a big drop in 1964 followed by a steady upward trend from 1964 to 1988 (and, unfortunately for all us, on through to today).

I don’t know of any physical logic for starting the trend calculation in 1958; it’s just the first year Hansen has on his chart. I assume there is some good reason why this chart starts in 1958, but I don’t know what it is. In the absence of any other a priori information, I think a rational analyst would probably start the slope calculation in 1964 at the start of the “regime change”. If you do this, you will find that the projected temperature for 2006 is about 0.6C. Given that your OLS projection for 2005 from the 1988 to 2006 dataset is .61C, the slope from 1964 to 1988 is not statistically distinguishable from the slope from 1988 to 2006. It is a continuation of the same “trend”.

    So should we start the calculation for “trend” in 1958 or 1964, or for that matter, 1864?

Because any of these starting point assumptions is pretty unsatisfying and the slope is so sensitive to this assumption, I calculated the series of 13-year and 18-year differences to create a distribution of slopes and then considered any one difference a draw from this distribution. This is a grossly imperfect way to do this, of course, but a reasonable approximation to the real time-series analysis.

The estimate developed this way for 2005 is .46C with a large distribution of error, while the range of estimates based on an ensemble of OLS models built on data with various starting points runs from .35C to .60C with some complicated, but large, distribution of error. So whether you do the “poor man’s Bayesian” analysis one way or the other, you still get the same basic conclusion about what “trend” and uncertainty around trend would have been in 2005.

Once again, I don’t think I’ve disagreed in any fundamental way with you or Gavin on this. When evaluating a model forecast that as of the most recent available date is effectively EXACTLY right, there is no way any reasonable person can conclude that the performance of the model is anything other than consistent with a high degree of accuracy. The subtle but important point, that Gavin is explicit about, is that this doesn’t necessarily “prove” that the model is “right”, or more precisely, we have to be careful in evaluating the confidence interval that the performance to date establishes. As our discussion so far indicates, this is a tricky problem, exacerbated by a paucity of data points.

    Thanks again.

  163. Thomas Lee Elifritz:

    Then I’ll have the last word by pointing out that renewables can do it all

    You don’t understand how markets work or how governments can affect them.

    But I do know how science works, and it doesn’t work when christian libertarian scientists sit back in their armchairs, and by merely lifting a little finger, are able to pronounce that markets will accomplish the global implementation of alternative energy, and that no effort by christian libertarian scientists is necessary at all.

  164. Timothy Chase:

    One more point regarding the ability of models to predict the behavior of complex systems….

There are many uncertainties involved, and additional factors which some later theory will undoubtedly account for. However, oftentimes the uncertainties will cancel each other out, even over extended periods of time. If we look over the charts for predicted vs real temperatures over the span of a decade or more, this is much of what we see: drift, but drift that largely cancels itself out, such that the distance between the real and the predicted increases only gradually, with the real and the predicted crossing over at various points, particularly early on.

This is in fact part of the central premise of climatology: it isn’t dealing with the day-to-day weather so much as the average behavior of the system, but it can do this only because fluctuations and unknown factors often cancel one another out. In some ways, it reminds me of the dirty back-of-the-envelope calculations a physicist (Oppenheimer?) made immediately after witnessing the test at Trinity – which were fairly accurate. Obviously climate models are far more sophisticated, but I believe the same principle applies.

  165. Neil Bates:

    Hi, I was directed here from a comment at WaMo comment page, and here is basically what I asked there:

    The seasonal variation of temperature increase caused by general increase in solar luminosity (as some claim is the major cause of GW) would be different from that due to CO2, wouldn’t it, since the mechanism (direct thermal equilibrium with radiation v. absorption of longer-wave IR) is different? Who is looking at that?

BTW, aside from averages, I notice that the seasonal temperature curve seems to have been pushed forward in time a few weeks, with it staying warmer or colder longer in the year. In any case, the change in average is noticeable here in SE Virginia, where it used to snow more etc., although we get more early spring storms and ice.

  166. Timothy Chase:

    Re Thomas Lee Elifritz (#163):

    But I do know how science works, and it doesn’t work when christian libertarian scientists sit back in their armchairs, and by merely lifting a little finger, are able to pronounce that markets will accomplish the global implementation of alternative energy, and that no effort by christian libertarian scientists is necessary at all.

    Thomas,

    Could we drop the adjective “christian” in our criticisms? I belong to the Quasi branch of Spinozism, but I still find it offensive. Besides, many libertarians are nonreligious. Feel free to criticize libertarians, or better yet, libertarian approaches, but please leave religion out of it.

    [Response: Note to all. There are plenty of places to discuss religion and science on the web. This is not one of them. -gavin]

  167. Dan Hughes:

    re: # 161

The numbers are off by a factor of about 5; the actual, real-world as-installed-and-operating capacity factor for wind power is about 8% (and that’s the high end of the range). The E.ON Wind Report 2005, and this summary of that report, give these numbers.

    Wind energy cannot replace base-loaded electricity generating facilities.

Most power-generating methods that are not forced by a big hammer like consumption of fuel seldom reach the name-plate capacity even when the driving mechanism is present. The sun doesn’t shine and the wind doesn’t blow all the time, and when these mechanisms are present they are often not present to the extent required to attain the rated generating capacity.

    All corrections appreciated.

    ps

    Regarding nuclear energy, if the future situation is as dire as described all round these days, how can *any* options be taken off the table? It would seem that any and all options for energy supplies should be valid for consideration.

  168. Timothy Chase:

    Dan Hughes (#167) wrote:

    Regarding nuclear energy, if the future situation is as dire as described all round these days, how can *any* options be taken off the table? It would seem that any and all options for energy supplies should be valid for consideration.

I think things are somewhat worse than what is usually described (IPCC estimates are conservative and do not take into account all of the positive feedbacks which result from moving outside of the metastable attractor), and in my view we can’t afford to take any of the options off the table – although if we see problems with a given approach, we may try to minimize them through safeguards, e.g., so as to keep nukes out of the hands of rogue regimes. With respect to a given approach, there may be valid concerns, but if that approach is necessary, the concerns may be addressed by other means.

  169. John Wegner:

    Regarding Wind Power,

Each MW costs about $1.5 million in capital costs on land. I imagine sea-based windmills would be more expensive.

So if you are going to build 1 million 3 MW wind turbines on the ocean, you would need to come up with at least $4,500,000,000,000 to build it. Anyone have $4.5 trillion setting around?
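The arithmetic behind that figure, for anyone checking:

```python
n_turbines = 1_000_000
mw_per_turbine = 3
cost_per_mw = 1.5e6      # the on-land figure quoted above, in dollars

total_cost = n_turbines * mw_per_turbine * cost_per_mw
print(f"${total_cost:,.0f}")   # $4,500,000,000,000
```
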

  170. Heidi:

    Re: #107…

    “We” are a group of meteorologists who do modeling and verifications. We are not climate experts but are curious as to the success of actual, real world climate forecasts.

    We don’t really know how scenario A, B, and C are defined exactly but clearly “C” has not occurred. The rate of CO2 rise has been increasing so we assumed that forcings in “B” had been exceeded. Therefore, the scenario that occurred was much greater than “C” and greater than “B”.

To estimate the rise in temperature since 1984, we used the difference in moving averages. No matter how we crunched the numbers, the answer was between a .37 and .44 deg rise. I don’t see how you could get a much different answer no matter how you do the numbers.

I’m not sure where our stat guys fetched the data but it came from NOAA and it matches the graphed anomalies that are presented here. Our 1984 smoothed anomaly is around +.14 deg and currently the smoothed anomaly is about +.56, which is a rise of +.42.

Then we simply looked at the warming forecast in the graphs. It appears to us that Hansen clearly overforecast the warming. Once again, we reiterate that Hansen deserves credit. At the time of his research, global warming was a fringe idea and the warming trend had scarcely even begun. Warming has occurred, and somewhat dramatically, just not as strongly as forecast.

    We aren’t experts in climate science but we are experts in forecasting and verification. This is not a complicated verification. Did we go wrong somewhere ?

    [Response: The forcing scenarios are described in the original paper (linked to above) and are graphed for your convenience above. Scenario B has not been exceeded in the real world - what matters is the net forcing, not the individual components separately. The observational data I used are the GISS datasets available online at http://data.giss.nasa.gov/gistemp - different datasets might give slightly different answers, but I haven't looked into it much. Note that different products available are all slightly different in coverage and methodology and are not obviously superior one to the other. That uncertainty has to factor into the assessment of whether the model trends were reasonable. If you are looking at smoothed anomalies vs the OLS trend, be aware that you are using more information than I did above. -gavin]

  171. SecularAnimist:

    Dan Hughes wrote: “Wind energy cannot replace base-loaded electricity generating facilities.”

    I am not convinced that that is true, but in any case intermittent distributed electricity production from wind and photovoltaics can dramatically reduce the “need” for baseline generation capacity. In particular rooftop PV tends to generate the most electricity during periods of greatest demand — ie. sunny summer days when air conditioning usage peaks — which means that we don’t need to have as many big power plants to meet that peak demand.

    In addition I would point out that small scale distributed wind and PV are ideal solutions for rural electrification in the developing world, in countries which don’t have the resources to build giant power plants of any kind, or to build the grids to distribute electricity from large centralized power plants. That is one of the markets where wind and PV generation are growing rapidly. This is similar to the way some developing countries, like Vietnam, chose to build modern national telephone systems using cell phone technology rather than wired telephone technology.

    Dan Hughes wrote: “Regarding nuclear energy, if the future situation is as dire as described all round these days, how can *any* options be taken off the table? It would seem that any and all options for energy supplies should be valid for consideration.”

    Nuclear power can be taken off the table because, completely independent of the dangers and risks that it presents, it is not an effective way — and certainly not a cost-effective way — to reduce GHG emissions as much and as rapidly as they need to be reduced. I often hear nuclear advocates proclaiming that “nuclear is THE solution to global warming” and that “no one can be serious about dealing with global warming if they don’t support expanded use of nuclear power” but I have never heard any nuclear advocate lay out a plan showing how many nuclear power plants would have to be built in what period of time to have a significant impact on GHG emissions. That’s because when you really look at it, you will see that even a massive global effort to build thousands of new nuclear power plants would have only a modest impact on GHG emissions and even that impact won’t occur for decades.

    Resources spent building new nuclear power plants means resources diverted from other efforts — conservation, efficiency, wind, PV, biofuels, geothermal, mass transit, less car-oriented communities — that would produce more results faster.

  172. wayne davidson:

#161, Thanks for the revealing calculations. In my opinion 1 million wind towers could be produced faster than 1 million automobiles, and the world produces dozens of millions of automobiles a year. Maybe the good people at Ford and Chrysler will start a rethink and produce wind turbines along with hydrogen electric cars - an automobile industry feedback! - with the total car concept producing everything from the energy to the motor, while leaving Exxon to the petro-chemical industry.

  173. wayne davidson:

#169 Your numbers are a bit high, given that production costs would fall if mass production occurs.
Have you factored in revenues?

  174. DocMartyn:

Hi Bruce Tabor, thank you for the equation for calculating the levels of atmospheric CO2 vs. man-made carbon release.

It was a little too complicated for me so I devised my own. Looking at the CO2 data from Hawaii, it is obvious from the saw-tooth pattern of the seasonal levels of CO2 that the rate at which CO2 can be removed from the atmosphere is more than an order of magnitude greater than the slow rise in mean CO2. This means we can perform a steady-state analysis.
    To do this I needed an estimate of the amount of CO2 released by Humans per year.
    (Marland, G., T.A. Boden, and R. J. Andres. 2006. Global, Regional, and National CO2 Emissions. In Trends: A Compendium of Data on Global Change. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tenn., U.S.A.)

I also used the Hawaiian data from 1959 to 2003 (averaging May and November).

The steady-state equation for atmospheric [CO2] in ppm is as follows:-

    [CO2] ppm = (NCO2 + ACO2)/K(efflux)

    Where:-
    NCO2 is the release of CO2 into the atmosphere from non-human sources
    ACO2 is man-made CO2 released from all activities
    and K(efflux) is the rate the CO2 is removed from the atmosphere by all mechanisms.

You can calculate that NCO2 is 21 GT per year, ACO2 in 2003 was 7.303 GT and K(efflux) is 0.076 per year. This last figure gives a half-life for a molecule of CO2 in the atmosphere of 9.12 years.
Running this simulation, with the numbers for ACO2 from Marland et al. up to 2003, reproduces the Hawaiian data from 1959 to 2003 with an average error of 3.95 +/- 2.8 ppm.
It appears that we would hit the magic number of 560 ppm when our species releases slightly more CO2 into the atmosphere than does mother nature - I make it about 21.5 GT per year from human sources and 21 GT natural to hit 560 ppm.
    Please feel free to check my numbers.
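A quick check of these numbers in Python (note the units are loose: for the arithmetic to come out in ppm as stated, the flux terms effectively act as ppm-equivalents per year rather than GT):

```python
import math

n_co2 = 21.0       # non-human source term per year, as estimated above
a_co2 = 7.303      # human emissions in 2003 (Marland et al.)
k_efflux = 0.076   # removal rate, per year

print(f"steady state in 2003:   {(n_co2 + a_co2) / k_efflux:.0f} ppm")
print(f"implied half-life:      {math.log(2) / k_efflux:.2f} years")
print(f"human flux for 560 ppm: {560 * k_efflux - n_co2:.1f} per year")
```
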

  175. John Mashey:

    RE: #172 and earlier discussions on PV, etc, although this also applies somewhat to windpower: for some techs, costs go down, seriously, as unit volumes go up, and manufacturing processes improve.
    If computers were still as expensive as mainframes, we wouldn’t have them in our watches.

    In another thread here, I mentioned a good talk by Charles Gay of Applied Materials regarding cost curves for PV cells.

    I recommend:
    http://www.appliedmaterials.com/news/solar_strategy.html,
    On that page, “what is solar electric” is a fairly simple talk.

    Charles Gay gives a nice talk in
    http://www.appliedmaterials.com/news/assets/solar.wmv
    and the parts showing the new manufacturing machinery are impressive.

    CAVEAT: this is a commercial company of course, but it has a solid reputation as a sober, conservative one.

  176. Dick Veldkamp:

    #167 Windpower (Dan Hughes), #169 (John Wegner)

    Dan, you misinterpret the numbers from the E.ON report. Their installed capacity was 7,050 MW, and average power 1,295 MW (see page 4), giving a capacity factor of 18%. Germany is a special case, lots of turbines are in areas where there’s not much wind, and where turbines never would have been built without the generous feed-in tariff. A more common value is 25% on land (UK and Denmark). The record seems to be 50% in New Zealand.
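For reference, the capacity factor implied by the E.ON figures quoted above:

```python
installed_mw = 7050    # E.ON installed wind capacity (2005 report, p. 4)
average_mw = 1295      # average output over the year

print(f"capacity factor: {average_mw / installed_mw:.0%}")   # 18%
```
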

    By the way, the capacity factor only reflects the nature of the wind speed distribution, and the relative size of rotor and generator, i.e. it does not indicate some physical limit to what a wind turbine can do or something. What matters is the cost per kWh.

    The figure I used (40% offshore) is not unreasonable. I’ll be happy to dig up some more numbers (it’s an interesting matter actually) but I don’t have the time just now.

    Cost of wind turbines on land is commonly cited at EUR 1,000,000 per MW and EUR 1,500,000 per MW offshore.

  177. SecularAnimist:

    John Wegner wrote: “So if you are going to build 1 million 3 MW wind turbines on the ocean, you would need to come up at least $4,500,000,000,000 to build it. Anyone have $4.5 trillion setting around?”

    That’s less than five years of the world’s military expenditures — and less than ten years of the USA’s military expenditures, since the USA accounts for approximately half of the entire world’s military expenditures.

    So, yes, there is that much money “setting around”. It’s a question of priorities.

  178. Dick Veldkamp:

    Re #175 Cost of windpower

    Yes 4.5 trillion is about right. However, total expenditure for wind is the wrong number to look at. It would cost you roughly the same amount if you were to generate the same amount of electricity by conventional means (if you factor in environmental cost that is – if you don’t, conventional is about a factor “cheaper”). What you have to look at is the cost of generating 1 kWh of electricity.

  179. Marcus:

    Doc Martyn: In no way is the atmospheric CO2 concentration in a steady state, which pretty much invalidates your calculation. Human emissions for the past decade have been on the order of 7 GtC/year. The yearly increase in concentration was about 1.9 ppm. 1.9 ppm is about 4 GtC.
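    In numbers, using the commonly cited conversion of roughly 2.13 GtC per ppm of atmospheric CO2 (the exact factor is an assumption here, not taken from the comment):

```python
GTC_PER_PPM = 2.13  # approximate atmospheric conversion factor (assumed)

def ppm_to_gtc(ppm):
    """Convert an atmospheric CO2 mixing-ratio change (ppm) to gigatonnes of carbon."""
    return ppm * GTC_PER_PPM

airborne = ppm_to_gtc(1.9)                   # ~4 GtC/yr stays in the atmosphere
emitted = 7.0                                # GtC/yr of human emissions
absorbed_fraction = 1 - airborne / emitted   # sinks take up very roughly half
```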

    Of course, I don’t really like the idea of calculating a “carbon lifetime”. You really want an ecosystem model and an ocean model of at least 2 layers (surface and deep ocean). Then you could have exchange rates of the surface layer and the ecosystem with the atmosphere, and of the surface and deep oceans, and you could hypothetically model a pulse of CO2 emissions and back out a “time to 50% reduction”… except that in the real world, the ecosystem is very sensitive to temperature and precipitation, and the ocean mixing may or may not slow down enough to slow exchange of CO2 into the deep ocean…

    And in the long term, human emissions would have to drop to ZERO in order to stabilize concentrations, because the deep ocean will eventually reach equilibrium with the surface layers. This differs from gases like methane which do have a “real” lifetime due to reactions with hydroxyl radicals, where for any given steady state anthropogenic methane emissions are, a stabilized concentration value will eventually be reached.

  180. tamino:

    Re: #170 (Heidi)

    I can see at least two potential inconsistencies in your analysis. First, for observed temperatures you used the difference in moving averages (what was the size of your averaging window?), but it appears that to gauge the predictions you used the point-to-point temperature difference between 1984 and 2006. Second, as Gavin pointed out, the land-ocean temperature index tends to underestimate the truth because it’s based on sea surface temperature rather than air temperature, while the meteorological-station index temperature tends to overestimate the truth because land warms faster than ocean. The truth probably lies somewhere between.

    Gavin, can you post the actual *numbers* for the predictions for scenarios A, B, C, so we can compare them to observation numerically using the same procedures?

  181. DocMartyn:

    Marcus, why do you assume that one can’t use steady state calculations for modeling the levels of atmospheric CO2?

    You stated
    “The yearly increase in concentration were about 1.9 ppm. 1.9 ppm is about 4 GtC.”
    To which I say: big deal. The yearly summer/winter difference is greater than this amount. The saw-tooth pattern shown in the atmospheric CO2 record of the study in Hawaii quite clearly shows that the rate at which CO2 is absorbed during the Northern Hemisphere summer months is much greater than the overall trend.

    Please state why I cannot use steady state kinetics? My fit is much better than most and I have based it on a single set of CO2 emission data.

    Please [edit] explain your reasoning.

  182. Steve Bloom:

    The discussion about alternative power is a) interesting but b) off-topic and thus has a bad effect in terms of the utility of this comment thread. This sort of thing happens a lot, so I would like to suggest to the RC team that a (weekly?) open thread be maintained. John Quiggin does this and it seems to work quite well.

  183. Timothy Chase:

    Re Doc (#174):

    Atmospheric CO2 half-life? There isn’t any.

    There are a variety of sinks for carbon dioxide, including the soil, the ocean and the biosphere. Each of these will act as sinks at different rates. Moreover, the ocean (which has been responsible for absorbing as much as 80% of anthropogenic emissions) can become saturated, or, as temperatures rise in the temperate regions or winds increase in arctic regions and stir up carbon dioxide from below, act as an emitter. Likewise, CO2 which is absorbed at the surface of the ocean will lower the pH, and a lower pH makes the ocean less able to absorb carbon dioxide.

    Several chemical, physical and biological factors have the potential to affect the uptake of CO2 by the oceans (Houghton et al 2001). Chemical processes that may affect CO2 uptake include changes to the CO2 buffering capacity (Sarmiento et al 1995) and the effects of temperature on CO2 solubility. Physical factors that affect uptake include increased ocean stratification due to increasing global temperatures. Warming of the oceans leads to increased vertical stratification (decreased mixing between the different levels in the oceans), which would reduce CO2 uptake, in effect, reducing the oceanic volume available to CO2 absorption from the atmosphere. Stratification will reduce the return flow of both carbon and nutrients from the deep oceans to the surface.

    Ocean acidification due to increasing atmospheric carbon dioxide
    June 2005 | The Royal Society

    It would be reasonable to expect carbon dioxide to have a half-life if there were only one sink and the absorption of CO2 at a later point were independent of the CO2 which had been absorbed previously. Moreover, heat and drought have been reducing the ability of plants to absorb CO2. As such, there can be no constant “half-life” of atmospheric CO2.

  184. Bruce Tabor:

    Re 162 G’day Jim,

    I agree.

    For fun (!) I tried an ARIMA model (autoregressive integrated moving average) on the 1958 to 1988 data, forecasting to 2006. ARIMA models enable the capture of autocorrelated structure in the data.

    A simple model that fits is an ARIMA(0,1,1) model. It would take me forever to explain this but basically it involves analysing the change in data from year to year (the data is differenced – to do the forecasts it must be integrated (I)) using a moving average (MA) term but no autoregressive (AR) term.
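    For anyone wanting to reproduce this: ARIMA(0,1,1) point forecasts coincide with simple exponential smoothing applied to the undifferenced series, so a minimal pure-Python sketch needs only a smoothing parameter (here supplied by hand rather than fitted by maximum likelihood, as a proper ARIMA routine would do):

```python
def ses_forecast(series, alpha):
    """One-step-ahead forecast via simple exponential smoothing.

    ARIMA(0,1,1) point forecasts coincide with SES on the original
    (undifferenced) series, with alpha = 1 + theta, theta the MA coefficient.
    """
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level  # the forecast at every future horizon is this flat value
```

    With alpha = 1 this reduces to a naive last-value forecast; smaller alpha weights the history more heavily.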

    The forecast for 2005 is 0.40 C with a 95% CI of 0.01 to 0.79. The point estimate for the ARIMA model is similar to the OLS (0.35 C) but the 95% CI is much broader (0.21 to 0.49 for OLS).

    ARIMA models that fit the data from 1880 to 1988 have a very complex covariance structure, which results in very wide CIs for the 2005 forecasts.

    To me this says that this type of modelling is not a particularly useful approach to this problem. Much better to base the model on the underlying physics.

  185. DocMartyn:

    “Each of these will act as sinks at different rates.”

    Which is why you use steady state kinetics.

    “Moreover, the ocean (which has been responsible for absorbing as much as 80% of anthropogenic emissions)”

    How does the ocean know which CO2 molecule is anthropogenic and which is natural?

    You say 80%, but what does that mean? It’s a number; we are attempting to establish the FLUXES between two or more systems, i.e. the ocean and the atmosphere. A number like 80% does not mean anything; only rates and steady-state levels are important.

    “can become saturated”

    How do you know? Have you ever looked at what happens when you raise the pCO2 above a large mass of deep water? If you, or someone you have read, has performed the experiment, show the reference.

    “can become saturated” based on what exactly? Chemistry, physics or conjecture?

    “as temperatures rise in the temperate regions or winds increase in arctic regions and stir up carbon dioxide from below, act as an emitter”

    “Stir up CO2 from below.”

    If I were to raise a billion metric tons of sea water from the depths and dump it on the surface, why would the overall effect be the release of CO2? You may think that the warming of this volume of water from cold to normal temperature will release CO2. However, what about the effects when you consider what this water also contains. If you stir up water from the depths you bring up nutrients. Ocean food chains are pretty much nutrient limited, especially in things like iron and copper. Stirring up nutrients will lead to more photosynthesis.
    Moreover, the abstract you cited states that there will be less deep-ocean-to-surface current. So which is it: more deep water rising, or less?
    Data, not superstition.

  186. Eli Rabett:

    There is considerable confusion here about terminology. Half-life is formally defined as the time necessary to reduce the current value of something to half its value. For first-order decays (think simple radioactive decay), where the rate constant k does not change over time, the rate of change of x with respect to time, dx/dt (in other words, how fast it decays), is

    dx/dt = -k x, and the half-life is t1/2 = ln2/k, where ln2 = 0.693;

    the half-life is independent of the initial value. For other types of simple kinetic systems the half-life depends on the initial value; for example, in a second-order decay

    dx/dt = -k x^2, the half-life is t1/2 = 1/(k x0), where x0 is the initial value.

    If you let k vary with some parameter or the rate of change of x with time depends on a complex mechanism, the half life will turn out to depend on many things.

    A useful discussion of how half lives depend on kinetic mechanisms can be found in what is usually Chapter 14 of most General Chemistry textbooks. What Timothy Chase appears to be saying is that CO2 kinetics are not simple first order. That is true.
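    These formulas are easy to verify numerically; a crude Euler-integration sketch (the rate constant and step size here are arbitrary choices for illustration):

```python
import math

def time_to_half(rate_fn, x0, dt=1e-4):
    """Integrate dx/dt = rate_fn(x) forward until x falls to half its initial value."""
    x, t = x0, 0.0
    while x > x0 / 2:
        x += dt * rate_fn(x)
        t += dt
    return t

k = 0.1
t_first = time_to_half(lambda x: -k * x, 1.0)       # first order: ~ln(2)/k
t_second = time_to_half(lambda x: -k * x**2, 1.0)   # second order: ~1/(k*x0)
```

    Doubling x0 leaves t_first unchanged but halves t_second, which is the distinction being made above.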

  187. Grant Bussell:

    So does that mean a 1 degree increase by around 2015?

    Most of this is beyond me but would appreciate a layman’s conclusion?

    Recently the papers are saying the oceans are using up their CO2 capacity, will this affect the models?

    Thanks,

    Grant

  188. Eli Rabett:

    Flow of carbon (in the form of CO2, plants and soils, etc.) into and out of the atmosphere is treated in what are called box models. There are three boxes which can rapidly (5-10 years) interchange carbon: the atmosphere, the upper oceans, and the land. The annual cycle seen in the Mauna Loa record is a flow of CO2 out of the atmosphere and into the land (plants) in the summer as the Northern Hemisphere blooms (the South is pretty much green all year long), and in reverse in the winter as plants decay. Think of it as tossing the carbon ball back and forth, but not dropping it into the drain. Thus Doc Martyn’s model says nothing about how long it would take for an increase in CO2 in the atmosphere to be reduced to its original value.

    To find that, we have to have a place to “hide” the carbon for long times, e.g. boxes where there is a much slower interchange of carbon with the first three. The first is the deep ocean. Carbon is carried into the deep ocean by the sinking of dead animals and plants from the upper ocean (the biological pump). This deep ocean reservoir exchanges carbon with the surface on time scales of hundreds of years. Moreover the amount of carbon in the deep ocean is more than ten times greater than that of the three surface reservoirs.

    The second is the incorporation of carbonates (from shells and such) into the lithosphere at deep ocean ridges. That carbon is REALLY lost for a long long time.
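    To make the “tossing the ball” point concrete, here is a deliberately toy two-reservoir sketch in Python; the rate constants are invented for illustration and are not from any calibrated carbon-cycle model:

```python
# Toy two-reservoir model: a combined fast pool (atmosphere + upper ocean +
# land, which exchange carbon among themselves quickly) leaking slowly into
# the deep ocean. Rate constants below are invented, purely illustrative.
K_FAST_TO_DEEP = 0.005   # per year
K_DEEP_TO_FAST = 0.0005  # per year (deep pool ~10x larger at equilibrium)

def run_pulse(pulse, years, dt=0.1):
    """Return how much of an initial pulse remains in the fast pool after `years`."""
    fast, deep = pulse, 0.0
    for _ in range(int(years / dt)):
        flux = K_FAST_TO_DEEP * fast - K_DEEP_TO_FAST * deep
        fast -= flux * dt
        deep += flux * dt
    return fast

# Even after ~10 of Doc Martyn's "half-lives" (90 years), most of the pulse
# is still in the fast pool: the ball is being tossed back and forth among
# the fast boxes, not dropped down the drain.
remaining = run_pulse(100.0, 90.0)
```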

    A good picture of the process can be found at
    http://earthobservatory.nasa.gov/Library/CarbonCycle/Images/carbon_cycle_diagram.jpg

    A simple discussions of box models can be found at
    http://www.nd.edu/~enviro/pdf/Carbon_Cycle.pdf

    And David Archer has provided a box model that can be run online at
    http://geosci.uchicago.edu/~archer/cgimodels/isam.html

    and here is a homework assignment
    http://shadow.eas.gatech.edu/~jean/paleo/sets/undergrads1.pdf

  189. Josh:

    180+ comments and counting – pretty sure mine will get lost, but what the heck…

    What I’m most curious about is the model’s accuracy given EXACT data. That is, I would like to take Dr. Hansen’s model and parameters from 1984 and run it with the exact data for the relevant independent variables – not guesses by Dr. Hansen – and see how close the model comes. I would think this should eliminate the high-frequency noise that is quite noticeable from the graphs and should therefore be very accurate for every data point.

    The point is that of course the actual independent variables (i.e. the ones that are inputs to rather than outputs from the model) will differ from predictions. But given that the predictions were right, would the model have been as accurate as, say, Newton’s laws at predicting future behavior? This is the falsifiable test, and I’m having a hard time finding out when and where it is being done for existing models. If it is being done, I would appreciate some further reading on what is done and how it turned out.

  190. Barton Paul Levenson:

    [[But I do know how science works, and it doesn't work when christian libertarian scientists sit back in their armchairs, and by merely lifting a little finger, are able to pronounce that markets will accomplish the global implementation of alternative energy, and that no effort by christian libertarian scientists is necessary at all. ]]

    Gavin et al., I consider this a strike at my religion, if not my politics (I left the Libertarian Party when they wouldn’t endorse Desert Storm in 1991). If it makes it easier to visualize, make Elifritz’s term “Jewish liberal scientists,” or “Black radical scientists.”

  191. Barton Paul Levenson:

    [[The seasonal variation of temperature increase caused by general increase in solar luminosity (as some claim is the major cause of GW) would be different from that due to CO2, wouldn't it, since the mechanism (direct thermal equilibrium with radiation v. absorption of longer-wave IR) is different? Who is looking at that?]]

    Neil — you’re right that the mechanism of global warming by increased sunlight would be different. First, the stratosphere would be warming, since sunlight first gets absorbed there (in the UV range, by oxygen and ozone). Second, the equator would be heating faster than the poles, by Lambert’s cosine law.

    Instead, the opposite is observed in each case. The stratosphere is cooling, which is predictable from increased greenhouse gases since “carbon dioxide cooling” dominates over heating in the stratosphere. And the poles are heating faster than the tropics — “polar amplification” — which was also predicted by the modelers.

  192. Barton Paul Levenson:

    [[The sun doesn't shine and the wind doesn't blow all the time. ]]

    Actually, fossil-fuel and nuclear power plants aren’t on-line 24/7 either. And note that when the sun doesn’t shine, the wind still blows, and vice versa. And note that there are many ways to store excess power for use during non-peak-input times. Solar thermal plants can store it in molten salt. Windmills and photovoltaics can store it in storage batteries, or by compressing gases in pistons, or in flywheels, or by pumping water uphill to run downhill through a turbine later. We don’t need new technology to start getting a large fraction (and eventually all) of our power from renewables. We need the political will to start building.

  193. Barton Paul Levenson:

    [[Regarding Wind Power,
    Each MW costs about $1.5 million in capital costs on land. I imagine, sea-based windmills would be expensive.
    So if you are going to build 1 million 3 MW wind turbines on the ocean, you would need to come up at least $4,500,000,000,000 to build it. Anyone have $4.5 trillion setting around?
    ]]

    How much is spent around the world on building new power plants each year? The $4.5 trillion you’re talking about wouldn’t be a cost on top of all our other costs, it would be a cost in place of another cost.

  194. DocMartyn:

    “Thus Doc Martyn’s model says nothing about how long it would take for an increase in CO2 in the atmosphere to be reduced to its original value.”

    Well actually it does, assuming that at Y=0, atmos. [CO2] = 373 ppm and all man made CO2 generation ended. It takes one half-life, Y=10, for CO2 to fall from 373 to 321 ppm, at Y=18 it falls to 300 ppm and at Y=27 [CO2] is 288 ppm. After four half-lives, 36 years it falls to 282 ppm.

  195. Eli Rabett:

    Neil (#165) a solar mechanism would heat the stratosphere and the troposphere and the surface. Greenhouse gas warming warms the troposphere and the surface and cools the stratosphere. That was one of the first things people looked at and, guess what, the stratosphere is cooling. Also look at Elmar Uherek’s excellent explanation

  196. James:

    Re #191 [Actually, fossil-fuel and nuclear power plants aren't on-line 24/7 either.]

    But there’s no reason for all the FF & nuclear plants serving an area to go off-line at the same time. If you have cloudy conditions, or at night, all solar output will drop over wide areas. Likewise wind depends on weather systems: get a high pressure system sitting over e.g. the west coast, and the regional wind power output would drop.

    [And note that when the sun doesn't shine, the wind still blows, and vice versa.]

    Not necessarily: around here the wind tends to blow strongly during the afternoon and evening. Midnight to noon will generally be calm. Then there are seasonal variations in the pattern: we see much stronger wind from late winter to early summer than in the rest of the year.

    [And note that there are many ways to store excess power for use during non-peak-input times.]

    All of which A) require significant investment in the equipment to do the storage; and B) lose significant amounts of energy in the storage cycle. This significantly raises your cost per MWh from these sources. Check out the difference in cost between a grid-connected home solar array, and an off-the-grid system, for instance.

  197. pat n:

    Re: #65

    The seasonal variation of temperature increase caused by general increase in solar luminosity (as some claim is the major cause of GW) would be different from that due to CO2, wouldn’t it, since the mechanism (direct thermal equilibrium with radiation v. absorption of longer-wave IR) is different? Who is looking at that?

    I have been looking at that in doing plots of mean temperatures at climate stations in the U.S. (1890s to current). The most warming has been occurring in the Upper Midwest, Alaska and the upper-elevation stations in New Mexico, Colorado, Wyoming and Montana.

    I’ve also been looking at average overnight minimum temperatures vs average daily maximums. As expected with greenhouse warming, the minimums are increasing faster than the maximums. Increasing humidity is a factor in that too.

    Plots at:

    http://new.photos.yahoo.com/patneuman2000/albums

    http://npat1.newsvine.com/

    http://www.mnforsustain.org/climate_snowmelt_dewpoints_minnesota_neuman.htm

  198. Hank Roberts:

    > one half-life, Y=10, for CO2
    This combines several different assumptions. Have you read the “Science Fiction Atmospheres” piece?

  199. Michael Tobis:

    It seems like it’s time for someone to restate the obvious, so I’ll volunteer.

    Of course alternatives to fossil fuels cost more than fossil fuels do, provided you neglect the environmental impact of fossil fuels. If this were not the case, the other sources of power would already be dominant.

    Unfortunately, the marketplace as presently constituted does not adequately account for the damage due to fossil fuels.

    The fact that carbon is not already largely phased out is a simple example of the tragedy of the commons. In the global aggregate, fossil fuel use is much more expensive than it appears. It’s just that you are extracting wealth from a common pool every time you use them, rather than from your own resources. The commons is the climate system. So each of us, when we maximize individual utility in our energy decisions, reduces the viability of the world as a whole, by extracting value from the climate resource held in common.

    The simplest solution is to increase the cost to the consumer of carbon emissions to the point where these externalities are accounted for, and wait for alternatives to emerge from private enterprise. Since it would be counterproductive to increase the profits of the producer, this amounts to a tax. Public revenues can be held constant, if so desired, by reducing other taxes.

    The devil is in the details of course, but the big picture is really not all that complicated.

  200. Julian Flood:

    Ref: saturation of Antarctic ocean.

    I can’t find a decent summary of the research. Could someone kindly tell me what wind data they used. If the warming scenarios are correct then I would have expected wind speeds to decrease as the driver, temperature difference, will also be reduced between the tropics and the poles.

    My bet is that surface pollution is the culprit, not wind — all CO2 mechanisms are disrupted by this.

    Is there an isotope signal from this new outgassing?

    JF

  201. Timothy Chase:

    Josh (#189) wrote:

    180+ comments and counting – pretty sure mine will get lost, but what the heck…

    The point is that of course the actual independent variables (i.e. the ones that are inputs to rather than outputs from the model) will differ from predictions. But given that the predictions were right, would the model have been as accurate as, say, Newton’s laws at predicting future behavior? This is the falsifiable test, and I’m having a hard time finding out when and where it is being done for existing models. If it is being done, I would appreciate some further reading on what is done and how it turned out.

    (emphasis added)

    Did someone bring up Karl Popper’s Principle of Falsifiability?

    Well, I am not going to post the full analysis here, but I will provide a link to what I have posted elsewhere…

    Post subject: Reference – The Refutation of Karl Popper, etc.
    Author: Timothy Chase
    Posted: Wed Nov 08, 2006 3:00 pm
    http://bcseweb.org.uk/forum/viewtopic.php?p=1943#1943

    Karl Popper’s principle of falsifiability fails because of epistemic problems first identified in 1890 by Pierre Duhem involving the interdependence of scientific knowledge, including scientific theories. (You might have noticed before that one point I have stressed is, “Empirical science is a unity because reality is a unity.”) Scientific theories are not strictly falsifiable, but they are testable, and as such they are capable of receiving both confirmation (or justification) and disconfirmation. An alternative along these lines which seems fairly popular nowadays is known as Bayesian calculus: if a given theory is viewed as relatively unlikely but makes a prediction which is also seen as highly improbable (given the rest of what we believe we know), then when that prediction is borne out the theory is viewed as more likely, and the alternatives to it which predicted something else are viewed as less likely, including the alternative which was dominant.

    This is of course related to the issue of the necessity of knowledge which isn’t strictly provable and that may have to be abandoned at some later point due to additional evidence which we may acquire. Such knowledge is referred to as being “corrigible,” and I brought this up a little bit ago.

    In any case, it appears that some form of the Bayesian approach is being applied in the context of climate modeling, and this would seem to be entirely appropriate.

  202. Figen Mekik:

    Re #194: I’m sorry, I’m confused. CO2 is not radioactive, is it? So how can we talk about half-lives? Does CO2 decay exponentially, and if so, to what?

  203. Timothy Chase:

    Eli Rabett (#186) wrote:

    There is considerable confusion here about terminology. Half-life is formally defined as the time necessary to reduce the current value of something to half its value. For first order decays (think simple radioactive decay) where the rate constant k does not change over time the rate of change of x with respect to time, dx/dt, in other words how fast it decays,…

    Thank you for the clarification – and the links. I will be reading the material you have made available. However, as far as I can tell, this doesn’t exactly help DocMartyn much. Quite the opposite, in fact.

  204. Timothy Chase:

    Julian Flood (#200) wrote:

    Ref: saturation of Antarctic ocean.
    I can’t find a decent summary of the research. Could someone kindly tell me what wind data they used. If the warming scenarios are correct then I would have expected wind speeds to decrease as the driver, temperature difference, will also be reduced between the tropics and the poles.

    The article itself is:

    Saturation of the Southern ocean CO2 sink due to recent climate change
    Corinne Le Quere, Christian Rodenbeck, Erik T. Buitenhuis, Thomas J. Conway, Ray Langenfelds, Antony Gomez, Casper Labuschagne, Michel Ramonet, Takakiyo Nakazawa, Nicolas Metzl, Nathan Gillett, and Martin Heimann
    Science. 2007 May 17; [Epub ahead of print]
    http://www.sciencemag.org/cgi/content/abstract/1136188v1?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=Labuschagne&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT

    I believe one of the more relevant parts to the concern you expressed is:

    We performed two additional simulations. First the winds alone were kept constant at year 1967, but the heat and water fluxes were allowed to vary interannually. Results from this simulation show a trend in sea-air CO2 flux that is close to the simulation where both winds and fluxes are kept constant (-0.034 compared to -0.046 PgC/y per decade). Second the winds were kept constant in the formulation of the gas exchange only, but were allowed to vary in the physical model. The difference in trend with the variable gas exchange was very small (<3%). The results of the process model suggest that the changes in the CO2 sink are dominated by the impact of changes in physical mixing and upwelling driven by changes in the winds on the natural carbon cycle in the ocean (18; Fig. S5), as suggested by the positive correlation between the inversion and the SAM. The process model also shows that the acidification of the surface ocean is accelerated by this process (18; Fig. S5).

    This is followed by the authors’ conclusion that while simple models (which consider only carbon chemistry) predict that the ocean will take up 70-80% of the carbon dioxide we emit, the long-term equilibrium level of atmospheric CO2 will quite possibly be considerably higher than those models would suggest, given the changes to ocean circulation. This will in part be due to winds in the Southern Ocean increasing throughout this century.

    They write:

    On a multi-century time-scale, results of simple models based on well-known carbon chemistry show that the ocean should take up 70-80% of all the anthropogenic CO2 emitted to the atmosphere (22). This estimate takes into account changes in carbon chemistry, but not the physical response of the natural carbon cycle to changes in atmospheric forcing. In the past, the marine carbon cycle has responded to circulation changes and cooling during glaciations by taking up enough carbon to lower atmospheric CO2 by 80-100 ppm (9). Changes in Southern Ocean circulation resulting from changes in Southern Ocean winds (23) or buoyancy fluxes (24) have been identified as the dominant cause of atmospheric CO2 changes (9,10,25). Here we showed that the Southern Ocean is responding to changes in winds over a much shorter time-scale, thus suggesting that the long-term equilibration of atmospheric CO2 in the future could occur at a level that is tens of ppm higher than that predicted when considering carbon chemistry alone.

    Observations suggest that the trend in the Southern Ocean winds may be a consequence of the depletion of stratospheric ozone (26). Models suggest that part of the trend may also be caused by changes in surface temperature gradients resulting from global warming (27-28). Climate models project a continued intensification in the Southern Ocean winds throughout the 21st century if atmospheric CO2 continues to increase (28). The ocean CO2 sink will persist as long as atmospheric CO2 increases, but (i) the fraction of the CO2 emissions that the ocean is able to absorb may decrease if the observed intensification of the Southern Ocean winds continues in the future, and (ii) the level at which atmospheric CO2 will stabilize on a multi-century time-scale may be higher if natural CO2 is outgassed from the Southern Ocean.

  205. Timothy Chase:

    Figen Mekik (#202):

    Re #194: I’m sorry, I’m confused. CO2 is not radioactive, is it? So how can we talk about half-lives? Does CO2 decay exponentially, and if so, to what?

    No, it is not a matter of it being radioactive. The question is: when additional CO2 enters the atmosphere, how long will it take for half of that CO2 to be removed from the atmosphere, and can the rate at which it is removed be modeled in the way that DocMartyn claims it can? (The short answer to the latter is, “No, it cannot.”)

  206. ray ladbury:

    DocMartyn, What you seemingly fail to understand is that most of the carbon goes into the oceans, and if it goes into the oceans, most of it will eventually come back. So while a carbon atom may last about 10 years, it will likely be replaced by another carbon atom. While the oceans are now a net sink of carbon, their ability to absorb our emissions is decreasing as they warm, and at some point they may become a net carbon source. The dynamics of the atmosphere-ocean interaction are sufficiently well understood that it is very unlikely that any new information will drastically change our understanding of climate. Scientists have been researching the solubility of CO2 in aqueous solutions for centuries–mostly in taverns near the universities.

  207. tamino:

    Re: #202

    No, CO2 is not radioactive. The phrase “half life” is being used to denote the time it takes for half of the CO2 added to the atmosphere to be removed by natural processes.

  208. Jim Manzi:

    Re: 189

    Josh,

    I agree that this should be done.

    I think the obvious way to do it would be to escrow the exact code and operational scripts at the time of prediction, then at the time of evaluation load all best-available current data on forcings and other model inputs, and then measure forecast accuracy. The group that escrows the code and runs the model at the time of evaluation should be different from the group that produced the forecast.

    What Hansen, and by extension Gavin, did for this forecast was a crude version of this approach (though much closer to this than I’ve seen any other modeling group do). There is a group that has been testing the role that ensembles of GCMs can play in improving seasonal-level forecasts, but I have looked as hard as I can, and I have not found rigorous forecast falsification for multi-annual or multi-decadal forecasts.

    It is surprising that such a structured and systematic effort is not part of the overall climate modeling process, as it is (in my experience, at least) very difficult to maintain a focus on performance in a predictive modeling community without it.

  209. Hank Roberts:

    Figen, this may help. The “half life of CO2” appears to be a talking point from the PR side; looking quickly with Google Scholar, I didn’t find any scientific use of the term before it showed up, though I certainly could have missed something.

    Plain Google found it:
    http://www.google.com/search?q=%22half-life+of+CO2+in+the+atmosphere%2

    David Archer, in answering an earlier question about that on RC, points to this:
    http://www.realclimate.org/index.php/archives/2005/03/how-long-will-global-warming-last/

  210. Figen Mekik:

    Thanks to all for your kind replies :)

  211. pat n:

    Re: 165 In case it was missed, my reply to a question by Neil Bates on who is looking at seasonal variation of temperature increase is in #197.

  212. ray ladbury:

    The problem with talking about a “half-life” is that it assumes that the parent (in this case CO2) can be transformed into some inactive daughter product (bound CO2?, coal? clathrates?…). The problem is that the transformation of CO2 into biomass, its absorption by the oceans and all the other sinks (except for geologic processes that take a VERY long time) is a storage, not a true transformation. Biomass decays and releases CH4 and CO2 on a timescale of decades. Most CO2 absorbed by the oceans stays in the upper portion of the ocean–there’s just no way for it to transport to the depths. So, while at present more CO2 is absorbed than is released, this cannot be taken for granted, especially as the oceans warm. We are currently taking massive amounts of carbon that were removed from the atmosphere hundreds of millions of years ago and burning it. That has to have an effect!

  213. Timothy Chase:

    Jim Manzi (#208) wrote:

    … I have looked as hard as I can, and I have not found rigorous forecast falsification for multi-annual or multi-decadal forecasts.

    It is surprising that such a structured and systematic effort is not part of the overall climate modeling process, as it is (in my experience, at least) very difficult to maintain a focus on performance in a predictive modeling community without it.

    As I point out in the piece I linked to:

    Post subject: Reference – The Refutation of Karl Popper, etc.
    Author: Timothy Chase
    Posted: Wed Nov 08, 2006 3:00 pm
    http://bcseweb.org.uk/forum/viewtopic.php?p=1943#1943

    … even Newton’s gravitational theory wasn’t falsifiable in the strictest sense.

    There is an interdependence which exists between different scientific theories such that one cannot test more advanced theories without presupposing the truth of more basic and well-established theories in order to make the move from the theory itself to specific, testable predictions. For example, in order to test the prediction that light from a distant star would be “bent” by the gravitational field of the sun, one had to make certain assumptions regarding the behavior of light – namely, that it takes the “straightest” path between any two points. One would have to make certain assumptions regarding how optical devices would affect the behavior of light. One would have to assume that there weren’t certain unknown effects due to either the atmosphere of the sun or the earth which would distort the results.

    Typically, when one seeks to test any given advanced theory, while it is in the foreground as that which one intends to test, there are a fair number of background theories, the truth of which one is also presupposing. If the results of one’s test result in disconfirmation, the fault may lie in the theory which one is testing, but at least in principle, it may lie with one of the background theories, the truth of which one has presupposed. Likewise, if the results of one’s test result in confirmation, it may very well be the case that the prediction itself was derived from premises in the theory which are in fact false, but that premises in one of the background theories were also false, and that one logically derived a true prediction from two false premises.

    What is required is testability, and what results from such tests is confirmation or disconfirmation, not falsification. Karl Popper’s principle of falsifiability belongs to the history of the philosophy of science, being “cutting-edge” more or less in the 1940s, but is no longer taken seriously in the philosophy of science or in the cooperative endeavor of empirical science.

    *

    As an aside, from what I have seen, the conclusions of the IPCC WG1 AR 4 2007 (at least with regard to the basics) have considerably more justification than what they are claiming. The point at which their conclusions are weakest is in their inability to take into account all of the positive feedbacks. There are of course negative feedbacks which tend to stabilize the system so long as it remains within the local attractor – but we have moved well outside of this, and at this point what we are finding is considerable net positive feedback, not negative feedback.

    However, much of this still needs to be incorporated into the models. As such, we are already discovering in numerous areas that they have underestimated the effects of anthropogenic climate change.

  214. Timothy Chase:

    Re Jim Manzi (#208)

    Jim,

    I hope you don’t mind a PS to what I just wrote, but you have gotten me curious…

    You speak of the procedures of falsificationist, multi-decadal, predictive modeling communities you have had experience with – and how they seal things in vaults using different independent teams. Given what you have described, this would sound like something which has to be fairly low volume – as I don’t know of that many places where modeling is done for periods of several decades only to seal the results until the allotted number of years are up.

    Is this part of a newer, more long-range approach being applied within certain business communities? If so, it sounds like the use of such methods is almost as well-guarded as that which it is intended to ensure.

    What are the complexities of the calculations you are speaking of? Given that you describe it along falsificationist lines, it would almost seem that the systems would have to be of relatively low complexity. Are the calculations at all comparable to the climate modeling performed at the NEC Earth Simulator a few years back? How long have they been doing this sort of thing?

    In any case, it sounds like the teams of modelers you have worked with have set some pretty high goalposts for themselves. No doubt there is a certain sense of accomplishment one gets from working with people of such dedication.

  215. Marion Delgado:

    Blair Dowden and Timothy Chase:

    Your follow-ups on the Southern Ocean paper were tremendous, but I want to make it clear to people who might be just skimming that:

    Alastair and I are talking about two different processes, for two different reasons, mostly in two opposite places. His is still largely in the future and we hope won’t happen, but mine is inching its way towards being here and now.

    The connection is that in either event, however you get there, once the ocean stops taking up CO2 you have a bigger problem. Or rather problems.

    So, assume the Southern Ocean basically winds down soon to where, under current emission conditions, it absorbs no more CO2. How long before it’s releasing more CO2 than it absorbs?

    Now assume the arctic ice is gone and the THC is no longer submerging CO2. What part of the ocean is left as a net absorber? How long before the oceans, on average, are releasing more CO2 than they absorb?

    And these questions are against a backdrop of increasing temperatures and increasing emissions.

    Throw in the possibility (weighted, perhaps) of ice sheet breakups in Greenland and Antarctica, and how do our “Scenarios A B and C” look for the FUTURE now?

    If IPCC can’t consider all that, who should?

  216. Barton Paul Levenson:

    [[Well actually it does, assuming that at Y=0, atmos. [CO2] = 373 ppm and all man made CO2 generation ended. It takes one half-life, Y=10, for CO2 to fall from 373 to 321 ppm, at Y=18 it falls to 300 ppm and at Y=27 [CO2] is 288 ppm. After four half-lives, 36 years it falls to 282 ppm. ]]

    Now try it with a half-life of 200 years, which would be more accurate.
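    Barton’s point can be made quantitative with a few lines of arithmetic. The sketch below is a hypothetical toy, assuming a pre-industrial baseline of 280 ppm and a single exponential decay of the excess (the very simplification this thread argues is unphysical); it only shows how strongly the answer depends on the assumed half-life:

```python
def co2_after(years, half_life, c0=373.0, baseline=280.0):
    """CO2 concentration (ppm) if the excess above a pre-industrial
    baseline decayed as a single exponential with the given half-life.
    The thread argues this one-way decay picture is NOT realistic."""
    excess = c0 - baseline
    return baseline + excess * 0.5 ** (years / half_life)

# Doc's assumed 10-year half-life: most of the excess is gone within decades.
print(round(co2_after(36, half_life=10), 1))    # -> 287.7

# Barton's suggested ~200-year half-life: barely any decline in 36 years.
print(round(co2_after(36, half_life=200), 1))   # -> 362.1
```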

  217. Barton Paul Levenson:

    [[[And note that there are many ways to store excess power for use during non-peak-input times.]
    All of which A) require significant investment in the equipment to do the storage; and B) lose significant amounts of energy in the storage cycle. This significantly raises your cost per MWh from these sources.
    ]]

    Except that the cost per MWh from fossil fuels doesn’t take into account the economic damage they do through pollution.

  218. Barton Paul Levenson:

    [[Re #194: I'm sorry, I'm confused. CO2 is not radioactive, is it? So how can we talk about half-lives? Does CO2 decay exponentially, and if so to what ?? ]]

    They’re talking about the time it takes for half the CO2 in the air to be absorbed by sinks like the ocean.

  219. Timothy Chase:

    Blair Dowden (#148) wrote:

    My thoughts on this are:
    1) The cause is said to be ozone depletion, only indirectly linked to global warming.

    Yes, it appears to be due to changes in atmospheric circulation which are themselves the result of changes due to the increased levels of carbon dioxide – similar I suppose to the changes we are observing in the NW Pacific.

    For an older paper along these lines, please see:

    Annular Modes in the Extratropical Circulation. Part II: Trends
    David W. J. Thompson, John M. Wallace, and Gabriele C. Hegerl
    Journal of Climate, American Meteorological Society
    Volume 13, Issue 5 (March 2000) pp. 1018–1036
    http://ams.allenpress.com/perlserv/?request=get-document&doi=10.1175%2F1520-0442(2000)013%3C1018%3AAMITEC%3E2.0.CO%3B2

    2) Are wind speeds supposed to increase over most of the oceans? I think we are talking about a local effect here that should not be extrapolated.

    The focus of the paper is on the Antarctic ocean. As I remember, there are some debates regarding other regions. However, as you note below, it is the arctic and antarctic regions which are principally responsible for the ocean acting as a sink due to absorption by colder water.

    3) Why does ocean churn reduce CO2 uptake? I thought CO2 absorption was slowed down by the lack of mixing with lower ocean layers.

    As I understand it, the churning is bringing up rich organic material, and due to the process of organic decay and the action of churning, this material releases both methane and carbon dioxide. Churning, I would presume, is not the same thing as deep, slower convection. Churning is driven by the wind, and is no doubt more chaotic. Slow convection brings water from farther down and is less likely to break up organic material. To the extent that churning is closer to the surface, it will involve water that is already largely saturated with carbon dioxide, and thus will be far less effective in the absorption of carbon dioxide. The churning of organic material near the surface makes things much worse.

    Later they say “An increase in global temperature is predicted to worsen the effect, since warmer waters hold less gas.” This makes sense, but the question is by how much. This paper does not address that issue.

    As we have only recently observed the phenomenon, having previously expected something along these lines nearer the middle of this century, it would seem that “how much,” and how it depends on variables including but not limited to temperature and wind speed, will be a matter for further investigation. In any case, we appear to have underestimated how soon this would begin.

  220. Timothy Chase:

    Re Marion Delgado (#215)

    As I understand it, climatologists are trying to answer the questions you have posed. One good example of this would be Hansen. To the extent that we become able to make reasonable, quantifiable projections, the IPCC may then give this greater consideration.

  221. Hank Roberts:

    >143, 148, 149
    See also, for example:
    http://www.agu.org/pubs/crossref/2007/2006GL028790.shtml
    http://www.agu.org/pubs/crossref/2007/2006GL028058.shtml

  222. Timothy Chase:

    Re: Ozone Depletion and Antarctic Winds

    I did a little digging.

    Ozone is a greenhouse gas that results in higher temperatures in the stratosphere. The higher temperatures associated with climate change near the surface are increasing evaporation, putting more water vapor into the stratosphere, where it chemically reacts with the ozone and depletes it. As the amount of ozone is reduced, the stratosphere cools, increasing the temperature differential between the stratosphere and the troposphere near the surface and, as a consequence, strengthening the winds. The increased winds should result in a higher rate of evaporation but, more importantly for the long term, the churning of the nutrient-rich surface waters of the antarctic ocean. That churning will release more carbon dioxide into the atmosphere, a process which, we have now learned, has already begun – several decades ahead of schedule.

    Although the following deals with the arctic, it now appears to be applicable to the antarctic. It covers much of what I have given just above, namely, from greenhouse gases resulting in the depletion of ozone to increased winds:

    Research Features
    Tango in the Atmosphere: Ozone and Climate Change
    by Jeannie Allen
    February 2004
    http://www.giss.nasa.gov/research/features/tango/

    You may also want to check out:

    Brief:
    http://www.giss.nasa.gov/research/briefs/shindell_05/

    Shindell, D.T.: Climate and ozone response to increased stratospheric water vapor, Geophys. Res. Lett., 28, 1551-1554, doi:10.1029/1999GL011197, 2001
    http://pubs.giss.nasa.gov/abstracts/2001/Shindell.html

  223. Eli Rabett:

    Putting the boot to Doc Martyn, what is missing in his statement that

    Well actually it does, assuming that at Y=0, atmos. [CO2] = 373 ppm and all man made CO2 generation ended. It takes one half-life, Y=10, for CO2 to fall from 373 to 321 ppm, at Y=18 it falls to 300 ppm and at Y=27 [CO2] is 288 ppm. After four half-lives, 36 years it falls to 282 ppm.

    is that carbon can flow from the atmosphere into the upper ocean AND BACK AGAIN. Let us try some artwork. Assume we have two reservoirs A (for atmosphere) and UO (for upper ocean). They can exchange CO2 with a first-order half-life of, say, ten years.

    Before we start the experiment, the two are in balance:

    X X
    X X
    A UO

    We then put four extra units of carbon as CO2 into the atmosphere

    E
    E
    E
    E
    X X
    X X
    A UO

    after a couple of decades the system has rebalanced

    E E
    E E
    X X
    X X
    A UO

    Notice that there is now more CO2 in the atmosphere than before we started, half of the initial excess has been redistributed. What if we (foolishly) had believed Doc big shoes? Well according to him we would now have this situation

    – E
    – E
    – E
    – E
    – X
    – X
    – X
    – X
    A UO

    This is silly, because once the atmosphere and the upper ocean returned to balance they would remain there. To reduce the amount of carbon in the upper ocean and the atmosphere, you need to “bury” it into the deep ocean, a process that takes hundreds of years.

    Doc assumes that no carbon flows from the upper ocean into the atmosphere. In reality a considerable amount does. Until we started to perturb the atmosphere, an equal amount flowed from the upper ocean into the atmosphere as went in the other direction. Today, as we pump more CO2 into the atmosphere, slightly more flows from the atmosphere into the ocean, leaving enough to increase the atmospheric CO2 concentration, and increasing the ocean’s carbon content.

    Now we would really be in trouble if Doc was right, because when CO2 goes into the ocean, it forms carbonic acid, H2CO3, which ionizes to H+, HCO3-, and (CO3)2- ions, and that lowers the pH of seawater (makes it more acid). The observed change in acidity due to human emissions of CO2 is ALREADY a threat to much of the life in the sea, and most of the oxygen produced by photosynthesis comes from sea plants. Doc wants us to put all of it into the upper oceans*. Kill the oceans, kill the planet. :(

    *the carbon reservoir in the deep ocean is so large that we could sequester CO2 there without affecting the overall acidity of the deep ocean. However if it was not trapped and rose to the surface there would be huge problems.
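    Eli’s box diagrams translate directly into code. The following is a minimal sketch with made-up units and an arbitrary exchange rate (equal-sized, symmetrically exchanging reservoirs, which the real atmosphere and upper ocean are not); it only shows that a two-way flux re-equilibrates, rather than draining the atmosphere the way a one-way “half-life” would:

```python
def equilibrate(atm, ocean, k=0.07, years=200):
    """March forward one year at a time; each year a fraction k of each
    reservoir's content flows into the other (two-way exchange)."""
    for _ in range(years):
        a_to_o = k * atm
        o_to_a = k * ocean
        atm += o_to_a - a_to_o
        ocean += a_to_o - o_to_a
    return atm, ocean

# Start in balance (2 units each), then add a 4-unit pulse to the atmosphere.
atm, ocean = equilibrate(2 + 4, 2)
print(round(atm, 2), round(ocean, 2))  # -> 4.0 4.0: half the pulse stays airborne
```

Half the pulse remains airborne at equilibrium; removing it from the combined atmosphere-plus-upper-ocean system requires the slow deep-ocean burial Eli describes.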

  224. James Annan:

    Steve, way back in #155, if you are still reading :-)

    My comment wasn’t based on any special insight or detailed analysis. But I remember that the forecast earlier this year (that 2007 would “likely” be the hottest, with 60% probability) was predicated on a coming El Nino that seems to be weakening (again, based only on half-remembered press coverage – I’ve no special insight here).

    I agree that eyeballing the numbers, it looks reasonably close, but (according to hadcrut) temperatures are clearly a little down on 1998 so far.

  225. Urs Neu:

    James, Steve
    Concerning the ENSO influence we have to consider that the global temperature lags the ENSO index by 5 to 6 months. So we see the influence of the El Nino from last autumn/winter in the first half of 2007. A possibly upcoming La Nina during this summer will mainly influence 2008.
    However, if the roughly 10-year oscillation of global temperature we have seen over the last several decades (be it due to the solar cycle or internal) persists, we will see a considerable temperature increase during the coming years, since we are at the minimum now. A new record by 2010 seems likely.

    Note that the CRU value of 1998 is relatively higher (compared to the following years) than in the GISS or NOAA data set. Thus it will take longer for a new record high in the CRU data set.

  226. Barton Paul Levenson:

    [[Doc assumes that no carbon flows from the upper ocean into the atmosphere. In reality a considerable amount does.]]

    Exactly right. To get quantitative, according to the IPCC, about 92 gigatons of carbon enter the ocean each year from the atmosphere, and 90 gigatons enter the atmosphere from the ocean.

  227. Julian Flood:

    Ref 204: thank you for that. I don’t think I’ll get worried yet.

    Do the model teams include marine biologists? I hope so — the Southern Ocean is jade green for a reason.

    JF

  228. Hank Roberts:

    Some do:
    http://www.agu.org/pubs/crossref/2006/2005GB002511.shtml
    http://www.realclimate.org/index.php?p=160
    http://www.bgc-jena.mpg.de/~corinne.lequere/publications.html

  229. Alastair McDonald:

    Walt Bennett asked for some references regarding the carbon cycle, and I am not sure if the ones I gave were useful. Anyway I have just come across this very recent article which might be of interest:

    Balancing the Global Carbon Budget
    R.A. Houghton
    Annual Review of Earth and Planetary Sciences, May 2007, Vol. 35, Pages 313-347 (doi: 10.1146/annurev.earth.35.031306.140057)

  230. Hank Roberts:

    Good one, Alastair. Thank you.
    Here’s the link to the abstract, just a few sentences well worth reading.
    http://arjournals.annualreviews.org/doi/abs/10.1146/annurev.earth.35.031306.140057?journalCode=earth

  231. Steve Bloom:

    Re #229/30: Press release here and a research overview with links here.

  232. bender:

    Re reply to #153

    Forgive me for having to ask a third time. Hopefully my question gets more precise and understandable with each iteration.

    I want to understand the issue of structural instabilities.

    Given that there is some irreducible imprecision in model formulation and parameterization, what is the impact of this imprecision? I’m glad to hear that the models aren’t tuned to fit any trends, that they are tuned to fit climate scenarios. (Do you have proof of this?) But it is the stability of these scenarios that concerns me. If they are taken as reliable (i.e. persistent) features of the climate system then I can see how there might exist “a certain probability distribution of behaviors” (as conjectured by #160). My concern is that these probability distributions in fact do not exist, or, rather, are unstable in the long-run. If that is true, then the parameterizations could be far off the mark.

    What do you think the chances are of that? Please consider an entire post devoted to the issue of structural instability in GCMs. Thanks, as always.

  233. Hank Roberts:

    http://environment.newscientist.com/article/dn11899-recent-cosub2sub-rises-exceed-worstcase-scenarios.html
    http://environment.newscientist.com/data/images/ns/cms/dn11899/dn11899-1_600.jpg

  234. Hank Roberts:

    http://www.csmonitor.com/2007/0522/p01s03-wogi.html
    —–excerpt——–
    “… The Global Carbon Project study held two surprises for everyone involved, Field says. “The first was how big the change in emissions rates is between the 1990s and after 2000.” The other: “The number on carbon intensity of the world economy is going up.”

    Meanwhile, … the Southern Ocean, which serves as a moat around Antarctica, is losing its ability to take up additional CO2, reports an international team of researchers in the journal Science this week. The team attributes the change to patterns of higher winds, traceable to ozone depletion high above Antarctica, and to global warming.

    “There’s been a lot of discussion about whether the scenarios that climate modelers have used to characterize possible futures are biased toward the high end or the low end,” Field adds. “I was surprised to see that the trajectory of emissions since 2000 now looks like it’s running higher than the highest scenarios climate modelers are using.”

    If so, it wouldn’t be the first time. Recently published research has shown that Arctic ice is disappearing faster than models have suggested.

    Despite the relatively short period showing an increase in emissions growth rates, the Global Carbon Project’s report “is very disturbing,” Dr. Weaver says. “As a global society, we need to get down to a level of 90 percent reductions by 2050” to have a decent chance of warding off the strongest effects of global warming.

    If this study is correct, “to get there we have to turn this corner much faster than it looks like we’re doing,” he says.
    —–end excerpt——-

  235. Steve Bloom:

    Re #232: Along with the rest of the ClimateAstrology crowd, what you *want* is to prove your belief that the models are invalid along with the rest of climate science. It’s an interesting hobby for Objectivists with time on their hands, but as has been proved again and again by the few climate scientists who have braved the CA gauntlet, in the end it’s a complete waste of their time.

  236. Timothy Chase:

    Re Steve (#235):

    He is an Objectivist?

    I understand they often have difficulty with some aspects of science – like quantum mechanics, general relativity, the big bang, special relativity, evolution, some economics, certain parts of mathematics, etc. If it doesn’t fit their understanding of philosophy, it’s out. But what fits and what doesn’t oftentimes depends upon the individual’s understanding of “the philosophy.” Never really was all that technically defined. But at least they know they are “the rational ones.”

    Come to think of it, there are other groups that claim that title, and they don’t seem to have earned it either. Maybe that is the problem: claiming it. Once you have claimed it, to engage in critical self-examination to identify whether you are behaving rationally is experienced as a threat to everything you believe and everything you stand for. A personal threat. So you don’t engage in critical self-examination. You don’t self-correct. You become unwilling to entertain doubts and uncertainties – and the possibility of becoming something more than you are.

  237. Hank Roberts:

    But we digress; that would be philosophy.

  238. bender:

    Re #237
    And the presumptions and distortions in #236, would that be “philosophy” too? Maybe there is a more accurate term for that kind of “digression”?

  239. Hank Roberts:

    > Maybe there is a more accurate term
    That would be a digression, into nomenclature.

    Seems the CO2 numbers are getting back toward Scenario A. Time will tell.

  240. John Norris:

    re 232: “My concern is that these probability distributions in fact do not exist, or, rather, are unstable in the long-run. If that is true, then the parameterizations could be far off the mark.”

    1. The expressed concern is directly on topic.

    2. 27 hours and no fact-based responses yet from the contributors, nor from the cheap seats.

  241. bender:

    Re #235
    If anything, what I *want* to believe is that the models are totally reasonable approximations. Unfortunately, a scientific training teaches one to not accept models on faith, but to probe them, investigate their assumptions, limitations, and uncertainties. When agendas such as #235 do not want to hear about model uncertainties, I am not surprised; one encounters that a lot these days. But when the modelers themselves do not want to discuss uncertainties in their parameterizations and predictions, that makes me suspicious. But I am patient. It is entirely possible that my question will be answered satisfactorily, and that my suspicions will be proven unwarranted. To be fair, it’s not an easy question to answer, and perhaps a post is being prepared to address the topic. Maybe the sound of crickets chirping in #240 is actually beavers busying themselves in the background.

  242. bender:

    Re #239 “That would be a digression, into nomenclature.”
    Then let’s just call #235 what it is – a “dodge” – and be done with this and all digressions. The topic is the accuracy and validity of model projections.

    On that topic, I question the precision of the model’s parameters on the grounds that the scenarios to which the model is tuned are not persistent, reliable, indicative features of the atmospheric/oceanic circulation. Rather, the scenarios are a dynamically realized subset of a much wider array of potential dynamical behaviors. I want to know if the fundamental instability of thermodynamic circulation is a serious problem when it comes to parameterizing (i.e. tuning) these models. I can’t see how it wouldn’t be. If one parameter is off, then the error here will propagate to affect the estimation of a neighboring parameter, and so on.

  243. Timothy Chase:

    This is in response to bender’s (#232).

    bender, I am not a climatologist, and my interest in the field is perhaps a month or two old, but I figured that with a few minutes of looking around on the web, primarily at Real Climate, I would probably run across sufficient material to respond to your requests to the extent that they deserve a response.

    bender (#232) wrote:

    Forgive me for having to ask a third time. Hopefully my question gets more precise and understandable with each iteration.

    I want to understand the issue of structural instabilities.

    Given that there is some irreducible imprecision in model formulation and parameterization, what is the impact of this imprecision?

    Essentially, what you are speaking of is chaos and how it might limit the ability to predict not the weather, but the climate, where the climate consists of the statistical properties of the weather over time.

    This is a fairly esoteric topic, but it is something under consideration. Moreover, this is something which was discussed (albeit rather briefly) at this website only a few months ago.

    gavin wrote:

    Maybe this could have been clearer, but it is the climate within the models that is non-chaotic, not the model itself. All individual solutions in the models are chaotic in the ‘sensitivity to initial conditions’ sense, but their statistical properties are stable and are not sensitive to the initial conditions (though as I allude to in the article, I don’t know whether that will remain true as we add in more feedbacks).

    Please see:

    3 Jan 2007
    The Physics of Climate Modelling, post #5, inline response
    http://www.realclimate.org/index.php/archives/2007/01/the-physics-of-climate-modelling/

    Turning to the article by Gavin Schmidt to which this refers, we find the following:

    Climate modeling is also fundamentally different from weather forecasting. Weather concerns an initial value problem: Given today’s situation, what will tomorrow bring? Weather is chaotic; imperceptible differences in the initial state of the atmosphere lead to radically different conditions in a week or so. Climate is instead a boundary value problem: a statistical description of the mean state and variability of a system, not an individual path through phase space. Current climate models yield stable and nonchaotic climates, which implies that questions regarding the sensitivity of climate to, say, an increase in greenhouse gases are well posed and can be justifiably asked of the models. Conceivably, though, as more components (complicated biological systems and fully dynamic ice-sheets, for example) are incorporated, the range of possible feedbacks will increase, and chaotic climates might ensue.

    Please see:

    Quick Study
    The physics of climate modeling
    http://www.physicstoday.org/vol-60/iss-1/72_1.html
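    gavin’s distinction between chaotic individual solutions and stable statistics can be illustrated with a toy far simpler than a GCM. The sketch below, a standard chaos demonstration and not anything from the article, integrates two copies of the Lorenz-63 system with a crude Euler step: the trajectories (the “weather”) decorrelate completely, while the long-run mean (the “climate”) barely changes:

```python
def lorenz_pair(eps=1e-6, steps=500_000, dt=0.001, burn=50_000):
    """Integrate two Lorenz-63 systems whose initial conditions differ
    by eps; return the largest gap between their z-coordinates and each
    run's time-mean z after a spin-up period."""
    s, r, b = 10.0, 28.0, 8.0 / 3.0
    runs = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0 + eps]]  # imperceptibly different
    sums, n, max_gap = [0.0, 0.0], 0, 0.0
    for i in range(steps):
        for p in runs:
            dx = s * (p[1] - p[0])
            dy = p[0] * (r - p[2]) - p[1]
            dz = p[0] * p[1] - b * p[2]
            p[0] += dx * dt
            p[1] += dy * dt
            p[2] += dz * dt
        max_gap = max(max_gap, abs(runs[0][2] - runs[1][2]))
        if i >= burn:  # discard spin-up before averaging
            sums[0] += runs[0][2]
            sums[1] += runs[1][2]
            n += 1
    return max_gap, sums[0] / n, sums[1] / n

gap, mean1, mean2 = lorenz_pair()
print(gap > 10.0)                # the two "weathers" diverge across the attractor
print(abs(mean1 - mean2) < 2.0)  # yet the "climates" (time-mean z) agree closely
```

With these feedback-free toy dynamics the statistics are stable; as the quoted article notes, adding more feedbacks to real models could change that.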

    You write:

    I’m glad to hear that the models aren’t tuned to fit any trends, that they are tuned to fit climate scenarios. (Do you have proof of this?)

    This sounds odd to say the least.

    Climate scenarios are what you get from the models, not the other way around. What one deals with in climate models are parameterizations. The parameters are adjusted to fit our understanding of the phenomena. For example, we do not know the exact climate sensitivity to carbon dioxide or other forcings. As such, different models use different values for these parameters. The parameters themselves, however, are held constant within any given model. Just as important (as alluded to in the previous passage) are the particular feedbacks which are incorporated into the models.

    Nevertheless, assuming you are familiar with how science is done, you will realize of course that the incorporation of additional elements (or feedbacks) is not arbitrary as these necessitate further predictions, and with the evidence that we are accumulating, these most certainly can and will be tested.

    Now to estimate the uncertainty associated with our models, we will compare the resulting scenarios against one another. Likewise, we may vary the initial conditions which are the input for a given model, accounting for the uncertainties we have with respect to those initial conditions.

    In all cases so far, the estimated rise in temperature (given the current level of carbon dioxide) is in the neighborhood of two to three degrees, suggesting that the results are robust.

    You write:

    But it is the stability of these scenarios that concerns me.

    I have already dealt with this above when quoting gavin.

    If they are taken as reliable (i.e. persistent) features of the climate system then I can see how there might exist “a certain probability distribution of behaviors” (as conjectured by #160).

    The probability distribution is in terms of the specific weather which is cranked out through computer simulations, not the climate itself. The climate is a measure of the statistical properties of the weather which results from different runs of the models.

    You write:

    My concern is that these probability distributions in fact do not exist, or, rather, are unstable in the long-run.

    You are right: at this point, such “probability distributions” of climates are not yet a part of such models. The probability distributions are of the weather, the climates are a measure of the statistical properties of the weather simulations.

    As for the “climate instabilities,” currently we aren’t seeing any – given the models. What we are seeing is chaos at the level of weather, but the resulting climates are themselves stable. However, the incorporation of additional feedbacks may result in this changing – if we see branching points in the climates which result from different simulations.
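    The weather/climate distinction here can be illustrated with a toy chaotic system. Below is a minimal sketch using the Lorenz-63 equations (my own illustration, not part of the original comment, and of course a caricature rather than a GCM): two runs from nearly identical initial conditions diverge completely at the “weather” level, while their long-run statistics – the “climate” – remain close.

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (crude but adequate here)."""
    x, y, z = state
    dxdt = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * dxdt

def run(ic, n_steps=100000):
    """Integrate from an initial condition, recording the x-coordinate ("weather")."""
    state = np.array(ic, dtype=float)
    xs = np.empty(n_steps)
    for i in range(n_steps):
        state = lorenz_step(state)
        xs[i] = state[0]
    return xs

a = run([1.0, 1.0, 1.0])
b = run([1.0 + 1e-8, 1.0, 1.0])  # a tiny perturbation of the initial condition

weather_divergence = np.max(np.abs(a - b))     # grows to the size of the attractor
climate_difference = abs(a.mean() - b.mean())  # long-run statistics stay close
```

    The point is only qualitative: chaos destroys predictability of individual trajectories while leaving the distribution over the attractor stable, which is the sense in which the climates the simulations produce are themselves stable.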

    You write:

    What do you think the chances are of that?

    As gavin suggests, it is a possibility.

    You write:

    Please consider an entire post devoted to the issue of structural instability in GCMs.

    This seems premature insofar as the global climate models aren’t showing any structural instability at this point.

    You write:

    Thanks, as always.

    I hope I have been able to help.

    Furthermore, responding to you has helped to clarify my own understanding of what climate models are.

  244. Hank Roberts:

    > the scenarios to which the model is tuned are not persistent, reliable,
    > indicative features of the atmospheric/oceanic circulation

    Chuckle. Of course not. They’re fossil fuel scenarios. From twenty years ago. And impressive, as Gavin wrote up top:

    “This was one of the earliest transient climate model experiments …. showed a rather impressive match between the recently observed data and the model projections. … the model results were as consistent with the real world over this period as could possibly be expected and are therefore a useful demonstration of the model’s consistency with the real world. Thus when asked whether any climate model forecasts ahead of time have proven accurate, this comes as close as you get.”

    Asserting you doubt the current work by attacking the early work — in any area — is fundamentalism, not science.
    That’s a seed, not a foundation. Seeds get used up, they don’t remain essential once something grows from them.

  245. bender:

    Re #244 Not surprising that a rhetorician would try to parse the problem away by redefining the terms of the debate. With a chuckle, no less. FYI I’m referring to two papers published in 2007. Since you’re so clever, try to guess which ones.

  246. Timothy Chase:

    bender (#245) wrote:

    Re #244 Not surprising that a rhetorician would try to parse the problem away by redefining the terms of the debate. With a chuckle, no less. FYI I’m referring to two papers published in 2007. Since you’re so clever, try to guess which ones.

    At this point you are being impolite.

    Perhaps in part this is my fault – with my estimation of the Objectivist movement. With regard to that, I apologize, although I must say that I have some experience on which to base such statements.

    In many ways, the philosophy isn’t that revolutionary, but it is underdeveloped. I know because I am a former philosophy major who focused on technical epistemology and the philosophy of science. I also have reasons for my views regarding the nature of the movement. I had close, personal contact over approximately a decade – since I was an Objectivist for thirteen years.

    Now with regard to what you have just stated, it is you who are being a rhetorician, since the subject of gavin’s post was Hansen’s models from twenty years ago. Those models proved to be fairly accurate, at least Scenario B – which Hansen stated was the most likely at the time he made the predictions.

  247. bender:

    “At this point you are being impolite.”
    Yes. And you are being uneven in your treatment, as my question has twice now received impolite (and, more importantly, irrelevant) commentary, and it is me whom you are rebuking. I give back what I’m given. It’s called tit-for-tat. We can discuss the facts or we can ratchet up the rhetoric. You have your choice. For the record, your reply was unhelpful and reflects your amateurism, just as the chuckling in the front row reflects the chuckler’s amateurism. Laugh it up. This issue has legs.

  248. bender:

    “The subject of gavin’s post was Hansen’s models from twenty years ago.”
    The content of the post is as you describe. However the subject of the post is broader than that; it is the credibility of the Hansen et al approach to climate dynamics modeling. You can redefine the debate as narrowly as you like; the harder you work these semantics, the more it worsens your case.

    “Those models proved to be fairly accurate”
    Your notion of model “accuracy” is perhaps not as nuanced as it needs to be. With a stochastic dynamic system model containing a hundred free parameters, do you have any idea how easy it is to get the right answer … for the *wrong* reasons? A valid model is one which gives you the right answers … for the *right* reasons. There is much more to model validation than an output correlation test, especially over a meagre 20 years.
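    bender’s point about free parameters is a standard statistical one, and a toy regression makes it concrete (this is my own illustrative sketch with made-up numbers; it says nothing about how any actual GCM is built or tuned): give a model enough unphysical knobs and it will reproduce a training record closely while failing out of sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# 40 "annual" anomalies: a modest linear trend plus weather-like noise.
years = np.arange(40)
series = 0.02 * years + rng.normal(0.0, 0.1, size=40)
train, test = series[:20], series[20:]

# A "model" with 15 free parameters attached to random regressors
# that have nothing to do with the underlying trend mechanism.
X = rng.normal(size=(40, 15))
coef, *_ = np.linalg.lstsq(X[:20], train, rcond=None)

in_sample_mse = np.mean((X[:20] @ coef - train) ** 2)
out_sample_mse = np.mean((X[20:] @ coef - test) ** 2)
# The in-sample fit looks impressive (right answer) even though the
# regressors encode none of the real mechanism (wrong reasons), and
# the apparent skill evaporates out of sample.
```

    This is exactly why out-of-sample tests – paleoclimate, Pinatubo, Hansen’s own forward projection – carry more weight than any in-sample correlation.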

    If your faith in the model’s accuracy is so strong, then why don’t you just sit back patiently, keep quiet, and let the experts defend it? Then my concern will be proven to be unfounded. Then I will leave, we can stay friends, and all will be well again.

  249. Timothy Chase:

    bender (#247) wrote:

    “At this point you are being impolite.”

    Yes. And you are being uneven in your treatment, as my question has twice now received impolite (and, more importantly, irrelevant) commentary, and it is me whom you are rebuking. I give back what I’m given. It’s called tit-for-tat. We can discuss the facts or we can ratchet up the rhetoric. You have your choice. For the record, your reply was unhelpful and reflects your amateurism.

    Alright.

    In the name of objectivity, let’s set aside personal comments and anything which may be interpreted (rightly or wrongly) as ad hominem attacks. Instead, as a matter of individual choice, we will focus our efforts on the volitional adherence to reality. As I understand it, nothing is of greater importance. Not obedience to authority. Not any allegiance to any group. Not any value.

    Reality comes first.

    Now you vaguely speak of amateurism. Assuming you are referring to post #243, if there is something unclear in the explanation that I provided, or something about it that is inaccurate, please feel free to point it out and we can discuss it.

  250. David Donovan:

    Re# 243

    Just for the record. For things like the direct GHG forcing, for all practical purposes, there is no ‘tuning’ involved in GCMs. CO2, H2O, O3 etc. concentrations all feed directly into the radiative transfer calculations, which then produce the associated atmospheric heating/cooling rates (including surface heating/cooling effects). For this aspect of GCMs there is no room for fitting anything. Parameterizations (which can be ‘tuned’) are still necessary, for example, to take into account the role of sub-grid processes such as cloud properties and their effect on the global radiative balance. Cloud feedback issues are perhaps the greatest legitimate source of uncertainty in our ability to predict future climate change (see the climateprediction.net work at Oxford).

    W.r.t. clouds, the more we learn, particularly about their vertical properties on a global scale, the better we can reduce this source of uncertainty (Google CloudSat).

  251. Hank Roberts:

    Mr. Bender, this thread isn’t current on modeling, it’s not about current models. Plans made back in the late 1990s led to then new models. That’s been repeated over and over, for a long time, since the 1988 model.

    Look at this week’s EOS ($20 AGU membership; the newsletter presents current science in terms the nonscientist reader can follow). Climate model requirements are discussed in a good review by Hibbard, Meehl, Cox and Friedlingstein.

    That article begins:

    “Climate models used for climate change projections are on the threshold of including much greater biological and chemical detail ….” The background given covers models from about 2000 to the present.

    The authors there write “We welcome comments and ideas on the proposed strategy for the experimental design and related questions for the research questions …. ”

    They provide links there for AGU members to use.

    It’s what’s happening now.

  252. Timothy Chase:

    Dan Donovan (#250) wrote:

    Just for the record. For things like the direct GHG forcing, for all practical purposes, there is no ‘tuning’ involved in GCMs. CO2, H2O, O3 etc. concentrations all feed directly into the radiative transfer calculations which then produce the associated atmospheric heating/cooling rates (including surface heating/cooling effects). For this aspect of GCMs there is no room for fitting anything.

    I appreciate the correction. I always will.

    In any case, it wasn’t the forcing which I regarded as problematic so much as the water vapor feedback due to higher levels of CO2 in the atmosphere. As I understood it, this was primarily what is meant by “climate sensitivity” to the forcing of carbon dioxide. However, if we are able to accurately calculate the effects of carbon dioxide on the levels of water vapor, and then the effects of both upon temperature, I am impressed. What would then be left would be primarily the positive feedbacks due to the carbon cycle, the cryosphere “albedo” feedback, and the effects of aerosols, but the last of these is quickly becoming amenable to calculation.
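    The “climate sensitivity” bookkeeping described here is conventionally summarized with the linear feedback relation ΔT = ΔT₀/(1 − f), where ΔT₀ is the no-feedback (Planck) response to doubled CO2 and f lumps together the water vapour, albedo, and other feedback fractions. A sketch with textbook round numbers (the specific values 1.2 K and f ≈ 0.6 are standard illustrations, not figures taken from this thread):

```python
def equilibrium_warming(delta_t0_kelvin=1.2, feedback_fraction=0.0):
    """Linear feedback amplification: dT = dT0 / (1 - f).

    f < 0 damps the response, 0 < f < 1 amplifies it, and f -> 1
    would diverge (the linearization breaks down well before that).
    """
    if feedback_fraction >= 1.0:
        raise ValueError("f >= 1: runaway; the linear model no longer applies")
    return delta_t0_kelvin / (1.0 - feedback_fraction)

no_feedback = equilibrium_warming()                          # 1.2 K for doubled CO2
with_feedbacks = equilibrium_warming(feedback_fraction=0.6)  # 1.2 / 0.4 = 3.0 K
```

    This zero-dimensional caricature makes the division of labour clear: the forcing calculation can be nearly parameter-free, while the feedback fraction f is where most of the spread between models lives.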

  253. Timothy Chase:

    PS

    Sorry David, I got your first name wrong. I have a friend at the end of the hall I have been bumping into this afternoon by the name of Dan which might explain my confusion.

  254. Steve Bloom:

    Re #247: No, it doesn’t have legs, except in the sense that it has the potential for serving as bender’s hobby for some time to come. IMHO it is of no value for a climate modeler to interact with him. For evidence see the thread below (intentionally not hot-linked) wherein leading modeler Isaac Held (starting with comment #20) slowly loses his patience with the ClimateAstrology yahoos. Bender’s rather conclusory and self-satisfied comment (#58) is par for the course.

    (http://www.climateaudit.org/?p=845)

  255. Timothy Chase:

    Hank Roberts (#251) wrote:

    Look at this week’s EOS ($20 AGU membership; the newsletter presents current science in terms the nonscientist reader can follow). Climate model requirements are discussed in a good review by Hibbard, Meehl, Cox and Friedlingstein.

    Ah – the near-term future of climate modeling.

    Greater vertical resolution, in line with what David was mentioning, and more detailed analysis of the carbon cycle – due to its strong, positive feedback – something I was hoping for. And we were already including representative species at the microbial and floral levels in various climate models – that is something I was unaware of.

    The lateral resolution is still rather coarse for the Atmosphere-Ocean General Circulation Models (approximately two degrees – a big grid), but then there are the regional models, which have higher resolution and include more variables. For example, NASA recently released projections for major eastern cities in the 2080s which showed higher temperatures (100-110 F averages July through August during the drier summers) and more frequent droughts, since NASA was taking into account that it tends to rain later in the day, giving the ground a chance to warm up beforehand, resulting in increased evaporation and the consequent degradation of plants and soil.

    Might be interesting to see whether the shotgun analysis of microbial colonies will come into play within the next couple years. It is supposed to give you a low resolution of metabolic functions at a level which is substantially broader than individual species. We might not have to rely upon representative species to the same extent – at least in the microbial realm, and it could prove valuable in the analysis of the carbon cycle – given the considerable importance of both archaea and eubacteria to the cycle at numerous key points of our climate system.

  256. bender:

    Re #251
    So, you would have me retract my question because I’m, in your opinion, off-topic? Hokay, Hank. Carry on without me. Good luck to you and your presumptuous lot. May your arrogance not be your downfall.

  257. bender:

    Re #254
    I know you feel the need to stomp on any discussion of uncertainty because you fear the influence it could have on public opinion. That’s ok, I understand. It’s your job. I applaud your faith.

    But if your faith in GCM accuracy is so strong, then why don’t you just sit back patiently, keep quiet, and let the experts defend it? Then my concern will be proven to be unfounded. Then I will leave, we can stay friends, and all will be well again.

    Instead, you ratchet up the rhetoric with your inexpert and superfluous commentary. Can’t you see how this harms your agenda? But you can’t resist, can you? May your selfishness not be your downfall.

  258. Timothy Chase:

    bender (#248) wrote:

    “Those models proved to be fairly accurate”

    Your notion of model “accuracy” is perhaps not as nuanced as it needs to be. With a stochastic dynamic system model containing a hundred free parameters, do you have any idea how easy it is to get the right answer … for the *wrong* reasons? A valid model is one which gives you the right answers … for the *right* reasons. There is much more to model validation than an output correlation test, especially over a meagre 20 years.

    However many variables it takes, it is not that easy to get the “right results” if you do not have the right results beforehand. Back in 1988, Jim Hansen was not in possession of a time machine, and I seriously doubt that he had a crystal ball. What he did have, though, were models which were fairly good at projecting a future which he did not already know.

    If your faith in the model’s accuracy is so strong, then why don’t you just sit back patiently, keep quiet, and let the experts defend it? Then my concern will be proven to be unfounded. Then I will leave, we can stay friends, and all will be well again.

    I do not require faith. I can and do look up the technical papers whenever I have the time: it is a habit I developed while dealing with contrarians who were focused not so much on attacking climate science but evolutionary biology. Moreover, I enjoy understanding things – going as deep as I can go, and I am willing to put forward the effort. However, after Dover, things have slowed down a bit, so now I have a little extra time. Besides, there really isn’t that much of a difference. The ideological commitment to one’s own views without regard for the facts is more or less the same whether people are attacking evolutionary biology or climatology. It is probably more a matter of human nature.

    In any case, I wouldn’t care to see a young earth creationist preacher “debate” an evolutionary biologist who was being patient enough and kind enough to offer free lectures to anyone who cared to listen. I wouldn’t care to see a pseudo-scientist try and “debate” various theoretical points with a specialist in quantum theory or general relativity – particularly if it were obvious that he had no grasp of the subject he was trying to debate.

    I have similar reservations here.

    You don’t have the background which would be required to make any debate with these specialists interesting except at the most superficial level. Now this is not to say that you could not learn much of what they already know, but it would take time and there would likely be limits – even assuming that you had the motivation. But from what I can tell, you have chosen to devote all of your time and energy to the fine art of heckling. Nothing left for learning what you don’t already know, it would appear.

    But as some may have already guessed, there has been a purpose to my exchange with you: I wished to have someone illustrate in grisly detail the sort of ideological commitments that we run into among the contrarians when defending modern science, and more generally, what practices people should avoid in their own lives as a matter of simple, commonsense cognitive and psychological hygiene. You have performed both functions quite admirably, and I appreciate the service.

    Now should I send a check, and if so, who should I make it out to? Would you prefer “bender” or simply “anon”?

  259. Hank Roberts:

    I’m just sayin’ — there’s an invitation to a discussion of your interests — it’s at the AGU.
    You could put your ideas in a thread where people are inviting it, where the discussion’s about the topic, not about you.
    If you wanted to improve automotive design, you wouldn’t be insisting it happen in a Henry Ford Model T thread, would you?
    If you did that, odds are people discussing the Model T would think you were trolling.

    Reading online involves interacting with the writers, trying to make a sensible discussion. Everyone’s doing their best here.
    What we do is what the next reader along gets.

  260. Timothy Chase:

    bender,

    I had noted earlier that it has been my experience that many Objectivists believe that the discoveries of modern science are philosophically incorrect, and as such, they find it necessary to abandon and even attack such discoveries in areas like quantum mechanics, general relativity and special relativity. You seemed to think that this was extremely unfair, and were no doubt surprised to discover that I knew exactly what I was talking about – having been an Objectivist myself for quite some time, albeit one who was able to avoid such pitfalls of the movement. (It might have helped that I fell in love with modern science first before becoming acquainted with “Objectivist philosophy.”)

    However, despite your protestations that my criticism of the movement was unfair, you never got around to explaining why it was unfair. For example, do you accept quantum mechanics, general relativity or special relativity? Assuming you do, can you honestly say that among the Objectivists you know, there isn’t a strong element of dogmatism with the associated ideological blinders that prevents them from being able to accept such discoveries?

    Or is it that when you stated that I was making certain presumptions, you actually meant that I was “assuming” that modern science is true when you know for a fact that it is some sort of horrible Kantian abomination designed to prevent the ordinary man from seeing the truth of the Omphalosian-spun version of Objectivism which you hold above all else?

    I am sorry, but if you actually addressed this at some point, I seem to have missed it.

  261. John Mashey:

    1) Science in general, and modeling/simulation (whether in science or engineering), have long progressed by creating approximations to reality, and then improving them over time.

    Improvements can come from:
    - New scientific understanding, i.e., like when people started to understand global dimming effects from volcanoes and other sulfate sources.
    - New data sources, like ice-cores.
    - Longer data series of types that already existed, like glacier records.
    - Better algorithms.
    - Computers with more TFLOPs, memory, and I/O, since a lot of simulations use methods that require modeling something physical with a large collection of elements small enough to get accurate enough. It is quite common for important effects to be visible only when elements get down to some small-enough size, which of course requires more of them, and more compute power.

    I used to help design supercomputers, and worked with the people who bought them, and had more than one researcher complain to me that we didn’t yet offer a Terabyte of main memory [I told them: Next Year], but of course that was years ago.

    2) I’ve talked to lots of scientists doing simulations of all sorts of things:
    - automobiles (Computational Fluid Dynamics for streamlining, crash codes for simulating car crashes – for the latter, sometimes simulations tell the designers that removing metal actually makes a car safer (ex: Ford Taurus, at one point))
    - airplanes
    - bridges, oil platforms, buildings
    - human blood vessels and likely effects of surgery

    - stars & galaxies
    - weather
    - climate

    - America’s Cup racing boats
    - Chocolate ice-cream bars
    - Barbie dolls
    - etc

    (The first 4 are life-and-death, and anyone who reflexively distrusts simulations should avoid those).

    2) *Anybody* I ever talked to (a lot!) understood perfectly well that there were various degrees of approximation and uncertainty involved, including, for sure, the climate modelers. Nevertheless, as soon as people started getting useful results (which has happened at different times for different problems), they started using them (and trying to improve accuracy and bound uncertainty), rather than waiting for perfection.

    Of course, some people seem able to be absolutely *sure* themselves, yet spend their time poking at scientists to demand certainty, and claim that anything less than certainty is no good.

    I doubt that my car is as perfectly safe as it could be, but I’m virtually certain that simulation made it a lot better than it would have been otherwise. (I know for sure it was simulated by safety-fanatics. They even did events like shedding moose, of which we have none locally, but we do have lots of deer, and although I’m not sure deer were simulated, I suspect the moose results may help somewhat.)

    Car crash codes got good enough in the mid-1990s for car companies to do most of their crashes virtually. Of course, climate modeling is one of the tougher ones, along with some of the biological modeling problems.

    3) For people new to this, and interested in good background, I recommend the ancient (1993), but fine book by Larry Smarr and William Kaufmann “Supercomputing and the Transformation of Science.”

    It has a good overview of the (still-relevant) issues, beautiful illustrations, and can be gotten very cheaply via Amazon.

    4) Once again, I miss the existence of the equivalent of USENET KILLFILEs for blogs in general.

  262. Timothy Chase:

    Maybe looking up the technical papers is not such a great idea after all.

    I might take up lessons in staring at my navel from bender, or alternately weave back and forth while repeating the mantra “This is not happening! This is not happening!” Life would no doubt be so much simpler if I could faithfully practice some form of radical skepticism with respect to modern science in order to protect the dogmatism of some philosophic belief…

    As I have said before, we are not talking runaway greenhouse effect.

    However, it would appear that the positive feedback from the carbon cycle may substantially increase the ppm of CO2 – enough to set off my personal alarms. Habitable, of course, but it would appear that the world will be changing quite substantially – and the long-term effects may be more significant than “global warming.”

    I just hope that we have some semblance of an economy at the end of this century: we are going to need it to deal with the after-effects of all of our carbon binging. What we have seen happening to the arctic cap over the past several decades would appear to be just the appetizer – and through some odd association brings to mind a skit when I think of what lies ahead…

    “Would you care for an after-dinner mint? It’s waffer-thin.”

  263. ray ladbury:

    Bender, I think that you might get further if you devoted a little time to understanding climate science rather than jumping right into the instabilities of the models. If you do, you will find that although the problem of modeling global climate is very difficult, it is much easier to get the general trends right–as Hansen did back in 1988. I do not know your background, but it would seem that you are not a geoscientist. Every field has its own approach to quality control, and if you really want to make a contribution to the field, you first need to understand in detail how their process works, how what they are doing relates to your experience and see where the two may be complementary, where they are antagonistic and where your methods might supplement theirs.
    Unfortunately, climate science has lots of experience with impatient outsiders who want to delve into details without understanding the general features. Michael Crichton comes to mind–a man who feels that he should be able to understand a complex field with 30 minutes of effort even though he doesn’t understand even the basics of how science works (as should be obvious to anyone who has suffered through one of his novels).
    This site is a resource–mostly for laymen–to come and learn about climate science. There are discussions of models, but it is not a forum for delving into the guts of the models. My recommendation to you would be to learn what you can here–about both climate and the climate models–but keep in mind that the contributors to this blog all have day jobs. They may not feel they have the time to delve into the guts of the models with someone who has not shown that they have mastered the basics.

  264. bender:

    So much verbiage, so little worth responding to. For those who do not understand how GCMs are built or parameterised, please see today’s post. It provides some (but not all) of the necessary context to understand why my questions in #153 and #232 are (a) valid and (b) directly on-topic. Re: #259 Thank you for the warm welcome here and the invitation to speak at AGU. But unfortunately I don’t have anything to discuss. All I have is a simple question – a question that is not “MY” topic, but THE topic: when we say GCMs are “skillful”, what is it that they are skillfully reproducing: the global circulation in all its possibilities, or the circulation as we have observed and characterized it over the last few decades? It’s a statistical question, going back to the reply #153 in regards to the phrase “out-of-sample”: what is the sample and what is the population?

    Please, no amateur responses. It’s the inline replies I want to read. All else, it seems, is noise.

  265. bender:

    So much verbiage, so little worth responding to. For those who do not understand how GCMs are built or parameterised, please see today’s post. It provides some (but not all) of the necessary context to understand why my questions in #153 and #232 are (a) valid and (b) directly on-topic. Re: #259 Thank you for the warm welcome here and the invitation to speak at said conference. But unfortunately I don’t have anything to discuss. All I have is a simple question – a question that is not “my” topic, but THE topic: when we say GCMs are “skillful”, what is it that they are skillfully reproducing: the global circulation in all its possibilities, or the circulation as we have observed and characterized it over the last few decades? Are they one and the same? It’s a statistical question, going back to the reply #153 in regards to the phrase “out-of-sample”: what is the sample and what is the population?

    [Response: I wouldn't be so dismissive of commenters - there's some good stuff sprinkled about there, but let me briefly step in. The models are tuned on climatology - that is the mean observed climate - usually over the satellite period. That will include the seasonal cycle and, to some extent, the unforced variability (ENSO amplitude etc.). They are not tuned to trends, events (such as Pinatubo), paleo-climates (6kyr BP, LGM, 8.2kyr event, D/O events, the PETM, the Maunder Minimum or the Eocene), other forcings (solar, orbital etc.) - thus every match to those climate changes is 'out of sample' in the sense you mean. Read a GCM background paper (like Schmidt et al 2006 for instance) to see what is tuned and what isn't. The resulting code is identical to the one used for all the paleo-climate experiments, none of which were started prior to the final tuning for IPCC AR4 runs. The sample is therefore the present day mean climate, the population is the history of all paleo-climates.

    There will be some clear failures where there are reasons to suspect that some of the (up to now) excluded physics is dominant (i.e. Heinrich events that rely on ice sheet dynamics), but pretty much everything else is fair game - as long of course there is a good hypothesis to test. The 8.2kyr event is a great example. -gavin]

  266. Hank Roberts:

    Sorry, folks, I didn’t recognize the frequently asserted beliefs; if I’d checked I wouldn’t have responded. My bad.

  267. bender:

    Ray, I’m trying to learn the answer to my question. I don’t see it addressed here and I’m hoping it will be. Presumptuous Hank is way off the mark as to my purpose. I am not here to assert. I have no beliefs. I have only questions. Valid questions. Where shall I ask them if not here?

  268. bender:

    Ray, in #263 you say:
    “it is not a forum for delving into the guts of the models”
    but attribution is fundamentally a modeling exercise. If you can’t discuss the models, you can’t discuss attribution. It seems there may be a double-standard here: the posters can discuss the models, but the commenters can not.

    My presumed “background” (#263) and imagined “interests” (#259), “beliefs” and “assertions” (#266) are immaterial to the issue, and are, frankly, offensively prejudicial. If you yourself have read the climatology literature, as you ask me to do, then you tell me straight out: what precisely is wrong with my question?

    [Apologies to all for the redundancy in the double-posts. Some get held up, and some don't. I will be more patient if Hank Roberts, Steve Bloom, and Ray Ladbury will.]

  269. Michael:

    Presumably a climate model would assume a convergence of all moments in each relevant probability distribution function (pdf), although presumably higher-order components may be truncated according to particular assumptions of a given model. Perhaps the climate modellers could elaborate on moment convergence and truncation of relevant pdfs in relation to structural stability. Some of the models also involve ensemble calculations, and again it may be instructive for the climate modellers to describe something about the use of these, especially as the public has been involved in some ensemble calculations being run on their PCs at home.

    In the field of nonequilibrium dynamics, there are interesting phenomena, such as the Belousov-Zhabotinsky systems, which exhibit complex ordered features (going under the general field of self-organising systems). The work of pioneers such as Prigogine and Haken comes to mind, although their work is not directly in climate change modelling. The book “The Self-Made Tapestry: Pattern Formation in Nature” by Philip Ball has lots of photographs of pattern formation in diverse systems, although not climate systems. Systems can switch from one state to another quite abruptly, and one might ask whether such sudden state changes are predicted by the climate models. For example, do climate models predict jumps between states of climatic circulation, either atmospheric or oceanic? If so, do these jumps have anything to do with moment convergence and pdf stability/instability? The formation of large-scale mills in the southern oceans is an interesting phenomenon in this respect, although presumably oceanic computational fluid dynamical models may not necessarily reveal such complex vortex-type phenomena.

    It would seem that the question of structural stability and morphogenesis is very important in any model. Just because a particular pattern has not been seen before, it may not necessarily mean that a certain pattern may not form soon or some time in the future. In any case, there would also be questions about completeness of each subsystem model. For example, Hansen’s recent paper on Scientific Reticence is quite explicit that much of the important physics of ice sheets is not included in the models, hence his raising of matters to do with nonlinear behaviour (eg disintegration) of ice sheets. One would have thought that if a model (eg for ice sheet disintegration) doesn’t include essential physics then that would naturally compromise other parts of GCM systems, such as oceanic circulation and oceanic biotics, to which it may link and take inputs/outputs.

    In systems far from equilibrium, the phenomenon of self-organised criticality (SOC) is frequently observed. Examples which are often quoted to illustrate this are sandpiles, avalanches, forest fires, etc, and an essential ingredient for such phenomena seems to be interaction-dominated thresholds. Do any of the climate models exhibit systems which self-organise to criticality? Does the steadily increasing rise of GHG emissions trigger SOC phenomena in any of the models? It would be strange if it did not, given that SOC is apparently observed in so many situations and many authors in diverse fields associated with climate change discuss thresholds. The matter of thresholds, eg where bifurcation takes place and systems can move along diverse paths / patterns, is clearly a very important area and it is not necessarily obvious that such thresholds may emerge directly from model calculations. Are there examples of thresholds which are output by the models?
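    The sandpile mentioned above is easy to play with directly. Below is a minimal sketch of the Bak-Tang-Wiesenfeld sandpile in Python – purely a toy illustration of SOC, with nothing taken from any actual climate model; the grid size, number of drops, and toppling threshold of 4 are the standard textbook choices:

```python
import random

# Toy Bak-Tang-Wiesenfeld sandpile on an n x n grid: drop grains at
# random sites; any site holding >= 4 grains topples, sending one grain
# to each neighbour (grains fall off the edge and are lost). The
# avalanche-size distribution becomes heavy-tailed -- the SOC signature.
def sandpile(n=20, drops=5000, seed=0):
    rng = random.Random(seed)
    grid = [[0] * n for _ in range(n)]
    sizes = []  # number of topplings triggered by each drop
    for _ in range(drops):
        i, j = rng.randrange(n), rng.randrange(n)
        grid[i][j] += 1
        size = 0
        unstable = [(i, j)] if grid[i][j] >= 4 else []
        while unstable:
            x, y = unstable.pop()
            if grid[x][y] < 4:  # may already have been toppled
                continue
            grid[x][y] -= 4
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < n and 0 <= ny < n:
                    grid[nx][ny] += 1
                    if grid[nx][ny] >= 4:
                        unstable.append((nx, ny))
        sizes.append(size)
    return sizes

sizes = sandpile()
print(max(sizes))  # occasional avalanches span much of the grid
```

    Once the pile reaches its critical state, most drops cause little or no toppling while rare drops trigger system-spanning avalanches – the threshold-dominated behaviour the comment alludes to.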

  270. Timothy Chase:

    bender,

    With respect to the “warm” welcome you received, you should have seen the “warm” welcome I received when I first joined DebunkCreation. I wasn’t being antagonistic or demanding that someone who has spent eight or more years becoming an expert in their field defend it while standing on one foot – on one of the more esoteric issues, no less. I simply popped up my head making a silly creationist argument as a joke – in a way that I figured everyone would realize was a joke. But given what they had experienced in the past with other individuals, it took some time to convince them that that was how it was intended.

    Now with respect to getting one’s questions answered, there was a question at one point that I had – I needn’t go into the details here – but I realized there had to be an answer, and it took ten years before I was able to answer the question for myself. You have spoken of stochastic processes presumably making climate prediction nearly impossible, but why would they? Bell-shaped distributions result with great regularity from stochastic processes, and in this sense stochastic processes are quite predictable. Moreover, climates themselves are in essence a form of probability density function – for the weather patterns which exist within them. The weather is what is particular, whereas the climate itself is a statistical description of what particulars we can expect to find as the result of the stochastic process by which a climate system evolves.
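    The point about stochastic processes being statistically predictable can be made concrete with a toy sketch in Python. The numbers here are invented for illustration (a 0.024 deg C/yr trend plus weather noise with a 0.1 deg C standard deviation, loosely echoing figures in the post) and are not output from any GISS run: individual years scatter from realisation to realisation, yet the trend – the forced, “climate” part – is recovered with little spread.

```python
import random
import statistics

# Hypothetical numbers: a forced trend plus Gaussian "weather" noise.
rng = random.Random(42)
trend, noise_sd, years, runs = 0.024, 0.1, 23, 200

def ols_slope(ys):
    # ordinary least-squares slope against year index 0..n-1
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(ys))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

slopes = []       # trend estimate from each realisation
final_years = []  # a single year's value across realisations

for _ in range(runs):
    series = [trend * t + rng.gauss(0, noise_sd) for t in range(years)]
    slopes.append(ols_slope(series))
    final_years.append(series[-1])

print(statistics.mean(slopes))       # clusters tightly around 0.024
print(statistics.stdev(final_years)) # single-year spread stays near 0.1
```

    Any one year differs between realisations by roughly the noise level, which is why comparing specific years is unreliable while the ensemble of trend estimates is stable.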

    Beyond this, in terms of the basics (at least from my perspective), what we are dealing with is a problem of induction. Any scientific theory reasons from what is known to the as of yet unknown. This is essentially what Gavin was getting at when he stated in the inline to #153,

    One can never know that you have spanned the full phase space and so you are always aware that all the models might agree and yet still be wrong (cf. polar ozone depletion forecasts), but that irreducible uncertainty cuts both ways. So far, there is no reason to think that we are missing something fundamental.

    Making inductions on the basis of what limited evidence we might have is part of the discovery process. It permits us to make predictions, and when those predictions fail, we know that it is time to replace or modify the theory. Now if the theory has worked very well in the past or with respect to a large body of evidence, chances are that we will not want to discard the theory altogether, but will realize that while that theory has worked with regard to earlier contexts, there is something new about this context – and then it is time to investigate what is new.

    Moreover, you will note that Gavin specifically stated that while the climate models are parameterized in some contexts, they typically continue to work quite well outside of those contexts. We are reasoning from the known to the unknown, and Hume’s arguments to the contrary notwithstanding, it generally works. If it didn’t, human cognition would be entirely impossible.

    Incidentally, I have been here a few weeks and have yet to receive an inline response, so you are ahead of me on that count.

    In any case, it is quite possible that my opinions on climate science are as worthless as you appear to think. My only defense is that, like the scientific endeavor itself, this is part of how I learn, and while I hope that people will correct me when I get something wrong, it has often been my experience that I will have to do this myself.

    Nevertheless, I would remind you that just because someone is not one of the leading climatologists themselves doesn’t mean that their opinions are entirely without merit. I have looked up some of the people who are participating in these discussions. They often have some fairly relevant experience and even expertise. Now I am not about to embarrass them or single them out by pointing out who they are. You can look some of them up yourself, or alternatively, judge them by their arguments and the rest of what you already know – to varying degrees of justification.

  271. bender:

    Ray, you say:
    “contributors to this blog all have day jobs. They may not feel they have the time to delve into the guts of the models with someone who has not shown that they have mastered the basics”

    First, I reject your notion of a hierarchy of qualification. I agree that a climate modeler must master the basics before building a climate model. An engineer familiar with large, complex models, in contrast, has the background in modeling to ask modeling questions that are independent of climatological detail. His questions are necessarily going to seem awkward to non-climatological specialists; however they should not be dismissed on the basis of the way they sound to a non-professional.

    Second, I am not seeking feedback from contributors who all have non-climatological day jobs. I am seeking feedback from qualified professionals who do climatological research on a daily basis. Thank you all for trying, however.

    Lastly, others posting here clearly do not understand themselves how the GCMs are built and parameterised, yet make pretenses of having read “the literature” – which is HUGE. Can I assume you are as critical of their amateur commentary as you are of mine? Or do you hold a double-standard in how you apply your criticism?

  272. Timothy Chase:

    bender,

    I had no idea you were so well-connected.

    Others might want to check out the following:

    On Earth Day, Remember: If Environmentalism Succeeds, It Will Make Human Life Impossible
    by Michael Berliner (April 21, 2006)
    http://www.capmag.com/article.asp?ID=4643

  273. bender:

    Thank you for the reply in #265. I will read the cited paper and those cited in it. Expect to hear back in a month. Meanwhile, if you can manage a post on the topic of “structural instabilities” in theoretical models and in the real climate system, that would be very much appreciated. [For the record, some of the comments by commenters were, as you suggest, apropos. It's the additional noise and the overall prejudicial tone from multiple commenters that disturbed me.]

  274. ray ladbury:

    Bender, I am not an expert in climate change. I am a physicist with enough physics knowledge–and enough knowledge of how physics gets done–to follow the field. About a decade ago, I was an editor at a physics magazine. Do you know how many times I would get calls from people who were absolutely sure they had disproved relativity? Do you know how often people like the contributors here are confronted–often forcefully and rudely–by people who are absolutely sure that they’ve found a fundamental error and we can all go back to driving our gas-guzzling SUVs without guilt? I will describe these people to you. They often have some technical background–engineer, doctor, computer programmer–that they feel gives them some special understanding. They often have no background in climate science, so they cannot understand the culture of the field or how and why it has developed in the way it has. And finally, they are monomaniacal–refusing to discuss any issue other than the one they are fixated on. If you want help understanding a field, look as little as possible like a monomaniac. Accept the help as it is offered. There is usually a reason why people present issues in the order they do. One may be a prerequisite to understanding what you are really interested in. For instance, I would think it would be very useful to understand the general approaches taken by various modeling groups before trying to delve into the instabilities of the models.

    I am a physicist with 20 years experience. I accept that Gavin et al. will teach me a whole helluva lot more about climate science than I will ever teach him about anything–unless I want to set up a website about radiation physics and he for some reason wants to know something about it. This is not their day job–which they are succeeding admirably at, by the way. Rather it is a community service trying to increase understanding of what they do. I applaud them for it.

  275. Barton Paul Levenson:

    [[First, I reject your notion of a hierarchy of qualification. I agree that a climate modeler must master the basics before building a climate model. An engineer familiar with large, complex models, in contrast, has the background in modeling to ask modeling questions that are independent of climatological detail.]]

    Cf. my earlier comments about engineers thinking they’re scientists. No, bender, just understanding computer modeling doesn’t mean you understand climatology as well, any more than someone who can program a Runge-Kutta differential equation solver but never took an astronomy course can do solar modeling.

  276. Alastair McDonald:

    Re 274,

    Ray, perhaps you could help me with a problem I am having with radiation physics. What will be the energy of the radiation emitted by a CO2 molecule in the atmosphere at NTP, and does it depend on the temperature of the air?

    Cheers, Alastair.

  277. Hank Roberts:

    >capmag
    Ew. I hope you’re wrong about this guy being connected to that. Deltoid’s debunking most of those stories.

  278. fredrik:

    “Cf my earlier comments about engineers thinking they’re scientists.”

    Once again, what is a scientist and what is an engineer in your view?

    Is a scientist someone doing research and an engineer someone applying research?

    “An engineer familiar with large, complex models, in contrast, has the background in modeling to ask modeling questions that are independent of climatological detail.”

    I agree with this. Barton, there is a huge difference between a background in modeling and a skill in computer programming.

    I haven’t read all the posts, but bender’s concern about parameterizations seems to be a valid question, as is well known in system identification; however, Gavin wrote a good answer and it seems the parameterizations are done in a good way.

  279. Timothy Chase:

    I have noticed that there is a great deal which goes into climate modelling, and not just the ability to model weather, either.

    Obviously you have the physics, a keener interest in atmospheric conditions at higher altitudes and the like, but there is also the understanding of ocean flow, chemistry (e.g., the effects of water vapor on the ozone and carbon dioxide on ocean chemistry), soil, the role of particulates in the nucleation of precipitation, geology (e.g., permafrost and methane hydrates), and now even ecology and microbiology and their effects upon carbon sequestration. I have never seen a discipline quite like this. Nowadays much of the science being done is interdisciplinary, requiring the formation of teams from various disparate fields – which at times will even have difficulty communicating with each other due to their specialized approaches to viewing problems and even their vocabularies. However, I suspect that climatology has been at the forefront of such an integrative approach – and that there is no other branch of science which requires this sort of integration to as great an extent.

    I myself fully realize that I am an amateur in this and a number of other areas. But it is something that I would like to understand to the extent that I can – for a variety of reasons – including simple curiosity. For this reason, I greatly appreciate the work being done by Real Climate to communicate the principles of climatology with the general public, although sometimes I can’t even imagine why they would want to do it.

  280. Timothy Chase:

    PS

    I can’t believe I forgot forestry and agriculture!

  281. Timothy Chase:

    Ray Ladbury (#274) wrote:

    I am a physicist with 20 years experience. I accept that Gavin et al. will teach me a whole helluva lot more about climate science than I will ever teach him about anything–unless I want to set up a website about radiation physics and he for some reason wants to know something about it.

    Yep.

    You are one of the people I looked up and decided had some fairly relevant knowledge and expertise. (See post #270, last paragraph.) I would likewise include, to a lesser degree, those who have had to work closely with climatologists over an extended period of time.

    But you are right: having advanced knowledge in a relevant discipline doesn’t make one an expert in climatology, and it would be a grave mistake to think otherwise. In an advanced economy, there exists a well-developed division of cognitive labor, particularly in the sciences. It is often difficult for one to be an expert even in their own field, which is part of the reason why new disciplines keep branching off of earlier ones. Moreover, it is a virtual impossibility for one to become an expert in a field for which they never trained. In fact the phrase “contradiction in terms” springs to mind. And one of the mistakes which it is far too easy to make is to gaze at another field from a distance and assume that the only expertise or knowledge it requires is that which immediately seems obvious. However, I suspect that in the case of the pseudo-scientist or monomaniac, there is usually a bit more to it.

    In the case of evolutionary biology, engineers will often be the worst group of offenders, although this isn’t meant to disparage them in their own fields. They will often be the creationists who see something which works as an integrated whole – and have considerable difficulty seeing how that whole may have come into being gradually and as the result of a blind process. The concept of systemic causation will often evade their grasp. In fact, it would sometimes appear that they haven’t even grasped the principles of organic growth and development even at their most basic level. However, it should also be recognized in passing that there are generally certain religious motives involved, a personal framework through which they understand and live their lives which they then seek to impose upon the evidence and empirical science.

    As for myself, I will pick up articles in prebiotic chemistry, virology as it relates to retroviruses or bacterial viruses, or molecular biology, such as those dealing with new discoveries related to promoters, transcription factors (proteins and RNAs which regulate the expression of other proteins and RNAs by binding to promoters), but even after having read a fair amount, trying to fit together pieces in a puzzle, I will be lucky if I have actually understood more than half of a given article at a relatively basic level. My understanding of advanced fields in science will always be superficial, and I will always be something less than a novice. But this isn’t something I can resent any more than the law of gravity. It is simply in the nature of things, one of the essential aspects of the human condition.

    Recognition of expertise is not a form of blind faith, nor does it prevent you from understanding a given field as best you can, but it requires a recognition of one’s own limitations which is itself an expression of rationality.

  282. Timothy Chase:

    frederik (#278) wrote:

    I haven’t read all the posts, but bender’s concern about parameterizations seems to be a valid question, as is well known in system identification; however, Gavin wrote a good answer and it seems the parameterizations are done in a good way.

    Having blundered part way through the article that Gavin suggested, what I have noticed is the extreme level of specification which exists in the climate model.

    Every detail has to be justified, particularly when it deviates from earlier models. And at every point, the climatologists seek to ground whatever mathematical assumptions they might make in that which has been empirically demonstrated. I even saw mention of experiments where humidity determined the size at which dust clustered – and how this further affects what is called the “imaginary index of refraction” at different wavelengths – which no doubt plays an important role in determining the associated albedo and greenhouse effect. In some ways it reminded me of all the math and physics which goes into creating realistic computer-generated images, but it is far more sophisticated, dealing with a far larger variety of phenomena than the effects of angle, distance, surface properties, and atmosphere which must be understood in order to recreate how things appear.

    Sometimes the formulas used will be approximations in one form or another for the sake of simplifying calculations, but they will be good approximations, not something which was simply thrown in there to get the results one wants. For their models to be accepted and used by other climatologists, they have to be rationally defensible at each and every step. They have to meet high standards. Moreover, the results of such models must be replicable – which in this case means that the models, it would appear, have to be made available to other researchers – including even the source code.

    From what I can see, there is a great deal of art to this in the sense of ingenuity, but there is very little latitude for fudging either the model or the results – and I strongly doubt that it would be looked upon kindly by the rest of those within the discipline.

    My apologies, but I was actually taken a bit aback by the whole thing.

  283. Timothy Chase:

    Hank Roberts (#277) wrote:

    “capmag”

    Ew. I hope you’re wrong about this guy being connected to that. Deltoid’s debunking most of those stories.

    I hope so, too.

    I still remember a time when I looked up to one of his colleagues – somebody who I thought had the desire and the ability to change the Objectivist movement. He changed. After witnessing what happened to the main group and three splinter groups – well, that is when I decided that I am very distrustful of ideologies as such.

    Incidentally, I never was much into the politics – and my understanding of the ethics became much more dialectical. But everything took a backseat to epistemology and my attempt to understand it within the context of twentieth century developments.

    In any case, it was probably a bad idea to post that – since I was just guessing. But at the same time, if it was that Michael, I would want you guys to know who you are dealing with. They are fairly intelligent, and rhetoric (in a variety of senses) is a large part of what they do.

  284. bender:

    You see what I mean about presumptuousness? Here you are, accusing me of practising ideology when what I am actually practising is actuarial science. i.e. How much should I charge as an insurance premium so that I maximize my profit while maintaining a monopoly on the business of the alarmed? Nothing ideological about it. I can’t maximize my profit unless I know how much cost I can expect to incur, and with what degree of uncertainty. Just as you can’t propose a rational CO2 mitigation scheme without knowing the expected cost-benefit ratio. Throw your philosophy texts in the recycling bin and get into some economics. But be sure to start with Lorenz and Smale. (Hint: it’s not an insurance firm.)

  285. Timothy Chase:

    bender,

    Yes, I know, I have wounded you. I should have been more trusting, but I am suspicious by nature, superstitious and paranoid. It is probably biochemical. If you still feel like giving me a piece of your mind, my email is:

    timothychase AT gmail.com

    But don’t worry, I am not expecting anything, I won’t be especially hurt if you don’t write, and I don’t check my email that often anyway, so I probably won’t even notice. In the meantime, I might try finding someone to talk with who values my opinion above lint.

    I hope things work out for you.

  286. John P. Reisman:

    PEAK OIL RESEARCH

    From Princeton Plasma Physics Laboratory

    http://www.pppl.gov/

    I’ve been pretty busy lately, and just noticed Peak Oil being discussed more. For those interested, here is a US Government-funded study released in February 2005:

    http://www.pppl.gov/publications/pics/Oil_Peaking_1205.pdf

  287. Ray Ladbury:

    Re 276: Hi Alastair, what specifically do you want to know? Basically, increased temperature means increased motion of the molecules, so the main thing that happens is the absorption lines get broadened. The relevant IR lines for CO2 have to do with vibrational states, so they ought to couple pretty efficiently to the kinetic degrees of freedom. Any dependence, though, is going to be pretty much negligible for small temperature rises – remember it’s Kelvin that is the appropriate scale, not Centigrade.
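    Ray’s point that Kelvin is the appropriate scale can be checked on the back of an envelope. Here is a small Python sketch of the Doppler half-width, dv = (v0/c) sqrt(2 ln 2 kT/m); the line centre and temperatures are illustrative choices, and pressure broadening is ignored entirely:

```python
import math

K_B = 1.380649e-23                 # Boltzmann constant, J/K
C = 2.99792458e8                   # speed of light, m/s
M_CO2 = 44.0 * 1.66053906660e-27   # mass of a CO2 molecule, kg

def doppler_hwhm(nu0_hz, temp_k, mass_kg):
    # Doppler half-width at half-maximum of a line centred at nu0_hz
    return (nu0_hz / C) * math.sqrt(2 * math.log(2) * K_B * temp_k / mass_kg)

nu0 = C / 15e-6  # ~2e13 Hz, centre of the CO2 bending-mode band
w288 = doppler_hwhm(nu0, 288.0, M_CO2)  # present-day surface air
w298 = doppler_hwhm(nu0, 298.0, M_CO2)  # 10 K warmer
print(w298 / w288)  # sqrt(298/288), roughly a 1.7% change
```

    Because the width scales as sqrt(T), a 10 K warming changes it by only about 1.7% – the sense in which the temperature dependence is negligible for small rises.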

  288. Alastair McDonald:

    Re 287:

    Hi Ray, what I am really asking is “Would I be correct to use the Planck function to calculate the emissions from vibrationally excited greenhouse gas molecules?”

  289. Ray Ladbury:

    Re 288: As long as the matter is in equilibrium with the radiation field, I believe Planck’s law is the appropriate one to use – and I believe that that is the case at the frequencies of the vibrational transitions.
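    For anyone who wants to put numbers to Planck’s law as invoked here, a minimal Python sketch follows; the 15-micron wavelength and the two temperatures are illustrative choices, not taken from any model:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def planck(lam_m, temp_k):
    # Planck spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1:
    # (2 h c^2 / lambda^5) / (exp(h c / (lambda k T)) - 1)
    x = H * C / (lam_m * K_B * temp_k)
    return (2 * H * C**2 / lam_m**5) / math.expm1(x)

b_surface = planck(15e-6, 288.0)  # near-surface air temperature
b_mid = planck(15e-6, 250.0)      # a colder mid-troposphere level
print(b_surface > b_mid)  # warmer matter emits more at this wavelength
```

    The comparison between the two temperatures is the crux of Alastair’s question: whether the colder, higher-altitude emission really follows the Planck curve for the local air temperature.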

  290. Alastair McDonald:

    Re 289

    Ray, I assume that when you wrote “As long as the matter is in equilibrium with the radiation field, …” you meant that molecular collisions were causing equal amounts of excitation and relaxation.

    Although that will be true in the mid atmosphere, do you agree that is not the case near the surface of the Earth where the greenhouse molecules are being excited by blackbody radiation from the Earth’s surface, but are being relaxed by collisions with other air molecules such as N2 & O2?

    I find it difficult to see how the greenhouse gases, which create lines and bands in the blackbody spectrum of the Earth’s radiation field, can also be radiating with an intensity determined by Planck’s function.

  291. Neal J. King:

    Is there some reason that the curve of observed GAT is not continued past 2003? That’s 4 years ago.

  292. Timothy Chase:

    Neal J. King (#291) wrote:

    Is there some reason that the curve of observed GAT is not continued past 2003? That’s 4 years ago.

    Don’t know.

    But 2005 beat out 1998 as the warmest year from 1890 to 2005. And while 1998 had the benefit of an unusually strong El Nino, 2005 did not. Moreover, 2005 saw the lowest solar activity since the mid-1980s. 2005 was pretty unusual compared with the twentieth century – but part of a trend for the past five, ten, fifteen and twenty years.

    The general direction was up – just like for the whole of the twentieth century.

  293. Walt Bennett:

    In a discussion of Hansen’s latest paper over at CS, Willis E. made the following comment regarding the validity of current climate models:

    “Then we should perform Validation and Verification (V&V) and Software Quality Assurance (SQA) on the models. This has not been done. As a part of this, we should do error propagation analysis, which has not been done. Each model should provide a complete list of all of the parameters used in the model. This has not been done. We should make sure that the approximations converge, and if there are non-physical changes to make them converge (such as the incorrectly high viscosity of the air in current models), the effect of the non-physical changes should be thoroughly investigated and spelled out. This also has not been done.”

    I asked him for references showing that the above have not been done. His reply was that he has no source, but that he can find no reference to any of the above having been done.

    Does anybody know how true the above is? If it is in any part not true, are there any supporting references?