RealClimate

Comments

  1. Professional publicists find hype attractive because human attention is hardwired to be drawn to extremes.

    The distasteful fact is that this is equally true on all sides. Regardless of how much is said about signal-to-noise ratios and framing, all sides in the climate debate lust after contrast.

    PBS pundits, Beeb reporters and scientific court historians all love inflammatory quotes no less than Murdoch's minions, and Nature cover designers would be as remiss as political spot producers if they failed to keep up with the eye-catching state of the art of advertising.

    Comment by Russell — 20 Sep 2012 @ 11:09 AM

  2. Captcha certainly will reduce spam once it includes an extended scientific notation character set.

    Great idea !

    Comment by richard pauli — 20 Sep 2012 @ 11:15 AM

  3. Laplace:

    “The most important questions of life are indeed, for the most part, really only problems of probability.”

    Comment by tt — 20 Sep 2012 @ 11:29 AM

  4. I rarely have an opinion since I lack the scientific credentials, but this question has a wider social/political angle. Modeling that can give a probability of the increase over baseline of unusual events over time should most certainly interest policy makers and those who manage money. As the models improve, the geographic locale will narrow from global to, e.g., a continent.

    Comment by Eric L Gold — 20 Sep 2012 @ 11:52 AM

  5. “… the odds of heat waves have shortened (and odds for cold snaps have decreased)”

    Perhaps a clearer statement would be that odds of heat waves have increased, while odds of cold snaps have decreased?

    The quoted text mixes directions, as “shortened” and “decreased” appear at first blush to say the same thing…

    Comment by KR — 20 Sep 2012 @ 12:17 PM

  6. Having been at the meeting, I think that most of the skepticism was directed at the pseudo-real-time aspect of extreme event attribution. The ‘users’ (apart from perhaps the media and lawyers) did not identify a valuable use for the attribution of specific individual events (such as the EU or Russian heatwave). The vast majority thought that the attribution of classes of extreme events (e.g. increase in 90th percentile hot days) is a valuable scientific endeavour.
    Ed.

    [Response: Ed, thanks. I don’t see a huge distinction between the notion of fractional attribution of a single event and the attribution of a class of events very like the single event in question (though I appreciate that I might not have thought about this particularly deeply). Obviously, one can envisage singular events that have attributable singular causes (i.e. a dramatic cool spell after a huge tropical eruption), but that doesn’t seem particularly relevant for the general class of things that people have looked at. – gavin]

    [Response: A further thought. The real-time attribution part goes much more to the role of science in answering questions that the public has. And as I mention, extreme events provoke a big increase in questions. To the extent that a public service must, in some sense, serve the public, it is useful to have near-real time analysis while people are interested. Not all scientists have this mission, but those working for a putative ‘Climate services’ dept. might well have…. – gavin]

    Comment by Ed Hawkins — 20 Sep 2012 @ 12:24 PM

  7. As far as I know, the utility point seems to be more psychological than rational. Yes, most adaptation policies are or should be based on projections, but the attribution story makes the projections easier to convey. I think it links to a fundamental way of learning – the same way everybody wants to know how and why an accident happened so that we can avoid it in the future.

    As to the quality of the models, Nature follows the argument that was made by many speakers that an attribution study should include a verification section showing that the model represents the relevant mechanisms well enough. This does not follow immediately from having the right resolution; the relevant mechanisms should also be shown to be simulated well enough. This is not necessarily the case for current models for all interesting extremes…

    [Response: Sure, but it also isn’t necessarily true that it is never the case. – gavin]

    Comment by Geert Jan van Oldenborgh — 20 Sep 2012 @ 1:05 PM

  8. For members of the public, and hence policymakers, a statement like “event X that we have seen is largely attributable to global warming” is actually more meaningful, in terms of plans for future action, than “events of class X are largely attributable…” etc. A good story tends to trump general statements, except maybe for scientists like us.

    Comment by Spencer — 20 Sep 2012 @ 1:07 PM

  9. I am reminded of this story.

    “In the history of ideas, there are examples of questions being answered that had earlier been judged forever out of science’s reach. In 1835 the celebrated French philosopher Auguste Comte wrote, of the stars: ‘We shall never be able to study, by any method, their chemical composition or their mineralogical structure.’ Yet even before Comte had set down these words, Fraunhofer had begun using his spectroscope to analyse the chemical composition of the sun. Now spectroscopists daily confound Comte’s agnosticism with their long-distance analyses of the precise chemical composition of even distant stars.” – Richard Dawkins

    Those who say science cannot know are usually wrong. Even as people say we cannot attribute extreme weather events to climate change, some already are being attributed. We’ve got the sun, now let’s reach for the stars.

    Comment by Unsettled Scientist — 20 Sep 2012 @ 1:21 PM

  10. 1) I am not a climate scientist – just a common citizen – but I think Nature has got it completely backwards. There is already a climate model in existence that is being used to design our infrastructure – NOAA Atlas 14 – and that model grossly underestimates rainfall in US coastal areas.

    2) As a result, real estate developers are getting EPA/state/local government approvals to lay down massive amounts of impervious cover (roads, parking lots etc) and building stormwater systems – which have an operational life of 100 years – that are inadequate the very day on which they become operational. People die in floods, you know.

    3) Plus NOAA's precipitation data is updated only about every 40 years. Atlas 14 – dated 2006 – only applies to some states. Technical Paper 40 (1961) is still the standard precipitation data source for many states:
    http://www.nws.noaa.gov/oh/hdsc/currentpf.htm

    4) The taxpayers fund science so that it can contribute knowledge and wisdom to the solution of our national problems. If the only thing you do is confirm the obvious 30 years hence, then why should the voters fund your hobby?

    5) Do you people agree with NOAA's Atlas 14, Volume 2? If not, what are you doing to get it corrected?

    Comment by Don Williams — 20 Sep 2012 @ 1:24 PM

  11. If scientists refuse to answer public questions about attribution of specific extreme weather events to climate change on the grounds that those questions are poorly posed, then others, less constrained by knowledge or scruples, will step in and provide the answers.

    It’s admittedly difficult, maybe impossible, to answer such questions with the rigour and confidence that scientists usually insist upon. And, when it comes to probabilities and uncertainties, communication with the general public is fraught with pitfalls. That’s all the more reason not to leave attribution commentary to the uninformed.

    Comment by Andy S — 20 Sep 2012 @ 1:41 PM

  12. 1) For example, I live in an area subject to flooding about 15 miles west of Philadelphia. The Pennsylvania Turnpike is being widened to 8 lanes, and they are using the following NOAA Atlas 14 rainfall data to compute stormwater runoff: 2-year storm event: 3.16 inches, 5Y: 3.91, 10Y: 4.57, 25Y: 5.60, 50Y: 6.53, 100Y: 7.63 inches. Basins are designed to reduce the 2Y and 5Y peak rates and keep the 10Y rate from increasing.

    2) Only problem is, we have had 4 storms in August-September over the past 13 years that gave 8 inches, 9.23 inches, 9.4 inches, and 5.13 inches. Three 200-year storms within a decade. August 2011 broke Philadelphia's 100-year-old rainfall record BEFORE Hurricane Irene arrived on August 28 and dumped 8 inches of rain.
    3) And it is not just Philadelphia – Atlanta received a massive rainfall in Sept 2009 that NWS characterized as in excess of a 10,000-year storm event.
    4) I suspect NOAA's ocean model is faulty. Or perhaps they have no ocean model – Atlas 14 Volume 2 was based on statistical analysis of weather station observations for the past 100 years, and to the best of my knowledge NOAA did not have weather stations sited on the Atlantic Ocean every 10 miles collecting rainfall data over the past century.
    5) NOTE: One problem is that Pennsylvania uses the mean of the Atlas 14 rainfall estimates for storm events instead of the upper bound of the 90 percent confidence interval. But I think a problem still exists even if you do that.
    6) But I have seen no researchers looking into this anomaly and its causes. Can anyone shed any light on this? (See the sketch below.)

    Comment by Don Williams — 20 Sep 2012 @ 1:56 PM
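
    The arithmetic behind the anomaly described above is easy to check. If the Atlas 14 return-period estimates were right and the climate stationary, exceedances of a 200-year storm would arrive roughly as a Poisson process, and three of them in 13 years at one location would be extraordinarily unlikely. A minimal sketch, assuming stationarity and independence between storms (exactly the assumptions in question), with the figures taken from the comment:

    ```python
    from math import exp, factorial

    def prob_at_least(return_period, window, k):
        """P(>= k exceedances in `window` years) for a stationary Poisson
        process whose annual exceedance rate is 1/return_period."""
        lam = window / return_period  # expected number of exceedances
        p_fewer = sum(exp(-lam) * lam**i / factorial(i) for i in range(k))
        return 1.0 - p_fewer

    # Three ~200-year storms within ~13 years, as reported above:
    print(prob_at_least(200, 13, 3))  # ~4e-5 under stationarity
    ```

    A probability that small means remarkable bad luck, an artifact of looking across many gauges and quoting the worst, or an exceedance rate that has been estimated too low – and separating the last possibility from the first two is exactly what attribution of classes of events tries to do.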

  13. Gavin, I'm surprised you were surprised: this would not be the first time scientists have organized a meeting to check that something they have planned or even done is really useful. Science is about knowledge, not necessarily utility.

    [Response: Indeed. But most of the time it goes without saying (i.e. everyone takes it for granted). – gavin]

    More seriously, I'm also somewhat surprised by your response about mitigation and adaptation. It looks as if you consider that attribution to anthropogenic forcings is a prerequisite or a synonym for reliable 21st century projections. Is it that obvious?

    [Response: No, I didn’t say that. But in assessing future vulnerabilities it is much easier if we are already seeing effects, even though the latter does not preclude the former. – gavin]

    Just think about the formerly attributed late 20th century positive NAO trend and look at the recent CMIP5 projections: you might also be surprised. Moreover, and more importantly, the lack of successful attribution does not preclude the validity of the projections: just think of the early global warming projections.
    If statisticians really want to address people's curiosity, they might also think of clarifying our ideas about model reliability and use attribution as an objective tool for better constraining climate projections.

    Comment by H. Douville — 20 Sep 2012 @ 2:26 PM

  14. I can say with 100% confidence that if only humanity had increased GHG emissions further, then Katrina could have been prevented.

    You get the random weather you get. Period. Specific attribution is: change anything significant about the past, and you can prevent not just any event, but all events. Completely different events will take their place.

    “We’re gonna get x% more floods” is a good scientific statement.

    "Global Warming caused specific event Y" is one of those true-by-default statements. That wonderful day last week? AGW supplied.

    Comment by Jim Larsen — 20 Sep 2012 @ 3:37 PM

  15. Don Williams,

    this is not the site of NOAA, and there is a world with climate and climate science outside of North America. A climate map is not a climate model, and the purpose of science is to gain knowledge and an understanding of how things work.

    Comment by Marcus — 20 Sep 2012 @ 3:56 PM

  16. @ Marcus
    What happened to rigorous peer review? Isn’t that the essence of science?

    [Response: Sure, but posting extremely specific off-topic issues that only you know anything about on a blog comment thread is not the way to do it. – gavin]

    Comment by Don Williams — 20 Sep 2012 @ 4:29 PM

  17. Down on the farm, attributing extreme events is worth $billions or much more. Consider the corn situation right now. The corn belt covers Ohio through Iowa and Missouri through Minnesota. I don’t know how far it goes into Canada. The area is huge and so is the money involved. From Kansas into Saskatchewan they grow wheat. Wheat requires less water. Cotton requires even less water and is called a dryland crop.

    Farmers are very aware of the weather and its history, but they don’t speak “Scientific.” If you want to communicate with farmers, you had better talk like a farmer.

    An extreme event well worth talking about is the weather we have had for the last 2 years. Missouri corn died early this year because of the drought and was chopped up to make silage. [Silage is “grass or other green fodder compacted and stored in airtight conditions, typically in a silo, without first being dried, and used as animal feed in the winter.”] Silage is definitely not what you want to make out of your corn. This year, there was insufficient rain when it was needed. Last year there was too much rain when dryness was needed at harvest time. The harvest was delayed because the fields were too muddy for a combine. The corn is fine in Minnesota. Illinois land is still a foot short of soil moisture.

    Should farmers switch to a dryland crop? Should they install irrigation? Where will irrigation water come from? Should the government subsidize crop insurance for those who don’t switch? How will this affect price and availability of foods? Was the land in Iowa really worth $10,000/acre?

    Attributing the change in the rain pattern is absolutely critical when talking to farmers and congressmen. Communicating to them is just as critical for getting enough votes in congress to mitigate GW and to adapt to GW. If RealClimate is about anything, RC has to be about attributing the change in the rain pattern.

    Comment by Edward Greisch — 20 Sep 2012 @ 4:30 PM

  18. Hi Gavin,
    Yes – I agree that the greatest demand for real-time attribution comes from the public and the media. Does this mean we *have* to provide answers? This is a tough question.

    Quantitative statements can of course be made soon after any event, but by definition these will be less rigorous than a more lengthy look, which may take a year or more and come to a different conclusion based on more evidence. There was a great deal of discussion about the benefits and risks of such an approach.

    And, as Geert Jan has said, the model evaluation part is absolutely crucial. For example, evidence was presented that the recent Texas summer heatwave was at least partly due to the preconditioning of a dry spring and hence dry soils – the extent to which the models used for attribution capture this process needs to be assessed to be confident about any conclusions. Could this be done in real-time? I doubt it!

    cheers,
    Ed.

    [Response: If the public have valid science questions, and scientists refuse to answer, you will find the void filled by people who are not scientists for their own purposes. Thus the choice is not between answer or no answer but between answer with scientific backing and answer unhinged from any connection to the facts. Whether you personally should be mandated to do this – well, of course not, but if the government sets up a climate service to provide information to the public about climate, then, yes, they should as best as they are able.

    Whether data that exists can be analysed in real-time is not a question of science, but a question of tools. There are enough high frequency output data sets of climate model simulations that a large class of 'extremes' comes within the purview of the multi-model ensemble. That in practice no-one is able to query them for a specific kind of extreme statistic in real time is an argument for investment in appropriate analysis software, not a statement that it is scientifically invalid. And of course for each analysis thought needs to be given to what the "Level of Scientific Understanding" is for each conclusion. Are you really implying that statistics of, say, heat waves have to take a year of work for every individual event? Maybe the first or second time it might take a while, but it gets much easier after the first dozen, no?

    I agree that setting up a PPE for a new target is time consuming if that is what you want to do, but that is not the only way to do this, and it wouldn’t be first choice for most events. – gavin]

    Comment by Ed Hawkins — 20 Sep 2012 @ 4:41 PM

  19. I think posters #8 and #11 have a point. Andy S succinctly sums up the points I was about to make: witness some of the discussion and media reporting generated in response to the 2012 North American heatwave (caveat: much is still woeful). I'm not suggesting statements are crafted to garner media attention, but in response to climate/weather extremes the average person will look for cues to help interpret such events.

    Comment by Watching the deniers (@WTDeniers) — 20 Sep 2012 @ 5:03 PM

  20. Gavin, thanks for this post. I think this subject is extremely difficult but also important. It is the extreme events that are often most damaging, and if we can develop a probabilistic methodology for attribution, its predictive value will far exceed its explanatory value.

    Don Williams has an important point – agencies like NOAA, the Army Corps, etc., and insurance companies need to be making decisions based on the new climate regime we are creating, or the result will be unsustainable loss of life and property.

    Extreme value statistics is still a relatively young field. However, despite the rarity of events in the tail of the distribution, in some ways these events may actually offer more insight into how the distribution is changing than those near the mean. It is one time where the Central Limit Theorem isn't our friend. (See the sketch below.)

    Comment by Ray Ladbury — 20 Sep 2012 @ 5:09 PM
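
    For readers unfamiliar with the machinery Ray refers to: the standard tool is the Generalized Extreme Value (GEV) distribution fit to block maxima, such as annual maximum rainfall or temperature. A minimal sketch using synthetic data in place of real station records (all parameter values here are illustrative, not from any study):

    ```python
    from scipy import stats

    # Synthetic stand-in for 60 years of annual-maximum rainfall (mm).
    annual_max = stats.genextreme.rvs(c=-0.1, loc=80.0, scale=20.0,
                                      size=60, random_state=42)

    # Fit a GEV distribution to the block maxima.
    shape, loc, scale = stats.genextreme.fit(annual_max)

    def return_level(n_years):
        """Level exceeded with probability 1/n_years in any given year."""
        return stats.genextreme.ppf(1.0 - 1.0 / n_years, shape, loc, scale)

    for n in (10, 50, 100):
        print(f"{n:>3}-yr return level: {return_level(n):6.1f} mm")
    ```

    Attribution of a class of events then amounts to asking how the fitted parameters, and hence the return levels, differ between simulations with and without anthropogenic forcing – which is also why the tail, not the mean, carries the signal Ray describes.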

  21. It’s the slippery slope argument, I think. Where do you draw the line? I would suggest you draw the line where there is some value of prediction.

    Prediction goes hand in hand with understanding attribution. If volcanic eruptions are predicted very well, then that's all well and good.

    How about the Texas heat wave? Did the numerical models overpredict or underpredict the event? Or did they only do it after the fact (like predicting a cool ocean in the wake of a hurricane – not really a prediction at all)?

    An analysis of models versus obs is required for such discussions. Is there any data for the Texas heat wave?

    [Response: Specific prediction longer than a seasonal forecast for a large scale drought like this is impossible, and it's tough for seasonal forecasters as well. It has very little to do with fractional attribution to specific drivers, which is a statistical argument, not a specific one, i.e. what are the odds for an event like this and are they changing? Not that global warming caused this event to happen on Tuesday. There are plenty of interesting questions in prediction of course, but they aren't really related. – gavin]

    Comment by Isotopious — 20 Sep 2012 @ 5:30 PM

  22. The collapse in Arctic sea ice that we are currently witnessing is an extreme event that appears to be validly attributed to anthropogenic climate change as it occurs.

    The Nature editorial referred to the Arctic sea ice collapse in the first paragraph before moving to other extreme events like hurricanes and heat waves, but what is happening in the Arctic right now seems to answer the question "can we attribute extreme events to climate change in real time?", at least for one form of extreme event.

    Geert (@ 7) made the point that “an attribution study should include a verification section showing that the model represents the relevant mechanisms well enough.” That appears to be simple enough to do for the Arctic sea ice in real time given the existing literature and the clear trend in the data.

    Comment by Chris McGrath — 20 Sep 2012 @ 5:51 PM

  23. Let me give it a try….
    By studying, and therefore proving, Climate Change, we can 'inform' business that, with 70-80% of our developed industrial infrastructure within the zone that said research has proven will soon be inundated by the rising seas that said industry (and our lifestyles) is/are responsible for, it's a Great Time to think like Sir Arthur C Clarke, when he was writing "Fountains Of Paradise".
    Now, while the 'space elevator' may not ever come to pass, Stratolaunch Systems – which will use a 'White Knight-like' carrier to carry SpaceX's Falcon 9 Rocket to the edge of space, driving the cost of Earth-to-LEO launches down to pennies on the dollar of what it costs today – is close to being a REALITY.
    That taken….just THINK ABOUT IT!
    The Lunar Surface is quite actually an IDEAL Industrial Environment!
    Why, there's probably enough Iron-Titanium-Aluminum in the top Meter of Regolith to PLATE THE MOON'S SURFACE several centimeters thick!
    Why….Just Imagine! No EPA! Why, you could have a Chernobyl AND a Bhopal BLOWING UP EVERY WEEK, and – No Biggie!
    Think about it……One Piece (Seamless, as the seams are the 'weakest link') Composite Aircraft, cured in the Giant Vacuum Furnace that is the Lunar Day!!!
    And, well, since you're going to have to build it ALL OVER AGAIN – SOMEWHERE! – Any-Old-Way…..Why NOT?!?
    There….there’s my “Best shot du jour”.

    Comment by James Staples — 20 Sep 2012 @ 6:35 PM

  24. “Therefore improved attribution of shifts in extremes (in whatever direction) have the potential to change cost-benefit calculations and thus policy directions.”

    That is IT… no other reason to do this stuff comes close.

    This is the most important reason why SOME people will claim that this is scientifically useLESS and not worthy of examination. Stop being so charitable.

    Denialists HAVE to fear attribution becoming more accurate and damage being attributed more often. It is the one thing that WILL get the public to turn on them and recognize their lies.

    It is the “end game” in which the obfuscators finally lose control over the outcome of court battles and “conflicting studies” and the “experts disagree” argument and, as the Tobacco Companies had to do, finally retreat.

    BJ

    Comment by bjchip — 20 Sep 2012 @ 6:43 PM

  25. Yeah, I don't know. I wouldn't be at all surprised to see AGW factored into a 10-day forecast. Maybe the blocking high will have slightly higher pressure than it would if it were not for global warming. Likely, in my opinion, since obs are always being fed into weather models.

    But this is the slippery slope argument, because then a big dump of snow and blizzard-like conditions gets pinned on global warming, since melting arctic ice and increased water vapor made the snow storm “even worse”.

    Pakistan gets washed away because of higher sea temps, global warming again.

    I think if you want to attribute these types of extremes to AGW, then you need to make a prediction beforehand. The point is that there are literally thousands of events going on all the time, and picking "some" of the bad ones out before they occur is a clear sign that your attribution is credible.

    Whether you think this is impossible or not is not the issue. It's far more scientific (and useful) to at least try than to be captain hindsight. Applying the climate dice to specific real-world events before they occur is the best measure of attribution. If there is any evidence of more extremes then you will not only get some statistical validation, you will also be in a position to deduce what happened and why.

    When are you ever going to make a mistake (and challenge your own understanding) if you simply attribute things after they occur? Never.

    Comment by Isotopious — 20 Sep 2012 @ 9:05 PM

  26. Any chance of obtaining a complete list of attendees? Is there also any chance that the people who made the cynical remarks referred to in the Nature editorial will step forward to expand here on their comments? It would be a fruitful and interesting discussion to have “in person” instead of being relayed by an editorial.

    Comment by MapleLeaf — 20 Sep 2012 @ 9:18 PM

  27. Litigation is rather important in dealing with climate issues. The EPA endangerment finding for carbon dioxide had to be dragged out of it through court, for example. However, this has been a part of mitigation. Adaptation is resting with legislative bodies so far, such as the decisions to ignore sea level rise in North Carolina or to build stronger levees for New Orleans.

    I feel that the Nature piece misses the main place where attribution will be used, and that is to make people whole when disaster strikes. We still don’t have a farm bill, but a carbon tariff applied to Chinese imports could certainly help to compensate farmers for lost crops. It is in the international arena that attribution will be used the most. The standard of proof is not high there. That China has the largest emissions is reason enough to extract compensation. And it should be. For mitigation we must all work together, but for adaptation, polluter pays and China is the polluter that has pushed climate change into the dangerous regime.

    Comment by Chris Dudley — 20 Sep 2012 @ 10:48 PM

  28. If recent extreme events are not at all due to global warming, we can assume they were aberrations and simply wait for a return to normal.

    But if these events are due in large part to global warming, then we need to spend a lot of money now to change our infrastructure to prepare for a new normal.

    How could anyone claim it’s not useful for society to know which is the case?

    Comment by James McDonald — 20 Sep 2012 @ 10:59 PM

  29. Three papers I found influenced my thinking on this point:

    L. A. Smith, N. Stern, “Uncertainty in science and its role in climate policy”, Phil. Trans. R. Soc. A (2011) 369, 1–24, doi:10.1098/rsta.2011.0149

    K. E. Trenberth, “Framing the way to relate climate extremes to climate change”, Climatic Change, DOI 10.1007/s10584-012-0441-5

    K. E. Trenberth, "Attribution of the human influence on climate", WIREs Climate Change, Volume 1, 2011, John Wiley & Sons, Ltd., http://www.cgd.ucar.edu/cas/Trenberth/trenberth.papers/WIREspaper-mockup-2.pdf

    I find Smith & Stern to be the most sensible, although I agree with Trenberth, and have advocated his position in my personal contacts. There is a scientific narrative, but few can understand that narrative unless it unfolds dramatically before their eyes. Fewer still understand the need for, and ramifications of, economic adaptation as well as energy adaptation.

    I think a more compelling story can be had in the logs of corporations like Swiss Re (see http://www.swissre.com/rethinking/climate/) who have logged tangible impacts of climate. Ultimately, climate change will have an economic impact, and, in one sense, economic gains in the present are being obtained at the penalty of large economic costs in the future. The case needs to be made that, if nothing else, we are ECONOMICALLY tied to Nature.

    Comment by Jan Galkowski — 20 Sep 2012 @ 11:00 PM

  30. > I never get as many media calls as in the wake of an extreme weather event of some sort.

    That says it all: there is a question, and scientists are asked. If an answer leads to decisions, good. If it does not seem to push science forward at first sight, get better glasses or wait. One can never know what lies around the next corner.

    Aren't there more urgent questions, ones with a deadline, worth discussing and finally answering? Can't we postpone science inspecting science until climate is no longer a concern?

    Comment by Arcticio — 20 Sep 2012 @ 11:07 PM

  31. Gavin said, ” but if the government sets up a climate service to provide information to the public about climate, then, yes, they should as best as they are able.”

    Then I’d say the primary goal should not be the communication of the science, but the creation of a non-partisan official group to communicate the science.

    Oops. Did I just describe the IPCC?

    Comment by Jim Larsen — 21 Sep 2012 @ 2:27 AM

  32. If extreme weather events are going to be attributed to climate change with such immediacy after the fact, then shouldn’t the methodology used to support that attribution be something that’s applicable for all such extremes? For example… same methodology for extreme flood AND drought, and in particular… extreme heat AND cold?

    [Response: Sure – the methodologies will be the same, though the results will vary from strong to non-existent attributions, and with error bars that vary too, all as a function of whether (and how well) the model(s) work. – gavin]

    I still haven't heard a peep from attribution researchers about the extreme cold anomalies that were persistent in Europe and Alaska over the winter. No doubt the deteriorating Arctic sea ice would have some role in there, but if there isn't a consistent methodology capable of incorporating all extreme weather events of the same type, then is the exercise really robust? You can't just rule some prolonged extreme weather events as "not related to climate change" and others as "definitely a signature of climate change", or can you?

    [Response: This has been the subject of a number of papers from Judah Cohen, more recently Jennifer Francis, and has been frequently speculated on. However, compared to the ones related to heat waves, this is a much more tenuous claim (involving an indirect mechanism of the summer sea ice extent (or Siberian snow cover) impacting the phase of the NAO and the ‘waviness’ of the Jet Stream in the winter). There is simply not the same weight of evidence to support it. – gavin]

    So then, applying the “Moscow Warming Hole” methodology (or any of the recent methodologies that’s been usd) to the European cold snap this past Winter (because that is a similarly extreme and similarly long duration event that would similarly demand attribution)– what are the odds that our warming climate (as represented by the “much richer” statistical modeling in those studies) would produce such an event?

    When(if) yet another similar event shows up this winter, is anyone going to be willing to re-visit the methodology, or are we going just stay with declaring “significant” and “useful” the studying of only extreme warm events as statistically more likely in a warming climate.

    [Response: Lots of people are looking at this – and if convincing results are found, you can bet that it will get a lot of attention. My opinion is that if there is a signal, it is likely to be *much* smaller than the year-to-year variability and very difficult to detect with any confidence. – gavin]

    Comment by Salamano — 21 Sep 2012 @ 5:55 AM

  33. Hi Gavin,
    An initial scientific response to any event could continue to be ‘this type of event is projected to become more/less likely under continuing climate change’. Does a ‘climate service’ really need to be able to say ‘the risk of this event was increased/decreased by X% due to climate change’ in real-time?

    Maybe the media and public do want to know this, but there is a risk that when more evidence arrives the value of X will change significantly and our credibility is damaged, even if we had made the appropriate caveats about level of understanding at the time. The media will always lose those caveats in the reporting (e.g. the infamous UK BBQ summer).

    Or, do you think that we can/will be able to provide a reliable value of X in real time?

    I think attributing extremes in general is a good idea, but I am not so sure whether the investment required to do this in real time is worth the risk. What decisions are altered by our ability to attribute straight away?

    cheers,
    Ed.

    [Response: You are correct that it is possible that nuance will be lost by the time that any result hits the headlines (Duh!). But that is true for any result and so should not determine what science gets done. However, as I stated above, the main reason to provide real time access and analysis of *existing* data sets is to provide information that is based on something, rather than having the same discussions with information based on nothing. As a side effect of enabling this, better systems will need to be created, which will lead to more interesting and more accessible science that will overall end up improving our understanding of how models simulate extremes, what the robust results are (if any) and provide a target against which further progress can be made. – gavin]

    Comment by Ed Hawkins — 21 Sep 2012 @ 6:45 AM

  34. It isn't the mean that kills, it is the extreme. Knowing how the extremes are changing is critical to bringing climate science down to the scale of people's lives. And knowing the trends and how they affect the extremes is critical information for insurance, for disaster relief, and for planning ahead. I don't understand why one wouldn't consider this important.

    Comment by Mitch Lyle — 21 Sep 2012 @ 7:01 AM

  35. I greatly appreciate your responses.

    “[Response: Sure – the methodologies will be the same, though the results will vary from strong to non-existent attributions, and with error bars that vary too, all as a function of whether (and how well) the model(s) work. – gavin]”

    “[…if convincing results are found, you can bet that it will get a lot of attention. My opinion is that if there is a signal, it is likely to be *much* smaller than the year-to-year variability and very difficult to detect with any confidence. – gavin]”

    …But you do not think that this issue casts a shadow on current attribution attempts? To me, not only does it NOT seem like 'we're there yet', but worse… there is some sort of growing view that these current methodologies are acceptable/publishable/headlining, despite their inability to capture our full climate/weather representation (I realize that is just my opinion). A methodology that renders warm events as 'clear signals' while cold events are written off as 'not significant over year-to-year variability' would seem (to me) to be a biased or incomplete methodology, particularly if such seemingly "impossible" cold weather anomalies occur more frequently. In fact, to me it would point more to the need to better understand future feedbacks of a reduced Arctic ice content in returning anomalously cold air climatologically/teleconnectedly into the mid-latitudes – which could greatly affect the temperature record on time-scales to which these insta-attribution studies have been restricting themselves.

    If the true ‘right’ methodology is still not there, then why continue rushing to press with present studies with a non-comprehensive methodology? It seems like every heat-wave or drought will feature a race by various scientists to take their ‘warming/drying’ climate dice to press, while every cold snap will feature crickets (for now).

    I realize the “final” methodology (whenever that comes) will “confirm” the previous results that our changing climate will generate more warm extremes, but why should this eventuality truly absolve these early forays despite their inability to do much better than simple probabilities with non-linear trendlines, while completely ignoring many other extreme event types?

    [Response: I’m not sure I follow you. First, there is never going to be a ‘final’ methodology – that’s not how science works. Second why is it problematic that some results are going to be more robust and clearer than others? Every model (more or less) shows increased heat waves, but whether they show increased variability in the jet stream leading to increased cold snaps is much less clear (at first cut the models show the opposite). So for the second case the details are going to matter more, and the more something depends on the details the more variance (and uncertainty) will increase too. – gavin]

    Comment by Salamano — 21 Sep 2012 @ 7:20 AM

  36. Different topic – has anyone noticed the British Environmental Audit Committee report, issued this morning? http://www.publications.parliament.uk/pa/cm201213/cmselect/cmenvaud/171/17102.htm

    Comment by Donald — 21 Sep 2012 @ 7:55 AM

  37. Oops — the date on the British Environmental Audit Committee Report says Sept 12 (not this morning).

    Comment by Donald — 21 Sep 2012 @ 7:59 AM

  38. Salamano
    Huh? Your characterization resembles nothing that is occurring on the planet where I currently reside. When looking at extreme events, what determines whether they are within normal variability or not is the amount by which they deviate from past extremes. A record cold event that deviated by six standard deviations or that persisted for a month and a half or that covered an abnormally large portion of the globe would be recognized as out of the ordinary just as surely as a record hot event.

    Climate change does not mean the end of weather. We still get record lows, just as we get record highs. However, the fact that record highs are far outpacing record lows is certainly evidence of a warming trend (see the sketch below). I would recommend reading some of what Tamino has written on his Open Mind blog.

    Comment by Ray Ladbury — 21 Sep 2012 @ 8:55 AM
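
    Ray's record-counting point can be made quantitative. In a stationary series the expected numbers of record highs and record lows are equal (both grow only logarithmically with the length of the record), so even a modest trend tilts the ratio noticeably. A minimal simulation sketch; the trend size is illustrative only:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def count_records(series):
        """Count running record highs and record lows in a time series."""
        highs = lows = 0
        hi = lo = series[0]
        for value in series[1:]:
            if value > hi:
                highs, hi = highs + 1, value
            if value < lo:
                lows, lo = lows + 1, value
        return highs, lows

    years, trials = 100, 2000
    for trend in (0.0, 0.02):  # warming trend in units of sigma per year
        total_hi = total_lo = 0
        for _ in range(trials):
            series = rng.normal(0.0, 1.0, years) + trend * np.arange(years)
            h, l = count_records(series)
            total_hi += h
            total_lo += l
        print(f"trend = {trend}: highs/lows ratio ~ {total_hi / total_lo:.2f}")
    ```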

  39. One industry that cares very much about translating this knowledge into action is the insurance industry. The frequency of severe weather events is a money issue to them. And as they figure that out and translate that knowledge into higher insurance premiums, businesses and individuals will start to see what AGW really means.

    Comment by Garhighway — 21 Sep 2012 @ 10:04 AM

  40. Ed: The ‘users’ (apart from perhaps the media and lawyers) did not identify a valuable use for the attribution of specific individual events (such as the EU or Russian heatwave). The vast majority thought that the attribution of classes of extreme events (e.g. increase in 90th percentile hot days) is a valuable scientific endeavour.

    I don't understand this. With "fractional" attribution, these two things are one and the same.

    Suppose we compute the expected increase in Nth percentile events, for all N (the "valuable endeavour"). When a given extreme event occurs, we can find its percentile, and then simple arithmetic immediately tells us "this event has been made X% more likely by AGW", right? (See the sketch below.)

    Comment by toto — 21 Sep 2012 @ 10:45 AM
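
    toto's "simple arithmetic" is what the event-attribution literature calls the fraction of attributable risk (FAR). If a model ensemble says an event of a given magnitude had annual probability p0 in a counterfactual climate without anthropogenic forcing and p1 in the current climate, the bookkeeping is two lines. A minimal sketch; the probabilities below are hypothetical placeholders:

    ```python
    def fractional_attribution(p0, p1):
        """p0: annual exceedance probability without anthropogenic forcing;
        p1: the same probability with it (both from model ensembles)."""
        risk_ratio = p1 / p0  # "the event is now risk_ratio times more likely"
        far = 1.0 - p0 / p1   # fraction of attributable risk
        return risk_ratio, far

    # Hypothetical: a heat wave with a 1-in-100 annual chance in the
    # counterfactual climate and a 1-in-33 chance today.
    rr, far = fractional_attribution(0.01, 0.03)
    print(f"risk ratio = {rr:.1f}, FAR = {far:.2f}")  # 3.0 and 0.67
    ```

    On this view, attributing "this event" and attributing "the class of events at this percentile" really are the same calculation, which is toto's point; the hard part is getting trustworthy values of p0 and p1 out of models, which is where the verification concerns raised earlier in the thread come in.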

  41. I suppose by "final" methodology, I'm thinking about a consistent and prevailing scientific understanding of how our past/present warming manifests itself in future weather (much in the same way we're trying to establish what the various expectations are from El Nino on local/regional 90-day weather experiences). Science eventually settles on various understandings and expectations in this light. So, when I speak of 'methodology' I guess I'm talking about a uniform process by which every extreme of a certain "type" can be examined for "Global Warming" fingerprints and signatures.

    If you go around with a different methodology for extreme warmth attribution than you do with extreme cold, in my opinion you don’t have a robust approach, but one that’s open to criticism. If results are “Robust” in one sense (indicating the expectation of a __-sigma warm event is much more likely in [global warming model] vs. [persistence]), but then that same methodology renders __-sigma cold events that actually occur as utterly impossible, that to me is a problem. If you’re really trying to identify the change in trend/variance in climate experience, then your approach needs to be able to handle both sides of the extremity coin. [Not just apply statistics + model runs to a declared non-linear temperature trendline when it comes to warm events, but then ditch that and go for discussion about jet streams and all that to dismiss cold events] … If “loaded climate dice” studies are what make it through peer-review and get headlining status on major publications, then those same dice need to be rolled in explanation of the extreme cold events as well, no?

    I don't see it as being able to separate warm extreme events from cold extreme events and develop different caveats for each. If we're not quite there yet, then these initial forays may arrive at a quasi-correct expectation in the ultimate sense, but they do so in the least portable manner and offer the least understanding of the entire feedback system.

    If all observed events of type A are declared ‘impossible’ or ‘opposite’ of what a model postulates, then addressing the model or the understanding becomes more important than merely publishing more results each time other events happen to score a hit.

    The bated-breath world that seems so eager to jump at publishing these single-event, right-after-the-fact attribution studies – and the blog/media outlets that rush to promote them – might come across as more measured and reasonable (to me) if they wait and study until they develop a methodological answer that can encapsulate either (a) BOTH extremes in their statistical/model/climate-dice option, or (b) a physical 'jet stream' discussion/model/feedback that can explain BOTH extremes in light of the same warming world.

    Nobody’s saying nothing should be researched/published ever until we’re absolutely sure of everything, but (to me) SOMEBODY should be out there cooling the engines on how quickly these studies have been rushed into the literature, promoted all around, while their very real criticisms, cautions, and caveats receive scant mention even though that’s where the real effort should be directed. Maybe that’s what NATURE felt like talking about.

    Comment by Salamano — 21 Sep 2012 @ 10:58 AM

  42. Salamano,
    What are you talking about? You’re making vague insinuations that verge on accusations of malfeasance. I know of nothing that answers to the description you are giving. Give concrete examples.

    The same models are being used on both record-hot and record-cold events. Have you even read any of the studies being done?

    Comment by Ray Ladbury — 21 Sep 2012 @ 12:42 PM

  43. Consider sea level rise. If we build critical infrastructure with an expected life span of 100 years we need to know the likely bad case scenario. Looking at those sorts of figures over the last 20 years, and how the worst case keeps rising, I suspect that the logical figure is now about 2 meters. Were my county to add some expensive infrastructure in our frequently flooding flood plain again, it needs to be 2-3 feet higher than our two all-time record floods during the last decade.

    Comment by RobLL — 21 Sep 2012 @ 2:21 PM

  44. So, to follow up on what Ed Hawkins said in #6, Eli wandered (literally, it's a very long story) into the conference and sat in on one day's sessions. There were several take-homes:

    1. There is a huge disjunction between what climate scientists (want to) work on and what the operators, planners and policy people need.

    2. Aligning the groups has to start with the operators (of things like water supply systems) if it is to have any effect.

    3. Some (Chatham House rules) proposed that there be a formal organization charged with providing real-time attribution to policy makers and the public. Most (at least as the Rabett heard) were dubious about the possibility of establishing such a thing, or whether there is a reliable scientific base for doing it at the necessary skill level.

    4. There was some interesting information in the posters about improvements to databases, local attribution studies, etc.

    5. Oh yes, the magic of meeting people who only knew Eli through his electronic image.

    Comment by Eli Rabett — 21 Sep 2012 @ 2:55 PM

  45. I think that if there had been policy people or corporate types at the conference the response would have been quite different.

    We have had more huge fires this past decade than over the previous century combined. If you include drought, many large cities nearly ran out of water, and some towns did run out of water. We’ve got an appalling plan for Australia’s biggest inland water system that doesn’t appear to factor in extreme drought (or flood). Flash floods are much more frequent – and causing lots of damage. That’s just locally – similar problems are being felt in lots of places around the world.

    Insurance companies have put up rates, sending out letters that blame the increases on more weather disasters (past and expected) because of climate change. Confirmation by scientists would add weight.

    Local councils and governments need to let the public know why infrastructure is going to cost more (and why the cost of repairs after damage will keep rising). Storm water drainage, desal plants, coastal erosion barriers, bridges, ports etc are all going to be built to different specs. That is, they will cost more than they would have without global warming, and we are all going to have to pay somehow.

    Comment by Sou — 21 Sep 2012 @ 4:28 PM

  46. I wonder if the public are becoming confused by, or dismissive of, the mixed messages about extreme weather. For example, Dr Christy has noted that the IPCC TAR stated "milder winter temperatures will decrease heavy snowstorms", but after the winters of 2009-10 and 2010-2011, advocates of climate change, the Union of Concerned Scientists, stated "Climate Change makes major snow storms more likely". So here we have climate change making snow storms both less likely and more likely. Here in Australia our Climate Change Commissioner, Dr Tim Flannery, stated in 2007, during a long period of drought, that "hotter soils meant even rain that falls will not fill our rivers and dams". Since then the east coast of Australia has been inundated with rain, dams are full, rivers have flooded and Australia has been pronounced drought free. I'm sure you can imagine the attitude of the average Australian, looking at his flooded living room, to the pronouncements of the Climate Commissioner.

    [Response: The TAR reference is an unsourced statement in WG2 – not WG1 where you would expect it. The WG1 discussion on extremes doesn't mention heavy snowstorms at all (as far as I can tell). I'm not at all sure where any info on heavy snow events could have explicitly come from in 2001 – even now we don't have good robust model results on this, so it is conceivable that the WG2 authors just made an assumption that warmer means less snow and therefore less heavy snowstorms. But this is a very loose chain of reasoning and there are plenty of reasons why it might not work out – and how it might work out differently in different regions. On the other hand, claims that more heavy snowstorms are a specific prediction can't be sustained either. More intense precipitation is predicted (and has been observed), but how that trend and the trend towards less snow in mid-latitudes intersect will be subtle – and again, a definitive study on this is lacking. The situation with droughts and floods is similarly complex, though there are more reliable studies there with results that depend quite a lot on region and season. Note that in all cases these projections are statistical – not absolute, so looking at one flood, drought or snow storm as if it provided some kind of conclusive statement is never going to be sensible. Your overall thrust supports my opening contention though – we need to have more real science on these topics in near real time otherwise the gap gets filled with content-less speculations from people with agendas. – gavin]

    Comment by Ian — 21 Sep 2012 @ 4:51 PM

  47. I haven't read all the comments up to this point, but I would like to mention that if extremes attribution could improve the Intensity-Duration-Frequency (IDF) curves used by local planners and engineers trying to plan for climate impacts, this work could be quite useful. I am surprised that no one has brought this up. I work in local government, and currently we are struggling with how to take climate change forecasts and make them useful for our engineers in the design and maintenance of our stormwater and sewer systems. Typically they are designed to peak events based on 50-, 100-, and 200-year return periods, which are reflected in the IDF curves. These curves are based on historical data, but if the future is going to be defined by changes in climate then we need to adjust the curves. The problem with the climate data is that it has such a large range of possible outcomes (based on different models/scenarios) in our region that it is difficult to narrow it to something useful. We are looking at using median values, but we are not sure that will satisfy our engineers, who like to work in a world (real or imagined) of more certainty. If extreme event attribution could help provide more certainty, that would really be useful. (See the sketch below.)

    Comment by Jason — 21 Sep 2012 @ 4:57 PM
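
    One way to make Jason's problem concrete: take the extreme-value distribution behind today's IDF curve and ask what return period the nominal 100-year design depth would have under a shifted fit for a future climate. A minimal sketch using Gumbel fits with made-up parameters; the shift is purely illustrative, not a projection:

    ```python
    from scipy import stats

    # Hypothetical Gumbel fits to annual-maximum daily rainfall (mm).
    today = stats.gumbel_r(loc=60.0, scale=15.0)
    future = stats.gumbel_r(loc=66.0, scale=16.5)  # illustrative shift only

    # Depth with a nominal 100-year return period under today's curve:
    design_depth = today.ppf(1.0 - 1.0 / 100.0)

    # Effective return period of that same depth in the shifted climate:
    annual_exceedance = future.sf(design_depth)
    print(f"design depth: {design_depth:.1f} mm")
    print(f"effective return period: {1.0 / annual_exceedance:.0f} years")
    ```

    With these placeholder numbers the "100-year" depth becomes roughly a 45-year event; running the same calculation across the range of models and scenarios would at least bound how conservative a given IDF curve remains.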

  48. I just thought, should my previous post get published, that the comment could be made that drought and flood in Australia are manifestations of extreme weather events, thus proving the point that climate change is causing extreme weather events. However, these are not unique in Australia, as there was a much longer drought in the 1890s, and indeed a famous Australian poem written in 1906 about the author's love of Australia has the line "of droughts and flooding rains".

    Comment by Ian — 21 Sep 2012 @ 5:09 PM

  49. RobLL,

    Considering the past two decades of sea level rise, the highest rate was ~4mm/yr from 1996-2006. Prior to and subsequent to that time frame, the rate has been ~2mm/yr. The average rate is ~3mm/yr, and shows no sign of accelerating. Therefore, infrastructure built to last 100 years needs to be able to withstand about half a meter of SLR.

    Comment by Dan H. — 21 Sep 2012 @ 5:30 PM

  50. NOAA's Atlas 14 is the most important and influential climate model in the USA today. Tens of thousands of people who probably have never heard of James Hansen but who are building our cities, our roads and our infrastructure are guided by Atlas 14. It seems to me that it is worthy of critical peer review by the climate research community at Realclimate.org. See http://www.nws.noaa.gov/oh/hdsc/PF_documents/Atlas14_Volume2.pdf and search for "model".

    If you can improve its forecasts for coastal areas, then please do so. The value added would be worth 1000 times whatever federal grant would be needed.

    If you can't improve Atlas 14 at the present time – but think that the historical data on which it is based is becoming obsolete due to climate change and that extreme events outside its forecasts are becoming more likely – then please present a case to NOAA for why they should add a warning in Atlas 14 so that our builders, engineers and government officials/planners will start taking that into account in their current designs.

    Incidentally, the creators of Atlas 14 did test for signs of climate change in the data – they didn't find any of significance rising above the noise. But they only looked up to year 2000.

    Comment by Don Williams — 21 Sep 2012 @ 5:40 PM

  51. Attribution is important. Attribution helps us lay people understand what is freak weather and thus unlikely to be repeated. Attribution helps us understand what is the new normal and therefore very likely to be repeated.

    If we understand what is to come and what might be to come we can prepare. Just maybe, new infrastructure will be built able to cope with future weather not what happened last century.

    Unfortunately, scientists and lay people still speak different languages. We still do not understand that the dry language really represents "oh sh##".

    Comment by Tony O'Brien — 21 Sep 2012 @ 6:03 PM

  52. SOMEBODY should be out there cooling the engines on how quickly these studies have been rushed into the literature…

    Ah, a Climate Change Tsar, approving or stopping research for purposes of political convenience?

    I thought that was what we were not supposed to do. Confusing.

    How does one “not think of an elephant,” once told?

    Clandestine research, conducted at night using stolen CPU time, published with stolen laser printer runs, always fearful of the Tsar’s secret police, etc. Very healthy.

    Comment by dbostrom — 21 Sep 2012 @ 6:21 PM

  53. Re Don Williams @10 and Ray Ladbury @ 20: In addition to NOAA Atlas 14, NOAA has several hydrometeorological reports (HMRs) covering different parts of the US that contain contours of probable maximum precipitation (PMP). You look at a map for your area, pick a PMP value, and use it as a design precipitation estimate for a structure. PMP is supposedly the worst precipitation that one can expect.

    All HMRs are out of date, do not reflect current climate conditions, and contain no guidance on how to adjust PMP values for a changing climate; never mind that PMP is an illogical concept to start with. NOAA has no plans to revise the HMRs, yet they are used all the time for estimates of extreme precipitation.

    There needs to be more interaction between climatologists and hydrologists. Hydrologists will say that they haven’t seen some effects predicted by climatologists, yet climate will eventually undermine the hydrologists’ assumption of stationarity. I’m not sure that hydrologists understand climate inertia. The effects predicted by climatologists are likely to show up eventually.

    Comment by Jay Dee Are — 21 Sep 2012 @ 6:22 PM

  54. Re: 42

    See http://www.realclimate.org/index.php/archives/2012/03/extremely-hot/

    There have been studies (like the ones indicated in the above link) that attempt to describe the present likelihoods of recent extreme warm events using modeling of the temperature record and a probability distribution that shifts toward warmer temperatures. However, when the same assumptions encounter an extreme cold event of the same magnitude, the results border on declaring the event a complete impossibility.

    A counter-argument that extreme warm events can be quantified as a signature of our climate change while all cold weather events that confound the methodology are relegated to highly non-linear products of natural variation doesn't jibe with me, because I want to see the methodology work in both applications.

    The same is true of Trenberth’s discussion of attribution http://thinkprogress.org/climate/2012/03/25/451347/must-read-trenberth-how-to-relate-climate-extremes-to-climate-change/ … If a computer is going to run a million iterations of a model and declare that extreme warm event x is only going to occur with the observed increasing likelihood in a warming world, then it should also demonstrate that the cold extreme event is going to occur with requisite plausibility in a warming world. Otherwise, you’re left with virtually unfalsifiable declarations that anything and everything that occurs is a sign of global warming, except when we say it isn’t.

    I can certainly see a shift of both the mean (warmer) and the variance (wider), but (as a hypothetical example) the arctic ice loss scenario could open up a can of worms that provides an extra feedback directly related to Global Warming that primarily generates strong cold anomalies. Who knows, if they get stronger and more frequent with more ice loss, they could stunt the otherwise modeled progression of warming while at the same time upping the prevalence of cold extreme events. The resultant probability distribution would not look normal – with cold extremes not becoming as increasingly rare, the same expected increase in warm extremes, and a slightly cooler than previously anticipated overall warming trend. (See the sketch below.)

    Comment by Salamano — 21 Sep 2012 @ 7:33 PM
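
    The two-tailed bookkeeping Salamano asks for is straightforward to write down, whatever one makes of the methodological complaint. Under the "loaded dice" picture, a mean shift makes cold records rarer but nowhere near impossible, and a variance change controls how quickly the cold tail thins. A minimal sketch with illustrative shifts, expressed in units of the baseline standard deviation:

    ```python
    from scipy import stats

    # Illustrative shifted/widened climates (placeholders, not fitted values).
    scenarios = {
        "baseline":              stats.norm(0.0, 1.0),
        "mean +0.5 sigma":       stats.norm(0.5, 1.0),
        "mean +0.5, sigma +10%": stats.norm(0.5, 1.1),
    }

    for name, dist in scenarios.items():
        hot = dist.sf(3.0)     # P(anomaly > +3 sigma of the old climate)
        cold = dist.cdf(-3.0)  # P(anomaly < -3 sigma of the old climate)
        print(f"{name:22s}  hot: {hot:.1e}  cold: {cold:.1e}")
    ```

    With these placeholder numbers, warm 3-sigma events become several times more likely while cold ones become rarer but remain far from impossible – which is consistent with the reply above that any cold-extreme signal is expected to be small relative to year-to-year variability.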

  55. Perhaps worry about liability is having some effect. The largest polluter, China, is calling for 70% cuts in emissions. http://www.nytimes.com/aponline/2012/09/21/world/americas/ap-lt-brazil-climate-change.html

    Comment by Chris Dudley — 21 Sep 2012 @ 7:38 PM

  56. Ian at #46 and #48 is typical of the downside of making statements about the future. Statements imputed to Prof Flannery are trotted out every time it rains, as if he were suggesting it would never rain again. As is Mackellar's verse. Usually, as in this case, in the same post or article. It's silly and I don't think too many people take much notice any more – only the hard-core science rejectors.

    The reason Prof Flannery gets singled out, and extensively misquoted and/or quoted out of context, is he is Australia’s Chief Climate Commissioner. In Australia that seems to get him the same treatment as Drs Hansen and Mann etc. and Al Gore. (The Climate Commission “was established to provide all Australians with an independent and reliable source of information about the science of climate change, the international action being taken to reduce greenhouse gas emissions, and the economics of a carbon price”.)

    The reason Mackellar’s rhyme gets trotted out is probably because it shows that even a century ago Australia had extreme weather.

    Comment by Sou — 21 Sep 2012 @ 9:30 PM

  57. Re: Mr. Salamano's comment on the 21st of September, 2012 at 7:33 PM:

    1)Re: realclimate discussion of extremely hot events

    Mr. Salamano writes:
    “However, when the same assumptions encounter an extreme cold event of the same magnitude, the results border on declaring the event a complete impossibility.”

    That is not the claim being made in that discussion. And empirical results from Hansen bear out what was said there.

    2)Re: Prof. Trenberth discussion:

    Mr. Salamano writes:

    ” … it should also demonstrate that the cold extreme event is going to occur with requisite plausibility in a warming world. Otherwise, you’re left with virtually unfalsifiable declarations that anything and everything that occurs is a sign of global warming, except when we say it isn’t.”

    This is another strawman. Those declarations are never made.

    3)”I can certainly see a shift of both the mean (warmer) and the variance (wider), …”

    Hansen must be overjoyed at your benediction.

    4)” … but (as a hypothetical example) the arctic ice loss scenario could open up a can of worms that provides an extra feedback directly related to Global Warming that primarily generates strong cold anomalies. Who knows, … ”

    Calculate and publish your results.

    sidd

    Comment by sidd — 21 Sep 2012 @ 10:36 PM

  58. Re Salamano at 41: “SOMEBODY should be out there cooling the engines on how quickly these studies have been rushed into the literature, promoted all around”

    I am not qualified to speak on how science should be managed. I would, however, point out that millions of businessmen aren't waiting for the science to be settled. Every day, tens of thousands of acres of asphalt and concrete are being laid down, fracking gas wells are being drilled, and plans for oil drilling in the Arctic proceed. Those settle the argument with a finality far greater than any thesis put forth in a scientific paper. You can refine a scientific hypothesis; undoing physical construction and financial investment is a lot harder.

    Comment by Don Williams — 21 Sep 2012 @ 10:38 PM

  59. Sou (#56), I'm not sure you actually read what I wrote. I didn't say Prof Flannery had said, or had even intimated, that it wouldn't rain again. The quote I gave was certainly not imputed but was from an interview Professor Flannery had with Sally Sara on ABC television's Landline program, on February 11, 2007. His actual words were "…because the soil is warmer because of global warming and the plants are under more stress and therefore using more moisture. So even the rain that falls isn't actually going to fill our dams and our river systems." I thought I had made it clear why I quoted Dorothea Mackellar's poem My Country: to emphasise the fact that droughts and floods in Australia are neither new nor more extreme than they have been in previous years. According to the Australian Bureau of Statistics (ABS), the longest droughts in Australia were from 1895-1903 and 1958-1968. The most recent drought was from 2003-2007. Of the ten worst floods in Australia's recorded history, two were before 1900 (1852 and 1893), seven were in the 20th century (1918, 1927, 1929, 1934, 1955, 1974, 1986) and one in the 21st century (2010-2011). As is apparent, the incidence of severe damaging floods in Australia is, to date, highest in the early 20th century. Dr Schmidt in his reply notes that "Your overall thrust supports my opening contention though – we need to have more real science on these topics in near real time otherwise the gap gets filled with content-less speculations from people with agendas". That is exactly the point I was hoping to make but sadly, I obviously didn't make it plainly enough. Prof Flannery, as Climate Commissioner, clearly has an agenda, but comments such as those he made in 2007 may well do more harm than good to that agenda.

    Comment by Ian — 22 Sep 2012 @ 12:50 AM

  60. 27 Chris D said, ” For mitigation we must all work together, but for adaptation, polluter pays and China is the polluter that has pushed climate change into the dangerous regime.”

    The Chinese are low-emitters of carbon. Should people be punished for banding together into too large a country? Is your solution for climate change adaptation for the USA to disband into the 50 nations, thus transforming us from not-quite-as-bad-as-China to Climate Saints?

    41 Salamano said, “If all observed events of type A are declared ‘impossible’ or ‘opposite’ of what a model postulates, ”

    Your underlying drive is correct. It’s important to show laymen why certain things are interesting while others, though sparkly, are not. Explicitly stating the numbers for cold extremes gives the layman a comparison, a handle on the issue. Relative orders of magnitude are critical for basic understanding, and they often aren’t obvious before one runs an experiment. This leads to: Expert leaves a minor factor out of the discussion, but not the math. Denialist brings up comparison which feigns universal ignorance of the relative importance of various factors. Public remains confused.

    However, your claim about the existence of such events sorely needs documentation. You mentioned the 2012 European cold snap, but gave no evidence that it was as high sigma as the heat waves you refer to.

    “However, the only all-time cold record set at any specific location was a reading of -33.8°C at Astrakhan in Russia”

    http://www.wunderground.com/blog/weatherhistorian/comment.html?entrynum=62

    Doesn’t sound on a par. Cite something, and after that, give some evidence that models demand that the 2012 cold snap was impossible.

    So yes, your initial drive was spot on. Scientists didn't make it clear enough that the cold event wasn't as "big" (clear in public-speak means repetition more so than precisely parsed wording), so you became confused; or I'm confused about 2012 Europe and the same logic applies.

    54 Salamano said, “ strong cold anomalies. Who knows, if they get stronger and more frequent with more ice loss, they could stunt the otherwise modeled progression of warming”

    Doesn’t sound logical to me. A European cold snap will leave extra heat to be absorbed by the ocean, emitted to space, or be distributed around the remaining planet. For this discussion, I’d say distributed around is the dominant factor. Not a robust conclusion, but in a fashion, cold extremes increase hot extremes, and vice versa. Wasn’t winter 2012 a tad warm in the USA?

    49 Dan H said, “The average rate is ~3mm/yr, and shows no sign of accelerating. Therefore, building an infrastructure to last 100 years, needs to be able to withstand about half of a meter of SLR.”

    Did you write North Carolina’s sea level rise policy? The albedo flip from Arctic sea ice has just begun. Greenland’s melt is gaining significance, and WAIS is waking up. That the expected acceleration hasn’t gained traction yet doesn’t invalidate Hansen’s analysis (or other folks’). Please provide a reference which suggests that over the next 100 years, no significant increase in the rate of sea level rise is expected.

    I’ve defended your character repeatedly. Your post makes it less likely I do so in the future.

    Do you spend so much time in the Denialsphere that false-factoids just seem like part of the fabric of life?

    51 TonyO’B said, “We still do not understand that dry language really represents “oh sh##” ”

    There are many professions which require cool thinking under duress. The one I visualize is airline pilots. Listening to the doomed flight crew on the black box often sounds like they’re reading stock prices. They’re trained to fly the plane all the way into the ground.

    Comment by Jim Larsen — 22 Sep 2012 @ 1:32 AM

  61. Jim (#60),

    We are cutting emissions while China is increasing them. This in addition to China being the largest polluter. What entities, other than nations, can be made to pay internationally for the intended harm they cause? A carbon tariff imposed on Chinese exports, used to compensate farmers around the world, is completely appropriate.

    Comment by Chris Dudley — 22 Sep 2012 @ 7:50 AM

  62. Why should we penalise China for increasing crop yields?

    Comment by Adam Gallon — 22 Sep 2012 @ 9:21 AM

  63. “We are cutting emissions while China is increasing them. This in addition to China being the largest polluter.”
    Specify and cite please – per capita? per unit GDP? over what historical period? If the industrial production exported to “we” is deducted from the figures, what is the comparison? Do we penalize China for those millions of iPhones folks all over the world are lining up for?
    Emissions are a global and historical problem, not a national one. Has your country taken any measures to limit population increase? How much CO2 does your family emit compared to your parents or grandparents?

    Comment by flxible — 22 Sep 2012 @ 9:59 AM

  64. 61 Chris D said, “We are cutting emissions while China is increasing them.”

    Option 1: All people are equal with regard to access to the atmosphere, as measured intergenerationally. Each nation should be allowed the same emissions per person-year. Since the US has been burning for a long time, we’ve already burned all(?) of our share, and need to pay China (or others) for carbon credits. That China wisely saved her carbon emission quota, well, why should we be allowed to steal China’s unspent carbon credits, just because we wasted our own?

    Option 2: All people are equal with regard to access to the atmosphere, but the past is the past. So each group should be allowed the same emissions per capita. Those who emit more, such as the US, should pay penalties to those who emit less, such as China.

    Option 3: All people are not equal with regard to access to the atmosphere. Carbon-wise, some nations pollute heavily and so should be allowed to continue at a slightly reduced rate. Others pollute far less but must also reduce their emissions by the same percentage. A poor country which emits 1/4 the emissions per capita of a rich country should pay the rich country penalties if the poor country emitted 1/5 the previous decade.

    I think option two is the most fair, as modified by a 20-year phase-in period starting with current emissions as a baseline.

    Which of the three seems most fair to you, Chris D? 20 further years of unequal rights seems fair enough, don’t you think?

    Comment by Jim Larsen — 22 Sep 2012 @ 10:28 AM

  65. Mr. Chris Dudley writes on the 22nd of September, 2012 at 7:50 AM:

    Re: carbon tariffs on China

    Presumably then, Mr. Dudley will not object to tariffs imposed by the developing nations on OECD countries in reparation for the much larger carbon debt already incurred by the OECD group?

    sidd

    Comment by sidd — 22 Sep 2012 @ 11:37 AM

  66. Ian, while Australia is the second driest continent and has always had droughts and floods, it’s also getting warmer along with the rest of the planet. Floods are hard to measure across the entire continent – more so in the past.

    I doubt too many (except in the deniosphere) would try to argue against the fact that the past couple of years have been extraordinary in regard to widespread flooding – or that so many regions of the country had the ‘wettest ever recorded’ periods (as reported by BoM), or that the big drought wasn’t extreme or that unprecedented (in the record) hot, dry, windy conditions at the tail end of a horrendous drought didn’t cause the February 2009 massive loss of life in the fires.

    Nor do I understand what you find interesting about that quote from Prof Flannery. It’s only when soils are saturated or when you get heavy precipitation that you get runoff. And with some storages down to 10% – it took an awful lot of rain to get them back to safer levels.

    This is just the beginning.

    Comment by Sou — 22 Sep 2012 @ 12:12 PM

  67. No more on Chinese emissions please. This is a thread related to the attribution of extreme events, not carbon policy. Thanks

    Comment by gavin — 22 Sep 2012 @ 12:44 PM

  68. > and shows no sign of accelerating

    Citation needed, many sources say the opposite. To hold that claim you’d need to first show that 6 years is enough time to separate out a signal from the noise… good luck with that.

    "A 20th century acceleration in global sea-level rise" – Church & White 2006

    "Is Sea Level Rise Accelerating" – right here on RealClimate

    “And by 2100, rising sea level from ocean thermal expansion and increasing ocean mass (from melting glaciers, ice caps, and the Greenland and Antarctic ice sheets) will expose an additional tens of millions of people annually to the risk of coastal flooding.” Regardless of whether or how much it is currently accelerating, it’s a major problem as it is continuing, there are no years with sea level drop.

    But this isn't really about attribution science, except that we know it is occurring due to global warming, so it's probably off-topic in this thread and we shouldn't pursue it further here.

    Comment by Unsettled Scientist — 22 Sep 2012 @ 12:52 PM

  69. Salamano, care to tell us exactly where these horrendous cold spells are occurring? Yes, there have been cold spells, but record highs are far outpacing record lows. Again: give specifics rather than vague impressions. What "record cold events" have been ignored? You sound like Spiro Agnew complaining about the nattering nabobs of negativism.

    What is more, beyond the question of probability, there is the question of what caused the event. If indeed loss of sea ice affects the jet stream allowing cold Canadian air to penetrate further south, I hardly think you could call that a contradiction of the severity of warming.

    Comment by Ray Ladbury — 22 Sep 2012 @ 1:19 PM

  70. Adam (#62),

    China is reducing crop yields. http://www.reuters.com/article/2010/09/01/us-china-crops-idUSTRE68056320100901

    Gavin (#67),

    As the Nature article points out, attribution identifies who is to blame for suffering. Weather disasters are no longer acts of God but rather attacks by polluters on the victims of the disasters.

    And Jim (#64), it is polluters who, by their policies, intentionally push us into the dangerous climate change regime who are liable. If you are putting your foot on the brake when there is a traffic mishap, it is an accident; if you are putting your foot on the gas, it is intentional. You have not considered culpability in your three cases. Nations set policies and so they are the responsible actors.

    There, I only used the C-word once and that in relation to crops, not emissions.

    Comment by Chris Dudley — 22 Sep 2012 @ 1:32 PM

  71. China [edit – OT]

    Comment by Jim Bullis, Miastrada Company — 22 Sep 2012 @ 2:42 PM

  72. Jim,

    Here is one reference:

    http://naturescapebroward.com/NaturalResources/ClimateChange/Documents/GRL_Church_White_2006_024826.pdf

    Note that the Church & White paper was written before the recent decelleration in SLR, and claims that IF the acceleration of the 1930s and 1990s continued, then sea level would rise between 280 and 340 mm by 2100. They also noted the decelleration in the 1960s, and did not account for any potential future decelleration.

    Here is another reference which shows no acceleration.

    http://www.jcronline.org/doi/pdf/10.2112/JCOASTRES-D-10-00157.1

    The increased melt from Greenland has occurred simultaneously with the decelleration in SLR. Apparently, this contribution is not very great at present. Granted, significant melting of the Greenland glacier would change this, and that is a wild card in the projections. Antarctica, on the other hand, has been in the midst of a prolonged cooling period, and sea ice is at record highs. It seems unlikely that the WAIS would contribute anything to global SLR, and it would more likely lead to a decrease due to more snow and ice accumulation.

    [Response: Well, if you think something is unlikely, that is obviously all that needs to be said. – gavin]

    Comment by Dan H. — 22 Sep 2012 @ 3:53 PM

  73. Re: #72 (Dan H.)

    There’s only one “l” in “deceleration.”

    You refer to the "recent decelleration [sic] in SLR" but that is not statistically significant. My calculations indicate it just fails 90% confidence, and isn't close to 95%. If you actually read Church & White you may have noticed their reference to ubiquitous decadal variations; perhaps you should have paid more attention. Stop referring to recent deceleration in SLR as though it were an established fact. It isn't.

    You should also have noted that Church & White suggest about 300mm SLR by 2100 IF the quadratic trend of the 20th century continues. There is very good reason to expect 21st-century sea level rise to exceed even Church & White’s quadratic trend. It has to do with the laws of physics, and the observed acceleration of mass loss from the Greenland and Antarctic ice sheets. There’s even peer-reviewed literature relating temperature change and sea level change (Rahmstorf & Vermeer) which suggests much more than 300mm by the year 2100. As Gavin notes, your claim that it “seems unlikely that the WAIS would contribute anything to global SLR” doesn’t pass muster as either evidence or logic.

    Frankly, it’s just plain foolish to rely on existing curve-fitting trends to forecast 21st-century sea level rise — that’s the same idiotic mistake with which the North Carolina state legislature managed to embarrass itself. Almost as foolish as referring to the abominable paper by Houston & Dean.
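
    For readers curious what such a significance calculation looks like, here is a minimal sketch: fit a quadratic to a sea-level series and ask whether the acceleration term is distinguishable from zero. The data below are synthetic (an assumed 3.2 mm/yr linear trend plus white noise), and a real analysis would also have to handle autocorrelation:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    t = np.arange(1993, 2013, 1/12.0)                 # illustrative monthly epochs
    sl = 3.2 * (t - t[0]) + rng.normal(0, 5, t.size)  # mm; assumed trend + noise

    # Fit sl = c0 + c1*tau + c2*tau^2; the acceleration is 2*c2.
    tau = t - t.mean()
    X = sm.add_constant(np.column_stack([tau, tau**2]))
    fit = sm.OLS(sl, X).fit()
    print("acceleration = %.3f mm/yr^2, p-value = %.3f"
          % (2 * fit.params[2], fit.pvalues[2]))

    With no real curvature in the synthetic series, the quadratic term should come out insignificant; the dispute above is over whether the real record's recent wiggle clears that bar.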

    Comment by tamino — 22 Sep 2012 @ 8:52 PM

  74. Gavin – At #35 you state:
    Every model (more or less) shows increased heat waves, but whether they show increased variability in the jet stream leading to increased cold snaps is much less clear (at first cut the models show the opposite).

    Please clarify: Do the models show increased variability, but not leading to cold snaps, or do they not show increased variability of any consequence?

    It would seem that the theoretical argument for increased variability of the jet stream is plausible, if not absolutely rigorous: a decreased latitudinal thermal gradient implies slower zonal jet stream velocity (via the thermal wind equation), which implies decreased baroclinic phase velocity (at least by linear theory), or more 'lingering' and amplified jet stream meandering, as Francis and Vavrus have described. The question is: do GCMs support this picture or not? Isaac Held describes "fruit fly" (stripped-down) models that seem well-suited to test these conjectures, but it seems that they have not yet been used to do this. True?
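
    For reference, the textbook thermal-wind relation behind that first step (standard notation, not from the comment) is

    \frac{\partial u_g}{\partial z} = -\frac{g}{f\,\overline{T}}\,\frac{\partial T}{\partial y},

    so a weaker poleward temperature gradient means weaker vertical shear and hence a weaker upper-level zonal jet, all else being equal.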

    Thanks in advance for your response.

    Comment by Pat Cassen — 22 Sep 2012 @ 11:48 PM

  75. Yes, as Pat Cassen requests, please clarify what models say about jet stream variability, particularly with respect to attribution. I’ve noticed [and appreciate] that local forecasters are now showing jet stream location on their maps more regularly.

    Comment by flxible — 23 Sep 2012 @ 8:56 AM

  76. re “jet stream locations” I guess you mean this sort of thing, flxible:

    location of jet stream

    Comment by chris — 23 Sep 2012 @ 12:08 PM

  77. It seems to me that both the “Nature” editorial and many of the responses are in need of some conceptual clarity about some terms and, following this, scientific/philosophical conclusions that might flow from increased clarity.

    The subtitle of the "Nature" editorial is "Better models are needed before exceptional events can be reliably linked to global warming." The editorial warns that climate scientists should be prepared for their skills to be probed in court, and cites, e.g., the legal basis for claims brought by small northern communities facing coastal erosion, such as Kivalina, AK, against ExxonMobil. The first paragraph of the "Nature" editorial concludes with the statement that climate attribution will need to inform legal and societal decisions, and that to do so will require enormous research efforts.

    One of the problems is what is meant by "reliably." If we assume a definition often used by scientists to reject a null hypothesis and/or by scientific evidence admissible in court (as in the suit by Kivalina against ExxonMobil), then the typical standard is that conclusions based on evidence have a 0.05 probability (a 5 percent chance) of being wrong–a so-called "false positive" error, which concludes there is an effect when, in fact, there is none.

    But climate change attribution is terribly messy, with its large number of variables and types of uncertainties. Neither the "Nature" editorial nor the comments focused on the meaning of "reliable," the issue of how "reliable" the science needs to be for actions to be based on it, or the justification for standards of "reliable."

    If the conventional scientific standard of proof is required, e.g., rejecting a null hypothesis at a level of 0.05, then personally I am doubtful that attribution studies will be widely accepted, at least in the near-term. But if some other standard of proof that is more precautionary is accepted, e.g., rejecting a null hypothesis at a level of, say, 0.10 or 0.20 or whatever, then the studies might have greater utility.

    The salient point here is that given scientific uncertainties and when science is done to inform policy or assist in environmental/human health welfare (as opposed to study nature to simply gain knowledge about it), that an ethical dimension becomes part of the consideration. Insisting on high or generally accepted levels of confidence to form scientific conclusions or gain evidentiary admission into courts will increase the chances of making a false negative error, i.e., concluding there is no effect when, in fact, there really is. And when a false negative error is made, the risks and harms to the environment and the public are increased.

    Accordingly, the concept of “reliable” as used in the “Nature” editorial or in some of the responses to it becomes philosophical and ethical and not merely scientific.

    Comment by John Lemons — 23 Sep 2012 @ 4:55 PM

  78. The new Nature Geoscience paper by Reichler et al poses an interesting question in the light of sea ice shrinkage.

    If no ice is interposed, the angular momentum of the stratospheric polar vortex may be coupled to ocean circulation at high latitudes, something presently impeded by even the thinnest ice cover.

    Could this alter long term meridional circulation and thermohaline turnover?

    Comment by Russell — 23 Sep 2012 @ 5:01 PM

  79. 77 John L said, “increase the chances of making a false negative error, i.e., concluding there is no effect when, in fact, there really is. And when a false negative error is made, the risks and harms to the environment and the public are increased.”

    I want to quibble with the term "false negative". Unless you've got 95% confidence that negative is the proper result, it's not a false negative, it's "undetermined". That's the same "lie" told by criminal defendants and their supporters upon acquittal, that they've been "proven innocent". It's a glaring lack in our legal system. Possible verdicts should include guilty, unsubstantiated, and innocent.

    I also disagree with the level of confidence you’re suggesting. 95% might be reasonable in a criminal trial, but it’s 51% in a civil trial, where there’s no presumption of innocence.

    Attribution of extreme events probably isn’t the hard part. It’s defining the defendants.

    Comment by Jim Larsen — 23 Sep 2012 @ 6:23 PM

  80. Re John Lemons at 77:

    1) American business is guided far more by political decisions and regulation than by litigation. While the standard is understandably high for winning large awards from someone in a lawsuit, most political and regulatory decisions do not require meeting a test of a "0.05 probability of being wrong". If someone 15 yards away aims a handgun at me and fires, the odds of them missing me are greater than 5 percent, but that doesn't mean that the law allows them to pull the trigger.

    2) For example, one issue in the northeast USA is how to handle the increased heavy downpours due to climate change. The state of Pennsylvania and its counties have regulations requiring large property developers to build basins to control the stormwater runoff resulting from laying impervious cover (asphalt, concrete, etc.) over formerly absorbent soil. Volume is calculated from impervious acreage times inches of expected rainfall.

    In the past, state agencies have used the AVERAGE expected rainfall listed in NOAA's Atlas 14 for various storm events (2-year, 5-year, etc.) to compute the volume.

    However, the city of Philadelphia and two outlying counties (Bucks, Montgomery) just approved a stormwater management plan for a watershed that explicitly requires using the UPPER BOUND of NOAA’s 90 percent confidence interval (instead of the average) for the inches of rainfall. This will require developers to handle about 14 percent more stormwater volume than what would be required if computations were based on the NOAA averages.

    Past heavy rainfall from Hurricanes Floyd and Allison were cited as justification — as examples of extreme precipitation not accounted for by the NOAA averages.
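
    To make the arithmetic concrete, here is a minimal sketch of the volume calculation described above; the acreage and design depths are hypothetical, not taken from Atlas 14 or the Philadelphia plan:

    ACRE_FT2 = 43560.0  # square feet per acre

    def runoff_ft3(impervious_acres, rainfall_in):
        """Runoff volume (cubic feet), assuming all rain on impervious cover runs off."""
        return impervious_acres * ACRE_FT2 * (rainfall_in / 12.0)

    mean_depth = 7.0                 # hypothetical mean design-storm depth, inches
    upper_depth = mean_depth * 1.14  # ~14% more, per the upper confidence bound cited

    print("10 impervious acres: %.0f vs %.0f cubic feet"
          % (runoff_ft3(10, mean_depth), runoff_ft3(10, upper_depth)))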

    Comment by Don Williams — 23 Sep 2012 @ 7:21 PM

  81. John Lemons @77 — Use Bayesian reasoning: which hypothesis is best supported by the evidence.
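
    A minimal sketch of what that comparison can look like in practice (the event count and the expected rates under each hypothesis are invented for illustration):

    from scipy.stats import poisson

    k = 9                            # hypothetical: 9 extreme-heat events observed
    rate_null, rate_warm = 4.0, 8.0  # assumed expected counts under the two hypotheses

    # Bayes factor: how much better "warming" explains the observed count than "null".
    bf = poisson.pmf(k, rate_warm) / poisson.pmf(k, rate_null)
    print("Bayes factor (warming vs null): %.1f" % bf)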

    Comment by David B. Benson — 23 Sep 2012 @ 7:25 PM

  82. Ian @ 59 The full quote from 2007 Flannery interview is
    “We’re already seeing the initial impacts and they include a decline in the winter rainfall zone across southern Australia, which is clearly an impact of climate change, but also a decrease in run-off. Although we’re getting say a 20 per cent decrease in rainfall in some areas of Australia, that’s translating to a 60 per cent decrease in the run-off into the dams and rivers. That’s because the soil is warmer because of global warming and the plants are under more stress and therefore using more moisture. So even the rain that falls isn’t actually going to fill our dams and our river systems, and that’s a real worry for the people in the bush. If that trend continues then I think we’re going to have serious problems, particularly for irrigation.”
    http://www.abc.net.au/landline/content/2006/s1844398.htm

    The section of that quote that Ian cherry-picked has been used extensively in Australia in an attempt to discredit Flannery, a Climate Commissioner, by claiming that he said the "dams would never fill again".

    Looking at the full statement in context, was Flannery wrong? No. Here is a May 2012 article by Karl Braganza, Manager of Climate Monitoring at the Bureau of Meteorology.

    “A 10-20% loss of autumn and winter rainfall has occurred over both the southwest and southeast corners of the Australian continent. This is most significant in the southwest of Western Australia, where the changes have occurred since around 1970. In the southeast, similar rainfall reductions have been apparent since the mid 1990s.”
    “The 10-20% loss of autumn and winter rainfall has been amplified in the subsequent reductions in streamflow and water storages. There are many reasons for this, the simplest being that drier soils and vegetation soak up more moisture and therefore provide less surface runoff when rain does fall.”
    http://theconversation.edu.au/a-land-of-more-extreme-droughts-and-flooding-rains-5184

    Braganza goes on to explain that the record rainfall that fell in 2010-11 was tropical in origin and has not interrupted the long term drying trend in southern Australia.

    Comment by MikeH — 23 Sep 2012 @ 8:26 PM

  83. > vortex may be coupled ….

    Good one, Russell.

    Comment by Hank Roberts — 23 Sep 2012 @ 9:31 PM

  84. One interesting point raised in discussions in the breaks (Chatham House rules apply) was how you attribute / apportion blame / reparations / compensation in the context of legal actions (if it were ever to go down that route). It was quite clear that it's even more fraught than apportioning a fractional attributable risk. Carbon emissions are like taking out a reverse mortgage with large compound interest because of the lags in the system. Not all CO2 emissions are equal in the context of a transient climate response. Carbon we emitted in 1850 has more 'power' than carbon a certain C-word (!) emits today in explaining today's events (C's (or anyone's today) matter more / catch up down the line). So it matters not just how much carbon emission 'debt' was taken out but when it was taken.

    As to the editorial, it's impossible to characterize a two and a half day meeting in a few hundred words. Insofar as it is a window into the world of discussions of a nascent branch of the science, it is of interest. But it's an editorial (the clue is in the name, for the hard of thinking) and the proof will be in terms of demonstrable outcomes in the literature, which has a far longer lag time. My personal view is that it was a very interesting meeting reflecting both advances and challenges. That is what makes doing science interesting and exciting. I have no doubt progress is being made. There is substantial interest in pushing this forwards but there are many challenges in the tools (observations – not mentioned in the editorial – equally important, and models) and the techniques. That is what science is all about and what makes such meetings hugely valuable.

    Comment by Peter Thorne — 23 Sep 2012 @ 9:50 PM

  85. Response to Jim Larsen #79: I am not sure whether you are agreeing or disagreeing with me. Typically, the false positive or Type I error is confined to about 0.05, while in most studies the false negative error rate is not known and typically ranges around 20–80 percent. Because the natural sciences developed with a goal of minimizing Type I or false positive errors (wanting to avoid adding speculative knowledge to the body of knowledge about natural phenomena), Type II errors have basically been ignored. The problem, as I stated, is that under conditions of scientific uncertainty it is almost impossible to meet the 0.05 level for rejection of a null hypothesis, and when this happens in matters concerning environmental/human health, risks to the latter always increase. This raises moral, ethical and legal questions about what standards of evidentiary proof ought to be used in matters of environmental/human health protection. Allowing a higher probability of making a Type I error reduces the likelihood of making a Type II error (false negative), and thereby the likelihood of increasing environmental/human health risks. In other words, if we are going to err, it is morally better to err on the side of protection of environmental/human health.

    Finally, I am not proposing that the 0.05 level for rejecting null hypotheses be used in environmental/human health issues laden with scientific uncertainty. I am suggesting that the level needs to be increased to 0.10 or 0.20 or some other probability, which would mean a reduction in the probability of Type II or false negative errors, thereby better protecting the public and environment under conditions of uncertainty.
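
    A minimal numerical sketch of this trade-off, for a one-sided z test with an assumed true effect of 1.5 standard errors (both numbers invented for illustration):

    from scipy.stats import norm

    effect = 1.5  # assumed true effect size, in standard-error units
    for alpha in (0.05, 0.10, 0.20):
        z_crit = norm.ppf(1 - alpha)      # one-sided rejection threshold
        beta = norm.cdf(z_crit - effect)  # Type II error: failing to detect a real effect
        print("alpha = %.2f  ->  Type II error = %.2f" % (alpha, beta))

    Raising alpha from 0.05 to 0.20 cuts the Type II error here from about 0.56 to about 0.26, which is the precautionary shift being argued for.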

    To David Benson #81: I agree Bayesian statistics show promise, but they have not been used enough. I wish more people would try to apply them.

    Comment by John Lemons — 23 Sep 2012 @ 10:56 PM

  86. Re: 60. Jim (and to a lesser extent Ray)

    “However, your claim about the existence of such events sorely needs documentation. You mentioned the 2012 European cold snap, but gave no evidence that it was as high sigma as the heat waves you refer to.”

    Okay…so here are some maps for what I’ve been talking about for Alaska and Europe last winter season:

    http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2012&month_last=8&sat=4&sst=1&type=anoms&mean_gen=11&year1=2011&year2=2011&base1=1951&base2=1980&radius=1200&pol=reg

    http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2012&month_last=8&sat=4&sst=1&type=anoms&mean_gen=01&year1=2012&year2=2012&base1=1951&base2=1980&radius=250&pol=reg

    …Now, these should qualify as high sigma events, and furthermore, it should NOT even be necessary that these events dwarf or even meet the high sigma events on the ‘hot’ side of the coin. No one is trying to imply that this sort of thing is going to be continuously surpassing regional heat waves.

    As far as models that would render such events impossible– simply pick one. There have been a bunch of attempts to insta-define recent weather events in terms of their anthropogenic signatures. I would say the famed "Climate Dice" analogy is also one of these same issues. If you take Stott et al (2004) for example…

    http://www.nature.com/nature/journal/v432/n7017/full/nature03089.html

    …such simulations that are used to declare heat wave event X as beyond the threshold magnitude of normal expectability in a non-anthropogenically warming climate…should also be applicable to these cold snap events (read: ones that do not even need to be anomalously greater by comparison). The conclusions you would draw from such an exercise would be that such events would become statistically near impossible in our human-modified climate as understood by these models.

    Now, that is not to say that in reality they would be impossible…I'm only talking about applying methodology equally in these cases. Sure, someone can wave a hand and say that in our anthropogenically warming world we can still have absolutely any kind of weather out there– I'm just talking about the actual distribution of the probability.
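
    What "applying the methodology equally" might look like in code, as a minimal sketch: draw toy "natural" and "anthropogenic" ensembles (the parameters are invented, not from Stott et al or any GCM) and do the same exceedance bookkeeping on both tails:

    import numpy as np

    rng = np.random.default_rng(1)
    nat  = rng.normal(0.0, 1.0, 100000)  # counterfactual climate, standardized units
    anth = rng.normal(0.6, 1.1, 100000)  # warmed and slightly widened (assumed)

    for name, thresh, side in (("hot", 2.5, 1.0), ("cold", -2.5, -1.0)):
        p_nat  = np.mean(side * nat  > side * thresh)  # exceedance past the threshold
        p_anth = np.mean(side * anth > side * thresh)
        far = 1.0 - p_nat / p_anth                     # fraction of attributable risk
        print("%4s tail: P_nat=%.4f  P_anth=%.4f  FAR=%+.2f"
              % (name, p_nat, p_anth, far))

    With these toy numbers the hot-tail FAR is large and positive while the cold-tail FAR is negative (warming reduces the cold risk), which is exactly the asymmetry the two sides of this exchange are arguing over.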

    “A European cold snap will leave extra heat to be absorbed by the ocean, emitted to space, or be distributed around the remaining planet. For this discussion, I’d say distributed around is the dominant factor. Not a robust conclusion, but in a fashion, cold extremes increase hot extremes, and vice versa. Wasn’t winter 2012 a tad warm in the USA?”

    My point is NOT that somehow the prevalence of these cold snaps will all of a sudden make the globe anomalously cold this coming November-January… But rather that all it needs to do is continuously mitigate an increasing dominance of heat waves themselves. The idea here is that this one process, clearly due to a warming world reducing arctic sea ice, is introducing a currently poorly understood (and apparently sometimes discounted) feedback into the climate system that "stunts" or "mitigates" the overall warming picture to a non-zero degree. Compare such an assertion to an alternative expectation of wholesale "run-away" global warming or near-term "tipping point" climate change. I'm not saying this will have the effect of buying us millennia before citizens of Earth need to do something, but I am saying that this is something that deserves to be a thought in the discussion, because we're already seeing the global temperature anomaly lagging behind projected expectations, and it's important to start discussing reasons why rather than simply droning on about error bars.

    In the "new" probability-distribution examples first discussed by Rahmstorf in an aforementioned link (and discussion at RealClimate), a unit value that accounts for both frequency and magnitude of extreme cold events should be decreasing in our warming world. However, if this feedback mechanism starts regularly producing high-sigma cold snaps in the winter season (remember, in a warming world, cold snaps will have a slightly lower threshold to leap in order to become highly anomalous– they would NOT need to be record-breaking according to the historical record, but hot events would be)…then it would not be explainable within a normal probability distribution model. Instead it would show something where the variance is greater, yet the frequency of cold events is maintained (or even increased), while also having the increased heat events and the average trend inching warmer– but at a slower pace than previously anticipated.

    Comment by Salamano — 24 Sep 2012 @ 12:37 AM

  87. I think you are taking on more formidable forces than you are prepared to admit – a mixture of die-hard climate modelers and government departments. The recent editorial in Nature you reference, Better models are needed before exceptional events can be reliably linked to global warming, makes me despair.
    You tell us that the editorial was influenced by an invitation only workshop, Attribution of Climate and Weather Extremes: Assessing, Anticipating and Communicating Climate Risks. This was…

    Hosted by the Smith School of Enterprise and the Environment, the Environmental Change Institute and the Oxford Martin School, University of Oxford, with the support of the UK Government Foreign and Commonwealth Office, the US National Oceanic and Atmospheric Administration, the UK Department of Energy and Climate Change and the Risk Prediction Initiative of the Bermuda Institute of Ocean Sciences.

    Organising Committee:

    Co-Chairs: Peter Stott, Met Office and Randy Dole, NOAA

    Local hosts: Myles Allen and Pete Walton, University of Oxford

    Steering committee: May Akrawi, FCO; Chris Hewitt, Met Office; Marty Hoerling, NOAA; Arun Kumar, NOAA; Falk Niehoerster, Risk Prediction Initiative, Bermuda Institute of Ocean Sciences; Chris Sear, DECC; Peter Thorne, NOAA.

    Key organisations for me are

    The Smith School of Enterprise and the Environment
    The Environmental Change Institute
    Foreign and Commonwealth Office
    US National Oceanic and Atmospheric Administration
    UK Department of Energy and Climate Change
    UK Met Office

    Except for the NOAA, I would apply one or more of these phrases to members of the above group:

    Combating climate change is too expensive for the UK
    Combating climate change is not business friendly
    We won’t believe the real world until we can model it
    We have reputations to preserve

    I’m in despair. Is it justified?

    [Response:No. – gavin]

    Comment by Geoff Beacon — 24 Sep 2012 @ 2:55 AM

  88. 85 John L said, “Type II errors have basically been ignored”

    I believe we generally agree. I’m just saying that those “generally ignored” type 2 errors have to be differentiated further before un-ignoring them.

    Comment by Jim Larsen — 24 Sep 2012 @ 2:56 AM

  89. Nice thoughtful post, Gavin, thanks. Ed and Geert Jan’s responses do a good job of clarifying what the balance of opinion at the workshop was. I’m pleased the workshop did consider seriously the question of the value of extreme event attribution, and also that we managed to include some more skeptical voices, not just the usual team. I don’t think there was ever a chance of resolving the question of utility in a 2 day workshop, and clearly it is something we are going to have to keep thinking about (perhaps we could tempt you along to next year’s meeting in Washington?). I am personally very skeptical about arguments that certain branches of research (like geo-engineering) are Not in the Public Interest (whatever that is) even when the public are evidently interested in them, but I recognise that thoughtful people have a lot of useful things to say about this kind of question (we did invite Mike Hulme, by the way, but he had a prior commitment).

    I’ve just one point to add to the comments (and someone may well have made it already — in which case apologies): I agree that right now we can’t generally address these event-attribution questions in the 24-hour news cycle, but I think it is important to explain to people there is absolutely no reason in principle why we should not. We do weather forecasting in real time, and there is no fundamental difference between ensemble forecasting and probabilistic attribution. And, of course, using the same modelling tools would allow us to evaluate the reliability of attribution claims, something the workshop was rightly very concerned about. But that would require a commitment to climate services well beyond anything envisaged at present.

    I would welcome a broader debate about the balance of resources in climate research between understanding what is happening now versus predicting climate decades to centuries in the future, because I personally don’t think we have the balance right (but then, I would say that, wouldn’t I?).

    Regards,

    Myles Allen

    Comment by Myles Allen — 24 Sep 2012 @ 5:35 AM

  90. 73 Hmmm…Dan H. or Tamino? Tough choice. Not. On to the NOAA Storm Atlas… Jay Dee @ 53 and others: The NOAA Storm Atlas Volume 1 for Montana dates to 1973, with an Arkell and Richards 1986 NOAA update. A Weather Service hydrologist described it as a bit of a time warp. This Atlas can't possibly include the 11-15% increases in the amount of precipitation falling in very heavy precipitation events for the period 1958–2007 noted by Karl et al., 2009. We're using some more recent USGS hydrological information to try and bring the storm numbers into at least the 21st century, but if anyone has any other good ideas I'd be happy to hear them.

    Comment by tokodave — 24 Sep 2012 @ 8:56 AM

  91. As I see it, it is impossible to attribute an extreme event to anthropogenic global warming (AGW), although there are exceptions which I will mention below. But I feel that scientists are in effect lying when they say to the press that they cannot prove an event was caused by AGW. That gives the impression that the event was not caused by AGW. But the scientist cannot prove that it was not caused by AGW. So what he was implying was a lie. What he should say is that the extreme event might have been caused by global warming, which is true.

    Interestingly, there was an extreme event where one can say it was caused by global warming – the collapse of the Larsen B ice shelf. Ice shelves calving is normal, but the collapse of the Larsen B was caused by rising temperature, and so it was a victim of global warming.

    Obviously, any one wild fire cannot be blamed on global warming especially if it is known that someone dropped a lighted cigarette end. But recently there has been an increase in the number and intensity of wild fires. This is surely due to global warming. If you cannot be certain then why not tell the press you are 90% certain that the increase is due to AGW. That is telling the truth. Just saying you are not sure implies you are 50% sure.

    This is really about your audience. If it is a journalist, keep it simple. If it is a colleague, explain all the difficulties.

    Cheers, Alastair.

    Comment by Alastair McDonald — 24 Sep 2012 @ 9:53 AM

  92. Why attribute extreme events? So my insurance company will know when to cancel my insurance!

    Seriously, I think the insurance companies look at the extent to which extreme events invalidate their currently used estimation models. And I think one did cancel insurance on a house I owned after Hurricanes Hugo and Fran penetrated deep inland.

    I know that invalidating models is not exactly the same as attribution, but it's close.

    Comment by Tom Adams — 24 Sep 2012 @ 11:37 AM

  93. 86 Sal said, “all it needs to do is continuously mitigate an increasing dominance of heat waves themselves”
    Yet in both of your examples, chocolate colored areas exceed purple. And I'm not sure how your evidence could prove your point. Maybe by showing times with vs. without severe cold anomalies and counting the hot areas?

    Frankly, either I’m seriously missing the point (that happens), or your claim makes no sense. Unless global average temperature decreases, every cold snap will increase the chance of a simultaneous heat wave. And for the cold snaps to deserve the same “coverage”, they WOULD have to be close to the sigma of the heat waves.

    Your link’s conclusion is nowhere near a claim of impossibility:

    “we estimate it is very likely (confidence level >90%) that human influence has at least doubled the risk of a heatwave exceeding this threshold magnitude.”
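
    (For the arithmetic behind that quoted figure: the fraction of attributable risk used in such studies is FAR = 1 - P_nat/P_anth, so "at least doubled the risk", i.e. P_anth >= 2*P_nat, corresponds to FAR >= 1 - 1/2 = 0.5, meaning at least half of the event's risk is attributable to human influence. The notation here is the standard one, not taken from the comment.)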

    Comment by Jim Larsen — 24 Sep 2012 @ 1:34 PM

  94. @93 Jim

    “Frankly, either I’m seriously missing the point (that happens), or your claim makes no sense. Unless global average temperature decreases, every cold snap will increase the chance of a simultaneous heat wave. And for the cold snaps to deserve the same “coverage”, they WOULD have to be close to the sigma of the heat waves.”

    My feeling is that this is not entirely true (and yes, I certainly can be wrong). Yes, in a warming world, cold snaps would be more than balanced out by same (and greater) heat spells… But the real issue in my opinion is “how” warming. My contention is that this new polar interaction is a climate-change induced feedback due to the extreme melting of the arctic ice cap. Therefore, the appearance of these cold snaps DO NOT have to more than displace (or even equal) the heat waves, nor do they have to be >= the sigma of the heat waves to be interesting– because to me the incidence of them indicates a wrench in a theoretically modeled system of runaway/accelerated warming.

    “Your link’s conclusion is nowhere near a claim of impossibility:…“we estimate it is very likely (confidence level >90%) that human influence has at least doubled the risk of a heatwave exceeding this threshold magnitude.” ”

    My point here is this… Using the same model/methodology as this paper does, what kind of human-induced risk modification do you think would be assigned to cold waves of 6-12 sigma magnitudes? (answer: pretty low…lower than observed).

    Eventually, somebody’s going to have to start investigating what (cold-related) anomalies/feedbacks are coming into the climate picture to keep global observed temperatures from performing the way models have expected them to. Covering eyes and ears, hand-waving about error bars, and praying for another legit El Nino event is certainly another option. I’m just tossing out a candidate.

    Comment by Salamano — 24 Sep 2012 @ 2:12 PM

  95. 94 Sal,

    Our infrastructure is built to traditional climate, and modern adjustments generally increase both heat and cold tolerance. So even if cold snaps occur which are as cold as ever – but surely not more than a tad colder – we’re prepared for them.

    Or are you making the claim that cold snaps are currently worse than they were in the 60s, and even further warming will accelerate this dangerous trend?

    Comment by Jim Larsen — 24 Sep 2012 @ 3:30 PM

  96. >> I’m in despair.
    >> Is it justified?
    >
    > [Response: No. – gavin]

    This should become a CafePress mug-and-T-shirt for RealClimate.

    Comment by Hank Roberts — 24 Sep 2012 @ 4:56 PM

  97. Re tokodave at 90:
    Montana is better off than New England. NOAA 24 hour precip
    data for New England is Technical Report 40, dated 1961. See
    http://www.nws.noaa.gov/oh/hdsc/currentpf.htm#PF_documents_by_title
    Anyone remember those New England floods a few years back?

    So far, NOAA has developed more up to date Atlas 14 volumes
    for the Ohio River Valley/ MidAtlantic states, Puerto Rico/Caribbean,
    Alaska/Hawaii, and Southwest USA.

    NOAA’s website indicates that the Update for the Southeast/Midwest states
    is in progress, with the Northwest USA and New England scheduled for later.
    See http://hdsc.nws.noaa.gov/hdsc/pfds/index.html

    So if you guys want to stick your finger in the mixing bowl, taste the batter,
    and give suggestions to the cook, there is probably still time. I don't know
    when they will update the recently issued (2004) volumes.

    Comment by Don Williams — 24 Sep 2012 @ 4:57 PM

  98. About the bore-holed comment that mentions the NPR tree ring scientist: his name is Tom Swetnam. He was not claiming that, because there is more fuel in forests which haven't burned, global warming plays no role in the increased risk of wild fires. If we look up his published work, say in Science, we find that he points out that climate change does indeed increase the wild fire risk. Indeed, he has even quantified the increased risk due to climate change.

    In a trail-blazing article in Science Magazine in 2006, Dr. Swetnam and his colleagues showed that a one degree Fahrenheit temperature increase translates into a boost in the frequency and duration of large wildfires in the U.S. West. Since the 1980s, warming and drying have lengthened the fire season by more than two months and sparked four times more fires.

    Warming and Earlier Spring Increase Western U.S. Forest Wildfire Activity

    “The overall importance of climate in wildfire activity underscores the urgency of ecological restoration and fuels management to reduce wildfire hazards to human communities and to mitigate ecological impacts of climate change in forests that have undergone substantial alterations due to past land uses. At the same time, however, large increases in wildfire driven by increased temperatures and earlier spring snowmelts in forests where land-use history had little impact on fire risks indicates that ecological restoration and fuels management alone will not be sufficient to reverse current wildfire trends.”

    Comment by Unsettled Scientist — 24 Sep 2012 @ 5:28 PM

  99. Sal said
    ” Therefore, the appearance of these cold snaps DO NOT have to more than displace (or even equal) the heat waves, nor do they have to be >= the sigma of the heat waves to be interesting– because to me the incidence of them indicates a wrench in a theoretically modeled system of runaway/accelerated warming.”

    Whose theoretical model has runaway warming?

    Comment by Jathanon — 24 Sep 2012 @ 5:49 PM

  100. A counterexample is the prediction, 6 years ago after Katrina, that hurricanes would be increasing in number and intensity. The insurance and re-insurance industries latched onto this speculative report and jacked up already high home insurance rates in Florida by 40%.

    The Sarasota Herald-Tribune won the 2011 Pulitzer Prize for investigative journalism for exposing this circus.

    Florida insurers rely on dubious storm model:
    http://www.heraldtribune.com/article/20101114/article/11141026

    Instead of using historical hurricane rates to set expected disaster losses, they switched to computer modelling that predicted much higher disaster losses. Didn’t happen. Still not happening. Hurricane disaster losses for the 2000 to 2010 decade were right on the historical average, even with Katrina.

    So it’s one thing to do the science, it’s another when speculative headline seeking science gets too far ahead of reality. The science was clearly overplayed in this case. We haven’t had a CAT3 landfall in the US since 2006, an all time record. Global cyclone energy is running near all time lows.

    Florida insurance rates have not declined.

    So do the science, but please don't declare it useful before it is ready for prime time.

    [Response: Our discussion of hurricanes and global warming from 2005. Care to amend? – gavin]

    Comment by Tom Scharf — 24 Sep 2012 @ 6:57 PM

  101. Re Tom Scharf at 100,
    Maybe that hurricane energy went somewhere else — and maybe the insurance companies are not raking in as much money as you think.
    a) From http://www.usgs.gov/newsroom/article.asp?ID=2343#.UGEEHo09zl8
    “The epic flooding that hit the Atlanta area in September was so extremely rare that, six weeks later this event has defied attempts to describe it. Scientists have reviewed the numbers and they are stunning.
    “At some sites, the annual chance of a flood of this magnitude was so significantly less than 1 in 500 that, given the relatively short length of streamgaging records (well less than 100 years), the U.S. Geological Survey cannot accurately characterize the probability due to its extreme rarity,” said Robert Holmes, USGS National Flood Program Coordinator. “Nationwide, given that our oldest streamgaging records span about 100 years, the USGS does not cite probabilities for floods that are beyond a 0.2 percent (500-year) flood.”

    “If a 0.2 percent (500-year) flood was a cup of coffee, this one brewed a full pot,” said Brian McCallum, Assistant Director for the USGS Georgia Water Science Center in Atlanta. “This flood overtopped 20 USGS streamgages – one by 12 feet. The closest numbers we have seen like these in Georgia were from Tropical Storm Alberto in 1994. This flood was off the charts.” …
    “Applying rainfall frequency calculations, we have determined that the chance of 10 inches or more occurring at any given point are less than one hundredth of one percent”, said Kent Frantz, Senior Service Hydrologist for the National Weather Service at Peachtree City. “This means that the chance of an event like this occurring is 1 in 10,000.”

    b) Wiki says Georgia estimated damages were $500 million
    http://en.wikipedia.org/wiki/2009_Southeastern_United_States_floods

    c) There was massive flooding in the Midwest in March 2008 and in the upper Midwest in June 2008.
    http://en.wikipedia.org/wiki/March_2008_Midwest_floods

    d) The May 2010 floods in Tennessee were 1000 year floods.
    http://en.wikipedia.org/wiki/2010_Tennessee_floods

    e) In 2011, Hurricane Irene caused $296 million in damages in New York alone — one of the most costly storms there ever.
    http://en.wikipedia.org/wiki/Effects_of_Hurricane_Irene_in_New_York

    Comment by Don Williams — 24 Sep 2012 @ 8:26 PM

  102. Of course, one could solve the relevant exercise in chapter 6 of Ray Pierrehumbert's "Principles of Planetary Climate" to see that a decent approximation is that the annualized global precipitation increase goes as the square of the temperature increase.

    Comment by David B. Benson — 24 Sep 2012 @ 9:48 PM

  103. Dear Don Williams,

    You seem to be very keen on having climate maps based not on climate statistics but on scenario projections from computer simulations with climate models, as mandatory(?) guidelines for engineering in the USA.
    For safety thresholds and so on, I assume. You also seem to frame this as a kind of responsibility of climate scientists.

    I encourage you to read this RC article:

    http://www.realclimate.org/index.php/archives/2012/06/far-out-in-north-carolina/

    Especially the section “North Carolina politics” and maybe also the discussion.

    People like Senator Inhofe will go as far as combating such attempts by law, at least when public funds are involved.

    Comment by Marcus — 25 Sep 2012 @ 2:53 AM

  104. To Marcus at 103:

    1) I am not competent to prescribe how climate maps should be constructed–
    I merely agree with Gavin’s comment above that climate data provided by the
    US Government should be the best possible — hence, based on current
    science.

    2) As I noted above, NOAA's Atlas 14 Volume 2 says a 100-year storm
    event in my area has a 24-hour rainfall of 7.63 inches. Yet NOAA's data
    archive shows the following rainfall events in this area in recent years:
    Aug 28, 2011: 8 inches; October 1, 2010: 9.23 inches; Sept 29, 2004:
    6.92 inches; Sept 17, 1999: 9.4 inches.

    To the list of floods I noted in post 101 above I would add the
    New England Flood of May 2006.

    3) Obviously the maps have to be based partially on climate statistics
    but I don’t see how NOAA could have collected a record of rainfall
    occurring on the Atlantic ocean’s surface every six miles for the past
    100 years and going up to 400 miles offshore. Plus the continental
    interior does not have things called currents. So I think Atlas 14 is missing
    something for US East Coast areas which receive large amounts of
    rainfall from offshore hurricanes. But I do not have the professional
    qualifications needed to publish a critique of Atlas 14 in a reputable journal.

    4) Senator Inhofe does not run NOAA. Senator Inhofe also has not
    repealed the First Amendment for scientific journals. If someone at least
    published a critique of Atlas 14, then common citizens could point to that
    critique in local government hearings and argue for additional stormwater
    measures as a precaution. If US scientists fear for their funding, then
    have some European publish it. As I noted, Atlas 14 is the most important
    and influential climate model in the USA today.

    You don’t even have to provide an alternative model — just note where
    reality is not matching government climate data in major ways. I know —
    improbable extreme events can happen. But not, I think, several times
    in the same location within a few years.

    [Response: Atlas 14 is not a climate model. The documentation is quite clear, and the people involved easily findable. If you have a problem with it, you should be contacting them in the first instance. – gavin]

    Comment by Don Williams — 25 Sep 2012 @ 9:56 AM

  105. Tom Scharf wrote: “Global cyclone energy is running near all time lows.”

    For the 12 Atlantic hurricane seasons 2000-2011, the ACE was “above normal” in 8 seasons, with 4 of those rated as “hyperactive”. Only 2 seasons had ACE “below normal”. Three of the ten Atlantic seasons with the highest numbers of named storms have occurred since Katrina (2007, 2008 and 2010).

    Tom Scharf wrote: “We haven’t had a CAT3 landfall in the US since 2006, an all time record.”

    The number of hurricanes that happen to make landfall in the US has nothing to do with anything.

    And of course, by cleverly selecting CAT3 as an arbitrary cutoff point, you get to ignore CAT2 Hurricane Ike (2008), which caused nearly $28 billion in damages, CAT2 Hurricane Gustav (2008), which caused over $4 billion in damages, CAT1 Hurricane Irene (2011), which caused nearly $16 billion in damages, and tropical storm Lee (2011), which caused over $1.6 billion in damages and produced unprecedented flooding all over the east coast, as far north as New York and Vermont.

    You are just reciting talking points.

    Comment by SecularAnimist — 25 Sep 2012 @ 11:17 AM

    We might also point Tom to Jeff Masters’ WunderBlog for an update on the Pacific supercyclones… http://classic.wunderground.com/blog/JeffMasters/show.html, or his earlier discussions of this year’s series of derechos that hit the east coast, or….

    Comment by tokodave — 25 Sep 2012 @ 1:15 PM

  107. The National Geographic article showed that US weather disaster losses increased only 50% ($339 B to $541 B) from the previous 15 years to the most recent 15, and that the main reason was “more people living on higher-value properties in vulnerable areas.”

    Using monetary values to describe natural disasters does not necessarily say anything about their strength or frequency.

    Comment by Charles — 25 Sep 2012 @ 2:57 PM

  108. The climate policy discussion is now primarily about economics. I think it is essential that extreme weather events due to our warming of the global climate are correctly attributed so that costs can be assessed. As you say, Gavin, one cannot perform a cost/benefit analysis without accurate figures for both sides of the equation.

    The key fact is that analyses to date suggest that the costs of inaction on climate so vastly outweigh those of preventing further damage that even a little extra evidence of this could move leaders to action. But they do need numbers, and those depend on being able to attribute weather events to human agency.

    Comment by David — 25 Sep 2012 @ 7:18 PM

  109. 1) Re David at 108, before you make a strong effort to provide info for
    cost-benefit analysis, you need to ask if rational cost-benefit analysis
    exists in the US political system.

    For example, we have a transport system based on $40 per gallon
    gasoline –$4/gal at the pump and $36/gal on our income tax to support military
    operations in the Middle East. But better alternatives don’t arise because the market
    signal (pump price) is kept artificially low. A simple fix would be to add a $20/gal
    security fee to the pump price –with an offsetting tax credit so people are not worse
    off. If you then let people pocket any savings they get from going to cheaper
    alternatives, you would have a boom in alternative fuel development.

    That might also significantly reduce carbon emissions, although I’ve heard
    electric cars referred to as coal-burning vehicles.

    2) If the insurance industry isn’t eating the increased damages from
    heavy flooding, then their corporate clients are via higher premiums. Someone
    would benefit from remedial measures.

    3) But, in my personal opinion, the campaign finance system puts a heavy
    thumb on the scale in the USA.

    Comment by Don Williams — 25 Sep 2012 @ 9:20 PM

  110. > add a $20/gal security fee to the pump price –with an
    > offsetting tax credit so people are not worse off.

    Carbon Tax & 100% Dividend … James E. Hansen …
    http://www.columbia.edu/~jeh1/2009/WaysAndMeans_20090225.pdf

    Comment by Hank Roberts — 25 Sep 2012 @ 9:37 PM

  111. Gavin at #104:

    Mr Williams seems a little excited, but his starting point is relevant enough. In my experience, engineering design rainfall intensities have a direct, measurable and substantial effect on the cost of civil infrastructure. A confident, model-based prediction of how they may change over time should be of considerable public policy utility.

    (Another example: The Australian equivalent of Atlas 14 is currently under revision, and early indications are for a substantial design rainfall increase. That is based on data, not model projections, which are being addressed separately.)

    Comment by GlenFergus — 25 Sep 2012 @ 10:25 PM

  112. Peter Thorne (#84),

    I think that if we were to go back and retroactively eliminate the emissions from 1850, it would have astonishingly little effect on the climate signals we are seeing today. Eliminating the emissions from 2011, however, and continuing to do so, would give us a noticeable change from our projected BAU course. There is a problem with the analysis you are invoking. Huge and growing emissions today are much, much more important to climate change than tiny emissions in the past, not the other way around.

    Comment by Chris Dudley — 26 Sep 2012 @ 8:51 AM

  113. Chris Dudley (#112)

    I fear you entirely misunderstand my point. The climate system consists of a set of fast atmospheric responses and much slower cryospheric and oceanic responses. So, it takes a long time for the effect of CO2 emissions to be fully equilibrated and for the climate to find a new steady state. Even if we all decided to go and live in caves today, the climate would continue to warm for at least a couple of centuries until the slow response components were in equilibrium with the imposed change in LW radiative balance we have already ‘achieved’. So, the freshly minted CO2 molecules the coal-fired power plant down the road belched out for me to type this message have proportionately much less effect on the state of today’s climate than the emissions I made when I first used a BBC B+ computer at primary (elementary) school. By the same token, how much C-word emits today is much less important than how much OECD countries emitted prior to 1950 in explaining the state of today’s climate. So, not all carbon emissions are equal in the context of a transient climate change; the historical emissions, which have gone further towards equilibrium with the slowly changing climate system components, are more important in explaining the transient (but NOT the final) climate state. My kids can blame the CO2 I emitted writing this for events visited upon them when they are adults …

    Comment by Peter Thorne — 26 Sep 2012 @ 12:25 PM

  114. Peter (#113),

    I see the origin of your mistake now. You think there is momentum in the climate system. There is not. There is only lag. We are at the temperature today that would be equilibrium for the carbon dioxide concentration of 1982. It is only the concentration above that (emitted in recent years, not 1850) that induces further temperature increase. If, as you say, we were all to go live in caves, there would be a continued increase in temperature until the concentration fell to the energy balance level and then temperature would fall as the concentration continued to fall. If the lag is as short as 30 years (years back to 1982) then the temperature would rise an additional 0.3 C or so and then start to fall around 2050, not centuries from now. For longer lags, you get a later turn around but also a lower maximum temperature. For a 3 century lag we’d get a less than 0.1 C further increase in temperature and turn around to cooling around 2150. There is a little more discussion here: http://www.realclimate.org/index.php/archives/2012/08/an-update-on-the-arctic-sea-ice/comment-page-7/#comment-249977
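
    A minimal sketch of this lag-only behavior, in IDL or (free) Fawlty syntax (the 3 C per doubling sensitivity and the 30 year lag are the assumptions above; the concentration path is purely illustrative):

    ;lag-only toy: temperature relaxes toward an equilibrium target, nothing more
    s=3. ;assumed sensitivity, C per doubling
    tau=30. ;assumed lag, years
    co2=[fltarr(50)+390.,390.*exp(-findgen(150)/100.)>285.] ;toy concentration path, ppm
    teq=s*alog(co2/285.)/alog(2.) ;equilibrium target temperature
    t=fltarr(200)
    t(0)=0.8 ;assumed present warming, C
    for i=1,199 do t(i)=t(i-1)+(teq(i-1)-t(i-1))/tau ;relax toward the (moving) target
    plot,findgen(200)+2012.,t,xtit='Year',ytit='Warming (C)' ;rises, then turns to cooling once the falling target drops below it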

    Comment by Chris Dudley — 26 Sep 2012 @ 1:46 PM

  115. Chris (#114),

    Where did I suggest this was anything other than the fact that there are lags, or imply anything about momentum? Under transient climate change, the fact that there are lags means the emissions of 30 years ago are more important than today’s in explaining today’s climate. Nothing to do with momentum and absolutely everything to do with lags. Under transient climate change it is impossible to argue, on a physical basis of our understanding of the climate system, that the CO2 molecules I emit now have a greater bearing on today’s climate system than those emitted 100 years ago. Not even the fast atmospheric response component has realized the impact of today’s emissions yet, whereas a similar amount of CO2 that was emitted 100 years ago has had a chance to equilibrate with many components of the climate system. In explaining the climate in the here and now, those 100-year-old ‘per capita CO2’ emissions have far more say. At the equilibrium state they will have equal say. But here and now we are mainly reaping what our parents sowed. This is why scenario projections in CMIP3/5 only meaningfully start to diverge in global mean temperature around mid-century. What we do now will make a difference for our grandkids. In terms of the next 30 years, what we do now makes little odds … there is substantial interest due. That is what happens when you have a system which has substantive physical lags and you tweak it.

    Comment by Peter Thorne — 26 Sep 2012 @ 2:28 PM

  116. http://www.agu.org/pubs/crossref/pip/2012GL053338.shtml

    European winter climate and its possible relationship with the Arctic sea ice reduction in the recent past and future as simulated by the models of the Climate Model Intercomparison Project phase 5 (CMIP5) is investigated, with focus on the cold winters. … A pattern of cold-European warm-Arctic anomaly is typical for the cold events in the future, which is associated with the negative phase of the Arctic Oscillation. These patterns, however, differ from the corresponding patterns in the historical period, and underline the connection between European cold winter events and Arctic sea ice reduction.

    Comment by Hank Roberts — 26 Sep 2012 @ 5:00 PM

  117. Chris Dudley & Peter Thorne — The time scale you are considering is far too short. Think centennially.

    Comment by David B. Benson — 26 Sep 2012 @ 6:11 PM

  118. David (#117)

    Or even millenially (is that even a word?) for the ice sheet components. The ocean takes about 200 years (roughly) to mix completely down to the abyssal ocean, but for the big ice sheets all bets are off as to when (if ever) they would be in perfect equilibrium with the climate system (except if they melted entirely, of course, but let’s not even countenance that …). I was using the shorter term for analogy – it’s hard to think about your great^90 grandkids after all …

    Comment by Peter Thorne — 26 Sep 2012 @ 8:23 PM

  119. Peter Thorne @118 — Try millennially.

    http://www.merriam-webster.com/dictionary/millennially

    Comment by David B. Benson — 26 Sep 2012 @ 9:02 PM

  120. Peter (#115),

    Putting labels on identical particles will get you into trouble. But, I’m not sure that is really where you are having difficulty. I would say also that there were no emissions from 100 years ago that are comparable to today’s. But, mainly I think what you are considering a transient response is what is confusing you. The temperature response to a Heaviside function does take a while to complete. But to get a Heaviside function in concentration, you need ongoing emissions because carbon dioxide is absorbed out of the atmosphere as time goes by. If you take account of that, you’ll understand that the most recent emissions are the most important.

    You claim that today we are only seeing the effect of emissions from thirty years ago, and that more recent emissions have yet to have any physical effect. But we can rather easily calculate a limit to what would have happened to temperature today had we all gone to live in caves in 1982. Again, we take a 30 year lag as the shortest possible lag for the system and see that there would have been only 0.17 C of warming from 1982 to today instead of the 0.4 C that actually occurred. The more recent emissions are responsible for at least 58% of the warming. For a 3 century lag, they’d be responsible for 88% of the warming since 1982.
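
    (That is, (0.4 - 0.17)/0.4 ≈ 0.58, hence “at least 58%”.)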

    How quickly and how far a variable moves under lag conditions depends on the target it is trying to match. That target is the temperature we’d calculate for a specific climate sensitivity. I’ve been using 3 C per doubling. Raise the concentration of carbon dioxide, and the lagged variable responds immediately to that change. Double the concentration today and the temperature begins to rise today. It may take a while to finish, but it does not take a while to start. Your claim is basically that it does wait to begin to respond. But that is not how energy systems work.

    If you run this script, which ends emissions when the concentration reaches 340 ppm, the solid line in the first panel is the target temperature and the dashed line is the response with a thirty year lag, dot-dashed with a 90 year lag, and triple-dot-dashed with a 300 year lag (applied starting when emissions end).


    ;use IDL or (free) Fawlty languages
    a=findgen(1000) ;year since 1850
    b=fltarr(1000) ;BAU concentration profile
    b(0)=1
    for i=1,999 do b(i)=b(i-1)*1.02 ;2 percent growth
    ;plot,b(0:150)*4.36+285.,/ynoz ; 370 ppm year 2000
    c=(18.+14.*exp(-a/420.)+18.*exp(-a/70.)+24.*exp(-a/21.)+26.*exp(-a/3.4))/100. ;Kharecha and Hansen eqn 1
    e=fltarr(1000) ;annual emissions
    for i=1,999 do e(i)=b(i)-b(i-1)
    d=fltarr(1000) ; calculated concentration
    t=340.-285. ;target concentration increase above the 285 ppm pre-industrial level
    f=0 ;flag to end BAU growth
    for i=1,499 do begin & d(i:999)=d(i:999)+e(i)*c(0:999-i)*4.36*2. & if d(i) gt t then begin & e(i+1:999)=0 & f=1 & h=i & endif else if f eq 1 then e(i+1:999)=0 & endfor ;factor of two reproduces BAU growth
    !p.multi=[0,2,2]
    g=alog(d/285.+1)/alog(2.)*3; instantaneous temperature curve.
    gg=g*0.8/1.5
    for i=h, 998 do gg(i+1)=gg(i)+(g(i)-gg(i))/30. ; 30 year lag curve
    gg6=g*0.8/1.5
    for i=h, 998 do gg6(i+1)=gg6(i)+(g(i)-gg6(i))/90. ; 90 year lag curve
    gg12=g*0.8/1.5
    for i=h, 998 do gg12(i+1)=gg12(i)+(g(i)-gg12(i))/300. ; 300 year lag curve
    plot,a+1850.,alog(d/285.+1)/alog(2.)*3.,/ynoz,xtit='Year',ytit='Final warming (C)',charsize=1.5 ; temperature assuming 3 C per doubling of carbon dioxide concentration
    oplot,a+1850.,gg,linesty=2
    oplot,a+1850.,gg6,linesty=3
    oplot,a+1850.,gg12,linesty=4
    plot,a(0:499)+1850,e(0:499),xtit='year',ytit='carbon dioxide emissions (AU)',charsize=1.5 ;emission profile in arbitrary units
    plot,a+1850.,d+285.,/ynoz,xtit='Year',ytit='carbon dioxide concentration (ppm)',charsize=1.5 ; atmospheric carbon dioxide concentration in ppm

    Comment by Chris Dudley — 26 Sep 2012 @ 10:11 PM

  121. 114 Chris D said, “There is only lag. We are at the temperature today that would be equilibrium for the carbon dioxide concentration of 1982. It is only the concentration above that (emitted in recent years, not 1850) that induces further temperature increase.”

    Crapola. Stop CO2 emissions and temperatures spike immediately. Stuff equalizes over a thousand years, but the aerosol spike over a few months trumps all.

    Comment by Jim Larsen — 26 Sep 2012 @ 11:55 PM

  122. One bunch have no problem with using attribution to guide investment decisions. Fossil fuel companies (remember the bunch who want us to believe climate change is a hoax) are rushing to exploit the opportunity opened up by declining Arctic sea ice.

    How, I wonder, do they know that decline is a reliable trend, not a short-term blip that will correct, unless they trust the very science they have paid professional deniers to traduce? It really does not make sense to invest heavily in Arctic resource extraction if you expect that it will revert to being mostly inaccessible over summer.

    More on my blog.

    Comment by Philip Machanick — 27 Sep 2012 @ 3:11 AM

  123. Chris,

    I think we are arguing semantics.

    But how, logically, can the carbon I emit this very moment have more impact on the climate right now than carbon emitted a century ago? It can’t. In partitioning ‘blame’ for today’s climate, which was my point about what the legal folks at the meeting were interested in (as a meeting organizer I WAS there after all …), when matters as well as how much, because of the interaction between the forcing and the physical processes, some of which are very slow. The effect is not instantaneous. The climate is much further out of equilibrium with today’s CO2 than with the day before’s, than the day before that’s, and so on, pretty much ad infinitum, since we ever started burning fossil fuels on an industrial scale. Yes, it’s more complicated than that in reality, but this is blog comments, not a manuscript proof … and the point stands!

    When and how much CO2 was emitted are both important factors in understanding today’s climate state under a transient climate change, when the system stands out of equilibrium. If the industrial revolution had started in 1950, not 1850, but we had released the same amount of CO2, the climate state (and CO2 burden) would be different from today’s. Agreed? If so, then, as I say, we are arguing semantics over the same point. When and how much both matter in explaining the climate today.

    Comment by Peter Thorne — 27 Sep 2012 @ 5:20 AM

  124. I don’t think millennialism really helps in this discussion. First, if you are going to take 3000 years to melt the ice, that hardly counts as a large energy imbalance, and the temperature should be at its fast-feedback equilibrium, or fairly close to it if the carbon dioxide concentration has been held steady for a while. Second, slow feedbacks come into play, leading to a 6 C per doubling sensitivity according to Hansen. That makes a lag in the sensitivity, to which we would subsequently apply a lag in the temperature response. But the temperature response lag is likely quick compared to the slow feedback lag, so essentially you just have a sensitivity knob. And that is a different sort of behavior.

    Comment by Chris Dudley — 27 Sep 2012 @ 6:42 AM

  125. David (#117),

    Hansen et al. 2011 give some preference to the intermediate climate response function shown in their fig. 9 based on in situ ocean heat uptake measurements. http://pubs.giss.nasa.gov/abs/ha06510a.html

    In that case, 75% of the response is complete within 100 years and 50% within 8 years. The long tail is not all that important, I’d say, even though it does linger over centuries.

    Comment by Chris Dudley — 27 Sep 2012 @ 7:00 AM

  126. Not even the fast atmospheric response component has realized the impact of today’s emissions yet. … -Peter Thorne

    In what form will that impact be? Just reading blogs, it appears it’s going to be lumped in with internal variability and passed off as all natural. The Arctic sea ice was just melted by the AMO, for instance.

    Comment by JCH — 27 Sep 2012 @ 8:03 AM

  127. Peter (#123),

    I think you must have gone to the pub with Mr. Zeno while you were at the meeting. He is a great one for semantics. Or maybe you only got halfway to the pub on account of the engrossing conversation?

    Here is a persuasive argument that today’s emissions are important: Suppose we set them to zero. The processes that remove carbon dioxide from the atmosphere continue to function. The concentration of carbon dioxide begins to fall dramatically. The rate of temperature increase declines immediately and becomes zero within decades. It is a strong effect. If the absence of today’s emissions is that important, then so is their presence.

    In partitioning blame, we are looking at the effects of climate change. Blame goes to those who made it dangerous, who carried us over the threshold. Those are the recent emitters, not the past emitters. Intent is also important. Those large emitters who are increasing their emissions are culpable. Those who are cutting their emissions are not.

    And, we don’t need any great effort to exact restitution. WTO rules on environmentally motivated trade tariffs are adequate for making an effective instrument.

    Comment by Chris Dudley — 27 Sep 2012 @ 9:28 AM

  128. 127 Chris D said, “Those large emitters who are increasing their emissions are culpable. Those who are cutting their emissions are not.”

    Humans generally fall into two categories today: Those with large emissions who are in a financial bind and so are emitting a tad less, which conveniently allows them to crow about their happenstance morality. Then there’s those whose finances consist of coins, and have the audacity to consider buying a 70MPG(?) micro-car, thus incurring your wrath.

    Comment by Jim Larsen — 27 Sep 2012 @ 11:28 AM

  129. Chris (#127)

    An alternative, equally valid, viewpoint is that blame is apportioned by responsibility. You can’t blame the single CO2 molecule that took us over a threshold for 100% of the damage that accrues. That is backwards to the point of absurdity. Under a natural-law tenet, blame is by proportional responsibility for a consequence, regardless of whether that consequence was intended or not.

    Which returns me to my original point that for today’s extreme event (not tomorrow’s) yesterday’s polluter physically has more responsibility than today’s does precisely because there are substantive physical lags in the climate system. Their equal amount of emissions has a disproportionately larger impact on today’s climate state than today’s polluter does.

    CO2 also has no recognized half-life. It’s not like the snow in your desktop snow globe. It will take millennia for all the CO2 to be scavenged. A fair proportion of the CO2 we emitted in the late 19th Century is likely still hanging around in the atmosphere.

    And, finally, last time I checked, climate model runs that ceased emissions today tended to take centuries to millennia to stabilize their global mean temperatures, because the equilibration of those long-term components does matter in a full-blown AOGCM. But I would defer to (and welcome insights from) Gavin or others more expert on the specific point of how long AOGCMs take to stabilize if we stop emitting anything tomorrow. Beyond that, I second Jim (#121) that the immediate effect would be a substantive spike as the short-lived aerosol cooling effect dropped off a proverbial cliff.

    Comment by Peter Thorne — 27 Sep 2012 @ 11:29 AM

  130. > Chris Dudley
    > … declines immediately ….

    Can you post a citation supporting what you’re saying?

    “the lifetime of the surface air temperature anomaly might be as much as 60% longer than the lifetime of anthropogenic CO2 ….”

    Lifetime of Anthropogenic Climate Change: Millennial Time Scales of Potential CO2 and Surface Temperature Perturbations

    DOI: 10.1175/2008JCLI2554.1
    Cited by: http://scholar.google.com/scholar?hl=en&lr=&cites=11690433131824561442&um=1&ie=UTF-8&sa=X&ei=wolkUKCJCKS6iwLSuICgDQ&ved=0CCkQzgIwAA

    Comment by Hank Roberts — 27 Sep 2012 @ 12:16 PM

  131. Hank (#130),

    Don’t quote out of context. The answer is right there.

    Comment by Chris Dudley — 27 Sep 2012 @ 2:18 PM

  132. Peter (#129),

    I’ve demonstrated that more recent emissions are more important for today’s warming than earlier emissions; you have just asserted the opposite. As they say in Monty Python, that is mere contradiction, not argument.

    I’m still working on your last question in #123. I want to implement one of Hansen’s temperature response functions. But I’d guess now the answer will be just the opposite of what you’d expect.

    Comment by Chris Dudley — 27 Sep 2012 @ 2:59 PM

  133. Jim (#121),

    Why is there a huge spike in temperature if the oceans are delaying the full temperature response? There could be a rapid change in forcing, but it is not all that important to my calculation since I am gauging against the climate sensitivity for a doubling of carbon dioxide. If it spikes a little higher, it just turns over to cooling all the sooner. I’m using Kharecha and Hansen, 2008 Global Biogeochem. Cycles, 22, GB3012 eqn. 1 for this. Time scales are given there.

    Comment by Chris Dudley — 27 Sep 2012 @ 3:58 PM

  134. 133 Chris D asks about the temperature spike:

    Aerosols are generally accepted (though with disagreement) as being a significant cooling effect. If we stop emitting carbon today, within a few months the aerosols will be essentially washed out by rain. Do the math, and things warm up way fast. After that, it’s a 1000 year slide to cooler weather.

    Comment by Jim Larsen — 27 Sep 2012 @ 6:23 PM

  135. Chris Dudley @125 — Thanks for the link.

    I ran a simple two-box model against the GISS net forcings. In doing so I found that an ‘air+upper ocean’ component had a time constant of about 1+ years and an ‘intermediate ocean’ component of 30 to 60 years, maybe longer.

    An electrical circuit analogy is two coupled series RC circuits so that both capacitors share the resistor from the voltage supply and the second capacitor’s sole resistor is connected to the + side of the first capacitor.
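
    A minimal numerical sketch of such a two-box model, in the same IDL/Fawlty style as the script in #120 (the time constants and step forcing here are illustrative assumptions, not the fitted values):

    ;two-box toy: fast 'air+upper ocean' box driven by forcing,
    ;slow 'intermediate ocean' box relaxing toward the fast box
    nyr=300
    s=3./3.7 ;assumed equilibrium sensitivity, C per (W/m^2)
    f=3.7 ;step forcing of one CO2 doubling, W/m^2
    tau1=1.5 ;assumed fast time constant, years
    tau2=45. ;assumed slow time constant, years
    t1=fltarr(nyr) ;fast box temperature anomaly
    t2=fltarr(nyr) ;slow box temperature anomaly
    for i=1,nyr-1 do begin & t1(i)=t1(i-1)+(s*f-t1(i-1))/tau1-(t1(i-1)-t2(i-1))/tau2 & t2(i)=t2(i-1)+(t1(i-1)-t2(i-1))/tau2 & endfor
    plot,findgen(nyr),t1,xtit='Year',ytit='Warming (C)' ;fast rise over a few years, then a slow tail
    oplot,findgen(nyr),t2,linesty=2 ;deep box lags for decades

    The qualitative behavior matches the fit described above: the fast box does most of its warming within a few years, while the slow box drags the approach to equilibrium out over decades.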

    Comment by David B. Benson — 27 Sep 2012 @ 6:39 PM

  136. Jim (#134),

    Yes, but cooling against what? My target temperature in the lag calculation is the full fast feedback climate sensitivity effect. Aerosols are a cooling effect on that, so really, I’m calculating as though the aerosols are already washed out. And, the sooner we meet the target temperature as it falls owing to moving to the caves, the sooner cooling starts (albeit from a higher perch).

    Comment by Chris Dudley — 27 Sep 2012 @ 8:28 PM

  137. Per my last comment, it seems like a good reason to experiment with a virus that can kill hundreds of millions if it escapes, doesn’t it?

    Let’s hope it doesn’t escape.

    http://www.lauriegarrett.com/index.php/en/blog/3186/

    Comment by Ron R. — 27 Sep 2012 @ 10:39 PM

  138. Peter, Chris,

    Every credible paper I have read on the temperature consequences of rapid/immediate elimination of fossil fuel combustion presents three components of the temperature response. First is the 0.8 C over pre-industrial already realized. Second is the ‘climate warming commitment’ due to the lags in the effects of CO2 and associated emissions, and the estimates of temperature increase for this component seem to concentrate around 0.6-0.7 C. Third is the aerosol forcing from the rapid precipitation of the fossil sulphates that would occur at the termination of fossil fuel combustion, and here the estimates range from 0.5-1.5 C. So, total temperatures that would occur within a few decades after elimination of fossil fuel combustion today would be in the range of 2.0-3.0 C. And these computations do not include some of the major feedbacks.

    I have seen no credible evidence that temperatures could be stabilized at 2.0-3.0 C, due to the triggering and enhancing of synergistic positive feedbacks. In fact, I have seen no credible evidence that we are not already in that destabilizing phase at 0.8 C. What are our options under these scenarios?

    Any useful actions we take depend on the climate dynamical mode. I liken it to the state of a reactive energy production system. Consider a fusion reactor, for example. It can operate in three main modes: driven, ignition, burn. In the driven mode, the energy out is some multiple of the energy in. In the ignition mode, the system is transitioning to self-sustainability, and is starting to lose memory of the energy input. In the burn mode, the system is fully self-sustaining (although internal quenching is possible if waste [combustion products] cannot be removed adequately), and has lost memory of the energy input.

    In climate change, I believe we have gone past the driven mode, and are either at burn or ignition. To extricate the planet from this catastrophe, two steps are required. First, fossil fuel combustion has to terminate essentially immediately, to prevent the problem becoming even more intractable. Second, excess CO2 and other greenhouse gases in the atmosphere have to be removed, possibly in concert with other temperature reduction measures such as aerosol addition to ensure that the self-sustaining positive feedbacks are quenched.

    But, this whole discussion, while interesting, is equivalent to the medieval argument as to how many angels can dance on the head of a pin. There is not only no chance of fossil fuel combustion ending rapidly, but all the evidence points to as much exploration and extraction of fossil fuels as allowed by governments, and increasing global demand for fossil fuels in the foreseeable future. And, the real culprits are not the fossil fuel companies and their media and denialist henchmen, although their hands are certainly dirty and covered with blood. They are the fossil fuel equivalent of the drug cartels. The real culprits are the intensive energy use addicts, you and me. We drive the whole process. PBS and WSJ could present in-depth stories tomorrow about the reality of climate change as outlined above, and it would not make one whit of difference in fossil energy consumption among the American people. The Koch Brothers are wasting their money on the Heartland Institute and their ilk. If the truth were told, it would make no noticeable difference in behavior with respect to fossil fuel use.

    Comment by Superman1 — 28 Sep 2012 @ 6:45 AM

  139. Superman1,

    I think you must be double counting somewhere. Ending emissions at a carbon dioxide concentration of 400 ppm means a factor of 1.4 over the pre-industrial concentration. ln(1.4)/ln(2) is about 0.5, so we should see at most half the effect of a doubling. For a sensitivity of 3 C per doubling, that comes out to a maximum of 1.5 C of warming, where you give a range of 2-3 C. Thus you are implying a sensitivity of 4 to 6 C per doubling or more. Only a quarter of that range (4-4.5 C) has much support.
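
    As a quick check, in the notation of the script in #120 (285 ppm is the pre-industrial value assumed there):

    print,alog(400./285.)/alog(2.)*3. ;about 1.47 C above pre-industrial, i.e. roughly 1.5 C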

    Comment by Chris Dudley — 28 Sep 2012 @ 11:56 AM

  140. Re Superman1 #138 and Chris Dudley #139: Superman1 has been at this on a couple different threads. It’s perfectly legit to refer to feedbacks, and it can even be good to speculate about feedbacks, if you don’t present speculation as settled or consensus. But Superman1 needs to acknowledge that his bottom line after feedbacks is different (way warmer) than that of most others.

    Having a different opinion of the bottom line from most professionals isn’t terrible all by itself. But failure to even acknowledge the opinion gap is a flag, a very large and red one. Superman1, if your claim is that your bottom line is close to the science consensus, you’re going to have to cite a wide variety of scientists who live and work near the middle of the metaphorical road.

    Comment by Ric Merritt — 28 Sep 2012 @ 1:43 PM

  141. Ric Merritt and Chris Dudley #139 and #140,

    It’s difficult to identify a ‘consensus’ on projected temperature increases from ceasing fossil fuel combustion essentially immediately. There is a wide spread, as Ric points out. Further, I would be leery of any ‘consensus’ in such a politically and commercially sensitive area as climate change. Any ‘consensus’ could very well be the ‘consensus’ the research sponsors want to hear. In the Arctic ice projection sphere, I would have put much more weight on someone who knew the business, such as Peter Wadhams, than on the ‘consensus’ reached by the IPCC Report in 2007. In the end, one has to go with one’s best judgment as to which of the myriad numbers is most credible.

    Additionally, we have been basing our discussions on unclassified papers, model results, and measurements. The DoD and intel agencies need the most accurate projections available, in order to be able to do credible outyear planning and procurement. What I’d really like to see are the numbers these agencies are assuming for their models and for their planning, and the methane and other climate-critical measurements their classified sensors are producing. My guess is their numbers are far closer to the ones I use, even on the high side perhaps, compared to other numbers I have seen in the threads.

    Nevertheless, a recent letter to Climatic Change (Correlation between climate sensitivity and aerosol forcing and its implication for the “climate trap”: A Letter. Katsumasa Tanaka & Thomas Raddatz) summarizes the situation as follows:

    “Climate sensitivity and aerosol forcing are dominant uncertain properties of the global climate system. Their estimates based on the inverse approach are interdependent as historical temperature records constrain possible combinations. Nevertheless, many literature projections of future climate are based on the probability density of climate sensitivity and an independent aerosol forcing without considering the interdependency of such estimates. Here we investigate how large such parameter interdependency affects the range of future warming in two distinct settings: one following the A1B emission scenario till the year 2100 and the other assuming a shutdown of all greenhouse gas and aerosol emissions in the year 2020. We demonstrate that the range of projected warming decreases in the former case, but considerably broadens in the latter case, if the correlation between climate sensitivity and aerosol forcing is taken into account. Our conceptual study suggests that, unless the interdependency between the climate sensitivity and aerosol forcing estimates is properly considered, one could underestimate a risk involving the “climate trap”, an unpalatable situation with a high climate sensitivity in which a very drastic mitigation may counter-intuitively accelerate the warming by unmasking the hidden warming due to aerosols.”
    “Furthermore, our illustration shows that, with the large spread in the interdependent simulations after the emission shutdown, the global temperature overshoots the common climate policy target of 2°C warming in the case of CS>5°C. Furthermore, the rate of warming after an emission shutdown exceeds another common target of 0.2°C/decade even with a small CS.”

    I can quote other studies and other temperature ranges, but a 2 C temperature increase is more than adequate for the argument I make.

    Comment by Superman1 — 28 Sep 2012 @ 3:10 PM

  142. Peter (#123),

    So, I took some time to implement the Hansen, Sato, Kharecha and von Schuckmann (2011) Green’s function formalism for their intermediate (fig. 9) temperature response function, and you will be unhappy with the result. In that formalism, as soon as emissions are set to zero, not only does the unlagged fast-feedback response start to fall, as expected, but the ‘lagged’ climate system response to that forcing also begins to fall immediately. There is indeed about a 25 year lag on the growing portion of the curve, but no lag on turning to cooling. This is because their function makes 17% of its final response in the first year. I suspect that this comes from the need to accommodate the climate response to volcanic eruptions.

    To me, this seems unphysical since the energy imbalance remains so the temperature should still rise.

    So, we’ll just continue with my 30 year lag implementation which targets the fast feedback sensitivity temperature response. You asked to have the industrial revolution delayed by a century but then have total emissions
    up to today be the same. For ease of implementation, I delayed the onset of emissions by 112 years, took every third year of the original emissions curve and multiplied that new curve by three to preserve the total.

    Rather than rising to 400 ppm, the foreshortened concentration curve rises to 433 ppm. That corresponds to a peak target temperature of 1.8 C above pre-industrial rather than 1.5 C. However, even though the target temperature is higher as those more impetuous emitters move to their caves, their actual temperature is lower than ours on move day by 0.3 C, since they are lagging a later curve. Not to worry though. They reach a temperature 0.02 C higher than the maximum we’d reach (1.16 C above pre-industrial) and only 11 years later than us. So, there is very little difference in temperature behavior despite the rather different emissions profiles. Beyond the year 2100 things are nearly identical. This echoes what a number of scientists have said about the quantity of emissions, rather than the time profile, being what is important in the long run. And, since the highest temperature reached is very nearly identical, the overall (dangerous) consequences are the same. So, no, your proposed change does not demonstrate that timing is all that important. And, it is still the most recent emissions that are responsible for taking us over the threshold for dangerous climate change, as I demonstrated earlier.

    http://pubs.giss.nasa.gov/abs/ha06510a.html
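
    A minimal sketch of that Green’s-function bookkeeping, in the same IDL/Fawlty style as the earlier scripts (the response function and forcing history below are stand-in assumptions, not the Hansen et al. fig. 9 curves):

    ;warming as a sum of annual forcing increments times a normalized response function
    n=500
    yr=findgen(n)
    r=1.-0.45*exp(-yr/8.)-0.45*exp(-yr/300.) ;assumed response: R(0)=0.1, i.e. some response in year one; R(inf)=1
    df=fltarr(n) & df(0:161)=0.02 ;assumed annual forcing increments through 2011, W/m^2 per year
    t=fltarr(n)
    for i=0,n-1 do if df(i) ne 0. then t(i:n-1)=t(i:n-1)+df(i)*(3./3.7)*r(0:n-1-i) ;convolve increments with the response
    plot,yr+1850.,t,xtit='Year',ytit='Warming (C)'

    Because R is nonzero in its first year, each increment starts contributing at once, and zeroing all future increments flattens the sum immediately; that is the behavior described above.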

    Comment by Chris Dudley — 29 Sep 2012 @ 10:01 AM

  143. Superman1 (#141),

    James Hansen does not always seem to say what funders want to hear. He’s derived a climate sensitivity for fast feedbacks, including natural aerosols, of 3 ± 0.5 C per doubling. Since your concern is anthropogenic aerosols, you should be using the paleoclimate-derived figure to make a separation. Clearly, your concerns seem overblown if that approach is taken.

    Comment by Chris Dudley — 29 Sep 2012 @ 10:15 AM

  144. Jim (#128),

    Sorry I missed that one. You speak of happenstance morality, but I am thinking of new EPA regulations and new CAFE standards: quite intentional efforts to avoid harm.

    Let us not forget that some countries can and will skip fossil-fuel-based development. And they will not have to pay twice for their energy infrastructure as we will, since they do not face sunk costs. Those which insist on pushing forward with fossil-fuel-based development are causing dangerous climate change, in addition to shortchanging themselves with outdated tech.

    Comment by Chris Dudley — 29 Sep 2012 @ 3:51 PM

  145. Chris Dudley @142 — What makes you think that the energy imbalance remains?

    I expect the result you obtained from your ‘emissions set to zero’ computational experiment; I assume that also included setting aerosols to zero.

    Comment by David B. Benson — 29 Sep 2012 @ 5:44 PM

  146. David (#145),

    The energy imbalance remains owing to the elevated concentration of carbon dioxide in the atmosphere, which takes a while to decline and reach a point where the actual temperature and the temperature predicted for equilibrium agree.

    You can see this if you save one of the scripts I’ve posted here to a file and use the command @filename.txt at the command line of this program: http://fl.net23.net/

    Probably best to save the script in the lib subdirectory in that program package so you don’t have to fiddle with the !path variable.

    Comment by Chris Dudley — 29 Sep 2012 @ 11:40 PM

  147. 144 Chris D said, “Let us not forget that some countries can and will skip fossil fuel tech based development. And they will not have to pay twice for their energy infrastructure as we will since they do not face sunk costs. Those which insist on pushing forward with fossil fuel based development are causing dangerous climate change in addition to short changing themselves with outdated tech.”

    Hmmm, no. Fossil fuels were/are free energy other than externalities. Pure-t-profit and externalities. Well, MAYBE 10% of the price is cost, but… The West used that free energy to build their economies. Also free minerals. Essentially, you’re saying, “We brutalized the planet and DESERVE to continue to do so (at a slightly reduced rate) unencumbered by mere foreigners’ desire to share in the planetary birthright. Obviously, foreigners have NO right to OUR birthright.”

    Nope. Somebody who unfairly consumes nearly all resources has NO right to not share those resources fairly. Once the USA drops below the average CO2 emissions of the planet for the entire history of the industrial revolution, THEN you’ve got a point. Until then, you’re just slurping at the trough.

    And you never answered my three questions.

    1. The USA has REFUSED to increase efficiency, and is now implementing a warped MPGe policy which will allow for the continued spewing of carbon (and even worse, CH4) by pretending about half of the carbon and ALL of the CH4 doesn’t happen when considering electric-based carbon-spewing vehicles even though biofuels and synfuels are the most likely path to motor vehicle solutions. Does that make the USA culpable?

    2. How does the size of a country matter with regard to culpability?

    3. Dang, I forgot the third… but hey, look it up and answer, OK?

    Comment by Jim Larsen — 29 Sep 2012 @ 11:44 PM

  148. [edit – this is all now OT]

    Comment by Chris Dudley — 30 Sep 2012 @ 7:51 AM

  149. Chris Dudley #143,

    “James Hansen does not always seem to say what funders want to hear. He’s derived a climate sensitivity for fast feedbacks, including natural aerosols, of 3 ± 0.5 C per doubling. Since your concern is anthropogenic aerosols, you should be using the paleoclimate-derived figure to make a separation. Clearly, your concerns seem overblown if that approach is taken.”

    I have examined about two handfuls of papers addressing mainly, or more typically in part, the climate effects of immediate cessation of fossil fuel combustion. They all agree that we have made a ‘warming’ commitment to increased future temperatures, but they disagree substantially on the magnitude of that commitment. The range seems to be from about 1.2 C to perhaps triple that. It depends on values assumed for climate sensitivity and aerosol forcing, both of which have potentially wide discrepancies in their values.

    Hansen’s CS estimate, as you have shown, is but one among many, although Hansen certainly has impeccable credentials. In his CS derivation, there seems to be the implicit assumption that it is not dependent on the level of CO2, on the temperature, or on the rate of CO2 concentration increase. I have no idea how sensitive the results are to these assumptions.

    Further, it is based on an atmosphere of eons ago in contact with a physical boundary that existed eons ago. How that relates to the atmosphere of today in contact with the physical boundaries of today is unclear to me. The implied assumption appears to be conditions are sufficiently similar between then and now for CS estimation purposes. I have no way of knowing how true that is.

    So, we have myriad estimates of committed temperature increase based on myriad estimates of CS and myriad estimates of AF, made by researchers whose personal agendas may or may not be from the same sheet of music. Under those conditions, I have no idea what it means to generate a ‘probability distribution’ of expected outcomes, and base some type of climate mitigation around some ‘mean’. Given the magnitude of the stakes involved in guessing wrong, I would be more inclined to use the precautionary principle and focus on the potential of higher projected temperatures until more convincing opposing data is forthcoming. I’m willing to stay with 2 C at this point, although, as I stated in my previous post, a committed increase of 1.5 C probably wouldn’t alleviate my concern about entering into the ‘ignition/burn’ phase appreciably.

    Back to your comments. The first sentence of your response relates to a point I have made in a number of posts, including the one to which you responded. That is the influence of research sponsors on what gets published and disseminated. From the chapter on Climate Catastrophe in Ahmed’s Crisis of Civilization, I have extracted the following interesting excerpt:

    “The United Nations IPCC report has commendably shifted the debate on climate change by publicly affirming firstly an overwhelming scientific consensus on the reality of human emissions generated climate change, and secondly a startling set of scenarios for how global warming will affect life on Earth by the end of this century if existing rates of increase of CO2 emissions continue unabated. But according to a growing body of scientific evidence, the IPCC’s findings in 2007 were far too conservative – and dangerous climate change is more likely to occur far sooner, with greater rapidity, and higher intensity, than officially recognized by governments.

    INACCURACIES IN THE INTERGOVERNMENTAL PANEL ON CLIMATE CHANGE 2007

    A number of British researchers expressed grave reservations shortly after the release of the UN IPCC Fourth Assessment Report. In particular, David Wasdell, who was an accredited reviewer of the IPCC report, told the New Scientist that early drafts prepared by scientists in April 2006 contained “many references to the potential for climate to change faster than expected because of ‘positive feedbacks’ in the climate system. Most of these references were absent from the final version.” His assertion is based “on a line-by-line analysis of the scientists’ report and the final version,” which was agreed in February 2007 at “a week-long meeting of representatives of more than 100 governments.” Below we highlight three examples from Wasdell’s analysis:

    1) In reference to warnings that natural systems such as rainforests, soils and oceans would be less able in future to absorb greenhouse gas emissions, the scientists’ draft report of April 2006 warned: “This positive feedback could lead to as much as 1.2 degrees Celsius of added warming by 2100”. The final version of March 2007 though only acknowledges that feedback exists and says: “The magnitude of this feedback is uncertain.”

    2) The April 2006 draft warned that global warming will increase the amount of water vapour released into the atmosphere, which in turn will act like a greenhouse gas, leading to an estimated “40-50 percent amplification of global mean warming”. In the final March 2007 report this statement was replaced with “Water vapour changes represent the largest feedback”.

    3) In relation to the acceleration of breakup of arctic and antarctic ice sheets, the April 2006 draft paper talked about observed rapid changes in ice sheet flows and referred to an “accelerated trend” in sea-level rise. The government-endorsed final report of March 2007 said that “ice flows from Greenland and Antarctica … could increase or decrease in future.”54

    4) The conclusion that “North America is expected to experience locally severe economic damage, plus substantial ecosystem, social and cultural disruption from climate change related events” was removed from the final version.55

    In other words, the IPCC Fourth Assessment Report excluded and underplayed direct reference to the overwhelming probability of the rapid acceleration of climate change in the context of current rates of increase of CO2 emissions and positive feedbacks. Wasdell put it down to possible political interference, and there are reasonable grounds for this conclusion.
    As noted by Mike Mann, director of the Earth System Science Center at Pennsylvania State University, and a past lead author for the IPCC: “Allowing governmental delegations to ride into town at the last minute and water down conclusions after they were painstakingly arrived at in an objective scientific assessment does not serve society well.”56

    The possible watering-down of the IPCC’s 2007 Fourth Assessment Report is part of a wider pattern. In the same month, a joint survey by the Union of Concerned Scientists and Government Accountability Project concluded that 58 per cent of US government-employed climate scientists surveyed complained of being subjected to: 1) “Pressure to eliminate the words ‘climate change,’ ‘global warming’, or other similar terms” from their communications; 2) editing of scientific reports by their superiors which “changed the meaning of scientific findings”; 3) statements by officials at their agencies which misrepresented their findings; 4) “The disappearance or unusual delay of websites, reports, or other science-based materials relating to climate”; 5) “New or unusual administrative requirements that impair climate-related work”; 6) “Situations in which scientists have actively objected to, resigned from, or removed themselves from a project because of pressure to change scientific findings.” Scientists reported 435 incidents of political interference over the preceding five years.57 Such large-scale systematic political interference with climate science lends credence to the concern that climate scientists feel unable to voice their real views about the urgency posed by global warming.”

    [Response:David Wasdell is not a reliable source (see here or discussions in the comment thread here) – gavin]

    Comment by Superman1 — 30 Sep 2012 @ 11:12 AM

  150. Jim — look at the sidebar. Right hand side of every page at RealClimate.

    Those are the hosts’ recommendations.
    Those are discussions of related topics, like those you want people focusing on, including the one you forgot.

    Many of us read and some participate in those discussions, or are involved in real life politics rather than blogging.

    Personally I recommend reading EcoEquity — there’s a lot there going back years, about what you’re interested in.

    If you read nothing else on that page, read footnote 1 and follow the link to the 4-minute video there.

    The questions you want talked about — have been and are being talked about extensively.

    Point to well thought out discussion already in progress.

    Comment by Hank Roberts — 30 Sep 2012 @ 2:23 PM

  151. Superman1 (#149),

    You’re not going to get any argument from me that conspiracies exist. There were coal companies pretending to be veterans groups the last time we had serious climate legislation going. But, in science, if you have a new position, you need to do the work to support it. Run some models showing that paleoclimate based estimates are wrong in the way you say they are.

    Don’t forget that Hansen et al. 2011 specifically address this question. Looking at the numbers, though, it would seem that Sophie’s pick for net forcing now in fig. 2 fits best with the paleoclimate estimate, so even if Connor is right, the situation simply slides back to Sophie’s position, since natural aerosols that are included in the paleoclimate sensitivity estimate will not be disappearing. We never get to the full clean-sky sensitivity. Even Connor’s scenario faces dilution that I’m not sure they appreciated.

    Finally, if aerosols are as bad as all that, then there is very little lag in the climate system. The temperature is keeping right up with the net forcing. If so, an aerosol-driven temperature spike brought on by going to live in caves will be quite brief, with temperature starting to decline right away.

    Comment by Chris Dudley — 30 Sep 2012 @ 10:52 PM

  152. Where I was ruled off topic in #148, I would note that the Nature editorial not only mentioned government corruption but was quoted here doing so. So how far off topic can I be if I also mention it, as I believe I did?

    [Response: Government corruption has nothing to do with attribution of extreme events. – gavin]

    Comment by Chris Dudley — 1 Oct 2012 @ 11:51 AM

  153. Gavin in #152,

    It does seem to go to the question of “Why bother?” at least for Nature. I would argue (#27 and forward) that existing international instruments are sufficient reason to bother with attribution of extreme events since reparations may be taken as a result. Money is often an answer to the question “Why bother?” This did get into the question of which emissions are responsible for dangerous climate change and further into a question of a right to pollute despite the danger. But all of this seems fairly closely tied to the title question of the article.

    Perhaps if you posted my #148 to the variations, that would address your concern about being off topic? I did not save a copy.

    Comment by Chris Dudley — 1 Oct 2012 @ 12:39 PM

Sorry, the comment form is closed at this time.
