
Why bother trying to attribute extreme events?

Filed under: — gavin @ 20 September 2012

Nature has an interesting editorial this week on the state of the science for attributing extreme events. This was prompted by a workshop in Oxford last week where, presumably, strategies, observations and results were discussed by a collection of scientists interested in the topic (including Myles Allen, Peter Stott and other familiar names). Rather less usual was a discussion, referred to in the Nature piece, on whether the whole endeavour was scientifically worthwhile, and even if it was, whether it was of any use to anyone. The proponents of the ‘unscientific and pointless’ school of thought were not named and so one can’t immediately engage with them directly, but nonetheless the question is worthy of a discussion.

This workshop was a follow-up to one held in 2009, which took place in a very different environment. The meeting report was typical of a project that was just getting off the ground – lots of potential, some hints of success. Today, there is a much richer literature on the topic, and multiple approaches have been tried to generate the statistical sample required for statements of fractional attribution.

But rather than focus on the mechanics for doing this attribution, the Nature editorial raises more fundamental questions:

One critic argued that, given the insufficient observational data and the coarse and mathematically far-from-perfect climate models used to generate attribution claims, they are unjustifiably speculative, basically unverifiable and better not made at all. And even if event attribution were reliable, another speaker added, the notion that it is useful for any section of society is unproven.

Both critics have a point, but their pessimistic conclusion — that climate attribution is a non-starter — is too harsh.

Nature goes on to say:

It is more difficult to make the case for ‘usefulness’. None of the industry and government experts at the workshop could think of any concrete example in which an attribution might inform business or political decision-making. Especially in poor countries, the losses arising from extreme weather have often as much to do with poverty, poor health and government corruption as with a change in climate.

Do the critics (and Nature, sort of) have a point? Let’s take the utility argument first (since if there is no utility in doing something, the potentially speculative nature of the analysis is moot). It is obviously the case that people are curious about this issue: I never get as many media calls as in the wake of an extreme weather event of some sort. And the argument for science merely as a response to human curiosity about the world is a strong one. But I think one can easily do better. We discussed a few weeks ago how extreme event attribution via threshold analysis or absolute metrics reflected a view of what was most impactful. Given that impacts generally increase very non-linearly with the size/magnitude of an event, changes in the frequency or intensity of extremes have an outsized influence on costs. And if these changes can be laid at the feet of specific climate drivers, then they can certainly add to the costs of business-as-usual scenarios, which are then often compared to the cost of mitigation. Therefore improved attribution of shifts in extremes (in whatever direction) has the potential to change cost-benefit calculations and thus policy directions.

Additionally, since we are committed to a certain amount of additional warming regardless of future trends in emissions, knowing what is likely in store in terms of changing extremes and their impacts feeds directly into decisions about which investments in adaptation are sensible. Of course, if cost-effective investments in resilience are not being made even for the climate that we have (as in many parts of the developing world), changes to the calculations for a climate-changed world matter less. But there are many places where investments are being made to hedge against climate changes, and the utility is clearer there.

Just based on these three points, the question of utility would therefore seem to be settled. If reliable attributions can be made, this will be of direct practical use for both mitigation strategies and adaptation, as well as providing answers to persistent questions from the public at large.

Thus the question of whether reliable attributions can be made is salient. All of the methodologies to do this rely on some kind of surrogate for the statistical sampling that one can’t do in the real world for unique or infrequent events (or classes of events). The surrogate is often specific climate simulations for the event with and without some driver, or an extension of the sampling in time or space for similar events. Because of the rarity of the events, the statistical samples need to be large, which can be difficult to achieve.
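The with-and-without-driver ensemble approach can be reduced to a toy calculation of the ‘fraction of attributable risk’ (FAR = 1 - P_natural/P_forced). A minimal sketch; the ensemble means, spread and event threshold below are invented for illustration, not taken from any actual attribution study:

```python
import random

random.seed(42)

def exceedance_prob(mean, sd, threshold, n=100_000):
    """Monte Carlo estimate of P(T > threshold) for a Gaussian climate."""
    hits = sum(1 for _ in range(n) if random.gauss(mean, sd) > threshold)
    return hits / n

# Hypothetical ensembles: same variability, forced run 1 degC warmer
p_natural = exceedance_prob(mean=20.0, sd=1.5, threshold=24.0)
p_forced = exceedance_prob(mean=21.0, sd=1.5, threshold=24.0)

# Fraction of attributable risk: the share of the event probability
# that would be absent without the forcing
far = 1.0 - p_natural / p_forced
print(f"P(event | natural) = {p_natural:.4f}")
print(f"P(event | forced)  = {p_forced:.4f}")
print(f"FAR                = {far:.2f}")
```

Note how large the samples must be: even with 100,000 draws, only a few thousand cross the threshold, which is exactly the sampling problem the surrogate methods are designed to address.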

For the largest-scale extremes, such as heat waves (or days above 90ºF etc), multiple methodologies – via observations, coupled simulations, targeted simulations – indicate that the odds of heat waves have increased (and the odds of cold snaps have decreased). In such cases, the attributions are increasingly reliable and robust. For extremes that lend themselves to good statistics – such as the increasing intensity of precipitation – there is also good coherence between observations and models. So claims that there is some intrinsic reason why extremes cannot be reliably attributed do not hold water.

It is clearly the case that for some extremes – tornadoes or ice storms come to mind – the modelling has not progressed to the point where direct connections between the conditions that give rise to the events and climate change have been made (let alone the direct calculation of such statistics within models). But in-between these extreme extremes, there are plenty of interesting intermediate kinds of extremes (whose spatial and temporal scales are within the scope of current models) where it is simply the case that the work has not yet been done to evaluate whether models suggest a potential for change.

For instance, it is only this year that sufficient high frequency output has been generically archived for the main climate models to permit a multi-model ensemble of extreme events and their change in time – and with sufficient models and sufficient ensemble members, these statistics should be robust in many instances. As of now, this resource has barely been tapped and it is premature to declare that the mainstream models are not fit for this purpose until someone has actually looked.

Overall, I am surprised that neither Nature nor (some?) attendees at the workshop could find good arguments supporting the utility of attribution of extremes – as the science improves, these attributions will surely become part of the standard assessments of impacts to be avoided by mitigation, or moderated by adaptation. We certainly could be doing a better job of analysing the data we already have in hand to explore whether, and to what extent, models can be used for which kinds of extremes, but it is wrong to say that such attempts are per se ‘unverifiable’. As to whether we are better off having started down this path, I think the answer is yes, but this is a nascent field and many different approaches and framings are still vying for attention. Whether this brings on any significant changes in policy remains to be seen, but the science itself is at an exciting point.

153 Responses to “Why bother trying to attribute extreme events?”

  1. 51
    Tony O'Brien says:

    Attribution is important. Attribution helps us lay people understand what is freak weather and thus unlikely to be repeated. Attribution helps us understand what is the new normal and therefore very likely to be repeated.

    If we understand what is to come and what might be to come we can prepare. Just maybe, new infrastructure will be built able to cope with future weather not what happened last century.

    Unfortunately, scientists and lay people still speak different languages. We still do not understand that dry scientific language really means “oh sh##”

  2. 52
    dbostrom says:

    SOMEBODY should be out there cooling the engines on how quickly these studies have been rushed into the literature…

    Ah, a Climate Change Tsar, approving or stopping research for purposes of political convenience?

    I thought that was what we were not supposed to do. Confusing.

    How does one “not think of an elephant,” once told?

    Clandestine research, conducted at night using stolen CPU time, published with stolen laser printer runs, always fearful of the Tsar’s secret police, etc. Very healthy.

  3. 53
    Jay Dee Are says:

    Re Don Williams @10 and Ray Ladbury @ 20: In addition to NOAA Atlas 14, NOAA has several hydrometeorological reports (HMRs) covering different parts of the US that contain contours of probable maximum precipitation (PMP). You look at a map for your area, pick a PMP value, and use it as a design precipitation estimate for a structure. PMP is supposedly the worst precipitation that one can expect.

    All HMRs are out of date, do not reflect current climate conditions, and contain no guidance on how to adjust PMP values for a changing climate; never mind that PMP is an illogical concept to start with. NOAA has no plans to revise the HMRs, yet they are used all the time for estimates of extreme precipitation.

    There needs to be more interaction between climatologists and hydrologists. Hydrologists will say that they haven’t seen some effects predicted by climatologists, yet climate will eventually undermine the hydrologists’ assumption of stationarity. I’m not sure that hydrologists understand climate inertia. The effects predicted by climatologists are likely to show up eventually.

  4. 54
    Salamano says:

    Re: 42


    There have been studies (like the ones indicated in the above link) that attempt to describe the present likelihoods of recent extreme warm events using modeling of the temperature record and a probability distribution that shifts toward warmer temperatures. However, when the same assumptions encounter an extreme cold event of the same magnitude, the results border on declaring the event a complete impossibility.

    A counter-argument that extreme warm events can be quantified as a signature of our climate change, while all cold weather events that confound the methodology are relegated to highly non-linear products of natural variation, doesn’t jibe with me because I want to see the methodology work in both applications.

    The same is true of Trenberth’s discussion of attribution … If a computer is going to run a million iterations of a model and declare that extreme warm event x is only going to occur with the observed increasing likelihood in a warming world, then it should also demonstrate that the cold extreme event is going to occur with requisite plausibility in a warming world. Otherwise, you’re left with virtually unfalsifiable declarations that anything and everything that occurs is a sign of global warming, except when we say it isn’t.

    I can certainly see a shift of both the mean (warmer) and the variance (wider), but (as a hypothetical example) the arctic ice loss scenario could open up a can of worms that provides an extra feedback directly related to Global Warming that primarily generates strong cold anomalies. Who knows, if they get stronger and more frequent with more ice loss, they could stunt the otherwise modeled progression of warming while at the same time upping the prevalence of cold extreme events. The resultant probability distribution would not look normal: cold extremes not becoming as rare as expected, the same expected increase in warm extremes, and a slightly cooler overall warming trend than previously anticipated.
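    The shifted-and-widened distribution described above can be checked directly: under a modest warm shift, warm tails inflate sharply while cold tails shrink but do not vanish. A minimal sketch with invented numbers (a +0.5-sigma shift and a 10% wider spread), using only the standard library:

```python
import math

def tail_prob_above(x, mean, sd):
    """P(T > x) for a normal distribution, via the complementary error function."""
    return 0.5 * math.erfc((x - mean) / (sd * math.sqrt(2)))

def tail_prob_below(x, mean, sd):
    """P(T < x) for a normal distribution."""
    return 0.5 * math.erfc((mean - x) / (sd * math.sqrt(2)))

# Baseline climate: standardized anomalies ~ N(0, 1)
# Hypothetical changed climate: mean shifted +0.5 sigma, spread widened 10%
warm_base, warm_new = tail_prob_above(3, 0, 1), tail_prob_above(3, 0.5, 1.1)
cold_base, cold_new = tail_prob_below(-3, 0, 1), tail_prob_below(-3, 0.5, 1.1)

print(f"3-sigma warm extreme: {warm_new / warm_base:.1f}x as likely")
print(f"3-sigma cold extreme: {cold_new / cold_base:.2f}x as likely")
```

    With these made-up parameters the warm tail grows by nearly an order of magnitude while the cold tail is roughly halved: cold extremes become rarer, not impossible, which is the distinction at issue in the exchange above.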

  5. 55
    Chris Dudley says:

    Perhaps worry about liability is having some effect. The largest polluter, China, is calling for 70% cuts in emissions.

  6. 56
    Sou says:

    Ian’s #46 and #48 are typical of the downside of making statements about the future. Statements imputed to Prof Flannery are trotted out every time it rains, as if he were suggesting it will never rain again. As is Mackellar’s verse – usually, as in this case, in the same post or article. It’s silly, and I don’t think too many people take much notice any more – only the hard-core science rejectors.

    The reason Prof Flannery gets singled out, and extensively misquoted and/or quoted out of context, is he is Australia’s Chief Climate Commissioner. In Australia that seems to get him the same treatment as Drs Hansen and Mann etc. and Al Gore. (The Climate Commission “was established to provide all Australians with an independent and reliable source of information about the science of climate change, the international action being taken to reduce greenhouse gas emissions, and the economics of a carbon price”.)

    The reason Mackellar’s rhyme gets trotted out is probably because it shows that even a century ago Australia had extreme weather.

  7. 57
    sidd says:

    on the 21st of September, 2012 at 7:33 PM:

    1)Re: realclimate discussion of extremely hot events

    Mr. Salamano writes:
    “However, when the same assumptions encounter an extreme cold event of the same magnitude, the results borderline on declaring the event a complete impossibility.”

    That is not the claim being made in that discussion. And empirical results from Hansen bear out what was said there.

    2)Re: Prof. Trenberth discussion:

    Mr.Salamano writes:

    ” … it should also demonstrate that the cold extreme event is going to occur with requisite plausibility in a warming world. Otherwise, you’re left with virtually unfalsifiable declarations that anything and everything that occurs is a sign of global warming, except when we say it isn’t.”

    This is another strawman. Those declarations are never made.

    3)”I can certainly see a shift of both the mean (warmer) and the variance (wider), …”

    Hansen must be overjoyed at your benediction.

    4)” … but (as a hypothetical example) the arctic ice loss scenario could open up a can of worms that provides an extra feedback directly related to Global Warming that primarily generates strong cold anomalies. Who knows, … ”

    Calculate and publish your results.


  8. 58
    Don Williams says:

    Re Salamano at 41: “SOMEBODY should be out there cooling the engines on how quickly these studies have been rushed into the literature, promoted all around”

    I am not qualified to speak on how science should be managed. I would, however, point out that millions of businessmen aren’t waiting for the science to be settled. Every day, tens of thousands of acres of asphalt and concrete are laid down, fracking gas wells are drilled, and plans for oil drilling in the Arctic proceed. Those settle the argument with a finality far greater than any thesis put forth in a scientific paper. You can refine a scientific hypothesis – undoing physical construction and financial investment is a lot harder.

  9. 59
    Ian says:

    Sou (#56), I’m not sure you actually read what I wrote. I didn’t say Prof Flannery had said, or had even intimated, that it wouldn’t rain again. The quote I gave was certainly not imputed but was from an interview Professor Flannery had with Sally Sara on ABC television’s Landline program on February 11, 2007. His actual words were: “…because the soil is warmer because of global warming and the plants are under more stress and therefore using more moisture. So even the rain that falls isn’t actually going to fill our dams and our river systems.” I thought I had made it clear why I quoted Dorothea Mackellar’s poem My Country: to emphasise that droughts and floods in Australia are neither new nor more extreme than they have been in previous years. According to the Australian Bureau of Statistics (ABS), the longest droughts in Australia were from 1895-1903 and 1958-1968. The most recent drought was from 2003-2007. Of the ten worst floods in Australia’s recorded history, two were before 1900 (1852 and 1893), seven were in the 20th century (1918, 1927, 1929, 1934, 1955, 1974, 1986) and one in the 21st century (2010-2011). As is apparent, the incidence of severe damaging floods in Australia is, to date, highest in the earlier part of the record. Dr Schmidt in his reply notes that “Your overall thrust supports my opening contention though – we need to have more real science on these topics in near real time otherwise the gap gets filled with content-less speculations from people with agendas”. That is exactly the point I was hoping to make but sadly, I obviously didn’t make it plainly enough. Prof Flannery, as Climate Commissioner, clearly has an agenda, but comments such as those he made in 2007 may well do more harm than good to that agenda.

  10. 60
    Jim Larsen says:

    27 Chris D said, ” For mitigation we must all work together, but for adaptation, polluter pays and China is the polluter that has pushed climate change into the dangerous regime.”

    The Chinese are low-emitters of carbon. Should people be punished for banding together into too large a country? Is your solution for climate change adaptation for the USA to disband into the 50 nations, thus transforming us from not-quite-as-bad-as-China to Climate Saints?

    41 Salamano said, “If all observed events of type A are declared ‘impossible’ or ‘opposite’ of what a model postulates, ”

    Your underlying drive is correct. It’s important to show laymen why certain things are interesting while others, though sparkly, are not. Explicitly stating the numbers for cold extremes gives the layman a comparison, a handle on the issue. Relative orders of magnitude are critical for basic understanding, and they often aren’t obvious before one runs an experiment. This leads to: Expert leaves a minor factor out of the discussion, but not the math. Denialist brings up comparison which feigns universal ignorance of the relative importance of various factors. Public remains confused.

    However, your claim about the existence of such events sorely needs documentation. You mentioned the 2012 European cold snap, but gave no evidence that it was as high sigma as the heat waves you refer to.

    “However, the only all-time cold record set at any specific location was a reading of -33.8°C at Astrakhan in Russia”

    Doesn’t sound on a par. Cite something, and after that, give some evidence that models demand that the 2012 cold snap was impossible.

    So yes, your initial drive was spot on. Scientists didn’t make it clear enough that the cold event wasn’t as “big” (in public-speak, “clear” means repetition more than perfectly parsable wording), so you became confused; or I’m confused about 2012 Europe and the same logic applies.

    54 Salamano said, “ strong cold anomalies. Who knows, if they get stronger and more frequent with more ice loss, they could stunt the otherwise modeled progression of warming”

    Doesn’t sound logical to me. A European cold snap will leave extra heat to be absorbed by the ocean, emitted to space, or be distributed around the remaining planet. For this discussion, I’d say distributed around is the dominant factor. Not a robust conclusion, but in a fashion, cold extremes increase hot extremes, and vice versa. Wasn’t winter 2012 a tad warm in the USA?

    49 Dan H said, “The average rate is ~3mm/yr, and shows no sign of accelerating. Therefore, building an infrastructure to last 100 years, needs to be able to withstand about half of a meter of SLR.”

    Did you write North Carolina’s sea level rise policy? The albedo flip from Arctic sea ice has just begun. Greenland’s melt is gaining significance, and WAIS is waking up. That the expected acceleration hasn’t gained traction yet doesn’t invalidate Hansen’s analysis (or other folks’). Please provide a reference which suggests that over the next 100 years, no significant increase in the rate of sea level rise is expected.

    I’ve defended your character repeatedly. Your post makes it less likely I do so in the future.

    Do you spend so much time in the Denialsphere that false-factoids just seem like part of the fabric of life?

    51 TonyO’B said, “We still do not understand that dry language really represents “oh sh##” ”

    There are many professions which require cool thinking under duress. The one I visualize is airline pilots. Listening to the doomed flight crew on the black box often sounds like they’re reading stock prices. They’re trained to fly the plane all the way into the ground.

  11. 61
    Chris Dudley says:

    Jim (#60),

    We are cutting emissions while China is increasing them. This in addition to China being the largest polluter. What entities, other than nations, internationally can be made to pay for the intended harm they cause? A carbon tariff imposed on Chinese exports used to compensate farmers around the world is completely appropriate.

  12. 62
    Adam Gallon says:

    Why should we penalise China, for increasing crop yields?

  13. 63
    flxible says:

    “We are cutting emissions while China is increasing them. This in addition to China being the largest polluter.”
    Specify and cite please – per capita? per unit GDP? over what historical period? If the industrial production exported to “we” is deducted from the figures, what is the comparison? Do we penalize China for those millions of iPhones folks all over the world are lining up for?
    Emissions are a global and historical problem, not a national one. Has your country taken any measures to limit population increase? How much CO2 does your family emit compared to your parents or grandparents?

  14. 64
    Jim Larsen says:

    61 Chris D said, “We are cutting emissions while China is increasing them.”

    Option 1: All people are equal with regard to access to the atmosphere, as measured intergenerationally. Each nation should be allowed the same emissions per person-year. Since the US has been burning for a long time, we’ve already burned all(?) of our share, and need to pay China (or others) for carbon credits. That China wisely saved her carbon emission quota, well, why should we be allowed to steal China’s unspent carbon credits, just because we wasted our own?

    Option 2: All people are equal with regard to access to the atmosphere, but the past is the past. So each group should be allowed the same emissions per capita. Those who emit more, such as the US, should pay penalties to those who emit less, such as China.

    Option 3: All people are not equal with regard to access to the atmosphere. Carbon-wise, some nations pollute heavily and so should be allowed to continue at a slightly reduced rate. Others pollute far less but must also reduce their emissions by the same percentage. A poor country which emits 1/4 the emissions per capita of a rich country should pay the rich country penalties if the poor country emitted 1/5 the previous decade.

    I think option two is the most fair, as modified by a 20 year phase in period starting with current emissions as a base line.

    Which of the three seems most fair to you, Chris D? 20 further years of unequal rights seems fair enough, don’t you think?

  15. 65
    sidd says:

    Mr. Chris Dudley writes on the 22nd of September, 2012 at 7:50 AM:

    Re:carbon tariffs on China

    Presumably then, Mr. Dudley will not object to tariffs imposed by the developing nations on OECD countries in reparation for the much larger carbon debt already incurred by the OECD group?


  16. 66
    Sou says:

    Ian, while Australia is the second driest continent and has always had droughts and floods, it’s also getting warmer along with the rest of the planet. Floods are hard to measure across the entire continent – more so in the past.

    I doubt too many (except in the deniosphere) would try to argue against the fact that the past couple of years have been extraordinary in regard to widespread flooding – or that so many regions of the country had the ‘wettest ever recorded’ periods (as reported by BoM), or that the big drought wasn’t extreme or that unprecedented (in the record) hot, dry, windy conditions at the tail end of a horrendous drought didn’t cause the February 2009 massive loss of life in the fires.

    Nor do I understand what you find interesting about that quote from Prof Flannery. It’s only when soils are saturated or when you get heavy precipitation that you get runoff. And with some storages down to 10% – it took an awful lot of rain to get them back to safer levels.

    This is just the beginning.

  17. 67
    gavin says:

    No more on Chinese emissions please. This is a thread related to the attribution of extreme events, not carbon policy. Thanks

  18. 68
    Unsettled Scientist says:

    > and shows no sign of accelerating

    Citation needed, many sources say the opposite. To hold that claim you’d need to first show that 6 years is enough time to separate out a signal from the noise… good luck with that.

    “A 20th century acceleration in global sea-level rise”, Church & White (2006)

    “Is Sea Level Rise Accelerating?”, right here on RealClimate

    “And by 2100, rising sea level from ocean thermal expansion and increasing ocean mass (from melting glaciers, ice caps, and the Greenland and Antarctic ice sheets) will expose an additional tens of millions of people annually to the risk of coastal flooding.” Regardless of whether or how much it is currently accelerating, it’s a major problem as it is continuing, there are no years with sea level drop.

    But this isn’t really about attribution science, except for that we know it is occurring due to global warming, so it’s probably off-topic in this thread and we shouldn’t pursue it further here.

  19. 69
    Ray Ladbury says:

    Salamano, Care to tell us exactly where these horrendous cold spells are occurring? Yes, there have been cold spells, but record highs are far outpacing record lows. Again: give specifics rather than vague impressions. What “record cold events” have been ignored? You sound like Spiro Agnew complaining about the nattering nabobs of negativism.

    What is more, beyond the question of probability, there is the question of what caused the event. If indeed loss of sea ice affects the jet stream allowing cold Canadian air to penetrate further south, I hardly think you could call that a contradiction of the severity of warming.

  20. 70
    Chris Dudley says:

    Adam (#62),

    China is reducing crop yields.

    Gavin (#67),

    As the Nature article points out, attribution identifies who is to blame for suffering. Weather disasters are no longer acts of God but rather attacks by polluters on the victims of the disasters.

    And Jim (#64), it is polluters who, by their policies, intentionally push us into the dangerous climate change regime who are liable. If you are putting your foot on the brake when there is a traffic mishap, it is an accident; if you are putting your foot on the gas, it is intentional. You have not considered culpability in your three cases. Nations set policies and so they are the responsible actors.

    There, I only used the C-word once and that in relation to crops, not emissions.

  21. 71
  22. 72
    Dan H. says:


    Here is one reference:

    Note that the Church & White paper was written before the recent decelleration in SLR, and claims that IF the acceleration of the 1930s and 1990s continued, then sea level would rise between 280 and 340 mm by 2100. They also noted the decelleration in the 1960s, and did not account for any potential future decelleration.

    Here is another reference which shows no acceleration.

    The increased melt from Greenland has occurred simultaneously with the decelleration in SLR. Apparently, this contribution is not very great at present. Granted, significant melting of the Greenland glacier would change this, and that is a wild card in the projections. Antarctica, on the other hand, has been in the midst of a prolonged cooling period, and sea ice is at record highs. It seems unlikely that the WAIS would contribute anything to global SLR; it would more likely lead to a decrease due to more snow and ice accumulation.

    [Response: Well, if you think something is unlikely that is obvious all that needs to said. – gavin]

  23. 73
    tamino says:

    Re: #71 (Dan H.)

    There’s only one “l” in “deceleration.”

    You refer to the “recent decelleration [sic] in SLR” but that is not statistically significant. My calculations indicate it just fails 90% confidence, and isn’t close to 95%. If you actually read Church & White you may have noticed their reference to ubiquitous decadal variations; perhaps you should have paid more attention. Stop referring to recent deceleration in SLR as though it were an established fact. It isn’t.

    You should also have noted that Church & White suggest about 300mm SLR by 2100 IF the quadratic trend of the 20th century continues. There is very good reason to expect 21st-century sea level rise to exceed even Church & White’s quadratic trend. It has to do with the laws of physics, and the observed acceleration of mass loss from the Greenland and Antarctic ice sheets. There’s even peer-reviewed literature relating temperature change and sea level change (Rahmstorf & Vermeer) which suggests much more than 300mm by the year 2100. As Gavin notes, your claim that it “seems unlikely that the WAIS would contribute anything to global SLR” doesn’t pass muster as either evidence or logic.

    Frankly, it’s just plain foolish to rely on existing curve-fitting trends to forecast 21st-century sea level rise — that’s the same idiotic mistake with which the North Carolina state legislature managed to embarrass itself. Almost as foolish as referring to the abominable paper by Houston & Dean.
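    The significance point above can be illustrated on synthetic data: over a short window, the trend estimate’s uncertainty swamps any apparent change in rate, even when the underlying rate never changes. A minimal sketch with invented numbers (a steady 3 mm/yr rise plus 4 mm interannual noise; not real altimetry data):

```python
import math
import random

random.seed(0)

def ols_slope(t, y):
    """OLS slope and its standard error for y = a + b*t + noise."""
    n = len(t)
    tbar, ybar = sum(t) / n, sum(y) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / sxx
    a = ybar - b * tbar
    resid_var = sum((yi - a - b * ti) ** 2 for ti, yi in zip(t, y)) / (n - 2)
    return b, math.sqrt(resid_var / sxx)

# Synthetic sea level record: constant 3 mm/yr rise plus interannual noise
years = list(range(1993, 2013))
level = [3.0 * (yr - 1993) + random.gauss(0, 4.0) for yr in years]

full_slope, full_se = ols_slope(years, level)
recent_slope, recent_se = ols_slope(years[-6:], level[-6:])

print(f"20-yr trend: {full_slope:.2f} +/- {full_se:.2f} mm/yr")
print(f" 6-yr trend: {recent_slope:.2f} +/- {recent_se:.2f} mm/yr")
```

    The 6-year slope wanders far from 3 mm/yr with a standard error several times the 20-year one, which is why a short-window “deceleration” can easily fail significance tests.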

  24. 74
    Pat Cassen says:

    Gavin – At #35 you state:
    Every model (more or less) shows increased heat waves, but whether they show increased variability in the jet stream leading to increased cold snaps is much less clear (at first cut the models show the opposite).

    Please clarify: Do the models show increased variability, but not leading to cold snaps, or do they not show increased variability of any consequence?

    It would seem that the theoretical argument for increased variability of the jet stream is plausible, if not absolutely rigorous: decreased latitudinal thermal gradient implies slower zonal jet stream velocity (via the thermal wind equation), which implies decreased baroclinic phase velocity (at least by linear theory), or more ‘lingering’ and amplified jet stream meandering, as Francis and Vavrus have described. The question is: do GCMs support this picture or not? Isaac Held describes “fruit fly” (stripped-down) models that seem well-suited to test these conjectures, but it seems that they have not yet been used to do this. True?

    Thanks in advance for your response.
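    The thermal-wind chain in the comment above (weaker meridional temperature gradient implies weaker vertical shear of the zonal jet) can be put in rough numbers. A minimal sketch; the gradient values and the 20% weakening are invented for illustration, not taken from any study:

```python
import math

def thermal_wind_shear(dT_dy, T=250.0, lat_deg=45.0, g=9.81):
    """Zonal wind shear du/dz (1/s) from the thermal wind relation:
    du/dz = -(g / (f*T)) * dT/dy, for a meridional temperature gradient dT/dy (K/m)."""
    f = 2 * 7.292e-5 * math.sin(math.radians(lat_deg))  # Coriolis parameter
    return -(g / (f * T)) * dT_dy

# Hypothetical gradients: present-day vs an Arctic-amplified (weakened) one
shear_now = thermal_wind_shear(dT_dy=-5e-6)   # -5 K per 1000 km
shear_warm = thermal_wind_shear(dT_dy=-4e-6)  # gradient weakened by 20%

print(f"shear now:  {shear_now:.2e} 1/s")
print(f"shear warm: {shear_warm:.2e} 1/s")
print(f"jet shear reduced by {(1 - shear_warm / shear_now) * 100:.0f}%")
```

    Because the relation is linear in the gradient, a 20% weaker gradient gives exactly 20% weaker shear; whether that translates into more meandering is precisely the dynamical question the GCMs (or Held’s “fruit fly” models) would have to answer.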

  25. 75
    flxible says:

    Yes, as Pat Cassen requests, please clarify what models say about jet stream variability, particularly with respect to attribution. I’ve noticed [and appreciate] that local forecasters are now showing jet stream location on their maps more regularly.

  26. 76
    chris says:

    re “jet stream locations” I guess you mean this sort of thing, flxible:

    location of jet stream

  27. 77
    John Lemons says:

    It seems to me that both the “Nature” editorial and many of the responses are in need of some conceptual clarity about some terms and, following this, scientific/philosophical conclusions that might flow from increased clarity.

    The subtitle of the “Nature” editorial is “Better models are needed before exceptional events can be reliably linked to global warming.” The editorial warns that climate scientists should be prepared for their skills to be probed in court, e.g., over the legal basis for claims brought by small northern communities facing coastal erosion, such as Kivalina, AK, against companies such as ExxonMobil. The first paragraph of the “Nature” editorial concludes with the statement that climate attribution will need to inform legal and societal decisions, and that doing so will require enormous research efforts.

    One of the problems is what is meant by “reliably.” If we assume a definition often used by scientists to reject a null hypothesis and/or by scientific evidence admissible in court (as in the suit by Kivalina against ExxonMobil), then the typical standard is that conclusions based on evidence have no more than a 0.05 probability (a 5 percent chance) of being wrong, a so-called “false positive” error, which concludes there is an effect when, in fact, there is none.

    But climate change attribution is terribly messy, with its large number of variables and types of uncertainties. Neither the “Nature” editorial nor the comments focused on the meaning of “reliable,” the issue of how “reliable” the science needs to be for actions to be based on it, or the justification for standards of “reliability.”

    If the conventional scientific standard of proof is required, e.g., rejecting a null hypothesis at the 0.05 level, then personally I am doubtful that attribution studies will be widely accepted, at least in the near term. But if some other, more precautionary standard of proof is accepted, e.g., rejecting a null hypothesis at a level of, say, 0.10 or 0.20 or whatever, then the studies might have greater utility.

    The salient point here is that, given scientific uncertainties, when science is done to inform policy or to assist environmental/human health welfare (as opposed to studying nature simply to gain knowledge about it), an ethical dimension becomes part of the consideration. Insisting on high or generally accepted levels of confidence to form scientific conclusions or to gain evidentiary admission into courts will increase the chances of making a false negative error, i.e., concluding there is no effect when, in fact, there really is. And when a false negative error is made, the risks and harms to the environment and the public are increased.

    Accordingly, the concept of “reliable” as used in the “Nature” editorial or in some of the responses to it becomes philosophical and ethical and not merely scientific.

  28. 78
    Russell says:

    The new Nature Geoscience paper by Reichler et al poses an interesting question in the light of sea ice shrinkage.

    If no ice is interposed, the angular momentum of the stratospheric polar vortex may be coupled to ocean circulation at high latitudes, something presently impeded by even the thinnest ice cover.

    Could this alter long term meridional circulation and thermohaline turnover?

  29. 79
    Jim Larsen says:

    77 John L said, “increase the chances of making a false negative error, i.e., concluding there is no effect when, in fact, there really is. And when a false negative error is made, the risks and harms to the environment and the public are increased.”

    I want to quibble with the term “false negative”. Unless you’ve got a 95% confidence that negative is the proper result, it’s not a false negative, it’s “undetermined”. That’s the same “lie” told by criminal defendants and their supporters upon acquittal, that they’ve been “proven innocent”. It’s a glaring lack in our legal system. Possible verdicts should include guilty, unsubstantiated, and innocent.

    I also disagree with the level of confidence you’re suggesting. 95% might be reasonable in a criminal trial, but it’s 51% in a civil trial, where there’s no presumption of innocence.

    Attribution of extreme events probably isn’t the hard part. It’s defining the defendants.

  30. 80
    Don Williams says:

    Re John Lemons at 77:

    1) American business is guided far more by political decisions and regulation than by litigation. While the standard is understandably high for winning large awards from someone in a lawsuit, most political and regulatory decisions do not require meeting a test of a “0.05 percent chance of being wrong”. If someone 15 yards away aims a handgun at me and fires, the odds of them missing me are greater than 0.05 percent, but that doesn’t mean that the law allows them to pull the trigger.

    2) For example, one issue in the northeast USA is how to handle the increased heavy downpours due to climate change. The state of Pennsylvania and its counties have regulations requiring large property developers to build basins to control the stormwater runoff resulting from laying impervious cover (asphalt, concrete, etc.) over formerly absorbent soil. Volume is calculated from impervious acreage times inches of expected rainfall.

    In the past, state agencies have used the AVERAGE expected rainfall listed in NOAA’s Atlas 14 for various storm events (2-year, 5-year, etc.) to compute the volume.

    However, the city of Philadelphia and two outlying counties (Bucks, Montgomery) just approved a stormwater management plan for a watershed that explicitly requires using the UPPER BOUND of NOAA’s 90 percent confidence interval (instead of the average) for the inches of rainfall. This will require developers to handle about 14 percent more stormwater volume than what would be required if computations were based on the NOAA averages.

    Past heavy rainfall from Hurricanes Floyd and Allison were cited as justification — as examples of extreme precipitation not accounted for by the NOAA averages.
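    For concreteness, the volume computation described above can be sketched as follows. The acreage and rainfall depths below are invented for illustration (actual Atlas 14 values vary by location and storm event); only the method, impervious area times rainfall depth, follows the regulations as described.

    ```python
    # Hypothetical sketch of the stormwater volume calculation:
    # impervious acreage times rainfall depth, comparing the NOAA average
    # with the upper bound of the 90% confidence interval.
    def runoff_volume_cuft(impervious_acres, rainfall_inches):
        # 1 acre = 43,560 sq ft; depth in inches -> feet by dividing by 12
        return impervious_acres * 43560 * rainfall_inches / 12

    avg_basis = runoff_volume_cuft(10, 3.20)    # assumed Atlas 14 average depth
    upper_basis = runoff_volume_cuft(10, 3.65)  # assumed 90% CI upper bound
    extra = 100 * (upper_basis / avg_basis - 1)
    print(f"{avg_basis:,.0f} vs {upper_basis:,.0f} cu ft ({extra:.0f}% more)")
    ```

    With these made-up depths the upper-bound basis requires about 14% more storage, in line with the figure cited above.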

  31. 81
    David B. Benson says:

    John Lemons @77 — Use Bayesian reasoning: which hypothesis is best supported by the evidence.
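    A minimal sketch of what that comparison could look like in practice (all probabilities invented for illustration): weigh an “anthropogenic influence” hypothesis against a “no influence” null by their likelihood ratio.

    ```python
    # Toy Bayesian update: posterior odds = prior odds * Bayes factor.
    # H1: warming influenced the event; H0: it did not. The event
    # probabilities under each hypothesis are made-up numbers.
    def posterior_odds(prior_odds, p_event_given_h1, p_event_given_h0):
        return prior_odds * (p_event_given_h1 / p_event_given_h0)

    odds = posterior_odds(prior_odds=1.0, p_event_given_h1=0.10, p_event_given_h0=0.02)
    prob_h1 = odds / (1 + odds)
    print(f"posterior odds {odds:.0f}:1 -> P(H1|event) = {prob_h1:.2f}")
    ```

    No arbitrary 0.05 cutoff appears anywhere; the evidence simply shifts the odds between hypotheses.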

  32. 82
    MikeH says:

    Ian @ 59 The full quote from 2007 Flannery interview is
    “We’re already seeing the initial impacts and they include a decline in the winter rainfall zone across southern Australia, which is clearly an impact of climate change, but also a decrease in run-off. Although we’re getting say a 20 per cent decrease in rainfall in some areas of Australia, that’s translating to a 60 per cent decrease in the run-off into the dams and rivers. That’s because the soil is warmer because of global warming and the plants are under more stress and therefore using more moisture. So even the rain that falls isn’t actually going to fill our dams and our river systems, and that’s a real worry for the people in the bush. If that trend continues then I think we’re going to have serious problems, particularly for irrigation.”

    The section of that quote that Ian cherry-picked has been used extensively in Australia in an attempt to discredit Flannery, a Climate Commissioner, by claiming that he said the “dams would never fill again”.

    Looking at the full statement in context, was Flannery wrong? No. Here is a May 2012 article by Karl Braganza, Manager of Climate Monitoring at the Bureau of Meteorology.

    “A 10-20% loss of autumn and winter rainfall has occurred over both the southwest and southeast corners of the Australian continent. This is most significant in the southwest of Western Australia, where the changes have occurred since around 1970. In the southeast, similar rainfall reductions have been apparent since the mid 1990s.”
    “The 10-20% loss of autumn and winter rainfall has been amplified in the subsequent reductions in streamflow and water storages. There are many reasons for this, the simplest being that drier soils and vegetation soak up more moisture and therefore provide less surface runoff when rain does fall.”

    Braganza goes on to explain that the record rainfall that fell in 2010-11 was tropical in origin and has not interrupted the long term drying trend in southern Australia.

  33. 83
    Hank Roberts says:

    > vortex may be coupled ….

    Good one, Russell.

  34. 84
    Peter Thorne says:

    One interesting point raised in discussions in the breaks (Chatham House rules apply) was how you attribute / apportion blame / reparations / compensation in the context of legal actions (if it were ever to go down that route). It was quite clear that it’s even more fraught than apportioning a fractional attributable risk. Carbon emissions are like taking out a reverse mortgage with large compound interest because of the lags in the system. Not all CO2 emissions are equal in the context of a transient climate response. Carbon we emitted in 1850 has more ‘power’ than carbon a certain C-word (!) emits today in explaining today’s events (C’s (or anyone’s today) matter more / catch up down the line). So it matters not just how much carbon emission ‘debt’ was taken out but when it was taken.

    As to the editorial, it’s impossible to characterize a two-and-a-half-day meeting in a few hundred words. Insofar as it is a window into the world of discussions of a nascent branch of the science, it is of interest. But it’s an editorial (clue is in the name there for the hard of thinking), and the proof will be in terms of demonstrable outcomes in the literature, which has a far longer lag time. My personal view is that it was a very interesting meeting reflecting both advances and challenges. That is what makes doing science interesting and exciting. I have no doubt progress is being made. There is substantial interest in pushing this forwards, but there are many challenges in the tools (observations – not mentioned in the editorial – equally important, and models) and the techniques. That is what science is all about and what makes such meetings hugely valuable.

  35. 85
    John Lemons says:

    Response to Jim Larsen #79: I am not sure whether you are agreeing or disagreeing with me. Typically, the false positive or Type I error is confined to about 0.05, whereas in most studies the false negative error is not known or typically ranges around 20–80 percent. Because the natural sciences developed with a goal of minimizing Type I or false positive errors (wanting to avoid adding speculative knowledge to the body of knowledge about natural phenomena), Type II errors have basically been ignored. The problem, as I stated, is that under conditions of scientific uncertainty it is almost impossible to meet the 0.05 confidence level for rejection of a null hypothesis, and when this happens, if the issue concerns environmental/human health, then risks to the latter always increase. This raises moral, ethical, and legal questions about what standards of evidentiary proof ought to be used in matters of environmental/human health protection. Allowing a higher probability of making a Type I error reduces the likelihood of increasing environmental/human health risks by reducing the likelihood of making a Type II error (false negative). In other words, if we are going to err, it is morally better to err on the side of protection of environmental/human health.

    Finally, I am not proposing that the 0.05 level for rejecting null hypotheses be used in environmental/human health issues laden with scientific uncertainty. I am suggesting that the level needs to be increased to 0.10 or 0.20 or some other probability, which would mean a reduction in the probability of Type II or false negative errors, thereby better protecting the public and environment under conditions of uncertainty.
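    The trade-off being described, that loosening the Type I threshold (alpha) tightens the Type II error (beta), can be sketched numerically. This is a rough illustration only: it assumes a simple one-sided z-test, and the effect size and sample size are invented, not anything from a real attribution study.

    ```python
    # How relaxing the Type I error threshold (alpha) reduces the Type II
    # error (beta) for a one-sided z-test of a hypothetical effect.
    from statistics import NormalDist

    def type_ii_error(alpha, effect_size, n):
        """Beta for a one-sided z-test detecting a true standardized effect."""
        z = NormalDist()
        crit = z.inv_cdf(1 - alpha)      # critical value under the null
        shift = effect_size * n ** 0.5   # mean of the test statistic under H1
        return z.cdf(crit - shift)       # P(fail to reject | effect is real)

    for alpha in (0.05, 0.10, 0.20):
        beta = type_ii_error(alpha, effect_size=0.3, n=25)
        print(f"alpha={alpha:.2f}  beta={beta:.2f}")
    ```

    As alpha is allowed to grow, beta shrinks: accepting more false positives buys fewer false negatives, which is exactly the precautionary trade being proposed.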

    To David Benson #81: I agree Bayesian statistics show promise, but they have not been used enough. I wish more people would try to apply them.

  36. 86
    Salamano says:

    Re: 61. Jim (and to a lesser extent Ray)

    “However, your claim about the existence for such events sorely needs documentation. You mentioned the 2012 European cold snap, but gave no evidence that it was as high sigma as the heat waves you refer to.”

    Okay…so here are some maps for what I’ve been talking about for Alaska and Europe last winter season:

    …Now, these should qualify as high sigma events, and furthermore, it should NOT even be necessary that these events dwarf or even meet the high sigma events on the ‘hot’ side of the coin. No one is trying to imply that this sort of thing is going to be continuously surpassing regional heat waves.

    As far as models that would render such events impossible – simply pick one. There have been a bunch of attempts to insta-define recent weather events in terms of their anthropogenic signatures. I would say the famed “Climate Dice” analogy is also one of these same issues. If you take Peter et al (2004) for example…

    …such simulations that are used to declare heat wave event X as beyond the threshold magnitude of normal expectability in a non-anthropogenically warming climate…should also be applicable to these cold snap events (read: events that do not even need to be anomalously greater by comparison). The conclusion you would draw from such an exercise would be that such events would become statistically near impossible with our human-modified climate as understood by these models.

    Now, that is not to say that in reality they would be impossible…I’m only talking about applying the methodology equally in these cases. Sure, someone can wave a hand and say that in our anthropogenically warming world we can still have absolutely any kind of weather out there – I’m just talking about the actual distribution of the probability.

    “A European cold snap will leave extra heat to be absorbed by the ocean, emitted to space, or be distributed around the remaining planet. For this discussion, I’d say distributed around is the dominant factor. Not a robust conclusion, but in a fashion, cold extremes increase hot extremes, and vice versa. Wasn’t winter 2012 a tad warm in the USA?”

    My point is NOT that somehow the prevalence of these cold snaps will all of a sudden make the globe anomalously cold this coming November–January… but rather that all they need to do is continually mitigate an increasing dominance of heat waves. The idea here is that this one process, clearly due to a warming world reducing Arctic sea ice, is introducing a currently poorly understood (and apparently sometimes discounted) feedback into the climate system that “stunts” or “mitigates” the overall warming picture to a non-zero degree. Compare such an assertion to an alternative expectation of wholesale “runaway” global warming or near-term “tipping point” climate change. I’m not saying this will have the effect of buying us millennia before the citizens of Earth need to do something, but I am saying that this is something that deserves to be a thought in the discussion, because we’re already seeing the global temperature anomaly lagging behind projected expectations, and it’s important to start discussing reasons why rather than simply droning on about error bars.

    In the “new” probability-distribution examples first discussed by Rahmstorf in an aforementioned link (and discussion at Real Climate), a unit value that accounts for both frequency and magnitude of extreme cold events should be decreasing in our warming world. However, if this feedback mechanism starts having regular results of maintaining high-sigma cold snaps in the winter season (remember, in a warming world, cold snaps will have a slightly lesser threshold to leap in order to become highly anomalous– and would NOT need to be record-breaking according to the historical record, but hot events would be)…then it would not be explainable within a normal probability distribution model. Instead it would show something where the variance is greater, yet the frequency of cold events maintained (or even increased), while also having the increased heat events and the average trend inching warmer– but at a slower pace than previously anticipated.
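    The distributional claim in the last paragraph can be made concrete with a toy calculation. The numbers are invented (a unit-normal “old climate”, a 0.5-sigma warm shift, a 30% variance increase); the point is only that a shift alone makes old-style cold extremes rarer, while a shift plus enough extra variance makes them more common than before, even as hot extremes also increase.

    ```python
    # Toy illustration: probability of crossing the old climate's +/- 3-sigma
    # thresholds under a shifted and/or widened normal distribution.
    from statistics import NormalDist

    def tail_probs(mean_shift, sigma_ratio, threshold=3.0):
        d = NormalDist(mu=mean_shift, sigma=sigma_ratio)
        hot = 1 - d.cdf(threshold)   # chance of an old-climate +3-sigma hot event
        cold = d.cdf(-threshold)     # chance of an old-climate -3-sigma cold snap
        return hot, cold

    hot0, cold0 = tail_probs(0.0, 1.0)   # baseline climate
    hot1, cold1 = tail_probs(0.5, 1.0)   # warm shift only: cold snaps get rarer
    hot2, cold2 = tail_probs(0.5, 1.3)   # warm shift + wider: cold snaps commoner
    ```

    So whether high-sigma cold snaps fade or persist under warming is, in this toy picture, a question about the variance, not just the mean.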

  37. 87
    Geoff Beacon says:

    I think you are taking on more formidable forces than you are prepared to admit – a mixture of die-hard climate modelers and government departments. The recent editorial in Nature you reference, Better models are needed before exceptional events can be reliably linked to global warming, makes me despair.
    You tell us that the editorial was influenced by an invitation-only workshop, Attribution of Climate and Weather Extremes: Assessing, Anticipating and Communicating Climate Risks. This was…

    Hosted by the Smith School of Enterprise and the Environment, the Environmental Change Institute and the Oxford Martin School, University of Oxford, with the support of the UK Government Foreign and Commonwealth Office, the US National Ocean and Atmosphere Administration, the UK Department of Energy and Climate Change and the Risk Prediction Initiative of the Bermuda Institute of Ocean Sciences.

    Organising Committee:

    Co-Chairs: Peter Stott, Met Office and Randy Dole, NOAA

    Local hosts: Myles Allen and Pete Walton, University of Oxford

    Steering committee: May Akrawi, FCO; Chris Hewitt, Met Office; Marty Hoerling, NOAA; Arun Kumar, NOAA; Falk Niehoerster, Risk Prediction Initiative, Bermuda Institute of Ocean Sciences; Chris Sear, DECC; Peter Thorne, NOAA.

    Key organisations for me are:

    The Smith School of Enterprise and the Environment
    The Environmental Change Institute
    Foreign and Commonwealth Office
    US National Ocean and Atmosphere Administration
    UK Department of Energy and Climate Change
    UK Met Office

    Except for the NOAA, I would apply one or more of these phrases to members of the above group:

    Combating climate change is too expensive for the UK
    Combating climate change is not business friendly
    We won’t believe the real world until we can model it
    We have reputations to preserve

    I’m in despair. Is it justified?

    [Response:No. – gavin]

  38. 88
    Jim Larsen says:

    85 John L said, “Type II errors have basically been ignored”

    I believe we generally agree. I’m just saying that those “generally ignored” type 2 errors have to be differentiated further before un-ignoring them.

  39. 89
    Myles Allen says:

    Nice thoughtful post, Gavin, thanks. Ed and Geert Jan’s responses do a good job of clarifying what the balance of opinion at the workshop was. I’m pleased the workshop did consider seriously the question of the value of extreme event attribution, and also that we managed to include some more skeptical voices, not just the usual team. I don’t think there was ever a chance of resolving the question of utility in a 2 day workshop, and clearly it is something we are going to have to keep thinking about (perhaps we could tempt you along to next year’s meeting in Washington?). I am personally very skeptical about arguments that certain branches of research (like geo-engineering) are Not in the Public Interest (whatever that is) even when the public are evidently interested in them, but I recognise that thoughtful people have a lot of useful things to say about this kind of question (we did invite Mike Hulme, by the way, but he had a prior commitment).

    I’ve just one point to add to the comments (and someone may well have made it already — in which case apologies): I agree that right now we can’t generally address these event-attribution questions in the 24-hour news cycle, but I think it is important to explain to people there is absolutely no reason in principle why we should not. We do weather forecasting in real time, and there is no fundamental difference between ensemble forecasting and probabilistic attribution. And, of course, using the same modelling tools would allow us to evaluate the reliability of attribution claims, something the workshop was rightly very concerned about. But that would require a commitment to climate services well beyond anything envisaged at present.

    I would welcome a broader debate about the balance of resources in climate research between understanding what is happening now versus predicting climate decades to centuries in the future, because I personally don’t think we have the balance right (but then, I would say that, wouldn’t I?).


    Myles Allen

  40. 90
    tokodave says:

    73 Hmmm…Dan H. or Tamino? Tough choice. Not. On to the NOAA Storm Atlas. Jay Dee @ 53 and others: The NOAA Storm Atlas Volume 1 for Montana dates to 1973, with an Arkell and Richards 1986 NOAA update. A Weather Service hydrologist described it as a bit of a time warp. This Atlas can’t possibly include the 11–15% increases in the amount of precipitation falling in very heavy precipitation events for the period 1958–2007 noted by Karl et al., 2009. We’re using some more recent USGS hydrological information to try to bring the storm numbers into at least the 21st century, but if anyone has any other good ideas I’d be happy to hear them.

  41. 91
    Alastair McDonald says:

    As I see it, it is impossible to attribute an extreme event to anthropogenic global warming (AGW), although there are exceptions which I will mention below. But I feel that scientists are in effect lying when they say to the press that they cannot prove an event was caused by AGW. That gives the impression that the event was not caused by AGW. But the scientist cannot prove that it was not caused by AGW. So what he was implying was a lie. What he should say is that the extreme event might have been caused by global warming, which is true.

    Interestingly, there was an extreme event where one can say it was caused by global warming – the collapse of the Larsen B ice shelf. Ice shelf calving is normal, but the collapse of the Larsen B was caused by rising temperature, and so it was a victim of global warming.

    Obviously, any one wild fire cannot be blamed on global warming, especially if it is known that someone dropped a lighted cigarette end. But recently there has been an increase in the number and intensity of wild fires. This is surely due to global warming. If you cannot be certain, then why not tell the press you are 90% certain that the increase is due to AGW? That is telling the truth. Just saying you are not sure implies you are 50% sure.

    This is really about your audience. If it is a journalist, keep it simple. If it is a colleague, explain all the difficulties.

    Cheers, Alastair.

  42. 92
    Tom Adams says:

    Why attribute extreme events? So my insurance company will know when to cancel my insurance!

    Seriously, I think the insurance companies look at the extent to which extreme events invalidate their currently used estimation models. And I think one did cancel insurance on a house I owned after Hurricanes Hugo and Fran penetrated deep inland.

    I know that invalidating models is not exactly the same as attribution, but it’s close.

  43. 93
    Jim Larsen says:

    86 Sal said, “all it needs to do is continuously mitigate an increasing dominance of heat waves themselves”
    Yet in both of your examples, chocolate-colored areas exceed purple. And I’m not sure how your evidence could prove your point. Maybe by showing times with vs. without severely cold anomalies and counting the hot areas?

    Frankly, either I’m seriously missing the point (that happens), or your claim makes no sense. Unless global average temperature decreases, every cold snap will increase the chance of a simultaneous heat wave. And for the cold snaps to deserve the same “coverage”, they WOULD have to be close to the sigma of the heat waves.

    Your link’s conclusion is nowhere near a claim of impossibility:

    “we estimate it is very likely (confidence level >90%)9 that human influence has at least doubled the risk of a heatwave exceeding this threshold magnitude.”

  44. 94
    Salamano says:

    @93 Jim

    “Frankly, either I’m seriously missing the point (that happens), or your claim makes no sense. Unless global average temperature decreases, every cold snap will increase the chance of a simultaneous heat wave. And for the cold snaps to deserve the same “coverage”, they WOULD have to be close to the sigma of the heat waves.”

    My feeling is that this is not entirely true (and yes, I certainly can be wrong). Yes, in a warming world, cold snaps would be more than balanced out by same (and greater) heat spells… But the real issue in my opinion is “how” warming. My contention is that this new polar interaction is a climate-change induced feedback due to the extreme melting of the arctic ice cap. Therefore, the appearance of these cold snaps DO NOT have to more than displace (or even equal) the heat waves, nor do they have to be >= the sigma of the heat waves to be interesting– because to me the incidence of them indicates a wrench in a theoretically modeled system of runaway/accelerated warming.

    “Your link’s conclusion is nowhere near a claim of impossibility:…“we estimate it is very likely (confidence level >90%)9 that human influence has at least doubled the risk of a heatwave exceeding this threshold magnitude.” ”

    My point here is this… Using the same model/methodology as this paper does, what kind of human-induced risk modification do you think would be assigned to coldwaves of 6-12 sigma magnitudes? (answer: pretty low…lower than observed).

    Eventually, somebody’s going to have to start investigating what (cold-related) anomalies/feedbacks are coming into the climate picture to keep global observed temperatures from performing the way models have expected them to. Covering eyes and ears, hand-waving about error bars, and praying for another legit El Nino event is certainly another option. I’m just tossing out a candidate.

  45. 95
    Jim Larsen says:

    94 Sal,

    Our infrastructure is built to traditional climate, and modern adjustments generally increase both heat and cold tolerance. So even if cold snaps occur which are as cold as ever – but surely not more than a tad colder – we’re prepared for them.

    Or are you making the claim that cold snaps are currently worse than they were in the 60s, and even further warming will accelerate this dangerous trend?

  46. 96
    Hank Roberts says:

    >> I’m in despair.
    >> Is it justified?
    > [Response: No. – gavin]

    This should become a CafePress mug-and-T-shirt for RealClimate.

  47. 97
    Don Williams says:

    Re tokodave at 90:
    Montana is better off than New England. NOAA 24-hour precip
    data for New England is Technical Report 40, dated 1961.
    Anyone remember those New England floods a few years back?

    So far, NOAA has developed more up-to-date Atlas 14 volumes
    for the Ohio River Valley/ MidAtlantic states, Puerto Rico/Caribbean,
    Alaska/Hawaii, and Southwest USA.

    NOAA’s website indicates that the Update for the Southeast/Midwest states
    is in progress, with the Northwest USA and New England scheduled for later.

    So if you guys want to stick your finger in the mixing bowl, taste the batter,
    and give suggestions to the cook, there is probably still time. I don’t know
    when they will update the recently issued (2004) volumes.

  48. 98
    Unsettled Scientist says:

    About the bore-holed comment that mentions the NPR tree-ring scientist: his name is Tom Swetnam. He was not claiming that, because there is more fuel in forests which haven’t burned, global warming plays no role in the increased risk of wild fires. If we look up his published work, say in Science, we find that he points out that climate change does indeed increase the wild fire risk. Indeed, he has even quantified the increased risk due to climate change.

    In a trail-blazing article in Science Magazine in 2006, Dr. Swetnam and his colleagues showed that a one-degree Fahrenheit temperature increase translates into a boost in the frequency and duration of large wildfires in the U.S. West. Since the 1980s, warming and drying have lengthened the fire season by more than two months and sparked four times more fires.

    Warming and Earlier Spring Increase Western U.S. Forest Wildfire Activity

    “The overall importance of climate in wildfire activity underscores the urgency of ecological restoration and fuels management to reduce wildfire hazards to human communities and to mitigate ecological impacts of climate change in forests that have undergone substantial alterations due to past land uses. At the same time, however, large increases in wildfire driven by increased temperatures and earlier spring snowmelts in forests where land-use history had little impact on fire risks indicates that ecological restoration and fuels management alone will not be sufficient to reverse current wildfire trends.”

  49. 99
    Jathanon says:

    Sal said
    ” Therefore, the appearance of these cold snaps DO NOT have to more than displace (or even equal) the heat waves, nor do they have to be >= the sigma of the heat waves to be interesting– because to me the incidence of them indicates a wrench in a theoretically modeled system of runaway/accelerated warming.”

    Who’s theoretical model has runaway warming?

  50. 100
    Tom Scharf says:

    A counterexample is the prediction, 6 years ago after Katrina, that hurricanes would be increasing in number and intensity. The insurance and re-insurance industries latched onto this speculative report and jacked up already-high home insurance rates in Florida by 40%.

    The Sarasota Herald-Tribune won the 2011 Pulitzer Prize for investigative journalism for exposing this circus.

    Florida insurers rely on dubious storm model:

    Instead of using historical hurricane rates to set expected disaster losses, they switched to computer modelling that predicted much higher disaster losses. Didn’t happen. Still not happening. Hurricane disaster losses for the 2000 to 2010 decade were right on the historical average, even with Katrina.

    So it’s one thing to do the science; it’s another when speculative, headline-seeking science gets too far ahead of reality. The science was clearly overplayed in this case. We haven’t had a CAT3 landfall in the US since 2006, an all-time record. Global cyclone energy is running near all-time lows.

    Florida insurance rates have not declined.

    So do the science, but please don’t declare it as useful before it is ready for prime time.

    [Response: Our discussion of hurricanes and global warming from 2005. Care to amend? – gavin]