RealClimate

Comments


  1. “Given that it coincides with the bulk of global warming (three quarters of which occurred from 1980 onwards) and is also predicted by models in response to rising greenhouse gases, this post-1980 warming in Russia is, in our view, very unlikely just due to natural variability.”

Using this simple time-series analysis… is it not possible to pepper PNAS with reams of analyses that describe “very unlikely just due to natural variability” conditions because of what has occurred primarily since 1980? What if there was a spot on Earth that may have stayed flat (or even cooled) since 1980? Would these, using a similar analysis, mean anything when published?

    Comment by Salamano — 6 Nov 2011 @ 6:13 AM

  2. I appreciate the way all this is explained, and it makes a lot of sense. However, I have a comment about this part:

    “This is just like using a loaded dice that rolls five times as many sixes as an unbiased dice. If you roll one six, there is then an 80% chance that it occurred because the dice is loaded, while there is a 20% chance that this six would have occurred anyway.”

    I don’t think it makes sense to talk about an individual event having x% chance of occurring with the bias or trend, and y% without. Probabilities can really only be about sets of data, not individual data points. It makes sense to say that we have, for example, 50% more warm records in the last two decades than the previous two decades, or that we can expect warm records to increase by 10% per decade on average, but not that any particular event has 80% chance of being due to global warming – that’s meaningless.

    It’s OK to make a general statement about the consequences of the warming trend, like “events of this magnitude are now 80% more likely to occur in any given decade”, but it’s not OK to say “this particular event was 80% likely to have been due to the warming trend”. The real world just doesn’t work that way.

    I hope this makes some kind of sense to everyone.

[Response: I don't agree. You have two possibilities: the climate is either roughly stationary (constant mean and variance over time), or it is not. Similarly, your die/coin/whatever is either fair (equal probabilities of all possible outcomes), or it is not. The analysis allows you to determine the relative likelihoods of the two possibilities. Very straightforward.--Jim]

    Comment by Icarus62 — 6 Nov 2011 @ 6:19 AM
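
    A minimal Octave sketch of the relative-likelihood arithmetic in the exchange above (the 5-to-1 loading is the post's illustrative number; this only checks the arithmetic, it is not the paper's method):

    % Loaded-dice illustration: what fraction of observed sixes would not
    % have occurred with a fair dice?
    n = 1e6;                                 % simulated rolls per dice
    fair_sixes   = sum(randi(6, n, 1) == 6); % fair dice: P(six) = 1/6
    loaded_sixes = sum(rand(n, 1) < 5/6);    % loaded dice: P(six) = 5/6
    frac_due_to_loading = (loaded_sixes - fair_sixes) / loaded_sixes
    % expected: (5/6 - 1/6) / (5/6) = 0.80

    The same subtraction underlies the 80%/20% split quoted from the post: it is a statement about relative frequencies, applied after the fact to a single observed six.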

  3. Thank you for the code. I’m a SAS programmer, and at my age, the thought of learning a new language is unappealing, so I translated this to SAS (below).

    My point in this posting is to follow up on my comments at the end of the Moscow warming thread.

    You have made it clear that you are looking at warming from all causes. My issue is that this is being widely interpreted as showing the effect of global warming (loosely defined), while the actual observed warming in this region may have other significant components. My main concern was AMO, under the assumption that the AMO might account for a significant fraction of the recent temperature trend observed in Moscow.

I wanted a simple sensitivity analysis: What would the results have looked like if we had merely experienced the temperature trend of the 1930s again? Obviously this is crude, yet I think it has some natural appeal for determining how different the present situation (reflecting global warming) is from the past. Obviously, the actual observed 1930s data is a poor proxy for the AMO signal, so I’ll also test for a peak that’s half a degree below the 1930s peak.

In short, I’ll compare the data as presented, a peak matched to the 1930s (trend ends at 0.5C below the 2009 value), and a further test case (trend ends at 1C below the 2009 value).

    The results are 81% (replicating your main result), 72% (replaying the 1930s peak) and 54% (for 0.5C below the 1930s peak).

    My conclusion is that any temperature trend for this geographic region that looked similar to the 1930s — here, plus or minus half a degree C — would generate a substantially increased likelihood of new high temperature records. In particular, if there is a strong cyclical component due to AMO (impossible to determine empirically, but useful at least as a straw man), then it is proper to attribute the increased odds to warming, but possibly not proper to attribute them to global warming.

SAS code follows. Vastly wordier than Matlab, but this is not what SAS was designed to do.

    * In SAS, comments are set off by asterisks, every command must end with a semicolon ;
    * DATA is the file you are creating, SET or CARDS is what you are reading from ;

    * Read the columns of data, output as one record with 129 temperature observations in moscow_smooth_1 to moscow_smooth_129 ;
    data temp ; input year temperature ; cards ; * your dataset went here ; run ;
data temp ; set temp ; counter = year - 1880 ; * for use in transpose step ; run ;
    proc transpose data = temp out = temp1 prefix = moscow_smooth_ ; var temperature ; run ;

    * Run the analysis ;
    data temp2 ; set temp1 ;

array moscow_smooth (*) moscow_smooth_1 - moscow_smooth_129 ;

    * to mimic a replay of the 1930s warmth, deflate the series so that the 2009 values matches the 1930s peak ;
    * Let A be a factor to be used to test even lower peaks ;
    A = 0 ;
array moscow_adjusted (*) moscow_adjusted_1 - moscow_adjusted_129 ;
adjuster = 1 - ((moscow_smooth(57)-A)/moscow_smooth(129)) ;
* = 1 - 19.04/19.54 for the base case, the factor to deflate the 2009 trend down to match the 1930s peak ;
do j = 1 to 129 ;
moscow_adjusted(j) = moscow_smooth(j) ; * fill the cells with the originals, then deflate them ;
* NB: the posting garbled the next statements (see comment 4 below) - what follows is a conjectural reconstruction, mirroring the intact second block ;
if (57 <= j) then moscow_adjusted(j) = moscow_smooth(j)*(1 - adjuster) ;
end ;

* *********************************base case, original trend, replicating the main result ;
excount = 0 ;
do i = 1 to 100000;
tmax = -99 ;
do j = 1 to 129 ;
t = moscow_smooth(j) + rannor(-1) ; * trend plus normal variate mean zero variance 1 ;
if t > tmax then do ;
tmax=t ;
if j >= 120 then excount=excount+1;
end ;
end ;
expected_records = excount/i ;
end ;
    probability_due_to_trend = 100*(expected_records-0.079)/expected_records ;
    put probability_due_to_trend ; * write it to the log file ;

    * *********************************modified so that 2009 trend value matches 1930s peak ;
    excount2 = 0 ;
    do i = 1 to 100000;
    tmax = -99 ;
    do j = 1 to 129 ;
    t = moscow_adjusted(j) + rannor(-1) ; * trend plus normal variate mean zero variance 1, -1 tells SAS to read the clock for the seed value ;
    *put moscow_smooth(j) t ;
    if t > tmax then do ;
    tmax=t ;
    if j >= 120 then excount2=excount2+1;
    end ;
    end ;
    expected_records2 = excount2/i ;
    end ;
    probability_due_to_trend2 = 100*(expected_records2-0.079)/expected_records2 ;
    put probability_due_to_trend2 ; * write it to the log file ;
    run ;

    Comment by Christopher Hogan — 6 Nov 2011 @ 8:45 AM

  4. Oddly, something about the process of posting garbled the lines in the SAS code above. What’s posted is gibberish (e.g., there are unclosed parentheses, logic aside it would not run as written). I won’t bother to try to post the correct code unless somebody else wants to work with it. The results as stated are correct, the SAS code is garbled.

    Comment by Christopher Hogan — 6 Nov 2011 @ 10:46 AM

  5. I’m agitated that this group, for whom I have the greatest respect, have not mentioned, even in passing that the debate is over. It’s been over for a hundred years.

    1. CO2 was discovered 150 years ago.
2. Its properties - it inhibits the passage of electromagnetic radiation in the ‘heat’ range - were discovered 100 years ago.
    3. The buildup of CO2 in the atmosphere was discovered 50 years ago.

    There’s nothing to discuss! This interminable debate of the ‘hair on the flea on the dog’ may be considered PC but it certainly is not as important as reporters and some contributors make it out to be.

    An important question is this: How do you feed a million people whose homeland has been inundated?

    This massive aggregation of brain-power could absolutely be put to better use.

    Comment by blue7053 — 6 Nov 2011 @ 1:01 PM

  6. the link to the NOAA website should be:
    http://www.esrl.noaa.gov/psd/csi/events/2010/russianheatwave/

    Comment by a.researcher — 6 Nov 2011 @ 1:35 PM

  7. Re: #3 (Christopher Hogan)

    under the assumption that the AMO might account for a significant fraction of the recent temperature trend observed in Moscow

    That’s quite an assumption. I see no more evidence that AMO is responsible for warming than that Leprechauns are.

    You have made it clear that you are looking at warming from all causes. My issue is that this is being widely interpreted as showing the effect of global warming

    I suggest you take that up with those who are making that interpretation.

    Comment by tamino — 6 Nov 2011 @ 1:57 PM

Thanks for an interesting article. Your paper isn’t one I’ve come across before; I’ll read it properly tomorrow on the bus – so apologies if any of the following is actually explained by the paper; on an initial perusal it doesn’t seem so.

As far as the above article goes: the extreme rise in ratio at the end of the series in both figures 3 & 4 initially reminded me of research I’ll get onto below. However, as I don’t think we can say with confidence what the trend would be in the real world, I’m not sure about the utility of this in terms of whether or not we are seeing a strong increase of natural disasters in our climate. It seems to me that the primary cause of the massive increase in ratio at the end of the series is the decreasing probability of new records as the stationary series gets longer.

    On a related matter I’ve been discussing on the recent open thread (if people have nothing to add on this matter here I’ll let it drop – this issue isn’t in the field I’m most interested in):

Over at Tamino’s ‘Open Mind’ blog there was a discussion about extreme weather events. I commented about some research from EMDAT about trends in European floods, “Three decades of floods in Europe: a preliminary analysis of EMDAT data.” PDF here.

Figure 1 of the EMDAT report shows an exponential rise in floods from 1973 to 2002. This is against a background in the EMDAT database of a rise in reported flood and storm disasters that is over and above the trends in reported non-hydrometeorological disasters, ref – note in particular the difference between wet and dry earth movements. This would at face value appear to suggest an increase in hydrometeorological disasters, and the most plausible explanation for that would be the intensification of the hydrological cycle due to AGW. However, figure 2 of the EMDAT report shows that a change in reporting sources from 1998 accounts for much of the recent rise in reported disasters.

The question I’ve not been able to work out is whether the change in reporting wholly explains the recent increase in floods, or whether there is indeed an underlying increase in floods. If changes in technology (e.g. the internet) and reporting ‘fashions’ wholly account for this increase then at some stage it should saturate. By that I mean that the press can only report what is happening; if they report an increasing percentage of events then at some stage they’ll near 100% coverage, at which point, if the series is stationary, the increasing trend will not be sustained and will level out.

I suspect that whilst reporting bias and increasing urbanisation (e.g. building on flood plains) account for some of the post-1980 increase, there is also an underlying element due to the intensified hydrological cycle. If anyone can clarify this, or point to research that does, I’d be grateful.

    Blue7053,

I agree that the debate as to the reality and ongoing nature of AGW is over. However, the debate as to its progression in terms of impacts (the most important metric for humanity) is not over.

    Comment by Chris R — 6 Nov 2011 @ 2:36 PM

  9. Christopher, you shouldn’t overestimate the role of AMO. I don’t know if you have bought into the mistaken belief that the ‘S-shape’ of the 20th century temperature curve is largely caused by the AMO — it is not. The peak-to-peak of the AMO in global temperatures is no more than some 0.1C (e.g., Knight et al. 2005). Compare this to a total temperature increase over the 20th century (from Rahmstorf & Coumou) of 0.7C, with an interannual variability of +/-0.09C.

    [Response: Indeed. This is a key finding of Knight et al (2005) (of which I was a co-author) as well as Delworth and Mann (2000) [the origin of the term 'Atlantic Multidecadal Oscillation' (AMO) which I coined in a 2000 interview about Delworth and Mann w/ Dick Kerr of Science]. The AMO, defined as a 40-60 year timescale oscillation originating in coupled North Atlantic ocean-atmosphere processes, is almost certainly real. It is unfortunate however that a variety of phenomena related simply to the non-linear forced history of Northern Hemisphere temperature variations have been misattributed (see e.g. Mann and Emanuel Eos 2006) to the “AMO”. In this sense, I fear I helped create a monster. I talk about this in my up-coming book, The Hockey Stick and The Climate Wars. –Mike]

    It gets worse looking at Moscow: there, while both the total temperature increase and (perhaps) the impact of AMO are about double that of the global case, the interannual variability is a whopping +/-1.7C. Any AMO effect would find it hard to stand out against this background.

    About your code: Perhaps put it on a web site and give a link to it in a comment here.

I tested Stefan’s code in Octave and it works. Octave is free and easily installed even on Windows ;-)

    Comment by Martin Vermeer — 6 Nov 2011 @ 2:51 PM

  10. Icarus62, Your interpretation of probability is one possible interpretation (namely, the frequentist), but it is not one that is universally accepted. And what would you say about a probability in quantum mechanics, where the probability of a measurement is well defined in terms of the wave function?

    Generally, however, in my experience, even Frequentists have no trouble assigning probability to a single event.

    Comment by Ray Ladbury — 6 Nov 2011 @ 3:36 PM

  11. Re: #8 (Chris R)

    I looked at the EM-DAT data some time ago. There are obvious, and very strong, effects due to changes in both the likelihood of reporting and the way that effects (e.g. estimated number of fatalities) are estimated. These seem to be extreme for the older data, but likely continue even to more modern times. I concluded, as you suggest, that it’s very difficult to disentangle observational effects from physical changes. So I fear that those data, at least, aren’t sufficient to answer your questions. And that’s a real pity.

    Comment by tamino — 6 Nov 2011 @ 3:40 PM

blue7053 (#5) says:
    6 Nov 2011 at 1:01 PM
    “I’m agitated that this group, for whom I have the greatest respect, have not mentioned, even in passing that the debate is over. It’s been over for a hundred years.”

    If one is unwilling to re-examine one’s assumptions, analysis, and conclusions, learning and discovery stops. Regardless of what the preponderance of the evidence says, the debate is NOT over, because there is always more to learn, deeper understandings and insights to gain, and the occasional surprise. Have the confidence in your thinking to be willing to question it – always and forever.

    Comment by Geno Canto del Halcon — 6 Nov 2011 @ 4:32 PM

  13. RE:3

    Hey Christopher,

    As you are solving for the observation in 2009/10 you probably should not use those values in the analysis.

Next (as you are likely comfortable with current statistical methods, this may not apply to you): basically, in statistics, no outlier should be greater than 100% of the range of the population in a normalized distribution. (If we incorporate the 2009/10 anomalies we should re-calculate the median of the population.) The way to determine whether a value could be part of a population is to check whether it is contained in the data set defined by its maximum range. (This is very different from trying to determine the probability that a value is possibly the median of the population.)

    The value above the rolling median in this case, is the maximum temperature above the changing mid-range value or is the “extreme value”. By removing the changing mid-range (normalizing the data as though it represented a stationary climate) and simply plotting the maximum values, it still describes a parabola. The tilted form of the resulting parabola, where the fixed point can be the median and the resulting tilt of the “fixed line”, describes the change from a stationary climate.

    Cheers!
    Dave Cooke

(Note: In this case, we likely could describe 2 parabolas; if we could extend the data set back another 70 years with a regional equivalent site/proxy, it may reveal some useful information as to weather pattern oscillations…)

    Comment by ldavidcooke — 6 Nov 2011 @ 4:49 PM

Are we seeing a second effect also, perhaps? Postulate that, instead of just the average trending upwards, the standard deviation is increasing as well. Then the effect would be even more pronounced. In fact, if you had a flat average but the standard deviation went up, you’d still get a lot of records. So do we have data on whether the standard deviation, which represents some sort of weather variability measure, is changing?

    Comment by Thomas — 6 Nov 2011 @ 6:30 PM

Stefan — I thought this quite a good and clear exercise. But I now, on physical grounds, wish to doubt the white noise assumption. Obviously one could rerun the exercise for 1/f^a noise, pink noise, for various values of a between 0 and 2. Rather than doing so I’ll simply ask if that changes much.

    Comment by David B. Benson — 6 Nov 2011 @ 6:34 PM

Thomas – that’s not right. A larger standard deviation leads to less record-breaking. It’s one of the main findings of the Rahmstorf and Coumou paper (building on earlier work in this regard). That’s why the Moscow July temp series sees fewer record-breaking warm events than the GISS global temp series. Moscow in July warms by 1.8°C over the 100-year period, but the year-to-year variability is 1.7°C. GISS, on the other hand, has 0.7°C over 100 yrs, but an interannual variability of only 0.088°C.

    The same deal applies to the MSU satellite vs surface temperature datasets. The satellites have larger interannual variability than the ground-based network and therefore a smaller probability of record-breaking. This is explained in the paper, which is freely available here.

    Comment by Rob Painting — 6 Nov 2011 @ 7:05 PM

  17. Re: #16 (Rob Painting)

    I don’t think Thomas was referring to a larger standard deviation series having more records than a smaller one (as was addressed in Rahmstorf & Coumou), but to an increase in standard deviation within a single series leading to more records.

    Comment by tamino — 6 Nov 2011 @ 8:26 PM
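
    A quick Octave sketch of that reading of Thomas’s question: a flat-mean series whose standard deviation grows along its length produces more running records than a stationary one. The factor-of-two growth is an arbitrary illustrative choice, not an estimate from any dataset:

    % Compare record counts: flat mean with growing sd vs. a stationary series.
    n = 129; trials = 20000;
    sigma = linspace(1, 2, n);               % sd grows from 1 to 2 (illustrative)
    rec_grow = 0; rec_flat = 0;
    for k = 1:trials
      x = randn(1, n) .* sigma;              % growing-variance series
      y = randn(1, n);                       % stationary series, sd = 1
      rec_grow = rec_grow + sum(x == cummax(x));   % running records in x
      rec_flat = rec_flat + sum(y == cummax(y));   % running records in y
    end
    [rec_grow rec_flat] / trials             % more records with growing sd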

  18. Geno “more to learn, deeper understandings and insights to gain, and the occasional surprise” has no relevance to the central debate being over, any more than it does to evolution science, where exactly the same phrase could be used.

    Comment by David Horton — 6 Nov 2011 @ 9:12 PM

Has anyone yet thought about applying memristive functions to model nonlinear behavior?

    Comment by prokaryotes — 6 Nov 2011 @ 10:58 PM

  20. 5 blue7053: “How do you feed a million people whose homeland has been inundated?”
    You don’t. They die. The difficult part may be realizing what is possible and what is not possible. Do what you can and don’t try to do what you can’t. The action to take is to stop the Global Warming. You could also try to stop the growth of population.

    The point of continuing to tell people what has already been discovered is that the people have not become convinced enough to act on GW yet. The sooner we act, the more people will be saved. The later we act, the fewer people will be saved. Scientists cannot dictate policy.

    Justice is a human invention found nowhere in Nature. There will be no justice. Excessive altruism will lead you into more of what you don’t want.

    Comment by Edward Greisch — 7 Nov 2011 @ 12:32 AM

Climate change is not an isolated event; as many have come to recognise, there are cyclical changes, and occasionally the phasing is such (with relevant delays) that it is likely to produce a cumulative extreme. One of the rare occasions happened in the late 1690s-early 1700s. If the current decadal period is one, time will tell.
    Here:
    http://www.vukcevic.talktalk.net/CET-NV.htm
I have made an attempt (again time will tell if successful) to capture some of those oscillations related to the North Atlantic, the home of the AMO, which according to the BEST team is of global importance:
    We find that the strongest cross-correlation of the decadal fluctuations in (global) land surface temperature is not with ENSO but with the AMO.
    http://berkeleyearth.org/Resources/Berkeley_Earth_Decadal_Variations (pages 4&5).

    Comment by vukcevic — 7 Nov 2011 @ 4:34 AM

  22. Something to watch:
    http://capitalclimate.blogspot.com/2011/11/indirect-effects-arctic-ice-loss-has.html
    Will circumstances render the math unnecessary?

    Comment by Pete Dunkelberg — 7 Nov 2011 @ 7:39 AM

To Tamino, comment 7. I don’t have a model that decomposes the local temperature change into global warming (broadly defined) and other components. But neither does anybody here. So ignore my ham-handed exercise in trying to parse out the marginal effects of global warming. There’s still an issue here.

Nobody can take serious issue with the arithmetic. The observed warming at that location creates substantially higher odds of a new record. Some of the statistically oriented comments above have noted that maybe it could have been done better, but I think that’s unlikely to be empirically important.

    What’s a little sketchy is that the writeup is of two minds about causality. Formally, it clearly states that it’s not attribution, and ignores the outlier 2010 year when calculating trend. Informally, it’s a tease. Look at the last sentence in “It ain’t attribution” paragraph. I paraphrase: It ain’t attribution (but we all know it’s global warming). Alternatively (but what else could it be). Or compare the temperature trend graph in the prior posting to the data actually used. The graph boosts the 2009 value about 1C relative to the data. That the graph and data differed was noted in the text, yet I’ve never before seen that sort of visual exaggeration used on Realclimate.

    So here’s my technical question. Why? Why do you have to tease about the causality here, rather than test it in some formal way? Was it just a time constraint, or is there some fundamental barrier to doing more than just hint as to cause?

    I can think of any number of naive things to do, other than comparing the current trend to prior peak as I did above.

So here’s one. Why not take the median temperature prediction from an ensemble of GCMs for this area, with and without GHG forcing, Monte Carlo with the actual observed distribution of residuals, and attribute the marginal increase in likelihood under the GHG scenario to the impact of GHGs on temperature records. Is that too much like the Dole analysis? Are the models insufficiently fine-grained to allow this? Do they do such a poor job at local temperature prediction? I’m not suggesting the burden of developing the distribution of temperatures via iterated runs of the ensembles, just taking (what I hope would be) canned results and hybridizing them with the current method to parse out the extent to which GHG-driven warming has raised the odds of a new high temperature record.

    To me, as this stands, the writeup is unsatisfying. You have unassailable arithmetic that, by itself, strictly interpreted, has little policy content. Then you have not-quite-attribution material wrapped around that – clearly distinguished from the formal analysis. I’m just wondering if there was some fundamental barrier to combining the two, or whether there is no formal (yet easily implemented) way to put the policy context on some firmer ground.

    [Response: Our goal with this paper was to understand better how the number of extremes (both record-breaking and fixed-threshold extremes) changes when the climate changes. We derived analytical solutions and did Monte Carlo simulations; our goal was not to create any "policy content". To give the abstract solutions we found some real-life meaning, we did not confine it to the 100-point synthetic data series we originally worked with but added two simple examples, one of them was the Moscow data. This got much bigger than expected since during the review process the Dole et al. paper came out, and the reviewers asked us to elaborate on the Moscow data - that is how this became a much larger aspect of the paper than originally intended. Our goal was not to revisit the question of what causes global warming. There is plenty of scientific literature on that. -stefan]

    Comment by Christopher Hogan — 7 Nov 2011 @ 9:09 AM

  24. Vukcevic,

    Could you expand on the North Atlantic precursor in your graph. Is it related to the NAO, AMO, or some other factor?

    Comment by Dan H. — 7 Nov 2011 @ 10:21 AM

I have a question on testing and language. If, say, we have a loaded dice that will roll a 6 five out of 6 times, then it seems that if a 6 were rolled we can say the 6 had a probability of (5-1)/5 = 80% of being a result of the loading. Is there a better term than “due to loading the dice”? If we roll a 2, it was less likely on the loaded dice, and I’m hoping for some better language to explain this to people who don’t understand statistics.

On the substance of the paper, there is modeling of the variability of the temperatures and trend line. It seems fairly straightforward to test this model on a fairly large number of data points. The last decade was fairly hot. Has this been tested on a number of regions with historical data and verified to be a good predictive model of records?

    thanks

    [Response: Sure, we've tested this globally on over 150,000 time series, but that paper is still in review. -stefan]

    Comment by Norm Hull — 7 Nov 2011 @ 3:48 PM

  26. 24 Dan H. says:
    7 Nov 2011 at 10:21 AM
    Vukcevic,
    Could you expand on the North Atlantic precursor in your graph. Is it related to the NAO, AMO, or some other factor?

    I would qualify it the other way around:
The NAO, and some years later the AMO, are related to the North Atlantic Precursor.
    See the last graph in:
    http://www.vukcevic.talktalk.net/NAOn.htm
    but it is related to the SSN as you can see here:
    http://www.vukcevic.talktalk.net/CET-NV.htm
this is a relationship that current science may have a bit of a problem verifying; the data is there, but the mechanism is not so clear-cut as the graph may suggest.

    Comment by vukcevic — 7 Nov 2011 @ 4:38 PM

  27. Getting a record ratio curve like that just because you made the climate U-shaped is astounding. It is so counter-intuitive that I predict a lot of people won’t believe it. What happens if you invert the U? Same as the half U?

    [Response: Easy enough to try - that is why we provided the code, so our readers can play... -stefan]

    Comment by Edward Greisch — 7 Nov 2011 @ 11:47 PM
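
    For readers without the original script at hand, here is a self-contained Octave sketch of that experiment: count last-decade record highs for a synthetic U-shaped climate, its inverted counterpart, and a stationary one. The trend amplitude of 1 against unit white noise is an arbitrary illustrative choice, not the Moscow fit:

    % Expected number of record highs in the final 10 points of a 129-point
    % series, for three deterministic climates plus unit white noise.
    n = 129; trials = 20000;
    t = linspace(-1, 1, n);
    shapes = {t.^2, -t.^2, zeros(1, n)};     % U, inverted U, stationary
    expected = zeros(1, 3);
    for s = 1:3
      recs = 0;
      for k = 1:trials
        x = shapes{s} + randn(1, n);         % climate plus noise
        m = cummax(x);                       % running maximum
        recs = recs + sum(x(120:n) == m(120:n));  % records in years 120-129
      end
      expected(s) = recs / trials;
    end
    expected    % U boosts record highs, the inverted U suppresses them
                % (and by symmetry boosts record lows)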

  28. Thanks, an excellent post that knocked some rust off of my stats and probability, and answered some questions in my mind about this kind of study.

    I’ve been thinking for a while that it isn’t any single unusual event that will create the most serious kinds of problems, it is the risk that more than one event happens in the same year. As unusual events become less unusual, that risk goes up. It was convenient that the US farmers had a good year at the same time the Russian farmers had a bad one, and still, one could conjecture that the Arab Spring was sparked by high food prices resulting from Russia not exporting any wheat to that region that year. Probably that fire was already prepared, but it might have needed a spark.

    Which leads me to comments #5 and #20. If the 100 million are isolated by geography, they die, but if they are not, then they inundate whatever nearby areas there are. Then, depending on political and economic conditions of those regions, they either die, are fed, or trigger social upheaval that could potentially result in more deaths than if they had been isolated. I believe all scenarios (with smaller numbers) have played out in history, but it is damnably hard to predict because so much depends on the decisions of those in power at the time. Not that those in power can be blamed always; there are no-win scenarios.

    Comment by Chris G — 8 Nov 2011 @ 12:02 AM

  29. Unless Moscow was chosen a priori the analysis is highly suspect. Run a Monte Carlo simulation of 100 cities and a posteriori pick the city with the largest record and I’m sure you could get a similar result just due to randomness.

    [Response: I think you may not quite understand our analysis. All the Monte Carlo simulations tell us is that with the given (nonlinear) climate trend, the chances of hitting a new record are five times larger than without the climate trend. This conclusion would remain true even if there had not been a record in Moscow in the last decade. Our analysis finds that the chances of a record in the last decade were 41%. It might as well have not occurred. But it is five times as large as the 8% that you would expect in an unchanging climate.
    Think of the loaded dice (by the way, do you native speakers now call the singular a die or a dice??): a large number of tries might show you that your loaded dice rolls twice as many sixes as a regular dice. This information would remain true whether you have just rolled a six or not. But IF you have just rolled a six, you can say there is a 50% chance that this would not have happened with a regular dice. That is just another way of saying that a regular dice on average would only produce half as many sixes. -stefan]

    Comment by James — 8 Nov 2011 @ 12:47 AM

  30. (by the way, do you native speakers now call the singular a die or a dice??)

Die-hards (pun intended) still use die as the singular noun for that little cube with spots on it, and this is still considered correct as listed in all the main dictionaries I believe.

    However, the Oxford dictionaries (online at least), and possibly others, now list dice as “a small cube with each side having a different number of spots on it”; i.e. use in the singular is approved usage. And under die usage they note “In modern standard English, the singular die (rather than dice) is uncommon. Dice is used for both the singular and the plural”.

    Oxford dictionaries have made this change to common usage at some time since 1994 (which is the date of my edition of the SOED in which “die” is the only singular version given). Time for me to update my SOED!

    So, both versions are now considered correct, and so confusion and arguments will no doubt reign.

    Comment by P. Lewis — 8 Nov 2011 @ 5:37 AM

Stefan, some of us still use the singular forms “die”, “datum”, and “phenomenon”, but we appear to be in danger of extinction.

    Comment by S. Molnar — 8 Nov 2011 @ 5:40 AM

  32. “. . . by the way, do you native speakers now call the singular a die or a dice??”

    “Die” is correct, though I think this is something people often get wrong.

    Comment by Kevin McKinney — 8 Nov 2011 @ 7:28 AM

  33. re: die/dice, see Ambrose Bierce’s Devil’s Dictionary. :-)

    Comment by CM — 8 Nov 2011 @ 9:18 AM

  34. > …but that paper is still in review. -stefan

    And you haven’t even held the press conference yet? You’re falling behind the times. :-)

    Comment by CM — 8 Nov 2011 @ 9:28 AM

  35. @ Edward Greisch 7 Nov 2011 at 11:47 PM re inverting the curve

Mathematics says that the probability of record high temperatures with a convex upward curve is the same as the probability of record LOWS with a concave up curve. If you plotted high and low probabilities on the same graph, it (probably ;>) would look like this.

    Comment by Brian Dodge — 8 Nov 2011 @ 10:11 AM

  36. Yes: die is old-school singular. But then I also remember when “inflammable” meant “burns easily”, non-flammable meant it wouldn’t, and you couldn’t find “flammable” in the dictionary.

    So (to get back on topic, sort of…) a heat wave and very dry vegetation would create highly-inflammable conditions…

    [Response: Good example. The use of "inflammable" has caused me problems more than once in reading the old fire literature...how exactly was this forest "inflammable" if it just went up in smoke?? :) Jim]

    Comment by Bob Loblaw — 8 Nov 2011 @ 11:30 AM

  37. “…do you native speakers now call the singular a die or a dice?”

    It depends on the audience. My friends in academia prefer “data are”; those in industry, “data is”. “Indexes” is a verb (seldom used) and “indices” is a noun to the academics, but “indices” is sometimes not received well by customers in industry, etc. To some it merely sounds correct, to others, it sounds pretentious. To me, it is just simpler to think of datum/die as singular and data/dice as plural, but then, I am feeling somewhat like a dinosaur. (#31 ;-) )

    Comment by Chris G — 8 Nov 2011 @ 11:41 AM

It is the nature of knowledge that it is never completely certain, because the contents of our minds are limited by our human abilities to perceive, analyze, and conceive. To make statements like “the debate is over” with reference to scientific discussions is to betray a fundamental lack of understanding regarding epistemology. David Horton is simply demonstrating this lack of understanding, as did blue7053 (#5) before him. To refuse to debate is to tread into the realm of dogma, not science.

    Comment by Geno Canto del Halcon — 8 Nov 2011 @ 1:43 PM

Re: #38 (Geno Canto del Halcon)

To make statements like “the debate is not over” with reference to some scientific topics — say, the health risk of smoking for example — is not an exercise in epistemological purity. It’s a despicable, and rather pathetic, excuse to deny reality, usually for the most loathsome of motives.

    It’s also recorded history. Concerning global warming, we are watching history repeat itself.

    Please spare us your haughty appeal to philosophical uncertainty.

    Comment by tamino — 8 Nov 2011 @ 2:06 PM

  40. re: “the debate is over”:

    Some debates are over. (There’s no such thing as phlogiston; the Earth orbits the Sun; CO2 is a greenhouse gas; it’s increasing; the increase comes from human activities.)

    Some aren’t. This post, for instance, is related to a current debate among reasonable people over whether a particular heat wave could have been anticipated based on past trends. That question, in turn, is related to the no less debated question how particular events can be ascribed to man-made global warming.

    I hereby declare the debate over the debate being over, over.

    Comment by CM — 8 Nov 2011 @ 2:38 PM

  41. Geno Canto del Halcon wrote: “To make statements like ‘the debate is over’ with reference to scientific discussions is to betray a fundamental lack of understanding regarding epistemology.”

    To make statements like “Regardless of what the preponderance of the evidence says, the debate is NOT over” (your words in comment #12) without stating specifically what particular “debate” or what particular “scientific discussion” you are talking about, is vacuous.

    Comment by SecularAnimist — 8 Nov 2011 @ 2:45 PM

Yes, “die” is the singular. It is forgotten because dice nearly always occur in pairs. Other singulars are “addendum”, “agendum” and “stadium”. The word “cattle” is notable for not having a singular. We have words for a single male or female of the species, but if you see one some way off and can’t tell the gender, we don’t have a word for it.

    Comment by Pete Dunkelberg — 8 Nov 2011 @ 10:02 PM

Stefan, you have doubtless noted that aridity is spreading (see the papers of Dai, for instance). And where it is not arid there are more floods, and where neither of the above is happening, precipitation is still more concentrated in fewer, stronger events. As these all seem to be ongoing trends, agriculture is going to have a large problem in the future. If drought and floods keep increasing, then by 2030, say, if there is a “good year” for both drought and flooding at the same time, there may be a very large famine. How would you compute the cumulative risk of this? [for a famine where perhaps 1 million people starve, or 10 million, or 100 million, for 2030, 2040, 2050 ...]

    This is the information humanity needs. Has it been published? Are you working on it?

    Comment by Pete Dunkelberg — 8 Nov 2011 @ 10:06 PM

  44. Geno Canto del Halcon, look at it this way: scientists are constantly striving for a deeper understanding of gravity. However, new discoveries in this area are not going to make satellites fall out of the sky, nor will Newtonian calculations be any less successful when calculating how to launch a rocket to Mars. Likewise new discoveries in atmospheric physics will not make all the usual processes of energy exchange stop happening.

Climate science is the working out of standard physics and chemistry on a planetary scale. A planet as a whole can gain or lose energy in very few ways. On a planet like ours, a big portion of the daily energy loss (to compensate for the energy gain from our star) takes the form of longwave infrared radiation. Certain gases intercept and re-radiate this radiation through the atmosphere. The energy is finally lost to space from the cold thin upper atmosphere. The whole process is mediated by certain gases and is called the greenhouse effect. This is not going to change, just as satellites are not going to fall out of the sky on account of new discoveries. But scientists hardly consider all questions answered. Why do you suppose they keep on doing research?

    In the area of climate change we have not only an interesting application of physics, we also have a very large slowly developing human problem. There is something obvious we can do to keep the problem from getting too large and harmful. We can change the way we get usable energy. The reason this change is delayed is so that the current top energy companies can continue to accumulate large sums of money. Please don’t help them.

    Comment by Pete Dunkelberg — 8 Nov 2011 @ 10:49 PM

Pete Dunkelberg @43 — “… precipitation is still more concentrated in fewer, stronger events.” I’ve certainly seen this expressed several times but have not read a definitive study.

    Comment by David B. Benson — 8 Nov 2011 @ 10:52 PM

  46. Pete:

Yes, “die” is the singular. It is forgotten because dice nearly always occur in pairs. Other singulars are “addendum”, “agendum” and “stadium”.

It’s taken a long, long time but English, in recent decades, has finally decided it’s not Latin after all … and that we’re free to throw off the shackles and have the language that we want. :)

    And as has been pointed out above, “dice” as the singular is accepted by linguistic authorities (such as the Oxford Dictionary).

    If people think that English must be a dead language, then surely they reject words like “internet”, and all that follows?

    Comment by dhogaza — 9 Nov 2011 @ 12:05 AM

  47. Geno Canto del Halcon,
    I look forward to your bibliography of recent papers on the luminiferous aether.

    Comment by Ray Ladbury — 9 Nov 2011 @ 5:01 AM

  48. #47–Not even trying to resist the OTness of this linguistic stuff. . .

    “how exactly was this forest “inflammable” if it just went up in smoke??”

    Well, it presumably went up “in flame” too–and I think “inflammable” was originally meant to convey “capable of becoming inflamed.” Which now sounds exclusively medical–but I think it was not always thus.

    Comment by Kevin McKinney — 9 Nov 2011 @ 6:12 AM

One question: does the above data simulation for heat records hold true when looking only at the minimum values (regarding winter records in the same manner)?

    [Response: If you're asking if the same type of analysis could be applied to records, regardless of which end of the scale they're on, the answer is yes.--Jim]

    Comment by james — 9 Nov 2011 @ 9:16 AM

Stefan, thanks for the reply; I look forward to seeing the statistics. I side with the evolving English part of the debate and prefer the term loaded dice, but die is certainly acceptable ;-)

From a thought experiment point of view, variation will give similar results as directional change. Is this correct as long as the variation is similar to your curve and is bigger than the directional change? Jumping back to the real world, and the paper, it seems that the trend line includes both local climate change and low-frequency climate variation. There definitely appears to be some climate variation in the 130-year trend line. Would it not be more proper to say that the dice is then loaded with both local climate change and climate variation, and the comparison model had no cyclical climate variations? This would make the event more likely due to climate change and variation, which seems easy to grasp. The interesting thing is not the direction but the probabilities.

Following up on the language part, say the January low in Oklahoma this year had similar loaded dice against it. Would the probability be (1-5)/1 = -400% that the low was due to climate change? Or would it be better to say that, since it is a negative number, climate change is unlikely to have had anything to do with causation of this record low?

    Comment by Norm Hull — 9 Nov 2011 @ 12:54 PM

  51. #38
“It is the nature of knowledge that it is never completely certain, because the contents of our minds are limited by our human abilities to perceive, analyze, and conceive.”

    An exercise in uncertainty on science…

1. Take a 10 kg weight to a height of, say, 10 metres – makes the numbers nice and easy.

    2. Place your foot directly below the weight.

    3. Allow the weight to fall.

    4. Use the stimulus of pain as a focus for meditation on the issue of certainty in science.

    5. Use the period of convalescence to learn the sort of basic physics that may stop you doing anything as stupid again.

    6. er

    7. …that’s it.

    Comment by Chris R — 9 Nov 2011 @ 1:30 PM

  52. > precipitation … fewer … stronger … definitive

    David (replying to Pete D.)– can you tell us which studies you’ve looked at that you don’t find definitive or convincing?

    That would help narrow the list to consider what might be useful.
    There are so many hits on this:

    http://www.google.com/search?q=precipitation+intensity+frequency+concentration+trend

    (I thought Google had discontinued using + in searches but it’s still putting them in, so maybe this will work for a while)

    Comment by Hank Roberts — 9 Nov 2011 @ 1:33 PM

  53. Is western Alaska’s snowicane a record breaker?

    Comment by David B. Benson — 9 Nov 2011 @ 9:21 PM

Hank Roberts @52 — Pete Dunkelberg found
    Seung-Ki Min, Xuebin Zhang, Francis W. Zwiers & Gabriele C. Hegerl
    Human contribution to more-intense precipitation extremes
    NATURE VOL 470, 17 FEBRUARY 2011, 378–381
    http://www.nature.com/uidfinder/10.1038/nature09763
    which is very good IMO. However, that study (and many others) fails to confirm that there are areas with decreased average rainfall but increased extreme rains. I’m certainly prepared to believe that so-called desertification will lead to such a result, but I’d like to see some evidence.

    Comment by David B. Benson — 9 Nov 2011 @ 9:39 PM

  55. perhaps:
    http://www.uib.es/depart/dfs/meteorologia/ROMU/formal/paradoxical/paradoxical.pdf
The paradoxical increase of Mediterranean extreme daily rainfall in spite of decrease in total values
GEOPHYSICAL RESEARCH LETTERS, VOL. 29, NO. 0, 10.1029/2001GL013554, 2002

“Earlier reports indicated some specific isolated regions exhibiting a paradoxical increase of extreme rainfall in spite of decrease in the totals. Here, we conduct a coherent study of the full-scale of daily rainfall categories over a relatively large subtropical region-the Mediterranean-in order to assess whether this paradoxical behavior is real and its extent. …”
    Cited by 127

    Comment by Hank Roberts — 9 Nov 2011 @ 11:13 PM

  56. “David B. Benson asks, Is western Alaska’s snowicane a record breaker?”

    I believe Northern Norway has had a couple of storms of similar intensity, and certainly some Antarctic coasts and Southern Ocean islands, but otherwise it’s pretty close. Definitely that’s a storm that imho qualifies quite high on the NH snow storm list, though I don’t know if there’s a comprehensive list made anywhere. We had a beaufort 10 snow storm last year and that too was pretty nasty.

    Comment by jyyh — 9 Nov 2011 @ 11:42 PM

  57. Hank Roberts @55 — That was both prompt and exactly to the point! Moreover, the variations by locality are also of interest. Good find.

    Comment by David B. Benson — 10 Nov 2011 @ 1:18 AM

  58. The changes twixt die and dice are quick.

    Yes, we’ve been losing our adverbs really quick.

    Comment by ccpo — 10 Nov 2011 @ 3:06 AM

I am confused. Well, I understand that “If your series is n points long, then the probability that the last point is the hottest (and thus a record) is simply 1/n”. This is true for every point if you look at the whole dataset. However, a record (in my opinion) is not independent. When I roll a dice, every throw is independent of the others, and therefore the probability of each throw giving a ’6′ is 1/6. But if I look at the probability of the following throw being a record, this is not independent of the throw before. Example: I throw a 4. Then the probability of the next throw being a new record (meaning a 5 or a 6) is 1/3. If I throw a 1, it is 5/6. Looking at the series afterwards, of course each of these 2 throws has a probability of being the record.

So in a linear climate, I expect that new records tend to go near zero.
Where is my thinking wrong?

    [Response: You have the concepts right, you just slipped up with the phrasing of your last statement there. Replace "linear" with "stationary" (unchanging) climate, and it's correct. Since n increases with time, 1/n goes toward zero. The only thing linear here is the increase in value of n with time--Jim]

    Comment by Thomas — 10 Nov 2011 @ 6:41 AM
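
    A quick Octave check of the 1/n statement in the response above (any continuous iid noise gives the same answer; Gaussian is used here only for convenience):

    % Probability that the last of n iid values is the running maximum: 1/n.
    n = 129; trials = 1e5; hits = 0;
    for k = 1:trials
      x = randn(1, n);                  % stationary climate, no trend
      hits = hits + (x(n) == max(x));   % is the final point the record?
    end
    hits / trials                       % close to 1/129 = 0.0078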

  60. Here’s a start:

Allan and Soden 2006. Atmospheric Warming and the Amplification of Precipitation Extremes.

    Min et al. 2011. Human contribution to more-intense precipitation extremes.

Intensification of Summer Rainfall Variability in the Southeastern United States, Hui Wang et al. 2010

    http://media.miamiherald.com/smedia/2008/08/07/15/rainwarm.source.prod_affiliate.56.pdf

Atmospheric Warming and the Amplification of Precipitation Extremes, Richard P. Allan and Brian J. Soden

The physical basis for increases in precipitation extremes in simulations of 21st-century climate change, Paul A. O’Gorman and Tapio Schneider
http://www.pnas.org/content/106/35/14773.full.pdf+html

Trenberth KE, Dai A, Rasmussen RM, Parsons DB. The changing character of precipitation. Bull Am Met Soc 2003, 84:1205–1217.
http://portal.iri.columbia.edu/~alesall/ouagaCILSS/articles/trenberth_bams2003.pdf

Sun Y, Solomon S, Dai A, Portmann R. How often will it rain? J Clim 2007, 20:4801–4818.
http://journals.ametsoc.org/doi/pdf/10.1175/JCLI3672.1

    http://scholar.google.com/scholar?q=allen+soden+precipitation&hl=en&btnG=Search&as_sdt=1%2C22&as_sdtp=on

    Comment by Pete Dunkelberg — 10 Nov 2011 @ 7:16 AM

  61. To Thomas on 59
IMHO the loaded dice example is a good one for understanding what the 80% means. If you know you have a loaded dice that rolls a 6 five out of 6 times, then when you roll a 6 there is an 80% chance that you would not have gotten it if the dice were not loaded. You are absolutely correct that if there are only 6 discrete values, you have 0% chance of rolling a higher number if you have already rolled a 6.

Which brings us to the 1/n question. If we have n unique numbers, there is a 1/n probability that the last one is the highest, independent of the probability distribution. I do not think this is a good assumption for temperature data though, and I doubt they used this approach in the paper. If we characterize the distribution of temperatures, and the high, we should be able to calculate the probability that a new temperature exceeds that one. Now the paper did not take into account climate variability in the “non-loaded” case, so we are comparing data for a non-variable or stationary climate versus the current one.

    Comment by Norm Hull — 10 Nov 2011 @ 2:02 PM

  62. Your model leaves out the magnitude of new records. The Moscow heat wave wasn’t just a record, but an extreme outlier. The more interesting question is, “Given that a random record heat wave occurred (just weather), how probable was the Moscow heat wave?”

    Comment by RichardC — 10 Nov 2011 @ 2:57 PM

  63. From WunderBlog:
    http://www.wunderground.com/blog/JeffMasters/comment.html?entrynum=1441

    Have intense Northern Hemisphere winter storms increased in number?

The material for this post comes from three sources: the 2007 IPCC report, a 2009 review titled, Extra-tropical cyclones in the present and future climate: a review, and Weather and Climate Extremes in a Changing Climate, a 2009 report from the U.S. Global Change Research Program (USGCRP). An increasing number of intense winter storms in some regions of the Northern Hemisphere over the last few decades of the 20th century was a common theme of many of the studies reviewed. However, the studies used different measures as to what constitutes an “intense” storm, and have some disagreement on which areas of the globe are seeing more intense storms. A 1996 study by Canadian researcher Steven Lambert (Figure 2) found a marked increase in intense wintertime cyclones (central pressure less than 970 mb) in the latter part of the 20th century. Most of this increase occurred in the Pacific Ocean. Other studies (Geng and Sugi, 2001, and Paciorek et al., 2002) found an increase in intense winter storms over both the North Atlantic and North Pacific in the latter part of the 20th century. Benestad and Chen (2006) found an increase in the number of intense storms over the Nordic countries over the period 1955-1994, but no trend in the western parts of the North Atlantic. Gulev et al. (2001) found a small increase in the number of intense North Pacific storms (core pressure below 980 mb), a large increase in the Arctic, but a small decrease in the Atlantic. McCabe et al. (2001) found an increase at both mid-latitudes and high latitudes, particularly in the Arctic. Hirsch et al. (2001) found that the number of intense Nor’easters along the U.S. East Coast (storms with winds > 52 mph) stayed roughly constant at three storms per year over the period 1951 – 1997. Over the period 1900 to 1990, the number of strong cyclones (less than 992 mb) in November and December more than doubled over the Great Lakes of North America (Angel and Isard, 1998). With regards to Europe, Lionello et al. conclude, “the bulk of evidence from recent studies mostly supports, or at least does not contradict, the finding of an attenuation of cyclones over the Mediterranean and an intensification over Northern Europe during the second part of the twentieth century”.

    In summary, the best science we have shows that there has been an increase in the number of intense wintertime extratropical storms in the North Pacific and Arctic in recent decades. Increased wave heights have been observed along the coasts of Oregon and Washington during this period, adding confidence to the finding of increased intense storm activity. The evidence for an observed increase in intense wintertime cyclones in the North Atlantic is uncertain. In particular, intense Nor’easters affecting the Northeast U.S. showed no increase in number over the latter part of the 20th century. This analysis is supported by the fact that wintertime wave heights recorded since the mid-1970s by the three buoys along the central U.S. Atlantic coast have shown little change (Komar and Allan, 2007a,b, 2008). However, even though Nor’easters have not been getting stronger, they have been dropping more precipitation, in the form of both rain and snow. Wintertime top 5% heavy precipitation events (both rain and snow) have increased over the Northeast U.S. in recent decades (Groisman et al., 2004), so Nor’easters have been more of a threat to cause flooding problems and heavy snow events. In all portions of the globe, tracks of extratropical storms have shifted poleward in recent decades, in accordance with global warming theory. Note that the historical data base for strong winter storms is in better shape than the data base we are using to try to detect long-term changes in hurricanes. The Ulbrich et al. (2009) review article states:

    The IPCC AR4 (cf. Trenberth et al. 2007, p. 312) states that the detection of long-term changes in cyclone measures is hampered by incomplete and changing observing systems. Recent studies found, however, a general reliability of results for cyclones in the Northern Hemisphere. There are no sudden shifts in intensities that would indicate inhomogeneities, and also a comparison with cyclone activity estimated from regional surface and radiosonde data (Wang et al. 2006b; Harnik and Chang 2003) confirmed the general reliability of the data”.

    However, the data is not as good in the Southern Hemisphere, so the finding that intense winter storms are also increasing in that hemisphere must be viewed with caution.

    Comment by Pete Dunkelberg — 10 Nov 2011 @ 11:04 PM

  64. “Haughty”? Resort to name-calling is a fairly certain indicator that the commenter really doesn’t have anything substantive to add to an argument.

    I admire Gavin Schmidt, Eric Steig …all the folks at RealClimate. The very fact that they allow relevant debate to continue on this website is to their credit. So, when someone says they are “irritated” that RealClimate allows the debate to continue, because “the debate is over” I’m inclined to defend what I regard as completely appropriate dialog on the many uncertainties about climate science that remain. I also respect that they don’t bother to discuss every disproved conjecture of some folks – why waste time on what has been disproved, when there is so much remaining to discover and understand.

    Comments about the Earth orbiting the sun, etc. seemingly betray a lack of understanding that despite all the advances scientists have made in understanding orbital mechanics, even with Einsteinian relativistic effects included, our models still fail to predict with complete accuracy the Earth’s orbit. In other words, there is opportunity here for further discovery, at least for those willing to look further.

    Comments about “phlogiston” and the like are also disappointing. There is still a great deal we do not understand about the nature of space, electromagnetic radiation, gravity, the strong force, and the weak force. We don’t understand mass and how it comes about, nor why quarks seemingly have the masses they do, or why their masses don’t add up to the measured masses of protons and neutrons.
You want certainty? Sorry, you will be disappointed. Even though we may someday be able to see beyond the veil of Heisenberg’s uncertainty principle, uncertainty will remain.

    Comment by Geno Canto del Halcon — 11 Nov 2011 @ 12:49 PM

  65. Extreme? About 1,000 people died in Southeast Asia floods: UN.

    Comment by Pete Dunkelberg — 11 Nov 2011 @ 1:41 PM

  66. #64,
    “…the many uncertainties about climate science that remain…”

    None of which undermine the central points:
1. The debate over the reality of the post-1975 warming is over.
2. The debate over the primary human role in most/all (I say ‘all’) of this warming is over.

Because CO2 is a primary driver of the post-1975 warming, it must have had a role in the pre-1975 warming; the physics of the matter didn’t suddenly apply after 1975 and not before. So while claimed solar correlations prior to 1975 may well be correct, there was a background of warming due to CO2 which has been increasing since the late 1800s.

    Yes, there are issues still up for discussion – that’s why I read the science as an amateur. Were there no issues for discussion it wouldn’t be an interesting branch of science.

    But the science is settled on the core issues that are relevant to the question: Is anthropogenic global warming (AGW) a problem that we need to do something about? And the answer is a clear, unequivocal, ‘Yes.’

    The valid question is what to do. Key to deciding that is what impacts we will face, since severity of impacts is a crucial consideration in any cost-benefit decision. To say anything of worth about that I need to read more: I had thought that whilst the AGW signal emerged from noise around 1975 the signal of AGW related weather events was still within the noise. I now suspect it is starting to emerge from the noise of natural variability in key regions and key indices.

    Comment by Chris R — 11 Nov 2011 @ 4:12 PM

  67. Geno Canto del Halcon,
    You evidently do not understand scientific uncertainty. It is knowing the uncertainties that allows us to proclaim with high confidence that a satellite that we slingshot off of half a dozen celestial bodies will arrive at its intended target on a day certain. That is certainly good enough for most purposes.

    Likewise, we can state with near certainty that adding CO2 to the atmosphere will warm the planet–and with high confidence about how much.

    We have reached the point where it is no longer profitable to argue about whether CO2 is a greenhouse gas. It is. And once you have 97% of experts agreeing on anything – even if it is where to go for dinner – you can pretty much take that to the bank.

    If you require certainty, may I direct you to the theology department.

    Comment by Ray Ladbury — 11 Nov 2011 @ 5:36 PM

  68. Will RealClimate have a post on Hansen’s new paper where Hansen shows that the area of the Earth with a greater than 3SD heat anomaly has increased tremendously? I would be very interested in what scientists think of this paper.

    If the 3SD anomaly was 0.5% of the Earth in the 1960s and is 10% now, does that mean that 95% of the anomaly can be attributed to AGW? Can a % of the 3SD anomaly be attributed to AGW in a scientific way?
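
    A quick back-of-envelope sketch of the arithmetic behind that question (assuming the quoted percentages; Python used purely for illustration, not taken from Hansen’s paper):

    # Naive "fraction of attributable risk" arithmetic: FAR = 1 - p_base / p_now.
    # The 0.5% and 10% figures are the ones quoted in the comment above.
    p_base = 0.005   # fraction of Earth beyond +3 SD, 1960s baseline
    p_now = 0.10     # fraction beyond +3 SD in recent years
    far = 1.0 - p_base / p_now
    print(f"naive attributable fraction: {far:.0%}")   # prints 95%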

    Thank you for your informative blog.

    Comment by Michael Sweet — 11 Nov 2011 @ 6:52 PM

  69. hi guys… long time no post…

    whenever it’s cold where i live, as with that crazy october eastern snow storm this year, there’s someone who says, “so… how’s that global warming theory working out? hahaha…”

    is there a website that would show all the high and low records set GLOBALLY on a given day? also, is there a site showing the “running” or current global average temperature (and how it compares to “normal”)? i’d like to be able to illustrate how just because it’s cold here, it’s not cold GLOBALLY.

    Comment by Walter Crain — 11 Nov 2011 @ 11:28 PM

    I second Mr. Sweet’s call for a discussion of the Hansen paper. I really like Figure 4, which summarizes well the evolution of the temperature anomaly distribution over the decades. I would really like to see the same graphs for precipitation, humidity and pressure.

    sidd

    Comment by sidd — 11 Nov 2011 @ 11:39 PM

  71. I think that this is the new paper from Hansen about climate extremes.

    “Climate Variability and Climate Change: The New Climate Dice” Hansen/Sato/Ruedy 10/11/11
    PDF here

    Comment by Chris R — 12 Nov 2011 @ 9:58 AM

  72. Ray Ladbury says:
    “…. We have reached the point where it is no longer profitable to argue about whether CO2 is a greenhouse gas.”

    Except, of course, for them as is profiting from the uncertainty and delay. Cough.

    Comment by Hank Roberts — 12 Nov 2011 @ 10:31 AM

    You guys from RC never cease to amaze me.
    You even wrote a paper about the July 2010 heatwave. It was a blocking high, and what happens in cases like that? On one end you get unprecedented heat, and unprecedented cold on the other. It’s not that hard to work out.
    http://www.esrl.noaa.gov/psd/csi/events/2010/russianheatwave/images/Global_sfcT_July2010.gif

    [Response: And so where exactly is the unprecedented cold you speak of?--Jim]

    Comment by Vlasta — 12 Nov 2011 @ 4:56 PM

  74. Vlasta,
    No, a blocking event will get you high temperatures – not necessarily a record – and it certainly won’t get you a 3.5 sigma event as we saw in Moscow. If you learned some math, perhaps you wouldn’t be continually amazed.

    Comment by Ray Ladbury — 12 Nov 2011 @ 8:06 PM

  75. Thanks to RC and James Hansen; http://www.columbia.edu/~jeh1/mailings/2011/20111110_NewClimateDice.pdf
    I now know why the price of pecans has gone up.

    Comment by Edward Greisch — 12 Nov 2011 @ 10:35 PM

  76. Ray thanks for reply .
    I don’t dispute that it was a 100-year event, but if you spent so much time writing your paper you could have mentioned that Siberia was 8 deg below normal.
    It’s pretty obvious the blocking high was 1500 km east of Moscow, and on the eastern side of the high, cold air from the Barents Sea was drawn into Siberia.

    http://earthobservatory.nasa.gov/IOTD/view.php?id=45069

    Comment by Vlasta — 12 Nov 2011 @ 10:55 PM

  78. Vlasta @76 — Based on historical records, it was more like a 1000 year event.

    Comment by David B. Benson — 13 Nov 2011 @ 1:53 AM

  79. I read that Hansen paper yesterday and blogged on it last night – the implications are very disturbing. I’d go so far as to suggest that if you don’t find it unsettling, you don’t understand it.

    The meteorological conditions leading to the 2010 Russia event are not relevant to the paper’s methodology or findings. Just as the fact that fires have happened naturally before humans would be irrelevant in a trial for arson.

    Figure 2 of Barriopedro et al puts the 2003 and 2010 European Heatwaves into the same perspective as Hansen 2011. That figure also shows the shift Hansen finds.

    The bottom line is that Hansen et al 2010, Rahmstorf & Coumou 2011, and Barriopedro et al 2011 all* support the contention that the 2010 Russian Heatwave event was very unlikely without global warming – a warming driven by human activity. Dr Rob Carver of Weather Underground considers the 2010 Russian Heatwave to be a 1000 to 15,000 year event assuming a stationary climate – see here for detail and caveats.

    *Paywall-free links to all those papers here, bottom of page.

    Comment by Chris R — 13 Nov 2011 @ 4:47 AM

  80. I contributed this post on much the same topic to Skeptical Science last year.

    Though the analysis is much more crude, the results seem to agree. I find that pleasing :)

    http://www.skepticalscience.com/Maximum-and-minimum-monthly-records-in-global-temperature-databases.html

    Comment by Toby — 13 Nov 2011 @ 8:58 AM

  81. Vlasta:
    Examining figure 6 of Hansen’s paper, the cooling in Siberia you mention was a 1 sigma event, hardly unexpected. In addition it was much smaller in area than the Moscow 3.5 sigma heating. Please provide a reference for your extraordinary claim that the Siberian cold was unusual.

    Comment by Michael Sweet — 13 Nov 2011 @ 9:33 AM

  82. Chris R (#79),
    FWIW, I reached the same chilling conclusion you blogged about when I read the Hansen paper. Two things struck me: I don’t know that this has been peer-reviewed, and you have to have a moderate understanding of math/stats to appreciate what it is telling us.

    Frankly, it is so simple, I’d like to see critics try to poke holes in it. So, the attacks on it will come in the form of tangential issues, or from those not skilled at math. I heard once that any idiot can make something complicated; it takes genius to make something simple. There is beauty in this one’s simplicity.

    I think the most important trick this paper accomplishes is to convey that, while it is true that you can not say much about any isolated event, these events are not isolated.

    Comment by Chris G — 13 Nov 2011 @ 11:25 AM

  83. Edward Greisch (75),

    Thank you very much for the link to that paper. Not even half way through, but this particular paragraph really stands out:

    A remarkable feature of Figure 3 is the large brown area (anomaly > 3 σ), which covered between 3% and 6% of the world in 2003-2008, and between 6% and 13% in the past three years. If temperature anomalies were normally distributed, and if anomalies were similar to those of 1951-1980, we would expect the brown area to cover only 0.1-0.2% of the planet.

    Whenever you bring up weather extremes such as the heat wave in Russia or the flooding in Australia there is a pretty good chance that the “skeptics” are going to say something to the effect, “Well, during any given year there is always extreme weather somewhere, highly unusual for one location or another, but if you were to look at the globe as a whole it really wouldn’t be that unusual.” However, that particular paragraph really puts some numbers on it.

    Stated a little less technically, as you might put it in conversation or a letter to the editor, “There is the warming trend, which is bad enough. But on top of this trend is a variability. That part of the world that is experiencing extremely unusual temperatures, in addition to the trend, has increased in size roughly by a factor of 60 over what it was in the 1950s, 60s and 70s.” Then if someone asks what you mean by “highly unusual” you can say something like, “Summers with heat waves, for example, that would normally occur only once every three hundred years.”

    Comment by Timothy Chase — 13 Nov 2011 @ 1:35 PM

    When the evening news begins with “Today, the debate over Global Warming…”, that’s a lie. There is no debate over Global Warming.
    There is a debate over ‘models’, math, probabilities, statistics… but not over Global Warming. The debate is over refinements.

    No debate over CO2 in the atmosphere.
    (“The Keeling Curve provided the first clear evidence that carbon dioxide was accumulating in the atmosphere as the result of humanity’s use of fossil fuels. It turned speculations about increasing CO2 from theory into fact. Over time, it served to anchor other aspects of the science of global warming.”)

    No debate over temperature rise.
    (“We see a global warming trend that is very similar to that previously reported by the other groups. The world temperature data has sufficient integrity to be used to determine global temperature trends.” Muller)

    Friedman and Smith use the ‘invisible hand’ as a metaphor proving that ‘Laissez Faire’ generates a more robust system. Unfortunately, as the ‘invisible hand’ put the last propeller on the International Debt Market, the wheels fell off. Laissez Faire (deregulation) produced a catastrophe in the US Banking System, the US Housing Market (read ‘liar loans’), and now, International Sovereign Debt. Laissez Faire produces a house of cards.

    Should not the world be warned that there is an intersection coming up between weather and agriculture? Should not the world be warned that the systems of Supply, Distribution, and Finance are too fragile to withstand the coming onslaught? Should they not be told, “These are facts; these are hypotheses”?

    If the death of a billion people can be reduced to 900 million, is it a worthwhile thing to do?

    Comment by blue7053 — 13 Nov 2011 @ 2:54 PM

  85. Chris R said

    “The bottom line is that Hansen et al 2010, Rahmstorf & Coumou 2011, and Barripedro et al 2011 all* support the contention that the 2010 Russian Heatwave event was very unlikely without global warming – a warming driven by human activity.”

    Which is far from the truth. If hansen et al is correct, then their are only three possibilities – the standard deviation is incorrect, the variation is increasing, or it is random chance. Now the variation is much higher than global warming in these cases. Rahmstorf and Coumou specifically assume variation does not change with warming, and since they did not do attribution nor account for climate variation, Hansen et al. and Barriopedro et al. would lead someone who understands statistics to say they cannot attribute it to warming and not variation. Note they do not do any attribution in the paper, and never look at climate variation.

    Now there is the possibility that global warming is causing an increase in standard deviation or causing these climate variation events such as atmospheric blocking. The main paper that examines this is Dole et al. with the Russian Heat Wave, and it cannot find attribution. Someone needs to do a great deal more work to find causation between global warming and the Russian heat wave if it is to have a scientific basis, and not just be a political argument. If climate change is going to cause things like the Russian heat wave, the mitigation may be to have better HVAC in those areas, which will actually increase GHGs. Most deaths were related to alcohol, or to inhalation exacerbated by lack of air conditioning following the drought-induced wildfires.

    Comment by Norm Hull — 13 Nov 2011 @ 10:58 PM

  86. Re: #85 (Norm Hull)

    You said:

    their [sic] are only three possibilities – the standard deviation is incorrect, the variation is increasing, or it is random chance.

    You left one out: the mean is increasing. That’s the entire point of Rahmstorf & Coumou — to show that even when the standard deviation is larger than the warming, warming still makes such extremes much more likely.

    That’s why the Russian heat wave was so much more likely with global warming. A blocking event makes a heat wave — global warming makes it a once-in-a-thousand-years heat wave. The truly sad part is, it’s no longer once-in-a-thousand-years.

    Comment by tamino — 14 Nov 2011 @ 12:26 AM

  87. tamino,

    If it is 3 sigma greater than the mean, it doesn’t matter if the mean is higher. If you mean that the 3 sigma threshold gets higher, sure. Rahmstorf and Coumou did not even look at what made the heat wave so bad: the length of time, the droughts, etc. For the grid cell, the Russian heat wave had an average July temperature almost 4 standard deviations from the mean; for June, July, August it was 3.

    Comment by Norm Hull — 14 Nov 2011 @ 2:42 AM

  88. Responding to Norm Hull Tamino (86) writes:

    You left one out: the mean is increasing. That’s the entire point of Rahmstorf & Coumou – to show that even when the standard deviation is larger than the warming, warming still makes such extremes much more likely.

    Rahmstorf and Coumou state as much on page 4:

    The stochastic model discussed above assumes that the statistical distribution of temperature anomalies does not change but is merely shifted toward warmer temperatures, which holds for the two datasets we analyzed here (see Methods). In addition, the distribution can also change itself – possibly it could widen in a warming climate, leading to even more extremes (5, 20). Specific physical processes can alter the distribution of extremes. For example, for the European heat wave of 2003, a feedback with soil moisture has been invoked: Once the soil has dried out, surface temperatures go up disproportionately as less heat is converted to latent heat near the ground; in other words, evaporative cooling becomes ineffective (20).

    Although such mechanisms may play an important role and possibly aggravate extremes, it is nevertheless instructive to consider the first-order baseline discussed in this paper, namely, the effect of a simple shift of a random distribution toward warmer temperatures, “all else remaining equal.” Even this simple case demonstrates that large changes in the number of records are expected to arise due to climatic warming.

    They don’t deny that variability may increase, in fact they acknowledge as much. But for the sake of instruction they consider the simpler case where variation remains the same and the baseline moves in order to demonstrate how a little warming may make extremely unlikely events far more likely.

    Norm Hull writes:

    Now there is the possibility that global warming is causing an increase in standard deviation or causing these climate variation events such as atmospheric blocking. The main paper that examines this is Dole et al. with the Russian Heat Wave, and it can not find attribution.

    Trenberth examines the weaknesses of their analysis in an interview by Joe Romm here:

    NOAA: Monster crop-destroying Russian heat wave to be once-in-a-decade event by 2060s (or sooner)
    http://thinkprogress.org/romm/2011/03/14/207683/noaa-russian-heat-wave-trenberth-attribution/

    Regarding their (lack of) attribution, in part Trenberth states:

    The Dole et al paper “Was there a basis for anticipating the 2010 Russian Heat Wave” is superficial and does not come close to answering the question in an appropriate manner. Many statements are not justified and are actually irresponsible. The question itself is ill posed because we never expect to predict such a specific event under any circumstances, but with climate change, the odds of certain kinds of events do change.

    Later he continues:

    The SSTs were exceptionally high in the Indonesian and northern Indian Ocean regions, partly as a consequence of the previous El Nino (that lasted from May 2009 to May 2010), and partly from global warming. The high SSTs fed enhanced moisture into the monsoon, giving rise to the flooding, and helped drive a strong monsoonal circulation that had a direct downward component right over southern Russia.

    The change in atmospheric circulation that made the heatwave so intense wasn’t due solely to a change in variability, a moving of the baseline, or a moderate El Nino. Rather, it is reasonable to conclude that each of these played a part. Each was a factor. And thus certain highly improbable events become much more probable under global warming.

    Comment by Timothy Chase — 14 Nov 2011 @ 3:54 AM

  89. Walter Crain @ 69:

    whenever it’s cold where i live, as with that crazy october eastern snow storm this year, there’s someone who says, “so… how’s that global warming theory working out? hahaha…”

    is there a website that would show all the hi and low records set GLOBALLY on a given day? also, is there a site showing the “running” or current global average temperature (and how it compares to “normal”)? i’d like to be able to illustrate how just because it’s cold here, it’s not cold GLOBALLY.

    Because of the accumulated ocean heat content and CO2 it will not be cool globally for many thousands of years. Perhaps you could teach your friends some amazing natural history, like winter being cooler than summer in the hemisphere that is having winter at the time, or some advanced technical concepts like “average”.

    http://data.giss.nasa.gov/gistemp/maps/

    You can check by month. Pick any month and it is warmer than it used to be. Look for the anomaly at the top right of the map. La Niña years are globally “cool” but they are now warmer than El Niño used to be.

    http://data.giss.nasa.gov/gistemp/

    http://skepticalscience.com/argument.php

    http://capitalclimate.blogspot.com/2010/10/endless-summer-xii-septembers.html

    Comment by Pete Dunkelberg — 14 Nov 2011 @ 7:57 AM

  90. thanks much, pete. i really appreciate your answering.

    that first link you gave is pretty cool – so i can show them the current month. what i’d really like is something for TODAY (or even yesterday). i want to be able to show someone who during a snow storm says, “how’s that global warming going?” that it’s warm pretty much everywhere else today, right now. i don’t want anything nuanced about yearly, monthly averages. i’d like to know the global average for today, and possibly the number of cold vs warm records set yesterday.

    “skepticalscience” is one of my favorite sites to send “skeptics” to. i love how it just knocks down the specious arguments one-by-one.

    Comment by Walter Crain — 14 Nov 2011 @ 10:56 AM

  91. Tamino,
    The probability of the blocking event which caused the Russian heat wave neither increases nor decreases as global temperatures change. The great plains of the US experienced record high temperatures during the historic “dust bowl” in the 1930s, culminating in the summer of 1936. Whether this was a blocking event similar to what occurred in Russia may never be known, but two such events, 74 years apart, seem to be much more common than one in a thousand.
    It is less a question of the standard deviation, as Norm suggested, and more a statement that the distribution is non-Gaussian, resulting in higher probabilities of both record highs and lows than expected in a true Gaussian distribution. The slight warming experienced over the 20th century is unlikely to increase the chances of these types of temperature extremes, as the extremes are several standard deviations higher than the mean, and are tied to weather events rather than temperature averages. The North American record high was set in 1913, hardly a year of record warmth, in general.

    Comment by Dan H. — 14 Nov 2011 @ 2:12 PM

  92. Re: #91 (Dan H.)

    The probability of the blocking event which caused the Russian heat wave neither increases nor decreases as global temperatures change

    As far as I know that’s quite true. But if a blocking event which causes a heat wave does occur, then global warming dramatically increases the probability of its being a record-setting heat wave, even a once-in-a-thousand-years record event. That’s the point of Rahmstorf & Coumou 2011.

    the distribution is non-Gaussian, resulting in higher probabilities of both record highs and lows than expected in a true Gaussian distribution

    Rather than explain your error, I’ll simply suggest that you think about this more carefully.

    Comment by tamino — 14 Nov 2011 @ 3:04 PM

  93. Well, thank god Dan H has set the scientific world straight!

    Who needs research when hand-waving will suffice.

    I mean, “Whether this was a blocking event similar to what occurred in Russia may never be known, but two such events, 74 years apart, seem to be much more common than one in a thousand” … wow. Add in a good Saharan drought and the Russian event will seem downright common!

    Comment by dhogaza — 14 Nov 2011 @ 3:29 PM

  94. #85 Norm Hull,

    “If hansen et al is correct, then their (sic) are only three possibilities – the standard deviation is incorrect [1], the variation is increasing [2], or it is random chance[3].”

    Tamino has already given you one addition – the mean is increasing [4].

    The Hansen paper gives 3 different approaches: using the 1951-80 sigma, a detrended sigma for 1981-2010, and the raw sigma for 1981-2010. All three give the same overall result – with each passing decade since the 1970s the local temperature anomaly distribution shifts to the right (gets warmer). This dispenses with 1 and 3, leaving us with 2 and 4. We can accept 4; that the planet is warming is by now a trite observation.

    However, whilst I’m no mathematician (I’m in Electronics), I’ve noticed something at work and have reproduced it in Excel: a histogram of a timeseries with a trend in it is skewed; it rises more slowly than the original timeseries’ histogram and drops faster. I’ve just done some runs in Excel, using the data analysis pack to make a normally distributed series of random numbers, and for each new set this behaviour holds true. Like I say, I’m no mathematician, but I guess this is due to the rising trend.
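
    For what it’s worth, a strictly linear trend added to symmetric noise should widen and flatten the pooled histogram symmetrically (a Gaussian convolved with a uniform), so it is worth checking how much of the skew survives larger samples. A minimal sketch in Python/NumPy (an assumed stand-in for the Excel runs described above; the seed and 2-sigma trend size are arbitrary):

    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000
    noise = rng.standard_normal(n)                 # stationary N(0,1) series
    trended = noise + np.linspace(0.0, 2.0, n)     # assumed 2-sigma rise overall

    for name, x in (("stationary", noise), ("trended", trended)):
        z = (x - x.mean()) / x.std()
        skew = np.mean(z**3)                       # third standardized moment
        print(f"{name:>10}: sd={x.std():.2f}  skewness={skew:+.3f}")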

    This is not however what Hansen finds, and this is the reason Hansen’s paper alarmed me. In figure 4 all of the curves for all periods, in both summer and winter, approach the x axis at around -3. Yet on the other side there is a positive-going skew that has all of the post-1970 periods meeting the x axis above 3. This suggests to me that what is going on is not simply a matter of the curves being shifted by a linear trend (change in mean) but is also a significant upward shift of variance.

    Comment by Chris R — 14 Nov 2011 @ 3:50 PM

  95. Dan H,

    I thought like you, that the small increase in GAT would not cause a substantial increase in high sigma warm episodes. However the Hansen paper suggests (strongly IMO) that this view is incorrect. See also Barriopedro, 2011, “The Hot Summer of 2010: Redrawing the Temperature Record Map of Europe.”
    http://www.see.ed.ac.uk/~shs/Climate%20change/Climate%20model%20results/Heat%20waves.pdf

    Comment by Chris R — 14 Nov 2011 @ 3:58 PM

  96. Vukcevic,

    Could you expand on the North Atlantic precursor in your graph? Is it related to the NAO, AMO, or some other factor?

    Comment by günal şen — 14 Nov 2011 @ 4:01 PM

  97. Dan H.,
    Yes, I think you explained my objection better than I did. If the odds are being beaten so easily there must be something wrong with the model.

    Timothy Chase,
    If you look at the model this thought experiment is based on, there is simply climate variation. Rahmstorf and Coumou do discuss the possibility of higher variation, but they do not use it in their model. They also never try to model global warming, and only separate low frequency from high frequency data in the local temperature record. Their trend line clearly has local climate variability, and they plainly state local warming is much higher recently than global warming. Hence you can not attribute results from this analysis to global warming.

    As for Climate Progress and Trenberth’s criticism of Dole, I find it quite hollow. Romm also claimed Muller didn’t understand climate science because he dared to criticize some of Al Gore’s exaggerations. Let’s stick to real scientific criticisms and not go off to a political blog.

    Comment by Norm Hull — 14 Nov 2011 @ 4:04 PM

  98. Chris R.
    This is the important part of the paper you linked
    “The enhanced frequency for small to moderate anomalies of 2-3 SDs is mostly accounted for by a shift in mean summer temperatures (compare Fig. 4 with fig. S18). However, the future probabilities of ‘mega-heatwaves’ with SDs similar to 2003 and 2010 are substantially amplified by enhanced variability. Particularly in WE, variability has been suggested to increase at interannual and intraseasonal time-scales (1, 2) as a result of increased land-atmosphere coupling (28) and changes in the surface energy and water budget (2, 29). Models indicate that the structure of circulation anomalies associated with ‘mega-heatwaves’ remains essentially unchanged in the future (SOM text).”

    In other words, smaller anomalies are accounted for by warming, but not these heat waves, the big temperature anomalies. They suggest increases in variability, but also suggest that another heatwave of this magnitude is unlikely in the next 50 years. Western Russia had not had a big anomaly since the 1930s. There is coupling, so we can look back to the 1930s and find many anomalies worldwide.

    Comment by Norm Hull — 14 Nov 2011 @ 4:39 PM

  99. Norm Hull:

    Their trend line clearly has local climate variability, and they plainly state local warming is much higher recently than global warming. Hence you can not attribute results from this analysis to global warming.

    For starters, global warming includes the 3/4 of the globe that is covered by the earth’s oceans. Warming over land, in particular the interior of large continents (as opposed to places like the UK or the Pacific Northwest, which have maritime climates), is more pronounced.

    You seem to think global warming should be uniform, and that’s just not true.

    Comment by dhogaza — 14 Nov 2011 @ 4:44 PM

    Norm Hull wrote: “Romm also claimed Muller didn’t understand climate science because he dared to criticize some of Al Gore’s exaggerations. Let’s stick to real scientific criticisms and not go off to a political blog.”

    There are various Internet conventions, from emoticons to acronyms, which attempt to use plain text to convey a guffaw. I do not find any of them adequate.

    Comment by SecularAnimist — 14 Nov 2011 @ 5:13 PM

  101. I am currently (re)reading two books by William Feller:

    An Introduction to Probability Theory and its Applications,
    Volumes 1 (3rd edition, 1968) & 2 (2nd edition, 1971),
    John Wiley & Sons, Inc.

    which are particularly strong on random walks.

    Regarding Moscow (Moscow Oblast) and more widely the Eastern Europe/Central Asia temperature excursion of 2010, first note that records are considered to be adequate to determine this was an unprecedented event in recorded history (ca. 1000 years). Now treat the temperature changes from one August to the next as a random walk starting from the average temperature. Assuming no trend, we have from Feller 1:III(7.3) that … roughly speaking, the waiting time for the first passage through r [positive increments] increases with the square of r; a first passage after epoch (9/4)r^2 has a probability close to (1/2).

    If a trend is included, consider the Fokker-Planck diffusion equation of 1:XIV(6.12), which diffuses probabilities for random walks with an advection term. While this directs us to 2:X for the continuous case, we can see enough here to replace r by r-ct due to the advective drift.

    While there is an important lesson here, I suppose one might care to check whether treating the temperatures as a random walk (with or without drift) otherwise seems to accord with the (all too short) temperature record.
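
    A quick simulation makes the quoted claim concrete. A minimal sketch in Python/NumPy (an assumed illustration, not Feller’s derivation): for a simple +/-1 random walk, the fraction of runs in which level r has not been reached within (9/4)r^2 steps should come out near 1/2.

    import numpy as np

    rng = np.random.default_rng(0)
    r, trials = 10, 4000
    horizon = int(2.25 * r * r)          # (9/4) r^2 steps
    not_yet = 0
    for _ in range(trials):
        path = np.cumsum(rng.choice((-1, 1), size=horizon))
        if path.max() < r:               # level r not reached within the horizon
            not_yet += 1
    print(f"P(first passage after (9/4) r^2) ~ {not_yet / trials:.2f} (Feller: ~1/2)")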

    [The reCAPTCHA oracle intones “Statistics taitivi”.]

    Comment by David B. Benson — 14 Nov 2011 @ 7:52 PM

    Dan H., Chris R.
    The increase in the probability of high sigma events will be strongly affected by a change in the mean. Just look at how quickly the Gaussian distribution drops off as sigma increases. So while the increase in the number of one sigma events won’t be very spectacular, the increase in (formerly) five sigma events will be overwhelming.
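
    To put rough numbers on that, a minimal sketch (assumed here: a normal distribution whose mean shifts up by 1 sigma while its width stays fixed; standard-library Python only):

    import math

    def tail(z):
        """P(N(0,1) > z), the upper tail of a standard normal."""
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    for k in (1, 2, 3, 5):
        p_old, p_new = tail(k), tail(k - 1.0)   # same threshold, after a 1-sigma shift
        print(f"{k}-sigma events: {p_old:.2e} -> {p_new:.2e} (x{p_new / p_old:.0f})")

    The 1-sigma exceedances roughly triple, while the 5-sigma exceedances go up by a factor of about a hundred: exactly the asymmetry being described.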

    Comment by Thomas — 14 Nov 2011 @ 9:41 PM

  103. Thanks for this post.

    Various denier blogs seem to be hailing the impending IPCC Special Report on Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation as some sort of evidence that climate change won’t be damaging (or doesn’t exist). As per normal, they seem to equate uncertainty with ‘not happening’.

    It would be useful to have an article here on RealClimate discussing the report and highlighting the important messages in it after it’s been released.

    Richard Black has an article here that to my mind doesn’t further understanding a whole lot – not because it’s wrong but because of the slant he takes (which isn’t necessarily wrong either, but is written in a way that makes it easier for deniers to twist and run with the wrong message).
    http://www.bbc.co.uk/news/science-environment-15698183

    (Captcha = mischief)

    Comment by Sou — 14 Nov 2011 @ 10:03 PM

  104. Norm Hull (97) wrote:

    Their trend line clearly has local climate variability, and they plainly state local warming is much higher recently than global warming. Hence you can not attribute results from this analysis to global warming.

    Norm, when we talk about global warming we are referring to the global average temperature anomaly as it changes over time. There is no rule that says it has to be uniform, either with respect to time or place. As dhogaza points out (99), land warms more rapidly than ocean, given the ocean’s greater thermal inertia; the Northern Hemisphere warms more rapidly than the Southern Hemisphere, given its greater landmass; the continental interiors warm more rapidly than the coasts, given their distance from the ocean, with the warming then amplified by their falling relative humidity; the poles warm more rapidly than the equatorial regions, given a variety of mechanisms collectively referred to as “polar amplification”; nights more rapidly than days, given that an enhanced greenhouse effect is driving global warming rather than enhanced solar insolation; winters more rapidly than summers, for the same reason; and higher latitudes more rapidly than lower latitudes.

    For the last of these please see Tamino’s post:

    Hit You Where You Live
    Tamino, Jan 11, 2008
    linked to at http://www.skepticalscience.com/open_mind_archive_index.html

    … and note that Moscow is at a latitude of 55.45 N.

    Norm Hull (97) wrote:

    As for Climate Progress and Trenberth’s criticism of Dole, I find it quite hollow. Romm also claimed Muller didn’t understand climate science because he dared to criticize some of Al Gore’s exaggerations. Let’s stick to real scientific criticisms and not go off to a political blog.

    Your estimate of Joseph Romm and mine no doubt differ a great deal, but setting that aside, I wasn’t quoting Joe Romm. As I made clear (88), I was quoting Trenberth, one of the foremost experts in climatology, who was providing the very sort of physical explanation you were calling for. Not that I thought it was needed or that you would be particularly interested, and your attempt to conflate the two individuals would seem to prove me right. But for the sake of those who might be, I thought it worthwhile to provide a link.

    Comment by Timothy Chase — 14 Nov 2011 @ 10:27 PM

  105. Norm Hull:

    Climate Progress and Trenberth’s criticism of Dole, I find it quite hollow. Romm also claimed Muller didn’t understand climate science because he dared to criticize some of Al Gore’s exaggerations. Let’s stick to real scientific criticisms and not go off to a political blog.

    Romm’s an MIT PhD physicist so I doubt his criticisms of Muller are simply because Muller criticized some of Al Gore’s “exaggerations”, particularly since the latter don’t really exist …

    You also seem to be implying that Trenberth’s guilty of making political, not scientific, statements … but what is your proof?

    Whatever, you’re not exactly raising the banner high for objective scientific discourse yerself.

    Just sayin’

    Comment by dhogaza — 15 Nov 2011 @ 12:07 AM

  106. Thomas,
    The probability of high sigma events does not change appreciably as the mean changes, precisely because they are high sigma.

    [Response: No, no, NO! The probability of high sigma events changes rapidly as the mean of the distribution shifts (assuming a non-uniform distribution). Think about it.--Jim]

    The event which unfolded in Russia has a low probability of occurrence (call it the 2.5% upper tail). However, once the event occurs, the actual temperature recorded is not part of a Gaussian distribution. The blocking was stronger than previously, and the mercury broke the record set in 1936. That is two events, 74 years apart, which fit within the upper tail. The event falls within the tail, not the recorded temperature.
    Look at the temperature history. Moscow broke the record high temperature last year, which was previously set in 1936. Many U.S. cities set high temperatures in 1936 which still stand. The probability of the weather conditions occurring, as in 2010 or 1936, remains the same.

    [Response: This stuff varies from wrong to irrelevant.--Jim]

    Comment by Dan H. — 15 Nov 2011 @ 9:11 AM

  107. “The probability of high sigma events changes rapidly as the mean of the distribution shifts (assuming a non-uniform distribution).”

    Did you mean the probability of what used to be high sigma events on one side of the distribution?

    [Response: Yes]

    Comment by Pete Dunkelberg — 15 Nov 2011 @ 11:26 AM

  108. Individual regional events are always going to be difficult to relate to global climate change.

    Surely the record to concentrate on is the global temperature record.

    Now if the assumption is that the Earth is warming at 0.2c per decade, and a new record is not set within 8 years of the past record, then the assumption will be disproven at the 95% level.

    Looking at Hadcrut this level was passed in 2006, and there is no chance of 2011 breaking the record; I would be willing to bet a very large amount that 2012 won’t do so either, given the current ENSO state.

    If we look at the probability of the current record being broken by a significant amount of at least 0.1c then approx 18 years would have to pass to disprove the assumption at the 95% level.

    None of the global databases have seen such a new record since 1998. If no new significant record is set by 2016, will scientists have to agree that the current assumption is wrong?

    [Response: No, they will have to agree that your assumptions are wrong. Even if I assume your calculations are right, you're wrongly assuming absolute linearity of T change. The paper also dealt with the setting of global T records. Both of these you would have recognized had you read and understood the paper.--Jim]

    Alan

    Comment by Alan Millar — 15 Nov 2011 @ 12:21 PM

  109. For Timothy Chase on 104,
    I was not saying that global warming would be uneven. I was saying you need to do some work to claim that the warming in the last 30 years was all global warming, but that warming and cooling for the 100 years before that should be discounted. I don’t think, looking at the clear temperature variability, that you can just claim all of Moscow’s warming in the last 30 years is global warming. Was the cooling in Moscow also caused entirely by global warming? I’m not arguing about all of it, only that someone needs to do an attribution of how much. NOAA seems to have some clear GISS data.

    [Response: That's not what the paper is claiming, or is even about really.]

    As to 105 and your contention, I don’t think any of Trenberth’s criticisms actually stand up to any real scrutiny. You will note he doesn’t find anything that is disputed by science. The author of this article did bring up some points about Dole that are scientific in nature, and NOAA quickly started examining them. I would prefer scientific arguments, and not to argue about whether Climate Progress or WUWT stick mainly to science. Trenberth’s main opinion on attribution studies is that if they do not find attribution they are wrong. Trenberth believes everything in weather is attributable to man, and does not want to check the data, because if it disagrees it must be a type 2 error. He has been quite vocal about this. I would take any of his scientific problems with the study seriously, but he doesn’t seem to have any real disputes.

    [Response: Nonsense. Read the paper and contribute something worthwhile or have your off the cuff nonsense deleted--Jim]

    Comment by Norm Hull — 15 Nov 2011 @ 1:26 PM

  110. Chris G,

    Sorry I missed your post. I’d like to say I’m glad to see someone else is alarmed by the Hansen paper, although given the situation that somehow doesn’t seem appropriate.

    Norm Hull,

    Why have you shifted the subject to Barriopedro et al????

    That said, the quote you post opens with “The enhanced frequency for small to moderate anomalies of 2-3 SDs is mostly accounted for by a shift in mean summer temperatures”. In view of which you might like to add this to your reading list: “Greenhouse forcing outweighs decreasing solar radiation driving rapid temperature rise over land”, Philipona & Dürr 2004. It complements the Barriopedro paper rather well.

    Now back to the matter at hand: The paper by Hansen, Ruedy and Sato (Hansen’s paper). You have failed to address the points made in my post #94.

    1) Do you accept that the SD is not incorrect and that Hansen’s observations are not random chance?

    2) Do you accept that the increase in areas covered by +2 and +3 sigma temperature excursions (figure 4) is a real increase?

    Comment by Chris R — 15 Nov 2011 @ 2:07 PM

  111. Jim says:
    15 Nov 2011 at 12:21 PM
    [Response: No, they will have to agree that your assumptions are wrong. Even if I assume your calculations are right, you're wrongly assuming absolute linearity of T change. The paper also dealt with the setting of global T records. Both of these you would have recognized had you read and understood the paper.--Jim]

    The 0.2c per decade is certainly not my assumption, as I see nothing in the temperature records of the 20th and 21st centuries which remotely suggests that the Earth is warming at this rate, unless you look at short cherry-picked periods.

    The probability calculations, of course, do not assume a near linear yearly increase in temperatures as I am fairly sure that if that was the case we would get a new record far more often!

    The probability of new records being set is a good way of checking assumptions, as it does not require us to wait for 30 or 40 years to check these temperature assumptions.

    After all, to get from one place to another we have to go on a journey. I am sure no one has suggested that we are going to wake up on 1/1/2100 and find that the temperatures rose 2.0c overnight!

    Assuming a linear increase in CO2, the first decades of the 21st century should have the strongest forcing effect, making new records more likely than in the last decades of the century. I know that CO2 seems to be increasing slightly faster than linearly, but this just makes it even more probable that a new record should be set in the timescales I have given.

    Science has to use probabilities in assessing assumptions.

    Of course, just because an assumption has failed at the 95% level doesn’t mean it is definitely false, but that is the level most scientists have set that would require assumptions to be revisited and very strong evidence to be produced if it was not to be altered.

    I don’t think many papers are being quoted out there that have failed at the 95% level.

    [Response: What?--Jim]

    Alan

    Comment by Alan Millar — 15 Nov 2011 @ 2:44 PM

  112. Dan H., “The probability of high sigma events does not change appreciably as the mean changes, precisely because they are high sigma.”

    Utter crap. The mean and standard deviation are uncorrelated only for the normal distribution. If the data are not normal – and far from the mean they likely are not – you simply can’t make such a claim.

    Comment by Ray Ladbury — 15 Nov 2011 @ 2:50 PM

  113. Chris R.
    I was not trying to shift the subject; I was trying to address one of your points. I apologize if that did not come across correctly.

    In answer to your two points
    1) Do you accept that the SD is not incorrect and that Hansen’s observations are not random chance?
    I do not think that Hansen’s observations are random chance, but neither do I think that all of these events are properly ascribed to 3 SD independent probabilities. If events are individual and properly modeled, the probability of any single one exceeding 3SD in the positive direction is slightly more than 0.1%. If there are a great many exceeding this threshold, then the model needs to be corrected. There may be something happening that causes a temporary directional shift, or increased variation, or coupling of these events. I would like to see new models, especially ones of extreme drought/heat and flood/heat, analyzed. Stefan Rahmstorf and Dim Coumou are testing a different model for the trend line, which may alter probabilities.

    2) Do you accept that the increase in areas covered +2 and +3 sigma temperature excursions (figure 4) is a real increase?
    I do. This doesn’t mean I have examined the data, but I assume it’s correct, and the authors are good enough scientists to correct it if they have anything wrong.

    Comment by Norm Hull — 15 Nov 2011 @ 3:06 PM

  114. Alan Millar: for entertainment purposes, can you please tell us how you calculate that “95% level” and how you assess that climate predictions have “failed” it?

    Comment by toto — 15 Nov 2011 @ 3:54 PM

  115. Re: 106, 107, 112 (change in mean and high-sigma events)

    I think that all of you arguing with Dan H. are forgetting the traditional goal-post shift that skeptics use. As soon as the mean goes up, the goal-post for “high-sigma events” goes up, so normal gets redefined and nothing unusual is happening. “Extraordinary” events just don’t happen, because when they do they get re-incorporated into the definition of normal, and now that they are part of normal, they can’t possibly be abnormal, can they?

    Comment by Bob Loblaw — 15 Nov 2011 @ 4:38 PM

  116. > the first decades of the 21st century should have the strongest forcing
    Got that backwards, doesn’t he?

    Comment by Hank Roberts — 15 Nov 2011 @ 5:37 PM

  117. Several commenting here recently seem to have ignored the lesson from my earlier comment.

    Comment by David B. Benson — 15 Nov 2011 @ 5:49 PM

  118. 114 toto says:
    15 Nov 2011 at 3:54 PM

    “Alan Millar: for entertainment purposes, can you please tell us how you calculate that “95% level” and how you assess that climate predictions have “failed” it?”

    The figures come from the IPCC models themselves.

    Checking the average waiting time for a new record across all the models gives approximately 8 years for any new record, and approximately 18 years for a significant new record of at least 0.1c, at the 95% confidence level.

    Of course, if the Earth has a very high natural variability, outside that contained in the models, then you could expect to wait longer.

    [Response: Are you seriously claiming *you* calculated this? The calculations were presented in our 2008 post. Please do not play games. - gavin]

    However a very high natural variability would cause significant problems for the models, their accuracy, and predictive ability.

    116 Hank Roberts says:
    15 Nov 2011 at 5:37 PM

    “the first decades of the 21st century should have the strongest forcing
    Got that backwards, doesn’t he?”

    CO2’s effect is logarithmic in nature, and it is clear therefore that, with a linear increase in CO2, the forcing effect is stronger in the first decades than the last. Even a small exponential increase (e.g. a 20ppm increase in the first decade, 21ppm in the 2nd, 22ppm in the 3rd, etc.) would produce a weaker forcing effect in the last decade compared to the first.

    A larger exponential increase would indeed produce a stronger forcing effect as time passed.
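
    The arithmetic in that claim is easy to check with the common simplified forcing expression F = 5.35 ln(C/C0) W/m^2 (Myhre et al. 1998); the 390 ppm starting point and fixed 20 ppm/decade increments below are assumptions for illustration only:

    import math

    c = 390.0                                   # assumed starting concentration, ppm
    for decade in range(1, 6):
        df = 5.35 * math.log((c + 20.0) / c)    # forcing added by +20 ppm this decade
        print(f"decade {decade}: {c:.0f} -> {c + 20:.0f} ppm, +{df:.3f} W/m^2")
        c += 20.0

    Under strictly linear CO2 growth the per-decade increments do decline, though only slowly; whether that matters for record-setting is the separate question addressed in the responses below.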

    Alan

    Comment by Alan Millar — 15 Nov 2011 @ 7:22 PM

  119. > millar … co2 forcing

    We were talking about observable effects on Earth’s climate.
    You’re talking now about CO2 alone, considered separate from everything else?

    Differently placed goalpost, I think.

    The effects — climate change — are not going to emerge clearly from natural variability in the early decades.

    Comment by Hank Roberts — 15 Nov 2011 @ 8:02 PM

  120. Re 108 and 118,
    Gavin, do you think anything has changed to alter the probabilities in your models (95% confidence of a new record of at least 0.1 degree C above the last record temperature)? I don’t take Jim’s comment to mean that understanding this paper would change your projections, but I wanted to make sure I wasn’t missing something.

    [Response: The calculations done on the IPCC models were simply a counting exercise - no complicated statistics used. But there are some issues that are worth bearing in mind when applying this to the real world. First, the global temperature products are not exactly the same as the global mean temperature anomaly in the models (this is particularly so for HadCRUT3 given the known bias in ignoring the Arctic). Second, both 2005 and 2010 were records in the GISTEMP and NCDC products (I think). Third, the models did not all use future estimates of the solar forcing for instance, and for those that did, they didn't get the length and depth of the last solar minimum right. This might make some small difference (as would better estimates for the internal variability, aerosol forcing etc.). But as an order of magnitude estimate, I think the numbers are still pretty reasonable. - gavin]
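
    By way of illustration, such a counting exercise might look like the following minimal sketch (an assumed stand-in, not the code actually run on the model archive; the 0.02 C/yr trend and 0.1 C interannual noise are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(7)
    runs, years = 2000, 100
    waits = []
    for _ in range(runs):
        x = 0.02 * np.arange(years) + rng.normal(0.0, 0.1, years)
        best, last_record = x[0], 0
        for t in range(1, years):
            if x[t] > best:                     # a new annual record
                waits.append(t - last_record)
                best, last_record = x[t], t
    print("95th percentile wait for a new record:",
          np.percentile(waits, 95), "years")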

    Comment by Norm Hull — 15 Nov 2011 @ 8:19 PM

  121. 119 Hank Roberts says:
    15 Nov 2011 at 8:02 PM

    “millar … co2 forcing

    The effects — climate change — are not going to emerge clearly from natural variability in the early decades.”

    Hank, I kind of agree with you.

    This was the issue that first caused me to doubt the AGW meme.

    I always used to accept the fact of AGW. I assumed the consensus position must be true because my experience in science did not lead me to think that significant obvious flaws could be overlooked.

    However, when the prognosis for Mankind’s future became quite apocalyptic I decided to look closer at the facts. That is when, almost immediately, I saw the most obvious flaw in the argument.

    The predictions for the future were largely based on climate models and it was said that we could rely on these models for the future because they were accurate on the backcast. Yes, when I looked they were quite accurate against the temperature record since 1945.

    That is when my doubts arose!

    They were too accurate against the temperature record!

    Why?

    Because they are not measuring the same thing!

    The GCMs measure and output the climate signal only. The weather signal and cycles, such as ENSO, are averaged out. The temperature record is the climate signal PLUS the weather signal. We know, as you have recognised Hank, that the weather signal can swamp the climate signal over short periods even as long as a decade. The climate signal will however re-emerge in the medium to long term.

    So I found that, on the backcast, the GCMs matched the decadal temperature directions but failed to do so on the forecast. Big red flag for me!

    Not because they have failed to match the direction of the last decade – I would actually expect that to happen fairly often with a reasonably high natural variability – but because they matched so well on the backcast. Two things matching so well when they are not measuring the same thing?

    Hmmm…..

    Alan

    Comment by Alan Millar — 15 Nov 2011 @ 10:03 PM

  122. Bob,
    I think you are missing the point. Ray and Pete are not arguing that these events will become normal in a warmer world, but that they will become more prevalent. My argument was that their probability will not change, because they are well beyond the 2 sigma range. Your “goal post” shift argument does not adequately reflect our discussion here. Personally, I do not feel that Ray, Pete, or anyone else debating the occurrence of these types of events is arguing that they will become normal and, therefore, cease to be “extraordinary.”

    Comment by Dan H. — 15 Nov 2011 @ 10:57 PM

  123. Dan H.

    I think you are missing the point. Ray and Pete are not arguing that these events will become normal in a warmer world, but that they will become more prevalent. My argument was that their probability will not change, because they are well beyond the 2 sigma range.

    That’s crud

    Put 1/millionth of a point on a die and the odds of rolling 1 to 6 will change.

    Comment by dhogaza — 16 Nov 2011 @ 12:04 AM

  124. Dan H. @122 — Let N[m,s] denote the normal probability density with mean m and standard deviation s. Consider (positive) event e such that for N[0,s], p(e) is at the 3.5s level. What is the probability of e for N[1,s]? N[2,s]? N[3,s]?

    Comment by David B. Benson — 16 Nov 2011 @ 12:07 AM

  125. Alan Millar

    The GCMs measure and output the climate signal only. The weather signal and cycles, such as ENSO, are averaged out.

    No, individual model runs show such variation, ENSO-like events, etc.

    Model *ensembles* don’t, just as aggregating an ensemble of coin flips doesn’t (while records of each individual series shows variability such as 3 or 4 heads in a row).

    Your skepticism is based on a misunderstanding as simple as this?

    The mind boggles.

    [Response: Indeed, GCMs also simulate weather, albeit at a coarser scale (though increasingly at a fine scale). And indeed, even model ensembles tend to capture ENSO quite well, since it is pretty fundamental to the physics of the system.--eric]

    Comment by dhogaza — 16 Nov 2011 @ 12:09 AM

  126. Alan Millar…

    Besides the fact that your entire line of thinking is flawed, anyway.

    Airliner design is mostly based on models that aren’t really different than GCMs. The scale is smaller (individual airplanes are smaller than the earth – I’m sure you’ll agree with this). But the events being modeled are similarly finer-grained.

    No model is able to accurately plot the path of an individual molecule in the air over a wing or other surface.

    It’s all done by the equivalent of model ensemble techniques.

    Yet … despite your lack of faith … they fly.

    So I found that, on the backcast, the GCMs matched the decadal temperature directions but failed to do so on the forecast. Big red flag for me!

    Big red flag for me, too, because you first state that natural variation can swamp the climate signal, then proclaim that the fact that we’ve been seeing natural variation (in particular, in the sun’s output) damping the climate signal (but not significantly) tells you the climate science is all hooey.

    Sort of a tautology denial of some sort …

    The 787 simulator built to train pilots was developed, tested, and used to train the first pilots to fly the plane BEFORE IT EVER FLEW, and on its first test flight, performed exactly as the model had taught the pilots to expect it to.

    Comment by dhogaza — 16 Nov 2011 @ 12:19 AM

  127. > as you have recognised Hank
    Nope, you still have it backwards.

    Comment by Hank Roberts — 16 Nov 2011 @ 1:16 AM

  128. Norm Hull wrote in 85:

    If hansen et al is correct, then their are only three possibilities – the standard deviation is incorrect, the variation is increasing, or it is random chance.

    Norm Hull wrote in 113:

    If events are individual and properly modeled, the probability of any single one exceeding 3SD in the positive direction is slightly more than 0.1%. If there are a great many exceeding this threshold, then the model needs to be corrected.

    Do you make a habit of learning things in reverse?

    Comment by Timothy Chase — 16 Nov 2011 @ 3:21 AM

  129. Alan Millar,
    So, you rejected the science because you didn’t understand that, over short periods, variability could mask the trend? That’s brilliant. Dude, you know, somehow your conversion story isn’t quite as compelling as that of St. Augustine.

    Comment by Ray Ladbury — 16 Nov 2011 @ 7:54 AM

  130. Ray,
    It does not appear that Alan rejected the science, only the models. The models behaved quite well during a period of solar and ocean activity which generally contributed to higher temperature observations. It wasn’t that the GCMs couldn’t model short-term variability, but that the short-term variability became part of the model.

    [Response: How did observed 'short term ocean activity' become 'part of the model'? - gavin]

    Comment by Dan H. — 16 Nov 2011 @ 9:34 AM

  131. 127 Hank Roberts says:

    16 Nov 2011 at 1:16 AM
    “Nope, you still have it backwards”

    129 Ray Ladbury says:

    16 Nov 2011 at 7:54 AM

    “Alan Millar,
    So, you rejected the science because you didn’t understand that, over short periods, variability could mask the trend? That’s brilliant. Dude, you know, somehow your conversion story isn’t quite as compelling as that of St. Augustine.”

    I must be living in some sort of alternate reality here.

    One. Where logarithmic effects get stronger as time progresses. And.

    Two. Where my words are translated to mean the exact opposite of their true meaning!

    I said the reason I started to doubt the models’ predictions was that they were too accurate on the backcast, not that they are inaccurate over a period of a decade on the forecast.

    Again!! If the Earth has a fairly high natural variability, I would expect the models’ signal to be constantly swinging between overstating and understating the warming, with the true signal emerging in the medium to long term.

    The models’ signal from 1945 did not do this to any great extent, and I find that very strange, given that they are not set up to match the short-term (decadal) trend of the temperature record with its inclusion of the weather signal (natural variability).

    That is the reason given for the models' signal not matching the past decade's temperature trend. That, I don't find strange, if the models are set up accurately.

    The proof of the pudding will be if the temperature trend swings back the other way and the models' climate signal emerges from the natural noise.

    Alan

    Comment by Alan Millar — 16 Nov 2011 @ 10:39 AM

  132. Re: Dan H 2 122

    You said: "My argument was that their probability will not change, because they are well beyond the 2 sigma range."

    As the temperature distribution shifts due to climate change, probabilities will change at every absolute temperature. Thus, the previous probability at temperature T will no longer apply. What used to be +3 sigma in the old distribution may now only be +2 sigma (if the shift in the mean is equal to +1 sigma) in the new distribution, which has a higher probability.

    The only way to keep the probability constant is to say “+ 3 sigma in the new distribution has the same probability as +3 in the old distribution”, which requires that you be talking about two different temperatures in the two different distributions. You can’t do that if you are looking at the probability of a specific temperature T, which is the goal of the study.
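
    A quick numerical sketch makes this concrete (Python with scipy; the values are purely illustrative, not taken from the study):

        # Exceedance probability of a FIXED temperature threshold before and
        # after a +1 sigma shift of the mean, sigma held constant. All the
        # numbers here are assumptions for illustration.
        from scipy.stats import norm

        sigma = 1.0
        T = 3.0                                    # +3 sigma in the old climate
        p_old = norm.sf(T, loc=0.0, scale=sigma)   # old distribution, mean 0
        p_new = norm.sf(T, loc=1.0, scale=sigma)   # mean shifted up by 1 sigma

        print(p_old)           # ~0.00135  (the familiar 3 sigma tail)
        print(p_new)           # ~0.02275  (the same T is now only 2 sigma)
        print(p_new / p_old)   # ~17: that fixed temperature is ~17x more likely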

    It would appear that you don’t even realize you are shifting goalposts.

    Comment by Bob Loblaw — 16 Nov 2011 @ 11:43 AM

  133. Bob Loblaw on 132,

    I think you are mistaking proper statistical modeling for some sort of conspiracy. There are two different issues here. If you are going to properly model the system and figure out what the 3 sigma probability is, then you need to model the distribution properly; this is not moving goalposts, it's proper statistical modeling. In fact, this article was written to help explain a paper about how to do these statistics.

    IMHO we can all agree that the Russian Heat Wave was an extreme weather event. I think this is what you mean by goalposts. Then the question is what defines an extreme weather event. There will be disagreement on this, and I don't want to bring all that up here. Once this is defined you can use statistics, but you will not get a proper probability if you get the distribution of temperature or precipitation wrong.

    Comment by Norm Hull — 16 Nov 2011 @ 12:19 PM

  134. Alan Millar,
    When we hindcast, we know what the fricking weather was. We know what ENSO did. We know if there were volcanic eruptions. We know what the sun did. Why shouldn’t we get better hindcasts? Dude, this ain’t that tough!

    Comment by Ray Ladbury — 16 Nov 2011 @ 12:49 PM

  135. Norm Hull, The Moscow heatwave was an extreme event–by definition, since it was a record. If changes in the standard deviation were dominant, we’d expect to see a symmetry between record highs and record lows. We do not. This is far more consistent with an increasing mean.

    Comment by Ray Ladbury — 16 Nov 2011 @ 12:55 PM

  136. Dan H., The models are part of the science.

    Comment by Ray Ladbury — 16 Nov 2011 @ 12:59 PM

  137. Sorry, I thought you were trying to change the subject.

    With regard to your points. We can leave point 2, as we agree on that.

    I can see what you mean by saying that something is changing to make high sigma events happen more often and that this means there is a problem with the model of a normal distribution. However, what is changing is the warming trend. As Tamino has pointed out, this warming trend increases the likelihood of 3 and 4 sigma events. As the Hansen paper does, you can choose various solutions to this: use the earlier sigma before the warming trend, use sigma from a detrended data set, or use sigma from the most recent period. All of these are valid approaches provided it is clear what is being done. As I said in my blog post, I preferred to present the images that use the 1951-80 sigma precisely because I was interested in the changes since that period.
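
    For concreteness, here is one way those three sigma choices could be computed on a toy series (a Python sketch; the trend slope and noise level are assumed values, and this is not Hansen's actual code):

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1951, 2011)                    # 60 "summers"
        temps = 0.02 * (years - 1951) + rng.normal(0.0, 0.5, years.size)

        # (1) sigma from the fixed early baseline, 1951-1980
        sigma_base = temps[years <= 1980].std(ddof=1)

        # (2) sigma from the full record after removing a linear trend
        fit = np.polyfit(years, temps, 1)
        sigma_detrended = (temps - np.polyval(fit, years)).std(ddof=1)

        # (3) sigma from the most recent period, 1981-2010
        sigma_recent = temps[years >= 1981].std(ddof=1)

        print(sigma_base, sigma_detrended, sigma_recent)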

    To claim that the model of a normal distribution is wrong is to 'throw the baby out with the bathwater'. The change in the proportion of events within different SD demarcations is an important observation, not just a statistical artefact. Yes, the sigma is changing, but it is an important change with real effects in terms of people's everyday lives, i.e. 10% of the planet's surface is experiencing 3 sigma warming in summer (using the 1981-2010 detrended temperatures to derive sigma). That is not a result of the model being wrong; it's a result of warming.

    Comment by Chris R — 16 Nov 2011 @ 1:21 PM

  138. Oops!
    My above post was directed to Norm Hull.

    Comment by Chris R — 16 Nov 2011 @ 1:22 PM

  139. More on weather extremes:

    David Medvigy, Claudie Beaulieu. Trends in daily solar radiation and precipitation coefficients of variation since 1984. Journal of Climate, 2011

    From Princeton’s press release (emphasis added):

    The first climate study to focus on variations in daily weather conditions has found that day-to-day weather has grown increasingly erratic and extreme, with significant fluctuations in sunshine and rainfall affecting more than a third of the planet.

    Princeton University researchers recently reported in the Journal of Climate that extremely sunny or cloudy days are more common than in the early 1980s, and that swings from thunderstorms to dry days rose considerably since the late 1990s. These swings could have consequences for ecosystem stability and the control of pests and diseases, as well as for industries such as agriculture and solar-energy production, all of which are vulnerable to inconsistent and extreme weather, the researchers noted.

    The day-to-day variations also could affect what scientists can expect to see as the Earth’s climate changes, according to the researchers and other scientists familiar with the work. Constant fluctuations in severe conditions could alter how the atmosphere distributes heat and rainfall, as well as inhibit the ability of plants to remove carbon dioxide from the atmosphere, possibly leading to higher levels of the greenhouse gas than currently accounted for.

    Comment by SecularAnimist — 16 Nov 2011 @ 1:23 PM

  140. Ray from 135,

    If you define extreme as a record high or low for a place, then certainly you are correct in your analysis. You will also have hundreds of thousands of extremes each decade just in the United States.

    http://www2.ucar.edu/news/1036/record-high-temperatures-far-outpace-record-lows-across-us

    For those that have an extreme definition that chooses weather events that are not as prevalent, it is important to model them correctly. The length of time that it was hot, the lack of rain, the deaths, the wildfires, the crop losses all might be considered in a model that defines what made the Russian Heat Wave an extreme event.

    Comment by Norm Hull — 16 Nov 2011 @ 1:35 PM

  141. Chris R from 137,

    I think part of the problem is the definition of terms. If 10% of the planet is now experiencing 3 sigma temperatures, I think you must agree that the odds of our model being correct are not good.

    So let us rephrase: 10% is experiencing what was calculated to be 3 sigma from a previous temperature distribution. Then there is an implied judgement that this is a bad thing. Now if that distribution was from the coldest of the Little Ice Age, it may be good, and not extreme at all in historical context. But let us not look at good or bad (in the case of the Russian heat wave, let us agree on the bad: fires, crop failures, death), only at the science.

    If you define extreme as 3 SD hotter or colder than some fixed date, then we can certainly determine the areas that follow this definition between that date and now. This is what Hansen et al. has attempted to do, and we can look at the maps created. I think we both agree to this.

    Now there seems to be attribution involved, and the idea that these marks are independently 3 sigma. This is where I disagree. To properly test the model, the distribution must be tested using the mean from that year. To test attribution, the probabilities must be determined from a model without this factor. Let's say you did this and found that under the better modeling the odds were 300:1 instead of 1000:1; I would still say that it is occurring more often than the model predicts. If it was indeed 30:1 instead of 1000:1, then indeed this shifting mean could explain the phenomena. This work needs to be done, and if attribution is important, climate change must be separated from climate variability. I do not know what the real numbers are, but the work should be done if we are to have a good model.

    Comment by Norm Hull — 16 Nov 2011 @ 2:47 PM

  142. Norm

    There is no implied judgement that this is a bad thing. That judgement is from reasoning aside from the bare observations and the statistical measures obtained from them. Many people here in the UK would love to have more ‘barbecue summers’ – their judgment would vary wildly from mine.

    Because of interannual variations in weather it is meaningless to use the SD of a particular year. All the more so if one uses the SD of a year with a 3 sigma event.

    I remain wholly unimpressed by your claims of model failure. As I have said above, the basic premise is testable in Excel. To avoid the extra pages involved in making sets of normally distributed random numbers I have used the Data Analysis add-on and its relevant function. Then the histogram function allows building of the histograms. Even with an SD of 0.5 and a slope incrementing in 1/1000 for a set of 256 numbers, the skewing of the histogram is obvious, as is the increase in high SD events (random numbers).
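
    The same test is easy to reproduce outside Excel. A minimal Python sketch, using the SD, slope, and sample size just described (the repetition count is my own addition, to average away seed-to-seed luck):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 256
        trend = np.arange(n) / 1000.0        # slope incrementing in 1/1000
        threshold = 2 * 0.5                  # +2 sigma for an SD of 0.5

        reps = 2000
        flat = trended = 0
        for _ in range(reps):
            noise = rng.normal(0.0, 0.5, n)  # SD of 0.5, as described above
            flat += (noise > threshold).sum()
            trended += ((noise + trend) > threshold).sum()

        print(flat / reps)     # ~5.8 exceedances per set without the trend
        print(trended / reps)  # ~10-11 with it: high-SD events nearly double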

    Now Hansen shows the same thing as is found by an amateur armed with a spreadsheet. Barriopedro figure 2 also implicitly shows the same skewing at work.

    The ‘model’ of a normal distribution isn’t broken. We are seeing the distribution skewed by an increase in mean temperatures. If you think there’s a problem the ball is in your court.

    GISS station data is available here:
    http://data.giss.nasa.gov/gistemp/station_data/

    Comment by Chris R — 16 Nov 2011 @ 4:01 PM

  143. “The ‘model’ of a normal distribution isn’t broken. We are seeing the distribution skewed by an increase in mean temperatures. If you think there’s a problem the ball is in your court.”

    I definitely think climate data does not follow a normal distribution only skewed by global warming. My claim was only that you need to recalculate your distribution and test the hypothesis, not just assume it is correct. There definitely were a great many "extreme events" in the 1930s.

    It is you who wants to lock the distribution in time. I think this is fine if that is your definition of extreme events. It does not remove the responsibility to test the model. Stefan says he is testing his model with a great many data sets and I look forward to seeing how well it predicts. My gut feeling from historical records is that the variation does not follow a Gaussian distribution, but is skewed.

    Comment by Norm Hull — 16 Nov 2011 @ 5:04 PM

  144. Chris R said

    Because of interannual variations in weather it is meaningless to use the SD of a particular year. All the more so if one uses the SD of a year with a 3 sigma event.

    I was talking about the mean for that year, meaning the trend line. If you are saying all the skew is related to warming, I am saying: go ahead, model the warming and see what your model predicts. Is that clearer? You obviously are picking the 3 SD from some model year prior to today.

    Even with an SD of 0.5 and a slope incrementing in 1/1000 for a set of 256 numbers, the skewing of the histogram is obvious, as is the increase in high SD events (random numbers).

    Now Hansen shows the same thing as is found by an amateur armed with a spreadsheet. Barriopedro figure 2 also implicitly shows the same skewing at work.

    Ah, but you are giving it a model and then saying it conforms to your model. I am saying that you need to test the skew with a model and see the probabilities. Hansen has not done this. Stefan says he is in the process of peer review on his test, and I am very curious whether he will test the variation.

    Comment by Norm Hull — 16 Nov 2011 @ 5:48 PM

  145. In a true Gaussian distribution, an increasing mean should result in an increased probability of high sigma events. A look at US record high temperatures over the past century-plus shows that this is not the case. While the general trend exists (fewer record highs in the colder years, the 1880s and 1970s, and more in warmer years), the largest number of records was not set in the warmest years. More record highs were set in the 1930s and 1950s than in either the 1990s or 2000s.

    http://1.bp.blogspot.com/_b5jZxTCSlm0/R6O8szfAIbI/AAAAAAAAA8Y/t58hznPmTuw/s320/Statewide%2BRecord%2BHigh%2BTemperatures.JPG

    [Response: Actually you are very incorrect. If you take a large enough sample (all available at NOAA) of daily records (lows, highs of max and mins), it is clear that warm records over the last decade (and even the last year) are running way ahead of low records. - gavin]

    Comment by Dan H. — 17 Nov 2011 @ 7:41 AM

  146. Dan, take another look at Norm’s link:

    http://www2.ucar.edu/news/1036/record-high-temperatures-far-outpace-record-lows-across-us

    I haven’t examined your claim that “More record highs were set in the 1930s and 1950s than either the 1990s or 2000s,” but given the relatively short instrumental record, the clearest metric is not absolute numbers of records, but the ratios of highs to lows. (Since clearly, were the series stationary–apologies if I’m misusing the stats term here!–you’d see decreasing numbers of records over time.)

    And according to Norm’s link, the ratio of high to low is much more skewed toward “warm” for the 2000s than it is for the 1950s–2.04 to 1, versus 1.09 to 1.

    (Sadly, it doesn’t give a similar analysis for the 30s–which, however, we all agree (AFAIK) to have been the warmest decade in the US record so far.)
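
    The stationary-series point is easy to check numerically. A toy sketch in Python (every parameter assumed, purely for illustration): for i.i.d. data the expected number of running record highs in n values is 1 + 1/2 + ... + 1/n, about ln n, so new records get rarer with time, while a trend keeps them coming.

        import numpy as np

        def count_records(x):
            """Count the values that set a new running maximum."""
            best, k = -np.inf, 0
            for v in x:
                if v > best:
                    best, k = v, k + 1
            return k

        rng = np.random.default_rng(0)
        n, reps = 100, 5000
        flat = np.mean([count_records(rng.normal(0, 1, n)) for _ in range(reps)])
        trend = 0.02 * np.arange(n)   # an assumed warming trend, in sigma per step
        warm = np.mean([count_records(rng.normal(0, 1, n) + trend) for _ in range(reps)])

        print(flat)   # ~5.2 record highs per 100 values (the harmonic sum)
        print(warm)   # noticeably more once the trend is added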

    I think that was the thrust of Gavin’s inline, which may have seemed a little tangential to your point, Dan–absent the ‘expansion’ I’ve tried to sketch in this comment.

    Comment by Kevin McKinney — 17 Nov 2011 @ 8:32 AM

  147. Oh, I should mention that the graph linked in #145 is unreadably small for me–as usual, trying to zoom in just resulted in a bigger blur. The curve is clear enough, but reading the labels on the axes was just a lost cause.

    Comment by Kevin McKinney — 17 Nov 2011 @ 8:36 AM

  148. Kevin,
    The graph to which Norm linked is a 100% stacked column, which compares the highs to the lows in a given column. No reference can be made to other columns. Both the UCAR report and NOAA (as Gavin states) relate the relative numbers of highs and lows, not absolute numbers. By comparison, more record highs were set in the 30s and 50s, but the number of record lows was much higher in those decades, so the ratio of highs to lows is lower. That is what the graph shows.
    The recent warming would be reflected in the ratio of record highs to lows, and indeed it is. However, the claim of increasing extremes with an increasing mean is not. The high sigma events of the 30s (and 50s) would still be considered high sigma today. The mean has not increased enough for these extreme events to fall within the 3 sigma region.

    Comment by Dan H. — 17 Nov 2011 @ 10:59 AM

  149. Dan H. is arguing about his interpretation of one blurry picture.

    Gavin’s response above to Dan begins “Actually you are very incorrect.”

    That’s somehow reminiscent of what Tamino said to Dan earlier.

    There’s a pattern here.

    Dan H. should have read the caption to the picture he's describing.

    Comment by Hank Roberts — 17 Nov 2011 @ 11:48 AM

  150. Norm Hull,

    The 1930s are irrelevant. We’re talking about the recent warming, the global mean of which is attributable to human activity, i.e. post 1975 – the start of the recent linear trend. Studying the 20th century may well reveal skewing in different directions. However the recent warming trend is atypical of the global temperature behaviour in the 20th century. The 1930s warming was primarily a far-northern pattern probably related to the AMO, an outcome of internal ocean/atmosphere variability. What evidence do you have of more extreme events in the 1930s?

    No. In my amateur experiments in Excel I'm testing an artificial dataset that I know to be of a normal distribution with a prescribed trend, and it shows the same sort of skew to the normal distribution that Hansen finds in real results. Furthermore, this is a trivial observation: the skewing of the normal distribution in a series containing a trend is well known in statistics.

    I do not want to lock the distribution in time: I have previously stated that Hansen’s use of three sigmas, two over different periods and one using detrended data for the most recent period, is fine. That these three tests show a consistent pattern of skewing of the distribution and the decadal advance of this skewing is conclusive evidence that the change in the distribution is due to the trend.

    Comment by Chris R — 17 Nov 2011 @ 2:03 PM

  151. Dan H. might want to find out where that image comes from originally — what data, when, by whom; it’s much blogged, but I didn’t chase that particular red herring down once I saw how much it’s been reused.

    Here’s a link for a Google image search (apology for the length)

    https://www.google.com/search?tbs=sbi:AMhZZiuDMqGX6GVcIghi467Y9FLS_19bcEKq7rAP2cROXXB8ElQ99MWWZFALoRPT19mVl616s4dk6hNZ1I9VCXIgv-AvZ9CfrPeHN0bSB8qjM9VUr97EAhoHv_18_1vSge2vG-EXA1mHNyKXi1LpfPlqDCcprf1-TXpcLjsEedekzDd-DraKoa3C6oyQP35m5y6Rn7LZbGSB5sEAHJxHPOUATuCjCkWlwLNyp5H7gLBMAr9vrd890ZmTTsvTj5FX9Dv3-a6WZ8BRiKOG2QXNdS4wsO1OlLB6j_1Yu8vavNo_18FyJ4kD-VlSARjYE_19gZBHsuHy98TeTpwMPHtjBDJYOnr48qT_1uaZBTyHMoiOvZCopebYQUPWqOkOo9uvtprZwLTgAxR_1Pnslli9Gbn5ct9K-ITSigmBODa0lma3VjxBGs8jPf_1NV97YprYjM601g-4n62yHp1SkaKlUsbKKUYw3LRmrL46gJ4ClFLNNCjrBQnxtelTFy9C8hd6mxuikiyW5-w09jO7TWbNjCRhvjPhIKELQ-IabUOeAQAprtBmtkf-IsYesLLnxnllkgvxzEwPq-Sy2jJxBx6E66o1XUi4lIi95F56iCnrxUOVQ9kxmpPKVPE_1P1X3QUMA4e7jFho_1GgCo0SD4vy_1BLIeqsCD7-aC4QgEvhpZEkpHgFKfGdrq8DJrO2iaz-mhrmQMfm8bZScfw56XOPXFwDmSVG9ZukbK_1itSywjCZ-EwoU8ThyPHWId2ikjP6RZGv1wpAm6fs9HZeBM2P6PlTlvzocx46Wdxane4DTifV7AmQnkV_1YUUcoRzuk_1t0rEo-5YV9nboL8qn81vh9aDJFerR9mPyPTUHvUQE7Lt4_1pLcU5IezhnXcqyeOCJ15KZ0DvmWKooA2WO2NIz-1Pq2LhON3ToRZRqfcE4Y5-xZMR449UC-nV6c8bL5lVelPKMPGHWp3Shbba0uVV17Yyjb5ZGP3w56YprAd398cXsTxE6kPBr3LY1oyZccOaOOVShK37ugG0e0GLMqSJ2ZTHJe6XYKmvhITa8khCW6TgTtMkUdXGus_1_1oM_1wPYZVwkidfD28TgblvSjJfmNHhB0rZZJ8QpI18yHePKjQ_15C_1GOJfQEhey3OFtlDl4qUTjDJir_1zDTyuTz4JcALGlXhKrOSD05Gu1qe3YJx-cASz3GdZv31KU12dX1QStb3lChgdWQHifqamCpas7bbD6BVx0N5k6kni-nK3vvt7wv2w02vc-wqOX8rJDJv-UQ1-_1VkbKmKHldDsJf58yX25M6NiAiS9cIYV_1Y3eIE4s_1miOdDAVN3Q&hl=en&sa=G&biw=1505&bih=889&gbv=2&site=search&ei=qFrFTuSYGeegiQKwuLjaBQ&ved=0CD4Q9Q8

    Comment by Hank Roberts — 17 Nov 2011 @ 2:08 PM

  152. I must love the smell of red herring in the morning — it was too tempting.

    Looks like Dan H’s chart originates with this blogger, attributed to “NOAA data”
    http://hallofrecord.blogspot.com/2008/02/2007-monthly-us-high-temperature.html

    Comment by Hank Roberts — 17 Nov 2011 @ 2:16 PM

  153. Kevin, Hank, Dan,

    I mainly posted that link to show how many records are really occurring, so we could think about what is extreme. We would indeed expect the ratio of highs to lows to go in the direction of the temperature trend. We also might expect more record cold in the 30s, as the temperature record really only goes back to the 1880s with large numbers of thermometers and without using proxies. The 30s were extreme when it came to records, but so are the 2000s, and when measured just by record temperatures, part of the reason has to be warming.

    Now there is another point to all this. If we describe extreme a different way, do those extremes in the 1930s, like the Dust Bowl, still look extreme in the world of 2011? They definitely do to me. I live in Texas, and this summer has been categorized as extreme. I would agree, but my city and state did not break the record high for the year, though they did break many daily highs and had very little rain. But even given the summer of 2011's extremes, we would still consider the Dust Bowl in the 30s and the decade without rain in the 40s and 50s to be extreme.

    Comment by Norm Hull — 17 Nov 2011 @ 2:39 PM

  154. Norm’s link has data through Sep 2009. The ratio keeps getting worse:

    http://capitalclimate.blogspot.com/2010/10/endless-summer-xii-septembers.html

    I wonder if Capital Climate could round up the full year’s result for 2010? and 2011 in a couple months?

    Comment by Pete Dunkelberg — 17 Nov 2011 @ 3:33 PM

  155. Dan H., keep in mind that the plethora of new highs are all higher than anything in earlier decades, including the 30s.

    Comment by Pete Dunkelberg — 17 Nov 2011 @ 3:37 PM

  156. Hank,

    See Figure 7 bottom right panel of the Hansen paper. It wouldn’t surprise me if US records weren’t currently beating previous decades. Although the NCAR article linked to in #146 should give anyone pursuing that line in an honest fashion pause for thought.

    Comment by Chris R — 17 Nov 2011 @ 3:58 PM

  157. #148–

    Dan, thank you for clarifying–though it wasn’t necessary to explain stacked columns.

    If I understand you correctly, you agree that climate is currently warming, but are skeptical that there are more extremes occurring in general. Yes?

    You speak about the absolute numbers of US records. I had a quick look, but found no survey of them over the decades. It sounds as if you have. Care to share?

    Comment by Kevin McKinney — 17 Nov 2011 @ 4:10 PM

  158. Thanks, Hank–

    The original shows that the data are for *monthly statewide high records.* That is, if any station anywhere in the state has a record high, then that state is considered to have a record high for that month; otherwise, no record. Hence, there are 12 months x 50 states, or 600 “opportunities” for records per year. (Presumably modern definitions of “states” are extended backward to the beginning of the record.) I’ll leave the statistical heavy lifting to others, but it would seem that this individual methodology would necessitate care in making comparisons.

    Also, what I couldn’t see in the small version linked is that the graph was made in 2007, and so has dated a tad. It is quite clear that the blogger thinks he has refuted Meehl et al., even if Dan isn’t making that claim–and quite clear that he hasn’t, since his ‘proxy’ for warming is not validated in any way, just assumed to be conclusive:

    “it is reasonable to infer that:

    –Since 2000 there has been a noticeable absence of high temperature extremes or “heat waves”
    –The 2001-07 period may have been cooler than the 1994-2000 period”

    Um, no–”heat waves” need not set record maxima, and the maxima by themselves aren’t sufficient to characterize the mean temps. (Especially when it’s the minima expected to be affected most strongly by greenhouse warming.)

    I notice he updated in 2009, but not since. Pity. It would certainly be interesting to see what 2010 and 2011 looked like through this particular lens.

    Comment by Kevin McKinney — 17 Nov 2011 @ 4:32 PM

  159. Repeatedly one reads that our atmosphere has 4% more water vapor in it, and that this fact contributes to greater precipitation and more catastrophic weather events.
    But the 4% figure seems small in comparison to the magnitude of the deluges experienced in so many places recently. Surely the 4% extra atmospheric H2O is not evenly distributed in the atmosphere. Aren’t some places just as arid as before and other places finding themselves (or their weather) with atmospheric H2O boosted much more than 4%? Shouldn’t this point of clarification be made when catastrophic weather is discussed?

    Comment by Scott Ripman — 17 Nov 2011 @ 8:01 PM

  160. Scott Ripman @159 — Yes, and there are many papers on the subject of rainfall. Briefly, precipitation has increased in mid and high latitudes and seems to be decreasing in the so-called subtropics. The latter includes all across the American South and all around the Mediterranean. Moreover, it may be that increased aerosols lead to more extreme rain events; I left a link regarding that on the Open Variations thread. Also there is a link by Hank Roberts indicating a tendency to increased extreme events in the Mediterranean region.

    Comment by David B. Benson — 17 Nov 2011 @ 10:55 PM

  161. #156–Yes, the Hansen paper’s figure 7 does present a different picture. See here:

    http://www.columbia.edu/~jeh1/mailings/2011/20111110_NewClimateDice.pdf

    Comment by Kevin McKinney — 17 Nov 2011 @ 11:54 PM

  162. Scott @ 159, yes you raise an important point regarding communication, and yes water vapor is very unevenly distributed. Total arid area is increasing and expected to increase more. Drought is a major concern of those who are paying attention. See for instance Dai’s Drought Under Global Warming: A Review. (where is that latest paper of Hansen’s?)

    But excessive rain and flooding are also a growing problem: See my comment # 29 under Scientific Confusion earlier today.
    http://www.realclimate.org/index.php/archives/2011/11/scientific-confusion/comment-page-1/#comment-219404

    This paper:
    Recent trends of the tropical hydrological cycle inferred from Global Precipitation Climatology Project and International Satellite Cloud Climatology Project data
    Y. P. Zhou et al. 2011
    gives a general idea of dry and wet areas and the reasons for them. You might look up Hadley cells and the Inter Tropical Convergence Zone (ITCZ) in Wikipedia. I think Zhou's paper is online somewhere but it is proving elusive just now, so I'll give you the abstract:

    Scores of modeling studies have shown that increasing greenhouse gases in the
    atmosphere impact the global hydrologic cycle; however, disagreements on regional scales are large, and thus the simulated trends of such impacts, even for regions as large as the tropics, remain uncertain. The present investigation attempts to examine such trends in the observations using satellite data products comprising Global Precipitation Climatology Project precipitation and International Satellite Cloud Climatology Project cloud and radiation. Specifically, evolving trends of the tropical hydrological cycle over the last 20–30 years were identified and analyzed. The results show (1) intensification of tropical precipitation in the rising regions of the Walker and Hadley circulations and weakening over the sinking regions of the associated overturning circulation; (2) poleward shift of the subtropical dry zones (up to 2° decade⁻¹ in June-July-August (JJA) in the Northern Hemisphere and 0.3–0.7° decade⁻¹ in June-July-August and September-October-November in the Southern Hemisphere) consistent with an overall broadening of the Hadley circulation; and (3) significant poleward migration (0.9–1.7° decade⁻¹) of cloud boundaries of the Hadley cell and plausible narrowing of the high cloudiness in the Intertropical Convergence Zone region in some seasons. These results support findings of some of the previous studies that showed strengthening of the tropical hydrological cycle and expansion of the Hadley cell that are potentially related to the recent global warming trends.

    You might also like a certain book titled “Hell and High Water.”

    Comment by Pete Dunkelberg — 17 Nov 2011 @ 11:56 PM

  163. Pete Dunkelberg (154) wrote:

    Dan H., keep in mind that the plethora of new highs are all higher than anything in earlier decades, including the 30s.

    I would also keep in mind that in terms of the warming trend the United States has been very fortunate so far. In terms of five-year averages we are still pretty close to where we were in the 1930s, but much of the world left the 1930s behind a long time ago. For the Northern Hemisphere as a whole, the last time the 5-yr running mean was equal to that of the late 1930s was about 1985. Globally? About 1978.

    Please see:

    GISS Surface Temperature Analysis
    Analysis Graphs and Plots
    http://data.giss.nasa.gov/gistemp/graphs/

    The United States may very well play catch-up at some point. But temperature itself isn’t the worst of it. In terms of the food supply the changes to the hydrological cycle are probably far more important. There is the flooding…

    Groisman et al note:

    … while mean precipitation increase was barely visible during the past century (and was statistically insignificant in the cold season), heavy and very heavy precipitation increased markedly as did the proportion of their totals attributed to these events. Regionally and seasonally, changes in “very heavy” precipitation vary significantly, and are most notable in the eastern two-thirds of the country and primarily in the warm season when the most intense rainfall occurs (Figure 6).

    Pavel Ya. Groisman et al. (2002) Contemporary Changes of the Hydrological Cycle over the Contiguous United States: Trends
    http://www1.ncdc.noaa.gov/pub/data/papers/2002pg05.pdf

    In the lower 48 states the trend between 1910-1999 was for annual rain to increase by 6% per century, but heavy rain (above the 95th percentile) has increased by 6%, very heavy rain (above the 99th percentile) by 24%, and extreme rain (above the 99.9th percentile) by 33%. (Ibid.)

    A recent fingerprint study shows that models do well in matching the pattern of increases:

    Our results provide to our knowledge the first formal identification of a human contribution to the observed intensification of extreme precipitation. We used probability-based indices of precipitation extremes that facilitate the comparison of observations with models. Our results also show that the global climate models we used may have underestimated the observed trend, which implies that extreme precipitation events may strengthen more quickly in the future than projected and that they may have more severe impacts than estimated.

    pg. 380, Seung-Ki Min et al (2011) Human contribution to more-intense precipitation extremes, Nature, Vol 470, pp. 378-381

    … although they seem to underestimate the actual trend.

    But in my view drought is far more important.

    There will be the drying out of the continental interiors, particularly during summer. As I said earlier, ocean has greater thermal inertia than land. It takes longer to heat it up or cool it down. And in a warming world this implies land will warm more quickly than ocean. Thus the relative humidity of moist maritime air will drop more quickly than before as it moves inland and with it the chances for precipitation. In contrast, precipitation may increase during the wintertime. But with an earlier melt this will imply flooding. And at higher temperatures earlier in the year the moisture will tend to evaporate more quickly, leaving less moisture for later in the year. Reduced moisture during the summer will imply a reduction in moist air convection at the land’s surface. This will imply higher summertime temperatures and in time a net decrease in plant productivity.

    As measured by the Palmer Drought Severity Index (PDSI), we have observed the drying out of land on a global scale.

    Please see:

    However, the PDSI suggests there has likely been a large drying trend since the mid-1950s over many land areas, with widespread drying over much of Africa, southern Eurasia, Canada and Alaska. In the SH, there was a drying trend from 1974 to 1998, although trends over the entire 1948 to 2002 period are small. Seasonal decreases in land precipitation since the 1950s are the main cause for some of the drying trends, although large surface warming during the last two to three decades has also likely contributed to the drying. Based on the PDSI data, very dry areas (defined as land areas with a PDSI of less than –3.0) have more than doubled in extent since the 1970s, with a large jump in the early 1980s due to an ENSO-induced precipitation decrease over land and subsequent increases primarily due to surface warming.

    Climate Change 2007: Working Group I: The Physical Science Basis, 3.3.6 Summary
    http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch3s3-3-6.html

    … and also see:

    http://www.tiimes.ucar.edu/highlights/fy06/dai.html

    Then there is drought driven by the expansion of the Hadley cells. I don't know how well the models are doing at modeling this expansion now, but previously it was noted that the Hadley cells were expanding at roughly 3X the rate projected by the models. With Hadley cells, moist air rises in the tropics, gives up its moisture as it cools with increasing altitude, then subsides, once it has given up its moisture, at roughly 30 N/S of the equator. This may very well be a factor in the recent Texas drought. Dallas is at about 32 N.

    Drought will likely have the greatest effect upon our food supply, and we have already observed a net global decrease in plant productivity.

    Please see:

    Earth has done an ecological about-face: Global plant productivity that once flourished under warming temperatures and a lengthened growing season is now on the decline, struck by the stress of drought.

    Drought Drives Decade-Long Decline in Plant Growth, Aug 19, 2010
    http://www.nasa.gov/topics/earth/features/plant-decline.html

    This global shift in plant productivity is an expected result of global warming, but we didn’t expect it so early.

    Comment by Timothy Chase — 18 Nov 2011 @ 12:48 AM

  164. http://web.mit.edu/newsoffice/2011/mass-extinction-1118.html

    “… the end-Permian extinction was extremely rapid, triggering massive die-outs both in the oceans and on land in less than 20,000 years … coincides with a massive buildup of atmospheric carbon dioxide, which likely triggered the simultaneous collapse of species in the oceans and on land.

    … the average rate at which carbon dioxide entered the atmosphere during the end-Permian extinction was slightly below today’s rate of carbon dioxide release into the atmosphere due to fossil fuel emissions.

    … The researchers also discovered evidence of simultaneous and widespread wildfires that may have added to end-Permian global warming, triggering what they deem “catastrophic” soil erosion and making environments extremely arid and inhospitable.”

    The researchers present their findings this week in Science.

    Comment by Hank Roberts — 18 Nov 2011 @ 11:11 AM

  165. Sir,
    Those are all valid explanations for the recently observed temperature increases, and generally accepted (although there are a few who will deny them).

    Comment by Dan H. — 18 Nov 2011 @ 9:31 PM

  166. #163–Thank you, Timothy Chase. Good information, as usual. The Zhao and Running was interesting, if not particularly cheery, and I’d missed it completely.

    #165–Dan, to what does “those” refer? I really don’t know what you mean.

    Comment by Kevin McKinney — 19 Nov 2011 @ 9:54 AM

  167. > all valid explanations for the recently observed temperature increases
    > Dan H.

    “Those” aren’t happening now — “anything but the IPCC” requires _two_ hypothetical forcings — one to cancel out the known physics explaining the effect of increasing CO2, and another to explain the facts by something besides CO2.

    “Those” from the end-Permian — except the rate of change of CO2 — aren’t happening.

    Comment by Hank Roberts — 19 Nov 2011 @ 12:22 PM

  168. Kevin McKinney wrote in 166:

    #163–Thank you, Timothy Chase. Good information, as usual. The Zhao and Running was interesting, if not particularly cheery, and I’d missed it completely.

    Not a problem. Fortunately the paper itself, two comments and a response have been made open access by Science:

    Maosheng Zhao and Steven W. Running (20 August 2010) Drought-Induced Reduction in Global Terrestrial Net Primary Production from 2000 Through 2009, Science, Vol. 329 no. 5994 pp. 940-943
    http://www.sciencemag.org/content/329/5994/940.full

    Arindam Samanta et al. (26 August 2011) Comment on “Drought-Induced Reduction in Global Terrestrial Net Primary Production from 2000 Through 2009″, Science, Vol. 333 no. 6046 p. 1093
    http://www.sciencemag.org/content/333/6046/1093.3.full

    Belinda E. Medlyn (26 August 2011) Comment on “Drought-Induced Reduction in Global Terrestrial Net Primary Production from 2000 Through 2009″, Science, Vol. 333 no. 6046 p. 1093
    http://www.sciencemag.org/content/333/6046/1093.4.full

    Maosheng Zhao, Steven W. Running (26 August 2011) Response to Comments on “Drought-Induced Reduction in Global Terrestrial Net Primary Production from 2000 Through 2009″, Science, Vol. 333 no. 6046 p. 1093
    http://www.sciencemag.org/content/333/6046/1093.5.full

    Might make for interesting discussion at some point. There has been some movement on the other points as well. I might try to get back to them a little later.

    Comment by Timothy Chase — 19 Nov 2011 @ 1:44 PM

  169. Those oil-rich black shales we hear so much about lately?
    They were laid down in anoxic conditions.
    http://www.sciencedirect.com/science/article/pii/S0264817206001061

    Does this mean that by pushing climate to the kind of anoxic ocean that Peter Ward warns of, we’re setting up conditions to restore the Earth’s petroleum supply over the longer term?

    That would be good news for the descendants of, you know, the opossums or squids or whatever form of intelligent life eventually evolves on this planet.

    Comment by Hank Roberts — 19 Nov 2011 @ 6:18 PM

  170. Kevin,
    I was responding to SirCharge, where he stated that the difference between last year's heat wave and the earlier heat wave (1936) was that Moscow has been built up significantly since then, such that the increase above 1936 levels could be attributed to steel and asphalt.
    The "plethora" of new highs mentioned earlier is largely the result of more stations being in existence. Many of the neighboring stations which recorded record highs in 1934 or 1936 did not set records recently.
    Your assessment of my beliefs in #157 is correct.

    Comment by Dan H. — 19 Nov 2011 @ 10:53 PM

  171. > steel and asphalt … new highs … more stations

    Watts’s ‘surface stations’ notion failed for the US — he’s been shown wrong. You assert it now for Russia, because — why?

    And

    > new highs mentioned earlier are largely the result of more stations

    More thermometers used, and you assert this makes temperatures go up?

    ——

    More on the end-Permian study:

    http://www.cbc.ca/news/technology/story/2011/11/17/science-mass-extinction.html

    “Scientists finally know the date — and hence the likely cause — of a massive extinction that wiped out 95 per cent of life in the oceans and 70 per cent of life on land …. Most affected species met their demise within 20,000 years — a blink of an eye on the geological timescale. In all, the mass extinction lasted less than 200,000 years, wiping out huge forests of conifer trees, tree ferns, big amphibians, large reptiles such as dimetrodons, mammal ancestors called synapsids and a huge diversity of fish and shellfish.

    The precise timing coincides with a huge outpouring of carbon dioxide and methane from volcanic lava flows in northwest Asia known as the Siberian traps."

    —–
    Reminder: http://web.mit.edu/newsoffice/2011/mass-extinction-1118.html

    “… the end-Permian extinction … coincides with a massive buildup of atmospheric carbon dioxide, which likely triggered the simultaneous collapse of species in the oceans and on land.

    … the average rate at which carbon dioxide entered the atmosphere during the end-Permian extinction was slightly below today’s rate of carbon dioxide release into the atmosphere due to fossil fuel emissions.”

    Comment by Hank Roberts — 20 Nov 2011 @ 3:43 PM

  172. A map for you Dan H. I forget now who posted this on another blog:

    http://www.nrdc.org/globalWarming/hottestsummer/images/summernighttime.png

    Comment by Holly Stick — 20 Nov 2011 @ 3:48 PM

  173. #170–Thank you, Dan. I still can’t find his comment, but no matter.

    Have to say, UHI as an explanation doesn't really wash, if that was the point. Temporally, 30s Moscow may have been less built up, but wasn't Rahmstorf et al referring to 'the Moscow station'? In that case, we'd need to see what the temporal structure of the 'build up' *around the station* was before proposing it to have some explanatory value.

    But the main thing is, the “Moscow heat wave” affected a huge area of central Russia–not just Moscow. Satellite image:

    http://earthobservatory.nasa.gov/IOTD/view.php?id=45069

    All those forest fires weren't due to asphalt being solar-heated...

    Comment by Kevin McKinney — 20 Nov 2011 @ 6:34 PM

  174. Kevin, Hank, and Holly,
    I just said that his explanation was plausible. Moscow broke its all-time record high by 1.5C. The summer of 2010 was similar to the summer of 1936. I cannot say whether the previous highs were broken because of the urban buildup, an extended blocking event, or global warming. Likewise, I will not assert that the U.S. did not break its record highs for any of the above reasons.
    I will maintain that these types of record highs are not the result of an increasing mean, but rather of a specific weather event (blocking) whose frequency of occurrence has not changed. When the weather event occurs, specific local conditions will determine whether new records are broken. Russia broke many records in 2010. In the U.S., many of the 1936 records still stand.

    Comment by Dan H. — 20 Nov 2011 @ 8:04 PM

  175. > plausible
    arguable, at least, which may be closer to what you’re looking for.

    Plausible, only by ignoring the infrared absorption physics — the same physical behavior that makes your laser CD player work.

    Comment by Hank Roberts — 20 Nov 2011 @ 8:20 PM

  176. #174–Well, of course there are specific “local” causes when records fall. But that doesn’t mean that the bigger context isn’t important, or doesn’t contribute to the outcome.

    But leaving the specifics aside, Dan, if you accept a warming trend, how does an increased probability of warm records *not* follow–unless you posit a decrease in variability, perhaps? Your agnosticism seems a tad “over-determined!”

    Comment by Kevin McKinney — 21 Nov 2011 @ 10:20 AM

  177. Kevin,
    Exactly! Much of the warming trend has been attributed to higher low temperatures. This has resulted in an increased mean with a smaller standard deviation. If you go to the NOAA weather site and check the records for 2011, the monthly high maximum temperatures have outpaced the low maximum temperatures 527 – 324. However, the monthly high minimum temperatures have exceeded the low minimum temperatures 781 – 240. More of the observed warming can be attributed to an increase in minimum temperatures.

    [Response: Unintelligible nonsense. You are really confused on this whole topic. Further unsupported opinions and gibberish are going straight to the borehole.--Jim]

    Comment by Dan H. — 21 Nov 2011 @ 2:02 PM

  178. > exactly

    Not exactly.

    “The eight warmest years on record (since 1880) have all occurred since 2001, with the warmest year being 2005.
    Additionally (from IPCC, 2007):
    The warming trend is seen in both daily maximum and minimum temperatures, with minimum temperatures increasing at a faster rate than maximum temperatures….”
    http://www.epa.gov/climatechange/science/recenttc.html

    Another reason that’s a problem:
    Rice yields decline with higher night temperature from global warming

    Comment by Hank Roberts — 21 Nov 2011 @ 3:00 PM

  179. Jim,
    A question for you. Hank's post claims that minimum temperatures have risen faster than maximums, yet there are no claims of gibberish or unsupported opinion. Yet, when I show actual numbers that say the same thing, with a reference, it is called "unintelligible nonsense." Therefore, do you feel that Hank is as confused as you claim I am?

    [Response: No, I don't think Hank is 1/10 as confused as you are. And your "actual numbers" are meaningless without proper reference and discussion.--Jim]

    Comment by Dan H. — 21 Nov 2011 @ 5:36 PM

  180. He’s such a mixer, and so eager for attention. Not worth much though.

    He doesn’t cite sources, so you can’t see the context for what he says. He consistently provides fake context so when he does post something factual, its meaning for a naive reader is — bent, distorted.

    That’s how ‘out of context’ and ‘unsupported’ claims are useful — to delay and confuse.

    Tamino nailed this tactic inline, weeks ago, here: http://tamino.wordpress.com/2011/11/07/berkeley-and-the-long-term-trend/#comment-56469

    Comment by Hank Roberts — 21 Nov 2011 @ 6:11 PM

  181. Decent summary as of last January here:
    Is the U.S. climate getting more extreme?
    Posted by: JeffMasters, 2:37 PM GMT on January 24, 2011

    “… The portion of the U.S. experiencing month-long maximum or minimum temperatures either much above normal or much below normal has been about 10% over the past century (black lines in Figures 2 and 3.) However, over the past decade, about 20% of the U.S. has experienced monthly maximum temperatures much above normal, and less than 5% has experienced maximum temperatures much cooler than normal. Minimum temperatures show a similar behavior, but have increased more than the maximums (Figure 3).*

    Climate models [tell us that] minimum temperatures should be rising faster than maximum temperatures if human-caused emissions of heat-trapping gases are responsible for global warming, which is in line with what we are seeing in the U.S. using the CEI.

    While there have been a few years (1921, 1934) when the portion of the U.S. experiencing much above normal maximum temperatures was greater than anything observed in the past decade, the sustained lack of maximum temperatures much below normal over the past decade is unique. The behavior of minimum temperatures over the past decade is clearly unprecedented–both in the lack of minimum temperatures much below normal, and in the abnormal portion of the U.S. with much above normal minimum temperatures….”
    ____________
    *Figure 3.
    http://www.wunderground.com/hurricane/2011/2010_cei_min.png
    The Annual Climate Extremes Index (CEI) for minimum temperature, updated through 2010, shows that about 35% of U.S. had minimum temperatures much warmer than average during 2010. This was the 7th largest such area in the past 100 years. The mean area of the U.S. experiencing minimum temperatures much warmer than average over the past 100 years is about 10% (thick black line.) Image credit: National Climatic Data Center.

    Comment by Hank Roberts — 21 Nov 2011 @ 7:32 PM

  182. [edit - stay substantive - we are not interested in attacks on other commenters]

    Comment by Dan H. — 21 Nov 2011 @ 7:33 PM

  183. Ten key messages from the IPCC Special report on extreme events.

    http://cdkn.org/2011/11/ipcc-srex/

    Not quite as depicted in the craposphere.

    Comment by john byatt — 21 Nov 2011 @ 8:05 PM

  184. #176-182, mostly–

    I don’t think that records are the best way to analyze this (sorry, Dr. Meehl.) You throw out a lot of information–which is perhaps part of the attraction, since there is a ton of data and if you don’t have software which can automate some of the processing, it can get tedious. But records are ‘pre-culled!’

    But I digress. The point is, all those non-record data can give a more complete picture, and don't introduce the complication of past records being 'wiped out.' Which is why Figure 4 of Hansen et al, 2011 (the "climate dice" paper) seems to me to have quite a bit more to say about variability over time. There the distribution is clearly broader for recent decades:

    http://www.columbia.edu/~jeh1/mailings/2011/20111110_NewClimateDice.pdf

    Comment by Kevin McKinney — 21 Nov 2011 @ 11:50 PM

  185. 177 Dan said, “Much of the warming trend has been attributed to higher low temperatures. This has resulted in an increased mean with a smaller standard deviation.”

    You're doing the apples-to-oranges thing. That there is a smaller difference between daily highs and daily lows does not, in and of itself, change the standard deviation of daily highs, daily averages, or daily lows.

    The whole question of whether a single event was caused by global warming is moot. Every day for a region is completely different due to AGW, including very cold days that wouldn’t have occurred without AGW. The 2010 blocking event? Wouldn’t have occurred without AGW. Of that we’re essentially certain. Another blocking event that didn’t happen in August 2007? Might have occurred without AGW. We can say with certainty that we’ll get more Moscow warming type events with AGW, but the dates of their occurrences will be completely different compared to a non-AGW world. If we get 5 times as many of these events in a specific AGW world, then the odds are that none of those 5 events will coincide with the 1 event we would get without AGW. Thus, AGW both prevents and causes heat waves.
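
    A toy calculation (Python; the 100-year window and the event counts are assumed purely for illustration) shows why those events almost surely don't coincide:

        # One heat wave lands in a random year of the no-AGW century; five land
        # in random years of the AGW century. How often does none of the five
        # hit that same year?
        years = 100
        p_none = (1 - 1 / years) ** 5
        print(p_none)   # ~0.951, so ~95% of the time none of the 5 coincide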

    100% of the weather we get is completely different because of AGW and there is no way to guess what the weather would have been without AGW. All we can determine is means and variability. Hank pointed out that if the mean increases due to AGW, then the only way to get fewer extreme highs is by lowering variability. That’s correct whether we’re talking daily high, mean, or low temperatures. That’s a good question for the moderators – does AGW change variability?

    Of course, all this breaks down if we drive temperatures high enough that heat waves which had 0% probability without AGW start occurring.

    Comment by RichardC — 22 Nov 2011 @ 1:06 AM

  186. Richard,
    How can you be so certain that the 2010 blocking event would not have occurred without AGW? The records that were broken last year were set in 1936, in what many assert was a similar blocking event. A similar situation occurred in the U.S. See the link Kevin posted in #184, comparing the two hottest summers in the U.S. in the 1930s with those of the past decade. AGW did not cause this recent blocking event any more than it caused a similar occurrence 75 years ago. I am curious how you can say that we know with certainty that these events will increase in the future.
    The NOAA has an ongoing investigation into the Russian heat wave of 2010. A recent assessment has been published:
    http://www.agu.org/pubs/crossref/2011/2010GL046582.shtml

    Kevin,
    Thanks for the link. I just wish Hansen had expanded his graphs further back in time. His figure 8 indicates that there may have been much greater spread in the data in previous years.

    Comment by Dan H. — 22 Nov 2011 @ 9:36 AM

  187. “Monday, March 21, 2011
    Tamino is watching the food
    Open Mind takes on what I have argued is the critical “final common pathway” of global warming risks — crop and supply chain failures ….

    ‘Why did the Russian wheat harvest suffer so? Because of the record-breaking heat wave and drought which plagued a massive region, at just the wrong time for Russian agriculture. And one of the contributing factors is: global warming.

    That’s the ugly, deadly dangerous secret the denialists don’t want you to think about. That’s why they sank so low as to try to blame inflation of food prices on attempts to fight global warming, when it’s really due to: global warming.’
    …”
    http://theidiottracker.blogspot.com/2011/03/tamino-is-watching-food.html

    Comment by Hank Roberts — 22 Nov 2011 @ 10:57 AM

  188. Oh, for those tracking the tactics of assertion without citation, compare the assertions posted above without any source to what you find looking for information from sources.

    Yes, you’ll find crap sources out there that the deniers fall back on when pressed for any citation. Stay with the science sites and avoid the “worldclimatereport” kind of spin sites.

    One characteristic of reliable science writing — it evolves over time:

    Scientific Assessment of 2010 Western Russian Heatwave
    http://www.esrl.noaa.gov/psd/csi/events/2010/russianheatwave/

    Sep 9, 2010

    “The western Russia heat wave during summer 2010 was the most extreme heat wave in the instrumental record of 1880-present for that region…. Here we provide the various stages of one such scientific assessment ….

    … a chronology of the scientific investigation, including questions asked, hypotheses posed, data used, and tools applied. The effort began with a rapid initial response, assembled as preliminary material and discussion as a web page, which can be read under the header Preliminary Assessment that was last updated 9 September 2010.

    … the team posed the rhetorical question whether the western Russia heat wave of July 2010 could have been anticipated. The nature of this analysis was more extensive than the initial rapid response, and the vetting of our analysis was conducted through the peer review process leading to the publication of a paper in Geophysical Research Letters in March 2011. The contents of this paper including the Supplemental figures that accompanied the work can be read under the header Published Assessment.

    The assessment process continues at NOAA/ESRL, and we provide further ongoing analysis …. we attempt to reconcile several differing perspectives on the causes for the heat wave, illustrate the diversity of questions being posed and how answers to those may at times confound the public (and even scientific) understanding of causes.

    … Several papers have appeared subsequent to ours in the peer reviewed literature…. We provide links to these papers and their principal conclusions on this web page under the header Related Publications….”

    Comment by Hank Roberts — 22 Nov 2011 @ 11:31 AM

  189. And follow the links they give at ESRL, e.g. to
    http://www.esrl.noaa.gov/psd/csi/events/2010/russianheatwave/warming_detection.html
    Ongoing Scientific Assessment of the 2010 Western Russia Heatwave
    Draft – Last Update: 3 November 2011
    Disclaimer: This draft is an evolving research assessment and not a final report. Comments are welcome. For more information, contact Dr. Martin Hoerling …”

    Remember how science works. There’s no great original founder on which everything later is based, and this is a good example, citing work and assessing whether what appear to be contradictions or disagreements can be drawn out with more work to get closer to a better description of what happened.

    Science doesn’t grow like a single tree, it grows like kudzu.

    Comment by Hank Roberts — 22 Nov 2011 @ 11:41 AM

  190. From the previous link from NOAA:
    “Model simulations and observational data are used to determine the impact of observed sea surface temperatures (SSTs), sea ice conditions and greenhouse gas concentrations. Analysis of forced model simulations indicates that neither human influences nor other slowly evolving ocean boundary conditions contributed substantially to the magnitude of this heat wave. They also provide evidence that such an intense event could be produced through natural variability alone. Analysis of observations indicate that this heat wave was mainly due to internal atmospheric dynamical processes that produced and maintained a strong and long-lived blocking event, and that similar atmospheric patterns have occurred with prior heat waves in this region. We conclude that the intense 2010 Russian heat wave was mainly due to natural internal atmospheric variability.”

    Comment by Dan H. — 22 Nov 2011 @ 2:04 PM

  191. Dan asks, “Richard,
    How can you be so certain that the 2010 blocking event would not have occurred without AGW? ”

    Let's use Hansen's visualization that weather is like rolling dice. An extreme blocking event of a specific magnitude, starting on a specific day in a specific area and lasting for a specific duration, might be like rolling a six 100 times in a row. Very, very unlikely. The key is that the system without AGW represents a new reality, and so the two (with AGW vs. without AGW) are independent events. That there was such an event in our reality has zero influence on the odds of such an event at the same time in the alternative reality where humans never emitted any CO2. Thus, we MUST re-roll those 100 dice! Depending on how picky you want to get in the definitions, one could say it would be impossible to get the same results.

    Another way to think about it is with GCMs. Do runs, all starting 10 years before the event, until you get the event in question. Now change the parameters to represent no AGW. Do ONE more run. What are the odds of getting the same event? Essentially zero. Thus, it is incorrect to say that the 2010 event would have been “the same” but a little cooler without AGW. It is correct to say that without AGW the 2010 event would not have occurred. Without AGW, 100% of the weather on the planet would be different from what occurred, including the heat waves that did not occur because of AGW.
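
    A minimal numeric sketch of the re-roll point (Python; the fair die and the “100 sixes” event are purely illustrative stand-ins for weather, not GCM output):

    import random

    random.seed(1)

    # Analytic probability of a fair die showing six 100 times in a row.
    p_run = (1.0 / 6.0) ** 100
    print(f"P(100 sixes in a row) = {p_run:.3e}")  # ~1.5e-78

    # "Re-rolling reality": two independent 100-roll sequences with identical
    # statistics. The chance they agree roll-for-roll is also (1/6)**100, so
    # a Monte Carlo check never finds a match at any feasible sample size.
    trials = 100_000
    matches = sum(
        [random.randint(1, 6) for _ in range(100)]
        == [random.randint(1, 6) for _ in range(100)]
        for _ in range(trials)
    )
    print(f"Matching sequences in {trials:,} trials: {matches}")  # 0 in practice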

    Comment by RichardC — 22 Nov 2011 @ 2:04 PM

  192. Oh, pathetic; Dan handwaves “from NOAA” and quotes from the 2010 publication. That works because people aren’t given a link, so they won’t see the explicit pointers to more recent work on the same page. This is how obfuscation works: offer what looks almost like a cite to a good source, but quote a bit out of context, so anyone insufficiently skeptical can be fooled into thinking it’s up-to-date information.

    Tiresome.

    Comment by Hank Roberts — 22 Nov 2011 @ 2:20 PM

  193. Dan ponders, “I am curious how you can say that we know with certainty that these events will increase in the future.”

    I believe that’s Hank’s point. With an increased mean and constant variability, the number of extreme highs will invariably increase. This is true regardless of whether climate sensitivity is 1C, 3C, or 6C. To counter it, you’d have to provide some evidence that weather variability will decrease as global mean temperature increases. I’m assuming you aren’t proposing that climate sensitivity is 0C or below.
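
    The arithmetic behind that is easy to check. A minimal sketch (Python; the 1-sigma mean shift and the 3-sigma threshold are illustrative numbers of my choosing, not taken from any of the papers discussed here):

    import math

    def exceed_prob(threshold_sigma, mean_shift_sigma=0.0):
        """P(X > threshold) for X ~ Normal(mean_shift, 1), via erfc."""
        z = threshold_sigma - mean_shift_sigma
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    # A 3-sigma hot extreme under a stationary climate, vs. after the mean
    # warms by 1 sigma with the variance left unchanged.
    p0 = exceed_prob(3.0)       # ~0.13%
    p1 = exceed_prob(3.0, 1.0)  # ~2.3%
    print(f"stationary: {p0:.4%}  shifted: {p1:.4%}  ratio: ~{p1 / p0:.0f}x")

    Even that modest shift multiplies the frequency of the hottest extremes by roughly a factor of 17, with no change in variability at all.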

    Comment by RichardC — 22 Nov 2011 @ 2:20 PM

  194. I have to say, Dan H. is really good at what he does.

    He takes a graph from a blog and posts a link to the image, making up a claim about it.

    The blog he took the image from is gailtheactuary — Gail Tverberg.

    You could look at the source — if you take the time to figure that out, because Dan won’t tell you where he got it.

    Nor will he tell you what the caption says, nor how to find what the author had to say about the chart.

    Clever, he is.

    [Response: Not very actually. Just persistent--Jim]

    Here’s the context:
    https://www.google.com/search?q=site%3Aourfiniteworld.com+food-price-index-vs-brent-oil-price-june-2011

    Background from the blogger:

    http://ourfiniteworld.com/2011/02/16/a-look-behind-rising-food-prices-population-growth-rising-oil-prices-weather-events/

    I’m tired of chasing Dan’s stuff.
    Anyone else want a turn?
    http://www.pbfcomics.com/archive_b/PBF216-Thwack_Ye_Mole.jpg

    [Response: He's used up his allotment of tolerance, thinking that it was unlimited.--Jim]

    Comment by Hank Roberts — 22 Nov 2011 @ 2:47 PM

  195. Hank,
    In case you did not notice, the quote that I posted was from the abstract of the same publication which you referenced in post #188 (i.e., GRL, March 2011). Do you have a link to a more recent publication? None of your commentary, just facts please.
    If you have any other publication that shows another explanation, I would like to see that also. It would be nice to back up assertions with facts.

    Comment by Dan H. — 22 Nov 2011 @ 3:54 PM

  196. Dan H,

    On this page the first graphic is a copy of figure 2 from Barriopedro et al. Notice how 5 of the last 10 years are at the very warmest edge of the distribution, or beyond it. Notice also the mini-graph of the statistical distribution below the main graph. Barriopedro et al. find that, when taking into account the uncertainty ranges in proxy reconstructions of temperature, “at least two summers in this decade have most likely been the warmest of the last 510 years in Europe.” That is significant because those uncertainty ranges are large.
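
    Just to put a rough number on how unusual that clustering would be in a stationary climate, a minimal sketch (Python; the 5% bin and the independence assumption, which ignores year-to-year autocorrelation, are my own simplifications, not Barriopedro et al.’s method):

    from math import comb

    def binom_tail(n, k, p):
        """P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

    # If each summer independently had a 5% chance of landing in the top 5%
    # of a long stationary record, then seeing 5 of the last 10 summers
    # there has a probability of about 6 in 100,000.
    print(f"{binom_tail(10, 5, 0.05):.1e}")  # about 6e-05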

    On this page you’ll find part of one of the graphics from the recent Hansen paper. This shows that not only is there a shift due to the warming trend, but there is also an increase at the high end, i.e., in extreme warm events. Note in particular the top graphic there; that’s for summer, which is the season being discussed.

    Never mind the discussion on those pages; you can discuss those over there if you want, but you’ll also find links to the papers involved.

    As a former sceptic (I was quietly in denial), I still have some doubts about the models, and I suspect you do too. So let’s leave aside considerations based on models.

    Barriopedro et al.’s argument that incidents like the Moscow 2010 heatwave are unlikely to recur for decades is based on model results, and a key thread of the NOAA argument is again based on models. The element of the NOAA argument that isn’t is how much of an outlier 2010 was (top graphic). But Barriopedro et al.’s figure 2 shows us that five of the last 10 years in Europe have been at the very top end of the statistical distribution, and Hansen shows us that globally there has been a massive increase in high-sigma warm events.

    What conclusions do you draw from the observations I outline, with no quote mining from other sources? What do you think?

    Comment by Chris R — 23 Nov 2011 @ 2:23 PM

  197. > Dan H. says 22 Nov 2011 at 3:54 PM
    > the quote that I posted was from …
    > GRL, March, 2011). Do you have the link
    > to a more recent publication? None of your commentary, just facts please.

    As I had posted earlier on 22 Nov 2011 at 11:31 AM
    >> … follow the links they give at ESRL ….

    I quoted their directions at length for how to find that material.

    Here again, briefly, supplying a clickable link:

    “… Several papers have appeared subsequent to ours in the peer reviewed literature…. We provide links to these papers and their principal conclusions on this web page under the header Related Publications….”

    Comment by Hank Roberts — 23 Nov 2011 @ 4:19 PM

  198. “Related Publications” link should go to:
    http://www.esrl.noaa.gov/psd/csi/events/2010/russianheatwave/pubs.html

    Comment by Hank Roberts — 23 Nov 2011 @ 4:33 PM

  199. SREX (the emails are boring this time around, the trolls more so)

    Certainly the report signals a need for countries to reassess their investments in measures to manage disaster risk. New disaster risk assessments that take climate change into account may require countries and people to refresh their thinking on what levels of risk they are willing and able to accept. This comes into sharper focus when considering that today’s climate extremes will be tomorrow’s ‘normal’ weather and tomorrow’s climate extremes will stretch our imagination and capacity to cope as never before. Smart development and economic policies will need to consider changing disaster risk as a core component unless ever more money, assets and people are to be washed away with the coming flood.

    While both the authors of this blog are Co-ordinating Lead Authors of the IPCC Special Report on Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation, and members of the Core Writing Team of the Report’s Summary for Policy Makers, the article does not represent the views of the IPCC or necessarily of either of the authors’ host organisations.

    Comment by john byatt — 23 Nov 2011 @ 6:57 PM

  200. Well, this is top of the pops, isn’t it:

    Increase of extreme events in a warming world
    Stefan Rahmstorf and Dim Coumou
    Potsdam Institute for Climate Impact Research, PO Box 601203, 14412 Potsdam, Germany
    Edited by William C. Clark, Harvard University, Cambridge, MA, and approved September 27, 2011 (received for review February 2, 2011)
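
    For a flavour of the record-counting argument in that paper, here is a minimal Monte Carlo sketch (Python; the 100-year length and the 0.07-sigma/yr trend are illustrative parameters I picked, not the paper’s, and this is not the authors’ published code):

    import random

    def records_in_last_decade(trend_per_year=0.0, n_years=100, sims=20_000):
        """Average number of new hot records falling in the final 10 years
        of a unit-variance Gaussian series with an optional linear trend."""
        total = 0
        for _ in range(sims):
            best = float("-inf")
            for year in range(n_years):
                x = random.gauss(year * trend_per_year, 1.0)
                if x > best:
                    best = x
                    if year >= n_years - 10:
                        total += 1
        return total / sims

    random.seed(0)
    # Stationary case: expectation is sum(1/n for n in 91..100), about 0.10.
    print(records_in_last_decade(0.0))
    # With a warming trend, records cluster near the end; the count rises many times over.
    print(records_in_last_decade(0.07))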

    Comment by JohnBills — 28 Nov 2011 @ 6:38 AM
