RealClimate

Comments


  1. In a typical display of good faith honest brokery, Roger Pielke Jr has accused you of playing games, being uninterested in furthering scientific knowledge, publishing the paper as an attempt to generate media coverage and generally being a pair of untrustworthy sneaks. Nice.

    http://rogerpielkejr.blogspot.com/2011/10/games-climate-scientists-play.html

    A pretty disgraceful attack if you ask me.

    [Response: It is factually wrong, too. Pielke claims we did not report the sensitivity to the start date, but we did. With start date 1910 we get a probability of 78% that the record heat was due to the warming trend, and with start date 1880 we get a probability of just over 80%. It says so very clearly in the passage that Pielke emailed me about:

    "Because July 2010 is by far the hottest on record, including it in the trend and variance calculation could arguably introduce an element of confirmation bias. We therefore repeated the calculation excluding this data point, using the 1910–2009 data instead, to see whether the temperature data prior to 2010 provide a reason to anticipate a new heat record.With a thus revised nonlinear trend, the expected number of heat records in the last decade reduces to 0.47, which implies a 78% probability [(0.47 − 0.105)∕0.47] that a new Moscow record is due to the warming trend. This number increases to over 80% if we repeat the analysis for the full data period in the GISS database (i.e., 1880–2009), rather than just the last 100 y, because the expected number for stationary climate then reduces from 0.105 to 0.079 according to the 1∕n law.”

    The only reason why some of our analysis covers 100 years is because it started out as a purely theoretical study with synthetic time series produced by Monte Carlo simulations, and for those I just picked 100 years. Only later did we apply it to some observational series as practical examples.

    Very sinister games indeed!
    stefan]
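
    [A side note for readers who want to check the 1∕n law themselves: a minimal sketch in Python (illustrative only, not the authors' code). For exchangeable annual values in a stationary climate, the probability that year n sets a new record is 1/n, so the expected number of records in the last decade of an n-year series is the sum of 1/k over that decade:

        # Expected number of record-breaking years in the final decade of a
        # stationary (i.i.d.) n-year series, by the 1/n law.
        def expected_records_last_decade(n_years, decade=10):
            return sum(1.0 / k for k in range(n_years - decade + 1, n_years + 1))

        print(expected_records_last_decade(100))  # ~0.105, the 1910-2009 case
        print(expected_records_last_decade(130))  # ~0.079, the 1880-2009 case

    This reproduces the 0.105 and 0.079 quoted above from the paper.]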

    Comment by SteveF — 26 Oct 2011 @ 5:29 AM

  2. Furthermore, I note from the comments of his blog that Roger has sent Stefan an email (somewhat more polite than his blog post) asking for clarification of the research. One would have thought he would do this before launching a blog-based attack. I wonder what on earth makes him think that writing an innuendo-laced diatribe and then sending an email to your target is a reasonable way to proceed?

    [Response: Well, this approach has worked very well for McTiresome.--eric]

    Comment by SteveF — 26 Oct 2011 @ 5:37 AM

  3. Truly excellent to see progress in this area.

    Thank you Stefan and Dim Coumou

    Comment by John P. Reisman (OSS Foundation) — 26 Oct 2011 @ 9:34 AM

  4. Stefan,

    Roger informs me that you have confirmed his critique, and calls it a “fine example of cherrypicking”:

    “Rahmstorf confirms my critique (see the thread), namely, they used 1910-2009 trends as the basis for calculating 1880-2009 exceedence probabilities. You can call that jackassery or whatever. I call it math and stand by my critique. A finer example of cherrypicking you will not find.”

    [Response: That is truly bizarre, since what I responded to Pielke (in full) was: "We did not try this for a linear trend 1880-2009. The data are not well described by a linear trend over this period." As shown in the paper and above, our main conclusion regarding Moscow (the 80% probability) rests on our Monte Carlo simulations using a non-linear trend line, and of course is based on the full data period 1880-2009. Nowhere did we "use 1910-2009 trends as the basis for calculating 1880-2009 exceedence probabilities", and I can't think why doing this would make sense. Faced with this kind of libelous distortion I will not answer any further questions from Pielke now or in future. As an aside, our paper was reviewed not only by two climate experts but in addition by two statistics experts coming from other fields. If someone thinks that using a linear trend would have been preferable, that is fine with me - they should do it and publish the result in a journal. I doubt, though, whether after subtracting a linear trend the residual would fulfill the condition of being uncorrelated white noise, an important condition in this analysis. -stefan]

    Comment by thingsbreak — 26 Oct 2011 @ 9:34 AM

  5. SteveF wrote: “One would have thought he would do this before launching a blog based attack.”

    Perhaps one who is unfamiliar with Roger Pielke Jr’s longstanding pattern of such behavior would have thought that.

    Comment by SecularAnimist — 26 Oct 2011 @ 9:47 AM

  6. A counterintuitive feature of all this: if there’s no trend, that makes the record still more of a statistical outlier, yet it must be due to natural variability since there’s nothing else left to which to attribute it. But if there is a trend, that opens up a possible alternate attribution–even as the record becomes ‘less extreme,’ so to speak.

    Almost a paradox, and something that will probably generate some confusion. (Indeed, perhaps I have it wrong, and it has therefore already done so!)

    Comment by kevin McKinney — 26 Oct 2011 @ 9:58 AM

  7. The problem Stefan explains with the spuriously-normalized year-round urban heat island adjustment in the previous study is a crisp, iconic example of how statistical techniques can go astray if you lose track of the meanings behind the numbers.

    Stefan’s “key result” is also very well-phrased: “The number of record-breaking events increases depending on the ratio of trend to variability.” But of course!

    Thanks for your work here, and for another good read.

    Comment by Daniel C Goodwin — 26 Oct 2011 @ 10:26 AM

  8. Well, since Roger has no reputation worth guarding, what, indeed, does he have to lose?

    Comment by Ray Ladbury — 26 Oct 2011 @ 11:19 AM

  9. Stefan-

    Could you answer the following two questions?

    The addition of data from 1880 to 1910 to the analysis changed the trend (and expected probability of extremes in the current decade, using your unique definition of “trend”) as compared to the analysis starting in 1911 by:

    a) zero
    b) a non-zero amount

    A second question for you:

    Please point to another attribution study — any ever published — that uses the same definition and operationalization of "trend" that is introduced in this study.

    Thanks!

    [Response: I'll let Stefan answer the second question if he can make sense of it (I cannot, since there is no new definition of trend introduced), but I'll answer the first question. The answer is (b).--eric]

    Comment by Roger Pielke, Jr. — 26 Oct 2011 @ 11:21 AM

  10. This is a bit off subject. I love what this site does and it’s helped to make me much more informed about AGW. I come everyday and read the articles and postings.

    I’ve been trying to find the Youtube video of the Heartland Institutes forum on AGW which I believe occured in 2008. The main speaker asked the attendants to please quit using “It hasn’t warmed since 1998″ meme because, as I remember, he told them “When we make claims that are so easily debunked we lose credibility.”

    I first saw this clip in one of your posts, as I remember.

    Thanks, dj

    Comment by Dale — 26 Oct 2011 @ 11:27 AM

  11. Thanks Eric,

    What then is (a) the value for the trend starting from 1880 vs. 1910 and (b) the expected probability of extremes in the current decade started from 1880 (it was 0.47 starting from 1911)?

    Thanks!

    [Response: You're missing the argument about the importance of nonlinear trends, particularly wrt the large increase in warming since 1980, in driving the pattern of expected and observed extremes, as discussed on pages 3-4 of the article. You seem to be thinking only in terms of a linear trend. With a nonlinear trend, what happened from 1880 to 1910 is relatively less important than it would be in an analysis based on a linear trend --Jim]

    Comment by Roger Pielke, Jr. — 26 Oct 2011 @ 11:46 AM

  12. Thingsbreak has an update that is apropos.

    I don't know how other RC readers and lurkers feel, but once again, Pielke's games are boring and shrill IMO.

    Comment by Former Skeptic — 26 Oct 2011 @ 12:01 PM

  13. @ Dale

    Try this one (but it’s from 2009): http://www.viddler.com/explore/heartland/videos/58/

    Comment by Daniel Bailey — 26 Oct 2011 @ 12:02 PM

  14. @10 Dale, I think you refer to the presentation given by Scott Denning in 2010: http://www.youtube.com/watch?v=kkL6TDIaCVw

    He gave another excellent presentation this year at the same conference: http://www.youtube.com/watch?v=P-oXWUdoXX0

    Comment by cynicus — 26 Oct 2011 @ 12:07 PM

  15. Interesting about record setting events not being as frequent when the variability is greater compared to a long-term trend. Intuitively I guess that if the long-term trend is basically flat, there’s less likelihood of either record heat waves or cold snaps over time because there’s less nudging either way. If the trend is warming I’d expect fewer record-setting cold spells, but that would be more than offset by the rising frequency of record heat waves. With a long-term cooling trend I’d expect the opposite. Strong short-term variability would mean that even in a warming trend there would be more cool years than for the same trend with less variability, and vice-versa for cooling trends.
    Is that about right?

    @4 thingsbreak:
    I could easily find a better example in his cherry-picking 1998 as the starting point to construct a temperature trend that was flat, when no other years around that time produced flat trends. He exited the discussion at SkS with the excuse that he'd blog about it later, but so far hasn't. And as with Stefan and Dim's paper, he misrepresented (or severely and continually misunderstood) what was actually presented to him.

    Comment by WheelsOC — 26 Oct 2011 @ 12:15 PM

  16. @14 WheelsOC

    That (SkS) was Senior, not Junior.

    Comment by thingsbreak — 26 Oct 2011 @ 12:20 PM

  17. The conclusion I reached in 1975 was “punctuated equilibrium”. This is the first study that I have seen that reaches in that direction.
    While it is appropriate to consider the question from the "bottom up", the species will be gone before the data is all in. Calculating the category of the tornado is silly when the periphery is spawning chicks that are bending steel.
    You are still applying mathematics to data, which is appropriate because that is what you know how to do. We can generate no definitive answers by considering all the denizens of Lovelock's lovely forest. But it's a biological process, nevertheless.

    Comment by blue7053 — 26 Oct 2011 @ 12:22 PM

  18. @14 Regarding SkS and the short-term trend: this concerns the father of the Roger Pielke that things-break is talking about. Trees, apples and distance travelled seem to apply here.

    Comment by cynicus — 26 Oct 2011 @ 12:31 PM

  19. This is the point when Pielke Jnr. should step down gracefully, be honorable and apologize for jumping the gun.

    Instead what I'm reading here and on his blog is Roger Jnr. being contrarian, argumentative and making yet more unfounded and snide allegations. Ironic given that Roger authored a book titled "The Honest Broker". His behaviour (now and prior to this) has been the very antithesis of that expected from an "honest broker". Why does Andy Revkin have Pielke Jnr. in his Rolodex?! Andy, are you reading this and taking note?

    Roger might also want to read Barriopedro et al. (2011), who looked at a much longer study period than the GISS data permit and who concluded:

    “Mega-heatwaves” such as the 2003 and 2010 events broke the 500-year-long seasonal temperature records over approximately 50% of Europe. “

    Quite scary really.

    There is another option open to Roger: instead of nitpicking, engaging in innuendo and making false accusations, he can demonstrate convincingly and quantitatively to all here that he has a valid point that has a marked impact on the paper's findings. In other words, he can try and stick to the science.

    Comment by MapleLeaf — 26 Oct 2011 @ 12:36 PM

  20. https://www.google.com/search?q=misunderstood+Pielke+Jr.
    That’s settled.

    Could we not talk about that any more?
    Could we talk about the science paper as published?

    [Response: Your link was broken so I fixed it. Here is a nice example of an economist 'misunderstanding' RPJr.]

    Comment by Hank Roberts — 26 Oct 2011 @ 12:43 PM

  21. Whoops, my mistake. I’ll just shut up now.

    Comment by WheelsOC — 26 Oct 2011 @ 1:36 PM

  22. Thanks Jim,

    I understand what you are saying, but I am asking for some numbers to back up Eric’s claim in #9 above.

    Can you provide the numbers that show (a) how the addition of 1880-1909 alters the trend calculation (from that starting in 1911) and (b) how the addition of 1880-1909 changes the expected number of heat records (from the 0.47 based on 1911)?

    My assertion is that the addition of 1880-1909 changes neither of these values, and Eric said otherwise. So I’d like to see the numbers.

    I agree with you 100% that “With a nonlinear trend, what happened from 1880 to 1910 is relatively less important than it would be in an analysis based on a linear trend”.

    What I would like to see is the quantification of “relatively less important” in this case. Can you provide the numbers behind (a) and (b)?

    Thanks!

    [Response: You can back calculate (b) from the numbers given: (x - .079)/x ~= 0.8, so x ~= 0.4--Jim]

    Comment by Roger Pielke, Jr. — 26 Oct 2011 @ 1:39 PM

  23. I've been trying for some time to draw attention, including that of the excellent RealClimate crew, to this 1992-vintage JASON report:

    http://www.fas.org/irp/agency/dod/jason/statistics.pdf

    In 1992 they put the probability of there being no climate change at less than p=1e-5, by pretty much the same method this new article uses.

    Comment by Poul-Henning Kamp — 26 Oct 2011 @ 1:53 PM

  24. I am not Stefan, but if I understood the paper correctly, I might be able to answer Roger’s questions.

    What then is (a) the value for the trend starting from 1880 vs. 1910 and (b) the expected probability of extremes in the current decade started from 1880 (it was 0.47 starting from 1911)?

    (a): the trend is a curve, not a value. The curve from 1880 is essentially a backward extension of the curve from 1910. Around 1910 you might see small differences (due to the smoothing technique used, and edge effects) but not later.
    (b) It too will be 0.47. Note that the last decade only has to “beat” the last previous record, and for this positive-trending time series that record will not be from between 1880 and 1910 — rather, from around 1930 or thereabouts.

    …the same definition and operationalization of "trend" that is introduced in this study.

    I will not claim that low-pass filtering goes all the way back to Joseph Fourier… but I wouldn’t be surprised if it did ;-)
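
    [A minimal sketch of this kind of smoothing in Python, using LOWESS as a stand-in low-pass filter; the paper uses a different smoother, and the synthetic series here is made up, so this is illustrative only:

        # Extract a nonlinear "trend" as a smooth, low-pass-filtered version
        # of a noisy series; the residual is the variability about the trend.
        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        rng = np.random.default_rng(0)
        years = np.arange(1880, 2010)
        temps = 5e-5 * (years - 1880) ** 2 + rng.normal(0, 0.5, years.size)

        trend = lowess(temps, years, frac=0.3, return_sorted=False)
        residual = temps - trend
        print(residual.std())  # should be close to the injected noise level

    The smoothing span (frac) plays the role of the filter's cutoff timescale.]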

    Comment by Martin Vermeer — 26 Oct 2011 @ 2:09 PM

  25. -24-Martin Vermeer

    Indeed, this is my view as well from reading the paper. Another way to put the same point — the addition of 1880-1909 does not change anything. In fact, no matter what the temperature did prior to 1880, if I extended the record back to say 1750, I could be 99% certain that the recent record was due to trend (using the 1/n rule).

    My question about the use of “trend” was specific to the climate attribution literature, and not the idea of smoothing itself;-)

    Thanks.

    Comment by Roger Pielke, Jr. — 26 Oct 2011 @ 2:29 PM

  26. Roger first claimed that Stefan left out the 1880-1910 data.

    In actuality, data from 1880-2009 were used in the paper. Roger was grossly mischaracterizing Stefan and Dim.

    Roger then claimed that Stefan “confirmed” Roger’s critique, explicitly claiming that Stefan confirmed that he “used 1910-2009 trends as the basis for calculating 1880-2009 exceedence probabilities.”

    In actuality, Stefan merely agreed that he did not use a linear trend from 1880-2009. Roger was grossly mischaracterizing Stefan.

    Roger is now claiming that his original position was that “the methodology used by Rahmstorf renders the data prior to 1910 irrelevant to the mathematics used to generate the top line result”, a position that appears nowhere in his blog post.

    In actuality, Roger originally claimed that the 1880-1910 data were ignored entirely. Roger was grossly mischaracterizing Roger.

    I’m seeing a pattern.

    Comment by thingsbreak — 26 Oct 2011 @ 2:37 PM

  27. Very odd how Roger Jnr. is now trying to continue the discussion here while ignoring the fact that he has accused the authors of cherry picking and deceiving readers; I find that pretty duplicitous. Even more bizarrely, Roger is now asking other people to provide him with numbers while failing to back up his assertions with numbers.

    Has Roger Jnr. any intention of apologizing to the authors and any intention of making his own calculations?

    Stefan and the RC moderators continue to show incredible patience and tolerance.

    Comment by MapleLeaf — 26 Oct 2011 @ 2:48 PM

  28. for Dale: answered in the most recent of the Open threads

    Comment by Hank Roberts — 26 Oct 2011 @ 3:00 PM

  29. This is too funny. Roger is now saying that he never claimed Stefan and Dim did not include the 1880-1910 data, but rather that the “the methodology used by Rahmstorf renders the data prior to 1910 irrelevant to the mathematics used to generate the top line result”.

    Yet, Roger himself plainly said those data were excluded from analysis:

    “any examination of statistics will depend upon the data that is included and not included. Why did Rahmsdorf and Coumou start with 1911?”

    “There may indeed be very good scientific reasons why starting the analysis in 1911″

    “my assertion [is] that they analyzed only from 1911.”

    The last one is particularly amusing, in its full context:

    “You can assert that they were justified in what they did, and explain why that is so, but you cannot claim that my assertion that they analyzed only from 1911 is wrong.”

    No, Roger, I can and do say that your “assertion that they analyzed only from 1911 is wrong.”

    Comment by thingsbreak — 26 Oct 2011 @ 3:18 PM

  30. [OT - please note your post was moved to the open thread]

    Comment by Boze Penguin — 26 Oct 2011 @ 3:38 PM

  31. Thanks for this article. It is applicable to
    http://community.nytimes.com/comments/dotearth.blogs.nytimes.com/2011/10/20/skeptic-talking-point-melts-away-as-an-inconvenient-physicist-confirms-warming
    where McTiresome misreads Dai.

    Comment by Edward Greisch — 26 Oct 2011 @ 4:03 PM

  32. Technically, if we’re going with non-linear (curvy) trendlines, then you only need about 30 years of recent uptick to claim correlatable significance (and >50% derived percentages) for nearly any meteorological deviations extending back all the way to 1850 if you like, no? Can we declare a non-linear trend and then start the data at 1950 and still be fine? The veritable attribution floodgates are open, yes?

    Comment by Salamano — 26 Oct 2011 @ 4:06 PM

  33. A more appropriate title for Roger’s post is “The games ‘skeptics’ play”. Below is the latest inane commentary that Roger Jnr. is now peddling:

    Stefan Rahmstorf flips out:

    “Faced with this kind of libelous distortion I will not answer any further questions from Pielke now or in future.”

    He also tries the appeal to authority:

    “our paper was reviewed not only by two climate experts but in addition by two statistics experts coming from other fields.”

    And finally:

    “If someone thinks that using a linear trend would have been preferable, that is fine with me – they should do it and publish the result in a journal.”

    Well guess what? That has already been done:
    http://www.agu.org/journals/gl/gl1106/2010GL046582/

    And they found no trend from 1880. Makes one wonder why RC11 decided to adopt a methodology that makes that earlier period disappear and then flip out when pressed on that methodological choice.

    http://www.realclimate.org/index.php/archives/2011/10/the-moscow-warming-hole/comment-page-1/#comment-217595

    More empty rhetoric. Just because Dole et al. (the paper that gives Roger the answer he craves) used a linear trend does not make it correct or superior. Which yields a better fit to the data (i.e., a higher R^2): a linear trend or a non-linear trend?

    Also, Stefan said “If someone thinks that using a linear trend would have been preferable, that is fine with me – they should do it and publish the result in a journal.”

    Roger then presents Dole et al. again to try and say "ah ha!". But nowhere in their paper do Dole et al. make the case that a linear trend is preferable to a non-linear trend. So that is not a valid answer to Stefan's challenge. It seems that Roger should read the Dole paper again too.

    I’m assuming that Stefan used a linear trend in Fig. 2 above to be consistent with Dole et al. Roger also needs to read the main post above, it nicely explains why Dole et al did not find a warming trend in the data. So the entire point about trends is moot.

    The fact remains that Roger Pielke Jnr's initial libelous assertion is still wrong, but that has not stopped him from repeating it.

    The large picture of cherries on Roger's page (his dad prefers to use images of Pinocchio to smear climate scientists) is clearly intended to smear Stefan and Dim, and climate scientists in general. His doing that, and his disgraceful behaviour here and from the safety of his blog, underscores the vacuity of Roger's supposed argument. It is Roger who is 'flipping out'.

    Fortunately for Roger, it is never too late to admit error and say sorry. Well, at least doing so is not an issue for men of honor and integrity, or "honest brokers" for that matter.

    Comment by MapleLeaf — 26 Oct 2011 @ 5:24 PM

  34. Stefan — Clear. Thank you.

    Comment by David B. Benson — 26 Oct 2011 @ 6:08 PM

  35. A remark on the question of anthropogenic influence on the Moscow heatwave:
    All discussions and the papers I have seen until now (including Dole et al. and the paper discussed here) have concentrated on the influence of the increase in mean temperature.
    However, as is widely acknowledged and also stressed by Dole et al., a main reason for the heatwave was a long-lasting "blocking" situation (and they attribute that "automatically" to natural variability). Blocking has to do with the persistence of weather situations in general. One possible impact of anthropogenic climate change might be a change in the (spatial and temporal) persistence of weather situations (e.g. stationary Rossby waves). There is not much literature on this topic, although the persistence of weather situations might have a high impact on extreme events (droughts as well as floods).
    Any thoughts?

    Comment by Urs Neu — 26 Oct 2011 @ 6:15 PM

  36. Ricky Rood looks at other ways in which the 2010 Russian heat wave was extreme: http://www.wunderground.com/blog/RickyRood/comment.html?entrynum=208.

    Let’s continue: The winter of 2010 and the spring of 2011 were characterized by very high food prices. An essay by Sarah Johnstone and Jeffrey Mazo entitled, Global Warming and Arab Spring, draws a convincing line that the pressure on food prices was a contributor to the start of the revolutions of the Arab Spring – the tumultuous uprising against many Arab governments. (also here) To diffuse the arguments that are sure to follow – this was a contributor, along with many other factors that came together to fuel a movement. This is the idea of climate extremes as a threat multiplier.

    When we talk about climate change and global warming, we often talk of it in the future. We talk about droughts and floods. But the consequences of droughts and floods include damage to crops and damage to cities. The impacts are local and direct, for some, but beyond the immediate, local impacts are the impacts through markets, budgets, and political systems. As these impacts tumble across the world, the results are unpredictable.

    The reality of global warming is that events such as the Russian heat wave occur more frequently. The markets connect events globally. They connect parts of the world with agricultural excesses and deficiencies – but, if droughts and floods are more frequent and more extreme, then markets connect deficiencies with deficiencies. The impact of climate change is more disruption, more instability – a threat multiplier.

    Comment by Pete Dunkelberg — 26 Oct 2011 @ 6:25 PM

  37. I have just recently heard of Extreme Value Theory (EVT) and have been reading the blog post "Extreme Heat" at Tamino's blog and also viewing Dan Cooley's SAMSI talk on "Statistical Analysis of Rare Events".

    My question is: can you give me some idea of the tradeoffs involved in deciding when to take an EVT vs. a Monte Carlo approach to extreme events?

    thanks.

    Comment by oarobin — 26 Oct 2011 @ 8:19 PM

  38. RE: 35

    Hey Urs,

    As what I intended to share earlier today is lost, may I suggest in its place that before we consider an event abnormal, we examine some simple points. One, the runway numbers of Domodedovo International, as an indication of local normal seasonal winds. Two, the 2010 wind-rose for July in Moscow.

    If the weather system was abnormal, the two values should not share a similarity. If the system was normal but more intense, the runway directions and wind-rose should match up well, suggesting it may not be a change in the pattern but in the duration/intensity. This should point towards whether or not it is natural variation "on steroids".

    The next effort should be to back out to get a wider view, to see if there might be a corresponding but inverted weather variation within, say, 3000km to the West. Point being, most extremes in the temperate zone should have an "inverted partner".

    If you have a condition changing, say, the convective processes in one region, there should be an equivalent but opposite event (cool rainfall) in the vicinity. The distance between them is governed by the height of the Rossby convective plume, where heat and water vapor content suggest the residence time. (Add more heat, and the duration and content of high altitude water vapor increase, as well as the height that it is lifted to/falls from. (think PSCs))

    The main point I am suggesting is that though analysis and the law of large numbers can tell us a lot about processes that are measured with what seem to be random numbers, they may not actually apply here.

    With an X-bar chart we can find the change about an expected mean, with a rule such as three points on the same side of the mean suggesting the values around the mean may be skewed. I do not think we can assume that variability decreases in the face of extremes. I think that idea misses that weather may not follow normal analytic rules…. (As it is an irrational and partially open system process, the only limits are governed by airmass content and direction (phase).)

    Cheers!
    Dave Cooke

    Comment by ldavidcooke — 26 Oct 2011 @ 10:22 PM

  39. @35 A recent publication that addresses stationary Rossby waves: "Warm Season Subseasonal Variability and Climate Extremes in the Northern Hemisphere: The Role of Stationary Rossby Waves" by S. Schubert et al., Journal of Climate, Vol. 24, No. 18, 4773-92 (15 Sept. 2011). I quote from their abstract http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-10-05035.1
    “The first REOF in particular consists of a Rossby wave that extends across northern Eurasia where it is a dominant contributor to monthly surface temperature and precipitation variability and played an important role in the 2003 European and 2010 Russian heat waves.” (REOF is a rotated empirical orthogonal function)

    Comment by John Pollack — 26 Oct 2011 @ 10:59 PM

    > most extremes in the temperate zone should have an "inverted partner".

    Thus there can’t be an overall trend, just variations?

    Has someone published work establishing such an “inverted partner” exists, do you recall? Or is this an idea tossed out hoping someone can find facts for it? (Not me, I’m done searching for such, but I remain fascinated by the process.)

    Comment by Hank Roberts — 26 Oct 2011 @ 11:38 PM

  41. Urs, I did find quite a few papers, poking around, some mentioning changes in blocking with warming climate. Have you any in particular? I know nothing about the subject, obviously. Here’s a press release: http://www.sciencedaily.com/releases/2010/02/100218125535.htm
    http://www.eurekalert.org/pub_releases/2010-02/uom-wpt021810.php

    Comment by Hank Roberts — 26 Oct 2011 @ 11:41 PM

  42. from the Eurekalert link:
    “Atmospheric blocking occurs between 20-40 times each year and usually lasts between 8-11 days, Lupo said. Although they are one of the rarest weather events, blocking can trigger dangerous conditions, such as a 2003 European heat wave that caused 40,000 deaths. Blocking usually results when a powerful, high-pressure area gets stuck in one place and, because they cover a large area, fronts behind them are blocked. Lupo believes that heat sources, such as radiation, condensation, and surface heating and cooling, have a significant role in a blocking’s onset and duration. Therefore, planetary warming could increase the frequency and impact of atmospheric blocking.

    “It is anticipated that in a warmer world, blocking events will be more numerous, weaker and longer-lived,” Lupo said. “This could result in an environment with more storms. We also anticipate the variability of weather patterns will change dramatically over some parts of the world, such as North America, Europe and Asia, but not in others.”

    Lupo, in collaboration with Russian researchers from the Russian Academy of Sciences, will simulate atmospheric blocking using computer models that mirror known blocking events, then introduce differing carbon dioxide environments into the models to study how the dynamics of blocking events are changed by increased atmospheric temperatures. The project is funded by the US Civilian Research and Development Foundation – one of only 16 grants awarded by the group this year. He is partnering with Russian meteorologists whose research is being supported by the Russian Federation for Basic Research.

    Lupo’s research has been published in several journals, including the Journal of Climate and Climate Dynamics. He anticipates that final results of the current study will be available in 2011.”

    Comment by Hank Roberts — 26 Oct 2011 @ 11:42 PM

  43. In fact, no matter what the temperature did prior to 1880 [my emph. - MV], if I extended the record back to say 1750, I could be 99% certain that the recent record was due to trend (using the 1/n rule).

    Roger, minor nit… reaching 99% would mean, by the 1/n rule, going back 2 kyrs, not 260 years ;-)

    More importantly, "no matter what the temperature did prior to 1880" is not true. Temperatures would have to remain well below the 1930s level for the whole period 1750-1880 for this to work out, not setting any record that would not subsequently be broken in the 1930s. Only then would your statement hold.
    But then, wouldn’t such a prolonged low-temperature period provide added background to the recent record-breaking decade? Don’t you agree that ‘unprecedented in 260 years’ is a legitimately stronger statement than ‘unprecedented in 130 years’?

    My question about the use of “trend” was specific to the climate attribution literature, and not the idea of smoothing itself;-)

    Sure, and I only wanted to point out that people have been doing these things since well before climatology became a science with a name of its own. But no, the paper isn’t doing ‘attribution’. Attribution is physics, not something a purely statistical analysis can do.

    [Response: Martin, that is actually one of those cases where a layperson's intuition agrees with statistics. Any layperson will understand that a 1000-year heat record is more exceptional and less likely to happen just by chance - and thus more likely due to climate change - than a 100-year record or a 10-year record. That is indeed the case, and is why considering that the Moscow heat wave was a 130-year record gives a greater probability of it being due to climate change than looking at it just as a 100-year record. Very simple.
    Spot-on observation about attribution, by the way. -stefan]

    Comment by Martin Vermeer — 27 Oct 2011 @ 1:40 AM

  44. RPJr writes in an update (2):

    “I’ll be using the RC11 paper in my graduate seminar next term as an example of cherry picking in science — a clearer, more easily understandable case you will not find.”

    Let’s hope the students also learn about critical thinking skills. Perhaps a pointer to SkS could provide them with a pool of cherrypicking examples that Roger could have used to his heart’s content.

    Comment by Bart Verheggen — 27 Oct 2011 @ 1:57 AM

  45. RE:40

    Hey Hank,

    The original NOAA paper, which Drs. Trenberth and Rahmstorf countered concerning the lack of a warming trend, describes the "Siberian Partner". You cannot have an "event" without a counter in a closed system. The changes we see are where the partially open system is closing down.

    That there are extremes in both directions was projected tens of years ago. That these “events” should have a corresponding counter event is what concerns me wrt polar sea ice melt.

    If there is a “pipeline” that exhausts equatorial heat, then it should point to a cooling effect at a lower latitude. It is kind of like the development of a Great Plains tornado outbreak. Overcome the 4km/700-500mb temperature inversion and like a cork in a soda bottle, the release of the backpressure should release most of the contents. We are not seeing that “event” in the weather patterns…

    It is like if you reduce the heat by allowing it to escape, you are reducing the pressure, causing the “water” to “boil” anyway… You can’t win for losing until you either remove the pot from the stove or put a pressure lid on it…

    Cheers!
    Dave Cooke

    Comment by ldavidcooke — 27 Oct 2011 @ 7:12 AM

  46. #44–”Perhaps a pointer to SkS could provide them with a pool of cherrypicking examples that Roger could have used to his heart’s content.”

    What a delightfully wicked suggestion!

    Comment by kevin McKinney — 27 Oct 2011 @ 7:47 AM

  47. Bart @44, you must then also be referring to Roger Pielke Jnr's dad's cherry picking exposed at Skeptical Science ;) Examples of cherry-picking by Pielke Snr. have also been discussed here at RealClimate. So Roger Jnr. need not look very far at all for real, genuine examples of shameless cherry picking, not trumped up ones.

    And yes, any student with critical thinking skills will see right through Roger Jnr’s plan, especially when they read the paper in question ;)

    Comment by MapleLeaf — 27 Oct 2011 @ 9:41 AM

  48. Sorry, David, once again, I have no idea what you’re talking about.
    This is why cites to sources help in discussing science, and you don’t.

    Comment by Hank Roberts — 27 Oct 2011 @ 9:42 AM

  49. MapleLeaf @ 47: “And yes, any student with critical thinking skills will see right through Roger Jnr’s plan, especially when they read the paper in question ;)”

    Um, yes, about that paper, where is it besides somewhere over the paywall?

    Very few people will read it, so RPJr scores for the absurd, but ever so convenient for Big Carbon, "evil climatologist" meme.

    [Response: See link in our PS. ]

    Comment by Pete Dunkelberg — 27 Oct 2011 @ 10:27 AM

  50. As entertaining as it is, we should probably ignore Pielke Jnr. digging himself ever deeper into denial and a deep hole ;) Urs Neu @35 asked about blocking. I found this paper that may be useful, but their focus is on winter temperatures.

    Stefan,

    I was very interested to read that the annual mean UHI adjustment was applied for all months in the GISTEMP data. What about HadCRUT and NCDC? Do they do the same? What would Dole et al. (and you) have found for July and August trends had you looked at NCDC or HadCRUT?

    Much focus has been placed on July temperatures, but the Russian heat wave extended into mid August. Any thoughts on that?

    Comment by MapleLeaf — 27 Oct 2011 @ 10:28 AM

  51. RE:48

    Hey Hank,

    Though slightly OT, look to Dr. Muller's paper; a clear example can be found near the same latitude between the heating in Alberta, Canada, contrasting the cooling in Siberia. To wax philosophical, Einsteinian physics suggests that for every point of dark matter ("black hole" if you will) there should be a contrasting "white hole". The basics of closed-system physical principles going back to Newton suggest this.

    (Cosmologically, if we remove matter from our perception/space-time you are left with “dark energy”.)

    Similarly in atmospheric physics, change the conditions and the physics of the process change. (One thing does not change: you can transform matter to energy if you invest sufficient energy, which should have a counter: remove sufficient energy and you get matter.)

    The point being, in a closed system the physics dictates balanced/equal reactions. It is when a closed system is partially open that you change the equation. We have sufficient evidence of equal and opposites, though there may be a difference in perception due to one being concentrated and the other diverse. We can perceive the concentrated effect; we cannot perceive the diverse.

    Now as to specific URLs, sorry, my current tools do not support that ability. As to URL availability, they are clearly available, as most have either been referenced here in RC or in other media (those which are based on integrity rather than popularity).

    (Hence the reason I have now tried to couch my point in universal principles. Using a sledge to hammer a nail is possible; but you end up with a sore thumb…, sorry…)

    Cheers!
    Dave Cooke

    Comment by ldavidcooke — 27 Oct 2011 @ 10:29 AM

  52. Pete @ 49,

    But surely an honest lecturer would make sure the students had access to a copy of the paper under discussion, as well as a link to the informative post by the authors here at RC? Oh, but being open and honest will not help Roger’s narrative…never mind then.

    Comment by MapleLeaf — 27 Oct 2011 @ 11:44 AM

  53. For a first approximation, an outlier in the direction of a trend tells you what’s coming. Granted you have to know what the trends are first. Just from the shape of a bell curve (and this is far from original) one can see that you don’t have to move the mean very far in one direction for outliers on that side to become much more frequent. A one hundred year heat wave or flood readily becomes a ten or five year event. The new hundred year flood is going to drown people.
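
    [Pete's point is easy to quantify, assuming Gaussian variability (a simplification). A sketch:

        # Shift the mean of a normal distribution up by one standard deviation
        # and a "100-year" threshold is exceeded roughly every 11 years.
        from scipy.stats import norm

        threshold = norm.ppf(0.99)           # old 100-year level: 1% chance per year
        p_new = norm.sf(threshold, loc=1.0)  # same threshold, mean shifted by 1 sigma
        print(1.0 / p_new)                   # new recurrence time: ~10.8 years
    ]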

    Comment by Pete Dunkelberg — 27 Oct 2011 @ 12:01 PM

  54. Those objecting to the 100-year Russian record seem oblivious to how extraordinary that century's outliers may be.

    In 1915, reflecting the abrupt increase in settlement caused by the opening of the Trans-Siberian Railroad, the region hosted a wave of climate-related forest fires that released teragrams of smoke and soot over a burnt area possibly exceeding a million square kilometers.

    Nature, 323, 116-117, 1986

    Comment by Russell — 27 Oct 2011 @ 1:50 PM

  55. MapleLeaf, John, thanks for the links.

    Thanks, Hank. Lupo’s project seems still to be in the phase of announcement: http://www.eurekalert.org/pub_releases/2011-09/uom-mrt092011.php (from Sep 2011).
    However, it is not only "blocking" (which relates to strong high pressure systems that "block" fronts from intruding into an area) which is interesting. Blocking is only one part of the problem. It might be interesting to look at the whole system of Rossby waves. Persistence of waves can occur not only with high pressure, but also e.g. with low pressure systems (potentially leading to flooding) or with persistent zonal flow (supporting the occurrence of extreme cyclones). We are planning a project in that direction. So I'm looking around for what already exists.
    If you could provide links to the papers you found that look at changes in blocking, I would appreciate it.

    Besides, I am currently at the WCRP conference in Denver, where the question of changes in circulation patterns due to anthropogenic climate change as an issue for attribution of events has been put up (but not answered…) just this morning…

    Comment by Urs Neu — 27 Oct 2011 @ 2:24 PM

  56. oarobin @37 — I’m an amateur at this. Extreme value theory (EVT) appears to require discarding all but the most extreme event in a time interval, say a year. If that is correct, much important information has been lost. I can’t comment on a Monte Carlo approach because I’m uncertain just what you are referring to.

    Regarding precipitation, hereinafter just rain, there are attempts to fit the tail of the rainfall histograms using a (generalized) Pareto distribution. This has the advantage of using all the big, hence rare, events. Even better IMHO is fitting the entire rainfall histogram. I have found efforts using the log-normal distribution and also the gamma distribution; neither works as well as desired measured over all locations.

    There are but two preliminary attempts (that I have found on the web) using stable distributions. These are found under names such as Levy stable, Levy-Pareto or even alpha stable. For a taste of the remarkable properties of stable distributions, see the introductory chapter of John P. Nolan's forthcoming book; this chapter is web accessible.

    To whet your appetite, one of the stable distributions is called by Nolan, and maybe also Feller (1971), the Levy distribution. For large x it rolls off as a -3/2 power law, so it looks like one of the generalized Pareto distributions out at the rare-event end. But unlike the other distributions mentioned in earlier paragraphs, it is stable and so enjoys the i.i.d. property well known from the most popular of the stable distributions, the normal distribution. Of course the Levy distribution is one-sided and has a very heavy right tail, heavier than the log-normal or gamma distributions. To the extent that the Levy distribution or close relatives actually characterize rainfall events, it suggests that some rare events will be truly exceptional, so exceptional that perhaps for reasons of physical limitations one should instead consider a truncated version. (Various truncated versions of stable distributions similar to the Levy distribution appear to be popular in finance.)

    I have, just in the past week+, discovered a remarkable property of the Levy distribution, not shared by any other stable distribution, which motivates using it to characterize rainfall events. This helps in encouraging you to study the Levy distribution for the purposes of better understanding rare rainfall events.
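
    [The tail behavior described above is easy to inspect numerically; scipy's levy is the standard one-sided Levy case. A sketch:

        # The one-sided Levy pdf rolls off as a -3/2 power law far out in the
        # right tail: levy.pdf(x) * x**1.5 approaches 1/sqrt(2*pi) = 0.3989...
        import numpy as np
        from scipy.stats import levy

        x = np.logspace(1, 5, 5)
        print(levy.pdf(x) * x ** 1.5)
        print(1 / np.sqrt(2 * np.pi))
    ]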

    Comment by David B. Benson — 27 Oct 2011 @ 6:01 PM

  57. @David B. Benson,
    Thanks for the reply (I am even more of an amateur at this). The Monte Carlo approach I was referring to was the one described in the Rahmstorf/Coumou paper, whereby one fits a statistical model to a time series with random and deterministic components, then generates a dataset of 1000s of model runs with the same deterministic component but varying random components, and then calculates statistics on the dataset; see the update "PS (27 October)" on the main post for a better explanation. There is a nice introductory description of EVT available at http://www.samsi.info/communications/videos by Dan Cooley (including a short course by the same speaker and several other nice talks) that shows why it is a reasonable idea to throw away most of the data when looking at extremes (I think it has something to do with biasing the data fit to model parameters, as there are just far more non-extreme values).
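
    [A minimal sketch of that Monte Carlo idea in Python; the trend slope and noise level are made up for illustration, and this is not the authors' code:

        # Keep the deterministic trend fixed, resample the noise many times,
        # and count record-breaking years in the final decade of each run.
        import numpy as np

        rng = np.random.default_rng(42)
        n_years, n_runs, sigma = 130, 10_000, 0.5   # assumed values
        trend = 0.008 * np.arange(n_years)          # assumed deterministic component

        series = trend + rng.normal(0, sigma, (n_runs, n_years))
        prior_max = np.maximum.accumulate(series, axis=1)[:, :-1]
        is_record = series[:, 1:] > prior_max       # records in years 2..n
        print(is_record[:, -10:].sum(axis=1).mean())  # expected records, last decade

    Rerunning with the trend set to zero recovers the stationary 1/n expectation, and comparing the two numbers gives the fraction of expected records attributable to the trend, as in the paper's headline result.]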

    I was looking for something along the lines of the strengths and weaknesses of the two approaches.

    Your stuff on stable distributions looks nice; have you got a writeup (link) on this, or is it original research?

    Comment by oarobin — 28 Oct 2011 @ 12:11 AM

  58. RE:55

    Hey Urs,

    Just an interjection: the NOAA paper wrt Moscow's summer temp. abnormality discussed that the driver of the blocking condition related to a weakened polar vortex. Personally, after several years of examining the NCDC SRRS NH Analysis of the 250mb data set, it seems that cross-zonal flow strengthens when the heat content differential between the equator and pole is greatest. What appears to happen is the upper air currents shift from more zonal flows to more cross-zonal flows, as evidenced in the N. Jet Stream. (This can be noted in the changes in the polar vortex early last Winter, with the weak vortex changing to a strong "split" AO pattern late Winter.)

    This change in zonal flow reduces the zonal drive pushing Rossby waves within the zone, resulting in increases in blocking/cut-off conditions. The end result is greater residence periods. Add in that the conditions seem to be indicated by the trailing condition of a La Nina event, which seems to suggest that the contributor has more to do with Arctic heat flow patterns than the equatorial heat content.

    I have not seen any papers addressing this condition other than something in the 2004-05 range discussing the changes in the Walker circulation, and the NASA papers regarding the Arctic upper atmosphere heat content being elevated during the El Nino event about the same time. Sorry, not much help.

    Cheers!
    Dave Cooke

    Comment by ldavidcooke — 28 Oct 2011 @ 7:47 AM

  59. David Cooke,

    You do not need any “skills” to post a link. All you have to do is copy and paste it from your browser (I assume you know how to do that).

    Like this:
    http://www.realclimate.org/?comments_popup=9247

    Put it on a line by itself, and it will take care of itself. It may not be elegant, but it works fine.

    Comment by Susan Anderson — 28 Oct 2011 @ 8:08 AM

  60. David and oarobin,
    The issue with extreme-value statistics is that, since it seeks to characterize the extremes of a distribution, the data needed for meaningful characterization is rare. So, unless you have lots of data, extreme value analysis is at best an exercise in futility.

    Basically, though, you can sort of think of extreme value theory (EVT) as being similar to the Central Limit Theorem. The Central Limit Theorem says that near the mode, any unimodal distribution looks Normal. EVT says that the extremes of a distribution will exhibit one of 3 behaviors, corresponding to 3 limiting distributions: decaying exponentially (Gumbel), decaying according to a power law (Frechet), or having negligible probability above some finite limit (reversed Weibull). You can represent all three behaviors with a single generalized extreme value (or Fisher-Tippett) distribution.

    As David points out, you can use a few "highest" samples to get a fit in some cases. I can think of two reasons for using Monte Carlo over EVT (a sketch of a GEV fit follows below):
    1) limited (but still large) datasets
    2) autocorrelation in the data
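
    [To make the GEV point concrete, a sketch of fitting annual maxima with scipy; note scipy's shape convention (c = 0 Gumbel, c < 0 Frechet, c > 0 reversed Weibull), and the synthetic data are illustrative only:

        # Fit a generalized extreme value distribution to annual maxima and
        # read off a 100-year return level.
        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(1)
        annual_max = rng.normal(size=(100, 365)).max(axis=1)  # 100 synthetic "years"

        c, loc, scale = genextreme.fit(annual_max)
        print(genextreme.isf(0.01, c, loc, scale))  # level exceeded in 1% of years

    As Ray notes, 100 annual maxima is already generous by climate-record standards, and autocorrelation within a year would violate the independence assumption behind this fit.]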

    Comment by Ray Ladbury — 28 Oct 2011 @ 10:23 AM

  61. RE:59

    Hey Susan,

    Assuming you use a browser with copy/paste features. It also assumes you have a personal library of bookmarks associated with your browsing tool…

    Cheers!
    Dave Cooke

    Comment by ldavidcooke — 28 Oct 2011 @ 10:24 AM

  62. Re: 60

    Hey Ray,

    Wouldn’t using a “R” chart over time provide a similar function, though with a larger data set. Point being, if Range is an indicator of change, then if, there is a correlation with a trend you would have confirmation of its validity?

    This would seem to be an obvious test and unlikely to have been missed. This was one of the questions I asked 6 years ago: "Why, if the daily range is dropping year over year, would the mean be increasing?" The answer appears to be related to wv.

    This led to the next set of questions, such as: how can the wv be increasing if the reported specific humidity/temperature are not? Both of which led towards a solution set suggesting advection. Hence, where I am consigned to today: trying to find the gating elements for maintaining flat specific values in an atmosphere dominated by a TSI imbalance.

    Cheers!
    Dave Cooke

    Comment by ldavidcooke — 28 Oct 2011 @ 11:19 AM

  63. Dave Cooke:

    Copy/Paste is not a part of your browser; it is a basic operating system function that will work for any text or numbers in all programs you run.

    I have pretty much stopped reading your posts because I have no way to read up on or verify the complicated material that you comment on. Citations are a normal component of scientific discussion and it is impolite to not provide them.

    Steve

    Comment by Steve Fish — 28 Oct 2011 @ 11:23 AM

  64. ldavidcooke,
    I am not sure how range would help you in fitting extreme value distributions.

    OTOH, decreased diurnal and annual range is a prediction for a greenhouse warming mechanism. CO2 works whether the sun is shining or not.

    Comment by Ray Ladbury — 28 Oct 2011 @ 12:22 PM

  65. RE:63

    Hey Steve,

    Not on 1994 USL-V ver. 4.1113 via Mosaic Gopher or Nintendo's Opera Internet service, neither of which supports flash or pdf capability… Similar to most 2nd gen. android 1.5-2.0 tablets… As to impolite, hmmm…, life experience and ambiguous research-connected thoughts are going to be hard to reference from memory over 10-40 yrs, sorry…

    [Response: Can we at least try and stay on topic? - gavin]

    Comment by ldavidcooke — 28 Oct 2011 @ 1:32 PM

  66. Hi Stefan,

    I thought it would be fun to look at the different months in the Moscow data on the grounds that your result should be reproducible for any month. My comments are based on your post here. It will be some time before I can get a copy of the paper from the library.

    I started this by creating the non-linear trend lines for each month, reproducing near enough the one shown above for July. It immediately becomes clear that each month has a very different trend line. It was also apparent that the July trend over the last 30 years is much higher than the average monthly trend. Given the amount of variation in the trend lines I couldn’t see an obvious reason to prefer using a trend line based on July rather than using the annual average trend.

    I was wondering how sensitive the results are to the trend line. It looks to me that using a different trend line can affect the likelihood of a stationary climate setting a new record (because the trend line used will affect an estimate of the natural variance) and also the likelihood of a warming climate setting a new record, because of variation in the recent trend; the average trend is half that of the July trend. Would that not reduce the probability to approx (0.235-0.105)/0.235 = 55%?

    I think my concern is that the July trend line is being distorted by noise, noise that is not due to the global warming trend in Moscow. I also wonder, given that the trend lines for each month show so much variation, whether it is sound to make trend lines based on monthly data.

    Comment by mdenison — 28 Oct 2011 @ 1:58 PM

  67. PLEASE NOTE: text has been added to the main post above.
    “PS (27 October): Since at least one of our readers failed to understand the description in our paper, here we give an explanation of our approach in layman’s terms….”

    Go to the top of the page for the new explanatory material.
    Thank you Stefan.

    Comment by Hank Roberts — 28 Oct 2011 @ 2:03 PM

  68. RE: 64

    Hey Ray,

    Concur; whereas it is correct that a rising tide lifts all boats, and it does so regardless of the time of day. Hence, individual station ranges would not be affected by GHG. (Until recently I thought it might have been instrumentation/interpretation changes. About a year ago, wv and heat transport started to make sense when looking at the NCDC, NCEP, SRRS NH analysis 250mb isotach station graphs for the last 10 yrs and comparing the patterns to large scale weather patterns.)

    Back to the extremes fit: I tried a similar analysis using the USHCN data record. My results found that my sample sites demonstrate an evenly distributed extreme monthly value over the last 80 years. The main change I saw was not so much TMax as Tavg. TMin for the 30 long-range (roughly since 1880), rural, monthly histories I sampled demo'ed a roughly +0.35 C trend, with TMax values in the +0.65 range. When I selected to remove the 1 std. deviation values, the classic skewed/tipped parabola emerges. The point is, variation and extreme fit seem to be resolution dependent in my experience. It is when we move to an annual value that the extremes clearly take on the classic trend.

    It is curious to me how resolution plays into the analysis; have you any insights into this?

    Cheers!
    Dave Cooke

    Comment by ldavidcooke — 28 Oct 2011 @ 2:22 PM

  69. oarobin @57 — The Wikipedia page on stable distributions is a place to start but I do recommend using your search engine to find Nolan’s chapter 1 of his forthcoming book on stable distributions.

    I don’t produce web pages for my writeups; my discovery regarding the LEvy distribution might be of interest to others and so mayhap be published.

    I don’t entirely agree with Ray Ladbury’s comment at least for rare rainfall events; there are quite long records for some locations and from the satellite era worldwide data. But there is a substantive point behind insisting upon a stable distribution along the lines of Emmy Noether’s (first) Theorem; a symmetry discovered in nature leads to a conservation law. Since a stable distribution is given by the values of four parameters it seems plausible that some stable distribution ought to give a decent fit to the rainfall histogram for a given location. Then accepting that beauty is truth (and so stability is real) one can then read off the probability for exceeding a given amount, rare events which civil engineers need to plan for.

    [I'll just mention that I don't currently see how to use these ideas for the rare events of droughts and heat waves.]

    Comment by David B. Benson — 28 Oct 2011 @ 5:55 PM

  70. David, FWIW, I agree that rainfall events will be easier to fit to an extreme value distribution than will droughts and heat waves. After all, insurance companies have been doing just this for decades. That is also one reason why they are worried about climate change–the process generating extreme weather events may no longer be stationary.

    Comment by Ray Ladbury — 28 Oct 2011 @ 7:44 PM

  71. Here’s the paper:
    http://www.pik-potsdam.de/~stefan/Publications/Nature/rahmstorf_coumou_2011.pdf

    Comment by Pete Dunkelberg — 29 Oct 2011 @ 8:36 AM

  72. Ray Ladbury @70 — The Levy distribution is a pdf for all events, not just the extreme ones. (It has to be of Frechet type, i.e., type II, as it rolls off as a -3/2 power law way out at the right tail.) My goal is to justify a Levy distribution for daily rainfall totals based on some rather well-understood physics (which doesn't appear to be known by hydrologists) in order to motivate those with lotsa data to see how well the Levy distribution fits the data. If it's fairly good, one could then fine-tune by using a 'nearby' stable distribution; however I don't intend to carry that very far forward.

    Once done, I hope to be able to offer an approach which will better predict the recurrence time of rare rainfall events in the the face of increasing temperatures. I opine it ought to do better than the usual EVT approaches using just the extreme events or even the (generalized) Pareto distribution which perforce uses just the rare events as well, but more of those.

    Since droughts and heat waves are well correlated, and having no decent ideas for a pdf to describe heat waves, I can't go on to do the simplified physics of anti-rain, i.e., droughts. So unless a really good idea comes along, I'll have to ignore that aspect of too-wet/too-dry and concentrate on just the too-wet part.

    Comment by David B. Benson — 29 Oct 2011 @ 5:33 PM

  73. Does this paper or any other paper cover how temperature variability has changed with warming?

    It seems trivial to me that if temperature variability stays the same, then there will eventually be more high temperature records over time if the overall trend is up.
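
    [That intuition is easy to check with a toy Monte Carlo; a sketch with made-up trend and noise values, not the calculation from the paper.]

        import numpy as np

        rng = np.random.default_rng(42)
        n_years, n_sim, sigma = 100, 20000, 1.0

        def expected_records_last_decade(trend_per_year):
            t = np.arange(n_years)
            series = trend_per_year * t + rng.normal(0, sigma, (n_sim, n_years))
            running_max = np.maximum.accumulate(series, axis=1)
            # A year sets a record if it beats the maximum of all previous years.
            is_record = series[:, 1:] > running_max[:, :-1]
            return is_record[:, -10:].sum() / n_sim

        print(expected_records_last_decade(0.0))   # stationary climate: ~0.105
        print(expected_records_last_decade(0.02))  # warming trend: several times more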

    Are there any studies that have documented statistically significantly more high-temperature extremes worldwide, i.e., not just in one location?

    Comment by Filipzyk — 30 Oct 2011 @ 9:17 AM

  74. One of the things that continuous distributions, including those used in extreme value theory, ignore is that there are extreme values at which the thing they are representing breaks, and the next values of the variable being observed are simply an infinite repeat of the last extreme value. For example, a value of $0 for a bond or stock. There may be such points in the climate system.

    Comment by Eli Rabett — 30 Oct 2011 @ 9:19 AM

  75. Re: 74

    Hey Eli,

    For instance, Snowball Earth or possibly final-sequence Red Star absorption of the inner planets? That seems to define the potential range…, at least in the instance under discussion. Are you suggesting plateauing of trends, establishing a different baseline?

    Cheers!
    Dave Cooke

    Comment by ldavidcooke — 30 Oct 2011 @ 10:05 AM

  76. Filipzyk @ 73, Capital Climate http://capitalclimate.blogspot.com/ often shows a chart of high vs. low records for the numerous weather stations in the US. It isn’t showing just now, but if you click on “older posts” at the bottom right a couple of times you ought to find it.

    Comment by Pete Dunkelberg — 30 Oct 2011 @ 12:27 PM

  77. Last two winters

    Warm extremes versus cold snaps AGU

    http://www.agu.org/news/press/pr_archives/2011/2011-30.shtml

    Comment by john byatt — 30 Oct 2011 @ 2:45 PM

  78. Filipzyk @ 73/Pete Dunkelberg @ 76:
    The latest analysis, through the summer, is here:
    http://capitalclimate.blogspot.com/2011/09/us-heat-records-continue-crushing-cold.html

    Comment by CapitalClimate — 30 Oct 2011 @ 4:37 PM

  79. Thanks for the links everyone.

    Comment by Filipzyk — 30 Oct 2011 @ 6:50 PM

  80. Thanks to John Byatt and Cap Climate for the links. John Byatt, this seems to be the paper:
    “Recent warm and cold daily winter temperature extremes in the Northern Hemisphere” – Kristen Guirguis, Alexander Gershunov, Rachel Schwartz, and Stephen Bennett (2011)

    http://meteora.ucsd.edu/cap/pdffiles/2011GL048762.pdf

    ht AGW Observer
    http://agwobserver.wordpress.com/2011/09/21/papers-on-northern-hemisphere-winters-2009-2010-and-2010-2011/

    The last words of the paper may be of interest to some here:

    The broader Hemispheric and regional picture shows that warm events occurring during the two most recent winters were much more extreme than the cold outbreaks and are consistent with a long-term and accelerating warming trend. For longer term projections, in future work, the methodology presented here will be modified to account for nonstationarity and applied to AR5 model integrations to study impacts of climate change on extreme weather in all seasons.

    Comment by Pete Dunkelberg — 30 Oct 2011 @ 8:16 PM

  81. It isn’t the continuity of a probability density function (pdf) that matters, but rather that physical (and economic) objects have state. A pdf is adequate for a memoryless object such as a pair of dice. If there is some form of memory, then one studies a Markov process which represents the physical (or economic) system.

    The mention of Snowball Earth in a previous comment reminded me of just how frequently I was surprised by the (relative) stability of Earth’s climate while reading and then studying Ray Pierrehumbert’s “Principles of Planetary Climate”. There is the Lovelock/Margulis Gaia hypothesis, for which keeping in mind the thermodynamic lessons of “Out of the Cool” helps maintain perspective. There is Ward & Brownlee’s “Rare Earth” as well. Still in all, the stability continues to amaze.

    Regarding oarobin’s earlier question about EVT versus Monte Carlo (MC) methods: I’m not impressed with the applications of EVT I’ve looked into regarding rainfall intensity; perhaps the fault is my own. I have now read the Stefan Rahmstorf and Dim Coumou PNAS paper linked at the beginning of this thread. I’m impressed by the clarity of the concise writing and only wish I could do somewhere near as well. For the questions addressed, the methods appear to my amateur eyes to be completely sound and above reproach (other than the minor points addressed already in that paper). I will certainly keep this MC method in mind if I ever go so far as attempting to address the role of a warming world upon local rainfall, although I’m hoping to obtain an analytic expression for the particular (simplified) physics I have in mind.

    Comment by David B. Benson — 30 Oct 2011 @ 10:25 PM

  82. How does this relate to:
    http://dotearth.blogs.nytimes.com/2011/10/29/october-surprise-white-snow-on-green-leaves/
    “October Surprise: White Snow on Green Leaves”
    By ANDREW C. REVKIN?

    It doesn’t seem at all reasonable to me that there would still be green leaves in the Hudson valley on October 29. Nor does it seem reasonable that snow would be a surprise on October 29, given that south of Buffalo, N.Y., it used to snow every September. The Hudson valley is only about 1400 feet lower in elevation, so the lapse rate shouldn’t do it.

    Did variability grow with the warming? Did the Rossby waves suddenly rotate 45 degrees of longitude? I doubt that snow on October 29 is unprecedented in the historical record. The trees should have protected themselves by dropping their leaves. Will the trees die? September and most of October must have been like July to prevent the trees from “knowing” that it was time for the leaves to change color. Any comments on this?

    Will the new climate force a change from broadleaf trees to evergreens? White snow on green broad leaves is indeed strange. Are seasons changing more abruptly now?

    [Response: Good questions. There is some evidence for later dates of leaf drop in some locations, but I don't have the details at my fingertips. Heavy snow before leaf drop is by no means unprecedented, particularly since early season snows tend to be very wet and heavy, as this one is. Yes, a lot of those deciduous trees will indeed die and others will sustain major damage that both weakens them and increases their susceptibility to pathogens. Deciduous trees do protect themselves by dropping their leaves, but since this process is dominated by photoperiod and climatic conditions, there's no way they can just jettison all their leaves immediately in response to a freak storm. The leaves have to form an abscission layer, which is hormonally mediated, for them to drop, no matter how much weight you put on them. The amount of damage done would depend on how many leaves were in the process of doing this (i.e. about to drop), when the snow hit. Color change and leaf drop are temporally close, but not synchronous. You would have to have large and persistent/consistent changes in early season snow loads for that factor by itself to shift forests from deciduous to evergreen dominance, but it's not impossible. A more likely contributor to such a shift is increasing aridity--conifers have the advantage in dryer climates, as a broad generalization. Lastly, at large spatial scales, there is more evidence for earlier springs than later autumns, but there is some evidence for the latter also.--Jim]

    Comment by Edward Greisch — 31 Oct 2011 @ 11:36 AM

  83. I’m trying to track down an article that was written up in RealClimate that made a very approachable argument for when something can be attributed to global warming. The idea was that if an event is far out of the ordinary but fits a global warming model very closely, a degree of probability can be assigned regarding the likelihood that the event was in fact due to global warming. If I remember correctly, they gave a probability for the French heat wave of 2003 being caused by global warming. Does anyone have a link to that article or the RealClimate writeup?

    Comment by Erica Ackerman — 31 Oct 2011 @ 6:53 PM

  84. Erica Ackerman @83 — I don’t know about the 2003 West European heat wave, but the PNAS paper linked at the beginning of this thread offers a method to calculate that likelihood and does so for the East Europe/Central Asian heat wave of 2010.

    Comment by David B. Benson — 31 Oct 2011 @ 8:20 PM

  85. @Edward Greisch #82

    I got interested in this October event as well, so I downloaded GHCN for my most local weather station (which happens to be a rural site, so no UHIs for me!). The variable I was looking at is date of first freeze, since that’s the limiting factor I would think in October snows. For my location, the average date of first freeze is Nov. 5 with a standard deviation of 10+ days, so an event like this isn’t out of the ordinary.

    Comparing 1900-1970 vs. 1970-2010, I found that there was barely any shift at all in the first-freeze date (~1/2 of a day). However, the interesting thing I did find was that the date of first freeze was substantially more variable between 1970 and 2010, with a sigma of 10 days for 1900-1970 and 15 days for 1970-2010 (and it’s not just the smaller sample: the full 1900-2010 period also had a larger standard deviation, of about 13). I also plotted the distribution of days and the increased variability was quite evident. Has anyone done a more global analysis along the same lines?
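
    [Roughly the computation involved, sketched with pandas on a synthetic daily-minimum series; the seasonal-cycle numbers are invented, so swap in real GHCN data.]

        import numpy as np
        import pandas as pd

        dates = pd.date_range('1900-01-01', '2010-12-31', freq='D')
        rng = np.random.default_rng(1)
        # Invented seasonal cycle plus noise as a stand-in for station TMIN (deg C).
        tmin = (10 + 12 * np.cos(2 * np.pi * (dates.dayofyear - 196) / 365.25)
                + rng.normal(0, 4, len(dates)))
        df = pd.DataFrame({'tmin': tmin}, index=dates)

        # First autumn freeze: earliest day-of-year at or below 0 C, second half-year only.
        fall = df[(df.index.dayofyear > 182) & (df['tmin'] <= 0.0)]
        doy = pd.Series(fall.index.dayofyear, index=fall.index.year).groupby(level=0).min()

        print(doy.loc[1900:1969].std(), doy.loc[1970:2010].std())  # variability by period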

    Comment by Paul from VA — 31 Oct 2011 @ 10:37 PM

  86. Erica #83,

    The 2003 article was Peter A. Stott, D. A. Stone, and M. R. Allen, “Human contribution to the European heatwave of 2003,” Nature 432, no. 7017 (December 2, 2004): 610-614.

    The RC post was probably this:
    http://www.realclimate.org/index.php/archives/2011/02/going-to-extremes/

    Comment by CM — 1 Nov 2011 @ 2:00 AM

  87. fyi

    http://rogerpielkejr.blogspot.com/2011/11/anatomy-of-cherry-pick.html

    Comment by ob — 1 Nov 2011 @ 9:56 AM

  88. @87 ob:

    Sigh.

    Comment by thingsbreak — 1 Nov 2011 @ 11:30 AM

  89. Stefan,
    I’m a statistician and this analysis bothers me. I’m NOT a global warming sceptic – the opposite in fact. One problem is I don’t have access to the paper, so I can only rely on the summary here.

    The issue is the attribution of a single local extreme seasonal weather event to a global trend.

    [Response: No, the issue is the attribution of a local/regional record event to a local/regional trend.--Jim]

    To a first approximation, there is only one global average temperature, and we can do statistical inference on it on an ongoing basis. But there are numerous regional seasonal, monthly or daily temperatures: the New York average in summer; Melbourne in January; the hottest August night in Los Angeles; etc.

    It seems to me that the problem is we are naturally drawn to these extremes. And this is what has happened here. You have taken a single extreme, defined both by time (July) and a small region (around Moscow), and done an analysis on the basis that it was somehow independently chosen. If the analysis was done on Moscow after July 2009 without knowledge of the extreme event in 2010, the conclusion would be more credible.

    [Response: That's exactly what they did. They removed 2010 from the trend calculation and computed their numbers based on 1910 to 2009 and then also from 1880 to 2009. The 2010 value is not included--Jim]

    A fair analysis would take into account the prior probability of picking that extreme event (defined both by location and time unit) after it occurred – probably unquantifiable – but as far as I know these analyses are not done a priori. Presumably the approach would be to define a large number of regional time slices and do a similar analysis.

    If the same analysis (as in the paper) had been done in 1939 – after the 1938 and 1936 records – one wonders if the conclusion would be even stronger. (An additional complication here is that climate time series show positive autocorrelation, so closely spaced records may not be as unusual as they would first appear.)

    [Response: If you did the same analysis before (not after), you would get the same basic finding, i.e. the climate was more likely non-stationary than stationary. Which it likely was, as the low frequency variation shows. And positive autocorrelation is what you expect in a trending series, the opposite of expectation in a non-trending one having the same amount of white noise. If the trend is upward, you will get record and near record high temperatures temporally close to each other--Jim]

    I work in medical stats rather than climate stats, but I often wonder how extreme local events could be attributed to global warming. It seems to me the magnitude of the event is important – this maximum is 3 degrees greater than the previous – but this also suffers from selection bias. Repetition is important, but autocorrelation must be considered. In summary this is a non-trivial statistical problem.

    [Response: Yes, the amount by which the 1930s record was beaten is yet more evidence that the cause of the 2010 record event is due to non-stationarity of the climate.--Jim]

    Comment by Bruce Tabor — 2 Nov 2011 @ 6:12 AM

  90. MOSCOW | Mon Oct 25, 2010 3:21pm EDT

    MOSCOW (Reuters) -
    Nearly 56,000 more people died nationwide this summer than in the same period last year, said a monthly Economic Development Ministry report on Russia’s economy.
    http://www.reuters.com/article/2010/10/25/us-russia-heat-deaths-idUSTRE69O4LB20101025

    Comment by Colorado Bob — 2 Nov 2011 @ 8:06 AM

  91. Bruce – the paper is freely available. How about reading it first? Try here.

    Comment by Rob Painting — 2 Nov 2011 @ 8:16 AM

  92. @87 & 88
    Are we feeling devastated yet?

    See Roger’s comment on the NOAA draft.

    Comment by Paul Middents — 2 Nov 2011 @ 3:56 PM

  93. I have a concern about nonlinear trending that shows increased warming in the 1980-2009 period:

    Global temperatures, especially northern hemisphere temperatures, have a periodic, apparently naturally occurring cycle showing up in HadCRUT3.

    I made some attempts at using Fourier analysis on HadCRUT3. I found a sinusoidal component holding up for two cycles (1876-2004), with a period of 64 years and an amplitude of .218 degrees C peak-to-peak, with peaks at 2004, 1940, and 1876. What I suspect causes this is the AMO and a possibly-loosely-linked long-period component of the PDO. This accounts for nearly half the warming from the 1973 dip to the 2005 peak in smoothed HadCRUT3.
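
    [For anyone wanting to try this, a least-squares sketch with the period fixed at 64 years; the series below is synthetic, built to mimic the description above, so load HadCRUT3 in its place.]

        import numpy as np
        from scipy.optimize import curve_fit

        years = np.arange(1876, 2005)
        rng = np.random.default_rng(0)
        # Placeholder anomalies: linear background plus a 64-yr cycle peaking in 2004.
        temps = (0.005 * (years - 1876)
                 + 0.109 * np.cos(2 * np.pi * (years - 2004) / 64.0)
                 + rng.normal(0, 0.1, years.size))

        def model(t, a, b, amp, phase):
            # Linear trend plus a sinusoid of fixed 64-year period.
            return a + b * t + amp * np.cos(2 * np.pi * t / 64.0 + phase)

        p, _ = curve_fit(model, years - years[0], temps, p0=[0.0, 0.005, 0.1, 0.0])
        print('peak-to-peak amplitude:', 2 * abs(p[2]))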

    So, I would be concerned about warming in a time period largely coinciding with 1972-2004 being used as an indicator of how much warming is to be expected in the future.

    Comment by Donald L. Klipstein — 2 Nov 2011 @ 7:45 PM

  94. Way back at #74, Eli wrote: “One of the things that continuous distributions, including those used in extreme value theory, ignore is that there are extreme values at which the thing they are representing breaks, and the next values of the variable being observed are simply an infinite repeat of the last extreme value. For example, a value of $0 for a bond or stock. There may be such points in the climate system.”

    I’ll suggest that these breaks in variable pattern are much more likely in climate proxy data than in the physical measurements of temperature, rainfall, etc in climate change.

    The notion that the only significant break points in climate change trends would be “Snowball earth”, or “final sequence Red Star absorption of inner planets” is disingenuous, and distracting. Climate change on earth is unlikely to come anywhere near either end of that silly continuum, but proxy data already demonstrate a number of absolutist dynamic trends.

    It’s not hard to argue that a primary motivation for the denial noise about global warming is to shift attention from those proxy data.

    Tree rings do not accumulate when the tree dies due to drought, or after shifts in timing of first/last frost, or if bark beetle infestations surge, due to milder deep frost in winter (as is absolutely true across millions of evergreen forest acreage in the US West and Alaska). A graph of human quality of life for residents in Shismaref Alaska will soon hit a static point, when the village moves inland to avoid encroaching sea level (and most residents scatter). Rising mortality due to hotter regional summer temps suggests a number of static points on the graph (50,000 in one summer, in one relatively small area?).

    Of course insurance companies are worried – denial doesn’t work when losses are there in front of the agent. Oil and coal companies and their shills focus on denying the larger question, to postpone the inevitable attribution of harm and need for lifestyle changes for as many news cycles (and personal fortune accumulations) as can be.

    Comment by phil mattheis — 2 Nov 2011 @ 8:28 PM

  95. Way back at #74, Eli said: “One of the things that continuous distributions, including those used in extreme value theory, ignore is that there are extreme values at which the thing they are representing breaks, and the next values of the variable being observed are simply an infinite repeat of the last extreme value. There may be such points in the climate system.”

    The notion that the only relevant extremes in climate change are “Snowball Earth or possibly final sequence Red Star absorption of inner planets” is disingenuous, as if only a single global average temperature matters, and distracting from the multitude of climate proxy data which absolutely do show absolutist break points.

    Tree rings no longer vary in annual growth when the tree dies due to drought, or a change in the frost cycle, or a surge of bark beetles due to milder winter minimums (as is absolutely the case over millions of acres of evergreens in the US West and Alaska). Quality-of-life trends in Shismaref, Alaska will reach static points on graphs when the village site is abandoned due to encroaching sea level and the residents scatter. Proxy data have been useful to supplement and confirm measurable weather parameters in defining climate, and have even more value in defining the impact of the changes they document.

    Of course insurance companies are worried: denial does not work when the damage is there in front of the agent. Oil and coal folks, and their willing shills, continue to deny the larger reality of global warming, to postpone any discussion of attribution for as many news cycles (and installments to personal fortune) as possible.

    Maybe the next step is to shift focus on the proxy data from confirmation of climate change, to what it does even better – provide defining evidence of how we are spoiling the nest for ourselves, and not destroying the earth as snowball or red death. The debate over reality of global warming is settled for anyone seriously looking, and is simply the wrong old question.

    Comment by phil mattheis — 2 Nov 2011 @ 8:45 PM

  96. Paul Middents – So Roger Pielke Jnr has a small coterie of sycophants. Is that unusual for fantasists? It’s difficult to ascertain whether he can’t understand Rahmstorf & Coumou’s paper, or his ideological blinkers are so powerful that he’s hell-bent on misrepresenting what they did.

    [Response: He definitely doesn't understand the paper, that's for sure. Textbook case of misrepresentation. We'll try to explain it to him one more time.--Jim]

    As for the NOAA page, I wonder why they shift their definition of “Western Russia” in their response? There seems to be a strong warming hole in the northeast segment of their map, which includes Moscow. I’ve had a brief look at the RSS satellite temp data. Their global map shows strong warming in the region (from 1979 onwards), but I lack the technical nous to construct a decent image.

    Also, the NOAA guys seem to be somewhat inconsistent. Their analysis of the physical changes (attribution) was restricted in scope (as Kevin Trenberth pointed out), but their statistical analysis is broader in scope. Seems a bit schizophrenic.

    I guess it boils down to this: which is the better indicator of record-breaking warm extremes in Moscow, the local temp series or Western Russia temps?

    Comment by Rob Painting — 2 Nov 2011 @ 9:16 PM

  97. Re: #89 (Bruce Tabor)

    If I understand you correctly, then I think there’s something to what you say, which boils down to this: we wouldn’t even be talking about Moscow if it hadn’t had such an extreme heat wave. Hence we’re not discussing New York, Melbourne, or Los Angeles. Given that Moscow was chosen because of its temperature record, the extreme isn’t so unlikely, so its attribution to climate change is less plausible.

    But it seems to me that this paper isn’t about “How likely is this extreme with climate change?” It’s about “How much more likely is this extreme with climate change, than without?” The fact that Moscow is chosen *because* of its heat wave doesn’t alter that likelihood ratio. In fact the main result of the paper doesn’t depend on observed data at all — no matter what the data or its origin, the likelihood of a new record changes when the time series is nonstationary, and when the series mean has shifted by a notable amount new records become far more likely. In addition: that seems to be what has happened in Moscow.

    I’ll also mention that the autocorrelation of, say, Moscow temperature for a given single month’s time series (say, July) is pretty weak. And if you take the view that the time series is nonstationary, then we should look at the autocorrelation of the residuals from the nonlinear trend, which look to me to be indistinguishable from zero.
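
    [One way to check that, sketched with NumPy; temps is a stand-in series, and the smooth here is a plain moving average rather than the nonlinear trend filter used in the paper.]

        import numpy as np

        rng = np.random.default_rng(3)
        years = np.arange(1880, 2010)
        temps = 0.008 * (years - 1880) + rng.normal(0, 1.0, years.size)  # stand-in Julys

        # Crude nonlinear trend: centered moving average; drop the edge-affected ends.
        k = 15
        trend = np.convolve(temps, np.ones(k) / k, mode='same')
        resid = (temps - trend)[k:-k]

        # Lag-1 autocorrelation of the residuals; a value near zero supports treating
        # them as uncorrelated noise about the trend.
        print(np.corrcoef(resid[:-1], resid[1:])[0, 1])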

    [Response: Hi Tamino, we've thought quite a bit about this issue as well. If the five-fold increase in the expected number of records was something special to Moscow, and we had picked Moscow because of the record found there, then this would be a concern. But of course we've done this analysis for the whole globe, for all months, and this five-fold increase is nothing special - neither in the expected number of records (based on the trends) nor in the number actually observed. But that is another paper. -stefan]

    Comment by tamino — 3 Nov 2011 @ 11:07 AM

  98. Re: #97 (Tamino)
    Some previous comments wandered very close to a post hoc fallacy; this nails it:

    “But it seems to me that this paper isn’t about “How likely is this extreme with climate change?” It’s about “How much more likely is this extreme with climate change, than without?”

    The likelihood that this extreme occurred is unity, 100% probability, with or without climate change. This gets back to the best known post hoc fallacy, “with all the possible outcomes of the universe, what is the probability that it would end up the way it did?” The answer again is unity.

    [Response: That's not what he meant. He's talking about relative strengths of evidence before the fact.--Jim]

    Comment by Matt Skaggs — 3 Nov 2011 @ 3:34 PM

  99. Jim and Tamino,
    First some common ground:

    “no matter what the data or its origin, the likelihood of a new record changes when the time series is nonstationary, and when the series mean has shifted by a notable amount new records become far more likely.”

    Simple and clear. However, independent of climate change, what is the probability that a new high temp record will be set somewhere next year? I assert that it is close to unity in any given year.

    [Response: That's a function of the number of stations you evaluate and the record length at each one. Whether it's close to one or not cannot be stated ahead of time without that information. If you mean every station in the GHCN record, then yes, you are right, because there are a lot of them. Every station in Iowa, then probably not]

    I further assert that if the temperature record at that site is broadly similar to the Moscow record (nonlinear increasing trend), an analysis will yield similar results to what is shown in the paper. I think you need a statistically significant number of extreme events temporally clustered to avoid the post hoc fallacy and satisfy the claim of “more likely.” I just don’t see how you can look at one extreme event a posteriori and ascribe some level of improbability to the null hypothesis that it was caused by natural variation.

    [Response: Yes, that's the point. A record similar to the Moscow record demonstrates a nonlinear trend--the values are autocorrelated to some degree. They wouldn't be so in a stationary situation. This is evidence that the record is more likely due to the trend than to combined chance and stationarity. It does get tricky though if you have these multi-decadal scale oscillations overlaid on the trend. Then you need more time, or more series, to get the same level of confidence. But that's sort of a different point than you're making isn't it? By the way, this was fished out of the spam folder.--Jim]

    Comment by Matt Skaggs — 3 Nov 2011 @ 5:48 PM

  100. Re: 95

    Hey Phil,

    If you will please notice, in a discussion of extremes the boundaries or range are used to establish a mean. The definition of the hypothesis determines the analysis being explored. I was mainly hoping for Eli to share the hypothesis…

    Hence, if we define the range of the extremes to be the conditions under which life could possibly be seen on Earth, I suggested one possible range. (Keeping in mind that at some point in the final sequence Earth will become tidally locked, and that living organisms have been detected at depths of over several hundred feet underground.)

    If you are stating that the extremes are in reference to the maintenance of the climate between 1880 and 1990, then the extremes are much smaller. However, it does raise a question as to how different this 110-yr period would be compared to the last 3 ky, or 30 ky. The issue is that we have better measures of the changes from 12 kya to today than we have from 24 kya to 12 kya.

    As to extremes, clearly the current extremes have not been seen in over 1000 yrs. Sorry, I had not meant to be offensive; I was only hoping for a clarification. As I had stated elsewhere, there have been rates of change at least as rapid as in the last 130 yrs, just with different drivers and a lower baseline/mean. (Tilted parabola.)

    If we look at total outliers in a more variable climate (similar to the pre-Anthropocene), the ratio of high to low shows that higher variation counts do appear more often in the last 130 yrs, at least in my local records; however, overall my region is not very representative. There are changes occurring; the difficult thing for me is trying to define the processes involved.

    Personally, I can see where changes in weather patterns can drive the averages higher; what I am trying to understand is how CO2 is driving the weather patterns. Differentiating between a natural distribution and a forced/skewed distribution can be as simple as a dot plot. If we have multiple peaks, a dot plot would easily point to multiple processes being measured. Likewise, if the outliers show a skew, the evidence should be clear. An examination of extremes would not seem to offer a clearer image; the point, I guess, is: what’s the point?

    Cheers!
    Dave Cooke

    Comment by ldavidcooke — 3 Nov 2011 @ 5:54 PM

  101. > I am trying to understand how CO2 is driving the weather patterns

    http://www.aip.org/history/climate/GCM.htm

    Comment by Hank Roberts — 3 Nov 2011 @ 6:32 PM

  102. Re:100

    Hey Hank,

    The closest that history came to describing how CO2 and other anthropogenic activities affect weather patterns was the blurb about the “GCM” modelers trying to analyze one- and two-dimensional models before the Charney Panel moved on to CLIMAP. I had read this or something similar back in 2003.

    The point remains: I know CO2 increases wv content. I do not dispute that CO2 at a level 130 ppm greater than over the last 1 ky adds up to 1.85 W/m^2 to the atmosphere, 7×24. I am comfortable that the scientists have a series of models which can replicate current changes and can project future changes within a reasonable margin of error.

    I am still trying to understand the three-dimensional process of how the radiant energy of CO2 and other human activities affects weather patterns. I am confident it will eventually be demonstrated. As a hobbyist, I want to try to understand the science and the basis of knowledge that will go into defining the global processes.

    So far I have seen excellent work wrt the radiative components. Dr. Schmidt, and several discussions on UKww Climate Discussion and Analysis in 2007-8, did a lot to help me understand the combination of band saturation and “splatter”.

    Moving into the third dimension is a bit confusing: the advection of tropical heat, when “enhanced” with added atmospheric radiative energy, convects to a higher altitude and requires an in-rush/updraft of more surface air/wv content, in essence chilling the surface (to close to its original temp., though with a higher salinity) and increasing the specific humidity at altitude over the ocean. Which leaves me with a wealth of questions and so far few answers. I do appreciate your help, though; hopefully as time goes on more will come to light.

    Cheers!
    Dave Cooke

    Comment by ldavidcooke — 3 Nov 2011 @ 8:41 PM

  103. Jim,
    Thanks for taking the time to respond. My comment was a bit half-baked, sorry about that. Bad netiquette on my part.

    You wrote:
    “It does get tricky though if you have these multi-decadal scale oscillations overlaid on the trend. Then you need more time, or more series, to get the same level of confidence. But that’s sort of a different point than you’re making isn’t it?”

    No, that was pretty much my point. My earlier post was meant to establish that the Moscow event is not unique, independent of autocorrelation, if you look at all stations worldwide. There were probably similar events caused by blocking highs in the cooler 1970s. The rest of the comment would be that if you performed a post hoc analysis of an extreme event in the early 1930s, you would see a similar nonlinear temperature excursion preceding it, and be able to make a similar claim of improbability. From that perspective, I am unable to see how the study can differentiate between a trend-induced improbable event and an oscillation-induced improbable event, in a post hoc analysis of a single event.

    [Response: There is always the chance that you've got some low frequency oscillation going on, with a wavelength greater than the length of your record, rather than a monotonic trend. It's not meant to distinguish between these two, and indeed it can't, without at some point resorting to physics, paleodata and other external information, or widening the scope of the study, in space or in time or both. It's meant only to address the probability of a stationary process (i.e. temperature) over the given record.--Jim]

    “By the way, this was fished out of the spam folder.”

    I’m not sure what this is meant to convey.

    Comment by Matt Skaggs — 4 Nov 2011 @ 9:16 AM

  104. Matt, #103 — It only means your post got caught in an automatic spam filter by mistake. Expect this to happen sometimes. In this case a moderator spotted it and helped out. Don’t expect this to happen always.

    Comment by CM — 4 Nov 2011 @ 12:35 PM

    > I am still trying to understand the three-dimensional process of how the
    > radiant energy of CO2 and other human activities affects weather patterns.

    You’re not alone in that effort. Weart says: “The climate system is too complex for the human brain to grasp with simple insight. No scientist managed to devise a page of equations that explained the global atmosphere’s operations….”
    http://www.aip.org/history/climate/GCM.htm

    Comment by Hank Roberts — 4 Nov 2011 @ 2:58 PM

  106. Re my post at #89, Jim’s comments, #91 (Rob Painting), and #97 (tamino):

    I’ve looked at the paper briefly. Thanks Jim for your comments. I think I’ve got it. I still have an important issue to raise.

    Although Stefan’s attention was drawn to the Moscow record because it was so extreme, the initial question is to what extent a new record in the past decade in Moscow can be attributed to random variability about a stationary time series versus an underlying trend. The actual record was (correctly) ignored in order to estimate this trend.

    The task then is to estimate the expected number of records (in the last decade) in a stationary setting (0.105 for the shorter time series, 1910-2009) versus the expected number in the presence of an underlying trend (0.47 records per decade). So [(0.47 − 0.105)/0.47] ~ 80% probability that A new Moscow record is due to the warming trend.
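
    [Those stationary-case numbers follow from the standard result that in an iid series the n-th value sets a record with probability 1/n; a two-line check, with the 130-year case corresponding to the full 1880-2009 record.]

        # Expected number of records in the last decade of an iid series.
        print(sum(1.0 / n for n in range(91, 101)))    # 100 years -> ~0.105
        print(sum(1.0 / n for n in range(121, 131)))   # 130 years -> ~0.080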

    All well and good, except the conclusion states, “…we estimate that the local warming trend has increased the number of records expected in the past decade fivefold, which implies an approximate 80% probability that THE 2010 July heat record would not have occurred without climate warming.”

    This is where the problem occurs. I’ll first try to illustrate using Bayes theorem then I’ll do a thought experiment.

    For shorthand, let WR => record due to warming & A => analysis was done, and noWR => no warming record; so P(WR|A) means the “probability a record event is due to warming GIVEN that you have chosen to analyse it”. This is what the concluding statement is in fact saying, as soon as you swap from “a record” to “the record”.

    Using Bayes theorem:
    P(WR|A) = P(A|WR)*P(WR)/P(A). We expand P(A) to give:
    P(WR|A) = P(A|WR)*P(WR)/[P(A|noWR)*P(noWR) + P(A|WR)*P(WR)]

    Now P(WR) is estimated in the paper to be 0.8. It is immaterial to this argument that its estimation ignores the 2010 record itself, because it is then used post-hoc to make statements about the probability of the 2010 record. I suspect that this analysis would not have been done if 2010 was not a record year. To give a realistic estimate of the probability that THE 2010 record was due to the warming trend we must be able to calculate the ratio
    P(A|WR)/P(A),
    i.e., what is the probability this event would have been analysed given that it was a record, divided by the total probability it would be analysed, record or not. The expansion shows that this calculation must in practice consider the probability of analysis if the record had not occurred (quite low, I assume). This ratio is almost certainly much greater than 1.

    Consider two people who buy lottery tickets. One buys one $1 ticket among the millions available a week. He wins the $1 million prize by luck after 10 years. The other starts buying one a week but each year she increases this by an extra ticket. Two a week in year 2,…, ten a week in year 10. Finally she wins the jackpot. Now we want to calculate in each case how much the trend in ticket buying has contributed to their respective jackpot wins.

    We can say definitively that the trend has not contributed at all to the probability of the man winning in year 10, but multiplied the probability of the woman winning in the 10th year 10-fold. However, we cannot attribute the woman’s actual win 90% to the trend (the man had no trend, so his win is clearly dumb luck). The reason is that we chose to analyse them because they won the jackpot.

    So until you can quantify the probability that this event was analysed given that it was a record, you cannot make post-hoc attributions of a record to a regional climate trend.

    Now my intention is not to give succor to the likes of Roger Pielke, but rather to contribute to this difficult problem of attribution of regional climate change, as in my view, global climate change is beyond reasonable doubt.

    [Response: Bruce, see my response to Tamino at #97. According to our analysis, the warming trend in Moscow has increased the likelihood of a record to five times what it would be without such a trend. This is true whether a jackpot (the 2010 record) was then won or not - it is a result of the Monte Carlo analysis. But the question then is: is such a warming trend special to Moscow? Did we find and analyse this special case because we were alerted to it by the news about the record? (Did we find that woman who bought ever more lottery tickets because she won in the end?) For the woman the answer is probably yes, for Moscow, no. Similar warming trends, raising the odds in a similar manner, are found all over the globe, this is nothing special to Moscow. A woman who increases her ticket-buying like this, in contrast, would be an extremely special case.
    But: even if we only found this woman because she won, this would not change the facts of how her ticket-buying strategy has increased her odds of winning, the analysis of this would remain completely true. It just would be a rare case, not a common one. -stefan]

    Comment by Bruce Tabor — 5 Nov 2011 @ 5:35 AM

  107. I have an observation and a question related to mutidecadal fluctuations, and the interpretation of this as a study relating solely to global warming.

    I realize this is a statistical analysis, not an attribution study. That said, it is going to be discussed as if it were an attribution study — that global warming raised the odds of this extreme event. If discussed as such, the results appear a high outlier relative to similar-looking attribution studies. The ones I’ve read about (here in Realclimate, mostly) seem to find that global warming is resulting in a modest increase in the likelihood of occurrence of specific extreme events. In the 20% range or so. So something is materially different here, to find 80%, compared to attribution studies typically showing much smaller changes in likelihood of extreme events.

    Let me cut to the chase. As I understand this, you asked the question: how likely would you have been to see the new high if there had been no recent trend? (Where your definition of trend is clear, and defined in a neutral, statistical sense as the curve fit to the data.) Then, how likely with the recent trend added? So far, so good.

    Now we get to the attribution part. If the 1930s peak and the 2010 peaks in this region were due in part to the atlantic multidecadal oscillation, then would a more conservative measure of the pure effect of global warming have been to ask, not how likely the new high would have been absent any trend, but how likely if the 1930s trend had repeated itself?

    I note that you clearly describe this in terms of the statistical trend. I also see that it is widely interpreted as showing that “global warming” greatly increased the odds. But if “the trend” in the most recent years combines the effects of AMO and global warming, then that interpretation is wrong. (In effect, just as you will see people plot the raw sea surface temperature data and incorrectly attribute all the change in the region to “AMO”, you’ve tracked the raw surface temperature change, and others are incorrectly attributing the entire effect to “global warming”.)

    Let me put it another way: If the recent statistical trend combines both AMO and global warming, then the correct heuristic interpretation of the result is an 80% increase in likelihood, relative to a world in which both global warming and AMO had ceased.

    In a crude sense, to the extent that GCMs either replicate an AMO-like phenomenon, or produce large but nonperiodic fluctuations in sea-surface temps (AMO-like in size but not periodicity), then implicitly, the alternative hypothesis in those studies (a world without global warming) has a lot more variation in it than this study does.

    Maybe that reconciles the 80% here compared to much lower estimates in attribution studies. (Alternatively, I may completely have misunderstood what you did).

    I’m suggesting that if this is going to be interpreted as a study of the impact of global warming, maybe a more conservative measure would be to monte carlo a world in which the 1930s trend repeated itself (a simple proxy for AMO), instead of a world with no trend. Surely that would show a smaller increase in the likelihood of extreme events. I would be curious to see if that would align the results of this study more closely with prior attribution studies of extreme events.

    In short, my question is not about how you defined the trend. That seems clear. Its about whether, if the recent trend combines AMO and global warming signals, interpretation of this as if it were an attribution study of global warming may overstate the pure effect of global warming.

    Comment by Christopher Hogan — 5 Nov 2011 @ 7:20 AM

  108. Re: #106 (Bruce Tabor)

    I believe your analysis is mistaken — particularly your application of Bayes’ theorem. The condition “WR” for “record due to warming trend” is fine, but you shouldn’t be comparing that to “noWR” for “no record.”

    You should really be comparing “WR” to “RnoTR” — that a record is observed but it’s NOT due to trend. Then we have
    P(WR|A) = P(A|WR) * P(WR) / [ P(A|RnoTR) * P(RnoTR) + P(A|WR) * P(WR) ]

    The only sensible estimate is that P(A|WR) and P(A|RnoTR) are the same — analyzing the Moscow record was equally likely whether it’s due to a trend or not.
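
    [A tiny numeric illustration of that point, with made-up probabilities: when the two analysis probabilities are equal they cancel out of Bayes’ theorem, leaving the posterior equal to the prior.]

        def posterior(p_wr, p_a_given_wr, p_a_given_rnotr):
            # P(WR|A) from Bayes' theorem, with P(RnoTR) = 1 - P(WR).
            p_rnotr = 1.0 - p_wr
            return (p_a_given_wr * p_wr) / (p_a_given_rnotr * p_rnotr + p_a_given_wr * p_wr)

        print(posterior(0.8, 0.9, 0.9))  # equal selection probabilities -> 0.8, unchanged
        print(posterior(0.8, 0.9, 0.3))  # unequal -> selection bias would then matter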

    Comment by tamino — 5 Nov 2011 @ 8:46 AM

  109. RE:107
    Hey Christopher,

    Regardless of the tools applied, the provided sample would be fairly small if examining simple extremes. In Figure 1, the underlying 1 deg C rise in the extremes, regardless of the assumed source, appears to be unaddressed; a question is whether this could be related to UHI (I believe it unlikely). Remove this (the positive extreme range equates to roughly 3 deg C in the 1880 time frame, roughly 4 deg C around 1940, and roughly 5 deg C around 2000) and part of the trend remains. Given that fossil fuel emissions would likely have been kept in check by the 1920s-30s cooling of ocean SSTs globally, there had to be a different process at work.

    Proceeding, if we remove the trending many associate with GW, or roughly a 0.8 deg C rise, the outliers continue to trend. If we suggest the AMO is responsible for the 2 deg C rise in 1880 and it were a fixed value, the remaining change between 1940 and 2000 would still appear to be roughly 1 deg C positive. In essence, if we are considering an AMO of between 60 and 70 yrs and a temperature contribution of between 2-3 deg C, the GAT trend remains; it would require a stretch to suggest that the AMO could be responsible for a 2-4 deg C variation, though it is not impossible.

    That we are aware of other contributing processes, having both positive and negative influences, would seem to suggest that the effects of the AMO are not likely to be responsible for the extremes we see. As we increase our knowledge of the forcings and their observable effects, I believe the picture will improve in its focus. Dealing with probabilities at this scale and current state of science makes a hard conclusion difficult to define at this time, IMHO. However, the pathway to the necessary knowledge is based on platforms established by works such as this one.

    Cheers!
    Dave Cooke

    Comment by ldavidcooke — 5 Nov 2011 @ 9:56 AM

  110. To 109, I’m not suggesting that AMO by itself explains the new record. I understand that. The state of the AMO now is not materially different from the 1930s (largely, I guess, because it’s defined by de-trended sea surface temperatures).

    I’m suggesting that it’s improper to characterize the results as showing that “global warming”, by itself, accounts for the 80% increase in likelihood of a new record temperature. Not that the authors did that, but that others will.

    If I understand how the study worked, the 80% reflects the comparison of likelihood with actual trend, relative to likelihood with stationary (no trend) temperatures. That is a comparison of temperatures reflecting both AMO and “global warming”, relative to a world that had neither. I’m asking that, to isolate the pure effect of global warming, probabilities conditional on actual trend be compared to a different alternative, which would be a trend reflecting only AMO. And my suggestion for getting a back-of-the-envelope estimate was literally to replicate the (e.g.) 1910 to 1930 trend, and tack it on the end of the timeseries ending in 2010, then proceed with the analysis. (I realize there’s more to it than that, but that’s the gist of it.)

    If, when you did that, the marginal increase in likelihood of new extremes fell more in line with the (e.g.) 20% that seems typical of recent attribution studies, then this would place what is now an extreme outlier result (as it is being interpreted, not as the authors described it) more in line with the rest of the literature.

    Comment by Christopher Hogan — 5 Nov 2011 @ 12:09 PM

  111. Re:110

    Hey Christopher,

    At issue is the difficulty in separating the various contributors. I believe the primary issue with the AMO effect is that the Arctic patterns are likely the driving force of circulation. The question this analysis rests on is: how do you separate primary forcing, and secondary or even tertiary forcing, from natural variation? In this case I believe they made the right choice by not separating them or assigning a “SWAG” value.

    If you knew the local variation limits or range maximums, you could then suggest that any value exceeding the maximum range would have to be driven by another process. The question is, how do you determine what the maximum range component is? If we look at the apparent repeating maximums and subtract the variation in the mean, we are in essence normalizing the data set (removing the trend). If we then assign a value of 100 to the 1940s peak, we get what appears to be a normal distribution, provided we remove the values from about 2000 onward. There are, though, major indications of a process change when the normal variation starts stagnating on one side or the other of the mean. The cooling stagnation with a snap temperature rebound is very characteristic of weather, which is abnormal in normal statistics.

    If anything, it is this one characteristic which appears to suggest that the extreme warming event is a normal weather event. Weather, unlike large populations of random numbers or measures of single processes, does not “behave”. Though short periods may appear to be elastic, they are not elastic in the sense of equal and opposite effect. Again, the reason is that we are measuring both a closed system and an open system, which appear to share the same source but different sinks. One of the two leaks…

    Given that the source does not seem to vary, the destination of the energy in the system should be entropy. Hence, if the energy in the system is increasing, there has to be a change in the sinks. Thus, a higher rebound can only happen if the total energy in the system is increasing. The measure of that increase is what is being attempted here. To move beyond the work that has been done here will require additional effort and measures. I believe many would like to achieve what you request; however, it will have to wait for another day and more information.

    Cheers!
    Dave Cooke

    (PS: And yes, I know I bury it; I just write as I think it, and it takes too long for me to render the husk from the kernel to respond in a clear, concise manner. Sorry.)

    Comment by ldavidcooke — 5 Nov 2011 @ 8:58 PM

  112. Re. Stefans response to my #106 post, Tamino at #97 and #108,

    Thanks guys,

    I stand corrected (in the algebra as well, thanks Tamino). I realised there was an error in my thinking soon after pressing “Submit”. It’s in the nature of blogs that not enough thinking and error correction occurs before publication.

    In a nutshell, my concern was that the estimated ATTRIBUTION of the record to a warming trend versus random variation about the mean was not made INDEPENDENTLY of the 2010 record itself, a record which clearly drew the attention of Stefan and Dim Coumou and other researchers.

    As Tamino points out, the decision to analyse was driven by the record itself, INDEPENDENT of whether it was ATTRIBUTED (due) to a trend or random variation.

    Furthermore the process of estimating the ATTRIBUTION excluded the 2010 record. So the only remaining issue is whether the trend immediately prior to 2010 contains INFORMATION about the 2010 record that is a result of short range AUTOCORRELATION rather than a longer term trend. Tamino in #97 points out that the autocorrelation is weak. The six years leading up to 2010 were close to the long term mean.

    So congratulations to Stefan & Dim on the paper, and thank you Tamino for helping me get my statistical thinking right.

    I’m looking forward to your paper Stefan!

    Comment by Bruce Tabor — 6 Nov 2011 @ 1:31 AM

  113. Why do I have to search the farthest depths of the internet to find any relevant and compelling research on GW, while mainstream outlets these days have nothing to say unless it’s some quack scientist with a quack theory that they think will bring ratings? Thanks for the post.

    Comment by Mary Latim — 17 Nov 2011 @ 12:06 PM
