
Unforced variations: Nov 2014

Filed under: — group @ 2 November 2014

This month’s open thread. In honour of today’s New York Marathon, we are expecting the fastest of you to read and digest the final IPCC Synthesis report in sub-3 hours. For those who didn’t keep up with the IPCC training regime, the Summary for Policy Makers provides a more accessible target.

Also in the news, follow #ArcticCircle2014 for some great info on the Arctic Circle meeting in Iceland.

410 Responses to “Unforced variations: Nov 2014”

  1. 351

    #339–Victor, since you posted out of curiosity as to responses, and since my response also disappeared in the system somehow, let me quote it verbatim:


    By this I meant that the data presented was pretty meaningless, especially without any explicit context whatever, and therefore didn’t seem to encourage ‘sensible conversation,’ as the Borehole description has it.

    As you say, the series is quite noisy, which makes drawing conclusions difficult (although there are statistical tools to attack such problems.) It rather reminds me of those who, in considering the surface temperature record, prefer the noisy TLT data to the exclusion of the instrumental record, and then further restrict the S/N ratio by considering only arbitrarily short spans, such as 16 years.

    What I’ve learned about statistical epistemology tells me that’s not a good idea.

  2. 352
    Steve Fish says:

    Re- Comment by Victor — 25 Nov 2014 @ 11:35 PM, ~#339

    Victor, the only epistemological question regarding your presentation of high water records from a single location is whether these data are, by themselves, valid in a discussion of global climate and sea level change. They are not, and this conclusion has nothing to do with confirmation bias.


  3. 353
    Russell says:

    Anthony Watts’ website has yet again unwittingly given us something to be thankful for.

  4. 354
    JimB says:

    My hat goes off to Jon Keller at 346 for listening to the very patient people who were willing to treat Mr. Keller as someone who was sincerely posting questions about what he considered to be issues with the accuracy of the ice core data. In my opinion he turned out to be a ‘skeptic’ in the best meaning of the word.

  5. 355
    Meow says:

    @25 Nov 2014 @ 11:35 PM:

    …So. What is the moral of my little presentation? I guess it would be this: 1. unless there is a very clear and unmistakable trend, data like the Acqua Alta readings are difficult if not impossible to evaluate; and 2. it’s awfully easy to fall into the trap of confirmation bias, i.e., seeing in the data what you want to see, rather than what is actually there. And that goes for BOTH sides of the climate change coin.

    You forgot (3) It’s awfully difficult to think of a case in which cherrypicking is a valid epistemological approach; and (4) It’s awfully easy to fall into the trap of false equivalence.

    P.S. No blog visits for you.

  6. 356
    Hank Roberts says:

    More for Jon Keller on rates of change:

    Also relevant, hat tip to someone who found an un-paywalled copy while it lasts:

    “The question really is not whether the loss of the sea ice can be affecting the atmospheric circulation on a large scale,” she said. “The question is, how can it not be?”

    “And then my life changed,” Francis says. Before the GRL paper appeared, she estimates she spent just one-quarter of her time working on outreach and communication. Soon after, that fraction rose to 80%. Since 2011, she has logged more than 150 media mentions and speaking engagements. She’s an articulate scientist, after all, with a surprising take on a topic that
    everyone loves to talk about: the weather.

    As Francis has accumulated media appearances, however, opposition to the hypothesis has grown steadily among researchers….

  7. 357
    Hank Roberts says:

    P.S. — when I read

    “opposition to the hypothesis has grown steadily among researchers….”

    I don’t think disbelief.
    I think this is a valuable hypothesis, because that’s science at work, asking:
    “What would prove it wrong?”

  8. 358
    Hank Roberts says:

    Gavin wrote above:

    …near bedrock, the ice gets messed up and turned over, and so it isn’t quite as easy as going to the deepest spot and drilling. – gavin]

    Also, from what I’ve read, the bottom of the icecap melts, more than we imagined a decade or so ago, e.g.

  9. 359
    Victor says:

    I recently found an even better example of how easy it is for scientists to read what they want to see into their data. In an article published in the AMS Journal Online in 2003, the authors claim their “Results reveal a statistically significant increasing trend in snowfall for the lake-effect sites, whereas no trend is observed in the non-lake-effect settings.”

    To illustrate, they display the following pair of graphs:

    These graphs have been resuscitated by certain media outlets to argue that the recent lake-effect storm in the Buffalo area was influenced by global warming. The uppermost graph has been touted as representing a distinct upward trend in lake-effect snowfall from 1931 to 2001, implying that last week’s storm was a natural product of the same trend.

    In response to the global warming claims, an article by Sierra Rayne, entitled “Climate Hysteria and the Buffalo Snowpocalypse,” appeared recently at the American Thinker website. And yes, I know about that site, and strongly disagree with just about everything I’ve seen there, including most of their “denier” accusations. Yet Rayne has some very meaningful points to make and makes them very clearly. Here’s what she has to say about that lake-effect graph:

    “A 2003 study that used oxygen isotopes to distinguish local lake-effect snow from snow formed outside the region showed a sharp increase in lake-effect events over the last few decades.” The following graph was then shown, taken from this 2003 study in the Journal of Climate:

    Looking closely at this figure tells me that it is very likely that the linear regression line shown is inappropriate, and that this dataset actually disproves increasing lake effect snow in the region from anthropogenic climate change. All of the increase in lake effect snow appears to have taken place between 1930 and 1970, and since 1970 there appears to be no significant upward trend in the data (don’t get tricked by the single high datapoint in 2001 – focus on the pattern since 1970).

    Sure enough, when I digitized the graph and analyzed the data myself, I found absolutely no significant trend in lake effect snowfall from 1970 onward.”

    To my surprise, I discovered that she was right. But there was even more to it, as I later noted in some comments on my blog (see my following post)

  10. 360

    The global surface temperature anomaly data for Oct 2014 have been published by the Met Office, NASA GISS, and NCDC NOAA. When I use the published absolute temperature data to transform the anomaly data into temperatures, I get a global temperature for Oct 2014 of 14.67 °C (HADCRUT4), 14.79 °C (GISTEMP LOTI) and 14.74 °C (NOAA). The average global temperature is 14.73 +- 0.06 °C. The October temperatures of four years fall into this interval: 2003, 2005, 2012, and 2014. Summary: the October 2014 global temperature is high, but not a record when the uncertainty of the data is taken into account.
    Now let us look at the trends in the 1-yr running mean: the 30-yr trend is 0.184 +- 0.005 °C/decade, with a falling tendency since 2004. The 60-yr trend is 0.126 +- 0.002 °C/decade and has been rising since 1995. The differences are caused by the natural variability of the global climate. The climate projections made by modelling were not very successful in the recent two decades because they neglect the natural variability, which is mainly caused by ocean currents. I have tried to make my own forecasts of the global temperature, based on the historical temperature data. The idea behind this is that the forcings of the climate (solar irradiation, albedo, oceans, GHG, land use, aerosols, etc.) vary slowly in time and the “inertia” of the global climate is large. Simple least-squares fitting is used with several choices of fitting function. The best fitting function for the temperature was found to be T(t) = c0+c1*t+c2*t^2+c3*sin(c4*t+c5), where the ci are fitting parameters. The 60-yr forecast from 1954 to now using data from 1850-1954 is shown here, and the forecast from now to 2074 using data from 1850-2014 is shown here. A temperature rise of the annual average from 14.6 °C to 15.5 °C is to be expected in the next 60 years.
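    Paul’s fitting function T(t) = c0+c1*t+c2*t^2+c3*sin(c4*t+c5) is linear in everything except the frequency c4 and phase c5, since c3*sin(c4*t+c5) expands to a*sin(c4*t)+b*cos(c4*t). A minimal sketch of that kind of fit (my own illustration, not Paul’s actual software; the synthetic series and its 61-year cycle are invented for the demo):

```python
import math

def solve(A, y):
    """Gaussian elimination with partial pivoting for A x = y."""
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit(ts, ys, freqs):
    """Grid-search the frequency c4; for each trial, solve the linear
    least-squares problem for [c0, c1, c2, a, b] via normal equations."""
    best = None
    for c4 in freqs:
        X = [[1.0, t, t * t, math.sin(c4 * t), math.cos(c4 * t)] for t in ts]
        XtX = [[sum(row[i] * row[j] for row in X) for j in range(5)]
               for i in range(5)]
        Xty = [sum(row[i] * y for row, y in zip(X, ys)) for i in range(5)]
        beta = solve(XtX, Xty)
        sse = sum((y - sum(b * v for b, v in zip(beta, row))) ** 2
                  for row, y in zip(X, ys))
        if best is None or sse < best[0]:
            best = (sse, c4, beta)
    return best

# synthetic 160-year series with a known 61-year cycle (time is centered
# to keep the normal equations well conditioned)
ts = list(range(-80, 80))
true_c4 = 2 * math.pi / 61
ys = [14.0 + 0.004 * t + 0.00002 * t * t + 0.15 * math.sin(true_c4 * t + 1.0)
      for t in ts]
sse, c4, beta = fit(ts, ys, [2 * math.pi / p for p in range(40, 81)])
print("recovered period:", round(2 * math.pi / c4), "years")  # 61
```

    On clean synthetic data this recovers the planted cycle exactly, which is the danger MARodger points to below: curve fitting will happily find a “cycle” whether or not a physical mechanism exists.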

  11. 361
    prokaryotes says:

    Quote by Hank Roberts (#356): “..opposition to the hypothesis has grown steadily among researchers.”

    Well, it would be good to know more details, because that statement is very broad and a bit misleading: there is also research which supports her theory. The basic principle, which suggests more blocking patterns, still appears sound to me, and such patterns can be observed. Maybe it’s a seasonal thing, and probably not uniform.

  12. 362
    Chris Dudley says:

    Hank (#345),

    Which is only our old friend the water vapor feedback, not some new effect involving differences in surface emissivity. So, there seems to be a problem with the paper.

  13. 363
    Chris Dudley says:

    Paul (#360), you have made a mistake applying the HADCRUT absolute file to GISS or NOAA anomalies. Differences in method and baseline make that unworkable with any precision. October 2014 does appear to be a record in the HADCRUT anomaly data for October.

  14. 364
    Hank Roberts says:

    > prokaryotes says:
    > Quote by Hank Roberts (#356): “..opposition to the hypothesis …”
    > Well, it would be good to know more details then

    I cited the source from which the quote came; click the link for details.
    As Blake puts it, “opposition is true friendship” — better than lack of interest, for a scientific hypothesis. There’s plenty to find with Scholar.

    > Victor … lake effect snowfall …
    You have rediscovered what some citing papers had to say. There’s plenty to find with Scholar:
    E.g. Trend identification in twentieth-century US snowfall: The challenges, which is itself cited by 40 more recent papers

    You haven’t proved or disproved a trend there, Victor; that’s the point. You’ve rediscovered what many papers have said: it’s noisy data.

    This past week’s snowfall adds another data point worth considering, as this time we have high-resolution reports of “bands” in this last snowfall — heavy snow, light snow, almost no snow, with very different depths in repeating bands. That’s detail, mapped informatively. Take a look. Was that banding identified previously, or did it cause a lot of excess noise in the data, depending on where the bands fell relative to different weather stations?

  15. 365
    catman306 says:

    I’m soliciting opinions about this work:

    The Polar Circulation is So Wrecked That Surface Winds Now Rotate Around Greenland

    [Response: It’s akin to predicting weather with a dowsing rod. He’s taking snapshots of circulation and pretending that this is predictive – it’s pure wishful thinking. – gavin]

  16. 366
    Victor says:

    #364 Thanks for that link, Hank. It’s good to see the problem has been “officially” recognized. It wasn’t my intention to prove or disprove a trend, however. It was to point out a fundamental problem with identifying trends in general. Evaluation of snowfall trends is only one example. And I don’t think noise is really the issue.

    A good summary of several different methods of measuring surface temperature is presented at the Skeptical Science website, titled “Are surface temperature records reliable?” They answer in the affirmative, using several graphs to illustrate what looks like a broad general consensus: in every case we see a distinct upward trend.

    On their “Temperature Trend Calculator”, which I’m sure you’re all familiar with, the trends are displayed very clearly in graphs produced by each of ten different methods. All you have to do is press the “calculate” button and it’s all done for you. Cool!

    Now while every single example shows an upward trend, I’m fascinated by the different pictures presented in each, and how they influence our perception of the data in each case. The GISS Temp graph begins at roughly 1880 and ends, I suppose, in 2011. We see three roughly parallel blue lines showing a steadily rising trend. But what we see in the actual observations is NOT a steadily rising trend. From 1880 to 1910 there’s a rather strong trend downward. Followed by a strong trend upward until 1941, which in turn is followed by another downtrend, followed by a very slight uptrend until 1979, when we see a very steep upturn peaking in 1998, the year of the notoriously strong El Nino. This is followed by the famous “hiatus,” from 1998 to present — but given the overall scale of the graph, the hiatus is barely noticeable, little more than a blip. So. The ONLY periods where we see a clear and distinct uptrend are 1910 through 1941 and 1979 through 1998. All else is either going down or (more or less) level.

    So what is the basis for the trend lines? Since all three are perfectly straight, I’m assuming they must be anchored at two points only, which I assume to be the starting point of the graph, in 1880, and the ending point, in 2011 (???). So what can be the meaning of a trend with arbitrary starting and ending points? Obviously it would be different if anchored at different dates. What’s especially disturbing is the fact that we would see the exact same “trend” if the measurements produced a simple diagonal from beginning to end, rather than the very wavy and wayward line we actually see.

    The RSS graph, beginning with the date of the earliest satellite measurements, in 1980 I presume, presents a very different picture, largely because of the difference in scale. Yet we see a very similar set of trend lines, all roughly in parallel, all slanting upwards. It’s this graph that especially reminds me of the lake-effect graph discussed earlier, because it too can be divided into two distinct parts, with two very different trends.

    From 1980 to 1998 there is an uptrend — though without the El Nino peak in ’98 it would be pretty shallow. Then, from 1998 to roughly the present, we see no real trend at all — due to the relatively brief timescale, this second part takes up roughly half of the entire graph and is therefore impossible to ignore, as with the GISS graph. Even if we omit the ’98 El Nino and anchor in 2003 and 2013 (???), we actually see something of a down trend — with an anomalous peak around 2010, which clearly isn’t part of any trend in either direction. Interestingly, however, if we were to anchor at the low point we see roughly at the year 2000, there would be a slight uptrend in this half of the display. So clearly a LOT depends on where the anchor points are chosen, and how many are needed to do justice to any data set.

    The Skeptical Science article makes much of all the very similar trends, but says nothing about the anchoring issue. And I’m wondering whether there is anything in the literature that deals with this problem.
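    The sensitivity Victor calls “anchoring” — usually discussed as sensitivity to the choice of analysis period — is easy to demonstrate. A toy sketch (every number here is invented for illustration; a steady warming plus noise, with a one-year El-Niño-like spike at the start of the short window):

```python
import random

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(0)
years = list(range(1880, 2015))
# toy anomaly series: steady 0.008 deg/yr warming, modest noise,
# and a one-year spike in "1998"
temps = [0.008 * (y - 1880) + random.gauss(0, 0.05) for y in years]
i98 = years.index(1998)
temps[i98] += 0.5

full = ols_slope(years, temps)          # whole record
since98 = ols_slope(years[i98:], temps[i98:])  # window starting on the spike
print(f"1880-2014 trend: {full:+.4f} deg/yr")
print(f"1998-2014 trend: {since98:+.4f} deg/yr")
```

    The full-record slope recovers the planted 0.008 deg/yr almost exactly, while the short window beginning on the spike year reports a much smaller (or even negative) “trend” — from a series constructed to warm steadily throughout.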

  17. 367

    #359–Sorry, but that’s an example of moving goalposts. The original claim was that there was a statistically significant trend over most of the 20th century. That is obviously not contravened by the counterclaim that there is not such a trend over the time frame from 1970 onwards.

    Assuming the claim of lack of statistically significant trend is correct–about which I am properly skeptical, and do not take a position one way or the other–that does not imply that there is no trend. It just means that we can’t tell, to within a specified probability, whether the linear fit is due to chance or not. That could be due to too small a sample size (i.e., too short a record). As such, it doesn’t ‘disprove’ the overall analysis (though it may cast some doubt on the correlation). Ms. (?) Rayne may have gone a ‘bridge too far’ in drawing conclusions in this respect.

    It would be interesting to see what subsequent work says–and it looks as if Hank may have found some for us!

  18. 368
    sidd says:

    I was annoyed that the recent crop of geoengineering papers at the Royal Society did not address the schemes for mineral carbonation a la Schuiling or Kelemen. These involve accelerated weathering of rocks such as olivine or peridotite; see e.g. doi:10.5194/esdd-2-551-2011 and doi:10.1073/pnas.0805794105

    I had shied away from the Schuiling scheme since it involves a mining effort comparable to current carbon mining rates with consequent surface and ecological impact. But Kelemen has an interesting twist where surface sea water might be pumped into fractured peridotite formations with much less impact. While Schuiling estimates costs of approx 10 euro/ton CO2 captured at scale, i can find no costs for the Kelemen scheme using surface seawater. I would appreciate leads to further research on the Kelemen variant using surface sea water.


  19. 369
  20. 370
    MARodger says:

    Paul Berberich @360.
    May I first take the liberty of translating your comment,

    ‘The global average temperature for October 2014 is the highest on record for that month but not with statistical significance. The linear rise of annual global average temperature, for 30 year periods peaks in 1985-2004 and for 60 year periods have been rising since 1936-95. The GCMs have been poor at predicting global average temperature in the last two decades. This is due to natural variability, mainly due to ocean processes.. A curve-fitting forecast is here presented.’

    And to reply:-
    Although your apparently simplistic calculation of the statistical spread of the October 2014 global temperature is flawed, the assertion is correct.
    You will find the peak linear 30-year trend 1985-2004 is an artifact of the 1985-87 temperatures rather than an artifact of the 2004-06 data. If you fix the start of your regression at, say, 1980 you will get a peak in 2007, as this graphic using GISS data demonstrates.
    Your curve fitting has been done before, and to suggest that past wobbles in global temperature are all the result of natural processes, and indeed the product of regularly recurring natural processes, is not just highly unlikely but, in my own contention, impossible.

  21. 371

    The temperature anomaly of HADCRUT for Oct 2014 is 0.61 +- 0.14 °C. There are many earlier October months with temperature anomalies within this error interval.
    If different organizations publish data for the same quantity, it should be possible to compare them. I have transformed the data sets of GISS and of NOAA to the reference time interval of HADCRUT4 and then used the published absolute temperature data to calculate the global temperatures. I am surprised that the overall agreement of the three data sets is better than +- 0.14 °C.
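    The rebaselining step Paul describes can be sketched in a few lines: to put anomaly series quoted against different reference periods on a common footing, subtract each series’ own mean over the shared baseline (HadCRUT4’s is 1961-1990). This is my own minimal illustration, not Paul’s software, and the toy series is invented:

```python
def rebaseline(years, anoms, base_start=1961, base_end=1990):
    """Re-express an anomaly series relative to a common baseline period
    by subtracting the series' own mean over that period."""
    base = [a for y, a in zip(years, anoms) if base_start <= y <= base_end]
    offset = sum(base) / len(base)
    return [a - offset for a in anoms]

# toy series originally quoted against some other baseline
years = list(range(1950, 2015))
anoms = [0.01 * (y - 1965) for y in years]
shifted = rebaseline(years, anoms)

# after rebaselining, the 1961-1990 mean is zero by construction
mean6190 = sum(a for y, a in zip(years, shifted) if 1961 <= y <= 1990) / 30
print(abs(mean6190) < 1e-9)  # True
```

    Note Chris’s caveat still applies: this aligns the baselines, but differences in spatial coverage and method between the datasets remain.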

  22. 372
  23. 373
    Hank Roberts says:

    > our old friend the water vapor feedback

    Yeah, I don’t think anyone’s arguing that it’s a novel physical phenomenon. They’re saying (1) it’s measured rather than assumed, and (2) as there are more months of open water and humid air, that same feedback operating for more months makes some measurable difference.

    No? If they screwed up something in the paper, I’d think the journal would print a letter pointing out what they missed.

    I just haven’t yet understood why this wouldn’t make a difference.

  24. 374
    Hank Roberts says:

    > Victor … I digitized the graph and analyzed the data myself,
    > I found absolutely no significant trend

    > Victor … It wasn’t my intention to prove or disprove a trend

    You found absolutely no significant trend, unintentionally?

    You’re discovering one of the basics taught in Statistics 101, but you’re making up your own terms — like “anchoring” — and asking others to use your terms.

    I suggest, if you take Stat 101, and use the standard terms for this, it will make sense to you.

    See the linked discussion, especially the Notes page; it answers some of your questions. E.g., he writes:

    After many requests, I finally added trend-lines (linear least-squares regression) to the graph generator. I hope this is useful, but I would also like to point out that it can be fairly dangerous…

    Depending on your preconceptions, by picking your start and end times carefully, you can now ‘prove’ …

    Stat 101 changes the world as we see it, if we pass the course.

  25. 375
    Mal Adapted says:

    Meow, in response to Victor:

    You forgot (3) It’s awfully difficult to think of a case in which cherrypicking is a valid epistemological approach; and (4) It’s awfully easy to fall into the trap of false equivalence.

    Feynman’s wisdom bears repeating: “The first principle [of Science] is not to fool yourself, and you are the easiest person to fool.”

  26. 376
    Chris Dudley says:

    Paul (#371),

    Well, for example, GISS and HADCRUT use different coverage of the globe, with more polar coverage in GISS. So the unpublished (apparently deliberately so) equivalent to the HADCRUT absolute file would be different. GISS allows you to do absolute annual global temperature estimates, but not monthly.

  27. 377

    Victor, you do realize that you said “I don’t think noise is really the issue,” and then proceeded to write several paragraphs largely about statistical noise?

    To answer your question about how linear trends are calculated:

    Other than that, let me just endorse what Hank said.

  28. 378
    Victor says:

    #374 Hank Roberts
    > Victor … I digitized the graph and analyzed the data myself,
    > I found absolutely no significant trend
    > Victor … It wasn’t my intention to prove or disprove a trend

    Hank, that first quote was from Sierra Rayne, the author of the article I was discussing. Sorry if that wasn’t clear. She was the one eager to disprove the trend. My intention was to make a more general point about how trends are determined and evaluated.

    I’m not a statistician (though I’ve collaborated with statisticians), so I’m not aware of the standard methodology or terminology used in modeling trends. However, I don’t think it’s necessary to be familiar with statistical methods and terminology to see the very obvious problem noted by Rayne, which can indeed be applied to many other cases. I read the “Notes” page you linked to, which does reassure me that at least some climate scientists are aware of this problem. I certainly agree that “by picking your start and end times carefully, you can prove” just about anything you like. However, the authors of the Skeptical Science piece I cited seem unaware of the problem. And since articles such as the one in Skeptical Science are the sort of thing the general public reads, I’m concerned that they (we) are being misled.

  29. 379

    Excuse my bad English (my German is not much better). But I have been thinking, and I want to say it. I used a simple fitting procedure to estimate future global temperatures. It is a more sophisticated mathematical method than a trend calculation, but it says nothing about the causes of temperature change. In this respect modelling the climate is superior. But as long as the calculations do not agree with the data, people will not believe them. The problem is similar to finding the temperature coefficient of a Pt sensor: you can start with a first-principles calculation of the electron-phonon interaction in Pt, or you can just look it up in the Handbook of Physics and Chemistry.

  30. 380

    Chris (#376)
    I have used both the global data files and the gridded data files of NOAA, GISS and HADCRUT4. The missing data in the gridded files were filled by extrapolation. Then I calculated the global temperature anomalies and the temperature. The differences are smaller than 0.1 °C for October 2014. The errors may be larger for historical times, but we are mainly interested in the present and the future. I hope the situation of poor data coverage will be improved in the future. You can download the software and the data from my website.
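    One detail worth flagging in any such calculation: a global mean from gridded data must be area-weighted, or the poles are hugely over-counted on a regular latitude-longitude grid. Each cell’s weight is proportional to cos(latitude). A self-contained sketch (toy 5-degree grid with invented values; not Paul’s actual software):

```python
import math

def global_mean(grid, lats):
    """Area-weighted mean of grid[i][j], where row i sits at latitude
    lats[i] (degrees); weight per cell is cos(latitude)."""
    num = den = 0.0
    for row, lat in zip(grid, lats):
        w = math.cos(math.radians(lat))
        for v in row:
            num += w * v
            den += w
    return num / den

# toy 5-degree grid: anomaly 1.0 everywhere, 3.0 poleward of 70 degrees
# (a crude stand-in for polar amplification)
lats = [-87.5 + 5 * i for i in range(36)]
grid = [[3.0 if abs(lat) > 70 else 1.0] * 72 for lat in lats]
print(round(global_mean(grid, lats), 3))
```

    An unweighted average of this grid would give about 1.44, while the area-weighted mean is only about 1.12, because the amplified polar bands cover little actual area.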

  31. 381
    Marco says:

    Victor @366, seriously? All your questions are answered behind the “see here for more information” text (with *here* being a clickable link). Those “three lines” are not just any three lines, but the actual trend in the middle and the confidence interval around it (which you will find to be not quite straight lines).

  32. 382
    MARodger says:

    Paul Berberich @379.
    You say of your method – “It says nothing about the causes of temperature change.” I do not agree. By modelling the global average temperatures with recurring cycles, you are adopting a rationale that such cycles can exist, indeed do exist as physical mechanisms. You have proposed there exists a 61-year cycle wobbling global temperatures 2.2ºC peak-to-peak. But where is the evidence for such a phenomenon? What possible mechanism is there that could be at work? I see none.
    And do consider that your model could also be used to calculate past temperatures. I think you would agree that it performs rather poorly looking back in time.

  33. 383
    Victor says:

    #381 Marco

    Thanks for pointing out that clickable link, which I didn’t notice before. I clicked it and learned something about the problems with those trend lines, which the creator of that website certainly acknowledges — so thanks. But I wonder how many others visiting that site noticed the link and bothered to click. The authors of the Skeptical Science article I cited apparently did not, since the whole point of that article was how all the trendlines reinforced one another. Sure they do. If you want them to.

    As for the issue of “noise,” as raised by Kevin, I get the feeling that any data that doesn’t conform to the trend is regarded as noise. In other words, damn the data, what’s really important is the statistics. Which makes sense, I suppose, since the latter is a lot more malleable than the former.

  34. 384
  35. 385
  36. 386

    “As for the issue of “noise,” as raised by Kevin, I get the feeling that any data that doesn’t conform to the trend is regarded as noise.”

    Er, no, Victor–at least, “no”, if you mean that the noise is used as excuse in some essentially arbitrary process. And “no” again, in that data itself is not regarded as noise. (Except, I suppose, in the case where it is demonstrated that there is, in fact, no signal present at all.)

    However, there is a “yes”, too, since the whole idea of a regression analysis consists of modeling data as the composite of *both* signal and noise. The former varies in some relatively simple, systematic fashion; the latter is often quasi-random variation but could also be periodic. Parsing the one from the other in a useful fashion is what the analysis is about.

    This can’t be done carelessly or arbitrarily–there are established standards and practices regarding what is statistically correct, appropriate, and robust. I’m not a statistician, either, so I can’t go into a lot of detail as to exactly what they are. But the whole reason for their existence is basically to help people not fool themselves–whether through the confirmation bias that concerns you, or through some other error.

    Standard measures, notably but not exclusively the ‘standard deviation’, afford knowledge about the signal and noise, and the data set as a whole. Those characterizations then constrain what can be concluded from the analysis. A proper statistical analysis includes measures comparing the strength of the signal (or trend) with that of the noise. And if the balance tilts too far toward noise, the researcher knows not to draw conclusions.
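    The signal-versus-noise comparison described above can be made concrete: the least-squares slope comes with a standard error, and the ratio slope/SE (a t-statistic, with |t| around 2 as the usual 95% rule of thumb) is what decides whether a trend stands out from the scatter. A minimal sketch on synthetic data (all numbers invented; this simple version ignores autocorrelation, which matters for real temperature series):

```python
import math

def trend_with_se(xs, ys):
    """OLS slope and its standard error (no autocorrelation correction)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    intercept = my - slope * mx
    resid = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return slope, se

years = list(range(1979, 2015))
# toy anomalies: 0.016 deg/yr trend plus a deterministic wiggle
# standing in for noise
temps = [0.016 * (y - 1979) + 0.1 * math.sin(1.7 * (y - 1979)) for y in years]
slope, se = trend_with_se(years, temps)
print(f"trend {slope:.4f} +/- {se:.4f} deg/yr, t = {slope / se:.1f}")
```

    Over the full 36-year window the trend dwarfs its standard error; run the same function on the last dozen points and the ratio collapses, which is exactly the “too short a record” problem discussed above.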

  37. 387
    wili says:

    victor, you seem to be obsessed with a particular Skeptical Science article. The appropriate place to take up questions about that article and its claims would be at Skeptical Science. I don’t think anyone here can enlighten you any more on the subject.

    Gavin, does your dismissal of the robertscribbler piece (@#365) extend to J. Francis’s theories that he draws on, or just to his application of them in this instance? Thanks ahead of time for any clarification.

    (Please delete if this is a duplicate.)

  38. 388
    Hank Roberts says:

    > Victor
    > I’ve collaborated with statisticians …
    > I’m not aware of the standard methodology or terminology …
    > I don’t think it’s necessary to be familiar
    > with statistical methods and terminology …


  39. 389
    Rafael Molina Navas, Madrid says:

    #382 MARodger (> Paul Berberich)
    “And do consider that your model could also be used to calculate past temperatures. I think you would agree that it performs rather poorly looking back in time.”
    Quite right … You know, skeptics grasp at anything they “think” could explain rising temperatures other than GHGs.
    It seems there is a kind of roughly 61-yr cycle in the relative positions of the Sun and, mainly, our bigger planets. I’m very fond of tidal matters, and even think those positions could produce a kind of tidal effect on the Sun that could affect its radiation, but very SLIGHTLY of course …
    I have not deeply considered those possibilities … If real, it would be negligible noise.

  40. 390

    Victor, #366 & 383–

    Rereading those comments, you may actually be starting to ‘get’ the point made by me and many others regarding your initial posts here–or at least, have acquired enough knowledge about stats to begin to understand the issues. Let me explain.

    What you are calling the ‘anchoring problem’ is the crux. First, let me clarify (perhaps unnecessarily) that the trend line is not actually directly ‘anchored’ by the two points chosen to bound the period of analysis. The trend is the product of the line- or curve-fitting process applied to all the data analysed, so every data point will influence the line to some degree (though it may be very small if there are many data points.) So ‘anchor’ may or may not mean what you think it does; I can’t tell where your current understanding of trend calculations is.

    That point out of the way, you are absolutely right that the ‘anchor points’ chosen are vital. However, you are absolutely wrong to think that the Skeptical Science authors are unaware of it. Here’s the deal: it’s well-known that periods of analysis cannot be chosen arbitrarily. Most often, the constraint is practical: either one uses all the data that one has (as in the case of the satellite temperature record, which starts in ’79), or all the data that one can process, due to computational limits or something similar. But the important thing that you *don’t* do is to pick your analysis period based on the result that you ‘want’ to see. That is the very essence of the ‘cherry-pick.’

    And that is precisely why I applied that term to your original argument: the only justification for picking ’98 as the start of the analysis period is that *after the fact* it ‘looks like’ warming ‘stops’. Note that it’s a different story if there is some physical justification as to why warming could be expected to stop: then a statistical analysis might be appropriate. (And, hopefully not to muddy the waters too much, there are statistical tests to determine–objectively!–whether there is a case to be made that actual ‘breakpoints’ exist in the data.) Some folks have attempted to supply this by invoking climatic ‘cycles’, but despite categorical claims for their existence, they are mostly not well-supported by the evidence.

    That’s also the tip-off that the ‘stopped warming in ’98’ meme is not motivated by a search for the truth, but by the desire to ‘debate.’ (I should know, I met my first real girlfriend on the debate team…! ;-) ) Folks promulgating it almost invariably pick that approximate time frame and the RSS temperature record, ignoring all other information, and (in many cases) the knowledge that the data since then do not admit of statistical significance.

    As to your comments about the Skeptical Science article, I think that you are overstating the emphasis on trends. Though definitely important, that’s only the last of their points; the temperature datasets are in good agreement overall despite the differences in detail that you correctly note:

    This is already a long comment, so I’ll leave it here and add a couple of concluding remarks in another comment.

  41. 391
    Hank Roberts says:

    PS for Victor: once you’ve reviewed the basics, say up to p.201
    (or, of course, confirmed you knew all that already)
    you will share vocabulary basic to understanding the words and pictures.

    You asserted that “any data that doesn’t conform to the trend is regarded as noise” by statisticians — showing you missed the basics.

    Larry Gonick presents the basics in that little book.

  42. 392
  43. 393

    Comment, part deux–Victor, I want to just apply what I said in my previous comment and also in my current #386. I suspect that I may be duplicating some of what Hank’s links cover–honestly, I haven’t checked them out yet–but here goes.

    1) As Marco noted, the central line is the trend line. It’s calculated in a completely standardized fashion, as I linked previously; there’s nothing ‘malleable’ about it whatever. There may be some debate about whether a linear trend is the way to go to best characterize the data, but if you choose a linear fit, there’s essentially no ‘wiggle room’ about how you do it. (There are complications like the possible need to correct for autocorrelation, but these matters are not arbitrary, although some of the refinements may become debatable.)

    2) The upper and lower lines–curves really, as Marco noted–are the confidence intervals. They are calculated–again, in standardized ways–to show, essentially, the range of variation that is consistent with the fitted trend, given the scatter in the data. By far the most common interval is 95%, roughly 2 standard deviations. If data points consistently fall outside those upper and lower limit lines, then the model is in trouble, as (again, roughly speaking) there is less than a 5% chance of that happening if the model is right.

    3) The locations of those lines or curves depend upon the variability of the data. If global mean temperature were less variable, those lines would be closer together: with a smaller variance, a datapoint that’s ‘x units’ from the trend is more unusual–i.e., less likely to be a product of chance–than it would be in a more variable data set. Hope that’s at least a bit clear…
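    For the curious, here is roughly how that goes, sketched on synthetic data (the trend, the two noise levels, and the helper function are my inventions, not anything from SkS; a real analysis would also correct for autocorrelation):

```python
import numpy as np

def trend_and_stderr(x, y):
    """OLS slope and its standard error (no autocorrelation correction)."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    sxx = np.sum((x - x.mean()) ** 2)
    stderr = np.sqrt(np.sum(resid ** 2) / (x.size - 2) / sxx)
    return slope, stderr

rng = np.random.default_rng(2)
years = np.arange(1980, 2015).astype(float)
signal = 0.017 * (years - years[0])  # one fixed underlying trend

# Same trend, two different noise levels:
slope_lo, se_lo = trend_and_stderr(years, signal + rng.normal(0, 0.05, years.size))
slope_hi, se_hi = trend_and_stderr(years, signal + rng.normal(0, 0.20, years.size))

print(f"low-variability series:  {slope_lo:+.4f} +/- {2 * se_lo:.4f} C/yr (2 SE)")
print(f"high-variability series: {slope_hi:+.4f} +/- {2 * se_hi:.4f} C/yr (2 SE)")
```

    The noisier series gets a much wider 2-SE band around the very same underlying trend, which is exactly the ‘lines closer together’ point above.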

    Anyway, the shape of those curves is worth commenting upon. You’ll note that the intervals become wider toward the temporal extremes of the record. There are different reasons for this, if I’m remembering the SkS tool correctly. (If not, corrections are invited from the peanut gallery!) For recent times, the issue is the number of data points you have. IIUC–If I Understand Correctly–it takes a minimum of 5 data points to even reach that two-SD (95%) confidence level. So very recent intervals can’t reach the threshold of statistical significance. After that it’s possible, but you need lower variability. Gradually, as you have more data, it becomes easier to reach that standard. The shape of the curves for recent years reflects this.
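    To illustrate with invented numbers (not any real data set): at the same noise level, a short recent window gives a far wider 2-standard-error interval than the full record does, which is why short windows struggle to reach significance:

```python
import numpy as np

def trend_and_2sigma(x, y):
    """OLS slope and a rough 2-standard-error interval."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = np.sqrt(np.sum(resid ** 2) / (x.size - 2) / np.sum((x - x.mean()) ** 2))
    return slope, 2 * se

rng = np.random.default_rng(3)
years = np.arange(1979, 2015).astype(float)
temps = 0.017 * (years - years[0]) + rng.normal(0, 0.1, years.size)

s_full, ci_full = trend_and_2sigma(years, temps)              # whole record
s_short, ci_short = trend_and_2sigma(years[-8:], temps[-8:])  # last 8 years only

print(f"full record:  {s_full:+.4f} +/- {ci_full:.4f} C/yr")
print(f"last 8 years: {s_short:+.4f} +/- {ci_short:.4f} C/yr")
```

    The full-record trend clears its interval comfortably, while the short window’s interval balloons–the same widening you see at the recent end of the SkS curves.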

    By contrast, the early years of the record–and I’m speaking here of the instrumental data sets, NCDC and the like–have wider intervals not because of temporal limitations, but because of spatial and measurement ones: in the earliest years of the instrumental record, there were fewer stations taking measurements, and their measurements are less well documented and therefore less certain. So the confidence interval expands accordingly. (Again, corrections are invited for any confusions I may be promulgating unwittingly here.)

  44. 394
    Victor says:

    #388 Hank Roberts

    You took my phrase out of context. Let’s put it back: “However, I don’t think it’s necessary to be familiar with statistical methods and terminology to see the very obvious problem noted by Rayne, which can indeed be applied to many other cases.”

    When a proposition is prima facie false, no amount of mathematical legerdemain is going to make it true. Though it can certainly serve to obscure the problem. If you can’t see the obvious problem inherent in the graph analyzed very reasonably by Rayne, then your claims to be a scientist are in serious doubt. If you define a trend strictly in terms of linear regression, then of course the graph displays a “trend.” And anything in it that doesn’t fit the trend is “noise.” Fine. That’s what’s known as a tautology and it means nothing. Subjecting the graph to logical analysis, we see very clearly that there is in fact no single trend, but at the very least two different trends. And no algorithm is going to change that.

    Your failure (and the failure of others commenting here) to accept Rayne’s very straightforward analysis speaks more loudly to the failure of “the science” behind climate change dogma than any lengthy screed at any “denier” website could do.

  45. 395
    AIC says:

    Looking forward to an article about data from OCO2…

  46. 396
    Jack Roesler says:

    Dear experts:
    Could anyone here answer the objections from a guy on a Yahoo article on which I’ve been commenting? Here’s his comment:

    “art 20 hours ago 0 0
    The MWP was warmer in mid and northern latitudes as were the Oceans which were .4C warmer than current. The recent South American proxies confirmed LIA was Worldwide.

    NOAA has repeatedly made claims like “Hottest May”.. “Hottest June” only to admit later they were wrong. The satellites temp record disagrees as well.

    In any case, warming is Not the issue. The contribution of C02 to the warming ongoing since the Maunder Minimum IS THE ISSUE . The latest and best science shows little, if any.

    There are now 14 papers in the peer reviewed journals recalculating Climate Sensitivity Down . Some as low as .30C by 2100, indistinguishable from Natural Variability .

    Like I said before .This centuries science ,Jack R ..Try it…You’ll like it.”

  47. 397
    Hank Roberts says:

    I’ve never claimed to be a scientist, nor a statistician. I do have some coursework in both science and math, and try to read the journals as best I can.

    Victor is here promoting something he learned from

    an article by Sierra Rayne, entitled “Climate Hysteria and the Buffalo Snowpocalypse,” appeared recently at the American Thinker website

    It’s not even blog science.

  48. 398

    “Your failure (and the failure of others commenting here) to accept Rayne’s very straightforward analysis speaks more loudly to the failure of “the science” behind climate change dogma than any lengthy screed at any “denier” website could do.”

    Not nearly as loudly as your attempts to grapple with statistical evidence without, you know, knowing any stats.

  49. 399
    Hank Roberts says:

    Some folks I know and respect on this subject write:

    Well, we finally finished it.

    The National Fair Shares report is … worth reading even if you think that we’re doomed, because it very carefully works out what it would mean to hold to the IPCC’s carbon budgets, in …. a climate accord that works for everyone, even the developing countries, one designed to preserve “equitable access to sustainable development” even as it drives a rapid global phase out of all carbon emitting technologies.

    We don’t actually think we’re doomed, of course. If we did, we could never have written anything like this. We think humanity is going to rally. Or at least that it could.

    That and much more at

  50. 400
    Victor says:

    #391 OK, Hank, fair enough. Looks like one of those books “for complete idiots,” which suits me fine. Thanks for the link.