Let the games begin!

To confuse the metaphor even further, Roger Pielke Sr loudly declared that, whatever the results of the Watts paper, it would end up being a game changer:

The TOB effect could result in a confirmation of the Watts et al conclusion, or a confirmation (from a skeptical source) that siting quality does not matter. In either case, this is still a game changing study.

If only people would change the games they play…

My inclination is just to sit back and watch the spectacle, admire the logic-defying leaps, marvel at the super-human feats of hubris and, in two weeks’ time, remark on how little actually changed.


116 comments on this post.
  1. Chip Knappenberger:

    Gavin (#96),

    I understand how using a different standard deviation to calculate z-scores produces a differently shaped sample distribution (why the left-hand panel in the bottom row of Fig. 4 looks different from the right-hand panel). But I don’t understand how using a different mean does. The only difference between anomalies from the 1951-1980 baseline (which I take to mean the 1951-1980 average) and anomalies from the 1981-2010 baseline is a constant shift. If I generate a string of normally distributed random numbers, normalize them, and plot their distribution, and then add a constant to all the original values, divide through by the original stdev, and plot them again, the shape of the distribution stays the same; it is only shifted to the right. So I don’t understand why the right-hand panel in Fig. 4 looks different from the right-hand panel in Fig. 9 (the standard deviation used is the same in both panels, but the mean is different). Consequently, I don’t follow the point being made by Hansen about the need to keep the baseline fixed at some reference period.

    I guess I should just accept that I don’t understand and leave it at that, since I can’t seem to grasp what Rattus and you have repeatedly explained to me. Thanks, though, for taking the time to try to help.

    -Chip
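
    Chip’s single-series intuition is easy to check numerically. Here is a minimal numpy sketch (illustrative values only): shifting a series by a constant and dividing by the original sd moves the distribution without changing its shape. The catch, as the replies below explain, is that the maps aggregate many gridboxes, each shifted by a different amount.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(loc=0.0, scale=1.0, size=100_000)  # one temperature-like series

    z1 = (x - x.mean()) / x.std()        # z-scores against the series' own mean/sd
    z2 = (x + 0.5 - x.mean()) / x.std()  # constant shift, same sd in the denominator

    # Same spread, location shifted by a constant: the shape is unchanged.
    print("sd:   %.3f vs %.3f" % (z1.std(), z2.std()))    # identical
    print("mean: %.3f vs %.3f" % (z1.mean(), z2.mean()))  # differ by ~0.5
    ```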

  2. Mark Shapiro:

    Chip (# 101) and others,

    Figure 9 graphs the anomalies (by decade) from three different baselines for NH land. The first (on the left) uses 1951-1980, and is thus identical to the first graph on the bottom row of Figure 4. Since the next two graphs in Figure 9 use different baselines than Figure 4 (for comparison), the underlying populations are slightly different, so the distributions are different. They are not simply adding a constant to each anomaly; rather, they calculate each anomaly from three different, overlapping populations. Thus you expect the shape of each distribution to change modestly, as the sketch below illustrates.

    The authors discuss the reason for this additional analysis in the section titled “Reference Period.” Think of it as a parallax view, perhaps; it shows some robustness of their conclusions.
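
    A toy numerical illustration of this point (synthetic data, not the actual GISS gridboxes): because each gridbox’s baseline offset depends on its own local trend, pooling anomalies computed from different base periods changes the shape of the combined distribution, not just its location.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_boxes, n_years = 500, 60                 # 500 "gridboxes", years 1951-2010
    years = np.arange(n_years)

    # Each gridbox warms at its own (random) rate on top of local noise.
    trends = rng.uniform(0.0, 0.04, size=(n_boxes, 1))          # degC/yr per box
    temps = trends * years + rng.normal(0, 0.5, size=(n_boxes, n_years))

    # Last-decade anomalies relative to two different base periods.
    anom_early = temps[:, 50:] - temps[:, :30].mean(axis=1, keepdims=True)
    anom_late  = temps[:, 50:] - temps[:, 30:].mean(axis=1, keepdims=True)

    # Each box is offset by a *different* constant (decades of its own trend),
    # so the pooled distributions differ in spread, not just in location.
    print("1951-1980 base: pooled sd %.3f" % anom_early.std())
    print("1981-2010 base: pooled sd %.3f" % anom_late.std())
    ```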

  3. MARodger:

    Chip Knappenberger @101
    I am not a great fan of waving around in public analyses like the one you’ve been enquiring about, Hansen et al (PDF). I feel the logic of the method it uses is not easily comprehensible, not obvious enough. The precedent is thus set for any aspiring numerologist to present a nonsensical mess of similar graphs to prove black is white or whatever (of course, some of them already do that), and only those smarter than the average bear will ever be able to tell that it’s not genuine.

    I also consider that the comments replying to you here have not been helpful, which may be reflective of the less-than-straightforward logic. Your comment @96, I think, pretty much conforms to my understanding of it (although the Response suggests we are both wrong! And we’re off-topic! Oh no!!).

    In Figure 4 each gridbox has a temperature data set. Each gridbox’s data will have a mean (which the choice of anomaly base will shift up or down) and an sd. But when the anomaly base is changed, each gridbox’s anomaly zero will shift by different amounts. So the graph’s shape (a sum of lots of different local anomalies) is dependent on the choice of anomaly base.

    In Figure 9, Hansen et al calculate the sd for each gridbox on a subset of its temperature record: “Standard deviations are for the indicated base periods.” The implication for the graph is that base periods in which more gridboxes show greater scatter (more climatic variability) will be represented in Figure 9 with a narrower bell shape (the sketch below illustrates this).
    Climate variability increased in recent decades, and thus the standard deviation increased. Therefore, if we use the most recent decades as base period, we “divide out” the increased variability. Thus the distribution function using 1981–2010 as the base period (Fig. 9, Right) does not expose the change that has occurred toward increased climate variability.

    Of course I could be wrong on this. A quick read off the screen is no substitute for a proper old-fashioned “paper reading.” And if I did print it out, I might come to understand that I don’t know what on Earth he’s on about, Boo Boo.
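
    A minimal synthetic sketch of the Figure 9 point (invented numbers, not Hansen et al’s data): if local variability grows over the record, dividing by the sd of the later base period “divides out” that increase, and the recent decades look less anomalous.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_boxes, n_years = 500, 60                 # 500 "gridboxes", years 1951-2010
    years = np.arange(n_years)

    # Local variability grows over time, so each gridbox's sd is larger in the
    # later base period than in the earlier one.
    sigma = 0.5 * (1 + years / n_years)        # sd roughly doubles over the record
    temps = sigma * rng.normal(size=(n_boxes, n_years))

    def base_z(t, base):
        """z-scores using the mean and sd of the given base-period slice."""
        mu = t[:, base].mean(axis=1, keepdims=True)
        sd = t[:, base].std(axis=1, keepdims=True)
        return (t - mu) / sd

    z_early = base_z(temps, slice(0, 30))      # 1951-1980 sd in the denominator
    z_late = base_z(temps, slice(30, 60))      # 1981-2010 sd in the denominator

    # Judged against the later (larger) sd, the same recent decade looks tamer:
    # the increased variability has been divided out.
    print("last decade z sd, 1951-1980 base: %.2f" % z_early[:, 50:].std())
    print("last decade z sd, 1981-2010 base: %.2f" % z_late[:, 50:].std())
    ```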

  4. Mickey Reno:

    Question about “adjustments” to the US temp records: if the correction curve looks substantially similar to the warming curve, and the non-adjusted curve looks flatter, would that indicate a physical, natural process, or would it indicate human bias? Or a combination of the two (and if so, in what proportion)?

    [Response: There is no absolute answer. You have to see what the adjustments were for (TOBs, station moves, instrument changes, UHI, etc.), what the uncertainties on each correction were, and how robust the answer is to all these issues. For the US, the basic warming is robust no matter how you do the necessary adjustments (and it is coherent with warming in the surrounding ocean, lake warming, phenology changes, glacial retreat, snow cover decreases, etc.). Different methodologies (NCDC vs. BEST, for instance) give very similar changes. Of course, uncertainty in the adjustments adds a little to uncertainty in the final answer, but it is nothing like as large as some have been claiming. – gavin]

  5. Zeke Hausfather:

    Mickey Reno,

    If you have concerns about the NCDC homogenization process, this is an excellent paper to read: http://www.agu.org/pubs/crossref/2012/2011JD016761.shtml

  6. Patrick 027:

    Re 102 MARodger “each gridbox’s anomaly zero will shift by different amounts.” – I wish I had thought of that! (Maybe I did and thought it would tend to average out in aggregate?)

  7. Jim Larsen:

    Chip, as a layman I think Hansen is saying that the length of the base period introduces significant error in counting high-sigma events. Over 30 years the average temperature rises enough that any algorithm asking “How cold was this day in deviations, not degrees?” would have to contend with the fact that a cold snap “one would remember” back in 1980 was surely colder than the same-sigma event today. Compare two days 30 years apart with the same temperature: they’re really different, but the algorithm treats them as identical. Now start walking the base period forward in time…

    Hansen is saying: pick a period of stability, good data, and as close to pre-human as possible. That’s 1951-1980. Its only obvious flaw, to me, is the aerosols.
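
    A toy sketch of the “walking base period” effect Jim describes (all numbers invented): with a fixed 1951-1980 base, a warming trend keeps pushing recent years past a given sigma threshold, while a base period that walks forward with the climate quietly absorbs the trend.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_sims = 2000
    years = np.arange(1951, 2011)
    trend = 0.02 * (years - 1951)              # 0.02 degC/yr of warming

    hot_fixed = hot_walk = 0
    for _ in range(n_sims):
        temps = trend + rng.normal(0, 1, size=years.size)  # trend + weather noise

        base = temps[:30]                      # fixed 1951-1980 base period
        hot_fixed += ((temps[-10:] - base.mean()) / base.std() > 1).sum()

        for i in range(years.size - 10, years.size):
            prev = temps[i - 30:i]             # base period walks with the climate
            hot_walk += (temps[i] - prev.mean()) / prev.std() > 1

    print("hot (>1 sigma) years per decade, fixed base:   %.1f" % (hot_fixed / n_sims))
    print("hot (>1 sigma) years per decade, walking base: %.1f" % (hot_walk / n_sims))
    ```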

  8. Anon-obs:

    Now, consider that our white coat science team members have grown old or have lost traction for other reasons. Could we manufacture another expert member? YES, of course, and even a much better one.

    Recruit someone with a Name in his/her own scientific field who has expressed doubts concerning climate science. Initially emphasize his/her critical views in aggressive press releases. Provide him/her with money to do the simple sums that are required to prove beyond reasonable doubt that the global temperature has risen over the past 100 years.

    That simple task completed, let him/her declare that his/her results indeed demonstrate, beyond reasonable doubt, that the world has warmed up. Call this the definitive proof: original and fully satisfactory, certainly beyond all previously published work on the topic.

    The author then declares his/her conversion from climate sceptic to believer in global warming, as all his/her doubts have been removed. Add, as an extra embellishment, a note that humanity may have at least something to do with it.

    A climate expert of high public stature has now been manufactured. In particular, he/she has also demonstrated irreproachable character and scientific integrity by reversing his/her personal views as a consequence of his/her own research results.

    He/she will be famous, will be loved by the media, will do great on the speaking circuit, and will be invited to all political hearings to present his/her expert views and opinions on all matters even remotely connected to climate science.

    The Establishment will unwittingly play along, egged on by barbs sent in their direction.

    As an example, he/she may well state that 90% of all claims made by Al Gore in his films and presentations are not supported by science. This will be devastatingly credible, coming from such a famous professional source with exceptional scientific integrity. All his/her other opinions will also carry weight and can be heavily promoted (scientific proof need not be asked for; it would, in fact, be insulting).

    Never forget the basic messages: “Impression is everything”, “Doubt is our product” and “Create controversy”. This is how harmful laws are not passed.

    So, let the games begin…

  9. bob:

    Is this news story a big deal?

    http://phys.org/news/2012-08-deep-sea-temperature-reconstruction-reveals.html

  10. Poul-Henning Kamp:

    There is an entirely separate and valid argument for using 1951-1980 as the base period:

    A very high percentage of our current infrastructure and real estate was designed, planned and built during that period, all based on our perception of the climate back then.

    A five-sigma weather event relative to that baseline will often be outside the design envelope of normal safety-class buildings (i.e., houses) from back then. High safety-class buildings (i.e., buildings with lots of people in them) will be less vulnerable. At least for now.

  11. MARodger:

    bob @109
    If you are interested in the MPT or the whys & wherefores of ice age forcings, the paper (abstract available here) is probably a very big thing.

  12. sidd:

    Re: MPT

    Hansen did an analysis relating NH ice sheet growth to NH spring insolation over the last few stades. Could such an analysis be extended further into the past to see if the MPT can be explained in similar terms?

    sidd

  13. sidd:

    Discussion of the climate dice paper, baselines extended back to 1931, no important change in the results.

    http://www.columbia.edu/~jeh1/mailings/2012/20120811_DiceDataDiscussion.pdf

    sidd

  14. tmarvell:

    Muller was on PBS this morning.

    http://thedianerehmshow.org/shows/2012-08-14/new-consensus-climate-change

    He sort of blames climate scientists for the skeptics’ views, saying that scientists exaggerate and make obviously unsubstantiated claims. Skeptics, he argues, then become suspicious of AGW claims overall.
    I think he is being silly. Of course a few odd climate scientists go too far, but that happens in any scientific area; it’s the general consensus that counts. Skeptics can always find red herrings to attack, but they are dishonest in implying that these outliers represent the scientific community.

  15. Deep Climate:

    Berkeley Earth, part 1: Divergences and discrepancies

    … Here, the post-1950 Berkeley Earth “complete” land series is compared to the preliminary Berkeley series released in 2011, as well as to GHCN-only simulated series, based on overall attributes of those unreleased series provided in the Berkeley Earth companion “methods” paper. The 2011 and 2012 “complete” (ALL) series Berkeley versions both fall squarely in the range of the latest comparable series from the three other groups post-1950. However, the two Berkeley ALL series diverge over the 1980-2010 period, and lie completely outside each other’s 95% confidence intervals in the 2000s, when baselined to 1950-1979. The 1950s average absolute temperature is 0.42 °C higher in the GHCN 2012 series than in the ALL 2012 series, a 25-sigma difference with respect to the reported uncertainty in the GHCN series. The GHCN 2012 series falls halfway between the 2012 ALL and 2011 ALL series in the 2000s. As well, the Berkeley 2012 GHCN and ALL series widen increasingly the further one goes back before the 1950-1979 baseline period, with the ALL series about 0.3 °C cooler in the early 1800s.

    Other issues requiring further analysis are also identified, particularly a reported reversal in the long-term trend of narrowing diurnal temperature range starting in 1987, which contradicts previous GHCN-based analyses. Taken together, these issues cast doubt on the robustness of the present Berkeley Earth analysis, and point up the need for more open data access and improved diagnostics in order to further assess the reliability of the Berkeley Earth approach to surface temperature analysis.

  16. Eccentric & Anomlaous:

    The big picture is this: an independent, previously skeptical team of researchers obtained results that confirm AGW. No Earth-shattering science, but the overall result should be seen as positive for climate scientists, who have been making the same claims for many, many years.