

Berkeley earthquake called off

Filed under: — eric @ 24 October 2011

Anybody expecting earthshaking news from Berkeley, now that the Berkeley Earth Surface Temperature group led by Richard Muller has released its results, had to be content with a barely perceptible quiver. As far as the basic science goes, the results could not have been less surprising if the press release had said “Man Finds Sun Rises At Dawn.” This must have been something of a disappointment for anyone hoping for something else.

For those not familiar with it, the purpose of Berkeley Earth was to create a new, independent compilation and assessment of global land surface temperature trends using new statistical methods and a wider range of source data. Expectations that the work would put teeth in accusations against CRU and GISTEMP led to a lot of early press, and an invitation to Muller to testify before Congress. However, the big news this week (e.g. this article by the BBC’s Richard Black) is that there is no discernible difference between the new results and those of CRU.

Muller says that “the biggest surprise was that the new results agreed so closely with the warming values published previously by other teams in the US and the UK.” We find this statement itself surprising. As we showed two years ago, any of several simple statistical analyses of the data that were freely available at the time showed that it was extremely unlikely that the results would change.
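That claim is easy to check in miniature. The sketch below uses synthetic anomalies (the station count, noise level, and trend are assumed illustrative values, not real data) to show why independent re-analyses of the same anomaly data converge: with many stations, even crude aggregation methods recover essentially the same trend.

```python
# Toy check (synthetic data, not the real station record): two crude
# aggregation methods applied to the same anomaly data recover
# essentially the same warming trend.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2011)
true_trend = 0.015  # deg C per year, an assumed illustrative value

# 100 synthetic "stations": shared warming signal + independent noise
signal = true_trend * (years - years[0])
stations = signal + rng.normal(0.0, 0.5, size=(100, years.size))

def fitted_trend(series):
    """Least-squares slope of a series against time, in deg C per year."""
    return np.polyfit(years, series, 1)[0]

trend_mean = fitted_trend(stations.mean(axis=0))          # plain average
trend_median = fitted_trend(np.median(stations, axis=0))  # robust average

print(trend_mean, trend_median)  # both land very close to 0.015
```

With a hundred stations and sixty years of data, the two estimates differ only in the fourth decimal place; the choice of aggregation method barely matters, which is the point.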

The basic fact of warming is supported by a huge array of complementary data (ocean warming, ice melting, phenology etc). And shouldn’t it have helped reduce the element of surprise that a National Academy of Sciences study already concluded that the warming seen in the surface station record was “undoubtedly real,” that Menne et al showed that highly touted station siting issues did not in fact compromise the record, that the satellite record agrees with the surface record in every important respect (see Fig. 7 here), and that numerous independent studies (many of them by amateurs) also confirmed the warming trend?

If the Berkeley results are newsworthy, it is only because Muller had been perceived as an outsider (a perception driven in part by his trash-talking about other scientists), and has taken money from the infamous Koch brothers. Someone acting against expectation (“Man bites dog”) always makes better news than the converse, something that Muller’s PR effort has exploited to the maximum. It does take some integrity to admit getting the same answer as those they had criticized, despite their preconceptions and the preconceptions of their funders. And we are pleased to see Muller’s statement that “This confirms that these studies were done carefully and that potential biases identified by climate change sceptics did not seriously affect their conclusions.” It’s far from the overdue apology that Phil Jones (of CRU) deserves from his critics, but it’s a start.

But Muller’s framing of the Berkeley results is still odd. His statement that, had they found no warming trend, this would have “ruled out anthropogenic global warming” is true in a technical sense, but such a finding would not have implied that human drivers of climate change are nothing to worry about. It would not have overturned more than a century of firmly established radiative-transfer physics and thermodynamics. Nor would it have overturned the basic chemistry that led Bolin and Eriksson (reprinted here) to predict in 1959 that fossil fuel burning would cause a significant increase in CO2, long before the results of Keeling’s famous Mauna Loa observations were in. As a physicist, Muller knows that the reason for concern about increasing CO2 comes from the basic physics and chemistry, which were elucidated long before the warming trend was actually observable.

In a talk at AGU last Fall, Naomi Oreskes criticized the climate science community for being reluctant to take credit for their many successful predictions, so here we are shouting it from the rooftops: The warming trend is something that climate physicists saw coming many decades before it was observed. The reason for interest in the details of the observed trend is to get a better idea of the things we don’t know the magnitude of (e.g. cloud feedbacks), not as a test of the basic theory. If we didn’t know about the CO2-climate connection from physics, then no observation of a warming trend, however accurate, would by itself tell us that anthropogenic global warming is “real,” or (more importantly) that it is going to persist and probably increase.

Muller’s other comments do very little to shed light on climate change, and continue to consist largely of putting down the work of others. “For Richard Muller,” writes Richard Black, “this free circulation also marks a return to how science should be done,” the clear insinuation being that CRU, GISS, and NOAA had all been doing something else. Whatever that “something else” is supposed to be completely eludes us, given that these groups all along have been publishing results in the peer-reviewed literature using methods that proved easy to reproduce using easily available data (and in the GISTEMP case, complete code). In one sense, though, we do agree with Muller’s quote: nobody has stolen his private emails and spun them out of context to make his research look bad.

Laudably, Muller’s group have submitted their research to peer-reviewed journals, and the submitted drafts are available on their website. Amidst a number of verifications of already well-established results on the fidelity of the surface station trends, they also claim to have discovered something new. In their paper Decadal Variations in the Global Atmospheric Land Temperatures, they find that the largest contributor to global average temperature variability on short (2-5 year) timescales is not the El Niño-Southern Oscillation (ENSO) (as everyone else believes), but is actually the Atlantic Multidecadal Oscillation (AMO). This is pretty esoteric stuff, but it would actually be quite interesting if it were true, though we hasten to add that even if true it would have no significant bearing on the interpretation of long term temperature trends.

Before anyone gets too excited, though, they should take note that the basis for this argument is that the correlation between the global average temperature and a time series that represents the AMO is higher than for one that represents ENSO. But what time series are used? According to the submitted paper, they “fit each record [ENSO and AMO time series] separately to 5th order polynomials using a linear least-squares regression; we subtracted the respective fits… This procedure effectively removes slow changes such as global warming and the ~70 year cycle of the AMO, and gives each record zero mean.” Beyond the obvious fact that if one removes the low frequencies, then we’re really not talking about the AMO anymore (the “M” in “AMO” stands for “Multidecadal”), one has to be rather cautious about this sort of data analysis. Without getting into the nitty-gritty technical details here, suffice it to say that Muller & Co. are proposing a new understanding of global temperature variability, and their statistical approach is, at the very least, poorly described.
There is a large literature on how to do this sort of thing, not to mention previous work on the AMO and its relationship to global temperatures (e.g. this or Mann and Park (1999) (pdf), among many others), which the Berkeley group does not cite.
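To see concretely why the quoted procedure undercuts the “multidecadal” label, consider a toy version (synthetic sinusoids, not the Berkeley series or any real index): subtracting a 5th-order polynomial fit strips out most of a ~70-year cycle while leaving fast interannual variability almost untouched, so what survives the subtraction is, by construction, not the multidecadal part.

```python
# Toy illustration (synthetic series, not the Berkeley data): what does
# subtracting a 5th-order polynomial fit leave of a ~70-year cycle?
import numpy as np

years = np.arange(1900, 2011)
x = np.linspace(-1.0, 1.0, years.size)  # rescaled time, for a stable fit

slow = np.sin(2 * np.pi * (years - 1900) / 70.0)       # ~70-yr "AMO-like" cycle
fast = 0.5 * np.sin(2 * np.pi * (years - 1900) / 4.0)  # interannual wiggle
series = slow + fast

def detrend5(y):
    """Subtract a 5th-order polynomial least-squares fit, as in the quote."""
    return y - np.polyval(np.polyfit(x, y, 5), x)

residual = detrend5(series)

# Most of the multidecadal variance is removed by the subtraction...
print(detrend5(slow).var() / slow.var())   # a small fraction of 1
# ...so the residual is dominated by the fast component
print(np.corrcoef(residual, fast)[0, 1])   # high correlation
```

Whatever such a residual correlates with, it is the fast, interannual behavior of the index, not its multidecadal character.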

Overall, we are underwhelmed by the quality of the Berkeley effort so far, with the exception of the work by Robert Rohde on the dataset agglomeration and the statistical approach. And we remain greatly disappointed by Muller’s public communications (e.g. his WSJ op-ed), which appear far more focused on raising his profile than on enlightening the public about the state of the science.

It will be very interesting to see what happens to these papers as they go through peer review. No doubt, they will improve: that’s one of the benefits of the peer review process (suddenly popular again!). In the meanwhile, Muller & Co. have a long way to go before they can claim to be the best (as opposed to just the BEST). By launching his BEST project, Muller has no doubt ensured a place for himself in shaping the narrative on climate change science, but it remains to be seen to what extent he is going to contribute to the science of climate change.


208 Responses to “Berkeley earthquake called off”

  1. 201
    Hank Roberts says:

    http://www.wrcc.dri.edu/coopmap/
    Zoom in and have a look for yourself

  2. 202
    Romain says:

    Ray Ladbury,

    “Remember, what matters are the trends, and those tend to track”
    Not according to BEST results…
    In Northern California, you have half of the stations cooling, half warming (roughly). And that for 70 years of recording. I agree that in California (and probably the USA) we are somehow ‘oversampling’, so that the average will tell us something representative (and that it is warming). But what it tells us (I think) is that the temperature autocorrelation falls apart at a much shorter distance than previously thought. No?
    So I still don’t get the oversampling by a factor of 4 when talking about world land surface coverage…
    And the uncertainty associated…

    Hank Roberts,
    ‘Are those unadjusted temperatures, do you know?’
    To be honest, it is not clear to me what BEST did with the data and what they are showing in this figure… Are you saying these results should not be trusted?

  3. 203
    ldavidcooke says:

    Re: 198

    Hey Romain,

    EMF is three-dimensional, whether you are sampling a series of changes past one point or a series of points for one change in a varying property. The rules are the same: if you intend to increase accuracy based on spatial conditions, you have to ensure the conditions are also similar at each site. Therein lies the answer to your original question; but others here have shared this better elsewhere in the archives. You might consider researching the initial GAT/GHCN modeling description discussions from a few years back.

    Cheers!
    Dave Cooke

  4. 204
    Hank Roberts says:

    >> Romain
    >> ‘Are those unadjusted temperatures, do you know?’

    >To be honest, it is not clear … what they are showing
    >in this figure…Are you saying these results should not be trusted?

    You don’t know what you’re looking at
    You post on what you believe it might be.

    That is recreational typing.

    Look it up. Make the effort, eh?

    Google your name +watts +audit
    Sipping from the fonts of uncertainty, eh?

  5. 205
    Romain says:

    ldavidcooke,
    Thanks for your constructive answer; I will search the archives for a better explanation, because I still don’t get your point. ;-)

    Hank Roberts,
    “You post on what you believe it might be”
    Spot on. I read these BEST papers, but just once, so not enough to understand every single point. That is why I came here, explaining my problem/doubt/skepticism and asking questions, because I have had some helpful clarifications and corrections of my mistakes here in the past.
    The comments I left on the other sites confirm that I think the uncertainties are underestimated. I am not claiming I hold the truth. But I am still struggling to understand the 4x over-sampling statement. So besides your micro smear campaign on my name and the ‘you can look it up yourself’ answers, do you have anything more directly useful on this point?

  6. 206
    ldavidcooke says:

    RE:205

    Hey Romain,

    To make it simpler, imagine if you will two WX (weather) stations 500 km apart: one at sea level, 200 km off the coast of LAX; the other placed 300 km east, in the middle of Death Valley.

    If you had an anti-cyclone (high pressure in the NH) overhead at the two different stations at nearly the same latitude, you would get very different temperatures. At the surface you would also have near-stagnant wind and low water vapor (wv) at the DV station, while at the ocean station you might get a strong N-NW wind flow and a high-wv condition.

    The conditions are such that one sample could rudely offset the other. However, take both WX stations and elevate them 1 km, and you may find the difference between the two is slight.

    Hence, to replicate the regional gridded value, the raw temperatures could be adjusted to correct for the local characteristics (with a low level of confidence, due to the number of degrees of freedom), or you could simply note the degree of change. By tracking the average and the change over time you can see your trend value.

    So what happens if local conditions change? Say a multi-decadal condition modifies the local weather, such as an S-N wind sweeping the DV region and laying down rain. The trend may go negative for several years, while the ocean station goes positive. Likewise, say a La Niña occurs which sweeps cooler, drier air down the S. California coast. Again your trend could go negative.

    These are examples of individual WX stations demonstrating a local variation. However, if you sample a minimum of 4 stations to represent the regional average over time (typically 30 years), the changes in the trend lines are likely to be closely representative of the regional conditions.

    By then linking the changes in regional values to a large-scale grid you get a fairly good representation of the hemisphere or globe. (Remember, it is not the extremes or the raw temperatures you are measuring, but the change in the average over time. Also remember that by sampling the trend for 4 randomly selected sites within a region, you should be able to represent that region’s climatic conditions over a 30-year period with a minimum of a 1-sigma (roughly 68%) and likely a 2-sigma (roughly 95%) confidence level.) The key is to ensure a random selection of WX stations if you are not going to include the whole of the population, as most climate temperature analyses currently do.
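    The sampling idea above is easy to illustrate with a toy sketch (synthetic anomalies and an assumed regional trend, not real station data): individual stations scatter widely, but the anomaly average of even four randomly chosen stations tracks the shared regional trend.

```python
# Toy illustration (synthetic anomalies, not real stations): the mean
# anomaly of a few randomly chosen stations tracks the regional trend.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1981, 2011)    # a 30-year window
regional_trend = 0.02            # deg C per year, assumed for the demo
regional = regional_trend * (years - years[0])

# 40 stations = shared regional signal + station-level variability
stations = regional + rng.normal(0.0, 0.5, size=(40, years.size))

def slope(y):
    """Least-squares trend in deg C per year."""
    return np.polyfit(years, y, 1)[0]

# Trend from the anomaly average of just 4 randomly selected stations
pick = rng.choice(40, size=4, replace=False)
trend_4 = slope(stations[pick].mean(axis=0))
print(trend_4)  # close to the assumed 0.02 regional value
```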

    Cheers!
    Dave Cooke

  7. 207
    Hank Roberts says:

    Romain, you asked

    “… Are you saying these results should not be trusted?”

    Of course not. How could I know from looking at a picture?

    I suggested you find out what the picture you cited represents with the little blue and red dots — what numbers are illustrated?

    Adjusted? unadjusted?

    Quote the caption, at least, or any explanation you found with the picture.

  8. 208
    Hank Roberts says:

    PS for Romain — you’re asking questions about a figure from the group’s paper — did you inquire of the authors, as they invite?

    See their website:

    http://berkeleyearth.org/contact.php

    “… For general email inquiries or feedback on our analysis and papers, please contact info@berkeleyearth.org. We are very grateful for substantive feedback on our work.

    We have been receiving many emails, and may not be able to respond to everyone who contacts us. However, do read all messages, and we will add frequently asked questions along with our responses to the Berkeley Earth website….”

    If you did not ask there before posting your doubts at various blogs,
    why not? Let us know if you did so we can watch for their response.

