Readers may recall discussions of a paper by Thompson et al (2008) back in May 2008. This paper demonstrated that there was very likely an artifact in the sea surface temperature (SST) collation by the Hadley Centre (HadSST2) around the end of the Second World War and for a few years afterwards, related to the different ways ocean temperatures were measured by different fleets. At the time, we reported that this would certainly be taken into account in revisions of the data set as more data were processed and the various observational biases were better classified. Well, that process has finally resulted in the publication of a new compilation, HadSST3.
Figure: The new HadSST3 compilation of global sea surface temperature anomalies and the uncertainty.
HadSST3 not only greatly expands the amount of raw data processed, it also makes important improvements in how the uncertainties are dealt with, adopting a more Bayesian, probabilistic treatment of the necessary bias corrections. That is to say, instead of picking the most likely factors and providing a single reconstruction, they perform a Monte Carlo experiment using a distribution of factors and provide a set of 100 reconstructions – the average and spread of which inform the uncertainties. This is a noteworthy approach, and one likely to set a new standard for other reconstructions. (The details of the procedures are outlined in two new papers, Kennedy et al, part I and part II.)
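To make the ensemble idea concrete, here is a minimal sketch of the approach, with entirely made-up numbers (none of the series, bias values, or dates below come from the actual HadSST3 analysis): drawing the uncertain bias correction from a distribution, rather than fixing it at a single "most likely" value, turns one series into a 100-member ensemble whose mean and spread quantify the uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: a raw SST anomaly series with an uncertain
# bucket-style bias correction applied before a cutoff year.
years = np.arange(1940, 1951)
raw = 0.01 * (years - 1940) - 0.1          # made-up raw anomaly series (degC)

n_members = 100
ensemble = np.empty((n_members, years.size))
for i in range(n_members):
    # Draw the uncertain bias-correction factor from a distribution
    # instead of fixing it at a single best-guess value (assumed spread).
    bucket_bias = rng.normal(loc=-0.2, scale=0.05)
    # Apply the correction only where the bias is assumed to operate.
    ensemble[i] = raw - bucket_bias * (years < 1946)

best_estimate = ensemble.mean(axis=0)       # central reconstruction
spread = ensemble.std(axis=0)               # informs the quoted uncertainty
```

The point of the design is that any downstream quantity (a trend, a decadal mean) can be computed on each of the 100 members separately, so the bias-correction uncertainty propagates through automatically.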
One potential problem is going to be keeping the analysis up-to-date. Currently, HadSST2 (and HadCRUT3) use real-time updating via NCEP-GTS. However, that service was scheduled to be phased out in March 2011, in favour of near-real-time updating of the underlying ICOADS dataset. This is what HadSST3 uses, but unfortunately the bias corrections in the modern period rely on being able to track individual fleets. For security reasons, data since 2007 have been anonymized (so you can’t tell which ship reported which data), and so the HadSST3 analysis currently stops at 2006. Apparently this is being worked on, so hopefully a solution can be found. Note that the ocean temperatures in the GISTEMP analysis use the Reynolds satellite SST data from 1979 onwards and so are unaffected by the ICOADS security issue.
The ocean temperature history is obviously a big part of the global surface air temperature history and these new estimates will be used eventually in updates of the HadCRUT3 product. Currently HadCRUT uses HadSST2 and we can expect that to be updated soon.
Obviously, when a new analysis is performed, it is interesting to see how it differs from previous ones. The differences between HadSST3 and HadSST2 are shown here:
and are important in a few key time-periods – the 1940s (because of issues highlighted previously), the 1860s to 1890s (more extensive data), and perhaps the last few years (related to more minor changes in technologies and corrections). The biggest difference around 1946-8 is just over 0.2ºC.
One odd feature of the HadSST2 collation was that the temperature impact of the 1883 Krakatoa eruption – which is very clear in the land measurements – didn’t really show up in the SST. Thus comparisons to model simulations (which generally estimate an impact comparable to that of Pinatubo in 1991) showed a pretty big mismatch (see Hansen et al (2007)). With the larger amount of data in this period in HadSST3, did the situation change?
Figure: 1880’s comparison of (left) global surface air temperature anomalies using HadSST2 (as part of GISTEMP) and the GISS AR4 simulations, (right) global SST estimates from HadSST2 and HadSST3.
It seems clear that the new data (including HadSST3) will be closer to the models than previously, if not quite perfectly in line (but given the uncertainties in the magnitude of the Krakatoa forcing, a perfect match is unlikely). There will also be improvements in model/data comparisons through the 1940s, where the divergence from the multi-model mean is quite large. A divergence will remain, but its magnitude will be smaller, and the observations will lie more within the envelope of internal variability in the models. Neither of these cases implies that the forcings or models are therefore perfect (they are not), but deciding whether the differences are related to internal variability, forcing uncertainties (mostly in aerosols), or model structural uncertainty is going to be harder.
So how well did the blogosphere do?
Back in 2008, a cottage industry sprang up to assess what impact the Thompson et al related changes would make to the surface air temperature anomalies and trends – with estimates ranging from complete abandonment of the main IPCC finding on attribution to, well, not very much. While wiser heads counselled patience, Steve McIntyre predicted that the 1950 to 2000 global temperature trends would be cut in half, while Roger Pielke Jr predicted a 30% decrease (Update: a better link). The Independent, in an imperfect hand-drawn graphic, implied the differences would be minor, and we concurred, suggesting that the graphic was a ‘good first guess’ at the impact (RP Jr estimated the impact from the Independent’s version of the correction to be about a 15% drop in the 1950-2006 trend). So how did science at the speed of blog match up to reality?
Here are the various graphics that were used to illustrate the estimated changes on the global surface temperature anomaly:
Figure: graphics from Steve McIntyre, Roger Pielke Jr and the Independent. Update: RP Jr’s graph shown above is an emulation of McIntyre’s adjustments, his own attempt is here (though note that the title on the graph is misleading).
Now, we don’t yet have the real updates to the HadCRUT data, but we can calculate the difference between the mean HadSST3 values and HadSST2 and, making the (rough) assumption that ocean anomalies determine 70% of the global SAT anomaly (that’s an upper limit because of the influence of spatial sampling and sea ice coverage), estimate an adjusted HadCRUT index as SAT_new = SAT_old + 0.7 × (HadSST3 − HadSST2).
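As a sketch, the adjustment can be applied directly; the anomaly values below are invented for illustration, and only the 0.7 ocean weighting comes from the text.

```python
import numpy as np

# Illustrative only: toy anomaly values (degC), not the real series.
sat_old = np.array([0.0, 0.1, 0.05])     # existing HadCRUT-style SAT anomalies
hadsst2 = np.array([-0.1, 0.0, 0.02])    # old SST anomalies
hadsst3 = np.array([0.1, 0.05, 0.02])    # new SST anomalies

# Ocean assumed to contribute ~70% of the global SAT anomaly (an upper limit).
sat_new = sat_old + 0.7 * (hadsst3 - hadsst2)
```

Where the two SST products agree (the last element here), the SAT estimate is unchanged; where they differ, only 70% of the difference carries through to the global index.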
Figure: Estimated differences in the global surface air temperature anomaly from updating from HadSST2 to HadSST3. The smoothed curve uses a 21 point binomial filter.
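For reference, a 21-point binomial filter like the one used for the smoothed curve weights each point by the normalised binomial coefficients C(20, k), a standard discrete approximation to a Gaussian. A minimal implementation might look like:

```python
import numpy as np
from math import comb

# 21-point binomial filter: the 21 coefficients C(20, 0) .. C(20, 20),
# normalised so the weights sum to 1.
weights = np.array([comb(20, k) for k in range(21)], dtype=float)
weights /= weights.sum()

def binomial_smooth(x):
    """Smooth a 1-D series; points where the window overhangs the ends
    are dropped (mode='valid'), so the output is 20 samples shorter."""
    return np.convolve(x, weights, mode="valid")
```

How to treat the endpoints (drop them, reflect the series, shorten the window) is a separate choice; the sketch simply drops them.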
While not perfect, the Independent graphic is shown to have been pretty good – especially for a hand-drawn schematic – while the more dramatic implications from McIntyre or Pielke were large overestimates (as is often the case). The impact on trends is maximised for trends starting in 1946 (a 21% drop in the 1946-2006 case), but is smaller for the 1956-2006 trend (an 11% decrease, from 0.127±0.02 to 0.113±0.02 ºC/decade), the 50-year period highlighted in IPCC AR4. More recent trends (i.e. 1970 or 1975 onwards) or much longer trends (from 1900, for instance) are barely changed. For reference, the 1950-2006 trend changes from 0.11±0.02 to 0.09±0.02 ºC/decade – a 17% drop, in line with what was inferred from the Independent graphic. Note that while the changes appear to lie within the uncertainties quoted, those uncertainties relate solely to the fitting of a regression line and have nothing to do with structural problems, and so aren’t really commensurate. The final analysis will probably show slightly smaller changes because of the coverage/sea ice issues. Needless to say, the 50% or 30% reductions in trends that so excited the bloggers did not materialize.
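The kind of trend comparison quoted above can be sketched with synthetic data; the series and the adjustment below are invented, and only the method (a least-squares linear fit, with the trend quoted per decade) matches the text.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2007)

# Toy series with an underlying trend of ~0.11 degC/decade plus noise.
old = 0.011 * (years - 1950) + rng.normal(0, 0.05, years.size)

# Hypothetical adjustment: warm the early years slightly, as a bias
# correction concentrated near the start of the record would do.
adjustment = np.where(years < 1960, 0.1, 0.0)
new = old + adjustment

trend_old = np.polyfit(years, old, 1)[0] * 10   # degC per decade
trend_new = np.polyfit(years, new, 1)[0] * 10
percent_drop = 100 * (trend_old - trend_new) / trend_old
```

Warming the early part of a series necessarily reduces the fitted trend over the full period, which is why corrections concentrated in the late 1940s matter most for trends starting around then and barely touch trends starting in 1970 or later.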
In summary, the new HadSST3 analysis is a big step forward in both data coverage and error analysis. The differences from HadSST2 are, however, much smaller than the somewhat exaggerated (and erroneous) blog speculations that appeared in the immediate aftermath of the Thompson et al paper. Those speculations were wrong because of a lack of familiarity with the data (McIntyre confused his buckets), basic mistakes (such as applying ocean-only corrections to global temperature series), and an over-eagerness to overturn established positions. Among the lessons to be drawn should be that science does not travel at the speed of blog, and that confirmation biases and the desire to be first on the block sometimes get in the way of the rational accumulation of knowledge. Making progress in science takes work, and with big projects like this, patience.