Last week there was an international media debate on climate data which struck me as rather surreal. It was claimed that the global temperature data had so far shown a “hiatus” of global warming from 1998-2012, which suddenly vanished after a data correction. So what happened?
One of the data centers that compile the data on global surface temperatures – NOAA – reported an update of their data in the journal Science. Some artifacts due to changed measurement methods (especially for sea surface temperatures) were corrected, and data from weather stations not previously included were added. All data centers work continually to improve their database, and they therefore occasionally present version updates of their global series (the NASA data are currently at version 3, the British Hadley Centre data at version 4). There is nothing unusual about this, and the corrections are in the range of a few hundredths of a degree – see Figure 1. This really is just about fine details.
Fig. 1 The NOAA data of global mean temperature (annual values) in the old version (red) and the new version (black). From Karl et al., Science 2015
What got some people excited was the fact that the latest corrections more than doubled the trend over 1998-2012 in the NOAA data, which basically just illustrates that trends over such short periods are not particularly robust – something we have often warned about here. I was not surprised by this correction, since the NOAA data were known to show the smallest recent warming trends of all the usual global temperature series. After the update, the NOAA data now sit in the middle of the other records (see Fig. 2). This does not change any relevant finding of climate research.
Fig. 2 Linear trends in global temperature over the period 1998 to 2012 in the records of various institutions. To calculate the values and uncertainties (±2 standard deviations) the interactive trend calculation tool of Kevin Cowtan (University of York) was used; follow this link for more info on the data sources. The red bar on the left shows the old NOAA value, the blue bar the new one.
In addition, the entire adjustment and the differences between the data sets are well within the uncertainty bars (also shown in Fig. 2). These uncertainties reflect the fluctuations from year to year, caused by weather and things like the El Niño phenomenon – these make the data “noisy”, so that a trend analysis over short time intervals is quite uncertain and significantly depends on the choice of the beginning and end years. The period 1998-2012 is a period with a particularly low trend, since it begins with the extremely warm year 1998, which was marked by the strongest El Niño since the beginning of the observations, and ends with a couple of relatively cool years.
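The sensitivity of short trends to noise and to the choice of start and end years can be demonstrated with a simple synthetic experiment. The sketch below (purely illustrative numbers, not the actual NOAA data: a steady warming of 0.12 °C per decade plus random year-to-year “noise”) shows that 15-year least-squares trends scatter widely around the underlying rate, while the long-term trend recovers it well:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic annual temperature anomalies, 1951-2014: a steady warming of
# 0.12 degC per decade plus year-to-year "noise" (weather, El Nino etc.).
# These are made-up numbers chosen only to mimic the orders of magnitude
# discussed in the text.
years = np.arange(1951, 2015)
true_trend = 0.012                       # degC per year = 0.12 degC/decade
temps = true_trend * (years - years[0]) + rng.normal(0.0, 0.1, size=years.size)

def trend_per_decade(y0, y1):
    """Least-squares trend over the window y0..y1, in degC per decade."""
    mask = (years >= y0) & (years <= y1)
    slope = np.polyfit(years[mask], temps[mask], 1)[0]
    return 10 * slope

# The long-term trend recovers the underlying warming rate well...
print(f"1951-2014 trend: {trend_per_decade(1951, 2014):.2f} degC/decade")

# ...but 15-year trends scatter widely, depending on the chosen window
short = [trend_per_decade(y, y + 14) for y in range(1951, 2000)]
print(f"15-year trends range from {min(short):.2f} to {max(short):.2f} degC/decade")
```

Depending on the window, some 15-year trends come out far below (or above) the true underlying rate, even though nothing about the warming has changed – which is exactly the situation with the 1998-2012 period.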
Some media reports even gave the impression that the IPCC had confirmed a “hiatus” of global warming in its latest report of 2013, and that this conclusion was now overturned. Indeed, the new paper by Karl et al. was framed around the period 1998-2012, because this period was specifically addressed in the IPCC report. However, the IPCC wrote the following (Summary for Policymakers, p. 5):
Due to natural variability, trends based on short records are very sensitive to the beginning and end dates and do not in general reflect long-term climate trends. As one example, the rate of warming over the past 15 years (1998 – 2012; 0.05 [-0.05 to 0.15] °C per decade), which begins with a strong El Niño, is smaller than the rate calculated since 1951 (1951- 2012; 0.12 [0.08 to 0.14] °C per decade).
The IPCC thus specifically pointed out that the lower warming trend from 1998-2012 is not an indication of a significant change in climatic warming trend, but rather an expression of short-term natural fluctuations. Note also the uncertainty margins indicated by the IPCC.
Imagine that in some field of research there is a quantity for which five different research teams have produced measurements, which show some spread but agree within the stated uncertainty bounds. Now one team makes a small correction (small compared to this uncertainty), so that its new value is no longer the lowest of the five but sits right in the middle. In what other area of research is it conceivable that this would be not just worth a footnote, but a Science paper and global media reports?
I have often pointed out that the whole discussion about the alleged “warming hiatus” is one about the “noise” and not a significant signal. It is entirely within the range of data uncertainty and short-term variability. It is true that the saying goes “one man’s noise is another man’s signal”, and to better understand this “noise” of natural variability is a worthwhile research topic. But somehow, looking at the media reports, I do not think that the general public understands that this is only about climatic “noise” and not about any trend change relevant for climate policy.
Technical Note: For the calculation of trends (as far as they do not come from the paper by Karl et al.) I have used the online interactive trend calculation tool of Kevin Cowtan. In their paper Karl et al. provide (like the IPCC) 90% confidence intervals, while Cowtan gives the more common 2-sigma intervals (which for a normal distribution comprise 95% of the values, and thus are wider). In addition, Karl et al. computed annual averages of the monthly data before further analysis, while Cowtan calculates trends and uncertainties straight from the monthly data (which I think is cleaner). To avoid showing inconsistent confidence intervals (and since the new data by Karl et al. are not yet available as monthly data) I have not included intervals for the NOAA data in Fig. 2. In any case, the intervals are similar in size for all records if they are calculated consistently. The long-term trends shown in gray dashed lines hardly differ between the datasets and have very narrow confidence intervals (± 0.02 °C per decade for 1951 to 2014).
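For readers who want to compare the two kinds of intervals directly: assuming the trend estimate is normally distributed, a 90% interval can be converted to the wider 2-sigma (~95%) interval by rescaling with the corresponding z-values. A minimal sketch, using the IPCC’s 1998-2012 trend of 0.05 [-0.05 to 0.15] °C per decade as input:

```python
from statistics import NormalDist

# Convert a 90% confidence interval to the wider ~95% "2-sigma" interval,
# assuming a normally distributed trend estimate. Input numbers are the
# IPCC's 1998-2012 trend: 0.05 [-0.05 to 0.15] degC per decade.
trend = 0.05
half_width_90 = 0.10                  # half-width of the 90% interval

z90 = NormalDist().inv_cdf(0.95)      # ~1.645 (5% in each tail)
z95 = NormalDist().inv_cdf(0.975)     # ~1.960 (2.5% in each tail)

sigma = half_width_90 / z90           # implied standard error of the trend
half_width_95 = z95 * sigma           # ~0.12 degC per decade

print(f"standard error: {sigma:.3f} degC/decade")
print(f"95% interval: {trend - half_width_95:.2f} to {trend + half_width_95:.2f} degC/decade")
```

The 95% interval comes out roughly 20% wider than the 90% one, which is why intervals from different sources should not be mixed in one figure without converting them to a common convention.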