To nobody’s surprise, all of the surface datasets showed 2016 to be the warmest year on record.
Barely more surprising is that all of the tropospheric satellite datasets and radiosonde data also have 2016 as the warmest year.
Coming as this does after the record warm 2015, and the (slightly less definitive) record warm 2014, three records in a row might get you to sit up and pay attention.
There are a few more technical issues worth mentioning here.
Impact of ENSO
The contribution of El Niño to recent years’ anomalies in the GISTEMP data set is ~0.05ºC (2015) and ~0.12ºC (2016), which means the records would still have been set even with no ENSO variability.
I calculated these values using a regression of the interannual variability in the annual mean against the Feb-Mar MEI index, which gives (just) the maximum correlation with the annual means (r=0.66). The impact of ENSO on other indices is similar, but does vary: the datasets that don’t interpolate to the Arctic have had a slightly stronger ENSO signal in recent years, as have the satellite tropospheric records. Doing the same procedure with the HadCRUT4 data does change the ordering, with 2015 remaining the record year, but using the Cowtan and Way extension the results are the same as with GISTEMP. Which brings us to another key point…
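The regression step above can be sketched roughly as follows. This is a minimal illustration with synthetic stand-in numbers (the synthetic series, the coefficients, and the detrending choice are all my assumptions, not the actual GISTEMP or MEI data), showing how one might regress detrended annual-mean anomalies on a detrended Feb-Mar MEI index to estimate the ENSO contribution:

```python
import numpy as np

# Synthetic stand-in series; real GISTEMP annual means and Feb-Mar MEI
# values would be substituted here.
years = np.arange(1997, 2017)
rng = np.random.default_rng(0)
mei = rng.normal(0.0, 1.0, years.size)  # stand-in Feb-Mar MEI index
anom = (0.017 * (years - years[0])      # long-term trend (illustrative)
        + 0.07 * mei                    # ENSO-linked variability
        + rng.normal(0.0, 0.05, years.size))

def detrend(y, x):
    """Remove a linear fit so only interannual variability remains."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

x = years.astype(float)
anom_dt = detrend(anom, x)
mei_dt = detrend(mei, x)

# ENSO sensitivity (degC per MEI unit) and correlation from the
# regression of interannual variability on the index.
beta = np.polyfit(mei_dt, anom_dt, 1)[0]
r = np.corrcoef(mei_dt, anom_dt)[0, 1]

# The ENSO contribution to any given year is beta times that year's
# (detrended) index value; subtracting it gives an ENSO-adjusted series.
adjusted = anom - beta * mei_dt
```

With the real data, the regression coefficient times each year’s MEI value yields the ~0.05ºC and ~0.12ºC contributions quoted above.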
Impact of the Arctic
It’s perhaps not obvious in the first figure, but the magnitude of the record in 2016 is much larger in GISTEMP and Cowtan&Way (and in the reanalyses) than it is in HadCRUT4, NCEI and JMA. This is in large part due to the treatment of the Arctic. The latter three records all ‘conservatively’ exclude areas without direct observations from their global means. This is equivalent to assuming that the missing areas are, on average, warming at the same rate as the global mean. However, that has not been a good assumption for a couple of decades. Arctic anomalies this year were close to 4ºC above the late 19th Century, more than three times the global-mean anomaly.
This divergence between the ‘global’ averages wouldn’t matter if all comparisons were done against masked model output, but this is often skipped over for simplicity. I personally think that both HadCRUT4 and NCEI should start producing a ‘filled’ dataset using the best of the techniques currently available so that we can move on from this particular issue.
Do I have to mention the ‘pause’?
Apparently yes. The last three years have made it abundantly clear that there has been no change in the long-term trends since 1998. A prediction made in 1997 that simply extrapolated the then-current linear trend would significantly under-predict the last two years.
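The extrapolation test is straightforward to sketch. Again the numbers here are synthetic stand-ins (the trend, noise level, and the bump for the recent record years are my assumptions, not the actual anomalies): fit a linear trend only to data through 1997, extend it forward, and compare against the later years:

```python
import numpy as np

# Illustrative annual global-mean anomalies (degC); real dataset values
# would be substituted here.
years = np.arange(1970, 2017)
rng = np.random.default_rng(1)
anoms = 0.018 * (years - 1970) + rng.normal(0.0, 0.08, years.size)
anoms[-3:] += 0.25  # stand-in for the 2014-2016 record years

# Fit the trend using only data available in 1997.
fit_mask = years <= 1997
slope, intercept = np.polyfit(years[fit_mask], anoms[fit_mask], 1)

# Extrapolate that trend forward and compare with the full record.
predicted = slope * years + intercept
residual = anoms - predicted
print(f"2016 exceeds the extrapolated 1997 trend by {residual[-1]:.2f} degC")
```

A positive residual in the final years means the simple 1997-era extrapolation under-predicts them, which is the opposite of what a genuine ‘pause’ would imply.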
The difference isn’t yet sufficient to state that the trends are accelerating, but that might not be too far off. Does this mean that people can’t analyse interannual or interdecadal variations? Of course not, but it should serve as a reminder that short-term variations should not be conflated with long term trends. One is not predictive of the other.