Yesterday was the day that NASA, NOAA, the Hadley Centre and Berkeley Earth delivered their final assessments for temperatures in Dec 2020, and thus their annual summaries. The headline results have received a fair bit of attention in the media (NYT, WaPo, BBC, The Guardian etc.) and the conclusion that 2020 was pretty much tied with 2016 for the warmest year in the instrumental record is robust.
There is some more background here:
"A bit more background on the temperature anomalies in 2020, which were statistically tied with 2016 for the warmest year in the instrumental record." — Gavin Schmidt (@ClimateOfGavin), January 14, 2021
[Note we will work on the model-observation comparison page to add the 2020 data point to the graphs [DONE], and update the datasets to their latest versions, but nothing dramatic will change – the latest observations remain pretty much in line with what models predicted. ]
But there are a few issues that readers here might appreciate that go beyond what usually gets reported.
How does ENSO affect annual temperatures?
If you do a regression of the year-to-year variations in global temperature, you’ll find that the highest correlations are with the spring ENSO index (the February-March average to be specific, but almost any index from the winter/spring works equally well). Using that regression, you can estimate that the 2016 El Niño added 0.11ºC to the global temperature that year, and that we would have expected a much smaller 0.03ºC contribution for 2020 (given the slightly positive ENSO conditions early in the year). However, in the map of 2020 anomalies, the tropical Pacific looks to be (on average) slightly negative in phase, driven by the emerging La Niña event this fall/winter.
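The regression described here can be sketched in a few lines of Python. The series below are synthetic placeholders (a real calculation would use an observed GMST anomaly series such as GISTEMP and the Feb-Mar mean of a Niño index); the 0.07ºC-per-unit sensitivity and the trend rate are invented numbers for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1980, 2021)
enso_fm = rng.normal(0.0, 1.0, years.size)       # Feb-Mar ENSO index (std. units), synthetic
trend = 0.018 * (years - years[0])               # underlying warming trend (invented rate)
gmst = trend + 0.07 * enso_fm + rng.normal(0.0, 0.05, years.size)

# Detrend, then regress the residual year-to-year variation on the spring index
resid = gmst - np.polyval(np.polyfit(years, gmst, 1), years)
slope = np.polyfit(enso_fm, resid, 1)[0]         # degC per unit of ENSO index

# Estimated ENSO contribution for a strong El Nino year (index ~ +1.6)
print(f"ENSO contribution for index +1.6: {slope * 1.6:+.2f} C")
```

With real data the slope, multiplied by a given year's spring index, gives the ENSO contribution quoted in the text.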
That suggests that we could usefully build a more complex connection between the global mean and ENSO – either with more predictors (say Feb/Mar but also Oct/Nov?) or by using a lagged model on the monthly anomalies. I’d be interested in any results people get and if it changes the ENSO-corrected annual timeseries substantially.
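As a sketch of the two-predictor idea, here is an ordinary least-squares fit of a detrended annual series on both a spring and an autumn index. Again, all series and coefficients are synthetic stand-ins rather than observed values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 41
enso_fm = rng.normal(0.0, 1.0, n)                   # Feb-Mar index (synthetic)
enso_on = 0.3 * enso_fm + rng.normal(0.0, 1.0, n)   # Oct-Nov index, partly correlated
resid = 0.07 * enso_fm + 0.03 * enso_on + rng.normal(0.0, 0.04, n)  # detrended GMST (invented)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), enso_fm, enso_on])
beta, *_ = np.linalg.lstsq(X, resid, rcond=None)
b0, b_fm, b_on = beta
print(f"Feb-Mar coefficient: {b_fm:+.3f}, Oct-Nov coefficient: {b_on:+.3f} (degC per unit)")

# The ENSO-corrected annual series is the residual minus the fitted ENSO part
corrected = resid - (b_fm * enso_fm + b_on * enso_on)
```

A lagged model on monthly anomalies would work the same way, with lagged copies of the index as extra columns in X.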
What are the uncertainties in these estimates?
There have been some real advances over the years in how we think about the uncertainty in these estimates. The work with the HadCRUT ensemble, the Berkeley Earth statistical model, and the work in Lenssen et al (2019) for GISTEMP have all gone well beyond the old style of estimates from a decade ago. But some aspects of the uncertainty remain hard to analyse – for instance, it isn’t quite right to assume that the uncertainty for one year is independent of the uncertainty for the next, since both years will have been similarly affected by the state of the station network at those times and by homogeneity adjustments that are common to both. So while the probabilities for record years given here are reasonable, they may be due for a (minor) revision in the near future.
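The point about correlated errors can be made concrete with a small Monte Carlo: the probability that one year beat another depends on how correlated the two years' uncertainties are. The central estimates and the 0.05ºC spread below are illustrative round numbers, not the actual product values.

```python
import numpy as np

rng = np.random.default_rng(42)
mean_2016, mean_2020 = 1.02, 1.03   # illustrative anomalies (degC), not the actual values
sigma = 0.05                        # per-year uncertainty (degC)

def prob_2020_warmer(rho, n=200_000):
    """Monte Carlo estimate of P(2020 > 2016) when the two errors have correlation rho."""
    cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
    draws = rng.multivariate_normal([mean_2016, mean_2020], cov, size=n)
    return float((draws[:, 1] > draws[:, 0]).mean())

for rho in (0.0, 0.5, 0.9):
    print(f"rho={rho:.1f}: P(2020 warmer) = {prob_2020_warmer(rho):.2f}")
```

Shared errors cancel in the difference between two years, so the higher the correlation, the sharper the record probability – which is why treating the years as independent gives a misleadingly uncertain ranking.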
The perennial issue of Arctic coverage is now almost resolved. HadCRUT5 now extrapolates into the Arctic, and the upcoming revisions to the NOAA methodology (Vose et al., 2021) do the same, as well as ingesting Arctic buoy data. This will effectively eliminate the cool bias that resulted from only partially weighting the Arctic changes, and will reduce the differences between the products to almost negligible values (except where the HadSST and ERSST products differ).
Structural uncertainty in satellite records
The main focus of these annual announcements is on the in situ land station/ocean buoy/ship data compilations, but as many will know there are a number of satellite products of related variables that offer an independent view of recent trends. Specifically, there are the MSU TLT products (from RSS and UAH) and the AIRS instrument data (flying on NASA’s Aqua satellite since 2003). These products are the combination of raw data (brightness temperatures in the microwave band and IR band respectively) together with complex retrieval algorithms which correct for the presence of clouds or surface emissivity or atmospheric distortions of various sorts. As such, the retrievals are often updated as improved methods are found, or calibration targets refined, or corrections found.
RSS retrievals are on version 4, UAH on version 6, and the AIRS retrievals have just moved to version 7. At each new version, the whole record is reprocessed and while the new results are often highly correlated with the older versions, trends can sometimes be quite different. This ‘structural’ uncertainty in the long term is often neglected when comparing these observations with other products or model output. Nonetheless, it is a significant issue. To illustrate this, note the difference between UAH and RSS below – highly correlated year-to-year, but radically different trends (and therefore interpretations) over the length of the record. For reference, the GISTEMP and HadCRUT5 products can barely be distinguished.
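The UAH/RSS situation can be mimicked with two synthetic series that share their year-to-year wiggles but carry different long-term trends; the trend rates and noise levels below are invented, not the actual satellite values.

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1979, 2021)
wiggle = rng.normal(0.0, 0.12, years.size)    # shared interannual variability (synthetic)

series_a = 0.013 * (years - 1979) + wiggle    # ~0.13 degC/decade trend (invented)
series_b = 0.021 * (years - 1979) + wiggle + rng.normal(0.0, 0.02, years.size)

# Year-to-year changes are almost identical...
r = np.corrcoef(np.diff(series_a), np.diff(series_b))[0, 1]
# ...but the fitted long-term trends differ substantially
trend_a = np.polyfit(years, series_a, 1)[0] * 10
trend_b = np.polyfit(years, series_b, 1)[0] * 10
print(f"year-to-year correlation: {r:.2f}; trends: {trend_a:.2f} vs {trend_b:.2f} degC/decade")
```

High interannual correlation is therefore no guarantee that two products agree on the quantity that matters most for climate purposes, the long-term trend.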
Similarly, the two versions of the AIRS product (v6 (red) and v7 (pink)) are well-correlated from year to year, but diverge notably in the early years of that record (2003-2006). The point being that structural issues in satellite products can have much larger impacts than structural issues in the surface station products (even if you consider the polar problem). Only drawing conclusions that are robust to these issues seems sensible.
And one pet peeve.
It seems I need to say this every year, but attempts to give a high-precision absolute temperature value for a single year are scientifically invalid. Our knowledge of the absolute global mean temperature has an uncertainty of about 0.5ºC, while the uncertainty in the annual mean anomaly is more like 0.05ºC. You don’t get an accurate number by adding an inaccurate one to an accurate one. When this is attempted, updates in the absolute global mean (because of a new reanalysis, better observational data, etc.) can dwarf the year-to-year anomaly, and you end up with an ‘absolute’ number from 24 years ago weirdly being larger than an absolute number today:
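A quick error-propagation check of these numbers shows why: adding a baseline known to ±0.5ºC to an anomaly known to ±0.05ºC, assuming independent errors, gives a combined uncertainty dominated entirely by the baseline.

```python
import math

sigma_baseline = 0.5    # uncertainty in the absolute global mean (degC)
sigma_anomaly = 0.05    # uncertainty in the annual mean anomaly (degC)

# Independent errors add in quadrature; the baseline term dominates completely
sigma_total = math.hypot(sigma_baseline, sigma_anomaly)
print(f"combined uncertainty: {sigma_total:.3f} C")
```

The 'absolute' annual value is thus ten times less certain than the anomaly it was built from, which is why only the anomalies should be compared across years.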
One example is sufficient to demonstrate the problem. In 1997, the NOAA state of the climate summary stated that the global average temperature was 62.45ºF (16.92ºC). The page now has a caveat added about the issue of the baseline, but a casual comparison to the statement in 2016 stating that the record-breaking year had a mean temperature of 58.69ºF (14.83ºC) could be mightily confusing. In reality, 2016 was warmer than 1997 by about 0.5ºC!
Just don’t do it.
- N.J.L. Lenssen, G.A. Schmidt, J.E. Hansen, M.J. Menne, A. Persin, R. Ruedy, and D. Zyss, "Improvements in the GISTEMP Uncertainty Model", Journal of Geophysical Research: Atmospheres, vol. 124, pp. 6307-6326, 2019. http://dx.doi.org/10.1029/2018JD029522
- R.S. Vose, B. Huang, X. Yin, D. Arndt, D.R. Easterling, J.H. Lawrimore, M.J. Menne, A. Sanchez‐Lugo, and H.M. Zhang, "Implementing Full Spatial Coverage in NOAA’s Global Temperature Analysis", Geophysical Research Letters, vol. 48, 2021. http://dx.doi.org/10.1029/2020GL090873