

Climate Sensitivity Estimates and Corrections

You need to be careful in inferring climate sensitivity from observations.

Two climate sensitivity stories this week – both related to how careful you need to be before you can infer constraints from observational data. (You can brush up on the background and definitions here). Both cases – a “Brief Comment Arising” in Nature (that I led) and a new paper from Proistosescu and Huybers (2017) – examine basic assumptions underlying previously published estimates of climate sensitivity and find them wanting.


References

  1. C. Proistosescu, and P.J. Huybers, "Slow climate mode reconciles historical and model-based estimates of climate sensitivity", Science Advances, vol. 3, pp. e1602821, 2017. http://dx.doi.org/10.1126/sciadv.1602821

Nenana Ice Classic 2017

Filed under: — gavin @ 2 May 2017

As I’ve done for a few years, here is the updated graph for the Nenana Ice Classic competition, which tracks the break up of ice on the Tanana River near Nenana in Alaska. It is now a 101-year time series tracking the winter/spring conditions in that part of Alaska, and shows clearly the long term trend towards earlier break up, and overall warming.

2017 was almost exactly on trend – roughly one week earlier than the average break up date a century ago. There was a short NPR piece on the significance again this week, but most of the commentary from last year and earlier is of course still valid.
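For anyone who wants to reproduce the trend estimate, here is a minimal sketch; the file name and the two-column year/day-of-year layout are assumptions for illustration, not the actual Ice Classic data format.

```python
# Minimal trend fit for the break-up dates. The file name and layout
# (year, day-of-year of break-up) are assumed for illustration.
import numpy as np

data = np.loadtxt("nenana_breakup.csv", delimiter=",", skiprows=1)
years, breakup_doy = data[:, 0], data[:, 1]

# Ordinary least-squares linear trend; slope is in days per year.
slope, intercept = np.polyfit(years, breakup_doy, 1)

# A negative slope means the ice is breaking up earlier over time.
print(f"Trend: {slope * 100:+.1f} days per century")
```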

My shadow bet on whether any climate contrarian site will mention this dataset remains in play (none have since 2013, which was a record-late year).

Model projections and observations comparison page

Filed under: — gavin @ 11 April 2017

We should have done this ages ago, but better late than never!

We have set up a permanent page to host all of the model projection-observation comparisons that we have monitored over the years. This includes comparisons to early predictions for global mean surface temperature from the 1980s as well as more complete projections from the CMIP3 and CMIP5 ensembles. The aim is to update this annually, or more often if new datasets or versions become relevant.

We are also happy to get advice on stylistic choices or variations that might make the graphs easier to comprehend or more accurate – feel free to suggest them in the comments below (since the page itself will be updated over time, it doesn’t have comments associated with it).

If there are additional comparisons you are aware of that you think would be useful to include, please point to the model and observational data set(s) and we’ll try to include those too. We should have the Arctic sea ice trends up shortly, for instance.

What is the uncertainty in the Earth’s temperature rise?

Filed under: — group @ 11 April 2017

Guest commentary by Shaun Lovejoy (McGill University)

Below I summarize the key points of a new Climate Dynamics (CD) paper that I think opens up new perspectives on understanding and estimating the relevant uncertainties. The main message is that the primary sources of error and bias are not those that have received the most attention – they are not human in origin. The community seems to have done such a good job of handling the “heat island”, “cold park”, and diverse human-induced glitches that, in the end, these make only a minor contribution to the final uncertainty. The reason, of course, is the huge amount of averaging that is done to obtain global temperature estimates; this averaging essentially removes most of the human-induced noise.
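A toy calculation makes the averaging point concrete (this is a generic statistical illustration, not a computation from the paper): the standard error of a mean of N independent station-level errors shrinks like 1/√N, so even sizable per-station glitches contribute little to a global average.

```python
# Toy illustration (not from the paper): averaging N independent
# station-level errors shrinks the error of the mean like 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
true_anomaly = 0.5     # hypothetical global anomaly (deg C)
station_noise = 1.0    # hypothetical per-station error std dev (deg C)

for n_stations in (10, 100, 1000):
    # 10,000 realizations of a global mean built from n noisy stations.
    samples = true_anomaly + station_noise * rng.standard_normal((10_000, n_stations))
    global_means = samples.mean(axis=1)
    # Observed spread ~ station_noise / sqrt(n_stations).
    print(f"N={n_stations:5d}: std of mean = {global_means.std():.3f}")
```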

Two tough sources of uncertainty remain: missing data and a poorly defined space-time resolution; the latter leads to the key scale reduction factor. Even so, at centennial scales these low-frequency uncertainties amount to only about 13% of the IPCC-estimated anthropogenic increase (at 90% confidence).

This paper is based on six monthly, globally averaged temperature series over the common period 1880-2012, using data that were publicly available in May 2015: NOAA NCEI, NASA GISTEMP, HadCRUT4, Cowtan and Way, Berkeley Earth, and the 20th Century Reanalysis. In the first part, on relative uncertainties, the series are systematically compared with each other over scales ranging from one month to 133 years. In the second part, on absolute uncertainties, a stochastic model is developed with two components. The first simulates the true temperatures; the second simulates the measurement errors arising from three different sources of uncertainty: i) the usual auto-regressive (AR)-type short-range errors, ii) missing data, and iii) the “scale reduction factor”.
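As a rough illustration of the relative-uncertainty part, the sketch below block-averages a set of monthly series over progressively longer windows and measures the RMS spread between them. This is a schematic of the idea only, not the paper’s analysis code; synthetic series stand in for the six real datasets.

```python
# Schematic of the relative-uncertainty comparison across scales;
# synthetic series stand in for the six real monthly datasets.
import numpy as np

rng = np.random.default_rng(1)
months = 133 * 12
# A shared "climate" signal plus independent measurement noise per series.
signal = np.cumsum(rng.standard_normal(months)) * 0.01
series = {name: signal + 0.05 * rng.standard_normal(months)
          for name in ("NOAA", "GISTEMP", "HadCRUT4", "CW", "Berkeley", "20CR")}

def block_average(x, window):
    """Non-overlapping block means at a given averaging scale (in months)."""
    n = len(x) // window
    return x[: n * window].reshape(n, window).mean(axis=1)

def relative_spread(series, window):
    """RMS deviation of each series from the multi-series mean."""
    blocks = np.array([block_average(x, window) for x in series.values()])
    return np.sqrt(((blocks - blocks.mean(axis=0)) ** 2).mean())

for window in (1, 12, 120, 480):  # 1 month, 1 yr, 10 yr, 40 yr
    print(f"{window:4d} months: spread = {relative_spread(series, window):.4f} K")
```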

The model parameters are fit by treating each of the six series as a realization of the stochastic measurement process. This yields an estimate of the uncertainty (spread) of the means of each series about the true temperature – an absolute uncertainty – not simply the spread of the series means about their common mean value (the relative uncertainty). In other words, it quantifies the uncertainty of the series means about a (still unknown) absolute reference point (which is another problem for another post).
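To give a concrete sense of the first of the three error sources above, here is a minimal sketch of an AR(1) short-range measurement-error process. The parameter values are illustrative placeholders, not the values fitted in the paper, and the missing-data and scale-reduction components are omitted.

```python
# Minimal sketch of the AR-type short-range error component only;
# phi and sigma are illustrative, not the paper's fitted values.
import numpy as np

def ar1_errors(n, phi=0.8, sigma=0.05, rng=None):
    """AR(1) measurement errors: e[t] = phi * e[t-1] + innovation[t]."""
    if rng is None:
        rng = np.random.default_rng()
    e = np.zeros(n)
    innovations = sigma * rng.standard_normal(n)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + innovations[t]
    return e

# A "measured" series is the true temperature plus such errors (plus the
# missing-data and scale-reduction terms in the full model); fitting the
# parameters across the six series gives the spread about the truth.
errors = ar1_errors(133 * 12, rng=np.random.default_rng(2))
print(f"std of AR(1) errors: {errors.std():.3f} K")
```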


References

  1. S. Lovejoy, "How accurately do we know the temperature of the surface of the earth?", Climate Dynamics, 2017. http://dx.doi.org/10.1007/s00382-017-3561-9

Serving up a NOAA-thing burger

Filed under: — gavin @ 9 February 2017

I have mostly been sitting back and watching the John Bates story go through the predictable news-cycle of almost all supposedly ‘scandalous’ science stories. The patterns are very familiar – an initial claim of imperfection spiced up with insinuations of misconduct, coordinated with a breathless hyping of the initial claim and its ridiculous supposed implications, followed by some sensible responses refuting the specific claims and demolishing the wilder extrapolations. Unable to defend the nonsense, its promoters clarify that the initial claim wasn’t about misconduct but merely about ‘process’ (for who can argue against better processes?). Meanwhile, the misconduct and data-falsification claims escape into the wild, get more exaggerated, and lose all connection to any actual substance. To be sure, the technical rebuttals to the specific claims compete with balance-of-evidence arguments and a little playful trolling for the attention of anyone who actually cares about the details. None of these, unfortunately, despite being far more accurate, have the narrative power of the original meme.

The next stages are easy to predict as well – the issues of ‘process’ will be lost in the noise, the fake overreaction will dominate the wider conversation and become an alternative fact to be regurgitated in Twitter threads and blog comments for years, the originators of the issue may or may not walk back the many mis-statements they and others made but will lose credibility in any case, mainstream scientists will just see it as hyper-partisan noise and ignore it, no papers will be retracted, no science will change, and the actual point (one presumes) of the ‘process’ complaint – to encourage better archiving practices – will be set back because it’s associated with such obvious nonsense.

This has played out many, many times before: the Yamal story had a very similar dynamic, and before that the ‘1934’ story, etc.

Assuming, for the sake of politeness, that sound and fury signifying nothing is not the main goal for at least some participants, the question arises: since this is so predictable, why do people keep making the same mistakes?
