As previewed last weekend, I spent most of last week at a workshop on Climate Sensitivity hosted by the Max Planck Institute at Schloss Ringberg. It was undoubtedly one of the better workshops I’ve attended – it was focussed, deep and with much new information to digest (some feel for the discussion can be seen from the #ringberg15 tweets). I’ll give a brief overview of my impressions below.
As we’ve discussed previously, there are multiple classes of observational data that could provide some constraints on how warm the planet will get as CO2 increases (either on a multi-decadal timescale (TCR) or for the long term equilibrium (ECS)). Principally, there is paleo-climate data from previous quasi-equilibria like the Last Glacial Maximum or Eocene; evidence from the instrumental trends since the 19th Century; and climatological observations that correlate to longer term responses (either in the mean, seasonally or over interannual variations). Each class of data has its own problems, either in terms of observational quality, its relationship to sensitivity, or the level of simplicity of the underlying model.
The workshop spent a lot of time examining the explicit (and implicit) assumptions that underlie the methods and constructively criticising all of them to see how they can be best reconciled (because we are just looking for one number after all). Andy Dessler recorded his own talk on short term constraints and posted it already. Slides from the other talks are posted on the MPI website.
There were two major themes that emerged across a lot of the discussions: the stability of the basic ‘energy balance’ equation (N = F − λΔT) that defines the sensitivity parameter, λ, to zeroth order; and the challenge of estimating cloud feedbacks from process-based understanding. The connection occurs because clouds are the cause of the biggest variation in sensitivity across GCMs.
The first topic was triggered by extensive evidence in models that there are structural variations associated with changes of base state, time variations, spatial variations and the different physics of each forcing, all of which imply that λ needs to be thought of as more than a constant. For instance, plotting N against ΔT in an experiment with an abrupt forcing (such as 4xCO2) should give a straight line (red) if λ were constant, but instead there is almost always some curvature, implying that temperature changes more for the same forcing change after a century or so than at the start (blue line).
That curvature implies that applying a strictly ‘constant λ’ model to a limited set of observations (such as the trends over the last hundred years) is likely to bias any estimate of the sensitivity. Quantification of these issues is ongoing. Without a resolution though (or a set of reasonable corrections), efforts to combine multiple constraints without taking this into account are going to be flawed.
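To make the Gregory-plot logic concrete, here is a minimal Python sketch. The data are synthetic, not model output: the 4xCO2 forcing value and the slowly weakening λ trajectory are assumptions chosen simply to produce visible curvature. Regressing N against ΔT over an early and a late window of the run then gives two different effective values of λ:

```python
# Hypothetical sketch of a Gregory-style regression: fit the TOA
# imbalance N against temperature change dT in an abrupt-4xCO2 run.
# The slope of the fit is -lambda; curvature in the N-dT relationship
# shows up as a smaller fitted lambda (larger effective sensitivity)
# in the later part of the run. All numbers below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(150)
F = 7.4                                       # assumed 4xCO2 forcing, W/m^2
lam = 1.3 - 0.3 * (years / 150)               # assumed weakening feedback, W/m^2/K
dT = F / lam * (1 - np.exp(-years / 20.0))    # toy warming trajectory, K
N = F - lam * dT + rng.normal(0, 0.1, years.size)  # imbalance + noise

def fit_lambda(dT_win, N_win):
    """Linear fit of N vs dT; the Gregory regression slope is -lambda."""
    slope, intercept = np.polyfit(dT_win, N_win, 1)
    return -slope

lam_early = fit_lambda(dT[:20], N[:20])    # first two decades
lam_late = fit_lambda(dT[50:], N[50:])     # year 50 onwards

# With curvature, lam_late < lam_early: the later period implies a
# larger effective sensitivity (proportional to F / lambda).
print(f"lambda, years 0-19:   {lam_early:.2f} W/m^2/K")
print(f"lambda, years 50-149: {lam_late:.2f} W/m^2/K")
```

The point of splitting the fit into two windows is exactly the bias noted above: an estimate based only on the early (observable) part of the record recovers a different λ than the long-term behaviour implies.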
The cloud feedback discussion was extremely interesting since there are a multitude of different theorised effects depending on which clouds are being discussed (Mark Zelinka and Graeme Stephens did a great job in particular in explaining these effects). The variation in climate sensitivity in models seems to be dominated by the simulations of low clouds (which are a net cooling to the climate), which have a tendency to disappear as the climate warms. Whether this can be independently constrained in the observations is unclear.
The conversations around these issues got into multiple connected areas, including aerosol forcings, observational uncertainty, climate model tuning and independence, the nature of probability, Bayesian updating, detection and attribution, and internal variability. It looks like there will be some interesting upcoming papers on many of these aspects that will help clarify matters.
While the workshop wasn’t designed to produce a new assessment of the evidence, we did spend time specifying the problems there would be if equilibrium sensitivity were less than 2ºC or greater than 5ºC. Specifically, what would have to be true for all the evidence to fit? This was useful in underlining the challenge of shifting or constraining the ‘classic’ range by very much.
More generally, the workshop was a great example of how a diverse group (in attitude, background and temperament, as well as the more usual classes) can tackle complex and difficult problems in a situation with only minimal distractions. Some of these discussions were (to say the least) quite intense, but were all done in a spirit of constructive collegiality so nobody came to blows ;-).
As an aside, it also underlines the problems with a move towards more virtual conferences – I know of no online methodology that could replicate the intensity or depth of the conversations, sustained over a week, that we had here. Much in science can certainly be done fine by video conferencing (including a PhD defence I ‘attended’ in Paris, or the recent testimony I gave to the Texas Legislature by Skype), but the experience at this kind of focused workshop is really the hardest to emulate.