On sensitivity: Part I

And then there are the recent papers examining the transient constraint. The most thorough is Aldrin et al (2012). The transient constraint has been looked at before of course, but efforts have been severely hampered by the uncertainty associated with historical forcings – particularly aerosols, though other terms are also important (see here for an older discussion of this). Aldrin et al produce a number of (explicitly Bayesian) estimates: their ‘main’ one, with a range of 1.2ºC to 3.5ºC (mean 2.0ºC), assumes exactly zero indirect aerosol effects, while a possibly more realistic sensitivity test including a small aerosol indirect effect gives 1.2-4.8ºC (mean 2.5ºC). They also demonstrate that there are important dependencies on the ocean heat uptake estimates as well as on the aerosol forcings. One nice addition was an application of their methodology to three CMIP3 GCM results, showing that their estimates (3.1, 3.6 and 3.3ºC) were reasonably close to the true model sensitivities (2.7, 3.4 and 4.1ºC).
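For readers unfamiliar with the forward-model approach, here is a minimal sketch of the kind of zero-dimensional energy-balance calculation that sits underneath such estimates. This is not Aldrin et al’s actual model; the forcing ramp, heat capacity and parameter values are illustrative assumptions only:

```python
import numpy as np

F2X = 3.7     # W/m2 forcing per CO2 doubling (standard value)
C_EFF = 8.0   # effective heat capacity, W yr m-2 K-1 (illustrative)

def ebm_temperature(forcing, sensitivity, dt=1.0):
    """Zero-dimensional energy balance model: C dT/dt = F - (F2X/S) T.

    Returns the temperature anomaly (K) for a forcing series (W/m2)
    and an equilibrium sensitivity S (K per CO2 doubling).
    """
    lam = F2X / sensitivity                  # feedback parameter, W m-2 K-1
    T = np.zeros(len(forcing))
    for i in range(1, len(forcing)):
        T[i] = T[i-1] + dt * (forcing[i-1] - lam * T[i-1]) / C_EFF
    return T

# Illustrative linear forcing ramp reaching 2 W/m2 after 150 years
forcing = np.linspace(0.0, 2.0, 150)
for S in (1.5, 3.0, 4.5):
    print(S, round(ebm_temperature(forcing, S)[-1], 2))
```

In a full analysis of this kind, runs like these are compared against observed surface temperatures and ocean heat content to build a likelihood for S (and for the forcing and ocean uptake parameters), which is then combined with a prior.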

In each of these cases, however, there are important caveats. First, the quality of the data matters: whether it is the LGM temperature estimates, recent aerosol forcing trends, or mid-tropospheric humidity, underestimating the uncertainty in these data will bias the resulting CS estimate. Second, there are important conceptual issues to address: is the sensitivity to a negative forcing (at the LGM) the same as the sensitivity to positive forcings? (Not likely). Is the effective sensitivity visible over the last 100 years the same as the equilibrium sensitivity? (No). Is effective sensitivity a better constraint for the TCR? (Maybe). Some of the papers referenced above explicitly try to account for these questions (and the forward-model Bayesian approach is well suited for this). However, since a number of these estimates use simplified climate models as their input (for obvious reasons), there remain questions about whether any specific model’s scope is adequate.

Ideally, one would want to do a study across all these constraints with models that were capable of running all the important experiments – the LGM, historical period, 1% increasing CO2 (to get the TCR), and 2xCO2 (for the model ECS) – and build a multiply constrained estimate taking into account internal variability, forcing uncertainties, and model scope. This will be possible with data from CMIP5, and so we can certainly look forward to more papers on this topic in the near future.
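Schematically, the multiply-constrained estimate is a product of likelihoods over a common grid of sensitivities. A toy sketch, in which all three likelihood curves are made-up placeholders rather than real constraints:

```python
import numpy as np
from scipy.stats import norm

S = np.linspace(0.5, 10.0, 1000)   # sensitivity grid (K per CO2 doubling)

# Placeholder likelihoods for three nominally independent lines of evidence
# (illustrative shapes only, not derived from any actual dataset)
L_lgm  = norm.pdf(S, loc=2.5, scale=1.0)   # e.g. an LGM constraint
L_hist = norm.pdf(S, loc=3.0, scale=1.5)   # e.g. the historical transient
L_2x   = norm.pdf(S, loc=3.2, scale=1.2)   # e.g. a 2xCO2 model ensemble

post = np.ones_like(S) * L_lgm * L_hist * L_2x   # uniform prior, for simplicity
post /= np.trapz(post, S)                        # normalize the posterior

cdf = np.cumsum(post) * (S[1] - S[0])
print("5-95%% range: %.1f-%.1f K" % (S[cdf.searchsorted(0.05)],
                                     S[cdf.searchsorted(0.95)]))
```

The toy example only makes the structural point that independent constraints narrow the posterior relative to any single one; the hard work lies in getting each likelihood, and its forcing and internal-variability uncertainties, right.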

In the meantime, the ‘meta-uncertainty’ across the methods remains stubbornly high, with support for both relatively low numbers around 2ºC and higher ones around 4ºC, so that is likely to remain the consensus range. It is worth adding, though, that temperature trends over the next few decades are more likely to be correlated with the TCR than with the equilibrium sensitivity, so if one is interested in the near-term implications of this debate, the constraints on TCR are going to be more important.



  1. M. Aldrin, M. Holden, P. Guttorp, R.B. Skeie, G. Myhre, and T.K. Berntsen, "Bayesian estimation of climate sensitivity based on a simple climate model fitted to observations of hemispheric temperatures and global ocean heat content", Environmetrics, vol. 23, pp. 253-271, 2012. http://dx.doi.org/10.1002/env.2140

104 comments on this post.
  1. Nic Lewis:

    Ray Ladbury #93
    “I agree that Jeffrey’s Prior is attractive in a lot of situations. However, it is not clear that it would help in this case, is it? I mean in some cases, JP is flat”

    The form of the Jeffreys’ prior depends on both the relationship of the observed variable(s) to the parameter(s) and the nature of the observational errors and other uncertainties, which determine the form of the likelihood function. Typically the JP is only uniform where the estimation is of a simple location parameter, with the measured variable being the parameter (or a linear function thereof) plus an error whose distribution is independent of the parameter.

    Where (equilibrium/effective) climate sensitivity (S) is the only parameter being estimated, and the estimation method works directly from the observed variables (e.g., by regression, as in Forster and Gregory, 2006, or mean estimation, as in Gregory et al, 2002) over the instrumental period, the JP for S will be almost of the form 1/S^2. That is equivalent to an almost uniform prior if the climate feedback parameter lambda = 1/S were instead the quantity being estimated.

    The reason why a 1/S^2 prior is noninformative is that estimates of climate sensitivity depend on comparing changes in temperature with changes in {forcing minus the Earth’s net radiative balance (or its proxy, ocean heat uptake)}. Over the instrumental period, fractional uncertainty in the latter is very much larger than fractional uncertainty in temperature change measurements, and is approximately normally distributed.

    There is really no valid argument against using a 1/S^2 prior in cases like Forster and Gregory, 2006 and Gregory et al, 2002, and that is what frequentist statistical methods implicitly use. For instance, Forster and Gregory, 2006 used linear regression of {forcing minus the Earth’s net radiative balance} on surface temperature, which, as they stated, implicitly used a uniform-in-lambda prior. When the normally distributed estimated PDF for lambda resulting from that approach is converted into a PDF for S, using the standard change-of-variables formula, that PDF implicitly uses a 1/S^2 prior for S. However, for presentation in the AR4 WG1 report (Fig. 9.20 and Table 3) the IPCC multiplied that PDF by S^2, converting it to a uniform-in-S prior basis, which is highly informative. As a result, the 95% bound on S shown in the AR4 report was 14.2ºC, far higher than the 4.1ºC bound reported in the study itself.
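    Nic’s point about the change of variables can be checked numerically. Below is a minimal sketch, assuming an illustrative normal estimate for lambda (the numbers are invented for demonstration, not Forster and Gregory’s actual regression output):

    ```python
    import numpy as np
    from scipy.stats import norm

    F2X = 3.7                      # W/m2 per CO2 doubling, so S = F2X / lambda

    # Illustrative (made-up) normal estimate of the feedback parameter lambda
    lam_mean, lam_sd = 1.5, 0.5    # W m-2 K-1

    S = np.linspace(0.5, 20.0, 5000)   # sensitivity grid (K per doubling)
    lam = F2X / S

    # Change of variables: p(S) = p(lambda) * |dlambda/dS| = p(lambda) * F2X / S^2
    p_S = norm.pdf(lam, lam_mean, lam_sd) * F2X / S**2   # implicit 1/S^2 prior
    p_S_unif = p_S * S**2                                # uniform-in-S prior basis

    p_S /= np.trapz(p_S, S)
    p_S_unif /= np.trapz(p_S_unif, S)

    def bound(p, q=0.95):
        """Upper q-quantile of a gridded PDF."""
        cdf = np.cumsum(p) * (S[1] - S[0])
        return S[cdf.searchsorted(q)]

    print("95% bound, implicit 1/S^2 prior:", round(bound(p_S), 1))
    print("95% bound, uniform-in-S prior  :", round(bound(p_S_unif), 1))  # much higher
    ```

    With these invented numbers the uniform-in-S bound also depends strongly on where the grid is truncated, which is itself a symptom of how informative that prior is.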

    Where climate sensitivity is estimated in studies involving comparing observations with values simulated by a forced climate model at varying parameter settings (see Appendix 9.B of AR4 WG1), the JP is likely to be different from what it would be were S estimated directly from the same underlying data. Where several parameters are estimated simultaneously, the JP will be a joint prior for all parameters and may well be a complex nonlinear function of the parameters.

  2. Aaron Franklin:

    I’m in need of some clarification on what we should now be using as a GWP for methane.

    From Archer 2007:
    …so a single molecule of additional methane has a larger impact on the radiation balance than a molecule of CO2, by about a factor of 24 (Wuebbles and Hayhoe, 2002)…

    …To get an idea of the scale, we note that a doubling of methane from present-day concentration would be equivalent to 60 ppm increase in CO2 from present-day, and 10 times present methane would be equivalent to about a doubling of CO2. A release of 500 Gton C as methane (order 10% of the hydrate reservoir) to the atmosphere would have an equivalent radiative impact to a factor of 10 increase in atmospheric CO2…

    …The current inventory of methane in the atmosphere is about 3 Gton C. Therefore, the release of 1 Gton C of methane catastrophically to the atmosphere would raise the methane concentration by 33%. 10 Gton C would triple atmospheric methane.

    (so doubling atmos methane requires 3 Gton release, 10x present methane requires 30 Gton released?)
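    (A quick sanity check of that arithmetic, taking Archer’s ~3 Gton C atmospheric inventory at face value and ignoring chemistry/lifetime feedbacks; a minimal sketch:)

    ```python
    inventory = 3.0              # Gton C of CH4 in the atmosphere (Archer 2007)
    print(inventory * (2 - 1))   # release needed to double methane: 3 Gton C
    print(inventory * (10 - 1))  # release needed for 10x methane: 27, i.e. ~30 Gton C
    ```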

    Here also the GWP of methane is taken as 24. As we know, the 20-yr GWP of methane is commonly stated as 72 (IPCC) or 105 (Shindell).

    Factoring in the findings of:
    “Large methane releases lead to strong aerosol forcing and reduced cloudiness” (2011), T. Kurtén, L. Zhou, R. Makkonen, J. Merikanto, P. Räisänen, M. Boy, N. Richards, A. Rap, S. Smolander, A. Sogachev, A. Guenther, G. W. Mann, K. Carslaw, and M. Kulmala

    - that previous methane GWP figures need a x1.8 correction factor -
    we should be using a 20-yr methane GWP of 130 or 180. That is 5.4 or 7.5 times the GWP of 24 that Archer 2007 appears to be using?

    So maybe the above should say, looking at a 20-yr period (using the 105, which becomes a GWP of ~180)?:

    …To get an idea of the scale, we note that a [100% increase/7.5 = 13% increase] of methane from present-day concentration would be equivalent to 60 ppm increase in CO2 from present-day, and [10 times/7.5 = 1.33 times] present methane would be equivalent to about a doubling of CO2. A release of [500/7.5 = 66.7] Gton C as methane (order [10%/7.5 = 1.3%] of the hydrate reservoir) to the atmosphere would have an equivalent radiative impact to a factor of 10 increase in atmospheric CO2…

  3. Chris Colose:

    Aaron Franklin (102)

    I wouldn’t go so far as to say that the collective climate science community has completely moved on from the idea, but I’d argue that GWP is a rather outdated and fairly useless metric for comparing various greenhouse gases. It is also very sensitive to the timescale over which it is calculated.

    It’s correct that an extra methane molecule is something like 25 times more influential than an extra CO2 molecule, although that ratio is primarily determined by the background atmospheric concentration of each gas, and GWP typically assumes that forcing is linear in the emission pulse, which is not valid for very large perturbations. But because there’s not much methane to begin with, it’s not true that 1.33x methane has more impact than a doubling of CO2 (we’ve already increased methane by well over this amount)… a doubling of methane doesn’t have nearly as much impact as a doubling of CO2.
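    Chris’s nonlinearity point is easy to illustrate with the standard simplified forcing expressions of Myhre et al (1998). A sketch with rounded present-day concentrations (baseline values are approximate, and the small CH4-N2O overlap term is omitted):

    ```python
    import math

    def f_co2(C, C0=390.0):
        """Simplified CO2 forcing (W/m2): logarithmic in concentration (ppm)."""
        return 5.35 * math.log(C / C0)

    def f_ch4(M, M0=1800.0):
        """Simplified CH4 forcing (W/m2): square-root in concentration (ppb),
        with the CH4-N2O overlap term omitted for brevity."""
        return 0.036 * (math.sqrt(M) - math.sqrt(M0))

    print("2x CO2   :", round(f_co2(2 * 390.0), 2), "W/m2")     # ~3.7
    print("2x CH4   :", round(f_ch4(2 * 1800.0), 2), "W/m2")    # ~0.6
    print("1.33x CH4:", round(f_ch4(1.33 * 1800.0), 2), "W/m2") # ~0.2
    ```

    So 1.33x methane adds roughly 0.2 W/m2 versus roughly 3.7 W/m2 for doubled CO2: a square root versus a logarithm, applied to very different baselines.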

    The key point, however, is the much longer residence time of CO2 in the atmosphere…GWP tries to address this in its own mystical way, but there are much better ways of thinking about the issue. See the recent paper from Susan Solomon, Ray Pierrehumbert, and others.

  4. CM:

    Re: The Norwegian findings (#96-100), they’re still under review. Scroll down to “update”:

    Clicking the Cicero link provided there takes you full circle: RealClimate is extensively referenced.