# The IPCC AR5 attribution statement

Figure 2. Assessed likely ranges (whiskers) and their mid-points (bars) for attributable warming trends over the 1951–2010 period due to well-mixed greenhouse gases (GHG), other anthropogenic forcings (OA), natural forcings (NAT), combined anthropogenic forcings (ANT), and internal variability. The HadCRUT4 observations are shown in black with the 5–95% uncertainty range due to observational uncertainty.

These estimates are the “assessed” trends that come out of fingerprint studies of the temperature changes and account for potential mis-estimates (over or under) of internal variability, sensitivity etc. in the models (and potentially the forcings). The raw material is the model hindcasts of the historical period – using all forcings, just the natural ones, just the anthropogenic ones, and various variations on that theme.

The error bars cover the ‘likely’ range (a central 66% probability interval, i.e. 17–83%), so are close to being ±1 standard deviation (except for the observations (5–95%), which is closer to ±2 standard deviations). It is easy enough to see that the ‘ANT’ row (the combination from all anthropogenic forcings) is around 0.7 ± 0.1ºC, and the OBS are 0.65 ± 0.06ºC. If you work that through (assuming normal distributions for the uncertainties), it implies that the probability of the ANT trend being less than half the OBS trend is less than 0.02% – much *less* than the stated 5% level. The difference is that the less confident statement also takes into account structural uncertainties about the methodology, models and data. Similarly, the best estimate of the ratio of ANT to OBS has a 2 sd range between 0.8 and 1.4 (peak at 1.08). Consistent with this are the rows for natural forcing and internal variability – neither is significantly different from zero in the mean, and the uncertainties are too small for them to explain the observed trend with any confidence. Note that the ANT vs. NAT comparison is independent of the GHG or OA comparisons; the error bars for ANT do not derive from combining the GHG and OA results.
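The arithmetic above is easy to check. A minimal sketch, assuming independent normal distributions with the values read off the figure as quoted in the text:

```python
from math import erfc, sqrt

# Values quoted in the text (assumed independent normals):
ant_mu, ant_sd = 0.70, 0.10   # ANT: likely range is ~ +/- 1 sd
obs_mu, obs_sd = 0.65, 0.03   # OBS: 5-95% range of +/-0.06 is ~ +/- 2 sd, so 1 sd ~ 0.03

# P(ANT < OBS/2): work with the difference D = ANT - OBS/2, which is also normal
d_mu = ant_mu - 0.5 * obs_mu
d_sd = sqrt(ant_sd**2 + (0.5 * obs_sd)**2)

# P(D < 0) = Phi(-d_mu/d_sd), computed via the complementary error function
p_low = 0.5 * erfc((d_mu / d_sd) / sqrt(2))
print(f"P(ANT < OBS/2) ~ {p_low:.1e}")   # roughly 1e-4, i.e. below the 0.02% quoted
```

The mean of the difference (0.375ºC) is nearly four standard deviations above zero, which is why the tail probability is so small.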

It is worth asking where the higher confidence/lower error bars come from. First, the longer time period (an extra 5 years) makes the trends clearer relative to the noise; second, multiple methodologies have been used which get the same result; and third, fingerprints have been better constrained by the greater use of spatial information. Small effects may also arise from better characterisations of the uncertainties in the observations (i.e. in moving from HadCRUT3 to HadCRUT4). Because of the similarity of patterns related to aerosols and greenhouse gases, there is more uncertainty in doing the separate attributions than in looking at anthropogenic forcings collectively. Interestingly, the attribution of most of the trend to GHGs alone would still remain *very likely* (as in AR4); I estimate a roughly 7% probability that it would account for less than half the OBS trend. A factor that might be relevant (though I would need to confirm this) is that more CMIP5 simulations from a wider range of models were available for the NAT/ANT comparison than previously in CMIP3/4.
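The ~7% figure for GHGs alone can be reproduced with the same machinery, though here I have to assume a reading of the GHG whiskers (roughly 0.9 ± 0.4ºC for the likely range) that is not stated in the text – treat the numbers as illustrative only:

```python
from math import erfc, sqrt

# Hypothetical reading of the GHG bar (NOT stated in the text): ~0.9 +/- 0.4 C;
# the wider spread reflects the aerosol/GHG pattern degeneracy noted above
ghg_mu, ghg_sd = 0.9, 0.4
obs_mu, obs_sd = 0.65, 0.03   # as before

d_mu = ghg_mu - 0.5 * obs_mu
d_sd = sqrt(ghg_sd**2 + (0.5 * obs_sd)**2)
p_ghg = 0.5 * erfc((d_mu / d_sd) / sqrt(2))
print(f"P(GHG < OBS/2) ~ {p_ghg:.0%}")   # in the ballpark of the ~7% quoted
```

Because the GHG uncertainty dominates, the result is insensitive to the observational error; it is the wide GHG whiskers that keep this probability at the percent level rather than the vanishingly small value found for ANT.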

It could be argued that since recent trends have fallen slightly below the multi-model ensemble mean, this should imply that our uncertainty has massively increased and hence the confidence statement should be weaker than stated. However this doesn’t really follow. Over-estimates of model sensitivity would be accounted for in the methodology (via a scaling factor of less than one), and indeed, a small over-estimate (by about 10%) is already factored in. Mis-specification of post-2000 forcings (underestimated volcanoes, Chinese aerosols or overestimated solar), or indeed uncertainties in all forcings in the earlier period, leads to reduced confidence in attribution in the fingerprint studies, and a lower estimate of the anthropogenic contribution. Finally, if the issue is related simply to a random realisation of El Niño/La Niña phases or other sources of internal variability, this simply feeds into the ‘Internal variability’ assessment. Thus the effects of recent years are already embedded within the calculation, and will have led to a reduced confidence *compared to* a situation where things lined up more closely. Using this as an additional factor to change the confidence rating again would be double counting.
