In the real world we attribute singular events all the time – in court cases, for instance – so we do have practical experience of this. If the evidence linking specific bank robbers to a robbery is strong, prosecutors can get a conviction without the crime needing to have been ‘unprecedented’, and without having to prove specifically that everyone else was innocent. What happens instead is that prosecutors (ideally) construct a narrative for what they think happened (let’s call that a ‘model’, for want of a better word), work out the consequences of that narrative (the suspect should have been seen by that camera at that moment, the DNA at the scene will match a suspect’s sample, the money will be found in the freezer, etc.), and then try to find those consequences in the evidence. It’s obviously important to make sure that the narrative isn’t simply a ‘just-so’ story, in which circumstances are strung together to suggest guilt but for which no further evidence can be found. Indeed, these narratives are much more convincing when there is ‘out of sample’ confirmation.

We can generalise this: what is required is a model of some sort that makes predictions for what should and should not have happened depending on the specific cause, combined with ‘out of sample’ validation of the model against events or phenomena that were not known about or used in its construction.

Models come in many shapes and sizes. They can be statistical, empirical, physical, numerical or conceptual. Their utility is predicated on how specific they are, how clearly they distinguish their predictions from those of other models, and the avoidance of unnecessary complications (“Occam’s Razor”). If all else is equal, a more parsimonious explanation is generally preferred as a working hypothesis.

The overriding requirement, however, is that the model must be predictive. It can’t just be a fit to the observations. For instance, one can fit a Fourier series to a data set that is purely random, but however accurate the fit is, it won’t give good predictions. Similarly, a linear or quadratic fit to a time series can be a useful form of descriptive statistics, but without any reason to think that there is an underlying basis for such a trend, it has very little predictive value. In fact, any statistical fit to the data is necessarily trying to match observations using a mathematical constraint (i.e. trying to minimise the mean square residual, or the gradient, using sinusoids, or wavelets, etc.), and since there is no physical reason to assume that any of these constraints applies to the real world, no purely statistical approach is going to be that useful in attribution (despite it being attempted all the time).
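The Fourier-series point is easy to demonstrate with a small numerical sketch (synthetic data, NumPy only – the numbers are illustrative, not real observations): a generous set of sinusoids can match a purely random series quite closely in sample, yet the resulting “fit” predicts nothing about a new realisation of the same random process.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
t = np.arange(n)
y = rng.standard_normal(n)  # purely random "observations"

# Design matrix of sinusoids: 20 frequencies -> 40 free parameters.
k = np.arange(1, 21)
phase = 2 * np.pi * np.outer(t, k) / n
X = np.column_stack([np.sin(phase), np.cos(phase)])

# Least-squares fit of the sinusoids to the noise.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
fit = X @ coef

# In sample, the fit absorbs a large chunk of the variance ...
in_sample_var = np.var(y - fit) / np.var(y)

# ... but for a fresh realisation of the same process, using the fit as a
# "prediction" leaves at least as much error variance as predicting zero.
y_new = rng.standard_normal(n)
out_sample_var = np.var(y_new - fit) / np.var(y_new)

print(f"unexplained variance, in sample:     {in_sample_var:.2f}")
print(f"unexplained variance, out of sample: {out_sample_var:.2f}")
```

The in-sample number is well below 1 only because 40 parameters are chasing 100 random points; out of sample the fit is pure noise added on top of the new data, so it does worse than no model at all.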

To be clear, defining an externally forced climate signal as simply the linear, quadratic, polynomial or spline fit to the data is not sufficient. The corollary – defining ‘internal climate variability’ as the residual from such a fit – doesn’t work either.
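One way to see why the residual definition fails is that the “internal variability” it produces depends entirely on the arbitrary choice of fit. A minimal sketch (synthetic data, NumPy only; the series and coefficients are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 60)
# A smooth "forced-looking" curve plus small random noise.
series = 1.5 * t**2 + 0.1 * rng.standard_normal(60)

# Two equally arbitrary definitions of the "forced signal":
linear_resid = series - np.polyval(np.polyfit(t, series, 1), t)
cubic_resid = series - np.polyval(np.polyfit(t, series, 3), t)

# The implied "internal variability" differs systematically: the linear
# residual still contains the curvature the fit could not capture.
print(np.std(linear_resid), np.std(cubic_resid))
```

Neither residual is a measurement of internal variability; each is just whatever the chosen curve happened to leave behind.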

So what can you do? The first thing is to get away from the idea that you can only use single-valued metrics like the global mean temperature. We have much more information than that – patterns of change across the surface, through the vertical extent of the atmosphere, and in the oceans. Complex spatial fingerprints of change can do a much better job of discriminating between competing hypotheses than simple multiple linear regression against a single time series. For instance, a big difference between solar-forced changes and those driven by CO2 is that the stratosphere changes in tandem with the lower atmosphere for solar changes, but the two are opposed for CO2-driven change. Aerosol changes often have specific regional patterns of change that can be distinguished from those due to well-mixed greenhouse gases.
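The solar-versus-CO2 example can be sketched as a toy fingerprint comparison (all numbers are invented for illustration – they are not real trends or model output). Each candidate cause has an idealised two-layer pattern (troposphere, stratosphere), and a hypothetical observed change is compared against each by pattern similarity:

```python
import numpy as np

# Idealised vertical fingerprints per unit forcing: (troposphere, stratosphere).
solar_fp = np.array([1.0, 0.5])   # both layers change in tandem
co2_fp = np.array([1.0, -1.5])    # troposphere warms, stratosphere cools

# A hypothetical observed change: surface warming with stratospheric cooling.
observed = np.array([0.8, -1.1])

def match(obs, fp):
    """Cosine similarity between the observed pattern and a fingerprint:
    +1 means the same pattern, -1 the opposite pattern."""
    return obs @ fp / (np.linalg.norm(obs) * np.linalg.norm(fp))

solar_match = match(observed, solar_fp)
co2_match = match(observed, co2_fp)
print(f"solar match: {solar_match:.2f}, CO2 match: {co2_match:.2f}")
```

Even this two-number “pattern” discriminates cleanly between the causes, which a single global-mean series could not do; real fingerprint studies do the same thing with full three-dimensional fields.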
