Past reconstructions: problems, pitfalls and progress

This issue revolves around what the proxy records are really recording and whether that is constant in time. This is of course a ubiquitous problem with proxies, since it is well known that no ‘perfect’ proxy exists, i.e. there is no real-world process known to produce records that are controlled by temperature alone and nothing else. The problem, then, is that it is unclear whether the variability due to temperature has been constant through time, or whether the confounding factors (which may or may not be climatic) have changed in importance. Where the other factors appear to be climatic (d18O in ice cores, for instance), the data can sometimes be related to some other large-scale pattern – such as ENSO – and could thus serve as an indirect measure of temperature change.

In many cases, proxies such as Mg/Ca ratios in foraminifera have laboratory and in situ calibrations that demonstrate a fidelity to temperature. Other proxies, like d18O, do have a temperature component but are also affected by additional factors. In forams, these involve changes in water-mass d18O (correlated to salinity) or changes in seasonality. In terrestrial d18O records, precipitation patterns, timing and sources are important – more so in the tropics than at high latitudes.

A more prosaic, but still important, issue is the nature of what is being recorded. Low-resolution data are often not snapshots in time, but averages over a continuous interval. Thus the 100-year-spaced pollen reconstruction data from Viau et al (2006) (Loehle #13) are not estimates for the mid-point of each century, but century averages. Linear interpolation between these points will give a series whose century-long means actually differ from the original values. The simplest approach is to use a continuous step function that assigns each century its mean, or a spline fit that preserves the average rather than the mid-point value. It’s not clear whether the low-resolution series in Loehle (#4, #5, #6, #10, #13, #14, #15, #17, #18) were treated correctly (though, to be fair, other reconstructions have made similar errors), and it remains unclear how important this is.
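To make the point concrete, here is a minimal sketch (using made-up numbers, not the actual Viau et al data) showing that linear interpolation between century mid-points changes the implied century means, while a simple step function preserves them exactly:

```python
import numpy as np

# Illustrative century-mean anomalies (invented values, not Viau et al data)
centers = np.array([1050, 1150, 1250, 1350])   # century mid-points (years AD)
means   = np.array([0.2, -0.1, 0.3, 0.0])      # century-average anomalies

years = np.arange(1000, 1400)                   # annual time axis

# Naive approach: linear interpolation between the mid-points
linear = np.interp(years, centers, means)

# Mean-preserving alternative: a step function that assigns each
# century its reported average
step = np.repeat(means, 100)

# The step function reproduces the century means exactly;
# the linear interpolation generally does not
for i, c in enumerate(centers):
    block = slice(i * 100, (i + 1) * 100)
    print(c, means[i], linear[block].mean(), step[block].mean())
```

A mean-preserving spline would achieve the same thing while avoiding the artificial jumps at the century boundaries.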

Issue 3: Calibration

Correlation does not equal causation. A proxy with only a short-period calibration to temperature and no validating data therefore cannot be fully trusted as a temperature proxy. This arises with the Holmgren et al (1999) speleothem grey-scale data (Loehle #11), which are calibrated over a 17-year period to local temperature, but without any ‘out-of-sample’ validation. The problem in that case is exacerbated by the novelty of the proxy. (As an aside, the version used by Loehle is on an out-of-date age model (see here for an up-to-date version of the source grey-scale data – convert to temperature using T=8.66948648-G*0.0378378), and it is already smoothed with a backwards running mean, implying that the record should be shifted back ~20 years.)
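For what it’s worth, the conversion and age-model adjustment described above would look something like the following sketch (the grey-scale values and dates are illustrative placeholders, not the Holmgren et al data, and the smoothing window behind the ~20-year shift is an assumption):

```python
import numpy as np

# Placeholder grey-scale series on its published age model
# (illustrative values only, not the actual Holmgren et al data)
age_AD = np.array([1500.0, 1520.0, 1540.0, 1560.0, 1580.0])
grey   = np.array([181.0, 176.0, 183.0, 178.0, 180.0])

# Grey-scale to temperature conversion quoted in the text
temperature = 8.66948648 - grey * 0.0378378

# A backwards running mean at time t averages roughly [t - window, t],
# so each smoothed value is effectively centred ~window/2 earlier;
# shifting the dates back ~20 years re-centres the record
# (a ~40-year window is assumed here)
age_shifted = age_AD - 20.0
```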

As mentioned above, there are a priori reasons to assume that terrestrial d18O records have a temperature component. In mid-latitudes, the relationship is positive – higher d18O in precipitation in warmer conditions. This is a function of the increase in fractionation as water vapour is continually removed from the air. Most d18O records – in cave stalagmites, lake sediments or ice cores – are usually interpreted this way, since most of their signal comes from the rain-water d18O. However, only one terrestrial d18O record is used by Loehle (#9 Spannagel), and it has been given a unique negative correlation to temperature. This might be justified if the control on d18O in the calcite were the impact of local cave temperature on fractionation, but the slope used (derived from a 5-point calibration) is even more negative than that. Unfortunately, no validation of this temperature record has been given.
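A five-point calibration leaves very little room to constrain a slope. As a rough illustration (with invented numbers, not the Spannagel data), standard regression machinery shows how wide the uncertainty on such a slope is:

```python
import numpy as np
from scipy import stats

# Illustrative 5-point calibration (made-up numbers, not the Spannagel data)
temp = np.array([1.8, 2.0, 2.1, 2.3, 2.5])       # cave temperature (deg C)
d18O = np.array([-7.2, -7.5, -7.4, -7.8, -8.0])  # calcite d18O (permil)

res = stats.linregress(temp, d18O)

# With n = 5 there are only n - 2 = 3 degrees of freedom, so the
# 95% confidence interval on the slope is very wide
t95 = stats.t.ppf(0.975, df=len(temp) - 2)
print(f"slope = {res.slope:.2f} +/- {t95 * res.stderr:.2f} permil/K")
```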

Issue 4: Compositing

Given a series of records with different averaging periods, spatial representation and noise levels, there are a number of problems in constructing a composite. Simple averaging implies, for instance, giving the same weight to a century-mean North American continental average (Viau et al, Loehle #13) as to a single decadally varying North American point (Cronin et al, #3), despite the fact that the first covers a vast area and time period while the other is much less representative. Unsurprisingly, the large-scale average has much less variability than the single point. A common way to address this disparity is to normalise the records by their standard deviation and to weight them by the area they represent – without that, the more representative sample ends up playing a much smaller role.
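A minimal sketch of that standard practice (the series and weights below are synthetic, and area weighting is just one reasonable choice of weight):

```python
import numpy as np

def composite(records, areas):
    """Average proxy records after standardising each one and
    weighting by the area it represents.

    records : list of 1-D arrays on a common time axis
    areas   : representative area (or other weight) per record
    """
    w = np.asarray(areas, dtype=float)
    w = w / w.sum()
    # Standardise each record so a noisy single point does not
    # swamp a smooth large-area mean
    z = [(r - r.mean()) / r.std() for r in records]
    return sum(wi * zi for wi, zi in zip(w, z))

# Illustrative use: a smooth continental-mean series and a noisy
# single-site series (synthetic data, not the Viau/Cronin records)
t = np.arange(200)
continental = 0.2 * np.sin(2 * np.pi * t / 200)
single_site = continental + np.random.default_rng(0).normal(0, 0.5, t.size)
comp = composite([continental, single_site], areas=[10.0, 1.0])
```

Without the standardisation step, the noisy single site dominates the composite’s variance; without the area weights, it counts just as much as the continental mean.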
