It is not every day that I come across a scientific publication that so completely goes against my perception of what science is all about. Humlum et al. (2011) present a study in the journal Global and Planetary Change, claiming that most of the temperature changes we have seen so far are due to natural cycles.
They claim to present a new technique to identify the character of natural climate variations, and from this, to produce a testable forecast of future climate. They project that
the observed late 20th century warming in Svalbard is not going to continue for the next 20–25 years. Instead the period of warming may be followed by variable, but generally not higher temperatures for at least the next 20–25 years.
However, their claims of novelty are overblown, and their projection is demonstrably unsound.
First, the claim of presenting “a new technique to identify the character of natural climate variations” is odd, as the techniques Humlum et al. use — Fourier transforms and wavelet analysis — have been around for a long time. It is commonplace to apply them to climate data.
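To underline how routine this is: picking out dominant periodicities with a Fourier transform is a textbook exercise. Here is a minimal sketch of my own (synthetic annual data with an arbitrary 75-year cycle, nothing from the paper), using a plain FFT periodogram:

```python
import numpy as np

# Synthetic "temperature" record: a 75-year cycle plus noise, sampled
# annually -- a stand-in for a real climate series, not actual data.
rng = np.random.default_rng(0)
t = np.arange(900)                        # years
series = np.sin(2 * np.pi * t / 75) + 0.5 * rng.standard_normal(t.size)

# Standard periodogram via the FFT -- nothing exotic here.
power = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0)    # cycles per year

# Dominant period (skip the zero-frequency bin).
peak = 1 + np.argmax(power[1:])
dominant_period = 1.0 / freqs[peak]
print(round(dominant_period, 1))
```

The point is not that this is wrong — it works fine — but that it is standard practice, not a new technique.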
Using these methods, the authors conclude that “the investigated Svalbard and Greenland temperature records show high natural variability and exhibit long-term persistence, although on different time scales”. No kidding! Again, it is not really a surprise that local records have high levels of variability, and the “long-term persistent” character of climate records has been reported before and is even seen in climate models.
The most problematic aspect of the paper concerns the Greenland temperature from GISP2 and their claim that they can “produce testable forecasts of future climate” from extending their statistical fit.
Of course, these forecasts are testable – we just have to wait for the data to come in. But why should extending these fits produce a good forecast? It is well known that one can fit a series of observations to arbitrary accuracy without having any predictability at all. One way to demonstrate credibility is to assess how well the statistical model does on data that were not used in the calibration. In this case, the authors have inadvertently produced a testable forecast of past climate, by leaving out the period between the end of the last ice age and 4000 years before present. This becomes apparent if you extend their fit to the part of the data that they left out (figure below).
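The general point — that a good in-sample fit says little about out-of-sample skill — is easy to demonstrate. Here is a sketch of my own with synthetic data (not the GISP2 record; the trend, noise level, and periods are arbitrary stand-ins): fit a sum of fixed sinusoids by least squares on a “calibration” window, then check the fit on older data that was held out:

```python
import numpy as np

# Synthetic series: a slow trend plus noise (arbitrary choices,
# purely for illustration).
rng = np.random.default_rng(1)
t = np.arange(400.0)
data = 0.01 * t + 0.3 * rng.standard_normal(t.size)

calib = slice(200, 400)      # window used for fitting
holdout = slice(0, 200)      # "older" data, left out of the fit

# Least-squares fit of three fixed periodicities plus a constant,
# using only the calibration window.
periods = [25.0, 60.0, 150.0]
cols = [np.ones_like(t)]
for p in periods:
    cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X[calib], data[calib], rcond=None)
fit = X @ coef

rms_in = np.sqrt(np.mean((fit[calib] - data[calib]) ** 2))
rms_out = np.sqrt(np.mean((fit[holdout] - data[holdout]) ** 2))
print(round(rms_in, 2), round(rms_out, 2))
```

The in-sample error flatters the model; on the held-out window the extrapolated sinusoids diverge from the data — which is exactly what happens to the Humlum et al. fit when it is extended, as shown next.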
I extended their analysis back to the end of the last ice age. The figure here shows my replication of part of their results, and I’ve posted the R-script for making the plot. The full red line shows their fit (“model results”) and the dashed red lines show two different attempts to extend their model to older data.
For the initial attempt, keeping their trend obviously caused a divergence. So in the second attempt, I removed the trend to give them a better chance of making a good hindcast. Even so, the fit is nowhere near as remarkable as the one presented in their paper.
Clearly, their hypothesis of 3 dominant periodicities no longer works when the data period is extended. So why did they not show the part of the data that breaks with the pattern established for the last 4000 years? According to the paper, they
chose to focus on the most recent 4000 years of the GISP2 series, as the main thrust of our investigation is on climatic variations in the recent past and their potential for forecasting the near future.
One could of course attempt to rescue the fit by proposing that some other missing factor is responsible for the earlier divergence. But this would be entirely arbitrary. Choosing to ignore the well known (anthropogenic) factors affecting current climate, on the other hand, is not arbitrary at all.
Humlum et al. also suggest, on the basis of a coincidence between one of their cycles (8.7 years) and a periodicity in the Earth-Moon orbital distance (8.85 years), that the Moon plays a role in climate change (seriously!):
We hypothesise that this may bring about the emergence of relatively warm or cold water masses from time to time in certain parts of oceans, in concert with these cyclic orbit variations of the Moon, or that these variations may cause small changes in ocean currents transporting heat towards high latitudes, e.g. in the North Atlantic.
How wonderfully definitive. They, however, admit that their
main focus is the identification of natural cyclic variations, and only secondary the attribution of physical reasons for these.
So, if the curve-fitting points to periodicities that are anywhere near any of the frequencies that can be associated with a celestial object, then that’s apparently sufficient. You can get quite a range of periodicities if you consider all the planets in our solar system, together with their resonances and harmonics; see any of Scafetta’s recent papers for more examples. And of course, if there are parts of the data that do not match the periodicity you believe to be there, you can just throw them away to make the cycles fit. Quite easy really.
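A quick back-of-the-envelope check of my own (not from the paper) makes the point concrete: take the planetary orbital periods and the 8.85-year lunar perigee cycle, generate their low-order harmonics, subharmonics, and pairwise synodic periods, and count what fraction of arbitrary target periods between 2 and 30 years lands within 2% of at least one candidate. The 2% tolerance and the choice of harmonics up to order 5 are my assumptions:

```python
import numpy as np

# Sidereal orbital periods in years (well-established values).
orbital = {"Mercury": 0.241, "Venus": 0.615, "Earth": 1.0, "Mars": 1.881,
           "Jupiter": 11.862, "Saturn": 29.457, "Uranus": 84.01,
           "Neptune": 164.8}
periods = list(orbital.values()) + [8.85]   # 8.85 yr lunar perigee cycle

candidates = set()
for p in periods:
    for k in range(1, 6):                    # harmonics and subharmonics
        candidates.add(p * k)
        candidates.add(p / k)
vals = list(orbital.values())
for i in range(len(vals)):
    for j in range(i + 1, len(vals)):        # pairwise synodic periods
        candidates.add(1.0 / abs(1.0 / vals[i] - 1.0 / vals[j]))
cands = np.array(sorted(candidates))

# Fraction of arbitrary target periods (2-30 yr) that sit within 2%
# of at least one candidate "celestial" period.
targets = np.linspace(2.0, 30.0, 281)
hit = [bool(np.any(np.abs(cands - T) / T < 0.02)) for T in targets]
print(round(float(np.mean(hit)), 2))
```

With a candidate list this generous, a sizeable fraction of all conceivable cycle lengths “matches” something in the sky — which is why such coincidences carry no evidential weight on their own.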
In short, Humlum et al.’s results are similar to those I discussed concerning meaningless numerical exercises, and their efforts really bring out the points I made in my previous post: arbitrarily splitting a time series up into parts generally does not allow one to learn anything.
- O. Humlum, J. Solheim, and K. Stordahl, "Identifying natural contributions to late Holocene climate change", Global and Planetary Change, vol. 79, pp. 145-156, 2011. http://dx.doi.org/10.1016/j.gloplacha.2011.09.005