Two and a half years ago, a paper was published in Nature purporting to be a real prediction of how global temperatures would develop, based on a method for initialising the ocean state using temperature observations (Keenlyside et al, 2008) (K08). In the subsequent period, this paper has been highly cited, very often in a misleading way by contrarians (for instance, Lindzen misrepresents it on a regular basis). But how are the paper’s actual claims holding up?
At the time K08 was published, we wrote two posts on the topic pointing out a) that the methodology was not very mature (and in our opinion, not likely to work), and b) that the temperature predictions being made (for the 10-year overlapping periods Nov 2000-Oct 2010, Nov 2005-Oct 2015 etc.) were very unlikely to come true. These critiques were framed as a bet to see whether the authors were serious about their predictions, similar in conception to other bets that have been offered on climate-related matters. This offer was studiously ignored by the scientists involved, who may have thought the whole exercise was beneath them. Oh well.
However, with the publication of the October 2010 temperatures from HadCRUT, the first prediction period has now ended, and so the predictions can be assessed. Looking first at the global mean temperatures…
we can see clearly that while K08 projected 0.06ºC cooling, the temperature record from HadCRUT (which was the basis of the bet) shows 0.07ºC warming (using GISTEMP, it is 0.11ºC). As in K08, this refers to T(Nov 2000:Oct 2010) compared to T(Nov 1994:Oct 2004). For reference, the IPCC AR4 ensemble gives 0.129±0.075ºC (1) (with a range of -0.07 to 0.30ºC related to internal variability in the simulations) (using full annual means).
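The decadal-mean comparison above is simple bookkeeping, and can be sketched as follows. This is not the authors' code, and the input series here is fabricated (a small linear trend plus noise) purely to show the calculation; a real check would feed in the HadCRUT monthly global-mean anomalies.

```python
# Sketch (not from K08): computing a decadal-mean temperature change
# from a monthly anomaly series, as in T(Nov 2000:Oct 2010) minus
# T(Nov 1994:Oct 2004). The data below are synthetic for illustration.
import numpy as np

rng = np.random.default_rng(0)
# hypothetical monthly anomalies, Jan 1990 - Dec 2010 (252 months):
# a ~0.015 C/yr trend plus 0.1 C monthly noise
months = np.arange("1990-01", "2011-01", dtype="datetime64[M]")
anoms = 0.015 * (np.arange(len(months)) / 12.0) + rng.normal(0, 0.1, len(months))

def window_mean(anoms, months, start, end):
    """Mean anomaly over the half-open window [start, end), 'YYYY-MM' strings."""
    sel = (months >= np.datetime64(start)) & (months < np.datetime64(end))
    return anoms[sel].mean()

# the K08-style comparison of two ten-year means
delta = (window_mean(anoms, months, "2000-11", "2010-11")
         - window_mean(anoms, months, "1994-11", "2004-11"))
print(f"decadal-mean change: {delta:+.3f} C")
```

With a warming trend in the input, the difference of the two ten-year means comes out positive; applied to the real HadCRUT series it yields the +0.07ºC quoted above.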
More interestingly, we can look at the regional pattern. The K08 supplemental data showed their predicted anomaly along with anomalies from a free-running version of their model and the standard IPCC results, albeit for the 2005-2015 period (which is half over) rather than the 2000-2010 period; the patterns might nonetheless be expected to be similar:
The anomalies are with respect to the average of all the decadal periods they looked at, which is roughly (though not exactly equal to) a 1955-2004 baseline. The actual temperature changes for 2000-2010, using GISTEMP for convenience, look like this:
It is striking to what extent they resemble the spatial pattern seen in the AR4 ensemble free-running version rather than the initialised forecast, though there are some correlations with the latter too (for instance, west of the Antarctic peninsula, related to the ozone-hole and GHG-related increase in the Southern Annular Mode).
It is worth emphasising that the RC bet offer was not frivolously made, but reflected some very clear indications in the paper itself that the predictions would not come true (as explained in our second post). Specifically, their ‘free’ model run, without data assimilation, performed better in hindcasts when compared to observed data, i.e. the new assimilation technique degraded the model performance. Both of the previous hindcast periods in which the model predicted cooling turned out to be wrong. Since global warming took off in the 1970s, the observed data have never shown a cooling in their chosen metric (ten-year means spaced 5 years apart). Other climate models run for standard global warming scenarios only rarely show this level of cooling. There is, on the other hand, a simple explanation for such a temporary cooling in a model: an artifact known as ‘coupling shock’ (e.g. Rahmstorf 1995), which arises when the ocean is switched over from a forced to a coupled mode of operation, something that has no counterpart in the real world.
The basic issue is that nudging surface temperatures in the North Atlantic closer to observed data would probably nudge the Atlantic overturning circulation in the wrong direction, since changing the temperature without changing the salinity will give the opposite buoyancy forcing to what would be needed. The model indeed shows negative skill in the critical regions of the North Atlantic which are most affected by the overturning circulation. All this can be seen from the paper. Last but not least, by the time the paper was published, three quarters of the 2000-2010 forecast period were already over with no sign of the predicted cooling – barring an unprecedented massive temperature drop, the prediction was always very unlikely.
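The sign problem with temperature-only nudging can be illustrated with a linearised equation of state, d_rho = rho0·(−α·dT + β·dS). The coefficients below are typical seawater values, and the anomaly scenario (a patch that is both cooler and fresher than the model, as with a meltwater influence) is entirely hypothetical; this is an illustration of the buoyancy argument, not a calculation from K08.

```python
# Illustrative sketch (not from K08): why correcting temperature without
# salinity can flip the sign of the surface density (buoyancy) anomaly.
RHO0 = 1027.0    # reference density, kg/m^3
ALPHA = 2.0e-4   # thermal expansion coefficient, 1/K
BETA = 7.5e-4    # haline contraction coefficient, 1/(g/kg)

def density_anomaly(dT, dS):
    """Linearised density anomaly: d_rho = rho0 * (-alpha*dT + beta*dS)."""
    return RHO0 * (-ALPHA * dT + BETA * dS)

# hypothetical correction: 0.5 K cooler AND 0.3 g/kg fresher than the model
full = density_anomaly(dT=-0.5, dS=-0.3)    # T and S corrected together
t_only = density_anomaly(dT=-0.5, dS=0.0)   # temperature nudged alone
print(full, t_only)
```

With both corrections the water ends up lighter (negative density anomaly, suppressing deep convection); nudging temperature alone makes it denser (positive anomaly) – the opposite buoyancy forcing, which is the point made above.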
Was this then an “improved climate prediction”? The answer is clearly no.
So what can we conclude? First off, the basic idea of short-term predictions using initialised ocean data is a priori a good one. Many groups around the world are exploring to what extent this is possible, and which techniques will be the most successful. However, before claiming that a new methodology is an improvement on other efforts and that it predicts a very counter-intuitive result, a lot of effort is required to demonstrate that it will work, even theoretically or in idealised circumstances. This can involve ‘perfect model’ experiments (where you test whether you can predict the evolution of a model simulation given only the kind of information we have about the real world), or hindcasts (as used by K08), and only where there is demonstrated skill is there any point in making a prediction for the real world. It is nonetheless important to try new methods, and even when they fail, lessons can be learned about how to improve things going forward.
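The hindcast skill test described above reduces to a simple comparison: does the initialised forecast beat the free-running model against observations? A minimal sketch, with all three series fabricated purely to show the bookkeeping (real skill assessments would also use correlation scores and significance tests):

```python
# Sketch of a hindcast skill comparison with toy data (not K08's numbers):
# compare an initialised hindcast and a free-running model run against
# observations using root-mean-square error.
import numpy as np

obs = np.array([0.10, 0.18, 0.25, 0.31, 0.40])          # observed decadal means
free_run = np.array([0.12, 0.20, 0.24, 0.33, 0.38])     # free-running model
initialised = np.array([0.05, 0.25, 0.15, 0.40, 0.30])  # initialised hindcast

def rmse(pred, obs):
    """Root-mean-square error of a prediction against observations."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# If initialisation adds skill, its RMSE should beat the free run's;
# in these toy numbers (as in K08's critical regions) it does not.
print(rmse(free_run, obs), rmse(initialised, obs))
```

Only when the initialised hindcast consistently wins such comparisons is there a case for issuing a real-world prediction; in K08 the free run won in the critical North Atlantic regions, which was the warning sign.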
It is perhaps inevitable that novel prediction methods that appear to ‘go against the mainstream’ are going to be higher profile than they warrant in retrospect – such is the way of the world. But scientists need to appreciate that these high profile statements will be taken and spread far more widely than they possibly anticipate. Thus it behoves them to be scrupulous in explaining the context, giving the caveats and making clear the experimental nature of any new result. This is undoubtedly hard, especially where there are people ready to twist anything to fit an anti-AGW agenda, but we should at least try.
Note, we asked Noel Keenlyside if he wanted to comment on our assessment of their prediction, and he declined to do so. We would still be happy to post any comments in response from him or his co-authors, though.
Update Dec 2: The Stuttgarter Zeitung newspaper (in German) followed up on this and got the following comments from the authors:
“The forecast for global mean temperature which we published highlights the ability of natural variability to cause climate fluctuations on a decadal scale, even globally. I am still completely convinced that this is correct.”
“I do not want to comment on this.”
Then an indirect quote: the fact that warming over 2000-2010 was greater than predicted in their study does not, in itself, speak against the study. And then:
“You have to look at this long-term. I would not weigh a few years earlier or later too much.” But if the forecast turns out to be wrong by 2015, “I will be the last one to deny it”.