Decadal predictions

It’s worth pointing out that ‘skill’ is defined relative to climatology (i.e. do you do a better job at estimating temperature or rainfall anomalies than if you’d simply assumed that the season would be like the average of the last ten years, for instance). Some skill doesn’t necessarily mean that the predictions are great – it simply means that they are slightly better than you could do before. We should also distinguish between skillful in a statistical sense and useful in a practical sense. An increase of a few percent in variance explained would show up as improved skill, but that is unlikely to be of sufficient practical value to shift any policy decisions.
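To make the definition concrete, here is a minimal sketch (in Python, with made-up numbers) of one common way to quantify this: a mean-square-error skill score measured against a climatological forecast of zero anomaly. This is just an illustration of the idea, not the specific metric used in any particular study.

```python
# Sketch of a mean-square-error skill score relative to climatology.
# The anomaly values are purely illustrative; real assessments use hindcast ensembles.
import numpy as np

obs      = np.array([0.12, -0.05, 0.30, 0.18, -0.10])  # "observed" anomalies (deg C)
forecast = np.array([0.10,  0.02, 0.22, 0.25, -0.02])  # hypothetical predicted anomalies
climo    = np.zeros_like(obs)                           # climatology predicts zero anomaly

mse_forecast = np.mean((forecast - obs) ** 2)
mse_climo    = np.mean((climo - obs) ** 2)

# Skill > 0 means the forecast beats climatology; skill = 1 would be a perfect forecast.
skill = 1.0 - mse_forecast / mse_climo
print(f"MSE skill score relative to climatology: {skill:.2f}")
```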

So given that we know roughly what we are looking for, what is needed for this to work?

First of all, we need to know whether we have enough data to get a reasonable picture of the ocean state right now. This is actually quite hard, since you’d like to have subsurface temperature and salinity data from a large part of the oceans. That gives you the large scale density field, which is the dominant control on the ocean dynamics. Right now this is just about possible with the new Argo float array, but before about 2003, subsurface data in particular were much sparser outside a few well travelled corridors. Note that temperature data are not sufficient on their own for calculating changes in the ocean dynamics, since they are often inversely correlated with salinity variations (when it is hot, it is often salty, for instance), which reduces their impact on the density. Conceivably, if any skill in the prediction is simply related to surface temperature anomalies being advected around by the mean circulation, temperature-only initializations might still be useful, but one would have to be very wary of dynamical changes, and that would perhaps limit the usefulness of the approach to a couple of years.
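To see why temperature alone isn’t enough, here is a small sketch using a linearized equation of state, delta_rho ≈ rho0*(beta*delta_S − alpha*delta_T). The coefficients and anomalies below are rough, illustrative mid-latitude surface values, not taken from any particular dataset.

```python
# Density compensation with a linearized equation of state (illustrative values only).
rho0  = 1025.0      # reference density (kg/m^3)
alpha = 2.0e-4      # thermal expansion coefficient (1/K)
beta  = 7.6e-4      # haline contraction coefficient (1/psu)

delta_T = 1.0       # warm anomaly (K)
delta_S = 0.25      # co-occurring salty anomaly (psu)

drho_T = -rho0 * alpha * delta_T   # warming alone makes the water lighter
drho_S =  rho0 * beta  * delta_S   # the extra salt makes it denser
print(f"T contribution:      {drho_T:+.3f} kg/m^3")
print(f"S contribution:      {drho_S:+.3f} kg/m^3")
print(f"Net density anomaly: {drho_T + drho_S:+.3f} kg/m^3")  # largely compensated
```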

Next, given any particular distribution of initialization data, how should this be assimilated into the forecasting model? This is a real theoretical problem, given that models all have systematic deviations from the real world. If you simply force the model temperature and salinity to look exactly like the observations, then you risk having any forecast dominated by model drift when you remove the assimilation. Think of an elastic band being pulled to the side by the ‘observations’, but snapping back to its default state when you stop pulling. (A likely example of this is the ‘coupling shock’ phenomenon possibly seen in the Keenlyside et al simulations). A better way to do this is via anomaly forcing – that is, you only impose the differences from the climatology on the model. That is guaranteed to have less model drift, but at the expense of having the forecast potentially affected by systematic errors in, say, the position of the Gulf Stream. In both methods, of course, the better the model, the smaller these problems are. There is a good discussion of the Hadley Centre methods in Haines et al (2008) (no sub reqd.).
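Here is a schematic of the difference between the two approaches, with hypothetical temperature values – not real data, and not the actual Hadley Centre scheme.

```python
# Schematic of full-field vs. anomaly initialization (hypothetical fields).
import numpy as np

obs         = np.array([18.2, 15.1, 10.4])   # "observed" ocean temperatures (deg C)
obs_climo   = np.array([18.0, 15.0, 10.0])   # observed climatology
model_climo = np.array([19.0, 14.0, 11.0])   # the model's own (biased) climatology

# Full-field initialization: force the model straight to the observations.
# When the assimilation stops, the model tends to drift back to its own climatology.
full_field_init = obs.copy()

# Anomaly initialization: impose only the observed departures from climatology,
# added onto the model's climatology, so there is less drift -- but systematic
# biases (e.g. a misplaced Gulf Stream) remain in the forecast.
anomaly_init = model_climo + (obs - obs_climo)

print("full-field init:", full_field_init)
print("anomaly init:   ", anomaly_init)
```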

Assuming that you can come up with a reasonable methodology for the initialization, the next step is to understand the actual predictability of the system. For instance, given the inevitable uncertainties due to sparse coverage or short term variability, how fast do slightly differently initialized simulations diverge? (Note that we aren’t talking about the exact path of the simulation, which will diverge as fast as weather forecasts do – a couple of weeks – but about the larger scale statistics of ocean anomalies.) This appears to be a few years to a decade in “perfect model” tests (where you try to predict how a particular model will behave using the same model, but with an initialization that mimics what you’d have to do in the real world).
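As a toy illustration of the idea behind such tests, one can follow an ensemble of a simple damped, noise-driven index started from nearly identical initial conditions and watch the ensemble spread grow toward its climatological value. The AR(1) parameters below are arbitrary and chosen only to make the saturation time scale visible; real perfect-model studies use the full coupled model.

```python
# Toy "perfect model" predictability test: an AR(1) process stands in for a
# large-scale ocean anomaly index.
import numpy as np

rng = np.random.default_rng(0)
n_members, n_years = 100, 15
damping, noise_amp = 0.9, 0.2          # arbitrary, illustrative AR(1) parameters

# All members start from nearly the same "analysis", differing only by tiny errors.
x = 1.0 + 0.05 * rng.standard_normal(n_members)

climo_spread = noise_amp / np.sqrt(1 - damping**2)   # asymptotic (climatological) spread
for year in range(1, n_years + 1):
    x = damping * x + noise_amp * rng.standard_normal(n_members)
    print(f"year {year:2d}: ensemble spread = {x.std():.2f} (climatology ~ {climo_spread:.2f})")

# Predictability is essentially gone once the ensemble spread saturates at the
# climatological value -- several years for these parameters.
```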

Finally, given that you can show that the model with its initialization scheme and available data has some predictability, you have to show that it gives a useful increase in the explained variance in any quantities that someone might care about. For instance, perfect predictability of the maximum overturning streamfunction might be scientifically interesting, but since it is not an observable quantity, it is mainly of academic interest. Much more useful is how any surface air temperature or rainfall predictions will be affected. This kind of analysis is only just starting to be done (since you needed all the other steps to work first).
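A sketch of what that last check looks like in practice, using synthetic numbers standing in for hindcast and observed surface air temperature anomalies (again, purely illustrative):

```python
# Assessing explained variance of hindcast surface-temperature anomalies
# (synthetic data, not real hindcasts).
import numpy as np

rng = np.random.default_rng(1)
obs = rng.standard_normal(30)                          # "observed" regional T anomalies
hindcast = 0.6 * obs + 0.8 * rng.standard_normal(30)   # a partially skillful hindcast

r = np.corrcoef(hindcast, obs)[0, 1]
print(f"correlation: {r:.2f}, explained variance: {100 * r**2:.0f}%")

# Whether that fraction of explained variance is *useful* (e.g. for planning) is a
# separate question from whether it is statistically different from zero.
```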
