Musings about models

With the blogosphere all a-flutter with discussions of hundredths-of-a-degree adjustments to the surface temperature record, you probably missed a couple of actually interesting stories last week.

Tipping points

Oft-discussed and frequently abused, tipping points are rarely defined with any precision. Tim Lenton does a good job in this recent article. A tipping ‘element’ for climate purposes is defined as

The parameters controlling the system can be transparently combined into a single control, and there exists a critical value of this control from which a small perturbation leads to a qualitative change in a crucial feature of the system, after some observation time.

and the examples that he thinks have the potential to be large-scale tipping elements are: Arctic sea-ice, a reorganisation of the Atlantic thermohaline circulation, melt of the Greenland or West Antarctic Ice Sheets, dieback of the Amazon rainforest, a greening of the Sahara, Indian summer monsoon collapse, boreal forest dieback and ocean methane hydrates.

To that list, we’d probably add any number of ecosystems where small changes can have cascading effects, such as fisheries. It’s interesting to note that most of these elements involve the physics that modellers are least confident about: hydrology, ice sheets and vegetation dynamics.
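To get a feel for the definition, here is a minimal toy sketch (ours, not from Lenton’s paper) of a tipping element: a system governed by dx/dt = c + x − x³, where the control c has a critical value beyond which the state the system had been sitting in simply vanishes, so a small extra push in the control produces a qualitative change in the state.

# Toy 'tipping element' (illustrative only): dx/dt = c + x - x**3 has a fold
# bifurcation at c* = 2/(3*sqrt(3)) ~ 0.385. Below c* there is a stable 'cold'
# state; just above c* that state disappears and the system jumps to a
# qualitatively different 'warm' state.
import math

def equilibrate(c, x0=-1.0, dt=0.01, steps=20000):
    """Crude forward-Euler integration of dx/dt = c + x - x**3."""
    x = x0
    for _ in range(steps):
        x += dt * (c + x - x**3)
    return x

c_crit = 2.0 / (3.0 * math.sqrt(3.0))
for c in (0.9 * c_crit, 1.1 * c_crit):
    side = "below" if c < c_crit else "above"
    print(f"c = {c:.3f} ({side} critical): settles at x = {equilibrate(c):+.2f}")

With c just below critical the system settles near x = −0.7; nudge c a mere 20% higher and it ends up near x = +1.2, a wholesale change of state from a small change in the control.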

Predictions vs. projections

As we discussed recently in connection with climate ‘forecasting’, the kinds of simulations used in AR4 are all ‘projections’, i.e. runs that attempt to estimate the forced response of the climate to emission changes, but that don’t attempt to estimate the trajectory of the unforced ‘weather’. As we mentioned briefly, this leads to a ‘sweet spot’ for forecasting a couple of decades into the future, where the initial-condition uncertainty has died away but the uncertainty in the emission scenario is not yet large enough to dominate. Last week there was a paper by Smith and colleagues in Science that tried to fill in those early years, using a model that initialises the upper-ocean heat content, on the idea that the structure of those anomalies controls the ‘weather’ progression over the next few years.
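To make the ‘sweet spot’ concrete, here is a back-of-the-envelope sketch (all numbers invented for illustration, nothing taken from the paper) of how a decaying initial-condition uncertainty and a growing scenario uncertainty combine to give a minimum in total forecast uncertainty somewhere in between:

# Conceptual sketch of the forecasting 'sweet spot' (made-up numbers):
# initial-condition uncertainty decays with lead time, emission-scenario
# uncertainty grows, and their combination has a minimum in between.
import numpy as np

years = np.arange(0, 51)                     # forecast lead time (years)
init_unc = 0.2 * np.exp(-years / 12.0)       # decaying 'weather' uncertainty (degC)
scen_unc = 0.004 * years                     # growing scenario uncertainty (degC)
total = np.sqrt(init_unc**2 + scen_unc**2)   # combined in quadrature

print(f"combined uncertainty is smallest about {years[np.argmin(total)]} years out")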

Smith and colleagues find that their initialisation makes a difference for about a decade, but that at longer timescales the results look like the standard projections (i.e. 0.2 to 0.3ºC warming per decade). One big caveat is that they aren’t able to predict El Niño events, and since those account for a great deal of the interannual global temperature anomaly, that is a limitation. Nonetheless, this is a good step forward, and people should be looking out for whether their predictions (a plateau until 2009 followed by a big ramp-up) materialise over the next few years.

Model ensembles as probabilities

A rather esoteric point of discussion concerning ‘Bayesian priors’ got a mainstream outing this week in the Economist. The very narrow point in question is the extent to which model ensembles are probability distributions: if only 10% of models show a particular behaviour, does that mean the likelihood of it happening is 10%?

The answer is no. The other 90% could all be missing some key piece of physics.
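A toy calculation (with invented numbers) shows how this plays out: if every model in the ensemble shares a structural assumption that happens to be too restrictive, the fraction of models showing a behaviour tells you very little about its real probability.

# Toy illustration (invented numbers) of why an ensemble fraction is not a
# probability: every model below shares a structural assumption that confines
# a key feedback to [0, 0.5], while in reality it could lie anywhere in [0, 1].
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

reality = rng.uniform(0.0, 1.0, n)   # the 'true' spread of the feedback
models = rng.uniform(0.0, 0.5, n)    # what the ensemble actually samples

threshold = 0.45                     # feedback above this triggers the behaviour
print(f"fraction of model runs showing it: {(models > threshold).mean():.0%}")
print(f"actual probability:                {(reality > threshold).mean():.0%}")

With these made-up numbers the ensemble ‘vote’ is about 10% while the actual probability is over 50%: the ensemble only samples the space of models that people happened to build, not the space of what might be true.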
