Is Climate Modelling Science?

This means that validating these models is quite difficult. (NB. I use the term validating not in the sense of ‘proving true’ (an impossibility), but in the sense of ‘being good enough to be useful’). In essence, the validation must be done for the whole system if we are to have any confidence in the predictions about the whole system in the future. This validation is what most climate modellers spend almost all their time doing. First, we look at the mean climatology (i.e. are the large-scale features of the climate reasonably modelled? Does it rain where it should? Is there snow where there should be? Are the ocean currents and winds going the right way?), then at the seasonal cycle (what does the sea ice advance and retreat look like? Does the inter-tropical convergence zone move as it should?). Generally we find that the models do a reasonable job (see here or here for examples of different groups’ model validation papers). There are of course problematic areas (such as the eastern boundary regions of the oceans, the circulation near large mountain ranges, etc.) where important small-scale processes may not be well understood or modelled, and these are the chief targets for further research by model developers and observationalists.
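To make this kind of climatology check concrete, here is a minimal sketch of the sort of diagnostic involved: an area-weighted global-mean bias and RMSE for an annual-mean field such as surface temperature. The grid, field values and ‘model’ bias below are synthetic stand-ins, not real model output or observations.

```python
# Minimal sketch of a mean-climatology check: compare a modelled annual-mean
# field against an observed climatology via an area-weighted bias and RMSE.
# The arrays here are synthetic placeholders for illustration only.
import numpy as np

nlat, nlon = 90, 180
lat = np.linspace(-89, 89, nlat)                      # cell-centre latitudes (deg)
weights = np.cos(np.deg2rad(lat))[:, None] * np.ones((nlat, nlon))
weights /= weights.sum()                              # normalised area weights

# Hypothetical fields (K): 'observations' plus a model with a small warm bias.
obs = 288.0 + 30.0 * np.cos(np.deg2rad(lat))[:, None] * np.ones((nlat, nlon))
model = obs + np.random.default_rng(0).normal(1.0, 0.5, size=obs.shape)

bias = np.sum(weights * (model - obs))                # global-mean bias
rmse = np.sqrt(np.sum(weights * (model - obs) ** 2))  # area-weighted RMSE

print(f"global-mean bias: {bias:+.2f} K, area-weighted RMSE: {rmse:.2f} K")
```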

Then we look at climate variability. This step is key, but it is also quite subtle. There are two forms of variability: intrinsic variability (that occurs purely as a function of the internal chaotic dynamics of the system), and forced variability (changes that occur because of some external change, such as solar forcing). Note that ‘natural’ variability includes both intrinsic and forced components due to ‘natural’ forcings, such as volcanoes, solar or orbital changes. A clean comparison relies on either being able to isolate just one reasonably known forcing, or having enough data to be able to average over many examples and thus isolate the patterns associated solely with that forcing, even though in any particular case, more than one thing might have been happening. (A more detailed discussion of these points is available here).
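As a toy illustration of that separation, the sketch below assumes an ensemble of runs with identical (hypothetical) external forcing but different initial conditions: the ensemble mean estimates the forced response, while departures from it estimate the intrinsic, chaotic component. The forcing shape, noise level and ensemble size are invented for illustration.

```python
# Toy separation of forced vs. intrinsic variability via ensemble averaging.
# Assumed: a prescribed forced cooling pulse plus white-noise 'weather'.
import numpy as np

rng = np.random.default_rng(1)
nyears, nmembers = 150, 10
years = np.arange(nyears)

# Hypothetical forced signal: a short-lived cooling pulse centred on year 75 (K).
forced = -0.3 * np.exp(-((years - 75) / 10.0) ** 2)
# Each ensemble member = forced signal + its own realisation of internal noise.
runs = forced + rng.normal(0.0, 0.15, size=(nmembers, nyears))

ensemble_mean = runs.mean(axis=0)   # estimate of the forced response
internal = runs - ensemble_mean     # residual intrinsic variability

print(f"std of a single run:      {runs[0].std():.3f} K")
print(f"std of the ensemble mean: {ensemble_mean.std():.3f} K")
```

The ensemble mean is much less noisy than any single run, which is why averaging over many realisations (or many events) helps isolate the pattern associated with a given forcing.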

While there is good data over the last century, there were many different changes to the planet’s radiation balance (greenhouse gases, aerosols, solar forcing, volcanoes, land use changes, etc.), some of which are difficult to quantify (for instance, the indirect aerosol effects) and whose history is not well known. Earlier periods, say from 1850 back to the 1500s or so, have reasonable coverage from paleo-proxy data and were subject only to solar and volcanic forcing. In my own group’s work, we have used the spatial patterns available from proxy reconstructions of this period to look at both solar and volcanic forcing in the pre-industrial period. In both cases, despite uncertainties (particularly in the magnitude of the solar forcing), the comparisons are encouraging.
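For a sense of what such a spatial comparison might involve, here is a hedged sketch (not our actual analysis) of a centred, area-weighted pattern correlation between a simulated response pattern and a proxy-based reconstruction on a common grid; both fields are random placeholders constructed so that the model pattern only partially matches the reconstruction.

```python
# Sketch of a centred, area-weighted spatial pattern correlation between a
# model response pattern and a proxy reconstruction. Both fields are synthetic.
import numpy as np

rng = np.random.default_rng(2)
nlat, nlon = 45, 90
lat = np.linspace(-88, 88, nlat)
w = np.cos(np.deg2rad(lat))[:, None] * np.ones((nlat, nlon))
w /= w.sum()                                            # normalised area weights

recon = rng.normal(size=(nlat, nlon))                   # stand-in proxy reconstruction
model = 0.7 * recon + rng.normal(0.0, 0.7, size=recon.shape)  # partially matching model pattern

def area_mean(field):
    """Area-weighted global mean of a 2-D field."""
    return np.sum(w * field)

ra = recon - area_mean(recon)
ma = model - area_mean(model)
pattern_corr = area_mean(ra * ma) / np.sqrt(area_mean(ra**2) * area_mean(ma**2))
print(f"centred pattern correlation: {pattern_corr:.2f}")
```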

Recent volcanic eruptions have also provided very good tests of the models’ water vapour feedbacks (Soden et al., 2002), dynamical feedbacks (Graf et al., 1994; Stenchikov et al., 2002), and overall global cooling (Hansen et al., 1992). In fact, the Hansen et al. (1992) paper predicted the temperature impact of Pinatubo (around 0.5 deg C) before it was measured.
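For a sense of the magnitudes involved, a zero-dimensional energy-balance sketch gives a cooling of the right order for a Pinatubo-like forcing pulse. The parameter values below (peak forcing, decay time, mixed-layer depth, feedback strength) are ballpark assumptions, not the settings used in the papers cited above.

```python
# Zero-dimensional energy balance model: C dT/dt = F(t) - lambda*T,
# forced by a decaying Pinatubo-like aerosol forcing. All numbers are rough.
import numpy as np

SECONDS_PER_YEAR = 3.15e7
C = 1025.0 * 3985.0 * 50.0   # heat capacity of a ~50 m ocean mixed layer (J m-2 K-1)
lam = 1.25                   # radiative feedback parameter (W m-2 K-1)
F0, tau_f = -4.0, 1.0        # assumed peak forcing (W m-2) and e-folding decay (yr)

dt = 0.01 * SECONDS_PER_YEAR
T, T_min = 0.0, 0.0
for step in range(int(10 * SECONDS_PER_YEAR / dt)):
    t_yr = step * dt / SECONDS_PER_YEAR
    F = F0 * np.exp(-t_yr / tau_f)    # decaying volcanic aerosol forcing
    T += dt * (F - lam * T) / C       # explicit Euler step of the EBM
    T_min = min(T_min, T)

print(f"peak global-mean cooling in this toy model: {T_min:.2f} K")
```

With these particular numbers the toy model cools by roughly 0.4 K at its peak, the same order as the ~0.5 deg C impact quoted above, though the exact value obviously depends on the assumed parameters.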

The mid-Holocene (6,000 years ago) and the Last Glacial Maximum (~20,000 years ago) are also attractive targets for model validation, and while some successes have been noted (e.g. Joussaume et al., 1999; Rind and Peteet, 1985), there is still some uncertainty in the forcings and the response. Other periods, such as the 8.2 kyr event or the Paleocene-Eocene Thermal Maximum, are also useful, but clearly the further back in time one goes, the more uncertain the test becomes.
