A lot of press and commentary came out this week concerning a presentation and press release from Tim Barnett and Scripps colleagues presenting at the AAAS meeting (The Independent, John Fleck (and again), David Appell…etc). Why did this get so much attention given that there is no actual paper yet?
Basically, it is because it is a really good idea. The background is this: whenever forcings change, there is a delay in the climate response because of the large thermal inertia of the oceans – it takes a long time to warm them up or cool them down. While they are changing, there will be a ‘radiation imbalance’ at the top of the atmosphere. In the case of an increase in greenhouse gases (which cause a warming), that implies that the planet will be absorbing more solar radiation than it emits as longwave radiation. This imbalance will persist until the planet regains its ‘normal’ quasi-equilibrium. (The planet is never in perfect equilibrium of course; there are small fluctuations in the annual mean that occur in the absence of any forcings, but these fluctuations are small compared to the current anthropogenic forcings and the implied net imbalance).
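The lag can be illustrated with a toy zero-dimensional energy-balance model. This is just a sketch with assumed, illustrative parameters (a mixed-layer heat capacity, a feedback parameter, and a step forcing – none of these numbers come from the work discussed here); it only shows qualitatively why the imbalance persists for decades after a forcing is applied.

```python
# Toy energy-balance model: C * dT/dt = F - lam * T
# All parameter values below are illustrative assumptions.

C = 150 * 1025 * 3990 / 3.156e7  # heat capacity of a ~150 m ocean mixed layer,
                                 # converted to W yr m^-2 K^-1 (~19 W yr/m^2/K)
lam = 1.25                       # climate feedback parameter, W/m^2 per K (assumed)
F = 3.7                          # step forcing, W/m^2 (roughly doubled CO2)

dt = 0.1                         # timestep, years
T = 0.0                          # global-mean temperature anomaly, K
imbalance = []
for step in range(int(30 / dt)):   # integrate 30 years
    N = F - lam * T                # top-of-atmosphere radiation imbalance, W/m^2
    T += dt * N / C                # the ocean warms in response to the imbalance
    imbalance.append(N)

# The imbalance starts at the full forcing and decays with an e-folding
# time of C/lam (~15 years with these numbers): decades after the forcing
# is applied, a substantial imbalance is still being absorbed by the ocean.
print(imbalance[0], imbalance[-1])
```

With a deeper effective ocean (larger `C`), the decay timescale lengthens and the imbalance persists even longer – which is the point: while the ocean is still catching up, the planet takes in more energy than it emits.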
This imbalance is a really important quantity – estimates of how much warming is in the ‘pipeline’, the size of the aerosol cooling effect etc. all depend on knowing what this number is. However, it is very difficult to measure from space. Getting an accurate global average of all the longwave energy coming out, along with a correctly calibrated estimate of all the solar radiation coming in, and estimating the difference between them to the necessary accuracy (fractions of a W/m2) is currently beyond our technical capabilities.
So then, how do we estimate it? All of that energy has to be going somewhere, and it is easy to show that neither the land surface nor the glaciers can be storing this energy. Therefore it is going into warming the oceans, and indeed the historical analysis of ocean temperatures by Levitus and colleagues in 2000, updated recently in GRL, does show warming that is consistent with the radiative imbalance suggested by climate models (around 0.5 W/m2 from 1955 to 1995 and possibly as high as 0.7 W/m2 over the last decade, Hansen et al (2002)).
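The conversion from an ocean heat content change to a global-mean imbalance is simple arithmetic: divide the energy gained by the length of the record and by the surface area of the Earth. A rough sketch, taking the approximately 2×10^23 J increase reported by Levitus et al. for the ~1955–1996 period as an illustrative input (the exact number depends on the depth range and period analysed):

```python
# Back-of-envelope: ocean heat content change -> global-mean imbalance (W/m^2).
# The heat content value is illustrative, of the order reported by Levitus et al.

delta_E = 2.0e23           # ocean heat content increase over the record, J
years = 41                 # ~1955-1996
seconds = years * 3.156e7  # length of the record in seconds
area = 5.1e14              # surface area of the Earth, m^2

imbalance = delta_E / (seconds * area)  # W per m^2 of the Earth's surface
print(round(imbalance, 2))
```

This gives a few tenths of a W/m2 – the same order as the model-derived numbers quoted above, which is the sense in which the observed ocean warming and the modelled radiative imbalance are consistent.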
The advantage of using ocean heat content changes to detect climate change is that the record is less noisy than the surface temperature record: the weather that affects atmospheric measurements has much less impact below the ocean mixed layer. The disadvantage is that comprehensive ocean measurements do not go back very far.
Previous work by Barnett’s group showed that coupled models, when forced with greenhouse gases, did give ocean heat content changes similar to those seen in the data. But questions remained concerning the degree of decadal variability, the length of the record, and the balance in the models between aerosol forcing and climate sensitivity (which can’t really be disentangled using this measure). (A recent report from the National Academies discusses this in more detail). With the latest round of modelling results now having been performed and archived for the IPCC 4th Assessment Report, the time is appropriate to revisit the question with more up-to-date models and observations.
When this is done, will it really provide the ‘final proof’ of man-made global warming? As I indicated above, the preliminary work by Levitus, Barnett, Hansen and others has already demonstrated that this is a good approach. Thus it is more a question of refining the details, rather than suddenly ‘proving’ global warming. If the latest round of models compares better to the data than before (as claimed at the AAAS), and if the result is robust to some of the remaining uncertainties (in aerosol forcing, ocean model components etc.), then it will certainly add to the ‘balance of evidence’ that man-made global warming is already here.