Net ocean heat content changes are very closely tied to the net radiative imbalance of the planet, since the ocean has by far the largest heat capacity of any component of the climate system. Thus we have often made the point that ocean heat content, diagnosed through measurements of temperature in the ocean, is a key metric in evaluating the response of the system to changes in CO2 and the other radiative forcings (see here).
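As a rough back-of-envelope sketch of that link (round illustrative numbers, not values taken from any particular paper): an imbalance quoted in W/m2 is just a rate of heat accumulation divided by the surface area of the Earth.

```python
# Illustrative conversion between an ocean heat uptake rate and a
# planetary radiative imbalance. The constants are standard round
# values; the 9.7e21 J/yr figure is a made-up example, not a result
# from any of the papers discussed here.

SECONDS_PER_YEAR = 3.156e7
EARTH_SURFACE_AREA_M2 = 5.1e14  # the imbalance is conventionally quoted per m2 of the whole globe

def imbalance_w_per_m2(heat_uptake_j_per_yr):
    """Convert a heat uptake rate (J/yr) into W per m2 of Earth's surface."""
    return heat_uptake_j_per_yr / SECONDS_PER_YEAR / EARTH_SURFACE_AREA_M2

# ~1e22 J/yr of ocean heat uptake corresponds to an imbalance of ~0.6 W/m2
print(imbalance_w_per_m2(9.7e21))
```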
In a paper I co-authored last year (Hansen et al, 2005), we compared model results with the observed trends over the 1993 to 2003 period and showed that they matched quite well (here). Given their importance in evaluating climate models, new reports on the ocean heat content numbers are followed quite closely.
Recently, a new preprint with the latest observations (2003 to 2005) has appeared (Lyman et al, hat tip to Climate Science), which shows a decrease in ocean heat content over those two years, reducing the magnitude of the long-term trend reported for 1993 to 2003 in previous work (Willis et al, 2004) – from 0.6 W/m2 to about 0.33 W/m2. This has generated a lot of commentary in some circles, but in many cases the full context has not been appreciated.
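To see how just two anomalously cool years can pull down a decade-long trend by this sort of amount, here is a toy least-squares calculation – every number in it is invented for illustration, and none are the Lyman et al or Willis et al values:

```python
# Toy illustration (made-up numbers) of how appending two cool years
# to a ten-year warming record reduces the fitted linear trend.

def ols_slope(x, y):
    """Ordinary least-squares slope of y against x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

years = list(range(1993, 2004))            # 1993..2003
ohc = [0.6 * (y - 1993) for y in years]    # steady rise, arbitrary units
print(ols_slope(years, ohc))               # ~0.6 per year by construction

years += [2004, 2005]
ohc += [ohc[-1] - 1.5, ohc[-1] - 3.0]      # two sharply cooler years
print(ols_slope(years, ohc))               # fitted slope drops to ~0.4 per year
```

Two years out of thirteen are enough to cut the fitted trend by roughly a third, even though the first eleven years are unchanged.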
With any new data set there are a number of questions that must always be asked: Do the measurements really represent what is claimed (in particular, are there sampling or definitional problems)? Do related data provide some support for the results? If correct, what are the potential causes? And, most importantly, what part of the changes is related to predictable deterministic effects? This last question brings up the issue of model evaluation, because of course the models can only be expected to reproduce the deterministic long-term component.
Given some of the ongoing discussion, it obviously still needs to be pointed out that year-to-year fluctuations in any of the key metrics of the planet’s climate are mostly a function of the weather, and cannot be expected to be captured in climate models, whose ‘weather’ is uncorrelated with that in the real world. So claims that two years’ worth of extra data of any quantity somehow prove or disprove climate models are simply erroneous. Clearly, life would be simpler without weather ‘noise’ cluttering up the system, but this is something that just needs to be dealt with. Dealing with it means paying more attention to long-term changes than to short-term fluctuations, and making sure that enough ensemble simulations are made with the models to isolate the signal from the noise.
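The point about ensembles can be made with a quick synthetic sketch (toy numbers, not model output): each ‘run’ below shares the same forced trend but carries its own uncorrelated ‘weather’, and averaging across runs shrinks the noise by roughly the square root of the ensemble size while leaving the trend intact.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def one_run(n_years=30, trend=0.6, noise=2.0):
    """Synthetic series: a linear forced trend plus Gaussian weather noise."""
    return [trend * t + random.gauss(0.0, noise) for t in range(n_years)]

# Twenty runs with identical forcing but independent 'weather'
runs = [one_run() for _ in range(20)]

# Average the runs year by year: the uncorrelated noise largely cancels
ensemble_mean = [sum(vals) / len(vals) for vals in zip(*runs)]

# The ensemble mean tracks the 0.6/yr forced trend much more closely
# than any individual run does.
```

The same logic applies to the observations, except that the real world gives us exactly one ‘run’, which is why only the long-term component is a fair test.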
Going back to the data, are there any potential problems? Well, as the authors address, this time frame is the period when the ARGO floating profilers really start to become important in improving the coverage of the data (look at the difference in coverage in their figure 8 between 2002 and 2005). The profilers have clearly been the best thing to happen to ocean observations in decades. Excluding the profilers gives a smaller recent change – but with increased error bars because of the deterioration in sampling. Additionally, some parts of the ocean, particularly the Arctic, are still not being sampled sufficiently. These effects may yet prove to be part of the story.
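The sampling effect on the error bars can be illustrated with a toy Monte Carlo estimate (purely synthetic data, nothing to do with the real ocean fields): the spread of a mean estimated from n observations scales like 1/sqrt(n), so sparser pre-ARGO coverage translates directly into wider uncertainties.

```python
import math
import random

random.seed(1)  # fixed seed for reproducibility

# A synthetic 'field' standing in for one snapshot of ocean temperatures
field = [random.gauss(0.0, 1.0) for _ in range(100_000)]

def mean_estimate_spread(n_samples, n_trials=1000):
    """Std dev of the field-mean estimate over repeated random samplings."""
    means = [sum(random.sample(field, n_samples)) / n_samples
             for _ in range(n_trials)]
    mu = sum(means) / len(means)
    return math.sqrt(sum((m - mu) ** 2 for m in means) / len(means))

# Dense ('ARGO-era') vs sparse ('pre-ARGO') sampling of the same field:
# the sparse estimate has roughly four times the spread (sqrt(400/25) = 4)
print(mean_estimate_spread(400))
print(mean_estimate_spread(25))
```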
What about any supporting data? One problem is that if the ocean has lost heat at the suggested rate, then the thermal expansion part of recent sea level rise should have decreased (i.e. sea level should have dropped). However, sea level overall has continued to rise unabated according to the altimeter satellites. The only way to reconcile the results would be a sharp compensating increase in freshwater from the ice sheets adding to sea level (from 0.7 mm/yr to 2.9 mm/yr). This is conceivable (though unlikely), but clearly would not be good news!
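The budget arithmetic behind that reconciliation is simple enough to sketch. In the code below, only the 0.7 and 2.9 mm/yr freshwater figures come from the discussion above; the total rate and the expansion terms are placeholder assumptions chosen purely so that the budget closes.

```python
# Sea level budget: total rise = thermal expansion + freshwater mass input.
# The freshwater term is whatever is left over once expansion is removed.

def required_mass_term(total_mm_yr, expansion_mm_yr):
    """Freshwater contribution (mm/yr) needed to close the budget."""
    return total_mm_yr - expansion_mm_yr

TOTAL = 3.0  # mm/yr, a placeholder for the observed altimeter rate

# With a warming-era expansion term (placeholder 2.3 mm/yr), a modest
# ~0.7 mm/yr of freshwater input closes the budget:
print(required_mass_term(TOTAL, 2.3))

# If the expansion term collapses to ~0.1 mm/yr while the total is
# unchanged, the freshwater term must jump to ~2.9 mm/yr:
print(required_mass_term(TOTAL, 0.1))
```

Whatever the exact decomposition, any drop in the expansion term must be matched one-for-one by extra mass input if the observed total is to stay the same.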
If, however, we assume that the data are reasonably accurate, what could be going on? Some of the changes are clearly due to ocean circulation changes – an increased advection of warm water from the sub-tropical Atlantic to the north, for instance – but the biggest contribution comes from the changes seen in the sub-tropical South Pacific. The heat could either have been subducted below the 700m level (the bottom depth for this analysis), advected sideways (though there is no real evidence for that), or lost through the surface (either to the atmosphere, or directly out to space). The third possibility is thought the most likely.
This in turn could have a number of possible causes: ‘natural’ tropical variability – for instance, the winter (DJF) tropical Pacific cooled over these two years, possibly as part of larger-scale ENSO variability – or, alternatively, a change in the forcings. One possible candidate is an as-yet-unquantified increase in aerosol forcing from Asian sources; this hasn’t been included in simulations since the data on emissions aren’t yet in.
On a larger point, the radiative imbalance in the AR4 models is a function of how effectively the oceans sequester heat (more mixing down implies a greater imbalance) as well as of what the forcings are. There is therefore variation in that modelled value across the models – some values are smaller than our reported figure (though all are significantly positive).
A slightly more subtle (and slightly more valid) criticism is that the reported magnitude of decadal variability in the OHC numbers is larger than is seen in most coupled models. Some recent work has shown that sampling may play a role here, but it wouldn’t necessarily be surprising if this were so. Even in our paper last year we stated that earlier reported decadal variations were not well simulated. There is obviously much that remains to be understood about annual-to-decadal variability; however, it must be remembered that it is only on the longer time scales that we expect the forced signal to dominate over the internal ‘noise’. On this basis, the ocean heat content changes remain a good validation of the climate model simulations.