Ocean heat content revisions

Hot on the heels of last month's report of a discrepancy in the ocean surface temperatures, a new paper in Nature (Domingues et al, 2008) reports on revisions to the ocean heat content (OHC) data – a correction required because of other discrepancies in measuring systems found last year.

Before we get to the punchline though, it’s worth going over the saga of the OHC trends in the literature over the last 8 years. In 2001, Syd Levitus and colleagues first published their collation of ocean heat content trends since 1950, based on archives of millions of temperature profiles taken by oceanographic researchers over the preceding 50 years. This showed a long term upward trend, but with some very significant decadal variability – particularly in the 1970s and 1980s. The long term trend was in reasonable agreement with model predictions, but the decadal variability was much larger in the observations.

As in all cases where there is a data-model mismatch, people go back to both in order to see what might be wrong. One of the first suggestions was that since the spatial sampling became much coarser in the early part of the record, there might be more noise earlier on that didn’t actually reflect a real ocean-wide signal. Sub-sampling the ocean models at the same sampling density as the real observations did increase the decadal variability in the diagnostic but it didn’t provide a significantly better match (AchutaRao et al, 2006).
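The sub-sampling test described above can be illustrated with a toy calculation. The sketch below is a minimal, hypothetical example (the grid size, noise level, and 10% coverage fraction are invented for illustration, not taken from AchutaRao et al): a synthetic "model" temperature field is averaged once over all grid cells and once over a sparse random subset, mimicking the coarse early-record coverage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic "model" field: 50 years of upper-ocean
# temperature anomalies on a coarse 18 x 36 (lat x lon) grid, with a
# small imposed warming trend plus grid-scale noise.
years, nlat, nlon = 50, 18, 36
field = (0.01 * np.arange(years)[:, None, None]
         + rng.normal(0.0, 0.2, (years, nlat, nlon)))

# Area weights proportional to cos(latitude).
lats = np.linspace(-85, 85, nlat)
w = np.cos(np.deg2rad(lats))[None, :, None] * np.ones((1, nlat, nlon))

def weighted_mean(x, weights, mask):
    """Area-weighted mean over the grid cells where mask is True."""
    wm = np.where(mask, weights, 0.0)
    return (x * wm).sum(axis=(1, 2)) / wm.sum(axis=(1, 2))

# "Full" global mean uses every grid cell each year ...
full = weighted_mean(field, w, np.ones((years, nlat, nlon), dtype=bool))

# ... while the sub-sampled mean mimics sparse coverage: only ~10% of
# cells are "observed" in any given year, drawn at random.
sparse_mask = rng.random((years, nlat, nlon)) < 0.10
sparse = weighted_mean(field, w, sparse_mask)

# Sparse sampling preserves the long-term trend but inflates the
# apparent year-to-year variability of the global mean.
print(np.std(np.diff(full)), np.std(np.diff(sparse)))
```

As in the real comparison, the sub-sampled series is noisier than the fully sampled one, which is why sparse early coverage alone was a plausible (if ultimately insufficient) explanation for the extra decadal variability.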

Other problems came up when trying to tally the reasons for sea level rise (SLR) over that 50 year period. Global SLR is the sum of contributions from (in rough order of importance) ocean warming, land ice melting, groundwater extraction/dam building, and remnant glacial isostatic adjustment (the ocean basins are still slowly adjusting to the end of the last ice age). The numbers from tide gauges (and later, satellites) were higher than what you got by estimating each of those terms separately. (Note that the difference is mainly in the early part of the record – more recent trends fit pretty well.) There were enough uncertainties in the various components, though, that it wasn’t obvious where the problems were.
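The budget-closure check described above amounts to simple arithmetic: add up the estimated contributions and compare against the observed rate. The numbers below are purely illustrative placeholders (in mm/yr), not the actual published estimates, which is the point – when the residual is large compared to the component uncertainties, something in the budget is wrong.

```python
# Sketch of a sea level "budget" closure check. All rates are
# hypothetical illustrative values in mm/yr, NOT real estimates.
budget = {
    "ocean thermal expansion":      0.4,
    "land ice melt":                0.7,
    "groundwater / dam building":   0.1,
    "glacial isostatic adjustment": 0.0,
}

observed_trend = 1.8  # hypothetical tide-gauge rate, mm/yr

explained = sum(budget.values())
residual = observed_trend - explained

print(f"explained: {explained:.1f} mm/yr")
print(f"residual:  {residual:.1f} mm/yr (unexplained)")
```

With these made-up numbers a third of the observed rate is unexplained; the historical debate was precisely about whether such a residual reflected errors in the tide-gauge record, in the component estimates, or both.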

Since 2003, the Argo program has seeded the oceans with autonomous floats which move up and down the water column and periodically send their data back for analysis. This has at last dealt with the spatial sampling issue (at least for the upper 700 meters of the ocean – greater depths remain relatively obscure). Initial results from the Argo data seemed to indicate that the ocean cooled quite dramatically from 2003 to 2005 (in strong contradiction to the sea level rise, which had continued) (Lyman et al, 2006). But comparisons with other sources of data suggested that this cooling was only seen with the Argo floats themselves. Thus when an error in the instruments was reported in 2007, things seemed to fit again.

In the meantime however, calibrations of the other sources of data against each other were showing some serious discrepancies as well. Ocean temperature measurements at depth are traditionally made with CTDs (a probe lowered on a line that provides a continuous temperature and salinity profile), Nansen bottles (which collect water samples from specified depths) or XBTs (eXpendable BathyThermographs), which are basically just thrown overboard. CTDs are used over and over again and can be calibrated continuously to make sure their pressure and temperature measurements are accurate, but XBTs are free-falling, and the depths from which they report temperatures need to be estimated from the manufacturer’s fall-rate calculations. As the mix of CTDs, bottles, XBTs and floats has changed over time, minor differences in the bias of each methodology can end up influencing the trends.
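The fall-rate issue can be made concrete with a small sketch. XBT depth is conventionally inferred from elapsed time via a quadratic fall-rate equation, z(t) = a·t − b·t². The coefficient pairs below are commonly quoted values (the original manufacturer's equation and the Hanawa et al. 1995 revision for certain probe types); treat them as illustrative of how a coefficient change shifts every inferred depth, rather than as authoritative.

```python
# Depth (m) inferred t seconds after an XBT probe hits the water,
# using the quadratic fall-rate equation z(t) = a*t - b*t**2.
# Coefficients are illustrative commonly-quoted values.

def xbt_depth(t, a=6.472, b=0.00216):
    """Original manufacturer fall-rate equation."""
    return a * t - b * t**2

def xbt_depth_revised(t, a=6.691, b=0.00225):
    """Revised fall-rate equation (Hanawa et al. 1995 style)."""
    return a * t - b * t**2

# A temperature recorded 60 s into the drop gets assigned different
# depths under the two equations -- a systematic offset of ~13 m.
t = 60.0
print(xbt_depth(t), xbt_depth_revised(t))
```

Because a warm anomaly assigned to the wrong depth changes the computed heat content of the water column, a systematic depth offset of this size, applied to millions of profiles, is exactly the kind of bias that can masquerade as a climate signal.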

(If this is all starting to sound very familiar to those who looked into the surface station or sea surface temperature record issues, that’s because it is the same problem. Almost all long historical climate records were not collected with climate in mind.)
