Confusion continues regarding trends in global temperatures. The misconception that ‘global warming has stopped’ still lives on in some minds. We have already discussed why this argument is flawed. So why have we failed to convince ;-) ?
An Italian translation is available here.
The confused argument hinges on one data set – HadCRUT 3V – which is only one of several estimates, and the global temperature record that exhibits the least change over the last decade. Other temperature analyses suggest greater warming. Thus, one could argue that HadCRUT 3V represents the lower estimate, if a warming trend can even be defined for such a short interval.
A comparison with other temperature analyses, such as the NASA/GISS analysis (pink in the figure on the left), reveals differences. We can also compare with model-generated data (re-analyses), keeping in mind that one must be very careful with these, since they are not appropriate for studying long-term climate change (they misrepresent trends, at least on a local scale). Nevertheless, information from independent data suggests an increase in global mean temperatures even over the last decade.
All scientific results involve some degree of uncertainty (error bars), which can only be reduced if one can show that some of the data are influenced by an external factor (‘contamination’) or are not representative of the study. Hence, if some of the data are incorrect, it is fair to exclude them to reduce the error bars. But this requires solid and convincing evidence of misrepresentation; one cannot just pick the low values and claim that they describe the upper limit without proving that all the data with higher values are wrong. In other words, arguing that a lower limit is the upper bound is utter nonsense (even some who claim to be ‘statisticians’ have made this mistake!).
Another issue is that some of the data – i.e. the data from the Climate Research Unit (CRU) – have incomplete coverage, with large gaps in the Arctic, where other data suggest the greatest increases in temperature. The figure below reveals the holes in the data coverage; it compares the HadCRUT 3V data with the NCEP re-analysis.
Figure caption: The difference between the Oct. 2007 – Sep. 2008 temperature average and the 1961-1990 mean temperature for HadCRUT 3V (upper left) and the NCEP re-analysis (upper right). Below is a comparison between the 12-month 60N-90N mean temperature evolution (red=NCEP, black=HadCRUT 3v). (click on figures for PDF version)
Re-analysis data are results from atmospheric models into which observed data have been fed and used to correct the simulation, in order to obtain the best possible description of the real atmosphere. But it’s important to note that the NCEP re-analysis and other re-analyses (e.g. ERA40) are not regarded as appropriate for trend studies, due to changes in the observational systems (new satellites coming in etc.). Nevertheless, a comparison between the re-analyses and observations can highlight differences, which may suggest where to look for problems.
The animated plot on the right shows the NCEP re-analysis mean temperature difference between the two 5-year periods 2004-2008 and 1999-2003. Such results do not show long-term trends, but they do show that temperatures in the Arctic have been high in recent years.
The NOAA report card on the Arctic was based on the CRUTEM 3v data set (see figure below), which excludes temperatures over the ocean – thus giving an even less complete picture of Arctic temperatures. The numbers I get suggest that more than 80% of the grid boxes north of 60N contain missing values over the most recent decade.
Figure caption: The difference between Nov. 2007 – Oct. 2008 temperature average and the 1961-1990 mean temperature for CRUTEM 3v (upper left) and NCEP re-analysis (upper right). Below is a comparison between the 12-month 60N-90N mean temperature evolution. (click on figures for PDF-version)
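To make the ‘80% of grid boxes’ statement concrete, here is a minimal sketch of how such a coverage statistic can be computed. The grid, the decade of synthetic anomalies, and the pattern of voids are all made up for illustration; the real CRUTEM 3v grid differs in detail.

```python
import numpy as np

# Hypothetical anomaly grid (time, lat, lon), with NaN marking grid boxes
# that have no observations (illustrative data, not actual CRUTEM 3v).
rng = np.random.default_rng(0)
lats = np.arange(-87.5, 90, 5.0)              # 5-degree grid-box centres
anom = rng.normal(size=(120, lats.size, 72))  # 120 months, i.e. one decade
anom[:, lats > 60, ::2] = np.nan              # fake data voids north of 60N

# Fraction of grid boxes north of 60N with missing values over the decade
arctic = anom[:, lats > 60, :]
missing = np.isnan(arctic).any(axis=0).mean()
print(f"{100 * missing:.0f}% of boxes north of 60N have missing values")
```

The same kind of count, applied to the real gridded files, is what lies behind the 80% figure quoted above.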
The funny thing, however, is that over the last decade the Arctic CRUTEM 3v temperatures are closer to the corresponding NCEP re-analysis estimates than the more complete HadCRUT 3v data are. This may be a coincidence. The re-analyses use additional data to fill in the voids – e.g. satellite measurements and predictions based on the laws of physics. Thus, the temperature in areas with no observations is in principle physically consistent with the surrounding temperatures and the state of the atmosphere (circulation).
Below is a figure showing a similar comparison between HadCRUT 3v and GISTEMP (from NASA/GISS). The latter provides a more complete representation of the Arctic by taking spatial correlation into account, extrapolating/interpolating in space. GISTEMP does not really have a better empirical basis in the Arctic, but the extrapolation (the filling in of values where data are missing) gives the recent high Arctic temperatures more weight.
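The kind of spatial extrapolation GISTEMP performs can be illustrated with a toy distance-weighted average. The linear taper to zero at 1200 km is in the spirit of the GISS scheme, but the station anomalies and distances below are invented for the example:

```python
import numpy as np

def gistemp_like_weight(dist_km, radius_km=1200.0):
    """Weight decreasing linearly from 1 at the station to 0 at the cutoff.
    A simplified sketch in the spirit of GISTEMP, not the actual GISS code."""
    return np.clip(1.0 - dist_km / radius_km, 0.0, None)

# Hypothetical Arctic-rim stations: anomalies (deg C) and their distances (km)
# to an unobserved high-Arctic grid box.
anoms = np.array([1.8, 2.3, 1.5])
dists = np.array([400.0, 900.0, 1150.0])

w = gistemp_like_weight(dists)
filled = np.sum(w * anoms) / np.sum(w)
print(f"interpolated anomaly: {filled:.2f} C")
```

The filled-in value is dominated by the nearest station, which is why warm rim stations pull the interpolated high-Arctic anomalies upward.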
A comparison of temperatures over the most recent available 30-year period (1978-2007) shows high temperatures over parts of Russia (figure below, upper left panel), and the difference between GISTEMP and HadCRUT 3v shows good agreement apart from around the Arctic rim and in some maritime sectors (upper right panel). The time evolution of the Northern Hemisphere mean for the two data sets is shown in the lower panel: good agreement over most of the record, but slightly higher GISTEMP estimates over the last 10 years. (The global mean is not shown because my computer did not have sufficient memory for the complete analysis, but the two data sets also show a similar evolution in e.g. the IPCC AR4.)
Figure caption: (upper left) HadCRUT 3V mean T(2m) anomaly over 1976-2005 (w.r.t. 1950-1980); (upper right) the GISS – HadCRUT 3V difference in mean T(2m) over 1976-2005; and (lower) the Northern Hemisphere mean temperature variations (red=GISTEMP, black=HadCRUT 3v).
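Computing a hemispheric mean like the one in the lower panel involves one step that is easy to get wrong: grid boxes shrink towards the pole, so each latitude band must be weighted by the cosine of its latitude. A minimal sketch, with made-up anomalies standing in for the real gridded data:

```python
import numpy as np

# Hypothetical gridded anomalies (lat, lon) on a 5-degree grid; the point
# here is the area weighting, not the numbers themselves.
rng = np.random.default_rng(1)
lats = np.arange(-87.5, 90, 5.0)
anom = rng.normal(loc=0.4, scale=0.8, size=(lats.size, 72))

# Weight each latitude band by cos(latitude) before averaging, because
# grid boxes cover less area near the pole.
nh = lats > 0
weights = np.cos(np.radians(lats[nh]))
nh_mean = np.average(anom[nh].mean(axis=1), weights=weights)
print(f"NH mean anomaly: {nh_mean:+.2f} C")
```

Without the weighting, the sparse and anomalously warm Arctic bands would count far more than their actual area warrants.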
Note that the low Arctic sea-ice extent over the last few summers is independent evidence of high Arctic temperatures.
The insufficient observational coverage has also been noted by the IPCC AR4 and by Gillett et al. (Nature Geoscience, 2008), who argue that the observed warming in the Arctic and Antarctic is not consistent with internal climate variability and natural forcings alone, but is directly attributable to increased GHG levels.
They also suggested that the polar warming is likely to have discernible impacts on ecology and society (e.g.).
In their study, there are at least 15 grid boxes with valid data (usually representing one measurement each) over the 1900-2008 period. Furthermore, the only valid Northern Hemisphere observations they used were from the Arctic rim, as opposed to the high Arctic itself. The situation is slightly better for the Antarctic (with one observation near the South Pole). Nevertheless, the title ‘Attribution of polar warming to human influence’ [my emphasis] is a bit misleading: parts of the high latitudes, yes; polar, no.
The attribution study was based on series of 5-yr mean temperatures and spatial averages over 90-degree sectors (i.e. four sectors in all), where sectors and periods with no valid data were excluded.
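The reduction to 5-yr means and 90-degree sector averages can be sketched in a few lines. The grid resolution and the synthetic anomalies below are made up for illustration; only the averaging logic corresponds to what is described above:

```python
import numpy as np

# Hypothetical annual Arctic anomalies, 1900-2008, on a 10-degree longitude
# grid (shapes are illustrative, not Gillett et al.'s actual data).
years = np.arange(1900, 2009)
rng = np.random.default_rng(2)
annual = rng.normal(size=(years.size, 36))    # (year, longitude band)

# Average the 36 longitude bands into four 90-degree sectors (9 bands each)
sectors = annual.reshape(years.size, 4, 9).mean(axis=2)

# ... then into non-overlapping 5-year means, dropping the incomplete tail.
n5 = years.size // 5
five_yr = sectors[: n5 * 5].reshape(n5, 5, 4).mean(axis=1)
print(five_yr.shape)   # 21 pentads x 4 sectors
```

Averaging this coarsely is what makes the sparse observations usable at all, but, as noted below, it also erases geographical differences within each sector.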
There are some caveats to their study: the global climate models (GCMs) do not reproduce the 1930-1940 Arctic warm event very well, and geographical differences among the limited number of grid boxes in the observations and the GCMs may have been erased by taking the average over the 90-degree sectors.
The 1930-1940 Arctic warming was probably not externally forced, but one could also argue that the models do not capture all of the internal variations, since few reproduce similar features. Furthermore, the present GCMs have problems reproducing Arctic sea-ice characteristics (the simulated ice tends to be too extensive) and ocean heat content, and fail to capture the ongoing decrease in Arctic sea-ice area. Most of these problems coincide with the gap in the CRUTEM 3v data, but there are also some uncertainties associated with the lack of data in the polar regions.
The optimal fingerprint analysis hinges on the assumption that control simulations with the GCMs realistically reproduce the climate noise. I think that the GCMs do a good job for most of the planet, but independent work suggests local problems in the Arctic associated with a misrepresentation of the sea-ice extent. This may not have affected the analysis much, if the problem is limited to the high Arctic. Furthermore, the results suggested a one-to-one correspondence in trends between simulations and observations, but the analysis also gave a regression coefficient of 2-4 for the natural forcings. The latter suggests to me that there may be some problems with the analysis or the GCMs.
Thus, this is probably not the final word on the matter. At least, I’m not convinced about the attribution yet. The whole issue boils down to an insufficient amount of empirical data (i.e. observations), GCM limitations at high latitudes, and too-large data gaps. But the pronounced changes in the Arctic are consistent with AGW. The irony is that the real world shows signs of more dramatic change than the GCMs project, especially if you look at the sea-ice extent.
The lack of data in the polar regions is a problem, and the ongoing International Polar Year (IPY) campaign is a huge concerted international effort to improve the data. Data are irreplaceable, regardless of the modelling capability, as science requires the theory to be tested against independent empirical data. The re-analyses provide a physically consistent description of the atmosphere – suggesting high temperatures in the Arctic – but we can only be sure about this when we have actually been there and made the real measurements (some can also be done by satellites).
A glimpse into the technical details
More technically, the analysis involved a technique called ‘optimal fingerprinting‘ or ‘optimal detection’, which looks for the best signal in the noisy data and puts emphasis on regions where the GCMs give the most realistic description of the climate variations. Basically, the optimal fingerprint technique involves linear least-squares regression, which is familiar to many analysts.
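The regression at the heart of such a study can be boiled down to a toy example: regress the observations onto a model-simulated ‘fingerprint’ and see whether the scaling factor is consistent with one. Everything below (the fingerprint, the noise level, the length of the series) is synthetic, chosen only to show the mechanics:

```python
import numpy as np

# Toy detection/attribution regression: observations y are modelled as a
# scaling factor beta times a GCM "fingerprint" x, plus internal variability.
# A beta consistent with 1 means the simulated response matches observations.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 20)                  # fingerprint: 20 pentad means
y = 1.0 * x + rng.normal(scale=0.1, size=20)   # synthetic "observations"

beta = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]
print(f"scaling factor beta = {beta:.2f}")
```

In this toy setup beta comes out near one by construction; the puzzling result mentioned above is that the real analysis yielded scaling factors of 2-4 for the natural forcings.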
The analysis of Gillett et al. involved ‘time-space’ empirical orthogonal functions (EOFs) with a truncation of 28 (and up to 78 modes for the Arctic, where the maximum truncation is the number of sectors multiplied by the number of 5-yr means – see supplementary material Fig. S3). These enter the equation through the estimation of the noise (covariance matrix), i.e. the internal variations and their magnitude. The clever thing is that they let each EOF describe a set of 20 maps of 5-year-mean temperatures, thus representing both the spatial features and their chronology.
For the mathematically inclined, EOFs are similar to eigenvectors, and are mainly used to prepare data for further analysis. The purpose of using EOFs is usually either (i) to compress the information or (ii) to make the data more ‘well-behaved’ (in mathematical terms: orthogonal). While one typically uses only the first few EOFs, Gillett et al. experimented with everything from just one to the whole set, taking advantage of the orthogonal properties to allow the calculation of the inverse of the noise covariance matrix. This is a neat mathematical trick, but it doesn’t help if the GCMs do not provide a good description of the internal variations.
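The EOF idea can be demonstrated in a few lines with a singular value decomposition; this is only a sketch of the concept, not Gillett et al.’s actual procedure, and the data matrix is random:

```python
import numpy as np

# EOFs from a data matrix (time x space) via the SVD: the rows of vt are
# the spatial EOF patterns (a sketch of the idea, not the actual analysis).
rng = np.random.default_rng(4)
data = rng.normal(size=(20, 80))   # 20 pentad maps x 80 "grid boxes"
data -= data.mean(axis=0)          # remove the time mean first

u, s, vt = np.linalg.svd(data, full_matrices=False)
eofs = vt                          # mutually orthogonal spatial patterns

# Orthogonality is what makes the noise covariance easy to invert: in the
# EOF basis it is diagonal (the squared singular values).
print(np.allclose(eofs @ eofs.T, np.eye(eofs.shape[0])))  # prints True
```

Truncating to the first few rows of `vt` compresses the data; keeping all of them, as Gillett et al. did in some experiments, preserves everything while still giving a trivially invertible covariance matrix.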