Guest commentary by Steve Sherwood
There are four independent instrumental records of sufficient length and potential accuracy to tell us about 20th-century climate change. The two longest ones measure temperature near the Earth’s surface: a vast network of weather stations over land areas, and ship data from the oceans. While land surface observations go back hundreds of years in a few places, data of sufficient coverage for estimating global temperature have been available only since the end of the 19th century. These have shown about 0.7 C of warming over land during the last century, with somewhat less increase indicated over oceans. The land records contain artifacts due to urbanization, tree growth around station locations, buildings or air conditioners installed near stations, and the like, but laborious data screening, correction procedures, and a posteriori tests have convinced nearly all researchers that the reported land warming trend must be largely correct. Qualitative indicators like sea ice coverage, spring thaw dates, and melting permafrost provide strong additional evidence that trends have been positive at middle and high northern latitudes, while glacier retreat suggests warming aloft at lower latitudes.
The other two climate records, so-called “upper air” records, measure temperatures in Earth’s troposphere and stratosphere. The troposphere—that part of the atmosphere that is involved in weather, about 85% by mass—is expected to warm at roughly the same rate as the surface. In the tropics, simple thermodynamics (as covered in many undergraduate meteorology courses) dictates that it should actually warm faster, up to about 1.8 times faster by the time you get to 12 km or so; at higher latitudes this ratio is affected by other factors and decreases, but does not fall very far below 1. These theoretical expectations are echoed by all numerical climate models regardless of whether the surface temperature changes as part of a natural fluctuation, increased solar heating, or increased opacity of greenhouse gases.
It turns out that the upper-air records have not shown the warming that should accompany the reported increases at the surface. Both the Microwave Sounding Unit (MSU) satellite (analyzed by the University of Alabama in Huntsville by John Christy and Roy Spencer) and weather balloon data (trends reported by a number of researchers, notably Jim Angell at NOAA) have failed to show significant warming since the satellite record began in late 1978, even though the surface record has been rising at its fastest pace (~0.15 C/decade) since instrumental records began. On the other hand both records have shown dramatic cooling in the stratosphere, where cooling is indeed expected due to increasing greenhouse gases and decreasing ozone (which heats the stratosphere due to its absorption of solar ultraviolet radiation). The sondes in particular have shown a lot more cooling than the satellites, almost certainly too much, leading one to wonder whether their tropospheric trends are also too low.
The non-warming troposphere has been a thorn in the side of climate detection and attribution efforts to date. Some have used it to question the surface record (though that argument has won few adherents within the climate community), while others have used it to deny an anthropogenic role in surface warming (an illogical argument since the atmosphere should follow no matter what causes the surface to warm). The most favored explanation has been that the “lapse rate,” or decrease in temperature as you go up in the atmosphere, has actually been increasing. This would contradict all of our climate models and would spell trouble for our understanding of the atmosphere, especially in the tropics.
This assumes that the observed trends are all real, which is reasonable when two independent measurements agree. But both upper-air observing systems are poorly suited in many respects for extracting small, long-term changes. These problems are sufficiently serious that NOAA’s satellite service (NESDIS) adjusts satellite data every week to match radiosondes, in effect relying upon radiosondes as a reference instrument. This incidentally means that the NCEP/NCAR climate reanalysis products are ultimately calibrated to radiosonde temperatures. Recent developments concerning the MSU satellite data are discussed in a companion piece.
What can the radiosonde data tell us?
Radiosondes themselves have significant problems and were also not designed for detection of small climate changes. These problems have been well documented anecdotally, and have been dutifully acknowledged by those who have published trends in radiosonde temperatures. The cautions urged by these researchers in interpreting the results have not always been taken on board by others, however.
Few if any sites have used exactly the same technology for the entire length of their record, and large artifacts have been identified in association with changes from one manufacturer to another or design upgrades by the same manufacturer. Artifacts have even been caused by changing software and bug fixes, balloon technology, and tether lengths. Alas, many changes over time have not been recorded, and consistent corrections have proven elusive even for recorded changes. While all commonly used radiosondes have nominal temperature accuracy of 0.1 or 0.2 K, these accuracies are verified only in highly idealized laboratory conditions. Much larger errors are known to be possible in the real world. The most egregious example is when the temperature sensor becomes coated with ice in a rain cloud, in which case upper tropospheric temperatures can be as much as 20 C too warm. This particular scenario is fairly easy to spot and such soundings can be removed, but one can see the potential problems if many, less obvious errors are present or if the sensor had only a little bit of ice on it! Another potential problem is pressure readings; if these are off, the reported temperature will have been measured at the wrong level.
The Sherwood et al. study in Science Express concerns one particular type of long-recognized radiosonde error, that caused by the sun shining on the “thermistor” (basically, a cheap thermometer easily read by an electric circuit). This problem has been documented, notably by Luers and Eskridge (1995, 1998), but correcting for it in the past has proven difficult and previously its magnitude was poorly known except under controlled conditions. The most popular radiosonde manufacturer worldwide today is the Vaisala corporation, whose strategy for coping with solar heating is to concede that it will happen and try to correct for it: the thermistor is mounted on a “boom” that sticks into the air flow where the sun can shine on it, but the heating error is estimated from the measured ascent rate and solar zenith angle and subtracted from the reported temperature. The magnitude of this correction can be several degrees, has varied with changing designs, and may not always have been properly applied in the past, especially if time of day, station location, or instrument version were incorrectly coded. The US radiosonde, until recently made exclusively by the VIZ corporation and now under contract to two separate manufacturers, has followed the strategy of trying to insulate the thermistor from solar effects by ducting it inside a white plastic and cardboard housing. However, this strategy is unlikely to completely prevent solar heating. The first US radiosonde designs, which had less effective shielding and lacked the white coating subsequently applied to the sensor to limit its solar absorption, showed obvious signs of solar heating error. Many other radiosonde designs exist; larger countries historically designed and built their own sondes, but some countries have abandoned their national sondes and started buying from (usually) Vaisala.
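The Vaisala-style correction can be sketched in a few lines. To be clear, the function below is purely illustrative: the coefficient `k`, the cosine-of-zenith-angle dependence, and the ventilation scaling are all hypothetical choices of mine to show the idea, not Vaisala’s actual (instrument-specific) algorithm.

```python
# Illustrative sketch of a radiation correction of the Vaisala type: the warm
# bias grows with incoming sunlight (via the solar zenith angle) and shrinks
# with ventilation (for which the balloon's ascent rate is a proxy). The
# constant k and the functional form here are hypothetical, chosen only to
# convey the structure of such a correction.
import math

def solar_heating_correction(zenith_deg, ascent_m_s, k=0.25):
    """Estimated warm bias (K) to subtract from the reported temperature.

    zenith_deg : solar zenith angle; >= 90 means the sun is below the horizon.
    ascent_m_s : balloon ascent rate in m/s, a proxy for sensor ventilation.
    k          : hypothetical sensitivity (K at overhead sun, 5 m/s ascent);
                 NOT a published Vaisala value.
    """
    if zenith_deg >= 90.0:
        return 0.0                       # no solar heating at night
    insolation = math.cos(math.radians(zenith_deg))
    ventilation = max(ascent_m_s, 0.5)   # guard against division blow-up
    return k * insolation * (5.0 / ventilation)

# Usage: subtract the estimated bias from a daytime reading.
reported_T = 220.0                                            # K, upper troposphere
corrected_T = reported_T - solar_heating_correction(30.0, 5.0)
```

Note how the correction depends on metadata (time of day, location, ascent rate): if any of those were miscoded, as the text describes, the correction would be wrong even when the scheme itself is sound.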
The Sherwood et al. study is the first to try to quantify the solar-heating error over time. We recognized that the true difference between daytime and nighttime temperatures through the troposphere and lower stratosphere should, on average, be rather small, and moreover should have changed very little over the last few decades. We also recognized that this difference could be observed quite accurately by examining consecutive daytime and nighttime observations. Nighttime observations at many stations are much rarer than daytime ones, so this strategy means throwing out most of the daytime data; this is one reason why previous, less focused investigations did not detect this particular problem. This data-treatment technique revealed that, as you go back farther in time, the daytime observations become progressively warmer compared to nighttime observations. This is a clear indication that, back in the 1960s and 1970s especially, the sun shining on the instruments was making readings too high. This problem disappeared by the late 1990s.
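The logic of the day-night comparison can be demonstrated with synthetic data. This is a toy reconstruction of my own, not the paper’s actual code or data: we fabricate a station record whose daytime readings carry a warm bias that shrinks over time, and show that epoch-mean (day minus night) differences recover that shrinking bias, because the true climate signal cancels out of the difference.

```python
# Toy demonstration of the day-night comparison. The true warming appears in
# both day and night readings and so cancels in the difference; only the
# daytime solar-heating bias (plus noise) survives. All numbers here are
# invented for illustration.
import random

random.seed(0)

def day_night_bias(year):
    """Synthetic daytime solar-heating bias, K: large early, gone by the late 1990s."""
    return max(0.0, 0.8 - 0.03 * (year - 1965))

records = []  # (year, daytime anomaly, consecutive nighttime anomaly), K
for year in range(1965, 2000):
    true_anom = 0.015 * (year - 1965)        # true warming signal, K
    day = true_anom + day_night_bias(year) + random.gauss(0.0, 0.2)
    night = true_anom + random.gauss(0.0, 0.2)
    records.append((year, day, night))

def mean_diff(recs, y0, y1):
    """Mean (day - night) difference over years y0 <= year < y1."""
    diffs = [d - n for (y, d, n) in recs if y0 <= y < y1]
    return sum(diffs) / len(diffs)

early = mean_diff(records, 1965, 1975)   # large apparent day-night difference
late = mean_diff(records, 1990, 2000)    # difference has mostly vanished
```

The shrinking `early` → `late` difference is the signature the study looked for; in the real data it required discarding most daytime soundings in favor of the rarer day-night pairs.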
The key thing here is not simply the existence of this problem, but the change over time. It turns out that in the tropics the artificial boost in the early readings was just about equal, on average, to the increase in surface temperature over the 1979-97 period (the trend in solar heating bias was -0.16 K/decade averaged from 850-300 hPa). In other words, this effect by itself could explain why reported temperatures did not increase: the increases in actual air temperature were nearly balanced by decreases in the (uncorrected) heating of the instrument by the sun. This effect was large in the tropics because of heavy reliance on daytime data in previous climatologies, and because the daytime biases there changed the most. Correcting for this one effect does not bring trends into perfect agreement with those predicted based on the surface—they still fall slightly short in the tropics during the last two decades, and are too strong in the southern hemisphere extratropics when measured over the last four decades—but these remaining discrepancies are well within what would be expected based on other errors and the poor spatial sampling of the radiosonde network.
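The near-cancellation described above is simple arithmetic, using the numbers quoted in the text:

```python
# Apparent trend = true trend + trend in the uncorrected bias.
# Values are those quoted above (K/decade, tropics, 850-300 hPa average).
true_trend = 0.15                 # warming implied by the surface record
bias_trend = -0.16                # trend in the uncorrected solar-heating bias
apparent_trend = round(true_trend + bias_trend, 3)
print(apparent_trend)             # about -0.01 K/decade: essentially no reported warming
```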
An important caveat is that, when instrument designs change, this can affect not only the daytime heating of the thermistor but can also affect the accuracy at night. Thus, correcting for this effect alone does not guarantee an accurate atmospheric trend. The other errors are, unfortunately, not as easy to quantify as the solar heating error. It is not clear what direction they may have pushed trends. Thus we are still in the dark as to the exact amount of warming that has occurred in the atmosphere. The one thing we do know is that we should not hang our hat on the trends in the reported observations until this, and all other problems, are sorted out.
The most likely resolution of the “lapse-rate conundrum,” in my view anyway, is that both upper-air records gave the wrong result. The instrument problems uncovered by these papers indicate that there is no longer any compelling reason to conclude that anything strange has happened to lapse rates. From the point of view of the scientific method, the data do not contradict or demand rejection of the hypotheses embodied by models that predict moist-adiabatic lapse rates, so these hypotheses still stand on the basis of their documented successes elsewhere. Further work with the data may lead us to more confident trends, and who knows, they might again disagree to some extent with what models predict and send us back to the “drawing board.” But not at the present time.
J. K. Luers, R. E. Eskridge, J. Appl. Meteor. 34, 1241 (1995).
J. K. Luers, R. E. Eskridge, J. Climate 11, 1002 (1998).