Guest post by Tamino
In a paper, “Heat Capacity, Time Constant, and Sensitivity of Earth’s Climate System” soon to be published in the Journal of Geophysical Research (and discussed briefly at RealClimate a few weeks back), Stephen Schwartz of Brookhaven National Laboratory estimates climate sensitivity using observed 20th-century data on ocean heat content and global surface temperature. He arrives at the estimate 1.1±0.5 deg C for a doubling of CO2 concentration (0.3 deg C for every 1 W/m^2 of climate forcing), a figure far lower than most estimates, which fall generally in the range 2 to 4.5 deg C for doubling CO2. This paper has been heralded by global-warming denialists as the death-knell for global warming theory (as most such papers are).
Schwartz’s results would imply two important things. First, that the impact of adding greenhouse gases to the atmosphere will be much smaller than most estimates; second, that almost all of the warming due to the greenhouse gases we’ve put in the atmosphere so far has already been felt, so there’s almost no warming “in the pipeline” due to greenhouse gases already in the air. Both ideas contradict the consensus view of climate scientists, and both ideas give global-warming skeptics a warm fuzzy feeling (but not too warm).
Despite the celebratory reaction from the denialist blogosphere (and U.S. Senator James Inhofe), this is not a “denialist” paper. Schwartz is a highly respected researcher (deservedly so) in atmospheric physics, mainly working on aerosols. He doesn’t pretend to smite global-warming theories with a single blow, he simply explores one way to estimate climate sensitivity and reports his results. He seems quite aware of many of the caveats inherent in his method, and invites further study, saying in the “conclusions” section:
Finally, as the present analysis rests on a simple single-compartment energy balance model, the question must inevitably arise whether the rather obdurate climate system might be amenable to determination of its key properties through empirical analysis based on such a simple model. In response to that question it might have to be said that it remains to be seen. In this context it is hoped that the present study might stimulate further work along these lines with more complex models.
What is Schwartz’s method? First, assume that the climate system can be effectively modeled as a zero-dimensional energy balance model. This would mean that there would be a single effective heat capacity for the climate system, and a single effective time constant for the system as well. Climate sensitivity will then be

S = τ / C,

where S is the climate sensitivity, τ is the time constant, and C is the heat capacity. Simple!
To estimate those parameters, Schwartz uses observed climate data. He assumes that the time series of global temperature can effectively be modeled as a linear trend, plus a one-dimensional, first-order “autoregressive” or “Markov” or simply “AR(1)” process [an AR(1) process is a random process with some ‘memory’ of its previous value; subsequent values y_t are statistically dependent on the immediately preceding value y_(t-1) through an equation of the form y_t = ρ y_(t-1) + ε, where ρ is typically required to be between 0 and 1, and ε is a series of random values conforming to a normal distribution. The AR(1) model is a special case of a more general class of linear time series models known as “Autoregressive moving average” models].
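To make the AR(1) definition concrete, here is a minimal sketch (illustrative only, not from Schwartz's paper) of simulating such a process and checking that its lag-1 autocorrelation recovers ρ:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(rho, n, sigma=1.0):
    """Generate n samples of an AR(1) process: y_t = rho * y_(t-1) + eps_t,
    where eps_t is Gaussian white noise with standard deviation sigma."""
    y = np.zeros(n)
    eps = rng.normal(0.0, sigma, n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + eps[t]
    return y

y = simulate_ar1(rho=0.8, n=10_000)
# The sample lag-1 autocorrelation should come out close to rho = 0.8
r1 = np.corrcoef(y[:-1], y[1:])[0, 1]
```

The "memory" of the process is visible in r1: each value carries a fraction ρ of the previous one, so nearby values are correlated even though the driving noise is independent.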
In such a case, the autocorrelation of the global temperature time series (its correlation with a time-delayed copy of itself) can be analyzed to determine the time constant τ. He further assumes that ocean heat content represents the bulk of the heat absorbed by the planet due to climate forcing, and that its changes are roughly proportional to the observed surface temperature change; the constant of proportionality gives the heat capacity. The conclusion is that the time constant of the planet is 5±1 years and its heat capacity is 16.7±7 W • yr / (deg C • m^2), so climate sensitivity is 5/16.7 ≈ 0.3 deg C/(W/m^2).
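The arithmetic is easy to verify; the sketch below also multiplies by the canonical forcing for doubled CO2 (~3.7 W/m², a standard value assumed here, not stated in this paragraph) to recover Schwartz's headline sensitivity:

```python
tau = 5.0         # estimated time constant, years
C = 16.7          # estimated heat capacity, W·yr/(deg C·m^2)
S = tau / C       # sensitivity in deg C per (W/m^2)

F_2xCO2 = 3.7     # forcing for doubled CO2, W/m^2 (assumed, standard value)
dT = S * F_2xCO2  # equilibrium warming for doubled CO2

# S ≈ 0.30 deg C/(W/m^2); dT ≈ 1.1 deg C, matching the figure quoted above
```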
One of the biggest problems with this method is that it assumes that the climate system has only one “time scale,” and that time scale determines its long-term, equilibrium response to changes in climate forcing. But the global heat budget has many components, which respond faster or slower to heat input: the atmosphere, land, upper ocean, deep ocean, and cryosphere all act with their own time scales. The atmosphere responds quickly, the land not quite so fast, the deep ocean and cryosphere very slowly. In fact, it’s because it takes so long for heat to penetrate deep into the ocean that most climate scientists believe we have not yet experienced all the warming due from the greenhouse gases we’ve already emitted [Hansen et al. 2005].
Schwartz’s analysis depends on assuming that the global temperature time series has a single time scale, and modelling it as a linear trend plus an AR(1) process. There’s a straightforward way to test at least the possibility that it obeys the stated assumption. If the linearly detrended temperature data really do behave like an AR(1) process, then the autocorrelation at lag Δt, which we can call r(Δt), will be related to the time constant τ by the simple formula

r(Δt) = exp(−Δt / τ).

In that case,

τ = − Δt / ln(r),

for any and all lags Δt. This is the formula used to estimate the time constant τ.
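The key property this implies is that *every* lag should yield (roughly) the same τ. A minimal sketch of the estimator, applied to a synthetic AR(1) series with a known 5-year time constant (illustrative, not Schwartz's actual data):

```python
import numpy as np

def estimate_tau(y, lag, dt=1.0):
    """Estimate the AR(1) time constant from the sample autocorrelation
    at a given lag, via tau = -lag*dt / ln(r(lag))."""
    r = np.corrcoef(y[:-lag], y[lag:])[0, 1]
    return -lag * dt / np.log(r)

# A genuine AR(1) series with tau = 5 yr, i.e. rho = exp(-1/5), long
# enough that sampling error is small.
rng = np.random.default_rng(1)
tau_true = 5.0
rho = np.exp(-1.0 / tau_true)
n = 200_000
y = np.zeros(n)
eps = rng.normal(size=n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + eps[t]

# Each lag should recover roughly the same tau; a systematic drift of
# tau with lag is evidence that the AR(1) model is wrong.
taus = [estimate_tau(y, lag) for lag in (1, 2, 4, 8)]
```

This is exactly the diagnostic applied to the temperature data below: if the estimated τ drifts with lag, the single-time-scale assumption fails.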
And what, you wonder, are the estimated values of the time constant from the temperature time series? Using annual average temperature anomaly from NASA GISS (one of the data sets Schwartz uses), after detrending by removing a linear fit, Schwartz arrives at his Figure 5g:
Using the monthly rather than annual averages gives Schwartz’s Figure 7:
If the temperature follows the assumed model, then the estimated time constant should be the same for all lags, until the lag gets large enough that the probable error invalidates the result. But it’s clear from these figures that this is not the case. Rather, the estimated τ increases with increasing lag. Schwartz himself says:
As seen in Figure 5g, values of τ were found to increase with increasing lag time from about 2 years at lag time Δt = 1 yr, reaching an asymptotic value of about 5 years by about lag time Δt= 8 yr. As similar results were obtained with various subsets of the data (first and second halves of the time series; data for Northern and Southern Hemispheres, Figure 6) and for the de-seasonalized monthly data, Figure 7, this estimate of the time constant would appear to be robust.
If the time series of global temperature really did follow an AR(1) process, what would the graphs look like? We ran 5 simulations of an AR(1) process with a 5-year time scale, generating monthly data for 125 years, then estimated the time scale using Schwartz’s method. We also applied the method to GISTEMP monthly data (the results are slightly different from Schwartz’s because we used data through July 2007). Here’s how they compare:
This makes it abundantly clear that if temperature did follow the stated assumption, it would not give the results reported by Schwartz. The conclusion is inescapable: global temperature cannot be adequately modeled as a linear trend plus AR(1) process.
You probably also noticed that for the simulated AR(1) process, the estimated time scale is consistently less than the true value (which for the simulations, is known to be exactly 5 years, or 60 months), and that the estimate decreases as lag increases. This is because the usual estimate of autocorrelation coefficients is a biased estimate. The word “bias” is used in its statistical sense, that the expected result of the calculation is not the true value. As the lag gets higher, the impact of the bias increases and the estimated time scale decreases. When the time series is long and the time scale is short, the bias is negligible, but when the time scale is any significant fraction of the length of the time series, the bias can be quite large. In fact, both simulations and theoretical calculations demonstrate that for 125 years of a genuine AR(1) process, if the time scale were 30 years (not an unrealistic value for global climate), we would expect the estimate from autocorrelation values to be less than half the true value.
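The size of this bias is easy to demonstrate by Monte Carlo. The sketch below (an illustration under stated assumptions, not a reproduction of the theoretical calculation) simulates many 125-year realizations of an AR(1) process with a true 30-year time constant, detrends each, and averages the lag-1 estimates of τ:

```python
import numpy as np

rng = np.random.default_rng(42)
tau_true = 30.0                      # years: long relative to the 125-yr record
rho = np.exp(-1.0 / tau_true)
n_years, n_trials = 125, 2000

est = []
for _ in range(n_trials):
    # Simulate one 125-year AR(1) realization
    y = np.zeros(n_years)
    eps = rng.normal(size=n_years)
    for t in range(1, n_years):
        y[t] = rho * y[t - 1] + eps[t]
    # Remove a linear trend, as in the analysis of the temperature data
    t_ax = np.arange(n_years)
    y = y - np.polyval(np.polyfit(t_ax, y, 1), t_ax)
    # Lag-1 autocorrelation -> time-constant estimate
    r1 = np.corrcoef(y[:-1], y[1:])[0, 1]
    if r1 > 0:
        est.append(-1.0 / np.log(r1))

mean_tau = np.mean(est)
# mean_tau comes out well below the true 30 years: the short record
# biases the sample autocorrelation low, and hence tau low.
```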
Earlier in the paper, the AR(1) assumption is justified by regressing each year’s average temperature anomaly against the previous year’s and studying the residuals from that fit:
Satisfaction of the assumption of a first-order Markov process was assessed by examination of the residuals of the lag-1 regression, which were found to exhibit no further significant autocorrelation.
The result for this test is graphed in his Figure 5f:
Alas, it seems this test was applied only to the annual averages. For that data, there are only 125 data points, so the uncertainty in an autocorrelation estimate is as big as ±0.2, much too large to reveal whatever autocorrelation might remain. Applying the test to the monthly data, the larger number of data points would have given this more precise result:
The very first value, at lag 1 month, is way outside the limit of “no further significant autocorrelation,” and in fact most of the low-lag values are outside the 95% confidence limits (indicated by the dashed lines).
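The residual test itself is simple to carry out; here is a minimal sketch (an illustration of the procedure, not Schwartz's code) that regresses each value against its predecessor and reports the residual autocorrelations alongside the ±1.96/√N confidence bound:

```python
import numpy as np

def residual_autocorr(y, max_lag=12):
    """Fit the lag-1 regression y_t = a + b*y_(t-1), then return the
    autocorrelations of its residuals at lags 1..max_lag, together with
    the approximate 95% bound for 'no remaining autocorrelation'."""
    b, a = np.polyfit(y[:-1], y[1:], 1)
    resid = y[1:] - (a + b * y[:-1])
    resid = resid - resid.mean()
    denom = np.sum(resid**2)
    acf = np.array([np.sum(resid[:-k] * resid[k:]) / denom
                    for k in range(1, max_lag + 1)])
    bound = 1.96 / np.sqrt(len(resid))
    return acf, bound

# With ~125 annual points the bound is about 0.18, too loose to reveal
# residual structure; with ~1500 monthly points it tightens to ~0.05,
# which is why the monthly data expose the failure of the AR(1) model.
```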
In short, the global temperature time series clearly does not follow the model adopted in Schwartz’s analysis. It’s further clear that even if it did, the method is unable to diagnose the right time scale. Add to that the fact that assuming a single time scale for the global climate system contradicts what we know about the response times of the different components of the earth, and it adds up to only one conclusion: Schwartz’s estimate of climate sensitivity is unreliable. We see no evidence from this analysis to indicate that climate sensitivity is any different from the best mainstream estimates, somewhere within the range of 2 to 4.5 deg C for a doubling of CO2.
A response to the paper, raising these (and other) issues, has already been submitted to the Journal of Geophysical Research, and another response (by a team in Switzerland) is in the works. It’s important to note that this is the way science works. An idea is proposed and explored, the results are reported, the methodology is probed and critiqued by others, and their results are reported; in the process, we hope to learn more about how the world really works.
That Schwartz’s result is heralded as the death-knell of global warming by denialist blogs and Sen. Inhofe, even before it has been officially published (let alone before the scientific community has responded) says more about the denialist movement than about the sensitivity of earth’s climate system. But, that’s how politics works.