Michael Crichton is probably thanking his lucky stars that Von Storch’s original article appeared after (I guess) the completion of “State of Fear”. Otherwise he might have cited it, only to regret it now.
Thanks for an extremely lucid explanation of the statistical analysis behind coupled climate models. I know just enough about these models to realize that the issue of red vs. white noise is a huge one. You have demonstrated how the output of a model can be changed significantly by statistical assumptions that do not necessarily have any basis in the physical record. Of course, models should be constantly tested and reevaluated, and I have no doubt that there will be changes, possibly significant ones, to the Mann analysis. You have made it pretty clear, though, that Von Storch’s critique of the Mann results depends upon ahistorical assumptions, and not any inherent defects in the Mann analysis itself. Which sort of leads you to wonder about Von Storch’s motivation … Nah, no scientist would ever be results driven. Right?
It will surprise no one to learn that this confuses me. Is it the noise alone that is red (or white)? If so, how do we know the redness of real-world data? Doesn’t this imply that the signal can be separated from the noise?
[Response: See the hyperlink given for year to year correlations. This leads to a derivation by David Ritson showing that in the limit of low signal-to-noise ratios (which is what Von Storch et al assume in their analyses), the noise autocorrelation is equal to the autocorrelation in the combined signal+noise (which is what the proxies represent, by definition). This is easily verified in synthetic examples. – mike]
OK, sorry, I should have clicked the link first. Still, how sound is it to assume that all signal change is small over short year-to-year intervals? Aren’t there multiple cycles in play that are additive? Maybe I should just be quiet.
[Response: Actually, the “slow signal” assumption isn’t necessary. Experiments using synthetic proxies from model simulations and making no restrictive assumptions, show that at the low signal-to-noise ratios assumed by Von Storch, the noise autocorrelation approaches that of the signal+noise (i.e., proxy) series. -mike]
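The claim in these responses is easy to check with a small simulation: build an AR(1) “red” noise series, add a slowly varying signal at several signal-to-noise ratios, and compare lag-one autocorrelations. A minimal sketch (the noise alpha, signal shape, and SNR values are illustrative choices, not numbers from MBH98 or Von Storch et al.):

```python
import numpy as np

rng = np.random.default_rng(0)

def lag1_autocorr(x):
    """Sample lag-one autocorrelation coefficient."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

n = 50_000
alpha = 0.25  # assumed lag-1 autocorrelation of the proxy noise (illustrative)

# AR(1) "red" noise: e(j) = alpha * e(j-1) + white innovation
noise = np.zeros(n)
eps = rng.standard_normal(n)
for j in range(1, n):
    noise[j] = alpha * noise[j - 1] + eps[j]

# A slowly varying stand-in for the climate signal (a multidecadal sinusoid)
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 70.0)

results = {}
for snr in (0.1, 0.5, 2.0):
    proxy = snr * signal + noise       # proxy = signal + noise, by definition
    results[snr] = lag1_autocorr(proxy)
    print(snr, round(results[snr], 3))
```

At the lowest SNR the proxy’s lag-one autocorrelation is essentially that of the noise alone, as stated above; only at high SNR does the highly autocorrelated signal pull the combined value upward.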
I find this elaboration quite interesting. However, I was reading on SPIEGEL online and other German online news sites, as well as on the PIK Potsdam homepage, about a climb in climate sensitivity of about 15 to 78% due to positive feedback, and wanted to know a little more about the specific data, because SPIEGEL online normally provides quite polemic coverage of climate change. Could you contribute another article about this issue, or at least comment on it in this thread? Thanks and best regards, Nico
jhm, #6: “how sound is it to assume…” — there’s no assumption to question.
They are looking at actual collected tree rings: “Using data from the North American network of seventy sets of tree rings extending from 1400 to 1980 you obtain an actual one-year AR1 mean autocorrelation factor with a value close to 0.15 ….”
It’s always good to see guest commentaries on RealClimate. It’s great to have the authors of the studies discuss their studies in an interactive forum like a blog.
This short post was really helpful to me. My scientific experience is limited to a bachelor’s degree and a little lab-technician work about ten years ago. I sometimes find it challenging to completely follow the topics, especially the discussion of the statistics.
This was a lucid explanation of “red” and “white” noise and of how the use of the statistics affects the conclusions the scientists make.
Comment by Joseph O'Sullivan — 24 May 2006 @ 8:38 PM
The article you link to says the autocorrelation coefficients were calculated using data from 1400 – 1880 (not 1980 as suggested in the post — while the data goes through 1980, the calculation uses only data through 1880).
What are the results if you calculate the coefficients using data from 1400 – 1980?
[Response: As stated, the autocorrelation coefficients vary between about 0.15 and 0.3 depending on precisely what interval is used and precisely which proxy data are used. If the 20th century is included, the lag-1 autocorrelation coefficients are slightly larger (closer to 0.3), but they are arguably inflated by secular trends which are not reflective of simple persistence but forced long-term trends. -mike]
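The trend-inflation effect described in this response can be reproduced with synthetic data: adding a linear trend to a weakly autocorrelated AR(1) series raises its apparent lag-one persistence. A sketch (the AR(1) coefficient and trend size are illustrative choices, not estimates from the proxy network):

```python
import numpy as np

rng = np.random.default_rng(1)

def lag1(x):
    """Sample lag-one autocorrelation coefficient."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# AR(1) series with a true lag-one coefficient inside the quoted 0.15-0.3 range
n, alpha = 5000, 0.2
x = np.zeros(n)
eps = rng.standard_normal(n)
for j in range(1, n):
    x[j] = alpha * x[j - 1] + eps[j]

trend = np.linspace(0.0, 1.5, n)   # a forced secular trend (illustrative size)
rho_plain = lag1(x)
rho_trended = lag1(x + trend)      # the trend inflates apparent persistence
print(round(rho_plain, 2), round(rho_trended, 2))
```

The trended series shows a noticeably larger lag-one coefficient even though the underlying persistence is unchanged, which is why including the 20th century can inflate the estimate.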
The article you link to uses a differencing technique “to remove the large variance highly correlated slow component from consideration prior to determining the AR1 autocorrelation component.”
This doesn’t sound like a straightforward estimate of an autocorrelation coefficient. What do you get if you use the standard method of estimating autocorrelation coefficients? Also, in the Von Storch analysis, which type of model are they using: a standard AR1 model, or a model that corresponds to this differencing technique that removes the highly correlated slow component?
Thanks in advance for the clarification.
[Response: You misunderstand what he’s done. He hasn’t performed any differencing of the data at all. He has simply calculated the lag-1 autocorrelation coefficient of the actual proxy data and has provided an argument for why this should be representative of the noise autocorrelation. In fact, it is easy to verify that this is the case using synthetic examples from climate model simulations with added AR(1) noise. -mike]
If you could indulge me some more, I need some more education.
My reading of the Ritson paper you link to suggests that the differencing technique he uses removes ANY highly autocorrelated slow component before calculating the AR1 coefficient.
My question then is, what happens if there is NO temperature signal in the data, but there IS an extraneous, highly autocorrelated signal that is not temperature related (say a CO2 fertilization effect or a precipitation signal)? Does the procedure remove the confounding signal? Presumably the procedure cannot tell the difference between the “true” signal and an extraneous signal with the same statistical properties. So is it possible the procedure is removing exactly the type of red noise that Von Storch is trying to simulate?
[Response: You misunderstand what he’s done. He hasn’t performed any differencing of the data at all. He has simply calculated the lag-1 autocorrelation coefficient of the actual proxy data and has provided an argument for why this should be representative of the noise autocorrelation. In fact, it is easy to verify that this is the case using synthetic examples from climate model simulations with added AR(1) noise. -mike]
Thanks again, and I apologize for my ignorance in this area.
Four comments, which are at least three too many. First, I don’t see that this makes any difference with respect to whether von Storch et al.’s paper tested the MBH reconstruction method, not that any test was needed. Reconstruction methods should work for all values of the year-to-year correlations,
[Response: You misunderstand the distinction between the level of performance intrinsic to a particular method, and the level of performance associated with a particular error structure in the data. What Wahl et al have shown in their Science criticism of Von Storch et al is that there is no intrinsic bias in reconstructing low-frequency variability using the types of climate field reconstruction methods introduced by Mann et al (1998). This finding is independently supported by this recent published study. The response of Von Storch et al to the Wahl et al criticism was to argue that the method could still yield an underestimate of the low-frequency variability if a very large level of persistence (rho=0.71) was assumed to exist in the noise. This is an assumption about the data, not the method, and any climate reconstruction approach will fail to perform well under such assumptions. This is the first problem in Von Storch’s argument. The second problem is that there is no independent support for his assumption of such high levels of noise autocorrelation. The value assumed by Von Storch (0.71) a priori implies only a handful of statistical degrees of freedom over the calibration period, and assumes that proxy data selectively lose low-frequency climate signal relative to high-frequency climate signal by a variance ratio of more than 6 to 1. This is frankly absurd, which is what Ritson has indeed shown: the value assumed by Von Storch et al is completely inconsistent with the available evidence, which is that the noise autocorrelation is somewhere in the range of rho=0.15 to 0.3, implying a very minor loss of statistical degrees of freedom, and a very minor increase of low-frequency noise amplitudes relative to the high-frequency noise. –mike]
and what von Storch did was to propose a way (I assume he was the first to try this) to test them.
Indeed, it would be good to test with a range of values for persistence, to see whether a reconstruction method itself imposes an averaging or persistence on the data. The issue is not only to test against a realistic set but to find the limits of the reconstruction method, if any exist.
Second, if the year-to-year correlations vary between 0.15 and 0.30, it might be interesting to kick the can down the road and ask whether the persistence (the rate) itself is persistent. This, of course, ignores the problem of noise in the data sets, which might be the cause of the implied variation. What is the year-to-year variation in instrumental data, both locally and globally? Does persistence change under strong forcing? With that information, more realistic test sets could be generated.
Third, and this is a point I raised earlier in other venues, is the issue of telecommunication in the models used to construct the test climates, the reconstructed climates and the climate measurements. It appears to me that this is a much more stringent test of all three, and an issue that was only hinted at in von Storch, and seriously addressed in the construction of global temperature records. Ideally the results should match across the board.
[Response: I assume you mean “teleconnections”, and this indeed a key point in the discussion. Tests with model simulations are only useful if the large-scale teleconnection patterns in the model are reasonably faithful to those for the real climate. The erroneous initialization by Von Storch et al discussed in this previous RC post compromises any hopes for realism in terms of the teleconnection patterns in the model, because the pattern of long-term drift due to spinup problems is very different from the teleconnection pattern associated with real climate phenomena that are presumably important on the relevant timescales (ENSO, the forced pattern of response to volcanic, solar, and anthropogenic forcings, the NAO, etc.). By contrast, the NCAR CSM 1.4 simulation of the past 1000 years used in the more recent Mann et al (2005) study is likely to be far more faithful to the real climate in its teleconnection patterns, because there is no such long-term drift present in the model surface temperature field analyzed. –mike]
Fourth, the most interesting issue that von Storch raised was that the Mann reconstruction industry, or at least the early MBH ones, implies a very low climate sensitivity or an overestimate of previous forcing.
[Response: This is not the case, as long as the substantial uncertainties in both response (i.e., the temperature reconstructions) and forcing (e.g. solar, volcanic and anthro ghg+aerosol) are properly taken into account. The range of sensitivities indicated by these reconstructions is consistent with the range indicated by other evidence (e.g. the instrumental temperature record, the Last Glacial Maximum estimates, etc.), when uncertainties are taken into account. –mike]
How does this fit with the recent bounding of climate sensitivity within models and the historical record?
I will be pleased to see the reply from Hans Von Storch. I am looking forward to seeing his reply on RealClimate – if not, please post a link.
Is it possible for him to run his simulation taking into consideration the corrections you have presented here?
Comment by Klaus Flemloese, Denmark — 25 May 2006 @ 10:52 AM
So, ad hoc, von Storch et al. fold unphysical noise into their analysis, and get a result that disagrees with Mann et al. This verifies the old computer-science dictum, “garbage in, garbage out.” I agree with Don Strong; this also provides fodder for the AGW denialist “computer models have nothing to do with reality” talking point.
thanks for the comment. However, I’d like to read something on RealClimate about it. I was reading the article on the PIK homepage, but unfortunately the data provided does not suffice (at least for me). Would be nice to hear something by the professionals about it on this blog.
#11 & #12, I guess I am confused like Terry. In Ritson’s paper, the third equation down, it appears that the proxy data is differenced Y(j)=X(j)-X(j-1) ??? Phil
[Response: We’re checking with David Ritson for confirmation. However, the average raw lag-one autocorrelation coefficient for the full set of 112 (unprocessed) predictors used by MBH98 is rho=0.28 with a standard error of 0.03; if the 20th century is not included owing to the argument that the natural autocorrelation structure is contaminated by the anthropogenic trend, the value is lower, rho=0.245 +/-0.03. In either case, the inflation factor is minimal compared to what is assumed by Von Storch. -mike]
[Response:(update) David Ritson confirms that the procedure in question is exactly as specified in the linked attachment and is designed to find, within specified approximations, the AR1 coefficient that describes the proxy associated random-noise. As mentioned above, we ourselves independently find an AR1 coefficient (for the combined noise+signal) between 0.25 and 0.30 for the MBH98 network, close to the Ritson value and qualitatively lower than the value used by von Storch. –mike]
It is off-topic, but I have a question, and it would be great if someone could answer:
Does anyone have a link where the greenhouse effect is described in detail (with scientific facts and mathematical equations and things like that (I hope you understand what I mean))? I always find only very superficial treatments of the greenhouse effect. Would be great (sorry that my English is not very good).
[Response: The wikipedia entry on this is quite good, and provides links to other very good sources of information. -mike]
#11,#12 Mike, thanks for the clarification and update. Since you obtained similar but slightly higher results than D. Ritson, can I assume you differenced the proxies like Ritson, or did you obtain your results without differencing? Thanks, Phil
[Response: No differencing, precisely what was indicated above. To repeat again the point made several times above, the autocorrelation level of the signal+noise (i.e., the proxy), as easily verified with synthetic examples from model simulations, approaches the autocorrelation level of the noise component of the proxy, under the assumption of low signal-to-noise ratios. No differencing is required. -mike]
[Response: “Tropics are expanding” is a rather strange summary of what this very interesting paper actually means. Rasmus and I will be doing a post on it as “Part III” of our discussion of circulation changes that accompany global warming. –raypierre]
#22 Mike, Thanks for your quick response and patience. Phil
[Response: We’re happy to answer good faith questions such as yours here. Others, however, who seem to pose questions that gratuitously ignore answers already provided previously, are unlikely to see their comments get screened through. -mike]
Thanks for the long reply to #14. There are some points on which I still have questions. First, it would be useful to test reconstruction methods, which have no internal physics, against unrealistic data sets to explore the limits of the reconstruction method. I certainly was not defending v. Storch et al.’s data set. Second, while there is lots of noise in the data, if the implied temperature sensitivity of a reconstruction is at the low end of the most probable range, one has to seriously consider whether this is due to noise, a problem with the measurements, a problem with the reconstruction method, or a problem with climate models.
[Response: We’re only talking about “noise” here as a property of the proxy data, so it cannot indicate a problem with any particular reconstruction method or model. In the context being discussed, “noise” is an intrinsic property of the proxy data themselves. It is certainly nonetheless a worthy question to ask what the source of the “noise” is. In many cases, it’s a sampling problem: the ice core is only recording conditions at the time that snow is being deposited, the trees are only responding to growing-season conditions and can be influenced by threshold-dependent processes not easily represented by seasonal means, etc. -mike]
Finally, you left me hanging on the issue of teleconnections, does the NCAR model do a better job or has this issue not been considered. If not, is it worth considering?
[Response: As mentioned previously, the model certainly does a better job in describing physically based teleconnections to the extent that the variability in the model is dominated by realistic climate signals, e.g. ENSO and the NAO, which display meaningful spatial patterns of coherence in the climate field (surface temperature) of interest, whereas the variability in the ECHO-G simulation used by Von Storch et al is dominated by an unphysical pattern of long-term drift whose pattern may have no correspondence with any real climate signal at all. But aside from that, one can certainly ask whether different models do a better job in their large-scale teleconnections. For example, do the models exhibit realistic ENSO-like variability? The more realistic the model’s ENSO, the more realistic the interannual variability, since this is one of the principal climate phenomena that dominate coherent global-scale variability on these timescales in the real climate. Also relevant is the issue of whether the models in question exhibit realistic large-scale patterns of atmospheric response to the tropical SST perturbations associated with, e.g., ENSO, or to external radiative forcing changes (e.g. volcanic, solar, and greenhouse gas). There are many potential metrics by which the realism of a model’s variability (including its teleconnections) might be measured. This is the role of detailed intercomparison projects such as “AMIP” and “CMIP”, and similar ongoing intercomparison projects described in the peer-reviewed literature. There is no short, simple answer of the sort “this model has better teleconnections than that model” that would be satisfactory, however. -mike]
>23 – that CSM article is good. It answers JHM’s question, the ‘Tropics’/ ‘Temperate Zone’ boundary was defined as described — and they’re describing a global change. Someone also posted a link to the Mercury News’s less clear article in another thread.
CSM: “From 1979 to 2005, the highest temperature increases in the lowest layer of the atmosphere, the troposphere, occurred in vast swaths centered on 30 degrees latitude. Meanwhile the steepest cooling in the next layer of atmosphere, the stratosphere, occurred in these same regions. The net effect, researchers say, has been to nudge the average paths of swift rivers of air known as the subtropical jet streams farther north and south. These paths mark the meteorological border between the tropics and temperate regions….”
The “tropics” can be defined either astronomically or climatically.
The astronomical definition is the area between the Tropic of Capricorn and the Tropic of Cancer. This is defined by the Earth’s tilt, and changes by a few degrees over a 41,000-year cycle.
The climatic definition is something else; maybe a climatologist can fill you in there. But my guess is that it would be the warm, wet, low-latitude part of the globe in which frontal systems are a fairly minor part of the weather.
Think you’ll find that Russian guy’s name is Markov.
[Response: I believe that the translation of the Cyrillic is non-unique. One can find translations of the Russian name to “Markov”, “Markow”, and “Markoff”. I agree that “Markov” is more commonly encountered. -mike]
Why is it that Mann attracts such elaborately contrived criticism?
[Response: I’ll take this as a rhetorical question. -mike]
The Torn and Harte article was published today on the GRL site.
GEOPHYSICAL RESEARCH LETTERS, VOL. 33, L10703, doi:10.1029/2005GL025540, 2006
Missing feedbacks, asymmetric uncertainties, and the underestimation of future warming
M. S. Torn, J. Harte
Historical evidence shows that atmospheric greenhouse gas (GhG) concentrations increase during periods of warming, implying a positive feedback to future climate change. We quantified this feedback for CO2 and CH4 by combining the mathematics of feedback with empirical ice-core information and general circulation model (GCM) climate sensitivity, finding that the warming of 1.5-4.5°C associated with anthropogenic doubling of CO2 is amplified to 1.6-6.0°C warming, with the uncertainty range deriving from GCM simulations and paleo temperature records. Thus, anthropogenic emissions result in higher final GhG concentrations, and therefore more warming, than would be predicted in the absence of this feedback. Moreover, a symmetrical uncertainty in any component of feedback, whether positive or negative, produces an asymmetrical distribution of expected temperatures skewed toward higher temperature. For both reasons (the omission of key positive feedbacks and the asymmetrical uncertainty from feedbacks), it is likely that the future will be hotter than we think.
published 26 May 2006.
(Might this be a good topic for a new discussion??)
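The asymmetry the abstract describes follows from the standard feedback-gain relation ΔT = ΔT0/(1 − f): a symmetric uncertainty in the feedback factor f maps into a right-skewed distribution of warming, because the gain blows up as f approaches 1. A minimal Monte Carlo sketch (the values for ΔT0 and the distribution of f are illustrative assumptions, not numbers from Torn and Harte):

```python
import numpy as np

rng = np.random.default_rng(2)

# No-feedback (Planck) warming for doubled CO2, roughly 1.2 C
dT0 = 1.2

# Feedback factor f with a symmetric (Gaussian) uncertainty.
# The location and spread here are illustrative, not values from the paper.
f = rng.normal(loc=0.6, scale=0.1, size=100_000)
f = f[f < 0.95]                    # discard near-runaway samples

dT = dT0 / (1.0 - f)               # standard feedback-gain relation

med, mean = np.median(dT), dT.mean()
print(round(med, 2), round(mean, 2))   # mean exceeds median: right skew
```

Because 1/(1 − f) is convex, the upper tail of f stretches much further in temperature than the lower tail, so the expected warming sits above the median even though the uncertainty in f is symmetric.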
Re #28: I think an answer is useful, for new readers of the site at least.
It’s because the eye-catching graphic from Mann et al (the “hockey stick”; so-called because it shows an 850-year relatively flat graph of average Northern Hemisphere surface temperatures [the “handle”] followed by a sharply rising recent record [the “blade”] “likely” due to anthropogenic effects from circa 1850) was prominently featured by the IPCC Third Assessment Report (TAR) in 2001; see here. Arguably the graphic was over-featured relative to its importance to the TAR’s scientific conclusions, so that, along with the fact that paleodendrochronology is an endeavor with necessarily large error bars (see the grey areas in the graphic), made it the obvious point of attack for those wishing to undermine the TAR. (When I say over-featured, I *don’t* mean that the science was in any way exaggerated, but simply that it was the most impressive graphic available and so it got much more public exposure than other aspects of the TAR.) The irony of these attacks is that even a firm finding that natural variation made for a bumpier “shaft” (i.e., a warmer Medieval Warm Period and a colder Little Ice Age) wouldn’t undermine the TAR’s conclusions in the slightest; see here. Had there been no “hockey stick” available, some other aspect of the science inevitably would have become the major focus for politically motivated attacks on the TAR.
While I’m sure Mike has enjoyed his time in the sun :), the additional overwhelming evidence for anthropogenic global warming that has accumulated in the last five years has already served to radically reduce the ranks of the climate science deniers, and publication of the Fourth Assessment Report (AR4) should pretty well be the end of that chapter. People like Inhofe and Crichton will probably go to their graves claiming that climate science is all a giant conspiracy, and of course the science can never be perfect, but after the AR4 comes out in a year I suspect they’ll find themselves reduced to a status not much better than that of the Flat Earth Society.
With all respect to your work and with the risk of some misunderstanding because of my different scientific origin (engineering hydrology), I would like to make a few comments on your treatise “Deriving AR1 Autocorrelation Coefficients from Tree-Ring Data” that you link in your above post and seems to be the background document for the post.
[Response: Dear Prof Koutsoyiannis,
I agree with you that instead of talking about “simple Markoff processes” it would be better to say “stationary Markoff processes”. I should have added the condition that the record length be greater than (alpha+1)/(alpha-1) times the decoherence time. To put things in context, I was addressing a specific response of von Storch et al. to a comment by Wahl, myself, and Ammann; that response said it was realistic to believe that the proxy-specific noise could be simulated with AR1 noise of a given variance and an AR1 alpha of 0.7. This is not consistent with actual data. I was interested in your ideas on limits to more-complex-than-Markoff descriptions; I certainly would be interested in looking in a more leisurely way at what you have written and getting back to you. I am, however, not sanguine as to how much further the data permit meaningful, more complex descriptions.
I supply quick off the cuff answers(?) to your specific points below – David R.
1. In my opinion it is useful that the author of a scientific text underlines the hypotheses which he/she uses to derive the results — and not leave the reader to guess them.
[Response:Agreed, see above.]
2. Apparently you use the hypotheses of stationarity and ergodicity — the latter is obvious from your notation, which is in terms of time averages rather than ensemble averages. Of course these are not strong hypotheses and everybody uses them; however, one must always bear in mind that ergodicity has an asymptotic character (e.g. a stationary process is mean-ergodic if its time average tends to the ensemble average as time tends to infinity; Papoulis, Probability, Random Variables and Stochastic Processes, McGraw-Hill, 1991, p. 428).
3. You also use the hypothesis that the process X(j) (representing the growth-amplitudes) is an AR(1) process (please note that I have dropped your subscript _i_ to simplify notation as I am writing in ASCII mode). Even though you posit this hypothesis for a component of X(j) that you call the “noise amplitude”, it also becomes the case for the initial (undecomposed) process X(j), given that you assume equality in time for what you call the “slow component”. Such a hypothesis is a strong yet unjustified one; in my opinion there is no reason that nature’s signals should be AR(1).
[Response:No, I assume ONLY that the signal (climate temperatures) is slowly varying with time, and nothing else about its structure; i.e., s(j), s(j+1), and s(j+2) are close to equal.]
4. Loosely speaking, the AR(1) hypothesis is equivalent with an hypothesis that a single time scale (e.g. the annual) dominates in nature. But I am glad to see in your paper the recognition of a “signal component with comparatively large excursions over multi-decadal periods”. So I agree with you that, in addition to fluctuations on the annual scale, there exist fluctuations on over-annual scales. In the case that we follow a multi-scale thinking, a maximum entropy consideration will result in a non Markovian (non AR(1)) dependence, and most probably in a process with long-range dependence (LRD) or long term persistence (LTP). This I tried to show in my paper:
Koutsoyiannis, D., Uncertainty, entropy, scaling and hydrological stochastics, 2, Time dependence of hydrological processes and time scaling, Hydrological Sciences Journal, 50(3), 405-426, 2005.
[Response: I assume, along with everybody else, that the climate signal contains warm and cold periods, the Medieval Warm Period and the Little Ice Age for example. These may result deterministically from externals such as solar forcing.]
5. Even with a simpler thinking, just with the superposition of fluctuations on three time scales, e.g. annual, decadal and centennial, one arrives at a process that is virtually equivalent (meaning for lags as high as 1000 years) to a process with LRD. This I demonstrated in my paper:
Koutsoyiannis, D., The Hurst phenomenon and fractional Gaussian noise made easy, Hydrological Sciences Journal, 47(4), 573-595, 2002.
[Response:See answer to 4]
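Point 5 above is easy to illustrate numerically: summing AR(1) components with annual-, decadal-, and centennial-scale persistence yields a series whose autocorrelation decays far more slowly than a single AR(1) process matched to its lag-one value would predict. A sketch with illustrative alphas (not fitted to any proxy or hydrological data):

```python
import numpy as np

rng = np.random.default_rng(4)

def ar1(n, a, rng):
    """AR(1) series with unit-variance white innovations."""
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for j in range(1, n):
        x[j] = a * x[j - 1] + eps[j]
    return x

def autocorr(x, lag):
    """Sample autocorrelation at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

n = 300_000
# Annual-, decadal-, and centennial-scale components (alphas are illustrative)
mix = ar1(n, 0.2, rng) + ar1(n, 0.9, rng) + ar1(n, 0.99, rng)
# A single AR(1) process matched to the mixture's lag-one autocorrelation
single = ar1(n, autocorr(mix, 1), rng)

for lag in (1, 10, 50):
    print(lag, round(autocorr(mix, lag), 3), round(autocorr(single, lag), 3))
```

The two series agree at lag one by construction, but at lag 50 the mixture retains far more correlation than the matched Markovian process, which is the sense in which the superposition is “virtually equivalent” to long-range dependence.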
6. From a more philosophical — if you allow me to say — standpoint, viewing complex natural phenomena as AR(1) processes, which means Markovian processes, may be too simplified. Recall from the theory of stochastic processes that a Markovian process is by definition “a stochastic process whose past has no influence on the future if its present is specified” (Papoulis, ibid., p. 635). Thus, for me it is very difficult to imagine that only the present state of a complex natural system matters for its future and that we can drop our knowledge of its past. On the other hand, compared to a time independent (like head/tail outcomes in coin tossing) view of natural processes, in which even the present does not matter for the future, certainly a Markovian view is a progress.
[Response:We are discussing proxy-specific noise, factors such as disease. Higher-order effects, such as a tree that was diseased three years ago being more or less susceptible this year, are probably out of reach.]
7. I was able to verify the main result in your treatise, that alpha = 1 + 2 rho, where alpha is the lag-one autocorrelation of the process X(j) and rho is the lag-one autocorrelation of the process Y(j) as you define it. (Here I have used the notational convenience rho for the fraction in your penultimate equation — I hope that my understanding is correct that this is the lag-one autocorrelation.) In my opinion there is no need to do — as you did — a decomposition of the process X(j) into a slow component and a noise amplitude. I think that such a decomposition is fuzzy, subjective, and not necessary, because you can obtain your result without any decomposition (and without your accompanying assumption s(j) = s(j + n), which may not be justified). If one simply defines Y(j) := X(j) - X(j+1) (i.e. in terms of the actual process rather than the decomposed one) and also assumes an AR(1) autocorrelation function, one directly obtains alpha = 1 + 2 rho.
[Response:No, I am afraid you are missing an essential point. What one wants is the lag-one autocorrelation of the noise process e(j). For simulation purposes the AOGCM supplies the perfect proxies: temperatures, precipitation, etc. To get the simulated proxies, one must add in one’s best estimate (based on real proxy data) of proxy-specific noise, which is NOT provided by the AOGCM.]
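Whatever its relevance to the noise question, the algebraic identity in point 7 is straightforward to confirm numerically: for an AR(1) series X with lag-one autocorrelation alpha, the differenced series Y(j) = X(j) − X(j+1) has lag-one autocorrelation −(1 − alpha)/2, i.e. alpha = 1 + 2 rho. A quick check (alpha = 0.3 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)

# Long AR(1) series X with lag-one autocorrelation alpha (value is arbitrary)
n, alpha = 200_000, 0.3
x = np.zeros(n)
eps = rng.standard_normal(n)
for j in range(1, n):
    x[j] = alpha * x[j - 1] + eps[j]

y = x[:-1] - x[1:]                  # Y(j) = X(j) - X(j+1)

yc = y - y.mean()
rho_y = np.dot(yc[:-1], yc[1:]) / np.dot(yc, yc)

# Compare the sample value with the predicted -(1 - alpha)/2
print(round(rho_y, 3), -(1 - alpha) / 2)
```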
8. However, as I wrote above, the AR(1) hypothesis is a strong one and it would be better to avoid it. In this case, one can easily obtain that your relation alpha = 1 + 2 rho (equivalently rho = -(1 - alpha)/2) becomes rho = -(1 - 2 alpha + alpha2)/(2 - 2 alpha), where alpha2 is the lag-two autocorrelation of the process X(j). Your formula is a special case of the general one, obtained by substituting alpha2 = alpha^2 (i.e. assuming a Markovian process). Given that alpha2 is unknown in an approach such as yours, we cannot estimate alpha from rho. But we can estimate its upper and lower bounds. Assuming stationarity, we can put the restriction that the size-3 autocorrelation matrix of the process X(j) is positive definite. In this case, a positive determinant results in the constraint -1 + 2 alpha^2 <= alpha2 <= 1. From this constraint, using simple algebra, we can find an interval for alpha given the value of rho.
[Response:See above 7]
9. You may say that this interval of alpha is too wide and thus not helpful in an accurate point estimation of alpha. Well, this is the optimistic view. The interval for alpha would be that wide if we knew precisely the value of the lag one autocorrelation rho. But we only have a sample estimate of rho — thus the range of alpha is even wider. More specifically, in my paper:
Koutsoyiannis, D., Climate change, the Hurst phenomenon, and hydrological statistics, Hydrological Sciences Journal, 48(1), 3-24, 2003,
I have demonstrated that the classic estimator of autocorrelation (which you use) is highly biased if the process exhibits LRD. You will also find there citations showing that bias exists for the AR(1) process as well. In addition to bias, there is significant variability, and thus uncertainty, in the estimates. Therefore one should be very careful with such statistical calculations, because they entail bias and uncertainty, in contrast to typical arithmetic calculations.
[Response:Yes and no, see above]
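The small-sample bias mentioned in point 9 is easy to illustrate even in the simple AR(1) case, without invoking LRD. The sketch below uses synthetic data with arbitrary parameters; the comparison is against Kendall's classic first-order bias approximation, not against any of the papers cited:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.5     # true lag-one autocorrelation (illustrative value)
n = 50          # short record, comparable to many climatic series
trials = 5000

def lag1(z):
    """Classic sample lag-one autocorrelation."""
    z = z - z.mean()
    return np.dot(z[:-1], z[1:]) / np.dot(z, z)

estimates = np.empty(trials)
for t in range(trials):
    eps = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = eps[0] / np.sqrt(1 - alpha**2)   # start from the stationary distribution
    for j in range(1, n):
        x[j] = alpha * x[j - 1] + eps[j]
    estimates[t] = lag1(x)

# The classic estimator is biased low even for AR(1); a standard
# approximation for the bias (Kendall, 1954) is -(1 + 3*alpha)/n
print(estimates.mean(), alpha - (1 + 3 * alpha) / n)
```

The spread of the 5000 estimates also shows the sampling variability the comment refers to, on top of the systematic bias.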
10. Having some experience with statistical uncertainties, and particularly with the complex interactions of uncertainties (and the magnification of the total uncertainty) when one combines two or more random variables in a single expression, I would personally avoid calculating statistics of a process X(j) from the differenced process Y(j) = X(j) – X(j+1) (or, much worse, from a process involving differences of some subjectively defined components of X(j), as you did). You can check the magnification of uncertainty even with arithmetic calculations: assume, for instance, a pair of values X(j) and X(j+1) close to each other and attribute a certain percentage of uncertainty to each of the two. In this respect, I would prefer to base my estimations on the process X(j) per se and, in addition, to be as aware and careful as possible of the uncertainty and bias in statistical estimations, especially for processes which might exhibit LRD — a case not well covered so far in classical statistical texts.
[Response: What is available are the simulated perfect proxies and, separately, one's best estimate of PROXY-SPECIFIC noise. Unlike in most engineering applications, one is trying to create not the real world but an approximation to it. One then applies one's chosen algorithm (quite separate from anything I discussed) to see how well it functions to extract signals buried in a noisy environment. One is only validating signal extraction, NOT setting values for the signal. These are obtained when one applies one's analysis algorithm to real-world data.]
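The magnification of uncertainty under differencing described in point 10 can be checked with exactly the arithmetic example suggested there. The values 10.0 and 9.8 and the 1% uncertainty are arbitrary choices for illustration, assuming independent errors:

```python
import math

# Two nearly equal values, each with 1% relative uncertainty
x1, x2 = 10.0, 9.8
u1, u2 = 0.01 * x1, 0.01 * x2       # absolute uncertainties

diff = x1 - x2                       # the differenced quantity
u_diff = math.sqrt(u1**2 + u2**2)    # independent errors combine in quadrature

rel_out = u_diff / abs(diff)         # relative uncertainty of the difference
print(rel_out)                       # ~0.70: a 1% input uncertainty becomes ~70%
```

The closer the two values, the worse the magnification, since the denominator shrinks while the absolute uncertainty does not.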
Having posted a comment earlier (and waiting, and trusting, for it to appear on RealClimate), I take the opportunity to express my gratitude to RealClimate and its contributors for hosting and disseminating some of my thoughts and ideas. A couple of comments that I posted on another occasion were very kindly received and replied to on RealClimate (even though they may differ from the majority of ideas I read here) and were subsequently widely read and disseminated. I really appreciate this, particularly because I have experienced difficulties in publishing my ideas (including some of the articles I mentioned in my earlier comment; perhaps my style and thoughts in my earlier comment on this thread were influenced by very recent editorial experiences of the same negative type).
In addition, I wish to thank Professor Ritson because (albeit indirectly) he made two very important (in my opinion) points that perhaps I did not emphasize earlier: (1) that statistics and stochastic processes are important in climatic research, and (2) that we should recognize the “large excursions over multi-decadal periods” in climate and hydrology. Such a recognition is none other than a phenomenological (or physical, if you wish) view of what is called in stochastic processes long-range dependence (LRD), also known as long-term persistence (LTP), Hurst behaviour, scaling behaviour or long memory. Here I would like to say that the last term is, in my opinion, the worst of all, because it prompts people to think of a physical mechanism inducing long memory in climate or hydrology. And since such a mechanism is difficult to imagine, people are reluctant to accept this behaviour. However, if one (like Professor Ritson) recognizes excursions on multiple time scales and superimposes the excursions of the different scales, one obtains precisely the scaling (LRD) behaviour without the intervention of any weird long-memory mechanism.
Thanks for your kind reception of my comments and your responses. Based on your responses, I have the feeling that we can converge, at least partially. Therefore, I will place my emphasis not on the points where my views may differ from yours, but on the points on which I feel we can converge.
I am happy that you “assume along with everybody else that the climate signal contains regions of warmth Medieval warm period and Little ice-age for example.” This could be a good point of convergence. I also agree with your statement that “These may result deterministically from externals such as solar forcing.” But I wish to discuss it further, starting with your term “deterministically”. I hope you will agree with me that a specific storm that causes severe damage results deterministically from atmospheric dynamics. These dynamics are in fact the basis of the meteorological forecast of the storm, issued some days in advance. At the same time, nobody would accuse meteorologists of failing to predict the storm a year or a century earlier. Because of complexity and chaotic behaviour, we all recognize that such a long-term prediction may be impossible. Therefore in engineering, given that we have to design works that will last, say, a century, we use a probabilistic or stochastic approach to describe storms and construct what we call a “design storm”: a hypothetical severe storm with some pre-specified probability of occurrence.
We can extend this logic to other, simpler phenomena, e.g. the roll of a die. There is some deterministic dynamics in its motion; nevertheless we all say that the outcome of a die is random (cf. Einstein’s apothegm “God does not play dice”).
After this, I hope you will agree with me that the Medieval warm period and the Little ice-age are not MORE deterministic than the evolution of the mean daily or mean annual temperature. So if I have the right to use a stochastic description for the annual temperature, as you did with your proxies, I feel I have the right to use a stochastic description for over-annual fluctuations or excursions such as the Medieval warm period and the Little ice-age. Of course you may disagree. You may say that these excursions should be modelled not stochastically but only deterministically. In that case I will ask: Could you give me your deterministic dynamics for the variations of solar activity and their impacts on the atmosphere, and particularly on the global average temperature? Could you apply your deterministic dynamics to the past and hindcast the climate over the last 2000 years? Could you apply it to the future and forecast the climate over the next 2000 years? In these questions I deliberately used long periods, because we need long periods to observe such long-term fluctuations.
As you can see, by profession I have no problem using stochastic descriptions of natural phenomena. In fact I am very satisfied with the answers I get from my stochastic descriptions for engineering designs and for supporting water-management decisions. And in fact, in hydrology we follow the paradigm of physics. To my knowledge, in the late 19th century physicists abandoned the mechanistic paradigm and were thus able to develop disciplines such as statistical thermophysics (including the entropy concept, first put on probabilistic grounds by Boltzmann) and quantum physics. In both these disciplines probability plays a major role, replacing mechanistic concepts, explanations and analogues (e.g. Lavoisier’s subtle caloric fluid).
If we accept that one is allowed to use stochastic descriptions, the question is: Which stochastic description is appropriate for hydroclimatic processes, i.e. reproduces the Medieval warm period and Little ice-age, and the persistent droughts and floods of the Nile? (I mention the Nile because we have information covering many centuries; obviously such behaviours have been observed in other rivers as well.) A Markovian (AR(1)) description? I would say no. I have played a lot with several stochastic models and I think the simplest is a scaling model (also known as fractional Gaussian noise, among many other names — see my post at http://landshape.org/enm/?p=25).
Please allow me to say that a simple scaling stochastic model is not a complex description, as you characterized it in your first response above. It is a very simple description, in some respects simpler than the Markovian one. And it has a very simple interpretation: combine fluctuations or excursions on several time scales, and you get a scaling process. Remarkably, the resultant scaling process, obtained by combining different components, is simpler than the components themselves. But this may be neither a surprise nor a unique phenomenon: sum several random variables with weird distribution functions, and you get the extremely simple normal distribution — the central limit theorem (notably, the normal distribution also results from the maximum entropy principle, independently of the central limit theorem).
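The “combine fluctuations on several time scales” interpretation can be sketched numerically: superimposing a few Markovian (AR(1)) components with widely separated characteristic times yields a series whose aggregated variance decays much more slowly than 1/k, i.e. an apparent Hurst coefficient well above 0.5 over those scales. This is only an illustrative sketch; the time scales and series length are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2 ** 17   # series length (long enough to see the scaling range)

def ar1(tau, n, rng):
    """Unit-variance AR(1) series with characteristic time scale tau."""
    a = np.exp(-1.0 / tau)
    eps = rng.standard_normal(n) * np.sqrt(1 - a**2)
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for j in range(1, n):
        x[j] = a * x[j - 1] + eps[j]
    return x

# Superimpose Markovian fluctuations on widely separated time scales
x = sum(ar1(tau, n, rng) for tau in (2, 20, 200, 2000))

# Aggregated-variance method: variance of k-averages vs k;
# a log-log slope of (2H - 2) gives the apparent Hurst coefficient H
ks = np.array([1, 4, 16, 64, 256])
v = [x[: n // k * k].reshape(-1, k).mean(axis=1).var() for k in ks]
slope = np.polyfit(np.log(ks), np.log(v), 1)[0]
H = 1 + slope / 2
print(H)   # well above 0.5 over these scales, i.e. LTP-like behaviour
```

For independent (or short-memory) data the same diagnostic gives H near 0.5; the slow decay here comes purely from superimposing the component time scales, with no explicit long-memory mechanism.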
Having said this, it is a marvel to me that climatologists have been so strongly reluctant to adopt the scaling description for climatic processes. I marvel to read statements that long-term persistence is not “…a proper recognition of the physics and dynamics underlying the [climatic] systems…” and that “… a simple model [an AR(1) model] for climatic noise has the advantages that it is (i) motivated by the actual underlying physics (see e.g. Hasselmann, 1976…” (the quotations are from a review, apparently by a climatologist, that I received recently). A description that cannot reproduce important phenomena, such as the Medieval warm period, the Little ice-age and persistent multiyear droughts and floods, has been regarded as consistent with the “actual underlying physics”! At the same time, a description that can reproduce them lacks “a proper recognition of the physics and dynamics”!
I apologize if I have been verbose. I must stop here saying that my thoughts on these issues are presented in more detail in my (just published) paper:
Koutsoyiannis, D., Nonstationarity versus scaling in hydrology, Journal of Hydrology, 324, 239-254, 2006.
[Response: Your comments suggest a misunderstanding of a fundamental issue here, namely the distinction between stochastic and deterministic behavior of the climate. There is a fundamental difference between the underlying statistical behavior of climate forcings and the underlying statistical behavior of the climate response to a specified forcing. The stochastic model of AR(1) noise is only ever invoked to explain the unforced component of surface temperature variability. It is nonsensical to attempt to fit a stochastic model to the sum of both unforced and deterministic forced variability. The changes in mean temperature in simulations of e.g. the past 1000 years show that the low-frequency changes in hemispheric mean temperature can be explained quite well in terms of an approximately linear response to natural changes in radiative forcing (see our discussion here, and the additional reviews cited). This is analogous to the fact that the annual cycle in surface temperature at most locations can be described well in terms of an essentially linear response to seasonal changes in insolation. Obviously, the underlying statistical behavior of the forcings themselves on these two timescales is quite different. But it doesn’t matter, from the point of view of understanding the physics of the climate system, what the underlying statistical nature of the variations in forcing is. The response of the system to those changes in forcing is deterministic—any two realizations with small differences in initial conditions will converge, not diverge, in their trajectories with regard to e.g. the global or hemispheric mean temperature, and those trajectories are essentially linearly related to the changes in forcing themselves, at least over the time interval and the range of forcing changes on this timescale. Finally, none of this has any bearing on the statistical description of “noise” present in proxy climate records.
It is extremely difficult to reject the null hypothesis of weakly autocorrelated AR(1) red noise in this case. Why one would entertain highly elaborate models of “long-range dependence” and “random walk behavior” when such a simple null hypothesis cannot be rejected, is beyond me. –mike ]
[Response: If I may interject, David’s point is that if the non-climatic ‘noise’ in the proxy series can be modelled as AR(1), what is the likely magnitude of the correlation? He points out that it is much smaller than was recently assumed. Your statements relate to whether the whole series can be modelled as AR(1). These are obviously different issues. However, your statements above seem to imply that climatic series should be thought of as purely stochastic, with no deterministic component. This is not likely to be well accepted by most climatologists, because it would of course imply that there is no predictability of the climate response to any change in external conditions. That many climate changes can be understood in terms of changing solar forcing, volcanic eruptions, greenhouse gas changes, orbital forcing, etc. provides obvious counterexamples to this idea. Therefore the more accepted description is that climate time series consist of a deterministic component together with intrinsic variability and some ‘noise’. In your stochastic descriptions, I am unaware of how you can distinguish these different components, and thus make a claim about the characteristics of the intrinsic variability. As Mike indicates, I don’t think you can reject the simplest AR(1) hypothesis. – gavin]
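Ritson’s point, that at low signal-to-noise ratios the noise autocorrelation approaches that of the signal-plus-noise (proxy) series, can be checked with a toy synthetic-proxy example. The sinusoidal “signal”, the AR(1) noise parameter and the signal-to-noise ratio below are all arbitrary assumptions for illustration, not values from any of the studies discussed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Stand-in for a smooth climate "signal": a slow sinusoid (arbitrary choice)
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 500.0)

# Proxy-specific AR(1) noise with lag-one autocorrelation 0.3 (arbitrary)
a = 0.3
eps = rng.standard_normal(n)
noise = np.empty(n)
noise[0] = eps[0]
for j in range(1, n):
    noise[j] = a * noise[j - 1] + eps[j]

def lag1(z):
    """Classic sample lag-one autocorrelation."""
    z = z - z.mean()
    return np.dot(z[:-1], z[1:]) / np.dot(z, z)

# At low signal-to-noise ratio, the proxy (signal + noise) has nearly
# the same lag-one autocorrelation as the noise alone
proxy = 0.2 * signal + noise
print(lag1(noise), lag1(proxy))
```

Raising the signal amplitude widens the gap between the two autocorrelations, which is the sense in which the result holds only in the low-SNR limit.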
PS. May I ask you one clarification: When you speak of “… proxy-specific noise, factors such as disease …” I do not think that you imply that the time dependence is caused by diseases and such factors. Am I right?
I addressed my comment to Professor Ritson, and apparently, when I spoke of convergence I did not have in mind people who do not discuss the argumentation but avail themselves of the power to diagnose “misunderstanding” in their interlocutor (and do so very often, as I see even on this page), and also the power to characterize the simplest thing as “highly elaborate”. My experience was different in previous discussions on this blog, as I wrote earlier. But in principle, I do not think that such conditions, as given away by Mike’s response, favour the continuation of a dialogue. So I have to stop here, thanking you for hosting my earlier thoughts and for the kind responses (including Gavin’s latest one) and adding just two things:
[Response: Gavin and Mike identified precisely what is wrong with your argument. It is crystal clear from what you were writing above — see the references to AR(1) not being able to reproduce the Medieval Warm, etc. — that you were attempting to apply non-Markov scaling models to the whole record, signals and all, and not just the noise around the signal. It is a matter of some considerable dispute whether the actual climate record can be captured by a self similar scaling process (multifractal or otherwise) but even if it is — as is the case for homogeneous turbulence — scaling properties do not absolve one from the need to look at the underlying physical processes giving rise to that scaling, and do not prevent one from making use of what one knows about the physical behavior of the system. That includes things like ocean response times, radiation balance, solar forcing mechanisms, and yes, even things like the nature of the noise introduced by tree diseases on climate proxies. Your assertion that climate scientists have been reluctant to make use of scaling is just bizarre. Scaling laws are used in characterizing clouds and boundary layer turbulence all the time and naturally in your own field (hydrology) it is one of the standard tools. For that matter, if you look at Ritson’s contributions in Phys. Rev. Letters you’ll see quite a firm understanding of the application and misapplication of scaling to large scale climate processes. It’s not scaling processes themselves that get resistance. It’s the overselling of scaling arguments — the sloppy statistics that lead some to see scaling where none exists, the unsupported claims for “universality,” and the assertions that finding scaling somehow replaces the need for an understanding of the actual physical processes governing the climate system. –raypierre]
(1) Gavin, I do not imply that “there is NO predictability of climate response to any change in external conditions”; I just imply that there is uncertainty (for the response as well as for the change of external conditions) and thus necessity for stochastic descriptions. And I do not think that stochastic descriptions imply “NO deterministic component”; in contrast, they take into account all known deterministic components (cf. the description of the annual cycle by cyclostationary stochastic models).
[Response: I don’t think I’ve ever claimed that stochastic descriptions are not important. My only point concerned how statistical descriptions of paleo-data can’t be used to extract the purely intrinsic component of variability. – gavin]
(2) Mike, you may well believe that you understand and other people misunderstand, but it may be very difficult to prove it because understanding is not an algorithmic process (as articulated by Sir Roger Penrose).
[Response: For those who haven’t read Penrose’s book, The Emperor’s New Mind, the assertion is that a combination of quantum sensitivity, chaos, and general relativity operating at the unimaginably small Planck scale combine to defeat Gödel’s theorem, allowing the brain to perform marvels of understanding and inference that would be inaccessible to purely algorithmic methods. Invoking Penrose in order to show that Mike can’t prove he understood what other people wrote has to count as one of the most creative ways I’ve ever seen of avoiding responding to a substantive criticism. –raypierre ]
Why is it that Mann attracts such elaborately contrived criticism?
It’s curious that he alone attracts such critics, since there are several independent studies that show the same results. I guess it’s just the case that “every prospect pleases/and only Mann is vile.”
Apologies to Reginald Heber’s From Greenland’s Icy Mountains–Missionary Hymn
From the first hit, the most recent paper as mentioned earlier in this thread:
“…what about the changing mean case of Figure 2(b)?
Obviously, the interpretation and modelling of the changing mean affects seriously the design and management of hydrosystems. Here two different modelling approaches have been followed: (1) the nonstationarity approach, and (2) the scaling approach. These are the theme of this paper and are discussed in detail in the next two sections. As will be demonstrated, the two modelling approaches differ dramatically in their logic and interpretation of natural phenomena, and most importantly, in the estimates of uncertainty they produce, which according to the second approach are much higher than in the first. Thus, the comparison of the two modelling approaches, which is the target of the paper, is not merely a theoretical issue but it has significant practical consequences…”
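The quoted paper’s claim that the scaling approach produces much higher uncertainty estimates can be made concrete with the standard comparison used in this literature: for a simple scaling (fGn) process the standard error of an n-year mean decays like n^(H-1) rather than the classical n^(-1/2). The Hurst coefficient H = 0.8 below is a typical illustrative value for hydroclimatic series, not a number taken from the paper:

```python
import math

# Standard error of an n-year mean under the two descriptions
sigma = 1.0   # annual standard deviation (units arbitrary)
n = 100

classical = sigma / math.sqrt(n)    # independent, or short-memory, years
H = 0.8                             # illustrative Hurst coefficient
scaling = sigma * n ** (H - 1)      # simple scaling (fGn) process

print(classical, scaling)   # the scaling uncertainty is roughly 4x larger here
```

The gap grows with both n and H, which is why the choice between the two descriptions has practical consequences for design and management, as the excerpt says.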
Looking a bit more into this, I actually think the hydrology issues merit a thread, if the hosts want to have one dedicated to this. I’ve asked grader operators and road culvert engineers around forest sites what they’d do to put a foot of topsoil back on a mountainside — after a Forest hydrologist had looked at the lichen on the rocks and said “a century ago, this had about a foot of topsoil — now it has about two thirds of an inch.”
Adult lichen, all about the same size, covering the rocks down to about a foot above the current top of the dirt; in that last foot, lichen of decreasing size down to tiny dots half an inch above the dirt, then bare rock. So the topsoil went away over a century, and the lichen grew as the rock was bared.
None of the road engineers had even thought about the question (though each found it interesting), nor had they heard what the hydrologist had told me about the mountainside.
It’s a practical discipline without time to wait; answers are given all the time. So how they decide starts to be a real concern, if capturing carbon by restoring topsoil is to become a normal task.
David Ritson is only going to be available on a limited basis over the next few months, but will be happy to respond directly to any further queries when available. See link above for contact information.