RealClimate

Comments


  1. Setting aside temporal autocorrelation for now, Cohn and Lins should reject their null hypothesis that temps can follow random walks in an unconstrained way, because multicellular life would have been wiped out by now if there were no negative feedbacks. Then they should construct a new null hypothesis which includes some feedbacks. What would be the nature of those feedbacks? They would have to build something like today’s GCMs to encompass them.

    Comment by Steve Latham — 16 Dec 2005 @ 5:29 PM

  2. Meteorologists make short-term predictions. What about long-term? Can you predict the location of a thunderstorm a few years from now? A few months?

    [Response:Short-term predictions depend more on the initial conditions, whereas long-term predictions depend on changes in the boundary conditions, the forcings. You cannot predict the exact weather (atmospheric state) because of the chaotic nature of the atmosphere, so a thunderstorm cannot be predicted. It may, however, be possible to predict the frequency of future thunderstorms, given suitable models. It's a bit like the seasonal cycle: I can easily predict that the summer will be warmer than current (winter) conditions, but I cannot say much about a specific date during the summer. -rasmus]
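
    As a rough illustration of the distinction drawn in the response above - sensitivity to initial conditions versus robust long-run statistics - here is a minimal sketch using the classic Lorenz-63 system. The parameter values, step size, and perturbation below are conventional illustrative choices, not anything taken from the original post.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Time derivative of the classic Lorenz-63 system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.01, n_steps=100_000):
    """Fixed-step RK4 integration; returns the full trajectory."""
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        k1 = lorenz63(state)
        k2 = lorenz63(state + 0.5 * dt * k1)
        k3 = lorenz63(state + 0.5 * dt * k2)
        k4 = lorenz63(state + dt * k3)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj[i] = state
    return traj

a = integrate(np.array([1.0, 1.0, 1.0]))
b = integrate(np.array([1.0, 1.0, 1.000001]))  # tiny initial perturbation

# The two "weather" trajectories end up completely different...
print("final-state difference:", np.abs(a[-1] - b[-1]))
# ...but the "climate" (long-run statistics of the attractor) barely changes.
print("mean of x:", a[:, 0].mean(), "vs", b[:, 0].mean())
print("std of x: ", a[:, 0].std(), "vs", b[:, 0].std())
```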

    I’m not a Global Warming skeptic, I’m just angered by how the scientific community is handling this subject. G.W. (Global Warming) is too politicized; by that I mean the scientists who believe in G.W. always interpret their evidence to agree with their point of view, and skeptics of G.W. interpret their evidence to agree with their point of view. Also, the alarmists (G.W. believers) are supported by politicians and environmentalists, because they know that the study they are supporting would agree with their opinions. Same with the skeptics: they are paid by oil and industrial companies because the study would favor their reasoning. Global Warming is happening, but it’s a scientific circus.

    [Response:The issue has been politicised, but my view is that it's because of people outside the scientific community. Think about the scientists (like me) who get caught in this 'cross-fire' (I honestly did not envisage this when I entered the field of climate science - call me a nerd if you like). It has really been frustrating to see our field become distorted through the media, and this is one reason why RC came about. Here at RC, we aim to stay out of the political aspects and focus on the scientific issues. For the political aspects you should visit Prometheus. -rasmus]

    Comment by Vahan Hartooni — 16 Dec 2005 @ 6:20 PM

  3. Perhaps I’m not getting it. Objects the size of baseballs do not just randomly leap into the air from a position of repose. Fires do not just randomly start in the absence of fuel. Rain does not just randomly fall from a clear blue sky. A multi-decade trend in global temperature must have some physical cause. The increased energy has to come from somewhere, be it natural or anthropogenic or a combination of both. An increase in CO2 must come from somewhere; an increase in solar radiation must come from somewhere. How can a measured increase in CO2 or solar radiation or planetary albedo be “random”? What does random even mean in such a macroscopic context? Perhaps this is why I’m not getting it.

    Doug Watts

    [Response:You are getting it! I think it's others who don't. The universe is not random. Physics rules. -rasmus]

    Comment by Douglas Watts — 16 Dec 2005 @ 6:28 PM

  4. Re #3 I have to say that I agree with Douglas Watts. Scientists seem to believe in these natural cycles, but they must have a cause. They seem to admit that El Nino exists, but deny the tidal atmospheric effects of the sun and the moon. This shows that scientific opinion is driven more by public opinion than by the facts.

    Cheers, Alastair.

    [Response:???. But atmospheric and oceanic tides are well-established now, and explained in terms of physics. El Nino also has a theory (or several). -rasmus]

    Comment by Alastair McDonald — 16 Dec 2005 @ 7:21 PM

  5. Re #2; it is reasonable to conclude that at least one side of the debate is behaving inappropriately.

    However, it is not so easy to conclude that both sides are unreasonable. One side may be advancing positions so far outside what is supported by the evidence, that the other side is essentially forced to be adamant about what is supported by the evidence.

    In that case, the unreasonable side will have achieved a public relations victory if you perceive both sides as comparably unreasonable.

    Consider the case of evolution, or the case of the health impacts of tobacco. Organized groups spend a great deal of effort raising doubt and confusion in opposition to sound scientific conclusions. The only way to decide these questions is to appeal to the evidence, and this is difficult in a case where one side is obfuscating the evidence.

    This sort of polarization does not occur very often within the scientific community itself. Rather, if you see something like this going on, one side usually has the agreement of the scientific community, and typically that side, though not by any means infallible, has dramatically more useful and informed opinions. The other side will be expressing commercial interests and/or strongly held philosophical preconceptions but will not be expressing science.

    In such cases, every member of the public who perceives a moral equivalence between the two groups represents a defeat for truth.

    If you don’t have the time to investigate the evidence for yourself you are best off looking to established scientific groups in related disciplines for advice on where the evidence truly points.

    [Response:You only have to trawl the scientific literature! Then you may ask if the scientific community is reliable. Think about the state of our modern civilisation: what would it have been without scientific progress? I would argue that many things taken for granted in our modern society have been piggybacked by science. Science (here used in a wider meaning including engineering) has also formed our culture and enabled you guys to read this blog. Nevertheless, this is a far cry from the argument that the global mean temperature behaves like a random walk. -rasmus]

    Comment by Michael Tobis — 16 Dec 2005 @ 7:54 PM

  6. Rasmus wrote “It’s natural for molecules under Brownian motion to go on a hike through their random walks (this is known as diffusion), however, it’s quite a different matter if such behaviour was found for the global planetary temperature, as this would have profound physical implications.”

    Well!

    Global warming is caused by the emissions from greenhouse gases which radiate based on their excitation, not on their temperature. Brownian motion proves that atoms exist. It does not explain the emissions from greenhouse gases which need quantum mechanics to understand how they operate.

    [Response:Right, there are two aspects to this radiation: the continuum associated with the atoms' kinetic energy and the band absorption associated with the atomic electron configurations. The excitation of the molecules is caused by absorption of a photon, as they cannot keep losing energy through radiation without gaining some. Quantum physics determines what the electronic levels are, i.e. at which frequencies the line spectra are. But, in the real world, the lines broaden to frequency bands, due to several complicating factors. -rasmus]

    Comment by Alastair McDonald — 16 Dec 2005 @ 7:55 PM

  7. The paper at hand seems to draw an analogy between Brownian movement and natural variation in the Earth’s climate and suggests researchers adopt as the null hypothesis that observed climate changes are due to “random” forces akin to those ascribed to Brownian movement (or the ‘drunkard’s walk’).

    George Gamow gave a nice illustration of how Brownian movement of air molecules could conceivably cause all of the air in a room to collect in one corner of the room, thus suffocating the person sitting in a chair in the opposite corner of the room reading Gamow’s book (One, Two, Three … Infinity). Gamow then calculated the probability of this occurring, which was exceedingly small, and therefore, provided a good fit with empirical observation.

    [Response:The probability of this happening is infinitesimally small, so for practical purposes it can be regarded as an impossibility (unless you are a fan of the Hitchhiker's Guide to the Galaxy). -rasmus]

    Consistent with Gamow’s line of reasoning and evidence, one can say it is possible a large, documented variation in climate is “random” in Gamow’s sense of the word, but the probability of such a purely random event actually occurring on Earth is very low (i.e. about the same probability as all of Earth’s atmosphere collecting in one corner of the Earth). As such, the use of such an event as the null hypothesis seems to be stretching the matter rather thin.

    Moreover, the inception of a purely “random” cause for observed climate trends on Earth seems to represent a confusion of logical type. Brownian movement at the molecular scale is not caused by forces unknown to science. The forces are very well known. However, the collisions are so numerous as to make predictions of exactly where one molecule will end up after 5 minutes so difficult that we call this motion “random.” But even within this “random” pattern of molecular movement at the microscopic scale, we can, like George Gamow, safely presume the unlikelihood of all of the air molecules in a room randomly collecting in one small corner and suffocating us.
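
    For a sense of the numbers behind Gamow's illustration: if each molecule independently has probability f of being found in the chosen corner, the chance that all N of them sit there at the same instant is f^N. A back-of-the-envelope sketch, where the room size and the 'corner' fraction are arbitrary assumptions rather than Gamow's actual figures:

```python
import math

# Assumed illustrative numbers: air at room conditions has roughly 2.5e25
# molecules per cubic metre, the room is ~30 m^3, and the "corner" occupies
# 10% of the room's volume.
n_molecules = 2.5e25 * 30
corner_fraction = 0.1

# Probability that every molecule is in the corner at the same instant:
# corner_fraction ** n_molecules, best handled in log space.
log10_p = n_molecules * math.log10(corner_fraction)
print(f"P(all molecules in the corner) ~ 10^({log10_p:.3g})")
# ~10^(-7.5e26): not strictly zero, but negligible on any physical timescale.
```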

    Empirical observation supports this. The sun cannot “randomly” increase its total energy output because its output is due to hydrogen fusion and there is no known physical explanation for how the sun could suddenly burn hydrogen in its core at a lower rate, or how the energy output of each fusion reaction could suddenly become greater or less. If empirical observations showed an increase in solar radiation over time, it would be inappropriate to use as a null hypothesis that this change is “random” in the sense that this change has no physical explanation or cause.

    A similar argument could be made for the orbits of the planets. If Mars suddenly flew out of its orbit and sped out of the solar system, few astronomers would cite “randomness” as the null hypothesis for this phenomenon. At the galactic, solar and planetary scale very few, if any, phenomena occur “randomly.” Stars with plenty of hydrogen left in their cores to burn do not just “randomly” stop fusing hydrogen into helium at a given moment (or alternately, become supernovae).

    Oceans do not just “randomly” increase 10C in temperature or precipitate all of their dissolved salts onto the ocean floor. Ice sheets 100,000 years old do not just “randomly” melt in a few decades. Asteroids do not “randomly” crash into Earth. Rather, physical forces require that asteroids crash into Earth if their orbital path coincides with the position of the Earth. In fact, if “randomness” actually existed at the megascopic, planetary scale we would have to envision an asteroid heading straight to Earth and randomly “jumping” out of its trajectory at the last possible second in defiance of gravity.

    This is why I think the use of the “randomness” concept drawn from Brownian movement at the molecular level is inappropriate for planetary, megascopic phenomena like climatic trends and this use is not supported by empirical observation. If I’ve made a mistake here, I would welcome a correction. Thanks

    [Response:I agree with you! -rasmus]

    Comment by Douglas Watts — 16 Dec 2005 @ 9:08 PM

  8. The issue of autocorrelation (random walk is a very particular form of autocorrelation) has been discussed in detail on the Climate Audit website. It’s something of a straw man to argue against temps as a complete random walk. The question is whether there is some autocorrelation character to the data. Comparison versus ARIMA models suggests that temps are not iid (independent noise) or iid on a trend. Various physical rationales (El Nino, ocean effects, damage to trees (for proxies)) suggest themselves to explain the degree of autocorrelation. Also, those who want to assume that the data is iid ought to put a little onus on themselves to prove it…rather than to expect the McIntyres of the world to disprove the converse. After all…it is the iiders who are advancing temperature reconstructions with various assumptions of iid in the standard deviations and such.

    [Response:This is why it's best to use control integrations with GCMs to obtain null distributions. -rasmus]
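
    The response points to control integrations as the way to obtain null distributions. A toy analogue of that idea, with an AR(1) series standing in for unforced internal variability from a control run; the persistence parameter, segment length, trial count, and 'observed' trend are all arbitrary illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_surrogate(n, phi, sigma=1.0):
    """AR(1) series x[t] = phi * x[t-1] + noise, a stand-in for unforced
    internal variability (a 'control run' with no external forcing)."""
    x = np.zeros(n)
    eps = rng.normal(0.0, sigma, n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

def linear_trend(x):
    """Least-squares slope per time step."""
    return np.polyfit(np.arange(len(x)), x, 1)[0]

# Null distribution of trends in 100-step segments of unforced variability.
null_trends = np.array([linear_trend(ar1_surrogate(100, phi=0.6))
                        for _ in range(5000)])

observed_trend = 0.05  # hypothetical observed trend, in the same arbitrary units
p_value = np.mean(np.abs(null_trends) >= abs(observed_trend))
print(f"two-sided p-value against the internal-variability-only null: {p_value:.3f}")
```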

    Comment by TCO — 16 Dec 2005 @ 10:11 PM

  9. adding to #7: It seems to me that the analogy between Brownian motion and weather/climate is that next month’s weather is random in the Brownian sense, but the climate must obey the laws of physics, just like a gas obeys PV = nRT.

    [Response:Not random, but chaotic and unpredictable. -rasmus]

    What happens to the random walk of a smoke particle if we increase temperature of the system? I don’t know, but I do know that, volume held constant, the pressure will increase. And, barring some as yet undiscovered negative feedback, I have a pretty good idea what will happen as we add GHGs to the atmosphere.

    [Response:The molecules' kinetic energy - their speed - would increase. The mean free path would be the same if the density is the same. But this is not really directly relevant for the issue that I discussed. -rasmus]

    Comment by Tim McDermott — 16 Dec 2005 @ 11:01 PM

  10. Re: “Take an analogy: how the human body works, consciousness, and our minds. These are aspects the medical profession does not understand in every detail due to their baffling complexity, but medical doctors nevertheless do a very good job curing us of diseases, and shrinks heal our mental illnesses.”

    You may wish to choose different analogies; if consciousness and our minds are truly “well understood,” some Nobel prizes are clearly in order! As for doctors curing us of diseases, I think we can agree that the vast majority of infectious disease cures arise from imitating nature (e.g. antibiotics, immunizations) rather than being created de novo based on our understanding of the body. The development of statins for artery disease does fit your analogy, but most other chronic disease treatments seem to be derived from natural sources, trial and error, or just treat symptoms rather than the underlying problem. Finally, and most unfortunately, our ability to heal mental illnesses still lags far behind our ability to heal physical illnesses.

    [Response:OK. I'm no medical doctor. -rasmus]

    Comment by Armand MacMurray — 17 Dec 2005 @ 1:55 AM

  11. Re: “If we did not understand our atmosphere very well, then how can a meteorologist make atmospheric models for weather forecasts?”

    The persistent inability to correctly forecast whether it will rain tomorrow makes this argument a hard sell. Certainly, much of the problem is a lack of detailed-enough knowledge of initial conditions. However, a lack of modeling topography, local water/atmosphere interactions, 3-D patterns of cloud cover, and so forth likely play major roles. Without a “gold standard” model that is able to reliably predict future conditions (and thus cannot possibly have been tweaked to produce a desired answer, even inadvertently), one cannot show that all the relevant factors have been understood and included in the model.

    [Response:Re: hard to sell: If this were not true, why do you think most countries keep running those weather models several times a day, every day of the year? -rasmus]

    Perhaps part of the issue is the definition of “well understood”. Naively, one would expect that “well understood” means one can robustly predict future conditions. The fact that certain GCMs can produce realistic features such as ENSO is certainly extremely promising. However, if it is not understood why other models do not produce a realistic ENSO, that would argue against a good understanding of ENSO, and so forth. Again naively, I would expect that a “well understood” climate system would imply a good understanding of the sign & magnitude of cloud effects on the climate, and an ability to model those.

    [Response:We have some idea about this -rasmus]

    Rather than dealing with malleable and easily-misunderstood phrases like “well understood” and “not well understood”, it would be useful to specify the list of real-world climate system features (both positive, e.g. ENSO and negative, e.g. lack of runaway heating/cooling) that a realistic model would reproduce, and to refer to our current understanding in terms of which features are understood/reproduced correctly and which are not.

    [Response:As I stated, there are some shortcomings. There is also the issue of the range of scales, from microscopic to planetary, and the question of how to take all these scales into account in a computer code. Tell me another field which runs model predictions as extensively as weather prediction. -rasmus]

    Comment by Armand MacMurray — 17 Dec 2005 @ 2:31 AM

  12. Dear Dr. Benestad,

    I think you are to be commended for bringing this article to the attention of RC readers. I hope anyone who’s not a member of AGU will buy it (for the modest price of US$ 9.00) and read it.

    With all due respect, your argument against the thesis of this article seems circular. You argue against the statement in the Cohn article that “the climate system is not well understood” by saying climate scientists make “complex models – GCMs – that replicate the essential features of our climate system” that “provide a realistic description of our climate system”. This would seem to be (at least) begging the question. Citing the models meteorologists use to make weather forecasts would not seem to inspire confidence in predictions by models of persistent long-term trends. Having a model that looks correct for even a decade may be analogous to a stopped watch being right twice a day when talking about the millennium time scales usually seen in climate systems.

    You start your article by questioning whether current perceived trends are happening by chance. However, that’s not Dr. Cohn’s argument. As he puts it in his conclusion: “powerful trend tests are available that can accommodate LTP”, and therefore he finds it “surprising that nearly every assessment of trend significance in geophysical variables published during the past few decades has failed to account properly for long-term persistence”. He further concludes: “These findings have implications for both science and public policy. For example, with respect to temperature data there is overwhelming evidence that the planet has warmed during the past century. But could this warming be due to natural dynamics? Given what we know about the complexity, long-term persistence, and non-linearity of the climate system, it seems the answer might be yes. Finally, that reported trends are real yet insignificant indicates a worrisome possibility: natural climatic excursions may be much larger than we imagine. So large, perhaps, that they render insignificant the changes, human-induced or otherwise, observed during the past century”. You may disagree with Dr. Cohn, but his arguments need to be taken into account (and surely will be, being published in the prestigious GRL).

    [Response:Dear Geoff. This issue is known as 'attribution' and is a wide topic in climate research. I think that simple statistical models are not adequate for proper attribution, but that control simulations with GCMs are needed. It's important to take physical considerations into account and to get both the physics and the statistics right. -rasmus]

    Comment by Geoff — 17 Dec 2005 @ 8:41 AM

  13. “However, it is not so easy to conclude that both sides are unreasonable.”

    Science is neutral, so one side is just skewing reality. Uncertainties exist, but the naysayer argument fails on its face. How can both sides be culpable when so much data indicate warming is occurring at record levels? What we have here is a political shell game, and science can lose that because of he-said-she-said reporting. Sure, one side consists of NASA and other top climate specialists, but then Bjorn Lomborg, a social statistician, comes along and poof, reality is out of the public window. It’s amazing and insidious.

    Comment by Mark A. York — 17 Dec 2005 @ 8:50 AM

  14. Sorry, I only read the first part & am jumping in perhaps too fast. In the social sciences there is a big difference between experiments and, say, surveys, that use stats. The latter relies a lot more on good theory (your physics) for understanding and interpretation.

    Since our experiment with planet earth is only in its initial phase, we have to rely on all sorts of stats that really need a lot of good theory – good physics.

    Now, we could just complete the experiment — pump as much GHGs into the atmosphere as we possibly can (I think we have 200+ years left of coal, which we might be able to burn in 100 years if we really try hard) — and see who’s right. Or, we can use the best stats AND theory (& geological knowledge) we have to date, figure out what’s happening & work to prevent what looks to be the mother of all disasters (from our 2 million year human perspective).

    Comment by Lynn Vincentnathan — 17 Dec 2005 @ 12:17 PM

  15. Just an aside regarding comment #10. In support of the analogy: we understand bodies quite well and things like many infectious and chronic diseases can be diagnosed with great success. Curing those things is the hard part. I think the science suggesting that anthropogenic GHGs are warming the planet is like the diagnosis; getting humanity to stop dumping so much into the atmosphere is the cure (there are other ways to treat the symptoms, maybe) and is more difficult.

    Comment by Steve Latham — 17 Dec 2005 @ 2:44 PM

  16. I have recorded the 3-day forecast every day this week on my home page – er, not a scientist – but every day it forecast rain for the next 3 days, and every day here in Llandudno (where the forecast was for) we have had blue skies and sun. These models based on statistics and physics aren’t much cop if you cancel your holiday because of them; on the other hand I saved my money by staying here in the sun. I don’t know where the rain clouds went, but something was right and wrong, and I was right to be suspicious as this has happened before. Don’t worry – the doctors can’t cure my mental illness but it disappears with medication – and YES I have taken it!

    [Response:Keep in mind that the skill of a forecast is often limited by politics, i.e. how many resources are going to be spent getting the best initial conditions - that means how many real-time observations to incorporate in the observational network (from weather stations to satellite programs), how much computing resource to make available several times every day, and so on. One such consideration is the model resolution, which has implications for the question of whether the forecast will be right for every place within the model's grid box (~10 by 10 sq. km). Nevertheless, I would argue that the forecasts in general are quite good - otherwise we wouldn't have operational weather forecasts. I think it's also fair to say that people tend to remember when the forecasts miss rather than when they are correct. -rasmus]

    Comment by APRIL STEWARD — 17 Dec 2005 @ 3:56 PM

  17. … “there was slow global warming, with large fluctuations, over the century up to 1975 and subsequent rapid warming of almost 0.2°C per decade”. http://data.giss.nasa.gov/gistemp/2005/

    GISS NASA Figure 1 (a) shows a 5 year trend line for global surface temperature anomaly (1880 to 2005).

    The trend line hits 0.0 in 1937 and again in 1976.

    Between 1937 and 1976:
    max ~ 0.1 Deg C in 1942
    min ~ -0.05 Deg C in 1965

    I agree that “there was slow global warming, with large fluctuations, over the century up to 1975 and subsequent rapid warming of almost 0.2°C per decade”, as said in the link above.

    Many people want to know what caused the minor max and min between 1937 and 1976. Some scientists have said that the slight cooling in mid century can be attributed to aerosols.

    I think ENSO explains both the max and min between 1937 and 1976.
    El Nino dominated the 1930s-1940s.
    La Nina or neutral conditions dominated the 1950s-1960s.

    Comment by Pat Neuman — 17 Dec 2005 @ 5:06 PM

  18. Re: response to #11: “Tell me anouther field which runs model predictions as extensively as weather prediction.”

    I think we’d all agree that popularity isn’t really the best measure of effectiveness. Most of the hair regrowth treatments out there don’t work, yet are still popular. :)

    My point is that it would be very useful to have a list of agreed-upon climate features that any “gold standard” model should be able to reproduce, without reproducing any non-realistic features. Each model would then have its own “scorecard,” which would give a rough idea of how close it was to the desired “gold standard.” The predictions of a 49/50 model would inspire more confidence than the predictions of a 30/50 model. Have there been any (formal or informal) movements in the modelling community to establish such benchmarks?

    [Response:I wouldn't know much about hair growth products, but I am convinced that the field of meteorology is well-established and the forecasts useful. Science in general - climate science has a common base with the general sciences in terms of physics & chemistry - has also proved to be successful in terms of advancing our civilisation. Regarding the "gold standard", there exists one: the CMIP2 and the most recent integrations with the climate models done for the next IPCC report. One such "gold standard" is the climate sensitivity. -rasmus]

    Comment by Armand MacMurray — 17 Dec 2005 @ 5:22 PM

  19. Anybody notice in this hottest year on record that the temperature in the Northern Hemisphere has been essentially flat for the last three years?
    Why isn’t the trend accelerating? Could there be, gasp, negative feedbacks as yet not defined?

    Comment by joel Hammer — 17 Dec 2005 @ 8:04 PM

  20. “Recent warming coincides with rapid growth of human-made greenhouse gases. Climate models show that the rate of warming is consistent with expectations (5). The observed rapid warming thus gives urgency to discussions about how to slow greenhouse gas emissions (6)”.

    http://data.giss.nasa.gov/gistemp/2005/

    The observed rapid warming gives more than an “urgency to discussions about how to slow greenhouse gas emissions”, it gives urgency to cut greenhouse gas emissions in any way possible.

    Comment by Pat Neuman — 17 Dec 2005 @ 11:34 PM

  21. #19 Joel, GT temperatures are not flatlining; they were slowly increasing, with a bit of a faster rate by now. The Northern Hemisphere in 2005 had 6 months with monthly anomalies above +1 degree C. 2004 had 3, 2003 had 3, famous 1998 had 3. All previous years combined from 1997 going back to 1880 had no months above 1 C. Furthermore, multiple other independent fields of research have shown the same thing. My own research, the oblate sun method, saw 2005’s all-time high temps coming through stunning observations of consistently, vastly expanded sun disks measured in the winter-early spring of 2005; statistics usually suggests that the sun disk may vary wildly, but not at one point, in an expanded streak never seen since the beginning of my observations 4 seasons ago. Nowhere will you find more dramatic change than in the Polar regions: many experienced Arctic hunters, adventurers and scientists had equally surprising 2005 experiences during the same time period; they either saw early thaws, running rivers when there shouldn’t be, or shrinking of once-permanent lake ice sheets. An impressive experience to add, despite all these unusual events, was flying over Arctic Quebec well after it had +30 C weather in early May, while the great number of lakes there were still covered with ice. Somehow heat prevailed despite ice and snow still on the surface. These events are no statistical blips, but rather overwhelming evidence of stronger warming.

    Comment by wayne davidson — 18 Dec 2005 @ 4:35 AM

  22. “If this statement were generally true, then how could climate scientists make complex models – GCMs – that replicate the essential features of our climate system? The fact that GCMs exist and that they provide a realistic description of our climate system, is overwhelming evidence demonstrating that such statement must be false – at least concerning the climate scientists.”

    This, of itself, is no argument. Astrologers create incredibly complicated models of planetary motion and what it means for people’s lives. The fact that complicated astrological models exist is no proof that there is any scientific content to these models.

    [Response:I think most people understand my message here, but the proof lies in the evaluation. When it comes to your example, a model for planetary motion is one thing (it could even be scientific); to say whatever it means for people's lives is another (religion, in my eyes). Climate models are 'extended versions' of weather models. To my knowledge, there is no other scientific model 'exposed' as much to the public as weather models. They have succeeded in predicting features like tropical instability waves, which have subsequently been found in nature. That's impressive. It was atmospheric models that helped unveil the 'chaos effect' to Lorenz, leading to a fundamental and profound understanding of nature. Weather models help save lives when extreme weather arises. These models can be broken down to a small number of equations, some of which can be isolated and solved analytically (actually, the way they were designed was the other way round...). These analytical solutions help us understand important aspects of atmospheric phenomena. -rasmus]

    Slightly less rhetorically, economists create complicated models of the economy. They incorporate a huge amount of knowledge about how the economy works. And yet their ability to forecast is pitiful.

    I believe that climate science has a lot to learn from economics. Both are non-experimental sciences. This creates unique statistical challenges. Economists have, for a large part, realised the limitations of their forecasting ability. Climate scientists seem a little less aware of the limitations of their craft because of their being based in the physical and experimental sciences (and the statistical techniques that implies).

    [Response:Personally, I think it's the other way round - to my knowledge, no economist sent people to the moon - the scientists & engineers did! My proposition is that our highly advanced society is built foremost on science, and secondarily on the economy (which is primarily a means of distributing our goods). Can you prove that economic forecasts have ever been correct? (Economic tigers, the World Bank, etc. have not impressed...) -rasmus]

    Not being able to read the paper in question makes it difficult for me to say more. But I sense that there is a straw-man somewhere here. Statistics never proves the null-distribution…”it is not given that chosen statistical models such as AR (autoregressive), ARMA, ARIMA, or FARIMA do provide an adequate representation of the null-distribution”… it merely fails to reject the null hypothesis. The same is true of any statistical model – to which all climate models belong. Thus, I could equally say, it is not given that chosen GCM models do provide an adequate representation of the null-distribution. At heart, they are both vacuous statements. But further, the implication seems to be that a random walk model is simplistic – if anyone believes this they haven’t studied the stock market. It remains fundamentally true that a random walk is the best model of the stock market – but it is so much more complicated than that statement would make it seem.

    [Response:You have to get both the physics right and the statistics right! Basically, your statistical model needs to be representative of the process you are analysing. -rasmus]

    Comment by JS — 18 Dec 2005 @ 4:48 AM

  23. Re #22

    The funny thing about economic models is that they don’t directly incorporate a huge amount of knowledge about how the economy works. For example, models used in the climate integrated assessment space distill a large number of disparate facts into a few principles that are often more aspirational than empirical (e.g. perfect foresight, market equilibrium) then use those with limited attention to fit to data. Forecasting models are typically statistically sophisticated and dynamically trivial, i.e. they use a lot of data in the context of a simple model that is abstract about the underlying ‘physics’ of information and material flows in the economy. The fact that they can’t forecast is probably partly a consequence of the complexity of behavior, partly lack of ‘physics’.

    I’m sure that there are technical things that climate scientists could learn from economists and vice versa. But personally I hope climate scientists avoid learning too much from economics – economists may be aware of their limits at forecasting things like business cycles, but they are nevertheless paradigm bound and cheerfully contribute to policy design with models that are largely assumption-driven.

    Global climate may be non-experimental because we only have one, but many of the more micro physical aspects of climate are subject to experiment or at least detailed measurements. Climate data now comes in terabytes, while all the global macroeconomic data used to inform the majority of models would probably fit on one CD. Economists routinely discount micro experimental data when it contradicts established macro approaches. Certainly climate models have to parameterize some relationships that are sub-grid-scale or poorly understood, but at least those can be somewhat constrained by data and physical principles.

    One shouldn’t be too hard on economists – economics may be fundamentally a harder nut to crack than climate, due to the infinite regress of modeling human behavior and the paucity of measurements compared to the number of agents or dimensions in the system. I don’t think analogies based on economics shed much light on climate science.

    [Response:You are right - one should always be open to the possibility that one can learn from other disciplines. And to some extent, climatologists do pick up some ideas from economics - econometrics. I was perhaps unduly hard on the economics community; however, I'm not really qualified to discuss economic matters as I'm not an expert in that field (although I admittedly have taken a couple of courses in economics, which I found dull - still got good grades though...). I still could not resist making the critical (and provoking) remarks. -rasmus]

    Comment by Tom Fiddaman — 18 Dec 2005 @ 3:04 PM

  24. As with most sub-topics in this field to which I am new, I sense some solutions and methodologies are partial glimpses of the very complex process of climate emulation on computer.
    At first view, application of a Brownian movement view of the hockeystick upramp in global warming seems worthwhile, if only for its application of delimited randomness onto what is known to be a cyclical process albeit one of very long periodicity. Yet, the author, rasmus, has dissected deftly the sampling methodology required to justify a Brownian interpretation’s superimposition on what is actually a very well tempered equilibrium; so, the commenters who jocundly remind the author about the preposterousness of ignoring gas diffusion laws or harmonic motion lemmas which describe planetary systems motion have touched on the same weak spot in the Brownian analysis as applied to climate. I found an interesting comment, above, in the reference to noise; for me, this engendered other considerations such as Laplace and Fourier timeslices of climate processes; so, for instance, a small timespan could be described within the long cycle of, e.g., CO2 dissolved in ice as presented in the EPICA data recently presented on this realclimate.org site; the EPICA graph showing 600 kYears as a 7-cycle 90 kYear/cycle fairly sinusoidal waveform really brought home the harmonic nature of the climate process, though, as mentioned, the cycle is quite long, thereby enhancing the utility of partial differential views, of which the Brownian paradigm might serve to help comprise the noise. Yet, in a sense, I agree with the immediate intuition of our presenter, that the presentation of a proposed overarching solution based on Brownian motion, a theoretical realm which challenged even the curious Einstein, might itself distract the neophyte from more robust research and conceptualization. Incidentally, I might well be disinclined to adopt the purported Prometheus website approach, if that website were accessible, which it is not at the present moment; though even political approaches are de rigueur. However, for the scientific part, clearly, as the author observes, well tempered computerized reference systems are consulted for weather prediction over the short term in most countries and are sufficiently reliable for that application; much math far beyond time derivatives is entailed in that software. The piquant part of the computerized view of near-future climate is that it is now being called upon to contribute to longer timeframe climate trend definition. The realclimate.org site has provided a suitable forum to encourage that endeavor for the newcomers, and clearly often helps with peer interchange, much as any meritorious professional association would, serving as a substrate and reference.
    If we need to examine long-term climate with Brownian glasses, a similar but distinctly separate pair of lenses should be formed of van der Waals concepts, though the Cohn and Lins US Geological Survey article discussed by rasmus is a Brownian overture exclusively; there is a key two-state component of weather in the fluid phase, gas and liquid; but certainly multiple other properties of matter are involved and the depiction extends well beyond a three-state model incorporating solids, as well as, as alluded to above by one contributor, the quantum-like shift which occurs during transition from one phase to the other: a boiling Brownian movement would look different from an oscillating solid crystal lattice. At first view, gas plasma, thin-film coating physics, and a goodly measure of electricity and magnetism tempered by other insights of particle physics and celestial systems dynamics are all first areas which attract my interest as ways to look outside of that very long 90 kYear sine wave, to perceive perhaps the sawtooth form it is taking and the reasons from many sectors of human understanding reflecting elements of that.

    In the biologic sphere I would question how much genetic work, so to speak, it takes to form a species such as polar bears; and how anomalous it might be to find that within a single span of a century all that genetic impetus was lost to a receding polar ice cap, thereby driving polar bears into extinction likely within our lifetime.

    Comment by JohnLopresti — 18 Dec 2005 @ 3:29 PM

  25. re 24

    Clips from National Wildlife Federation’s Bear Family Tree, at:
    http://www.nwf.org/wildlife/grizzlybear/familytree.cfm

    There are only eight species of bear living in the world today.

    All eight species have a common ancestor, Ursavus, that lived more than 20 million years ago.

    The Ursavus family line split into two subfamilies of what are considered ancestral bear-dogs: the Ailuropodinae (which ultimately evolved into the giant panda (Ailuropoda melanoleuca) that lives in China today) and the Agriotherium (which ultimately evolved into the Ursidae lineage).

    About 15 million years ago, Ursidae diverged into two new lineages: the Tremarctinae, known as short-faced bears; and the Ursinae, known as true bears.

    Ursinae gave rise to the six other bear species that exist in the world today. About 3.5 million years ago, early Ursine bears began migrating to North America by way of the Bering Land Bridge. These bears evolved into the American black bear (Ursus americanus).

    The brown or grizzly bear (Ursus arctos) began to evolve 1.6 million years ago. Brown bears were once found throughout Europe and Asia and eventually wandered into North America, following the same route taken by ancestors of the black bear. Scientists believe that the brown bear lineage split over 300,000 years ago to form the polar bear (Ursus maritimus), theorizing that a group of early brown bears became isolated in colder regions and ultimately adapted to life on ice.

    Comment by Pat Neuman — 18 Dec 2005 @ 4:14 PM

  26. Hey, if economists could issue storm warnings as readily as the Weather Service can issue a Severe Weather Warning, would you want them so you could take your money out of the stock market before each little crash?

    Somehow I doubt we’ll ever see such a forecast, even if it’s possible.

    The economists can start ignoring a third of the money supply
    – “cease publication of the M3 monetary aggregate. …
    http://www.federalreserve.gov/releases/h6/discm3.htm

    Imagine some climate scientists announcing that a third of the data collected on the climate system was being discontinued — we’ll quit looking at solar output, or carbon dioxide output, or methane residence time in the atmosphere — people would raise holy hell, because the information’s producing useful predictions.

    The economists? They don’t have predictions that work, they don’t have any data they know are indispensable; if it’s inconvenient to keep publishing info they can stop without a lot of bother.

    I think it tells you who’s “trendy” – who’s actually able to use information to publish predictions, with lots of different models – and to realistically compare results a season and a year at a time.

    Is it a trend? Trust your climatologist before your economist, for answers that are testable.

    Comment by Hank Roberts — 18 Dec 2005 @ 4:49 PM

  27. Re: response to #18
    Rasmus, as a biologist myself I certainly agree that science is important and vital for civilization. However, as you may have (sadly) noticed in the news recently for biology, testing results is most important (of course usually not because of falsification!). Thanks for the pointer to the CMIP2. I took a quick look there and it seems that the benchmarking there is all related to CO2 sensitivity. Is there another site/resource/paper that has done more basic benchmarking along the lines of what I mentioned in posts #11 & #18, or has that not been done yet?

    Comment by Armand MacMurray — 18 Dec 2005 @ 5:36 PM

  28. I think your phrase “… who in essence pitch statistics against physics” represents a false dichotomy.

    If we had a perfect record of temperatures, etc., for the last couple of millennia, we would be able to look at the data and see if it was autoregressive or not. We do not have this information.

    It is entirely wrong to suggest that a GCM can substitute. The historical record is what it is- even if we do not have that information. If the GCM fails to replicate reality, it is the GCM which is at fault; so the GCM provides no proof that historical records were autoregressive or not.

    Of course, if the GCM is perfect and models reality exactly, then there will be no problem if the GCM shows no autoregression. But since we don’t have the validation that GCMs are perfect, and there is no perfect historical record to check against, it is a bit of a moot point.

    In the meantime, you have done work on i.i.d. models; but did you test to see if the data had an autoregressive character?

    cheers
    per

    [Response:Here is a reason why meteorologists do not use statistical models for weather forecasts. When you travel by plane, the aviation authority depends on good forecasts for your safety. Statistical models are not adequate. You really need to include the physics!!! The same argument goes for climate research. Physics is a vital basis for much of our understanding (statistics maybe isn't...?). Nobody, to my knowledge, has provided a proof that the statistical autoregressive models that you point to are appropriate for earth's climate. But you have also misunderstood me if you think that I say our climate is iid (a climate change is not iid by definition). Rather, the choice of the autoregressive models and their representation of temporal structure may be wrong. These models are chosen because they are simple and convenient. They may be adequate for some types of questions, but not for detection and attribution when it comes to climate change. It's not sufficient to say that the global mean temperature is stochastic (random); you really need to address the physics. -rasmus]

    Comment by per — 18 Dec 2005 @ 9:54 PM

  29. This discussion is directly relevant to several papers that I have published in the peer-reviewed literature regarding the effect of human activity on climate change. First, let’s be very specific about the mathematical formula for a random walk. The simplest representation states that the current value of a variable Y equals its value during the previous period plus some random variable. For temperature data, this would imply that this year’s temperature depends on temperature from the previous period. But this formula cannot describe how air surface temperature carries over from one period to the next. Think how quickly temperature dissipates from day to night – there is no physical mechanism by which the atmosphere can carry additional warmth from one year to the next.

    That said, the effect of human activity on temperature can be modeled as a random walk because these effects carry over from one year to the next. For example, carbon dioxide persists for a long period in the atmosphere, so the atmosphere ‘carries over’ the warming effects of anthropogenic carbon emissions from one year to the next. Similarly, the capital stock that emits greenhouse gases and sulfur persists from one year to the next, and so imparts a signal in the temperature record.

    We use these signals as ‘fingerprints’ to detect the effects of human activity on the historical temperature record. Statistical techniques to do so have been developed over the last decade, and Clive Granger won the 2003 Nobel Prize in economics for his pioneering work in this area. Using two separate statistical techniques, I have been able to show (along with David I. Stern, James H. Stock, and Heikki Kauppi) that the stochastic trends in the radiative forcing of greenhouse gases and sulfur emissions can ‘explain’ much of the general increase in temperature over the last 130 years. One of these papers appears in the Journal of Geophysical Research (Kaufmann, R.K. and D.I. Stern. 2002. Cointegration analysis of hemispheric temperature relations. Journal of Geophysical Research. 107 D2 10.1029, 2000JD000174) and another has been accepted for publication in Climatic Change (Kaufmann, R.K., H. Kauppi, and James H. Stock, Emissions, concentrations, and temperature: a time series approach, Climatic Change). This second paper is available at my home page http://www.bu.edu/cees/people/faculty/kaufmann/index.html
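
    The random-walk definition used in this comment can be written Y_t = Y_{t-1} + e_t. Below is a minimal sketch contrasting that with the physical picture described here: a damped (mean-reverting) surface temperature that integrates a slowly accumulating forcing. The damping constant, noise levels, and forcing increments are arbitrary illustrative choices, not values from the papers cited.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Pure random walk Y[t] = Y[t-1] + eps[t]: its variance grows without bound,
# which is why it is a poor description of surface temperature itself.
random_walk = np.cumsum(rng.normal(0.0, 1.0, n))

# A forcing that accumulates (e.g. CO2 from emissions that persist): this is
# itself close to a random walk with drift.
forcing = np.cumsum(rng.normal(0.02, 0.05, n))

# Temperature with physical damping: it relaxes toward the forcing instead of
# wandering freely, so warmth is not simply "carried over" year to year.
temperature = np.zeros(n)
for t in range(1, n):
    temperature[t] = (temperature[t - 1]
                      + 0.1 * (forcing[t] - temperature[t - 1])
                      + rng.normal(0.0, 0.1))

print("std of the random walk :", random_walk.std())
print("std of the damped temp :", temperature.std())
print("corr(temp, forcing)    :", np.corrcoef(temperature, forcing)[0, 1])
```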

    Comment by Robert K. Kaufmann — 18 Dec 2005 @ 10:15 PM

  30. Re #23
    “Climate data now comes in terabytes, while all the global macroeconomic data used to inform the majority of models would probably fit on one CD. Economists routinely discount micro experimental data when it contradicts established macro approaches.”

    Quantity does not equal quality. And, apart from anything else, you would be wrong about the quantity of data available in macroeconomics – let alone microeconomics.

    As to the other, perhaps you should read up about the 2002 Nobel laureates in economics. Alternatively, perhaps you could discuss how physicists reconcile quantum and macro phenomena? Isn’t quantum physics routinely discounted when discussing planetary motion, atmospheric circulation and practically any other macro-scale phenomena?

    [Response:The beauty of quantum physics is that it provides a consistent picture with the macrophysics when scaled up (because of the vast number of particles and the laws of probability). But to use quantum physics on macroscales is in general silly (unless you look at line emissions and the like), as you would spend the rest of your life calculating. -rasmus]

    Comment by JS — 19 Dec 2005 @ 12:27 AM

  31. Thank you Robert for your contribution. It is good to see econometricians getting involved in this area – as I suggested above, I think that climate scientists could learn a thing or two from them with regard to statistical technique.

    For those of you who may not have the motivation to follow the link I’ll provide a quote from that paper that I think is important:

    “The presence of stochastic trends invalidates the blind application of standard statistical techniques such as ordinary least squares (OLS) because they may generate spurious regression results.”

    I made a similar point in an earlier comment that seems to have tripped the filters here for manual review so hasn’t appeared yet. This is directly relevant because as Robert observes in the linked paper:

    “the results indicate that the time series for temperature, anthropogenic emissions of CO2 and CH4, and their atmospheric concentrations contain a stochastic trend.”

    So, to those climate scientists who don’t believe there is anything to learn from economics, open your mind – you might learn something.

    [Response:This touches an interesting point: what would the physical implications be of a 'stochastic trend'? In physics, there are certain things that aren't really stochastic, but nevertheless can be modelled as stochastic because we do not have sufficient information about the system. E.g. the displacement of molecules follows Brownian motion (diffusion). This enables us to predict the displacement of the bulk of molecules, but is not very good for one specific particle. For that, we need to know its interactions (collisions) with other particles. Then, there are a number of other physical examples which have much less of a 'stochastic' appearance: planetary motions and oscillators. In fact - and this discussion is now veering off into philosophy - the world is not stochastic, but there are physical laws which create order (if it were merely stochastic, you could not explain how we could sit here having this discussion - there would be no life forms...). To say there is a 'stochastic trend' is a cop-out - there must be an underlying physical cause. Also, would it even be possible to make predictions with the 'stochastic trend models'? That would seem a contradiction. We have one explanation for the trend based on physics: GW; there are no good alternatives. -rasmus]
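
    The quoted warning about OLS and stochastic trends is easy to demonstrate numerically: regress one random walk on a second, completely independent random walk, and the fit frequently looks impressive. A rough Monte Carlo sketch; the series length and number of trials are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

def spurious_r2(n=200):
    """R^2 from regressing one random walk on another, even though the two
    series are generated completely independently of each other."""
    x = np.cumsum(rng.normal(size=n))
    y = np.cumsum(rng.normal(size=n))
    return np.corrcoef(x, y)[0, 1] ** 2

r2 = np.array([spurious_r2() for _ in range(2000)])
print("median R^2 between two unrelated random walks:", np.median(r2))
print("fraction of trials with R^2 > 0.5           :", np.mean(r2 > 0.5))
```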

    Comment by JS — 19 Dec 2005 @ 1:11 AM

  32. Re #30

    Don’t forget Herbert Simon in 1978. Generally, though, behavioral and experimental economics have commanded limited attention. Certainly there’s next to nothing of behavioral economics embedded in the dominant CGE and intertemporal optimization models used for policy.

    The reconciliation of quantum and macro in physics is quite different from building a micro foundation for macroeconomics. The macro physical principles employed in climate models are well-supported by experiment; quantum mechanics is generally just a refinement that applies at scales irrelevant to climate. Not so in economics. It’s a long way from PV=nRT to “assume an infinitely lived representative agent with logarithmic utility…”.

    I agree that quantity is not quality, but consider what’s available to someone wanting to build a regional energy-economy model: energy supply and demand with a little bit of fuel detail, some technical detail on electric power plant efficiency and the like, GDP, prices on a limited range of commodities, national accounts, population, and not a heck of a lot else. Some, like GDP, come with all sorts of measurement and interpretation baggage that make the problems of temperature pale by comparison. Other key pieces, like capital stocks, are largely missing or inferred from the same fundamental sources. I’m sure most economists would kill for the equivalent of 650,000 years of deltaD, or a satellite that would take real-time gridded measurements of household consumption.

    Comment by Tom Fiddaman — 19 Dec 2005 @ 2:42 AM

  33. “the world is not stochastic, but there are physical laws which creates order”

    You seem to misunderstand the meaning of stochastic. Do they not teach statistics to physicists anymore? Who was it that said “God does not play dice”? You seem to be applying an incredibly classical picture of the universe – wasn’t it once believed that if you knew the positions of all particles in the universe you could then completely forecast the future? Hasn’t that been shown to be false?

    [Response:Stochastic = 'A process with an indeterminate or random element as opposed to a deterministic process that has no random element'. Other definitions are: 'Applied to processes that have random characteristics' or 'This is the adjective applied to any phenomenon obeying the laws of probability'. So, it depends which definition you choose. The latter definition appears to apply to almost everything, whereas the former seems to equate stochastic with 'random'.
    I think this is quite an interesting side of this discussion. As far as I remember, the existence of atoms was gleaned from the Brownian motion work of Einstein. Thus, there had to be some physics behind the 'stochastic' motion. It's therefore somewhat ironic that a process once used to derive knowledge about the underlying physics is now presented as if things just happen randomly, without any thought about the physics.
    Sadly, the degree to which statistics is taught in physics is in my opinion not enough. It was also Einstein who famously said that God doesn't play dice.
    One thing is to predict everything in a deterministic way - the classical Newtonian universe - it's another matter to argue that everything is stochastic or random, as order does emerge, at least on local scales. You are right that an established view in the past was that the universe worked like clockwork and in theory could be predicted if one had perfect knowledge of the initial state of the universe and sufficient resources. This is no longer the paradigm, but the new paradigm does not preclude the role of physical laws. There are quantum physics and the non-linear chaos effect, which limit our ability to predict. Still, energy cannot be created or destroyed, and a planet's global mean temperature must fit into the greater picture of physics. -rasmus]

    To take an example from the paper linked above – would you care to discuss the non-stochastic process by which CO2 emissions from power plants are determined? Once the CO2 is in the atmosphere there may be physical laws which create order – but the amount of CO2 released into the atmosphere is determined by fundamentally stochastic processes.

    [Response:I take this to be in the meaning 'Random or probabilistic but with some direction'. The question is then to what degree 'random' and to what degree 'direction'. There are well-established increases in atmospheric CO2, and there are indications that isotope ratios can provide a clue about the portion that comes from fossil sources. But, in our case, I don't think the question of the CO2 sources and whether the gas' pathway is stochastic or not is all that relevant, given the atmospheric concentrations. For single molecules that absorb IR, the re-emission is random in terms of direction, but for a volume of air there is such a vast number of molecules that the statistical property of the bulk action can easily be predicted. I would say the definition 'Random or probabilistic but with some direction' applies, with a small degree of randomness for the bulk property and a high degree of 'direction' (equal amounts going up as down). -rasmus]
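
    A small numerical illustration of the bulk-versus-single-molecule point in the response above: the re-emission direction of any one excited molecule is random, but the average over many molecules is predictable and its net direction tends to zero. The photon counts below are arbitrary illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_emission_direction(n_photons):
    """Length of the average direction of n isotropically emitted photons
    (random unit vectors in 3-D): individually random, collectively
    predictable."""
    v = rng.normal(size=(n_photons, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return np.linalg.norm(v.mean(axis=0))

for n in (10, 1_000, 100_000, 1_000_000):
    print(f"n = {n:>9}: |mean direction| = {mean_emission_direction(n):.5f}")
# The magnitude shrinks roughly like 1/sqrt(n), so for a macroscopic number
# of molecules the net up/down bias of re-emission is effectively zero.
```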

    Next topic… Occam’s razor. If it walks like a stochastic process, and if it quacks like a stochastic process, what is wrong with calling it a stochastic process?

    [Response:I would remind you again about Einstein's Brownian motion, and that this phenomenon was used to infer the existence of atoms. You could just as well say that the particles were simply random, and be happy with that. But insight shows that there are underlying physical laws even to processes that appear stochastic. In gases and diffusion, it is the interaction (collisions) between the gas molecules. It is hard to predict, but the physics is there nevertheless. And even more so when you consider the average kinetic energy of the molecules - the temperature. If the temperature takes a hike, then there must be a supply of energy: the first law of thermodynamics. -rasmus]

    Comment by JS — 19 Dec 2005 @ 5:34 AM

  34. I want to reiterate the meaning of stochastic that is described by JS. We can think about stochastic trends in terms of Occam's razor. There are two general types of time series: stationary and non-stationary. Stationary variables have a constant mean and variance that can be approximated from a long set of observations. As such, these variables have “no trend.” Clearly, the global temperature time series is not stationary.

    Nonstationary variables have no long-run mean, and the variance goes to infinity as time goes to infinity. Put simply, a nonstationary time series tends to increase or decrease over time. As such, the time series for global temperature is non-stationary.
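    A minimal sketch of the distinction (a toy simulation with made-up parameters, not climate data): a stationary AR(1) series keeps returning to a fixed mean and its sample variance settles down, whereas a random walk, the simplest non-stationary series, has no long-run mean and its sample variance keeps growing with the length of the record.

      # Toy illustration in R: stationary AR(1) versus an integrated (random walk) series
      set.seed(1)
      n <- 2000
      stationary <- arima.sim(model = list(ar = 0.7), n = n)  # mean-reverting AR(1)
      random.walk <- cumsum(rnorm(n))                         # integrated white noise
      # variance of the stationary series stabilises; that of the random walk grows
      sapply(c(200, 500, 2000), function(m) var(stationary[1:m]))
      sapply(c(200, 500, 2000), function(m) var(random.walk[1:m]))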

    The increase/decrease can be caused by a deterministic trend or a stochastic trend. A deterministic trend is one in which a variable increases at a fixed rate year after year. But we know that the human activities that affect climate are not deterministic. For example, carbon emissions do not increase year after year at a constant rate (yes, they generally grow, but in a stochastic manner). There are economic recessions, in which emissions decrease – witness the Great Depression in the US or the collapse of the Former Soviet Union.

    A stochastic trend derives its increase or decrease from the cumulated effects of a stationary variable (or variables). Note that the variable does not have to have a mean value of zero. So, you can think of emissions as a non-zero disturbance that is “integrated” by the atmosphere. This approach allows scientists to “match” the stochastic trends in the radiative forcing of radiatively active gases with temperature. When we do that, the link is very clear – the stochastic trends in the radiative forcing of greenhouse gases and human sulfur emissions match (or cointegrate, in technical terms) the stochastic trends in global and hemispheric surface temperatures.
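    A toy version of that idea, for concreteness (entirely synthetic numbers, not the actual forcing or temperature series, and only an eyeball check rather than a formal cointegration test): a non-zero stationary disturbance is cumulated ('integrated') into a stochastic trend, a second series shares that trend, and the residual of a regression of one on the other is stationary.

      set.seed(2)
      n <- 500
      shock <- rnorm(n, mean = 0.02)               # stationary disturbance, non-zero mean
      forcing <- cumsum(shock)                     # integrated => a stochastic trend
      temp <- 0.8 * forcing + rnorm(n, sd = 0.3)   # shares the same stochastic trend
      resid <- residuals(lm(temp ~ forcing))
      acf(resid)   # residuals look stationary: the two series 'cointegrate'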

    [Response: Maybe this was implied before, but while a time series with no long-term mean and a variance that increases with the length of the sample is clearly (pathologically?) non-stationary, there are many non-stationary processes (where the mean or variance changes through time) that remain within bounds. For instance, ENSO may be non-stationary as a function of the base state of the tropical Pacific, but it still has a bounded mean. - gavin]

    Comment by Robert K. Kaufmann — 19 Dec 2005 @ 3:07 PM

  35. A couple of comments.

    1) Many climate time series, especially temperature, are obviously not i.i.d. (I think you understand this, and have said so yourself. I say this only because you seem to stray away from this obvious point at times.) There is obviously persistence in temperature from day-to-day, and from year-to-year. Any model which assumes i.i.d. data is obviously wrong and inferences from it will be wrong.

    2) Temperature is also obviously not a pure random walk because there seem to be upper and lower limits on temperature. I would be astonished if anyone truly believed that temperature were a pure random walk. An ARIMA process (and its relatives), however, is not a random walk, and there are many time-series models which can easily incorporate mean-reversion and avoid the absurd implications of a pure random walk model (where temperature would tend to become infinitely high or low).

    3) The statistical implications of autocorrelation in time series are well known, and it is well known that the conclusions you can legitimately draw from autocorrelated series are highly dependent on the nature of the autocorrelation. It is equally well known that ignoring long-run persistence in such series can easily lead to grossly incorrect inferences (see the small simulation sketched after this list).

    4) The distinction you draw between physics and statistics is, I think, a bit too simplistic. Statistics are used all the time in physics — they are used anywhere there is a residual “noise” that cannot be accounted for explicitly. The use of statistics here (including ARIMA models and their relatives) is completely appropriate. Indeed, I think it helps a great deal (and is, in fact, essential) in understanding how certain we can be about the results in this area. This is exactly what statistics are for.

    5) An understanding of the statistical properties of climate-related time series is essential here because it gives a great deal of insight into how certain we can be of the conclusions we draw and (perhaps more importantly) how sensitive the conclusions are to different assumptions. What statistics is telling us here is that seemingly small tweaks to our assumptions about long-term trends in the temperature record are extremely important to the conclusions. Statistics also tells us that such long-term persistence is EXTREMELY difficult to detect, much less accurately measure. This is because you need an extremely long and accurate time series to be able to say anything with confidence about long-run persistence.

    6) It is far from obvious that GCMs are any better than simple-minded time-series models at prediction here. Yes, they are based on physical models, but they are always calibrated to the data and so inherit the statistical properties of the data. Therefore, they can be seen as just very complicated statistical models themselves. They also have a host of other assumptions built in, and (like all models) they are only approximations of reality, and can be quite sensitive to seemingly innocuous assumptions. For instance, they may assume linearity in some variables which is reasonably accurate over a large range, but completely inappropriate when the system reaches extreme states. FWIW, in economics in the fifties and sixties, econometricians built extremely complicated models that they thought would yield a big increase in predictive power. But the models turned out to be a disappointment: it turned out that very simple time-series models were actually better at economic prediction than the hyper-sophisticated models. The economy is just too complicated for the models. This is why economists are so much more aware of these time-series issues.
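    A small simulation in the spirit of points 3 and 5 (artificial series, for illustration only): regress two independent but strongly autocorrelated series on each other, and the ordinary t-test declares a 'significant' relationship far more often than the nominal 5% would suggest.

      set.seed(3)
      spurious <- replicate(500, {
        x <- cumsum(rnorm(200))   # two independent random walks
        y <- cumsum(rnorm(200))
        summary(lm(y ~ x))$coefficients[2, 4] < 0.05   # slope p-value below 0.05?
      })
      mean(spurious)   # typically far above 0.05, despite no real relationship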

    [Response:I agree on points #1, #2 & #3. On point #4, you are right that physics uses statistics, but statistics often takes no account of the physics, so the relationship between the two is often not reciprocal. I believe that both the physics and the statistics must be right. I do not believe in just applying a statistical analysis and leaving it at that (Occam's razor?). The analysis must be accompanied by a consideration of the physics. I think that the use of ARIMA-type models, calibrated on the trended series itself, will not provide a good test of the null-hypothesis, since you do not know a priori whether the process you examine is a 'null-process'. The risk is that the test is already biased by using the empirical data both for testing and for tuning the statistical models used to represent the null-process. Regarding #5, I think it is not necessary to go to great lengths with statistical analysis to arrive at the conclusion that discriminating the null-hypothesis depends on how you model the null-process - I'd be surprised if it were otherwise (isn't it logical?). This also relates back to #4, and I think that you have similar arguments embedded in your comments. No surprise, I disagree on #6, partly because there is more to the story: model evaluation and experience with these models. GCMs also embed more information (physics) about how the climate system works. ARIMA-type models do not contain any physics; one never knows whether the ARIMA-type models really are representative or just seem to be so, but I agree that they are convenient tools when we have nothing else. The question is not whether to use ARIMA-type models or not, but what conclusions you really can infer from them. Here we are looking at a test, and it is extremely important that this test is not predisposed. Such tests are extremely delicate, as you say, since they depend on an assumption about the null-process that in this case we do not really know (a significant trend or not?). On the other hand, if we can utilise insight about the underlying physics (which GCMs do), we can do far better. After all, the discovery of atoms as a result of stochastic Brownian motion has enabled far more useful predictions than a simple stochastic view ever could. -rasmus]

    Comment by Terry — 19 Dec 2005 @ 5:21 PM

  36. Is the pattern NASA GISS described (first 3 lines of comment #17 above) something that the scientists agree is real — whether or not it’s a trend, is there agreement that we’ve had the pattern of warming as described?

    It’s hard (as a non-scientist, reading along) to figure out what people agree is happening, let alone whether trends and causes can explain observations.

    I find myself wishing for a (small, quiet) weblog where those who disagree on so much would post only what they agree upon. Like the NASA summary, perhaps?

    Comment by Hank Roberts — 19 Dec 2005 @ 6:01 PM

  37. re 35 … Many climate time series, especially temperature, are obviously not i.i.d. …

    You lost me there, but I kept reading anyway.

    As a hydrologist involved with prediction, I understand what you said here: “it turned out that very simple time-series models were actually better at economic prediction than the hyper-sophisticated models. The economy is just too complicated for the models. This is why economists are so much more aware of these time-series issues”. Modeling the runoff from varying intensities of rainfall and snowmelt splattered over a partially porous canvas of variable terrain, soils and land use, varying in time, and with multiple sources of input all having varying degrees of error and bias, makes it more than difficult to keep a complex runoff model updated in real time, as needed for making instant crest predictions based on forecast precipitation, temperatures, winds and humidity. It’s no wonder we rarely get it right to the tenth of a foot.

    Comment by Pat Neuman — 19 Dec 2005 @ 6:02 PM

  38. Gavin,

    Time to dust off your statistics. Non-stationarity, even within bounds, has profound implications for the validity of standard statistical hypothesis testing. Many fundamental results about the properties of OLS regression and t-statistics rely on variances and means having a defined limit as n->infinity – that is, they require stationarity not just boundedness.

    [Response: One has to be very careful here. Many variables which are quite stationary may appear not to be if described by an inappropriate metric. A simple example is a lake that responds to the difference between random inputs of precipitation and random losses through evaporation. Let us, for simplicity, suppose that runoff is minimal, and that the boundaries of the lake are vertical. Then changes in the level of the lake over a given time interval (say, a month) are linearly proportional to the average difference between the precipitation and evaporation rates over that time interval, with the proportionality constant determined by the lake geometry. Let us suppose that the precipitation and evaporation rates are both normally-distributed white noise (not a bad approximation in many cases). Then the change in lake level from one month to the next, like the precipitation-evaporation series, is described by a normally-distributed white noise process. About as stationary as can be! But suppose that you decide, for sake of ease, not to measure the average precipitation and evaporation rates (the climatological variables responsible for any observed changes in the lake level), but to measure, instead, the lake level (e.g. in meters) from one month to the next, and you eventually obtain a very long series of monthly measurements of the monthly mean lake level. Any guesses as to the statistical properties of this alternative 'climate' variable? - mike]
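    A quick way to see the answer (a toy simulation of the set-up described above, in arbitrary units): the month-to-month changes are white noise, but the level itself is their running sum, i.e. a random walk, which wanders with no long-run mean even though the underlying 'climate' is perfectly stationary.

      set.seed(4)
      months <- 1200
      p.minus.e <- rnorm(months)      # precipitation minus evaporation: white noise
      level <- cumsum(p.minus.e)      # lake level: the integral of that white noise
      acf(diff(level))                # the changes are essentially uncorrelated...
      acf(level)                      # ...but the level is very strongly autocorrelated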

    [Response: For your main point I wouldn't disagree. But non-stationarity needs to be demonstrated, and it is often difficult to distinguish from stationary but oddly non-Gaussian behaviour (e.g. the GISP ice core record). -gavin]

    Comment by JS — 19 Dec 2005 @ 6:15 PM

  39. Mike,

    Absolutely. At an abstract level, one needs to consider very carefully whether one is measuring a stock or a flow. The simplest method of dealing with integrated/non-stationary series is to difference them. In your lake example, you would observe that the level was non-stationary but that its first difference was stationary, and apply your statistical methods to this difference. That is the point – you need to know the statistical properties of the variable you are measuring; you cannot afford to be ignorant of these properties or your statistics will be fundamentally flawed.

    In a similar manner, because of very long persistence, atmospheric concentrations of gases are much more like the level of the lake than the rainfall. You need to take account of that in any statistical modelling of them. But, to break the analogy with the lake, additions of GHGs to the atmosphere from human sources are also non-stationary, because they derive from human economic activity, which does not follow a stationary process (or even a trend-stationary process). Any statistical analysis which does not take account of these statistical features of the data will be on shaky ground.

    In sum, it is not that stationary or non-stationary analysis is better, but that one needs to know the statistical properties of the variables one is analysing and apply the appropriate techniques. Application of standard OLS techniques to non-stationary variables (of which, let us not kid ourselves, there are a lot in climatology) will lead to flawed results. I have yet to see Dickey-Fuller or similar tests considered as standard in climatology, yet in the econometrically based contribution from Robert above it is the first test that is conducted. It is fundamental to establish whether the series you are dealing with are stationary or not before running your regressions.
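    A sketch of the workflow described here, on a synthetic integrated series rather than real data (and assuming the tseries package is installed for the augmented Dickey-Fuller test):

      library(tseries)                 # assumed available, for adf.test()
      set.seed(5)
      level <- cumsum(rnorm(600))      # an integrated (non-stationary) series
      adf.test(level)                  # typically fails to reject the unit-root null
      adf.test(diff(level))            # the first difference looks stationary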

    Comment by JS — 19 Dec 2005 @ 8:06 PM

  40. Re response to #33

    Thank you for the link to the God does not play dice article. Quite interesting.

    While tangential to the current discussion, it captures the point I was trying to make earlier (somewhat inadequately) about reconciling micro and macro phenomena.

    (And a minor request to the mods – could you tweak the formatting so that your comments in #33 are all in green – it’s pretty clear who is saying what but some confusion might result at the moment.)

    Comment by JS — 19 Dec 2005 @ 9:16 PM

  41. Water temperature has a large influence on evaporation rates from large lakes. Evap from Superior and Michigan-Huron has been greatest in winter.

    Comment by Pat Neuman — 19 Dec 2005 @ 10:34 PM

  42. Rasmus:

    Thanks for the thoughtful reply to #35.

    I agree that physics should inform the statistical analysis. Understanding the heat budget helps establish priors on what the boundaries of the model should be and so helps us form priors on the appropriate choice of statistics that should be applied.

    I just wanted to reinforce my point that statistics can also be very helpful in understanding the power of our models and the confidence we should have in their predictions, and some elementary time-series analysis goes a long way here.

    Just looking at the various temperature plots and proxy series makes me think that we are dealing with highly autocorrelated series with long-term, medium-term, and short-term trends superimposed. We know there is short-term correlation just by looking at the last hundred years of surface temperature data. We know there is long-term correlation from the fact of repeated glaciations, with swings of (I am told) 6 degrees or so. I would be very surprised if there were not medium-term correlations as well (with periodicities of 100 to 1000 years).

    With such priors, it is very difficult to say with certainty that recent temperature movements are historically anomalous based on the temperature record alone. This, I think, is at the heart of much of the debate. (Much of the impact of MBH was the result that reconstructed temperatures were found to be extremely stable over time. You cite it for that proposition. This was powerful because it makes the recent temperature increase appear anomalous. Essentially, much of the argument about MBH is whether it correctly estimates the variance of past temperatures.)

    Supporting the AGW conclusion is the recent temperature increase which seems to be undeniable. The increase in CO2 is also undeniable and given the physics, it seems likely that increased CO2 has some relationship to increased temperature.

    But, the statistics of autocorrelated series should teach us some humility, and inference based on such highly autocorrelated series should give us pause. It is very difficult to estimate long term trends. It is even more difficult to estimate long-term variances (because long-term variances depend critically on the long-term trend and the stationarity of the series.) And that is what much of this debate is about … what is the natural variability of temperature?

    Comment by Terry — 19 Dec 2005 @ 11:13 PM

  43. Can you please explain the within-the-post reply to my comment number 8? I’m not even disagreeing (yet), just don’t understand what point was being made. Thanks.

    [Response:GCMs do give a reasonable representation of the persistence and time structure of the global mean temperature. GCMs can also be run with a constant forcing, providing a null-distribution for the case when there is no change in the forcing (which is not the case for the real world).
    By the way, I do not say the global mean temperature is iid - rather the reverse, since there is a trend! (A climate change implies a change in the distribution, and hence the data cannot then also be identically distributed.)
    But now we are mixing the two aspects of iid: (1) independence and (2) identical distribution. By referring to autocorrelation, your arguments concern the former. If you subsample the data at a sufficient interval so that there is no memory between subsequent observations (this requires a long data series), then it is reasonable to say that the data are independent (let's say 'chaos' erases the memory of a previous state). If the climate is stable, then the pdf of the climatic parameter (the global mean temperature) is constant, and it is reasonable to say that the subsampled data would be iid. But if there is a trend, then the data would not be iid. -rasmus]
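    A rough sketch of the subsampling idea (an artificial AR(1) series standing in for a climate record, with an arbitrary persistence parameter): keep only every 50th value, i.e. sample at intervals much longer than the decorrelation time, and the retained observations are close to independent.

      set.seed(6)
      x <- arima.sim(model = list(ar = 0.9), n = 10000)   # strongly persistent series
      acf(x, lag.max = 50)                     # memory extends over many lags
      x.sub <- x[seq(1, length(x), by = 50)]   # keep every 50th observation
      acf(x.sub, lag.max = 10)                 # autocorrelation now negligible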

    [Response: To save on pointless blog-to-blog ping pong, the difference between this statement and Manabe and Stouffer (1996) is that MS96 describe results from a control run (no forcing), while the real world and the latest AR4 runs include forcing effects (D. A. Stone et al, in press). Since forcings have trends they necessarily impart autocorrelation structure into the temperature spectra. This is reasonably modelled (and I suggest you register for access to the IPCC AR4 model data to check for yourselves). The key thing is to compare apples with apples. -gavin]

    Comment by TCO — 20 Dec 2005 @ 8:11 AM

  44. Has the title post been edited/updated since originally written?

    Comment by TCO — 20 Dec 2005 @ 6:13 PM

  45. Re #8, can you point to an explanation of your very brief point “why it’s best to use control integrations with GCMs to obtain null-distributions” — I am wondering if this is a shorthand reference to an ongoing argument about what’s best to use and to obtain, or if it’s a generally agreed basis for making models.

    Comment by Hank Roberts — 20 Dec 2005 @ 9:50 PM

  46. You say: “We have heard arguments that so-called ‘random walk’ can produce similar hikes in temperature (any reason why the global mean temperature should behave like the displacement of a molecule in Brownian motion?).”

    Well, there is the fact that climate, like a molecule undergoing Brownian motion, is subject to a very large number of forcings (anthropogenic, meteorological, astronomical, biological, and geological for climate, to name a few) with pseudo-random distributions in time. There is also the fact that the historical climate record as revealed by ice cores exhibits a lot of apparently random variation. I’m finding some difficulty in appreciating the point of your snark here. Exactly how much evidence do you need?

    [Response:Do you not believe that the first law of thermodynamics matters for the global mean temperature? -rasmus]

    You also say: “Another common false statement, which some contrarians may also find support for from the Cohn and Lins paper, is that the climate system is not well understood.”

    So does that mean that climate models are now making accurate predictions? Can they, for example, predict right now which of the next 5 years will be the warmest? Can they, without special ex post facto tuning, accurately reproduce the climate behavior of the last 150 years? If the answer to any or (more likely) all of these questions is no, what is the content of your implication that the climate system *is* well understood?

    [Response:I think you are mixing up the concepts of 'understanding' and 'predicting'. Have you heard about the so-called 'butterfly effect'/chaos? -rasmus]

    Comment by CapitalistImperialistPig — 21 Dec 2005 @ 3:30 AM

  47. Climate is something like Brownian motion, in the sense mentioned by C.I.P. (#46). On the other hand, the variation of temperature cannot be pure Brownian motion, or pure Brownian motion plus a constant linear trend as in Cohn and Lins’s model, because it has lower and upper bounds of physically possible values. (Even the motion of Brown’s pollen grains or spores may not be pure Brownian motion near the edge of the container.) Probably a better stochastic model of climate would be Brownian motion plus some restoring force, or a forced-dissipative system. The question here should be, I think, how good the unbounded Brownian motion (with linear trend) is as an approximation of a more realistic model in the context at hand.
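    A minimal sketch of the 'Brownian motion plus restoring force' suggestion (a discretised AR(1)/Ornstein-Uhlenbeck-type process with made-up parameters): the restoring term keeps the series within bounds, while the pure random walk drifts without limit.

      set.seed(7)
      n <- 5000
      pure.rw <- cumsum(rnorm(n))   # unbounded random walk
      damped <- numeric(n)
      for (t in 2:n) {
        # the 0.98 factor acts as a weak restoring force toward zero
        damped[t] <- 0.98 * damped[t - 1] + rnorm(1)
      }
      range(pure.rw)
      range(damped)   # the damped series stays within a limited range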

    [Response:The mean kinetic energy of the molecules in a gas is conserved, but the molecules are free to 'wander off' without constraints (until they hit a boundary, such as the walls of a container). -rasmus]

    Comment by Kooiti Masuda — 21 Dec 2005 @ 4:11 AM

  48. I take the statement that the climate system is well understood to mean that all relevant physical causes have been identified, and their effects have been quantified (which is part of knowing whether something is relevant or not). The lack of predictability has to do with limited computational power and lack of initial-value data. These in turn have to do with the scale of the climate system and the chaotic dynamics of the system.

    The question is then: how do we know that the climate system is well understood (i.e., all relevant physical causes have been identified) if we cannot predict? I assume the answer is partly that global climate models produce qualitatively the features we observe, including large-scale circulations, climate cycles, etc. Secondly, over, say, a couple of recent decades for which we have substantial data, the global climate models match the data statistically as well.

    Are the above statements correct? How would you amend them?

    [Response:I believe your statements are fair. You could even stretch a bit further and ask 'what makes us think that the ARIMA-type models are right?' and apply the same demands to them. What do you get? -rasmus]

    Comment by Arun — 21 Dec 2005 @ 12:31 PM

  49. Thanks for the update. I think there has been an abnormally large amount of confusion in this thread, and the update helps clear up quite a bit of the confusion. In the end, I don’t think there is really much disagreement here … just confusion.

    I would like to take one last try at a point you make in the update.

    “Some of the responses to my post on other Internet sites seem to completely dismiss the physics. Temperature increases involve changes in energy (temperature is a measure of the bulk kinetic energy of the molecules), thus the first law of thermodynamics must come into consideration. ARIMA models are not based on physics, but GCMs are.”

    I think a distinction between statistics and physics isn’t the right way to think about things for a few reasons.

    1) It isn’t either/or. Statistical tools are an extra layer of analysis laid on top of the physics. Most importantly here, they tell us what inferences we can legitimately draw from the data … how confident we can be in our statements about what the data show and the predictions we can make from the data and the physics. Where statistics can be especially helpful is in giving us very quick insight into the uncertainty of our results, and a little statistical insight goes a long way. In this case, the insight is that statistical significance is MUCH lower (perhaps orders of magnitude lower) in the presence of autocorrelation than absent autocorrelation.

    2) I don’t think it is accurate to say that statistical models such as ARIMA models are not based on physics. I think it is more accurate to say they incorporate the physics because they operate on the physical data, which incorporate all of the physical phenomena of interest. If the physical system you are studying exhibits persistence, then it shows up in the data as autocorrelation. Thermodynamics, heat budgets, you name it, are all in there, in the data.

    [Response:I think you are correct in your assertion that statistical models should implicitly reflect the underlying physical processes explaining, for instance, the degree of persistence/serial correlation. But I do not think that ARIMA models are constructed out of physical considerations - they merely reflect the empirical data. You are also right that one should not separate 'physics' and 'statistics'. I argue that you have to get both right, and I am sceptical of analyses where only statistical aspects are taken into account. A warming trend cannot just happen without a cause, and there must be some physics driving it. If you are looking into the cause, as in this particular case, I do not believe that statistical models are appropriate because i) they are used to test a null-hypothesis where no anthropogenic forcing (or just solar/volcanic forcing) is assumed, and ii) they are trained on empirical data subject to forcings (anthropogenic as well as solar/volcanic). -rasmus]

    Comment by Terry — 21 Dec 2005 @ 1:10 PM

  50. In reply to my #6 rasmus wrote “Right, there are two aspects to this radiation: the continuum associated with the atoms kinetic energy and the band absorption associated with the atomic electron configurations. ”

    N2 and O2 radiate neither continuum nor band radiation. CO2 does not radiate continuum radiation, and H2O only radiates continuum radiation at a very low level. Thus, temperature-dependent continuum radiation can be ignored when considering atmospheric gases. The band radiation is broadened by pressure, not temperature. This means that the current breed of GCMs, which contain layers of atmosphere emitting radiation based on their temperatures, are not using the correct physics.

    Surely there is at least one scientist at RealClimate with enough scepticism of the established science to see that Dr John Christy is right and the models are wrong.

    Comment by Alastair McDonald — 21 Dec 2005 @ 2:29 PM

  51. Re: reply to comment 46; Rasmus – “Do you not believe that the first law of thermodynamics matters for the global mean temperature? -rasmus]”

    Well, duh! What are forcings but effects that change the energy fluxes? I’m afraid I don’t understand this snark of yours either.

    Rasmus again: “[Response:I think you are mixing the concepts 'understand' with 'predicting' Have you hear about the so-called 'butterfly effect'/chaos? -rasmus]”

    Let’s see if I understand your point about predict vs. understand. Historical sciences understand, physical science predicts, or as Lord Rutherford put it, “there are only two kinds of science, physics and stamp collecting.” Sure do sound like stamp collecting to me.

    Since you mention it, there is a question that’s been on my mind. As you know, chaotic systems have both a fast and a slow manifold, so what would a slow-manifold eigenvector look like, and why should we expect that butterflies always flap in the fast one?

    Comment by CapitalistImperialistPig — 21 Dec 2005 @ 3:13 PM

  52. Re #49 and your update

    I have to reiterate some of the things Terry has said and comment that we seem to be talking at cross purposes.

    On one level, your description of statistics bears no relation to what I’m thinking of and, to my mind, what statistics really is. You seem to be talking about a philosophical model of the universe that is either deterministic or random. That is really irrelevant to statistical analysis.

    Statistics is a wonderful tool to wield Occam’s razor with. It also minimises the all-too-human tendency to see only the results one wants to see. Statistical tests ultimately tell you whether your chosen model actually has some predictive ability or is merely a terribly complicated black box. If your model has no ability to outperform a simple univariate model, then you should wield Occam’s razor. One of the most insightful and telling developments in the field of finance was that you can’t beat the random walk model of exchange rates. Billions of dollars have been expended on trying to beat the market and predict exchange rates even a few minutes ahead, but none of these models can beat a simple random walk model of exchange rates.

    But even regardless of that point – statistics is not about simple univariate random walk models. It is a tool for evaluating the results from your model. Your model can incorporate as many physical laws as it likes, and it will still need to be evaluated using statistics. It is never a case of physics or statistics – it must be physics and statistics. And the point here is that even if you are the best physicist in the world, there are elementary errors you can make when evaluating your model if you don’t apply the appropriate statistical techniques. I have discussed non-stationarity here because it seems most relevant, but there are other errors you can make. And the point of Cohn and Lins, as I understand it, is that climatologists are making elementary errors because they are not properly accounting for the autocorrelated nature of their data. It is not that you should throw out physics (although beware of Occam’s razor).

    [Response: Maybe I can interject. First, I think we really all agree that statistics and physics are both useful in this endeavour. The 'problem', such as it is, with Cohn and Lins' conclusions (not their methodology) is the idea that you can derive the LTP behaviour of the unforced system purely from the data. This is not the case, since the observed data clearly contain signals of both natural and anthropogenic forcings. Those forcings inter alia impart LTP into the data. The models' attribution of the trends to the forcings depends not on the observed LTP, but on the 'background' LTP (in the unforced system). Rasmus' point is that the best estimate of that is probably from a physically-based model - which nonetheless needs to be validated. That validation can come from comparing the LTP behaviour in models with forcing and the observations. Judging from preliminary analyses of the IPCC AR4 models (Stone et al, op cit), the data and models seem to have similar power-law behaviour, but obviously more work is needed to assess that in greater detail. What is not a good idea is to use the observed data (with the 20th Century trends) to estimate the natural LTP and then calculate the likelihood of the observed trend under the null hypothesis of that LTP structure. This 'purely statistical' approach is somewhat circular. Maybe that is clearer? -gavin]

    Comment by JS — 21 Dec 2005 @ 6:34 PM

  53. Gavin, I am not following this. When I fit an ARMA model to the CRU annual instrumental data using R, I had to detrend it first. It then yielded very high AR values (>0.95). R would not actually let me fit it without detrending it, and gave a message to that effect. I am not sure, as I haven’t gone into it in detail, but 1) the trend may not affect the AR coefficient, and hence the ‘trendiness’ of the series, 2) it is easy to get rid of anyway, and 3) the 20th Century trend may not have actually affected Cohn and Lins’ results. Anyway, you can get the ARMA structure, and hence the ‘trendiness’, in spite of the trend. Care to explain how forcing affects LTP estimates?
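    For readers who want to try this kind of exercise themselves, a hedged sketch of the procedure described (using a synthetic trend-plus-persistence series, since the exact CRU file and pre-processing used above are not specified):

      set.seed(8)
      yr <- 1:150
      temp <- 0.005 * yr + arima.sim(model = list(ar = 0.6), n = 150)  # trend + persistence
      detrended <- residuals(lm(temp ~ yr))         # remove a linear trend first
      fit <- arima(detrended, order = c(1, 0, 0))   # then fit an AR(1)/ARMA model
      fit$coef                                      # inspect the estimated AR coefficient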

    [Response: The forcing is not linear, so linear detrending will not remove correlations related to the changes in forcings. There will always be red noise in a climate record due to the thermal inertia of the ocean. The issue is whether there is any LTP in the absence of forcings. That is what is relevant for the null hypothesis. - gavin]

    Comment by David Stockwell — 21 Dec 2005 @ 9:55 PM

  54. Let me see if I can follow gavin’s reply to #52?

    Although we have accurate temperature data for the last two centuries, and it does show autocorrelation, you are hypothesising that this is due to the natural and anthropogenic forcing. We cannot extrapolate from this to previous times, because there was no anthropogenic forcing.

    It is very tempting to conclude that you are suggesting that it is only anthropogenic forcing which causes autocorrelation. Clearly, if natural forcings can cause autocorrelative behaviour, then we would have to conclude that previous temperature records could be autocorrelated.

    I have to say it is very difficult to understand why we should accept your hypothesis that only current conditions, and no other, should result in autocorrelated temperatures. It seems to me to be speculation.

    I believe that there are historical temperature records going back over thousands of years. Is there not evidence from these series of autocorrelation ?

    I do not understand your logic with respect to GCMs. You say that the behaviour of GCMs must be validated. But it appears to be an integral part of your case that you cannot do that validation with the temperature records of the last two centuries. How then will we ever be able to test whether GCMs adequately represent the autocorrelative (or otherwise) properties of nature, if we do not have a database to test them against?

    yours
    per

    [Response: It may help you to follow if you actually read what is said. All forcings (specifically solar, but also GHGs and aerosols, etc.) impart LTP, so the observed LTP is a mix of the LTP in the unforced system (which we want to know) and the LTP imparted by the forcings (which is already known). How then is one to estimate LTP in the unforced climate? We can take a GCM and see what it does in the absence of forcings. But to compare it to the real world we need to run it with as many of the observed forcings as we can manage. Then, comparing the forced GCM to the real world and doing the same analyses, we can ask whether the autocorrelation structure of the model is similar to that of the data. If so, we would then have some grounds for supposing that the background LTP as estimated from the control GCM might be reasonable. Why you appear to think that GCMs should not be tested against the real world is beyond me. - gavin]

    Comment by per — 21 Dec 2005 @ 10:34 PM

  55. Gavin:

    A request for clarification. When you talk about estimating the LTP in the “unforced system,” do you mean unforced just by AGW or do you mean unforced by everything, i.e., unforced by either AGW or “natural” forcings?

    I am guessing that you mean unforced by just AGW, and that you want to estimate the LTP of the “natural” or “non-AGW” system. If so, why isn’t a reasonable estimate of the natural LTP the LTP we observe in the natural + AGW system? Is there some reason to believe that AGW seriously distorts the estimated LTP? Perhaps because AGW is somehow larger or more persistent than “natural” forcings? Off the top of my head, I don’t see why this would be the case.

    Or maybe I’m just completely wrong. I don’t pretend to understand GCMs very well.

    Comment by Terry — 22 Dec 2005 @ 12:26 AM

  56. Re: #55
    Terry, Gavin means “unforced by everything,” not just by AGW forcings.

    Comment by Armand MacMurray — 22 Dec 2005 @ 12:48 AM

  57. Re: #56

    Armand:

    Oh. … Then I missed the boat on this one. I have no idea why you would care about long-term persistence in a system with no forcings whatsoever. I thought we were interested in whether recent trends were consistent with a system without AGW forcing. Then, I thought the relevant question was whether a non-AGW system with non-AGW forcings can exhibit trends comparable to the recent one, in which case the recent trend could be non-AGW.

    What am I missing? Does it have something to do with understanding whether the climate system itself can generate persistence (as opposed to persistence generated by the forcings)? Why should we care whether the persistence comes from the system or the forcings?

    I am beginning to suspect that I have missed something fundamental here. Perhaps I should just be quiet for a while.

    Comment by Terry — 22 Dec 2005 @ 2:48 AM

  58. Dear Gavin
    Re: 54
    I put my question to see if I could follow what you wrote. We agree that GCMs should be tested against real world data.

    I am a bit confused about your suggestion of a climate with no forcings; if I understand correctly, that would be a climate with no solar input, GHGs or aerosols, for example, and hence it is not a realistic prospect to have data from such a situation.

    Surely discussions about extrapolating the historical temperature records must use the historical temperature data, which are subject to all the normal, natural forcings? If you are accepting that the natural forcings impart LTP (or autocorrelation), surely you are accepting that the historical temperature record is autocorrelated? Surely, then, you are at one with Cohn and Lins?

    It has been brought to my attention that Pelletier (PNAS 99, 2546) describes autocorrelation in deuterium concentrations in the Vostok ice core over periods up to 200,000 years, which I would understand to be a temperature proxy.

    yours
    per

    [Response: Forcings in the sense meant are the changes to the solar input/GHGs/aerosols etc. Attribution is all about seeing whether you can match specific forcings to observed changes and this is used for all forcings solar, volcanic and GHG included. The baseline against which this must be tested is a control run with no changes in any forcing. -gavin]

    Comment by per — 22 Dec 2005 @ 4:03 AM

  59. Re #18,

    A project similar to CMIP-2, AMIP-2 (for atmospheric models only), compared the results of several (20) climate models for a first-order forcing: the distribution of the amount of the sun’s energy reaching the top of the atmosphere (TOA), dependent on latitude and longitude, in the period 1985-1988. See the recently published work of Raschke et al.

    Comment by Ferdinand Engelbeen — 22 Dec 2005 @ 7:34 AM

  60. I would like to pick up on a comment made by per (#58) about testing GCMs against real-world data. As an outsider to the GCM community, I did such an analysis by testing whether the exogenous inputs to a GCM (the radiative forcing of greenhouse gases and anthropogenic sulfur emissions) have explanatory power about observed temperature relative to the temperature forecast generated by the GCM. In summary, I found that the data used to drive the model have information about observed temperature beyond the temperature data generated by the GCM. This implies that the GCMs tested do not incorporate all of the explanatory power of the radiative forcing data in the temperature forecast. If you would like to see the paper, it is titled “A statistical evaluation of GCM’s: Modeling the temporal relation between radiative forcing and global surface temperature” and is available from my website:
    http://www.bu.edu/cees/people/faculty/kaufmann/index.html
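    A toy version of the kind of test described (entirely synthetic data and invented coefficients, and ignoring the unit-root complications discussed upthread, so treat it only as an illustration of the logic): regress observed temperature on the GCM-simulated temperature, then ask whether adding the forcing series still improves the fit.

      set.seed(9)
      n <- 120
      forcing <- cumsum(rnorm(n, mean = 0.01))     # a trending forcing series
      obs <- 0.6 * forcing + rnorm(n, sd = 0.1)    # 'observed' temperature
      gcm <- 0.5 * forcing + rnorm(n, sd = 0.1)    # 'GCM' temperature, an imperfect filter of the forcing
      base <- lm(obs ~ gcm)
      full <- lm(obs ~ gcm + forcing)
      anova(base, full)   # a significant improvement would suggest the GCM leaves
                          # some of the forcing's explanatory power unused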

    Needless to say, this paper was not well received by some GCM modelers. The paper would usually get two good reviews and one review that wanted more changes. Together with my co-author, I made the requested changes (including adding an “errors-in-variables” approach). The back and forth was so time-consuming that in the most recent review, one reviewer now argues that we have to analyze the newest set of GCM runs – the runs from 2001 are too old.

    The reviewer did not state what the “current generation” of GCM forecasts is! Nor would the editor really push the reviewer to clarify which GCM experiments would satisfy him/her. I therefore ask readers: what is the most recent set of GCM runs that simulate global temperature based on the historical change in radiative forcing, and where can I obtain these data?

    [Response: The 'current runs' are the ones made available as part of the IPCC 4AR. For your purposes, you will want to look at the simulations made for the 20th Century, and there are (I think) 20 different models from 14 institutions with multiple ensembles available. You need to register for the data, but there are no restrictions on the analyses you can do (info at http://www-pcmdi.llnl.gov/ipcc/about_ipcc.php ). Many of the runs have many more forcings than you considered in your paper, which definitely improve the match to the obs. However, I am a little puzzled by one aspect of your work - you state correctly that the realisation of the weather 'noise' in the simulations means that the output from any one GCM run will not match the data as well as a statistical model based purely on the forcings (at least for the global mean temperature). This makes a lot of sense and seems to be equivalent to the well-known result that the ensemble mean of the simulations is a better predictor than any individual simulation (specifically because it averages over the non-forced noise). I think this is well accepted in the GCM community, at least for the global mean SAT. That is why simple EBMs (such as Crowley (2000)) do as good a job of this as GCMs. The resistance to your work probably stems from a feeling that you are extrapolating that conclusion to all other metrics, which doesn't follow at all. As I've said in other threads, the 'cutting edge' for GCM evaluation is at the regional scale and for other fields such as precipitation; the global mean SAT is pretty much a 'done deal' - it reflects the global mean forcings (as you show). I'd be happy to discuss this some more, so email me if you are interested. - gavin]

    Comment by Robert K. Kaufmann — 22 Dec 2005 @ 12:33 PM

  61. In today’s news:
    http://www.nature.com/nature/journal/v438/n7071/abs/nature04348.html#a1

    Comment by Hank Roberts — 22 Dec 2005 @ 8:07 PM

  62. It seems to me that you are kicking the can down the road. Physics provides a connection between forcings and temperatures, and as I understand it, you want to use GCMs to obtain temperature series which can be compared to measurements (either proxy or instrumental) to determine whether there is a trend. You contrast this with statistical analysis of the temperature measurements.

    However, this merely displaces the problem to one of whether the forcings have trends, and how you will determine that. The physics of some of the forcings is in really rough shape; there are lots of forcings, some of them go one way, some go the other, etc. Worse, the GCMs only provide a range of temperatures, so even more uncertainty is introduced.

    A minor niggle: in answer to number 6, gavin did not point out that molecules can be excited by collisions as well as by absorption of photons. The interchange of energy between translational and vibrational modes leads to heating of the atmosphere by absorption of IR radiation and to the thermal population of vibrationally excited states of CO2 and H2O, which radiate.

    Comment by Eli Rabett — 23 Dec 2005 @ 1:35 AM

  63. Physics provides a connection between forcings and temperatures

    Indeed, now if you don’t understand the behaviour of your system (i.e. physics) you can’t model it, right?

    How about tidal effects?

    C. D. Keeling and T. P. Whorf, 2000, The 1,800-year oceanic tidal cycle: A possible cause of rapid climate change, PNAS, April 11, 2000; 97(8): 3814 – 3819.

    C. D. Keeling and T. P. Whorf, 1997, Possible forcing of global temperature by the oceanic tides, PNAS 1997 Aug 5;94(16):8321-8

    Comment by Hans Erren — 23 Dec 2005 @ 6:16 AM

  64. Hans Erren is making a typical argument in denial, that if you don’t understand everything you don’t understand anything. Just about every scientifically based issue that must be dealt with in the policy arena must endure this attack, for example the discussions about tobacco and CFCs. The argument is useful politically for two reasons. First it casts doubt on what the overwhelming science points to, second, it is an excuse to delay (more study needed). However, this argument ignores a basic truth about physics.

    A major reason that physics is useful is that even complicated systems have only one or two dominant “forcings” so that simple models incorporating only a few elements are useful, even accurate.

    Detailed modeling may require addition of complications, but it is rare to unheard of that all of the added features push the system in the same direction, and for the most part they cancel out. What more complex models allow you to do is to gain insight into the behavior of the system beyond the coarse grained simple model. That has been the story of the anthropic greenhouse effect, and why predictions of global effects have remained relatively stable over 100 years, no matter how many additional details are added to the model or additional forcings are included.

    The first paper that Hans points to raises an important point in its penultimate paragraph

    “Even without further warming brought about by increasing concentrations of greenhouse gases, this natural warming at its greatest intensity would be expected to exceed any that has occurred since the first millennium of the Christian era, as the 1,800-year tidal cycle progresses from climactic cooling during the 15th century to the next such episode in the 32nd century.”

    Those in denial insist that this is an either/or problem (either anthropically driven warming OR something else) and are busy throwing every piece of something else they can think of against the wall to see what sticks. Unfortunately it is a problem of A AND B, as is illustrated here. Frankly, at this time I have no idea of the importance of the cited oceanic tidal cycle, but C. D. Keeling was certainly not a doubter on greenhouse warming.

    A particularly frequent example of this diversion is the argument about CO2 mixing ratios rising after warming began to bring the planet out of an ice age. To the extent that the evidence supports this (the data are interesting, but perhaps not rock solid), it is clear that increased solar input caused by orbital effects increased CO2 concentrations, which then in turn reinforced the warming, a positive feedback as it were. What this says is that increasing CO2 mixing ratios clearly warms the surface. It does not matter whether the jolt is delivered by anthropic fossil fuel burning or by the effect of any other positive forcing.

    Comment by Eli Rabett — 23 Dec 2005 @ 2:05 PM

  65. I’m going to make a prediction (grin)

    The next big idea will be cooling the earth by using comet dust or other material from earth-orbit-crossing objects — blowing them up to introduce large quantities of fine dust into the upper atmosphere — creating “dust events” like those that show up in the climate cores, without introducing large volumes of water to the stratosphere; this will require finding a dry, dusty, frangible comet.

    Then, if other forcings change and we get too cold, follow with a nice wet comet, to warm things up.

    I suppose climate modeling is going to lead to terraforming, eventually. I hope we get it right.

    http://freefall.purrsia.com/ff1200/fv01190.htm

    Comment by Hank Roberts — 23 Dec 2005 @ 5:32 PM

  66. Eli, I feel I must reply to your remark “The interchange of energy between translational and vibrational modes leads to heating of the atmosphere by absorption of IR radiation and to the thermal population of vibrationally excited states of CO2 and H2O which radiate.” The heating of the atmosphere means, from energy considerations, that the radiation absorbed by the greenhouse gases does not equal the radiation emitted, the normal expression of Kirchhoff’s Law. Furthermore, the effect of temperature on greenhouse gas emissions is called Doppler broadening. The American Meteorological Society Glossary remarks that ‘At normal temperatures and pressures Doppler broadening is dwarfed by collision broadening, but high in the atmosphere Doppler broadening may dominate and, indeed, provides a means of remotely inferring temperatures.’ See:
    http://amsglossary.allenpress.com/glossary/browse?s=d&p=40
    In other words, the effects of Doppler broadening do not affect the troposphere where the climate is decided.

    Comment by Alastair McDonald — 24 Dec 2005 @ 4:54 AM

  67. #66, Alastair, that may be true, the stratosphere should warm for example, but that was not the case for 2005 (see the WMO 2005 summary: no stratospheric warming); all the action was in the troposphere, especially from shortly above the surface up to the tropopause, where the warming is happening. Although some controversy has been raised over radiosonde thermistor accuracy and satellite resolution, there are other ways to see this. Literally see it, especially in the polar regions, in the increasing brightness of twilight during the long night, a product of ever-expanding warmer upper air interfacing with colder surface air, trapping light more often than ever. There are other ways as well to show that it is mostly in the troposphere. I would suggest that water vapour is the biggest factor, being increased by greenhouse gases. I see this now, in clear polar nights with star magnitudes not as dim as in previous years (maximum of 4.7 mag.) and unusually warm surface temperatures given the lack of clouds.

    Comment by wayne davidson — 24 Dec 2005 @ 4:27 PM

  68. Polar bears treading on thin ice

    Climate change blamed for decline in population along Hudson Bay coast

    http://www.theglobeandmail.com/servlet/ArticleNews/TPStory/LAC/20051224/POLAR24/TPEnvironment/

    Comment by Stephen Berg — 24 Dec 2005 @ 6:19 PM

  69. http://www.nwtwildlife.rwed.gov.nt.ca/Publications/speciesatriskweb/polarbear.htm

    Population Size and Trends
    The world population of polar bears has been estimated at 22,000-27,000 animals. These bears live in 10 to 15 separate sub-populations. There is little or no contact between groups. Canada has the largest population with an estimated 15,000 polar bears. Three sub-populations, an estimated 3,000 bears, can be found along the arctic coasts of the NWT. The two largest of these populations are stable. The third, a small population shared with Nunavut, is increasing in size.

    Comment by Hans Erren — 25 Dec 2005 @ 7:52 AM

  70. #69: The trend data they’re reporting are for the Northwest Territories ONLY. That’s the NWT wildlife agency site you’re referencing.

    22,000-27,000 animals worldwide.

    Of these, 3,000 can be found along the arctic coasts of the NWT. And of these 3,000 ONLY, two [sub]populations are stable; the third, a SMALL population (hundreds? doesn’t say), is increasing.

    The 3,000 do not form a random sample, you can’t extrapolate data from the small numbers in the NWT to the worldwide population.

    Predictions that polar bears may face extinction are based on their natural history. They den on land, they wander and feed on polar ice after the winter freeze sets in. If polar ice sheets no longer connect with land in winter, polar bears will disappear, that’s a given. Even if there remains a winter freeze-up connecting ice sheets with land, if the bears are stuck on land too long, they’ll starve or be in poor health before they can travel to their seal hunting grounds on ice. Unlike other bear species, which are omnivorous, polar bears are more strictly carnivorous (they’ll eat vegetation as an extreme measure only).

    Hudson’s Bay is apparently warming to the point where changes in the amount of time it’s frozen are affecting polar bear populations. Most of the Hudson’s Bay summer habitat (the most famous being the area surrounding Churchill) lies far to the south of the summer habitat utilized by the NWT subpopulations you reference. We would expect the effects of warming to appear in Churchill in advance of problems showing up further north.

    Comment by Don Baccus — 25 Dec 2005 @ 3:00 PM

  71. In reply to #66.

    1. Kirchhoff explicitly allowed for systems which only absorb and emit in restricted wavelength regions (as molecules do). The argument he presented starts by considering two parallel plates, one of which absorbs and emits at all wavelengths, and the other of which only absorbs and emits between LAMBDA and LAMBDA + dLAMBDA (if HTML was designed at CERN, why the devil didn’t Berners-Lee build decent sub/superscripting and Greeks into the thing?). At equilibrium, the ratios of emissivity to absorptivity of both systems are equal. This can be generalized to all substances with a zeroth-law-of-thermodynamics type argument. In other words, Kirchhoff’s law applies to molecules.

    2. High in the atmosphere is a relative concept. For practical purposes when considering the greenhouse effect, the top of the atmosphere is a few kilometers up, while much of the absorption and emission of radiation occurs relatively close to the surface. In any case, the number density of molecules at 6 km is only about a factor of 2 less than at the surface (the average velocity decreases by ~10%).

    Comment by Eli Rabett — 27 Dec 2005 @ 12:36 AM

  72. re 64:

    Eli, I published a peer-reviewed model calculation on coal maturation using physics first principles. GCMs are not using first principles; they use parameterisations to calculate sub-cell relationships, e.g. between sea surface temperature and precipitation, as individual thunderstorms cannot be modeled.
    Arctic cloud modeling is a joke.

    [Response: There is no parameterisation in a GCM that connects SST to precipitation. Arctic cloud modelling is difficult, it is not a joke. -gavin]

    Comment by Hans Erren — 27 Dec 2005 @ 12:04 PM

  73. re 70
    http://www.polarbearsinternational.org/bear-facts/

    Polar Bear Status Report
    Polar bears are a potentially threatened (not endangered) species living in the circumpolar north. They are animals which know no boundaries. They pad across the ice from Russia to Alaska, from Canada to Greenland and onto Norway’s Svalbard archipelago. No adequate census exists on which to base a worldwide population estimate, but biologists use a working figure of perhaps 22,000 to 25,000 bears with about sixty percent of those living in Canada.

    In most sections of the Arctic where estimates are available, polar bear populations are thought to be stable at present. Counts have been decreasing in Baffin Bay and the Davis Strait, where about 3,600 bears are thought to live, but are increasing in the Beaufort Sea, where there are around 3,000 bears.

    In the 1960s and 1970s, polar bears were under such severe survival pressure that a landmark international accord was reached, despite the tensions and suspicions of the Cold War. The International Agreement on the Conservation of Polar Bears was signed in Oslo, November 15, 1973 by the five nations with polar bear populations (Canada, Denmark which governed Greenland at that time, Norway, the U.S., and the former U.S.S.R.).

    The polar bear nations agreed to prohibit random, unregulated sport hunting of polar bears and to outlaw hunting the bears from aircraft and icebreakers as had been common practice. The agreement also obliges each nation to protect polar bear denning areas and migration patterns and to conduct research relating to the conservation and management of polar bears. Finally, the nations must share their polar bear research findings with each other. Member scientists of the Polar Bear Specialist Group meet every three to four years under the auspices of the IUCN World Conservation Union to coordinate their research on polar bears throughout the Arctic.

    With the agreement in force, polar bear populations slowly recovered. The Oslo agreement is one of the first and most successful international conservation measures enacted in the 20th century.

    Comment by Hans Erren — 27 Dec 2005 @ 12:13 PM

  74. re 73.

    Hans,

    On average, the data used in your summary on polar bear status are 10 years out of date and of poor-to-fair quality.
    See: http://pbsg.npolar.no/status-table.htm

    Comment by Pat Neuman — 27 Dec 2005 @ 1:36 PM

  75. re 74:
    Thanks for the update
    The table cited shows a lot of unknowns, two decreasing and two increasing populations.
    The decreasing groups (3600) are the most “harvested” with 202 kills per year. I’d suggest a moratorium here…

    Comment by Hans Erren — 27 Dec 2005 @ 4:41 PM

  76. #73: In what way does your comment address the point that predictions of future troubles for polar bears are based on their natural history? Are you seriously trying to argue that the fact that populations are stable today indicates that they’ll survive if their habitat changes significantly?

    Such thinking is just silly. Predicting the polar bear's future depends on the accuracy of two things: our ability to predict climate change and the species' response to the habitat change that follows.

    Current population numbers aren’t relevant.

    Comment by Don Baccus — 27 Dec 2005 @ 4:46 PM

  77. Re: #73,

    From the same site:

    http://www.polarbearsinternational.org/bear-facts/climate-change/

    “Climate Change

    The Arctic's climate is changing, with a noticeable warming trend that is affecting polar bears. The region is experiencing the warmest air temperatures in four centuries. The Intergovernmental Panel on Climate Change, the U.S. EPA, and the Arctic Climate Impact Assessment all report on the effect of this climatic change on sea-ice patterns. A recent report notes that there has been a 7% reduction in ice cover in just 25 years and a 40% loss of ice thickness. It also predicts a mostly ice-free arctic summer by 2080 if present trends continue. Many scientists believe that the Arctic will continue to grow warmer as a result of human activity, namely, the introduction into the atmosphere of increasing quantities of carbon dioxide and other "greenhouse gases". While there is no consensus on whether human activity is the most significant factor, the Arctic has in fact been warming, whatever the cause.

    Anecdotal evidence indicates that polar bears may be leaving the sea ice to den on land in winter. In Russia, large numbers of bears have been stranded on land by long summers that prevent the advance of the permanent ice pack. Some Inuit hunters in Canada say they can no longer hunt polar bears in the spring because of early ice melts. In the Hudson Bay area, research (sponsored in part by PBI) has found that areas of permafrost have declined, leaving polar-bear denning areas susceptible to destruction by forest fires in the summer. A warm spring might also lead to increased rainfall, which can cause dens to collapse.

    Polar bears depend on a frozen platform from which to hunt seals, the mainstay of their diet. Without ice, the bears are unable to reach their prey. In fact, for the western Hudson Bay population of polar bears (the population near Churchill in the Province of Manitoba, Canada), researchers have correlated earlier melting of spring ice with lower fitness in the bears and lower reproduction success. If the reduced ice coverage results in more open water, cubs and young bears may also not be able to swim the distances required to reach solid ice.

    Further north, in areas where the ice conditions have not changed as much, seal populations have grown (either through migration or more successful reproduction) and polar bear populations are expanding.

    Because polar bears are a top predator in the Arctic, changes in their distribution or numbers could affect the entire arctic ecosystem. There is little doubt that ice-dependent animals such as polar bears will be adversely affected by continued warming in the Arctic. It is therefore crucial that all factors which may affect the well-being of polar bears be carefully analyzed. Conservative precautionary decisions can only be made with a full understanding of the living systems involved.”

    Comment by Stephen Berg — 27 Dec 2005 @ 11:20 PM

  78. Eli, in 71.1 you describe a thought experiment where a black body is separated from a grey body by air or a vacuum. Even in the case where they are separated by air, the gas plays no part in the experiment. For you to then assert that air is a grey body is a non sequitur, because Kirchhoff is clearly ascribing a solid to that role. Moreover, Kirchhoff was the first to discover that the strength of emission lines is independent of the radiation in which they form. In other words, as the background radiation increases they change from emission into absorption lines. As you must be aware, lines are not formed by the same process as that which causes continuous blackbody radiation, i.e. the effect of electronic vibrations. The radiation from greenhouse gases is due to molecular vibrations.

    In reply to 71.2: when I say high, I mean above the height at which the radiation at the effective temperature is emitted. Doppler broadening starts to be important at heights above 40 km, well above the 6 km where conventional wisdom says the greenhouse effect operates.

    Comment by Alastair McDonald — 28 Dec 2005 @ 7:26 AM

  79. re 77:

    The region is experiencing the warmest air temperatures in four centuries.

    four?

    Which means 1600 was hotter than present?

    Comment by Hans Erren — 28 Dec 2005 @ 8:14 PM

  80. Re: #79, “Which means 1600 was hotter than present?”

    No. Not necessarily. The statement says only that temperatures today are greater than those of any year within the past 400 or so years.

    It does not state whether the temperatures prior to 1600 were warmer than today, but given the great accuracy of the Hockey Stick graph, it is likely that the temperatures today have been the warmest for far longer than 400 years.

    Comment by Stephen Berg — 29 Dec 2005 @ 5:21 AM

  81. Re 79. No, it means that the records only go back 400 years, to the time when Frobisher, Davis, Baffin, and Hudson were the first Europeans to explore there and make records of the conditions. That was during the Little Ice Age, not long before the Maunder Minimum, the cold period which drove the Vikings out of Greenland, so it seems rather silly for you to assert that temperatures in 1600 AD were warmer than today.

    [Response:I'm not sure if we can conclude that the LIA (Little Ice Age) drove the Vikings out of Greenland (where did they go?), although it may be one plausible explanation. There could also be other explanations for why the Viking settlements on Greenland perished. After all, the Inuits seemed to manage to survive. Furthermore, the conditions on Greenland may have been local and not necessarily the same as for the entire globe. -rasmus]

    Comment by Alastair McDonald — 29 Dec 2005 @ 8:17 AM

  82. Even though I do not concur with the views of this article (Naturally trendy? rasmus; 16 Dec 2005 @ 4:50 pm), I must congratulate the author for discussing and disseminating to climatologists the recent work of Cohn and Lins and, indirectly, the consequences of the related natural behaviour (that this work examines) to statistical inferences and modelling.

    In fact this “trendy” behaviour has been known for at least 55 years since Hurst reported it as a geophysical behaviour or for 65 years since Kolmogorov introduced a mathematical model for this. It is known under several, more or less successful, names such as: Long Term Persistence, Long Range Dependence, Long Term Memory, Multi-Scale Fluctuation, the Hurst Phenomenon, the Joseph Effect and Scaling Behaviour; other names have been used for mathematical models describing it such as: Wiener Spiral (the first term used by Kolmogorov), Semi-Stable Process, Fractional Brownian Noise, Self-Similar Process with Stationary Intervals, or Simple Scaling Stochastic Process.

    I think that this behaviour relates to climatology far more than to any other discipline and I wonder why it has not been generally accepted so far in climatological studies (or am I wrong?). In contrast in many engineering studies, for example of reservoir designs, the consequences of this behaviour are analysed. Also, the same behaviour has been studied by economists and computer and communication scientists in their own time series.

    Of course, this behaviour is not met only in the record analyzed in Cohn and Lins' work. On the contrary, a lot of studies have provided evidence that it is probably omnipresent in all series and at all times, i.e., not only in the 20th century – but it can be seen only in long records. To mention a single example, the Nilometer data series (maximum and minimum levels of the Nile River), which clearly exhibits this behaviour, extends from the 7th to at least the 13th century AD. Cohn and Lins' article contains a lot of references to works that have provided this evidence. For those who may be interested in more recent references, here is a list of three contributions of mine, trying to reply to some questions related to the present discussion:

    What is a “trend”? What is the meaning of a “nonstationary time series”? How are these related to the scaling behaviour? See: Koutsoyiannis, D., Nonstationarity versus scaling in hydrology, Journal of Hydrology, 2006 (article in press; http://dx.doi.org/10.1016/j.jhydrol.2005.09.022 ).

    Can simple dynamics (which do not change in time) produce scaling (“trendy”, if you wish) behaviour? See: Koutsoyiannis, D., A toy model of climatic variability with scaling behaviour, Journal of Hydrology, 2006 (article in press; http://dx.doi.org/10.1016/j.jhydrol.2005.02.030 ).

    Why does the scaling behaviour (rather than the more familiar behaviours described by classical statistics) seem to be so common in nature? See: Koutsoyiannis, D., Uncertainty, entropy, scaling and hydrological stochastics, 2, Time dependence of hydrological processes and time scaling, Hydrological Sciences Journal, 50(3), 405-426, 2005 (http://www.extenza-eps.com/IAHS/doi/abs/10.1623/hysj.50.3.405.65028;jsessionid=noyCMpKB1OFcDF7U5H?cookieSet=1&journalCode=hysj).
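    [Editorial illustration: for readers unfamiliar with the scaling behaviour discussed above, the sketch below shows one common way of estimating the Hurst coefficient, the aggregated-variance method. The synthetic AR(1) series used as input is only a placeholder, not any of the records cited in this thread.]

    ```python
    import numpy as np

    def hurst_aggvar(x, scales=(1, 2, 4, 8, 16, 32, 64)):
        """Estimate the Hurst coefficient H with the aggregated-variance method.

        For a stationary scaling process the variance of the k-point local means
        behaves like k**(2H - 2), so a log-log regression of that variance
        against k has slope 2H - 2.
        """
        x = np.asarray(x, dtype=float)
        log_k, log_var = [], []
        for k in scales:
            n = len(x) // k
            if n < 10:
                continue
            block_means = x[: n * k].reshape(n, k).mean(axis=1)
            log_k.append(np.log(k))
            log_var.append(np.log(block_means.var(ddof=1)))
        slope, _ = np.polyfit(log_k, log_var, 1)
        return 1.0 + slope / 2.0

    # Placeholder input: an AR(1) series, which has only short-term persistence.
    rng = np.random.default_rng(0)
    x = np.zeros(4096)
    for t in range(1, x.size):
        x[t] = 0.7 * x[t - 1] + rng.normal()

    # White noise gives H ~ 0.5; the AR(1) persistence pushes the estimate above
    # 0.5 at these scales, while a true scaling process gives a stable H > 0.5.
    print(f"estimated Hurst coefficient: {hurst_aggvar(x):.2f}")
    ```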

    [Response:Thanks for your comment. I am not saying there is no long-term persistence; that is well known. But I'm saying that there are physical reasons for such behaviour, and that must be acknowledged in order to understand the phenomenon. The time structure - persistence and some of the short-term hikes which can be ascribed to 'natural variations' - can for instance be explained by the oceans' heat capacity and either changes in natural forcings (volcanoes or solar) or chaos. They do not happen spontaneously and randomly (I suppose there was some confusion about what I meant by 'random', which I used in the meaning 'just happens' without a cause). Thus, my point is that physical processes are at play giving rise to these phenomena, and thus pure statistical models do not reveal all sides of the process. Although these statistical models may give a similar behaviour to the variations of the earth - if their parameters are optimally set - they do not necessarily prove that the process always behaves that way. There may be changes in the circumstances (e.g. different external forcing). There have been claims that GCMs have not been proved to be representative of our climate, but I believe this is more true of the statistical models.
    A change in the global mean temperature is different to, say the flow of the Nile, since the former implies a vast shift in heat (energy), and there has to be a physical explanation for this. It just does not happen by itself. Again, some of such temperature variations can be explained by changes in the forcing. Hence, when dealing with attribution, the question is to what degree the variations are 'natural'. When one uses the observations for deriving a null-distribution, and one does not know how much of the trend is natural and how much is anthropogenic, then this may lead to circular reasoning and a false acceptance of the null hypothesis. This is not a problem with GCMs, which can be run with natural forcing only and with combined natural and anthropogenic forcing. The GCMs also give a good description of our climate's main features. -rasmus]

    Comment by Demetris Koutsoyiannis — 30 Dec 2005 @ 12:20 AM

  83. To #78

    1. A black body is one whose absorptivity and emissivity are unity at all wavelengths.

    2. A grey body is one whose absorptivity and emissivity are the same at all wavelengths but less than unity.

    Therefore what I described in 71 is NOT a grey body but one whose absorptivity and emissivity change as a function of wavelength, e.g. a molecule.

    Kirchhoff's law is quite general. A simple derivation can be found at http://ceos.cnes.fr:8100/cdrom/ceos1/science/dg/dg10.htm.

    The idea is that a body (such as a volume of atmosphere, a point that Alastair does not appear to recognize) at equilibrium has a constant temperature. Therefore the amounts of energy absorbed and emitted must be equal. It is then trivial to show that the absorptivity and emissivity at any wavelength must be equal (see the URL for a detailed derivation)

    For a molecule, the absorptivity is zero at most wavelengths and so is the emissivity. Both are non-zero only where there are molecular absorption lines.

    Kirchhoff's law applies to the atmosphere, both for components that consist only of molecules and those where there are aerosols (for example clouds).

    To move to Alastair's second point, line shapes are determined by a combination of pressure broadening (Lorentzian**) and Doppler broadening (Gaussian). The convolution of these two functions produces a Voigt line shape. This is the appropriate shape to use under atmospheric conditions. While Doppler broadening is not dominant in the troposphere, it cannot be neglected.

    The Gaussian Doppler-broadening profile is determined by the velocity distribution of the molecules and is thus only temperature dependent. The Lorentzian** "pressure" broadened lineshape is determined by the number of collisions per second and their duration. At normal temperatures and pressures (and lower in both parameters) one can safely assume that collisions are binary and instantaneous, which yields a Lorentzian line shape. The number of collisions per second is determined both by number density and temperature, thus pressure broadening is also a function of temperature. You could look up the various parameters for CO2 in databases.

    Avert your eyes if you are really not interested in ultimate details.
    **If you get REALLY good at measuring line shapes you have to start allowing for collision times that are finite (collision time means the time during which the collision partners interact). This modifies the Lorentzian line shape and is called a Chi factor. You could google it.
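    [Editorial illustration: the Voigt shape mentioned above is the convolution of a Gaussian (Doppler) profile with a Lorentzian (pressure-broadened) one, and is commonly evaluated through the Faddeeva function. The sketch below uses made-up width parameters purely for illustration.]

    ```python
    import numpy as np
    from scipy.special import wofz

    def voigt(x, sigma, gamma):
        """Voigt profile: a Gaussian of standard deviation sigma (Doppler)
        convolved with a Lorentzian of half-width gamma (pressure broadening),
        evaluated via the Faddeeva function wofz."""
        z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
        return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

    # Made-up widths in arbitrary wavenumber units; in the lower troposphere the
    # Lorentzian (pressure) width dominates the Gaussian (Doppler) width.
    x = np.linspace(-1.0, 1.0, 2001)
    profile = voigt(x, sigma=0.01, gamma=0.08)
    print(f"peak value: {profile.max():.2f}")
    print(f"area over this window (close to 1, minus truncated tails): {np.trapz(profile, x):.3f}")
    ```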

    Comment by Eli Rabett — 31 Dec 2005 @ 1:34 AM

  84. A few points on the response to #82 by rasmus, which I appreciate:

    1. "Statistical questions demand, essentially, statistical answers." (Here I have quoted Karl Popper's second thesis on quantum physics interpretation – from his book "Quantum Theory and the Schism in Physics".) The question whether "The GCMs [...] give a good description of our climate's main features" (quoted from rasmus's response) or not is, in my opinion, a statistical question, as it implies comparisons of real data with model simulations. A lot of similar questions (e.g., Which GCMs perform better? Are GCM future predictions good enough? Do GCM simulations reproduce important natural behaviours?) are all statistical questions. Most of all, the "attribution" questions (to quote again rasmus, "how much of the trend is natural and how much is anthropogenic" and "to which degree are the variations 'natural'") are statistical questions as they imply statistical testing. And undoubtedly, questions related to the uncertainty of future climate are clearly statistical questions. Even if one believes that the climate system is perfectly understood (which I do not believe, thus not concurring with rasmus), its complex dynamics entail uncertainty (this is well documented nowadays). Thus, I doubt that one can avoid statistics in climatic research.

    2. Correct statistical answers demand correct statistics, appropriate for the statistical behaviours exhibited by the phenomena under study. So, if it is "well known" that there is long-term persistence (I was really happy to read this in rasmus's response), then the classical statistical methods, which are essentially based on an Independent Identically Distributed (IID) paradigm, are not appropriate (see the editorial sketch after these numbered points). This I regard as a very simple, almost obvious, truth, and I wonder why climatic studies are still based on IID statistical methods. This query, as well as my own answer, which is very similar to Cohn and Lins' one, I expressed publicly three years ago (Koutsoyiannis, D., Climate change, the Hurst phenomenon, and hydrological statistics, Hydrological Sciences Journal, 48(1), 3-24, 2003 – http://www.extenza-eps.com/IAHS/doi/abs/10.1623/hysj.48.1.3.43481). In this respect, I am happy for the discussion of Cohn and Lins' work, hoping that this discussion will lead to more correct statistical methods and more consistent statistical thinking.

    3. Consequently, to incorporate the scaling behaviour in the null hypothesis is not a matter of “circular reasoning”. Simply, it is a matter of doing correct statistics. But if one worries too much about “circular reasoning” there is a very simple technique to avoid it, proposed ten years ago in this very important paper: H. von Storch, Misuses of statistical analysis in climate research. In H. von Storch and A. Navarra (eds.): Analysis of Climate Variability Applications of Statistical Techniques. Springer Verlag, 11-26, 1995 (http://w3g.gkss.de/staff/storch/pdf/misuses.pdf). This technique is to split the available record into two parts and formulate the null hypothesis based on the first part.

    4. Using probabilistic and statistical methods should not be confused with admitting that things "happen spontaneously and randomly" or "without a cause" (again I have quoted rasmus's response). Rather, it is an efficient way to describe uncertainty and even to make good predictions under uncertainty. Take the simple example of the movement of a die and, eventually, its outcome. We use probabilistic laws (in this case the Principle of Insufficient Reason or, equivalently, the Principle of Maximum Entropy) to infer that the probability of a certain outcome is 1/6, because we cannot arrive at a better prediction using a deterministic (causative) model. This is not a denial of causal mechanisms. If we had perfectly measured the position and momentum of the die at a certain moment and the problem at hand was to predict its position one millisecond later, then the causal mechanisms would undoubtedly help us to derive a good prediction. But if the lead time needs to be a few seconds rather than one millisecond (i.e. if we are interested in the eventual outcome), then the causal mechanisms do not help and the probabilistic answers become better. May I add here my opinion that the climate system is perhaps more complex than the movement of a die. And may I support this thesis by noting that statistical thermophysics, which is based on probabilistic considerations, is not at all a denial of causative mechanisms. Here, I must admit that I am ignorant of the detailed structure of GCMs, but I cannot imagine that they are not based on statistical thermophysics.

    5. I have difficulty understanding rasmus's point "A change in the global mean temperature is different to, say the flow of the Nile, since the former implies a vast shift in heat (energy), and there has to be a physical explanation for this." Is it meant that there should not be physical explanations for the flow of the Nile River? Or is it meant that the changes in this flow do not reflect changes in rainfall or temperature? I used the example of the Nile for three reasons. Firstly, because its basin is huge and its flow manifests an integration of climate over an even more extended area. Secondly, because it is the only case in history where we have an instrumental record of a length of so many centuries (note that the measurements were taken in a solid construction known as the Nilometer), and the record is also validated by historical evidence, which, for example, attests that there were long periods with consecutive (or very frequent) droughts and others with much higher water levels. And thirdly, because this record clearly manifests a natural behaviour (it is totally free of anthropogenic influences because it covers a period starting in the 6th century AD).

    6. I hope that my above points will not be given a "political" interpretation. The problem I try to address is not related to the political debate about the reduction of CO2 emissions. I simply believe that scientific views have to be as correct and sincere as possible; I also believe that the more correct and sincere these views are, the more powerful and influential they will be.
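    [Editorial sketch, referenced in point 2 above: a small synthetic example of why an IID assumption can overstate the significance of a trend when a series is strongly autocorrelated. The lag-1 effective-sample-size correction used here is one standard remedy; the AR(1) data are purely illustrative and not any series discussed in this thread.]

    ```python
    import numpy as np
    from scipy import stats

    def trend_pvalues(y):
        """p-value of the least-squares trend, first assuming IID residuals and
        then shrinking the sample size to n_eff = n*(1 - r1)/(1 + r1), where r1
        is the lag-1 autocorrelation of the residuals."""
        y = np.asarray(y, dtype=float)
        n = y.size
        t = np.arange(n, dtype=float)
        slope, intercept = np.polyfit(t, y, 1)
        resid = y - (slope * t + intercept)
        r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
        se = resid.std(ddof=2) / (t.std() * np.sqrt(n))   # std. error of the slope
        p_iid = 2 * stats.t.sf(abs(slope) / se, df=n - 2)
        n_eff = n * (1 - r1) / (1 + r1)
        se_adj = se * np.sqrt(n / n_eff)
        p_adj = 2 * stats.t.sf(abs(slope) / se_adj, df=max(n_eff - 2, 2))
        return p_iid, p_adj

    # Synthetic, strongly persistent series with no deterministic trend imposed.
    rng = np.random.default_rng(1)
    y = np.zeros(120)
    for i in range(1, y.size):
        y[i] = 0.9 * y[i - 1] + rng.normal()

    p_iid, p_adj = trend_pvalues(y)
    print(f"trend p-value assuming IID errors: {p_iid:.3f}")
    print(f"trend p-value with effective sample size: {p_adj:.3f}")
    ```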

    [Response:Thank you Demetris! I think this discussion is a very good one and it is important to look at the different sides. I do think you make some very valid points. I for one agree that the climate system is complex; however, I think that although we do not have 'perfect knowledge' (whatever one means by this term, if one chooses to be philosophical...) about our climate, we still have sufficient knowledge to build climate models and make certain statements. I am not an 'anti-statistics' guy. Statistics is a fascinating field. In fact, most of my current work heavily embraces statistics. But statistics is only so much, and there are, as you say, inappropriate ways and appropriate ways to apply statistics. In addition, I argue that you need the physical insight (theory). I do not propose that the Nile river levels are not a result of physical processes, but I argue that the physical processes behind the river discharge are different to those behind the global mean temperature, and the displacement of a molecule (Brownian motion) if you like. Because they are affected by different processes, there is no a priori reason to think that they should behave similarly. Yes, they may have some similar characteristics, but the global mean temperature represents the heat content of the surface air averaged over the globe, whereas the Nile river discharge is affected by the precipitation over a large river basin, which in turn is affected by the transport of humidity and, e.g., the trajectories of storms (or whatever cloud systems cause the rainfall). When it comes to using statistical models to derive null-distributions for testing the significance of trends, it should be noted that the actual data have always been subject to natural variations in the forcing, be it the orbital parameters (Milankovitch) for the proxy data, volcanoes, solar or anthropogenic (GHGs or landscape changes). I think that only such changes in forcing can produce changes in the global mean temperature, because energy has to be conserved. If you use ARIMA-type models tuned to mimic the past, then the effect of changes in forcing is part of the null-process. For instance, if you use proxy data that include the past ice ages, then the transitions between warm eras and cold glacial periods are part of the null-process. You may say that over an entire record of hundreds of thousands of years you would see little trend, but that's not really the issue. We are concerned about time scales of decades to a century when we talk about global warming. I would therefore argue that at these time scales there would be a significant trend during the transitional periods between warm interglacial periods and the ice ages. We also know (or think) that there are physical reasons for the ice age cycle (changes in orbital parameters). Now, the transition between the ice ages and warmer climates is slow compared to the present global warming. Also, we know that the orbital parameters are not responsible for the current warming. One can more or less rule out a solar effect as well, as there is little evidence for any increase in solar activity since 1950. Have to dash now. Thanks for your comments and a Happy New Year to you! -rasmus]

    Comment by Demetris Koutsoyiannis — 31 Dec 2005 @ 11:51 AM

  85. Re #84 and the response to it:
    I do not think we have any foolproof physical intuition as to what the noise level in a global mean temperature time series should be. The claim that only changes in forcing can produce changes in global mean temperature because of "conservation of energy" is clearly not correct. One expects internal variability to create some noise in the balance between incoming and outgoing radiation; additionally, the global mean temperature need not be proportional to the total energy in the ocean-atmosphere system. ENSO produces a global mean temperature response of a few tenths of a degree, after all. One can easily imagine that variability in oceanic convection and associated sea ice changes could produce even larger variations on longer time scales. I am impressed with how small the noise level in the global mean temperature generated by GCMs is, and how small it seems to be in reality, but it is not obvious to me why it is that small. The size of this noise level is centrally important, but it would be better to say that this is an emergent property of our models rather than something that we understand intuitively from first principles. We should not overstate our understanding of the underlying physics. Given a fixed strength of the "noise source", the plausible argument that the resulting noise level is proportional to the climate sensitivity (the externally forced perturbations and naturally occurring fluctuations being restored by the same "restoring forces") is currently being discussed on another RC thread. So the substantial uncertainty in climate sensitivity should translate into uncertainty in the low-frequency noise level for global mean temperature.
    With regard to questions of statistical methods, I would only add that analysis of the global mean temperature, however sophisticated one's methods, is unnecessarily limiting; what we really need are alternative approaches to multi-variate analyses of the full space-time temperature record. A focus on the global mean IS arbitrary.

    [Response:Thanks Isaac for your comment. I agree with you that internal variations can produce fluctuations in the global mean temperature, and that if the system is chaotic (which I believe), the magnitude of these variations is determined by the system's attractor. Hence, there may be internal shifts in heat which subsequently affect the global surface mean estimates, like ENSO (this is a physical reason for why the temperature varies). I agree that ENSO can cause temperature fluctuations of a few tenths of a degree Centigrade, but the time scale of ENSO (~3-8 years) is too short to explain the recent temperature hike that has taken place over the last 3 decades. However, in order for the global mean temperature to move away from the attractor, I believe you need to change the energy balance as well. I interpret the evidence of past glacial periods as periods when this happened, and would expect an enhanced greenhouse effect to have a similar effect. I think you are absolutely right that full space-time multi-variate analyses are needed to resolve this question. -rasmus]

    Comment by Isaac Held — 1 Jan 2006 @ 10:23 PM

  86. Comment #84, response, rasmus wrote: … statistics is only so much, and there are, as you say, inappropriate ways and appropriate ways to apply statistics. …

    People, please review the procedure below, which was used in providing the latest flood advisory for the St. Louis River at Scanlon, Minnesota.


    Procedure used to make flood advisories at Scanlon, MN

    Please go to the web page at:
    http://www.crh.noaa.gov/images/ncrfc/data/ahps/SCNM5.traces.gif

    The page shows ensemble trace plots for:

    St. Louis River at Scanlon, MN
    Latitude 47.1 Longitude 92.6
    Forecast for the period 12/26/2005 12hr – 3/26/2006 12 hr
    Conditional simulation based on current conditions of 12/19/2005

    Right side of page shows: [Trace Start Date]

    Below the column heading [Trace Start Date] are historical year dates
    (1948-2002) for processed Precipitation and Temperature time series data (P and T data for basin area, six hourly basis, units mm, Celsius).

    The processed P and T data were used along with starting condition model states (snow water equivalent, soil moisture, frozen ground indexes on 12/19/2005) to generate 55 conditional flow traces shown in the plot.

    The conditional flow traces at Scanlon show that most traces with large values (peaks greater than 6500 cubic feet per second before the ending date of 3/26/2006) were based on P and T input time series from later years (1975-2002) of the historical period (1948-2002).

    I think that the 90-day trace plots at Scanlon indicate that seasonal warmth producing snowmelt came earlier in the year during the more recent period (1975-2002) than during the older period (1948-1974) used in generating the conditional flow traces. The conditional flow traces are being used (operationally) to provide exceedance probabilities for maximum river flow and stage at Scanlon for the forecast period (12/26/2005 12hr – 3/26/2006 12 hr).
    http://www.crh.noaa.gov/images/ncrfc/data/ahps/SCNM5.exc.90day.gif
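    [Editorial illustration: for readers unfamiliar with how exceedance probabilities are typically derived from conditional ensemble traces like these, the sketch below ranks hypothetical 90-day peak flows with Weibull plotting positions. The numbers are made up; this is not the NWS/AHPS code.]

    ```python
    import numpy as np

    # Hypothetical peak flows (cfs) for the 90-day window, one value per
    # conditional trace (the operational product uses 55 traces, 1948-2002).
    peaks = np.array([1200, 1500, 1800, 2100, 2600, 3100, 3500, 4200,
                      4800, 5300, 6100, 6900, 7400, 8200])

    # Weibull plotting position: P(exceedance) = rank / (n + 1), largest flow has rank 1.
    order = np.argsort(peaks)[::-1]
    n = len(peaks)
    for rank, idx in enumerate(order, start=1):
        p = rank / (n + 1)
        print(f"peak {peaks[idx]:5d} cfs   exceedance probability ~ {p:.2f}")

    # Probability that the 90-day peak exceeds a given threshold, e.g. 6500 cfs:
    threshold = 6500
    print("P(peak > 6500 cfs) ~", np.mean(peaks > threshold))
    ```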

    To find the probabilities of exceedance of maximum flow and stage at other river gage sites in the Upper Midwest:
    click: New Probabilistic Products now Operational
    at: http://www.crh.noaa.gov/ncrfc/
    and proceed by clicking the small circles on the maps of the Upper Midwest.

    People, do you think the procedure used above is an appropriate way to apply statistics in making flood advisories for rivers to be used by agencies and the public interested in potential river conditions for the 90 day period ending March 26, 2006?

    Please post your questions or comments to realclimate.org.

    http://www.realclimate.org/index.php?p=228

    Comment by Pat Neuman — 2 Jan 2006 @ 9:47 AM

  87. About GT's complexities stemming from its inherent chaotic behaviour: I don't believe it is so chaotic; it can be read at a local level. I base this opinion on my own work in predicting GTs, specifically of the Northern Hemisphere. I consider influx of air from everywhere as a GT measurement: a local surface air temperature measurement is the result of influx of air coming from everywhere else. This would mean that ENSO is not strictly a regional phenomenon spreading out its influence on a global basis, but rather ENSO is a result of weather characteristics from everywhere influencing ENSO. The proof is in the pudding: having predicted NH GTs accurately by looking at vertical sun disk measurements, which are influenced by very thick near-horizon atmospheres, I was looking at multiple influxes of air, giving a net average sun disk size, a summing up of advection and Hadley circulation in a simplified way. I've done the same measurements in polar and temperate climate zones and found the same expanding sun disk size trend. It is quite possible to measure GT trends by analyzing what extra-regional air masses are doing to the one you are living in. The same rule applies for the hurricane region: the current concept of a regional cycle giving various hurricane activities depending on what phase of this alleged cycle is on is an incorrect interpretation; regional isolationism of weather does not exist on a meso scale.

    Comment by wayne davidson — 2 Jan 2006 @ 2:21 PM

  88. Re 84: Rasmus, you say:

    I do not propose that the Nile river levels are not a result of physical processes, but I argue that the physical processes behind the river discharge are different to those behind the global mean temperature, and the displacement of a molecule (Brownian motion) if you like.

    In fact, while the physical processes behind the two are different, the physical laws governing them may be the same. Constructal theory, which has been much in the news lately, explains widely disparate physical systems (heat loss by animals, flying speed of birds, the formation of drainage systems, heat transport, and many more). See "The constructal law of organization in nature: tree-shaped flows and body size", at http://jeb.biologists.org/cgi/content/full/208/9/1677, for a good discussion of constructal theory. It has wide, and largely unrealized, potential application in climate science.

    w.

    [Response:Thanks Willis. I think that these ideas are interesting and that there may be something in that. But these ideas apply more to biological matters, don't they? Although all processes are based on the same fundamental physics principles, there are also important differences between rainfall over a given region and Brownian motion on the one hand and a planet's mean surface temperature on the other. The latter is constrained by some restoring effect, such as increased or reduced heat loss when it makes an excursion away from its equilibrium, while I think it's hard to find such restoring effects in the former.

    On another note, it occurs to me that it may seem that I have my lines of logic crossed somewhere: on the one hand I argue that we do have substantial knowledge about our climate system (that is not the same as saying we know everything or have a 'perfect' understanding!), whereas some disagree and say we do not really know that much (I think the view on this depends a bit on your expectations). On the other hand, I also argue that we do not know the real null-distribution that is required for testing trends, since the past observations are affected by (natural) changes in the forcing. In that sense, I argue that we do not have perfect knowledge. Then we have the argument that the kind of structure that Constructal theory predicts should be valid for most processes, or that ARIMA-type models are representative of the null-process - that implies that we do know a lot about the climate system. I think we have a substantial body of knowledge about our system, but there are also many things we do not know. When it comes to the original question about determining the significance of trends, I think the cleanest way to carry out the test is to use a GCM in experiments with and without prescribed variations in the forcings. I think that part of the issue is also how one defines a 'trend' in this case: is it the long-term rate of change over the entire history (e.g. temperature change over the last millions of years), or is it a systematic rate of change caused by changes in the forcing (i.e. a response; it may for instance be the systematic change in temperature in the transitions between glacial and warm periods)? I have used the latter interpretation in this discussion because I think it is more relevant to the present question. Anyway, I think this is a good discussion, and there are probably people who disagree with me(?). -rasmus]

    Comment by Willis Eschenbach — 2 Jan 2006 @ 7:29 PM

  89. RE#85 Comment: "I agree that ENSO can cause temperature fluctuations of a few tenths of a degree Centigrade, but the time scale of ENSO (~3-8 years) is too short to explain the recent temperature hike that has taken place over the last 3 decades."
    Then how about the PDO phase shift circa 1976-1977?

    Comment by Michael Jankowski — 3 Jan 2006 @ 4:26 PM

  90. Re: #86:

    Pat, I would guess (without offending anyone, hopefully), that your question is more within the domain of hydrometeorology than climate science per se.

    Judging by the breakthroughs in short term flood products, I tend to have (possibly blind) faith in NWS flood estimates. Seasonal flood products (looking out 30-90 days) are less reliable than short-term ones. Consider the loss in power when going from flash flood warning to a probabilistic 90-day outlook, as one example.

    Comment by Kenneth Blumenfeld — 3 Jan 2006 @ 5:48 PM

  91. re 90.

    I’d like to duck general questions for now, and focus instead on trying to make the specific example in 86. more understandable.

    Question #1:

    Do you think the conditional flow traces at Scanlon, Minnesota (St. Louis R. basin) are an indication that winter (Jan through Mar 26) climate has warmed within the Upper Midwest in recent decades?
    See: http://www.crh.noaa.gov/images/ncrfc/data/ahps/SCNM5.traces.gif

    Question #2:

    Do you think the procedure explained in 86. is an appropriate way to apply statistics in providing agencies and the public with spring flood advisories for rivers in the Upper Midwest?

    Comment by Pat Neuman — 3 Jan 2006 @ 9:46 PM

  92. Re: 25 To: Pat Neuman In re: polar bear evolution
    I appreciate the 1600 K-year timeline. I had broad knowledge of the approximate figure from work I did as a protege of people working in the Martin Almagro group through the German Institute of Archeology, Madrid, albeit a long time ago and topically mostly Pithecanthropus-oriented.
    Fortunately we have a recent report of a dog genome. If there is an interested party considering the polar bear (Ursus maritimus) genome, that might be an energizing study for the purpose of interpreting the impact on polar bears of the roughly 130 years of industrial-era climate warming. I will check your paleontology link and others to begin formulating an improved perspective of how to incorporate this in a genetic-work model. Perhaps it is a futile hope at this point, but it seems to me Joule was a creator of concepts in his own time, and there might be a way to characterize the work evolutionary forces must exert to develop a working genetic lifeform such as the polar bear. This particular lens might be helpful, as well, in quantifying other quantum changes in species extinctions. Although several thread contributors are following these matters from a science vantage, the following are a U. maritimus image
    from a government link to the biologists overseeing bear counting, and an image from the legal entity organizing a diverse assemblage of smaller and less equipped groups to petition for better population tracking and a kind of EIR.

    Comment by JohnLopresti — 3 Jan 2006 @ 9:55 PM

  93. Re: Pat’s recent comments (I note that we are somewhat off-topic):

    “Do you think the conditional flow traces at Scanlon, Minnesota (St. Louis R. basin) are an indication that winter (Jan through Mar 26) climate has warmed within the Upper Midwest in recent decades?”

    I think they indicate that the water has begun flowing earlier in northeast MN, which would indicate warmer conditions. I would not jump to the conclusion about the entire Upper Midwest based on those data alone…though it is probably true.

    “Do you think the procedure explained in 86. is an appropriate way to apply statistics in providing agencies and the public with spring flood advisories for rivers in the Upper Midwest?”

    I do, because it integrates historical data with current conditions. I believe large short-term hydrologic events do make it into the probabilistic outlooks also. It’s not perfect, but I do think it is a reasonable product.

    Comment by Kenneth Blumenfeld — 4 Jan 2006 @ 7:41 PM

  94. re 93.

    Kenny’s note in 93. said: ‘we are somewhat off-topic’. I think the questions in 90. were on topic for ‘Naturally trendy’. The conditional flow simulation trace plots at Scanlon MN on the St. Louis River (west of Duluth MN), which are shown at:
    http://www.crh.noaa.gov/images/ncrfc/data/ahps/SCNM5.traces.gif

    do not appear ‘Naturally trendy’, but they do appear ‘trendy’, which I think is an indication that some regional winter climate elements have warmed in part of the Upper Midwest in recent decades.

    The traces at Scanlon, which were based on processed historical precipitation and temperature data from 1948-2002 records, are part of a larger body of evidence that many hydrologic climate elements within the Upper Midwest have warmed since the late 1970s, i.e. unnatural warming trends.

    Other Upper Midwest winter climate trends showing warming in recent decades can be viewed from my 2003 article at the Minnesotans For Sustainability website titled Earlier in the Year Snowmelt Runoff for Rivers in Minnesota, Wisconsin and Minnesota, see Figure 1 of the article at:
    http://www.mnforsustain.org/climate_snowmelt_dewpoints_minnesota_neuman_table_figure1.htm

    In the article, I showed that the timing of the beginning of spring snowmelt runoff shifted earlier in the year (by 2-4 weeks) following the late 1970s, compared to the timing from 1900 to the late 1970s. The Julian days for the beginning of spring snowmelt runoff are shown in Figure 1 for three river stations located within the Upper Midwest, including:

    Red River at Fargo ND
    St Croix River at Wisconsin/Minnesota border
    St. Louis River at Scanlon near Duluth MN

    Although I understand that things can't be perfect, I believe that professional hydrologists should try to adjust for inadequacies in modeling procedures and forecasts, which currently do not take account of the large amount of evidence showing that hydrologic climate warming has been happening in the Upper Midwest. As noted in 86, the conditional flow traces are being used (operationally) to provide exceedance probabilities for maximum river flow and stage at Scanlon for the forecast period (12/26/2005 12hr – 3/26/2006 12 hr).

    http://www.crh.noaa.gov/images/ncrfc/data/ahps/SCNM5.exc.90day.gif

    Even more importantly, more recent data indicates that rainfall has become more frequent in winter months for parts of the Upper Midwest, especially in central and southern parts of the region (Illinois, Iowa and southern Wisconsin). In January of 2005, major near record flooding occurred on rivers in the Illinois River basin, due mainly to December and January rainfall.

    I think it would be helpful for RC moderators to comment on this, especially on the note by Kenny that we are somewhat off-topic with this discussion.

    Comment by Pat Neuman — 5 Jan 2006 @ 9:21 AM

  95. For any time series, figuring out what statistic would be appropriate and running the test might contribute to discussing whether there's a trend. Here's one page of tools (picked at random from a Google search; I've found such pages from time to time online):

    http://members.aol.com/johnp71/javastat.html

    The biggest surprises and the lessons I carry with me from graduate statistics in the 1970s were:
    1) ask a statistician before collecting the data, not afterward, if you want to collect useful data.
    2) It takes appallingly more samples over a short period, or collection of data over a far longer period, than the naive grad student would imagine, before you have collected enough data for meaningful statistical analysis.

    Lots of what’s online is just numbers and graphs but not statistical evaluations. Yes, the curves are scary ….

    Comment by Hank Roberts — 5 Jan 2006 @ 7:32 PM

  96. re 93. 94. 95. Thank you all for your comments.

    I’m thinking the links and description I provided above on the
    probabilistic products may be difficult to understand for people who are unfamiliar with the Advanced Hydrologic Prediction Service (AHPS).

    General description of the U.S. operational AHPS probability service can be viewed at: http://www.weather.gov/ahps/about/about.php

    I think the additional operational trace plots below in Michigan and Minnesota may help in developing an improved understanding of the system.

    AREA MAP: U.P. MICHIGAN
    http://www.crh.noaa.gov/ncrfc/ahps/esp_maps/map/zoomin_fcst_18_m10000.php

    Ontonagon River at Rockland, MI
    http://www.crh.noaa.gov/images/ncrfc/data/ahps/RKLM4.traces.gif

    Sturgeon R. at Alston, MI
    http://www.crh.noaa.gov/images/ncrfc/data/ahps/ALSM4.traces.gif

    AREA MAP: MINNESOTA
    http://www.crh.noaa.gov/ncrfc/ahps/esp_maps/map/zoomin_fcst_6_m10000.php

    Mississippi R. at Aitkin, MN
    http://www.crh.noaa.gov/images/ncrfc/data/ahps/ATKM5.traces.gif

    St. Louis R. at Scanlon, MN
    http://www.crh.noaa.gov/images/ncrfc/data/ahps/SCNM5.traces.gif

    Comment by Pat Neuman — 5 Jan 2006 @ 9:30 PM

  97. Re 88, Rasmus, thanks for your comment. Regarding constructal theory, you say:

    [Response:Thanks Willis. I think that these ideas are interesting and that there may be something in that. But these ideas apply more to biological matters, don't they?

    The amazing thing about constructal theory is that it applies to any system with flow, whether biological or physical, organic or inorganic. For example, see AH Reis, A. Bejan, “Constructal theory of global circulation and climate”; Journal of Geophysical Research Atmospheres

    w.

    Comment by Willis Eschenbach — 6 Jan 2006 @ 3:30 AM

  98. A reply to John Lopresti in comment 92.

    Tracking the Great Bear: Mystery Bears
    By Jim Halfpenny, Ph.D.
    Website: Bears and Other Top Predators Magazine
    http://www.cryptozoology.com/articles/mysterybears.php

    http://groups.yahoo.com/group/Paleontology_and_Climate/message/13646

    Comment by Pat Neuman — 6 Jan 2006 @ 10:46 AM

  99. Re: #89, "RE#85 Comment: 'I agree that ENSO can cause temperature fluctuations of a few tenths of a degree Centigrade, but the time scale of ENSO (~3-8 years) is too short to explain the recent temperature hike that has taken place over the last 3 decades.'"
    Then how about the PDO phase shift circa 1976-1977?”

    http://www.atmos.washington.edu/~mantua/REPORTS/egec_pdo.pdf

    Compare the graphs shown in the above article with the global temperature anomaly graph (i.e. the Hockey Stick).

    Comment by Stephen Berg — 6 Jan 2006 @ 11:23 AM

  100. Stephen, thank you for the link on PDO in your comment (99).

    I recently downloaded annual temperature data for stations with monthly and annual temperature data in Alaska (1950-2005, some 1930-2005). I used Excel software to create annual temperature time plots for the stations that have good quality data (few missing daily max/mins). It’s helpful to look at figure 2 of your link on PDO as I’m evaluating the rate of surface warming at the Alaska stations. PDO has had some influence on the overall rising trend in air temperatures at some of the stations, but not much influence compared to the pronounced warming trend coincident with rising concentrations of GHGs in the atmosphere.
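    [Editorial illustration: one simple way to ask how much of a station's annual-mean temperature variation tracks the PDO index versus a linear trend is an ordinary least-squares fit on the two predictors. The sketch below uses placeholder data, not the Alaska station records or the actual PDO index.]

    ```python
    import numpy as np

    # Placeholder annual series, 1950-2005: a stand-in PDO index and a station
    # temperature built from a small trend, a PDO contribution, and noise.
    rng = np.random.default_rng(2)
    years = np.arange(1950, 2006)
    pdo = rng.normal(size=years.size)                     # hypothetical PDO index
    temp = (0.02 * (years - years.mean())                 # deg per year trend (assumed)
            + 0.3 * pdo                                   # deg per PDO unit (assumed)
            + rng.normal(scale=0.4, size=years.size))     # weather noise

    # Least-squares fit of temperature on [constant, centred year, PDO index].
    X = np.column_stack([np.ones(years.size),
                         years - years.mean(),
                         pdo])
    coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
    print(f"fitted trend: {coef[1] * 10:.2f} deg/decade")
    print(f"fitted PDO coefficient: {coef[2]:.2f} deg per PDO unit")
    ```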

    Comment by Pat Neuman — 6 Jan 2006 @ 10:04 PM

  101. On being a skeptic, I just looked at the large PDF file for global temperatures in a link you provided in a different thread. If I am counting dots on the graph properly, the banner headline should have read:
    ESSENTIALLY NO CHANGE IN GLOBAL TEMPERATURE IN THE LAST 8 YEARS.

    Now you know why there are die-hard skeptics out there (or out here, whatever).

    Comment by joel Hammer — 7 Jan 2006 @ 9:58 AM

  102. ESSENTIALLY NO CHANGE IN GLOBAL TEMPERATURE IN THE LAST 8 YEARS

    This is just cherry-picking a couple of trees and ignoring the forest. Look at all the dots and it is obvious that the year-to-year changes are very erratic, which renders such headlines as yours ridiculous.
    - In 1998 you could have declared "temperatures rising 1.7°C per decade since 1996"
    - In 2000 you could have declared "Dramatic cooling since '98: severe ice age in less than a century at this rate"
    - Last year your headline would have been "7-year cooling trend continues"
    - This year, "No change in 8 years"

    etc., etc. Clearly ridiculous. If you want some meaning out of those dots you need to use a little intelligence and come up with a reasonable smoothing algorithm. In the case of the graph at http://data.giss.nasa.gov/gistemp/2005/2005_fig1.gif the red line shows you a five-year mean that of course stops in 2002. This mean trend also goes up and down, so you need a little intelligence looking at that too.
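    [Editorial illustration of the smoothing point above: a centred 5-year running mean suppresses the year-to-year wiggles that short "headlines" cherry-pick. The anomalies below are made up; the real series is the GISS annual global mean linked above.]

    ```python
    import numpy as np

    # Hypothetical annual anomalies: a modest warming trend plus weather noise.
    rng = np.random.default_rng(3)
    years = np.arange(1970, 2006)
    anom = 0.017 * (years - 1970) + rng.normal(scale=0.1, size=years.size)

    # Centred 5-year running mean (the smoothing the red line in the plot uses).
    window = 5
    smooth = np.convolve(anom, np.ones(window) / window, mode="valid")
    smooth_years = years[window // 2 : -(window // 2)]

    for y, a in zip(smooth_years[-3:], smooth[-3:]):
        print(f"{y}: 5-yr mean anomaly {a:+.2f} C")
    ```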

    Now you know why there are die hard skeptics out there

    Not really. Can any intelligent person look at that graph and not see the overall temperature rising steeply since the '70s?

    Comment by Coby — 7 Jan 2006 @ 2:21 PM

  103. Re #102

    What is mean trend? What is the definition of a trend? Perhaps silly questions but perhaps not easy to answer. So, I suggest that we re-read the paper by Cohn and Lins (2005) to which this thread is devoted.

    Comment by Demetris Koutsoyiannis — 8 Jan 2006 @ 9:02 PM

  104. Sorry, I meant to say "trend in the mean". I don't know if that is nonsensical in the jargon of statistics, but it means something to me in the vernacular I am speaking, hopefully to any readers as well.

    Comment by Coby — 9 Jan 2006 @ 12:12 AM

  105. Forgive me if you have already answered this question.

    I understand that you use GCMs to determine if current temperature trends can only be explained with the addition of anthropogenic greenhouse gases. Do you use some kind of significance testing to do this? i.e. do you calculate the probability of the observed temperature data without anthropogenic greenhouse gases? If so, do you publish the significance levels?

    Thanks

    [Response: The "detection and attribution" stuff gets quite complex. Probably still the best reference is the TAR: here for the D+A chapter summary; you'll have to follow it down for the details. In short, yes its done with sig testing, yes the sig levels are published - William]

    Comment by Mark Frank — 9 Jan 2006 @ 2:56 AM

  106. Re#99 – The phasing trend certainly matches the 1900-present global temperature trend pretty well.

    Re#100 – The Alaska Climate Research Center seems to disagree with you http://climate.gi.alaska.edu/ClimTrends/Change/4904Change.html . Their data suggests almost all of the net temperature change since the late 1940s in Alaska was due to the PDO shift.

    Comment by Michael Jankowski — 9 Jan 2006 @ 9:54 AM

  107. Regarding Alaskan temps and PDO, here's something I made earlier.

    Comment by Tom Rees — 9 Jan 2006 @ 11:01 AM

  108. Say, Mike (current 106): that dip in the charty thing ~1999 from your link…wasn’t that about the time the system was supposed to flip back to a cold regime? Natural cycle would have a return about that time. Did it happen? If not, why not? Be careful with your tout here, is whut I’m sayin’.

    Best,

    D

    Comment by Dano — 9 Jan 2006 @ 1:17 PM

  109. RE#108-The negative PDO index in 1999 was short-lived and apparently not the start of a complete phase shift. The net PDO from 1999-2003 was very slightly negative, but it increased after 1999 to become strongly positive again.

    I’m not sure what “tout” you’re getting at. If you’re suggesting I’m claiming that the average global temperature is dictated solely by the PDO, you’re wrong. But does it have a significant effect on our temperature record? Possibly.

    As far as “natural cycle would have a return about that time,” I’m not sure why the PDO should operate as clockwork. And if you look here http://www.beringclimate.noaa.gov/reports/Figures/Fig1NP04.html , the phases can last a very long time. The previous negative phase (based on the annual average) seemed to last from 1943-1976. It was strongly positive for 9 yrs before that, and slightly positive from 1900-1943 before that. A 1976-1999 phase would be relatively short in comparison.

    Rasmus was open to the effect of ENSO on temps, but suggested the short length of the cycle doesn’t explain the rise in temps over the past 30 yrs. Hence, I wanted to know what his feelings were about an ENSO-esque phenomenon such as the PDO, which does happen to fit the 30 yr time scale and which has been tied by some to a large amount of the net temp change (at least on a regional scale) of the last roughly 3 decades. I think it’s a fair question.

    Comment by Michael Jankowski — 9 Jan 2006 @ 3:20 PM

  110. Re #s 106/7/8: Thanks for that work, Tom. Were you aware of two recent papers, one of which questions whether the PDO is an independent climate trend at all (as opposed to a product of SST increases modified by several other influences)? See http://ams.allenpress.com/amsonline/?request=get-abstract&doi=10.1175%2FJCLI3527.1 . Other papers have demonstrated that recent SST increases are a function of AGW. The second paper shows, among other interesting results (more hockey sticks, anyone?), that the 1976 shift was much exaggerated relative to past comparable changes, and pins it to AGW: http://www.sciencemag.org/cgi/content/abstract/311/5757/63?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=California+PDO&searchid=1136833970897_13545&FIRSTINDEX=0&journalcode=sci . The combination of these two papers seems fairly powerful.

    MJ’s presumption that recent changes in natural cycles (assuming the PDO is one) should be assumed to be uninfluenced by AGW is, well, a little presumptuous IMHO. As a small footnote, to repeat what I discovered the last time this issue came up here, that Alaska climate site is not written by climatologists. One might also consider that people under the thumb of Ted Stevens are going to be a little careful about what they say regarding long-term climate trends in Alaska.

    Comment by Steve Bloom — 9 Jan 2006 @ 3:46 PM

  111. Re 109:

    Well, the tout is the subject of your comment 106 – 'natural cycle' is the reason for the AK warming. There is indirect evidence that there is periodicity in the temp shifts (e.g. here and here, figs 5,6) and at the very least definitive statements can't be made – that is: if there are regular periods of change, then the natural cycle only briefly returned. Point being, the tout is old and arguably dated, and in line with my 'sc(k)eptic' comments that a natural cycle (or fill in other skeptic argument: _____ ) hypothesis has, to be generous, limited empirical backing.

    Best,

    D

    Comment by Dano — 9 Jan 2006 @ 4:37 PM

  112. Had I hit refresh before composing, I’d have seen Steve’s post – to which I would have referred to reinforce (and probably clarify) my point.

    The implication that an anthropogenic signal/influence has overpowered natural cycles has growing empirical evidence, whereas empirical evidence showing the current warming is part of a natural cycle despite anthro inputs is…where, exactly?

    D

    Comment by Dano — 9 Jan 2006 @ 4:46 PM

  113. > where, exactly?

    I dunno. I’ve looked too.

    Even assuming all the change so far is due to natural forcings, either they’re from outside the planet (solar input) or they’re just rearranging the heat retention. If there is added new heating from changing solar input, the anthropogenic CO2 is predictably going to add to the natural boost in CO2 — the combination would presumably retain the added solar heat longer than would happen naturally, isn’t that right? I’d expect that to be worked out as part of predicting what will happen.

    Maybe it has been. I haven’t found it.

    Apropos of wondering where the data might be, I’d expect a “climate audit” requirement will require industry to produce relevant data — CFCs, methane, CO2, projections. As the energy industry is affecting public health like the medical and pharmaceutical businesses, it would presumably be similarly required to make more disclosures than other businesses about what it’s doing and how it’s testing its projections.

    Comment by Hank Roberts — 9 Jan 2006 @ 6:55 PM

  114. Re#110, Please point out where I have made any such presumption.

    BTW, Tom's work in #107 states, "A substantial part of this warming is a result of changes in the PDO." This is exactly what I have been trying to point out, and it is the main conclusion of the ACRC article you wish to discredit.

    The ACRC does have a climatologist on staff, but regardless, the data speaks for itself. If you think their data or analysis is in error (and maybe their analysis is crude), please post something to that effect. I also don’t see what Ted Stevens has to do with anything. You’re really grasping at straws if you are going to start a “political conspiracy theory” chant instead of indicating flaws with the data and/or methods.

    Your Science link…did you possibly link to the wrong article? I get “Planktonic Foraminifera of the California Current Reflect 20th-Century Warming,” for which the summary doesn’t say anything about the PDO other than to say, “It is currently unclear whether observed pelagic ecosystem responses to ocean warming, such as a mid-1970s change in the eastern North Pacific, depart from typical ocean variability” – a statement which doesn’t seem very supportive of your claim. Maybe it’s in the body somewhere? I don’t have a subscription…

    Re#111, “natural cycle is the reason for the AK warming” (your words) does not equate with “Their data suggests almost all of the net temperature change” (my words). If you had simply looked at the link I had provided, you would find that not all of the AK warming was explicitly explained away as “natural cycle.” So I’m not sure who you are having this “100% natural” argument with.

    Re#112, “The implication that an anthropogenic signal/influence has overpowered natural cycles has growing empirical evidence…” This tout is old and dated. Few people dispute that there is an anthropogenic influence on climate. Sc(k)eptics simply question the significance of AGW and what the cost vs benefits of potential solutions (and if any are “worth” undertaking) are.

    Comment by Michael Jankowski — 9 Jan 2006 @ 6:56 PM

  115. Re #110-112,

    One seems to forget that solar activity of the last half century is higher than at any time in the last 8,000 years and increased dramatically in the past century (including a more than doubling of the sun’s magnetic field).

    Thus, that temperatures near the Californian coast and up to Alaska are higher now is not necessarily “apparently anthropogenic”…

    Comment by Ferdinand Engelbeen — 9 Jan 2006 @ 7:01 PM

  116. re current 114:

    I’m not sure who you are having this “100% natural” argument with

    Nor am I, as I neither quantified nor implied an amount. You said ‘almost all’, BTW.

    Sc(k)eptics simply question the significance of AGW and what the cost vs benefits of potential solutions (and if any are “worth” undertaking) are

    My apologies for missing this. Even though all your comments in this thread were about natural cycles, I should have related them to cost-benefit analyses. I’d like to have addressed solutions in my answers, but that would mean getting off-topic for this thread.

    Re current 115:

    One doesn’t forget (‘solar variability is unlikely to have been the dominant cause of the strong warming during the past three decades’).

    One presumes you’re speaking of recent warming, rather than the cause of the PDO/phase shift/foraminifera/data points, but thanks.

    Best,

    D

    Comment by Dano — 9 Jan 2006 @ 7:34 PM

  117. Re #116,

    Dano, I am not sure that the sentence ‘solar variability is unlikely to have been the dominant cause of the strong warming during the past three decades’ wasn’t added to have the research published in Nature, as nothing in the research itself supports such a conclusion…

    But more important, the PDO shift was in 1976, three decades ago, when the oceans had already warmed for reasons other than increased GHGs. Since then, the PDO has remained relatively constant, which may or may not be linked to global warming.

    Further, the full article about the Californian foraminifera has some interesting points:

    Our findings point to the possibility that anthropogenic warming has affected marine populations since the early 20th century, although only the ocean warming of the late 20th century has been confidently attributed to the accumulation of greenhouse gases

    and

    Our results indicate that the variability of foraminifera in the California Current in the 20th century is linked to variations in SST and is atypical of the preceding millennium. Given that the trend in global SSTs has been attributed to increases in greenhouse gases in the atmosphere (17–19), it follows that the best explanation for this ecosystem aberration is anthropogenic warming that has passed a threshold of natural variability.

    This – and the graphs – indicate that the change to more tropical foraminifera species already started in the first half of the century, which is mainly connected to solar changes, and was at full strength by the mid-1970s, while the attribution of GHGs to the warming of the oceans is confined to the second half, mainly after 1975, and even that is questionable. References 17-19 are Levitus et al. 2001 and Barnett et al. 2005. Levitus et al. 2005 only points to others (Hansen) to suggest that the ocean warming of the past five decades may be attributed to GHGs, but at the same time warns that natural variability may have a large influence over decadal periods (like the 1980-1990 heat content decrease of the oceans!). Barnett tries to link the ocean’s heat content to GHGs with a model, but the model results (significantly!) don’t capture any observed cycle between 10 and 100 years…

    And about other natural causes: the tropical oceans warmed by some 0.085 K/decade in recent decades. This is accompanied by a shift in cloud cover, leading to 2 W/m2 more insolation (and some 5 W/m2 more IR back to space). The change in insolation (which may cause more ocean heating) and the loss to space (which may reduce the heat flow to higher latitudes) are far larger than the changes in radiation balance caused by GHGs (and opposite in net sign!)…

    Comment by Ferdinand Engelbeen — 9 Jan 2006 @ 8:51 PM

  118. Re: 117

    Ahh, thank you Ferdi, so your solar point was apropos of the PDO. And I agree: the Solanki conclusions supported little, as they were initial findings, although no doubt many are familiar with his graph (fig. 2, p. 8), which indicates a departure from the cycle in the 80s but otherwise helps your point [and also plots the T departure, which is a useful lens for considering the foram finding in the next para].

    And your “the oceans were already heated up by other reasons than increased GHGs” is interesting, as it is my understanding – see the overviews in my links in 109 – that the area of the ocean that undergoes shifts in the PDO was in a cool phase just before 1976. And if AK temps are linked to ocean temps, they aren’t reflected in the record (hence the reason for all this back-and-forth on AK temps and phase-shift stuff). And I wasn’t aware of evidence indicating reasons for ocean heating in the early 20th C – I’m sure there was some heat storage somewhere that flipped the N Pacific, and you’ll help me understand where that heat came from.

    And I’m not sure the foram paper (I haven’t read it yet) supports your implication that the foram record correlates with the solar changes, judging from the abstract, which doesn’t mention it. I’d be happy to report back to you on whether your implication holds after I read it (a couple of weeks, probably). But I note with interest your timing comment, which is a good question to keep in mind while reading (and also these phase shifts and how they’re pointed out in the photos of cores).

    Lastly, thank you for pointing out the evidence of natural cloud cycles in the tropics.

    Best,

    D

    Comment by Dano — 9 Jan 2006 @ 10:16 PM

  119. Re #110. The PDO is calculated from sea surface temperatures (SSTs) after correcting for long-term trends in temperature. So the shift in the PDO in the 70s is not simply an artefact of the general rise in SSTs. Your first link shows that the PDO is a product of other ocean processes. This is not too surprising; it indicates that the PDO, like the NAO, is an epiphenomenon. It’s nonetheless real – Pacific SSTs really do show this dipole fluctuation. It’s possible (indeed likely) that these underlying processes (e.g. oceanic Rossby waves) will be affected by global warming – although this doesn’t mean that the change in the PDO can be explained in this way.
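
    As a rough sketch of how such an index is built (hedged: this is a simplified illustration, not the official PDO recipe, and the data array, grid and weighting below are hypothetical), one removes a stand-in for the global-mean SST signal from North Pacific SST anomalies and takes the leading principal component:

    import numpy as np

    rng = np.random.default_rng(1)
    sst_anom = rng.standard_normal((600, 500))   # placeholder: months x gridpoints

    # Stand-in for removing the global-mean SST (here, simply the domain mean),
    # so the index is not just the common warming trend
    residual = sst_anom - sst_anom.mean(axis=1, keepdims=True)

    # Leading principal component of the centred residual field
    centred = residual - residual.mean(axis=0)
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    pdo_like_index = u[:, 0] * s[0]
    pdo_like_index /= pdo_like_index.std()       # standardized index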

    Regarding the link to Alaskan temps – a connection between surface air temps in Southern Alaska and SSTs in the adjacent Pacific is not too surprising. There is, however, no discernible statistical link between the PDO and global surface temps. One thing to note about the analysis provided by the ACRC is that the temps are a geometric mean of surface stations. Since most stations are in the south, the result is a spatial bias. Since the effect of the PDO is strongest in the south, this analysis overemphasizes the effect of the PDO. If you look at the Alaskan Arctic (e.g. Barrow), there is little effect of the PDO, yet temperatures there have risen markedly.

    [Response: Are you sure you don't mean an arithmetic mean (i.e. summing up all the stations and dividing by the number of stations)? Geometric means would be a little problematic, and not terribly useful... -gavin]
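
    A toy sketch of the spatial-bias point above (made-up numbers, not ACRC data or code): pooling all stations over-weights the densely sampled south, while averaging the regions first weights north and south equally.

    import numpy as np

    # Hypothetical annual anomalies (deg C): many southern stations strongly
    # tied to the PDO, one northern station with a steadier signal
    south = np.array([1.8, 1.6, 1.9, 1.7, 1.8])
    north = np.array([0.6])

    pooled_mean = np.concatenate([south, north]).mean()   # dominated by the south
    region_mean = np.mean([south.mean(), north.mean()])   # equal regional weight

    print(f"pooled station mean : {pooled_mean:.2f} deg C")
    print(f"region-weighted mean: {region_mean:.2f} deg C")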

    Comment by Tom Rees — 10 Jan 2006 @ 9:20 AM

  120. How far inland, in miles, might it be before a typical PDO loses its effect on surface temperatures south of Anchorage … say, a drop of 1 degree F or less in the annual temperature for that year (with consideration given to climate station elevations)?

    Comment by Pat Neuman — 10 Jan 2006 @ 11:27 AM

  121. Re #119 – sorry Gavin arithmetic mean. Geometric mean would be… interesting!

    Re #120 – I don’t know how far inland. The correlation between PDO and temps at Nome and Fairbanks is a little less than at Anchorage, but still quite strong. The correlation with temps at Barrow is negligible. If you do a multiple linear regression using PDO and global surface temps (GISS) as covariates, you get the following:

    Nome = -3.458 + 0.731*PDO + 1.987*GISS
    Fairbanks = -2.696 + 0.764*PDO +0.994*GISS
    Barrow = -12.80 + 0.172*PDO +3.278*GISS
    Anchorage = 1.880 + 0.838*PDO + 0.869*GISS

    This suggests that, after removing the effect of the PDO, Fairbanks and Anchorage are warming at approximately the same rate as the global average, Nome at about twice the global average, and Barrow at around three times the global average.
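
    For anyone who wants to reproduce this kind of fit, here is a minimal sketch in Python (the file name and column names are hypothetical; this is not the actual code or data behind the numbers above):

    import numpy as np
    import pandas as pd

    # Assumed CSV with columns: year, station_temp, pdo_index, giss_anomaly
    df = pd.read_csv("anchorage_pdo_giss.csv")

    # Design matrix: intercept, PDO index, GISS global anomaly
    X = np.column_stack([np.ones(len(df)), df["pdo_index"], df["giss_anomaly"]])
    y = df["station_temp"].to_numpy()

    # Ordinary least squares fit
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    intercept, b_pdo, b_giss = coef
    print(f"T = {intercept:.3f} + {b_pdo:.3f}*PDO + {b_giss:.3f}*GISS")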

    Comment by Tom Rees — 10 Jan 2006 @ 12:12 PM

  122. I have been rereading the Cohn and Lins paper with increasing but partial comprehension. A couple of things confuse me. Can anyone explain?

    1) In section 4 they demonstrate how the value of d (a measure of autocorrelation) has a large effect on significance. When using the temperature record as an example (section 5 of the paper) they don’t specify the value of d they used. Have I missed something fundamental?

    2) They have chosen a temperature record from 1856-2002. I get the point that assuming autocorrelation enormously reduces the significance of an estimated value of beta.
    Nevertheless, I believe the physics suggests that the recent warming is an accelerating trend, not a straight line, with most of the increase in the last 30 years. Had they repeated the same example on, say, 1960-2005, would they not have got a much higher value of estimated beta, and therefore a much more significant result, even assuming autocorrelation?
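
    As a rough illustration of why the autocorrelation assumption matters so much (this is my own sketch, not anything from Cohn and Lins, and it uses short-memory AR(1) noise rather than their long-memory, fractionally differenced d model, so it only shows the direction of the effect), a small Monte Carlo shows how trend-free but autocorrelated noise fools a naive significance test for beta:

    import numpy as np

    rng = np.random.default_rng(0)
    n, trials = 147, 2000              # ~ length of the 1856-2002 record
    t = np.arange(n)

    def false_positive_rate(phi):
        """Fraction of trend-free AR(1) series a naive OLS test calls significant."""
        hits = 0
        for _ in range(trials):
            e = rng.standard_normal(n)
            x = np.zeros(n)
            for i in range(1, n):
                x[i] = phi * x[i - 1] + e[i]    # AR(1) noise, no trend at all
            beta, intercept = np.polyfit(t, x, 1)
            resid = x - (beta * t + intercept)
            se = np.sqrt(resid.var(ddof=2) / ((t - t.mean()) ** 2).sum())
            if abs(beta / se) > 1.96:           # nominal 5% two-sided test
                hits += 1
        return hits / trials

    print("white noise      :", false_positive_rate(0.0))   # close to 0.05
    print("AR(1), phi = 0.6 :", false_positive_rate(0.6))   # well above 0.05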

    Thanks in advance for any help.

    Cheers

    Comment by Mark Frank — 13 Jan 2006 @ 10:34 AM

  123. A tale of hypothesis testing – expanding on Hans Von Storch and the Mexican Hat
     

    Trackback by Everyday thoughts — 14 Jan 2006 @ 3:48 AM

  124. If you are trying to prove a connection between global warming and human activities, then the GCMs are needed.

    If you are simply trying to prove that temperatures are increasing, then surely you are better off using an empirical model describing the relationship between temperature and time, without it necessarily giving you any information about causality. You could also try models including the forcing variables. If this is done, then the forcing variables should be included in the model in forms suggested by the physics. But I think trend analyses are most useful here for simply showing that a trend exists.

    I have so far only been able to get access to the abstract of the Cohn and Lins paper, so some of my comments might be a bit off target.

    What I’ve noticed is that everyone seems to be talking only about linear trends. Has anyone tried fitting non-parametric smoothers, such as smoothing splines or lowess smoothers, to the temperature vs. time data? These might be quite useful in highlighting what is going on. They also allow for non-linearity in the data, which could otherwise inflate autocorrelation in a linear trend model.
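
    A minimal sketch of the lowess idea, using the statsmodels implementation (assuming a hypothetical annual file "global_temp_anomaly.csv" with columns year and anomaly; this is not anyone's published analysis):

    import pandas as pd
    import matplotlib.pyplot as plt
    from statsmodels.nonparametric.smoothers_lowess import lowess

    df = pd.read_csv("global_temp_anomaly.csv")

    # frac sets the span of the local fits; ~0.3 of ~150 annual values
    # smooths over roughly multidecadal windows
    fitted = lowess(df["anomaly"], df["year"], frac=0.3)

    plt.plot(df["year"], df["anomaly"], ".", label="annual anomaly")
    plt.plot(fitted[:, 0], fitted[:, 1], label="lowess, frac=0.3")
    plt.legend()
    plt.show()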

    Statistics should not be treated as a black box. It needs to be applied by someone who is familiar both with the statistics and with the physical processes involved. I could be wrong here, but have any of the contributors to this blog been burnt by black-box approaches to statistics? If so, I could use the stories as cautionary tales for other statisticians.

    Comment by Lloyd Flack — 16 Jan 2006 @ 1:28 AM
