From time to time, there is discussion about whether the recent warming trend is due just to chance. We have heard arguments that a so-called ‘random walk’ can produce similar hikes in temperature (though is there any reason why the global mean temperature should behave like the displacement of a molecule in Brownian motion?). The latest in this category of discussions was provided by Cohn and Lins (2005), who in essence pitch statistics against physics. They observe that tests for trends are sensitive to the expectations, i.e. the choice of the null hypothesis.
Cohn and Lins argue that long-term persistence (LTP) makes standard hypothesis testing difficult. While it is true that statistical tests depend on the underlying assumptions, it is not a given that statistical models such as AR (autoregressive), ARMA, ARIMA, or FARIMA provide an adequate representation of the null distribution. All of these models represent some type of structure in time, be it as simple as a serial correlation, persistence, or more complex recurring patterns. Thus, the choice of model determines what kind of temporal pattern one expects to be present in the process analysed. Although these models tend to be referred to as ‘stochastic models’ (a random number generator is usually used to drive their behaviour), I think this is a misnomer, and that the labels ‘pseudo-stochastic’ or ‘semi-stochastic’ are more appropriate. It is important to keep in mind that these models are not necessarily representative of nature – just convenient models which to some degree mimic the empirical data. In fact, I would argue that all of these models are far inferior to the general circulation models (GCMs) for the study of our climate, and that the most appropriate null distributions are derived from long control simulations performed with such GCMs. The GCMs embody much more physically based information, and provide a physically consistent representation of the radiative balance, energy distribution and dynamical processes in our climate system. No GCM suggests a global mean temperature hike like the one observed, unless an enhanced greenhouse effect is taken into account. The question of whether the recent global warming is natural or not belongs to the ‘detection and attribution’ topic in climate research.
One difficulty with the notion that the global mean temperature behaves like a random walk is that it would then imply a more unstable system, with hikes similar to the one we now observe occurring throughout our history. However, the indications are that the historical climate has been fairly stable. An even more serious problem with Cohn and Lins’ paper, as well as with the random walk notion, is that a hike in the global surface temperature would have physical implications – be they energetic (Stefan-Boltzmann, heat budget) or dynamic (vertical stability, circulation). In fact, one may wonder whether an underlying assumption of stochastic behaviour is representative, since, after all, the laws of physics seem to rule our universe. On the very microscopic scales, processes obey quantum physics and events are stochastic. Nevertheless, the probability of their position or occurrence is determined by a set of rules (e.g. Schrödinger’s equation). Still, on a macroscopic scale, nature follows a set of physical laws, as a consequence of the way the probabilities are determined. After all, changes in the global mean temperature of a planet must be consistent with the energy budget.
Is the question of LTP then relevant for testing a planet’s global temperature for a trend? To some degree, all processes involving a trend also exhibit some LTP, and it is also important to ask whether the test by Cohn and Lins involves circular logic: for our system, forcings increase LTP, so an LTP estimate derived from the data already contains the forcings and is not a measure of the intrinsic LTP of the system. The real issue is the true degrees of freedom – the number of truly independent observations – and the question of independent and identically distributed (iid) data. Long-term persistence may imply dependency between adjacent measurements, as slow systems may not have had time to change appreciably between two successive observations (the same state is more or less observed in the successive measurements). Are there reasons to believe that this is the case for our planet? Predictions for the subsequent month or season (seasonal forecasting) are tricky at higher latitudes but reasonably skilful regarding the El Niño Southern Oscillation (ENSO). However, it is extremely difficult to predict ENSO one or more years ahead. The year-to-year fluctuations thus tend to be difficult to predict, suggesting that LTP is not the ‘problem’ with our climate. On the other hand, there is also the thermal momentum in the oceans, which implies that the radiative forcing up to the present time has implications for the following decades. Thus, in order to be physically consistent, arguing for the presence of LTP also implies an acknowledgement of past radiative forcing, in favour of an enhanced greenhouse effect, since if there were no trend, the oceanic memory would not be very relevant (the short-term effects of ENSO and volcanoes would destroy the LTP).
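To make the point about null hypotheses concrete: the very same observed slope can look ‘significant’ against a white-noise null yet unremarkable against a persistent null. The following is a minimal Monte Carlo sketch (pure Python, with illustrative parameters – the series length, AR coefficient, and trial count are arbitrary choices, not anything from Cohn and Lins):

```python
import random
import statistics

random.seed(42)

def ols_slope(y):
    """Ordinary least-squares slope of y against the time index 0..n-1."""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = statistics.fmean(y)
    num = sum((t - t_mean) * (yi - y_mean) for t, yi in enumerate(y))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

def simulate_ar1(n, phi, sigma=1.0):
    """Persistent AR(1) noise: x[t] = phi * x[t-1] + eps[t]."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, sigma)
        out.append(x)
    return out

n, trials = 100, 2000

# Null distribution of |slope| under white noise vs a persistent AR(1)
white = sorted(abs(ols_slope([random.gauss(0, 1) for _ in range(n)]))
               for _ in range(trials))
ar1 = sorted(abs(ols_slope(simulate_ar1(n, phi=0.8)))
             for _ in range(trials))

# 95th percentile of |slope| under each null: the persistent null demands a
# much larger observed slope before a trend is declared 'significant'
q95_white = white[int(0.95 * trials)]
q95_ar1 = ar1[int(0.95 * trials)]
print(q95_white, q95_ar1)
```

The same trick applies with more elaborate null models (ARMA, FARIMA): the more persistence the null allows, the wider its slope distribution, which is exactly why the choice of null model (and where its parameters come from) decides the outcome of the test.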
Another common false statement, which some contrarians may also find support for in the Cohn and Lins paper, is that the climate system is not well understood. I think this statement is somewhat ironic, but the people who make it must be allowed to speak for themselves. If this statement were generally true, then how could climate scientists make complex models – GCMs – that replicate the essential features of our climate system? The fact that GCMs exist and that they provide a realistic description of our climate system is overwhelming evidence that such a statement must be false – at least concerning the climate scientists. I’d like to reiterate this: if we did not understand our atmosphere very well, then how could a meteorologist make atmospheric models for weather forecasts? It is indeed impressive to see how some state-of-the-art atmospheric and oceanic models, and coupled atmosphere-ocean GCMs, reproduce features such as ENSO and the North Atlantic Oscillation (or Arctic or Antarctic Oscillation) on the larger scales, as well as smaller-scale systems such as mid-latitude cyclones (the German model ECHAM5 really produces impressive results for the North Atlantic!) and Tropical Instability Waves, with such realism. The models are not perfect and have some shortcomings (e.g. clouds and the planetary boundary layer), but these are not necessarily due to a lack of understanding – rather, they are due to limited computational resources. Take an analogy: how the human body works, consciousness, and our minds. These are aspects the medical profession does not understand in every detail due to their baffling complexity, but medical doctors nevertheless do a very good job curing us of diseases, and shrinks heal our mental illnesses.
In summary, statistics is a powerful tool, but blind statistics is likely to lead one astray. Statistics does not usually incorporate physically based information, but derives an answer from a set of given assumptions and mathematical logic. It is important to combine physics with statistics in order to obtain true answers. And, to return to the issue I began with: it’s natural for molecules under Brownian motion to go on a hike through their random walks (this is known as diffusion), but it’s quite a different matter if such behaviour were found for the global planetary temperature, as this would have profound physical implications. Nature, by the way, is not ‘trendy’ in our case – because of the laws of physics.
Update & Summary
This post has provoked various responses, both here and on other Internet sites. Some of these responses have been very valuable, but I believe that some are based on a misunderstanding. For instance, some seem to think that I am claiming that there is no autocorrelation in the temperature record! For those who have this impression, I would urge you to please read my post more carefully, because that is not my message. The same goes for those who think that I’m arguing that the temperature is iid, as this is definitely not what I say. It is extremely important to understand the message before one can make a sensible response.
I will try to summarise my arguments and at the same time address some of the comments. Planetary temperatures are governed by physics, and it is crucial that any hypotheses regarding their behaviour are both physically and statistically consistent. This does not mean that I’m dismissing statistics as a tool. Setting up such statistical tests is often a very delicate exercise, and I do question whether the ones in this case provide a credible answer.
Some of the responses to my post on other Internet sites seem to completely dismiss the physics. Temperature increases involve changes in energy (temperature is a measure of the bulk kinetic energy of the molecules), thus the first law of thermodynamics must come into consideration. ARIMA models are not based on physics, but GCMs are.
When ARIMA-type models are calibrated on empirical data to provide a null distribution which is then used to test the same data, the design of the test is likely to be seriously flawed. To reiterate: since the question is whether the observed trend is significant or not, we cannot derive a null distribution using statistical models trained on the same data that contain the trend we want to assess. Hence, the use of GCMs, which both incorporate the physics and are not prone to circular logic, is the appropriate choice.
There seems to be a mix-up between ‘random walk’ and temperatures. Random walk typically concerns the displacement of a molecule, whereas the temperature is a measure of the average kinetic energy of the molecules. The molecules are free to move away, but the mean energy of the molecules is conserved, unless there is a source (first law of thermodynamics). [Of course, if the average temperature is increased, this affects the random walk as the molecules move faster (higher speed).]
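To see why a random-walk global temperature would be physically odd: the spread of a random walk grows without bound over time (the standard deviation scales like the square root of the elapsed time), whereas a physically constrained quantity stays bounded. A quick stdlib sketch of that unbounded growth (the step sizes and lengths here are arbitrary, purely for illustration):

```python
import random
import statistics

random.seed(0)

def random_walk(n):
    """Cumulative sum of unit-variance Gaussian steps."""
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0.0, 1.0)
        out.append(x)
    return out

trials, n = 500, 400

# Spread of the endpoint after n steps vs after n/4 steps
end_full = [random_walk(n)[-1] for _ in range(trials)]
end_quarter = [random_walk(n // 4)[-1] for _ in range(trials)]

sd_full = statistics.stdev(end_full)
sd_quarter = statistics.stdev(end_quarter)

# For a random walk, variance grows linearly with time, so quadrupling the
# length of the walk should roughly double the spread of the endpoint.
print(sd_quarter, sd_full)
```

A planetary mean temperature cannot behave this way: an ever-growing excursion would violate the energy budget (Stefan-Boltzmann) long before it got anywhere.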
joel Hammer says
On being a skeptic, I just looked at the large PDF file for global temperatures in a link you provided in a different thread. If I am counting dots on the graph properly, the banner headline should have read:
ESSENTIALLY NO CHANGE IN GLOBAL TEMPERATURE IN THE LAST 8 YEARS.
Now you know why there are die-hard skeptics out there (or out here, whatever).
Coby says
This is just cherry-picking a couple of trees and ignoring the forest. Look at all the dots and it is obvious that the year-to-year changes are very erratic, which renders such headlines as yours ridiculous.
– In 1998 you could have declared “temperatures rising 1.7°C per decade since 1996”
– In 2000 you could have declared “Dramatic Cooling since ’98: severe ice age in less than a century at this rate”
– Last year your headline would have been “7 year cooling trend continues”
– This year “No change in 8 years”
etc, etc. Clearly ridiculous. If you want some meaning out of those dots you need to use a little intelligence and come up with a reasonable smoothing algorithm. In the case of the graph at http://data.giss.nasa.gov/gistemp/2005/2005_fig1.gif the red line shows you a five year mean that of course stops in 2002. This mean trend also goes up and down so you need a little intelligence looking at that too.
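A centered five-year mean like the red line in that graph is easy to compute yourself. A minimal sketch (the anomaly numbers below are made up purely to illustrate the mechanics, not real GISS data):

```python
def running_mean(values, window=5):
    """Centered running mean; None where the window doesn't fully fit."""
    half = window // 2
    out = []
    for i in range(len(values)):
        if i < half or i >= len(values) - half:
            out.append(None)  # window would run off the ends of the series
        else:
            chunk = values[i - half : i + half + 1]
            out.append(sum(chunk) / window)
    return out

# Illustrative (made-up) annual anomalies: noisy year-to-year, trending upward
anoms = [0.0, 0.3, -0.1, 0.2, 0.1, 0.4, 0.2, 0.5, 0.3, 0.6]
smooth = running_mean(anoms)
print(smooth)
```

Note how the smoothed series necessarily stops short of the most recent years (just as the red line stops in 2002), which is another reason single-year headlines are misleading.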
Not really. Can any intelligent person look at that graph and not see the overall temperature rising steeply since the 70’s??
Demetris Koutsoyiannis says
Re #102
What is mean trend? What is the definition of a trend? Perhaps silly questions but perhaps not easy to answer. So, I suggest that we re-read the paper by Cohn and Lins (2005) to which this thread is devoted.
Coby says
Sorry, I meant to say “trend in the mean”. I don’t know if that is non-sensical in the jargon of statistics, but it means something to me in the vernacular I am speaking, hopefully to any readers as well.
Mark Frank says
Forgive me if you have already answered this question.
I understand that you use GCMs to determine if current temperature trends can only be explained with the addition of anthropogenic greenhouse gases. Do you use some kind of significance testing to do this? i.e. do you calculate the probability of the observed temperature data without anthropogenic greenhouse gases? If so, do you publish the significance levels?
Thanks
[Response: The “detection and attribution” stuff gets quite complex. Probably still the best reference is the TAR: here for the D+A chapter summary; you’ll have to follow it down for the details. In short, yes its done with sig testing, yes the sig levels are published – William]
Michael Jankowski says
Re#99 – The phasing trend certainly matches the 1900-present global temperature trend pretty well.
Re#100 – The Alaska Climate Research Center seems to disagree with you http://climate.gi.alaska.edu/ClimTrends/Change/4904Change.html . Their data suggests almost all of the net temperature change since the late 1940s in Alaska was due to the PDO shift.
Tom Rees says
Regarding Alaskan temps and the PDO, here’s something I made earlier.
Dano says
Say, Mike (current 106): that dip in the charty thing ~1999 from your link…wasn’t that about the time the system was supposed to flip back to a cold regime? Natural cycle would have a return about that time. Did it happen? If not, why not? Be careful with your tout here, is whut I’m sayin’.
Best,
D
Michael Jankowski says
RE#108 – The negative PDO index in 1999 was short-lived and apparently not the start of a complete phase shift. The net PDO from 1999-2003 was very slightly negative, but it increased after 1999 to become strongly positive again.
I’m not sure what “tout” you’re getting at. If you’re suggesting I’m claiming that the average global temperature is dictated solely by the PDO, you’re wrong. But does it have a significant effect on our temperature record? Possibly.
As far as “natural cycle would have a return about that time,” I’m not sure why the PDO should operate as clockwork. And if you look here http://www.beringclimate.noaa.gov/reports/Figures/Fig1NP04.html , the phases can last a very long time. The previous negative phase (based on the annual average) seemed to last from 1943-1976. It was strongly positive for 9 yrs before that, and slightly positive from 1900-1943 before that. A 1976-1999 phase would be relatively short in comparison.
Rasmus was open to the effect of ENSO on temps, but suggested the short length of the cycle doesn’t explain the rise in temps over the past 30 yrs. Hence, I wanted to know what his feelings were about an ENSO-esque phenomenon such as the PDO, which does happen to fit the 30 yr time scale and which has been tied by some to a large amount of the net temp change (at least on a regional scale) of the last roughly 3 decades. I think it’s a fair question.
Steve Bloom says
Re #s 106/7/8: Thanks for that work, Tom. Were you aware of two recent papers, one of which questions whether the PDO is an independent climate trend at all (as opposed to a product of SST increases modified by several other influences)?: http://ams.allenpress.com/amsonline/?request=get-abstract&doi=10.1175%2FJCLI3527.1 . Other papers have demonstrated that recent SST increases are a function of AGW. The second paper shows, among other interesting results (more hockey sticks, anyone?), that the 1976 shift was much exaggerated relative to past comparable changes, and pins it to AGW: http://www.sciencemag.org/cgi/content/abstract/311/5757/63?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=California+PDO&searchid=1136833970897_13545&FIRSTINDEX=0&journalcode=sci . The combination of these two papers seems fairly powerful.
MJ’s presumption that recent changes in natural cycles (assuming the PDO is one) should be assumed to be uninfluenced by AGW is, well, a little presumptuous IMHO. As a small footnote, to repeat what I discovered the last time this issue came up here, that Alaska climate site is not written by climatologists. One might also consider that people under the thumb of Ted Stevens are going to be a little careful about what they say regarding long-term climate trends in Alaska.
Dano says
Re 109:
Well, the tout is the subject of your comment 106 – ‘natural cycle’ is the reason for the AK warming. There is indirect evidence that there is periodicity in the temp shifts (e.g. here and here, figs 5,6), and at the very least definitive statements can’t be made – that is: if there are regular periods of change, then the natural cycle only briefly returned. Point being, the tout is old and arguably dated, and in line with my ‘sc(k)eptic’ comments that a natural cycle (or fill in other skeptic argument: _____ ) hypothesis has, to be generous, limited empirical backing.
Best,
D
Dano says
Had I hit refresh before composing, I’d have seen Steve’s post – to which I would have referred to reinforce (and probably clarify) my point.
The implication that an anthropogenic signal/influence has overpowered natural cycles has growing empirical evidence, whereas empirical evidence showing the current warming is part of a natural cycle despite anthro inputs is…where, exactly?
D
Hank Roberts says
> where, exactly?
I dunno. I’ve looked too.
Even assuming all the change so far is due to natural forcings, either they’re from outside the planet (solar input) or they’re just rearranging the heat retention. If there is added new heating from changing solar input, the anthropogenic CO2 is predictably going to add to the natural boost in CO2 — the combination would presumably retain the added solar heat longer than would happen naturally, isn’t that right? I’d expect that to be worked out as part of predicting what will happen.
Maybe it has been. I haven’t found it.
Apropos of wondering where the data might be, I’d expect a “climate audit” requirement will require industry to produce relevant data — CFCs, methane, CO2, projections. As the energy industry is affecting public health like the medical and pharmaceutical businesses, it would presumably be similarly required to make more disclosures than other businesses about what it’s doing and how it’s testing its projections.
Michael Jankowski says
Re#110, Please point out where I have made any such presumption.
BTW, Tom’s work in #107 states, “A substantial part of this warming is a result of changes in the PDO.” This is exactly what I have been trying to point out, and it is the main conclusion of the ACRC article you wish to discredit.
The ACRC does have a climatologist on staff, but regardless, the data speaks for itself. If you think their data or analysis is in error (and maybe their analysis is crude), please post something to that effect. I also don’t see what Ted Stevens has to do with anything. You’re really grasping at straws if you are going to start a “political conspiracy theory” chant instead of indicating flaws with the data and/or methods.
Your Science link…did you possibly link to the wrong article? I get “Planktonic Foraminifera of the California Current Reflect 20th-Century Warming,” for which the summary doesn’t say anything about the PDO other than to say, “It is currently unclear whether observed pelagic ecosystem responses to ocean warming, such as a mid-1970s change in the eastern North Pacific, depart from typical ocean variability” – a statement which doesn’t seem very supportive of your claim. Maybe it’s in the body somewhere? I don’t have a subscription…
Re#111, “natural cycle is the reason for the AK warming” (your words) does not equate with “Their data suggests almost all of the net temperature change” (my words). If you had simply looked at the link I had provided, you would find that not all of the AK warming was explicitly explained away as “natural cycle.” So I’m not sure who you are having this “100% natural” argument with.
Re#112, “The implication that an anthropogenic signal/influence has overpowered natural cycles has growing empirical evidence…” This tout is old and dated. Few people dispute that there is an anthropogenic influence on climate. Sc(k)eptics simply question the significance of AGW and the costs versus benefits of potential solutions (and whether any are “worth” undertaking).
Ferdinand Engelbeen says
Re #110-112,
One seems to forget that solar activity in the last half-century has been higher than at any time in the last 8,000 years, and increased dramatically over the past century (including a more than doubling of the sun’s magnetic field).
Thus, the fact that temperatures near the Californian coast and up to Alaska are higher now is not necessarily “anthropogenic”…
Dano says
re current 114:
I’m not sure who you are having this “100% natural” argument with
Nor am I, as I neither quantified nor implied an amount. You said ‘almost all’, BTW.
Sc(k)eptics simply question the significance of AGW and what the cost vs benefits of potential solutions (and if any are “worth” undertaking) are
My apologies for missing this. Even though all your comments in this thread were about natural cycles, I should have related them to CBAs. I’d like to have addressed solutioning in my answers, but that would mean getting off-topic for this thread.
Re current 115:
One doesn’t forget (‘solar variability is unlikely to have been the dominant cause of the strong warming during the past three decades’).
One presumes you’re speaking of recent warming, rather than the cause of the PDO/phase shift/foraminifera/data points, but thanks.
Best,
D
Ferdinand Engelbeen says
Re #116,
Dano, I am not sure that the sentence ‘solar variability is unlikely to have been the dominant cause of the strong warming during the past three decades’ wasn’t added just to have the research published in Nature, as nothing in the research itself supports such a conclusion…
But more important, the PDO shift was in 1976, three decades ago, when the oceans were already being heated up by reasons other than increased GHGs. Since then, the PDO has remained relatively constant, which may or may not be linked to global warming.
Further, the full article about the Californian foraminifera makes some interesting points.
This – and the graphs – indicates that the change to more tropical foraminifera species already started in the first half of the century, which is mainly connected to solar changes, and was at full strength by the mid-1970s, while the attribution of GHGs to the warming of the oceans is confined to the second half, and mainly after 1975 – but even that is questionable. References 17-19 are to Levitus et al. 2001 and Barnett et al. 2005. Levitus et al. 2005 only points to others (Hansen) to suggest that the ocean warming of the past five decades may be attributed to GHGs, but at the same time warns that natural variability may have a large influence over decadal periods (like the 1980-1990 heat-content decrease of the oceans!). Barnett tries to link the ocean’s heat content to GHGs with a model, but the model results (significantly!) don’t catch any observed cycle between 10-100 years…
And about other natural causes: the tropical oceans warmed by some 0.085 K/decade in the past decades. This was accompanied by a shift in cloud cover, leading to 2 W/m2 more insolation (and some 5 W/m2 more IR back to space). The change in insolation (which may cause more ocean heating) and the loss to space (which may reduce the heat flow to higher latitudes) are far larger than the changes in radiation balance caused by GHGs (and opposite in net sign!)…
Dano says
Re: 117
Ahh, Thank you Ferdi, so your solar point was apropos of PDO. And, I agree: the Solanki conclusions supported little, as they were initial findings, although no doubt many are familiar with his graph (fig2 pg eight) that indicates a departure from the cycle in 80s but otherwise helps your point [and also plots T departure which is an optic to consider foram finding in next para].
And your ‘the oceans were already heated up by other reasons than increased GHGs’ is interesting, as it is my understanding – overviews in my linkies in 109 – that the area of the ocean that undergoes shifts in the PDO was in a cool phase just before 1976. And if AK temps are linked to ocean temps, they aren’t reflected in the record (hence the reason for all this back-and-forth on AK temps and phase-shift stuff). And I wasn’t aware of evidence indicating reasons for ocean heating in the early 20th C – I’m sure there was some heat storage somewhere that flipped the N Pacific, and you’ll help me understand where that heat came from.
And I’m not sure the foram paper (I haven’t read it yet) supports your implication that the foram record correlates to the solar changes, judging from the abstract, which doesn’t mention it. I’d be happy to report back to you whether your implication is true after I read it (a couplea weeks probably). But I note with interest your timing comment, which is a good question to keep in mind while reading (and also these sudden phase shifts and how they’re pointed out in the photos of cores).
Lastly, thank you for pointing out the evidence of natural cloud cycles in the tropics.
Best,
D
Tom Rees says
Re #110. The PDO is calculated from sea surface temperatures (SSTs) after correcting for long-term trends in temperature. So the shift in the PDO in the 70s is not simply an artefact of the general rise in SSTs. Your second link shows that the PDO is a product of other ocean processes. This is not too surprising; it indicates that the PDO, like the NAO, is an epiphenomenon. It’s nonetheless real – Pacific SSTs really do show this dipole fluctuation. It’s possible (indeed likely) that these underlying processes (e.g. oceanic Rossby waves) will be affected by global warming – although this doesn’t mean that the change in the PDO can be explained in this way.
Regarding the link to the Alaskan temps – a connection between surface air temps in Southern Alaska and SSTs in the adjacent Pacific is not too surprising. There is, however, no discernible statistical link between the PDO and global surface temps. One thing to note about the analysis provided by the ACRC is that the temps are a geometric mean of surface stations. Since most stations are in the south, the result is a spatial bias. Since the effect of the PDO is strongest in the south, this analysis overemphasizes the effect of the PDO. If you look at the Alaskan Arctic (e.g. Barrow), there is little effect of the PDO, yet temperatures there have risen markedly.
[Response: Are you sure you don’t mean an arithmetic mean (i.e. summing up all the stations and dividing by the number of stations)? Geometric means would be a little problematic, and not terribly useful… -gavin]
Pat Neuman says
How far inland, in miles, might it be before a typical PDO loses its effect on surface temperatures inland, south of Anchorage … say for a drop of 1 degree F or less in the annual temperature for that year (with consideration given to climate station elevations)?
Tom Rees says
Re #119 – sorry Gavin arithmetic mean. Geometric mean would be… interesting!
Re #120 – I don’t know how far inland. The correlation between the PDO and temps at Nome and Fairbanks is a little less than at Anchorage, but still quite strong. The correlation with temps at Barrow is negligible. If you do a multiple linear regression using the PDO and global surface temps (GISS) as covariates, you get the following:
Nome = -3.458 + 0.731*PDO + 1.987*GISS
Fairbanks = -2.696 + 0.764*PDO + 0.994*GISS
Barrow = -12.80 + 0.172*PDO + 3.278*GISS
Anchorage = 1.880 + 0.838*PDO + 0.869*GISS
This suggests that, after removing the effect of the PDO, Fairbanks and Anchorage are warming at approximately the same rate as the global average, Nome at about twice the global average, and Barrow at around three times the global average.
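For anyone who wants to reproduce this kind of fit, here is a sketch of a two-covariate least-squares regression via the normal equations. The data below are synthetic and the coefficients made up for illustration – they are not the actual PDO/GISS series or the station fits quoted above:

```python
import random

random.seed(1)

def fit_two_covariates(y, x1, x2):
    """Least-squares fit y ≈ a + b*x1 + c*x2 via the normal equations."""
    n = len(y)
    cols = [[1.0] * n, x1, x2]  # design matrix columns: intercept, x1, x2
    # Build X^T X and X^T y
    A = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(3)]
         for i in range(3)]
    b = [sum(ci * yi for ci, yi in zip(cols[i], y)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, 3):
            f = A[r][k] / A[k][k]
            for c in range(k, 3):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    # Back substitution
    coef = [0.0, 0.0, 0.0]
    for k in (2, 1, 0):
        coef[k] = (b[k] - sum(A[k][j] * coef[j] for j in range(k + 1, 3))) / A[k][k]
    return coef  # [intercept, coefficient on x1, coefficient on x2]

# Synthetic example: a station temperature built from known (made-up)
# coefficients plus noise, then recovered by the fit
years = 50
pdo = [random.gauss(0, 1) for _ in range(years)]
giss = [0.01 * t + random.gauss(0, 0.1) for t in range(years)]
temp = [-3.5 + 0.7 * p + 2.0 * g + random.gauss(0, 0.2)
        for p, g in zip(pdo, giss)]

a, b_pdo, c_giss = fit_two_covariates(temp, pdo, giss)
print(a, b_pdo, c_giss)
```

The fitted coefficients land close to the values used to generate the data, which is the sanity check one would also apply before interpreting the PDO and GISS coefficients physically.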
Mark Frank says
I have been rereading the Cohn and Lins paper with increasing but partial comprehension. A couple of things confuse me. Can anyone explain?
1) In section 4 they demonstrate how the value of d (a measure of autocorrelation) has a large effect on significance. When using the temperature record as an example (section 5 of the paper) they don’t specify the value of d they used. Have I missed something fundamental?
2) They have chosen a temperature record from 1856-2002. I get the point that assuming autocorrelation enormously reduces the significance of an estimated value of beta.
Nevertheless, I believe the physics suggests that the recent warming is an accelerating trend, not a straight line, with most of the increase in the last 30 years. Had they repeated the same example on say 1960-2005 would they not have got a much higher value of estimated beta and therefore a much more significant result even assuming autocorrelation?
Thanks in advance for any help.
Cheers
Lloyd Flack says
If you are trying to prove a connection between global warming and human activities, then the GCMs are needed.
If you are simply trying to prove that temperatures are increasing, then surely you are better off using an empirical model describing the relationship between temperature and time, without it necessarily giving you any information about causality. You could also try models including the forcing variables. If this is done, then the forcing variables should be included in the model in forms suggested by the physics. But I think trend analyses are most useful here for simply showing that a trend exists.
I have so far only been able to get access to the abstract of Cohn and Lins’ paper, so some of my comments might be a bit off target.
What I’ve noticed is that everyone seems to be only talking about linear trends. Has anyone tried fitting non-parametric smoothers such as smoothing splines or lowess smoothers to the temperature vs. time data? These might be quite useful in highlighting what is going on. They also allow for non-linearity in the data which could otherwise inflate autocorrelation in a linear trend model.
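A lowess-style smoother of the kind suggested above is simple to sketch: at each point, fit a weighted local line using tricube weights. This is a minimal illustration on a made-up series (a flat start with accelerating warming plus wiggles – not real temperature data), not a full lowess implementation (no robustness iterations):

```python
import math

def loess_point(xs, ys, x0, bandwidth):
    """Locally weighted linear fit evaluated at x0, using tricube weights."""
    w = []
    for x in xs:
        d = abs(x - x0) / bandwidth
        w.append((1 - d ** 3) ** 3 if d < 1 else 0.0)
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    num = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    den = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
    slope = num / den if den else 0.0
    return my + slope * (x0 - mx)

# Toy "temperature vs time" series: flat early, accelerating late, plus wiggles
xs = list(range(60))
ys = [0.002 * max(0, x - 30) ** 2 + 0.3 * math.sin(x) for x in xs]
smooth = [loess_point(xs, ys, x, bandwidth=10.0) for x in xs]
print(smooth[5], smooth[55])
```

Because the fit is local and linear, the smoother tracks the acceleration at the end of the series instead of forcing a single straight line through it, which is exactly the non-linearity point made above.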
Statistics should not be treated as a black box. It needs to be applied by someone who is familiar both with the statistics and with the physical processes involved. I could be wrong here but have any of the contributors to this blog been burnt by black box approaches to statistics? If so I could use the stories as cautionary tales for other statisticians.