This entire field of climate change is bringing out some interesting features of the scientific enterprise as currently practiced — namely in the evolving world of peer-reviewed literature.
Could one of the RealClimaters comment on this journal (Global and Planetary Change) as a place for publishing climate modeling results? A number of this paper’s claims appear to be embarrassingly wrong (“new technique…”), and I find it hard to believe the paper would have been published as is had it been peer-reviewed by the climate modeling community; the points rasmus makes in this post would surely have been raised.
And this is my tentative point. With so many journals now competing for articles, and with so much interest in this particular topic, it seems amateurish results are finding a place for publication with regularity (Spencer’s latest offering in Remote Sensing comes to mind, as do any number of articles in Energy and Environment). Of course, with so much interest, the process will run its course rather quickly, with poor results being picked apart (or utterly shredded, in this case) and discarded. But it is interesting that what would appear to be such a poor-quality piece of work in such a high-interest field can easily find publication…
Humlum seems to be another of the “Fun with Fourier” types. I wonder if they have ever undertaken the exercise of seeing how many periodic functions must be added to approximate a finite series. It is surprisingly few. It’s like claiming a great coup for fitting 4 points with a cubic polynomial. And once you throw in some noise, fuggedaboutit.
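The point about small fits is easy to make concrete. A minimal numpy sketch (the four y-values are arbitrary, chosen only for illustration): any four points whatsoever are matched exactly by a cubic, because a cubic has four free parameters, so a “perfect fit” of that kind carries no information at all.

```python
import numpy as np

# Four arbitrary points: a cubic (four free parameters) will hit them exactly.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, -1.0, 0.5, 4.0])   # any values at all work here

coeffs = np.polyfit(x, y, 3)          # degree-3 fit to 4 points
fit = np.polyval(coeffs, x)
print(np.max(np.abs(fit - y)))        # ~0: exact by construction, not by merit
```

The same logic applies to summing a handful of sinusoids to “explain” a short temperature series: enough free parameters guarantee a good fit regardless of any physics.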
Numerical analysis is just that, ‘a numerical analysis’: it may point to something or it may not. To take it as confirmation of some process is just as erroneous as to dismiss it as worthless. Sometimes a number of different factors point in the same direction, in which case science should not dismiss them out of hand. Here is an example which may contradict current understanding despite everything in it being based on well-known data: http://www.vukcevic.talktalk.net/CET-NVa.htm
btw. Fourier-transform spectral analysis often gives misleading results; there are far better ways. See the last graph in the above link: no beloved ‘60-year cycle’ in the global, N. hemisphere or CET data sets either.
This will be interesting if the prediction proves out, otherwise not. The upcoming 20 years will be rich with commentary, and comparisons, of the many predictions made to date. It will be one of the most exciting 20-year epochs in the history of science.
#3 Global and Planetary Change is a legitimate scientific journal. It has an impact factor of 3.6, which is quite good, and many interesting papers are published there. This is probably a classical example of the “tunnel effect”: peer review is not 100% efficient.
With the recent methane news, this makes a perfect storm for the author, unless he’s on his deathbed. He’s really set himself up for embarrassment sooner rather than later by making specific weather claims over a relatively short period with most real-world evidence pointing the other way. That the journal is credible and the author has expertise in related fields makes it worse. There are too many all-too-human scientists losing their minds in pursuit of the bubble reputation, money, or something.
Unfortunately, given the readiness of the infotainment industry and their funders to seize this kind of stuff, it is, to put it mildly, counterproductive.
I’m looking at the Humlum, Solheim, and Stordahl paper and it seems to me:
(1) They are decomposing data by projecting onto these wavelet basis sets,
(2) Looking for coincidences between planetary resonances and regions of high energy in the resulting projections,
(3) Concluding that, when these regions include some resonances, the relationships must be causal.
First, their claim on page 146 that somehow the CWT (continuous wavelet transform) localizes in time and frequency better than the Fourier is incorrect: It suffers exactly the same kinds of Heisenberg-like uncertainty relationships. See, e.g., Lang, Forinash, “Time-frequency analysis with the continuous wavelet transform”, Am.J.Phys. 66(9), September 1998, 794-797.
Second, there is no discussion of pre-filtering the finite series to minimize spectral leakage. Perhaps this is a function in the software they used, but it seems it should be mentioned. See, e.g., http://en.wikipedia.org/wiki/Spectral_leakage. This is a worse problem for non-stationary signals, and, at least in the Fourier case, it’s usually dealt with by using a moving window.
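To illustrate the windowing point (a generic numpy sketch with made-up values, not the paper’s data): a sinusoid whose frequency falls between FFT bins leaks energy across the whole spectrum, and a Hann taper suppresses the far sidelobes by orders of magnitude.

```python
import numpy as np

n = 256
t = np.arange(n)
f0 = 10.5 / n                          # deliberately between FFT bins -> leakage
x = np.sin(2 * np.pi * f0 * t)

spec_rect = np.abs(np.fft.rfft(x))                   # no window (rectangular)
spec_hann = np.abs(np.fft.rfft(x * np.hanning(n)))   # Hann-tapered

# Compare the worst leakage far from the peak (bins 30 and up):
far_rect = spec_rect[30:].max()
far_hann = spec_hann[30:].max()
print(far_rect, far_hann)
```

The rectangular window’s sidelobes decay only slowly, so distant bins pick up spurious energy that a taper largely removes; if the authors’ software did no tapering, their distant “peaks” inherit that contamination.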
Third, the LATERAL UNCERTAINTY in the frequencies of their lobes which are the basis of their extraterrestrial interpretations is significant. It’s made worse by uncertainty in their original data, which is clearly non-zero, and by spectral leaking, assuming it wasn’t compensated by filtering.
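That lateral uncertainty can be made concrete with a back-of-envelope sketch (assuming a roughly century-long record, which is my illustration, not the paper’s exact span): a record of length T resolves frequencies only to about 1/T, so the Fourier bins bracketing a “68.4-year” peak sit at periods of 100 and 50 years, a factor of two apart.

```python
import numpy as np

T = 100.0                    # record length in years (illustrative)
k = np.arange(1, 11)         # harmonic index of each Fourier bin
periods = T / k              # bin periods: 100, 50, 33.3, 25, ...

# The bins bracketing a claimed 68.4-year period:
below = periods[periods < 68.4].max()
above = periods[periods > 68.4].min()
print(above, below)          # 100.0 50.0: a factor of two apart in period
```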
Fourth, I think they really need to do the statistical test in terms of mean-square-error or something in the time domain, not in the frequency or wavelet domain. Using a white noise null is really a red herring. ANYTHING, ANY temperature profile is going to show significance against that. What they should do, I think, is estimate the uncertainty in the frequency domain, and jitter those spectra and see how the temperature profiles differ across a collection of jitters from the measured data.
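A sketch of why the white-noise null is too weak (generic AR(1) example, not the paper’s data): simple red noise with no cycles at all piles its power into low frequencies, so broad low-frequency “peaks” will look wildly significant against a white null.

```python
import numpy as np

rng = np.random.default_rng(0)
n, phi = 2048, 0.9

# AR(1) "red" noise: x[t] = phi * x[t-1] + eps[t], with no cycles anywhere
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

power = np.abs(np.fft.rfft(x)) ** 2
low = power[1:power.size // 4].mean()       # low-frequency quartile
high = power[3 * power.size // 4:].mean()   # high-frequency quartile
print(low / high)                           # power ratio far above 1
```

Against a white-noise null, those low-frequency lobes would all be flagged as “significant cycles”, which is exactly the red-herring problem with the test.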
Maybe I’ll look more at this. Not a lot of spare time, though.
If I were a journalist coming to this posting, well-intended or not, I would be unimpressed with language that (understandably) mocks the analyses and ideas of these authors. It does not help the case of the science community to come down to the same level, to use the same kind of dismissive language, as disingenuous skeptics do (e.g., to write parenthetically, “No kidding!” or “Seriously!”, etc.). Let’s maintain the high and respectful standard that our community is known for, please.
I hope and trust that Rasmus and/ or someone will provide a formal rebuttal to the paper.
I am not certain that analysis by O. Humlum et al. should be entirely dismissed.
– O. Humlum et al. (page 4, Fig. 2): “The record is dominated by three periods of about 68.4, 25.7 and 16.8 years long.”
These periods are also found in the much longer CET records, as can be seen in this graph http://www.vukcevic.talktalk.net/Spectra.gif (from post #5).
– According to the Arctic’s four-century-long geomagnetic profile there is an apparent, as yet not entirely understood, correlation with solar activity, which is more likely to be the cause of any oscillations than the precession of the Moon’s orbital nodes. http://www.vukcevic.talktalk.net/CET-NAP-SSN.htm
Humlum is a contrarian well known in Scandinavia but fortunately, at least hitherto, not much outside it, so maybe it’s not worth the while to give him this attention. But, like e.g. Tim Ball or Ian Plimer, he is a classical contrarian in the sense that his own field (physical geography) only marginally borders climate science, and he has followed the well-trodden path of such people: authoring lots and lots of outrageous rubbish about global climate in the popular literature while (wisely) staying clear of the scientific literature with it.
Here (as in many Danish and Norwegian newspapers), he advances claims such as “only 4-5% of atmospheric CO2 is manmade”, “the CO2 rise might be a result of the ocean heating”, “Antarctica is not warming” and many other shopworn septic talking points. He even pushes Jaworowski’s (of LaRouche’s Schiller Institute) and Segalstad’s ridiculous claim that closed polar ice magically levels out the enclosed CO2 content over time (!). Of course, he never elaborates exactly how this would happen, why this process should settle on 180-190 ppm, why it apparently did not level out the earlier interglacial levels of 280 ppm, or why this unknown process mysteriously must have stopped around the time when human CO2 emissions kicked in.
Many of us have regularly tried to point out these obvious howlers directly to him on Danish climate blogs, but he has always failed to answer. I have tried a little test at my terrestrial ecology section (I’m a biologist), and every single student I have asked who was even passingly familiar with the carbon cycle, its sources and sinks and basic isotopic facts has immediately been able to point out the obvious weaknesses (e.g. “if the atmospheric CO2 rise came from oceanic outgassing, then why has oceanic CO2 risen too?”).
I would like to ask Rasmus: Do you have personal experience with Humlum? Do you honestly think he believes the rubbish he’s peddling? I must say that I don’t believe a professor of physical geography like Humlum could honestly be ignorant about this or believe these arguments himself.
Comment by Christoffer Bugge Harder — 16 Dec 2011 @ 6:15 AM
Yvan Dutil @ 2: “I have seen it yesterday in arxiv.”
Comment by Pete Dunkelberg — 16 Dec 2011 @ 8:44 AM
Mike Ellis, I disagree. Ridicule is a perfectly legitimate response to the ridiculous. I think humanity has reached a point where we must collectively realize that we cannot afford to suffer fools gladly.
Your argument is itself a red herring. You can wave the “those who call it like it is are just big meanies trying to bully” herring at those more interested in buying into bad methodology to support their own confirmation bias, but that just confirms for me that your argument is a distraction from the relevant facts in evidence.
If the author is already peddling denialism based on limited facts used out of context, then this new paper looks likely to be used as just the latest red-herring distraction in the global warming argument. It examines “Svalbard and Greenland temperature records” over too limited a time span and without relevant context; that region, in case some have not noticed, is not the planet Earth, and the span is too short relative to mechanisms outside the examined region, because this is in fact a regional analysis. One is left with the reasonable conclusion that the paper is designed to be precisely what I suspect it is designed for: a red-herring distraction in the argument between science and science denialism regarding global warming.
A climate model based on curve fitting is going to be wrong, because AGW produces weather that we have not seen before. On the other hand, computer simulations of climate that do not include carbon forcing and ice dynamics are also going to be wrong. For stakeholders, it does not matter why a model is wrong. All that matters is the accuracy of the final answer.
AR4 missed the time frame for system changes that caused the Arctic sea ice retreat and all of the follow-on feedback cycles (e. g., CH4 releases from the Arctic sea bed).
If the computer models (and AR5) significantly understate the time frame or effects of AGW, then it really does not matter whether stakeholders read computer model output, the latest drool by a curve fitter, or the Farmer’s Almanac. In each case, the effect will be poor policy and bad planning.
When dishonesty prevails and threatens humanity, it is necessary to use anything and everything that comes to hand. Especially with those who wish to distract from the excellent scientific discussion that should otherwise prevail in this field. Humor is also soothing to the soul of those beleaguered souls who work very hard to get real news and real science out to a public that mostly would prefer to ignore it and hope it will go away. There are a few mosquitoes here who have discovered they can get people to swat away rather than hiking up the rather steep trail, as well. Per Luntz et al., distract and vilify works to delay. Unfortunately, we do not at this point have time for this slick anti-sense.
Comment by Susan Anderson — 16 Dec 2011 @ 12:34 PM
Confidence intervals, Bayesian or otherwise, may be of interest to statisticians, which I am not; my background is in engineering, hence I look to physical principles as an evaluation not of numeric probability but of consequential possibility.
Susan, When dishonesty threatens humanity, it is necessary only to use the most powerful of weapons–the truth. It is difficult to wield, sharp as a razor on all sides, but those who have mastered it cannot be vanquished.
Among other things, it would be nice to know which four meteorological stations in Svalbard were used to construct their “composite” temperature record. Furthermore, how was the composite constructed? In three of their figures (1, 4, and 5), three different time periods are used to smooth mean annual air temperature. Why? Lots of questions.
Looks like an advanced case of pareidolia. This “it’s natural” stuff seems to be fashionable among deniers at present. Guess the “no increase since 1998” stopped working for them.
Leaving everything else about physics, climatology, ecology, oceanography etc. aside, the idea that it is just pure chance/coincidence that we have had such a rapid increase in temperature in the last 30 years is the most extreme example of wishful thinking the world has seen.
I believe what they used is identical with the “Svalbard Lufthavn” record available from eKlima; at least my chart of that looks like Humlum et al.’s. The actual “Lufthavn” record goes back to 1975, but the composite record going back to 1912 presumably also includes the “Svalbard Radio” station (compare this).
David Horton says “..the idea that it is just pure chance/coincidence that we have had such a rapid increase in temperature in last 30 years is the most extreme example of wishful thinking the world has seen..”
If you roll a die, there is an equal chance of any integer between 1 and 6. Repeat 50 times, twice, in separate columns. Very likely there will be no relationship between the two columns of numbers; however, it is likely you will find sequences which do match up. On my first attempt, from row 31 to 39… wow, a linear correlation which can explain 81% of the variance. What’s the chance of that?
And if each roll of the die represents 3.33 years, then we have 30 years. This does not prove that you are wrong, or that climate change is random rather than deterministic, but it does prove that it could be random.
Waiting for more data is controversial, but the longer the series, the smaller the chance it is random. Even then, though, you would have to prove that climate change is determined by humans rather than by some other deterministic cause.
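The thought experiment above is easy to run (my sketch; the seed and window length are arbitrary): two independent columns of die rolls routinely contain short stretches with impressive-looking r-squared.

```python
import numpy as np

rng = np.random.default_rng(42)
a = rng.integers(1, 7, 50)    # first column of 50 die rolls (1..6)
b = rng.integers(1, 7, 50)    # second, independent column

# Scan every 9-roll window (like "row 31 to 39") for a chance correlation
best = 0.0
for i in range(50 - 9 + 1):
    r = np.corrcoef(a[i:i + 9], b[i:i + 9])[0, 1]
    best = max(best, r * r)
print(best)
```

Of course, this only shows that short windows of anything can correlate by chance; it says nothing about a multi-decade trend with a physical mechanism behind it.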
Re: David Horton @34 This “it’s natural” stuff seems to be fashionable among deniers at present. Guess the “no increase since 1998” stopped working for them.
I was just commenting on this pair of denier arguments at Skeptical Science. The two are mutually exclusive. The “no increase since 1998” argument depends on assuming that CO2 is the only factor affecting temperatures, so any reduction in the rate of increase is evidence of failure of AGW theory. The “it’s natural variation” argument depends on the assumption that there are lots of other factors (known or unknown) that affect global temperature, so there is no reason to think that CO2 has any effect at all.
JGarland, The truth has setbacks, but it does eventually win. An edifice of lies has within it the seeds of its own destruction, however grand it may look. Eventually, contradictions pile up and all but the most ideologically blinkered either die or abandon the lies.
Nature will keep giving us the same answers. She will even turn up the volume if we do not listen. Eventually the truth cannot be denied, which is fortunate: in science, the truth is all we have, or we aren’t doing science.
The decision about whether to wait for more data can also be made scientifically. One must look at the consequences of waiting vs. the consequences of making a decision with imperfect data. We are long past the point where each additional day puts us further and further into the hole. It is simply foolish to bet the future of civilization on a 20:1 longshot.
Is it possible that random internal variability can cause temperature trends?
An Internal Combustion Engine Analogy.
Let an engine operating at 107 deg C at 2350 rpm undergo the following random rpm changes at 1-minute intervals.
t (min): 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17
rpm: 2350, 2550, 2000, 2150, 2000, 2350, 2600, 2300, 2800, 2100, 2550, 2150, 2350, 2450, 2500, 2850, 2300, 2150
A change of 50 rpm equals a 1 deg C change and requires 1 min of equilibration time.
The temperature response (deg C) @ each 1 min time mark is as follows:
From a completely random source of variability with no linear relationship (e.g. r squared = 0.045), we find a temperature response curve with an r squared of 0.715
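The bookkeeping of the analogy can be reconstructed as follows (my sketch of the stated rule, not the commenter’s original table): temperature at each minute responds to the previous minute’s rpm at 1 deg C per 50 rpm.

```python
# Reconstruction of the stated rule: temp at minute t reflects rpm at minute t-1,
# at 1 deg C per 50 rpm above or below the 2350-rpm baseline (107 deg C).
rpm = [2350, 2550, 2000, 2150, 2000, 2350, 2600, 2300, 2800,
       2100, 2550, 2150, 2350, 2450, 2500, 2850, 2300, 2150]

base_rpm, base_temp = 2350, 107.0
temp = [base_temp] + [base_temp + (r - base_rpm) / 50.0 for r in rpm[:-1]]
print(temp[:5])   # [107.0, 107.0, 111.0, 100.0, 103.0]
```

Note that by this rule the temperature at t=2 is 111 deg C, which is the arithmetic a later commenter checks.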
No, Isotopious, it’s not some kind of dice-throwing exercise; it’s a sudden sharp rise in temperatures, after thousands of years of relatively stable ones, at precisely the time CO2 levels, along with population and industrial growth, also sharply rise. I don’t know how you would calculate the odds of such a “coincidence” being due to “chance”, but you certainly wouldn’t do it with a few coin tosses.
[Response: David Thomson, the great Bell Labs statistician, shows at least part of this calculation. See the paper in PNAS, freely available.–eric]
The sequence of rolls of loaded dice is random, even if you lose twice as often as expected. How much has the additional 110ppmv CO2 loaded the weather dice? Wanna bet whether the effect is linear?
Russian heat wave 2010(15000+ deaths), European heat wave 2003(40,000 deaths), Australian “driest on record” drought 2009-2010, Australian wettest March on record 2011, Mexico/Texas single worst drought on record(600000 less head of cattle, November beef cattle price of $119.00 per cwt is up $2.00 from last month and $25.00 higher than November 2010,USDA Nov Report), ice extent for November 2011 third lowest in the satellite record, tripling of weather related insurance losses in the last decade(but not non-weather catastrophes), 2010 second ’100-year’ Amazon drought in 5 years, Iraq heat wave hits 52 degrees C, Intel reduced its projected fourth quarter revenue figure by $1 billion Monday, because it said personal computer makers are adjusting their orders for Intel processor chips in line with reduced supplies of computer hard-disk drives caused by the Thailand floods…
Random events, or loaded dice – and how much are we betting against Nature’s house?
Isotopious, I think it was Joan Baez who said, “If you have a choice between a hypothetical situation and a real one, take the real one.” Nearly 40 years of warming is pretty damned strong evidence that we are doing something, and then there’s the physics. Denialists always seem to want to ignore the physics. They also hate it when you point out that the warming was predicted over 115 years ago. A confirmed prediction counts as very strong evidence in science; prediction is far more important than explanatory power. You say we should wait.
We already have over 20% increase in the proportion of Earth’s land area in drought. We already have km wide plumes of methane bubbling up in the Arctic. We are already on track to have an ice-free Arctic before 2050. Rising temperatures are already decreasing crop yields. The health of the oceans is already being adversely affected by rising temperatures and acidification.
Meanwhile, the human population is on track to crest (we hope) at around 10 billion sometime around midcentury. We will have to meet the needs of all of those people in an environment where petroleum has become scarce and expensive, at a time when our agricultural methods have become critically dependent on it.
Given all those facts and the strength of evidence that the current warming epoch is anthropogenic, what possible argument could a sane person construct for waiting? If we are not responsible, our progeny may not refer to the current epoch as the anthropocene, but rather as the anthropo-obs-cene.
“… 3 points for every statement that is logically inconsistent.
5 points for each such statement that is adhered to despite careful correction.
5 points for using a thought experiment that contradicts the results of a widely accepted real experiment….
… 10 points for arguing that while a current well-established theory predicts phenomena correctly, it doesn’t explain “why” they occur, or fails to provide a “mechanism”….”
Comment by John E. Pearson — 17 Dec 2011 @ 12:57 PM
52, Hank Roberts: … 10 points for arguing that while a current well-established theory predicts phenomena correctly, it doesn’t explain “why” they occur, or fails to provide a “mechanism”….”
Very droll. But why only “well-established”? Some “well-established” theories were critiqued in this fashion long before they became well established. To my knowledge, gravity has never been explained (“why” or “how”) since Newton declined to “feign” hypotheses. Einstein turned “attraction” into curvature of space without explaining “how” or “why”, and that took a while to become “well-established” in its turn. Does anyone know why the “universe” is expanding, i.e. what caused the explosion, or what mechanism held it all together just “prior” to the explosion?
Your “Internal Combustion Engine Analogy” is internally inconsistent. If the engine changes temperature by 1 deg per 50 rpm, and you change the rpm from 2350 to 2550 at t=1 min, then the temperature would equilibrate one minute later (t=2) at +4 degrees ((2550-2350)/50 = 200/50 = 4), or 111 deg (107+4), not the 108 deg you show.
Since Humlum et al. is paywalled, I can’t see their references. Is the prior work in this area, by Klyashtorin and Lyubushin (“Cyclic Climate Changes and Fish Productivity”), which found cycles with periods of 16.8, 25.6, 26, 32, 32.5, 38.6, 39, 53.9, 54, 55.3, 60, 60.2, 50-70, 72, 75.8, 76, 99, and 108 years in the climate record, consistent with their results?
Folks, we scientists outside the realm of climate science depend on your expertise to sort the “wheat from the chaff”. (I guess it won’t surprise you to learn I am an agricultural scientist.) Frankly, some papers on climate change are too deep for me, but as a scientist, I understand the value of the peer-reviewed literature. I am helping to lead an effort to educate Extension agents (and ultimately farming communities) on climate change, and the most powerful thing I can point to is the peer-reviewed literature. So while blog exchanges are helpful, the most powerful rebuttal will be a thoughtful peer-reviewed response.
#47 Thanks Eric; anyone contemplating the idea that the temperature rise of the last 4 decades can be duplicated by tossing some coins should read the Thomson PNAS paper. It would be useful if the exercise were repeated now with an additional 20 years or so of data. The odds that the link between CO2 and temperature is due to chance must now be vanishingly small, and in turn the links between both and melting cap and glacier ice, sea level and acidity, changes in the ecology of plants and animals, and increases in record and extreme weather events. The idea that what is happening to our planet is just a random natural event we happen to be witnessing right now is so absurd it falls over as soon as it is voiced, for any intelligent person.
*Septic Matthew* – living as we do in a rural area, our bodily waste products from the lavatory etc drain into a septic tank. Septic has always meant to me at any rate, poisonous, infected, or the accumulation of potentially disease inducing matter in a concrete lined underground tank that must be cleaned out periodically. I associate septic with pus, turds, and other such putrid material. Your very cogent posts are well written with correct spelling. It is puzzling then to see you describe yourself as “septic” rather than a sceptic/skeptic? Surely you do not want us to think of you as a turd? Or is this an attempt at irony? If it is, I have to tell you that the precise meaning of your irony laden name evades me. Are you really a sceptic about global warming etc, and present yourself as the microbe that reveals and heals the diseased wound afflicting a healthy body? Or can’t you spell?
Isotopious 45: I am reminded of a statistician and a pit crew mechanic watching a car race. The statistician has an outsider’s view and makes deductions based on relative position and lap time. The mechanic has an insider’s view and makes deductions based on race congestion, fuel load and tire wear. What is the chance that they are both wrong when they independently reach the same conclusion?
#55–Thanks for the NYT story; it’s a good one, despite a couple of simplifications that grated on me just a bit. A tease:
“Dr. Walter Anthony had already run chemical tests on the methane from one of the lakes, dating the carbon molecules within the gas to 30,000 years ago. She has found carbon that old emerging at numerous spots around Fairbanks, and carbon as old as 43,000 years emerging from lakes in Siberia.
“These grasses were food for mammoths during the end of the last ice age,” Dr. Walter Anthony said. “It was in the freezer for 30,000 to 40,000 years, and now the freezer door is open.”
Unfortunately, a lot of it was also a bit alarming. Increased wildfire has been well documented in several more southerly locales around the world, but apparently it is (anecdotally) increasing in tundra country too, with highly carbonaceous results.
Problem is that the program proposed does not constrain model mis-specification error, even if its terms are all physical. An alternative is an intensely data-based model using stochastic processes, where the calibrations and dependencies come from the data series themselves.
The idea is to subsume the entirety of behavior in a windowed sample of the system-under-study’s recent past, and base diagnostics and remedial measures entirely upon that window’s characterization.
These could very well be informed by efforts to determine, for instance, the marginal change in climate per incremental addition of greenhouse gases, from, for example, paleoclimate studies.
Ultimately, a physics-based model would be expected to have greater explanatory and confirmatory power. Still, as a preliminary informant of further experiment and policy, it seems the empirical model is pretty good.
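A minimal sketch of the kind of windowed, data-only calibration described (my illustration with a synthetic AR(1) series; none of this is the commenter’s actual program): estimate the persistence from a recent window of the series alone, and forecast from that characterization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "system under study": AR(1) with an assumed persistence of 0.8
phi_true, n = 0.8, 600
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()

# Calibrate purely from a recent window of the series itself (no physics)
window = x[-200:]
phi_hat = np.dot(window[1:], window[:-1]) / np.dot(window[:-1], window[:-1])

# One-step-ahead forecast from the window's characterization alone
forecast = phi_hat * window[-1]
print(round(phi_hat, 2))
```

The appeal is that calibration comes entirely from the data; the weakness, as noted above, is that nothing constrains mis-specification when the system leaves the regime the window sampled.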
I am angry that this article was even published. Whoever reviewed this paper and gave it the green light did so with the intention of muddying the waters. It doesn’t matter if the paper gets trashed by the scientific community, it’s already out there, and not many people will check back to see what the reviews are. People will accept it as fact, or if by chance they hear about bad reviews, they’ll see proof that there really is debate about what causes global warming.
#69 Jan Galkowski says: Still, as a preliminary informant of further experiment and policy, it seems the empirical model is pretty good.
If your comment refers to this link: http://www.vukcevic.talktalk.net/RF.htm
then it is appreciated. Your suggestion, although appropriate, is beyond my capacity and competence to carry through so that it would withstand the scrutiny of a professional statistician; however, if you or anyone with the appropriate skills is interested in pursuing the matter to the degree required for publication, my email is on the graph at the above link.
Mr. Galkowski, thanks again.
Chris Korda, As a general rule, I try to avoid introducing any term that contains in order the letters t, h, r, o and b. However, I would note that humans are showing themselves to be no more adept at avoiding the consequences of population biology than a colony of yeast in a bottle of beer.
Ah, I see that Isotopious subscribes to the theory that a watch that is broken is more accurate than a watch that is miscalibrated by a microsecond, as the former will be right twice a day. Dude, did you ever even take a science class?
Unfortunately I don’t have access to the original paper, but from the description here it appears to be rubbish. A purely statistical analysis that does not take physics into account can at best predict short-term phenomena, or situations that are exact analogues of past variations with causality similar to the situation they purport to be modelling. To “disprove” anthropogenic change, they are more or less claiming that the underlying causality has no effect. You have to be a brave scientist to claim that curve fitting trumps physics. Or a stupid one. Funny how denialists cling to arguments like “it’s just a model” or “it’s just statistics” when the numbers don’t agree with their prejudice, but are very happy to accept this sort of garbage when it suits their case.
The difference between denial and skepticism: a skeptic rejects any bogus argument or evidence whether they like the conclusion or not; a denialist only rejects findings that don’t support their prejudices.
On the positive side, check out these pix of animals near my home on my new blog. I hope they will give anyone tiring of the fight against destruction of the biosphere a lift.
This is what Dan O’Neill long ago called the “over-the-counter-culture” — today it’s the “over-the-counter-consensus” — it’s about the money with these people, always the money, always short term.
The PR stuff quotes:
“… ICSC chief science advisor, Professor Bob Carter of James Cook University in Queensland, Australia, and author of … “ Climate: the Counter Consensus ” says, “Science has yet to provide unambiguous evidence …. New Zealand-based Terry Dunleavy , ICSC founding chairman and strategic advisor … ICSC energy issues advisor, Bryan Leyland of Auckland, New Zealand. … ICSC science advisor Professor Ole Humlum, of the Institute of Geosciences, University of Oslo …”
I wonder what the editor of that journal was thinking?
“Climate4you” is Humlum’s climate blog, by the way. It’s read-only.
Some of the claims and charts might be worth scrutiny — there’s a cloud-cover-vs-temperature chart for example, where he takes data from different published sources and draws a line showing change over time. I wonder if he published.
There is plenty of evidence for significant cycles across all ranges of periods. The Milankovitch cycles of around 400 ka, 100 ka, 40 ka and 23 ka are widely accepted and are attributable to astronomical configurations. Cycles of 2300 and 208 years are found in many climate series as well as in solar proxies. There are other longer and shorter periods that are also well established.
I do agree with you that finding a cycle of 8.7 years and attributing it to the 8.85-year lunar cycle is unjustified: the agreement over 4000 years of data should be 0.01 year or better, unless the lunar period has changed over that time. There is too much of this sort of error made.
Russian researchers (Afanasiev and others) have found evidence for the 9.3-year lunar cycle (half the 18.6-year cycle) in very long salt deposit series. Using these data, Afanasiev has devised a method he calls the “Nanocycles Method”, which uses the change in the period and its interaction with the annual seasonal cycle to accurately date geological deposits over the last 600+ Ma. This work is well established in Russia, and it is a pity that more Western geologists and climatologists are not aware of it.
[Response: You need to be careful here. The cause of the Milankovitch cycles is very well established: predictable variations in the earth’s orbit, and hence in the amount of sunlight received as a function of season. The seasonal cycle likewise, of course. Furthermore, these cycles cause huge (tens of watts/m^2) changes in insolation, which means there is a clear physical mechanism for their influence on climate. They *should* have an influence on climate on purely physical grounds, and that is why their influence is easy to detect on statistical grounds. Everything else you are referring to either has no known physical mechanism underlying it or involves a known but very, very small effect (e.g. in the case of the 11-year sunspot cycle, which is of order 0.1 W/m^2). If it exists, the supposed link between the moon and salt deposition could, I suppose, be related to the tides, but this would still have nothing to do with climate. –eric]
Interesting read, but I am not sure how relevant it is. Meanwhile, as Eric stated, Milankovitch cycles are fairly well known with predictable effects. Solar cycles are less well known, except for the 22-year sunspot cycle, as are their effects. http://solarscience.msfc.nasa.gov/SunspotCycle.shtml
Response to response above: Agreed that Milankovitch cycles are stronger and so more easily established. However the nature of cycles is that as more data becomes available it is possible to establish the significance of weaker cycles also. See the above links please. I didn’t mention the cycle of 350-355 years in my first comment, because I wanted to limit that to cycles proven beyond reasonable doubt. But a cycle of ~355 years was reported by Chizhevsky more than 50 years ago, and shows up in both of these series also.
There is a lot of well-established cycle information that is not taught or understood well in universities, simply because it is so interdisciplinary. I would highly recommend the paper by Edward R Dewey of the Foundation for the Study of Cycles (which went defunct about 1998 and should not be confused with the present FSC): http://www.cyclesresearchinstitute.org/cycles-general/case_for_cycles.pdf
“… insofar as cycles are meaningful, all science that has been developed in the absence of cycle knowledge is inadequate and partial. Thus, if cyclic forces are real, any theory of economics, or sociology, or history, or medicine, or climatology that ignores non-chance rhythms is manifestly incomplete, as medicine was before the discovery of germs.”
Ray Tomes : “[I]nsofar as cycles are meaningful” makes Dewey’s statement a truism. The scientific (or economic, or whatever else) job is to determine whether or not they are indeed meaningful, a rather more difficult task than simply finding apparent cycles in time-series. It is only achieved by determining actual causes. While cycles may point to physical causes and thus be worth investigating, many won’t be.
The last 30 years of climate data is not going to have much influence on, for instance, a 355 year cycle, nor will such a cycle say anything about the physics behind AGW. It is actual current events which are pertinent to the very different atmospheric conditions which mankind has created and continues to change. AGW explains much of those events, don’t you think?
Comment by David B. Benson — 20 Dec 2011 @ 9:03 PM
Analysis of cycles goes way back in meteorology and things just got worse with the invention of the fast Fourier transform. The only requirement on data for a successful Fourier analysis is that the series be bounded and (Lebesgue) integrable.
The USAF once published a Fourier study of sunspot numbers that was so bad I tried to make fun of it with a study that related flying accident rate to unit number via a cosine function. It explained 80% of the variance.
The lab that published the sunspot junk withdrew the report but got their revenge by submitting my study to the flying safety office. It took months to convince them that I was kidding.
David B Benson @84: Yes, there needs to be some form of restoring force for displacements (or what you called elasticity) for cycles to exist. You are right that people look for patterns – it is what science is about. In cycles research we use Bartels’ significance test to see whether we are justified in believing in a cycle. The p~0.05 significance of the 207.7 year cycle in C14 might be dismissed on its own, but the p<0.0002 significance of a 206.9 year cycle in Be10 confirms this as a strongly significant cycle.
Cugel @83: Yes, as stated a truism. But if you look at Dewey's paper (which is just a tiny fraction of the research done) then the existence of cycles in every field of scientific research is well established. Also, agreed that finding a cycle does not tell us the cause, but if several matching cycles periods are found in conceivably related fields then it gives a strong hint for further study.
Rather than use Fourier analysis which limits us to an integral number of cycles in our data, we allow the cycle frequency to vary continuously and determine the frequency to 100 times that precision and typically 10 times that accuracy.
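The idea of scanning the period continuously, rather than being confined to the FFT's integer-cycle grid, can be sketched as a least-squares periodogram. This is a toy illustration with synthetic data: the 8.85-year figure simply echoes the lunar period discussed above, and the series length and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400                                  # 400 "years" of annual data
t = np.arange(n)
true_period = 8.85
y = np.sin(2 * np.pi * t / true_period) + 0.5 * rng.standard_normal(n)

# An FFT only offers periods n/k for integer k; near 8.85 the choices
# are n/45 = 8.89 and n/46 = 8.70.  Scanning the period continuously,
# with a least-squares sin/cos fit at each trial period, does better.
periods = np.arange(8.0, 10.0, 0.001)
power = np.empty(periods.size)
for i, p in enumerate(periods):
    X = np.column_stack([np.sin(2 * np.pi * t / p),
                         np.cos(2 * np.pi * t / p)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    power[i] = np.sum((X @ coef) ** 2)   # variance explained at period p
best = periods[np.argmax(power)]
print(round(best, 2))                    # close to the true 8.85
```

Of course, as other commenters note below, pinning down a period more precisely says nothing by itself about whether the cycle is physically meaningful.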
In my view the question is not whether humans or natural cycles (or natural randomness) cause climate change. Very clearly both do. The question is how to break down the fluctuations into the various parts so that the human component can be isolated and the natural cycles can be extended.
The levelling off in temperatures since about 1998 was entirely expected on the basis of the dominant cycles. The 2300 year cycle is rising for centuries to come. The 208 year cycle reached a maximum around 1998, and so was rising for the entire 20th century and will fall for the entire 21st century. The quasicycle of 50 – 60 years was at a high in 1941 and 1998. Lows therefore occur around 1970 and ~2025.
If one adopts a view that only one of these factors is THE cause of fluctuations then one must be proved wrong in the future, whether or not humans act differently. Only by recognizing all factors can we hope to get near to the right answers.
[Response: If it was all ‘entirely’ expected on the basis of your cycles then a) they should have been detectable prior to 1950 and all subsequent changes should have been predictable on the basis of what was known then. In which case, the ‘cyclists’ should have been warning people about increasing temperatures in the 1950s (perhaps you have a cite for that?). Otherwise, you are left with a post-hoc fit to the temperatures that doesn’t even come close to ‘recognizing all the factors’ and which has no predictive power. – gavin]
David Benson, and then of course, there is my favorite example of periodicity in the first 10 digits of the base of Napierian logarithms, e = 2.718281828…, which, being transcendental, of course cannot be periodic. The human brain is programmed to look for order, and it will find it whether it is there or not. It is risky to posit periodicity or even quasi-periodicity without understanding some sort of mechanism that could be driving it.
EFS_Junior #65, re: the Norwegian denier group Klimarealistene,
Count yourself lucky you don’t have a PDF translator. The booklet you refer to is slightly more coherent bunk than their first booklet, due to the involvement of Humlum and others with academic backgrounds, but bunk it is. The bit of it that is relevant to the present discussion: There’s no fun-with-Fourier stuff, but Humlum pulls a trick to hide the incline that uses the central Greenland reconstruction. This issue has been addressed at SkepticalScience (compare this for a slightly different version of local post-1855 warming).
Interestingly, random fitting is what many denialists are accusing modern scientists of these days, when in fact they are not simply looking at the patterns and saying ‘hey, these things go up and down’.
This is because they are identifying the mechanisms. Raymond H Wheeler did not identify mechanisms, he merely said ‘hey, I think I see patterns’. Well, many that tried LSD also saw patterns… Many that look at the vastness of interactions in the universe see patterns. The trick is identifying the mechanisms that drive the changes…
Funny how those that are most confused about climate change often claim the kettle is both black and white simultaneously… such as: it’s cooling and we are heading back to an ice age, or it’s warming, but it’s a natural cycle…
Can you answer this for me: How can it be cooling and warming at the same time?
Ray Tomes: “Only by recognizing all factors can we hope to get near to the right answers.”
NO! NO! NO! NO! NO!
It is by no means necessary to understand everything to make reliable predictions. What is needed is to understand those factors and forcings contributing significantly to the system. That is the biggest problem with the “Fun with Fourier” types. They make no assessment of physical significance or mechanism. Let them find enough cycles and they will fit any curve just by estimating Fourier coefficients for the series.
Imagine that a 29th-degree polynomial (for example) were found to be an excellent fit for a drunkard’s path home from his neighborhood pub. Now, suppose that this curve fit were used to predict the drunkard’s precise path home on subsequent trips. Clearly, it wouldn’t work since the staggering movements of the drunk are random; there is no particular physical reason that he lurches to the right at one spot and to the left at another. However, we CAN predict with a reasonable amount of confidence that our inebriated friend will end up at a particular address — his home. (If he’s sober enough to walk, he’s likely to be sober enough to remember where he lives.)
Similarly, curve fits of fluctuations in past weather patterns are unlikely to have any predictive power. However, based on known physical laws, we can predict with a great deal of confidence that if greenhouse gases continue to increase we will arrive at a specific address — a warmer world.
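The drunkard analogy can be made concrete in a few lines: a full set of Fourier coefficients "fits" any bounded series exactly, yet extrapolating those cycles has no skill on fresh data. This is a sketch with synthetic noise, not any real climate series.

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal(128)         # a trendless random series
train, future = noise[:64], noise[64:]

# A full set of Fourier coefficients reproduces the training window
# exactly -- ANY bounded series can be "fit" this way.
recon = np.fft.ifft(np.fft.fft(train)).real

# Extrapolating those cycles just repeats the window periodically;
# on fresh noise the "forecast" has no skill at all.
forecast = recon                         # periodic extension of the fit
mse_forecast = np.mean((forecast - future) ** 2)
mse_zero = np.mean(future ** 2)
print(np.allclose(recon, train))         # True: perfect in-sample "fit"
print(mse_forecast / mse_zero)           # typically > 1: worse than guessing zero
```

The perfect in-sample fit is guaranteed by construction, which is exactly why a good fit alone proves nothing about real periodicity.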
Comment by Jerry Steffens — 21 Dec 2011 @ 12:38 PM
The Foundation for the Study of Cycles is still chugging along, apparently fueled by stock-market techies. The amazing power of cycles is best appreciated by looking at http://www.cycleslibrary.org and clicking on the index of articles published in the foundation’s quarterly journal. It has everything from sunspots to caterpillars.
The history of cycles in meteorology is largely that of Charles Greeley Abbott, who did pioneer work at Harvard in infrared astronomy and measuring the solar constant. He became enamored of cycles in the 1920s and published a couple of books and dozens of papers on the subject. His prestige drew a lot of effort into cycle study, but by the 1940s nobody had yet come up with a useful cyclic forecast. In the 1950 Compendium of Meteorology, an authoritative mid-century review of the state of the science, the cyclic approach is entirely dismissed for its lack of physical foundation.
Since several of the comments address what I would otherwise have attempted to write (probably less eloquently), I will instead simply attempt to cite a poem [from memory so I might not have it quite right]. This should be recited by every cyclist before deciding to (not) offer up YAP, yet another pattern.
I met a bear upon the stair.
I met a bear who wasn’t there.
He wasn’t there again today. Oh how I wish he’d go away.
[Response: Beautiful. Is that A.A. Milne or…? -eric]
Comment by David B. Benson — 21 Dec 2011 @ 9:04 PM
I find it quite amusing that so many people consider the use of cycles as some sort of hocus pocus. Are you aware that tides are predicted accurately this way? And many other things. Your examples of curve fitting are all silly. It is essential to look at the statistical significance.
Gavin @87 reply: The data available today are much better than in 1950. As are the computers. Why not start in 2000? It was already clear then that the 50-60 year cycle would decline until 2025 and the 208 year cycle would decline until 2100, although the 2300 year cycle will continue upwards. Do some searches for yourself on these cycles and you will find them well established. In the two links I provided for the Be10 and C14 proxies, the phase of the 208 year cycle is very clear and a peak around 2000 AD was not at all hard to predict.
[Response: We’re very well aware of the presence of near-periodic variability in sunspots. And yes, we’re also well aware of the tides. The objection here is to the misuse of these facts to make unsubstantiated claims about the predictability of climate.–eric]
Since you are concerning yourself only with the frequency and not with the physics of how much a particular “cycle” contributes in terms of forcing, would you care to explain how your approach differs from simply approximating any time series with a Fourier series? How is that even interesting?
Are you aware that with any large data set of any type you can apply computer analysis and find patterns within… the larger the better. Take for example the hypothesis of the Bible code.
Without a mechanism all you have is a guess based on the analysis. It has been said that statistics are like bikinis: they show a lot, but often the critical components remain hidden.
At the same time, while cycles do exist in natural variability, that does not diminish the impacts and potentials of human induced forcing on the system. It’s too easy to use ‘cycles’ as a red herring to divert attention from the more real and serious problem. And when it comes to economics, it is unwise to ignore real and serious problems.
Fourier prediction of tides has its own interesting history. For one thing, this approach lends itself to solution by an elegant mechanical combination of gears and pulleys. NOAA has its No. 2 tide machine on exhibit in the Rockville, MD headquarters of the National Ocean Survey. It’s not as complicated as Babbage’s mechanical computer, but still a fine sample of mathematical machinery. It takes me only a few seconds to look at a circuit board, but I spent hours admiring the design of the tide machine when I worked for NOAA. Here’s the place to look: http://co-ops.nos.noaa.gov/images/mach1b.gif
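What makes harmonic tide prediction legitimate, in contrast to blind cycle-fitting, is that the constituent frequencies are fixed in advance by celestial mechanics; only the amplitudes and phases are fitted. A sketch with a synthetic two-constituent "tide" (the M2 and S2 periods are real; the amplitudes, phases and noise level below are invented):

```python
import numpy as np

# The constituent periods (hours) come from celestial mechanics,
# not from the data -- here M2 and S2.
PERIODS = [12.4206012, 12.0]

def design(t):
    """Sin/cos regressors at the fixed astronomical frequencies."""
    cols = []
    for p in PERIODS:
        w = 2 * np.pi / p
        cols += [np.sin(w * t), np.cos(w * t)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
t = np.arange(0.0, 24 * 60, 1.0)             # hourly data, 60 days
tide = (1.2 * np.sin(2 * np.pi * t / PERIODS[0] + 0.3)
        + 0.5 * np.sin(2 * np.pi * t / PERIODS[1] + 1.1)
        + 0.05 * rng.standard_normal(t.size))

# Fit only amplitudes and phases on the first 30 days...
n = t.size // 2
coef, *_ = np.linalg.lstsq(design(t[:n]), tide[:n], rcond=None)
# ...then predict the next 30 days.
pred = design(t[n:]) @ coef
rms_err = np.sqrt(np.mean((pred - tide[n:]) ** 2))
print(rms_err < 0.1)                         # True: skill comes from physics
```

The forecast works because the frequencies are physically determined; fit the frequencies themselves to a short noisy record and the out-of-sample skill evaporates.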
The fact that the original dataset covers the entire Holocene suggests another slightly different test for the technique (which is a frequency-domain description extrapolated into a prediction) apart from the one already usefully applied by Rasmus. The 4 kyr “training set” can be taken back in 1 ka steps to the start of the Holocene. A Fourier analysis of each 4 kyr interval can then be used to produce a local “prediction” for the subsequent 1 ka, based on the putative cycles in the preceding 4 kyr. I’m not too optimistic that the method will pass this test, though! It is obviously unsuited to anticipating the effects of unprecedented forcing components, but it would be nice to know whether it works at all for natural background cycles.
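The rolling out-of-sample test proposed here can be sketched generically. Synthetic red noise stands in for the proxy record (I don't have the actual series), with one value per decade so that a 400-point training window plays the role of the 4 kyr window and a 100-point horizon the 1 ka "prediction":

```python
import numpy as np

def fourier_forecast(train, horizon, k=3):
    """Extrapolate the k largest Fourier components of `train` for
    `horizon` further steps (a deliberately naive cycle forecast)."""
    n = train.size
    spec = np.fft.rfft(train - train.mean())
    spec[np.argsort(np.abs(spec))[:-k]] = 0.0   # keep only k components
    t = n + np.arange(horizon)
    out = np.full(horizon, train.mean())
    for f in np.flatnonzero(spec):
        out += 2.0 * (spec[f] * np.exp(2j * np.pi * f * t / n)).real / n
    return out

rng = np.random.default_rng(2)
# Stand-in for an 11 ka proxy record: red noise, one value per decade.
x = np.zeros(1100)
for i in range(1, x.size):
    x[i] = 0.9 * x[i - 1] + rng.standard_normal()

# Train on 400 points (cf. 4 kyr), predict the next 100 (cf. 1 ka),
# then slide the origin forward and repeat.
skills = []
for start in range(0, 601, 100):
    train, test = x[start:start + 400], x[start + 400:start + 500]
    pred = fourier_forecast(train, 100)
    skills.append(1 - np.mean((pred - test) ** 2) / np.var(test))
print(np.mean(skills))   # little or no skill expected (near or below zero)
```

On persistence-only noise like this, the mean skill hovers around or below zero, which is the benchmark any real cycle-based forecast would have to beat.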
Just to add to Eric’s inline response @79, one should also be aware of the 100,000 year problem.
The obliquity correlates to temperature very well between 0.8 million and 2.7 million years ago (the 41 kyr obliquity world).
However, since 800,000 years ago, the earth started to skip the 2nd and 3rd cycles. Scientists cannot explain why this happens. There are plenty of theories, but very little confidence (hence the name: 100k problem).
Why wasn’t it warm like today 40,000 years ago, instead of being cool?
Thus, there is no clear physical mechanism to explain our current interglacial. The clear explanation became untenable 800,000 years ago.
[Response: The 100kyr ‘problem’ is greatly overstated. Check back here in a day or two and I’ll have a slew of references for you on this point.-eric]
Eric reply to @98: Why do you say that my suggestions are unsubstantiated? The cycles that I have mentioned are found in a variety of proxies. They have been given names and appear frequently in the literature. (more below)
Ray Ladbury @100: A Fourier series does not always make good predictions because it does not determine the accurate period. A subtle point perhaps. Determining significant cycle frequencies is interesting because it does allow prediction. Of course for a series like sunspots, predictions are not awfully good for the 11 year cycle because its period is not very stable. But it will tell you about coming turning points. Likewise, the 50 – 60 year cycle in climate is not of stable period. But we can see that it had peaks around 1941 and 1998. So the next trough is expected about 2025 and the following peak around 2055.
Isotopius @105: The 100,000 year problem (to explain) is that the 41,000 year component of Milankovitch is expected to dominate over the 100,000 year component. In some past periods (millions of years ago) it did. But now it is the other way around. The fact is that there are some problems with Milankovitch theory.
My suggestion (and I have seen others suggest this too) is that insolation is not the only factor. Another factor could be the variations of the earth’s orbital shape cause it to vacuum clean a different part of the near earth space, picking up more dust and meteorites as the orbit moves into new regions. This would explain why the 100,000 year period has more effect than expected.
John @101: Bible codes are quite a different thing. Are you suggesting that meaningful codes are found more often than would be suggested by chance? I think not. I am suggesting that cycle periods are found more often than would be expected by chance, and that the ones that I have referred to are accepted as real for that reason. I recommend again, look at http://www.cyclesresearchinstitute.org/dewey/case_for_cycles.pdf which is a well checked and tested result which lists hundreds of time series in many disciplines where significant cycles have been found. This does tell us things about the Universe that are not in the common awareness of scientists looking at only one discipline.
Isotopius @105 I know Eric is going to address this, but Isotopius might like to consider Peter Huybers’s latest paper, in the current edition of Nature, which effectively concedes that precession, tilt and eccentricity all play a role in the climate forcing of the last 1 Ma. He doesn’t identify eccentricity as such in the title of the paper, but it is there in his orbital forcing model, as the amplitude modulation of precession, and needed for his model to work. Significantly, this paper is from one of the initiators of a small “we don’t trust Milankovitch tuning” splinter-group within palaeoceanography.
The editorial describes the paper as the first firm quantitative evidence in favour of the Milankovitch theory – in my opinion this is absurd, because the Milankovitch hypothesis was validated by Hays, Imbrie and Shackleton in 1976 and by many papers afterwards – I expect Eric will document them. Huybers’s paper would be better described as the end (for the time being at least) of yet another attempt to challenge the already well established, and justifiably dominant, Milankovitch hypothesis. This is the current model of the governing principle of climate behaviour on 5 kyr – 1 Ma timescales over the Cenozoic at least.
The “100,000 year problem” has many possible solutions: the difficulty is selecting which one is the best, i.e. closest to reality. Muller and MacDonald’s “inclination” model, by the way, isn’t one of the currently viable ones. Strangely, Huybers’s paper characterises the Hays, Imbrie and Shackleton 1976 solution as a “precession only” solution, whereas it manifestly and clearly identified all three Milankovitch components – eccentricity, tilt and precession – as involved. Huybers has in effect admitted it was right all along.
I am a little confused by your comments in this thread. Possibly others are also.
You appear to be discussing cycles in the sun’s activity. But you also stray beyond this @87 where you talk of the question of identifying the different drivers of climate change – human activities & natural cycles. Then without pause for an answer to this question you continue – “The levelling off in temperatures since about 1998 was entirely expected on the basis of the dominant cycles.” This is strong stuff, Ray Tomes.
So can we please go back to your question about how (I assume you are arguing) the sun is affecting the climate in some major or minor way? Otherwise your comments will remain as intractable as them there biblical codes.
[Response: Sure. But the magnitude is important — which is my repeated point here. The solar variations are more than an order of magnitude smaller than the greenhouse forcing.–eric]
Solar and temperature oscillations are not always synchronous, and often are in anti-phase, but that doesn’t exclude the need for further research. http://www.vukcevic.talktalk.net/CET-GMF.htm
One could claim the above is just a coincidence, but considering that each set of data is the product of averaging daily records, the statistical probability of it being a coincidence is likely to be negligible. It is not claimed that the link is direct, indirect, or even that these are two parallel processes with a common cause.
Since the above is not in line with the current understanding, it is worth attention even if the moderator does not approve of the post.
The paper you mention speaks of the influence from a man from Mars. It discusses many different concepts and parameters regarding cycles. But what I am saying is that you can take specific time periods of climate data, or all of the climate data, and identify all sorts of cycles in it.
But that does not mean you understand why the cycles happen. So some may see cycles in the data and then try to claim why that cycle happens without actually understanding the mechanism. In other words, identifying cycles in data without mechanisms is basically like identifying patterns in the Bible through various forms of analysis.
So the man from Mars might see climate patterns on Earth and deduce that something is causing natural fluctuations, but even deductive reasoning may not be able to show the really interesting parts hidden under the bikini, and therefore leave any such claims about mechanism wanting.
But, pertaining to whatever your argument may be, what do natural cycles have to do with anthropogenic forcing and increased radiative forcing or human induced albedo changes or the like?
Let me help you out on this one. Nothing, other than their usefulness in determining the difference between natural variation cycles and human induced forcing and change.
So what I can determine by logical reasoning, using the same logic that is illustrated in the paper you linked to, is that your focus is to hold up natural cycles as if it means something, or possibly means human induced forcing is not occurring and that natural cycles dominate in current climate change…
My friend was just contemplating your postulation. She said of course there are natural cycles in nature. There is winter and spring, summer and fall. There is night and day. Then she said what does that have to do with anthropogenic global warming?
And Ray, the whole point of the article about curve fitting and natural cycles is that it is inappropriate to make strong claims about random fits without mechanism, attribution and supporting physics and observations, unless you are willing to accept that the confidence in any assumptions indicated by such ‘curve fitting’ is likely lower than that from more relevant methods.
Eric’s reply @106 & MARogers @110: Does anyone doubt that solar fluctuations must translate into climate fluctuations? I presume the question is also related to the question of amplitude. This assumes that we understand all of the mechanisms, which it is generally being stated we do not (e.g. even the Milankovitch cycles). I suggest that if solar proxies and temperature proxies both show the same length cycle with the same phase over a long period of time, then, even if your calculations do not support the amplitude of the temperature variations, there is something going on.
I will give just one possible explanation which probably no-one has seriously considered. Suppose that there are cyclical fluxes of energy throughout the Universe which affect both the activity and temperature of the Sun and Earth. In that case a 207 year cycle might appear with the same phase in both. If you then calculate from the Solar “cause” to the Earth “effect” you will get too small an answer, as you do.
Now, this idea is no doubt way out of left field to all climate researchers. But to cycles researchers that is not the case. As Dewey showed in his “The Case for Cycles” paper, and as many others have shown before, contemporaneously and since, there are often found cycles in seemingly unrelated things which have common period and phase – Dewey calls this cycle synchrony. I urge you to look at the evidence for such things before coming to the conclusion that you really understand how the entire universe works. And I refer you back to Dewey’s quote about medicine before the discovery of germs.
I guess that I am not all that impressed when people rediscover–for the nth time–the Fourier series. Cycles and quasi-cycles are visible everywhere we look. Select a period, and probably somewhere you will find something that “sort of” oscillates with that period. If you ignore the physics, you’ll be able to approximate any time series with a few terms. What is missing is the mechanism by which the cycle becomes a forcing. Hell, they don’t even usually bother looking for the mechanism behind the oscillation.
Contrast this approach with that of Foster and Rahmstorf 2011, who consider just the 3 most important “noise” terms and show that they explain the vast majority of the noise about the linear trend in global temperatures. F&R 2011 is scientific modeling; the “fun-with-Fourier” types are simply mathturbating.
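The contrast with Foster & Rahmstorf's approach can be sketched in a few lines. This is a toy with synthetic stand-ins for the known factors (the index shapes, coefficients and the 0.017 °C/yr trend below are all invented for illustration, not the real MEI, AOD or TSI data): regress temperature on a trend plus the physically identified factors, and the trend estimate comes out cleanly.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 384                                   # 32 years of monthly data
t = np.arange(n) / 12.0                   # time in years

# Synthetic stand-ins for the three known "noise" factors
# (ENSO, volcanic aerosols, solar) -- not the real indices.
enso = np.sin(2 * np.pi * t / 3.7) + 0.3 * rng.standard_normal(n)
volc = np.zeros(n)
volc[100:130] = -1.0                      # one eruption-like dip
solar = np.sin(2 * np.pi * t / 11.0)

true_trend = 0.017                        # degC per year, assumed
temp = (true_trend * t + 0.1 * enso + 0.25 * volc + 0.05 * solar
        + 0.1 * rng.standard_normal(n))

# Regress temperature on trend + physically identified factors.
X = np.column_stack([np.ones(n), t, enso, volc, solar])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(round(coef[1], 3))                  # recovers roughly the assumed 0.017
```

The crucial difference from "fun with Fourier" is that each regressor here corresponds to a named physical process, so the residual trend has a physical interpretation rather than being one more fitted wiggle.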
Ray Tomes @115
You say it is not the sun that is driving these oscillations in the earth’s climate, the oscillations that you said ‘levelled off temperatures since about 1998′. You are proposing that the driver is possibly “cyclical fluxes of energy throughout the Universe” and that is why the effect is greater than the changes in the sun would cause alone.
You urge us “to look at the evidence for such things” which perhaps I would if you point me in the right direction. But don’t rush with those directions as I (and my colleague Higgs Boson), to be able to look, will first have to stop laughing.
“Suppose that there are cyclical fluxes of energy throughout the Universe which affect both the activity and temperature of the Sun and Earth… Dewey…synchrony”
Now seriously, you may be able to help yourself out if you go back and re-read what people have been telling you about curve fitting. Only this time, play devil’s advocate against yourself and try to understand the topic from their point of view assuming that they may actually be on to something and are not somehow simply closed minded.
Speculation is a fun way to get the juices going, but too much impatient fantasizing and you start to part company with reality– slow to discern and often unfriendly though it may be.
Several of my replies have not appeared. Please at least tell people that you are censoring me if you are, so that they do not think that I am not answering.
[Response: Repetitive and uninformative comments go to the Bore Hole in order to maintain signal to noise. – gavin]
Repeatedly people say “curve fitting” and such. Yes, all science is in fact curve fitting if you think about it. The question is, is it statistically significant or not? Those that say I did not understand the accusations have ignored this point. If a cycle has a p<0.0001 chance of happening by chance then it makes no sense to refer to it as curve fitting. It is very likely real. Likewise if a cycle shows up in a variety of different time series with the same period, each at p<0.01, say. I trained in statistics and can assure you that Dewey’s findings stand up to very close scrutiny.
A question for you: Can you explain why about a dozen different species around the world show a population cycle of 9.6 years, all with the same phase? In the case of Canadian Lynx, the population grows to about 10x what it was and collapses again each cycle. Please do not tell me it is a predator prey cycle with snowshoe rabbits, because you will not be able to explain all the other species with that sort of thinking. Coincidentally, there is evidence for a cycle of 9.6 years in ozone with same phase. Why? There are things going on that do not vaguely fit any accepted human "science" models.
[Response: I’m a little confused here – calculating significance with respect to a reasonable null hypothesis is very much a human ‘science’ model, and yet you appear to be claiming that your results lie outside of that. That seems contradictory. But regardless, all science is not ‘curve fitting’ – it is based on finding physical explanations for phenomena, not randomly correlating things to numerological fantasies. – gavin]
Ray Tomes, to contend that all science is curve fitting betrays a deep misunderstanding of what science is and how it is done. A p<0.0001 means nothing if you are using the wrong probability model. What you are doing is just a poor approximation of the first stage of scientific inquiry–exploratory data analysis. Moreover, you are applying only a tiny fraction of the tools available for EDA, so you are doing a poor job of that.
As to your contention about 9.6 years, google the law of large numbers. Or better yet, actually do some work and look into the population biology of the Canadian Lynx and try to understand the mechanism. Then you would be a lot closer to doing science. Right now, you're just a friggin' Pythagorean.
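Ray Ladbury's point about the wrong probability model can be demonstrated directly: red noise (pure persistence, no cycles by construction) routinely produces periodogram peaks that look wildly "significant" when tested against a white-noise null. A sketch with synthetic data; the AR coefficient and series length are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def peak_fraction(v):
    """Tallest periodogram peak as a fraction of total power."""
    p = np.abs(np.fft.rfft(v - v.mean())[1:]) ** 2
    return p.max() / p.sum()

# Red noise: strong persistence but, by construction, no cycles.
x = np.zeros(500)
for i in range(1, 500):
    x[i] = 0.95 * x[i - 1] + rng.standard_normal()

# Test its tallest peak against the WRONG null: white noise.
obs = peak_fraction(x)
null = [peak_fraction(rng.standard_normal(500)) for _ in range(2000)]
p_value = np.mean([s >= obs for s in null])
print(p_value)   # tiny: a "highly significant" cycle that does not exist
```

Against an appropriate red-noise (AR(1)) null the same peak would be unremarkable, which is why a small p-value under the wrong null proves nothing.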
A cautionary blog post — remember this when you hear “hockey stick” — it’s a visual illusion, one of the common ways we fool ourselves about rate of change. There’s no point out in the future to worry about. It’s here, now.
… there seems to be a fundamental misunderstanding about the properties of an exponential function. And the lack of a fundamental understanding of the exponential have grave consequences — we may draw incorrect conclusions that have huge errors because of it. Albert A. Bartlett perhaps said it best,”The greatest shortcoming of the human race is our inability to understand the exponential function.”
There is no such thing as a knee to an exponential function.
It’s not completely clear to me why we are fooled. But basically, I think it has something to do with the idea that regardless of where your observation point is, looking “left” always looks flat and small, while looking “right” always looks steep.
Not understanding the power of the exponential will result in us making bad decisions. …the “knee of the curve” (or where the curve “hockey sticks”). “Where is it?” you are asked. You should think carefully about how you are going to answer this question that doesn’t quite make sense. It’s quite the “gotcha.”
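The "no knee" property follows from the exponential's self-similarity: shifting along the time axis only rescales the curve, so the local view is the same everywhere. A tiny sketch (the doubling time of 10 steps is arbitrary):

```python
# A curve doubling every 10 steps: y(t) = 2**(t/10).
def y(t):
    return 2.0 ** (t / 10.0)

# The "view" from any point -- growth over the next span relative to
# growth over the previous span -- is identical everywhere, so no
# point is a special "knee"; every point merely looks like one.
def view(t, span=10):
    return (y(t + span) - y(t)) / (y(t) - y(t - span))

print(view(20), view(120))   # both exactly 2.0
```

Wherever you stand, looking back always looks flat and looking forward always looks steep, by the same factor; the "knee" is an artifact of the axis scaling you happen to choose.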
Thank you for highlighting Huybers’s latest paper. I read it all, and it was interesting. It takes a while to get your head around, but I think this analogy helps:
‘Nearly all anomalously cool summer days are associated with significant cloud cover, however, well over two-thirds of days with significant cloud cover are not associated with anomalously cool summer days.’
….So major deglaciation nearly always coincides with insolation maxima, however, over two-thirds of the time insolation maxima are not associated with major deglaciation events.
91, Ray Ladbury: It is by no means necessary to understand everything to make reliable predictions.
I agree wholeheartedly!
What is necessary is to accumulate a bunch of independent tests to confirm that the predictions are reliable enough for the intended purposes. How large a “bunch” is necessary, and how reliable is “reliable enough for the intended purposes” are next to impossible to specify a priori. To date, climate predictions a few years in advance have not been reliable enough for planning purposes anywhere on earth (witness the Queensland Australia mistaken decision not to enlarge their dam and reservoir system, for one example.)
Ray Tomes’s contention, to which you objected, could be toned down a little to: unknown parts of the total mechanism might be influential enough that our ignorance of them makes the predictions of our models too inaccurate for practical purposes. Toned down that way, I doubt that you could make a case that the point is demonstrably false.
90, John P. Reisman: Funny how those that are most confused about climate change often claim the kettle is black while simultaneously… such as it's cooling and we are heading back to an ice age, or it's warming, but it's a natural cycle…
Can you answer this for me: How can it be cooling and warming at the same time?
Do you have examples of the first claim?
It can be warming in some places and cooling in other places. It can be alternately warming and cooling.
We can have some evidence of cooling, and have some evidence of warming, and not know whether there is any net warming or cooling for a span of time.
63, mike: Surely you do not want us to think of you as a turd? Or is this an attempt at irony? If it is, I have to tell you that the precise meaning of your irony-laden name evades me.
A “septic skeptic” is a skeptic who repeatedly doesn’t accept the truth of assertions (refutations, explanations, etc) of AGW promoters. That fairly characterizes me. I chose “Septic Matthew” to distinguish myself from another “Matthew”; I took the insult into my moniker the same way others adopted the insults “Yankee”, “Knickerbocker”, “Hoosier”, “Dodger” and others.
Comment by Septic Matthew — 29 Dec 2011 @ 10:03 PM
#129–In my experience, John is correct; many garden-variety faux skeptics do indeed maintain rhetorical positions that are internally inconsistent. That’s what convinced me, more than anything else, that the mainstream has it more or less right.
The clearest public example of ‘warming and cooling’ (IMO) was Monckton in 2006 with the infamous ‘no SUVs in space’ gotcha. He was arguing, in essence, that Earth wasn’t warming, but Mars was warming at the same rate that Earth was.
Other inconsistencies I’ve noted over the years include selective approval of the use of modeling techniques (‘climate models bad, DMI high Arctic temps [based upon reanalysis data] good‘), selective views of climate sensitivity, redefining IR absorption and re-emission to be equivalent to IR reflection (I think that was the late John Daly), and of course the ever-popular temporal inconsistency of the moving goal post.
The irony of the current discussion is that you can claim that not only can it be ‘warming and cooling’ at the same time, it almost always is–that is, on different time-scales.
For example, where I am, it’s clearly cooling right now as a result of normal seasonal changes, but it is just as clearly warming due to cyclic diurnal variability. (And, based upon the HadCRU monthly means I glanced at yesterday, it may also be cooling on a yearly scale while warming on a monthly scale, due to ENSO-related effects and other sorts of ‘natural variability.’) At the same time, I have every reason to think that the GHG-driven warming trend–robust over lo, these thirty-plus years–continues unabated. Yet has the Holocene cooling trend really ended?
And so on. This may seem a pointless bit of Alice-in-wonderland fantasizing, but perhaps it’s worth reminding some of us what a world of epistemological hurt can be carried in that little word “is.” It’s very apt to make unwitting Platonists of us all–that is, to make us think that there is a simple, easily-stated core reality which trumps all else. What, we ask plaintively, is the temperature really doing? (And let’s spare a thought for the undefined “it” as well, since that’s what let me sneakily conflate global and local temps in the preceding paragraph.)
What we really want to know, of course, is not what it ‘is doing,’ but what is going to happen next. And that requires us to specify–in this particular Atlanta suburb, it’s going to get warmer for the next several hours, get cooler for the next several weeks (though with lots of ‘noise,’ and correspondingly low confidence.) Globally, it will probably warm over the next several months as La Nina fades, and will continue to warm over multi-decadal timescales due to greenhouse forcing.
The trouble is, now we’re starting to talk a bit like scientists ourselves, which means we’re starting to lose folks who just want to know what the hell it’s really doing.
Has the “Holocene cooling” been widely accepted as real? I thought that it was based mostly on a select few data series of disputed applicability to global average trends. Has the “Holocene cooling” ended?
climate predictions a few years in advance have not been reliable enough
How about Hansen’s 1988 forecast, which has temps trending below Scenario C (no growth in emissions from 2000 forward)?
[Response: So… Let’s say that yesterday I predicted that *if* it rained today, most people would have carried red umbrellas. But it didn’t rain – that implies that my prediction about the colour of people’s umbrellas is moot, since it was contingent on it raining (which it didn’t). Scenario C is similar – you cannot validate a prediction for which the contingent factors did not occur. Thus Scenario B is the closest one can come to a prediction that is testable, and we have examined that on numerous occasions. – gavin]
I don’t know if the last went through, so I’ll be brief.
Can you direct me to a quantitative analysis of the forecast? I understand that Scenario B is closest to what occurred; the link you gave me just states the obvious, that temps are running low. Well, they are running below C as well (just as an easy benchmark for eyeball analysis).
You could say we got the temps expected in C but without the economic pain that C would have brought. A win/win.
So it’s not about C vs B; it’s about how good was B. C happens to be an easy comparison. My prior (based on the provided data) is that the forecast performed poorly, since temps are below what was predicted for a scenario with fewer forcings.
But analysis is needed; what sort of diagnostics have been performed on the 1988 Hansen forecast? And I don’t mean the calibrations that the link describes, since that just accepts the forecast and adjusts it ex post.
[Response: See Hargreaves (2010) – the prediction had skill (i.e. it was more informative than any reasonable ‘naive’ forecast that anyone was proposing in 1988). Additionally, the forecast would have been better if the model had had a sensitivity closer to 3ºC – the value independently determined to be the most likely. – gavin]
#133–I don’t know how bombproof “Holocene cooling” is these days, though I don’t have the sense that it’s terribly controversial in general.
Just to expand for those who may know even less about it than I do, the general idea is that the Holocene era–the current one, pending official certification of the “Anthropocene”–reached its peak temperature around eight thousand years ago, and global climate has gradually been cooling since. (An inconvenient perspective for those who try to claim that “Of course we’re warming, we’re recovering from an Ice Age!”)
Robert Rodale prepared the figure below on the question back in 2004; I’m not sure how different the picture would look today. Perhaps someone will chime in with the latest research news on the topic.
In the first section, it was argued that it is impossible to assess the skill (in the conventional sense) of current climate forecasts. Analysis of the Hansen forecast of 1988 does, however, give reasons to be hopeful that predictions from current climate models are skillful.
You stated that there is some ‘skill'; Dr Hargreaves said there’s hope that there might be.
Is there an actual quantitative analysis of the Hansen forecast?
[Response: Please don’t play games. It reduces any possibility that you will get a substantive response to almost exactly zero. (For the hard of reading: Hargreaves is referring to the impossibility of directly assessing the skill of a ~20 year forecast prior to the 20 years actually happening.) – gavin]
I’ll note another prior: being better than a naive forecast does not mean a model is good enough to make policy decisions on. Plus, the model needs to be better than the forecast made in 1988.
As gavin said above: “the forecast would have been better if the model had a sensitivity closer to 3ºC – the independently determined to be the most likely value.” (I believe the sensitivity in the late 80s model Hansen was using was closer to 4C).
Implicit in his comment is that 20-odd years later, models like NASA GISS Model E *do* exhibit climate sensitivity near 3C (just slightly less in the case of Model E, IIRC), and also fit observations quite well in a lot of other ways.
This “about 3C” sensitivity is constrained by a bunch of data and has been independently derived by a bunch of different groups employing a bunch of different methods including a bunch that don’t use GCMs at all.
So, many paths lead to the conclusion that modern models are getting sensitivity about right, while Hansen’s was a smidge high. For work done over two decades ago (today’s smart phones have computational power equivalent to a pretty sizable research computer from the 1980s) it’s held up remarkably well.
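A rough back-of-the-envelope version of the "smidge high" point. The 4.2 °C figure below is my assumption for the 1988 model's sensitivity (the comment above says "closer to 4C"), the trend value is hypothetical, and the linear rescaling is only a first-order sketch, since transient warming doesn't scale exactly with equilibrium sensitivity:

```python
# First-order sketch: if predicted warming scales roughly with
# equilibrium climate sensitivity, a forecast made with a too-high
# sensitivity can be crudely rescaled to the modern best estimate.
hansen_1988_sensitivity = 4.2  # deg C per CO2 doubling (assumed value)
modern_best_estimate = 3.0     # deg C per doubling

predicted_trend = 0.30  # deg C/decade, hypothetical Scenario-B-like trend
rescaled_trend = predicted_trend * (modern_best_estimate / hansen_1988_sensitivity)

print(round(rescaled_trend, 3))  # ~0.214 deg C/decade, i.e. ~30% lower
```

Under these assumed numbers, a sensitivity overshoot of roughly 4.2/3.0 translates into a forecast trend about 30% too steep, which is the right order of magnitude for "ran a bit warm but held up remarkably well."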
Suppose you have two identical pendulums. One of them is connected to a mechanism which regulates the mechanical energy delivered to the pendulum depending on the pendulum’s position – an escapement mechanism in a grandfather clock. The other pendulum has random bits of energy delivered to it by a paddle poking into a randomly turbulent airflow. If you record the displacement of the pendulums versus time over a few cycles, and perform FFTs on the data sets, both will show a strong peak at the pendulum frequency. However, the pendulum driven by the airflow is not an oscillator (assuming the swing of the pendulum doesn’t change the coupling of the paddle to the airflow); it’s a noise source, with the high and low frequency components filtered out by the pendulum.
Now let’s replace the pendulum coupled to the turbulent airflow with a balanced stick – one that has no natural frequency, and no elastic restoring force like gravity provides. Couple the paddle to the stick with a leaky piston, so that low frequency perturbations are suppressed. Add a mechanism that turns the paddle for preferential coupling of turbulence that returns the stick to center, proportional to the stick’s displacement from center. The limit of the force that can be generated by the paddle, combined with the mass it’s driving, will filter out high frequency motions. Perform an FFT on the stick’s motion, and you’ll see a “peak” that looks the same as if there were some periodic mechanism involved, when there is only noise and dissipative filtering.
Because of the filtering, you can’t even tell if the turbulent noise is red, pink, or white. The climate also has very regular periodic drivers (diurnal, seasonal, Milankovitch cycles) that couple into noisy processes (Hadley & Ferrel circulation, AMOC, NAO, ENSO). The irregular landforms and distribution of surface features, and the chaotic nature of fluid (atmospheric and oceanic) flow, create noisy drivers that couple into filtering mechanisms that result in “periodic” phenomena, like ocean waves, and probably ENSO. The warmest part of a day is usually in the afternoon, and the warmest days occur in the summer, but the temperature trajectories aren’t neat constant-amplitude, constant-phase sine waves. It may be impossible to tell from mathematical manipulations of limited data whether there is a periodic driver and a noisy filtering process, or vice versa. I think that most climate cyclists don’t appreciate the difference.
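A minimal numerical sketch of the pendulum-driven-by-turbulence point: feed white noise through a lightly damped second-order resonator, here a simple AR(2) recursion, and the power spectrum shows a sharp peak at the resonant frequency even though the input contains no periodicity at all. (All parameter values are arbitrary choices for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(2) recursion x[n] = a1*x[n-1] + a2*x[n-2] + noise acts like a
# damped resonator: with a1 = 2*r*cos(w0), a2 = -r**2 its spectrum
# peaks near angular frequency w0, though the driving is pure noise.
n = 1 << 16
w0 = 2 * np.pi * 0.1  # resonant frequency: 0.1 cycles per sample
r = 0.99              # pole radius near 1 -> sharp, lightly damped peak
a1, a2 = 2 * r * np.cos(w0), -r**2

x = np.zeros(n)
noise = rng.standard_normal(n)
for i in range(2, n):
    x[i] = a1 * x[i - 1] + a2 * x[i - 2] + noise[i]

# Locate the peak of the power spectrum (ignore the DC bin).
power = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(n)
peak = freqs[1:][np.argmax(power[1:])]
print(peak)  # close to 0.1, though nothing periodic drives the system
```

Looking at the spectrum alone, this peak is indistinguishable from one produced by a genuinely periodic driver, which is exactly the ambiguity described above.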
There is also the issue of relaxation oscillators. They are characterized by a rate-limited source (of energy or mass), a storage mechanism, and a triggerable mechanism or switch which quickly empties the storage – common examples are tipping buckets, neon lamp oscillators, and glacial cycles. One unifying feature is asymmetry between the rates of charge and discharge. Another is the ease with which their behavior can be made chaotic. Start at the Wikipedia entries on the Van der Pol oscillator, and on chaos. Fig 10-8 here looks a lot like glacial cycles.
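A hedged, minimal sketch of the tipping-bucket idea: slow, rate-limited charging with an instantaneous dump at a threshold produces the characteristic asymmetric sawtooth, and its period follows directly from the fill rate. (The rate and threshold values below are arbitrary.)

```python
# Tipping-bucket relaxation oscillator: fill at a constant rate,
# dump instantly when a threshold is reached. Charging is slow and
# discharge is (here) instantaneous -- the charge/discharge asymmetry
# that distinguishes relaxation oscillators from sinusoidal ones.
fill_rate = 0.25   # units of storage per time step (arbitrary)
threshold = 10.0   # storage level that trips the bucket (arbitrary)

level = 0.0
tips = []          # time steps at which the bucket tips
for t in range(200):
    level += fill_rate
    if level >= threshold:
        tips.append(t)
        level = 0.0  # instantaneous discharge

periods = [b - a for a, b in zip(tips, tips[1:])]
print(periods[0])  # 40 steps: threshold / fill_rate
```

With a noisy fill rate or a state-dependent threshold, the tip times wander and the behavior can become chaotic, which is part of why glacial-cycle records look quasi-periodic rather than clock-like.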