Update: Tamino has a post up detailing exactly the problems with the AMO paper that struck me when I read it. They don’t seem to have dealt with the influence of their removal of low frequencies on the high-frequency ‘redness’ (and hence correlation) of the data. If so, then they have not actually demonstrated the main claim of the paper.–eric
Agreed about Muller, but I think the subheading of the WSJ article is significant. It gives various political types a way to back off stark denial without losing face. This could be a benefit to planet earth.
Comment by Pete Dunkelberg — 24 Oct 2011 @ 7:52 PM
Looking at the AMO paper, the section on spectral analysis caught my eye. They report a strong peak in both the AMO and the PDO data at around 9.1 years and a weaker peak at about twice the period of the strong peak, both of which may be the result of the lunar precession cycle or the 18.6 year precession of the nodes of the moon.
When I read the media PR on this, it looked like BEST claimed better statistical methods, leading to lower estimates of the uncertainty in the temperature change, even at the decadal level. Is there any basis to this, or is it just another meta-analysis?
In any case, it is amusing that he used (I think) Koch money in the study. No doubt that funding source believed they would get a revolutionary result. I shudder to think what sort of hate mail Muller is getting because of his perceived betrayal.
Muller finishes his WSJ editorial by claiming that Berkeley Earth does not determine whether global warming is man-made. After successfully reproducing, yet again, what scientists had already found, you would imagine it would be high time to at least state that the established science is right until (very, very probably not) being proven wrong (by him?). But no, the arrogance of the physicist is too strong in him, which really annoys me. Especially since he will now be dragged through the media to no end…
I always thought that science was built on consensus.
So BEST confirms that the HADCRUT3 and UAH data are in agreement. Great.
I personally did not expect any earth-shattering news.
Let’s celebrate at least an agreement between the opposing foes.
[Response: There is absolutely nothing wrong with BEST having done the work they did. It's fine -- it's great even -- if that's what it took for Muller and his colleagues to convince themselves of the facts. On the other hand, science won't progress if it takes this long for most scientists to come to grips with reality. Scientifically speaking, Muller should have done all this work first, rather than making public pronouncements first. Of course, there are political reasons why one might shoot first and ask questions later -- and perhaps history will judge Muller's political decisions well; I don't know. Let's just not confuse the two things (science with politics that is).--eric]
I applaud the Berkeley dataset, the new data analytic approach, and the potential of those for filling in some voids and delivering better error bars on local, regional, and global rates of change. These are valuable contributions, and open some doors for interesting new science (although on first skim I thought the AMO paper was a good example of this, Tamino’s take-down of it has changed my mind).
Of course I welcome open data, open code, and open preprints. Yay for open.
The headline GMST result is uninteresting in every way apart from the political.
[Response: Open is great, and no one is against it. But the point I keep repeating is that what BEST has demonstrated is that whatever was 'closed' was unimportant, except politically. That's not to say calls for openness are wrong -- but before complaining that someone is 'hiding' data for some nefarious purpose, it is a very good idea to establish first whether it is likely that it matters in the first place. This is the simple homework that Muller and Co did not do.--eric]
Muller and Rohde will present their results on November 1 at the Third Santa Fe Conference on Global and Regional Climate Change. The conference is sponsored by Los Alamos National Laboratory. The program committee chair is Petr Chylek of LANL, who thinks the CRUhack emails revealed an intent to manipulate the temperature record. Also on the conference program are R. Lindzen, D. Easterbrook, C. Monckton, F. Singer, J. Curry, and other well-known deniers (er, “skeptics”). How they receive the BEST results may be newsworthy, even if the results themselves are not.
One thing that seems potentially new and interesting from their results (and that I haven’t seen many comments on) is the fact that their global record goes back about 50 more years than CRU and 80 more years than GISTEMP by starting with the year 1800. I’m curious as to how well that particular result will hold up to peer review, since it apparently involves some creative statistics.
Whatever that “something else” is supposed to be completely eludes us, given that these groups all along have been publishing results in the peer-reviewed literature using methods that proved easy to reproduce using easily available data (and in the GISTEMP case, complete code).
Folks with some programming skills don’t need to take realclimate’s word for this. The global temperature results really are easier to confirm than most people realize. I was able to replicate the NASA/GISS land-temperature index surprisingly closely with a pretty crude implementation of the standard temperature anomaly gridding/averaging procedure. Coded it up, ran the GHCN raw data through it, and got results much closer to the officially published NASA/GISS results than I expected (especially given a couple of shortcuts taken out of sheer laziness on my part).
The basic procedure is quite straightforward; it could be broken down into a series of homework assignments for a first-year C++/Java/whatever programming class. Additional exercises such as comparing the results for rural vs. urban stations, confirming that Watts’ “dropped stations” accusation is baseless, etc. are surprisingly easy to code up and run. There’s a wealth of free development/data-analysis software out there that makes all this much easier than it was back during the bad old days of Fortran-4 and punch-cards.
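As a concrete (and heavily simplified) sketch of that first homework assignment, here is a toy version of the gridding/averaging step in Python. The function and variable names (`grid_average`, `cell_deg`) are invented for illustration, and it assumes station anomalies have already been parsed into (lat, lon, year, anomaly) tuples; the real procedure's baselining, quality control, and GHCN file parsing are omitted:

```python
import math
from collections import defaultdict

def grid_average(records, cell_deg=5.0):
    """Toy global land-temperature index: average station anomalies
    within lat/lon grid cells, then combine cell means with cos(lat)
    area weights. `records` is an iterable of (lat, lon, year, anomaly)."""
    # cell -> year -> list of station anomalies in that cell
    cells = defaultdict(lambda: defaultdict(list))
    for lat, lon, year, anom in records:
        key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        cells[key][year].append(anom)

    # year -> [weighted sum of cell means, total weight]
    totals = defaultdict(lambda: [0.0, 0.0])
    for (ilat, _), years in cells.items():
        # weight each cell by the cosine of its central latitude
        w = math.cos(math.radians((ilat + 0.5) * cell_deg))
        for year, anoms in years.items():
            totals[year][0] += w * (sum(anoms) / len(anoms))
            totals[year][1] += w
    return {yr: s / wt for yr, (s, wt) in sorted(totals.items())}

# Two stations in one cell and one in another: stations are averaged
# within their cell first, so a dense cluster does not dominate.
recs = [(10.0, 10.0, 2000, 0.4), (11.0, 11.0, 2000, 0.6),
        (60.0, 10.0, 2000, 1.0)]
print(grid_average(recs))
```

Averaging within cells before combining, with area weights, is essentially why the published indices are so easy to reproduce: the method is simple and the raw data are public.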
Anyway, I’ve stashed a plot of my results (plotted against the official NASA/GISS results) in my gmail inbox so that I can whip it out on my smartphone on short notice should the need arise (i.e. whenever relatives/co-workers/whoever decide to start trash-talking climate scientists in my presence).
There’s nothing even remotely interesting here from a scientific perspective; what it *is* useful for is countering claims that climate-scientists have “hidden” their data/code so that “outsiders” can’t scrutinize their work.
Technically, the Berkeley research could not establish the cause of global warming, since the research is just statistics, with no physics. But is it not odd, then, for Muller to have thought that the AMO might be the cause of global warming, or some of it? The AMO is an oscillation (or fluctuation, since it is not that regular) but not an energy source. A fluctuation in the location of slightly warmer surface water could hardly cause the global increase in ocean heat content. ENSO changes the overall surface temperature, but the change is up and down, up and down. If you want a temperature trend rather than an oscillation, it is handy to have an energy source.
Comment by Pete Dunkelberg — 24 Oct 2011 @ 10:10 PM
caerbannog, RC, so let’s do it! in a nice interpreted language of course.
Comment by Pete Dunkelberg — 24 Oct 2011 @ 10:59 PM
Anybody expecting earthshaking news from Berkeley, now that the Berkeley Earth Surface Temperature group being led by Richard Muller has released its results, had to be content with a barely perceptible quiver.
Oh, I don’t know, the 4-ish quake centered NE of Berkeley felt earthshaking where I sat (in SFO south of market, near the ballpark). Felt like a truck ran into our office building, actually.
Yet I instantaneously recognized it as being shallow, weak, and nearby – hmmm, not a bad description of the BEST “advancement” of climate science.
[Response: lol. This reminds me of when I was a kid and a car ran into our fence, making a big bang sound. About a week later, the same sound came from outside, and we ran out to look -- no car. Turned out it was Mt. St. Helens erupting, some 300 miles away.--eric]
Pete #13, I think the idea is not so much that AMO contributes to the global warming trend, but rather that it overlays the modern temperature record in such a way that a “naive” analysis ignoring it will find a slightly greater trend than is really there.
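That overlay effect is easy to demonstrate with synthetic data: fit an ordinary least-squares line to a known trend plus a 65-year sinusoid, over a window that happens to sit on the oscillation's rising limb, and the fitted slope comes out too high. All the numbers here (trend, amplitude, phase) are made up for illustration:

```python
import math

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

true_trend = 0.02  # deg C per year -- made up for illustration
years = list(range(1970, 2010))
# 65-year sinusoid, phased so this window sits on its rising limb
temps = [true_trend * (y - 1970)
         + 0.2 * math.sin(2 * math.pi * (y - 1970) / 65 - math.pi / 2)
         for y in years]

slope = ols_slope(years, temps)
print(slope)  # noticeably larger than the true 0.02 deg C / year
```

Of course, the converse also holds: a window on the falling limb would bias the fitted trend low, which is exactly why the length and phase of the analysis window matter.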
There is one statistical detail which irks me about their papers: their “uncertainty estimates”.
Considering how much they have filtered, chopped, and fitted the data, they should have made it very, very clear that the uncertainty estimates are not the uncertainty on the global average land temperature, but on the mathematical model they force their data into.
For instance, they throw out series which do not correlate well enough with neighboring stations, and while that is a valid filter to screen out changes in siting or micro-climate, it also tends to cut the uncertainty a fair bit.
They also operate on time-wise averages, which is guaranteed to get the uncertainty down; indeed, that is the main reason we do it in the first place.
Let me stress that it basically couldn’t be any other way; the other temperature reconstructions obviously suffer from the same kinds of issues.
But I would really have preferred it if they had written, in Helvetica 30 Bold, that the uncertainty band is not on the actual, as-measured-in-the-field global average temperature, but on their mathematical model of it, and that because of the steps that model contains, it is probably an order of magnitude too optimistic with respect to the actual temperature.
Comment by Poul-Henning Kamp — 25 Oct 2011 @ 1:31 AM
I still meet people who think “hide the decline” is something important. I refer them to RC.
#10 Paul “One thing that seems potentially new and interesting from their results (and that I haven’t seen many comments on) is the fact that their global record goes back about 50 more years than CRU and 80 more years than GISTEMP by starting with the year 1800. I’m curious as to how well that particular result will hold up to peer review, since it apparently involves some creative statistics.”
Yes, they don’t have any more data. I checked that out here. Basically what they have is GHCN.
An analysis published earlier this year (Wu et al, Clim Dyn (2011) 37:759–773 DOI 10.1007/s00382-011-1128-8) extracted, using empirical mode decomposition (EMD), a multidecadal (65-year) component to global temperature trends. They could localise this component to the North Atlantic with EMD. This analysis was based on HadCRUT, which seems to be underestimating recent warming. An analysis of the BEST data with EMD might be worthwhile.
The 3 main unresolved problems seem to be: (a) What would the result be if 100% raw data was the feed? Each country that provides data has the opportunity to adjust it before sending, and then the same country data set is used for all of the derived sets except satellite, thus increasing the probability of agreement by new analyses. (b) Given the large number of cases where there has been essentially no temperature trend in the last 100 years, how does this fit with the selective geography of greenhouse gas mechanisms? (c) UHI remains a counter-intuitive result and disagrees with direct simultaneous measurement in some papers. I have seen no time-series method of UHI estimation that agrees with stationary methods. It could be that UHI has other important causes besides common postulates like population, including local effects happening within metres of the instruments.
Comment by Geoff Sherington — 25 Oct 2011 @ 4:05 AM
Re: My comment #4
The authors of the BEST AMO paper do mention in the discussion section that the 9.1 year peak, which their analysis found in the AMO and PDO, could be the result of the lunar tidal cycle. That the lunar tidal cycle appears in the data is not surprising, as others have also pointed to this possibility. For example, I recall this paper, which I referenced back in 1988:
Currie, R. G., “Examples and Implications of 18.6- and 11-Year Terms in World Weather Records”, in Climate, Van Nostrand Reinhold (1987)
Currie was also a co-author of this paper (which I haven’t read):
D.P. O’Brien and R. G. Currie, “Observations of the 18.6-year cycle of air pressure and a theoretical model to explain certain aspects of this signal”, Climate Dynamics Volume 8, Number 6, 287-298 (1993)
The BEST authors also note:
“Correlation does not imply causation. The association between Atlantic sea surface temperature fluctuations and land temperature may simply indicate that both sets of temperatures are responding to the same source of natural variability.” They go on to suggest that the AMO may be the source of much of the trend seen in the temperature data, yet the AMO data cannot be used to analyze a 65-to-70-year “cycle”: the available data set(s) do not extend over enough time for an accurate determination of periodicity at these time scales. In addition, the early data for sea surface temperatures are not global, which further limits the usefulness of these data for long-period harmonic analysis.
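The record-length limitation is easy to make concrete. The frequency bins of a length-N discrete Fourier transform are spaced 1/N cycles per year apart, i.e. resolvable periods of N/k years, so a roughly 150-year annual record resolves only two periods anywhere in the 40-100 year band. A sketch (the 150-year figure is a rough assumption, not a property of any specific data set):

```python
# Frequency bins of a discrete Fourier transform of an N-point annual
# series are k/N cycles per year, i.e. resolvable periods of N/k years.
N = 150  # rough stand-in for the length of the instrumental record

periods = [N / k for k in range(1, N // 2 + 1)]
near_65 = [p for p in periods if 40 <= p <= 100]
print(near_65)  # only two bins (75 and 50 years) bracket a "65-70 yr" cycle
```

With barely two full cycles in the record, nothing can distinguish a 65-year oscillation from, say, a 55- or 80-year one.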
As Pete Dunkelberg points out in #17, there needs to be some physical link between the AMO index and the temperature record before one can claim that the AMO causes anything. The same might also be said of the NAO, regarding repeated claims that the NAO index pressure differences are the cause, rather than the result, of changes in atmospheric circulation, i.e., weather. Physics tells us that a fluid cannot cause a flow through the “pull” of lower pressure; it can only exert a “push” caused by the difference between high- and low-pressure fields. Gravity does the pulling in the atmosphere and oceans, although, once the fluid is in motion, viscous shear forces act between adjacent layers moving at different velocities.
There is recent evidence suggesting that there are oscillations in the THC, which might then be a driver of the AMO index, as the authors suggest; but then, at the next level, one must understand what physics is involved in THC variation. Since we know by now that lunar orbital changes are a driver of tidal forces, we should not be surprised to find this cycle appearing in various climate records, including the THC (and the AMOC).
“In a talk at AGU last Fall, Naomi Oreskes criticized the climate science community for being reluctant to take credit for their many successful predictions, so here we are shouting it from the rooftops: The warming trend is something that climate physicists saw coming many decades before it was observed.”
Now hear hear!!
To add, e.g., this: HadCRUT published its data a couple of weeks ago – but shouldn’t have. You don’t give in to slander. You demand evidence – or you sue.
Do you have any idea why so much effort has gone into publicizing this work before peer review? Wouldn’t it have been better to wait for publication before cranking up the PR machine? It seems that discussing these results before peer review unnecessarily leaves them open to the criticism that they haven’t been peer reviewed.
AMO is simple, but it is not understood by the climate people. No mystery there, just simple movements of warm and cold water back and forth across the Greenland-Scotland ridge. http://www.vukcevic.talktalk.net/NA-SST.htm
[Response: I agree that most 'climate people' don't understand it, but I also don't see how your graphs help (what are the data, and what are they supposed to illustrate?). Also, your explanation doesn't fit the historical definition (see the papers linked in the post). Want to say more about what you are trying to say?--eric]
The idea that the AMO is the source of global warming seems rather like the idea that human beings come from cars. You go to an NFL football game, and *holy smokes* look at all of these people getting out of cars! Cars must breed people!
The difference is one of experience. Terms like AMO and PDO are alien to most of us. We don’t know the processes. Smart science people do. So, it seems so odd that a physicist would make that error. Why you might think that …
[Response: I agree with you that these ideas come out of the blue for the most part, and are not well thought out. However, we don't know the physical basis for the AMO or the PDO. Not really. They are both simple statistical descriptions of observed sea surface temperature patterns. Some people think that the AMO is related to the Atlantic thermohaline circulation, and that the shifting of heat north and south associated with its variations could cause some of the recent warming trend in the North Atlantic. That's a very mainstream view. Muller's only additional take on it is that it also drives the *short-term* variations in the global temperature record. This is much less sensible, because it would imply that the ocean circulation is undergoing large variations quickly. Not likely, no evidence, probably not possible. The PDO has not been linked to anything interesting other than the annual storage of summer (solar) heat in the North Pacific, which creates some memory in the system. That memory timescale is about 1 year, which is all you need to get the apparent decadal variations in the PDO index. Very few people seem to get this point, and they (e.g. J. Curry) talk about the PDO like it is in the driver's seat (to mangle your own analogy above). I very much doubt this is correct.--eric]
Eric, thanks for the even-handed treatment of this “new” climate data, but I remain an anthropogenically-caused climate change skeptic because of the extraordinarily high number of unproved variables that must be shown to be true, in order for man’s puny efforts at controlling the climate to have any long term effect.
While the vagaries (i.e. AMO) of climate change study may glaze the eyes of all but dedicated scientists who are paid to examine these climate change causative permutations, the consequences of being WRONG about the facts and acting on those wrong beliefs prematurely, are hardly vague.
The CCX (Chicago Climate Exchange) ceased operation in 2010, largely based on public skepticism, but would have (by their own figures) become a 10 TRILLION (with a T) dollar per year trading exchange. And that was hardly the cap for this hardy cap-and-trade enterprise. It was just getting started. The US GDP, by comparison, was approximately 14 trillion dollars, and if you can imagine suddenly taking 2/3 of the US economy out of the global economy for an unproved reason, I think you can see why we “skeptics” would want to make sure that the science is right before we bankrupt large swaths of energy-producing global business enterprises, before we are sure that they are truly responsible for the 1 or 2 degree change that’s predicted over the next century. (Not to mention the fact that China and other energy-consumption giants have no intention of putting similar restraints on their own use of fossil fuels, thus negating much of our conservation efforts.)
This “follow the money” detective approach may not address your specific concerns regarding Richard Mueller’s ego and the veracity of data collection and statistical analysis, but from my day job (former options, futures and equities trader and current financial services employer) I can see an extremely large motivation for getting the science wrong (in favor of trading climate instruments and the power to make or break entire business sectors) and a skeptical public who have seen this movie before, and naturally want to see conclusive and comprehensible cause and effect proof, BEFORE we wean ourselves from our current living standards on the hope that it’ll make a discernible difference in 100 years from now.
[Response: I don't disagree with you that these sorts of motivations are likely to exist among climate-information *users*. It is harder for me to understand how this can motivate climate information producers (like me). It is even harder for me to see how any of this affected Joseph Fourier or Svante Arrhenius, some 100+ years ago. The only way to figure this out, of course, is to follow the scientific arguments, and see where they lead. To his credit, this appears to be what Muller has done, with the apparent result that he has shown his own preconceptions about what motivates people to be wrong.--eric.]
I wonder if Muller is going to speak publicly about the response he’s received from the “skeptic” community because of his results. The fact that Muller went from hero to enemy #1 merely because of the results his research produced is at the very crux of the climate issue.
This is the story we should all be discussing. It’s not the man, the scientist or the data that are objectionable to the “skeptics.” The only thing they find genuinely objectionable is the conclusion that humans could have an impact on climate. They are a conclusion looking for justifications.
Bill, I might have more sympathy for your line of argument if you weren’t taking issue with physics that has been established for more than a century–as Eric points out.
We understand the greenhouse effect very, very well. It has a quite distinctive signature in both the troposphere and the stratosphere–and we see that signature in the current warming.
The thing is that you are wrong not just on the science, but also on the economics. Fossil fuels are finite and rapidly running out. We will have to completely replace the current energy infrastructure on a timescale of decades, quite independent of climate concerns. Climate change merely increases the urgency and limits our ability to rely on coal, tar sands, etc.
Finally, your “follow the money” argument is absurd. Climate scientists aren’t trading carbon on exchanges. They aren’t getting rich. They are motivated by trying to increase their understanding of the planet’s climate. Is that so utterly foreign to you that you cannot entertain it as a possibility? If so, don’t you think we’d be better off listening to the scientists than to you?
Perhaps it does, but you would be in good company ;-)
BTW it’s not ‘Schadenfreude’ for me, rather that my quota of compassion was spent in toto on Phil Jones, and the account has remained empty.
Comment by Martin Vermeer — 25 Oct 2011 @ 12:38 PM
Let’s do some math on carbon trading (and apologies in advance for any gross errors… this is a little rushed). The annual CO2 emissions of the US are in the vicinity of 6 billion metric tons. Let’s assume that all of those were to be traded on the CCX and that the price was somewhere between $10/MTon and $100/MTon (current prices are about in the low teens, I believe). That gives us a value of the asset at somewhere between $60bn and $600bn. (Multiply by about 5 to get the global numbers.)
That’s a far cry from $10 Trillion, even assuming that all emissions are traded on the exchange, isn’t it? And the size of the market doesn’t even indicate the value [taken out of the global economy], any more than the ~$100bn notional per day volume of the 10-yr Treasury futures market is value taken out of the global economy. So, saying that cap and trade would take 2/3 of the USD GDP out of the global economy is just, well, alarmist.
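For anyone checking the arithmetic, the numbers in the comment above are easy to reproduce (the emissions figure and price bounds are that comment's assumptions, not authoritative data):

```python
us_emissions_t = 6e9          # US annual CO2 emissions, metric tons (approx.)
price_range = (10.0, 100.0)   # assumed $/MTon bounds from the comment

low, high = (us_emissions_t * p for p in price_range)
print(low, high)          # $60bn to $600bn
print(5 * low, 5 * high)  # global, at ~5x the US: $300bn to $3tn
```

Even the global upper bound falls well short of $10 trillion, which is the point being made.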
Response: ….Want to say more about what you are trying to say?–eric]
Hi Prof. Steig
Of course I am familiar with the AMO definition and the N.A. SST, currently writing an article which will contain all necessary and some new information. Hope to finish in the next few weeks. I’ll email you a copy when finished.
p.s. the underlying cause of the natural changes is far simpler than is normally assumed.
[Response: Great, I look forward to seeing that. By the way, I did not mean to imply you don't know what you're talking about -- only that your link doesn't provide enough information for the rest of us to learn anything! -eric]
@Bill in 29. If people like Al Gore weren’t investing in carbon exchanges you’d say that they don’t believe in their own solutions. The denier, never enough evidence side of things can always find a way to frame things in a damned if you do, damned if you don’t way.
The goal isn’t to wean ourselves from our current standard of living…it’s to maintain our standard of living while having a planet left to live on.
I’m glad you also have a crystal ball and a palm reader at your disposal to say with such certainty what China and other emerging economies will do. My on-the-ground Chinese experience suggests that they will adopt the best available technologies that they can buy, beg, borrow or steal.
the consequences of being WRONG about the facts and acting on those wrong beliefs prematurely, are hardly vague.
Indeed. Try this on for size on YOU being wrong about the facts:
* Existing fertile agricultural production zones, like California-Florida and Southern Europe, turning arid permanently
* Coastal zones around the world, where most of our cities and expensive infrastructure are located, being flooded (or expensively protected or relocated) by a metre or more of sea level rise.
I don’t want you to argue that these are uncertain, and might not happen; I know that already. I want you to prove that they will not happen, with certainty, before allowing the further release of vast quantities of a known greenhouse gas into the atmosphere. A greenhouse gas, I may say, that is known to be implicated in a large part of the temperature swings between ice ages and interglacials, not even to mention the going in and coming out of the “Snowball Earth” episodes of the Precambrian.
Are you really, really serious about allowing the build-up of CO2 to a level not seen since Antarctica first acquired its ice sheet, just on your belief, or suspicion, or hunch, contrary to what those that have actually studied these things think they know, that this ‘control knob’ might not be doing much; just because scientists have not managed to prove, to your satisfaction, that consequences like the above are certain, rather than just very well possible? Feeling lucky, are you?
Remember that our economy, and our material production system, exist within the natural environment and the climate system, and on its terms; not the other way around. The planet can do very well without us.
Sorry for the OT.
[Response: I don't think this is so off topic. We at RC do not consider ourselves experts on economics, nor do we have by any means a unified view of what the policies ought to be (e.g. I have no idea how Gavin or Mike feel about Cap&Trade, as I have never talked with them about it), but that doesn't mean that people should not feel free to discuss this sort of thing here in the comments section. Particularly when it's in response to a post that is really about the very political nature of the way that the science is being presented. In my view, as long as the science is not conflated with the politics, this is fine.--eric]
It’s always competitively tempting to claim the other guy is doing X so you have to do X to succeed. When X involves damage to everyone around, it’s a spurious argument. The damage is what you’re eager to deny.
Muller now accepts that the scientists were right and the climate is warming, so he’s got a dilemma. He can go straight to “we can’t affect it” but that’s scientifically already debunked. He can go straight to “it’s too late” but that’s scientifically at least arguable.
He’s stuck with: business as usual makes this worse. What is to be done?
There are many examples of businesses reacting when the science says their business model is false.
One way is to compartmentalize the market.
The Lead Industry Association did that; when the science became clear enough for political action in Europe, the lead industry lost business there. The Lead Industry Association successfully kept their market in the USA for decades afterward by promoting false reassurances.
It’s practically a guidebook to how to lobby and advertise against the science to prolong your marketability, at least long enough to shift investments elsewhere and get out before the stock collapses. That’s a business method, isn’t it?
Lead in paint is — in the short term — a problem that can be kept local. Climate change isn’t.
Another for Bill: if you did look at the pictures in the PDF linked in my last post (and I hope you did), then this will help:
Am J Ind Med. 2007 Oct;50(10):740-56.
The politics of lead toxicology and the devastating consequences for children.
Rosner D, Markowitz G.
Columbia University, Mailman School of Public Health, New York, USA.
At virtually every step in the history of the uncovering of lead’s toxic qualities, resistance was shown by a variety of industrial interests to the association of lead and toxicity. During the first half of the last century, three primary means were used to undermine the growing body of evidence: first, the lead industry sought to control lead research by sponsoring and funding university research…. A second way was to shape our understanding of lead itself, portraying it as an indispensable and healthful element essential for all modern life. Lead was portrayed as safe for children to use, be around, and even touch. The third way that lead was exempted from the normal public health measures and regulatory apparatus that had largely controlled phosphorus poisoning, poor quality food and meats and other potential public health hazards was more insidious and involved directly influencing the scientific integrity of the clinical observations and research.
Throughout the past century tremendous pressure by the lead industry itself was brought to bear to quiet, even intimidate, researchers and clinicians who reported on or identified lead as a hazard.
This article will draw on our previous work and add new documentation of the trajectory of industry attempts to keep out of the public view the tremendous threat of lead poisoning to children.
Eric, Ray and Martin, please forgive me for not having time to adequately explain my opposition here, but scientific research must be financed, which introduces a possible profit motive into any study.
Eric, I’m not a scientist, but I well remember struggling through 100+-year-old Maxwell’s equations at university, and reading more recent research speculating that, although not “wrong” per se, Maxwell may not have been entirely “right” either, specifically regarding the 2nd law of thermodynamics as it relates to non-linear dynamics and non-polluting sources of energy that may be available with changes in assumptions based on century-old science. (I’m mentioning this because you mentioned the work of Fourier and its continued validity today.)
Ray and Martin, regarding the 10 trillion dollar figure of the now-defunct CCX trading exchange: you guys have forgotten that options and futures markets are many times larger than the underlying instrument trade in any given market, and this is (ironically) the most worrisome aspect of the current global financial malaise. IOW, you don’t have to be an energy user or producer to buy/sell/trade options or futures on those underlying sales, and to say that there’s no money to be made there is like saying that the CBOE doesn’t make any money……lol.
Global derivative trade is currently estimated to be well over 200 Trillion dollars and the counterparty risk of default is by far, the most dangerous aspect of world financial collapse. Today we have a housing market collapse fueled by derivatives trading and speculation. But imagine a future collapse in energy credit trade and being gridlocked by regulations while ordinary people affected are starving or freezing to death!
Again, I’m not an anti-climate-change evangelist or anything of the sort. But I don’t think that you have a firm grasp on the real-world importance of getting this research right, either.
I have to leave, but thanks for the cordial discussion, gentlemen.
[Response: Bill. Thank you for the cordiality. Much appreciated. I think what I, Hank, etc. are objecting to here is not that you are raising issues about policy. Those are very legitimate issues to raise, and I definitely think that there has not been a very complete conversation at the international level about the possible economic consequences of various proposed policies. What we’re objecting to, though, is the confusion between what the science tells us about the risks and what the risks associated with specific policies might be. The climate science is certain (certain) that the risks are high, and in the face of those risks it is clear that we ought to be reducing CO2, and fast. There is nothing in that statement that implies that we ought to be reducing CO2 by any means possible. That’s a different question.
The most significant uncertainties do *not* lie with the science, but in knowing the consequences of various policy options.
Bill wrote: “I remain an anthropogenically-caused climate change skeptic because of the extraordinarily high number of unproved variables that must be shown to be true”
The “extraordinarily high number of unproved variables that must be shown to be true”?
What is that even supposed to mean?
Bill wrote: “would want to make sure that the science is right before we bankrupt large swaths of energy-producing global business enterprises”
Oh, I understand now.
What that sciencey-sounding gibberish about “unproved variables” means is that you don’t want to see trillions of dollars in wealth shift from the fossil fuel corporations to other sectors of the industrial economy, therefore, anthropogenic global warming cannot be true.
The generation in power now is totally irresponsible and selfish. The prudent action for the younger generation would be to dramatically cut greenhouse emissions. Unfortunately, we have a quarterly, short-term time frame and are not at all interested in what happens after we are gone.
The sad fact is that by the time it is apparent that climate change is our own fault and we must act, it will be too late to do so because of the delayed reaction.
We of the ‘older’ generation, age 50-plus, won’t be held personally responsible; we will only be ‘cursed’.
So deniers think that scientists are ‘cooking up’ global warming by manipulating data…
Yet even a non-scientist such as myself can appreciate that inventing a warming trend by underhand processing of data is a very stupid thing.
1) At any stage someone might find you out by trying to replicate what you’re doing; as you’ve been dishonest they’ll fail. 2) It’s a warming trend that’s projected to continue, so you commit yourself long term to having to ‘cook up’ evidence that fits with that warming trend across an increasingly wide area of observations.
Really, this whole notion of AGW as conspiracy says more about those who espouse it than about science and those practising it (and yes, I’m thinking delusion mixed with a dash of Dunning-Kruger).
I glanced over the BEST website and did note that in the recent decades CRU is the outlier (tracking below BEST, GISS, and NCDC), I guess this is due to CRU’s lack of coverage in the Arctic. Otherwise I’m not spending my time reading the papers, I’m overloaded with papers containing novel insight. If there’s anything interesting in BEST I’ll be made aware by reading the appropriate blogs. RC being one of them.
As I’ve said to you some months ago: I found myself in the same situation Dr Steig is now in. Your graphs aren’t nearly enough.
I work in finance. I’m very aware of the underpinnings of our markets. Which is why I can say that your claim that the carbon market ballooning to $10tr amounts to that much being taken out of the economy (and I think I found your wonderful sources for that…) is nonsensical and intended to be alarmist. When you think about the uncertainties of economic models and how much money is invested using those models as a basis, the idea that we don’t know enough about climate change is laughable.
I’d like to recommend to Bill David Brin’s take on how to distinguish between “rational, open-minded ‘AGW-skeptics’” (which Brin considers himself to be) and ideology- or profit-motivated deniers. At the outset, he says:
I find this distinction attractive, at the surface, because I too find some parts of HGCC theory unclear, ill-supported or poorly explained. In such a complex field, there are sure to be gaps.
Does that sound like something you’d be willing to read, Bill?
[Response: What the heck does "HGCC theory" mean? --eric]
Bill says the global financial system is so insane that a market-based solution to reduce carbon emissions could spin out of control and freeze/starve us to death.
But trading wheat futures is okay? What’s special about carbon?
I’ve had my doubts about capitalism, but this sounds a bit over the top. But anyway, I’m game, if we can’t cap and trade, I guess we’ll just have to tax emissions. Failing that, we’ll just have to regulate.
> the confusion between what the science tells us about the risks
> and what the risks associated with specific policies might be.
Any business that reduces fossil fuel use “too soon” (before they can recover the cost or profit from the change) is at a competitive disadvantage.
Any business that reduces fossil fuel use “too late” (after others have made the change and captured newly attractive opportunities early) — ditto.
Businesses get together to request that the government regulate so they can avoid the “scorpions in a bottle” aspect of market competition and all move at the same time in a direction that benefits society, even losing the competitive advantage, because of the long-term savings.
Why wasn’t it? Old, inefficient transformers with a 50-year service life were in stock that couldn’t have been sold under the new standards. With a decade’s delay, those were sold and installed around the country during that time. Those invested in building cheap inefficient gear got their profit.
Confusion prolongs market opportunities for those creating the confusion.
“Biologically rational decisions may not be politically possible once investment has occurred.”
DOI: 10.1126/science.1135767, Science 315, 45(2007)
Eric, in response to #45: apparently it’s “Human-generated Global Climate Change”… everyone apparently makes up their own acronyms, but at least it isn’t as bad as “CAGW” :-)
Actually, to Mal (#45), going through the first few paragraphs of the article, I see little to disagree with. Of course we can distinguish between “open-minded skeptics” and “deniers.” That’s a no-brainer.
However, whenever I hear people talking about how they are an “open-minded skeptic,” they seem to be referring to some sort of third party, divorced both from the practice of doing science the way “scientists do it” and, at the other extreme, from the noise-makers who just play dishonest gotcha games. The implication seems to be (although not always) that there is currently a rational justification for disbelief in AGW, but this is just as consistent with the literature as the moon being made of blue cheese.
I associate “open-minded skeptic” with “scientists” (those who do science, or are at least well-read in the science, and have formed various opinions in the process, and can mold those views over time as evidence develops). This is by definition. There just happens to be overwhelming evidence (that is understood by people who read or do the science) that AGW is real, but there is plenty to debate about the specific details on certain topics.
How do we interpret d18O records in the tropics? Is it reflecting large-scale temperature changes, local precipitation or transport changes, etc? Is lichenometry a good dating tool? What is the best method to diagnose climate sensitivity…do measurements of the co-variability between satellite-retrieved radiation budgets and the surface temperature tell us anything useful about sensitivity, or do paleoclimate constraints tell us more? What are the quantitative mechanisms/sources by which CO2 fluctuates between glacials and interglacials?
You’ll get a lot of different answers to these types of questions from “experts” in the field, due to a shaping of their own views developed during their research time, uncertainties in the science, various interpretations of the same data, etc. You’ll find people who will tell you the flaws in a lot of methods (for example, issues in Mg/Ca temperature proxies), but that doesn’t mean they dismiss everything out of hand, or that their views on Mg/Ca somehow mean they don’t believe in climate change. Of course, the blogosphere is largely a black-and-white world, which gives rise to these absurd perceptions about climate science.
Eric asked in reply to #45: “What the heck does ‘HGCC theory’ mean?”
According to David Brin’s website, HGCC is an acronym for “Human-generated Global Climate Change (HGCC)… also called Anthopogenic [sic] Global Warming (AGW)”.
I have never much liked Brin’s science fiction and I’m even less a fan of his nonfiction ruminations, but the article that Mal Adapted linked to is a pretty good essay on the difference between genuine skeptics and the faux-skeptic deniers.
Bill does bring up a legitimate point regarding the funding of alternatives, “Cap and Trade” being one mechanism which may do more to redistribute wealth than to fund the trade in marginal choices for energy/systems.
That Dr. Muller has confirmed the science of the CRU/UEA teams is the least of his accomplishments. Instead, it helps move the science forward.
What confirmation of global warming really does is suggest that, if that science is correct, then the rest is likely correct as well. If the intent of science is to move to a sustainable future with an equivalent standard of living, we will need the “Bills” of Wall Street engaged.
As most of us know, solar is likely the best avenue for fixed energy resources. The issue is how to get the economists to help provide the financing and impetus to move to these alternatives.
The fact that Wall Street sees little in “Cap ’n Trade” suggests there may be a need for a government policy change. It may be time to tax or charge for the cleanup of ongoing energy/transportation/housing/farming systems and use those funds to help those industries make the changes to cleaner technologies/processes.
Sure, it would be nice if R-n-D tax policies could be used. But you cannot attach criteria to R-n-D projects. This means you have to implement a new structure, and you need buy-in from economists to give it teeth.
So before disregarding “outsider” concerns, can we focus instead on how to incorporate or meet them?
(Dr. Steig, my apologies for taking this OT. I believe the point I wish to make is that Dr. Muller’s effort, like the CRU-NASA and UEA efforts, is just a start. We have a long way to go yet and a short time to get there. If I may suggest a future subject: a discussion on bridging today’s systems with future systems and how to accomplish it?)
While I agree that Muller’s op-ed piece in the Wall Street Journal seems to be tooting his own horn quite a bit, on the positive side, to have the Wall Street Journal editorial page publish anything that argues for, not against, at least some aspect of the scientific consensus on climate change is a step forward!
Chris R says:
25 Oct 2011 at 2:16 PM
Thank you for your remarks.
My graphs are data available from NOAA, NCAR etc. Once data show good correlation, the next step is to find out whether the correlation is a consequence or a coincidence. The tendency of academia toward the new, when the author is a ‘gatecrasher’, is to reject the first and opt for the second as the preferable choice. I present what I think has a chance of being possible, likely, and verifiable by data. Those who are genuinely interested will make an effort to find out more by personal contact. Criticism is plentiful, assistance scarce.
Back to the science: By coincidence, a ‘skeptic’ paper I’m looking at got me interested in the lunar-tidal-climate hypothesis that is tangentially mentioned in the Muller paper (and briefly lampooned in last year’s April Fools’ post here). I’ve read Richard D. Ray, “Decadal Climate Variability: Is There a Tidal Connection?,” Journal of Climate 20 (July 2007): 3542-3560 (short answer: possibly an 18.6-year cycle in some places with intense diurnal tides, but no quite solid evidence), and will be looking at references therein. What else should I read for a recent overview?
31 Ray Ladbury: “Finally, your “follow the money” argument is absurd. Climate scientists aren’t trading carbon on exchanges. They aren’t getting rich. They are motivated by trying to increase their understanding of the planet’s climate. Is that so utterly foreign to [Bill] that [Bill] cannot entertain it as a possibility?”
Yes it is. Should I try to refrain from guessing why? Social sciences. Do “they” seem to be a different species? Perhaps “we” physical scientists are a different species. “Homo Sapiens” is just a name.
I am afraid that you are falling victim to the argument from consequences: the consequences of a proposition have no bearing upon its truth or falsehood. And once science has been established for a century or more, it is very rarely ever found to be false.
Yes, climate science is complex. Yes, the system has many interacting components and forcings and feedbacks. However, warming due to a long-lived, well mixed greenhouse gas has a rather distinctive fingerprint that can be identified in the paleoclimate as well as our current warming epoch. We don’t need to calculate everything to the last decimal place.
What is more, you seem to think that there are adverse consequences to our economy only if we mistakenly take action. Climate change is already having consequences–serious consequences–and it has just begun!
Moreover, as I said before (and you ignored), fossil fuels are finite, and our energy infrastructure must be replaced in any case.
I would contend that to ignore these facts requires more than myopia. It requires the willful wearing of ideological blinders.
But exactly how finite is debated. Do you have something more specific in mind?
[Response: Here's a useful definition: something is finite if it becomes more expensive as one approaches the finite limit. In reality, we will never reach the limit, we will only approach it (there will always be *some* little pocket of oil we can't/won't get to). But the fact that the tar sands of Alberta are no longer considered too expensive to use (despite the huge expense in real dollars, as well as the environmental cost) shows that such a limit is being approached (and therefore that it exists).--eric]
Comment by Pete Dunkelberg — 25 Oct 2011 @ 9:00 PM
Bill thinks everybody is like Bill. Physicists think all people are or should be physicists. I agree with the should be but not with the are. Scientists are impossible for preachers to control. It is like herding cats. Back before the enlightenment, there were no cats to herd, only cows. Wouldn’t it be lovely for the 1% if there were no more cats? David Brin is right. There is a war against science. How do we fight it?
Congratulations to Richard Muller for having swindled the Koch brothers. + 10 points in the science war.
“The idea that not only can our ideas be severable — but that they must — that it is intrinsic to everything scientists do that everyone, no matter how laudable, can be questioned, can be debated, can be flat out wrong… it’s so far away from how shamans have operated throughout history that they never made the leap, not through all their schooling. To them, we’re just competition.”
The 1% can be shamans or billionaires or whatever. Either way, they must control the minds of the 99% or they lose.
“The implicit philosophy here is that scientific “belief” is inseverable, and that if Einstein (one of our holiest prophets) can be questioned, surely everything falls apart! Just as we tore away from the impure Faith, now the scientists are questioning themselves. Surely the schism of Science is close at hand, and with it the fall of this pernicious heresy…. ”
Ah Ha! Teach the severability of science! Or, as we wanted to do anyway, teach experimentalism. Richard Muller was not bribable.
Comment by Edward Greisch — 25 Oct 2011 @ 11:01 PM
In the case of AGW, I don’t believe we are going to make progress on CO2 emission reduction for a long time. This is because the people most responsible for the emissions are not generally the ones affected by the consequences. Recall that the US government didn’t give a rabid rat’s behind about fallout from atmospheric bomb tests (which had major consequences in sparsely populated Pacific Islands, the desert southwest and among nearby military personnel) until the milk supply in the midwest became dangerous to drink. Until the high water inundates million dollar houses in the Hamptons and Boca Raton, droughts, floods and drowning polar bears are going to be considered somebody else’s problem by those in power. The western world’s virtual and practical indifference to AGW is a failure of our collective intelligence and conscience, and strong evidence that we are not evolving fast enough to save our civilization.
Bill, we have to be careful about double and triple and N-tuple overcounting of financial products. I often hear claims that there are $600T of derivatives contracts, with the implication that over a decade of world GDP could vanish at any minute. But what that really means is that hedge funds have a circular betting game going on among them. If it unwinds (as in, a key counterparty goes bust), you have a nasty problem untangling the mess, but the rest of the world isn’t required to pay up. We hear similar exaggerations about the amount of debt guaranteed by the Fed, where in reality money that is loaned overnight, then reloaned the next day, gets counted roughly 250 times over a year, and converted into some big scare number.
So if you go to a carbon exchange, the parts that matter for the economy are the price industrial buyers of permits pay and the price those who originate permits receive. Everything in between is just middlemen trading among themselves.
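To make the overnight-loan overcounting concrete, here is a toy calculation (all figures are hypothetical, chosen only to illustrate the gross-vs-net distinction described above):

```python
# Gross vs. net exposure: a single $1B loan rolled over every business day
# gets counted once per day in a "total loaned" figure, even though only
# $1B is ever outstanding at once. Numbers are hypothetical.
principal = 1.0                      # $1B outstanding on any given day
business_days = 250                  # roughly one year of overnight rollovers
gross = principal * business_days    # the scare number: "$250B in loans"
net_at_risk = principal              # what is actually at risk at any moment
print(gross, net_at_risk)
```

The ratio between the two numbers is exactly the number of times the same money was re-counted, which is the point being made about the $600T headline figures.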
Lunar tidal 9.1 year cycle? Hmmm. I’d first consider a spectral analysis of ENSO and lagged correlations with the PDO and the various different AMO products.
Comment by David B. Benson — 25 Oct 2011 @ 11:41 PM
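As an illustration of the kind of spectral check being suggested here, a minimal periodogram sketch on a synthetic monthly index (the series, its length, and noise level are all made up for the example; this is not the actual AMO or PDO data):

```python
# Periodogram of a synthetic "index" containing a 9.1-year cycle plus white
# noise, to show how a dominant period is recovered. All values hypothetical.
import numpy as np

rng = np.random.default_rng(0)
years = 91                       # record length chosen so 9.1 yr falls on a bin
n = years * 12                   # monthly samples
t = np.arange(n) / 12.0          # time in years
signal = np.sin(2 * np.pi * t / 9.1) + 0.5 * rng.standard_normal(n)

# One-sided power spectrum; frequencies in cycles per year (d = 1/12 yr)
freqs = np.fft.rfftfreq(n, d=1.0 / 12.0)
power = np.abs(np.fft.rfft(signal)) ** 2

peak = np.argmax(power[1:]) + 1  # skip the zero-frequency (mean) bin
print("dominant period ~ %.1f years" % (1.0 / freqs[peak]))
```

A real analysis would also need significance testing against a red-noise null, which is exactly the issue raised in the update at the top about high-frequency ‘redness’.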
There might be enough data points to do a cross-section analysis. What characteristics of areas in the US are associated with temperature increases and decreases? The map in their article on urban vs. rural warming suggests that temperatures have dropped over 70 years in many parts of the South and Mid-west. Why? They suggest irrigation as a possibility. There are surely many more possibilities that one might tease out with multiple regression. And (joking) there might be a relationship between areas with more warming skeptics and areas with temperature drops.
(58) CM says:
I am only reproducing data as available. Formulating a hypothesis out of conjecture, and the much harder task of proving it to a degree of wider acceptance (perhaps beyond my current competence), is a process where success is rare and a long-drawn-out struggle, but failure is instant and frequent.
A time to plant, a time to reap
in no hurry to be off and weep.
Finitude? Well, for an energy resource, it’s when it takes more energy to extract, process and transport it than you get out of it. Frankly, it doesn’t much matter what definition you use. Exponential growth ensures that you reach the limit within a few doubling times.
There are two types of demographers: Malthusians and those who are bad at math. WRT his basic thesis (though not his remedies), Malthus will be right eventually.
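A back-of-envelope sketch of the doubling-time arithmetic behind that claim (the growth rate and reserve multiple are illustrative assumptions, not estimates):

```python
# With steady exponential growth, consumption reaches any finite multiple of
# its current level in log2(multiple) doubling times. Rates are hypothetical.
import math

growth_rate = 0.02                           # 2%/yr consumption growth
doubling_time = math.log(2) / growth_rate    # ~35 years per doubling
resource_multiple = 16                       # suppose reserves were 16x larger
doublings_needed = math.log2(resource_multiple)  # 4 doublings

print(doubling_time * doublings_needed)      # even 16x reserves buy only ~139 yr
```

This is why arguing over whether reserves are 2x or 16x current estimates changes the answer by only a few doubling times, not by orders of magnitude.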
Just for fun, here’s a Google Earth kmz file showing the stations in the BEST data set with records spanning more than 50 years and that show a best-fit temperature gain (or loss) of more than 0.5°C/century:
That’s about 5,700 of the about 39,000 stations in the data set. Placemarks for stations showing warming are yellow, blue for cooling. Clicking on a placemark brings up station information along with a link to a graph of the data for that station.
Interesting that stations in the southeastern US generally show cooling, unusual in the data set generally. Overall, in this subset, 86% of the stations show warming.
- it’s the first column of the ‘complete’ dataset.
[Response: Thanks. As you may have noticed, I use your site a lot in comments when a specific dataset/time period/trend etc comes up - the URL structure for each figure is really very usefully designed. Have you thought about opening the back-end code up for further development? One could imagine adding a facility for using additional data via a URL, or lat/lon/regionally dependent series, or different masks (land-only, ocean-only, etc). Just a thought... - gavin]
RE: 63 “Until the high water inundates million dollar houses in the Hamptons and Boca Raton, droughts, floods and drowning polar bears . . .”
Some have noted that the BEST results show we are now 2°C warmer than 1800. Was the climate then that much better?
[Response: 'Better' depends on where you are, and what you are. There is no necessary problem with higher temperatures (to a point), nor to higher sea level (to a point). It's the rate of change that matters most. Incidentally, I am skeptical of the best results between 1800 and 1850. More on that later.--eric]
“Overall, we are underwhelmed by the quality of Berkeley effort so far — with the exception of the efforts made by Robert Rohde on the dataset agglomeration and the statistical approach.”
I thought that the Rohde part is the essence of the BEST approach. For the moment it is what interests me; perhaps I am being very narrow.
Long as that paper is, it still needs a book before I could really understand the method.
Unfortunately I come unstuck quite early.
“C(x) captures the time-invariant spatial structure of the temperature field, and hence can be seen as a form of spatial “climatology”, though it differs from the normal definition of a climatology by a simple additive factor corresponding to the long-term average of theta(t).”
Already this differs from my understanding of a climatology, which I took to include the expected seasonal cycle, e.g. the file abstem3 in the HadCRUT3 dataset.
As I read it, the W (weather) function would not only include weather, as I would normally understand it, but also the difference between the local seasonal cycle and the globally averaged seasonal cycle.
That it confuses me is no big issue provided that it is clear to those such as you. However, the precise way that the temperature field is separated into components would seem to matter as soon as one starts to form the necessary covariance matrices. As I read it, spatial covariance matrices are important to the analysis, where perhaps strictly it is the full lagged covariances that matter. Presumably this is all tidied away at some point either in the paper or as a matter of convention in all such papers.
Could you, briefly, outline what is really going on with respect to the seasonality of the data in this paper?
[Response: This requires a longer response than I have time for at the moment, but you raise some interesting points that will have to get addressed. Note that our comment that we are not-underwhelmed by Rohde's work is not necessarily an endorsement of it.--eric]
Comment by Alexander Harvey — 26 Oct 2011 @ 6:49 AM
Hank Roberts @ 38; what is true of the lead industry is even more true of the asbestos industry. Here we had a substance known to be harmful by the Romans nearly 2000 years ago, yet all the same tactics were used to prolong the life of the industry.
“I’ve had my doubts about capitalism, but this sounds a bit over the top. But anyway, I’m game, if we can’t cap and trade, I guess we’ll just have to tax emissions. Failing that, we’ll just have to regulate.”
Yes. It is remarkable how some folks will complain about those crazy anti-capitalist market-destroying social engineers behind AGW theory in one breath, and in the next about fat cat limousine liberals and greedy corporations out to profit from ‘our’ misery by the ‘cap and trade scam.’
I’d have thought you basically either like market-based mechanisms or you don’t. But these are denialists after all, and have no need for consistency.
Sounds cool, but what application runs that file? (I’m running OSX.)
In any event, I recall that the cooling trend in the US Southeast mostly reflects that it was *really* warm during the beginning years of the period of record. If you look at, say, the last 30 or 40 years you find a warming trend, I believe.
Here’s an interesting amateur analysis on the question. (I take the author to be a ‘skeptic’–possibly more ‘real’ than some, from this post.) Bearing in mind some of the points made on the “Moscow Warming Hole” thread, you can see this stuff gets tricky fast:
Thank You!!! (but it looks an awful lot like a hockey stick; dint somebody “disprove” that? ;>)
BTW, it looks like BEST has more residual annual variance than HadCRUT. (A running mean of 6 zeroes out 6-month variance and reduces higher orders; putting 12 in as the mean zeroes out annual variation.)
Some causes might be miscalibrated thermometers with a consistent larger or smaller scale factor early in the record, or perhaps changes in annual range caused by CO2 increases, which have a larger influence on a longer record, e.g. -
“…most of the warming which has occurred in these regions over the past four decades can be attributed to an increase of mean minimum (mostly nighttime) temperatures….similar characteristics are also reflected in the changes of extreme seasonal temperatures, e.g., increase of extreme minimum temperatures and little or no change in extreme maximum temperatures.” Global warming: Evidence for asymmetric diurnal temperature change, Karl et al., Geophysical Research Letters, Vol. 18, No. 12, pp. 2253-2256, 1991, doi:10.1029/91GL02900
Has anyone done any work with scaling the baseline for calculating T anomalies?
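The running-mean claim above (a 12-month mean nulls a pure annual cycle exactly, while a shorter mean does not) can be checked on a toy synthetic series (the series here is made up purely for illustration):

```python
# A 12-month running mean has a spectral null at the annual frequency, so it
# removes a pure seasonal cycle exactly; a 6-month mean only attenuates it.
import numpy as np

t = np.arange(240)                      # 20 years of monthly samples
annual = np.cos(2 * np.pi * t / 12.0)   # pure annual cycle, unit amplitude

def running_mean(x, k):
    # simple boxcar average of k consecutive samples
    return np.convolve(x, np.ones(k) / k, mode="valid")

resid12 = np.std(running_mean(annual, 12))   # ~0: annual cycle removed
resid6 = np.std(running_mean(annual, 6))     # clearly nonzero: cycle survives
print(resid12, resid6)
```

This is only the filtering identity; whether BEST's residual annual variance reflects instrument scale factors or a real diurnal/seasonal signal is the open question the comment raises.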
caerbannog, RC, so let’s do it! in a nice interpreted language of course.
Comment by Pete Dunkelberg — 24 Oct 2011 @ 10:59 PM
I’ve created a “stripped down” and (hopefully) not-too-hard to understand temperature-anomaly app from my original “pile-o-code”.
Unfortunately it’s written in C++ (not a nice, interpreted language). Uses a bit of STL stuff (std::map, std::set, etc) which could be pretty cryptic to folks who haven’t been exposed to C++/whatever.
However, it is straightforward to compile/run on a Unix/Cygwin command-line environment. (Probably will compile/run on a Windows environment as well, but no guarantees).
Reads in GHCN temperature data and metadata, and writes the global-anomaly results to a flat-text .csv file (suitable for plotting with the Excel or OpenOffice spreadsheet software).
Nothing here that would be of any interest to folks like Tamino, Nick Barnes, etc., but hopefully can be used to convince skeptical “average joes” that there’s really no black magic involved in computing global temperature anomalies. Shows how robust the temperature record really is (my simple-minded approach really does generate results very similar to the global land temperature results that NASA publishes).
A software-savvy student would have no trouble modifying the code to compare rural vs. urban results, “dropped stations” results, etc.
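For readers without a C++ toolchain, here is a heavily stripped-down sketch of the same anomaly idea in Python (the three stations, their offsets, and the trend are all made up; this is not caerbannog's actual code or the GHCN data):

```python
# Toy anomaly method: subtract each station's own monthly baseline, then
# average anomalies across stations. Large inter-station offsets cancel,
# while a common trend survives. All values are hypothetical.
import numpy as np

months = np.arange(24)                         # two years of monthly data
station_offsets = np.array([10.0, 2.0, -5.0])  # very different local climates
trend = 0.01 * months                          # common warming signal
temps = station_offsets[:, None] + trend[None, :]

# per-station "normals": mean over the two years for each calendar month
baseline = temps.reshape(3, 2, 12).mean(axis=1)
anoms = temps - np.tile(baseline, 2)           # anomaly = temp - own normal

global_anom = anoms.mean(axis=0)               # simple average across stations
print(global_anom)  # first year below baseline, second year above
```

Despite the stations sitting 15°C apart in absolute temperature, their anomaly series are identical, which is the robustness point being made: no black magic, just differencing against each station's own climatology.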
caerbannog @ 83, thanks! I have it unzipped and viewed using \TED Notepad\TedNPad.exe, a tiny editor and notepad replacement.
Comment by Pete Dunkelberg — 26 Oct 2011 @ 6:48 PM
The community has not turned against the BEST team. Some hard believers might have (at both ends) but the work so far is open for inspection and correction and improvement. I was a minor contributor of data and ideas, but it’s a complex field and contributing gave me a better insight. We should all be over the phase of “I believe in this outcome or that outcome” in favour of “I believe in the need for the best science we can assemble”.
[Response: Absolutely. No one I have heard -- least of all here -- wishes the BEST team anything but the best success. We are merely trying to point out that the hype -- some of it from Muller, much of it from the media over which he has no control -- does not match up with the science. We also think it irresponsible the way that Muller has spoken about his (unfounded) assumptions about the integrity of other scientists -- most notably the CRU group, but also others including some of us at RC. But we also applaud his admitting (even if not in so many words) that he was wrong in those assumptions.--eric]
Comment by Geoff Sherington — 26 Oct 2011 @ 10:02 PM
It appears that I have been banned. Well enough. Though it is interesting to note that when I was just being obnoxious but people were responding to me it was ok. It is when the moderators realized that the community had no response for what I was saying that I was finally muffled. Also fascinating that even the great Gavin Schmidt had no response for what I’m sure you all initially considered to be a wild theory.
Eh, whatever. Twenty years from now this past era will be a cautionary tale. There is no enhanced AGW. It always was a preposterous theory. If temperatures were inherently unstable life on earth would be miserable and humanity would probably have never been able to evolve. Earth’s temperatures are stable, suggesting that the temperature readings we’re getting are a bit of natural cycle mixed with a century and a half of UHI. Even rural stations have suffered from land use temperature increases over 150 years and 0.5 degrees of increase would not be recognizable from white noise over that long of a period.
You haven’t been banned. Deserving comments get consigned to The Bore Hole, where at least one of your most recent comments may be found. You spout nonsense (you even admit “I was just being obnoxious” – note: just because “people were responding” to you does NOT make being obnoxious acceptable), you get ignored/consigned to the Bore Hole. Especially when you’re being obnoxious.
To be honest, this isn’t really the venue for you. If you are a genuine seeker of knowledge, and you want to divest yourself of the misconceptions you clearly have about the science, then go to Skeptical Science, where there are nearly 5,000 threads dealing with the skeptics memes you propagate.
And if you are just a dissembler and waster of the time of others, then go to the dissembler sites like WUWT, CA, Jo Nova, Curry’s H of H or Goddard’s Mis-Science, where the skies are as green as the firetrucks.
RE: Ray Ladbury says:
“Just curious. Do you come with a string we can pull to get you to spout your talking points, or are you voice activated?”
I take it Ray is among those that believe it’s just a ‘communication’ problem? If a post looks like bait to you, don’t bite. Or as you did for Bill (RE: 60) pack in as many (all?) of your talking points in a single response. No string required.
Perhaps Ray would like to explain why he considers ‘now’ (or whenever in the last 250 years) to be the ideal or optimum climate for the entire planet? My previous Q was directly related to the BEST results showing about a 2°C difference from the 1810s to now. Is ‘now’ the optimum because of that difference, or was it ‘better’ when it was cooler?
Eric’s response: “‘Better’ depends on where you are, and what you are . . .” is exactly what I was alluding to. IE: Who is it that will decide what the ‘target’ climate should be? Climatologists? (Apparently economists can’t be trusted.) Obviously some already have a target in mind. If in fact ‘we’ are in control of the climate do we consider what’s ‘best’ for all regions globally? Most? Or just what’s best for us? And perhaps most importantly, who is it that will determine what is ‘best’. . . for us or anyone else?
[Response: In response to your response to my response... Yes, of course, deciding what is 'best' is a political thing, and I'd be the first to join you in saying that I don't have particular faith that our politicians (of whatever stripe) can decide this for us. However, this is true of *any* big policy decision, whether environmental, economic, educational, etc. Just don't forget that 'doing nothing' is a big policy decision too. Just as the 'U.S. not getting involved WWII' was a policy option.--eric]
I sent my skeptic/denialist friends a link to Muller’s WSJ editorial. While I understand why the scientists here feel justly upset about its tone, IMO it is the only tone that could change the opinion of WSJ readers. The narrative of why he was a skeptic but no longer is, is probably necessary for such readers, even though there was in reality no good reason to be a skeptic to begin with.
Although I hate seeing the undeserved trash talk, I think no other way of writing it would be effective at converting people and policy. I also think that the explicit statement in a WSJ editorial that past work was properly and rigorously conducted may well start a process of recognizing that climate scientists are being unfairly targeted and move away from killing the messenger.
The Daily Show last night carried a great segment on the editorial, entirely about how ClimateGate was much bigger news than a follow-up study conducted and funded by skeptics that concluded climate scientists were unfairly trashed and in fact were in the right entirely, which is exactly the correct message to draw. If Muller’s study and PR tour helps turn public and policy decision to acknowledge the reality of global warming and the quality of climate research, it is a very good thing. I have hope that this is happening.
Perhaps I am unreasonably optimistic, but I feel good about BEST and its publicity campaign.
“…suggesting that the temperature readings we’re getting are a bit of natural cycle mixed with a century and a half of UHI.”
Here’s how a “citizen scientist” such as yourself can test your UHI claim.
Grab the code at the link in my previous post. It computes global-average temperature anomalies from GHCN v2 data. I’ll be the first to admit that it is pretty crude, but it is still good enough for “message-board science”. Using the wealth of information available on-line, teach yourself the little bit of C/C++ needed to make the very simple modifications to the code to parse out rural vs. urban station data so that you can generate separate rural and urban temperature results. Run the program to generate those results and compare.
Yes, all of this will take a fair bit of time, and may be painstaking and tedious. That’s why they call it “work”. And “work” is exactly what you need to do if you have any hope of making a credible case for your UHI claim.
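For anyone curious what that comparison actually boils down to, here is a minimal sketch – in Python rather than the C/C++ of the codebase mentioned above, and with made-up station records standing in for real GHCN fields (the IDs, the “R”/“U” flag, and the temperatures are all hypothetical):

```python
# Sketch: compare average temperature anomalies for rural vs. urban stations.
# The records below are hypothetical stand-ins for GHCN data;
# each entry is (station_id, rural/urban flag, [annual mean temps in deg C]).
stations = [
    ("ST001", "R", [10.0, 10.1, 10.3, 10.2, 10.5]),
    ("ST002", "U", [14.0, 14.2, 14.5, 14.4, 14.8]),
    ("ST003", "R", [8.0, 8.1, 8.2, 8.4, 8.3]),
    ("ST004", "U", [16.1, 16.3, 16.4, 16.7, 16.9]),
]

def anomalies(series):
    """Convert absolute temperatures to anomalies vs. the series' own mean."""
    base = sum(series) / len(series)
    return [t - base for t in series]

def mean_anomaly_series(recs):
    """Average the per-station anomaly series year by year."""
    anoms = [anomalies(vals) for _, _, vals in recs]
    nyears = len(anoms[0])
    return [sum(a[y] for a in anoms) / len(anoms) for y in range(nyears)]

rural = mean_anomaly_series([s for s in stations if s[1] == "R"])
urban = mean_anomaly_series([s for s in stations if s[1] == "U"])

for y, (r, u) in enumerate(zip(rural, urban)):
    print(f"year {y}: rural {r:+.3f}  urban {u:+.3f}  diff {u - r:+.3f}")
```

The real exercise replaces the toy list with GHCN file parsing and area-weighted gridding, but the structure – anomalies first, then average within each class, then compare – is the same.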
@ Paul Clark (#73):
Thank you for including BEST in woodfortrees.
But the BEST is a land only temperature timeseries. Other temperature timeseries (for example HadCRUT3) at the woodfortrees-site are land-ocean timeseries. Would it not be better to include CRUTEM3 (and so on) to compare?
Perhaps Ray would like to explain why he considers ‘now’(or whenever in the last 250 years) to be the ideal or optimum climate for the entire planet? My previous Q was directly related to the BEST results showing about 2C difference from 1810′s to now. Is ‘now’ the optimum because of that difference, or was it ‘better’ when it was cooler?
I’m not Ray, but that’s an easy one to answer. The last couple of centuries are when most of our population came into existence. We’re pushing up against quite a few resource limits right now with our current population – food, energy, raw materials. If a changing climate decreases any of those, particularly food, let’s just say ‘unrest’ will result.
In other words, humankind is maximally adapted to the current climate. The planet will be fine if temperatures rise ten degrees. Humankind, otoh, will be screwed.
Obnoxious can be fine, but add in ignorant and arrogant and we have a problem. You’re like some old Tory (Republican) down the pub (bar) who has half-digested a couple of Daily Mail (WSJ) articles and feels the need to pontificate about them. Loudly.
> Who is it that will decide what the ‘target’ climate should be?
> Climatologists? (Apparently economists can’t be trusted.)
Ecologists and farmers will give you the information you need. Any rate of change faster than nature can adapt to loses species wholesale. We know how fast climate changes in the past have happened, and how well (and if) ecological changes coped.
Do you know how much faster our current rate of change is than any rate of climate change in the past? Would it worry you if you knew it was, say, 10x faster than natural change? How about 100x faster? Worry?
Ah, Barn, I see your problem. You can’t distinguish between “talking points” and the truth.
Case in point: “Perhaps Ray would like to explain why he considers ‘now’(or whenever in the last 250 years) to be the ideal or optimum climate for the entire planet?”
How about the past 10000 years, Barney? That is the period where we developed every single crop, domesticated animal, technology and technique that has made civilization possible. Now we are going to push the climate well outside that range–not just of temperatures, but precipitation, extreme weather, drought and so on. And we are doing so, just as fossil fuels begin to run out and human population crests above 10 billion people. What is more, we are changing things in a direction we know makes those crops, etc. less productive!
If you bothered to read the scientific literature–or even RealClimate–you’d know that science has provided pretty convincing answers to all of your questions.
You are also wrong about the degree of denial in the denialist community. The BEST project itself arose out of pseudoskeptic doubts about the temperature record.
In short, we’ve seen about 0.8 degrees of warming globally; yes, this is significant–enough so that it is pushing more of the planet into drought; humans are responsible for nearly all of it, and we will see about 3 degrees of warming for each doubling of CO2 concentration.
David Miller wrote: “The planet will be fine if temperatures rise ten degrees.”
If by planet, you mean the underlying ball of rock, then sure, the “planet” will be fine.
But the Earth’s biosphere will definitely NOT be “fine” if temperatures rise 10 degrees. Indeed, the combination of a 10 degree temperature rise and the ocean acidification from the increased CO2 levels that a 10 degree rise implies, could very well lead to the mass extinction of most life on Earth.
Comment by SecularAnimist — 27 Oct 2011 @ 12:30 PM
Wow, that’s the best skeptics have to offer, huh? Argument from personal incredulity. Dude, scholarly monks in the Dark Ages recognized that as a logical fallacy.
Hey, Party on, Dude. You must be smokin’ some good stuff.
It’s good that you are visiting this site, because the “viewpoint” you describe is basically a statement of your ignorance of the state of climate science, what is known and unknown, what is certain and less than certain, etc. There is a wealth of information available here that can mitigate that ignorance.
Assuming of course that you are here to learn something rather than merely to regurgitate false talking points.
Comment by SecularAnimist — 27 Oct 2011 @ 12:33 PM
barn E. rubble wrote: “If in fact ‘we’ are in control of the climate do we consider what’s ‘best’ for all regions globally?”
What we are doing to the Earth’s climate cannot be described as being “in control” of anything. On the contrary, the basic problem is our uncontrolled massive CO2 emissions.
Comment by SecularAnimist — 27 Oct 2011 @ 12:38 PM
Re: Tom @98
Erm. People say that RC is too heavily moderated, but recently quite a few people with a knowledge of the science ranging from zero to not-very-much, seem to be staggering in and relieving themselves in the punch-bowl…
Back up, Tom, and start with the REAL 1) of the AGW theory: CO2 is a gas which absorbs infrared radiation, and we are pumping 30 gigatonnes of it into the atmosphere every year.
Please press the Start Here button at the top of the home page, and come back when you’ve read a bit more on the subject.
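To put that 30-gigatonne figure in context, a back-of-envelope conversion to atmospheric concentration is easy to sketch. The conversion factor (~7.8 Gt CO2 per ppm) and the ~45% airborne fraction are round numbers I am assuming for illustration, not quantities from the comment above:

```python
# Rough conversion: annual CO2 emissions (Gt CO2) -> atmospheric ppm rise.
# Assumed round numbers: ~7.8 Gt CO2 raises atmospheric CO2 by ~1 ppm,
# and roughly 45% of emissions stay airborne (the rest enter oceans/biosphere).
GT_CO2_PER_PPM = 7.8
AIRBORNE_FRACTION = 0.45

def ppm_rise(gt_co2_per_year):
    gross = gt_co2_per_year / GT_CO2_PER_PPM   # if it all stayed in the air
    net = gross * AIRBORNE_FRACTION            # after natural sinks
    return gross, net

gross, net = ppm_rise(30.0)
print(f"30 Gt CO2/yr ~ {gross:.1f} ppm gross, ~{net:.1f} ppm net per year")
```

The net figure comes out near the observed rise of roughly 2 ppm per year, which is the point of the sanity check.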
There is a nauseating comment from Fred Singer on the Nature editorial ‘Scientific Climate’ (Nature 478, 428, 27 October 2011) re the BEST study. Missing from all of his credentials at the bottom of his meanderings are the most important ones.
“…we’re getting are a bit of natural cycle mixed with a century and a half of UHI.” SirCharge — 27 Oct 2011 @ 12:20 AM
What UHI caused the 6000 year old Larsen B & Wilkins ice shelves to collapse into slush?
Which ones have caused Greenland to lose 2000 gigatons of ice in 8 years?
Did the Dallas-Fort Worth UHI set 3,944,707 acres of Texas on fire?
Is it the Resolute, Cambridge Bay, and Murmansk (total area ~500 km2) UHIs melting the Arctic summer ice cover, from 7.4 million km2 in 1980–1985 to 4.8 million km2 in 2007–2011?
Which UHI is responsible for the record setting decade of drought in Australia?
“Fundamental Tax Reform Is Now Unstoppable” (by which they mean some version of a flat tax which will shift more of the tax burden to the middle class and poor, I’d say the claim is overly optimistic)
“Recreating A Real Gold Standard” – yeah, putting the US on the gold standard will help our global competitive position, uh-huh. It won’t happen until after we enter the next ice age, which, if we keep doing what we’re doing, might not happen at all …
“Flat Tax This: Regulations Are the Boot On Hiring’s Neck” – oddly these same regulations were in place during the Big Bubble …
Very few if any skeptics assert that the earth is still in the Little Ice Age. While the Little Ice Age raged from approximately 1300 to 1900 AD, it is pretty well accepted that the Little Ice Age did indeed end by approximately 1900 AD.
Watts has responded to the Nature editorial leader with the utterly bogus claim that
Nature printed this letter from Dr. Fred Singer, which I was also given a copy of via email:
Fred Singer said:
Dear Editors of Nature:
What a curious editorial [p. 428, Oct. 26] – and how revealing of yr bias!
“Results confirming climate change are welcome, even when released before peer review.”
You imply that contrary results are not welcomed by Nature. But this has been obvious for many years.
Why are you so jubilant about the findings of the Berkeley Climate Project that you can hardly contain yourself? What do you think they proved? They certainly added little to the ongoing debate on human causes of climate change.
The rest of Singer’s screed can be read on Watts’s site, but since the ‘Scientific Climate’ leader it replies to appears in Nature vol. 478, page 428 (27 October 2011), only superluminal typesetting could have seen it ‘printed’ in the same issue.
It didn’t happen, though Singer has violated both Nature’s strictures against science by press conference and the very principle he pretends to defend, by giving it to Watts to publicize.
“then go to Skeptical Science, where there are nearly 5,000 threads dealing with the skeptics memes you propagate.”
I’ve been there and don’t disagree with most of what is written. The site does, however, have a tendency to pretend that all matters have been settled and there is no room for discussion. An egregious example of this is the section on UHI. The author uses the paper Jones (2008) as his evidence that UHI is a nonfactor. The problem with this is that Jones 2008 actually found that UHI caused a 0.1 degree Celsius temperature increase per decade, and a 0.5 degree Celsius increase in total over the time period (see http://www.agu.org/pubs/crossref/2008/2008JD009916.shtml). For some reason Skeptical Science manages to omit this rather significant tidbit.
“Yes, all of this will take a fair bit of time, and may be painstaking and tedious. That’s why they call it “work”. And “work” is exactly what you need to do if you have any hope of making a credible case for your UHI claim.”
I would love to do so. Unfortunately I have a job and family. My question is, why would I need to go through that effort? Shouldn’t there be a host of papers written by climatologists on the topic which use very similar methods to what you just described? (Please don’t bring up Peterson’s papers as an example of this as he doesn’t know the difference between rural and urban). The only papers that I can find are written by skeptics. Feel free to point out an example of studies that I may have missed. It seems to me that climatology does not regulate itself very well.
You missed the point where I indicated that we are in a warming trend that is part of a natural cycle. Yes, we are breaking records in part because of local variability of the weather, in part because we just concluded a warming trend, and in part because we are experiencing a La Niña event / cool PDO as well as a change in the Arctic Oscillation and wind patterns. In addition, I do not doubt that there is some small segment of the warming related to AGW. What is not plausible is the notion of an enhanced AGW.
SirCharge, you have misread the Jones paper. Even if you read only the abstract you would see the clear statement that the urban-related warming figure – which is a result for China specifically, not globally – was about 0.1 degree per decade over 1951–2004, with “true climatic” warming accounting for 0.81 degree over that period.
SirCharge, You are ignoring that temperatures are corrected for UHI. You are also ignoring that rural stations show the same trend as urban–and that the greatest warming is observed in the far north–where there are no major urban areas.
Natural trend? Dude, do you really think natural fluctuations produce 40+ year warming trends? And if it is not due to fluctuations, then there is that pesky first law of thermo that asks where the energy comes from.
Do you know that “denial” is not a river in Africa?
“The problem with this is that Jones 2008 actually found that UHI caused a 0.1 degree celsius temperature increase per decade and a 0.5 degree celsius increase total over the time period (see http://www.agu.org/pubs/crossref/2008/2008JD009916.shtml) For some reason skeptical science manages to omit this rather significant tidbit.”
I’ll tell you what the problem is: the problem is you have distorted this cite horribly. This is with reference to CHINA, CHINA! China has experienced rapid urbanization and dramatic, to say the very least, economic growth in this period. You may even have read about this, and about how some scientists believe that the aerosols resulting from this explosive growth may have contributed to the relative slow-down of global warming over the last few years.
To coin a phrase – for some reason Sircharge manages to omit this rather significant tidbit…
“Unfortunately I have a job and family.” Gosh, that’s great. I’ve always wondered what it must be like to have those. Oh wait. I *have* got a job and a family. As I imagine many folks here have. You evince a very low level of research in your posts and job and family are just not good enough excuses.
“I indicated that we are in a warming trend that is part of a natural cycle…”
“we just concluded a warming trend” This is a laughable statement on so many levels.
I would love to do so. Unfortunately I have a job and family. My question is, why would I need to go through that effort? Shouldn’t there be a host of papers written by climatologists on the topic which use very similar methods to what you just described? (Please don’t bring up Peterson’s papers as an example of this as he doesn’t know the difference between rural and urban). The only papers that I can find are written by skeptics.
(The tradition of deniers producing excuses instead of results continues…)
OK, then where are the papers written by skeptics that directly compare rural vs. urban global-average temperature results? I laid out a procedure that would take no more than a few man-days of effort to produce preliminary rural vs. urban results. Why haven’t any skeptics done this? I mean, “skeptics” have been making claims about UHI for *years* — as for all the “skeptics” who boast of their scientific credentials, why haven’t any of them been able to expend a few man-*days* of such effort in all of the *years* that they have been complaining about UHI?
Ditto for Watts’ dropped stations claim — testing that claim would take no more than a few *man-hours* of effort once you have a basic gridding/averaging program up and running (NASA/GISS and Clear Climate Code already provide that code-base). Seriously — starting with what’s already available on the Internet, such a project would require that little effort. So where are the skeptical papers that show the impact of “dropped” stations on global-average temperature results? I mean, where are the papers that show “dropped stations included” vs. “dropped stations excluded” results side by side?
So SirCharge, we are looking at analysis projects that deniers should have tackled *years* ago to verify their claims. These projects should require man-*hours* to man-*days* to complete (given that most of the “heavy lifting” has already been done by NASA, Clear Climate Code, etc.). Yet in the *years* that deniers have been making these claims, not one of them has expended even this minimal amount of effort to test said claims.
SirCharge, let me remind you that it has been *skeptics* that have been making the above claims — it is up to said *skeptics* to publish papers demonstrating the validity of those claims. So in all these years, why hasn’t a single skeptic published any papers that do so?
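The “dropped stations” test described above can be sketched just as simply. Here is an illustrative Python version with made-up station records, each tagged with a hypothetical final year of reporting:

```python
# Sketch: does excluding stations that stopped reporting ("dropped" stations)
# change the result? Hypothetical records: (id, last_year, {year: anomaly}).
records = [
    ("A", 2010, {1990: -0.2, 2000: 0.1, 2010: 0.4}),
    ("B", 1995, {1990: -0.1, 1995: 0.0}),            # dropped in the mid-90s
    ("C", 2010, {1990: -0.3, 2000: 0.0, 2010: 0.3}),
    ("D", 1995, {1990: -0.2, 1995: -0.1}),           # also dropped
]

def yearly_mean(recs):
    """Average anomalies across whatever stations report in each year."""
    by_year = {}
    for _, _, series in recs:
        for year, anom in series.items():
            by_year.setdefault(year, []).append(anom)
    return {y: sum(v) / len(v) for y, v in sorted(by_year.items())}

all_stations = yearly_mean(records)                          # dropped included
survivors_only = yearly_mean([r for r in records if r[1] >= 2010])

print("all stations:   ", all_stations)
print("survivors only: ", survivors_only)
```

Run both versions, put the two series side by side, and you have the “dropped stations included vs. excluded” comparison the papers should contain – the real thing just needs the actual GHCN inventory in place of the toy list.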
Comment by Pete Dunkelberg — 28 Oct 2011 @ 10:31 AM
SirCharge wrote: “I indicated that we are in a warming trend that is part of a natural cycle”
That’s a falsehood no matter how many times you “indicate” it.
Comment by SecularAnimist — 28 Oct 2011 @ 10:45 AM
Thanks for this, which I’ve been waiting for.
Their central rationale appears to be that CRU were wrong to “hide” their data (a flimsy accusation) and hence the results had to be checked. This has already been done by the Muir Russell enquiry and of course the various other independently-derived data sets that are consistent with CRU. Same with the stuff about checking weather stations.
They have found that everyone else in fact did their jobs pretty well, and somehow found this surprising. Do they think we are all dolts?
What I find objectionable about the whole exercise is the accompanying innuendo. This stuff all has the nasty stench of the tobacco industry wanting the probability of causing cancer to be calculated to ever more precision, when they knew themselves that they were selling a flawed product. Muller et al. are guilty of buying the denial case time, rather than ultimately supporting it, and you have to ask: who gains from that? Rather than the moral high ground they are trying to occupy, their motives are suspect, and border on unethical. Like medical scientists who took money from tobacco companies, they do not deserve to be taken seriously.
One of the key political arguments by deniers is that scientists are in it for the money. So why was this relatively well-funded and ultimately useless study, funded in part by the denial industry, necessary? The real money should be going into the hard problems of renewable energy, not into confirming what we already know, and what we should be trying to prevent.
[Response: Well let's see if we can follow the shells here. The BEST team's research goal was to validate whether or not the data sets used in other global T analyses were biased. Having found that no, they are not, Curry has now decided to change the focus to the more or less completely unrelated question of whether there's been any upward trend over the last decade. Oh OK.--Jim]
Comment by Rattus Norvegicus — 29 Oct 2011 @ 11:08 PM
Well, that did not take long:
“Scientist who said climate change sceptics had been proved wrong accused of hiding truth by colleague”
Judith Curry is obviously angry that she didn’t get to read Muller’s press release before it went out, but I can’t work out what her point is regarding the analysis. She’s co-author of the papers, so does she reject her own papers here? The article is completely muddled and confused but it will again just leave the readers with the impression that climate science is completely dodgy.
Ah, just the sort of clarity and investigation I’ve come to expect from the Daily Fail. And just the sort of equivocation and vagueness, coupled with insinuation, we’ve all come to expect from Judy. Why does anyone take either The Fail or Judy seriously anymore?
Using Wood for Trees, I made (I think) a graph of the last 120 months.
So in this month-to-month scoring, has AGW resumed?
[Response: As we've said many times, short term trends are not useful for such statements. But one thing worth noting is that in the BEST error estimates, the errors for the last few months are an order of magnitude larger than for the other months and that the big dip right at the end is very likely not real. I imagine it is related to a very limited data coverage in their 2010 collation. - gavin]
[Further response: Tamino has observed the same thing. - gavin]
Tamino shows that the warming hasn’t stopped even by the Berkeley dataset. See here.
There are no grounds for claiming global warming has stopped.
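Gavin’s point about short-term trends is easy to demonstrate numerically: fit an OLS slope to ten years of synthetic monthly data (a fixed warming rate plus noise) and look at the size of the uncertainty. All the numbers here are illustrative, not from any real dataset:

```python
import random

# Illustrative only: 120 months of synthetic "temperature" = steady trend + noise.
random.seed(42)
TRUE_TREND = 0.017 / 12          # 0.017 deg C/yr, expressed per month
months = list(range(120))
temps = [TRUE_TREND * m + random.gauss(0, 0.15) for m in months]

def ols_slope_and_stderr(x, y):
    """Ordinary least squares slope and its standard error."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    stderr = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
    return slope, stderr

slope, se = ols_slope_and_stderr(months, temps)
print(f"fitted trend: {slope * 12:.3f} +/- {2 * se * 12:.3f} deg C/yr (2-sigma)")
```

With only 120 points the 2-sigma uncertainty is a large fraction of the trend itself – and real monthly anomalies are autocorrelated, which widens the interval further. That is why a decade is too short a window for “has warming stopped?” claims.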
Judith Curry is simply wasting people’s time and really should know better. Although the pattern w.r.t. her musings indicates she simply doesn’t like the idea of AGW and that is clouding her better judgement.
With the Daily Mail article (and Judith Curry’s contribution to it), Muller is learning what climate scientists have to put up with every day (I am not one myself, but I can only imagine). I wonder if it will make him more careful about his own criticisms in the future…
Even if it were natural variations, it would be incorrect to say “a natural cycle”. As you pointed out, for starters we have ENSO, PDO, and the Arctic Oscillation – three natural cycles – plural. If you want to describe them mathematically as oscillations, you have to include randomly variable frequency and amplitude modulations. Then you have to take into account the variable frequency and phase of the solar cycle, and the delay in response to any forcing, caused by the thermal mass and nonlinear response of the separate components of the cryosphere. The time constants of albedo feedback from melting North American snow cover are shorter than those of albedo feedback from melting Arctic sea ice, and the sea ice response is itself changing as its average thickness decreases and as the ratios of 1-, 2-, 3-, 4-, and 5-year ice areas change. Arctic sea ice is less than ten meters thick; the floating ice shelves are several tens to hundreds of meters thick; the Canadian Arctic Alfred Ernest, Milne, Ward Hunt, and Smith ice shelves are thinner than the Antarctic ice shelves, and the Ayles and Markham ice shelves broke up in 2005 and 2008. Even in the Antarctic, the much larger Larsen A and B shelves, the Wordie, Jones, and Muller shelves, and a significant part of the Wilkins ice shelf have recently broken up in highly nonlinear “tipping point” events. Larsen B was ~220 meters thick, and an area about the size of Rhode Island broke into small fragments in less than thirty days. The Greenland and Antarctic ice sheets have much larger masses and consequently much longer time constants, but there is already observational evidence of nonlinear response. Mass loss from glaciers determined by the WGMS is also accelerating – http://www.wgms.ch/mbb/sum09.html
In “Cyclic Climate Changes and Fish Productivity”, L.B. Klyashtorin and A.A. Lyubushin found that spectra of ice cores and tree rings showed peaks at 25.6, 32, 32.5, 38.6, 53.9, 55.3, 60.2, and 75.8 year periods. They go on to say “Fluctuation spectra of California sardine and anchovy populations during the recent 1700 years (Fig. 1.15) demonstrate well defined predominant peaks: 57 and 76 years for sardine and 57, 72, and 99 year for anchovy. This correlates well with the predominant spectra of climatic fluctuations during the last 1500 years according to ice core samples and tree growth rings.” Plus there are DeVries–Suess and Gleissberg cycles with periods near 210 and 87 years, and a 725 yr cycle in the climate of central Africa during the late Holocene (J.M. Russell, T.C. Johnson and M.R. Talbot, Geology, v. 31 no. 8, p. 677–680). (BTW, the last peak in that cycle was ca. 400 years ago, and it is therefore out of phase with the current warming by 325 years.)
In “Possible solar origin of the 1,470-year glacial climate cycle demonstrated in a coupled model”, Holger Braun, Marcus Christl, Stefan Rahmstorf, Andrey Ganopolski, Augusto Mangini, Claudia Kubatzki, Kurt Roth and Bernd Kromer postulated that the combination of the 210-year DeVries–Suess and 87-year Gleissberg cycles could tip nonlinear freshwater influxes and subsequent changes in the thermohaline circulation, triggering Dansgaard–Oeschger events.
If we take an epicyclic periodic leap of faith and assume the hypothesis that this 1470 year cycle is a significant driver of present day warming, and align it with the Medieval Warm Period, we get temperatures rising from ~800 BC to a warm peak at ~468 BC – a little early for the Roman Warm Period. Temperatures then fall through 100 BC to a minimum at ~267 AD (when the associated bad weather starts the tribal migrations that soon lead to the fall of the Roman Empire), rising through 635 AD to the MWP peak at 1002 AD. Temperatures then head back down through 1370 AD to the LIA minimum around 1737; then temperatures begin rising again. What is the probability that all the other periodic natural variations observed just happen to be in the correct phase and amplitude to coincidentally match anthropogenic carbon emissions?
And of course if we extend the 1470 year cycle forward, temperatures will be rising through 2105 to a peak around 2473. Adding the 1.23 degree OLS rise seen in the BEST record since 1800 (ignoring the accelerating slope of the record in recent decades) to an expected further 2.46 degrees by the peak gives 3.69 degrees – “catastrophic natural warming?”
“Periodic” is often used to describe systems that have two reactive (where the response is out of phase with the forcing) components: inductor and capacitor in electronics, mass and spring in mechanical clocks, chemical bond and atomic mass in greenhouse gases, gas density and compressibility in sound. In climate, the apparent periodicities arise from noisy forcings (turbulence, high and low pressure vortices, ocean surface water sloshing back and forth or surface currents and Ekman transport, convection, etc.) being filtered into bands by lossy mechanisms (wind friction and turbulence, thermal conductivity, AMO circulating currents, isotropic scattering and emission).
As the wikipedia entry on Bond events says, “…the 1,500-year cycle displays nonlinear behavior and stochastic resonance.”
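For readers wondering how spectral peaks like those quoted above are identified in the first place, here is a toy periodogram. The series is synthetic, with 57- and 76-year periods deliberately baked in (echoing the sardine/anchovy figures purely as an illustration – this is not ice-core or fish-population data):

```python
import math
import cmath

# Synthetic annual series: two known cycles (57 yr and 76 yr periods).
# N is an integer multiple of both periods, so the peaks land exactly on DFT bins.
N = 456
series = [
    2.0 * math.cos(2 * math.pi * t / 57) + 1.0 * math.cos(2 * math.pi * t / 76)
    for t in range(N)
]

def periodogram(x):
    """Naive DFT power spectrum; fine for a toy example of this size."""
    n = len(x)
    out = []
    for k in range(1, n // 2):
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        out.append((n / k, abs(s) ** 2 / n))   # (period in "years", power)
    return out

# Report the two strongest peaks.
peaks = sorted(periodogram(series), key=lambda p: -p[1])[:2]
for period, pwr in peaks:
    print(f"peak near {period:.1f} yr")
```

The catch, as the Bond-event quote suggests, is that real climate series are noisy, nonstationary, and short relative to the periods claimed, so peaks that fall out of a periodogram this cleanly are the exception rather than the rule.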
I’ve heard on BBC Radio 4 that Judith Curry is now giving a denialist spin to the outcome of the BEST study! Apparently, there’s been no warming since the late 1990s. Muller was cited as agreeing, but saying this “may or may not be statistically significant”. Now the broadcasters may have garbled the message, but this does sound like complete piffle based on looking at a period too short for the trend to reach significance – and simply (like the AMO stuff?) an attempt to distract attention from the study’s remarkably close confirmation of others’ results.
From Judith’s Curry’s recent discussion with Richard Muller:
With regards to the BEST data itself and what it shows: He showed me an interesting graph that is updated from the Rohde article, whereby the BEST data shows good agreement with the GISS data for the recent part of the record. Apparently the original discrepancy was associated with the definition of land; this was sorted out, and when they compared apples to apples, the agreement is pretty good. This leaves CRU as an outlier.
Speaking of CRU, Muller related an interesting anecdote about Phil Jones that was apparently related to him by a reporter. When Jones was asked to comment on the BEST papers, he said he had no comment until after the papers were published. Maybe Muller was correct in worrying about making sure the IPCC pays attention. … – Judith Curry
To what about CRU is the IPCC supposed to pay attention?
There was nothing new or surprising in Richard Muller’s work. His main concern was that there was not enough skepticism prior to ‘his’ work. That is a bold claim, and I am reasonably sure that most of the scientists who have been working in the field would argue with him on that point.
Essentially he was saying that because ‘he’ was not convinced until ‘he’ examined the data that ‘he’ would not believe the other scientists.
There is no revelation in his work though. Global warming was predicted as early as 1896 by Svante Arrhenius and further confirmed through the work of many scientists over decades of research including Callendar, Plass, Revelle, Keeling, Hansen, and in recent decades thousands of scientists and hundreds of research universities and institutions.
Muller only confirmed that the world is warming, though; he did not delve into the human-cause issue. He may still be skeptical, but he did say, “Greenhouse gases could have a disastrous impact on the world.” Yet he contends that threat is not as proven as the Nobel Prize-winning Intergovernmental Panel on Climate Change says it is.
What this indicates is that he does not trust the field of science until he personally verifies it. It’s odd, though. He knows the scientific method is being used in climate science. It seems he merely does not trust the work of others. My question then to Richard Muller is: does he fly in airplanes, and if he does, does he ask the pilot to let him fly the plane so that he knows everything is being done correctly?
His statements still lean toward arrogance and his skepticism seems inappropriate as it seems overly skeptical of the very foundations of science itself.
It seems strange that the work of so many thousands of scientists could not be trusted until Richard Muller put his stamp of approval on it. This fact remains: the basic physics of the greenhouse effect has been examined since 1824, beginning with Joseph Fourier; the hypothesis that we should warm with continued CO2 emissions has been around for more than a hundred years; the general confirmation and realization that this would develop into a global problem have been germinating for over 50 years; and for the last three decades it has been well known that the warming we will experience will affect our climate and agricultural systems.
Muller has added nothing new to our understanding other than to inform us that there are still some stubborn scientists who don’t believe anything until they see it with their own eyes. I conclude with this thought: there is healthy skepticism and there is arrogant skepticism. The evidence suggests that Richard Muller falls on the arrogant-skepticism side of the equation.
In summary, if nothing scientific can be trusted until Richard Muller tests it, then he’s going to be a very busy man.
On a quick look, the most significant contribution here looks to be the data-handling methods of Rohde et al. alluded to in Eric’s post. In particular, the spatial averaging approach adopted might reasonably be expected to deliver a better treatment of sparse early data, and so perhaps a better – and rather cooler – estimate of 19th-century temperatures. If so, the most significant Berkeley contribution might actually be to substantially increase the best estimate of warming since pre-industrial times.
So I checked the BEST data.txt to see why these month data had such large error bars, and were so out of line. It turns out that all the data they have for those months is from 47 Antarctic stations. By contrast, in March 2010 they have 14488 stations.
I.e., Gavin was right in his #131 reply: “I imagine it is related to a very limited data coverage in their 2010 collation.”
Everybody, please knock it off with the off-topic pontifications about crop yields.
Too bad. There was some real gold there. I believe it started with a comment noting the BEST study showing a 2°C warming since the 1810s.
[Response: That isn't a very sensible framing. The spatial coverage in the 1810s is not sufficient to give a good global coverage, so any trends from then are highly uncertain. Secondly, the 1810s were affected greatly by the eruption of Tambora in 1815 and another big eruption in 1809 - these caused widespread crop failures (1816 was the 'year without a summer' - Henry Stommel wrote a great book on this). - gavin]
[Response: People are more than welcome to discuss the effect of climate on crop yields in the new open thread as long as they actually stick to the topic, provide legitimate support for their statements that others can check, and steer clear of insults. Believe me, I'm as interested in the topic as anyone.--Jim]
Several commenters on various blogs seem to agree the data coverage on the last two months is a problem.
My question is, why did BEST include them? Am I wrong in believing additional data will melt that little icicle hanging off the end of their current graph?
If there can be a drill-down discussion on crop yields, why can’t there be a discussion about Muller’s characterization of CRU as being an outlier. I’ve seen a lot of complaining about HadCRUT running cold. He’s told Judy that, apples to apples, GisTemp is closest to BEST. I remember in Kyle Swanson’s article on “the shift” he made a comment about the shift not being very noticeable on GisTemp, and it appears to have now disappeared. So whatever the slight differences are between the two series, it appears they can make a substantive difference in the science.
It seems whenever I see an attempt to claim AGW has stopped, it’s bolstered with a WfT graph of HadCRUT data that shows a downward trend between arbitrary dates in the last 11 years. I switch their graphs from HadCRUT to GisTemp, and the downward trend usually changes to at least a flat trend, but usually an upward trend.
Dr. Judith Curry continues to misinterpret long-term climate trends by focusing on time periods that are too short to be relevant. Many highly competent scientists, apparently much more qualified than she is in this respect, have explained to her how to separate short-term natural variation from the human signal driven by increased radiative forcing. The key to relevant context for examining the data and trends is time: generally you need at least three decades of change, combined with attribution of both human and natural factors, in order to separate the significance of the human signal from natural variation.
She apparently continues to ignore this reality and points to data segments that are too short to separate natural variation from the human-influenced trend signal. Why would a scientist continue to ignore these well-known realities? Let us consider the possibilities:
* Curry’s view is subject to confirmation bias
* Possibly there is some as yet unseen special interest influence
* Curry has become a victim of her own tribe mentality problem/hypothesis
* Dr. Curry is not sufficiently knowledgeable in the field of climate science to express a competently informed view
It is possible, if not likely, that one or more of these factors is in play in Dr. Curry's continued focus on irrelevant time periods. Either way, she exemplifies inadequate interpretation of the available evidence.
From a fairly broad perspective, isn't it of some serious concern that in the US about 33% of the temperature stations show cooling, when they are often close to ones that show warming? Working in the commercial world, I would not like to risk my clients' money on a decision based on such records, unless there was good reason to dismiss one or other of the subsets. It seems to me that answering this question ought to be a priority, instead of taking an "average" and treating it as a true representation of fact. Positive and negative anomalies in the same area cannot both be right unless there is good reason, in which case you ditch the one caused by other factors and the discrepancy disappears. I would like to know why this important point seems to be largely ignored in favour of an "average" that clearly can't be right.
Suppose you have a class with 30 school children. One day you measure the height of each. Two weeks later you measure them again. You find that 1/3 of the kids showed a lower height while 2/3 showed greater height.
You suspect that the large variation in height differences is due to the kids slouching, wearing different shoes, or having had a different breakfast the morning before the measurements are taken. Perhaps you’re right.
But you also notice that not only is the average greater, the difference is statistically significant. The conclusion: these children are still growing. It’s the average that’s meaningful, because the process of averaging reduces the inherently large uncertainty level.
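The classroom argument can be sketched numerically. This is a toy simulation with assumed, illustrative numbers (0.5 cm of true growth over two weeks, 1 cm of measurement noise), not data from any real study:

```python
import random
import statistics

random.seed(42)

# Assumed illustrative numbers: each child grows ~0.5 cm in two weeks,
# but any single measurement carries ~1 cm of noise (slouching, shoes, ...).
true_growth = 0.5   # cm
noise_sd = 1.0      # cm

# Measured height difference for each of 30 children.
diffs = [true_growth + random.gauss(0, noise_sd) for _ in range(30)]

shrinkers = sum(1 for d in diffs if d < 0)          # kids who appear to "shrink"
mean_diff = statistics.mean(diffs)
sem = statistics.stdev(diffs) / len(diffs) ** 0.5   # standard error of the mean

print(f"{shrinkers}/30 apparent shrinkers; mean = {mean_diff:.2f} +/- {sem:.2f} cm")
```

Several children come out with an apparent negative growth, yet the standard error of the class mean is several times smaller than the single-measurement noise, which is the sense in which averaging reduces the inherently large uncertainty.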
> Working in the commercial world, I would not like to risk my client money
What’s the risk if the science is correct?
How do you balance risk that gets worse over time, against cost now?
Do you discount the future cost the further in the future it will happen?
Yes, I'm pretty conversant with the concept of averages and gridding. My point is that if the stations are sampling the same well-mixed atmosphere, then they ought to at least show the same "trends". Your point about averages of the sample is taken, and I would expect it to apply to the temperatures themselves, but not to the trend, even if the stations use different instruments and have different local characteristics. The trends ought to be at least similar, certainly not opposite in polarity.
[Response: Please provide some specifics of locations and data]
Terry’s original:”From a fairly broad perspective, isnt the fact that in the US, about 33% of the temperature stations show cooling when they are often close to ones that show warming, of some serious concern. Working in the commercial world, I would not like to risk my client money if a decision were to be based on such records, unless there was good reason to dismiss one or other of the subsets.”
If I were asked to invest on such sketchy information, no way would I do so. How about 12 stations in the same region of the US, where a third (4) show an opposite long-term trend (say, comparing the 1960s decade average with the 2000s decade average) from the other 8?
#160, #161, #166 re: Terry’s source for “in the US, about 33% of the temperature stations show cooling”
– is obviously the BEST papers. The Muller et al. BEST paper on station quality in the U.S. (PDF) says:
One immediate observation is that for all categories, about 1/3 of the sites have negative temperature trends, i.e. cooling over the duration of their record. The width of the histograms is due to local fluctuations (weather), random measurement error, and microclimate effects. A similar phenomenon was noted for all U.S. sites with records longer than 70 years in the study by Wickham et al. (2011) [i.e., the UHI paper -- CM]. We have also verified that about 1/3 of the world sites collected by the Berkeley Earth team also have negative slope.
On Hank’s question “over what length of time”, the Wickham paper (PDF) is the more helpful.
…67% of the slopes are positive, i.e. there are about twice as many warming stations as cooling stations. The dispersion is larger in the records of short duration, but even in the stations with records longer than 30 years, 23% have negative trends.
From the following histogram and discussion, it’s clear that this includes stations with records of less than 10 years, and the trends of these stations are, unsurprisingly, all over the place. But they also have a map showing warming and cooling stations of over 70 years duration, and again the ratio is said to be 2:1.
Your claim about trends being different from averages — in the statistical sense being considered — is simply mistaken.
The trends (1/3 negative, 2/3 positive) show large variation due to both measurement error and natural fluctuations. Because of this, all those trend estimates have uncertainty levels. Because the noise level at a single location is so large, the uncertainty in the trend estimate at a single location will be large.
Your argument comes down to “there’s noise in the data.” We knew that. Without averaging, the noise is big enough to cause you concern. With averaging, we achieve statistical significance. So to answer your original question, the variation in the data (and in local trends) is not ignored, and your claim that the average “cant be right” is false.
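A toy illustration of the point about trend uncertainty at single stations, with assumed numbers (0.02 °C/yr of underlying warming at every station, 0.4 °C of interannual noise; not taken from BEST or any real network): even when every station warms identically, many short records come out with a negative fitted trend, and the fraction shrinks as records lengthen.

```python
import random

random.seed(2)

def trend(values):
    """Ordinary least-squares slope of a yearly series."""
    n = len(values)
    xm = (n - 1) / 2
    ym = sum(values) / n
    num = sum((i - xm) * (v - ym) for i, v in enumerate(values))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

def frac_cooling(n_stations, years, warming=0.02, noise=0.4):
    """Fraction of noisy stations whose fitted trend comes out negative,
    even though every station shares the same underlying warming."""
    cooling = 0
    for _ in range(n_stations):
        series = [warming * y + random.gauss(0, noise) for y in range(years)]
        if trend(series) < 0:
            cooling += 1
    return cooling / n_stations

short = frac_cooling(2000, years=15)   # short records: many apparent coolers
long_ = frac_cooling(2000, years=70)   # long records: far fewer
```

Under these assumed numbers roughly a fifth of the 15-year records show "cooling" purely from noise, while 70-year records almost never do, which matches the qualitative pattern Wickham et al. report for short versus long station records.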
Yes, I saw cases in the USHCN in which a group of stations showed lower averages near stations that were trending upward; however, not 33%, though there could have been a couple of instances.
Overall, it was due to local geographic topography (sampling resolution). If you were to mark those sites and perform a wind-rose, dewpoint and RH analysis, the localization was pretty clear. As others have said, averaging "hides" outliers from both sides of the mean.
But going back to your classroom analogy: if it showed that a subset of the children were shrinking, would that not raise an alarm that something was wrong, since they would all be expected to gain height over time? If the shrinking was statistically significant and not considered "noise", you would conclude that something was wrong with some of the data. Going back to the temperature record, the same applies: if the negative trends are significant, then either those stations are not measuring the same well-mixed boundary layer as the positive ones (due to other influences), or the reverse is true. One of the subsets therefore does not belong in the average.
[Response: Until you provide the specifics of exactly the data you are talking about, this generalized discussion is pointless.--Jim]
Terry: Let’s use the march to summer as an example. If I compare April 3rd to April 2nd around the Northern Hemisphere, some subset of stations will have become warmer, and some will have become colder. The number of warming stations will likely outweigh the number of cooling stations. If I look at a longer time period (April 30th compared to April 2nd, for example), the proportion of warming stations will increase.
Do you mean to say that because some stations on April 3rd have cooled since April 2nd, that you would doubt that the Northern Hemisphere was warming?
“With averaging, we achieve statistical significance. So to answer your original question, the variation in the data (and in local trends) is not ignored, and your claim that the average “cant be right” is false.”
Question from a novice: how is this spacial noise – or poor spatial correlation – taken into account when estimating the uncertainty on the average temperature?
FWIW, I suspect that those who say the least, prior to a determination of whether the method used to generate the temperature series is a positive contribution, will serve themselves well.
As it stands, the key paper could be a train wreck, if it is the other papers will be but more wreckage.
If there be serious flaws, the attention that the authors have garnered for it in the media will have done much to muddy the waters.
Supporting evidence that is both flawed and heralded may leave a taint on that which it supports.
I can have no real idea as to how this paper will fare, and would be delighted for it to be a useful contribution, but as of now it gives me indigestion, a nasty feeling that it will not stand up to a lot of scrutiny. If that be the case, the fact that it largely confirms what was already known is, I think, unhelpful. I do wish that people would be careful with this stuff, for it is important; part of that care might well extend to passing review prior to publication, press release, and the fuelling of debate.
This has many ingredients, that could lead many to be seen to have played the impromptu ass rather than keep their counsel.
Comment by Alexander Harvey — 2 Nov 2011 @ 7:30 PM
#176 Pete Dunkelberg: Muller may be talking but he’s not making sense. He more or less accepts the problem but says we must do nothing about it because if the US moves and neither India nor China follow, we don’t correct the problem. What was all that in the past about the holdouts from Kyoto, the USA and Australia, being the excuse for the rest of the world to do nothing? Does the word “leadership” mean nothing? His contribution is looking increasingly confused and purposeless. He’s now moved to justifying criticising climate scientists for endorsing Al Gore. This is the same person who endorses Anthony Watts, who has published way more confusing and wrong information than Gore.
#162 JCH: there’s a discussion going on here, so it’s better to post the relevant facts here. We can’t post comments into a PDF anyway :( In any case Fig 4 on p 10 doesn’t have the resolution to support the claims made, nor does it contain the local climate factors to judge whether a red and a blue dot are in comparable areas (different sides of a hill for example).
Terry (#165, #172): I understand your point and I think it is a fair question. But: You say “well-mixed atmosphere”, but exactly that assumption is the problem. The boundary layer is not well-mixed enough to assume uniform temperature over short distances. If you go around with a thermometer in your neighbourhood you’ll notice quite large differences. A site’s microclimate is influenced by vegetation, exposure to wind and other such small-scale factors.
So, to reply to your original observation: It may well happen that one station gets warmer and one a few kilometers away gets colder, for example because the land use around it changes (from agriculture to growing forests or similar, but often it’s perhaps not even an obvious change). This is why we actually can expect the trends from individual stations to vary a bit, and this is especially true for stations with a short timeseries. So the fact that a proportion of the stations don’t show warming does not in itself mean much. We have to look at the overall picture.
Yes, I completely agree with you that station-specific environments are likely the cause. And that is exactly my point: the positive-anomaly sites are not measuring the same thing as the negative ones. And while my background is physics and not statistics, I cannot see how it is valid to lump them together in the belief that they will all average out to give a meaningful representation of the "true" atmospheric trends, which is exactly what is currently done.
Terry, until you provide specifics, then neither we nor you have any idea what you are talking about.
However, keep in mind that we are talking about long time series of data for a number of stations that oversamples the planet by roughly a factor of 4. A station that departs systematically from its past behavior or the behavior of surrounding stations will arouse suspicion, and it will generally be easy to correct the data for the systematic errors.
The prospect of dealing with 100 year time series of a system oversampled by a factor of 4x is enough to make me salivate. I have to make decisions regarding billion dollar satellites with far less data.
Forgive my ignorance once again, but I am puzzled by this.
“However, keep in mind that we are talking about long time series of data for a number of stations that oversamples the planet by roughly a factor of 4″
I believe this is somehow related to this paragraph from the 1st BEST paper (page 23 in the 11.Spatial Uncertainty section)
“Ideally F(x, tj) would be identically 1 during the target interval 1960 ≤ tj ≤ 2000 used as a calibration standard, which would imply that τ(tj, tj) = 0, via equation. However, in practice these late-time fields are only 90–98% complete. As a result, σspatial(tj) computed via this process will tend to slightly underestimate the uncertainty at late times.”
People seem to be sure the spatial sampling is more than enough (Ray Ladbury is even saying 4 times over-sampled) to capture accurately the average temperature, at least for the last 40 years.
How do we know that? Is there a study out there that I missed (I am thinking of something about the temperature spatial autocorrelation for different climate patterns)? Or something I don't understand? Thank you for your help.
Romain, for one thing you can see it in the record. Most nearby stations march in lockstep. Remember, what we are interested in here are global, longterm trends. That’s a very forgiving problem. Also, think about the sorts of changes you are likely to see as a result of introducing some source of systematic error into the data–it’s quite unlikely to look like the signal you are looking for.
All of this was discussed during Tony "Micro" Watts' station project. I will say now, as I said then: you have to understand the data, the errors and the processing to ascertain whether an error at a station will have any effect at all on the product.
The 4-times sampling figure can likely trace its beginnings to the work of Bell Labs scientists around 1960-64. The issue was how to digitally sample changes in an analog signal. It was found that, for a sinusoidal signal, 4 samples per cycle of the highest signal frequency should be enough to replicate the analog signal. Strictly, this applies to signals built from sine waves (though mixing signals of different frequencies, i.e. heterodyning, can create non-sine-wave results).
Consider that if the daily temperature change were a sinusoidal signal, then, sampled 4 times daily, you should be able to replicate the change in temperature for that day. Similarly, if you were to sample 4 sites simultaneously, replicating that locality's temperature for the day should be possible with a high degree of accuracy.
(Note: non-sine-wave signals can also be replicated, though with varying resolution, based on the measure of the phase angle or rate of change. You then determine the acceptable level of reproduction accuracy and select the sampling frequency appropriate for the signal you are representing. In the mid-to-late '70s this technology was extended to sine-wave signals, and standard digital sampling was changed to 2 times the signal rate.)
As you are likely aware, because temperature does not vary evenly about a null value, it is not truly sinusoidal. However, if you track the phase angle or rate of change for a given time period, you should be able to replicate the temperature record accurately with just two daily samples, though this technique may only be applied to a single locality if you also record the daily differences, or slope of change, between sampling sites (to my knowledge this method is not used by any historic record system).
An extreme example is the pulse width of a square wave: the accuracy of its reproduction depends not only on the sample rate but also on the switching speed of the device creating the signal; at high sample rates, reproduction accuracy is limited by the slew rate of the device. (Computer modeling of patterns has a similar issue, in that the rate of change that can be represented is limited by the rate at which the processor can calculate change.)
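For what it's worth, the classical Nyquist-Shannon result requires sampling at more than twice the highest frequency present; 4 samples per cycle simply gives a comfortable margin. A minimal sketch, with assumed toy numbers (a 7.5-unit sinusoid of known frequency, not any real station schedule), of recovering amplitude and phase from 4 samples per cycle by least squares:

```python
import math

# Assumed toy diurnal signal: amplitude 7.5, phase 1.2 rad, 1 cycle per day.
freq = 1.0                       # cycles per day
amp, phase = 7.5, 1.2
w = 2 * math.pi * freq

times = [i / 4.0 for i in range(8)]                  # 4 samples/cycle, 2 cycles
samples = [amp * math.sin(w * t + phase) for t in times]

# Least-squares fit of y = A*cos(wt) + B*sin(wt) via the normal equations.
cc = sum(math.cos(w * t) ** 2 for t in times)
ss = sum(math.sin(w * t) ** 2 for t in times)
cs = sum(math.sin(w * t) * math.cos(w * t) for t in times)
yc = sum(y * math.cos(w * t) for y, t in zip(samples, times))
ys = sum(y * math.sin(w * t) for y, t in zip(samples, times))
det = cc * ss - cs * cs
A = (ss * yc - cs * ys) / det
B = (cc * ys - cs * yc) / det

amp_est = math.hypot(A, B)       # recovered amplitude
phase_est = math.atan2(A, B)     # recovered phase
```

With a pure sinusoid of known frequency, the fit recovers the amplitude and phase essentially exactly; real daily temperature has harmonics and noise, which is why the margin above the 2x minimum matters.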
Romain, when I was playing around with the GHCN data (with my little “hand-rolled” program — see my previous posts here), I tried a little informal experiment to test the “redundancy” of the GHCN surface temperature network.
The plot shows an ensemble of global-average temperature time-series scans, where each temperature time-series was computed from a random “1 out of 10″ selection of GHCN land temperature stations. For each run, a random-number generator was used to select the stations with a 1 out of 10 probability for each random-number “trial”. Basically, I recomputed global-average temperature results a bunch of times while throwing out 90 percent of the GHCN data at random each time. I plotted the first 10 random runs generated by my program so that nobody could accuse me of “cherry picking” an individual good result.
The GHCN stations were selected completely randomly, with no attempt to maintain uniform global coverage.
To provide a basis for comparison, the official NASA/GISS land-station results are plotted along with my random “1 out of 10 station” results. For clarity, the NASA/GISS temperatures are plotted the foreground (red) scan, as indicated by the legend. (The legend labels for the other scans are just cryptic data labels generated by the program that I wrote — for those who are curious, they contain information like the data file name and run number plus a bit of other info about the processing options.)
What you can clearly see is that all of the "1 out of 10" results agree reasonably well with the NASA results. If anything, they tend to show a bit more warming than the NASA results do, probably because throwing out so much data tends to increase the overweighting of the Northern Hemisphere data (remember that there are more temperature stations in the NH than in the SH, and that the NH has been warming faster than the SH). Anyway, so much for the idea of NASA "cooking the books" to exaggerate the global-warming trend.
The take-home message here? The global surface temperature record really is quite redundant and robust.
Note: I was able to do all of the above with nothing more than publicly available raw data, documentation, and free, open-source software tools. Didn't have to file so much as a single FOI request.
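A toy version of this subsampling experiment, with assumed synthetic numbers (a common 0.02 °C/yr signal shared by 1000 stations plus 0.5 °C of independent station noise; not the actual GHCN processing), sketching why a random 10% of an oversampled network still reproduces the full-network average:

```python
import random

random.seed(0)

n_stations, n_years = 1000, 50
signal = 0.02           # assumed common warming, deg C / yr
local_noise = 0.5       # assumed independent station-level noise, deg C

# Synthetic anomalies: shared signal + independent local noise per station/year.
series = [[signal * yr + random.gauss(0, local_noise) for yr in range(n_years)]
          for _ in range(n_stations)]

def network_mean(stations, year):
    return sum(s[year] for s in stations) / len(stations)

full = [network_mean(series, yr) for yr in range(n_years)]

# Keep each station with probability 1/10, as in the "1 out of 10" runs.
subsample = [s for s in series if random.random() < 0.1]
sub = [network_mean(subsample, yr) for yr in range(n_years)]

max_gap = max(abs(f - s) for f, s in zip(full, sub))
```

The worst-year gap between the 10% average and the full average stays well below the single-station noise level, which is the redundancy being described.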
Berkeley monthlies over the last 4 decades appear to be nearly twice as noisy as the next highest estimate. I suggest that the high Berkeley inter-month noise is unlikely to be physical, because it is inconsistent with six other reputable estimates encompassing three largely independent approaches. Presumably it is an artifact of their data processing method.
At least at the monthly scale, the Berkeley results should be considered unreliable, in my view.
The “widest possible coverage” is provided by the satellite products, or, less satisfactorily, by the reanalysis products. Those show much lower variability. Whatever they have done appears to have introduced spurious inter-month noise.
Glenn, if you don’t like “widest” try different words: their stated approach was to find, collect, and come up with statistics to combine in usable fashion all of the various available data collections.
You’re complaining about what was described in the main post above: “efforts made by Robert Rohde on the dataset agglomeration and the statistical approach” — because you don’t like the result.
You could instead perhaps write up what you understand about the result? Perhaps you have a contribution to make here, rather than a complaint?
inter-month noise?!! c*** on a crutch! We’ve seen a good bit of real inter-month weather noise lately and it is likely to break the world’s banks – what’s left after the banksters have done their hit and run stuff.
pardon my “french” but even a sense of humor is not enough to get through this nonsense sometimes.
Yeah, I know, I’m talking about reality and its timeframe doesn’t work with physics and statistics properly, but that’s what we live with – noisy or not, it’s physically challenging. I’ll get over myself, but not just yet.
Comment by Susan Anderson — 11 Nov 2011 @ 11:23 AM
ldavidcooke, the (over)sampling and uncertainty issues I was talking about are SPATIAL issues. Your answer is about TIME series.
Ray Ladbury, caerbannog, KR, thank you for your answers. I still have to digest it but meanwhile:
“Romain, for one thing you can see it in the record. Most nearby stations march in lockstep.”
Well this is the thing. And this is what I had in mind from previous discussions: a fairly large spatial correlation, like the 800-1200 km that can be found in the Hansen 1987 paper.
But then there is this figure 4, page 10, of the UHI BEST paper. You can see very close stations (in California for example) showing different trends… so clearly in California the spatial correlation is poor, no? I just have some difficulty reconciling everything… a hint?
> clearly in California, the spatial correlation is poor, no?
No. You know those California winery ads? They’re all about the local differences — microclimate differences — between areas. You can go a mile and be in a very different microclimate.
You’re looking at a picture of the entire USA with little blue and red dots.
Are those unadjusted temperatures, do you know?
Because that makes quite a difference. Here’s an example of the difference and how it can be spun if it’s not explained:
“Remember, what matters are the trends, and those tend to track”
Not according to BEST results…
In Northern California, roughly half of the stations show cooling and half warming, and that over 70 years of records. I agree that in California (and probably the US) we are somewhat 'oversampling', so that the average will tell us something representative (and that it is warming). But what this tells us, I think, is that the temperature autocorrelation falls apart at a much shorter distance than previously thought. No?
So I still don’t get the oversampling by a factor of 4 when talking about world land surface coverage…
And the uncertainty associated…
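One way to reconcile the two observations is a toy correlation model (all numbers here are assumptions for illustration): large-scale anomalies correlate over ~1000 km, in the spirit of Hansen's 1987 analysis, but each station also carries uncorrelated microclimate noise, a "nugget" that caps how similar even neighbouring stations can look.

```python
import math

# Assumed toy model: regional anomaly correlation decays as exp(-d/L),
# with L ~ 1000 km, plus uncorrelated microclimate noise at each station.
L = 1000.0          # km, assumed decorrelation length of regional anomalies
regional_var = 1.0  # variance of the shared regional anomaly (arbitrary units)
micro_var = 0.5     # assumed microclimate/instrument noise variance per station

def station_correlation(d_km):
    """Correlation between two stations d_km apart under the toy model."""
    shared = regional_var * math.exp(-d_km / L)
    return shared / (regional_var + micro_var)

near = station_correlation(10)     # neighbours: well below 1 due to the nugget
far = station_correlation(1000)    # distant pair: still substantially correlated
```

Under these assumed numbers, stations 10 km apart correlate at only about 0.66 while stations 1000 km apart still correlate at about 0.25: nearby stations can disagree (and even show opposite short-record trends) without the large-scale field being poorly sampled.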
‘Are those unadjusted temperatures, do you know?’
To be honest, it is not clear to me what BEST did with the data and what they are showing in this figure…Are you saying these results should not be trusted?
EMF is three-dimensional, whether you are sampling a series of changes past one point or a series of points for one change in a varying property. The rules are the same: if you intend to increase accuracy based on spatial conditions, you have to ensure the conditions are also similar at each site. Therein lies the answer to your original question; but others here have explained this better elsewhere in the archives, and you might consider researching the initial GAT/GHCN modeling description discussions from a few years back.
Thanks for your constructive answer, I will search in the archives for better explanation. Because I still don’t get your point. ;-)
“You post on what you believe it might be”
Spot on. I read these BEST papers, but just once, so not enough to understand every single point. That is why I came here, explaining my problem/doubt/skepticism and asking questions, because I have received helpful clarifications and corrections of my mistakes here in the past.
The other comments I left on the other sites confirm, I think, that the uncertainties are underestimated. I am not claiming I hold the truth. But I am still struggling to understand the 4x over-sampling statement. So besides the micro smear campaign on my name and the 'you can look it up yourself' answers, do you have anything more directly useful on this point?
To make it simpler, imagine two WX stations 500 km apart: one at sea level, 200 km off the coast near LAX; the other 300 km east, in the middle of Death Valley.
If an anticyclone (high pressure in the NH) sat overhead at the two stations, at nearly the same latitude, you would get very different temperatures. At the surface you would have nearly stagnant wind and low water vapor at the Death Valley station, while at the ocean station you might get a strong N-NW wind flow and high water vapor.
The conditions are such that one sample could rudely offset the other. However, raise both stations 1 km and you may find the difference between the two is slight.
Hence, to replicate the regional gridded value, the raw temperatures could be adjusted to correct for the local characteristics (with a low level of confidence, due to the number of degrees of freedom), or you could simply note the degree of change. By tracking the average and the change over time you can see your trend value.
So what happens if local conditions change? Say a multi-decadal pattern modifies the local weather, such as a S-N wind sweeping the Death Valley region and laying down rain; the trend may go negative for several years while the ocean station goes positive. Likewise, if a La Niña sweeps cooler, drier air down the Southern California coast, again your trend could go negative.
These are examples of individual WX stations showing local variation. However, if you sample a minimum of 4 stations to represent the regional average over time (typically 30 years), the changes in the trend lines are likely to be closely representative of the regional conditions.
By then linking the changes in regional values to a large-scale grid you get a fairly good representation of the hemisphere or globe. (Remember, it is not the extremes or the raw temperatures you are measuring, but the change in the average over time. Also remember that by sampling the trend for 4 randomly selected sites within a region, you should be able to represent that region's climatic conditions over a 30-year period at a 1-sigma (roughly 68%) and likely a 2-sigma (roughly 95%) confidence level.) The key is to ensure a random selection of stations if you are not going to include the whole of the population, as most climate temperature analyses currently do.
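The 1/sqrt(n) noise reduction behind the 4-station rule of thumb can be sketched directly. This assumes the stations' local noise is independent (0.5 °C per station, an illustrative number), which is the idealization, not a property of any real network:

```python
import random
import statistics

random.seed(3)

# Assumed: each station carries independent local noise with sd 0.5 C.
# Averaging n stations should shrink that noise by ~1/sqrt(n): 4 stations, half.
station_sd = 0.5
trials = 5000

def regional_mean_error(n):
    """Empirical sd of the mean of n independent noisy stations."""
    errs = [statistics.mean([random.gauss(0, station_sd) for _ in range(n)])
            for _ in range(trials)]
    return statistics.stdev(errs)

one = regional_mean_error(1)
four = regional_mean_error(4)    # roughly half of `one`
```

The empirical scatter of the 4-station mean comes out close to half that of a single station, exactly the gain claimed; correlated microclimate effects would, of course, reduce the benefit.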
“… For general email inquiries or feedback on our analysis and papers, please contact firstname.lastname@example.org. We are very grateful for substantive feedback on our work.
We have been receiving many emails, and may not be able to respond to everyone who contacts us. However, we do read all messages, and we will add frequently asked questions along with our responses to the Berkeley Earth website….”
If you did not ask there before posting your doubts at various blogs,
why not? Let us know if you did so we can watch for their response.