Hansen reminds us that planetary scale physical processes are not deterred by clown shows. (More at Romm’s place, and Planet 3.0 nails the point.) But McKibben shows us what it is really all about for the ruling one percent: 20 trillion dollars worth of carbon still unburned, but on the books and contributing to the value of stock portfolios – unless it has to be written off for the sake of the 99 percent. Hence this game is not played by Olympic rules, and may involve sudden death.
What about the increasing diurnal range since the ’90s…
A signature of GH warming is warmer nights…decreasing range.
I wonder, in the data, whether it’s the nights not keeping up, or the days shooting ahead, and how that might have an explanation. Maybe there has been more solar since the ’90s (but from memory it’s the opposite), or some unexplained ocean current (maybe it’s El Niño?)
Or maybe Muller has rushed to publication too soon…
I asked about diurnal range at Tamino’s blog. The response I got (not from Tamino) was that it could be that aerosols had caused an overshooting of diurnal compression and that the range was returning to the long term trend.
Comment by Sceptical Wombat — 4 Aug 2012 @ 8:59 PM
And just to add, I think the increasing diurnal range is probably due to an increase in warming in the Northern hemisphere, where there is more land rather than sea. Simple enough!
Over at WUWT, I am one of the few that have cautioned against trumpeting this paper as the final nail. I do, however, think Watts has a point.
Watts has stated he would like comments from all interested parties on his draft paper. This may not be traditional peer review, but there is a real opportunity for both sides of the debate to make a contribution.
What if Watts point is valid? Would it be too much to ask for readers of RC to offer constructive criticism, which could either improve the paper or bury it?
One possibly amusing outcome of this imbroglio would be if Watts actually did submit the paper somewhere, given how McIntyre busily started rolling back any responsibility, and given the requirement that the corresponding author swear on a stack of bibles that all the co-authors are on board.
In conclusion, Watts et al. of course deserve the right to try to make their case in the peer-reviewed literature, however implausible that case appears to be. Therefore, we hope they will consider addressing the important concerns detailed above before they submit the paper to a journal. Otherwise we suspect the paper will not fare well in the peer review process. With said caveats carefully addressed and the conclusions amended if and where necessary, the paper has the potential to be a useful contribution to the climate science literature.
Some commenters on that article think that’s being too generous, because there’s a risk of lending Watts undeserved credibility. OTOH, “A soft answer turneth away wrath, but grievous words stir up anger.” For “framing” purposes, I’d go with the soft answer.
MangoChutney: what exactly *is* the supposed point of the Watts paper?
So far, it appears to those who work on this for a living that the point of the paper is “if you ignore all known corrections, required to correct for a variety of changes like time of observation and change of thermometer type, then the trend is not as high”. Which isn’t new, nor a valid point. It’s just blowing smoke.
Re: Mr. MangoChutney writes (on the 5th of August, 2012 at 2:52 AM) an appeal for constructive criticism on a draft paper on station siting effects on US temperature observations.
My position is as follows: The lead author has spent years misrepresenting climate science and vilifying climate scientists. Every word out of his mouth and every sentence he writes is designed to inflame his audience against an imagined conspiracy. I foresee that any criticism, praise, or comment, that I might, in more congenial circumstance, choose to voice would be distorted into the service of yet another jeremiad against climate science and scientists. I will not add grist to his mill. I do not work with those I do not trust, and I see no reason to do either here.
MangoChutney, you wrote: “What if Watts point is valid? Would it be too much to ask for readers of RC to offer constructive criticism, which could either improve the paper or bury it?”
What Mr. Watts has done, it seems to me, is put his paper out there in the hopes that others would comment and he could then use those comments to improve the paper. Constructive criticism there has been, in abundance. So, is he going to give due credit to all those who have offered substantive feedback? I wonder because the bottom line from the feedback seems to me to be that he has little more than bupkis, in the big scheme of things. The good folks at Skeptical Science are being appropriately generous in their final evaluation, as Mal Adapted points out. But I don’t see Watts as a genuine scholar in search of knowledge. I think the science community has been very generous and accommodating to him. An overall reaction to his work: meh.
From where I sit, Watts’ work is a classic example of how not to do science: let ideology run the show. His reaction to Muller’s work is an example of that. And there are many more.
You say that you think Watts has a ‘point’ and ask “What if Watts point is valid?” I have a problem with what you say because I’m not entirely sure what this ‘point’ of Watts is that you talk of. Some clarification please.
You say “he stated he would like comments from all interested parties”. Could you point to this Watts statement? I see no sign of it within the announcement-post at WUWT. All we have in that post is:-
() Comment that it is a pre-release following the practice of Muller (but I don’t think Muller was after discussion, was he?),
() A quote from Lemonick that perhaps implies a desire for “a much wider peer review” on the paper but nothing more is said. Who would be the intended peers? How are they to respond?
() And the descriptor on the paper itself “PRE-PRINT DRAFT DISCUSSION PAPER” which pretty much means diddly-squat.
You also say there is a real opportunity for both sides of the debate to make a contribution. Does Watts also say this or is it you suggesting it?
I think if Watts truly wants contribution from folk here, he perhaps should make that fact known. Given the past experience of Watts, him being such a polite and courteous chap and all, I’m sure if he asked nicely he might just get a fitting reply from RealClimate & its patrons.
Hard not to think Watts would’ve experienced a better week had he but emulated Schmidt over the prior weekend as per his plans.
But who truly knows the future with human volition wiring the selector switch to puree, and, given the media environment seeking product placement in all the synapse gaps in the known universe, really too much about the present or past.
MangoChutney, 5 Aug, 2:52
“I am one of the few that have cautioned against trumpeting this paper as the final nail. I do, however, think Watts has a point.”
I take “final nail” and “this paper” to mean conclusive evidence that human activity, principally CO2 atmospheric loading, is NOT a perturbation and will NOT have serious consequences.
Must say … that observation on the “final nail” is rather perceptive. As to Watts having a point about crowd sourcing which is what I understand you to mean, this 3 minute clip rather neatly sums things up.
#7 “Would it be too much to ask for readers of RC to offer constructive criticism, which could either improve the paper or bury it?”
It doesn’t seem to help. One criticism offered, even by co-author McIntyre, is that TOBS adjustments are required. Here is his response. AW was pleased enough with it that he added it to his main post.
So when it’s pointed out that TOBS adjustments are required, his response, based on some anecdotes, is that he doesn’t think observers would have adhered to the stated times, so the change can be ignored.
@Isotopious, @Sceptical Wombat: I wondered about that ‘compression of the diurnal range’ too, for two reasons. One, I wondered what in the model mechanism could cause or inhibit such a behavior. (I’m not a climate scientist, and I don’t know the models.) Two, from a statistical perspective, documenting the behavior of extreme order statistics is tricky, because generally speaking there isn’t a lot of data available to do it with. Also, their statistics tend to be unstable. To nail them, the practice as I understand it is not to try to characterize them as a byproduct of the main distribution, but to understand them on their own. Hence, Castillo, Hadi, Balakrishnan, Sarabia, EXTREME VALUE AND RELATED MODELS WITH APPLICATIONS IN ENGINEERING AND SCIENCE, Wiley, 2005. A good example is in Section 11.9.1, “The Yearly Maximum Wind Data”.
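[Editorial aside: the extreme-value approach mentioned above can be sketched briefly. This is a generic illustration of fitting a Gumbel model to synthetic yearly maxima by the method of moments — the numbers are invented stand-ins, not the wind data from Castillo et al.]

```python
import numpy as np

# Sketch: fit a Gumbel (Type I extreme value) model to yearly maxima by the
# method of moments, in the spirit of the extreme-value texts cited above.
# The "yearly maxima" here are synthetic, drawn from a known Gumbel law.
rng = np.random.default_rng(0)
yearly_max = rng.gumbel(loc=30.0, scale=5.0, size=500)

# Method-of-moments estimators for the Gumbel distribution:
#   scale = s * sqrt(6) / pi,   loc = mean - gamma * scale
EULER_GAMMA = 0.5772156649
scale_hat = yearly_max.std(ddof=1) * np.sqrt(6.0) / np.pi
loc_hat = yearly_max.mean() - EULER_GAMMA * scale_hat

# 100-year return level: the value exceeded with probability 1/100 per year,
# from the Gumbel quantile function.
return_100 = loc_hat - scale_hat * np.log(-np.log(1.0 - 1.0 / 100.0))
```

The point of modeling the maxima directly, rather than as a tail byproduct of the bulk distribution, is exactly the instability the comment describes: the estimators above use only the maxima themselves.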
The Surface Stations project was ill-conceived from the very beginning. By its very nature it is based on a misunderstanding not just of climate science, but of the scientific method. Watts would have us believe that you can only do science with perfect error-free data. However, such data do not exist. In reality, scientists spend most of their time trying to understand the errors present in real data. Given that Earth’s surface temperatures are oversampled by about 4x and have long time series, this problem is quite manageable. Those of us who have to work with very sparse data would love to have such a problem.
I am on record having said this on this very blog from the inception of Tony’s project. My objection was never to amateurs looking into the issue, but rather to looking into the issue with zero understanding.
“It’s a scientist’s duty to be properly skeptical. I still find that much, if not most, of what is attributed to climate change is speculative, exaggerated or just plain wrong. I’ve analyzed some of the most alarmist claims, and my skepticism about them hasn’t changed.” Right. Quite a transition.
Mango #17. You asked a question in #7 that had already been answered in #1. Did you read #1, and the constructive critique at SkepticalScience? The constructive criticism offered in the direction of Watts is much more likely to bury the paper than support it.
“In conclusion, Watts et al. of course deserve the right to try to make their case in the peer-reviewed literature, however implausible that case appears to be. Therefore, we hope they will consider addressing the important concerns detailed above before they submit the paper to a journal. Otherwise we suspect the paper will not fare well in the peer review process. With said caveats carefully addressed and the conclusions amended if and where necessary, the paper has the potential to be a useful contribution to the climate science literature.”
Pretty generous critique, to be honest!
And as others have pointed out, Watts doesn’t exactly have a reputation for accepting constructive criticism. This is the man who said he’d accept Muller’s BEST conclusions, “even if they proved his premise wrong”. This is someone who claims his station analysis results are right and everyone else is wrong, even to the level of suggesting that the people recording the times of observation were writing the wrong times in their logs! This is not, evidently, someone who is amenable to rational explanation or constructive criticism.
I think Muller is potentially really important in terms of changing public perception, which is so hugely out of whack with what is actually going on in climate science. I don’t know about everyone else, but I think this is a massive problem, especially for democracies and/or major economies (hello USA, I’m looking at you).
It was really interesting to see how former Australian Senator Nick Minchin (and former rabid denier) responded to Muller in the recent ABC Television documentary ‘I Can Change Your Mind About… Climate’, in comparison to how he responded to John E. Barnes, an actual climate scientist. Unfortunately the videos have expired but the transcripts are still available.
I’m not trying to imply that climate scientists are poor communicators, just that converted “sceptics” are probably more compelling to borderline deniers or doubters, who are really important to be talking to. Preaching to the choir won’t get us anywhere.
Mike Lemonick @22
My sincere apologies for mistaking Muller’s words for yours.
In my defence, the sentence Watts uses to introduce the quote in question is rather long and not entirely grammatically correct. In my eagerness, I had concluded it was two sentences. Re-reading it now, the sentence is saying that Muller “embraced a practice” during an interview with you, which is something Watts is now following with the pre-release of his grand ‘pre-print draft discussion paper’.
Watts’ quote-attribution remains grammatically ambiguous, so it is good that you can confirm it is a quote from Muller.
I probably should ‘translate’ Watts’ contorted text into what I presume he meant to say, but for correctness’ sake will stick with the actual words. As my foolishly imagining one sentence as two demonstrates, I am not up to the task. Sadly I was only ever taught how to read words that meant what they said.
Sorry, off-topic, but in response to “2 Pete Dunkelberg says: 4 Aug 2012 at 6:41 PM” referring to Bill McKibben’s article “Three simple numbers that add up to global catastrophe”
Unfortunately, McKibben makes a simple, infuriating, error in his article: he confuses gigatons of Carbon dioxide with gigatons of carbon. Two of the three numbers he discusses (the first is the 2degC “acceptable” rise in global temperatures) he defines thus:
“565 Gigatons … Scientists estimate that humans can pour roughly 565 more gigatons of carbon dioxide into the atmosphere by midcentury”
“2,795 Gigatons … The number describes the amount of carbon already contained in the proven coal and oil and gas reserves … ”
And he then goes on to compare them as if they were the same: “the key point is that this new number – 2,795 – is higher than 565. Five times higher.”
As an example of spoiling the ship for a ha’porth of tar, that takes some beating.
@30: I agree. Who cares if Muller is still a decade behind? At least now he’s pointed in the right direction. It’s like the wave of conservatives who have recently been trumpeting a carbon tax, as if it’s some brand new, revolutionary idea that no one ever thought of until them. Anything that gets us to debate policy rather than the most basic, settled science is good IMO.
@32: Could you (or someone else) please explain how that affects the overall analysis?
The “overall analysis” in McKibben’s article is that fossil fuel companies already have proven coal and oil and gas reserves sufficient to take the average global temperature well over the 2degC that even the unambitious 2009 Copenhagen Accord recognised should not be exceeded.
That conclusion stands, even with the confusion over the figures, since there is no combination of meanings for which that is not true.
McKibben informs me that “the copy-editors dropped ‘dioxide’ from the second reference” (to 2,795 gigatons), but that cannot be the whole story since that correction would produce:
“2,795 Gigatons … The number describes the amount of carbon dioxide already contained in the proven coal and oil and gas reserves … “, which is not true either (coal etc. doesn’t ‘contain’ CO2).
I think what McKibben meant to say was:
“2,795 Gigatons … The number describes the amount of carbon dioxide that would be produced if all the proven coal and oil and gas reserves were burnt.”
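[Editorial aside: the unit confusion is easy to check with a little molar-mass arithmetic. The figures below are the ones quoted above; treating both as Gt CO2 follows the corrected reading suggested in the comment.]

```python
# Molar masses: C ~ 12.011, O ~ 15.999, so burning one tonne of carbon
# yields 44.009 / 12.011 ~ 3.664 tonnes of CO2.
M_C = 12.011
M_CO2 = 12.011 + 2 * 15.999
C_TO_CO2 = M_CO2 / M_C  # ~ 3.664

budget_gt_co2 = 565.0     # McKibben's remaining "budget", in Gt CO2
reserves_gt_co2 = 2795.0  # CO2 from burning proven reserves, in Gt CO2

# With both numbers in the same units, reserves exceed the budget ~5x:
ratio = reserves_gt_co2 / budget_gt_co2

# Expressed as carbon instead, the reserves figure would be ~763 Gt C --
# still far above 565 Gt CO2 converted to carbon (~154 Gt C), so the
# article's conclusion survives the unit muddle either way.
reserves_as_carbon = reserves_gt_co2 / C_TO_CO2
```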
Christy says that the two satellite measurements have been adjusted for an “apples-to-apples” comparison with the surface measurements. I do not know what the adjustments are, but they may be based on climate models, which predict that the troposphere should warm 1.2 times as fast as the surface globally.
“… This article describes current efforts … the International Surface Temperature Initiative, an international and multidisciplinary effort that aims: firstly, to create a single comprehensive global databank of surface meteorological observations at monthly, daily, and sub-daily resolutions; and secondly, to encourage the contribution of multiple independent data products, subject to common performance assessment and benchmarking criteria, thus providing the opportunity for a detailed assessment of uncertainties. The rationale for the initiative is discussed, along with logistical and technical challenges, as well as opportunities for involvement from the statistical and wider scientific and user communities.”
(Of those authors, Peter Thorne is the only name I recognize as an amateur following these subjects. He’s been in the news recently, attacked by one of the Pielkes, I forget which.) Google the name +climate
so I was looking for something on oversampling — this will get you into the issue:
“…. Given that the new paper from Watts and his team largely takes aim at the work of climate researchers at the National Climatic Data Center of the National Oceanic and Atmospheric Administration, I sought a reaction this morning from three of the scientists there responsible for temperature data and analysis — Peter Thorne, Matthew Menne and David Easterling. They wrote that they would not comment on the Watts paper prior to its publication but could offer some context. Here’s the note from Thorne defending the quality of temperature records in the United States (it’s a bit technical in spots) ….”
Hmmm. Identity-concealing handle, new as far as I can recall. Thinks AW has a reasonable “point” but doesn’t give any indication what that might be. Thinks we at least owe some constructive criticism, is promptly pointed to some, then complains truculently that those meanies won’t answer a polite request.
The abbreviation sfc for ‘surface‘ often appears in formulae which also use toa so it’s probably Christy’s shorthand for “Surface Adjusted,” him being a satellite man and all.
There is an “Amplification factor” used to convert surface temperature changes to troposphere values. Choosing an appropriate value of this factor was one part of the silliness within the recent Watts ‘pre-print draft discussion paper’ that is discussed at SkepticalScience, which explains something of how and where it varies. Alternatively you could rely on NIPCC sources, which presumably will require you to dodge all the Ndata and Nsense.
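[Editorial aside: a trivial sketch of how such an amplification factor would be applied. The 1.2 value is the model-predicted global figure mentioned earlier in the thread; the 0.15 C/decade surface trend is purely illustrative.]

```python
# Model-predicted global troposphere/surface warming ratio (see the
# discussion above; the factor varies with region and altitude).
AMPLIFICATION = 1.2

def troposphere_trend(surface_trend_c_per_decade, amplification=AMPLIFICATION):
    """Expected tropospheric trend implied by a surface trend and a
    model-derived amplification factor."""
    return amplification * surface_trend_c_per_decade

# e.g. an illustrative 0.15 C/decade surface trend implies ~0.18 C/decade aloft
expected = troposphere_trend(0.15)
```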
Who wants to guess on what big AGW event will occur between 31 July 2012 and AR5 publication?
This delay of publishing would be acceptable if we had a good estimate of how fast global warming was coming at us. We do not.
Arctic Sea Ice Volume: PIOMAS, Prediction, and the Perils of Extrapolation @ 11 April 2012
Does not really seem consistent with what is going on with the Arctic sea ice only 3 months later. Compare the loss of Arctic sea ice volume in 2007 to that of 2012. If one thinks about the geometry, physics, and nature of feedback systems, then sea ice lasting until 2040 or 2070 is not plausible.
It seems that Hansen has it correct once again: “The future is now. And it is hot.” Everything else in climate science seems to be papers submitted a year ago based on 2 year old data.
That approach works in Medieval French Literature, but it does not work in Climate Science when the future is now.
“Climate dice,” describing the chance of unusually warm or cool seasons, have become more and more “loaded” in the past 30 y, coincident with rapid global warming. The distribution of seasonal mean temperature anomalies has shifted toward higher temperatures and the range of anomalies has increased. An important change is the emergence of a category of summertime extremely hot outliers, more than three standard deviations (3σ) warmer than the climatology of the 1951–1980 base period. This hot extreme, which covered much less than 1% of Earth’s surface during the base period, now typically covers about 10% of the land area. It follows that we can state, with a high degree of confidence, that extreme anomalies such as those in Texas and Oklahoma in 2011 and Moscow in 2010 were a consequence of global warming because their likelihood in the absence of global warming was exceedingly small. We discuss practical implications of this substantial, growing, …
It might be getting into the weeds a bit, but creating a globally homogenized temperature series using that many stations is actually quite an important result, since homogenization methods like the PHA don’t do that well in areas and times with sparse spatial coverage. It provides more evidence that there are not large uncorrected inhomogeneities lurking in GHCN-Monthly.
With regard to MangoChutney – apparently Watts does not want informed comment from science people. He refused to publish a comment from Dana N alerting him to the suggestions on SkepticalScience, and a mod wrote a scathing reply saying to go post elsewhere when someone else pointed to the SkepticalScience article.
Mango Chutney is a fairly regular Contrarian contributor to Richard Black’s blog at the BBC.
Given his track record at Auntie Beeb, it seems entirely possible (in the light of his peevish response to the cogent points raised in reply to his first post) that he was mainly interested in satisfying his preconceived prejudices about supposed ‘Pal’ review in the Scientific community, conspiracies, etc…
9 Eli the Rabett said: “It was a good week for popcorn.”
Where’d ya get that? You must be eating old corn, last year’s crop. I was just in corn country: Detroit, Chicago, & points in between. As far as I can tell there’s negligible corn on the stalk throughout most of the grain belt. The combination of drought and heat got it.
Comment by John E. Pearson — 6 Aug 2012 @ 10:01 PM
I am grateful to Robert Rohde for making the best single graphic I have seen showing how complex life on this planet relates to lower levels of CO2 over half a billion years. And I am grateful to him for his work on the GeoWhen database (“an attempt to make sense of…the geologic timescale”).
Recent rains and cooler temps may have salvaged much of the corn crop in the Northern states (NY, MI, WI, and MN). In some cases this was too late, but in others, not so. The output will still be low, but not a disaster. The same cannot be said for the I-states further south, where much of the feed corn is grown. Expect further increases in meat prices, but corn may fall as the farmers begin to harvest.
Although I have read comments insisting that Watts include TOBs in his assessment of his data, I haven’t seen anyone suggesting that Muller do the same. Is that because he did and I’ve missed it or because he is now a convert?
The worst drought in more than half a century in America’s Corn Belt has slashed the corn crop to the lowest in five years, leading to a plunge of corn supplies to the smallest in 17 years by next summer, a Reuters poll of 21 analysts showed on Monday.
That would result in the third year in a row of razor thin corn stocks, keeping prices at record highs and rationing demand for the world’s most popular feed grain, analysts said.
Zeke Hausfather, commenting above, would be the best person to explain but the BEST method uses a breakpoint detection algorithm to seek out any discontinuities in temperature records. When a breakpoint is found the offending record is split into two, each segment treated as an independent station.
Assuming the breakpoint detection algorithm works as designed, it should remove any inhomogeneities, whether from TOBS or siting changes or anything else, from the database. TOBS is adjusted for implicitly through this method, even though there is no explicit TOBS adjustment procedure.
For no other reason than the joy of stringing two unrelated words together, reCaptcha: Blackburn’s gespati
Although I have read comments insisting that Watts include TOBs in his assessment of his data, I haven’t seen anyone suggesting that Muller do the same. Is that because he did and I’ve missed it or because he is now a convert?
Muller et al detect discontinuities in the data (such as those caused by a change in TOBS) and split the data at that point, treating the two halves as separate stations. Since trends are computed for the split stations separately, they don’t need to explicitly account for changes in TOBS or thermometers, etc.
The BEST team ran some comparisons of station data adjusted via the traditional homogenization techniques (which account for changes in TOBS, etc) and their algorithmic approach, and when they’ve done so they’ve found the two almost perfectly match.
The fact that two such separate techniques yield almost indistinguishable results is a strong argument that both approaches are robust.
There is absolutely no excuse for Watts refusing to account for changes in TOBS. His argument that observers ignore directions to change observation times then LIE ON THEIR DATA SHEETS is crap.
@Ian #57 – the short answer is that the BEST team did make adjustments that effectively allowed for changes to time of observation.
The slightly longer answer is that the BEST team used a different approach to their analysis of raw data, which is described in their ‘methods’ paper. AFAIK they treated discontinuities as new records.
We tested the method by applying it to the GHCN dataset created by the NOAA group, using the raw data without the homogenization procedures that were applied by NOAA (which included adjustments for documented station moves, instrument changes, time of measurement bias, and urban heat island effects). Instead, we simply cut the record at time series gaps and places that suggested shifts in the mean level. Nevertheless, the results that we obtained were very close to those obtained by prior groups, who used the same or similar data and full homogenization procedures.
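[Editorial aside: the “scalpel” idea described in the quote above can be illustrated with a toy example. This is a deliberately simplified stand-in — a single deterministic series, with the breakpoint found as the largest one-step jump — not the actual BEST algorithm.]

```python
import numpy as np

# A station record with a constant true trend and an abrupt -0.5 step
# partway through (e.g. an undocumented TOBS or instrument change).
t = np.arange(200, dtype=float)
true_trend = 0.01                 # degrees per time step
y = true_trend * t
y[100:] -= 0.5                    # the inhomogeneity

# Naive approach: fit one trend to the whole record -- biased low by the step.
naive_slope = np.polyfit(t, y, 1)[0]

# "Scalpel": cut at the apparent discontinuity (largest one-step jump here)
# and treat the two segments as independent stations.
bp = int(np.argmax(np.abs(np.diff(y)))) + 1
slope_a = np.polyfit(t[:bp], y[:bp], 1)[0]
slope_b = np.polyfit(t[bp:], y[bp:], 1)[0]
# Each segment recovers the true trend, with no explicit TOBS adjustment.
```

This is the sense in which the cut-and-split method adjusts for TOBS “implicitly”: the step never enters any single fitted segment, so no documented correction is needed.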
Has anyone taken a look at Arctic sea ice lately? Apparently a record low-pressure cyclone (not a polar low, but a synoptic-scale system with a lowest pressure of 963 hPa) has severely churned up the Beaufort Sea and is currently at about 75-80°N latitude. Unbelievable preliminary ice extent losses of as much as 285,000 km2 over a single day. It’s hard to judge what the *quantitative* result of all this wind, advected warmth, and exposure to warmer, salty water will be yet, but the preliminaries are unprecedented in the satellite record.
Ian@#57, AIUI Muller et al. use a ‘scalpel’ approach that splits any site where TOB data varies into separate sites. Watts didn’t do that, or anything else, to account for TOB.
Comment by Steven Sullivan — 7 Aug 2012 @ 12:02 PM
“Converted skeptic” is a rather odd title. I do science. I’m a skeptic. That’s what scientists do (including Gavin, other contributors to this site and many readers too). If Muller was genuinely skeptical of published research in climate science, all he had to do was read the published research and look for flaws. His exercise in checking the temperature record was obviously a waste of time, as no one has picked up a serious methodological flaw in existing practice.
The fact that he painted himself as one of the Watts camp, took money from a tainted source and generally had a belligerent attitude to climate scientists doesn’t define him as a skeptic. Possibly something else.
If he had the doubts he claimed he had, and went about checking the results, using a new methodology, or whatever, then published in the academic literature, maybe he wouldn’t have attracted as much media. Possibly there is some value in someone shouting from the rooftops something like “It’s all a fraud,” then recanting after checking. But I doubt very much that the hardcore deniers will take anything from this but confirmation that there’s some sort of weird conspiracy going on. You can’t make a head case sane by agreeing with them then changing your mind. If it was that easy, a lot of therapists would be out of work.
And speaking as always from the cheerleading section of the bleachers:
remember to encourage the other folks out there making gifts of good science like globalwarmingart, and woodfortrees, and Tamino’s page, and so many others; some have contribution links or info, all have contact info.
Those of us who neither do, nor teach, can applaud and say thank you.
(Speaking up before I go mostly offline for a few weeks, to revegetate)
Oh come on Dhogaza, that’s a bit thin-skinned, but as I have no wish to insult or offend, please would you accept my unreserved apologies. As I am a biochemist/molecular biologist there is a lot I’ve missed, as you so cogently observe, in climate science, and am trying, with help from the replies here, to catch up. I would note however that Philip Machanick seems a lot more critical than I of Richard Muller.
Can anyone help me understand the difference between the bottom row of Figure 4 in the new Hansen et al. PNAS paper and Figure 9 of the same paper. I think the left-hand panels of each figure are identical, but I don’t quite understand why the right-hand panel of Figure 4 is so different from the right-hand panel in Figure 9. How does a shift in the mean impact the shape of the distribution? I understand the behavior in Figure 4, but don’t understand Figure 9 (a behavior which Hansen et al. think is important).
Thanks for setting me straight.
Comment by Chip Knappenberger — 7 Aug 2012 @ 4:35 PM
As I am a biochemist/molecular biologist there is a lot I’ve missed, as you so cogently observe, in climate science and am trying, with help from the replies here to catch up.
Then you have no right to make veiled accusations of hypocrisy.
Apology accepted, hopefully no more apologies will be necessary.
I would note however that Philip Machanick seems a lot more critical than I of Richard Muller.
A lot of people, climate scientists and interested and somewhat knowledgeable layfolk (like me) alike, are highly critical of Muller. Your comment didn’t target Muller, though; you accused *us* of possibly letting him off the hook because he’s a “convert”.
That attitude isn’t going to win you any friends around here.
You say “As I am a biochemist/molecular biologist there is a lot I’ve missed…” Is this what makes you thick-skinned and needing to catch up? And I thought you guys used fume cupboards!
Philip Machanick questions Muller’s ‘conversion’ from denialist/skeptic to born-again climatologist. You, however, question whether this ‘conversion’ sort of allows Muller into some ‘club that can do no wrong.’ In my view, that is quite a different thing to be “a lot more critical” about. Is not one questioning Muller and his motives while the other is questioning climatology in its entirety with its motives?
Me? Well I just question you!
The Berkeley method does not “slice” records based on metadata. It slices based on the characteristics of the time series.
That is why, for example, Zeke’s finding that the Berkeley method using no metadata matches the method using metadata is noteworthy. Some (skeptics) have suggested the TOBS metadata is corrupt. A comparison of the two methods argues against such a conjecture.
I heard Muller tooting his horn on Democracy Now a few days ago (a good show, IMHO, which followed him with Bill McKibben; even the Mullers of the world are allowed their say without bullying or interruption). What was hair-raising for me was realizing that (a) he still does not admit that anybody was scientifically justified in accepting the reality of climate change until this very moment, with the completion of his own magnificent study, i.e., does not admit that the “problems” with data analysis were ever adequately addressed before this, and (b) he is as staunchly denialist as ever about the _effects_ of climate change. E.g., dismisses the present US drought and heat wave on the ground (spurious even if true) that at this very moment, below-average temperatures elsewhere in the world happen to level out the instantaneous global average. Also scoffs at the idea of polar-bear endangerment (ha ha, but see, e.g., http://www.esajournals.org/doi/abs/10.1890/09-1641.1 ). Et cetera.
In short, even to a layperson like myself, his ongoing intellectual buffoonery is painfully apparent.
I hope I’m definitively clear of letting Muller off any hooks just because he’s a “convert” . . .
Thanks, Rattus. I understand that the means are different, but don’t understand how adding a constant to the anomalies (i.e. means calculated from different periods) results in a different shaped distribution.
[Response: With respect to a different mean, the variations are different. - gavin]
Comment by Chip Knappenberger — 8 Aug 2012 @ 12:32 PM
I left a detailed comment, moderated out, regarding the difference of scientific opinion on the paper by Dr Hansen reported upon in Wednesday’s NYT. In a nutshell, I asked whether those agreeing with or those dissenting from the conclusions of that paper were correct. I have no idea why this comment seeking information from experts in the field was not published. Are the critics correct or are they not?
#79, Larry Gillman said, ” What was hair-raising for me was realizing that (a) he still does not admit that anybody was scientifically justified in accepting the reality of climate change until this very moment, with the completion of his own magnificent study, i.e., does not admit that the “problems” with data analysis were ever adequately addressed before this,”
That was the same vibe I got from listening to Muller on an NPR Science Friday (via the podcast) a few days ago.
Re 82 Chip Knappenberger, (re 75 Rattus Norvegicus)
about figs 4 and 9 of Hansen et al “Perception of climate change”
“Fig. 4. Frequency of occurrence (y axis) of local temperature anomalies (relative to 1951–1980 mean) divided by local standard deviation (x axis) obtained by counting gridboxes with anomalies in each 0.05 interval. Area under each curve is unity.”
“Fig. 9. Frequency of occurrence (y axis) of local temperature anomalies divided by local standard deviation (x axis) obtained by counting gridboxes with anomalies in each 0.05 standard deviation interval. Area under each curve is unity. Standard deviations are for the indicated base periods.”
See also discussion p.4 and p.7 (on the pdf file) – if I’ve understood correctly, it indicates that all anomalies in fig. 4 are relative to the same base period, although the different frames use different standard deviation values – the middle frame uses detrended data, so the larger standard deviation there is not from the ongoing change in climate but a characteristic of the climatic state. The larger standard deviations compress the graphs horizontally and thus increase the height – but should otherwise preserve the shape, so far as I know. I could imagine some shape changes would occur because of the binning process (into 0.05 units of standard deviation intervals) – so when a different standard deviation is chosen, some data points that were in the same interval are now split up and vice versa (although the interval is small enough that I wouldn’t think this would have a large effect on the appearance of the graphs).
Fig 9 uses anomalies relative to different base periods, so the means shift. The related section in the text also implies that it uses the different standard deviation values as well. You would think that the shape should then be the same, compared to the corresponding frame in fig. 4 (second row), although the binning process could make some small alterations as before – noting that the middle frames in each do not correspond (fig. 4 uses detrended sigma for 1981–2010, while fig 9 uses the entire period (1951–2010) and doesn’t specify, at least not in the parts I’ve read, whether the sigma is a detrended value or not). If it is not detrended, then all the graphs in that frame will individually be that much narrower, with that much higher peaks than the normal distribution; I’d guess they could average to a normal distribution (?) if the trend had the right shape, but I’m thinking this case won’t quite fit (with the earlier decades being more similar, there should be an off-center peak with a longer tail on the other side, right?). It’s interesting to note that the middle decade of 1981–2010 approximates the normal distribution for the sigma of that period, as shown in the last frame of fig. 9.
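The binning described in the figure captions can be sketched in a few lines (entirely synthetic data; the base-period mean and sigma here are invented stand-ins, not Hansen’s values): divide the anomalies by sigma, count them in 0.05-sigma intervals, and normalize so the area under the curve is unity.

```python
import numpy as np

rng = np.random.default_rng(0)
anoms = rng.normal(0.3, 1.2, size=5000)    # stand-in for gridbox anomalies (deg C)

base_mean, base_sd = 0.0, 1.0              # invented base-period mean and sigma
z = (anoms - base_mean) / base_sd          # anomalies in units of standard deviation

width = 0.05                               # the 0.05-sigma bins from the captions
edges = np.arange(np.floor(z.min()), np.ceil(z.max()) + width, width)
counts, _ = np.histogram(z, bins=edges)
density = counts / (counts.sum() * width)  # normalize so the area under the curve is unity

print(np.isclose(np.sum(density * width), 1.0))  # True: area is one
```

With the interval this small relative to the spread of the data, moving a handful of points across bin edges barely changes the drawn curve, which is the commenter’s point about the binning having little effect on appearance.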
I think the off-center curves may sometimes appear to tilt away due to an optical illusion?
Re 84 Rick Brown – actually the second row of fig 4 is for NH land. The anomalies are relative to the same base period in fig 4 and different base periods in fig 9; the histogram shows anomalies in units of standard deviation, for standard deviation values from different periods – in each figure, though the middle frames of each row don’t correspond between the figures and I’m not quite sure about the right frames.
Actually I am noticing a few things that I couldn’t explain (compare the decade 1951–1961 in the second two frames of the 2nd row of fig 4 to the right frame of fig 9) – the lowness of the peak in fig 9 suggests that the sigma there is the detrended one, but that wouldn’t account for the higher peak of 2001–2011. But I haven’t read the whole paper either. The more general point, that different base-period sigmas tend to compress horizontally and stretch vertically, or vice versa, makes sense.
‘…remark on how little actually changed’ – i absolutely understand your chagrin at this guy turning up and claiming he’s made a major contribution to climate science….clearly he hasn’t. but what he has done is made a major contribution to the pseudo-debate in the media. the storyline of independent, sceptical, einstein looking physicist committing to the reality of agw is worth much more to the campaign for political committment (sad to say) than a hard-working climate scientist.
so, i think something has changed. not in the science but in the public debate (as demonstrated by the buzz in the denial-o-sphere about their betrayal).
Just piling on……what a crock of sh***t. Actually, I find myself in an odd agreement with Judith Curry when she says:
“If the attribution problem was as simple as Muller makes it out to be (curve fitting to CO2 concentration), then why are others wasting all their time with complex modeling studies, data analyses etc as described above?”
Septics have been ranting and raving about how complex attribution studies are prone to uncertainty, “unknown unknowns”, “butterfly effects” and other clichés for quite some time, and Muller himself has been disputing facts about measured temperatures and proxy reconstructions which, to my knowledge, are at least as solid as the attribution studies (and certainly less based on the favourite septic target of complex modelling). That Muller then suddenly would declare himself convinced by simple, non-peer reviewed curve fits (which most competent undergraduates in any scientific discipline could do) is, to my mind, the strangest part of Muller’s “conversion”. He certainly leaves himself open to completely justified criticism from both standard septics and climate scientists alike and appears to neither convince nor impress anybody in the process. Is the whole purpose of this exercise really just simple self-promotion and then to heck with his scientific reputation?
But what the hey, perhaps the reality based community should just laconically accept Mooney’s conclusion:
“while in a scientific sense Muller’s conversion is quite insignificant [...] in a political sense, his recent arrival is all that matters. So just declare victory, my scientific friends.”
Comment by Christoffer Bugge Harder — 9 Aug 2012 @ 3:46 AM
I thought there was something interesting.
What about the increasing diurnal range since the 1990s…
Thanks. But so that I better understand… in Figure 9, the sample standard deviation is not calculated with respect to the sample mean, but wrt the mean from a different period?
[Response: The figure shows a distribution of anomalies with respect to a baseline, not a standard deviation. There are three baselines used (title of each plot), and the anomaly distribution in each figure for each decade uses the sd from the baseline to calculate the z-score. - gavin]
But it in Figure 4 the sample standard deviations are calculated in the normal way (i.e. from the sample mean)? Figure 4 seems to make sense to me, but I am still struggling with the significance of Figure 9–which Hansen uses to argue against a shifting baseline. Perhaps an interesting argument, but not one that I am completely following as of yet.
[Response: Figure 4 shows the impact of different sd, not different baselines. - gavin]
Comment by Chip Knappenberger — 9 Aug 2012 @ 11:17 AM
Yes I suppose, but they started ~47 years earlier, and ended about a year later.
Yes, the change in the diurnal range is interesting. Gavin always seemed to say that the reason more warming was occurring in the northern hemisphere is because there is more land. I guess he was right. I’m surprised the authors didn’t include the change in diurnal range for each hemisphere. That would have been…better. In which case you would hope to see decreasing diurnal range for the northern hemisphere and no upswing? Or not.
I understand how using a different standard deviation to calculate z-scores produces a different shaped sample distribution (why the left-hand panel in the bottom row of Fig 4 looks different from the right-hand panel). But I don’t understand how using a different mean does. The only difference between anomalies from the 1951-1980 baseline (which I take to mean the 1951-1980 average) and anomalies from 1981-2010 baseline is a constant shift. If I generate a string of normally distributed random numbers, normalize them, and plot their distribution, and then add a constant to all the original values, divide through by the original stdev and plot them again, the shape of the distribution stays the same (it is only shifted to the right). So I don’t understand why the right-hand panel in Fig 4 looks different from the right-hand panel in Figure 9 (the standard deviation used is the same in both panels, but the mean is different). Consequently, I don’t follow the point being made by Hansen about the need to keep the baseline fixed at some reference period.
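Chip’s experiment is easy to reproduce in a few lines (a sketch with invented numbers; the half-sigma shift is arbitrary), and for a single series it does behave exactly as he describes:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=10_000)          # a string of normally distributed numbers
mu, sd = x.mean(), x.std()

z1 = (x - mu) / sd                   # normalized about the original mean
z2 = (x - (mu - 0.5 * sd)) / sd      # same data, mean taken from a "shifted" baseline

# Every value moves by the same constant, so the shape of a single-series
# distribution is unchanged; only its location shifts.
print(np.allclose(z2 - z1, 0.5))     # True
```

So the shape change in the published figures has to come from somewhere other than a single constant shift, e.g. from each gridbox getting its own, differently shifted baseline, which is what later replies in the thread argue.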
I guess I should just accept that I don’t understand and leave it at that, since I can’t seem to grasp what Rattus and you have repeatedly explained to me. Thanks, though, for taking the time to try to help.
Comment by Chip Knappenberger — 9 Aug 2012 @ 6:05 PM
Chip (# 101) and others,
Figure 9 graphs the anomalies (by decade) from three different baselines for NH land. The first (on the left) is from 1951-1980, and thus is identical to the first graph on the bottom row of figure 4. Since the next two graphs in figure 9 use different baselines than figure 4 (for comparison), the entire populations are slightly different, so the distributions are different. They are not simply adding a constant to each anomaly; rather, they calculate each anomaly from three different, overlapping populations. Thus you expect the shape of each distribution to change modestly.
The authors discuss the reason for this additional analysis in the section titled “Reference Period.” Think of it as a parallax view, perhaps; it shows some robustness of their conclusions.
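A toy version of this point (entirely synthetic records; the trend slope, noise level, and gridbox count are all made up): give each gridbox its own record, recompute its mean and sigma from two different overlapping baselines, and the pooled distribution of a recent decade changes, not just its position.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1951, 2011)

# Toy gridbox records: a shared warming trend plus independent noise.
trend = 0.02 * (years - 1951)                         # deg C per year, invented
temps = trend + rng.normal(0.0, 0.8, (500, years.size))

recent = temps[:, years >= 2001]                      # the 2001-2010 decade

def zscores(base_lo, base_hi):
    """Pooled z-scores of the recent decade, with each gridbox's mean and
    sigma computed over the given base period only."""
    sel = (years >= base_lo) & (years <= base_hi)
    mu = temps[:, sel].mean(axis=1, keepdims=True)
    sd = temps[:, sel].std(axis=1, keepdims=True)
    return ((recent - mu) / sd).ravel()

z_old = zscores(1951, 1980)   # baseline set before most of the warming
z_new = zscores(1981, 2010)   # baseline containing the warming

# Against the early baseline the recent decade sits well to the right;
# against the later baseline it looks far less remarkable.
print(z_old.mean() > z_new.mean() + 0.5)   # True
```

Because every gridbox’s baseline shifts by a different amount, the pooled histogram is not just the old one slid sideways, which is why the single-series intuition above doesn’t carry over.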
Chip Knappenberger @101
I am not a great fan of waving around in public analyses like the one you’ve been enquiring about, Hansen et al (Pdf). I feel the logic of the method it uses is not easily comprehensible, not obvious enough. So a precedent is set such that any aspiring numerologist could present a nonsense mess of similar graphs to prove black is white or whatever (of course, some of them already do that), and only those smarter than the average bear will ever be able to tell that it’s not genuine.
I also consider that the comments replying to you here have not been helpful, and this may be reflective of the less-than-straightforward logic. Your comment @96 I think pretty much conforms to my understanding of it (although the Response suggests we are both wrong! And we’re off-topic! Oh no!!).
In figure 4 each gridbox has a temperature data set. Each gridbox’s data will have a mean & sd which the choice of anomaly base will shift up or down. But when the anomaly base is changed, each gridbox’s anomaly zero will shift by different amounts. So the graph’s shape (a sum of lots of different local anomalies) is dependent on choice of anomaly base.
In figure 9, Hansen et al is calculating the sd for each gridbox on a sub-set of its temperature record. “Standard deviations are for the indicated base periods.” The implication for the graph is that baseline periods with more gridboxes showing greater scatter, more climatic variability, will be represented in figure 9 with a narrower bell-shape.
“Climate variability increased in recent decades, and thus the standard deviation increased. Therefore, if we use the most recent decades as base period, we “divide out” the increased variability. Thus the distribution function using 1981–2010 as the base period (Fig. 9, Right) does not expose the change that has occurred toward increased climate variability.”
Of course I could be wrong on this. A quick read off the screen is no substitute for a proper old fashioned “paper reading.” And if I did print it out, I might come to understand that I don’t know what on Earth he’s on about, Boo Boo.
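The “dividing out” in the quoted passage can be illustrated with two synthetic samples (both the variability values and the sample sizes are invented, purely to show the mechanism):

```python
import numpy as np

rng = np.random.default_rng(7)
quiet = rng.normal(0.0, 1.0, 20_000)   # stand-in for base-period variability
noisy = rng.normal(0.0, 1.5, 20_000)   # variability has grown in recent decades (made up)

z_base_sigma = noisy / quiet.std()     # sigma taken from the quiet base period
z_recent_sigma = noisy / noisy.std()   # sigma taken from the noisy recent period

# Using the base-period sigma exposes the widened distribution; using the
# recent-period sigma "divides out" the increase and it looks standard again.
print(z_base_sigma.std() > 1.4)               # True
print(np.isclose(z_recent_sigma.std(), 1.0))  # True
```

In other words, normalizing recent anomalies by recent sigma guarantees a unit-width curve by construction, which is exactly why that choice hides any increase in climate variability.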
Question about “adjustments” to the US temp records: IF the correction curve of the temp records looks substantially similar to the warming curve, and the non-adjusted curve looks flatter, would that indicate a physical, natural process, or would it indicate human bias? Or a combination of the two (and if so, in what proportion)?
[Response: There is no absolute answer. You have to see what the adjustments were for (TOBs, station moves, instrument changes, UHI etc.) and what the uncertainties on that correction was, and how robust the answer is to all these issues. For the US, the basic warming is robust no matter how you do the necessary adjustments (and coherent with warming in the surrounding ocean, lake warming, phenology changes, glacial retreat, snow cover decreases etc.). Different methodologies - NCDC vs BEST for instance, give very similar changes. Of course, uncertainty in the adjustments adds a little to uncertainty in the final answer, but this is nothing as large as some have been claiming. - gavin]
Chip, as a layman I think Hansen is saying that the length of the base period introduces significant error in counting high sigma events. Over 30 years the average temperature rises enough that any algorithm asking “How cold was this day in deviations, not degrees?” would have to contend with the fact that a cold snap “one would remember” back in 1980 was surely colder than the same sigma event today. Compare two days 30 years apart with the same temperature. They’re really different but the algorithm treats them as identical. Now start walking the base period forward in time…
Hansen is saying, pick a period of stability, good data, and as close to pre-human as possible. That’s 1951-1980. Its only obvious flaw to me is the aerosols.
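One way to picture the comparison in the comment above (all numbers invented): two days with the same absolute temperature, thirty years apart, scored against a fixed versus a walked-forward baseline.

```python
# Two days with the same reading, 30 years apart, in a warming climate.
temp = -10.0                 # deg C on both days (made up)
sd = 2.0                     # local variability, assumed unchanged (made up)

mean_1980 = 0.0              # climatological mean then
mean_2010 = 1.0              # climatological mean now, after warming (made up)

z_fixed = (temp - mean_1980) / sd    # -5.0 sigma against the fixed 1951-1980 baseline
z_walked = (temp - mean_2010) / sd   # -5.5 sigma against today's climate

# A fixed baseline scores both days -5.0; an algorithm that walks the
# baseline forward scores the later, identical-temperature day -5.5.
print(z_fixed, z_walked)     # -5.0 -5.5
```

So the same thermometer reading counts as a different-sized "event" depending on the reference period, which is the counting problem the comment describes.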
Now, consider that our white coat science team members have grown old or have lost traction for other reasons. Could we manufacture another expert member? YES, of course, and even a much better one.
Recruit someone with a Name in his own scientific field and expressed doubts concerning the climate science. Emphasize initially his/her critical views in aggressive press releases. Provide him/her with money to do the simple sums that are required to prove beyond reasonable doubt that the global temperature has risen over the past 100 years.
That simple task completed, let him/her declare that his/her results demonstrate indeed beyond reasonable doubt that the world has warmed up. Call this the definite proof, original and fully satisfactory, certainly beyond all previously published work on the topic.
The author then declares his conversion from a climate sceptic to a believer in global warming as all his/her doubts have been removed. Add as an extra embellishment a note that humanity may have at least something to do with it.
A climate expert of high public stature has now been manufactured. In particular, he/she also has demonstrated irreproachable character and scientific integrity by flipping over his/her personal views as a consequence of his/her own research results.
He/she will be famous, will be loved by the media, will do great on the speech circuits, and will be invited to all political hearings to present his/her expert views and opinions on all matters even remotely connected to climate science.
The Establishment will unwittingly play along, egged on by barbs sent in their direction.
As an example, he/she may well state that 90% of all claims made by Al Gore in his films and presentations are not supported by science. This will be devastatingly credible, coming from such a famous professional source with exceptional scientific integrity. All his/her other opinions will also be weighty and can be heavily promoted (scientific proof need not be asked for; it would be insulting, in fact).
Never forget the basic messages: “Impression is everything”, “Doubt is our product” and “Create controversy”. This is how harmful laws are not passed.
There is an entirely separate and valid argument for using 1951-1980 as the base period:
A very high percentage of our current infrastructure and real estate was designed, planned and built during that period, all based on our perception of the climate back then.
A five sigma weather event relative to that baseline will often be outside the design envelope of normal safety-class buildings (ie: houses) from back then. High safety-class (ie: buildings with lots of people in them) will be less vulnerable. At least for now.
Comment by Poul-Henning Kamp — 12 Aug 2012 @ 2:35 AM
Hansen did an analysis relating NH ice sheet growth to NH spring insolation over the last few stades. Could such an analysis be extended further into the past to see if the MPT can be explained in similar terms?
He sort of blames climate scientists for the skeptics’ views, saying that the scientists exaggerate their claims and make obviously unsubstantiated claims. Skeptics, he claims, then become suspicious of the overall AGW claims.
I think he is silly. Of course, a few odd climate scientists go too far, but that happens in any scientific area. It’s the general consensus that counts. Skeptics can always find red herrings to attack. But they are dishonest in that they imply that these outliers represent the scientific community.
… Here, the post-1950 Berkeley Earth “complete” land series is compared to the preliminary Berkeley series released in 2011, as well as to GHCN-only simulated series, based on overall attributes of those unreleased series provided in the Berkeley Earth companion “methods” paper. The 2011 and 2012 “complete” (ALL) series Berkeley versions both fall squarely in the range of the latest comparable series from the three other groups post-1950. However, the two Berkeley ALL series diverge over the 1980-2010 period, and lie completely outside each other’s 95% confidence intervals in the 2000s, when baselined to 1950-1979. The 1950s average absolute temperature is 0.42 °C higher in the GHCN 2012 series than in the ALL 2012 series, a 25-sigma difference with respect to the reported uncertainty in the GHCN series. The GHCN 2012 series falls halfway between the 2012 ALL and 2011 ALL series in the 2000s. As well, there is an increasing widening between the Berkeley 2012 GHCN and ALL series the further one goes back before the 1950-1979 baseline period, with the ALL series about 0.3 C cooler in the early 1800s.
Other issues requiring further analysis are also identified, particularly a reported reversal in the long-term trend of narrowing diurnal temperature range starting in 1987, which contradicts previous GHCN-based analyses. Taken together, these issues cast doubt on the robustness of the present Berkeley Earth analysis, and point up the need for more open data access and improved diagnostics in order to further assess the reliability of the Berkeley Earth approach to surface temperature analysis.
The big picture is this: an independent, previously-skeptical team of researchers obtained results that confirm AGW. No Earth-shattering science, but the overall result should be seen as positive for climate scientists who have been making the same claims for many, many years.
Comment by Eccentric & Anomlaous — 16 Aug 2012 @ 2:43 PM