I guess we can anticipate some corners of the blogosphere using this to bolster their claim that whenever model-data comparisons are imperfect, the data get adjusted towards the models. After all, conspiracy theories thrive on finding confirmation wherever their adherents look.
In the middle of the article you wrote:
“(…) deciding whether the differences are related to internal variability, forcing uncertainties (mostly in aerosols), or model structural uncertainty is going to be harder.”
Guess you meant to write “easier”?
[Response: Not really. The remaining differences are smaller and so, given the various uncertainties, will be harder to attribute. But it also becomes less important. – gavin]
A general question from a non-expert. Intuitively it seems to me that measuring the temperature at a large number of points in the oceans is the most reliable way of assessing temperature change. There should be far less short-term variation and none of the problems associated with the locations of land-based thermometers. Why were land thermometers relied upon in the past, when ocean measurement was always available? I can see that a drawback could be the slow response to temperature changes in the air; and perhaps satellite measurement is considered superior for this and other reasons.
[Response: The problem is that the ocean is very large and the number of ships traversing it – particularly in the early years – is small. There is a significant component of ‘synoptic’ variability in the ocean as well (eddies etc.), and so while the variation is less than in the atmosphere, for many areas there aren’t/weren’t sufficient independent observations to be sure of the mean values. Try having a look at the data (there is a netCDF file you can download; use Panoply to look at the various monthly coverages) and you’ll get a sense of what data there is when. – gavin]
Comment by Jonathan Bagley — 11 Jul 2011 @ 7:56 AM
It still looks like the same old hockey stick to me. Thanks much for the links to papers. I am downloading them.
The problem with the Pielke/McIntyre predictions is not so much their magnitude as it is the implication that the 1950-200X trend is somehow a uniquely relevant measure of climate change. Pielke’s deconstruction of the IPCC ‘mid-century’ statement is particularly silly in this respect.
What is shockingly ill-advised to me is that the Pielke and McIntyre projections both required, in order to fit their hoped-for story line, that the adjustments not only affect the period from 1945 to 1960 but also extend into the late 90s, levelling the more recent temperature increases so as to make both the rate appear less dramatic and the amount of recent, CO2-forced warming less of a concern.
And yet Thompson et al 2008 explicitly states (emphasis mine):
The adjustments immediately after 1945 are expected to be as large as those made to the pre-war data (Fig. 4), and smaller adjustments are likely to be required in SSTs through at least the mid-1960s, by which time the observing fleet was relatively diverse and less susceptible to changes in the data supply from a single country of origin.
The adjustments are unlikely to significantly affect estimates of century-long trends in global-mean temperatures, as the data before ~1940 and after the mid-1960s are not expected to require further corrections for changes from uninsulated bucket to engine room intake measurements.
How could they be that… hmmm, what word should I use to be polite?
Where did I “predict” a 30% decrease? I don’t recall providing such a prediction and the link you have to Prometheus has no prediction. Thanks.
[Response: It’s possible I got confused, but the very next post after the one linked says:
Instead of the about 50% reduction in the 1950-2007 trend from the first rough guess from you-know-who, Real Climate’s first guess results in a reduction of the trend by about 30%. A 30% reduction in the IPCC’s estimate in temperature trends since 1950 would be just as important as a 50% reduction, and questions of its significance would seem appropriate to ask. But perhaps a 30% reduction in the trend would be viewed as being “consistent with” the original trend ;-)
right next to another figure that you (erroneously) claim we had suggested would be the impact. The only person who said ‘a 30% reduction’ appears to be you, since it is clear from our original text that we said nothing of the sort. – gavin]
Thanks for posting this. I’m very glad to see this cleared up. I suppose GISTEMP will also be revised. Can anyone give a brief explanation of the 21-point binomial filter?
[Response: Binomial filters have weights in proportion to the binomial expansion coefficients (i.e. 1,2,1 or 1,3,3,1 or 1,4,6,4,1 etc., where n is one less than the number of terms). A 21-point filter has weights 1, 20, 190, …, 190, 20, 1 (normalised to sum to 1). I used it just because that was what was used in the Independent figure (despite their caption saying otherwise). – gavin]
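For anyone wanting to try this, the description in Gavin’s response can be sketched in a few lines of Python. The function and variable names are mine, not from any GISTEMP code, and this assumes a simple unpadded weighted average (endpoint handling is a separate choice):

```python
# 21-point binomial filter: weights proportional to C(20, k), normalised.
from math import comb

n = 21  # number of filter points
weights = [comb(n - 1, k) for k in range(n)]  # 1, 20, 190, ..., 190, 20, 1
total = sum(weights)                          # 2**20 = 1,048,576
weights = [w / total for w in weights]        # normalise so weights sum to 1

def binomial_smooth(series):
    """Smooth a list of annual values with the 21-point filter.
    The output is shorter by n-1 points (no padding at the ends)."""
    return [
        sum(w * x for w, x in zip(weights, series[i:i + n]))
        for i in range(len(series) - n + 1)
    ]
```

Applied to a series of annual anomalies this gives a smoothing very close to a moving average, but with more emphasis on the central years.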
Comment by Pete Dunkelberg — 11 Jul 2011 @ 10:04 AM
Predicted denier comment on the first chart: “See! It says “BIAS” right there on the chart! That proves it’s not objective!”
Are the data to construct the figure below
“The differences between HadSST3 and HadSST2 are shown here:”
available? I’ve found only gridded data on the Met Office page.
[Response: I took the TS_all_realisations.zip file (monthly averages for each of the 100 realisations) and calculated the annual means for each and then calculated the mean and sd for each year. If you can’t find it, I’ll post the net result when I get back to my desk. – gavin]
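For anyone trying to reproduce that reduction, a minimal sketch of the calculation Gavin describes. The file-reading step is omitted and random numbers stand in for the real realisations; the array shape (realisations × months) is my assumption about how one would load the data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_real, n_years = 100, 160
# Stand-in for the 100 monthly-mean realisations from TS_all_realisations.zip
monthly = rng.normal(size=(n_real, n_years * 12))

# 1. annual means for each realisation
annual = monthly.reshape(n_real, n_years, 12).mean(axis=2)

# 2. ensemble mean and standard deviation for each year
mean_by_year = annual.mean(axis=0)           # shape (n_years,)
sd_by_year = annual.std(axis=0, ddof=1)      # sample sd across realisations
```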
Whether or not GISS updates GISTEMP, the Climate Code Foundation intends to update ccc-gistemp to allow it to optionally use HadSST3 for historical SSTs (current SSTs are Reynolds). This relates to some work already on our to-do list.
Mr. McIntyre is claiming, in his response post, “More Bilge from RealClimate”:
“Nor did I, as Schmidt alleged, ‘predict that the 1950 to 2000 global temperature trends would be reduced by half’. The article did not contain anything remotely a ‘prediction’ of how HadCRU and similar organization would deal with the apparent error.”
In the comment section of the relevant post, Mr. McIntyre says to a scientist, Peter Webster of Georgia Tech:
“Hi, Peter, thanks for saying hello.
The statement was made to Warwick Hughes, who forwarded me a copy of his email. Von Storch disbelieved that any scientist could say such a thing and contacted Phil Jones directly for confirmation. He reported that Phil Jones confirmed the statement in his presentation to the NAS panel, reported on CA here, with link to PPT.
The TOGA protocol is a reasonable one. There are some high-level policies that I’ve reviewed previously (See Archiving Category old posts), that require this sort of policy. NSF – Paleoclimate doesn’t require it, although some other NSF departments do. The DOE agency funding Jones doesn’t require it either. We contacted them a couple of years ago and they said that Jones’ contract with them did not entitle them to make any requirements on him for the data. This was a couple of years ago. If I’d been in their shoes and funding someone who was making me look bad, I’d have told them – maybe you beat me on the last contract, but either you archive your data or that’s the last dime you ever get from us. But DOE were just like bumps on logs.
The only lever on Jones has been UK FOI. Willis Eschenbach has been working on this for years. Jones wouldn’t even disclose station names. 6 or 7 different attempts have been made finally resulting in a partial list. After repeated FOI actions, we got a list of the stations in Jones et al 1990, a UHI study relied on by IPCC to this day. As noted in CA posts and by Doug Keenan, Jones et al 1990 made claims to have examined station metadata that have been shown to be impossible. In any other discipline, such misrepresentations would have the authors under the spotlight.
My understanding is that the adjustment was applied across the board to all oceans, so, yes, it would apply to all oceans. To be sure, you’d have to inspect their code, which is, of course, secret.
Also, there’s more to this than a WW2 blip. As I observed in a companion post, Thompson et al are simply lost at sea in assessing the post-1960s effect. The 0.3 adjustment previously implemented in 1941 is going to have a very different impact than Thompson et al have represented.
If the benchmark is an all-engine inlet measurement system, as seems most reasonable, then the WW2 blip is the one part of the record that, oddly enough will remain unchanged, as it is the one part of the historical record that is predominantly consistent with post-2000s methods. Taking the adjustments at face value – all temperatures from 1946 to 1970 will have to be increased by 0.25-0.3 deg C since there is convincing evidence that 1970 temperatures were bucket (Kent et al 2007), and that this will be phased out from 1970 to present according to the proportions in the Kent et al 2007 diagram.
Recent trends will be cut in half or more, with a much bigger proportion of the temperature increase occurring prior to 1950 than presently thought.”
This is the prediction you are talking about, correct?
The Kennedy papers look like papers I would have to read if I were a scientist doing some heavy-duty faunistic ecological analysis – just to warn aspiring ecologists. I haven’t read them through yet, and maybe I never will, but the first three pages of both were still understandable enough. Sometimes I’m glad I didn’t make the grade to be a Dr. but just an M.Sc. :-).
Thanks Gavin, indeed you are confused, there was no such prediction of 30% reduction from me, can you correct this post? I appreciate it, Thanks!
[Response: Well I’m confused now. You took three posts to estimate the impacts of the Thompson et al paper and now you claim that none of these were ‘consistent with’ a 30% prediction, despite the quote above and the three figures you produced showing various impacts (50%, 30% and 15% changes). (I have taken the liberty of editing your second figure to correct the impression that you tried to give that this was based on something we said). You are welcome to tell us what you think you predicted in the comments but I don’t see any errors in the post. Perhaps you could explain where the 30% number came from, since it only appears on your blog. Thanks! – gavin]
[Further response: Actually there is a slight error. The graph I attribute to you in the above post is your emulation of what McIntyre said (a ‘50% decrease’), not your independent emulation of a 30% decrease linked above. – gavin]
Well, that RPJr post and subsequent thread is an interesting read.
I don’t see it as “obviously wrong” — but it certainly is not the final word; it is just my guess at how the 5-year trend presented in The Independent might be related to annual values. Obviously, there are other opinions, and I’ve asked for yours.
The “I” in this snippet is RPJr. The “guess” in this snippet is the 30% impact.
Thanks Gavin, on Prometheus I provided 3 emulations (and requested others if people did not like those 3), they were:
1. What I interpreted that McIntyre said (50%)
2. What I interpreted that you said (30%, which you dispute, fine, I accept that)
3. What I interpreted that the Independent said (15%)
Once again, at no time did I generate an independent prediction. I am now for the third time asking you to correct this post.
I did however say that any nonzero value would be scientifically significant, which I suggest is more interesting than the blog debate that you are trying to start.
Please just correct it, OK? It is not a big deal. Then let’s get back to the science.
[Response: Since your #2 emulation was not ‘consistent with’ anything anyone else said, it must have originated independently with you. But your statement in this comment can stand as a rebuttal to my interpretation of what you said. I have no desire to have a ‘blog debate’ but rather I am making the point that hurried ‘interpretations’ of results aren’t generally very accurate and that people can be in too much of a hurry to jump to conclusions about the IPCC, attribution, impacts on policy etc. I do not anticipate that this will stop any time soon – speculation unattached to data is too tempting for many. But for readers of the latest shock horror interpretation of some new study, this might serve as a cautionary tale. – gavin]
You seem to be upset that people were discussing and speculating about interesting new science. Instead of tut-tutting that some of this proved not to pan out, you should welcome such discussions. They happen all the time and are an indication of the interest that people have in the science.
By its nature such speculation will include some explorations/views that don’t pan out (sounds a lot like what happens in the peer reviewed literature). So what? That is why on my blog I presented a wide range of views and asked for yours as well.
Are you really suggesting that McIntyre’s speculation was inappropriate but the Independent’s was OK? Would you prefer that people not discuss science and its implications if they are at risk of being mistaken? Silence would ensue everywhere.
Since you will not correct this post after I have explained your confusion, I will put up a blog post to set the record straight. There is a lot of misinformation out there;-)
[Response: The misinformation is greatly enhanced by people jumping to ‘interpret’ things that require actual work. Discussion of issues is not the problem (and your misrepresentation of my statements is nicely ironic in this context), but jumping to conclusions is. I would certainly support greater reflective discussion of real issues – perhaps you could actually help with that this time. – gavin]
So RPJr’s post was basically suggesting that this IPCC conclusion might not be valid:
“Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”
He wrote, “I interpret “mid-20th century” to be 1950, and “most” to be >50%.”
Regardless of whether he said 30% or something else, it sure doesn’t look like this adjustment materially affects the amount of observed increase in global average temperatures, just the date on which it occurred.
Start with Pascal’s triangle. The way the numbers are added in one row to get the next row adds each number twice so that the sum across a row is a power of two. The sum of the nth row is 2 to the n-1 power.
The coefficients we want to use as weights will be related to these rows although these rows have no physical significance – the numbers are just easy for computers to generate so why not be a little fancy?
The coefficients will be used as weights for years so they must sum to 1. This is easily achieved: divide each by their total. This also means that years far from the year of interest have little influence. See this pdf for more explanation.
For our 21 point filter the final Pascal row will sum to 2 to the 20th power. Recall that 2 to the tenth is just over 1000, 1024 to be exact. Square this to get the 20th power of 2.
1000 squared is a million. 2 to the 20th is 1,048,576.
Next, note that the leftmost two numbers in the 21st Pascal row are 1 and 20.
The smallest weight should be 1 divided by 1,048,576, or just under one millionth (about 9.5×10^-7), and the next weight should be twenty times that, or just under 2×10^-5.
It finally ends up showing the same trend that you see using a simple moving average. Evidently the trend is robust to these niceties.
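The arithmetic walked through above is easy to check by machine; here is a quick sketch (the variable names are mine):

```python
# Verify the 21-point weights described above: the 21st row of Pascal's
# triangle sums to 2**20, and the normalised end weights are just under
# one millionth and just under 2e-5.
from math import comb

row = [comb(20, k) for k in range(21)]  # 1, 20, 190, ..., 190, 20, 1
total = sum(row)
assert total == 2**20 == 1_048_576

w_smallest = 1 / total   # ~9.54e-7, just under one millionth
w_next = 20 / total      # ~1.91e-5, twenty times the smallest weight
```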
Comment by Pete Dunkelberg — 11 Jul 2011 @ 3:30 PM
Some time ago Gavin, seconded by Chris Colose, urged us all to download EdGCM. But it costs $250, which I suspect is $250 more than the original but now unusable code. Taking care of this little matter ought to be duck soup for the Climate Code Foundation, he said hopefully.
[Response: This is a shame (and I have said this to the people concerned), but they have been unable to find an alternative funding model that allows them to provide support at the level required. – gavin]
Comment by Pete Dunkelberg — 11 Jul 2011 @ 3:53 PM
“but jumping to conclusions is”
I am curious as to what conclusion you think I jumped to and that you are objecting to? Please provide a direct quote from me.
In my discussion that you link to I was careful to note: “Other adjustments are certainly plausible, and will certainly be proposed and debated in the literature and on blogs”
Roger has demonstrated yet again his propensity for misrepresentation that was evident in the first go around on this topic (and on multiple other occasions). It is not enough to accept that someone disagrees with him, any disagreement has to be based on ‘a fabrication’ because, of course, since Roger is always right, anyone who claims he isn’t must be lying. This kind of schoolyard level discourse is pointless and only adds to the already-excessive amount of noise. I for one am plenty glad that Roger has decided to expend his talents on other subjects. Perhaps those other audiences will be more appreciative of his unique contributions.
What concerns me most these days is not what you, or others say, per se, but what is implied or inferred. If one uses reasonably comprehensive reasoning, one can see the most logical inference of a post, or combined related communications.
Your father uses similar red-herring style arguments as well. Inferred points that allow for ambiguity and reinterpretation may allow for politically comfortable talking points and sufficient obfuscation in presentation, but are not helpful when attempting to get closer to the truth of a matter.
So while you and so many others love to play in the wiggle room, still others are diligently trying to get at the truth of the matters at hand. And no, I’m not talking merely about data but how some choose to represent their arguments and opinion about data.
The take away from a particular or combined communication is the total representation, not the specifics of a particular detail. Do you see the difference and the importance of the difference?
On the question of whether or not I offered an independent prediction of how ocean temperature adjustments would work out, there is an objective answer to this question — as I have carefully explained to you here, I did not. Anyone can read my posts and see that this is the case. It is not a matter of interpretation.
More generally, if you want to avoid a playground fight then don’t write posts like this one, because as you have seen, both McIntyre and I have taken issue with your comments (what did you expect?). That is always a danger on a playground; if you call people out they will sometimes react … So maybe in the future stick to calling out Willie Soon and Fred Singer? ;-)
[Response: Roger, that you would jump up and down calling people names when someone attempts to hold you accountable for your statements was eminently predictable. Indeed, I have no doubt you’d be happier if we didn’t criticise your rush to judgements and misrepresentations of other people’s statements, but overall I think it’s better if people are made aware of the credibility of prior performances. I think that readers can easily judge whose judgement back in 2008 was the most credible and I’m happy to leave them the last word. – gavin]
Pete, Some time ago Gavin, seconded by Chris Colose, urged us all to download EdGCM. But it costs $250, which I suspect is $250 more than the original but now unusable code. Taking care of this little matter ought to be duck soup for the Climate Code Foundation, he said hopefully.
What are you proposing? If it involves us spending money, then I infer you are labouring under the misapprehension that we have some. If it’s something else, get in touch.
I’m finding the reaction to this amusing. On one hand you have a bunch of apologists complaining that Gavin is somehow stifling skeptic creativity by pointing out that they were wrong (if this is the new mantra, I believe Michael Mann is due some arts and crafts time!). McIntyre is now saying that his prediction was ‘conditional’, as if this somehow changes Gavin’s point about wild speculation being wrong — in fact, it only adds to the point. Probably just as bad (but definitely more funny), RPJ is attempting to get Gavin to correct the original post, where his wrong interpretation of an RC post turned out to be wildly speculative. Here’s an idea. Admit that your wild speculation turned out to be erroneous and likely led many people to believe things that are untrue (for examples, check the trackbacks or just google), and then stop doing that. Do the work; it’s much more fulfilling to both yourselves and the science.
The approach of providing an ‘envelope’ of temperature histories in HadSST3 is a nice one. It also means that for, say, the 50-yr trend (1946–2006) you can get a distribution of trend values. What does that distribution look like? How big is the spread?
Is it accounted for in the 0.02 range you use ( in 0.127±0.02 & 0.113±0.02)?
[Response: No. Those uncertainties were from the OLS regression fitting to the mean reconstructions. I’ll see if I can calculate what you want later this evening. – gavin]
[Further response: The range of 1956-2006 trends in the SST is 0.080±0.01, compared to 0.081±0.02 for the trends in the mean reconstruction – thus the errors in the ‘updated’ HadCRUT3v series trend will likely be just a little larger than the 0.022 (adding the terms in quadrature). – gavin]
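For readers unfamiliar with the phrase, ‘adding in quadrature’ just means combining independent uncertainties as the square root of the sum of their squares. With the numbers from the response above:

```python
# Combining the OLS fit uncertainty with the ensemble spread of trends.
from math import sqrt

sigma_fit = 0.02        # trend uncertainty from the OLS regression
sigma_ensemble = 0.01   # spread of trends across the 100 realisations

sigma_total = sqrt(sigma_fit**2 + sigma_ensemble**2)
print(round(sigma_total, 3))  # 0.022
```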
What I do not understand is why people play the Judith Curry style game of creating several posts filled with other people’s quotations, making up hypothetical graphs, asking questions to the commenters, and playing “what if” scenarios in their head, only to back off and play the “I’m innocent” card when someone tries to call them out on the bulls**t.
I can give Roger the benefit of the doubt that he did not directly predict a 30% decrease in trend, and gavin may have misunderstood him. Instead, he made up something that RC never said (see emulation #2 in post 17), and artificially introduced the notion that RC thought the Thompson et al. issue never mattered, when gavin emphatically wanted to reduce the noise brought about by speculation and wait for the experts to do the analysis. Roger considered this an attempt to dodge the issue.
Meanwhile, Steve McIntyre DID predict a reduction in trend by half, or at least made several very confident and authoritative statements about what would happen, which Roger interpreted similarly. Steve’s predictions also extended to what he perceived as a big coming change to aerosol climatology. Naturally, Steve now denies this and puts an odd spin on his quote, and claims gavin is fabricating claims against his blog.
Most humorous is that Steve M tried to take credit for the discovery of this bucket issue, even though scientists knew about it many years before the Thompson et al paper. The issue is actually quantifying and saying something useful about it, not blogging speculations about why “the Team” won’t handle your data requests the way you like. In fact, according to McIntyre, all the members of Thompson et al. get the humble position of being members of “the Team.” But noticing a potential issue is not enough. ClimateAudit had absolutely no role in pushing the science of bucket corrections and temperature measurements forward. I can arm-wave about an unsettled issue too, and then claim that I should get credit for noticing it when a peer-reviewed paper comes out a couple of years later dealing with it, but I’d only do so if my intent was to confuse and obfuscate. Just as all of CA whines about the dozens of hockey sticks, they have not once offered a better reconstruction.
It is a great tactic they have going…they do not contribute to the literature, but they cannot lose the fight.
Sorry. When I used EdGCM, it was set up at my university, but I think it was downloadable onto a personal computer for free (at the time). I heard Mark Chandler was having a lot of issues keeping it going, which may account for the need to charge for the product. It is a shame, because it’s a great educational tool, and a rather realistic model capable of running a ~100-yr simulation in 24 hours or so on a regular laptop. It also gave the user the freedom to tinker with many different scenarios and set up a real simulation.
[Response: It’s not inconceivable that the current GISS model code run at the old resolution (8×10) would produce similar performance. It might be worth someone’s time to investigate…. – gavin]
I think Gavin’s reading of what McIntyre said is a bit off, though I can see how that could happen in this convoluted topic.
First of all McIntyre said, “If the benchmark is an all-engine inlet measurement system… all temperatures from 1946 to 1970 will have to be increased by 0.25-0.3 deg C since there is convincing evidence that 1970 temperatures were bucket (Kent et al 2007), and that this will be phased out from 1970 to present according to the proportions in the Kent et al 2007 diagram… Recent trends will be cut in half or more, with a much bigger proportion of the temperature increase occurring prior to 1950 than presently thought.”
Obviously, that large an impact was predicated on the “benchmark” of “all-engine inlet” measurements in the “proportions in the Kent et al 2007 diagram.”
“If” the reference really was “all-engine” and if the proportions really matched Kent 2007, then half of “recent” trends could be affected, with more warming occurring prior to 1950. That is a lot of necessary conditions to fulfil.
You’ve represented that in the post as a prediction that “the 1950 to 2000 global temperature trends would be reduced by half.”
That isn’t quite what he said, in two ways. First, because of the erasure of all the “ifs” and predicates. Second, he did not say that the trends from 1950-2000 would be cut in half; he said that recent trends might be cut in half, with more of the remaining warming present prior to 1950 than is shown now. Those are not equivalent, interchangeable statements. As mental shorthand, it might be fine to reduce the complexity into simpler, more memorable portions, but when attributing a statement to someone, it would be better to use the full version of what they actually said rather than the reduced shorthand.
Moreover, that wasn’t his only commentary on the topic. Three days later (2008-05-31, over three years ago, and fully search indexed) he publicly speculated on a smaller magnitude based on different types of buckets, “Let’s say that the delta between engine inlet temperatures and uninsulated buckets is ~0.3 deg C… [i]nsulated buckets would presumably be intermediate. Kent and Kaplan 2006 suggest a number of 0.12-0.18 deg C.”
He also refers to the previous writing as, “[M]y first cut at estimating the effect,” obviously implying that this and future estimates might be better than his “first cut.”
While I can see how one might misread what he said, particularly given a haze of three years, there are some important differences between what he said, and what this post claims that he said.
[Response: I’ll point out that McIntyre didn’t complain when Roger Pielke interpreted his statements exactly as I did. And while he was subsequently informed that he’d got his buckets confused, he never came back to the issue (as far as I can tell) after promising “I’ll redo my rough guess based on these variations in a day or two”. The fact is that he only produced one graph of what he thought the impact would be and speculated wildly on how this would undermine all the previous D&A work (it didn’t, and it won’t). He had plenty of time to update it, but he never did. – gavin]
I think Roger is playing a game of obfuscation, in which every step forward by RC must be met by ten confusing steps backwards from contrarians. It’s part of their generalized game plan of sorts. All the while, back at the ranch (planet):
the real action is a warming, proving and complementing the SST graphs shown above quite well. I shudder in dismay when arguing about details of how things are calculated becomes the main topic of discussion; the main signal gets lost. In this case, SSTs do not seem to be cooling – I note Roger not writing about that, again part of the process of denying reality. Maybe if we don’t talk about it the problem will fade away, like shutting our eyelids in front of ghosts, or by cherry-picked, poorly placed point-to-point smoothing graphs… While arguments over substance lead nowhere, polar bears are missing a piece of ice to sleep on – not that they have a say in all this, but it’s starting to be far worse than that. A warming planet is a serious concern and is happening big time; Roger should regularly admit what mere animals already know.
An article published recently in the journal Science showed that the flow of ocean heat into the Arctic Ocean from the Atlantic is now higher than any time in the past 2000 years. The warm, salty Atlantic water flows up from the mid-latitudes and then cools and sinks below the cold, fresh water from the Arctic. The higher salt content of the Atlantic water means that it is denser than fresher Arctic water, so it circulates through the Arctic Ocean at a depth of around 100 meters (328 feet). This Atlantic water is potentially important for sea ice because the temperature is 1 to 2 degrees Celsius (1.5 to 3 degrees Fahrenheit) above freezing. If that water rose to the surface, it could add to sea ice melt.
I was wondering how Steve McIntyre made such a large error in the scale of the correction, and found something that may be relevant –
“Steve’s adjustment is based on assuming:
that 75% of all measurements from 1942-1945 were done by engine inlets, falling back to business as usual 10% in 1946 where it remained until 1970 when we have a measurement point – 90% of measurements in 1970 were still being made by buckets as indicated by the information in Kent et al 2007- and that the 90% phased down to 0 in 2000 linearly.” Roger Pielke http://cstpr.colorado.edu/prometheus/archives/climate_change/001445does_the_ipccs_main.html
“To put the problems inherent in recording ‘bucket’ temperatures in this fashion into their proper context, I can do no better than record the conversation I had some years ago with someone who had served in the British Navy in the 1940’s and 50’s when the bucket readings were still common (they finally finished in the early 60’s).” Judith Curry at http://judithcurry.com/2011/06/27/unknown-and-uncertain-sea-surface-temperatures/
Obviously, they can’t both be right, and McIntyre’s 50% overestimation should have some basis. Maybe he fooled himself into calculating a large, inaccurate correction by making an unwarranted assumption about the transition from buckets to engine intakes and back to buckets, because it fits his narrative. It’s not like I’ve never made that misteak.
Nick Barnes and others, I was not proposing that you personally finance anything. Your work on ccc-gistemp is already a giant contribution. I hoped that with that experience you could convert a smaller program without too much more effort. But I see that ccc-gistemp is not done yet.
We all agree that a people’s GCM would be good to have. There ought to be some foundation that would support the work. Maybe it just takes the right person to make the pitch.
Comment by Pete Dunkelberg — 11 Jul 2011 @ 10:20 PM
#37 “We all agree that a people’s GCM would be good to have. There ought to be some foundation that would support the work.”
Well, there is NCAR’s CAM
It’s not that hard to understand. If you read a McIntyre post as carefully as one reads a Wegman report looking for sources, you’ll find the source of the idea:
“Let’s suppose that Carl Smith’s idea is what happened. I did the same calculation assuming that 75% of all measurements from 1942-1945 were done by engine inlets, falling back to business as usual 10% in 1946 where it remained until 1970 when we have a measurement point – 90% of measurements in 1970 were still being made by buckets as indicated by the information in Kent et al 2007- and that the 90% phased down to 0 in 2000 linearly. This results in the following graphic:”
You see, the lines left out indicate that what Steve was doing was exploring a suggestion made by a reader. That’s why he labelled the graphic (a version is shown on RC without the label) as Carl Smith’s proposal. You could look at it as a thought experiment. Anyone who has ever worked with what-if analysis is used to doing these sorts of things. You could think of GCM projections as ‘what if’ scenarios. The problem, of course, is that people are not very careful about reading the print, the fine print, or the labels on charts. Given the current environment it’s become hazardous to even engage in absolutely ordinary what-if analysis or first-order guesses. Some people take them too seriously, on both sides I’m afraid.
GISTEMP uses HadISST for pre-satellite SSTs. This is an interpolated version of the data, and as such is considerably different from HadSST2. Presumably, they’ll create a version of HadISST based on these new adjustments.
It seems that McI and PJnr are both following the standard playbook, yet again, on this issue: “Always offended, never embarrassed”.
In their zeal to attack ‘the team’ at all costs, what they have clearly if unwisely said must be spun to mean something else when they’re confronted with it. The comments by Grypo @ 29 and Chris Colose @ 31 have it right.
In what other field would scientists apparently have to tolerate such nonsense from the usual repeat offenders?
#33, Thomas L – It seems to me that Gavin was highlighting the wild-eyed speculation about ‘bucket adjustments’ in the blogosphere. That McIntyre’s inference of large alterations wasn’t so much a considered prediction as guesswork based on incomplete information doesn’t really detract from that point.
It could be argued that the Independent’s figure was equally speculative. Was there an a priori reason why that was felt to be a better guess?
[Response: Yes – the information on the likely changes came from the people who knew most about the data and the inconsistencies. – gavin]
Given the current environment it’s become hazardous to even engage in absolutely ordinary what if analysis or first order guesses.
Hazardous? I think it’s more like: if you are wrong about something in a way that is misleading on an important subject, be prepared for people to notice and issue corrections, especially when the original party fails to do so. Speculate all you want, just do the work and get it right, and when you’re wrong, don’t become a drama queen when people start to notice past transgressions.
Some people take them too seriously, on both sides I’m afraid.
Yes, many people took that analysis very seriously.
Pete, Nick Barnes and others, I was not proposing that you personally finance anything. Your work on ccc-gistemp is already a giant contribution. I hoped that with that experience you could convert a smaller program without too much more effort. But I see that ccc-gistemp is not done yet.
It will never be “done”, but it’s usable already, and we are certainly working on other things as we continue to improve it. One of our Google Summer of Code students is working on making a faster and more user-friendly ccc-gistemp; one of the others is working on a new homogenization codebase (with input from Matt Menne, Claude Williams, and Peter Thorne), and the third is working on a web-facing common-era temperature reconstruction system (mentored by Julien Emile-Geay, Jason Smerdon, and Kevin Anchukaitis). For more information on all three, see their mid-term progress-report posts on our blog, later this week.
Are we interested in GCMs? Absolutely, and I hope to do something in that space later this year.
I guess you are still taking a what-if scenario seriously. I don’t think a careful reading of the text supports your position. Yes, careless people failed to read carefully. There are two over-reactions at play here: the first is the one taken by those who saw the assumptive analysis as a fact; the second is one that makes too much of “correcting” the analysis. I try not to fall into either camp. As for corrections, I’ll avoid comparing track records on corrections. I will note that the notion of correcting a what-if analysis doesn’t really make sense. How best to illustrate that?
For example, I have no idea how much you weigh. I have no idea how tall you are. I don’t know if you are a male.
Assuming you’re a male, assuming you’re about 70 inches tall, I’ll propose you weigh about 190lbs.
Now some people will walk away from this analysis and conclude that mosher claimed you were a 5 foot 10 190lb male. They are poor readers. And, if you end up being a woman, I’m sure some people will laugh and say, “stupid mosher thought she was a he.” They too are poor readers.
On my view they are no more careful than those who made the mistake of misconstruing a projection as a prediction. But the idea that I should “correct” my “what if” doesn’t really make sense, does it? When the IPCC does a projection of future temperatures based on an assumption about emissions, we do not clamor to have them issue a correction if the emissions scenario doesn’t materialize. The whole POINT of doing what-if analysis is our uncertainty about what certain facts will be.
Would you insist that the IPCC issue a correction if A1FI or B2 doesn’t come to pass? Would you argue that they got it wrong? Divorce yourself from the parties involved and just look at the logical form of the argument.
The sampling deficiencies audited by McIntyre (“the Team’s Pearl Harbour adjustment”) prompted the question from Pielke “Does the IPCC’s Main Conclusion Need to be Revisited?” (since “…we know now that the trend since 1950 included a spurious factor due to observational discontinuities, which reduces the entire trend to 0.06”). Emphasis mine.
I’ve noticed that it also apparently affected the meteorological stations. Were these data adjusted to account for a change from buckets to engine cooling intakes too?
I’ve come across one site – http://www.climate-ocean.com/ – that argues that it wasn’t the sampling, but the effect that military actions had on the sea surface, by mixing in cooler deeper water. His thesis is that explosions in the water from shells, bombs, depth charges, and sinking ships were much more effective than winds & waves at mixing the top 100 m or so. This led to cooler surface water, which in turn caused cooler air temperatures.
There’s also a decline during the Vietnam war which was primarily fought on land, not sea. I’ll go out on a limb here and engage in a hazardous first order SWAG, and posit that the cause might be aerosols from explosives. Cheng & Jenkins observed 10^6-10^7 particles per cc, with multimodal size distributions with peaks at 700-900 nm and smaller than 100 nm, which grew 2-3 orders of magnitude more rapidly than normally seen in air. Add the dust raised by explosions (the “fog of war”?), smoke from burning, maybe other stuff I haven’t thought of yet, and there might be enough temporary albedo increase to cause short term cooling.
[Response: This war/climate connection is nonsense – orders of magnitude too small to have global impacts. No more on this please. – gavin]
Would you insist that the IPCC issue a correction if A1FI or B2 doesn’t come to pass? Would you argue that they got it wrong? Divorce yourself from the parties involved and just look at the logical form of the argument.
Yah. Except the IPCC issues dozens of scenarios and variants upon them, with absolutely no indication of which they think is most likely. When a party states the outcome of just a single ‘what if’, can we not infer that they believe it likely?
I seem to remember somebody calling for resignations because the IPCC highlighted a particular ‘what if’ scenario in a press release on renewable energy not that long ago.
steven mosher says: “divorce yourself from the parties involved and just look at the logical form of the argument.”
Aye, there’s the rub – the denialist mindset gladly embraces anything contrary to climatological science, no matter what assumptions or pipe dreams are involved, and especially if those “what ifs” come from certain contrarians. Making the misleading fantasies plain can’t be a bad thing – the IPCC isn’t speculating that the emission scenarios will come to pass, and its underestimated projections of temperatures, ice melt and sea level rise are being pointed out.
Back in 2008, a cottage industry sprang up to assess what impact the Thompson et al related changes would make on the surface air temperature anomalies and trends – with estimates ranging from complete abandonment of the main IPCC finding on attribution to, well, not very much. While wiser heads counselled patience, Steve McIntyre predicted that the 1950 to 2000 global temperature trends would be reduced by half, while Roger Pielke Jr predicted a decrease of 30% (Update: a better link). The Independent, in an imperfectly hand-drawn graphic, implied the differences would be minor, and we concurred, suggesting that the graphic was a ‘good first guess’ at the impact (RP Jr estimated the impact from the Independent’s version of the correction to be about a 15% drop in the 1950-2006 trend). So how did science at the speed of blog match up to reality?
Here are the various graphics that were used to illustrate the estimated changes on the global surface temperature anomaly:
Instead of quibbling over words like “prediction” vs. “imply” or whatever, why can’t those who were way off just admit the obvious? Oh wait, if they started admitting the obvious ….
Comment by Pete Dunkelberg — 12 Jul 2011 @ 4:49 PM
You’re just justifying a game like the one Philip Stott played in a global warming debate with people like Lindzen, Somerville, Gavin, and others, starting at 7:55 here. In this video, Stott raises the big issue of “cosmic rays” and their contribution to climate change, but then backs off and says he didn’t say they were causing global warming and that it’s just a “hot topic” of research. So why did he bring it up? (You may need to watch the whole thing to see it really had no relevant context whatsoever.)
As a more extreme example, Judith Curry “summarizes” extensively her thoughts on the Montford Hockey Stick book (comment 168) and then backs off and says none of it was actually her opinion, just her summary, after gavin calls her out on the nonsense (comment here). These people are lawyers and choose their words carefully, so with just knowledge of the construction of English sentences at your disposal, you can never call them out on anything; you actually need to use your head and see through what they try to do.
He’s just playing a similar game of speculation and raising hypotheticals meant to distract the people who aren’t going to keep up with the next several years of research on the matter.
The penultimate sentence of RC’s original “Buckets and Blogs” coverage of the issue sums this up pretty well.
“The excuse that these are just exploratory exercises in what-if thinking wears a little thin when the ‘what if’ always leads to the same (desired) conclusion.”
I would also point out that many other people, not just gavin (and including Roger), interpreted Steve M’s remarks to Peter Webster in a similar fashion. Many people who do know how to read, in fact. Regardless of whether we misread what he actually meant, no one is intentionally misrepresenting him, and we’re mostly calling for an end to the absurd games that he tries to play with his views of “The Team,” his distortions of what he thinks he is auditing, Climategate, etc.
If he doesn’t want to contribute to the science, and rather play his childish name-calling games and appealing to the “it’s a hoax” crowd, then no one is going to listen to him except for the very small non-expert community that already agrees with the conclusions of his crowd.
The arguments between climate scientists on the one hand and the befogged archipelago of deniers, lukewarmers and “auditors” on the other are getting too arcane and convoluted for me.
Pielke Jr., Curry, McIntyre, et al. are long and well known to me as dissemblers and obfuscationists. I cannot bring myself to pick back through the trail of this particular example of their dismal efforts in order to follow the arguments. I know them for what they are; I don’t need to re-confirm my conclusions. It grieves me that useful people such as Gavin must continue to spend their time confronting these drones.
I guess you are still taking a what if scenario seriously.
I take seriously what others take seriously. His ‘what if’ was based on poor understanding. That’s still a mistake and should be corrected, especially if others are disseminating the information.
When the IPCC does a projection of future temperatures based on an assumption about emissions, we do not clamor to have them issue a correction if the emissions scenario doesn’t materialize.
You do understand the difference between making several scenarios on different control variables and a single scenario, right? See pjclark, perhaps McIntyre highlighting a scenario that is not based in reality means “Everyone in” Climate Audit “should be terminated and, if the institution is to continue, it should be re-structured from scratch.”
On my view they are no more careful than those who made the mistake of misconstruing a projection as a prediction. But the idea that I should “correct” my “what if” doesn’t really make sense, does it?
A projection needs proper boundary conditions to be valid. So yes, correcting a ‘what if’ statement, even post-mortem, is absolutely necessary to avoid further errors. If you were to guess that I am from the planet Mars, or if the IPCC guessed that CO2 might be 420ppm by 2011 only if pigs flew, then that would certainly be an issue.
Of course this all ignores that you just don’t get the big picture here, and why being smart about what you “project” or “predict” is important if you would like to avoid being “corrected”.
How did the marine air temperature data stack up?
As I recall, that was suggested as a constraint on how big the corrections would be.
Thanks for your reply.
[Response: The NMAT data are here, and the discussion of the SST/NMAT difference is in Rayner et al (2003). I’ll see if I can find time to do a comparison (though if someone beats me to it, that would be great). – gavin]
McIntyre has a new post where he tries to rescue the previous ‘projections’ – but he confuses the changes in HadSST (ocean temperatures, which he is plotting) and the changes in HadCRUT3 (the global surface air temperature anomaly) which is what his projection was for (as can be seen in the figures in the main post). Pielke’s projections were also for changes in HadCRUT3, not for the SST data alone.
Corrections to the global mean are obviously less than for the oceans alone (since they comprise 70% of the surface) and that is taken into account above.
Overall, we expect land temperatures to rise substantially faster than ocean temperatures because of the lower heat capacity on land. Even at equilibrium, though, the land response to increased GHGs is expected to be higher than over the ocean.
It is always worth noting that all of these reconstructions are works in progress. Newly digitised data are continually being added, better and more accessible metadata are coming on line, and different groups will have different ideas for how to deal sensibly with the inevitable ambiguities. Neither HadSST2 nor HadSST3 (nor HadSST4 for that matter) is the final word, and simply because they are used as the best available data sets does not imply that people think they are error-free. Indeed, it is precisely for that reason that modellers don’t spend time tuning models to specific datasets of climate change – you never know when some feature might disappear.
Gavin, from McIntyre’s post:
“The original blog discussion at CA was entirely focused on SST (CRUTEM never being mentioned), though one graphic used HadCRU though the discussion was about HadSST.”
[Response: This is obvious – this post too is about HadSST. However, the impacts that everyone was talking about were on HadCRUT (in his figure, in Pielke’s emulation of his figure, in Pielke’s other predictions) and that is what all the % changes referred to, and so that is what I calculated. It’s not that complicated. – gavin]
Gavin, what did you get as the difference in 1950-2006 trend between HadSST2 and HadSST3, the series illustrated in the second figure of your post?
[Response: Odd question. The data are available and anyone can calculate the different trends, I don’t think I have any special method or anything, but for completeness the 1950-2006 trend went from 0.097 deg C/dec to 0.068 deg C/dec (mean of all realisations) a 31% drop (uncertainties on OLS trends +/-0.017 deg C/dec; for 100 different realisations of HadSST3 the range of trends is [0.0458,0.0928] deg C/dec). Your post insinuates that I deliberately did not give these numbers – that is false. I calculated the changes in trends for the metrics that people had talked about previously, and especially the ones they had graphed (and you will note that your graph is of the change to HadCRUT, not HadSST; as was RP’s). You are of course aware that oceans are 70% of the world, not 100%. – gavin]
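For readers who want to check numbers like these for themselves, here is a minimal Python sketch of the two calculations involved: an OLS trend expressed in deg C/decade, and the percentage drop between two trends. The data here are synthetic, purely for illustration – to reproduce gavin’s figures you would substitute the actual HadSST2/HadSST3 annual anomalies.

```python
import numpy as np

def trend_per_decade(years, anomalies):
    """OLS slope of annual anomalies, expressed in deg C per decade."""
    slope, _intercept = np.polyfit(years, anomalies, 1)
    return slope * 10.0

def percent_drop(old_trend, new_trend):
    """Percentage reduction going from old_trend to new_trend."""
    return 100.0 * (old_trend - new_trend) / old_trend

# Illustrative synthetic series (not real data): two noise-free
# records warming at 0.10 and 0.07 deg C/decade over 1950-2006.
years = np.arange(1950, 2007)
series_a = 0.010 * (years - 1950)
series_b = 0.007 * (years - 1950)

a = trend_per_decade(years, series_a)
b = trend_per_decade(years, series_b)
print(round(a, 3), round(b, 3), round(percent_drop(a, b), 1))  # 0.1 0.07 30.0
```

Note that a drop from 0.097 to 0.068 deg C/dec as quoted works out to roughly 30% on the rounded values; the 31% figure presumably comes from the unrounded trends.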
SteveM’s 2008 graph, which he uses as a basis for the 50% reduction claim, explicitly says “HadCRUT3”, the global temperature record. RogerP’s 2008 post, which praises SteveM and postulates a 30% reduction, also clearly refers to the global record. SteveM’s recent post attempts to say RogerP’s prediction was correct by calculating the trend difference for just the ocean record, not the global record.
Both of them should know better but who are they trying to fool? I find such games to be really silly.
Another oddity is that while RogerP staunchly claims that he has been misrepresented here with “there was no such prediction of 30% reduction from me, can you correct this post?”, SteveM says in his recent post “a value that is “remarkably similar” to the figure of 30% postulated in 2008 by Pielke Jr.” to which RogerP responds to SteveM with “nice post”. I would think RogerP would be outraged, especially given the further HadSST vs HadCRUT confusion.
Moving beyond the cheap blog sniping and selective outrage from these fellows, I have a serious question. What does the adjustment imply for attribution and model-data comparisons during this period? I understand that the models seemed to have some problems at the decadal level with the early-’40s spike (explained by El Niño anomalies?) and with the subsequent sharp drop that aerosol forcing didn’t quite account for.
Stepping back from the details, personally I find it mind-boggling to think of the amount of extra energy that needs to be trapped by the atmosphere to heat up the ocean surface (over 70% of the earth’s surface) by ~0.1C per decade. It’s almost as mind-boggling as the amount of energy some individuals and organizations spend trying to convince the public that this heating up isn’t anything to worry about and/or that anthropogenic greenhouse gases aren’t the cause!
Comment by Gerry Beauregard — 14 Jul 2011 @ 4:45 AM
We don’t have revised HADCRUT yet, but I have done a direct land/sea calc using HADSST3 with GHCN v2. It does look fairly similar to what is projected here. I also did a plot (at the end) of the trend from 2006 back to an arbitrary starting year, again with HADSST3, HADSST2 and GHCNv2.
Thanks for the link Nick. One thing I hadn’t noticed from looking at Gavin’s plots was the slight downgrading of 1998 to about the same level as 2005. If this turns out to be the final result in HadCRUT4(?) it may well upset a few people.
Don’t worry, the conspiracy theorists will jump all over that one just as soon as they figure it out!
Comment by Rattus Norvegicus — 14 Jul 2011 @ 10:29 AM
For those who are more comfortable with Excel rather than a language, or don’t feel comfortable downloading the data, I have made a file of HadSST2 and HadSST3 (all realizations for a particular month, as well as the mean); this has a function to calculate the linear trend so you can reproduce, for example, gavin’s result in his response to Steve M. I didn’t put the error bars in this sheet, but they can easily be added by the user, for those who want a more rigorous file. You can download it here (it is an xlsx file, so not sure if it is compatible with older Excel versions). Of course, let me know if I screwed something up.
Note if you choose 1960 rather than 1950 as a start date to estimate the trend, the percent decrease is ~18%; if you choose 1970 as the start date, it is ~2.7%.
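The strong dependence of the percentage reduction on the start year is what you would expect when the adjustment is concentrated in the earlier part of the record. A small Python sketch, using an entirely hypothetical adjustment shape (a warm offset tapering to zero by 1980 – chosen only to mimic the qualitative character of the bucket corrections, not the real HadSST3-HadSST2 difference), shows the effect:

```python
import numpy as np

years = np.arange(1950, 2007)
base = 0.008 * (years - 1950)  # hypothetical underlying warming, 0.08 C/decade

# Hypothetical adjustment: a warm offset of 0.15 C in 1950 tapering
# linearly to zero by 1980 (illustrative shape only, not the real data).
adjustment = np.clip(0.15 * (1980 - years) / 30.0, 0.0, None)

old_series = base               # stand-in for the unadjusted record
new_series = base + adjustment  # stand-in for the adjusted record

def percent_trend_reduction(start_year):
    """Percent reduction in the OLS trend from start_year to 2006."""
    m = years >= start_year
    t_old = np.polyfit(years[m], old_series[m], 1)[0]
    t_new = np.polyfit(years[m], new_series[m], 1)[0]
    return 100.0 * (t_old - t_new) / t_old

for s in (1950, 1960, 1970):
    print(s, round(percent_trend_reduction(s), 1))
```

The later the start year, the less of the adjusted period falls inside the trend window, so the percentage reduction shrinks – the same qualitative pattern as the ~31% / ~18% / ~2.7% figures above.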
It does appear from the spreadsheet Chris provided that the 1998 through 2006 SST trend increases from about 0.08 (HADSST2) to 0.12 (HADSST3). Like Paul said, that will upset some people.
The study indicates analysis isn’t available beyond 2006 because of the “problem of missing or non-unique call signs, exacerbated in recent years by the decision of several countries to deliberately anonymise their meteorological reports.”
What are the prospects that this problem will be resolved, and how might adjustments from 2007 through date affect the trends?
The Met Office, as an example, is part of the Ministry of Defence. I read (somewhere) a few years back that Argentina may have been able to track British ships during the Falklands War by checking where British weather reports came from in the Atlantic. I don’t know if that really was the case, but it’s a theory out there. Don’t forget that piracy has been on the increase, too, and naval ships are the primary means of thwarting such actions.
Some ships are implementing call sign masking by using a unique callsign like BATEU01 or BATEU49 instead of their real callsign. Others are just using SHIP as a call sign (today at 18Z there were at least 18 ships called SHIP).
> track … ships … by checking where … weather reports came
That’s a reason for short term anonymizing or delays, but I’d guess some countries keep site identifications anonymized to protect the data they make money by selling.
I recall confidentiality agreements came up previously since climatologists needed the details but some countries profit by selling such detailed weather data to businesses for scheduling activities like travel or roadbuilding.
Perhaps those countries have ceased to trust the ability of the climate scientists to guarantee their data isn’t disclosed in violation of such agreements?
I’d think the next IPCC reporters need to resolve the problem, whatever it is, fairly soon.
The Ministry of Defence, being the owner of the UK Met Office, makes millions in dividends from Met Office profits, since UKMO became a trading fund (it can be tens of millions of quid per annum). The UKMO’s financial report is available online, where we see a drop in revenue but UKMO still able to pay for itself, IIRC.
But I’d have thought anonymising the data would make it less commercially valuable, academia being a main customer? The naval protection of maritime trade is an ancient raison d’être for navies, which have also traditionally sought the best possible knowledge of the weather for tactical advantage. Add the increase in high-seas piracy (coincidentally beginning not long before the data went anonymous), where it is to a navy’s advantage not to let resourceful pirates know whether its ships have been in the area even in the last few days, and I wouldn’t discount the anonymising being primarily for military reasons. I imagine the lives of merchant seamen and cruise ship passengers take priority over academic papers.
Of course, the Falklands War hypothesis I read may be bunk, which I fully acknowledge.
“This paper demonstrated that there was very likely an artifact in the sea surface temperature (SST) collation by the Hadley Centre (HadSST2) around the end of the second world war and for a few years subsequently, related to the different ways ocean temperatures were taken by different fleets”
In fact Thompson et al. (2008) demonstrated that there was likely an artifact in *all* SST data sets that use ICOADS. Not just HadSST2, all of them.
“While the sale of reformatted information does occur, many commercial weather companies integrate both governmental and nongovernmental sources ….
… operate their own computer models using proprietary algorithms in order to provide value for their clients….
… commercial weather companies use research and development efforts to better their competitive position, many research and development efforts are often confidential …. (Pielke 2002, manuscript submitted to Bull. Amer. Meteor. Soc.) …. the commercial weather industry has been pressing Congress for legislative action.”
Sounds like a real mess as far as getting access to information useful to climatology.
From what I can read of the graphs given (not at all easy given what you’ve provided), the adjustment does look like an entire degree – 50% of the value, as McIntyre claimed.
Perhaps you could provide a graph plotting the actual item of interest: the amount of the adjustment.
My best reading of the graph: there is a downward adjustment of about 0.2 degrees in 1939 or 1940, which must be for something else, and then an abrupt upward adjustment of 0.8 degrees.
[Response: The difference between HadSST3 and HadSST2 is in the second graph. The differences this makes in the Land/Ocean HadCRUT3v is estimated in the last graph using an anomaly of 0.7*(HadSST3-HadSST2), therefore the difference in the last graph is just 70% of the difference in the second one. Thus the maximum change is around 0.15 deg C around 1946-1948. – gavin]
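The back-of-envelope scaling in gavin’s response can be sketched in a couple of lines of Python. The `0.7` ocean fraction and the assumption that land temperatures are unchanged by the SST revision are the only inputs; the sample numbers are hypothetical, chosen to resemble a difference peaking near 0.2 deg C:

```python
import numpy as np

def blended_change(sst3_minus_sst2, ocean_fraction=0.7):
    """Approximate effect of an SST revision on a land+ocean index,
    assuming land temperatures are unchanged (oceans are ~70% of
    the surface, hence the 0.7 weighting)."""
    return ocean_fraction * np.asarray(sst3_minus_sst2, dtype=float)

# Hypothetical SST differences, peaking near 0.2 C around 1946-48
diff = np.array([0.05, 0.20, 0.21, 0.10])
out = blended_change(diff)
print(out)  # peaks near 0.7 * 0.21 = 0.147 C, i.e. ~0.15 C
```

So a maximum SST difference of about 0.2 deg C translates to roughly 0.15 deg C in the blended land/ocean index, consistent with the last graph.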