I think this paper represents a very nice advance in the characterization of Holocene climate; ever since the various papers on pre-industrial cooling trends of the past 2000 years started coming out, I had been yearning for something that covered back to the Altithermal. This kind of paper will be very valuable in refining our understanding of how the climate responds to the precessional cycle, particularly once it becomes clearer whether the reconstructed temperatures are really annual averages, as opposed to biased toward summer.
I do think, however, that some of the commentary in the blogosphere regarding how unprecedented the warming of the past century looks (notably the “wheel chair” graph comparing Marcott et al with the instrumental record) risks going beyond what can really be concluded from the study. As noted in the FAQ, the time resolution of the reconstruction is approximately a century. Thus, it is not quite fair to compare the reconstruction to instrumental data that is not smoothed to the same time resolution. It is conceivable that there are individual centuries in the Altithermal where the temperature rose as fast as today, and to the same extent or more, but these would not show up in a record smoothed to 100 year time resolution. I think this is very unlikely, but the paper doesn’t strictly rule out the possibility. This remark applies only to the warming of the past 100 years. Where we are going in the next century is so extreme it would show up even if smoothed down to the centennial resolution, I think.
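To make the smoothing point concrete, here is a minimal numerical sketch (this has nothing to do with the actual Marcott et al. code; the spike size and averaging window are invented for illustration):

```python
import numpy as np

# A made-up 11,000-year annual record: flat, except for a short
# 50-year, 1.0 degC spike partway through.
years = np.arange(0, 11000)
temps = np.zeros(years.size)
temps[5000:5050] = 1.0

# Smooth with a boxcar window comparable to a centennial-scale
# proxy resolution (300 years here, purely as an assumed example).
window = 300
kernel = np.ones(window) / window
smoothed = np.convolve(temps, kernel, mode="same")

print(f"raw peak:      {temps.max():.2f} degC")
print(f"smoothed peak: {smoothed.max():.2f} degC")
```

The 1.0 °C spike survives the smoothing only as a bump of 50/300 ≈ 0.17 °C, which is the sense in which a short excursion could hide inside a low-resolution reconstruction.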
Note also the response in the FAQ concerning Fig. 3, where the authors attempt to back out some information regarding possible high frequency Holocene variability that is damped by the smoothing. I would welcome further discussion regarding the nature and validity of those arguments.
Is the response to criticisms about core-top dating adjustments clear enough to end that particular dispute? The radio-carbon dating calibrations were not at issue.
[Response: I doubt any amount of rational discussion will “end” criticism. In some quarters this goes on and on regardless of the relevance. I’ve yet to see the relevance of this issue — which I think Marcott et al. explain perfectly well — to the conclusions of the paper. The question that seems to be of interest here really is what the sharp rise in temperature in the last century reflects; I think this is addressed well at Tamino’s blog, making it clear that this aspect of the paper has no influence on the subject of the paper, which is the long term Holocene variations.–eric]
Regarding “spikes” that might have escaped this reconstruction, I think it’s an interesting scientific question to ask:
For a current spike to be missed by Marcott et al.:
a) There needs to be plausible mechanisms for both an upspike like the current, and then a downspike of approximately similar size.
b) Any such mechanism must not be caught by higher-resolution proxies like ice cores.
So the question is: how well can possible jiggles be bounded?
I note that the usual line+uncertainty zone graphs do not constrain possible paths as much as physics and other data do. It seems fairly hard for the real line to be at the bottom of the zone at one point, leap to the top at the next, and then go back … unless someone can provide a real mechanism.
[Response: John and Ray: See my update above with link to Tamino’s post addressing some of these questions.–eric]
[Response: Tamino’s posts have a good description of the issues with the uptick, but as far as I can tell they don’t address the issue of whether a 50 year spike in temperature of similar magnitude to the past half century, followed by a recovery, would be picked up by the proxies. I agree with John that it’s an issue of what physics could cause such a thing, but the point here is whether one is invoking physics or reasoning from the proxy reconstruction. What I’m going after here is the issue of what the Holocene record can tell us about the maximum excursions that could be caused by things like the AMO. A warming like the present, which is caused by CO2, could not be followed by a recovery because of the long lifetime of CO2, but we already know that CO2 doesn’t vary enough in the pre-industrial Holocene to do very much. The issue is whether some other cause (notably ocean heat uptake and release) could cause a centennial scale excursion of anything like the magnitude of the present CO2 induced warming. Constraining the magnitude of centennial natural variability over the Holocene would give us some further information about how much variation in rate of warming we should expect on top of the upward CO2-induced trend. Unless I misunderstand the authors’ remark about time scale, the Marcott et al. paper doesn’t tell us much about the centennial variability in the Holocene. It does put the CO2-induced warming of the present into a valuable perspective, though, because we know that the CO2 induced warming will last for millennia, and that the millennial warming we will get even if we hold the line at a trillion tons of carbon emissions is vast in comparison with the millennial temperature variations over the whole span of human civilization. That’s the real “big picture” here. –raypierre ]
Step 9 in the stacking procedure describes the proxies being ‘mean-shifted’ to reference them to the 1961-1990 CE mean.
Is this saying the various proxies are providing relative temperature estimates, and those need to be tied into an external source (like an overlap with the instrument record at their ends) to make them absolute?
Or is this simply describing how they locate “0 C” on the y-axis in the figures?
[Response: I think this is just the location of the 0º C anomaly baseline. – gavin]
Andy Revkin has a post on the FAQ, pointedly commenting:
there’s also room for more questions — one being how the authors square the caveats they express here with some of the more definitive statements they made about their findings in news accounts.
How about it?
[Response: Can you be specific? The press release they put out doesn’t have anything contradictory. – gavin]
The post which Eric links to is the final installment of a three-part series on the Marcott et al. reconstruction. It deals primarily with the regional differences revealed by the proxies.
The issue of the 20th-century uptick is the main topic of part 2.
What the “critics” are desperately trying to ignore is the real point of this work, the subject of part 1 of the series, namely the big picture of temperature change throughout the Holocene.
The point of Marcott et al. is to study global temperature change in the past, not the present — specifically the last 11,300 years. In fact, for this reconstruction the number of proxies used dwindles as time goes forward, rather than dwindling as one goes back further in the past like in most other paleo reconstructions. One hardly needs Marcott et al. to tell us about recent global temperature changes; we already know what happened in the 20th century. For the given purpose, re-calibrating the proxy dates is absolutely the right thing to do.
What happened in the past is most definitely not like what happened in the 20th century. In spite of the low time resolution of the Marcott et al. reconstruction, the variations (even — perhaps especially — without the smoothing induced by the Monte Carlo procedure) are just not big enough to permit change like we’ve seen recently to be believable. In my opinion, the Marcott et al. reconstruction absolutely rules out any global temperature increase or decrease of similar magnitude with the rapidity we’ve witnessed over the last 100 years. And the fact is, we already know what happened in the 20th century.
Critics of the recent uptick, and of the re-dating procedure, either have utterly missed the point, or they staunchly refuse to see it and want everyone else not to see it either. Their foolishness is appalling.
[Response: Tamino, thanks for clarifying regarding your post, and weighing in. I think your answer addresses John Mashey’s question rather well too — the celebrated “uptick” that McIntyre is so obsessed with was never the point of the paper. It may have been an oversimplification to plot it this way — but to my reading nothing actually said in the paper is problematic. Your statement that you think that Marcott et al. “absolutely rules out …” is a very strong statement coming from you, and makes me want to read it again, and your posts again. Very interesting if you are right. –eric]
[Response: I edited the update to point to all three posts. – gavin]
Perhaps you didn’t read the PR. Start with the headline. It reads: “Earth Is Warmer Today Than During 70 to 80 Percent of the Past 11,300 Years” Fair enough, but above we are now told…
“Thus, the 20th century portion of our paleotemperature stack is not statistically robust, cannot be considered representative of global temperature changes, and therefore is not the basis of any of our conclusions.”
Without the uptick there is no basis for the headline.
[Response: Sorry, but no. It is not Marcott et al that provide the estimates of the modern warming – they don’t have sufficient proxies in the last 50 to 100 years to robustly estimate the global mean anomaly. This was stated in the paper and above. Their conclusions, as discussed in the paper and above, come from comparisons of their Holocene reconstruction with CRU-EIV from Mann et al and the instrumental data themselves. The headline conclusion follows from that, not the ‘uptick’ in their last data point. – gavin]
Tamino, after all these years, surely you no longer expect honesty in either fact or tactics from the denial industry? The point is to find something to criticize and go to town on it. This one is easy for them, all they have to do is separate the parts of the record and go after the level of certainty of each part, then indicate that until they are all perfectly linked by methodology and accuracy, it’s all a tissue of loosely sewn lies. Many innocent worker bees will spread the “word”, not necessarily even knowing they are being used. Andy Revkin, IMO, values his reputation for fair mindedness and I’m guessing he also has difficulty swallowing the level of danger we face and grasps at straws. All those “honest brokers” – gag.
Speaking of lies, Steve, the one-man rapid response squad, has a pack of them up as a comment on this piece. Sometimes I think he is being deliberately obtuse.
Comment by Rattus Norvegicus — 31 Mar 2013 @ 3:19 PM
Re #11. Can you be specific? Steve’s post doesn’t have anything untruthful.
Here are a few unanswered questions posed by Steve McIntyre.
[Response: Some suggestions below, but hopefully one of the authors can chime in if I get something wrong. (some editing to your comment for clarity). – gavin]
They did not discuss or explain why they deleted modern values from the MD01-2421 splice at CA
[Response: I imagine it’s just a tidying up because the three most recent points are more recent than the last data point in the reconstruction (1940). As stated above, there are not enough cores to reconstruct the global mean robustly in the 20th C. This is obvious in the spread of figure S3. Since this core is quite well dated in the modern period, the impact of this on the uncertainty derived using the randomly perturbed age models will likely be negligible. – gavin]
Or the deletion of modern values from OCE326-GGC300.
[Response: Same as previous. – gavin]
Nor do they discuss the difference between the results that had (presumably) been in the submission to Nature (preserved as a chapter in Marcott’s thesis).
[Response: You are jumping to conclusions here. I have no idea what the initial submission to Nature looked like, and nor do you. From my experience working on Science/Nature submissions, there is an enormous amount of work done to take them from a first concept to the actual submission. In any case, with respect to the thesis work, the basic picture for the Holocene is the same. – gavin]
[Further Response: Turns out there wasn’t a Nature submission. The thesis chapter was eventually prepared for Science, and submitted (and accepted) there. – gavin]
Nor did they discuss the implausibility of their coretop redating of MD95-2043 and MD95-2011.
[Response: The whole point of the age model perturbation analysis (one of the important novelties in this reconstruction) is to assess the impact of age model uncertainties – it is not as if the coretop date is set in stone (or mud). For MD95-2011, I understand that Marcott et al were notified by Richard Telford that the coretop was redated since the original data were published and that the impact of this on the stack, and therefore the conclusions, is negligible. – gavin]
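For readers unfamiliar with the idea, the age-model perturbation approach can be sketched roughly like this (a toy version with made-up tie points and an assumed dating uncertainty, not the published code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up dated tie points for a single synthetic proxy record.
tie_ages = np.array([0.0, 2000.0, 5000.0, 8000.0, 11000.0])  # yr BP
tie_temps = np.array([0.2, 0.4, 0.5, 0.3, -0.2])             # degC anomaly
age_sigma = 150.0  # assumed 1-sigma dating uncertainty (yr)

# Jitter the tie-point ages within their uncertainties, re-interpolate
# onto a common time grid, and collect the ensemble of realizations.
grid = np.arange(0, 11001, 20)
ensemble = []
for _ in range(1000):
    jitter = rng.normal(0.0, age_sigma, size=tie_ages.size)
    perturbed = np.sort(tie_ages + jitter)   # keep ages monotonic
    ensemble.append(np.interp(grid, perturbed, tie_temps))
ensemble = np.array(ensemble)

# The ensemble spread shows where conclusions are (in)sensitive
# to the age model.
spread = ensemble.std(axis=0)
print(f"max ensemble spread: {spread.max():.3f} degC")
```

The point of the exercise is that any conclusion drawn from the stack can be checked against the whole ensemble, rather than resting on one particular choice of core dates.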
Perhaps Marcott et al can provide the answers.
[Response: I will ask, and if there is anything else to add, I’ll include it as an update here. (Just as an aside, please note that I am more than capable of reading McIntyre’s blog, and so let’s not get involved in some back and forth by proxy. I am not interested in playing games.) – gavin]
Since this paper has been the subject of a lot of hostile attention as well as genuine interest, it is worth reminding people of the comment policy, and in particular the injunction against posting unfounded insinuations of misconduct. Being polite and respectful is much more likely to lead to useful interactions (such as the authors answering more questions) than jumping to conclusions or insulting people because you don’t like their conclusions. This goes for everyone. – gavin
I’d seen tamino’s nice post, but I still haven’t seen a good analysis of possible forcing changes and causes to raise the temperature equivalent to the modern rise, and then nullify enough of it to escape notice by the higher-resolution proxies. I understand that spikes are not ruled out by the statistics of the proxy-resolution they used, but I’m trying to understand a combination of events that could produce such a spike.
The thought experiment would be: take the modern rise, place it anywhere on the Marcott et al. curve, starting at the lower uncertainty edge, and propose a model for what could have caused an upspike, and then enough of a downspike to get back to the line in a way that doesn’t contradict the ice-core records.
Offhand, I can only think of:
1) Major state-change gyrations like the Younger Dryas (before this period) or the 8.2 ky event … but those show up strongly in ice-core CH4 records, and it seems like they would be very noticeable in a set of marine records. And we certainly know that isn’t what’s happening now, i.e., the upswing at the end of the YD.
2) A strong volcanic period, followed by a period with none.
3) A BIG boost in solar insolation.
4) A big rise in CO2 … but the ice cores rule that out.
Anyway, the question remains: what combination of events could cause a current-equivalent spike and escape the higher-frequency proxies?
Re your point at #10. Marcott et al. “Mean-shifted the global temperature reconstructions to have the same average as the Mann et al. (2008) CRU-EIV temperature reconstruction over the interval 510-1450 years Before Present.”
How is this anything other than an arbitrary shift, and as such how can this be regarded as a robust conclusion, that would support the press release?
[Response: Perhaps you can explain why this is a problem. Isn’t this just calculating anomalies relative to the same baseline?–eric]
Re 17, Eric, This is a problem because by choosing different reconstructions and/or time intervals you can shift the baseline almost as you wish!
[Response: Not so. The baseline is set according a reasonable overlap period for any specific reconstruction. There will be small differences for sure, but they aren’t going to be large compared to the 20th C increase and so won’t impact the conclusions. – gavin]
Gavin, saw you the other night with Stossel. You represented yourself well. I think you have the integrity to enable a true scientific discussion, reminiscent of the one a couple of years ago at ourchangingclimate.org with the statistician VS. Let McIntyre go mano a mano with Marcott. Science will be better for it.
[Response: I am neither of these people’s keeper and they can do what they like. I will point out that this desire for cage-matches focused on the minutia of specific studies has nothing to do with how big picture issues actually get resolved. Our understanding of Holocene climate trends will not be enhanced by insinuations of malfeasance and bad faith, but rather by constructively exploring the real uncertainties that exist. If McIntyre was interested in the latter I, for one, would be quite happy. But as I said, I speak for no-one but myself. – gavin]
Gavin,thanks for your reply. I realize you are not their keeper, but you do realize you occupy a pivotal role in the world debate. I would urge you, despite your protestations that this will turn into a cage match on the minutia of this particular study, to encourage the two to go at it. Trust me, it WILL be good for science.
[Response: This is a very naïve view of how science progresses. The “go at it” approach is at best appropriate for the courtroom, though even there some level of civility is usually enforced.–eric]
Hmm. Just a thought. From Marcott et al, published March 8th
In addition to the previously mentioned averaging schemes, we also implemented the RegEM algorithm (11) to statistically infill data gaps in records not spanning the entire Holocene, which is particularly important over the past several centuries (Fig. 1G). Without filling data gaps, our Standard 5×5 reconstruction (Fig. 1A) exhibits 0.6°C greater warming over the past ~60 yr B.P. (1890 to 1950 CE) than our equivalent infilled 5° × 5° area-weighted mean stack (Fig. 1, C and D). However, considering the temporal resolution of our data set and the small number of records that cover this interval (Fig. 1G), this difference is probably not robust.
From today’s Marcott FAQ
Our global paleotemperature reconstruction includes a so-called “uptick” in temperatures during the 20th-century. However, in the paper we make the point that this particular feature is of shorter duration than the inherent smoothing in our statistical averaging procedure, and that it is based on only a few available paleo-reconstructions of the type we used. Thus, the 20th century portion of our paleotemperature stack is not statistically robust
And McIntyre’s blog post today
Although they now apparently concede that “the 20th century portion of our paleotemperature stack is not statistically robust, cannot be considered representative of global temperature changes, and therefore is not the basis of any of our conclusions”
It does seem to me that somebody’s reading comprehension may be a little off, and that they may not have the first clue about what they are criticizing. I wonder if any press agents will demand a response from McI on that one.
As has been pointed out by any number of folks – while a 300 year spike/recovery in temperature would have been filtered out by the Marcott proxies and methods, those making strong assertions in that regard (including McIntyre) have missed several important points:
(1) Temperature changes such as postulated in a spike require some physics, not just arm-waving about “it might be possible”. Just because a method has a particular lower resolution limit does not make gremlins in the gaps a likely postulate.
(2) Finer resolution proxies such as ice cores should have picked up such temperature changes in isotope ratios or in CO2 concentrations driven by ocean temperatures.
(3) Most important of all – even if such a spike occurred (doubtful based on energy conservation), it would not be relevant to recent temperature changes – because although temperatures are rising at a geologically fantastic rate, it will take a minimum of thousands of years to return to previous temperatures due to the concentration lifespan of CO2. Meaning that an event like the current warming would inevitably have been seen in the Marcott data. So arguing about possible but unlikely spikes and returns during the Holocene is irrelevant to the situation we find ourselves in today.
According to the supplementary information and data to your Nature Article:
Shakun, J. D., P. U. Clark, F. He, S. A. Marcott, A. C. Mix, Z. Y. Liu, B. Otto-Bliesner, A. Schmittner, and E. Bard (2012), Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation, Nature, 484(7392), 49, doi:10.1038/nature10915.
you use many of the same cores and you also redate them using the same methods as Marcott et al. However, you handle the core tops differently. In Shakun et al. (2012) you linearly extrapolated beyond the latest radiocarbon date to the top of the core using the mean sedimentation rate, but in Marcott et al. (2013) you simply set it to 0 BP. Given that the change in methodology has resulted in some date changes of more than a thousand years, could you answer the following:
Why the change in methodology between the two papers?
Since the 2 papers are closely related do you not think it would have been a good idea to explain why the core top dating method was changed?
What impact would using the core top dating method used in Shakun et al (2012) have on Marcott et al (2013)?
What impact would using the core top dating method used in Marcott et al (2013) have on Shakun et al (2012)?
Since the Shakun et al (2012) and Marcott et al (2013) share the same analytical code (the Shakun et al SI states the analytical code will be published in Marcott et al) can you show the figures that you got when you used the Shakun et al (2012) core top dating method?
[Response: I don’t know for sure, but since the core-top dating only affects the very recent dating, it is going to be irrelevant for the radio-carbon dated portion in the deglacial. Using the published dates as is you get the following (from Tamino): [figure] – gavin]
No temperature spike lasting ca. 100 years is present in the far south.
Comment by David B. Benson — 31 Mar 2013 @ 6:38 PM
I agree that science is ill suited to “cage matches”. I also apologize for beginning to go over the edge in characterization. I’ve been under more or less continuous attack for years because of my views that science as it is practiced is worthy of respect and should be taken at face value. I find these attacks disingenuous, and they are full of personal innuendo. However, it must be remembered that when, as in my particular case, we give up our self-respect and react, we demean the basic premise that truth is at the center of all scientific investigation.
I’ve been reminded several times recently that my voice has its greatest value when I stay away from the personal fringes and don’t react. It may be hard, but it is necessary.
Eric, This is a problem because by choosing different reconstructions and/or time intervals you can shift the baseline almost as you wish!
[Response: Not so. The baseline is set according a reasonable overlap period for any specific reconstruction. There will be small differences for sure, but they aren’t going to be large compared to the 20th C increase and so won’t impact the conclusions. – gavin]
How is this “not so” in the category of “being able to set the baseline to any reconstruction you wish”?
From what you’re saying, it seems instead that you can set it to others, but the paper isn’t supposed to speak to whether or not (for example) the MWP is warmer than present (a popular reference-point for any reconstruction). But instead, all Marcott et al needs to do is to show similar enough alignment of warming and cooling periods within the reconstruction dates (nevermind the magnitude) in order to be deemed ‘consistent’.
Do I have this right? This means that Marcott et al can also be ‘consistent’ with any of the others, including something like Loehle et al, because all we need to see is the similar mimicking of ups and downs (more slopes than magnitudes) through AD 0-1850…and then what we already know of 1850-present becomes another implied conclusion of the paper (meaning the specific reconstruction doesn’t itself speak to it, but taking the known instrumental record along with it you have a dramatic shift upward in temperature).
If I have this right, this means that Marcott et al can still be correct and informative even if the MWP was warmer than present (which can be achieved by re-aligning the proxy mean to a different reconstruction, the defense of which then becomes a necessary component of the paper).
[Response: This is not the paper you should be interested in to discuss the details of medieval/modern differences. Given the resolution and smoothing implied by the age model uncertainties, you are only going to get an approximation. Note, though, that the Marcott reconstruction is not being scaled to anything – it was just that the baseline was adjusted to have the same anomaly as the more recent reconstructions (which are in turn calibrated to the instrumental period). Loehle’s reconstruction is not useful for many reasons, but if you wanted to baseline to Ljungqvist or Moberg you certainly could (being a little careful with hemispheric extent) and I don’t think it would affect anything much. – gavin]
Susan Anderson says:
31 Mar 2013 at 2:47 PM
‘until they are all perfectly linked by methodology and accuracy’
Are you saying that published peer-reviewed science papers have levels of certainty that are imperfectly linked by methodology and accuracy?
As I understand it, without the core-top re-dating, the proxy reconstruction takes a rather steep dive in the 20th century.
[Response: The error bars on the reconstruction once age uncertainties and proxy drop out are taken into account are large – and so many of the Monte Carlo members in the stack go up, but many go down. – gavin]
After re-dating there is a sharp rise, the opposite. If the whole reconstruction is done without the core-top re-dating it appears that the 20th century is “cooler” than a large part of the Holocene (looking at proxies only).
[Response: No. You’d still need to run this through the age model perturbations, and the differences are not enough to change the uncertainties on the most recent points. – gavin]
If there were issues with the most recent proxies, why weren’t they truncated instead of specifically re-dated [edit – no assumption of motive please]? Why were the original dates not acceptable?
[Response: Core top dates are uncertain for many reasons (core recovery, dating method uncertainty, reservoir age etc.). So you need to take that into account. – gavin]
Also, with original dating in the reconstruction there is a significant divergence with the 20th century instrumental record. I don’t understand how the reconstruction in total is “robust” with regards to the proxies ability to represent temperature to the resolution that makes any analysis of recent years possible.
[Response: The most recent points are affected strongly by proxy dropout and so their exact behaviour is not robust. Tamino’s post does a good job analysing that, and the overall reconstruction is robust. – gavin]
I see nothing in the FAQ that addresses the authors’ reasons for the re-dating of core-tops, or how they can apply their reconstruction to any recent warming, a very small time frame.
[Response: They specifically state that this reconstruction is not going to be useful for the recent period – there are many more sources of data for that which are not used here – not least the instrumental record. – gavin]
How does this reconcile with the following excerpt from the abstract?
Current global temperatures of the past decade[My Emphasis] have not yet exceeded peak interglacial values but are warmer than during ~75% of the Holocene temperature history. -Marcott et al
[Response: As they state above:
Based on comparison of the instrumental record of global temperature change with the distribution of Holocene global average temperatures from our paleo-reconstruction, we find that the decade 2000-2009 has probably not exceeded the warmest temperatures of the early Holocene, but is warmer than ~75% of all temperatures during the Holocene. – gavin]
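The form of that “~75%” comparison is just an empirical percentile. A toy sketch with entirely made-up numbers (illustrating only the arithmetic, not the actual data):

```python
import numpy as np

rng = np.random.default_rng(1)

# A fake "Holocene stack" of global-mean anomalies (invented values,
# one per year of an 11,300-year record) and a fake "recent decade"
# anomaly. Neither number comes from the paper.
holocene = rng.normal(0.0, 0.2, size=11300)
recent = 0.15

# Fraction of the fake Holocene distribution that is cooler than
# the fake recent value.
frac = (holocene < recent).mean()
print(f"warmer than {100 * frac:.0f}% of the fake Holocene values")
```

A statement of this kind is about where a value sits within a distribution, which is why it survives the caveats about the reconstruction’s own 20th-century portion: the recent value comes from the instrumental record, not from the uptick.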
Now that DotEarth comments from the day are being rolled out it looks like a concerted negative campaign. As to “cage fighting” this would be a perfect example of why it wouldn’t work.
A plasterer, I’m saying that the effort to discredit science is fond of claiming that nothing short of perfection is acceptable. The point is that we will never have perfectly consistent temperature records over centuries and millennia. This is ready-made material to deceive the gullible.
Comment by Susan Anderson — 31 Mar 2013 @ 10:01 PM
Susan Anderson says:
31 Mar 2013 at 10:01 PM
‘claiming that nothing short of perfection is acceptable’
Sorry Susan, don’t want to appear gullible but I don’t understand you.
I thought for science to be ‘perfect’ it would have to be correct. Are you saying it is not correct? Whether or not it is ‘acceptable’ is beside the point; it is either correct or not correct, and if we will never have consistent temperature records over centuries and millennia, why are we trying to predict future temperatures from them?
[Response: Not quite sure what you are asking here, but no-one is predicting future temperatures based on reconstructions of the Holocene. Future predictions are made based on physics-based models of the climate under plausible scenarios for future changes in climate drivers. Reconstructions are of use in model evaluation in some circumstances, but similar predictions existed long before any reconstructions were published. – gavin]
[Response: Core top dates are uncertain for many reasons (core recovery, dating method uncertainty, reservoir age etc.). So you need to take that into account. – gavin]
I know they are uncertain; the question is why they re-dated previously published core tops. They only re-dated (not re-calibrated) several, and have not addressed it in the FAQ. If they are uncertain etc., how are the authors’ dates more valid than the published dates? The effect of this is a later issue in the chain.
[Response: They specifically state that this reconstruction is not going to be useful for the recent period – there are many more sources of data for that which are not used here – not least the instrumental period. – gavin]
I understand instrumental records are more accurate than some sediment but how can you compare so vastly different resolutions with any certainty spanning the entire Holocene? Tenths of a degree decadal?
Thank you for the link to Tamino, I am going through the post now.
I know you can’t speak for the authors, I just don’t understand where the Qs in the FAQs came from.
So when the NYT trumpets the following, asking the world to spend trillions more on “green” energy, sans nuclear, what am I supposed to think?
“Global temperatures are warmer than at any time in at least 4,000 years, scientists reported Thursday, and over the coming decades are likely to surpass levels not seen on the planet since before the last ice age. ”
While I hope for humanity, you guys have it wrong, I hope for science you have it right. I can’t imagine the backlash to all this expense if you are wrong.
[Response: Perhaps I’m being obtuse, but I don’t see how a better understanding of global temperature history forces either the NYT or you to spend any money on anything. Whether the energy mix of the future includes an expansion of nuclear or not, this has nothing to do with temperatures 5000 years ago. Nonetheless, the observations here, and in other reconstructions and in the instrumental record, indicate that temperatures since the Early Holocene (particularly in the high Northern hemisphere) have been falling slightly (mostly in line with expectations from orbital forcing), and that over the last 100 years or so, something anomalous has happened – coincident with the dramatic uptick in greenhouse gases due to the industrial revolution. GHGs are continuing to rise (even accelerating) and basic physics, as well as more sophisticated models, indicate that future changes will be larger still. This implies a risk to society since we have an enormous investment in the climate status quo. However, how society deals with that risk is a decision for politicians and the public to make – it does not follow linearly from the science I just outlined. If you think the benefits of nuclear outweigh the potential costs (or vice versa), you should make your voice heard on that topic. Likewise if you think that energy efficiency, or wind or solar, or carbon capture or adaptation and mitigation are more sensible responses. Whatever you decide, it is not determined uniquely by the science in general, and certainly not by this single reconstruction. – gavin]
A Plasterer: What exactly do you mean by ‘correct’? In most data collected in science, there is some degree of uncertainty as regards how close it is to the ‘true’ data (chemical formulas are the only exception I can think of). Much of science consists of finding new ways to get closer to the ‘true’ values, while a large part of statistics is aimed at determining the probable lower and upper bounds within which the ‘true’ value lies. For most data, exactly correct values are not known and likely will never be known. The important thing is, ‘are the data good enough to be useful?’ For climate science, physical and historical data are known with much more precision than is needed to be able to predict rapidly increasing global temperatures.
“9. Mean-shifted the global temperature reconstructions to have the same average as the Mann et al. (2008) CRU-EIV temperature reconstruction over the interval 510-1450 years Before Present.”
How much was it shifted?
[Response: This is just a setting of the zero anomaly line. Something needed to be used as the baseline, and this is as good as anything. Differences in using Ljundqvist or Moberg et al would be around 0.1 or 0.2ºC – not large enough to affect the conclusions related to the 20th C rise. – gavin]
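For readers unfamiliar with anomaly baselines, the mean-shift described above amounts to adding a single constant, which moves the zero line but leaves every trend untouched. A minimal sketch with made-up numbers (not the actual CRU-EIV or Marcott values):

```python
import numpy as np

# Two anomaly series on different baselines (made-up values, not the real data).
recon = np.array([0.42, 0.38, 0.31, 0.25, 0.10])     # new reconstruction
target = np.array([0.12, 0.08, 0.01, -0.05, -0.20])  # reference series

# Mean-shift: make both series share the same average over the overlap interval.
shift = target.mean() - recon.mean()
recon_aligned = recon + shift

# Only the zero line moves; every difference between points is preserved.
print(shift)
print(np.allclose(np.diff(recon_aligned), np.diff(recon)))
```

Because only a constant is added, differences between any two points in the series are unchanged, which is why the choice of baseline cannot affect conclusions about the size of the 20th century rise.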
Given that the authors state that “20th century portion of our paleotemperature stack is not statistically robust, cannot be considered representative of global temperature changes, and therefore is not the basis of any of our conclusions”
Why was this portion included initially and why did the paper’s authors draw attention to it in the NSF press release and in many interviews?
[Response: The uncertainties increase towards the recent end because of proxy drop out, and that is reflected in the error bars and in the sensitivity to method. The authors had to make a decision where to end it, and they chose 1940, which is a balance between inclusion and uncertainty. In interviews they discussed the Holocene trends with respect to the 20th Century warming, for which there is abundant additional evidence. Given that people are perhaps understandably very focused on the relationship between long term trends and the 20th C, not discussing this would have been untenable. – gavin]
[Response: I find it amusing that Roger thinks that NSF should withdraw a claim that 20th Century temperatures have risen, and that figures should be censored to ‘hide the incline’. – gavin]
Would the authors care to comment upon why this non-significant uptick was absent from Marcott’s thesis that this paper was based upon?
[Response: The thesis was finished 2 years ago, and there has clearly been further work done on the reconstruction since then. Some differences are in the amount of ‘jitter’ introduced into the Monte Carlo age model variations and the smoothing (100 years vs. 50), plus much of the exploration of spatial and variance-related sensitivities. The treatment of the core-top dates may have also changed – but I will ask the authors to comment further. – gavin]
There does seem to be something a bit odd going on since yesterday’s FAQ. Roger Pielke Jr. also suffers from the same reading comprehension malady as McIntyre. Both seem to have missed that Marcott et al already discussed the 20th century ‘tick’ as not robust. Now Roger, with an assist from Andy Revkin, seems to be embarking on something a bit more troubling: the belief that a paper/scientist that does not deal with the 20th century temperature rise cannot comment on the differences between their own findings and the known temperature changes. He is coming amazingly close to asking scientists to censor themselves from making logical extrapolations from their own work. This is worth noting.
Some of the comments above are (sadly) quite telling–their obsession with the ‘uptick’ and the headlines despite the clear statements in the original abstract (and again in the present piece) that the uptick is probably not robust show clearly that for some, it’s never really about the science. For them, it’s ‘politics all the way down.’ Hence, the whole point of the paper ‘must be’ that current warming is unprecedented (even though it actually says otherwise!)
However, Marcott et al. (like many other papers, including MBH ’98, back in the day) presents clear internal evidence (in the form of inordinate amounts of time and effort expended) that yes, some folks really do care what the temperatures in the early Holocene (or the early modern era) were–regardless of what that may or may not say about our present predicament.
My reading of the reaction from the ‘usual crowd’ (ie Pielke Jrs, auditors, WUWT rabble etc) is that:
a) they haven’t read the paper or the supplement (eg “finally concedes” nonsense);
b) they have not the wit or maybe not the will to understand or appreciate science;
c) they believe we are still stuck in the Little Ice Age (eg Pielke diatribe).
I marvel at all the achievements that allowed this research and other work like it to come to fruition. The effort that must have gone into collecting proxy samples by so many people over so many years (in some cases the work quite probably entailed personal danger). The painstaking measurements and analysis. Not to mention all the previous research that identified what proxies to collect in the first place and how they can be used to indicate past temperatures and other things, development of tests etc. Then came the idea for this work and the further analysis by Shaun Marcott, Jeremy Shakun et al.
Clever and dedicated people have been quietly working to build up a vast amount of knowledge in only a few decades. They deserve congratulations and recognition.
It’s only the illiterati who scoff at and belittle such achievements instead of making the effort to understand and learn from them. This small but loud group are not deserving of any respect at all IMO.
During the past three weeks, I have watched for complaints by the original authors about redating the published cores. Since no-one has spoken I concluded that the original researchers have no problem with this procedure.
Now we see that Richard Telford notified Marcott et al about redating after original publication. In this particular case, Marcott et al did not redate a core: they simply used the best information available.
In honest discussion, the redating complaints should end.
I still don’t understand why the uptick is included if it is “not robust”. Surely the authors, editors, and reviewers are aware of the publicity and scrutiny that studies on this topic receive. News articles and bloggers aren’t all going to care about qualifying discussions in the paper nor will they ensure conclusions they draw are consistent with the authors’. I understand that the authors had to draw the line somewhere but to not question the inclusion of the uptick and then later wonder why it is being picked on seems disingenuous (or obtuse;)).
[Response: The main point of the paper is the Holocene reconstruction. As you get to the end, there is data drop out which increases uncertainty. Deciding where to end it is a judgement call and as long as the increase in uncertainty is made clear (and it was), I don’t see that the authors have done anything untoward. People are always free to draw their own conclusions from papers. It was indeed predictable that this paper would be attacked regardless of where they ended the reconstruction since it does make the 20th C rise stand out in a longer context than previously, and for some people that is profoundly disquieting. Thus if it wasn’t this issue that got the attention something else would have – see for instance the 17(!) posts attacking the paper on WUWT as they try to find some mud that will stick. – gavin]
I think RC would come off as a bit less partisan if they would note honestly that in Revkin’s dialogue with Shakun, Shakun was plainly euphoric about the uptick. To pretend that the uptick was never of any consequence is to dissemble.
[Response: Euphoric? Hardly. But he did strongly note the contrast between the long term cooling trends since the Holocene and the recent rise in temperatures. This ‘new rule’ that people aren’t allowed to talk about conclusions drawn from other work when speaking to journalists is an odd one, and indeed one I haven’t ever seen applied to anyone else. I wonder why that may be? – gavin]
If not “euphoric,” Shakun was quite pleased about the uptick, and – far from dismissing its importance,as so many here now do – he made it a highlight of his 30-second “elevator speech.”
[Response: No – his elevator pitch was talking about actual temperatures and the contrast between the 10,000 year trends, the recent changes and where we are headed (‘outside the elevator – boom!’). Obviously you aren’t claiming that he can’t talk about model projections because none of his proxies go out to 2100, so why you feel he can’t talk about the 20th Century rise in temperature is strange. – gavin]
Gavin – “This ‘new rule’ that people aren’t allowed to talk about conclusions drawn from other work when speaking to journalists is an odd one”
If the authors were clear that they were comparing the recent measured temperature record against their proxies, then your statement would be valid. Is this what you are asserting?
It is certainly a valid interpretation that the authors were comparing *** their own *** last century proxy results against the past millennium, which they now declare as not robust.
Inquiring why the authors made these statements, without qualification, is a legitimate question.
[Response: No it’s not. It’s just another red herring ‘question’ that always comes up when people don’t like the results. It is just so much easier to try and shoot the messenger and convince yourself that there is nothing here to be understood that these things don’t surprise me in the least. But I’m not going to convince you of anything, so I’ll pass on further micro-parsing of their statements. The fact of the matter is that the 20th C rise is real, anomalous, pretty well understood, and because of where we are economically/societally/technologically, it presages what we can expect in the future. People harassing newly-minted postdocs doesn’t change any of that. – gavin]
Since Marcott et al ends in 1950 and the IPCC has concluded the temperature increases pre 1950 are due to natural causes, Marcott cannot be showing us anything about the human influences on temperature.
[Response: Marcott ends 1940. IPCC concluded no such thing, and Marcott’s work does place the 20th Century rise in the context of the long term natural trends. – gavin]
Rather, it must be concluded that what Marcott is showing is only the natural variability in temperature and if any weight is to be given to the uptick, it shows that at higher resolutions there may be significant temperature spikes due to natural causes.
[Response: Of the size and magnitude of the 20th Century – unlikely. Even the 8.2kyr event which is the biggest thing in the Holocene records in the North Atlantic is small comparatively. It would definitely be good to get more high resolution well-dated data included though, and Marcott’s work is good basis for that to be built from. – gavin]
Response: 20th Century rise in the context of the long term natural trends… Even the 8.2kyr event which is the biggest thing in the Holocene records
NOAA shows the 8.2kyr event as greater than 3.0C, while GISS shows the 20th Century rise as less than 0.7C.
[Response: Those are local signals around the high North Atlantic; averaged over the planet they are much smaller. – gavin]
When one looks at long duration events we have the Younger Dryas event, with a temperature change of approximately 15C as compared to 20th century warming of less than 0.7C.
“Near-simultaneous changes in ice-core paleoclimatic indicators of local, regional, and more-widespread climate conditions demonstrate that much of the Earth experienced abrupt climate changes synchronous with Greenland within thirty years or less. Post-Younger Dryas changes have not duplicated the size, extent and rapidity of these paleoclimatic changes”.
[Response: This is becoming a habit – the YD change of ’15ºC’ is a Greenland signal. It is not in phase with (much smaller) changes in the high Southern Latitudes, and in ocean cores, even in the N. Atlantic, it is much smaller. An estimate of the impact of the YD on global mean temperatures is found in the Shakun et al (2012) paper, and is likely less than a degree. Note that the whole glacial to interglacial change is only about 5ºC in total! – gavin]
I went back to the Skype call. Shakun was engaged and animated; I couldn’t see any euphoria. I did however think he would be a good guy to drink beer with.
Revkin wanted to talk about the 20C so Shakun did. He was, however, always careful to make the distinction between the paper and the instrumental record. In this, he was consistent with the paper and the FAQs.
That some want to hold Marcott et al responsible for what others write about the paper or the headlines that others compose does seem a bit silly.
One cannot conclude from Marcott that there have been no short term events of similar magnitude to the Younger Dryas (approx 15C) within the period covered by Marcott. Such events would be hidden by the lack of resolution so long as they were shorter than twice the sampling interval, as per Nyquist.
[Response: These are the same core that show the YD! You can’t claim that a YD like event wouldn’t be shown in the same cores. Please, you can do better than this. – gavin]
The fact of the matter is that the 20th C rise is real, anomalous, pretty well understood…
The 20th C rise is clearly real. But with all the uncertainties stated (and restated) by Marcott et al., there’s little to support calling it “anomalous.” And the recent divergence of observed temperatures from predictions suggests it is not “pretty well understood.”
[Response: You can think whatever you like. – gavin]
The FAQ makes it clear that Marcott et al. have measured the broad Holocene temperature history at a maximum resolution of ~100 years. This shows a clear cooling trend over the last 5000 years. The data says nothing at all about any trends over the last 100 years.
It is only by comparing instrument data eg. Hadcrut4 with the Marcott data that statements like
“We conclude that the average temperature for 1900-1909 CE in the instrumental record was cooler than ~95% of the Holocene range of global temperatures, while the average temperature for 2000-2009 CE in the instrumental record was warmer than ~75% of the Holocene distribution.”
can be made. I think it would have been better to have made this fact much clearer in the various press releases and media interviews.
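As an aside, the kind of percentile statement quoted above can be illustrated with a toy calculation: rank a decadal-mean value within a distribution of century-scale anomalies. All numbers below are synthetic stand-ins, not the paper’s data or the real instrumental values:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the distribution of Holocene century-scale anomalies.
holocene = rng.normal(loc=0.0, scale=0.2, size=1000)

def percentile_rank(dist, value):
    """Fraction of the distribution lying below `value`."""
    return float(np.mean(dist < value))

# Hypothetical decadal means (illustrative only).
cool_decade, warm_decade = -0.4, 0.3
print(percentile_rank(holocene, cool_decade))  # small: cooler than most of the range
print(percentile_rank(holocene, warm_decade))  # large: warmer than most of the range
```

The real calculation also has to propagate the reconstruction’s uncertainties through the distribution, so this is only the skeleton of the comparison.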
“This fact” was clear from the beginning. It was stated, more than once, in the paper. It was stated, more than once, by Shakun in his Skype interview. It was stated, more than once, in the FAQ.
What do you require? Should “this fact” be written on a 2×4 and be emphatically and repeatedly applied?
Marcott et al. tells us what temperatures were doing over the past few thousand years. GISTEMP, BEST, etc. tell us what temperatures have been doing over the past century.
Clearly a lot of people desperately want to forbid any comparison of A to B. Just because it hurts Anthony Watts’s feelings, though, doesn’t mean that the rest of us have to promise not to put 2 and 2 together.
Please do let me know if this adaptation is misrepresenting the outlook choice or history in any policy-relevant way as I would like to get it right. Obviously this is intended purely as messaging to summarise the recent science as currently understood and not as a comment on or development of the science.
@ The_J (55).
GISTEMP and BEST provide data based on one-year means. Marcott et al provide data based on 100-year means. For any previous century without instrumental data (eg. from year 1000 to year 1100), we have no way of knowing whether annual temperatures fluctuated over the century by zero degC, by 0.2 degC, 0.5 degC or even 1.0 degC. Therefore, we cannot use a comparison of GISTEMP/BEST and Marcott et al to judge whether 20th C fluctuations are normal or unusual.
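The resolution point can be made concrete with a toy example: a hypothetical 1 degC warm spike lasting 30 years survives a 100-year running mean only as a bump of roughly a third of its amplitude. A sketch with synthetic data, not any actual proxy series:

```python
import numpy as np

# A flat annual series with one hypothetical 1 degC spike lasting 30 years.
annual = np.zeros(1000)
annual[500:530] = 1.0

# 100-year running mean, mimicking the reconstruction's effective resolution.
window = 100
smoothed = np.convolve(annual, np.ones(window) / window, mode="same")

# The 30-year, 1 degC spike survives only as a ~0.3 degC bump.
print(annual.max(), smoothed.max())
```

Whether such spikes are physically plausible is a separate question (taken up below), but this is why the reconstruction by itself cannot rule them out.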
Does Tamino’s deconstruction by latitude suggest that the Southern Hemisphere did not experience much change from end-glaciation to now, in fact a slight cooling? that the Holocene is/has been largely a NH experience (due to continental vs marine ice masses)? That the tropics were little affected once the glaciers started to melt?
If the beginning of the Holocene to now in the SH didn’t show much change, one wonders what happened during the glaciation relative to the NH.
Are we seeing sudden, serious climatic variations on a hemispheric level happening because of ocean currents, not atmospheric temperatures?
Everyone from the NYT to the Daily Telegraph to McIntyre interpreted the statement as a direct result from the proxy data. It would have been easy to clarify immediately. There remains some uncertainty in my mind regarding the normalisation of proxy anomalies from 5000 ybp to 1961-1990. I just added 0.3 C to the Marcott anomalies. What is the net shift used in the paper?
The phrase “not robust” keeps being used. Perhaps someone could explain exactly what this means in scientific terms. Content-free? Unreliable? Weak? Wrong? Vacuous random noise? In probabilities, anywhere from potentially 5% to 95% accurate, but nobody knows? See, eg, the work of William C. Wimsatt on ‘scientific robustness’.
In this matter “should not have been included” seems a good definition.
[Response: it’s not so hard. Any kind of complex analysis involves choices, and for each choice there are usually conflicting justifications. A result is robust if it is the same regardless of the choices made. If the result changes a lot depending on choices that can’t be a priori decided then it isn’t. Easy. – gavin]
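That definition of robustness can be illustrated with a toy sensitivity test: recompute a headline statistic (here, a fitted trend) under several defensible values of an arbitrary analysis choice and see whether it moves. All data below are synthetic, made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
series = 0.1 * t + rng.normal(0, 0.3, t.size)  # synthetic noisy warming trend

# An arbitrary analysis choice: the length of the smoothing window.
def trend_after_smoothing(y, window):
    smooth = np.convolve(y, np.ones(window) / window, mode="valid")
    return np.polyfit(t[: smooth.size], smooth, 1)[0]  # fitted slope

# Recompute the trend under several defensible window choices.
slopes = [trend_after_smoothing(series, w) for w in (5, 15, 25, 51)]
print(slopes)
```

Here the fitted slopes barely move across the window choices, so the trend is robust in gavin’s sense; if they scattered widely, the result would depend on an undecidable choice and would not be.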
You mean, aside from conservation of energy. Find me a mechanism that will cause temperatures to fluctuate by a degree on decadal timescales but remain unchanged on centennial timescales. THAT’s a neat trick.
SecularAnimist is wrong to say this discussion is the “death throws of organised denial” (whatever that is). Instead it shows that climate scientists cannot admit even the slightest error, and it is this failure which drives many people such as myself to accept the sceptical position.
It does not even seem possible for RC to say something like “while accurate, on reflection the publicity was mis-judged” or that “Perhaps better choices could have been made about the end point of the graph”.
[Response: What tosh. We have criticized many press releases and suggested many papers could have ‘done something a little better’. But this isn’t one of those cases – instead we have people making a huge mountain out of an irrelevant molehill. Your assessment is based simply on your prior judgement and an inability to accept that other people can genuinely come to a different conclusion. You find it easier to impugn our integrity rather than agree to disagree. Readers can judge who is more credible. – gavin]
Once again we see Gavin patiently and expertly responding to a long series of determined point missers – what looks like (taking other web sites into consideration) powerful organized denial.
As Gavin @ 46 says:
“It was indeed predictable that this paper would be attacked regardless of where they ended the reconstruction since it does make the 20th C rise stand out in a longer context than previously, and for some people that is profoundly disquieting.”
My deconstruction by latitude is remarkably similar to that of Marcott et al. except for the 20th century. Note in fact that I showed their reconstructions for these latitude bands as well and compared them to mine (which are done by the “differencing method”). Look at panels I, J, and K in their figure 2, you’ll see essentially the same thing.
Very nice graph! One thing I reacted to was the “Temperatures unknown to humans”, as humans probably were around during at least the last interglacial, which was a little warmer than the present. Perhaps it should say “Temperatures unknown to human civilization”?
I also wonder about the projections for 2100? I guess it is the IPCC 2007 upper range projections. This should be annotated, and I would choose a different color for those projections than the temperature reconstruction (maybe orange and replace the A1B line).
The only cause for concern would be if the original paper was sufficiently incorrectly determining the paleoclimate reconstruction or if the modern instrument record was sufficiently incorrect to invalidate any conclusions comparing the two.
I’m not seeing either point in any of the objections.
This is an excellent piece of work. Like raypierre @3 I have concerns that it is being over-interpreted in the popular press (eg the Times article) and the blogosphere. Even the raw Marcott data is time-smoothed with a bandwidth of about 120 years plus. That means the smoothing may hide high frequency transients (such as spikes) or even high frequency oscillations. So it’s not strictly appropriate to add the recent instrumental record on to the end of the Holocene reconstruction and compare the two as done by the NYT, Tamino & others. It’s conceptually the same error as comparing weather and climate.
I guess there are two caveats to this:
1) Science communication seems to require some degree of simplification, but at what point do we throw the baby out with the bathwater?
2) There is good reason to believe the recent surge in global temperatures is the start of a larger more-prolonged episode due entirely to human actions. This is the key message, and this finding helps highlight that.
I was venting about this to my father (PW Anderson), and he mentioned that he had read the article in Science and I could quote him, and even found the issue for me. Since he will be 90 soon and prefers to stay out of this donnybrook, this is quite a compliment, and I hope Marcott will see it!
He said he was impressed; the article was “very clean” and “well put together”.
#64: Definition (and note spelling): “1. A severe pang or spasm of pain, as in childbirth. See Synonyms at pain.
2. throes A condition of agonizing struggle or trouble: a country in the throes of economic collapse.”
#63: Well, Ray, one can hope. Perhaps it’s just me, but the quality of argument amongst the usual suspects isn’t getting any higher–rather, I think, the reverse.
I have a (different, probably easier) question regarding the Marcott Study.
I was never under the impression that direct changes in obliquity (compared to precession, for example, which is of course modulated by eccentricity) were the main driver of Holocene climate change, in particular the high-latitude seasonality. This is mentioned in their text on a few occasions. Did they get this right?
There is a message in Marcott that I think many have missed. Marcott tells us almost nothing about how the past compares with today, because of the resolution problem. Marcott recognizes this in their FAQ. The probability function is specific to the resolution. Thus, you cannot infer the probability function for a high resolution series from a low resolution series, because you cannot infer a high resolution signal from a low resolution signal. The result is nonsense.
However, what Marcott does tell us is still very important and I hope the authors of Marcott et al will take the time to consider. The easiest way to explain is by analogy:
50 years ago astronomers searched extensively for planets around stars using lower resolution equipment. They found none and concluded that they were unlikely to find any at the existing resolution. However, some scientists and the press generalized this further to say there were unlikely to be planets around stars, because none had been found.
This is the argument that since we haven’t found 20th century equivalent spikes in low resolution paleo proxies, they are unlikely to exist. However, this is a circular argument and it is why Marcott et al has gotten into trouble. It didn’t hold for planets and now we have evidence that it doesn’t hold for climate.
What astronomy found instead was that as we increased the resolution we found planets. Not just a few, but almost everywhere we looked. This is completely contrary to what the low resolution data told us and this example shows the problems with today’s thinking. You cannot use a low resolution series to infer anything about a high resolution series.
However, the reverse is not true. What Marcott is showing is that in the high resolution proxies there is a temperature spike. This is equivalent to looking at the very first star with high resolution equipment and finding planets. To find a planet on the first star tells us you are likely to find planets around many stars.
Thus, what Marcott is telling us is that we should expect to find a 20th century type spike in many high resolution paleo series. Rather than being an anomaly, the 20th century spike should appear in many places as we improve the resolution of the paleo temperature series. This is the message of Marcott and it is an important message that the researchers need to consider.
Marcott et al: You have just looked at your first star with high resolution equipment and found a planet. Are you then to conclude that since none of the other stars show planets at low resolution, that there are no planets around them? That is nonsense. The only conclusion you can reasonably make from your result is that as you increase the resolution of other paleo proxies, you are more likely to find spikes in them as well.
MODS: This post’s artificial controversy is centered on a specific graph. IMHO, that graph oughta be front-and-center. For your readers to have to link to Denialist sites to see the graph (as everyone knows, seeing the actual paper is against the rules for regular folks), well….
58 Quer said, “For any previous century without instrumental data (eg. from year 1000 to year 1100), we have no way of knowing whether annual temperatures fluctuated over the century by zero degC, by 0.2 degC, 0.5 degC or even 1.0 degC.”
Doesn’t make sense to me. You’re really talking about four ~1C deviations: first, the ~1C rise in temps, then the ~1C decline to get back to “average”. Then, you’d need to compensate for the up-wiggle with another ~1C decline, and finally end with a second ~1C rise to get back to “normal”.
Fudge it as you desire, but to “hide the incline” would require something close to four 1C deviations over a few hundred years. That would surely leave a tremendous mark on the biosphere and our civilizations. Even if you could cobble up a scenario that this particular study wouldn’t pick up, it would leave traces. For example, freeze-intolerant species could show up in a core for a century and then disappear. Agriculture, cities, all kinds of stuff would be affected by such wild and constant swings. Are you saying that such things would be overlooked by the biologists who study the same (and other) cores and by the anthropologists who study past civilizations?
62 Ray L said, “You mean, aside from conservation of energy. Find me a mechanism that will cause temperatures to fluctuate by a degree on decadal timescales but remain unchanged on centennial timescales. THAT’s a neat trick.”
You forget Arthur C Clarke’s, “Any sufficiently advanced technology is indistinguishable from magic.”
Since ALL science is sufficiently advanced for that statement to apply to Denialists, they need not even reply.
But since you ask, a serious (Yellowstone) volcano at the beginning of a low solar period, followed by a solar max and almost no volcanoes might do it. (probably not, but hey, just speculating)
Now, if you add in, “a mechanism that leaves NO trace other than temperature changes”, then we’re into seriously Hogwarts territory….
AndyL #64: try a google scholar search on climate science error. There. I did it for you. Over 1.8-million hits. Even if you take out the papers where the word relates to “error bar” or “error bars” you still have 1.7-million hits to choose from.
Anyone attracted to a side that is extremely self-critical would not feel comfortable with the self-styled sceptics. Any argument goes, including some that are complete balderdash. Here is a compendium of articles from one of my blogs that makes the case. Just the tip of the iceberg. Read their stuff really sceptically and you will find a lot more.
Anyone who thinks climate scientists never admit to errors has never read the literature.
Here’s an example for you: previous studies of the volume and depth of Antarctic ice turn out to be flawed; the latest British Antarctic Survey paper on the subject corrects the error. It seems the Antarctic is a rather smaller land mass than previously thought and if all the ice went, a large part of it would be isolated islands.
@67 Perwis Thanks! And thanks for the suggestions. You are right I need to make the differences in projections clearer. The intention in the graph is to demonstrate policy choices faced now, but the original data should be separated more as you say.
@RealClimate.org Many thanks for this blog in general and this Q&A/discussion in particular. All immensely helpful.
I continue to be most dismayed by academics like Pielke Jr who continue to work so hard to miss the point. His critique of the reconstruction, using McIntyre as a reference, is made especially absurd by his attempt to cloud the actual paper and its evidence while basing his attack on not liking the press release. At least many of these attacks are becoming seriously silly, which could be viewed as heartening. It seems the same in the media: Revkin did not like this Q&A because it came out on a Sunday. You couldn’t make it up.
Quercetum writes: For any previous century without instrumental data (eg. from year 1000 to year 1100), we have no way of knowing whether annual temperatures fluctuated over the century by zero degC, by 0.2 degC, 0.5 degC or even 1.0 degC. Therefore, we cannot use a comparison of GISTEMP/BEST and Marcott et al to judge whether 20th C fluctuations are normal or unusual.
Or do you disagree with that?
Yes, I disagree with that. First of all, the present CO2-induced temperature rise is almost certainly going to be around for longer than a century. A paleofeature of equivalent magnitude and duration would certainly show up in Marcott’s reconstruction.
Secondly, we know that the current temperature rise is real and we know the physical basis for it, but as far as I know nobody has suggested a physical mechanism that would cause sub-century-scale global “warm spikes” on the order of 1C. And we see nothing like that in the high-resolution proxy record, do we? Before you ask me to believe in gremlins, shouldn’t you have some line of reasoning — a physical process that would produce gremlins, or paleoevidence of gremlins?
Remember that such a spike has to be *global* not just local, it has to be sustained for at least a few decades but not as long as a century, it has to be large (on the scale of 1C) and it has to be *warm* not *cold*.
Can you give me an example of when such an event occurred, and/or an explanation of what physical process did or would produce it?
Nancy Green, I would be more inclined to respond to your analogy about detecting planets around other stars, if you acknowledged how seriously wrong your previous comments in this thread have been.
You claimed that the Younger Dryas was a 15C swing in temperatures, and that the Younger Dryas isn’t detectable in the proxies. But both of those claims are wildly, absurdly wrong. I think you may have been confused by a statement on Wikipedia that the summit area on Greenland was 15C colder during YD than it is today. But that’s not a global-scale change of 15C! As discussed here, the entirety of the change from last glacial maximum to the Holocene was approximately 5C, and the Younger Dryas was approximately 0.6C. And of course the Younger Dryas is readily detectable in climate proxies … that’s how we know about it.
#73–A cross-post from Tamino’s where I called it “ingenious but perverse.”
Nancy’s point boils down to: ‘It is happening now, therefore it probably happened before.’
Except that we know quite well why it is happening now–it is because humanity has increased atmospheric CO2 by ~40% since the 19th century, with a fillip of miscellaneous other greenhouse gases. We are pretty much certain that that has never been done before–that whole Plato “Atlantis thing” being pretty much debunked and all.
So the ‘logic’ requires us to ignore what we do know about the present–the very period about which we have the best, most complete, least uncertain information.
This follows a very familiar path (Yamal springs to mind). Step 1 – Create a controversy. Step 2 – rely on sympathetic outlets to amplify the supposed controversy. Step 3 – try to mire other scientists in the supposed controversy. Makes no difference to science, and it matters to almost no-one outside the denier-sphere and their chosen targets. Nevertheless, I am grateful to this blog and to Tamino for clarifying matters.
@ 73 One fine analogy deserves another. You bury your head in your GPS but you don’t look out the window. Worse, you fail to imagine the reason to. It has been explained to you, but you don’t bother to look up.
You are explaining GPS to people who already know how to use GPS, but you still don’t know where you are.
Wow! Your father’s almost 90. That makes me feel old. Then again, that means that when I interacted with him while I was at Physics Today he was in his 70s. I wouldn’t have guessed that even then. My best to him.
I don’t think I can ever remember seeing the denialati so terrified!
And remember: when dealing with weapons-grade stupid, wear protection!
Marcott et al. disavow any statistical significance in their paleotemperature reconstructions for the recent 100 year period. In plain language, there is no evidence to be drawn from their work which would indicate that the temperature trend in the past 100 year period is different from the trend in earlier periods studied. Marcott et al. state the reasons in their FAQ at “Q: What do paleotemperature reconstructions show about the temperature of the last 100 years?”
The overlay of data from other sources and the adjustment of the authors’ data baseline to conform to the Mann series serves to put the authors’ work in a comparative context. The only conclusion to be drawn from this is that the authors’ paleotemperature reconstructions are not at variance with models forecasting future average global temperature.
I agree with nearly everything you write. However I don’t think there is strong evidence for the following statement.
“However, the reverse is not true. What Marcott is showing is that in the high resolution proxies there is a temperature spike. This is equivalent to looking at the first star with high resolution equipment and finding planets. To find a planet on the first star tells us you are likely to find planets around many stars.”
I studied only the measurement data. I binned the data in 50 year intervals to avoid interpolation, and then used the Hadley 5×5 grid global averaging software. The results for the published dates are here. The results for the corrected dates are here. Both sets agree with the instrument data, but there is no clear uptick.
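For readers who want to try the binning step themselves, here is a minimal sketch of averaging irregularly spaced proxy data into fixed 50-year bins, as an alternative to interpolating onto a fine grid. The synthetic data, bin width, and noise levels are illustrative assumptions, not the actual workflow or data described above.

```python
import numpy as np

# Illustrative synthetic "proxy" data: irregular ages (years BP) with a slow
# signal plus noise. All numbers here are assumptions for demonstration.
rng = np.random.default_rng(0)
ages = np.sort(rng.uniform(0, 11300, 400))
temps = 0.5 * np.sin(ages / 2000.0) + rng.normal(0, 0.3, ages.size)

# 50-year bins spanning the Holocene
bin_edges = np.arange(0, 11300 + 50, 50)
bin_idx = np.digitize(ages, bin_edges) - 1

# Average the samples that fall in each bin; empty bins stay NaN
binned = np.full(bin_edges.size - 1, np.nan)
for i in range(binned.size):
    in_bin = temps[bin_idx == i]
    if in_bin.size:
        binned[i] = in_bin.mean()
```

Binning this way avoids interpolation entirely: bins with no samples simply stay empty rather than being filled with values manufactured between data points.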
I think it would be more correct to say that Marcott’s result is compatible with the observed instrumental rise in 20th century temperatures.
Creationists have been predicting the imminent demise of Darwinism for 150 years. Deniers have been predicting the imminent end of warming and AGW for 30 years (while simultaneously admitting “Of course the planet is warming up” when pushed).
Their prognostications have been, and will continue to be, just as much in vain as before.
Simon, to the extent that you think that spikes like the current one lurk in the low-resolution history of the past, could you explain the physics of such spikes? What would cause them to start and stop?
“Nancy is saying ‘if we were able to peer more and more closely into the past we would find that it looked more and more like the recent present’…”
Except, of course, where we can, via specific proxy records, it doesn’t. She makes an assertion not backed by evidence, and, where the evidence exists, not consistent with that evidence.
“Kevin, everyone now knows that such an assertion is no more than mere hypothesising, flagrantly and unashamedly begging the question. Just watch while the tenuous link between increasing atmospheric CO2 and global surface temperature grows ever weaker, almost by the month.”
@garhighway #89 “could you explain the physics of such spikes?”
I really shouldn’t have to. They’re only “spikes” because of the grossly compressed x-axis needed for millennial presentation. If you made the x and y axes commensurate you’d see ordinary natural variation rises and falls.
Simon and Nancy have been asked repeatedly by others here to propose a *physical basis* for all these spikes that are thought to be hiding in the paleo records due to the mean proxy resolution of ~120 years in the proxies used by Marcott et al. So, what is it? Fairies? Leprechauns?
What could cause, say, a 0.5C upwards spike in temps over 50 years that would also dip back underneath the radar in the next 50 years? The answer is: nothing that we know of. For the Simons and the Nancys of the world it is simply enough that someone like McIntyre can cast doubt. It doesn’t matter if there is no rational basis for the doubt.
And so these elusive spikes become the climate science equivalent of evolution’s ‘god of the gaps’.
Do you have any idea how rare it is to see the temperature spike, say, a degree in a few decades? Do you have any comprehension of the energies involved? Have you heard of conservation of energy? Or does the desperation behind your magical thinking make you willing to give up all rationality in the Universe?
Dr Schmidt, I haven’t visited RC for some time and I’m struck by the difference in approach to posters. I congratulate you on your patient, considered, informative and numerous responses. I can’t think of another blog dealing with climate science where the number and calibre of responses even approaches, let alone equals, the level reached here.
Gavin, Ray, and others. The authors chose your site to post the FAQ’s. Why is it that you don’t insist on having them answer questions, rather than the sometimes ambiguous answers you are forced to give. Not sarc, a serious question.
[Response: I’ve asked them to chime in when they can and hopefully they will. I’m not quite sure why you think we can ‘insist’ on anything though – blogging and/or commenting here or elsewhere is a voluntary activity and sometimes other things take precedence. I would much rather have a few considered responses come in slowly than hurried responses to dozens of queries. -gavin]
Indeed, what physical mechanism could cause a spike that causes 25*10^22 Joules or more of warming (http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/), and then disperses it within 300 years?!? A spiking mechanism, I should note, other than the 40% increase in CO2 we’ve caused over the last one and half centuries?
Gremlins, I would say. Definitely gremlins…. If you have neither a plausible mechanism, nor evidence, for such a spike (there’s lots of evidence against such occurrences, in fact), you’re just blowing smoke, promoting nonsense and confusion.
We _are_ responsible for recent warming, whether you like it or not.
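To put a rough number on the energy argument above, here is a back-of-envelope conversion of the cited ~25*10^22 J of ocean heat uptake into a mean warming of the 0-700 m layer. All figures are rounded textbook values I am assuming for illustration, not numbers from the paper or the NODC page.

```python
# Back-of-envelope: mean warming of the upper ocean implied by 25e22 J.
# Every constant below is an assumed round number, not a measured value.
energy = 25e22     # J, cited upper-ocean heat content change
area = 3.6e14      # m^2, approximate global ocean surface area
depth = 700.0      # m, layer considered
rho = 1025.0       # kg/m^3, approximate seawater density
cp = 3990.0        # J/(kg K), approximate seawater specific heat

mass = area * depth * rho          # kg of water in the 0-700 m layer
delta_t = energy / (mass * cp)     # implied mean warming of that layer
print(round(delta_t, 2))           # roughly 0.24 K averaged over the layer
```

Even spread through 700 m of ocean, that energy corresponds to a couple of tenths of a degree — which is why a mechanism that could both deliver and then disperse it within a few centuries, leaving no proxy trace, is so hard to imagine.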
Gavin, “I’ve asked them to chime in when they can and hopefully they will. I’m not quite sure why you think we can ‘insist’ on anything though – blogging and/or commenting here or elsewhere is a voluntary activity and sometimes other things take precedence. I would much rather have a few considered responses come in slowly than hurried responses to dozens of queries. -gavin”
Perhaps “insist” was not the proper word. I just think it odd that Marcott et al chose RC to post their FAQ’s, but thus far have not chosen to participate.
Hello, I found this FAQ after reading an update in the New York Times.
I read RealClimate occasionally as a non-scientist trying to understand the issues. I say occasionally because I hate the personal attacks and the gotcha approach I see on both sides. Sometimes it is like being invited to dinner and the hosts get in a huge fight that makes one want to crawl under the table.
I do feel a great deal of sympathy for authors who have done their best and then have to face what sometimes seems to be a pack of hungry wolves. And even as a lay-person I can see a great deal of bad-faith argument: splitting of hairs, deflection, focusing on language, etc. I really want to hear explanations that are clear and say: this is what our data shows, this is what we THINK it suggests, but doesn’t prove, and this is what it doesn’t really help with one way or another, even if we wish it did. That is why I appreciate the FAQ the authors have done. The FAQ takes into account that this is an important issue not just to scientists.
I recognize that scientists often rely on the press to explain their results to the public and that we can’t hold the scientists responsible for the press getting it wrong — as seems to be happening here to some extent.
I don’t agree with some posts that latch on to the press articles and ignore the clear statements of the authors in the paper. But I also can’t agree with the dismissive posts about a non-issue being ginned up. As a member of the public, I read the New York Times story, and Mann’s comments elsewhere and came away thinking that this paper said something that the FAQ seems to contradict. The Times, to their credit, followed up and that is how I found this FAQ. But I can understand why the “skeptics” jumped on the issue – and I think the FAQ, and a discussion of what the paper does and does not show is the appropriate response. Not back and forth accusations and ad hominem attacks I see here and (especially) on other blogs.
But I really do want to understand this, so I am commenting for the first time on this blog.
I hope the experts will indulge an outsider and someone will read this and try to explain it to me — and I apologize to everyone else for the level of my understanding …
In reading the FAQ, I THINK the authors are saying that some reporters (and some third-party scientists) commenting on their paper got a couple of things wrong. They clarify (or reiterate) that 1) their reconstruction for the past century is not robust. Therefore one could not say that their reconstruction independently reconfirms the sharp temperature rise of last century. Rather, that sharp rise is recorded in the previously published temperature record. Is that right? 2) the paper does NOT compare the rate of the sharp rise in the past century to the rate of any rise or fall in the paleotemperature reconstruction because the temporal resolution of the reconstruction is larger than 100 years — and other factors in the proxies tend to increase this “smoothing”. Thus, a sharp uptick or downtick in temperature during 100 or 200 years would not have been preserved in the paleoclimate record. Did I get that right?
None of that diminishes the value of the paper in extending our understanding (I really mean YOUR understanding) of the paleotemperature record to an earlier period. But it does mean that newspaper reports that said the paper confirmed the hockey stick or that it showed that the rate of rise in the past century is unprecedented (over the long period studied) are not correct. The observations in the paper are not INCONSISTENT with the hockey stick or the claim that the sharp rise is unprecedented. The reconstruction is simply not robust enough on those points to be said to confirm or prove them.
I really hope I got the above right because that’s the part I think I UNDERSTAND. And I would appreciate any of the experts gently correcting me if I missed the point.
Here is the part I am having trouble with. The FAQ also includes the following:
“Holocene Temperature Distribution: Based on comparison of the instrumental record of global temperature change with the distribution of Holocene global average temperatures from our paleo-reconstruction, we find that the decade 2000-2009 has probably not exceeded the warmest temperatures of the early Holocene, but is warmer than ~75% of all temperatures during the Holocene. In contrast, the decade 1900-1909 was cooler than ~95% of the Holocene. Therefore, we conclude that global temperature has risen from near the coldest to the warmest levels of the Holocene in the past century.”
This statement seems very significant on its surface. When I first read it, it almost seemed to be saying what the part of the FAQ i discuss above says the paper did NOT prove: that the sharp spike in the last 100 years (and the jump in the 2000-2010 decade) are very unusual and perhaps unprecedented in the long record.
But on closer reading I THINK it really is saying we had an unusually cold decade (1900-1909) and 100 years later we had an unusually hot decade (and a sharp 100 year rise in between). And that the sharp rise took us from a point near the bottom of Holocene temperatures to a point near the top of the Holocene temperatures. And it is one of the main findings listed at the top of the FAQ, so I feel like I am missing the significance. Is it saying something more than my summary in the first two sentences of this paragraph?
I guess the question I am asking is, yes there is a 100 year spike, and now we see how that spike compares to smoothed out temperatures in sometimes two or three hundred year periods — but does this paper show that that spike over that temperature range tells us something new about the last hundred years as it compares to the past? (Ie. why is it a key finding?)
Based on the paper, what are the chances that similar spikes — from unusually low to unusually high temperatures — have happened one, two, three or more times in the past (including in the warm part of the Holocene itself) and that the past spikes are all hidden in the smoothing? Is there a way to look at the data in the paper and figure out whether this 100 year spike is probably unusual or is probably common? With what level of confidence?
I know I am probably misusing terms or mismatching concepts — so please don’t jump on the mistakes. I REALLY do want to understand this. I know that if the paper doesn’t prove X that doesn’t mean the opposite of X is true. (And that even if the paper showed X, someone some day might expand upon or take issue with X).
I got interested in this because of the initial press coverage and now that it seems that coverage may have gone beyond what the authors of the actual paper intended, I just want to try to figure out what this one paper really, robustly and confidently did show.
On that note, thank you very much for the FAQ, the blog, and in advance for your patient, non-judgmental replies….
KR: I think your attribution is insufficient: 3-4 species of magical beings are needed, not just 1:
For the post-Industrial Revolution temperature history to have nothing to do with GHG effects and cause the upward part of a spike, we need:
a) A set of unknown-to-science positive forcing agents (gremlins) that generate temperature increases indistinguishable from those of GHGs AND
b) Another set of agents (leprechauns) that magically cancel the known effects of post-IR GHGs. They are needed because we certainly have good post-IR data on GHG increases, and these things must apply a negative forcing large enough to cancel GHG forcing.
Some people seem to want worldwide early Holocene spikes, whose upsides are similar to the post-IR, but those may require magical entities of different sorts:
c) We need entities that can raise the temperature over 50-100 years, getting the heat from somewhere, call them demons, in honor of Maxwell, perhaps. Until both gremlins and demons are measured and understood, they should not be assumed to be identical. Gremlins need to exactly match all the other side-effects of GHGs, and other current data, i.e., gremlins can’t be a huge upward spike in solar activity or cessation of centuries of extreme vulcanism. Demons may have more flexibility, to the extent they can sneak among higher-resolution proxies. Since demons are less-constrained, gremlins may be a subspecies of them.
d) But finally, we need one more kind, basilisks that implement the downward 50-100-year parts of early Holocene spikes, via unknown mechanisms that escape notice by any proxies. They differ from leprechauns, which nullify the effects of GHGs. They have to dissipate the heat content built up by the demons, and would be very useful today. I suppose they could be demons running in reverse. Basilisks seem to have died out (as weasels are not susceptible to their gaze and can harm them), but people are proposing modern equivalents under “geoengineering,” like giant solar shades, not available in the early Holocene.
For reasons others have given, it is hard for any of these to be state-change events like Younger Dryas or 8.2kyr. In any case, we need 3-4 magical species.
We could, of course, stick with science, but many prefer otherwise.
“The smoothing presented in the online supplement results in variations shorter than 300 yrs not being interpretable.”
Why then do many of those commenting above assume that a spike has to occur within 100 years? With a data resolution of ~120 years, for a feature that is “spike-like” in form to be clearly visible as a “spike” (though with diminished height), it would have to have a much longer duration, about 800 to 1200 years.
This is the obvious interpretation of “not being interpretable” at 300 years or shorter.
And most likely such a thing would look like any one of many small bumps clearly visible in the “averaged output” of the Monte Carlo simulation produced by Marcott (i.e., bumps with their heights greatly reduced).
So it is clear from their own qualifications, and from looking at their output, that many such large excursions (~0.5 C) in temperature have occurred (at least over an 800 to 1000 year interval). And, yes, there could be some of shorter duration in the paleoclimate record (prior to 2000 BP). How short on a global basis? We do not know the answer to that yet, until better global proxies are found, but ice cores would indicate there is a good probability of it (given the many such spikes they record locally).
So no one should be too certain about ruling out 100, 200 or 300 year “spikes”, just yet.
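One way to see the attenuation question concretely is a toy simulation: impose a 1 C spike of roughly century width on a flat record and smooth it to ~centennial resolution. The spike width, time step, and smoothing window below are illustrative assumptions, not the paper's actual Monte Carlo procedure.

```python
import numpy as np

# A 20-year time grid spanning the Holocene (assumed, for illustration)
t = np.arange(0, 11300, 20)

# A 1 degC Gaussian spike, ~100 years wide (sigma = 40 yr), placed at 6000 BP
spike = 1.0 * np.exp(-0.5 * ((t - 6000) / 40.0) ** 2)

# Smooth with a simple 300-year boxcar, standing in for centennial resolution
width_yr = 300
k = width_yr // 20
kernel = np.ones(k) / k
smoothed = np.convolve(spike, kernel, mode="same")

# The smoothed spike survives at roughly a third of its original height
print(spike.max(), smoothed.max())
```

Under these assumptions the spike is damped to about a third of its amplitude but not erased — consistent with reading “not interpretable” as “attenuated,” rather than “guaranteed invisible.”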
“The smoothing presented in the online supplement results in variations shorter than 300 yrs not being interpretable.”
The smoothing was due both to the Monte Carlo age perturbations and to the 20-year interval interpolation. The reason why, despite this smoothing, Marcott’s paper still showed a large spike at the 1940 end point has been explained by NZ Willy.
“Marcott performed 1000 “perturbations” on the raw data, which permuted each datum 1000x time-wise within the age-uncertainty of that datum. But Marcott set the age-uncertainty of the 1940 bin to zero. Therefore the 1940 bin is protected from the homogenization which affects all other bins, so its uptick is protected.”
If you look at the raw data binned in 50 year bins then you can see other spikes over the entire period: see here
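The quoted mechanism can be illustrated with a toy simulation: jitter a spike's age within its uncertainty many times, re-bin, and average. The bin width, uncertainty, and spike placement below are assumptions for illustration, not Marcott et al.'s actual code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins, bin_w, n_sim = 20, 50.0, 2000  # assumed bin layout and ensemble size

def perturbed_stack(spike_age, age_err):
    """Place a unit 'spike' at spike_age, jitter its age n_sim times,
    re-bin each realization, and average across the ensemble."""
    acc = np.zeros(n_bins)
    for _ in range(n_sim):
        age = spike_age + rng.normal(0.0, age_err)
        idx = int(np.clip(np.rint(age / bin_w), 0, n_bins - 1))
        acc[idx] += 1.0
    return acc / n_sim

pinned = perturbed_stack(500.0, 0.0)    # age uncertainty forced to zero
free = perturbed_stack(500.0, 150.0)    # assumed +/-150 yr age uncertainty
```

A spike whose age is jittered gets smeared across neighbouring bins in the ensemble average, while a spike in a bin with zero age uncertainty keeps its full height — which is exactly the asymmetry the quoted comment describes for the 1940 bin.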
I’d still love to hear someone describe the physical processes that could lead to such short-term global temperature spikes. Simon? Clive? PhysicsGuy? Nancy? Bueller? Or is this more of the ever-mysterious “cycles” that so many are fond of?
These short term spikes in the proxy data are just statistical fluctuations and have no physical significance. However there are some very interesting regional variations in the data, as noted in Marcott’s paper. The NH shows stronger cooling than does the SH: see here. There is also a good discussion of these differences on Tamino’s blog.
About 11,500 years ago the orbital timing of the seasons was inverted relative to today: NH summer coincided with the Earth being at its closest distance to the sun (as is now the case for the SH). In addition, the obliquity was greater than now, so there was also more insolation at higher latitudes in summer. Perhaps this change can explain the slow cooling of the NH.
If you look at the raw data binned in 50 year bins then you can see other spikes over the entire period: see here
1- That’s not “spikes”, that’s “noise”. Why do you think the authors performed this Monte Carlo simulation? Because the raw data itself is not interpretable on short timescales.
2- A short “spike” in the past could not be comparable to modern warming, because modern warming is not just fast, it is also *durable*. Even if CO2 emissions completely stop in 2100, the warmth will remain for centuries. If something like *that* had happened in the past, Marcott’s proxies and methods would have detected it.
They didn’t, so it hasn’t. Hence, “unprecedented”.
[Response: What a lot of this discussion of “spikes” is missing is that the point isn’t whether the Holocene had any sudden rises with a cause analogous to the present. As many people have pointed out, that couldn’t be, because the present rise is due to CO2, and is hence “durable”; it would show up in a record like Marcott’s. But also, as others have pointed out, we already know from ice cores that CO2 fluctuated very little in the pre-industrial Holocene, so that isn’t in the cards anyway. What is at issue is whether there are any other mechanisms of centennial-scale variability that could cause a 1C or even a 0.5C spike. We know that the AMO can go a smallish fraction of that distance, and it’s hard to rule out on first principles that some kind of ocean variability might not be able to do something bigger in amplitude if you wait long enough. There’s no evidence that it can, and I myself find it hard to see how you could make the ocean do a centennial scale uptick (a centennial scale downtick is easier, since there’s all that cold water you could conceivably bring to the surface). But the Marcott analysis has no bearing on that question. I don’t think the paper ever claimed it did, but the comparison with the instrumental era rise may have confused some people into thinking so. To repeat my earlier comment, it is useful to compare the instrumental era rise and the forecast of further rise to the Marcott record because we know the rise is durable and will last millennia. Thus, we know we are bringing about a durable increase that is huge compared to any long-duration variation over the Holocene. That’s a big, big deal. –raypierre ]
[Response: To add to Raypierre: it would have to be ocean variability causing a spike in global-mean temperature without causing a spike in the high-resolution proxies from the ice cores. That is exceedingly unlikely; the ocean’s deep-water formation sites near Greenland and Antarctica would surely experience major change during such a spike. Think of the 8k-event: the biggest in the Greenland ice cores over the entire Holocene, yet minor impact on global-mean temperature. -Stefan]
Simon, so I’ll take your #95 as an admission that you have given up all rationality.
Dude, the planet is really, really big. It isn’t easy to pump in that much energy in a decade or two. We’ve only managed to do so by liberating most of the carbon sequestered since the Jurassic! The Paleocene-Eocene Thermal Maximum failed to achieve such rates of growth despite (probably) burning up the Deccan coal field!
The masthead here says “Climate science from climate scientists” (I have the impression, which may be wrong, that it once was “by scientists for scientists”.)
Visitors curious about or wishing to promote the Mac attack promulgated elsewhere and looking to comment here should be aware that all the scientists here have day jobs and are not amateurs. (I am an observant layperson and often just lurk.) They are too knowledgeable and experienced in both science and “blog science” to give equal weight to appearance and substance. It will not work to discredit the messenger here.
You must not blame them for any unwise assertions made by yours truly. On the whole, they have wisely refrained from getting down in the mud.
I have always believed that one reason I am tolerated is because I mostly act like the guest though like all hotheads I occasionally get a gentle warning to desist. Lately I’ve been wishing I took their most recent advice a little more to heart!
You are missing the point. No one was suggesting Marcott showed meaningful spikes. The deniers were saying that there COULD BE spikes but that Marcott’s resolution was too poor to show them, and therefore recent warming might not be remarkable.
My point was that those who sought to jam an unobserved spike into the Marcott graph ought to also explain how it could be: what would the physical explanation be of such a spike? How could it happen?
Now, since then, as has been noted upthread, Tamino put the whole thing to rest by showing that if such a spike in temps actually existed it really would show up in Marcott, so the point is now moot, and those who object to Marcott need another reason to do so. History being what it is, the next reason will not be consistent in any way with the prior reasons, but that’s normal I guess.
Re #13 – For the composite age model for MD01-2421, 3 cm was the location of a radiocarbon bomb spike and therefore post-1950. Because we only went up to 1940 in the reconstruction, we didn’t use data after that time.
Re #13 – The paper was never submitted to Nature, only to Science.
Re #13 –In the original publications for both MD95-2011 and MD95-2043, the core top ages were not defined in the text, so we assumed 0 yr BP as explained above in the FAQs.
Re#39 – Since the thesis, we updated our treatment of age models and their uncertainty, and corrected some programming errors when preparing the manuscript for publication. It is important to remember that with any reconstruction from geologic archives there is always uncertainty in the age component. The paleoclimate records used in our reconstruction typically have age uncertainties of up to a few centuries, which means the climate signals they show might actually be shifted or stretched/squeezed by about this amount. One of the primary goals of this study was to quantify these uncertainties to see how they might impact the final composited global temperature reconstruction using what is called a random walk model that allows uncertainty to expand moving away from an age control point (radiocarbon dates in this case).
After recalibrating all radiocarbon control points to make them internally consistent and compliant with the scientific state-of-the-art (see FAQ – Technical Questions), we constructed age models for each sediment core. It is pretty straightforward to model ages lower down in each core between radiocarbon dates, but various factors can make it more difficult to pin down ages near the tops of the cores above the uppermost date. For instance, radiocarbon dating does not work for the last couple centuries (see FAQ), uncompacted sediments near the core top can be lost during the coring process, burrowing organisms mix up older sediment so that a core collected today will not necessarily have 0 year old mud at the top, etc.
Given these complexities, there is often no simple “correct” way to define a core top age, and so researchers typically use one of a few different reasonable assumptions. For instance, the core top age of when the sediment core was extracted could be used, though this can be problematic for the reasons I just mentioned. Or, one could simply extrapolate the age of the core top from radiocarbon dates deeper in the core – though in our study this would have in many cases yielded a core top age close to present with an age uncertainty of +/- several hundred years using the random walk model. In other words, the core top age might be 0 years before present +/- 300 years, which would have generated some age models extending into the future in our Monte Carlo simulations – which is obviously not meaningful. Instead, we chose to use the common practice (e.g., http://www.sciencemag.org/content/suppl/2007/09/27/1143791.DC1/Stott.SOM.pdf, ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/tierney2012/tierney2012.txt) of setting core top ages to 0 yr BP, unless the original publications mentioned some firmer constraint on the core top ages. The take-home point here is that you almost always have to make some assumption when dealing with core top ages – this is one reason why we provided the published and our recalculated age models together, as well as the raw radiocarbon dates, so that future people working with these data can further evaluate this. It’s important to note though that the choice of core-top assumption should have little impact on our overall Holocene reconstruction or our main conclusion that 20th century warming from the instrumental record spanned much of the Holocene range.
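A minimal sketch of the random-walk idea Jeremy describes — age uncertainty accumulating with distance from a dated control point — might look like the following. The number of depth steps and the per-step age jitter are invented for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sim, n_steps = 2000, 60   # simulated cores, depth steps above the last 14C date
step_sd = 30.0              # assumed age jitter (years) added per depth step

# Each row is one realization of the age error accumulating up-core
walks = np.cumsum(rng.normal(0.0, step_sd, (n_sim, n_steps)), axis=1)

# Ensemble spread = modeled age uncertainty at each depth step
spread = walks.std(axis=0)
```

The spread grows roughly like the square root of the distance from the control point: small just above the uppermost radiocarbon date, and reaching a couple of centuries at the top of the core under these assumed numbers — which is why core-top ages can carry uncertainties of +/- several hundred years, as described above.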
[Response: Jeremy, thanks for stopping by and helping clarify. – gavin]
An expectation that the global temperature suffers excursions of short duration (too short for the smoothing to capture) but equal to twice the maximum difference between the highs and lows shown in the graph (since they would have to average out to the values obtained) is NOT reasonable?
That in the absence of any known physical driver for such a phenomenon, there is no justification for assuming it JUST because the statistics leave open the possibility?
Would we not have noticed that sort of climate variation elsewhere in our histories?
I have a lot of trouble reconciling the original publicity about Marcott et al (what the majority of citizens accepting, uncritically, grasped from this recent AGW paper) with the yawning about the recent data, now.
Did this NSF misunderstand the research, as well. Does this press release not *state,* never mind imply, that AGW is getting bad, badder, worst — today? ….Lady in Red
“Did this NSF misunderstand the research, as well. Does this press release not *state,* never mind imply, that AGW is getting bad, badder, worst — today? ….Lady in Red
Earth Is Warmer Today Than During 70 to 80 Percent of the Past 11,300 Years”
No. We know what recent temps have been from the instrumental record. Marcott et al said they can’t rule out the possibility of short warm spikes in the past record due to the temporal resolution of their proxy ensemble. That does not change the conclusion that the earth today is warmer than it was during 70 to 80% of the last 11,300 years. If they had said 100%, that would be a misrepresentation, but they didn’t.
Tamino’s shown that a large spike on the order of the 0.9C warming seen in the past century, followed by equivalent cooling over the following century, would show up using their methodology (though attenuated); in other words his analysis makes him believe the authors have been too conservative.
But that’s irrelevant.
“What that history shows, the researchers say, is that during the last 5,000 years, the Earth on average cooled about 1.3 degrees Fahrenheit–until the last 100 years, when it warmed about 1.3 degrees F.
The largest changes were in the Northern Hemisphere, where there are more land masses and larger human populations than in the Southern Hemisphere.
Climate models project that global temperature will rise another 2.0 to 11.5 degrees F by the end of this century, largely dependent on the magnitude of carbon emissions.
“What is most troubling,” Clark says, “is that this warming will be significantly greater than at any time during the past 11,300 years.””
And this is what models show. Actually 2F is extremely unlikely, it will be higher. Regardless, the previous century’s 0.9C rise plus another century of continued warming leads to a period of a couple of centuries of warming. Even at the low range, that’s about 2C for the two centuries.
If we’d seen this in the past, their methodology *would* have detected it. So the last statement is correct.
I’m not going to go through the rest of the quotes you’ve provided one by one; others can if they have the patience. But you have a track record that leads me to believe you won’t listen.
After all, you believe stuff like this:
“lady in red:
It’s important not to underestimate the courage it took for CERN to *permit* this “science” in today’s intl “climate change” political climate: massive govt spending for “scientists” to “prove” AGW…. the IPCC fraud reports of the past two decades…. the pressure on scientists of all stripes to sign petitions backing the AGW theory…..
It will be interesting to see how Michael Mann’s hockey stick (which “evened” both the Medieval Warm Period and the Little Ice Age) and has been the poster star for the IPCC and Al Gore’s AGW green investment fraudsters, as well as the dupes who have PhD’s in “climate science” and want to save the world’s polar bears and may now hit the unemployment lines…..fare in all this. Hmmmmmm…..”
Regarding Peter Dunkelberg’s post in #117, I will stand by what I said in #104. Tamino’s analysis is flawed. It does not take into account the uncertainty inherent in the proxies themselves (in both time and temperature), many of which have temperature uncertainties of +/- 0.5 C or more and resolutions of 200+ years. It then synchronizes the timing of all test spikes across all proxies, which would not happen in reality.
Clive has done an analysis similar to Tamino’s, with different results, taking these issues into account. See link.
Maybe some people need to go here first… http://en.wikipedia.org/wiki/Proxy_(climate) or use Real Climate’s own search for climate proxies for a better understanding of climate proxies. If you don’t like the explanation here or over at Tamino’s site, fine, but give us a detailed explanation of how these mysterious spikes somehow manage to elude all the proxies. Show us some real physical mechanisms you can back up with actual science.
Curious. Roger Pielke Jr. is claiming that the FAQ here is a ‘startling admission’ that the post-1890 ‘uptick’ is not robust. He is also asking, as part of the ‘fix’, that the authors change the paper to show this.
This goes against how most others have interpreted the paper. Perhaps this is just a problem with how Roger understands the wording ‘robust’. I would say that ‘the difference is not robust’ means that, since the two methodologies do not support each other, the authors are claiming no knowledge of post-1890 temperature from their reconstruction.
“Now, since then, as has been noted upthread, Tamino put the whole thing to rest by showing that if such a spike in temps actually existed it really would show up in Marcott”
I repeated the same procedure as Tamino, simulating exactly the same 3 spikes. I then used the Hadley 5-degree global averaging algorithm to derive anomalies from the 73 proxies. The 3 peaks are visible but smaller than shown by Tamino. This procedure assumes perfect time synchronization between the peak and all proxy measurements. Once I include a time-synchronization jitter between proxies equal to 20% of the proxy resolution, the peaks all but disappear. see plot here
Jeremy Shakun writes above …. “The paleoclimate records used in our reconstruction typically have age uncertainties of up to a few centuries, which means the climate signals they show might actually be shifted or stretched/squeezed by about this amount.
Therefore I think that any climate excursion lasting less than about 400 years will be lost in the noise.
[Response: Doing a proper synthetic example of this (taking all sources of uncertainty into account – including spatial patterns, dating uncertainty, proxy fidelity etc) is more complicated than anyone has done so far. However, one could also make an experiment of adjusting age models within uncertainties to line up anomalies. This is generally not done without independent information that there is a real event there – however, it might be fun to do here with the actual data. I’m doubtful whether any reasonable redating could produce anything as large as the 20th C. – gavin]
A far more interesting point in this respect is to look at the full set of the 1000 realizations in Marcott et al 2013 (Supplemental Fig. S3). If such a (gremlin-induced) two-century spike were in the data at least a few of the realizations would show it – even in the presence of dating errors, the space of perturbations would include some shifting the proxies into roughly the correct and reinforcing alignment. In fact, spikes above that level would be expected in a Monte Carlo reconstruction as unrelated variations were shifted to coincide with such a spike.
There are _no_ such 0.9C spikes in any realization in the 1000 set prior to the last 200 years. None.
Now – I have seen no physical mechanism proposed to drive a 25×10^22 Joule rise and fall (per ocean heat content) in climate energy over 200 years, and there is no sign of such a global spike in any of the paleo data, including near-annual speleothems. Hence there is support neither (a) for such spikes existing, let alone being missed in the Marcott analysis, nor (b) for any claim that current warming might therefore be natural rather than anthropogenic.
Marcott et al 2013 is a very interesting paper, and I expect will be expanded upon in future work. Can we _please_ drop the red herrings of mythical spikes from the discussion?
Since the CERN experiment keeps getting brought up, perhaps repeating what Eli has said elsewhere would be useful. First, it has been well known since the earliest atmospheric science textbooks that Eli has, that ions can form nuclei for aerosol growth and that cosmic rays produce ions in the atmosphere. The real question, which the CERN experiment does not touch, is what is the rate limiting step in nucleation.
That is a question which a recent experiment comes much closer to answering. A Finnish group recently published Direct Observations of Atmospheric Aerosol Nucleation, which found that at the surface (a) less than 10% of nanometer-sized aerosols are ionic in character, (b) growth is limited by sulfate and organic vapor availability, and (c) growth above a few nm depends on the availability of organic vapors.
The Rabett will point out that he said this at RC about two years ago
The most interesting thing about aerosol growth to Eli is the role that SOx plays. Cleaning the air, at least locally, strongly affects cloud cover (see Molina’s work in Mexico City, and Monet’s pictures of Parliament), and there is also something obvious there WRT volcanoes. A much more direct and convincing part of the mechanism. Clean air is air without much SOx, and that must have effects on global temperature.
The peer review process just might select Shakun, Marcott, and Tamino to be his reviewers ;)
It could be that his work gets rejected for publication because it’s not “interesting”, since he’s attempting to refute something that Marcott et al never specifically concluded. Perhaps it’s worth one of those famous “comments” that wind up alongside a publication.
To add to the Rabett’s points, the rate-limiting process in cosmic rays’ putative effect on climate is not only the relative effect of ions in aerosol nucleation, but also (and perhaps more so) the growth of freshly nucleated particles (about a nanometer in diameter) to sizes where they can influence cloud formation (larger than approx. 100 nanometers).
As I wrote here 4 years ago:
“Freshly nucleated particles have to grow by about a factor of 100,000 in mass before they can effectively scatter solar radiation or be activated into a cloud droplet (and thus affect climate). They have about 1-2 weeks to do this (the average residence time in the atmosphere), but a large fraction will be scavenged by bigger particles beforehand.”
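For scale, that mass factor can be turned into a size factor with a back-of-envelope calculation (assuming spherical particles of roughly constant density, so mass scales as diameter cubed):

```python
mass_factor = 1e5                       # quoted growth factor in mass
diameter_factor = mass_factor ** (1.0 / 3.0)   # since mass ~ diameter**3
seed_nm = 1.0                           # freshly nucleated cluster size

print(f"x{mass_factor:.0f} in mass = x{diameter_factor:.1f} in diameter")
print(f"a {seed_nm:.0f} nm seed grows to ~{seed_nm * diameter_factor:.0f} nm")
print(f"reaching 100 nm would need a factor of {(100.0 / seed_nm) ** 3:.0e} in mass")
```

A factor of 10^5 in mass takes a ~1 nm cluster only to ~46 nm; reaching the ~100 nm activation size mentioned above needs closer to a factor of 10^6 in mass, which underlines how demanding the growth requirement is.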
Given that the methodology imparts a different level of confidence to the 20th century part of the graph, and that the point of the study was the preceding hundred-plus centuries, much anguish could have been avoided by ending the graph at 1900 AD.
In fact, the right thing to do would be to republish, either deleting or reworking the 20th century.
What you say might be true for those folks who are unable to read and understand science, or who enjoy distorting science for their own purposes. The rest of us, especially the non-experts like myself, like to have the instrumental record side by side with the previous Holocene period to provide context.
“Given that the methodology imparts a different level of confidence to the 20th century part of the graph, and that the point of the study was the preceding hundred-plus centuries, much anguish could have been avoided by ending the graph at 1900 AD.”
Even more anguish could’ve been prevented by not publishing at all, and by not doing a PhD thesis on paleoclimatology.
In this way, the intentional misreading, personal attacks, cries of “fraud”, etc would’ve been avoided.
Which is rather the point of all the attacks, no? Discredit scientists, put them on the defensive, and in general make life for climate scientists such a pain in the butt that graduate students will find something else to study.
“In fact, the right thing to do would be to republish, either deleting or reworking the 20th century.”
Like, oh, maybe presenting the instrumental record for the 20th century rather than their truncated proxy series which they openly said was most likely not robust?
Wait … they did that, only in addition to, rather than instead of, the truncated proxy series.
And cries of “scientific fraud” have been the result.
You don’t seem to understand what McI, Watts et al are up to here. The intent is to destroy, not learn, science.
Gavin et al. have been more than patient in their responses here. Scientists aren’t saying that the ending uptick is robust. Marcott et al. never claimed that themselves. Anyone can see that there are only 18 of the 73 proxies remaining ca. 1940. Duh.
But the salient point of the paper, and what has the AGW deniers so obviously terrified, is that for 5000 years we had started the slow, inexorable slide into the next ice age that Milankovitch explained so many years ago. But now mankind has reversed that. Yes, there are *mathematically possible* places in the proxy record where a 0.9C spike like the one we have produced since the start of the industrial revolution could hide, given the proxies’ mean temporal resolution. But there is, on the other hand, no plausible physical basis for such spikes. No known forcing could warm the climate by more than 0.5C over just a hundred years or so, and then cool it by a corresponding amount just as quickly, so that the excursion is completely missed by *all the proxies*. You can’t hide that much heat, even if most of it might have gone into the ocean.
But whenever you posit the obvious, the AGW deniers just go all quiet, and attack again a few hours later with their “Hey, look, there goes a squirrel!” shtick. It’s becoming quite tiresome/juvenile.
“But the salient point of the paper, and what has the AGW deniers so obviously terrified, is that for 5000 years we had started the slow, inexorable slide into the next ice age that Milankovitch explained so many years ago.”
Apparently the current line of attack over at CA is an attempt to show that Marcott et al have miscalculated their Monte Carlo error estimation. Nick Stokes brought this up at tamino’s (and thinks they’re wrong); it is led by RomanM, who was also involved in the attack on Eric Steig’s temperature trend reconstruction for Antarctica a couple of years ago.
Look, you are probably right that no one can propose a physical basis for such spikes, which are *mathematically possible* and could hide in the proxy data due to its mean temporal resolution. But phenomena lacking a theoretical basis sometimes turn out to be real, as 19th century physicists discovered to their cost.
Milankovitch is only a half victory, as he doesn’t really explain the details of ice ages at all. No orbital insolation change predicted any cooling over the last 6000 years. The timing of ice ages over the last 1 million years is still a mystery, although it seems likely another one is due within 2000 years.
Clive Best @143 — Unfortunately you have it rather wrong. Look at June insolation at 65N for the past 10,000 years. As for another glacial soon, read David Archer’s “The Long Thaw”. Regarding the term ice age: Terra has been in one for at least the past 2.588 million years and remains in one, despite the long thaw.
Certainly Roe showed good correlation of June 65N insolation with Arctic ice volume, but the correlation with global benthic δ18O is not so clear. Nor can it explain the ~100 kyr interglacial cycles.
I’d just like to thank the paper’s authors for putting together this FAQ and further responding to other comments, and thank RealClimate for making this happen. This has been a great opportunity, and I hope the acidic rhetoric from certain quarters doesn’t sour the team from engaging the public or doing their research.
For those interested in ‘spike’ discussions, I’ve run a 200 year 0.9 C spike through the measured Marcott et al frequency gain function (http://www.skepticalscience.com/news.php?p=2&t=71&&n=1951#93527), and find that a 0.3 C x 600 year spike remains after filtering – which would have shown in the Marcott data. The zero-frequency average value increase (unaffected by the Marcott Monte Carlo procedures) has to go somewhere, after all.
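A generic version of this filtering exercise can be sketched as follows. The Gaussian kernel here is an assumption, chosen only to give attenuation of roughly the quoted magnitude; it is not the measured Marcott et al gain function:

```python
import numpy as np

years = np.arange(4000)
spike = np.zeros(len(years))
spike[1900:2000] = np.linspace(0.0, 0.9, 100)   # 0.9 C rise over 100 yr
spike[2000:2100] = np.linspace(0.9, 0.0, 100)   # ...and back down

# stand-in low-pass filter: normalized Gaussian, sigma = 100 yr
sigma = 100.0
k = np.arange(-500, 501)
kernel = np.exp(-0.5 * (k / sigma) ** 2)
kernel /= kernel.sum()
smoothed = np.convolve(spike, kernel, mode="same")

# the peak is attenuated, but the time-integrated anomaly is conserved
print(f"peak before/after: {spike.max():.2f} / {smoothed.max():.2f} C")
print(f"integrated anomaly before/after: "
      f"{spike.sum():.0f} / {smoothed.sum():.0f} C yr")
```

In this sketch the peak drops to roughly a third of its original height while the time-integrated anomaly is unchanged, which is the point above: the zero-frequency content has to go somewhere.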
Claims regarding mythic and unevidenced spikes, particularly when attempts are made to draw parallels to current conditions, are (IMO) quite unsupportable.