
BBC contrarian top 10

Filed under: — gavin @ 13 November 2007

There is an interesting, if predictable, piece up on the BBC website devoted to investigating whether there is any ‘consensus’ among the various contrarians on why climate change isn’t happening (or if it is, why it isn’t caused by human activity; or if it is, why it won’t be important; or if it is important, why nothing can be done, etc.). Bottom line? The only thing they appear to agree about is that nothing should be done, but they have a multitude of conflicting reasons why. Hmm…

The journalist, Richard Black, put together a top 10 list of sceptic arguments he gathered from emailing the 61 signers of a Canadian letter. While these aren’t any different in substance from the ones routinely debunked here (and here and here), this list comes with the imprimatur of Fred Singer – the godfather to the sceptic movement, and recent convert from the view that it’s been cooling since 1940 to the idea that global warming is now unstoppable. Thus these are the arguments (supposedly) that are the best that the contrarians have to put forward.

Alongside each of these talking points is a counter-point from the mainstream (full disclosure: I helped Richard edit some of those). In truth though, I was a little disappointed at how lame their ‘top 10’ arguments were. In order, they are: false, a cherry pick, a red herring, false, false, false, a red herring, a red herring, false and a strawman. They even used the ‘grapes grew in medieval England’ meme that you’d think they’d have abandoned already, given that more grapes are grown in England now than ever before (see here). Another commonplace untruth is the claim that water vapour is ‘98% of the greenhouse effect’ – it’s just not.

So why do the contrarians still use arguments that are blatantly false? I think the most obvious reason is that they are simply not interested (as a whole) in providing a coherent counter-story. If science has one overriding principle, it is that you should adjust your thinking in the light of new information and discoveries – the contrarians’ continued use of old, tired and discredited arguments demonstrates their divorce from the scientific process more clearly than any densely argued rebuttal.


397 Responses to “BBC contrarian top 10”

  1. 151
    Keith says:

    John. OK. Now I understand the aggressive nature of some of the commenting. I stand by my analysis of the models though. We’ll just have to disagree on that one. My argument around experimentation is about the forward experiment. The prediction part. We too have models that explain huge amounts of data but when new data is generated they fall over. That’s why I find this debate so interesting. I haven’t seen anything being done that explains why they are so different and are able to predict the future with such small errors. I shall read some more. There could be some great stuff in there for my own work that I simply haven’t fully grasped.

    Evolution. Ah I love this debate. The intelligent design bozos just don’t get it. I work with bacteria and viruses and see them mutate every day under selective pressure. Evolution is all about selective pressure. There’s considerable debate about whether selective pressure is pure evolution since it’s not random, you are putting a forcing(!) onto the system. The random bit is where some molecular biologists get worked up but I see it in terms of the selective pressure forcing a random mutation in order for the organism to maintain function and not die. So yes we can watch evolution in action particularly in the field of viruses and bacteria where the experiments are easier. So the consensus works here. Evolution in action. Fantastic stuff.

    Smoking. This is a curious one. I think that a number of groups have pretty much shown the toxicology makeup within tobacco smoke and the effect on lung cells in vitro and in vivo. It’s a pretty clear experimental result that certainly supports the epidemiology. Although the one part that is curious is why it has more effect on some people than on others. SNPs are the current fav target for this observation but there’s no consensus yet. But the experiment using smoke on real (non-human) living animals is pretty clear. Not done much now except in some countries where that animal experiment is seen as ethical. I think it’s pointless myself. The data is pretty clear here.

    Marcus. I can agree with you but I might add that I view that science is built on data and observation. Otherwise we wouldn’t have the urge to keep doing experiments and discovering something new. Which is why I turn up every day to work. I’m doing something that nobody has ever done before or even thought of. Which is nice! Oh and Mayo’s work is cool but he picks the easy proteins that have good sequence homology with proteins of known structure. He’s had a few shockers where he’s got the structure right(ish) and then got the function totally wrong. That’s why Reetz’s more recent work was so outstanding (and confusing).

    David B. Yip, indeed I have read the bit at the start. That’s how I started reading this site.

    Oh and Chuck. Thanks. It’s a great story. Fantastic piece of science and a well deserved Nobel Prize.

    But I think I’m done commenting now. It seems to be far too polarised for my liking. I was taught in my earlier scientific career to be inquisitive and questioning. I still think that’s a good thing. But thanks for all the info and the comments everybody. Good luck with those models. I watch with interest.

  2. 152
    Ray Ladbury says:

    Keith,
    On the one hand, you say that you want some sort of “experiment”. What would you suggest? Much of the physics that goes into the models has its basis in the lab, but evidently that is not enough for you. The thing is that it is rather difficult to fit a planet into a laboratory. We have no experiments that support plate tectonics. We have no lab experiments to support much of Earth science, astronomy, astrophysics… We do have lab experiments to support some, but not all of the tenets of evolution. Even my thesis experiment in particle physics could not be described as yielding unambiguous experimental results–they had to be interpreted statistically. In short, science has moved beyond your limited view of it–and it still works. Inductive science is every bit as valid as laboratory science, and it yields conclusions every bit as strong. Moreover, even with laboratory experiments, ultimately, we must conclude what the experimental results support. I strongly urge you to learn more about the scientific method. It will make it easier for you to appreciate research outside your field of expertise as well as making you a better chemist.

  3. 153
    Svet says:

    Re: #82 CobblyWorlds
    “the three surface temperature analyses depict similar rates of warming over long time scales, and discrepancies in recent decades are largely consistent with differences in methodology.”

    The problem of course is that HadCRUT3 and GISS are telling totally different stories for the period 2001-2007. If “differences in methodology” can do this then something is very wrong somewhere. What can the layman do when confronted with two such different stories? If this discrepancy is not quickly resolved in an objective, credible fashion then intelligent discussion of the subject is going to become very difficult. “Sceptics” will favour HadCRUT3 and “believers” will choose GISS and both will have perfectly defensible positions but nothing will be resolved.

    [Response: Look at the maps. It is readily apparent where there are differences, and why. No mysteries there… - gavin]

  4. 154
    Steve Bloom says:

    Re #125 (original comment by Timo reproduced below for convenience):

    Timo, IMHO the kind of trick you just tried to pull is very disrespectful of others. You imply something momentous about the sea ice trend, but a look at the NASA press release (and notice how I link to the document I’m quoting so others can easily confirm what I say?) finds this passage:

    ‘”While some 1990s climate trends, such as declines in Arctic sea ice extent, have continued, these results suggest at least for the ‘wet’ part of the Arctic — the Arctic Ocean — circulation reverted to conditions like those prevalent before the 1990s,” he added.’

    IOW, nothing at all about the sea ice. Imagine that.

    The paper actually is very interesting, but the reason a press release was issued six months after the paper was published was to announce that the AO now seems to be flipping back to the 1990s state. Its main point is that the warming trend hasn’t yet taken complete control of the Arctic Oscillation:

    ‘Morison said data gathered by Grace and the bottom pressure gauges since publication of the paper earlier this year highlight how short-lived the ocean circulation changes can be. The newer data indicate the bottom pressure has increased back toward its 2002 level. “The winter of 2006-2007 was another high Arctic Oscillation year and summer sea ice extent reached a new minimum,” he said. “It is too early to say, but it looks as though the Arctic Ocean is ready to start swinging back to the counterclockwise circulation pattern of the 1990s again.”

    ‘Morison cautioned that while the recent decadal-scale changes in the circulation of the Arctic Ocean may not appear to be directly tied to global warming, most climate models predict the Arctic Oscillation will become even more strongly counterclockwise in the future. “The events of the 1990s may well be a preview of how the Arctic will respond over longer periods of time in a warming world,” he said.’

    I’d really like to know how you managed to read an opposite meaning into this material. If it’s just that you didn’t understand it, I think you need to get out of the climate science gate-keeping business.

    Original comment by Timo:

    About Arctic sea ice please see:

    Morison, James, John Wahr, Ron Kwok, and Cecilia Peralta-Ferriz, 2007. Recent trends in Arctic Ocean mass distribution revealed by GRACE. Geophys. Res. Lett., 34, L07602, doi:10.1029/2006GL029016, April 4, 2007

    “…The spatial distribution and magnitude of these trends suggest the Arctic Ocean is reverting from the cyclonic state characterizing the 1990s to the anticyclonic state that was prevalent prior to the 1990s. ”

    Nasa news November 13, 2007

    “The results suggest not all the large changes seen in Arctic climate in recent years are a result of long-term trends associated with global warming.”

  5. 155

    On “Scientific Consensus”

    There is a great deal of interdependence between scientific disciplines – I have pointed out as much in relation to Duhem’s thesis previously. Typically, if someone is to test a given advanced theory, there will be a great number of other, earlier scientific conclusions which one will have to presuppose are true, even though it is at least “possible in principle” that evidence at some point may be discovered which would require that individual to abandon one of those background assumptions.

    The test of a hypothesis M will generally be of the form: if A & B & C & … & M, then e; and M is regarded as failing the test if not-e. But if any of the background assumptions were in fact false, then M might be true with not-e, or false with e, and the test would be an invalid test of M for either the purpose of confirmation or disconfirmation. Or to put it another way: in principle at least, the test is never a test of M alone but always of A & … & M.
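    The logical point can be checked with a toy enumeration (a hedged illustration only – A, B and M here are placeholder propositions, not real scientific claims): observing not-e refutes the conjunction A & B & M, but never M in isolation, because a false background assumption can shield M.

```python
from itertools import product

# Toy model of "if A & B & M then e": the theory predicts e only
# when the hypothesis M and both background assumptions A, B hold.
def predicts_e(A, B, M):
    return A and B and M

# Suppose we observe not-e. Which truth assignments remain consistent?
consistent = [(A, B, M) for A, B, M in product([True, False], repeat=3)
              if not predicts_e(A, B, M)]

# M = True still appears among the consistent assignments, so the
# failed prediction refutes only the conjunction, not M by itself.
m_true_survives = any(M for (A, B, M) in consistent)
print(len(consistent), m_true_survives)  # 7 True
```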

    *

    Likewise, the more broadly integrative theories will rely upon the conclusions of a wide variety of disciplines, have implications in a wide variety of disciplines, and have explanatory power in a wide variety of disciplines. At the same time, modern science requires a large division of cognitive labor. It is divided into fields, disciplines, sub-disciplines and sub-sub-disciplines. Few people can achieve any real expertise in more than a handful or so areas.

    As such, while someone may be able to state, as a matter of their own expertise, that a given theory or (narrower) theoretical principle which cuts across a wide variety of disciplines is well-supported by the evidence in their field, they will generally be unable to do so in the variety of other fields where that theory or principle applies. Likewise, they will generally tend to be unaware of many of the issues which had previously been decided in favor of the principles which their discipline takes for granted – except at a fairly cursory level.

    As such, experts throughout the scientific endeavor generally have to rely upon points that are at least tacitly dependent upon a form of scientific consensus — whether they are aware of this or not. It’s unavoidable.

    But at the same time, it generally isn’t something which they have to be all that aware of – precisely because the principles which have become part of the well-established consensus are well-established. We generally become self-consciously aware of expert consensus and the need for it only at the interface between the scientific community and the broader community in which it is embedded.

    *

    There have been points at which the very notion of a “scientific consensus” has come under attack, and no doubt there will be in the future. Typically, such attacks will rely upon an equivocation between appealing to a scientific consensus and “appeal to the majority,” or alternatively, assume that an appeal to the scientific consensus is an appeal that is independent of the actual evidence upon which a decision should be based.

    However, the scientific consensus is a consensus of experts, each acting as an expert alongside other experts in his or her own field. These experts are gathering evidence, generating theories, forming hypotheses, making predictions – and testing theories – and their views become relevant to and incorporated into the consensus on a given issue only to the extent that their area of expertise is relevant to that conclusion. As such, the scientific consensus is evidence-based.

    It is not simply a form of an “appeal to the majority.” It is, in essence, an appeal to a congress of individuals who are acknowledged and tested experts in their respective fields – where the weight given to any voice is a matter of the relevance of the expertise.

    Given the collective extent of those fields, this exposes the conclusions to a far larger body of evidence and tests than would be possible by means of any one field considered in isolation from the rest. Consequently, the justification for a conclusion arrived at by means of the congress can be far greater than that which the conclusion would receive if it were simply supported by only one or a handful of disciplines.

  6. 156
    Julian Flood says:

    quote Actually I just scrounged a section from Salter and Latham’s paper. Highly recommended. unquote

    Ooops, silly me. That should be 10^18 drops/sec, but I suppose that was so obvious no-one bothered to correct me.

    From the same paper: “For a given amount of water, a large number of small droplets make clouds reflect more than a small number of large drops. Increasing global cloud albedo by only 1.5% would produce a cooling sufficient to offset the warming due to a doubling of current CO2.”
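    A rough back-of-envelope check of that quoted figure (all numbers are assumed round values, not taken from the Salter and Latham paper): the forcing change from an albedo shift is roughly the mean insolation times the albedo change.

```python
# Rough sketch with assumed round numbers, not values from the paper.
S0 = 1361.0       # solar constant, W/m^2 (approximate)
F_2xCO2 = 3.7     # canonical radiative forcing for doubled CO2, W/m^2

mean_insolation = S0 / 4.0   # ~340 W/m^2 averaged over the sphere

# Absolute rise in planetary albedo needed to cancel the 2xCO2 forcing:
delta_alpha = F_2xCO2 / mean_insolation   # ~0.011

# Relative to a planetary albedo of ~0.30 that is a few percent, the
# same order of magnitude as the ~1.5% cloud-albedo change quoted above.
print(f"{delta_alpha:.4f}  {delta_alpha / 0.30:.1%}")
```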

    Google on Palle. See the reduction in albedo in the Earthshine observations and ponder why it’s getting warmer. (Then you can wonder why the albedo is dropping and, as I did, wonder about pollution of the ocean surface reducing droplet production and — just thought of this — the increase in critical relative humidity caused by surface changes on the CCN. But I digress.)

    It’s getting warmer because it’s getting sunnier.

    JF

  7. 157
    Richard Ordway says:

    Okay, more about all this peptic ulcer stuff and global warming consensus.

    The old peptic ulcer consensus appears to have been partly right anyway, and it was then peer review (seemingly working extremely slowly in this case, since no one really saw a big need to challenge it) which added to the knowledge to complete the story.

    Wikipedia states, under the heading of “Peptic ulcer” (and to refute this you need links, not just an “I work on this in the lab so I know better than you do” response):

    “As much as 80% of ulcers are associated with Helicobacter pylori, a spiral-shaped bacterium. Tobacco smoking, blood group, spices and other factors that were suspected to cause ulcers until late in the 20th century are actually of relatively minor importance in the development of peptic ulcers. Ulcers can also be caused or worsened by drugs such as Aspirin and other NSAIDs.”
    http://en.wikipedia.org/wiki/Peptic_ulcer

    Okaaaay, so what we got here?

    1) The old consensus theory was of relatively minor importance…but still a factor nonetheless…so all those “consensus quacks” were not totally wrong and indeed right in some cases.

    2) ~80% of the ulcers are caused by the nasty helicopter errr (Helicobacter pylori-sorry, I used to fly). Wow, that leaves 20% that are not. So again, our consensus quacks are not all wrong.

    “Ulcers (peptic, I presume) can also be caused or worsened by drugs such as Aspirin”. Hmmm, that is not our “helicopter” super bug either.

    The bottom line is that the consensus view was *partly right*…and it was by peer-review that the other 80% or so came out.

    This global warming issue, by contrast, has been actively and massively investigated in peer review since at least the late 1960s, and still is…

    but the challenges do not stand up, even though the challengers have had almost unlimited funding from the eager (desperate) energy and transportation industry, which the last I heard earns over one billion dollars a day (excess cash to spend on dalliances like funding competing scientists).

    So, this peptic ulcer thingie, in my opinion, is a strawman… like comparing apples and oranges.

  8. 158
    GlenFergus says:

    #130

    Whoops, fixed now.

  9. 159
    Rod B says:

    A little belated aside: Lawrence Brown’s (15) use of the now famous statement by Jack Nicholson’s character in ‘A Few Good Men’ – “They can’t handle the truth!” – to refute the so-called deniers seems a little odd, being as how Nicholson was the authoritative liar in ‘A Few Good Men’.

  10. 160
    Ellis says:

    110
    [Response: This is a fallacy. Thanks for the warning.

    Given that Snowball Earth and the Cretaceous hothouse clearly delineate the bounds of natural variability, your argument is equivalent to saying that nothing less dramatic than the freezing over of the oceans or the raising of the temperature by 10 deg C or so can possibly be attributed to any forcing. This is absurd.
    The use of a straw man is not going to dissuade me from my opinion.

    The issue is not whether any climate change was bigger or smaller than today’s - it is whether it can be explained, and whether that explanation has validity today. So while orbital forcings, giant volcanoes, asteroids, Heinrich events etc. have all caused dramatic climate changes in the past, none of them are relevant to today. I am not sure that it follows that none of them are relevant today. Our climate is defined by the orbital forcings; however, I understand your point that these forcings change little on centennial time scales.

    Just because large forest fires have occurred naturally in the past, does that imply that arson cannot happen? No, but does an arson's match in San Diego imply global warming?

    Or that if we see someone walking away from a fire with an empty kerosene can and some matches, we can’t logically infer that he may have had something to do with it? - gavin]

    I don’t object to the claim that disproved alternate theories bolster your stance; I do object when this is taken as proof of your theory.

  11. 161
    Hank Roberts says:

    Google Scholar finds:
    http://www.agu.org/cgi-bin/wais?ii=A43B-01
    2007 Joint Assembly

    “AB: The Earth’s albedo is one the least studied of the fundamental climate parameters. The albedo is a bi-directional property, and there is a high degree of anisotropy in the reflected light from a given surface. However, simultaneously observing all points on Earth from all reflecting angles is a practical impossibility. Therefore, all measurements from which albedo can be inferred require assumptions and/or modeling to derive a good estimate. Nowadays, albedo measurements are taken regularly either from low Earth orbit satellite platforms or from ground-based measurements of the earthshine in the dark side of the Moon. But the results from these different measurements are not in satisfactory agreement. Clearly, the availability of different albedo databases and their inter-comparisons can help to constrain the assumptions necessary to reduce the uncertainty of the albedo estimates. …”

    and

    “The Darkening of the Earth’s Albedo at High Northern Latitudes During 2006 as Measured by MISR

    “AB: The deseasonalized anomalies in the time series of globally averaged top-of-atmosphere spectral albedos measured by MISR have now been analyzed from 2000 through 2006. The initial record from 2000 through 2005 showed little in the way of significant anomalies. However, a significant decrease was detected during mid 2006. This anomaly disappeared by the end of 2006 and does not appear to be an instrumental or sampling aberration. The 2006 anomaly is restricted to latitudes north of 40° during late Spring and early Summer, and is large enough to affect the global annual average. …”

  12. 162
    Larry says:

    Re # 116

    I don’t follow the graphs you reference: about 12 million years ago it shows Antarctic reglaciation?? Not melting?

  13. 163
    Joseph O’Sullivan says:

    The contrarians might be fighting away in the public and legislative arenas, but they are getting walloped in the courts.

    The latest is the federal court’s rejection of the new mileage standards
    http://www.nytimes.com/2007/11/16/business/16fuel.html?_r=1&ex=1352955600&en=f428086d78492032&ei=5088&partner=rssnyt&emc=rss&oref=slogin

    For analysis
    http://warminglaw.typepad.com/my_weblog/

    This follows the courts’ acceptance of climate science under the Clean Air Act at the Supreme Court, of states’ right to regulate greenhouse gases in Vermont, of the effects of anthropogenic global warming on the ecosystem in cases involving corals and polar bears, and Pat Michaels’ withdrawal as an expert witness.

    The one thing the legal system has that scientists might appreciate is the concept that losing parties can’t keep fighting. The courts tell them that they have lost and they have to get over it. Too bad the legal concept of finality can not be applied to other areas.

  14. 164

    #125 and 150. It was quite clear: the weather systems preceding and during the big ice melt, especially the pressure systems, were quite familiar; nothing new there. By simple logic, if the systems were similar, then why the big melt of 2007? The answer is temperature. In a true departure from recent experience, temperature anomalies were strongly and consistently warmer. The same pressure system of this past summer, a generally wide anticyclone, is still here now, resurfacing the Arctic ocean with very thin ice, but over recent years the temperature landscape was dramatically different, despite similar pressure dynamics. It was not the first time a wide anticyclone dominated north of Alaska during summer, yet the ice melted massively, unlike many previous years. The only possible conclusion: more heat was circulated in the usual ways through the pressure systems.

    Gavin cited 2007 as on track to be the #2 warmest worldwide; now imagine if a low pressure system had sat persistently north of Alaska, precluding the melting of millions of square kilometers of extra ice – 2007 would have been #1 warmest if it wasn’t for the clear summer Polar air. Ongoing data show the Arctic atmosphere in general warmer at multiple levels. The only source of extra heat above the surface is triatomic gases and their feedback companion, water vapour. Remember that the solar argument completely flunks when average temperatures are about 10 degrees C above normal in near darkness; there are no other explanations or observations identifying another source of heat, not even the ocean, at best -1 C warm during much of the year.

  15. 165
    Imran Can says:

    #130
    “I’m rather older than 8, but I can draw graphs.”

    Glen – obviously not – you forgot to correct for the fact that the 2001 IPCC graph has a baseline of 1990, as opposed to the 1961-1990 mean baseline for the HadCrut data. Maybe you could re-post the graph with the points correctly plotted?
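    The re-baselining adjustment Imran describes can be sketched like this (made-up numbers purely to illustrate the shift, not real HadCRUT or IPCC values):

```python
import numpy as np

# Hypothetical anomaly series expressed relative to a 1961-1990 mean.
years = np.arange(1990, 2008)
anoms_6190 = 0.02 * (years - 1990) + 0.25

# To compare with a graph whose zero point is 1990, subtract the
# series' own 1990 value so both share the same baseline:
anoms_1990 = anoms_6190 - anoms_6190[years == 1990][0]

print(anoms_1990[0])   # 0.0 by construction
```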

    But I tend to agree with Gavin – there is nothing definitive about these points (even though they are below the range) because the recent flat period of 6 or 7 years doesn’t mean the overall AGW theory is wrong. But it does remind us that we have to act with a little humility and remember to keep our uncertainty ranges wide.

  16. 166
    Keith says:

    Oh God. I just can’t seem to get away from this.

    Ray. You are right that there are many areas of science where experimentation is difficult and so we have to rely on statistical methods and so forth. My point has been, all along, that I am uncomfortable with a heavy reliance on models. That is based on my own experience in my area, which may or may not have relevance in the field of climate change. I should point out that I am not actually trying to deny AGW or suggest that the work that is being done is flat out wrong. I simply have some doubts about the ability to predict the climate in 50 years with what I see as a limited data set. I’d love to see more physical experiments done to test the models. As someone pointed out earlier, the predictions could actually turn out to be too conservative and we may be underestimating things. That is a greater concern than us getting away with not much happening. That is what I am trying to get at. It’s truly superb work that you can computationally generate a model which fits so much data. It’s the bit after that where things get interesting. The prediction bit. That is the meat of this area in my opinion. The really interesting bit. In my own field there are literally dozens of ways of making a molecule using known methods and techniques that have data and consensus and everything you could possibly want. But the real challenge is actually doing it. Because sadly, as soon as we try to do anything complicated the known reactions often fail (despite having maybe 100 years’ worth of experimental examples on other substrates) and we have to try out dozens of new routes to solve the problem (that’s actually one of the massive appeals for me). Small changes make big differences. It’s the reason why nature makes Brevetoxin B in minutes (after a few million years’ evolution!) and KC Nicolaou and his group took more than 10 years using synthetic methods and numerous routes.

    So going forward I’d argue that my view is not limited, Ray. It may be different to yours, but if you were to work in my field you might see that our bias is heavily weighted towards experimental science. We’re absolutely nowhere in terms of applying physics to describe apparently simple chemical reactions. Ahmed Zewail has shown that even some of the most basic ideas we had were incorrect when applied to our most simple reactions, so perhaps you can understand why I struggle to be comfortable with there being fewer experiments done in your field.

    I suspect that your view of the scientific method and mine are not the same as a result of the respective fields we work in but I agree that it would be instructive to look in more depth at your methodologies since clearly you generate, collect and interpret data differently. Perhaps you would be willing to do the same and consider how it differs outside of your own field? Maybe you could learn something yourself? You never know. Good luck with the research.

  17. 167

    Keith writes:

    [[So, a great model. But they STILL go and do the actual crash at the end. Just to be sure. So there we have an example of a model that works almost perfectly. We have models in pharma that don’t. So where is the experimental work to show where the climate model is? And that is my point, even if it seems to be unresolvable because the experiment can’t be done. That is why I am a little skeptical on the output of the models. But look on the bright side, I’m very close to being a believer. Just one experiment!]]

    The crucial experiments were done in 1859 by John Tyndall. Google for that.

  18. 168

    Dean posts:

    [[It is my understanding that the cooling of the 40s and 50s was due to volcanic activity, but what was the earlier temperature rise due to? ]]

    The cooling of the 1940s and ’50s was due to aerosols created by industrial activity, not volcanoes. The global warming from 1900 to 1940 is partly attributable to the sun, which increased slightly in luminosity over that period.

  19. 169

    Keith writes:

    [[I was taught in my earlier scientific career to be inquisitive and questioning.]]

    Implying, of course, that we (and perhaps climatologists in general) are not.

  20. 170

    JF writes:

    [[It’s getting warmer because it’s getting sunnier.]]

    Except that it isn’t getting sunnier. We’ve been measuring Total Solar Irradiance with satellites for decades. The Sun hasn’t gotten noticeably brighter in 50 years.

  21. 171
    Anthony Hawes says:

    I am a scientist who performs experiments to test hypotheses, in a field with very few variables. We regularly attempt to model outcomes, albeit with limited algorithmic scope, and always fail to adequately reflect the real world outcomes. Sure, sometimes we get in the ballpark – and perhaps that is what is being argued here by the author – that the ballpark ain’t a bad result in the climate change field. But hang on – we can’t even reliably model climate days ahead (often not even the carpark). Surely the extreme complexity in the climate system will never be adequately modelled for future outcomes? Please note that I am desperate to know where the AGW issue is headed as it will directly affect my business. There is a large body of data, and so much of it conflicting (even in the IPCC reports – each of which I have read completely). So, I direct readers to this link (http://hadobs.metoffice.com/hadat/images.html) which for me is the crux of the issue: that is, the empirical data is not all that convincing wrt AGW. Actual data is a wonderful thing and my extensive reading on the subject of the temperature record suggests the link is the most reliable data source around. Warmer it is getting, but out of control?
    Can anyone comment on this data?

  22. 172
    Cobblyworlds says:

    #160 Ellis,

    Gavin was clearly not engaging in a Straw-Man argument.
    Hothouse – Snowball is the natural range of the Phanerozoic. In order to support your argument you must specify what bounds of natural variability you are referring to. Since you seem not to prefer the bounds Gavin points out, perhaps you could define what you see as “the bounds of natural variability”.

    You state (with respect to orbital forcings, giant volcanoes, asteroids, Heinrich events etc.) that: “I am not sure that it follows that none of them are relevant today.” If you are not sure, then it implies that you have a reason, and thus you should be able to detail that reason in terms of mechanism and evidence…
    In other words – please do so.

    You state: “The theory of AGW rests only on one principle, as far as I can tell, that the warmth of the past century is outside the bounds of natural variability.”
    AGW clearly does not rely upon the current warming being outside the bounds of Holocene variance.
    The physics of CO2′s impact on OLR suggests that the more CO2 we put in the atmosphere, the more warming we will have. So if by evidenced mechanism and supporting observations we can establish good cause to finger CO2 as the culprit, then it matters not whether we have left the bounds of Holocene variability. What matters is whether atmospheric CO2 levels can be driven high enough to attain that state in the future. I don’t need to be contending with an actual housefire to spot the danger of a smouldering cigarette in a waste-paper bin. (That’s an analogy – housefires and cigarettes are not significant factors in AGW)

    Then you take Gavin’s clear analogy and attempt to dismiss it by taking it literally – which, whilst worth noting, is unworthy of further comment.

    The ball is clearly in the “Sceptics’” court; I use quotes because I don’t think there are real sceptics of the theory of AGW anymore (denial is not scepticism). The intelligent sceptics have long since moved from claiming the current warming isn’t due to CO2 to questioning what the impacts will be and whether it’s worth acting. The “Sceptics” must provide coherent alternate theories that explain the observations. They must produce sound alternate explanations for observed changes; stating “natural” is no more an explanation than Intelligent Design’s “God did it”.

    So where does this leave us?

    We have warming, backed up not only by a range of different measuring techniques – from boreholes to satellite thermal emission from the atmosphere, but also by the global average trend in water vapour changes. The increase in ocean heat content shows that the warming is significant. So why is there a warming if not due to CO2?

    We have concomitant observations that are expected with CO2 (diurnal range, vertical profile of changes – stratospheric cooling, tropospheric warming). So why do we have these observations if not due to CO2?

    To dismiss CO2 you have to address this pattern. If CO2 isn’t to blame, what is causing the warming and the observations I outline? The stance “It must be anything but CO2″ does not constitute a theory.

  23. 173
    Imran says:

    #158
    Glen – I don’t think you got it right on your second attempt either. Let me walk you through it …. maybe we can just do one point …. say 2006. The HadCRUT average for 2006 is 0.42 (anomaly above the 1961-90 mean). But the difference between the 1990 value (which is the IPCC graph baseline) and the 1961-90 mean is ~0.25. So the point you need to plot on the IPCC 2001 graph is 0.42 − 0.25 = 0.17, for the year 2006. Try plotting that.

    It’s not hard, but I guess sometimes we get the answer we want to get.
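    A minimal sketch of the baseline conversion described above (the ~0.25°C offset between the 1990 value and the 1961-90 mean is the commenter’s estimate, not an official figure):

    ```python
    def rebaseline(anomaly_6190, offset_1990=0.25):
        """Convert an anomaly quoted against the 1961-90 mean into one
        quoted against the 1990 baseline used by the IPCC 2001 graph."""
        return anomaly_6190 - offset_1990

    # HadCRUT 2006 anomaly of 0.42 above the 1961-90 mean:
    print(round(rebaseline(0.42), 2))  # 0.17
    ```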

  24. 174

    Re comment: 148.

    I must hand it to you, Tamino. You’ve managed to coax more warming out of the period from 1979 to 2006 than any other analysis I’ve ever seen. Since the BBC’s Richard Black gave the total rise from 1900 to 2007 as 0.8ºC, your four 8th-order polynomial fit methods yield on average a 1979-2006 temperature rise of about 0.51ºC (with one solution producing as much as 0.8ºC × 0.67 = 0.54ºC of warming). Since the difference between the absolute greatest non-endpoint anomalies (0.526 in 1998 and -0.037 in 1985) is 0.563 (from the HadCRUT3v), you are doing pretty well to produce a solution yielding 0.54ºC.

    In your various non-linear (and sometimes smoothed) analyses of the overall period, 1900-2006, the behavior during the middle years from 1945 to 1979 acts to reduce the estimates of the temperature rise between 1900-1945 and raise the estimates of the rise from 1979-2006 from those values that would be calculated from looking only at those periods independently.

    As you mentioned, all you really have to do is look at the data to know that the post 1978 rise is not greater than the early century rise. Move the starting point back a few years or wait a few more years for warming temperatures to continue and the story will certainly be different. But as of right now, the temperature rise from 1979 to 2006 is not greater than the temperature rise from say, 1910 to 1945 (however you care to fit those periods independently).

    -Chip

  25. 175
    Keith says:

    No, Barton, I was not implying that at all. You really do want to get those digs in, don’t you. [edit]
    I was merely stating that I am inquisitive about this particular issue, which is why I have chosen to comment, read source material and participate in this discussion. I think it’s quite clear that modellers ask questions too. Try not to be too paranoid, mate.

  26. 176
    Nick Gotts says:

    re #171 (Anthony Hawes) “But hang on – we can’t even reliably model climate days ahead (often not the carpark).”

    You’ve read all the IPCC reports and you still don’t know the difference between weather and climate? How did you manage that? I do not know whether tomorrow will be warmer than today here, but I would bet a substantial amount that 16th August 2008 will be warmer than today (I live in Scotland). So you see, changes over a longer period are not always more difficult to predict.

  27. 177
    Andre says:

    #172 CW, that’s not how it works. Perhaps you remember the core business of the scientific method: falsification. “Science is what we have learned about how to keep from fooling ourselves.” – (Richard Feynman).

    So the bottom line is that the anthropogenic global warming hypothesis can be considered scientifically viable as long as no-one is able to falsify it. There is no need to provide a competing hypothesis.

  28. 178
    Paul Middents says:

    #149 Steve Marx asks “Has anyone associated with Real Climate read, analyzed and written about Marlo Lewis?”

    Gavin was asked last year whether he might comment on Lewis’s response to the 2006 Time Magazine issue devoted to climate change. He responded:

    “I doubt it. The number of red-herrings, strawmen and simply incorrect statements would challenge even our abilities to keep up with…. ”

    Lewis’ focus seems to be on limiting any climate change response to free market inspired technical innovation. He invokes Lomborg and proudly notes in his bio that Rush Limbaugh and G. Gordon Liddy commentary has been inspired by his ideas. His CV is a complete guide to the denialist repertoire. Real Climate has addressed every one of his “scientific” arguments.

    Lewis is a political “scientist” with nothing to say of value on the science of climate change. Real Climate is a forum primarily devoted to the science even though many discussion threads wander into the politics of responding to climate change.

    I would suggest that Dr. Marx assign his students the task of analyzing Marlo Lewis’s writings. Other web forums address the politics of climate change and solutions/mitigation and should offer some excellent arguments refuting his points. You might submit the best of their efforts to the administrators of this forum as a possible entry, or, more appropriately, to one of the forums devoted to the political and economic implications of climate change.

  29. 179
    J.C.H. says:

    I think posts by John N-G (59.) and John Nielsen-Gammon (91.) pretty much box you in the canyon. You can say you jumped over the mountain to escape, but I don’t think you did.

    Is that the John to whom you were responding in your post (151.)?

  30. 180
    Stephen Berg says:

    Re: #174, “I must hand it to you, Tamino. You’ve managed to coax more warming out of the period from 1979 to 2006 than any other analysis I’ve ever seen. Since the BBC’s Richard Black gave the total rise from 1900 to 2007 as 0.8ºC, your four 8th-order polynomial fit methods yield on average a 1979-2006 temperature rise of about 0.51ºC (with one solution producing as much as 0.8ºC × 0.67 = 0.54ºC of warming). Since the difference between the absolute greatest non-endpoint anomalies (0.526 in 1998 and -0.037 in 1985) is 0.563 (from the HadCRUT3v), you are doing pretty well to produce a solution yielding 0.54ºC.”

    Chip, Tamino’s estimation of the warming since the late 1970s is quite accurate when you observe this graph:

    http://en.wikipedia.org/wiki/Image:Instrumental_Temperature_Record.png

    To calculate the ~0.5ºC warming since 1978 or so, one can see an average anomaly of -0.1ºC (taking the five year anomaly for smoothness) in 1978 and a +0.45ºC (five year average) anomaly in 2004 when using the 1961-1990 mean temperature. If you do the simple math, 0.45ºC-(-0.1ºC)=0.55ºC. Therefore, Chip, Tamino is not out to lunch in their assessment.

  31. 181
    tamino says:

    Re: #174 (Chip Knappenberger)

    I’ve got to hand it to you, Chip. You’ve got really big “stones” to tell such a tall tale.

    For your information, I didn’t “coax” anything at all. I simply applied very well-known smoothing methods and reported the results. And only one of the methods I applied was an 8th-order polynomial.

    For those who want to take a look for themselves, look here.

  32. 182
    James says:

    Re #166: [My point has been, all along, that I am uncomfortable with a heavy reliance on models.]

    But the basics of AGW theory do not rely on models: they’re just to get a better picture. To take one of the analogies, we can say, sans model, that if you drive a car into a brick wall at speed, then it’s going to wind up as a crumpled mass of sheet metal. A good model would tell you which panels crumple and in which directions, and so allow the design of a car in which the passengers might survive the crash. Still, if you happen to be the driver, the best advice is to avoid hitting the wall in the first place :-)

    Likewise with AGW: without any more model than Arrhenius’ pencil & paper, we can say that adding this extra CO2 is going to cause problems. The models attempt to refine the limits on how big the problems will be, and how soon they’ll happen. In either extreme, with perfect models or no models at all, it’s still better not to hit that wall.
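    The pencil-and-paper point can be made concrete. A hedged sketch: the logarithmic CO2 forcing expression ΔF ≈ 5.35 ln(C/C0) W/m² is a widely used approximation, while the sensitivity parameter `lam` below is an illustrative assumption – exactly the quantity the models try to constrain.

    ```python
    import math

    # Simplified CO2 radiative forcing (a standard log approximation);
    # lam is an assumed sensitivity parameter in K per W/m^2, not a
    # modelled result.
    def co2_forcing(c_now, c_ref=280.0):
        """Forcing in W/m^2 for CO2 concentration c_now (ppm) vs c_ref."""
        return 5.35 * math.log(c_now / c_ref)

    def equilibrium_warming(c_now, lam=0.8):
        """Warming in K under the assumed sensitivity lam."""
        return lam * co2_forcing(c_now)

    # A doubling of CO2 gives ~3.7 W/m^2 of forcing, models or no models:
    print(round(co2_forcing(560.0), 2))  # 3.71
    ```

    Whatever value `lam` turns out to have, the sign of the answer does not change – which is the point about not needing models to see the wall.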

  33. 183
    Nick Gotts says:

    Re #171 (Anthony Hawes) “So, I direct readers to this link (http://hadobs.metoffice.com/hadat/images.html) which for me is the crux of the issue: that is, the empirical data is not all that convincing wrt AGW. Actual data is a wonderful thing, and my extensive reading on the subject of the temperature record suggests the link is the most reliable data source around. It is getting warmer, but is it out of control?”

    I’m no expert, but the trends in these graphs look pretty convincing to me – particularly the fact that the stratosphere is cooling while the surface and troposphere are warming, as expected if greenhouse gas emissions are responsible for the change. As for “getting out of control”, well, assuming you mean “almost gone too far for us to stop”, how would such a graph tell you whether that was the case? A central concern is that there may be “tipping points” beyond which specific positive feedbacks kick in and AGW becomes far harder to halt – but having any idea what these might be and how close we might be to them demands knowledge of the climate system, not just looking at the wiggly lines on a few graphs.

  34. 184
    CobblyWorlds says:

    #177 Andre,

    I don’t understand what you are trying to say here. AGW is clearly falsifiable.

    The most obvious ongoing test of falsification for the AGW-enhanced greenhouse effect would be CO2 levels continuing to rise while temperatures did not. It is a keystone prediction of the theory that more CO2 causes a radiative imbalance in the Earth’s energy budget that drives temperature towards a new, higher equilibrium. However, as we know that things like ocean mixing, aerosols and other extraneous factors could cause a cooling even with increasing levels of CO2, we would have to consider the caveats (which, as always, are all-important).

    So far that most obvious falsification is failing – the warming continues. As for the other 2 predictions (without recourse to models) that can be used to test falsification:

    The Stratosphere should cool as outgoing long wave flux through the Stratosphere is reduced
    Observed and ongoing.

    The diurnal range should narrow as night-time heat loss is reduced by the reduced outgoing long-wave flux.
    Observed up to just after 1980, then stalled. This may appear to falsify, but Wild et al* offer an explanation that suggests another factor (increasing daytime surface insolation) is at play. And if anyone else reading this thinks that’s a cop-out, consider the case of a magnet picking up iron filings – that doesn’t falsify the law of gravity.

    Copy of Wild et al linked to in my post 12 here: http://www.realclimate.org/index.php/archives/2007/11/global-dimming-and-global-warming/langswitch_lang/sw
    but note my correction post 21.

    There remains doubt on certain issues – notably the practical impacts the future will bring. It remains my “opinion” that this is the most important issue we face and that it is worth taking a wartime approach and really acting on it, but that is mere opinion. The certainty is on the question of whether CO2 plays a role in the current warming (the last 30 years at least): the only reasonable answer is yes. And from the CO2-warming linkage we can be pretty sure that for as long as we continue emitting more CO2, the planet’s global average temperature will go up.

    To quote from the page you linked to:
    “At the moment, Relativity is once again the only theory still standing. But there’s no way to guarantee that it will stay on top. It isn’t proven. Like all other scientific theories, it is forever tentative.”
    So it is with AGW.

    The ball remains in the contrarians’ court.
    And a real game of Tennis involves actually hitting it over the net, not shuffling around bouncing the ball off the court. ;)

  35. 185
    Steve Bloom says:

    Re #162 (Larry): What it shows is that when temps dropped to about 3C warmer than now it was possible for the Antarctic (actually just the EAIS) to reglaciate; i.e., if we let things get back up to that temperature for any significant period of time, we should expect to see it start to destabilize. We are now on a path toward such temperatures, which is why Hansen argues they have to be avoided. Please read this.

  36. 186
    Steve Bloom says:

    Re #181 (Tamino): Strictly speaking I think they’re lumps. :)

  37. 187

    Since Karl Popper’s “Principle of Falsifiability” is coming up again (Andre’s 177, CobblyWorlds’ 184 and my own 155) …

    Do Scientific Theories Ever Receive Justification?
    A Critique of the Principle of Falsifiability

    Karl Popper states, “Now in my view there is no such thing as induction. Thus inferences to theories, from singular statements which are ‘verified by experience’ (whatever that may mean), is logically inadmissible. Theories are, therefore, never empirically verifiable…

    “But I shall certainly admit a system as empirical or scientific only if it is capable of being tested by experience. These considerations suggest that not the verifiability but the falsifiability of a system is to be taken as a criterion of demarcation. In other words: I shall not require of a scientific system that it shall be capable of being singled out, once and for all, in a positive sense; but I shall require that its logical form shall be such that it can be singled out, by means of empirical tests, in a negative sense: it must be possible for an empirical scientific system to be refuted by experience,” (Popper, “The Logic of Scientific Discovery,” 2nd ed., 40-41).

    Now these statements, taken together, require some analysis. Induction, at the simplest level, is normally taken to use the truth of singular statements which are verified by experience to provide justification for general statements. And when the justification for a given general statement is regarded as sufficient, the general statement is regarded as true. This is the general nature of induction. However, Popper is opposed to this view.

    *

    Popper states his opposition in terms of using induction to justify theories rather than individual statements, but this makes little difference. Theories are composed of statements, and to say that one cannot ever regard a theory which has stood the test of induction as true is to say that one can never regard the statements which compose the theory as being true: if all of the statements which compose the theory are true, then one would have no reason to deny the status of truth to the theory itself.

    To see why, let a theory be expressed by the set of statements {h1,h2,h3,…,hn}. Assume that each of the statements in this set is true. Then the theory may be expressed by the statement h1&h2&h3&…hn. The truth of this statement necessarily follows from the truth of the statements which compose it. Thus Popper’s Principle of Falsifiability is wide open to a criticism stemming from what is called “Duhem’s Thesis”: the principle of falsification runs into problems since scientific statements are presumably never regarded as true.

    *

    According to Duhem’s Thesis, no empirical hypothesis H can be used to make empirical predictions unless it is conjoined with one or more auxiliary hypotheses A. Thus when we use an experiment to test H, where H&A is used to predict an experimental outcome S, the failure to obtain S falsifies H&A. But an isolated experiment does not allow you to determine whether H is false, A is false, or both. Thus no single test can falsify H by itself.
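    A toy illustration (not from the original argument, just the logic in miniature) of why the failed prediction alone cannot locate the false conjunct:

    ```python
    # Duhem's point: a failed prediction falsifies the conjunction (H and A),
    # but several different truth assignments remain compatible with it.
    def consistent_with_failure(h, a):
        """True when the assignment (h, a) is compatible with the
        observation that the joint prediction H & A failed."""
        return not (h and a)

    # Three assignments are all compatible with the failed test,
    # including one in which H itself is true:
    candidates = [(True, False), (False, True), (False, False)]
    print(all(consistent_with_failure(h, a) for h, a in candidates))  # True
    ```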

    However, we can state this even more strongly in the case of the principle of falsification. Since no hypothesis is ever regarded as true, no hypothesis can ever be shown to be false. And if one takes as one’s unit of meaning to be theories instead of hypotheses, one will find that present theories are generally tested by presupposing the validity of theories which have stood the test of time. Unless one assumes that one’s background theories are true, one cannot falsify the theory which is in the foreground, i.e., the theory which one is presently testing. However, the above analysis calls for some examples. I will provide three.

    *

    In my first example, I will consider a problem involving Newton’s gravitational theory. In his day, the explanatory power of his theory was considered amazing. Given the highest degree of accuracy available in the 1600s, his theory was able to pin-point the trajectories of all the planets but one: there existed a minute discrepancy in the precession of the perihelion of Mercury. This one fact was not regarded as in any way falsifying Newton’s theory, though. To test his theory, it had been necessary to bring in other assumptions. For example, when his theory was first checked against the orbit of Mercury, it was assumed that Mercury was the closest planet to the sun. Rather than throwing out Newton’s theory, this assumption was modified.

    For a while, it was thought that there existed a planet Vulcan inside Mercury’s orbit which disturbed this orbit in just such a way as would account for the discrepancy between the original theoretical prediction and the experimental observation. On the basis of this hypothesis, astronomers searched the heavens for the hypothetical planet. As things happened, the additional hypothesis that Vulcan existed turned out to be wrong and Newton’s gravitational theory was abandoned in favour of Einstein’s gravitational theory, in part on the basis of this early experimental evidence, but also on the basis of additional experimental evidence which came after the formulation of Einstein’s theory. Does this mean that Newton’s gravitational theory should have been abandoned in the first place rather than being saved by means of an “ad hoc” hypothesis?

    The Principle of Falsifiability notwithstanding, no, it does not.

    A similar procedure was used to predict and ultimately discover the existence of Neptune on the basis of how this outer planet disturbed the orbit of Uranus. Newton’s gravitational theory simply proved too powerful to abandon as hastily as the Principle of Falsifiability would have required. Besides, other explanations of the failure of this one prediction were still possible even once planet Vulcan failed to turn up: for example, a hypothetical oblateness of the sun; and if measurements of the sun’s profile disproved this, one could hypothesise a rotation in the sun’s interior which would give rise to an oblateness of the distribution of the sun’s mass existing only within the interior, leaving no appreciable evidence at the surface. Or would it? One might have to ask a student of stellar dynamics.

    When the original conflicts were discovered between Newton’s gravitational theory and the experimental evidence, its discovery was the result not only of Newton’s gravitational theory, but also certain implicit assumptions, assumptions which were not necessarily even stated, but were, in effect, a kind of theoretical background to Newton’s theory. As a result of the predictive power of Newton’s theory under a wide range of circumstances, the scientists of Newton’s age thought it best to modify the background assumptions rather than abandon this powerful theory. With the hindsight made possible by our own advanced age, we may conclude that with respect to Vulcan they were wrong, but in the case of Neptune, they were right. But in both cases, their approach was most reasonable.

    *

    Now I will begin my second example. Roughly at the time of Darwin, it was considered a recognised fact that the earth and the sun couldn’t be more than a few million years old: the only fires known were chemical fires, and the only other source of energy we could conceive of for the sun was energy released as the result of gravitational collapse. On the basis of the latter, Lord Kelvin calculated that the age of the sun had to be in the range of millions of years, not thousands of millions. This required evolution to take place at a rate which seemed unlikely.

    Similarly, a geologist discovered evidence that the rocks of the earth were in many cases older than the limit on the earth’s age based upon the calculation involving the sun. In addition, the theory of continental drift was proposed to account for similarities in the shapes of the continents: these enormous land masses seemed to have shapes which could fit together like pieces of a puzzle, but the fit was not perfect, and once again the apparent age of the earth seemed to count against the theory. Another problem with this theory was that there existed no known engine for the hypothesised movement of the continents: as far as scientists of the time knew, the earth was essentially one giant, solid rock. Volcanos were simply a small, irrelevant side-issue.

    However, special relativity, which was originally put forward to account for experimental results involving the motion of light, required an equivalence between mass and energy which suggested that chemical fires were not that efficient. The study of subatomic particles led to the recognition that nuclear fires could exist which would be much more efficient than chemical fires. Nuclear fusion made it possible for us to recognise that the sun is much older than we originally thought it was.

    Nuclear fission explained the generation of heat internal to the earth’s surface, and this made it possible for us to recognise that the continents are afloat on a sea of molten rock which exists beneath the earth’s crust. This provided us with a means to explain continental drift. In addition, both botany and zoology discovered similar populations at just the places where the theory of continental drift argued the continents had once been joined.

    New evidence and once highly controversial theories were fitting together like the continents once had. They were providing us with a unified view of our world. Whereas Karl Popper’s fallibilism viewed distinct theories as being tested against evidence independently of one another, the history of science has shown a remarkable degree of interdependence between distinct theories existing in highly disparate areas of human knowledge.

    *

    With my last example, I will consider Newtonian mechanics. If one steadfastly held to Popper’s Principle of Falsifiability, one result contrary to prediction would be enough to falsify this theory. With this in mind, one could easily conclude that Newtonian mechanics has been falsified many thousands of times over in high school physics classes. Students perform experiments which quite regularly “falsify” this theory every year. But why is it that whereas this would be enough to discount the theory if the experiments were being performed by expert experimental physicists, it is not enough when the experiments are being performed by young students?

    When one explains this difference in terms of the different levels of training and reliability, one is bringing in psychological considerations to explain the results of physical experiments. Thus one can argue that there is a sense in which the science of physics depends upon the science of psychology.

    *

    I will draw from this analysis three conclusions.

    First, if one accepts induction, some element of coherentism is required: there exists an interdependence between the justification of the distinct statements which compose a theory. Second, there exists an interdependence between the justification of a foreground theory and its background theories. Third, in science, one must regard many statements as true even if the justification of these statements does not admit of absolute certainty. Much of our knowledge is corrigible.

  38. 188
    CobblyWorlds says:

    Marlo Lewis ?

    Hmmmm

    http://www.renewamerica.us/columns/mlewis/050929
    Marlo Lewis says:
    “The increasing frequency of hurricane activity in the Caribbean and Gulf of Mexico since 1995 is due to a natural, multidecadal shift in the Atlantic Ocean thermohaline circulation (THC), the oceanic “conveyor belt” that pulls warm water from the tropics northward to the British Isles.”
    THC or Gyre?
    Anyway he doesn’t mention Holland and Webster “Heightened tropical cyclone activity in the North Atlantic: natural variability or climate trend?”
    Pdf from the Royal Society here: http://www.pubs.royalsoc.ac.uk/media/philtrans_a/Holland%20and%20Webster%201.pdf
    Holland & Webster say:
    “42% of the observed 0.67°C increase in hurricane season SSTs in the NATL since 1906 can be attributed with a high degree of confidence to anthropogenic gases introduced into the atmosphere.”
    Although the most interesting argument is in part 4 with respect to a possible natural role. Too involved for quote mining.

    Marlo Lewis says:
    “More important, global warming was not responsible for Katrina’s destructive fury. Any tropical storm traversing waters of 82 degrees Fahrenheit or warmer has the potential to become a category 4 or 5 hurricane. Gulf waters routinely exceed that temperature in August, and did so long before mankind began using fossil fuels.”

    Holland and Webster say:
    “Furthermore, equatorial developments increased sharply in 1995 in association with the marked increase in Atlantic SSTs since 1970, which Santer et al. (2006) have demonstrated to be largely due to greenhouse warming. Kossin & Vimont (in press) have related this to a warming phase of the AMM, which included the equatorward shift in formation and which they conclude could be influenced by greenhouse warming….
    ….The increase in equatorial developments places a substantially larger number of cyclones in a region that is conducive to sustained intensification, and this has been the dominant cause of both the trend in major hurricanes and the recent heightened activity.”

    Not a good one for me to pick – hurricanes are not really my thing, and there’s not much science on that site either (I’m just a hobbyist learning this stuff). As for the hurricane/AGW linkage – a promising hypothesis IMHO.

    On this page
    http://www.renewamerica.us/columns/mlewis/050408
    Lewis talks about CO2 not being a pollutant, and fertilization of crops, on those I don’t really know (don’t know if I care) – to me that’s not the point anyway. A tiger cub might be a benefit in controlling mice around your farm – but one day that cub will grow up and may take a liking to your livestock. Cost benefit analysis must consider timescales and the nature of the beast.

  39. 189
    dhogaza says:

    But the basics of AGW theory do not rely on models: they’re just to get a better picture.

    Keith has been told this at least three times in this very thread.

    Keith – are you listening?

  40. 190
    Vera B says:

    My understanding of this forum is that it is primarily for climatologists to educate the public and counter disinformation. There are some good posts here, but as a newcomer to this forum (though not this debate) I must say the vitriol is a distraction. There may be contrarians posting under assumed names, but you have to take the high road. Assume all posts are legitimate. Post links to earlier discussions for those of us late to this site. By all means keep posting links to peer-reviewed articles. Quit demonizing skeptics – my mother is one. (OK, she watches Fox News – I can’t stop her!) Please remember that many people trust the news, not realizing that it must sell to stay in business – and controversy sells. You would make far more converts by answering every post calmly and logically. And repeatedly.

  41. 191
    David B. Benson says:

    Anthony Hawes (171) — Click the Start Here link at the top of the page and read it. Then click the link to the AIP Discovery of Global Warming pages in the Science section of the sidebar.

    After reading these pages, you will likely agree that AGW is occurring.

    And in my inexpert opinion, it will be horrendously bad.

  42. 192
    dean_1230 says:

    RE 144

    Being asked to use averaged numbers rather than single dates, I went to the HadCRUT3 dataset and found the following:

    5 Year Central Moving Average:

    YEARSPAN : Delta T
    1910-1941: 0.5556
    1976-2004: 0.5938
    difference of 0.04°C
    % difference of 6.43%

    9 Year Central Moving Average
    YEARSPAN : Delta T
    1907-1941: 0.5524
    1971-2002: 0.6041
    difference of 0.0517°C
    % difference of 8.56%

    So all the uproar about the fastest rise ever is basically because of a 0.05°C difference spread over more than 30 years. To claim that that is a significant difference is truly misleading. It’s not, especially when the accuracy of the numbers is no better than 5% to begin with.

    By the way, I went with a 9-year moving average because I wanted an equal number of years on either side of the center. Also, you’ll notice that I changed the years included to pick the highest and lowest points over these periods rather than a specific timeframe.
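    For reference, a centered moving average of the kind used above can be sketched in a few lines (a generic illustration, not the commenter’s actual code):

    ```python
    def centered_moving_average(values, window=9):
        """Average each point with an equal number of neighbours on
        either side; window must be odd so it is symmetric about the
        central year."""
        half = window // 2
        return [
            sum(values[i - half:i + half + 1]) / window
            for i in range(half, len(values) - half)
        ]

    # Stand-in data (annual anomalies would go here):
    print(centered_moving_average(list(range(10)), 5))  # [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
    ```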

  43. 193
    dean_1230 says:

    Re #174 & #180

    The 0.55°C rise from the late 70s through today is accurate. But what is INACCURATE is the use of 1900 as the start date for the warming. The start date, depending on which averaging method you use, is around 1907-1911 and is as much as 0.15°C lower than the average at 1900.

    The difference between the highpoints and the lowpoints of the two warming episodes is almost identical (within 0.05°C). To say that 60% of the warming has happened in the last 30 years is clearly misleading. To infer that we’re warming faster over the last 30 years than we’ve ever warmed before is just as misleading. We are warming at the same rate as the early 20th century.

    [Response: actually, going with your 9-year running mean changes, it looks like the rate is 20% higher now (since the period of years is shorter - 31 vs 34 - in the recent larger change). - gavin]

  44. 194

    Re 181:

    Or, you all could have a look at the HadCRUT3v data here.

    -Chip

  45. 195
    dean_1230 says:

    Re 194 & 181

    Herein lies the rub… there is a significant difference between the two data packages (HadCRUT3 & GISS). If you use HadCRUT3, the temperature rise we’re currently undergoing is the same in magnitude as was seen in the early 20th century. If you use GISS, it’s greater (by about 0.2°C).

    Since both of these are accepted temperature datasets, what does that say about our ability to accurately measure the global temperature?

  46. 196
    dean_1230 says:

    Re: the response to #193:

    But it’s still only 0.05°C, which is insignificant compared to the accuracy of the data. The difference between the accepted datasets (GISS vs. HadCRUT3) is larger than the difference in warming between the two periods within the HadCRUT3 dataset itself.

  47. 197
    John Mashey says:

    re: #166 keith
    Keith says:
    “We’re absolutely nowhere in terms of applying physics to describe apparently simple chemical reactions. Ahmed Zewail has shown that even some of the most basic ideas we had were incorrect when applied to our most simple reactions so perhaps you can understand why I struggle to be comfortable with there being less experiments done in your field.

    I suspect that your view of the scientific method and mine are not the same as a result of the respective fields we work in but I agree that it would be instructive to look in more depth at your methodologies since clearly you generate, collect and interpret data differently. Perhaps you would be willing to do the same and consider how it differs outside of your own field? Maybe you could learn something yourself? You never know. Good luck with the research.”

    Now, that is a reasonable comment that may actually lead to fruitful discussions. As it happens, I’m fortunate to have at least talked to researchers in both domains over some years. [I used to visit big pharmas, who visualized molecules in conference rooms with 3D glasses ... of course, visualizing molecules was the *easy* part.]

    Maybe Keith can suggest good references, but for now, Wikipedia suffices:
    http://en.wikipedia.org/wiki/Molecular_modelling
    http://en.wikipedia.org/wiki/Protein_folding

    Let me expand the discussion in #145 [about data, science, computing] by considering models, approximations, and usefulness.

    Many models (whether computer or otherwise) describe the real world, and act as approximations to it. People usually seek the highest-level (simplest) model that provides useful results, and historically, we generally got the simplest ones first, discovered places where they didn’t work, and had to go deeper. The standard example is Newton -> Einstein -> (unified theory?)

    For instance, chemistry got a *long* way before quantum mechanics or computers, even without modeling the quantum chemistry at the next level underneath, or the quantum physics below that.

    On the other hand, in the domain Keith is talking about, for predictions to be useful, they essentially need to model very deep properties, and they have to give *exact* (in essence, digital) results, not averages (i.e., more like analog).

    For instance, this is easily seen in protein folding: see the first chart in the Wikipedia entry on Protein Folding.

    This is a nightmare: you start with a (relatively) simple linear chain of amino acids, i.e., something you can specify. This then “folds” into a particular physical shape (or, under various circumstances, one of several), and extremely important characteristics of the resulting shape matter. This takes many steps, but in general, under the same conditions, the same starting point will yield the same final result, which is a good thing for life on Earth.

    To be useful, a model of protein folding yields an *exact* result: a specific 3D configuration. If the model is really good, its result matches what happens in the real world. If the model is not quite good enough, it may mis-model one step of many in the folding process, in which case the result it predicts can be TOTALLY DIFFERENT, and in no way resemble what actually happens in the real world. The Wikipedia entry shows how incredibly far off they are from being able to do what they’d like.

    There is no equivalent of getting a prediction like:

    “Given XYZ assumptions, and known physics, we’d predict that the 5-year average in 2050 would be N degrees higher, +/- M.” That’s a bulk prediction with a distribution, and it’s a very useful prediction. It doesn’t mean that new data won’t modify it.

    At one extreme, some models:
    - predict bulk properties
    - predict distributions of results
    - look for approximations that are “good enough”
    - try to improve resolution over time

    For instance, climate science deals with an inherently noisy system, only expects to make predictions about averages across a few years, and for relatively large geographic areas.
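
    As a toy illustration of that kind of bulk prediction (invented numbers, not any real climate model): individual runs of a noisy system wander apart, but the ensemble mean tracks the underlying forced trend, with a spread you can quote as an error bar.

    ```python
    import random

    # Toy noisy system: a fixed yearly trend plus random "weather".
    # All numbers here are invented for illustration only.
    def run(trend=0.02, noise=0.1, years=50, seed=0):
        rng = random.Random(seed)
        t = 0.0
        for _ in range(years):
            t += trend + rng.gauss(0.0, noise)
        return t

    # Two runs with different weather (seeds) end up in different places...
    assert run(seed=1) != run(seed=2)

    # ...but averaging many runs recovers the forced signal (0.02 * 50 = 1.0),
    # and the run-to-run spread gives you the error bar.
    ensemble = [run(seed=s) for s in range(500)]
    ensemble_mean = sum(ensemble) / len(ensemble)
    spread = (sum((x - ensemble_mean) ** 2 for x in ensemble) / len(ensemble)) ** 0.5
    ```

    No single run is an *exact* prediction, and doesn’t need to be; the useful output is the distribution.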

    In the middle are models (like the cars, for example), that need to make detailed predictions to be useful, but:
    - more detailed approximations give better answers
    - there usually aren’t terrible discontinuities in the results, given small changes to inputs

    I.e., they don’t get results like:
    - If you do a head-on crash at 5 MPH, everybody is fine.
    - But at 6 MPH, or at a 1-degree angle off head-on, the car blows up and everyone dies. [Despite the Taurus example I mentioned earlier.]

    In semiconductors, simulations of circuits and power are somewhat like this as well, or at least have been. They’re approximations.

    At the other extreme, a model, to be useful, must match reality essentially *perfectly* (at some level of abstraction), and even a small error somewhere can yield totally different results.

    In semiconductors, *logic* simulation is like that. Given a description of the logic, a logic simulator produces an exact answer [normally, pass/fail] for a given input. It *never* says “Passes this specific test at 95% confidence level”: if a chip designer saw that, they’d flee in terror.

    At least some of the modeling in Keith’s domain is like that.

    ====
    Anyway, that’s my quick (I’ve got to get ready for a trip) take on it.

    Given where Keith’s coming from, I can sympathize with his concern …

    but really, in many simulations of the real world (including climate):
    - small perturbations of inputs usually yield small perturbations of outputs
    - ensembles and sensitivity analyses can be useful
    - exact results aren’t needed; distributions are fine
    - error bars actually make sense

    and there are plenty of cases in molecular modeling where those aren’t true.

    So, does that make more sense?

  48. 198
    Svet says:

    RE: Gavin’s response to #153
    “look at the maps. it is readily apparent where there are differences, and why. No mysteries there…. – gavin”

    Gavin, I assume that you are referring to the fact that the difference is in the Arctic. Also, in another thread you have said, “There is a difference in how they interpolate between data stations, particularly in the Arctic – HadCRU does not estimate Arctic ocean temperatures from nearby coastal data, while the GISS analysis does – given the warmth of the Arctic in recent years, that makes the GISS anomalies slightly warmer.”

    Ok – there is no mystery but nonetheless I believe there is a problem. What GISS is doing is either valid or it is not. If it is valid then why has the Hadley Centre not corrected their methodology?

    Looking at it from another angle, the implication is that almost all of the warming in the last six years has been in the Arctic. Is this what the models predict?

  49. 199

    Keith writes:

    [[I simply have some doubts about the ability to predict the climate in 50 years with what I see as a limited data set.]]

    So does everybody else, since we can’t predict what emissions will be over that period. But given a predicted pattern of emissions, and assuming so many volcanoes go off, and so on, we can predict the climate response. Nobody has a crystal ball. Celestial mechanics comes closest, but even that has its limits.

    What this has to do with listening to the models when they tell us we’re in serious trouble escapes me.

  50. 200
    dean_1230 says:

    Re the response to 193:

    Gavin, how did you get 20%? The total difference in temperature between the two periods is 0.05°C.

    By the way, I calculated the percentage by taking the difference between the two temperature rises (0.04°C for the 5 yr average case) and dividing it by the temperature rise between 1976-2004 (0.5938°C), then multiplying by 100. That gave me a 6.8% difference between the two warming periods. The 9 year average using the same method gave me 8.5%. So there’s a 2% difference solely due to which averaging technique you use.

    I also did 5 & 10 year weighted averages using the 5 & 10 previous years. The difference there was 3% for the 5 year case and 12% for the 10 year case. Clearly, when the choice of averaging technique is generating larger deviations than the actual difference, any conclusions are inherently suspect.

    Can anyone claim that the data is so accurate that these differences and percentages are outside the error band of the data? If not, then there’s no statistically significant difference between these two warming periods.

    [Response: You said the rate of warming - that is the delta T divided by the time period. In the later period, you have a bigger delta T and a shorter time - that makes the rate faster (0.19 degC/dec vs. 0.16 degC/dec i.e. by 20%). But before this gets out of hand, remember that the only claim made in the original comment - which it should be said I did not originate - is that the warming in recent decades is more than half of the century long trend. It is, and robustly so, as Tamino demonstrates. - gavin]
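
    For what it’s worth, the rate arithmetic in Gavin’s response can be checked directly from the 9-year-average figures quoted in comment 192 (taking rate = ΔT divided by the span):

    ```python
    # Rate = delta T / time span, expressed in C per decade, using the
    # 9-year-average numbers quoted in comment 192 above.
    def rate_per_decade(delta_t, start_year, end_year):
        return delta_t / ((end_year - start_year) / 10.0)

    early = rate_per_decade(0.5524, 1907, 1941)    # about 0.16 C/decade
    recent = rate_per_decade(0.6041, 1971, 2002)   # about 0.19 C/decade
    print(round(100 * (recent / early - 1)))       # -> 20 (percent faster)
    ```

    The later period has a slightly larger ΔT over a shorter span (31 vs. 34 years), which is where the roughly 20% figure comes from.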

