RealClimate

Comments

  1. You write, “uncertainty associated with uncertain or inaccurate models grows with time,” but the graphs both show model uncertainty declining with increasing time. Or, am I interpreting them wrong?

    [Response: The graphs show the fractional uncertainty compared to the signal, specifically: “The relative importance of each source of uncertainty in decadal mean surface temperature projections is shown by the fractional uncertainty (the 90% confidence level divided by the mean prediction) relative to the warming from the 1971–2000 mean” – gavin]

    Comment by Benjamin — 28 Sep 2009 @ 8:30 AM

  2. I assume that a decade is a running mean of ten years. Is this correct?

    Comment by Charles Raguse — 28 Sep 2009 @ 9:01 AM

  3. Re #1:
    As the author of the referenced paper, I know that this can be a confusing picture! There is another version here:

    http://ncas-climate.nerc.ac.uk/research/uncertainty/exFig1.jpg

which I think clarifies the growth of uncertainty and its components.
    Ed Hawkins.

    Comment by Ed Hawkins — 28 Sep 2009 @ 9:04 AM

  4. “Predictions”

    [Response: cough.. fixed. – gavin]

    Comment by David N — 28 Sep 2009 @ 9:27 AM

  5. Thanks for another informative post!

    Typo in heading: “predicitions” for “predictions.”

    Comment by Kevin McKinney — 28 Sep 2009 @ 9:33 AM

  6. The link to the World Modelling Summit for Climate Prediction at the end of the article doesn’t mention decadal forecasting. As far as I know (I wasn’t invited to it!) that meeting was more about how to improve the representation of regional climate and extreme weather in climate models, partly by increasing the resolution of climate models, and partly by testing climate models by performing weather forecasts and seasonal to decadal predictions (seamless prediction) with them.

    I agree wholeheartedly with your assessment of the challenges to producing skillful and useful decadal predictions, but I think you’re possibly missing out by not mentioning that they might be extremely useful tools for evaluating the representation of processes within climate models (e.g. can trying to forecast ENSO with a climate model tell you anything about how well the model represents ENSO?).

    Cheers

    Len Shaffrey

[Response: Thanks for the comment. I’m certainly not suggesting that research on decadal predictions is not worthwhile – it is and for the reasons you suggest. However, this is more a caution against people thinking this is a mature field of research, which it certainly isn’t. There is however a link between regional predictions and decadal predictions – which is highlighted by the figure from Hawkins and Sutton – and that is the role of internal variability. Statements about ‘necessary’ improvements to regional forecasts (which depend on greater predictability of the internal variability) without proper assessments of its feasibility, I find to be somewhat jumping the gun. – gavin]

    Comment by Len Shaffrey — 28 Sep 2009 @ 9:34 AM

Well, so if we expect some more years of no warming and perhaps some cooling, we must be “prepared” for the media, the politics and our own credibility, so as not to lose it.
If the global temperature at 2020 is the same as around 2000, it will show again that the proposed climate sensitivity for CO2 is not well calculated and any stated future scenario must be wrong.

    Comment by franz mair — 28 Sep 2009 @ 9:52 AM

  8. I’m sure you want to update this
    It’s got some problems.

    http://www.realclimate.org/index.php/archives/2004/12/how-do-we-know-that-recent-cosub2sub-increases-are-due-to-human-activities-updated/
    How do we know that recent CO2 increases are due to human activities?
    — 22 December 2004
    Another, quite independent way that we know that fossil fuel burning and land clearing specifically are responsible for the increase in CO2 in the last 150 years is through the measurement of carbon isotopes.

    One of the methods used is to measure the 13C/12C in tree rings, and use this to infer those same ratios in atmospheric CO2. This works because during photosynthesis, trees take up carbon from the atmosphere and lay this carbon down as plant organic material in the form of rings, providing a snapshot of the atmospheric composition of that time.

    Sequences of annual tree rings going back thousands of years have now been analyzed for their 13C/12C ratios. Because the age of each ring is precisely known** we can make a graph of the atmospheric 13C/12C ratio vs. time. What is found is at no time in the last 10,000 years are the 13C/12C ratios in the atmosphere as low as they are today. Furthermore, the 13C/12C ratios begin to decline dramatically just as the CO2 starts to increase — around 1850 AD. This is exactly what we expect if the increased CO2 is in fact due to fossil fuel burning.

    [Response: Not following your point at all. What do you think is wrong with this? – gavin]

    Comment by Howard S. — 28 Sep 2009 @ 10:11 AM

  9. In addition to economic, technological and sociological changes as sources of uncertainty related to knowing the level of carbon dioxide in the atmosphere over time, there are also uncertainties in our understanding of the carbon cycle where, even if emissions and deforestation were perfectly known, uptake or release of carbon dioxide from or to natural carbon pools is also unclear. And, while direct anthropogenic effects dominate now, an exponential feedback could take over as the dominant term and thus may dominate the scenario uncertainty in the longer timescale. The only way to control for this is to avoid perturbations which might trigger such a feedback. In other words, good models may only be possible for a reduced emissions regime.

    Comment by Chris Dudley — 28 Sep 2009 @ 10:22 AM

  10. This?

    [edit]

    [Response: Not even close. Carbon isotope analyses in well-dated tree rings have absolutely nothing to do with tree rings used as climate proxies. In this context the only thing that matters is that the carbon in the wood is independently datable. – gavin]

    Comment by Howard S. — 28 Sep 2009 @ 10:39 AM

  11. Following up on the points raised in #6….

    The internal variability of climate on regional scales is, of course, an important issue. We can still potentially significantly improve projections of regional climate for later in the century, where the internal variability component is a smaller fraction of the total uncertainty. Projections of regional details and changes in extreme temperatures in the mid-21st century should still be useful(?), and that is likely to require higher resolution. To predict regional changes for the next decade or two will probably require both higher resolution and initialised predictions.

    Of course, we have a long way to go to realise this potential. And, as #9 has suggested, there is additional uncertainty in the carbon cycle on the longer timescales.

    Ed.

    Comment by Ed Hawkins — 28 Sep 2009 @ 11:11 AM

  12. When will you be posting on the implications of the BAS Antarctic survey?

    Comment by Walt Bennett — 28 Sep 2009 @ 11:23 AM

  13. Here’s more from Gavin on this important issue. He’s been warning about inflated expectations for a long time on short-term prediction:
    > http://dotearth.blogs.nytimes.com/2008/08/20/making-climate-forecasting-more-useful/
    > http://j.mp/nytIPCC

    Comment by Andy Revkin — 28 Sep 2009 @ 11:31 AM

Now I have had a better look at the graphs above. What would you tell us with those “uncertainties”?
The black curve lies globally around 0.5, so what, it can be right, it can be true, we don’t know.
For the British Isles it is around 0.7, OK. That means we don’t know, but we have a chance that the “predictions” (better: simulations!) could be OK.

Thank you guys, that’s what we have wanted to explain to you for many years.

    Comment by franz mair — 28 Sep 2009 @ 12:30 PM

  15. I’m only a public health sociologist, but I can make one decadal prediction with a high degree of certainty. If there is no clear warming trend for the next ten years, the chance of getting anything meaningful done politically in the United States, whether it concerns greenhouse gas emissions or mitigation, is going to be 1+(e^(i*pi)). Just sayin.

    Comment by cervantes — 28 Sep 2009 @ 12:45 PM

  16. Agree with #15.

At the risk of driving Gavin crazy, if it were possible to reduce model uncertainty just a bit, that would be very helpful. Backcasting explanations make it all look so easy.
    We can deal with scenario uncertainty and “weather.”

    Comment by Kevin — 28 Sep 2009 @ 3:41 PM

  17. With regards to model predictions and uncertain variables like CO2 emissions.
    Why not freeze the models along with the model runs?
    So instead of just posting the output of a model run on a scenario, the model code itself should be frozen, so it can be rerun with updated emissions scenarios.

    Comment by MikeN — 28 Sep 2009 @ 4:32 PM

What confusing graphs!!!
Very, very hard to read… (thank you Ed Hawkins, the other graph helps a lot, but I still can’t link it to the two on this page)
For example I don’t understand why the fractional uncertainty does not start from more than 1 for the global temp…
I had understood that for less than 30 years, the natural variations (or “noise”) can hide the long term trend. So it means that you cannot even tell whether it will warm or cool down.
In that light, what does the total fractional uncertainty of 0.4 at ten years mean?
I understand we are talking about decadal mean temperature, but still…

    Comment by Naindj — 28 Sep 2009 @ 4:51 PM

@15 cervantes:
    I’m only a public health sociologist, but I can make one decadal prediction with a high degree of certainty. If there is no clear warming trend for the next ten years, the chance of getting anything meaningful done politically in the United States, whether it concerns greenhouse gas emissions or mitigation, is going to be 1+(e^(i*pi)). Just sayin.

3-5 years of steady or slightly cooling temperatures will be enough, and they are almost afraid that it will happen and the CO2 climate sensitivity will be said to be completely wrong!

    Comment by franz mair — 28 Sep 2009 @ 4:54 PM

  20. MikeN, the reason they don’t keep outdated models running after improvements are added is — just from what I’ve gathered as an ordinary reader, mind you:
    — old models are less useful than improved ones
    — each model uses all available computer and time resources
    — all models are wrong.
    Do you know the rest of that 3rd line?

[Response: Actually, these days it’s pretty easy to find old versions of the models. EdGCM was basically the old Hansen model from 1988, and I think you could still find the previous public releases of the NCAR models. There are issues with software libraries and compilers of course. – gavin]

    Comment by Hank Roberts — 28 Sep 2009 @ 4:57 PM

  21. #18 Naindj –

    The graphs are for 10-year average temperature predictions relative to the 1971-2000 mean (not the forecast starting year of 2000). So middle time of the first data point (2001-2010) is already 20 years past the middle time of the baseline. Decadal-scale uncertainty is competing with 20 years of global warming, and the global warming is larger by a factor of two than the 90% spread of the model sample.

    Approximate numerical example: mean model forecast for 2001-2010 = +0.28 C relative to 1971-2000. 90% range of model forecasts: +/- 0.14. Fractional uncertainty 0.14/0.28 = 0.5.

    In their paper, they also estimate fractional uncertainty in predicting 2001-2010 temperatures relative to a year 2000 baseline, and that’s quite a bit higher initially (0.9).

    Comment by John N-G — 28 Sep 2009 @ 5:39 PM
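
    A minimal sketch of the fractional-uncertainty arithmetic in John N-G’s example above (the +0.28 C and ±0.14 C figures are his illustrative numbers, not the actual Hawkins & Sutton data):

    ```python
    # Fractional uncertainty = 90% confidence half-range divided by the mean
    # predicted warming, both taken relative to the 1971-2000 baseline.
    def fractional_uncertainty(mean_warming_c, ci90_halfwidth_c):
        return ci90_halfwidth_c / mean_warming_c

    # 2001-2010 decadal mean: +0.28 C above 1971-2000, 90% spread of +/-0.14 C
    print(fractional_uncertainty(0.28, 0.14))  # -> 0.5
    ```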

  22. So, regional applications of global models show that northern California will warm as a result of global climate change, especially during the winter and spring, resulting in decreased snowpack and increased stress on the statewide water system.

    At the same time, increased warming will result in more frequent El Nino conditions as the Pacific ocean looks to dump heat back into the atmosphere. This means more rain in southern California.

    So, if Southern California water managers get their act together, they can swap snowpack losses for El Nino gains.

    Is this right?

    [Response: Hmm… not really. The big issue is of course what ENSO will do – and this is not well understood. I don’t think you can safely say that “increased warming will result in more frequent El Nino” – the projections and credibility for this aspect of climate are very varied. The snow pack changes are more reliable. – gavin]

    Comment by Francis — 28 Sep 2009 @ 5:40 PM

cervantes (15) & franz mair (19) — Here are the decadal averages from the HadCRUTv3 global temperature product:
    http://tamino.files.wordpress.com/2008/04/10yave.jpg

Almost fifty years of steady, very fast warming ought to be enough to spur action, don’t you think?

    Comment by David B. Benson — 28 Sep 2009 @ 5:49 PM

  24. Re #18, Naindj,

    Hopefully the graphs make more sense if you read the whole paper linked to by Gavin! I agree that they are not easy to read at first, but are updates of a similar graph made by Cox & Stephenson (2007, Science). There are other versions in the paper as well which may be easier to understand, like this one:
    http://ncas-climate.nerc.ac.uk/research/uncertainty/Fig4c.jpg
    where the colours are the same as in the graphs in the article and show the fraction of the total uncertainty due to each source.

    Internal variability can indeed hide the warming for a decade or so, as we are experiencing at the moment, but not for 30 years globally.

    Re: the fractional uncertainty – it does not start from more than 1 because, as you say, we are considering a decadal mean temperature which reduces the uncertainty in the internal variability component, and importantly, the temperatures are always measured relative to the mean temperature from 1971-2000. We can therefore be confident that the temperature in the next decade will be warmer than the average of 1971-2000.

    As the fractional uncertainty is the total uncertainty divided by the temperature change above 1971-2000, the uncertainty would have to be quite large to make this quantity greater than 1. [In fact, it is very close to 1 for an annual prediction 1 year ahead]. A value of 0.4 for the fractional uncertainty means that if the temperature prediction is 0.5K above 1971-2000 then the uncertainty is 0.2K, and so on.

    Hope this helps!
    Ed.

    Comment by Ed Hawkins — 28 Sep 2009 @ 6:05 PM

  25. Re #18 (again)
    Also, of course regionally the internal variability is much larger and so we can be less confident in our projections of the next decade on regional scales, as shown by the second panel in the article, where the fractional uncertainty is indeed larger than 1 for the next decade for the UK.
    This does not make the projections useless, but it is important to realise the uncertainties in short term projections. It is hoped that the decadal predictions may help in this respect, though much more research is needed.
    Ed.

    Comment by Ed Hawkins — 28 Sep 2009 @ 6:11 PM

  26. It very much looks like the model predictions are just about hanging in there when compared to the real world measurements. If temperatures continue to fall or flatten out then observations will break out of the lower band and then it will be interesting to see where the modellers go then.

Is it just me, or is there a lot less certainty surrounding this subject recently, shrill nonsense stories in the lead-up to Copenhagen notwithstanding?

    Comment by David Harrington — 28 Sep 2009 @ 11:42 PM

Re #21, #24 and #25 (John and Ed)

    Yes it helped!
    (I’m slow but not a desperate case)
    Quite interesting graph indeed. (once you understand it!)

    Thanks!

    Comment by Naindj — 29 Sep 2009 @ 9:48 AM

  28. I think that it would help many of us understand the basis of the discussion if “decadal predictions” were defined and that was contrasted with climate change predictions (or projections if that be a better word) whose scale can be “decadal”.

    [Response: The projections that are usually talked about (and that are the basis of the IPCC figures for instance) are all from long simulations that hindcast from the 19th Century using only the external forcings (CO2, methane, aerosols, volcanoes, solar etc.). By the time they get to 2010 or 2020 there is a wide range of ‘states’ of the internal variability. For instance, one realisation might have a big El Nino in 2009, another would be having a La Nina event and more would be ENSO-neutral. Similarly, some might have an anomalously strong circulation in the North Atlantic, others would have an anomalously weak circulation etc. Thus the projections for the next decade from these sets of runs encompass a wide range of internal variability which has no relationship to the state of internal variability in the real world right now. Thus their future simulations are generally only used to forecast the element of climate change that is forced by the continuing rise in greenhouse gases etc. and to give an assessment of the range of possible departures from the forced change that result from different internal variability.

    The difference that the decadal predictions make is that they try and sync up the state of internal variability from the real world with an analogous state in the model, and thus attempt to predict the unforced variability and the forced variability together. In theory, this should be closer to what actually happens – at least for the short time period. – gavin ]

    Comment by Dean — 29 Sep 2009 @ 10:00 AM

  29. A bit OT here, but speaking of changing the energy mix, Ontario, Canada’s largest province, has just enacted feed-in-tariffs a la Spain–and we may be seeing the first corporate impacts from that. It should be noted that Ontario plans to phase out coal by 2014. (Coal was 23% of the energy mix five years ago, and is now just 7%.)

    http://www.cleantech.com/news/5077/feed-tariff-spurs-canadian-hydro

    Comment by Kevin McKinney — 29 Sep 2009 @ 11:14 AM

  30. > sync … with an analogous state in the model

    Does that mean picking from the simulations run from 1900 to the present a few that (now? year after year from the start?) happened to stay the closest to observed reality in as many ways as possible?

    Some of that closeness would be happenstance (getting the volcanos right); is the notion that some of the closeness would be meaningful and those scenario runs would happen to have picked values within the ranges that are …..

    I’ll stop guessing now. But before the topic derails, if you can, or some of the visiting authors can, please do set out a 7th-grade-level description of what’s done?

    Comment by Hank Roberts — 29 Sep 2009 @ 1:34 PM

David Harrington (26) — Temperatures continue to rise. There is a nice graphic here:
    http://climateprogress.org/2009/09/22/new-york-times-andrew-revkin-suckered-by-deniers-to-push-global-cooling-myt/

    and here are the decadal averages from the HadCRUTv3 global temperature product:
    http://tamino.files.wordpress.com/2008/04/10yave.jpg

    Comment by David B. Benson — 29 Sep 2009 @ 1:36 PM

  32. On trends and 30 year ‘windows’.

    I’m not a scientist, nor a mathematician.

Dipping into RC from time to time I have learned that climate is determined by 30-year time spans; anything less is natural variation – or ‘weather’. Variation tends to obscure the trend. So far, so good.

So how can I put faith in the work of scientists and modellers on the continuing, if slightly erratic, upward trend?

    This would help me.

If one were to take every 30-year time span from 1850 (reasonable start point?), so that’s 1850 – 1879, then 1851 – 1880, blah, blah, etc., finally ending at 1979 – 2008, are all these year-on-year 30-year trends level or upward?

    Essentially, I am looking for something that would satisfy my (always doubting) mind.

    I really would like to get an answer to this.

    IF not, what do people on this site say.

    __________________

PS. Above, where I wrote “always doubting” I would have preferred to have used the word ‘sceptical’, but that word, ‘sceptical’, seems to have become poisoned.

    Comment by Theo Hopkins — 29 Sep 2009 @ 1:51 PM

From the web article referenced in your article, it’s all about natural variability being so overwhelming on the relatively short time scales of the oscillations of natural phenomena (ENSO, La Niña, El Niño, etc.). So for climate science, teasing out a viable projection of a specific trend over anything shorter than a statistically significant time line is much harder than knowing what will most likely happen over an entire century (100 years)?

If we go BAU for as long as we can on fossil fuels at an annual rise rate of 2%, then we double our usage in 35 years (ln 2 / 2% per year), which also means an additional 1.6 trillion tonnes will have been released by 2045. If sinks falter then up to 1 trillion tonnes remains in the atmosphere.

Does anyone know the conversion from billions of tonnes of CO2 to CO2 ppmv in the atmosphere at all? I did read 50 ppmv per 200 billion tonnes, which would mean a 250 ppmv rise in CO2 by 2045.

    Comment by pete best — 29 Sep 2009 @ 2:59 PM
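
    A one-line check of the doubling-time arithmetic in the comment above (the 2% per year growth rate is Pete’s assumption):

    ```python
    import math
    # Constant 2%/yr growth doubles consumption after ln(2)/0.02 years.
    print(math.log(2) / 0.02)  # ~34.7 years, i.e. roughly the 35 years quoted
    ```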

  34. I have the impression that this discussion on decadal prediction is ignoring natural external forcing:
for the very next decades, it seems to me that the potential for major volcanic eruption(s) and the unknowns regarding solar activity are absent from the BAMS figures – or at least, I wonder where they could be: this is not internal variability (it is short-term fluctuations in external forcing), it is not model error, and it is probably not included in the “scenario” uncertainty (it might have been, but a quick look at the paper suggests that it is not the case).
Thus my impression is that this is a theoretical view about models, but it does not fully answer the question “what do we know about the climate within x decades”, at least for short term prediction. Right?

    [Response: Not really. For natural forcings that are somewhat predictable (i.e. the existence of the solar cycle – if not its precise magnitude), this can be included. Volcanoes are obviously a wild card. In some sense this can be likened to scenario uncertainty – but it would be a greater magnitude than the scenario uncertainty addressed by Hawkins. – gavin]

    Comment by P. Marbaix — 29 Sep 2009 @ 3:36 PM

  35. Theo Hopkins (33) — The graph of decadal temperature averages linked in comment #32 starts in 1850 CE. You can estimate the 30 year averages from that. Of the more recent such intervals 1940–1969 might be flat or even down a bit; certainly not after that.

    Comment by David B. Benson — 29 Sep 2009 @ 3:37 PM

  36. Theo, that’s a great project. I think that, in order to achieve maximum reassurance for your mind–doubting, skeptical, or just questioning–you should do the plotting yourself. Woodfortrees offers a wonderful tool to do this. I’ve done the first 30-year span for you; knock yourself out!

    (And let us know what you find.)

    http://www.woodfortrees.org/plot/hadcrut3vgl/from:1850/to:1879/trend

    Comment by Kevin McKinney — 29 Sep 2009 @ 3:49 PM
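
    For anyone who prefers to do the trend calculation offline rather than at Wood for Trees, here is a rough sketch. It assumes you have already saved an annual global-mean anomaly series (e.g. from HadCRUT) as a two-column text file of year and anomaly; the file name and layout are illustrative, not a real download.

    ```python
    # Compute every overlapping 30-year least-squares trend in an annual series.
    import numpy as np

    data = np.loadtxt("annual_anomaly.txt")   # hypothetical file: year, anomaly (deg C)
    years, anomaly = data[:, 0], data[:, 1]

    window = 30
    for start in range(len(years) - window + 1):
        y = years[start:start + window]
        t = anomaly[start:start + window]
        slope = np.polyfit(y, t, 1)[0]        # deg C per year
        print(f"{int(y[0])}-{int(y[-1])}: {10 * slope:+.3f} C per decade")
    ```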

> – old models are less useful than improved ones
Of course. But the old models still have some use.
> – each model uses all available computer and time resources
No they don’t. Only when they are running. I’m asking for the code to be stored.
> – all models are wrong.
…but some are more wrong than others?

    Comment by MikeN — 29 Sep 2009 @ 4:36 PM

  38. Theo Hopkins says:
    29 September 2009 at 1:51 PM:

    If one were to take every 30 time span from 1850 (reasonable start point?), so that’s 1850 – 1879, then 1851 – 1880, blah, blah, etc finally ending at 1979 – 2008, are all these year on year 30 year trends level or upward?

    No. Why would you expect them to be?
    1851 – 1880
    1881 – 1910
    1911 – 1940
    1941 – 1970
    1971 – 2000
    1851 – 2008

I used hadcrut3 because it goes back to 1851. GISTEMP is similar, but only goes back to 1880. (It has been argued that from 1851 – 1880 hadcrut3 only represents the NH well.)

    Here’s GISTEMP:
    1881 – 1910
    1911 – 1940
    1941 – 1970
    1971 – 2000
    1880 – 2008

    Comment by llewelly — 29 Sep 2009 @ 5:06 PM

  39. Theo, Kevin’s right, you should do it yourself rather than just trust some stranger on a blog somewhere to do it for you.

    This may help:

    http://scienceblogs.com/stoat/2007/05/the_significance_of_5_year_tre.php#

    Comment by Hank Roberts — 29 Sep 2009 @ 5:07 PM

  40. MikeN — you can look it up. You know how.

    Comment by Hank Roberts — 29 Sep 2009 @ 5:08 PM

  41. Theo Hopkins (#32),

    http://moregrumbinescience.blogspot.com/2009/01/results-on-deciding-trends.html

    What you’re looking for (though 20- or 25-year trends rather than 30-year) is in figures 3, 4 and 5, but do read the whole post, or you’ll miss out.

    Comment by CM — 29 Sep 2009 @ 5:11 PM

  42. Re #30 (Hank)

The ‘syncing’ of the models to the internal variability is done by inserting (or ‘assimilating’) ocean observations into the climate model (this type of thing is done for weather forecasting too) to force the climate model to be close to the current state. It is then left to run free without being given any more observations, and is then a prediction.

    Ed.

    Comment by Ed Hawkins — 29 Sep 2009 @ 5:37 PM

  43. Our Stefan is pumping up the volume:

    Two meter sea level rise unstoppable: climate scientists

    OXFORD, England (Reuters) – A rise of at least two meters in the world’s sea levels is now almost unstoppable, experts told a climate conference at Oxford University on Tuesday.

    “The crux of the sea level issue is that it starts very slowly but once it gets going it is practically unstoppable,” said Stefan Rahmstorf, a scientist at Germany’s Potsdam Institute and a widely recognized sea level expert.

    “There is no way I can see to stop this rise, even if we have gone to zero emissions.” …

    Comment by Jim Galasyn — 29 Sep 2009 @ 7:44 PM

  44. Gavin in #28,

    Is it not possible to test with presently available model runs if the idea of detailed predictions can be hoped for? There must be instances when model and observation make a pretty good match in the hindcasts. Do they then stay close for a period longer than just being temporary neighbors might produce? Is there extended coherence between model and observation when they get close? If so, then there is hope. If not, then the models are not close enough in detail to the world yet.

    Comment by Chris Dudley — 30 Sep 2009 @ 12:19 AM

  45. Jim Galasyn #43, note that the two metres is not a forecast for 2100AD. Just in case someone tries to pull a Gore on Stefan…

    Comment by Martin Vermeer — 30 Sep 2009 @ 1:24 AM

  46. I would like to ask you a general question – How accurate are the GCM’s in predicting reality?

    You have said that for temperature, the output from single GCM runs will not match the data as well as a statistical model based purely on the forcings, but that the mean of many simulations is better at predictions than any individual simulation.

    I am puzzled by this. If a single simulation is not a good predictor of reality how can the average of many simulations, each of which is a poor predictor of reality, be a better predictor, or indeed claim to have any residual of reality?

[Response: Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will be uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean). Just as in weather forecasting, the forecast with the greatest skill turns out to be this ensemble mean, i.e. you do the least badly by not trying to forecast the ‘noise’. This isn’t necessarily cast in stone, but it is a good rule of thumb. – gavin]

    Comment by Richard — 30 Sep 2009 @ 6:18 AM
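
    A toy illustration of the point in the response above, with made-up numbers: each synthetic “realisation” is the same forced trend plus independent noise, and the ensemble mean tracks the forced signal far better than any single run does.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(2000, 2031)
    forced = 0.02 * (years - 2000)                         # assumed forced warming, 0.2 C/decade
    noise = 0.15 * rng.standard_normal((40, years.size))   # 40 runs of internal "weather"
    runs = forced + noise

    err_single = np.sqrt(np.mean((runs[0] - forced) ** 2))            # one realisation vs. forced signal
    err_mean = np.sqrt(np.mean((runs.mean(axis=0) - forced) ** 2))    # ensemble mean vs. forced signal
    print(err_single, err_mean)   # the ensemble-mean error is roughly sqrt(40) times smaller
    ```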

  47. Should we really treat “scenario uncertainty” as an uncertainty (part of the noise), or as sensitivity to an independent variable (part of the signal)?

    [Response: It is not part of the noise, it is part of the uncertainty in the forced signal and so is very different in kind from internal variability. – gavin]

    Comment by DanH — 30 Sep 2009 @ 6:58 AM

  48. I’ve put a new page on my climatology web site dealing with the successful predictions of the models. I cribbed a great deal from earlier posts at places like RealClimate and Open Mind. Here it is:

    http://BartonPaulLevenson.com/ModelsReliable.html

    Can anyone who knows any of these details send me a citation for a prediction and/or the observation(s) which confirmed it? I have some of this information already, but not all of it. If I can get a comprehensive list together I’ll put it all into that web page as an appendix to prove I’m not just blowing smoke.

    Comment by Barton Paul Levenson — 30 Sep 2009 @ 7:00 AM

  49. Paul in 48. It is all very well to trumpet the success of the models at predicting this or the other but where is the model that can predict at least a fair number of these results in its own right? Model “a” got this right and model “b” got that right but on the whole the process is an amalgamation or average and on this basis it is not going that well. Cherry picking is not science.

[Response: Fair general point, but not actually apropos. Publishing particular results with a particular model is normal; publishing repetitions of the same result with a different model tends to be less common – but it certainly happens. And in assessments such as the IPCC or the CCSP reports, the results tend to be taken over multiple models. – gavin]

    Comment by bushy — 30 Sep 2009 @ 10:29 AM

  50. #32 Theo Hopkins

    30 years with attribution helps. In other words, look at the period after WWII. There was about a 33 year cooling trend, but that trend is generally attributed to the effects of industrial aerosols masking the global warming effect during that period.

    The industrial output at that time had some downsides, like the destruction of the ozone and acid rain. So the Montreal protocol was put in place and the industrial output of those pollutants was reduced and the CO2 pollutant then became more dominant and warming resumed.

    PS I’m still a skeptic.

    Comment by John P. Reisman (OSS Foundation) — 30 Sep 2009 @ 11:12 AM

  51. “but where is the model that can predict at least a fair number of these results in its own right?”

    What does that mean?

    The change in models is because they improve the models. Yet they still go back and run hindcasts to show that the new model too predicts those features in the past.

    So it is already done for subsequent models.

    And doing it for earlier models just tells you that, if it fails, the model then was insufficient for the prediction. But that is why you made the newer model, isn’t it.

    Comment by Mark — 30 Sep 2009 @ 11:41 AM

  52. Regarding #48

    I think “Bushy” is expecting that individual models simulate all the observed effects. IE, that the TopKnot model V1.x projects temperature changes at various altitudes, ENSO states, global temperatures, ocean currents, regional weather, and all the other things that scientists now model about climate.

    Bushy appears to think that BPL is cherry picking to say that TopKnot V1.x got the atmospheric temps right at various altitudes but got the ocean currents wrong, and that NasaMod V4.y got the ocean circulation right but everything else wrong, and that BPL is taking the part of each that it got right and ignoring all the things it got wrong.

    I think the problem is in the basic assumption that there are any such all-encompassing models. Suites of models might look at radiative physics, CO2 distribution throughout the atmosphere, partial pressures and whatnot and project temperature stratifications. But that same model doesn’t predict global temperatures, ocean currents, or migration of Hadley cells.

    — David

    Comment by David Miller — 30 Sep 2009 @ 12:27 PM

  53. #33 – Pete Best wonders how much CO2 1 ppm represents.

It’s funny you should ask that, Pete. Just this morning I was calculating the energy trapped by CO2 compared to the energy released by its formation. I found some fairly different calculations of that. The one that looks properly calculated is at http://answers.yahoo.com/question/index?qid=20070908101242AALwgLr and concludes that it takes about 8 gigatons of CO2 to add 1 ppm to the atmosphere.

The answer I got was that it takes just about a year for the CO2 from burning a ton of carbon to trap as much heat as was released during the combustion. Given that the residence time in the atmosphere is measured in centuries, we add a lot of heat to the planet every time we burn a ton of coal.

    — David

    Comment by David Miller — 30 Sep 2009 @ 12:32 PM

  54. Gavin, your response to #46. What’s your take on the multimodel mean being better than any single model – do you think that is also “noise” (i.e. uncorrelated model error) canceling out? And do you think a multimodel mean would also perform better than any single model in decadal prediction?

    [Response: Not yet clear. It is clear that for a lot of fields the multi-model mean is better than any single model, implying that there is at least some random component associated with any individual model error. But it remains mysterious as to why that is or whether you can’t do better. More research needed as they say… – gavin]

    Comment by Adder — 30 Sep 2009 @ 1:26 PM

BPL – I see you have noted that models gave a pretty good prediction of the amount and duration of the cooling caused by the increased aerosols from the Mt Pinatubo eruption; you could also include the consequent change in humidity, providing increased confidence that the models have the positive water vapour feedback about right.

    References:

    http://www.sciencemag.org/cgi/content/full/296/5568/727 [free sub required]
    http://pubs.giss.nasa.gov/abstracts/1996/Hansen_etal_2.html

    Phil Clarke

    Comment by pjclarke — 30 Sep 2009 @ 2:25 PM

  56. #xx: [edit – post now deleted]

Your story already falls apart at the very first point, but for that you probably need to be aware of the meaning of “detrended”. Is it just me, or is there a growing group of people with a university education who don’t understand that concept? First McLean et al removed the trend and then claimed to have explained the trend; on Jennifer Marohasy’s blog Tim Curtin removes a trend and then claims there is no trend; and here we have a Dr. Löbert who looks at de-trended data and then claims the data show there is no AGW…

    Oh, and if you are so certain of your analysis about the universe, why isn’t any of it published in a physics journal? In fact, have you EVER published anything in a physics journal? Spamming it all over the internet does not make it valid, but rather indicates you can’t get it into even the most low-ranked physics journal.

    [Response: Sorry – we shouldn’t have let that through. This is not a repository for people to post their crackpot physics ideas! (Thanks for the swift rebuttal though!). – gavin]

    Comment by Marco — 30 Sep 2009 @ 3:18 PM

  57. @ Various people in reply to my post at #32.

    Yes, I _have_ to “do it myself”. I hope my maths are up to it. I remember something about root mean squares from school – but that is 50 or more years back.

    But yes, only “do it yourself” will work. Relying on the skills of others means relying on “authority” and that means if one is challenged, say by a denier, one is on weak ‘mental’ ice.

    Thanks in advance.

    Theo.

    Comment by Theo Hopkins — 30 Sep 2009 @ 3:22 PM

  58. Re #56,
    It should be noted that Marco is not referring to the existing post 54; he is referring to a crank post which briefly appeared in the 54 slot. ‘Twas a pity that the crank got deleted — it’s not often that you see all of climate science *and* Albert Einstein simultaneously swept into oblivion.

    [Response: I’ve made that clearer. Comedic potential aside, that kind of stuff is not worth our time. – gavin]

    Comment by spilgard — 30 Sep 2009 @ 4:08 PM

  59. Barton P. L., nice list!

    It might be fun also to look at how far back some of these predictions were made. I was just reading Oreskes (“How do we know…”), and she cites Manabe and Stouffer (1980) as predicting your #5 (polar amplification). I looked it up. They put a GCM with a simple mixed layer model of the oceans through a quadrupling of atmospheric CO2. My layman’s take is that they also got your #2 (tropospheric warming, stratospheric cooling), #4 (winter temperatures increase more than summer temperatures), and #6 (Arctic warming faster than Antarctic). As for the warming, this particular study got 2ºC for 2xCO2 (based on 4º for 4xCO2).

    Considering recent events, I find this bit rather poignant: “It is of interest that the sea ice disappears completely from the Arctic Ocean during a few summer months in the 4 x CO2 experiment”.

    Manabe, S., and R. J. Stouffer. 1980. Sensitivity of a global climate model to an increase of CO2 concentration in the atmosphere. Journal of Geophysical Research 85(C10): 5529–54.

    Comment by CM — 30 Sep 2009 @ 4:22 PM

“As time goes by, the different weather paths get averaged out and so this source of uncertainty diminishes.”

I am wondering about the justification for the above. Certainly in my work with non-linear coupled dynamics, this is emphatically not the case. Thresholds are reached, feedback sensitivities change as a function of output state variables, coupling coefficients evolve. All of this combines to produce system evolution that may bifurcate or move to an entirely different limit cycle trajectory. In these systems (of which I think climate dynamics is a member), one could never state a priori that all possible state trajectories average to a mean of any predictive value. What is different about GCMs that makes this true?

[Response: There are very significant constraints on energy fluxes to and from space, and strong negative feedbacks through the fourth power dependency on the emitting temperature that keep things bounded. The same constraints occur in the real world, though it isn’t obvious (nor provable) that the real climate itself is not chaotic. The GCM climates are not – the statistics are stable over time and do not have a sensitive dependence on initial conditions. Think of the structural stability of the Lorenz butterfly even while individual trajectories are truly chaotic. – gavin]

    Comment by J. Patterson — 30 Sep 2009 @ 4:45 PM
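
    A toy numerical illustration of the Lorenz analogy in the response above: two runs started from almost identical states end up in completely different places (sensitive dependence on initial conditions), yet their long-term statistics stay very close (a stable “climate”). Classic Lorenz-63 parameters, simple Euler stepping; purely illustrative, not a claim about any GCM.

    ```python
    import numpy as np

    def lorenz_run(x0, n=200000, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """Integrate the Lorenz-63 system and return the z time series."""
        x, y, z = x0
        zs = np.empty(n)
        for i in range(n):
            dx = sigma * (y - x)
            dy = x * (rho - z) - y
            dz = x * y - beta * z
            x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
            zs[i] = z
        return zs

    a = lorenz_run((1.0, 1.0, 1.0))
    b = lorenz_run((1.0, 1.0, 1.000001))   # tiny perturbation to the initial state
    print(abs(a[-1] - b[-1]))   # the two trajectories typically end up far apart (chaos)
    print(a.mean(), b.mean())   # but their time-mean "climate" is very similar
    ```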

Sort of on topic: On a previous thread here there was discussion on the possibility of attributing extreme single events (2003 French heatwave) to AGW. So following the tragic impact of typhoon Ketsana on Manila, I’m wondering if there’s more one could justifiably say besides “this is the kind of thing we expect to have more of due to man-made warming, and over the last decade(s) there has, indeed, been more of it”?

    Comment by CM — 30 Sep 2009 @ 4:57 PM

This sounds like an exciting and, if models can successfully project decadal temperatures, a very interesting attempt at short term analysis. I would hope that from time to time modelers will state reminders that decadal numbers, regardless of direction, don’t indicate a trend.

    Comment by Lawrence Brown — 30 Sep 2009 @ 7:37 PM

  63. Mr. Miller writes at 12:32 PM, 30th Sep. 2009:
    “The answer I got was that takes just about a year for the CO2 from burning a ton of carbon to trap as much heat as was released during the combustion.”

the mass cancels out of the equation? I.e., any mass of carbon burnt to CO2 and released into the atmosphere will trap the same amount of heat in a year as was generated during combustion?

Or am I quite muddled up?

    Comment by sidd — 30 Sep 2009 @ 7:37 PM

I have some experience with modelling in engineering fields. I’m not a climate scientist. One concern I have with modelling is that with enough fiddling, a model can be made to fit past history, but that says nothing about how well it will predict the future. When a model does not accurately predict the future, then it is typically tweaked to fit the recent past that the previous model failed to predict. Failure of the climate models to predict accurately leaves me wondering, with so many climate variables, how can we be sure that increases in CO2 levels have caused the current global warming trend? Just because CO2 is going up with the temperature does not mean it is the cause. How has the causal relationship been proven?

    [Response: We discuss what goes into the models and how they are developed in a couple of FAQs (Part I, Part II) on the subject. Much of what you are asking is answered there. As for the big question of attribution of current trends, see the relevant chapter in the IPCC report – it isn’t that difficult a read. – gavin]

    Comment by John Phillips — 30 Sep 2009 @ 8:45 PM

  65. Sidd (#63):

    Not a cancel, but a double whammy. 2X heat + 1X heat each of subsequent years.

    Steve

    Comment by Steve Fish — 30 Sep 2009 @ 9:04 PM

  66. Sidd asks if the mass cancels out in #63.

    Yes, Sidd. Mass cancels out. Carbon releases ~33 GJ/ton when you burn it, and according to my calculations the resulting CO2 traps 32 GJ per year. Multiply by the residence time of the CO2 for the final tally.

    It’s completely unrelated to mass – 10 tons produce 10 times as much CO2 that traps 10 times as much IR energy.

    Perhaps someone here could clarify something for me. I took the value for heat trapped as 1.66 watts/m^2 from an epa.gov web page. I know that’s in the ballpark, but two things about it are not clear to me:

    1) Does 1.66 watts/m^2 include the additional water vapor effects?

    2) Is the square meter referred to the surface area of the earth or the disk facing the sun? IE, pi*r^2 or 4*pi*r^2.

    I was trying to get a rough calculation, but it would be nice to get as close as possible.

    Comment by David Miller — 30 Sep 2009 @ 9:43 PM

  67. > 10 times as much CO2 that traps 10 times as much IR energy.

    No. Citation needed — why do you think this is true? What source are you relying on and why do you trust it enough to repeat it as though it’s a fact?

    See what you find here, e.g.

    http://www.realclimate.org/index.php/archives/2007/06/a-saturated-gassy-argument/

    CO2 or any other greenhouse gas isn’t capturing and holding the energy — it catches and releases it. What matters is the proportion in the atmosphere.

    Take a big sheet of window glass. You can see right through it.
    Look at it edge on. It looks dark blue-green.
    Why?

    This is really basic. Start with Weart, first link under the Science heading.

    Comment by Hank Roberts — 30 Sep 2009 @ 10:26 PM

  68. David Miller: The way I would do the calculation would be to state that 2.1 gigatons of C produce about 1 ppm of CO2. However, that 1 ppm will slowly decrease over time: using the Bern Cycle approximation, over 100 years it totals to about 48 ppm-years (reality may vary from the approximation for all sorts of reasons).

    1 ppm of CO2 is about 0.014 W/m2 at the current concentration of CO2 (5.35*LN(C/C0)). That’s averaged over the entire surface area of the earth. That doesn’t include any feedback effects, which could lead to a gain of 1.7 to 3.7 times the original forcing (for the standard 2 to 4.5 degree sensitivity, according to Roe and Baker) or more.

    Does that help?

    Comment by Marcus — 30 Sep 2009 @ 10:42 PM
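
    A small sketch of the numbers Marcus quotes above (the ~2.1 GtC per ppm conversion and the simplified forcing expression F = 5.35 ln(C/C0)); the ~385 ppm baseline concentration is my assumption for 2009, and no feedbacks or carbon-cycle decay are included:

    ```python
    import math

    GTC_PER_PPM = 2.1          # gigatonnes of carbon per ppm of CO2 (from the comment)

    def marginal_forcing(extra_ppm, c_now=385.0):
        """Extra forcing (W/m^2, averaged over the whole Earth) from adding extra_ppm of CO2."""
        return 5.35 * math.log((c_now + extra_ppm) / c_now)

    print(marginal_forcing(1.0))        # ~0.014 W/m^2 per ppm, before feedbacks
    print(10.0 / GTC_PER_PPM)           # emitting 10 GtC ~ 4.8 ppm, if it all stayed airborne
    ```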

  69. “No. Citation needed — why do you think this is true?” “Not a cancel, but a double whammy”

    Hank and Steve: I think you didn’t read the statements quite right. Basically, he’s just trying to say that if you are trying to determine the energy added to the system from the CO2 greenhouse effect, and compare it to the energy added to the system by the burning of the carbon that led to the CO2, that, for reasonable quantities of CO2 it doesn’t matter how much you are adding because it cancels out in the ratio (integral of total top-of-the-atmosphere radiative forcing over time) divided by (energy from burning).

    Yes, if you add enough CO2, you’ll have to start caring about the logarithmic relationship. Yes, if you care about total energy added to the system, you need to add burning energy to greenhouse energy. But if you just care about comparing greenhouse energy to energy released from burning for small quantities of coal burned, it is much simpler. And I’ll note that greenhouse energy is much greater than combustion energy (my back of the envelope calculation a few years back suggested by more than two orders of magnitude: hopefully David’s approach will yield something similar).

    Comment by Marcus — 30 Sep 2009 @ 10:50 PM

  70. My apologies. My previous comment is not what I intended. (I blame lack of preview.) This is what I intended:
    CM says:
    30 September 2009 at 4:57 PM

So following the tragic impact of typhoon Ketsana on Manila, I’m wondering if there’s more one could justifiably say besides “this is the kind of thing we expect to have more of due to man-made warming, and over the last decade(s) there has, indeed, been more of it”?

    Some background on the effect of global warmiing on precipitation can be found in AR4 chapter 3. See question 3.2 on page 262 (page 28 in the pdf). In particular:

    Widespread increases in heavy precipitation events have been observed, even in places where total amounts have decreased. These changes are associated with increased water vapour in the atmosphere arising from the warming of the world’s oceans, especially at lower latitudes. There are also increases in some regions in the occurrences of both droughts and floods.

    See also section 9.5.4.2.2, Changes in extreme precipitation, page 714 (page 52 in the pdf).

    The possibility that global warming will result in more rainfall from tropical cyclones has been mentioned by Trenberth and others many times, for example in Uncertainty in Hurricanes and Global Warming . (Interestingly, about two years ago there was a paper arguing that the models were underestimating increases in rainfall due to global warming: How Much More Rain Will Global Warming Bring? .)

    Comment by llewelly — 30 Sep 2009 @ 11:18 PM

I highly recommend Reto Knutti’s article to people on the subject of believing model predictions of the future. It is not primarily concerned with decadal predictions in the sense of a lot of recent literature, but it’s very informative on model usage and I’ve found myself citing it a lot.
    http://www.iac.ethz.ch/people/knuttir/papers/knutti08ptrs.pdf

A relatively recent report by the USCCP on the strengths and limitations of climate models is also a very comprehensive treatment of the subject (maybe even more so than the IPCC).

    Just suggestions.

    Comment by Chris Colose — 30 Sep 2009 @ 11:21 PM

  72. Re: Mr. Miller, energy imbalances:

    is this your calculation ?

    if m0 is the current mass of CO2 in the air, j0 is the current radiative forcing in watts/sq. m., and you add dm more mass of CO2

    then do you take the additional forcing dj to be given by

    dm/m0=dj/j0

(of course, I expect this relation might hold only when dm is much less than m0 and dj is much less than j0…)

and then go to the additional heat dQ by multiplying by area and time, thus

    dQ=dj*A*T=dm*j0/m0*A*T

where dj is in watts per sq. m., A is the appropriate area in sq. m., and T is the number of seconds in a year

    Comment by sidd — 30 Sep 2009 @ 11:27 PM

  73. re BPL list of model predictions;

    Svante Arrhenius (1859-1927)
    “On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground”(excerpts) Philosophical Magazine 41, 237-276 (1896)[1]
    at http://web.lemoyne.edu/~giunta/arrhenius.html

    “The influence is in general greater in the winter than in the summer, except in the case of the parts that lie between the maximum and the pole. The influence will also be greater the higher the value of ν, that is in general somewhat greater for land than for ocean. On account of the nebulosity of the Southern hemisphere, the effect will be less there than in the Northern hemisphere. An increase in the quantity of carbonic acid will of course diminish the difference in temperature between day and night. A very important secondary elevation of the effect will be produced in those places that alter their albedo by the extension or regression of the snow-covering (see p. 257 [omitted from this excerpt–CJG]), and this secondary effect will probably remove the maximum effect from lower parallels to the neighbourhood of the poles[12].”

    “I should certainly not have undertaken these tedious calculations if an extraordinary interest had not been connected with them.” Still is, except in some (wilfully ignorant) quarters.

Of course, denialists will try to conflate “inaccurate” with “wrong” (he overestimated the drop in CO2 for an ice age), but he did remarkably well for a paper & pencil model.

    Comment by Brian Dodge — 1 Oct 2009 @ 12:08 AM

  74. Phil, CM, Brian Dodge: Thanks! That’s some really useful information. I’ll acknowledge yinz in the final version of the web page.

    Comment by Barton Paul Levenson — 1 Oct 2009 @ 2:51 AM

  75. Not entirely OT … with September over it’s again time to update the Bitz curve (well, Holland, Bitz and Tremblay if you like).

    Recall that this effort was the first to project Arctic sea ice collapse by mid century (by as early as 2040); and also to project that “abrupt reductions are a common feature of (the) 21st century”. That was back in 2006; and the “abrupt reduction” happened in, well, 2007. Surely the most prescient sub-decadal projection in the history of climate science!

    Through 2009, this work is still looking damn fine (if a touch conservative):

    [The labelled September extents are from NSIDC. Extents through 2006 are Bitz’s, and differ slightly from NSIDC.]

    Comment by GlenFergus — 1 Oct 2009 @ 2:51 AM

My calculation of the ratio between the heating from the formation of CO2 vs. its heating as a greenhouse gas.

3e12 tons of CO2 in the atmosphere (Wikipedia)
Radiative forcing = 5.35 ln(CO2_new/CO2_old)
2x CO2: 5.35 ln(6/3) = 3.7 W/m2, including some feedbacks (IPCC)
0.75 K/(W/m2) would mean 2.8 deg; most people agree on this

One ton of extra CO2: 5.35 ln((3e12+1)/3e12) = 1.783333e-12 W/m2
Earth surface: 510,072,000 km2
1 ton heats 510,072 * 1.7833333 = 909.6 W over the Earth’s surface (the first year)

CO2 residence time from globalwarmingart, estimating the area under the curve just by measuring with a ruler on the screen:
After 50 years 40% of the CO2 remains in the atmosphere.
Year 1 = 909 W; year 50: 0.4 * 909 = 363.8 W; an average of 636.3 W over those 50 years.
Energy year 1: 909 * 365 * 24 * 3600 = 28,666 MJ
Energy year 50: … = 11,466 MJ
Whole 50-year period: 1,003,310 MJ, about 1 TJ
A nice round number for one ton of CO2 over 50 years.

1 l of gasoline → 2.8 kg CO2 (I think)
Year 1: 28.666 MJ/kg * 2.8 kg = 80.3 MJ/l
Over 50 years: 1 TJ * 0.0028 = 2810.5 MJ/l
1 l of gasoline: 35 MJ of energy (I think)
GWP of gasoline: about 2.3 times its heating value in the first year
GWP of gasoline: about 80.3 times its heating value over the first 50 years

From the curve at globalwarmingart I estimate just as much heating during the next 150 years, so over 200 years 1 l of gasoline will heat the earth about 160 times its heating value.

Not included in this calculation are future emissions, which will reduce the effect because of the ln behaviour of added CO2.

    Comment by Henrik — 1 Oct 2009 @ 4:20 AM
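
    A short sketch reproducing Henrik’s back-of-the-envelope numbers above (≈3e12 t of CO2 in the air, the 5.35 ln(C/C0) forcing expression, 2.8 kg CO2 and 35 MJ per litre of gasoline; all figures as given in the comment, illustrative only):

    ```python
    import math

    M_ATMOS_CO2 = 3.0e12          # tonnes of CO2 in the atmosphere (from the comment)
    EARTH_AREA = 5.10072e14       # m^2
    SECONDS_PER_YEAR = 365 * 24 * 3600

    # Marginal forcing from one extra tonne of CO2, using F = 5.35 * ln(C/C0)
    dF = 5.35 * math.log((M_ATMOS_CO2 + 1) / M_ATMOS_CO2)       # W/m^2, ~1.78e-12
    power_per_tonne = dF * EARTH_AREA                            # ~910 W over the whole Earth
    energy_year1_per_tonne = power_per_tonne * SECONDS_PER_YEAR  # ~2.9e10 J in the first year

    co2_per_litre = 2.8e-3        # tonnes of CO2 per litre of gasoline
    fuel_energy = 35e6            # J of combustion energy per litre

    trapped_year1 = energy_year1_per_tonne * co2_per_litre
    print(trapped_year1 / fuel_energy)   # ~2.3: first-year trapped heat vs. combustion heat
    ```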

  77. #46 Richard

    Ensemble mean forecasts have greatly helped in weather prediction over the years for the reasons that Gavin states. Here is a good link:

    http://www.hpc.ncep.noaa.gov/ensembletraining/

    #48 BPL,

    LOL. I was actually going to email you today or tomorrow to ask you for citations for your excellent list. I want to add your list to my “Climate Models & Accuracy” page.

    Comment by Scott A. Mandia — 1 Oct 2009 @ 5:24 AM

  78. John Phillips,
    One might have trouble with this if one knew absolutely nothing about radiative physics, but then model overfitting would be the least of your problems in understanding climate science.

Fortunately, the modelers DO know the physics and base their models on it. Since the physics is well understood, that leaves far fewer knobs to twiddle in such a dynamical, physics-based climate model. You might want to start your education by learning the difference between a statistical and a dynamical model.

    Comment by Ray Ladbury — 1 Oct 2009 @ 7:13 AM

Recently we have had new measurements of ice sheet mass loss, and it is disappearing faster than projections seem to allow, so how does this not affect the computer models, where ice is being lost faster than projections project?

The ice sheets were measured using lasers from space, and hence it does sound quite accurate. James Hansen has said that the computer models are useful tools but it is the paleoclimatic data that tells the real story for our future under BAU scenarios.

If the media get the stories wrong or are confused and tell them wrong, then what is the significance of that? Copenhagen is going to happen and the USA is close to passing a climate change bill.

    Comment by pete best — 1 Oct 2009 @ 7:36 AM

Yes, Sidd (#72), that’s exactly what I was figuring.

    Judging by the results, Henrik came up with just about the same answer in # 76

    Marcus, you had a perfect summary in #69. I’m just looking for a rough ratio for the heat from combustion to heat trapped by the resulting CO2 for the next small unit (ton, kilo, pound) of carbon burned. Decreased values for increasing levels of CO2 need not apply, I’m looking for the right order of magnitude here.

    Comment by David Miller — 1 Oct 2009 @ 8:27 AM

  81. General Perspective on Decadal Predictions

Actually, I’m not sure how important decadal predictions are to agriculture. 1 to 3 years would help, though. Generally, I sort of see the refinement going in two directions: from 30 years toward less, and from the current weather predictions of 1 to 2 weeks plus the seasonal predictability based on what is now known of ocean current influences.

    It would be great to have better predictability for sure though, especially in the 1 to 3 year range for agriculture and disaster planning.

As decadal prediction was a big topic at WCC-3, I spoke with a few people about the fact that it will take some time to get relevant predictability, and that from an adaptation point of view one should just plan on the general changes that are becoming more probable. Expecting that will help with current adaptation planning, i.e. more droughts, floods, snowstorms etc. as time rolls on. My point there was: don’t wait for the resolution to improve when you already know the general direction of the trends.

    My argument runs into problems when you examine quantifiability for insurance, but regional governments can reasonably expect certain things averaged over time.

    Comment by John P. Reisman (OSS Foundation) — 1 Oct 2009 @ 8:32 AM

“If the media get the stories wrong or are confused and tell them wrong, then what is the significance of that? Copenhagen is going to happen and the USA is close to passing a climate change bill.”

    I don’t think the people in the discussions for the climate bill get their information from the tabloids, pete…

    Comment by Mark — 1 Oct 2009 @ 10:07 AM

  83. llewelly #70,
    thanks for the pointers! Table 10.3 from WG2 was also pertinent, though I’m a bit confused about the dates: “On an average, 20 cyclones cross the Philippines Area of Responsibility with about 8 to 9 landfall each year; with an increase of 4.2 in the frequency of cyclones entering PAR during the period 1990 to 2003 (PAGASA, 2001).”

    (Typhoon Ketsana (aka ‘Ondoy’) flooded Metro Manila with a record 45.5 cm rain in 24 hours, leaving more than 250 dead, and now Filipinos are already bracing for Parma (‘Pepeng’), a category 4 typhoon.)

    Comment by CM — 1 Oct 2009 @ 2:21 PM

  84. Re BPL list of model predictions:
    BPL #74, you’re very welcome. If time permits I’ll look for more.
    Brian Dodge #73, your Arrhenius (1896) sure trumps my Manabe (1980).

    Comment by CM — 1 Oct 2009 @ 2:39 PM

  85. #75

    Bitz update is here.

    [Did someone tell us img tags no longer work?]

    Comment by GlenFergus — 1 Oct 2009 @ 6:18 PM

  86. The misinterpretations and distortions of Mojib Latif's presentation at WCC-3 in Geneva have really got out of hand, with George Will quoting Revkin misquoting Latif, and Canada's Lorne Gunter getting it all spectacularly wrong, this time in the Calgary Herald.

    Admittedly, I've spent too much time on this (going so far as to email back and forth with Latif), but these misinterpretations and distortions of the Keenlyside et al. (2008) Nature article (on which Latif was a co-author) are really upsetting.

    “Anatomy of a lie: How Marc Morano and Lorne Gunter spun Mojib Latif’s remarks out of control”

    Read the following sequence of quotes and weep:

    * Sept. 1: It may well happen that you enter a decade, or maybe even two, when the temperature cools, relative to the present level. – Mojib Latif at World Climate Conference in Geneva

    * Sept. 4: One of the world’s top climate modellers said Thursday we could be about to enter one or even two decades during which temperatures cool. – Fred Pearce, New Scientist.

    * Sept. 5: UN Fears (More) Global Cooling Commeth! IPCC Scientist Warns UN: We are about to enter ‘one or even 2 decades during which temps cool’ – Marc Morano, Climate Depot (CFACT)

    * Sept. 19: Latif conceded … that we are likely entering “one or even two decades during which temperatures cool.” – Lorne Gunter, Calgary Herald.

    * Sept. 25: Mojib Latif of Kiel University in Germany told a UN conference earlier this month that he is now predicting global cooling for several decades. – Marc Morano, Climate Depot (CFACT).

    * Sept. 28: 1240 hits, and counting, for the Google search “Latif” “ likely entering one or even two decades during which temperatures cool”

    http://deepclimate.org/2009/10/02/anatomy-of-a-lie-how-morano-and-gunter-spun-latif-out-of-contro/

    I’ve also transcribed key parts of Latif’s remarks, along with key slides, so that everyone can see just how badly they have been misinterpreted and distorted.

    http://deepclimate.org/2009/10/02/key-excerpts-from-mojib-latifs-wcc-presentation/

    And I’ve put up my email exchange with Latif as well.

    Comment by Deep Climate — 1 Oct 2009 @ 10:20 PM

  87. Deep Climate (#86), I have linked to your useful discussion (and this thread) in the comments at the New Scientist story.

    Comment by CM — 2 Oct 2009 @ 3:28 AM

  88. I am not sure if September ice extent figures are helpful to this discussion… but here they are:

    September (month end averages) NSIDC (sea ice extent)

    30 yrs ago
    1980 Southern Hemisphere = 19.1 million sq km
    1980 Northern Hemisphere = 7.8 million sq km
    Total = 26.9 million sq km

    Recorded Arctic min yr.
    2007 Southern Hemisphere = 19.2 million sq km
    2007 Northern Hemisphere = 4.3 million sq km
    Total = 23.5 million sq km

    Last yr.
    2008 Southern Hemisphere = 18.5 million sq km
    2008 Northern Hemisphere = 4.7 million sq km
    Total = 23.2 million sq km

    This yr.
    2009 Southern Hemisphere = 19.1 million sq km
    2009 Northern Hemisphere = 5.4 million sq km
    Total = 24.5 million sq km

    On September 12, 2009 Arctic sea ice extent dropped to 5.10 million square kilometers (1.97 million square miles).

    Source plates:
    ftp://sidads.colorado.edu//DATASETS/NOAA/G02135/Sep/N_200909_extn.png
    ftp://sidads.colorado.edu//DATASETS/NOAA/G02135/Sep/S_200909_extn.png
    ftp://sidads.colorado.edu//DATASETS/NOAA/G02135/Sep/N_200809_extn.png
    ftp://sidads.colorado.edu//DATASETS/NOAA/G02135/Sep/S_200809_extn.png
    ftp://sidads.colorado.edu//DATASETS/NOAA/G02135/Sep/N_200709_extn.png
    ftp://sidads.colorado.edu//DATASETS/NOAA/G02135/Sep/S_200709_extn.png
    ftp://sidads.colorado.edu//DATASETS/NOAA/G02135/Sep/N_198009_extn.png
    ftp://sidads.colorado.edu//DATASETS/NOAA/G02135/Sep/S_198009_extn.png

    Comment by G. Karst — 2 Oct 2009 @ 2:50 PM

  89. http://climateinteractive.org/state-of-the-global-deal

    Comment by Hank Roberts — 2 Oct 2009 @ 10:38 PM

  90. Another full text article found on an author’s page:

    http://sciences.blogs.liberation.fr/files/climat-2009-2019-2.pdf.

    How will Earth’s surface temperature change in future decades?
    GEOPHYSICAL RESEARCH LETTERS, VOL. 36, L15708, doi:10.1029/2009GL038932, 2009

    Comment by Hank Roberts — 4 Oct 2009 @ 7:23 PM

  91. No comment on http://www.washingtonpost.com/wp-dyn/content/article/2009/09/24/AR2009092402602.html

    Comment by Larry — 4 Oct 2009 @ 11:20 PM

  92. Re: Hank’s #30 and Ed’s #42 — I had the same question as Hank, and I wonder if the results would differ at all if only runs that captured the historical patterns were used to predict the future (rather than constraining all runs to match the past). Am I understanding this correctly?

    Comment by Steve L — 5 Oct 2009 @ 10:11 AM

  93. The 9/24 WaPo article Larry points to in his posting of Oct 4 refers to the same Ventana/Sustainability/MIT model I mentioned 10/2 above — there’s a laptop version apparently widely in use by the delegates who will be going to Copenhagen.

    I’m hoping someone who actually has, or was involved in creating, the laptop version will show up to talk about it. The website version is a very simple demo of the interface, near as I can tell, but it appears the people using it aren’t bloggers.

    Comment by Hank Roberts — 5 Oct 2009 @ 1:18 PM

  94. Re: #92 – Steve L,

    It's a good question. In the Hawkins & Sutton study there is a weak weighting applied to the models, so that the models which performed better in predicting recent global trends were given a (slightly) higher weight. In fact, it made little difference to the overall results.
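
    As a toy illustration of what a skill-based weighting can look like (this is not the scheme used in the paper, and the skill scores and projections below are invented):

        # Toy illustration of skill-based model weighting (NOT the actual
        # Hawkins & Sutton scheme; all numbers are made up).  Models whose
        # hindcast trend is closer to the observed trend get more weight.
        import numpy as np

        obs_trend = 0.17                                  # "observed" trend, K/decade
        hindcast_trends = np.array([0.10, 0.15, 0.18, 0.22, 0.30])  # per model
        projections_2050 = np.array([1.8, 2.1, 2.4, 2.7, 3.3])      # K, per model

        # Gaussian weighting; a large sigma gives a "weak" weighting that
        # stays close to the simple multi-model mean.
        sigma = 0.15
        weights = np.exp(-((hindcast_trends - obs_trend) / sigma) ** 2)
        weights /= weights.sum()

        mean = np.sum(weights * projections_2050)
        spread = np.sqrt(np.sum(weights * (projections_2050 - mean) ** 2))

        print("weights:", np.round(weights, 2))
        print(f"weighted projection: {mean:.2f} K, 90% range ~ +/- {1.64*spread:.2f} K")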

    However, how best to constrain predictions using how well models reconstruct past climate, with more complex methods, is a very active area of current research. There are immediate questions about which observations to use (just global averages, or regional patterns, rainfall, temperature, sea level pressure, variability or mean state, etc.), and each model has strengths and weaknesses, so what do you prioritise? A Google search on 'climate model metrics' brings up a host of articles and webpages on these kinds of issues.

    This is also an aspect of climate research for which the planned decadal predictions will be invaluable. We will be able to test the climate models in real forecasts for the first time, see which ones work well (and not so well), and, perhaps more importantly, why. With a bit of luck, we might then even be able to improve them!

    Ed.

    Comment by Ed Hawkins — 5 Oct 2009 @ 3:43 PM

  95. The text says “uncertainty associated with uncertain or inaccurate models grows with time”, but the graphs show “fractional uncertainty” for models going down with time. Seems like a contradiction, but what does “fractional uncertainty” mean?

    Comment by Tom Adams — 7 Oct 2009 @ 10:33 AM

  96. Re #95 – Tom Adams,

    It is a slightly confusing graph. Try reading comment #1, and Gavin’s reply, and the graph linked to in comment #3. Or the paper itself, which is linked in the article and freely available.
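
    One schematic way to see it, with made-up numbers: the absolute uncertainty range can grow while its ratio to the mean warming falls, simply because the projected warming grows faster.

        # Schematic only (made-up numbers): the 90% range grows in absolute
        # terms, but divided by the growing mean warming it shrinks.
        decades      = [2020, 2030, 2040, 2050]
        mean_warming = [0.6, 1.0, 1.5, 2.1]      # K relative to 1971-2000
        range90      = [0.45, 0.55, 0.70, 0.85]  # 90% confidence range, K

        for yr, dT, u in zip(decades, mean_warming, range90):
            print(f"{yr}: range = {u:.2f} K, mean warming = {dT:.1f} K, "
                  f"fractional = {u/dT:.2f}")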

    Comment by Ed Hawkins — 8 Oct 2009 @ 2:28 AM

  97. Ah, good old G Karst with his continuous and eternal cut-and-paste.

    Still insisting that including the winter maximum in a list of summer minima is worthwhile. And STILL insisting that sea ice extent tells you much about total sea ice volume.

    What a [edit]

    Comment by Mark — 8 Oct 2009 @ 3:33 AM

  98. Temperature is rising and polar bears are drowning; Earth is turning into A WATER WORLD! Future generations will not survive. It's up to us now to make a change!

    Comment by Jean Michael — 8 Oct 2009 @ 3:47 AM

  99. Jean Michael, I’m just one of the readers here, not a scientist, but since you post your website behind your name, my opinion:

    After taking a look — seems to me you need to work on your own understanding of the science first before setting up as a teacher. Ask someone for help with your website description of global warming. It’s too simple and has some basic ideas very unclear.

    Comment by Hank Roberts — 8 Oct 2009 @ 8:53 AM

  100. Jean Michael #98: I’m sure you mean well, but please… listen to Hank.

    Comment by Martin Vermeer — 8 Oct 2009 @ 10:07 AM

  101. [edit]

    3rd Oct: New Moon.

    Within 24 hours of it, G Karst has repeated a tired old cut n paste that has never worked.

    Obviously his posting is driven by the full moon.

    [edit–see previous comment. if we can’t keep the discussion civil, we’ll nix it]

    Comment by Mark — 8 Oct 2009 @ 11:34 AM

  102. Peter Sinclair has posted another video this week;
    http://www.youtube.com/watch?v=khikoh3sJg8

    Pete

    Comment by Pete Wirfs — 8 Oct 2009 @ 11:47 AM

  103. It's not really about the data. It's about the unorthodox scientist as hero, challenging the establishment with a cool new idea. It's about the underdog against the arrogance of the establishment. It's about publicity, self-promotion, and sexy press releases. It's about insisting relentlessly from publication to publication that a link has been shown, and a theory is gaining strength. It's about writing a popular book about oneself, ignoring every objection that has been raised except the one for which one happens to have a good reply. And it's about wishful thinking that this somehow means greenhouse warming isn't happening. People want Svensmark to be right.

    Comment by pretty-tiffany — 10 Oct 2009 @ 2:28 AM
