Decadal predictions

… to meet the expectations of society, it is both necessary and possible to revolutionize climate prediction. … It is possible firstly because of major advances in scientific understanding, secondly because of the development of seamless prediction systems which unify weather and climate prediction, thus bringing the insights and constraints of weather prediction into the climate change arena, and thirdly because of the ever-expanding power of computers.

However, just because something is necessary (according to the expectations of society) does not automatically mean that it is possible! Indeed, there is a real danger that society’s expectations will get completely out of line with what eventually proves possible, and it’s important that policies are not put in place that are not robust to the real uncertainty in such predictions.

Does this mean that climate predictions can’t get better? Not at all. The component of the forecast uncertainty associated with the models themselves can certainly be reduced (the blue line above) – through more judicious weighting of the various models (perhaps using paleo-climate data from the LGM and mid-Holocene which will also be part of the new IPCC archive), improvements in parameterisations and greater realism in forcings and physical interactions (for instance between clouds and aerosols). In fact, one might hazard a guess that these efforts will prove more effective in reducing uncertainty in the coming round of model simulations than the still-experimental attempts in decadal forecasting.


103 comments on this post.
  1. Benjamin:

    You write, “uncertainty associated with uncertain or inaccurate models grows with time,” but the graphs both show model uncertainty declining with increasing time. Or, am I interpreting them wrong?

    [Response: The graphs show the fractional uncertainty compared to the signal, specifically: “The relative importance of each source of uncertainty in decadal mean surface temperature projections is shown by the fractional uncertainty (the 90% confidence level divided by the mean prediction) relative to the warming from the 1971–2000 mean” – gavin]

  2. Charles Raguse:

    I assume that a decade is a running mean of ten years. Is this correct?

  3. Ed Hawkins:

    Re #1:
    As the author of the referenced paper, I know that this can be a confusing picture! There is another version here:

    which I think clarifies the growth of uncertainty, and its components.
    Ed Hawkins.

  4. David N:


    [Response: cough.. fixed. – gavin]

  5. Kevin McKinney:

    Thanks for another informative post!

    Typo in heading: “predicitions” for “predictions.”

  6. Len Shaffrey:

    The link to the World Modelling Summit for Climate Prediction at the end of the article doesn’t mention decadal forecasting. As far as I know (I wasn’t invited to it!) that meeting was more about how to improve the representation of regional climate and extreme weather in climate models, partly by increasing the resolution of climate models, and partly by testing climate models by performing weather forecasts and seasonal to decadal predictions (seamless prediction) with them.

    I agree wholeheartedly with your assessment of the challenges to producing skillful and useful decadal predictions, but I think you’re possibly missing out by not mentioning that they might be extremely useful tools for evaluating the representation of processes within climate models (e.g. can trying to forecast ENSO with a climate model tell you anything about how well the model represents ENSO?).


    Len Shaffrey

    [Response: Thanks for the comment. I’m certainly not suggesting that research on decadal predictions is not worthwhile – it is and for the reasons you suggest. However, this is more a caution against people thinking this is a mature field of research which it certainly isn’t. There is however a link between regional predictions and decadal predictions – which is highlighted by the figure from Hawkins and Sutton – and that is the role of internal variability. Statements about ‘necessary’ improvements to regional forecasts (which depend on greater predictability of the internal variability) without proper assessments of their feasibility, I find to be somewhat jumping the gun. – gavin]

  7. franz mair:

    well, so if we expect some more years of no warming and perhaps some cooling, we must be “prepared” for the media, the politics and our own credibility, so as not to lose it.
    having the same global temperature at 2020 as around 2000, it will show again that the proposed climate sensitivity for CO2 is not well calculated and any stated future scenario must be wrong.

  8. Howard S.:

    I’m sure you want to update this
    It’s got some problems.
    How do we know that recent CO2 increases are due to human activities?
    — 22 December 2004
    Another, quite independent way that we know that fossil fuel burning and land clearing specifically are responsible for the increase in CO2 in the last 150 years is through the measurement of carbon isotopes.

    One of the methods used is to measure the 13C/12C in tree rings, and use this to infer those same ratios in atmospheric CO2. This works because during photosynthesis, trees take up carbon from the atmosphere and lay this carbon down as plant organic material in the form of rings, providing a snapshot of the atmospheric composition of that time.

    Sequences of annual tree rings going back thousands of years have now been analyzed for their 13C/12C ratios. Because the age of each ring is precisely known** we can make a graph of the atmospheric 13C/12C ratio vs. time. What is found is at no time in the last 10,000 years are the 13C/12C ratios in the atmosphere as low as they are today. Furthermore, the 13C/12C ratios begin to decline dramatically just as the CO2 starts to increase — around 1850 AD. This is exactly what we expect if the increased CO2 is in fact due to fossil fuel burning.

    [Response: Not following your point at all. What do you think is wrong with this? – gavin]

  9. Chris Dudley:

    In addition to economic, technological and sociological changes as sources of uncertainty related to knowing the level of carbon dioxide in the atmosphere over time, there are also uncertainties in our understanding of the carbon cycle where, even if emissions and deforestation were perfectly known, uptake or release of carbon dioxide from or to natural carbon pools is also unclear. And, while direct anthropogenic effects dominate now, an exponential feedback could take over as the dominant term and thus may dominate the scenario uncertainty in the longer timescale. The only way to control for this is to avoid perturbations which might trigger such a feedback. In other words, good models may only be possible for a reduced emissions regime.

  10. Howard S.:



    [Response: Not even close. Carbon isotope analyses in well-dated tree rings have absolutely nothing to do with tree rings used as climate proxies. In this context the only thing that matters is that the carbon in the wood is independently datable. – gavin]

  11. Ed Hawkins:

    Following up on the points raised in #6….

    The internal variability of climate on regional scales is, of course, an important issue. We can still potentially make significant improvements to projections of regional climate for later in the century, where the internal variability component is a smaller fraction of the total uncertainty. Projections of regional details and changes in extreme temperatures in the mid-21st century should still be useful(?), and that is likely to require higher resolution. To predict regional changes for the next decade or two will probably require both higher resolution and initialised predictions.

    Of course, we have a long way to go to realise this potential. And, as #9 has suggested, there is additional uncertainty in the carbon cycle on the longer timescales.


  12. Walt Bennett:

    When will you be posting on the implications of the BAS Antarctic survey?

  13. Andy Revkin:

    Here’s more from Gavin on this important issue. He’s been warning about inflated expectations for a long time on short-term prediction:

  14. franz mair:

    now i had a better look at the graphs above. what would you tell us with those “uncertainties”?
    the black curve lies globally around 0.5, so what, can be right, can be true, we don’t know.
    at the british isles around 0.7, ok. that means we don’t know, but we have a chance that the “predictions” (better: simulations!) could be ok.

    thank you guys, that’s what we’ve been trying to explain to you for many years.

  15. cervantes:

    I’m only a public health sociologist, but I can make one decadal prediction with a high degree of certainty. If there is no clear warming trend for the next ten years, the chance of getting anything meaningful done politically in the United States, whether it concerns greenhouse gas emissions or mitigation, is going to be 1+(e^(i*pi)). Just sayin.

  16. Kevin:

    Agree with #15.

    At the risk of driving Gavin crazy: if it were possible to reduce model uncertainty just a bit, that would be very helpful. Backcasting explanations make it all look so easy.
    We can deal with scenario uncertainty and “weather.”

  17. MikeN:

    With regards to model predictions and uncertain variables like CO2 emissions.
    Why not freeze the models along with the model runs?
    So instead of just posting the output of a model run on a scenario, the model code itself should be frozen, so it can be rerun with updated emissions scenarios.

  18. Naindj:

    What confusing graphs !!!
    Very very hard to read…(thank you Ed Hawkins, the other graph helps a lot, but I still can’t link it to the two on this page)
    For example I don’t understand why the fractional uncertainty does not start from more than 1 for the global temp…
    I had understood that for less than 30 years, the natural variations (or “noise”) can hide the long term trend. So it means that you even cannot tell if it will warm or cool down.
    In that light, what does the total fractional uncertainty of 0.4 at ten years mean?
    I understand we are talking about decadal mean temperature, but still…

  19. franz mair:

    @15 cervantes:
    I’m only a public health sociologist, but I can make one decadal prediction with a high degree of certainty. If there is no clear warming trend for the next ten years, the chance of getting anything meaningful done politically in the United States, whether it concerns greenhouse gas emissions or mitigation, is going to be 1+(e^(i*pi)). Just sayin.

    3-5 years steady or slightly cooling will be enough, and they are almost afraid that it will happen and the CO2 climate sensitivity will be said to be completely wrong!

  20. Hank Roberts:

    MikeN, the reason they don’t keep outdated models running after improvements are added is — just from what I’ve gathered as an ordinary reader, mind you:
    — old models are less useful than improved ones
    — each model uses all available computer and time resources
    — all models are wrong.
    Do you know the rest of that 3rd line?

    [Response: Actually, these days it’s pretty easy to find old versions of the models. EdGCM was basically the old Hansen model from 1988, and I think you could still find the previous public releases of the NCAR models. There are issues with software libraries and compilers of course. – gavin]

  21. John N-G:

    #18 Naindj –

    The graphs are for 10-year average temperature predictions relative to the 1971-2000 mean (not the forecast starting year of 2000). So middle time of the first data point (2001-2010) is already 20 years past the middle time of the baseline. Decadal-scale uncertainty is competing with 20 years of global warming, and the global warming is larger by a factor of two than the 90% spread of the model sample.

    Approximate numerical example: mean model forecast for 2001-2010 = +0.28 C relative to 1971-2000. 90% range of model forecasts: +/- 0.14. Fractional uncertainty 0.14/0.28 = 0.5.

    In their paper, they also estimate fractional uncertainty in predicting 2001-2010 temperatures relative to a year 2000 baseline, and that’s quite a bit higher initially (0.9).
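John’s arithmetic is simple enough to sketch in a few lines of Python (using his illustrative numbers, not the actual Hawkins & Sutton model archive):

```python
# Fractional uncertainty, following the definition quoted in the article:
# (90% confidence half-width) divided by (mean predicted warming relative
# to the 1971-2000 baseline). The numbers are John N-G's illustrative
# values, not real model output.

mean_forecast = 0.28   # K, mean 2001-2010 warming relative to 1971-2000
ci90_halfwidth = 0.14  # K, 90% spread of the model sample

fractional_uncertainty = ci90_halfwidth / mean_forecast
print(fractional_uncertainty)  # 0.5
```

The UK panel starts above 1 for the same reason: the internal-variability term in the numerator is much larger on regional scales while the signal in the denominator is similar.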

  22. Francis:

    So, regional applications of global models show that northern California will warm as a result of global climate change, especially during the winter and spring, resulting in decreased snowpack and increased stress on the statewide water system.

    At the same time, increased warming will result in more frequent El Nino conditions as the Pacific ocean looks to dump heat back into the atmosphere. This means more rain in southern California.

    So, if Southern California water managers get their act together, they can swap snowpack losses for El Nino gains.

    Is this right?

    [Response: Hmm… not really. The big issue is of course what ENSO will do – and this is not well understood. I don’t think you can safely say that “increased warming will result in more frequent El Nino” – the projections and credibility for this aspect of climate are very varied. The snow pack changes are more reliable. – gavin]

  23. David B. Benson:

    cervantes (15) & franz mair (19) — Here are the decadal averages from the HadCRUTv3 global temperature product:

    Almost fifty years of steady, very fast warming ought to be enough to spur action, don’t you think?

  24. Ed Hawkins:

    Re #18, Naindj,

    Hopefully the graphs make more sense if you read the whole paper linked to by Gavin! I agree that they are not easy to read at first, but are updates of a similar graph made by Cox & Stephenson (2007, Science). There are other versions in the paper as well which may be easier to understand, like this one:
    where the colours are the same as in the graphs in the article and show the fraction of the total uncertainty due to each source.

    Internal variability can indeed hide the warming for a decade or so, as we are experiencing at the moment, but not for 30 years globally.

    Re: the fractional uncertainty – it does not start from more than 1 because, as you say, we are considering a decadal mean temperature which reduces the uncertainty in the internal variability component, and importantly, the temperatures are always measured relative to the mean temperature from 1971-2000. We can therefore be confident that the temperature in the next decade will be warmer than the average of 1971-2000.

    As the fractional uncertainty is the total uncertainty divided by the temperature change above 1971-2000, the uncertainty would have to be quite large to make this quantity greater than 1. [In fact, it is very close to 1 for an annual prediction 1 year ahead]. A value of 0.4 for the fractional uncertainty means that if the temperature prediction is 0.5K above 1971-2000 then the uncertainty is 0.2K, and so on.

    Hope this helps!

  25. Ed Hawkins:

    Re #18 (again)
    Also, of course regionally the internal variability is much larger and so we can be less confident in our projections of the next decade on regional scales, as shown by the second panel in the article, where the fractional uncertainty is indeed larger than 1 for the next decade for the UK.
    This does not make the projections useless, but it is important to realise the uncertainties in short term projections. It is hoped that the decadal predictions may help in this respect, though much more research is needed.

  26. David Harrington:

    It very much looks like the model predictions are just about hanging in there when compared to the real world measurements. If temperatures continue to fall or flatten out then observations will break out of the lower band and then it will be interesting to see where the modellers go then.

    Is it just me, or is there a lot less certainty surrounding this subject recently, shrill nonsense stories in the lead-up to Copenhagen notwithstanding?

  27. Naindj:

    Re #21, #24 and #25 (John and Hank)

    Yes it helped!
    (I’m slow but not a desperate case)
    Quite interesting graph indeed. (once you understand it!)


  28. Dean:

    I think that it would help many of us understand the basis of the discussion if “decadal predictions” were defined and contrasted with climate change predictions (or projections, if that is a better word) whose scale can be “decadal”.

    [Response: The projections that are usually talked about (and that are the basis of the IPCC figures for instance) are all from long simulations that hindcast from the 19th Century using only the external forcings (CO2, methane, aerosols, volcanoes, solar etc.). By the time they get to 2010 or 2020 there is a wide range of ‘states’ of the internal variability. For instance, one realisation might have a big El Nino in 2009, another would be having a La Nina event and more would be ENSO-neutral. Similarly, some might have an anomalously strong circulation in the North Atlantic, others would have an anomalously weak circulation etc. Thus the projections for the next decade from these sets of runs encompass a wide range of internal variability which has no relationship to the state of internal variability in the real world right now. Thus their future simulations are generally only used to forecast the element of climate change that is forced by the continuing rise in greenhouse gases etc. and to give an assessment of the range of possible departures from the forced change that result from different internal variability.

    The difference that the decadal predictions make is that they try and sync up the state of internal variability from the real world with an analogous state in the model, and thus attempt to predict the unforced variability and the forced variability together. In theory, this should be closer to what actually happens – at least for the short time period. – gavin ]

  29. Kevin McKinney:

    A bit OT here, but speaking of changing the energy mix, Ontario, Canada’s largest province, has just enacted feed-in-tariffs a la Spain–and we may be seeing the first corporate impacts from that. It should be noted that Ontario plans to phase out coal by 2014. (Coal was 23% of the energy mix five years ago, and is now just 7%.)

  30. Hank Roberts:

    > sync … with an analogous state in the model

    Does that mean picking from the simulations run from 1900 to the present a few that (now? year after year from the start?) happened to stay the closest to observed reality in as many ways as possible?

    Some of that closeness would be happenstance (getting the volcanos right); is the notion that some of the closeness would be meaningful and those scenario runs would happen to have picked values within the ranges that are …..

    I’ll stop guessing now. But before the topic derails, if you can, or some of the visiting authors can, please do set out a 7th-grade-level description of what’s done?

  31. David B. Benson:

    David Harrington (26) — Temperatures continue to rise. There is a nice graphic here:

    and here are the decadal averages from the HadCRUTv3 global temperature product:

  32. Theo Hopkins:

    On trends and 30 year ‘windows’.

    I’m not a scientist, nor a mathematician.

    Dipping into RC from time to time I have learned that climate is determined by 30 year time spans; anything less is natural variation – or ‘weather’. Variations tend to obscure the trend. So far, so good.

    So how can I put faith in the work of scientists and modellers regarding the continuing, if slightly erratic, upward trend?

    This would help me.

    If one were to take every 30-year time span from 1850 (a reasonable start point?), so that’s 1850 – 1879, then 1851 – 1880, blah, blah, etc., finally ending at 1979 – 2008, are all these year-on-year 30-year trends level or upward?

    Essentially, I am looking for something that would satisfy my (always doubting) mind.

    I really would like to get an answer to this.

    If not, what do people on this site say?


    PS. Above, where I wrote “always doubting” I would have preferred to have used the word ‘sceptical’, but that word, ‘sceptical’, seems to have become poisoned.

  33. pete best:

    From the web article referenced in your article, it’s all about natural variability being so overwhelming on the relatively short timescales of the oscillations of natural phenomena (ENSO – El Niño, La Niña – etc.). Teasing a viable projection of a specific trend out of anything shorter than a statistically significant period is much harder than knowing what will most likely happen over an entire century (100 years)?

    If we continue business-as-usual on fossil fuels with usage rising at 2% per year, then we double our usage in about 35 years (ln 2 / 0.02 ≈ 35), which also means an additional 1.6 trillion tonnes will have been released by 2045. If sinks falter then up to 1 trillion tonnes remains in the atmosphere.

    Does anyone know the conversion from billions of tonnes of CO2 to ppmv of CO2 in the atmosphere? I did read 50 ppmv per 200 billion tonnes, which would mean a 250 ppmv rise in CO2 by 2045.
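On the conversion question: the standard rule of thumb (worth double-checking against the carbon-cycle literature) is that 1 ppmv of atmospheric CO2 corresponds to about 2.13 GtC, i.e. roughly 7.8 Gt of CO2. A quick sketch:

```python
import math

# Rough conversion from airborne CO2 mass to atmospheric mixing ratio.
# Standard figures: atmosphere mass ~5.15e18 kg, mean molar mass of
# dry air ~28.97 g/mol, CO2 ~44.01 g/mol.
M_ATM_KG = 5.15e18
M_AIR = 28.97
M_CO2 = 44.01

def gt_co2_per_ppmv():
    # mass of CO2 (in Gt; 1 Gt = 1e12 kg) corresponding to 1 ppmv
    return M_ATM_KG * 1e-6 * (M_CO2 / M_AIR) / 1e12

def ppmv_rise(gt_co2_airborne):
    return gt_co2_airborne / gt_co2_per_ppmv()

print(round(gt_co2_per_ppmv(), 1))  # ~7.8 Gt CO2 per ppmv
print(round(ppmv_rise(1000)))       # 1000 Gt CO2 airborne -> ~128 ppmv

# and the doubling time: at 2%/yr growth, usage doubles in ln2/0.02 years
print(round(math.log(2) / 0.02))    # ~35 years
```

By this conversion, 1 trillion tonnes of CO2 remaining airborne would raise concentrations by roughly 130 ppmv rather than 250 ppmv, so the 50 ppmv per 200 billion tonnes figure looks too high for CO2 mass.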

  34. P. Marbaix:

    I have the impression that this discussion on decadal prediction is ignoring natural external forcing:
    for the very next decades, it seems to me that the potential for major volcanic eruption(s) and the unknowns regarding solar activity are absent from the BAMS figures – or at least, I wonder where they could be: this is not internal variability (it is short-term fluctuation in external forcing), it is not model error, and it is probably not included in the “scenario” uncertainty (it might have been, but a quick look at the paper suggests that it is not the case).
    Thus my impression is that this is a theoretical view about models, but it does not fully answer the question “what do we know about the climate within x decades”, at least for short term prediction. Right ?

    [Response: Not really. For natural forcings that are somewhat predictable (i.e. the existence of the solar cycle – if not its precise magnitude), this can be included. Volcanoes are obviously a wild card. In some sense this can be likened to scenario uncertainty – but it would be a greater magnitude than the scenario uncertainty addressed by Hawkins. – gavin]

  35. David B. Benson:

    Theo Hopkins (32) — The graph of decadal temperature averages linked in comment #31 starts in 1850 CE. You can estimate the 30 year averages from that. Of the more recent such intervals 1940–1969 might be flat or even down a bit; certainly not after that.

  36. Kevin McKinney:

    Theo, that’s a great project. I think that, in order to achieve maximum reassurance for your mind–doubting, skeptical, or just questioning–you should do the plotting yourself. Woodfortrees offers a wonderful tool to do this. I’ve done the first 30-year span for you; knock yourself out!

    (And let us know what you find.)

  37. MikeN:

    – old models are less useful than improved ones
    Of course. But the old models still have some use.
    – each model uses all available computer and time resources
    No they don’t. Only when they are running. I’m asking for the code to be stored.
    – all models are wrong.
    but some are more wrong than others?

  38. llewelly:

    Theo Hopkins says:
    29 September 2009 at 1:51 PM:

    If one were to take every 30 time span from 1850 (reasonable start point?), so that’s 1850 – 1879, then 1851 – 1880, blah, blah, etc finally ending at 1979 – 2008, are all these year on year 30 year trends level or upward?

    No. Why would you expect them to be?
    1851 – 1880
    1881 – 1910
    1911 – 1940
    1941 – 1970
    1971 – 2000
    1851 – 2008

    I used hadcrut3 because it goes back to 1851. GISSTEMP is similar, but only goes back to 1880. (It has been argued that from 1851 – 1880 hadcrut3 only represents the NH well.)

    Here’s GISTEMP:
    1881 – 1910
    1911 – 1940
    1941 – 1970
    1971 – 2000
    1880 – 2008
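If Theo wants to run the experiment himself (as Kevin and Hank suggest), the rolling 30-year trends are just repeated least-squares fits. Here is a sketch on a synthetic annual series (a linear warming trend plus noise), since the actual HadCRUT3 data can’t be pasted into a comment:

```python
# Rolling 30-year OLS trends, as in Theo's proposed experiment (#32).
# The series is synthetic, standing in for annual temperature anomalies.
import random

random.seed(0)
years = list(range(1850, 2009))
temps = [0.005 * (y - 1850) + random.gauss(0, 0.1) for y in years]

def ols_slope(xs, ys):
    # ordinary least-squares slope of ys against xs
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# slope (deg/yr) of each 30-year window: 1850-1879, 1851-1880, ...
trends = [ols_slope(years[i:i + 30], temps[i:i + 30])
          for i in range(len(years) - 29)]
print(len(trends))  # 130 windows, the last covering 1979-2008
```

Swap in the real anomalies (e.g. downloaded from woodfortrees) and you can answer the question directly: with realistic noise levels, not every 30-year window need slope upward even when the century-scale trend does.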

  39. Hank Roberts:

    Theo, Kevin’s right, you should do it yourself rather than just trust some stranger on a blog somewhere to do it for you.

    This may help:

  40. Hank Roberts:

    MikeN — you can look it up. You know how.

  41. CM:

    Theo Hopkins (#32),

    What you’re looking for (though 20- or 25-year trends rather than 30-year) is in figures 3, 4 and 5, but do read the whole post, or you’ll miss out.

  42. Ed Hawkins:

    Re #30 (Hank)

    The ‘syncing’ of the models to the internal variability is done by inserting (or ‘assimilating’) ocean observations into the climate model (this type of thing is done for weather forecasting too) to force the climate model to be close to the current state. It is then left to run free without being given any more observations, and that free run is the prediction.
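A cartoon of what Ed describes, with a toy nudging scheme standing in for real ocean data assimilation (which is far more sophisticated), might look like:

```python
# Toy 'nudged' initialisation: during the assimilation window the model
# state is relaxed toward observations at each step; afterwards it runs
# free, and that free-running trajectory is the prediction.
# Purely illustrative -- the 'model' here is a trivial damped recursion.

def step(x):
    return 0.9 * x  # one time step of the toy model

def run(x0, nsteps, obs=None, relax=0.5):
    xs = [x0]
    for t in range(nsteps):
        x = step(xs[-1])
        if obs is not None and t < len(obs):
            x += relax * (obs[t] - x)  # nudge toward the observation
        xs.append(x)
    return xs

observations = [1.0, 0.8, 0.9, 0.7, 0.75]    # assimilation window
trajectory = run(0.0, 15, obs=observations)  # nudged, then free-running
forecast = trajectory[len(observations):]    # the prediction part
```

The point of the nudging is visible even in this toy: after the assimilation window the model state sits near the observed state rather than wherever its own internal variability happened to wander, which is exactly the difference between initialised decadal predictions and the free-running projections Gavin describes in #28.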


  43. Jim Galasyn:

    Our Stefan is pumping up the volume:

    Two meter sea level rise unstoppable: climate scientists

    OXFORD, England (Reuters) – A rise of at least two meters in the world’s sea levels is now almost unstoppable, experts told a climate conference at Oxford University on Tuesday.

    “The crux of the sea level issue is that it starts very slowly but once it gets going it is practically unstoppable,” said Stefan Rahmstorf, a scientist at Germany’s Potsdam Institute and a widely recognized sea level expert.

    “There is no way I can see to stop this rise, even if we have gone to zero emissions.” …

  44. Chris Dudley:

    Gavin in #28,

    Is it not possible to test with presently available model runs if the idea of detailed predictions can be hoped for? There must be instances when model and observation make a pretty good match in the hindcasts. Do they then stay close for a period longer than just being temporary neighbors might produce? Is there extended coherence between model and observation when they get close? If so, then there is hope. If not, then the models are not close enough in detail to the world yet.

  45. Martin Vermeer:

    Jim Galasyn #43, note that the two metres is not a forecast for 2100AD. Just in case someone tries to pull a Gore on Stefan…

  46. Richard:

    I would like to ask you a general question – How accurate are the GCM’s in predicting reality?

    You have said that for temperature, the output from single GCM runs will not match the data as well as a statistical model based purely on the forcings, but that the mean of many simulations is better at predictions than any individual simulation.

    I am puzzled by this. If a single simulation is not a good predictor of reality how can the average of many simulations, each of which is a poor predictor of reality, be a better predictor, or indeed claim to have any residual of reality?

    [Response: Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean). Just as in weather forecasting, the forecast with the greatest skill turns out to be this ensemble mean. i.e. you do the least badly by not trying to forecast the ‘noise’. This isn’t necessarily cast in stone, but it is a good rule of thumb. – gavin]
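Gavin’s “forced signal plus noise” argument can be demonstrated with a toy ensemble (purely synthetic numbers, not actual GCM output): each realisation shares the forced trend but carries independent noise, so averaging N realisations shrinks the noise roughly like 1/sqrt(N) while leaving the signal intact.

```python
import random

random.seed(42)

def realisation(forced, sigma=0.5):
    # one 'model run': the forced signal plus independent internal
    # variability ('noise')
    return [f + random.gauss(0, sigma) for f in forced]

forced = [0.02 * t for t in range(100)]  # toy forced trend
runs = [realisation(forced) for _ in range(50)]

# ensemble mean: the noise components average toward zero
ens_mean = [sum(r[t] for r in runs) / len(runs) for t in range(100)]

def rms_error(series):
    # RMS departure from the true forced signal
    return (sum((s - f) ** 2
                for s, f in zip(series, forced)) / len(series)) ** 0.5

print(rms_error(runs[0]))   # single run: error ~ sigma (~0.5)
print(rms_error(ens_mean))  # ensemble mean: error ~ sigma/sqrt(50) (~0.07)
```

This is why the ensemble mean verifies better than any single run against the real world’s forced component, even though no single run is “wrong” as a realisation.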

  47. DanH:

    Should we really treat “scenario uncertainty” as an uncertainty (part of the noise), or as sensitivity to an independent variable (part of the signal)?

    [Response: It is not part of the noise, it is part of the uncertainty in the forced signal and so is very different in kind from internal variability. – gavin]

  48. Barton Paul Levenson:

    I’ve put a new page on my climatology web site dealing with the successful predictions of the models. I cribbed a great deal from earlier posts at places like RealClimate and Open Mind. Here it is:

    Can anyone who knows any of these details send me a citation for a prediction and/or the observation(s) which confirmed it? I have some of this information already, but not all of it. If I can get a comprehensive list together I’ll put it all into that web page as an appendix to prove I’m not just blowing smoke.

  49. bushy:

    Paul in 48. It is all very well to trumpet the success of the models at predicting this or that, but where is the model that can predict at least a fair number of these results in its own right? Model “a” got this right and model “b” got that right, but on the whole the process is an amalgamation or average, and on that basis it is not going that well. Cherry picking is not science.

    [Response: Fair general point, but not actually a propos. Publishing particular results with a particular model is normal, publishing repetitions of the same result with a different model tends to be less common – but it certainly happens. And in assessments such as the IPCC or the CCSP reports, the results tend to be taken over multiple models. – gavin]

  50. John P. Reisman (OSS Foundation):

    #32 Theo Hopkins

    30 years with attribution helps. In other words, look at the period after WWII. There was about a 33 year cooling trend, but that trend is generally attributed to the effects of industrial aerosols masking the global warming effect during that period.

    The industrial output at that time had some downsides, like the destruction of the ozone layer and acid rain. So the Montreal Protocol was put in place, the industrial output of those pollutants was reduced, and the CO2 pollutant then became more dominant and warming resumed.

    PS I’m still a skeptic.