
RealClimate

Climate science from climate scientists...


Climate modelling

CMIP5 simulations

11 Aug 2011 by Gavin

Climate modeling groups all across the world are racing to add their contributions to the CMIP5 archive of coupled model simulations. This coordinated project, proposed, conceived and specified by the climate modeling community itself, will be an important resource for analysts and for the IPCC AR5 report (due in 2013), and beyond.

There have been previous incarnations of the CMIP projects going back to the 1990s, but I think it’s safe to say that it was only with CMIP3 (in 2004/2005) that the project gained a real maturity. The CMIP3 archive was heavily used in the IPCC AR4 report – so much so that people often describe those models and simulations as the ‘IPCC models’. That is a reasonable shorthand, but is not really an accurate description (the models were not chosen by IPCC, designed by IPCC, or run by IPCC) even though I’ve used it on occasion. Part of the success of CMIP3 was the relatively open data access policy which allowed many scientists and hobbyists alike to access the data – many of whom were dealing with GCM output for the first time. Some 600 papers have been written using data from this archive. We discussed some of this success (and some of the problems) back in 2008.

Now that CMIP5 is gearing up for a similar exercise, it is worth looking into what has changed – in terms of the model specifications, the requested simulations and the data serving to the wider community. Many of these issues are being discussed in the current CLIVAR newsletter (Exchanges no. 56). (The references below are all to articles in this pdf).

There are three main novelties this time around that I think are noteworthy: the use of more interactive Earth System models, a focus on initialised decadal predictions, and the inclusion of key paleo-climate simulations as part of the suite of runs.

The term Earth System Model is a little ambiguous, with some people reserving it for models that include a carbon cycle, and others (including me) using it more generally to denote models with more interactive components than are used in more standard (AR4-style) GCMs (i.e. atmospheric chemistry, aerosols, ice sheets, dynamic vegetation etc.). Regardless of terminology, the 20th Century historical simulations in CMIP5 will use a much more diverse set of model types than did the similar simulations in CMIP3 (where all models were standard coupled GCMs). This both expands the range of possible evaluations of the models and increases the complexity of that evaluation.

The ‘decadal prediction’ simulations are mostly being run with standard GCMs (see the article by Doblas-Reyes et al, p8). The different groups are trying multiple methods to initialise their ocean circulations and heat content at specific points in the past and are then seeing if they are able to better predict the actual course of events. This is very different from standard climate modelling where no attempt is made to synchronise modes of internal variability with the real world. The hope is that one can reduce the initial condition uncertainty for predictions in some useful way, though this has yet to be demonstrated. Early attempts to do this have had mixed results, and from what I’ve seen of the preliminary results in the CMIP5 runs, significant problems remain. This is one area to watch carefully though.

Personally, I am far more interested in the inclusion of the paleo component in CMIP5 (see Braconnot et al, p15). Paleo-climate simulations with the same models that are being used for the future projections allow for the possibility that we can have true ‘out-of-sample’ testing of the models over periods with significant climate changes. Much of the previous work in evaluating the IPCC models has been based on modern period skill metrics (the climatology, seasonality, interannual variability, the response to Pinatubo etc.), but while useful, this doesn’t encompass changes of the same magnitude as the changes predicted for the 21st Century. Including tests with simulations of the last glacial maximum, the Mid-Holocene or the Last Millennium greatly expands the range of model evaluation (see Schmidt (2010) for more discussion).
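As a rough illustration of what a modern-period skill metric might look like, here is a minimal sketch that computes an area-weighted RMSE between a model climatology and an observational climatology on the same grid. The fields, grid and weighting are hypothetical stand-ins invented for this sketch, not any specific CMIP5 diagnostic.

```python
import numpy as np

def area_weighted_rmse(model_clim, obs_clim, lat):
    """Area-weighted RMSE between two (lat, lon) climatologies on the same grid."""
    # cos(latitude) weights approximate grid-cell area on a regular lat/lon grid
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(model_clim)
    err2 = (model_clim - obs_clim) ** 2
    return np.sqrt(np.sum(w * err2) / np.sum(w))

# Hypothetical example with synthetic fields on a 2-degree grid
lat = np.arange(-89, 90, 2.0)
lon = np.arange(0, 360, 2.0)
obs = 288 - 30 * np.sin(np.deg2rad(lat))[:, None] ** 2 + 0 * lon  # idealised climatology
mod = obs + np.random.default_rng(0).normal(0, 1.5, obs.shape)    # 'model' = obs + error
print(f"Area-weighted RMSE: {area_weighted_rmse(mod, obs, lat):.2f} K")
```

Real evaluations of course use many such metrics (seasonal cycles, variability, forced responses) rather than a single number, which is precisely why the extra out-of-sample paleo tests are valuable.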

The CLIVAR newsletter has a number of other interesting articles, on CFMIP (p20), the scenarios being used (RCPs) (p12), the ESG data delivery system (p40), satellite comparisons (p46, and p47) and the carbon-cycle simulations (p27). Indeed, I think the range of issues covered presages the depth and interest that the CMIP5 archive will eventually generate.

There will be a WCRP meeting in October in Denver that will be very focused on the CMIP5 results, and it is likely that much of the context for the AR5 report will be reflected there.

Filed Under: Climate modelling, Climate Science, IPCC

Reanalyses ‘R’ Us

26 Jul 2011 by Gavin

There is an interesting new wiki site, Reanalyses.org, that has been developed by a number of groups dedicated to documenting the various reanalysis products for atmosphere and ocean that are increasingly being made available.

For those that don’t know, a ‘reanalysis’ is a climate or weather model simulation of the past that includes data assimilation of historical observations. The observations can be very comprehensive (satellite, in situ, multiple variables) or relatively sparse (say, sea level pressure only), and the models themselves are quite varied. Generally these models are drawn from the weather forecasting community (at least for the atmospheric components) which explains the odd terminology. An ‘analysis’ from a weather forecasting model is the 6 hour (say) forecast from the time of observations. Weather forecasting groups realised a decade or so ago that the time series of their weather forecasts (the analyses) could not be used to track long term changes because their models had been updated many times over the decades. Thus the idea arose to ‘re-analyse’ the historical observations with a single consistent model. These sets of 6 hour forecasts using the data available at each point are then more consistent in time (and presumably more accurate) than the original analyses were.
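For intuition only, here is a minimal sketch of the kind of analysis step an assimilation system performs, reduced to a single scalar: the forecast is nudged toward an observation with a weight set by the assumed error variances. Real systems (3D-Var, 4D-Var, ensemble Kalman filters) do this for enormous state vectors with full error covariances; all the numbers below are made up for illustration.

```python
# Minimal scalar analysis step: blend a model forecast with an observation
# using their (assumed known) error variances. Purely illustrative numbers.
forecast = 271.3       # model 6-hour forecast of some quantity (e.g. temperature in K)
obs = 272.1            # the corresponding observation
var_forecast = 1.0     # assumed forecast error variance
var_obs = 0.25         # assumed observation error variance

# Kalman-style gain: how much to trust the observation relative to the forecast
gain = var_forecast / (var_forecast + var_obs)

analysis = forecast + gain * (obs - forecast)
print(f"analysis = {analysis:.2f}  (gain = {gain:.2f})")
# With these variances the analysis sits much closer to the observation;
# the analysis then serves as the initial condition for the next forecast.
```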

The first two reanalysis projects (NCEP1 and ERA-40) were groundbreaking and allowed a lot of analysis of the historical climate (around 1958 or 1948 onwards) that had not been possible before. Essentially, the models are being used to interpolate between observations in a (hopefully) physically consistent manner providing a gridded and complete data set. However, there are noted problems with this approach that need to be borne in mind.

The most important issue is that the amount and quality of the assimilated data has changed enormously over time. Particularly in the pre-satellite era (before about 1979), data is relatively sparse and reliant on networks of in-situ measurements. After 1979 the amount of data being brought in increases by orders of magnitude. It is also important to consider how even continuous measurement series have changed. For instance, the response time for sensors in radiosondes (which are used to track atmospheric profiles of temperature and humidity) has steadily improved which, if uncorrected for in the reanalyses, would lead to an erroneous drying in the upper troposphere that has nothing to do with any actual climate trend. In fact it is hard to correct for such problems in data coverage and accuracy, and so trend analyses in the reanalyses have to be treated very carefully (and sometimes avoided altogether).
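To see why an uncorrected change in the observing system can masquerade as a climate trend, the toy example below constructs a humidity-like series that is actually flat, but whose measured values drift as the (hypothetical) sensor bias shrinks over the decades; a naive linear fit then reports a ‘trend’ that is pure instrument artefact. Everything here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1958, 2011)

# True upper-tropospheric humidity signal: flat, with interannual noise
true_signal = 30.0 + rng.normal(0, 1.0, years.size)

# Hypothetical instrument artefact: older, slower sensors read moist-biased,
# and the bias shrinks as sensor response times improve over time
bias = np.linspace(3.0, 0.0, years.size)
measured = true_signal + bias

# A naive trend fit to the uncorrected measurements
slope = np.polyfit(years, measured, 1)[0]
print(f"Apparent trend: {slope * 10:.2f} units per decade (true trend is ~0)")
```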

A further problem is that different outputs from the reanalyses are differently constrained by observations. Where observations are plentiful and span the variability, the reanalysis field is close to what actually happened (for instance, horizontal components of the wind), but where the output field is only indirectly related to the assimilated observations (rainfall, cloudiness etc.), the changes and variability are much more of a product of the model.

The more modern products (NCEP-2, ERA-Interim, MERRA and others) are substantially improved over the first set, and new approaches are also being tried. The ‘20th Century Reanalysis‘ is a new product that only uses (plentiful) surface pressure measurements to constrain the dynamics, and although it uses less data than other products, it can go back much earlier (to the 19th Century) and still produce meaningful results. Other new products are the ocean reanalyses (ECCO for instance) that try to take the same approach with ocean temperature and salinity measurements.

These products should definitely not be assumed to have the status of ‘real observations’, but they are very useful as long as people take the caveats seriously and are clear about the structural uncertainties. Results that differ enormously across different reanalyses should be viewed with caution.

The new site includes some very promising descriptions of how to download and plot the data, and will hopefully soon be able to fill up the rest of the pages. Some suggestions might be a list of key papers discussing the results of these reanalyses and lists of issues found (so that others don’t waste their time). It’s a very promising start though.
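In the spirit of those how-to pages, here is a minimal sketch of opening and plotting a reanalysis field with Python's xarray. The file name and variable name are placeholders for whatever product you have actually downloaded, not references to a specific dataset on the wiki.

```python
import xarray as xr
import matplotlib.pyplot as plt

# Placeholder file and variable names: substitute the monthly-mean netCDF
# file from the reanalysis product you downloaded.
ds = xr.open_dataset("reanalysis_monthly.nc")
print(ds)  # inspect the available variables, coordinates and time span

# Plot one month of a 2-D field (here assumed to be called "t2m")
ds["t2m"].sel(time="1990-01").squeeze().plot()
plt.title("Example reanalysis field, January 1990")
plt.show()
```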

Filed Under: Climate modelling, Climate Science, Instrumental Record

Making climate science more useful

29 Mar 2011 by rasmus

[Image: Impression from the ICTP]

Last week, there was a CORDEX workshop on regional climate modelling at the International Centre for Theoretical Physics (ICTP), near Trieste, Italy.

The CORDEX initiative, as the abbreviation ‘COordinated Regional climate Downscaling Experiment‘ suggests, tries to bring together the community of regional climate modellers. At the very least, the initiative has the blessing of the World Climate Research Programme (WCRP).

I think the most important take-home message from the workshop is that stakeholders and end users of climate information should not look at just one simulation from global climate models, or just one downscaling method. This is very much in agreement with the recommendations of the IPCC Good Practice Guidance Paper. The main reason for this is the degree of uncertainty involved in regional climate modelling, as discussed in a previous post.

[Read more…] about Making climate science more useful

Filed Under: Climate modelling, Climate Science, Communicating Climate, Scientific practice

From blog to Science

13 Feb 2011 by Gavin

There is a lot of talk around about why science isn’t being done on blogs. It can happen though, and sometimes blog posts can even end up as (part of) a real Science paper. However, the process is non-trivial and the relatively small number of examples of such a transition demonstrate clearly why blog science is not going to replace the peer-reviewed literature any time soon.

[Read more…] about From blog to Science

Filed Under: Climate modelling, Climate Science, Greenhouse gases

2010 updates to model-data comparisons

21 Jan 2011 by Gavin

As we did roughly a year ago (and as we will probably do every year around this time), we can add another data point to a set of reasonably standard model-data comparisons that have proven interesting over the years.
[Read more…] about 2010 updates to model-data comparisons

Filed Under: Climate modelling, Climate Science, Instrumental Record, Model-Obs Comparisons

Cold winter in a world of warming?

14 Dec 2010 by rasmus

Last June, during the International Polar Year conference, James Overland suggested that there are more cold and snowy winters to come. He argued that the exceptionally cold snowy 2009-2010 winter in Europe had a connection with the loss of sea-ice in the Arctic. The cold winters were associated with a persistent ‘blocking event’, bringing in cold air over Europe from the north and the east.

[Read more…] about Cold winter in a world of warming?

Filed Under: Arctic and Antarctic, Climate modelling, Climate Science

So how did that global cooling bet work out?

22 Nov 2010 by group

Two and a half years ago, a paper was published in Nature purporting to be a real prediction of how global temperatures would develop, based on a method for initialising the ocean state using temperature observations (Keenlyside et al, 2008) (K08). In the subsequent period, this paper has been highly cited, very often in a misleading way by contrarians (for instance, Lindzen misrepresents it on a regular basis). But what of the paper’s actual claims, how are they holding up?
[Read more…] about So how did that global cooling bet work out?

Filed Under: Climate modelling, Climate Science, Oceans

Solar spectral stumper

7 Oct 2010 by Gavin

It’s again time for one of those puzzling results that if they turn out to be true, would have some very important implications and upset a lot of relatively established science. The big issue of course is the “if”. The case in question relates to some results published this week in Nature by Joanna Haigh and colleagues. They took some ‘hot off the presses’ satellite data from the SORCE mission (which has been in operation since 2003) and ran it through a relatively complex chemistry/radiation model. These data are measurements of how the solar output varies as a function of wavelength from an instrument called “SIM” (the Spectral Irradiance Monitor).
[Read more…] about Solar spectral stumper

Filed Under: Climate modelling, Climate Science, Sun-earth connections

On attribution

26 May 2010 by Gavin

How do we know what caused climate to change – or even if anything did?

This is a central question with respect to recent temperature trends, but of course it is much more general and applies to a whole range of climate changes over all time scales. Judging from comments we receive here and discussions elsewhere on the web, there is a fair amount of confusion about how this process works and what can (and cannot) be said with confidence. For instance, many people appear to (incorrectly) think that attribution is just based on a naive correlation of the global mean temperature, or that it is impossible to do unless a change is ‘unprecedented’ or that the answers are based on our lack of imagination about other causes.

In fact the process is more sophisticated than these misconceptions imply and I’ll go over the main issues below. But the executive summary is this:

  • You can’t do attribution based only on statistics
  • Attribution has nothing to do with something being “unprecedented”
  • You always need a model of some sort
  • The more distinct the fingerprint of a particular cause is, the easier it is to detect

Note that it helps enormously to think about attribution in contexts that don’t have anything to do with anthropogenic causes. For some reason that allows people to think a little bit more clearly about the problem.
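As a cartoon of the fingerprint idea, the sketch below regresses a synthetic ‘observed’ temperature series onto two candidate forcing fingerprints (a smooth greenhouse-like ramp and a solar-like oscillation) using ordinary least squares. The estimated scaling factors are what a real detection and attribution study would interrogate far more carefully, with spatial patterns and proper noise models; all the series here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(100)  # years, arbitrary

# Hypothetical fingerprints (time series stand-ins for full spatial patterns)
ghg = 0.01 * t                               # smooth ramp, greenhouse-like
solar = 0.1 * np.sin(2 * np.pi * t / 11.0)   # 11-year cycle, solar-like

# Synthetic "observations": mostly the ramp, a bit of cycle, plus noise
obs = 1.0 * ghg + 0.3 * solar + rng.normal(0, 0.1, t.size)

# Least-squares scaling factors for each fingerprint
X = np.column_stack([ghg, solar])
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
print(f"GHG scaling: {beta[0]:.2f}, solar scaling: {beta[1]:.2f}")
# Scaling factors consistent with 1 (and inconsistent with 0), judged against
# a model of internal variability, are the basic ingredient of an attribution statement.
```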
[Read more…] about On attribution

Filed Under: Climate modelling, Climate Science

Ocean heat content increases update

21 May 2010 by Gavin

Translations: (Italian) (English)

A Spanish translation is available here.

Filed Under: Climate modelling, Climate Science, Oceans
