


FAQ on climate models

Filed under: — group @ 3 November 2008

We discuss climate models a lot, and from the comments here and in other forums it’s clear that there remains a great deal of confusion about what climate models do and how their results should be interpreted. This post is designed to be a FAQ for climate model questions – of which a few are already given. If you have comments or other questions, ask them as concisely as possible in the comment section and if they are of enough interest, we’ll add them to the post so that we can have a resource for future discussions. (We would ask that you please focus on real questions that have real answers and, as always, avoid rhetorical excesses).

Part II is here.

Quick definitions:

  • GCM – General Circulation Model (sometimes Global Climate Model) which includes the physics of the atmosphere and often the ocean, sea ice and land surface as well.
  • Simulation – a single experiment with a GCM
  • Initial Condition Ensemble – a set of simulations using a single GCM but with slight perturbations in the initial conditions. This is an attempt to average over chaotic behaviour in the weather.
  • Multi-model Ensemble – a set of simulations from multiple models. Surprisingly, an average over these simulations gives a better match to climatological observations than any single model.
  • Model weather – the path that any individual simulation will take has very different individual storms and wave patterns than any other simulation. The model weather is the part of the solution (usually high frequency and small scale) that is uncorrelated with another simulation in the same ensemble.
  • Model climate – the part of the simulation that is robust and is the same in different ensemble members (usually these are long-term averages, statistics, and relationships between variables).
  • Forcings – anything that is imposed from the outside that causes a model’s climate to change.
  • Feedbacks – changes in the model that occur in response to the initial forcing that end up adding to (for positive feedbacks) or damping (negative feedbacks) the initial response. Classic examples are the amplifying ice-albedo feedback, or the damping long-wave radiative feedback.
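The multi-model ensemble point above can be illustrated with a toy calculation (synthetic numbers, not real model output): if each "model" carries its own independent systematic bias, much of that bias cancels in the ensemble mean.

```python
import random

random.seed(0)

# Toy illustration (synthetic numbers, not real model output): each "model"
# estimates the same true field with its own systematic bias plus random
# error; averaging across models cancels much of the bias.
TRUE_VALUE = 14.0  # e.g. a global mean temperature, deg C
N_MODELS = 10
N_GRIDPOINTS = 500

def rmse(estimates):
    return (sum((e - TRUE_VALUE) ** 2 for e in estimates) / len(estimates)) ** 0.5

models = []
for _ in range(N_MODELS):
    bias = random.gauss(0, 1.0)  # drawn once per model
    field = [TRUE_VALUE + bias + random.gauss(0, 1.0) for _ in range(N_GRIDPOINTS)]
    models.append(field)

# Multi-model mean at each grid point.
ensemble_mean = [sum(m[i] for m in models) / N_MODELS for i in range(N_GRIDPOINTS)]

errors = [rmse(m) for m in models]
print("mean individual RMSE:", round(sum(errors) / len(errors), 2))
print("ensemble mean RMSE:  ", round(rmse(ensemble_mean), 2))
```

The ensemble mean beats the typical individual "model" here only because the biases are independent; to the extent real model errors are shared, the cancellation is weaker.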

Questions:

  • What is the difference between a physics-based model and a statistical model?

    Models in statistics, or in many colloquial uses of the term, often imply a simple relationship that is fitted to some observations: a linear regression of temperature against time, or a sinusoidal fit to the seasonal cycle, for instance. More complicated fits are also possible (neural nets, for instance). These statistical models are very efficient at encapsulating existing information concisely and, as long as things don’t change much, they can provide reasonable predictions of future behaviour. However, they aren’t much good for prediction if you know the underlying system is changing in ways that might affect how your original variables interact.
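    This failure mode can be sketched with synthetic data (invented numbers, not a real temperature record): a linear fit to a stable period extrapolates badly once the underlying system starts to change.

```python
import math

# A sketch (synthetic data, not a real temperature record) of a statistical
# model failing when the underlying system changes: a hypothetical monthly
# series with a seasonal cycle plus a trend that only begins halfway through.
def temperature(t):  # t in months
    seasonal = 10 * math.sin(2 * math.pi * t / 12)
    trend = 0.02 * max(0, t - 120)  # regime change at month 120
    return 14 + seasonal + trend

obs = [temperature(t) for t in range(120)]  # fit to the first 10 years only

# Least-squares linear fit (slope per year) to the annual means.
annual = [sum(obs[y * 12:(y + 1) * 12]) / 12 for y in range(10)]
n = len(annual)
xbar = (n - 1) / 2
ybar = sum(annual) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(range(n), annual)) / \
        sum((x - xbar) ** 2 for x in range(n))

# Extrapolating the statistical fit misses the later trend entirely.
predicted_year_19 = ybar + slope * (19 - xbar)
actual_year_19 = sum(temperature(t) for t in range(228, 240)) / 12
print(round(predicted_year_19, 2), round(actual_year_19, 2))
```

The fit is excellent within its training period and wrong by over two degrees outside it, which is the "as long as things don’t change much" caveat in miniature.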

    Physics-based models, on the other hand, try to capture the real physical causes of any relationship, which are hopefully understood at a deeper level. Since those fundamentals are not likely to change in the future, the expectation of a successful prediction is higher. A classic example is Newton’s second law of motion, F = ma, which can be used in multiple contexts to give highly accurate results completely independently of the data Newton himself had on hand.
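    As a minimal sketch of that transferability, the same F = ma integration works in a regime no fitted data covered, such as a different planet's gravity:

```python
# A minimal sketch of a physics-based model: integrating F = ma (here a = g,
# constant) with simple forward-Euler timesteps. The same law transfers to
# contexts the original data never covered, which is the point being made.
def fall_distance(g, t_total, dt=1e-4):
    """Distance fallen from rest under constant acceleration g (m/s^2)."""
    v, x, t = 0.0, 0.0, 0.0
    while t < t_total:
        v += g * dt  # dv/dt = F/m = g
        x += v * dt  # dx/dt = v
        t += dt
    return x

# Same model, different planet: Earth (g ~ 9.81) vs Mars (g ~ 3.71).
print(round(fall_distance(9.81, 2.0), 2))  # analytic value g*t^2/2 = 19.62 m
print(round(fall_distance(3.71, 2.0), 2))  # analytic value 7.42 m
```

No refitting is needed when the context changes; only the physical inputs change.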

    Climate models are fundamentally physics-based, but some of the small scale physics is only known empirically (for instance, the increase of evaporation as the wind increases). Thus statistical fits to the observed data are included in the climate model formulation, but these are only used for process-level parameterisations, not for trends in time.

  • Are climate models just a fit to the trend in the global temperature data?

    No. Much of the confusion on this point stems from the misunderstanding addressed above. Model development does not actually use the trend data in tuning (see below). Instead, modellers work to improve the climatology of the model (the fit to the average conditions) and its intrinsic variability (such as the frequency and amplitude of tropical variability). The resulting model is pretty much used ‘as is’ in hindcast experiments for the 20th Century.

  • Why are there ‘wiggles’ in the output?

    GCMs perform calculations with timesteps of about 20 to 30 minutes so that they can capture the daily cycle and the progression of weather systems. As with weather forecasting models, the weather in a climate model is chaotic. Starting from a very similar (but not identical) state, a different simulation will ensue – with different weather, different storms, different wind patterns – i.e. different wiggles. In control simulations, there are wiggles at almost all timescales – daily, monthly, yearly, decadally and longer – and modellers need to test very carefully how much of any change that happens after a change in forcing is really associated with that forcing, and how much might simply be due to the internal wiggles.
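    The distinction between "model weather" and "model climate" can be demonstrated with the classic Lorenz (1963) system, a 3-variable chaotic toy rather than a GCM: two runs from almost identical initial states soon show completely different wiggles, yet their long-term statistics agree closely.

```python
# Toy illustration of "model weather" vs "model climate" using the classic
# Lorenz (1963) system (a 3-variable chaotic toy, not a GCM): two runs from
# almost identical initial states soon show completely different wiggles,
# yet their long-term statistics agree closely.
def lorenz_x_series(x, y, z, nsteps, dt=0.005):
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    xs = []
    for _ in range(nsteps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        xs.append(x)
    return xs

N = 100000
run_a = lorenz_x_series(1.0, 1.0, 1.0, N)
run_b = lorenz_x_series(1.0 + 1e-8, 1.0, 1.0, N)  # tiny initial perturbation

# "Weather": instantaneous states end up completely decorrelated.
tail = range(N - 10000, N)
rms_diff = (sum((run_a[i] - run_b[i]) ** 2 for i in tail) / 10000) ** 0.5

# "Climate": a long-term statistic (here the mean of x^2) is nearly the same.
msq_a = sum(v * v for v in run_a[10000:]) / (N - 10000)
msq_b = sum(v * v for v in run_b[10000:]) / (N - 10000)
print(round(rms_diff, 1), round(msq_a, 1), round(msq_b, 1))
```

This is the same logic as an initial condition ensemble: the individual paths differ, but the statistics are robust.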

  • What is robust in a climate projection and how can I tell?

    Since not every wiggle is necessarily significant, modellers need to assess how robust particular model results are. They do this by checking whether the same result is seen in other simulations and with other models, whether it makes physical sense, and whether there is some evidence of similar behaviour in the observational or paleo record. If a result is seen in multiple models and multiple simulations, it is likely to be a robust consequence of the underlying assumptions – in other words, it probably isn’t due to any of the relatively arbitrary choices that mark the differences between different models. If the magnitude of the effect makes theoretical sense independent of these kinds of models, that adds to its credibility, and if the effect matches what is seen in observations, that adds more. Robust results are therefore those that quantitatively match in all three domains. Examples are the warming of the planet as a function of increasing greenhouse gases, or the change in water vapour with temperature. All models show basically the same behaviour, in line with basic theory and observations. Examples of non-robust results are the changes in El Niño as a result of climate forcings, or the impact on hurricanes. In both of these cases, models produce very disparate results, the theory is not yet fully developed and observations are ambiguous.

  • How have models changed over the years?

    Initially (ca. 1975), GCMs were based purely on atmospheric processes – the winds, radiation, and simplified clouds. By the mid-1980s, there were simple treatments of the upper ocean and sea ice, and cloud parameterisations started to become slightly more sophisticated. In the 1990s, fully coupled ocean-atmosphere models started to become available. This is when the first Coupled Model Intercomparison Project (CMIP) was started. This has subsequently seen two further iterations, the latest (CMIP3) being the database used in support of much of the model work in the IPCC AR4. Over that time, model simulations have become demonstrably more realistic (Reichler and Kim, 2008) as resolution has increased and parameterisations have become more sophisticated. Nowadays, models also include dynamic sea ice, aerosols and atmospheric chemistry modules. Issues like excessive ‘climate drift’ (the tendency for a coupled model to move away from a state resembling the actual climate), which were problematic in the early days, are now much reduced.

  • What is tuning?

    We are still a long way from being able to simulate the climate with a true first-principles calculation. While many basic aspects of physics can be included (conservation of mass, energy etc.), many need to be approximated for reasons of efficiency or resolution (e.g. the equations of motion need estimates of sub-gridscale turbulent effects, and radiative transfer codes approximate the line-by-line calculations using band averaging), and still others are only known empirically (the formula for how fast clouds turn to rain, for instance). With these approximations and empirical formulae, there is often a tunable parameter or two that can be varied in order to improve the match to whatever observations exist. Adjusting these values is described as tuning, and it falls into two categories. First, there is the tuning of a single formula in order for that formula to best match the observed values of that specific relationship. This happens most frequently when new parameterisations are being developed.

    Secondly, there are tuning parameters that control aspects of the emergent system. Gravity wave drag parameters are not very constrained by data, and so are often tuned to improve the climatology of stratospheric zonal winds. The threshold relative humidity for making clouds is often tuned to get the most realistic cloud cover and global albedo. Surprisingly, there are very few of these (maybe half a dozen) that are used in adjusting the models to match the data. It is important to note that these exercises are done with the mean climate (including the seasonal cycle and some internal variability) – and once set, they are kept fixed for any perturbation experiment.
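    The second category can be sketched schematically (all numbers invented, and the relationship below is made up for illustration): pick the value of one free parameter, a cloud relative-humidity threshold, that minimises the mismatch between a toy model diagnostic and an observed target.

```python
# A schematic of tuning (all numbers invented): choose the value of one free
# parameter, a cloud relative-humidity threshold, that minimises the mismatch
# between a toy model diagnostic and an observed target. Real GCM tuning
# adjusts a handful of such parameters against the mean climate.
OBSERVED_ALBEDO = 0.30  # hypothetical target

def toy_model_albedo(rh_threshold):
    # Made-up monotonic relationship: a lower threshold means more cloud,
    # and more cloud means a brighter planet.
    cloud_fraction = max(0.0, min(1.0, 1.5 * (1.0 - rh_threshold)))
    return 0.15 + 0.35 * cloud_fraction

# Scan the plausible range and keep the best-fitting value.
candidates = [0.5 + 0.01 * i for i in range(50)]  # thresholds 0.50 .. 0.99
best = min(candidates, key=lambda rh: abs(toy_model_albedo(rh) - OBSERVED_ALBEDO))
print(round(best, 2), round(toy_model_albedo(best), 3))
```

Once a value like this is chosen against the mean climate, it is held fixed for perturbation experiments, as the answer above notes.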

  • How are models evaluated?

    The amount of data available for model evaluation is vast, but it falls into a few clear categories. First, there is the climatological average (maybe for each month or season) of key observed fields like temperature, rainfall, winds and clouds. This is the zeroth-order comparison to see whether the model is getting the basics reasonably correct. Next comes the variability in these basic fields – does the model have a realistic North Atlantic Oscillation, or ENSO, or MJO? These are harder to match (and indeed many models do not yet have realistic El Niños). More subtle are comparisons of relationships in the model and in the real world. This is useful for short data records (such as those retrieved by satellite) where there is a lot of weather noise one wouldn’t expect the model to capture. In those cases, looking at the relationship between temperatures and humidity, or cloudiness and aerosols, can give insight into whether the model processes are realistic or not.

    Then there are the tests of climate changes themselves: how does a model respond to the addition of aerosols in the stratosphere such as was seen in the Mt Pinatubo ‘natural experiment’? How does it respond over the whole of the 20th Century, or at the Maunder Minimum, or the mid-Holocene or the Last Glacial Maximum? In each case, there is usually sufficient data available to evaluate how well the model is doing.

  • Are the models complete? That is, do they contain all the processes we know about?

    No. While models contain a lot of physics, they don’t contain many small-scale processes that more specialised groups (of atmospheric chemists, or coastal oceanographers for instance) might worry about a lot. Mostly this is a question of scale (model grid boxes are too large for the details to be resolved), but sometimes it’s a matter of being uncertain how to include it (for instance, the impact of ocean eddies on tracers).

    Additionally, many important bio-physical-chemical cycles (for the carbon fluxes, aerosols, ozone) are only just starting to be incorporated. Ice sheet and vegetation components are very much still under development.

  • Do models have global warming built in?

    No. If left to run on their own, the models will oscillate around a long-term mean that is the same regardless of what the initial conditions were. Given different drivers, volcanoes or CO2 say, they will warm or cool as a function of the basic physics of aerosols or the greenhouse effect.

  • How do I write a paper that proves that models are wrong?

    Much more easily than you might think since, of course, all models are indeed wrong (though some are useful – George Box). Showing a mismatch between the models and the observational data is made much easier if you recall the signal-to-noise issue mentioned above. As you go to smaller spatial and shorter temporal scales, the amount of internal variability increases markedly, and so the number of diagnostics that will differ from the expected model values will increase (in both directions, of course). So pick a variable, restrict your analysis to a small part of the planet, calculate some statistic over a short period of time, and you’re done. If the models match through some fluke, make the space smaller and use a shorter time period, and eventually they won’t. Even if models get much better than they are now, this will always work – call it the RealClimate theory of persistence. Now, appropriate statistics can be used to see whether these mismatches are significant and not just the result of chance or cherry-picking, but a surprising number of papers don’t bother to check such things correctly. Getting people outside the, shall we say, more ‘excitable’ parts of the blogosphere to pay any attention is, unfortunately, a lot harder.
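    The signal-to-noise point is just the statistics of averaging, which a quick simulation (synthetic noise, not climate data) makes concrete: the same noisy field averaged over fewer points strays further from the underlying value.

```python
import random

random.seed(1)

# Illustration of the signal-to-noise point (synthetic noise, not climate
# data): the same noisy field averaged over fewer points (a smaller region
# or a shorter period) strays further from the underlying value, so apparent
# "mismatches" are easy to find at small scales.
TRUE = 0.0

def mean_abs_error(sample_size, trials=2000):
    errs = []
    for _ in range(trials):
        m = sum(random.gauss(TRUE, 1.0) for _ in range(sample_size)) / sample_size
        errs.append(abs(m - TRUE))
    return sum(errs) / trials

err_small = mean_abs_error(10)    # small region, short period
err_large = mean_abs_error(1000)  # large region, long period
print(round(err_small, 3), round(err_large, 3))
```

Shrink the averaging domain by a factor of 100 and the typical deviation grows roughly tenfold, so some "failed" diagnostic can always be found at a small enough scale.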

  • Can GCMs predict the temperature and precipitation for my home?

    No. There are often large variations in the temperature and precipitation statistics over short distances because the local climatic characteristics are affected by the local geography. GCMs are designed to describe the most important large-scale features of the climate, such as the energy flow, the circulation, and the temperature in a grid-box volume (through the physical laws of thermodynamics, dynamics, and the ideal gas law). A typical grid box may have a horizontal area of ~100×100 km, but the size has tended to shrink over the years as computers have increased in speed. The shape of the landscape (the details of mountains, coastlines etc.) used in the models reflects this spatial resolution, so the model will not have sufficient detail to describe local climate variation associated with local geographical features (e.g. mountains, valleys, lakes, etc.). However, it is possible to use a GCM to derive some information about the local climate through downscaling, since the local climate is affected both by the local geography (a more or less fixed constant) and by the large-scale atmospheric conditions. The results derived through downscaling can then be compared with local climate variables, and can be used for further (and more stringent) assessments of the combined model-downscaling technique. This is, however, still an experimental technique.
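    The simplest flavour of empirical-statistical downscaling can be sketched with invented numbers (not a real method or dataset): assume the local temperature follows the large-scale value plus a fixed geographic offset, train a regression on observations, then apply it to a GCM grid-box value.

```python
import random

random.seed(42)

# A toy sketch of empirical-statistical downscaling (invented numbers, not a
# real method or dataset): local temperature is assumed to follow the
# large-scale value plus a fixed geographic offset; a regression trained on
# observations recovers that relationship and can then be applied to a GCM
# grid-box value.
LOCAL_OFFSET = -3.5  # hypothetical high-altitude station offset, deg C

large_scale = [random.gauss(15.0, 5.0) for _ in range(200)]
local_obs = [t + LOCAL_OFFSET + random.gauss(0, 0.5) for t in large_scale]

# Ordinary least squares for slope and intercept.
n = len(large_scale)
mx = sum(large_scale) / n
my = sum(local_obs) / n
slope = sum((x - mx) * (y - my) for x, y in zip(large_scale, local_obs)) / \
        sum((x - mx) ** 2 for x in large_scale)
intercept = my - slope * mx

# "Downscale" a hypothetical GCM grid-box temperature to the station.
gcm_value = 18.0
local_estimate = slope * gcm_value + intercept
print(round(slope, 2), round(local_estimate, 1))
```

Real downscaling methods are considerably more elaborate, but the principle is the same: the large-scale state carries the predictable signal, and the local geography supplies a learnable correction.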

  • Can I use a climate model myself?

    Yes! There is a project called EdGCM which has a nice interface and works with Windows and lets you try out a large number of tests. ClimatePrediction.Net has a climate model that runs as a screensaver in a coordinated set of simulations. GISS ModelE is available as a download for Unix-based machines and can be run on a normal desktop. NCAR CCSM is the US community model and is well-documented and freely available.


464 Responses to “FAQ on climate models”

  1. 451
    jcbmack says:

    I just want to add that Piaget is considered the progenitor of constructivism and he was biologically oriented; however, the resurgence of his ideas in light of modern brain scans and cognitive psychology really changed the paradigm in the 1970s and 1980s.

  2. 452
    Alexander Harvey says:

    Dear All,

    Re: my 443,

    I can now answer part of my own question:

    The NCDC have provided us with the absolute temperatures underlying their 1901-2000 baseline at:

    http://www.ncdc.noaa.gov/oa/climate/research/anomalies/anomalies.html#means

    One can clearly see one of the reasons that we commonly use anomalies.

    Whereas the global mean is 13.9°C, the seasonal variation is ±1.9°C (July/January), clearly showing the asymmetry between the Northern and Southern hemispheres, I think.

    Given those figures, can anyone tell me the following:

    How well do any of the models match these figures?

    and

    How does one get access to CMIP3 data? (When I try to access the registration page:

    https://esg.llnl.gov:8443/about/registration.do

    I get an error 403 (Forbidden).)

    Any help will be very gratefully received.

    Best Wishes

    Alexander Harvey

  3. 453
    Alexander Harvey says:

    Dear All,

    Re: my 452,

    I have found this page:

    http://www-pcmdi.llnl.gov/projects/cmip/overview_ms/ms_text.php

    Which in Section 2. Present Day Climate: Subsection a: Global and annual means

    gives a brief statement regarding CMIP2 control runs:

    I quote:

    “Taking into consideration all of the observational uncertainties, it appears that the actual value of surface air temperature was between 13.5°C and 14.0°C during the second half of the 20th Century and roughly 0.5°C less in the late 19th Century. It therefore seems that several of the models (which simulate values from less than 12°C to over 16°C) are in significant disagreement with the observations of this fundamental quantity.”

    It should be noted that the models assume somewhat differing levels for CO2 and the solar constant.

    In all, I am not surprised by the spread of around ±2.5°C; one would have to get a lot of things right to get this close.

    There is much more to read on their web page and many figures worth having a look at.

    Best Wishes

    Alexander Harvey

  4. 454
    Richard C says:

    The EdGCM files are no longer available for download. Are there any other Windows models available? Are there any open source model projects underway?

    [Response: Oh dear. I'll look into this and see what the story is. - gavin]

  5. 455
    Uli says:

    Re#428:
    Hank Roberts,
    sorry for the late answer. I have searched some journals, and used Google and Google Scholar, but maybe I do not have the right keywords.
    The best I have found is
    Myhre et al., “Infrared absorption cross section, radiative forcing, and GWP of four hydrofluoro(poly)ethers”
    doi:10.1016/S1352-2310(99)00208-3
    I cite: “In both models the calculations are performed with 0.1 ppbv of the gases to ensure that the weak limit approximation is valid,…”
    An upper limit from the absorption cross sections in the figures is about 10 to 20 ppbv, depending on the gases. But this may be too high if the resolution is too low.
    So maybe a linear increase in radiative forcing does not overestimate it too much.

    The N2 and O2 dissolved in the ocean amount to only 1 or 2 percent of the amounts in the atmosphere. CO2 is the only main gas of the atmosphere which has a higher solubility.
    In #441 you cited Berner’s ‘GEOCARBSULF’ as a source for possible large changes in oxygen, and hence total pressure; I asked about the consequences for the total radiation budget.
    I do not expect that amateur readers can (easily) answer these questions. But I hoped that a few of the scientists working in this field would read this and maybe know a relevant paper or other answer.

  6. 456
    Hank Roberts says:

    Thanks Uli. I can’t help further as an amateur, but you’re right, perhaps as this thread is reviewed for material for use in a FAQ someone knowledgeable can answer your question.

    Also for possible FAQ use, this might be an area of change worth noting in a discussion of ocean circulation:
    http://www.agu.org/pubs/crossref/2008/2008GL036118.shtml

  7. 457
    Chris says:

    Apologies if this is covered elsewhere on this excellent site, but one of the points made by skeptics concerns the warming footprint or greenhouse signature (I think in the tropics, 10 km up?) which the IPCC models predict. The argument is that it has not been found, and therefore either the model is wrong or CO2 is not causing the warming. Any links/advice appreciated.

    [Response: Part i,Part ii, Part iii. - gavin]

  8. 458
    Ian says:

    Your description at the start of this thread of how climate models use observed data is along the lines (forgive the oversimplification) of:
    i) build the best mechanistic model we can
    ii) start from some observed initial point
    iii) ‘forecast’ the results from the initial point to now
    iv) assess the fit of the predictions to the observations

    Presumably poor fits motivate you to change model components – indeed it seems likely that one trigger for publication is that you have a substantially better fit than the previously published model.

    Question – how do you prevent over-fitting to the observed data with this algorithm? Indeed, don’t you almost guarantee it, since you likely don’t publish models with a poorer fit?

    [Response: You separate the observations used to develop the model from those used to evaluate it. Specifically, we use the climatology of temperature, water vapour etc. to test the algorithms and their response to climate change (such as Pinatubo, or the 20th C) to evaluate the model. We don't tweak the algorithms to get a better sensitivity and it would be next to impossible to do in any case. We do add in effects that are demonstrated to be important - such as more complex oceans, more detail in the stratosphere etc. but only things that have an a priori reason to be there. We don't tune individual parameterisations to get better trends over the 20th C. - gavin]

  9. 459
    Ian says:

    Thanks

    To be clear, I wasn’t assuming that you were ‘tuning to fit’ current data.

    But I’m not completely sure you wouldn’t get some over-fitting even without any parameter tweaking/tuning – the public record might act essentially as a GA. I guess you end up relying on the constraints of doing the physics accurately, so it’s probably not as bad as the awful problems we sometimes have with complex non-mechanistic models.

  10. 460
    Antunes says:

    # 42 In response to Craig Allen

    “Can you provide an example of any economic model that has been anywhere near as successful?”

    Actually I can. Just as in climatology, economic models have to explain known facts before they are widely accepted. The Solow (1956) model, for example, attempts to mimic some long-term stylized facts of advanced economies, and it is indeed able to replicate them. For instance, the Solow model conforms to the following empirical facts: 1) per capita output growth is approximately constant over long periods of time; 2) the share of income that comes out of labor is approximately constant over time; 3) the capital to output ratio is approximately constant over time; 4) the real hourly wage rate grows approximately at the same rate of output per capita; 5) the capital rental rate is approximately constant over time.

    More recent models are able to replicate these facts but also replicate other, more complex empirical facts. The basic Real Business Cycle model, for example, explains roughly 70% of economic fluctuations of the post-war US economy using only technological shocks, that is, labor-augmenting productivity shocks. Other models explain the long-run rate of growth of economies, the labor market behavior of agents, etc.

    These models have been used to study extreme episodes or interesting topics like the Great Depression, the Industrial Revolution, and the size of the informal economy.

    Of course, numerous other models have been proposed that failed to mimic reality on several accounts and have therefore been abandoned. But some basic models and their extensions have stood the test of time and are widely used and trusted.

    I can give you detailed bibliographic references on all these claims but I believe that is unnecessary here.

    In the 70s, Robert Lucas, a brilliant economist and Nobel prize winner, warned economists about the risks of using historic statistical correlations alone to predict or simulate economic systems. Doing so assumes that expectations of agents are fixed. But since they can change due to policy changes, the use of the previous estimations, based solely on correlations, must be abandoned if we are to study the impact of policies. In order to unearth the true economic parameters that govern the agents’ actions, one needs a sound theoretical foundation, and then estimate those parameters conditional on the model that we use.

    Now I think this criticism applies to climatology, so you guys have to be careful. In your case, assuming that human activity changed the climate, past correlations should be invalid or at least incomplete. So using them in your models might be inaccurate or wrong. And the fact is that, like economic models, a GCM may conform very well to past experiences but be terribly wrong in predictions or simulations of policy changes.

  11. 461
    Ray Ladbury says:

    Antunes says: “In your case, assuming that human activity changed the climate, past correlations should be invalid or at least incomplete.”

    That is why global climate models are not statistical models, but rather dynamical ones. Past experience is important only in terms of identifying forcings, etc., and validating the models. Uniformitarianism is a well-established principle in the Earth sciences – the basic physics doesn’t change.

  12. 462
    David B. Benson says:

    Antunes (460) — The early anthropocene hypothesis has been well studied by W.F. Ruddiman and others. Ruddiman has written the popular “Plows, Plagues and Petroleum” and papers on his web site are readily accessible. In addition, he has a guest thread here on RealClimate.

    While others have recently aided in the verification of the central theme of early AGW, Ruddiman’s popular book is a good place to start. While reading this, note that no actors changed their motivations, AFAIK.

  13. 463
    Uli says:

    Re:Antunes(#460),

    this sounds interesting. I have a few questions.
    Are these models, such as the Solow (1956) model or the basic Real Business Cycle model, also applicable to economies other than the US?

    And what about the long-term behaviour of these models?
    Can, for example, the economy of the UK from the year 1000 to 2000 be successfully simulated, provided someone makes some useful assumptions about the unknown data of the past?

    “In your case, assuming that human activity changed the climate, past correlations should be invalid or at least incomplete. So using them in your models might be inaccurate or wrong.”
    Yes, of course. Therefore physically based (climate) models like GCMs do not use past correlations at all.
    Physical models are intended to make predictions of what happens if some parameters change.

    Do some economic models use past correlations (of time series)?

  14. 464
    Pyper says:

    As a fairly well educated non-scientist, the comment in #21 about satellite measurements resonates — if there’s more heat coming in than going out it must be staying here. Pretty convincing argument that eliminates all the FUD.

    Is there good science behind the satellite observation concept?

    Another (cross-discipline) question regarding climate: the Viking expansion ca. 1000 AD is typically explained as the result of a warming trend that fueled population growth in northern areas. Do climate models explain this?

    Regarding economic models – they actually are based on physics. Unfortunately, they are based on physics as it stood before the discovery of the 2nd law of thermodynamics, which is part of the reason why they don’t work. The book “The Origin of Wealth” does a really good job of explaining it in detail, but another thing they don’t take into account is that economic outcomes are based on individual decisions made by vast numbers of people. Economic systems are not random, but neither can they be predicted with any accuracy very far into the future. Pretty much like the weather.

    Suggestion for the FAQ: In the definitions section you may wish to define “climate.”

