### FAQ on climate models: Part II

Filed under: — gavin @ 6 January 2009

This is a continuation of a previous post including interesting questions from the comments.

### More Questions

• What are parameterisations?

Some physics in the real world that is necessary for a climate model to work is only known empirically. Or perhaps the theory only really applies at scales much smaller than the model grid size. This physics needs to be ‘parameterised’, i.e. a formulation is used that captures the phenomenology of the process and its sensitivity to change but without going into all of the very small scale details. These parameterisations are approximations to the phenomena that we wish to model, but which work at the scales the models actually resolve. A simple example is the radiation code – instead of using a line-by-line code which would resolve the absorption at over 10,000 individual wavelengths, a GCM generally uses a broad-band approximation (with 30 to 50 bands) which gives very close to the same results as a full calculation. Another example is the formula for the evaporation from the ocean as a function of the large-scale humidity, temperature and wind-speed. This is really a highly turbulent phenomenon, but there are good approximations that give the net evaporation as a function of the large scale (‘bulk’) conditions. In some parameterisations, the functional form is reasonably well known, but the values of specific coefficients might not be. In these cases, the parameterisations are ‘tuned’ to reproduce the observed processes as much as possible.
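
The bulk evaporation formula mentioned above can be sketched in a few lines. This is a minimal illustration, not any particular model's code: the exchange coefficient value and the Magnus-type saturation formula are typical textbook choices, not quantities taken from this post.

```python
import math

def bulk_evaporation(wind_speed, sea_surface_temp, air_specific_humidity,
                     air_density=1.2, exchange_coeff=1.3e-3):
    """Net evaporation (kg/m^2/s) from large-scale ('bulk') conditions:

        E = rho * C_E * U * (q_sat(SST) - q_air)

    rather than resolving the underlying turbulence explicitly.
    """
    # Saturation vapour pressure (Pa) over water, Magnus-type approximation
    t_c = sea_surface_temp - 273.15
    e_sat = 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))
    # Saturation specific humidity at the surface (assuming p = 101325 Pa)
    q_sat = 0.622 * e_sat / (101325.0 - 0.378 * e_sat)
    return air_density * exchange_coeff * wind_speed * (q_sat - air_specific_humidity)

# Example: an 8 m/s wind over 290 K water under fairly dry air
e = bulk_evaporation(8.0, 290.0, 0.008)
```

The point is that only grid-scale quantities (wind, temperature, humidity) enter the formula; all the small-scale turbulence is bundled into the single exchange coefficient, which is exactly the kind of number that gets tuned.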

• How are the parameterisations evaluated?

In at least two ways: at the process scale, and at the emergent phenomena scale. For instance, taking one of the two examples mentioned above, the radiation code can be tested against field measurements at specific times and places where the composition of the atmosphere is known, alongside a line-by-line code. It would need to capture the variations seen over time (the daily cycle, weather, cloudiness etc.). This is a test at the level of the actual process being parameterised and is a necessary component in all parameterisations. The more important tests occur when we examine how the parameterisation impacts larger-scale or emergent phenomena. Does changing the evaporation improve the patterns of precipitation? The match of the specific humidity field to observations? etc. This can be an exhaustive set of tests, but again most are necessary. Note that most ‘tunings’ are done at the process level. Only those that can’t be constrained using direct observations of the phenomena are available for tuning to get better large scale climate features. As mentioned in the previous post, there are only a handful of such parameters that get used in practice.
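
Conceptually, process-level ‘tuning’ of a single uncertain coefficient amounts to fitting it against field measurements. The sketch below is schematic, not a real model interface, and the ‘observations’ are invented numbers generated from an assumed true coefficient of 2.0.

```python
def tune_coefficient(observed, predictor):
    """Least-squares estimate of c in the relation: flux ~ c * predictor.

    A stand-in for constraining one uncertain parameterisation
    coefficient against process-scale field measurements.
    """
    numerator = sum(o * p for o, p in zip(observed, predictor))
    denominator = sum(p * p for p in predictor)
    return numerator / denominator

# Invented 'field data': generated from c = 2.0 plus a little noise
predictor = [1.0, 2.0, 3.0, 4.0]
observed = [2.1, 3.9, 6.2, 7.8]
c = tune_coefficient(observed, predictor)
```

Once a coefficient is pinned down this way by direct observations, it is no longer free for tuning at the large-scale climate level, which is why only a handful of parameters remain available there.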

• Are clouds included in models? How are they parameterised?

Models do indeed include clouds, and do allow changes in clouds as a response to forcings. There are certainly questions about how realistic those clouds are and whether they have the right sensitivity – but all models do have them! In general, models suggest that they are a positive feedback – i.e. there is a relative increase in high clouds (which warm more than they cool) compared to low clouds (which cool more than they warm) – but this is quite variable among models and not very well constrained from data.

Cloud parameterisations are amongst the most complex in the models. The large differences in mechanisms for cloud formation (tropical convection, mid-latitude storms, marine stratus decks) require multiple cases to be looked at and many sensitivities to be explored (to vertical motion, humidity, stratification etc.). Clouds also have important micro-physics that determine their properties (such as cloud particle size and phase) and interact strongly with aerosols. Standard GCMs have most of this physics included, and some are even going so far as to embed cloud resolving models in each grid box. These models are supposed to do away with much of the parameterisation (though they too need some, smaller-scale, ones), but at the cost of greatly increased complexity and computation time. Something like this is probably the way of the future.

• What is being done to address the considerable uncertainty associated with cloud and aerosol forcings?

As alluded to above, cloud parameterisations are becoming much more detailed and are being matched to an ever larger amount of observations. However, there are still problems in getting sufficient data to constrain the models. For instance, it’s only recently that separate diagnostics for cloud liquid water and cloud ice have become available. We still aren’t able to distinguish different kinds of aerosols from satellites (though maybe by this time next year).

However, none of this is to say that clouds are a done deal; they certainly aren’t. In both cloud and aerosol modelling the current approach is to get as wide a spectrum of approaches as possible and to discern what is and what is not robust among those results. Hopefully soon we will start converging on the approaches that are the most realistic, but we are not there yet.

Forcings over time are a slightly different issue, and there it is likely that substantial uncertainties will remain because of the difficulty in reconstructing the true emission data for periods more than a few decades back. That involves making pretty unconstrained estimates of the efficiency of 1930s technology (for instance) and 19th Century deforestation rates. Educated guesses are possible, but independent constraints (such as particulates in ice cores) are partial at best.

• Do models assume a constant relative humidity?

No. Relative humidity is a diagnostic of the models’ temperature and water distribution and will vary according to the dynamics, convection etc. However, many processes that remove water from the atmosphere (i.e. cloud formation and rainfall) have a clear functional dependence on the relative humidity rather than the total amount of water (i.e. clouds form when air parcels are saturated at their local temperature, not when humidity reaches X g/m3). This leads to the phenomenon, observed in both the models and the real world, that long-term mean relative humidity is pretty stable. In models it varies by a couple of percent over temperature changes that lead to specific humidity (the total amount of water) changing by much larger amounts. Thus a good estimate of the model relative humidity response is that it is roughly constant, similar to the situation seen in observations. But this is a derived result, not an assumption. You can see for yourself here (select Relative Humidity (%) from the diagnostics).
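
The distinction between roughly constant relative humidity and a strongly increasing specific humidity follows directly from the Clausius-Clapeyron behaviour of saturation humidity. A quick back-of-the-envelope check (using a standard Magnus-type approximation, not any model output):

```python
import math

def saturation_specific_humidity(temp_k, pressure_pa=101325.0):
    """Approximate saturation specific humidity (kg/kg) at temperature temp_k."""
    t_c = temp_k - 273.15
    # Magnus-type formula for saturation vapour pressure (Pa) over water
    e_sat = 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))
    return 0.622 * e_sat / (pressure_pa - 0.378 * e_sat)

# Hold relative humidity fixed at 80% and warm the air by 1 K:
rh = 0.8
q1 = rh * saturation_specific_humidity(288.0)
q2 = rh * saturation_specific_humidity(289.0)
fractional_increase = (q2 - q1) / q1  # roughly 6-7% more water per degree
```

So even if relative humidity stays put, the total amount of water vapour rises by several percent per degree of warming, which is why a "roughly constant RH" diagnosis implies a large change in specific humidity.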

• What are boundary conditions?

These are the basic data input into the models that define the land/ocean mask, the height of the mountains, river routing and the orbit of the Earth. For standard models additional inputs are the distribution of vegetation types and their properties, soil properties, and mountain glacier, lake, and wetland distributions. In more sophisticated models some of what were boundary conditions in simpler models have now become prognostic variables. For instance, dynamic vegetation models predict the vegetation types as a function of climate. Other examples in a simple atmospheric model might be the distribution of ozone or the level of carbon dioxide. In more complex models that calculate atmospheric chemistry or the carbon cycle, the boundary conditions would instead be the emissions of ozone precursors or anthropogenic CO2. Variations in these boundary conditions (for whatever reason) will change the climate simulation and can be considered forcings in the most general sense (see the next few questions).

• Does the climate change if the boundary conditions are stable?

The answer to this question depends very much on perspective. On the longest timescales a climate model with constant boundary conditions is stable – that is, the mean properties and their statistical distribution don’t vary. However, the spectrum of variability can be wide, and so there are variations from one decade to the next, and from one century to the next, that are the result of internal variations in (for instance) the ocean circulation. While the long term stability is easy to demonstrate in climate models, it can’t be unambiguously determined whether this is true in the real world, since boundary conditions are always changing (albeit slowly most of the time).

• Does the climate change if boundary conditions change?

Yes. If any of the factors that influence the simulation change, there will be a response in the climate. It might be large or small, but it will always be detectable if you run the model for long enough. For example, making the Rockies smaller (as they were a few million years ago) changes the planetary wave patterns and the temperature patterns downstream. Changing the ozone distribution changes temperatures, the height of the tropopause and stratospheric winds. Changing the land-ocean mask (because of sea level rise or tectonic changes for instance) changes ocean circulation, patterns of atmospheric convection and heat transports.

• What is a forcing then?

The most straightforward definition is simply that a forcing is a change in any of the boundary conditions. Note however that this definition is not absolute with respect to any particular bit of physics. Take ozone for instance. In a standard atmospheric model, the ozone distribution is fixed and any change in that fixed distribution (because of stratospheric ozone depletion, tropospheric pollution, or changes over a solar cycle) would be a forcing causing the climate to change. In a model that calculates atmospheric chemistry, the ozone distribution is a function of the emissions of chemical precursors, the solar UV input and the climate itself. In such a model, ozone changes are a response (possibly leading to a feedback) to other imposed changes. Thus it doesn’t make sense to ask whether ozone changes are or aren’t a forcing without discussing what kind of model you are talking about.

There is however a default model setup in which many forcings are considered. This is not always stated explicitly and leads to (somewhat semantic) confusion even among specialists. This setup consists of an atmospheric model with a simple mixed-layer ocean model, but that doesn’t include chemistry, aerosol, vegetation or dynamic ice sheet modules. Not coincidentally, this corresponds to the state-of-the-art of climate models around 1980, when the first comparisons of different forcings started to be done. It persists in the literature all the way through to the latest IPCC report (figure xx). However, there is a good reason for this, and that is the observation that different forcings that have equal ‘radiative’ impacts produce very similar responses. This allows many different forcings to be compared in magnitude and added up.

The ‘radiative forcing’ is calculated (roughly) as the net change in radiative fluxes (both short wave and long wave) at the top of the atmosphere when a component of the default model setup is changed. Increased solar irradiance is an easy radiative forcing to calculate, as is the value for well-mixed greenhouse gases. The direct effect of aerosols (the change in reflectance and absorption) is also easy (though uncertain due to the distributional uncertainty), while the indirect effect of aerosols on clouds is a little trickier. However, some forcings in the general sense defined above don’t have an easy-to-calculate ‘radiative forcing’ at all. What is the radiative impact of opening the Isthmus of Panama? Or the collapse of Lake Agassiz? Yet both of these examples have large impacts on the models’ climate. Some other forcings have a very small global radiative forcing and yet lead to large impacts (orbital changes for instance) through components of the climate that aren’t included in the default setup. This isn’t a problem for actually modelling the effects, but it does make comparing them to other forcings, without doing the calculations, a little more tricky.
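
For the well-mixed greenhouse gases, the IPCC literature gives simplified fits to the detailed radiative transfer calculations; the widely quoted expression for CO2 (a logarithmic fit, good enough for back-of-the-envelope comparisons but not a substitute for the full calculation) is:

```python
import math

def co2_radiative_forcing(concentration_ppm, reference_ppm=278.0):
    """Simplified fit for CO2 radiative forcing: F = 5.35 * ln(C/C0), in W/m^2.

    reference_ppm defaults to an approximate pre-industrial concentration.
    """
    return 5.35 * math.log(concentration_ppm / reference_ppm)

# Doubling CO2 relative to pre-industrial gives roughly 3.7 W/m^2
f_2x = co2_radiative_forcing(2 * 278.0)
```

The logarithmic form is why equal successive doublings of CO2 produce roughly equal increments of forcing, and why forcings of very different physical origin can be compared on one scale (when such a number exists at all).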

• What are the differences between climate models and weather models?

Conceptually they are very similar, but in practice they are used very differently. Weather models use as much data as is available to start off close to the current weather situation and then use their knowledge of physics to step forward in time. This has good skill for a few days and some skill for a little longer. Because they are run for short periods of time only, they tend to have much higher resolution and more detailed physics than climate models (but note that the Hadley Centre, for instance, uses the same model for climate and weather purposes). Weather models develop in ways that improve the short term predictions, though the impact for long term statistics or the climatology needs to be assessed independently. Curiously, the best weather models often have a much worse climatology than the best climate models. There are many current attempts to improve the short-term predictability in climate models in line with the best weather models, though it is unclear what impact that will have on projections.

• How are solar variations represented in the models?

This varies a lot because of uncertainties in the past record and complexities in the responses. But given a particular estimate of solar activity there are a number of modelled responses. First, the total amount of solar radiation (TSI) can be varied – this changes the total amount of energy coming into the system and is very easy to implement. Second, the variations over the solar cycle at different frequencies (from the UV to the near infra-red) don’t all have the same amplitude – UV changes are about 10 times as large as those in the total irradiance. Since UV is mostly absorbed by ozone in the stratosphere, including these changes increases the magnitude of the solar cycle variability in the stratosphere. Furthermore, the change in UV has an impact on the production of ozone itself (even down into the troposphere). This can be calculated with chemistry-climate models, and is increasingly being used in climate model scenarios (see here for instance).

There are also other hypothesised impacts of solar activity on climate, most notably the impact of galactic cosmic rays (which are modulated by the solar magnetic activity on solar cycle timescales) on atmospheric ionisation, which in turn has been linked to aerosol formation, and in turn linked to cloud amounts. Most of these links are based on untested theories and somewhat dubious correlations; however, as was recognised many years ago (Dickinson, 1975), it is a plausible idea. Implementing it in climate models is, however, a challenge. It requires models to have a full treatment of aerosol creation, growth, accretion and cloud nucleation. There are many other processes that affect aerosols, and GCR-related ionisation is only a small part of that. Additionally there is a huge amount of uncertainty in aerosol-cloud effects (the ‘aerosol indirect effect’). Preliminary work seems to indicate that the GCR-aerosol-cloud link is very small (i.e. the other effects dominate), but this is still in the early stages of research. Should this prove to be significant, climate models will likely incorporate it directly (using embedded aerosol codes), or will parameterise the effects based on calculated cloud variations from more detailed models. What models can’t do (except perhaps as a sensitivity study) is take purported global scale correlations and just ‘stick them in’ – cloud processes and effects are so tightly wound up in the model dynamics and radiation, and have so much spatial and temporal structure, that this couldn’t be done in a way that made physical sense. For instance, part of the observed correlation could be due to the other solar effects, so how could they be separated out? (And that’s even assuming that the correlations actually hold up over time, which doesn’t seem to be the case.)

• What do you mean when you say a model has “skill”?

‘Skill’ is a relative concept. A model is said to have skill if it gives more information than a naive heuristic. Thus for weather forecasts, a prediction is described as skillful if it works better than just assuming that each day is the same as the last (‘persistence’). It should be noted that ‘persistence’ itself is much more skillful than climatology (the historical average for that day) for about a week. For climate models, there is a much larger range of tests available and there isn’t necessarily an analogue for ‘persistence’ in all cases. For a simulation of a previous time period (say the mid-Holocene), skill is determined relative to a ‘no change from the present’ assumption. Thus if a model predicts a shift northwards of the tropical rain bands (as was observed), that would be skillful. This can be quantified, and different models can exhibit more or less skill with respect to that metric. For the 20th Century, models show skill for the long-term changes in global and continental-scale temperatures – but only if natural and anthropogenic forcings are used – compared to an expectation of no change. Standard climate models don’t show skill at the interannual timescales, which depend heavily on El Niño and other relatively unpredictable internal variations (note that initialised climate model projections that use historical ocean conditions may show some skill, but this is still a very experimental endeavour).
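
A common way to quantify this notion is a skill score of the form 1 − MSE(model)/MSE(reference), where the reference is the naive heuristic (persistence, climatology, or ‘no change’). The numbers below are invented purely to illustrate the arithmetic:

```python
def mse(predictions, observations):
    """Mean squared error of a set of predictions against observations."""
    return sum((p - o) ** 2 for p, o in zip(predictions, observations)) / len(observations)

def skill_score(model, reference, observations):
    """1 - MSE(model)/MSE(reference).

    Positive values mean the model beats the naive reference;
    zero means no better; negative means worse.
    """
    return 1.0 - mse(model, observations) / mse(reference, observations)

# Invented example: observed warming trend vs a 'no change' baseline
observed = [0.1, 0.2, 0.4, 0.5]
model = [0.15, 0.25, 0.35, 0.45]
no_change = [0.0, 0.0, 0.0, 0.0]
s = skill_score(model, no_change, observed)  # positive: model beats 'no change'
```

The same machinery applies whether the reference is ‘persistence’ for a weather forecast or ‘no change from the present’ for a mid-Holocene simulation; only the choice of baseline changes.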

• How much can we learn from paleoclimate?

Lots! The main issue is that for the modern instrumental period the changes in many aspects of climate have not been very large – either compared with what is projected for the 21st Century, or from what we see in the past climate record. Thus we can’t rely on the modern observations to properly assess the sensitivity of the climate to future changes. For instance, we don’t have any good observations of changes in the ocean’s thermohaline circulation over recent decades because a) the measurements are difficult, and b) there is a lot of noise. However, in periods in the past, say around 8,200 years ago, or during the last ice age, there is lots of evidence that this circulation was greatly reduced, possibly as a function of surface freshwater forcing from large lake collapses or from the ice sheets. If those forcings and the response can be quantified they provide good targets against which the models’ sensitivity can be tested. Periods that are of possibly the most interest for testing sensitivities associated with uncertainties in future projections are the mid-Holocene (for tropical rainfall, sea ice), the 8.2kyr event (for the ocean thermohaline circulation), the last two millennia (for decadal/multi-decadal variability), the last interglacial (for ice sheets/sea level) etc. There are plenty of other examples, and of course, there is a lot of intrinsic interest in paleoclimate that is not related to climate models at all!

As before, if there are additional questions you’d like answered, put them in the comments and we’ll collate the interesting ones for the next FAQ.

### 191 Responses to “FAQ on climate models: Part II”

1. 151
Steve Bloom says:

Re #145: Thanks for the far more informed reply and the references, Jim. All I would add is that there’s also a large oceanic aspect (involving both vegetation and animals; just in the last week interesting results came out finding that fish poop is an important component) and that the recent success in glacial cycle modeling gives us a degree of confidence that the basics are understood.

2. 152
3. 153
Hank Roberts says:

Here’s one, perhaps, for Ray Pierrehumbert’s attention:

http://www.centauri-dreams.org/?p=5653

“… according to recent work by Luc Arnold (Observatoire de Haute Provence) and team. If green vegetation on another planet is anything like what we have on Earth, then it will share a distinctive spectral signature called the Vegetation Red Edge, or VRE. The new paper creates climate simulations that explore whether planets with a distinctively different climate than modern Earth’s could be so detected.

Two earlier eras are useful here. …
This is fascinating stuff in its own right, as a look at the image … suggests, with its story of climate change on our planet. ….”

4. 154
Mark says:

wrt Gavin’s response to #148.

Aye, “Do unto others as they have done unto you.”

And inspiration (for want of a better word… :-) ) for some of my sarcasm is from Scott Adams. I forget which book, but there was one method he claimed helped decipher management speak into English: “which is most likely”.

Use sarcasm to counter. Because they aren’t looking for an education, they are just looking to find you wrong and they don’t have to sweat logical thinking to do it, so you’re at a disadvantage. So be sarcastic. If they are open to understanding, it may make them stop and think. If they aren’t then you may at least embarrass them into stopping.

Of course, it’s very VERY much nicer to not have to do that. More work though. But even if minds aren’t changed, at least the expanse of “what is known or thought” has been enlarged and maybe changed perspective, though not the decisions.

5. 155
6. 156

#150 “his blog comments clearly demonstrate that he is not interested in scientific discourse. He only supports discussing viewpoints that agree with his, ”

Having personally witnessed Gavin’s responses to a wide variety of statements, for years, and also Gavin’s very nature to allow and open the door to new ideas and suggestions without condemning them immediately, Roger has no grounds for these accusations. I personally have suggested new methods and new ideas which were never shot down, even when not yet judged by science peers and not necessarily in agreement with established science. Roger should remove emotions from his charges, and read RC a whole lot more.

7. 157
Steve Tirrell says:

Great responses to my question on vegetation. I have another, that I have never heard anyone address. We all know that the earth has been through many warming and cooling cycles. What causes the cycles to end and reverse themselves and for each direction? I would think the knowledge of how this works would be crucial in both modeling (because something big has to happen) and in possibly doing something about the problem.

8. 158
Hank Roberts says:

Steve,
> question I have never heard anyone address

Have you read the info under the “Start Here” button at the top of the page?

Someone else asked this in almost the same words a day or two ago.

It’s among the Frequently Answered Questions (“FAQs”).

Here’s one approach: Take your own question or a long chunk of it as I used below, exactly, and paste it into:

1) the Google search. You’ll get a gallimaufry, lots of ice age claims, lots of PR and political spin sites with nonsense about this.

You still get some woo-woo stuff (the first hit is a hoot);
you’ll also get a good helping of good scientific information.
Look at the “cited by” number and click that; see what other scientists found useful. Click “Recent” for newer papers. There is no “Wisdom” button for the search tool.

Short answer to your question is: it’s complicated, been figured out for geologically recent times, isn’t simple, has happened slowly.

The current rate of change — the last 200 years caused by people — is 10x or maybe 100x faster than anything since the last big asteroid impact.

9. 159
Steve Tirrell says:

Hank, I did read this entire thread before posting, and did not see the same question posted anywhere. However, I will admit that I skimmed over the sections where there were a lot of personal attacks occurring.

The Start Here section was helpful, but it only partially answered my question. In FAQs 6.1 & 6.2 there is reference to sections 6.3 and 6.4. Where are those sections? Maybe they will answer my questions.

I would like to know more about how the tectonics affect climate change and also what might cause the ocean currents to suddenly change.

10. 160
Hank Roberts says:

Steve, this thread is collecting new suggestions for the FAQ; if you’re not expecting answers right away, you’re in the right place.

Click the “Home” button. Look at the topics here (going back years).

> 6.3 and 6.4. Where are those sections?

You missed a link under “Start Here” — see where it says:

“The IPCC AR4 Frequently Asked Questions (here) is an excellent start.”

When you click that link, you’ll go to the index page for the IPCC FAQ

Google Scholar also has a good natural language query, try posting your question there as well, e.g.:

11. 161
Jim Bouldin says:

157 (Steve):

Try this for the history of ice age research:
http://www.aip.org/history/climate/cycles.htm

and section 6.4 of IPCC WGI ch 6:
http://www.ipcc.ch/ipccreports/ar4-syr.htm

12. 162

Steve Tirrell says, “I would like to know more about how the tectonics affect climate change and also what might cause the ocean currents to suddenly change.”

Given maximum plate speed is cm per year, very, very slowly. Sorry, Steve, couldn’t resist. ;-)

13. 163
David B. Benson says:

Steve Tirrell (157) — Plate tectonics changed the climate to the modern one by closing the Isthmus of Panama about 4 million years ago. That changed ocean circulations, but certainly slowly.

Rapid ocean current change is part of what is being explored on

http://www.realclimate.org/index.php/archives/2009/01/the-younger-dryas-comet-impact-hypothesis-gem-of-an-idea-or-fools-gold/langswitch_lang/in

and there are some papers referenced or linked. But I think what you want for that is a book explaining thermohaline circulation (THC); unfortunately, while I am sure there are such, I don’t know of one.

14. 164
Steve Tirrell says:

> 6.3 and 6.4. Where are those sections?

You missed a link under “Start Here” — see where it says:

“The IPCC AR4 Frequently Asked Questions (here) is an excellent start.”

When you click that link, you’ll go to the index page for the IPCC FAQ

Hank, I did go to FAQs, as I referenced. In FAQs 6.1 & 6.2 it refers to other FAQs with a link and a description like this: “See FAQ X.X.” Elsewhere, there is a reference to something else that goes like this: “See section X.X.” I believe from Jim’s post they are referring to the IPCC Fourth Assessment. It would be helpful if that was also in link form and actually mentioned that it was a section in the IPCC Fourth Assessment.

15. 165
Hank Roberts says:

OK, Steve, I see the problem you’re describing

I’d guess you know how to find your way around, and you’re saying this out of concern that someone else might become lost and confused.

We don’t want that to happen. So, just in case, I’ll spell it out as I understand the setup, for the next reader along who may need this:

“FAQ” is generic — there are “FAQ” pages all over the web.

— you started in the Start Here section at RC;
— you clicked on the link that took you into the IPCC FAQ section.

http://ipcc-wg1.ucar.edu/wg1/FAQ/wg1_faq-6.2.html

“more than a thousand years with decreasing spatial coverage for earlier periods (see Section 6.5)”

Question is — where is Section 6.5, and why don’t they link to it?

You’re inside a document hierarchy (nested subfolders).
Those are _all_ under the IPCC FAQ level.
The IPCC FAQs are under the broader IPCC folder level.
The actual IPCC Report (the “Sections”) are not in HTML, they’re PDFs.

Two ways to figure out where you are:

— Look at the URL in the navigation bar.
— Look at the top of the page; that page has “Table of Contents” and “Home” and the “Home” button takes you to the main page.

One general caution — linking into the middle of the IPCC pages often confuses people; they do change their links as documents are updated, rather than replacing old with new documents.

That way they leave a complete record of old material, BUT any link into the middle can become outdated.

For people writing FAQs and pointers, it’s usually safer to link to the Home page and describe what to look for.
——–

16. 166
Hank Roberts says:

Further — I came across either a Firefox bug or an IPCC web page bug: the navigation bar URL does not change when using the internal links from “www.ipcc-wg2.org”

The “Back” key is confused by this. So am I.

Thanks for leading me into confusion, Steven (grin). I reported it to FF and the IPCC webmaster (hoping it’s not just me)!

17. 167
Steve Tirrell says:

Hank, I have found the two sections. They are in the IPCC Working Group 1 report, which Jim refers to, but his link goes to the IPCC Fourth Assessment synthesis. I am in the middle of reading section 6 right now. It definitely covers in depth the areas of interest that I had.

Since the publications page of the IPCC breaks that book down by chapter, I would think it would be safe to at least link to the specific chapter. However, that is just my opinion. At a minimum though, the FAQ should state the title of the publication, not just a section number. Even those, such as yourself, couldn’t immediately tell me what publication it was.

18. 168
Jim Bouldin says:

Steve:

The link I provided goes to the overview page of the IPCC WG1 AR4. The IPCC has three working groups (WG1 through WG3). Each covers different topics (WG1 = the physical science basis, WG2 = impacts, WG3 = mitigation) and each produces an independent report at the same time, every 5-6 years. So each one of those groups produced an AR4 (Assessment Report #4, being the fourth such report put out since 1990). Thus, I sent you to the WG1 AR4 report main page. I would have given you the link directly to chapter 6 but I wasn’t able to pull it up at the moment I sent the message for some reason. I figured you’d get there.
Jim

Also Spencer Weart’s history is very readable–check it out.

19. 169
Hank Roberts says:

Just to be very clear

— if you’re reading a page that starts with
“… //www.ipcc …”
— that is not the RealClimate FAQ (for which this topic is set up)
— that’s an IPCC FAQ — different organization. Their pages, their abbreviations and cross-reference system. They do have a contact email address at the bottom of their pages.

Jumping into the middle of any large volume is always confusing.
Most big documents use abbreviations. Start from (or go to) the beginning of the IPCC document, and you’ll see it explains the abbreviations used. That’s another good reason to link to the main page, for people may become confused if they start in the middle.
____________________________

20. 170
Chris P says:

“FAQ Part I, Answer 9. Do models have global warming built in?
No. If left to run on their own, the models will oscillate around a long-term mean that is the same regardless of what the initial conditions were.”

“FAQ Part I, Answer 7. How are models evaluated?
How does it respond over the whole of the 20th Century, or at the Maunder Minimum, or the mid-Holocene or the Last Glacial Maximum? In each case, there is usually sufficient data available to evaluate how well the model is doing.”

Physics-based models, even if they do not solely rely on first-principles, have proven their worth in many other scientific and engineering fields. When evaluating the GCM as you described above, what forcing functions do you use to cause the long-term mean of the model to dive/rise into the Maunder Minimum, the mid-Holocene, or the Last Glacial Maximum (or the Medieval Climate Optimum for that matter)?

[Response: Whatever is appropriate – solar/volcanic, orbital, ice sheets/greenhouse gases/vegetation/dust etc. – gavin]

21. 171
jcbmack says:

Ray Ladbury, not always, sometimes tectonic plate shifts can have more dramatic and rapid influences upon oceans and climate, but I know what you were getting at:)

22. 172
Steve Tirrell says:

Now to get back to the real subject of this thread. There are a few disturbing things that I read. First let me give you a background of myself. I am an MS Chem E and have worked with both FEA analysis and plastics molding analysis, so computer modeling is not foreign to me. Now the troubling areas. These are passages from the IPCC WG1 AR4.

1. Palaeoclimatic observations indicate that abrupt decadal- to centennial-scale changes in the regional frequency of tropical cyclones, floods, decadal droughts and the intensity of the African-Asian summer monsoon very likely occurred during the past 10 kyr. However, the mechanisms behind these abrupt shifts are not well understood, nor have they been thoroughly investigated using current climate models.

2. As in the study noted above, climate models cannot produce a
response to increased CO2 with large high-latitude warming,
and yet minimal tropical temperature change, without strong
increases in ocean heat transport (Rind and Chandler, 1991).
(Not quoted, but the rest of the paragraph basically states that physical evidence indicates that the poles saw significant temperature change while the tropics saw very little, and that this is likely caused by ocean effects.)

I have always found computer models to be very helpful when designing a part or building a mold, but I would never forgo physical testing because of a great FEA result. That goes for highly studied, homogeneous materials in common designs. Now compare that with a climate model of a very non-homogeneous planet with infinitely more variables, where instead of interpolating known data we need to extrapolate effects that have never occurred before. That makes me feel uneasy.

There seem to be many references to ocean effects causing significant rapid changes, but there are also references to that being a weak area in the computer models. Since there is more water area on the planet than land area, that gets my vote for the area needing more study.

23. 173
Steve Tirrell says:

Before my post at 10:39 on Jan 25th, I posted another that went something like this.

Jim, you are correct that it was the fourth assessment for Working Group 1 that has the correct chapters in it. However, your link is to the fourth assessment synthesis. Since WG1 puts each of its chapters in a separate PDF, a link can be made directly to chapter 6. It is http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter6.pdf.

I have figured out why they do not reference the publication for their chapters. The FAQs were originally part of the WG1 report. They were then pulled out of the report and added to the Real Climate web site. Real Climate created the links to the other FAQs, so I believe they could also create the links to the proper chapters in the WG1 fourth assessment or at a minimum to the entire publication itself.

[Response: We do – the link is on the side bar. – gavin]

24. 174
Martin Vermeer says:

David B. Benson #163:

But I think what you want for that is a book explaining thermohaline circulation (THC); unfortunately, while I am sure there are such, I don’t know of one.

This is a good starter:

http://www.pik-potsdam.de/~stefan/thc_fact_sheet.html

25. 175
David B. Benson says:

Martin Vermeer (174) — Thank you! I’ll keep the link in case someone else later needs a starting point.

26. 176
Mark says:

re: 172.

Well, show us another earth we can piss about with…

And models don’t do what Real Life (TM) does. E.g.

Small scale miniatures being flooded never look right (the viscosity of water doesn’t scale down),

nanomachines that can’t use propellors because they don’t work at that scale,

nanoboats that travel with no motor,

etc.

So you’re ALWAYS left with something uncertain.

27. 177
Hank Roberts says:

> propellors
Flagellum
http://www.nanonet.go.jp/english/mailmag/2004/011a.html
Just because it’s beautiful.

28. 178
Jim Bouldin says:

Steve (172), take what I say with a grain of salt because I know virtually nothing about ocean circulation, and was formerly pretty suspicious of models in general (and modelers, both due at least in part to my own ignorance), but I think it’s very important to keep in mind a couple of things:

1. In science, models are almost always works in progress, best explanations at the moment, improving as knowledge grows. Now that may appear to be a rationalization for bad work (and it can be, but in most cases it’s not), so more importantly…
2. Models are judged useful or not more by their ability to best explain either (a) the full range of observational evidence and/or (b) the major, dominant dynamics of the system, rather than (c) their inability to accurately explain everything in the system. Modeling almost always involves abstraction, attempting to include the most important elements and processes, and leaving out or minimizing the less crucial.
3. Model validation is VERY much a part of the science. A closely related topic is “detection and attribution”. Chapters 8/9 of the WG1 AR4 report provide the review. These topics are the crux of the biscuit really–and highly interesting IMO, not only because of their importance, but also from the degree of sophistication of many of the analytical procedures used.

But maybe I’ve simply stated the patently obvious. Gave me an excuse not to work though :)

29. 179
Chris P says:

Thank you Gavin, as always.

[Re: Re: 170] Are there sophisticated enough models out there that do not use “ice sheets/greenhouse gases/vegetation/dust” as the initial forcing functions? Those are the output variables we use now to measure climate change (I can see each of these being dynamically caused in the code by the geologic/solar/orbital inputs you mentioned).

Do you know of papers where I can read up on the state variables used in these models? I have access to many scientific journals but climate science is not my field so my searches have been less than fruitful.

30. 180
Mark says:

Chris 179,

Not as far as I know.

Then again,

a) we don’t have any old data to test against, meaning we have to wait 50 years to see if it was right
b) if it were used, then there would be lots of people out there going “You are just form-fitting!!!!”

And so where’s the benefit? Do you consider there any to be had?

31. 181
Hank Roberts says:

Still collecting FAQ suggestions in this thread:

found at: http://www.sej.org/index.htm
Possibly outdated by now, but for journalists:

News University,
http://www.newsu.org/
the Poynter Institute’s innovative e-learning center that helps journalists through self-directed training, has made available “Covering Climate Change: A Seminar Snapshot,”
http://www.newsu.org/courses/course_detail.aspx?id=snap_sej08
at no cost. This Seminar Snapshot, produced in partnership with the Society of Environmental Journalists (SEJ) and the Metcalf Institute for Marine and Environmental Reporting, is an edited version of the full-day News Executive Roundtable on Climate Change, a pre-conference event for SEJ’s 2007 annual conference
http://www.sej.org/confer/index3.htm
at Stanford University. …

32. 182
Steve Tirrell says:

As I have mentioned before, I am very curious about what causes the earth to go from warming to cooling and vice-versa. I visited the link that Martin suggested (174) and the discussion there definitely supports changes to the THC being able to cause dramatic changes to the ice sheets in the North. It surmises that a large increase of freshwater in the North would slow down or even stop the current ocean flows. This in turn would mean less warming of the North from the ocean. At some point this could reverse the thawing of the ice sheets and reverse the process. Once the process is reversed, then the ice sheets would begin to absorb CO2 instead of releasing it, thereby causing positive feedback in the opposite direction.

If all of that is possible, then it seems to me that including this effect in a GCM would be mandatory. The big question is when do we hit that reversal point? Would it be after we are underwater, or would it be a lot sooner? Or would our current situation overwhelm any effects of a decreased THC?
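The freshwater-slowdown mechanism described above is often illustrated with a Stommel-style two-box model of the overturning circulation. The sketch below uses nondimensional, purely illustrative parameter values (nothing here is tuned to the real Atlantic, and a real GCM does far more): the flow strength depends on the temperature and salinity contrast between a polar and an equatorial box, and a freshwater flux F erodes the salinity contrast that helps drive it.

```python
# Stommel-style two-box sketch of the THC. All values are nondimensional
# and purely illustrative; this is a cartoon, not an ocean model.
def overturning(F, dT=1.0, alpha=1.0, beta=1.0, dt=0.01, steps=50_000):
    """Integrate the salinity contrast dS forward; return final flow q."""
    dS = 0.0
    for _ in range(steps):
        q = alpha * dT - beta * dS     # flow driven by the T/S contrast
        dS += dt * (F - abs(q) * dS)   # freshwater input vs. advective export
    return alpha * dT - beta * dS

q_weak_hosing = overturning(F=0.1)   # modest freshwater input: strong flow
q_heavy_hosing = overturning(F=0.3)  # heavier input: flow collapses/reverses
print(q_weak_hosing, q_heavy_hosing)
```

The point of the toy model is the threshold behaviour: past a critical freshwater input there is no longer a strong-circulation equilibrium at all, which is the kind of abrupt reversal being discussed.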

33. 183
Chris P says:

Mark,

I do see there being great benefit. This is what I thought Gavin was saying in his answer to the FAQ Part I, Question 7, which I quoted above in comment #170.

My FAQ question was asking for more detail. As he states, all physics-based models oscillate around some mean if the forcing functions are left unchanged. They are strong predictive tools because of their ability to change when the relationships between the forcing functions change. This is the beauty of physics-based models in any scientific/engineering field. To make sure the error in the prediction is as small as possible, you check it against the reality you know, as Gavin describes below:

FAQ Part I, Answer 4: “What is robust in a climate projection and how can I tell?”
“Since every wiggle is not necessarily significant, modelers need to assess how robust particular model results are. They do this by seeing whether the same result is seen in other simulations, with other models, whether it makes physical sense and whether there is some evidence of similar things in the observational or paleo record…Examples of non-robust results are the changes in El Niño as a result of climate forcings, or the impact on hurricanes. In both of these cases, models produce very disparate results, the theory is not yet fully developed and observations are ambiguous.”

My question was specifically what change in the forcing functions was used to move the model’s mean into any of the paleo or recent (pre-industrial-revolution) climate periods? From Gavin’s answer to FAQ Part I, question 7, he implies that this check has been done to the satisfaction of the experts in the field. When he answered my question by saying “Whatever is appropriate – solar/volcanic, orbital, ice sheets/greenhouse gases/vegetation/dust etc,” I asked if there were any models sophisticated enough to calculate the change in ice sheets / greenhouse gases / vegetation / dust etc as a function of the feedback from the change in known solar/volcanic, orbital forcing. Sorry, that sentence is long. This is because these last three forcings were what ran the system before we evolved. They are an effect of the initial cause, though they certainly dampen or quicken any state change (depending on the physics). I would argue that this is not form fitting; it is validating a physics-based model to the best approximation using known scientific relations (without being a first-principles model).

Since this is a part time job for Gavin I also/instead asked for any papers describing the state variables used in these models where I can learn the answers to these questions on my own. I am mathematically literate and it might take him a lot more time than it’s worth to answer some questions (for good reason).

34. 184
Chris P says:

Sorry, when I said “They are an effect of the initial cause,” in the 6th sentence of the 4th paragraph I am referring to the ice sheets / greenhouse gases / vegetation / dust etc. I didn’t mean to be ambiguous.

35. 185
David B. Benson says:

Steve Tirrell (182) — Any good introductory textbook will answer your questions. I started with W.F. Ruddiman’s “Earth’s Climate: Past and Future”, which is now in its second edition. As a more advanced alternative, try “The Discovery of Global Warming” by Spencer Weart, first link in the Science section of the sidebar.

36. 186
Hank Roberts says:

Chris, the closest I find searching for something using terms like those in your question is a discussion over at CA. Are you looking for something along those lines? You might want to try Scholar; there are some papers that might be helpful in refining the question.

This thread was set up for collecting ideas — that can eventually be dealt with in an update to the FAQ.

You might not get immediate answers.

You might look at the climate model intercomparison project too.

37. 187
Chris P says:

Thank you very much Hank.

38. 188
Mark says:

re 183.

But will changing ice coverage from a diagnostic to an input produce an output that is more accurate, and is that change big enough to make up for losing the ability to check the model against a consequence (the measurement of ice coverage)?

After all, if we had as input *everything* we then don’t know if we are getting better because the science is more accurate or because we keep changing the model: we cannot refute the claim that we merely tuned our model to get the answer we want, since everything could be tuned then.

Leaving consequences out of your drivers can mean you have an avenue to prove (in the old-fashioned sense of testing against reality) your accuracy: these figures should fall out automagically correct, and if they don’t, that shows you’re missing something important.

39. 189
Chris P says:

[re 188]

Mark,

Your last post confuses me. I think we are saying the same thing. I don’t want to clog these comments since they are meant for additional FAQ questions. We can take this to email if you’d like.

I agree, when running a GCM “ice sheets / greenhouse gases / vegetation / dust etc” should be inputs when checking the models’ accuracy. They should be used as the initial conditions at a given point in time. My question referred to the forcing functions (not I.C.s) and what change is required to produce different paleo climate events. “Ice sheets / greenhouse gases / vegetation / dust etc” are system feedbacks, not forcing functions. Specifically, my question was: given an initial starting point before one of the last freezes or warming periods (where the climate began at a condition similar to ours, pre-human CO2), what changes in the forcing functions were used to move the model away from our mean? Moving to our mean from those events is a good check as well; however, I am interested in moving away at this time.

Ultimately I’d like to learn about the gains (weights) used for each forcing in the code and the inputs to each (step functions, ramp functions etc… a more technical, system dynamics answer). As implied previously, these checks have already been done to the satisfaction of the experts in the field and I would simply like to learn about them. As Gavin states in this FAQ, such tests “provide good targets against which the models’ sensitivity can be tested.” I want to learn more. Hank was very helpful and I am going to start by spending time with the resources he suggested.

[Response: As discussed in the FAQ, the definition of a ‘forcing’ depends on the model you are talking about. We do not yet have models that contain every single aspect of the climate, and so many experiments are done with models in which some changes are imposed as forcings. For instance, most GCMs don’t include models of ice sheets or the carbon cycle. In those models, changes in the ice sheets or in CO2 levels are imposed as a forcing. Those have been the models in which ideas about the LGM for instance have been mostly explored. – gavin]
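Gavin’s point about imposed forcings can be made concrete with a zero-dimensional energy-balance sketch: in a model with no interactive carbon cycle or ice sheets, a CO2 or ice-sheet change enters as a prescribed forcing F, and the model’s internal physics supplies the feedback that balances it. The forcing value below is the canonical figure for doubled CO2; the feedback parameter is purely illustrative and not taken from any actual GCM.

```python
# Zero-dimensional energy balance: an *imposed* forcing F (W/m^2), such as
# a prescribed CO2 change in a model without a carbon cycle, is balanced at
# equilibrium by the feedback response: F = lam * dT.
def equilibrium_warming(F, lam):
    """Equilibrium temperature change (K) for forcing F and feedback lam."""
    return F / lam

F_2xCO2 = 3.7   # canonical radiative forcing for doubled CO2, W/m^2
lam = 1.2       # illustrative net feedback parameter, W/m^2 per K
print(equilibrium_warming(F_2xCO2, lam))   # roughly 3 K
```

The same arithmetic works whatever the imposed change is (ice-sheet albedo, dust, orbital insolation); what counts as “forcing” is simply whatever the particular model does not compute for itself.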

40. 190
Mark says:

Chris. One way to put it is that if you take a graph that you know depends on approximately x^2 and fit an x^2 curve to it with minimum RMS error, you can see if the supposition that it fits x^2 is right.

If, however, you fit whatever curve hits the spots then you can’t see if your curve fitting is right until a long time later, when you have enough figures you didn’t include in the curve fitting.

You also lose any future discovery based on the curve. Whereas if you figured it was x^2 and after enough fittings, you figured “actually, if I add a sine to that, it fits MUCH better”, then you can

a) use future values to see if that is correct
b) use that extra fit to work out what may be causing it

So ice as an output can be used historically as a “sanity check” (RMS errors shouldn’t increase as time goes on if fitting to x^2 is right) unless you parameterise your inputs with it (fit the curve).

So will adding ice into the system as a parameterised forcing make enough of a difference to the models to negate the loss of an independent source of verification?
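Mark’s distinction can be illustrated numerically: fit the known x^2 form to the first half of a noisy series and it extrapolates well, while a high-order polynomial that “hits every spot” fails badly outside the fitting window. The data here are synthetic, generated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = x**2 + rng.normal(0.0, 0.02, x.size)   # "truth" is x^2 plus noise

train, test = slice(0, 10), slice(10, 20)  # fit on first half, check on second

# Physics-based guess: least-squares fit of y = a*x^2 on the training half.
a = np.sum(y[train] * x[train]**2) / np.sum(x[train]**4)
err_physics = np.sqrt(np.mean((a * x[test]**2 - y[test])**2))

# "Hit every spot": degree-9 polynomial through the same 10 training points.
p = np.polyfit(x[train], y[train], 9)
err_overfit = np.sqrt(np.mean((np.polyval(p, x[test]) - y[test])**2))

print(err_physics, err_overfit)   # the overfit error is vastly larger
```

The curve-hitting fit reproduces the training points perfectly yet diverges outside them, which is Mark’s point: held-out quantities (like ice cover left as a diagnostic) are what distinguish a physical fit from a tuned one.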

41. 191
Chris P says:

Mark,

You can reach me at chris.mann.117 – at – gmail.com. Let us continue this there.

I agree climate models should not be line-fitters; they should be physics based. I do not think I’ve implied that I wish to fit an nth-degree polynomial, trig function, or logarithm to a set of historic proxies or data points. I understand the dubious ability of such a model to predict future trends (i.e. watch how quickly a 5th-degree Taylor polynomial fit to a sine function fails).

I do not agree that you can classify ice cover as an independent variable (or vegetation, the CO2 cycle etc); hence you cannot lose it as an independent source of verification.

As I said, I am now taking the time to read the resources suggested.