This is a continuation of a previous post including interesting questions from the comments.
- What are parameterisations?
Some physics in the real world, that is necessary for a climate model to work, is only known empirically. Or perhaps the theory only really applies at scales much smaller than the model grid size. This physics needs to be ‘parameterised’ i.e. a formulation is used that captures the phenomenology of the process and its sensitivity to change but without going into all of the very small scale details. These parameterisations are approximations to the phenomena that we wish to model, but which work at the scales the models actually resolve. A simple example is the radiation code – instead of using a line-by-line code which would resolve the absorption at over 10,000 individual wavelengths, a GCM generally uses a broad-band approximation (with 30 to 50 bands) which gives very close to the same results as a full calculation. Another example is the formula for the evaporation from the ocean as a function of the large-scale humidity, temperature and wind-speed. This is really a highly turbulent phenomenon, but there are good approximations that give the net evaporation as a function of the large scale (‘bulk’) conditions. In some parameterisations, the functional form is reasonably well known, but the values of specific coefficients might not be. In these cases, the parameterisations are ‘tuned’ to reproduce the observed processes as much as possible.
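The bulk evaporation example can be sketched in a few lines. This is purely illustrative (the Tetens-style saturation formula and the value of the exchange coefficient c_e are textbook approximations, not taken from any particular GCM); c_e is exactly the kind of coefficient whose functional role is known but whose precise value gets ‘tuned’:

```python
import math

def q_sat(t_celsius, p_hpa=1013.25):
    """Saturation specific humidity (kg/kg) via the Tetens approximation."""
    e_s = 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))  # hPa
    return 0.622 * e_s / (p_hpa - 0.378 * e_s)

def bulk_evaporation(sst, q_air, wind, c_e=1.3e-3, rho_air=1.2):
    """Moisture flux (kg m^-2 s^-1) from large-scale ('bulk') conditions.

    c_e is a tunable exchange coefficient standing in for all the
    unresolved turbulence near the surface.
    """
    return rho_air * c_e * wind * (q_sat(sst) - q_air)

# Warm tropical ocean, 7 m/s wind, fairly dry overlying air (made-up values)
flux = bulk_evaporation(sst=28.0, q_air=0.015, wind=7.0)
print(flux * 86400)  # mm of water evaporated per day (1 kg/m^2 = 1 mm)
```

The point is not the particular numbers but the structure: a simple function of resolved, grid-scale quantities replaces an intractably turbulent process.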
- How are the parameterisations evaluated?
In at least two ways. At the process scale, and at the emergent phenomena scale. For instance, taking one of the two examples mentioned above, the radiation code can be tested against field measurements at specific times and places where the composition of the atmosphere is known alongside a line-by-line code. It would need to capture the variations seen over time (the daily cycle, weather, cloudiness etc.). This is a test at the level of the actual process being parameterised and is a necessary component in all parameterisations. The more important tests occur when we examine how the parameterisation impacts larger-scale or emergent phenomena. Does changing the evaporation improve the patterns of precipitation? the match of the specific humidity field to observations? etc. This can be an exhaustive set of tests but again is mostly necessary. Note that most ‘tunings’ are done at the process level. Only those that can’t be constrained using direct observations of the phenomena are available for tuning to get better large scale climate features. As mentioned in the previous post, there are only a handful of such parameters that get used in practice.
- Are clouds included in models? How are they parameterised?
Models do indeed include clouds, and do allow changes in clouds as a response to forcings. There are certainly questions about how realistic those clouds are and whether they have the right sensitivity – but all models do have them! In general, models suggest that they are a positive feedback – i.e. there is a relative increase in high clouds (which warm more than they cool) compared to low clouds (which cool more than they warm) – but this is quite variable among models and not very well constrained from data.
Cloud parameterisations are amongst the most complex in the models. The large differences in mechanisms for cloud formation (tropical convection, mid-latitude storms, marine stratus decks) require multiple cases to be looked at and many sensitivities to be explored (to vertical motion, humidity, stratification etc.). Clouds also have important micro-physics that determine their properties (such as cloud particle size and phase) and interact strongly with aerosols. Standard GCMs have most of this physics included, and some are even going so far as to embed cloud resolving models in each grid box. These models are supposed to do away with much of the parameterisation (though they too need some, smaller-scale, ones), but at the cost of greatly increased complexity and computation time. Something like this is probably the way of the future.
- What is being done to address the considerable uncertainty associated with cloud and aerosol forcings?
As alluded to above, cloud parameterisations are becoming much more detailed and are being matched to an ever larger body of observations. However, there are still problems in getting sufficient data to constrain the models. For instance, it’s only recently that separate diagnostics for cloud liquid water and cloud ice have become available. We still aren’t able to distinguish different kinds of aerosols from satellites (though maybe by this time next year).
However, none of this is to say that clouds are a done deal; they certainly aren’t. In both cloud and aerosol modelling the current approach is to get as wide a spectrum of approaches as possible and to discern what is and what is not robust among those results. Hopefully soon we will start converging on the approaches that are the most realistic, but we are not there yet.
Forcings over time are a slightly different issue, and there it is likely that substantial uncertainties will remain because of the difficulty in reconstructing the true emission data for periods more than a few decades back. That involves making pretty unconstrained estimates of the efficiency of 1930s technology (for instance) and 19th Century deforestation rates. Educated guesses are possible, but independent constraints (such as particulates in ice cores) are partial at best.
- Do models assume a constant relative humidity?
No. Relative humidity is a diagnostic of the models’ temperature and water distribution and will vary according to the dynamics, convection etc. However, many processes that remove water from the atmosphere (i.e. cloud formation and rainfall) have a clear functional dependence on the relative humidity rather than the total amount of water (i.e. clouds form when air parcels are saturated at their local temperature, not when humidity reaches X g/m3). This leads to the phenomenon observed in the models and the real world that long-term mean relative humidity is pretty stable. In models it varies by a couple of percent over temperature changes that lead to specific humidity (the total amount of water) changing by much larger amounts. Thus a good estimate of the model relative humidity response is that it is roughly constant, similar to the situation seen in observations. But this is a derived result, not an assumption. You can see for yourself here (select Relative Humidity (%) from the diagnostics).
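The reason specific humidity changes so much faster than relative humidity is the Clausius-Clapeyron relation: saturation vapour pressure rises roughly 6–7% per degree of warming near surface temperatures. A quick check, using a standard Tetens-style approximation (illustrative only, not any model’s actual moisture scheme):

```python
import math

def e_sat(t_c):
    """Saturation vapour pressure (hPa), Tetens-style approximation."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

rh = 0.8  # suppose long-term mean relative humidity stays near 80%
for t in (14.0, 15.0, 16.0):
    # at fixed pressure, specific humidity is proportional to rh * e_sat
    print(t, round(rh * e_sat(t), 2))

# fractional increase in water vapour per degree of warming near 15 C
growth = e_sat(16.0) / e_sat(15.0) - 1
print(round(100 * growth, 1), "% per K")
```

So holding relative humidity roughly fixed while warming by a few degrees implies double-digit percentage increases in the total water content, which is the behaviour the models derive rather than assume.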
- What are boundary conditions?
These are the basic data input into the models that define the land/ocean mask, the height of the mountains, river routing and the orbit of the Earth. For standard models additional inputs are the distribution of vegetation types and their properties, soil properties, and mountain glacier, lake, and wetland distributions. In more sophisticated models some of what were boundary conditions in simpler models have now become prognostic variables. For instance, dynamic vegetation models predict the vegetation types as a function of climate. Other examples in a simple atmospheric model might be the distribution of ozone or the level of carbon dioxide. In more complex models that calculate atmospheric chemistry or the carbon cycle, the boundary conditions would instead be the emissions of ozone precursors or anthropogenic CO2. Variations in these boundary conditions (for whatever reason) will change the climate simulation and can be considered forcings in the most general sense (see the next few questions).
- Does the climate change if the boundary conditions are stable?
The answer to this question depends very much on perspective. On the longest timescales a climate model with constant boundary conditions is stable – that is, the mean properties and their statistical distribution don’t vary. However, the spectrum of variability can be wide, and so there are variations from one decade to the next, and from one century to the next, that are the result of internal variations in (for instance) the ocean circulation. While the long term stability is easy to demonstrate in climate models, it can’t be unambiguously determined whether this is true in the real world since boundary conditions are always changing (albeit slowly most of the time).
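The distinction between stable long-term statistics and decade-to-decade wander can be demonstrated with even the simplest red-noise (AR(1)) process, a toy stand-in for internal variability with memory (no climate physics here, just statistics):

```python
import random

random.seed(0)

# AR(1) 'climate' with fixed boundary conditions: statistically stationary,
# yet individual decades differ purely through internal memory plus noise.
phi, x, series = 0.9, 0.0, []
for _ in range(1200):  # 100 years of monthly anomalies
    x = phi * x + random.gauss(0.0, 0.1)
    series.append(x)

decadal_means = [sum(series[i:i + 120]) / 120 for i in range(0, 1200, 120)]
print([round(m, 2) for m in decadal_means])  # decades differ; long-term mean near zero
```

Every decade has a different mean, yet nothing about the underlying ‘boundary conditions’ (phi and the noise amplitude) ever changed.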
- Does the climate change if boundary conditions change?
Yes. If any of the factors that influence the simulation change, there will be a response in the climate. It might be large or small, but it will always be detectable if you run the model for long enough. For example, making the Rockies smaller (as they were a few million years ago) changes the planetary wave patterns and the temperature patterns downstream. Changing the ozone distribution changes temperatures, the height of the tropopause and stratospheric winds. Changing the land-ocean mask (because of sea level rise or tectonic changes for instance) changes ocean circulation, patterns of atmospheric convection and heat transports.
- What is a forcing then?
The most straightforward definition is simply that a forcing is a change in any of the boundary conditions. Note however that this definition is not absolute with respect to any particular bit of physics. Take ozone for instance. In a standard atmospheric model, the ozone distribution is fixed and any change in that fixed distribution (because of stratospheric ozone depletion, tropospheric pollution, or changes over a solar cycle) would be a forcing causing the climate to change. In a model that calculates atmospheric chemistry, the ozone distribution is a function of the emissions of chemical precursors, the solar UV input and the climate itself. In such a model, ozone changes are a response (possibly leading to a feedback) to other imposed changes. Thus it doesn’t make sense to ask whether ozone changes are or aren’t a forcing without discussing what kind of model you are talking about.
There is however a default model setup in which many forcings are considered. This is not always stated explicitly and leads to (somewhat semantic) confusion even among specialists. This setup consists of an atmospheric model with a simple mixed-layer ocean model, but that doesn’t include chemistry, aerosol, vegetation or dynamic ice sheet modules. Not coincidentally this corresponds to the state-of-the-art of climate models around 1980 when the first comparisons of different forcings started to be done. It persists in the literature all the way through to the latest IPCC report (figure xx). However, there is a good reason for this, and that is the observation that different forcings that have equal ‘radiative’ impacts have very similar responses. This allows many different forcings to be compared in magnitude and added up.
The ‘radiative forcing’ is calculated (roughly) as the net change in radiative fluxes (both short wave and long wave) at the top of the atmosphere when a component of the default model set up is changed. Increased solar irradiance is an easy radiative forcing to calculate, as is the value for well-mixed greenhouse gases. The direct effect of aerosols (the change in reflectance and absorption) is also easy (though uncertain due to the distributional uncertainty), while the indirect effect of aerosols on clouds is a little trickier. However, some forcings in the general sense defined above don’t have an easy-to-calculate ‘radiative forcing’ at all. What is the radiative impact of opening the isthmus of Panama? or the collapse of Lake Agassiz? Yet both of these examples have large impacts on the models’ climate. Some other forcings have a very small global radiative forcing and yet lead to large impacts (orbital changes for instance) through components of the climate that aren’t included in the default set-up. This isn’t a problem for actually modelling the effects, but it does make comparing them to other forcings without doing the calculations a little more tricky.
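For the well-mixed greenhouse gases there are standard simplified expressions; for CO2 the widely used logarithmic fit of Myhre et al. (1998) gives about 3.7 W/m2 for a doubling. (That fit is standard in the literature, but it is an add-on here, not something quoted in the post itself.)

```python
import math

def co2_forcing(c, c0=280.0):
    """Simplified CO2 radiative forcing (W/m^2), Myhre et al. (1998) fit."""
    return 5.35 * math.log(c / c0)

print(round(co2_forcing(560.0), 2))  # doubling pre-industrial CO2: ~3.7 W/m^2
print(round(co2_forcing(385.0), 2))  # a late-2000s concentration: ~1.7 W/m^2
```

It is exactly this kind of single-number summary that has no analogue for opening an ocean gateway or collapsing a proglacial lake, which is the point being made above.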
- What are the differences between climate models and weather models?
Conceptually they are very similar, but in practice they are used very differently. Weather models use as much data as there is available to start off close to the current weather situation and then use their knowledge of physics to step forward in time. This has good skill for a few days and some skill for a little longer. Because they are run for short periods of time only, they tend to have much higher resolution and more detailed physics than climate models (but note that the Hadley Centre for instance, uses the same model for climate and weather purposes). Weather models develop in ways that improve the short term predictions, though the impact for long term statistics or the climatology needs to be assessed independently. Curiously, the best weather models often have a much worse climatology than the best climate models. There are many current attempts to improve the short-term predictability in climate models in line with the best weather models, though it is unclear what impact that will have on projections.
- How are solar variations represented in the models?
This varies a lot because of uncertainties in the past record and complexities in the responses. But given a particular estimate of solar activity there are a number of modelled responses. First, the total amount of solar radiation (TSI) can be varied – this changes the total amount of energy coming into the system and is very easy to implement. Second, the variations over the solar cycle at different frequencies (from the UV to the near infra-red) don’t all have the same amplitude – UV changes are about 10 times as large as those in the total irradiance. Since UV is mostly absorbed by ozone in the stratosphere, including these changes increases the magnitude of the solar cycle variability in the stratosphere. Furthermore, the change in UV has an impact on the production of ozone itself (even down into the troposphere). This can be calculated with chemistry-climate models, and is increasingly being used in climate model scenarios (see here for instance).
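The TSI piece really is simple: divide the irradiance change by 4 (the ratio of the Earth’s surface area to the disc it presents to the Sun) and scale by the fraction not reflected back to space. As a rough illustration (the 0.3 planetary albedo and the ~1 W/m2 solar-cycle amplitude are round numbers, not model inputs):

```python
def tsi_to_forcing(delta_tsi, albedo=0.3):
    """Convert a change in total solar irradiance to a global-mean forcing.

    Divide by 4 for sphere-versus-disc geometry, then scale by (1 - albedo)
    for the fraction of sunlight actually absorbed.
    """
    return delta_tsi * (1.0 - albedo) / 4.0

# A typical solar-cycle TSI swing of ~1 W/m^2
print(tsi_to_forcing(1.0))  # ~0.18 W/m^2, small next to greenhouse forcings
```

This smallness of the direct TSI forcing is part of why the UV/ozone and other indirect pathways get so much attention.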
There are also other hypothesised impacts of solar activity on climate, most notably the impact of galactic cosmic rays (which are modulated by the solar magnetic activity on solar cycle timescales) on atmospheric ionisation, which in turn has been linked to aerosol formation, and in turn linked to cloud amounts. Most of these links are based on untested theories and somewhat dubious correlations; however, as was recognised many years ago (Dickinson, 1975), it is a plausible idea. Implementing it in climate models is however a challenge. It requires models to have a full treatment of aerosol creation, growth, accretion and cloud nucleation. There are many other processes that affect aerosols and GCR-related ionisation is only a small part of that. Additionally there is a huge amount of uncertainty in aerosol-cloud effects (the ‘aerosol indirect effect’). Preliminary work seems to indicate that the GCR-aerosol-cloud link is very small (i.e. the other effects dominate), but this is still in the early stages of research. Should this prove to be significant, climate models will likely incorporate this directly (using embedded aerosol codes), or will parameterise the effects based on calculated cloud variations from more detailed models. What models can’t do (except perhaps as a sensitivity study) is take purported global scale correlations and just ‘stick them in’ – cloud processes and effects are so tightly wound up in the model dynamics and radiation and have so much spatial and temporal structure that this couldn’t be done in a way that made physical sense. For instance, part of the observed correlation could be due to the other solar effects, and so how could they be separated out? (and that’s even assuming that the correlations actually hold up over time, which doesn’t seem to be the case).
- What do you mean when you say a model has “skill”?
‘Skill’ is a relative concept. A model is said to have skill if it gives more information than a naive heuristic. Thus for weather forecasts, a prediction is described as skillful if it works better than just assuming that each day is the same as the last (‘persistence’). It should be noted that ‘persistence’ itself is much more skillful than climatology (the historical average for that day) for about a week. For climate models, there is a much larger range of tests available and there isn’t necessarily an analogue for ‘persistence’ in all cases. For a simulation of a previous time period (say the mid-Holocene), skill is determined relative to a ‘no change from the present’ baseline. Thus if a model predicts a shift northwards of the tropical rain bands (as was observed), that would be skillful. This can be quantified and different models can exhibit more or less skill with respect to that metric. For the 20th Century, models show skill for the long-term changes in global and continental-scale temperatures – but only if natural and anthropogenic forcings are used – compared to an expectation of no change. Standard climate models don’t show skill at the interannual timescales which depend heavily on El Niños and other relatively unpredictable internal variations (note that initialised climate model projections that use historical ocean conditions may show some skill, but this is still a very experimental endeavour).
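One common way to quantify this is a skill score of the form 1 - MSE(forecast)/MSE(baseline): positive values mean the forecast beats the naive baseline, zero means no improvement, and negative values mean it is worse. A toy illustration with made-up numbers (not real model output):

```python
def skill_score(forecast, observed, reference):
    """1 - MSE(forecast)/MSE(reference); > 0 means skill over the baseline."""
    mse = lambda pred: sum((p - o) ** 2 for p, o in zip(pred, observed)) / len(observed)
    return 1.0 - mse(forecast) / mse(reference)

observed  = [0.1, 0.3, 0.2, 0.5, 0.6]   # e.g. observed anomalies (made up)
forecast  = [0.0, 0.2, 0.3, 0.4, 0.6]   # a hypothetical model hindcast
no_change = [0.0] * 5                   # the 'no change from present' baseline

score = skill_score(forecast, observed, no_change)
print(round(score, 2))
```

With ‘persistence’ substituted for the no-change baseline, the same formula is how weather forecast skill is scored.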
- How much can we learn from paleoclimate?
Lots! The main issue is that for the modern instrumental period the changes in many aspects of climate have not been very large – either compared with what is projected for the 21st Century, or from what we see in the past climate record. Thus we can’t rely on the modern observations to properly assess the sensitivity of the climate to future changes. For instance, we don’t have any good observations of changes in the ocean’s thermohaline circulation over recent decades because a) the measurements are difficult, and b) there is a lot of noise. However, in periods in the past, say around 8,200 years ago, or during the last ice age, there is lots of evidence that this circulation was greatly reduced, possibly as a function of surface freshwater forcing from large lake collapses or from the ice sheets. If those forcings and the response can be quantified they provide good targets against which the models’ sensitivity can be tested. Periods that are of possibly the most interest for testing sensitivities associated with uncertainties in future projections are the mid-Holocene (for tropical rainfall, sea ice), the 8.2kyr event (for the ocean thermohaline circulation), the last two millennia (for decadal/multi-decadal variability), the last interglacial (for ice sheets/sea level) etc. There are plenty of other examples, and of course, there is a lot of intrinsic interest in paleoclimate that is not related to climate models at all!
As before, if there are additional questions you’d like answered, put them in the comments and we’ll collate the interesting ones for the next FAQ.
191 Responses to "FAQ on climate models: Part II"
Sean Dorcy says
I have two questions. Are the assumptions/unknowns the reason many climate models are more conservative in their predictions, causing them to fall short of what has actually occurred with the climate?
Shouldn’t actual physical evidence be placed ahead of what a climate model states as far as trying to prove that climate change is an actuality?
[Response: If we knew why models were not perfect, we’d fix them. Your second question doesn’t make much sense. Climate models don’t prove anything on their own. It is the match up of theory, observations and simulations that allows us to attribute causes and effects and make predictions about what would happen with or without various causes. The evidence from those exercises are pretty convincing that anthropogenic climate change (which is what I presume you mean) is ongoing – but that is a conclusion that comes from consideration of the actual physical evidence. Your question presumes a distinction that doesn’t exist. – gavin]
Kevin McKinney says
A very meaty and helpful post. Many thanks once again. I will be linking to this!
Disclaimer: I am not an expert
You mention that you believe conservative models’ projections have fallen short of actual occurrences. I am wondering where you derive the assumptions that a) the projections are conservative and b) projections have fallen short of actual occurrences?
If you’re talking about global mean temperature I would advise you to compare the projections of the IPCC to the actual measurements from GISS, HadCRUT, RSS MSU, and UAH MSU. If that is the parameter that you are saying has fallen short of actual measurements, I would say it hasn’t. There are some who would argue that the projections are too aggressive, and the best argument is probably that we don’t know quite enough yet whether it is too aggressive or conservative…
You can see what CO2 concentration has looked like over the years here (represented by Mauna Loa Observatory) and compare it to the different scenarios assumed in each IPCC projection which then averages the output of the models:
If your question has to do with melting ice, I would note for you that there are certain wind and ocean circulation effects that have added greatly to the short term melting of Arctic Ice over the last couple years (not just global warming) and these short term effects are not where a long term ice projection applies.
In other words, we read in the press that this melt was caused by global warming effects exceeding projections, but it would be more factual to say we are seeing natural effects superimposed on global warming effects over a pretty short time frame over which projections aren’t specifically made. For instance, if ice rebounds to 1979 levels in 2009 to 2011 that doesn’t disprove global warming theory just like the last few years of low ice levels didn’t prove it. You need to look at longer term trends against longer term projections. Don’t be surprised if the press isn’t able to give you good scientific information when you hear about projections parameters being exceeded.
If your question has to do with storm/hurricane numbers and intensity, I would note that is not exactly settled. Last I saw from NOAA was global warming decreasing numbers but increasing intensities:
However the methods and equipment for measuring occurrences and intensity have improved so much that we’re not exactly comparing apples to apples when we calibrate today’s numbers with those from 70 years ago to quantify a correlation, its effect, and provide a projection.
In other words, there’s a lot of good science in the models, enough to provide insight into how the climate interacts and to make current “best bet” projections, but remember:
a) models will certainly be modified in the future. It is not out of the question for projection results to change just a little, or possibly even a lot, in either direction.
b) models can’t be used for comparison to short term trends which contain volatile natural variability. Comparing short term trends to models neither proves nor disproves the models or the underlying theory.
What a great post! There are many issues addressed here which are common skeptic claims, as well as sources of unease amongst nonexperts. Thanks so much for all the work you do!
Arthur Smith says
Gavin – nice collection of questions and answers!
Since you bring up the thermo-haline circulation, a question I have been recently pondering is why is the deep ocean so cold? After all, if average surface temperature is 15 C, wouldn’t you expect land and ocean below the surface to equilibrate at roughly that temperature (with a slightly rising gradient to account for the flow of Earth’s internal heat)?
The best simple answer I’ve seen is basically that you have to go to a 2-box model of Earth, with warm tropics and cold poles, and then realize that thanks to the thermohaline circulation the deep oceans are coupled almost exclusively to the polar regions, and so are in the “cold” box and not the warm one or some average of them.
However, the question then is – can this switch? Are there boundary conditions or any stable solution that would couple the deep oceans to the “hot” box rather than the cold one? If that ever happened what would it imply for surface temperatures?
Julius St Swithin says
I have a comment on paleo data.
Proxy temperature data are calculated in the form of:
Temperature = a + b * x, where ‘x’ is something like tree ring thickness or O18/O16 ratios. Unless r2 is 1, the proxy temperature data will always have a lower standard deviation than the measured data. In the limit, if r2 is 0, the proxy will have the value of the mean of the observed calibration data and zero standard deviation. I know that more sophisticated regression methods are employed but similar problems are unavoidable. What is more, different proxies will have different smoothing effects; the thickness of a ring is in part a function of how well the tree grew the previous year, and gas migrates between layers of snow before they become consolidated. Mixing proxies will therefore further suppress the variance of the proxy data.
If observed data from recent years are added to proxy data from earlier years to create a long-term series what steps are taken to ensure that the artificially low variance of the proxy data compared to the true variance of the observed does not produce a distorted picture.
[Response: You have a very limited concept of how paleodata is used. For instance, in almost all of my work I have emphasised the need to forward model proxy data within climate simulations so that all of the processes by which proxy data is recorded can be simulated, rather than using the inverse approach you discuss. Multiple proxies which have different systematic errors can also be used to isolate underlying signals. – gavin]
[Response: adding to the above, in most actual proxy-reconstruction studies (including those of my group’s) the unresolved variance is of course one of the central quantities looked at. It is used to define the uncertainties in the reconstructions, i.e. those error bars you typically see in association with proxy-reconstructed quantities are telling you the envelope of uncertainty within which any comparisons to modern instrumental data should be made, based on the variance that is not resolved by the paleoclimate data (in both calibration and, importantly, cross-validation tests). They are a crucial guide to the interpretation of comparisons of the reconstructed past w/ the modern instrumental record. This above stuff is basic, and should be clear from even a cursory reading of the peer-reviewed literature in this area. I would suggest you review that literature, e.g. start with the IPCC AR4 chapter (6) on paleoclimate. -mike]
Jim Bouldin says
Gavin, thanks for yet another very helpful article, though I admit I’ve not read all of it. As for additional topics, perhaps a brief explanation on why confidence in attribution (and prediction) of temperature change is strongest at large scales and weakest at small scales, ie something about the issue of signal to noise relative to spatial scale.
JacquesLB says
Arthur: Deep ocean temperature is fixed by the compressibility properties of water. Although the variation is small, water happens to be densest at 4°C. At 4000 m beneath surface pressure is roughly 400 atmospheres, which is enough to force water to be in its state of maximal density, hence a fixed given temperature. This is why the deep ocean remains always liquid (any other liquid would turn to solid) and some say it is a necessary (although not sufficient) condition for the development of life on Earth.
Jim Morrison says
As a consumer of GCM results for use in natural resource planning, your FAQ’s are very helpful. Thanks. I have another set of questions I hope you may be able to help me with. Do GCMs skillfully simulate significant patterns of natural variability? More specifically, when examining GCM output, or multi-model mean data from CMIP3 analyses, specific to the western US for the next 5 decades, is it appropriate to mentally overlay anticipated PDO or ENSO oscillations? Or should I assume these patterns are already included in the projections? Could GCM projections substantially overestimate temperature trends for the western US if PDO shifts from its current warm phase to a cool phase? These questions are relevant to meso-scale (1-5 decades; local and regional extent) adaptation proposals and decisions. Randall et al. (AR4 WGI Chapter 8 ) doesn’t provide a clear answer to these questions.
David B. Benson says
Gavin — “opening the isthmus of Panama?”
Should not that read “closing”?
[Response: Depends on your point of view. – gavin]
Thanks for excellent article above – for next FAQ as well as ENSO, PDO/IPO mentioned above would also like to hear about modelling of phenomena like Southern Annular Mode and Indian Ocean Dipole. The underlying issue is about both model completeness and how much these phenomena might move future projections around. Additionally interested in land surface/biospheric feedbacks.
Arthur Smith says
JacquesLB (#8) – your argument only explains why the bottom of the ocean is not colder than it is, or indeed frozen at the bottom – colder water heads upwards and freezes at the surface. So the deep ocean coupled to the “cold box” can’t get much colder than 4 C. But it could easily be warmer with no violation of any laws of physics – a lot warmer. Why isn’t it, and are there any conditions for a planet similar to Earth under which the deep ocean could be much warmer?
[Response: Indeed the ocean depths used to be a lot warmer – maybe 15 deg C during the Eocene for instance. The issue is where and how dense water is formed – today it is in the polar regions where you have freezing conditions and enormous heat fluxes to the atmosphere. In other times, with warmer poles, or perhaps very salty tropics, you could make deep water with very different properties. It only needs to be denser than other water at the surface. – gavin]
I read (in Thin Ice, I believe; the book that soon brought me to RC & AGW) about the consequences of the development of the Isthmus of Panama on global climate. Why then opening?
Eric Swanson says
Arthur Smith mentions the maximum density of water. It’s true that for pure water, the maximum density occurs at a temperature of 4°C, however, for the oceans, the salt content is such that the maximum density is at the freezing point, −1.8°C. The coldest water is on the bottom because that’s the densest water. Of course, during winter as the water freezes on the surface, the resulting sea-ice is less dense, so it floats. And, when the sea water freezes, much of the salt is rejected in the process, which can cause the remaining water on the surface to become even more dense and sink. This is the reason that the water on the very bottom of the ocean originates around the Antarctic as the result of the yearly sea-ice cycle.
JacquesLB and Arthur: With respect to the temperature in the deep ocean, I would like to point out that the oceans are filled with seawater, not fresh water.
Arthur Smith says
Gavin (#12 response) – ah, but we’re headed to times of warmer poles (with higher fresh water content from melting ice, as opposed to higher salt levels from freezing), and likely saltier tropics due to higher evaporation levels (or do tropical precipitation increases balance that?) – so has anybody modeled where the tipping point might be to a switch to tropical coupling, as opposed to polar coupling, and what the impact would be in a world with high CO2?
The paleoclimate record (8.2kyr, and earlier “large lake collapses”) shows a dramatic drop in surface temperatures for a substantial period of time when the ocean circulation shuts off or changes, but is that actually what would be expected under these warming conditions? How long would it take to warm the deep oceans? At 4 W/m^2 and about 1 billion cubic km of ocean to warm by 10 C, I think that comes to 600 or 700 years. My guess is it might lead to relatively stable surface temperatures during this warming period, but ever-rising sea levels as the ocean expands?
Concerning Paleoclimate; the 8.2Ka event was involved with the Laurentide ice sheet, and is long gone. As such, it will have limited applicability to the future.
We seem to be heading for a climate more like the Miocene. Antarctica was and will be around, but it becomes surrounded by lots of melt water, and may stop driving deep ocean currents.
Then the only driver of deep ocean currents will be evaporation, and it’s not clear whether that is sufficient to maintain the mixing. That could isolate the surface waters from the deep and result in accelerated warming.
Edward Greisch says
Thanks for a great post. The following is not for myself but for many of those people out there who have the weather/climate confusion. I understand that the methods of the weather bureau are so different from the methods of climatology that there is a huge and, at this time, unfillable gap between the two. Weather is short term, like days. Climate is long term, like centuries.
It is precisely in the gap that a forecast would be most beneficial to most people. Therefore, people get frustrated trying to argue you into filling in the gap. They want you to combine weather prediction and climatology into a science that accurately predicts next year and the next 5 years. That is their planning horizon. After all, you and the weather forecasters use big computers and data that sound the same. You even mentioned weather a lot in your article.
It is only when you actually try to fill in the gap that you, the scientists, become so frustrated that you give up on that project. Since the average person has no experience with trying to solve mathematical problems and no experience with computationally intensive computer programs, he does not understand your refusal to do that which you cannot.
I think that this may be a part of the problem with denialists. Of course, the denialists, in general, and the people who listen to them, have other problems or agendas.
Geoff Beacon says
Quote from the Hadley Centre a year or so ago:
“The CH4 (and CO2) permafrost feedback isn’t included in current Earth System Models and it is potentially large but no-one really knows.”
Anyone know of any progress?
We do need estimates for policy making. It’s not much use having exquisite climate models that model the wrong reality.
I think the FAQs should at least have a section “What feedbacks are missing?” We ought to be told their probable impacts.
“Not known” is a better answer than none.
Is there an official list of missing feedbacks?
Paula Thomas says
One question. Are the models sophisticated enough to take account of effects on the boundary conditions of previous cycles? e.g. temperature in winter must have an effect on CO2 emissions and therefore CO2 levels in the next cycle.
[Response: The models that include a carbon cycle and dynamic vegetation should have such effects – but this is still a rather experimental class of models. The ‘standard’ models impose a CO2 concentration derived from observations or in a scenario and wouldn’t have such a process. – gavin]
Gavin, how is climatic variability accounted for in the models?
Is it conceivable that the best current climate models, using only the basic laws of fluid thermodynamics, could reproduce climate variability such as ENSO, AMO, NAO, etc., or is there a need for parameterisation?
I think you agree that the ocean is a huge tank of coldness.
Its mean temperature is 3.5°C, which should be sufficient to neutralize several centuries of anthropogenic greenhouse effect.
Surely it’s difficult for this coldness to shift towards the surface, but even a very small part of it could have some surface effects.
It seems, for example, that there are some surprising effects in the Southern Ocean (strengthening westerlies and decreasing middle-latitude SSTs).
So isn’t the ocean one of the biggest problems for current models?
(I apologize for my english)
[Response: The internal variability is an emergent property of the whole package. For instance, all models show variability in the ocean temperatures in the tropical Pacific – but the magnitude and spectra of that variability depends a lot on the resolution, and how mixing is handled in the upper ocean etc. The oceans are a difficult part of the system for a number of reasons (mainly that the scale at which important things happen (bottom currents, eddies, western boundary currents) is quite small relative to similar processes in the atmosphere). However, the ocean is very strongly stratified, and the interaction with the bulk of the deep cold water is very slow – it is generally the upper ocean that determines the time scale for the transient warming we might expect. – gavin]
I have a question regarding climate sensitivity and momentum. There is still a rather broad range of expected equilibrium global temperature response for CO2 doubling of between 2 to 4.5 degree C.
CO2 levels have been rising about 0.04%/yr.
Global temperatures are trending about 0.016 C/yr, but only about 75% of that is due to CO2. So, perhaps the amount of CO2 warming is only 0.012 C/yr.
This implies a sensitivity of about 3 degree C per doubling which is very close to the expected mid range. On the other hand, if the upper end of the sensitivity range is correct, then that implies there is a lot of momentum in the system.
So, my question is how much momentum do the models generally predict and is it inversely related to their sensitivity values?
[Response: (Momentum is not really the right word – ocean thermal inertia is a better description). There is actually a very strong connection – the bigger the sensitivity, the longer the adjustment time. This is one of the reasons why the 20th C changes haven’t been very useful at constraining the higher end of the possible sensitivities. – gavin]
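The proportionality gavin describes falls out of even a one-box energy balance model, C dT/dt = F − (F2x/S)·T, whose e-folding time is τ = C·S/F2x. A minimal sketch, with an assumed mixed-layer heat capacity (the specific numbers are illustrative only, not output from any GCM):

```python
# One-box energy balance: the feedback parameter is lam = F2x / S, so
# the adjustment time tau = C / lam = C * S / F2x grows linearly with
# the equilibrium sensitivity S. C here is an assumed heat capacity
# for a ~100 m ocean mixed layer per unit area.
F2x = 3.7                  # W/m^2, forcing for doubled CO2
C = 4.2e8                  # J/(m^2 K), ~100 m of ocean mixed layer
SECONDS_PER_YEAR = 3.156e7

for S in (2.0, 3.0, 4.5):              # equilibrium sensitivity, K
    lam = F2x / S                      # feedback parameter, W/(m^2 K)
    tau_years = C / lam / SECONDS_PER_YEAR
    print(f"S = {S} K  ->  tau ~ {tau_years:.0f} years")
```

The higher-sensitivity cases take proportionally longer to equilibrate, which is why the 20th-century record struggles to rule out the upper end of the range.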
Gavin, you stated in the article,
“Weather models develop in ways that improve the short term predictions, though the impact for long term statistics or the climatology needs to be assessed independently. Curiously, the best weather models often have a much worse climatology than the best climate models.”
Intuitively, it might seem that models which are good at resolving the physics on short time scales should not be worse in a climatological sense. In your view, are there any clearly identifiable reasons why this should be the case?
[Response: Yes. Errors in radiative or surface fluxes don’t influence baroclinic instability very much (which is a dynamical thing). When you re-initialise a weather model every 6 hours, errors in temperatures/humidity that arose from the errors in fluxes will get corrected. But, if you let the model run freely they don’t, and thus you end up with models that are horribly out of energy balance – and that leads to bad climatologies. I’d be happy to have anyone from the NWP community chime in and expand or correct this interpretation though. – gavin]
If climate sensitivity and thermal inertia are strongly connected, then that implies two extreme possibilities since the recent rate of warming is currently near the middle of the range:
At the low end of sensitivity, we are living in a period of over reaction by the climate and the rate of warming should tend to revert lower towards the equilibrium value.
At the high end of sensitivity, we are in store for significantly more warming for an extended duration.
My hope would be that the science will advance to narrow the range so they are not so extreme.
So, a follow-up question is what areas of research are available to narrow the range?
David B. Benson says
Jim Morrison (9) — I’m an amateur here, but I think you are taking a wrong approach. Ocean oscillations are not predictable in any strong sense, so you should use lots of runs with different internal variability patterns. This will give you a range of results to establish some form of error bounds on the parameters of interest, temperature and precipitation I suppose.
Dr. Spencer has posted a pre-publication paper at http://www.drroyspencer.com/research-articles/satellite-and-climate-model-evidence/ where his abstract says:
What is your assessment of the technique he uses?
What would be the impact on the future development of your model, GISS Model E?
[Response: Spencer’s critique has not been published in the peer reviewed literature and so it is difficult to know what he has done. From the figures he has shown he is using different averaging periods for the data and the models (12 month running mean vs. 91 month running mean) and it is not stated whether he is looking at analogous periods. Comparing models to observations is perfectly fine, but the comparison has to be apples-with-apples and the analysis has to be a little more sophisticated than saying ‘look at the lines’ (or ‘linear striations’). His contention that models were built incorrectly because of a mis-interpretation of cloud data is completely bogus. – gavin]
Andrew (#25): I think one key for untangling climate system inertia and climate sensitivity is to improve our understanding of how heat is entering the oceans. If we knew ocean heat uptake as well as we know atmospheric temperature change, then we could pin down fairly well the radiative imbalance at the top of the atmosphere, which would give us a fair indication of how much warming is ‘in the pipeline’ given current greenhouse gas concentrations.
The problem is that our understanding of that budget is still in flux (see http://earthobservatory.nasa.gov/Features/OceanCooling/page4.php for one discussion).
Alternatively, more direct observations of that radiative imbalance would be nice, or better theoretical and observational understanding of the water vapor and cloud feedbacks, or more paleoclimate data which can give us constraints on historical feedbacks, but my guess is that ocean heat content measurements would be the best near term bet for improving our understanding of this issue.
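The conversion this comment relies on is simple: an observed ocean heat uptake rate pins down most of the planetary radiative imbalance. A sketch, where the uptake rate is an illustrative placeholder rather than a measured value:

```python
# Convert an assumed ocean heat gain (J/yr) into an implied
# top-of-atmosphere radiative imbalance (W/m^2), averaged over
# the whole Earth's surface.
ohc_trend_J_per_year = 8e21        # assumed ocean heat gain, J/yr (placeholder)
earth_area = 5.1e14                # m^2
SECONDS_PER_YEAR = 3.156e7

imbalance = ohc_trend_J_per_year / SECONDS_PER_YEAR / earth_area
print(f"implied radiative imbalance ~ {imbalance:.2f} W/m^2")
```

With a well-measured heat budget, this one-line conversion is what would tell us how much warming is “in the pipeline”; the real difficulty, as noted above, is nailing down the input number.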
Geoff Beacon says
I have contacted Don Blake of the University of California, Irvine. He says
I am struck by “if emissions of methane to the atmosphere were decreased then concentrations of methane in the atmosphere would soon begin to decrease”.
I have a long-distant background in physics, and that leads me to feel the converse is also likely, i.e. “if emissions of methane to the atmosphere were increased, concentrations would quickly increase”.
As we used to say, anything else doesn’t smell right.
Has anyone a better nose?
Jim Dukelow says
In #23 Andrew wrote:
“CO2 levels have been rising about 0.04%/yr.”
Looking at the Keeling curve, CO2 levels increased from approximately 368 ppmv at the start of 2000 to approximately 378 ppmv at the end of 2004. That is a 2 ppmv increase per year on a base of approximately 370 ppmv, or an increase of 0.54% per year.
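The arithmetic is easy to reproduce:

```python
# Jim Dukelow's correction, from the Keeling curve numbers he quotes.
start, end, years = 368.0, 378.0, 5.0   # ppmv at start of 2000 / end of 2004
ppmv_per_year = (end - start) / years
pct_per_year = ppmv_per_year / start * 100
print(f"{ppmv_per_year:.1f} ppmv/yr, about {pct_per_year:.2f} %/yr")
```

So the growth rate is roughly half a percent per year, not 0.04%/yr as stated in the earlier comment.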
T Gannett says
I have a few questions, probably infrequently asked, that I hope someone has answers to. Does anyone know what the fluorescence quantum yield is for v(1) to v(0) for the CO2 15 µm line? I would like to get an idea of how much of the energy a CO2 molecule acquires when absorbing a 15 µm photon ends up re-emitted as an infrared photon. The rest of the energy will end up partitioned between translational, rotational and vibrational states. This raises another question: for a collection of CO2 molecules, say at 20 °C, what proportion of the molecules are in the various excited vibrational states accessible to CO2? Anyone know? Answers can be sent to email@example.com. Thanks.
David B. Benson says
Andrew (25) — The prospect for significantly narrowing the uncertainty in climate sensitivity in the near term does not appear good, IMO. However there are two excellent papers by Annan & Hargreaves you may wish to study. For one of them, there is an earlier thread here on RealClimate.
Pat Neuman says
I think the early Cenozoic, with its much higher concentrations of greenhouse gases and much warmer global climate, needs more attention here.
… “The extreme case is the Early Eocene Climatic Optimum (EECO), 51–53 million years ago, when pCO2 was high and global temperature reached a long-term maximum. Only over the past 34 million years have CO2 concentrations been low, temperatures relatively cool, and the poles glaciated. …”
Excellent post. Informative.
Bryan S says
Back to the thermal inertia question and using 20th century changes to constrain sensitivities. Suppose we doubled CO2 instantly. Now consider the transient behavior of the temperature increase needed to fully equilibrate this forcing change plus feedbacks. Assuming an equilibrium sensitivity of 3C, what percentage of the total equilibrium temperature increase would have occurred after 1, 10, 100, 1000 years? Put another way: if we plot the transient temperature response on a semi-log graph, are there any relevant observations?
If the majority of the temperature response takes place in only a few years, with the remaining small fraction taking place over hundreds to several thousands of years, then the long thermal lag time is not all that relevant. The remaining temperature rise left “in the pipeline” would be small and spread out over such a length of time that the signal would be swamped by natural variability.
Due to the limited mass of the components of the climate system which are effectively coupled to the atmosphere, would it not seem that much of the temperature response would occur rapidly (be frontloaded), followed by a very long tail as heat slowly “leaks off” below the thermocline into the almost impermeable deep ocean (where most of the mass resides)?
The statement that the sensitivity is proportional to the time constant would seem obvious, since for a given rate of heat input, it will take longer to increase the temperature 5C than 1C. Based on the nature of the transient temperature response (a function of the heat capacities of the various components), however, exactly what is meant by sensitivity (equilibrium vs pseudo-equilibrium?) and what is meant by time constant (which one?) may require better definition.
Can these issues be better explored by carefully comparing model experiments to observations?
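As an illustration only (not output from a real GCM), a toy two-box model, an upper-ocean box coupled to a deep-ocean box through an assumed exchange coefficient, shows exactly the frontloaded-plus-long-tail shape Bryan describes. All coefficients here are assumptions chosen for plausibility:

```python
# Toy two-box response to an instantaneous CO2 doubling.
# Upper box: ~100 m mixed layer. Deep box: ~1500 m of ocean.
# gamma is an assumed heat-exchange coefficient between the boxes.
F = 3.7            # W/m^2, forcing step for 2xCO2
S = 3.0            # K, assumed equilibrium sensitivity
lam = F / S        # W/(m^2 K), feedback parameter
C_u = 4.2e8        # J/(m^2 K), upper-ocean box
C_d = 6.3e9        # J/(m^2 K), deep-ocean box
gamma = 0.7        # W/(m^2 K), assumed upper-deep exchange
dt = 3.156e7 / 12  # one month, in seconds

T_u = T_d = 0.0
results = {}
for month in range(1, 12001):                       # integrate 1000 years
    dT_u = (F - lam * T_u - gamma * (T_u - T_d)) / C_u * dt
    dT_d = gamma * (T_u - T_d) / C_d * dt
    T_u += dT_u
    T_d += dT_d
    if month in (12, 120, 1200, 12000):
        results[month // 12] = T_u / S              # fraction of equilibrium
for yr, frac in results.items():
    print(f"after {yr:>4} yr: {100 * frac:.0f}% of equilibrium")
```

With these assumed numbers, a large fraction of the surface response arrives within a decade or two, and the remainder leaks in over centuries as the deep box slowly warms, consistent with the “long tail” intuition above.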
Have any of your opinions of global warming changed in any way and if so could you explain? Thank you.
“Can these issues be better explored by carefully comparing model experiments to observations?”
How do we put a pulse of CO2 that is visible in the records and then let NOTHING ELSE change? So observations won’t verify anything.
Please think about how it would be practical to do before asking “couldn’t we…?” all the time. It’s about as helpful as saying “couldn’t we remove world poverty by taking the money from the rich people and giving it out equally to the world?”.
I have a question on the influence of the Coriolis force on the latitudinal energy transport. I suppose the latitudinal energy transport is reduced by the Coriolis force, especially away from the tropics. In the Palaeozoic the day was about 22 h. How much would the latitude-dependent temperature change if the day today were 22 h instead of 24 h?
Bart Verheggen says
Andrew (23) and Bryan (35):
The problem is that climate sensitivity and thermal inertia could be traded off mathematically in producing a decent match with the observed temperature record of the 20th century (because it’s out of equilibrium; in an equilibrium situation, the thermal inertia wouldn’t play as important a role anymore). Even more, the net forcing isn’t very accurately known either, mainly because of the uncertainties in aerosol forcing.
A larger negative aerosol forcing (and thus a weaker net positive forcing) would need to be combined with a higher climate sensitivity and/or a shorter ocean response time in order to still provide a good match, and vice versa. Of course, there are other constraints on these processes as well that have to be taken into account. Hansen for example suggested (at the AGU in dec 2008) that climate sensitivity is known more accurately than the other two quantities, whereas the more often heard trade-off (correct me if I’m wrong) is between aerosol forcing and sensitivity.
Bert (38) (and others in this discussion): You might be interested in Figure 2 in the following Stott et al. paper; the paper addresses different climate modelling attempts to use past data to constrain future scenarios:
My guess, having missed this AGU, would be that Hansen’s “better constrained climate sensitivity” would be due more to paleoclimate data than to 20th century data, where the potential masking of heating from aerosols and ocean uptake is too large to fully constrain the upper bound of sensitivities…
There is a new study that shows the climate models referenced in the IPCC 4th report were wrong about Antarctic temperatures. What adjustments are needed to correct for errors in Antarctic modeling and how will that change the current projections from those in the IPCC 4th Report?
Twentieth century Antarctic air temperature and snowfall simulations by IPCC climate models. Andrew Monaghan, David Bromwich, and David Schneider. Geophysical Research Letters, April 5, 2008
Will this lead to better climate models?
yves fouquart says
Although I am a regular reader of RC, this is the first time I post.
I used to be active in the radiation field; indeed, I co-chaired the first ICRCCM study (Intercomparison of Radiation Codes for Climate Models).
At that time, we had a rather long discussion about whether or not a radiation code was a parameterization.
We concluded that it was not, because the basic physics is known. What is done in radiation codes is APPROXIMATION, which is fairly different from cloud parameterizations, for instance, since in that case some physics is bypassed because it works at a smaller scale. Unless things have changed a lot since I retired, cloud parameterizations are not simply an approximation of cloud-resolving models.
Nonetheless, this is more of a detail, and this is quite a good post. Thanks for all the work you do here.
Phil. Felton says
Does anyone know what the fluorescence quantum yield is for v(1) to v(0) for the CO2 15um line. I would like to get an idea of how much of the energy a CO2 molecule acquires when absorbing a 15um photon ends up re-emitted as an infra-red photon.
I don’t have a numerical value but in the atmosphere at ~100kPa it’s below 0.001.
The rest of the energy will end up partitioned between translational, rotational and vibrational states. This raises another question. For a collection of CO2 molecules, say at 20′C, what proportion of the molecules are in the various excited vibrational states accessible to CO2?
I’m not sure what you mean in the last question.
Philip Machanick says
Thanks for including my “What do you mean when you say model has “skill”?” question including the grammar error :) It should of course be “… a model has “skill”?”
No need to post this.
Philip Machanick says
Here’s another one. You briefly mention El Niño in some answers. My understanding is that El Niño and La Niña are heat transfers between the ocean and atmosphere, i.e., from one part of the system to another, that affect short-term temperature but not the long-term trend, because they do not alter the overall energy balance.
Is this correct?
In any case, answering a question something like “What is the effect of El Niño and La Niña on long-term trends?” would be useful.
Pat Neuman #33: Bob Carter makes a big deal of how the early Cenozoic had much higher CO2 but the planet was teeming with life. The paper you link to is a good answer. Note particularly the evidence of lowered ocean oxygen. I’ve been told that current models exclude anoxic oceans as a future possibility.
That leads to another question: “What can we learn from the relationship between past extinction events and climate change?”
Hank Roberts says
Mare asked 8 January 2009 at 2:31 AM
“Have any of your opinions of global warming changed…”
This is a good summary:
T. Gannett says
Phil. Felton #43
Thanks for the reply. The value you provide for the CO2 IR fluorescence quantum yield is plenty good for my purposes. If it is correct, then the IR radiation emitted from the earth’s surface and absorbed will be nearly completely thermalized and not re-emitted, i.e. it will heat the air. It can be reasonably calculated from available extinction coefficients and CO2 concentration that >99% of the IR photons emitted by the earth’s surface that can be absorbed by CO2 will be absorbed in the first 100 m. With >99% of that energy being thermalized, it won’t be retransmitted to the earth’s surface as IR. This leads to my 2nd question. Unless CO2 at temperatures near the surface has a reasonable population of vibrationally excited molecules, the heat generated by IR absorption cannot be redistributed radiatively. This last statement requires that the atmosphere not be able to lose energy via black-body radiation, or to do so only poorly.
Hank Roberts says
> will be absorbed in the first 100m ….
Hank Roberts says
T. Gannett, I wonder if this is where you’re headed:
Bulletin of the American Meteorological Society
Earth’s global energy budget
Kevin E. Trenberth, John T. Fasullo, Jeffrey Kiehl
“…This article provides an update on the Kiehl and Trenberth (1997) article on the global energy budget that was published in BAMS. A figure showing the global energy budget in that paper is widely used and appears in many places on the internet. It has also been reproduced in several forms in many articles and books. But it is dated. A primary purpose of this article is to provide a full color figure to update this work. At the same time, we expand upon it somewhat by detailing changes over time and aspects of the land vs ocean differences in heat budgets that should be of general interest. We also expand on the discussion of uncertainty and the remaining challenges in our understanding of the budget. …”
The image is _very_ familiar. But the updated image hasn’t shown up much yet.
Ray Ladbury says
T. Gannett, Do the math. The Maxwell-Boltzmann distribution says that roughly 0.8% of molecules will be sufficiently energetic even at 200 K.
Now think about the physics: If the energy isn’t being emitted radiatively, then it’s going into heating the atmosphere, which heats up until there is in fact a significant vibrationally excited population.
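Ray’s ~0.8% figure can be checked with the Boltzmann factor for the first excited state of CO2’s bending mode (667 cm⁻¹, i.e. the 15 µm band). A minimal sketch, ignoring the mode’s twofold degeneracy (which would roughly double these fractions):

```python
# Boltzmann population of the v=1 bending state of CO2, relative to
# the ground state: exp(-h*c*nu / (k*T)) = exp(-c2*nu / T), where
# c2 ~ 1.4388 K cm is the second radiation constant.
import math

nu = 667.0                  # cm^-1, CO2 bending mode (15 um band)
c2 = 1.4388                 # K cm, second radiation constant
for T in (200.0, 293.0):
    frac = math.exp(-c2 * nu / T)
    print(f"T = {T:.0f} K: {100 * frac:.1f}% in the first excited state")
```

At 200 K this gives roughly 0.8%, matching Ray’s number, and at surface temperatures (~293 K, the 20 °C asked about above) it rises to a few percent: more than enough excited molecules for the air to radiate in the 15 µm band.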