FAQ on climate models: Part II

As before, if there are additional questions you’d like answered, put them in the comments and we’ll collate the interesting ones for the next FAQ.


191 comments on this post.
  1. Sean Dorcy:

    I have two questions. Are the assumptions/unknowns the reason many climate models are conservative in their predictions, causing them to fall short of what has actually occurred with the climate?

    Shouldn’t actual physical evidence be placed ahead of what a climate model states when trying to establish that climate change is real?

    [Response: If we knew why models were not perfect, we’d fix them. Your second question doesn’t make much sense. Climate models don’t prove anything on their own. It is the match up of theory, observations and simulations that allows us to attribute causes and effects and make predictions about what would happen with or without various causes. The evidence from those exercises is pretty convincing that anthropogenic climate change (which is what I presume you mean) is ongoing – but that is a conclusion that comes from consideration of the actual physical evidence. Your question presumes a distinction that doesn’t exist. – gavin]

  2. Kevin McKinney:

    A very meaty and helpful post. Many thanks once again. I will be linking to this!

  3. hmm:

    Sean Dorcy,
    Disclaimer: I am not an expert
    You mention that you believe conservative models’ projections have fallen short of actual occurrences. I am wondering where you derive the assumptions that a) the projections are conservative and b) the projections have fallen short of actual occurrences?

    If you’re talking about global mean temperature, I would advise you to compare the projections of the IPCC to the actual measurements from GISS, as well as the HadCRUT, RSS MSU, and UAH MSU data. If that is the parameter you are saying has fallen short of actual measurements, I would say it hasn’t. There are some who would argue that the projections are too aggressive, and the best argument is probably that we don’t know quite enough yet whether they are too aggressive or too conservative…

    You can see what CO2 concentration has looked like over the years here (represented by Mauna Loa Observatory) and compare it to the different scenarios assumed in each IPCC projection which then averages the output of the models:

    If your question has to do with melting ice, I would note that certain wind and ocean circulation effects (not just global warming) have added greatly to the short-term melting of Arctic ice over the last couple of years, and a long-term ice projection does not apply to such short-term effects.
    In other words, we read in the press that this melt was caused by global warming effects exceeding projections, but it would be more accurate to say we are seeing natural effects superimposed on global warming effects over a time frame too short for projections to be made specifically. For instance, if ice rebounds to 1979 levels in 2009 to 2011, that doesn’t disprove global warming theory, just as the last few years of low ice levels didn’t prove it. You need to look at longer-term trends against longer-term projections. Don’t be surprised if the press isn’t able to give you good scientific information when you hear about projection parameters being exceeded.

    If your question has to do with storm/hurricane numbers and intensity, I would note that this is not exactly settled. The last I saw from NOAA was global warming decreasing numbers but increasing intensities:
    However, the methods and equipment for measuring occurrences and intensity have improved so much that we’re not exactly comparing apples to apples when we calibrate today’s numbers against those from 70 years ago to quantify a correlation, its effect, and provide a projection.

    In other words, there’s a lot of good science in the models, enough to provide insight into how the climate interacts and to make current “best bet” projections, but remember:
    a) models will certainly be modified in the future. It is not out of the question for projection results to change a little, or possibly even a lot, in either direction.
    b) models can’t be used for comparison with short-term trends, which contain volatile natural variability; short-term trends versus models neither prove nor disprove the models or the underlying theory (see the sketch below).
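
    To illustrate point (b) numerically, here is a minimal sketch (my own illustration with invented numbers, not output from any model): it generates a synthetic series with a fixed warming trend plus AR(1) noise, then compares the spread of 5-year versus 30-year least-squares trend estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # All numbers are invented for illustration.
    true_trend = 0.02       # deg C per year
    phi, sigma = 0.6, 0.1   # AR(1) persistence and innovation std dev
    n_years, n_sims = 100, 2000

    def ar1_noise(n):
        """Simple AR(1) 'weather' noise."""
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
        return x

    def window_trends(window):
        """Least-squares trend estimates from randomly placed windows."""
        t = np.arange(window)
        slopes = np.empty(n_sims)
        for i in range(n_sims):
            series = true_trend * np.arange(n_years) + ar1_noise(n_years)
            start = rng.integers(0, n_years - window + 1)
            slopes[i] = np.polyfit(t, series[start:start + window], 1)[0]
        return slopes

    for w in (5, 30):
        s = window_trends(w)
        print(f"{w:2d}-yr windows: mean {s.mean():+.4f}, spread (std) {s.std():.4f} deg C/yr")
    ```

    With these made-up parameters, the spread of the 5-year trend estimates dwarfs the underlying 0.02 deg C/yr trend, while the 30-year estimates cluster tightly around it, which is exactly why short windows neither prove nor disprove the models.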

  4. Sean:

    What a great post! There are many issues addressed here which are common skeptic claims, as well as sources of unease amongst nonexperts. Thanks so much for all the work you do!

  5. Arthur Smith:

    Gavin – nice collection of questions and answers!

    Since you bring up the thermohaline circulation, a question I have been recently pondering is why is the deep ocean so cold? After all, if average surface temperature is 15 C, wouldn’t you expect land and ocean below the surface to equilibrate at roughly that temperature (with a slightly rising gradient to account for the flow of Earth’s internal heat)?

    The best simple answer I’ve seen is basically that you have to go to a 2-box model of Earth, with warm tropics and cold poles, and then realize that thanks to the thermohaline circulation the deep oceans are coupled almost exclusively to the polar regions, and so are in the “cold” box and not the warm one or some average of them.

    However, the question then is – can this switch? Are there boundary conditions or any stable solution that would couple the deep oceans to the “hot” box rather than the cold one? If that ever happened what would it imply for surface temperatures?

  6. Julius St Swithin:

    I have a comment on paleo data.

    Proxy temperature data are calculated in the form of:
    Temperature = a + b * x, where ‘x’ is something like tree ring thickness or an O18/O16 ratio. Unless r² is 1, the proxy temperature data will always have a lower standard deviation than the measured data. In the limit, if r² is 0, the proxy will have the value of the mean of the observed calibration data and zero standard deviation. I know that more sophisticated regression methods are employed, but similar problems are unavoidable. What is more, different proxies will have different smoothing effects; the thickness of a ring is in part a function of how well the tree grew the previous year; gas migrates between layers of snow before they become consolidated. Mixing proxies will therefore further suppress the variance of the proxy data.

    If observed data from recent years are added to proxy data from earlier years to create a long-term series, what steps are taken to ensure that the artificially low variance of the proxy data, compared to the true variance of the observations, does not produce a distorted picture?
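
    To make the attenuation argument above concrete, here is a minimal sketch (plain ordinary least squares with invented numbers; as the responses below note, actual reconstructions use more sophisticated approaches):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic calibration data: "true" temperatures and a noisy proxy.
    n = 500
    T = rng.normal(0.0, 1.0, n)                 # observed temperatures, std = 1
    proxy = 0.8 * T + rng.normal(0.0, 0.6, n)   # invented proxy relationship

    # Calibrate Temperature = a + b * x by least squares and "reconstruct".
    b, a = np.polyfit(proxy, T, 1)
    T_hat = a + b * proxy

    r2 = np.corrcoef(proxy, T)[0, 1] ** 2
    print(f"r^2                   = {r2:.3f}")
    print(f"std of observed T     = {T.std():.3f}")
    print(f"std of reconstruction = {T_hat.std():.3f}")
    # For simple OLS, var(T_hat) = r^2 * var(T): the reconstruction's
    # variance shrinks with r^2, which is the attenuation described above.
    ```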

    [Response: You have a very limited concept of how paleodata is used. For instance, in almost all of my work I have emphasised the need to forward model proxy data within climate simulations so that all of the processes by which proxy data is recorded can be simulated, rather than using the inverse approach you discuss. Multiple proxies which have different systematic errors can also be used to isolate underlying signals. – gavin]

    [Response: adding to the above, in most actual proxy-reconstruction studies (including those of my group’s) the unresolved variance is of course one of the central quantities looked at. It is used to define the uncertainties in the reconstructions, i.e. those error bars you typically see in association with proxy-reconstructed quantities are telling you the envelope of uncertainty within which any comparisons to modern instrumental data should be made, based on the variance that is not resolved by the paleoclimate data (in both calibration and, importantly, cross-validation tests). They are a crucial guide to the interpretation of comparisons of the reconstructed past w/ the modern instrumental record. This above stuff is basic, and should be clear from even a cursory reading of the peer-reviewed literature in this area. I would suggest you review that literature, e.g. start with the IPCC AR4 chapter (6) on paleoclimate. -mike]

  7. Jim Bouldin:

    Gavin, thanks for yet another very helpful article, though I admit I’ve not read all of it. As for additional topics, perhaps a brief explanation on why confidence in attribution (and prediction) of temperature change is strongest at large scales and weakest at small scales, ie something about the issue of signal to noise relative to spatial scale.

  8. JacquesLB:

    Arthur: Deep ocean temperature is fixed by the compressibility properties of water. Although the variation is small, water happens to be densest at 4°C. At 4000 m beneath the surface, the pressure is roughly 400 atmospheres, which is enough to force water into its state of maximal density, hence a fixed temperature. This is why the deep ocean always remains liquid (any other liquid would turn solid), and some say it is a necessary (although not sufficient) condition for the development of life on Earth.

  9. Jim Morrison:

    As a consumer of GCM results for use in natural resource planning, your FAQs are very helpful. Thanks. I have another set of questions I hope you may be able to help me with. Do GCMs skillfully simulate significant patterns of natural variability? More specifically, when examining GCM output, or multi-model mean data from CMIP3 analyses, specific to the western US for the next 5 decades, is it appropriate to mentally overlay anticipated PDO or ENSO oscillations? Or should I assume these patterns are already included in the projections? Could GCM projections substantially overestimate temperature trends for the western US if PDO shifts from its current warm phase to a cool phase? These questions are relevant to meso-scale (1-5 decades; local and regional extent) adaptation proposals and decisions. Randall et al. (AR4 WGI Chapter 8) doesn’t provide a clear answer to these questions.

  10. David B. Benson:

    Gavin — “opening the isthmus of Panama?”

    Should not that read “closing”?

    [Response: Depends on your point of view. – gavin]

  11. Luke:

    Thanks for the excellent article above. For the next FAQ, as well as the ENSO and PDO/IPO mentioned above, I would also like to hear about modelling of phenomena like the Southern Annular Mode and the Indian Ocean Dipole. The underlying issue is both model completeness and how much these phenomena might move future projections around. I am additionally interested in land surface/biospheric feedbacks.

  12. Arthur Smith:

    JacquesLB (#8) – your argument only explains why the bottom of the ocean is not colder than it is, or indeed frozen at the bottom – colder water heads upwards and freezes at the surface. So the deep ocean coupled to the “cold box” can’t get much colder than 4 C. But it could easily be warmer with no violation of any laws of physics – a lot warmer. Why isn’t it, and are there any conditions for a planet similar to Earth under which the deep ocean could be much warmer?

    [Response: Indeed the ocean depths used to be a lot warmer – maybe 15 deg C during the Eocene for instance. The issue is where and how dense water is formed – today it is in the polar regions where you have freezing conditions and enormous heat fluxes to the atmosphere. In other times, with warmer poles, or perhaps very salty tropics, you could make deep water with very different properties. It only needs to be denser than other water at the surface. – gavin]

  13. ChuckG:

    What is the radiative impact of opening the isthmus of Panama? or the collapse of Lake Agassiz?

    I read (in Thin Ice, I believe; the book that soon brought me to RC & AGW) about the consequences of the development of the Isthmus of Panama on global climate. Why, then, “opening”?

  14. Eric Swanson:

    Re: #12

    Arthur Smith mentions the maximum density of water. It’s true that for pure water the maximum density occurs at a temperature of 4°C; however, for the oceans, the salt content is such that the maximum density is at the freezing point, −1.8°C. The coldest water is on the bottom because that’s the densest water. Of course, during winter, as the water freezes on the surface, the resulting sea ice is less dense, so it floats. And when sea water freezes, much of the salt is rejected in the process, which can cause the remaining water at the surface to become even denser and sink. This is the reason the water at the very bottom of the ocean originates around the Antarctic as a result of the yearly sea-ice cycle.

    E. S.

  15. oms:

    JacquesLB and Arthur: With respect to the temperature in the deep ocean, I would like to point out that the oceans are filled with seawater, not fresh water.

  16. Patrick 027:


  17. Arthur Smith:

    Gavin (#12 response) – ah, but we’re headed to times of warmer poles (with higher fresh water content from melting ice, as opposed to higher salt levels from freezing), and likely saltier tropics due to higher evaporation levels (or do tropical precipitation increases balance that?) – so has anybody modeled where the tipping point might be to a switch to tropical coupling, as opposed to polar coupling, and what the impact would be in a world with high CO2?

    The paleoclimate record (8.2 kyr, and earlier “large lake collapses”) shows a dramatic drop in surface temperatures for a substantial period of time when the ocean circulation shuts off or changes, but is that actually what would be expected under these warming conditions? How long would be required to warm the deep oceans? At 4 W/m^2 and about 1 billion cubic km of ocean to warm by 10 C, I think that comes to 600 or 700 years. My guess is it might lead to relatively stable surface temperatures during this warming period, but steadily rising sea levels as the ocean expands?
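
    For what it’s worth, the back-of-the-envelope number above checks out; a quick sketch with the same rounded inputs:

    ```python
    # Rough check of the 600-700 year figure in the comment above.
    rho, c_p = 1000.0, 4000.0    # seawater density (kg/m^3) and heat capacity (J/kg/K)
    volume = 1.0e9 * 1.0e9       # 1 billion km^3 expressed in m^3
    delta_T = 10.0               # assumed deep-ocean warming (K)
    forcing = 4.0                # W/m^2, taken over the full Earth surface
    earth_area = 5.1e14          # m^2
    seconds_per_year = 3.15e7

    energy_needed = rho * c_p * volume * delta_T       # ~4e25 J
    power_in = forcing * earth_area                    # ~2e15 W
    print(f"~{energy_needed / power_in / seconds_per_year:.0f} years")  # ~620 years
    ```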

  18. Andrew:

    Concerning paleoclimate: the 8.2 ka event involved the Laurentide ice sheet, which is long gone. As such, it will have limited applicability to the future.

    We seem to be heading for a climate more like the Miocene. Antarctica was and will be around, but if it becomes surrounded by lots of melt water it may stop driving deep ocean currents.

    Then the only driver of deep ocean currents will be evaporation, and it’s not clear whether that is sufficient to maintain the mixing. That could isolate the surface waters from the deep and result in accelerated warming.

  19. Edward Greisch:

    Thanks for a great post. The following is not for myself but for the many people out there who have the weather/climate confusion. I understand that the methods of the weather bureau are so different from the methods of climatology that there is a huge and, at this time, unfillable gap between the two. Weather is short term, like days. Climate is long term, like centuries.

    It is precisely in the gap that a forecast would be most beneficial to most people. Therefore, people get frustrated trying to argue you into filling in the gap. They want you to combine weather prediction and climatology into a science that accurately predicts next year and the next 5 years. That is their planning horizon. After all, you and the weather forecasters use big computers and data that sound the same. You even mentioned weather a lot in your article.

    It is only when you actually try to fill in the gap that you, the scientists, become so frustrated that you give up on that project. Since the average person has no experience with trying to solve mathematical problems and no experience with computationally intensive computer programs, he does not understand your refusal to do that which you cannot.

    I think that this may be a part of the problem with denialists. Of course, the denialists, in general, and the people who listen to them, have other problems or agendas.

  20. Geoff Beacon:

    Quote from the Hadley Centre a year or so ago:

    “The CH4 (and CO2) permafrost feedback isn’t included in current Earth System Models and it is potentially large but no-one really knows.”

    Anyone know of any progress?

    We do need estimates for policy making. It’s not much use having exquisite climate models that model the wrong reality.

    I think the FAQs should at least have a section “What feedbacks are missing?” We ought to be told what their probable impacts are.

    “Not known” is a better answer than none.

    Is there an official list of missing feedbacks?

  21. Paula Thomas:

    Good post!!

    One question. Are the models sophisticated enough to take account of effects on the boundary conditions of previous cycles? e.g. temperature in winter must have an effect on CO2 emissions and therefore CO2 levels in the next cycle.

    [Response: The models that include a carbon cycle and dynamic vegetation should have such effects – but this is still a rather experimental class of models. The ‘standard’ models impose a CO2 concentration derived from observations or in a scenario and wouldn’t have such a process. – gavin]

  22. pascal:

    Gavin, how is climate variability accounted for in the models?
    Is it conceivable that the best current climate models, using only the basic laws of fluid thermodynamics, could reproduce climate variability such as ENSO, AMO, NAO, etc., or is parametrization needed?
    I think you will agree that the ocean is a huge reservoir of cold water.
    Its mean temperature is 3.5°C, which should be sufficient to neutralize several centuries of anthropogenic greenhouse effect.
    Surely it is difficult for this cold water to shift towards the surface, but even a very small part of it could have some surface effects.
    It seems, for example, that there are some surprising effects in the Southern Ocean (strengthening westerlies and decreasing mid-latitude SSTs).
    So isn’t the ocean one of the biggest problems for current models?
    (I apologize for my English)

    [Response: The internal variability is an emergent property of the whole package. For instance, all models show variability in the ocean temperatures in the tropical Pacific – but the magnitude and spectra of that variability depend a lot on the resolution, how mixing is handled in the upper ocean, etc. The oceans are a difficult part of the system for a number of reasons (mainly that the scale at which important things happen (bottom currents, eddies, western boundary currents) is quite small relative to similar processes in the atmosphere). However, the ocean is very strongly stratified, and the interaction with the bulk of the deep cold water is very slow – it is generally the upper ocean that determines the time scale for the transient warming we might expect. – gavin]

  23. Andrew:

    I have a question regarding climate sensitivity and momentum. There is still a rather broad range of expected equilibrium global temperature response for CO2 doubling of between 2 to 4.5 degree C.

    CO2 levels have been rising about 0.04%/yr.
    Global temperatures are trending about 0.016 C/yr, but only about 75% of that is due to CO2. So, perhaps the amount of CO2 warming is only 0.012 C/yr.
    This implies a sensitivity of about 3 degrees C per doubling, which is very close to the expected mid-range. On the other hand, if the upper end of the sensitivity range is correct, then that implies there is a lot of momentum in the system.

    So, my question is how much momentum do the models generally predict and is it inversely related to their sensitivity values?

    [Response: (Momentum is not really the right word – ocean thermal inertia is a better description). There is actually a very strong connection – the bigger the sensitivity, the longer the adjustment time. This is one of the reasons why the 20th C changes haven’t been very useful at constraining the higher end of the possible sensitivities. – gavin]

  24. oms:

    Gavin, you stated in the article,

    “Weather models develop in ways that improve the short term predictions, though the impact for long term statistics or the climatology needs to be assessed independently. Curiously, the best weather models often have a much worse climatology than the best climate models.”

    Intuitively, it might seem that models which are good at resolving the physics on short time scales should not be worse in a climatological sense. In your view, are there any clearly identifiable reasons why this should be the case?

    [Response: Yes. Errors in radiative or surface fluxes don’t influence baroclinic instability very much (which is a dynamical thing). When you re-initialise a weather model every 6 hours, errors in temperatures/humidity that arose from the errors in fluxes will get corrected. But, if you let the model run freely they don’t, and thus you end up with models that are horribly out of energy balance – and that leads to bad climatologies. I’d be happy to have anyone from the NWP community chime in and expand or correct this interpretation though. – gavin]
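
    A toy illustration of the mechanism described in the response above (entirely schematic, with invented numbers): a one-box model carrying a small spurious flux bias stays near the truth when re-initialised every 6 hours, but drifts to a biased equilibrium, i.e. a bad climatology, when run freely.

    ```python
    # Schematic one-box "climate", anomaly form: C dT/dt = bias - lam * T.
    # "Truth" is T = 0; the model carries a spurious surface-flux bias.
    C = 3.0e8        # heat capacity of a ~75 m mixed layer, J/m^2/K (assumed)
    lam = 1.2        # feedback parameter, W/m^2/K (assumed)
    bias = 2.0       # erroneous flux, W/m^2 (assumed)
    dt = 6 * 3600.0  # 6-hour step, seconds

    def run(T, n_steps):
        """Integrate the biased model forward with simple Euler steps."""
        for _ in range(n_steps):
            T += dt * (bias - lam * T) / C
        return T

    # Re-initialised every 6 hours, as in weather forecasting: the flux
    # error never has time to accumulate.
    print(f"error after one 6-h forecast : {run(0.0, 1):.4f} K")

    # Run freely for 10 years: the model drifts toward its own biased
    # equilibrium, bias/lam.
    steps_10yr = int(10 * 365 * 24 / 6)
    print(f"drift after 10 free years    : {run(0.0, steps_10yr):.2f} K")
    print(f"biased equilibrium (bias/lam): {bias / lam:.2f} K")
    ```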

  25. Andrew:

    If climate sensitivity and thermal inertia are strongly connected, then that implies two extreme possibilities since the recent rate of warming is currently near the middle of the range:

    At the low end of sensitivity, we are living in a period of overreaction by the climate, and the rate of warming should tend to revert lower towards the equilibrium value.

    At the high end of sensitivity, we are in store for significantly more warming for an extended duration.

    My hope would be that the science will advance to narrow the range so they are not so extreme.

    So, a follow-up question is what areas of research are available to narrow the range?

  26. David B. Benson:

    Jim Morrison (9) — I’m an amateur here, but I think you are taking a wrong approach. Ocean oscillations are not predictable in any strong sense, so you should use lots of runs with different internal variability patterns. This will give you a range of results to establish some form of error bounds on the parameters of interest, temperature and precipitation I suppose.

  27. Vernon:

    Dr. Spencer has posted a pre-publication paper at http://www.drroyspencer.com/research-articles/satellite-and-climate-model-evidence/ where his abstract says:

    Three IPCC climate models, recent NASA Aqua satellite data, and a simple 3-layer climate model are used together to demonstrate that the IPCC climate models are far too sensitive, resulting in their prediction of too much global warming in response to anthropogenic greenhouse gas emissions. The models’ high sensitivity is probably the result of a confusion between forcing and feedback (cause and effect) when researchers have interpreted cloud and temperature variations in the real climate system.

    What is your assessment of the technique he uses?
    What would be the impact on the future development of your model, GISS ModelE?

    [Response: Spencer’s critique has not been published in the peer-reviewed literature and so it is difficult to know what he has done. From the figures he has shown, he is using different averaging periods for the data and the models (12 month running mean vs. 91 month running mean) and it is not stated whether he is looking at analogous periods. Comparing models to observations is perfectly fine, but the comparison has to be apples-with-apples and the analysis has to be a little more sophisticated than saying ‘look at the lines’ (or ‘linear striations’). His contention that models were built incorrectly because of a mis-interpretation of cloud data is completely bogus. – gavin]

  28. Marcus:

    Andrew (#25): I think one key for untangling climate system inertia and climate sensitivity is to improve our understanding of how heat is entering the oceans. If we knew ocean heat uptake as well as we know atmospheric temperature change, then we could pin down fairly well the radiative imbalance at the top of the atmosphere, which would give us a fair indication of how much warming is ‘in the pipeline’ given current greenhouse gas concentrations.

    The problem is that our understanding of that budget is still in flux (see http://earthobservatory.nasa.gov/Features/OceanCooling/page4.php for one discussion).

    Alternatively, more direct observations of that radiative imbalance would be nice, or better theoretical and observational understanding of the water vapor and cloud feedbacks, or more paleoclimate data which can give us constraints on historical feedbacks, but my guess is that ocean heat content measurements would be the best near term bet for improving our understanding of this issue.
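
    To make the “in the pipeline” logic above concrete, a one-formula sketch (the imbalance and sensitivity values are placeholders, not measurements):

    ```python
    # At equilibrium, a top-of-atmosphere imbalance N must be closed by
    # additional warming dT = N / lam.  All values are placeholders.
    N = 0.85            # hypothetical radiative imbalance, W/m^2
    F2x, S = 3.7, 3.0   # forcing per CO2 doubling (W/m^2), assumed sensitivity (K)
    lam = F2x / S       # implied feedback parameter, ~1.23 W/m^2/K
    print(f"warming still 'in the pipeline' ~ {N / lam:.2f} K")  # ~0.7 K
    ```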

  29. Geoff Beacon:

    I have contacted Don Blake of the University of California, Irvine. He says:

    The increase in methane concentrations was fairly constant during the late 1970s and throughout the 1980s. The concentration has been flat or slightly increasing during the last decade. The lifetime of methane is about 10 years, which is much less than CO2. Thus, if emissions of methane to the atmosphere were decreased then concentrations of methane in the atmosphere would soon begin to decrease. This is similar to what has happened with methyl chloroform relative to CFC-12. Methyl chloroform has a lifetime of about 5 years and CFC-12 has a lifetime of about 100 years. Both are gases that destroy stratospheric ozone and both have almost completely stopped being produced/used. The methyl chloroform concentrations in the atmosphere are now about 1/10 of what they were 20 years ago, while CFC-12 is only a few percent lower than it was 20 years ago.

    I am struck by “if emissions of methane to the atmosphere were decreased then concentrations of methane in the atmosphere would soon begin to decrease”.

    I have a long-distant background in physics, and that leads me to feel the converse is also likely, i.e. “if emissions of methane to the atmosphere were increased, concentrations would quickly increase”.

    As we used to say, anything else doesn’t smell right.

    Has anyone a better nose?

  30. Jim Dukelow:

    In #23 Andrew wrote:

    “CO2 levels have been rising about 0.04%/yr.”

    Looking at the Keeling curve, CO2 levels increased from approximately 368 ppmv at the start of 2000 to approximately 378 ppmv at the end of 2004. That is a 2 ppmv increase per year on a base of approximately 370 ppmv, or an increase of about 0.54% per year.

    Best regards.

    Jim Dukelow
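
    A quick check of that arithmetic (compound growth over the five years quoted):

    ```python
    # Check of the growth-rate arithmetic above.
    c0, c1, years = 368.0, 378.0, 5.0        # ppmv, start of 2000 to end of 2004
    rate = (c1 / c0) ** (1.0 / years) - 1.0  # compound annual growth rate
    print(f"{rate * 100:.2f}% per year")     # ~0.54%/yr, as stated
    ```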

  31. T Gannett:

    I have a few questions, probably infrequently asked, that I hope someone has answers to. Does anyone know what the fluorescence quantum yield is for v(1) to v(0) for the CO2 15 µm line? I would like to get an idea of how much of the energy a CO2 molecule acquires when absorbing a 15 µm photon ends up re-emitted as an infrared photon. The rest of the energy will end up partitioned between translational, rotational and vibrational states. This raises another question: for a collection of CO2 molecules, say at 20°C, what proportion of the molecules are in the various excited vibrational states accessible to CO2? Anyone know? Answers can be sent to gannett3@comcast.net. Thanks.

  32. David B. Benson:

    Andrew (25) — The prospect for significantly narrowing the uncertainty in climate sensitivity in the near term does not appear good, IMO. However, there are two excellent papers by Annan & Hargreaves you may wish to study. For one of them, there is an earlier thread here on RealClimate.

  33. Pat Neuman:

    I think the early Cenozoic, with its much higher concentrations of greenhouse gases and much warmer global climate, needs more attention here.

    … “The extreme case is the Early Eocene Climatic Optimum (EECO), 51–53 million years ago, when
    pCO2 was high and global temperature reached a long-term maximum.
    Only over the past 34 million years have CO2 concentrations been
    low, temperatures relatively cool, and the poles glaciated. …


  34. jcbmack:

    Excellent post. Informative.

  35. Bryan S:

    Back to the thermal inertia question and using 20th century changes to constrain sensitivities. Suppose we doubled CO2 instantly. Now consider the transient behavior of the temperature increase needed to fully equilibrate this forcing change plus feedbacks. Assuming an equilibrium sensitivity of 3C, what percentage of the total equilibrium temperature increase would have occurred after 1, 10, 100, 1000 years? Another way: if we plot the transient temperature response on a semi-log graph, are there any relevant observations?

    If the majority of the temperature response takes place in only a few years, with the remaining small fraction taking place over hundreds to several thousands of years, then the long thermal lag time is not all that relevant. The remaining temperature rise left “in the pipeline” would be small and spread out over such a length of time that the signal would be swamped by natural variability.

    Due to the limited mass of the components of the climate system which are effectively coupled to the atmosphere, would it not seem that much of the temperature response would occur rapidly (be frontloaded), followed by a very long tail as heat slowly “leaks off” below the thermocline into the almost impermeable deep ocean (where most of the mass resides)?

    The statement that the sensitivity is proportional to the time constant would seem obvious, since for a given rate of heat input it will take longer to increase the temperature 5C than 1C. Based on the nature of the transient temperature response (a function of the heat capacities of the various components), however, exactly what is meant by sensitivity (equilibrium vs pseudo-equilibrium?) and what is meant by the time constant (which one?) may require better definition.

    Can these issues be better explored by carefully comparing model experiments to observations?
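
    Since comments 23, 25 and 35 circle the same question, here is a minimal two-box energy-balance sketch of the transient response to an instantaneous CO2 doubling. It is a cartoon, not a GCM: the mixed-layer depth, exchange coefficient, deep-ocean depth and the 3 K sensitivity are all assumed values chosen only to be plausible.

    ```python
    # Two-box energy balance model (mixed layer + deep ocean):
    #   C_m dT_m/dt = F - lam*T_m - gamma*(T_m - T_d)
    #   C_d dT_d/dt = gamma*(T_m - T_d)
    F = 3.7                         # W/m^2, instantaneous CO2 doubling
    S = 3.0                         # assumed equilibrium sensitivity, K
    lam = F / S                     # feedback parameter, W/m^2/K
    gamma = 0.7                     # assumed mixed-layer/deep exchange, W/m^2/K
    C_m = 1000.0 * 4000.0 * 75.0    # ~75 m mixed layer, J/m^2/K
    C_d = 1000.0 * 4000.0 * 3000.0  # ~3000 m deep ocean, J/m^2/K

    dt = 0.01 * 3.15e7              # time step: 0.01 year, in seconds
    T_m = T_d = 0.0
    for step in range(1, 100001):   # integrate 1000 years with Euler steps
        T_m += dt * (F - lam * T_m - gamma * (T_m - T_d)) / C_m
        T_d += dt * gamma * (T_m - T_d) / C_d
        year = step / 100
        if year in (1.0, 10.0, 100.0, 1000.0):
            print(f"year {year:6.0f}: T_m = {T_m:.2f} K "
                  f"({100.0 * T_m / S:.0f}% of equilibrium)")
    ```

    With these invented numbers, roughly half of the equilibrium response arrives within a decade and the remainder trickles in over centuries: the front-loaded response plus long tail that the comment describes, and also part of why the 20th century record struggles to constrain the high end of the sensitivity range.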

  36. Mare:

    Have any of your opinions of global warming changed in any way and if so could you explain? Thank you.

  37. Mark:

    “Can these issues be better explored by carefully comparing model experiments to observations?”

    Not really.

    How would we put a pulse of CO2 into the atmosphere that is visible in the records and then let NOTHING ELSE change? We can’t, so observations alone won’t verify anything.

    Please think about how it would be practical to do before asking “couldn’t we…?” all the time. It’s about as helpful as saying “couldn’t we remove world poverty by taking the money from the rich people and giving it out equally to the world?”.

  38. Uli:

    I have a question on the influence of the Coriolis force on latitudinal energy transport. I suppose the latitudinal energy transport is reduced by the Coriolis force, especially away from the tropics. In the Palaeozoic the day was about 22 h. How large would the latitude-dependent temperature change be if the day today were 22 h instead of 24 h?

  39. Bart Verheggen:

    Andrew (23) and Bryan (35):
    The problem is that climate sensitivity and thermal inertia could be traded off mathematically in producing a decent match with the observed temperature record of the 20th century (because it’s out of equilibrium; in an equilibrium situation, the thermal inertia wouldn’t play as important a role anymore). Even more, the net forcing isn’t very accurately known either, mainly because of the uncertainties in aerosol forcing.

    A larger negative aerosol forcing (and thus a weaker net positive forcing) would need to be combined with a higher climate sensitivity and/or a shorter ocean response time in order to still provide a good match, and vice versa. Of course, there are other constraints on these processes as well that have to be taken into account. Hansen for example suggested (at the AGU in dec 2008) that climate sensitivity is known more accurately than the other two quantities, whereas the more often heard trade-off (correct me if I’m wrong) is between aerosol forcing and sensitivity.

  40. Marcus:

    Bart (#39) (and others in this discussion): You might be interested in Figure 2 in the following Stott et al. paper; the paper addressed different climate modeling attempts to use past data to constrain future scenarios:

    My guess, having missed this AGU, would be that Hansen’s “better constrained climate sensitivity” would be due more to paleoclimate data than to 20th century data, where the potential masking of heating from aerosols and ocean uptake is too large to fully constrain the upper bound of sensitivities…

  41. Vernon:

    There is a new study that shows the climate models referenced in the IPCC 4th report were wrong about Antarctic temperatures. What adjustments are needed to correct for errors in Antarctic modeling, and how will that change the current projections from those in the IPCC 4th Report?

    Twentieth century Antarctic air temperature and snowfall simulations by IPCC climate models. Andrew Monaghan, David Bromwich, and David Schneider. Geophysical Research Letters, April 5, 2008

    “We can now compare computer simulations with observations of actual climate trends in Antarctica,” says NCAR scientist Andrew Monaghan, the lead author of the study. “This is showing us that, over the past century, most of Antarctica has not undergone the fairly dramatic warming that has affected the rest of the globe. The challenges of studying climate in this remote environment make it difficult to say what the future holds for Antarctica’s climate.”

    The authors compared recently constructed temperature data sets from Antarctica, based on data from ice cores and ground weather stations, to 20th century simulations from computer models used by scientists to simulate global climate. While the observed Antarctic temperatures rose by about 0.4 degrees Fahrenheit (0.2 degrees Celsius) over the past century, the climate models simulated increases in Antarctic temperatures during the same period of 1.4 degrees F (0.75 degrees C).

    The error appeared to be caused by models overestimating the amount of water vapor in the Antarctic atmosphere, the new study concludes. The reason may have to do with the cold Antarctic atmosphere handling moisture differently than the atmosphere over warmer regions.

    Will this lead to better climate models?

  42. yves fouquart:

    Although I am a regular reader of RC, this is the first time I have posted.
    I used to be active in the radiation field; indeed, I co-chaired the first ICRCCM study (Intercomparison of Radiation Codes for Climate Models).
    At that time, we had a rather long discussion about whether or not a radiation code was a parameterization.
    We concluded that it was not, because the basic physics is known. What is done in radiation codes is APPROXIMATION. This is fairly different from cloud parameterizations, for instance, since in that case some physics is by-passed because it works at a smaller scale. Unless things have changed a lot since I retired, cloud parameterizations are not simply an approximation of cloud-resolving models.

    Nonetheless, this is more of a detail, and this is quite a good post. Thanks for all the work you do here.

  43. Phil. Felton:

    “Does anyone know what the fluorescence quantum yield is for v(1) to v(0) for the CO2 15 µm line? I would like to get an idea of how much of the energy a CO2 molecule acquires when absorbing a 15 µm photon ends up re-emitted as an infra-red photon.”

    I don’t have a numerical value, but in the atmosphere at ~100 kPa it’s below 0.001.

    “The rest of the energy will end up partitioned between translational, rotational and vibrational states. This raises another question. For a collection of CO2 molecules, say at 20°C, what proportion of the molecules are in the various excited vibrational states accessible to CO2?”

    I’m not sure what you mean in the last question.

  44. Philip Machanick:

    Thanks for including my “What do you mean when you say model has “skill”?” question including the grammar error :) It should of course be “… a model has “skill”?”

    No need to post this.

  45. Philip Machanick:

    Here’s another one. You briefly mention El Niño in some answers. My understanding is that El Niño and La Niña are heat transfers between the ocean and atmosphere, i.e., from one part of the system to another, that affect short-term temperature but not the long-term trend, because they do not alter the overall energy balance.

    Is this correct?

    In any case, answering a question something like “What is the effect of El Niño and La Niña on long-term trends?” would be useful.

    Pat Neuman #33: Bob Carter makes a big deal of how the early Cenozoic had much higher CO2 but the planet was teeming with life. The paper you link to is a good answer. Note particularly the evidence of lowered ocean oxygen. I’ve been told that current models exclude anoxic oceans as a future possibility.

    That leads to another question: “What can we learn from the relationship between past extinction events and climate change?”

  46. Hank Roberts:

    Mare asked 8 January 2009 at 2:31 AM

    “Have any of your opinions of global warming changed…”

    This is a good summary:

  47. T. Gannett:

    Phil. Felton #43

    Thanks for the reply. The value you provide for the CO2 IR fluorescence quantum yield is plenty good for my purposes. If it is correct, then the IR radiation emitted from the earth’s surface and absorbed will be nearly completely thermalized and not re-emitted, i.e. it will heat the air. It can reasonably be calculated from available extinction coefficients and CO2 concentration that >99% of the IR photons emitted by the earth’s surface that can be absorbed by CO2 will be absorbed in the first 100 m. With >99% of that energy being thermalized, it won’t be retransmitted to the earth’s surface as IR. This leads to my second question: unless CO2 at temperatures near the surface has a reasonable population of vibrationally excited molecules, the heat generated by IR absorption cannot be redistributed radiatively. This last statement requires that the atmosphere either cannot lose energy via blackbody radiation or does so only poorly.

  48. Hank Roberts:

    > will be absorbed in the first 100m ….
    Miskolczi, right?

  49. Hank Roberts:

    T. Gannett, I wonder if this is where you’re headed:

    Bulletin of the American Meteorological Society
    Earth’s global energy budget
    Kevin E. Trenberth, John T. Fasullo, Jeffrey Kiehl

    “…This article provides an update on the Kiehl and Trenberth (1997) article on the global energy budget that was published in BAMS. A figure showing the global energy budget in that paper is widely used and appears in many places on the internet. It has also been reproduced in several forms in many articles and books. But it is dated. A primary purpose of this article is to provide a full color figure to update this work. At the same time, we expand upon it somewhat by detailing changes over time and aspects of the land vs ocean differences in heat budgets that should be of general interest. We also expand on the discussion of uncertainty and the remaining challenges in our understanding of the budget. …”

    DOI: 10.1175/2008BAMS2634.1

    The image is _very_ familiar. But the updated image hasn’t shown up much yet.

  50. Ray Ladbury:

    T. Gannett, Do the math. The Maxwell-Boltzmann distribution says that roughly 0.8% of molecules will be sufficiently energetic even at 200 K.
    Now think about the physics: If the energy isn’t being emitted radiatively, then it’s going into heating the atmosphere, which heats up until there is in fact a significant vibrationally excited population.
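
    A quick check of Ray’s figure using standard Boltzmann statistics (667 cm⁻¹ is the CO2 ν2 bending fundamental; the twofold degeneracy of that mode is ignored here to match the rough number quoted):

    ```python
    import math

    # Fraction of CO2 molecules with the nu_2 bending mode (667 cm^-1)
    # thermally excited, from the Boltzmann factor exp(-E/kT).
    h, c, k = 6.626e-34, 2.998e10, 1.381e-23   # J*s, cm/s, J/K
    wavenumber = 667.0                          # cm^-1

    def excited_fraction(T):
        return math.exp(-h * c * wavenumber / (k * T))

    for T in (200.0, 293.0):
        print(f"T = {T:.0f} K: {100.0 * excited_fraction(T):.2f}% excited")
    # ~0.8% at 200 K and ~3.8% at 293 K; including the twofold
    # degeneracy roughly doubles these numbers.
    ```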