RealClimate

Comments

  1. I have two questions. Are the assumptions/unknowns the reason that many climate models are more conservative in their predictions, causing them to fall short of what has actually occurred with the climate?

    Shouldn’t actual physical evidence be placed ahead of what a climate model states when trying to prove that climate change is an actuality?

    [Response: If we knew why models were not perfect, we’d fix them. Your second question doesn’t make much sense. Climate models don’t prove anything on their own. It is the match up of theory, observations and simulations that allows us to attribute causes and effects and make predictions about what would happen with or without various causes. The evidence from those exercises are pretty convincing that anthropogenic climate change (which is what I presume you mean) is ongoing – but that is a conclusion that comes from consideration of the actual physical evidence. Your question presumes a distinction that doesn’t exist. – gavin]

    Comment by Sean Dorcy — 6 Jan 2009 @ 10:41 AM

  2. A very meaty and helpful post. Many thanks once again. I will be linking to this!

    Comment by Kevin McKinney — 6 Jan 2009 @ 11:39 AM

  3. Sean Dorcy,
    Disclaimer: I am not an expert
    You mention that you believe conservative models’ projections have fallen short of actual occurrences. I am wondering where you derive the assumptions that a) the projections are conservative and b) the projections have fallen short of actual occurrences?

    If you’re talking about global mean temperature I would advise you to compare the projections of the IPCC to the actual measurements of GISS as well as HadCRUT, RSS MSU, and UAH MSU measured data. If that is the parameter that you are saying has fallen short of actual measurements, I would say it hasn’t. There are some who would argue that the projections are too aggressive, and the best argument is probably that we don’t know quite enough yet whether it is too aggressive or conservative…

    You can see what CO2 concentration has looked like over the years here (represented by Mauna Loa Observatory) and compare it to the different scenarios assumed in each IPCC projection which then averages the output of the models:
    http://www.esrl.noaa.gov/gmd/ccgg/trends/co2_data_mlo.html
    http://www.esrl.noaa.gov/gmd/ccgg/trends/

    If your question has to do with melting ice, I would note for you that there are certain wind and ocean circulation effects that have added greatly to the short term melting of Arctic Ice over the last couple years (not just global warming) and these short term effects are not where a long term ice projection applies.
    http://www.nasa.gov/vision/earth/lookingatearth/quikscat-20071001.html
    In other words, we read in the press that this melt was caused by global warming effects exceeding projections, but it would be more factual to say we are seeing natural effects superimposed on global warming effects over a pretty short time frame, over which projections aren’t specifically made. For instance, if ice rebounds to 1979 levels in 2009 to 2011, that doesn’t disprove global warming theory, just as the last few years of low ice levels didn’t prove it. You need to look at longer-term trends against longer-term projections. Don’t be surprised if the press isn’t able to give you good scientific information when you hear about projection parameters being exceeded.

    If your question has to do with storm/hurricane numbers and intensity, I would note that is not exactly settled. Last I saw from NOAA was global warming decreasing numbers but increasing intensities:
    http://www.sciencedaily.com/releases/2008/05/080519134306.htm
    However, the methods and equipment for measuring occurrences and intensity have improved so much that we’re not exactly comparing apples to apples when we calibrate today’s numbers against those from 70 years ago to quantify a correlation, its effect, and provide a projection.

    In other words, there’s a lot of good science in the models, enough to provide insight into how the climate interacts and to make current “best bet” projections, but remember:
    a) models will certainly be modified in the future. It is not out of the question for projection results to change just a little, or possibly even a lot, in either direction.
    b) models can’t be used for comparison to short-term trends, which contain volatile natural variability. Short-term trends vs. models neither prove nor disprove the models or the underlying theory.

    Comment by hmm — 6 Jan 2009 @ 12:29 PM

  4. What a great post! There are many issues addressed here which are common skeptic claims, as well as sources of unease amongst nonexperts. Thanks so much for all the work you do!

    Comment by Sean — 6 Jan 2009 @ 2:57 PM

  5. Gavin – nice collection of questions and answers!

    Since you bring up the thermohaline circulation, a question I have been recently pondering is: why is the deep ocean so cold? After all, if the average surface temperature is 15 C, wouldn’t you expect land and ocean below the surface to equilibrate at roughly that temperature (with a slightly rising gradient to account for the flow of Earth’s internal heat)?

    The best simple answer I’ve seen is basically that you have to go to a 2-box model of Earth, with warm tropics and cold poles, and then realize that thanks to the thermohaline circulation the deep oceans are coupled almost exclusively to the polar regions, and so are in the “cold” box and not the warm one or some average of them.

    However, the question then is – can this switch? Are there boundary conditions or any stable solution that would couple the deep oceans to the “hot” box rather than the cold one? If that ever happened what would it imply for surface temperatures?

    Comment by Arthur Smith — 6 Jan 2009 @ 3:04 PM

  6. I have a comment on paleo data.

    Proxy temperature data are calculated in the form of:
    Temperature = a + b * x, where ‘x’ is something like tree ring thickness or O18/O16 ratios. Unless r2 is 1, the proxy temperature data will always have a lower standard deviation than the measured data. In the limit, if r2 is 0, the proxy will have the value of the mean of the observed calibration data and zero standard deviation. I know that more sophisticated regression methods are employed, but similar problems are unavoidable. What is more, different proxies will have different smoothing effects: the thickness of a ring is in part a function of how well the tree grew the previous year; gas migrates between layers of snow before the layers become consolidated. Mixing proxies will therefore further suppress the variance of the proxy data.

    If observed data from recent years are added to proxy data from earlier years to create a long-term series, what steps are taken to ensure that the artificially low variance of the proxy data, compared to the true variance of the observations, does not produce a distorted picture?
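    The variance attenuation described above can be illustrated numerically. This is a sketch with synthetic data and made-up coefficients, not any actual proxy calibration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
temp = rng.normal(15.0, 1.0, n)               # "observed" calibration temperatures
proxy = 0.5 * temp + rng.normal(0.0, 0.4, n)  # proxy tracks temperature, plus noise

# Inverse calibration by ordinary least squares: T_hat = a + b * proxy
b, a = np.polyfit(proxy, temp, 1)
t_hat = a + b * proxy

r2 = np.corrcoef(proxy, temp)[0, 1] ** 2
# The reconstruction carries only the resolved fraction of the variance:
print(np.var(t_hat) / np.var(temp))  # equals r2, i.e. less than 1
```

    For OLS the ratio of reconstructed to observed variance is exactly r2, which is why an imperfect proxy always yields a smoothed-looking reconstruction.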

    [Response: You have a very limited concept of how paleodata is used. For instance, in almost all of my work I have emphasised the need to forward model proxy data within climate simulations so that all of the processes by which proxy data is recorded can be simulated, rather than using the inverse approach you discuss. Multiple proxies which have different systematic errors can also be used to isolate underlying signals. – gavin]

    [Response: adding to the above, in most actual proxy-reconstruction studies (including those of my group’s) the unresolved variance is of course one of the central quantities looked at. It is used to define the uncertainties in the reconstructions, i.e. those error bars you typically see in association with proxy-reconstructed quantities are telling you the envelope of uncertainty within which any comparisons to modern instrumental data should be made, based on the variance that is not resolved by the paleoclimate data (in both calibration and, importantly, cross-validation tests). They are a crucial guide to the interpretation of comparisons of the reconstructed past w/ the modern instrumental record. This above stuff is basic, and should be clear from even a cursory reading of the peer-reviewed literature in this area. I would suggest you review that literature, e.g. start with the IPCC AR4 chapter (6) on paleoclimate. -mike]

    Comment by Julius St Swithin — 6 Jan 2009 @ 3:32 PM

  7. Gavin, thanks for yet another very helpful article, though I admit I’ve not read all of it. As for additional topics, perhaps a brief explanation on why confidence in attribution (and prediction) of temperature change is strongest at large scales and weakest at small scales, ie something about the issue of signal to noise relative to spatial scale.

    Comment by Jim Bouldin — 6 Jan 2009 @ 3:33 PM

  8. Arthur: Deep ocean temperature is fixed by the compressibility properties of water. Although the variation is small, water happens to be densest at 4°C. At 4000 m beneath the surface, pressure is roughly 400 atmospheres, which is enough to force water into its state of maximal density, hence a fixed given temperature. This is why the deep ocean always remains liquid (any other liquid would turn solid), and some say it is a necessary (although not sufficient) condition for the development of life on Earth.

    Comment by JacquesLB — 6 Jan 2009 @ 3:46 PM

  9. As a consumer of GCM results for use in natural resource planning, your FAQs are very helpful. Thanks. I have another set of questions I hope you may be able to help me with. Do GCMs skillfully simulate significant patterns of natural variability? More specifically, when examining GCM output, or multi-model mean data from CMIP3 analyses, specific to the western US for the next 5 decades, is it appropriate to mentally overlay anticipated PDO or ENSO oscillations? Or should I assume these patterns are already included in the projections? Could GCM projections substantially overestimate temperature trends for the western US if PDO shifts from its current warm phase to a cool phase? These questions are relevant to meso-scale (1-5 decades; local and regional extent) adaptation proposals and decisions. Randall et al. (AR4 WGI Chapter 8) doesn’t provide a clear answer to these questions.

    Comment by Jim Morrison — 6 Jan 2009 @ 4:01 PM

  10. Gavin — “opening the isthmus of Panama?”

    Should not that read “closing”?

    [Response: Depends on your point of view. – gavin]

    Comment by David B. Benson — 6 Jan 2009 @ 5:54 PM

  11. Thanks for the excellent article above. For the next FAQ, as well as the ENSO and PDO/IPO mentioned above, I would also like to hear about modelling of phenomena like the Southern Annular Mode and the Indian Ocean Dipole. The underlying issue is about both model completeness and how much these phenomena might move future projections around. I am additionally interested in land surface/biospheric feedbacks.

    Comment by Luke — 6 Jan 2009 @ 6:01 PM

  12. JacquesLB (#8) – your argument only explains why the bottom of the ocean is not colder than it is, or indeed frozen at the bottom – colder water heads upwards and freezes at the surface. So the deep ocean coupled to the “cold box” can’t get much colder than 4 C. But it could easily be warmer with no violation of any laws of physics – a lot warmer. Why isn’t it, and are there any conditions for a planet similar to Earth under which the deep ocean could be much warmer?

    [Response: Indeed the ocean depths used to be a lot warmer – maybe 15 deg C during the Eocene for instance. The issue is where and how dense water is formed – today it is in the polar regions where you have freezing conditions and enormous heat fluxes to the atmosphere. In other times, with warmer poles, or perhaps very salty tropics, you could make deep water with very different properties. It only needs to be denser than other water at the surface. – gavin]

    Comment by Arthur Smith — 6 Jan 2009 @ 6:46 PM

  13. What is the radiative impact of opening the isthmus of Panama? or the collapse of Lake Agassiz?

    I read (in Thin Ice, I believe, the book that soon brought me to RC & AGW) about the consequences of the development of the Isthmus of Panama on global climate. Why then “opening”?

    http://en.wikipedia.org/wiki/Isthmus_of_Panama

    Comment by ChuckG — 6 Jan 2009 @ 6:54 PM

  14. Re: #12

    Arthur Smith mentions the maximum density of water. It’s true that for pure water the maximum density occurs at a temperature of 4°C; however, for the oceans, the salt content is such that the maximum density is at the freezing point of −1.8°C. The coldest water is on the bottom because that’s the densest water. Of course, during winter, as the water freezes on the surface, the resulting sea-ice is less dense, so it floats. And when the sea water freezes, much of the salt is rejected in the process, which can cause the remaining water at the surface to become even more dense and sink. This is the reason that the water on the very bottom of the ocean originates around the Antarctic as a result of the yearly sea-ice cycle.

    E. S.

    Comment by Eric Swanson — 6 Jan 2009 @ 8:13 PM

  15. JacquesLB and Arthur: With respect to the temperature in the deep ocean, I would like to point out that the oceans are filled with seawater, not fresh water.

    Comment by oms — 6 Jan 2009 @ 8:15 PM

  16. GREAT POST!

    Comment by Patrick 027 — 6 Jan 2009 @ 9:30 PM

  17. Gavin (#12 response) – ah, but we’re headed to times of warmer poles (with higher fresh water content from melting ice, as opposed to higher salt levels from freezing), and likely saltier tropics due to higher evaporation levels (or do tropical precipitation increases balance that?) – so has anybody modeled where the tipping point might be to a switch to tropical coupling, as opposed to polar coupling, and what the impact would be in a world with high CO2?

    The paleoclimate record (8.2 kyr, and earlier “large lake collapses”) shows a dramatic drop in surface temperatures for a substantial period of time when the ocean circulation shuts off or changes, but is that actually what would be expected under these warming conditions? How long would it take to warm the deep oceans? At 4 W/m^2 and about 1 billion cubic km of ocean to warm by 10 C, I think that comes to 600 or 700 years. My guess is it might lead to relatively stable surface temperatures during this warming period, but steadily rising sea levels as the ocean expands?
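    That back-of-envelope figure can be checked quickly. This is a sketch using round numbers from the comment (4 W/m^2 applied over the whole Earth’s surface, ~10^9 cubic km of ocean warmed by 10 C), not a model result:

```python
SECONDS_PER_YEAR = 3.15e7
ocean_volume = 1.0e18        # m^3, roughly 1 billion cubic km
rho, c_p = 1000.0, 4200.0    # water density (kg/m^3) and heat capacity (J/kg/K)
delta_t = 10.0               # K of warming
earth_area = 5.1e14          # m^2, whole Earth surface
forcing = 4.0                # W/m^2

energy_needed = ocean_volume * rho * c_p * delta_t   # ~4.2e25 J
years = energy_needed / (forcing * earth_area * SECONDS_PER_YEAR)
print(round(years))  # roughly 650, consistent with "600 or 700 years"
```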

    Comment by Arthur Smith — 6 Jan 2009 @ 9:52 PM

  18. Concerning paleoclimate: the 8.2 ka event involved the Laurentide ice sheet, which is long gone. As such, it will have limited applicability to the future.

    We seem to be heading for a climate more like the Miocene. Antarctica was and will be around, but it becomes surrounded by lots of melt water, and may stop driving deep ocean currents.

    Then the only driver of deep ocean currents will be evaporation, and it’s not clear whether that is sufficient to maintain the mixing. That could isolate the surface waters from the deep and result in accelerated warming.

    Comment by Andrew — 6 Jan 2009 @ 10:31 PM

  19. Thanks for a great post. The following is not for myself but for many of those people out there who have the weather/climate confusion. I understand that the methods of the weather bureau are so different from the methods of climatology that there is a huge and, at this time, unfillable gap between the two. Weather is short term, like days. Climate is long term, like centuries.

    It is precisely in the gap that a forecast would be most beneficial to most people. Therefore, people get frustrated trying to argue you into filling in the gap. They want you to combine weather prediction and climatology into a science that accurately predicts next year and the next 5 years. That is their planning horizon. After all, you and the weather forecasters use big computers and data that sound the same. You even mentioned weather a lot in your article.

    It is only when you actually try to fill in the gap that you, the scientists, become so frustrated that you give up on that project. Since the average person has no experience with trying to solve mathematical problems and no experience with computationally intensive computer programs, he does not understand your refusal to do that which you cannot.

    I think that this may be a part of the problem with denialists. Of course, the denialists, in general, and the people who listen to them, have other problems or agendas.

    Comment by Edward Greisch — 6 Jan 2009 @ 11:33 PM

  20. Quote from the Hadley Centre a year or so ago:

    “The CH4 (and CO2) permafrost feedback isn’t included in current EarthSystemModels and it is potentially large but no-one really knows.”

    Anyone know of any progress?

    We do need estimates for policy making. It’s not much use having exquisite climate models that model the wrong reality.

    I think the FAQs should at least have a section “What feedbacks are missing?” We ought to be told what their probable impacts are.

    “Not known” is a better answer than none.

    Is there an official list of missing feedbacks?

    Comment by Geoff Beacon — 7 Jan 2009 @ 12:48 AM

  21. Good post!!

    One question. Are the models sophisticated enough to take account of effects on the boundary conditions of previous cycles? e.g. temperature in winter must have an effect on CO2 emissions and therefore CO2 levels in the next cycle.

    [Response: The models that include a carbon cycle and dynamic vegetation should have such effects – but this is still a rather experimental class of models. The ‘standard’ models impose a CO2 concentration derived from observations or in a scenario and wouldn’t have such a process. – gavin]

    Comment by Paula Thomas — 7 Jan 2009 @ 2:55 AM

  22. Gavin, how is climate variability accounted for in the models?
    Is it conceivable that the best current climate models, using only the basic laws of fluid thermodynamics, could reproduce climate variability such as ENSO, AMO, NAO, …, or is there a need for parametrization?
    I think you will agree that the ocean is a huge reservoir of cold water.
    Its mean temperature is 3.5°C, which should be sufficient to neutralize several centuries of anthropogenic greenhouse effect.
    Surely it’s difficult for this cold water to shift towards the surface, but even a very small part of it could have some surface effects.
    It seems, for example, that there are some surprising effects in the Southern Ocean (strengthening westerlies and decreasing middle-latitude SST).
    So isn’t the ocean one of the biggest problems for current models?
    (I apologize for my English)

    [Response: The internal variability is an emergent property of the whole package. For instance, all models show variability in the ocean temperatures in the tropical Pacific – but the magnitude and spectra of that variability depend a lot on the resolution, how mixing is handled in the upper ocean, etc. The oceans are a difficult part of the system for a number of reasons (mainly that the scale at which important things happen (bottom currents, eddies, western boundary currents) is quite small relative to similar processes in the atmosphere). However, the ocean is very strongly stratified, and the interaction with the bulk of the deep cold water is very slow – it is generally the upper ocean that determines the time scale for the transient warming we might expect. – gavin]

    Comment by pascal — 7 Jan 2009 @ 6:19 AM

  23. I have a question regarding climate sensitivity and momentum. There is still a rather broad range of expected equilibrium global temperature response for CO2 doubling of between 2 to 4.5 degree C.

    CO2 levels have been rising about 0.04%/yr.
    Global temperatures are trending about 0.016 C/yr, but only about 75% of that is due to CO2. So, perhaps the amount of CO2 warming is only 0.012 C/yr.
    This implies a sensitivity of about 3 degree C per doubling which is very close to the expected mid range. On the other hand, if the upper end of the sensitivity range is correct, then that implies there is a lot of momentum in the system.

    So, my question is how much momentum do the models generally predict and is it inversely related to their sensitivity values?

    [Response: (Momentum is not really the right word – ocean thermal inertia is a better description). There is actually a very strong connection – the bigger the sensitivity, the longer the adjustment time. This is one of the reasons why the 20th C changes haven’t been very useful at constraining the higher end of the possible sensitivities. – gavin]

    Comment by Andrew — 7 Jan 2009 @ 11:57 AM

  24. Gavin, you stated in the article,

    “Weather models develop in ways that improve the short term predictions, though the impact for long term statistics or the climatology needs to be assessed independently. Curiously, the best weather models often have a much worse climatology than the best climate models.”

    Intuitively, it might seem that models which are good at resolving the physics on short time scales should not be worse in a climatological sense. In your view, are there any clearly identifiable reasons why this should be the case?

    [Response: Yes. Errors in radiative or surface fluxes don’t influence baroclinic instability very much (which is a dynamical thing). When you re-initialise a weather model every 6 hours, errors in temperatures/humidity that arose from the errors in fluxes will get corrected. But, if you let the model run freely they don’t, and thus you end up with models that are horribly out of energy balance – and that leads to bad climatologies. I’d be happy to have anyone from the NWP community chime in and expand or correct this interpretation though. – gavin]

    Comment by oms — 7 Jan 2009 @ 1:24 PM

  25. If climate sensitivity and thermal inertia are strongly connected, then that implies two extreme possibilities since the recent rate of warming is currently near the middle of the range:

    At the low end of sensitivity, we are living in a period of overreaction by the climate, and the rate of warming should tend to revert lower towards the equilibrium value.

    At the high end of sensitivity, we are in store for significantly more warming for an extended duration.

    My hope would be that the science will advance to narrow the range so they are not so extreme.

    So, a follow-up question is what areas of research are available to narrow the range?

    Comment by Andrew — 7 Jan 2009 @ 2:48 PM

  26. Jim Morrison (9) — I’m an amateur here, but I think you are taking a wrong approach. Ocean oscillations are not predictable in any strong sense, so you should use lots of runs with different internal variability patterns. This will give you a range of results to establish some form of error bounds on the parameters of interest, temperature and precipitation I suppose.

    Comment by David B. Benson — 7 Jan 2009 @ 3:13 PM

  27. Dr. Spencer has posted a pre-publication paper at http://www.drroyspencer.com/research-articles/satellite-and-climate-model-evidence/ where his abstract says:

    Three IPCC climate models, recent NASA Aqua satellite data, and a simple 3-layer climate model are used together to demonstrate that the IPCC climate models are far too sensitive, resulting in their prediction of too much global warming in response to anthropogenic greenhouse gas emissions. The models’ high sensitivity is probably the result of a confusion between forcing and feedback (cause and effect) when researchers have interpreted cloud and temperature variations in the real climate system.

    What is your assessment of the technique he uses?
    What would be the impact on the future development of your model, GISS Model E?

    [Response: Spencer’s critique has not been published in the peer reviewed literature and so it is difficult to know what he has done. From the figures he has shown he is using different averaging periods for the data and the models (12 month running mean vs. 91 month running mean) and it is not stated whether he is looking at analogous periods. Comparing models to observations is perfectly fine, but the comparison has to be apples-with-apples and the analysis has to be a little more sophisticated than saying ‘look at the lines’ (or ‘linear striations’). His contention that models were built incorrectly because of a mis-interpretation of cloud data is completely bogus. – gavin]

    Comment by Vernon — 7 Jan 2009 @ 3:39 PM

  28. Andrew (#25): I think one key for untangling climate system inertia and climate sensitivity is to improve our understanding of how heat is entering the oceans. If we knew ocean heat uptake as well as we know atmospheric temperature change, then we could pin down fairly well the radiative imbalance at the top of the atmosphere, which would give us a fair indication of how much warming is ‘in the pipeline’ given current greenhouse gas concentrations.

    The problem is that our understanding of that budget is still in flux (see http://earthobservatory.nasa.gov/Features/OceanCooling/page4.php for one discussion).

    Alternatively, more direct observations of that radiative imbalance would be nice, or better theoretical and observational understanding of the water vapor and cloud feedbacks, or more paleoclimate data which can give us constraints on historical feedbacks, but my guess is that ocean heat content measurements would be the best near term bet for improving our understanding of this issue.

    Comment by Marcus — 7 Jan 2009 @ 3:49 PM

  29. I have contacted Don Blake of the University of Califonia, Irvine. He says

    The increase in methane concentrations was fairly constant during the late 1970s and throughout the 1980s. The concentration has been flat or slightly increasing during the last decade. The lifetime of methane is about 10 years, which is much less than CO2’s. Thus, if emissions of methane to the atmosphere were decreased then concentrations of methane in the atmosphere would soon begin to decrease. This is similar to what has happened with methyl chloroform relative to CFC-12. Methyl chloroform has a lifetime of about 5 years and CFC-12 has a lifetime of about 100 years. Both are gases that destroy stratospheric ozone, and production/use of both has been almost completely stopped. The methyl chloroform concentrations in the atmosphere are now about 1/10 of what they were 20 years ago, while CFC-12 concentrations are only a few percent lower than they were 20 years ago.

    I am struck by “if emissions of methane to the atmosphere were decreased then concentrations of methane in the atmosphere would soon begin to decrease”.

    I have a long-distant background in physics, and that leads me to feel the converse is also likely, i.e. “if emissions of methane to the atmosphere were increased, concentrations would quickly increase”.

    As we used to say, anything else doesn’t smell right.

    Has anyone a better nose?
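    That intuition is consistent with a simple one-box model in which concentration relaxes toward emissions times lifetime; the response is symmetric for increases and decreases. The numbers below are illustrative, not measured values:

```python
# One-box model: dC/dt = E - C / tau, with tau ~ 10 yr for methane.
tau = 10.0               # atmospheric lifetime, years
e0 = 50.0                # arbitrary emission units; equilibrium C = e0 * tau
conc = e0 * tau          # start at equilibrium
dt = 0.1                 # Euler time step, years
for _ in range(int(30 / dt)):          # run 30 years after emissions drop 20%
    conc += dt * (0.8 * e0 - conc / tau)
print(conc / (e0 * tau))  # ~0.81: concentration tracks emissions within a few lifetimes
```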

    Comment by Geoff Beacon — 7 Jan 2009 @ 4:16 PM

  30. In #23 Andrew wrote:

    “CO2 levels have been rising about 0.04%/yr.”

    Looking at the Keeling curve, CO2 levels increased from approximately 368 ppmv at the start of 2000 to approximately 378 ppmv at the end of 2004. That is 2 ppmv increase per year on a base of approximately 370 ppmv or an increase of 0.54% per year.
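    A quick check of that arithmetic, using the figures as quoted:

```python
c_start, c_end, n_years = 368.0, 378.0, 5.0    # ppmv, start of 2000 to end of 2004
ppmv_per_year = (c_end - c_start) / n_years    # 2.0 ppmv/yr
pct_per_year = ppmv_per_year / 370.0 * 100.0   # on a ~370 ppmv base
print(round(pct_per_year, 2))  # -> 0.54
```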

    Best regards.

    Jim Dukelow

    Comment by Jim Dukelow — 7 Jan 2009 @ 4:38 PM

  31. I have a few questions, probably infrequently asked, that I hope someone has answers to. Does anyone know what the fluorescence quantum yield is for v(1) to v(0) for the CO2 15 um line? I would like to get an idea of how much of the energy a CO2 molecule acquires when absorbing a 15 um photon ends up re-emitted as an infrared photon. The rest of the energy will end up partitioned between translational, rotational and vibrational states. This raises another question: for a collection of CO2 molecules, say at 20°C, what proportion of the molecules are in the various excited vibrational states accessible to CO2? Anyone know? Answers can be sent to gannett3@comcast.net. Thanks.
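    For the second question, a rough Boltzmann estimate can be sketched for the doubly degenerate 667 cm⁻¹ bending mode (the 15 um band). This ignores the full vibrational partition function and the other modes, so treat it as order-of-magnitude only:

```python
import math

HC_OVER_K = 1.4388   # cm*K, second radiation constant hc/k_B
nu_bend = 667.0      # cm^-1, CO2 bending-mode fundamental
g = 2                # degeneracy of the bending mode
temp_k = 293.0       # ~20 degrees C

# Relative population of v=1 versus v=0: degeneracy times Boltzmann factor
pop_ratio = g * math.exp(-HC_OVER_K * nu_bend / temp_k)
print(round(pop_ratio, 3))  # -> ~0.076, i.e. several percent of molecules excited
```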

    Comment by T Gannett — 7 Jan 2009 @ 5:06 PM

  32. Andrew (25) — The prospect for significantly narrowing the uncertainty in climate sensitivity in the near term does not appear good, IMO. However, there are two excellent papers by Annan & Hargreaves you may wish to study. For one of them, there is an earlier thread here on RealClimate.

    Comment by David B. Benson — 7 Jan 2009 @ 6:05 PM

  33. I think the early Cenozoic, with its much higher concentrations of greenhouse gases and much warmer global climate, needs more attention here.

    … “The extreme case is the Early Eocene Climatic Optimum (EECO), 51–53 million years ago, when pCO2 was high and global temperature reached a long-term maximum. Only over the past 34 million years have CO2 concentrations been low, temperatures relatively cool, and the poles glaciated.” …

    http://www.es.ucsc.edu/%7Ejzachos/pubs/Zachos_Dickens_Zeebe_08.pdf

    Comment by Pat Neuman — 7 Jan 2009 @ 8:21 PM

  34. Excellent post. Informative.

    Comment by jcbmack — 7 Jan 2009 @ 10:58 PM

  35. Back to the thermal inertia question, and using 20th century changes to constrain sensitivities. Suppose we doubled CO2 instantly. Now consider the transient behavior of the temperature increase needed to fully equilibrate this forcing change plus feedbacks. Assuming an equilibrium sensitivity of 3 C, what percentage of the total equilibrium temperature increase would have occurred after 1, 10, 100, and 1000 years? Put another way: if we plot the transient temperature response on a semi-log graph, are there any relevant observations?

    If the majority of the temperature response takes place in only a few years, with the remaining small fraction taking place over hundreds to several thousands of years, then the long thermal lag time is not all that relevant. The remaining temperature rise left “in the pipeline” would be small and spread out over such a length of time that the signal would be swamped by natural variability.

    Due to the limited mass of the components of the climate system which are effectively coupled to the atmosphere, would it not seem that much of the temperature response would occur rapidly (be frontloaded), followed by a very long tail as heat slowly “leaks off” below the thermocline into the almost impermeable deep ocean (where most of the mass resides)?

    The statement that the sensitivity is proportional to the time constant would seem obvious, since for a given rate of heat input it will take longer to increase the temperature by 5 C than by 1 C. Based on the nature of the transient temperature response (a function of the heat capacities of the various components), however, exactly what is meant by sensitivity (equilibrium vs. pseudo-equilibrium?) and what is meant by time constant (which one?) may require better definition.

    Can these issues be better explored by carefully comparing model experiments to observations?
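    The link between sensitivity and adjustment time can be made concrete with a single-box mixed-layer model. This is a sketch only: it couples a ~100 m mixed layer and deliberately leaves out the slow deep-ocean “leak” discussed above, and all names and numbers are illustrative:

```python
import math

F2X = 3.7                            # W/m^2, forcing for doubled CO2
heat_cap = 1000.0 * 4200.0 * 100.0   # J/m^2/K for a ~100 m ocean mixed layer
SECONDS_PER_YEAR = 3.15e7

def response(sensitivity_c, years):
    """Fraction of equilibrium warming realized after `years` in a 1-box model."""
    lam = F2X / sensitivity_c                 # feedback parameter, W/m^2/K
    tau = heat_cap / lam / SECONDS_PER_YEAR   # e-folding time, years
    return 1.0 - math.exp(-years / tau)

for s in (1.5, 3.0, 4.5):  # low, mid, high equilibrium sensitivity (deg C)
    print(s, [round(response(s, t), 2) for t in (1, 10, 100, 1000)])
# Higher sensitivity -> smaller lambda -> longer tau -> less warming realized early,
# which is the sensitivity/inertia connection noted in the responses above.
```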

    Comment by Bryan S — 8 Jan 2009 @ 12:35 AM

  36. Have any of your opinions of global warming changed in any way and if so could you explain? Thank you.

    Comment by Mare — 8 Jan 2009 @ 2:31 AM

  37. “Can these issues be better explored by carefully comparing model experiments to observations?”

    Not really.

    How would we inject a pulse of CO2 large enough to be visible in the records while letting NOTHING ELSE change? Observations alone won’t verify anything.

    Please think about how it would be practical to do before asking “couldn’t we…?” all the time. It’s about as helpful as saying “couldn’t we remove world poverty by taking the money from the rich people and giving it out equally to the world?”.

    Comment by Mark — 8 Jan 2009 @ 4:13 AM

  38. I have a question on the influence of the Coriolis force on latitudinal energy transport. I suppose the latitudinal energy transport is reduced by the Coriolis force, especially away from the tropics. In the Palaeozoic the day was about 22 h. How large would the latitude-dependent temperature change be if today the day were 22 h instead of 24 h?

    Comment by Uli — 8 Jan 2009 @ 8:27 AM

  39. Andrew (23) and Bryan (35):
    The problem is that climate sensitivity and thermal inertia could be traded off mathematically in producing a decent match with the observed temperature record of the 20th century (because it’s out of equilibrium. In an equilibrium situation, the thermal inertia wouldn’t play as important of a role anymore). Even more, the net forcing isn’t very accurately known either, mainly because of the uncertainties in aerosol forcing.

    A larger negative aerosol forcing (and thus a weaker net positive forcing) would need to be combined with a higher climate sensitivity and/or a shorter ocean response time in order to still provide a good match, and vice versa. Of course, there are other constraints on these processes as well that have to be taken into account. Hansen for example suggested (at the AGU in dec 2008) that climate sensitivity is known more accurately than the other two quantities, whereas the more often heard trade-off (correct me if I’m wrong) is between aerosol forcing and sensitivity.

    Comment by Bart Verheggen — 8 Jan 2009 @ 9:16 AM

  40. Bart (39) (and others in this discussion): You might be interested in Figure 2 in the following Stott et al. paper: the paper addressed different climate modeling attempts to use past data to constrain future scenarios:
    http://globalchange.mit.edu/files/document/MITJPSPGC_Reprint07-13.pdf

    My guess, having missed this AGU, would be that Hansen’s “better constrained climate sensitivity” would be due more to paleoclimate data than to 20th century data, where the potential masking of heating from aerosols and ocean uptake is too large to fully constrain the upper bound of sensitivities…

    Comment by Marcus — 8 Jan 2009 @ 10:25 AM

  41. There is a new study that shows the climate models referenced in the IPCC 4th report were wrong about Antarctic temperatures. What adjustments are needed to correct for errors in Antarctic modeling, and how will that change the current projections from those in the IPCC 4th Report?

    Twentieth century Antarctic air temperature and snowfall simulations by IPCC climate models. Andrew Monaghan, David Bromwich, and David Schneider. Geophysical Research Letters, April 5, 2008

    “We can now compare computer simulations with observations of actual climate trends in Antarctica,” says NCAR scientist Andrew Monaghan, the lead author of the study. “This is showing us that, over the past century, most of Antarctica has not undergone the fairly dramatic warming that has affected the rest of the globe. The challenges of studying climate in this remote environment make it difficult to say what the future holds for Antarctica’s climate.”

    The authors compared recently constructed temperature data sets from Antarctica, based on data from ice cores and ground weather stations, to 20th century simulations from computer models used by scientists to simulate global climate. While the observed Antarctic temperatures rose by about 0.4 degrees Fahrenheit (0.2 degrees Celsius) over the past century, the climate models simulated increases in Antarctic temperatures during the same period of 1.4 degrees F (0.75 degrees C).

    The error appeared to be caused by models overestimating the amount of water vapor in the Antarctic atmosphere, the new study concludes. The reason may have to do with the cold Antarctic atmosphere handling moisture differently than the atmosphere over warmer regions.

    Will this lead to better climate models?

    Comment by Vernon — 8 Jan 2009 @ 10:33 AM

  42. Although I am a regular reader of RC, this is the first time I post.
    I used to be active in the radiation field, indeed I co-chaired the first ICRCCM study (Intercomparison of Radiation Codes for Climate Models).
    At that time, we had a rather long discussion about whether or not a radiation code was a parameterization.
    We concluded that this was not the case because the basic physics is known. What is done in radiation codes is APPROXIMATION; this is fairly different from cloud parameterizations, for instance, since in that case there is some physics which is by-passed because the physics works at a smaller scale. Unless things have changed a lot since I retired, cloud parameterizations are not simply an approximation of cloud resolving models.

    Nonetheless, this is more of a detail and this is quite a good post. Thanks for all the work you do here.

    Comment by yves fouquart — 8 Jan 2009 @ 12:30 PM

  43. Does anyone know what the fluorescence quantum yield is for v(1) to v(0) for the CO2 15um line? I would like to get an idea of how much of the energy a CO2 molecule acquires when absorbing a 15um photon ends up re-emitted as an infra-red photon.

    I don’t have a numerical value but in the atmosphere at ~100kPa it’s below 0.001.

    The rest of the energy will end up partitioned between translational, rotational and vibrational states. This raises another question. For a collection of CO2 molecules, say at 20°C, what proportion of the molecules are in the various excited vibrational states accessible to CO2?

    I’m not sure what you mean in the last question.

    Comment by Phil. Felton — 8 Jan 2009 @ 12:35 PM

  44. Thanks for including my “What do you mean when you say model has “skill”?” question including the grammar error :) It should of course be “… a model has “skill”?”

    No need to post this.

    Comment by Philip Machanick — 8 Jan 2009 @ 7:19 PM

  45. Here’s another one. You briefly mention El Niño in some answers. My understanding is that El Niño and La Niña are heat transfers between the ocean and atmosphere, i.e., from one part of the system to another, that affect short-term temperature but not the long-term trend, because they do not alter the overall energy balance.

    Is this correct?

    In any case, answering a question something like “What is the effect of El Niño and La Niña on long-term trends?” would be useful.

    Pat Neuman #33: Bob Carter makes a big deal of how the early Cenozoic had much higher CO2 but the planet was teeming with life. The paper you link to is a good answer. Note particularly the evidence of lowered ocean oxygen. I’ve been told that current models exclude anoxic oceans as a future possibility.

    That leads to another question: “What can we learn from the relationship between past extinction events and climate change?”

    Comment by Philip Machanick — 8 Jan 2009 @ 7:53 PM

  46. Mare asked 8 January 2009 at 2:31 AM

    “Have any of your opinions of global warming changed…”

    This is a good summary:
    http://bravenewclimate.com/2009/01/08/what-weve-learned-about-climate-change-in-2008/

    Comment by Hank Roberts — 8 Jan 2009 @ 8:42 PM

  47. Phil. Felton #43

    Thanks for the reply. The value you provide for CO2 IR fluorescence quantum yield is plenty good for my purposes. If it is correct, then the IR radiation emitted from the earth’s surface and absorbed will be nearly completely thermalized and not re-emitted, i.e. it will heat the air. It can be reasonably calculated from available extinction coefficients and CO2 concentration that >99% of the IR photons emitted by the earth’s surface that can be absorbed by CO2 will be absorbed in the first 100m. With >99% of that energy being thermalized it won’t be retransmitted to the earth’s surface as IR. This leads to my 2nd question. Unless CO2 at temperatures near the surface has a reasonable population of vibrationally excited molecules, then the heat generated by IR absorption cannot be redistributed radiatively. This last statement requires that the atmosphere not be able to lose energy via black body radiation, or to do so only poorly.
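    The “>99% absorbed in the first 100m” estimate has the Beer–Lambert form sketched below; the absorption coefficient used is a hypothetical round number chosen only to illustrate the calculation, not a measured CO2 line strength.

```python
import math

def fraction_absorbed(k_per_m: float, path_m: float) -> float:
    """Beer-Lambert law: fraction of photons absorbed over a path.

    k_per_m -- effective absorption coefficient (1/m); the value used
               below is an assumed illustrative number, not a measured
               CO2 line strength.
    """
    return 1.0 - math.exp(-k_per_m * path_m)

# With an assumed effective coefficient of 0.05 per metre near band
# centre, over 99% of absorbable photons are taken up within 100 m.
f = fraction_absorbed(0.05, 100.0)
print(f)  # ~0.993
```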

    Comment by T. Gannett — 8 Jan 2009 @ 10:22 PM

  48. > will be absorbed in the first 100m ….
    Miskolczi, right?

    Comment by Hank Roberts — 8 Jan 2009 @ 11:11 PM

  49. T. Gannett, I wonder if this is where you’re headed:

    Bulletin of the American Meteorological Society
    Earth’s global energy budget
    Kevin E. Trenberth, John T. Fasullo, Jeffrey Kiehl

    http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2F2008BAMS2634.1

    http://ams.allenpress.com/perlserv/?request=res-loc&uri=urn%3Aap%3Apdf%3Adoi%3A10.1175%2F2008BAMS2634.1

    —–excerpt_follows———

    “…This article provides an update on the Kiehl and Trenberth (1997) article on the global energy budget that was published in BAMS. A figure showing the global energy budget in that paper is widely used and appears in many places on the internet. It has also been reproduced in several forms in many articles and books. But it is dated. A primary purpose of this article is to provide a full color figure to update this work. At the same time, we expand upon it somewhat by detailing changes over time and aspects of the land vs ocean differences in heat budgets that should be of general interest. We also expand on the discussion of uncertainty and the remaining challenges in our understanding of the budget. …”

    DOI: 10.1175/2008BAMS2634.1

    The image is _very_ familiar. But the updated image hasn’t shown up much yet.

    Comment by Hank Roberts — 9 Jan 2009 @ 12:30 AM

  50. T. Gannett, Do the math. The Maxwell-Boltzmann distribution says that roughly 0.8% of molecules will be sufficiently energetic even at 200 K.
    Now think about the physics: If the energy isn’t being emitted radiatively, then it’s going into heating the atmosphere, which heats up until there is in fact a significant vibrationally excited population.
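    Ray’s ~0.8% figure can be reproduced from the Boltzmann factor for the CO2 bending mode near 667 cm^-1. A minimal sketch (degeneracy and the partition function are ignored, so this is only the leading-order estimate):

```python
import math

H_PLANCK = 6.62607015e-34  # Planck constant, J s
C_CM = 2.99792458e10       # speed of light in cm/s (wavenumber units)
K_B = 1.380649e-23         # Boltzmann constant, J/K

def boltzmann_factor(wavenumber_cm: float, temp_k: float) -> float:
    """Relative population exp(-h*c*nu/(k*T)) of the first excited
    vibrational level (degeneracy and partition function ignored)."""
    return math.exp(-H_PLANCK * C_CM * wavenumber_cm / (K_B * temp_k))

# CO2 bending mode near 667 cm^-1:
f_200 = boltzmann_factor(667.0, 200.0)
f_288 = boltzmann_factor(667.0, 288.0)
print(f_200)  # ~0.008, i.e. roughly 0.8% even at 200 K
print(f_288)  # ~0.036 at a typical surface temperature
```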

    Comment by Ray Ladbury — 9 Jan 2009 @ 8:04 AM

  51. Gavin,

    IRT #41 I was wondering if the GISS model was changed in light of this April 2008 study. It is a peer reviewed paper. If the model was not changed to reflect these findings, why? If changes were made, how did those changes impact the climate forecast?

    Regards

    [Response: Many model intercomparisons appear in the peer-reviewed literature. Many of them speculate about the cause of some perceived discrepancy. Unfortunately, very few of these are useful in actually improving the models. The reason is that a) the speculation (in this case a larger sensitivity of water vapour transports to the forcing) is not actually compared to data (there are no water vapour transport observations in the paper), and b) water vapour transports are a function of dozens of bits of physics – evaporation, cloud formation, mixing in storms, cyclogenesis, topographic steering, etc… Which of these do you think should be changed? and how? Thus while a paper like the one you quote is a good benchmark to see if future versions of the models (or future revisions of the data) reduce the discrepancy, models don’t change specifically because of them. Models do change because resolution gets better (which will have impacts on many of the aspects that control water vapour transports), because individual parameterisations are improved based usually on specific measurements of the process in question, and because models become more complete (for instance, having a more a priori treatment of aerosol/cloud interactions). When we re-do the long runs that are analogous to the ones discussed in the paper (which are actually starting very soon), we’ll see if things get better or not. Note too that the repeat time for these kinds of comprehensive experiments is multiple years, not months. – gavin]

    Comment by Vernon — 9 Jan 2009 @ 9:21 AM

  52. Do you include sunspot cycles and their effect on cloud formation in your modeling?

    [Response: The answer is already above. – gavin]

    Comment by Shoshin — 9 Jan 2009 @ 11:19 AM

  53. Gavin,
    In model intercomparisons, do models with ocean circulation, all having similar sensitivities for the same forcing scenarios, have similar time constants for fully relaxing the radiative imbalance? I understand the relationship between sensitivity and relaxation time, but do different models show different types of transient behavior?

    [Response: Yes. Their uptake of heat into the deep ocean is different – those with less uptake warm faster than the others. – gavin]

    Comment by Bryan S — 9 Jan 2009 @ 11:59 AM

  54. T. Gannet (47), et al: This is one of my areas of interest (which is not the same as an area of knowledge). I’ll throw out some stuff that might shed some light or might generate informative rebuttals.

    The atmosphere does radiate a broad spectrum ala Planck’s blackbody, though the view is that it is done poorly. At least when you get beyond dividing the atmosphere into 3 or 4 strata for analysis it gets very complex to analyze and figure out. [There is some debate over this assertion which, IMO, mostly boils down to what “blackbody radiation” is exactly.]

    Non-CO2 molecules that have been thermally heated via collision with CO2, gaining translation energy from CO2 vibration energy, can, among other processes, later collide with another CO2 (say at a higher altitude), and transfer some of its translation energy back to the CO2 molecule, some of which will/may go into a vibration mode and at least be eligible then to radiate outward (or inward) as IR.

    Comment by Rod B — 9 Jan 2009 @ 1:22 PM

  55. IRT #53 Since Argo floats show no ocean warming past the surface mixing boundary and neither UAH nor RSS satellite data shows tropospheric warming, both predicted by climate models, how do you go about determining what parameters need to be changed based on the real-world data? What is the process used to determine what part of the science is being incorrectly parameterized?

    I am not being critical of the models but rather looking for insight into the process you use to refine your model in light of new information.

    [Response: For a start you get the characterisation of the data correct. Both RSS and UAH MSU-LT show warming, as does long term ocean heat content data (Domingues et al, 2008). But these aren’t the kind of data that are used to improve models – they are (if they are precise enough) the kind of data that are used to evaluate the models. To improve a model I need something like good data on how sea ice albedo varies with snow conditions or melt-pond extent; to evaluate a model I need to look at whether the interannual variability of sea ice extent is comparable to the obs. The former is specific to a process (and therefore a particular chunk of code), while the latter is an emergent property. I’ve said this before, and I’ll say it again, models are not tuned to match long-term time-series data of any sort. – gavin]

    Comment by Vernon — 9 Jan 2009 @ 3:12 PM

  56. IRT #55 Thank you for your comment. I meant to say that both RSS and UAH do not show tropical upper tropospheric warming, which is called for in climate models, but I hashed it up.

    Sorry about that, but thanks for the insight.

    [Response: RSS does. – gavin]

    Comment by Vernon — 9 Jan 2009 @ 3:26 PM

  57. Re: #53

    Gavin,

    When an emission scenario is input into various ocean-atmosphere coupled models (known to have the same equilibrium sensitivities), and they are run out over hundreds to several thousand years, are there large differences in the length of time the temperature continues to rise after the emission change approaches 0? Conversely, what is the scatter (from the model comparisons) in the (fractional percentage) of the temperature rise that has already been realized at the time that the emission change approaches 0? Is there already significant scatter at time 0, or is there a tight cluster, with increasing scatter out in time?

    Comment by Bryan S — 9 Jan 2009 @ 10:07 PM

  58. OT

    Mauna Loa posts a 0.24 ppm yearly rise in CO2 for 2008, the smallest since recording began in 1959!!!

    http://www.esrl.noaa.gov/gmd/ccgg/trends/

    Hopefully they fully checked these numbers before posting!!

    Comment by kuhnkat — 11 Jan 2009 @ 1:25 PM

  59. As I work my way through David Archer’s Understanding the Forecast and read through other more technical references, I have a couple questions relating to the overlap of CO2 and H2O that maybe Hank, Ray or one of the other regulars could help me with.

    I understand that the absorption spectrum is non-continuous, but rather made of discrete wavelength bands, i.e. the “picket fence” analogy.

    My first question is: Do the CO2 bands, or “pickets,” coincide with those of H2O, or are they offset from each other?
    I ask because if they are offset it would undermine the popular argument that CO2 does not matter in the region of overlap.

    Second question regarding pressure broadening: Does the broadening only occur outward in either wing of the wider frequency range, or does each discrete band, or “picket,” broaden as well?

    Comment by Jim Eager — 11 Jan 2009 @ 2:12 PM

  60. Re 58, Did you read this disclaimer?

    “The last year of data are still preliminary, pending recalibrations of reference gases and other quality control checks.”

    And if you click on this link:

    “globally averaged CO2 concentration at the surface.”
    http://www.esrl.noaa.gov/gmd/ccgg/trends/index.html#global

    The Annual Mean Growth Rate shows 1.82 for 2008.

    Comment by Jim Eager — 11 Jan 2009 @ 3:20 PM

  61. Jim, rather than divert this topic meant to collect FAQ suggestions, — you’ve posted a good one — you might look at
    http://www.aip.org/history/climate/Radmath.htm
    where the explanation includes

    “…. Take a single molecule of CO2 or H2O. It will absorb light only in a set of specific wavelengths, which show up as thin dark lines in a spectrum. In a gas at sea-level temperature and pressure, the countless molecules colliding with one another at different velocities each absorb at slightly different wavelengths, so the lines are broadened …. In cold air at low pressure, each band resolves into a cluster of sharply defined lines, like a picket fence. There are gaps between the H2O lines where radiation can get through unless blocked by CO2 lines….”

    Read the whole thing and its pointers, not just my excerpt. The ‘band’ is a picture on an instrument, the instruments have continued to improve, but you’re probably asking about the radiation physics. My hunch (only a hunch) is that at the point where almost all of the molecules have time to wring out a photon before they bumble into one another and get their vibrations mixed up, the lines will be most precise.

    Let me try an analogy — purely a hunch, someone knowledgeable will correct me. Have you seen a chaotic pendulum? It’s a simple device in which several different pieces can spin independently, and the energy is moving back and forth throughout the whole thing.
    http://www.youtube.com/watch?v=BrMQ7G1DtPw
    http://www.youtube.com/watch?v=mhxcMFQjVRs&NR=1

    Let’s take a composite pendulum and enable it to capture and release pingpong balls, but the firing end has to be spinning at some high speed before it can fire off a pingpong ball.

    One of them alone will eventually reach the firing point.

    Take a bunch of such chaotic pendulum devices floating around in a small area (postulate zero gravity and a vacuum …) they’ll run into one another far more often than any one of them will happen to concentrate enough of its energy into one particular arm of the device and emit a pingpong ball.

    No, this isn’t a scientific explanation ….

    Comment by Hank Roberts — 11 Jan 2009 @ 3:37 PM

  62. Jim Eager, The 15 micron band for CO2 is on the edge of the H2O band. The best illustration I know of is this one:

    http://www.globalwarmingart.com/wiki/Image:Atmospheric_Transmission_png

    So while CO2 is absorbing very strongly in this band, water vapor is weakly absorbing.

    WRT pressure and doppler broadening, my understanding is that the whole line broadens and flattens slightly.

    Comment by Ray Ladbury — 11 Jan 2009 @ 3:45 PM

  63. Pressure broadening broadens ALL lines to become wider. The reasons are severalfold (and I may have forgotten a few), but a major one is the Doppler shift (the velocity of a particle goes up when you increase pressure, PV=nRT, K.E. varies with T), where the velocity is randomly distributed and in a random direction WRT the direction of radiation, which will broaden the lines. This affects the absorption spectra directly.

    Other things affect the ability to absorb indirectly, by changing the energy or by siphoning off energy too quickly to hold on and emptying the band quickly to be refreshed anew.

    But the broadening is used in stellar physics to see what the temperature of something is and, because that is symmetrical, doesn’t affect the doppler shift of recession for distant objects (which is only one way).

    Comment by Mark — 11 Jan 2009 @ 6:21 PM

  64. Re #57: I will speculate on my own question in hope of receiving education.

    *My hypothesis* is that in the actual climate system, most of the radiative imbalance will have already been equilibrated when the forcing change reaches 0. There is a long thermal lag, but the long tail of the transient response represents only a small fraction of the total temperature change needed to reach equilibrium.

    The concept of significant heating “left in the pipeline” is a flawed hypothesis.

    Reason: The portion of the shallow ocean, land and cryosphere that is effectively coupled with the atmosphere has a relatively small amount of mass, allowing the atmospheric temperature to increase rapidly. The temperature rise will nearly equilibrate the forcing change within only a very short time period (maybe a few years). The perturbation will not completely die off for several thousand years, however, due to the slow uptake of heat by the deep ocean. The deep ocean has a very long memory and will record a complex interference pattern of past, present, and future perturbations. Models not only fail to initialize past ocean conditions, but additionally are known not to accurately resolve ocean turbulent eddy motion on decadal to multi-decadal time scales; therefore the transient responses that are produced cannot be considered skillful predictions of the real climate system. They are rather only process experiments.

    Why is my hypothesis flawed?

    Comment by Bryan S — 11 Jan 2009 @ 11:18 PM

  65. Ray Ladbury Says:
    11 January 2009 at 3:45 PM
    Jim Eager, The 15 micron band for CO2 is on the edge of the H2O band. The best illustration I know of is this one:

    http://www.globalwarmingart.com/wiki/Image:Atmospheric_Transmission_png

    So while CO2 is absorbing very strongly in this band, water vapor is weakly absorbing.

    WRT pressure and doppler broadening, my understanding is that the whole line broadens and flattens slightly.

    The trouble with that figure is that it’s such low resolution that it gives the false impression that the band absorbs at all wavelengths. A blown-up region, shown below, gives a more realistic picture.

    http://i302.photobucket.com/albums/nn107/Sprintstar400/CO2H2O.gif

    Comment by Phil. Felton — 12 Jan 2009 @ 12:18 AM

  66. Mark: pressure broadening and Doppler broadening are quite different.

    Doppler broadening is due to the component of the velocity of the absorbing molecule along the line of sight.

    To understand pressure line broadening, you must keep in mind that there is a huge number of transitions occurring simultaneously. Typically, the number of transitions is of the order of the Avogadro number.

    Each transition occurs at a discrete frequency.

    Each molecule generates an electric field which acts on the charged particles of any other molecule which is sufficiently close (this is what is called a “collision”). This results in a slight modification of the characteristics of the molecule, so that the possible transition occurs at a slightly different frequency. If you consider all molecules of a given gas, that gives you a distribution of discrete transitions.

    What you see is the envelope of that distribution.

    Pressure line broadening occurs at all frequencies, but how much broadening depends upon the gas under consideration as well as upon the frequency.
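    For a rough sense of the numbers, the Doppler half-width of the 667 cm^-1 CO2 line can be computed directly and compared with a typical air-broadened (pressure) half-width; the 0.07 cm^-1 per atmosphere figure below is an assumed representative value, not a HITRAN lookup:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
C = 2.99792458e8      # speed of light, m/s
AMU = 1.66053907e-27  # atomic mass unit, kg

def doppler_hwhm_cm(nu0_cm: float, temp_k: float, mass_amu: float) -> float:
    """Doppler half-width at half-maximum, in cm^-1."""
    return nu0_cm * math.sqrt(2.0 * math.log(2.0) * K_B * temp_k
                              / (mass_amu * AMU * C ** 2))

# CO2 bending-mode line near 667 cm^-1 at a surface temperature:
dop = doppler_hwhm_cm(667.0, 288.0, 44.0)

# Assumed representative air-broadened (pressure) half-width at 1 atm:
lorentz = 0.07  # cm^-1, illustrative value

print(dop)            # ~6e-4 cm^-1
print(lorentz / dop)  # pressure broadening wins by ~100x near the surface
```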

    Comment by Y fouquart — 12 Jan 2009 @ 7:09 AM

  67. Bryan S., Your view basically assumes that all the ocean below the first 30 meters is inert, as water above that depth has mass equal to the entire atmosphere. Since we have evidence of warming from below that depth, tain’t so. What is more, GCMs with more realistic oceans perform better. Your model wouldn’t look much like Earth.

    Comment by Ray Ladbury — 12 Jan 2009 @ 10:04 AM

  68. Bryan S., #57,#64
    Try your own calculation.

    A short Tutorial:
    Step 1. Find values for the ocean mass and water heat capacity.
    Hint: The difference between salty water and pure water could be ignored in this crude calculation.
    Step 2. The ocean is divided in an upper part of 3% of total mass and the deep ocean of 97% total mass. Assume that the temperatures of both parts are in equilibrium (but not necessary equal).
    Step 3. Calculate the energy E in Joule needed to heat up the deep ocean by 1 K.
    Step 4. Assuming that the energy that goes into the deep ocean is proportional to the difference in the temperature anomaly, the response to a (relatively) fast rise of the upper-part temperature by 1 K at t=0 s is
    t_deep=1K*(1-exp(-t/tau))
    tau is the response time of the deep ocean.
    t_deep is the temperature anomaly of the deep ocean; it is 0 at t=0.
    Step 5. Choose different response times you like (likely between 100 and 1000 years) and convert them to seconds.
    Step 6. The energy per second that goes into the deep ocean under these simple assumptions is
    P=(1K-t_deep)*E/tau
    Calculate it at least for t=0 in units of TW (TeraWatt) or PW.
    You can also calculate as a function of t if you like.
    Step 7. Choose a climate sensitivity s you like in K/(W/m^2). If you have it in K per doubling of CO2, divide it by 3.708 W/m^2 to get this value.
    Step 8. Calculate the additional heating (for example by ‘radiative forcing’) H in TW needed to get a 1 K (long term) temperature response by multiplying (1K)/s by 510e12 m², the area of the Earth.
    Step 9. Compare H and P for t=0. You will need H+P TW of additional heating to reach 1 K very soon in the presence of the deep ocean uptake. The total long term temperature response to H+P TW will be (H+P)/H K, the short term response only 1 K.
    Step 10. If you like try more values for tau and s.
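    The steps above can be sketched numerically; the ocean mass, response time, and climate sensitivity below are assumed illustrative values, as the tutorial itself suggests:

```python
# Uli's steps with rounded textbook numbers; tau and s are assumptions.
OCEAN_MASS = 1.4e21  # kg, total ocean mass (Step 1)
CP_WATER = 4000.0    # J/(kg K); salt vs pure water difference ignored

# Steps 2-3: the deep ocean holds 97% of the mass.
E = 0.97 * OCEAN_MASS * CP_WATER  # J needed to warm the deep ocean 1 K

# Step 5: an assumed deep-ocean response time of 500 years.
tau = 500.0 * 3.156e7  # seconds

# Step 6: heat flux into the deep ocean at t=0 after a 1 K surface rise.
P0 = E / tau  # W
print(P0 / 1e12)  # ~340 TW

# Steps 7-8: assumed sensitivity 0.8 K/(W/m^2), i.e. ~3 K per doubling.
s = 0.8
H = (1.0 / s) * 510e12  # W needed for a 1 K long-term response
print(H / 1e12)  # ~640 TW

# Step 9: at t=0 the deep-ocean uptake is comparable to the forcing.
print(P0 / H)  # ~0.5
```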

    I hope that helps.

    Comment by Uli — 12 Jan 2009 @ 12:15 PM

  69. Ray. Not inert. Only that the thermocline will appear as a sharp boundary in the transient response. The atmospheric temperature will appear to equilibrate very rapidly at first due to the heating of land and shallow ocean (the smaller the mass, the faster the temperature increases). If plotted graphically, dT/dt will be large at first, then will decrease rapidly, roughly logarithmically, until it becomes asymptotic. It is the complex dynamical structure of the upper ocean that will determine the exact shape of the curve.

    Think about this analogy. If you run the faucet in your wash basin, the water level (temperature) will rise if the flow of water (heat) into the basin exceeds the rate that water drains out the bottom. How fast it rises is determined by 1) the size of the basin, 2) the rate of the water flow into the basin, and 3) the size of the drain. Your wash basin may drain into the ocean, but the size of the ocean will not be relevant to the problem of determining how long it will take to fill up the basin. Once water begins running over the side of your wash basin, it will by definition stop rising (equilibrium), despite the drain to the underlying ocean. If your basin had a sensor to detect when it was full, the faucet would shut off when it ran over. It would then intermittently kick on to keep the basin full. During this period, the rate of inflow would equal the rate of drainage. Once the ocean was full, the wash basin would stop draining completely, and it would be at complete equilibrium, with no water going in or out.

    The increased GHG forcing just makes the sides of the basin taller, so the water level must get higher before it runs over. Despite the drain, as the sides of the basin slowly get taller, the water level is always lapping full at the sides, since the faucet kicks on immediately when the sensor shows that water is not running over. There is very little lag time needed to keep the basin full as the sides get higher.
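    The basin picture can be made quantitative with a toy two-box energy-balance model (surface mixed layer plus deep ocean); all parameter values below are illustrative assumptions, not tuned numbers. It shows both behaviours at once: a fast partial surface response, then a slow multi-century tail toward full equilibrium:

```python
# Toy two-box climate model: surface mixed layer + deep ocean.
# Every parameter value is an illustrative assumption, not a tuned fit.
F = 3.7      # step forcing, W/m^2 (roughly a CO2 doubling)
LAM = 1.25   # radiative feedback, W/(m^2 K) -> ~3 K equilibrium warming
GAMMA = 0.7  # surface-to-deep heat exchange coefficient, W/(m^2 K)
C_S = 4.0e8  # mixed-layer heat capacity, J/(m^2 K) (~100 m of water)
C_D = 1.4e10 # deep-ocean heat capacity, J/(m^2 K) (~3500 m of water)

DT = 1.0e6   # time step, s (~12 days; well inside the stability limit)
YEAR = 3.156e7

def surface_warming(years: float) -> float:
    """Euler-integrate the two boxes and return surface warming in K."""
    t_s = t_d = 0.0
    for _ in range(int(years * YEAR / DT)):
        drain = GAMMA * (t_s - t_d)  # heat "draining" into the deep ocean
        t_s += DT * (F - LAM * t_s - drain) / C_S
        t_d += DT * drain / C_D
    return t_s

t_eq = F / LAM                # full equilibrium warming, ~3 K
print(surface_warming(30))    # ~1.9 K: most of the response is fast...
print(surface_warming(3000))  # ~2.9 K: ...but the last ~1 K takes ~1000 yr
```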

    Comment by Bryan S — 12 Jan 2009 @ 12:29 PM

  70. Thanks very much to Hank, Ray, and Mark.
    Hank, the “picket fence” of sharply defined absorption lines described in the short excerpt from Spencer Weart’s The Discovery of Global Warming is exactly what I was asking about. I have read Weart, but mainly the book version, as I found it too tedious to do so on-line, but I will go back through the on-line Basic Radiation Calculations chapter again, since I know the on-line version has embedded links to supporting material. I enjoyed your links to the chaotic pendulum (so that’s what those are called), but I’m not entirely grasping your analogy.

    Ray, I understand that CO2’s absorption in the 15 micron band is much stronger than water vapour’s; it’s the total overlap further to the left that I’m more interested in. And that diagram is much too coarse to show the sharply defined absorption lines. Someone once posted a comparison here at RC showing the marked difference between a solid-appearing absorption curve and a high-resolution plot of the individual discrete absorption lines. This image showing the pressure broadening in the wings of the 15 micron band:
    http://home.casema.nl/errenwijlens/co2/co205124.gif
    doesn’t quite do it since it only resolves into discrete absorption spikes in the wings.

    Ahhh, these two threads at Eli’s show what I mean:
    http://rabett.blogspot.com/2007/07/pressure-broadening-eli-has-been-happy.html
    http://rabett.blogspot.com/2007/07/temperature-anonymice-gave-eli-new.html

    They also illustrate what Mark said: that pressure broadening causes all lines to become wider, expanding into the gaps between the “pickets.”

    So, my second question has been addressed, but the first remains:
    In the region of CO2-H2O overlap do the absorptive lines of each coincide, or are they offset?

    Comment by Jim Eager — 12 Jan 2009 @ 1:59 PM

  71. Yes Phil (65), that’s what I mean. It may even have been you who posted the comparison that I recall, although that’s not specifically it.

    Comment by Jim Eager — 12 Jan 2009 @ 2:02 PM

  72. Bryan S., I appreciate your hypothesis for causing me to scurry off & review what I thought I knew about the thermocline. That review caused me to reflect that presumably one consequence of the reduced Arctic ice cover characterizing the last three years must be increased vertical mixing in the Arctic Ocean. This in turn should affect oceanic heat transport, although in just what manner I don’t dare speculate.

    Anyway, turning to your hypothesis, my sense of the thermocline (post-review) would require that the bottom of the basin in your analogy be chaotically reforming itself on an ongoing basis. Moreover, I think the thermocline depth is frequently much deeper than Ray’s upper 30 m of ocean, which would suggest a much more gradual warming curve than you are imagining. My two cents. . .

    Comment by Kevin McKinney — 12 Jan 2009 @ 2:39 PM

  73. > chaotic pendulum … analogy

    Someone who knows something should comment on that. Eli, you hereabouts?

    Look at the online pictures showing the various different ways that a CO2 molecule can vibrate — angle changes, bond length changes, simultaneous or alternating. The energy there can move around the molecule “sorta kinda like” the various pieces of a chaotic pendulum can change their speed and direction as energy moves around in that system.

    In an isolated molecule, if a photon is absorbed it adds more energy, and if the molecule doesn’t collide with another molecule and get rid of energy by collision, the energy moves chaotically (?) within the molecule’s many kinds and patterns of vibration, and if one of those happens to be the right [er uh size?] that vibration produces a photon and off that parcel of energy goes.

    Or so I imagine. There, I’ve done the hard part, someone else can explain why it makes sense (grin) and do the computer animation ….

    Ha!
    ReCaptcha says for this post:

    “publish tuned”

    Comment by Hank Roberts — 12 Jan 2009 @ 3:42 PM

  74. PS, you can find triatomic molecule vibrations illustrated online; search for

    co2 molecule vibration mode applet

    Note how many _more_ possibilities are present with a triatomic molecule. A “chaotic pendulum” is very simple by comparison, with rotation but no stretch or bending modes. (I wonder if stretching a bond is like changing the length of a macroscopic pendulum?)

    There’s a challenge for any Exploratorium or other science-museum hardware builders!
    ______________
    “factors that”

    Comment by Hank Roberts — 12 Jan 2009 @ 3:51 PM

  75. And one more to sum up the idea:

    http://www.maths.ed.ac.uk/~s9905488/other/mol.pdf

    “… considering the following basic view of the interaction of radiation and matter. Imagine a diatomic molecule which has a natural period of oscillation; if electromagnetic waves of a certain frequency pass by and somehow drive the oscillations of the molecule, then the molecule will extract more energy from the waves when their frequency matches the characteristic frequency of vibration of the molecule. Conversely, if the molecule was somehow able to store energy and then release it through its oscillations, again in the form of electromagnetic waves, then it would radiate waves with a frequency corresponding to the natural frequency of the molecule. This is a very qualitative argument but it does give a basic idea of the quantum mechanics which links molecular vibrations to observed spectral features.”
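    The driven-oscillator picture in that quote is easy to play with numerically. Here is a generic textbook sketch in Python (not a model of any particular molecule; the parameter values are made up for illustration):

```python
import math

def steady_amplitude(drive_w, w0=1.0, damping=0.1, f0=1.0, m=1.0):
    """Steady-state amplitude for m*x'' + damping*x' + m*w0^2*x = f0*cos(drive_w*t).

    The oscillator extracts the most energy from the driving wave when
    the driving frequency matches its natural frequency w0."""
    return f0 / math.sqrt((m * (w0**2 - drive_w**2))**2 + (damping * drive_w)**2)

# response well below, at, and well above the natural frequency
a_low, a_res, a_high = (steady_amplitude(w) for w in (0.5, 1.0, 1.5))
```

    The amplitude at resonance dwarfs the off-resonance amplitudes, which is the qualitative point the quoted paragraph is making.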
    ____________
    “be- select”

    Comment by Hank Roberts — 12 Jan 2009 @ 3:55 PM

  76. OK, Hank, that’s more or less what I thought you were driving at as I watched some of the secondary pendulums all of a sudden start spinning much faster.
    Thanks –Jim

    Comment by Jim Eager — 12 Jan 2009 @ 3:58 PM

  77. Further to #73, the extra energy changes the entire system somewhat so that what used to be a vibrational mode now no longer is resonant, because, for example, the electron spends longer now between the O-C nuclei and so changes the electrostatic potential that causes the restoration of a vibrational mode.

    The torsional force of flexion excitation changes for the same sort of reason (the C end can’t “flex” in as far because of electrostatic repulsion).

    etc.

    Very complicated.

    Comment by Mark — 12 Jan 2009 @ 4:09 PM

  78. Jim, #70, your question doesn’t make sense. If “CO2-H2O overlap” means the absorption spectra overlapping (which is the only reading that makes sense in the context asked), then yes, the absorption lines do coincide. But that’s a tautology, so I can’t make out what you’re on about.

    I doubt anyone else can either.

    It doesn’t really matter anyway, since the collisional relaxation at about 15um is very much within the chances of happening at STP in our atmosphere by kinetic sources. So even if they didn’t overlap, they could populate each other’s absorption spectra interchangeably by hitting each other head on (to raise the energy transfer to the resonance of the excitation energy) or by rear-ending (to reduce the energy transfer to the resonance of the excitation energy).

    Or hit N2. Argon. Xenon. Yo momma. Whatever.

    Comment by Mark — 12 Jan 2009 @ 4:16 PM

  79. Here’s what seems (to me, a purely amateurish reader) a helpful explanation of absorption lines in simple words.

    http://www.applet-magic.com/absorptionspectra.htm

    Written and coded by an economist, if I read the home page right.

    Comment by Hank Roberts — 12 Jan 2009 @ 4:23 PM

  80. Mark, it may well not make any sense, but I didn’t know that because I don’t have the physics background, so I asked.

    The spectrum is continuous, no? While the absorption lines are centered on specific discrete wave lengths, right?

    My thought was that if the absorption lines of CO2 and H2O do not coincide then the lines of one gas would be offset into the gap between the lines of the other gas, meaning that the absorption effect of CO2 would not be redundant to that of the more numerous H2O in the region of overlap, as some claim.

    Perhaps my mistake is in my lack of understanding of what determines the discrete wavelengths of the absorption lines. Is it a characteristic of the particular molecule (CO2 vs H2O) or is it a characteristic of the bonds?

    Comment by Jim Eager — 12 Jan 2009 @ 5:34 PM

  81. Jim, the answer is yes, both. Complicated, hardly answerable in a blog posting. Lots of links posted above though. Maybe a FAQ will help.

    Comment by Hank Roberts — 12 Jan 2009 @ 6:07 PM

  82. Potential FAQ material if anyone from someplace like the Exploratorium is checking in here — kids, you _can_ do this stuff at home nowadays.

    Need a single-mode red laser, cheap? Check your pockets!
    J. Chem. Phys. 124, 236101 (2006); DOI:10.1063/1.2212940– 16 June 2006
    REFERENCES (6)
    Joel Tellinghuisen
    Department of Chemistry, Vanderbilt University

    “An inexpensive (less than $5) key-chain model of red laser pointer (RLP) operates with typically 98% of its total emission in a single longitudinal cavity mode. The laser self-tunes with time, interpreted as due to thermal effects. The laser can also be tuned by varying its operating current and voltage. These properties permit one to quickly and easily record absorption spectral segments spanning ranges of 1–6 cm–1, with high quantitative reliability, resolution, and accuracy….”

    Comment by Hank Roberts — 12 Jan 2009 @ 6:15 PM

  83. Jim, I think the problem is the wording of the question. Read up on some A-level physics first. But here’s a few that may help firm up what you’re asking.

    There are continuous spectra and line spectra. It isn’t “The spectrum is continuous”. The line spectra have a very limited width. An example is the 15um line spectra. The width of a line spectrum depends on its half-life or stability. The definition of time is from a meta-stable (nearly stable) transition of caesium. Because the half-life of this excited state is so long, the width of the emission line is very thin and so the error in counting X vibrations is very small and the time measured therefrom likewise accurate.

    The IR absorption is generally composed of many very close stable (or semi-stable) excited states. Because they are close together and very unstable, they can merge into a wide band of excited energies that are a resonant frequency of the system.

    A resonant frequency is more likely to be absorbed.

    That causes the energy to be absorbed and reemitted in a random direction. Therefore in this band, instead of shooting straight out into the universe, the photon and its concomitant energy take a random walk through the atmosphere, with a very short transfer time between reabsorptions. Only when the distance to absorption is getting to something of the order of the distance out of the atmosphere does it take a more direct route.

    Now, the random walk will move much slower (to the square of the linear distance, given constant mean path between absorptions) the thicker the layer is.

    Now what happens when you get more absorbtion elements per unit space?

    Shorter mean free path.

    Which means it takes longer to get out.

    Because of the mixing, the height at which the mean free path is about what the distance to the exosphere is must also get higher (because the density of absorbers goes up more quickly where it is low, with the proviso that VERY well mixed concentrations will not show this change).

    So it doesn’t matter if the absorption bands overlap at some point: all that means is that the mean free path goes down a lot where they DO overlap, for that overlapping energy domain. And that keeps the energy in that domain in the atmosphere much longer (doubling concentration can quadruple the residence time, for example).
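    That square-law scaling of residence time can be checked with a toy model: a photon hops one layer up or down at random, reflecting at the ground and escaping at the top. This Python sketch is purely illustrative (made-up layer counts, nothing quantitative about the real atmosphere):

```python
import random

def mean_escape_steps(n_layers, trials=2000, seed=42):
    """Mean number of unit hops for a 1-D random walk from layer 0
    (reflecting ground) up to layer n_layers (escape to space)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos = steps = 0
        while pos < n_layers:
            pos = max(0, pos + rng.choice((-1, 1)))  # ground reflects
            steps += 1
        total += steps
    return total / trials

# halving the mean free path over the same height means twice the layers
t_thin, t_thick = mean_escape_steps(10), mean_escape_steps(20)
```

    The ratio of the two mean escape times comes out close to 4: doubling the optical thickness roughly quadruples the time the energy spends inside.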

    IRL it’s not that simple, since Doppler shifting can move an emitted photon from an absorbing particle into a band that isn’t an absorption band, or the energy can pass into another form whose product can be outside the absorption band too. Then again, that which is outside the absorption band can be seen (from the POV of the absorber) to be moved IN to the band it is resonating with and be captured.

    But this sort of stuff is getting on toward degree level info to actually work out roughly, and PhD level to model with some degree of promise.

    Comment by Mark — 12 Jan 2009 @ 6:17 PM

  84. Hank and Mark, thanks for bearing with me. Your replies have been quite helpful, even if my grasp of them has not been complete.

    Yes, Mark, a better–and more recent–grounding in A-level physics would definitely help. It’s been a long time since I took basic level physics and chem as electives for non-science majors. That considered, I don’t think I’m doing that badly. I’ve been following the discussions at RC and Tamino’s and Eli’s and reading increasingly technical references for a couple of years now. I’m now starting to drill into harder stuff, at least for me. I do understand the part about shorter mean free path and increased residence time in your reply @83, and I also realize that the overlap is not really the issue that some make it out to be, I just had the thought that it could perhaps be even less of an issue, hence my questions.
    Time to finish reading the links that I’ve been directed to.

    Comment by Jim Eager — 12 Jan 2009 @ 8:10 PM

  85. Chuckle. Don’t miss the two earlier “saturated gassy argument” threads here, which many of us staggered through trying to grasp this stuff without using math.
    I _think_ I sort of understand a bit more than before, in a vague and poetic sense. But I keep hoping someone who really does will straighten me out when I try to pass on my vague notion of how it works to others.
    I’m aiming for fifth grade level comprehensible language, more or less, nothing more than that.

    Comment by Hank Roberts — 12 Jan 2009 @ 9:15 PM

  86. Yep, time to reread them, too. They might even make a bit more sense to me now than they did then. And trust me, I even slogged through your multi-thread back and forth with Rod over vibrational states and what temperature is and is not. Thanks to all of you for the education.

    Comment by Jim Eager — 12 Jan 2009 @ 10:30 PM

  87. Re:#68
    there is an error in step 8, sorry. The correct version is:
    “Step 8. Calculate the additional heating (for example by ‘radiative forcing’) H in TW to get 1K (long term) temperature response by dividing the area of Earth 510e12 m² by s.”

    Comment by Uli — 13 Jan 2009 @ 2:41 AM

  88. Re:#87: Thank you Uli for taking interest in my comments. I have previously gone through this arithmetic, and the results plus some background reading and personal research with other types of transient systems have spurred the above comments. The background of this subject deals with the controversial Stephen Schwartz paper (2007), and his reply (2008) to the paper in rebuttal by Forster et al. (2008) in which both Gavin Schmidt and Mike Mann were co-authors. Schwartz used a simple energy balance model given by dH/dt=Q-E=CdTs/dt to relate the change in system heat content to the transient change in global surface temperature to try and constrain a system relaxation time and thus an estimate of the climate sensitivity given by S=t/C, where S is the climate sensitivity, t is the time constant, and C is the effective heat capacity. His analysis used a controversial method of autocorrelation assuming a linear trend plus a first-order Markov process to estimate the time constant. From this analysis, he calculated a short time-constant (8.5 years) which was judged by the subsequent authors as being unphysical, due to the very slow uptake of heat by the deep ocean. His analysis is supposedly compromised by the various heat reservoirs in the climate system, each having their own time constant, plus a noisy temperature signal owing to natural variability.

    As I have thought about this problem of various components of the system affecting the transient response, it turns out to be a similar type of problem to the practice of pressure-transient analysis in subsurface petroleum or groundwater reservoirs. When the pressure in a subsurface reservoir is perturbed, we can study the decay in the transient response with respect to time. As the ripple from the perturbation intersects various boundaries in the geological system (rocks with different permeability), it affects the transient response, and the shape of the transient curve when plotted on a graph. We see several characteristics which I think may have analogs in the climate system. If considering a pressure buildup test after a withdrawal of fluids, the transient response on a semi-log plot will generally have a form where the pressure moves toward equilibrium rapidly at first, then the change in slope decreases with time. The coupling of the components of differing permeability will determine the ultimate shape of the buildup response. It might be possible to visualize such a system in terms of several distinct time-constants since the graphical slope of the response may be defined by several straight lines of differing slopes.

    If we consider a thin permeable rock formation (of low pore volume) that is bounded by much less permeable rock (of high pore volume), and bounded again by completely impermeable rock, the relaxation time will be very small for the permeable section, and very large for the almost impermeable section. Even though the bulk of the fluid might be held in the very impermeable section, it is not effectively coupled to the permeable section. The transient response will appear to approach an asymptote as the boundaries of the permeable section are reached. The pressure in the test might continue to build very slowly for very long periods of time if the bounding rock formation is very impermeable (as it slowly exchanges fluids with the permeable layer), but for the purposes of the transient response, the pressure will have appeared to almost completely equalize after only a much shorter time period.

    The climate system is analogous in that it has components of small heat capacity that are very “permeable” to heating, which are weakly coupled to a component of very large heat capacity which is almost “impermeable.” I think that for all practical purposes, this almost impermeable component should appear as a strong boundary (change in slope) in the transient response, such that it can be almost ignored for the purposes of transient climate sensitivity. I also suspect that the Schwartz paper and rebuttals might lead to a better way to analyze the so-called time constant, by viewing the climate response as a continuous transient response of changing slopes.

    This explanation is why I have convinced myself that the concept of “heating remaining in the pipeline” or “committed warming” is a seriously flawed concept. First, it does not necessarily convey any truly meaningful information to policy makers. If 85 percent of the response to a change in forcing is realized in 5-10 years, with the remaining portion realized over several thousand years (as Schwartz, 2008 suggests), the system is effectively at equilibrium after only 5-10 years. Technically, it is not, but practically it is. Now, if the radiative imbalance induced by the changing forcing allows heat to accumulate in the small permeable reservoirs much more rapidly than it can be leaked off to the deep impermeable reservoir, then the temperature will increase in the atmosphere (small reservoir) nearly like the deep reservoir is not even present. The radiative imbalance due to the forcing change+feedbacks will be *almost* equilibrated quickly. The deep ocean is thus effectively decoupled from the climate system, and can be nearly trivial to the problem at hand, as I have explained in my lavoratory example above.
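    The two-reservoir picture under discussion can be sketched as a toy two-box energy-balance model in Python: a low-heat-capacity “permeable” mixed layer weakly coupled to a high-heat-capacity “impermeable” deep ocean. All parameter values here are invented for illustration, not taken from Schwartz or the replies:

```python
def two_box(years, forcing=3.7, lam=1.2, gamma=0.7,
            c_mix=8.0, c_deep=200.0, dt=0.05):
    """Surface and deep temperature anomalies (K) after a step forcing
    (W/m^2). lam is the climate feedback parameter (W m^-2 K^-1),
    gamma the coupling to the deep box, heat capacities in W yr m^-2 K^-1."""
    t_surf = t_deep = 0.0
    for _ in range(int(years / dt)):
        leak = gamma * (t_surf - t_deep)        # heat uptake by the deep box
        t_surf += dt * (forcing - lam * t_surf - leak) / c_mix
        t_deep += dt * leak / c_deep
    return t_surf, t_deep

decade, _ = two_box(10)       # fast response of the mixed layer
millennia, _ = two_box(2000)  # long tail toward forcing/lam = 3.08 K
```

    With these invented numbers most of the fast response is realized within a decade, while the remainder trickles in over centuries as the deep box slowly warms, which is exactly the shape of the disagreement in this thread: whether that tail matters.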

    Comment by Bryan S — 13 Jan 2009 @ 1:17 PM

  89. Bryan, your “lavoratory” example isn’t credible.

    You claim you’ve convinced yourself by logic and analogy, without doing the math. Faith not arithmetic.

    Look at the ocean turnover numbers. Look how they change over time.

    Comment by Hank Roberts — 13 Jan 2009 @ 2:22 PM

  90. Bryan S., The problem with your model is that you are assuming only 2 timescales–1 very short and one very long, that can be neglected. This is, granted, an improvement on Schwartz’s 1 timescale, but the same criticism applies. I suggest going back and rereading:
    http://www.realclimate.org/index.php/archives/2007/09/climate-insensitivity/

    Comment by Ray Ladbury — 13 Jan 2009 @ 2:55 PM

  91. Has this possibility been addressed? Water is densest at 4°C. It may be that significant heat has been going into the warming of water which was at 3°C or less. If so then that water has been getting denser, and thus actually tending to cause sea level to fall, in opposition to the sea level rise caused by warming of >4°C waters and the melting of grounded ice. If that has been happening, then if the ocean substantially runs out of sub-4°C water the rate of sea level rise will increase significantly.

    [Response: 4 C is only for fresh water. Salt water (greater than 25 psu or so) is densest at the freezing point (roughly -0.054*S, i.e. -1.9 deg C for S=35). – gavin]
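    The rule of thumb in the response above, in a couple of lines of Python for reference (a crude linear fit, not the full seawater equation of state):

```python
def freezing_point_c(salinity_psu):
    """Approximate freezing point of seawater in deg C, from the
    rule of thumb T_f ~ -0.054 * S given in the response above."""
    return -0.054 * salinity_psu

tf = freezing_point_c(35.0)  # about -1.9 C for typical open-ocean water
```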

    Comment by richard schumacher — 13 Jan 2009 @ 4:18 PM

  92. Serves me right for being ignorant and lazy :_> Thanks!

    Comment by richard schumacher — 13 Jan 2009 @ 4:41 PM

  93. Say it aint so Hank,
    The lavoratory example is indeed credible for the intended purposes.. to motivate one to think! My epistemology is indeed mathematical. Math is a pure form of logical thinking. Mearly saying it ain’t so has no basis in either math or logic.

    Ray, Please read the 2008 replies and responses (actual peer-reviewed papers) to the original paper for more background. The archive from 2007 is dated. I think everyone now agrees (Schwartz included) that there are several timescales that characterize the system. It is agreed that the components with small mass have a short time-constant, and the 10 ton gorilla is the deep ocean, with a very long time-constant. The point is that heat will accumulate immediately in the coupled smaller reservoirs at a rate equal to the net radiative imbalance minus the rate of heat uptake by the deep ocean. Since the diffusivity to the deep ocean is low and the mass of all other components is low, the atmosphere will realize increased temperature almost immediately. The net radiative imbalance (averaged over a number of years) will soon be diminished to the rate of heat uptake by the deep ocean.

    Comment by Bryan S — 13 Jan 2009 @ 5:56 PM

  94. Re: #88 and #93: “lavoratory”

    Urban Dictionary

    1. lavoratory
    A place where scientific research is conducted which doubles as a toilet.

    “Dr Frank Fischenhoffer’s world-class lavoratory in California was one of the first in the world to create fluorescent urine after his initial discovery pissing in one of the cubicles.”

    http://www.urbandictionary.com/define.php?term=lavoratory

    Comment by Jim Eaton — 13 Jan 2009 @ 6:40 PM

  95. Bryan S, it surely ain’t spellin.

    And it’s no good if you can convince yourself you’re right. There’s a bloke I know convinced himself he was napoleon.

    For some reason, he couldn’t convince anyone else…

    Comment by Mark — 14 Jan 2009 @ 1:40 PM

  96. Bryan S, I’ve found Schwartz 2008 and some related stuff, and reviewed some of your earlier posts and the responses. I’m just a lay person. Is it possible for you to better explain what you mean in your analogy when you say additional GHGs raise the sides of the basin?

    Comment by JCH — 14 Jan 2009 @ 2:10 PM

  97. > Math is a pure form of logical thinking.

    Show your work, please.

    There is no single deep ocean. There are ocean basins; look at the recent studies for changes in them.

    http://www.google.com/search?q=deep+ocean+basin+warming

    Eschew the “21stcenturysciencetech” hit; the others on the first page are good sources to research.

    Go figure.

    Comment by Hank Roberts — 14 Jan 2009 @ 3:07 PM

  98. PS, Erik replied to your earlier idea on ocean heat storage here: http://www.realclimate.org/index.php/archives/2006/08/antarctica-snowfall/#comment-18305

    Comment by Hank Roberts — 14 Jan 2009 @ 3:38 PM

  99. JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 113, D15103, doi:10.1029/2007JD009473, 2008

    http://www.iac.ethz.ch/people/knuttir/papers/knutti08jgr.pdf (full text)

    “… For the climate change problem, in order to achieve stabilization of global temperature, the relevant response timescales are those of the deep ocean, and the short timescales found by SES are therefore irrelevant to the problem of estimating climate sensitivity. The argument of abrupt temperature shifts in glacial periods is misleading, because these were local or regional warming events caused by a change in the ocean thermohaline circulation and sea ice, with little or no signal in global temperature.”

    “… In his reply, Schwartz [2008] proposes revised methods to estimate the response time scale of the system (his equations (5) and (6)), but based on essentially the same arguments. We applied that method to all GCM control simulations and find that correlation is still insignificant, and that the revised method has no more skill in predicting climate sensitivity than the original one, even in the optimal situation ….”

    Comment by Hank Roberts — 14 Jan 2009 @ 4:52 PM

  100. Thinking of another question for future FAQ’s…

    I notice the answers in this edition seemed to carefully avoid much mention of the structure of the atmosphere – troposphere, stratosphere, tropopause, etc. Clearly that structure is not a “boundary condition”, but an outcome of modeling. Can you make any broad statements about the causes for atmospheric structure and relation with the various forcings? For example, we know tropopause height increases with GHG forcing, but what else can we say generally about the origins and dependencies of that structure and how it impacts the other components of the system?

    [Response: Well that’s not really a climate modelling issue per se. It’s more a climatology question. The tropopause exists because of the ozone in the stratosphere which due to its local heating effects is a barrier against convection. Thus convection from near the surface (predicated on the atmosphere being mostly transparent to solar radiation) can only go so far up. Variations to the tropopause will then occur due to changes in stratification (i.e. an ozone change, or volcanic aerosols) or to the structure of the troposphere (temperatures, water vapour etc.). You can certainly explore these dependencies using models (for instance, try running with no ozone at all!). – gavin]

    Comment by Arthur Smith — 14 Jan 2009 @ 5:24 PM

  101. Re: 96 JCH, your question is a good one. As greenhouse gas concentrations rise in the atmosphere, a radiative imbalance at the top of the atmosphere results due to the difference between incoming shortwave and the decreased outgoing longwave radiation (due to the extra GHG+feedbacks). This imbalance will continue until the temperature of the atmosphere rises enough to eliminate it, so that incoming shortwave equals outgoing longwave. When I say the sides of the wash basin have been raised, it is an analogy to the extra greenhouse gas. Think of it in terms of a change in boundary conditions. As the sides of the wash basin get higher, the water level will rise until it begins running over the side, just as the temperature will rise until incoming shortwave equals outgoing longwave. The water draining out the bottom is an analogy to the heat uptake by the deep ocean.

    Re: #95: Mark, Love and happiness right back at you.

    Re: #99: Hank, thank you for finally figuring out what the discussion is about. but… you just told me there is no such thing as a deep ocean. Now you provide a link for this paper. Do you now think that there is in fact a deep ocean heat reservoir??

    Knutti (2008) states: “For the climate change problem, in order to achieve stabilization of global temperature, the relevant response timescales are those of the deep ocean, and the short timescales found by SES are therefore irrelevant to the problem of estimating climate sensitivity”

    Knutti is wrong! His wash basin is turned upside down. [edit]

    [Response: No he isn’t. – gavin]

    Comment by Bryan S — 14 Jan 2009 @ 5:50 PM

  102. Re: #102: it is generally the upper ocean that determines the time scale for the transient warming we might expect. – gavin]

    Gavin, he is wrong, and you know it! The climate change problem (how much warming we might expect) is by definition one of transient response, just as you have stated above.

    Bryan

    [Response: You can’t simply define the climate change problem as the ‘transient response’ (unless you extend it be any time period you can think of). Knutti is one of the clearest thinkers on this subject that there is. I would recommend you pay much more attention to his papers than you have. – gavin]

    Comment by Bryan S — 14 Jan 2009 @ 6:12 PM

  103. http://www.iac.ethz.ch/people/knuttir/papers/

    Comment by Hank Roberts — 14 Jan 2009 @ 6:34 PM

  104. Bryan S. says, “Mearly saying it ain’t so has no basis in either math or logic.”

    and then asserts “Knutti is wrong! His wash basin is turned upside down.”

    So, merely saying it ain’t so only works when YOU do it?

    Comment by Ray Ladbury — 14 Jan 2009 @ 8:16 PM

  105. Much information available linked on Dr. Knutti’s web page — journal articles over the years, book chapters, an entire book, educational articles. Info but not yet links for upcoming papers sent to journals, so watch that space.

    Comment by Hank Roberts — 14 Jan 2009 @ 8:51 PM

  106. Re: 100 – Gavin, your response: “The tropopause exists because of the ozone in the stratosphere which due to its local heating effects is a barrier against convection.” – that’s the reason for the structure when your independent variable is the level of direct atmospheric absorption of sunlight (i.e. ozone). But if you kept that constant and varied infrared absorption instead, then things turn out differently – I’m thinking of interplanetary comparisons here: Mars has almost no troposphere, for example; why?

    The main reason I ask is that in most elementary treatments of the basic greenhouse effect (the 33 C number for Earth) the troposphere is pretty much assumed to be there for you – but it ain’t necessarily there and the two actually work together in a fashion that I’d like to see explained better than I’ve been able to up to now!

    Comment by Arthur Smith — 14 Jan 2009 @ 9:56 PM

  107. Bryan S, thanks for the clarification.

    Hank, did you see this?

    Comment by JCH — 14 Jan 2009 @ 10:52 PM

  108. JCH, hit the reload button to make sure you’re seeing all the posts.

    Comment by Hank Roberts — 15 Jan 2009 @ 12:18 AM

  109. Gavin, what happens if we compare the transient response in an AOGCM with a dynamic ocean of an average depth of 150 meters and no deep ocean, to a model with a realistic fully dynamic ocean? The equilibrium climate sensitivity in the two models is known to be the same. Everything in the models is identical except the ocean. We then perform the CO2 doubling experiment that Knutti shows in his paper. I predict that we still get a long tail in the transient response of the shallow ocean model owing not to deep ocean heat uptake, but rather to slow-responding feedbacks in the model such as changing ice sheets and high clouds. The transient response will be only slightly affected (15-20% difference in time to equilibrium) by the deep ocean heat uptake.

    Comment by Bryan S — 15 Jan 2009 @ 12:28 AM

  110. Re: Uli (#463, Part I),

    Uli, the answer to all your questions is yes. The Solow and RBC models have been extensively applied to economies other than the US. There are some variations on these models that have been applied to developing countries, or to the transition from undeveloped to developed countries.

    As for the long-run behavior of the economies, there are also versions of the basic models that explain the levels of output in economies over the centuries, and the endogenous transition from so-called Malthusian economies to industrial economies, as that of 18th century England. On this see the chapter:

    Parente, Stephen L. & Prescott, Edward C., 2005. “A Unified Theory of the Evolution of International Income Levels,” Handbook of Economic Growth, in: Philippe Aghion & Steven Durlauf (ed.), Handbook of Economic Growth, edition 1, volume 1, chapter 21, pages 1371-1416 Elsevier.

    I reckon that your “physically based climate models” are the equivalent of my “models based in sound theory”. In economics, most of these models are used for understanding the behavior of economies, and not so much for forecasts. They are the workhorse of the profession.

    “Do some economic models use past correlations (of time series)?”

    In short-run forecasts (one month to one year), it is very difficult to beat basic statistical models that have zero or a couple of built-in theoretical concepts, but are not structural. So we do use past correlations on the assumption that rules governing beliefs of agents do not change much in the short run.

    Comment by Antunes — 15 Jan 2009 @ 12:43 PM

  111. In an argument with a relatively knowledgeable denialist elsewhere, he has claimed that GCMs show correlated errors that cannot be overcome using ensembles, and specifically, that “the errors in the clouds are in the 10s of Watts/m^2.” How would you respond?

    [Response: What is this being used to claim? There are of course systematic errors in the GCMs that don’t go away when you average lots of them together (a large part of the random error does disappear though). Modellers spend a lot of time trying to reduce this of course, but no-one ever claimed models were perfect. A possible error in logic would be to claim that such a systematic bias automatically implied that the sensitivity of the model to climate forcings was grossly in error. That doesn’t follow at all. For instance, take a really simple model T^4 = S*(1-a)/(4 sigma) (where S is the solar input 1365W/m2, a is the albedo). The sensitivity to a change in S, dT/dS is T_eq/(4*S). Thus for a 10% error in ‘a’ (from 0.3 to 0.33 say), the difference it makes in the sensitivity is a little more than 1% (i.e. 0.0467 to 0.0461). And a 10% error in albedo is over 40W/m2 in the absorbed solar! The bottom line is that errors in climatological values, while important, have less impact on sensitivity than you might think. – gavin]
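    The arithmetic in the response above checks out in a few lines of Python (same zero-dimensional model and the same inputs as given there, nothing new added):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1365.0       # solar input, W m^-2

def temperature(albedo):
    """Equilibrium T (K) from T^4 = S*(1-a)/(4*sigma)."""
    return (S0 * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

def sensitivity(albedo):
    """dT/dS = T/(4S) for the model above, in K per W m^-2."""
    return temperature(albedo) / (4.0 * S0)

s_030, s_033 = sensitivity(0.30), sensitivity(0.33)
# a 10% albedo error moves dT/dS only from ~0.0467 to ~0.0461,
# a change of a bit more than 1%
```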

    Comment by Nick Gotts — 16 Jan 2009 @ 8:39 AM
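    [Editor’s note: Gavin’s back-of-envelope numbers above can be checked directly. The sketch below implements only the zero-dimensional energy-balance model in his response, not any GCM; the constant and function names are mine.]

```python
# Zero-dimensional energy balance: T^4 = S * (1 - a) / (4 * sigma)
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1365.0       # solar input, W m^-2

def equilibrium_temp(albedo):
    """Equilibrium temperature (K) for a given planetary albedo."""
    return (S * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

def sensitivity(albedo):
    """dT/dS = T_eq / (4 S), from differentiating T^4 = S(1-a)/(4 sigma)."""
    return equilibrium_temp(albedo) / (4.0 * S)

s30, s33 = sensitivity(0.30), sensitivity(0.33)
print(f"sensitivity at a=0.30: {s30:.4f} K per W/m^2")  # close to Gavin's 0.0467
print(f"sensitivity at a=0.33: {s33:.4f} K per W/m^2")  # close to Gavin's 0.0461
print(f"relative change: {100 * (s30 - s33) / s30:.1f}%")  # a little more than 1%
```

    A 10% albedo error (0.03 of 1365 W/m^2, about 41 W/m^2 of incident flux) thus moves the sensitivity by only about 1%, which is the point of the response.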

  112. Gavin – thanks very much. What he’s claiming is that such errors mean the models cannot make predictions good enough to justify taking action to limit AGW – so the error in logic you identify is indeed implicit in his argument. Effectively the same error has already been pointed out to him by John Nielsen-Gammon in an online argument he pointed me to. I’ll see what his response is to your really simple model (I already know he won’t admit he’s wrong!)

    Comment by Nick Gotts — 16 Jan 2009 @ 11:27 AM

  113. Nick, who are you referring to in your posts #111 & 112? I seem to be missing some context.

    Comment by Kevin McKinney — 16 Jan 2009 @ 12:42 PM

  114. Nick, #111.
    How would I respond?

    With “You’re wrong”.

    Why put more effort into answering than they did in asking the question?

    Comment by Mark — 16 Jan 2009 @ 1:18 PM

  115. I would like a better understanding of which variables, if any, acting predominantly at larger time (or space) scales are included in GCMs but (presumably) not in weather models. Other than the sunspot cycle, though, I cannot conceive of what these would be. Seasonal albedo or evapotranspiration changes, perhaps?

    Comment by Jim Bouldin — 16 Jan 2009 @ 10:00 PM

  116. re 115. Nowadays? Nothing, really. Same model for weather as for climate is now common.

    Comment by Mark — 19 Jan 2009 @ 4:55 PM

  117. Gavin, Jim asks a good FAQ question — is the same model used for weather and for climate?

    I’d thought not.
    Mark says so.

    Comment by Hank Roberts — 19 Jan 2009 @ 9:06 PM

  118. Hank, read the 2nd comment and response in the previous post.

    Comment by JCH — 19 Jan 2009 @ 11:59 PM

  119. It was Gavin’s response there that actually spurred my question, particularly the statement: “Weather models develop in ways that improve the short term predictions – but aren’t necessarily optimal for long term statistics”. Just looking for a little more elaboration there.

    BTW, the Chu confirmation hearing is now being broadcast live on C-SPAN 3: http://www.c-span.org/Watch/C-SPAN3_wm.aspx

    Comment by Jim Bouldin — 20 Jan 2009 @ 8:42 AM

  120. Thanks JCH, that’s in reference to my puzzlement over Mark’s “same model” — Gavin’s explanation, inline in that topic here: 3 November 2008 at 7:48 AM, is, how you say, rather more nuanced (grin). I think it’s definitely FAQworthy.

    Comment by Hank Roberts — 20 Jan 2009 @ 10:19 AM

  121. I don’t claim to be the first person to ask for an explanation of the differences between a weather model and a climate model, but long ago I suggested that RC would benefit from having something to explain it better for lay people. Months later I tried again to ask something that might help me better understand. The response:

    [Response: You would get a spread associated with the unforced variability in the global mean temperature – roughly a standard deviation of 0.15 deg C on top of the very small yearly increase in the long term forced trend (0.02 deg C). So the 5-95% range for next years anomaly would be something like -0.28 to 0.32 deg C (relative to some recent baseline). Note that for a proper forecast you’d need to assimilate the past changes in temperature which is something that the climate models in AR4 didn’t do, but is starting to be done more often. – gavin]

    And it worked.

    Comment by JCH — 20 Jan 2009 @ 11:35 AM
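    [Editor’s note: the spread Gavin quotes can be reproduced in a couple of lines. His -0.28 to 0.32 range corresponds to roughly plus or minus two standard deviations around the trend (a strict 5-95% normal interval of about 1.645 sigma would be slightly narrower); this sketch just redoes that arithmetic.]

```python
sigma = 0.15  # std. dev. of unforced year-to-year variability, deg C
trend = 0.02  # forced warming per year, deg C

# Gavin's quoted range matches roughly +/- 2 sigma about the trend
lo, hi = trend - 2 * sigma, trend + 2 * sigma
print(f"next-year anomaly range: {lo:+.2f} to {hi:+.2f} deg C")  # -0.28 to +0.32
```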

  122. Gavin – I have posted a weblog which questions several of your answers [http://climatesci.org/2009/01/20/comments-on-real-climates-post-faq-on-climate-models-part-ii/]. I would be glad to post as a guest weblog your responses on Climate Science. Roger

    Comment by Roger A. Pielke Sr. — 20 Jan 2009 @ 12:22 PM

  123. I read a comment stating that Hadley Centre uses the same model for both climate and weather. If so, they are probably unique.

    [Response: Yes – though the configurations (i.e. resolution) are different (details from anyone involved?). – gavin]

    Comment by David B. Benson — 20 Jan 2009 @ 1:24 PM

  124. Hank 120, how can you ask of others what you will not do yourself? E.g. check up on the FAQs.

    Rather than get sarcastic (and complain when I do).

    To be the “good guy” you have to be BETTER than good. That’s what “role model” means. Unfair, but if you WANT to be held up as a paragon, you have to do it.

    Comment by Mark — 20 Jan 2009 @ 1:56 PM

  125. Re #122: Roger, if you haven’t figured out by now that the proper response to your extracurricular activities is to ignore you, probably you never will. The fact that you haven’t yet managed to place yourself entirely outside the scientific pale is a testament to the tolerance of the scientific community.

    Comment by Steve Bloom — 20 Jan 2009 @ 6:40 PM

  126. Steve Bloom: “Roger, if you haven’t figured out by now that the proper response to your extracurricular activities is to ignore you…”

    That is a rather unhelpful attitude for those of us trying to see how criticisms of the RC point of view from a highly qualified climate scientist might be refuted.

    I hope it is not shared by the professionals here.

    Comment by Steve Reynolds — 20 Jan 2009 @ 8:12 PM

  127. Mark, my aspirations aren’t what you imagine. I want other people to be better, not to be better than they.

    Your 2-word answer, without a cite, needed help; Gavin’s answer had pointed to how to find more. That’s the idea: help people find stuff.

    Comment by Hank Roberts — 20 Jan 2009 @ 8:31 PM

  128. The paper below, by Reto Knutti, is an excellent supplement to this post:

    Should we believe model predictions of future
    climate change? Phil. Trans. R. Soc. A (2008) 366, 4647–4664
    doi:10.1098/rsta.2008.0169 Published online 25 September 2008

    It is entirely qualitative with clear language (so it should be readable by almost anyone) and gives an excellent summary of the processes in climate modeling and their developments.

    http://www.iac.ethz.ch/people/knuttir/papers/knutti08ptrs.pdf

    Comment by Chris Colose — 20 Jan 2009 @ 10:40 PM

  129. Steve Bloom, your post #125 is unbecoming, unprofessional, and downright silly. Sorry, I couldn’t let that pass — I’m just saying…

    Comment by Rod B — 20 Jan 2009 @ 11:45 PM

  130. Hank 127, the two words answered the question.

    Rather like “What’s the difference between floor cleaner and worktop cleaner”. Well, none, really, depends on the application. However, it used to be that the floor cleaners were more caustic …

    and so on for four pages.

    Or none, really.

    Two words answering the question.

    I mean, why do you think the Met Office has a Unified Model? What would it be unified with?

    Comment by Mark — 21 Jan 2009 @ 3:56 AM

  131. re 123 and Gavin’s answer. There’s the problem right there, Gavin and Hank. David asked a question of the form “is it a or b” and Gavin answered with “yes”.

    Now, if you know that, yes, they are the same, then you can work out what Gavin is saying yes to.

    The Met Office runs a unified model. The diagnostics output and the physics put in can change, but changes are usually made by working out what works best in a new model configuration and whether it works well enough to replace the current model.

    And it is called the unified model because it is used in weather and in climate forecasting.

    So, David, the answer to YOUR question is, they use the same model.

    NOTE: I don’t do this, but I know a man who does…

    Comment by Mark — 21 Jan 2009 @ 4:00 AM

  132. PS to that last reply. David, why do you wish to know? I believe (though this is old information; I have no real need to know, so don’t ask) that the US uses a separate model for climate from its weather forecasting, but several countries have bought the Met Office UM and France has changed to a unified model.

    The reason? Computers are now fast enough to process the complex physics for climatological simulation in a usable timeframe, whereas before, including this physics meant the answers would not be forthcoming in a timely manner, and so the more physically accurate weather forecasting model was not usable for climate work.

    Climate models then use parameterisations for many of these calculations.

    But, unless I’m VERY mistaken in my understanding, this is all described in the text of this thread right at the top, which I would have assumed people would have, y’know, READ.

    Comment by Mark — 21 Jan 2009 @ 4:04 AM

  133. re #129: I agree. RP Sr deserves respect, which means an intelligent and reasoned response. Let’s see what RC can do!!

    Comment by concerned of berkeley — 21 Jan 2009 @ 5:06 AM

  134. Kevin McKinney@113, Mark@114,

    This concerns a commenter at Pharyngula, calling himself “africangenesis”. I tried to post more but got told my comment was spam.

    Comment by Nick Gotts — 21 Jan 2009 @ 7:41 AM

  135. Thanks for clarifying that, Nick.

    Comment by Kevin McKinney — 21 Jan 2009 @ 8:28 AM

  136. 20 January 2009 at 1:24 PM
    Thanks David; do you recall where you read that? Given that, we’ve heard _one_ group is now using a single model — in different ways — for both weather and climate. Proof of concept should be there, if anyone from there is reading and can say more on that. Let’s hope.

    Knutti’s paper Chris points to is definitely helpful on why so many different climate models are in current use and development, being used and useful, and some of them used in different ways for different aspects of modeling (thank you Chris for, I’d guess, starting to read through that large online archive). Lots of information there.

    Comment by Hank Roberts — 21 Jan 2009 @ 11:42 AM

  137. Re #s 129/133: If RP Sr. wants to be treated with respect, he needs to learn to work and play well with others on a level more advanced than that of a petulant junior high student.

    His blog is riddled with examples of how not to behave; see e.g. this recent post. I don’t know who thought it was desirable to invite Roger to such an event, but it’s a mistake that seems unlikely to be repeated. BTW, this is not a criticism of Roger’s scientific views (to which he is entitled), but of his behavior and of the paranoid ideation expressed in his closing paragraph:

    “The agency representatives at the NRC planning meeting on December 8 2008, either are inadvertently neglecting the need for independent oversight, or they are deliberately ignoring this lack of an independent assessment because the IPCC findings fit their agenda on the climate issue. In either case, the policymakers and the public are being misled on the degree of understanding of the climate system, including the human role within it.”

    Note also that the meeting presenters seem to have figured out that Roger requires special treatment:

    “Most of the several powerpoint talks that were given at the planning meeting, however, are not being made available by the presenters [a very unusual arrangement; as contrasted with other meetings such as the 2006 SORCE meeting – see]. The talks were each very informative, so it is unfortunate that they have chosen not to share.”

    See also this previous episode involving Roger (noting that his blog has many subsequent posts on the subject of this resignation). In the comments, a scientist working in an unrelated field comments:

    “Honestly, Professor, if I had a colleague who, upon a disagreement about writing up a section of a project, loudly and publicly stormed away, then insisted that I was politicizing the work, and told me that I was ‘inappropriately, vigorously discourag[ing] the inclusion of diversity of perspectives’ in the paper, I would find it very tiresome.

    “Yes, working with other people is difficult, particularly in committees where it’s possible the majority of people might well disagree with you, but we are scientists, and that is the lot we have chosen in life. Also, I too have had bad articles written in the NYT about my line of work. Somehow I managed to carry on. Their science writing isn’t very good, you know.

    “At any rate, please be sure and release your version of the chapter; I think you owe it to the world to Reveal the Truth that the Authorities have Suppressed.”

    So there’s a long history to these problems.

    Comment by Steve Bloom — 21 Jan 2009 @ 5:45 PM

  138. Mark (131, 132) — You misread my comment #123. Also, I did not particularly need to know, but some earlier commenter asked.

    Hank Roberts (136) — I don’t recall, nor does it seem to matter. Gavin affirms it is so.

    Comment by David B. Benson — 21 Jan 2009 @ 7:03 PM

  139. The fact that Real Climate allows Steve Bloom’s comments through its moderation suggests a willing proxy attack on Prof. Pielke. Why not just disallow the ad homs and respond to the substance of the post? You guys are all adults I believe.

    Steve Bloom, who in the world are you? And why should anyone care about your views?

    Real Climate, don’t let this become a nasty site, what about a focus on comments on science?

    Comment by Tokyo Tim — 21 Jan 2009 @ 7:04 PM

  140. I have always wondered whether plant life is sufficiently accounted for in climate models. More specifically, as ice sheets recede and plant life expands, does it absorb CO2 and start to reverse the process? Also, since photosynthesis is an endothermic process, does that also help?

    Comment by Steve Tirrell — 21 Jan 2009 @ 7:19 PM

  141. Re #140: It’s a very well-known factor, noting that an effect opposite to (and IIRC larger than) CO2 absorption is albedo change. I strongly suspect the endothermic effect would be a very small term.

    Comment by Steve Bloom — 21 Jan 2009 @ 10:22 PM

  142. The fact that Real Climate allows Steve Bloom’s comments through its moderation suggests a willing proxy attack on Prof. Pielke. Why not just disallow the ad homs and respond to the substance of the post? You guys are all adults I believe.

    Why doesn’t Pielke Sr. open his blog to comments, so people can point out his “mistakes”? He’s an adult, I believe, and if he’s honestly interested in seeking truth, why does he make it impossible for other people to correct the errors posted on his blog?

    Comment by dhogaza — 21 Jan 2009 @ 10:53 PM

  143. Steve Tirrell, 140, yes, plant life is included in the climate models.

    Remember, as the plants move polewards, they move away from the current desert areas (because the deserts are expanding). And there’s more equatorial land per degree than there is at the poles.

    Comment by Mark — 22 Jan 2009 @ 3:53 AM

  144. David B 138, I don’t see the question from anyone but me, but I also don’t see an answer to it.

    If you can supply one.

    Education doesn’t have to be one-way you know.

    Comment by Mark — 22 Jan 2009 @ 3:55 AM

  145. #140 (Steve):

    Vegetation analysis adds another layer of complexity. I don’t believe that most climate models specifically integrate terrestrial and oceanic carbon cycling directly, though some might now. Rather, there is a series of models known as DGVMs (Dynamic Global Vegetation Models), which are coupled to GCMs to estimate carbon cycle feedbacks in response to atmospheric and climatic changes. There was a comparison of several of these coupled model runs a few years ago, in a project called C4MIP; I don’t know whether it is ongoing or has been superseded by something else. The overall finding was that the non-atmospheric part of the carbon cycle has an upper limit (tends toward saturation), which reduces its responsiveness to atmospheric CO2 over time (over the next century and beyond). It is also important to remember that terrestrial carbon is labile – sensitive to heat and drought via increased respiration, plant mortality, fire, land use changes, etc. DGVMs attempt to account for some or all of these things.

    Vegetation feedbacks are not as simple as just carbon accounting though. Changes in albedo and evapotranspiration have to be accounted for too, and can, either locally or globally, counteract the carbon sequestration effect. For example, boreal tree expansion in the tundra may store carbon, but may do so at the expense not only of increased respiration of vast soil carbon stocks (and also possible methane releases), but also a decreased albedo, since trees are darker than tundra vegetation and much darker than open snowcover in the spring. Water budgets and evaporation are also involved, so you now have numerous complex interactions between temperature, precip, CO2 and vegetation, operating at all sorts of time and space scales. The effects at one scale can oppose those at another.

    Interesting concept re the endothermic photosynthesis process. I have not heard that before and have no idea how it might factor in, if at all. My guess is very little, because only a small fraction of light energy goes into photosynthesis.

    Check out:

    http://gaim.unh.edu/Structure/Future/MIPs/C4MIP.html
    http://c4mip.lsce.ipsl.fr/protocol_final.txt
    http://c4mip.lsce.ipsl.fr/C4MIP_protocol.doc

    and

    Bonan, G. 2008. Forests and Climate Change: Forcings, Feedbacks, and the Climate Benefits of Forests. Science 320:1444-1449.
    Friedlingstein et al. 2006 Climate–Carbon Cycle Feedback Analysis: Results from the C4MIP Model Intercomparison. J Climate 19:3337-

    Comment by Jim Bouldin — 22 Jan 2009 @ 7:51 AM

  146. Steves (Bloom and Tirrell): From the point of view of science, the problem with Roger’s approach is not his point of view or even his behavior [edit]. Rather, the problem is that Roger takes a scattershot approach, launching bold attacks where he perceives a weakness but then never following through. As a result, I find little in his work that advances understanding, and if you don’t advance understanding, you don’t get far in science. The only unifying theme in Roger’s work is a hostility to the consensus science – and ironically, all he accomplishes in his scattershot opposition is to highlight how well the science has done at advancing our understanding of climate.

    Comment by Ray Ladbury — 22 Jan 2009 @ 8:35 AM

  147. > I don’t see the question
    Jim asked it:
    16 January 2009 at 10:00 PM
    I seconded it:
    19 January 2009 at 9:06 PM
    Jim mentioned an earlier answer he’d hoped someone would elaborate on:
    20 January 2009 at 8:42 AM
    David confirmed one known example, Hadley, where a model does double duty:
    20 January 2009 at 1:24 PM
    Gavin, inline, added some detail and invited someone knowledgeable about how Hadley’s model does this to contribute.

    Nice work on everybody’s part winkling out the detail requested, perhaps someone who knows more will eventually add to this.

    Comment by Hank Roberts — 22 Jan 2009 @ 10:32 AM

  148. Above, I submitted a comment stating: “that’s what happens when computer models replace brains”. Gavin properly edited my comment since it might have been construed (incorrectly) that I was personally insulting a respected scientist with whom Gavin happened to agree. If given the opportunity to clarify, I would have made it clear that the comment was not meant to refer to the author, only to offer the pointed opinion that computer modeling may at times obscure certain physical processes that need to be more fully researched but are tacitly *assumed* by the scientist to be already well understood.

    Now comes Mr. Bloom’s comments mounting a direct personal attack on Roger Pielke Sr. For the record, Dr. Pielke has over 300 peer-reviewed published papers to his credit, 50 chapters in books, has co-edited 9 textbooks, has been elected a fellow of the AMS and, in 2004, of the American Geophysical Union (after his views on climate change were well known). He is an *ISI Highly Cited Researcher* (hard to accomplish if one’s work is deemed “scattershot” or unimportant by the community), was a tenured faculty member in the Department of Atmospheric Science at Colorado State University, is the former Colorado State Climatologist, and is presently a senior researcher at the University of Colorado at Boulder. His resume doesn’t guarantee that all of his science arguments will be compelling, but he has obviously earned the right to speak with some authority on the issue of climate change.

    Also for the record, it should be known that for about two years, Dr. Pielke did indeed accept open comments on his Climate Science weblog. During that time, Mr. Bloom was a regular commenter, often taking the opportunity to hurl personal insults toward Dr. Pielke. During this time, Dr. Pielke was remarkably magnanimous and gracious toward Mr. Bloom despite the personal attacks, while continually re-directing the discussion to the peer-reviewed science literature. One might wonder why Mr. Bloom crusades against Dr. Pielke. One might also wonder whether Mr. Bloom possesses a degree of any kind in a natural science, whether he has ever published a peer-reviewed science paper, whether he possesses a genuine curiosity for gaining understanding of the earth system, or whether he is instead driven more by a certain political agenda in the social sciences. Only he knows the answers.

    Finally, in allowing Bloom’s comments to stand, Real Climate has by proxy slandered Dr. Pielke’s reputation. Even several of the regular Real Climate bloggers (who are usually sympathetic to your science claims) have openly expressed regret at these comments. It therefore seems that there is a strong suggestion by a number of your readers that Mr. Bloom’s comments be removed.

    [Response: The balance between justifiable criticism and unjustifiable comments is a fine line (and assessing it is a full time job). Given this is only a part time gig, there will be times when judgment calls go different ways at different times. On balance, I’m going to allow Bloom’s comments to stand (along with your critique) because he alludes to a valid point – not that behaviour or attitude determines the correctness of one’s argument (it doesn’t), but that the way one behaves towards colleagues is a big determinant of how much time people will devote to addressing one’s concerns. Roger’s post on the NRC meeting was very odd, full of unverifiable and untrue suppositions about the motives of the people there, and did not reflect the substantive conversations that actually took place (which concerned solar impacts on climate, not evaluating the IPCC). It is valid to point this out, as it is valid to note that people need to choose whom to interact with (given the limited time everyone has). Respect is very much a two way street. – gavin]

    Comment by Bryan S — 22 Jan 2009 @ 10:39 AM

  149. Hank #147.

    You’re welcome (see #116, #131 & #132)

    But I guess you’re shy around me, eh?

    Comment by Mark — 22 Jan 2009 @ 1:46 PM

  150. Climate Science has posted a response to Gavin Schmidt’s comment in #148
    http://climatesci.org/2009/01/22/real-climate-gavin-schmidt-response-to-the-climate-science-post-comments-on-real-climate%e2%80%99s-post-%e2%80%9cfaq-on-climate-models-part-ii%e2%80%9d/

    [edit – to save time, here is the comment:]

    Gavin Schmidt has responded to the Climate Science request for further information on the Q&A he completed on climate modeling. His comment is below.

    Quite frankly, I am very disappointed by his lack of professional courtesy. He also clearly wants to avoid answering the scientific questions posed in response to his Q&A. He has also decided to misrepresent the NRC meeting.

    I have tried repeatedly to constructively interact with Real Climate and Gavin Schmidt, in particular, but my interactions with him at the NRC meeting and through his blog comments clearly demonstrate that he is not interested in scientific discourse. He only supports discussing viewpoints that agree with his, and both undertakes himself, and supports ad hominem attacks by others, on those who differ with him in their scientific perspective.

    [Response: Roger, If you think that accusing people of being in a conspiracy to defend the IPCC and imply that disagreeing with you means that people are actively trying to “mislead” the public, is ‘constructive interaction’, you might want to buy a new dictionary. The NRC meeting I attended was a discussion of whether a new report on solar-climate interactions would be helpful for the NRC to do. Most presentations focussed very specifically on solar-climate links. It remains a puzzle to me why you thought the proposed report would be something related to the IPCC report at all. Thus your characterisation of that meeting was, frankly, extremely partial and unfair to the participants and agency reps who attended. Your accusation that I misrepresented that meeting in the comment above is ludicrous.

    As for your comments on the above FAQ, I have not yet responded to your comments through lack of time. Hopefully, I will be able to in the future, perhaps when I don’t have to respond to juvenile accusations about my integrity. To be clear, I have the utmost respect for your body of work over your career, but your current style of engagement and constant accusatory tone online is not conducive to ‘constructive interaction’. If that is your aim, you might want to rethink your tactics. – gavin]

    Comment by Roger A. Pielke Sr. — 22 Jan 2009 @ 4:08 PM

  151. Re #145: Thanks for the far more informed reply and the references, Jim. All I would add is that there’s also a large oceanic aspect (involving both vegetation and animals; just in the last week interesting results came out finding that fish poop is an important component) and that the recent success in glacial cycle modeling gives us a degree of confidence that the basics are understood.

    Comment by Steve Bloom — 22 Jan 2009 @ 5:30 PM

  152. Follow up weblog posting with respect to Gavin’s reply to #150; see http://climatesci.org/wp-admin/post.php?action=edit&post=1523&message=1&_wp_original_http_referer=http%3A%2F%2Fclimatesci.org%2Fwp-admin%2Fedit.php

    Comment by Roger A. Pielke Sr. — 22 Jan 2009 @ 6:03 PM

  153. Here’s one, perhaps, for Ray Pierrehumbert’s attention:

    http://www.centauri-dreams.org/?p=5653

    “… according to recent work by Luc Arnold (Observatoire de Haute Provence) and team. If green vegetation on another planet is anything like what we have on Earth, then it will share a distinctive spectral signature called the Vegetation Red Edge, or VRE. The new paper creates climate simulations that explore whether planets with a distinctively different climate than modern Earth’s could be so detected.

    Two earlier eras are useful here. …
    This is fascinating stuff in its own right, as a look at the image … suggests, with its story of climate change on our planet. ….”

    Comment by Hank Roberts — 22 Jan 2009 @ 6:28 PM

  154. wrt Gavin’s response to #148.

    Aye, “Do unto others as they have done unto you.”

    And inspiration (for want of a better word… :-) ) for some of my sarcasm comes from Scott Adams. I forget which book, but there was one method he claimed helps decipher management-speak into English: ask “which is most likely?”

    Use sarcasm to counter, because they aren’t looking for an education; they are just looking to find you wrong, and they don’t have to sweat logical thinking to do it, so you’re at a disadvantage. So be sarcastic. If they are open to understanding, it may make them stop and think. If they aren’t, then you may at least embarrass them into stopping.

    Of course, it’s very VERY much nicer to not have to do that. More work though. But even if minds aren’t changed, at least the expanse of “what is known or thought” has been enlarged and maybe changed perspective, though not the decisions.

    Comment by Mark — 22 Jan 2009 @ 6:50 PM

  155. Re #152, the correct url is http://climatesci.org/2009/01/22/real-climate-gavin-schmidt-response-to-the-climate-science-post-comments-on-real-climate%e2%80%99s-post-%e2%80%9cfaq-on-climate-models-part-ii%e2%80%9d/

    Comment by Roger A. Pielke Sr. — 22 Jan 2009 @ 9:25 PM

  156. #150 “his blog comments clearly demonstrate that he is not interested in scientific discourse. He only supports discussing viewpoints that agree with his, ”

    Having personally witnessed Gavin’s responses to a wide variety of statements over the years, and his willingness to open the door to new ideas and suggestions without condemning them immediately, I can say Roger has no grounds for these accusations. I personally have suggested new methods and new ideas, not necessarily in agreement with established science and not yet judged by science peers, which were never shot down. Roger should remove emotion from his charges, and read RC a whole lot more.

    Comment by Wayne Davidson — 23 Jan 2009 @ 1:00 AM

  157. Great responses to my question on vegetation. I have another that I have never heard anyone address. We all know that the earth has been through many warming and cooling cycles. What causes the cycles to end and reverse themselves, in each direction? I would think that knowledge of how this works would be crucial both in modeling (because something big has to happen) and in possibly doing something about the problem.

    Comment by Steve Tirrell — 24 Jan 2009 @ 10:54 AM

  158. Steve,
    > question I have never heard anyone address

    Have you read the info under the “Start Here” button at the top of the page?

    Someone else asked this in almost the same words a day or two ago.

    It’s among the Frequently Answered Questions (“FAQs”).

    Here’s one approach: Take your own question or a long chunk of it as I used below, exactly, and paste it into:

    1) the Google search. You’ll get a gallimaufry, lots of ice age claims, lots of PR and political spin sites with nonsense about this.

    2) the Google Scholar search, like this:
    http://scholar.google.com/scholar?q=earth+has+been+through+many+warming+and+cooling+cycles.+What+causes+the+cycles+to+end+and+reverse+themselves+and+for+each+direction%3F

    You still get some woo-woo stuff (the first hit is a hoot);
    you’ll also get a good helping of good scientific information.
    Look at the “cited by” number and click that; see what other scientists found useful. Click “Recent” for newer papers. There is no “Wisdom” button for the search tool.

    Short answer to your question is: it’s complicated, been figured out for geologically recent times, isn’t simple, has happened slowly.

    The current rate of change — the last 200 years caused by people — is 10x or maybe 100x faster than anything since the last big asteroid impact.

    Comment by Hank Roberts — 24 Jan 2009 @ 12:32 PM

  159. Hank, I did read this entire thread before posting, and did not see the same question posted anywhere. However, I will admit that I skimmed over the sections where there were a lot of personal attacks occurring.

    The Start Here section was helpful, but it only partially answered my question. In FAQs 6.1 & 6.2 there is reference to sections 6.3 and 6.4. Where are those sections? Maybe they will answer my questions.

    I would like to know more about how the tectonics affect climate change and also what might cause the ocean currents to suddenly change.

    Comment by Steve Tirrell — 24 Jan 2009 @ 1:27 PM

  160. Steve, this thread is collecting new suggestions for the FAQ; if you’re not expecting answers right away, you’re in the right place.

    To find answers already available, try these:
    Click the “Home” button. Look at the topics here (going back years).

    > 6.3 and 6.4. Where are those sections?

    You missed a link under “Start Here” — see where it says:

    “The IPCC AR4 Frequently Asked Questions (here) is an excellent start.”

    When you click that link, you’ll go to the index page for the IPCC FAQ

    Google Scholar also has a good natural language query, try posting your question there as well, e.g.:
    http://scholar.google.com/scholar?q=tectonics+affect+climate+change+%3F

    Comment by Hank Roberts — 24 Jan 2009 @ 2:08 PM

  161. 157 (Steve):

    Try this for the history of ice age research:
    http://www.aip.org/history/climate/cycles.htm

    and section 6.4 of IPCC WGI ch 6:
    http://www.ipcc.ch/ipccreports/ar4-syr.htm

    Comment by Jim Bouldin — 24 Jan 2009 @ 3:08 PM

  162. Steve Tirrell says, “I would like to know more about how the tectonics affect climate change and also what might cause the ocean currents to suddenly change.”

    Given maximum plate speed is cm per year, very, very slowly. Sorry, Steve, couldn’t resist. ;-)

    Comment by Ray Ladbury — 24 Jan 2009 @ 3:13 PM

  163. Steve Tirrell (157) — Plate tectonics changed the climate to the modern one by closing the Isthmus of Panama about 4 million years ago. That changed ocean circulations, but certainly slowly.

    Rapid ocean current change is part of what is being explored on

    http://www.realclimate.org/index.php/archives/2009/01/the-younger-dryas-comet-impact-hypothesis-gem-of-an-idea-or-fools-gold/langswitch_lang/in

    and there are some papers referenced or linked. But I think what you want for that is a book explaining thermohaline circulation (THC); unfortunately, while I am sure there are such, I don’t know of one.

    Comment by David B. Benson — 24 Jan 2009 @ 4:44 PM

  164. > 6.3 and 6.4. Where are those sections?

    You missed a link under “Start Here” — see where it says:

    “The IPCC AR4 Frequently Asked Questions (here) is an excellent start.”

    When you click that link, you’ll go to the index page for the IPCC FAQ

    Hank, I did go to FAQs, as I referenced. In FAQs 6.1 & 6.2 it refers to other FAQs with a link and a description like this, “See FAQ X.X.” Elsewhere, there is a reference to something else that goes like this, “See section X.X.” I believe from Jim’s post they are referring to the IPCC Fourth Assessment. It would be helpful if that was also in link form and actually mentioned that it was a section in the IPCC Fourth Assessment.

    Comment by Steve Tirrell — 25 Jan 2009 @ 2:03 PM

  165. OK, Steve, I see the problem you’re describing

    I’d guess you know how to find your way around, and you’re saying this out of concern that someone else might become lost and confused.

    We don’t want that to happen. So, just in case, I’ll spell it out as I understand the setup, for the next reader along who may need this:

    “FAQ” is generic — there are “FAQ” pages all over the web.

    — you started in the Start Here section at RC;
    — you clicked on the link that took you into the IPCC FAQ section.

    There, you find, using your example, on this page:
    http://ipcc-wg1.ucar.edu/wg1/FAQ/wg1_faq-6.2.html

    “more than a thousand years with decreasing spatial coverage for earlier periods (see Section 6.5)”

    Question is — where is Section 6.5, and why don’t they link to it?

    Here’s my (purely unofficial) answer as a reader who’s looked:

    You’re inside a document hierarchy (nested subfolders).
    Those are _all_ under the IPCC FAQ level.
    The IPCC FAQs are under the broader IPCC folder level.
    The actual IPCC Report (the “Sections”) are not in HTML, they’re PDFs.

    Two ways to figure out where you are:

    — Look at the URL in the navigation bar.
    — Look at the top of the page; that page has “Table of Contents” and “Home” and the “Home” button takes you to the main page.

    One general caution — linking into the middle of the IPCC pages often confuses people; they do change their links as documents are updated, rather than replacing old with new documents.

    That way they leave a complete record of old material, BUT any link into the middle can become outdated.

    For people writing FAQs and pointers, it’s usually safer to link to the Home page and describe what to look for.
    ——–

    Comment by Hank Roberts — 25 Jan 2009 @ 3:08 PM

  166. Further — I came across either a Firefox bug or an IPCC web page bug: the navigation bar URL does not change when using the internal links from “www.ipcc-wg2.org”.

    The “Back” key is confused by this. So am I.

    Thanks for leading me into confusion, Steven (grin). I reported it to FF and the IPCC webmaster (hoping it’s not just me)!

    Comment by Hank Roberts — 25 Jan 2009 @ 3:20 PM

  167. Hank, I have found the two sections. They are in the IPCC Working Group 1 report, which Jim refers to, but his link goes to the IPCC Fourth Assessment. I am in the middle of reading section 6 right now. It definitely covers in depth the areas of interest that I had.

    Since the publications page of the IPCC breaks that book down by chapter, I would think it would be safe to at least link to the specific chapter. However, that is just my opinion. At a minimum though, the FAQ should state the title of the publication, not just a section number. Even those, such as yourself, couldn’t immediately tell me what publication it was.

    Comment by Steve Tirrell — 25 Jan 2009 @ 3:24 PM

  168. Steve:

    The link I provided goes to the overview page of the IPCC WG1 AR4. The IPCC has three working groups (WG1 through WG3). Each covers different topics (WG1 = the physical science basis, WG2 = impacts, WG3 = mitigation) and each produces an independent report at the same time, every 5-6 years. So each one of those groups produced an AR4 (Assessment Report #4, being the fourth such report put out since 1990). Thus, I sent you to the WG1 AR4 report main page. I would have given you the link directly to chapter 6 but I wasn’t able to pull it up at the moment I sent the message for some reason. I figured you’d get there.
    Jim

    Also Spencer Weart’s history is very readable–check it out.

    Comment by Jim Bouldin — 25 Jan 2009 @ 6:28 PM

  169. Just to be very clear

    — if you’re reading a page that starts with
    “… //www.ipcc …”
    — that is not the RealClimate FAQ (for which this topic is set up)
    — that’s an IPCC FAQ — different organization. Their pages, their abbreviations and cross-reference system. They do have a contact email address at the bottom of their pages.

    Jumping into the middle of any large volume is always confusing.
    Most big documents use abbreviations. Start from (or go to) the beginning of the IPCC document, and you’ll see it explains the abbreviations used. That’s another good reason to link to the main page, for people may become confused if they start in the middle.
    ____________________________
    ReCaptcha: “clerks Quarrel”

    Comment by Hank Roberts — 25 Jan 2009 @ 8:18 PM

  170. “FAQ Part I, Answer 9. Do models have global warming built in?
    No. If left to run on their own, the models will oscillate around a long-term mean that is the same regardless of what the initial conditions were.”

    “FAQ Part I, Answer 7. How are models evaluated?
    How does it respond over the whole of the 20th Century, or at the Maunder Minimum, or the mid-Holocene or the Last Glacial Maximum? In each case, there is usually sufficient data available to evaluate how well the model is doing.”

    Physics-based models, even if they do not solely rely on first-principles, have proven their worth in many other scientific and engineering fields. When evaluating the GCM as you described above, what forcing functions do you use to cause the long-term mean of the model to dive/rise into the Maunder Minimum, the mid-Holocene, or the Last Glacial Maximum (or the Medieval Climate Optimum for that matter)?

    [Response: Whatever is appropriate – solar/volcanic, orbital, ice sheets/greenhouse gases/vegetation/dust etc. – gavin]

    Comment by Chris P — 25 Jan 2009 @ 9:04 PM

  171. Ray Ladbury, not always, sometimes tectonic plate shifts can have more dramatic and rapid influences upon oceans and climate, but I know what you were getting at:)

    Comment by jcbmack — 25 Jan 2009 @ 10:04 PM

  172. Now to get back to the real subject of this thread. There are a few disturbing things that I read. First let me give you a background of myself. I am an MS Chem E and have worked with both FEA analysis and plastics molding analysis, so computer modeling is not foreign to me. Now the troubling areas. These are passages from the IPCC WG1 AR4.

    1. Palaeoclimatic observations indicate that abrupt decadal- to centennial-scale changes in the regional frequency of tropical cyclones, floods, decadal droughts and the intensity of the African-Asian summer monsoon very likely occurred during the past 10 kyr. However, the mechanisms behind these abrupt shifts are not well understood, nor have they been thoroughly investigated using current climate models.

    2. As in the study noted above, climate models cannot produce a response to increased CO2 with large high-latitude warming, and yet minimal tropical temperature change, without strong increases in ocean heat transport (Rind and Chandler, 1991).
    (Not quoted, but the rest of the paragraph basically states that physical evidence indicates that the poles saw significant temperature change, while the tropics saw very little, and that this is likely caused by ocean effects.)

    I have always found computer models to be very helpful when designing a part or building a mold, but I would never forgo physical testing because of a great FEA result. That goes for highly studied, homogeneous materials in common designs. Now compare that with a climate model of a very non-homogeneous planet with infinitely more variables, where instead of interpolating known data, we need to extrapolate effects that have never occurred before. That makes me feel uneasy.

    There seems to be many references to ocean effects causing significant rapid changes, but there are also references to that being a weak area in the computer models. Since there is more water area on the planet than land area, that gets my vote for the area needing more study.

    Comment by Steve Tirrell — 25 Jan 2009 @ 10:39 PM

  173. Before my post at 10:39 on Jan 25th, I posted another that went something like this.

    Jim, you are correct that it was the fourth assessment for Working Group 1 that has the correct chapters in it. However, your link is to the fourth assessment synthesis. Since WG1 puts each of their chapters in a separate PDF, a link can be made directly to chapter 6. It is http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter6.pdf.

    I have figured out why they do not reference the publication for their chapters. The FAQs were originally part of the WG1 report. They were then pulled out of the report and added to the Real Climate web site. Real Climate created the links to the other FAQs, so I believe they could also create the links to the proper chapters in the WG1 fourth assessment or at a minimum to the entire publication itself.

    [Response: We do – the link is on the side bar. – gavin]

    Comment by Steve Tirrell — 26 Jan 2009 @ 8:55 AM

  174. David B. Benson #163:

    But I thiink what you want for that is a book explaining thermohaline circulation (THC); unfortunately, while I am sure there are such, I don’t know of one.

    This is a good starter:

    http://www.pik-potsdam.de/~stefan/thc_fact_sheet.html

    Comment by Martin Vermeer — 26 Jan 2009 @ 2:45 PM

  175. Martin Vermeer (174) — Thank you! I’ll keep the link in case someone else later needs a starting point.

    [reCAPTCHA comments on my page of climate related links: “place stuffed”.]

    Comment by David B. Benson — 26 Jan 2009 @ 3:13 PM

  176. re: 172.

    Well, show us another earth we can piss about with…

    And models don’t do what Real Life (TM) does. E.g.

    Small scale miniatures being flooded never look right (the viscosity of water doesn’t scale down),

    nanomachines that can’t use propellors because they don’t work at that scale,

    nanoboats that travel with no motor,

    etc.

    So you’re ALWAYS left with something uncertain.

    Comment by Mark — 26 Jan 2009 @ 3:31 PM

  177. > propellors
    Flagellum
    http://www.nanonet.go.jp/english/mailmag/2004/011a.html
    Just because it’s beautiful.

    Comment by Hank Roberts — 26 Jan 2009 @ 5:01 PM

  178. Steve (172), take what I say with a grain of salt because I know virtually nothing about ocean circulation, and was formerly pretty suspicious of models in general (and of modelers, both at least partly due to my own ignorance), but I think it’s very important to keep in mind a few things:

    1. In science, models are almost always works in progress, best explanations at the moment, improving as knowledge grows. Now that may appear to be a rationalization for bad work (and it can be, but in most cases it’s not), so more importantly…
    2. Models are judged useful or not more by their ability to best explain either (a) the full range of observational evidence and/or (b) the major, dominant dynamics of the system, rather than (c) their inability to accurately explain everything in the system. Modeling almost always involves abstraction, attempting to include the most important elements and processes, and leaving out or minimizing the less crucial.
    3. Model validation is VERY much a part of the science. A closely related topic is “detection and attribution”. Chapters 8/9 of the WG1 AR4 report provide the review. These topics are the crux of the biscuit really–and highly interesting IMO, not only because of their importance, but also from the degree of sophistication of many of the analytical procedures used.

    But maybe I’ve simply stated the patently obvious. Gave me an excuse not to work though :)

    Comment by Jim Bouldin — 26 Jan 2009 @ 8:13 PM

  179. Thank you Gavin, as always.

    [Re: Re: 170] Are there sophisticated enough models out there that do not use “ice sheets/greenhouse gases/vegetation/dust” as the initial forcing functions? Those are the output variables we use now to measure climate change (I can see each of these being dynamically caused in the code by the geologic/solar/orbital inputs you mentioned).

    Do you know of papers where I can read up on the state variables used in these models? I have access to many scientific journals but climate science is not my field so my searches have been less than fruitful.

    Comment by Chris P — 27 Jan 2009 @ 11:12 AM

  180. Chris 179,

    Not as far as I know.

    Then again,

    a) we don’t have any old data to test against, meaning we have to wait 50 years to see if it was right
    b) if it were used, then there would be lots of people out there going “You are just form-fitting!!!!”

    And so where’s the benefit? Do you consider there any to be had?

    Comment by Mark — 27 Jan 2009 @ 1:02 PM

  181. Still collecting FAQ suggestions in this thread:

    found at: http://www.sej.org/index.htm
    Possibly outdated by now, but for journalists:

    News University,
    http://www.newsu.org/
    the Poynter Institute’s innovative e-learning center that helps journalists through self-directed training, has made available “Covering Climate Change: A Seminar Snapshot,”
    http://www.newsu.org/courses/course_detail.aspx?id=snap_sej08
    at no cost. This Seminar Snapshot, produced in partnership with the Society of Environmental Journalists (SEJ) and the Metcalf Institute for Marine and Environmental Reporting, is an edited version of the full-day News Executive Roundtable on Climate Change, a pre-conference event for SEJ’s 2007 annual conference
    http://www.sej.org/confer/index3.htm
    at Stanford University. …

    Comment by Hank Roberts — 27 Jan 2009 @ 4:34 PM

  182. As I have mentioned before, I am very curious about what causes the earth to go from warming to cooling and vice-versa. I visited the link that Martin suggested (174) and the discussion there definitely supports changes to the THC being able to cause dramatic changes to the ice sheets in the North. It surmises that a large increase of freshwater in the North would slow down or even stop the current ocean flows. This in turn would mean less warming of the North from the ocean. At some point this could reverse the thawing of the ice sheets and reverse the process. Once the process is reversed, then the ice sheets would begin to absorb CO2 instead of releasing it, thereby causing positive feedback in the opposite direction.

    If all of that is possible, then it seems to me that including this effect in a GCM would be mandatory. The big question is when do we hit that reversal point? Would it be after we are underwater, or would it be a lot sooner? Or would our current situation overwhelm any effects of a decreased THC?

    Comment by Steve Tirrell — 27 Jan 2009 @ 5:48 PM

  183. Mark,

    I do see there being great benefit. This is what I thought Gavin was saying in his answer to the FAQ Part I, Question 7, which I quoted above in comment #170.

    My FAQ question was asking for more detail. As he states, all physics based models oscillate around some mean if the forcing functions are left unchanged. They are strong predictive tools because of their ability to change when the relationship between the forcing functions change. This is the beauty of physics-based models in any scientific/engineering field. To make sure the error in the prediction is as small as possible you check it against the reality you know as Gavin describes below:

    FAQ Part I, Answer 4: “What is robust in a climate projection and how can I tell?”
    “Since every wiggle is not necessarily significant, modelers need to assess how robust particular model results are. They do this by seeing whether the same result is seen in other simulations, with other models, whether it makes physical sense and whether there is some evidence of similar things in the observational or paleo record…Examples of non-robust results are the changes in El Niño as a result of climate forcings, or the impact on hurricanes. In both of these cases, models produce very disparate results, the theory is not yet fully developed and observations are ambiguous.”

    My question was specifically what change in the forcing functions was used to move the model’s mean into any of the paleo or recent (pre industrial revolution) climate periods? From Gavin’s answer to FAQ Part I, question 7, he implies that this check has been done to the satisfaction of the experts in the field. When he answered my question by saying “Whatever is appropriate – solar/volcanic, orbital, ice sheets/greenhouse gases/vegetation/dust etc,” I asked if there were any models sophisticated enough to calculate the change in ice sheets / greenhouse gases / vegetation / dust etc as a function of the feedback from the change in known solar/volcanic, orbital forcing. Sorry, that sentence is long. This is because these last three forcings were what ran the system before we evolved. They are an effect of the initial cause, though they certainly dampen or quicken any state change (depending on the physics). I would argue that this is not form fitting, it is validating a physics-based model to the best approximation using known scientific relations (without being a first-principles model).
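
    (An illustrative sketch of the “oscillate around a mean set by the forcings” behavior discussed above: a zero-dimensional energy-balance model, C dT/dt = F − λT, relaxes to the mean T = F/λ and shifts to a new mean when the forcing steps. All parameter values below are made up for demonstration and are not from any actual GCM.)

```python
# Toy zero-dimensional energy-balance model: C dT/dt = F(t) - lam * T
# Parameter values are illustrative only -- not from any actual GCM.
C = 8.0e8          # effective heat capacity, J/m^2/K (~60 m ocean mixed layer)
lam = 1.2          # feedback parameter, W/m^2/K
dt = 86400.0 * 30  # 30-day time step, in seconds
n = 4000           # ~330 years of simulation

T = 0.0            # temperature anomaly, K
temps = []
for i in range(n):
    F = 0.0 if i < n // 2 else 3.7   # step forcing (roughly 2xCO2) at midpoint
    T += dt * (F - lam * T) / C      # forward-Euler integration
    temps.append(T)

print(f"mean before the step: {temps[n // 2 - 1]:.3f} K")
print(f"mean after the step:  {temps[-1]:.3f} K (analytic F/lam = {3.7 / 1.2:.3f} K)")
```

    With the forcing unchanged, the model sits at its long-term mean regardless of where it started; the step in F moves it to a new mean of F/λ over a few decades. That is the sense in which the mean is set by the forcings, while ice sheets, vegetation, etc. would enter a toy model like this only as changes to F or λ.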

    Since this is a part time job for Gavin I also/instead asked for any papers describing the state variables used in these models where I can learn the answers to these questions on my own. I am mathematically literate and it might take him a lot more time than it’s worth to answer some questions (for good reason).

    Comment by Chris P — 27 Jan 2009 @ 6:30 PM

  184. Sorry, when I said “They are an effect of the initial cause,” in the 6th sentence of the 4th paragraph I am referring to the ice sheets / greenhouse gases / vegetation / dust etc. I didn’t mean to be ambiguous.

    Comment by Chris P — 27 Jan 2009 @ 7:00 PM

  185. Steve Tirrell (182) — Any good introductory textbook will answer your questions. I started with W.F. Ruddiman’s “Earth’s Climate: Past and Future”, which is now in its second edition. As a more advanced alternative, try “The Discovery of Global Warming” by Spencer Weart, first link in the Science section of the sidebar.

    Comment by David B. Benson — 27 Jan 2009 @ 7:10 PM

  186. Chris, the closest I find searching for something using terms like those in your question is a discussion over at CA. Are you looking for something along those lines? You might want to try Scholar, there are some papers that might be helpful to refining the question.

    This thread was set up for collecting ideas — that can eventually be dealt with in an update to the FAQ.

    You might not get immediate answers.

    You might look at the climate model intercomparison project too.

    Comment by Hank Roberts — 27 Jan 2009 @ 8:56 PM

  187. Thank you very much Hank.

    Comment by Chris P — 28 Jan 2009 @ 10:34 AM

  188. re 183.

    But will changing ice coverage from a diagnostic to an input produce an output that is more accurate, and is that change big enough to make up for losing the ability to check the model against a consequence (the measurement of ice coverage)?

    After all, if we had as input *everything* we then don’t know if we are getting better because the science is more accurate or because we keep changing the model: we cannot refute the claim that we merely tuned our model to get the answer we want, since everything could be tuned then.

    Leaving out consequences from your drivers means you keep an avenue to prove (in the old-fashioned sense of testing against reality) your accuracy: these figures should fall out automagically correct, and if they don’t, that shows you’re missing something important.

    Comment by Mark — 28 Jan 2009 @ 11:14 AM

  189. [re 188]

    Mark,

    Your last post confuses me. I think we are saying the same thing. I don’t want to clog these comments since they are meant for additional FAQ questions. We can take this to email if you’d like.

    I agree, when running a GCM “ice sheets / greenhouse gases / vegetation / dust etc” should be inputs when checking the models’ accuracy. They should be used as the initial conditions at a given point in time. My question referred to the forcing functions (not I.C.s) and what change is required to produce different paleo climate events. “Ice sheets / greenhouse gases / vegetation / dust etc” are system feedbacks, not forcing functions. Specifically, my question was: given an initial starting point before one of the last freezes or warming periods (where the climate began at a condition similar to ours, pre-human CO2), what changes in the forcing functions were used to move the model away from our mean. Moving to our mean from those events is a good check as well, however I am interested in moving away at this time.

    Ultimately I’d like to learn about the gains (weights) used for each forcing in the code and the inputs to each (step functions, ramp functions etc… a more technical, system dynamics answer). As implied previously, these checks have already been done to the satisfaction of the experts in the field and I would simply like to learn about them. As Gavin states in this FAQ, such tests “provide good targets against which the models’ sensitivity can be tested.” I want to learn more. Hank was very helpful and I am going to start by spending time with the resources he suggested.

    Did this answer your question Mark?

    [Response: As discussed in the FAQ, the definition of a ‘forcing’ depends on the model you are talking about. We do not yet have models that contain every single aspect of the climate, and so many experiments are done with models in which some changes are imposed as forcings. For instance, most GCMs don’t include models of ice sheets or the carbon cycle. In those models, changes in the ice sheets or in CO2 levels are imposed as a forcing. Those have been the models in which ideas about the LGM for instance have been mostly explored. – gavin]

    Comment by Chris P — 28 Jan 2009 @ 2:00 PM

  190. Chris. One way to put it is that if you take a graph that you know depends approximately on x^2 and fit a curve of that form to it with minimum RMS error, you can see whether the supposition that it follows x^2 is right.

    If, however, you fit whatever curve hits the spots then you can’t see if your curve fitting is right until a long time later, when you have enough figures you didn’t include in the curve fitting.

    You also lose any future discovery based on the curve. Whereas if you figured it was x^2 and after enough fittings, you figured “actually, if I add a sine to that, it fits MUCH better”, then you can

    a) use future values to see if that is correct
    b) use that extra fit to work out what may be causing it

    So ice as an output can be used historically as a “sanity check” (RMS errors shouldn’t increase as time goes on if fitting to x^2 is right) unless you parameterise your inputs with it (fit the curve).

    So will adding ice into the system as a parameterised forcing make enough of a difference to the models to negate the loss of an independent source of verification?
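
    (A short, purely illustrative Python sketch of the point above; the data and polynomial degrees are made up for demonstration. Fit the form you have a physical reason to believe versus a free curve that just hits the spots, then compare them on “future values” neither fit has seen.)

```python
import warnings

import numpy as np

warnings.simplefilter("ignore")  # polyfit warns about conditioning at degree 9

# Deterministic stand-in for noisy observations of y = x^2
x_train = np.linspace(0.0, 5.0, 12)
y_train = x_train**2 + np.sin(17.0 * x_train)   # "noise" of amplitude 1

# Fit the physically motivated form (degree 2) vs. a free curve (degree 9)
fit2 = np.polyfit(x_train, y_train, 2)
fit9 = np.polyfit(x_train, y_train, 9)

# "Future values" beyond the training range
x_test = np.linspace(6.0, 10.0, 20)
y_true = x_test**2
rms2 = np.sqrt(np.mean((np.polyval(fit2, x_test) - y_true) ** 2))
rms9 = np.sqrt(np.mean((np.polyval(fit9, x_test) - y_true) ** 2))

print(f"degree-2 extrapolation RMS error: {rms2:.1f}")
print(f"degree-9 extrapolation RMS error: {rms9:.1f}")
```

    The constrained fit stays close when extrapolated, while the free curve, having chased the wiggles, diverges badly: the loss of an independent check described above.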

    Comment by Mark — 28 Jan 2009 @ 2:50 PM

  191. Mark,

    You can reach me at chris.mann.117 – at – gmail.com. Let us continue this there.

    I agree climate models should not be line-fitters; they should be physics-based. I do not think I’ve implied that I wish to fit an nth-degree polynomial, trig function, or logarithm to a set of historic proxies or data points. I understand the dubious ability of such a model to predict future trends (i.e., watch how quickly the 5th-degree Taylor polynomial of the sine function fails away from its expansion point).
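
    (The Taylor-polynomial example just mentioned, made concrete in a few lines of Python:)

```python
import math

def taylor5_sin(x):
    # 5th-degree Taylor polynomial of sin about 0: x - x^3/3! + x^5/5!
    return x - x**3 / 6.0 + x**5 / 120.0

# Near the expansion point the fit is excellent...
print(abs(taylor5_sin(0.5) - math.sin(0.5)))      # on the order of 1e-6

# ...but away from it the polynomial fails badly
print(abs(taylor5_sin(2.0 * math.pi) - math.sin(2.0 * math.pi)))  # roughly 46.5
```

    A curve fitted to data in one regime says nothing reliable outside it, which is exactly why a physics-based model is preferable to a line-fitter.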

    I do not agree that you can classify ice cover as an independent variable (or vegetation, the CO2 cycle etc); hence you cannot lose it as an independent source of verification.

    As I said, I am now taking the time to read the resources suggested.

    Comment by Chris P — 28 Jan 2009 @ 6:31 PM

Sorry, the comment form is closed at this time.
