RealClimate

Comments


  1. Thanks for this article. I agree on the risk being only a little postponed if Schmittner is right. The risk is still there.

    Comment by Edward Greisch — 28 Nov 2011 @ 11:24 AM

  2. “the best fit from the UVic model used in the new paper has 3.5ºC cooling, well outside this range (weighted average calculated from the online data, a slightly different number is stated in Nathan Urban’s interview – not sure why)”

    This is a typo in the published paper. An earlier version of our analysis found a different best-fit ECS (2.5 ºC, I think), with a LGM SAT cooling of 3.6 ºC. The published version of our analysis has a slightly lower ECS and less LGM cooling, but we neglected to update the manuscript.

    Also, I think our supplemental material says our best fitting model is 2.4 ºC, which is also from an earlier version of the analysis. For the final analysis, I didn’t actually calculate the posterior mode (the “best fit”), just the mean and median.

    The 3.3 ºC LGM cooling I quote in my interview is for the 2.35 ºC ECS UVic run, close to our median ECS estimate of 2.25 ºC. I haven’t computed the LGM cooling directly at the median estimate, which doesn’t correspond to any of our UVic runs. (I’d have to train an emulator on the full model output instead of just the data-present grid cells. This is possible, but I haven’t done it yet.)

    [Response: Thanks. But the sat_sst_aice_2.35.dat file in the SI has -3.5 deg C cooling in SAT (assuming I didn’t mess up the area weighting). Can you check? – gavin]
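
    For anyone reproducing that check: the area weighting in question is just a cosine-of-latitude mean over the grid. A minimal sketch in Python (the grid shape, random stand-in values, and variable names are illustrative; this is not the actual layout of the SI file):

        import numpy as np

        # Illustrative stand-in for the LGM-minus-modern surface air temperature
        # anomaly field; the real values come from sat_sst_aice_2.35.dat.
        rng = np.random.default_rng(0)
        nlat, nlon = 90, 180
        lat = np.linspace(-89, 89, nlat)
        sat_anom = -3.5 + 0.5 * rng.standard_normal((nlat, nlon))

        # On a regular lat-lon grid, cell area scales with cos(latitude).
        w = np.cos(np.deg2rad(lat))[:, None] * np.ones((1, nlon))
        global_mean = np.sum(w * sat_anom) / np.sum(w)
        print(f"area-weighted global-mean SAT anomaly: {global_mean:.2f} K")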

    Comment by Nathan Urban — 28 Nov 2011 @ 11:26 AM

  3. As stated, it doesn’t really buy us much breathing space. At 2.3, the 2 degree limit is exceeded when CO2e reaches 550ppm. Does anyone think the world is going to get serious about mitigation, quickly enough, to stabilise below 550ppm?

    We still hear talk of stabilising GHG concentrations at 450ppm CO2e. The only emissions scenarios the IPCC reported in AR4 that stabilised at 450ppm or lower saw global emissions PEAK in 2015. The big emitters in Durban, so the BBC tells me, aren’t even thinking of a DEAL before 2015, far less emissions peaking.

    So 450ppm WILL be exceeded, the policy-makers seem to have decided. How long before the window for stabilisation at or below 500ppm closes? (The scenarios in AR4 suggest it closes if CO2 emissions peak after 2020.)

    Anyone want to take a bet on when CO2 emissions are going to peak?

    Comment by Silk — 28 Nov 2011 @ 11:47 AM

  4. Group: the following excerpt sums up the Schmittner Report for me:

    “The first thing that must be recognized regarding all studies of this type is that it is unclear to what extent behavior in the LGM is a reliable guide to how much it will warm when CO2 is increased from its pre-industrial value.”

    It is all about time lengths and CO2 concentration increases.

    A drag racer can go from 0 to 150 in three minutes without much noticeable change save for the vehicle speed.

    That car can go to 120 in seven seconds and the effects are dramatic. The car vibrates heavily. Great clouds of smoke pour from the burning rubber. The car is never really in control during that time.

    I see the LGM as the former and the industrial world’s contribution to global warming as the latter. A simplistic opinion, but I only know what I read.

    Comment by John McCormick — 28 Nov 2011 @ 12:34 PM

  5. Xref – some discussion of the paper (with links to more at SKS (Schmittner comment), the Planet3.0 interview, and Tamino’s Open Thread) here at Tamino’s (link)

    Comment by anna haynes — 28 Nov 2011 @ 12:53 PM

  6. Gavin,

    You’re right. For the 2.35 ºC ECS run the area-weighted SAT cooling is 3.5 ºC. The value 3.3 ºC comes from the 2.22 ºC ECS run. This latter run is actually closest to our posterior median ECS of 2.25 ºC, so 3.3 ºC is probably closer to our “best” estimate of LGM cooling.

    [Response: Thanks. – gavin]

    Comment by Nathan Urban — 28 Nov 2011 @ 1:13 PM

  7. I agree with the critiques of this paper, so far as I can clearly understand them, and had hoped there would be a response here when I came across it yesterday.

    An additional critique that may be inherent in your comments, but missed by me (or be so much dreck), is that the glacial maximum seems an odd place to start from. Climate sensitivity is important for the range and rate of change more so than what the ultimate change is. It’s the part between the peaks and valleys in the record that is interesting. We know pretty well the long-period cycles that trigger the glacial/interglacial cycle, after all. We also know humans were pretty much only in Africa for most of the glacial cycles and were highly mobile during the most recent. There is literally almost nothing analogous to the current climate system and our response to it.

    Climate sensitivity is always stated as a range, but we tend to want, with our human brains, to think of it as a number, say, 2.4C. We like averages and means because they give us a comfortable place to think from. But climate does not appear to be this simple. Earth System Sensitivity, as noted, is the key, but for me it’s not a “maybe” or a “perhaps”, but is definitive.

    I think of it this way, the potential change is the high end while the minimum change is the low end, right? At any given point in time climate sensitivity is dependent on which, if any, tipping points have been breached. If we approach a tipping point without passing it, the swing in response will be a given amplitude from which it will recover toward the center. If a given tipping point is breached, a greater amplitude follows.

    The risk assessment must be based on every tipping point being breached, so for all intents and purposes, climate sensitivity is unconstrained. The worst it can get is the sensitivity we must do our risk assessment for because it is, in this case, a societal/existential threat. What happens if we burn another trillion barrels of oil/oil equivalents and/or the methane in the permafrost and clathrates goes kablooey?

    Most, if not all, posting here are well aware that rate of change is perhaps more important than total change. Adapting to 2C over 1,000 years was probably easily done. Adjusting to it over 250 years, particularly with most of it coming in the latter 3/5 of the time period, is a different matter altogether. Adjusting to 3–6C over the same time frames is a joke.

    Rapid change acts as a multiplier because events that can be mitigated at one time scale may not be at another. Yet there is no rate-of-change factor in climate sensitivity, though perhaps there should be, so the references for public consumption are more easily digested.

    As a non-scientist, I am constantly tracking in my head that which I can understand clearly and usefully, and what I see is a planet that is responding at a rate and extent greater than any but our most pessimistic models suggest. Ipso facto, climate sensitivity is at the high end, so discussions of anything else are so much number-crunching. Valuable, useful and needed number-crunching, but number-crunching all the same so long as the work continues to trail the observed.

    What is also not commonly discussed as being part of the models is the effect of having diminished the ability of the ecosystem to respond to stress because we have altered the entire system. In previous eras some key change/tipping point would trigger others in a cascade, but each of those triggers was hit by a change that preceded it so the rate of change was slowed by the magnitude of change needed to cause the change. I don’t mean to imply this is a purely linear process, but we have created a preexisting condition where every element of the climate system is already sensitive to change due to already having been altered before the tipping point that would have triggered it arrives. ALL of them have already been diminished.

    Forest ecosystems will decline faster because there are far fewer of them than would be the case from purely natural change. Oceans are already altered in terms of oxygen-depleted zones, pollution, depleted populations, erosion and eradication of coastal ecosystems, etc.

    Etc, etc.

    There is no analog for climate change as humans have triggered it, so our sensitivities are even less sure than the science suggests, even with Earth System Sensitivity, since it also presumably doesn’t account for rate of change or the preconditioning the human presence has resulted in.

    The risks are far greater than ever before, imho, and any implication otherwise is a dangerous message.

    reCaptcha offers: “style: srdinge” I ask, What is how a drunken sailor orders small, salty fish on his pizza?

    Comment by ccpo — 28 Nov 2011 @ 2:01 PM

  8. Re: It would have been more appropriate to say something like “our estimate of the effect is less than many previous estimates”.

    I can’t see how that re-wording would make any difference to the press coverage. Remember that the media responds to this very differently from how scientists respond. Journalists know that the vast majority of their readers will never read the original paper, and don’t care about what this study actually did and how it might compare to previous studies. Their readers just want a sound bite, along the lines of “can we stop worrying about it” or “should we worry about it more than we did?”. As “less than” tends towards the former, and as journalists have to sex up the story to win eyeballs, the result is inevitable. The further downmarket you go in the media industry, the more “sexing up” there will be.

    One of the things we scientists and academics have to keep reminding ourselves is that many of the people outside the science community who write about this don’t regard it as their job to educate people. Their salaries depend on selling copy, which usually means telling people what they want to hear. In that situation, how likely is it that they will ever do a good job of placing this paper within its scientific context?

    Comment by Steve Easterbrook — 28 Nov 2011 @ 2:09 PM

  9. There may be a bias in the SST estimates used in this (and other LGM-sensitivity papers). Many of the SST estimates are derived from planktonic foraminifera community composition, calibrated against 10m water depth SST. However, much of the foram community lives at much greater depths. This discrepancy between the depths at which foraminifera live and the depth at which they are calibrated to SST could introduce a bias if the thermocline structure has changed (i.e. steeper or shallower temperature gradients over the top 100 or so metres). I’m currently working to determine how large this bias is (probably small) and in which direction.

    Comment by rjt — 28 Nov 2011 @ 2:50 PM

  10. This may not be the Hegerl & Russon Perspective linked above but it is a pretty good substitute.

    Comment by Pete Dunkelberg — 28 Nov 2011 @ 2:51 PM

  11. I read the popular summaries of this as something of a good-news/bad-news story. If true, the good news is that global temperature appears modestly less sensitive to CO2 forcing than the IPCC mean sensitivity suggests. (Though this posting presents a clear description of the caveats.) The bad news is that habitability of the earth’s surface appears more sensitive to temperature change than had been previously thought. The hostile environment of the glacial maximum was produced by just ~3.5C cooling. That’s part and parcel of the low sensitivity estimate. Since I care about the habitability of the surface, not temperature per se, I don’t see the results as reassuring in the least.

    Comment by Christopher Hogan — 28 Nov 2011 @ 3:12 PM

  12. How does this paper relate to the recent draft paper by Hansen et al on Energy Imbalance:

    http://www.columbia.edu/~jeh1/mailings/2011/20110826_EnergyImbalancePaper.pdf

    And to Hansen and Sato’s paper on Paleoclimate Implication

    http://www.columbia.edu/~jeh1/mailings/2011/20110720_PaleoPaper.pdf

    Climate or Earth System Sensitivity, however defined and at whatever geological period (see the discussion in the Energy Imbalance draft), is one thing, but polar amplification is another, it seems to me. Hansen and others now argue that the Eemian was probably less warm globally than previously thought (so lower sensitivity?), but polar amplification was therefore higher, and the risk of sea level rise as well.

    What’s the merit in this argument, if any, and how does it relate to the climate sensitivity discussion?

    Comment by Lennart van der Linde — 28 Nov 2011 @ 3:14 PM

  13. Someone please tell me the relevance anymore of man’s continued contribution to atmospheric CO2 levels, when enough momentum has already been unleashed to begin the thaw of the permafrost region, which of course contains an immense self-feeding mechanism.

    Are rational people still discussing the possibility of stopping or limiting the thaw?

    Comment by Walt Bennett — 28 Nov 2011 @ 3:29 PM

  14. for Walt Bennett
    You are assuming the worst is unavoidable and that we can’t make it any worse — or better — by managing what we can manage.

    There’s no science supporting that conclusion.

    Comment by Hank Roberts — 28 Nov 2011 @ 3:50 PM

  15. further for Walt Bennett:
    http://www.sciencedirect.com/science/article/pii/S0012825211000687
    Earth-Science Reviews
    Volume 107, Issues 3-4, August 2011, Pages 423-442
    doi:10.1016/j.earscirev.2011.04.006
    Atmospheric methane from organic carbon mobilization in sedimentary basins — The sleeping giant?

    Comment by Hank Roberts — 28 Nov 2011 @ 3:52 PM

  16. Lennart van der Linde (apologies for comment length),

    You’re right that “climate sensitivity” and “polar amplification” refer to two different things, though there is some overlap.

    Climate sensitivity, in the context of this post, refers to a globally averaged quantity. Specifically, it refers to the ratio of the global temperature change to the radiative perturbation that causes it (and thus has units of, for example, degrees C per watt per square meter).
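
    In symbols, with the standard logarithmic CO2 forcing (these are the usual textbook values, not numbers from the paper):

        \Delta T_{\mathrm{eq}} = S \,\Delta F, \qquad
        \mathrm{ECS} = S \cdot F_{2\times}, \qquad
        F_{2\times} = 5.35 \ln 2 \approx 3.7\ \mathrm{W\,m^{-2}}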

    Polar amplification, on the other hand, relates to the fact that the high latitudes typically exhibit a larger temperature change to a given perturbation than do the low, equatorial latitudes (or, for that matter, the global mean).

    You could think of the polar regions as having a higher climate sensitivity than the low latitudes, but in the context of the Schmittner paper, they are only talking about a single globally-averaged number for sensitivity. Ideally, that would include the amplified polar regions (assuming you have good data coverage from those areas).

    As for the Hansen and Sato (2011) paper, they use a different methodology to diagnose sensitivity than the Schmittner paper, though both Hansen and the Schmittner paper are focusing on the same overall issue and like to use the LGM as a good target for tackling the problem.

    In many of Hansen’s papers, he uses data for the global temperature change during a specific period (such as the LGM) compared to modern. He then uses what information is available to quantify (in Watts per square meter) what radiative terms drive that temperature change (for the LGM this is primarily increased surface albedo from more ice/snow cover, and also changes in greenhouse gases…the former is treated as a forcing, not a feedback; also, the orbital variations which technically drive the process are rather small in the global mean). Then, he takes the ratio of the temperature change to the total forcing and defines this as the sensitivity. This method tries to rely as much as possible on pure observations to find the temperature change and the forcing (you might need a model to constrain some of the forcings, but there’s a lot of uncertainty about how the surface and atmospheric albedo changed during glacial times…a lot of studies only look at dust and not other aerosols, there is a lot of uncertainty about vegetation change, etc).

    Sometimes various factors like aerosols or vegetation change aren’t considered, and thus whatever effect they might have would just be lumped into the value of climate sensitivity value that emerges from this method.

    Hansen also extends this analysis further (see also his 2008 Target CO2 paper for a lot of the methodology). He takes the climate sensitivity value derived from the LGM, based on the above methodology (about 3 C per doubling of CO2), treats that as a constant, and then applies it to the whole 800,000 yr ice core record. In order to establish a time series of global radiative forcing over the whole record he needs to assume a particular relationship between sea level/temperature records and the extent of the ice sheets over the last 800,000 years, which in turn defines the albedo forcing. Since there is no time-series of global temperature change, he also needs to assume a particular relationship between Antarctic temperature change (or the deep-sea change) and the global mean, and that the relationship always holds (probably untrue). He gets a good fit to the calculated temperature reconstruction when he uses the fixed climate sensitivity multiplied by the time series of radiative forcing.
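
    As a worked example of that ratio method (round, illustrative numbers of the size Hansen typically quotes, not exact values from any one paper): an LGM cooling of about 5 K against a total forcing of about 6.5 W/m² gives

        S \approx \frac{5\ \mathrm{K}}{6.5\ \mathrm{W\,m^{-2}}} \approx 0.77\ \mathrm{K\,(W\,m^{-2})^{-1}},
        \qquad \mathrm{ECS} \approx 0.77 \times 3.7 \approx 2.8\ \mathrm{K} \text{ per doubling}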

    In contrast to this observational approach, Schmittner (as well as a number of other papers, like reference #4 in this post) takes advantage of both models and observations, and tries to use the observations to constrain which feedback parameters in the model are consistent (e.g., an overly sensitive model with the same forcings as another model will produce a bigger temperature change than observations allow). There are different ways to weight the ensemble members, such as by assuming a Gaussian distribution in the paleo-data, or by bracketing the “acceptance limit” of an ensemble member at the 1σ limit of the paleo-data. Note that the observational approach needs to assume a constant climate sensitivity between different states, whereas perturbed physics ensembles don’t (though you still need to understand what feedback processes are important between different climate states to have confidence in the results).
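
    To make the weighting idea concrete, here is a minimal sketch of both schemes (the ensemble values and the proxy estimate below are invented for illustration, not taken from any of the papers):

        import numpy as np

        # Hypothetical ensemble: each member pairs an ECS with a modelled LGM cooling.
        ecs = np.array([0.5, 1.5, 2.0, 2.5, 3.0, 4.0, 6.0])               # K per doubling
        modelled_cooling = np.array([1.0, 2.2, 2.9, 3.5, 4.1, 5.3, 7.8])  # K

        obs_cooling, obs_sigma = 3.0, 0.7  # assumed proxy estimate and 1-sigma error

        # Gaussian weighting: members that fit the proxy data get high weight.
        weights = np.exp(-0.5 * ((modelled_cooling - obs_cooling) / obs_sigma) ** 2)
        weights /= weights.sum()

        # Hard bracketing: accept members within the 1-sigma limit, reject the rest.
        accepted = np.abs(modelled_cooling - obs_cooling) <= obs_sigma

        print("posterior-mean ECS (Gaussian weights):", np.sum(weights * ecs))
        print("ECS of accepted members (1-sigma bracket):", ecs[accepted])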

    Hope that helps

    Comment by Chris Colose — 28 Nov 2011 @ 5:13 PM

  17. > planktonic foraminifera

    Lots out there. Calibration is always an issue — the BEST folks will have to discuss these questions when looking at sea temps.

    A high-school science exercise:
    http://www.ucmp.berkeley.edu/fosrec/Olson3.html
    “… the deepest dwelling forms represent the true water-depth…. foraminifera will be found there which preferred to live in that environment as well as foraminifera which were brought in from shallower water by downslope processes (i.e., the Amazon Fan moves sediment, including foraminifera, in a downslope direction off of Brazil).”

    Core drilling reports:
    Deep Sea Drilling Project Initial Reports Volume 47 Part 1
    http://www.deepseadrilling.org/47_1/volume/dsdp47pt1_36.pdf
    GF Lutze
    … both benthic and planktonic foraminifers follow the surface-water oxygen-isotope curve, which served to calibrate the benthic foraminiferal record …

    Correlations:
    On the fidelity of shell-derived δ18Oseawater estimates
    http://www.ldeo.columbia.edu/~peter/site/…/Arbuszewski.etal.2010.pdf

    Comment by Hank Roberts — 28 Nov 2011 @ 5:39 PM

  18. My interest is not in a “best fit” point estimate, but in their characterization of the distribution of possible fits. I had a quick look at their paper and I worry about the possibility of statistical misspecification they raise (page 3 of the SCIENCE EXPRESS article). In particular, although they have considered spatial dependencies in their covariance (Supporting Online Material or “SOM”, section 6.3), they do insist upon a Gaussian shape. There’s no reason why that should be the case. Also troubling is their report in the SOM that the marginal distribution for ECS2xC is strongly multimodal (see their Figure S13), for which they have no real explanation, although they offer possibilities.

    I’m willing to buy their “best fit” point estimate. As a result of this look, I’m far more skeptical of the accuracy of their tight bound on its range. It’ll come out in the wash. I’m sure there’ll be many remarks on this paper and its methods.

    Comment by Jan Galkowski — 28 Nov 2011 @ 5:48 PM

  19. More:

    http://www.nature.com/nature/journal/v470/n7333/full/nature09751.html
    Corrected online 14 April 2011
    Erratum (April, 2011)
    “… results, based on TEX86 sea surface temperature (SST) proxy evidence from a marine sediment core …”

    and (for calibration, this sort of work will be useful for the BEST folks – Robert Rohde et al. – for sea surface reconstructions)

    doi:10.1016/j.palaeo.2011.08.014
    The effect of sea surface properties on shell morphology and size of the planktonic foraminifer Neogloboquadrina pachyderma in the North Atlantic

    Tobias Moller, Hartmut Schulz, Michal Kucera
    Received 14 February 2011; revised 27 June 2011; Accepted 8 July 2011. Available online 3 September 2011.
    Abstract

    “The variability in size and shape of shells of the polar planktonic foraminifer Neogloboquadrina pachyderma have been quantified in 33 recent surface sediment samples …. shell size showed a strong correlation with sea surface temperature. …”

    Comment by Hank Roberts — 28 Nov 2011 @ 5:54 PM

  20. 18 Jan Galkowski: Roger that. I think you are too young to remember when the wheels fell off of certain cars. [1968] Other things fell off of the same cars. The statistician at that company assumed that all distributions were Gaussian. I was taking a statistics course from the Army Management Engineering Training Agency at the time.

    The shape of the distribution of climate sensitivities is definitely a critical issue and it has been discussed recently at RealClimate.
    http://www.realclimate.org/index.php/archives/2011/10/unforced-variations-nov-2011/comment-page-7/#comments

    Is the climate sensitivity distribution possibly two-humped? The second hump being the methane hydrate gun? The fat right tail is a big question. The distributions I have seen recently look like a lot needs to be researched. No criticism of the science that has been done.

    Comment by Edward Greisch — 28 Nov 2011 @ 6:42 PM

  21. Jan Galkowski (#18),

    Re: Gaussianity, see Figure S10 of the SOM. The residual errors around the best fit (posterior mean) appear somewhat non-Gaussian, but this appears to be due to the land and ocean data not centering on the same ECS value. If you separate them into land errors and ocean errors (as our analysis does), there doesn’t appear to be strong evidence of non-Gaussianity. Nevertheless, we also tested one non-Gaussian assumption, using a Student-t distribution with heavier tails (figures S14 and S15). It doesn’t change the result much.

    If there is statistical misspecification, I think it’s probably in regional biases or nonstationary covariances, not in non-Gaussianity. Either one could make the uncertainty interval too narrow (or too wide, I suppose, but it usually winds up the other way around).

    Re: multimodality, we haven’t proven a cause. But I suspect it is mostly an artifact of interpolation (though maybe a bit physical), as we discuss. I don’t think it has much to do with the width of our uncertainty interval.

    Edward Greisch (#20):

    I think you may be conflating two separate concepts. Jan is talking about non-Gaussian errors in the data-model residuals. You appear to be talking about non-Gaussian distributions of climate sensitivity.

    Comment by Nathan Urban — 28 Nov 2011 @ 7:27 PM

  22. Thank you for the article. One thing I picked up on is your closing paragraph indicating how much time we have. You’ve estimated 24 years to eat up the emissions budget for reaching 2 degree rise based on current IPCC estimates, or 35 years if the Schmittner estimate were closer to the mark – assuming the current rate of growth of emissions (3%/year).

    If both are too low then we have even less time to act.

    Numbers like this should bring it home to more people just how urgent it is to reduce emissions – (to those over 40 who probably have a different perspective on time than many of those under 30).
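
    For anyone who wants to redo that budget arithmetic themselves, a minimal sketch (the budget and emissions numbers below are placeholders, not the figures from the post):

        import math

        def years_to_exhaust(budget, current, growth):
            """Years until cumulative emissions, growing at a constant rate, use up a budget.

            Solves sum_{t=0..T-1} current*(1+growth)**t = budget for T.
            """
            return math.log(1 + growth * budget / current) / math.log(1 + growth)

        # Placeholder numbers: a 1000 GtCO2 budget, 35 GtCO2/yr today, 3%/yr growth.
        print(years_to_exhaust(budget=1000.0, current=35.0, growth=0.03))  # about 21 years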

    Comment by Sou — 28 Nov 2011 @ 7:47 PM

  23. When I read the Schmittner et al. paper, I was taken aback by the fact that the SOM contained important information which I might expect to see in the main paper. There are two rather different issues mixed together. First is the temperature re-construction for the LGM and the comparison between the LGM and present temperatures. Second is the use of a computer model to compare the change between the LGM and the present, then extrapolate this comparison to assess the effect of 2x pre-industrial CO2 relative to our recent climate.

    The first finding, the lessened change in temperature between LGM and the present, is an important issue in itself and suggests to me that the climate may be more sensitive to a perturbation than previously suspected. Also, the first effort shows that during the LGM, some areas may have been warmer than present, such as over the Nordic Seas and northwestern Pacific. We now see a world in which Siberia is considered to be the coldest area during NH winters, yet, Siberia appears to have been free of glaciers during the LGM from what little I’ve seen. Given the various sources of uncertainty mentioned in other posts, the temperature reconstruction is the first question that needs to be studied.

    The second issue is the modeling effort, which uses the UVic model of “intermediate complexity”. I think this work may be seriously flawed for several reasons, many of which are acknowledged in the SOM, but not the main paper. The simple energy balance model of the atmosphere does not include a term for the fourth power of temperature. Given that all energy leaving the TOA is IR, which is a function of T^4 at some effective elevation in the atmosphere, surely this should be included. The original polynomial model was published by Thompson and Warren in 1982, at a time when GCM development was still primitive. I think that much has been learned since then. The ocean model does not include any flow between the North Pacific and the Arctic Ocean via the Bering Strait, which would be appropriate during the LGM when sea level was lower, but not for today’s climate and into the expected warmer future. The atmospheric model uses prescribed wind stresses on the ocean and sea-ice, with variations for the LGM using anomalies calculated from a GCM, which produces a stronger AMOC than that found in the GCM. Clouds are not calculated, but set by climatology. Finally, the authors excluded cases where the AMOC collapsed during LGM, noting that these cases would imply a larger climate sensitivity than that presented in their report (SOM, section 5 & 7.5).

    These questions would not be so serious, except that the paper is to appear in SCIENCE and thus will be taken as evidence against the prospect of dangerous climate change. As noted above, there are already commentaries claiming this report shows little need for concern over future climate changes.

    [Response: I agree that there are serious problems with the representation of atmospheric feedbacks in the model, but the lack of a fourth-power dependence in infrared emission vs T is not the key one among them. Over ranges of temperature of a few degrees, you can linearize sigma*T^4 about the base temperature T0 with little loss of accuracy. Moreover, for temperatures similar to the present global mean, water vapor feedback actually cancels out some of the positive curvature from the fourth-power law (see Chapter 4 of my book, Principles of Planetary Climate). So it’s not a manifestly fatal flaw to have a linearized emission representation. It is true that the linearization does substantially misrepresent some aspects of the north-south gradient in temperature, given that over the temperature difference between tropics and Antarctic winter the nonlinearity becomes significant. What is more severe, in my view, is that the energy balance model cannot represent the geographic distribution of lapse rate, relative humidity or clouds. In the interview over on Planet 3, Nathan Urban clearly doesn’t understand the full limitations of the model even though he is one of the authors of the paper. It’s more than just failing to represent the albedo effects of clouds — the model doesn’t represent the geographical variation of cloud infrared effects either, or the way these change with climate. Given that clouds are known to be the primary source of uncertainty in climate sensitivity, how much confidence can you place in a study based on a model that doesn’t even attempt to simulate clouds? –raypierre]
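
    For reference, the linearization described above is a first-order Taylor expansion of the Stefan–Boltzmann law (standard numbers; σ is the Stefan–Boltzmann constant):

        \sigma T^4 \approx \sigma T_0^4 + 4\sigma T_0^3\,(T - T_0), \qquad
        4\sigma T_0^3 \big|_{T_0 = 255\ \mathrm{K}} \approx 3.8\ \mathrm{W\,m^{-2}\,K^{-1}}

    so over a range of a few degrees the linear term captures the emission change to within a few percent.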

    Comment by Eric Swanson — 28 Nov 2011 @ 8:14 PM

  24. This discussion reminds me of philosophers of the Middle Ages debating how many angels could stand on the head of a pin.

    Another analogy might be a group of naval ship designers on the Titanic discussing, while the ship slowly fills with water, technical detail about the hull steel plates size, the number of water tight compartments, the angle of the iceberg strike, etc.

    Gentlemen – the last ten years are the hottest on historical record. We are now at nearly 400 parts per million CO2 in our atmosphere. Carbon emissions increased by an estimated 6% just last year; crops were wiped out all across the US south by what many see as a clear global warming event. It goes on and on.

    Don’t we have more urgent business than to measure the rapidity and other obscure scientific details of how our Titanic is doing and the number of seconds or hours it will take before it sinks?

    Talk about Nero fiddling while Rome burns. We are fiddling with scientific minutia while the world burns.

    Comment by William P — 28 Nov 2011 @ 9:17 PM

  25. “Be that as it may, all these studies, despite the large variety in data used, model structure and approach, have one thing in common: without the role of CO2 as a greenhouse gas, i.e. the cooling effect of the lower glacial CO2 concentration, the ice age climate cannot be explained…”

    Indeed, the ice age cycles cannot be explained without lower glacial CO2 concentration if a positive feedback is involved.
    I guess all these studies have one thing in common (positive feedback), so have all reached similar conclusions (CO2 is a major factor).
    I’m really surprised at the result, despite the fact you have reached similar conclusions without really changing anything that would significantly change the outcome.

    Comment by Isotopious — 28 Nov 2011 @ 9:31 PM

  26. Informative & critical review; thank you.

    Comment by David B. Benson — 28 Nov 2011 @ 9:40 PM

  27. In the response by raypierre – I agree about the problems with the simple energy balance model and its lack of spatial representation, but it’s tough to fault the authors for the lack of cloud detail, since the science is not up to the task of solving that problem (and doing so would be outside the scope of the paper; very few paleoclimate papers that tackle the sensitivity issue do much with clouds).

    The paper states that “Non-linear cloud feedbacks in different complex models make the relation between LGM and 2×CO2 derived climate sensitivity more ambiguous than apparent in our simplified model ensemble” so they do recognize this (it is of course hand-waving, but fair hand-waving). Absent understanding of cloud feedback processes, the best you can really do is mesh it into the definition of the emergent climate sensitivity, but I think probing (at least some of) the uncertainties in effects like this is one of the whole points of these ensemble-based studies.

    [Response: Chris, you are way off the mark here. The lack of certainty in our knowledge of cloud behavior is no excuse for leaving it out of a model entirely. GCM’s vary in their cloud effects — that’s one of the things one should be trying to test against the LGM. But they do at least have certain basic physical principles in their cloud representations — clouds over ice have less albedo effect than clouds over water, you don’t get high clouds in regions of subsidence, stable boundary layers lead to marine stratus, etc. These are all things that are crucial in determining the relation between LGM behavior of a model and 2xCO2 behavior. To make such strong claims about climate sensitivity based on a model that represents none of this verges on irresponsibility. Yes, I fault them. –raypierre]

    Comment by Chris Colose — 28 Nov 2011 @ 9:43 PM

  28. Eric Swanson (#23),

    Science has severe length constraints on what can go into the main body of the paper, hence the SOM. And no, we didn’t exclude AMOC collapsed cases. The main result includes all the data, but in the SOM we tested the exclusion of data from the AMOC collapsed region. We also tested the sensitivity to stronger or weaker wind stresses (though with the same spatial pattern).

    raypierre (#23),

    I don’t really feel it’s necessary here to use phrases like “clearly doesn’t understand”. I am fully aware that UVic doesn’t explicitly resolve the longwave physics of clouds (nor the shortwave physics), and nothing in my interview states otherwise. Perhaps you overlooked “e.g.”.

    [Response: No, Nathan, I didn’t overlook the e.g. Your remark in the interview clearly implied that it was only the shortwave effect of the clouds that was excluded. If you meant to say that the model flat out doesn’t compute clouds at all, why not come right out and say so? –raypierre]

    Comment by Nathan Urban — 28 Nov 2011 @ 10:05 PM

  29. An interesting paper, and thanks for the review of it. I’m still finding it quite hard to see where the cause for optimism comes from given the results presented. Notably, the last line of the abstract: “[with caveats] … these results imply lower probability of imminent extreme climatic change than previously thought.”

    Media articles jumped on this I think, not least the BBC, yet I don’t see the justification for this statement. The paper seems to support a climate in which you get more bang for your buck, or more “change” (however you quantify it) for every degree Celsius warming.

    We may not know exactly how many degrees Celsius we’ll get for each doubling of atmospheric CO2, but we do know that the world was a helluva lot different at the LGM compared to now. Schmittner et al hints that a comparable change will not take too many degrees Celsius, as the temperature difference to LGM is smaller than in other estimates. At our present rate of warming, this smaller change will come about quicker than previously suggested, not slower. Thus we’ll effect the equivalent of an LGM-Holocene change really quite fast. Climate may be less sensitive to CO2 change according to the paper, but it is more sensitive to each degree Celsius temperature change – this, to me, is saying that we should be more worried, not less worried based on these results.

    If I’m off the mark here, that’s good, but can anyone clarify this point?

    Comment by skywatcher — 28 Nov 2011 @ 10:09 PM

  30. Your understanding is correct, skywatcher. No reason for optimism.

    Comment by DrTskoul — 28 Nov 2011 @ 10:23 PM

  31. While not a climate scientist, I am an active experimental atomic physicist, with decades of experience in data analysis and uncertainty analysis. I am disturbed by the quoted uncertainty in the Schmittner paper, in light of the significant discrepancies between the sensitivity extracted from the ocean and the land data (their figure 3). Their quoted 95% result of 2.8C is in contradiction with at least half of their sensitivity range as measured from the land data. The discrepancy between the land and ocean results clearly points to a serious problem – whether in the data or the model. But given that, I do not see how one can make any combined statistical prediction – the discrepancies point to a clearly systematic problem.

    I can point to numerous cases in precision measurements in my field where improved results are many sigma different than previous results – precisely because systematic effects were underestimated or not even known. But here we have a clear demonstration of a dramatic systematic effect, and yet it is treated as statistical in nature. I would much prefer the results to be reported as a sensitivity of 2.4K extracted from ocean data and 3.4K as extracted from land data.

    I can certainly tell you that it would not be acceptable to report such a narrow combined uncertainty in a physics measurement (where uncertainties are much more under control) if the result of two measurements showed such discrepancies.
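
    That point can be made quantitative with a toy calculation (the 2.4 K and 3.4 K central values are the ones quoted above; the error bars are invented purely for illustration). When the tension between inputs is large, the naive combined error bar understates the true uncertainty:

        import math

        vals = [2.4, 3.4]    # K per doubling: ocean-only and land-only estimates
        sigma = [0.4, 0.5]   # assumed 1-sigma errors (illustrative)

        # Naive inverse-variance combination, valid only if the two inputs agree.
        w = [1 / s**2 for s in sigma]
        mean = sum(wi * v for wi, v in zip(w, vals)) / sum(w)
        err = math.sqrt(1 / sum(w))

        # Tension between the inputs, in combined standard deviations.
        tension = abs(vals[0] - vals[1]) / math.sqrt(sigma[0]**2 + sigma[1]**2)
        print(f"weighted mean {mean:.2f} +/- {err:.2f} K; inputs discrepant by {tension:.1f} sigma")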

    Comment by Steve R — 28 Nov 2011 @ 10:26 PM

  32. raypierre (#28),

    My remark in the interview was not intended to imply any such thing. I’m sorry if you read it that way. I gave one example of longwave feedbacks (water vapor) and one example of shortwave feedbacks (clouds). Those were not intended to be exclusive or exhaustive examples, hence the “e.g.” I know we both know what an energy-balance atmosphere model is. If you want to assume otherwise, I can’t really help that, and I don’t see any point arguing about it. However, I do appreciate your remarks concerning the importance of resolving cloud physics.

    [Response: Here is the exact quote from the interview: “One limitation of our study is that we assume that the physical feedbacks affecting climate sensitivity occur through changes in outgoing infrared radiation (e.g., through the greenhouse effect of water vapor). In reality, feedbacks affecting reflected visible light are also important (e.g., cloud behavior). Our study did not account for these feedbacks explicitly. Also, as I mentioned earlier, our simplified atmospheric model does not represent these feedbacks in a fully physical manner.” I’ll leave it for people to judge for themselves what this is supposed to mean. I can’t make much sense myself of what you are trying to say about water vapor feedback here. But as you say, there’s no point in arguing about it. The important thing is that we all understand that with regard to clouds it’s not just that they’re not represented “in a fully physical manner.” How about, like, not in any physical manner at all? –raypierre]

    Comment by Nathan Urban — 28 Nov 2011 @ 11:10 PM

  33. Isotopious @25 — Please first study Ray Pierrehumbert’s “Principles of Planetary Climate” after reading “The Discovery of Global Warming” by Spencer Weart:
    http://www.aip.org/history/climate/index.html
    The effects of atmospheric CO2 were completely worked out by 1979, at the latest.

    [Response: And for a historical perspective don’t forget The Warming Papers, by Dave Archer and myself. –raypierre]

    Steve R @31 — That the climate sensitivity of oceans and land surfaces are quite different comes as no surprise.

    [Response: Actually, the issue regarding land vs. ocean in the Schmittner et al paper is more perplexing than just that. Warming over land is amplified relative to global mean by a model-dependent amount that is often around 50%. The Schmittner et al result does not simply say that land is more sensitive than ocean. Rather, their analysis shows that if you compare the LGM land cooling with the model land cooling, then the model that fits the land best has much higher GLOBAL climate sensitivity than you get for best fit if you use ocean data. As stated in the paper, that could reflect an error in the land temperature reconstruction (too cold), an error in the ocean reconstruction (not cold enough) or an error in the model’s land/ocean ratio. It has been argued that the land amplification is associated with lapse rate changes (not represented in the UVic model), and it is certain that drying of the land can play a role (not reliable in the UVic model since diffusing water vapor gives you a crummy hydrological cycle, especially over land). To his credit, Nathan Urban flagged the land-ocean mismatch as one of the caveats in the study indicating all is not right. That makes it particularly egregious that Pat Michaels erased the land curve from the sensitivity histogram in his World Climate Report piece (see Nathan’s interview at Planet 3). I just wish the paper and especially the press release had been as up front about the study’s shortcomings as Nathan’s interview. –raypierre]

    Comment by David B. Benson — 28 Nov 2011 @ 11:16 PM

  34. Patrick “halfgraph” Michaels strikes again.

    That aside, what would be a good model to use in this study?

    [Response: The right way to do this would be to use a perturbed physics GCM ensemble akin to ClimatePrediction.net. Even better to do it with perturbed physics ensembles from several different models. It’s clear this is very computationally intensive, and one can’t fault the authors for beginning with a study that is less ambitious. Still, in my view it would have been far better if the authors had presented their study as a “proof of concept” with some intriguing speculative results, rather than making the exaggerated (and insufficiently caveated) claims made in the paper, and more especially the press release. –raypierre]

    Comment by Pete Dunkelberg — 28 Nov 2011 @ 11:52 PM

  35. And for a historical perspective don’t forget The Warming Papers, by Dave Archer and myself. –raypierre

    I don’t suppose that will be coming out in e-book form? Too many diagrams, equations, which don’t render well in e-readers? I’m just a bit reluctant to spend nearly $90 CND on another book when I still have another climate book (Principles of Planetary Climate :) I’m struggling through in my spare time.

    [Response: I don’t think the publishers have immediate plans for an e-book edition. It wouldn’t work on a Kindle or Nook, but it would do fine on an iPad – though I don’t think our publishers are set up for that yet. But the main thing is that the price for an ebook wouldn’t be significantly less, since almost all of the price is due to the extortionate copyright fees the publisher had to pay to the journals for permission to reprint. The only thing that would bring that down would be selling LOT’s and LOT’s of copies. Hope that happens eventually :) . Meanwhile the best bet would be to persuade your local public library to buy it, or maybe form a little reading group and chip in $10 each then share it. I really do regret the price. –raypierre]

    Comment by Daniel J. Andrews — 29 Nov 2011 @ 12:06 AM

  36. [raypierre response @ 33] — Right, also The Warming Papers.

    Thanks also for the additional comment regarding the Schmittner et al paper [which I found certainly helpful]. My lack of surprise regarding solely the ‘actual’ difference in the ocean and land sensitivities is motivated by this LGM vegetation map:
    http://www.ncdc.noaa.gov/paleo/pubs/ray2001/ray_adams_2001.pdf
    which certainly suggests it was quite dry during LGM. My amateur understanding is that this should affect the averaged land temperature more than the averaged ocean temperature.

    Comment by David B. Benson — 29 Nov 2011 @ 12:12 AM

  37. 1998:
    “… the average turnover time of phytoplankton carbon in the ocean is on the order of a week or less, total and export production are extremely sensitive to external forcing and consequently are seldom in steady state. … oceanic biota responded to and affected natural climatic variability in the geological past, and will respond to anthropogenically influenced changes in coming decades.”

    Science Volume: 281, Issue: 5374, Pages: 200-206, DOI: 10.1126/science.281.5374.200

    2009:

    “Recently, unprecedented time-series studies conducted over the past two decades in the North Pacific and the North Atlantic at >4,000-m depth have revealed unexpectedly large changes in deep-ocean ecosystems significantly correlated to climate-driven changes in the surface ocean that can impact the global carbon cycle. Climate-driven variation affects oceanic communities from surface waters to the much-overlooked deep sea ….”
    http://www.pnas.org/content/106/46/19211.abstract

    Comment by Hank Roberts — 29 Nov 2011 @ 1:27 AM

    “Low-oxygen and low-pH events are an increasing concern and threat in the Eastern Pacific coastal waters, and can be lethal for benthic and demersal organisms on the continental shelf. The normal seasonal cycle includes uplifting of isopycnals during upwelling in spring, which brings low-oxygen and low-pH water onto the shelf. Five years of continuous observations of subsurface dissolved oxygen off Southern California, reveal large additional oxygen deficiencies relative to the seasonal cycle during the latest La Niña event…. the observed oxygen changes are 2–3 times larger …. the additional oxygen decrease beyond that is strongly correlated with decreased subsurface primary production and strengthened poleward flows by the California Undercurrent. The combined actions of these three processes created a La Niña-caused oxygen decrease as large and as long as the normal seasonal minimum during upwelling period in spring, but later in the year. With a different timing of a La Niña, the seasonal oxygen minimum and the La Niña anomaly could overlap to potentially create hypoxic events of previously not observed magnitudes…..”

    Nam S., Kim H.-J., & Send U., 2011. Amplification of hypoxic and acidic events by La Niña conditions on the continental shelf off California. Geophysical Research Letters 38: L22602.

    ——-
    Rate of change, rate of change, rate of change

    Comment by Hank Roberts — 29 Nov 2011 @ 1:32 AM

  39. David Benson,

    Actually the issue of different sensitivity between land and ocean is not immediately obvious, and has been the subject of various papers in recent years. The larger thermal inertia of the ocean is important, but the higher sensitivity over land than in the ocean is also seen in equilibrium simulations when the ocean has had time to “catch up,” so that argument doesn’t hold as equilibrium is approached. As raypierre mentioned, the amplification of land:ocean is about 1.5 (see Sutton et al., 2007, GRL). There are various proposed mechanisms to explain this that involve the surface energy balance (e.g., less coupling between the ground temperature and lower air temperature over land because of less potential for evaporation), and also lapse rate differences over ocean and land (see Joshi et al 2008, Climate Dynamics), as well as vegetation or cloud changes.

    In any case, there’s been a surprisingly large number of people that have (incorrectly) interpreted the figure in Dr. Urban’s interview as just meaning a higher land sensitivity than ocean sensitivity. You can certainly define land vs. ocean sensitivity meaningfully, but then you can also talk about polar vs tropical sensitivity too (which probably has a much higher ratio than land:ocean), or whatever else you feel like calculating. But the Schmittner paper is only focused on a global climate sensitivity, and that’s what they calculate and report. It’s important to understand that the method used in the paper involves using observations to constrain which model version (of 47 ensemble members that range from a sensitivity of 0.3 to 8.3 K/2xCO2) are compatible with the obs. The different sensitivities are made possible by tweaking a parameter that relates temperature to OLR (a weaker slope of OLR vs. T implies a higher sensitivity).

    Thus, the interpretation of the figure is that the ocean SST data favor a lower global climate sensitivity value than the land SAT data, which is why people suspect the results are not robust (and probably that the temperature anomaly of the LGM-modern is too low).
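
    In the simplest energy-balance terms, that OLR-slope tweak works like this (a generic illustration, not the actual Thompson & Warren coefficients): if outgoing longwave radiation is OLR ≈ A + B·T, then a new equilibrium after a forcing ΔF requires ΔF = B·ΔT, so

        \Delta T_{2\times} = \frac{F_{2\times}}{B} \approx \frac{3.7\ \mathrm{W\,m^{-2}}}{B}

    and, e.g., B = 2.0 W/m²/K gives about 1.9 K per doubling while B = 1.2 W/m²/K gives about 3.1 K: a weaker OLR slope means a higher sensitivity.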

    Comment by Chris Colose — 29 Nov 2011 @ 1:48 AM

  40. Quoting the two sets of median values given above, if a 55% increase in CO2 from LGM to pre-industrial caused a 3.3-5.8C warming, how does one calculate a 2.3-3C doubling sensitivity?

    We’re told that in the depths of an ice age orbital changes provide a small forcing which warms the planet a tad, causing natural systems to release a small amount of CO2. This causes a warming/CO2 feedback loop and we end up perhaps 5C warmer in an interglacial. Now, isn’t that entire 5C the natural response to the small orbital forcing? Assuming the current anthropogenic CO2 forcing is larger than orbital forcings, shouldn’t we expect more than 5C warming as an ultimate result? It seems that current sensitivity estimates ignore feedback CO2 for the modern era. This might work in the short term, as feedback CO2 takes a long time to show up (the 400-800 year lag problem), but ultimately, isn’t this wrong? Instead, isn’t the scenario: we inject CO2 into the atmosphere, causing a spike. This forcing CO2 gets spread around the planetary system over hundreds of years, which mitigates the increase in the atmosphere. Meanwhile, over thousands of years feedback and re-feedback CO2 enters the atmosphere, and we’re at a supercharged version of emerging from an ice age. However, the orbital signal will still be pointed at “cooling”, so perhaps that will save the day.

    Thanks in advance for clearing up my confusion!
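
    One piece of arithmetic that may help here (using the standard logarithmic CO2 forcing): in forcing terms, a 55% CO2 increase is only about 0.63 of a doubling,

        \frac{\ln 1.55}{\ln 2} \approx 0.63

    and in these LGM studies the ice-sheet albedo change is counted as a separate forcing rather than as part of the CO2 response, which is part of why the full glacial-interglacial warming does not translate one-for-one into a doubling sensitivity.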

    Comment by RichardC — 29 Nov 2011 @ 2:07 AM

  41. Very informative post and a really good discussion thread. Thanks to all, and especially to Nathan Urban for joining in.

    [Response: I, too, appreciate Nathan having joined us, as well as Andreas Schmittner (see comment below). –raypierre]

    Comment by CM — 29 Nov 2011 @ 3:22 AM

  42. Thank you for the article. One thing I picked up on is your closing paragraph indicating how much time we have. You’ve estimated 24 years to eat up the emissions budget for reaching 2 degree rise based on current IPCC estimates, or 35 years if the Schmittner estimate were closer to the mark – assuming the current rate of growth of emissions (3%/year).

    If both are too low then we have even less time to act.

    Numbers like this should bring it home to more people just how urgent it is to reduce emissions – (to those over 40 who probably have a different perspective on time than many of those under 30).

    Comment by Sou — 28 Nov 2011 @ 7:47 PM

    This is why I have encouraged everyone – literally – to take a true systems approach to these issues. Climate divorced from energy divorced from how long it takes to cycle through infrastructure minus embedded energy of infrastructure minus EROEI…. and on and on…. is meaningless.

    The IEA’s annual World Energy Outlook (WEO) stressed that by 2017 we will have built the infrastructure that will carry us to 450 ppm CO2.

    And some are expecting an ice-free Arctic by 2016.

    And others believe clathrates of whatever kind are already accelerating in their melt rates (which, paradoxically, may show up better in atmospheric CO2 than methane, since a recent study said 50% of methane is converted to CO2 via microbial methane oxidation, perhaps helping with the accounting re: last year’s massive increase)…

    Comment by ccpo — 29 Nov 2011 @ 3:24 AM

  43. #33
    David Benson. Thanks for the advice. A positive feedback works well for some of the ice age climate, just not all of it (specifically the ‘cycle’ part). I understand the boom part of it, just not the bust. What is the bust, or switch, or tipping point that turns off, or (dare I say it) ‘negates’ the positive feedback?

    A made up example…..

    Let’s say there is an initial change of 1 deg C.
    Which causes a change in CO2 giving 0.75 deg C,
    Which both cause a change in water vapour, giving a total of 6 deg C.

    This is now what you call a changed ‘state’. It’s completely different from the initial conditions.
    In fact, now that the positive feedback has been ‘kick started’, the initial mechanism which began the whole positive feedback is no longer of prime importance. Taking it away will not reverse the effect of the positive feedback. Sure, there will be a change if you take away the initial forcing, but it will be relative to the new state of 6 deg C.

    It will still be 5 deg C. And sure, I know what others think. Take away the forcing and the water vapour will follow. And to an extent, that will happen. But that’s the problem with a positive feedback loop that is dependent on temperature change.

    It is the temperature that the system is dependent on. Once the state has changed, that’s it, it’s a new state. The feedback loop doesn’t care how it got there.

    [Response: The loop doesn’t, but the thing that starts the loop can change direction and as long as the entire feedback loop is not unstable, you simply have an amplified response to the drivers (in this case, orbital forcing). – gavin]
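
    Gavin’s point in the standard linear-feedback shorthand (the textbook form, not anything specific to this thread): with a feedback factor f, the amplified response to a driver ΔT₀ is

        \Delta T = \frac{\Delta T_0}{1 - f}, \qquad 0 \le f < 1

    so as long as f < 1 the loop amplifies but does not run away, and if the driver reverses sign, the amplified response reverses with it.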

    Comment by Isotopious — 29 Nov 2011 @ 5:30 AM

  44. @16 Chris Colose and 40 RichardC:

    That certainly helps, although it leaves me wondering what climate sensitivity we should use in our collective risk assessment, both for mitigation and adaptation. Hansen looks at both faster and slower feedbacks, but how slow the slower feedbacks are is uncertain. So how much time do we have for an overshoot, and to how much warming are we committed in the longer term of several centuries? It seems to me we should use the higher values for climate sensitivity, including the slower feedbacks, for a complete assessment of risks up to the seventh generation, so to speak. All this discussion of the Schmittner et al paper should not distract from the point that Hansen and others (including RichardC in #40 and William P in #24) try to make: that there seems to be a significant risk that climate sensitivity could be on the higher end of the various ranges, especially if we include the slower feedbacks and take into account that these could kick in faster than generally assumed.

    Comment by Lennart van der Linde — 29 Nov 2011 @ 5:57 AM

  45. Chris Colose – I was looking at Sutton et al. 2007 with regards to this paper on SkS a couple of days ago. The mean and median of the equilibrium land-ocean warming contrast (using a slab ocean) in the GCMs considered is ~1.3 (with a spread of about 1.2-1.5). To properly interpret the Schmittner et al. paper on this point wouldn’t we need to know the equilibrium land-ocean warming contrast inherent in the model used? Tom Curtis at SkS reports a warming ratio of about 1.7 in Schmittner et al., though I don’t know if this derives from the model or the data (I confess I haven’t read the paper).

    Raypierre, regarding missing cloud effects. Wouldn’t including cloud changes simply alter the effect of their parameterisation tweaks for each run – they would simply use different values to get a spread of sensitivities? The probability distribution will likely be different, but presumably their best-fit ECS to the data would remain at about 2.3-2.4?

    [Response: Computed cloud feedbacks would mainly have the potential to affect the results by changing the asymmetry between the climate sensitivity going into the LGM vs. going into a 2xCO2 world. –raypierre]

    Comment by Paul S — 29 Nov 2011 @ 6:08 AM

  46. Hi guys (I don’t know exactly who you are, so I hope you’ll forgive me for calling you guys),

    thanks for the criticism of our paper. (Nothing new for me there, though, since we already discuss all these points explicitly in the paper).

    I’d just like to quickly comment on a few incorrect statements:

    1. In the main article you state “the fact that the energy balance model used by Schmittner et al cannot compute cloud radiative forcing is particularly serious.”

    2. Raypierre (#23, thanks for identifying yourself), you state “how much confidence can you place in a study based on a model that doesn’t even attempt to simulate clouds?”

    3. Raypierre (#32) ‘The important thing is that we all understand that with regard to clouds it’s not just that they’re not represented “in a fully physical manner.” How about,like, not in any physical manner at all?’

    All three of these statements are incorrect. The UVic model computes cloud radiative effects. The effect of clouds on the longwave radiation is included in the Thompson and Warren (1982) parameterization. The radiative effect of clouds on the shortwave fluxes is computed as a seasonally varying (but fixed from one year to the next) and spatially varying atmospheric albedo.

    Since we vary the Thompson & Warren parameterization we do implicitly take into account the uncertainty of clouds on the longwave fluxes. And as Nathan correctly states, and as we have already pointed out in our paper, we do not take into account the uncertainty of clouds on shortwave fluxes (atmospheric albedo was not varied).

    [Response: Hi Andreas, thanks for stopping by. I think the actual point that we were making was that the cloud feedback (how clouds change as a function of the temperature, circulation, humidity etc., and how that impacts the radiative balance) is not being calculated here. I’ve edited the main article to make that clearer. – gavin]

    [Response: Yes, that is indeed the point I was making. A model without any dynamics in the atmosphere doesn’t even have the right information in it to even have a chance of doing cloud feedbacks correctly. But regarding missing cloud effects, I do think the one emphasized by Nathan in his Planet 3 interview is probably the key one. Bony et. al find that the spread in 2xCO2 climate sensitivity among CMIP GCM’s is largely due to differences in low cloud behavior, and that’s primarily an albedo effect. –raypierre]

    Comment by Andreas Schmittner — 29 Nov 2011 @ 7:12 AM

  47. Can someone point me to how the Icesheet and Vegetation forcing is simulated. I’m looking for something fairly detailed.

    Comment by PaulW — 29 Nov 2011 @ 7:55 AM

  48. Nathan Urban (#21),

    Thanks, Professor Urban, for your explanation.

    Regarding “Re: multimodality, we haven’t proven a cause. But I suspect it is mostly an artifact of interpolation (though maybe a bit physical), as we discuss. I don’t think it has much to do with the width of our uncertainty interval”: while the main probability mass is in the range advertised, and I certainly understand the convention of reporting a point estimate and an estimate of dispersion, the multimodality, taken at face value, suggests such a single point estimate may be inappropriate. The paper does present the entire density, and that’s great, along with its components.

    Regarding the Gibbs-like towers on the density (S13, and main paper, Figure 3): if there were some way of gauging the uncertainty in the density itself, perhaps using bootstrap resampling, the bumps might be seen to be not statistically significant, and the question would go away.

    Comment by Jan Galkowski — 29 Nov 2011 @ 8:36 AM
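
    Jan’s bootstrap suggestion is straightforward to sketch. Below is a minimal illustration in Python — the samples array is only a stand-in for the paper’s actual posterior draws of ECS, and the kernel-density settings are arbitrary, so this shows the mechanics of the idea rather than the real analysis:

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(0)
        # Stand-in for posterior samples of climate sensitivity (K); in practice
        # these would be the draws behind Figure 3 / S13.
        samples = rng.normal(2.3, 0.3, size=5000)

        grid = np.linspace(1.0, 4.0, 300)
        densities = []
        for _ in range(200):                  # 200 bootstrap replicates
            boot = rng.choice(samples, size=samples.size, replace=True)
            densities.append(gaussian_kde(boot)(grid))
        densities = np.array(densities)

        # Pointwise 95% band for the estimated density; bumps that fit inside
        # the band are plausibly artifacts rather than real modes.
        lo, hi = np.percentile(densities, [2.5, 97.5], axis=0)

    If the bumps in the published density fit comfortably inside such a band, the multimodality question would, as Jan says, go away.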

  49. William P (#24),

    Are you suggesting that all scientists should stop studying the climate? That’s a bold statement on a site about “climate science from climate scientists”.

    Comment by Steinar Midtskogen — 29 Nov 2011 @ 8:47 AM

  50. At first glance the SEA sensitivity result is largely a consequence of the LGM data set they used. But as the top post here says

    Curiously, the mean SEA estimate (2.4ºC) is identical to the mean KEA number, but there is a big difference in what they concluded the mean temperature at the LGM was,….

    It’s hard to say what KEA conclude about the LGM temperature:

    Abstract
    The temperature on Earth varied largely in the Pleistocene from cold glacials to interglacials of different warmths. To contribute to an understanding of the underlying causes of these changes we compile various environmental records (and model-based interpretations of some of them) in order to calculate the direct effect of various processes on Earth’s radiative budget and, thus, on global annual mean surface temperature over the last 800,000 years. The importance of orbital variations, of the greenhouse gases CO2, CH4 and N2O, of the albedo of land ice sheets, annual mean snow cover, sea ice area and vegetation, and of the radiative perturbation of mineral dust in the atmosphere are investigated. Altogether we can explain with these processes a global cooling of 3.9 +/- 0.8 K in the equilibrium temperature for the Last Glacial Maximum (LGM) directly from the radiative budget using only the Planck feedback that parameterises the direct effect on the radiative balance, but neglecting other feedbacks such as water vapour, cloud cover, and lapse rate. The unaccounted feedbacks and related uncertainties would, if taken at present day feedback strengths, decrease the global temperature at the LGM by -8.0 +/- 1.6 K….

    But returning to what I started quoting from the top post here:

    Curiously, the mean SEA estimate (2.4ºC) is identical to the mean KEA number, but there is a big difference in what they concluded the mean temperature at the LGM was, and a small difference in how they defined sensitivity. Thus the estimates of the forcings must be proportionately less as well. The differences are that the UVic model has a smaller forcing from the ice sheets, possibly because of an insufficiently steep lapse rate (5ºC/km instead of a steeper value that would be more typical of dryer polar regions), and also a smaller change from increased dust.

    Perhaps these latter points can be checked with the UVic model itself. Ideally we might prefer a full Earth System Sensitivity approach with full-complexity modeling, which has evidently been deemed too computationally expensive to date. But are more extensive and less uncertain LGM data needed to justify this? And meanwhile, is Hansen’s approach the best?

    Comment by Pete Dunkelberg — 29 Nov 2011 @ 9:23 AM

  51. One of the very best things about the present study is co-author Dr. Nathan Urban’s participation online, explaining the result, the methods and the problems. If you have not read his excellent interview at Planet 3.0, now is the time. A combination of circumstances makes model-based sensitivity estimates of distant times and different climates hard to do, but at least we are getting a good education about it.

    Comment by Pete Dunkelberg — 29 Nov 2011 @ 9:41 AM

  52. I’m confused on one point: you report the LGM cooling as -3.5 °C; Urban reports it as -3.3 °C. Yet the paper states that “The best-fitting model (ECS = 2.4 K) reproduces well the reconstructed global mean cooling of 2.2 K…” I assume the difference is that the global mean cooling cited in the paper includes the contribution of SST change, which, according to MARGO, is -1.9 ± 1.8 °C, whereas the -3.3 or -3.5 °C is for SAT. If so, why is the SAT the relevant point of comparison?

    [Response: The 2.2 deg C cooling is the average only where there is data in the reconstruction. The 3.5 deg C cooling is the global mean SAT change in the ECS=2.35 model version. The latter is important for the climate sensitivity issue, while the former is key for the constraint. The model+statistical model is there to link the two. – gavin]

    Comment by David Lea — 29 Nov 2011 @ 1:19 PM
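
    To make gavin’s distinction concrete, here is a small Python sketch of the two kinds of averages — a cos(latitude)-weighted global mean versus the same mean restricted to cells with proxy coverage. The temperature field and data mask below are invented placeholders, not the actual UVic or MARGO fields:

        import numpy as np

        nlat, nlon = 90, 180
        lats = np.linspace(-89, 89, nlat)
        w = np.cos(np.deg2rad(lats))[:, None] * np.ones((nlat, nlon))  # area weights

        # Placeholder polar-amplified cooling: -2 K at the equator, -6 K at the poles.
        dsat = (-2.0 - 4.0 * np.abs(lats) / 90.0)[:, None] * np.ones((nlat, nlon))

        has_data = np.zeros((nlat, nlon), dtype=bool)   # placeholder proxy mask
        has_data[30:60, :] = True                       # data mostly at low latitudes

        global_mean = (w * dsat).sum() / w.sum()
        masked_mean = (w[has_data] * dsat[has_data]).sum() / w[has_data].sum()
        print(global_mean, masked_mean)   # masked mean shows less cooling

    With a polar-amplified pattern and data concentrated at low latitudes, the data-masked average understates the global mean cooling — the same kind of gap as between the 2.2 and 3.5 ºC numbers.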

  53. So IPCC has 2.0 – 4.5 and this latest study has 1.6 – 2.8. It shifts the range as much as it narrows it. In either case the range is somewhat wide.

    Pardon my ignorance, but we’re now halfway through a doubling of CO2 since preindustrial times (the current 392 ppm divided by sqrt(2) is 277 ppm, right in the 260-280 ppm range given by Wikipedia for the level just before the industrial emissions began). We also have fairly reliable instrumental records of the temperature through most of this increase. If we abandon the models and simply extrapolate the trend, shouldn’t that by now, unless there is a huge or unknown temperature lag, give us a target with a similar range, and wouldn’t that range more or less equal the estimated natural variation? I suppose an estimate based more on direct observations would cause less controversy than one based on models and pre-instrument temperature reconstructions/proxies.

    Every decade that passes should narrow the range. How warm must this decade become to dismiss a value of 1.6 and how cold must this decade become to dismiss a value of 4.5?

    Comment by Steinar Midtskogen — 29 Nov 2011 @ 2:16 PM

  54. David Lea,

    You’re right: 2.2 K (grid points where there is paleo-data) refers to the SST change over the ocean and the SAT change over land, and 3 K refers to the global SAT change. Also, they use a 5×5° grid for the oceans (for SSTs, and Shakun et al 2011) and a 2×2° grid for the land, and because there are more data in the oceans, the global mean is probably biased too strongly toward the ocean.

    Comment by Chris Colose — 29 Nov 2011 @ 2:22 PM

  55. Steinar,

    There are several problems with your approach (if it were so easy, the problem would be trivial and would have been solved a long time ago). First, global temperature does not respond just to the CO2 forcing, but to the total forcing. We don’t know the total forcing that well, primarily because we don’t know the aerosol (direct or indirect) effects. If the negative aerosol forcing were very large, then the cumulative forcing might be only a few tenths of a watt per square meter, and it would require a rather high sensitivity to explain the observed trend. And we don’t expect the aerosol forcing to continue to grow and indefinitely offset CO2 in the future.

    The other issue is that the value you cite is close to correct for the transient climate response. This is a very important number, and probably the most relevant number as we progress into the 21st century, but paleoclimate studies are primarily focused on the equilibrium response (i.e., after the oceans have warmed up sufficiently to allow the top-of-atmosphere radiative budget to be balanced). In other words, the current climate might be at 390 ppm CO2, but the amount of warming we’ve seen (or for that matter, the extent of many glaciers, sea level, etc) has not equilibrated to a 390 ppm CO2 world.

    Comment by Chris Colose — 29 Nov 2011 @ 2:59 PM
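
    Chris’s first point — that the sensitivity inferred from the instrumental record depends strongly on the assumed aerosol forcing — can be seen with the standard energy-budget relation ECS ≈ F_2x·ΔT / (ΔF − ΔQ). A back-of-envelope sketch in Python, with round illustrative numbers rather than any published estimates:

        F2x   = 3.7   # W/m2 forcing per CO2 doubling
        dT    = 0.8   # K, rough observed warming to date
        F_ghg = 2.6   # W/m2, rough greenhouse-gas forcing
        dQ    = 0.5   # W/m2, rough ocean heat uptake

        for F_aero in (-0.5, -1.0, -1.5):      # candidate aerosol forcings
            ecs = F2x * dT / (F_ghg + F_aero - dQ)
            print(f"aerosol {F_aero:+.1f} W/m2 -> implied ECS ~ {ecs:.1f} K")

    The same observed warming implies anything from below 2 K to nearly 5 K depending on the aerosol term, which is exactly why the instrumental record alone constrains sensitivity so poorly.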

  56. William P wrote: “We are fiddling with scientific minutia while the world burns.”

    Studying the climate is what climate scientists do. I hope they will continue doing so through hell and high water.

    Having said that, I agree that climate scientists have already learned and communicated far more than enough to justify urgent action to end anthropogenic GHG emissions as quickly as possible — which numerous national and international scientific organizations, and many individual climate scientists, have explicitly called for in public statements.

    However, it’s not up to the climate scientists to take those actions, or even to identify which specific actions are the most effective (or cost-effective) — or even to blog about such things. It’s up to the rest of us.

    Comment by SecularAnimist — 29 Nov 2011 @ 3:48 PM

    Re #52 Gavin – thanks. Perfect!

    Comment by David Lea — 29 Nov 2011 @ 5:51 PM

  58. I see that some very helpful comments were posted ahead of mine this morning. I think the air has cleared. I have another question. This paper gives a detailed account of the UVic model, albeit a ten-year-old version. Evidently the model finds, or found then, the usual sensitivity of 3.0 for the present time, and a lower sensitivity for the LGM. Does the current UVic model get 3.0 for the present?

    [Response: The way the radiation is written in the UVic model — which is typical for energy balance models of this sort — you can dial in whatever sensitivity parameter you want. In the Schmittner et al paper, they dial in a range of sensitivities, and then see which one looks most like the LGM. The numbers on the lines in the graph we reproduced in the post correspond to the respective 2xCO2 sensitivities for each of the settings of the dial. –raypierre]

    Comment by Pete Dunkelberg — 29 Nov 2011 @ 6:31 PM
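
    raypierre’s “dial” can be made explicit: in an energy balance model with the outgoing longwave flux linearized as OLR = A + B·T, the 2xCO2 sensitivity is just F_2x/B, so choosing the longwave feedback parameter B fixes the sensitivity. A purely schematic sketch (not UVic code):

        F2x = 3.7                            # W/m2 per CO2 doubling
        for ecs in (1.6, 2.35, 3.0, 4.5):    # desired sensitivities, K
            B = F2x / ecs                    # implied feedback parameter, W/m2/K
            print(f"ECS = {ecs:4.2f} K  <=>  B = {B:.2f} W/m2/K")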

  59. When Andreas’ paper was first written about in our local newspaper the Oregonian my first thought was “Whew! Glad that climate change scare is over!” since that’s how the paper’s lead more or less spun it.

    I’d like to ask the scientists here what they feel the next IPCC Report will conclude about this.

    I’d also like to ask them, “Are most things about climate change more dangerous or more benign than previously thought?”

    If one thing is more benign but a dozen others more dangerous, then overall things are more dangerous than previously thought. Is this the case?

    [Response: The first problem here is the term ‘previously thought’. By whom? and when did they think it? And have they been asked whether they’ve changed their minds? You are far better off being specific. i.e. ‘as concluded/estimated/discussed in the last IPCC report’ – that way people actually know what is being compared, and you are comparing a well-reviewed assessment rather than some bloke’s opinion. The second problem is the term ‘dangerous’ – this is very context-laden. What is dangerous for an Inuit, is probably not the same as what is dangerous to someone living in Delhi, or New York. So you are better off again being more specific, are predicted changes of greater magnitude/lesser magnitude/more or less certain etc. than compared to the last IPCC report?

    The answer is that it will be a mixed bag. Some things are changing faster (Arctic ice, Greenland & Antarctic melt, wildfires and heat waves perhaps), some things are within what was expected (temperatures, rainfall), and a few things might not have changed as quickly as expected (though I can’t think of any). For the projections, the differences will mostly depend on different scenarios this time around (including some very optimistic ones) rather than any great improvement in physical understanding. I don’t anticipate any big change in the bottom line, though there will be a lot of progress in the details. – gavin]

    Comment by Richard Brenne — 29 Nov 2011 @ 7:48 PM

  60. Pete Dunkelberg (#58),

    I am a coauthor on a manuscript in revision, Olson et al., JGR-Atmospheres (2011), which has a climate sensitivity analysis from modern (historical instrumental) data, using a similar UVic perturbed-physics ensemble approach. It finds a little under 3 K for ECS (best estimate).

    [Response: Welcome back, Nathan. I’m glad my grouchiness up there hasn’t deterred you from continuing to contribute your insights to this discussion –raypierre]

    Comment by Nathan Urban — 29 Nov 2011 @ 8:05 PM

  61. Isotopious @43 — It is generally agreed that orbital forcing [at high northern latitudes] is the cause of the changes between the three states of interglacial/interstade (mild glacial)/stade (full glacial). The exact details are unknown AFAIK, but one simple model is the trigger hypothesis of Paillard [for a highly readable use of this model I strongly recommend Archer/Ganopolski’s Movable Trigger]. Some of the big AOGCMs have attempted to reproduce the full behavior from the Eemian interglacial to the Holocene interglacial, but I’m not qualified to comment on the success of these endeavors.

    Regarding feedbacks, both positive and negative, I recommend a goodly dose of what is called linear systems theory [or was 50 years ago when I studied it from David Cheng’s admirable textbook]. You will discover that the total system response depends upon both the input signal [forcing in climatological terminology] and the nature of the feedback; neither can be neglected in the general case.

    Comment by David B. Benson — 29 Nov 2011 @ 10:17 PM

  62. Chris Colose @ 39 — Thanks as always, but I am baffled by your “The larger thermal inertia of the ocean is important, but the higher sensitivity over land than in the ocean is also seen in equilibrium simulations when the ocean has had time to ‘catch up,’ so that argument doesn’t hold as equilibrium is approached.” [Emphasis added]. What argument? I don’t recall that I made one.

    Further, perhaps you are under the misimpression that I actually understand all the fine points. Nothing I’ve written should be viewed as either (direct) criticism or defense of the paper undergoing this [extensive] review.

    Comment by David B. Benson — 29 Nov 2011 @ 10:28 PM

  63. When Andreas’ paper was first written about in our local newspaper the Oregonian my first thought was “Whew! Glad that climate change scare is over!”

    I, too, live in Oregon and apparently Brenne only read the headline. The article wasn’t as strong as it could be, but in no sense did it convey the notion that “the climate change scare is over”.

    Tch, tch, Brenne.

    And Brenne should know better …

    Comment by dhogaza — 29 Nov 2011 @ 11:28 PM

  64. Nathan Urban

    I am a coauthor on a manuscript in revision, Olson et al., JGR-Atmospheres (2011), which has a climate sensitivity analysis from modern (historical instrumental) data, using a similar UVic perturbed-physics ensemble approach. It finds a little under 3 K for ECS (best estimate).

    I think I recollect a statement by Gavin a year or so ago (or less) that the most recent (at that time) GISS Model E results are around 2.75.

    Not inconsistent with what you’re saying (though my memory may be totally whacked out here).

    Glad to see that you’re assisting on a variety of approaches, and that you’re not wedded to the Schmittner results …

    Comment by dhogaza — 29 Nov 2011 @ 11:33 PM

  65. It wouldn’t work on a Kindle or Nook, but it would do fine on an iPad, but I don’t think our publishers are set up for that yet.

    Thank you for the response. If there’s no immediate plans for an e-book version I’ll pick up a hard copy in the early new year sometime (after Christmas bills are paid and I have contracts lined up again–any time someone tells me scientists are in it for the money I want to bop them on the head with something heavy).

    Comment by Daniel J. Andrews — 29 Nov 2011 @ 11:34 PM

  66. Nathan Urban, I will be very interested in your new paper and I want to thank you very much for your online explanations and patience. You and Andreas Schmittner have I think made a greater contribution this way than by the original paper! I have read your two interviews at Planet 3.0 and of course your contributions here along with Andreas, and his help and yours at Skeptical Science. First your paper created great interest and then you both clarified many questions for the online science community. Since sensitivity is such a key parameter in climate change, the whole online climate community will be sharper overall thanks to you and Andreas and your coauthors and the resulting discussions, and this sharpness will filter out to all the people we communicate with.

    Thanks very much!

    Comment by Pete Dunkelberg — 29 Nov 2011 @ 11:55 PM

  67. David Benson- sorry for the misunderstanding, I wasn’t trying to address an argument you made specifically, just my way of discussing the mechanism of land:ocean sensitivity. Perhaps it’s because I’m accustomed to most people just talking about the thermal inertia in this context.

    Comment by Chris Colose — 30 Nov 2011 @ 12:39 AM

  68. Chris (#55),

    So we don’t really know the temperature lag or how the temperature is supposed to behave until equilibrium, and the sensitivity will be difficult to confirm by direct observation (say, in our lifetime)?

    You say that if the negative forcings that already affect temperature are large, the sensitivity will be high, and vice versa. That would mean that if those who say that the recent warming has everything to do with CO2 are right — that is, the masking/delay is low — then the sensitivity is also low, so there is less to worry about. And if those who say that the recent warming doesn’t say much about CO2 are right (disregarding whether they’re right for the right reasons), it might be because the masking/delay is high, and they’re in for a surprise. That sounds ironic.

    Comment by Steinar Midtskogen — 30 Nov 2011 @ 12:52 AM

  69. Please direct me to any peer-reviewed paper which shows a credible sea level rise by 2100 beyond the 59 cm maximum which the IPCC gives. By preference, post it on the E+E section of British Democracy Forum. Thanks.

    P.S. Your security is a complete pain; this is the 5th time I have tried to post this!

    Comment by Tim — 30 Nov 2011 @ 5:18 AM

  70. #55, #68–

    Which brings up a question: what do we know about the actual time to equilibrium? I’ve seen some comments here which basically add up to “a few centuries,” and a quick search of the RC site turned up nothing more. On the other hand, John Cook has this piece which, quoting Hansen 2005 as authority, says “several decades.” I suspect the difference to be not so much a disagreement as a product of definitional differences, especially since the RC comments were somewhat informal.

    Is there a more detailed post on this? Some other source for which someone can offer a pointer?

    [Response: Check out the transient vs. equilibrium climate discussion in our National Research Council report, “Climate Stabilization Targets” (free from the NAS web site. Just google the title and you’ll find it.). Some part of the response does equilibrate within decades but a lot of the rest (easily half) takes a thousand years to be realized. The exact proportion is model dependent, and depends on deep ocean circulations. –raypierre]

    Comment by Kevin McKinney — 30 Nov 2011 @ 7:41 AM
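
    A two-timescale caricature of the behavior raypierre describes — part of the response equilibrating within decades, the rest over roughly a millennium — is easy to write down. The 60/40 split and the time constants below are made-up round numbers, not taken from any particular model:

        import numpy as np

        t = np.array([10, 50, 100, 500, 1000, 3000])  # years after a step forcing
        fast, tau_fast = 0.6, 10.0                    # mixed-layer component
        slow, tau_slow = 0.4, 1000.0                  # deep-ocean component

        realized = (fast * (1 - np.exp(-t / tau_fast))
                    + slow * (1 - np.exp(-t / tau_slow)))
        for yr, frac in zip(t, realized):
            print(f"year {yr:4d}: {100 * frac:5.1f}% of equilibrium warming realized")

    Roughly half the warming appears within decades, but the last few tenths take a thousand years — which reconciles the “several decades” and “a few centuries or more” statements.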

  71. Kevin,

    In terms of modelling, I think there is quite a bit of variation in time to equilibrium. The clearest indicator of those differences is the ratio between transient and equilibrium sensitivity. A GCM with a relatively low transient response but relatively high equilibrium sensitivity probably has large thermal inertia, and therefore will take longer to equilibrate, and vice versa.

    I recall reading in one paper on the CCSM model (I can’t recall which paper) that the time taken to achieve near-complete equilibration was 3000 years after a pulse perturbation. However, the approach to equilibrium would be non-linear, so it could be that most of the distance to equilibration was made up in the first couple of centuries.

    I’ve read some snippets which suggest Hansen believes thermal inertia is too large in most GCMs, which would mean he thinks time to reach equilibrium will be fairly short (?)

    I don’t know if there have been any observation-based studies on this.

    Comment by Paul S — 30 Nov 2011 @ 8:27 AM

  72. Steinar-

    Both hindcasts and projections are strongly influenced by climate sensitivity and also by vertical ocean diffusivity. Because of the uncertainty in ocean heat uptake and surface-to-depth transport of excess heat, there will obviously be uncertainty in the transient evolution of surface warming. Moreover, the timeframe over which the planet comes to equilibrium increases with higher climate sensitivity. In terms of global temperature change there are also several timescales of consideration – there is a fast component (roughly proportional to the instantaneous radiative forcing), and a slow component that reflects the surface manifestation of changes in the deep ocean. The long timescales (even ignoring the “Earth system” responses like ice sheets and vegetation) are not easy to get at in the instrumental record or by studying “abrupt forcing” events like volcanic eruptions.

    Similarly, many studies that attempt to examine the co-variability between Earth’s energy budget and temperature (such as in many of the pieces here at RC concerning the Spencer and Lindzen literature) are only as good as the assumptions made about the base state of the atmosphere relative to which changes are measured, the “forcing” that is supposedly driving the changes (which are often just things like ENSO, and are irrelevant to the radiatively induced changes that will be important for the future), and are limited by short and discontinuous data records. This is why people are motivated to look at ancient climates, where there are times of much larger temperature changes and forcing signals that we can hopefully relate to each other to interrogate the sensitivity problem in a more robust fashion.

    Your point about “those who say that the recent warming has everything to do with CO2” being right is moot, because they would be wrong – I don’t know anyone who actually thinks that. Despite the uncertainty in aerosol forcing, there is a lot of confidence that it is negative, and thus has at least helped offset some global warming to date. This figure gives a sense of the wide uncertainty distribution in the total anthropogenic forcing relative to just the GHG forcing (which also includes methane, N2O, etc), which is a prime reason the instrumental record doesn’t inherently give good constraints on sensitivity.

    Comment by Chris Colose — 30 Nov 2011 @ 9:36 AM

  73. The definitions of climate sensitivity always talk about a “doubling of the CO2 level.” But they don’t say doubling from what starting point.

    Without this information, it is hard to make much sense out of the discussion.

    [Response: This is because it doesn’t much matter. The radiative forcing from 180 to 360 ppm, or from 280 to 560 ppm, or from 380 to 760 ppm are all approximately the same (~4 W/m2). This is because forcing from CO2 is logarithmic in the concentration (F=5.35*log_e(CO2/CO2_orig) ), and the ‘doubling’ metric is preferred accordingly. -gavin]

    Comment by William Jockusch — 30 Nov 2011 @ 10:37 AM
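
    gavin’s point is easy to verify numerically — with F = 5.35·ln(C/C0), every doubling yields the same forcing (about 3.7 W/m2) regardless of the starting concentration:

        import math

        for c0, c1 in ((180, 360), (280, 560), (380, 760)):
            F = 5.35 * math.log(c1 / c0)     # natural log
            print(f"{c0} -> {c1} ppm: F = {F:.2f} W/m2")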

  74. It’s very revealing how some journalists reported the findings of this study on their blogs. Andy Revkin’s Dot Earth and Eric Berger of the Houston Chronicle both delivered inaccurate posts on their blogs, and they both cited each other’s writings as supporting their own beliefs.

    This shows how important this blog is where climate scientists can openly discuss and debate their work. Thanks for taking the time to do this.

    Comment by Andy — 30 Nov 2011 @ 10:54 AM

  75. I hope this is not off topic, but where could I find some good info on the Lindzen & Choi 2011 updated paper? All the links Google gives are just to “the usual suspects” like Watts, Curry, Spencer.
    This paper is also about sensitivity, so maybe it’s pertinent.

    Did they correct the problems in the 2009 paper for example?

    Comment by KTB — 30 Nov 2011 @ 10:59 AM

  76. KTB @ 75, this will get you started:
    http://www.skepticalscience.com/Richard_Lindzen_art.htm

    Comment by Pete Dunkelberg — 30 Nov 2011 @ 11:19 AM

  77. Chris Colose (#72) and especially Paul S (#71)–

    Thanks very much. It sounds as if there are both practical and definitional issues standing in the way of a clear and simple statement at present–so no-one is going to flatly say “The time to equilibrium is ‘x’.” Too bad, though not really surprising–most of this stuff is not so simple, after all.

    But at least I’ve garnered a little more detail about the problem–a bit like ‘being confused at a higher level,’ I suppose.

    Comment by Kevin McKinney — 30 Nov 2011 @ 11:34 AM

  78. @William Jockusch: The starting point is typically the preindustrial value of about 280 ppm.
    @Gavin: I think the difference is not so much in the forcing itself but in the albedo feedback. Especially the snow and sea ice feedbacks may be different, depending on the extent.

    Comment by Uli — 30 Nov 2011 @ 11:39 AM

  79. KTB, try scholar for that paper:
    http://scholar.google.com/scholar?q=lindzen++choi+2011
    Cited by 10
    finds http://www.mdpi.com/2072-4292/3/9/2051
    Remote Sens. 2011, 3(9), 2051-2056; doi:10.3390/rs3092051
    Commentary–Issues in Establishing Climate Sensitivity in Recent Studies
    Trenberth, Fasullo and Abraham
    “… many of the problems in LC09 [6] have been perpetuated, and Dessler [10] has pointed out similar issues with two more recent such attempts [7,8]. Here we briefly summarize more generally some of the pitfalls and issues involved in developing observational constraints on climate feedbacks. […]”

    Comment by Hank Roberts — 30 Nov 2011 @ 12:20 PM

  80. Gavin,

    Thanks for this. I had earlier asked you to give us your take on this and similar papers (see below). One thing in SEA I don’t think you discussed was their graph of “climate sensitivity” probability separated into land and ocean. Curiously the land sensitivity centered right about the IPCC value of 3.0°C! They comment that this is probably due to poor land-sea heat fluxes—another possible problem with this work.

    [Response: Note please that group posts like this one are written collectively by most or all of the RC participants, not by any one of us. As for your actual question, yes, that should have been emphasized more in the post itself, since it’s a key point. If you look through the comments you’ll see some discussion of the point, and there’s a very good discussion of the importance of the mismatch in the interview with Urban on Planet 3. But to reiterate: the difference between climate sensitivity estimates based on land vs. ocean data indicates that something is seriously wrong, either with the model, or the data, or some of both. In my view, that mismatch alone is reason not to put too much credence in Schmittner’s claim to have trimmed off the upper end of the IPCC climate sensitivity tail. –raypierre]

    Comment by Chick Keller — 30 Nov 2011 @ 12:58 PM

  81. The LGM is the wrong study period. We are facing the extinction of glaciers and multi-year sea ice in the Arctic. Although there is great data for the LGM, we have live observations in the Arctic, which offer far more facts. I would rather sensitivity were tested against today, when the disappearing ice is compelling – especially when some, like Pielke, declare that global warming ended in 2002, when in fact it never stopped, despite short-time-span trends in global temperature. I have proof of even greater melting since 2006: http://eh2r.blogspot.com/

    [Response: I wouldn’t say it’s the wrong study period. It has a lot going for it, in that it is a big event, we know the CO2 well and it dropped significantly, and the major forcings are well known (dust being the prime exception). The trick is to tease out the correct sort of information from it, and that requires a suitable model. There’s no way to do it from data alone, because of the asymmetry problem. Except as a “proof of concept,” I myself do not think that the UVic model is a remotely suitable model for this sort of study. –raypierre]

    Comment by wayne davidson — 30 Nov 2011 @ 1:43 PM

  82. Chris (#72),

    You say that you don’t know anyone who thinks “that the recent warming has everything to do with CO2”. That is, it’s too early to see any clear effects of CO2 emissions in the global temperature, as also pointed out by others. Still, a great portion of the “summary for policymakers” deals with the recent temperature rise, and it concludes that it’s “likely” that there is a human contribution to the observed trend (by which I assume CO2 emissions are especially understood, even more so considering the negative forcings mentioned). While “everything to do with CO2” was a figure of speech meant to push the point, the IPCC certainly seems to lean heavily in that direction. Are they jumping to conclusions here?

    Which takes me to the next thing which raises a question in light of the delay that we’ve been discussing. The summary gives various estimated temperature increases for the 21st century. A doubling of CO2 from 300 ppm in 1880 to 600 ppm in 2100 has a best estimate of 1.8 degrees (scenario B1), or about 2.3 degrees warming since 1880, which happens to be precisely the sensitivity figure given by Schmittner et al. The question that arises is how the IPCC has conjured this figure, whose range (1.8 – 3.6) is even narrower than the sensitivity estimates, when research seems to suggest that we can only begin to guess at the temperatures we’ll have once most of the distance to equilibrium is made up centuries later. The 2100 temperature is a key measure for politicians and activists, so this seems important.

    [Response: You need to study the IPCC report and its references more carefully. From what you write, it’s not even clear to me that you know that there are general circulation models involved (not “conjuring”). And further, in stating the year 2100 values, you are mixing up transient with equilibrium climate sensitivity. I don’t really see where you’re going with this. Please clarify. –raypierre]

    Comment by Steinar Midtskogen — 30 Nov 2011 @ 4:17 PM

  83. #61
    David Benson, it is generally agreed that the system is highly sensitive to changes in obliquity. This is interesting, since the amount of incoming radiation from the sun is unchanged. What changes is where the radiation goes. Instead of being more evenly distributed from the equator to the poles, the sunlight becomes more concentrated in the tropics. So ‘the cause of the changes’ likely involves both the tropics and the poles.
    So again it comes back to the proxy data: is there lots of cooling, or just lots of ice, or both?
    Anyway, on the topic of feedback, you can’t have it both ways. If you’re talking about water vapour feedback, then it is dependent on temperature change. As I have demonstrated, positive feedback works fine up to a point… until you want it to reverse. After the Eemian peak, there was quite rapid cooling. There is no change in INSOLATION with regard to tilt, all the GHGs don’t add up to a huge forcing, and it’s very cloudy in the polar regions, so you need a system which is very sensitive to the small amount of forcing for this to work. Unfortunately for this beautiful theory, there is an ugly fact. The feedback loop is temperature dependent. Therefore the resulting temperature changes quickly overpower whatever started them. Gavin’s model which says non-condensable GHGs ‘hold up’ the water vapour lacks proof. It is the TEMPERATURE which holds up the water vapour. You are arguing for a positive feedback loop but only when it suits.

    But again, you can ‘try’ and fudge your theory by saying ‘oh, it wasn’t that cold, so it doesn’t need to be that sensitive’ or ‘it’s just the polar regions, don’t worry about the tropics’, etc..

    Or you could accept the fact that interglacial cycles are, well..cycles, and like most cycles…could be negative.

    [Response: The concept of water vapor feedback (which goes back to Arrhenius, and was fully consolidated in the 1960’s by Manabe and co-workers) has always stated that the water vapor was determined by temperature. Anything that “holds up” the temperature, whether it be CO2 or changes in solar brightness, allows the atmosphere to hold more water. Now, for ice ages where the CO2 plays a crucial role is as a globalizer of the temperature response. Low obliquity makes it easier to accumulate ice at the poles, and the precession effects alternate between one pole and another as to where ice most easily persists. However, without the CO2 feedback the climate change would be much more confined to the Northern Hemisphere extratropics, where the great ice sheets wax and wane. This is why the LGM response in the tropics and Southern midlatitudes is such a crucial test of CO2 response. One gets relatively little Milankovic signal there without an effect from CO2. –raypierre]

    Comment by Isotopious — 30 Nov 2011 @ 4:29 PM

  84. Rate of change question — would methane-using microorganisms be likely to have handled most methane as it was released from warming at geological rates of change, but not increase fast enough to metabolize the amounts of methane described at the current rate of change?

    http://www.nature.com/nature/journal/v480/n7375/full/480032a.html?WT.ec_id=NATURE-20111201
    (paywalled)
    Climate change: High risk of permafrost thaw
    Edward A. G. Schuur & Benjamin Abbott
    Nature 480, 32–33 (01 December 2011) doi:10.1038/480032a
    Published online 30 November 2011
    Northern soils will release huge amounts of carbon in a warmer world, say Edward A. G. Schuur, Benjamin Abbott and the Permafrost Carbon Network.

    News story: http://www.msnbc.msn.com/id/45494959/ns/us_news-environment/
    —excerpt follows—
    heat-trapping gases under the frozen Arctic ground may be a bigger factor in global warming than the cutting down of forests, and a scenario that climate scientists hadn’t quite accounted for, according to a group of permafrost experts. The gases won’t contribute as much as pollution from power plants, cars, trucks and planes, though.
    [Image caption: Researcher Katey Walter Anthony ignites trapped methane from under the ice in a pond on the University of Alaska, Fairbanks campus in 2009. Credit: Todd Paris / University of Alaska, Fairbanks]

    The permafrost scientists predict that over the next three decades a total of about 45 billion metric tons of carbon from methane and carbon dioxide will seep into the atmosphere when permafrost thaws during summers. That’s about the same amount of heat-trapping gas the world spews during five years of burning coal, gas and other fossil fuels.

    And the picture is even more alarming for the end of the century. The scientists calculate that about 300 billion metric tons of carbon will belch from the thawing Earth from now until 2100.

    Adding in that gas means that warming would happen “20 to 30 percent faster than from fossil fuel emissions alone,” said Edward Schuur of the University of Florida. “You are significantly speeding things up by releasing this carbon.”
    —-end excerpt—-

    Comment by Hank Roberts — 30 Nov 2011 @ 5:30 PM
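
    The excerpt’s figures can be sanity-checked with trivial arithmetic. The 45 and 300 Gt C numbers are quoted from the story; the implied fossil-fuel emission rate is derived here from its “five years of burning” comparison, not quoted:

        permafrost_30yr = 45.0      # Gt C over ~3 decades (quoted)
        fossil_years_equiv = 5.0    # said to equal 5 years of fossil emissions
        print(permafrost_30yr / fossil_years_equiv, "Gt C/yr implied fossil rate")

        permafrost_2100 = 300.0     # Gt C by 2100 (quoted)
        print(permafrost_2100 / (2100 - 2011), "Gt C/yr average permafrost release")

    The implied ~9 Gt C/yr is indeed close to current fossil-fuel emissions, so the story’s numbers are at least internally consistent.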

  85. Steinar,

    Like raypierre, I can’t really interpret what you are saying.

    You need to distinguish more carefully between an end date (like “by 2100”) and an end CO2 target (like a “doubling of CO2”). The impact on climate when thinking “by 2100” depends not only on the sensitivity, but also on socio-economic pathways and carbon cycle uncertainties, which determine what the CO2 concentration will be. Furthermore, you also need to distinguish the equilibrium response from the response at any point in time when following a particular scenario.

    Concerning your first paragraph, please have a harder look at the graph I linked to in my last comment to you, which is from IPCC AR4. This shows the best understanding (at the time of the AR4) of the relative contribution of different “forcings” to global temperature change since year 1750. Clearly, there are many positive forcings (warming influences) and negative forcings (cooling influences)– the total includes methane, N2O, black carbon, small changes in sunlight, aerosols, etc. You can also see the relative uncertainty between forcings (e.g., it is much larger for aerosols than for GHGs).

    The reason people “lean heavily” on CO2 has to do somewhat with the fact that it is the largest positive forcing over this time interval, but also because we’d expect it to grow even more strongly relative to other contributions in the future. This is due to the fact that it has the strongest potential to warm the globe in the long-run based on its long lifetime in the atmosphere (ranging from decades to centuries, and a tail end that extends to millennia, and with many climate impacts occurring over these slow timescales). In contrast, anthropogenic aerosols only affect climate insofar as they are emitted persistently. Moreover, any internal variability in the system will be superimposed on this even stronger growing positive trend, shifting the base climate into a state not seen for at least a few million years.

    Comment by Chris Colose — 30 Nov 2011 @ 6:02 PM

  86. “… without the CO2 feedback the climate change would be much more confined to the Northern Hemisphere extratropics, where the great ice sheets wax and wane. This is why the LGM response in the tropics and Southern midlatitudes is such a crucial test of CO2 response. One gets relatively little Milankovic signal there without an effect from CO2.. –raypierre”

    That’s very interesting – I had assumed that the circulation of the atmosphere and oceans would be the ‘globaliser of temperature response’. If it’s actually CO2, does that mean the absorption of CO2 (when the climate is cooling) and the emission of CO2 (when the climate is warming) is mainly going on at high latitudes and therefore transferring the temperature response to lower latitudes, rather than being a global process in response to global temperature change?

    [Response: It would be a very reasonable assumption to think the atmosphere is a globalizer, but Manabe and Broccoli showed it ain’t so, back in the 80’s. It’s surprisingly hard to get the cooling past the equator, and it doesn’t even penetrate all that well into the tropics. The ocean is somewhat better at connecting the hemispheres, but not all that great either. The importance of the details of atmospheric heat transport into the tropics, which David Battisti has shown is dependent on some rather intricate fluid dynamical behavior, is another reason that the UVic model (which represents all that by diffusion) is completely unsuitable for the uses it was put to in this study (except, as I said, as a proof of concept). –raypierre]

    Comment by Icarus62 — 30 Nov 2011 @ 6:13 PM

  87. Isotopious @83 — A positive feedback amplifies the signal (forcing). This is true whether the signal is increasing or decreasing. So at the end of the Eemian interglacial the signal (orbital forcing) decreased and the positive feedbacks of water vapor + CO2 enhanced the decline.

    Comment by David B. Benson — 30 Nov 2011 @ 7:23 PM

  88. Raypierre, Thanks.
    On temperature dependence: water vapour feedback, once in place, will be self-sustaining (it will require a greater forcing to reverse than the initial forcing that started it)?
    Lower obliquity should result in more water vapour in the tropics if the water vapour feedback holds true. What do the proxies show?
    Are you saying the tropics are insensitive to INSOLATION, but sensitive to CO2?
    Or maybe ‘all the action takes place in the extra-tropics’…..?

    [Response: Please go read some books. I can’t give you a complete climate physics education in the comments. Dave Archer’s “Understanding the Forecast” book and lectures make a good place to start. No, water vapor feedback, once in place, is emphatically not self-sustaining. Take out the CO2, the atmosphere cools down, water goes, cools down more, and BINGO you are in a Snowball. My colleague Aiko Voigt was the first to show this in print in a clear fashion, though this is something a lot of us knew from classroom arguments beforehand. I can’t make much sense of the rest of your comments but no, I am not saying that the tropics is insensitive to insolation. You’d better learn something about Milankovic forcing, and what the ocean does to average the precessional cycle out, and how it depends on latitude. –raypierre]

    Comment by Isotopious — 30 Nov 2011 @ 8:11 PM

  89. Icarus- In this context it doesn’t really matter where the CO2 is coming from (since it becomes well-mixed in the air over less than a few years), though the most plausible hypotheses usually require the Southern Ocean to be involved, and the associated feedbacks of ocean biogeochemistry and its interaction with the ocean’s physical circulation.

    CO2 provides only a minor effect in the obliquity and precession timescale band, but over 30% of the forcing in the 100 kyr band, so it is a key forcing agent that allows us to explain the magnitude of glacial-interglacial temperature variations. If only Milankovitch mattered the ice ages would be more localized than global.

    Comment by Chris Colose — 30 Nov 2011 @ 8:12 PM

  90. #87
    David Benson. That would only work if the forcing were greater (and therefore dominant) than the resulting temperature change (which it most certainly isn’t).

    [Response: This comment makes no sense at all. Forcing is in watts per square meter. Temperature is, well, temperature, like in Kelvins. Different units. Makes no sense to say one is bigger than the other. Think first, write second. –raypierre]

    Comment by Isotopious — 30 Nov 2011 @ 8:17 PM

  91. Re: Raypierre’s response to Wayne Davidson. I thought there was still a long way to go in cataloging the ocean temperatures of the LGM: problems with ferreting out overlapping signals from different temperatures in unmixed layers, and with the geographic distribution of sediment core data. Also, putting it all together to describe heat transfer by ocean currents. Otherwise, why the mismatch seen in this paper’s land and ocean LGM temperature delta and carbon dioxide sensitivity? I thought the big deal with this paper was not necessarily its new estimate of sensitivity, but rather the methods used to tease out LGM ocean temperature estimates?

    I’m lost. Can you point to a new book on paleoceanography/climatology that has up-to-date information on what the ongoing sediment drilling program is telling us? Or is it all still in the journals? I’m coming at this as a biologist who has necessarily had to learn outside of his comfort zone.

    Thank-you.

    [Response: I don’t understand this question. Where did I say we know the LGM temperatures well? We know them a whole lot better than in CLIMAP days, but there are still important mismatches between the various proxies. But there is enough data to say that the tropics and SH did cool much more than they would without the effect of CO2. Something even the UVic model can more or less do is to demonstrate that you don’t get anything remotely resembling the LGM in the tropics and SH unless you include the CO2 effect — though I suspect that the tropical results are very much contaminated by inadequacies in the diffusive representation of heat and moisture transport. –raypierre]

    Comment by Andy — 30 Nov 2011 @ 9:05 PM

  92. Isotopious @90 — I assure you that what I wrote follows directly from what is called Linear Systems Theory. I recommend studying before commenting.

    Comment by David B. Benson — 30 Nov 2011 @ 10:50 PM

  93. Thanks for the discussion of time to equilibrium. It makes sensitivity all the more difficult to define, because you have to say “when”, and it introduces more opportunities for denialists.

    Comment by Edward Greisch — 1 Dec 2011 @ 12:19 AM

  94. Raypierre,

    you said yourself “Anything that “holds up” the temperature, whether it be CO2 or changes in solar brightness, allows the atmosphere to hold more water….”
    Does that “Anything” include water vapour?

    If not, why not.

    [Response: Yes, that includes water vapor too. But that does not in any way conflict with my statement regarding the key importance of CO2. The warming due to water vapor helps the air hold water, but in the Earth’s orbit, it is not actually sufficient to keep the air warm enough to keep the water it already has — so you go into the death spiral, with a bit of cooling, less water, then more cooling, and so on to Snowball. The difference with CO2 (in Earth’s orbit) is that it doesn’t condense as the air gets a bit cooler. Get it? Take Benson’s advice, and learn a bit about feedbacks in general systems theory. Your ignorance transcends merely climate science, and extends really to the basics. –raypierre]

    Comment by Isotopious — 1 Dec 2011 @ 12:47 AM

  95. Taking the precession period as 26,000 years (26 ky), note that about 2 kya the thermal equator was at the southern extreme. This means that it was at the northern extreme around 15 kya, the (end) time of the LGM. Compare that to the extent of the tropical rain forest in the Americas at that time (map previously posted) and note that the Amazon basin was largely savanna, warm but dry. That this seems controlled by precession is indicated by the fact that around 41 kya the Amazon basin was also savanna; see also the presentation on this matter 2–3 years ago at the AGU fall meeting.

    The relevance here is that precession appears to play some role in the climate state during LGM while the current value of precession is near the opposite extreme. I suppose this has some role in the difficulty in using LGM to determine Charney climate sensitivity with accuracy.

    [Response: Again, this is why you need a model to infer climate sensitivity from the LGM. To do that, you need to take into account all the forcings that are different between the LGM and the present, not just the CO2. Milankovic forcing is one of those things. Of course, one needs a model that responds accurately to those forcings, otherwise it will lead to incorrect attributions of the part of the climate change due to CO2. In some sense, though, almost any known forcing is useful in inferring climate sensitivity, since the same feedbacks that determine the response to Milankovic also determine response to CO2, though the relative weightings of the different feedbacks are likely to be different. That is why the post emphasizes that it is unclear that the model that does best at reproducing the LGM also does best at forecasting the future. There was more ice around in the LGM and that changes the weighting of ice-albedo feedback, but also the operation of the cloud feedback since clouds over ice have different effects than clouds over water. Just one of many caveats. –raypierre]

    Comment by David B. Benson — 1 Dec 2011 @ 1:01 AM

  96. In the response by raypierre – I agree about the problems with the simple energy balance model and its lack of spatial representation, but it’s tough to fault the authors for the lack of cloud detail, since the science is not up to the task of solving that problem (and doing so would be outside the scope of the paper; very few paleoclimate papers that tackle the sensitivity issue do much with clouds).

    [Response: I can’t agree with this assessment. General circulation models do simulate clouds, and the clouds they simulate are a big part of the nature of their response to both doubled CO2 and to LGM forcing. However, because of the various unknowns in the cloud process, the models give quite different climate sensitivities, accounting for much of the IPCC spread. So, the key thing in evaluating climate sensitivity is to use the LGM as a test of how well the models are doing clouds, using the LGM, and then see what happens in the same model when you project to the future. You cannot do that in a model which doesn’t have the dynamics needed to simulate changes in clouds. –raypierre]

    Comment by Photography — 1 Dec 2011 @ 2:40 AM

  97. The context of what I wrote is the discussion above, which talks about how observed temperatures can’t yet say much about the sensitivity, perhaps not for many decades. I’m trying to reconcile that with the summary for policymakers: the likelihood (“likely”) that emissions are behind the observed temperature rise, and the estimated 21st century temperature rise. Yes, the discussion above is about equilibrium climate sensitivity, and by picking 2100 (or 2011) we’ll be measuring the transient climate sensitivity. I’m aware of the difference, that equilibrium requires a very long time, but I do assume that these sensitivities are connected, in particular that the transient sensitivity can be used as a lower limit for the equilibrium sensitivity.

    My original question was about when we can use observations to say something about the sensitivity (even only its lower limit). The IPCC seems to say that we can already – despite the complexity and uncertainties of the different forcings. In that case observations in the next few decades could narrow the sensitivity range. But the figure that was pointed out seems to suggest that the uncertainties of the negative forcings may postpone warming from emissions significantly. Also a bit puzzling is the IPCC statement “values substantially higher than 4.5 °C cannot be excluded, but agreement of models with observations is not as good for those values”. Since equilibrium could take a very long time, I would think that the observations (the instrument record) would say very little about the upper limit.

    Comment by Steinar Midtskogen — 1 Dec 2011 @ 3:37 AM

  98. Thanks for the criticisms, raypierre, I will try again:

    Benson: “A positive feedback amplifies the signal (forcing). This is true whether the signal is increasing or decreasing. So at the end of the Eemian interglacial the signal (orbital forcing) decreased and the positive feedbacks of water vapor + CO2 enchanced the decline.”

    If a positive feedback amplifies a signal, and the resulting change attributable to water vapour feedback is greater than the initial signal, then any further perturbations will be competing with the change attributable to water vapour. Both cause temperature change, so both will play a role in any future water vapour feedback process that is dependent on temperature.
    That one component is non-condensing, and so may play a stabilising role, definitely strengthens your argument and is definitely good science. I’m just trying to challenge your good science, that’s all.

    Comment by Isotopious — 1 Dec 2011 @ 4:10 AM

  99. I don’t see any answer to RichardC (#40). So here’s an attempt:
    When temperatures change because of an orbital forcing, you’ve got a strong CO2 feedback because the CO2 in the atmosphere was in equilibrium with the CO2 in the oceans before temperatures changed.
    In the case of warming caused by a disproportionate increase in atmospheric CO2 (compared with oceanic CO2), an increase in temperatures only slows down the rate at which CO2 is absorbed by the oceans. This effect is probably significant but it’s slow-acting and the CO2 self-feedback would only be fully realized when very little of the original CO2 pulse was left in the atmosphere.
    I imagine the CO2 feedback would be more important as a feedback to any albedo changes brought by warming. If a spike in temperatures due to CO2 causes a non-reversible change in ice cover, you have a situation more analogous to a deglaciation because you now have a forcing that has a strong effect on the equilibrium amount of CO2 in the atmosphere.
    There’s so much CO2 in the oceans that, unless some kind of tipping point is triggered, atmospheric emissions shouldn’t affect that (very) long term equilibrium anywhere as much as they affect the atmosphere in the short run.
    But IANAC so I could be way off…

    Comment by Anonymous Coward — 1 Dec 2011 @ 5:17 AM

  100. Isotopious, I think you are thinking of this too much in the abstract. These are all physical processes. Think how the system responds physically, and it might help.

    Example: CO2 increases. This takes a big bite out of the outgoing IR spectrum and throws the system out of equilibrium. Temperature rises. This shifts the IR spectrum just slightly and increases the IR emitted. It also increases the water vapor in the air slightly–which takes another bite out of the IR spectrum. You have to look at how the added ghgs–both CO2 and H2O respond to the altered IR spectrum and how all of the feedbacks and temperature evolve as the system moves again toward equilibrium. Does that help at all?

    Comment by Ray Ladbury — 1 Dec 2011 @ 9:43 AM
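
    Ray’s verbal description can be turned into a toy numerical experiment: apply a forcing, then let temperature relax until outgoing IR again balances absorbed sunlight, with a crude emissivity term standing in for water vapor. All parameters below are schematic assumptions, chosen only to make the mechanics visible:

        SIGMA = 5.67e-8                  # Stefan-Boltzmann constant, W/m2/K4
        eps0, T = 0.61, 288.0            # effective emissivity; balance at 288 K
        absorbed = eps0 * SIGMA * T**4   # absorbed solar flux (~238 W/m2)

        forcing = 3.7    # W/m2, e.g. a CO2 doubling
        wv = 0.002       # emissivity drop per K: crude water-vapor feedback
        for _ in range(2000):                     # relax toward new equilibrium
            olr = (eps0 - wv * (T - 288.0)) * SIGMA * T**4
            imbalance = absorbed + forcing - olr  # W/m2 still unbalanced
            T += 0.01 * imbalance                 # nudge T toward balance
        print(f"warming ~ {T - 288.0:.2f} K")     # ~1.5 K; ~1.1 K with wv = 0

    Setting wv = 0 recovers the no-feedback (Planck-only) response; the water-vapor term amplifies the warming, but the system still converges to a new equilibrium — no runaway.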

  101. and stated with rather more confidence than is warranted given the limitations of the study.

    That’s a good observation, especially considering that there is no evidence relative to the question of whether “the climate sensitivity” is even constant.

    [Response: By “constant” I believe you are referring to the issue of asymmetry between climate sensitivity that best fits cooling vs. warming climates. If so, you are right that this is one of the key issues, as we noted in the post. The MARGO LGM reconstruction team made a similar remark in their Nature Geoscience paper on the implications of their reconstruction. They state a 2xCO2 climate sensitivity range of 1. to 3.6 C, and that’s not even taking into account the possibility that MARGO underestimates LGM cooling by under-weighting modern proxies like Mg/Ca. What odds do you want to put on Mg/Ca being more likely to be right than foram assemblages?–raypierre]

    Comment by Septic Matthew — 1 Dec 2011 @ 11:24 AM

  102. Anonymous,
    CO2 absorption and degassing occur separately. Cold polar waters constantly absorb CO2, sink as they become denser, and are transported to the equatorial waters via the thermohaline circulation, outgassing in the warmer waters of the Indian and Pacific Oceans. The colder polar waters have ~3x higher CO2 solubility than the warmer equatorial waters. There is a debate as to how long the THC takes to complete this absorption/degassing process, with the Vostok data indicating that it is on the order of a millennium.
    The CO2 solubility change due to the increase in ocean temperatures is small compared to the change in the atmospheric concentration. Therefore, the concentration gradient may be increasing the rate of absorption much faster than the warmer equatorial waters are outgassing.
    This is just an over-simplification of a very complex interaction.

    Comment by Dan H. — 1 Dec 2011 @ 11:31 AM

  103. Isotopious,

    I really do not know what you are trying to “challenge.” If you want a tutorial on how climate scientists think about feedbacks, a good article to read is Roe, 2009. It takes nothing more than a little multi-variable calculus and algebra to fully digest, and you should be able to fully understand why positive feedback doesn’t imply some sort of runaway warming or cooling. You can think of the widely cited 1/(1-f) feedback dependence as a sort of converging series, though I’m not so sure that is the best place to start for intuition.

    Climate sensitivity can be thought of as the inverse of the slope of a line relating the net top-of-atmosphere energy balance to the surface temperature. For low climate sensitivity, the outgoing radiation is a stronger function of temperature; for a higher sensitivity, the response of the surface temperature is “more sluggish” and must rise more to accommodate the same change in IR emission that is necessary to come to balance. Water vapor makes the outgoing radiation less sensitive (more linear) than σT^4, but the system can still equilibrate at a higher temperature because the Planck restoring effect wins out in the end. The water vapor just makes the Planck response less effective, so you need a larger temperature change for the same perturbation than in a no-feedback case.
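
    To sketch that inverse-slope picture numerically (illustrative numbers, with the water vapor effect caricatured as a simple flattening of the slope):

        # Sensitivity as the inverse slope of OLR(T). With no feedbacks,
        # OLR = sigma*T^4; water vapor is caricatured here as flattening
        # that slope by 40% (an arbitrary illustrative number).
        SIGMA = 5.67e-8   # W/m^2/K^4, Stefan-Boltzmann constant
        T0    = 255.0     # K, effective radiating temperature
        F2X   = 3.7       # W/m^2, doubled-CO2 forcing

        slope_planck = 4 * SIGMA * T0**3             # dOLR/dT, ~3.76 W/m^2/K
        print(round(F2X / slope_planck, 2))          # ~0.98 K per doubling, bare
        print(round(F2X / (0.6 * slope_planck), 2))  # ~1.64 K: flatter slope, higher sensitivity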

    The primary limit on the pressure of a vapor in equilibrium with a liquid (or solid) at a given T is governed by the Clausius-Clapeyron equation; the vapor pressure is a rapidly increasing function of temperature, and the T dependence is determined by the magnitude of the latent heat of vaporization. This imposes a strong constraint on the ability of water vapor to grow in the atmosphere, although dynamical processes that keep the atmosphere unsaturated are also important. It is of course possible for the increase in the greenhouse effect to overwhelm the tendency for increased vapor pressure to result in saturation, and this is what happens in the runaway greenhouse; in this limit, the vapor pressure relates the optical depth to the radiating temperature of the moist troposphere, and the surface OLR no longer contributes to the planetary OLR. The water vapor feedback overwhelms the Planck response and, provided you have enough solar insolation, equilibrium is never established until the water vapor feedback is terminated (i.e., when the oceans are gone).
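
    For concreteness, the textbook Clausius-Clapeyron form with the usual approximate constants recovers the familiar growth of about 6-7% per kelvin near surface temperatures:

        from math import exp

        # Clausius-Clapeyron: saturation vapor pressure over liquid water,
        # e_s(T) = e_s(T0) * exp(L/R_v * (1/T0 - 1/T)), approximate constants.
        L   = 2.5e6              # J/kg, latent heat of vaporization
        R_v = 461.5              # J/kg/K, gas constant for water vapor
        T0, e0 = 273.15, 611.0   # K, Pa: triple-point reference

        def e_sat(T):
            return e0 * exp(L / R_v * (1.0 / T0 - 1.0 / T))

        # Fractional increase per degree near 288 K:
        print(round(100 * (e_sat(289.0) / e_sat(288.0) - 1.0), 1))   # ~6.7 %/K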

    Comment by Chris Colose — 1 Dec 2011 @ 1:35 PM

  104. @102 & raypierre,

    Constant climate sensitivity or not: not only in cooling vs. warming periods, but also in more or less glaciated periods, I suppose?

    Deglaciation seems to be a much faster process than glaciation, and depending on the amount of ice on the planet these processes may be slower or faster, if I understand Hansen correctly on this. How much scientific agreement or discussion is there on this point and what could be the full range of climate sensitivities for different periods, approximately?

    Comment by Lennart van der Linde — 1 Dec 2011 @ 3:30 PM

  105. Thanks Chris,

    So the reference system climate sensitivity parameter is based on a negative feedback due to Stefan’s law. As Roe says in the paper ‘However, it should be borne in mind that, in so choosing, the feedback factor becomes dependent on the reference-system sensitivity parameter’. So the feedback loop takes some ‘fraction’ of output and feeds it back into the input.

    I suppose the point I was trying to make is that even though water vapor is condensable, once in place it is essentially a forcing. It may only last for 9 days, but that, in my opinion, would be sufficient time for it to be stable, since its concentration is determined by the heat in the system (you mentioned the remarkable latent heat of water, so the effect of the ocean heat content would provide plenty of stability).

    I guess an event such as Pinatubo may provide an insight into what I have suggested. The water vapour would have dropped a fair bit, then bounced back soon after. Is that bounce-back due to non-condensable GHGs or thermal inertia?

    I would suggest it is due to the latter (for obvious reasons). So this poses a potential problem for the CO2–water vapor feedback theory. It is entirely possible that water vapor feedback has very little to do with non-condensable GHGs.

    That’s the challenge.

    Comment by Isotopious — 1 Dec 2011 @ 5:51 PM

  106. Steinar Midtskogen (#97),

    “when we can use observations to say something about the sensitivity (even only its lower limit). IPCC seem to say that we can already”

    That’s also my understanding.

    “Also a bit puzzling is the IPCC statement “values substantially higher than 4.5 °C cannot be excluded, but agreement of models with observations is not as good for those values”. Since equilibrium could take a very long time, I would think that the observations (the instrument record) would say very little about the upper limit.”

    Perhaps you should refer to section 9.6.2 of the IPCC’s AR4 (WG1).

    There is more to observations than the global warming in the instrumental record.
    What this is saying, I think, is that no one had managed at that point to build a physics-based model which produces a very high sensitivity while agreeing well with observations such as the effect of the Pinatubo eruption.
    But IANAC so…

    Comment by Anonymous Coward — 1 Dec 2011 @ 5:55 PM

  107. Isotopious,

    Huh? Re-read what people have said. You aren’t making much sense and trying to decipher what you are saying is troublesome, and I suspect at this point you’re just playing games.

    Once again, the water vapor feedback responds to temperature, not *specifically* to CO2. In the case of a volcanic eruption, it’s the sulfate aerosols injected into the stratosphere that provide the large cooling, and after they are removed (over a timescale of a few years), conditions return to near their original state (as it happens, Brian Soden has a paper validating the WV effect after the Pinatubo response here). This has nothing to do with thermal inertia, except that the response would have been even stronger on a planet with no buffering capability, as occurs with Martian dust storms for example. The instantaneous radiative forcing was comparable to that of a doubling of CO2, except in the cooling direction, but was very short-lived.
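
    A one-box sketch shows why the dip and the recovery are paced by the aerosol decay, with the ocean merely buffering both (a toy model with rough Pinatubo-like numbers, not a fit to the actual event):

        from math import exp

        # One-box energy balance, C*dT/dt = F(t) - lam*T, driven by a
        # Pinatubo-like negative aerosol pulse decaying over ~1 year.
        # All parameter values are rough, illustrative choices.
        C    = 3.0e8    # J/m^2/K, ~75 m ocean mixed layer
        lam  = 1.2      # W/m^2/K, net feedback parameter
        F0   = -3.0     # W/m^2, peak aerosol forcing
        tau  = 1.0      # yr, stratospheric aerosol e-folding time
        YEAR = 3.156e7  # s
        dt   = YEAR / 100.0

        T = 0.0
        for i in range(600):                     # integrate six years
            F = F0 * exp(-(i / 100.0) / tau)     # decaying aerosol forcing
            T += dt * (F - lam * T) / C          # forward Euler step
            if i % 100 == 99:
                print(i // 100 + 1, "yr:", round(T, 2))
        # Cooling bottoms out near -0.23 K around years 2-3, far shy of the
        # equilibrium F0/lam = -2.5 K, then relaxes as the aerosols clear.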

    If CO2 were only a minor part of the terrestrial greenhouse effect, the influence on temperature might not be severe enough to trigger the snowball scenario. This wouldn’t work with N2O or methane, for example. But in the case of CO2, which is a rather sizable portion of the greenhouse effect, the temperatures would be cold enough to trigger the scenario raypierre discussed (in the work of Aiko Voigt; he also discusses this in his 2007 general circulation paper, and more recently it was discussed in the Lacis et al 2010 paper in Science).

    Comment by Chris Colose — 1 Dec 2011 @ 6:15 PM

    Ocean currents that may carry large amounts of heat are not included in the GCM, and thus we do not have a good estimate of the rate of energy transfer at the boundaries of specific sea-floor methane systems. Recent methane measurements at Terceira Island, Azores, Portugal and Tae-ahn Peninsula, Republic of Korea (See http://www.esrl.noaa.gov/gmd/dv/iadv/index.php ) in the context of outlier data points over the last decade at sites such as Storhofdi, Vestmannaeyjar, Iceland, and reports of methane releases from the Arctic seabed, tell us that at current levels of AGW, the Earth’s sea-floor methane systems are not stable. Add in CO2/CH4 from Arctic and tropical wetlands along with carbon from deforestation, and the assumption that carbon feedbacks can be ignored as we estimate future conditions is wrong. Any policy based on projections made without including all feedbacks will be flawed.

    Thus, short term climate sensitivity is moot. We now face long term climate sensitivity for the coal that we burned 50 or 100 years ago. Long term climate sensitivity is twice (or three times?) the short term value.

    The last paragraph in the post on risk should have made an honest estimate of all warming as a result of applying all feedbacks to all greenhouse gases in the atmosphere. You may not be able to “prove” such an honest estimate, but it is more likely to be correct than a value based on some estimate of short term climate sensitivity. Models using short term climate sensitivity for carbon that we released a century ago do not provide a realistic estimate of what can be reasonably expected over the next 24 to 35 years (i.e., total feedbacks for anthropogenic carbon in the atmosphere for a period of 135 years or more).

    Use of long term climate sensitivity also changes the cost (and time for depreciation) of releasing more carbon. Facing (and planning for) the full impact of long term climate sensitivity is the rational risk management approach.

    Comment by Aaron Lewis — 1 Dec 2011 @ 6:19 PM

  109. Dan H.: “Cold, polar waters constantly absorb CO2, sink as it becomes more dense, and is transported to the equatorial waters via the ThermoHaline and outgases in the warmer waters of the Indian and Pacific Oceans.”

    This is precisely why warming in the poles is of such concern–there’s a whole helluva lot of carbon sequestered up there…in oceans, permafrost, clathrates…

    Comment by Ray Ladbury — 1 Dec 2011 @ 6:21 PM

  110. And some human activities had no parallel in the paleo record:

    GEOPHYSICAL RESEARCH LETTERS, doi:10.1029/2011GL049784
    Arctic winter 2010/2011 at the brink of an ozone hole
    “… severe ozone depletion like in 2010/2011 or even worse could appear for cold Arctic winters over the next decades if the observed tendency for cold Arctic winters to become colder continues into the future….”

    http://www.agu.org/pubs/crossref/pip/2011GL049784.shtml
    GEOPHYSICAL RESEARCH LETTERS, doi:10.1029/2011GL049761

    Stratospheric heating by potential geoengineering aerosols
    Geoengineering aerosols change stratospheric radiative heating rates
    Heating rates depend on aerosol species and size

    Comment by Hank Roberts — 1 Dec 2011 @ 6:27 PM

  111. Lennart van der Linde (#104),
    Yes, a different planet should have a different sensitivity.

    Traditionally, the definition of climate sensitivity excludes changes in ice cover over land, among other things. The temperature variations across glacial cycles are obviously larger than what can be attributed to changes in atmospheric CO2.
    So the actual (very) long term warming should be higher than what you would get from sensitivity alone.

    Comment by Anonymous Coward — 1 Dec 2011 @ 6:28 PM

  112. Chris,

    Please note, I have not provided any mechanism for system change (i.e. I have not provided a substitute theory).

    All I am highlighting is the possibility that water vapor feedback may have very little to do with non-condensable GHGs. The argument that water vapour is a feedback, rather than a forcing, seems to rest on the fact that it condenses out within a few days, rather than persisting for hundreds of years like CO2.

    In a hypothetical carbon feedback, a positive feedback amplifies the signal (CO2) by releasing more carbon from a sink. In this case it is possible for the CO2 attributed to feedback to become as large a signal as the initial one. Now we have doubled the signal, and simply reversing the initial perturbation back to zero will not bring the system back to its initial state.

    I am arguing that there is no difference between CO2 and water vapour in this regard. That one is non-condensing makes little difference, for the simple fact that the heat in the entire climate system is enormous. I disagree with the Soden paper you linked to. The water vapour responds to the temperature, not the CO2. Comparing the heat contribution of CO2, with the heat in the entire climate system on any time scale, is like comparing an elephant with an ant.

    Comment by Isotopious — 1 Dec 2011 @ 8:03 PM

  113. Isotopious, Do you know that the southerners among us are right this moment reading your post and saying, “Bless his heart.”

    Dude, water vapor is determined by temperature–it is therefore a feedback. CO2, when it is released thermally, is a feedback. When CO2 is varied independently of temperature, it is a forcing.

    Comment by Ray Ladbury — 1 Dec 2011 @ 9:14 PM

  114. Isotopious — I admire your good natured persistence, but former students of mine control elephants with ants all the time.

    First of all, simplify by imagining a box with orbital forcing as input and an (unobservable) temperature as output; the water vapor feedback is inside and is effectively instantaneous on the scale of millennia appropriate for considering ice age variations. Call that output temperature a signal which goes into a box with the CO2 feedback; the output of this second box is the observable temperature from paleoclimate proxies.

    Considering just the second box, suppose the signal s=1 and the amplifying feedback from CO2 is f=1/2. Then the response becomes 1+1/2, except that the new 1/2 also feeds back, giving another 1/4, and so on; we have a response

    r = 1 + 1/2 + 1/4 + 1/8 + … = 2

    and the general form (in this simple situation) is r = s/(1-f) for values of f less than 1. This form holds whether the signal is positive or negative. In the case f=1/2 we see that the response is twice the signal. [That is approximately the correct value for ice age variations once water vapor is treated properly, which I haven’t done.]
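
    The convergence is easy to check numerically; a trivial sketch:

        # Partial sums of the feedback series converge to s/(1-f) for |f| < 1.
        s, f = 1.0, 0.5
        total, term = 0.0, s
        for _ in range(30):
            total += term   # add this round of feedback
            term *= f       # the next round is a fraction f of this one
        print(round(total, 6), s / (1 - f))   # 2.0 2.0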

    So yes, the system response to orbital forcing is about double the ‘raw’ value. But as Ray Ladbury points out, there is another way to perturb the system: add more atmospheric CO2 from a previously untapped reservoir. Charney (equilibrium) climate sensitivity is a (partial) indication of the system response to instantly doubling CO2. The goal of the paper under review, as I take it, is an attempt to put an upper bound on the Charney climate sensitivity by considering the LGM paleoclimate.

    Comment by David B. Benson — 1 Dec 2011 @ 10:55 PM

  115. Isotopious – Einstein? Galileo? Or Bozo the Clown?

    You decide …

    Hey, dude …

    The water vapour responds to the temperature, not the CO2.

    Climate scientists will be stunned, stunned I say, to learn this!

    (actually, the only reason you know this is that this is exactly what atmospheric physics tells us. Trivially, you’re right, except you’re in total denial over the fact that CO2 does raise the temperature.)

    Comment by dhogaza — 1 Dec 2011 @ 11:38 PM

  116. I suggest RC should do a whole article on sensitivity, all the way down to zero CO2 and all the way up to 2000 ppm. How logarithmic does it stay? You might need 5-dimensional graph paper.

    Comment by Edward Greisch — 1 Dec 2011 @ 11:41 PM

  117. Comparing the heat contribution of CO2, with the heat in the entire climate system on any time scale, is like comparing an elephant with an ant.

    Or comparing an elephant with LSD …

    The 7000-pound bull elephant named Tusko was injected with a huge dose of LSD (297 mg) into one buttock with a dart rifle. Five minutes later, the elephant collapsed and went into convulsions.

    And died.

    0.00065477292 pounds killed a 7000 pound bull elephant.

    Apparently ants weigh something like 0.1 mg, i.e. 1/2970th of the weight of LSD in this case.

    What makes you think that magnitude comparisons of this sort are particularly useful?

    Comment by dhogaza — 1 Dec 2011 @ 11:45 PM

  118. Perhaps this is the right place to mention a new paper:

    Pagani et al
    The Role of Carbon Dioxide During the Onset of Antarctic Glaciation

    http://www.sciencemag.org/content/334/6060/1261

    Comment by AIC — 1 Dec 2011 @ 11:46 PM

  119. David Benson. Going off these calculations, how much is the ppm change during the ice cycles?
    100 ppm
    What is the temperature perturbation to the system without feedback?
    0.6 deg C
    What is the temperature perturbation to the system with feedback (f = 0.67)?
    1.8 deg C
    I can understand why RC thinks this current study underestimates climate sensitivity.
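
    The arithmetic behind numbers like these can be reproduced with the standard simplified forcing expression, ΔF = 5.35 ln(C/C0), and a no-feedback response near 0.3 K per W/m^2; a rough sketch, taking the glacial rise as 180 to 280 ppm:

        from math import log

        # Simplified CO2 forcing (Myhre et al. 1998): dF = 5.35 * ln(C/C0).
        dF   = 5.35 * log(280.0 / 180.0)   # W/m^2, glacial -> interglacial CO2
        lam0 = 0.3                         # K per W/m^2, no-feedback response
        f    = 0.67                        # assumed net feedback fraction

        dT = lam0 * dF
        print(round(dF, 2), round(dT, 2), round(dT / (1 - f), 2))
        # ~2.36 W/m^2 -> ~0.71 K bare, ~2.15 K with feedback; in the same
        # ballpark as the 0.6 and 1.8 quoted above.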

    Comment by Isotopious — 2 Dec 2011 @ 5:55 AM

  120. “Comparing the heat contribution of CO2, with the heat in the entire climate system on any time scale, is like comparing an elephant with an ant.”

    Or, in other words, “Oooh! Look how small it is!”

    Yeah. I’ve had years worth of conversation now in which “Oooh! Look how small it is!” (repeated incessantly in various forms) was pretty much the whole contribution from the ‘skeptic’ in question. Maybe I’m a bit jaded, but this seems one of the more foolish hand-wavings to me at this point.

    At least, it’s a very poor substitute for actual attempts to quantify properly. You know, like this paper we’re discussing tries to do.

    Comment by Kevin McKinney — 2 Dec 2011 @ 8:08 AM

  121. Isotopious seems to be repeating a meme that is gaining popularity amongst the less conspiracy-theory minded denialati–namely that the greenhouse effect is water vapor all the way down, and that CO2 “is a weak greenhouse gas”.

    This “theory” at least has the merit that the warming produced would look like greenhouse warming. However, it would utterly fail to explain why the stratosphere is cooling (not much water vapor there). It also utterly ignores the transient nature of water vapor.

    Anyone have any idea which denialist site this latest rebunking started on?

    Comment by Ray Ladbury — 2 Dec 2011 @ 9:08 AM

  122. My understanding, Ray, is that an increase in water vapor in the troposphere will cause cooling in the stratosphere. Just another correlation fallacy.

    [Response: Based on what? Increases in local stratospheric water vapour cool the stratosphere (i.e. Oinas et al, 2005), but I’m not aware of any study indicating that tropospheric water vapour increases have the same effect. Perhaps I’ve missed it? – gavin]

    CO2 is a GHG and plays a role in warming, and as for the greenhouse effect of 30-odd deg C, estimates made by Gavin here on RealClimate are probably spot on: about 25% for CO2, etc., and about 75% for water vapor and clouds. That would suggest CO2 is far from ‘weak’; however, I would suggest that it is no more important than water vapor in the role it plays in past ice cycles (look at the numbers!). CO2 is no more a feedback than water vapor; look at the ice core data.

    [Response: CO2/GHG changes add about 40% to LGM cooling, water vapour feedback adds about 60%, so they are comparable in size – and both large! – gavin]

    The desire to attribute water vapor to CO2 is easily understood from the denialist point of view. Like any good competitor, you need to knock out the competition, propose a joint venture.

    Unfortunately, you can’t have it both ways. A stronger feedback is not plausible; it would blow up. And the IPCC estimate of around 3 deg C is what I used to change 0.6 into 1.8. But even that number is suspect, since there is no proof that water vapor needs CO2 to ‘hold it up’.

    Lots of assumptions for very little effect.

    Comment by Isotopious — 2 Dec 2011 @ 4:23 PM

  123. I’m just guessing, Gavin.

    From memory, I’m pretty sure that when there is warming in the troposphere, you get cooling in the stratosphere, and vice versa.

    I think that’s what you see on the Aqua channels.

    Whatever, you get ~0.6 deg C from CO2 alone. It’s likely such a small forcing will have some serious competition.

    Yes, ~8 deg C is large; however, a change from 8 to 8.6 is not very large at all.

    Comment by Isotopious — 2 Dec 2011 @ 6:07 PM

  124. Isotopious, you are missing the point. You start with a proximate cause. It causes warming or cooling. The climate responds to the warming or cooling, in part by increasing or decreasing water vapor à la Clausius-Clapeyron. Other greenhouse gases also change. The changes in forcing brought about by these greenhouse gas changes (including water vapor) are a feedback on the initial forcing of the proximate cause.

    For Milankovitch cycles, the proximate cause is changes in insolation, especially in the Northern Hemisphere. In the current warming epoch, the proximate cause is anthropogenic CO2. After all, the initial warming that led to the increase in H2O would not have happened without it. How hard is that to understand?

    Comment by Ray Ladbury — 2 Dec 2011 @ 6:20 PM

  125. > the denialist point of view. Like any good competitor,
    > … knock out the competition, propose a joint venture.

    For those who can’t figure out where ‘iso’ gets this version of “science” this may help — it’s market-based “scientific” thinking. Quite popular among those who like that sort of thing.

    The Rise of the Dedicated Natural Science Think Tank
    http://www.ssrc.org/workspace/images/crm/new_publication_3/%7Beee91c8f-ac35-de11-afac-001cc477ec70%7D.pdf

    Seriously — read it. It’s a worldview unimaginable to most in the sciences.

    “… to have the ambition to change the very nature of knowledge production about both the natural and social worlds. Analysts need to take neoliberal theorists like Hayek at their word when they state that the Market is the superior information processor par excellence. The theoretical impetus behind the rise of the natural science think tanks is the belief that science progresses when everyone can buy the type of science they like, dispensing with whatever the academic disciplines say is mainstream or discredited science….

    … the provision of targeted scientific findings (from ‘data’ to ‘theories’ to full-fledged scientific publications) within a concerted “marketplace of ideas” framework, with the intention of altering the balance of orthodoxy and heterodoxy from outside the university sector….

    … The expanding role of natural science think tanks have due to two high profile events over the last few years: …. the numerous think tanks behind the contrarian science opposing the IPCC and academic global warming studies. But the real eye-opener for those concerned with science policy was the cache of documents made public in the series of tobacco industry settlements of the mid-1990s …..”

    Those in that worldview believe everyone does this, it’s their whole world. They try to come up with ideas they can sell, not ideas they can falsify.

    Just watch.

    Comment by Hank Roberts — 2 Dec 2011 @ 8:24 PM

  126. Ray. You are arguing that 100ppm change in radiative forcing for CO2, amounting to 0.6 deg C change, multiplied by a wv feedback giving 1.8 deg C change, plays an important part in raising the sea level by 100 meters.

    I’m saying it doesn’t, because even if the above were true (which it probably isn’t, where’s the proof?) it’s not enough warming, and even if you add some orbital wobble, it still isn’t enough. Indeed, most of the response is probably due to obliquity, which has close to zero radiative forcing all said and done.

    It’s possible the global anomaly is around 8 deg C for the ice cycle. If that were the case then CO2 alone would explain 7.5%.
    92 meters of sea level to go, add some IPCC water vapor feedback, 77 meters to go, add some more positive feedbacks. Indeed, multiple feedbacks can have confounding effects so maybe that would help ‘fudge’ the data a little more.

    But the point is that you are stretching reality; I have only tried to explain the climate here, let alone the cycle. As Benson said, ‘whether the signal is positive or negative’, it doesn’t matter. That may be true for one simple feedback state, but in the case of multiple feedbacks with confounding effects I very much doubt that would happen.
    ‘How hard is that to understand?’

    Extremely complex and difficult to follow.

    Comment by Isotopious — 2 Dec 2011 @ 8:38 PM

  127. Isotopious,
    Uh, Dude, where did I say CO2 was the forcing that was responsible for the Ice ages and the interglacials? That is clearly the Milankovitch cycles that initiate the process–and CO2 and water vapor (along with changes in albedo due to snow and vegetation) are both feedbacks.

    Might I suggest reading for comprehension?

    Comment by Ray Ladbury — 2 Dec 2011 @ 10:35 PM

  128. “100ppm change in radiative forcing for CO2”–Don’t want to seem churlish here, but you can’t specify “radiative forcing” in ppm.

    Just saying.
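
    One quick way to see why, using the same simplified logarithmic forcing expression as above (a sketch; the concentrations are illustrative):

        from math import log

        def forcing(c_new, c_old):
            """Simplified CO2 forcing, 5.35 * ln(C/C0), in W/m^2."""
            return 5.35 * log(c_new / c_old)

        # The same 100 ppm increment gives different forcings, so ppm alone
        # does not determine a forcing:
        print(round(forcing(280.0, 180.0), 2))   # 180 -> 280 ppm: ~2.36 W/m^2
        print(round(forcing(380.0, 280.0), 2))   # 280 -> 380 ppm: ~1.63 W/m^2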

    Comment by Kevin McKinney — 2 Dec 2011 @ 11:05 PM

  129. Isotopious:

    Extremely complex and difficult to follow.

    Speak for yourself. Don’t imagine that it’s true for scientists working in the field, or for PhD physicists like Ray who don’t work in the field, or even for humble software engineers like myself who have a mathematical background.

    [edit – please don’t]

    Comment by dhogaza — 3 Dec 2011 @ 12:04 AM

  130. Isotopious @126 — So long as the system response is linear, the internal complexity doesn’t matter; the positive, i.e., enhancing, feedbacks remain positive and similarly for the negative ones. However, the ice age variations certainly involve some non-linear components. That makes the analysis more difficult but doesn’t change the polarity of the various feedbacks, just the strengths involved.

    Obliquity is not the only aspect to orbital forcing; an appropriate graphic is found in A Moveable Trigger. More generally, the entire Quaternary was analyzed in Paillard and found to fit his trigger model rather well; studying that is recommended.

    Comment by David B. Benson — 3 Dec 2011 @ 12:23 AM

  131. Gavin @ 122,

    The stratospheric cooling may be caused by the tropospheric water vapor (see figure 3 of http://www.springerlink.com/content/6677gr5lx8421105/fulltext.pdf ) – but in that figure water vapor is fixed only above sigma = 0.14 (~140 hPa), so the cooling may also be caused by the increase in lower stratospheric water vapor.

    Comment by Jianhua Lu — 3 Dec 2011 @ 12:24 PM

  132. # 80, raypierre in the inline response,

    …. But to reiterate: the difference between climate sensitivity estimates based on land vs. ocean data indicates that something is seriously wrong, either with the model, or the data, or some of both. In my view, that mismatch alone is reason not to put too much credence in Schmittner’s claim to have trimmed off the upper end of the IPCC climate sensitivity tail

    Thanks, as this punter has been seeking a simply stated reason for the view that the paper under discussion doesn’t constrain the upper bound as well as it is variously reported to do.

    I would welcome any further substantive illumination on this point from anyone.

    And, I add a 2nd (more a 12th or 17th) to the expressions of thanks to Drs. Urban and Schmittner for their engagement here and other places on the net. And thanks, always, to the Real Climate providers.

    Comment by WhiteBeard — 3 Dec 2011 @ 7:56 PM

  133. Unless I’m missing something obvious, I don’t see how one can extrapolate or estimate current climate sensitivity from the ratio of the temperature change to the solar forcing change that occurred from the last glaciation to the present interglacial period.

    For starters, one simply cannot equate the positive feedback effect of melting ice (both reduced albedo and increased water vapor) in going from maximum ice with that at the minimum ice where the climate is now (and is during every interglacial period). There just isn’t much ice left, and what is left would be extremely difficult to melt, as most of it is located at high latitudes around the poles, which are mostly dark 6 months out of the year with way below freezing temperatures. A lot of the ice is thousands of feet above sea level too, where the air is significantly colder on average. Unless we wait a few tens of millions of years for plate tectonics to move Antarctica and Greenland to lower latitudes (if they are even moving in that direction), no significant amount of ice is going to melt from a relatively small ‘intrinsic’ rise in global average temperature from 2xCO2.

    Furthermore, the relatively high ‘sensitivity’ from glacial to interglacial is largely driven by the change in the orbit relative to the Sun, which changes the distribution of incident solar energy into the system quite dramatically (more energy is distributed to the higher latitudes in the NH summer, in particular). This combined with positive feedback effect from melting of a huge amount surface ice was enough to cause the 5-6C rise. The roughly +7 W/m^2 or so increase from the Sun was likely only a minor contributor to the total net temperature change. Moreover, we are also nearing the end of this interglacial period, so if anything the orbital component has already flipped back in the direction of glaciation and cooling.

    [Response: Your thinking is very much right on this in general, but note that what is done with these estimates of climate sensitivity for LGM climate is to use the state of the climate already in place at the LGM — including the ice albedo. Note also that the orbital configuration is not very different from today’s. You’re entirely right that huge seasonal forcing (40 W/m^2 or so) is what causes the demise of the ice in the first place (or -40 to cause it to start building up). But the actual forcing difference at the LGM, which is the period being considered, is minimal. And globally averaged, it is very close to zero (and definitely negligible in this context). It’s all about transient vs. equilibrium. Hope that helps.–eric]

    Comment by RW — 4 Dec 2011 @ 7:50 PM

  134. RW

    For starters, one simply cannot equate the positive feedback effect of melting ice (both reduced albedo and increased water vapor) in going from maximum ice with that at the minimum ice where the climate is now (and is during every interglacial period).

    You’ve hit on one of the weaknesses of the paper, as the model they use admittedly doesn’t model changes in albedo (at least, not as a model output).

    Thank you for pointing out one reason why the paper’s estimate of ECS is too low …

    [Response: UVic doesn’t model changes in cloud albedo, but I’m quite sure it models changes in albedo due to sea ice and land ice. –raypierre]

    Comment by dhogaza — 4 Dec 2011 @ 11:16 PM

  135. Eric,

    “note that what is done with these estimates of climate sensitivity for LGM climate is to use the state of the climate already in place at the LGM — including the ice albedo.”

    I know, but the bottom line is most of the positive feedback from the melting of ice has already been used up as we left the LGM. For the reasons I stated, it’s mostly a ‘clamped’ effect in the current climate.

    “Note also that orbital configuration is not very different than today. You’re entirely right that huge seasonal forcing (40 W/m^2 or so) is what causes the demise of the ice in the first place (or -40 to cause it to start building up). But the actual forcing difference at the LGM, which is the period being considered, is minimal. And globally averaged, it is very close to zero (and definitely negligible in this context. It’s all about transient vs. equilibrium.”

    I was mainly referring to the change in the distribution of the incoming solar energy, which from what I understand is rather large. My point is this particular mechanism or influencing factor is not in play in the current climate.

    I know the actual radiative forcing increase at the beginning of the transition from the LGM started out small and gradually increased, but the point is in total it’s estimated to be about 7 W/m^2. As I understand it, they are more or less trying to equate the 0.8C per 1 W/m^2 needed for a 3C rise from 2xCO2 (0.8 x 3.7 = 3C) to the +7 W/m^2 of net incident solar from the orbital change that ultimately resulted in about an increase of 5-6C from the LGM to the current interglacial period (0.8 x 7 = 5.6C).

    [Response: No. They are comparing under nearly identical orbital conditions, and the ice is treated as a forcing, not a feedback. Of course in reality it is a slow feedback, but it can be treated as a forcing for the ‘fast feedback’ analysis that is being done. At least, this is (strongly) how I understand what was done (and definitely what older papers did).–eric]

    If the two main factors that drove the bulk of temperature change from the LGM to the current interglacial are either non-existent (the orbital distribution change) or significantly reduced (the positive feedback from melting ice), I don’t see how it can be extrapolated or proportional to the current climate sensitivity.

    [Response: You don’t understand how the estimate is done. It is not a simple extrapolation. The calculation (like many before it) uses a model subjected to LGM forcings, but with various settings for the atmospheric feedback magnitudes, to see how well the LGM is reproduced with various assumptions about feedbacks. The presumption is that the same feedbacks are operating in the future — it is not that the forcing is the same, or even that the ice-albedo feedback operates symmetrically. It’s a reasonable approach, and the only one that can deal with the fact that the LGM was so different than the present — but it’s an approach that is only as good as the model used. Hence my reservations about whether UVic is a suitable model for this kind of study. –raypierre]
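
    To make that procedure concrete, here is a cartoon version: a one-parameter linear stand-in for the model and a single global “datum”, nothing like the paper’s spatially resolved Bayesian emulation, with every number illustrative:

        from math import exp

        # Cartoon of an LGM-constrained sensitivity estimate: sweep candidate
        # ECS values, predict LGM cooling with a one-line linear stand-in for
        # the model, and weight each ECS by its fit to a reconstructed global
        # cooling. All numbers are illustrative, not the paper's.
        F_LGM = -6.5   # W/m^2, assumed total LGM forcing (ice, GHGs, dust)
        F_2X  = 3.7    # W/m^2, doubled-CO2 forcing
        T_OBS = -3.0   # K, assumed reconstructed LGM cooling
        SIG   = 1.0    # K, assumed uncertainty on that cooling

        ecs_grid = [0.5 + 0.1 * i for i in range(60)]         # 0.5 .. 6.4 K
        weights  = [exp(-0.5 * ((e * F_LGM / F_2X - T_OBS) / SIG) ** 2)
                    for e in ecs_grid]
        mean = sum(e * w for e, w in zip(ecs_grid, weights)) / sum(weights)
        print(round(mean, 2))   # posterior-mean ECS of this toy problem, ~1.7 K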

    Comment by RW — 4 Dec 2011 @ 11:42 PM

  136. Ray,

    “You don’t understand how the estimate is done. It is not a simple extrapolation. The calculation (like many before it) uses a model subjected to LGM forcings, but with various settings for the atmospheric feedback magnitudes, to see how well the LGM is reproduced with various assumptions about feedbacks. The presumption is that the same feedbacks are operating in the future — it is not that the forcing is the same, or even that the ice-albedo feedback operates symmetrically.”

    I understand it’s not an overtly direct extrapolation, but fundamentally it more or less still is, if the underlying assumptions about the feedbacks are presumed not only to be correct but also to operate in the same proportion to the forcing in the future as they did at the LGM.

    BTW, I have your book and enjoy it.

    Comment by RW — 5 Dec 2011 @ 1:12 AM

  137. RW,
    You wrote: “A lot of the ice is thousands of feet above sea level too, where the air is significantly colder on average. Unless we wait a few tens of millions of years for plate tectonics to move Antarctica and Greenland to lower latitudes (if they are even moving in that direction), no significant amount of ice is going to melt from a relatively small ‘intrinsic’ rise in global average temperature from 2xCO2.”

    The idea is that ice flows downwards to the sea level around Antarctica and Greenland where it has always been melting. Warming simply makes it flow and melt faster, shrinking the ice sheets.
    So it wouldn’t take that much warming to get rid of all the ice which geography isn’t keeping from flowing downwards. It must also have been really cold most of the year at the top of the Laurentide ice sheet when it melted away.

    This post should get you started about expectations with regard to melting on human timescales: http://www.realclimate.org/index.php/archives/2008/09/on-straw-men-and-greenland-tad-pfeffer-responds/
    You may also be interested in an article about a recent publication looking at CO2 and ice sheets on a geological timescale:
    http://www.sciencedaily.com/releases/2011/12/111201174225.htm

    Comment by Anonymous Coward — 5 Dec 2011 @ 2:20 AM

  138. Anonymous,
    You must remember that ice sheets flow due to the pressure imposed upon them by the ice pushing down at the center. While melting at the edges may cause the termini to recede, it will not cause the ice to flow any faster.

    [Response: Not true at all. Glacier speed also depends on bottom drag (which is a function of temperature and lubrication by meltwater) and on stresses within the ice sheet/shelf. Ice flow sped up by a factor of 4 to 5 in the source glaciers to the Larsen B ice shelf after it collapsed, for instance. – gavin]

    Comment by Dan H. — 5 Dec 2011 @ 5:53 PM

  139. Lubrication by meltwater is a contentious subject. Stresses within the ice typically result from the pressures exerted by the bulk ice; terminus melting has little to do with these stresses. The collapse of the Larsen B ice shelf was a special case, whereby the blocking of the glacier path was removed and the ice was allowed to advance once more. Again, this is related to the pressure exerted on the glacier by the ice at the center. This is significantly different from termini melting. You may want to rethink your first line above.

    [Response: Oh, so now you’re an expert on glacial dynamics too?–Jim]

    Comment by Dan H. — 5 Dec 2011 @ 8:27 PM

  140. he’s rebunking, I suspect, claims like those in the article debunked here:
    http://www.geolsoc.org.uk/gsl/pid/7523

    “As glaciologists and glacial geologists, we respond to the article “Glaciers – science and nonsense” by Cliff Ollier in the March issue of Geoscientist. We believe that the standfirst of this piece, which states that the author “takes issue with some common misconceptions about how ice-sheets move, and doubts many pronouncements about the “collapse” of the planet’s ice sheets” misleads the reader by assuming that Ollier’s arguments are correct.

    We demonstrate in this article that those arguments are not, in fact, based on an accurate understanding of contemporary glaciology. Our response is structured around five general themes that are misrepresented by Ollier in his argument that ice sheets are not responding to recent climate warming …”

    [Note the time scale changes]

    http://www.geolsoc.org.uk/webdav/site/GSL/shared/images/geoscientist/Geoscientist%2020.06/Fig%201resized.jpg
    Current and future cryospheric change

    In contrast to the message portrayed by Ollier, extensive scientific evidence indicates that ice masses are in fact melting at rates that far exceed background trends and this is happening in nearly all glacierised (glacier-covered) regions on Earth.”

    Comment by Hank Roberts — 5 Dec 2011 @ 9:38 PM

  141. Jim,
    Do you have anything productive to add, or are you just making snide comments? These little snipes of yours are getting old.

    [Response: Then go find a place that likes your various proclamations and assertions and opinions.–Jim]

    Comment by Dan H. — 5 Dec 2011 @ 9:54 PM

  142. Anonymous Coward,

    “The idea is that ice flows downwards to the sea level around Antarctica and Greenland where it has always been melting. Warming simply makes it flow and melt faster, shrinking the ice sheets. So it wouldn’t take that much warming to get rid of all the ice which geography isn’t keeping from flowing downwards. It must also have been really cold most of the year at the top of the Laurentide ice sheet when it melted away.”

    “This post should get you started about expectations with regard to melting on human timescales:”

    I was mainly referring to the melting of ice in the context of potential decreased surface albedo – not potential sea level rise. That’s for another thread.

    The bottom line though is permanent ice melting or ice loss generally requires yearly averaged temperatures above 0C. Below 0C there just isn’t going to be any permanent or long term melting. Also, most of the ice on Greenland and Antarctica is located in areas where the yearly average temperature is already significantly below 0C – many areas multiple 10s of degrees below 0C.

    Comment by RW — 5 Dec 2011 @ 10:24 PM

  143. … ratio of retreat to the along-flow stress-coupling length is proportional to the relative increase in speed, consistent with typical ice flow and sliding laws. This affirms that speedup results from loss of resistive stress at the front during retreat, which leads to along-flow stress transfer. Many retreats began with an increase in thinning rates near the front in the summer of 2003, a year of record high coastal-air and sea-surface temperatures.

    Comment by JCH — 5 Dec 2011 @ 10:33 PM

  144. Dan H,

    I’m rather ignorant of glacial interiors/dynamics, except for some stuff I picked up in an undergrad geology class on the subject, but there are a number of real-world examples that show you are wrong. For instance, see Zwally et al (2002, Science, Vol 297), which showed clear evidence for velocity variations at Swiss Camp (Greenland), with speed-ups in the summer that likely reflect increased basal motion in response to surface meltwater that can reach the glacier bed.

    Also, the outlet glaciers on Greenland are all variable in flow speed and ice discharge, and I think RC did something before on the acceleration of Jakobshavn Isbrae. And for glaciers grounded in deep water, acceleration will be promoted whenever you get lowered resistance via a lower average basal effective pressure.

    Comment by Chris Colose — 5 Dec 2011 @ 11:22 PM

  145. RW,
    Ice sheets affect both the sea level and albedo so ice sheet dynamics matter.

    You wrote “The bottom line though is permanent ice melting or ice loss generally requires yearly averaged temperatures above 0C. Below 0C there just isn’t going to be any permanent or long term melting.”

    What do you figure the average temperature is in Nunavut, for instance? The Laurentide ice sheet is nevertheless gone, and that definitely affected the albedo of the area.
    I think it’s simple enough to understand: ice is mobile so the surface or interior temperature of most of the ice sheet doesn’t affect the ultimate result. If more ice is lost at the margins than gained at the core, the ice sheet shrinks, ultimately affecting albedo as (depending on the underlying geography) lakes form, some rockbed is exposed and areas are reconquered by the ocean.
    I guess a relatively small change in temperatures wouldn’t affect the albedo of a flat highland near the poles. But slopes do not remain white so easily unless they are covered by an ice sheet. Next to the oceans, it’s not going to be so cold and continental interiors could possibly (I don’t know) be very dry and therefore lack a year-round white cover in spite of very low winter temperatures (like areas in Northeastern Asia).

    Dan,
    I know virtually nothing about ice sheet dynamics, but even I can understand that, even assuming no lubrication or sliding at the base, lateral ice flow is not going to be caused by the absolute weight of the ice at the center. Thin glaciers would barely flow compared to thick ice sheets if that were the case.
    Intuitively, it seems weight gradients and not absolute weight should drive the flow, as neighboring columns of ice buttress each other. So if the margin were lost without ice loss at the center, the new margin, having lost its support, would become very unstable and the whole ice sheet would have to lose volume to establish a new equilibrium. At least that would be my ignorant assumption…

    Comment by Anonymous Coward — 6 Dec 2011 @ 3:34 AM

  146. Does anyone understand the math in this statement?

    “We have increased CO2 from 280 to 390, going up faster each year. [There’s relevant news here.] If we continue we’ll soon get close to 560 [double the carbon dioxide concentration preceding the industrial revolution] or 2.3 (our estimate) to 3 (I.P.C.C.) degrees C. (4.1-5.4 degrees F.) global average surface air warming.

    This is getting close to the warming from the LGM [peak of the last ice age] to today (3-4 degrees C.), hence we can expect dramatic changes over land, at mid- to high latitudes, in vegetation, mountain glaciers, ice sheets, and sea level.”

    http://dotearth.blogs.nytimes.com/2011/12/05/more-on-the-sensitive-climate-question/

    The change in the carbon dioxide concentration from the LGM to the pre-industrial period was less than a doubling. The change from then to “today” (390 ppm) is about a doubling, but because the recent increase is so rapid, today’s temperature is not representative of typical 390 ppm conditions.

    Maybe this is what is meant though. 2.3C (their estimate from LGM to pre-industrial) plus 0.7C (from pre-industrial to “today”) comes to 3C, while the IPCC comes to 3.7C, or about 4, consistent with “3-4 degrees C” above.

    But then, Schmittner is saying 3C for a doubling rather than 2.3C so where is the beef? It seems hard to believe that this boils down to forgetting to correct to a doubling sensitivity by carelessly shifting from a pre-industrial end point to a “today” endpoint.

    Comment by Chris Dudley — 6 Dec 2011 @ 7:45 AM

  147. Chris,
    You are essentially correct. The pressures exerted from the interior of the glacier will force movement into those areas with the least resistance. Glacial advance will speed up or slow down based on this pressure difference. There are several examples of neighboring glaciers moving in opposite directions.

    NOAA has a nice visual of the recession of the Jakobshavn glacier which RC posted here:
    http://www.realclimate.org/images/jakobshavn.jpg
    There was a rather large retreat from 1851 until 1913, then slowing until 2001, with a recent acceleration. There was very little movement from 1964 – 2001.

    The most recent changes in Greenland’s largest glaciers can be found here:
    http://www.staff.science.uu.nl/~lenae101/pubs/Howat2011.pdf

    Comment by Dan H. — 6 Dec 2011 @ 7:57 AM

  148. #142–

    “Also, most of the ice on Greenland and Antarctica is located in areas where the yearly average temperature is already significantly below 0C –”

    But it doesn’t all stay there; it flows toward the margins, where it can and does melt. That was AC’s point–cf. #140 and #143 as well.

    Comment by Kevin McKinney — 6 Dec 2011 @ 8:04 AM

  149. #146 Chris Dudley,
    I understand 2.3C is the center of the range they estimate for 2xCO2 Charney sensitivity while the center of the AR4 range is 3C.
    I don’t think they’re saying the temperature difference from LGM to preindustrial is 2.3C.

    What you may be missing is that that difference is greater than what you would get from Charney sensitivity alone. CO2 wasn’t the forcing anyway.

    Comment by Anonymous Coward — 6 Dec 2011 @ 8:45 AM

  150. #149,

    Thanks for your response. Looking at this pdf, http://mgg.coas.oregonstate.edu/~andreas/pdf/S/schmittner11sci_man.pdf I think that in the paper they are talking about pre-industrial controls so ‘today’ should mean 1700 or so. 185 to 280 ppm is 60% of a doubling logarithmically. So, 3 to 4 C cooling between the LGM and 1700 is a 5 to 7 C long term doubling sensitivity (all feedbacks). If the Charney sensitivity is half of that as proposed by Hansen, then we are at 2.5 to 3.5 for the Charney sensitivity. Again, where’s the beef?
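
    Checking that arithmetic takes only a few lines:

        from math import log

        frac = log(280.0 / 185.0) / log(2.0)   # fraction of a CO2 doubling
        print(round(frac, 2))                  # ~0.60
        for dT in (3.0, 4.0):
            print(round(dT / frac, 1))         # implied per-doubling: 5.0, 6.7 C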

    It is quite confusing. The claim that a doubling of CO2 would begin to approach the difference from the LGM only makes sense if “today” means today, since then it would be a doubling in both cases. But the paper seems to take “today” to be pre-industrial, which is probably the correct thing to do. In that case, doubling CO2 should have a larger effect than the change from the LGM to pre-industrial conditions, and the wording over at dotearth does not make sense.

    Comment by Chris Dudley — 6 Dec 2011 @ 11:40 AM

  151. #150,

    Their ~2.3C finding applies to a doubling from preindustrial, not to what happened in the past when CO2 was a feedback.
    They found ~2.3C and not ~3C. That’s “the beef”. It’s not a big deal (see comments by the RC crew above).
    This finding is not some kind of observation. You cannot determine sensitivity, or convert from one type to another, without a model.
    Sensitivity is confusing. I’d rather not say more considering how excitable some commenters get when the limitations of the concept are brought up.

    What was apparently said to that journalist is that a doubling from preindustrial would lead to a temperature difference commensurate with the difference between LGM and today. That’s all.

    Comment by Anonymous Coward — 6 Dec 2011 @ 2:50 PM

  152. RW @ 142: The bottom line though is permanent ice melting or ice loss generally requires yearly averaged temperatures above 0C. Below 0C there just isn’t going to be any permanent or long term melting. Also, most of the ice on Greenland and Antarctica is located in areas where the yearly average temperature is already significantly below 0C – many areas multiple 10s of degrees below 0C.

    Can you please explain how so much of the globe where mean annual temperatures are below 0C is not covered with ice year-round, then? The following map (I know, it uses data from 1899, but we know any warming since then is all made up, don’t we?) seems to think that much of northern Canada and Siberia is below 0C (32F on the map), yet so much of that area does not have permanent snow/ice cover. Permafrost, yes, but even then the top metre of soil thaws each summer.

    Global Temperature Map from 1899

    Comment by Bob Loblaw — 6 Dec 2011 @ 4:55 PM

  153. Bob,
    Here is a more recent map of global isotherms:

    http://www.physicalgeography.net/fundamentals/7m.html

    Comment by Dan H. — 6 Dec 2011 @ 8:05 PM

  154. #151,

    Thanks again. That seems to be the gist of it but something still seems to be backwards in his thinking. A doubling should have a greater effect than less than a doubling….

    Comment by Chris Dudley — 7 Dec 2011 @ 1:46 AM

  155. #154,

    Naturally a doubling would have a greater effect than going from 180 to 280 ppm if everything else were the same.
    But everything else isn’t the same. You shouldn’t assume there’s a simple relationship between atmospheric CO2 and temperatures.

    The uncertainties and the ultimate effect of a stabilization at 560 ppm weren’t mentioned in the communication you linked to. I guess it could have mentioned, for the sake of completeness, that higher sensitivities are plausible and that feedbacks which were not anticipated or quantified in AR4 might cause further warming down the road.
    But does it matter really? The communication was trying to convey the big picture. We don’t know exactly what emissions scenarios would lead to a stabilization at 560 ppm or what exactly the impacts of a given average temperature increase would be. There are uncertainties all over the place. But you’ve got to cut to the chase and give people a glimpse of the big picture anyway. Going into minutiae about sensitivity isn’t helpful. Journalists in general are not helpful.

    Comment by Anonymous Coward — 7 Dec 2011 @ 3:33 AM

  156. Anonymous,
    I would add that the sensitivity may not be a constant. While it may have been ~2.3 during the LGM, that is their calculation based on their inputs for the time. There were uncertainties in their calculations, just as there are uncertainties today. The sensitivity today could be higher or lower, and similarly for a future in which atmospheric CO2 levels are higher.

    Journalists are only helpful when they have adequate knowledge about the field in which they are writing, which appears to be rather rare these days.

    Comment by Dan H. — 8 Dec 2011 @ 8:12 AM

Sorry, the comment form is closed at this time.
