I doubt very much if it affects the thrust of the argument, but I think the diagram might be a trifle misleading. Eyeballing it, one tends to look at the average value for each contribution and then to try to combine those average values mentally. Presumably the individual feedbacks are not independent variables. That would mean the contribution from each factor, and hence the total, would be dependent on the model. So ideally, might it not be clearer if the individual dots were labeled by model?
[Response: The individual numbers are all in Table 1 in the paper. – gavin ]
Brian- In your discussion of climate feedbacks, you ignored an important 2005 National Research Council report.
National Research Council, 2005: Radiative forcing of climate change: Expanding the concept and addressing uncertainties. Committee on Radiative Forcing Effects on Climate Change, Climate Research Committee, Board on Atmospheric Sciences and Climate, Division on Earth and Life Studies, The National Academies Press, Washington, D.C., 208 pp.
Among the conclusions in the Executive Summary of the Report is,
“Despite all these advantages, the traditional global mean TOA radiative forcing concept has some important limitations, which have come increasingly to light over the past decade. The concept is inadequate for some forcing agents, such as absorbing aerosols and land-use changes, that may have regional climate impacts much greater than would be predicted from TOA radiative forcing. Also, it diagnoses only one measure of climate change ‘global mean surface temperature response’ while offering little information on regional climate change or precipitation. These limitations can be addressed by expanding the radiative forcing concept and through the introduction of additional forcing metrics. In particular, the concept needs to be extended to account for (1) the vertical structure of radiative forcing, (2) regional variability in radiative forcing, and (3) nonradiative forcing. A new metric to account for the vertical structure of radiative forcing is recommended below.”
Your comments on this to Real Climate readers would be informative. Your statement, for example, that “Consistent with previous studies, clouds were found to provide the largest source of uncertainty in current models.” is not supported by the findings in the 2005 NRC report. The climate forcings are more diverse than summarized in your weblog, and the climate feedbacks, therefore, are more complex and involve more than atmospheric processes.
[Response: Roger, I think you are confusing the role of cloud effects as forcings (principally from indirect aerosol effects) with their role as feedbacks to changes in the radiative budget caused by other factors. This post (as I read it) is concerned with feedbacks – not the multitude of different forcings. – gavin]
I’ve given a fair amount of thought to the problem you describe, and my conclusion is that the models are underestimating the water vapour feedback, which is strongly positive, but are correct in finding that the cloud feedback is close to neutral in the short term. This will mean that when the Arctic sea ice melts, the decrease in albedo will cause rapid warming due to the positive feedback from the greenhouse effect of water vapour. Only when the temperature has risen far enough to force a switch into a new cloud regime will the climate again be in equilibrium.
Basically this is what happened at the end of the Younger Dryas, when the sea ice that had spread out of the Arctic Ocean as far south as Ireland suddenly disappeared. Temperatures in Greenland rose rapidly by 20C, and only stabilised when the ITCZ expanded, making the Amazon wetter [Maslin & Burns, Science, 2000, http://www.sciencemag.org/cgi/content/short/290/5500/2285 ].
Thanks for the great post, Brian!
I’m sure that I am misunderstanding some of the basic physics here, maybe you can clarify. You state that doubling atmospheric CO2 (presumably from 280 ppm to 560 ppm or thereabouts) adds around 4 W/m^2 to Earth’s power budget, with a naive effect of, in the steady-state, increasing the mean surface temperature by ~1 degree Celsius. You then compute the key atmospheric feedbacks as having an aggregate effect of 0.85 to 1.7 W/m^2/K or so – with a mean value of maybe 1.25 W/m^2/K or so. When I go and plug this mean value in with the 1 Kelvin rise in temperature from CO2 doubling, I get an additional effect of 1.25 W/m^2 from feedback, which with a linear temperature response, heats up Earth by an extra 0.3 K or so. We have to add the feedback from this as well and so on, so in the limit I get a temperature increase of 1.45 degrees C or so. But you state that the expected mean temperature impact of doubling CO2 is between 2.6 and 4.1 degrees Celsius increase, so I’ve done something wrong somewhere. What was it?
Comment by Steffen Christensen — 3 Aug 2006 @ 11:28 AM
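The iterated-feedback arithmetic in the comment above can be checked in a few lines. This is just a sketch of the commenter's own reasoning, assuming (as the comment implicitly does) a no-feedback Planck response of about 4 W/m^2 per K; it reproduces the quoted 1.45 C figure, though the corrected bookkeeping appears later in the thread.

```python
# Sketch of the iterated-feedback arithmetic in the comment above.
# Assumed numbers: a 1 K no-feedback response per doubling, a Planck
# restoring term of ~4 W/m^2 per K, and a combined feedback of
# 1.25 W/m^2/K (the mean of the quoted 0.85-1.7 range).

planck = 4.0      # W/m^2 per K, assumed no-feedback damping
feedback = 1.25   # W/m^2 per K
dT0 = 1.0         # K, no-feedback warming for 2xCO2

# Iterating "warming -> extra feedback flux -> extra warming" is a
# geometric series with ratio f = feedback / planck:
f = feedback / planck
dT_iterated = dT0 / (1.0 - f)   # closed form of the series

print(round(f, 4))            # 0.3125
print(round(dT_iterated, 2))  # 1.45, as in the comment
```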
I’ll 2nd the remark in #1. The preliminary discussion above posits a straight 1C rise in temps per doubling of CO2 without a consideration of feedbacks. The graph, then, has a value marked ALL which looks like it posits a 1C (or less) rise in temps due to all the forcings under examination. I presume that means all the interactions of positive forcings, negative forcings, and associated effects. So, the combined rise in temps looks to be 1C for CO2 alone and around 1C (or less) when the feedbacks are included in the equation. A total of 2C (or less). How does one get to the 2.6-4.1C rise predicted in the first sentence of the article?
You must have been anticipating my question: What is the magnitude of the negative feedback from convection and conduction (& the associated negative Water Vapor feedback) that results when the added GHGs (& positive WV feedback) result in warming the atmosphere? Is the convection effect ALL included in the “lapse rate feedback”? But what is the magnitude of conduction (electron movement) feedback?
It seems to me that when the anthropogenic GHGs result in atmospheric warming by keeping the absorbed & quickly released radiation energy in the air longer than it would have been without the GHGs, then the subsequent atmospheric warming results in a larger temperature differential between the ground and space. This differential will drive larger (faster) convection currents (aka stronger hurricanes) and larger vertical conduction currents (aka more vertical movement of electrons and more lightning to cause more wildfires). The question is what is the magnitude of this increased transportation of NON RADIATIVE energy or negative feedback (or decreased energy residence time in the air) and are these effects accounted for in the computer simulation programs? Is there a reference for the discussion of conduction effects anywhere?
I visualize this ground-to-space energy transfer process as a set of 5 (or more) “resistance” pathways in parallel connecting the ground and space: 1. direct radiation to space; 2. radiation through water vapor absorption, which is temperature dependent; 3. radiation through GHG (mostly CO2) absorption other than WV; 4. convection, or physical transport of hot air molecules/wind energy; and 5. conduction, or transport of energy by electron movement. If you increase the “resistance” in one leg (e.g. add CO2, and thus raise the air temperature owing to the increased residence time of the GHG radiation energy in the added 200 ppm/century of CO2), then the energy current will compensate somewhat and go through the other paths (e.g. faster movement of 1,000,000 ppm of air) to get from ground to space, maintaining the equilibrium energy-in-equals-energy-out balance. But what are the relative magnitudes? And the net result?
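The parallel-“resistance” analogy above can be made concrete with a toy calculation. All conductance values below are invented purely for illustration; nothing here is a measured flux.

```python
# Toy sketch of the parallel-"resistance" analogy in the comment above.
# Heat flow through parallel paths adds by conductance (1/resistance).
# All numbers are illustrative, not physical values.

paths = {
    "direct_radiation": 10.0,   # arbitrary units
    "water_vapour_band": 5.0,
    "co2_band": 3.0,
    "convection": 6.0,
    "conduction": 0.1,
}

def total_flow(conductances, driving_difference):
    return sum(conductances.values()) * driving_difference

base = total_flow(paths, 1.0)

# "Adding CO2" = increasing resistance (reducing conductance) in one leg:
paths["co2_band"] *= 0.8
reduced = total_flow(paths, 1.0)
print(reduced < base)  # True: same driving difference now moves less energy

# To restore the same total flow, the driving difference (standing in for
# the surface-to-space temperature contrast) must rise above 1.0:
new_difference = base / sum(paths.values())
print(round(new_difference, 4))  # > 1.0: the system "warms" to compensate
```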
So greater/lesser albedo is a feedback, but what about carbon releases from melting permafrost & ocean hydrates due to the warming? Would this be a feedback, or a forcing? Or maybe a feedback that becomes a forcing?
Are there any models that include these factors? I imagine the uncertainties & inability to quantify with any precision would be an obstacle to that — which doesn’t mean it isn’t happening or won’t happen.
Comment by Lynn Vincentnathan — 3 Aug 2006 @ 12:39 PM
This is an excellent topic. The added accuracy and resolution of weather modeling, along with modeling of other factors, is what is needed to determine the warming, determine the effects (good and bad), and determine the best solution.
My primary comment about feedbacks is that they need to be considered in their time scale. Weather responds in hours, vegetation in months, snow cover in years (with seasonal variations) and ocean changes in decades. Those are only some and only rough.
Regarding weather feedback, there are lots of subtleties that can make a large difference in climate results. Some examples are cloud at night (positive) versus the day (negative), higher latitude cloud (positive) versus lower, high topped clouds (positive) versus low topped, diffuse or weak convection (positive) versus concentrated and strong, and a steady jet stream (positive) versus amplified.
Other feedbacks to be considered and modeled are rotting vegetation (positive), burning vegetation (negative), ocean stratification (positive) versus circulation (negative), soil moisture (negative), etc. many of which are tied to local weather conditions.
Gavin- Regarding your reply to my posting, the 2005 NRC Report discussed and illustrated feedbacks:
“A climate forcing is an energy imbalance imposed on the climate system either externally or by human activities. Examples include changes in solar energy output, volcanic emissions, deliberate land modification, or anthropogenic emissions of greenhouse gases, aerosols, and their precursors. A climate feedback is an internal climate process that amplifies or dampens the climate response to an initial forcing. An example is the increase in atmospheric water vapor that is triggered by an initial warming due to rising carbon dioxide (CO2) concentrations, which then acts to amplify the warming through the greenhouse properties of water vapor. Climate change feedbacks are the subject of a recent report of the National Research Council (NRC, 2003).”
The caption to Figure 1-2 in the 2005 NRC Report reads,
“FIGURE 1-2 Conceptual framework of climate forcing, response, and feedbacks under present-day climate conditions. Examples of human activities, forcing agents, climate system components, and variables that can be involved in climate response are provided in the lists in each box.”
As Brian wrote,
“The exact boundary between a feedback and a forcing depends on what is considered to be part of the ‘system’ and can sometimes be a little fuzzy. This discussion addresses just the feedbacks associated with the atmospheric physical system (see this earlier article for why that is), but other, less well understood, feedbacks (changes in land vegetation, biogeochemical processes, and atmospheric chemical feedbacks – see the NRC 2003 report), while potentially important, are not part of the generally understood definition of ‘climate sensitivity’. ”
A major finding of the 2005 NRC report is that these other feedbacks are very much a part of “climate sensitivity”. The feedbacks also cannot be adequately discussed without also discussing the climate forcings, as illustrated in Figure 1-2 of the NRC Report (http://darwin.nap.edu/books/0309095069/html/13.html).
The climate system is clearly more than the “atmospheric physical system”. Note that the heading of Brian’s weblog is “Climate feedbacks”, not “Feedbacks with the atmospheric component of the climate system.”
[Response: I don’t really disagree. The climate is sensitive in a much more complex way to emissions etc. than is encapsulated in the traditional “climate sensitivity” metric. I think Brian was clear that he is talking about the feedbacks that determine the variation in the classically defined “climate sensitivity” – and within that limited definition (which was clearly outlined) these results are very interesting. We had a long discussion of the wider issue of ‘how sensitive climate is’ (in the larger sense) in the comments on the recent carbon cycle feedbacks post. It seems to me that as long as one is clear about what is being talked about, you should be fine. But talking about classical ‘climate sensitivity’ in no way implies that there is nothing else worth talking about. -gavin]
It seems complex, but maybe it is not. Given that there is OLR at all height levels, and CO2 equally distributed, I presume there would be warming at all levels of the troposphere, causing a potential increase in water vapor feedback everywhere up to the tropopause (water vapor is rare in the stratosphere owing to the tropopause inversion). Convective transport, the negative lapse-rate feedback, is a bit confusing, since convection distributes water vapor to higher altitudes, some of which precipitates; but I see the increased presence of water vapor as the main source of temperature increase everywhere in the troposphere. I look at convection slightly differently than the results suggest: the overall effect over time would be a slight but significant feedback increase, due to water vapor being interspersed more uniformly over longer periods of time. This is seen in the polar regions (more rain during summer) and in the tropics (less rain, as saturation levels are not reached owing to higher temperatures).
Re: #1 Thanks for pointing out the discrepancy between the “All” column and the sum of the other feedback terms. The values for “All” are actually mis-labeled in the figure (an erratum is to be written!). They are, in fact, the “effective sensitivity” from Table 1 of Soden and Held (2006). The effective sensitivity can be computed by summing the individual feedback terms, including the Planck radiative damping (which is roughly -3.2 W/m2/K; see Table 1 of Soden and Held), and then flipping the sign. The results of this are listed in Table 1. Or, in ascii math, dT = dQ/eff_sens, where dT is the temperature response and dQ is the radiative forcing. My apologies for the confusion and thanks for pointing it out.
[Response: I edited the figure to make this clearer. One other point that should be made clear is that the ‘traditional’ climate sensitivity (deg K/(W/m2)) is simply the inverse of this number, and the sensitivity to 2xCO2 is then ~ 4/eff_sens. – gavin]
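For concreteness, the bookkeeping described in the comment and response above can be sketched as follows. The -3.2 W/m2/K Planck term and the 4 W/m2 forcing are the round numbers quoted in the thread; the feedback sums fed in are illustrative values, not figures from any specific model.

```python
# Sketch of the corrected bookkeeping: sum the feedback terms, include the
# Planck radiative damping, flip the sign to get the effective sensitivity,
# then dT = dQ / eff_sens.

PLANCK = -3.2   # W/m^2/K, radiative damping (Soden and Held 2006, Table 1)
F_2XCO2 = 4.0   # W/m^2, nominal forcing for doubled CO2

def warming_for(feedback_sum):
    eff_sens = -(feedback_sum + PLANCK)  # sum terms + Planck, flip sign
    return F_2XCO2 / eff_sens            # dT = dQ / eff_sens

# Illustrative feedback sums spanning the quoted range:
print(round(warming_for(2.2), 2))  # eff_sens = 1.0  ->  4.0 K
print(round(warming_for(1.5), 2))  # eff_sens = 1.7  ->  2.35 K
```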
Re #2. Roger – The subject of radiative forcings is an equally interesting one and I certainly do not mean to suggest that uncertainties in radiative forcings are not an important part of the problem. Regarding the connection between feedbacks and forcings, I believe this reflects, in part, the limitations of defining forcings in terms of tropopause level fluxes. I personally like the “adjusted troposphere and stratosphere” forcing approach outlined by Shine et al.,2003 (http://www.agu.org/pubs/crossref/2003/2003GL01814.shtml) which they argue provides a more reliable metric for relating climate forcing with the associated climate feedbacks and temperature response. As far as our calculations, we used the climate forcing estimates and uncertainties cited in TAR. It should be noted, however, that to my knowledge there has never been a formal intercomparison between GCMs of the radiative forcings that drive their scenario responses. As discussed in our paper these also represent a potential source of error in our estimates of the cloud feedback.
[Response: Actually, we just contributed to such a comparison for AR4. The preliminary range of actual (adjusted) forcings for 2xCO2 is between 3.5 and 4.2 W/m2 – comparable to the +/-10% error bars you used. -gavin]
Unless Lynn Vincentnathan’s question has already been answered, I too have wondered whether the strong potential for increased Arctic carbon and methane release, as a result of melting permafrost, has been factored into any model.
My question: how does the GISS GCM ModelE treat the biogeochemical release of Arctic CO2 and methane?
It’s my understanding that buried methane hydrates also exist in areas of the permafrost, in addition to the deep ocean; if permafrost continues to melt, the methane and CO2 from decay are very likely to be released into the atmosphere.
With respect to evidence of increased climate sensitivity, I recall a 2006 paper from the journal Paleoceanography: “A multiple proxy and model study of Cretaceous upper ocean temperatures and atmospheric CO2 concentrations” (Bice, K.L., Birgel, B., Myers, M.A., Dahl, K.A., Hinrichs, K., & Norris, R.D.).
Plankton ratios indicated very high tropical Atlantic ocean temperatures; as the press reported, the water of 100 million years ago was warmer than a hot tub.
On a general pattern for systems: as different phases are reached, relationships change. I don’t see how the Earth’s climate system, with its driving currency being the phase-state change of water, would be any different. At some increased energy state, a new equilibrium point is likely to dominate.
I have heard several scientists exclaim that it’s not what we know that we are concerned about; it’s the surprises that worry us the most. After reading the draft of the next IPCC report, I was left to wonder whether, in the name of agreement, the “policymaker” warnings of future warming already in the pipeline (which we have bought with current-day emissions) were stated forcefully enough. As most of us would agree, there seems to be a lot at stake.
I would also like to thank readers #1,#4, and #5 for pointing out this error in the figure. It is interesting how these things get through the review process!
Another way of stating the results from this paper is that the feedbacks that we are moderately confident about (water vapor, lapse rate, and snow/sea ice albedo) seem to generate a sensitivity near the low end of the canonical range, with the more uncertain cloud feedbacks then providing the positive push, in these models, to generate all of the higher sensitivities. I think the picture that many of us had, speaking for myself at least, was that the first set of feedbacks brought us with moderate confidence to the middle of the canonical range, with cloud feedbacks, both positive and negative, then providing the spread about this midpoint. One evidently has to argue for a significantly positive cloud feedback to get to the 3K sensitivity that various empirical studies seem to be pointing towards.
We needed to make a lot of approximations in this analysis, especially for the cloud feedback term, because of the limitations of what we could do with the model results that have been archived, so it will be interesting to see if this picture holds up. If, in fact, this is an accurate diagnosis of what the models are doing, why is it that they all have positive cloud feedbacks? This is in itself a bit surprising given the diverse schemes used to predict clouds in these models.
I’m glad to see the nomenclature problem acknowledged — by seeing “classic ‘climate sensitivity'” used above by Gavin in the inline response to #10.
But please don’t change the meaning of the original term, even by adding “classic” — beware the New Coke confusion.
Nor do I recommend varying it.
Imagine: Classic plus unexpected natural events — “baroque climate sensitivity”?
— and anthropogenic events — “modern climate sensitivity”
— and unexpected events caused by anthropogenic warming — “hiphop climate sensitivity”?
I suggest leaving alone the original defined term (at equilibrium, at 2xCO2, defines climate sensitivity).
Too confusing otherwise.
Does climatology have no simple term for what people are really asking about — how fast things will change?
Climate fungibility? climate fragility? Climate vector sum divergence?
“Dangerous Climate Change” is accurate, eh? Arrrrgh.
Regarding moisture, especially in the tropics. I just got back from spending a few weeks in the tropics in SE Asia. Here are some qualitative observations. This is related to being at the edge of some typhoon outflow. With the relatively high sun angle still in place, prior to the approach of the disturbance it was very hot and muggy, with much heat trapped in the boundary layer especially in urban areas. With the onset of typhoon / monsoonal mode, firstly there was enough wind to stir things up and secondly the clouds and rain resulted in significant cooling overall. Without the muggy stillness, less heat was trapped in the boundary layer at night.
Changing gears to the mid latitudes. Witness the synoptic progression in California since mid Spring. First we had the persistent Siberia Express / deep digging trough until June. It slackened a bit in June but nonetheless we had a cold and moist June. We had a “normal” Summer pattern for about a week (e.g. Pacific High, onshore breeze, warm inland). Then the “heat wave” set in, driven by three weeks of a rather non wavy jet, coupled with a triple-barrel high and subsequent offshore flow. Interestingly, such a configuration is far more typical of Fall than Summer here. Coupled with the July sun angle, it meant oppressive heat. When this broke down last week, it broke down in the extreme. We got the major onshore push and a 3K ft deep marine layer, well intruding. Now there is a trough digging, and a cut-off low progged to be in place by the weekend. It will be interesting to see what we get the rest of this month. The cut-off may actually yield some pops. A cut-off doing that in September would be unremarkable, but in August it would be unusual – August rain here is usually from the monsoon, not a cut-off.
Radiation is a forcing, as are CO2 and methane. However, water vapour is a feedback? Does that mean that carbon and methane release from organic matter decay is a forcing, as ultimately it is just gas and hence a forcing?
Earth’s albedo is a feedback because it is caused by a forcing?
Hi Pete, forcings are external to a system and feedbacks are part of the system. Climate forcings are those factors that affect climate but aren’t affected by it. So methane from plant decay is mostly a feedback since it changes when climate changes. Methane released from volcanoes and man-made methane are forcings.
re: 19. This is where I understand the border between a ‘forcing’ and a ‘feedback’ becomes blurred. The original input to the system of doubling the CO2 level I think we can call a ‘forcing’. If one of the effects of doubling the CO2 level is to generate some more CO2 from forest fires, or releasing methane from decaying organics, I think we can term the secondary inputs as ‘feedbacks’. Sound good to the rest of you?
In response to 7, Lynn Vincentnathan: I am no climatologist, but from what I have read, if the model knows how to calculate it, then it is a feedback; if you have to tell the model, then it is a forcing.
In your example, if the model has formulas for how much CO2 is released from a certain square of permafrost with a given rise in temperature, then this release is a feedback. If there is a table where you tell the model “add X tons of CO2 in this square for this year”, then it is a forcing. That is why Pinatubo and LA will always be forcings, unless you can model their behaviour – and then you should drop climatology and get rich in volcanic eruption prediction or politics.
Comment by Ezequiel Martin Camara — 3 Aug 2006 @ 5:57 PM
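The prescribed-versus-computed distinction in the comment above can be illustrated with a toy energy-balance model: the forcing is a table the model is “told”, while the feedback flux is calculated from the model’s own state. All parameter values here are invented for illustration and correspond to no real GCM.

```python
# Toy energy-balance model illustrating forcing (prescribed) vs.
# feedback (computed from the model state). Illustrative numbers only.

damping = 3.2        # W/m^2/K, Planck-like restoring term
feedback = 1.5       # W/m^2/K, amplification computed from T
heat_capacity = 8.0  # W*yr/m^2/K, toy mixed-layer heat capacity

# The forcing is a prescribed table ("add X W/m^2 in this year"):
forcing_table = {year: 4.0 for year in range(200)}

T = 0.0  # temperature anomaly, K
for year, forcing in forcing_table.items():
    feedback_flux = feedback * T             # depends on T: a feedback
    imbalance = forcing + feedback_flux - damping * T
    T += imbalance / heat_capacity           # simple Euler step, 1-year dt

print(round(T, 2))  # approaches 4.0 / (3.2 - 1.5) = 2.35 K
```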
I could easily see permafrost methane starting out as a feedback but becoming a forcing. Eric(s) brings up an interesting point: what about factors that are external to the climate AND are affected by it, like the frozen methane?
In this presentation, what is considered to be “top of the atmosphere”? I suspect it is the tropopause. Also, “relative humidity considered almost constant” is a bit misleading, since a parcel of air at 50% RH at +30 C holds a whole lot more water vapor molecules than the same parcel at 50% RH at -30 C. Is there any estimate, perhaps in gigatons, of the yearly increase in water vapor? Can this increase be measured using satellite technology?
[Response: No. Top of the atmosphere is the top of the atmosphere (zero mb). The tropopause is usually considered to be a key level for the forcing and is around 100 to 200 mb depending on latitude. -gavin]
The principal by-product of hydrogen combustion is water vapor. If the hydrogen economy becomes a reality, does this mean that water vapor will become a forcing mechanism rather than a feedback mediator? If water vapor is the most efficient greenhouse gas, what does such an economy portend?
[Response: It’s very difficult to alter the water vapour distribution directly since the residence time is so short and the natural fluxes so strong. There is some thought that leakages from hydrogen tanks might affect atmospheric chemistry but the impacts are expected to be small. – gavin]
Gavin- Regarding your response to my Comment(#10), we agree on the need for a broader definition of climate sensitivity. Is Real Climate going to post such a weblog based, at least in part, on the 2005 NRC report? This needs to be more than just the carbon cycle or the global average surface temperature trend, but based on climate metrics that have the most direct impact on society and the environment. As discussed on the Climate Science weblog,
“The needed focus for the study of climate change and variability is on the regional and local scales. Global and zonally-averaged climate metrics would only be important to the extent that they provide useful information on these space scales.”
Thanks so much for the clarifications, Brian and Isaac, and thanks for the forcings numbers, Gavin! My new revised calculation has the range of values for total sensitivity at 3.2 − 0.85 to 3.2 − 1.7 W/m^2/K, i.e. 1.5 to 2.35 W/m^2/K, in the same positive sense as the other contribution components. Using 4 W/m^2 as the baseline for doubling CO2 gives me a temperature increase at equilibrium of 1.6 to 2.4 degrees C. Using Gavin’s range of estimates gives a full range of 1.5 to 3.0 degrees C, finally in accord with observational constraints. It still seems a little low, which nicely explains Isaac’s statement that cloud sensitivity must be at the high end of the range to get 3 K. Any chance that your model is missing a significant source of sensitivity? You wouldn’t need much to move the numbers up to the 2.6 to 4.1 K range from other work. Alternately, your estimates could all be fine, and the forcing from doubling CO2 could actually be on the low end of Gavin’s estimates. Not being a climate scientist, I can’t say which is more likely… any thoughts?
Comment by Steffen Christensen — 3 Aug 2006 @ 11:55 PM
I concur with Steve Sadlow. While I’m grateful for the relief, I’ve never seen this in my decade in S. Calif.
This may be a dumb question but I’ll ask anyway [I have no shame so says my mother] and more importantly I couldn’t seem to find an answer using web searches: What is the local (e.g. regional) effect of burning wood from forest fires, brush clearing for farming and ranching, cooking and heating on the local climate? The amount of wood/brush consumed in natural and man-made fires is quite significant it would seem. Does the moisture laden smoke close to the ground raise surface temps to any significance or have any effect other than on the carbon cycle? Or if it does, does it cancel itself out when the moisture condenses out and the aerosols remain producing temporary cooling? Or would it do something weird like warm things slightly in winter and cool things slightly in the summer?
I’m just curious. A link to a reliable explanation would suffice if someone knows of one.
Steffen #27, you seem to assume that “observational constraints” are an additional estimate of climate sensitivity independent of the models. However, if you are thinking of Annan’s work for instance, models were central to his analysis of the Last Glacial Maximum and of the Maunder Minimum. Furthermore, Annan’s work and the earlier work of Gregory are both based on the assumption that all radiative forcings are equivalent, and therefore that past sensitivity of the climate to solar forcings can be used to estimate sensitivity to CO2 forcing. This assumption is open to question, given the unexplained strength of paleoclimate correlations to apparently weak changes in solar forcing.
The solar irradiance is 1367 W/m^2 with the 11-year solar cycle varying by +/- 0.6 W/m^2 and longer-term changes varying as much +/- 3.0 W/m^2 over the past 1,000 years.
If these changes are not reflected in the climate record, why would we expect 4 W/m^2 to make such a difference?
[Response: To compare like with like you need to divide the solar irradiance changes by 4 and multiply by 0.7 to account for the geometry and albedo effects. So even your (rather high) estimate of the long-term solar changes amounts to only about 0.5 W/m2 – significantly smaller than the ~1.6 W/m2 net estimate of all anthropogenic forcings since the pre-industrial era. That’s why it has been such a challenge to tease out a consistent solar response in the paleo-record. – gavin]
I was actually just trying to lookup how much difference the Milankovitch cycles cause in W/m²… Particularly, how much of an increase in the amount of energy absorbed was enough to move the earth’s climate into the current interglacial.
Anyway: the 1367 W/m² you quote is how much radiation hits the earth, not how much is absorbed. E.g., fresh snow reflects 90% of the sunlight that hits it. But even factoring in the Earth’s albedo, I’m not quite sure how an extra 4 W/m² absorbed leads directly to a 1 K change (and more with feedback effects).
Re #33 (the question about why the forcings from the sun’s variability aren’t comparable to that from doubling CO2):
In addition to needing to correct for the earth’s albedo, as noted in #34 (which entails multiplying the solar constant by something like 0.7), the solar constant has to be divided by 4. The reason for this is that although the output of the sun is 1367 W/m^2 at the radius of the earth’s orbit, the earth’s cross-section to this radiation is pi*r^2 while its total surface area is 4*pi*r^2. [From a physical point-of-view, you can also look at it like this: Yes, on the sunny side of the earth, there is 1367 W/m^2, but on the opposite side there is 0 W/m^2…And, even the 1367 W/m^2 on the sunny side is only true at the point where the sun is directly overhead. At other points, it has to be corrected by an obliquity factor. But noting the ratio of 4 between the earth’s surface area and its cross-section to solar radiation is a way to determine the correction factor quickly without having to do this calculation in this more complicated way.]
So, at the end of the day, your estimates of the top-of-the-atmosphere solar forcing variation are almost a factor of 6 higher than the correct number to use.
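The two corrections described above (divide by 4 for the sphere-versus-disc geometry, multiply by ~0.7 for albedo) can be applied directly to the ±3.0 W/m^2 long-term figure quoted in #33:

```python
# Applying the geometry and albedo corrections from the comments above to
# a TOA solar-irradiance change, so it can be compared with a CO2 forcing.

albedo = 0.3
delta_irradiance = 3.0  # W/m^2, the (high) long-term estimate quoted in #33

effective_forcing = delta_irradiance * (1 - albedo) / 4
print(round(effective_forcing, 3))  # ~0.525 W/m^2, i.e. Gavin's ~0.5

# Ratio between the raw number and the comparable forcing:
ratio = delta_irradiance / effective_forcing
print(round(ratio, 1))  # ~5.7, the "almost a factor of 6" above
```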
Re: 26: “If water vapor is the most efficient greenhouse gas, what does such an economy portend?”
Response by Elifritz: “The water vapor itself is trivial. Do I have to explain it to you, or can you click your way into science literacy?”
I’ll bite on this snide response. (I ‘click’ to this site for some enlightenment.) Why is the water vapour trivial? If, for example, we replace all current fossil fuel and nuclear energy generation with H2 (assuming it was somehow generated with zero emissions), and the rest of the world somehow achieves our lifestyle, that’s 6.5B people spewing water vapour.
Is it trivial because it promptly rains out or disappears into bodies of water? Is it trivial because, even in the 6.5B-people-on-H2 scenario, the water vapour we would emit is dwarfed by that in the atmosphere from natural sources? (Focusing on climate, because presumably a city full of water-vapour-emitting cars would increase local humidity, molds, etc)
Re: Chris Rijk, #30. Converting the 4 W/m^2 absorbed to a 1 K difference can be done by a slightly modified version of the Arrhenius equation. Arrhenius figured out the temperature of a blackbody at a given distance from the sun by equating the incoming energy, bouncing energy off using albedo, and assuming that the round body acts as a black-body emitter with emissivity 1. The zeroth-order approximation of climate comes from adding in a factor to this equation accounting for heat trapping; i.e. accounting for long-wave radiation emissivity. The final equation looks like this: Temperature = ((1-albedo) * solar_luminosity / (16 * sigma * emissivity * pi * solar_distance^2))^0.25. If you use SI units, the solar luminosity of the sun is 3.844E26 watts, the solar distance is 1 AU = 1.496E11 meters and sigma is the Stefan-Boltzmann constant = 5.6704e-8 W/m^2/K^4. The bond albedo of Earth is around .29 (the authors above are using .3), and you have to run the equation backwards to get the long-wave emissivity of the Earth in pre-industrial times. If you take the mean preindustrial temperature as 288.0 K (= 14.85 degrees Celsius), sub it in the equation, the time-averaged long-wave emissivity comes out to .6220. You can then compute the average intensity of the Earth’s outgoing radiation using Intensity = sigma * emissivity * Temperature^4. This comes out to 242.6 W/m^2 for preindustrial Earth. You then suck back 4 W/m^2 from that number for the doubled CO2, dropping it to 238.6 W/m^2, and recompute the emissivity for the new Earth. This gives .6117. Plug that back in the temperature equation, and voila, new Earth has a balanced temperature of 289.2 K. Subtract the two temperatures to get 1.2 K per 4 W/m^2 extra absorbance, as required. Or you could use calculus and get essentially the same number, which is what they are doing.
Comment by Steffen Christensen — 4 Aug 2006 @ 11:39 AM
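For readers who want to check Steffen's numbers, here is a rough sketch of the same calculation in Python. The constants are the ones he quotes; the function names are mine, and this illustrates his back-of-the-envelope method, not what the GCMs do.

```python
import math

SIGMA = 5.6704e-8   # Stefan-Boltzmann constant, W/m^2/K^4
L_SUN = 3.844e26    # solar luminosity, W
D_SUN = 1.496e11    # Earth-Sun distance, m (1 AU)
ALBEDO = 0.29       # Earth's Bond albedo

def temperature(emissivity):
    # Arrhenius-style balance: absorbed sunlight = emitted long-wave radiation
    return ((1 - ALBEDO) * L_SUN /
            (16 * SIGMA * emissivity * math.pi * D_SUN**2)) ** 0.25

def emissivity_for(T):
    # "Run the equation backwards": find the emissivity that yields temperature T
    return (1 - ALBEDO) * L_SUN / (16 * SIGMA * math.pi * D_SUN**2 * T**4)

eps0 = emissivity_for(288.0)      # preindustrial long-wave emissivity, ~0.622
I0 = SIGMA * eps0 * 288.0**4      # outgoing intensity, ~242.6 W/m^2
eps1 = eps0 * (I0 - 4.0) / I0     # "suck back" 4 W/m^2 for doubled CO2
dT = temperature(eps1) - 288.0    # ~1.2 K, as in the comment above
```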
With respect to Steffen’s comments in #28:
I don’t know what calculations he is doing. I don’t myself know how to calculate any of this, but consider Gavin’s remark following #13. He says sensitivity to doubling CO_2 is given approximately by 4 divided by effective sensitivity. Working backwards, that would yield a range of 2.6 to 4.1 K in CO_2 doubling sensitivity from a range of about 1 to 1.4 in effective sensitivity. That seems broadly consistent with the range for effective sensitivity shown in the diagram. But I presume it is actually a bit more complicated than that for reasons similar to those I raised in my comment #1.
Part of the trouble with reading these comments, particularly those getting into technical detail is that you don’t know how seriously to take them. Some people are talking with authority based on a thorough understanding of the research literature in the subject. That would include the RC moderators. Others are doing back of the envelope calculations based on what they imagine the theory should be. It is sometimes hard to tell one from the other.
[Response: Well, our stuff is always in turquoise…. In the Soden and Held calculation they used 4.3 W/m2 for the instantaneous forcing, and so that would give a range of 1 to 1.7 W/m2/K for their effective sensitivity. – gavin]
[Response: Whoops. Scrub that last comment. I misunderstood what they did. Their 4.3 W/m2 is the forcing from the scenario they were using, not from a 2xCO2 calc. -gavin]
wrt Gavin’s comment on #24, while his definition of TOA makes sense, the effective level from which the earth radiates is about 6 km (much lower than the tropopause), and one of the principal effects of raising greenhouse gas mixing ratios is to raise this level and make it wetter.
[Response: Point taken. In practice ‘0 mb’ is difficult to precisely locate and so for various purposes levels lower down (i.e 0.1 mb (~60 km)) end up being used. However, in a model context, ‘0 mb’ makes sense and so all model-related uses of TOA really do refer to the very top. – gavin]
Yep — “zero” depends on the sensitivity of the instrument. And will change all the time! Each time the sun flares the upper atmosphere expands. Space Station Alpha (the ISS) orbits at zero millibars, I’m quite sure, but still within the atmosphere. They photograph the aurora while flying through it.
Re #42: Since the dependence of the forcing on CO2 levels is roughly logarithmic and 1% CO2 corresponds to roughly 36X the pre-industrial values, this would correspond to roughly 5 doublings, so you would take the estimates for doubling and multiply by 5…I.e., if the climate sensitivity were 3 K per doubling, you would get a temperature ~15 K higher.
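A sketch of that arithmetic (the 280 ppm preindustrial value and the 3 K sensitivity are illustrative assumptions):

```python
import math

C0 = 280.0          # assumed preindustrial CO2, ppm
C = 10000.0         # 1% CO2 by volume, expressed in ppm
SENSITIVITY = 3.0   # assumed climate sensitivity, K per doubling

doublings = math.log2(C / C0)   # ~5.2 doublings
dT = SENSITIVITY * doublings    # ~15.5 K warmer, all else being equal
```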
However, there are probably lots of reasons why such an estimate would not be very accurate…most of which probably relate to the caveat “all else being equal”. In fact, the further you go back in time, the more things are likely to not be equal. For example, the continents would be in different places and the amount of other stuff in the atmosphere (such as aerosols) might be very different. And, since the last time when CO2 levels were believed with confidence to be higher than they are today is on the order of 20 million years ago, it is indeed likely that such differences would have been significant. (In fact, to get up to levels of roughly 0.5% for CO2, it looks like you have to go back close to 200 million years according to this graphic: http://www.grida.no/climate/ipcc_tar/wg1/figts-10.htm )
Also, I don’t know if climate models really do forecast the temperature rise to remain approximately linear on the forcing (and the forcing to remain approximately logarithmic on the CO2 concentration) up to such large values, since a paper in Science [Daniel P. Schrag and Richard B. Alley, Science, Vol. 306, pp. 821-822 (2004)] commented that current climate models could not simulate the warmth of some previous climates such as the Eocene of ~50 million years ago (especially the high-latitude warmth in continental interiors) no matter how high one turned up the CO2 levels. [Perhaps the experts can weigh in on this?]
Is the quoted 4 W/m^2 forcing for doubling CO2 a global average? Where could I find how this changes with latitude or other important factors? I assume CO2 is more important at high latitudes due to less masking by water vapor.
Also, I’ve seen statements that the CO2 forcing is logarithmic with concentration. Is some data available showing the exact dependence?
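On the logarithmic dependence: a commonly cited simplified expression (from Myhre et al., 1998) fits the CO2 forcing as 5.35·ln(C/C0) W/m^2. A quick sketch, for illustration only:

```python
import math

def co2_forcing(C, C0=280.0):
    # Simplified logarithmic fit to CO2 radiative forcing (Myhre et al. 1998), W/m^2
    return 5.35 * math.log(C / C0)

doubling = co2_forcing(2 * 280.0)   # ~3.7 W/m^2, close to the ~4 W/m^2 quoted in this thread
```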
Sorry if I gave the impression that I am being authoritative, it wasn’t my intent. With respect to Brian’s post #12, I reran the calculations with a couple of different sets of numbers. The ones that worked were consistent with the data in the graph and with this statement of Brian’s: “The effective sensitivity can be computed by summing the individual feedback terms and including the Planck radiative damping (which is roughly -3.2 W/m2/K; see Table 1 of Soden and Held) and then flipping the sign,”. I took that statement to mean that you take the data graphed in the effective sensitivity column, add -3.2 W/m^2/K, and flip the sign to get the summed sensitivity. The data in the graph ranges between 0.85 W/m^2/K and 1.7 W/m^2/K, which gives us an ordinary summed sensitivity range of between 1.5 W/m^2/K and 2.35 W/m^2/K, which is fully consistent with the data presented, and follows your intuition of #1 that the errors are not independent. To get the temperature increase at equilibrium, I took the 4 W/m^2 from doubling CO2 giving a 1 K rise in the absence of feedbacks noted in the original post, and added the 1.5 W/m^2/K feedback rise on top of that, using the same 4:1 conversion ratio. This raises the temperature another 0.375 K. That temperature rise, in turn, must get amplified by the same feedback effects as well, adding 0.56 W/m^2 more input. That raises the temperature another 0.141 K. This continues until the terms get so small that they don’t count any more. To three decimal places, this gives a 1.600 K temperature rise at equilibrium. Same calculation for the upper bound gives a 2.424 K temperature rise. My post in #38 comes straight out of Physics class or a first class on physics in climate, for instance http://www.geo.umass.edu/courses/climat/radbal.html .
Comment by Steffen Christensen — 4 Aug 2006 @ 5:21 PM
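Steffen's geometric-series iteration can be written as a short loop. This is a naive sketch of the procedure described in his comment (using the same 4 W/m^2 per K conversion), not anything from the Soden and Held paper:

```python
def equilibrium_rise(feedback, forcing_per_K=4.0, dT0=1.0, tol=1e-6):
    # Sum the feedback series: each increment of warming adds feedback*dT of
    # extra forcing, which converts back into more warming at 4 W/m^2 per K.
    total, dT = 0.0, dT0
    while dT > tol:
        total += dT
        dT *= feedback / forcing_per_K
    return total

low = equilibrium_rise(1.5)    # ~1.600 K, the lower bound in the comment
high = equilibrium_rise(2.35)  # ~2.424 K, the upper bound
```

The same numbers follow from the closed form of the geometric series, dT0 / (1 − feedback/forcing_per_K).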
Re: Martin #31, thanks, good point. I thought I gathered from other posts here that there are some observational constraints on climate sensitivity based on historical data. Certainly I can’t expect models to give an independent test of the possible values, since they’re the same models. Of course, what is in and out of the models is a free parameter, and not all models use the same outside factors.
Comment by Steffen Christensen — 4 Aug 2006 @ 5:27 PM
But will even a little more damage from leaking hydrogen be a problem in the near term? The Arctic ozone, not just the Antarctic, remains very depleted (and this is a feedback issue, assuming the Arctic stratosphere too is getting cold enough for catalysis on the surface of ice clouds to happen).
“The severe Arctic ozone reduction in the winter 2004/2005 is analyzed … The average maximum ozone loss was about 2.1 ppmv at 475 K–500 K (~18 km–20 km). Over 60% of the ozone between 425 K–475 K (~16 km–18 km) was destroyed. The average total column ozone loss was 119 DU, ~20–30 DU larger than the largest previously observed Arctic ozone loss in the winter 1999/2000.”
On the good side I recall the current knowledge of hydrogen behavior is good enough that changes can be studied against a balanced baseline.
Well, interesting comments. Let me point out that the climate sensitivity estimates are the results of GCMs, and no napkin scribbles are going to reproduce these results – if they could, this would have all been figured out 100 years ago. What about a variability index along with a climate sensitivity index? – averages can be misleading when it comes to the dynamics of an oscillating system.
As I recall, the central problem with clouds in models was that grid scales in models were far too big (for reasons of computing power) to explicitly model cloud formation, so various parameterizations were used. If different parameterizations produce similar results, that’s somewhat encouraging. The original notion was that cloud tops have high albedo, reflecting incoming light to space and producing a cooling effect – but where in the troposphere the clouds form was also important. It does make some sense that the feedbacks are largely positive if one thinks about shady day UV sunburns – but I was surprised. Thanks for tackling this topic, also.
I think this really points out the need for NASA to refocus on climate monitoring programs and to abandon the whole man-on-Mars concept. The potential surprises in the climate system are not likely to be revealed by theoretical modelling, but rather by observations and data collection. I won’t harp on any more about the disproved notion that glaciers are ice cubes, other than to point out that it was field data, not theoretical modelling, that led to that revelation. It looks like this info is starting to influence notions of glacial cycles, as well.
For those interested in the Milankovitch forcings and the glacial cycles, the recent issue of Science has interesting reports and an overview by Didier Paillard at http://www.sciencemag.org/cgi/content/summary/313/5786/455 on this issue – and a nice graph showing that modern CO2 concentrations are exceeding any seen in the last 3 million years.
There is also, in this issue, an article by Landsea et al. in which the recent record of tropical hurricane intensity is described as artifactual – the gist is that old data collection techniques underestimated hurricane intensity, so the observed trend of increasing intensity is just an artifact. Yet Landsea also uses the historical record of hurricane intensity as the basis for the North Atlantic Oscillation notion, stretching back to the early years of the last century – What!? In this case it’s good data, in that case it’s bad data? Looks like a bit of a logical contradiction there, as far as I can tell.
I’m not color blind, so I do understand when the RC moderators comment. But other people also go into elaborate technical detail. Sometimes this amounts to just throwing words around without any real understanding of the science. Other times real experts are responding, and there is a continuum in between. When RC moderators respond, it becomes clearer how sensible the comment was, but you don’t always do that, nor should you.
[Response: It was just a joke, not a criticism. Apologies if that didn’t come over properly. Your main point is very well taken though and we do often comment on the seemingly technical stuff that could be misleading to other readers. The obvious errors don’t need so much correction… – gavin]
I did think I understood the process you used to draw your conclusion. I taught many generations of calculus students about the geometric series. It seems a plausible way to argue to me, but what do I know? What I didn’t understand was why it was at variance with the estimate of 2.6 to 4.1 K. Presumably what any given model does is hard to analyze by such methods, and the Soden and Held paper is an attempt to grapple with that. I think we amateurs have to accept the 2.6 to 4.1 estimate as roughly correct whether or not it seems to square with a back-of-the-envelope estimate we can do. From what you’ve said since, I think you weren’t suggesting you had discovered some basic flaw but rather that you were trying to understand, like the rest of us, just what is going on.
In response to the comments about hydrogen producing water vapor, let’s not forget that combustion produces water vapor, taking methane for example.
CH4 + 2 O2 -> 2 H2O + CO2 + spare energy
The long residence time of CO2 (~100 years) compared to water vapor in the atmosphere is the central problem, as pointed out above. The choice of forcing and feedback is like the choice of system and surroundings – it is a matter of the most convenient description. Over very long time periods, one might even say that CO2 is a feedback and the Milankovitch cycles are the forcing; over ‘short’ time periods CO2 is a forcing and H2O vapor is a feedback.
Plants run this in reverse; solar energy + CO2 + H2O -> organic carbon + O2
So, a personal step you can take to help with the problem is to start planting trees, stop burning fossil fuels, and conserve energy as much as possible.
RE: “I’ve never seen this in my decade in S. Calif.”
One of the things many newcomers to California are surprised by is the innate variability of both weather and climate here. Even in the reputedly “mild” mediterranean (and sliver of marine west coast up north) zones, variation can be amazing, especially on decadal and multi decadal scales. In the remaining climate zones besides the alpine, extremes are the norm. Many newcomers witness a certain set of conditions for a few years and assume that it’s “normal.” I saw this with many who arrived during the drought dominated 1980s – they thought that it was normally that sunny and dry during the winter and some actually moved out during the rainy 90s (and 00s thus far) because they could not stand the winter gloom. On the other hand, anyone who arrived here after the late 80s got a very mild temperature situation right through the turn of the century. This year’s “heat wave,” while rare, was certainly not a surprise to me. When I was a youth in the 70s, we had several very hot summers where I lived at the time (SF Peninsula) and again, a few notable ones in the mid 80s. In contrast, the short sharp 100 plus wave in late May 2000 notwithstanding, the past 10 or so summers were very much dominated by deep onshore push and essentially sucked from a warmth perspective. To be honest, for me personally, it’s nice that we’ve had some real heat this year. Too bad it was only for a mere third of the summer. I’d not be surprised if we had no more real heat this summer.
Steve, I agree. Mark, to me this is what a normal California summer feels like. But normal depends on exactly what weather pattern you first experienced when you moved to the state.
I moved to Fresno from Dallas in ’78 and have been a resident of the state ever since, moving back and forth from Fresno to San Diego. This weather season seems much more in line with the type of weather the central valley experienced the first few years I was here in Fresno. Until this summer, the June, July, and August months seemed not quite as hot temperature-wise, but much more humid, with less variation in daytime temps. This year there is a nice temperature arc that, if graphed, would look much like a standard bell curve, where previous summers would be flatter in comparison.
Re: # 45. Sorry for the confusion Steffen. The last column in the figure represent the effective sensitivity. They range in value from 0.88 W/m2/K to 1.64 W/m2/K. If you assume a canonical value for the radiative forcing from doubling CO2 of 4W/m2, you can estimate the corresponding surface temperature change as dT=dQ/eff_sens. The low end of the range for these models is 4/1.64 = 2.4 K. The high end of the range is 4/0.88=4.5. So the estimated range for this set of models (2.4-4.5 K) is slightly larger than that referred to at the top of the article.
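For anyone following along, the arithmetic in that range estimate is just a division:

```python
dQ = 4.0          # canonical 2xCO2 forcing, W/m^2
low = dQ / 1.64   # highest effective sensitivity gives the low end, ~2.4 K
high = dQ / 0.88  # lowest effective sensitivity gives the high end, ~4.5 K
```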
Re: 19. Water vapor and surface albedo are feedbacks because their changes are driven internally by the change in surface temperature. It is not a distinction between gaseous and non-gaseous changes, as water vapor is itself a gas. If the amount of carbon and methane released from decaying organic matter increased or decreased in response to the temperature changes, then that would also provide a feedback onto the climate system. However, the increase in CO2 due to the burning of fossil fuels is a forcing, not a feedback, because the amount of CO2 emitted is determined by processes which are external to the climate system (energy consumption, population, etc.). The CO2 fossil fuel emissions do not arise from the change in temperature, as the water vapor/albedo changes do. Rather, they initiate the change.
I don’t want to speak for Steffen, but perhaps he wants an explanation for how dT = dQ/eff_sens is derived and why it appears to differ from the admittedly naive estimate he came to by a different route. My guess is that there is no good way to understand these matters except to learn about the physics and how it is applied in models starting from scratch. Even then, a complex model may give results that can’t be explained more simply. After all, if there were simple physical explanations for all these things, we wouldn’t need the models. But perhaps there is some easier way to do it in this case.
Re #37: Brian – That is a good question. The main reason why hydrogen emissions are unlikely to provide a significant radiative forcing is that the concentrations of water vapor in the atmosphere are not determined by the flux of vapor into the atmosphere, but rather by the temperature, or more precisely, by the saturation vapor pressure, which depends strongly on temperature. The fluxes of water into (evaporation) and out of (precipitation) the atmosphere are in close balance and very large compared to the amount of water stored in the atmosphere. (For example, if you were able to somehow double the rate of evaporation while holding precipitation constant, the concentrations of vapor in the atmosphere would have to double in ~10 days.) In climate models, the increase in water vapor has little to do with the increase in evaporation predicted by the model. In fact, models with the same increase in evaporation can have changes in water vapor which differ by a factor of 2 or more (and vice-versa). Instead, the change in water vapor in all models is tightly coupled to the simulated change in temperature. Thus, even if the increase in H2 emissions was comparable to the increase in evaporation predicted by models (roughly 5% by 2100), this increase would be balanced by an increase in precipitation rather than an increased storage in the atmosphere. However, as you point out, there may be important localized impacts of such changes.
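A crude way to see why the adjustment time is about ten days: divide the global-mean water stored in the atmosphere by the global-mean evaporation flux. The round numbers below are my own illustrative assumptions, not figures from the comment:

```python
storage_mm = 25.0      # global-mean precipitable water, roughly 25 mm (= 25 kg/m^2)
evap_mm_per_day = 2.7  # global-mean evaporation, roughly 1 m per year
residence_days = storage_mm / evap_mm_per_day   # ~9 days in the atmosphere
```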
Yesterday I sent an email to the NOAA Paleoclimatology people expressing surprise that their webpage on climate change had not been updated since 2000, and asked when we could expect an update. Today, I received this response from Bruce Bauer:
“Should be sometime this fall. We have an update essentially complete – it is in peer review at the moment. I’ll let you know when it is ready!”
I wonder who is doing the peer review? It will be interesting to see the product! Let us hope…
Brian, introducing a term like “effective sensitivity” is very confusing.
In signal processing two cases of amplification exist: transient amplification (high frequency) and equilibrium amplification (low frequency). Both terms are in fact special cases of frequency domain amplification. Can you indicate at which time scales the equilibrium takes place? Isn’t that typically 2 to 3 centuries?
see also: http://home.casema.nl/errenwijlens/co2/tcscrichton.htm
I don’t see what signal processing has to do with the matter. I could be wrong, but I don’t believe that is a proper analogy. It seems to me you should think more in terms of what happens when you heat a pan of water by suddenly raising the temperature. CO_2 sensitivity is a measure of what the final response would be after equilibrium was achieved, if we suddenly increased the CO_2 concentration by a set amount. It would generally take a while to reach equilibrium. It is not supposed to be a specific prediction but rather a basic parameter which it is important to know. But computer models can make a variety of projections based on sensitivity, on different continuous emission scenarios, and for different periods of time in the future. It is easy to find estimates for what the average global temperature will be in 2100, for example. The chart in the link you gave shows various growth curves for different emission scenarios and presumably some assumed implicit sensitivity. But the comment you made there is silly. It suggests that since up to 2030, the scenarios don’t show much difference, we can wait until then and then decide what to do. Among other things, that assumes the history up to 2030 won’t make any difference to what happens afterwards.
“As she stared down into a wide-mouthed plastic jar aboard the R/V Discoverer, Victoria Fabry peered into the future.
“The marine snails she was studying — graceful creatures with winglike feet that help them glide through the water — had started to dissolve.
“In 20 years of studying the snails, called pteropods, a vital ingredient in the polar food supply, the marine biologist from California State University, San Marcos had never seen such damage.
“In a brief experiment aboard the federal research vessel plowing through rough Alaskan seas, the pteropods were sealed in jars. The carbon dioxide they exhaled made the water inside more acidic. Though slight, this change in water chemistry ravaged the snails’ translucent shells….”
This is from the text that accompanied the video: “… about 20 meters down off the coast of Santa Barbara, Calif.: bubbles, millions of bubbles of methane – 20 times more powerful as a greenhouse gas than carbon dioxide.
The methane is bubbling up naturally from some of the enormous natural undersea reservoirs of the gas mostly locked into the frozen mud under the sea floor.
Scientists have just released video showing how, for the first time, they have been able to measure these natural up-wellings to tell whether, if large amounts of this methane ever thawed out from its deep sea beds, it would reach the atmosphere, rather than being absorbed in the water, and thus make the earth even hotter.
The findings of oceanographer Ira Leifer et al, published in a strictly peer-reviewed scientific journal, are that it would do just that.
In other words, all that undersea methane is a potential ‘positive feedback’ of catastrophic proportions.”
I wonder if a method might be devised that vacuums the released methane? I know there’re schemes for mining it, but why mine when it’s being freely released?
And one must wonder how many other methane beds are madly bubbling away from camera sight?
Thank you for the info and the link to the article. If I understood the article, the video is of a natural gas blowout that occurred in 2002, not a methane hydrate blowout. That’s a lot less scary.
Southern California is known for petroleum seeps, both on land and underwater. A famous one is the La Brea tarpits in Los Angeles. A pocket of methane bubbling up in shallow near-shore water is not particularly alarming. So I’ll stop hyperventilating. :-)
RE: #61 – Beyond the viral popups that killed my Mozilla, I have some real problems with the false claim of methane coming up beneath frozen mud off of SB. Firstly, what bubbles up there is actually mostly hydrogen sulfide, not methane. And it is from the same trends as the oil deposits, not some shallow deposits below so called frozen mud. And because the entire area of the So Cal coast is continental crust, and non abyssal, while certainly there are deep water temps down in the upper 30s, there ain’t no frozen mud. [edited]
Got facts? Who provides your information? I’m seeing this claim the gas is “not methane” suddenly a lot, it must be coming from some source people are reading criticizing this story. Please, where did you read it? And why do you believe them?
Got methane? Here’s what I found, among the cites I posted above, just before your #71.
“Compositional changes in natural gas bubble plumes: observations from the Coal Oil Point marine hydrocarbon seepfield….
Received: 22 January 2002/ Accepted: 22 July 2003/ Published online: 3 October 2003 Springer-Verlag 2003
Detailed measurements of bubble composition, dissolved gas concentrations, and plume dynamics were conducted during a 9-month period at a very intense, shallow (22-m water depth) marine hydrocarbon seep in the Santa Barbara Channel, California. Methane, carbon dioxide, and heavier hydrocarbons were lost from rising seep bubbles, while nitrogen and oxygen were gained. Within the rising seawater bubble plume, dissolved methane concentrations were more than 4 orders of magnitude greater than atmospheric equilibrium concentrations. Strong upwelling flows were observed and bubble-rise times were ~40 s, demonstrating the rapid exchange of gases within the bubble plume.
Regarding the posts on methane/hydrocarbon seeps and venting, I think the area of concern here is the northern permafrost regions, which are going to be warming a lot faster than the deeper ocean is – but shallow areas are a concern. I’m guessing that by the time the ocean warms that much, the thermal expansion will already have drowned most coastal cities.
For a more detailed report, see: http://www.iarc.uaf.edu/highlights/methane/index.php
“The largest source of natural gases, mostly composed of CH4, is stored in gas-hydrates beneath permafrost and the onshore permafrost reservoir is roughly estimated to be as much as 32,000 Gt (1 Gt = 10^9 tons). This is 10^6, or one million, times as much as the CH4 released in the atmosphere of all northern ecosystems. Dr. Shakhova feels that a very small disturbance of gas hydrates could cause catastrophic consequences within a few decades. Shallow bottom sediment and underlying permafrost have warmed approximately 15°C since the time they originated. The implications of this trend are that shallow off-shore gas hydrate deposits could become vulnerable (Fig. 2). She also notes that methane plumes found in the East-Siberian Sea (ESS) during the 1st and 2nd Russian-U.S. joint cruises during September of 2003 and 2004 may indicate decaying gas hydrates in thawing undersea permafrost.”
This is a big worry; I’m very interested to see what the next IPCC report makes of this.
Seeps happen to be rather inconvenient right now as any GHG added no matter the source adds to the problem. It’s said that seepage from deepwater is absorbed by the ocean, but wouldn’t this also help fill the capacity of the ocean as GHG sink sooner rather than later? Surely, at some point the ocean can only hold so much methane before it starts offgassing rather than absorbing?
“There is enough space for almost unlimited carbon emissions, a US team reports in the Proceedings of the National Academy of Sciences.”
[Which I take to mean that the ‘problem’ of CO2 emissions has thus been ‘solved’ by technology.]
Signal processing has everything to do with climate sensitivity. The system earth is a feedback amplifier which responds like a low-pass filter to forcings; this means the lower the frequency of the signal, the higher the response. That’s why the earth reacts to low-frequency signals like the Milankovitch orbital changes. This is also the reason why you can’t simply plug an ice-age sensitivity into models that only look at a few centuries’ duration.
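For what it's worth, the low-pass claim can be made concrete with a one-box energy balance model, C·dT/dt = F(t) − λT, which has exactly the frequency response of a first-order low-pass filter. The parameter values below are illustrative assumptions of mine, not from any particular model:

```python
import math

C = 8.0e8    # assumed effective heat capacity, J/m^2/K (~200 m of ocean)
lam = 1.25   # assumed feedback parameter, W/m^2/K

def gain(omega):
    # Amplitude response |T/F| of C*dT/dt = F - lam*T at angular frequency omega
    return 1.0 / math.sqrt(lam**2 + (C * omega)**2)

YEAR = 3.156e7  # seconds per year
slow = gain(2 * math.pi / (100_000 * YEAR))  # Milankovitch-scale forcing
fast = gain(2 * math.pi / YEAR)              # annual-scale forcing
# slow is essentially the full equilibrium response 1/lam; fast is strongly damped
```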
A polite nudge again to those who’ve posted above your belief that the seeps discussed are “not methane” —
Who provides your information? I’m seeing this claim the gas is “not methane” suddenly a lot, it must be coming from some source people are reading criticizing this story. Please, where did you read it? And why do you believe them?
“Signal processing has everything to do with climate sensitivity. The system earth is a feedback amplifier which responds like a low-pass filter to forcings …”
As someone who has had extensive training in physics and mathematics, but who is still very much an amateur in climate science, I find this statement to be an example of throwing words around with little of substance behind them. Would someone, besides the originator of the statement, be willing to comment further? It is certainly possible to treat any physical system using a particular mathematical approach, and sometimes this is useful. But in the current situation, it seems to me that such an approach is fruitless. For one thing, to apply analyses of this kind, it would have to be true that a decomposition of the input into periodic components translates simply into a corresponding decomposition of the output. If we could figure out what was happening to climate that simply, we wouldn’t need complex models.
Thanks for pointing out the source for this. I checked THEIR statement. It’s, to put it as politely as possible, utter bullshit.
If you look this up (“Google is your friend”) you’ll discover the distortion in the USA Today article turns the sense upside down.
Schneider warned that _geoengineering_ of the sort just recently discussed could overshoot, risking conditions like the ‘Little Ice Age’.
Got that? Not “Ice Age” but “Little Ice Age” — a historical period. Not “coming Ice Age” but risk of overshooting geoengineering of climate.
It was a smart, reasonable caution, and it’s being totally misrepresented.
It took me all of 30 seconds to find this out. USA Today failed badly.
Here’s a citation, a quote from it, and the relevant footnote to understand what his Genesis Strategy book had suggested:
Reproduced, with permission, from: Schneider, S. H. 1989. The greenhouse effect: Science and policy. Science 243: 771-81.
“Policy responses. The last stage in diagnosing the greenhouse effect concerns the question of appropriate policy responses. Three classes of actions could be considered. First, engineering countermeasures: purposeful interventions in the environment to minimize the potential effects [for example, deliberately spreading dust in the stratosphere to reflect some extra sunlight to cool the climate as a countermeasure to the inadvertent CO2 warming (60)]. These countermeasures suffer from the immediate and obvious flaw that if there is admitted uncertainty associated with predicting the unintentional consequences of human activities, then likewise substantial uncertainty surrounds any deliberate climatic modification. Thus, it is quite possible that the unintentional change might be overestimated by computer models and the intentional change underestimated, in which case human intervention would be a “cure worse than the disease” (61).
61. S.H. Schneider and L.E. Mesirow, The Genesis Strategy: Climate and Global Survival (Plenum, New York, 1976), chap. 7, p. 215.
Re #78 George,
I’m not clear how Craig Bohren’s response (in both his answer to the question and in his sidebar observations) is expected to open anyone’s eyes to “new ideas and sage thoughts” – nothing he says is new, and much (if not all) of it is already discussed and debated in great detail at RealClimate.org. And, in my humble opinion, his observation that “it is fair to say that another Ice Age would be equally or more catastrophic for Earth than global warming” has no relevance whatsoever to a discussion about climate over the next few centuries.
More to the point, he does seem to acknowledge that global warming is real and that fossil fuel combustion could be a contributing factor, though perhaps less a factor than some of the contributors and posters at this site.
Actually, I am a bit surprised that Bohren is as receptive to the reality of global warming as he seems to be. After all, he was quite skeptical about it, and the greenhouse effect, 20 years ago: In his 1987 book, Clouds in a Glass of Beer (Wiley), he argued in Chapter 10 that no one really understands how a greenhouse works – it may work by trapping infrared radiation (solar infrared penetrates the glass, or plastic, warms the objects within, and the emitted terrestrial infrared can’t escape), or by suppressing convective heat transfer (that is, providing a shelter from the wind; the thickness of the glass is also a likely factor here). He then concludes that both views are supported by the experimental evidence, hence, they are both correct (or both wrong). I have no problem with that, but he uses this alleged confusion about the greenhouse effect to cast doubt about global warming. As in the USA Today article you cited, he concedes that atmospheric CO2 is rising, but dismisses the concerns because the effects of rising CO2 can’t be predicted as accurately as he would like (he didn’t trust the computer models back then, either).
Bottom line: He is a knowledgeable scientist who makes some valid points, but chooses to focus on the uncertainty about projections. Nevertheless, in the USA-Today article sidebar, he offers the same advice that is offered by folks posting on this site:
“A prudent society would reduce its dependence on fossil fuels, especially oil, as quickly as possible for many reasons, not just the possibility of global warming. A prudent society would also develop drought-resistant crops and make other long-term plans for inevitable climate change of any kind.” Of course, his advocacy for nuclear power will not sit well with many people (personally, nuclear power doesn’t frighten me, but I don’t see it as a panacea – there are valid arguments that it won’t do much to cut our reliance on fossil fuels).
Bottom line: The article was interesting, but it says nothing new that I can see.
BTW: Bohren’s book, Clouds in a Glass of Beer, is a very good read (at least from my point of view as a biologist)- lots of clever experiments that can be done at home to illustrate basic principles of atmospheric physics, and explanations for natural phenomena we see outdoors.
Re # 82 Clarification:
In my previous post I wrote: “he does seem to acknowledge that global warming is real and that fossil fuel combustion could be a contributing factor, though perhaps less a factor than some of the contributors and posters at this site.” I meant not as much of a factor as contributors and posters to this site believe it to be. I was not implying that the CO2 (or other greenhouse gases) expelled by the people posting here is contributing to AGW.
I agree that Bohren has said nothing that is new or helpful. Frankly, much of the text seems to be the pointless meanderings of an aging mind. He reminds me of older men I know who think that everything taking place in the world can be interpreted in terms of their own life experiences. (FYI, I am near 70.) Perhaps that helps explain his utter contempt for computer modeling and modelers, which I find a bit astonishing.
Re #72 and others: The Coal Oil Point seep has been going on for years. In the early 1980s I was part of a team that measured the volume of gas; it amounted to about 6 tons per day. It was mostly methane, but there were significant amounts of higher order alkanes (ethane, propane, etc.). Quite a bit of H2S as well (it smelled REALLY bad!). Our study was sufficient to convince ARCO to build an underwater “umbrella” that captured most of the gas and sent it ashore via pipeline to a processing plant. ARCO received emission credits from the local air pollution control district for doing so.
The bonus was that the beaches of Santa Barbara (my home town) became much less covered with tar!
With an unpolluted ocean, warming should lead to increased upwelling and a more efficient biological pump — negative. A more efficient biological pump should lead to increased production of DMS and an increase in low-level cloud — negative.
With an unpolluted ocean surface, warmer climate should lead to increased storm activity, more hygroscopic nuclei in the lower atmosphere and increased albedo — negative. Increased wave activity should lead to increased mechanical uptake of CO2 by the oceans — negative.
I’d love to see the climate numbers crunched with allowance made for the fact that 0.25% of oil production is spilt on the sea, 0.1% is turned into surfactant, and both these pollutants mitigate the above effects. The petrochemical industry took off at just the right time to produce the hockey stick. Persistent surfactant production ties in nicely with the ’40s uptick.
It started as a spoof new theory of global warming (see http://www.floodsclimbers.co.uk) but the more I look at it the more worried I become. Maybe someone has done the figures?
>87, 80 — Steve, I gave that as an example after a brief search you can do in Google. You’ll notice the accusations of an ‘ice age’ are from the sceptical PR sites, and the accurate descriptions of a reference to risks from geoengineering recreating the ‘Little Ice Age’ are from the science journals. Look into this kind of accusation when you see climate scientists pilloried in the popular press, that’s my suggestion and point.
You’ll want to make up your own mind. I’m suggesting getting citations and reading them rather than relying on USA Today for your scientific basis.
>88 — Cute idea, maybe credible, but I notice they casually dismiss ocean acidification as not likely with no basis at all; look at the published research for the actual figures and levels at which aragonite shells dissolve, and you’ll see the physical chemistry basis for the prediction of major effects by 2100.
Alaska Oil pipeline will shut down at least til December! Which, I think, means oil tanker traffic and normal spills from associated tanker traffic will be reduced or ended.
I don’t have any idea how the volume spilled from normal shipping would compare to the amount still entering the ocean from the Exxon Valdez spill after 17 years. http://www.physorg.com/news67009981.html
The other natural experiment would be to find research done in areas that subsequently have large oil spills — of which there are plenty.
Is an interesting opportunity, if anyone knows how to look into it.
Agreed, even if it’s happening, it doesn’t change the greenhouse effect from fossil fuel use.
And the assumption made — that an unpolluted ocean would generate more negative feedbacks because it would produce more clouds — assumes more than we know about cloud feedbacks.
IPCC TAR chapter 6.2.1 “lambda is a nearly invariant parameter (typically, about 0.5 K/(Wm-2); Ramanathan et al., 1985) for a variety of radiative forcings”
This statement is challenged: lambda is different for solar and CO2 forcing, and it is also not invariant with forcing frequency. Lambda is much smaller for fast-varying forcings such as pulsed volcanic sulphate cooling.
[Response: Possibly we are not reading the same sentence. ‘lambda’ is ‘nearly’ invariant – true, and of course lambda as defined is for equilibrium changes and works well for long term volcanic forcing regardless of its fast-varying pulsing. If you are going to keep insisting, at least find some actual evidence that supports your case. – gavin]
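To make the sensitivity parameter under discussion concrete, here is a minimal sketch of the linear forcing-response relation; the numbers used are the TAR’s nominal lambda and the commonly cited forcing for doubled CO2, not values taken from this thread:

```python
# Sketch of the linear relation Delta_T = lambda * Delta_F, using the
# TAR's nominal sensitivity parameter lambda ~ 0.5 K/(W m^-2).
# 3.7 W/m^2 is the commonly cited forcing for doubled CO2.

LAMBDA = 0.5    # K per (W m^-2); "nearly invariant" per Ramanathan et al. 1985
F_2XCO2 = 3.7   # W m^-2, canonical forcing for doubled CO2

def equilibrium_warming(forcing_wm2, lam=LAMBDA):
    """Equilibrium temperature change for a given radiative forcing."""
    return lam * forcing_wm2

print(equilibrium_warming(F_2XCO2))  # ~1.85 K with this lambda
```

The objection in the comment above is precisely that a single constant `lam` may not hold across forcing types and timescales.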
Dr. Soden, wouldn’t it be more correct to characterize cloud feedback as a sink of model uncertainty rather than as a source? As you noted, cloud feedback is poorly constrained by observations. In practice, modelers tweak cloud parameters to balance global energy budgets or to match albedo observations, and consequently, perhaps unintentionally, use cloud parameters to cover model errors from numerous sources, rather than to achieve cloud feedback realism as their first order priority for these parameters. In a nonlinear system, it would be fortuitous if this general method of correcting model errors were equivalent to more specific corrections.
“… we provide a measurement-based assessment of the global direct climate forcing (DCF) of anthropogenic aerosols at the top of atmosphere (TOA) only for cloud free oceans. The mean TOA DCF of anthropogenic aerosols over cloud-free oceans [60N–60S] is −1.4 ± 0.9 Wm−2, which is in excellent agreement (mean value of −1.4 Wm−2) with a recent observational study by Kaufman et al.”
Re #98 Climate science is not like physics. You cannot run an experiment to find out what would happen to the Earth with double CO2, then decide what to do with the real Earth. If you use a climate model to predict the climate, you cannot check that the model’s predictions are right until the fifty years have passed. Even if the model fits today’s climate, you can never be sure it will get future climate correct. That is a fact of life, not some clever argument from a sceptic.
In fact the models are wrong, but because they give the “correct” results for today (provided you frig the MSU and radiosonde results) everyone thinks they are right!
Re# 98 “real world testing in a lab environment” would seem to be a contradiction in terms. And scientists can never “prove” a theory – they can only disprove it, or generate evidence that supports it.
RE#100 Which models are wrong? And in what way are they wrong?
re 90: not intended as a dismissal of the science, but a mild objection to the terminology used. The oceans will become less alkaline, not acidic.
re 91: a major oil spill from a tanker is a minor inconvenience. The vast majority of spill comes down sewers — it’s waste oil from human activity, not accident. A quick and ludicrously dirty calculation (deriving in part from an observation of oil on a pond by Benjamin Franklin, which meant at one point I was dealing with teaspoons per fortnight as a unit, so someone who knows about such things might usefully look at the figures again) suggests that the entire surface could be covered every fortnight.
re 94: increased heat gives increased evaporation, higher salinity in surface waters, saline water sinks and the deep water rises to compensate. I think. It makes me wonder about the Grand Banks and whether the nutrient value of the environment has dropped. That would explain the failure of the fishery to regenerate but is, perhaps, a handwave too far.
re 92 and 93: I’m not sure these relate to my question. However, if they do, the theory of the cause of CO2 increase does nothing to deny the models — it merely explains the initial fact of CO2 rise. It has always seemed unlikely to me that a few percent increase in output would disturb the overall carbon equilibrium: oil spill and surfactant pollution are small inputs which could produce large effects, which makes me remember the fluorocarbon/ozone problem.
re 97: are there satellite records of low level cloud cover? Reductions in low-level cloud cover should reduce albedo and increase warming. I believe high level cloud does the opposite.
Incidentally, I would really like to see some measurements that refute this theory. It suggests, I think, that there should be a mismatch in partial pressure between dissolved and atmospheric CO2 in deep water, not just in the well-mixed shore water. Testable, of course. Does the data exist?
Re #101 “Which models are wrong? And in what way are they wrong?”
The ones that are wrong are the radiative convective models and all the general circulation models (GCMs). In other words, all the models scientists use to calculate the future climate.
They are wrong because they cannot replicate the abrupt climate changes which have happened in the past. Moreover they cannot even calculate the cloud base correctly, but this does not matter because the weather men know how to make the correction. More controversially, they do not produce the same lapse rate as that measured by satellites and radiosondes.
The problem is that they calculate the absorption of outgoing long wave radiation by assuming it is absorbed at all levels of the atmosphere, whereas according to Chapman’s law it is all absorbed at the base of the atmosphere. Both methods lead to the same spectrum of long wave radiation as measured by satellites, since both absorb the same amount of radiation overall. Since the current method agrees with the measured spectrum, the scientists are sure that their method is correct.
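As an aside on the two absorption treatments described above, a toy Beer-Lambert calculation shows why both can match the same measured total absorption while distributing the heating very differently; all numbers below are illustrative, not taken from any model:

```python
import math

# Toy Beer-Lambert comparison: absorbing a total optical depth tau in one
# slab at the base versus in N thin layers gives the same overall
# transmitted fraction, exp(-tau), because transmission is multiplicative.
# Only the vertical distribution of the absorbed energy differs.

tau_total = 2.0   # hypothetical total optical depth
n_layers = 50

# One-slab absorption at the base of the atmosphere
transmitted_one_slab = math.exp(-tau_total)

# Layer-by-layer absorption through n_layers equal layers
flux = 1.0
absorbed_per_layer = []
for _ in range(n_layers):
    out = flux * math.exp(-tau_total / n_layers)
    absorbed_per_layer.append(flux - out)
    flux = out

print(transmitted_one_slab, flux)  # identical totals, different heating profiles
```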
Re #98 and “Just out of curiousity, has ANYONE done any real world testing in a lab environment to prove or disprove ANY of these theories? ”
John Tyndall demonstrated that carbon dioxide absorbed infrared light in 1859. The literature is full of laboratory tests of quantum mechanics, radiative transfer, greenhouse gases, etc., etc. Go to a university library and look at the textbooks. Track down some of the references in the journals. The USAF’s HITRAN project involved testing and recording several MILLION spectral lines.
re 107: go not to Google for answers, for it will say both no and yes. As will informed opinion even here, looking at no. 93 above. In order for the oceanic surface pollution theory of global warming to be correct then cloud cover should be falling and that should be positive. I’ll go back to the 10^6 hits and start looking: it’s sure to be in there somewhere.
Thanks for the link. I’ve also searched for partial pressure matching but had no luck, although the RS reports suggests that measurements have been made — but in shallows where wave action is guaranteed.
Re 40: The two sources cited are of very different repute and audience. I am responsible for the second, which is aimed at a K-12 audience. The 30 km value is a historic artifact and I will change it tomorrow to the 20 km value that was determined in the Loeb et al paper. Should have done this in the first place. Mea culpa.
I overlooked this discussion, as it was started during my trip to Iceland… But there are a few recent studies about cloud behaviour which challenge the positive cloud feedback included in (near all) current climate models.
Chen and Wielicki (2002) observed satellite based cloud changes in the period 1985-2000, where an increasing SST (+0.085 C/decade) was accompanied by higher insolation (2-3 W/m2), but also higher escape of heat to space (~5 W/m2), with a net result of 2-3 W/m2 TOA loss to space for the 30N-30S band. This was caused by faster Walker/Hadley cell circulation, drying up of the upper troposphere and less cirrus clouds.
In 2005, these findings were expanded by J. Norris with surface based cloud observations in time (from 1952 on for clouds over the oceans, from 1971 on over land) and latitudes. There is a negative trend for upper-level clouds over these periods of 1.3-1.5%. As upper-level clouds have a warming effect, this seems to be an important negative feedback.
J. Norris has a paper in preparation about cloud cover trends and global climate change.
On page 58, there is a calculation of cloud feedback, assuming that the observed change in cloud cover is solely a response to increased forcing. The net response is -0.8, which is a very strong negative feedback… Of course this is the response, if nothing else is influencing cloud properties/cover, but important enough for further investigation.
Even internal oscillations, like the 1998 El Nino, lead to several extra W/m2 of net loss of energy to space, due to higher sea surface temperatures. Thus IMHO, if models include a (zero, small or large) positive feedback by clouds, they are not reflecting reality.
NASA Langley’s ERBE team, who made the report by Wielicki et al. (2002) in “Science”, has revised their calibration. The increasing decadal trend of outgoing longwave radiation in the tropics still exists, but is much reduced in magnitude. Combined with the decreasing trend of reflected solar radiation (which has changed little by the revision), the net trend is now an increase of gain by the earth (~ 1 W/m2 per decade). Analysis of ISCCP data at NASA GISS, though indirect as an evaluation of radiation budgets, resulted in a similar trend to the revised ERBE trend. See http://eosweb.larc.nasa.gov/PRODOCS/erbe/quality_summaries/s10n_wfov/erbe_s10n_wfov_nf_sf_erbs_edition3.html
and a preprint of a paper by Wong et al. to be published in J. Climate at http://asd-www.larc.nasa.gov/~tak/wong/f20m.pdf .
I do not know if it has bothered or confused others but I have found the definition of terms in the paper less than helpful.
R == change in radiative flux at the top of the atmosphere.
Now the forcing is internal to the system, defined as everything below the TOA. All external forcings are zero.
At equilibrium there is zero net flux at the TOA.
So if equilibrium were obtained in the future at 2xCO2 there would have been no net change in either the inward or outward fluxes (as there is no change in external forcing and no change in net flux) with respect to any other equilibrium condition.
This can not be what the authors intended.
The usage of R implies they mean the term to reflect the change in outward flux due to their feedbacks that compensates for the loss of outward flux due to increasing CO2. In effect they require R to be a combination of the real change in flux and the change in the forcing. What they have done may be acceptable as a shorthand but is not as clear as it could or perhaps should be.
Later when they are calculating the effective sensitivity they do need deltaR to equal the real net flux at the TOA as this is just the same thing as the change in total heat of the system (everything below TOA) which combined with the forcing can be used to give a value to the effective sensitivity. This can not be the same use as deltaR is put to earlier.
Another little piece of shorthand is the equation defining cloud feedback using effective sensitivity. Rearranging this equation and comparing to the earlier equation that defines a total feedback as the sum of individual feedbacks gives the identity
total feedback = effective sensitivity
and hence they can give a value to the cloud feedback. This is a little misleading as the sum of the feedbacks is not really the same as the effective sensitivity. That is, it would be incorrect to substitute the total of the four feedbacks (the total feedback) for the left hand side of the Murphy 1995 equation. It is accepted that calculated values for effective sensitivity can be used as estimates of climate sensitivity but they are not to be confused with the real thing.
On a more general point; feedbacks as used here are not to be confused with the way feedbacks are used in other disciplines where feedback factors or ratios are just that “ratios”.
Climatic feedbacks have the form of admittances (flux/potential). That is they could more naturally be considered as responses (or sensitivities). It seems common practice to single out one or more of these feedbacks to indicate a standard response and then to treat the other responses as feedbacks modifying the standard response. This is actually quite arbitrary if convenient. Interestingly in Table 1 all the responses are listed as feedbacks including the Planck response.
In the case of the ~1C value associated with the 4.3 W/m^2 forcing: this indicates that either the Planck response alone, which would give a value of ~1.5C, or the sum of the Planck and lapse rate feedbacks (responses) has been used, the latter giving ~1C. Which one assumes makes a significant difference to back-of-an-envelope computations, as some above may have found.
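The ~1C versus ~1.5C ambiguity described above can be reproduced with a back-of-the-envelope calculation; the parameter magnitudes below are illustrative values roughly in line with typical published feedback estimates, not figures taken from the paper itself:

```python
# Back-of-envelope reproduction of the ambiguity described above:
# dividing the same forcing by the Planck response alone, versus by the
# Planck response plus the lapse-rate feedback, gives noticeably
# different warmings. Parameter values are illustrative only.

F = 4.3              # W m^-2, the forcing cited in the comment
lam_planck = 3.2     # W m^-2 K^-1, magnitude of the Planck response (assumed)
lam_lapse = 0.85     # W m^-2 K^-1, magnitude of the lapse-rate feedback (assumed)

dT_planck_only = F / lam_planck                # ~1.3 C
dT_with_lapse = F / (lam_planck + lam_lapse)   # ~1.1 C

print(round(dT_planck_only, 2), round(dT_with_lapse, 2))
```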
When it comes down to it. It is simpler to treat them all as equivalent admittances when you want to do the maths (as they do) even if you find it more convenient to think of them as if they were feedbacks when considering the situation qualitatively.
Comment by Alexander Harvey — 3 Sep 2006 @ 11:38 AM
Again regarding the Soden & Held Paper
I simply can not understand how they could start to justify their conclusions on the variance (uncertainty) of the cloud factor.
From the Abstract:
“Consistent with previous studies, it is found that the vertical changes in temperature and water vapor are tightly coupled in all models and, importantly, demonstrate that intermodel differences in the sum of lapse rate and water vapor feedbacks are small. In contrast, intermodel differences in cloud feedback are found to provide the largest source of uncertainty in current predictions of climate sensitivity.”
And from the summary:
“(ii) clouds provide the largest source of uncertainty in current model predictions of climate sensitivity.”
Now unlike the other terms, the value for clouds is determined by aggregation, and this is important. Comparing the variance of aggregates to the variance of individual terms is just the sort of thing we were taught not to do.
The cloud value is defined as the difference between the effective sensitivity and the sum of the other terms. One would well expect this to give a variance close to the sum of the variances of the aggregated terms (and hence the greatest variance) due to the way it has been calculated. Any difference from the aggregate of the other variances would be due to the size and sign of the covariances between the other terms.
Surely the variance of the cloud factor says more about the variance of, and covariance between, the other terms than anything about the cloud factor itself.
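The residual-variance point can be illustrated with synthetic numbers; everything below is made up purely for illustration and does not use any figure from the paper:

```python
import numpy as np

# Illustration of the residual-variance point: if the "cloud" term is
# defined as residual = total - (a + b + c), then across models its
# spread inherits the variances (and covariances) of the other terms,
# so a large spread in the residual need not reflect clouds themselves.
# All numbers are synthetic.

rng = np.random.default_rng(0)
n_models = 1000
a = rng.normal(1.8, 0.1, n_models)      # e.g. a water vapour feedback term
b = rng.normal(-0.8, 0.1, n_models)     # e.g. a lapse rate feedback term
c = rng.normal(0.3, 0.1, n_models)      # e.g. an albedo feedback term
total = rng.normal(1.9, 0.1, n_models)  # e.g. an effective sensitivity

residual = total - (a + b + c)          # "cloud" term computed as a residual

# With independent terms, Var(residual) is roughly the SUM of the four
# variances, i.e. about 4 * 0.1**2 = 0.04, four times any single term.
print(residual.var(), 4 * 0.1**2)
```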
I think this is pretty fundamental.
And so I think that the justification for making any statement about the variance of the cloud factor is very thin indeed.
To be frank I am a bit shocked that the authors have not more thoroughly explored the consequences of their own caveat:
“Keeping in mind the limitations of computing cloud feedbacks as a residual, and the lack of precise information on radiative forcing in the models, our results are consistent with differences in cloud feedbacks being the largest contributor to intermodel differences in climate sensitivity (Fig. 1).”
One conclusion that this could have led them to would be:
It is not safe; so if you can’t specifically justify it then do not do it at all.
I would be interested to know who cast a critical eye over this, and if it was open at the time.
Sorry, but I think this is tragic.
[Response: You are overreaching here. It’s true that the cloud feedback is calculated as a residual – but in the cases where they were able to test this, it works well. It is however the best that can be done with the data at hand. Romeo and Juliet is tragic, Soden and Held is just interesting. – gavin]
Comment by Alexander Harvey — 3 Sep 2006 @ 7:49 PM
Regarding Forcings versus Feedbacks:
I would think that the distinction might be more clearly put if the wording was Forcings versus Responses.
By that I mean responses to a change in temperature.
If a modifier is principally a function of the temperature then its action is a response to the temperature. Hence it is principally a response (feedback) not a forcing.
So if its action can plausibly be described as a ratio of “change in flux”/”change in temperature” that is a constant or a function of temperature then it is acting as a response (feedback).
Here it could be educative to consider clouds.
If cloud is principally a function of temperature (i.e. all other determining parameters are constant) then its action is a response to temperature (a feedback).
If some controlled process that is not simply a function of temperature is directly stimulating cloud production (seeding, aerosols etc.) then cloud cover is partly a forced condition, and its action is partly responsive (a feedback) and partly forced (a forcing).
[Response: A response that ‘feeds back’ on the original change is a feedback. – gavin]
Comment by Alexander Harvey — 3 Sep 2006 @ 9:29 PM
Gavin Regarding Feedbacks:
The Planck response is referred to as the Planck feedback parameter.
In what shape or form is this response a feedback? What part of the output is fed back to the input?
If the sun gets hotter, causing the earth to get hotter, does the Planck response act as a feedback? That is, does it act to restore the temperature of the earth by making the sun cooler?
No, it is simply a response.
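For reference, the magnitude of the quantity the comment calls an admittance follows from linearising the Stefan-Boltzmann law; the effective emission temperature used is the standard textbook value:

```python
# Linearising the Stefan-Boltzmann law F = sigma * T^4 about the
# effective emission temperature gives dF/dT = 4 * sigma * T^3,
# a quantity with units of an admittance (W m^-2 per K).

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0     # K, Earth's effective emission temperature (standard value)

planck_response = 4 * SIGMA * T_EFF**3
print(round(planck_response, 2))  # ~3.76 W m^-2 per K
```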
As I have said before, in form it is an admittance, as are the other feedbacks (or responses), whenever they are stated as a parameter (with units of admittance). They are also treated as admittances, albeit some of them as negative admittances. They are summed and multiplied by a difference in temperature to give a flux, or a flux is divided by the sum to give a difference in temperature.
If they were consistently treated as feedbacks it would be different. In treating feedbacks one must take account of time delay. One needs to be specific about boundaries to define inputs and outputs. In all, the mathematics is much more complicated (we would be talking about a model of the weather system).
If they were stated as a proportion of an output flux from a system that is delayed and fed back (or added) to the input flux to that system, they would be being treated as feedbacks, and the Planck response could not be included (there is no feedback ratio that can be associated with the Planck response).
In some cases the delays may be so short as to be irrelevant on climate timespans, but I am not sure that this is true of albedo. If it is not, or if albedo is not a simple function of temperature, which is likely, then the whole treatment of the albedo as a simple admittance (response or feedback if you will) is questionable. One can only wonder if this is taken into account in the paper. The shift that is made from a differential to a difference equation begs questions. The ability to treat this effect as an admittance while thinking about it as a feedback might just be clouding the picture. Again the principal issues are delays and the possible dependence of albedo on both temperature and time in some complex way.
To recap: what I am saying is that effects that originate from feedbacks are commonly treated as admittances and in the case of the Planck response an admittance is erroneously considered a feedback. Yet there is a real difference. Understanding that difference can make any laxity in the use of the mathematics apparent.
I understand that this is a shorthand that allows complexities to be put to one side but it is important to remind oneself and others that it is just that.
[Response: Not following you at all. The Planck response (long wave emission) is the principal negative feedback because as T increases, LW increases and moderates the T. All terms in the T equation that include a dependency on T itself (however indirect, and including delays) are feedbacks. And yes, we are talking about weather models! – gavin]
Comment by Alexander Harvey — 4 Sep 2006 @ 8:40 AM
[…] Range of estimated magnitudes of major climate feedbacks from the most recent IPCC report and Colman 2003. Figure taken from Soden and Held 2006 (pdf) via Realclimate. […]