
“Misdiagnosis of Surface Temperature Feedback”

Filed under: — mike @ 29 July 2011

Guest commentary by Kevin Trenberth and John Fasullo

The hype surrounding a new paper by Roy Spencer and Danny Braswell is impressive (see for instance Fox News); unfortunately the paper itself is not. News releases and blogs on climate denier web sites have publicized the claim from the paper’s news release that “Climate models get energy balance wrong, make too hot forecasts of global warming”. The paper has been published in a journal called Remote Sensing, which is a fine journal for geographers, but it does not deal with atmospheric and climate science, and it is evident that this paper did not get an adequate peer review. It should not have been published.

The paper’s title “On the Misdiagnosis of Surface Temperature Feedbacks from Variations in Earth’s Radiant Energy Balance” is provocative and should have raised red flags with the editors. The material in the paper has fundamental shortcomings: no statistical significance of results, error bars, or uncertainties are given either in the figures or discussed in the text. Moreover, the description of the methods is not sufficient to allow the results to be replicated. As a first step, we have made some quick checks to see whether the results can be replicated, and we find some points of contention.

The basic observational result seems to be similar to what we can produce, but use of slightly different datasets, such as the EBAF CERES dataset, yields somewhat smaller magnitudes. And some parts of the results do appear to be significant. So are they replicated in climate models? Spencer and Braswell say no, but this is where attempts to replicate their results require clarification. In contrast, some model results do appear to fall well within the range of uncertainties of the observations. How can that be? For one, the observations cover a 10-year period, while the models cover a hundred-year period for the 20th century. The latter were detrended by Spencer, but for the 20th century that should not be necessary. One could, and perhaps should, treat the 100 years as 10 sets of 10 years and see whether the observations match any of the ten-year periods; instead, what appears to have been done is to use only the one-hundred-year set by itself. We have done exactly this and the result is in the Figure.
[ed. note: italics below replace the deleted sentence above, to make it clearer what is meant here.]

SB11 appears to have used the full 100-year record to evaluate the models, but this provides no indication of the robustness of their derived relationships. Here instead, we have considered each decade of the 20th century individually and quantified the inter-decadal variability to derive the Figure below. This figure shows the results for the observations, as in Spencer and Braswell, using the EBAF dataset (in black). Then we show results from two different models, one which does not replicate ENSO well (top) and one which does (second panel). Here we give the average result (red curve) for all 10 decades, plus the range of results that reflects the variations from one decade to the next. The MPI-Echam5 model replicates the observations very well. When all model results from CMIP3 are included (bottom panel), the red curve is not too dissimilar from that of Spencer and Braswell, but with a huge range, due both to the spread among models and to the spread from decadal variability.

Figure: Lagged regression analysis for the Top-of-the-atmosphere Net Radiation against surface temperature. The CERES data is in black (as in SB11), and the individual models in each panel are in red. The dashed lines are the span of the regressions for specific 10 year periods in the model (so that the variance is comparable to the 10 years of the CERES data). The three panels show results for a) a model with poor ENSO variability, b) a model with reasonable ENSO variability, and c) all models.
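For readers who want to experiment with this diagnostic, the lagged-regression calculation behind the Figure can be sketched in a few lines. This is a generic illustration on synthetic monthly anomalies, not the authors' actual code; all variable names and the synthetic data are ours.

```python
import numpy as np

def lagged_regression(net_rad, temp, max_lag=12):
    """Regression slope (W/m^2 per K) of TOA net radiation on surface
    temperature at each lead/lag in months.

    Positive lag shifts the radiation series so it lags temperature.
    """
    n = len(temp)
    slopes = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            t, r = temp[:n - lag], net_rad[lag:]
        else:
            t, r = temp[-lag:], net_rad[:n + lag]
        # slope = cov(T, R) / var(T)
        slopes[lag] = np.cov(t, r)[0, 1] / np.var(t, ddof=1)
    return slopes

# Synthetic example: 10 years of monthly anomalies in which radiation
# responds to temperature instantaneously with a slope of 2 W/m^2 per K.
rng = np.random.default_rng(1)
temp = rng.normal(0.0, 0.3, 120)   # K
net_rad = 2.0 * temp               # W/m^2
slopes = lagged_regression(net_rad, temp)
```

With real CERES and model data, the interesting part is how the slope changes sign and magnitude across lags, and (as in the Figure) how much it varies from one decade-long sample to the next.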

Consequently, our results suggest that there are good models and some not so good, but rather than stratifying them by climate sensitivity, one should, in this case, stratify them by ability to simulate ENSO. In the Figure, the model that replicates the observations better has high sensitivity while the other has low sensitivity. The net result is that the models agree within reasonable bounds with the observations.

To help interpret the results, Spencer uses a simple model. But the simple model used by Spencer is too simple (Einstein said that things should be made as simple as possible, but not simpler): this has gone way beyond too simple (see for instance this post by Barry Bickmore). The model has no realistic ocean, no El Niño, and no hydrological cycle, and it was tuned to give the result it gave. Most of what goes on in the real world of significance that causes the relationship in the paper is ENSO. We have already rebutted Lindzen’s work on exactly this point. The clouds respond to ENSO, not the other way round [see: Trenberth, K. E., J. T. Fasullo, C. O’Dell, and T. Wong, 2010: Relationships between tropical sea surface temperatures and top-of-atmosphere radiation. Geophys. Res. Lett., 37, L03702, doi:10.1029/2009GL042314]. During ENSO there is a major uptake of heat by the ocean during the La Niña phase, and the heat is moved around and stored in the tropical western Pacific, setting the stage for the next El Niño, at which point it is redistributed across the tropical Pacific. The ocean cools as the atmosphere responds with characteristic El Niño weather patterns forced from the region that influence weather patterns worldwide. Ocean dynamics play a major role in moving heat around, and atmosphere-ocean interaction is a key to the ENSO cycle. None of those processes are included in the Spencer model.

Even so, the Spencer interpretation has no merit. The interannual global temperature variations were not radiatively forced, as claimed for the 2000s, and therefore cannot be used to say anything about climate sensitivity. Clouds are not a forcing of the climate system (except for the small portion related to human related aerosol effects, which have a small effect on clouds). Clouds mainly occur because of weather systems (e.g., warm air rises and produces convection, and so on); they do not cause the weather systems. Clouds may provide feedbacks on the weather systems. Spencer has made this error of confounding forcing and feedback before and it leads to a misinterpretation of his results.

The bottom line is that there is NO merit whatsoever in this paper. It turns out that Spencer and Braswell have an almost perfect title for their paper: “the misdiagnosis of surface temperature feedbacks from variations in the Earth’s Radiant Energy Balance” (leaving out the “On”).

282 Responses to ““Misdiagnosis of Surface Temperature Feedback””

  1. 251
    Craig Nazor says:

    John W – What are you trying to say about the Wikipedia article?

    “More recent work continues to support a best-guess value [of the climate sensitivity to be] around 3°C.”

    Why would this need updating?

    Why can’t the climate sensitivity be what you are defining as “dynamic?” Because, under the general parameters of the present day earth, it’s not. Remember that climate sensitivity is defined as the change in average temperature resulting from a doubling of atmospheric CO2. Any likely change in solar irradiance is an external forcing which will affect the temperature, but by itself won’t change the climate sensitivity. The effect of the climate sensitivity would be an internal forcing on temperature. Direct observations, proxy estimates, and climate modeling all support the fact that the climate sensitivity, including feedbacks, is around 3ºC.

  2. 252
    Settled Science says:

    @John W 250
    Craig Nazor has already sufficiently addressed the scientific error you’re making, but the fact that you asked your question in this forum, in this context, and not via research of your own, reveals a more fundamental misunderstanding on your part of The Way Science Works. I will now address that.

    Since “sensitivity” isn’t an intrinsic property, why can’t it be dynamic? It seems to me the sensitivity could be different for different forcings in different situations. For example, an increase in solar irradiance might have very little effect on a snowball earth but a GHE increase might have a huge impact on temperature. (Arctic snow having a high albedo for UV/Visible but low for IR)

    Anything is possible, but if you believe that it is so, it is your burden to cite some evidence, or if it doesn’t already exist, to produce it via original research of your own. That is non-negotiable.

    When you dispute the top experts, in this or any field of science (which is what you are doing when you gainsay the consensus that climate sensitivity is ~3C), it is not sufficient to just pull something out of your hat, wave your hands to suggest it “could be” the way you suppose, and then shift the burden of proof to the real scientists. If you really believe that the scientific consensus on climate sensitivity (or any other subject) is wrong, then you need to apply for a research grant to start substantiating your belief. And if you are not already working in climate or an allied field of physical science, then this may require you to obtain some advanced degree(s) as a prerequisite.

    Those are the standards of evidence in science.

    That is life.

    And to preempt the anticipated argument from “chaos,” Chaos Theory has a specific meaning. The important thing about that definition for our purposes is high sensitivity to small changes in initial conditions. We know from Lorenz that specific weather events cannot be predicted “reliably” beyond ~5 days, sometimes less and sometimes a little more in some regions, but anyway, there is an accepted limit on the ability to predict the specific events of weather. From that, it does not follow that climate’s aggregate measurements, such as global mean temperature, atmospheric CO2 concentration, water vapor concentration and sensitivity are necessarily “chaotic” too. Such would be a hypothesis, and also your burden to substantiate.

    My apologies if you did not plan to bring up chaos, but I have seen it so many times in online discussions about climate, usually with claims such as yours that something “could be” different from the consensus position because it cannot be known, that I like to stop such nonsense before it starts. And I have seen “chaos” in several discussions of SB11 specifically, also invoked incorrectly, so I have good reason to suppose it’s about to rear its ugly head again.

    Weather is reasonably predictable up to about five days, and climate is reasonably predictable on multi-decadal time scales using averages including sensitivity. If you sincerely believe any of those quantities is in error, then your duty is not just to make something up and say it “could be,” it’s to do the hard work to show that it is, or at least cite a scientist who has. Failing that, ya got nuthin. Absolutely nuthin.

  3. 253
    John W says:

    Craig Nazor
    The Wikipedia article cites studies whose findings for sensitivity are all over the place: not a dozen “independent” lines of evidence all pointing to 3C for 2XCO2, but more like all “averaging” to 3C for 2XCO2, as you quoted from the article. The difference is subtle indeed and perhaps not worth mentioning.

    Sorry, I didn’t realize the caveat “for 2XCO2” wasn’t necessary. I was under the impression that “climate sensitivity” was more broadly defined. So, for future reference, if I am discussing the climate response to increased solar activity and wanted to use a term that described the temperature increase for a given increase in heat flux, what would be the preferred term?

  4. 254
    trrll says:

    co2isnotevil says:

    Yes, climatology defines feedback (and sensitivity/gain) differently than the way it’s formally defined. I contend that this is a significant error. You can’t claim the kinds of scary things associated with positive feedback in an amplifier (i.e. instability, massive gain boosting, etc.) when the definition you’re using for feedback has nothing to do with the criteria for determining the stability of a system.

    You know, I kind of agree with you. The fact that climate science defines “feedback” differently from EE creates confusion and unnecessary obstacles to people who come to climate science with a background in EE. EE has been using the term “feedback” for a very long time, and it would be a lot clearer if climate scientists had chosen to use the term in a parallel way. It was an unfortunate choice of jargon.

    But frankly, it is hardly the only unfortunate jargon choice in science. Very often, scientists in one field use a term in a very different way than it is used in common parlance or in another scientific field. EE is hardly immune; after all, in most circuits, the “direction” of current flow is the opposite from the direction of movement of charge carriers. This is unnecessarily confusing and an obstacle to learning for many students who are accustomed to thinking of current in terms of fluid flow. So should we redefine or rename current to correct this “error”? Or maybe swap “positive” and “negative” so at least the direction of current in a wire corresponds to the movement of electrons? Think of the confusion that would result! When you read the literature, you would have to check the date to see if the terms are being used in the old sense or the new sense.

    So face it. When you approach a new technical field, you have to accept that they will have their own jargon. Very often this means using a familiar word in an unfamiliar way, and you just have to learn that the words mean what they are defined to mean in that context, even if you are used to using them differently in another context. Perhaps somebody once upon a time made a poor choice of terminology. It may have been an “error” in the sense of a bad decision, but it is not a scientific error so long as the reasoning follows the definition of the word as it is used in that field. Stomping your feet and insisting that they have it wrong doesn’t enhance understanding or advance knowledge; it just makes you tedious.

  5. 255
    John W says:

    Settled Science says:
    “If you really believe that the scientific consensus on climate sensitivity (or any other subject) is wrong”

    I did not intend that at all. All I’m saying is that if there are scientists A, B, and C, all determining climate sensitivity from different periods over different time frames with different forcings, then they could all get different answers and none invalidates the others (they’re all right), because (and I don’t believe I’m contradicting any scientific finding here) climate sensitivity is not rigidly static. Most (if not all) who have published on the subject report climate sensitivity as an average over some time period, suggesting it isn’t “a value”. Granted, I’m not a climate scientist, so I may be misinterpreting what I’ve read; hence, the question mark.

  6. 256
    Hank Roberts says:

    > suggesting it isn’t “a value”

    Sure it is. It’s calculated: take a world; take its average temperature; double the CO2 in the atmosphere instantaneously, holding everything else unchanged; wait til the temperature stabilizes; take the temperature again; do the arithmetic. There’s your value. That’s the definition (as paraphrased by a non-climatologist).
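Hank's "do the arithmetic" recipe can be made concrete with a toy numerical rendering. Both sets of numbers below are purely illustrative stand-ins, not measured values; the second framing (forcing divided by a net feedback parameter) is the one studies actually estimate in practice.

```python
# The definitional arithmetic: equilibrium temperature after doubling CO2,
# minus equilibrium temperature before. Numbers are hypothetical.
T_before = 287.0                  # K, global mean before doubling
T_after = 290.0                   # K, after the climate re-equilibrates
sensitivity = T_after - T_before  # K per doubling of CO2

# An equivalent forcing/feedback framing often used in practice:
F_2x = 3.7                        # W/m^2, canonical forcing for doubled CO2
lam = 1.23                        # W/m^2 per K, assumed net feedback parameter
sensitivity_alt = F_2x / lam      # also about 3 K per doubling
```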

    Problems arise when figuring this out for real world situations.

    It’s not a “constant” — like gravity or the speed of light.

  7. 257
    Settled Science says:


    You’re just making more claims without any evidence, and these are different claims than you made in 250.

    I did not intend that at all.

    Well it sure looks to me like you did, until you were shown to be wrong.

    Ray Ladbury says:
    There are about a dozen independent lines of evidence for climate sensitivity. All of them favor a value around 3 degrees per doubling.

    That was the statement that you contradicted, like so:

    Ray Ladbury says:
    There are about a dozen independent lines of evidence for climate sensitivity. All of them favor a value around 3 degrees per doubling.

    Please update wikipedia: [snarky! which helps remove all doubt: you were disputing Ray Ladbury there, so stop denying it, will you?]
    Since “sensitivity” isn’t an intrinsic property [unsupported assumption], why can’t it be dynamic? It seems to me the sensitivity could be different for different forcings in different situations. For example, an increase in solar irradiance might have very little effect on a snowball earth but a GHE increase might have a huge impact on temperature. (Arctic snow having a high albedo for UV/Visible but low for IR)

    Now, you’re saying something different. That’s fine, but that does not change the fact that you did start out in 250 claiming that what Ray Ladbury said about the scientific consensus on sensitivity is wrong.

    All I’m saying [now] is that if there is scientist A, B, and C all determining climate sensitivity from different periods over different time frames with different forcings that they could

    And again, anything “could” be but not every imaginable scenario that “could be” is relevant just because the evidence doesn’t rule it out beyond a shadow of a doubt. All that is relevant is which of the myriad possibilities the evidence shows to be so, or at least can be shown by evidence to be likely.

    … all get different answers and none invalidates the other (they’re all right), because (and I don’t believe I’m contradicting any scientific finding here) climate sensitivity is not rigidly static.

    That’s one possibility, but it is not logically implied by the mere fact that slightly different values are obtained by different methods. The fact that they are mostly around 3°C suggests that the real, correct value is around 3°C.

    Most (if not all) who have published on the subject report climate sensitivity as an average over some time period [the time period each one is studying] suggesting it isn’t “a value”.

    That is not a logical inference, because the time period being examined is not the only thing varying from one study to another. They also depend on different measurement methods and different ways of accounting for other influences, some ways undoubtedly better than others. In short, more plausible reasons are evident for the small variations observed.

    In general, a distribution of measured or computed values does not imply that the quantity itself is varying, particularly when the measured or computed values cluster around one quantity, as the values of climate sensitivity do. It takes much more than variation in measurements to establish that the quantity itself is actually varying. In simplest terms, to support your assertion would require measured or computed variation that is greater than the plausible margin of error of measurements and computations, and I don’t believe you have that.
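The point that a spread of estimates does not imply a varying quantity is easy to demonstrate with a toy Monte Carlo: a dozen "studies" measuring the same fixed value, each with its own methodological noise, will produce a range of answers clustered around the truth. The noise level here is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
true_sensitivity = 3.0   # a single, fixed quantity (K per doubling)

# Twelve simulated studies, each measuring the same fixed value with
# independent methodological noise. The estimates spread out even though
# the underlying quantity never varies.
estimates = true_sensitivity + rng.normal(0.0, 0.8, size=12)
spread = estimates.max() - estimates.min()
center = estimates.mean()
```

The spread is entirely an artifact of measurement noise, which is exactly the alternative explanation offered above for the variation among published sensitivity estimates.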

    “Please update wikipedia”

    In the future, I suggest you present your speculations in the form of questions. HTH.

  8. 258
    Ray Ladbury says:

    John W. How are you proposing that the climate system should know precisely whether a watt of forcing comes from IR or UV–particularly since one of the processes in Greenhouse warming is rapid thermalization (e.g. the vibrational excitation relaxing collisionally and imparting the excess energy to another molecule, say N2). Folks have looked into this sort of thing–they really have. And given the diversity of methods and data, doesn’t that make it all the more remarkable that the favored value is so consistently around 3 degrees per doubling? Shouldn’t that tell you something?

  9. 259
    Ray Ladbury says:

    John W., BTW, the following site is a lot more informative (and correct) than the Wikipedia article.

  10. 260
    John W says:

    Settled Science

    In short, more plausible reasons are evident for the small variations observed.

    OK, the values are from 0.7 to 6.8; I don’t know, but that doesn’t seem like a small variation clustered around 3, so your explanation doesn’t fit the data; it seems more like averaged to 3 (depending on the distribution). But again, it’s only a subtle difference.

    Now, you’re saying something different.

    I’m not saying anything different, if you can’t understand how a dynamic sensitivity means different research findings could all be right, well I guess I’m not explaining it well. BTW: I put at least three variables into the sensitivity dynamics and didn’t claim they are all there are.

    In the future, I suggest you present your speculations in the form of questions. HTH.

    Again, hence the question mark in the original.

    I don’t understand why that idea [dynamic sensitivity] would be controversial. It doesn’t contradict the IPCC or any other scientific research that I’m aware of, personally I kinda thought it explains why RS’s paper (the Topic) doesn’t invalidate anything (as claimed).

  11. 261
    Hank Roberts says:

    > dynamic sensitivity

    Well, look at what Scholar finds; nothing controversial, but nothing extremely simple either. Here’s the search I tried, for starters: “dynamic+sensitivity”

    Three quotes from three result hits on the first page:

    “Typically, the dynamic sensitivity in 2100 is about 2/3 of the static sensitivity.”

    “… the difference between the static and the dynamic approach was negligible for small climate perturbations …”

    “For the 2000-50 period, dynamic sensitivity is distinctly higher than static sensitivity.”

    Shorter: “It depends on what exactly you’re talking about.”

  12. 262
    Didactylos says:

    John W, you are quite right to highlight the importance of distribution in measures of climate sensitivity. Your unqualified description of the range omits crucial details of where the estimates cluster, as well as the “long tail” in the distribution.

    This chart summarises climate sensitivity derived from various lines of evidence, showing the distribution. Note that all lines of evidence show a climate sensitivity best estimate within the IPCC “likely” range, and very close to 3 degrees. The fact that these lines of evidence are based on several different climates (or periods of climate) implies strongly that climate sensitivity does not change much over time.

    Please note the long tail shown by each estimate: the evidence rules out lower values for climate sensitivity, but fails to rule out higher values.

    I find it telling that Spencer is futzing around with climate sensitivities of 1 degree – this is the lowest possible sensitivity that the data currently supports, and is a long way from the best estimate and light years away from the worst case!

  13. 263
    John W says:

    Ray Ladbury
    “How are you proposing that the climate system should know precisely whether a watt of forcing comes from IR or UV”
    Well in the example I gave, it would be from the albedo being different for UV than IR.

    “–particularly since one of the processes in Greenhouse warming is rapid thermalization (e.g. the vibrational excitation relaxing collisionally and imparting the excess energy to another molecule, say N2).”

    Well, here’s some of the confusion: y’all seem to take “climate sensitivity” as what I would call “climate sensitivity to 2XCO2”. I was using the term much more broadly than is apparently acceptable. But back to your statement: “vibrational excitation relaxing collisionally and imparting the excess energy to another molecule”; but also, collisionally imparted vibrational excitation relaxing emissively, thereby transferring translational energy from another molecule into IR energy, thereby cooling the surroundings (atmosphere) and warming IR-absorbing surfaces (earth); this is the “fingerprint” of GHG increase under current conditions. Yes, taking “climate sensitivity” as “climate sensitivity to 2XCO2”, especially under reasonably similar conditions (i.e., not snowball earth or hothouse earth), I don’t see any reason why there would be much variability. But again, I’m not a climatologist and my question is still not answered: (rephrasing) Does the overall average temperature response differ between forcings under differing conditions? (Yes, No, Maybe) If yes, then are we talking orders of magnitude or tiny shifts?

  14. 264
    Meow says:

    @263: Hansen et al 2005 explores the temperature change produced by like amounts (in W/m^2) of different forcings, as compared to a 1.5x CO2 reference (“efficacy”), by applying them individually to the same climate state under GISS Model E attached to three different ocean models. Fig. 25 summarizes the results. Searching for cites to this paper should yield lots of refinements.

  15. 265
    Patrick 027 says:

    Re 250 John W –

    see ‘efficacy’ – it is possible that the climate response to the same global average tropopause-level forcing with stratospheric adjustment would be different due to different distributions of that forcing. I’m not sure on the current status of this, but dark aerosols specifically distributed on snow cover/ice could tend to have a larger global average response because there is a local/regional concentration coincident with a concentration in positive feedback.

    But for some specific examples you list, the expectation of different climate response actually comes from different amounts of forcing. The solar forcing, in terms of tropopause-level (or TOA, for that matter) net flux changes, would not be equal to changes in TSI/4. It would depend on changes in the amount of solar heating, which is modulated by the albedo.

    Climate sensitivity often refers to an equilibrium sensitivity but there is also a transient sensitivity, which is affected by the rate of equilibration.

  16. 266
    Patrick 027 says:

    Re 250 John W – also, the effect of solar TSI changes on tropopause-level solar forcing is affected by stratospheric ozone, and solar radiation also affects ozone.

    Orbital forcing is especially idiosyncratic, as its big effects on climate depend on the seasonal and latitudinal rearrangement of insolation; the changes in global annual average solar forcing are small.

    And the relationship between temperature and radiation is nonlinear, so of course climate sensitivity could be different under snowball conditions, but a linear approximation (constant sensitivity) could apply with smaller error for a smaller range of conditions.

    There is also the distinction between Charney sensitivity and a ‘fuller’ sensitivity. Charney sensitivity includes water vapor, clouds, snow, and I think sea ice (and maybe a few other things?) as feedbacks, and treats ice sheets and CO2 and CH4, among other things, as external forcings. Charney sensitivity will generally tend to be similar to actual sensitivity over shorter time periods, as CO2 and ice sheets can respond to climate changes but generally take a longer time to do so (and thus over short time periods, they act approximately like boundary conditions – thus like external forcings).

    My understanding is that Charney equilibrium sensitivity will vary less over geologic time and over differing conditions. The fuller climate sensitivity can easily vary – for example, at temperatures too warm for significant ice sheets, ice sheets are not available as feedback; for different continental arrangements and other conditions, CO2 feedback could be different. Also, sensitivity has to be defined differently – if CO2 is included as feedback, then the external forcing can’t be the radiative forcing from changes in CO2; it could be a non-climate-induced CO2 perturbation, such as the amount of CO2 emitted directly from anthropogenic fossil fuel burning, deforestation/etc, and cement production – although one could argue that human activity is a feedback to the recent climate history and then to the whole of climate history via evolution and ecological succession, etc. :)

  17. 267
    Ray Ladbury says:

    John W.,
    You missed my point. Thermalization occurs regardless of the source of the energy. It is true that different types of energy may enter the atmosphere via different mechanisms, but because they will disturb the thermal balance between energy modes, eventually energy will flow into the other modes until local thermal equilibrium is restored. The rapid thermalization is why the climate (to first order at least) will not “remember” where the energy came from, and all sources of energy will likely have the same sensitivity (again to first order).

    Again, I can assure you that folks have thought about this.

  18. 268
    Hank Roberts says:

    Ray, perhaps you could point John W to one of the places he can learn about this stuff? There must be a teaching site or thread out there.

    RC tends to revisit basic questions a lot, each time a new topic opens, no matter what it’s meant to discuss.

  19. 269
    Ray Ladbury says:

    Hank, I wish I knew of a good site for tutorials on thermo. It is unfortunately a subject that doesn’t get enough pedagogical attention.

    If John W. will tell me his level of mathematical comprehension, I could maybe recommend a book.

    Hmm. Maybe I can find something on MIT’s Open courseware site.

  20. 270
    Patrick 027 says:

    Re 216 Marcus

    All in all there seems to be no silver bullet for the conversion from CO2 concentration to radiative forcing.

    Actually there is a quick back-of-the-envelope way to roughly approximate it. If you have a spectrum for global time-average OLR, you can see where the CO2 valley is, and interpolate across it where (roughly) the spectrum would be if CO2 were removed. Such a spectrum was calculated (using globally-representative conditions) by Kiehl and Trenberth 199x (x is either 7 or 8, I’ll have to get back to you about which) (alternatively, if we took a weighted average of satellite data…). You can compare the area on the graph added to OLR by removing CO2 to the value in Kiehl and Trenberth for total CO2 forcing excluding overlaps.

    Anyway, an approximate answer for instantaneous TOA forcing per doubling of CO2 can be gained by multiplying the depth of that valley (relative to OLR without CO2) by the sum of the spectral intervals over which CO2 optical thickness decreases by half on each side of the CO2 peak.

    (a logarithmic plot of CO2 optical thickness graphed over the spectrum using the same units as the OLR spectrum uses would be helpful)
    (this is ignoring the smaller-scale texture of the CO2 spectrum – if this is sufficiently self-similar over small intervals, the texture won’t matter, but there is a bit of a larger-scale bumpy wobble in the spectrum that will have an effect – see next parenthetical statement)
    (if this spectral interval value changes a bit, you’d want to look at what it is at those wavelengths or frequencies where the OLR CO2 valley slopes from its outer edge to its bottom)
    (When I tried this myself, I took the interval (visually estimated) for a tenfold change in optical thickness and multiplied by log(2)/log(10). Perhaps it would have been better to start with the interval for a 100-fold change – perhaps less accurate at any given CO2 value but it would apply over a wider range of CO2 amounts).

    To find the tropopause level forcing, note that the net longwave flux in the center of the CO2 band is nearly zero, while the net upward flux just outside of the CO2 valley is about the same at the tropopause as it is at TOA (because the stratosphere is nearly transparent in that part of the spectrum except for CO2). Thus the depth of the valley to use to calculate forcing at that level is what the OLR would be absent the CO2. The difference between the two values is a forcing on the stratosphere that causes cooling, a portion of which is then transferred to the forcing at the tropopause.

    This takes advantage of the shape of the CO2 band centered at 15 microns, and of CO2’s saturation or near-saturation at/near the band center at the tropopause and at TOA (in the sense that adding CO2, beyond some point, doesn’t decrease OLR further). This sets aside the effect of the growth of the little peak in the bottom of the CO2 valley (from the warmer thin upper stratosphere). The interpolation to find the OLR absent CO2 assumes the water vapor spectrum is sufficiently smooth and linear (setting aside the finer-scale texture) in that part of the spectrum. This also sets aside the SW forcing by CO2, as well as the curvature of the Planck function; to approximately(?)**** correct for the latter, you could put the OLR spectrum in terms of brightness temperature, find the corresponding range in flux for the same brightness-temperature range at the parts of the spectrum where the CO2 valley’s slopes are, and use that instead of the depth of the valley at the band center (and also use that value on each side to multiply with the band-widening interval on the same side, in case there’s an asymmetry in the latter; then take the sum).

    **** at some point in the past I think I said that the same optical thickness distribution at a different part of the spectrum would lead to the same brightness temperature in the flux or intensity at that part of the spectrum, but the nonlinearity of the Planck function’s dependence on temperature makes this generally untrue, although it can be an approximation – especially at sufficiently long wavelengths for a given range of temperatures.

    Wrote that up in a hurry, pardon any errors.


    Re 218 Ray Ladbury: I won’t make you refer to the square root of negative 1 as ‘i’ if you don’t insist that climate science follow all the protocols of EE.

    That was awesome! (AC circuits can’t exist because i isn’t j :) )

    PS: in order for a feedback to be expressed as unitless, you have to exclude the Planck feedback so that you’ve got a ratio of temperature changes, or a ratio of flux changes (taking the feedbacks in terms of W/m2 per K and then dividing by … etc.). We can do this, of course, but it’s interesting that then you’re not counting all the feedbacks. I’ve only seen a little EE, but I’m guessing the forcing and output are in the same units, like the power or amplitude of a sinusoidal signal?
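    To illustrate the unit bookkeeping described here with a minimal sketch (all feedback numbers below are made up for the example, not measured values): dividing the non-Planck feedbacks (W/m^2 per K) by the magnitude of the Planck response (also W/m^2 per K) gives a dimensionless feedback factor, and the amplification relative to the no-feedback response is 1/(1 - f):

```python
# Illustrative unit bookkeeping for climate feedbacks (all values made up for the example)
planck_response = 3.2                # W/m^2 per K, magnitude of the no-feedback restoring response
other_feedbacks = [1.8, 0.3, -0.3]   # e.g. water vapour, ice-albedo, lapse rate, in W/m^2 per K

f = sum(other_feedbacks) / planck_response   # dimensionless feedback factor (Planck excluded)
amplification = 1.0 / (1.0 - f)              # dimensionless gain

no_feedback_sensitivity = 1.0 / planck_response        # K per (W/m^2)
sensitivity = no_feedback_sensitivity * amplification  # K per (W/m^2), with feedbacks
print(f"f = {f:.4f}, amplification = {amplification:.3f}, sensitivity = {sensitivity:.3f} K/(W/m^2)")
```

Note how the Planck term must sit in the denominator to make f unitless, which is exactly why it cannot also be counted among the unitless feedbacks.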

  21. 271
    Patrick 027 says:

    re my comment about graphically estimating CO2 forcing –
    actually it wouldn’t matter so much if H2O had (up to a point) wild variations in optical thickness sufficiently deep inside the CO2 valley; the linear interpolation across the valley to find the depth of the valley at its center gives a sense of the height of the area added to the valley as it widens with CO2 doubling (setting aside band asymmetry and Planck function curvature).

  22. 272
    Patrick 027 says:

    … actually, the one key thing that stays relatively constant (up to a point?) over the spectrum is the minimum brightness temperature that can be achieved by a well-mixed greenhouse gas with sufficient optical thickness. The corresponding Planck function curve gives the flux at any given part of the spectrum; the height of the area added to the CO2 valley on either side is approximately the difference between this flux and that of the OLR absent CO2 (which may have different brightness temperatures at different spectral locations) at the part of the spectrum where the slope of the valley occurs.

  23. 273
    John W says:

    Ray Ladbury says:
    “If John W. will tell me his level of mathematical comprehension, I could maybe recommend a book.”

    It’s been a while since I’ve really had to “do the math” myself (management), but I think I can handle anything through third-order nonlinear partial differential equations without sweating it too much.

    I really wasn’t looking for anything that serious though, just kinda curious what the opinion of those “in the know” is.

  24. 274
    Marcus says:

    Patrick027, I appreciate your efforts and will keep a bookmark on this procedure. Actually, for myself it’s better for the moment *first* to refresh my spectroscopy knowledge and work through Pierrehumbert’s “Principles”.


  26. 276
    J.C. Moore says:

    Thanks for a good analysis of Spencer’s article. I traced its path from Remote Sensing to see how it got to Forbes and onto the front page of Yahoo!News in just five days, and posted it at

    It’s a shame that millions of people may read the sensationalized version but very few will read the climate scientists’ analysis of all the errors in the paper.

  27. 277
    RW says:


    “You are repeating the same errors as RW – to whit: you are defining sensitivity incorrectly even for trivially simple examples, you are linearising a highly non-linear problem over a huge range, and you are blaming a semantic confusion on your part for your inability to actually work through the (relatively simple) algebra. Just as I suggested to him that he work out why his calculation and the real sensitivity in a simple model are not the same, I suggest the same to you.”

    Which specific formula are you referring to?

    [Response: There is no ‘formula’ – rather there is a simple model, as outlined in the post I pointed you to weeks ago, that has a clear greenhouse effect, a well-defined sensitivity and the straightforward possibility to compute any function you like that you think defines the sensitivity. You should then take the equation showing the real sensitivity of this model and compare it to the one based on your definition. They will not be the same. Until you have done this and assimilated what this means, there is very little point in continuing the conversation. – gavin]

  28. 278
    RW says:


    “There is no ‘formula’ – rather there is a simple model, as outlined in the post I pointed you to weeks ago, that has a clear greenhouse effect, a well-defined sensitivity and the straightforward possibility to compute any function you like that you think defines the sensitivity. You should then take the equation showing the real sensitivity of this model and compare it to the one based on your definition.”

    I’ve looked at the post and I’m not sure what you are referring to. Is it too much to ask for you to state it specifically?

    Which equation shows the “real sensitivity” you’re referring to?

    [Response: The post is here, and the last two sections are specifically and explicitly about the sensitivity of the simple model system (i.e. the equations just before point 3 and point 4). – gavin]

  29. 279
    co2isnotevil says:


    The simplified model in the post Gavin referred you to is oversimplified. With a few simple modifications, it can provide an equivalent representation of the real Earth’s climate system. Interestingly enough, this produces approximately the same sensitivity of about 0.3 C per W/m^2, and not the 0.8 C per W/m^2 claimed by the IPCC.

    What needs to be corrected are the assumptions that the planet’s emissivity is due exclusively to the greenhouse effect and that S is independent of Ftop. S is more appropriately expressed as S*(1-a), where S is the average power density from the Sun and a is the albedo. The global albedo is expressed as the cloud-percentage-weighted sum of the surface and cloud albedos. Similarly, the global emissivity is the cloud-percentage-weighted emissivity of the clear and cloudy skies. The last thing that must be corrected is the assumption that ΔT is linear in ΔG, even though G is proportional to T^4.
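    The cloud-fraction weighting described here is straightforward to write down; whether the coefficients extracted from data support the claimed sensitivity is a separate question. A minimal sketch of just the weighting and the resulting equilibrium temperature, using illustrative values (not ISCCP-derived):

```python
# Cloud-fraction-weighted albedo/emissivity bookkeeping (illustrative values, not ISCCP-derived)
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
S = 1361.0        # solar constant, W/m^2

c = 0.66                           # cloud fraction (assumed)
a_surf, a_cloud = 0.12, 0.40       # surface and cloud albedos (assumed)
e_clear, e_cloudy = 0.70, 0.57     # effective clear/cloudy sky emissivities (assumed)

a = c * a_cloud + (1 - c) * a_surf       # cloud-weighted global albedo
e = c * e_cloudy + (1 - c) * e_clear     # cloud-weighted global emissivity

# Equilibrium surface temperature from e * sigma * T^4 = S * (1 - a) / 4
T = (S * (1 - a) / (4 * e * SIGMA)) ** 0.25
print(f"albedo = {a:.4f}, emissivity = {e:.4f}, T = {T:.1f} K")
```

With these assumed inputs the weighted albedo comes out near 0.30 and the surface temperature near 287 K; the bookkeeping itself says nothing about how a, e, or c respond to temperature, which is where the disputed sensitivity claims enter.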

    From the ISCCP data, the temperature coefficients of the albedo, clear and cloudy sky emissivities, surface and cloud reflectivities and the global emissivity can all be extracted. The approximate sensitivities, all expressed in equivalent relative units, are as follows:

    global albedo: -2.0% per C increase in surface temp
    cloud amount: -1.5% per C
    cloudy emiss: -1.0% per C
    clear emiss: < +0.25% per C
    cloud reflect: -0.25% per C
    global emissivity: -0.06% per C
    surface refl: < -0.05% per C (above 273K)
    surface refl: > +10.0% per C (below 273K)
    It’s the change in average cloud coverage as a function of temperature which is responsible for the corresponding change in most of the other factors, with the consequence that the net global emissivity remains relatively constant. When CO2 concentrations change, mostly the clear-sky emissivity is affected; however, the clear-sky sensitivity to temperature has a relatively small contribution to begin with. Also note that the surface reflectivity has a sharp knee at 0 C, which is why polar estimates of sensitivity greatly overestimate the global sensitivity, since the energy balance related to the global sensitivity is dominated by regions whose temperatures are above 0 C.

    The way that feedbacks are added is not really representative of the real system. It’s kind of like dividing both sides by zero and saying zero equals zero. This is because the sensitivity of the global emissivity to the surface temperature is rather small when compared to the other things that are changing. Besides, nowhere is a metric for feedback actually quantified, which I claim is quantified by control theory, where the control-theory-defined gain is equal to 1 / (1 - 0.5λ).


    [Response: No simple model can tell you what the sensitivity is – in all such cases that is just an input. Their role is rather to give an example of a system that has some connection to reality and where ideas such as you and RW have proposed can be tested in verifiable conditions. – gavin]

  30. 280
    co2isnotevil says:


    A model far simpler than a GCM can be used to provide reliable estimates of the climate’s sensitivity to many factors, including changes in static GHG concentrations. Consider simulating an internal combustion engine. You can determine the approximate performance by applying thermodynamic laws to inlet/outlet temperatures, etc., and determine how the performance will change as parameters vary, or you can take the GCM approach and simulate the chaos in the combustion chamber and hope that the proper high-level thermodynamic behavior emerges. Both approaches will get about the same answer. Which is more correct depends on the accuracy of all the various tunable parameters in the model. GCMs have far more parameters that must be tuned than the thermodynamic model, providing more degrees of freedom to compensate for flawed assumptions and/or to tweak the model in a desired direction.

    Of the parameters in my thermodynamic model (cloud coverage fraction, clear/cloudy sky emissivity, surface/cloud reflectivity, the power in and out of the planet, the individual power contributions of the cloudy and clear skies, and the time constant of the planet’s thermal mass), most are directly measured in the ISCCP data set. The simple model that you say doesn’t work does an exceptionally good job at predicting the relationships between the 10 variables, using a system of 4 equations describing the system (including a differential equation) in which 8 of the variables are directly measured. In effect, there are 4 equations and 2 unknowns, which, if the model were incorrect, would never be able to match, because it would be far too over-constrained.


    P.S. why does &lt, etc. work in preview, but not in the posted comment?

    [Response: Your coefficients are not robust to artificial trends in the ISCCP data, nor to the errors you have made with the water vapour and lapse rate feedbacks. I wish it was as easy as you claim, but it isn’t. – gavin]

  31. 281
    co2isnotevil says:


    The artificial trends in the ISCCP data are predominantly caused by incorrect calibration between satellites, model differences across the processing of different samples, and the uncorrected movement of satellites.

    My analysis of the sensitivities is based on monthly averages of 3-hour sampled data for each 2.5-degree slice of latitude, where what’s common across a slice is the incident power from the Sun, i.e. 72 unique measurements spanning 72 unique incident power profiles per 3-hour sample, aggregated into 72 monthly data points. This analysis is largely insensitive to the types of errors found in the ISCCP data, all of which I’ve analyzed in detail. I’ve also started from raw satellite data in the DX data set (well, not really the raw data, since it has the faulty cross-calibration already applied) and run it through much more sophisticated image-processing algorithms to re-linearize, re-normalize and re-align the data from the various satellites, including robust predictors for missing data. This produces largely the same results for the sensitivity analysis but does produce a more reliable indication of real trends over the 30 years of the satellite record, which, as you probably already know, contradicts GISS temp.

    The analysis includes the effects of water vapor feedback, ice/snow albedo feedback, lapse rate feedback and all other feedback mechanisms, known and unknown.

    BTW, it’s not that easy, but it’s relatively straightforward and matches the data quite well. In fact, I can use my model to identify the flaws in the ISCCP data, as these errors are in the places where the deviation between the model and measurements is largest. Every large deviation I’ve identified can be traced to a specific flaw in the ISCCP data.


  32. 282
    RW says:


    “So if λ′ is positive, there will be an amplification of any particular change; if it’s negative, a dampening, i.e. if water vapour increases with temperature then that will increase the greenhouse effect and cause additional warming. For instance, if λ′ = 0.1, then the sensitivity increases to 0.33 C/(W/m2).”

    I do not see how you’re getting 0.33 C/(W/m^2). Can you elaborate on this in more detail?

    [Response: 0.25/(0.0567*2.88*2.88*2.88*(1-0.5*(0.1+0.769))) = 0.326 C/(W/m2). – gavin]
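    The arithmetic of this response is easy to check directly. In the sketch below, 0.0567 is the Stefan-Boltzmann constant in units where T is measured in hundreds of K, T = 2.88 corresponds to 288 K, and 0.769 and 0.1 are taken to be the model’s λ and the feedback λ′ from the exchange above:

```python
# Check of the arithmetic in the inline response (simple-model sensitivity with feedback)
sigma = 0.0567   # Stefan-Boltzmann constant in units where T is in hundreds of K
T = 2.88         # surface temperature, hundreds of K (i.e. 288 K)
lam = 0.769      # model's longwave absorption parameter (from the linked post)
lam_prime = 0.1  # feedback increment used in the example

sensitivity = 0.25 / (sigma * T**3 * (1 - 0.5 * (lam_prime + lam)))
print(f"{sensitivity:.3f} C/(W/m^2)")
```

This reproduces the quoted 0.326 C/(W/m^2), confirming that the only change from the no-feedback case is the λ′ = 0.1 term inside the parentheses.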
