

Shindell: On constraining the Transient Climate Response

Filed under: — group @ 8 April 2014

Guest commentary from Drew Shindell

There has been a lot of discussion of my recent paper in Nature Climate Change (Shindell, 2014). That study addressed a puzzle, namely that recent studies using the observed changes in Earth’s surface temperature suggested climate sensitivity is likely towards the lower end of the estimated range. However, studies evaluating model performance on key observed processes and paleoclimate evidence suggest that the higher end of sensitivity is more likely, partially conflicting with the studies based on the recent transient observed warming. The new study shows that climate sensitivity to historical changes in the abundance of aerosol particles in the atmosphere is larger than the sensitivity to CO2, primarily because the aerosols are largely located near industrialized areas in the Northern Hemisphere middle and high latitudes where they trigger more rapid land responses and strong snow & ice feedbacks. Therefore studies based on observed warming have underestimated climate sensitivity as they did not account for the greater response to aerosol forcing, and multiple lines of evidence are now consistent in showing that climate sensitivity is in fact very unlikely to be at the low end of the range in recent estimates.

In particular, a criticism of the paper written by Nic Lewis has gotten some attention. Lewis makes a couple of potentially interesting points, chief of which concern the magnitude and uncertainty in the aerosol forcing I used and the time period over which the calculation is done, and I address these issues here. There are also a number of less substantive points in his piece that I will not bother with.

Lewis states that “The extensive adjustments made by Shindell to the data he uses are a source of concern. One of those adjustments is to add +0.3 W/m² to the figures used for model aerosol forcing to bring the estimated model aerosol forcing into line with the AR5 best estimate of -0.9 W/m².” Indeed the estimate of aerosol forcing used in the calculation of transient climate response (TCR) in the paper does not come directly from climate models, but instead incorporates an adjustment to those models so that the forcing better matches the assessed estimates from the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC). An adjustment is necessary because, as climate models are continually evaluated against observations, evidence has emerged that their aerosol-cloud interactions are too strong (i.e. the models’ ‘aerosol indirect effect’ is larger than inferred from observations). There have been numerous papers on this topic, and the issue was thoroughly assessed in IPCC AR5 chapter 7. The assessed best estimate was that the historical negative aerosol forcing (radiation and cloud effects, but not black carbon on snow/ice) was too strong by about 0.3 W/m² in the models that included that effect, a conclusion very much in line with a prior publication on climate sensitivity by Otto et al. (2013). Given the numerous scientific studies on this topic, there is ample support for the conclusion that models overestimate the magnitude of aerosol forcing, though the uncertainty in aerosol forcing (which is incorporated into the analysis in the paper) is large, especially in comparison with CO2 forcing, which can be better constrained by observations.
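For readers who want to see the arithmetic, the energy-budget approach behind these TCR estimates can be sketched in a few lines of Python. All the input values below are illustrative assumptions, not the paper's actual inputs; only the +0.3 W/m² adjustment and the -0.9 W/m² AR5 best estimate come from the discussion above.

```python
# Energy-budget estimate of the transient climate response (TCR):
#   TCR = F_2x * dT_obs / F_total
# The inputs below are assumed, illustrative values.
F2X = 3.7  # W/m^2, canonical forcing from doubled CO2

def tcr_energy_budget(dT_obs, F_total):
    """Scale observed warming by the ratio of 2xCO2 forcing to total forcing."""
    return F2X * dT_obs / F_total

# Adjust a hypothetical multi-model aerosol forcing toward the AR5
# best estimate of -0.9 W/m^2 (the +0.3 W/m^2 adjustment discussed above).
F_aer_model = -1.2            # W/m^2, hypothetical multi-model mean
F_aer = F_aer_model + 0.3     # -> approximately -0.9 W/m^2

F_other = 2.8                 # W/m^2, hypothetical GHG + other forcing
dT_obs = 0.8                  # deg C, hypothetical observed warming

print(round(tcr_energy_budget(dT_obs, F_other + F_aer), 2))  # ~1.56 deg C
```

Because the aerosol forcing is negative, making it less negative (the +0.3 W/m² adjustment) enlarges the denominator and lowers the inferred TCR, which is why the size of this adjustment matters for the headline result.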

The second substantive point Lewis raised relates to the time period over which the TCR is evaluated. The IPCC emphasizes forcing estimates relative to 1750 since most of the important anthropogenic impacts are thought to have been small at that time (biomass burning may be an exception, but appears to have a relatively small net forcing). Surface temperature observations become sparser going back further in time, however, and the most widely used datasets only go back to 1880 or 1850. Radiative forcing, especially that due to aerosols, is highly uncertain for the period 1750-1850 as there is little modeling and even less data to constrain those models. The AR5 (Annex II, Table AII.1.2) gives a value for 1850 aerosol forcing relative to 1750 of -0.178 W/m² for direct+indirect (radiation+clouds). There is also a BC snow forcing of 0.014 W/m², for a total of -0.164 W/m². While these estimates are small, they are nonetheless very poorly constrained.

Hence there are two logical choices for an analysis of TCR. One could assume that there was minimal global mean surface temperature change between 1750 and 1850, as some datasets suggest, and compare the 1850-2000 temperature change with the full 1750-2000 forcing estimate, as in my paper and Otto et al. In this case, aerosol forcing over 1750-2000 is used.

Alternatively, one could assume we can estimate forcing during this early period realistically enough to remove it from the longer 1750-2000 estimates, and so compare forcing and response over 1850-2000. In this case, this must be done for all forcings, not just for the aerosols. The well-mixed greenhouse gas forcing in 1850 is 0.213 W/m²; including solar and stratospheric water, that becomes 0.215 W/m². Land use (LU) and ozone almost exactly cancel one another. So to adjust from 1750-2000 to 1850-2000 forcings, one must remove the 0.215 W/m² and also remove the -0.164 W/m² aerosol forcing, multiplying the latter by its impact relative to that of well-mixed greenhouse gases (~1.5), which gives about -0.25 W/m².

If this is done consistently, the denominator of the climate sensitivity calculation containing the total forcing barely changes and hence the TCR results are essentially the same (a change of only 0.03°C). Lewis’ claim that my TCR results are mistaken because they did not account for 1750-1850 aerosol forcing is incorrect because he fails to use consistent time periods for all forcing agents. The results are in fact quite robust to either analysis option, provided they are done consistently.
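The baseline arithmetic in the two paragraphs above can be checked directly. Using the AR5 numbers quoted there and an aerosol-response enhancement of roughly 1.5, the net effective forcing removed when shifting the start date from 1750 to 1850 is close to zero:

```python
# Net effective forcing removed when moving the baseline from 1750 to 1850,
# using the AR5 Annex II values quoted above.
ghg_1850 = 0.215    # W/m^2: WMGHGs plus solar and stratospheric water in 1850
aer_1850 = -0.164   # W/m^2: aerosol direct + indirect + BC on snow in 1850
E = 1.5             # approximate enhancement of the response to aerosols

removed = ghg_1850 + E * aer_1850
print(round(removed, 3))  # about -0.03 W/m^2: the denominator barely changes
```

The two terms nearly cancel, which is why switching start dates moves the TCR estimate by only a few hundredths of a degree.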

Lewis also discusses the uncertainty in aerosol forcing and in the degree to which the response to aerosols is enhanced relative to the response to CO2. Much of this discussion follows a common pattern of looking through the peer-reviewed paper to find all the caveats and discussion points, and then repeating them back as if they undermine the paper’s conclusions rather than reflecting that they are uncertainties that were already taken into account. It is important to realize that the results presented in the paper include both the uncertainty in the aerosol forcing and the uncertainty in the enhancement of the response to aerosol forcing, as explicitly stated. Hence any statement that the uncertainty is underestimated in the results presented in the paper, due to the fact that (included) uncertainty in these two components is large, is groundless.

In fact, this is an important issue to keep in mind as Lewis also argues that the climate models do not provide good enough information to determine the value of the enhanced aerosol response (the parameter I call E in the paper, where E is the ratio of the global mean temperature response to aerosol forcing versus the response to the same global mean magnitude of CO2 forcing, so that E=1.5 would be a 50% stronger response to aerosols). While the models indeed are imperfect and have uncertainties, they provide the best available method we have to determine the value of E as this cannot be isolated from observations directly. Furthermore, basic physical understanding supports the modeled value of E being substantially greater than 1, as deep oceans clearly take longer to respond than the land surface, so the Northern Hemisphere, with most of the world’s land, will respond more rapidly than the Southern Hemisphere with more ocean. Quantifying the value of E accurately is difficult, and the variation across the models is substantial, primarily reflecting our incomplete knowledge of aerosol forcing. This leads to a range of E quoted in the paper of 1.18 to 2.43. I used this range, assuming a lognormal distribution, along with the mean value of 1.53, in the calculation for the TCR.
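As an illustration of how such a skewed range can be represented, here is one way to build a lognormal distribution for E in Python. Treating the quoted 1.18 to 2.43 range as a 5-95% interval is an assumption made purely for this sketch; the fit actually used in the paper may differ.

```python
import math
import random

# Sketch: represent the skewed range for E (1.18 to 2.43) as a lognormal.
# Assumption: treat the quoted range as a 5-95% interval and fit the
# underlying normal's mu and sigma from the log endpoints.
lo, hi = 1.18, 2.43
mu = 0.5 * (math.log(lo) + math.log(hi))
sigma = (math.log(hi) - math.log(lo)) / (2 * 1.645)  # 1.645 = z at 95%

random.seed(0)
samples = sorted(math.exp(random.gauss(mu, sigma)) for _ in range(100_000))
median = samples[len(samples) // 2]
print(round(median, 2))  # near exp(mu), about 1.69
```

A lognormal is a natural choice here because E is a positive ratio and its quoted range is skewed to the right of its central value.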

Lewis then argues that the large uncertainty ranges in E and in aerosol forcing make the TCR estimates “worthless”. While “worthless” is a little strong, it is important to fully assess uncertainties in trying to constrain any properties in the real world. It’s worthwhile to note that Lewis co-authored a recent report claiming that TCR could in fact be constrained to be low. That report relies on studies that include the large aerosol forcing uncertainty, so criticizing my paper for that would be inconsistent. However, Lewis’ study assumed that all forcings induce the same response in global mean temperature as CO2. This is equivalent to assuming that E is exactly 1.0 with NO uncertainty whatsoever. This is a reasonable first guess in the absence of evidence to the contrary, but as my paper recently showed, there is evidence indicating that assumption is biased.

But while Lewis argues that the uncertainty in E is large and climate models do not give the value as accurately as we’d like, that does not justify ignoring that uncertainty entirely. Instead, we need to characterize that uncertainty as best we can and propagate that through the calculation (as can be seen in the figure below). The real question is not whether climate models provide us perfect information (they do not), but rather whether they provide better information than some naïve prior assumption. In this case, it is clear that they do.



Figure shows representative probability distribution functions for TCR using the numbers from Shindell (2014) in a Monte Carlo calculation (Gaussian for Fghg and dTobs, lognormal fits for the skewed distributions of Faerosol+ozone+LU and E). The green line assumes exactly no difference between the effects of aerosols and GHGs; red estimates that difference using climate models; dashed red shows the small difference made by using a different start date (1850 instead of 1750).
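A minimal Monte Carlo in the spirit of that figure can be sketched as follows. All input distributions here are invented for illustration (they are not the paper's numbers); the point is only to show how an enhanced aerosol response shifts the TCR distribution upward relative to the E=1 assumption.

```python
import math
import random

# Monte Carlo sketch of the TCR distribution. All input values are
# illustrative assumptions: Gaussian GHG forcing and observed warming,
# lognormal magnitudes for the inhomogeneous forcing and for E.
F2X = 3.7  # W/m^2, canonical forcing from doubled CO2

def lognormal(median, sigma):
    return math.exp(random.gauss(math.log(median), sigma))

def tcr_sample(enhanced):
    F_ghg = random.gauss(2.8, 0.3)    # W/m^2, assumed
    dT = random.gauss(0.8, 0.1)       # deg C, assumed
    F_inhom = -lognormal(0.9, 0.3)    # aerosol+ozone+LU forcing, assumed
    E = lognormal(1.5, 0.15) if enhanced else 1.0
    return F2X * dT / (F_ghg + E * F_inhom)

random.seed(42)
n = 50_000
median = lambda xs: sorted(xs)[len(xs) // 2]
tcr_e1 = median([tcr_sample(False) for _ in range(n)])
tcr_enh = median([tcr_sample(True) for _ in range(n)])
print(tcr_e1 < tcr_enh)  # the enhanced aerosol response raises the TCR estimate
```

With E > 1 the (negative) aerosol term subtracts more from the denominator, so the whole distribution shifts toward higher TCR, mirroring the green-versus-red contrast in the figure.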

This highlights the critical distinction in our reasoning: I fully support the basic methods used in prior work such as Otto et al and have simply quantified an additional physical factor in the existing methodology. I am however confused that Lewis, on one hand, appears to now object to the basic method used in prior work, in which the authors first adjusted aerosol forcing, second included its uncertainty, and then finally quantified estimates of TCR. Yet on the other hand, he not only co-authored the Otto et al paper but released a report praising that study just three days before the publication of my paper.

For completeness, I should acknowledge that Lewis correctly identified a typo in the last row of the first column of Table S2, which has been corrected in the posted version, where there is also access to the computer code used in the calculations. The climate model output itself is already publicly available at the CMIP5 website (also linked at that page).

Finally, I note that the conclusions of the paper send a sobering message. It would be nice if sensitivity was indeed quite low and society could get away with smaller emission cuts to stabilize climate. Unfortunately, several lines of independent evidence now agree that this is not the case.


References

  1. D.T. Shindell, "Inhomogeneous forcing and transient climate sensitivity", Nature Climate Change, vol. 4, pp. 274-277, 2014. http://dx.doi.org/10.1038/nclimate2136
  2. A. Otto, F.E.L. Otto, O. Boucher, J. Church, G. Hegerl, P.M. Forster, N.P. Gillett, J. Gregory, G.C. Johnson, R. Knutti, N. Lewis, U. Lohmann, J. Marotzke, G. Myhre, D. Shindell, B. Stevens, and M.R. Allen, "Energy budget constraints on climate response", Nature Geosci, vol. 6, pp. 415-416, 2013. http://dx.doi.org/10.1038/ngeo1836

39 Responses to “Shindell: On constraining the Transient Climate Response”

  1.
    Eli Rabett says:

    Since the period 1750 – 1880 was a period of rapid industrialization in the Northern Hemisphere, huge amounts of black carbon and smoke were released, certainly wrt the status quo ante. Not only in urban areas either, but as steamships and trains began crossing the oceans the effect must have been like or even worse than contrails from jets.

    Should one expect to see this in the records? If nothing else, in ice cores.

  2.

    “It would be nice…smaller emission cuts” ?

    Disappointed with that phrase; such a statement requires lots of study and many postings, don’t you think?

    More realistic is a complete emission halt, then evaluating how much sequestration is needed. Leave it to economists to promote impacts of “smaller cuts”

    Thank you so much for the sensitivity review. As we gain a better understanding of the How Bad, and the When – it’s time to start modeling and describing appropriate levels of mitigation.

  3.
    SteveF says:

    Really interesting post, thanks. Wondering if you had any comments on this:

    http://troyca.wordpress.com/2014/03/15/does-the-shindell-2014-apparent-tcr-estimate-bias-apply-to-the-real-world/

  4.
    Troy Masters says:

    Did you test your updated method (for calculating TCR) on each model using the value of E calculated for that model? That would seem to be a good test of whether the method produces a good estimate of TCR independent of the uncertainty in E. I tried such a thing, and my main objection to the Shindell (2014) paper is that when I test the “simple” Otto method vs. the Shindell method on the same model set in the paper, the Otto et al (2013) method still seems to perform better. This is particularly true in the cases where the NH/SH warming ratio in that model is similar to the one observed. See http://troyca.wordpress.com/2014/03/15/does-the-shindell-2014-apparent-tcr-estimate-bias-apply-to-the-real-world/ for more details. For example, in the model with the largest value of E (IPSL-CM5-LR), the Shindell (2014) method overestimates the TCR by 40% (!) when using the actual output from that model, whereas using the simpler approach of E=1.0 (a la Otto et al. (2013)) only leads to a tiny underestimate of only 6%. Moreover, the NH/SH warming ratio in that model is 1.49, very close to that of the observed ratio of 1.48 in Cowtan and Way (2013). Given that the degree of under-estimation of TCR using the Otto method seems inversely correlated with the NH/SH warming ratio, at least in the models used in Shindell (2014), it would seem that the rather large NH/SH warming ratio observed in the “real” earth system indicates a tiny to non-existent underestimation of TCR when using those simple methods (e.g. Otto et al) in the real world. What do you think?

  5.
    toto says:

    “Much of this discussion follows a common pattern of looking through the peer-reviewed paper to find all the caveats and discussion points, and then repeating them back as if they undermine the paper’s conclusions rather than reflecting that they are uncertainties that were already taken into account.”

    For some unfathomable reason, I am strongly (furiously?) reminded of one statistically-minded mining industry executive…

  6.
    John Mashey says:

    Thanks, very helpful!
    I’m also happy to see lognormal distribution mentioned, certainly appropriate for some right-skewed distributions.

    Given the different ways that processes generate normal/lognormal (additive/multiplicative) effects, might Drew comment on why lognormal looks a better fit?

  7.
    Alexis Crawford says:

    I think it’s entirely possible for aerosols to pose a bigger threat than CO2 since they are more potent. Due to that, I can easily see how aerosol levels can affect climate more than CO2 does. I disagree with Lewis’s claim that climate models do not give good enough information on aerosol effects. This is a new idea, and therefore the current models might not be fully accurate. But give it some time and patience and it should be a very valuable source of data.

  8.
    Doug Proctor says:

    Interesting graph. TCR since 1850, then, between 1.0 (unlikely) and 2.4 (unlikely), with 1850 to 2014, what? 1.0?, leaving nil (unlikely) to 1.4C (unlikely) by time of doubling (560 ppm CO2). Time for doubling, with China and India going great guns, 60 years, 2075?

    CO2 needs to rise at >2.6 ppm/yr by 2018 (?), temps need to start rising towards >0.11 to maximum 0.23/decade av by 2018, unless the negative forces that have caused the “pause” BECOME STRONGER with time.

    In the next couple of years, the temp records will reveal the skill level of the IPCC models. Not so much the warming part, but the cooling/mitigating part. The skeptics say the warming part is wrong. When the cooling part ends, not much will happen. If, instead, the cooling part is wrong, then the world will, indeed, warm with a vengeance when the cooling factors end.

    Interesting to be in the middle of the science experiment that much of the world’s population doesn’t care one way or the other.

  9.
    Eli Rabett says:

    #7 Alexis, aerosols may be more potent but they are a hell of a lot shorter lived, so no, they are not by themselves a threat, but rather the sources are (which are easier to shut down)

  10.
    MARodger says:

    Eli @1.
    I think if you say 1880, you may have a point. But 1850 I feel is too early in the industrialisation process. 1750-1850 did see coal use/CO2 emissions rise 18-fold, but from a very small starting level. Over 1850-1880 they quadrupled, and this was the period when, for instance, the London pea-souper started appearing (ie the 1870s).

    I do have a pet theory about black carbon & smoke but for a century later. It is that the rise of electricity and the power-station & ‘clean’ domestic coal 1940-1970 may have cut black carbon more than is presently accounted for and thus with the renewed ramp-up of SO2 emissions in that period, more readily provide the cause of the 1940-75 temperature “hiatus”. This pet theory will thus, of course, concern Northern Hemisphere forcings as per the Shindell paper.

  11.
    Salamano says:

    “Much of this discussion follows a common pattern of looking through the peer-reviewed paper to find all the caveats and discussion points, and then repeating them back as if they undermine the paper’s conclusions rather than reflecting that they are uncertainties that were already taken into account. It is important to realize that the results presented in the paper include both the uncertainty in the aerosol forcing and the uncertainty in the enhancement of the response to aerosol forcing, as explicitly stated. Hence any statement that the uncertainty is underestimated in the results presented in the paper, due to the fact that (included) uncertainty in these two components is large, is groundless.”

    I would love to see what Nic’s response to this is…

    The “caveats” governing conclusions of a paper are often precisely the reason why some are found lacking, and this “common pattern” of examining them casts a broad net that more-or-less captures the whole of peer-review. I bet any science researcher would love to be able to decisively declare at the outset that “any” of various types of criticism to their work are hereby groundless.

    I’m grateful that this back-and-forth exchange, though not all located in one place, exists in the open. Victory through discussion over silencing/marginalizing is better.

  12.
    Adam R. says:

    Nic Lewis:

    As with most papers by establishment climate scientists, no data or computer code appears to be archived in relation to the paper.

    Damned establishment climate scientists! Hiding the decline again!

  13.
    dhogaza says:

    Salamano:

    “I bet any science researcher would love to be able to decisively declare at the outset that “any” of various types of criticism to their work are hereby groundless.”

    That’s not the point. Indeed, the caveats exist to point out possible weaknesses, areas where more work is needed, where results might not be as strongly supported by the work as the authors wish, etc.

    The point is that certain prominent denialists have the bad habit of searching for such caveats, then posting about them as though the amateur “auditor” were the first to think about these points, and as though the paper itself does NOT contain such caveats. In other words, it’s a way to lie about the quality of the paper by claiming there are weaknesses the authors were unaware of, even when the weaknesses are clearly stated in the paper itself.

  14.
    Pete Dunkelberg says:

    A note on industrial pollution and its intersection with another area of science that has not pleased everyone:

    Find “Edleston notes that by 1864 it was the more common morph in his garden in Manchester.” in 1.

    See the paragraph that starts with “For those readers who have never experienced coal-era industrial pollution,….” in 2.

    Small bonus: “…In fact, I do not think anyone grasped the line of argument through inability to follow the simple algebraic reasoning which Fritz Müller has adopted.” from 3. (smile)

    Note that mutations in general are very common. 4. Indeed, you are a mutant. Distinctive pigment mutations are not so common in us mammals, a few felines notwithstanding. But drop in on a herpetology exposition and you will see no end of oddly colored snakes.

  15.
    Chris Dudley says:

    Eli (#1),

    The 1880 American Lloyd’s register has

    113 pages of ships
    368 pages of barks
    158 pages of brigs
    196 pages of schooners
    84 pages of steamers.

    Looks like steam power was still just a fraction of the shipping at that time.

    http://library.mysticseaport.org/initiative/ShipRegister.cfm?BibID=237571880

  16.
    Steven Sullivan says:

    Salamano — 9 Apr 2014 @ 7:02 AM, you’ve stunningly missed Drew’s point. He wasn’t objecting to peer review. He’s objecting to Lewis’ charge that uncertainties invalidate the conclusions, when actually they were factored into the results. It’s not as if the uncertainties were just given a pro forma acknowledgment in the discussion/conclusions.

  17.
    Hank Roberts says:

    We present continuous 1772–2003 monthly and annually averaged deposition records for highly toxic thallium, cadmium, and lead from a Greenland ice core showing that atmospheric deposition was much higher than expected in the early 20th century, with tenfold increases from preindustrial levels by the early 1900s that were two to five times higher than during recent decades. Tracer measurements indicate that coal burning in North America and Europe was the likely source of these metals in the Arctic after 1860.

    Proc Natl Acad Sci U S A. Aug 26, 2008; 105(34): 12140–12144.
    Published online Aug 18, 2008. doi: 10.1073/pnas.0803564105

  18.

    Salamano wrote (10):

    I would love to see what Nic’s response to this is…

    I would be interested in finding out whether he has had the chance to publish a second peer-reviewed paper that is in some way related to climatology. While his post at McIntyre’s Climate Audit may be of some interest, I doubt it would qualify. Then again, being a financier in real estate no doubt keeps one busy, as does making the transition to government administration. It does however illustrate he is a man of many talents.

    I would also be interested in whether he has had the chance to look at the criticisms made of his first paper, e.g.,

    Climate Sensitivity Single Study Syndrome, Nic Lewis Edition
    by Dana Nuccitelli, 18 Apr 2013
    https://www.skepticalscience.com/climate-sensitivity-single-study-syndrome-nic-lewis-edition.html

    Dana believes one paper that may be especially relevant to Nic Lewis’s is the following:

    each realization of internal climate variability can result in a considerable discrepancy between the best CS [climate sensitivity] estimate and the true value … average discrepancy due to the unresolved internal variability is 0.84°C.

    Olson, R., et al. “What is the effect of unresolved internal climate variability on climate sensitivity estimates?.” Journal of Geophysical Research: Atmospheres 118.10 (2013): 4348-4358.

    Likewise, we find that natural variability, with this last decade’s warming on the low end compared to previous decades, the lack of coverage in the Arctic, and so on, may have played a role in Lewis’s underestimating transient climate sensitivity:

    … those pointing to the lower end are sensitive to the particular realization of natural climate variability (Huber et al 2014). As a consequence, their results are strongly influenced by the low increase in observed warming during the past decade (about 0.05 °C/decade in the 1998–2012 period compared to about 0.12 °C/decade from 1951 to 2012, see IPCC 2013), and therewith possibly also by the incomplete coverage of global temperature observations (Cowtan and Way 2013). Studies that point towards the lower end also rely on simple energy-balance models with constant feedbacks for all forcings—and forcing quantifications that are derived from various modeling exercises.

    Rogelj, Joeri, et al. “Implications of potentially lower climate sensitivity on climate projections and policy.” Environmental Research Letters 9.3 (2014): 031003.
    http://iopscience.iop.org/1748-9326/9/3/031003/article

    … but then Lewis is somewhat new to this.

    When Lewis criticizes Shindell’s paper:

    The extensive adjustments made by Shindell to the data he uses are a source of concern. One of those adjustments is to add +0.3 W/m² to the figures used for model aerosol forcing to bring the estimated model aerosol forcing into line with the AR5 best estimate of -0.9 W/m².

    … and Shindell responds:

    Indeed the estimate of aerosol forcing used in the calculation of transient climate response (TCR) in the paper does not come directly from climate models, but instead incorporates an adjustment to those models so that the forcing better matches the assessed estimates from the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC). An adjustment is necessary because, as climate models are continually evaluated against observations, evidence has emerged that their aerosol-cloud interactions are too strong (i.e. the models’ ‘aerosol indirect effect’ is larger than inferred from observations). There have been numerous papers on this topic, and the issue was thoroughly assessed in IPCC AR5 chapter 7.

    … then continues by pointing out that the specific value of the adjustment he made in his own paper is supported by prior literature, the above response by Shindell would seem to speak directly to Lewis’ criticism. This would also seem to illustrate the point that familiarity with prior literature is a prerequisite for the sort of criticism that Lewis was attempting.

    No doubt, however, we should keep in mind that Lewis is quite new to all of this. He can’t really be expected to have familiarized himself with the literature to the extent that might otherwise be desired. Making the transition from private to public careers no doubt keeps him quite busy, and I for one will be grateful if at some point he is simply able to make the time to read the peer-reviewed papers that respond or address issues related to his one peer-reviewed paper. In the meantime, those who have made far greater contributions to the literature should make allowances for this most ambitious financier.

  19.
    Drew Shindell says:

    #4 Troy,

    I appreciate all the effort put into this analysis, but there are several issues.

    The first issue is that calculating TCR for each model from historical ‘All forcing’ runs (histAll) requires using the same simulations and time periods as used in calculating the forcings. I don’t know where the dT values come from in your Table 1, but while many are close to the dT in the histAll cases from my analysis some are quite different. In addition, comparing with the response to all forcing requires accounting for all forcings and not only those listed in Masters’ table. In my analysis, I removed the response to natural forcings deliberately to isolate the response to the inhomogeneous forcings as best as possible, for example, but included natural forcings when comparing with observations. Comparison with the modeled temperature response in histAll is inconsistent without accounting for stratospheric water, land-use, solar, etc, some of which are poorly characterized (hence I did not make use of a calculation like this).

    The second, and more fundamental, issue is what is learned from calculating the TCR in a historical model simulation? I used the surface temperature responses from histAll – (histGHG + histNatural) to obtain the response to aerosols+ozone+land-use and derive the enhancement of the response for that case relative to WMGHGs that I called E. Calculation of TCR based on histAll in a model is approximately the same as calculating the sum of responses to histGHG, histNat, and histInhomogeneous, where the latter includes the factor E. So in essence, one would use the difference in the modeled responses in the various simulations to derive E, then check whether putting the sum back together with the derived E reproduces the same answer. This is thus circular logic rather than an independent test. As Table S1 in my paper showed, there are differences even in the response to histGHG vs 1% per year CO2 on the order of 10%, so we would expect the TCR for each model to vary somewhat when computed based on a slightly different set of experiments, but even if the correct dT values and all forcings were included I don’t believe the results would tell us anything useful.

    The third issue concerns the NH/SH warming ratio in the models. This is a more interesting one as it brings up some compelling physical questions, and is worth additional work. I used this ratio in Figure 2 of my paper to illustrate a point and provide support to the basic physical principles of faster land response vs ocean. The hemispheric responses, in particular in the SH where the imposed aerosol forcing is very small, can be quite sensitive to factors such as how a given model transports heat between the hemispheres, however. Hence the relationship between forcing in various locations and global mean response, which included the full planet so is not dependent upon heat transport, is a more robust feature in the models. In contrast, there is not a strong correlation between imposed forcing and response in the SH, suggesting that modeled responses in the SH are a function of much more than imposed forcing and global mean sensitivity. Hence the NH/SH ratio does not make a good quantitative test of modeled sensitivity in my opinion.

  20.
    AndyL says:

    Timothy Lewis seems to think that this was Nic Lewis’s only peer reviewed paper on climate science.

    In fact this is at least his third, including Otto et al in which Lewis was a co-author with Drew Shindell

  21.
    Tim Osborn says:

    #4 Troy and #19 Drew,

    Yes, the NH/SH warming ratio is of interest here and might be valuable for constraining the behaviour of these methods or at least for assessing physical processes as simulated by the GCMs.

    However, it is important to note the considerable uncertainty in the observed value of the NH/SH warming ratio. I’ve not seen a figure showing the uncertainty in this ratio, but the NH-SH warming difference for HadCRUT3 has considerable uncertainty relative to the observed changes themselves:

    http://www.metoffice.gov.uk/hadobs/hadcrut3/diagnostics/hemispheric/difference/

    Would be nice to see this for HadCRUT4, and specifically for the ratio of warming rather than the difference in anomalies. The quoted warming ratio of 1.48 may be rather uncertain.

    There’s also some interesting structure in the NH-SH difference series, with abrupt changes as well as a post-1985 trend. Determining the combination of forcing, response, internal variability and observational error that could give rise to a structure like this might be a fruitful exercise in itself.
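The extra uncertainty a ratio inherits from its denominator can be illustrated with a quick Monte Carlo sketch. The hemispheric warming values and 1-sigma uncertainties below are illustrative placeholders chosen to reproduce a ratio near the quoted 1.48; they are not HadCRUT results.

```python
import random

random.seed(0)

# Assumed (illustrative) hemispheric warming since pre-industrial, in K,
# with rough 1-sigma observational uncertainties.
NH_WARMING, NH_SIGMA = 0.89, 0.06
SH_WARMING, SH_SIGMA = 0.60, 0.06

# Sample the ratio, letting each hemisphere vary independently.
ratios = []
for _ in range(100_000):
    nh = random.gauss(NH_WARMING, NH_SIGMA)
    sh = random.gauss(SH_WARMING, SH_SIGMA)
    ratios.append(nh / sh)

ratios.sort()
median = ratios[len(ratios) // 2]
lo = ratios[int(0.05 * len(ratios))]   # 5th percentile
hi = ratios[int(0.95 * len(ratios))]   # 95th percentile
print(f"NH/SH ratio: {median:.2f} (5-95%: {lo:.2f}-{hi:.2f})")
```

Because the SH warming in the denominator is small, even modest observational uncertainty in it spreads the ratio over a wide range, which is why a single value like 1.48 should be treated cautiously.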

  22. 22
    Eli Rabett says:

    MA Rogers @10 and Chris Dudley @ 16

    First, agreed none of this is a slam dunk. For example, while there may have been fewer steamships, they were larger and incredibly dirty, as were the trains, and they covered a lot more ground.

    As interestingly (Eli is easily entertained), the clearing of the Americas for farms must have increased the amount of dust in the air powerfully. Not quite Dust Bowl amounts, but a lot.

  23. 23

    AndyL (#20): Point taken.

  24. 24
    Troy Masters says:

    Hi Drew (#19),

    Thanks for the response. I agree that given that calculation, testing your method on those same models is not quite independent – it essentially just shows the sensitivity to the method of calculating E (whether using difference in decadal means vs. regression to calculate TCR, or using TCR_histGHG vs. TCR_1%CO2 in the denominator). That being said, I do include the land use forcing in my table, and I’d be surprised if the Strat_WV+solar+volcanic change in forcing is much larger than 0.1 W/m^2 (given that the latter offsets some of the former two), so the discrepancies are likely due to the different values of dT calculated in histAll. I calculated mine using a weighted average from the runs available at the ETH subarchive (which should be the same as CMIP5, but is easier IMO to access), so it is possible that I’ve made an error…would it be possible for you to give the values you get for dT_histAll for those various models?

    “Hence the NH/SH ratio does not make a good quantitative test of modeled sensitivity in my opinion”.

    I want to stress that I am saying the NH/SH ratio is a quantitative test of the bias produced by using the simpler (E=1.0) method, and that used in conjunction with the Otto et al. (2013) method this would likely constrain the TCR better. Consider that the bias in the E=1.0 scenario is going to be a function of the actual value of E and the magnitude of F_inhom, with the former value depending largely on the difference in hemispheric forcing. Given that, it is hard to reconcile a large combined magnitude of E and F_inhom with a large NH/SH ratio (for example, IPSL has a large E but a lower-magnitude F_inhom, which allows for the larger NH/SH ratio). So ideally one would assign lower probability to the large-E, very-negative-F_inhom events given the observed ratio, but a Monte Carlo sampling that treats the uncertainties in these values as independent would not do so. You mentioned that the rate of SH warming may be sensitive to hemispheric heat transfer, but in that case couldn’t one argue that the heat transferred from one hemisphere to another would have a differential effect on surface warming, for many of the same reasons (e.g. greater land mass in the NH) that the F_inhom forcings have a different effect? I would suggest that to the degree horizontal heat transfer is reflected in the NH/SH ratio, it would also impact the TCR bias, leaving the ratio as a good constraint on the potential bias.

    I think Tim’s point (#21) about examining the combination of forcings and natural variability that could produce this effect is interesting, and would be important for constraining TCR. Paul S also noted that much of the NH/SH ratio comes from a greater land/ocean warming ratio in the NH than is generally modelled, which is another mystery. And while there may be uncertainty regarding the NH/SH ratio of 1.48 from pre-industrial to now, I think there is considerably less uncertainty about (for instance) this ratio from ~1975 to present, which is quite large relative to models. Similarly (and perhaps relatedly), the magnitude of the change in aerosol forcing from ~1975 to present relative to the change in all forcings is much smaller than from pre-industrial to present, which I think should make the TCR estimated over that period insensitive to the value of E. As such, it seems to me (and Nic Lewis also brought up this point in his CA post) that if there were a substantial bias in the E=1.0 method, the Otto et al. estimate from the 1970s-2000s should be substantially larger than the base-period-through-2000s estimate, right?
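The role of the efficacy factor E in these energy-budget TCR estimates can be sketched schematically: E > 1 weights the spatially inhomogeneous (aerosol/ozone) forcing more heavily in the denominator, raising the inferred TCR. All input values below are illustrative round numbers (AR5-like in magnitude), not figures from Shindell (2014) or Otto et al. (2013).

```python
# Schematic energy-budget TCR with an efficacy factor E applied to the
# spatially inhomogeneous forcing. Illustrative values only.
F2X = 3.7        # W/m^2, forcing for doubled CO2 (standard round number)
DT = 0.8         # K, transient warming over the historical period (assumed)
DF_GHG = 2.8     # W/m^2, well-mixed greenhouse-gas forcing (assumed)
DF_INHOM = -0.9  # W/m^2, aerosol + ozone forcing (AR5-like best estimate)

def tcr(e_factor):
    """TCR estimate; e_factor > 1 amplifies the inhomogeneous forcing."""
    return F2X * DT / (DF_GHG + e_factor * DF_INHOM)

print(f"E = 1.0 -> TCR ~ {tcr(1.0):.2f} K")  # simple Otto-style estimate
print(f"E = 1.5 -> TCR ~ {tcr(1.5):.2f} K")  # enhanced aerosol efficacy
```

Because the aerosol forcing is negative, multiplying it by E > 1 shrinks the net-forcing denominator, so the same observed warming implies a higher TCR; this is the mechanism the debate above is about, while the disagreement concerns the actual value and uncertainty of E.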

  25. 25
    Hank Roberts says:

    clearing of the Americas for farms must have increased the amount of dust in the air

    http://www.washington.edu/news/2014/02/26/pine-forest-particles-appear-out-of-thin-air-influence-climate/

    New research by German, Finnish and U.S. scientists elucidates the process by which gas wafting from coniferous trees creates particles that can reflect sunlight or promote cloud formation, both important climate feedbacks. The study is published Feb. 27 in Nature.

    Changed, presumably, with removal of the original forests and changed back with regrowth subsequently.

    Maybe “rain follows the plow” was actually clouds following the axe and stump-pulling mule.

  26. 26
    Walter Crain says:

    I’m becoming less confident that humans will ever get the CO2 message. Even if some humans take the necessary steps to reduce CO2 emissions, others won’t… So, does anyone have any sort of handle on a calculation of what the atmospheric CO2 ppm would rise to if ALL the (reasonably attainable) fossil fuels on Earth were burned? Sadly, I think that’s where we’re headed.

  27. 27
    Jon Kirwan says:

    @Walter Crain, #26:

    I can’t vouch for the following site and I don’t know what their calculations actually mean. But I remember that we probably have some hundreds of years worth of coal left and the site appears to confirm that.

    http://www.energyrealities.org/chapter/energys-future/item/years-of-fossil-fuels-remaining/erp6D28994FAAEECC86A

    I would suppose those periods are based upon current consumption rates. But the IPCC report recently stated, “with atmospheric carbon dioxide levels rising almost twice as fast in the first decade of this century as they did in the last decades of the 20th century,” so I imagine usage rates are and will continue going up for some time yet. Well more than the 15 years from the IPCC report: “only an intensive push over the next 15 years to bring those emissions under control can achieve the goal.” That’s simply not happening, not even close, in my opinion.

    There are also rapid changes in extraction technologies. So who knows, really? I’d tend to imagine we can very significantly raise atmospheric and ocean levels of CO2, if we decide to accept that we are headed to burning everything that is or will be economically extractable.

    I’d be very interested to see some well-reasoned estimates. Just to put a cap on the question. But I can’t recall seeing anyone attempt it.


  29. 29
    Brucie Bruce says:

    @26: Bill McKibben’s article from 2012 has some numbers: Global Warming’s Terrifying New Math:

    The Carbon Tracker Initiative – led by James Leaton, an environmentalist who served as an adviser at the accounting giant PricewaterhouseCoopers – combed through proprietary databases to figure out how much oil, gas and coal the world’s major energy companies hold in reserve. The numbers aren’t perfect – they don’t fully reflect the recent surge in unconventional energy sources like shale gas, and they don’t accurately reflect coal reserves, which are subject to less stringent reporting requirements than oil and gas. But for the biggest companies, the figures are quite exact: If you burned everything in the inventories of Russia’s Lukoil and America’s ExxonMobil, for instance, which lead the list of oil and gas companies, each would release more than 40 gigatons of carbon dioxide into the atmosphere.

    Which is exactly why this new number, 2,795 gigatons, is such a big deal. Think of two degrees Celsius as the legal drinking limit – equivalent to the 0.08 blood-alcohol level below which you might get away with driving home. The 565 gigatons is how many drinks you could have and still stay below that limit – the six beers, say, you might consume in an evening. And the 2,795 gigatons? That’s the three 12-packs the fossil-fuel industry has on the table, already opened and ready to pour.

    We have five times as much oil and coal and gas on the books as climate scientists think is safe to burn. We’d have to keep 80 percent of those reserves locked away underground to avoid that fate. Before we knew those numbers, our fate had been likely. Now, barring some massive intervention, it seems certain.

    One of the Authors from the now defunct The Oil Drum website is more sanguine, if you can call it that. Her most recent post predicts we’ll actually follow RCP 2.6 because of technical and economic limits to carbon fuel recovery: Oil Limits and Climate Change – How They Fit Together

    Maybe we’re headed somewhere in between? Yea!?
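The arithmetic in the quoted passage is simple enough to state directly. The 565 and 2,795 GtCO2 figures come from the quote; the ~35 GtCO2/yr emission rate is an assumed rough 2010s figure, not from the article.

```python
# McKibben's reserve arithmetic, using the numbers quoted above.
BUDGET = 565.0     # GtCO2 that can still be emitted while staying near 2 C
RESERVES = 2795.0  # GtCO2 embodied in booked fossil-fuel reserves

locked_fraction = 1.0 - BUDGET / RESERVES
print(f"Share of reserves that must stay underground: {locked_fraction:.0%}")

# At an assumed rough 2010s global emission rate of ~35 GtCO2/yr,
# the remaining budget lasts on the order of:
years_left = BUDGET / 35.0
print(f"Budget lasts roughly {years_left:.0f} years at ~35 GtCO2/yr")
```

The ~80% figure in the quote is just 1 − 565/2795; the point of the drinking analogy is that the "12-packs on the table" exceed the "legal limit" several times over.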

  31. 31
    Fergus Brown says:

    Eli, #1, #22: hi Eli. I believe there is a signal from early industrial times, but it takes a relatively long time to show up, because of the enormous difference in scale compared to modern times, and the extension to global rather than ‘Western’ use. The other problem is that the scale of the difference is masked more readily by variability, by events such as Krakatoa, and by the need for statistics to hit significance levels…
    TBH I haven’t done the math, but we shouldn’t be surprised if we now achieve in a year, in emissions terms, what would have taken most of the nineteenth century to manage.

  32. 32
    Jon Kirwan says:

    @Hank, #29:

    Thanks. That’s even more than I’d hoped to see. Glad the paper is “open access,” too.

  33. 33
    Walter says:

    For Fergus, Eli, Brucie, Jon, Walter C, Timothy, Pete.

    Total cumulative fossil fuel CO2 emissions
    from 1750 to 2011 amount to 365 ± 30 PgC.
    Over those 261 years that averages 1.4 PgC per year,
    and corresponds to a 120+ ppm rise of CO2, to 400 ppm.
    From 2000 to 2009, emissions grew by 3.2% per year.

    Ref: AR5 WGI Technical Summary
    Table AII.2.1c: Anthropogenic total CO2 emissions (PgC yr –1)
    2010 decade 9.98
    2020 decade 12.28
    2030 decade 14.53
    2040 decade 17.33
    and then
    2100 decade 28.77

    Therefore, the yearly rate of fossil fuel use doubles by 2050,
    then rises to almost three times the current rate by 2100.

    The IPCC RCP 8.5 carbon-emission assumptions for fossil fuel energy use may be (or are) significantly lower than real-world BAU figures would suggest from the available energy data reports and their forecasts to 2030 and 2040 (the IEA/EIA and others). Carbon share of total energy use: ~83% in 2010, ~78% in 2040.

    Much Talk very little Walk!

    Even so, the IPCC estimates above indicate:
    1) Total Net Atmospheric Carbon Emissions to 2100 will amount to ~2050 PgC (or more) on current Trends,
    2) A BAU projected estimate would push CO2 to ~952 ppm by 2100 (or more), and
    3) Global average temperature increase/anomaly would be as high as ~6.8C by 2100
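The cumulative total can be roughly checked by integrating the decadal rates quoted above from Table AII.2.1c. The trapezoidal interpolation across the missing decades, the airborne fraction, and the ~2.12 PgC-per-ppm conversion are my assumptions (standard round numbers), not figures from the comment.

```python
# Rough check of the cumulative-emissions and CO2 estimates above.
rates = {2010: 9.98, 2020: 12.28, 2030: 14.53, 2040: 17.33, 2100: 28.77}
years = sorted(rates)

cumulative = 365.0  # PgC already emitted 1750-2011 (from the comment)
for y0, y1 in zip(years, years[1:]):
    # trapezoidal integration between the quoted decadal rates (PgC/yr)
    cumulative += (rates[y0] + rates[y1]) / 2 * (y1 - y0)

PGC_PER_PPM = 2.12       # ~2.12 PgC per 1 ppm of CO2 if fully airborne
AIRBORNE_FRACTION = 0.5  # assumed share remaining in the atmosphere

ppm_2100 = 400 + (cumulative - 365.0) * AIRBORNE_FRACTION / PGC_PER_PPM
print(f"Cumulative emissions to 2100: ~{cumulative:.0f} PgC")
print(f"Implied CO2 in 2100: ~{ppm_2100:.0f} ppm")
```

This lands near the "~2050 PgC (or more)" figure above; the implied concentration comes out lower than ~952 ppm with an airborne fraction of 0.5, and approaches it only if a larger fraction stays airborne (as carbon-cycle feedbacks weaken the sinks).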

    The IPCC assumptions are far from perfect or guaranteed. Despite their ‘best practice’ standards and scientific validity, they are known to be conservative estimates arrived at by ‘negotiated’ agreement, and not necessarily the best or latest science on real-world carbon energy use and projections.

    Compare these figures to the known implications for high Carbon emissions:
    Hansen et al. 2013 … “A cumulative industrial-era (1750-?) limit of ~500 GtC fossil fuel emissions and 100 GtC storage in the biosphere and soil would keep climate close to the Holocene range to which humanity and other species are adapted. Cumulative emissions of ~1000 GtC, sometimes associated with 2°C global warming, would spur “slow” feedbacks and eventual warming of 3-4°C with disastrous consequences.” [end quote]

    BAU estimates cumulative Carbon emissions in 2100 well above 2000 GtC

    BAU estimates put warming increases in 2100 at ~4 C or higher (~6.8 C ?)

    Our remaining ~250 GtC Carbon Budget to remain under 2 C (a dangerous climate change temperature) runs out in 2033 – Mann puts it at 2036 because he didn’t include annual growth increases.
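The budget-exhaustion arithmetic can be sketched as follows. The ~250 PgC remaining budget is the figure from the comment; the ~10 PgC/yr starting rate and ~2%/yr growth are assumed round numbers for the mid-2010s.

```python
# When does a ~250 PgC budget run out, starting from ~10 PgC/yr?
budget_pgc = 250.0
rate = 10.0        # PgC/yr (assumed mid-2010s emission rate)
growth = 0.02      # assumed annual growth in emissions
year = 2014
cumulative = 0.0
while cumulative < budget_pgc:
    cumulative += rate        # emit one year at the current rate
    rate *= 1.0 + growth      # then grow the rate
    year += 1

print(f"With ~2%/yr growth, the budget is spent around {year}")
print(f"With no growth it would last 250/10 = {250 / 10:.0f} years instead")
```

With these assumptions the budget runs out in the mid-2030s, consistent with the 2033-2036 range quoted above, while zero growth stretches it toward 2039.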

    So, ‘Business As Usual Fossil Fuel Energy Use’ is a disaster in the making.
    Remember that IPCC projections for atmospheric CO2e ppm growth do NOT include significant, already-known positive climate feedback mechanisms.

    Critical science info: paleoclimate research from UW and Dr Peter Ward et al.

    The key CC issue is that CO2 at 1000 ppm MUST lead to ice-cap melt, SLR, stalled ocean-atmosphere circulation, de-oxygenation of the oceans, and eventually another repeat of historical greenhouse mass extinctions.
    Please view 8 minutes (time set): http://youtu.be/HP_Fvs48hb4?t=33m1s

  34. 34
    Walter says:

    Ref: Allowable carbon emissions lowered by multiple climate targets
    Marco Steinacher, Fortunat Joos & Thomas F. Stocker
    Corresponding author Climate and Environmental Physics, University of Bern, 3012 Bern, Switzerland Oeschger Centre for Climate Change Research,
    Nature (2013) doi:10.1038/nature12269 Published online 03 July 2013
    http://www.climateemergencyinstitute.com/uploads/Multiple_climate_targets_Allowable_carbon_July_2013_.pdf

  35. 35
    Patrick says:

    “Finally, I note that the conclusions of the paper send a sobering message. It would be nice if sensitivity was indeed quite low and society could get away with smaller emission cuts to stabilize climate. Unfortunately, several lines of independent evidence now agree that this is not the case.”

    I am not a climate scientist and I have no idea what is going on in this paper or in Lewis’ criticisms. But this last sentence worries me. This is the talk of a political activist. I would say to Drew Shindell: never mind what society is to do, that is not your concern here. You should be concerned with one thing: “what is the sensitivity of the climate?” All other concerns are irrelevant. Your job is to understand the climate, not to tell society what to do. The sensitivity of the climate is a property of the universe and your job is to determine it. Climate sensitivity does not care about emission schemes or human concerns. It is what it is.

    Please remove yourself from the political debate (at least if you want to be taken seriously as a scientist), otherwise it is impossible for impartial members of the public to take what you say with any seriousness. You should be concerned with the truth, above all else.

    [Response: Of course scientists (and indeed everyone else) should be concerned with the truth as best it can be discerned. And equally clearly, there is nothing 'untrue' about the last line. The idea that scientists should be wholly unconcerned with the wider implications of their work (as opposed to commenters on blogs who don't restrict themselves in any way) is just silly and seems to be mainly used as an argument to dismiss science that appears inconvenient. - gavin]

  36. 36
    Susan Anderson says:

    Elizabeth Kolbert speaks to the subject of all the havering going on about how people should stick to their jobs. If anyone hasn’t noticed, there is a substantial effort to distract and delay; it’s been going on for decades, and we are in real trouble even if sensitivity is low. It’s everybody’s business to do everything in their power to participate in a wakeup call. Most of the criticism of sounding the alarm comes from the best PR/lobbying money can buy, and from the many people duped by these tactics, which have been escalating over the last four decades.

    The areas most backward in accepting the science are the US, Canada, England, and Australia, all of which are imprisoned by an ignorance that is not even slightly interested in whether things might be a bit slower than some are predicting.

    http://www.newyorker.com/online/blogs/elements/2014/05/the-west-antarctica-ice-sheet-melt-defending-the-drama.html

    Of the many inane arguments that are made against taking action on climate change, perhaps the most fatuous is that the projections climate models offer about the future are too uncertain to justify taking steps that might inconvenience us in the present. The implicit assumption here is that the problem will turn out to be less serious than the models predict; thus, any carbon we have chosen to leave in the ground out of fear for the consequences of global warming will have gone uncombusted for nothing.

    But the unfortunate fact about uncertainty is that the error bars always go in both directions. While it is possible that the problem could turn out to be less serious than the consensus forecast, it is equally likely to turn out to be more serious. In fact, it increasingly appears that, if there is any systemic bias in the climate models, it’s that they understate the gravity of the situation. In an interesting paper that appeared in the journal Global Environmental Change, a group of scholars, including Naomi Oreskes, a historian of science at Harvard, and Michael Oppenheimer, a geoscientist at Princeton, note that so-called climate skeptics frequently accuse climate scientists of “alarmism” and “overreacting to evidence of human impacts on the climate system.” But, when you actually measure the predictions that climate scientists have made against observations of how the climate has already changed, you find the exact opposite: a pattern “of under- rather than over-prediction” emerges. The scholars attribute this bias to the norms of scientific discourse: “The scientific values of rationality, dispassion, and self-restraint tend to lead scientists to demand greater levels of evidence in support of surprising, dramatic, or alarming conclusions.” They call this tendency “erring on the side of least drama,” or E.S.L.D. for short.

    Unfortunately, we live in dramatic times. Yesterday’s news about the West Antarctic Ice Sheet is just the latest reminder of this; there will, almost certainly, be much more “surprising” and “alarming” news to follow. Which is why counting on uncertainty is such a dangerous idea.

  37. 37

    What Gavin said. If the passage quoted were in a scientific paper, that would be one thing.

    But what Dr. Shindell wrote here was a blog post–there is no need in that context to check either his civic responsibility or his humanity at the door.

  38. 38
    Uli says:

    #19 Drew Shindell,

    “…Hence the NH/SH ratio does not make a good quantitative test of modeled sensitivity in my opinion.”

    Would the land-ocean dT difference be a better quantitative test of modeled sensitivity?
    Or could the changing distribution of the aerosol forcing (main emissions having moved from US/Europe to Asia, for example) be used to reduce the uncertainty in the size of the aerosol forcing, or in the factor E?
    Does a factor E > 1 imply only a faster response to the aerosol forcing, and hence a larger TCR for the same ECS?

  39. 39
    Hank Roberts says:

    I’d asked a few questions about ocean microbiology and climate in the May UV thread (not getting attention); just came across this modeling project which seems to aim at questions along those lines:

    http://www.greencycles.org/events-EGU2013.html

    EGU2013 – GCII Session BG5.1: Biospheric feedbacks on the Earth system

    Event Dates: 7-12 April 2013
    Location: Vienna, Austria
    Convenors: Andrew Friend, Bethan Jones, Meike Vogt, Sönke Zahele, Han Dolman, Aideen Foley, Mehera Kidston, Gerardo López-Saldaña, Ioannis Bistinas, Catherine Morfopoulos

    This session examined the biogeochemical processes that are likely to affect the evolution of the Earth system over the coming decades, with a focus on the dynamics of marine and terrestrial ecosystems and the development of improved understanding through (a) fieldwork and laboratory experiments, (b) development of new observational datasets, both modern and palaeo, and (c) simulations using numerical models. Novel approaches combining observational datasets with models to improve constraints were particularly well represented….
