RealClimate

Comments

RSS feed for comments on this post.

  1. Great overview, thanks. I think it is important to stress that with the current growth of fossil fuel emissions we are above the highest IPCC emissions scenario (RCP 8.5), at least for fossil fuel combustion. If this persists in the future we will be in the 3 degree range in 2100 even with the lowest CS estimates.

    Comment by Guido van der Werf — 3 Jan 2013 @ 12:03 PM

  2. Gavin Schmidt, could you maybe have a look at a catastrophic “paper” by the economist Alan Carlin that got lost in the scientific literature? One of his reasons to claim that “the risk of catastrophic anthropogenic global warming appears to be so low that it is not currently worth doing anything to try to control it” is that he uses a very low value for the climate sensitivity based on non-reviewed “studies”, while ignoring the peer-reviewed work.

    [Response: As pointed out by Hank in the comment below, we've already wasted enough of our neurons on Carlin. See here. --eric]

    Comment by Victor Venema — 3 Jan 2013 @ 12:40 PM

  3. Very illuminating, thank you. I agree with Guido, and would add that it would be helpful to stress how critical constraining ECS is. It’s not necessarily obvious to the uninitiated what a huge effect this ~2ºC uncertainty in ECS estimates has on scenarios that attempt to predict the magnitude and timing of climate change impacts (e.g. the AR5 RCPs). Also I found some possible minor typos:

    “strongly dependent [on?] still unresolved issues”

    “has been looked at before of course, but [efforts?] have been severely hampered”

    “are more likely to follow to be correlated to the TCR” [remove "to follow"?]

    It also might be helpful to spell out Aerosol Indirect Effect.

    [Response: thanks! - gavin]

    Comment by Chris Korda — 3 Jan 2013 @ 1:01 PM

  4. Victor Venema links to his own blog above. RC’s link on Carlin:
    http://www.realclimate.org/index.php/archives/2009/06/bubkes/

    Comment by Hank Roberts — 3 Jan 2013 @ 1:09 PM

  5. I think what really matters are the changes we can expect in the world in terms of livability, including the ability to grow adequate food. Using the change in temperature from the last glacial maximum as a ruler, if the change in temperature from that is near 6 Kelvin, then 2 K warmer than pre-industrial means a certain level of effects. If that change is only 4 K, then 2 K warmer means a higher level of change ecologically.

    The 2 K limit generally accepted as a dangerous limit would mean something different if the change in temperature from the last GM is 6 K (approximately 1/3 change again) versus 4 K (approximately 1/2 change again). In other words, if climate sensitivity is toward the low end, 2 K is more dangerous than we currently give it credit for, and arguments for low risk because of low sensitivity are less valid because that means that more ecological changes occur for a given temperature change than currently thought.

    Comment by Chris G — 3 Jan 2013 @ 2:26 PM
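
    To make the arithmetic of that "ruler" comparison concrete, here is a minimal Python sketch; the 4 K and 6 K glacial-interglacial values are the hypotheticals used in the comment above, not measured quantities.

    warming = 2.0  # K above pre-industrial, the commonly cited limit

    for glacial_to_interglacial in (4.0, 6.0):  # K, the two hypothetical "ruler" lengths
        fraction = warming / glacial_to_interglacial
        print(f"If the LGM-to-pre-industrial change is {glacial_to_interglacial:.0f} K, "
              f"2 K of warming is {fraction:.0%} of a full glacial-interglacial shift")
    # A shorter ruler (lower sensitivity) makes 2 K a larger fraction of the
    # glacial-interglacial range, which is the commenter's point.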

  6. Quite helpful Gavin. Well done.

    Comment by David B. Benson — 3 Jan 2013 @ 3:11 PM

  7. A useful post.

    Correct me if I am wrong, but this appears to be walking back the CS numbers a bit: ~3C seems to be heading towards ~2.5C. I am encouraged, as I have been a somewhat vocal critic of the fact that, while models have been overestimating the temps fairly consistently, this somehow wasn’t translating into lower CS estimates or into constraining the upper range. Many forcings were being twiddled to account for observations (namely aerosols), but the main CO2 forcing seemed to be the third rail.

    [Response: Huh? Forcing is not the same as sensitivity. For reference, GISS-ModelE had a sensitivity of 2.7ºC, and GISS-E2 has sensitivity of 2.4ºC to 2.8ºC depending on version. All very mainstream. - gavin]

    Comment by Tom Scharf — 3 Jan 2013 @ 4:30 PM

  8. Thanks for this crucial science on sensitivity, a crucial subject for understanding. This leads to questions about micro-sensitivity. People are thinking about their individual impact on global warming. Generalities of carbon footprint try to package the message but fail. We might want to know the impact of specific actions.

    Certainly operating a carbon fueled car has real consequences, although for any one vehicle they are very slight. A single cylinder emission is the lowest unit of micro-sensitivity. One person in a car might have a few million per day – or a pound of CO2 per mile traveled. It’s like pissing in a trout stream, one person may not do more harm than scare away the fish, but with millions along the shores all streaming away all day, pretty soon it is the yellow river of death.

    Somewhere we need to measure cognitive sensitivity to human impacts of global warming. Perhaps the visual display would be like a car’s tachometer – it would measure ineffectual use of CO2. The micro-sensitivity meter would indicate how effectively carbon fuel is used to deploy clean energy. It would sit right on the dashboard.

    Comment by richard pauli — 3 Jan 2013 @ 4:33 PM

  9. Tom Scharf wrote: “models have been over estimating the temps fairly consistently”

    That’s simply not true.

    Contrary to Contrarian Claims, IPCC Temperature Projections Have Been Exceptionally Accurate
    27 December 2012
    SkepticalScience

    “In this post we will evaluate this contrarian claim by comparing the global surface temperature projections from each of the first four IPCC reports to the subsequent observed temperature changes. We will see what the peer-reviewed scientific literature has to say on the subject, and show that not only have the IPCC surface temperature projections been remarkably accurate, but they have also performed much better than predictions made by climate contrarians.”

    Comment by SecularAnimist — 3 Jan 2013 @ 5:20 PM

  10. This was just posted at SkSc:

    “Time-varying climate sensitivity from regional feedbacks

    Time-varying climate sensitivity from regional feedbacks – Armour et al. (2012) [FULL TEXT]

    Abstract:”The sensitivity of global climate with respect to forcing is generally described in terms of the global climate feedback—the global radiative response per degree of global annual mean surface temperature change. While the global climate feedback is often assumed to be constant, its value—diagnosed from global climate models—shows substantial time-variation under transient warming. Here we propose that a reformulation of the global climate feedback in terms of its contributions from regional climate feedbacks provides a clear physical insight into this behavior. Using (i) a state-of-the-art global climate model and (ii) a low-order energy balance model, we show that the global climate feedback is fundamentally linked to the geographic pattern of regional climate feedbacks and the geographic pattern of surface warming at any given time. Time-variation of the global climate feedback arises naturally when the pattern of surface warming evolves, actuating regional feedbacks of different strengths. This result has substantial implications for our ability to constrain future climate changes from observations of past and present climate states. The regional climate feedbacks formulation reveals fundamental biases in a widely-used method for diagnosing climate sensitivity, feedbacks and radiative forcing—the regression of the global top-of-atmosphere radiation flux on global surface temperature. Further, it suggests a clear mechanism for the ‘efficacies’ of both ocean heat uptake and radiative forcing.”

    Citation: Kyle C. Armour, Cecilia M. Bitz, Gerard H. Roe, Journal of Climate 2012, doi: http://dx.doi.org/10.1175/JCLI-D-12-00544.1.”

    http://www.skepticalscience.com/new_research_52_2012.html

    Comment by wili — 3 Jan 2013 @ 5:45 PM
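
    A toy two-region illustration of the mechanism described in the abstract above; the region split, feedback strengths and warming patterns are invented for illustration and are not taken from the paper.

    import numpy as np

    # Toy planet: two equal-area regions with different local feedbacks (W/m2 per K).
    area_frac = np.array([0.5, 0.5])    # e.g. low latitudes, high latitudes
    local_fb  = np.array([-2.0, -0.5])  # local radiative response per K of local warming

    def effective_global_feedback(local_warming):
        """Global feedback implied by a given geographic warming pattern (K)."""
        global_dT = np.sum(area_frac * local_warming)
        global_dR = np.sum(area_frac * local_fb * local_warming)
        return global_dR / global_dT  # W/m2 per K of global-mean warming

    # Early transient warming concentrated at low latitudes...
    print(effective_global_feedback(np.array([1.0, 0.3])))  # more strongly negative
    # ...versus later warming amplified at high latitudes.
    print(effective_global_feedback(np.array([1.0, 2.0])))  # weaker, i.e. higher effective sensitivity
    # Same regional feedbacks, different warming pattern, different global feedback.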

  11. And this:

    “Late Pleistocene tropical Pacific temperatures suggest higher climate sensitivity than currently thought

    Late Pleistocene tropical Pacific temperature sensitivity to radiative greenhouse gas forcing – Dyez & Ravelo (2012)

    Abstract: “Understanding how global temperature changes with increasing atmospheric greenhouse gas concentrations, or climate sensitivity, is of central importance to climate change research. Climate models provide sensitivity estimates that may not fully incorporate slow, long-term feedbacks such as those involving ice sheets and vegetation. Geological studies, on the other hand, can provide estimates that integrate long- and short-term climate feedbacks to radiative forcing. Because high latitudes are thought to be most sensitive to greenhouse gas forcing owing to, for example, ice-albedo feedbacks, we focus on the tropical Pacific Ocean to derive a minimum value for long-term climate sensitivity. Using Mg/Ca paleothermometry from the planktonic foraminifera Globigerinoides ruber from the past 500 k.y. at Ocean Drilling Program (ODP) Site 871 in the western Pacific warm pool, we estimate the tropical Pacific climate sensitivity parameter (λ) to be 0.94–1.06 °C (W m−2)−1, higher than that predicted by model simulations of the Last Glacial Maximum or by models of doubled greenhouse gas concentration forcing. This result suggests that models may not yet adequately represent the long-term feedbacks related to ocean circulation, vegetation and associated dust, or the cryosphere, and/or may underestimate the effects of tropical clouds or other short-term feedback processes.”

    Citation: Kelsey A. Dyez and A. Christina Ravelo, Geology, v. 41 no. 1 p. 23-26, doi: 10.1130/G33425.1.”

    In general, as we get better and better data and as models include more and more feedbacks, are studies moving toward higher and higher sensitivities?

    That had been my impression, but perhaps this is the result of selective reading on my part.

    [Response: I would need to check, but I think this is a constraint on the Earth System Sensitivity - not the same thing (see the first figure). - gavin]

    Comment by wili — 3 Jan 2013 @ 5:53 PM
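
    For orientation, a sensitivity parameter quoted in °C per W m−2, as in the abstract above, can be converted to a per-doubling figure by multiplying by the canonical ~3.7 W m−2 forcing for doubled CO2; that conversion factor is the standard approximation, not a number from the paper.

    F_2XCO2 = 3.7  # W/m2, standard approximate forcing for doubled CO2

    for lam in (0.94, 1.06):  # °C per (W/m2), the range quoted in the abstract
        print(f"lambda = {lam} °C/(W/m2)  ->  ~{lam * F_2XCO2:.1f} °C per CO2 doubling")
    # Roughly 3.5-3.9 °C per doubling, i.e. closer to an Earth System Sensitivity
    # than to an ECS if the inline response above is right about what is being constrained.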

  12. I’m increasingly thinking that what we really need is an estimate of the sensitivity of the system to an injection of carbon dioxide including the feedback from the carbon cycle etc. I suppose that is the Earth System Sensitivity in this terminology. Using sensitivities where carbon dioxide concentrations is an exogenous variable could underestimate the cost of emissions impacts.

    Comment by David Stern — 3 Jan 2013 @ 8:35 PM

  13. Surely the models described are all lagging behind the real world. The CMIP5 models seem to predict an Arctic free of summer sea ice in a few decades but the real world trend is for this to happen in the next few summers.

    So why should policy makers care what these models predict as climate sensitivity? I suppose it is an interesting scientific problem but we should bear in mind that most or all of them are on the optimistic side.

    Quite apart from the known underestimated feedbacks – more forest fires, melting permafrost, the decomposition of wetlands – new possibilities turn up. What if we see more Great Arctic Cyclones pop up in coming years to speed very unwelcome climate changes?

    Don’t you think we’re seeing changes that exceed predictions already? Lots of droughts, floods and snow.

    I would be interested in your take on frankenstorm Sandy. A sign of climate change or just blowing in the wind?

    Comment by Geoff Beacon — 4 Jan 2013 @ 4:38 AM

  14. Since the AGU/Wiley publishing switchover seems to have annihilated the Hargreaves et al paper (hopefully only temporarily!), here’s my own copy of it.

    [Response: James -- thanks. One of my papers disappeared this way too. Mildly annoying! I added a link in the post which we'll remove once AGU sorts things out. --eric]

    Comment by James Annan — 4 Jan 2013 @ 5:48 AM

  15. Thanks for this interesting post.

    Comment by Alex Harvey — 4 Jan 2013 @ 6:40 AM

  16. If “sensitivity” is the response to a given injection of CO_2, how can we measure this directly when the CO_2 level is constantly increasing?

    [Response: That isn't the point. Sensitivity is a measure of the system, and many things are strongly coupled to it - including what happens in a transient situation (although the relationship is not as strong as one might think). The quest for a constraint on sensitivity is not based on the assumption that we will get to 2xCO2 and stay there forever, but really just as a shorthand to characterise the system. Thus for many questions - such as the climate in 2050 - the uncertainties in the ECS are secondary. - gavin]

    Comment by Philip Machanick — 4 Jan 2013 @ 11:34 AM
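
    A rough sketch of the point in the inline response that near-term warming is tied mainly to the TCR rather than the ECS, using the common approximation that transient warming scales with forcing relative to a doubling; the CO2 concentrations and TCR values below are illustrative assumptions, not projections.

    import math

    F_2XCO2 = 3.7  # W/m2 per CO2 doubling (standard approximation)
    co2_now, co2_mid_century = 395.0, 500.0  # ppm, assumed for illustration only

    dF = F_2XCO2 * math.log(co2_mid_century / co2_now) / math.log(2.0)  # CO2-only forcing change

    for tcr in (1.4, 1.8, 2.2):  # °C per doubling, an illustrative TCR range
        print(f"TCR = {tcr} °C  ->  CO2-driven warming to mid-century ≈ {tcr * dF / F_2XCO2:.2f} °C")
    # The spread tracks the assumed TCR range directly; the equilibrium sensitivity
    # mainly governs how much further warming remains in the pipeline afterwards.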

  17. “Palaeosens (2012)” — the reference in footnote 1 — is
    http://www.nature.com/nature/journal/v491/n7426/abs/nature11574.html
    Making sense of palaeoclimate sensitivity
    PALAEOSENS Project Members

    (Paywalled, but the Supplementary Information (4.8M) PDF is available.)

    Comment by Hank Roberts — 4 Jan 2013 @ 12:05 PM

  18. Geoff Beacon #13: You may indeed be able to cite cases where models are “lagging behind the real world”. Arctic sea ice measurements below a past prediction do constitute such a case. But comparing different *future* predictions of “an Arctic free of summer sea ice” cannot, logically, be cited today as a discrepancy between a past prediction and the real world, as measured. Please don’t confuse these 2 situations, which are quite different. If you want to bet on an ice-free Arctic, by some appropriate definition, by some date in a couple years, you can probably find a place to do it, but that’s a different thing from pointing out how a past prediction missed something in the real world.

    Comment by Ric Merritt — 4 Jan 2013 @ 1:14 PM

  19. Ric Merritt #18

    I actually did make money betting on the Arctic sea ice this year – but I would like to point out the bets were motivated more by anger than avarice.

    Just look at the plots taken from CMIP4 and CMIP5 models when they are compared with measured extents from NSIDC data, then tell us where you would place your bet for a summer free of sea ice.

    I’d expect to see the Arctic essentially free of ice during September within three years.

    What’s your bet?

    Neven’s Sea Ice Blog has some pieces that will help :

    The real AR5 bombshell

    Models are improving, but can they catch up?

    Comment by Geoff Beacon — 4 Jan 2013 @ 4:02 PM

  20. > Ideally, one would want to do a study across all
    > these constraints with models that were capable of
    > running all the important experiments – the LGM,
    > historical period, 1% increasing CO2 (to get the TCR),
    > and 2xCO2 (for the model ECS) – and build a multiply
    > constrained estimate taking into account internal
    > variability, forcing uncertainties, and model scope.
    > This will be possible with data from CMIP5 ….

    How soon? Is there any coordination among those doing this, before papers get to publication, so you know what’s being done by which group, and all the scientists are aware of each other’s work so they, taken as a group, can nail down as many loose ends as possible?

    Comment by Hank Roberts — 4 Jan 2013 @ 5:03 PM

  21. Volume has a more immediate signal than extent. In other words, measuring extent masks the problem. Since we can now speak in either term, it is a disservice for the IPCC to speak only of extent. I suggest the whole sea ice section be re-written with a volume-centric view. I’m betting all those “models more or less worked for extent up to 2011” would turn into “models were way off on volume through 2012”.

    Comment by Jim Larsen — 4 Jan 2013 @ 5:10 PM

  22. Oops, I see that’s been answered:

    http://www.metoffice.gov.uk/research/news/cmip5
    “… (CMIP5) is an internationally coordinated activity to perform climate model simulations for a common set of experiments across all the world’s major climate modelling centres….
    …. and deliver the results to a publicly available database. The CMIP5 modelling exercise involved many more experiments and many more model-years of simulation than previous CMIP projects, and has been referred to as “the moon-shot of climate modelling” by Gerry Meehl, a senior member of the international steering committee, WGCM…..”

    Comment by Hank Roberts — 4 Jan 2013 @ 5:11 PM

  23. How does this submitted paper by Hansen et al fit in:
    http://arxiv.org/ftp/arxiv/papers/1211/1211.4846.pdf

    Do I understand correctly that this paper suggests a current CS of about 4 degrees C and earth system sensitivity of about 5 degrees, and seems to rule out CS-values lower than 3 degrees?

    They also speak about sea level sensitivity as being higher than current ice sheet models show. It seems about 500 ppm CO2 could eventually mean an ice free planet, much lower than the circa 1000 ppm that ice sheet models seem to estimate.

    Any thoughts on this approach and these conclusions?

    Comment by Lennart van der Linde — 4 Jan 2013 @ 5:16 PM

  24. Splendid word, I’d guess a typo, in the Hansen conclusion:

    “16×CO2 is conceivable, but of course governments would not be so foolhearty….”

    Comment by Hank Roberts — 4 Jan 2013 @ 7:53 PM

  25. Gavin

    This is not the subject, but it seems that in AR5 (sorry, it is the leaked version) the mean total aerosol forcing is about 30% smaller than the same forcing in AR4 (-0.9 W/m2 against -1.3 W/m2).
    At this link, http://data.giss.nasa.gov/modelforce/RadF.txt , NASA-GISS provides a total aerosol forcing, in 2011, of -1.84 W/m2.
    I think that, while it is easy to reconcile a 3°C sensitivity with -1.84 W/m2, it seems impossible with -0.9 W/m2 (the new IPCC mean forcing); maybe a 2°C sensitivity works better.
    So, is there another aerosol effect (different from the adjustment) accounted for by the models, or something else?

    [Response: That file is the result of an inverse calculation in Hansen et al, 2011. You need to read that for the rationale. The forcings in our CMIP5 runs are smaller. - gavin]

    Comment by meteor — 5 Jan 2013 @ 4:24 AM

  26. On the studies of sensitivity based on the last glacial maximum, what reduction in solar forcing is used based on the increased albedo of the ice sheets, snow and desert? It doesn’t appear to be outlined in the papers.

    Comment by Paul Williams — 5 Jan 2013 @ 7:56 AM

  27. This is off topic, but I was wondering about the Alaska earthquake this morning and its impact on the methane hydrates along the continental shelf. Info on this would be helpful.

    Comment by Jack Wolf — 5 Jan 2013 @ 9:31 AM

  28. Geoff,
    My bet would be the opposite. Historically, a new low sea ice extent (area) is set every five years, with small recoveries in-between. My bet would be that 2012 was an overshoot, and that the next three years will show higher extents and areas. The next lower sea ice will occur sometime thereafter.

    Comment by Dan H. — 5 Jan 2013 @ 12:07 PM

  29. Looking again at Hansen’s submitted paper leaves me guessing his earth system sensitivity in the current state at a little more than 5 degrees C, more like 6-8 degrees. Any other interpretations?

    Comment by Lennart van der Linde — 5 Jan 2013 @ 5:35 PM

  30. 26 Paul W asked, “On the studies of sensitivity based on the last glacial maximum, what reduction in solar forcing is used based on the increased albedo of the ice sheets, snow and desert? It doesn’t appear to be outlined in the papers.”

    Yes, the obvious questions that make the most sense are often missing. What’s the total watts/m2 of the initial orbital push from LGM to HCO (the papers are totally silent on this), and what’s the total increase in temperature (4-6 C?)?

    Combine the two and you’ve got a total system sensitivity for conditions during an ice age. I’ve heard that sensitivity for current conditions is probably higher, but regardless, isn’t that the first thing one would want answered about climate sensitivity?

    1. What was the initial push historically?
    2. What was the final result (pre-industrial temps)?
    3. What is the current push?

    RC often touches on the last two, but the answer to the all-important first question is rarely (if ever – I don’t ever remember seeing an answer) mentioned even though it seems to be the best way to derive some sort of prediction about the future that doesn’t rely on not-ready-for-prime-time systems.

    Has anybody ever heard of an estimate of the initial orbital forcing from LGM to HCO?

    Comment by Jim Larsen — 5 Jan 2013 @ 7:06 PM
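
    For what it is worth, a back-of-envelope version of the three-step calculation asked for above, using round numbers of the kind Hansen has quoted; they are illustrative rather than precise, and the global-mean orbital "push" itself is small (it mainly redistributes sunlight), so the GHG and albedo changes are treated here as the effective forcings.

    # LGM -> pre-industrial "system sensitivity", back-of-envelope with round numbers.
    dT_lgm    = 5.0   # K, approximate LGM-to-pre-industrial warming
    dF_ghg    = 3.0   # W/m2, greenhouse-gas change (CO2, CH4, N2O)
    dF_albedo = 3.5   # W/m2, ice sheets, vegetation, sea level
    F_2XCO2   = 3.7   # W/m2 per CO2 doubling

    sensitivity_param = dT_lgm / (dF_ghg + dF_albedo)  # K per (W/m2)
    print(f"~{sensitivity_param:.2f} K/(W/m2)  ->  ~{sensitivity_param * F_2XCO2:.1f} K per CO2 doubling")
    # With these round numbers the answer lands near the canonical ~3 K per doubling.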

  31. #28–Dan H wrote:

    My bet would be the opposite. Historically, a new low sea ice extent (area) is set every five years, with small recoveries in-between. My bet would be that 2012 was an overshoot, and that the next three years will show higher extents and areas. The next lower sea ice will occur sometime thereafter.

    Maybe. But didn’t we have a conversation here on RC, not so long ago, about the virtues and vices of extrapolation?

    I’m looking at the winter temps from 80 N this year (continuing toasty, relatively), and thinking about ENSO–neutral is now favored through spring–and remembering a) that the weather last year was rather unremarkable for melt and b) we’re still at the height of the solar cycle, more or less.

    Throw in a quick consult with some chicken entrails, and I’ve concluded that I wouldn’t bet on Dan’s extrapolation.

    Comment by Kevin McKinney — 5 Jan 2013 @ 8:20 PM

  32. Jim Larsen #30,
    I think Jim Hansen mentions the initial orbital forcing for glaciation-deglaciation to be less than 1 W/m2 averaged over the planet, maybe just a few tenths of a W/m2. The resulting slow GHG and albedo feedbacks are about 3 W/m2 each, in his calculation.

    So what happens if the initial GHG forcing now is about 4 W/m2? Would that mean slow feedbacks would total tens of W/m2? Or less? It seems Hansen thinks less, about 4 W/m2 as well, but I don’t really understand why. Does it have to do with the initial orbital forcing being much stronger or effective locally, at the poles?

    It seems to me Hansen is really still struggling to understand this himself, and as a consequence his papers are not fully clear yet. Or maybe I just don’t understand clearly enough myself.

    Comment by Lennart van der Linde — 6 Jan 2013 @ 4:20 AM

  33. > Historically, a new low sea ice extent (area) is set every five years

    What planet equates extent and area, and has this 5-year cycle?

    Comment by Hank Roberts — 6 Jan 2013 @ 10:02 AM

  34. Re- Comment by Lennart van der Linde — 6 Jan 2013 @ 4:20 AM

    You say- “Does it have to do with the initial orbital forcing being much stronger or effective locally, at the poles?”

    The northern hemisphere has much more land than the southern hemisphere and they are therefore affected differentially by orbital forcing.

    Steve

    Comment by Steve Fish — 6 Jan 2013 @ 12:05 PM

  35. Dan H #28: I wouldn’t be so sure that 2012 is an outlier. Look at the second animation here. The last few years show ice thickness consistently below previous levels. The pattern is oscillation with a downward trend but around 2007, the previous record year for minimum extent, there’s a big drop, then another one in 2010. With increasingly less multi-season ice, rebuilding previous sea ice extent gets harder and harder. I’m sure some people also thought 2007 was an outlier.

    Of course you could be right that there’s some oscillation before it dips again, but I wouldn’t bet on it. There was nothing special about 2012 conditions to have caused a big dip (e.g. SOI index didn’t show any big El Niño events over the year).

    Comment by Philip Machanick — 7 Jan 2013 @ 5:23 AM

  36. Kevin,
    Fair enough. However, looking at the decrease in sea ice minimum over the past decade or so, both the 2007 and 2012 minima crashed through the previous lows in a typical overshoot pattern. Recently, a new low has been set every five years (2002, 2007, & 2012), with modest recoveries in-between. Last year looks remarkably similar to 2007.

    Comment by Dan H. — 7 Jan 2013 @ 7:23 AM

  37. Doesn’t -3.5 W/m2 from the ice age albedo forcing seem like an awfully low figure?

    The Arctic sea ice melting out above 75N would have almost no impact at all if that is the forcing change from glaciers down to Chicago and sea ice down to 45N (at lower latitudes where the albedo has much more impact).

    Comment by Paul Williams — 7 Jan 2013 @ 8:50 AM

  38. Paul Williams,

    I can’t tell where you got the figure, but -3.5 W/m2 is about right for the current understanding of the “boundary condition” land albedo change between pre-industrial and LGM. In LGM simulations land albedo changes are prescribed (at least with regard to ice sheets and altered topography due to sea level; other land albedo changes occur as feedbacks), so they count as a forcing, whereas sea ice is determined interactively by the model climate, so it is a feedback in this framework.

    Comment by Paul S — 7 Jan 2013 @ 12:26 PM

  39. Geoff Beacon #19: To answer your question, if you mean you expect the NSIDC to announce a September arctic sea ice minimum below, say 1M sq km, by 2015, I would bet against, but not a huge amount, because of uncertainty.

    This is not due to any denialist illness, or any reluctance to put my money where my mouth is. Over the last several years, when irritated by trolls on DotEarth, Joe Romm’s site, or the like, I have repeatedly offered to bet more than my current middle-class salary, indexed to the S&P 500 at time of settling up, on the course of global temperatures over decades. Strangely enough, I never got a serious bite.

    But my previous point, which you kinda ignored, was that future expectations, which are of course what folks make bets about, are fundamentally different from pointing out a difference between carefully recorded past expectations and carefully recorded (probably recent) past measurements. I think the conversation is clearer if we keep that straight.

    Comment by Ric Merritt — 7 Jan 2013 @ 1:28 PM

  40. #36–“Last year looks remarkably similar to 2007.”

    Only in terms of the magnitude of the extent drop. But if there’s one thing I’ve learned about watching sea ice melt, it is that it ‘loves’ to confound.

    Comment by Kevin McKinney — 7 Jan 2013 @ 2:33 PM

  41. #37–Maybe, but IIRC, I saw an estimate of 0.7 W/m2 for an ice-free Arctic summer. So, maybe not–though the 0.7 estimate was probably somewhat of a ‘spherical cow in a vacuum’ deal.

    Comment by Kevin McKinney — 7 Jan 2013 @ 2:36 PM

  42. @James Annan (#14). Thanks for posting your paper. I think there is a disconnect between the modeling and paleodata community that is affecting your estimates. The data community (geochemical proxies) would argue that we’ve solidly established the 2.5-3 deg cooling level for the deep tropics during the LGM. The MARGO data is dominated by older foram transfer function estimates, which even its most ardent practitioners would agree do not record tropical changes accurately. This is an important point that is affecting a number of recent estimates of sensitivity using MARGO data.

    [Response: David, thanks for dropping by. I take it you mean that the Margo data is resulting in underestimates of climate sensitivity? --eric]

    Comment by David Lea — 7 Jan 2013 @ 3:28 PM

  43. @Eric: Yes, that’s the implication. if you look at Fig. 2 in Hargreaves et al, the observational band for LGM tropical cooling they use, based on MARGO, is -1.1 to -2.5 deg C, equating to a sensitivity of about 2.5 deg. Using an estimate of the mean tropical cooling based on geochemical proxies of 2.5-3 deg would yield a sensitivity closer to 3.5 deg (but perhaps Julia will comment).

    Comment by David Lea — 7 Jan 2013 @ 4:11 PM

  44. Paul Williams @37 — The ice sheets become dirtier over time.

    Comment by David B. Benson — 7 Jan 2013 @ 5:21 PM

  45. Do the ‘older foram transfer function estimates’ make different calculations but using the same original material? Or is this new field data? How did the geochemists come by the ‘geochemical proxies of 2.5-3′ now favored?

    Comment by Hank Roberts — 7 Jan 2013 @ 5:32 PM

  46. @Hank Roberts #45. I believe that the transfer function estimates used in MARGO are based on the traditional method used in CLIMAP, rather than newer approaches. And I also believe it is largely the same data set used in CLIMAP. As for the geochemical data, it is based on Mg/Ca in foraminifera, alkenone unsaturation in sediments and some sparse data from other techniques such as Ca isotopes, clumped isotopes and TEX86. The -2.5 to -3.0 deg cooling value is my subjective estimate based on knowledge of the data and various published compilations. Although LGM oxygen isotope changes cannot be used to independently assess cooling, they provide a useful additional constraint that is difficult to reconcile with a cooling much less than 3 deg.

    Comment by David Lea — 7 Jan 2013 @ 6:32 PM

  47. Thanks Dr. Lea for coming by,

    It would be interesting to have someone do a post on the history of LGM ocean-temperature reconstructions following CLIMAP, including the Sr/Ca evidence.

    There’s a lot to it (and I think it was one of the initial pushes for Lindzen in developing his faith in a low climate sensitivity).

    Comment by Chris Colose — 7 Jan 2013 @ 7:24 PM

  48. This helps (I’d heard some of the terms, I’d have to look up all of ‘em again as most of what I know is decades out of date)

    Please go on at as much length as you have patience for.

    Comment by Hank Roberts — 7 Jan 2013 @ 8:29 PM

  49. > affecting a number of recent estimates
    > of sensitivity using MARGO data.

    Time to invite all the authors whose work is affected to a barbeque?

    How hard is it to revise a paper if the author (or reviewer, or editor) decides this change should be made? Simple, or complicated?

    Comment by Hank Roberts — 7 Jan 2013 @ 11:04 PM

  50. Dan H #28:
    Not sure where you get the idea that a record low extent is set every five years. The previous record to 2007 was in 2005. As I recall the 2007 record resulted from very favourable weather conditions, so it would have been unlikely for another record to be set for several years (and it wasn’t). I think we can say that the 2007 melt made it more likely that another record smashing melt season would occur eventually given the right conditions, and that we can say the same about 2012.

    http://nsidc.org/arcticseaicenews/2012/10/poles-apart-a-record-breaking-summer-and-winter/

    Comment by Bill Woolverton — 8 Jan 2013 @ 2:08 AM

  51. Bill,
    Sorry, my bad. On my dataset, the line obscured the 2005 data point. Both the 2007 and 2012 lows were affected by favourable weather conditions, and I concur that it would be unlikely for another record to be set for several years. The data from 2008-2011 fell reasonably well on the linear trend established over the past two decades. I would expect the next few years to follow suit.

    Comment by Dan H. — 8 Jan 2013 @ 11:31 AM

  52. Dan H.,
    I would be careful in drawing any conclusions about the temporal dependence of new sea ice minima. The most notable aspect of the graph is the downward trend, and it is arguable that a linear trend no longer cuts it as a fit. What is more, the decline in thickness is even more marked than the sea ice extent decline, and thin ice is easier to melt. While it is true that weather affects the ultimate decline year to year, I don’t think it is an accident that the last two records haven’t just broken their predecessors, but smashed them. The number of aces in the deck has increased.

    Comment by Ray Ladbury — 8 Jan 2013 @ 12:04 PM

    > on my dataset the line obscured the 2005 data
    But “Dan H.” claimed to be using the data file he pointed to, not a picture.

    > I concur that it would be unlikely for another record to be set
    The familiar “I agree with myself and pretend you said it” bait again

    This is the uncanny valley simulation of discourse.

    Comment by Hank Roberts — 8 Jan 2013 @ 12:53 PM

  54. @Hank #49. Not that simple. I agree on getting everyone together, but you can’t go back and revise published papers (fortunately). The way forward is to continue to debate and refine the estimates. In some areas it just takes time to get consensus, but if the problem is solvable, we’ll eventually get there.

    [Response: actually you can go back and revise published results using updated datasets and/or calibrations and/or age models. Indeed we should be building archives that allow for that as a matter of course. I would expect that this might be much more cost effective than drilling new cores... ;-) -gavin]

    Comment by David Lea — 8 Jan 2013 @ 1:10 PM

  55. “I would expect that this might be much more cost effective than drilling new cores… ;-) -gavin]”

    So you turned something we all want into something “they” can say no to? Thanks…

    Comment by Jim Larsen — 8 Jan 2013 @ 3:51 PM

  56. > David Lea
    > …. I agree on getting everyone together

    Is there an appropriate umbrella organization (AGU?) that overlaps the two communities? (and hosts barbeques?)

    Is there a journal where reviewers are drawn from both modeling and paleo communities?

    Can scientists get such early feedback on choices before starting papers, without having their ideas stolen?

    > Gavin
    > … using updated datasets and/or calibrations and/or age models.
    > … we should be building archives

    Is there any project to create such archives?
    Some sketch of what would be collected, etc.?
    (Lists and pointers to lists, rather than copies — Rule One of Databases — if possible)
    (I’d guess this might be long discussed but lacking funding — and not yet ready for Kickstarter)

    Would authors (and journal editors) cooperate in creating a new layer of science publishing,
    to be dedicated to “using updated datasets and/or calibrations and/or age models” for revising/reworking papers?

    Would original authors — whose approach was known good — agree with having someone else crank through their
    same procedure after the “datasets and/or calibrations and/or age models” were changed? Get credit? Not lose face or funds?

    Could recalculating involve citizen/volunteers working with guidance from original authors?

    I know reworking previous papers isn’t a high priority for most scientists or grad students.
    They ought to be left free to do more interesting and new work.

    At the rate the “datasets and/or calibrations and/or age models” are improved — it’d sure be interesting.

    Comment by Hank Roberts — 8 Jan 2013 @ 4:57 PM

  57. On sea ice records and the probability of setting new records:

    I haven’t done a quantitative analysis, but my guess is that given an old record, “A”, and a new record, “B”, the expectation that a year soon after B will be even less than B (a newer record) probably decreases the larger the B minus A difference (e.g., reversion to the mean), BUT the expectation that a year soon after B will be less than A should increase the larger that difference (e.g., there is more confidence in a larger decreasing trend due to Bayesian updating).

    (I might also guess that the shorter the time period between A and B, the more expectation there should be that a year soon after B will exceed B)

    So, to apply this to 2012: the large 2012 minus 2007 difference means that I’d expect the next record to be more years off than I would have had 2012 barely beat 2007. It will be interesting to watch over the next few years… if we don’t see a new record until 2017, I wonder how many “sea ice recovery!” posts from WUWT we’ll have to endure…

    Comment by MMM — 8 Jan 2013 @ 5:23 PM
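
    The intuition above can be checked with a quick Monte Carlo toy: a linear downward trend plus noise, counting how often a record that undershot the trend line by a given margin is broken again within a few years. The trend and noise magnitudes are arbitrary, chosen only to illustrate the reasoning.

    import random

    random.seed(0)
    TREND, NOISE = -0.1, 0.5  # arbitrary units per year; illustrative only

    def chance_record_broken(overshoot, n_years=5, n_trials=20000):
        """Fraction of trials in which a record set 'overshoot' below the trend
        line is broken again within n_years of continued trend plus noise."""
        broken = 0
        for _ in range(n_trials):
            record = -overshoot  # record B, measured relative to the trend line at year 0
            if any(TREND * yr + random.gauss(0, NOISE) < record
                   for yr in range(1, n_years + 1)):
                broken += 1
        return broken / n_trials

    for overshoot in (0.0, 0.5, 1.0):
        print(f"record {overshoot} below trend -> broken again within 5 yr in "
              f"{chance_record_broken(overshoot):.0%} of trials")
    # The bigger the overshoot, the longer the expected wait for the next record, as argued above.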

  58. “Both the 2007 and 2012 lows were affected by favourable weather conditions…”

    The only really favorable 2012 circumstance I can think of, weather-wise, was the cyclone. And that was in effect for about a week.

    The melt weather otherwise was fairly ‘middle of the road,’ according to assessments I’ve read. Yet the 2007 record was obliterated, and apparently still would have been without the cyclone.

    Moreover, as Tamino and others have pointed out, this is all extent, yet volume is in some ways more to the point, as implied by Ray’s comments in #52. The most remarkable year for volume decline was actually 2010, if the PIOMAS results are correct:

    http://tinyurl.com/WipneusSeptemberPIOMASannuals

    This decline isn’t just weird weather–though goodness knows, we seem to be seeing more of that:

    http://www.cbc.ca/news/technology/story/2013/01/08/sci-aussie-heat-map.html

    I don’t pretend to know what will happen with next year’s minimum. But it is much more likely to be below 2007 than not–and a new record would sadden but not shock me. (If I had to guess odds, I’d probably say 50-50.)

    Comment by Kevin McKinney — 8 Jan 2013 @ 10:19 PM

  59. 32 Lennart said, “It seems Hansen thinks less, about 4 W/m2 as well,”

    Thanks. I wouldn’t bet too wide of Hansen. Was he assuming cold turkey? If so, we’d end up a lot lower than our current 400ppm, so the persistent modern forcing (on which everything else must pile) is lots lower than the current CO2 forcing.

    Comment by Jim Larsen — 9 Jan 2013 @ 2:29 AM

  60. @David Lea #42-43: thanks for the comment, from where I’m sitting, it seems to be more of a disconnect between two sides of the paleodata community :-) I’m not trying to take sides, just using the most recent and comprehensive proxy compilations. You are certainly right that a colder tropical LGM would result in a higher sensitivity estimate.

    Comment by James Annan — 9 Jan 2013 @ 4:24 AM

  61. 44.Paul Williams @37 — The ice sheets become dirtier over time.

    Comment by David B. Benson — 7 Jan 2013
    ———

    Antarctica, Greenland and mountain glaciers seem to stay white enough. Every time it snows, they are back to albedos of 0.8.

    Now dirt and material will migrate to the top of a glacier as it is melting back/receding and the edges can even become black. But the main glacial region will still be white as long as it is stable or advancing and snow falls over a long enough period of the year. Sounds like a last glacial maximum.

    Comment by Paul Williams — 9 Jan 2013 @ 6:52 AM

  62. Jim @59,

    As I understand Hansen, he’s saying: if we double CO2 this century (so up to about 550-600 ppm), that will mean a forcing of about 4 W/m2 and 3 degrees C warming in the short term (decades), and through slow feedbacks (albedo + GHG) another 4 W/m2 and 3 degrees in the long term (centuries/millennia).

    Comment by Lennart van der Linde — 9 Jan 2013 @ 9:25 AM

  63. @James Annan #60. Your point is a fair one; what’s in print doesn’t always reflect what’s discussed in the halls. But more importantly, I think your paper provides a way forward on the sensitivity problem by providing a plausible scaling between tropical cooling and sensitivity — something that has eluded me in past papers. I am very confident that we can nail down the tropical cooling (if we haven’t already) and, as I said previously, I would be very surprised if it’s much less than 2.5 deg tropics-wide — partly because of the agreement between the various geochemical proxies, partly because of the oxygen isotope constraint. If you adjusted the LGM tropical cooling to 2.8 ± 0.7 deg (my published value from 2000), what would it translate to in terms of sensitivity?

    Comment by David Lea — 9 Jan 2013 @ 11:25 AM

  64. Paul Williams @61 — But during LGM it didn’t snow over the entire Laurentide ice sheet. There were two accumulation centers. Around the margins the winds blew loess in great quantities. Much of it lies here in the Palouse but I’m rather sure that much of it ended up on the ice sheet.

    Comment by David B. Benson — 9 Jan 2013 @ 6:33 PM

  65. Ray,
    The most vulnerable ice has already melted, and was likely enhanced by changes in the AO.

    http://www.nasa.gov/topics/earth/features/earth20120104.html

    The remaining ice is further from the inflow of warm waters, and closer to the Greenland glaciers. This ice is thicker than the ice that melted previously. Going forward, it is likely to melt not linearly but more slowly, as the remaining ice becomes harder to melt.

    Comment by Dan H. — 10 Jan 2013 @ 1:55 PM

  66. Gavin Schmidt

    I am glad to see that my input into the Wall Street Journal op-ed pages has prompted a piece on climate sensitivity at RealClimate. I think that some comment on my energy balance based climate sensitivity estimate of 1.6-1.7°C (details at http://www.webcitation.org/6DNLRIeJH), which underpinned Matt Ridley’s WSJ op-ed, would have been relevant and of interest.

    [Response: Part III. - gavin]

    You refer to the recent papers examining the transient constraint, and say “The most thorough is Aldrin et al (2012). … Aldrin et al produce a number of (explicitly Bayesian) estimates, their ‘main’ one with a range of 1.2ºC to 3.5ºC (mean 2.0ºC) which assumes exactly zero indirect aerosol effects, and possibly a more realistic sensitivity test including a small Aerosol Indirect Effect of 1.2-4.8ºC (mean 2.5ºC).”

    The mean is not a good central estimate for a parameter like climate sensitivity with a highly skewed distribution. The median or mode (most likely value) provides a more appropriate estimate. The mode of Aldrin’s main results for sensitivity is between 1.5 and 1.6ºC; the median is about halfway between the mode and the mean.

    [Response: All the pdfs are skewed - but using the mode to compare to the mean in previous work is just a sleight of hand to make the number smaller. The WSJ might be happy to play these kinds of games, but don't do it here. - gavin]

    I agree with you that Aldrin (available at http://folk.uio.no/gunnarmy/paper/aldrin_env_2012.pdf) is the most thorough study, although its use of a uniform prior distribution for climate sensitivity will have pushed up the mean, mainly by making the upper tail of its estimate worse constrained than if an objective Bayesian method with a noninformative prior had been used.

    It is not true that Aldrin assumes zero indirect aerosol effects. Table 1 and Figure 15 (2nd panel) of the Supplementary Material show that a wide prior extending from -0.3 to -1.8 W/m^2 (corresponding to the AR4 estimated range) was used for indirect aerosol forcing. The (posterior) mean estimated by the study was circa -0.3 W/m^2 for indirect aerosol forcing and -0.4 W/m^2 for direct. The total of -0.7 W/m^2 is the same as the best observational (satellite) total aerosol adjusted forcing estimate given in the leaked Second Order Draft of AR5 WG1, which includes cloud lifetime (2nd indirect) and other effects.

    When Aldrin adds a fixed cloud lifetime effect of -0.25 W/m^2 forcing on top of his variable parameter direct and (1st) indirect aerosol forcing, the mode of the sensitivity PDF increases from 1.6 to 1.8. The mean and the top of the range go up a lot (to 2.5ºC and 4.8ºC, as you say) because the tail of the distribution becomes much fatter – a reflection of the distorting effect of using a uniform prior for ECS. But, given the revised aerosol forcing estimates given in the AR5 WG1 SOD, there is no justification at all for increasing the prior for aerosol indirect forcing by adding either -0.25 or -0.5 W/m^2. On the contrary, it should be reduced, by adding something like +0.5 W/m^2, to be consistent with the lower AR5 estimates.

    [Response: They aren't 'AR5' estimates yet. Still being reviewed, remember? - gavin]

    It is rather surprising that adding cloud lifetime effect forcing makes any difference, insofar as Aldrin is estimating indirect and direct aerosol forcings as part of his Bayesian procedure.

    [Response: Not sure this is true. I think they are starting to do so in subsequent papers. - gavin]

    The reason is probably, because the normal/lognormal priors he is using for direct and indirect aerosol forcing aren’t wide enough for the posterior mean fully to reflect what the model-observational data comparison is implying. When extra forcing of -0.25 or -0.5 W/m^2 is added his prior mean total aerosol forcing is very substantially more negative than -0.7 W/m^2 (the posterior mean without the extra indirect forcing). That results in the data maximum likelihoods for direct and indirect aerosol forcing being in the upper tails of the priors, biasing the aerosol forcing estimation to more negative values (and hence biasing ECS estimation to a higher value).

    Ring et al. (2012) (available from http://www.scirp.org/fileOperation/downLoad.aspx?path=ACS20120400002_59142760.pdf&type=journal) is another recent climate sensitivity study based on instrumental data. Using the current version, HadCRUT4, of the surface temperature dataset used in a predecessor study, it obtains central estimates for total aerosol forcing and climate sensitivity of respectively -0.5 W/m^2 and 1.6 ºC. This is a 0.9ºC reduction from the sensitivity of 2.5°C estimated in that predecessor study, which used the same climate model. The reduction resulted from correcting a bug found in the climate model computer code. (Somewhat lower and higher estimates of aerosol forcing and sensitivity are found using other, arguably less reliable, temperature datasets.)

    Comment by Nic Lewis — 12 Jan 2013 @ 1:27 PM
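
    On the narrower statistical point about skewed distributions (leaving aside the disagreement in the inline response over which statistic should be compared), here is a small numerical illustration of how far apart mode, median and mean can sit for a right-skewed, sensitivity-like distribution; the lognormal parameters are arbitrary and chosen only to give a plausible-looking shape.

    import math

    # Right-skewed lognormal "toy posterior" for sensitivity S (arbitrary parameters).
    mu, sigma = math.log(2.0), 0.45

    mode   = math.exp(mu - sigma**2)
    median = math.exp(mu)
    mean   = math.exp(mu + sigma**2 / 2)
    print(f"mode ≈ {mode:.2f}, median ≈ {median:.2f}, mean ≈ {mean:.2f} °C")
    # mode < median < mean for any right-skewed lognormal, so the headline number
    # depends strongly on which statistic a study chooses to report.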

  67. > Dan H. says:…
    > The most vulnerable ice has already melted

    Begins with an assumption, without citation.
    Conflates sea level (floating) ice and glacial ice.
    Ignores multiple ways that ice shelves protect glaciers.
    Arrives at his usual conclusion.

    Search

    http://nsidc.org/cryosphere/quickfacts/iceshelves.html
    “… Ice streams and glaciers constantly push on ice shelves, but the shelves eventually come up against coastal features such as islands and peninsulas, building pressure that slows their movement into the ocean. If an ice shelf collapses, the backpressure disappears. The glaciers that fed into the ice shelf speed up, flowing more quickly out to sea….”

    Comment by Hank Roberts — 12 Jan 2013 @ 2:57 PM

  68. http://www.ldeo.columbia.edu/news-events/scientists-predict-faster-retreat-antarcticas-thwaites-glacier

    Comment by Hank Roberts — 12 Jan 2013 @ 2:58 PM

  69. Search better

    Comment by Hank Roberts — 12 Jan 2013 @ 3:37 PM

  70. Dan H. also says: “This ice is thicker than the ice that melted previously.”

    And yet most of the multi-year ice has already disappeared, and ice thickness continues to drop, as best as we can tell.

    Comment by Kevin McKinney — 13 Jan 2013 @ 8:36 AM

  71. Kevin,

    The minimum Arctic sea ice extent has declined by a little over half from its maximum of the past three decades.

    http://neven1.typepad.com/blog/2012/10/nsidc-arctic-sea-ice-news-september-2012.html

    The following animation from MIT shows the sea ice changes. Notice the ice repeatedly melts along the Siberian, Alaskan, and northern Canadian coastlines. The ice around northern Greenland and the northern Canadian islands remains year after year. This is the thick ice to which I was referring. Yes, it is thinner than two decades ago, but it is thicker than the ice around the continents which has melted in previous summers. Does this adequately explain my previous posts?

    http://www.staplenews.com/home/2013/1/10/understanding-arctic-sea-ice-at-mit.html

    Comment by Dan H. — 13 Jan 2013 @ 5:09 PM

  72. I thought James Annan had demonstrated that using a uniform prior was bad practice. That would tend to spread the tails of the distribution such that the mean is higher than the other measures of central tendency. So is it justified in this paper?

    Comment by Graeme — 13 Jan 2013 @ 6:01 PM

  73. Can we quit chasing Dan H.’s red herrings? He is _so_ good at diverting a topic to talking about his mistakes. Paste his claim into Google, and sigh.
    Large ice age animation
    The older ice is not up against the shoreline, and is not protected.

    Comment by Hank Roberts — 13 Jan 2013 @ 9:23 PM

  74. “Does this adequately explain my previous posts?”

    No, in view of the fact that the thick multi-year ice which formerly made up a considerable proportion of the sea ice has nearly disappeared.

    Comment by Kevin McKinney — 14 Jan 2013 @ 12:16 AM

  75. Dr. Joel Norris of the Scripps Institution gave an excellent colloquium presentation titled “Cloud feedbacks on climate: a challenging scientific problem” to Fermilab National Laboratory, May 12, 2010. Please see the archived video and his powerpoint presentation at this link:

    http://www-ppd.fnal.gov/EPPOffice-w/colloq/Past_09_10.html

    Comment by CRS DrPH — 14 Jan 2013 @ 1:59 AM

  76. New paper mixing “climate feedback parameter” with climate sensitivity… “climate feedback parameter was estimated to 5.5 ± 0.6 W m−2 K−1” “Another issue to be considered in future work should be that the large value of the climate feedback parameter according to this work disagrees with much of the literature on climate sensitivity (Knutti and Hegerl, 2008; Randall et al., 2007; Huber et al., 2011). However, the value found here agrees with the report by Spencer and Braswell (2010) that whenever linear striations were observed in their phase plane plots the slope was around 6 W m−2 K−1. Spencer and Braswell (2010) used middle tropospheric temperature anomalies and although they did not consider any time lag they may have observed some feedback processes with negligible time lag considering that the tropospheric temperature is better correlated to the radiative flux than the surface air temperature. The value found in this study also agrees with Lindzen and Choi (2011) who also considered the effects of lead-lag relations.”

    Open for interactive discussion.
    http://www.earth-syst-dynam-discuss.net/4/25/2013/esdd-4-25-2013.html

    [Response: Another paper confusing short term variations with long-term shifts. - gavin]

    Comment by Magnus W — 14 Jan 2013 @ 6:09 AM

  77. Regarding my #74: On sea ice thickness, here is an unreviewed but sensible discussion/analysis of Arctic sea ice volume and thickness as modeled by PIOMAS. Note particularly the plots of June and September thickness time series.

    http://dosbat.blogspot.co.uk/2012/09/why-2010-piomas-volume-loss-was.html

    Comment by Kevin McKinney — 14 Jan 2013 @ 8:08 AM

  78. Kevin,

    Not sure about your contention that the thick ice has nearly disappeared (it could be a difference in our definitions of “thick ice”). I am not disputing that the thicker ice has thinned. Indeed, there has been a general thinning of the entire sea ice. The thickness of the remaining multi-year ice, along with its geographic location, will make it more difficult to melt than the ice that was spread across the Arctic, and exposed to Pacific and Atlantic ocean currents, along with runoff from fresh water rivers.

    Also, using volume to determine when the Arctic may be ice-free suffers from exponential decay. Volume will also decrease faster than area initially, but volumetric decrease will slow as less ice remains (simple mathematics). Hence, any prediction that volumetric losses will continue exponentially is mathematically flawed.

    Comment by Dan H. — 14 Jan 2013 @ 9:41 AM

  79. > the remaining, multi-year ice, along with its geographic location

    Still no cite for this theory that ice is protected north of Greenland.

    The multi-year ice is not protected by its geographic location.

    http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-11-00113.1

    Dan H. linked to a video at MIT that shows dark red for older ice next to Greenland. Other models, and the satellite imagery, show that the older ice circulates — and is melting.

    Comment by Hank Roberts — 14 Jan 2013 @ 4:09 PM

  80. that’s from February 2012:
    Comiso, Josefino C., 2012: Large Decadal Decline of the Arctic Multiyear Ice Cover. J. Climate, 25, 1176–1193.
    doi: http://dx.doi.org/10.1175/JCLI-D-11-00113.1

    Comment by Hank Roberts — 14 Jan 2013 @ 4:10 PM

  81. Anyone with a better background in this area who wants to point out the worst mistakes in the paper in #76 can still do so.

    http://www.earth-system-dynamics.net/comment_on_a_paper.html

    http://www.realclimate.org/index.php/archives/2013/01/on-sensitivity-part-i/comment-page-2/#comment-313595

    Comment by Magnus W — 14 Jan 2013 @ 4:13 PM

  82. #80–read the discussion linked, Dan. You’ll find that ice more than 3 meters thick now forms an insignificant proportion of the total population. It’s true, of course, that the Transpolar Drift tends to accumulate ice in the area we’ve been discussing, but that hardly means that it is going to slow the overall melt any.

    As to your second paragraph, nobody knows, at this point, whether volume will continue to follow an exponential curve right down to zero extent. That’s not a mathematical question, but an empirical one–and it will remain empirical unless and until we can model sea ice physically a lot better than we can now. Therefore, what you say in that paragraph is pure bravado, unsupported by any evidence. It would be nice if it were true, but there’s really no reason to think that it actually is.

    Comment by Kevin McKinney — 14 Jan 2013 @ 5:49 PM

  83. The total of -0.7 W/m^2 is the same as the best observational (satellite) total aerosol adjusted forcing estimate given in the leaked Second Order Draft of AR5 WG1, which includes cloud lifetime (2nd indirect) and other effects.

    There are only a handful of published estimates for total anthropogenic aerosol forcing, including first indirect and cloud lifetime effects. Of these, the smallest best estimate I can find is -0.85W/m^2, which means the reported -0.7 is unlikely to be representative of total aerosol forcing, whatever else it relates to.

    Comment by Paul S — 16 Jan 2013 @ 8:46 AM

  84. Kevin, the real reason that sea ice volume will likely not reach zero any time soon is that calving from Greenland and from the Canadian archipelago will continue and will likely increase. That will keep some (comparatively small) amount of ice in the sea for a good while.

    That is why informed folks, like those at Neven’s Arctic Sea Ice blog, talk instead about time frames for an ‘essentially’ or ‘virtually’ ice free Arctic Ocean–by this they generally mean anything under one million square kilometres of sea ice extent. And many see it as likely that we will reach this level very soon–if not this September, then within the next two or three years.

    But you are right that, given the massive failure of earlier modelers, we have to conclude that we can’t really know what is coming at us with any certainty.

    Comment by wili — 16 Jan 2013 @ 12:49 PM

  85. Following on from the comments by Nic Lewis and Graeme,

    Yes, using a flat prior for climate sensitivity doesn’t make sense at all.
    Subjective and objective Bayesians disagree on many things, but they would agree on that. The reasons why are repeated in most text books that discuss Bayesian statistics, and have been known for several decades. The impact of using a flat prior will be to shift the distribution to higher values, and increase the mean, median and mode. So quantitative results from any studies that use the flat prior should just be disregarded, and journals should stop publishing any results based on flat priors. Let’s hope the IPCC authors understand all that.

    Nic (or anyone else)…would you be able to list all the studies that have used flat priors to estimate climate sensitivity, so that people know to avoid them?

    Comment by Steve Jewson — 20 Jan 2013 @ 5:23 AM

  86. Steve Jewson,
    The problem is that the studies that do not use a flat prior wind up biasing the result via the choice of prior. This is a real problem given that some of the actors in the debate are not “honest brokers”. It has seemed to me that at some level an Empirical Bayes approach might be the best one here–either that or simply use the likelihood and the statistics thereof.

    Comment by Ray Ladbury — 20 Jan 2013 @ 9:40 AM

  87. Hank:

    Can we quit chasing Dan H.’s red herrings?

    It’s as easy as not reading them at all, which many of us are already doing. Try it! When catching up on a thread, at the point one’s eyes come to “Dan H. says:”, they simply skip past the entire comment without taking it in. Think of it as a mental killfile.

    Comment by Mal Adapted — 20 Jan 2013 @ 2:07 PM

  88. Ray,

    I agree that no-one should be able to bias the results by their choice of prior: there needs to be a sensible convention for how people choose the prior, and everyone should follow it to put all studies on the same footing and to make them comparable.

    And there is already a very good option for such a convention…it’s Jeffreys’ Prior (JP).

    JP is not 100% accepted by everybody in statistics, and it doesn’t have perfect statistical properties (there is no framework that has perfect statistical properties anywhere in statistics) but it’s by far the most widely accepted option for a conventional prior, it has various nice properties, and basically it’s the only chance we have for resolving this issue (the alternative is that we spend the next 30 years bickering about priors instead of discussing the real issues). Wrt the nice properties, in particular the results are independent of the choice of coordinates (e.g. you can use climate sensitivity, or inverse climate sensitivity, and it makes no difference).

    Using a flat prior is not the same as using Jeffreys’ prior, and the results are not independent of the choice of coordinates (e.g. a flat prior on climate sensitivity does not give the same results as a flat prior on inverse climate sensitivity).

    Using likelihood alone isn’t a good idea because again the results are dependent on the parameterisation chosen…you could bias your results just by making a coordinate transformation. Plus you don’t get a probabilistic prediction.

    When Nic Lewis referred to objective Bayesian statistics in post 66 above, I’d guess he meant the Jeffreys’ prior.

    Steve

    ps: I’m talking about the *second* version of JP, the 1946 version not the 1939 version, which resolves the famous issue that the 1939 version had related to the mean and variance of the normal distribution.

    Comment by Steve Jewson — 20 Jan 2013 @ 2:28 PM
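
A minimal numerical sketch of the coordinate-dependence point in the comment above, with all numbers invented for illustration (a hypothetical Gaussian estimate of the feedback parameter lambda = F_2x/S, taking F_2x as 3.7 W/m^2); nothing here is taken from any published study. With a flat prior the answer depends on whether you work in S or in lambda; with the Jeffreys prior (proportional to 1/S^2 when working in S, flat when working in lambda) the two routes agree.

```python
# Sketch only: invented numbers, not any published study's data or method.
import numpy as np

F2X = 3.7                      # assumed forcing for doubled CO2, W/m^2
lam_hat, lam_sd = 1.2, 0.4     # hypothetical Gaussian estimate of feedback lambda, W/m^2/K

def median_from_grid(x, density):
    """Median of a distribution given unnormalised density values on a grid."""
    w = 0.5 * (density[1:] + density[:-1]) * np.diff(x)   # trapezoid masses
    cdf = np.concatenate(([0.0], np.cumsum(w)))
    return np.interp(0.5, cdf / cdf[-1], x)

# Work in S-coordinates: likelihood of the lambda "measurement" as a function of S.
S = np.linspace(0.2, 20.0, 50000)
like_S = np.exp(-0.5 * ((F2X / S - lam_hat) / lam_sd) ** 2)
print("flat prior in S        -> median S = %.2f" % median_from_grid(S, like_S))
print("Jeffreys prior, 1/S^2  -> median S = %.2f" % median_from_grid(S, like_S / S**2))

# Work in lambda-coordinates with a flat prior (the Jeffreys prior for this
# location-parameter setup), then map the median back to S = F2X / lambda.
lam = np.linspace(F2X / 20.0, F2X / 0.2, 50000)
like_lam = np.exp(-0.5 * ((lam - lam_hat) / lam_sd) ** 2)
print("flat prior in lambda   -> median S = %.2f" % (F2X / median_from_grid(lam, like_lam)))
```

The last two medians agree, while the flat-in-S median comes out noticeably higher; widen the S grid and the flat-prior result keeps climbing, because with a flat prior the upper tail depends on where the prior is cut off.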

  89. Steve, Ray

    First, when I refer to an objective Bayesian method with a noninformative prior, I mean using what would be the original Jeffreys’ prior for inferring a joint posterior distribution for all parameters, appropriately modified if necessary so that the inference for individual parameters (the marginal posteriors) is as accurate as possible. In general, that would mean using Bernardo and Berger “reference priors”, one targeted at each parameter of interest. In the case of independent scale and location parameters, doing so would equate to the second version of the Jeffreys’ prior that Steve refers to. In practice, when estimating S and Kv, marginal parameter inference may differ little between using the original Jeffreys’ prior and targeted reference priors.

    Secondly, here is a list of climate sensitivity studies that, for their main results, either used a uniform prior when estimating climate sensitivity on its own, or used uniform priors for S and/or Kv when estimating climate sensitivity S jointly with effective ocean vertical diffusivity Kv (or any other parameter, like those two, in which the observations are strongly nonlinear).

    Forest et al (2002)
    Knutti et at (2002)
    Frame et al (2005)
    Forest et al (2006)
    Forster and Gregory (2006) – results as presented in IPCC AR4 WG1 report (the study itself used 1/S prior, which is the Jeffreys’ prior in this case, where S is the only parameter being estimated)
    Hegerl et al (2006)
    Forest et al (2008)
    Sanso, Forest and Zantedeschi (2008)
    Libardoni and Forest (2011) [uniform for Kv, expert for S]
    Olson et al (2012)
    Aldrin et al (2012)

    This includes a large majority of the Bayesian climate studies that I could find.

    Some of these papers also used other priors for climate sensitivity as alternatives, typically either informative “expert” priors, priors uniform in the climate feedback parameter (1/S), or in one case a prior uniform in TCR. Some also used nonuniform priors for Kv or the other parameters being estimated as alternatives.

    Comment by Nic Lewis — 21 Jan 2013 @ 12:11 PM

  90. Sorry to go on about it, but this question of priors is an important issue. So here are my 7 reasons why climate scientists should *never* use uniform priors for climate sensitivity, and why the IPCC report shouldn’t cite studies that use them.

    It pains me a little to be so critical, especially as I know some of the authors listed in Nic Lewis’s post, but better to say this now, and give the IPCC authors some opportunity to think about it, than after the IPCC report is published.

    1) *The results from uniform priors are arbitrary and hence non-scientific*

    If the authors that Nic Lewis lists above had chosen different coordinate systems, they would have got different results. For instance, if they had used 1/S, or log S, as their coordinates, instead of S, the climate sensitivity distributions would change. Scientific results should not depend on the choice of coordinate system.

    2) *If you use a uniform prior for S, someone might accuse you of choosing the prior to give high rates of climate change*

    It just so happens that using a prior that is uniform in S gives higher values for climate sensitivity than using one that is uniform in 1/S or log S.

    3) *The results may well be nonsense mathematically*

    When you apply a statistical method to a complex model, you’d want to first check that the method gives sensible results on simple models. But flat priors often give nonsense when applied to simple models. A good example is if you try to fit a normal distribution to 10 data values using a flat prior for the variance…the final variance estimate you get is higher than anything that any of the standard methods will give you, and is really just nonsense: it’s extremely biased, and the resulting predictions of the normal are much too wide (a numerical sketch of this example appears after this comment). If flat priors fail on such a simple example, we can’t trust them on more complex examples.

    4) *You risk criticism from more or less the entire statistics community*

    The problems with flat priors have been well understood by statisticians for decades. I don’t think there is a single statistician in the world who would argue that flat priors are a good way to represent lack of knowledge, or who would say that they should be used as a convention (except for location parameters…but climate sensitivity isn’t a location parameter).

    5) *You risk criticism from scientists in many other disciplines too*

    In many other scientific disciplines these issues are well understood, and in many disciplines it would be impossible to publish a paper using a flat prior. (Even worse, pensioners from the UK and mathematicians from the insurance industry may criticize you too :)).

    6) *If your paper is cited in the IPCC report, IPCC may end up losing credibility*

    These are much worse problems than getting the date of melting glaciers wrong. Uniform priors are a fundamentally unjustifiable methodology that gives invalid quantitative results. If these papers are cited in the IPCC, the risk is that critics will (quite rightly) heap criticism on the IPCC for relying on such stuff, and the credibility of IPCC and climate science will suffer as a result.

    7) *There is a perfectly good alternative, that solves all these problems*

    Harold Jeffreys grappled with the problem of uniform priors in the 1930s, came up with the Jeffreys’ prior (well, I guess he didn’t call it that), and wrote a book about it. It fixes all the above problems: it gives results which are coordinate independent and so not arbitrary in that sense, it gives sensible results that agree with other methods when applied to simple models, and it’s used in statistics and many other fields.

    In Nic Lewis’s comment (number 89 above), Nic describes a further refinement of the Jeffreys’ Prior, known as reference priors. Whether the 1946 version of Jeffreys’ Prior or a reference prior is the better choice is a good topic for debate (although it’s a pretty technical question). But that debate does muddy the waters of this current discussion a little: the main point is that both of them are vastly preferable to uniform priors (and they are very similar anyway). If reference priors are too confusing, just use Jeffreys’ 1946 Prior. If you want to use the fanciest statistical technology, use reference priors.

    ps: if you go to your local statistics department, 50% of the statisticians will agree with what I’ve written above. The other 50% will agree that uniform priors are rubbish, but will say that JP is rubbish too, and that you should give up trying to use any kind of noninformative prior. This second 50% are the subjective Bayesians, who say that probability is just a measure of personal beliefs. They will tell you to make up your own prior according to your prior beliefs. To my mind this is a non-starter in climate research, and maybe in science in general, since it removes all objectivity. That’s another debate that climate scientists need to get ready to have over the next few years.

    Steve

    Comment by Steve Jewson — 22 Jan 2013 @ 1:04 PM
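
To make point 3 in the comment above concrete, here is a small sketch of the toy example it describes, using synthetic data and a known mean of zero for simplicity; the grid-based posterior means are only meant to show the direction of the effect.

```python
# Toy example only: synthetic data, mean assumed known (zero) for simplicity.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=10)     # 10 data values, true variance = 1
ss = np.sum(x**2)                               # sum of squared deviations from the known mean
n = len(x)

v = np.linspace(1e-3, 50.0, 200000)             # grid of candidate variances
loglike = -0.5 * n * np.log(v) - 0.5 * ss / v   # normal log-likelihood, up to a constant

def posterior_mean(logprior):
    logpost = loglike + logprior
    w = np.exp(logpost - np.max(logpost))       # unnormalised posterior on the grid
    return np.sum(w * v) / np.sum(w)

print("unbiased estimate ss/n       : %.2f" % (ss / n))
print("posterior mean, flat prior   : %.2f" % posterior_mean(np.zeros_like(v)))
print("posterior mean, Jeffreys 1/v : %.2f" % posterior_mean(-np.log(v)))
```

Whatever the random draw, the flat-prior posterior mean works out to about ss/(n-4), well above both the Jeffreys-prior mean of about ss/(n-2) and the unbiased estimate ss/n, which is the bias being described above.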

  91. This thread now appears under “Older Entries”. Maybe the dialogue between Nic Lewis and Steve Jewson merits some continuing attention, unless it is accepted (as Ray Ladbury has confidently asserted) that Climate Sensitivity is now a “mature field” with a central value around +2.8 K generally agreed.

    Comment by simon abingdon — 25 Jan 2013 @ 5:42 AM

  92. #66 [Response: Part III. - gavin] Any expected release date?

    Comment by simon abingdon — 25 Jan 2013 @ 5:57 AM

  93. Steve Jewson,
    I agree that Jeffreys’ Prior is attractive in a lot of situations. However, it is not clear that it would help in this case, is it? I mean in some cases, JP is flat.

    Comment by Ray Ladbury — 25 Jan 2013 @ 10:01 AM

  94. To Steve Jewson:

    Steve,

    You have clarified many things that I have appreciated but have been unable to express with your clarity. I offer my thanks.

    I have been aware that some specific choices of priors are necessary in even the most mundane of statistical issues, e.g. the choice of a prior for the variance (or standard deviation) of the normal distribution. Also that there are desirable properties that should be maintained, such as invariance under coordinate transformations, and that parameter estimators should be unbiased in at least one of the mean, median, or mode, such things being elemental requirements based on the generic class of the problem.

    At the risk of disagreeing with you, I do have a problem with the notion of anything being non-informative. To me such priors would be better described as elementally, generically, ideally, or statistically informed, if that gets my intention across to you.

    I doubt I have the reserves of wit to grapple with how the JP is derived from Fisher information. For my purposes, and I suspect for the problem at hand, simpler arguments based directly on the need for invariance under coordinate transformations, specifically under a coordinate flip, and perhaps on the desire for well-behaved estimators, would suffice and be easier for those such as I to comprehend.

    As it happens, I have no objection to completely subjective priors provided people are prepared to defend the ensuing argument, and to be clear and transparent in their reasoning, which I can then either embrace or dismiss. That said, it seems that the JPs would be the better points of departure (as opposed to flat priors) to argue from.

    Many Thanks

    Alex

    Comment by Alexander Harvey — 25 Jan 2013 @ 2:10 PM

  95. In 2006, Barton Paul Levenson counted 62 papers that estimated climate sensitivity, starting with Arrhenius in 1896.

    http://bartonpaullevenson.com/ClimateSensitivity.html

    Values
    Less than 1 : 6
    Between 1 and 2: 19
    Between 2 and 3: 12
    Between 3 and 4: 12
    Between 4 and 5: 4
    Between 5 and 6: 1
    Greater than 6 : 1

    Average across all is 2.86, median 2.6

    Ari Jokimaki lists more recent papers here: http://agwobserver.wordpress.com/2009/11/05/papers-on-climate-sensitivity-estimates/

    A certain picture emerges.

    [Response: These are not all commensurable as discussed above, and some are just wrong or rely on out-of-date data. A proper assessment requires a little more work, not just counting. - gavin]

    Comment by Toby — 25 Jan 2013 @ 2:22 PM

  96. Not looking too good is it?

    First Nic Lewis with his study on climate sensitivity, then the comments above by Steve Jewson on the use of flat priors, and now a Norwegian group with their new revised estimate for climate sensitivity.
    http://www.forskningsradet.no/en/Newsarticle/Global_warming_less_extreme_than_feared/1253983344535/p1177315753918

    That 1.9 C for CO2 doubling not so unreasonable after all…

    Comment by nvw — 25 Jan 2013 @ 3:03 PM

  97. nvw,
    It seems that nature has a greater influence than first thought. So much for Simon’s claim above.

    Comment by Dan H. — 25 Jan 2013 @ 5:08 PM

  98. If a dominance of La Nina/ocean variability is causing a hiatus, does that mean climate sensitivity is lower? Doesn’t sound right to me.

    Comment by JCH — 25 Jan 2013 @ 7:55 PM

  99. #96 from your link:
    “When the researchers at CICERO and the Norwegian Computing Center applied their model and statistics to analyse temperature readings from the air and ocean for the period ending in 2000, they found that climate sensitivity to a doubling of atmospheric CO2 concentration will most likely be 3.7°C, which is somewhat higher than the IPCC prognosis.

    But the researchers were surprised when they entered temperatures and other data from the decade 2000-2010 into the model; climate sensitivity was greatly reduced to a “mere” 1.9°C.”

    Well, doesn’t this just show that the method is not robust? How can a single decade be enough to overturn previous results that much, especially when there are other straightforward methods explaining the differences in trend from empirical data, e.g. Foster and Rahmstorf (2011)? I assume that climate sensitivity does not change with time.

    Comment by JohnL — 26 Jan 2013 @ 6:49 AM

  100. Anyone seen the paper from #99? I cannot find a published article…

    Comment by Magnus W — 26 Jan 2013 @ 2:01 PM

  101. Ray Ladbury #93
    “I agree that Jeffrey’s Prior is attractive in a lot of situations. However, it is not clear that it would help in this case, is it? I mean in some cases, JP is flat”

    The form of the Jeffreys’ prior depends on both the relationship of the observed variable(s) to the parameter(s) and the nature of the observational errors and other uncertainties, which determine the form of the likelihood function. Typically the JP is only uniform where the estimation is of a simple location parameter, with the measured variable being the parameter (or a linear function thereof) plus an error whose distribution is independent of the parameter.

    Where (equilibrium/effective) climate sensitivity (S) is the only parameter being estimated, and the estimation method works directly from the observed variables (e.g., by regression, as in Forster and Gregory, 2006, or mean estimation, as in Gregory et al, 2002) over the instrumental period, then the JP for S will be almost of the form 1/S^2. That is equivalent to an almost uniform prior were instead 1/S, the climate feedback parameter (lambda), to be estimated.

    The reason why a 1/S^2 prior is noninformative is that estimates of climate sensitivity depend on comparing changes in temperature with changes in {forcing minus the Earth’s net radiative balance (or its proxy, ocean heat uptake)}. Over the instrumental period, fractional uncertainty in the latter is very much larger than fractional uncertainty in temperature change measurements, and is approximately normally distributed.

    There is really no valid argument against using a 1/S^2 prior in cases like Forster & Gregory, 2006 and Gregory et al, 2002, and that is what frequentist statistical methods implicitly use. For instance, Forster and Gregory, 2006, used linear regression of {forcing minus the Earth’s net radiative balance} on surface temperature, which as they stated implicitly used a uniform in lambda prior for lambda. When the normally distributed estimated PDF for lambda resulting from that approach is converted into a PDF for S, using the standard change of variables formula, that PDF implicitly uses a 1/S^2 prior for S. However, for presentation in the AR4 WG1 report (Fig. 9.20 and Table 3) the IPCC multiplied that PDF by S^2, converting it to a uniform-in-S prior basis, which is highly informative. As a result, the 95% bound on S shown in the AR4 report was 14.2 C, far higher than the 4.1 C bound reported in the study itself.

    Where climate sensitivity is estimated in studies involving comparing observations with values simulated by a forced climate model at varying parameter settings (see Appendix 9.B of AR4 WG1), the JP is likely to be different from what it would be were S estimated directly from the same underlying data. Where several parameters are estimated simultaneously, the JP will be a joint prior for all parameters and may well be a complex nonlinear function of the parameters.

    Comment by Nic Lewis — 26 Jan 2013 @ 4:32 PM
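
A rough numerical sketch of the conversion described in the comment above, with invented numbers (this is not a reproduction of the Forster and Gregory analysis): start from a Gaussian PDF for lambda, change variables to S = F_2x/lambda, which brings in the F_2x/S^2 Jacobian, and then multiply by S^2 and renormalise to put the PDF on a uniform-in-S basis.

```python
# Illustration with made-up numbers; not a reproduction of any published analysis.
import numpy as np

F2X = 3.7                       # assumed forcing for doubled CO2, W/m^2
lam_hat, lam_sd = 1.6, 0.45     # hypothetical Gaussian PDF for lambda, W/m^2/K

S = np.linspace(0.2, 25.0, 100000)

# Change of variables: p(S) = p_lambda(F2X/S) * |d lambda / d S| = p_lambda(F2X/S) * F2X / S^2
p_lambda_at_S = np.exp(-0.5 * ((F2X / S - lam_hat) / lam_sd) ** 2)
p_S = p_lambda_at_S * F2X / S**2

# "Uniform-in-S basis": multiply by S^2, i.e. replace the implicit 1/S^2 prior with a flat one.
p_S_flat_basis = p_S * S**2

def upper95(density):
    w = 0.5 * (density[1:] + density[:-1]) * np.diff(S)
    cdf = np.concatenate(([0.0], np.cumsum(w)))
    return np.interp(0.95, cdf / cdf[-1], S)

print("95%% bound, change-of-variables PDF : %.1f K" % upper95(p_S))
print("95%% bound, uniform-in-S basis      : %.1f K" % upper95(p_S_flat_basis))
```

With these invented numbers the 95% bound roughly triples; the exact figure depends on the assumed lambda uncertainty and on where the S range is truncated, which is itself part of the problem with the uniform-in-S presentation.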

  102. I’m in need of some clarification on what we should be now using as a GWP for methane.

    From Archer 2007:
    …..so a single molecule of additional methane has a larger impact on the radiation balance than a molecule of CO2, by about a factor of 24 (Wuebbles and Hayhoe, 2002)……
    …..To get an idea of the scale, we note that a doubling of methane from present-day concentration would be equivalent to 60 ppm increase in CO2 from present-day, and 10 times present methane would be equivalent to about a doubling of CO2. A release of 500 Gton C as methane (order 10% of the hydrate reservoir) to the atmosphere would have an equivalent radiative impact to a factor of 10 increase in atmospheric CO2……
    …..The current inventory of methane in the atmosphere is about 3 Gton C. Therefore, the release of 1 Gton C of methane catastrophically to the atmosphere would raise the methane concentration by 33%. 10 Gton C would triple atmospheric methane.

    (so doubling atmos methane requires 3 Gton release, 10x present methane requires 30 Gton released?)

    Here also the GWP of methane is taken as 24. As we know, the 20-yr GWP of methane is commonly stated as 72 (IPCC) or 105 (Shindell).

    Factoring in findings of:
    Large methane releases lead to strong aerosol forcing and reduced cloudiness (2011), T. Kurtén, L. Zhou, R. Makkonen, J. Merikanto, P. Räisänen, M. Boy, N. Richards, A. Rap, S. Smolander, A. Sogachev, A. Guenther, G. W. Mann, K. Carslaw, and M. Kulmala

    -That previous GWP methane figures need x1.8 correction factor….
    We should be using 20yr GWP methane of 130 or 180. This is 5.4 or 7.5 times the 24 GWP that Archer 2007 appears to be using?

    So maybe the above should say, looking at a 20yr period(using the 100 becomes 180 gwp)?:

    …..To get an idea of the scale, we note that a [100% increase/7.5= 13% increase] of methane from present-day concentration would be equivalent to 60 ppm increase in CO2 from present-day, and [10 times/7.5= 1.333times] present methane would be equivalent to about a doubling of CO2. A release of [500/7.5=66.7] Gton C as methane (order [10%/7.5=1.3%] of the hydrate reservoir) to the atmosphere would have an equivalent radiative impact to a factor of 10 increase in atmospheric CO2……

    Comment by Aaron Franklin — 31 Jan 2013 @ 8:20 PM

  103. Aaron Franklin (102)

    I wouldn’t go so far as to say that the collective climate science community has completely moved on from the idea, but I’d argue that GWP is a rather outdated and fairly useless metric for comparing various greenhouse gases. It is also very sensitive to the timescale over which it is calculated.

    It’s correct that an extra methane molecule is something like 25 times more influential than an extra CO2 molecule, although that ratio is primarily determined by the background atmospheric concentration of either gas, and GWP typically assumes that forcing is linear in emission pulse, which is not valid for very large perturbations. But because there’s not much methane to begin with, it’s not true that 1.33x methane has more impact than a doubling of CO2 (we’ve already increased methane by well over this amount)…a doubling of methane doesn’t even have nearly as much impact as a doubling of CO2.

    The key point, however, is the much longer residence time of CO2 in the atmosphere…GWP tries to address this in its own mystical way, but there are much better ways of thinking about the issue. See the recent paper from Susan Solomon, Ray Pierrehumbert, and others.

    Comment by Chris Colose — 1 Feb 2013 @ 12:06 PM
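
A back-of-envelope check of the comparison in the comment above, using the simplified radiative forcing expressions of Myhre et al. (1998) with the small CH4-N2O overlap term neglected; the present-day concentrations are round numbers chosen only for illustration.

```python
# Simplified forcing expressions (Myhre et al. 1998); CH4-N2O overlap neglected.
import math

def rf_co2(c, c0):
    """CO2 forcing (W/m^2) for a change from c0 to c, concentrations in ppm."""
    return 5.35 * math.log(c / c0)

def rf_ch4(m, m0):
    """CH4 forcing (W/m^2) for a change from m0 to m, concentrations in ppb, overlap ignored."""
    return 0.036 * (math.sqrt(m) - math.sqrt(m0))

co2_0, ch4_0 = 390.0, 1800.0     # illustrative present-day concentrations (ppm, ppb)

print("2x CO2    : %.1f W/m^2" % rf_co2(2 * co2_0, co2_0))     # roughly 3.7
print("2x CH4    : %.1f W/m^2" % rf_ch4(2 * ch4_0, ch4_0))     # well under 1
print("1.33x CH4 : %.2f W/m^2" % rf_ch4(1.33 * ch4_0, ch4_0))
```

Including the overlap term would make the methane numbers somewhat smaller still, which only strengthens the point that doubling methane falls far short of doubling CO2 in forcing terms.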

  104. Re: The Norwegian findings (#96-100), they’re still under review. Scroll down to “update”:
    http://www.forskningsradet.no/en/Newsarticle/Global_warming_less_extreme_than_feared/1253983344535/p1177315753918?WT.ac=forside_nyhet

    Clicking the Cicero link provided there takes you round trip — RealClimate is extensively referenced.

    Comment by CM — 1 Feb 2013 @ 2:14 PM

