At the COP26 gathering last week much of the discussion related to “Net-Zero” goals. This concept derives from important physical science results, highlighted in the Special Report on 1.5ºC and more thoroughly in the last IPCC report, that future warming is tied to future emissions, and that warming will effectively cease only once anthropogenic CO2 emissions are balanced by anthropogenic CO2 removals. But some activists have (rightly) pointed out that large-scale CO2 removals are as yet untested, and so relying on them to any significant extent to balance out emissions is akin to not really committing to net zero at all. Their point is that “net-zero” is not zero, and hence will serve as a smokescreen for insufficient climate action. To help sort this out, some background might be helpful.
This month’s open thread. The first two weeks will be dominated by COP-26, and various science updates that will be announced there, including this year’s Global Carbon Project report. Curiously, there is some archival interest in the climategate affair possibly in connection to COP-26 (a BBC dramatization “The Trick“, a BBC radio series on the security aspects “The Hack that Changed the World”, and a couple of months ago, a podcast episode of “Cheat!”). Please stick to science-related topics on this thread.
All countries in the world urgently need to adapt to climate change, but none is yet in a good position to do so. It’s urgent because we are not even adapted to the present climate, a fact underscored by recent weather-related calamities such as the flooding in Central Europe and the heatwaves over North America. It’s also urgent because the oceans act like a flywheel, ensuring that cuts in emissions of greenhouse gases will only affect global warming with a lag.
Climate change adaptation was addressed in the 2015 Paris Agreement and at the Climate Adaptation Summit in January 2021, and will be one of four key priorities during the upcoming COP26. Proper climate adaptation of course needs meteorological and climatological data for mapping weather-related risks, to prepare us for future extreme weather. However, I would argue that the climate research community has not had a visible presence at any of these meetings. Instead, the summits have been dominated by politicians and NGOs.
As many of you will know, Geert Jan van Oldenborgh died on Oct 12, 2021, and in the last week a number of very touching tributes have appeared. Notably, a lovely obituary in the NY Times by Henry Fountain, a segment on the BBC’s Inside Science from Roland Pease, a piece on Bloomberg News by Eric Roston and, of course, an appreciation from his colleagues at World Weather Attribution (including Friederike Otto, the co-recipient of the TIME 100 award to Geert earlier this year).
Geert’s work had been featured often at RealClimate (notably the rapid attribution work for the Pacific North West heat wave earlier this year), and we have made frequent reference to Climate Explorer, the tool he built to provide easier access to many sources of climate data. He also provided us with annual updates for the comparison between a 1981 climate projection to subsequent observations.
He let us know earlier this year that this was likely the last update. Moge hij rusten in vrede.
Last week, the Nobel physics prize was (half) awarded to Suki Manabe and Klaus Hasselmann for their work on climate prediction and the detection and attribution of climate change. This came as quite a surprise to the climate community – though it was welcomed warmly. We’ve discussed the early climate model predictions a lot (including some from Manabe and his colleagues), and we’ve discussed detection and attribution of climate change as well, though with less explicit discussion of Hasselmann’s contribution. Needless to say these are big topics which have had many inputs from many scientists over the years.
But RC has a more attuned audience to these topics than most, and so it might be fun to dive into the details of their early work to see what has stood the test of time and what has not, and how that differs (if it does) from their colleagues and rivals at the time.
Manabe’s Climate Modeling
Fortunately, Manabe recently wrote a retrospective on his early work in response to receiving the Crafoord prize in 2018. That paper (Manabe, 2019) gives a good overview of Manabe’s particular philosophy of climate modeling, which was very much focused on getting things to work and not worrying too much about the details. He makes an eloquent argument for a hierarchy of modeling in which simpler, functional models can contribute a lot to understanding in advance of the more complete and more detailed versions turning up. In this, he is in violent agreement with Isaac Held, his colleague at GFDL, and indeed most climate scientists.
But let’s go back to the beginning. Manabe’s early focus was on radiative-convective equilibrium, culminating in his seminal 1967 paper with his longtime collaborator Richard Wetherald (who passed away in 2011). The Manabe and Wetherald (1967) paper has been described as the most influential climate paper ever.
The key aspects were the inclusion of water vapour feedback as temperatures increased, and the use of ‘convective adjustment’ to maintain stability of the lower atmospheric column. While not a great parameterization of the complexity of real convection, it served to keep the troposphere and surface linked in ways that match what happens in the real world. In practice, it was a big advance towards realism over the work of Plass or Möller from a few years before (despite the lack of cloud feedback). Two examples of the sensitivity of their model (which have mostly held up) are useful to look at:
What they showed are the distinct fingerprints of two kinds of forcing: increasing solar activity, which warms all parts of the atmosphere, and carbon dioxide increases, which warm the surface and troposphere but cool the stratosphere and above. The source of this result is the spectral resolution of the radiative transfer model they were using, though oddly enough they don’t discuss it at all. In a subsequent short paper (Manabe, 1970), Manabe extended this result to predict a temperature increase by 2000 of 0.8ºC based on a 25% increase in CO2, which was pretty close. (Funnily enough, this paper appeared in a volume about environmental risks that was edited by a young(er) S. Fred Singer, before his turn to the dark side.)
Manabe’s subsequent work led to the development of the GFDL GCM, initially just including the atmosphere, but eventually with an ocean, and then the transient results shown in Manabe and Stouffer (1993). Famously, these early results were half the input into the Charney report‘s estimate of climate sensitivity in 1979 (the other half being the preliminary results from Jim Hansen’s model at GISS). Both these predictions have been evaluated in recent years to see how well they did. The time series were included in the Hausfather et al (2020) paper and in the latest IPCC report:
The next step in climate modeling was to couple dynamic ocean models to the atmospheric models, and again, Manabe and his colleagues were pioneers (notably Manabe and Bryan (1969), but more comprehensively in Manabe et al. (1975), and Bryan et al. (1975)). But as expectations increased that coupled models could help climate predictions, there was a growing realization that there was a problem with how they were being designed.
The basic issue stems from the different timescales of the ocean and atmosphere. Given the ocean temperatures, an atmospheric model will equilibrate in a year or so of model time (maybe a decade if you care about the water vapour distribution in the upper stratosphere). However, given information from the atmosphere, an ocean model takes centuries to millennia to equilibrate the deep ocean. The tail of the age distribution for water parcels in the deep Pacific can reach 10,000 years or so. But back in the day, running a coupled model anything like that long was prohibitively expensive. So in order to get a coupled model simulation for near-present, the ocean needed to be ‘spun-up’ for a good while on its own, and then, once it was in equilibrium, the coupling was turned on, and voila! a coupled simulation of the present-day climate. Except…
… it didn’t generally work. The newly coupled model would drift away from today’s climate, sometimes with a collapse of the Atlantic overturning circulation, other times just towards a much warmer or cooler climate, or with a terrible ‘double ITCZ’. This was a problem because it’s not at all clear that the sensitivity of the simulated climate (which was off in serious ways) would be the same as the sensitivity of the real world. This stymied progress for a while (maybe a decade or so) as people worked to understand why the models drifted so much, and to find ways to fix it.
When I started in climate modeling (in the early 1990s), this was still a relevant issue, though two approaches had been adopted. One, advocated by Manabe’s group (and Hasselmann’s!), was the imposition of ‘flux corrections’ or ‘flux adjustments’ (Manabe and Stouffer, 1988; Sausen et al, 1988) which added artificial fluxes at the ocean-atmosphere boundary that gave the ocean and atmosphere what they both needed to stay stable, correcting for what would have been calculated, and then keeping that fixed in all future sensitivity experiments. This (by design) produced a good climatology, but effectively buried the models’ poor physics. The other approach was to work with models that had offsets from the real world (which you would keep trying to reduce) but would have sensitivities that were more physically coherent.
The implications of the two approaches are difficult to assess without a perfect model simulation to compare against, and if we had that, there’d be no need to worry about drifts. Thus during the early 90s there was a fair bit of unresolved, almost religious, discussion about what should be done. Manabe was vocal that you needed a reasonable model to play with and make progress, while others were of the opinion that the sensitivity of a flux-corrected model wasn’t informative of the real world, and that using flux corrections as a crutch was actually holding back work on the physics that would (eventually) remove the need for the corrections in the first place. (Minor aside: I was a co-author on a paper that assessed this concept for a slightly simpler class of model, and found that the ‘flux-corrected’ version was not predictive of the ‘true’ sensitivity (Bjornsson et al., 1997).)
Over time the issue more or less resolved itself as models got incrementally better and computational resources increased so that longer coupled model simulations could be done more routinely. Occasionally, the issue still comes up (e.g. Gnanadesikan et al., 2018), but I think it’s fair to say that few modelers think it’s a useful tool anymore. For context, 10 out of 17 models in CMIP2 (~1995) used flux adjustments, and 6 out of 24 in the CMIP3 ensemble (~2001), but none in CMIP5 or CMIP6, while each generation has greater skill than the previous one. In his 2018 retrospective, Manabe doesn’t discuss the issue at all.
The proof of the pudding in climate model terms, though, is the quality and skill of the predictions. A recent paper, Stouffer and Manabe (2017), assessed how good the Stouffer et al. (1989) predictions were. These came from an idealized 1%-per-year increasing-CO2 experiment after 70 years, when CO2 has approximately doubled, and so the result is warmer than we would expect for 2020, but the pattern is quite robust:
Not too shabby!
Hasselmann’s Statistical Insights
[I have to admit to not knowing Hasselmann’s oeuvre as well as Manabe’s, and to my recollection I don’t think we’ve met, so this might need some amendment…]
I think the key paper to look at is Hasselmann (1979), which really set the stage for formal methods of detection and attribution of climate change. Later papers, notably Hasselmann (1997)(pdf), extended this to multi-variate attribution problems (written in tensor notation no less, so that probably helped 😉). The basic idea is that although there are a vast number of degrees of freedom in the atmosphere/climate system, you can make a lot of progress by reducing them and looking just at the dominant modes of variability, comparing observed changes with the patterns expected from simulations. A key insight is that, depending on how the noise and the forced patterns line up, the ‘optimal’ pattern to detect might not be what you first thought. But note that this was written when “continuous model time series of comparable length to analyzed global or hemispheric data [were] not available”, so the paper is mainly conceptual. It is really only in the late 1980s, and more clearly and with more models in the mid-1990s, that the data became available to really apply these methods.
The challenge with all detection & attribution (D&A) work is that it must rely on counter-factuals – i.e. estimates of how the climate would behave in special cases – for instance, if the only forcing was greenhouse gases, or if there was only natural forcings or only internal variability. Since the real world has all of these things going on at the same time, it’s hard to extract them from the observations, particularly since good direct observations don’t stretch back more than a century or so, and proxy climate observations have their own, increased, uncertainties. But even with perfect observations, getting a full characterization of internal variability would be hard, and perhaps impossible. So in practice, the ‘noise ellipsoid’ in the above figure is almost always taken from control runs of coupled climate models which, as Hasselmann acknowledged, were not available in 1979.
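The core of the optimal fingerprinting idea can be sketched in a few lines of toy code. Everything below is synthetic and purely illustrative (the signal pattern, the noise covariance and the "observations" are all made up); the point is just that weighting by the inverse noise covariance, as in Hasselmann's framework, yields a lower-variance estimate of the signal amplitude than a naive projection onto the expected pattern:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic setup: n spatial points, an assumed forced signal pattern g,
# and internal variability with a known covariance C. In practice C is
# estimated from control runs of coupled models (unavailable in 1979).
n = 10
g = np.linspace(0.5, 1.5, n)        # forced warming pattern (made up)
sd = np.linspace(2.0, 0.5, n)       # noise largest where signal is smallest
C = np.diag(sd**2)                  # toy (diagonal) noise covariance

# "Observations": true amplitude 1.0 of the forced pattern, plus noise
y = 1.0 * g + rng.normal(0.0, sd)

# Naive estimate: project the observations straight onto g
beta_naive = (g @ y) / (g @ g)
var_naive = (g @ C @ g) / (g @ g) ** 2   # variance of the naive estimator

# Optimal fingerprint: f = C^-1 g down-weights high-noise directions
Cinv = np.linalg.inv(C)
beta_opt = (g @ Cinv @ y) / (g @ Cinv @ g)
var_opt = 1.0 / (g @ Cinv @ g)           # variance of the optimal estimator

print(beta_naive, var_naive)
print(beta_opt, var_opt)   # optimal variance is never larger than naive
```

The guarantee that `var_opt <= var_naive` follows from the Cauchy-Schwarz inequality; how much smaller it is depends on how the noise and the forced pattern line up, which is exactly Hasselmann's point.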
Hasselmann’s work before this paper was heavily related to measurements and understanding of ocean waves and the role of ‘random’ weather forcing in long-term ocean variability, which has been widely cited; afterwards, he played a key role in developing the MPI climate model (e.g. Cubasch et al., 1992). But much of his later well-cited work built off the 1979 paper and involved further refinements on the theme of D&A, often working with Gabi Hegerl (e.g. Hegerl et al., 1996; Hegerl et al., 1997).
These were very much the ideas that set the discussions in climate science in the 1990s. As you will recall, Hansen had declared in 1988 that “the greenhouse effect is here!”, based on a 3-sigma signal detected with the original GISS model. But the ocean component used there was a simple ‘Q-flux’ model with no dynamics, and so had no ENSO or other coupled modes of variability. The implicit estimate of internal variability was, to be clear, not widely accepted. There are a couple of articles and responses from the time that give a flavor, for instance “Hansen vs. the World” by Richard Kerr, reporting from a workshop where Manabe and Hasselmann’s coauthors (notably Cubasch and Barnett) were present, and the responses from Wally Broecker and James Risbey.
Hasselmann himself commented on this in Science in 1997, after the 1995 Second Assessment Report from IPCC declared that “the balance of evidence” suggested that the greenhouse gas signal had indeed been detected. The figure he showed there:
… supported the IPCC conclusion, and his last line is worth repeating:
It would be unfortunate if the current debate over this ultimately transitory issue should distract from the far more serious problem of the long-term evolution of global warming once the signal has been unequivocally detected above the background noise. — Klaus Hasselmann (1997)
Today, 24 years later, the detection and attribution of anthropogenic climate change is “unequivocal”, but we are still being distracted by ultimately transitory issues…
What if the prize had been given a decade ago?
The two restrictions on the award of disciplinary Nobel prizes are that the awardees must still be alive, and that there is a limit of three laureates per prize. For advances made in the 1960s and 1970s, the first is extremely relevant, and makes the second condition somewhat less so. But without wishing to take anything away from the two awardees this year, ten years ago the decision would have been much tougher. Norman Phillips published what is recognised as the first GCM in 1955 – he died in 2019. Akio Arakawa was the conceptual leader of climate modeling, directly influencing both Manabe and Hansen – he died earlier this year. Of the authors of the published papers predicting global warming in the 1970s (as catalogued in the Hausfather et al paper), Rasool, Schneider, Benton, Sawyer, Broecker and Mitchell have all passed away. Only Nordhaus and Manabe are still alive – though now both have won Nobel prizes.
But the building of climate models and their application is broader than can be recognized like this. There are no prizes for the people who actually wrote the code for the models – people like Gary Russell or Ernst Maier-Reimer (nicely eulogized by Hasselmann), the specialists who designed the parameterizations, the teams that developed the inputs and processed the outputs, or the technicians that kept the old supercomputers running. In recent papers documenting model development, it’s not unusual to have dozens of authors – not the level of the CERN collaborations, but significantly beyond the Nobel limit. The huge advances in understanding we’ve seen since the 1970s have been the work of thousands of smart and dedicated people all around the world, only a few of whom will ever be recognized as widely as this. We should always remember this while we celebrate the winners.
Finally, while it is many scientists’ dream to win a Nobel Prize, Hasselmann’s statement that he would rather have “no global warming and no Nobel Prize” captures the ambiguity that many of us feel in successfully predicting events and trends that we don’t want to come true.
- S. Manabe, "Role of greenhouse gas in climate change", Tellus A: Dynamic Meteorology and Oceanography, vol. 71, pp. 1620078, 2019. http://dx.doi.org/10.1080/16000870.2019.1620078
- S. Manabe, and R.T. Wetherald, "Thermal Equilibrium of the Atmosphere with a Given Distribution of Relative Humidity", Journal of the Atmospheric Sciences, vol. 24, pp. 241-259, 1967. http://dx.doi.org/10.1175/1520-0469(1967)024<0241:TEOTAW>2.0.CO;2
- S. Manabe, "The Dependence of Atmospheric Temperature on the Concentration of Carbon Dioxide", Global Effects of Environmental Pollution, pp. 25-29, 1970. http://dx.doi.org/10.1007/978-94-010-3290-2_4
- S. Manabe, and R.J. Stouffer, "Century-scale effects of increased atmospheric CO2 on the ocean–atmosphere system", Nature, vol. 364, pp. 215-218, 1993. http://dx.doi.org/10.1038/364215a0
- Z. Hausfather, H.F. Drake, T. Abbott, and G.A. Schmidt, "Evaluating the Performance of Past Climate Model Projections", Geophysical Research Letters, vol. 47, 2020. http://dx.doi.org/10.1029/2019GL085378
- S. Manabe, and K. Bryan, "Climate Calculations with a Combined Ocean-Atmosphere Model", Journal of the Atmospheric Sciences, vol. 26, pp. 786-789, 1969. http://dx.doi.org/10.1175/1520-0469(1969)026<0786:CCWACO>2.0.CO;2
- S. Manabe, K. Bryan, and M.J. Spelman, "A Global Ocean-Atmosphere Climate Model. Part I. The Atmospheric Circulation", Journal of Physical Oceanography, vol. 5, pp. 3-29, 1975. http://dx.doi.org/10.1175/1520-0485(1975)005<0003:AGOACM>2.0.CO;2
- K. Bryan, S. Manabe, and R.C. Pacanowski, "A Global Ocean-Atmosphere Climate Model. Part II. The Oceanic Circulation", Journal of Physical Oceanography, vol. 5, pp. 30-46, 1975. http://dx.doi.org/10.1175/1520-0485(1975)005<0030:AGOACM>2.0.CO;2
- S. Manabe, and R.J. Stouffer, "Two Stable Equilibria of a Coupled Ocean-Atmosphere Model", Journal of Climate, vol. 1, pp. 841-866, 1988. http://dx.doi.org/10.1175/1520-0442(1988)001<0841:TSEOAC>2.0.CO;2
- R. Sausen, K. Barthel, and K. Hasselmann, "Coupled ocean-atmosphere models with flux correction", Climate Dynamics, vol. 2, pp. 145-163, 1988. http://dx.doi.org/10.1007/BF01053472
- H. Bjornsson, L.A. Mysak, and G.A. Schmidt, "Mixed Boundary Conditions versus Coupling with an Energy–Moisture Balance Model for a Zonally Averaged Ocean Climate Model", Journal of Climate, vol. 10, pp. 2412-2430, 1997. http://dx.doi.org/10.1175/1520-0442(1997)010<2412:MBCVCW>2.0.CO;2
- A. Gnanadesikan, R. Kelson, and M. Sten, "Flux Correction and Overturning Stability: Insights from a Dynamical Box Model", Journal of Climate, vol. 31, pp. 9335-9350, 2018. http://dx.doi.org/10.1175/JCLI-D-18-0388.1
- R.J. Stouffer, and S. Manabe, "Assessing temperature pattern projections made in 1989", Nature Climate Change, vol. 7, pp. 163-165, 2017. http://dx.doi.org/10.1038/nclimate3224
- R.J. Stouffer, S. Manabe, and K. Bryan, "Interhemispheric asymmetry in climate response to a gradual increase of atmospheric CO2", Nature, vol. 342, pp. 660-662, 1989. http://dx.doi.org/10.1038/342660a0
- K. Hasselmann, "Multi-pattern fingerprint method for detection and attribution of climate change", Climate Dynamics, vol. 13, pp. 601-611, 1997. http://dx.doi.org/10.1007/s003820050185
- U. Cubasch, K. Hasselmann, H. Höck, E. Maier-Reimer, U. Mikolajewicz, B.D. Santer, and R. Sausen, "Time-dependent greenhouse warming computations with a coupled ocean-atmosphere model", Climate Dynamics, vol. 8, pp. 55-69, 1992. http://dx.doi.org/10.1007/BF00209163
- G.C. Hegerl, H. von Storch, K. Hasselmann, B.D. Santer, U. Cubasch, and P.D. Jones, "Detecting Greenhouse-Gas-Induced Climate Change with an Optimal Fingerprint Method", Journal of Climate, vol. 9, pp. 2281-2306, 1996. http://dx.doi.org/10.1175/1520-0442(1996)009<2281:DGGICC>2.0.CO;2
- G.C. Hegerl, K. Hasselmann, U. Cubasch, J.F.B. Mitchell, E. Roeckner, R. Voss, and J. Waszkewitz, "Multi-fingerprint detection and attribution analysis of greenhouse gas, greenhouse gas-plus-aerosol and solar forced climate change", Climate Dynamics, vol. 13, pp. 613-634, 1997. http://dx.doi.org/10.1007/s003820050186
- K. Hasselmann, "Are We Seeing Global Warming?", Science, vol. 276, pp. 914-915, 1997. http://dx.doi.org/10.1126/science.276.5314.914
Fall is here (in the northern hemisphere at least), along with articles about the impact of climate change on autumnal colors. Landsat 9 successfully launched, continuing an almost 50-year-long series of remote sensing (since 1972!), and the World Economic Forum has proposed an Earth Operations Center to monitor greenhouse gases and climate change. Please stick to climate science topics, and remember that (most) other commenters are real people.
There is a new push to reduce CH4 emissions as a possible quick ‘win-win’ for climate and air quality. To be clear this is an eminently sensible idea – as it has been for decades (remember the ‘Methane-to-markets’ initiative from the early 2000s?), but it inevitably brings forth a mish-mash of half-remembered, inappropriate or out-of-date comparisons between the impacts of carbon dioxide and methane. So this is an attempt to put all of that in context and provide a hopefully comprehensive guide to how, when, and why to properly compare the two greenhouse gases.
First of all, let’s be clear about the relative magnitude of the gas concentrations. In 2020, CO2 was at ~410 parts per million, while CH4 was around 1870 parts per billion (or 1.87 ppm, a factor of more than 200 smaller). However the relative rise since the pre-industrial is three times larger for CH4, around 150%, compared to the 50% increase in CO2.
The radiative forcing from these changes in concentrations can be easily calculated using standard formulas (from Etminan et al, 2016 which supersede the slightly simpler ones from IPCC TAR), as about 2 W/m2 for the CO2 change and 0.65 W/m2 for CH4.
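For readers who want to check these numbers, here is a rough sketch using the older, simpler TAR expressions mentioned above (the values in the text use the Etminan et al. (2016) update, which raises the CH4 forcing to ~0.65 W/m2, so expect the CH4 result here to come out somewhat low):

```python
import math

# Simplified radiative forcing expressions from the IPCC TAR.
# These are the "slightly simpler" formulas superseded by Etminan et al.
# (2016); they are used here only as a back-of-envelope check.

def f_overlap(m, n):
    """CH4-N2O band overlap term (m, n in ppb)."""
    return 0.47 * math.log(1 + 2.01e-5 * (m * n)**0.75
                           + 5.31e-15 * m * (m * n)**1.52)

def forcing_co2(c, c0=278.0):
    """CO2 radiative forcing in W/m2 (c, c0 in ppm)."""
    return 5.35 * math.log(c / c0)

def forcing_ch4(m, m0=722.0, n0=270.0):
    """CH4 radiative forcing in W/m2 (m, m0 in ppb), with N2O held
    at an assumed pre-industrial value n0."""
    return (0.036 * (math.sqrt(m) - math.sqrt(m0))
            - (f_overlap(m, n0) - f_overlap(m0, n0)))

print(round(forcing_co2(410.0), 2))   # ~2.1 W/m2 for the CO2 change
print(round(forcing_ch4(1870.0), 2))  # ~0.5 W/m2 (Etminan revises upward)
```

The pre-industrial baselines (278 ppm CO2, 722 ppb CH4, 270 ppb N2O) are approximate round numbers assumed for the sketch.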
But methane’s role in atmospheric chemistry and as a source of stratospheric water vapour means that it has a bigger effect on climate than just the direct effect of its concentration. Methane emissions have a feedback on its own lifetime, serve as an ozone precursor, and reduce the production of sulphate and nitrate aerosols (and consequently indirect cloud-aerosol effects), all of which amplify its net warming effect to about 1.2 W/m2 (about 60% of the CO2 effect since 1750). There is also a very small impact from the oxidation of CH4 to CO2 itself for any fossil-fuel derived methane.
This implies that if you convert the impacts of each set of emissions into temperatures, as was done in the IPCC AR6 report, you get about 0.75ºC from the changes in CO2 and 0.5ºC for CH4 (from the late 19th C, see figure below) or 1ºC and 0.6ºC, respectively, from 1750. Thus despite the smaller concentrations and changes in methane compared to carbon dioxide, the impacts are comparable.
Stocks and flows
Before we go any further though, we need to understand that the effective perturbation times for CO2 and CH4 in the atmosphere are very different. CO2 emissions embed themselves in the atmosphere/biosphere/upper-ocean carbon cycle and have very long-term impacts (under natural conditions, some 15% of the CO2 perturbation will still be in the atmosphere thousands of years from now). In contrast, methane has a perturbation time-scale of about 12 years. This implies that the impact of CO2 on temperature is cumulative (a function of the total emitted CO2, or stock), while the impact of CH4 is a function of current (~decadal) emissions (the flows). Stabilizing temperature effects from CO2 means getting down to net-zero anthropogenic emissions, while stabilizing temperature effects from CH4 means simply stabilizing emissions.
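The stock-versus-flow contrast can be made concrete with a toy pulse calculation: treat the CH4 perturbation as a single exponential with the ~12-year timescale, and the CO2 perturbation with a commonly used multi-exponential impulse-response fit (coefficients here follow the Joos et al. (2013) fit; treat both as illustrative, not as a carbon-cycle model):

```python
import math

def co2_airborne_fraction(t):
    """Fraction of a CO2 pulse remaining in the atmosphere after t years,
    using a multi-exponential impulse-response fit (Joos et al., 2013)."""
    a = (0.2173, 0.2240, 0.2824, 0.2763)   # weights (sum to 1)
    tau = (394.4, 36.54, 4.304)            # decay timescales in years
    return a[0] + sum(ai * math.exp(-t / ti) for ai, ti in zip(a[1:], tau))

def ch4_fraction(t, tau=12.0):
    """Fraction of a CH4 pulse remaining after t years
    (single ~12-year perturbation timescale)."""
    return math.exp(-t / tau)

# A century on, the CH4 pulse is essentially gone while a large
# fraction of the CO2 pulse is still in the atmosphere.
for t in (10, 50, 100, 500):
    print(t, round(co2_airborne_fraction(t), 2), round(ch4_fraction(t), 4))
```

This is why the text describes CO2 impacts as cumulative (a stock) and CH4 impacts as tracking recent emissions (a flow).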
The impacts of emissions of CH4 compared to CO2 then will have a time-varying component. Over a short time, the enhanced effectiveness of methane will be important, but on very long time scales the effects of CO2 will be dominant. This is the source of the difference between the “Global Warming Potential” (GWP) numbers calculated at 20 years or 100 years which have been used for decades. You might recall that GWP is defined as the ratio, on a per-kg basis, of the integrated radiative forcing of a greenhouse gas compared to CO2 over a specific time horizon. But as is clearly stated in AR6, the suitability of comparative emission metrics depends on your end goal or values.
For instance, if you use GWP-100 to trade off emissions on the way to a temperature stabilization scenario, it simply doesn’t work (since you can’t balance any net CO2 emissions with a particular level of CH4 emissions – you would need to have constantly decreasing CH4). Hence, newer concepts like GWP* have been developed that take that into account. Nonetheless, the UNFCCC (and the EPA) use the GWPs from IPCC AR4 for calculating CO2eq emissions and have not updated them as the science has progressed.
People tend to be most interested in comparisons related to future choices, and it’s worth bearing in mind that while there are many ways to do this, most don’t relate to real choices that people have, nor do they clearly match up with a consistent set of values. I’ll return to that issue below. So let’s go:
- Molecule-to-molecule concentrations: On a per-ppm basis, methane is 25 times more effective as a direct greenhouse gas. Including the indirect effects increases that to 45 times as effective.
- kg-to-kg: On a mass basis, methane is 70 times more effective as a greenhouse gas. This takes into account the different molecular weights of the two molecules. That would mean 126 times as effective including indirect effects.
- kgC-to-kgC: an equal amount of kgC as CH4 or CO2 gives rise to the same ppm change, so kgC-to-kgC, methane is again 45 times more effective as a greenhouse gas.
- kg to kg emitted: This is where it starts to get hairy because of the different timescales. Current (AR6) estimates for fossil-sourced methane are ~83 for GWP-20 and ~30 for GWP-100 (AR6 Table 7.15). (It’s slightly smaller than this for biogenic (non-fossil) methane, since the CO2 produced by its oxidation is in that case carbon-neutral.) The assessed uncertainties in these values (largely related to direct and indirect aerosol effects) are ±25 and ±11. The AR4 value for methane GWP-100 was 25.
- kgC emitted to kgC emitted: For some applications, for instance judging the impact of flaring natural gas vs. releasing it directly into the atmosphere, the kg-to-kg comparisons are not relevant, since the same amount of carbon is being emitted, rather than the same total mass. On that basis, using the GWP-100 value of ~30, choosing to release methane directly would be 30×16/44 ≈ 11 times worse than flaring it [Corrected 9/20/21].
- Emissions for temperature stabilization: Each additional GtC of carbon dioxide contributes about 0.00165ºC of eventual warming (the TCRE), while a sustained TgCH4/yr of methane emissions (0.00075 GtC/yr) leads to a ~3 ppb increase of methane concentrations (AR6 Table 5.2), about 0.0024 W/m2 in total radiative forcing, and, assuming a median climate sensitivity of 3ºC for 2xCO2, roughly 0.002ºC of equilibrium global warming. That implies you need a sustained reduction of 0.8 TgCH4/yr (0.0006 GtC/yr of methane) to compensate for a one-off GtC pulse of CO2.
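The arithmetic behind these comparisons can be collected in one place. This sketch just reproduces the conversions in the list above; all input numbers are the approximate values quoted in the text (with sizeable assessed uncertainties), not independent estimates, and the rounded results differ slightly from the text's round numbers:

```python
# Molecular weights (g/mol)
M_CO2, M_CH4 = 44.0, 16.0

# Per-ppm effectiveness of CH4 relative to CO2, as quoted in the text
per_ppm_direct = 25   # direct greenhouse effect only
per_ppm_total = 45    # including indirect effects (ozone, strat. H2O, aerosols)

# kg-to-kg: equal mass means more CH4 molecules by a factor M_CO2/M_CH4
per_kg_direct = per_ppm_direct * M_CO2 / M_CH4   # ~69 ("about 70" in the text)
per_kg_total = per_ppm_total * M_CO2 / M_CH4     # ~124 ("about 126" in the text)

# Flaring vs venting: same carbon either way, so scale GWP-100 (~30) by
# the mass of CH4 carrying 1 kgC relative to the CO2 it would have become
gwp100 = 30
vent_vs_flare = gwp100 * M_CH4 / M_CO2           # ~11

# Stabilization trade-off: sustained 1 TgCH4/yr cut (~0.002 C equilibrium
# warming) vs a one-off 1 GtC CO2 pulse (TCRE ~0.00165 C/GtC)
tcre_per_gtc = 0.00165
warming_per_tg_ch4 = 0.002
cut_needed = tcre_per_gtc / warming_per_tg_ch4   # ~0.8 TgCH4/yr per GtC

print(round(per_kg_direct), round(per_kg_total),
      round(vent_vs_flare, 1), round(cut_needed, 2))
```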
Whichever way you slice it, CH4 reductions have an outsize effect on climate, as well as an undeniably positive impact on air pollution, crop yields and public health (mainly through ozone reductions). It is therefore not a complicated decision to pursue methane reductions, taking care not to assume that doing so gets you off the hook for reducing CO2, whatever the EPA says.
I’d like this page to be useful and current, so if you think I should add an additional comparison, or use case, or if you think I’ve got something wrong, please let me know in the comments.
- M. Etminan, G. Myhre, E.J. Highwood, and K.P. Shine, "Radiative forcing of carbon dioxide, methane, and nitrous oxide: A significant revision of the methane radiative forcing", Geophysical Research Letters, vol. 43, 2016. http://dx.doi.org/10.1002/2016GL071930
This month’s open thread for climate science topics. Not sure about you, but we are still reading the details of the IPCC report. We are watching the unfolding hurricane season with trepidation, with particular concern related to the impacts of compound events (and not just those associated with climate), and anticipating another low, if not record, Arctic sea ice minimum.
PS. At some point this month we will be switching Internet service providers, so don’t be surprised if there are some oddities as we switch everything over.
My top 3 impressions up-front:
- The sea level projections for the year 2100 have been adjusted upwards again.
- The IPCC has introduced a new high-end risk scenario, stating that a global rise “approaching 2 m by 2100 and 5 m by 2150 under a very high greenhouse gas emissions scenario cannot be ruled out due to deep uncertainty in ice sheet processes.”
- The IPCC gives more consideration to the large long-term sea-level rise beyond the year 2100.
And here is the key sea-level graphic from the Summary for Policy Makers:
This is a pretty clear illustration of how sea level starts to rise slowly, but in the long run sea-level rise caused by fossil-fuel burning and deforestation in our generation could literally go off the chart, inundating many coastal cities and wiping entire island nations off the map. But first things first.
Observed Past Rise
Let’s dive a little deeper into the full report and start with the observed sea level change. Since 1901 sea level has risen by 20 cm, a rise unprecedented in at least 3,000 years (disclosure: I co-authored some of the research behind the latter conclusion).
Since 1900 the rise has greatly accelerated. During the most recent period analyzed, 2006-2018, it’s been rising at a rate of 3.7 mm/year – nearly three times as fast as during 1901-1971 (1.3 mm/year). The IPCC calls this a “robust acceleration (high confidence) of global mean sea level rise over the 20th century”, as did the SROCC in 2019.
The finding of sea-level acceleration is not new. The AR4 already concluded in 2007: “There is high confidence that the rate of sea level rise has increased between the mid-19th and the mid-20th centuries.” And the AR5 found in 2013 that “there is high confidence that the rate of sea level rise has increased during the last two centuries, and it is likely that global mean sea level has accelerated since the early 1900s.” (Which has not stopped “climate skeptics” from repeatedly claiming a lack of acceleration.)
The reason for earlier hedged wording by the IPCC was the possibility of natural decadal variability affecting the trend estimates, but the AR6 now concludes “that the main driver of the observed global mean sea-level rise since at least 1970 is very likely anthropogenic forcing”. That is the result of so-called “attribution studies” – attempts to differentiate between all possible human-caused and natural factors in the observed changes, using a combination of data, models, pattern detection and statistics. However, on the level of basic physical reasoning, it is of course a no-brainer that warming will cause land ice to melt (and melt faster as it gets hotter) and ocean waters to expand, so sea-level rise is the inevitable result.
And there is this:
“New observational evidence leads to an assessed sea level rise over the period 1901 to 2018 that is consistent with the sum of individual components contributing to sea level rise, including expansion due to ocean warming and melting of glaciers and ice sheets (high confidence).” (IPCC AR6)
That’s an important consistency check; the independent data add up to the overall observed rise.
The Future Until 2100
“It is virtually certain that global mean sea level will continue to rise over the 21st century in response to continued warming of the climate system.” (IPCC AR6)
By how much? That depends on our emissions and is shown in the following figure. The take-away message is: with high emissions we would likely get close to a meter; sticking to the Paris Agreement would cut that down to half a meter.
And how does that compare to the recent previous reports? Here is the comparison the IPCC shows:
If you look at the 2100 projections for the last three reports (AR5, SROCC, AR6) you can see that the numbers have increased each time – and remember that the AR5 numbers had already increased by ~60% compared to the AR4. This illustrates the fact that the IPCC has been too “cautious” in the past (which is not a virtue in risk assessment), having to correct itself upward again and again (all the while “climate skeptics” try to paint the IPCC as “alarmist”, for want of any better arguments to play down the climate crisis).
Related to that are notable changes in grappling with uncertainty and risk. The IPCC is now showing very likely (5-95 percentile) as well as likely (17-83 percentile) ranges. In the AR5, it had made the rather ad-hoc argument that “global mean sea level rise is likely (medium confidence) to be in the 5 to 95% range of projections from process-based models”. So their likely range was actually the modelled very likely range.
The IPCC now splits the uncertainty into two types, hence the two different shadings in the uncertainty bars, in an attempt to also cover uncertainty in processes which we still cannot confidently model. They write:
Importantly, likely range projections do not include those ice-sheet-related processes whose quantification is highly uncertain or that are characterized by deep uncertainty. Higher amounts of global mean sea level rise before 2100 could be caused by earlier-than-projected disintegration of marine ice shelves, the abrupt, widespread onset of Marine Ice Sheet Instability (MISI) and Marine Ice Cliff Instability (MICI) around Antarctica, and faster-than-projected changes in the surface mass balance and dynamical ice loss from Greenland. In a low-likelihood, high-impact storyline and a high CO2 emissions scenario, such processes could in combination contribute more than one additional meter of sea level rise by 2100.
Note that this uncertainty goes to one side: up. For estimating this uncertainty they use an expert survey as well as a smaller but more detailed structured expert judgement. I co-authored the survey (see also 7-minute video about it) with Ben Horton and others, as well as a predecessor survey published in 2014, and I am happy to see that the IPCC now includes this type of expert judgement to assess risks that can’t yet be modelled reliably, but cannot be just ignored either. In dealing with the climate crisis, it simply is not enough to consider what is likely to happen – it is even more important to understand what the risks are.
Think about it: if someone builds a nuclear facility near your house, would you be satisfied with knowing that it is “likely” to work well (say, 83% certain)? Or would you like to know about a few percent chance that it could blow up like Chernobyl in your lifetime?
With the high-end risk scenarios, the IPCC is catching up with other assessments such as the US National Climate Assessment of 2017, which already showed a “high” scenario of 2 meters and an “extreme” scenario of 2.5 meters of rise by 2100.
The Long Term Future
One of the headline statements of the AR6 is:
“Many changes due to past and future greenhouse gas emissions are irreversible for centuries to millennia, especially changes in the ocean, ice sheets and global sea level.” (IPCC AR6)
That’s because huge ice sheets take a long time to melt in a warmer climate, and the ocean waters take a long time to warm up as you go further down, away from the surface. So what we do now and over the next couple of decades determines the rate and amount of sea-level rise for millennia to come, condemning many generations to continually changing coastlines and forcing them to abandon many coastal cities, large and small. The fact that this cannot be undone is the reason why the precautionary principle should be applied to the climate crisis.
Just look at the ranges expected by the year 2300, in the right-hand panel of the first image above. Even in the blue mitigation scenario, which limits warming to well below 2 °C, our descendants may well have to deal with 2-3 meters of sea-level rise, which would be catastrophic for the people living at the world’s coastlines. Defending cities like New York against storm surges on top of a much higher sea level would be extremely hard and costly, if possible at all, and we would see massive coastal erosion happening all around. And remember that “nuisance flooding” is already causing real problems after just 20 cm of sea-level rise, for example along the eastern seaboard of the US!
At least with this Paris scenario and a good portion of sheer luck, we may get away with less than a meter of rise. But with further unmitigated increase in emissions, a disastrous 2-meter rise is about as likely as an utterly devastating 7-meter rise. What would our descendants think we were doing?
Guest post by Joeri Rogelj (Twitter: @joerirogelj)
Since temperature targets became international climate goals, we have been trying to understand and quantify the implications for our global emissions. Carbon budgets play an important role in this translation.
Carbon budgets tell us how much CO2 we can emit while keeping warming below specific limits. We can estimate the total carbon budget consistent with staying below a given temperature limit. If we subtract the CO2 emissions that we emitted over the past two centuries, we get an estimate of the remaining carbon budget.
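In accounting terms, this is a simple subtraction. A minimal sketch with round numbers roughly in line with AR6 (about 2400 GtCO2 emitted historically and a total budget near 2900 GtCO2 for a 50% chance of 1.5°C; treat both as illustrative, not exact IPCC values):

```python
# Illustrative round numbers, roughly in line with AR6 (not exact IPCC values)
total_budget_gtco2 = 2900.0  # total CO2 ever emittable for a given warming limit
historical_gtco2 = 2400.0    # cumulative emissions over the past two centuries

# Remaining budget = total budget minus what has already been emitted
remaining_gtco2 = total_budget_gtco2 - historical_gtco2
print(f"Remaining carbon budget: {remaining_gtco2:.0f} GtCO2")
```

The subtraction is trivial; the hard science, discussed below, is in estimating the two terms and their uncertainties.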
I have been involved in the estimation of carbon budgets since the IPCC Fifth Assessment Report in the early 2010s. Since the first IPCC estimates published in 2013, we have learned a lot and have gotten much better at estimating remaining carbon budgets. In the 2018 IPCC Special Report on Global Warming of 1.5°C (SR1.5), the latest insights were integrated in a simple framework that allowed us to estimate, track, and understand updates to these carbon budgets.
The most recent Working Group 1 Report of the IPCC Sixth Assessment Cycle (WG1 AR6) provides an updated assessment of the remaining carbon budget. Here’s an insider’s view providing a deep dive into how its estimates differ from those of previous reports.
The scientific basis underlying a carbon budget is our robust scientific understanding that global warming is near-linearly proportional to the total amount of CO2 we ever emit as a society. This is illustrated in Fig. SPM10 of the WG1 AR6 report, both for the past and for future projections.
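This near-linear relation can be written as warming ≈ TCRE × cumulative CO2 emissions. A minimal sketch, assuming a central TCRE of 1.65°C per 1000 GtC (the midpoint of the AR6 range discussed further down; an assumption for illustration):

```python
GTC_PER_GTCO2 = 1.0 / 3.664  # unit conversion: 1000 GtC = 3664 GtCO2

def warming_from_cumulative_co2(cumulative_gtco2, tcre_c_per_1000gtc=1.65):
    """Near-linear TCRE relation: warming is proportional to total CO2 emitted.

    The default TCRE is the midpoint of the AR6 1.0-2.3 range (an assumption
    for illustration, not a value quoted in this post).
    """
    return cumulative_gtco2 * GTC_PER_GTCO2 * tcre_c_per_1000gtc / 1000.0

# Emitting 3664 GtCO2 (= 1000 GtC) yields 1.65 degC of warming at this TCRE
print(warming_from_cumulative_co2(3664.0))
```

Because the relation is (near-)linear, it can be inverted: divide the warming still allowed by TCRE to get the emissions still allowed, which is exactly what a remaining carbon budget is.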
The estimates of remaining carbon budgets also made it into the Summary for Policymakers – the most prominent place that can be given for any finding of the report. Table SPM.2 gives an overview of the latest estimates, for different temperature limits and different probability levels.
How have these estimates changed since previous reports?
IPCC reported carbon budgets for the first time in 2013, and since then important advances have been made in how we estimate them. Five puzzle pieces combine to give carbon budget estimates, and they allow us to understand subsequent updates.
Starting with the key message of the AR6 carbon budget update: carbon budget estimates in AR6 are very similar to those published in the SR1.5 in 2018, but they represent a significant update since AR5 in 2013.
When adjusting for the emissions since AR5 and SR1.5, the AR6 remaining carbon budget for limiting warming to 1.5°C with a 50% chance is about 300 GtCO2 larger than in AR5, but virtually the same as in SR1.5.
For 66% probability, the AR6 budget is about 60 GtCO2 larger than in SR1.5.
The budget is so much larger than in AR5, because since 2013 more accurate methods have been published that ensure that model uncertainties over the historical period are not accumulated into the future. This is best illustrated by this technical figure from SR1.5.
Between SR1.5 and AR6 every piece of the carbon budget was reassessed:
- warming to date
- how much warming we expect to get per tonne of CO2
- how much warming would still occur once we reach net zero CO2
- how much non-CO2 warming we can expect
- Earth system feedbacks otherwise not covered
Let’s dive into each piece of this puzzle to understand what has changed between SR1.5 and AR6.
Warming to date – SR1.5 used a 0.97°C warming estimate between 1850-1900 and 2006-2015. This estimate already included corrections for the incomplete global coverage of observations and the different ways in which global surface temperature can be estimated. The AR6, based on a full reassessment of all available data, assesses 0.94°C of global surface temperature increase for the same period.
In isolation, this update results in central estimates being about 65 GtCO2 larger in AR6 than in SR15. For the 33% and 67% estimates that’s about 110 and 50 GtCO2 higher, respectively.
Warming per tonne of CO2 – The next piece of the puzzle is the warming we project per tonne of CO2. SR1.5 used an estimate of 0.8-2.5°C per 1000 GtC (=3664 GtCO2). AR6 assessed this quantity, also known as the Transient Climate Response to Cumulative Emissions of CO2 (or TCRE), to fall in the 1.0-2.3°C range.
Since the central estimate is unchanged, the update in TCRE causes no shift in the 50% estimates, but the higher and lower percentiles are narrowed. For a 67% chance, AR6 estimates are about 50 and 100 GtCO2 larger compared to SR1.5 for 1.5°C and 2°C of global warming, respectively.
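The mechanics of that narrowing can be sketched with a back-of-envelope calculation: for a fixed amount of remaining warming, the budget scales as the inverse of TCRE, so trimming the upper end of the TCRE range from 2.5 to 2.3°C per 1000 GtC enlarges the high-probability budget. (The 0.5°C of remaining warming below is a hypothetical round number, and real budgets also account for non-CO2 warming and feedbacks.)

```python
GTCO2_PER_1000GTC = 3664.0

def co2_budget(remaining_warming_c, tcre_c_per_1000gtc):
    # Invert the linear TCRE relation: budget = remaining warming / TCRE
    return remaining_warming_c * GTCO2_PER_1000GTC / tcre_c_per_1000gtc

dT = 0.5  # hypothetical remaining warming (degC), for illustration only

# High-probability budgets are set by the high end of the TCRE range
budget_sr15 = co2_budget(dT, 2.5)  # SR1.5 upper TCRE
budget_ar6 = co2_budget(dT, 2.3)   # AR6 narrower upper TCRE

print(f"Budget increase from narrower TCRE: {budget_ar6 - budget_sr15:.0f} GtCO2")
```

Even this crude calculation yields an increase of the same order as the ~50 GtCO2 shift quoted above.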
Warming after net zero CO2 – The third piece of the puzzle is how much warming is expected to still occur once global CO2 emissions reach (and remain at) net zero. This is known as the Zero Emissions Commitment to emissions of CO2 (or ZEC).
The AR6 estimate confirms the SR1.5 estimate of no further CO2-induced warming or cooling once global CO2 emissions reach and stay at net zero. The uncertainty surrounding this value is reported separately. ZEC therefore causes no changes between SR1.5 and AR6.
Non-CO2 warming contribution – The fourth puzzle piece is the projected warming from non-CO2 emissions. Like SR1.5, AR6 uses the deep mitigation pathways assessed by SR1.5 (Rogelj et al., 2018; Huppmann et al., 2018), but with climate projections updated entirely with dedicated climate emulators that integrate the scientific information across chapters.
By coincidence (and it is really coincidence), the updates in radiative forcing from tens of different gases, climate sensitivity, and carbon-cycle uncertainties result in no net shift in the estimate of non-CO2 warming for the remaining carbon budget.
Pure luck, given the many updated pieces of scientific knowledge that were integrated in AR6, but convenient for explaining differences in carbon budget estimates.
Updated non-CO2 warming estimates lead to no change in remaining carbon budget estimates compared to SR1.5.
Other Earth system feedbacks – The last piece is to account for Earth system feedbacks that would otherwise not be covered. SR1.5 assumed an additional blanket reduction of 100 GtCO2 for this century for these feedbacks. This was a crude estimate and therefore not included as a central part of the remaining carbon budget numbers in SR1.5. AR6 updates this assessment entirely and includes this contribution in its main estimates.
Taking into account not only permafrost thaw but also a host of other biogeochemical and atmospheric feedbacks, the AR6 estimates that, to appropriately include the effect of all these feedbacks, remaining carbon budgets have to be reduced by 26 ± 97 GtCO2 per degree Celsius of additional warming.
Altogether these updates mean that AR6 remaining carbon budget estimates are very similar to those of SR1.5, while they additionally include the effect of Earth system feedbacks that would otherwise not be covered.
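Putting the five puzzle pieces together, the SR1.5/AR6-style framework can be sketched as follows. This is a simplified illustration of the structure only, not the IPCC's actual implementation, and the inputs in the example call are hypothetical round numbers:

```python
def remaining_carbon_budget(limit_c, warming_to_date_c, nonco2_warming_c,
                            zec_c, tcre_c_per_1000gtc, feedback_gtco2_per_c):
    """Simplified budget framework sketch: allocate the warming left over for
    CO2, convert it to emissions via TCRE, then subtract an Earth system
    feedback adjustment (AR6 uses 26 +/- 97 GtCO2 per degC of additional
    warming). Not the IPCC's exact method."""
    co2_warming_left = limit_c - warming_to_date_c - nonco2_warming_c - zec_c
    budget = co2_warming_left * 3664.0 / tcre_c_per_1000gtc
    return budget - feedback_gtco2_per_c * co2_warming_left

# Hypothetical inputs for illustration only (not the AR6 assessed values)
b = remaining_carbon_budget(limit_c=1.5, warming_to_date_c=0.94,
                            nonco2_warming_c=0.1, zec_c=0.0,
                            tcre_c_per_1000gtc=1.65, feedback_gtco2_per_c=26.0)
print(f"Illustrative budget: {b:.0f} GtCO2")
```

Each function argument corresponds to one of the five puzzle pieces above, which is what makes updates between reports traceable: change one input, and the resulting shift in the budget can be attributed to that piece.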
Selecting a remaining carbon budget requires two normative choices as a minimum: the global warming level that is to be avoided, and the likelihood or chance with which this is achieved. Further choices involve how deeply non-CO2 emissions can be reduced.
In addition to updates to science underlying carbon budget estimates, the AR6 also provides a larger set of likelihood levels for its remaining carbon budget estimates (see Table SPM.2 above). As in previous reports, AR6 provides remaining carbon budget estimates for a 33%, 50%, and 67% chance of keeping warming to a given temperature limit. In addition, however, the AR6 also provides the bracketing percentiles for the central 66% range (the range covered between 17% and 83%), so that the uncertainty of the central estimate can be adequately understood.
These values can be used in a variety of ways. For example, the central estimate for the remaining carbon budget for keeping warming to 1.5°C is now 500 GtCO2 starting from the beginning of 2020, with a 66% uncertainty range of 300–900 GtCO2.
Designing a policy for limiting warming to 1.5°C with this global 500 GtCO2 number in mind means that in 1-out-of-2 cases warming will end up below and in 1-out-of-2 cases it will end up above 1.5°C. Alternatively, it can also be understood to mean that in 1-out-of-2 cases policy measures will have to be sharpened beyond the policies consistent with a 500 GtCO2 budget over the coming decades if warming is effectively to be kept to 1.5°C. Similar examples can be given for 1.7°C or other levels (see Table 5.8 in the underlying chapter; Canadell et al (2021)).
A last item affecting the selection of remaining carbon budgets is the expectation of how deeply non-CO2 emissions can be reduced. All remaining carbon budget estimates in AR6 assume that non-CO2 emissions such as methane are reduced consistent with a deep decarbonisation pathway that reaches net zero CO2 emissions. Depending on how effectively these non-CO2 emissions can be reduced, the remaining carbon budgets can vary by 220 GtCO2 or more.
The bottom line of this technical explanation remains, however, that these budgets are small, that our current annual global CO2 emissions of about 40 GtCO2/yr are depleting them rapidly, and that all budgets require CO2 emissions to decline to net zero, while global emissions have not yet begun to decline.
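To see just how small, divide the central 1.5°C budget quoted above by current emissions; at a constant rate, the budget would be exhausted in about a decade:

```python
remaining_gtco2 = 500.0    # central AR6 budget for 1.5 degC, from start of 2020
emissions_per_year = 40.0  # approximate current global CO2 emissions (GtCO2/yr)

years_left = remaining_gtco2 / emissions_per_year
print(f"Budget exhausted in {years_left:.1f} years at constant emissions")
```
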
It’s nice to have remaining carbon budgets, but now we need to get on with it and make sure that global CO2 emissions start to decline.
If you would like to know all the ins and outs of AR6 remaining carbon budgets have a look at Section 5.5 in Canadell et al (2021). The entire section describes the assessment of TCRE and remaining carbon budgets, while Box 5.2 presents a more technical comparison with carbon budget estimates from previous reports.
Joeri Rogelj is Director of Research, Grantham Institute Climate Change & Environment, Imperial College London, UK, and Senior Research Scholar, International Institute for Applied Systems Analysis (IIASA), Laxenburg, Austria
Huppmann, D., Rogelj, J., Kriegler, E., Krey, V., et al. (2018) A new scenario resource for integrated 1.5 °C research. Nature Climate Change. [Online] 8 (12), 1027–1030. Available from: doi:10.1038/s41558-018-0317-4.
Canadell, J. G., P. M. S. Monteiro, M. H. Costa, L. Cotrim da Cunha, P. M. Cox, A. V. Eliseev, S. Henson, M. Ishii, S. Jaccard, C. Koven, A. Lohila, P. K. Patra, S. Piao, J. Rogelj, S. Syampungani, S. Zaehle, K. Zickfeld, 2021, Global Carbon and other Biogeochemical Cycles and Feedbacks. In: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte, V., P. Zhai, A. Pirani, S. L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M. I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J. B. R. Matthews, T. K. Maycock, T. Waterfield, O. Yelekçi, R. Yu and B. Zhou (eds.)]. Cambridge University Press. In Press.
IPCC (2014) Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change.
IPCC, 2021: Summary for Policymakers. In: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte, V., P. Zhai, A. Pirani, S. L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M. I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J. B. R. Matthews, T. K. Maycock, T. Waterfield, O. Yelekçi, R. Yu and B. Zhou (eds.)]. Cambridge University Press. In Press.
Rogelj, J., Shindell, D., Jiang, K., Fifita, S., et al. (2018) Mitigation pathways compatible with 1.5°C in the context of sustainable development. In: Greg Flato, Jan Fuglestvedt, Rachid Mrabet, & Roberto Schaeffer (eds.). Global Warming of 1.5 °C: an IPCC special report on the impacts of global warming of 1.5 °C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty. [Online]. Geneva, Switzerland, IPCC/WMO. pp. 93–174. Available from: http://www.ipcc.ch/report/sr15/.