Rapid progress in the use of machine learning for weather and climate models is evident almost everywhere, but can we distinguish between real advances and vaporware?
First off, let’s define some terms to maximize clarity. Machine Learning (ML) is a broad term covering any kind of statistical fitting of large data sets with complicated functions (various flavors of neural nets etc.), but it’s simplest to think of this as just a kind of large regression. The complexity of the functions being fitted has increased a lot in recent years, as has the dimensionality of the data that can be fitted. Artificial Intelligence (AI) encompasses this, but also concepts like expert systems, and for a while it was considered distinct from statistical ML methods*. Generative AI (as demonstrated by ChatGPT or DALL-E) is something else again – both in the size of the training data and in the number of degrees of freedom in the fits (~a trillion parameters). None of these things are ‘intelligent’ in the more standard sense – that remains an unrealized (unrealizable?) goal.
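To make the ‘large regression’ framing concrete, here is a minimal sketch (the data, the four predictors, and the network size are all made up for illustration; scikit-learn is used only because it is convenient):

```python
# A minimal sketch of the "ML is just a large regression" point, using
# scikit-learn and made-up data (names and sizes here are illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5000, 4))                         # four made-up predictors
y = np.sin(X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=5000)    # nonlinear target plus noise

# A neural net is, in effect, a very flexible nonlinear regression: more
# layers and nodes simply mean more degrees of freedom in the fitted function.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, y)
print("R^2 of the fit:", round(model.score(X, y), 3))
```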
Recent success in weather forecasting
The most obvious examples of rapid improvements in ML applied to weather have come from attempts to forecast weather using ERA5 as a training dataset. Starting with FourCastNet (from NVIDIA in 2022), and followed by GraphCast (2023) and NeuralGCM (2024), these systems have shown remarkable ability to predict weather out to 5 to 7 days with skill approaching or even matching the physics-based forecasts. Note that claims that these systems exceed the skill of the physics-based forecasts are, AFAIK, not (yet) supported across the wide range of metrics that ECMWF itself uses to assess improvements in its forecast systems.
Two further improvements to these systems have recently been announced – one at AGU from Bill Collins, which showed techniques (‘bred vectors‘) that can be used to generate ensemble spreads with FourCastNet (which is not chaotic) that match the spread of the (chaotic) physics-based models (see also GenCast). The second advance, announced just this week, is GraphDOP, an impressive effort to learn the forecasts directly from the raw observations (as opposed to going through the existing data assimilation/reanalysis system).
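Schematically – and only schematically – these emulators roll a learned single-time-step operator forward from an initial state, and ensemble spread can be generated by perturbing that initial state and re-running (bred vectors being a more sophisticated version of the same idea). The toy sketch below uses a placeholder `learned_step` function standing in for a trained network; it is not the FourCastNet or GraphCast code:

```python
# Toy sketch of how ML weather emulators are typically run: a learned 6-hourly
# step operator is applied autoregressively to roll out a multi-day forecast.
import numpy as np

def learned_step(state: np.ndarray) -> np.ndarray:
    """Placeholder for a trained network mapping state(t) -> state(t + 6h)."""
    return 0.99 * state + 0.01 * np.tanh(state)   # arbitrary toy dynamics, NOT a real model

def rollout(state0: np.ndarray, n_steps: int = 28) -> np.ndarray:
    """28 x 6h = a 7-day deterministic forecast from a single initial state."""
    states = [state0]
    for _ in range(n_steps):
        states.append(learned_step(states[-1]))
    return np.stack(states)

# Ensemble spread from a deterministic emulator: perturb the initial state and rerun.
state0 = np.random.default_rng(1).normal(size=(64, 128))          # toy lat-lon grid
ensemble = np.stack([
    rollout(state0 + 0.01 * np.random.default_rng(i).normal(size=state0.shape))
    for i in range(10)
])
print("mean ensemble spread at day 7:", ensemble[:, -1].std(axis=0).mean())
```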
Climate is not weather
This is all very impressive, but it should be made clear that all of these efforts are tackling an initial value problem (IVP) – i.e. given the situation at a specific time, they track the evolution of that state over a number of days. This class of problem is appropriate for weather forecasts and subseasonal-to-seasonal (S2S) predictions, but isn’t a good fit for climate projections – which are mostly boundary value problems (BVPs). The ‘boundary values’ important for climate are just the levels of greenhouse gases, solar irradiance, the Earth’s orbit, aerosol and reactive gas emissions etc. Model systems that don’t track any of these climate drivers are simply not going to be able to predict the effect of changes in those drivers. To be specific, none of the systems mentioned so far have a climate sensitivity (of any type).
But why can’t we learn climate predictions in the same way? The problem with this idea is that we simply don’t have the appropriate training data set. For weather, we have 45 years of skillful predictions and validations, and for the most part, new weather predictions are fully within sample. For climate, by contrast, we have a much shorter record of skillful prediction over a very small range of forcings, and what we want to predict (climate in 2050, 2100 etc.) is totally out-of-sample. Even conceptually simple targets, like the attribution of the climate anomalies over the last two years, are not approachable via FourCastNet or similar since they don’t have an energy balance, aerosol inputs, or stratospheric water vapor – even indirectly.
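To make the ‘no climate sensitivity’ point concrete, consider a toy zero-dimensional energy balance model (all numbers are illustrative). The forcing is the boundary value: change it and the physics-based model warms toward a new equilibrium, while a caricature of a state-only emulator – one whose inputs never include CO2 – simply cannot respond, however well it fits the historical record:

```python
# Toy zero-dimensional energy balance model: C dT/dt = F - lambda*T.
# All numbers are illustrative; this is not any particular model.
C, lam, dt = 8.0, 1.2, 1.0 / 365.0   # heat capacity, feedback parameter (W/m2/K), timestep (yr)
F_2x = 3.7                           # approximate radiative forcing for doubled CO2 (W/m2)

def ebm_step(T, F):
    """Physics-based step: the boundary condition (the forcing F) enters explicitly."""
    return T + dt * (F - lam * T) / C

def emulator_step(T):
    """Caricature of a state-only emulator: CO2 never appears among its inputs,
    so its learned dynamics just relax back toward the climate it was trained on."""
    return T + dt * (-lam * T) / C

print("EBM equilibrium warming for 2xCO2:", round(F_2x / lam, 2), "K")

T_phys, T_emul = 0.0, 0.0
for _ in range(200 * 365):           # integrate for 200 years
    T_phys = ebm_step(T_phys, F_2x)  # responds to the changed boundary value
    T_emul = emulator_step(T_emul)   # cannot respond: no forcing input exists
print("physics-based model after 200 yr:", round(T_phys, 2), "K")
print("state-only emulator after 200 yr:", round(T_emul, 2), "K")
```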
What can we do instead?
A successful ML project requires a good training dataset, one that encompasses (more or less) the full range of inputs and outputs so that the ML predictions are within sample (no extrapolation). One can envisage a number of possibilities:
- Whole Model Emulation: This would involve learning from existing climate model simulations as a whole (which could encompass various kinds of ensembles). For instance, one could learn from a perturbed physics ensemble to find optimal parameter sets for a climate model (e.g. Elsaesser et al.; a minimal sketch of this idea appears just after this list), learn from scenario-based simulations to produce results for new scenarios (Watson-Parris et al., 2022), or learn from attribution simulations for the historical period to calculate attributions based on different combinations or breakdowns of the inputs.
- Process-based Learning: Specific processes can be learned from detailed (and more accurate) process models – such as radiative transfer, convection, large eddy simulations, etc. – and then used within existing climate models to increase the speed of computation and reduce biases (Behrens et al.). The key here is to ensure that the full range of inputs is included in the training data.
- Complexity-based Learning: ML parameterizations drawn from more complete models (for instance with carbon cycles or interactive composition) can be implemented within simpler versions of the same model.
- Error-based Learning: One could use a nudged or data-assimilated model for the historical period, save the increments (or errors), learn those, and then apply them as an online correction in future scenarios [I saw a paper this month proposing this, but I can’t find the reference – I’ll update if I find it]. Downscaling to station-level climate statistics with bias corrections would be another application of this.
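As a heavily simplified sketch of the first item above (whole-model emulation used for calibration), one can fit a cheap statistical emulator to a perturbed-physics ensemble and then search parameter space for the settings that best match an observed target. Everything below – the two parameters, the ‘climate model’, and the ‘observations’ – is made up for illustration:

```python
# Heavily simplified sketch of whole-model emulation for calibration:
# fit an emulator to a (made-up) perturbed-physics ensemble, then search
# parameter space for the settings that best match a (made-up) observed target.
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(42)

# Pretend ensemble: 40 model runs, each with 2 perturbed parameters
# (names are hypothetical) and one output metric.
params = rng.uniform(0.0, 1.0, size=(40, 2))
def fake_climate_model(p):                       # stand-in for the expensive GCM
    return 50.0 - 20.0 * p[:, 0] + 10.0 * (p[:, 1] - 0.4) ** 2
metric = fake_climate_model(params) + rng.normal(0.0, 0.3, size=40)

# Emulate the ensemble: a cheap surrogate for the expensive model.
emulator = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
emulator.fit(params, metric)

# Calibrate: find the parameters whose emulated metric is closest to "observations".
obs = 41.0
cost = lambda p: (emulator.predict(p.reshape(1, -1))[0] - obs) ** 2
best = minimize(cost, x0=np.array([0.5, 0.5]), bounds=[(0, 1), (0, 1)])
print("calibrated parameters:", best.x)
```

In practice the ensemble members are expensive GCM runs, the metrics are multivariate, and observational uncertainty matters, but the division of labor (expensive runs → cheap emulator → calibration) is the same.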
Each of these approaches has advantages, but each also comes with potential issues. Emulating the whole model implies emulating that model’s biases. ML-based parameterizations have to work well for thousands of years of simulation, and thus need to be very stable (no random glitches or periodic blow-ups) – which is harder than you might think. Bias corrections based on historical observations might not generalize correctly in the future. Nonetheless, all of these approaches are already showing positive results or are being heavily worked on.
Predictions are hard
The speed at which this area of the field is growing is frankly mind-boggling – it featured in a significant percentage of abstracts at the recent AGU meeting. Given the diversity of approaches and the number of people working on this, predictions of what will work best and be widely adopted are foolhardy. But I will hazard a few guesses:
- ML for tuning and calibration of climate models via perturbed physics ensembles is a no-brainer and multiple groups are already using this for their CMIP7 contributions.
- Similarly, the emulation of scenarios – based perhaps on new single-forcing projections – will be in place before the official CMIP7 scenarios are available (in 2026/7?), and thus might alleviate the bottleneck caused by having to run all the scenarios through the physics-based models.
- Historical emulators will make it much easier to do new kinds of attribution analysis – via sector, country, and, intriguingly, fossil fuel company…
- I expect there will be a move to predicting changes in the statistical properties of the climate (particularly the Climate Impact Drivers) at specific global warming levels, rather than predicting time series (a minimal sketch of the idea follows this list).
- Some ML-enhanced models will be submitted to the CMIP7 archive, but they will have pretty much the same spread in climate sensitivity as the non-ML-enhanced models, though they may have smaller biases. That is, I don’t think we will be able to constrain the feedbacks in ML-based parameterizations using present-day observations alone. Having said that, the challenge of getting stable coupled models with ML-based components is not yet solved. Similarly, a climate model made up of purely ML-based components but with physics-based constraints is still very much a work in progress.
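To make the global-warming-level idea in the list above concrete, here is a minimal sketch with purely synthetic data: find when a smoothed global-mean temperature first crosses a given warming level, then pool a climate-impact-driver statistic in a 20-year window from that point (the GSAT trend, the hot-day index, and the window length are all invented for illustration):

```python
# Minimal sketch of global-warming-level (GWL) sampling, with purely synthetic
# data: find when a smoothed GSAT series first crosses a warming level, then
# pool a climate-impact-driver statistic over a 20-year window from that time.
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(2015, 2101)
gsat = 1.2 + 0.025 * (years - 2015) + rng.normal(0.0, 0.1, years.size)   # synthetic warming (degC)
txx = 36.0 + 1.5 * (gsat - 1.2) + rng.normal(0.0, 0.8, years.size)       # synthetic hottest-day index (degC)

def gwl_window(level, width=20):
    """Indices of the first `width`-year window whose mean GSAT exceeds `level` degC."""
    running = np.convolve(gsat, np.ones(width) / width, mode="valid")     # trailing means
    hits = np.nonzero(running >= level)[0]
    if hits.size == 0:
        return None                                                       # level not reached
    start = hits[0]
    return slice(start, min(start + width, years.size))

for level in (1.5, 2.0, 3.0):
    w = gwl_window(level)
    if w is None:
        print(f"+{level} C is not reached by 2100 in this synthetic series")
    else:
        print(f"at +{level} C: mean TXx = {txx[w].mean():.1f} C over {years[w][0]}-{years[w][-1]}")
```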
One further point worth making is that the computational cost of these efforts is tiny compared to the cost of generative AI, and so there is not going to be an (ironic) growth of fossil-fueled data centers built just for this.
What I don’t think will happen
Despite a few claims made in the relevant papers or some press releases, the ML models based on the weather forecasts or reanalyses mentioned above will not magically become climate models – they don’t have the relevant inputs, and even if they were given them, there isn’t sufficient training data to constrain the impact those inputs would have if they changed.
Neither will generative AI come to the rescue and magically tell us how climate change will happen and how it can be prevented – well, it will tell us, but it will either be a regurgitation of knowledge already understood, or simply made up. And at enormous cost [please do not ask ChatGPT for anything technical, and certainly don’t bother asking it for references***]. There are potential uses for this technology – converting casual requests into specific information demands and building code on the fly to extract relevant data, for instance. But the notion that these tools will write better proposals, do real science, and write the ensuing papers is the stuff of nightmares – and were this to become commonplace, it would lead to the collapse of both the grant-funding apparatus and scientific publishing. I expect science agencies to start requiring ‘no AI was used to write this content’ certifications, perhaps as soon as this year.
I guess that one might imagine a single effort learning from an all-encompassing data set – all the CMIP models, the km-scale models, the reanalyses, the observations, the paleo-climate data, with internal constraints based on physics etc. – literally all the knowledge we have, and indeed maybe that could work. I won’t hold my breath.
To summarise, most of the near-term results using ML will be in areas where ML allows us to tackle big-data problems more efficiently than we could before. This will lead to more skillful models, and perhaps better predictions, and will allow us to increase resolution and detail faster than expected. Real progress will not be as fast as some of the more breathless commentaries have suggested, but progress will be real.
Vive la evolution!
*To get a sense of the history, it’s interesting to read the assessment of AI research in the early 1970s by Sir James Lighthill** – it was pretty damning, and pointed out the huge gap between promise and actuality at that time. Progress since then has been enormous (for instance in machine translation), mostly based on pattern recognition drawn from large datasets rather than coding for rules – an approach that needed huge increases in computer power to realize.
**As an aside, I knew Sir James briefly when I was doing my PhD. He was notorious for sleeping through seminars, often snoring loudly, and then asking very astute questions at the end – a skill I still aspire to.
***I’ve had a number of people emailing me for input, advice etc. introduce themselves by saying that a paper I wrote (which simply doesn’t exist) was very influential. Please don’t do that.
References
- R. Lam, A. Sanchez-Gonzalez, M. Willson, P. Wirnsberger, M. Fortunato, F. Alet, S. Ravuri, T. Ewalds, Z. Eaton-Rosen, W. Hu, A. Merose, S. Hoyer, G. Holland, O. Vinyals, J. Stott, A. Pritzel, S. Mohamed, and P. Battaglia, "Learning skillful medium-range global weather forecasting", Science, vol. 382, pp. 1416-1421, 2023. http://dx.doi.org/10.1126/science.adi2336
- D. Kochkov, J. Yuval, I. Langmore, P. Norgaard, J. Smith, G. Mooers, M. Klöwer, J. Lottes, S. Rasp, P. Düben, S. Hatfield, P. Battaglia, A. Sanchez-Gonzalez, M. Willson, M.P. Brenner, and S. Hoyer, "Neural general circulation models for weather and climate", Nature, vol. 632, pp. 1060-1066, 2024. http://dx.doi.org/10.1038/s41586-024-07744-y
- I. Price, A. Sanchez-Gonzalez, F. Alet, T.R. Andersson, A. El-Kadi, D. Masters, T. Ewalds, J. Stott, S. Mohamed, P. Battaglia, R. Lam, and M. Willson, "Probabilistic weather forecasting with machine learning", Nature, vol. 637, pp. 84-90, 2024. http://dx.doi.org/10.1038/s41586-024-08252-9
- G. Elsaesser, M.V. Walqui, Q. Yang, M. Kelley, A.S. Ackerman, A. Fridlind, G. Cesana, G.A. Schmidt, J. Wu, A. Behrangi, S.J. Camargo, B. De, K. Inoue, N. Leitmann-Niimi, and J.D. Strong, "Using Machine Learning to Generate a GISS ModelE Calibrated Physics Ensemble (CPE)", 2024. http://dx.doi.org/10.22541/essoar.172745119.96698579/v1
- D. Watson‐Parris, Y. Rao, D. Olivié, Ø. Seland, P. Nowack, G. Camps‐Valls, P. Stier, S. Bouabid, M. Dewey, E. Fons, J. Gonzalez, P. Harder, K. Jeggle, J. Lenhardt, P. Manshausen, M. Novitasari, L. Ricard, and C. Roesch, "ClimateBench v1.0: A Benchmark for Data‐Driven Climate Projections", Journal of Advances in Modeling Earth Systems, vol. 14, 2022. http://dx.doi.org/10.1029/2021MS002954
AlanJ says
Hi Gavin, this subject is fascinating to me and the potential seems huge. How can I become involved in this area of research? I hold an MS in earth science with a paleoclimate focus and currently work as a data engineer, with ML experience, so it seems like a skillset that could contribute meaningfully. Are there groups open to volunteer contributions?
Kevin McKinney says
Thanks once again for an intriguing glimpse at the cutting edge.
(And this may not be a universal response – but I do enjoy your punning titles.)
John Pollack says
Thanks! As a (retired) forecaster, I find it quite interesting to get your take on how ML will and won’t apply to climate models. For both forecast and climate models, I can see a lot of promise for downscaling. The potential to save computer time while generating highly specific forecasts is enormous – although providing detail can also result in overconfidence in the forecast from false precision.
While I would expect the modelers to remain well aware when the results are out of the model training range, the same is unlikely to be true of the journalists and many others who interpret ML results in their own terms (or let AI do the writing!). ML won’t tell you what’s on the other side of a tipping point that we haven’t crossed, folks.
Russell Seitz says
If you are suggesting that present climate policy discourse resembles an AI hallucination, what may become of it in months to come?
Cedders says
Interesting. I like the AI as aid to parameterisation idea.
Just posting to say there’s a reference to Behrens et al in the text, but no full reference.
jgnfld says
Agree with almost all of the above–especially in that ML in many ways is a giant lookup engine. That said, I wonder about your worries about using AI for writing.
If you are worried that AI can be, and is, just plain wrong at times, that is true. But then so are people as people are pretty much the training set! But then the authors and the reviewers are supposed to check for errors are they not?
Same with grants.
I sat on a university senate committee which reviewed such things up till I retired. The tech wasn’t quite as good 7 years ago, but the same issues appeared in the context of undergrad student papers.
The discussion of whether writing tools then were legitimate tools or illegal cheats was just as prevalent.
I am of the mind that a tool is an assistant however “smart”. My hand drives nails very poorly. My hammer does the job much better..
If AI writes the proposal you wish you could write – so long as it is 1) scientifically correct, 2) asks pertinent questions at just the correct level of analysis (AI is terrible at this one so far), and 3) is fully vetted by the PI – is that a truly bad thing? I’m not so sure. If you do this to flood granting agencies with dozens and scores of proposals with small variations, is that a bad thing? Probably. My fears along the AI line run more to a gigantic increase in drivel in the wild than in scientific publishing. But it’s happening everywhere.
That said, I remember calculators not being allowed in chem exams back in the day, since calculators were “cheating” but slide rules were allowed!
Spencer says
Historical note just for fun: in the archives you can find huge volumes of synoptic weather maps, typically at 6 hour intervals. Weather forecasters would leaf through these to find a past system similar to the current one, then page forward to see how it evolved. They added some rules of thumb and a big dose of intuition based on having done this exercise many hundreds of times. The result was predictions that computers could not excel until the late 1960s. Seems like we’re returning to this method only with the human brain replaced by things with much larger energy requirements.
Anduin says
Such an insightful post! I really appreciate how you highlighted both the benefits and challenges of using AI in climate science. It’s exciting to see how machine learning can help with complex tasks like modeling and predictions, but your point about the importance of transparency and human oversight is so important. Thanks for diving into this fascinating topic!
Susan Anderson says
Wait, what? As I recall, computers were pretty much in their infancy in the 1960s. Is that a typo, or is it true? If so, I’d love a reference.
Barton Paul Levenson says
SA: As I recall, computers were pretty much in their infancy in the 1960s.
BPL: Credit for the first computer is unclear. The ABC machine (Atanasoff-Berry Computer) dates from 1937, while the Z-machine (Zuse) dates from 1940 or 1950. ENIAC was completed in 1945 to calculate artillery firing tables. IBM was selling commercial mainframes by the 1950s, and “minicomputers” were available by the 1960s. The high-level languages FORTRAN and COBOL both date from the 1950s.
Barton Paul Levenson says
Sorry, I meant to write 1940 or 1945 for the Zuse machine.
Susan Anderson says
This was meant to go with Spencer’s post, referencing computing in the 1960s. My bad.
jgnfld says
Well, near the U of Minnesota at that time there was a company still producing actual CORE memories in the late 1960s/early 70s. I mean actual ferrite cores! The CDC mainframe I had access to ran out of memory when I tried to invert a 20×20 matrix, so I had to go back and code up old hand methods of submatrices in FORTRAN (these methods don’t require multiple copies of the full matrix), which allowed me to invert the 50×50 which was my task. A couple of years later I had a national database on cards that took two moving trolleys piled high with boxes of punch cards to pull down to the card center!
A lot of interesting work occurred in Prolog and Lisp in production systems and other early AI areas then. But they were utterly primitive by today’s standards. So I guess what I’d say is that AI existed in the late 60s/early 70s in a way, but I wouldn’t say it’s the way we understand the term today.
Russell Seitz says
On a shelf beside me sits the main RAM memory of a late 1960s Wang Labs computer.
Each side has 2 ten-centimeter panes of 64 x 64 one-millimeter ferrite rings, each strung suspended at the intersection of the three wires used to write, read and erase them as bytes.
As I recall, this 16,384-core hand-strung array represented several thousand dollars of the cost of the CAD computer that employed it.
The literal core memory resembles nothing so much as chain mail, and each kilobyte takes up ~25 square centimeters. Were storage on silicon still on that scale, my laptop would be a little over a kilometer wide.
Susan Anderson says
Russell, my neighbor Don Eyles – https://www.sunburstandluminary.com/SLhome.html – was part of the computing crew at Draper for the Apollo 11 moon flight, and has 4 small RAM physical memory pieces which if I remember correctly, were the entire computing basis for that flight! Found it (dammit, hate to admit Google’s AI summary was useful): “This type of memory is referred to as RAM (Random Access Memory). Each word comprised 16 binary digits (bits), with a bit being a zero or a one. This means that the Apollo computer had 32,768 bits of RAM memory. In addition, it had 72KB of Read Only Memory (ROM), which is equivalent to 589,824 bits.”
[Come to think of it, his artwork has some similarities to yours …]
Barton Paul Levenson says
j: The CDC mainframe I had access to ran out of memory when I tried to invert a 20×20 matrix so I had to go back to code old hand methods of submatrices in FORTRAN (these methods don’t require multiple copies of the full matrix) which allowed me to invert the 50×50 which was my task.
BPL: Did you use Gauss or Gauss-Jordan elimination, or were you depending on determinants?
jgnfld says
Hey… this was 1970!
I know I didn’t use determinants directly, as the system routine crapped out there as well, and as such I do remember having to do some sort of check on the determinant. The method itself came from a mid-Victorian era book on vectors and matrices.
Paul Pukite (@whut) says
CDC=Control Data Corporation
William Norris, the CEO of CDC, funded a department devoted to meteorology. This was a report from the 1980s on creating an expert system for a weather station:
https://apps.dtic.mil/sti/pdfs/ADA184889.pdf
And this was an absolute classic paper (1000 cites) “A Climatology of Atmospheric Wavenumber Spectra of Wind and Temperature Observed by Commercial Aircraft” by G.D. Nastrom of CDC
https://pordlabs.ucsd.edu/pcessi/theory2015/nastrom_gage_85.pdf
For a brief period of time, Norris’ interest in MET led to some interesting climate research.
BTW, Norris and his colleagues Seymour Cray and Frank Mullaney initially started CDC and then Cray and Mullaney later left to found the supercomputer company Cray Research.
A video from a few years ago describing the history: https://youtu.be/c8U07WsyLbw
I just discovered this book, “A few good men from Univac”, which is a 1st person account of the early history of scientific computing: https://tcm.computerhistory.org/exhibits/FewGoodMen.pdf
Mal Adapted says
Susan, the Computer History Museum has this to say about weather by computer:
Lots of great stuff on CHM. Techno-geeks beware – it’s potentially a huge time sink!
Susan Anderson says
Mal and jgn, thanks, those were both helpful. I over-interpreted Spencer’s post, but also didn’t know all that, which is fascinating! As a teenager in the mid 60s and a brief drop-in to MIT in the early 70s, my knowledge of computing was limited by my direct experience. Though many friends were computer hacker types (some of whom made history), computing was the one course I struggled with, not having the right kind of mindset / brain leap potential required. The rest is history, of which my part went elsewhere.
Tom Dayton says
Excellent post, thank you! The Lighthill link is busted though; will you please fix?
Paul Pukite (@whut) says
What climate scientists trying to use NNs haven’t learned yet is the closed-world assumption (CWA) that is at the core of classic AI. Neural networks are trained on a fixed dataset, and if this is embodied only by the climate data itself, they will never be aware of information outside this dataset. That’s the closed-world aspect, and unless all the relevant inputs are included – which are essentially the guiding boundary values – they will likely be spinning their wheels. Whatever gets produced will be an unresolved convolution of some unknown forcing with an also-unknown response function — in other words, the fitted model is still totally encrypted! The NN essentially fits patterns without truly disentangling causation, so there is still no way to decode the resultant fit with meaningful insight, and it is thus highly unlikely to be of any use for predictive extrapolations.
I scan many of these machine learning climate papers and spend little effort if the authors do not acknowledge their closed-world assumptions.
It doesn’t have to be a statistical fitting. One can also generate a deterministic model from an ML training exercise. That’s essentially the same thing as a regression yielding the transfer function from an input. If you don’t think this aspect is important, consider autonomous driving — make the statistical or probability uncertainty window too big and expect many crashes. Many deterministic constraints are involved in an autonomous situation. My favorite related example is tidal analysis — it is almost strictly deterministic, and the remaining uncertainty is more than likely unresolved tidal factors.
This will be an interesting climate topic for years to come.
Paul Pukite (@whut) says
This is what machine learning experiments will be finding: tiny effects that people give up on.
Patrick in https://www.realclimate.org/index.php/archives/2024/11/unforced-variations-dec-2024/#comment-828641 said:
Whether an effect is tiny or not is a matter of scale. In the greater scheme of things, like dLOD, the QBO itself is pretty insignificant — as it’s a thin band of very low density atmosphere wrapped around the equator. Not much there really, but science is not always about big vs tiny effects. After all, CO2 is tiny too.
Yes, if the moon stayed in the same plane as the ecliptic orbit, its torque would be aligned with the sun’s torque. The angular momentum vector L of the degenerate tropical/draconic orbit would point the same way as the ecliptic orbit (orthogonal to the ecliptic plane), so at most would modulate the strength of the annual cycle. Thus, there would not be a different angular momentum vector L1 that would cause another wobble (i.e. Chandler) to beat with the annual wobble, or a non-congruent vector torque to compete with the semi-annual oscillation (SAO) and thus form a QBO. That’s all part of the group symmetry argument I am offering.
Very few geophysicists want to take this on, as it overturns decades of conventional wisdom. I would not even be considering it if the numbers for the Chandler wobble and QBO didn’t match exactly to this model. That plus I have a strong inkling that the massive amounts of machine learning applied by the big guns will eventually cover this same ground and I want to be able to be ready for that. ML experiments search for numerical patterns and do extensive cross-validation to avoid over-fitting so climate scientists should take heed in case they “discover” the same agreement.
Piotr says
Gavin: Progress has been enormous since then (for instance in machine translation), mostly based on pattern recognition drawn from large datasets, as opposed to coding for rules
It may work for computer translation, but would it work for generative AI? Creativity is about putting words/ideas in configurations/contexts that nobody has put them in before – i.e. patterns that have not yet entered the training sets… So the question is – can they recognize “meta-patterns” (?), and apply the techniques from one discipline in another, the way some techniques in physical oceanography apparently were borrowed from atmospheric sciences?
Gavin “ I’ve had a number of people email me for input, advice etc. introduce themselves by saying that a paper I wrote (which simply doesn’t exist) was very influential.”
Are you sure these were people? ;-) I can’t imagine a human asking for advice while buttering you up with praise of your… non-existent paper. Unless they assumed you senile – they would have to know that it would backfire (and if they thought you senile – why ask you for advice in the first place?).
AI, on the other hand, may not see red flags there, and furthermore, is known to hallucinate by inventing sources that do not exist.
Or, jumping into the deep end – maybe AI, having already identified you as an AI critic, decided to engage you preemptively – by running a reverse Turing test on you – to test whether you can tell AI from “people”. What better praise for an AI system than having passed a Turing test administered by an AI skeptic who didn’t even realize that he had been manipulated into doing it. If this were true, then we are much closer to the Skynet scenario than we have thought. I, for one, would like to welcome our new AI overlords…
Ken Towe says
“Predictions are hard…”
Yogi Berra once said. “It’s tough to make predictions, especially about the future”.
It is even harder to accurately replicate the past to make future predictions. Hindcast attribution?
Mal Adapted says
[I ran my failed HTML through a free on-line previewer: html-online.com. Hallelujah! Hopefully, there’ll be no more rendering catastrophes. MA]
KT: Yogi Berra once said. “It’s tough to make predictions, especially about the future”.
That’s one of my favorite quotations, but according to QuoteInvestigator.com (which I love), it’s a Danish folk saying, first appearing in Danish politician Karl Kristian Steincke‘s memoirs in 1948:
QI: Det er vanskeligt at spaa, især naar det gælder Fremtiden.
Translated by Google as
It is difficult to make predictions, especially when it comes to the future.
However:
QI: In 1971 a version of the saying was attributed to the famous physicist Niels Bohr in the pages of the “Bulletin of the Atomic Scientists”. This ascription occurs frequently in modern times.
A close associate of Bohr’s, mathematician (and co-father of the H-bomb) Stanislaw Ulam, credited it to Bohr in 1976, in the context of mathematical modeling:
QI: As Niels Bohr said in one of his amusing remarks: “It is very hard to predict, especially the future.” But I think mathematics will greatly change its aspect. Something drastic may evolve, an entirely different point of view on the axiomatic method itself.
Other colleagues of Bohr’s back Ulam up. Ascription to Berra apparently came later:
QI: In 1991 a marketer in the tourism industry in Virginia ascribed a variant of the saying to Yogi Berra.
But AFAICT, nobody ever heard Yogi say it. OTOH, Bohr was a Dane, and the saying seems particularly apt for scientists talking about models. Yogi was a rich source of pithy epigrams irresistible to politicians and marketing professionals, but WTF did he know about simulation modeling aside from its difficulty? I, for one, have settled on Bohr as its popularizer. Y’all check out QuoteInvestigator.com. Hours of fun!
Russell Seitz says
You can see a lot just by observing attribution debates.
Jonathan David says
Oh well, as Berra said: “I really didn’t say everything I said”
Keith Woollard says
The obvious problem with using AI to forecast climate change is that there can be (by definition) no training data.
In all of palaeoclimatology (let’s restrict it to the last 500,000 years) temperature has driven CO2 change. What we are doing now by releasing huge amounts of carbon that had been sequestered over 150 MY doesn’t have a historical analogue.
Paul Pukite (@whut) says
Aye. Yet there is plenty of data for all the climate indices, such as ENSO, PDO, QBO, AMO, IOD, MJO, NAO, etc. Let ML AI solve that and we have a significant source of uncertainty modeled, against which we can then discriminate the AGW signal.
Piotr says
Re Paul Pukite – I wouldn’t call ENSO and others “climate indices” – more like “extended weather oscillations” – in predicting the slope of T (AGW) we typically use averages over 30 years – the length chosen precisely to average out all those extended weather oscillations. This is easily done using the concept of the running average (over 30 years), no machine learning or AI required.
Furthermore – unlike AGW – ENSO et al. are not affected by humans – hence irrelevant to future AGW trends, whose slope would change depending on what we do (or don’t).
As such – I am with you that we should “let ML AI solve them” – this way human scientists can concentrate on what’s important to the future of humanity – the effects of AGW as a function of different emission scenarios.
I mean – what’s the worst that could happen? The machines get their oscillations around the mean wrong? Boo hoo – we couldn’t alter these natural oscillations even if we wanted to.
Paul Pukite (@whut) says
FYI, several ENSO measures have “index” in their acronym:
MEI – Multivariate ENSO Index
ONI – Oceanic Niño Index
SOI – Southern Oscillation Index
Climate data repositories often categorize all these under a climate index heading, see e.g.
https://climexp.knmi.nl/selectindex.cgi
Piotr says
Piotr: “I wouldn’t call ENSO and others “climate indices”
Paul Pukite: “ FYI, Several ENSO measures have index in their acronym
MEI – Multivariate ENSO Index
ONI – Oceanic Niño Index
SOI – Southern Oscillation Index”
Thank you, Captain Obvious, but my issue wasn’t with the word “index”.
As for some meteorological (?) site in the Netherlands putting them in the “climate indices” folder – that does not prove that any of these are relevant to the climatic trend (AGW). And the falsifiable reasoning for that I offered in the rest of the sentence whose first few words you quoted:
P: ” [I wouldn’t call ENSO and others “climate indices”] – more like “extended weather oscillations” – in predicting the slope of T (AGW) we typically use averages over 30 years – the length chosen precisely to average out all those extended weather oscillations. This is easily done using the concept of the running average (over 30 years), no machine learning or AI required.”
Or if you don’t believe me – how about the source I have already recommended to you before:
https://climatekids.nasa.gov/kids-guide-to-climate-change/
“Climate describes the typical weather conditions in an entire region for a very long time – 30 years or more.”
Geoff Miell says
Keith Woollard: – “What we are doing now by releasing huge amounts of carbon that had been sequestered over 150MY doesn’t have a historical analogue”
There certainly are paleo-historical analogues – see for example PNAS paper by K. D. Burke et al. titled Pliocene and Eocene provide best analogs for near-future climates, published 10 Dec 2018.
https://www.pnas.org/doi/10.1073/pnas.1809600115
Per NOAA, in terms of CO₂ equivalents, the atmosphere in 2023 contained 534 ppm, of which 419 is CO₂ alone. The rest comes from other gases.
https://gml.noaa.gov/aggi/
Looking back through the paleo-historical record, the Earth System is likely on a trajectory towards a Mid-Pliocene-like climate (400-450 ppm, +2-3 °C relative to pre-industrial GMST) as early as the 2040s, and a Mid-Miocene-like climate (300-500 ppm, +4-5 °C relative to pre-industrial GMST) perhaps by the end of this century. See the graph titled Where on Earth are We Heading: Pliocene or Miocene? presented by Professor H.J. Schellnhuber in the YouTube video titled Keynote Debate Can the Climate Emergency Action Plan lead to Collective Action_ (50 Years CoR).
https://youtu.be/QK2XLeGmHtE?t=1491
The UN Sustainable Development Solutions Network (SDSN) published on 3 Nov 2023 the YouTube video titled An Intimate Conversation with Leading Climate Scientists To Discuss New Research on Global Warming, duration 1:12:23. From time interval 0:17:03, James Hansen said:
“The 1.5-degree limit is deader than a doornail, and the 2-degree limit can be rescued only with the help of purposeful actions to effect Earth’s Energy Balance. We will need to cool off Earth to save our coastlines, coastal cities worldwide, and lowlands, while also addressing the other problems caused by global warming.”
https://youtu.be/NXDWpBlPCY8?t=1023
In the report titled Collision Course: 3-degrees of warming & humanity’s future, author David Spratt assembles some of the recent scientific literature on observations and projections, the systemic risks and the cascading impacts and non-linear features of the climate system. Some data on recent energy trends and projections is included. The final sections document research on the likely physical impacts on human systems, and particularly food production, in a 3 °C-warmer future.
https://www.climatecodered.org/2024/12/podcast-facing-world-at-3-degrees-of.html
Ken Towe says
More historical evidence….
Nature 461, 1110-1113 (22 October 2009)
Atmospheric carbon dioxide through the Eocene–Oligocene climate transition….
Paul N. Pearson, Gavin L. Foster, Bridget S. Wade
“Geological and geochemical evidence indicates that the Antarctic ice sheet formed during the Eocene–Oligocene transition 33.5–34.0 million years ago. Modelling studies suggest that such ice-sheet formation might have been triggered when atmospheric carbon dioxide levels fell below a critical threshold of ~750 p.p.m.v. During maximum ice-sheet growth, pCO2 was between 450 and 1,500 p.p.m.v., with a central estimate of 760 p.p.m.v.”
The Tertiary climate was warmer, the pH of the oceans was lower but the biosphere thrived. Plant life on land was lush…a rainforest, the marine carbonate plankton diversified. No “acidification”.
Barton Paul Levenson says
KT: The Tertiary climate was warmer, the pH of the oceans was lower but the biosphere thrived. Plant life on land was lush…a rainforest, the marine carbonate plankton diversified. No “acidification”.
BPL: Marine plankton undoubtedly had time to adapt to the more acidic ocean; the present rapid acidification is killing off marine life faster than it can evolve. Plant life was undoubtedly lush in the Tertiary, but there was no human agriculture tuned to the interglacial climate, nor were there trillions of dollars worth of infrastructure along the sea coast.
Christian says
Ken Towe says
31 Dec 2024 at 2:15 PM
“More historical evidence….”
That was not “evidence.” It was cherry-picked data, stripped of its context, which undermines its meaningful interpretation and prevents the presentation of sound, holistic scientific conclusions.
Piotr says
Ken Towe “ the marine carbonate plankton diversified. No “acidification”.”
Since atmospheric CO2 was dropping – even with high-school chemistry one shouldn’t expect acidification, but its opposite, alkalinization. And there is no need for quotation marks around the name of an elementary chemical process.
And your understanding of biology does not fare much better – diversification of plankton does not mean that it was all hunky-dory for marine life – diversifications are typically a result of the extinction of previous forms of life that were well adapted to previous conditions. For instance, the spectacular diversification of mammals followed the wiping out of all dinosaurs.
Our success as a species is based on having a civilization, and that civilization was made possible by agriculture. As a species we may have been ready for agriculture for many tens of thousands of years, but it happened only 12,000 years ago – when the climate recovered from the last glaciation and became sufficiently moderate and predictable. Make the climate more extreme and less predictable – and you will reduce the amount of food produced – which will then collapse the very civilization we depend on. When mass starvation starts – no more laws, no more state, no more trade, no more economy, no more industry, no more Internet – those with bigger guns get the food; the rest starve or migrate to other countries and take them over the edge. And the next year is worse, since farmers robbed of their seed grain won’t plant their fields for next year’s crop.
So for your children or grandchildren dying of starvation in a collapsing, lawless society, it will be a poor consolation that their death may perhaps lead to a… diversification of other species. Only maybe – because diversification works only if you have time to EVOLVE – a great many generations to accumulate enough of the right mutations, in the right order, to evolve a working adaptation. So if the climate changes too quickly – animal and plant taxa just go extinct, and we would get the worst of both worlds.
jgnfld says
Wow…Haven’t seen anyone really try to push this one since the old “hiatus-that-wasn’t” days. Quoting that formidable scientist–Joe Barton–who said the same on the Senate floor decades ago really doesn’t cut it any more even with other deniers, don’t cha’ know. Next you’ll bring up [gasp on] climategate [gasp off], I suppose!
“the absolutely ludicrous “bootstrap” theory of interglacials. ” just happens to have now about 35 years of solid evidence behind it. See any standard climate science textbook for whole chapters of evidence re. Antarctic ice cores as well as ice cores from many other locations some of which show quite different patterns.
Lastly, FYI: Mars would be ~ 20C colder if there were no CO2 atmosphere instead of the very thin one it has.
That YOU (and Joe Barton) find it “ridiculous” and present zero evidence other than your disdain (hint: NEITHER a YouTube post nor a snowball is scientific evidence of anything other than credulousness on your part… well, and Joe Barton’s too), and that you think anyone should care, is, as you say, “absolutely ludicrous”. That you also find the many published refs in Skeptical Science “absolutely ludicrous”, on the basis of ignoring all the evidence easily found in that and other research, your uneducated opinion, and a YouTube clip, is equally astounding.
Susan Anderson says
@jgnfld: are you addressing Ken Towe or Piotr? The nesting indicates the latter, and I doubt that was your intention.
Ray Ladbury says
Keith Woolard: “But the temperature does rise for about 600 years before any potential +ve feedbacks occur.”
Wow, did you jump right into 2025 and say to yourself :”Hey, I think I’m gonna go for the stupidest comment of the year right away”?
‘Cause if so, kudos! I don’t know, dude. You need to learn to pace yourself.
Keith Woollard says
Thanks all for your replies but think you are missing the point.
Just finding a time when there was the same amount of CO2 in the atmosphere is hardly any better than looking at Mars as an analogue. Mars has more CO2 than us, so it must be hotter!
What I was saying is that AI cannot make a projection on current data as there has never been a time with the continents and oceans in a similar layout where the CO2 has increased dramatically NOT because of a temperature rise. Like I said, for the last 500KY temperature has always driven CO2 change.
Geoff Miell says
Keith Woollard: – “Like I said, for the last 500KY temperature has always driven CO2 change.”
Nope. See the YouTube video titled The “Temp Leads Carbon” Crock: Updated, duration 0:11:59.
https://www.youtube.com/watch?v=8nrvrkVBt24
Piotr says
Keith Woolard: “ Mars has more CO2 than us, so it must be hotter! ”
It’s only Jan 2 and already a strong candidate for the RC most ignorant post of the year.
Mars is not at the same distance from the Sun; it has an atmosphere that is about 1% of the density of Earth’s – therefore it absorbs hardly any solar radiation; it has practically no water vapour – even though on Earth water vapour is responsible for the majority of the background greenhouse effect; clouds are rare on Mars and they don’t play the massive role in climate that they do on Earth; Mars does not have oceans with their huge heat capacity to moderate the climate and transport heat; and Mars does not have life, which affects the concentration of CO2 on Earth.
Comparisons with Earth’s past are not ideal (although the main differences are NOT in the processes, but in their rates of change) – still, they are incomparably better than comparing Earth’s climate with that of modern Mars. At worst that is comparing apples and oranges, while you are claiming that apples are more comparable to orangutans.
Barton Paul Levenson says
KW: Mars has more CO2 than us, so it must be hotter!
BPL: The atmospheric pressure on Mars is about 0.6% of that on Earth, so there’s much less line broadening due to pressure and the greenhouse effect is smaller. Plus there’s no significant water vapor.
KW: for the last 500KY temperature has always driven CO2 change.
BPL: No, the present warming is the other way around.
jgnfld says
A better, more accurate, and more honest title for your youtuber to have used would be:
“Positive Feedbacks From CO2-driven Warming Lead To Further Warming”. But then 1) it would be accurate if it said that, and 2) your actual goal of spewing mis/disinformation wouldn’t be fulfilled.
Should you actually care about the actual best scientific info on this subject, you should look at Myth #12, “CO2 lags temperature”, here: https://skepticalscience.com/co2-lags-temperature.htm
Kevin McKinney says
I think it’s pretty clear that KW was using the “Mars has more CO2 than us” idea ironically, in furtherance of his point that more than CO2 matters. (Of course, nobody was claiming otherwise here, so maybe not strictly a necessary point to make–but whatever.)
But since this is an occasional denialist meme of the “true but misleading” variety, and since this is a science site, and since I for one found it rather surprising when I first learned this nugget, I thought I’d expand a bit for anyone who may be interested. (Cue the “stampede for the exits” SFX here.)
(Numbers from Wikipedia, and as always check my decimal places!)
Mass of Martian atmosphere: 2.5×10^16 kg
95% CO2, so mass of Martian CO2 ≈ 2.375×10^16 kg
Mass of Earth’s atmosphere: 5.148×10^18 kg
0.04% CO2, so mass of Earth’s CO2 ≈ 0.206×10^16 kg
So, Mars has on the order of 11x the CO2 by mass that we do.
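For anyone who wants to check the decimal places, the same arithmetic in a few lines of Python (same rounded inputs as above; note that Earth’s 0.04% is strictly a molar fraction, so treating it as a mass fraction is itself an approximation):

```python
# Quick check of the Mars-vs-Earth CO2 mass ratio, using the rounded inputs above.
m_mars_atm  = 2.5e16      # kg, total Martian atmosphere
m_earth_atm = 5.148e18    # kg, total Earth atmosphere
co2_mars  = 0.95   * m_mars_atm     # ~95% CO2 by mass
co2_earth = 0.0004 * m_earth_atm    # ~0.04% CO2 (treated here as a mass fraction)
print(co2_mars, co2_earth, co2_mars / co2_earth)   # ~2.4e16 kg, ~2.1e15 kg, ratio ~11.5
```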
When used as a denialist talking point, meant to suggest that this rules out CO2 as a climate forcing, the reasons it fails as an argument have already been presented above in this thread: weaker insolation (590 W/m2 versus 1371), very low atmospheric pressure meaning little pressure broadening, and most importantly no water vapor to speak of, etc.
Carry on!
Keith Woollard says
Thanks Kevin, not really ironically, but close. My point about Mars was to try and negate the whole “look at time vs CO2 in totally different settings and draw conclusions” approach. I certainly am not trying to say “look at Mars, there is no GHG effect”.
And BPL, yes – that is exactly my point, and yet it sounds like you are correcting me? We have a 500KY record of temp driving CO2, and 200 years of a huge dump of sequestered carbon being released… AI (ML really) will not be of any use. ML weather predictions are based on learning from previous patterns.
And I didn’t want to discuss this as it isn’t really relevant to the whole ML debate, but Geoff and jgnfld brought up the absolutely ludicrous “bootstrap” theory of interglacials. I cannot understand how anyone with a modicum of intelligence can read that SkepticalScience article and not laugh. Yes, obviously temperature rise causes out-gassing of CO2 from the oceans. That is exactly why the temp/CO2 curves for the last 500KY look so perfect. But the bootstrap logic falls over as soon as the warming stops. If temp rise causes CO2 rise which then causes a temp rise, all with various lags, then it won’t stop. But it does stop.
Piotr says
Keith Woolard – “I cannot understand how anyone with a modicum of intelligence can read that SkepticalScience article and not laugh”
No one with a modicum of intelligence would describe a positive feedback loop between A and B as “ A is always driving B“, as in:
Keith Woolard: “For the last 500KY temperature has always driven CO2 change”
T is a trigger of deglaciation via an increase in T in the Arctic in summer via the Milankovic cycles. The timing of the glacial cycles corresponds to the eccentricity cycles, which amplify the seasons. During deglaciations summers get hotter and winters get colder. Which means that WITHOUT positive feedbacks – there would be no glacial cycles in global T (as warmer summers would be compensated by colder winters). The fact that we see a massive – between 6 and 8 C – difference between the ice maximum and the top of the deglacial proves that during the last 500,000 years, T on its own, i.e. WITHOUT positive feedbacks, amounts to NOTHING – it contributes to deglaciation ONLY as part of a positive feedback, where again from the definition of a positive feedback: A increases B and in turn B increases A.
In this case A = T and B = CO2 and CH4: an increase in T increases CO2 and CH4 concentrations, which in turn increase T, which then increases CO2 and CH4 even further, which then leads to even higher T – see for instance:
https://www.antarcticglaciers.org/wp-content/uploads/2012/07/Vostok_420ky_4curves_insolation_to_2004.jpg
Therefore, the conclusion from the glacial cycles triggered by the Milankovic cycles is OPPOSITE to that drawn by “the people without a modicum of intelligence” and/or deniers – the glacial cycles DISPROVE their claim that CO2 and CH4 are just passive variables, driven by changes in T – because in a positive feedback T increases CO2 and CH4 AND CO2 and CH4 increase T.
And since increases in CO2 and CH4 increased T INSIDE a positive feedback loop during the glacial cycles, then CO2 and CH4 also increase T OUTSIDE of that positive feedback – when their values increase mainly NOT due to an increase in T in the previous loop, but because they are added by humans (150% of the preindustrial level for CO2, 300% for CH4).
Therefore – contrary to the arrogant claims of Keith Woollard, who laughs at people who, unlike him, understand positive feedbacks, as “people without a modicum of intelligence” – the last 500,000 years does teach us a lot about today and the future: the more CO2 and CH4 we emit, the warmer it will be.
And the past increases in T associated with interglacial CO2 give us a rough idea of the equilibrium sensitivity of T to increases in CO2.
jgnfld says
Re…Therefore – contrary to the arrogant claims of Keith Woollard, who laughs at people who unlike him understand positive feedbacks – as “the people without a modicum of intelligence” … I suspect his real name is Nelson Muntz, not KW. Most people smart enough to get advanced science degrees–and dumb enough to let it be known publicly–have met Nelson Muntzes. We all know they are useless.
Keith Woollard says
All sounds very compelling Piotr, but it isn’t all correct. You say “WITHOUT positive feedbacks – there would be no glacial cycles in global T (as warmer summers would be compensated by colder winters). ”
But the temperature does rise for about 600 years before any potential +ve feedbacks occur.
Nothing in your long description says that there need to be +ve feedbacks, and nothing explains the stopping of the warming.
Nigelj says
Keith Woolard, wouldn’t the warming following a glacial period stop because the orbital cycle changes back to a cooling cycle and the co2 dissolved in the oceans providing the positive feedback has been mostly expelled?
Keith Woollard says
Yes Nigel, fair point, but the not insignificant lag (>600y) means that CO2 is still rising, and therefore pushing temps up whilst insolation is falling.
From a palaeo point of view, any CO2 → T effect must therefore be less than the T → CO2 effect.
Susan Anderson says
KW: re laughing at SkepticalScience and others (Peter Sinclair, DeSmog, NASA, and just about every entity engaged in real science, including RealClimate) who have made an effort to answer false but common assertions in a readily accessible format:
“Better to remain silent and be thought a fool than to speak and to remove all doubt.”
Unfortunately, consequences are piling up and at some point they will accrue to you and people you care about, along with the rest of us. Lies are not truth, and reality is not fake. You can be ‘clever’ with the detail in various ways, but falling victim to deception is stupid.
phd Béla Tóth says
Dear Gavin!
Thank you very much for this work. I’ll spread it. A huge danger is that AI is believed by laymen and many scientists to be omnipotent. Like in our time, everything anyone did with a computer.
I want to make it clear why the green transition is wrong.
It makes my job very difficult that the ChatGPT says the same thing as the IPCC reports. Without criticism. There have been numerous scientific rebuttals of many of its parts. But these are kept silent by the media.
Nigelj says
The media don’t talk much now about so-called rebuttals of the IPCC reports, because it’s essentially just the same old nonsense that’s been repeated over and over for the last 20 years. It’s been conclusively debunked many times. It was reported more than adequately in the media in the past, for so-called balance.
Mal Adapted says
phd Béla Tóth: I want to make it clear why the green transition is wrong.
Hitchens’s Razor is a general rule for rejecting certain knowledge claims: what can be asserted without evidence can also be dismissed without evidence.
BT: It makes my job very difficult that the ChatGPT says the same thing as the IPCC reports. Without criticism. There have been numerous scientific rebuttals of many of its parts. But these are kept silent by the media.
Unless you provide links to at least one of those “numerous scientific rebuttals” appearing in a formal peer-reviewed venue, we can assume there’s nothing for “the media” to make noise about.
But why is it your job to convince anyone the “green transition” is more wrong than leaving global warming open-ended? Do you think you stand alone against the IPCC?
Piotr says
Mal Adapted: “ But why is it your job to convince anyone the “green transition” is more wrong than leaving global warming open-ended?”
Assuming that the name “Béla Tóth” is real – it is a Hungarian name. Given his broken English, he is not likely a Hungarian emigrant, but living and working in Hungary, a country tightly controlled by Viktor Orbán, a right-wing politician who is against mitigation of any environmental damage simply on ideological grounds. Furthermore, Orbán is Putin’s voice in the EU, thus representing Russia’s interest in torpedoing the green transition – since it would destroy the demand for Russia’s oil and gas, taking away the political leverage Russia has had over the EU, and destroying Russia’s economy – which would put Putin’s rule in question, make the Russian billionaires supporting him suffer, and destroy Russia’s financial capacity to continue to wage war on Ukraine, and the hybrid war on the West. The latter includes intervening in elections by supporting the extremes – increasing polarization and thereby paralyzing Western nations politically – and funding right-wing parties, as well as mainstream politicians (the famous case of Gerhard Schroeder, who, after creating Germany’s dependence on Russian gas as Germany’s Chancellor, continued his mission when Putin made him chairman of the board of Nord Stream AG and of Rosneft).
On the disinformation front – Russian servers have been linked to Climategate, which succeeded in delaying action on GHG mitigation, and Russians produce and distribute climate change denial materials, particularly on social media. And I am sure they love it when not all of their “voices” come from the troll farms in St. Petersburg, but some instead from a member of the EU – Hungary. Hence the job for “phd Béla Tóth”.
Mal Adapted says
Thanks, Piotr. I noted the Hungarian name, with credentials proudly prefixed. I clicked on his (apparently Béla is a male name) embedded link and reached a Magyar language site with a country domain of .hu. I had Google translate the page, and it’s all unambiguous climate-science denialism and natural-gas boosterism.
I, for one, am quite willing to believe the human behind the virtual ID is an agent of Orban’s disinformation apparatus, or even Putin’s directly. OTOH, he could be a volunteer denialist like so many in the US, and even feel it’s his patriotic duty to go global on RC. The problem for RC commenters is that we don’t really have any way to know whether “phd Béla Tóth” is fooling us, or only himself. Perhaps more information will emerge. Or perhaps we’ll hear no more from him. Meh.
Susan Anderson says
Hilarious that a critique would complain about the simple fact that AI machine learning, which is based on vacuuming up masses of literature, should cite the IPCC! What else would it do? This is not complicated.
Janne Sinkkonen says
That was excellent, just a couple of things.
The ECMWF paper modelling directly from observations instead of ERA5 is achieving only poor skill so far, but Google has a nowcast model in production which I think performs excellently, and it assimilates all kinds of observations including radar images. So maybe they get it working?
GraphCast is a graph transformer, some newer ones are diffusion models and are suitable for probabilistic forecasts, replacing ensembles. I think both are quite on par with physical models on 3-10-day Z500, slightly even being better. Their problems are more poor temporal (sic) and spatial resolution and producing all the lower-level parameters.
On applying similar techniques to climate, I know nothing about but agree. ;) Not enough straightforward data there, especially for controlled extrapolations that are required.
On AI writing scientific papers I don’t agree.
First, it is a mistake to think the free ChatGPT, or even the paid one (o1, o1-pro), is the model writing the papers a year or five from now. Intelligence is poorly defined, reasoning is poorly defined, no-one knows what these concepts exactly mean, yet the models beat benchmarks one after another. We don’t know their limits, but “not intelligent, just seems to be” easily turns out to “not taking over the world, just seems to be”.
Prohibiting AI writing doesn’t work because no-one can say what parts are written by AI. Second, it shouldn’t even matter if the quality is there; checking the quality may be a practical problem if the current peer review partly relies on external markers of quality such as the author and their institution. And it does, unless the reviews are blind.
Checking the quality may also be a problem if the complexity of the papers, their overall volume, or the amount of rubbish increases. Some of these would in fact be good, for they would mean scientific activity is growing. On the other hand, AIs will be able to help with the review process too (or they are *now* able to help; at least o1 spots errors). The most inconvenient development would be an increase in noise without the ability to automatically reject the noise, but I don't find that probable, except maybe transiently.
On AI not "saving us" by inventing miracles all by itself, I agree. Even if we invent them together with our AIs, the solutions need to be accepted politically and implemented at scale.
Christian says
Janne Sinkkonen,
The ECMWF paper modelling directly from observations instead of ERA5 is achieving only poor skill so far,
especially for controlled extrapolations
First, it is a mistake to think the free ChatGPT, or even the paid one (o1, o1-pro), is the model writing the papers a year or five from now
unless the reviewers are blind.
Prohibiting AI writing doesn’t work because no-one can say what parts are written by AI.
yet the models beat benchmarks one after another.
but I don’t find that probable, except maybe transiently.
and
Even if we invent them together with our AIs, the solutions need to be accepted politically, and implemented in scale.
Dear Janne, you will not last long speaking like that here. It is not permitted.
Mal Adapted says
Christian: Dear Janne, you will not last long speaking like that here. It is not permitted.
Huh? How would you even know what he said, if he wasn’t permitted to say it here?
Indeed, Janne Sinkkonen appears to know something of what he speaks. Why wouldn’t he be permitted to say what he said here?
“Christian”, OTOH, may actually be an AI, crafted to be smugly oppositional by the evidence. I suppose one might suspect that of me as well! That’s the conundrum: if the AI is good enough, it will pass the Turing Test. Does that make anyone else nervous?
Piotr says
Christian “ Dear Janne, you will not last long speaking like that here. It is not permitted.”
How do you explain, then, that we can see your posts? You mean that the owners of this forum do not permit praising Gavin while mildly differing with him on minor points (the value of AI in scientific publishing) – I quote Janne: "That was excellent, just a couple of things" – while at the same time permitting YOU to post attacks on them (accusations of censorship) unsupported by any evidence?
A human author would immediately spot this inconsistency – but an AI system perhaps wouldn’t? So Mal’s alarm: “ If the AI is good enough, it will pass the Turing Test. Does that make anyone else nervous?” may be a bit … premature ;-)
On the other hand, taking umbrage at Gavin's criticism of AI capabilities would suggest that AI may be developing human emotions. Which may help it pass the Turing Test – as the human observer might attribute its lapses in logic to being blinded by resentment …
Ken Towe says
“The 1.5-degree limit is deader than a doornail, and the 2-degree limit can be rescued only with the help of purposeful actions to effect Earth’s Energy Balance. We will need to cool off Earth to save our coastlines, coastal cities worldwide, and lowlands, while also addressing the other problems caused by global warming.”
Given that just ONE part per million of CO2 represents 7.8 gigatons – 7,800 million tons – it seems unlikely that there is anything meaningful that can be done to cool the Earth. But we can improve infrastructure to help survive and adapt.
Geoff Miell says
Ken Towe: – “But we can improve infrastructure to help survive and adapt.”
In the YouTube video titled sea level rise – is Greenland beyond its tipping point?, published 29 Jul 2024, duration 04:19, glaciologist Professor Dr Jason Box, from the Geological Survey of Denmark and Greenland, said from time interval 0:01:50:
“Now if climate continues warming, which is more than likely, then the loss commitment grows. My best guess, if I had to put out numbers; so by 2050, 40 centimetres above 2000 levels; and then by the year 2100, 150 centimetres, or 1.5 metres above the 2000 level, which is something like four feet. Those numbers follow the dashed-red curve on the IPCC’s 6th Assessment, which represents the upper 5-percentile of the model calculations, because the model calculations don’t deliver ice as quickly as is observed. If you take the last two decades of observations, the models don’t even reproduce that until 40 years from now.”
https://youtu.be/8jpPXcqNXpE?t=110
Per the World Meteorological Organization’s State of the Global Climate 2023, on page 6:
• From Jan 1993 to Dec 2002, the global average rate of SLR was 2.13 mm/y;
• From Jan 2003 to Dec 2012, the global average rate of SLR was 3.33 mm/y; and
• From Jan 2014 to Dec 2023, the global average rate of SLR was 4.77 mm/y.
https://wmo.int/publication-series/state-of-global-climate-2023
The current (year-2024) rate of global mean SLR is around 5.0 mm/y.
The average doubling time of the global mean rate of SLR appears to have been about 18 years since Jan 1993.
I wouldn’t be at all surprised to see the global mean rate of SLR double to 10 mm/year sometime in the late-2030s, and double again to 20 mm/year perhaps before 2050.
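For what it's worth, here is the back-of-envelope arithmetic behind that ~18-year figure, using the WMO decadal rates above and assuming roughly exponential growth (the interval midpoints are approximate):

import math

r1, t1 = 2.13, 1998.0  # mm/y, midpoint of Jan 1993 - Dec 2002
r2, t2 = 4.77, 2019.0  # mm/y, midpoint of Jan 2014 - Dec 2023

# doubling time = elapsed time * ln(2) / ln(rate ratio)
doubling_time = (t2 - t1) * math.log(2) / math.log(r2 / r1)
print(f"{doubling_time:.0f} years")  # ~18 years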
On 22 August 2022, at the Cryosphere 2022 Symposium at the Harpa Conference Centre Reykjavik, Iceland, glaciologist Professor Jason Box said from time interval 0:15:27:
“And at this level of CO₂, this rough approximation suggests that we’ve committed already to more than 20 metres of sea level rise. So, obviously it would help to remove a hell-of-a-lot of CO₂ from the atmosphere, and I don’t hear that conversation very much, because we’re still adding 35 gigatonnes per year.”
https://youtu.be/iE6QIDJIcUQ?t=927
That raises critical questions about whether it would be worthwhile to continue defending coastal infrastructure/property, or instead, abandon them and retreat. How do you defend against an apparently relentless and accelerating SLR?
If we want to keep a planet that looks more or less like the one that has existed the last ten thousand years, we actually have to cool off the planet back to a Holocene-level temperature.
Ken Towe says
“If we want to keep a planet that looks more or less like the one that has existed the last ten thousand years, we actually have to cool off the planet back to a Holocene-level temperature.”
That's not possible. Going back to 1987 levels – about 350 ppm – would mean storing roughly 70 ppm's worth of CO2. That's more than 500 billion metric tons. And the transportation involved would add more during the process.
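Spelling out the arithmetic, using the ~7.8 Gt of CO2 per ppm figure from my earlier comment:

ppm_to_remove = 70          # getting from ~420 ppm back to 350 ppm
gt_co2_per_ppm = 7.8        # Gt of CO2 per ppm of atmospheric concentration
print(ppm_to_remove * gt_co2_per_ppm)  # ~546 Gt CO2, i.e. more than 500 billion metric tons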
Nigelj says
It may be possible. Regenerative agriculture is gaining some traction, and as a side effect it is good at drawing down atmospheric CO2 and storing it as soil carbon. Remember you are talking about vast areas of farmland, over a period of many decades. That could ultimately store a lot of carbon, without needing to transport any materials.
Ken Towe says
Potentially … but unless permanently buried, all bioenergy sources will eventually be recycled by the oxygen that plants have added. Over geological time that process has shifted the ratio of oxygen to CO2 to about 525-to-one. In the Archean it was the reverse. Industrial carbon capture and long-term storage is a heavily subsidized business that would never survive without those monies. Some might even call it a scam?
Nigelj says
Ken Towe,
I think recent anthropogenic climate change is mainly a rate-of-change problem, meaning it's so fast it's difficult for life to adapt. If we can slow the rate down by reducing, and ideally stopping, emissions and storing some carbon, it's going to help; and if the soil carbon slowly finds its way back into the atmosphere eventually, that might not matter if the rate of release is very slow.
I don't think we can say that industrial carbon storage funded by subsidies is a scam. The current pilot plants are expensive for what they achieve, but as the technology scales up it may prove cost-effective. I admit I have some doubts about the whole thing, but it's a bit early to say for sure whether it's viable or not.
The danger is in assuming industrial carbon storage will work really well and be a magic bullet solution in the future, and thus relax efforts to reduce emissions. Most of our efforts need to urgently go into reducing emissions at the source.
Barton Paul Levenson says
KT: Industrial carbon capture and long-term storage is a heavily subsidized business that would never survive without those monies.
BPL: We need it anyway.
KT: Some might even call it a scam?
BPL: Yes, and they’d say that without any evidence, or even without knowing what “scam” actually means.
Tomáš Kalisz says
In re to Nigelj, 4 JAN 2025 AT 4:54 PM,
https://www.realclimate.org/index.php/archives/2024/12/ai-caramba/#comment-828758
Barton Paul Levenson, 5 JAN 2025 AT 9:09 AM,
https://www.realclimate.org/index.php/archives/2024/12/ai-caramba/#comment-828767
and Ken Towe, 3 JAN 2025 AT 12:31 PM,
https://www.realclimate.org/index.php/archives/2024/12/ai-caramba/#comment-828719
Sirs,
Let me add a few comments.
1) To extract 10 Gt of carbon dioxide from ambient air comprising 400 ppm thereof, you need to process 25,000 Gt of air – provided that you have a process with 100 % yield.
2) For comparison, processing 13,000 Gt of sea water in desalination plants was ridiculed some time ago in another Real Climate discussion thread as an absolutely absurd idea. In this respect, I would just like to recall that direct carbon dioxide removal from the atmosphere is a comparably difficult task to changing the Sahara desert into artificial swampland.
3) On the other hand, if any carbon sequestered from the atmosphere by plants should unavoidably re-oxidize and return to the atmosphere, there is the question of how coal could form in the geological past.
Greetings
Tomáš
Piotr says
Ken Towe: "Industrial carbon capture and long-term storage is a heavily subsidized business that would never survive without those monies."
BPL: We need it anyway.
Tomas Kalisz:
"1) To extract 10 Gt of carbon dioxide from ambient air comprising 400 ppm thereof, you need to process 25,000 Gt of air"
Only if one reads "Industrial carbon capture and long-term storage" and thinks it applies NOT to CO2 from industrial smokestacks, but to CO2 from the air in nature. The concentration of CO2 in the gas effluent from power plants and cement and steel plants is 12-30%, NOT 400 ppm, so you'd have to reduce your numbers 300-750 times.
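For scale, a rough check of both numbers – taking the 400 ppm (by volume) figure at face value, as Tomas did, and 12-30% CO2 for typical flue gas:

co2_target_gt = 10.0                     # Gt of CO2 to capture

air_needed = co2_target_gt / 400e-6      # ~25,000 Gt of ambient air (direct air capture)
flue_at_30pct = co2_target_gt / 0.30     # ~33 Gt of flue gas at 30% CO2
flue_at_12pct = co2_target_gt / 0.12     # ~83 Gt of flue gas at 12% CO2

print(air_needed / flue_at_30pct, air_needed / flue_at_12pct)  # ~750x and ~300x less gas to handle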
So no – you can’t use this example to validate your modest proposal:
TK “ 2) For a comparison, processing 13000 GT sea water in desalination plants was ridiculed some time ago in another Real Climate discussion thread as an absolutely absurd idea.”
And rightly so – currently, about 22,000 desalination plants worldwide produce roughly 36 Gt of water per year. Thus to desalinate 13,000 Gt you would need to build and operate 8 MILLION such plants, plus pipelines to move 40,000 tons of water per second over thousands of km and spray it over 5 million km2. And all this would have to continue for hundreds of years to even approach 0.3 K of cooling, which would disappear the moment you stop pumping.
And all that assumes that the extraction and processing of the raw materials for those MILLIONS of desalination plants and pipeline systems, and the desalinating and pumping of 40,000 tons of water per second for hundreds of years, could be done … without any significant GHG emissions. You claim that all that energy will be provided by solar, but if you can build enough solar panels to run MILLIONS of highly energy-consuming desalination plants – why waste the energy this way, instead of using it to displace fossil fuels and thereby dramatically cut emissions of CO2 at the source?
So no, Mr. Kalisz – your scheme is still the most absurd, the most ineffective, and the most ecologically disruptive method of mitigating GW we have seen here. Or at least on par with your fellow denier KiA proposing to cover the polar oceans with … 2-foot-thick sheets of plastic- or fiberglass-reinforced styrofoam.
Geoff Miell says
Ken Towe: – “That’s not possible. Going back to 1987…. 350 ppm would mean storing 70 ppm.”
IF that is as you say “not possible” THEN I’d suggest that has catastrophic consequences for human civilisation and billions of lives and livelihoods.
See page 44 in Collision Course at: https://www.breakthroughonline.org.au/collisioncourse
Nigelj says
Thomas Kalisz,
“2)For a comparison, processing 13000 GT sea water in desalination plants was ridiculed some time ago in another Real Climate discussion thread as an absolutely absurd idea.”
I don't recall saying that. My criticisms of greening the Sahara were about the huge costs involved, the fact that it only treats a symptom of climate change, and its many environmental downsides. Likewise, I don't care how much air needs treatment to make DAC work. It comes down to costs and other factors.
The first direct air capture pilot installation sequestered the equivalent of 29 seconds of one year's typical CO2 emissions! On that basis you would need approximately one million DAC installations of that size to sequester the equivalent of a full year's emissions. This is obviously cost-prohibitive! But sequestering the equivalent of 10% of a year's emissions would take about 100,000 installations, and apparently it is realistically possible to make the installations at least 10 times more efficient, so that is roughly 10,000 installations. This is a slightly more manageable-sounding number, and it could well be considerably less.
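Making that scaling explicit (rough arithmetic on the figures above, nothing more):

seconds_per_year = 365 * 24 * 3600             # ~31.5 million seconds
plants_for_100pct = seconds_per_year / 29      # ~1.1 million pilot-sized plants for a full year's emissions
plants_for_10pct = plants_for_100pct / 10      # ~110,000 plants for 10% of a year's emissions
plants_10x_better = plants_for_10pct / 10      # ~11,000 plants if each is 10x more efficient
print(round(plants_for_100pct), round(plants_for_10pct), round(plants_10x_better))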
It's probably worth developing more installations to see how efficient they can get; the cost of doing that might be worth the effort. Also, there is some merit in BPL's position that climate change is a huge problem, so you throw everything you have at it provided it doesn't have huge downsides or risks.
Having said that, sequestering carbon at scale with industrial processes does look very challenging to me because of the costs. Trials show that regenerative agriculture does sequester soil carbon and this essentially has no cost. It’s a side effect of regenerative agriculture. And regenerative agriculture is being developed for other positive reasons. I doubt that regenerative agriculture is some magic bullet solution to anything, but it does seem prudent to become less reliant on industrial farming processes.
"3) On the other hand, if any carbon sequestered from the atmosphere by plants should unavoidably re-oxidize and return to the atmosphere, there is the question of how coal could form in the geological past."
I was reading about coal formation recently in a geology text. Basically, swamps create peat, which is half-decayed plant material; this gets buried under mud and rock, is compressed, and forms coal. So not much of that carbon escapes to the atmosphere. In certain geological periods there were a lot of swamps and sedimentary processes that buried everything under rock, so you had massive levels of coal formation.
Today we don't have such extensive swamps, so considerably more soil carbon decays and eventually finds its way into the atmosphere, and presumably coal formation is insignificant. Of course, climate change might ultimately change the extent of swamps and coal formation.
Tomáš Kalisz says
in Re to Nigelj, 10 Jan 2025 at 2:48 PM,
https://www.realclimate.org/index.php/archives/2024/12/ai-caramba/#comment-828867
Hallo Nigel,
Thank you for your feedback.
My comparison between the estimated size of the DAC task and the size of the "converting Sahara to a swamp" task was meant to express, primarily, my suspicion that the (high) costs may be comparable.
Please consider that if you have to handle a certain amount of material, the size of the necessary equipment (and its price) is commensurate with the amount of material divided by the throughput you can achieve. The achievable throughput has physical limits and, as a rule of thumb, raising it becomes increasingly energy-consuming. For these reasons, and in accordance with my experience as a chemical technologist, I am very sceptical that DAC processes can be made economically feasible by any kind of scaling up.
On the other hand, natural carbon sequestration processes, in the form of biomass accumulation and its slow anaerobic carbonization, may have the advantage that we do not need to invest in the necessary "equipment". Furthermore, the energy securing the necessary throughput may come from the Sun.
You are right that current land hydrology may become increasingly unfavourable for this kind of natural carbon sequestration. This is, by the way, one of the reasons why I repeatedly ask how sure present climate science is about the relationships between global climate and human interference with land hydrology. Should the prevailing trend of anthropogenic climate change with respect to land hydrology actually be towards land desiccation, it would also be highly unfavourable for agriculture and for modern industrial civilization generally.
For all these reasons, I still believe that my questions regarding the level of present knowledge about the relationships between human interference with land hydrology and Earth's climate are relevant and might deserve attention.
Greetings
Tomáš
Keith Woollard says
Nigelj,
Point 3 is basically correct. And the reason (ignoring recent human influences) that this doesn't happen on the same scale anymore is that the CO2 level is historically, dangerously low. There is no way herbivorous animals 10 times the size of elephants could have fed at 280 ppm CO2.
The important fact you are missing, though, is that the vast majority of sequestered carbon is not in coal and O&G deposits but in limestones. We humans tend to forget about the parts of the world where we don't live.
Barton Paul Levenson says
KT: it seems unlikely that there is anything meaningful that can be done to cool the Earth. But we can improve infrastructure to help survive and adapt.
BPL: We can stop making the problem worse by switching away from fossil fuels and preserving forests.
Mal Adapted says
BPL: We can stop making the problem worse by switching away from fossil fuels and preserving forests.
Yes. It never ceases to amaze me when deniers or doomers realize they’re in a hole, yet refuse to stop digging.
Ken Towe says
Switching away will take time. Try to remember that there are eight billion people who need transportation to provide them with food, not to mention all of the materials needed to continue the transition to renewables and EVs. That means more oil will be needed and used, not less.
Geoff Miell says
Ken Towe: – “Try to remember that there are eight billion people who need transportation to provide them with food, not to mention all of the materials needed to continue the transition to renewables and EVs. That means more oil will be needed and used, not less.”
More fossil fuel emissions = accelerating global warming = diminishing ‘human climate niche’ = trajectory towards civilisation collapse
Prof. Hans Joachim Schellnhuber said:
Prof. Johan Rockström said:
https://www.breakthroughonline.org.au/collisioncourse
Nate Hagens interviewed economist Steve Keen on 14 Dec 2023; the interview can be seen/heard in the YouTube video titled Steve Keen: "On the Origins of Energy Blindness" | The Great Simplification #108, duration 1:32:26. It included (per the transcript):
In conclusion:
https://www.thegreatsimplification.com/episode/108-steve-keen
jgnfld says
While a much smaller problem: how long did it take to switch from horses to cars? How long did it take for cell phones to bury landlines?
Changes take time, true. But maybe not the impossible amounts of time you want to think.
Barton Paul Levenson says
KT: That means more oil will be needed and used, not less.
BPL: Not necessarily true, especially as renewables replace more and more transportation fuel.
Barton Paul Levenson says
KT: Try to remember that there are eight billion people who need transportation to provide them with food
BPL: No kidding, really?
KT: , not to mention all of the materials needed to continue the transition to renewables and EVs. That means more oil will be needed and used, not less.
BPL: Less and less the more we use renewable fuels and electrified transport.
Mal Adapted says
Ken Towe: That means more oil will be needed and used, not less.
False. Every kWh of renewable power available is that much fossil carbon unburned, whether to meet new demand or old. Every vehicle-mile driven in a BEV is one less in an ICEV. Over time, the renewable capacity online powers an increasing share of its own growth, while power consumers large and small retool to take advantage of RE’s dramatically lower operating cost. Capital and materials efficiency can reasonably be expected to improve for the next several decades, as power producers and consumers ascend learning curves.
While targeted collective intervention is needed to accelerate the transition, global market forces are already driving it. The USA’s newly-elected denialist kakistocracy just means we ride free on the emissions reductions of other nations, at least until the next election.
This really isn’t complicated, Ken. You obviously have a powerful cognitive motivator. Are you a professional disinformer, or merely a useful idiot? Whatever. You’ve just gone from “minor irritant” to “confirmed troll”.
Nick McGreivy says
“ML-based parameterizations have to work well for thousands of years of simulations, and thus need to be very stable (no random glitches or periodic blow-ups) (harder than you might think). Bias corrections based on historical observations might not generalize correctly in the future.”
This same issue arises when using ML to simulate PDEs. One solution is to analytically calculate the stability condition(s), then at each timestep add some numerical diffusion that nudges the solution towards satisfying them. I imagine the same technique could be used for ML-based parameterizations (a rough sketch follows the link below).
See https://arxiv.org/abs/2303.16110
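A minimal sketch of the idea (illustrative only, not taken from the paper above; the gradient threshold and diffusion coefficient are hypothetical stand-ins for an analytically derived stability condition): after each ML prediction step, check a stability diagnostic and, if it is violated, apply a small explicit diffusion step to damp the offending modes.

import numpy as np

def stabilized_step(u, ml_step, dx, dt, nu=1e-3, max_grad=50.0):
    # Advance the 1-D state u with an ML surrogate, then nudge it toward stability.
    u_next = ml_step(u)

    # Diagnostic: largest gradient magnitude on the (periodic) grid.
    grad_max = np.abs(np.gradient(u_next, dx)).max()

    if grad_max > max_grad:
        # Explicit diffusion step: u <- u + nu*dt * d2u/dx2, periodic boundaries.
        lap = (np.roll(u_next, -1) - 2.0 * u_next + np.roll(u_next, 1)) / dx**2
        u_next = u_next + nu * dt * lap

    return u_next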
Paul Pukite (@whut) says
Nick said:
The breakthrough won't be in some massive computation but in a novel formulation that exposes some fundamental pattern. Over 10 years ago, I wrote a blog post on how one can extract the ENSO signal by doing simple signal processing on a sea-level height (SLH) tidal time series – in this case, at Fort Denison, located in Sydney harbor. (Incidentally, this is the location used by climate change deniers to show how sea level does not change – guess how.)
The formulation/trick is to take the difference between the SLH reading and that from 2 years (24 months) prior, described here
https://geoenergymath.com/2014/09/21/an-enso-predictor-based-on-a-tide-gauge-data-model/
With some averaging to reduce noise, the time series lines up very well with an ENSO index such as the SOI. I couldn't find any other match in the literature, and ChatGPT only finds my article with the exact details: https://chatgpt.com/share/677ef333-3a6c-8005-974b-7ccc4840d32c
The rationale for this 24-month difference is likely related to the sloshing of the ocean triggered on an annual basis. I think this is a pattern that any ML exercise would find with very little effort – after all, it didn't take me that long to find it. But the point is that the ML configuration has to be open and flexible enough to be able to search for, generate, and test the same formulation. IOW, it may not find it if the configuration, perhaps focused on PDEs, is too narrow.
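For the curious, the differencing itself is only a few lines. A sketch of the idea (the file and column names here are made up for illustration): subtract each monthly reading from the reading 24 months earlier, smooth lightly, and correlate with an ENSO index such as the SOI.

import pandas as pd

def slh_24_month_difference(slh: pd.Series, smooth_months: int = 12) -> pd.Series:
    # slh: monthly-mean sea-level height (e.g. Fort Denison), indexed by month.
    diff = slh - slh.shift(24)  # value minus the value 24 months earlier
    return diff.rolling(smooth_months, center=True).mean()

# Hypothetical usage:
# slh = pd.read_csv("fort_denison_monthly.csv", index_col="date", parse_dates=True)["slh_mm"]
# soi = pd.read_csv("soi_monthly.csv", index_col="date", parse_dates=True)["soi"]
# print(slh_24_month_difference(slh).corr(soi))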
Paul Pukite (@whut) says
“doing simple signal processing on a sea-level height (SLH)”
It seems to me that the NASA JPL scientists should now have the fear of god in them to solve this ENSO problem. They barely escaped the Eaton fire, which engulfed Altadena right next door. The current climate extreme is likely part of the reason for the high Santa Ana winds, along with drought from the recently formed La Niña. As NASA JPL sits right at the base of the foothills of the San Gabriel mountains, they could get unlucky as the wind barrels down.
Doing literature research on the topic in the past, I found that JPL has had 3 different scientists (no longer working there, AFAIK) looking at lunar effects on climate. J.H. Shirley, C. Perigaud, and S.L. Marcus have all touched on the tidal, lunar, ENSO connection over the years. For Perigaud, I found a full proposal that she had written which was apparently rejected for funding. Soon thereafter she left, and there is nothing else that I can see from her, as her domain http://moonclimate.org/ is not responding. Both Shirley and Marcus have been writing papers as independent researchers. It's all interesting research, which I have cited.
So, please NASA JPL, fund this research. It’s truly a no-brainer to assign a team to it. You have the smartest scientists in the world working there, and you’d think some of them would understand the motivation for solving the problem. Right?
Arun says
Maybe some of the uncertainties, such as in cloud formation, can be reduced with machine learning models, which can help tune the input parameters of climate models.
Dave_Geologist says
KT, the PETM is literally the textbook example of a mass extinction (of benthic foraminifera) caused by ocean acidification due to increased atmospheric CO2: it was so bad that even shells already buried below the sea bed dissolved.
Calling your nonsense cherry-picked is being too generous.
BTW, pro tip for interacting on sites frequented by people knowledgeable about reality: don’t put scare quotes around reality. It’s a sure Tell that you’re a reality-denier. Works like a charm in Wattsupia and other reality-denial venues. Not so well here.
Dave_Geologist says
Keith, I cannot understand how anyone with a modicum of intelligence can read your comment on that SkepticalScience article and not laugh.
Yes, yes, I know it’s impolite to laugh at someone flaunting their ignorance in public, but given what’s about to happen in a couple of weeks, we need to find something to laugh about.
As an aside, I wonder what’s bringing about the resurrection of zombie memes that were debunked decades ago. Maybe the PR about the new season of The Last Of Us?
Dave_Geologist says
As probably the only person here who’s actually worked on CCS (in a consulting role, on the late oughties UK Miller/Peterhead project), I’ll add my two-penn’orth.
I'm very skeptical about getting direct air capture working on a cost and time scale that is relevant for us, our children and our grandchildren.
However with the right industrial sites, CCS can work. With subsidies of course, at least in the early stages (UK offshore wind used to be more expensive than new nuclear, but by the last-but-one round it was cheaper; the last round failed because the price was cut too far, not because costs had escalated unexpectedly).
In everything but cement the first C, capture, has to be done anyway (I’m assuming kiln gas is clean already, and you can electrify the heating). Even power-station flue gas has to be cleaned up already: you just put an amine reformer after the SOx/NOx/particulates scrubbers. The current UK proposals are around clusters of petrochemical works, where the CO2 is already conveniently sequestered in pipes and tanks. You just need an export pipeline for the fraction you currently vent because there is insufficient market for CO2 among other industrial users. Picking the right disposal reservoir is key of course, but that expertise already exists.
In some ways making the disposal part greenfield not brownfield is a bonus. Part of what killed Miller was optimistic assumptions about continuing to use facilities and pipelines for decades beyond their design life, but it was mostly the Financial Crisis (the outgoing and incoming government had more pressing financial matters on their minds than subsidising CO2 emissions reduction a decade hence), and the subsequent collapse in oil price, which encouraged decommissioning of old, barely-profitable fields which looked like they’d be loss-making for years and expensive to mothball.
I’ll do the fourth reason as a separate post, as it partly riffs off some other comments above.
Dave_Geologist says
The fourth reason was making the perfect the enemy of the good, but on reflection I think that given time that could have been negotiated away. Basically, government lawyers and civil servants seemed to be paranoid about the possibility that it would not be 100% contained, forever. So they started off with so many conditions and impossible demands that the commercial risk would have been too great. E.g. massive financial penalties if (say) 5% of the inventory leaked in 20 or 50 years time because of an unforeseen well failure, repeat 3D seismic forever, not just for long enough to confirm that it is stable and there are no unexpected leakage paths, etc.
My response to that is a close cousin of BPL’s above: in my own words, every little bit helps, and if we can make one CO2 emissions source go away, even if there’s a risk some of it comes back decades hence, that’s a box ticked and a good deed done for the planet and its population.
Frankly, I don’t care if it all leaks in a century or two: by then we’ll either have solved the global problem, or the world will be in such a mess that the leak is a drop in the ocean.
Susan Anderson says
DaveG: Thank you for your informed response about CCS. Layperson here, but I’ve been persuaded by the likes of this, along with what seem to be common sense problems with scale and cost. You give me hope and I’d like to know more. Could you provide some information to contradict this? It’s short, 3 years old, from Australia:
https://www.youtube.com/watch?v=MSZgoFyuHC8
This treatment is longer and more recent, but appears to confirm the problems with scale.
https://www.youtube.com/watch?v=PlsjvKKugKI
Your point about not demanding perfection resonates with me, as demanding it is so often used to prevent appropriate action by people who are making things worse, rather than doing what we can. I also agree that worries about hundreds of years (or even one hundred years) from now do not remove the urgency of the present.
In any case, going forward rather than backwards is imperative, but it does not appear to be a solution in hand from our demagogic bullying liars taking charge in far too many parts of the world, including my own (US).
Tomáš Kalisz says
In Re to Dave_Geologist, 8 Jan 2025 at 9:47 AM,
https://www.realclimate.org/index.php/archives/2024/12/ai-caramba/#comment-828806
and 8 Jan 2025 at 9:25 AM,
https://www.realclimate.org/index.php/archives/2024/12/ai-caramba/#comment-828805
Sir,
Could you perhaps shortly summarize the achieved experience with industrial-scale carbon capture and storage?
I have a feeling that, quite paradoxically, direct air capture (DAC) is promoted more than carbon (dioxide) capture from much more concentrated effluent gases (CCS), which could be technically much easier (and therefore also more economically feasible). It seems that various DAC projects are being increasingly subsidized, although there is no clear progress yet with a practical CCS implementation on an industrial scale.
Am I right or wrong? Could you comment?
Thank you in advance and best regards
Tomáš
Dave_Geologist says
That goes a bit too far down the paranoid-conspiracy-theory path, Susan (if politicians were bribed, prove it). And Sleipner is doing fine. As was In Salah in Algeria, until it filled up sooner than expected (but every cubic metre stored rather than vented is still a cubic metre not in the atmosphere). And there BP and Statoil picked up 100% of the cost, because the state oil company wanted to vent to the atmosphere (the field in question has 10% CO2, and you have to get it down to 1% to make the gas saleable). And it was not counted as an allowable expense under the OPEC contract, so all they got back was 25% Corporation Tax relief on their lost profits, which were themselves a fraction of the capital and operating cost.
It also appears to conflate customer emissions with producer emissions. Sorry, but the CO2 coming out of our exhaust pipes and gas flues, and from the cattle that provide our steaks, is our CO2 not Chevron’s or the farmer’s CO2. If we can’t get our heads around that we’re still stuck in the world of Douglas Adams’ Somebody Else’s Problem Field (which if you haven’t read the book makes the problem invisible).
And did the government really give Chevron $60M, or did Chevron spend $250M of their shareholders’ money and get $60M tax relief? Presumably because the government regulator wouldn’t let them save money and just vent it (Chevron are the biggest climate-change deniers outside OPEC, more so than Exxon since Raymond retired, so you can bet your bottom dollar that they didn’t do it voluntarily).
How successful were the first six solar or wind farms? The first six ICE cars? The first six mega-batteries? Or indeed the first six oil wells?
In any case there are things we can fix (associated CO2 from oil and gas fields, and from cement and petrochemical production), and things we can’t change (CO2 is a necessary chemical by-product of cement and fertiliser manufacture, however you power the plant). It needn’t stop us transitioning to renewables, and we sure ain’t gonna stop eating and building houses any time soon.
As you said yourself, "we can't fix everything" usually has an unspoken "so let's fix nothing" appended. We also should not get fixated on oil and gas companies being the contractor, any more than we should object to Ford benefiting from policies to promote electric cars – and of course they too get tax relief on their expenditure as well as sales revenue. It's our CO2, our problem, and we have to own it. There are no magic bullets, or evil witches who can be banished (Wicked notwithstanding ;-) ), to make the problem go away without us paying a price.
Ray Ladbury says
D_G, I have to admit that my attitudes toward CCS have been influenced by my familiarity with the toothpaste problem – it is much easier to leave the toothpaste in the tube than it is to squeeze it out and then try to put it back in. We have had a solution to the problem that was stable on geologic timescales until humans came along and mucked with it.
Given that we have F___ed around for 50 years and made no progress toward mitigating this crisis, I realize that we are reliant on technological miracles like CCS and fusion if we are to avoid catastrophe. However, the current situation seems to force us to believe that somehow such miracles will be realized precisely because our continued well being relies on them rather than because the evidence suggests they can in fact be realized.
Susan Anderson says
Thanks! I'll have to stretch my brain a bit to take this in, but I very much appreciate the trouble you've taken. The first video is based in Australia; the Just Have a Think one lacks the drama and appears to include more facts.
Re cement, there are already forms which do not emit (as much?) CO2, but apparently, as with CCS, the big-money people prefer cheap to improving emissions. Here's what I found in a quick search, though I have been watching this for a while.
https://www.weforum.org/stories/2024/09/cement-production-sustainable-concrete-co2-emissions/
Another positive effort seems to be in cattle feed, particularly feed supplemented with seaweed. This is away from your expertise on CCS, and again affected by cost-cutting instead of paying attention to the very real emissions problems.
As noted, you gave me some very welcome work to do to understand, for which I thank you.
Dave_Geologist says
Susan, the point about chemical and cement works is that the toothpaste is already in the tubes (and tanks).
There have been suggestions about less CO2-emitting cement (using, for example, volcanic ash – but how much of the right kind of ash is there compared to boring old shale and mudstone?). Fundamentally the reaction is CaCO3 + AlxSiyOz -> CaAluSivOw + CO2. The calcium can only give up two electrons per atom, so most of the CO2 has to be released even if you use renewable energy, and even if you can find an exothermic formula that doesn't need an external heat source. Cement isn't absorbing CO2 when it's curing: calcium aluminosilicate needles are growing to form the interlocking mesh that gives it its strength. Which is why the Colosseum is in better shape than outdoor marble of the same age.
I haven’t had time to look at LC3 properly, but there is only one source of CO2 in the input to cement clinker, therefore only one non-heating source of CO2 emissions in making it: ground limestone (c. 40%). I struggle to see how replacing ground limestone with, err, ground limestone changes that.
Same for petrochemicals and fertiliser. It’s a necessary part of the chemistry that CO2 is released (of course some reactions absorb CO2, and the food and other industries use some of the leftovers – but not all or there wouldn’t be a problem in the first place).
You need to distinguish between CCS to offset the emissions from burning hydrocarbons, and CCS to avoid releasing CO2 which came out of the ground as CO2 rather than as hydrocarbons. The first doesn't exist anywhere, subsidised or otherwise, although a common straw-man complaint is that the projects which do exist don't offset emissions from combustion. That's a straw man because nobody – not Chevron, nor the Australian Government, nor anyone else involved – said they would. It's a category error.
The harsh reality is that come 2050 (and unless you believe in XR's dreamland, net zero by 2050 rather than 2030 means we will still be doing those things for the next 25 years, so we might as well do them with as little collateral CO2 release as possible), there will still be things that can't be decarbonised but that we need in order to keep civilisation running. The right time to work out the bugs in what we can do to mitigate those is now, not 2055. I'm not accusing you of this, but there's something perverse in relying on magic negative-emissions technology decades hence to offset the CO2 we emit other than by combustion, while rejecting existing technology that already works and can be refined over decades. The cynic in me thinks a lot of that comes about because those arguing against non-offset CCS don't like the idea that the companies which have the expertise to do it are on their shit-list. The irony of course is that those $60M didn't go to Chevron's shareholders: presumably government auditors tracked the money that Chevron spent on contractors and material and based any allowances or tax relief on actual spend.